heat-10.0.2/heat.egg-info/requires.txt:

pbr!=2.1.0,>=2.0.0
Babel!=2.4.0,>=2.3.4
croniter>=0.3.4
cryptography!=2.0,>=1.9
eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2
keystoneauth1>=3.3.0
keystonemiddleware>=4.17.0
lxml!=3.7.0,>=3.4.1
netaddr>=0.7.18
openstacksdk>=0.9.19
oslo.cache>=1.26.0
oslo.config>=5.1.0
oslo.concurrency>=3.25.0
oslo.context>=2.19.2
oslo.db>=4.27.0
oslo.i18n>=3.15.3
oslo.log>=3.36.0
oslo.messaging>=5.29.0
oslo.middleware>=3.31.0
oslo.policy>=1.30.0
oslo.reports>=1.18.0
oslo.serialization!=2.19.1,>=2.18.0
oslo.service!=1.28.1,>=1.24.0
oslo.utils>=3.33.0
osprofiler>=1.4.0
oslo.versionedobjects>=1.31.2
PasteDeploy>=1.5.0
aodhclient>=0.9.0
python-barbicanclient!=4.5.0,!=4.5.1,>=4.0.0
python-ceilometerclient>=2.5.0
python-cinderclient>=3.3.0
python-designateclient>=2.7.0
python-glanceclient>=2.8.0
python-heatclient>=1.10.0
python-keystoneclient>=3.8.0
python-magnumclient>=2.1.0
python-manilaclient>=1.16.0
python-mistralclient!=3.2.0,>=3.1.0
python-monascaclient>=1.7.0
python-neutronclient>=6.3.0
python-novaclient>=9.1.0
python-octaviaclient>=1.3.0
python-openstackclient>=3.12.0
python-saharaclient>=1.4.0
python-swiftclient>=3.2.0
python-troveclient>=2.2.0
python-zaqarclient>=1.0.0
python-zunclient>=1.0.0
pytz>=2013.6
PyYAML>=3.10
requests>=2.14.2
tenacity>=3.2.1
Routes>=2.3.1
six>=1.10.0
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10
sqlalchemy-migrate>=0.11.0
stevedore>=1.20.0
WebOb>=1.7.1
yaql>=1.1.3

heat-10.0.2/heat.egg-info/SOURCES.txt:

.coveragerc .stestr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE README.rst babel.cfg config-generator.conf install.sh requirements.txt setup.cfg setup.py test-requirements.txt tox.ini uninstall.sh api-ref/source/conf.py api-ref/source/index.rst api-ref/source/v1/build-info.inc api-ref/source/v1/events.inc api-ref/source/v1/general-info.inc api-ref/source/v1/index.rst api-ref/source/v1/parameters.yaml api-ref/source/v1/resource-types.inc api-ref/source/v1/resources.inc api-ref/source/v1/services.inc api-ref/source/v1/software-config.inc api-ref/source/v1/stack-actions.inc api-ref/source/v1/stack-outputs.inc api-ref/source/v1/stack-snapshots.inc api-ref/source/v1/stack-templates.inc api-ref/source/v1/stacks.inc api-ref/source/v1/status.yaml api-ref/source/v1/versions.inc api-ref/source/v1/samples/build-info-response.json api-ref/source/v1/samples/config-create-request.json api-ref/source/v1/samples/config-create-response.json api-ref/source/v1/samples/config-show-response.json api-ref/source/v1/samples/configs-list-response.json api-ref/source/v1/samples/deployment-create-request.json api-ref/source/v1/samples/deployment-create-response.json api-ref/source/v1/samples/deployment-metadata-response.json api-ref/source/v1/samples/deployment-show-response.json api-ref/source/v1/samples/deployment-update-request.json api-ref/source/v1/samples/deployment-update-response.json api-ref/source/v1/samples/deployments-list-response.json api-ref/source/v1/samples/event-show-response.json api-ref/source/v1/samples/events-find-response.json api-ref/source/v1/samples/events-list-response.json api-ref/source/v1/samples/resource-metadata-response.json
api-ref/source/v1/samples/resource-schema-response.json api-ref/source/v1/samples/resource-show-response.json api-ref/source/v1/samples/resource-type-template-hot-response.json api-ref/source/v1/samples/resource-type-template-response.json api-ref/source/v1/samples/resource-types-list-advanced-response.json api-ref/source/v1/samples/resource-types-list-response.json api-ref/source/v1/samples/resources-list-response.json api-ref/source/v1/samples/services-list-response.json api-ref/source/v1/samples/stack-abandon-response.json api-ref/source/v1/samples/stack-action-cancel-update-request.json api-ref/source/v1/samples/stack-action-cancel-without-rollback-request.json api-ref/source/v1/samples/stack-action-check-request.json api-ref/source/v1/samples/stack-action-resume-request.json api-ref/source/v1/samples/stack-action-suspend-request.json api-ref/source/v1/samples/stack-adopt-request.json api-ref/source/v1/samples/stack-create-request.json api-ref/source/v1/samples/stack-create-response.json api-ref/source/v1/samples/stack-environment-show-response.json api-ref/source/v1/samples/stack-export-response.json api-ref/source/v1/samples/stack-files-show-response.json api-ref/source/v1/samples/stack-find-delete-response.json api-ref/source/v1/samples/stack-find-response.json api-ref/source/v1/samples/stack-output-show-response.json api-ref/source/v1/samples/stack-outputs-list-response.json api-ref/source/v1/samples/stack-preview-response.json api-ref/source/v1/samples/stack-show-response.json api-ref/source/v1/samples/stack-snapshot-request.json api-ref/source/v1/samples/stack-snapshot-response.json api-ref/source/v1/samples/stack-snapshot-restore-response.json api-ref/source/v1/samples/stack-snapshot-show-response.json api-ref/source/v1/samples/stack-snapshots-list-response.json api-ref/source/v1/samples/stack-update-preview-response.json api-ref/source/v1/samples/stack-update-request.json api-ref/source/v1/samples/stack-update-response.json api-ref/source/v1/samples/stacks-list-response.json api-ref/source/v1/samples/template-functions-list-response.json api-ref/source/v1/samples/template-show-response.json api-ref/source/v1/samples/template-validate-request.json api-ref/source/v1/samples/template-validate-response.json api-ref/source/v1/samples/template-versions-response.json api-ref/source/v1/samples/versions-list-response.json bin/heat-api bin/heat-api-cfn bin/heat-db-setup bin/heat-engine bin/heat-keystone-setup bin/heat-keystone-setup-domain bin/heat-manage contrib/heat_docker/README.md contrib/heat_docker/requirements.txt contrib/heat_docker/setup.cfg contrib/heat_docker/setup.py contrib/heat_docker/heat_docker/__init__.py contrib/heat_docker/heat_docker/resources/__init__.py contrib/heat_docker/heat_docker/resources/docker_container.py contrib/heat_docker/heat_docker/tests/__init__.py contrib/heat_docker/heat_docker/tests/fake_docker_client.py contrib/heat_docker/heat_docker/tests/test_docker_container.py contrib/rackspace/README.md contrib/rackspace/requirements.txt contrib/rackspace/setup.cfg contrib/rackspace/setup.py contrib/rackspace/heat_keystoneclient_v2/__init__.py contrib/rackspace/heat_keystoneclient_v2/client.py contrib/rackspace/heat_keystoneclient_v2/tests/__init__.py contrib/rackspace/heat_keystoneclient_v2/tests/test_client.py contrib/rackspace/rackspace/__init__.py contrib/rackspace/rackspace/clients.py contrib/rackspace/rackspace/resources/__init__.py contrib/rackspace/rackspace/resources/auto_scale.py contrib/rackspace/rackspace/resources/cloud_dns.py 
contrib/rackspace/rackspace/resources/cloud_loadbalancer.py contrib/rackspace/rackspace/resources/cloud_server.py contrib/rackspace/rackspace/resources/cloudnetworks.py contrib/rackspace/rackspace/resources/lb_node.py contrib/rackspace/rackspace/tests/__init__.py contrib/rackspace/rackspace/tests/test_auto_scale.py contrib/rackspace/rackspace/tests/test_cloud_loadbalancer.py contrib/rackspace/rackspace/tests/test_cloudnetworks.py contrib/rackspace/rackspace/tests/test_lb_node.py contrib/rackspace/rackspace/tests/test_rackspace_cloud_server.py contrib/rackspace/rackspace/tests/test_rackspace_dns.py devstack/README.rst devstack/plugin.sh devstack/settings devstack/lib/heat devstack/upgrade/resources.sh devstack/upgrade/settings devstack/upgrade/shutdown.sh devstack/upgrade/upgrade.sh devstack/upgrade/templates/random_string.yaml doc/.gitignore doc/Makefile doc/README.rst doc/source/conf.py doc/source/glossary.rst doc/source/index.rst doc/source/_extra/.htaccess doc/source/_templates/.placeholder doc/source/admin/auth-model.rst doc/source/admin/index.rst doc/source/admin/introduction.rst doc/source/admin/stack-domain-users.rst doc/source/api/index.rst doc/source/configuration/api.rst doc/source/configuration/clients.rst doc/source/configuration/config-options.rst doc/source/configuration/index.rst doc/source/configuration/logs.rst doc/source/configuration/sample_policy.rst doc/source/configuration/tables/heat-api.rst doc/source/configuration/tables/heat-cfn_api.rst doc/source/configuration/tables/heat-clients.rst doc/source/configuration/tables/heat-clients_aodh.rst doc/source/configuration/tables/heat-clients_backends.rst doc/source/configuration/tables/heat-clients_barbican.rst doc/source/configuration/tables/heat-clients_ceilometer.rst doc/source/configuration/tables/heat-clients_cinder.rst doc/source/configuration/tables/heat-clients_designate.rst doc/source/configuration/tables/heat-clients_glance.rst doc/source/configuration/tables/heat-clients_heat.rst doc/source/configuration/tables/heat-clients_keystone.rst doc/source/configuration/tables/heat-clients_magnum.rst doc/source/configuration/tables/heat-clients_manila.rst doc/source/configuration/tables/heat-clients_mistral.rst doc/source/configuration/tables/heat-clients_monasca.rst doc/source/configuration/tables/heat-clients_neutron.rst doc/source/configuration/tables/heat-clients_nova.rst doc/source/configuration/tables/heat-clients_sahara.rst doc/source/configuration/tables/heat-clients_senlin.rst doc/source/configuration/tables/heat-clients_swift.rst doc/source/configuration/tables/heat-clients_trove.rst doc/source/configuration/tables/heat-clients_zaqar.rst doc/source/configuration/tables/heat-common.rst doc/source/configuration/tables/heat-crypt.rst doc/source/configuration/tables/heat-loadbalancer.rst doc/source/configuration/tables/heat-metadata_api.rst doc/source/configuration/tables/heat-notification.rst doc/source/configuration/tables/heat-quota.rst doc/source/configuration/tables/heat-redis.rst doc/source/configuration/tables/heat-testing.rst doc/source/configuration/tables/heat-trustee.rst doc/source/configuration/tables/heat-waitcondition_api.rst doc/source/contributing/blueprints.rst doc/source/contributing/index.rst doc/source/developing_guides/architecture.rst doc/source/developing_guides/gmr.rst doc/source/developing_guides/index.rst doc/source/developing_guides/pluginguide.rst doc/source/developing_guides/rally_on_gates.rst doc/source/developing_guides/schedulerhints.rst doc/source/developing_guides/supportstatus.rst 
doc/source/ext/__init__.py doc/source/ext/resources.py doc/source/ext/tablefromtext.py doc/source/getting_started/create_a_stack.rst doc/source/getting_started/index.rst doc/source/getting_started/jeos_building.rst doc/source/getting_started/on_devstack.rst doc/source/getting_started/on_fedora.rst doc/source/getting_started/on_other.rst doc/source/getting_started/on_ubuntu.rst doc/source/getting_started/standalone.rst doc/source/install/get_started.rst doc/source/install/index.rst doc/source/install/install-debian.rst doc/source/install/install-obs.rst doc/source/install/install-rdo.rst doc/source/install/install-ubuntu.rst doc/source/install/install.rst doc/source/install/launch-instance.rst doc/source/install/next-steps.rst doc/source/install/verify.rst doc/source/man/heat-api-cfn.rst doc/source/man/heat-api.rst doc/source/man/heat-db-setup.rst doc/source/man/heat-engine.rst doc/source/man/heat-keystone-setup-domain.rst doc/source/man/heat-keystone-setup.rst doc/source/man/heat-manage.rst doc/source/man/index.rst doc/source/operating_guides/httpd.rst doc/source/operating_guides/scale_deployment.rst doc/source/operating_guides/upgrades_guide.rst doc/source/sourcecode/.gitignore doc/source/template_guide/advanced_topics.rst doc/source/template_guide/basic_resources.rst doc/source/template_guide/cfn.rst doc/source/template_guide/composition.rst doc/source/template_guide/contrib.rst doc/source/template_guide/environment.rst doc/source/template_guide/existing_templates.rst doc/source/template_guide/functions.rst doc/source/template_guide/hello_world.rst doc/source/template_guide/hot_guide.rst doc/source/template_guide/hot_spec.rst doc/source/template_guide/index.rst doc/source/template_guide/openstack.rst doc/source/template_guide/software_deployment.rst doc/source/template_guide/unsupported.rst doc/source/templates/index.rst doc/source/templates/cfn/WordPress_Single_Instance.rst doc/source/templates/hot/hello_world.rst etc/heat/README-heat.conf.txt etc/heat/api-paste.ini etc/heat/heat-policy-generator.conf etc/heat/environment.d/default.yaml etc/heat/templates/AWS_CloudWatch_Alarm.yaml etc/heat/templates/AWS_RDS_DBInstance.yaml heat/__init__.py heat/version.py heat.egg-info/PKG-INFO heat.egg-info/SOURCES.txt heat.egg-info/dependency_links.txt heat.egg-info/entry_points.txt heat.egg-info/not-zip-safe heat.egg-info/pbr.json heat.egg-info/requires.txt heat.egg-info/top_level.txt heat/api/__init__.py heat/api/versions.py heat/api/aws/__init__.py heat/api/aws/ec2token.py heat/api/aws/exception.py heat/api/aws/utils.py heat/api/cfn/__init__.py heat/api/cfn/versions.py heat/api/cfn/v1/__init__.py heat/api/cfn/v1/signal.py heat/api/cfn/v1/stacks.py heat/api/middleware/__init__.py heat/api/middleware/fault.py heat/api/middleware/version_negotiation.py heat/api/openstack/__init__.py heat/api/openstack/versions.py heat/api/openstack/v1/__init__.py heat/api/openstack/v1/actions.py heat/api/openstack/v1/build_info.py heat/api/openstack/v1/events.py heat/api/openstack/v1/resources.py heat/api/openstack/v1/services.py heat/api/openstack/v1/software_configs.py heat/api/openstack/v1/software_deployments.py heat/api/openstack/v1/stacks.py heat/api/openstack/v1/util.py heat/api/openstack/v1/views/__init__.py heat/api/openstack/v1/views/stacks_view.py heat/api/openstack/v1/views/views_common.py heat/cloudinit/__init__.py heat/cloudinit/boothook.sh heat/cloudinit/config heat/cloudinit/loguserdata.py heat/cloudinit/part_handler.py heat/cmd/__init__.py heat/cmd/all.py heat/cmd/api.py heat/cmd/api_cfn.py 
heat/cmd/engine.py heat/cmd/manage.py heat/common/__init__.py heat/common/auth_password.py heat/common/auth_url.py heat/common/cache.py heat/common/config.py heat/common/context.py heat/common/crypt.py heat/common/custom_backend_auth.py heat/common/endpoint_utils.py heat/common/environment_format.py heat/common/environment_util.py heat/common/exception.py heat/common/grouputils.py heat/common/i18n.py heat/common/identifier.py heat/common/lifecycle_plugin_utils.py heat/common/messaging.py heat/common/netutils.py heat/common/noauth.py heat/common/param_utils.py heat/common/password_gen.py heat/common/plugin_loader.py heat/common/pluginutils.py heat/common/policy.py heat/common/profiler.py heat/common/serializers.py heat/common/service_utils.py heat/common/short_id.py heat/common/template_format.py heat/common/timeutils.py heat/common/urlfetch.py heat/common/wsgi.py heat/db/__init__.py heat/db/sqlalchemy/__init__.py heat/db/sqlalchemy/api.py heat/db/sqlalchemy/filters.py heat/db/sqlalchemy/migration.py heat/db/sqlalchemy/models.py heat/db/sqlalchemy/types.py heat/db/sqlalchemy/utils.py heat/db/sqlalchemy/migrate_repo/README heat/db/sqlalchemy/migrate_repo/__init__.py heat/db/sqlalchemy/migrate_repo/manage.py heat/db/sqlalchemy/migrate_repo/migrate.cfg heat/db/sqlalchemy/migrate_repo/versions/071_mitaka.py heat/db/sqlalchemy/migrate_repo/versions/072_raw_template_files.py heat/db/sqlalchemy/migrate_repo/versions/073_resource_data_fk_ondelete_cascade.py heat/db/sqlalchemy/migrate_repo/versions/074_placeholder.py heat/db/sqlalchemy/migrate_repo/versions/075_placeholder.py heat/db/sqlalchemy/migrate_repo/versions/076_placeholder.py heat/db/sqlalchemy/migrate_repo/versions/077_placeholder.py heat/db/sqlalchemy/migrate_repo/versions/078_placeholder.py heat/db/sqlalchemy/migrate_repo/versions/079_resource_properties_data.py heat/db/sqlalchemy/migrate_repo/versions/080_resource_attrs_data.py heat/db/sqlalchemy/migrate_repo/versions/081_placeholder.py heat/db/sqlalchemy/migrate_repo/versions/082_placeholder.py heat/db/sqlalchemy/migrate_repo/versions/083_placeholder.py heat/db/sqlalchemy/migrate_repo/versions/084_placeholder.py heat/db/sqlalchemy/migrate_repo/versions/085_placeholder.py heat/db/sqlalchemy/migrate_repo/versions/__init__.py heat/engine/__init__.py heat/engine/api.py heat/engine/attributes.py heat/engine/check_resource.py heat/engine/conditions.py heat/engine/constraints.py heat/engine/dependencies.py heat/engine/environment.py heat/engine/event.py heat/engine/function.py heat/engine/lifecycle_plugin.py heat/engine/node_data.py heat/engine/output.py heat/engine/parameter_groups.py heat/engine/parameters.py heat/engine/parent_rsrc.py heat/engine/plugin_manager.py heat/engine/properties.py heat/engine/properties_group.py heat/engine/resource.py heat/engine/rsrc_defn.py heat/engine/scheduler.py heat/engine/service.py heat/engine/service_software_config.py heat/engine/software_config_io.py heat/engine/stack.py heat/engine/stack_lock.py heat/engine/status.py heat/engine/stk_defn.py heat/engine/support.py heat/engine/sync_point.py heat/engine/template.py heat/engine/template_common.py heat/engine/template_files.py heat/engine/timestamp.py heat/engine/translation.py heat/engine/update.py heat/engine/worker.py heat/engine/cfn/__init__.py heat/engine/cfn/functions.py heat/engine/cfn/parameters.py heat/engine/cfn/template.py heat/engine/clients/__init__.py heat/engine/clients/client_exception.py heat/engine/clients/client_plugin.py heat/engine/clients/progress.py heat/engine/clients/os/__init__.py 
heat/engine/clients/os/aodh.py heat/engine/clients/os/barbican.py heat/engine/clients/os/ceilometer.py heat/engine/clients/os/cinder.py heat/engine/clients/os/designate.py heat/engine/clients/os/glance.py heat/engine/clients/os/heat_plugin.py heat/engine/clients/os/magnum.py heat/engine/clients/os/manila.py heat/engine/clients/os/mistral.py heat/engine/clients/os/monasca.py heat/engine/clients/os/nova.py heat/engine/clients/os/octavia.py heat/engine/clients/os/openstacksdk.py heat/engine/clients/os/sahara.py heat/engine/clients/os/senlin.py heat/engine/clients/os/swift.py heat/engine/clients/os/trove.py heat/engine/clients/os/zaqar.py heat/engine/clients/os/zun.py heat/engine/clients/os/keystone/__init__.py heat/engine/clients/os/keystone/fake_keystoneclient.py heat/engine/clients/os/keystone/heat_keystoneclient.py heat/engine/clients/os/keystone/keystone_constraints.py heat/engine/clients/os/neutron/__init__.py heat/engine/clients/os/neutron/lbaas_constraints.py heat/engine/clients/os/neutron/neutron_constraints.py heat/engine/constraint/__init__.py heat/engine/constraint/common_constraints.py heat/engine/hot/__init__.py heat/engine/hot/functions.py heat/engine/hot/parameters.py heat/engine/hot/template.py heat/engine/notification/__init__.py heat/engine/notification/autoscaling.py heat/engine/notification/stack.py heat/engine/resources/__init__.py heat/engine/resources/alarm_base.py heat/engine/resources/scheduler_hints.py heat/engine/resources/server_base.py heat/engine/resources/signal_responder.py heat/engine/resources/stack_resource.py heat/engine/resources/stack_user.py heat/engine/resources/template_resource.py heat/engine/resources/volume_base.py heat/engine/resources/wait_condition.py heat/engine/resources/aws/__init__.py heat/engine/resources/aws/autoscaling/__init__.py heat/engine/resources/aws/autoscaling/autoscaling_group.py heat/engine/resources/aws/autoscaling/launch_config.py heat/engine/resources/aws/autoscaling/scaling_policy.py heat/engine/resources/aws/cfn/__init__.py heat/engine/resources/aws/cfn/stack.py heat/engine/resources/aws/cfn/wait_condition.py heat/engine/resources/aws/cfn/wait_condition_handle.py heat/engine/resources/aws/ec2/__init__.py heat/engine/resources/aws/ec2/eip.py heat/engine/resources/aws/ec2/instance.py heat/engine/resources/aws/ec2/internet_gateway.py heat/engine/resources/aws/ec2/network_interface.py heat/engine/resources/aws/ec2/route_table.py heat/engine/resources/aws/ec2/security_group.py heat/engine/resources/aws/ec2/subnet.py heat/engine/resources/aws/ec2/volume.py heat/engine/resources/aws/ec2/vpc.py heat/engine/resources/aws/iam/__init__.py heat/engine/resources/aws/iam/user.py heat/engine/resources/aws/lb/__init__.py heat/engine/resources/aws/lb/loadbalancer.py heat/engine/resources/aws/s3/__init__.py heat/engine/resources/aws/s3/s3.py heat/engine/resources/openstack/__init__.py heat/engine/resources/openstack/aodh/__init__.py heat/engine/resources/openstack/aodh/alarm.py heat/engine/resources/openstack/aodh/composite_alarm.py heat/engine/resources/openstack/aodh/gnocchi/__init__.py heat/engine/resources/openstack/aodh/gnocchi/alarm.py heat/engine/resources/openstack/barbican/__init__.py heat/engine/resources/openstack/barbican/container.py heat/engine/resources/openstack/barbican/order.py heat/engine/resources/openstack/barbican/secret.py heat/engine/resources/openstack/cinder/__init__.py heat/engine/resources/openstack/cinder/encrypted_volume_type.py heat/engine/resources/openstack/cinder/qos_specs.py 
heat/engine/resources/openstack/cinder/quota.py heat/engine/resources/openstack/cinder/volume.py heat/engine/resources/openstack/cinder/volume_type.py heat/engine/resources/openstack/designate/__init__.py heat/engine/resources/openstack/designate/domain.py heat/engine/resources/openstack/designate/record.py heat/engine/resources/openstack/designate/recordset.py heat/engine/resources/openstack/designate/zone.py heat/engine/resources/openstack/glance/__init__.py heat/engine/resources/openstack/glance/image.py heat/engine/resources/openstack/heat/__init__.py heat/engine/resources/openstack/heat/access_policy.py heat/engine/resources/openstack/heat/autoscaling_group.py heat/engine/resources/openstack/heat/cloud_config.py heat/engine/resources/openstack/heat/cloud_watch.py heat/engine/resources/openstack/heat/deployed_server.py heat/engine/resources/openstack/heat/ha_restarter.py heat/engine/resources/openstack/heat/instance_group.py heat/engine/resources/openstack/heat/multi_part.py heat/engine/resources/openstack/heat/none_resource.py heat/engine/resources/openstack/heat/random_string.py heat/engine/resources/openstack/heat/remote_stack.py heat/engine/resources/openstack/heat/resource_chain.py heat/engine/resources/openstack/heat/resource_group.py heat/engine/resources/openstack/heat/scaling_policy.py heat/engine/resources/openstack/heat/software_component.py heat/engine/resources/openstack/heat/software_config.py heat/engine/resources/openstack/heat/software_deployment.py heat/engine/resources/openstack/heat/structured_config.py heat/engine/resources/openstack/heat/swiftsignal.py heat/engine/resources/openstack/heat/test_resource.py heat/engine/resources/openstack/heat/value.py heat/engine/resources/openstack/heat/wait_condition.py heat/engine/resources/openstack/heat/wait_condition_handle.py heat/engine/resources/openstack/keystone/__init__.py heat/engine/resources/openstack/keystone/domain.py heat/engine/resources/openstack/keystone/endpoint.py heat/engine/resources/openstack/keystone/group.py heat/engine/resources/openstack/keystone/project.py heat/engine/resources/openstack/keystone/region.py heat/engine/resources/openstack/keystone/role.py heat/engine/resources/openstack/keystone/role_assignments.py heat/engine/resources/openstack/keystone/service.py heat/engine/resources/openstack/keystone/user.py heat/engine/resources/openstack/magnum/__init__.py heat/engine/resources/openstack/magnum/bay.py heat/engine/resources/openstack/magnum/baymodel.py heat/engine/resources/openstack/magnum/cluster.py heat/engine/resources/openstack/magnum/cluster_template.py heat/engine/resources/openstack/manila/__init__.py heat/engine/resources/openstack/manila/security_service.py heat/engine/resources/openstack/manila/share.py heat/engine/resources/openstack/manila/share_network.py heat/engine/resources/openstack/manila/share_type.py heat/engine/resources/openstack/mistral/__init__.py heat/engine/resources/openstack/mistral/cron_trigger.py heat/engine/resources/openstack/mistral/external_resource.py heat/engine/resources/openstack/mistral/workflow.py heat/engine/resources/openstack/monasca/__init__.py heat/engine/resources/openstack/monasca/alarm_definition.py heat/engine/resources/openstack/monasca/notification.py heat/engine/resources/openstack/neutron/__init__.py heat/engine/resources/openstack/neutron/address_scope.py heat/engine/resources/openstack/neutron/extraroute.py heat/engine/resources/openstack/neutron/firewall.py heat/engine/resources/openstack/neutron/floatingip.py 
heat/engine/resources/openstack/neutron/loadbalancer.py heat/engine/resources/openstack/neutron/metering.py heat/engine/resources/openstack/neutron/net.py heat/engine/resources/openstack/neutron/network_gateway.py heat/engine/resources/openstack/neutron/neutron.py heat/engine/resources/openstack/neutron/port.py heat/engine/resources/openstack/neutron/provider_net.py heat/engine/resources/openstack/neutron/qos.py heat/engine/resources/openstack/neutron/quota.py heat/engine/resources/openstack/neutron/rbac_policy.py heat/engine/resources/openstack/neutron/router.py heat/engine/resources/openstack/neutron/security_group.py heat/engine/resources/openstack/neutron/security_group_rule.py heat/engine/resources/openstack/neutron/segment.py heat/engine/resources/openstack/neutron/subnet.py heat/engine/resources/openstack/neutron/subnetpool.py heat/engine/resources/openstack/neutron/trunk.py heat/engine/resources/openstack/neutron/vpnservice.py heat/engine/resources/openstack/neutron/lbaas/__init__.py heat/engine/resources/openstack/neutron/lbaas/health_monitor.py heat/engine/resources/openstack/neutron/lbaas/l7policy.py heat/engine/resources/openstack/neutron/lbaas/l7rule.py heat/engine/resources/openstack/neutron/lbaas/listener.py heat/engine/resources/openstack/neutron/lbaas/loadbalancer.py heat/engine/resources/openstack/neutron/lbaas/pool.py heat/engine/resources/openstack/neutron/lbaas/pool_member.py heat/engine/resources/openstack/neutron/sfc/__init__.py heat/engine/resources/openstack/neutron/sfc/flow_classifier.py heat/engine/resources/openstack/neutron/sfc/port_chain.py heat/engine/resources/openstack/neutron/sfc/port_pair.py heat/engine/resources/openstack/neutron/sfc/port_pair_group.py heat/engine/resources/openstack/nova/__init__.py heat/engine/resources/openstack/nova/flavor.py heat/engine/resources/openstack/nova/floatingip.py heat/engine/resources/openstack/nova/host_aggregate.py heat/engine/resources/openstack/nova/keypair.py heat/engine/resources/openstack/nova/quota.py heat/engine/resources/openstack/nova/server.py heat/engine/resources/openstack/nova/server_group.py heat/engine/resources/openstack/nova/server_network_mixin.py heat/engine/resources/openstack/octavia/__init__.py heat/engine/resources/openstack/octavia/health_monitor.py heat/engine/resources/openstack/octavia/l7policy.py heat/engine/resources/openstack/octavia/l7rule.py heat/engine/resources/openstack/octavia/listener.py heat/engine/resources/openstack/octavia/loadbalancer.py heat/engine/resources/openstack/octavia/octavia_base.py heat/engine/resources/openstack/octavia/pool.py heat/engine/resources/openstack/octavia/pool_member.py heat/engine/resources/openstack/sahara/__init__.py heat/engine/resources/openstack/sahara/cluster.py heat/engine/resources/openstack/sahara/data_source.py heat/engine/resources/openstack/sahara/image.py heat/engine/resources/openstack/sahara/job.py heat/engine/resources/openstack/sahara/job_binary.py heat/engine/resources/openstack/sahara/templates.py heat/engine/resources/openstack/senlin/__init__.py heat/engine/resources/openstack/senlin/cluster.py heat/engine/resources/openstack/senlin/node.py heat/engine/resources/openstack/senlin/policy.py heat/engine/resources/openstack/senlin/profile.py heat/engine/resources/openstack/senlin/receiver.py heat/engine/resources/openstack/senlin/res_base.py heat/engine/resources/openstack/swift/__init__.py heat/engine/resources/openstack/swift/container.py heat/engine/resources/openstack/trove/__init__.py 
heat/engine/resources/openstack/trove/cluster.py heat/engine/resources/openstack/trove/instance.py heat/engine/resources/openstack/zaqar/__init__.py heat/engine/resources/openstack/zaqar/queue.py heat/engine/resources/openstack/zaqar/subscription.py heat/engine/resources/openstack/zun/__init__.py heat/engine/resources/openstack/zun/container.py heat/hacking/__init__.py heat/hacking/checks.py heat/httpd/__init__.py heat/httpd/heat_api.py heat/httpd/heat_api_cfn.py heat/httpd/files/heat-api-cfn-uwsgi.ini heat/httpd/files/heat-api-cfn.conf heat/httpd/files/heat-api-uwsgi.ini heat/httpd/files/heat-api.conf heat/httpd/files/uwsgi-heat-api-cfn.conf heat/httpd/files/uwsgi-heat-api.conf heat/locale/de/LC_MESSAGES/heat.po heat/locale/es/LC_MESSAGES/heat.po heat/locale/fr/LC_MESSAGES/heat.po heat/locale/it/LC_MESSAGES/heat.po heat/locale/ja/LC_MESSAGES/heat.po heat/locale/ko_KR/LC_MESSAGES/heat.po heat/locale/pt_BR/LC_MESSAGES/heat.po heat/locale/ru/LC_MESSAGES/heat.po heat/locale/zh_CN/LC_MESSAGES/heat.po heat/locale/zh_TW/LC_MESSAGES/heat.po heat/objects/__init__.py heat/objects/base.py heat/objects/event.py heat/objects/fields.py heat/objects/raw_template.py heat/objects/raw_template_files.py heat/objects/resource.py heat/objects/resource_data.py heat/objects/resource_properties_data.py heat/objects/service.py heat/objects/snapshot.py heat/objects/software_config.py heat/objects/software_deployment.py heat/objects/stack.py heat/objects/stack_lock.py heat/objects/stack_tag.py heat/objects/sync_point.py heat/objects/user_creds.py heat/policies/__init__.py heat/policies/actions.py heat/policies/base.py heat/policies/build_info.py heat/policies/cloudformation.py heat/policies/events.py heat/policies/resource.py heat/policies/resource_types.py heat/policies/service.py heat/policies/software_configs.py heat/policies/software_deployments.py heat/policies/stacks.py heat/rpc/__init__.py heat/rpc/api.py heat/rpc/client.py heat/rpc/listener_client.py heat/rpc/worker_api.py heat/rpc/worker_client.py heat/scaling/__init__.py heat/scaling/cooldown.py heat/scaling/lbutils.py heat/scaling/rolling_update.py heat/scaling/scalingutil.py heat/scaling/template.py heat/tests/__init__.py heat/tests/common.py heat/tests/fakes.py heat/tests/generic_resource.py heat/tests/test_attributes.py heat/tests/test_auth_password.py heat/tests/test_auth_url.py heat/tests/test_common_context.py heat/tests/test_common_env_util.py heat/tests/test_common_exception.py heat/tests/test_common_param_utils.py heat/tests/test_common_policy.py heat/tests/test_common_serializers.py heat/tests/test_common_service_utils.py heat/tests/test_constraints.py heat/tests/test_convg_stack.py heat/tests/test_crypt.py heat/tests/test_dbinstance.py heat/tests/test_empty_stack.py heat/tests/test_engine_api_utils.py heat/tests/test_engine_service.py heat/tests/test_environment.py heat/tests/test_environment_format.py heat/tests/test_event.py heat/tests/test_exception.py heat/tests/test_fault_middleware.py heat/tests/test_function.py heat/tests/test_grouputils.py heat/tests/test_hacking.py heat/tests/test_hot.py heat/tests/test_identifier.py heat/tests/test_lifecycle_plugin_utils.py heat/tests/test_loguserdata.py heat/tests/test_metadata_refresh.py heat/tests/test_nested_stack.py heat/tests/test_noauth.py heat/tests/test_nokey.py heat/tests/test_notifications.py heat/tests/test_parameters.py heat/tests/test_plugin_loader.py heat/tests/test_properties.py heat/tests/test_properties_group.py heat/tests/test_provider_template.py heat/tests/test_resource.py 
heat/tests/test_resource_properties_data.py heat/tests/test_rpc_client.py heat/tests/test_rpc_listener_client.py heat/tests/test_rpc_worker_client.py heat/tests/test_rsrc_defn.py heat/tests/test_server_tags.py heat/tests/test_short_id.py heat/tests/test_signal.py heat/tests/test_stack.py heat/tests/test_stack_collect_attributes.py heat/tests/test_stack_delete.py heat/tests/test_stack_lock.py heat/tests/test_stack_resource.py heat/tests/test_stack_update.py heat/tests/test_stack_user.py heat/tests/test_support.py heat/tests/test_template.py heat/tests/test_template_files.py heat/tests/test_template_format.py heat/tests/test_timeutils.py heat/tests/test_translation_rule.py heat/tests/test_urlfetch.py heat/tests/test_validate.py heat/tests/test_version.py heat/tests/test_vpc.py heat/tests/testing-overview.txt heat/tests/utils.py heat/tests/api/__init__.py heat/tests/api/test_wsgi.py heat/tests/api/aws/__init__.py heat/tests/api/aws/test_api_aws.py heat/tests/api/aws/test_api_ec2token.py heat/tests/api/cfn/__init__.py heat/tests/api/cfn/test_api_cfn_v1.py heat/tests/api/middleware/__init__.py heat/tests/api/middleware/test_version_negotiation_middleware.py heat/tests/api/openstack_v1/__init__.py heat/tests/api/openstack_v1/test_actions.py heat/tests/api/openstack_v1/test_build_info.py heat/tests/api/openstack_v1/test_events.py heat/tests/api/openstack_v1/test_resources.py heat/tests/api/openstack_v1/test_routes.py heat/tests/api/openstack_v1/test_services.py heat/tests/api/openstack_v1/test_software_configs.py heat/tests/api/openstack_v1/test_software_deployments.py heat/tests/api/openstack_v1/test_stacks.py heat/tests/api/openstack_v1/test_util.py heat/tests/api/openstack_v1/test_views_common.py heat/tests/api/openstack_v1/test_views_stacks_view.py heat/tests/api/openstack_v1/tools.py heat/tests/autoscaling/__init__.py heat/tests/autoscaling/inline_templates.py heat/tests/autoscaling/test_heat_scaling_group.py heat/tests/autoscaling/test_heat_scaling_policy.py heat/tests/autoscaling/test_launch_config.py heat/tests/autoscaling/test_lbutils.py heat/tests/autoscaling/test_new_capacity.py heat/tests/autoscaling/test_rolling_update.py heat/tests/autoscaling/test_scaling_group.py heat/tests/autoscaling/test_scaling_policy.py heat/tests/autoscaling/test_scaling_template.py heat/tests/aws/__init__.py heat/tests/aws/test_eip.py heat/tests/aws/test_instance.py heat/tests/aws/test_instance_network.py heat/tests/aws/test_loadbalancer.py heat/tests/aws/test_network_interface.py heat/tests/aws/test_s3.py heat/tests/aws/test_security_group.py heat/tests/aws/test_user.py heat/tests/aws/test_volume.py heat/tests/aws/test_waitcondition.py heat/tests/clients/__init__.py heat/tests/clients/test_aodh_client.py heat/tests/clients/test_barbican_client.py heat/tests/clients/test_ceilometer_client.py heat/tests/clients/test_cinder_client.py heat/tests/clients/test_clients.py heat/tests/clients/test_designate_client.py heat/tests/clients/test_glance_client.py heat/tests/clients/test_heat_client.py heat/tests/clients/test_keystone_client.py heat/tests/clients/test_magnum_client.py heat/tests/clients/test_manila_client.py heat/tests/clients/test_mistral_client.py heat/tests/clients/test_monasca_client.py heat/tests/clients/test_neutron_client.py heat/tests/clients/test_nova_client.py heat/tests/clients/test_octavia_client.py heat/tests/clients/test_progress.py heat/tests/clients/test_sahara_client.py heat/tests/clients/test_sdk_client.py heat/tests/clients/test_senlin_client.py heat/tests/clients/test_swift_client.py 
heat/tests/clients/test_zaqar_client.py heat/tests/clients/test_zun_client.py heat/tests/constraints/__init__.py heat/tests/constraints/test_common_constraints.py heat/tests/convergence/__init__.py heat/tests/convergence/test_converge.py heat/tests/convergence/framework/__init__.py heat/tests/convergence/framework/engine_wrapper.py heat/tests/convergence/framework/event_loop.py heat/tests/convergence/framework/fake_resource.py heat/tests/convergence/framework/message_processor.py heat/tests/convergence/framework/message_queue.py heat/tests/convergence/framework/processes.py heat/tests/convergence/framework/reality.py heat/tests/convergence/framework/scenario.py heat/tests/convergence/framework/scenario_template.py heat/tests/convergence/framework/testutils.py heat/tests/convergence/framework/worker_wrapper.py heat/tests/convergence/scenarios/basic_create.py heat/tests/convergence/scenarios/basic_create_rollback.py heat/tests/convergence/scenarios/basic_update_delete.py heat/tests/convergence/scenarios/create_early_delete.py heat/tests/convergence/scenarios/disjoint_create.py heat/tests/convergence/scenarios/multiple_update.py heat/tests/convergence/scenarios/update_add.py heat/tests/convergence/scenarios/update_add_concurrent.py heat/tests/convergence/scenarios/update_add_rollback.py heat/tests/convergence/scenarios/update_add_rollback_early.py heat/tests/convergence/scenarios/update_interrupt_create.py heat/tests/convergence/scenarios/update_remove.py heat/tests/convergence/scenarios/update_remove_rollback.py heat/tests/convergence/scenarios/update_replace.py heat/tests/convergence/scenarios/update_replace_invert_deps.py heat/tests/convergence/scenarios/update_replace_missed_cleanup.py heat/tests/convergence/scenarios/update_replace_missed_cleanup_delete.py heat/tests/convergence/scenarios/update_replace_rollback.py heat/tests/convergence/scenarios/update_user_replace.py heat/tests/convergence/scenarios/update_user_replace_rollback.py heat/tests/convergence/scenarios/update_user_replace_rollback_update.py heat/tests/db/__init__.py heat/tests/db/test_migrations.py heat/tests/db/test_sqlalchemy_api.py heat/tests/db/test_sqlalchemy_filters.py heat/tests/db/test_sqlalchemy_types.py heat/tests/db/test_utils.py heat/tests/engine/__init__.py heat/tests/engine/test_check_resource.py heat/tests/engine/test_dependencies.py heat/tests/engine/test_engine_worker.py heat/tests/engine/test_node_data.py heat/tests/engine/test_plugin_manager.py heat/tests/engine/test_resource_type.py heat/tests/engine/test_scheduler.py heat/tests/engine/test_sync_point.py heat/tests/engine/tools.py heat/tests/engine/service/__init__.py heat/tests/engine/service/test_service_engine.py heat/tests/engine/service/test_software_config.py heat/tests/engine/service/test_stack_action.py heat/tests/engine/service/test_stack_adopt.py heat/tests/engine/service/test_stack_create.py heat/tests/engine/service/test_stack_delete.py heat/tests/engine/service/test_stack_events.py heat/tests/engine/service/test_stack_resources.py heat/tests/engine/service/test_stack_snapshot.py heat/tests/engine/service/test_stack_update.py heat/tests/engine/service/test_threadgroup_mgr.py heat/tests/openstack/__init__.py heat/tests/openstack/aodh/__init__.py heat/tests/openstack/aodh/test_alarm.py heat/tests/openstack/aodh/test_composite_alarm.py heat/tests/openstack/aodh/test_gnocchi_alarm.py heat/tests/openstack/barbican/__init__.py heat/tests/openstack/barbican/test_container.py heat/tests/openstack/barbican/test_order.py 
heat/tests/openstack/barbican/test_secret.py heat/tests/openstack/cinder/__init__.py heat/tests/openstack/cinder/test_qos_specs.py heat/tests/openstack/cinder/test_quota.py heat/tests/openstack/cinder/test_volume.py heat/tests/openstack/cinder/test_volume_type.py heat/tests/openstack/cinder/test_volume_type_encryption.py heat/tests/openstack/cinder/test_volume_utils.py heat/tests/openstack/designate/__init__.py heat/tests/openstack/designate/test_domain.py heat/tests/openstack/designate/test_record.py heat/tests/openstack/designate/test_recordset.py heat/tests/openstack/designate/test_zone.py heat/tests/openstack/glance/__init__.py heat/tests/openstack/glance/test_image.py heat/tests/openstack/heat/__init__.py heat/tests/openstack/heat/test_cloud_config.py heat/tests/openstack/heat/test_deployed_server.py heat/tests/openstack/heat/test_instance_group.py heat/tests/openstack/heat/test_instance_group_update_policy.py heat/tests/openstack/heat/test_multi_part.py heat/tests/openstack/heat/test_none_resource.py heat/tests/openstack/heat/test_random_string.py heat/tests/openstack/heat/test_remote_stack.py heat/tests/openstack/heat/test_resource_chain.py heat/tests/openstack/heat/test_resource_group.py heat/tests/openstack/heat/test_software_component.py heat/tests/openstack/heat/test_software_config.py heat/tests/openstack/heat/test_software_deployment.py heat/tests/openstack/heat/test_structured_config.py heat/tests/openstack/heat/test_swiftsignal.py heat/tests/openstack/heat/test_value.py heat/tests/openstack/heat/test_waitcondition.py heat/tests/openstack/keystone/__init__.py heat/tests/openstack/keystone/test_domain.py heat/tests/openstack/keystone/test_endpoint.py heat/tests/openstack/keystone/test_group.py heat/tests/openstack/keystone/test_project.py heat/tests/openstack/keystone/test_region.py heat/tests/openstack/keystone/test_role.py heat/tests/openstack/keystone/test_role_assignments.py heat/tests/openstack/keystone/test_service.py heat/tests/openstack/keystone/test_user.py heat/tests/openstack/magnum/__init__.py heat/tests/openstack/magnum/test_bay.py heat/tests/openstack/magnum/test_cluster.py heat/tests/openstack/magnum/test_cluster_template.py heat/tests/openstack/manila/__init__.py heat/tests/openstack/manila/test_security_service.py heat/tests/openstack/manila/test_share.py heat/tests/openstack/manila/test_share_network.py heat/tests/openstack/manila/test_share_type.py heat/tests/openstack/mistral/__init__.py heat/tests/openstack/mistral/test_cron_trigger.py heat/tests/openstack/mistral/test_external_resource.py heat/tests/openstack/mistral/test_workflow.py heat/tests/openstack/monasca/__init__.py heat/tests/openstack/monasca/test_alarm_definition.py heat/tests/openstack/monasca/test_notification.py heat/tests/openstack/neutron/__init__.py heat/tests/openstack/neutron/inline_templates.py heat/tests/openstack/neutron/test_address_scope.py heat/tests/openstack/neutron/test_extraroute.py heat/tests/openstack/neutron/test_neutron.py heat/tests/openstack/neutron/test_neutron_firewall.py heat/tests/openstack/neutron/test_neutron_floating_ip.py heat/tests/openstack/neutron/test_neutron_loadbalancer.py heat/tests/openstack/neutron/test_neutron_metering.py heat/tests/openstack/neutron/test_neutron_net.py heat/tests/openstack/neutron/test_neutron_network_gateway.py heat/tests/openstack/neutron/test_neutron_port.py heat/tests/openstack/neutron/test_neutron_provider_net.py heat/tests/openstack/neutron/test_neutron_rbac_policy.py heat/tests/openstack/neutron/test_neutron_router.py 
heat/tests/openstack/neutron/test_neutron_security_group.py heat/tests/openstack/neutron/test_neutron_security_group_rule.py heat/tests/openstack/neutron/test_neutron_segment.py heat/tests/openstack/neutron/test_neutron_subnet.py heat/tests/openstack/neutron/test_neutron_subnetpool.py heat/tests/openstack/neutron/test_neutron_trunk.py heat/tests/openstack/neutron/test_neutron_vpnservice.py heat/tests/openstack/neutron/test_qos.py heat/tests/openstack/neutron/test_quota.py heat/tests/openstack/neutron/lbaas/__init__.py heat/tests/openstack/neutron/lbaas/test_health_monitor.py heat/tests/openstack/neutron/lbaas/test_l7policy.py heat/tests/openstack/neutron/lbaas/test_l7rule.py heat/tests/openstack/neutron/lbaas/test_listener.py heat/tests/openstack/neutron/lbaas/test_loadbalancer.py heat/tests/openstack/neutron/lbaas/test_pool.py heat/tests/openstack/neutron/lbaas/test_pool_member.py heat/tests/openstack/neutron/test_sfc/__init__.py heat/tests/openstack/neutron/test_sfc/test_flow_classifier.py heat/tests/openstack/neutron/test_sfc/test_port_chain.py heat/tests/openstack/neutron/test_sfc/test_port_pair.py heat/tests/openstack/neutron/test_sfc/test_port_pair_group.py heat/tests/openstack/nova/__init__.py heat/tests/openstack/nova/fakes.py heat/tests/openstack/nova/test_flavor.py heat/tests/openstack/nova/test_floatingip.py heat/tests/openstack/nova/test_host_aggregate.py heat/tests/openstack/nova/test_keypair.py heat/tests/openstack/nova/test_quota.py heat/tests/openstack/nova/test_server.py heat/tests/openstack/nova/test_server_group.py heat/tests/openstack/octavia/__init__.py heat/tests/openstack/octavia/inline_templates.py heat/tests/openstack/octavia/test_health_monitor.py heat/tests/openstack/octavia/test_l7policy.py heat/tests/openstack/octavia/test_l7rule.py heat/tests/openstack/octavia/test_listener.py heat/tests/openstack/octavia/test_loadbalancer.py heat/tests/openstack/octavia/test_pool.py heat/tests/openstack/octavia/test_pool_member.py heat/tests/openstack/sahara/__init__.py heat/tests/openstack/sahara/test_cluster.py heat/tests/openstack/sahara/test_data_source.py heat/tests/openstack/sahara/test_image.py heat/tests/openstack/sahara/test_job.py heat/tests/openstack/sahara/test_job_binary.py heat/tests/openstack/sahara/test_templates.py heat/tests/openstack/senlin/__init__.py heat/tests/openstack/senlin/test_cluster.py heat/tests/openstack/senlin/test_node.py heat/tests/openstack/senlin/test_policy.py heat/tests/openstack/senlin/test_profile.py heat/tests/openstack/senlin/test_receiver.py heat/tests/openstack/swift/__init__.py heat/tests/openstack/swift/test_container.py heat/tests/openstack/trove/__init__.py heat/tests/openstack/trove/test_cluster.py heat/tests/openstack/trove/test_instance.py heat/tests/openstack/zaqar/__init__.py heat/tests/openstack/zaqar/test_queue.py heat/tests/openstack/zaqar/test_subscription.py heat/tests/openstack/zun/__init__.py heat/tests/openstack/zun/test_container.py heat/tests/policy/check_admin.json heat/tests/policy/deny_stack_user.json heat/tests/policy/notallowed.json heat/tests/policy/resources.json heat/tests/templates/Neutron.template heat/tests/templates/Neutron.yaml heat/tests/templates/README heat/tests/templates/WordPress_Single_Instance.template heat/tests/templates/WordPress_Single_Instance.yaml heat_integrationtests/.gitignore heat_integrationtests/README.rst heat_integrationtests/__init__.py heat_integrationtests/cleanup_test_env.sh heat_integrationtests/config-generator.conf heat_integrationtests/install-requirements 
heat_integrationtests/post_test_hook.sh heat_integrationtests/pre_test_hook.sh heat_integrationtests/prepare_test_env.sh heat_integrationtests/prepare_test_network.sh heat_integrationtests/common/__init__.py heat_integrationtests/common/clients.py heat_integrationtests/common/config.py heat_integrationtests/common/exceptions.py heat_integrationtests/common/test.py heat_integrationtests/functional/__init__.py heat_integrationtests/functional/functional_base.py heat_integrationtests/functional/test_admin_actions.py heat_integrationtests/functional/test_autoscaling.py heat_integrationtests/functional/test_aws_stack.py heat_integrationtests/functional/test_cancel_update.py heat_integrationtests/functional/test_conditional_exposure.py heat_integrationtests/functional/test_conditions.py heat_integrationtests/functional/test_create_update.py heat_integrationtests/functional/test_default_parameters.py heat_integrationtests/functional/test_delete.py heat_integrationtests/functional/test_env_merge.py heat_integrationtests/functional/test_heat_autoscaling.py heat_integrationtests/functional/test_immutable_parameters.py heat_integrationtests/functional/test_instance_group.py heat_integrationtests/functional/test_nested_get_attr.py heat_integrationtests/functional/test_notifications.py heat_integrationtests/functional/test_preview_update.py heat_integrationtests/functional/test_purge.py heat_integrationtests/functional/test_replace_deprecated.py heat_integrationtests/functional/test_resource_chain.py heat_integrationtests/functional/test_resource_group.py heat_integrationtests/functional/test_simultaneous_update.py heat_integrationtests/functional/test_snapshot_restore.py heat_integrationtests/functional/test_software_deployment_group.py heat_integrationtests/functional/test_stack_cancel.py heat_integrationtests/functional/test_swiftsignal_update.py heat_integrationtests/functional/test_template_resource.py heat_integrationtests/functional/test_template_versions.py heat_integrationtests/functional/test_translation.py heat_integrationtests/functional/test_update_restricted.py heat_integrationtests/functional/test_validation.py heat_integrationtests/locale/de/LC_MESSAGES/heat_integrationtests.po heat_integrationtests/locale/en_GB/LC_MESSAGES/heat_integrationtests.po heat_integrationtests/locale/ja/LC_MESSAGES/heat_integrationtests.po heat_integrationtests/locale/ko_KR/LC_MESSAGES/heat_integrationtests.po heat_upgradetests/post_test_hook.sh heat_upgradetests/pre_test_hook.sh playbooks/get_amphora_tarball.yaml playbooks/devstack/functional/post.yaml playbooks/devstack/functional/run.yaml playbooks/devstack/grenade/run.yaml playbooks/devstack/multinode-networking/pre.yaml rally-scenarios/README.rst rally-scenarios/heat-fakevirt.yaml rally-scenarios/extra/README.rst rally-scenarios/extra/default.yaml rally-scenarios/extra/rg_template_with_constraint.yaml rally-scenarios/extra/rg_template_with_outputs.yaml rally-scenarios/plugins/sample_plugin.py rally-scenarios/plugins/stack_output.py releasenotes/notes/.placeholder releasenotes/notes/add-aodh-composite-alarm-f8eb4f879fe0916b.yaml releasenotes/notes/add-cephfs-share-protocol-033e091e7c6c5166.yaml releasenotes/notes/add-contains-function-440aa7184a07758c.yaml releasenotes/notes/add-hostname-hints-security_groups-to-container-d3b69ae4b6f71fc7.yaml releasenotes/notes/add-list-concat-unique-function-5a87130d9c93cb08.yaml releasenotes/notes/add-list_concat-function-c28563ab8fb6362e.yaml releasenotes/notes/add-tags-for-neutron-router-43d72e78aa89fd07.yaml 
releasenotes/notes/add-template-dir-config-b96392a9e116a2d3.yaml releasenotes/notes/add-zun-client-plugin-dfc10ecd1a6e98be.yaml releasenotes/notes/add-zun-container-c31fa5316237b13d.yaml releasenotes/notes/api-outputs-6d09ebf5044f51c3.yaml releasenotes/notes/barbican-container-77967add0832d51b.yaml releasenotes/notes/bp-mistral-new-resource-type-workflow-execution-748bd37faa3e427b.yaml releasenotes/notes/bp-support-conditions-1a9f89748a08cd4f.yaml releasenotes/notes/bp-support-host-aggregate-fbc4097f4e6332b8.yaml releasenotes/notes/bp-support-neutron-qos-3feb38eb2abdcc87.yaml releasenotes/notes/bp-support-rbac-policy-fd71f8f6cc97bfb6.yaml releasenotes/notes/bp-support-trunk-port-733019c49a429826.yaml releasenotes/notes/bp-update-cinder-resources-e23e62762f167d29.yaml releasenotes/notes/cancel_without_rollback-e5d978a60d9baf45.yaml releasenotes/notes/change-heat-keystone-user-name-limit-to-255-bd076132b98744be.yaml releasenotes/notes/cinder-backup-cb72e775681fb5a5.yaml releasenotes/notes/cinder-qos-specs-resource-ca5a237ebc114729.yaml releasenotes/notes/cinder-quota-resource-f13211c04020cd0c.yaml releasenotes/notes/configurable-server-name-limit-947d9152fe9b43ee.yaml releasenotes/notes/converge-flag-for-stack-update-e0e92a7fe232f10f.yaml releasenotes/notes/convergence-delete-race-5b821bbd4c5ba5dc.yaml releasenotes/notes/deployment-swift-data-server-property-51fd4f9d1671fc90.yaml releasenotes/notes/deprecate-nova-floatingip-resources-d5c9447a199be402.yaml releasenotes/notes/deprecate-threshold-alarm-5738f5ab8aebfd20.yaml releasenotes/notes/designate-v2-support-0f889e9ad13d4aa2.yaml releasenotes/notes/dns-resolution-5afc1c57dfd05aff.yaml releasenotes/notes/doc-migrate-10c968c819848240.yaml releasenotes/notes/environment-merging-d623362fac1279f7.yaml releasenotes/notes/environment_validate_template-fee21a03bb628446.yaml releasenotes/notes/event-list-nested-depth-80081a2a8eefee1a.yaml releasenotes/notes/event-transport-302d1db6c5a5daa9.yaml releasenotes/notes/external-resources-965d01d690d32bd2.yaml releasenotes/notes/fix-attachments-type-c5b6fb5b4c2bcbfe.yaml releasenotes/notes/force-delete-nova-instance-6ed5d7fbd5b6f5fe.yaml releasenotes/notes/get-server-webmks-console-url-f7066a9e14429084.yaml releasenotes/notes/give-me-a-network-67e23600945346cd.yaml releasenotes/notes/glance-image-tag-6fa123ca30be01aa.yaml releasenotes/notes/hidden-designate-domain-record-res-d445ca7f1251b63d.yaml releasenotes/notes/hidden-heat-harestarter-resource-a123479c317886a3.yaml releasenotes/notes/immutable-parameters-a13dc9bec7d6fa0f.yaml releasenotes/notes/keystone-domain-support-e06e2c65c5925ae5.yaml releasenotes/notes/keystone-project-allow-get-attribute-b382fe97694e3987.yaml releasenotes/notes/keystone-region-ce3b435c73c81ce4.yaml releasenotes/notes/know-limit-releasenote-4d21fc4d91d136d9.yaml releasenotes/notes/legacy-client-races-ba7a60cef5ec1694.yaml releasenotes/notes/legacy-stack-user-id-cebbad8b0f2ed490.yaml releasenotes/notes/magnum-resource-update-0f617eec45ef8ef7.yaml releasenotes/notes/make_url-function-d76737adb1e54801.yaml releasenotes/notes/map-replace-function-26bf247c620f64bf.yaml releasenotes/notes/mark-combination-alarm-as-placeholder-resource-e243e9692cab52e0.yaml releasenotes/notes/mark-unhealthy-phys-id-e90fd669d86963d1.yaml releasenotes/notes/monasca-period-f150cdb134f1e036.yaml releasenotes/notes/monasca-supported-71c5373282c3b338.yaml releasenotes/notes/neutron-address-scope-ce234763e22c7449.yaml releasenotes/notes/neutron-lbaas-v2-resources-c0ebbeb9bc9f7a42.yaml 
releasenotes/notes/neutron-quota-resource-7fa5e4df8287bf77.yaml releasenotes/notes/neutron-segment-support-a7d44af499838a4e.yaml releasenotes/notes/nova-quota-resource-84350f0467ce2d40.yaml releasenotes/notes/octavia-resources-0a25720e16dfe55d.yaml releasenotes/notes/parameter-group-for-nested-04559c4de34e326a.yaml releasenotes/notes/parameter-tags-148ef065616f92fc.yaml releasenotes/notes/policy-in-code-124372f6cdb0a497.yaml releasenotes/notes/project-tags-orchestration-If9125519e35f9f95ea8343cb07c377de9ccf5edf.yaml releasenotes/notes/random-string-entropy-9b8e23874cd79b8f.yaml releasenotes/notes/remove-SSLMiddleware-2f15049af559f26a.yaml releasenotes/notes/remove-cloudwatch-api-149403251da97b41.yaml releasenotes/notes/remove-heat-resourcetype-constraint-b679618a149fc04e.yaml releasenotes/notes/repeat-support-setting-permutations-fbc3234166b529ca.yaml releasenotes/notes/resource-search-3234afe601ea4e9d.yaml releasenotes/notes/resource_group_removal_policies_mode-d489e0cc49942e2a.yaml releasenotes/notes/restrict_update_replace-68abece58cf3f6a0.yaml releasenotes/notes/sahara-job-resource-84aecc11fdf1d5af.yaml releasenotes/notes/senlin-resources-71c856dc62d0b407.yaml releasenotes/notes/server-add-user-data-update-policy-c34646acfaada4d4.yaml releasenotes/notes/server-ephemeral-bdm-v2-55e0fe2afc5d8b63.yaml releasenotes/notes/server-group-soft-policy-8eabde24bf14bf1d.yaml releasenotes/notes/server-side-multi-env-7862a75e596ae8f5.yaml releasenotes/notes/set-networks-for-trove-cluster-b997a049eedbad17.yaml releasenotes/notes/set-tags-for-network-resource-d6f3843c546744a2.yaml releasenotes/notes/set-tags-for-port-471155bb53436361.yaml releasenotes/notes/set-tags-for-subnet-17a97b88dd11de63.yaml releasenotes/notes/set-tags-for-subnetpool-d86ca0d7e35a05f1.yaml releasenotes/notes/stack-definition-in-functions-3f7f172a53edf535.yaml releasenotes/notes/store-resource-attributes-8bcbedca2f86986e.yaml releasenotes/notes/subnet-pool-resource-c32ff97d4f956b73.yaml releasenotes/notes/support-rbac-for-qos-policy-a55434654e1dd953.yaml releasenotes/notes/sync-queens-releasenote-13f68851f7201e37.yaml releasenotes/notes/system-random-string-38a14ae2cb6f4a24.yaml releasenotes/notes/template-validate-improvements-52ecf5125c9efeda.yaml releasenotes/notes/yaql-function-4895e39555c2841d.yaml releasenotes/notes/zaqar-notification-a4d240bbf31b7440.yaml releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/liberty.rst releasenotes/source/mitaka.rst releasenotes/source/newton.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/unreleased.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder tools/README.rst tools/cfn-json2yaml tools/custom_guidelines.py tools/heat-db-drop tools/test-requires-deb tools/test-requires-rpm tools/test-setup.sh
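Each line in requires.txt above is a PEP 440 requirement specifier; a version must satisfy every clause, so a pin such as eventlet's excludes two specific releases while bounding the allowed range. A minimal sketch of how such a specifier evaluates, assuming the third-party packaging library (not shipped in this tree):

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    # The eventlet pin from requires.txt above.
    spec = SpecifierSet("!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2")

    for candidate in ("0.18.2", "0.18.3", "0.20.4", "0.21.0"):
        # A version satisfies the set only if it satisfies every clause.
        print(candidate, Version(candidate) in spec)
    # 0.18.2 True / 0.18.3 False / 0.20.4 True / 0.21.0 False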
heat-10.0.2/heat.egg-info/entry_points.txt:

[console_scripts]
heat-all = heat.cmd.all:main
heat-api = heat.cmd.api:main
heat-api-cfn = heat.cmd.api_cfn:main
heat-engine = heat.cmd.engine:main
heat-manage = heat.cmd.manage:main

[heat.clients]
aodh = heat.engine.clients.os.aodh:AodhClientPlugin
barbican = heat.engine.clients.os.barbican:BarbicanClientPlugin
ceilometer = heat.engine.clients.os.ceilometer:CeilometerClientPlugin
cinder = heat.engine.clients.os.cinder:CinderClientPlugin
designate = heat.engine.clients.os.designate:DesignateClientPlugin
glance = heat.engine.clients.os.glance:GlanceClientPlugin
heat = heat.engine.clients.os.heat_plugin:HeatClientPlugin
keystone = heat.engine.clients.os.keystone:KeystoneClientPlugin
magnum = heat.engine.clients.os.magnum:MagnumClientPlugin
manila = heat.engine.clients.os.manila:ManilaClientPlugin
mistral = heat.engine.clients.os.mistral:MistralClientPlugin
monasca = heat.engine.clients.os.monasca:MonascaClientPlugin
neutron = heat.engine.clients.os.neutron:NeutronClientPlugin
nova = heat.engine.clients.os.nova:NovaClientPlugin
octavia = heat.engine.clients.os.octavia:OctaviaClientPlugin
openstack = heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin
sahara = heat.engine.clients.os.sahara:SaharaClientPlugin
senlin = heat.engine.clients.os.senlin:SenlinClientPlugin
swift = heat.engine.clients.os.swift:SwiftClientPlugin
trove = heat.engine.clients.os.trove:TroveClientPlugin
zaqar = heat.engine.clients.os.zaqar:ZaqarClientPlugin
zun = heat.engine.clients.os.zun:ZunClientPlugin
heat.engine.clients.os.neutron.neutron_constraints:FlowClassifierConstraint neutron.lb.provider = heat.engine.clients.os.neutron.neutron_constraints:LBaasV1ProviderConstraint neutron.lbaas.listener = heat.engine.clients.os.neutron.lbaas_constraints:ListenerConstraint neutron.lbaas.loadbalancer = heat.engine.clients.os.neutron.lbaas_constraints:LoadbalancerConstraint neutron.lbaas.pool = heat.engine.clients.os.neutron.lbaas_constraints:PoolConstraint neutron.lbaas.provider = heat.engine.clients.os.neutron.lbaas_constraints:LBaasV2ProviderConstraint neutron.network = heat.engine.clients.os.neutron.neutron_constraints:NetworkConstraint neutron.port = heat.engine.clients.os.neutron.neutron_constraints:PortConstraint neutron.port_pair = heat.engine.clients.os.neutron.neutron_constraints:PortPairConstraint neutron.port_pair_group = heat.engine.clients.os.neutron.neutron_constraints:PortPairGroupConstraint neutron.qos_policy = heat.engine.clients.os.neutron.neutron_constraints:QoSPolicyConstraint neutron.router = heat.engine.clients.os.neutron.neutron_constraints:RouterConstraint neutron.security_group = heat.engine.clients.os.neutron.neutron_constraints:SecurityGroupConstraint neutron.segment = heat.engine.clients.os.openstacksdk:SegmentConstraint neutron.subnet = heat.engine.clients.os.neutron.neutron_constraints:SubnetConstraint neutron.subnetpool = heat.engine.clients.os.neutron.neutron_constraints:SubnetPoolConstraint nova.flavor = heat.engine.clients.os.nova:FlavorConstraint nova.host = heat.engine.clients.os.nova:HostConstraint nova.keypair = heat.engine.clients.os.nova:KeypairConstraint nova.network = heat.engine.clients.os.nova:NetworkConstraint nova.server = heat.engine.clients.os.nova:ServerConstraint octavia.l7policy = heat.engine.clients.os.octavia:L7PolicyConstraint octavia.listener = heat.engine.clients.os.octavia:ListenerConstraint octavia.loadbalancer = heat.engine.clients.os.octavia:LoadbalancerConstraint octavia.pool = heat.engine.clients.os.octavia:PoolConstraint rel_dns_name = heat.engine.constraint.common_constraints:RelativeDNSNameConstraint sahara.cluster = heat.engine.clients.os.sahara:ClusterConstraint sahara.cluster_template = heat.engine.clients.os.sahara:ClusterTemplateConstraint sahara.data_source = heat.engine.clients.os.sahara:DataSourceConstraint sahara.image = heat.engine.clients.os.sahara:ImageConstraint sahara.job_binary = heat.engine.clients.os.sahara:JobBinaryConstraint sahara.job_type = heat.engine.clients.os.sahara:JobTypeConstraint sahara.plugin = heat.engine.clients.os.sahara:PluginConstraint senlin.cluster = heat.engine.clients.os.senlin:ClusterConstraint senlin.policy = heat.engine.clients.os.senlin:PolicyConstraint senlin.policy_type = heat.engine.clients.os.senlin:PolicyTypeConstraint senlin.profile = heat.engine.clients.os.senlin:ProfileConstraint senlin.profile_type = heat.engine.clients.os.senlin:ProfileTypeConstraint test_constr = heat.engine.constraint.common_constraints:TestConstraintDelay timezone = heat.engine.constraint.common_constraints:TimezoneConstraint trove.flavor = heat.engine.clients.os.trove:FlavorConstraint zaqar.queue = heat.engine.clients.os.zaqar:QueueConstraint [heat.event_sinks] zaqar-queue = heat.engine.clients.os.zaqar:ZaqarEventSink [heat.stack_lifecycle_plugins] [heat.templates] AWSTemplateFormatVersion.2010-09-09 = heat.engine.cfn.template:CfnTemplate HeatTemplateFormatVersion.2012-12-12 = heat.engine.cfn.template:HeatTemplate heat_template_version.2013-05-23 = heat.engine.hot.template:HOTemplate20130523 
heat_template_version.2014-10-16 = heat.engine.hot.template:HOTemplate20141016 heat_template_version.2015-04-30 = heat.engine.hot.template:HOTemplate20150430 heat_template_version.2015-10-15 = heat.engine.hot.template:HOTemplate20151015 heat_template_version.2016-04-08 = heat.engine.hot.template:HOTemplate20160408 heat_template_version.2016-10-14 = heat.engine.hot.template:HOTemplate20161014 heat_template_version.2017-02-24 = heat.engine.hot.template:HOTemplate20170224 heat_template_version.2017-09-01 = heat.engine.hot.template:HOTemplate20170901 heat_template_version.2018-03-02 = heat.engine.hot.template:HOTemplate20180302 heat_template_version.newton = heat.engine.hot.template:HOTemplate20161014 heat_template_version.ocata = heat.engine.hot.template:HOTemplate20170224 heat_template_version.pike = heat.engine.hot.template:HOTemplate20170901 heat_template_version.queens = heat.engine.hot.template:HOTemplate20180302 [oslo.config.opts] heat.api.aws.ec2token = heat.api.aws.ec2token:list_opts heat.common.config = heat.common.config:list_opts heat.common.context = heat.common.context:list_opts heat.common.crypt = heat.common.crypt:list_opts heat.common.wsgi = heat.common.wsgi:list_opts heat.engine.clients = heat.engine.clients:list_opts heat.engine.clients.os.keystone.heat_keystoneclient = heat.engine.clients.os.keystone.heat_keystoneclient:list_opts heat.engine.notification = heat.engine.notification:list_opts heat.engine.resources = heat.engine.resources:list_opts heat_integrationtests.common.config = heat_integrationtests.common.config:list_opts [oslo.config.opts.defaults] heat.common.config = heat.common.config:set_config_defaults [oslo.policy.policies] heat = heat.policies:list_rules [wsgi_scripts] heat-wsgi-api = heat.httpd.heat_api:init_application heat-wsgi-api-cfn = heat.httpd.heat_api_cfn:init_application heat-10.0.2/heat.egg-info/pbr.json0000664000175000017500000000005613343562667016646 0ustar zuulzuul00000000000000{"git_version": "1f08105", "is_release": true}heat-10.0.2/heat.egg-info/dependency_links.txt0000664000175000017500000000000113343562667021235 0ustar zuulzuul00000000000000 heat-10.0.2/heat.egg-info/PKG-INFO0000664000175000017500000000767513343562667016303 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: heat Version: 10.0.2 Summary: OpenStack Orchestration Home-page: http://docs.openstack.org/developer/heat/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: http://governance.openstack.org/badges/heat.svg :target: http://governance.openstack.org/reference/tags/index.html .. Change things from this point on ==== Heat ==== Heat is a service to orchestrate multiple composite cloud applications using templates, through both an OpenStack-native REST API and a CloudFormation-compatible Query API. Why heat? It makes the clouds rise and keeps them there. 
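        Heat's client plugins, constraints and template handlers are all
        plain setuptools entry points, registered in the
        heat.egg-info/entry_points.txt sections above. Below is a minimal
        sketch of inspecting that registry from Python -- assuming Python
        3.10+ ``importlib.metadata`` and an installed heat distribution;
        Heat itself resolves these groups through its own plugin
        machinery, so this is illustrative only::

            # Print the template-version handlers registered under the
            # [heat.templates] entry-point group, e.g.
            # heat_template_version.queens -> heat.engine.hot.template:HOTemplate20180302
            from importlib import metadata

            for ep in metadata.entry_points(group='heat.templates'):
                print(ep.name, '->', ep.value)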
Getting Started --------------- If you'd like to run from the master branch, you can clone the git repo: git clone https://git.openstack.org/openstack/heat * Wiki: http://wiki.openstack.org/Heat * Developer docs: http://docs.openstack.org/heat/latest * Template samples: https://git.openstack.org/cgit/openstack/heat-templates * Agents: https://git.openstack.org/cgit/openstack/heat-agents Python client ------------- https://git.openstack.org/cgit/openstack/python-heatclient References ---------- * http://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html * http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/create-stack.html * http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html * http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca We have integration with ------------------------ * https://git.openstack.org/cgit/openstack/python-novaclient (instance) * https://git.openstack.org/cgit/openstack/python-keystoneclient (auth) * https://git.openstack.org/cgit/openstack/python-swiftclient (s3) * https://git.openstack.org/cgit/openstack/python-neutronclient (networking) * https://git.openstack.org/cgit/openstack/python-ceilometerclient (metering) * https://git.openstack.org/cgit/openstack/python-aodhclient (alarming service) * https://git.openstack.org/cgit/openstack/python-cinderclient (storage service) * https://git.openstack.org/cgit/openstack/python-glanceclient (image service) * https://git.openstack.org/cgit/openstack/python-troveclient (database as a Service) * https://git.openstack.org/cgit/openstack/python-saharaclient (hadoop cluster) * https://git.openstack.org/cgit/openstack/python-barbicanclient (key management service) * https://git.openstack.org/cgit/openstack/python-designateclient (DNS service) * https://git.openstack.org/cgit/openstack/python-magnumclient (container service) * https://git.openstack.org/cgit/openstack/python-manilaclient (shared file system service) * https://git.openstack.org/cgit/openstack/python-mistralclient (workflow service) * https://git.openstack.org/cgit/openstack/python-zaqarclient (messaging service) * https://git.openstack.org/cgit/openstack/python-monascaclient (monitoring service) Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 heat-10.0.2/heat.egg-info/top_level.txt0000664000175000017500000000003313343562667017715 0ustar zuulzuul00000000000000heat heat_integrationtests heat-10.0.2/heat.egg-info/not-zip-safe0000664000175000017500000000000113343562625017407 0ustar zuulzuul00000000000000 heat-10.0.2/heat/0000775000175000017500000000000013343562672013471 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/0000775000175000017500000000000013343562672014730 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/ja/0000775000175000017500000000000013343562672015322 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/ja/LC_MESSAGES/0000775000175000017500000000000013343562672017107 5ustar 
zuulzuul00000000000000heat-10.0.2/heat/locale/ja/LC_MESSAGES/heat.po0000666000175000017500000106653513343562351020404 0ustar zuulzuul00000000000000# Translations template for heat. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the heat project. # # Translators: # yfukuda , 2014 # Satoshi Kodama , 2013 # Tomoyuki KATO , 2013 # Andreas Jaeger , 2016. #zanata # Yuko Fukuda , 2017. #zanata msgid "" msgstr "" "Project-Id-Version: heat VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-05 10:35+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2017-04-28 06:33+0000\n" "Last-Translator: Yuko Fukuda \n" "Language: ja\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: Japanese\n" #, python-format msgid "\"%%s\" is not a valid keyword inside a %s definition" msgstr "\"%%s\" は %s 定義内部で有効なキーワードではありません" #, python-format msgid "\"%(fn_name)s\": %(err)s" msgstr "\"%(fn_name)s\": %(err)s" #, python-format msgid "" "\"%(name)s\" params must be strings, numbers, list or map. Failed to json " "serialize %(value)s" msgstr "" "\"%(name)s\" パラメーターは文字列、数値、リスト、またはマップでなければなりま" "せん。JSON のシリアライズ %(value)s に失敗しました。" #, python-format msgid "" "\"%(section)s\" must contain a map of %(obj_name)s maps. Found a [%(_type)s] " "instead" msgstr "" "\"%(section)s\" には %(obj_name)s マップのマップが含まれている必要がありま" "す。代わりに [%(_type)s] が見つかりました。" #, python-format msgid "\"%(url)s\" is not a valid SwiftSignalHandle. The %(part)s is invalid" msgstr "" "\"%(url)s\" は有効な SwiftSignalHandle ではありません。%(part)s は無効です" #, python-format msgid "\"%(value)s\" does not validate %(name)s" msgstr "\"%(value)s\" は %(name)s を妥当性検査しません" #, python-format msgid "\"%(value)s\" does not validate %(name)s (constraint not found)" msgstr "\"%(value)s\" は %(name)s を妥当性検査しません (制約が見つかりません)" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be one of: %(available)s" msgstr "" "\"%(version)s\"。\"%(version_type)s\" は、次のいずれかでなければなりません: " "%(available)s" #, python-format msgid "\"%(version)s\". 
\"%(version_type)s\" should be: %(available)s" msgstr "" "\"%(version)s\"。\"%(version_type)s\" は、%(available)s でなければなりません" #, python-format msgid "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" msgstr "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" #, python-format msgid "\"%s\" argument must be a string" msgstr "\"%s\" 引数は文字列でなければなりません" #, python-format msgid "\"%s\" can't traverse path" msgstr "\"%s\" はパスを全探索できません" #, python-format msgid "\"%s\" deletion policy not supported" msgstr "\"%s\" 削除ポリシーはサポートされていません" #, python-format msgid "\"%s\" delimiter must be a string" msgstr "\"%s\" 区切り文字は文字列でなければなりません" #, python-format msgid "\"%s\" is not a list" msgstr "\"%s\" はリストではありません" #, python-format msgid "\"%s\" is not a map" msgstr "\"%s\" はマップではありません" #, python-format msgid "\"%s\" is not a valid ARN" msgstr "\"%s\" は有効な ARN ではありません" #, python-format msgid "\"%s\" is not a valid ARN URL" msgstr "\"%s\" は有効な ARN URL ではありません" #, python-format msgid "\"%s\" is not a valid Heat ARN" msgstr "\"%s\" は有効な heat ARN ではありません" #, python-format msgid "\"%s\" is not a valid URL" msgstr "\"%s\" は有効な URL ではありません" #, python-format msgid "\"%s\" is not a valid boolean" msgstr "\"%s\" は有効なブール値ではありません" #, python-format msgid "\"%s\" is not a valid template section" msgstr "\"%s\" が有効なテンプレートセクションではありません" #, python-format msgid "\"%s\" must operate on a list" msgstr "\"%s\" はリストに対して作動しなければなりません" #, python-format msgid "\"%s\" param placeholders must be strings" msgstr "\"%s\" パラメーターのプレースホルダーは文字列である必要があります。" #, python-format msgid "\"%s\" parameters must be a mapping" msgstr "\"%s\" パラメーターはマッピングでなければなりません" #, python-format msgid "\"%s\" params must be a map" msgstr "\"%s\" パラメーターはマップでなければなりません" #, python-format msgid "\"%s\" params must be strings, numbers, list or map." msgstr "\"%s\" パラメーターは文字列、数値、またはマップでなければなりません。" #, python-format msgid "\"%s\" template must be a string" msgstr "\"%s\" テンプレートは文字列でなければなりません" #, python-format msgid "\"repeat\" syntax should be %s" msgstr "\"repeat\" 構文は %s でなければなりません" #, python-format msgid "%(a)s paused until Hook %(h)s is cleared" msgstr "%(a)s はフック %(h)s がクリアされるまで休止されます" #, python-format msgid "%(action)s is not supported for resource." msgstr "%(action)s はリソースではサポートされていません。" #, python-format msgid "%(action)s is restricted for resource." msgstr "%(action)s はリソースが制限されています。" #, python-format msgid "%(desired_capacity)s must be between %(min_size)s and %(max_size)s" msgstr "" "%(desired_capacity)s は %(min_size)s と %(max_size)s の間である必要があります" #, python-format msgid "%(error)s%(path)s%(message)s" msgstr "%(error)s%(path)s%(message)s" #, python-format msgid "%(feature)s is not supported." msgstr "%(feature)s はサポートされていません。" #, python-format msgid "" "%(img)s must be provided: Referenced cluster template %(tmpl)s has no " "default_image_id defined." msgstr "" "%(img)s を指定する必要があります: 参照クラスターテンプレート %(tmpl)s に、" "default_image_id は定義されていません。" #, python-format msgid "%(lc)s (%(ref)s) reference can not be found." msgstr "%(lc)s (%(ref)s) 参照が見つかりません。" #, python-format msgid "" "%(lc)s (%(ref)s) requires a reference to the configuration not just the name " "of the resource." 
msgstr "" "%(lc)s (%(ref)s) は、リソースの名前だけではなく設定への参照を必要とします。" #, python-format msgid "%(len)d of %(count)d received" msgstr "%(len)d / %(count)d を受信しました" #, python-format msgid "%(len)d of %(count)d received - %(reasons)s" msgstr "%(len)d / %(count)d を受信しました - %(reasons)s" #, python-format msgid "%(message)s" msgstr "%(message)s" #, python-format msgid "%(min_size)s can not be greater than %(max_size)s" msgstr "%(min_size)s を %(max_size)s よりも大きくすることはできません" #, python-format msgid "%(name)s constraint invalid for %(utype)s" msgstr "%(name)s 制約は %(utype)s には無効です" #, python-format msgid "%(prop1)s cannot be specified without %(prop2)s." msgstr "%(prop1)s は %(prop2)s なしに指定できません。" #, python-format msgid "" "%(prop1)s property should only be specified for %(prop2)s with value " "%(value)s." msgstr "" "%(prop1)s プロパティーは、%(value)s の値を持つ %(prop2)s にのみ設定する必要が" "あります。" #, python-format msgid "%(resource)s: Invalid attribute %(key)s" msgstr "%(resource)s: 属性 %(key)s は無効です" #, python-format msgid "" "%(result)s - Unknown status %(resource_status)s due to \"%(status_reason)s\"" msgstr "%(result)s: \"%(status_reason)s\" が原因の不明状況 %(resource_status)s" #, python-format msgid "%(schema)s supplied for %(type)s %(data)s" msgstr "%(schema)s が %(type)s %(data)s に指定されました" #, python-format msgid "%(server)s-port-%(number)s" msgstr "%(server)s-ポート-%(number)s" #, python-format msgid "%(type)s not in valid format: %(error)s" msgstr "%(type)s が有効な形式ではありません: %(error)s" #, python-format msgid "%s Key Name must be a string" msgstr "%s キー名は文字列でなければなりません" #, python-format msgid "%s Timed out" msgstr "%s がタイムアウトになりました" #, python-format msgid "%s Value Name must be a string" msgstr "%s 値名は文字列でなければなりません" #, python-format msgid "%s is not a valid job location." msgstr "%s は有効なジョブロケーションではありません。" #, python-format msgid "%s is not active" msgstr "%s はアクティブではありません" #, python-format msgid "%s is not an integer." msgstr "%s は整数ではありません。" #, python-format msgid "%s must be provided" msgstr "%s を指定する必要があります" #, python-format msgid "'%(attr)s': expected '%(expected)s', got '%(current)s'" msgstr "" "'%(attr)s': '%(expected)s' が予期されていましたが、'%(current)s' を受け取りま" "した" msgid "" "'task_name' is not assigned in 'params' in case of reverse type workflow." msgstr "" "リバースタイプのワークフローが場合、 'params' において 'task_name' は設定され" "ません。" msgid "'true' if DHCP is enabled for this subnet; 'false' otherwise." msgstr "DHCP がこのサブネットで有効な場合は TRUE、そうでない場合は FALSE。" msgid "A UUID for the set of servers being requested." msgstr "サーバーのセットの UUID が要求されています。" msgid "A bad or out-of-range value was supplied" msgstr "異常または範囲外の値が指定されました" msgid "A boolean value of default flag." msgstr "デフォルトフラグのブール値。" msgid "A boolean value specifying the administrative status of the network." msgstr "ネットワークの管理状況を指定するブール値。" #, python-format msgid "" "A character class and its corresponding %(min)s constraint to generate the " "random string from." msgstr "ランダム文字列の生成元である文字クラスとその対応する %(min)s 制約。" #, python-format msgid "" "A character sequence and its corresponding %(min)s constraint to generate " "the random string from." msgstr "" "ランダム文字列の生成元である文字シーケンスとその対応する %(min)s 制約。" msgid "A comma-delimited list of server ip addresses. (Heat extension)." msgstr "サーバー IP アドレスのコンマ区切りリスト (heat 拡張)。" msgid "A description of the volume." msgstr "ボリュームの説明。" msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name. This value is typically vda." 
msgstr "" "システムの /dev/device_name でボリュームが接続されるときのデバイスの名前。通" "常、この値は Vda です。" msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name.e.g. vdb" msgstr "" "システムの /dev/device_name でボリュームが接続される場合のデバイス名。例えば " "vdb" msgid "" "A dict of all network addresses with corresponding port_id. Each network " "will have two keys in dict, they are network name and network id. The port " "ID may be obtained through the following expression: \"{get_attr: [, " "addresses, , 0, port]}\"." msgstr "" "対応する port_id を持つすべてのネットワークアドレスのディクショナリー。各ネッ" "トワークはディクショナリー内に 2 つのキー (ネットワーク名とネットワーク ID) " "を持ちます。ポート ID は以下の式で取得できます: \"{get_attr: [, " "addresses, , 0, port]}\"" msgid "" "A dict of assigned network addresses of the form: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Each network will have two keys in dict, they " "are network name and network id." msgstr "" "以下の形式を持つネットワークアドレスのディクショナリー: {\"public\": [ip1, " "ip2...]、\"private\": [ip3, ip4]、\"public_uuid\": [ip1, " "ip2...]、\"private_uuid\": [ip3, ip4]}。各ネットワークはディクショナリー内に " "2 つのキー (ネットワーク名とネットワーク ID) を持ちます。" msgid "A dict of key-value pairs output from the stack." msgstr "スタックから出力されるキー/値ペアのディクショナリー。" msgid "A dictionary which contains name and input of the workflow." msgstr "ワークフローの名前と入力を含むディクショナリー。" msgid "A length constraint must have a min value and/or a max value specified." msgstr "" "長さ制約には最小値と最大値のいずれかまたは両方を指定する必要があります。" msgid "A list of URLs (webhooks) to invoke when state transitions to alarm." msgstr "状態がアラームへ遷移する際に呼び出す URL (webhook) のリスト。" msgid "" "A list of URLs (webhooks) to invoke when state transitions to insufficient-" "data." msgstr "状態がデータ不足へ遷移する際に呼び出す URL (webhook) のリスト。" msgid "A list of URLs (webhooks) to invoke when state transitions to ok." msgstr "状態が OK へ遷移する際に呼び出す URL (webhook) のリスト。" msgid "A list of access rules that define access from IP to Share." msgstr "IP からシェアへのアクセスを定義するアクセスルールのリスト。" msgid "A list of all rules for the QoS policy." msgstr "QoS ポリシーのすべてのルールのリスト。" msgid "A list of all subnet attributes for the port." msgstr "ポートのサブネット属性すべてのリスト。" msgid "" "A list of character class and their constraints to generate the random " "string from." msgstr "ランダム文字列の生成元である文字クラスとその制約のリスト。" msgid "" "A list of character sequences and their constraints to generate the random " "string from." msgstr "ランダム文字列の生成元である文字シーケンスとその制約のリスト。" msgid "A list of cluster instance IPs." msgstr "クラスターインスタンスの IP のリスト。" msgid "A list of clusters to which this policy is attached." msgstr "このポリシーが追加されるクラスターのリスト。" msgid "A list of host route dictionaries for the subnet." msgstr "サブネットのホストルートディクショナリーのリスト。" msgid "A list of instances ids." msgstr "インスタンス ID の一覧。" msgid "A list of metric ids." msgstr "メトリック ID のリスト。" msgid "" "A list of query factors, each comparing a Sample attribute with a value. " "Implicitly combined with matching_metadata, if any." msgstr "" "照会因子のリスト。各因子は「サンプル」属性と値を比較します。" "matching_metadata と暗黙的に結合されます (存在する場合)。" msgid "A list of resource IDs for the resources in the chain." msgstr "チェーン内のリソースのリソース ID のリスト。" msgid "A list of resource IDs for the resources in the group." msgstr "グループ内のリソースのリソース ID のリスト。" msgid "A list of security groups for the port." msgstr "ポートのセキュリティーグループのリスト。" msgid "A list of security services IDs or names." msgstr "セキュリティーサービスの ID または名前のリスト。" msgid "A list of string policies to apply. Defaults to anti-affinity." msgstr "" "適用する文字列ポリシーの一覧。アンチアフィニティーにデフォルト設定されます。" msgid "A login profile for the user." 
msgstr "ユーザーのログインプロファイル。" msgid "A mandatory input parameter is missing" msgstr "必須の入力パラメーターが欠落しています" msgid "A map containing all headers for the container." msgstr "コンテナーのすべてのヘッダーを含むマップ。" msgid "" "A map of Nova names and captured stderrs from the configuration execution to " "each server." msgstr "" "Nova 名および取り込み済み stderr の、設定実行から各サーバーへのマップ。" msgid "" "A map of Nova names and captured stdouts from the configuration execution to " "each server." msgstr "" "Nova 名および取り込み済み stdout の、設定実行から各サーバーへのマップ。" msgid "" "A map of Nova names and returned status code from the configuration " "execution." msgstr "Nova 名と設定実行から返される状況コードのマップ。" msgid "" "A map of files to create/overwrite on the server upon boot. Keys are file " "names and values are the file contents." msgstr "" "ブート時にサーバーで作成または上書きするファイルのマップ。キーはファイル名、" "値はファイル内容です。" msgid "" "A map of resource names to the specified attribute of each individual " "resource." msgstr "個別のリソースごとに指定されている属性とリソース名のマップ。" msgid "" "A map of resource names to the specified attribute of each individual " "resource. Requires heat_template_version: 2014-10-16." msgstr "" "個別のリソースごとに指定されている属性とリソース名のマップ。" "heat_template_version : 2014-10-16 が必要です。" msgid "" "A map of user-defined meta data to associate with the account. Each key in " "the map will set the header X-Account-Meta-{key} with the corresponding " "value." msgstr "" "アカウントに関連付けるユーザー定義メタデータのマップ。マップ内の各キーはヘッ" "ダー X-Account-Meta-{key} とそれに対応する値に設定されます。" msgid "" "A map of user-defined meta data to associate with the container. Each key in " "the map will set the header X-Container-Meta-{key} with the corresponding " "value." msgstr "" "コンテナーに関連付けるユーザー定義メタデータのマップ。マップ内の各キーはヘッ" "ダー X-Container-Meta-{key} とそれに対応する値に設定されます。" msgid "A name used to distinguish the volume." msgstr "ボリュームを区別するために使用される名前。" msgid "" "A per-tenant quota on the prefix space that can be allocated from the subnet " "pool for tenant subnets." msgstr "" "サブネットプールからテナントのサブネットに割り当てることができるプレフィック" "ススペース上のテナントごとのクォータ。" msgid "" "A predefined access control list (ACL) that grants permissions on the bucket." msgstr "バケットに対する許可を付与する事前定義アクセス制御リスト (ACL)。" msgid "A range constraint must have a min value and/or a max value specified." msgstr "" "範囲制約には最小値と最大値のいずれかまたは両方を指定する必要があります。" msgid "" "A reference to the wait condition handle used to signal this wait condition." msgstr "" "この待ち状態をシグナル通知するために使用される待ち状態ハンドルに対する参照。" msgid "" "A signed url to create executions for workflows specified in Workflow " "resource." msgstr "" "ワークフローのリソースで指定されたワークフローの処理を作成するための署名済み " "URL。" msgid "A signed url to handle the alarm." msgstr "アラームを処理する署名済み URL。" msgid "A signed url to handle the alarm. (Heat extension)." msgstr "アラームを処理する署名済み URL (heat 拡張)。" msgid "A specified set of DNS name servers to be used." msgstr "使用する DNS ネームサーバーの指定されたセット。" msgid "" "A string specifying a symbolic name for the network, which is not required " "to be unique." msgstr "ネットワークのシンボル名を指定する文字列。固有である必要はありません。" msgid "" "A string specifying a symbolic name for the security group, which is not " "required to be unique." msgstr "" "セキュリティーグループのシンボル名を指定する文字列。固有である必要はありませ" "ん。" msgid "A string specifying physical network mapping for the network." msgstr "ネットワークの物理ネットワークマッピングを指定する文字列。" msgid "A string specifying the provider network type for the network." msgstr "ネットワークのプロバイダーネットワーク形式を指定する文字列。" msgid "A string specifying the segmentation id for the network." msgstr "ネットワークのセグメンテーション ID を指定する文字列。" msgid "A symbolic name for this port." 
msgstr "このポートのシンボル名。" msgid "A url to handle the alarm using native API." msgstr "ネイティブ API を使用して アラームを処理する URL。" msgid "" "A variable that this resource will use to replace with the current index of " "a given resource in the group. Can be used, for example, to customize the " "name property of grouped servers in order to differentiate them when listed " "with nova client." msgstr "" "このリソースが、グループ内のあるリソースの現行のインデックスと置きかえるため" "に使用する変数。例えば、Nova クライアントとともに一覧表示するグループ化された" "サーバーを差別化するために、グループ化されたサーバーの名前プロパティーをカス" "タマイズするために使用できます。" msgid "AWS compatible instance name." msgstr "AWS 互換インスタンス名。" msgid "AWS query string is malformed, does not adhere to AWS spec" msgstr "AWSクエリ文字列の形式が正しくありません。AWS 仕様に従っていません" msgid "Access policies to apply to the user." msgstr "ユーザーに適用するアクセスポリシー。" #, python-format msgid "AccessPolicy resource %s not in stack" msgstr "アクセスポリシーリソース %s がスタックにありません" #, python-format msgid "Action %s not allowed for user" msgstr "アクション %s はユーザーに許可されていません" msgid "Action to be performed on the traffic matching the rule." msgstr "ルールに一致するトラフィックに対して実行するアクション。" msgid "Actual input parameter values of the task." msgstr "タスクの実際の入力パラメーター。" msgid "Add needed policies directly to the task, Policy keyword is not needed" msgstr "" "必要なポリシーをタスクに直接追加します。ポリシーのキーワードは不要です。" msgid "Additional MAC/IP address pairs allowed to pass through a port." msgstr "" "ポートを通過することが許可されている追加の MAC アドレスと IP アドレスのペア。" msgid "Additional MAC/IP address pairs allowed to pass through the port." msgstr "" "ポートを通過することが許可されている追加の MAC アドレスと IP アドレスのペア。" msgid "Additional routes for this subnet." msgstr "このサブネットの追加経路。" msgid "Address family of the address scope, which is 4 or 6." msgstr "アドレススコープのアドレスファミリー (4 または 6)。" msgid "" "Address of the notification. It could be a valid email address, url or " "service key based on notification type." msgstr "" "通知のアドレス。通知タイプに基づいて、有効な E メールアドレス、URL、または" "サービスキーのいずれかである場合があります。" msgid "" "Address to bind the server. Useful when selecting a particular network " "interface." msgstr "" "サーバーをバインドするアドレス。特定のネットワークインターフェースを選択する" "ときに役立ちます。" msgid "Administrative state for the ipsec site connection." msgstr "ipsec サイト接続の管理状態。" msgid "Administrative state for the vpn service." msgstr "VPN サービスの管理状態。" msgid "" "Administrative state of the firewall. If false (down), firewall does not " "forward packets and will drop all traffic to/from VMs behind the firewall." msgstr "" "ファイアウォールの管理状態。False (ダウン) の場合、ファイアウォールはパケット" "を転送せず、ファイアウォールの背後にある VM とのトラフィックをすべて除去しま" "す。" msgid "Administrative state of the router." msgstr "ルーターの管理状態。" #, python-format msgid "Alarm %(alarm)s could not find scaling group named \"%(group)s\"" msgstr "" "アラーム %(alarm)s がスケーリンググループ \"%(group)s\" を見つけられませんで" "した。" #, python-format msgid "Algorithm must be one of %s" msgstr "アルゴリズムは %s のいずれかでなければなりません" msgid "All heat engines are down." msgstr "す べての heat エンジンが停止しています。" msgid "Allocated floating IP address." msgstr "割り当て済み Floating IP アドレス。" msgid "Allocation ID for VPC EIP address." msgstr "VPC EIP アドレスに割り当てられた ID。" msgid "Allow client's debug log output." msgstr "クライアントのデバッグログ出力を許可します。" msgid "Allow or deny action for this firewall rule." msgstr "このファイアウォールルールの許可アクションまたは拒否アクション。" msgid "Allow orchestration of multiple clouds." msgstr "複数クラウドのオーケストレーションを許可します。" msgid "" "Allow reauthentication on token expiry, such that long-running tasks may " "complete. Note this defeats the expiry of any provided user tokens." 
msgstr "" "長時間にわたって実行されるタスクを完了できるよう、トークンの有効期限の再認証" "を許可します。これは、設定されたあらゆるユーザートークンの有効期限に優先しま" "す。" msgid "" "Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At " "least one endpoint needs to be specified." msgstr "" "multi_cloud が有効になっているときに auth_uri の keystone エンドポイントが許" "可されました。1 つ以上のエンドポイントを指定する必要があります。" msgid "" "Allowed tenancy of instances launched in the VPC. default - any tenancy; " "dedicated - instance will be dedicated, regardless of the tenancy option " "specified at instance launch." msgstr "" "VPC で起動されたインスタンスの許可されるテナンシー。デフォルト - 任意のテナン" "シー。専用 - インスタンス起動時に指定されたテナンシーオプションとは無関係にイ" "ンスタンスは専用化されます。" #, python-format msgid "Allowed values: %s" msgstr "許可値: %s" msgid "AllowedPattern must be a string" msgstr "AllowedPattern は文字列でなければなりません" msgid "AllowedValues must be a list" msgstr "許可される値はリストでなければなりません" msgid "Allowing not to store action results after task completion." msgstr "タスクの完了後にアクションの結果を保存することはできません。" msgid "" "Allows to synchronize multiple parallel workflow branches and aggregate " "their data. Valid inputs: all - the task will run only if all upstream tasks " "are completed. Any numeric value - then the task will run once at least this " "number of upstream tasks are completed and corresponding conditions have " "triggered." msgstr "" "複数の同時ワークフローブランチの同期を行い、データのアグリゲートを行うことが" "できます。すべての有効な入力の場合、すべての上流タスクが完了した場合のみタス" "クは実行されます。任意の数値の場合、この数値以上の上流タスクが完了し、該当す" "る条件がトリガーされた場合にタスクは実行されます。" #, python-format msgid "Ambiguous versions (%s)" msgstr "バージョンが不明確です (%s)" msgid "" "Amount of disk space (in GB) required to boot image. Default value is 0 if " "not specified and means no limit on the disk size." msgstr "" "イメージのブートに必要なディスク容量 (GB)。指定しない場合のデフォルト値は 0 " "で、これはディスク容量の制限がないことを意味します。" msgid "" "Amount of ram (in MB) required to boot image. Default value is 0 if not " "specified and means no limit on the ram size." msgstr "" "イメージのブートに必要な RAM の容量 (MB)。指定しない場合のデフォルト値は 0 " "で、これは RAM サイズの制限がないことを意味します。" msgid "An address scope ID to assign to the subnet pool." msgstr "サブネットプールに割り当てる必要があるアドレススコープ。" msgid "An application health check for the instances." msgstr "インスタンスのアプリケーションヘルスチェック。" msgid "An ordered list of firewall rules to apply to the firewall." msgstr "ファイアウォールに適用するファイアウォールルールの番号付きリスト。" msgid "" "An ordered list of nics to be added to this server, with information about " "connected networks, fixed ips, port etc." msgstr "" "このサーバーに追加される NIC の番号付きリスト。および、接続されているネット" "ワーク、固定 IP、ポートなどに関する情報。" msgid "An unknown exception occurred." msgstr "不明な例外が発生しました。" msgid "" "Any data structure arbitrarily containing YAQL expressions that defines " "workflow output. May be nested." msgstr "" "ワークフローの出力を定義する YAQL 式を任意に含むあらゆるデータ構造。ネストさ" "れている場合があります。" msgid "Anything other than one VPCZoneIdentifier" msgstr "1 つの VPCZoneIdentifier を除くすべて" msgid "Api endpoint reference of the instance." msgstr "インスタンスの API エンドポイント参照。" msgid "" "Arbitrary key-value pairs specified by the client to help boot a server." msgstr "" "サーバーのブートを支援するためにクライアントよって指定されるキー/値のペア (任" "意)。" msgid "" "Arbitrary key-value pairs specified by the client to help the Cinder " "scheduler creating a volume." msgstr "" "Cinder スケジューラーによるボリュームの作成を支援するためにクライアントで指定" "される任意のキーと値のペア。" msgid "" "Arbitrary key/value metadata to store contextual information about this " "queue." msgstr "" "このキューに関するコンテキスト情報を保存する任意のキーと値のメタデータ。" msgid "" "Arbitrary key/value metadata to store for this server. Both keys and values " "must be 255 characters or less. 
Non-string values will be serialized to JSON " "(and the serialized string must be 255 characters or less)." msgstr "" "このサーバーのために保存する必要のある任意のキーと値のペア。キーと値の両方と" "も 255 文字以下である必要があります。文字列でない値は JSON にシリアライズされ" "ます (シリアライズされた文字列は 255 文字以下である必要があります)。" msgid "Arbitrary key/value metadata to store information for aggregate." msgstr "アグリゲートのための情報を保存する任意のキーと値のメタデータ。" #, python-format msgid "Argument to \"%s\" must be a list" msgstr "\"%s\" に対する引数はリストでなければなりません" #, python-format msgid "Argument to \"%s\" must be a string" msgstr "\"%s\" に対する引数は文字列でなければなりません" #, python-format msgid "Argument to \"%s\" must be string or list" msgstr "\"%s\" に対する引数は文字列またはリストでなければなりません" #, python-format msgid "Argument to function \"%s\" must be a list of strings" msgstr "関数 \"%s\" に対する引数は文字列のリストでなければなりません" #, python-format msgid "" "Arguments to \"%s\" can be of the next forms: [resource_name] or " "[resource_name, attribute, (path), ...]" msgstr "" "\"%s\" に対する引数は以下の形式を取ることができます: [resource_name] または " "[resource_name, attribute, (path), ...]" #, python-format msgid "Arguments to \"%s\" must be a map" msgstr "\"%s\" に対する引数はマップでなければなりません" #, python-format msgid "Arguments to \"%s\" must be of the form [index, collection]" msgstr "\"%s\" に対する引数の形式は [index, collection]でなければなりません" #, python-format msgid "" "Arguments to \"%s\" must be of the form [resource_name, attribute, " "(path), ...]" msgstr "" "\"%s\" に対する引数の形式は [resource_name, attribute, (path), ...] でなけれ" "ばなりません" #, python-format msgid "Arguments to \"%s\" must be of the form [resource_name, attribute]" msgstr "" "\"%s\" に対する引数の形式は [resource_name, attribute] でなければなりません" #, python-format msgid "Arguments to %s not fully resolved" msgstr "%s に対する引数が完全に解決されていません" #, python-format msgid "At least one of the following properties must be specified: %(props)s." msgstr "次の属性のうち少なくとも1つを指定する必要があります: %(props)s" #, python-format msgid "Attempt to delete a stack with id: %(id)s %(msg)s" msgstr "ID %(id)s のスタックの削除を試行: %(msg)s" #, python-format msgid "Attempt to delete user creds with id %(id)s that does not exist" msgstr "ID が %(id)s の存在しないユーザー資格情報を削除しようとしています" #, python-format msgid "Attempt to delete watch_rule: %(id)s %(msg)s" msgstr "watch_rule: %(id)s の削除を試行: %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(msg)s" msgstr "ID %(id)s のスタックの更新を試行: %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(traversal)s %(msg)s" msgstr "%(id)s %(traversal)s %(msg)s の ID でスタックの更新を試みます。" #, python-format msgid "Attempt to update a watch with id: %(id)s %(msg)s" msgstr "ID %(id)s の監視の更新を試行: %(msg)s" msgid "Attempt to use stored_context with no user_creds" msgstr "user_creds なしで stored_context を使用しようとしています" #, python-format msgid "Attribute %(attr)s for facade %(type)s missing in provider" msgstr "ファサード %(type)s の属性 %(attr)s がプロバイダーにありません" msgid "Audit status of this firewall policy." msgstr "このファイアウォールポリシーの監査状況。" msgid "Authentication Endpoint URI." msgstr "認証エンドポイント URI" msgid "Authentication hash algorithm for the ike policy." msgstr "IKE ポリシーの認証ハッシュアルゴリズム。" msgid "Authentication hash algorithm for the ipsec policy." msgstr "ipsec ポリシーの認証ハッシュアルゴリズム。" msgid "Authorization failed." msgstr "認証に失敗しました。" msgid "AutoScaling group ID to apply policy to." msgstr "ポリシーを適用するオートスケールグループ ID。" msgid "AutoScaling group name to apply policy to." msgstr "ポリシーを適用するオートスケールグループ名。" msgid "Availability Zone of the subnet." msgstr "サブネットのアベイラビリティーゾーン。" msgid "Availability zone in which you want the subnet." 
msgstr "サブネットを必要とするアベイラビリティーゾーン。" msgid "Availability zone to create servers in." msgstr "サーバーを作成するアベイラビリティーゾーン。" msgid "Availability zone to create volumes in." msgstr "ボリュームを作成するアベイラビリティーゾーン。" msgid "Availability zone to launch the instance in." msgstr "インスタンスを起動するアベイラビリティーゾーン。" msgid "Backend authentication failed" msgstr "バックエンドの認証が失敗しました" msgid "Binary" msgstr "バイナリー" msgid "Block device mappings for this server." msgstr "このサーバーのブロックデバイスマッピング。" msgid "Block device mappings to attach to instance." msgstr "インスタンスに接続するためのブロックデバイスマッピング。" msgid "Block device mappings v2 for this server." msgstr "このサーバーのブロックデバイスマッピング v2。" msgid "" "Boolean extra spec that used for filtering of backends by their capability " "to create share snapshots." msgstr "" "シェアのスナップショットを作成する機能がバックエンドのフィルタリングのために" "使用する追加のブーリアン仕様。" msgid "Boolean indicating if the volume can be booted or not." msgstr "ボリュームをブート可能かどうかを示すブール値。" msgid "Boolean indicating if the volume is encrypted or not." msgstr "ボリュームを暗号化するかどうかを示すブール値。" msgid "" "Boolean indicating whether allow the volume to be attached more than once." msgstr "2 度以上ボリュームを追加できるかを示すブール値。" msgid "" "Bus of the device: hypervisor driver chooses a suitable default if omitted." msgstr "" "デバイスのバス: 省略すると、ハイパーバイザードライバーによって適切なデフォル" "トが選択されます。" msgid "CIDR block notation for this subnet." msgstr "このサブネットの CIDR ブロック表記。" msgid "CIDR block to apply to subnet." msgstr "サブネットに適用する CIDR ブロック。" msgid "CIDR block to apply to the VPC." msgstr "VPC に適用する CIDR ブロック。" msgid "CIDR of subnet." msgstr "サブネットの CIDR。" msgid "CIDR to be associated with this metering rule." msgstr "この計測ルールに関連付ける CIDR。" #, python-format msgid "Can not specify property \"%s\" if the volume type is public." msgstr "" "ボリュームタイプが公開されていない場合、プロパティー \"%s\" を指定することは" "できません。" #, python-format msgid "Can not use %s property on Nova-network." msgstr "nova-network では %s プロパティーを使用できません。" #, python-format msgid "Can't find role %s" msgstr "ロール %s が見つかりません。" msgid "Can't get user token without password" msgstr "パスワードなしでユーザートークンを取得することはできません" msgid "Can't get user token, user not yet created" msgstr "ユーザートークンを取得できません。ユーザーはまだ作成されていません" msgid "Can't traverse attribute path" msgstr "属性パスを全探索できません" #, python-format msgid "Cancelling update when stack is %s" msgstr "スタックが %s である場合は、更新を取り消します" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "孤立 %(objtype)s オブジェクトで %(method)s を呼び出すことはできません" #, python-format msgid "Cannot check %s, stack not created" msgstr "%s を検査できません。スタックは作成されません" #, python-format msgid "Cannot define the following properties at the same time: %(props)s." msgstr "次のプロパティーは同時に定義できません: %(props)s。" #, python-format msgid "" "Cannot establish connection to Heat endpoint at region \"%(region)s\" due to " "\"%(exc)s\"" msgstr "" "\"%(exc)s\" が原因でリージョン \"%(region)s\" での heat エンドポイントへの接" "続を確立できません" msgid "" "Cannot get stack domain user token, no stack domain id configured, please " "fix your heat.conf" msgstr "" "スタックドメインユーザートークンを取得できません。スタックドメイン ID が設定" "されていません。heat.conf を修正してください。" msgid "Cannot migrate to lower schema version." 
msgstr "より低いスキーマのバージョンには移行できません。" #, python-format msgid "Cannot modify readonly field %(field)s" msgstr "読み取り専用フィールド %(field)s は変更できません" #, python-format msgid "Cannot resume %s, resource not found" msgstr "%s を再開できません。リソースが見つかりません" #, python-format msgid "Cannot resume %s, resource_id not set" msgstr "%s を再開できません。resource_id は設定されません" #, python-format msgid "Cannot resume %s, stack not created" msgstr "%s を再開できません。スタックは作成されません" #, python-format msgid "Cannot suspend %s, resource not found" msgstr "%s を中断できません。リソースが見つかりません" #, python-format msgid "Cannot suspend %s, resource_id not set" msgstr "%s を中断できません。resource_id が設定されていません" #, python-format msgid "Cannot suspend %s, stack not created" msgstr "%s を中断できません。スタックは作成されません" msgid "Captured stderr from the configuration execution." msgstr "設定実行から取り込まれた stderr。" msgid "Captured stdout from the configuration execution." msgstr "設定実行から取り込まれた stdout。" #, python-format msgid "Circular Dependency Found: %(cycle)s" msgstr "循環依存関係が見つかりました: %(cycle)s" msgid "Client entity to poll." msgstr "ポーリング対象のクライアントのエンティティー。" msgid "Client name and resource getter name must be specified." msgstr "クライアント名とリソースの getter 名を指定する必要があります。" msgid "Client to poll." msgstr "ポーリング対象のクライアント。" msgid "Cluster configs dictionary." msgstr "クラスター設定ディクショナリー。" msgid "Cluster information." msgstr "クラスターの情報。" msgid "Cluster metadata." msgstr "クラスターのメタデータ。" msgid "Cluster name." msgstr "クラスター名。" msgid "Cluster status." msgstr "クラスターの状態。" msgid "Comparison operator." msgstr "比較演算子。" #, python-format msgid "Concurrent transaction for %(action)s" msgstr "%(action)s のための同時実行トランザクション" msgid "Configuration of session persistence." msgstr "セッション永続性の設定。" msgid "" "Configuration script or manifest which specifies what actual configuration " "is performed." msgstr "どの実設定を実行するかを指定する設定スクリプトまたは設定マニフェスト。" msgid "Configure most important configs automatically." msgstr "自動的に最も重要なコンフィグを設定します。" #, python-format msgid "Confirm resize for server %s failed" msgstr " サーバー %s のサイズ変更の確定が失敗しました" #, python-format msgid "" "Conflicting merge strategy '%(strategy)s' for parameter '%(param)s' in file " "'%(env_file)s'." msgstr "" "ファイル '%(env_file)s' に指定されたパラメタ '%(param)s'に対して不正なマージ" "戦略 '%(strategy)s' が指定されています。" msgid "Connection info for this network gateway." msgstr "このネットワークゲートウェイの接続情報。" #, python-format msgid "Container '%(name)s' creation failed: %(code)s - %(reason)s" msgstr "コンテナー '%(name)s' の作成が失敗しました: %(code)s - %(reason)s" msgid "Container format of image." msgstr "イメージのコンテナー形式。" msgid "" "Content of part to attach, either inline or by referencing the ID of another " "software config resource." msgstr "" "インラインまたは別のソフトウェア設定リソースの ID の参照によって追加するパー" "ツのコンテンツ。" msgid "Context for this stack." msgstr "このスタックのコンテキスト。" msgid "Continue ? [y/N]" msgstr "続けますか?[y/N]" msgid "Control how the disk is partitioned when the server is created." msgstr "" "サーバー作成時にディスクをどのようにパーティション化するかを制御します。" msgid "Controls DPD protocol mode." msgstr "DPD プロトコルモードを制御します。" msgid "" "Convenience attribute to fetch the first assigned network address, or an " "empty string if nothing has been assigned at this time. Result may not be " "predictable if the server has addresses from more than one network." msgstr "" "最初に割り当てられたネットワークアドレス、または現時点で何も割り当てられてい" "ない場合は空の文字列を取り出すための便利属性。サーバーが複数のネットワークか" "らのアドレスを持っている場合、結果は予測できないものになる可能性があります。" msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure when signal_transport is set to " "TOKEN_SIGNAL. 
You can signal success by adding --data-binary '{\"status\": " "\"SUCCESS\"}' , or signal failure by adding --data-binary '{\"status\": " "\"FAILURE\"}'. This attribute is set to None for all other signal transports." msgstr "" "curl CLI コマンドのプレフィックスを提供する Convenience 属性。" "signal_transport が TOKEN_SIGNAL に設定されている場合、これはハンドルの完了ま" "たは失敗を送信するために使用できます。正常終了を送信するには --data-binary " "'{\"status\": \"SUCCESS\"}' を追加し、失敗を送信するには --data-binary " "'{\"status\": \"FAILURE\"}' を追加します。その他の信号の送信の場合は、この値" "を None に設定します。" msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure. You can signal success by " "adding --data-binary '{\"status\": \"SUCCESS\"}' , or signal failure by " "adding --data-binary '{\"status\": \"FAILURE\"}'." msgstr "" "curl CLI コマンドのプレフィックスを提供する Convenience 属性。これはハンドル" "の完了または失敗を送信するために使用できます。正常終了を送信するには --data-" "binary '{\"status\": \"SUCCESS\"}' を追加し、失敗を送信するには --data-" "binary '{\"status\": \"FAILURE\"}' を追加します。" msgid "Cooldown period, in seconds." msgstr "クールダウン期間 (秒)。" #, python-format msgid "Could not bind to %(bind_addr)s after trying for 30 seconds" msgstr "30秒間試行しましたが%(bind_addr)s にバインドできませんでした" #, python-format msgid "Could not confirm resize of server %s" msgstr "サーバー %s のサイズ変更を確定できませんでした" #, python-format msgid "Could not detach attachment %(att)s from server %(srv)s." msgstr "サーバー %(srv)s から接続 %(att)s を切断できませんでした。" #, python-format msgid "Could not fetch remote template \"%(name)s\": %(exc)s" msgstr "" "リモートのテンプレート \"%(name)s\" を取り出すことができませんでした: %(exc)s" #, python-format msgid "Could not fetch remote template '%(url)s': %(exc)s" msgstr "" "リモートテンプレート '%(url)s' を取り出すことができませんでした: %(exc)s" #, python-format msgid "Could not load %(name)s: %(error)s" msgstr "%(name)s をロードできませんでした: %(error)s" #, python-format msgid "Could not retrieve template: %s" msgstr "テンプレートを取得できませんでした: %s" msgid "Create volumes on the same physical port as an instance." msgstr "インスタンスと同じ物理ポート上でボリュームを作成します。" msgid "" "Credentials used for swift. Not required if sahara is configured to use " "proxy users and delegated trusts for access." msgstr "" "Swift のために使用する認証情報。sahara がアクセスのためにプロキシーユーザーと" "委任された信頼を使用する場合は、この情報は必要ありません。" msgid "Cron expression." msgstr "Cron 式。" msgid "Current share status." msgstr "現在のシェアの状態。" msgid "Custom LoadBalancer template can not be found" msgstr "カスタムロードバランサーテンプレートが見つかりません" msgid "DB instance restore point." msgstr "DB インスタンスの復元ポイント。" msgid "DNS Domain id or name." msgstr "DNS ドメインの ID または名前。" msgid "DNS IP address used inside tenant's network." msgstr "テナントのネットワーク内で使用されている DNS の IP アドレス。" msgid "DNS Record type." msgstr "DNS のレコードタイプ。" msgid "DNS domain serial." msgstr "DNS のドメインシリアル。" msgid "" "DNS record data, varies based on the type of record. For more details, " "please refer rfc 1035." msgstr "" "レコードのタイプによって異なる DNS レコードのデータ。詳細については、rfc " "1035 を参照してください。" msgid "" "DNS record priority. It is considered only for MX and SRV types, otherwise, " "it is ignored." msgstr "" "DNS レコードの優先順位。これは MX と SRV のタイプの場合のみ考慮され、その他の" "タイプの場合は無視されます。" #, python-format msgid "Data supplied was not valid: %(reason)s" msgstr "指定されたデータは無効でした: %(reason)s" #, python-format msgid "" "Database %(dbs)s specified for user does not exist in databases for resource " "%(name)s." msgstr "" "ユーザー用に指定されたデータベース %(dbs)s が、リソース %(name)s のデータベー" "スに存在しません。" msgid "Database volume size in GB." msgstr "データベースのボリュームの容量 (GB)。" #, python-format msgid "" "Databases property is required if users property is provided for resource %s." 
msgstr "" "リソース %s にユーザープロパティーを指定した場合、データベースのプロパティー" "が必要です。" #, python-format msgid "" "Datastore version %(dsversion)s for datastore type %(dstype)s is not valid. " "Allowed versions are %(allowed)s." msgstr "" "データストアタイプ %(dstype)s のデータストアバージョン %(dsversion)s は無効で" "す。許可されるバージョンは %(allowed)s です。" msgid "Datetime when a share was created." msgstr "シェアが作成された日時。" msgid "" "Dead Peer Detection protocol configuration for the ipsec site connection." msgstr "ipsec サイト接続の Dead Peer Detection プロトコル設定。" msgid "Dead engines are removed." msgstr "停止したheatエンジンは取り除かれました" msgid "Default TLS container reference to retrieve TLS information." msgstr "TLS 情報を抽出するデフォルトの TLS コンテナー参照。" #, python-format msgid "Default must be a comma-delimited list string: %s" msgstr "デフォルトはコンマ区切り一覧の文字列である必要があります: %s" msgid "Default name or UUID of the image used to boot Hadoop nodes." msgstr "Hadoop ノードのブートに使用されるイメージのデフォルト名または UUID。" msgid "Default region name used to get services endpoints." msgstr "サービスエンドポイントの取得に使用するデフォルトの名" msgid "Default settings for some of task attributes defined at workflow level." msgstr "ワークフローレベルで定義する一部のタスク属性のデフォルト設定。" msgid "Default value for the input if none is specified." msgstr "none が指定された場合の入力のデフォルト値。" msgid "" "Defines a delay in seconds that Mistral Engine should wait after a task has " "completed before starting next tasks defined in on-success, on-error or on-" "complete." msgstr "" "あるタスクを完了後に次のタスク (on-success、on-error、または on-complete のい" "ずれかが定義されたタスク) を開始する前に Mistral Engine が待機すべき遅延時間 " "(秒) を定義します。" msgid "" "Defines a delay in seconds that Mistral Engine should wait before starting a " "task." msgstr "" "タスクの開始前にミストラルエンジンが待機すべき遅延時間 (秒) を定義します。" msgid "Defines a pattern how task should be repeated in case of an error." msgstr "エラーの際にタスクを繰り返すパターンを定義します。" msgid "" "Defines a period of time in seconds after which a task will be failed " "automatically by engine if hasn't completed." msgstr "" "タスクが完了しない場合にエンジンがタスクを自動的に失敗させる時間 (秒) を定義" "します。" msgid "Defines if share type is accessible to the public." msgstr "シェアのタイプが一般ユーザーからアクセスできるか定義します。" msgid "Defines if shared filesystem is public or private." msgstr "" "共有ファイルがシステムをパブリックかプライベートのいずれかに設定します。" msgid "" "Defines the method in which the request body for signaling a workflow would " "be parsed. In case this property is set to True, the body would be parsed as " "a simple json where each key is a workflow input, in other cases body would " "be parsed expecting a specific json format with two keys: \"input\" and " "\"params\"." msgstr "" "ワークフローの送信のためのリクエスト本文が解析される方法を定義します。この値" "を True に設定した場合、本文はシンプルな JSON (各キーはワークフローの入力とな" "ります) として解析され、その他の値を設定した場合は本文は 2 つのキー(\"input" "\" および \"params\")を持つ特定の JSON 形式として解析されます。" msgid "" "Defines whether Mistral Engine should put the workflow on hold or not before " "starting a task." msgstr "" "タスクの開始前に Mistral Engine がワークフローを保留すべきかどうかを定義しま" "す。" msgid "Defines whether auto-assign security group to this Node Group template." msgstr "" "このノードグループテンプレートにセキュリティーグループを自動的に割り当てるか" "どうかを定義します。" #, python-format msgid "" "Defining more than one configuration for the same action in " "SoftwareComponent \"%s\" is not allowed." 
msgstr "" "SoftwareComponent \"%s\" において、同じアクションについて複数の設定を定義する" "ことは許可されていません。" msgid "Deleting in-progress snapshot" msgstr "作成中のスナップショットの削除" #, python-format msgid "Deleting non-empty container (%(id)s) when %(prop)s is False" msgstr "%(prop)s が False の場合に空でないコンテナー (%(id)s) を削除中" #, python-format msgid "Delimiter for %s must be string" msgstr "%s の区切り文字は文字列でなければなりません" msgid "" "Denotes that the deployment is in an error state if this output has a value." msgstr "" "この出力に値がある場合はデプロイメントがエラー状態であることを示します。" msgid "Deploy data available" msgstr "デプロイデータが使用可能" #, python-format msgid "Deployment exited with non-zero status code: %s" msgstr "デプロイメントがゼロ以外の状況コードで終了しました: %s" #, python-format msgid "Deployment to server failed: %s" msgstr "サーバーへのデプロイメントに失敗しました: %s" #, python-format msgid "Deployment with id %s not found" msgstr "ID %s のデプロイメントが見つかりません" msgid "Deprecated." msgstr "提供を終了しています。" msgid "" "Describe time constraints for the alarm. Only evaluate the alarm if the time " "at evaluation is within this time constraint. Start point(s) of the " "constraint are specified with a cron expression, whereas its duration is " "given in seconds." msgstr "" "アラームの時間制約の説明。評価の時間が時間制約に収まっている場合のみアラーム" "を評価します。制約の開始ポイントは con 式で指定され、制約の長さは秒数で指定さ" "れます。" msgid "Description for the alarm." msgstr "アラームの説明。" msgid "Description for the firewall policy." msgstr "ファイアウォールポリシーの説明。" msgid "Description for the firewall rule." msgstr "ファイアウォールルールの説明。" msgid "Description for the firewall." msgstr "ファイアウォールの説明。" msgid "Description for the ike policy." msgstr "IKE ポリシーの説明。" msgid "Description for the ipsec policy." msgstr "ipsec ポリシーの説明。" msgid "Description for the ipsec site connection." msgstr "ipsec サイト接続の説明。" msgid "Description for the time constraint." msgstr "時間制約の説明。" msgid "Description for the vpn service." msgstr "VPN サービスの説明。" msgid "Description for this interface." msgstr "このインターフェースの説明。" msgid "Description of domain." msgstr "ドメインの説明。" msgid "Description of keystone group." msgstr "keystone グループの説明。" msgid "Description of keystone project." msgstr "keystone プロジェクトの説明。" msgid "Description of keystone region." msgstr "keystone 領域の説明。" msgid "Description of keystone service." msgstr "keystone サービスの説明。" msgid "Description of keystone user." msgstr "keystone ユーザーの説明。" msgid "Description of record." msgstr "レコードの説明。" msgid "Description of the Node Group Template." msgstr "ノードグループテンプレートの説明。" msgid "Description of the Sahara Group Template." msgstr "Sahara グループテンプレートの説明。" msgid "Description of the alarm." msgstr "アラームの説明。" msgid "Description of the data source." msgstr "データソースの説明。" msgid "Description of the firewall policy." msgstr "ファイアウォールポリシーの説明。" msgid "Description of the firewall rule." msgstr "ファイアウォールルールの説明。" msgid "Description of the firewall." msgstr "ファイアウォールの説明。" msgid "Description of the image." msgstr "イメージの説明。" msgid "Description of the input." msgstr "入力の説明。" msgid "Description of the job binary." msgstr "ジョブバイナリーの説明。" msgid "Description of the metering label." msgstr "計測ラベルの説明。" msgid "Description of the output." msgstr "出力の説明。" msgid "Description of the pool." msgstr "プールの説明。" msgid "Description of the security group." msgstr "セキュリティーグループの説明。" msgid "Description of the vip." msgstr "VIP の説明。" msgid "Description of the volume type." msgstr "ボリュームタイプの説明。" msgid "Description of the volume." msgstr "ボリュームの説明" msgid "Description of this Load Balancer." msgstr "このロードバランサーの説明。" msgid "Description of this listener." msgstr "このリスナーの説明。" msgid "Description of this pool." 
msgstr "このプールの説明。" msgid "Desired IPs for this port." msgstr "このポートで必要な IP。" msgid "Desired capacity of the cluster." msgstr "クラスターに存在することが望ましいキャパシティー。" msgid "Desired initial number of instances." msgstr "インスタンスの初期必要数。" msgid "Desired initial number of resources in cluster." msgstr "クラスター内に当初存在することが好ましいリソース数。" msgid "Desired initial number of resources." msgstr "最初に存在することが好ましいリソース数。" msgid "Desired number of instances." msgstr "インスタンスの必要数。" msgid "DesiredCapacity must be between MinSize and MaxSize" msgstr "必要なキャパシティーは最小サイズと最大サイズの間でなければなりません" msgid "Destination IP address or CIDR." msgstr "宛先 IP アドレスまたは CIDR。" msgid "Destination ip_address for this firewall rule." msgstr "このファイアウォールルールの宛先 ip_address。" msgid "Destination port number or a range." msgstr "宛先ポート番号または範囲。" msgid "Destination port range for this firewall rule." msgstr "このファイアウォールルールの宛先ポート範囲。" msgid "Detailed information about resource." msgstr "リソースに関する詳細情報。" msgid "Device ID of this port." msgstr "このポートのデバイス ID。" msgid "Device info for this network gateway." msgstr "このネットワークゲートウェイのデバイス情報。" msgid "" "Device type: at the moment we can make distinction only between disk and " "cdrom." msgstr "デバイス種別: 現在、ディスクと CD-ROM のみを区別することができます。" msgid "" "Dict, which has expand properties for port. Used only if port property is " "not specified for creating port." msgstr "" "ポートに関する拡張プロパティーを持つディクショナリー。ポートの作成のために" "ポートプロパティーが指定されていない場合にのみ使用します。" msgid "Dictionary containing workflow tasks." msgstr "ワークフローのタスクを含むディクショナリー。" msgid "Dictionary of node configurations." msgstr "ノード設定のディクショナリー。" msgid "Dictionary of variables to publish to the workflow context." msgstr "ワークフローのコンテキストに提供される変数のディクショナリー。" msgid "Dictionary which contains input for workflow." msgstr "ワークフローへの入力を含むディクショナリー。" msgid "" "Dictionary-like section defining task policies that influence how Mistral " "Engine runs tasks. Must satisfy Mistral DSL v2." msgstr "" "Mistral Engine のタスクの実行に影響を及ぼすタスクポリシーを定義するディクショ" "ナリーのようなセクション。Mistral DSL v2 をサポートする必要があります。" msgid "DisableRollback and OnFailure may not be used together" msgstr "DisableRollback と OnFailure を一緒に使用できません" msgid "Disk format of image." msgstr "イメージのディスク形式。" msgid "Does not contain a valid AWS Access Key or certificate" msgstr "有効な AWS アクセスキーまたは証明書が含まれていません" msgid "Domain email." msgstr "ドメインの E メール。" msgid "Domain name." msgstr "ドメイン名。" #, python-format msgid "Duplicate names %s" msgstr "名前 %s が重複しています" msgid "Duplicate refs are not allowed." msgstr "重複する参照は許容されません。" msgid "Duration for the time constraint." msgstr "時間制約の長さ。" msgid "EIP address to associate with instance." msgstr "インスタンスに関連付ける EIP アドレス。" #, python-format msgid "Each %(object_name)s must contain a %(sub_section)s key." msgstr "" "各 %(object_name)s には %(sub_section)s キーが含まれていなければなりません。" msgid "Each Resource must contain a Type key." msgstr "各リソースには Type キーが含まれている必要があります。" msgid "Ebs is missing, this is required when specifying BlockDeviceMappings." msgstr "" "Ebs がありません。これは BlockDeviceMappings を指定する場合には必須です。" msgid "" "Egress rules are only allowed when Neutron is used and the 'VpcId' property " "is set." msgstr "" "送信ルールは、Neutron が使用されていて 'VpcId' プロパティーが設定されている場" "合にのみ許可されます。" #, python-format msgid "Either %(net)s or %(port)s must be provided." msgstr "%(net)s または %(port)s のいずれかを指定する必要があります。" msgid "Either 'EIP' or 'AllocationId' must be provided." msgstr "'EIP' または 'AllocationId' のいずれかを指定する必要があります。" msgid "Either 'InstanceId' or 'LaunchConfigurationName' must be provided." 
msgstr "" "'InstanceId' または 'LaunchConfigurationName' のいずれかを指定する必要があり" "ます。" #, python-format msgid "Either project or domain must be specified for role %s" msgstr "" "ロール %s に関して、プロジェクトまたはドメインのいずれかを指定する必要があり" "ます。" #, python-format msgid "Either volume_id or snapshot_id must be specified for device mapping %s" msgstr "" "volume_id または snapshot_id をデバイスマッピング %s に指定する必要があります" msgid "Email address of keystone user." msgstr "keystone ユーザーの E メールアドレス。" msgid "Enable the legacy OS::Heat::CWLiteAlarm resource." msgstr "既存の OS::Heat::CWLiteAlarm リソースを有効にします。" msgid "Enable the preview Stack Abandon feature." msgstr "スタック破棄のプレビュー機能を有効にします。" msgid "Enable the preview Stack Adopt feature." msgstr "スタック引き取り機能のプレビューを有効にします。" msgid "" "Enables Source NAT on the router gateway. NOTE: The default policy setting " "in Neutron restricts usage of this property to administrative users only." msgstr "" "ゲートウェイルーター上のソース NAT を有効化します。注意: Neutron のデフォルト" "ポリシー設定では、このプロパティーの使用は管理ユーザーのみに制限されます。" msgid "" "Enables engine with convergence architecture. All stacks with this option " "will be created using convergence engine." msgstr "" "convergence アーキテクチャーを持つエンジンを有効化します。このオプションを持" "つすべてのスタックは、convergence エンジンを使用して作成します。" msgid "Enables or disables read-only access mode of volume." msgstr "" "ボリュームに対する読み取り専用アクセスのモードを有効化または無効化します。" msgid "Encapsulation mode for the ipsec policy." msgstr "ipsec ポリシーのカプセル化モード。" msgid "Encountered an empty component." msgstr "空のコンポーネントが検出されました。" #, fuzzy msgid "" "Encrypt template parameters that were marked as hidden and also all the " "resource properties before storing them in database." msgstr "" "非表示に設定されたテンプレートパラメーターを暗号化するだけでなく、データベー" "スに保存する前にすべてのリソースプロパティーを暗号化します。" msgid "Encryption algorithm for the ike policy." msgstr "IKE ポリシーの暗号化アルゴリズム。" msgid "Encryption algorithm for the ipsec policy." msgstr "ipsec ポリシーの暗号化アルゴリズム。" msgid "End address for the allocation pool." msgstr "割り当てプールの終了アドレス。" #, python-format msgid "End resizing the group %(group)s" msgstr "グループ %(group)s のサイズ変更が終了しました" msgid "" "Endpoint/url which can be used for signalling handle when signal_transport " "is set to TOKEN_SIGNAL. None for all other signal transports." msgstr "" "signal_transport が TOKEN_SIGNAL に設定されている場合にハンドルを送信するため" "に使用できるエンドポイントまたは URL。その他の信号の送信の場合は存在しませ" "ん。" msgid "Endpoint/url which can be used for signalling handle." msgstr "ハンドルの送信に使用できるエンドポイントおよび URL。" msgid "Engine_Id" msgstr "Engine_Id" msgid "Error" msgstr "エラー" #, python-format msgid "Error authorizing action %s" msgstr "エラー許可アクション %s" #, python-format msgid "Error creating ec2 keypair for user %s" msgstr "ユーザー %s の ec2 キーペアの作成エラーです" msgid "" "Error during applying access rules to share \"{0}\". The root cause of the " "problem is the following: {1}." msgstr "" "シェア \"{0}\" にアクセスルールを適用する際にエラーが発生しました。この問題の" "根本原因は以下のとおりです: {1}" msgid "Error during creation of share \"{0}\"" msgstr "シェア \"{0}\" の作成中にエラーが発生しました 。" msgid "Error during deleting share \"{0}\"." msgstr "シェア \"{0}\" の削除中にエラーが発生しました。" #, python-format msgid "Error in %(resource)s output %(attribute)s: %(message)s" msgstr "%(resource)s の出力 %(attribute)s にエラーが発生しました: %(message)s" #, python-format msgid "Error validating value '%(value)s'" msgstr "値 '%(value)s' の検証エラー" #, python-format msgid "Error validating value '%(value)s': %(message)s" msgstr "値 '%(value)s' の検証エラー: %(message)s" msgid "Ethertype of the traffic." msgstr "トラフィックのイーサネットタイプ。" msgid "Exclude state for cidr." 
msgstr "CIDR の除外状態。" #, python-format msgid "Expected 1 external network, found %d" msgstr "1 つの外部ネットワークが必要ですが、%d が見つかりました" msgid "Export locations of share." msgstr "シェアのロケーションをエクスポートします。" msgid "Expression of the alarm to evaluate." msgstr "評価対象のアラームの式。" msgid "External fixed IP address." msgstr "外部 Fixed IP のアドレス。" msgid "External fixed IP addresses for the gateway." msgstr "ゲートウェイの外部 Fixed IP のアドレス。" msgid "External network gateway configuration for a router." msgstr "ルーターの外部ネットワークゲートウェイの設定。" msgid "" "Extra parameters to include in the \"floatingip\" object in the creation " "request. Parameters are often specific to installed hardware or extensions." msgstr "" "作成要求で \"floatingip\" オブジェクトに組み込む追加パラメーター。パラメー" "ターは多くの場合、取り付けられたハードウェアまたは拡張機能に固有です。" msgid "Extra parameters to include in the creation request." msgstr "作成要求で組み込む追加パラメーター。" msgid "Extra parameters to include in the request." msgstr "リクエストに含める必要のある追加パラメーター。" msgid "" "Extra parameters to include in the request. Parameters are often specific to " "installed hardware or extensions." msgstr "" "リクエストに含める必要のある追加のパラメーター。インストール済みのハードウェ" "アや強化機能ごとにパラメーターが存在することがよくあります。" msgid "Extra specs key-value pairs defined for share type." msgstr "シェアのタイプに定義したキーと値のペアの追加の仕様。" #, python-format msgid "Failed to attach interface (%(port)s) to server (%(server)s)" msgstr "" "サーバー (%(server)s) にインターフェース (%(port)s) を接続できませんでした。" #, python-format msgid "Failed to attach volume %(vol)s to server %(srv)s - %(err)s" msgstr "サーバー %(srv)s へのボリューム %(vol)s の追加に失敗しました: %(err)s" #, python-format msgid "Failed to create Bay '%(name)s' - %(reason)s" msgstr "ベイ '%(name)s' の作成に失敗しました: %(reason)s" #, python-format msgid "Failed to detach interface (%(port)s) from server (%(server)s)" msgstr "" "サーバー (%(server)s) からインターフェース (%(port)s) を接続解除できませんで" "した。" #, python-format msgid "Failed to execute %(action)s for %(cluster)s: %(reason)s" msgstr "%(cluster)s に対する %(action)s の実行に失敗しました: %(reason)s" #, python-format msgid "Failed to extend volume %(vol)s - %(err)s" msgstr "ボリューム %(vol)s の拡張に失敗しました - %(err)s" #, python-format msgid "Failed to fetch template: %s" msgstr "テンプレートを取り出すことができませんでした: %s" #, python-format msgid "Failed to find instance %s" msgstr "インスタンス %s が見つかりませんでした" #, python-format msgid "Failed to find server %s" msgstr "サーバー %s が見つかりませんでした" #, python-format msgid "Failed to parse JSON data: %s" msgstr "JSON データの解析に失敗しました: %s" #, python-format msgid "Failed to restore volume %(vol)s from backup %(backup)s - %(err)s" msgstr "" "バックアップ %(backup)s からボリューム %(vol)s のリストアに失敗しました: " "%(err)s" msgid "Failed to retrieve template" msgstr "テンプレートの取得に失敗しました" #, python-format msgid "Failed to retrieve template data: %s" msgstr "テンプレートデータの取得に失敗しました: %s" #, python-format msgid "Failed to retrieve template from %s" msgstr "%s からテンプレートの取得に失敗しました。" #, python-format msgid "Failed to retrieve template: %s" msgstr "テンプレート %s の取得に失敗しました。" #, python-format msgid "" "Failed to send message to stack (%(stack_name)s) on other engine " "(%(engine_id)s)" msgstr "" "他のエンジン (%(engine_id)s) 上のスタック (%(stack_name)s) にメッセージを送信" "できませんでした。" #, python-format msgid "Failed to stop stack (%(stack_name)s) on other engine (%(engine_id)s)" msgstr "" "他のエンジン (%(engine_id)s) 上のスタック (%(stack_name)s) を停止できませんで" "した。" #, python-format msgid "Failed to update Bay '%(name)s' - %(reason)s" msgstr "ベイ '%(name)s' の更新に失敗しました: %(reason)s" msgid "Failed to update, can not found port info." 
msgstr "更新できませんでした。ポート情報が見つかりません。" #, python-format msgid "" "Failed validating stack template using Heat endpoint at region \"%(region)s" "\" due to \"%(exc)s\"" msgstr "" "\"%(exc)s\" が原因でリージョン \"%(region)s\" での heat エンドポイントを使用" "したスタックテンプレートの検証に失敗しました" msgid "Fake attribute !a." msgstr "フェイク属性 !a。" msgid "Fake attribute a." msgstr "フェイク属性 a。" msgid "Fake property !a." msgstr "フェイクプロパティー !a。" msgid "Fake property !c." msgstr "フェイクプロパティー !c。" msgid "Fake property a." msgstr "フェイクプロパティー a。" msgid "Fake property c." msgstr "フェイクプロパティー c。" msgid "Fake property ca." msgstr "フェイクプロパティー ca。" msgid "" "False to trigger actions when the threshold is reached AND the alarm's state " "has changed. By default, actions are called each time the threshold is " "reached." msgstr "" "しきい値に達してアラームの状態が変わったときにアクションを起動する場合は" "FALSE。デフォルトでは、しきい値に達するたびにアクションが呼び出されます。" #, python-format msgid "Field %(field)s of %(objname)s is not an instance of Field" msgstr "" "%(objname)s のフィールド %(field)s はフィールドのインスタンスではありません" msgid "" "Fixed IP address to specify for the port created on the requested network." msgstr "" "要求されたネットワークで作成されたポートに対して指定する固定 IP アドレス。" msgid "Fixed IP addresses." msgstr "Fixed IP のアドレス。" msgid "Fixed IPv4 address for this NIC." msgstr "この NIC の固定 IPv4 アドレス。" msgid "Flag indicating if traffic to or from instance is validated." msgstr "インスタンスとのトラフィックが妥当性検査されるかどうかを示すフラグ。" msgid "" "Flag to enable/disable port security on the network. It provides the default " "value for the attribute of the ports created on this network." msgstr "" "ネットワーク上のポートセキュリティーを有効化または無効化するフラグ。このネッ" "トワーク上で作成されたポートの属性のデフォルト値を提供します。" msgid "" "Flag to enable/disable port security on the port. When disable this " "feature(set it to False), there will be no packages filtering, like security-" "group and address-pairs." msgstr "" "ポート上でポートセキュリティーを有効化または無効化するためのフラグ。この機能" "を無効化する (False を設定) と、セキュリティーグループやアドレスペアなどの" "パッケージのフィルタリングは行われません。" msgid "Flavor of the instance." msgstr "インスタンスのフレーバー。" msgid "Friendly name of the port." msgstr "ポートのフレンドリー名。" msgid "Friendly name of the router." msgstr "ルーターのフレンドリー名。" msgid "Friendly name of the subnet." msgstr "サブネットのフレンドリー名。" #, python-format msgid "Function \"%s\" must have arguments" msgstr "関数 \"%s\" には引数が必要です" #, python-format msgid "Function \"%s\" usage: [\"\", \"\"]" msgstr "関数 \"%s\" の使用状況: [\"\", \"\"]" #, python-format msgid "Gateway IP address \"%(gateway)s\" is in invalid format." msgstr "ゲートウェイの IP アドレス \"%(gateway)s\" の形式が無効です。" msgid "Gateway network for the router." msgstr "ルーターのゲートウェイネットワーク。" msgid "Generic HeatAPIException, please use specific subclasses!" msgstr "汎用 HeatAPIException、具体的なサブクラスを使用してください。" msgid "Glance image ID or name." msgstr "glance イメージの ID または名前。" msgid "Governs permissions set in manila for the cluster ips." msgstr "クラスターの ips のために manila で設定する権限を管理します。" msgid "Granularity to use for age argument, defaults to days." msgstr "存続期間引数に使用する単位、デフォルトは日数。" msgid "Hadoop cluster name." msgstr "Hadoop クラスター名。" #, python-format msgid "Header X-Auth-Url \"%s\" not an allowed endpoint" msgstr "ヘッダー X-Auth-Url \"%s\" は、許可されたエンドポイントではありません" msgid "Health probe timeout, in seconds." msgstr "ヘルスプローブタイムアウト (秒)。" msgid "" "Heat build revision. If you would prefer to manage your build revision " "separately, you can move this section to a different file and add it as " "another config option." msgstr "" "heat ビルドリビジョン。ビルドリビジョンを個別に管理したい場合は、このセクショ" "ンを別のファイルに移動し、別の設定オプションとして追加することができます。" msgid "Host" msgstr "ホスト" msgid "Hostname" msgstr "ホスト名" msgid "Hostname of the instance." 
msgstr "インスタンスのホスト名。" msgid "How long to preserve deleted data." msgstr "削除されたデータを保存する期間。" msgid "" "How the client will signal the wait condition. CFN_SIGNAL will allow an HTTP " "POST to a CFN keypair signed URL. TEMP_URL_SIGNAL will create a Swift " "TempURL to be signalled via HTTP PUT. HEAT_SIGNAL will allow calls to the " "Heat API resource-signal using the provided keystone credentials. " "ZAQAR_SIGNAL will create a dedicated zaqar queue to be signalled using the " "provided keystone credentials. TOKEN_SIGNAL will allow and HTTP POST to a " "Heat API endpoint with the provided keystone token. NO_SIGNAL will result in " "the resource going to a signalled state without waiting for any signal." msgstr "" "クライアントが待機条件を送信する方法。CFN_SIGNAL を設定すると、CFN キーペアの" "署名済み URL に HTTP POST を実行できます。 TEMP_URL_SIGNAL を設定すると、 " "HTTP PUT 経由で送信可能な Swift TempURL を作成できます。HEAT_SIGNAL を設定す" "ると、提供される keystone の認証情報を使用してHeat API のリソース信号を呼び出" "すことができます。ZAQAR_SIGNAL を設定すると、提供される keystone の認証情報を" "使用して送信される専用の zaqar キューを作成できます。TOKEN_SIGNAL を設定する" "と、提供される keystone の認証情報を使用して Heat API のエンドポイントに " "HTTP POST を実行できます。NO_SIGNAL を設定すると、リソースは信号を待機せずに" "送信済み状態に達します。" msgid "" "How the server should receive the metadata required for software " "configuration. POLL_SERVER_CFN will allow calls to the cfn API action " "DescribeStackResource authenticated with the provided keypair. " "POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the " "provided keystone credentials. POLL_TEMP_URL will create and populate a " "Swift TempURL with metadata for polling. ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "サーバーがソフトウェアの構成に必要なメタデータを受信する方法。" "POLL_SERVER_CFN を設定すると、提供されるキーペアで認証された cfn API アクショ" "ンである DescribeStackResource を呼び出すことができます。POLL_SERVER_HEATを設" "定すると、提供される keystone の認証情報を使用して Heat API の resource-show " "を呼び出すことができます。POLL_TEMP_URL を設定すると、Swift TempURL を作成" "し、ポーリングのためのメタデータをロードすることができます。ZAQAR_MESSAGE は" "専用の zaqar キューを作成し、ポーリング用のメタデータを提供します。" msgid "How the server should signal to heat with the deployment output values." msgstr "サーバーが heat にデプロイメント出力値をシグナル通知する方法。" msgid "" "How the server should signal to heat with the deployment output values. " "CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL. " "TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT. " "HEAT_SIGNAL will allow calls to the Heat API resource-signal using the " "provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar " "queue to be signaled using the provided keystone credentials. NO_SIGNAL will " "result in the resource going to the COMPLETE state without waiting for any " "signal." msgstr "" "サーバーが実装環境の出力値 によって heatに信号を送信する方法。CFN_SIGNAL を設" "定すると、CFN キーペアの署名済み URL に HTTP POST を実行できます。 " "TEMP_URL_SIGNAL を設定すると、 HTTP PUT 経由で送信可能な Swift TempURL を作" "成できます。HEAT_SIGNAL を設定すると、提供される keystone の認証情報を使用し" "てHeat API のリソース信号を呼び出すことができます。ZAQAR_SIGNAL を設定する" "と、提供される keystone の認証情報を使用して送信される専用の zaqar キューを作" "成できます。NO_SIGNAL を設定すると、リソースは信号を待機せずに COMPLETE 状態" "に達します。" msgid "" "How the user_data should be formatted for the server. For HEAT_CFNTOOLS, the " "user_data is bundled as part of the heat-cfntools cloud-init boot " "configuration data. For RAW the user_data is passed to Nova unmodified. For " "SOFTWARE_CONFIG user_data is bundled as part of the software config data, " "and metadata is derived from any associated SoftwareDeployment resources." 
msgstr "" "user_data のサーバー用のフォーマット方法。heat_CFNTOOLS の場合、user_data は " "heat-cfntools cloud-init ブート設定データの一部として組み込まれます。RAW の場" "合、user_data は未変更のまま Nova に渡されます。SOFTWARE_CONFIG の場合、" "user_data はソフトウェア設定データの一部として組み込まれ、関連付けられたすべ" "ての SoftwareDeployment リソースからメタデータが派生します。" msgid "Human readable name for the secret." msgstr "人間が読むことができる秘密の名前。" msgid "Human-readable name for the container." msgstr "人間が読むことができるコンテナー名。" msgid "" "ID list of the L3 agent. User can specify multi-agents for highly available " "router. NOTE: The default policy setting in Neutron restricts usage of this " "property to administrative users only." msgstr "" "L3 エージェントの ID リスト。ユーザーは高可用性ルーターに対してマルチエージェ" "ントを指定できます。注意: Neutron のデフォルトのポリシー設定では、このプロパ" "ティーの使用は管理ユーザーのみに制限されます。" msgid "ID of an existing port to associate with this server." msgstr "このサーバーに関連付ける既存ポートの ID。" msgid "" "ID of an existing port with at least one IP address to associate with this " "floating IP." msgstr "" "この Floating IP に関連付ける、少なくとも 1 つの IP アドレスを持つ既存のポー" "トの ID。" msgid "ID of network to create a port on." msgstr "ポートの作成場所となるネットワークの ID。" msgid "ID of project for API authentication" msgstr "API 認証用プロジェクトの ID" msgid "ID of queue to use for signaling output values" msgstr "出力値の信号を送信するために使用するキューの ID" msgid "" "ID of resource to apply configuration to. Normally this should be a Nova " "server ID." msgstr "" "設定を適用する対象のリソースのID。通常これは Nova サーバーの ID である必要が" "あります。" msgid "" "ID of server (VM, etc...) on host that is used for exporting network file-" "system." msgstr "" "ネットワークのファイルシステムのエクスポートのために使用されるホスト上のサー" "バー (VM など) のID。" msgid "ID of signal to use for signaling output values" msgstr "出力値のシグナル通知に使用するシグナルの ID" msgid "" "ID of software configuration resource to execute when applying to the server." msgstr "サーバーへの適用時に実行するソフトウェア設定リソースの ID。" msgid "ID of the Cluster Template used for Node Groups and configurations." msgstr "ノードグループおよび設定に使用されるクラスターテンプレートの ID。" msgid "ID of the InternetGateway." msgstr "インターネットゲートウェイの ID。" msgid "" "ID of the L3 agent. NOTE: The default policy setting in Neutron restricts " "usage of this property to administrative users only." msgstr "" "L3 エージェントの ID。注意: Neutron でのデフォルトポリシー設定により、管理" "ユーザーのみに対するこのプロパティーの使用が制限されます。" msgid "ID of the Node Group Template." msgstr "ノードグループテンプレートの ID。" msgid "ID of the VPNGateway to attach to the VPC." msgstr "VPC に接続する VPN ゲートウェイ ID。" msgid "ID of the default image to use for the template." msgstr "テンプレートに使用するデフォルトイメージの ID。" msgid "ID of the default pool this listener is associated to." msgstr "このリスナーが関連付けられているデフォルトプールの ID。" msgid "ID of the floating IP to assign to the server." msgstr "サーバーに割り当てる Floating IP の ID。" msgid "ID of the floating IP to associate." msgstr "関連付ける必要のある Floating IP の ID。" msgid "ID of the health monitor associated with this pool." msgstr "このプールと関連付けられているヘルスモニターの ID。" msgid "ID of the image to use for the template." msgstr "テンプレートに使用するイメージの ID。" msgid "ID of the load balancer this listener is associated to." msgstr "このリスナーが関連付けられているロードバランサーの ID。" msgid "ID of the network in which this IP is allocated." msgstr "この IP が割り振られているネットワークの ID。" msgid "ID of the port associated with this IP." msgstr "この IP に関連付けられているポートの ID。" msgid "ID of the queue." msgstr "キューの ID。" msgid "ID of the router used as gateway, set when associated with a port." msgstr "" "ゲートウェイとして使用するルーターの ID。ポートに関連付けられている場合に設定" "します。" msgid "ID of the router." 
msgstr "ルーターの ID。" msgid "ID of the server being deployed to" msgstr "デプロイ先サーバーの ID" msgid "ID of the stack this deployment belongs to" msgstr "このデプロイメントが属するスタックの ID" msgid "ID of the tenant to which the RBAC policy will be enforced." msgstr "RBAC ポリシーが適用されるテナントの ID。" msgid "ID of the tenant who owns the health monitor." msgstr "ヘルスモニターを所有するテナントの ID。" msgid "ID or name of the QoS policy." msgstr "QoS ポリシーの ID または名前。" msgid "ID or name of the RBAC object." msgstr "RBAC オブジェクトの名前または ID。" msgid "ID or name of the external network for the gateway." msgstr "ゲートウェイの外部ネットワークの ID または名前。" msgid "ID or name of the image to register." msgstr "登録対象のイメージの ID または名前。" msgid "ID or name of the load balancer with which listener is associated." msgstr "リスナーが関連付けられているロードバランサーの ID または名前。" msgid "ID or name of the load balancing pool." msgstr "ロードバランシングプールの ID または名前。" msgid "" "ID that AWS assigns to represent the allocation of the address for use with " "Amazon VPC. Returned only for VPC elastic IP addresses." msgstr "" "Amazon VPC で使用するためのアドレスの割り当てを表す AWS の ID。VPC elastic " "IP アドレスに対してのみ返されます。" msgid "IP address and port of the pool." msgstr "プールの IP アドレスとポート。" msgid "IP address desired in the subnet for this port." msgstr "このポートのサブネットで必要な IP アドレス。" msgid "IP address for the VIP." msgstr "VIP の IP アドレス。" msgid "IP address of the associated port, if specified." msgstr "関連付けられているポートの IP アドレス (指定されている場合)。" msgid "" "IP address of the floating IP. NOTE: The default policy setting in Neutron " "restricts usage of this property to administrative users only." msgstr "" "Floating IP の IP アドレス。注意: Neutron のデフォルトのポリシー設定では、こ" "のプロパティーを使用できるのは管理者に限られます。" msgid "IP address of the pool member on the pool network." msgstr "プールネットワーク上のプールメンバーの IP アドレス。" msgid "IP address of the pool member." msgstr "プールのメンバーの IP アドレス。" msgid "IP address of the vip." msgstr "VIP の IP アドレス。" msgid "IP address to allow through this port." msgstr "このポートを経由することが許可されている IP アドレス。" msgid "IP address to use if the port has multiple addresses." msgstr "ポートに複数のアドレスがある場合に使用する IP アドレス。" msgid "" "IP or other address information about guest that allowed to access to Share." msgstr "シェアにアクセス可能なゲストのIP またはその他のアドレス情報。" msgid "IPv6 RA (Router Advertisement) mode." msgstr "IPv6 の RA (ルーター広告) モード。" msgid "IPv6 address mode." msgstr "IPv6 アドレスモード。" msgid "Id of a resource." msgstr "リソースの ID。" msgid "Id of the manila share." msgstr "manila シェアの ID。" msgid "Id of the tenant owning the firewall policy." msgstr "ファイアウォールポリシーを所有するテナントの ID。" msgid "Id of the tenant owning the firewall." msgstr "ファイアウォールを所有するテナントの ID。" msgid "Identifier of the source instance to replicate." msgstr "複製すべきソースインスタンスの識別子。" #, python-format msgid "" "If \"%(size)s\" is provided, only one of \"%(image)s\", \"%(image_ref)s\", " "\"%(source_vol)s\", \"%(snapshot_id)s\" can be specified, but currently " "specified options: %(exclusive_options)s." msgstr "" "\"%(size)s\" が提供される場合、\"%(image)s\"、\"%(image_ref)s" "\"、\"%(source_vol)s\"、\"%(snapshot_id)s\" のうち 1 つのみを指定することがで" "きるものの、現在指定されているオプションは %(exclusive_options)s です。" #, fuzzy msgid "If False, closes the client socket connection explicitly." msgstr "False の場合、明示的にクライアントのソケット接続を終了します。" msgid "" "If True, delete any objects in the container when the container is deleted. " "Otherwise, deleting a non-empty container will result in an error." msgstr "" "True の場合、コンテナーの削除時にコンテナー内のオブジェクトをすべて削除してく" "ださい。そうでないと、空でないコンテナーの削除によりエラーが発生します。" msgid "If True, enable config drive on the server." 
msgstr "True の場合、サーバー上で設定ドライブを有効にします。" msgid "" "If configured, it allows to run action or workflow associated with a task " "multiple times on a provided list of items." msgstr "" "設定すると、提供される一連の項目に対してタスクと関連付けられたアクションまた" "はワークフローを何度でも実行できます。" msgid "If set, then the server's certificate will not be verified." msgstr "設定すると、サーバーの証明書は検証されません。" msgid "If specified, the backup to create the volume from." msgstr "指定した場合、ボリュームの作成元となるバックアップ。" msgid "If specified, the backup used as the source to create the volume." msgstr "" "指定した場合、バックアップはボリュームを作成するためのソースとして使用されま" "す。" msgid "If specified, the name or ID of the image to create the volume from." msgstr "指定した場合、ボリュームの作成元となるイメージの名前または ID。" msgid "If specified, the snapshot to create the volume from." msgstr "指定した場合、ボリュームの作成元となるスナップショット。" msgid "If specified, the type of volume to use, mapping to a specific backend." msgstr "" "指定した場合、特定のバックエンドにマッピングする、使用するボリュームのタイ" "プ。" msgid "If specified, the volume to use as source." msgstr "指定した場合、ソースとして使用するボリューム。" msgid "" "If the region is hierarchically a child of another region, set this " "parameter to the ID of the parent region." msgstr "" "この領域が階層的に別の領域の子である場合、このパラメーターを親領域のIDに設定" "してください。" msgid "" "If true, the resources in the chain will be created concurrently. If false " "or omitted, each resource will be treated as having a dependency on the " "previous resource in the list." msgstr "" "True の場合は、チェーン内の複数のリソースを同時に作成します。False の場合やこ" "の値を設定しない場合は、 各リソースはリストに含まれている過去のリソースと従属" "関係を持つものとして処理されます。" msgid "If without InstanceId, ImageId and InstanceType are required." msgstr "InstanceId がない場合は ImageId および InstanceType が必要です。" #, python-format msgid "Illegal prefix bounds: %(key1)s=%(value1)s, %(key2)s=%(value2)s." msgstr "不正なプレフィックス境界: %(key1)s=%(value1)s、%(key2)s=%(value2)s" #, python-format msgid "" "Image %(image)s requires %(imram)s minimum ram. Flavor %(flavor)s has only " "%(flram)s." msgstr "" "イメージ %(image)s には %(imram)s 以上の RAM が必要です。フレーバー " "%(flavor)s には %(flram)s しかありません。" #, python-format msgid "" "Image %(image)s requires %(imsz)s GB minimum disk space. Flavor %(flavor)s " "has only %(flsz)s GB." msgstr "" "イメージ %(image)s には %(imsz)s GB 以上のディスクスペースが必要です。フレー" "バー %(flavor)s には %(flsz)s GB しかありません。" #, python-format msgid "Image status is required to be %(cstatus)s not %(wstatus)s." msgstr "" "イメージのステータスは、%(wstatus)s ではなく %(cstatus)s である必要がありま" "す。" msgid "Incompatible parameters were used together" msgstr "同時に指定できないパラメーターが使用されました" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be one of: %(allowed)s" msgstr "" "\"%(fn_name)s\" に対する引数が正しくありません。次のいずれかでなければなりま" "せん: %(allowed)s" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be: %(example)s" msgstr "" "\"%(fn_name)s\" に対する引数が正しくありません。正しい引数: %(example)s" msgid "Incorrect arguments: Items to merge must be maps." msgstr "不正確な引数: マージする項目はマップでなければなりません。" #, python-format msgid "" "Incorrect index to \"%(fn_name)s\" should be between 0 and %(max_index)s" msgstr "" "\"%(fn_name)s\" に対するインデックスが正しくありません。0 と %(max_index)s の" "間の値である必要があります。" #, python-format msgid "Incorrect index to \"%(fn_name)s\" should be: %(example)s" msgstr "" "\"%(fn_name)s\" に対するインデックスが正しくありません。%(example)s である必" "要があります。" #, python-format msgid "Index to \"%s\" must be a string" msgstr "\"%s\" に対する索引は文字列でなければなりません" #, python-format msgid "Index to \"%s\" must be an integer" msgstr "\"%s\" に対する索引は整数でなければなりません" msgid "" "Indicate whether the volume should be deleted when the instance is " "terminated." 
msgstr "インスタンスの終了時にボリュームを削除するかどうかを指示します。" msgid "" "Indicate whether the volume should be deleted when the server is terminated." msgstr "サーバーの終了時にボリュームを削除するかどうかを指示します。" msgid "Indicates remote IP prefix to be associated with this metering rule." msgstr "この計測ルールに関連付けるリモート IP プレフィックスを示します。" msgid "" "Indicates whether or not to create a distributed router. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only. This property can not be used in conjunction with the L3 agent " "ID." msgstr "" "分散ルーターを作成するかどうかを指示します。注意: Neutron のデフォルトのポリ" "シー設定では、このプロパティーの使用は管理者ユーザーのみに制限されます。この" "プロパティーは、L3 エージェント ID と共に使用することはできません。" msgid "" "Indicates whether or not to create a highly available router. NOTE: The " "default policy setting in Neutron restricts usage of this property to " "administrative users only. And now neutron do not support distributed and ha " "at the same time." msgstr "" "高可用性ルーターを作成するかどうかを指示します。注意: Neutron のデフォルトの" "ポリシー設定では、このプロパティーの使用は管理ユーザーのみに制限されます。現" "在、Neutron は DVR と HA を同時にサポートしません。" msgid "Indicates whether this firewall rule is enabled or not." msgstr "このファイアウォールルールを有効にするかどうかを示します。" msgid "Information used to configure the bucket as a static website." msgstr "静的 Web サイトとしてバケットを設定するために使用する情報。" msgid "Initiator state in lowercase for the ipsec site connection." msgstr "ipsec サイト接続のイニシエーター状態 (小文字)。" #, python-format msgid "Input in signal data must be a map, find a %s" msgstr "信号データ内の入力はマップである必要があります。%s が見つかりました。" msgid "Input values for the workflow." msgstr "ワークフローの入力値。" msgid "Input values to apply to the software configuration on this server." msgstr "このサーバー上のソフトウェア設定に適用する入力値。" msgid "Instance ID to associate with EIP specified by EIP property." msgstr "EIP プロパティーで指定された EIP に関連付けるインスタンス ID。" msgid "Instance ID to associate with EIP." msgstr "EIP に関連付けるインスタンス ID。" msgid "Instance connection to CFN/CW API validate certs if SSL is used." msgstr "SSL が使用されている場合の CFN/CW API 証明書検証へのインスタンス接続。" msgid "Instance connection to CFN/CW API via https." msgstr "HTTPS 経由での CFN/CW API へのインスタンス接続。" #, python-format msgid "Instance is not ACTIVE (was: %s)" msgstr "インスタンスが ACTIVE ではありません (%s)" #, python-format msgid "" "Instance metadata must not contain greater than %s entries. This is the " "maximum number allowed by your service provider" msgstr "" "インスタンスメタデータには %s を超える項目を含めないでください。これはサービ" "スプロバイダーで許可された最大数です" msgid "Interface type of keystone service endpoint." msgstr "keystone サービスのエンドポイントのインターフェースタイプ。" msgid "Internet protocol version." 
msgstr "インターネットプロトコルバージョン。" #, python-format msgid "Invalid %s, expected a mapping" msgstr "%s は無効です。マッピングが必要です" #, python-format msgid "Invalid CRON expression: %s" msgstr "無効な CRON 式: %s" #, python-format msgid "Invalid Parameter type \"%s\"" msgstr "パラメーター形式 \"%s\" は無効です" #, python-format msgid "Invalid Property %s" msgstr "無効なプロパティー %s" msgid "Invalid Stack address" msgstr "無効なスタックのアドレス" msgid "Invalid Template URL" msgstr "無効なテンプレート URL" #, python-format msgid "Invalid URL scheme %s" msgstr "URL スキーム %s は無効です" #, python-format msgid "Invalid UUID version (%d)" msgstr "UUID バージョン (%d) は無効です" #, python-format msgid "Invalid action %s" msgstr "アクション %s は無効です" #, python-format msgid "Invalid action %s specified" msgstr "無効なアクション %s が指定されました" #, python-format msgid "Invalid adopt data: %s" msgstr "無効な引き取りデータ: %s" #, python-format msgid "Invalid cloud_backend setting in heat.conf detected - %s" msgstr "heat.conf で cloud_backend の無効な設定が見つかりました: %s" #, python-format msgid "Invalid codes in ignore_errors : %s" msgstr "ignore_errors に含まれる無効なコード : %s" #, python-format msgid "Invalid content type %(content_type)s" msgstr "コンテンツタイプ %(content_type)s が無効です" #, python-format msgid "Invalid default %(default)s (%(exc)s)" msgstr "無効なデフォルト %(default)s (%(exc)s)" #, python-format msgid "Invalid deletion policy \"%s\"" msgstr "無効な削除ポリシー \"%s\"" #, python-format msgid "" "Invalid dependency with external %(resource_type)s resource: %(external_id)s" msgstr "" "外部リソース %(resource_type)s に対して不正な依存関係があります: " "%(external_id)s" #, fuzzy, python-format msgid "Invalid filter parameters %s" msgstr "無効なフィルターパラメーター %s" #, python-format msgid "Invalid hook type \"%(hook)s\" for %(resource)s" msgstr "%(resource)s の無効なフックタイプ \"%(hook)s\"" #, python-format msgid "" "Invalid hook type \"%(value)s\" for resource breakpoint, acceptable hook " "types are: %(types)s" msgstr "" "リソースのブレークポイントのフックタイプ \"%(value)s\" が無効です。許容される" "フックタイプ: %(types)s" #, python-format msgid "Invalid key %s" msgstr "キー %s は無効です" #, python-format msgid "Invalid key '%(key)s' for %(entity)s" msgstr "%(entity)s の無効キー '%(key)s'" #, python-format msgid "Invalid keys in resource mark unhealthy %s" msgstr "不適切 %s とマークされたリソースの無効なキー" #, python-format msgid "Invalid merge strategy '%(strategy)s' for parameter '%(param)s'." msgstr "" "パラメタ '%(param)s'に対して不正なマージ戦略 '%(strategy)s' が指定されていま" "す。" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "ディスクとコンテナーの形式が無効な形で混在しています。ディスクまたはコンテ" "ナーの形式を 'aki'、'ari'、または 'ami' のいずれかに設定するときは、コンテ" "ナーとディスクの形式が一致していなければなりません。" #, python-format msgid "Invalid parameter constraints for parameter %s, expected a list" msgstr "パラメーター %s のパラメーター制約が無効です。リストが必要です" #, python-format msgid "Invalid parameter in environment %s." msgstr "環境 %sに不正なパラメタがあります。" #, python-format msgid "" "Invalid restricted_action type \"%(value)s\" for resource, acceptable " "restricted_action types are: %(types)s" msgstr "" "リソースの restricted_action タイプ \"%(value)s\" が無効です。許容される " "restricted_action のタイプ: %(types)s" #, python-format msgid "Invalid service %(service)s version %(version)s" msgstr "不正なサービス %(service)s バージョン %(version)s" #, python-format msgid "" "Invalid stack name %s must contain only alphanumeric or \"_-.\" characters, " "must start with alpha and must be 255 characters or less." 
msgstr "" "無効なスタック名 %s は英数か \"_-.\" の文字のみを含む必要があり、英語で始ま" "り、255 文字以下である必要があります。" #, python-format msgid "Invalid stack name %s, must be a string" msgstr "無効なスタック名 %s は文字列である必要があります" #, python-format msgid "Invalid status %s" msgstr "状況 %s は無効です" #, python-format msgid "Invalid support status and should be one of %s" msgstr "" "サポート状態が無効です。この値は %s のうちの 1 つである必要があります。" #, python-format msgid "Invalid tag, \"%s\" contains a comma" msgstr "タグが無効で、 \"%s\" にコンマが含まれています" #, python-format msgid "Invalid tag, \"%s\" is longer than 80 characters" msgstr "\"%s\" は無効なタグです。80 文字より長くなっています" #, python-format msgid "Invalid tag, \"%s\" is not a string" msgstr "タグが無効で、 \"%s\" は文字列ではありません" #, python-format msgid "Invalid tags, not a list: %s" msgstr "タグが無効で、リストではありません: %s" #, python-format msgid "Invalid template type \"%(value)s\", valid types are: cfn, hot." msgstr "" "\"%(value)s\" は無効なテンプレートタイプです。有効なタイプは cfn, hot です。" #, python-format msgid "Invalid timeout value %s" msgstr "無効なタイムアウト値 %s" #, python-format msgid "Invalid timezone: %s" msgstr "無効なタイムゾーン: %s" #, python-format msgid "Invalid type (%s)" msgstr "無効なタイプ (%s)" msgid "Ip allocation pools and their ranges." msgstr "IP の割り当てプールとその範囲。" msgid "Ip of the subnet's gateway." msgstr "サブネットのゲートウェイの IP。" msgid "Ip version for the subnet." msgstr "サブネットの IP バージョン。" msgid "Ip_version for this firewall rule." msgstr "このファイアウォールルールの IP バージョン。" msgid "It defines an executor to which task action should be sent to." msgstr "タスクのアクションが送信される先の実行プログラムを定義します。" msgid "It is advised to shutdown all Heat engines beforehand." msgstr "事前にすべてのHeatエンジンの停止を推奨します。" #, python-format msgid "Items to join must be string, map or list not %s" msgstr "" "結合する項目は文字列、マップ、またはリストである必要があり、%s であってはなり" "ません" #, python-format msgid "Items to join must be string, map or list. %s failed json serialization" msgstr "" "結合する項目は文字列、マップ、またはリストである必要があります。%s が JSON の" "シリアライズに失敗しました。" #, python-format msgid "Items to join must be strings not %s" msgstr "結合する項目は %s ではなく文字列である必要があります。" #, python-format msgid "" "JSON body size (%(len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "JSON 本体のサイズ (%(len)s バイト) が最大許容サイズ (%(limit)s バイト) を超え" "ています。" msgid "JSON data that was uploaded via the SwiftSignalHandle." msgstr "SwiftSignalHandle によってアップロードされた JSON データ。" msgid "" "JSON serialized map that includes the endpoint, token and/or other " "attributes the client must use for signalling this handle. The contents of " "this map depend on the type of signal selected in the signal_transport " "property." msgstr "" "クライアントがこのハンドルを送信するために使用する必要のあるエンドポイント、" "トークン、またはその他の属性を含む、JSON のシリアライズされたマップ。このマッ" "プのコンテンツは、 signal_transport プロパティーで選択された信号のタイプに" "よって決まります。" msgid "" "JSON string containing data associated with wait condition signals sent to " "the handle." msgstr "" "ハンドルに送信された待機条件の信号と関連付けられたデータを含む JSON 文字列。" msgid "" "Key used to encrypt authentication info in the database. Length of this key " "must be 32 characters." msgstr "" "データベースで認証情報を暗号化するために使用されるキー。このキーの長さは 32 " "文字である必要があります。" msgid "Key/Value pairs to extend the capabilities of the flavor." msgstr "フレーバーの機能を拡張するキーと値のペア。" msgid "Key/value pairs associated with the volume in raw dict form." msgstr "ボリュームに関連付けられたローディクショナリー形式のキー/値ペア。" msgid "Key/value pairs associated with the volume." msgstr "ボリュームに関連付けられたキー/値ペア。" msgid "Key/value pairs to associate with the volume." msgstr "ボリュームに関連付けるキー/値ペア。" msgid "Keypair added to instances to make them accessible for user." 
msgstr "" "キーペアがインスタンスに追加され、ユーザーがそれらのインスタンスにアクセス可" "能になります。" msgid "Keypair secret key." msgstr "キーペア秘密鍵。" msgid "" "Keystone domain ID which contains heat template-defined users. If this " "option is set, stack_user_domain_name option will be ignored." msgstr "" "heat テンプレート定義ユーザーが含まれている keystone のドメイン ID。このオプ" "ションが設定されている場合、stack_user_domain_name オプションは無視されます。" msgid "" "Keystone domain name which contains heat template-defined users. If " "`stack_user_domain_id` option is set, this option is ignored." msgstr "" "heat テンプレート定義ユーザーが含まれている keystone ドメイン" "名。'stack_user_domain_id' オプションが設定されている場合、このオプションは無" "視されます。" msgid "Keystone domain." msgstr "Keystone のドメイン。" #, python-format msgid "" "Keystone has more than one service with same name %(service)s. Please use " "service id instead of name" msgstr "" "Keystone には %(service)s という同じ名前のサービスが複数あります。名前ではな" "くサービス ID を使用してください" msgid "Keystone password for stack_domain_admin user." msgstr "stack_domain_admin ユーザーの Keystone パスワード。" msgid "Keystone project." msgstr "Keystone のプロジェクト。" msgid "Keystone role for heat template-defined users." msgstr "heat テンプレート定義ユーザーの keystone ロール。" msgid "Keystone role." msgstr "Keystone のロール。" msgid "Keystone user group." msgstr "Keystone のユーザーグループ。" msgid "Keystone user groups." msgstr "Keystone のユーザーグループ。" msgid "Keystone user is enabled or disabled." msgstr "Keystone ユーザーは有効化または無効化されています。" msgid "" "Keystone username, a user with roles sufficient to manage users and projects " "in the stack_user_domain." msgstr "" "Keystone ユーザー名。これは、stack_user_domain 内のユーザーおよびプロジェクト" "を管理するために十分なロールを持つユーザーです。" msgid "L2 segmentation strategy on the external side of the network gateway." msgstr "ネットワークゲートウェイの外部サイドの L2 セグメンテーション戦略。" msgid "LBaaS provider to implement this load balancer instance." msgstr "このロードバランサーのインスタンスを実装する LBaaS プロバイダー。" msgid "Length of OS_PASSWORD after encryption exceeds Heat limit (255 chars)" msgstr "暗号化後の OS_PASSWORD の長さが heat 制限 (255 文字) を超えています" msgid "Length of the string to generate." msgstr "生成する文字列の長さ。" msgid "" "Length property cannot be smaller than combined character class and " "character sequence minimums" msgstr "" "長さプロパティーを、結合された最小の文字クラスと文字シーケンスより小さくする" "ことはできません" msgid "Level of access that need to be provided for guest." msgstr "ゲストに提供する必要のあるアクセスのレベル。" msgid "" "Lifecycle actions to which the configuration applies. The string values " "provided for this property can include the standard resource actions CREATE, " "DELETE, UPDATE, SUSPEND and RESUME supported by Heat." msgstr "" "設定が適用されるライフサイクルアクション。このプロパティーに指定する文字列値" "には、heat がサポートしている標準リソースアクション CREATE、DELETE、UPDATE、" "SUSPEND、および RESUME があります。" msgid "List of LoadBalancer resources." msgstr "LoadBalancer リソースのリスト。" msgid "List of Security Groups assigned on current LB." msgstr "現在の LB に割り当てられたセキュリティーグループのリスト。" msgid "List of TLS container references for SNI." msgstr "SNI に対する TLS コンテナー参照のリスト。" msgid "List of database instances." msgstr "データベースインスタンスの一覧。" msgid "List of databases to be created on DB instance creation." msgstr "DB インスタンス作成時に作成するデータベースのリスト。" msgid "List of directories to search for plug-ins." msgstr "プラグインを検索するディレクトリーのリスト。" msgid "List of dns nameservers." msgstr "DNS ネームサーバーの一覧。" msgid "List of firewall rules in this firewall policy." msgstr "このファイアウォールポリシー内のファイアウォールルールのリスト。" msgid "List of health monitors associated with the pool." msgstr "プールに関連付けられているヘルスモニターのリスト。" msgid "List of hosts to join aggregate." msgstr "結合とアグリゲートを行うホストのリスト。" msgid "List of manila shares to be mounted." 
msgstr "搭載すべき manila シェアのリスト。" msgid "List of network interfaces to create on instance." msgstr "インスタンス上に作成するネットワークインターフェースの一覧。" msgid "List of processes to enable anti-affinity for." msgstr "アンチアフィニティーを有効にする対象のプロセスのリスト。" msgid "List of processes to run on every node." msgstr "すべてのノードにおいて実行するプロセスの一覧。" msgid "List of role assignments." msgstr "ロールの割り当てのリスト。" msgid "List of security group IDs associated with this interface." msgstr "" "このインターフェースに関連付けられているセキュリティーグループID のリスト。" msgid "List of security group egress rules." msgstr "セキュリティーグループ送信ルールのリスト。" msgid "List of security group ingress rules." msgstr "セキュリティーグループ受信ルールのリスト。" msgid "" "List of security group names or IDs to assign to this Node Group template." msgstr "" "このノードグループテンプレートに割り当てるセキュリティーグループ名または ID " "のリスト。" msgid "" "List of security group names or IDs. Cannot be used if neutron ports are " "associated with this server; assign security groups to the ports instead." msgstr "" "セキュリティーグループの名前または ID のリスト。neutron ポートがこのサーバー" "に関連付けられている場合は使用できません。代わりにセキュリティーグループを" "ポートに割り当ててください。" msgid "List of security group rules." msgstr "セキュリティーグループルールの一覧。" msgid "List of subnet prefixes to assign." msgstr "割り当てる必要があるサブネットのプレフィックスのリスト。" msgid "List of tags associated with this interface." msgstr "このインターフェースに関連付けられているタグのリスト。" msgid "List of tags to attach to the instance." msgstr "インスタンスに接続するためのタグのリスト。" msgid "List of tags to attach to this resource." msgstr "このリソースに接続するためのタグのリスト。" msgid "List of tags to be attached to this resource." msgstr "このリソースに付加するタグのリスト。" msgid "" "List of tasks which should be executed before this task. Used only in " "reverse workflows." msgstr "" "このタスクの前に実行されるタスクのリスト。ワークフローでのみ使用できます。" msgid "" "List of tasks which will run after the task has completed regardless of " "whether it is successful or not." msgstr "" "正常に完了したかどうかにかかわらずタスクが完了した後に実行されるタスクのリス" "ト。" msgid "List of tasks which will run after the task has completed successfully." msgstr "タスクが正常に完了した後に実行されるタスクのリスト。" msgid "" "List of tasks which will run after the task has completed with an error." msgstr "タスクがエラーとともに完了した後に実行されるタスクのリスト。" msgid "List of users to be created on DB instance creation." msgstr "DB インスタンス作成時に作成するユーザーの一覧。" msgid "" "List of workflows' executions, each of them is a dictionary with information " "about execution. Each dictionary returns values for next keys: id, " "workflow_name, created_at, updated_at, state for current execution state, " "input, output." msgstr "" "ワークフローの処理のリスト。各リストは処理に関する情報を含むディクショナリー" "です。各ディクショナリーはキー (id、workflow_name、created_at、updated_at、" "state for current execution state、input, output) に関する値を返します。" msgid "Listener associated with this pool." msgstr "このプールと関連付けられているリスナー。" msgid "" "Local path on each cluster node on which to mount the share. Defaults to '/" "mnt/{share_id}'." msgstr "" "シェアを搭載する必要のある各クラスターノード上のローカルパス。デフォルトでは " "'/mnt/{share_id}' に設定されます。" msgid "Location of the SSL certificate file to use for SSL mode." msgstr "SSL モードに使用する SSL 証明書ファイルのロケーション。" msgid "Location of the SSL key file to use for enabling SSL mode." msgstr "SSL モードを有効にするために使用する SSL 鍵ファイルのロケーション。" msgid "MAC address of the port." msgstr "ポートの MAC アドレス。" msgid "MAC address to allow through this port." msgstr "このポートを経由することが許可されている MAC アドレス。" msgid "Map between role with either project or domain." msgstr "プロジェクトまたはドメインを持つロール間のマップ。" msgid "" "Map containing options specific to the configuration management tool used by " "this resource." 
msgstr "このリソースで使用される設定管理ツールに固有のオプションを含むマップ。" msgid "" "Map representing the cloud-config data structure which will be formatted as " "YAML." msgstr "YAML としてフォーマットされる、クラウド設定データ構造を表すマップ。" msgid "" "Map representing the configuration data structure which will be serialized " "to JSON format." msgstr "JSON フォーマットに直列化される設定データ構造を表すマップ。" msgid "Max bandwidth in kbps." msgstr "最大帯域幅 (kbps)。" msgid "Max burst bandwidth in kbps." msgstr "最大バースト帯域幅 (kbps)。" msgid "Max size of the cluster." msgstr "クラスターの最大サイズ。" #, python-format msgid "Maximum %s is 1 hour." msgstr "最大 %s は 1 時間です。" msgid "Maximum depth allowed when using nested stacks." msgstr "ネストスタック使用時に許可される最大の深さ。" msgid "Maximum length of a server name to be used in nova." msgstr "Novaで使用できるサーバ名の最大長。" msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs)." msgstr "" "受け入れられるメッセージヘッダーの最大行サイズ。大きなトークン (通常は、" "Keystone v3 API で大きなサービスカタログを使用して生成されるトークン) を使用" "するときは max_header_line を増やさなければならない場合があります。" msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs.)" msgstr "" "受け入れられるメッセージヘッダーの最大行サイズ。大きなトークン (通常は、" "Keystone v3 API で大きなサービスカタログを使用して生成されるトークン) を使用" "するときは max_header_line を増やさなければならない場合があります。" msgid "Maximum number of instances in the group." msgstr "グループ内のインスタンスの最大数。" msgid "Maximum number of resources in the cluster. -1 means unlimited." msgstr "クラスター内のリソースの最小数。-1 は無制限を指します。" msgid "Maximum number of resources in the group." msgstr "グループ内のリソースの最大数。" msgid "Maximum number of stacks any one tenant may have active at one time." msgstr "任意の 1 つのテナントが同時にアクティブにできるスタックの最大数。" msgid "Maximum prefix size that can be allocated from the subnet pool." msgstr "" "サブネットプールから割り当てることができるプレフィックスサイズの最大値。" msgid "" "Maximum raw byte size of JSON request body. Should be larger than " "max_template_size." msgstr "" "JSON 要求本文の最大ローバイトサイズ。max_template_size よりも大きな値である必" "要があります。" msgid "Maximum raw byte size of any template." msgstr "テンプレートの最大容量、元のバイト単位。" #, fuzzy msgid "Maximum resources allowed per top-level stack. -1 stands for unlimited." msgstr "" "トップレベルのスタックごとに許容される最大リソース。-1 は無制限を指します。" msgid "Maximum resources per stack exceeded." msgstr "スタックあたりの最大リソース数を超えました。" msgid "" "Maximum transmission unit size (in bytes) for the ipsec site connection." msgstr "ipsec サイト接続の最大伝送単位サイズ (バイト)。" msgid "Member list items must be strings" msgstr "メンバー一覧の項目は文字列でなければなりません" msgid "Member list must be a list" msgstr "メンバー一覧はリストでなければなりません" msgid "Members associated with this pool." msgstr "このプールと関連付けられているメンバー。" msgid "Memory in MB for the flavor." msgstr "フレーバーのメモリー容量 (MB)。" #, python-format msgid "Message: %(message)s, Code: %(code)s" msgstr "メッセージ: %(message)s、コード: %(code)s" msgid "Metadata format invalid" msgstr "メタデータの形式が無効です" msgid "Metadata key-values defined for cluster." msgstr "クラスター用に定義したメタデータのキーと値。" msgid "Metadata key-values defined for node." msgstr "ノードのために定義したメタデータのキーと値。" msgid "Metadata key-values defined for profile." msgstr "プロファイルのために定義したメタデータのキーと値。" msgid "Metadata key-values defined for share." msgstr "シェアのために定義したメタデータのキーと値。" msgid "Meter name watched by the alarm." msgstr "アラームによって監視されるメーター名。" msgid "" "Meter should match this resource metadata (key=value) additionally to the " "meter_name." 
msgstr "" "メーターはこのリソースメタデータ (key=value) に加えて meter_name に一致してい" "る必要があります。" msgid "Meter statistic to evaluate." msgstr "評価するメーター統計。" msgid "Method of implementation of session persistence feature." msgstr "セッション永続性機能の実装方法。" msgid "Metric name watched by the alarm." msgstr "アラームによって監視されるメトリック名。" msgid "Min size of the cluster." msgstr "クラスターの最小サイズ。" msgid "MinSize can not be greater than MaxSize" msgstr "最小サイズを最大サイズより大きくすることはできません" msgid "Minimum number of instances in the group." msgstr "グループ内のインスタンスの最小数。" msgid "Minimum number of resources in the cluster." msgstr "クラスター内のリソースの最小数。" msgid "Minimum number of resources in the group." msgstr "グループ内のリソースの最小数。" msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "PercentChangeInCapacity for the AdjustmentType property." msgstr "" "AutoScaling グループのスケールアップまたはスケールダウンを行う場合に追加また" "は削除されるリソースの最小数。これは、AdjustmentType プロパティーに" "PercentChangeInCapacity を設定する場合にのみ指定できます。" msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "percent_change_in_capacity for the adjustment_type property." msgstr "" "AutoScaling グループのスケールアップまたはスケールダウンを行う際に追加または" "削除されるリソースの最小数。これは、adjustment_type プロパティーに " "percent_change_in_capacity を指定する場合にのみ使用できます。" #, python-format msgid "Missing mandatory (%s) key from mark unhealthy request" msgstr "不適切とマークされた要求から欠落した必須 (%s) キー" #, python-format msgid "Missing parameter type for parameter: %s" msgstr "パラメーターのパラメーター形式が指定されていません: %s" #, python-format msgid "Missing required credential: %(required)s" msgstr "必須のクレデンシャルがありません: %(required)s" msgid "Mistral resource validation error" msgstr "Mistral のリソース検証エラー" msgid "Monasca notification." msgstr "Monasca の通知。" msgid "Multiple actions specified" msgstr "複数のアクションが指定されています。" #, python-format msgid "Multiple physical resources were found with name (%(name)s)." msgstr "名前が (%(name)s) の複数の物理リソースが見つかりました。" #, python-format msgid "Multiple resources were found with the physical ID (%(phys_id)s)." msgstr "物理ID (%(phys_id)s) を持つ複数のリソースが見つかりました。" #, python-format msgid "Multiple routers found with name %s" msgstr "%s という名前の複数のルーターが見つかりました" msgid "Must specify 'InstanceId' if you specify 'EIP'." msgstr "'EIP' を指定する場合は、'InstanceId' も指定する必要があります。" #, python-format msgid "Name '%s' must not start or end with a hyphen." msgstr "名前 '%s の先頭または末尾にハイフンを使用してはなりません。" msgid "Name for the Sahara Cluster Template." msgstr "Sahara クラスターテンプレートの名前。" msgid "Name for the Sahara Node Group Template." msgstr "Sahara ノードグループテンプレートの名前。" msgid "Name for the aggregate." msgstr "アグリゲートの名前。" msgid "Name for the availability zone." msgstr "アベイラビリティーゾーンの名前。" msgid "" "Name for the container. If not specified, a unique name will be generated." msgstr "コンテナーの名前。指定しない場合、一意な名前が生成されます。" msgid "Name for the firewall policy." msgstr "ファイアウォールポリシーの名前。" msgid "Name for the firewall rule." msgstr "ファイアウォールルールの名前。" msgid "Name for the firewall." msgstr "ファイアウォールの名前。" msgid "Name for the ike policy." msgstr "IKE ポリシーの名前。" msgid "" "Name for the image. The name of an image is not unique to a Image Service " "node." msgstr "" "イメージの名前。イメージの名前が Image service ノードに対して固有ではありませ" "ん。" msgid "Name for the ipsec policy." msgstr "ipsec ポリシーの名前。" msgid "Name for the ipsec site connection." msgstr "ipsec サイト接続の名前。" msgid "Name for the time constraint." msgstr "時間制約の名前。" msgid "Name for the vpn service." 
msgstr "VPN サービスの名前。" msgid "" "Name of attribute to compare. Names of the form metadata.user_metadata.X or " "metadata.metering.X are equivalent to what you can address through " "matching_metadata; the former for Nova meters, the latter for all others. To " "see the attributes of your Samples, use `ceilometer --debug sample-list`." msgstr "" "比較する属性の名前。形式 metadata.user_metadata.X または metadata.metering.X " "の名前は、matching_metadata を使用してアドレス指定できる名前と同等です。前者" "は Nova メーター用、後者は他のすべてのメーター用です。「サンプル」の属性を確" "認するには、'ceilometer --debug sample-list' を使用してください。" msgid "Name of key to use for substituting inputs during deployment." msgstr "実装中に入力を置き換えるために使用するキーの名前。" msgid "Name of keypair to inject into the server." msgstr "サーバー内に注入するキーペアの名前。" msgid "Name of keystone endpoint." msgstr "keystone のエンドポイントの名前。" msgid "Name of keystone group." msgstr "keystone グループの名前。" msgid "Name of keystone project." msgstr "keystone プロジェクトの名前。" msgid "Name of keystone role." msgstr "keystone のロールの名前。" msgid "Name of keystone service." msgstr "keystone サービスの名前。" msgid "Name of keystone user." msgstr "keystone ユーザーの名前。" msgid "Name of registered datastore type." msgstr "登録済みのデータストア種別の名前。" msgid "Name of the DB instance to create." msgstr "作成する DB インスタンスの名前。" msgid "Name of the Node group." msgstr "ノードグループの名前。" msgid "" "Name of the action associated with the task. Either action or workflow may " "be defined in the task." msgstr "" "タスクと関連付けられたアクションの名前。タスクではアクションかワークフローの" "いずれかを定義できます。" msgid "Name of the administrative user to use on the server." msgstr "サーバー上で使用する管理者の名前。" msgid "Name of the alarm. By default, physical resource name is used." msgstr "アラームの名前。デフォルトでは物理リソース名を使用します。" msgid "Name of the availability zone for DB instance." msgstr "DB インスタンスのアベイラビリティーゾーン名。" msgid "Name of the availability zone for server placement." msgstr "サーバー配置のアベイラビリティーゾーンの名前。" msgid "Name of the cluster to create." msgstr "作成するクラスターの名前。" msgid "Name of the cluster. By default, physical resource name is used." msgstr "クラスターの名前。デフォルトでは物理リソース名を使用します。" msgid "Name of the cookie, required if type is APP_COOKIE." msgstr "Cookie の名前。タイプが APP_COOKIE である場合に必要です。" msgid "Name of the cron trigger." msgstr "Cron トリガーの名前。" msgid "Name of the current action being deployed" msgstr "デプロイされている現行アクションの名前" msgid "Name of the data source." msgstr "データソースの名前。" msgid "" "Name of the derived config associated with this deployment. This is used to " "apply a sort order to the list of configurations currently deployed to a " "server." msgstr "" "このデプロイメントに関連付けられている派生した設定の名前。これは、現在サー" "バーにデプロイされている設定のリストにソート順を適用するために使用します。" msgid "" "Name of the engine node. This can be an opaque identifier. It is not " "necessarily a hostname, FQDN, or IP address." msgstr "" "エンジンノードの名前。これには不透明な ID を指定できます。この名前はホスト" "名、FQDN、または IP アドレスとは限りません。" msgid "Name of the input." msgstr "入力の名前。" msgid "Name of the job binary." msgstr "ジョブバイナリーの名前。" msgid "Name of the metering label." msgstr "計測ラベルの名前。" msgid "Name of the network owning the port." msgstr "ポートを所有するネットワークの名前。" msgid "" "Name of the network owning the port. The value is typically network:" "floatingip or network:router_interface or network:dhcp." msgstr "" "このポートを所有するネットワークの名前。典型的な値は、network:floatingip、" "network:router_interface、または network:dhcp です。" msgid "Name of the notification. By default, physical resource name is used." msgstr "通知の名前。デフォルトでは物理リソース名を使用します。" msgid "Name of the output." msgstr "出力の名前。" msgid "Name of the pool." msgstr "プールの名前。" msgid "Name of the queue instance to create." 
msgstr "作成すべきキューインスタンスの名前。" msgid "" "Name of the registered datastore version. It must exist for provided " "datastore type. Defaults to using single active version. If several active " "versions exist for provided datastore type, explicit value for this " "parameter must be specified." msgstr "" "登録済みのデータストアバージョンの名前。指定されたデータストアタイプ用に存在" "している必要があります。デフォルトでは 1 つのアクティブバージョンが使用されま" "す。指定されたデータストアタイプに複数のアクティブなバージョンが存在する場合" "は、このパラメーターに対して明示的な値を指定する必要があります。" msgid "Name of the secret." msgstr "秘密の名前。" msgid "Name of the senlin node. By default, physical resource name is used." msgstr "Senlin ノードの名前。デフォルトでは物理リソース名を使用します。" msgid "Name of the senlin policy. By default, physical resource name is used." msgstr "Senlin ポリシーの名前。デフォルトでは物理リソース名を使用します。" msgid "Name of the senlin profile. By default, physical resource name is used." msgstr "Senlin プロファイルの名前。デフォルトでは物理リソース名を使用します。" msgid "" "Name of the senlin receiver. By default, physical resource name is used." msgstr "Senlin レシーバーの名前。デフォルトでは物理リソース名を使用します。" msgid "Name of the server." msgstr "サーバーの名前。" msgid "Name of the share network." msgstr "シェアのネットワークの名前。" msgid "Name of the share type." msgstr "シェアのタイプの名前。" msgid "Name of the stack." msgstr "スタックの名前。" msgid "Name of the subnet pool." msgstr "サブネットプールの名前。" msgid "Name of the vip." msgstr "VIP の名前。" msgid "Name of the volume type." msgstr "ボリュームタイプの名前。" msgid "Name of the volume." msgstr "ボリュームの名前。" msgid "" "Name of the workflow associated with the task. Can be defined by intrinsic " "function get_resource or by name of the referenced workflow, i.e. " "{ workflow: wf_name } or { workflow: { get_resource: wf_name }}. Either " "action or workflow may be defined in the task." msgstr "" "タスクと関連付けられたワークフローの名前。組み込み関数の get_resource または" "参照されるワークフローの名前 ( { workflow: wf_name } や { workflow: " "{ get_resource: wf_name }} など) で定義できます。タスクにはアクションかワーク" "フローのいずれかを定義できます。" msgid "Name of this Load Balancer." msgstr "このロードバランサーの名前。" msgid "Name of this deployment resource in the stack" msgstr "スタック内のこのデプロイメントリソースの名前" msgid "Name of this listener." msgstr "このリスナーの名前。" msgid "Name of this pool." msgstr "このプールの名前。" msgid "Name or ID Nova flavor for the nodes." msgstr "ノードの Nova フレーバーの名前または ID。" msgid "Name or ID of network to create a port on." msgstr "ポートを作成するネットワークの名前または ID。" msgid "Name or ID of senlin profile to create this node." msgstr "このノードを作成するための Senlin プロファイルの名前または ID。" msgid "" "Name or ID of shared file system snapshot that will be restored and created " "as a new share." msgstr "" "リストアされ新たなシェアとして作成される共有ファイルシステムのスナップショッ" "トの名前または ID。" msgid "" "Name or ID of shared filesystem type. Types defines some share filesystem " "profiles that will be used for share creation." msgstr "" "共有ファイルシステムのタイプの名前または ID。タイプによって、シェアの作成のた" "めに使用されるシェアのファイルシステムのプロファイルを定義します。" msgid "Name or ID of shared network defined for shared filesystem." msgstr "共有ファイルシステムのために定義した共有ネットワークの名前または ID。" msgid "Name or ID of target cluster." msgstr "ターゲットクラスターの名前または ID。" msgid "Name or ID of the load balancing pool." msgstr "ロードバランシングプールの名前または ID。" msgid "Name or Id of keystone region." msgstr "keystone の領域の名前または ID。" msgid "Name or Id of keystone service." msgstr "keystone サービスの名前または ID。" #, python-format msgid "" "Name or UUID of Neutron port to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "この NIC を接続する先の Neutron ポートの名前または UUID。%(port)s または " "%(net)s のいずれかを指定する必要があります。" msgid "Name or UUID of network." 
msgstr "ネットワークの名前または UUID。" msgid "" "Name or UUID of the Neutron floating IP network or name of the Nova floating " "ip pool to use. Should not be provided when used with Nova-network that auto-" "assign floating IPs." msgstr "" "使用する Neutron Floating IP ネットワークの名前または UUID、あるいは " "NovaFloating IP プールの名前。Floating IP を自動的に割り当てる Nova ネット" "ワークで使用する場合は指定しないでください。" msgid "Name or UUID of the image used to boot Hadoop nodes." msgstr "Hadoop ノードをブートするために使用されるイメージの名前または UUID。" #, python-format msgid "" "Name or UUID of the network to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "この NIC を接続する先のネットワークの名前または UUID。%(port)s または " "%(net)s のいずれかを指定する必要があります。" msgid "Name or id of keystone domain." msgstr "keystone ドメインの名前または ID。" msgid "Name or id of keystone group." msgstr "keystone グループの名前または ID。" msgid "Name or id of keystone user." msgstr "keystone ユーザーの名前または ID。" msgid "Name or id of volume type (OS::Cinder::VolumeType)." msgstr "ボリュームタイプの名前または ID (OS::Cinder::VolumeType)。" msgid "Names of databases that those users can access on instance creation." msgstr "" "インスタンス作成時にこれらのユーザーがアクセス可能なデータベースの名前。" msgid "" "Namespace to group this software config by when delivered to a server. This " "may imply what configuration tool is going to perform the configuration." msgstr "" "サーバーへの配布時にこのソフトウェア設定をグループ化するための名前空間。これ" "は、どの設定ツールが設定を実行しようとしているかを暗黙に示す場合があります。" msgid "Need more arguments" msgstr "さらなる引数が必要です" msgid "Negotiation mode for the ike policy." msgstr "IKE ポリシーのネゴシエーションモード。" #, python-format msgid "Neither image nor bootable volume is specified for instance %s" msgstr "インスタンス %s にはイメージもブート可能ボリュームも指定されていません" msgid "Network in CIDR notation." msgstr "CIDR 表記のネットワーク。" msgid "Network interface ID to associate with EIP." msgstr "EIP に関連付けるネットワークインターフェース ID。" msgid "Network interfaces to associate with instance." msgstr "インスタンスに関連付けるネットワークインターフェース。" #, python-format msgid "" "Network this port belongs to. If you plan to use current port to assign " "Floating IP, you should specify %(fixed_ips)s with %(subnet)s. Note if this " "changes to a different network update, the port will be replaced." msgstr "" "このポートが所属するネットワーク。現在のポートを使用して Floating IP を割り当" "てる場合、%(subnet)s に %(fixed_ips)s を指定する必要があります。このポートを" "別のネットワークで使用する場合、このポートは置き換えられます。" msgid "Network to allocate floating IP from." msgstr "Floating IP の割り当て元のネットワーク。" msgid "Neutron network id." msgstr "Neutron のネットワーク ID。" msgid "Neutron subnet id." msgstr "Neutron のサブネット ID。" msgid "Nexthop IP address." msgstr "ネクストホップの IP アドレス。" #, python-format msgid "No %s specified" msgstr "%s が指定されていません" msgid "No Template provided." msgstr "テンプレートが指定されていません。" msgid "No action specified" msgstr "アクションが指定されていません。" msgid "No constraint expressed" msgstr "制約が示されていません" #, python-format msgid "" "No content found in the \"files\" section for %(fn_name)s path: %(file_key)s" msgstr "" "%(fn_name)s パスの \"files\" セクションに内容がありません: %(file_key)s" #, python-format msgid "No event %s found" msgstr "イベント \"%s\" は見つかりませんでした。" #, python-format msgid "No events found for resource %s" msgstr "リソース \"%s\" に関連するイベントは見つかりませんでした。" msgid "No resource data found" msgstr "リソースデータが見つかりません" #, python-format msgid "No stack exists with id \"%s\"" msgstr "ID \"%s\" のスタックは存在しません" msgid "No stack name specified" msgstr "スタック名が指定されていません。" msgid "No template specified" msgstr "テンプレートが指定されていません" msgid "No volume service available." msgstr "使用可能なボリュームサービスがありません。" msgid "Node groups." msgstr "ノードグループ。" msgid "Nodes list in the cluster." 
msgstr "クラスター内のノードのリスト。" msgid "Non HA routers can only have one L3 agent." msgstr "非 HA ルーターが持つことができるのは 1 つの L3 エージェントのみです。" #, python-format msgid "Non-empty resource type is required for resource \"%s\"" msgstr "リソース \"%s\" には空でないリソースタイプが必要です" msgid "Not Implemented." msgstr "実装されていません。" #, python-format msgid "Not allowed - %(dsver)s without %(dstype)s." msgstr "許可されません: %(dstype)s のない %(dsver)s。" msgid "Not found" msgstr "見つかりません" msgid "Not waiting for outputs signal" msgstr "出力シグナルを待たない" msgid "" "Notional service where encryption is performed For example, front-end. For " "Nova." msgstr "" "暗号化が実行される概念サービス。例えば front-end など。Nova の場合に適用され" "る。" msgid "Nova instance type (flavor)." msgstr "Nova インスタンスタイプ (フレーバー)。" msgid "Nova network id." msgstr "Nova のネットワーク ID。" msgid "Number of VCPUs for the flavor." msgstr "フレーバーの VCPU 数。" msgid "Number of backlog requests to configure the socket with." msgstr "ソケットを設定するためのバックログ要求の数。" msgid "Number of instances in the Node group." msgstr "ノードグループ内のインスタンス数。" msgid "Number of minutes to wait for this stack creation." msgstr "このスタック作成を待機する時間 (分)。" msgid "Number of periods to evaluate over." msgstr "評価する期間数。" msgid "" "Number of permissible connection failures before changing the member status " "to INACTIVE." msgstr "" "これを超えるとメンバーステータスが INACTIVE に変わる、許容可能な接続障害数。" msgid "Number of remaining executions." msgstr "今後実施すべき処理の数。" msgid "Number of seconds for the DPD delay." msgstr "DPD 遅延の秒数。" msgid "Number of seconds for the DPD timeout." msgstr "DPD タイムアウトの秒数。" msgid "" "Number of times to check whether an interface has been attached or detached." msgstr "" "インターフェースが接続されているか接続解除されているかどうかを検査する回数。" msgid "" "Number of times to retry to bring a resource to a non-error state. Set to 0 " "to disable retries." msgstr "" "リソースを非エラー状態にするための再試行の回数。再試行を無効にするには、0 に" "設定します。" #, fuzzy msgid "" "Number of times to retry when a client encounters an expected intermittent " "error. Set to 0 to disable retries." msgstr "" "クライアントが予測される偶発的エラーを起こした際の再試行の回数。再試行を無効" "にするには、0 に設定します。" msgid "Number of workers for Heat service." msgstr "heat サービスのワーカーの数。" #, fuzzy msgid "" "Number of workers for Heat service. Default value 0 means, that service will " "start number of workers equal number of cores on server." msgstr "" "Heat サービスのワーカー数。デフォルト値の 0 は、サービスがサーバー上のコア数" "と同じワーカー数を起動することを意味しています。" msgid "Number value for delay during resolve constraint." msgstr "resolve 制約中の遅延に関する数値。" msgid "Number value for timeout during resolving output value." msgstr "出力値の解決中のタイムアウトに関する数値。" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "オブジェクトアクション %(action)s が失敗しました。原因: %(reason)s" msgid "" "On update, enables heat to collect existing resource properties from reality " "and converge to updated template." msgstr "" "更新を実施すると、heat は現実の環境から既存のリソースプロパティーを収集し、集" "約してテンプレートを更新します。" msgid "One of predefined health monitor types." msgstr "事前定義ヘルスモニタータイプの 1 つ。" msgid "One or more listeners for this load balancer." msgstr "このロードバランサーの 1 つ以上のリスナー。" msgid "Only ISO 8601 duration format of the form PT#H#M#S is supported." msgstr "形式が PT#H#M#S の ISO 8601 期間形式のみがサポートされています。" msgid "Only Templates with an extension of .yaml or .template are supported" msgstr "" "拡張子が .yaml または .template のテンプレートのみがサポートされています" #, python-format msgid "Only integer is acceptable by '%(name)s'." msgstr "'%(name)s' では整数のみが許容されます。" #, python-format msgid "Only non-zero integer is acceptable by '%(name)s'." 
msgstr "'%(name)s' ではゼロ以外の整数のみが受け入れられます。" msgid "Operator used to compare specified statistic with threshold." msgstr "指定された統計をしきい値と比較するために使用する演算子。" msgid "Optional CA cert file to use in SSL connections." msgstr "SSL 接続に使用する CA 証明書ファイル。オプション。" msgid "Optional Nova keypair name." msgstr "オプションの Nova キーペア名。" msgid "Optional PEM-formatted certificate chain file." msgstr "PEM 形式の証明書チェーンファイル。オプション。" msgid "Optional PEM-formatted file that contains the private key." msgstr "秘密鍵を含む PEM ファイル。オプション。" msgid "Optional filename to associate with part." msgstr "パーツに関連付けるオプションのファイル名。" #, python-format msgid "Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s." msgstr "" "http://0.0.0.0:8004/v1/%(tenant_id)s のような形式のオプションの heat URL。" msgid "Optional subtype to specify with the type." msgstr "タイプとともに指定するオプションのサブタイプ。" msgid "Options for simulating waiting." msgstr "待機のシミュレーションを行うためのオプション。" #, python-format msgid "Order '%(name)s' failed: %(code)s - %(reason)s" msgstr "注文 '%(name)s' が失敗しました: %(code)s - %(reason)s" msgid "Outputs received" msgstr "受け取った出力" msgid "Owner of the source security group." msgstr "ソースセキュリティーグループの所有者。" msgid "PATCH update to non-COMPLETE stack" msgstr "COMPLETE スタック以外に対する PATCH の更新" #, python-format msgid "Parameter '%(name)s' is invalid: %(exp)s" msgstr "パラメーター '%(name)s' が無効です: %(exp)s" msgid "Parameter Groups error" msgstr "パラメーターグループのエラー" msgid "" "Parameter Groups error: parameter_groups.: The grouped parameter key_name " "does not reference a valid parameter." msgstr "" "パラメーターグループのエラー: parameter_groups。グループ化されたパラメー" "ター key_name が有効なパラメーターを参照していません。" msgid "" "Parameter Groups error: parameter_groups.: The key_name parameter must be " "assigned to one parameter group only." msgstr "" "パラメーターグループのエラー: parameter_groups 。key_name パラメーターは 1 つ" "のパラメーターグループにしか割り当てることができません。" msgid "" "Parameter Groups error: parameter_groups.: The parameters of parameter group " "should be a list." msgstr "" "パラメーターグループのエラー: parameter_groups。パラメーターグループのパラ" "メーターはリストである必要があります。" msgid "" "Parameter Groups error: parameter_groups.Database Group: The InstanceType " "parameter must be assigned to one parameter group only." msgstr "" "パラメーターグループのエラー: parameter_groups。データベースグループ: " "InstanceType パラメーターは 1 つのパラメーターグループにしか割り当てることが" "できません。" msgid "" "Parameter Groups error: parameter_groups.Database Group: The grouped " "parameter SomethingNotHere does not reference a valid parameter." msgstr "" "パラメーターグループのエラー: parameter_groups。データベースグループ: グルー" "プ化されたパラメーターの SomethingNotHere が有効なパラメーターを参照していま" "せん。" msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters must " "be provided for each parameter group." msgstr "" "パラメーターグループのエラー: parameter_groups。サーバーグループ: パラメー" "ターを各パラメーターグループに提供する必要があります。" msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters of " "parameter group should be a list." msgstr "" "パラメーターグループのエラー: parameter_groups。サーバーグループ: " "InstanceType パラメーターグループはリストである必要があります。" msgid "" "Parameter Groups error: parameter_groups: The parameter_groups should be a " "list." msgstr "" "パラメーターグループのエラー: parameter_groups 。parameter_groups はリストで" "ある必要があります。" #, python-format msgid "Parameter name in \"%s\" must be string" msgstr "\"%s\" 内のパラメーター名は文字列でなければなりません" #, python-format msgid "Params must be a map, find a %s" msgstr "Params はマップである必要があります。%s が見つかりました。" msgid "Parent network of the subnet." msgstr "サブネットの親ネットワーク。" msgid "Parts belonging to this message." 
msgstr "このメッセージに属するパーツ。" msgid "Password for API authentication" msgstr "API 認証用パスワード" msgid "Password for accessing the data source URL." msgstr "データソースの URL にアクセスするためのパスワード。" msgid "Password for accessing the job binary URL." msgstr "ジョブバイナリーの URL にアクセスするためのパスワード。" msgid "Password for those users on instance creation." msgstr "インスタンス作成時のこれらのユーザーのパスワード。" msgid "Password of keystone user." msgstr "keystone ユーザーのパスワード。" msgid "Password used by user." msgstr "ユーザーが使用するパスワード。" #, python-format msgid "Path components in \"%s\" must be strings" msgstr "\"%s\" 内のパスコンポーネントは文字列でなければなりません" msgid "Path components in attributes must be strings" msgstr "属性の Path コンポーネントは文字列である必要があります。" msgid "Payload exceeds maximum allowed size" msgstr "ペイロードが最大許容サイズを超えています。" msgid "Perfect forward secrecy for the ipsec policy." msgstr "ipsec ポリシーの Perfect Forward Secrecy。" msgid "Perfect forward secrecy in lowercase for the ike policy." msgstr "IKE ポリシーの Perfect Forward Secrecy (小文字)。" msgid "" "Perform a check on the input values passed to verify that each required " "input has a corresponding value. When the property is set to STRICT and no " "value is passed, an exception is raised." msgstr "" "渡された入力値の検査を実行して、必要な各入力に対応する値があることを確認して" "ください。プロパティーが STRICT に設定され、値が渡されない場合は、例外が発生" "します。" msgid "Period (seconds) to evaluate over." msgstr "評価する期間 (秒)。" msgid "Physical ID of the VPC. Not implemented." msgstr "VPC の物理 ID。実装されていません。" #, python-format msgid "" "Plugin %(plugin)s doesn't support the following node processes: " "%(unsupported)s. Allowed processes are: %(allowed)s" msgstr "" "プラグイン %(plugin)s は %(unsupported)s のノードプロセスをサポートしませ" "ん。 許容されるプロセスは %(allowed)s です。" msgid "Plugin name." msgstr "プラグイン名。" msgid "Policies for removal of resources on update." msgstr "更新時のリソースの削除に関するポリシー。" msgid "Policy for rolling updates for this scaling group." msgstr "このスケーリンググループの更新のロールアップ用ポリシー。" msgid "" "Policy on how to apply a flavor update; either by requesting a server resize " "or by replacing the entire server." msgstr "" "フレーバー更新の適用方法に関するポリシー。サーバーのサイズ変更を要求するか、" "サーバー全体を置換するかのいずれかです。" msgid "" "Policy on how to apply an image-id update; either by requesting a server " "rebuild or by replacing the entire server." msgstr "" "イメージ ID 更新の適用方法に関するポリシー。サーバーの再ビルドを要求するか、" "サーバー全体を置換するかのいずれかを行います。" msgid "" "Policy on how to respond to a stack-update for this resource. REPLACE_ALWAYS " "will replace the port regardless of any property changes. AUTO will update " "the existing port for any changed update-allowed property." msgstr "" "このリソースのスタック更新への対応方法に関するポリシー。REPLACE_ALWAYS は、プ" "ロパティー変更とは無関係にポートを置換します。AUTO は、変更されたすべての更新" "許可プロパティーの既存ポートを更新します。" msgid "" "Policy to be processed when doing an update which requires removal of " "specific resources." msgstr "特定リソースの削除を必要とする更新の実行時に処理されるポリシー。" msgid "Pool creation failed" msgstr "プール作成に失敗しました" msgid "Pool creation failed due to vip" msgstr "vip が原因で、プール作成に失敗しました" msgid "Pool from which floating IP is allocated." msgstr "Floating IP アドレスの割り当て元であるプール。" msgid "Port number on which the servers are running on the members." msgstr "サーバーがメンバー上で稼働するときのポート番号。" msgid "Port on which the pool member listens for requests or connections." msgstr "プールメンバーがリクエストまたは接続をリッスンするポート。" msgid "Port security enabled of the network." msgstr "ネットワークで有効化されたポートセキュリティー。" msgid "Port security enabled of the port." msgstr "ポートで有効化したポートセキュリティー" msgid "Position of the rule within the firewall policy." msgstr "ファイアウォールポリシー内のルールの位置。" msgid "Pre-shared key string for the ipsec site connection." 
msgstr "ipsec サイト接続の事前共有鍵文字列。" msgid "Prefix length for subnet allocation from subnet pool." msgstr "サブネットプールからサブネットを割り当てる際のプレフィックス長。" msgid "Private DNS name of the specified instance." msgstr "指定されたインスタンスのプライベート DNS 名。" msgid "Private IP address of the network interface." msgstr "ネットワークインターフェースのプライベート IP アドレス。" msgid "Private IP address of the specified instance." msgstr "指定されたインスタンスのプライベート IP アドレス。" msgid "Project ID" msgstr "プロジェクト ID" msgid "" "Projects to add volume type access to. NOTE: This property is only supported " "since Cinder API V2." msgstr "" "ボリュームタイプのアクセスを追加する対象となるプロジェクト。注意: このプロパ" "ティーは Cinder API V2 以降でのみサポートされます。" #, python-format msgid "" "Properties %(algorithm)s and %(bit_length)s are required for %(type)s type " "of order." msgstr "" "%(type)s タイプの注文に対しては、プロパティー %(algorithm)s と " "%(bit_length)s が必要です。" msgid "Properties for profile." msgstr "プロファイルのプロパティー。" msgid "Properties of this policy." msgstr "このポリシーのプロパティー。" msgid "Properties to pass to each resource being created in the chain." msgstr "チェーン内で作成中の各ソースに渡すプロパティー。" #, python-format msgid "Property %(cookie)s is required when %(sp)s type is set to %(app)s." msgstr "" "%(sp)s タイプが %(app)s に設定されている場合、プロパティー %(cookie)s が必要" "です。" #, python-format msgid "" "Property %(cookie)s must NOT be specified when %(sp)s type is set to %(ip)s." msgstr "" "%(sp)s タイプが %(ip)s に設定されている場合、プロパティー %(cookie)s を設定す" "ることはできません。" #, python-format msgid "" "Property %(key)s updated value %(new)s should be superset of existing value " "%(old)s." msgstr "" "プロパティー %(key)s が更新した値 %(new)s は、既存の値 %(old)s のスーパーセッ" "トである必要があります。" #, python-format msgid "" "Property %(n)s type mismatch between facade %(type)s (%(fs_type)s) and " "provider (%(ps_type)s)" msgstr "" "プロパティー %(n)s タイプがファサード %(type)s (%(fs_type)s) とプロバイダー " "(%(ps_type)s) の間で一致しません" #, python-format msgid "Property %(policies)s and %(item)s cannot be used both at one time." msgstr "" "プロパティー %(policies)s と %(item)s の両方を同時に使用することはできませ" "ん。" #, python-format msgid "Property %(ref)s required when protocol is %(term)s." msgstr "プロトコルが %(term)s の場合、プロパティー %(ref)s が必要です。" #, python-format msgid "Property %s not assigned" msgstr "プロパティー %s が割り当てられていません" #, python-format msgid "Property %s not implemented yet" msgstr "プロパティー %s はまだ実装されていません" msgid "" "Property cookie_name is required when session_persistence type is set to " "APP_COOKIE." msgstr "" "session_persistence タイプが APP_COOKIE に設定されている場合、プロパティー" "cookie_name が必要です。" msgid "" "Property cookie_name is required, when session_persistence type is set to " "APP_COOKIE." msgstr "" "session_persistence タイプが APP_COOKIE に設定された場合、プロパティー" "cookie_name は必須です。" msgid "" "Property cookie_name must NOT be specified when session_persistence type is " "set to SOURCE_IP." msgstr "" "session_persistence タイプが SOURCE_IPP に設定されている場合、プロパティーの " "cookie_name を指定することはできません。" msgid "Property values for the resources in the group." msgstr "グループ内のリソースのプロパティー値。" msgid "Protocol for balancing." msgstr "バランシング用のプロトコル。" msgid "Protocol for the firewall rule." msgstr "ファイアウォールルールのプロトコル。" msgid "Protocol of the pool." msgstr "プールのプロトコル。" msgid "Protocol on which to listen for the client traffic." msgstr "クライアントトラフィックをリッスンするプロトコル。" msgid "Protocol to balance." msgstr "バランスを取るためのプロトコル。" msgid "Protocol value for this firewall rule." msgstr "このファイアウォールルールのプロトコル値。" msgid "" "Provide access to nodes using other nodes of the cluster as proxy gateways." msgstr "" "プロキシゲートウェイとしてクラスターの他のノードを使用することで、ノードへの" "アクセスを提供します。" msgid "" "Provide old encryption key. 
New encryption key would be used from config " "file." msgstr "" "古い暗号化キーを提供します。新規の暗号化キーは設定ファイルから使用します。" msgid "Provider for this Load Balancer." msgstr "このロードバランサーのプロバイダー。" msgid "Provider implementing this load balancer instance." msgstr "このロードバランサーのインスタンスを実装するプロバイダー。" #, python-format msgid "Provider requires property %(n)s unknown in facade %(type)s" msgstr "" "プロバイダーはファサード %(type)s で不明なプロパティー %(n)s を必要としていま" "す" msgid "Public DNS name of the specified instance." msgstr "指定されたインスタンスのパブリック DNS 名。" msgid "Public IP address of the specified instance." msgstr "指定されたインスタンスのパブリック IP アドレス。" msgid "" "RPC timeout for the engine liveness check that is used for stack locking." msgstr "スタックロックに使用するエンジン活性チェックの RPC タイムアウト。" msgid "RX/TX factor." msgstr "RX/TX 係数。" #, python-format msgid "Rebuilding server failed, status '%s'" msgstr "サーバーの再ビルドに失敗しました。状況 '%s'" msgid "Record name." msgstr "レコード名。" #, python-format msgid "Recursion depth exceeds %d." msgstr "再帰深度が %d を超えています。" msgid "" "Ref structure that contains the ID of the VPC on which you want to create " "the subnet." msgstr "サブネットを作成する VPC の ID を含む参照構造。" msgid "Reference to a flavor for creating DB instance." msgstr "DB インスタンスを作成するためのフレーバーに対する参照。" msgid "Reference to certificate." msgstr "証明書の参照。" msgid "Reference to intermediates." msgstr "中間証明書の参照。" msgid "Reference to private key passphrase." msgstr "秘密鍵のパスフレーズの参照。" msgid "Reference to private key." msgstr "秘密鍵の参照。" msgid "Reference to public key." msgstr "公開鍵の参照。" msgid "Reference to the secret." msgstr "秘密の参照。" msgid "References to secrets that will be stored in container." msgstr "コンテナーに保存される秘密の参照。" msgid "Region name in which this stack will be created." msgstr "このスタックが作成されるリージョン名。" msgid "Remaining executions." msgstr "今後実行すべき処理。" msgid "Remote branch router identity." msgstr "リモートブランチルーター ID。" msgid "Remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "" "リモートブランチルーターのパブリック IPv4 アドレスまたは IPv6 アドレスあるい" "は FQDN。" msgid "Remote subnet(s) in CIDR format." msgstr "CIDR 形式のリモートサブネット。" msgid "" "Replacement policy used to work around flawed nova/neutron port interaction " "which has been fixed since Liberty." msgstr "" "Liberty 以降で修正された nova/neutron ポートの処理エラーを解決するために使用" "される置き換えポリシー。" msgid "Request expired or more than 15mins in the future" msgstr "要求の有効期限が切れているか、要求時刻が 15 分以上先になっています" #, python-format msgid "Request limit exceeded: %(message)s" msgstr "要求の限度を超えました: %(message)s" msgid "Request missing required header X-Auth-Url" msgstr "必須ヘッダー X-Auth-Url が要求に含まれていません" msgid "Request was denied due to request throttling" msgstr "要求数の制限のため、要求が拒否されました" #, python-format msgid "" "Requested plugin '%(plugin)s' doesn't support version '%(version)s'. Allowed " "versions are %(allowed)s" msgstr "" "リクエストされたプラグイン '%(plugin)s' がバージョン '%(version)s' をサポート" "していません。許容されるバージョンは %(allowed)s です。" msgid "" "Required extra specification. Defines if share drivers handles share servers."
msgstr "" "追加の指定が必要です。シェアドライバーがシェアサーバーを処理するか定義しま" "す。" #, python-format msgid "Required property %(n)s for facade %(type)s missing in provider" msgstr "" "ファサード %(type)s に必要なプロパティー %(n)s がプロバイダーにありません" #, python-format msgid "Resizing to '%(flavor)s' failed, status '%(status)s'" msgstr "'%(flavor)s' のサイズ変更に失敗しました。状況: '%(status)s'" #, python-format msgid "Resource \"%s\" has no type" msgstr "リソース \"%s\" にタイプがありません" #, python-format msgid "Resource \"%s\" type is not a string" msgstr "リソース \"%s\" のタイプが文字列ではありません" #, python-format msgid "Resource %(name)s %(key)s type must be %(typename)s" msgstr "リソース %(name)s %(key)s タイプは %(typename)s でなければなりません" #, python-format msgid "Resource %(name)s is missing \"%(type_key)s\"" msgstr "リソース %(name)s に \"%(type_key)s\" がありません" #, python-format msgid "" "Resource %s's property user_data_format should be set to SOFTWARE_CONFIG " "since there are software deployments on it." msgstr "" "ソフトウェア配備があるため、リソース %s のプロパティー user_data_format を " "SOFTWARE_CONFIG に設定する必要があります。" msgid "Resource ID was not provided." msgstr "リソース ID が指定されませんでした。" msgid "" "Resource definition for the resources in the group, in HOT format. The value " "of this property is the definition of a resource just as if it had been " "declared in the template itself." msgstr "" "グループ内のリソースのリソース定義 (HOT フォーマット)。このプロパティーの値" "は、テンプレート自体で宣言された場合と同様に、リソースの定義です。" msgid "" "Resource definition for the resources in the group. The value of this " "property is the definition of a resource just as if it had been declared in " "the template itself." msgstr "" "グループ内のリソースのリソース定義。このプロパティーの値は、テンプレート自体" "で宣言された場合と同様に、リソースの定義を指します。" msgid "Resource failed" msgstr "リソースに障害がありました" msgid "Resource is not built" msgstr "リソースは作成されていません" msgid "Resource name may not contain \"/\"" msgstr "リソース名に \"/\" を含めることはできません" msgid "Resource type." msgstr "リソースタイプ。" msgid "Resource update already requested" msgstr "リソース更新は既に要求済みです" msgid "Resource with the name requested already exists" msgstr "要求された名前のリソースは既に存在します" msgid "" "ResourceInError: resources.remote_stack: Went to status UPDATE_FAILED due to " "\"Remote stack update failed\"" msgstr "" "ResourceInError: resources。remote_stack: \"Remote stack update\" のため" "UPDATE_FAILED の状態が発生しました。" #, python-format msgid "ResourcePropertiesData with id %s not found" msgstr " id %s を持つ ResourcePropertiesDataが見つかりません" #, python-format msgid "Resources must contain Resource. Found a [%s] instead" msgstr "" "リソースにはリソースが含まれている必要があります。代わりに [%s] が見つかりま" "した" msgid "" "Resources that users are allowed to access by the DescribeStackResource API." msgstr "" "ユーザーが DescribeStackResource API によるアクセスを許可されているリソース。" msgid "Returned status code from the configuration execution." msgstr "設定実行から返された状況コード。" msgid "Route duplicates an existing route." msgstr "ルートが既存のルートを複製します。" msgid "Route table ID." msgstr "経路テーブル ID。" msgid "Safety assessment lifetime configuration for the ike policy." msgstr "IKE ポリシーの安全アセスメント存続期間設定。" msgid "Safety assessment lifetime configuration for the ipsec policy." msgstr "ipsec ポリシーの安全アセスメント存続期間設定。" msgid "Safety assessment lifetime units." msgstr "安全アセスメント存続期間単位。" msgid "Safety assessment lifetime value in specified units." msgstr "指定された単位の安全アセスメント存続期間値。" msgid "Scheduler hints to pass to Nova (Heat extension)." msgstr "nova に渡すスケジューラーヒント (heat 拡張)。" msgid "Schema representing the inputs that this software config is expecting." msgstr "このソフトウェア設定が必要としている入力を表すスキーマ。" msgid "Schema representing the outputs that this software config will produce." 
msgstr "このソフトウェア設定が生成する出力を表すスキーマ。" #, python-format msgid "Schema valid only for %(ltype)s or %(mtype)s, not %(utype)s" msgstr "" "スキーマは %(utype)s ではなく %(ltype)s または %(mtype)s にのみ有効です" msgid "" "Scope of flavor accessibility. Public or private. Default value is True, " "means public, shared across all projects." msgstr "" "フレーバーのアクセス可能性の範囲。パブリックまたはプライベートに設定します。" "デフォルト値の True はプライベートを意味し、すべてのプロジェクトで共有されま" "す。" #, python-format msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden." msgstr "" "禁止されたテナント %(actual)s からテナント %(target)s を検索しています。" msgid "Seconds between running periodic tasks." msgstr "定期タスクの実行間隔 (秒)" msgid "Seconds to wait after a create. Defaults to the global wait_secs." msgstr "" "作成処理の後の待機時間 (秒)。デフォルトではグローバルの wait_secs に設定され" "ます。" msgid "Seconds to wait after a delete. Defaults to the global wait_secs." msgstr "" "削除処理の後の待機時間 (秒)。デフォルトではグローバルの wait_secs に設定され" "ます。" msgid "Seconds to wait after an action (-1 is infinite)." msgstr "アクションの後の待機時間の秒数 (-1 は無制限を指します)。" msgid "Seconds to wait after an update. Defaults to the global wait_secs." msgstr "" "更新処理の後の待機時間 (秒)。デフォルトではグローバルの wait_secs に設定され" "ます。" #, python-format msgid "Section %s can not be accessed directly." msgstr "セクション %s には直接アクセスできません。" #, python-format msgid "Security Group \"%(group_name)s\" not found" msgstr "セキュリティーグループ \"%(group_name)s\" が見つかりません" msgid "Security group IDs to assign." msgstr "割り当てるセキュリティーグループ ID。" msgid "Security group IDs to associate with this port." msgstr "このポートに関連付けるセキュリティーグループ ID。" msgid "Security group names to assign." msgstr "割り当てるセキュリティーグループ名。" msgid "Security groups cannot be assigned the name \"default\"." msgstr "" "セキュリティーグループを名前 \"default\" に関連付けることはできません。" msgid "Security service IP address or hostname." msgstr "セキュリティーサービスの IP アドレスまたはホスト名。" msgid "Security service description." msgstr "セキュリティーサービスの説明。" msgid "Security service domain." msgstr "セキュリティーサービスのドメイン。" msgid "Security service name." msgstr "セキュリティーサービスの名前。" msgid "Security service type." msgstr "セキュリティーサービスのタイプ。" msgid "Security service user or group used by tenant." msgstr "テナントが使用するセキュリティーサービスのユーザーまたはグループ。" msgid "Select deferred auth method, stored password or trusts." msgstr "" "据え置き認証方式、保管済みパスワード、またはトラストを選択してください。" msgid "Sequence of characters to build the random string from." msgstr "ランダム文字列の作成元である文字のシーケンス。" #, python-format msgid "Server %(name)s delete failed: (%(code)s) %(message)s" msgstr "サーバー %(name)s の削除に失敗しました: (%(code)s) %(message)s" msgid "Server Group name." msgstr "サーバーグループ名。" msgid "Server name." msgstr "サーバー名。" msgid "Server to assign floating IP to." msgstr "Floating IP の割り当て先サーバー。" #, python-format msgid "" "Service %(service_name)s is not available for resource type " "%(resource_type)s, reason: %(reason)s" msgstr "" "サービス %(service_name)s はリソースタイプ %(resource_type)s では使用できませ" "ん。理由: %(reason)s" msgid "Service misconfigured" msgstr "サービスの設定に誤りがあります" msgid "Service temporarily unavailable" msgstr "サービスが一時的に使用できません" msgid "Set of parameters passed to this stack." msgstr "このスタックに渡すパラメーターのセット。" msgid "Set of rules for comparing characters in a character set." msgstr "文字セット内の文字を比較するための一連のルール。" msgid "Set of symbols and encodings." msgstr "一連のシンボルとエンコード。" msgid "Set to \"vpc\" to have IP address allocation associated to your VPC." msgstr "IP アドレスの割り当てを VPC に関連付けるには、\"vpc\" に設定します。" msgid "Set to true if DHCP is enabled and false if DHCP is disabled." msgstr "DHCP が有効である場合は TRUE、無効である場合は FALSE に設定します。" msgid "Severity of the alarm." msgstr "アラームの重大度。" msgid "Share description." 
msgstr "シェアの説明。" msgid "Share host." msgstr "シェアのホスト。" msgid "Share name." msgstr "シェアの名前。" msgid "Share network description." msgstr "シェアのネットワークの説明。" msgid "Share project ID." msgstr "シェアのプロジェクト ID。" msgid "Share protocol supported by shared filesystem." msgstr "共有ファイルシステムがサポートするシェアのプロトコル。" msgid "Share storage size in GB." msgstr "シェアのストレージ容量 (GB)。" msgid "Shared status of the metering label." msgstr "計測ラベルの共有状況。" msgid "Shared status of this firewall policy." msgstr "このファイアウォールポリシーの共有状況。" msgid "Shared status of this firewall rule." msgstr "このファイアウォールルールの共有状況。" msgid "Shared status of this firewall." msgstr "このファイアウォールの共有状況。" msgid "Show available commands." msgstr "利用可能なコマンドを表示する。" msgid "Shrinking volume" msgstr "ボリュームの縮小中" msgid "Signal data error" msgstr "信号データのエラー" #, python-format msgid "Signal resource during %s" msgstr "%s 中の信号リソース" #, python-format msgid "Single schema valid only for %(ltype)s, not %(utype)s" msgstr "単一スキーマは %(utype)s ではなく %(ltype)s にのみ有効です" msgid "Size of a secondary ephemeral data disk in GB." msgstr "セカンダリー一時データディスクのサイズ (GB)。" msgid "Size of adjustment." msgstr "調整のサイズ。" msgid "Size of encryption key, in bits. For example, 128 or 256." msgstr "暗号化キーのサイズ (ビット数)。例えば、128 や 256 など。" msgid "" "Size of local disk in GB. The \"0\" size is a special case that uses the " "native base image size as the size of the ephemeral root volume." msgstr "" "ローカルディスクのサイズ (GB)。\"0\" のサイズは、一時的なルートボリュームとし" "てネイティブのベースイメージのサイズを使用する特別なケースです。" msgid "" "Size of the block device in GB. If it is omitted, hypervisor driver " "calculates size." msgstr "" "ブロックデバイスの容量 (GB)。省略すると、ハイパーバイザードライバーによって容" "量が計算されます。" msgid "Size of the instance disk volume in GB." msgstr "インスタンスのボリュームの容量 (GB)。" msgid "Size of the volumes, in GB." msgstr "ボリューム容量 (GB)。" msgid "Smallest prefix size that can be allocated from the subnet pool." msgstr "" "サブネットプールから割り当てることができるプレフィックスサイズの最小値。" #, python-format msgid "Snapshot with id %s not found" msgstr "ID %s のスナップショットが見つかりません" msgid "" "SnapshotId is missing, this is required when specifying BlockDeviceMappings." msgstr "" "SnapshotId がありません。これは BlockDeviceMappings を指定する場合には必須で" "す。" #, python-format msgid "Software config with id %s not found" msgstr "ID %s のソフトウェア設定が見つかりません" msgid "Source IP address or CIDR." msgstr "送信元 IP アドレスまたは CIDR。" msgid "Source ip_address for this firewall rule." msgstr "このファイアウォールルールの送信元 IP アドレス。" msgid "Source port number or a range." msgstr "送信元ポート番号または範囲。" msgid "Source port range for this firewall rule." msgstr "このファイアウォールルールのソースポート範囲。" #, python-format msgid "Specified output key %s not found." msgstr "指定された出力キー %s が見つかりません。" #, python-format msgid "Specified status is invalid, defaulting to %s" msgstr "指定された状況は無効です。%s にデフォルト設定されます" #, python-format msgid "Specified subnet %(subnet)s does not belongs to network %(network)s." msgstr "" "指定したサブネット %(subnet)s がネットワーク %(network)s 属していません。" msgid "Specifies a custom discovery url for node discovery." msgstr "ノードを発見するためにカスタムディスカバリーの URL を指定します。" msgid "Specifies database names for creating databases on instance creation." msgstr "" "インスタンス作成時にデータベースを作成するためのデータベース名を指定します。" msgid "Specify the ACL permissions on who can read objects in the container." msgstr "" "コンテナー内のオブジェクトの読み取りを可能にする ACL 許可を指定します。" msgid "Specify the ACL permissions on who can write objects to the container." msgstr "" "オブジェクトのコンテナーへの書き込みを可能にする ACL 許可を指定します。" msgid "" "Specify whether the remote_ip_prefix will be excluded or not from traffic " "counters of the metering label. 
For example to not count the traffic of a " "specific IP address of a range." msgstr "" "remote_ip_prefix を計測ラベルのトラフィックカウンターから除外するかどうかを指" "定します。特定 IP アドレス範囲のトラフィックをカウントしない場合などに使用し" "ます。" #, python-format msgid "Stack %(stack_name)s already has an action (%(action)s) in progress." msgstr "" "スタック %(stack_name)s には既に進行中のアクション (%(action)s) があります。" msgid "Stack ID" msgstr "スタック ID" msgid "Stack Name" msgstr "スタック名" msgid "Stack id" msgstr "スタック ID" msgid "Stack name may not contain \"/\"" msgstr "スタック名には \"/\" を使用できません" msgid "Stack resource id" msgstr "スタックリソース ID" msgid "Stack unknown status" msgstr "スタックの状況が不明です" #, python-format msgid "Stack with id %s can not be found." msgstr "スタック ID %s が見つかりません。" #, python-format msgid "Stack with id %s not found" msgstr "%s の ID を持つスタックが見つかりません" #, fuzzy msgid "" "Stacks containing these tag names will be hidden. Multiple tags should be " "given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too)." msgstr "" "これらのタグ名を含むスタックは非表示になります。複数のタグを指定する場合は、" "コンマ区切りのリストで指定してください (例: hidden_stack_tags=hide_me," "me_too)。" msgid "Start address for the allocation pool." msgstr "割り当てプールの開始アドレス。" #, python-format msgid "Start resizing the group %(group)s" msgstr "グループ %(group)s のサイズ変更を開始" msgid "Start time for the time constraint. A CRON expression property." msgstr "時間制約の開始時間。CRON 式のプロパティー。" #, python-format msgid "State %s invalid for create" msgstr "状態 %s は作成には無効です" #, python-format msgid "State %s invalid for resume" msgstr "状態 %s は再開には無効です" #, python-format msgid "State %s invalid for suspend" msgstr "状態 %s は中断には無効です" msgid "Status" msgstr "ステータス" #, python-format msgid "String to split must be string; got %s" msgstr "分割する文字列は文字列でなければなりません。%s を受け取りました" msgid "String value with which to compare." msgstr "比較対象の文字列の値。" msgid "Subnet ID to associate with this interface." msgstr "このインターフェースに関連付けるサブネット ID。" msgid "Subnet ID to launch instance in." msgstr "インスタンスを起動するサブネット ID。" msgid "Subnet ID." msgstr "サブネット ID。" msgid "Subnet in which the vpn service will be created." msgstr "VPN サービスが作成されるサブネット。" msgid "" "Subnet in which to allocate the IP address for port. Used for creating port, " "based on derived properties. If subnet is specified, network property " "becomes optional." msgstr "" "ポートに IP アドレスを割り当てる必要のあるサブネット。抽出したプロパティーに" "基づいてポートの作成のために使用します。サブネットを指定すると、ネットワーク" "プロパティーは必須ではなくなります。" msgid "Subnet in which to allocate the IP address for this port." msgstr "このポートの IP アドレスを割り振るサブネット。" msgid "Subnet name or ID of this member." msgstr "このメンバーのサブネットの名前または ID。" msgid "Subnet of external fixed IP address." msgstr "外部 Fixed IP アドレスのサブネット。" msgid "Subnet of the vip." msgstr "vip のサブネット。" msgid "Subnets of this network." msgstr "このネットワークのサブネット。" msgid "" "Subset of trustor roles to be delegated to heat. If left unset, all roles of " "a user will be delegated to heat when creating a stack." msgstr "" "heat に委任される委託者ロールのサブセット。設定しない場合、ユーザーのロールは" "すべて、スタックの作成時に heat に委任されます。" msgid "Supplied metadata for the resources in the group." msgstr "グループ内のリソースに関して提供されたメタデータ。" msgid "Supported versions: keystone v3" msgstr "サポートされるバージョン: keystone v3" #, python-format msgid "Suspend of instance %s failed" msgstr "インスタンス %s の中断が失敗しました" #, python-format msgid "Suspend of server %s failed" msgstr "サーバー %s の中断が失敗しました" msgid "Swap space in MB." msgstr "スワップスペース (MB)。" msgid "System SIGHUP signal received." msgstr "システムの SIGHUP 信号を受信しました。" msgid "TCP or UDP port on which to listen for client traffic."
msgstr "クライアントトラフィックをリッスンする TCP ポートまたは UDP ポート。" msgid "TCP port on which the instance server is listening." msgstr "インスタンスサーバーが listen している TCP ポート。" msgid "TCP port on which the pool member listens for requests or connections." msgstr "プールメンバーが要求または接続を listen する TCP ポート。" msgid "" "TCP port on which to listen for client traffic that is associated with the " "vip address." msgstr "" "仮想 IP アドレスに関連付けられているクライアントトラフィックをリッスンする " "TCP ポート。" msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of OpenStack service finder functions." msgstr "" "OpenStack サービスの finder 機能のキャッシングに使用される dogpile.cache リー" "ジョン内でキャッシングされた任意の項目の TTL (秒)。" msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of service extensions." msgstr "" "サービス拡張のキャッシングに使用される dogpile.cache リージョン内でキャッシン" "グされた任意の項目の TTL (秒)。" msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of validation constraints." msgstr "" "検証制約のキャッシングに使用される dogpile.cache リージョン内でキャッシングさ" "れた任意の項目の TTL (秒)。" msgid "Tag key." msgstr "タグキー。" msgid "Tag value." msgstr "タグ値。" msgid "Tags to add to the image." msgstr "イメージに追加する必要のあるタグ。" msgid "Tags to attach to instance." msgstr "インスタンスに接続するタグ。" msgid "Tags to attach to the bucket." msgstr "バケットに接続するためのタグ。" msgid "Tags to attach to this group." msgstr "このグループに接続するためのタグ。" msgid "Task description." msgstr "タスクの説明。" msgid "Task name." msgstr "タスク名。" msgid "" "Template default for how the server should receive the metadata required for " "software configuration. POLL_SERVER_CFN will allow calls to the cfn API " "action DescribeStackResource authenticated with the provided keypair " "(requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the " "Heat API resource-show using the provided keystone credentials (requires " "keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL " "will create and populate a Swift TempURL with metadata for polling (requires " "object-store endpoint which supports TempURL).ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "サーバーがソフトウェアの設定に必要なメタデータを受信する方法を設定するテンプ" "レートのデフォルト。POLL_SERVER_CFN を設定すると、提供されるキーペア (有効化" "された heat-api-cfn が必要)で認証される cfn API アクションの " "DescribeStackResource を呼び出すことができます。POLL_SERVER_HEAT を設定する" "と、提供される keystone の認証情報 (keystone v3 API が必要で、stack_user_* の" "設定オプションを設定済み) を使用して Heat API の resource-show を呼び出すこと" "ができます。POLL_TEMP_URL を設定すると、 ポーリング (TempURL をサポートするオ" "ブジェクトストアのエンドポイントが必要) のためのメタデータを持つ Swift " "TempURL を作成し、データをロードできます。ZAQAR_MESSAGE を設定すると、専用の " "zaqar キューを作成し、ポーリングのためのメタデータを提供できます。" msgid "" "Template default for how the server should signal to heat with the " "deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN " "keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will " "create a Swift TempURL to be signaled via HTTP PUT (requires object-store " "endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat " "API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL " "will create a dedicated zaqar queue to be signaled using the provided " "keystone credentials." 
msgstr "" "サーバーが実装環境の出力値を使用して heat の信号を送信する方法を設定するテン" "プレートのデフォルト。CFN_SIGNAL を設定すると、CFN のキーペアで署名した URL " "(有効化された heat-api-cfn が必要)に対して HTTP POST を実行できます。" "TEMP_URL_SIGNAL を設定すると、Swift TempURL を作成し、HTTP PUT (TempURL をサ" "ポートするオブジェクトストアのエンドポイントが必要) を使用して信号を送信する" "ことができます。 HEAT_SIGNAL を設定すると、 提供される keystone の認証情報を" "使用して Heat API の resource-signal を呼び出すことができます。ZAQAR_SIGNAL " "を設定すると、専用の zaqar キューを作成し、 提供される keystone の認証情報を" "使用して信号を送信することができます。" #, python-format msgid "Template exceeds maximum allowed size (%s bytes)" msgstr "テンプレートが最大許容サイズ (%s バイト) を超えています" msgid "Template format version not found." msgstr "テンプレートのフォーマットバージョンが見つかりません。" #, python-format msgid "" "Template size (%(actual_len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "テンプレートサイズ (%(actual_len)s bytes) が許容される最大サイズ (%(limit)s " "bytes) を超えています。" msgid "Template that specifies the stack to be created as a resource." msgstr "リソースとして作成するスタックを指定するテンプレート。" #, python-format msgid "Template type is not supported: %s" msgstr "テンプレートタイプ %s はサポートされていません" msgid "Template version was not provided" msgstr "テンプレートバージョンが指定されていませんでした" #, python-format msgid "Template with version %s not found" msgstr "バージョン %s のテンプレートが見つかりません" msgid "TemplateBody or TemplateUrl were not given." msgstr "TemplateBody または TemplateUrl が指定されていません。" msgid "Tenant owning the health monitor." msgstr "ヘルスモニターを所有するテナント。" msgid "Tenant owning the pool member." msgstr "プールのメンバーを所有するテナント。" msgid "Tenant owning the pool." msgstr "プールを所有するテナント。" msgid "Tenant owning the port." msgstr "ポートを所有するテナント。" msgid "Tenant owning the router." msgstr "ルーターを所有するテナント。" msgid "Tenant owning the subnet." msgstr "サブネットを所有するテナント。" #, python-format msgid "Testing message %(text)s" msgstr "メッセージ %(text)s をテストしています" #, python-format msgid "The \"%(hook)s\" hook is not defined on %(resource)s" msgstr "%(resource)s で \"%(hook)s\" のフックが定義されていません" #, python-format msgid "The \"for_each\" argument to \"%s\" must contain a map" msgstr "" "\"%s\" に対する \"for_each\" 引数にはマップが含まれていなければなりません" #, fuzzy, python-format msgid "The %(entity)s (%(name)s) could not be found." msgstr "%(entity)s (%(name)s) が見つかりませんでした。" #, python-format msgid "The %s must be provided for each parameter group." msgstr "各パラメーターグループに対して %s を提供する必要があります。" #, python-format msgid "The %s of parameter group should be a list." msgstr "パラメーターグループの %s はリストである必要があります。" #, python-format msgid "The %s parameter must be assigned to one parameter group only." msgstr "" "%s パラメーターは 1 つのパラメーターグループにしか割り当てることができませ" "ん。" #, python-format msgid "The %s should be a list." msgstr "%s はリストである必要があります。" msgid "The API paste config file to use." msgstr "使用する API paste ファイル。" msgid "The AWS Access Key ID needs a subscription for the service" msgstr "AWS アクセスキー ID にはサービスのサブスクリプションが必要です" msgid "The Availability Zone where the specified instance is launched." msgstr "指定されたインスタンスが起動されるアベイラビリティーゾーン。" msgid "The Availability Zones in which to create the load balancer." msgstr "ロードバランサーを作成するアベイラビリティーゾーン。" msgid "The CIDR." msgstr "CIDR。" msgid "The DNS name for the LoadBalancer." msgstr "LoadBalancer の DNS 名。" msgid "The DNS name of the specified bucket." msgstr "指定されたバケットの DNS 名。" msgid "The DNS nameserver address." msgstr "DNS ネームサーバーのアドレス。" msgid "The HTTP method used for requests by the monitor of type HTTP." msgstr "タイプ HTTP のモニターで要求に使用される HTTP メソッド。" msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health." 
msgstr "" "メンバーヘルスをテストするためにモニターで使用される、HTTP 要求で使用される " "HTTP パス。" msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health. A valid value is a string the begins with a forward slash (/)." msgstr "" "メンバーの健全性をテストするためにモニターが使用する、HTTP 要求で使用される " "HTTP パス。有効な値は、スラッシュ (/) で始まる文字列です。" msgid "" "The HTTP status codes expected in response from the member to declare it " "healthy. Specify one of the following values: a single value, such as 200. a " "list, such as 200, 202. a range, such as 200-204." msgstr "" "正常であることを宣言するためにメンバーからの応答で予測される HTTP ステータス" "コード。以下の値のうちのいずれかを設定します: 単一の値 (200 など)、リスト " "(200, 202 など)、範囲 (200-204 など)" msgid "" "The ID of an existing instance to use to create the Auto Scaling group. If " "specify this property, will create the group use an existing instance " "instead of a launch configuration." msgstr "" "オートスケールグループの作成に使用する既存インスタンスの ID。このプロパティー" "を指定すると、起動設定ではなく、既存のインスタンスを使用してグループが作成さ" "れます。" msgid "" "The ID of an existing instance you want to use to create the launch " "configuration. All properties are derived from the instance with the " "exception of BlockDeviceMapping." msgstr "" "起動設定の作成に使用する既存インスタンスの ID。すべてのプロパティーは、" "BlockDeviceMapping を除き、インスタンスから派生します。" msgid "The ID of the attached network." msgstr "接続されたネットワークの ID。" msgid "The ID of the firewall policy that this firewall is associated with." msgstr "このファイアウォールが関連付けられているファイアウォールポリシー ID。" msgid "" "The ID of the hosted zone name that is associated with the LoadBalancer." msgstr "LoadBalancer に関連付けられたホストされるゾーン名の ID。" msgid "The ID of the image to create a volume from." msgstr "ボリュームの作成元となるイメージの ID。" msgid "The ID of the image to create the volume from." msgstr "ボリュームの作成元となるイメージの ID。" msgid "The ID of the instance to which the volume attaches." msgstr "ボリュームの接続先インスタンスの ID。" msgid "The ID of the load balancing pool." msgstr "ロードバランシングプールの ID。" msgid "The ID of the pool to which the pool member belongs." msgstr "プールメンバーが所属するプールの ID。" msgid "The ID of the server to which the volume attaches." msgstr "ボリュームの接続先サーバーの ID。" msgid "The ID of the snapshot to create a volume from." msgstr "ボリューム作成元のスナップショットの ID。" msgid "" "The ID of the tenant which will own the network. Only administrative users " "can set the tenant identifier; this cannot be changed using authorization " "policies." msgstr "" "ネットワークを所有することになるテナントの ID。テナント ID を設定できるのは管" "理ユーザーのみです。権限ポリシーを使用してこれを変更することはできません。" msgid "" "The ID of the tenant who owns the Load Balancer. Only administrative users " "can specify a tenant ID other than their own." msgstr "" "ロードバランサーを所有するテナントの ID。自分の ID 以外のテナント ID を指定で" "きるのは管理者に限られます。" msgid "The ID of the tenant who owns the listener." msgstr "このリスナーを所有するテナントの ID。" msgid "" "The ID of the tenant who owns the network. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "ネットワークを所有するテナントの ID。自分以外のテナント ID を指定できるのは管" "理ユーザーのみです。" msgid "" "The ID of the tenant who owns the subnet pool. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "サブネットプールを所有するテナントのID。自分の ID 以外にテナント ID を設定で" "きるのは管理者に限られます。" msgid "The ID of the volume to be attached." msgstr "接続するボリュームの ID。" msgid "" "The ID of the volume to boot from. Only one of volume_id or snapshot_id " "should be provided." msgstr "" "ブート元のボリュームの ID。volume_id と snapshot_id のいずれか一方のみを指定" "する必要があります。" msgid "The ID or name of the flavor to boot onto." msgstr "ブート先のフレーバーの ID または名前。" msgid "The ID or name of the image to boot with." 
msgstr "ブート時に使用するイメージの ID または名前。" msgid "" "The IDs of the DHCP agent to schedule the network. Note that the default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "ネットワークをスケジュールする DHCP エージェントの ID。Neutron のデフォルトポ" "リシー設定では、このプロパティーの使用は管理ユーザーのみに制限されることに注" "意してください。" msgid "The IP address of the pool member." msgstr "プールメンバーの IP アドレス。" msgid "The IP version, which is 4 or 6." msgstr "IP バージョン (4 または 6)。" #, python-format msgid "The Parameter (%(key)s) was not defined in template." msgstr "パラメーター (%(key)s) がテンプレートに定義されませんでした。" #, python-format msgid "The Parameter (%(key)s) was not provided." msgstr "パラメーター (%(key)s) が指定されませんでした。" msgid "The QoS policy ID attached to this network." msgstr "このネットワークに追加する QoS ポリシーの ID。" msgid "The QoS policy ID attached to this port." msgstr "このポートに追加した QoS ポリシーの ID。" #, python-format msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect." msgstr "参照された属性 (%(resource)s %(key)s) が正しくありません。" #, python-format msgid "The Resource %s requires replacement." msgstr "リソース %s は置換する必要があります。" #, python-format msgid "" "The Resource (%(resource_name)s) could not be found in Stack %(stack_name)s." msgstr "" "スタック %(stack_name)s でリソース (%(resource_name)s) が見つかりませんでし" "た。" #, python-format msgid "The Resource (%(resource_name)s) is not available." msgstr "リソース %(resource_name)s) は使用できません。" #, python-format msgid "The Snapshot (%(snapshot)s) for Stack (%(stack)s) could not be found." msgstr "" "スタック (%(stack)s) のスナップショット (%(snapshot)s) が見つかりませんでし" "た。" #, python-format msgid "The Stack (%(stack_name)s) already exists." msgstr "スタック %(stack_name)s) は既に存在します。" msgid "The Template must be a JSON or YAML document." msgstr "テンプレートは JSON または YAML ドキュメントである必要があります。" msgid "The URI to the container." msgstr "コンテナーの URI。" msgid "The URI to the created container." msgstr "作成したコンテナーの URI。" msgid "The URI to the created secret." msgstr "作成した秘密の URL。" msgid "The URI to the order." msgstr "注文の URI。" msgid "The URIs to container consumers." msgstr "コンテナーの使用者のURI。" msgid "The URIs to secrets stored in container." msgstr "コンテナーに保存される秘密の URI。" msgid "" "The URL of a template that specifies the stack to be created as a resource." msgstr "リソースとして作成するスタックを指定するテンプレートの URL。" msgid "The URL of the container." msgstr "コンテナーの URL。" msgid "The VIP address of the LoadBalancer." msgstr "ロードバランサーの VIP アドレス。" msgid "The VIP port of the LoadBalancer." msgstr "ロードバランサー の VIP ポート。" msgid "The VIP subnet of the LoadBalancer." msgstr "ロードバランサーの VIP サブネット。" msgid "The action or operation requested is invalid" msgstr "要求されたアクションまたは操作は無効です" msgid "The action to be executed when the receiver is signaled." msgstr "レシーバーが信号を受信した際に実行されるアクション。" msgid "The administrative state of the firewall." msgstr "ファイアウォールの管理状態。" msgid "The administrative state of the health monitor." msgstr "ヘルスモニターの管理状態。" msgid "The administrative state of the ipsec site connection." msgstr "ipsec サイト接続の管理状態。" msgid "The administrative state of the pool member." msgstr "プールメンバーの管理状態。" msgid "The administrative state of the router." msgstr "ルーターの管理状態。" msgid "The administrative state of the vpn service." msgstr "VPN サービスの管理状態。" msgid "The administrative state of this Load Balancer." msgstr "このロードバランサーの管理状態。" msgid "The administrative state of this health monitor." msgstr "このヘルスモニターの管理状態。" msgid "The administrative state of this listener." msgstr "このリスナーの管理状態。" msgid "The administrative state of this pool member." 
msgstr "このプールのメンバーの管理状態。" msgid "The administrative state of this pool." msgstr "このプールの管理状態。" msgid "The administrative state of this port." msgstr "このポートの管理状態。" msgid "The administrative state of this vip." msgstr "この VIP の管理状態。" msgid "The administrative status of the network." msgstr "ネットワークの管理状況。" msgid "The administrator password for the server." msgstr "サーバーの管理者パスワード。" msgid "The aggregation method to compare to the threshold." msgstr "しきい値と比較するためのアグリゲートメソッド。" msgid "The algorithm type used to generate the secret." msgstr "秘密を生成するために使用されるアルゴリズムのタイプ。" msgid "" "The algorithm type used to generate the secret. Required for key and " "asymmetric types of order." msgstr "" "秘密を生成するために使用されるアルゴリズムタイプ。鍵と非対称なタイプの注文で" "必要。" msgid "The algorithm used to distribute load between the members of the pool." msgstr "プールのメンバー間で負荷を分散するために使用されるアルゴリズム。" msgid "The allocated address of this IP." msgstr "この IP に割り当て済みのアドレス。" msgid "" "The approximate interval, in seconds, between health checks of an individual " "instance." msgstr "個々のインスタンスのヘルスチェック間のおおよその間隔 (秒)。" msgid "The authentication hash algorithm of the ipsec policy." msgstr "ipsec ポリシーの認証ハッシュアルゴリズム。" msgid "The authentication hash algorithm used by the ike policy." msgstr "IKE ポリシーで使用される認証ハッシュアルゴリズム。" msgid "The authentication mode of the ipsec site connection." msgstr "ipsec サイト接続の認証モード。" msgid "The availability zone in which the volume is located." msgstr "ボリュームが存在するアベイラビリティーゾーン。" msgid "The availability zone in which the volume will be created." msgstr "ボリュームが作成されるアベイラビリティーゾーン。" msgid "The availability zone of shared filesystem." msgstr "共有ファイルシステムのアベイラビリティーゾーン。" msgid "The bay name." msgstr "ベイの名前。" msgid "The bit-length of the secret." msgstr "秘密のビット長。" msgid "" "The bit-length of the secret. Required for key and asymmetric types of order." msgstr "秘密のビット長。鍵と非対称なタイプの注文で必要。" #, python-format msgid "The bucket you tried to delete is not empty (%s)." msgstr "削除しようとしたバケットは空ではありません (%s)。" msgid "The can be used to unmap a defined device." msgstr "定義済みのデバイスをマップ解除するために使用できます。" msgid "The certificate or AWS Key ID provided does not exist" msgstr "指定された証明書または AWS キー ID が存在しません" msgid "The channel for receiving signals." msgstr "信号の受信のためのチャンネル。" msgid "" "The class that provides encryption support. For example, nova.volume." "encryptors.luks.LuksEncryptor." msgstr "" "暗号化をサポートするクラス。例えば、nova.volume.encryptors.luks." "LuksEncryptor など。" #, python-format msgid "The client (%(client_name)s) is not available." msgstr "クライアント (%(client_name)s) は使用できません。" msgid "The cluster ID this node belongs to." msgstr "このノードが属するクラスター ID。" msgid "The config value of the software config." msgstr "ソフトウェア設定の設定値。" msgid "" "The configuration tool used to actually apply the configuration on a server. " "This string property has to be understood by in-instance tools running " "inside deployed servers." msgstr "" "実際に設定をサーバーで適用するために使用される設定ツール。この文字列のプロパ" "ティーは、デプロイ済みサーバーの中で動作する、インスタンス内のツールによって" "認識される必要があります。" msgid "The content of the CSR. Only for certificate orders." msgstr "CSR のコンテンツ。証明書の注文にのみ必要。" #, python-format msgid "" "The contents of personality file \"%(path)s\" is larger than the maximum " "allowed personality file size (%(max_size)s bytes)." msgstr "" "パーソナリティーファイル \"%(path)s\" の内容が、許可されているパーソナリ" "ティーファイルの最大容量 (%(max_size)s バイト) を超えています。" msgid "The current size of AutoscalingResourceGroup." msgstr "AutoscalingResourceGroup の現行サイズ。" msgid "The current status of the volume." 
msgstr "ボリュームの現行状況。" msgid "" "The database instance was created, but heat failed to set up the datastore. " "If a database instance is in the FAILED state, it should be deleted and a " "new one should be created." msgstr "" "データベースインスタンスが作成されましたが、heat はデータストアをセットアップ" "できませんでした。データベースインスタンスが FAILED 状態の場合は、そのインス" "タンスを削除し、新しいインスタンスを作成する必要があります。" msgid "" "The dead peer detection protocol configuration of the ipsec site connection." msgstr "ipsec サイト接続のデッドピア検出プロトコル設定。" msgid "The decrypted secret payload." msgstr "復号化された秘密のペイロード。" msgid "" "The default cloud-init user set up for each image (e.g. \"ubuntu\" for " "Ubuntu 12.04+, \"fedora\" for Fedora 19+ and \"cloud-user\" for CentOS/RHEL " "6.5)." msgstr "" "各イメージに関するデフォルトの cloud-init のユーザーセットアップ (Ubuntu " "12.04+ の場合の \"ubuntu\"、Fedora 19+ の場合の \"fedora\"、CentOS/RHEL 6.5 " "の場合の \"cloud-user\" など)。" msgid "The description for the QoS policy." msgstr "QoS ポリシーの説明。" msgid "The description of the ike policy." msgstr "IKE ポリシーの説明。" msgid "The description of the ipsec policy." msgstr "ipsec ポリシーの説明。" msgid "The description of the ipsec site connection." msgstr "ipsec サイト接続の説明。" msgid "The description of the vpn service." msgstr "VPN サービスの説明。" msgid "The destination for static route." msgstr "静的なルートの宛先。" msgid "The details of physical object." msgstr "物理オブジェクトの詳細。" msgid "The device id for the network gateway." msgstr "ネットワークゲートウェイのデバイス ID。" msgid "" "The device where the volume is exposed on the instance. This assignment may " "not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "インスタンスでボリュームが公開されているデバイス。この割り当ては順守されない" "可能性があり、パス /dev/disk/by-id/virtio- を代わりに使用することを" "お勧めします。" msgid "" "The direction in which metering rule is applied, either ingress or egress." msgstr "計測ルールが適用される方向 (受信または送信)。" msgid "The direction in which metering rule is applied." msgstr "計測ルールが適用される方向。" msgid "" "The direction in which the security group rule is applied. For a compute " "instance, an ingress security group rule matches traffic that is incoming " "(ingress) for that instance. An egress rule is applied to traffic leaving " "the instance." msgstr "" "セキュリティーグループルールが適用される方向。計算インスタンスの場合、受信セ" "キュリティーグループルールはそのインスタンスの入力 (受信) であるトラフィック" "に適合します。送信ルールはインスタンスから出るトラフィックに適用されます。" msgid "The directory to search for environment files." msgstr "環境ファイルを検索するディレクトリー。" msgid "The directory to search for template files." msgstr "テンプレートファイルを検索するためのディレクトリ先。" msgid "The ebs volume to attach to the instance." msgstr "インスタンスに接続する ebs ボリューム。" msgid "The encapsulation mode of the ipsec policy." msgstr "ipsec ポリシーのカプセル化モード。" msgid "The encoding format used to provide the payload data." msgstr "ペイロードデータを提供するために使用されるエンコード形式。" msgid "The encryption algorithm of the ipsec policy." msgstr "ipsec ポリシーの暗号化アルゴリズム。" msgid "The encryption algorithm or mode. For example, aes-xts-plain64." msgstr "暗号化のアルゴリズムまたはモード。例えば aes-xts-plain64 など。" msgid "The encryption algorithm used by the ike policy." msgstr "IKE ポリシーで使用される暗号化アルゴリズム。" msgid "The environment is not a valid YAML mapping data type." msgstr "この環境は有効な YAML マッピングデータ形式ではありません。" msgid "The expiration date for the secret in ISO-8601 format." msgstr "ISO-8601 形式の秘密の有効期限。" msgid "The external load balancer port number." msgstr "外部ロードバランサーのポート番号。" msgid "The extra specs key and value pairs of the volume type." msgstr "追加スペックに関するボリュームタイプのキーと値のペア。" msgid "The flavor to use." 
msgstr "使用するフレーバー。" #, fuzzy, python-format msgid "The following parameters are immutable and may not be updated: %(keys)s" msgstr "以下のパラメーターは変更できず、更新できません: %(keys)s" #, python-format msgid "The function %s is not supported in this version of HOT." msgstr "関数 %s は、このバージョンの HOT でサポートされていません。" msgid "" "The gateway IP address. Set to any of [ null | ~ | \"\" ] to create/update a " "subnet without a gateway. If omitted when creation, neutron will assign the " "first free IP address within the subnet to the gateway automatically. If " "remove this from template when update, the old gateway IP address will be " "detached." msgstr "" "ゲートウェイの IP アドレス。ゲートウェイを使用せずにサブネットの作成または更" "新を行うには 、[ null | ~ | \"\" ] のうちの任意の値を設定します。作成時にこの" "値を指定しないと、neutron はサブネット内の空いている最初の IP アドレスを自動" "的にゲートウェイに割り当てます。更新時にテンプレートからこの値を削除すると、" "古いゲートウェイの IP アドレスは接続が解除されます。" #, python-format msgid "The grouped parameter %s does not reference a valid parameter." msgstr "" "グループ化されたパラメーター %s が有効なパラメーターを参照していません。" msgid "The host from the container URL." msgstr "コンテナー URL からのホスト。" msgid "The host from which a user is allowed to connect to the database." msgstr "ユーザーがデータベースへの接続を許可されるホスト。" msgid "" "The id for L2 segment on the external side of the network gateway. Must be " "specified when using vlan." msgstr "" "ネットワークゲートウェイの外部サイドの L2 セグメントの ID。vlan の使用時には" "指定が必須です。" msgid "The identifier of the CA to use." msgstr "使用する CA の識別子。" msgid "The image ID. Glance will generate a UUID if not specified." msgstr "イメージ ID。指定しない場合は Glance によって UUID が生成されます。" msgid "The initiator of the ipsec site connection." msgstr "ipsec サイト接続のイニシエーター。" msgid "The input string to be stored." msgstr "保存される入力文字列。" msgid "The interface name for the network gateway." msgstr "ネットワークゲートウェイのインターフェース名。" msgid "The internal network to connect on the network gateway." msgstr "ネットワークゲートウェイで接続する内部ネットワーク。" msgid "The last operation for the database instance failed due to an error." msgstr "データベースインスタンスの最後の操作がエラーのために失敗しました。" #, python-format msgid "The length must be at least %(min)s." msgstr "長さは %(min)s 以上でなければなりません。" #, python-format msgid "The length must be in the range %(min)s to %(max)s." msgstr "長さは %(min)s から %(max)s までの範囲になければなりません。" #, python-format msgid "The length must be no greater than %(max)s." msgstr "長さは %(max)s を超えてはなりません。" msgid "The length of time, in minutes, to wait for the nested stack creation." msgstr "ネストされたスタック作成を待機する時間 (分)。" msgid "" "The list of HTTP status codes expected in response from the member to " "declare it healthy." msgstr "" "正常であることを宣言するためにメンバーからの応答で必要な HTTP ステータスコー" "ドのリスト。" msgid "The list of Nova server IDs load balanced." msgstr "負荷分散される Nova サーバー ID の一覧。" msgid "The list of Pools related to this monitor." msgstr "このモニターに関連するプールのリスト。" msgid "The list of attachments of the volume." msgstr "ボリュームの接続機構のリスト。" msgid "" "The list of configurations for the different lifecycle actions of the " "represented software component." msgstr "" "表示されているソフトウェアコンポーネントのさまざまなライフサイクルアクション" "の設定のリスト。" msgid "The list of instance IDs load balanced." msgstr "ロードバランシングされたインスタンス ID のリスト。" msgid "" "The list of resource types to create. This list may contain type names or " "aliases defined in the resource registry. Specific template names are not " "supported." msgstr "" "作成対象のリソースタイプのリスト。このリストには、リソースレジストリーで定義" "されたタイプ名や別名が含まれる場合があります。特定のテンプレート名はサポート" "されません。" msgid "The list of tags to associate with the volume." msgstr "ボリュームに関連付けるタグのリスト。" msgid "The load balancer transport protocol to use." 
msgstr "使用するロードバランサーのトランスポートプロトコル。" msgid "" "The location where the volume is exposed on the instance. This assignment " "may not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "インスタンスでボリュームが公開されているロケーション。この割り当ては順守され" "ない可能性があり、パス /dev/disk/by-id/virtio- を代わりに使用するこ" "とをお勧めします。" msgid "The manually assigned alternative public IPv4 address of the server." msgstr "サーバーに手動で割り当てられた代替パブリック IPv4 アドレス。" msgid "The manually assigned alternative public IPv6 address of the server." msgstr "サーバーに手動で割り当てられた代替パブリック IPv6 アドレス。" msgid "The maximum number of connections per second allowed for the vip." msgstr "VIP に許容される 1 秒あたりの最大接続数。" msgid "" "The maximum number of connections permitted for this load balancer. Defaults " "to -1, which is infinite." msgstr "" "このロードバランサーで許容される接続の最大数。デフォルト値は -1 であり、無制" "限を指します。" msgid "The maximum number of resources to create at once." msgstr "一度に作成可能なリソースの最大数。" msgid "The maximum number of resources to replace at once." msgstr "一度に置き換えるリソースの最大数。" msgid "" "The maximum number of seconds to wait for the resource to signal completion. " "Once the timeout is reached, creation of the signal resource will fail." msgstr "" "リソースが完了を送信するのを待機する最大秒数。タイムアウトに達すると、信号リ" "ソースの作成は失敗します。" msgid "" "The maximum port number in the range that is matched by the security group " "rule. The port_range_min attribute constrains the port_range_max attribute. " "If the protocol is ICMP, this value must be an ICMP type." msgstr "" "セキュリティーグループルールによって突き合わせが行われる範囲内の最大ポート番" "号。port_range_min 属性により、port_range_max 属性が制限されます。プロトコル" "が ICMP の場合、この値は ICMP タイプでなければなりません。" msgid "" "The maximum transmission unit size (in bytes) of the ipsec site connection." msgstr "ipsec サイト接続の最大伝送単位サイズ (バイト)。" msgid "The maximum transmission unit size(in bytes) for the network." msgstr "ネットワークの転送単位の最大容量 (バイト)。" msgid "The metering label ID to associate with this metering rule." msgstr "この計測ルールに関連付けられている計測ラベル ID。" msgid "" "The metric dimensions to match to the alarm dimensions. One or more " "dimension key names separated by a comma." msgstr "" "アラームの次元に合致するメトリックの次元。コンマで区切られた 1 つ以上の次元の" "キー名。" msgid "" "The minimum number of characters from this character class that will be in " "the generated string." msgstr "生成される文字列に含められる、この文字クラスの文字の最小数。" msgid "" "The minimum number of characters from this sequence that will be in the " "generated string." msgstr "生成される文字列に含められる、このシーケンスの文字の最小数。" msgid "" "The minimum number of resources in service while rolling updates are being " "executed." msgstr "" "更新のロールアップが実行されている間にサービス中であるリソースの最小数。" msgid "" "The minimum port number in the range that is matched by the security group " "rule. If the protocol is TCP or UDP, this value must be less than or equal " "to the value of the port_range_max attribute. If the protocol is ICMP, this " "value must be an ICMP type." msgstr "" "セキュリティーグループルールによって突き合わせが行われる範囲内の最小ポート番" "号。プロトコルが TCP または UDP の場合、この値はport_range_max 属性の値以下で" "なければなりません。プロトコルがICMP の場合、この値は ICMP タイプでなければな" "りません。" msgid "The name for the QoS policy." msgstr "QoS ポリシーの名前。" msgid "The name for the address scope." msgstr "アドレススコープの名前。" msgid "" "The name of the driver used for instantiating container networks. By " "default, Magnum will choose the pre-configured network driver based on COE " "type." msgstr "" "コンテナーネットワークのインスタンス化のために使用されるドライバーの名前。デ" "フォルトでは、Magnum はCOE タイプに基づいて事前に設定されたネットワークドライ" "バーを選択します。" msgid "The name of the error document." 
msgstr "エラー文書の名前。" msgid "The name of the hosted zone that is associated with the LoadBalancer." msgstr "LoadBalancer に関連付けられたホストされるゾーンの名前。" msgid "The name of the ike policy." msgstr "IKE ポリシーの名前。" msgid "The name of the index document." msgstr "索引文書の名前。" msgid "The name of the ipsec policy." msgstr "ipsec ポリシーの名前。" msgid "The name of the ipsec site connection." msgstr "ipsec サイト接続の名前。" msgid "The name of the key pair." msgstr "キーペアの名前。" msgid "The name of the network gateway." msgstr "ネットワークゲートウェイの名前。" msgid "The name of the network." msgstr "ネットワークの名前。" msgid "The name of the router." msgstr "ルーターの名前。" msgid "The name of the subnet." msgstr "サブネットの名前。" msgid "The name of the user that the new key will belong to." msgstr "新規キーが属するユーザーの名前。" msgid "" "The name of the virtual device. The name must be in the form ephemeralX " "where X is a number starting from zero (0); for example, ephemeral0." msgstr "" "仮想デバイスの名前。この名前は ephemeralX という形式にする必要があります。X " "は数値でゼロ (0) から開始されます。例えば、ephemeral0 です。" msgid "The name of the vpn service." msgstr "VPN サービスの名前。" msgid "The name or ID of QoS policy to attach to this network." msgstr "このネットワークに追加する QoS ポリシーの名前または ID。" msgid "The name or ID of QoS policy to attach to this port." msgstr "このポートに追加する QoS ポリシーの名前または ID。" msgid "The name or ID of parent of this keystone project in hierarchy." msgstr "階層内のこの keystone プロジェクトの親の名前または ID。" msgid "The name or ID of target cluster." msgstr "ターゲットクラスターの名前または ID。" msgid "The name or ID of the bay model." msgstr "ベイモデルの名前または ID。" msgid "The name or ID of the subnet on which to allocate the VIP address." msgstr "VIP アドレスを割り当てる必要のあるサブネットの名前または ID。" msgid "The name or ID of the subnet pool." msgstr "サブネットプールの名前または ID。" msgid "The name or id of the Senlin profile." msgstr "Senlin プロファイルの名前または ID。" msgid "The negotiation mode of the ike policy." msgstr "IKE ポリシーのネゴシエーションモード。" msgid "The next hop for the destination." msgstr "宛先のネクストホップ。" msgid "The node count for this bay." msgstr "このベイのノード数。" msgid "The notification methods to use when an alarm state is ALARM." msgstr "アラームの状態が ALARM の場合に使用する通知方法。" msgid "The notification methods to use when an alarm state is OK." msgstr "アラームの状態が OK の場合に使用する通知方法。" msgid "The notification methods to use when an alarm state is UNDETERMINED." msgstr "アラームの状態が UNDETERMINED の場合に使用する通知方法。" msgid "The number of I/O operations per second that the volume supports." msgstr "ボリュームでサポートされる 1 秒当たりの入出力操作数。" msgid "The number of bytes stored in the container." msgstr "コンテナーに保管されたバイト数。" msgid "" "The number of consecutive health probe failures required before moving the " "instance to the unhealthy state" msgstr "" "インスタンスを正常でない状態に移行する前に、連続するヘルスプローブの失敗数が" "必要です。" msgid "" "The number of consecutive health probe successes required before moving the " "instance to the healthy state." msgstr "" "インスタンスを正常な状態に移行する前に、連続するヘルスプローブの成功数が必要" "です。" msgid "The number of master nodes for this bay." msgstr "このベイのマスターノードの数。" msgid "The number of objects stored in the container." msgstr "コンテナーに保管されたオブジェクトの数。" msgid "The number of replicas to be created." msgstr "作成されるレプリカの数。" msgid "The number of resources to create." msgstr "作成対象のリソースの数。" msgid "The number of seconds to wait between batches of updates." msgstr "次の更新のバッチまで待機する秒数。" msgid "The number of seconds to wait between batches." msgstr "バッチとバッチの間の待機時間 (秒)。" msgid "The number of seconds to wait for the cluster actions." msgstr "クラスターのアクションを待機する秒数。" msgid "" "The number of seconds to wait for the correct number of signals to arrive." 
msgstr "正しい数のシグナルが到着するのを待つ秒数。" msgid "" "The number of success signals that must be received before the stack " "creation process continues." msgstr "スタック作成プロセスを続行するために受信する必要のある成功シグナル数。" msgid "" "The optional public key. This allows users to supply the public key from a " "pre-existing key pair. If not supplied, a new key pair will be generated." msgstr "" "オプションの公開鍵。これを使用して、ユーザーは既存のキーペアからの公開鍵を指" "定できます。指定しない場合は、新しいキーペアが生成されます。" msgid "" "The owner tenant ID of the address scope. Only administrative users can " "specify a tenant ID other than their own." msgstr "アドレススコープの所有者" msgid "The owner tenant ID of this QoS policy." msgstr "この QoS ポリシーの所有者のテナント ID。" msgid "The owner tenant ID of this rule." msgstr "このルールの所有者のテナント ID。" msgid "" "The owner tenant ID. Only required if the caller has an administrative role " "and wants to create a RBAC for another tenant." msgstr "" "所有者のテナント ID。呼び出し元が管理者のロールを持ち、他のテナントのために " "RBAC を設定したい場合にのみ必要となります。" msgid "The parameters passed to action when the receiver is signaled." msgstr "レシーバーが信号を受信した際にアクションに渡されるパラメーター。" msgid "The parent URL of the container." msgstr "コンテナーの親 URL。" msgid "The payload of the created certificate, if available." msgstr "作成した証明書 (存在する場合) のペイロード。" msgid "The payload of the created intermediates, if available." msgstr "作成した中間証明書 (存在する場合) のペイロード。" msgid "The payload of the created private key, if available." msgstr "作成した秘密鍵 (存在する場合) のペイロード。" msgid "The payload of the created public key, if available." msgstr "作成した公開鍵 (存在する場合) のペイロード。" msgid "The perfect forward secrecy of the ike policy." msgstr "IKE ポリシーの Perfect Forward Secrecy。" msgid "The perfect forward secrecy of the ipsec policy." msgstr "ipsec ポリシーの Perfect Forward Secrecy。" #, python-format msgid "The personality property may not contain greater than %s entries." msgstr "パーソナリティープロパティーには %s を超える項目を含めないでください。" msgid "The physical mechanism by which the virtual network is implemented." msgstr "仮想ネットワークを実装する物理メカニズム。" #, python-format msgid "The physical resource for (%(name)s) exists." msgstr "(%(name)s) の物理リソースが存在します。" msgid "The port being checked." msgstr "ポートの検査中。" msgid "The port id, either subnet or port_id should be specified." msgstr "ポート ID。サブネットまたは port_id を指定する必要があります。" msgid "The port on which the server will listen." msgstr "サーバーが listen するポート。" msgid "The port, either subnet or port should be specified." msgstr "ポート。サブネット、またはポートを指定する必要があります。" msgid "The pre-shared key string of the ipsec site connection." msgstr "ipsec サイト接続の事前共有鍵文字列。" msgid "The private key if it has been saved." msgstr "保存された場合の秘密鍵。" msgid "The profile of certificate to use." msgstr "使用する証明書のプロファイル。" msgid "" "The protocol that is matched by the security group rule. Valid values " "include tcp, udp, and icmp." msgstr "" "セキュリティーグループルールによって突き合わせが行われるプロトコル。有効な値" "は tcp、udp、および icmp です。" msgid "The public key." msgstr "公開鍵。" msgid "The query string is malformed" msgstr "クエリの文字列の形式が正しくありません" msgid "The query to filter the metrics." msgstr "メトリックをフィルタリングするクエリー。" msgid "" "The random string generated by this resource. This value is also available " "by referencing the resource." msgstr "" "このリソースによってランダム文字列が生成されました。この値はリソースの参照に" "よっても入手可能です。" msgid "The reference to a LaunchConfiguration resource." msgstr "LaunchConfiguration リソースに対する参照。" msgid "" "The remote IP prefix (CIDR) to be associated with this security group rule." msgstr "" "このセキュリティーグループルールに関連付けるリモート IP プレフィックス" "(CIDR)。" msgid "The remote branch router identity of the ipsec site connection." 
msgstr "ipsec サイト接続のリモートブランチルーター ID。" msgid "The remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "" "リモートブランチルーターのパブリック IPv4 アドレスまたは IPv6 アドレスあるい" "は FQDN。" msgid "" "The remote group ID to be associated with this security group rule. If no " "value is specified then this rule will use this security group for the " "remote_group_id. The remote mode parameter must be set to \"remote_group_id" "\"." msgstr "" "このセキュリティーグループルールに関連付けるリモートグループ ID。値を指定しな" "いと、このルールはこのセキュリティーグループを remote_group_id に使用します。" "リモートモードのパラメーターは \"remote_group_id\" に設定する必要があります。" msgid "The remote subnet(s) in CIDR format of the ipsec site connection." msgstr "ipsec サイト接続の CIDR 形式のリモートサブネット。" msgid "The request is missing an action or operation parameter" msgstr "要求にアクションまたは操作パラメーターがありません" msgid "The request processing has failed due to an internal error" msgstr "内部エラーのため、要求の処理に失敗しました" msgid "The request signature does not conform to AWS standards" msgstr "要求シグニチャーが AWS 標準に準拠していません" msgid "" "The request signature we calculated does not match the signature you provided" msgstr "計算された要求シグニチャーが指定したシグニチャーに一致しません" msgid "The requested action is not yet implemented" msgstr "要求されたアクションはまだ実装されていません" #, python-format msgid "The resource %s is already being updated." msgstr "リソース %s は既に更新中です。" msgid "The resource href of the queue." msgstr "キューのリソース href。" msgid "The route mode of the ipsec site connection." msgstr "ipsec サイト接続の経路モード。" msgid "The router id." msgstr "ルーター ID。" msgid "The router to which the vpn service will be inserted." msgstr "vpn サービスが挿入されるルーター。" msgid "The router." msgstr "ルーター。" msgid "The safety assessment lifetime configuration for the ike policy." msgstr "IKE ポリシーの安全アセスメント存続期間設定。" msgid "The safety assessment lifetime configuration of the ipsec policy." msgstr "ipsec ポリシーの安全アセスメント存続期間設定。" msgid "" "The security group that you can use as part of your inbound rules for your " "LoadBalancer's back-end instances." msgstr "" "ロードバランサーのバックエンドのインスタンスの受信ルールの一部として使用可能" "なセキュリティーグループ。" msgid "" "The server could not comply with the request since it is either malformed or " "otherwise incorrect." msgstr "" "要求の形式が誤っているか、要求が正しくないため、サーバーは要求に従うことがで" "きませんでした。" msgid "The set of parameters passed to this nested stack." msgstr "このネストされたスタックに渡すパラメーターの集合。" msgid "The size in GB of the docker volume." msgstr "ドッカー容量のサイズ (GB)。" msgid "The size of AutoScalingGroup can not be less than zero" msgstr "オートスケールグループのサイズをゼロより小さくすることはできません" msgid "" "The size of the prefix to allocate when the cidr or prefixlen attributes are " "not specified while creating a subnet." msgstr "" "サブネットの作成時に CIDR や prefixlen の属性が指定されない場合に割り当てるプ" "レフィックスのサイズ。" msgid "The size of the swap, in MB." msgstr "スワップの容量 (MB)。" msgid "The size of the volume in GB." msgstr "GB 単位のボリューム容量。" msgid "" "The size of the volume, in GB. It is safe to leave this blank and have the " "Compute service infer the size." msgstr "" "ボリュームのギガバイト単位の容量。このフィールドを空欄にして、Compute サービ" "スに容量を推定させるのが安全です。" msgid "" "The size of the volume, in GB. Must be equal or greater than the size of the " "snapshot. It is safe to leave this blank and have the Compute service infer " "the size." msgstr "" "ボリュームのサイズ (GB)。スナップショットのサイズ以上でなければなりません。こ" "れをブランクのままにしておいて、計算サービスにこのサイズを推測させることもで" "きます。" msgid "The snapshot the volume was created from, if any." msgstr "ボリュームの作成元となったスナップショット (ある場合)。" msgid "The source of certificate request." msgstr "証明書のリクエストのソース。" #, python-format msgid "The specified reference \"%(resource)s\" (in %(key)s) is incorrect." 
msgstr "指定された参照 \"%(resource)s\" (%(key)s 内) は正しくありません。" msgid "The start and end addresses for the allocation pools." msgstr "割り当てプールの開始アドレスと終了アドレス。" msgid "The status of the container." msgstr "コンテナーの状況。" msgid "The status of the firewall." msgstr "ファイアウォールの状況。" msgid "The status of the ipsec site connection." msgstr "ipsec サイト接続の状況。" msgid "The status of the network." msgstr "ネットワークの状況。" msgid "The status of the order." msgstr "注文の状態。" msgid "The status of the port." msgstr "ポートの状況。" msgid "The status of the router." msgstr "ルーターの状況。" msgid "The status of the secret." msgstr "秘密の状態。" msgid "The status of the vpn service." msgstr "VPN サービスの状況。" msgid "" "The string that was stored. This value is also available by referencing the " "resource." msgstr "" "文字列が保存されました。この値は リソースを参照することにより取得することもで" "きます。" msgid "The subject of the certificate request." msgstr "証明書のリクエストの件名。" msgid "" "The subnet for the port on which the members of the pool will be connected." msgstr "プールのメンバーが接続されるポートのサブネット。" msgid "The subnet, either subnet or port should be specified." msgstr "サブネット。サブネットまたはポートを指定する必要があります。" msgid "The tag key name." msgstr "タグキー名。" msgid "The tag value." msgstr "タグ値。" msgid "The template is not a JSON object or YAML mapping." msgstr "テンプレートが JSON オブジェクトでも YAML マッピングでもありません。" #, python-format msgid "The template section is invalid: %(section)s" msgstr "template セクションが無効です: %(section)s" #, python-format msgid "The template version is invalid: %(explanation)s" msgstr "テンプレートバージョンが無効です: %(explanation)s" msgid "The tenant owning this floating IP." msgstr "この Floating IP を所有するテナント。" msgid "The tenant owning this network." msgstr "このネットワークを所有するテナント。" msgid "The time range in seconds." msgstr "時間の範囲 (秒)。" msgid "The timestamp indicating volume creation." msgstr "ボリューム作成を示すタイムスタンプ。" msgid "The transform protocol of the ipsec policy." msgstr "ipsec ポリシーの変換プロトコル。" msgid "The type of profile." msgstr "プロファイルのタイプ。" msgid "The type of senlin policy." msgstr "Senlin ポリシーの名前。" msgid "The type of the certificate request." msgstr "証明書のリクエストのタイプ。" msgid "The type of the order." msgstr "注文のタイプ。" msgid "The type of the resources in the group." msgstr "グループ内のリソースのタイプ。" msgid "The type of the secret." msgstr "秘密のタイプ。" msgid "The type of the volume mapping to a backend, if any." msgstr "バックエンドにマッピングされるボリュームのタイプ (ある場合)。" msgid "The type/format the secret data is provided in." msgstr "秘密データが提供されるタイプと形式。" msgid "The type/mode of the algorithm associated with the secret information." msgstr "秘密情報と関連付けられるアルゴリズムのタイプとモード。" msgid "The unencrypted plain text of the secret." msgstr "秘密に関する暗号化されていないプレーンテキスト。" msgid "" "The unique identifier of ike policy associated with the ipsec site " "connection." msgstr "ipsec サイト接続に関連付けられている IKE ポリシーの固有 ID。" msgid "" "The unique identifier of ipsec policy associated with the ipsec site " "connection." msgstr "ipsec サイト接続に関連付けられている ipsec ポリシーの固有 ID。" msgid "" "The unique identifier of the router to which the vpn service was inserted." msgstr "VPN サービスが挿入されたルーターの固有 ID。" msgid "" "The unique identifier of the subnet in which the vpn service was created." msgstr "VPN サービスが作成されたサブネットの固有 ID。" msgid "The unique identifier of the tenant owning the ike policy." msgstr "IKE ポリシーを所有するテナントの固有 ID。" msgid "The unique identifier of the tenant owning the ipsec policy." msgstr "ipsec ポリシーを所有するテナントの固有 ID。" msgid "The unique identifier of the tenant owning the ipsec site connection." msgstr "ipsec サイト接続を所有するテナントの固有 ID。" msgid "The unique identifier of the tenant owning the vpn service." 
msgstr "VPN サービスを所有するテナントの固有 ID。" msgid "" "The unique identifier of vpn service associated with the ipsec site " "connection." msgstr "ipsec サイト接続に関連付けられている VPN サービスの固有 ID。" msgid "" "The user-defined region ID and should unique to the OpenStack deployment. " "While creating the region, heat will url encode this ID." msgstr "" "OpenStack の実装環境ごとに一意である必要のある、ユーザーが定義した領域 ID。こ" "の領域を作成する際に、heat この ID を URL エンコードします。" msgid "" "The value for the socket option TCP_KEEPIDLE. This is the time in seconds " "that the connection must be idle before TCP starts sending keepalive probes." msgstr "" "ソケットオプション TCP_KEEPIDLE の値。これは、TCP が Keepalive プローブの送信" "を開始する前に接続がアイドル状態にならなければならない時間 (秒) を指します。" #, python-format msgid "The value must be at least %(min)s." msgstr "値は %(min)s 以上でなければなりません。" #, python-format msgid "The value must be in the range %(min)s to %(max)s." msgstr "値は %(min)s から %(max)s までの範囲になければなりません。" #, python-format msgid "The value must be no greater than %(max)s." msgstr "値は %(max)s を超えてはなりません。" #, python-format msgid "The values of the \"for_each\" argument to \"%s\" must be lists" msgstr "\"%s\" に対する \"for_each\" 引数はリストでなければなりません" msgid "The version of the ike policy." msgstr "IKE ポリシーのバージョン。" msgid "" "The vnic type to be bound on the neutron port. To support SR-IOV PCI " "passthrough networking, you can request that the neutron port to be realized " "as normal (virtual nic), direct (pci passthrough), or macvtap (virtual " "interface with a tap-like software interface). Note that this only works for " "Neutron deployments that support the bindings extension." msgstr "" "neutron ポートでバインドする vnic タイプ。SR-IOV PCI パススルーネットワークを" "サポートするために、neutron ポートが通常 (仮想 nic)、直接 (pci パススルー)、" "または macvtap (タップのようなソフトウェアインターフェースを持つ仮想インター" "フェース) として認識されるように要求できます。これは、バインディング拡張機能" "をサポートする neutron デプロイメントでのみ機能することに注意してください。" msgid "The volume type." msgstr "ボリューム種別。" msgid "The volume used as source, if any." msgstr "ソースとして使用されるボリューム (ある場合)。" msgid "The volume_id can be boot or non-boot device to the server." msgstr "" "volume_id はサーバーに対するブートデバイスまたは非ブートデバイスにすることが" "できます。" msgid "The website endpoint for the specified bucket." msgstr "指定されたバケットの Web サイトのエンドポイント。" #, python-format msgid "There is no rule %(rule)s. List of allowed rules is: %(rules)s." msgstr "ルール %(rule)s がありません。許容されるルールのリスト: %(rules)s" msgid "" "There is no such option during 5.0.0, so need to make this attribute " "unsupported, otherwise error will raised." msgstr "" "5.0.0 ではこのようなオプションは存在しないため、この属性をサポート対象外に設" "定する必要があります。設定しない場合、エラーが発生します。" msgid "" "There is no such option during 5.0.0, so need to make this property " "unsupported while it not used." msgstr "" "5.0.0 ではこのようなオプションは存在しないため、このプロパティーを使用しない" "場合は、プロパティーをサポート対象外に設定する必要があります。" #, python-format msgid "" "There was an error loading the definition of the global resource type " "%(type_name)s." msgstr "" "グローバルリソースタイプ %(type_name)s の定義をロードする際にエラーがありまし" "た。" msgid "This endpoint is enabled or disabled." msgstr "このエンドポイントは有効化または無効化されています。" msgid "This project is enabled or disabled." msgstr "このプロジェクトは有効化または無効化されています。" msgid "This region is enabled or disabled." msgstr "この領域は有効化または無効化されています。" msgid "This service is enabled or disabled." msgstr "このサービスは有効化または無効化されています。" msgid "Threshold to evaluate against." msgstr "評価するしきい値。" msgid "Time To Live (Seconds)." msgstr "ライブになるまでの時間 (秒)。" msgid "Time of the first execution in format \"YYYY-MM-DD HH:MM\"." msgstr "\"YYYY-MM-DD HH:MM\" 形式で表示される最初の実行時間。" msgid "Time of the next execution in format \"YYYY-MM-DD HH:MM:SS\"." 
msgstr "\"YYYY-MM-DD HH:MM:SS\" 形式で表示される次の実行時間。" msgid "" "Timeout for client connections' socket operations. If an incoming connection " "is idle for this number of seconds it will be closed. A value of '0' means " "wait forever." msgstr "" "クライアント接続のソケット処理のタイムアウト時間。提供される接続がこの秒数の" "間アイドル状態にある場合、接続は終了します。値が '0' の場合、待機時間に制限が" "ないことを指します。" msgid "Timeout for creating the bay in minutes. Set to 0 for no timeout." msgstr "" "ベイを作成する際のタイムアウト時間 (分)。タイムアウト時間を設定しない場合は " "0 を設定します。" msgid "Timeout in seconds for stack action (ie. create or update)." msgstr "スタックアクション (作成または更新) のタイムアウト (秒)。" msgid "" "Toggle to enable/disable caching when Orchestration Engine looks for other " "OpenStack service resources using name or id. Please note that the global " "toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use " "this feature." msgstr "" "オーケストレーションエンジンが名前または ID を使用してその他の OpenStack サー" "ビスのリソースを探す際に、キャッシングの有効化と無効化を切り替えます。 この機" "能を使用するには、 oslo.cache(enabled=True in [cache] group) のグローバルトグ" "ルを有効化する必要があることに注意してください。" msgid "" "Toggle to enable/disable caching when Orchestration Engine retrieves " "extensions from other OpenStack services. Please note that the global toggle " "for oslo.cache(enabled=True in [cache] group) must be enabled to use this " "feature." msgstr "" "オーケストレーションエンジンが他の OpenStack サービスから拡張を抽出する際に、" "キャッシングの有効化と無効化を切り替えます。この機能を使用するには、oslo." "cache(enabled=True in [cache] group) のグローバルトグルを有効化する必要がある" "ことに注意してください。" msgid "" "Toggle to enable/disable caching when Orchestration Engine validates " "property constraints of stack.During property validation with constraints " "Orchestration Engine caches requests to other OpenStack services. Please " "note that the global toggle for oslo.cache(enabled=True in [cache] group) " "must be enabled to use this feature." msgstr "" "オーケストレーションエンジンがスタックのプロパティー制限を検証する際に、" "キャッシングの有効化と無効化を切り替えます。制約のプロパティーを検証する際" "に、オーケストレーションエンジンは他の OpenStack サービスへの要求をキャッシン" "グします。 この機能を使用するには、oslo.cache(enabled=True in [cache] group) " "のグローバルトグルを有効化する必要があることに注意してください。" msgid "" "Token for stack-user which can be used for signalling handle when " "signal_transport is set to TOKEN_SIGNAL. None for all other signal " "transports." msgstr "" "signal_transport が TOKEN_SIGNAL に設定されている場合にハンドルを送信するため" "に使用できるスタックユーザーのトークン。その他の信号の送信の場合は存在しませ" "ん。" msgid "" "Tokens are not needed for Swift TempURLs. This attribute is being kept for " "compatibility with the OS::Heat::WaitConditionHandle resource." msgstr "" "トークンは Swift TempURL には不要です。この属性は、OS::heat::" "WaitConditionHandle リソースとの互換性のために保持されています。" msgid "Topic" msgstr "トピック" msgid "Transform protocol for the ipsec policy." msgstr "ipsec ポリシーの変換プロトコル。" msgid "True if alarm evaluation/actioning is enabled." msgstr "アラーム評価/アクションが有効になっている場合は TRUE。" msgid "" "True if the system should remember a generated private key; False otherwise." msgstr "" "生成された秘密鍵をシステムに記憶させる場合は TRUE、そうでない場合はFALSE。" msgid "Type of access that should be provided to guest." msgstr "ゲストに提供する必要のあるアクセスのタイプ。" msgid "Type of adjustment (absolute or percentage)." msgstr "調整のタイプ (絶対またはパーセンテージ)。" msgid "" "Type of endpoint in Identity service catalog to use for communication with " "the OpenStack service." msgstr "" "OpenStack サービスとの通信に使用する Identity サービスカタログ内のエンドポイ" "ント種別。" msgid "Type of keystone Service." msgstr "keystone サービスのタイプ。" msgid "Type of receiver." msgstr "レシーバーのタイプ。" msgid "Type of the data source." msgstr "データソースのタイプ。" msgid "Type of the notification." msgstr "通知のタイプ。" msgid "Type of the object that RBAC policy affects." 
msgstr "RBAC ポリシーが影響するオブジェクトのタイプ。" msgid "Type of the value of the input." msgstr "入力の値のタイプ。" msgid "Type of the value of the output." msgstr "出力の値のタイプ。" msgid "Type of the volume to create on Cinder backend." msgstr "Cinder バックエンドで作成するボリュームのタイプ。" msgid "URL for API authentication" msgstr "API 認証用 URL" msgid "URL for the data source." msgstr "データソースの URL。" msgid "" "URL for the job binary. Must be in the format swift:/// or " "internal-db://." msgstr "" "ジョブバイナリーの URL。swift:/// または iternal-db://" " の形式である必要があります。" msgid "" "URL of TempURL where resource will signal completion and optionally upload " "data." msgstr "" "リソースが完了をシグナル通知し、オプションでデータをアップロードする TempURL " "の URL。" msgid "URL of keystone service endpoint." msgstr "keystone サービスのエンドポイントのURL。" msgid "URL of the Heat CloudWatch server." msgstr "heat CloudWatch サーバーの URL。" msgid "" "URL of the Heat metadata server. NOTE: Setting this is only needed if you " "require instances to use a different endpoint than in the keystone catalog" msgstr "" "Heat メタデータサーバーの URL。注意: これを設定する必要があるのは、インスタン" "スが keystone カタログ内のエンドポイントとは異なるエンドポイントを使用する必" "要がある場合に限られます。" msgid "URL of the Heat waitcondition server." msgstr "heat waitcondition サーバーの URL。" msgid "" "URL where the data for this image already resides. For example, if the image " "data is stored in swift, you could specify \"swift://example.com/container/" "obj\"." msgstr "" "このイメージのデータが既にある URL。例えば、イメージデータが swift に格納され" "ている場合は、\"swift://example.com/container/obj\" と指定できます。" msgid "UUID of the internal subnet to which the instance will be attached." msgstr "インスタンスの接続先となる内部サブネットの UUID。" #, python-format msgid "" "Unable to find neutron provider '%(provider)s', available providers are " "%(providers)s." msgstr "" "neutron のプロバイダー '%(provider)s' が見つかりません。使用可能なプロバイ" "ダーは %(providers)s です。" #, python-format msgid "" "Unable to find senlin policy type '%(pt)s', available policy types are " "%(pts)s." msgstr "" "senlin のポリシータイプ '%(pt)s' が見つかりません。使用可能なポリシータイプ" "は %(pts)s です。" #, python-format msgid "" "Unable to find senlin profile type '%(pt)s', available profile types are " "%(pts)s." msgstr "" "senlin のプロファイルタイプ '%(pt)s' が見つかりません。使用可能なプロファイル" "タイプは %(pts)s です。" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "設定ファイル %(conf_file)s から %(app_name)s をロードできません。\n" "受け取ったエラー: %(e)r" #, python-format msgid "Unable to locate config file [%s]" msgstr " 設定ファイル [%s] を特定できません" #, python-format msgid "Unexpected action %(action)s" msgstr "予期していないアクション %(action)s" #, python-format msgid "Unexpected action %s" msgstr "予期していないアクション %s" #, python-format msgid "" "Unexpected properties: %(unexpected)s. Only these properties are allowed for " "%(type)s type of order: %(allowed)s." msgstr "" "予期しないプロパティー: %(unexpected)s。%(type)s タイプの注文に対しては以下の" "プロパティーのみが許容されます: %(allowed)s" msgid "Unique identifier for the device." msgstr "デバイスの固有 ID。" msgid "" "Unique identifier for the ike policy associated with the ipsec site " "connection." msgstr "ipsec サイト接続に関連付けられている IKE ポリシーの固有 ID。" msgid "" "Unique identifier for the ipsec policy associated with the ipsec site " "connection." msgstr "ipsec サイト接続に関連付けられている ipsec ポリシーの固有 ID。" msgid "Unique identifier for the network owning the port." msgstr "ポートを所有するネットワークの固有 ID。" msgid "" "Unique identifier for the router to which the vpn service will be inserted." msgstr "VPN サービスが挿入されるルーターの固有 ID。" msgid "" "Unique identifier for the vpn service associated with the ipsec site " "connection." 
msgstr "ipsec サイト接続に関連付けられている VPN サービスの固有 ID。" msgid "" "Unique identifier of the firewall policy to which this firewall rule belongs." msgstr "このファイアウォールルールが属するファイアウォールポリシーの固有 ID。" msgid "Unique identifier of the firewall policy used to create the firewall." msgstr "ファイアウォールの作成に使用するファイアウォールポリシーの固有 ID。" msgid "Unknown" msgstr "不明" #, python-format msgid "Unknown Property %s" msgstr "プロパティー %s は不明です" #, python-format msgid "Unknown attribute \"%s\"" msgstr "属性 \"%s\" が不明です" #, python-format msgid "Unknown error retrieving %s" msgstr "%s の取り出し中に不明なエラーが発生しました" #, python-format msgid "Unknown input %s" msgstr "入力 %s は不明です" #, python-format msgid "Unknown key(s) %s" msgstr "キー %s は不明です" msgid "Unknown share_status during creation of share \"{0}\"" msgstr "シェア \"{0}\" の作成中の share_status が不明です。" #, python-format msgid "Unknown status creating Bay '%(name)s' - %(reason)s" msgstr "ベイ '%(name)s' の作成状況が不明です: %(reason)s" msgid "Unknown status during deleting share \"{0}\"" msgstr "シェア \"{0}\" の削除中の状態が不明です。" #, python-format msgid "Unknown status updating Bay '%(name)s' - %(reason)s" msgstr "ベイ '%(name)s' の更新状況が不明です: %(reason)s" #, python-format msgid "Unknown status: %s" msgstr "不明状況: %s" #, python-format msgid "" "Unrecognized value \"%(value)s\" for \"%(name)s\", acceptable values are: " "true, false." msgstr "" " \"%(name)s\" の \"%(value)s\" は認識されない値です。許容される値は \"true, " "false\" です。" #, python-format msgid "Unsupported object type %(objtype)s" msgstr "サポートされないオブジェクト種別 %(objtype)s" #, python-format msgid "Unsupported resource '%s' in LoadBalancerNames" msgstr "LoadBalancerNames ではリソース '%s' はサポートされません" msgid "Unversioned keystone url in format like http://0.0.0.0:5000." msgstr "http://0.0.0.0:5000 のような形式のバージョンなし keystone URL。" #, python-format msgid "Update to properties %(props)s of %(name)s (%(res)s)" msgstr "%(name)s (%(res)s) のプロパティー %(props)s に更新します" msgid "Updated At" msgstr "最終更新" msgid "Updating a stack when it is deleting" msgstr "スタックを削除中に更新しています" msgid "Updating a stack when it is suspended" msgstr "中断中にスタックを更新しています" msgid "" "Use get_resource|Ref command instead. For example: { get_resource : " " }" msgstr "" "代わりに get_resource|Ref コマンドを使用してください。コマンドの例は、" "{ get_resource : } です。" msgid "" "Use only with Neutron, to list the internal subnet to which the instance " "will be attached; needed only if multiple exist; list length must be exactly " "1." msgstr "" "インスタンスの接続先となる内部サブネットをリストするために、Neutron でのみ使" "用します。複数存在する場合にのみ必要です。リストの長さは正確に 1 でなければな" "りません。" #, python-format msgid "Use property %s" msgstr "プロパティー %s の使用" #, python-format msgid "Use property %s." msgstr "プロパティー %s を使用してください。" msgid "" "Use the `external_gateway_info` property in the router resource to set up " "the gateway." msgstr "" "ゲートウェイを設定するには、ルーターリソース内の 'external_gateway_info' プロ" "パティーを使用します。" msgid "" "Use the networks attribute instead of first_address. For example: " "\"{get_attr: [, networks, , 0]}\"" msgstr "" "first_address ではなく、ネットワーク属性を使用します。例: \"{get_attr: " "[, networks, , 0]}\"" msgid "Use this resource at your own risk." msgstr "このリソースは自己責任で使用してください。" #, python-format msgid "User %s in invalid domain" msgstr "ユーザー %s は無効なドメインにあります" #, python-format msgid "User %s in invalid project" msgstr "ユーザー %s は無効なプロジェクトにあります" msgid "User ID for API authentication" msgstr "API 認証用ユーザー ID" msgid "User data to pass to instance." msgstr "インスタンスに渡すユーザーデータ。" msgid "User is not authorized to perform action" msgstr "ユーザーはアクションの実行を許可されていません" msgid "User name to create a user on instance creation." 
msgstr "インスタンス作成時にユーザーを作成するためのユーザー名。" msgid "Username associated with the AccessKey." msgstr "AccessKey に関連付けられたユーザー名。" msgid "Username for API authentication" msgstr "API 認証用ユーザー名" msgid "Username for accessing the data source URL." msgstr "データソースの URL にアクセスするためのユーザー名。" msgid "Username for accessing the job binary URL." msgstr "ジョブバイナリーの URL にアクセスするためのユーザー名。" msgid "Username of privileged user in the image." msgstr "イメージ内の特権ユーザーのユーザー名。" msgid "VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN networks." msgstr "" "VLAN ネットワークの VLAN ID または GRE/VXLAN ネットワークのトンネル ID。" msgid "VPC ID for this gateway association." msgstr "このゲートウェイ関連付けの VPC ID。" msgid "VPC ID for where the route table is created." msgstr "経路テーブルの作成対象 VPC ID。" msgid "" "Valid values are encrypt or decrypt. The heat-engine processes must be " "stopped to use this." msgstr "" "適切な値は encrypt か decrypt のいずれかです。これを使用するには、heat-" "engine プロセスを停止する必要があります。" #, python-format msgid "Value \"%(val)s\" is invalid for data type \"%(type)s\"." msgstr "値 \"%(val)s\" はデータ型 \"%(type)s\" では無効です。" #, python-format msgid "Value '%(value)s' is invalid for '%(name)s' which only accepts integer." msgstr "値 '%(value)s' は、整数のみを受け入れる '%(name)s' には無効です。" #, python-format msgid "" "Value '%(value)s' is invalid for '%(name)s' which only accepts non-negative " "integer." msgstr "" "値 '%(value)s' は、負でない整数のみを受け入れる '%(name)s' には無効です。" #, python-format msgid "Value '%s' is not an integer" msgstr "値 '%s' は整数ではありません" #, python-format msgid "Value must be a comma-delimited list string: %s" msgstr "値はコンマ区切り一覧の文字列である必要があります: %s" #, python-format msgid "Value must be of type %s" msgstr "値のタイプは %s でなければなりません" #, python-format msgid "Value must be valid JSON: %s" msgstr "値は有効な JSON でなければなりません: %s" #, python-format msgid "Value must match pattern: %s" msgstr "値はパターン %s に一致しなければなりません。" msgid "" "Value which can be set or changed on stack update to trigger the resource " "for replacement with a new random string. The salt value itself is ignored " "by the random generator." msgstr "" "新規のランダム文字列との置換を行うリソースをトリガーするために、スタック更新" "時に設定または変更が可能な値。ランダムジェネレーターではソルト値自体は無視さ" "れます。" msgid "" "Value which can be set to fail the resource operation to test failure " "scenarios." msgstr "" "失敗シナリオをテストすることを目的として、リソースの処理を失敗させるために設" "定可能な値。" msgid "" "Value which can be set to trigger update replace for the particular resource." msgstr "特定のリソースの更新置換をトリガーするために設定可能な値。" #, python-format msgid "Version %(objver)s of %(objname)s is not supported" msgstr "バージョン %(objver)s の %(objname)s はサポートされません" msgid "Version for the ike policy." msgstr "IKE ポリシーのバージョン。" msgid "Version of Hadoop running on instances." msgstr "インスタンスで稼働している Hadoop のバージョン。" msgid "Version of IP address." msgstr "IP アドレスのバージョン。" msgid "Vip associated with the pool." msgstr "プールに関連付けられている VIP。" msgid "Volume attachment failed" msgstr "ボリューム接続に失敗しました" msgid "Volume backup failed" msgstr "ボリュームのバックアップに失敗しました" msgid "Volume backup restore failed" msgstr "ボリュームのバックアップのリストアが失敗しました" msgid "Volume create failed" msgstr "ボリュームの作成に失敗しました" msgid "Volume detachment failed" msgstr "ボリューム切り離しに失敗しました" msgid "Volume in use" msgstr "ボリュームが使用中です" msgid "Volume resize failed" msgstr "ボリュームのサイズ変更に失敗しました" msgid "Volumes per node." msgstr "ノードごとのボリューム。" msgid "Volumes to attach to instance." 
msgstr "インスタンスに接続するボリューム。" #, python-format msgid "WaitCondition invalid Handle %s" msgstr "WaitCondition 無効ハンドル %s" #, python-format msgid "WaitCondition invalid Handle stack %s" msgstr "WaitCondition 無効ハンドルスタック %s" #, python-format msgid "WaitCondition invalid Handle tenant %s" msgstr "WaitCondition 無効ハンドルテナント %s" msgid "" "Warning: this command is potentially destructive and only intended to " "recover from specific crashes." msgstr "" "注意:本コマンドは破壊的なため、特定の障害のリカバリのみにしか利用できませ" "ん。" msgid "Weight of pool member in the pool (default to 1)." msgstr "プール内のプールメンバーの重み (1 にデフォルト設定されます)。" msgid "Weight of the pool member in the pool." msgstr "プール内のプールメンバーの重み。" #, python-format msgid "Went to status %(resource_status)s due to \"%(status_reason)s\"" msgstr "\"%(status_reason)s\" が原因で状況 %(resource_status)s になりました" msgid "" "When both ipv6_ra_mode and ipv6_address_mode are set, they must be equal." msgstr "" "ipv6_ra_mode と ipv6_address_mode の両方が設定されている場合、これらは等しく" "なければなりません。" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "サーバーを SSL モードで実行する場合は、cert_file オプション値と key_file オプ" "ション値の両方を設定ファイルに指定する必要があります" msgid "Whether enable this policy on that cluster." msgstr "このクラスター上でこのポリシーを有効化すべきかどうか。" msgid "" "Whether the address scope should be shared to other tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only, and restricts changing of shared address scope to unshared with " "update." msgstr "" "アドレススコープを他のテナントに共有すべきかどうか。デフォルトのポリシー設定" "ではこの属性を使用できるのは管理者に限られ、共有アドレススコープを非共有に変" "更できるのは更新時に限られます。" msgid "Whether the flavor is shared across all projects." msgstr "すべてのプロジェクトでフレーバーを共有するかどうか。" msgid "" "Whether the image can be deleted. If the value is True, the image is " "protected and cannot be deleted." msgstr "" "イメージを削除できるかどうか。値が True の場合、イメージは保護されていて削除" "できません。" msgid "Whether the metering label should be shared across all tenants." msgstr "計測ラベルをすべてのテナントで共有するかどうか。" msgid "Whether the network contains an external router." msgstr "ネットワークに外部ルーターが含まれるかどうか。" msgid "Whether the part content is text or multipart." msgstr "パーツ内容がテキストかマルチパートか。" msgid "" "Whether the subnet pool will be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "このサブネットプールをすべてのテナントで共有するかどうか。デフォルトのポリ" "シー設定では、この属性を使用できるのは管理者に限られます。" msgid "Whether the volume type is accessible to the public." msgstr "ボリュームタイプが一般ユーザーからアクセスできるかどうか。" msgid "Whether this QoS policy should be shared to other tenants." msgstr "このポリシーを他のテナントと共有すべきかどうか。" msgid "" "Whether this firewall should be shared across all tenants. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "このファイアウォールをすべてのテナントで共有するかどうか。注: Neutron のデ" "フォルトポリシー設定では、このプロパティーの使用は管理ユーザーのみに制限され" "ます。" msgid "" "Whether this is default IPv4/IPv6 subnet pool. There can only be one default " "subnet pool for each IP family. Note that the default policy setting " "restricts administrative users to set this to True." msgstr "" "これがデフォルトの IPv4/IPv6 のサブネットプールであるかどうか。各 IP ファミ" "リーに存在するデフォルトのサブネットプールは 1 つだけです。 デフォルトのポリ" "シー設定では、管理者のみがこの値を True に設定することができます。" msgid "Whether this network should be shared across all tenants." msgstr "このネットワークがすべてのテナントで共有されるかどうか。" msgid "" "Whether this network should be shared across all tenants. 
Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "このネットワークをすべてのテナントで共有するかどうか。デフォルトのポリシー設" "定では、この属性の使用は管理ユーザーのみに制限されます。" msgid "" "Whether this policy should be audited. When set to True, each time the " "firewall policy or the associated firewall rules are changed, this attribute " "will be set to False and will have to be explicitly set to True through an " "update operation." msgstr "" "このポリシーを監査するかどうか。True に設定するときは、ファイアウォールポリ" "シーまたは関連付けられたファイアウォールルールが変更されるたびに、この属性は " "False に設定され、更新操作で明示的に True に設定する必要があります。" msgid "Whether this policy should be shared across all tenants." msgstr "このポリシーがすべてのテナントで共有されるかどうか。" msgid "Whether this rule should be enabled." msgstr "このルールを有効にするかどうか。" msgid "Whether this rule should be shared across all tenants." msgstr "このルールがすべてのテナントで共有されるかどうか。" msgid "Whether to enable the actions or not." msgstr "アクションを有効化すべきかどうか。" msgid "Whether to specify a remote group or a remote IP prefix." msgstr "リモートグループまたはリモート IP プレフィックスのどちらを指定するか。" msgid "" "Which lifecycle actions of the deployment resource will result in this " "deployment being triggered." msgstr "" "実装リソースのどのライフサイクルアクションがこの実装環境をトリガーするのか。" msgid "" "Workflow additional parameters. If Workflow is reverse typed, params " "requires 'task_name', which defines initial task." msgstr "" "ワークフローの追加のパラメーター。ワークフローがリバースタイプの場合、params " "には最初のタスクを定義する 'task_name' が必要になります" msgid "Workflow description." msgstr "ワークフローの説明。" msgid "Workflow name." msgstr "ワークフロー名。" msgid "Workflow to execute." msgstr "実行すべきワークフロー。" msgid "Workflow type." msgstr "ワークフローのタイプ。" #, python-format msgid "Wrong Arguments try: \"%s\"" msgstr "引数試行が正しくありません: \"%s\"" msgid "You are not authenticated." msgstr "認証されていません。" msgid "You are not authorized to complete this action." msgstr "このアクションの実行を許可されていません。" #, python-format msgid "You are not authorized to use %(action)s." msgstr "%(action)s を使用する権限がありません。" #, python-format msgid "" "You have reached the maximum stacks per tenant, %d. Please delete some " "stacks." msgstr "" "テナントあたりの最大スタック数 %d に達しました。スタックをいくつか削除してく" "ださい。" #, python-format msgid "could not find user %s" msgstr "ユーザー %s が見つかりませんでした" msgid "deployment_id must be specified" msgstr "deployment_id の指定は必須です" msgid "" "deployments key not allowed in resource metadata with user_data_format of " "SOFTWARE_CONFIG" msgstr "" "SOFTWARE_CONFIG の user_data_format を持つリソースのメタデータでは、実装キー" "は許容されません。 " #, python-format msgid "deployments of server %s" msgstr "サーバー %s の実装環境" #, python-format msgid "environment has empty section \"%s\"" msgstr "環境に空のセクション \"%s\"があります。" #, python-format msgid "environment has wrong section \"%s\"" msgstr "環境に正しくない \"%s\" があります" msgid "error in pool" msgstr "プールでのエラー" msgid "error in vip" msgstr "vip でのエラー" msgid "external network for the gateway." msgstr "ゲートウェイの外部ネットワーク。" msgid "granularity should be days, hours, minutes, or seconds" msgstr "細分性は日数、時間数、分数、または秒数でなければなりません" msgid "heat.conf misconfigured, auth_encryption_key must be 32 characters" msgstr "" "heat.conf が適切に設定されていません。auth_encryption_key は 32 文字である必" "要があります。" msgid "" "heat.conf misconfigured, cannot specify \"stack_user_domain_id\" or " "\"stack_user_domain_name\" without \"stack_domain_admin\" and " "\"stack_domain_admin_password\"" msgstr "" "heat.conf の設定に誤りがあります。\"stack_user_domain_id\" または " "\"stack_user_domain_name\" を、\"stack_domain_admin\" および " "\"stack_domain_admin_password\" なしで指定することはできません。" msgid "ipv6_ra_mode and ipv6_address_mode are not supported for ipv4." 
msgstr "" "ipv6_ra_mode および ipv6_address_mode は ipv4 ではサポートされていません。" msgid "limit cannot be less than 4" msgstr "制限は 4 未満にはできません" #, python-format msgid "metadata setting for resource %s" msgstr "リソース %s のメタデータ設定" msgid "min/max length must be integral" msgstr "min/max 長は整数でなければなりません" msgid "min/max must be numeric" msgstr "min/max は数値でなければなりません" msgid "need more memory." msgstr "追加のメモリーが必要です。" msgid "no resource data found" msgstr "リソースデータが見つかりません" msgid "no resources were found" msgstr "リソースが見つかりませんでした" msgid "nova server metadata needs to be a Map." msgstr "Nova サーバーメタデータはマップでなければなりません。" #, python-format msgid "previous_status must be SupportStatus instead of %s" msgstr "previous_status は %s ではなく SupportStatus である必要があります" #, python-format msgid "raw template with id %s not found" msgstr "ID %s のローテンプレートが見つかりません" #, python-format msgid "raw_template_files with files_id %d not found" msgstr "files_id %d を持つ raw_template_files が見つかりません" #, python-format msgid "resource with id %s not found" msgstr "ID %s のリソースが見つかりません" #, python-format msgid "roles %s" msgstr "ロール %s" msgid "segmentation_id cannot be specified except 0 for using flat" msgstr "フラットを使用する場合の 0 を除き、segmentation_id は指定できません" msgid "segmentation_id must be specified for using vlan" msgstr "vlan を使用するには segmentation_id を指定する必要があります" msgid "segmentation_id not allowed for flat network type." msgstr "segmentation_id はフラットネットワーク形式には使用できません。" msgid "server_id must be specified" msgstr "server_id の指定は必須です" #, python-format msgid "" "task %(task)s contains property 'requires' in case of direct workflow. Only " "reverse workflows can contain property 'requires'." msgstr "" "直接的なワークフローの場合、タスク %(task)s には 'requires' プロパティーが含" "まれます。'requires' プロパティーが含まれるのは、リバースワークフローのみで" "す。" heat-10.0.2/heat/locale/ru/0000775000175000017500000000000013343562672015356 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/ru/LC_MESSAGES/0000775000175000017500000000000013343562672017143 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/ru/LC_MESSAGES/heat.po0000666000175000017500000117216413343562351020434 0ustar zuulzuul00000000000000# Translations template for heat. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the heat project. # # Translators: # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: heat VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-05 10:35+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 05:29+0000\n" "Last-Translator: Copied by Zanata \n" "Language: ru\n" "Plural-Forms: nplurals=4; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n" "%10<=4 && (n%100<12 || n%100>14) ? 1 : n%10==0 || (n%10>=5 && n%10<=9) || (n" "%100>=11 && n%100<=14)? 2 : 3);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: Russian\n" #, python-format msgid "\"%%s\" is not a valid keyword inside a %s definition" msgstr "\"%%s\" не является допустимым ключевым словом в определении %s" #, python-format msgid "\"%(fn_name)s\": %(err)s" msgstr "\"%(fn_name)s\": %(err)s" #, python-format msgid "" "\"%(name)s\" params must be strings, numbers, list or map. Failed to json " "serialize %(value)s" msgstr "" "Параметры \"%(name)s\" должны быть строками, числами, списками или картами. " "Сериализация json для %(value)s не выполнена" #, python-format msgid "" "\"%(section)s\" must contain a map of %(obj_name)s maps. 
Found a [%(_type)s] " "instead" msgstr "" "\"%(section)s\" должен содержать карту карт %(obj_name)s. Вместо нее " "обнаружен [%(_type)s]" #, python-format msgid "\"%(url)s\" is not a valid SwiftSignalHandle. The %(part)s is invalid" msgstr "" "\"%(url)s\" не является допустимым SwiftSignalHandle. %(part)s недопустим" #, python-format msgid "\"%(value)s\" does not validate %(name)s" msgstr "\"%(value)s\" не проверяет %(name)s" #, python-format msgid "\"%(value)s\" does not validate %(name)s (constraint not found)" msgstr "\"%(value)s\" не проверяет %(name)s (ограничение не найдено)" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be one of: %(available)s" msgstr "" "\"%(version)s\". \"%(version_type)s\" должен быть одним из следующих: " "%(available)s" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be: %(available)s" msgstr "" "\"%(version)s\". \"%(version_type)s\" должен быть следующим: %(available)s" #, python-format msgid "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" msgstr "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" #, python-format msgid "\"%s\" argument must be a string" msgstr "Аргумент \"%s\" должен быть строкой" #, python-format msgid "\"%s\" can't traverse path" msgstr "\"%s\" не удается обойти путь" #, python-format msgid "\"%s\" deletion policy not supported" msgstr "Стратегия удаления \"%s\" не поддерживается" #, python-format msgid "\"%s\" delimiter must be a string" msgstr "Разделитель \"%s\" должен быть строкой" #, python-format msgid "\"%s\" is not a list" msgstr "\"%s\" не является списком" #, python-format msgid "\"%s\" is not a map" msgstr "\"%s\" не является картой связей" #, python-format msgid "\"%s\" is not a valid ARN" msgstr "\"%s\" не является допустимым ARN" #, python-format msgid "\"%s\" is not a valid ARN URL" msgstr "\"%s\" не является допустимым URL ARN" #, python-format msgid "\"%s\" is not a valid Heat ARN" msgstr "\"%s\" не является допустимым ARN Heat" #, python-format msgid "\"%s\" is not a valid URL" msgstr "\"%s\" не является допустимым URL" #, python-format msgid "\"%s\" is not a valid boolean" msgstr "\"%s\" не является допустимым булевским значением" #, python-format msgid "\"%s\" is not a valid template section" msgstr "\"%s\" не является допустимым разделом шаблона" #, python-format msgid "\"%s\" must operate on a list" msgstr "\"%s\" должен выполняться над списком" #, python-format msgid "\"%s\" param placeholders must be strings" msgstr "Замените параметров \"%s\" должны быть строками" #, python-format msgid "\"%s\" parameters must be a mapping" msgstr "Параметры \"%s\" должны быть связыванием" #, python-format msgid "\"%s\" params must be a map" msgstr "Параметры \"%s\" должны быть картой связей" #, python-format msgid "\"%s\" params must be strings, numbers, list or map." msgstr "Параметры \"%s\" должны быть строками, числами, списками или картами." #, python-format msgid "\"%s\" template must be a string" msgstr "Шаблон \"%s\" должен быть строкой" #, python-format msgid "\"repeat\" syntax should be %s" msgstr "Формат \"repeat\" должен быть %s" #, python-format msgid "%(a)s paused until Hook %(h)s is cleared" msgstr "%(a)s приостанавливается до очистки Hook %(h)s" #, python-format msgid "%(action)s is not supported for resource." msgstr "%(action)s не поддерживается для ресурса." #, python-format msgid "%(action)s is restricted for resource." msgstr "%(action)s ограничено для ресурса." 
#, python-format msgid "%(desired_capacity)s must be between %(min_size)s and %(max_size)s" msgstr "" "%(desired_capacity)s должна быть в пределах от %(min_size)s до %(max_size)s" #, python-format msgid "%(error)s%(path)s%(message)s" msgstr "%(error)s%(path)s%(message)s" #, python-format msgid "%(feature)s is not supported." msgstr "%(feature)s не поддерживается." #, python-format msgid "" "%(img)s must be provided: Referenced cluster template %(tmpl)s has no " "default_image_id defined." msgstr "" "Необходимо указать %(img)s: в шаблоне кластера %(tmpl)s не указан " "default_image_id." #, python-format msgid "%(lc)s (%(ref)s) reference can not be found." msgstr "Не удалось найти указатель %(lc)s (%(ref)s)." #, python-format msgid "" "%(lc)s (%(ref)s) requires a reference to the configuration not just the name " "of the resource." msgstr "" "Для %(lc)s (%(ref)s) требуется указатель на конфигурацию, а не только имя " "ресурса." #, python-format msgid "%(len)d of %(count)d received" msgstr "Получено %(len)d из %(count)d" #, python-format msgid "%(len)d of %(count)d received - %(reasons)s" msgstr "Получено %(len)d из %(count)d - %(reasons)s" #, python-format msgid "%(message)s" msgstr "%(message)s" #, python-format msgid "%(min_size)s can not be greater than %(max_size)s" msgstr "%(min_size)s не может превышать %(max_size)s" #, python-format msgid "%(name)s constraint invalid for %(utype)s" msgstr "Неверное ограничение %(name)s для %(utype)s" #, python-format msgid "%(prop1)s cannot be specified without %(prop2)s." msgstr "%(prop1)s должно указываться вместе с %(prop2)s." #, python-format msgid "" "%(prop1)s property should only be specified for %(prop2)s with value " "%(value)s." msgstr "" "%(prop1)s должно указываться только для %(prop2)s со значением %(value)s." #, python-format msgid "%(resource)s: Invalid attribute %(key)s" msgstr "%(resource)s: неверный атрибут %(key)s" #, python-format msgid "" "%(result)s - Unknown status %(resource_status)s due to \"%(status_reason)s\"" msgstr "" "%(result)s - Неизвестное состояние %(resource_status)s, причина: " "\"%(status_reason)s\"" #, python-format msgid "%(schema)s supplied for %(type)s %(data)s" msgstr "%(schema)s указана для %(type)s %(data)s" #, python-format msgid "%(server)s-port-%(number)s" msgstr "%(server)s-порт-%(number)s" #, python-format msgid "%(type)s not in valid format: %(error)s" msgstr "Формат %(type)s недопустим: %(error)s" #, python-format msgid "%s Key Name must be a string" msgstr "Имя ключа %s должно быть строкой" #, python-format msgid "%s Timed out" msgstr "%s завершен по тайм-ауту" #, python-format msgid "%s Value Name must be a string" msgstr "Имя значения %s должно быть строкой" #, python-format msgid "%s is not a valid job location." msgstr "%s не является допустимым расположением задания." #, python-format msgid "%s is not active" msgstr "%s неактивен" #, python-format msgid "%s is not an integer." msgstr "%s не является целым числом." #, python-format msgid "%s must be provided" msgstr "Необходимо указать %s" #, python-format msgid "'%(attr)s': expected '%(expected)s', got '%(current)s'" msgstr "'%(attr)s': ожидалось: '%(expected)s', получено: '%(current)s'" msgid "" "'task_name' is not assigned in 'params' in case of reverse type workflow." msgstr "" "'task_name' не указывается в 'params' в случае обратного потока операций." msgid "'true' if DHCP is enabled for this subnet; 'false' otherwise." msgstr "'true', если DHCP для этой подсети включен; 'false' в других случаях." 
msgid "A UUID for the set of servers being requested." msgstr "Запрашивается UUID для набора серверов." msgid "A bad or out-of-range value was supplied" msgstr "Указано неверное значение или значение вне допустимого диапазона" msgid "A boolean value of default flag." msgstr "Булевское значение флага по умолчанию." msgid "A boolean value specifying the administrative status of the network." msgstr "Булевское значение, указывающее административное состояние сети." #, python-format msgid "" "A character class and its corresponding %(min)s constraint to generate the " "random string from." msgstr "" "Класс символов и его соответствующее ограничение %(min)s для генерирования " "из них псевдослучайной строки." #, python-format msgid "" "A character sequence and its corresponding %(min)s constraint to generate " "the random string from." msgstr "" "Последовательность символов и его соответствующее ограничение %(min)s для " "генерирования из них псевдослучайной строки." msgid "A comma-delimited list of server ip addresses. (Heat extension)." msgstr "Разделенный запятыми список ip-адресов сервера. (Расширение Heat)." msgid "A description of the volume." msgstr "Описание тома." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name. This value is typically vda." msgstr "" "Имя устройства, куда будет подключен том в системе (точка монтирования /dev/" "имя-устройства). Обычно это vda." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name.e.g. vdb" msgstr "" "Имя устройства, куда будет подключен том в системе (точка монтирования /dev/" "имя-устройства). Например, vdb" msgid "" "A dict of all network addresses with corresponding port_id. Each network " "will have two keys in dict, they are network name and network id. The port " "ID may be obtained through the following expression: \"{get_attr: [, " "addresses, , 0, port]}\"." msgstr "" "Словарь с сетевыми адресами и соответствующими port_id. Для каждой сети " "словарь содержит два ключа, имя сети и ИД сети. ИД порта можно получить из " "выражения: \"{get_attr: [, addresses, , 0, " "port]}\"." msgid "" "A dict of assigned network addresses of the form: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Each network will have two keys in dict, they " "are network name and network id." msgstr "" "Словарь с присвоенными сетевыми адресами в следующем формате: {\"public\": " "[ip1, ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Для каждой сети словарь содержит два ключа, " "имя сети и ИД сети." msgid "A dict of key-value pairs output from the stack." msgstr "Вывод словаря пар ключ-значение из стека." msgid "A dictionary which contains name and input of the workflow." msgstr "Словарь с именем и входными данными для потока операций." msgid "A length constraint must have a min value and/or a max value specified." msgstr "" "В ограничении длины должно быть указано минимальное и/или максимальное " "значение." msgid "A list of URLs (webhooks) to invoke when state transitions to alarm." msgstr "" "Список URL (веб-перехватчиков), вызываемых при изменении состояния на " "предупреждение." msgid "" "A list of URLs (webhooks) to invoke when state transitions to insufficient-" "data." msgstr "" "Список URL (веб-перехватчиков), вызываемых при изменении состояния на " "недостаточно-данных." msgid "A list of URLs (webhooks) to invoke when state transitions to ok." 
msgstr "" "Список URL (веб-перехватчиков), вызываемых при изменении состояния на OK." msgid "A list of access rules that define access from IP to Share." msgstr "Список прав доступа к общему ресурсу для IP-адресов." msgid "A list of all rules for the QoS policy." msgstr "Список всех правил стратегии QoS." msgid "A list of all subnet attributes for the port." msgstr "Список всех атрибутов подсети для порта." msgid "" "A list of character class and their constraints to generate the random " "string from." msgstr "" "Перечень классов символов и их ограничений для генерирования из них " "псевдослучайной строки." msgid "" "A list of character sequences and their constraints to generate the random " "string from." msgstr "" "Перечень последовательностей символов и их ограничений для генерирования из " "них псевдослучайной строки." msgid "A list of cluster instance IPs." msgstr "Список IP-адресов экземпляров кластера." msgid "A list of clusters to which this policy is attached." msgstr "Список кластеров, которым назначена эта стратегия." msgid "A list of host route dictionaries for the subnet." msgstr "Список словарей с маршрутами хостов для подсети." msgid "A list of instances ids." msgstr "Список ИД экземпляров." msgid "A list of metric ids." msgstr "Список ИД показателей." msgid "" "A list of query factors, each comparing a Sample attribute with a value. " "Implicitly combined with matching_metadata, if any." msgstr "" "Список факторов запрос, каждый в сравнении значения с атрибутом Sample. " "Неявным образом объединяется с matching_metadata, если таковой существует." msgid "A list of resource IDs for the resources in the chain." msgstr "Список ИД ресурсов в цепочке." msgid "A list of resource IDs for the resources in the group." msgstr "Список ИД ресурсов в группе." msgid "A list of security groups for the port." msgstr "Список групп защиты для порта." msgid "A list of security services IDs or names." msgstr "Список ИД или имен служб защиты." msgid "A list of string policies to apply. Defaults to anti-affinity." msgstr "" "Список стратегий строк для применения. Значение по умолчанию: anti-affinity." msgid "A login profile for the user." msgstr "Профайл регистрации для пользователя." msgid "A mandatory input parameter is missing" msgstr "Не указан обязательный входной параметр" msgid "A map containing all headers for the container." msgstr "Карта, содержащая все заголовки для контейнера." msgid "" "A map of Nova names and captured stderrs from the configuration execution to " "each server." msgstr "" "Карта связей имен Nova и записанных данных stderr из процесса выполнения " "настройки на каждый сервер." msgid "" "A map of Nova names and captured stdouts from the configuration execution to " "each server." msgstr "" "Карта связей имен Nova и записанных данных stdout из процесса выполнения " "настройки на каждый сервер." msgid "" "A map of Nova names and returned status code from the configuration " "execution." msgstr "" "Карта связей имен Nova и возвращенного кода состояния из процесса выполнения " "настройки." msgid "" "A map of files to create/overwrite on the server upon boot. Keys are file " "names and values are the file contents." msgstr "" "Карта файлов для создания/замены на сервере во время загрузки. Ключами " "являются имена файлов, значением - содержимое файлов." msgid "" "A map of resource names to the specified attribute of each individual " "resource." msgstr "" "Карта связи имен ресурсов и указанных атрибутов каждого отдельного ресурса." 
msgid "" "A map of resource names to the specified attribute of each individual " "resource. Requires heat_template_version: 2014-10-16." msgstr "" "Отображение имен ресурсов в указанный атрибут каждого отдельного ресурса. " "Требует heat_template_version: 2014-10-16." msgid "" "A map of user-defined meta data to associate with the account. Each key in " "the map will set the header X-Account-Meta-{key} with the corresponding " "value." msgstr "" "Карта пользовательских метаданных для связи с учетной записью. Каждый ключ в " "карте задает заголовок X-Account-Meta-{key} с соответствующим значением." msgid "" "A map of user-defined meta data to associate with the container. Each key in " "the map will set the header X-Container-Meta-{key} with the corresponding " "value." msgstr "" "Карта пользовательских метаданных для связи с контейнером. Каждый ключ в " "карте задает заголовок X-Container-Meta-{key} с соответствующим значением." msgid "A name used to distinguish the volume." msgstr "Имя, используемое для определения тома." msgid "" "A per-tenant quota on the prefix space that can be allocated from the subnet " "pool for tenant subnets." msgstr "" "Квота для пространства префиксов, установленная для арендатора при выделении " "для него подсетей из пула." msgid "" "A predefined access control list (ACL) that grants permissions on the bucket." msgstr "" "Готовый список управления доступом (ACL), который предоставляет права " "доступа к комплекту." msgid "A range constraint must have a min value and/or a max value specified." msgstr "" "В ограничении диапазона должно быть указано минимальное и/или максимальное " "значение." msgid "" "A reference to the wait condition handle used to signal this wait condition." msgstr "" "Ссылка на обработку состояния ожидания, используемого для сигнализации об " "этом состоянии ожидания." msgid "" "A signed url to create executions for workflows specified in Workflow " "resource." msgstr "" "Подписанный url для запуска потоков операций, указанных в ресурсе Workflow." msgid "A signed url to handle the alarm." msgstr "Подписанный url для обработки предупреждения." msgid "A signed url to handle the alarm. (Heat extension)." msgstr "Подписанный url для обработки предупреждения. (Расширение Heat)." msgid "A specified set of DNS name servers to be used." msgstr "Указанная группа серверов имен DNS для использования." msgid "" "A string specifying a symbolic name for the network, which is not required " "to be unique." msgstr "Строка, указывающая символическое имя сети, не обязательно уникальное." msgid "" "A string specifying a symbolic name for the security group, which is not " "required to be unique." msgstr "" "Строка, указывающая символическое имя группы защиты, не обязательно " "уникальное." msgid "A string specifying physical network mapping for the network." msgstr "Строка, указывающая связывание физической сети для сети." msgid "A string specifying the provider network type for the network." msgstr "Строка, указывающая тип сети поставщика для сети." msgid "A string specifying the segmentation id for the network." msgstr "Строка, указывающая ИД сегментации для сети." msgid "A symbolic name for this port." msgstr "Символическое имя этого порта." msgid "A url to handle the alarm using native API." msgstr "url для обработки предупреждения с помощью платформенного API." msgid "" "A variable that this resource will use to replace with the current index of " "a given resource in the group. 
Can be used, for example, to customize the " "name property of grouped servers in order to differentiate them when listed " "with nova client." msgstr "" "Переменная, с помощью которой этот ресурс подставит текущий индекс заданного " "ресурса в группу. Может использоваться, например, для настройки свойства " "имени сгруппированных серверов с целью различать их в списке на клиенте nova." msgid "AWS compatible instance name." msgstr "Имя экземпляра, совместимое с AWS." msgid "AWS query string is malformed, does not adhere to AWS spec" msgstr "Неверный формат строки запроса AWS, не соответствует спецификации AWS" msgid "Access policies to apply to the user." msgstr "Стратегии доступа для применения к пользователю." #, python-format msgid "AccessPolicy resource %s not in stack" msgstr "Ресурс AccessPolicy %s находится не в стеке" #, python-format msgid "Action %s not allowed for user" msgstr "Действие %s запрещено для пользователя" msgid "Action to be performed on the traffic matching the rule." msgstr "Действие для выполнения над трафиком, соответствующим правилу." msgid "Actual input parameter values of the task." msgstr "Фактические входные параметры задачи." msgid "Add needed policies directly to the task, Policy keyword is not needed" msgstr "" "Добавить необходимые стратегии прямо в задачу, ключевое слово Policy не " "требуется" msgid "Additional MAC/IP address pairs allowed to pass through a port." msgstr "" "Дополнительные пары MAC/IP-адресов, которые разрешается передавать через " "порт." msgid "Additional MAC/IP address pairs allowed to pass through the port." msgstr "" "Дополнительные пары MAC/IP-адресов, которые разрешается передавать через " "порт." msgid "Additional routes for this subnet." msgstr "Дополнительные маршруты для этой подсети." msgid "Address family of the address scope, which is 4 or 6." msgstr "Семейство адресов адресной области, 4 или 6." msgid "" "Address of the notification. It could be a valid email address, url or " "service key based on notification type." msgstr "" "Адрес уведомления. Это может быть адрес электронной почты, url или служебный " "ключ, в зависимости от типа уведомления." msgid "" "Address to bind the server. Useful when selecting a particular network " "interface." msgstr "" "Адрес привязки сервера. Полезен при выборе конкретного сетевого интерфейса." msgid "Administrative state for the ipsec site connection." msgstr "Административное состояние для соединения с сайтом ipsec." msgid "Administrative state for the vpn service." msgstr "Административное состояние службы vpn." msgid "" "Administrative state of the firewall. If false (down), firewall does not " "forward packets and will drop all traffic to/from VMs behind the firewall." msgstr "" "Административное состояние брандмауэра. В случае false (отключено) " "брандмауэр не пересылает пакеты и будет сбрасывать весь трафик VM за " "брандмауэром." msgid "Administrative state of the router." msgstr "Административное состояние маршрутизатора." #, python-format msgid "Alarm %(alarm)s could not find scaling group named \"%(group)s\"" msgstr "" "Предупреждению о %(alarm)s не удалось найти группу масштабирования с именем" "\"%(group)s\"" #, python-format msgid "Algorithm must be one of %s" msgstr "Алгоритм должен быть одним из следующих: %s" msgid "All heat engines are down." msgstr "Все модули heat выключены." msgid "Allocated floating IP address." msgstr "Выделенный нефиксированный IP-адрес." msgid "Allocation ID for VPC EIP address." msgstr "Выделение ИД для адреса EIP VPC." 
msgid "Allow client's debug log output." msgstr "Разрешить вывод протокола отладки клиента." msgid "Allow or deny action for this firewall rule." msgstr "Разрешить или запретить действие для этого правила брандмауэра." msgid "Allow orchestration of multiple clouds." msgstr "Разрешить координирование нескольких облачных сред." msgid "" "Allow reauthentication on token expiry, such that long-running tasks may " "complete. Note this defeats the expiry of any provided user tokens." msgstr "" "Разрешить повторную идентификацию при устаревании маркера, чтобы обеспечить " "успешное завершение долго выполняющихся задач. Учтите, что при этом " "нарушается защита от устаревания маркеров, предоставленных пользователями." msgid "" "Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At " "least one endpoint needs to be specified." msgstr "" "Разрешить конечные точки keystone для auth_uri, когда включено multi_cloud. " "Необходимо указать по крайней мере одну конечную точку." msgid "" "Allowed tenancy of instances launched in the VPC. default - any tenancy; " "dedicated - instance will be dedicated, regardless of the tenancy option " "specified at instance launch." msgstr "" "Разрешенная аренда экземпляров, запущенных в VPC. Значение по умолчанию - " "любая аренда; выделенный - экземпляр будет выделен, независимо от опции " "аренды, указанной при записи экземпляра." #, python-format msgid "Allowed values: %s" msgstr "Допустимые значения: %s" msgid "AllowedPattern must be a string" msgstr "AllowedPattern должно быть строкой" msgid "AllowedValues must be a list" msgstr "AllowedValues должен быть списком" msgid "Allowing not to store action results after task completion." msgstr "Разрешает не сохранять результаты действия после завершения задачи." msgid "" "Allows to synchronize multiple parallel workflow branches and aggregate " "their data. Valid inputs: all - the task will run only if all upstream tasks " "are completed. Any numeric value - then the task will run once at least this " "number of upstream tasks are completed and corresponding conditions have " "triggered." msgstr "" "Разрешает синхронизировать несколько параллельных ветвей потока операций и " "объединять их данные. Допустимые значения: all - задача будет выполнена " "только по завершении всех ее предшествующих задач; любое число - задача " "будет выполнена после завершения не менее чем указанного числа " "предшествующих задач при условии срабатывания соответствующих условий." #, python-format msgid "Ambiguous versions (%s)" msgstr "Неразрешимые версии (%s)" msgid "" "Amount of disk space (in GB) required to boot image. Default value is 0 if " "not specified and means no limit on the disk size." msgstr "" "Объем дисковой памяти (в ГБ), необходимой для загрузки образа. Значение по " "умолчанию - 0, означающее отсутствие ограничения." msgid "" "Amount of ram (in MB) required to boot image. Default value is 0 if not " "specified and means no limit on the ram size." msgstr "" "Объем оперативной памяти (в МБ), необходимой для загрузки образа. Значение " "по умолчанию - 0, означающее отсутствие ограничения." msgid "An address scope ID to assign to the subnet pool." msgstr "ИД адресной области для присвоения пулу подсетей." msgid "An application health check for the instances." msgstr "Проверка работоспособности приложений для экземпляров." msgid "An ordered list of firewall rules to apply to the firewall." msgstr "Упорядоченный список правил брандмауэра для применения к брандмауэру." 
msgid "" "An ordered list of nics to be added to this server, with information about " "connected networks, fixed ips, port etc." msgstr "" "Упорядоченный список nic, добавляемых на этот сервер, с информацией о " "связанных сетях, фиксированных ip-адресах, портах и т. д." msgid "An unknown exception occurred." msgstr "Обнаружено неизвестное исключение." msgid "" "Any data structure arbitrarily containing YAQL expressions that defines " "workflow output. May be nested." msgstr "" "Любая структура данных, содержащая выражения YAQL в произвольном формате и " "определяющая выходные данные потока операций. Может быть вложенной." msgid "Anything other than one VPCZoneIdentifier" msgstr "Все, кроме одного одного VPCZoneIdentifier" msgid "Api endpoint reference of the instance." msgstr "Ссылка экземпляра на конечную точку API." msgid "" "Arbitrary key-value pairs specified by the client to help boot a server." msgstr "" "Произвольные пары ключ-значение, указанные клиентом для загрузки сервера." msgid "" "Arbitrary key-value pairs specified by the client to help the Cinder " "scheduler creating a volume." msgstr "" "Произвольные пары ключ-значение, указанные клиентом для упрощения создания " "тома планировщиком Cinder." msgid "" "Arbitrary key/value metadata to store contextual information about this " "queue." msgstr "" "Произвольные метаданные ключ/значение для сохранения информации о контексте " "для очереди. " msgid "" "Arbitrary key/value metadata to store for this server. Both keys and values " "must be 255 characters or less. Non-string values will be serialized to JSON " "(and the serialized string must be 255 characters or less)." msgstr "" "Произвольные метаданные ключ/значение, сохраняемые для этого сервера. Ключи " "и значения должны иметь длину не более 255 символов. Нестроковые значения " "будут сериализованы в JSON (и длина сериализованной строки не может " "превышать 255 символов). " msgid "Arbitrary key/value metadata to store information for aggregate." msgstr "" "Произвольные метаданные вида ключ-значение для сохранения информации о " "совокупном объекте. 
" #, python-format msgid "Argument to \"%s\" must be a list" msgstr "Аргумент \"%s\" должен быть списком" #, python-format msgid "Argument to \"%s\" must be a string" msgstr "Аргумент \"%s\" должен быть строкой" #, python-format msgid "Argument to \"%s\" must be string or list" msgstr "Аргумент \"%s\" должен быть строкой или списком" #, python-format msgid "Argument to function \"%s\" must be a list of strings" msgstr "Аргумент функции \"%s\" должен быть списком строк" #, python-format msgid "" "Arguments to \"%s\" can be of the next forms: [resource_name] or " "[resource_name, attribute, (path), ...]" msgstr "" "Аргументы для \"%s\" могут иметь следующий формат: [имя-ресурса] или [имя-" "ресурса, атрибут, (путь), ...]" #, python-format msgid "Arguments to \"%s\" must be a map" msgstr "Аргументы \"%s\" должны быть картой связей" #, python-format msgid "Arguments to \"%s\" must be of the form [index, collection]" msgstr "Аргументы \"%s\" должны иметь формат [индекс, набор]" #, python-format msgid "" "Arguments to \"%s\" must be of the form [resource_name, attribute, " "(path), ...]" msgstr "" "Аргументы \"%s\" должны иметь формат [имя-ресурса, атрибут, (путь), ...]" #, python-format msgid "Arguments to \"%s\" must be of the form [resource_name, attribute]" msgstr "Аргументы \"%s\" должны иметь формат [имя-ресурса, атрибут]" #, python-format msgid "Arguments to %s not fully resolved" msgstr "Аргументы %s обрабатываются неполностью" #, python-format msgid "Attempt to delete a stack with id: %(id)s %(msg)s" msgstr "Попытка удаления стека с ИД: %(id)s %(msg)s" #, python-format msgid "Attempt to delete user creds with id %(id)s that does not exist" msgstr "" "Попытка удаления идентификационных данных пользователя с несуществующим ИД " "%(id)s" #, python-format msgid "Attempt to delete watch_rule: %(id)s %(msg)s" msgstr "Попытка удаления watch_rule: %(id)s %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(msg)s" msgstr "Попытка обновления стека с ИД: %(id)s %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(traversal)s %(msg)s" msgstr "Попытка обновления стека с ИД: %(id)s %(traversal)s %(msg)s" #, python-format msgid "Attempt to update a watch with id: %(id)s %(msg)s" msgstr "Попытка обновления отслеживания с ИД: %(id)s %(msg)s" msgid "Attempt to use stored_context with no user_creds" msgstr "Попытка использовать stored_context без user_creds" #, python-format msgid "Attribute %(attr)s for facade %(type)s missing in provider" msgstr "Атрибут %(attr)s для фасада %(type)s не указан в поставщике" msgid "Audit status of this firewall policy." msgstr "Состояние контроля этой стратегии брандмауэра." msgid "Authentication Endpoint URI." msgstr "URI конечной точки идентификации." msgid "Authentication hash algorithm for the ike policy." msgstr "Хэш-алгоритм идентификации для стратегии ike." msgid "Authentication hash algorithm for the ipsec policy." msgstr "Хэш-алгоритм идентификации для стратегии ipsec." msgid "Authorization failed." msgstr "Доступ не предоставлен." msgid "AutoScaling group ID to apply policy to." msgstr "ИД группы AutoScaling, к которой применяется стратегия." msgid "AutoScaling group name to apply policy to." msgstr "Имя группы AutoScaling, к которой применяется стратегия." msgid "Availability Zone of the subnet." msgstr "Зона доступности подсети." msgid "Availability zone in which you want the subnet." msgstr "Желательная область доступности подсети." msgid "Availability zone to create servers in." 
msgstr "Зона доступности для создания серверов." msgid "Availability zone to create volumes in." msgstr "Зона доступности для создания томов." msgid "Availability zone to launch the instance in." msgstr "Область доступности для запуска экземпляра." msgid "Backend authentication failed" msgstr "Идентификация в базовой системе не выполнена" msgid "Binary" msgstr "Двоичный" msgid "Block device mappings for this server." msgstr "Блокировать распределения устройств для этого сервера." msgid "Block device mappings to attach to instance." msgstr "Отображения блочных устройств для прикрепления к экземпляру." msgid "Block device mappings v2 for this server." msgstr "Связи блочных устройств версии 2 для данного сервера." msgid "" "Boolean extra spec that used for filtering of backends by their capability " "to create share snapshots." msgstr "" "Булевский дополнительный параметр. Используется для фильтрации базовых " "систем по их способности создания моментальных копий общих ресурсов." msgid "Boolean indicating if the volume can be booted or not." msgstr "Булевское значение, указывающее возможность загрузки тома." msgid "Boolean indicating if the volume is encrypted or not." msgstr "Булевское значение, показывающее, зашифрован том или нет." msgid "" "Boolean indicating whether allow the volume to be attached more than once." msgstr "" "Булевское значение, указывающее, разрешено ли подсоединять том более одного " "раза." msgid "" "Bus of the device: hypervisor driver chooses a suitable default if omitted." msgstr "" "Шина устройства: драйвер гипервизора выбирает подходящее значение по " "умолчанию, если значение не указано." msgid "CIDR block notation for this subnet." msgstr "Представление блокировки CIDR для этой подсети." msgid "CIDR block to apply to subnet." msgstr "Блокировка CIDR, применяемая к подсети." msgid "CIDR block to apply to the VPC." msgstr "Блокировка CIDR, применяемая к VPC." msgid "CIDR of subnet." msgstr "CIDR подсети." msgid "CIDR to be associated with this metering rule." msgstr "CIDR для связи с этим правилом измерения." #, python-format msgid "Can not specify property \"%s\" if the volume type is public." msgstr "Свойство \"%s\" недоступно, если задан общедоступный тип тома." #, python-format msgid "Can not use %s property on Nova-network." msgstr "Невозможно использовать свойство %s в сети Nova." #, python-format msgid "Can't find role %s" msgstr "Не удается найти роль %s" msgid "Can't get user token without password" msgstr "Получить маркер пользователя без пароля невозможно" msgid "Can't get user token, user not yet created" msgstr "Получить маркер пользователя невозможно - пользователь пока не создан" msgid "Can't traverse attribute path" msgstr "Пройти по пути к атрибуту невозможно" #, python-format msgid "Cancelling update when stack is %s" msgstr "Отмена обновления, когда стек - %s" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "Невозможно вызвать %(method)s в неприсвоенном объекте %(objtype)s" #, python-format msgid "Cannot check %s, stack not created" msgstr "Проверить %s невозможно, стек не создан" #, python-format msgid "Cannot define the following properties at the same time: %(props)s." msgstr "Не удается определить следующие свойства одновременно: %(props)s." 
#, python-format msgid "" "Cannot establish connection to Heat endpoint at region \"%(region)s\" due to " "\"%(exc)s\"" msgstr "" "Не удалось установить соединение с конечной точкой Heat в области " "\"%(region)s\", причина: \"%(exc)s\"" msgid "" "Cannot get stack domain user token, no stack domain id configured, please " "fix your heat.conf" msgstr "" "Получить маркер пользователя домена стека не удалось, ИД домена стека не " "настроен, исправьте файл heat.conf" msgid "Cannot migrate to lower schema version." msgstr "Не удается перейти на прежнюю версию схемы." #, python-format msgid "Cannot modify readonly field %(field)s" msgstr "Нельзя изменить поле %(field)s, доступное только для чтения" #, python-format msgid "Cannot resume %s, resource not found" msgstr "Не удается возобновить %s, ресурс не найден" #, python-format msgid "Cannot resume %s, resource_id not set" msgstr "Возобновить %s невозможно, resource_id не задан" #, python-format msgid "Cannot resume %s, stack not created" msgstr "Возобновить %s невозможно, стек не создан" #, python-format msgid "Cannot suspend %s, resource not found" msgstr "Не удается приостановить %s, ресурс не найден" #, python-format msgid "Cannot suspend %s, resource_id not set" msgstr "Приостановить %s невозможно, resource_id не задан" #, python-format msgid "Cannot suspend %s, stack not created" msgstr "Приостановить %s невозможно, стек не создан" msgid "Captured stderr from the configuration execution." msgstr "Захвачен stderr при выполнении настройки." msgid "Captured stdout from the configuration execution." msgstr "Захвачен stdout при выполнении настройки." #, python-format msgid "Circular Dependency Found: %(cycle)s" msgstr "Обнаружена циклическая зависимость: %(cycle)s" msgid "Client entity to poll." msgstr "Объект клиента для опроса." msgid "Client name and resource getter name must be specified." msgstr "Необходимо указать имя клиента и метод get для ресурса." msgid "Client to poll." msgstr "Клиент для опроса." msgid "Cluster configs dictionary." msgstr "Словарь конфигураций кластеров." msgid "Cluster information." msgstr "Информация о кластере." msgid "Cluster metadata." msgstr "Метаданные кластера." msgid "Cluster name." msgstr "Имя кластера." msgid "Cluster status." msgstr "Состояние кластера." msgid "Comparison operator." msgstr "Оператор сравнения." #, python-format msgid "Concurrent transaction for %(action)s" msgstr "Параллельная транзакция для %(action)s" msgid "Configuration of session persistence." msgstr "Настройка сохранения состояния сеанса." msgid "" "Configuration script or manifest which specifies what actual configuration " "is performed." msgstr "" "Сценарий настройки или манифест, который указывает, как выполняется " "фактическая настройка." msgid "Configure most important configs automatically." msgstr "Автоматически создавать самые важные параметры конфигурации." #, python-format msgid "Confirm resize for server %s failed" msgstr "Не удалось подтвердить изменение размера сервера %s" msgid "Connection info for this network gateway." msgstr "Информация о соединении для этого сетевого шлюза." #, python-format msgid "Container '%(name)s' creation failed: %(code)s - %(reason)s" msgstr "Не удалось создать контейнер '%(name)s': %(code)s - %(reason)s" msgid "Container format of image." msgstr "Контейнерный формат образа." msgid "" "Content of part to attach, either inline or by referencing the ID of another " "software config resource." 
msgstr "" "Содержимое элемента для подключения методом встраивания или указания ИД " "другого ресурса конфигурации программного обеспечения." msgid "Context for this stack." msgstr "Контекст для этого стека." msgid "Control how the disk is partitioned when the server is created." msgstr "Управляет способом разделения диска на разделы при создании диска." msgid "Controls DPD protocol mode." msgstr "Управляет режимом протокола DPD." msgid "" "Convenience attribute to fetch the first assigned network address, or an " "empty string if nothing has been assigned at this time. Result may not be " "predictable if the server has addresses from more than one network." msgstr "" "Удобный атрибут для выборки первого выделенного сетевого адреса либо пустая " "строка, если адрес пока не выделен. результаты могут быть непредсказуемыми, " "если на сервере имеются адреса из нескольких сетей." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure when signal_transport is set to " "TOKEN_SIGNAL. You can signal success by adding --data-binary '{\"status\": " "\"SUCCESS\"}' , or signal failure by adding --data-binary '{\"status\": " "\"FAILURE\"}'. This attribute is set to None for all other signal transports." msgstr "" "Обычный атрибут, предоставляющий командный префикс CLI curl, который можно " "использовать для сигнализирования об окончании или сбое, когда " "signal_transport задан равным TOKEN_SIGNAL. Сигнализировать об успехе можно " "путем добавления --data-binary '{\"status\": \"SUCCESS\"}', о сбое - путем " "добавления --data-binary '{\"status\": \"FAILURE\"}'. Для всех прочих " "signal_transport не задается." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure. You can signal success by " "adding --data-binary '{\"status\": \"SUCCESS\"}' , or signal failure by " "adding --data-binary '{\"status\": \"FAILURE\"}'." msgstr "" "Обычный атрибут, предоставляющий командный префикс CLI curl, который можно " "использовать для сигнализирования о завершении или сбое. Сигнализировать об " "успехе можно путем добавления --data-binary '{\"status\": \"SUCCESS\"}', о " "сбое - путем добавления --data-binary '{\"status\": \"FAILURE\"}'." msgid "Cooldown period, in seconds." msgstr "Время отката (с)." #, python-format msgid "Could not confirm resize of server %s" msgstr "Не удается подтвердить изменение размера сервера %s" #, python-format msgid "Could not detach attachment %(att)s from server %(srv)s." msgstr "Не удается отключить %(att)s от сервера %(srv)s." #, python-format msgid "Could not fetch remote template \"%(name)s\": %(exc)s" msgstr "Не удалось извлечь удаленный шаблон \"%(name)s\": %(exc)s" #, python-format msgid "Could not fetch remote template '%(url)s': %(exc)s" msgstr "Не удалось извлечь удаленный шаблон '%(url)s': %(exc)s" #, python-format msgid "Could not load %(name)s: %(error)s" msgstr "Не удалось загрузить %(name)s: %(error)s" #, python-format msgid "Could not retrieve template: %s" msgstr "Извлечь шаблон не удалось: %s" msgid "Create volumes on the same physical port as an instance." msgstr "Создать тома на том же физическом порту, что и для экземпляра." msgid "" "Credentials used for swift. Not required if sahara is configured to use " "proxy users and delegated trusts for access." msgstr "" "Идентификационные данные для swift. Не требуются, если в sahara настроены " "пользователи прокси и делегирована проверка прав доступа." 
msgid "Cron expression." msgstr "Выражение cron." msgid "Current share status." msgstr "Текущее состояние общего ресурса." msgid "Custom LoadBalancer template can not be found" msgstr "Не найден пользовательский шаблон LoadBalancer" msgid "DB instance restore point." msgstr "Точка восстановления экземпляра БД." msgid "DNS Domain id or name." msgstr "ИД домена или имя DNS." msgid "DNS IP address used inside tenant's network." msgstr "IP-адрес сервера DNS, используемый в сети арендатора." msgid "DNS Record type." msgstr "Тип записи DNS." msgid "DNS domain serial." msgstr "Серийный номер домена в DNS." msgid "" "DNS record data, varies based on the type of record. For more details, " "please refer rfc 1035." msgstr "Данные записи DNS, зависят от типа записи. См. rfc 1035." msgid "" "DNS record priority. It is considered only for MX and SRV types, otherwise, " "it is ignored." msgstr "" "Приоритет записи DNS. Действует только для записей типа MX и SRV, для прочих " "- игнорируется." #, python-format msgid "Data supplied was not valid: %(reason)s" msgstr "Предоставленные данные недопустимы: %(reason)s" #, python-format msgid "" "Database %(dbs)s specified for user does not exist in databases for resource " "%(name)s." msgstr "" "Указанная для пользователя база данных %(dbs)s не существует среди баз " "данных для ресурса %(name)s." msgid "Database volume size in GB." msgstr "Размер тома базы данных в ГБ." #, python-format msgid "" "Databases property is required if users property is provided for resource %s." msgstr "Свойство databases обязательно, если для ресурса %s." #, python-format msgid "" "Datastore version %(dsversion)s for datastore type %(dstype)s is not valid. " "Allowed versions are %(allowed)s." msgstr "" "Версия хранилища данных, %(dsversion)s, для типа хранилища данных, " "%(dstype)s, недопустима. Разрешенные версии - %(allowed)s." msgid "Datetime when a share was created." msgstr "Дата и время создания общего ресурса." msgid "" "Dead Peer Detection protocol configuration for the ipsec site connection." msgstr "" "Конфигурация протокола обнаружения мертвого равноправного узла для " "соединения с сайтом ipsec." msgid "Dead engines are removed." msgstr "Неработающие модули удалены." msgid "Default TLS container reference to retrieve TLS information." msgstr "Ссылка на контейнер TLS по умолчанию для получения информации TLS." #, python-format msgid "Default must be a comma-delimited list string: %s" msgstr "" "Значение по умолчанию должно быть строкой списка, разделенного запятыми: %s" msgid "Default name or UUID of the image used to boot Hadoop nodes." msgstr "" "Имя по умолчанию или UUID образа, используемого для загрузки узлов Hadoop." msgid "Default region name used to get services endpoints." msgstr "Имя области по умолчанию для получения конечных точек служб." msgid "Default settings for some of task attributes defined at workflow level." msgstr "" "Значения по умолчанию для некоторых атрибутов задач, определенных на уровне " "потока операций." msgid "Default value for the input if none is specified." msgstr "Значение входа по умолчанию, если значение не указано." msgid "" "Defines a delay in seconds that Mistral Engine should wait after a task has " "completed before starting next tasks defined in on-success, on-error or on-" "complete." msgstr "" "Время ожидания модуля Mistral после завершения задачи перед началом " "выполнения новой задачи, в секундах. Задается для on-success, on-error или " "on-complete." 
msgid "" "Defines a delay in seconds that Mistral Engine should wait before starting a " "task." msgstr "" "Время ожидания модуля Mistral перед началом выполнения задачи, в секундах." msgid "Defines a pattern how task should be repeated in case of an error." msgstr "Определяет шаблон для способа повтора задачи в случае ошибки." msgid "" "Defines a period of time in seconds after which a task will be failed " "automatically by engine if hasn't completed." msgstr "" "Задает время в секундах, по истечении которого задача будет считаться " "неуспешной, если она еще не завершена." msgid "Defines if share type is accessible to the public." msgstr "Указывает, будет ли ресурс общедоступным." msgid "Defines if shared filesystem is public or private." msgstr "Определяет, будет ли общая файловая система общедоступной или частной." msgid "" "Defines the method in which the request body for signaling a workflow would " "be parsed. In case this property is set to True, the body would be parsed as " "a simple json where each key is a workflow input, in other cases body would " "be parsed expecting a specific json format with two keys: \"input\" and " "\"params\"." msgstr "" "Определяет метод, который будет обрабатывать тело запроса для отправки " "сигнала потоку операций. Если параметру присвоено значение True, то тело " "будет обрабатываться как простой запрос json, в котором каждый ключ является " "входным параметром потока операций. В других случаях тело будет " "анализироваться в особом формате json с двумя ключами: \"input\" и \"params" "\"." msgid "" "Defines whether Mistral Engine should put the workflow on hold or not before " "starting a task." msgstr "" "Указывает, будет ли модуль Mistral приостанавливать поток операций перед " "началом выполнения задачи." msgid "Defines whether auto-assign security group to this Node Group template." msgstr "" "Определяет, будет ли автоматически присваиваться группа защиты этому шаблону " "группы узлов." #, python-format msgid "" "Defining more than one configuration for the same action in " "SoftwareComponent \"%s\" is not allowed." msgstr "" "Создавать более одной конфигурации одного действия в компоненте программного " "обеспечения \"%s\" нельзя." msgid "Deleting in-progress snapshot" msgstr "Удаляется моментальная копия в состоянии IN_PROGRESS" #, python-format msgid "Deleting non-empty container (%(id)s) when %(prop)s is False" msgstr "" "Удаление непустого контейнера (%(id)s), если свойству %(prop)s присвоено " "значение False" #, python-format msgid "Delimiter for %s must be string" msgstr "Разделитель %s должен быть строкой" msgid "" "Denotes that the deployment is in an error state if this output has a value." msgstr "" "Обозначает, что развертывание находится в состоянии ошибки, если этот выход " "имеет значение." msgid "Deploy data available" msgstr "Развернуть имеющиеся данные" #, python-format msgid "Deployment exited with non-zero status code: %s" msgstr "Развертывание завершилось с ненулевым кодом: %s" #, python-format msgid "Deployment to server failed: %s" msgstr "Развертывание на сервере не выполнено: %s" #, python-format msgid "Deployment with id %s not found" msgstr "Развертывание с ИД %s не найдено" msgid "Deprecated." msgstr "Устарело." msgid "" "Describe time constraints for the alarm. Only evaluate the alarm if the time " "at evaluation is within this time constraint. Start point(s) of the " "constraint are specified with a cron expression, whereas its duration is " "given in seconds." msgstr "" "Ограничения времени для предупреждения. 
Предупреждение обрабатывается, " "только если оно не нарушает это ограничение. Начальное время ограничения " "указывается в формате cron, а его продолжительность - в секундах." msgid "Description for the alarm." msgstr "Описание предупреждения." msgid "Description for the firewall policy." msgstr "Описание стратегии брандмауэра." msgid "Description for the firewall rule." msgstr "Описание правила брандмауэра." msgid "Description for the firewall." msgstr "Описание брандмауэра." msgid "Description for the ike policy." msgstr "Описание стратегии ike." msgid "Description for the ipsec policy." msgstr "Описание стратегии ipsec." msgid "Description for the ipsec site connection." msgstr "Описание соединения с сайтом ipsec." msgid "Description for the time constraint." msgstr "Описание ограничения времени." msgid "Description for the vpn service." msgstr "Описание службы vpn." msgid "Description for this interface." msgstr "Описание этого интерфейса." msgid "Description of domain." msgstr "Описание домена." msgid "Description of keystone group." msgstr "Описание группы keystone." msgid "Description of keystone project." msgstr "Описание проекта keystone." msgid "Description of keystone region." msgstr "Описание области keystone." msgid "Description of keystone service." msgstr "Описание службы keystone." msgid "Description of keystone user." msgstr "Описание пользователя keystone." msgid "Description of record." msgstr "Описание записи." msgid "Description of the Node Group Template." msgstr "Описание шаблона группы узлов." msgid "Description of the Sahara Group Template." msgstr "Описание шаблона группы Sahara." msgid "Description of the alarm." msgstr "Описание предупреждения." msgid "Description of the data source." msgstr "Описание источника данных." msgid "Description of the firewall policy." msgstr "Описание стратегии брандмауэра." msgid "Description of the firewall rule." msgstr "Описание правила брандмауэра." msgid "Description of the firewall." msgstr "Описание брандмауэра." msgid "Description of the image." msgstr "Описание образа." msgid "Description of the input." msgstr "Описание входа." msgid "Description of the job binary." msgstr "Описание двоичного файла задания." msgid "Description of the metering label." msgstr "Описание метки измерений." msgid "Description of the output." msgstr "Описание выхода." msgid "Description of the pool." msgstr "Описание пула." msgid "Description of the security group." msgstr "Описание группы защиты." msgid "Description of the vip." msgstr "Описание vip." msgid "Description of the volume type." msgstr "Описание типа тома." msgid "Description of the volume." msgstr "Описание тома." msgid "Description of this Load Balancer." msgstr "Описание балансировщика нагрузки." msgid "Description of this listener." msgstr "Описание получателя запросов." msgid "Description of this pool." msgstr "Описание пула." msgid "Desired IPs for this port." msgstr "Предпочитаемые IP для этого порта." msgid "Desired capacity of the cluster." msgstr "Предпочтительная емкость кластера." msgid "Desired initial number of instances." msgstr "Необходимое начальное число экземпляров." msgid "Desired initial number of resources in cluster." msgstr "Предпочтительное начальное количество ресурсов в кластере." msgid "Desired initial number of resources." msgstr "Необходимое начальное число ресурсов." msgid "Desired number of instances." msgstr "Необходимое число экземпляров." 
msgid "DesiredCapacity must be between MinSize and MaxSize" msgstr "DesiredCapacity должен находиться в диапазоне от MinSize до MaxSize" msgid "Destination IP address or CIDR." msgstr "Целевой IP-адрес или CIDR." msgid "Destination ip_address for this firewall rule." msgstr "Целевой ip_address для этого правила брандмауэра." msgid "Destination port number or a range." msgstr "Целевой порт или диапазон портов." msgid "Destination port range for this firewall rule." msgstr "Диапазон целевых портов для этого правила брандмауэра." msgid "Detailed information about resource." msgstr "Подробная информация о ресурсе." msgid "Device ID of this port." msgstr "ИД устройства этого порта." msgid "Device info for this network gateway." msgstr "Информация об устройстве для этого сетевого шлюза." msgid "" "Device type: at the moment we can make distinction only between disk and " "cdrom." msgstr "" "Тип устройства: в настоящее время различаются только диски и приводы компакт-" "дисков." msgid "" "Dict, which has expand properties for port. Used only if port property is " "not specified for creating port." msgstr "" "Словарь, содержащий расширенные свойства для порта. Используется только если " "свойство port не указано при создании порта." msgid "Dictionary containing workflow tasks." msgstr "Словарь, содержащий задачи потока операций." msgid "Dictionary of node configurations." msgstr "Словарь конфигураций узлов." msgid "Dictionary of variables to publish to the workflow context." msgstr "Словарь с переменными, публикуемыми для контекста потока операций." msgid "Dictionary which contains input for workflow." msgstr "Словарь с входными данными для потока операций." msgid "" "Dictionary-like section defining task policies that influence how Mistral " "Engine runs tasks. Must satisfy Mistral DSL v2." msgstr "" "Раздел в формате словария, определяющий стратегии задачи, влияющие на способ " "выполнения задач модулем Mistral. Должен соответствовать Mistral DSL v2." msgid "DisableRollback and OnFailure may not be used together" msgstr "DisableRollback и OnFailure не могут быть указаны одновременно" msgid "Disk format of image." msgstr "Дисковый формат образа." msgid "Does not contain a valid AWS Access Key or certificate" msgstr "Не содержится верный ключ доступа AWS или сертификат" msgid "Domain email." msgstr "Электронная почта домена." msgid "Domain name." msgstr "Имя домена." #, python-format msgid "Duplicate names %s" msgstr "Повторяющиеся имена %s" msgid "Duplicate refs are not allowed." msgstr "Повторяющиеся ссылки не разрешены." msgid "Duration for the time constraint." msgstr "Продолжительность ограничения времени." msgid "EIP address to associate with instance." msgstr "Адрес EIP для связи с экземпляром." #, python-format msgid "Each %(object_name)s must contain a %(sub_section)s key." msgstr "Каждый объект %(object_name)s должен содержать ключ %(sub_section)s." msgid "Each Resource must contain a Type key." msgstr "Каждый Ресурс должен содержать ключ Тип." msgid "Ebs is missing, this is required when specifying BlockDeviceMappings." msgstr "Ebs отсутствует, но он необходим при указании BlockDeviceMappings." msgid "" "Egress rules are only allowed when Neutron is used and the 'VpcId' property " "is set." msgstr "" "Правила выхода разрешены только в случае, если используется Neutron и задано " "свойство 'VpcId'. " #, python-format msgid "Either %(net)s or %(port)s must be provided." msgstr "Необходимо указать %(net)s или %(port)s." msgid "Either 'EIP' or 'AllocationId' must be provided." 
msgstr "Необходимо указать EIP или AllocationId." msgid "Either 'InstanceId' or 'LaunchConfigurationName' must be provided." msgstr "Необходимо указать 'InstanceId' либо 'LaunchConfigurationName'." #, python-format msgid "Either project or domain must be specified for role %s" msgstr "Для роли %s необходимо указать проект или домен" #, python-format msgid "Either volume_id or snapshot_id must be specified for device mapping %s" msgstr "" "Для связывания устройства %s необходимо указать volume_id или snapshot_id" msgid "Email address of keystone user." msgstr "Адрес электронной почты пользователя keystone." msgid "Enable the legacy OS::Heat::CWLiteAlarm resource." msgstr "Включить устаревший ресурс OS::Heat::CWLiteAlarm." msgid "Enable the preview Stack Abandon feature." msgstr "Включить предварительный просмотр функции Отклонить стек." msgid "Enable the preview Stack Adopt feature." msgstr "Включить предварительный просмотр функции Внедрить стек." msgid "" "Enables Source NAT on the router gateway. NOTE: The default policy setting " "in Neutron restricts usage of this property to administrative users only." msgstr "" "Включает Source NAT в шлюзе маршрутизатора. ПРИМЕЧАНИЕ: стандартное значение " "стратегии в Neutron разрешает использование этого свойства только " "администраторам." msgid "" "Enables engine with convergence architecture. All stacks with this option " "will be created using convergence engine." msgstr "" "Включает службу с архитектурой конвергенции. Все стеки с таким параметром " "будут создаваться с помощью службы конвергенции." msgid "Enables or disables read-only access mode of volume." msgstr "Включает или выключает режим доступа к тому только для чтения." msgid "Encapsulation mode for the ipsec policy." msgstr "Режим инкапсуляции для стратегии ipsec." msgid "" "Encrypt template parameters that were marked as hidden and also all the " "resource properties before storing them in database." msgstr "" "Зашифровать параметры шаблона, помеченные как скрытые, и все свойства " "ресурса перед их сохранением в базе данных." msgid "Encryption algorithm for the ike policy." msgstr "Алгоритм шифрования для стратегии ike." msgid "Encryption algorithm for the ipsec policy." msgstr "Алгоритм шифрования для стратегии ipsec." msgid "End address for the allocation pool." msgstr "Конечный адрес для пула выделения." #, python-format msgid "End resizing the group %(group)s" msgstr "Завершение изменения размера группы %(group)s" msgid "" "Endpoint/url which can be used for signalling handle when signal_transport " "is set to TOKEN_SIGNAL. None for all other signal transports." msgstr "" "Конечная точка или URL, который можно использовать для информирования " "обработчика о том, что signal_transport задан равным TOKEN_SIGNAL. Для всех " "прочих signal_transport не задается." msgid "Endpoint/url which can be used for signalling handle." msgstr "" "Конечная точка или URL, которые можно использовать для сигнализирующего " "обработчика." msgid "Engine_Id" msgstr "ИД модуля" msgid "Error" msgstr "Ошибка" #, python-format msgid "Error authorizing action %s" msgstr "Ошибка при авторизации действия %s" #, python-format msgid "Error creating ec2 keypair for user %s" msgstr "Ошибка при создании пары ключей ec2 для пользователя %s" msgid "" "Error during applying access rules to share \"{0}\". The root cause of the " "problem is the following: {1}." msgstr "" "Ошибка настройки прав доступа для общего ресурса \"{0}\". Основная причина: " "{1}." 
msgid "Error during creation of share \"{0}\"" msgstr "Ошибка при создании общего ресурса \"{0}\"" msgid "Error during deleting share \"{0}\"." msgstr "Ошибка при удалении общего ресурса \"{0}\"" #, python-format msgid "Error validating value '%(value)s'" msgstr "Ошибка при проверке значения '%(value)s'" #, python-format msgid "Error validating value '%(value)s': %(message)s" msgstr "Ошибка при проверке значения '%(value)s': %(message)s" msgid "Ethertype of the traffic." msgstr "Тип Ethertype потока данных." msgid "Exclude state for cidr." msgstr "Исключить состояние для cidr." #, python-format msgid "Expected 1 external network, found %d" msgstr "Ожидалась 1 внешняя сеть, найдено %d" msgid "Export locations of share." msgstr "Экспортировать расположение общего ресурса." msgid "Expression of the alarm to evaluate." msgstr "Выражение для предупреждения." msgid "External fixed IP address." msgstr "Внешний фиксированный IP-адрес." msgid "External fixed IP addresses for the gateway." msgstr "Внешние фиксированные IP-адреса для шлюза." msgid "External network gateway configuration for a router." msgstr "Конфигурация шлюза внешней сети для маршрутизатора." msgid "" "Extra parameters to include in the \"floatingip\" object in the creation " "request. Parameters are often specific to installed hardware or extensions." msgstr "" "Дополнительные параметры, включаемые в объект \"floatingip\" в запросе на " "создание. Как правило, параметры задаются для конкретного установленного " "оборудования или расширений." msgid "Extra parameters to include in the creation request." msgstr "Дополнительные параметры, включаемые в запрос на создание." msgid "Extra parameters to include in the request." msgstr "Дополнительные параметры, включаемые в запрос." msgid "" "Extra parameters to include in the request. Parameters are often specific to " "installed hardware or extensions." msgstr "" "Дополнительные параметры, включаемые в запрос. Как правило, параметры " "задаются для конкретного установленного оборудования или расширений." msgid "Extra specs key-value pairs defined for share type." msgstr "" "Дополнительные пары ключ-значение, определенные для типа общего ресурса." 
#, python-format msgid "Failed to attach interface (%(port)s) to server (%(server)s)" msgstr "Не удается присоединить интерфейс (%(port)s) к серверу (%(server)s)" #, python-format msgid "Failed to attach volume %(vol)s to server %(srv)s - %(err)s" msgstr "Не удается присоединить том %(vol)s к серверу %(srv)s - %(err)s" #, python-format msgid "Failed to create Bay '%(name)s' - %(reason)s" msgstr "Не удается создать отсек '%(name)s' - %(reason)s" #, python-format msgid "Failed to detach interface (%(port)s) from server (%(server)s)" msgstr "Не удается отсоединить интерфейс (%(port)s) от сервера (%(server)s)" #, python-format msgid "Failed to execute %(action)s for %(cluster)s: %(reason)s" msgstr "Не удается выполнить %(action)s для %(cluster)s: %(reason)s" #, python-format msgid "Failed to extend volume %(vol)s - %(err)s" msgstr "Расширить том %(vol)s не удалось - %(err)s" #, python-format msgid "Failed to fetch template: %s" msgstr "Выборка шаблона не выполнена: %s" #, python-format msgid "Failed to find instance %s" msgstr "Экземпляр %s не найден" #, python-format msgid "Failed to find server %s" msgstr "Сервер %s не найден" #, python-format msgid "Failed to parse JSON data: %s" msgstr "Выполнить синтаксический анализ данных JSON не удалось: %s" #, python-format msgid "Failed to restore volume %(vol)s from backup %(backup)s - %(err)s" msgstr "" "Не удалось восстановить том %(vol)s из резервной копии %(backup)s - %(err)s" msgid "Failed to retrieve template" msgstr "Извлечь шаблон не удалось" #, python-format msgid "Failed to retrieve template data: %s" msgstr "Не удалось получить данные шаблона: %s" #, python-format msgid "Failed to retrieve template: %s" msgstr "Не удалось получить шаблон: %s" #, python-format msgid "" "Failed to send message to stack (%(stack_name)s) on other engine " "(%(engine_id)s)" msgstr "" "Не удалось отправить сообщение в стек (%(stack_name)s) другой службы " "(%(engine_id)s)" #, python-format msgid "Failed to stop stack (%(stack_name)s) on other engine (%(engine_id)s)" msgstr "" "Не удалось остановить стек (%(stack_name)s) в другом модуле (%(engine_id)s)" #, python-format msgid "Failed to update Bay '%(name)s' - %(reason)s" msgstr "Не удается обновить отсек '%(name)s' - %(reason)s" msgid "Failed to update, can not found port info." msgstr "Не удалось обновить: не найдена информация о портах." #, python-format msgid "" "Failed validating stack template using Heat endpoint at region \"%(region)s" "\" due to \"%(exc)s\"" msgstr "" "Не удалось проверить шаблон стека с помощью конечной точки Heat в области " "\"%(region)s\", причина: \"%(exc)s\"" msgid "Fake attribute !a." msgstr "Поддельный атрибут !a." msgid "Fake attribute a." msgstr "Поддельный атрибут a." msgid "Fake property !a." msgstr "Поддельное свойство !a." msgid "Fake property !c." msgstr "Поддельное свойство !c." msgid "Fake property a." msgstr "Поддельное свойство a." msgid "Fake property c." msgstr "Поддельное свойство c." msgid "Fake property ca." msgstr "Поддельное свойство ca." msgid "" "False to trigger actions when the threshold is reached AND the alarm's state " "has changed. By default, actions are called each time the threshold is " "reached." msgstr "" "False запускает действия при достижении порога, ЕСЛИ состояние " "предупреждения изменено. По умолчанию действия вызываются при каждом " "достижении порога." 
#, python-format msgid "Field %(field)s of %(objname)s is not an instance of Field" msgstr "Поле %(field)s объекта %(objname)s не является экземпляром Field" msgid "" "Fixed IP address to specify for the port created on the requested network." msgstr "" "Фиксированные IP-адреса для указания порта, созданного в запрашиваемой сети." msgid "Fixed IP addresses." msgstr "Фиксированные IP-адреса." msgid "Fixed IPv4 address for this NIC." msgstr "Фиксированный адрес IPv4 для этой карты сетевого адаптера." msgid "Flag indicating if traffic to or from instance is validated." msgstr "" "Флаг, показывающий, проверяются ли потоки данных к экземпляру или от него." msgid "" "Flag to enable/disable port security on the network. It provides the default " "value for the attribute of the ports created on this network." msgstr "" "Включает или выключает защиту портов в сети. Это значение по умолчанию для " "атрибутов портов, создаваемых в данной сети." msgid "" "Flag to enable/disable port security on the port. When disable this " "feature(set it to False), there will be no packages filtering, like security-" "group and address-pairs." msgstr "" "Включает или выключает защиту портов в сети. Если эта функция выключена " "(параметр задан как False), то фильтрация пакетов, например, по группе " "защиты или по парам адресов, применяться не будет." msgid "Flavor of the instance." msgstr "Разновидность экземпляра." msgid "Friendly name of the port." msgstr "Понятное имя порта." msgid "Friendly name of the router." msgstr "Понятное имя маршрутизатора." msgid "Friendly name of the subnet." msgstr "Понятное имя подсети." #, python-format msgid "Function \"%s\" must have arguments" msgstr "Функция \"%s\" должна иметь аргументы" #, python-format msgid "Function \"%s\" usage: [\"\", \"\"]" msgstr "Формат функции \"%s\": [\"<алгоритм>\", \"<значение>\"]" #, python-format msgid "Gateway IP address \"%(gateway)s\" is in invalid format." msgstr "Недопустимый формат IP-адреса шлюза: \"%(gateway)s\"." msgid "Gateway network for the router." msgstr "Сеть шлюза для маршрутизатора." msgid "Generic HeatAPIException, please use specific subclasses!" msgstr "" "Общая исключительная ситуация HeatAPIException, укажите конкретные подклассы!" msgid "Glance image ID or name." msgstr "ИД образа или имя Glance." msgid "Governs permissions set in manila for the cluster ips." msgstr "Управляет правами доступа, заданными в manila для IP-адресов кластера." msgid "Granularity to use for age argument, defaults to days." msgstr "" "Степень детализации для аргумента срока хранения, значение по умолчанию дней." msgid "Hadoop cluster name." msgstr "Имя кластера Hadoop." #, python-format msgid "Header X-Auth-Url \"%s\" not an allowed endpoint" msgstr "Заголовок X-Auth-Url \"%s\" не является разрешенной конечной точкой" msgid "Health probe timeout, in seconds." msgstr "Тайм-аут теста работоспособности (с)." msgid "" "Heat build revision. If you would prefer to manage your build revision " "separately, you can move this section to a different file and add it as " "another config option." msgstr "" "Версия компоновки Heat. Если необходимо управлять версиями компоновки " "отдельно, можно перенести этот раздел в другой файл и добавить его как " "другую опцию конфигурации." msgid "Host" msgstr "Узел" msgid "Hostname" msgstr "Имя узла" msgid "Hostname of the instance." msgstr "Имя хоста экземпляра." msgid "How long to preserve deleted data." msgstr "Срок сохранения удаленных данных." msgid "" "How the client will signal the wait condition. 
CFN_SIGNAL will allow an HTTP " "POST to a CFN keypair signed URL. TEMP_URL_SIGNAL will create a Swift " "TempURL to be signalled via HTTP PUT. HEAT_SIGNAL will allow calls to the " "Heat API resource-signal using the provided keystone credentials. " "ZAQAR_SIGNAL will create a dedicated zaqar queue to be signalled using the " "provided keystone credentials. TOKEN_SIGNAL will allow and HTTP POST to a " "Heat API endpoint with the provided keystone token. NO_SIGNAL will result in " "the resource going to a signalled state without waiting for any signal." msgstr "" "Способ отправки клиентом сигнала ожидания. CFN_SIGNAL разрешает HTTP POST " "для URL, подписанного парой ключей CFN. TEMP_URL_SIGNAL создает Swift " "TempURL, принимающий сигнал HTTP PUT. HEAT_SIGNAL разрешает вызовы resource-" "signal из Heat API с идентификационными данными keystone. ZAQAR_SIGNAL " "создает выделенную очередь zaqar, принимающую сигналы с идентификационными " "данными keystone. TOKEN_SIGNAL отправляет запрос HTTP POST конечной точке " "Heat API с заданным маркером keystone. NO_SIGNAL позволяет ресурсу перейти в " "состояние сигнала, не дожидаясь никакого сигнала." msgid "" "How the server should receive the metadata required for software " "configuration. POLL_SERVER_CFN will allow calls to the cfn API action " "DescribeStackResource authenticated with the provided keypair. " "POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the " "provided keystone credentials. POLL_TEMP_URL will create and populate a " "Swift TempURL with metadata for polling. ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Указывает, как сервер должен получать метаданные для конфигурации ПО. " "POLL_SERVER_CFN разрешает вызовы DescribeStackResource из cfn API с " "идентификацией по паре ключей. POLL_SERVER_HEAT разрешает вызовы resource-" "show из Heat API с идентификационными данными keystone. POLL_TEMP_URL " "создает и заполняет Swift TempURL с помощью метаданных для опроса. " "ZAQAR_MESSAGE создает выделенную очередь zaqar и отправляет метаданные для " "опроса." msgid "How the server should signal to heat with the deployment output values." msgstr "" "Каким образом сервер должен передавать сигналы Heat с выходными значениями " "развертывания." msgid "" "How the server should signal to heat with the deployment output values. " "CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL. " "TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT. " "HEAT_SIGNAL will allow calls to the Heat API resource-signal using the " "provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar " "queue to be signaled using the provided keystone credentials. NO_SIGNAL will " "result in the resource going to the COMPLETE state without waiting for any " "signal." msgstr "" "Способ отправки сервером сигналов в heat с выходными значениями " "развертывания. CFN_SIGNAL разрешает HTTP POST для URL, подписанного парой " "ключей CFN. TEMP_URL_SIGNAL создает Swift TempURL, принимающий сигнал HTTP " "PUT. HEAT_SIGNAL разрешает вызовы resource-signal из Heat API с " "идентификационными данными keystone. ZAQAR_SIGNAL создает выделенную очередь " "zaqar, принимающую сигналы с идентификационными данными keystone. NO_SIGNAL " "позволяет ресурсу перейти в состояние COMPLETE, не дожидаясь никакого " "сигнала." msgid "" "How the user_data should be formatted for the server. 
For HEAT_CFNTOOLS, the " "user_data is bundled as part of the heat-cfntools cloud-init boot " "configuration data. For RAW the user_data is passed to Nova unmodified. For " "SOFTWARE_CONFIG user_data is bundled as part of the software config data, " "and metadata is derived from any associated SoftwareDeployment resources." msgstr "" "Способ форматирования user_data для сервера. Для HEAT_CFNTOOLS " "user_data объединяется в комплект в составе данных конфигурации загрузки " "heat-cfntools cloud-init. Для RAW user_data передается в Nova без изменений. " "Для SOFTWARE_CONFIG user_data объединяется в комплект в составе данных " "конфигурации ПО, а метаданные получают из любых связанных ресурсов " "SoftwareDeployment." msgid "Human readable name for the secret." msgstr "Описательное имя секретного ключа." msgid "Human-readable name for the container." msgstr "Описательное имя для контейнера." msgid "" "ID list of the L3 agent. User can specify multi-agents for highly available " "router. NOTE: The default policy setting in Neutron restricts usage of this " "property to administrative users only." msgstr "" "Список ИД агентов L3. Пользователь может указать несколько агентов для " "маршрутизатора высокой готовности. ПРИМЕЧАНИЕ: стратегия по умолчанию в " "Neutron разрешает использовать это свойство только администраторам." msgid "ID of an existing port to associate with this server." msgstr "ИД существующего порта для связи с этим сервером." msgid "" "ID of an existing port with at least one IP address to associate with this " "floating IP." msgstr "" "ИД существующего порта, имеющего хотя бы один IP-адрес, для связи с этим " "нефиксированным IP." msgid "ID of network to create a port on." msgstr "ИД сети для создания порта." msgid "ID of project for API authentication" msgstr "ИД проекта для идентификации API" msgid "ID of queue to use for signaling output values" msgstr "ИД очереди для сигнализации выходных значений" msgid "" "ID of resource to apply configuration to. Normally this should be a Nova " "server ID." msgstr "" "ИД ресурса, для которого выполняется настройка. Обычно здесь указывается ИД " "сервера Nova." msgid "" "ID of server (VM, etc...) on host that is used for exporting network file-" "system." msgstr "" "ИД сервера (VM и т. д.) на хосте, используемый для экспорта сетевой файловой " "системы." msgid "ID of signal to use for signaling output values" msgstr "ИД сигнала для сигнализации выходных значений" msgid "" "ID of software configuration resource to execute when applying to the server." msgstr "" "ИД ресурса настройки программного обеспечения, которая выполняется при " "применении на сервере." msgid "ID of the Cluster Template used for Node Groups and configurations." msgstr "ИД шаблона кластера, используемого для групп узлов и конфигураций." msgid "ID of the InternetGateway." msgstr "ИД InternetGateway." msgid "" "ID of the L3 agent. NOTE: The default policy setting in Neutron restricts " "usage of this property to administrative users only." msgstr "" "ИД агента L3. ПРИМЕЧАНИЕ: стратегия по умолчанию в Neutron разрешает " "использовать это свойство только администраторам." msgid "ID of the Node Group Template." msgstr "ИД шаблона группы узлов." msgid "ID of the VPNGateway to attach to the VPC." msgstr "ИД VPNGateway для подключения к VPC." msgid "ID of the default image to use for the template." msgstr "ИД образа по умолчанию для шаблона." msgid "ID of the default pool this listener is associated to." 
msgstr "ИД пула по умолчанию, с которым связан получатель запросов." msgid "ID of the floating IP to assign to the server." msgstr "ИД нефиксированного IP для назначения серверу." msgid "ID of the floating IP to associate." msgstr "ИД нефиксированного IP для связи." msgid "ID of the health monitor associated with this pool." msgstr "ИД монитора работоспособности, связанного с пулом." msgid "ID of the image to use for the template." msgstr "ИД образа для шаблона." msgid "ID of the load balancer this listener is associated to." msgstr "ИД балансировщика нагрузки, с которым связан получатель запросов." msgid "ID of the network in which this IP is allocated." msgstr "ИД сети, в которой выделен этот IP." msgid "ID of the port associated with this IP." msgstr "ИД порта, связанного с этим IP." msgid "ID of the queue." msgstr "ИД очереди." msgid "ID of the router used as gateway, set when associated with a port." msgstr "" "ИД маршрутизатора, используемого в качестве шлюза; задается, если связан с " "портом." msgid "ID of the router." msgstr "ИД маршрутизатора." msgid "ID of the server being deployed to" msgstr "ИД сервера, развертываемого на" msgid "ID of the stack this deployment belongs to" msgstr "ИД стека, к которому относится это развертывание" msgid "ID of the tenant to which the RBAC policy will be enforced." msgstr "ИД арендатора, для которого будет применяться стратегия RBAC." msgid "ID of the tenant who owns the health monitor." msgstr "ИД арендатора - владельца монитора работоспособности." msgid "ID or name of the QoS policy." msgstr "ИД или имя стратегии QoS." msgid "ID or name of the RBAC object." msgstr "ИД или имя объекта RBAC." msgid "ID or name of the external network for the gateway." msgstr "ИД или имя внешней сети для шлюза." msgid "ID or name of the image to register." msgstr "ИД или имя образа для регистрации." msgid "ID or name of the load balancer with which listener is associated." msgstr "" "ИД или имя балансировщика нагрузки, с которым связан получатель запросов." msgid "ID or name of the load balancing pool." msgstr "ИД или имя пула распределения нагрузки." msgid "" "ID that AWS assigns to represent the allocation of the address for use with " "Amazon VPC. Returned only for VPC elastic IP addresses." msgstr "" "ИД, назначаемый AWS, в целях представления выделяемого адреса для " "использования с Amazon VPC. Возвращается только для IP-адресов VPC elastic." msgid "IP address and port of the pool." msgstr "IP-адрес и порт пула." msgid "IP address desired in the subnet for this port." msgstr "IP-адрес для этого порта, предпочитаемый в подсети." msgid "IP address for the VIP." msgstr "IP-адрес VIP." msgid "IP address of the associated port, if specified." msgstr "IP-адрес связанного порта (в соответствующих случаях)." msgid "" "IP address of the floating IP. NOTE: The default policy setting in Neutron " "restricts usage of this property to administrative users only." msgstr "" "Нефиксированный IP-адрес. Примечание: Стратегия по умолчанию в Neutron " "разрешает применять это свойство только администраторам." msgid "IP address of the pool member on the pool network." msgstr "IP-адрес элемента пула в сети пула." msgid "IP address of the pool member." msgstr "IP-адрес элемента пула." msgid "IP address of the vip." msgstr "IP-адрес vip." msgid "IP address to allow through this port." msgstr "IP-адрес, разрешенный для этого порта." msgid "IP address to use if the port has multiple addresses." msgstr "" "IP-адрес, используемый в том случае, если порт имеет несколько адресов." 
msgid "" "IP or other address information about guest that allowed to access to Share." msgstr "" "IP-адрес или другая информация, управляющая гостевым доступом к общему " "ресурсу." msgid "IPv6 RA (Router Advertisement) mode." msgstr "Режим IPv6 RA (Router Advertisement)." msgid "IPv6 address mode." msgstr "Режим адресов IPv6." msgid "Id of a resource." msgstr "ИД ресурса." msgid "Id of the manila share." msgstr "ИД общего ресурса manila." msgid "Id of the tenant owning the firewall policy." msgstr "ИД арендатора - владельца стратегии брандмауэра." msgid "Id of the tenant owning the firewall." msgstr "ИД арендатора - владельца брандмауэра." msgid "Identifier of the source instance to replicate." msgstr "Идентификатор исходного экземпляра для репликации." #, python-format msgid "" "If \"%(size)s\" is provided, only one of \"%(image)s\", \"%(image_ref)s\", " "\"%(source_vol)s\", \"%(snapshot_id)s\" can be specified, but currently " "specified options: %(exclusive_options)s." msgstr "" "Если указан \"%(size)s\", то можно указать одну и только одну опцию из " "числа \"%(image)s\", \"%(image_ref)s\", \"%(source_vol)s\" и " "\"%(snapshot_id)s\". Сейчас указаны следующие опции: %(exclusive_options)s." msgid "If False, closes the client socket connection explicitly." msgstr "" "Если задано значение False, то клиентское соединение с сокетом будет явным " "образом закрыто." msgid "" "If True, delete any objects in the container when the container is deleted. " "Otherwise, deleting a non-empty container will result in an error." msgstr "" "Если задано значение True, то при удалении контейнера будут удалены все " "объекты в этом контейнере. В противном случае при удалении непустого " "контейнера будет выдано сообщение об ошибке." msgid "If True, enable config drive on the server." msgstr "В случае True включить диск конфигурации на сервере." msgid "" "If configured, it allows to run action or workflow associated with a task " "multiple times on a provided list of items." msgstr "" "Если настроен этот параметр, то действие или поток операций, связанные с " "задачей, могут выполняться несколько раз для заданного списка элементов." msgid "If set, then the server's certificate will not be verified." msgstr "Если значение задано, сертификат сервера не проверяется." msgid "If specified, the backup to create the volume from." msgstr "Если значение указано, - это резервная копия для создания тома." msgid "If specified, the backup used as the source to create the volume." msgstr "" "Если значение указано, - это резервная копия, используемая в качестве " "источника для создания тома." msgid "If specified, the name or ID of the image to create the volume from." msgstr "Если значение указано, - это имя или ИД образа для создания тома." msgid "If specified, the snapshot to create the volume from." msgstr "Если значение указано, - это моментальная копия для создания тома." msgid "If specified, the type of volume to use, mapping to a specific backend." msgstr "" "Если значение указано, - это тип используемого тома, связывание с отдельным " "базовым сервером." msgid "If specified, the volume to use as source." msgstr "Если значение указано, - это том для использования в качестве ресурса." msgid "" "If the region is hierarchically a child of another region, set this " "parameter to the ID of the parent region." msgstr "" "Если область является дочерней в иерархии, присвойте этому параметру ИД " "родительской области." msgid "" "If true, the resources in the chain will be created concurrently. 
If false " "or omitted, each resource will be treated as having a dependency on the " "previous resource in the list." msgstr "" "Если параметр равен true, то ресурсы в цепочке будут создаваться " "параллельно. Если параметр равен false или не задан, то каждый ресурс будет " "обрабатываться как зависящий от предыдущего ресурса из списка." msgid "If without InstanceId, ImageId and InstanceType are required." msgstr "" "Если отсутствует InstanceId, то ImageId InstanceType являются обязательными." #, python-format msgid "Illegal prefix bounds: %(key1)s=%(value1)s, %(key2)s=%(value2)s." msgstr "" "Недопустимые границы префикса: %(key1)s=%(value1)s, %(key2)s=%(value2)s." #, python-format msgid "" "Image %(image)s requires %(imram)s minimum ram. Flavor %(flavor)s has only " "%(flram)s." msgstr "" "Для образа %(image)s требуется не менее %(imram)s оперативной памяти. " "Разновидность %(flavor)s содержит только %(flram)s." #, python-format msgid "" "Image %(image)s requires %(imsz)s GB minimum disk space. Flavor %(flavor)s " "has only %(flsz)s GB." msgstr "" "Для образа %(image)s требуется не менее %(imsz)s ГБ на диске. Разновидность " "%(flavor)s содержит только %(flsz)s ГБ." #, python-format msgid "Image status is required to be %(cstatus)s not %(wstatus)s." msgstr "Состоянием образа должно быть %(cstatus)s, а не %(wstatus)s." msgid "Incompatible parameters were used together" msgstr "Одновременно указаны несовместимые параметры" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be one of: %(allowed)s" msgstr "Неверные аргументы \"%(fn_name)s\", должны быть: %(allowed)s" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be: %(example)s" msgstr "Неверные аргументы \"%(fn_name)s\", должны быть: %(example)s" msgid "Incorrect arguments: Items to merge must be maps." msgstr "Недопустимые аргументы. Объединяемые элементы должны быть картами." #, python-format msgid "" "Incorrect index to \"%(fn_name)s\" should be between 0 and %(max_index)s" msgstr "" "Недопустимый индекс для \"%(fn_name)s\", должен быть от 0 до %(max_index)s" #, python-format msgid "Incorrect index to \"%(fn_name)s\" should be: %(example)s" msgstr "Недопустимый индекс для \"%(fn_name)s\", должен быть: %(example)s" #, python-format msgid "Index to \"%s\" must be a string" msgstr "Индекс \"%s\" должен быть строкой" #, python-format msgid "Index to \"%s\" must be an integer" msgstr "Индекс \"%s\" должен быть целым числом" msgid "" "Indicate whether the volume should be deleted when the instance is " "terminated." msgstr "Указывает, следует ли удалять том, когда экземпляр завершает работу." msgid "" "Indicate whether the volume should be deleted when the server is terminated." msgstr "Указывает, следует ли удалять том, когда сервер завершает работу." msgid "Indicates remote IP prefix to be associated with this metering rule." msgstr "Указывает префикс удаленного IP для связи с этим правилом измерения." msgid "" "Indicates whether or not to create a distributed router. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only. This property can not be used in conjunction with the L3 agent " "ID." msgstr "" "Показывает, требуется ли создавать распределенный маршрутизатор. Примечание: " "значение стратегии по умолчанию в Neutron позволяет применять это свойство " "только администратору. Это свойство нельзя применять с ИД агента L3." msgid "" "Indicates whether or not to create a highly available router. 
NOTE: The " "default policy setting in Neutron restricts usage of this property to " "administrative users only. And now neutron do not support distributed and ha " "at the same time." msgstr "" "Показывает, требуется ли создавать маршрутизатор высокой готовности. " "Примечание: значение стратегии по умолчанию в Neutron позволяет применять " "это свойство только администраторам. И теперь Neutron не поддерживает " "использование распределенных маршрутизаторов и маршрутизаторов высокой " "готовности одновременно." msgid "Indicates whether this firewall rule is enabled or not." msgstr "Указывает, включено ли это правило брандмауэра." msgid "Information used to configure the bucket as a static website." msgstr "" "Информация, используемая для настройки комплекта как статического веб-сайта." msgid "Initiator state in lowercase for the ipsec site connection." msgstr "" "Состояние инициатора в символах нижнего регистра для соединения с сайтом " "ipsec." #, python-format msgid "Input in signal data must be a map, find a %s" msgstr "Входные данные сигнала должны быть картой, обнаружено %s" msgid "Input values for the workflow." msgstr "Входные значения для потока операций." msgid "Input values to apply to the software configuration on this server." msgstr "" "Входные значения для применения к конфигурации программного обеспечения на " "этом сервере." msgid "Instance ID to associate with EIP specified by EIP property." msgstr "ИД экземпляра для связи с EIP, указанным в свойстве EIP." msgid "Instance ID to associate with EIP." msgstr "ИД экземпляра для связи с EIP." msgid "Instance connection to CFN/CW API validate certs if SSL is used." msgstr "" "Соединение экземпляра с API CFN/CW проверяет сертификаты при использовании " "SSL." msgid "Instance connection to CFN/CW API via https." msgstr "Соединение экземпляра с API CFN/CW с помощью https." #, python-format msgid "Instance is not ACTIVE (was: %s)" msgstr "Экземпляр не активен (было: %s)" #, python-format msgid "" "Instance metadata must not contain greater than %s entries. This is the " "maximum number allowed by your service provider" msgstr "" "Метаданные экземпляра могут содержать не более %s записей. Это максимальное " "число, разрешенное поставщиком услуг" msgid "Interface type of keystone service endpoint." msgstr "Тип интерфейса конечной точки службы keystone." msgid "Internet protocol version." msgstr "Версия протокола Internet." 
#, python-format msgid "Invalid %s, expected a mapping" msgstr "Неверный %s, ожидалось связывание" #, python-format msgid "Invalid CRON expression: %s" msgstr "Недопустимое выражение CRON: %s" #, python-format msgid "Invalid Parameter type \"%s\"" msgstr "Неверный параметр type \"%s\"" #, python-format msgid "Invalid Property %s" msgstr "Недопустимое свойство %s" msgid "Invalid Stack address" msgstr "Недопустимый адрес стека" msgid "Invalid Template URL" msgstr "Недопустимый URL шаблона" #, python-format msgid "Invalid URL scheme %s" msgstr "Неверная схема URL %s" #, python-format msgid "Invalid UUID version (%d)" msgstr "Неверная версия UUID (%d)" #, python-format msgid "Invalid action %s" msgstr "Неверное действие %s" #, python-format msgid "Invalid action %s specified" msgstr "Указано недопустимое действие %s" #, python-format msgid "Invalid adopt data: %s" msgstr "Недопустимые данные для внедрения: %s" #, python-format msgid "Invalid cloud_backend setting in heat.conf detected - %s" msgstr "Недопустимый параметр cloud_backend в heat.conf - %s" #, python-format msgid "Invalid codes in ignore_errors : %s" msgstr "Недопустимые коды в ignore_errors : %s" #, python-format msgid "Invalid content type %(content_type)s" msgstr "Недопустимый тип содержимого: %(content_type)s" #, python-format msgid "Invalid default %(default)s (%(exc)s)" msgstr "Неверное значение %(default)s (%(exc)s) по умолчанию" #, python-format msgid "Invalid deletion policy \"%s\"" msgstr "Недопустимая стратегия удаления \"%s\"" #, python-format msgid "Invalid filter parameters %s" msgstr "Недопустимые параметры фильтра %s" #, python-format msgid "Invalid hook type \"%(hook)s\" for %(resource)s" msgstr "Недопустимый тип перехватчика \"%(hook)s\" для %(resource)s" #, python-format msgid "" "Invalid hook type \"%(value)s\" for resource breakpoint, acceptable hook " "types are: %(types)s" msgstr "" "Недопустимый тип перехватчика \"%(value)s\" для точки прерывания ресурса. " "Допустимые типы перехватчиков: %(types)s" #, python-format msgid "Invalid key %s" msgstr "Неверный ключ %s" #, python-format msgid "Invalid key '%(key)s' for %(entity)s" msgstr "Неверный ключ '%(key)s' для %(entity)s" #, python-format msgid "Invalid keys in resource mark unhealthy %s" msgstr "" "Недопустимые ключи в запросе 'пометить ресурс как неработоспособный' %s" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "Недопустимое сочетание форматов диска и контейнера. При задании формата " "диска или контейнера равным 'aki', 'ari' или 'ami' форматы контейнера и " "диска должны совпадать." #, python-format msgid "Invalid parameter constraints for parameter %s, expected a list" msgstr "Недопустимые ограничения на параметр %s, ожидался список" #, python-format msgid "" "Invalid restricted_action type \"%(value)s\" for resource, acceptable " "restricted_action types are: %(types)s" msgstr "" "Недопустимый тип restricted_action \"%(value)s\" для ресурса. Допустимые " "типы restricted_action: %(types)s" #, python-format msgid "" "Invalid stack name %s must contain only alphanumeric or \"_-.\" characters, " "must start with alpha and must be 255 characters or less." msgstr "" "Недопустимое имя стека %s. Оно может содержать только буквы, цифры и символы " "\"_-.\", должно начинаться с буквы и иметь длину не свыше 255 символов." #, python-format msgid "Invalid stack name %s, must be a string" msgstr "Недопустимое имя стека %s. 
Оно должно быть строкой" #, python-format msgid "Invalid status %s" msgstr "Неверное состояние %s" #, python-format msgid "Invalid support status and should be one of %s" msgstr "Недопустимое состояние поддержки. Допустимые состояния: %s" #, python-format msgid "Invalid tag, \"%s\" contains a comma" msgstr "Недопустимый тег \"%s\". Содержит запятую" #, python-format msgid "Invalid tag, \"%s\" is longer than 80 characters" msgstr "Недопустимый тег \"%s\". Его длина превышает 80 символов" #, python-format msgid "Invalid tag, \"%s\" is not a string" msgstr "Недопустимый тег \"%s\". Это не строка" #, python-format msgid "Invalid tags, not a list: %s" msgstr "Недопустимые теги, должен быть задан список: %s" #, python-format msgid "Invalid template type \"%(value)s\", valid types are: cfn, hot." msgstr "Недопустимый тип шаблона: \"%(value)s\". Допустимые типы: cfn, hot." #, python-format msgid "Invalid timeout value %s" msgstr "Недопустимое значение тайм-аута %s" #, python-format msgid "Invalid timezone: %s" msgstr "Недопустимый часовой пояс: %s" #, python-format msgid "Invalid type (%s)" msgstr "Неверный тип (%s)" msgid "Ip allocation pools and their ranges." msgstr "Пулы выделения Ip и их диапазоны." msgid "Ip of the subnet's gateway." msgstr "Ip шлюза подсети." msgid "Ip version for the subnet." msgstr "Ip-версия подсети." msgid "Ip_version for this firewall rule." msgstr "Ip_version для этого правила брандмауэра." msgid "It defines an executor to which task action should be sent to." msgstr "Определяет исполнителя, которому направляется действие задачи." #, python-format msgid "Items to join must be string, map or list not %s" msgstr "" "Соединяемые элементы должны быть строками, картами или списками, а не %s" #, python-format msgid "Items to join must be string, map or list. %s failed json serialization" msgstr "" "Соединяемые элементы должны быть строками, списками или картами. " "Сериализация json для %s не выполнена" #, python-format msgid "Items to join must be strings not %s" msgstr "Соединяемые элементы должны быть строками, а не %s" #, python-format msgid "" "JSON body size (%(len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "Размер тела JSON (%(len)s байтов) превышает максимально допустимый " "(%(limit)s байтов)." msgid "JSON data that was uploaded via the SwiftSignalHandle." msgstr "Данные JSON, загруженные через SwiftSignalHandle." msgid "" "JSON serialized map that includes the endpoint, token and/or other " "attributes the client must use for signalling this handle. The contents of " "this map depend on the type of signal selected in the signal_transport " "property." msgstr "" "Сериализованная карта JSON, включающая конечную точку, маркер и (или) прочие " "атрибуты, которые клиент должен использовать для информирования обработчика. " "Содержимое карты зависит от типа сигнала, заданного в свойстве " "signal_transport." msgid "" "JSON string containing data associated with wait condition signals sent to " "the handle." msgstr "" "Строка JSON с данными для сигналов условия ожидания, отправляемых " "обработчику." msgid "" "Key used to encrypt authentication info in the database. Length of this key " "must be 32 characters." msgstr "" "Ключ шифрования для идентификационных данных в базе данных. Длина ключа " "должна быть равна 32 символам." msgid "Key/Value pairs to extend the capabilities of the flavor." msgstr "Пары ключ-значение для расширения функций разновидности." msgid "Key/value pairs associated with the volume in raw dict form." 
msgstr "Пары ключ/значение, связанные с томом, в необработанном формате dict." msgid "Key/value pairs associated with the volume." msgstr "Пары ключ/значение, связанные с томом." msgid "Key/value pairs to associate with the volume." msgstr "Пары ключ/значение для связи с томом." msgid "Keypair added to instances to make them accessible for user." msgstr "Пара ключей, добавляемая в экземпляры для доступа пользователя." msgid "Keypair secret key." msgstr "Секретный ключ пары ключей." msgid "" "Keystone domain ID which contains heat template-defined users. If this " "option is set, stack_user_domain_name option will be ignored." msgstr "" "ИД домена Keystone, содержащий пользователей, определенных шаблоном heat. " "Если эта опция задана, опция stack_user_domain_name будет проигнорирована." msgid "" "Keystone domain name which contains heat template-defined users. If " "`stack_user_domain_id` option is set, this option is ignored." msgstr "" "ИД домена Keystone, содержащий пользователей, определенных шаблоном heat. " "Если задана опция `stack_user_domain_id`, эта опция будет проигнорирована." msgid "Keystone domain." msgstr "Домен keystone." #, python-format msgid "" "Keystone has more than one service with same name %(service)s. Please use " "service id instead of name" msgstr "" "В Keystone задано несколько служб с именем %(service)s. Используйте ИД " "службы вместо имени" msgid "Keystone password for stack_domain_admin user." msgstr "Пароль Keystone для пользователь stack_domain_admin." msgid "Keystone project." msgstr "Проект keystone." msgid "Keystone role for heat template-defined users." msgstr "Роль Keystone для пользователей, определенных шаблоном heat." msgid "Keystone role." msgstr "Роль keystone." msgid "Keystone user group." msgstr "Группа пользователей keystone." msgid "Keystone user groups." msgstr "Группы пользователей keystone." msgid "Keystone user is enabled or disabled." msgstr "Указывает, включен ли пользователь keystone." msgid "" "Keystone username, a user with roles sufficient to manage users and projects " "in the stack_user_domain." msgstr "" "Имя пользователя Keystone, пользователь в ролями управления пользователями и " "проектами в домене stack_user_domain." msgid "L2 segmentation strategy on the external side of the network gateway." msgstr "Стратегия сегментации L2 с внешней стороны сетевого шлюза." msgid "LBaaS provider to implement this load balancer instance." msgstr "" "Поставщик LBaaS для реализации этого экземпляра балансировщика нагрузки." msgid "Length of OS_PASSWORD after encryption exceeds Heat limit (255 chars)" msgstr "" "Длина OS_PASSWORD после шифрования превышает ограничение Heat (255 символов)" msgid "Length of the string to generate." msgstr "Длина создаваемой строки." msgid "" "Length property cannot be smaller than combined character class and " "character sequence minimums" msgstr "" "Свойство длины не может быть меньше суммы минимальных значений класса " "символов и последовательности символов" msgid "Level of access that need to be provided for guest." msgstr "Уровень доступа для гостевого пользователя." msgid "" "Lifecycle actions to which the configuration applies. The string values " "provided for this property can include the standard resource actions CREATE, " "DELETE, UPDATE, SUSPEND and RESUME supported by Heat." msgstr "" "Действия жизненного цикла, к которым применяется конфигурация. 
Строковые " "значения, указываемые для этого свойства, могут включать стандартные " "действия ресурсов: CREATE, DELETE, UPDATE, SUSPEND и RESUME,- поддерживаемые " "Heat." msgid "List of LoadBalancer resources." msgstr "Список ресурсов LoadBalancer." msgid "List of Security Groups assigned on current LB." msgstr "Список групп защиты, присвоенных текущему LB." msgid "List of TLS container references for SNI." msgstr "Список ссылок на контейнеры TLS для SNI." msgid "List of database instances." msgstr "Список экземпляров базы данных." msgid "List of databases to be created on DB instance creation." msgstr "Список баз данных, создаваемых при создании экземпляра базы данных." msgid "List of directories to search for plug-ins." msgstr "Список каталогов для поиска модулей." msgid "List of dns nameservers." msgstr "Список серверов имен dns." msgid "List of firewall rules in this firewall policy." msgstr "Список правил брандмауэра в этой стратегии брандмауэра." msgid "List of health monitors associated with the pool." msgstr "Список мониторов работоспособности, связанных с пулом." msgid "List of hosts to join aggregate." msgstr "Список хостов, включаемых в совокупный объект." msgid "List of manila shares to be mounted." msgstr "Список общих ресурсов manila для монтирования." msgid "List of network interfaces to create on instance." msgstr "Список сетевых интерфейсов для создания в экземпляре." msgid "List of processes to enable anti-affinity for." msgstr "" "Список процессов, для которых должен быть включен режим строгой " "распределенности." msgid "List of processes to run on every node." msgstr "Список процессов для выполнения на каждом узле." msgid "List of role assignments." msgstr "Список присвоенных ролей." msgid "List of security group IDs associated with this interface." msgstr "Список ИД групп защиты, связанных с этим интерфейсом." msgid "List of security group egress rules." msgstr "Список правил выхода группы защиты." msgid "List of security group ingress rules." msgstr "Список правил доступа группы защиты." msgid "" "List of security group names or IDs to assign to this Node Group template." msgstr "" "Список имен или ИД групп защиты для присвоения этому шаблону группы узлов." msgid "" "List of security group names or IDs. Cannot be used if neutron ports are " "associated with this server; assign security groups to the ports instead." msgstr "" "Список имен или ИД групп защиты. Не используется, если порты neutron связаны " "с этим сервером; в этом случае назначьте портам группы защиты." msgid "List of security group rules." msgstr "Список правил группы защиты." msgid "List of subnet prefixes to assign." msgstr "Список присваиваемых префиксов подсетей." msgid "List of tags associated with this interface." msgstr "Список тегов, связанных с этим интерфейсом." msgid "List of tags to attach to the instance." msgstr "Список тегов, закрепляемых за экземпляром." msgid "List of tags to attach to this resource." msgstr "Список тегов, закрепляемых за этим ресурсом." msgid "List of tags to be attached to this resource." msgstr "Список тегов, закрепляемых за этим ресурсом." msgid "" "List of tasks which should be executed before this task. Used only in " "reverse workflows." msgstr "" "Список задач, которые будут выполнены перед данной задачей. Только для " "обратных потоков операций." msgid "" "List of tasks which will run after the task has completed regardless of " "whether it is successful or not." msgstr "" "Список задач, которые будут выполнены в случае любого варианта завершения " "задачи." 
msgid "List of tasks which will run after the task has completed successfully." msgstr "" "Список задач, которые будут выполнены в случае успешного завершения задачи." msgid "" "List of tasks which will run after the task has completed with an error." msgstr "" "Список задач, которые будут выполнены в случае неуспешного завершения задачи." msgid "List of users to be created on DB instance creation." msgstr "Список пользователей, создаваемых при создании экземпляра БД." msgid "" "List of workflows' executions, each of them is a dictionary with information " "about execution. Each dictionary returns values for next keys: id, " "workflow_name, created_at, updated_at, state for current execution state, " "input, output." msgstr "" "Список выполняющихся потоков операций в формате словаря с информацией о " "выполнении. Каждый словарь возвращает значения для следующих ключей: id " "(ИД), workflow_name, (имя потока операций) created_at (время создания), " "updated_at (время обновления), state (текущее состояние выполнения), input " "(входные данные), output (выходные данные)." msgid "Listener associated with this pool." msgstr "Получатель запросов, связанный с пулом." msgid "" "Local path on each cluster node on which to mount the share. Defaults to '/" "mnt/{share_id}'." msgstr "" "Локальная точка монтирования общего ресурса в каждом узле кластера. Значение " "по умолчанию: '/mnt/{share_id}'." msgid "Location of the SSL certificate file to use for SSL mode." msgstr "Расположение файла сертификата SSL для использования в режиме SSL." msgid "Location of the SSL key file to use for enabling SSL mode." msgstr "Расположение файла ключа SSL для включения режима SSL." msgid "MAC address of the port." msgstr "MAC-адрес порта." msgid "MAC address to allow through this port." msgstr "MAC-адрес, разрешенный для этого порта." msgid "Map between role with either project or domain." msgstr "Карта связей ролей с проектом или доменом." msgid "" "Map containing options specific to the configuration management tool used by " "this resource." msgstr "" "Карта связей, содержащая опции для отдельного инструмента управления " "конфигурациями, используемого этим ресурсом." msgid "" "Map representing the cloud-config data structure which will be formatted as " "YAML." msgstr "" "Карта связей, представляющая настроенную для облака структуру данных, " "которая будет представлена в формате YAML." msgid "" "Map representing the configuration data structure which will be serialized " "to JSON format." msgstr "" "Карта связей, представляющая структуру данных конфигурации, которая будет " "сериализована в формате JSON." msgid "Max bandwidth in kbps." msgstr "Максимальная пропускная способность (кбит/с)." msgid "Max burst bandwidth in kbps." msgstr "Максимальная пропускная способность для пакета (кбит/с)." msgid "Max size of the cluster." msgstr "Максимальный размер кластера." #, python-format msgid "Maximum %s is 1 hour." msgstr "Максимальное значение %s - 1 час." msgid "Maximum depth allowed when using nested stacks." msgstr "Максимальная разрешенная глубина при использовании вложенных стеков." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs)." msgstr "" "Максимальный размер строки заголовка сообщений. Возможно, max_header_line " "потребуется увеличить при использовании больших маркеров (как правило, " "созданных API Keystone версии 3 API с большими каталогами)." 
msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs.)" msgstr "" "Максимальный размер строки заголовка сообщений. Возможно, max_header_line " "потребуется увеличить при использовании больших маркеров (как правило, " "созданных API Keystone версии 3 API с большими каталогами)." msgid "Maximum number of instances in the group." msgstr "Максимальное число экземпляров в группе." msgid "Maximum number of resources in the cluster. -1 means unlimited." msgstr "" "Максимальное количество ресурсов в кластере. -1 означает отсутствие " "ограничений." msgid "Maximum number of resources in the group." msgstr "Максимальное число ресурсов в группе." msgid "Maximum number of stacks any one tenant may have active at one time." msgstr "" "Максимальное число стеков, которые могут быть активны одновременно для " "одного арендатора." msgid "Maximum prefix size that can be allocated from the subnet pool." msgstr "" "Максимальный размер префикса, разрешенный при выделении подсетей из пула." msgid "" "Maximum raw byte size of JSON request body. Should be larger than " "max_template_size." msgstr "" "Максимальный размер в байтах тела запроса JSON. Должен быть больше, чем " "max_template_size." msgid "Maximum raw byte size of any template." msgstr "Максимальный размер необработанных данных (в байтах) любого шаблона." msgid "Maximum resources allowed per top-level stack. -1 stands for unlimited." msgstr "" "Максимальное количество ресурсов в стеке верхнего уровня. Значение -1 " "снимает все ограничения." msgid "Maximum resources per stack exceeded." msgstr "Превышено максимальное количество ресурсов в стеке." msgid "" "Maximum transmission unit size (in bytes) for the ipsec site connection." msgstr "" "Максимальный размер блока передачи (в байтах) для соединения с сайтом ipsec." msgid "Member list items must be strings" msgstr "Элементы списка участников должны быть строками" msgid "Member list must be a list" msgstr "Список участников должен быть списком" msgid "Members associated with this pool." msgstr "Участники, связанные с пулом." msgid "Memory in MB for the flavor." msgstr "Объем памяти в МБ для разновидности." #, python-format msgid "Message: %(message)s, Code: %(code)s" msgstr "Сообщение: %(message)s, код: %(code)s" msgid "Metadata format invalid" msgstr "Неверный формат метаданных" msgid "Metadata key-values defined for cluster." msgstr "Пары ключ-значение метаданных, определенные для кластера." msgid "Metadata key-values defined for node." msgstr "Пары ключ-значение метаданных, определенные для узла." msgid "Metadata key-values defined for profile." msgstr "Пары ключ-значение метаданных, определенные для профайла." msgid "Metadata key-values defined for share." msgstr "Пары ключ-значение метаданных, определенные для общего ресурса." msgid "Meter name watched by the alarm." msgstr "Имя счетчика, отслеживаемого в предупреждении." msgid "" "Meter should match this resource metadata (key=value) additionally to the " "meter_name." msgstr "" "Счетчик должен соответствовать указанным метаданным ресурса (key=value), " "помимо meter_name." msgid "Meter statistic to evaluate." msgstr "Статистика по счетчикам для оценки." msgid "Method of implementation of session persistence feature." msgstr "Метод реализации функции сохранения состояния сеанса." msgid "Metric name watched by the alarm." msgstr "Имя показателя, отслеживаемого в предупреждении." 
msgid "Min size of the cluster." msgstr "Минимальный размер кластера." msgid "MinSize can not be greater than MaxSize" msgstr "MinSize не может быть больше MaxSize" msgid "Minimum number of instances in the group." msgstr "Минимальное число экземпляров в группе." msgid "Minimum number of resources in the cluster." msgstr "Минимальное количество ресурсов в кластере." msgid "Minimum number of resources in the group." msgstr "Минимальное число ресурсов в группе." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "PercentChangeInCapacity for the AdjustmentType property." msgstr "" "Минимальное число добавляемых или удаляемых ресурсов для группы с " "автомасштабированием. Используется только при указании " "PercentChangeInCapacity для свойства AdjustmentType." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "percent_change_in_capacity for the adjustment_type property." msgstr "" "Минимальное число добавляемых или удаляемых ресурсов для группы с " "автомасштабированием. Используется только при указании " "percent_change_in_capacity для свойства adjustment_type." #, python-format msgid "Missing mandatory (%s) key from mark unhealthy request" msgstr "" "Отсутствует обязательный ключ (%s) в запросе 'пометить как неработоспособный'" #, python-format msgid "Missing parameter type for parameter: %s" msgstr "В параметре %s отсутствует тип" #, python-format msgid "Missing required credential: %(required)s" msgstr "Отсутствуют обязательные идентификационные данные: %(required)s" msgid "Mistral resource validation error" msgstr "Ошибка проверки ресурса Mistral" msgid "Monasca notification." msgstr "Уведомление Monasca." msgid "Multiple actions specified" msgstr "Указано несколько действий" #, python-format msgid "Multiple physical resources were found with name (%(name)s)." msgstr "Обнаружено несколько физических ресурсов с именем (%(name)s)." #, python-format msgid "Multiple routers found with name %s" msgstr "Обнаружено несколько маршрутизаторов с именем %s" msgid "Must specify 'InstanceId' if you specify 'EIP'." msgstr "Должен быть указан InstanceId, когда указан EIP." msgid "Name for the Sahara Cluster Template." msgstr "Имя шаблона кластера Sahara." msgid "Name for the Sahara Node Group Template." msgstr "Имя шаблона группы узлов Sahara." msgid "Name for the aggregate." msgstr "Имя совокупного объекта." msgid "Name for the availability zone." msgstr "Имя зоны доступности." msgid "" "Name for the container. If not specified, a unique name will be generated." msgstr "" "Имя контейнера. Если значение не указано, будет создано уникальное имя." msgid "Name for the firewall policy." msgstr "Имя стратегии брандмауэра." msgid "Name for the firewall rule." msgstr "Имя правила брандмауэра." msgid "Name for the firewall." msgstr "Имя брандмауэра." msgid "Name for the ike policy." msgstr "Имя стратегии ike." msgid "" "Name for the image. The name of an image is not unique to a Image Service " "node." msgstr "Имя образа. Имя образа неуникально на узле службы образов." msgid "Name for the ipsec policy." msgstr "Имя стратегии ipsec." msgid "Name for the ipsec site connection." msgstr "Имя соединения с сайтом ipsec." msgid "Name for the time constraint." msgstr "Имя ограничения времени." msgid "Name for the vpn service." msgstr "Имя службы vpn." msgid "" "Name of attribute to compare. 
Names of the form metadata.user_metadata.X or " "metadata.metering.X are equivalent to what you can address through " "matching_metadata; the former for Nova meters, the latter for all others. To " "see the attributes of your Samples, use `ceilometer --debug sample-list`." msgstr "" "Имя атрибута для сравнения. Имена в виде metadata.user_metadata.X или " "metadata.metering.X аналогичны именам, к которым можно обращаться с помощью " "matching_metadata - первый для функций измерения Nova, второй - для всех " "остальных. Для просмотра атрибутов в примерах воспользуйтесь командой " "`ceilometer --debug sample-list`." msgid "Name of key to use for substituting inputs during deployment." msgstr "" "Имя ключа, используемого для замены входных данных во время развертывания." msgid "Name of keypair to inject into the server." msgstr "Имя пары ключей для внесения на сервер." msgid "Name of keystone endpoint." msgstr "Имя конечной точки keystone." msgid "Name of keystone group." msgstr "Имя группы keystone." msgid "Name of keystone project." msgstr "Имя проекта keystone." msgid "Name of keystone role." msgstr "Имя роли keystone." msgid "Name of keystone service." msgstr "Имя службы keystone." msgid "Name of keystone user." msgstr "Имя пользователя keystone." msgid "Name of registered datastore type." msgstr "Имя зарегистрированного типа хранилища данных." msgid "Name of the DB instance to create." msgstr "Имя создаваемого экземпляра БД." msgid "Name of the Node group." msgstr "Имя группы узлов." msgid "" "Name of the action associated with the task. Either action or workflow may " "be defined in the task." msgstr "" "Имя действия, связанного с задачей. В задаче может быть определено действие " "или поток операций." msgid "Name of the administrative user to use on the server." msgstr "Имя пользователя с правами администратора на сервере." msgid "Name of the alarm. By default, physical resource name is used." msgstr "Имя предупреждения. По умолчанию - имя физического ресурса." msgid "Name of the availability zone for DB instance." msgstr "Имя области доступности для экземпляра БД." msgid "Name of the availability zone for server placement." msgstr "Имя области доступности для размещения сервера." msgid "Name of the cluster to create." msgstr "Имя создаваемого кластера." msgid "Name of the cluster. By default, physical resource name is used." msgstr "Имя кластера. По умолчанию - имя физического ресурса." msgid "Name of the cookie, required if type is APP_COOKIE." msgstr "Имя cookie, обязательное для типа APP_COOKIE." msgid "Name of the cron trigger." msgstr "Имя триггера cron." msgid "Name of the current action being deployed" msgstr "Имя текущего развертываемого действия" msgid "Name of the data source." msgstr "Имя источника данных." msgid "" "Name of the derived config associated with this deployment. This is used to " "apply a sort order to the list of configurations currently deployed to a " "server." msgstr "" "Имя производной конфигурации, связанной с этим развертыванием. Служит для " "применения порядка сортировки к списку конфигураций, развернутых на сервере." msgid "" "Name of the engine node. This can be an opaque identifier. It is not " "necessarily a hostname, FQDN, or IP address." msgstr "" "Имя узла модуля. Это может быть непрозрачный идентификатор. Он не обязательно " "должен быть именем хоста, FQDN или IP-адресом." msgid "Name of the input." msgstr "Имя входа." msgid "Name of the job binary." msgstr "Имя двоичного файла задания." msgid "Name of the metering label." 
msgstr "Имя метки измерений." msgid "Name of the network owning the port." msgstr "Имя сети - владельца порта." msgid "" "Name of the network owning the port. The value is typically network:" "floatingip or network:router_interface or network:dhcp." msgstr "" "Имя сети, которой принадлежит этот порт. Допустимые значения: network:" "floatingip, network:router_interface или network:dhcp" msgid "Name of the notification. By default, physical resource name is used." msgstr "Имя уведомления. По умолчанию - имя физического ресурса." msgid "Name of the output." msgstr "Имя выхода." msgid "Name of the pool." msgstr "Имя пула." msgid "Name of the queue instance to create." msgstr "Имя создаваемого экземпляра очереди." msgid "" "Name of the registered datastore version. It must exist for provided " "datastore type. Defaults to using single active version. If several active " "versions exist for provided datastore type, explicit value for this " "parameter must be specified." msgstr "" "Имя зарегистрированной версии хранилища данных. Она должна существовать для " "предоставленного типа хранилища данных. Значение по умолчанию - одна " "активная версия. Если для предоставленного типа хранилища данных существует " "несколько активных версий, то необходимо указать явное значение в этом " "параметре." msgid "Name of the secret." msgstr "Имя секретного ключа." msgid "Name of the senlin node. By default, physical resource name is used." msgstr "Имя узла senlin. По умолчанию - имя физического ресурса." msgid "Name of the senlin policy. By default, physical resource name is used." msgstr "Имя стратегии senlin. По умолчанию - имя физического ресурса." msgid "Name of the senlin profile. By default, physical resource name is used." msgstr "Имя профайла senlin. По умолчанию - имя физического ресурса." msgid "" "Name of the senlin receiver. By default, physical resource name is used." msgstr "Имя получателя senlin. По умолчанию - имя физического ресурса." msgid "Name of the server." msgstr "Имя сервера." msgid "Name of the share network." msgstr "Имя сети общего ресурса." msgid "Name of the share type." msgstr "Имя типа общего ресурса." msgid "Name of the stack." msgstr "Имя стека." msgid "Name of the subnet pool." msgstr "Имя пула подсетей." msgid "Name of the vip." msgstr "Имя vip." msgid "Name of the volume type." msgstr "Имя типа тома." msgid "Name of the volume." msgstr "Имя тома." msgid "" "Name of the workflow associated with the task. Can be defined by intrinsic " "function get_resource or by name of the referenced workflow, i.e. " "{ workflow: wf_name } or { workflow: { get_resource: wf_name }}. Either " "action or workflow may be defined in the task." msgstr "" "Имя потока операций, связанного с задачей. Может быть определено вызовом " "внутренней функции get_resource или по имени потока операций, то есть " "{ workflow: wf_name } или { workflow: { get_resource: wf_name }}. В задаче " "может быть определено действие или поток операций." msgid "Name of this Load Balancer." msgstr "Имя балансировщика нагрузки." msgid "Name of this deployment resource in the stack" msgstr "Имя этого ресурса развертывания в стеке" msgid "Name of this listener." msgstr "Имя получателя запросов." msgid "Name of this pool." msgstr "Имя пула." msgid "Name or ID Nova flavor for the nodes." msgstr "Имя или ИД разновидности Nova для узлов." msgid "Name or ID of network to create a port on." msgstr "Имя или ИД сети для создания порта." msgid "Name or ID of senlin profile to create this node." 
msgstr "Имя или ИД профайла senlin для создания узла." msgid "" "Name or ID of shared file system snapshot that will be restored and created " "as a new share." msgstr "" "Имя или ИД моментальной копии общей файловой системы, которая будет " "восстановлена и создана как новый общий ресурс." msgid "" "Name or ID of shared filesystem type. Types defines some share filesystem " "profiles that will be used for share creation." msgstr "" "Имя или ИД типа общей файловой системы. Типы определяют некоторые профайлы " "общей файловой системы, которые будут использоваться для создания общих " "ресурсов." msgid "Name or ID of shared network defined for shared filesystem." msgstr "" "Имя или ИД общедоступной сети, определенной для общей файловой системы." msgid "Name or ID of target cluster." msgstr "Имя или ИД целевого кластера." msgid "Name or ID of the load balancing pool." msgstr "Имя или ИД пула распределения нагрузки." msgid "Name or Id of keystone region." msgstr "Имя или ИД области keystone." msgid "Name or Id of keystone service." msgstr "Имя или ИД службы keystone." #, python-format msgid "" "Name or UUID of Neutron port to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Имя или UUID порта Neutron, к которому будет подключаться эта карта сетевого " "адаптера. %(port)s или %(net)s должен быть указан." msgid "Name or UUID of network." msgstr "Имя или UUID сети." msgid "" "Name or UUID of the Neutron floating IP network or name of the Nova floating " "ip pool to use. Should not be provided when used with Nova-network that auto-" "assign floating IPs." msgstr "" "Имя или UUID сети нефиксированных IP-адресов или имя пула нефиксированных IP-" "адресов Nova. Не должен указываться при использовании с сетью Novaс " "автоматическим присваиванием нефиксированных IP-адресов." msgid "Name or UUID of the image used to boot Hadoop nodes." msgstr "Имя или UUID образа, используемого для загрузки узлов Hadoop." #, python-format msgid "" "Name or UUID of the network to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Имя или UUID сети, к которой будет подключаться эта карта сетевого адаптера. " "%(port)s или %(net)s должен быть указан." msgid "Name or id of keystone domain." msgstr "Имя или ИД домена keystone." msgid "Name or id of keystone group." msgstr "Имя или ИД группы keystone." msgid "Name or id of keystone user." msgstr "Имя или ИД пользователя keystone." msgid "Name or id of volume type (OS::Cinder::VolumeType)." msgstr "Имя или ИД типа тома (OS::Cinder::VolumeType)." msgid "Names of databases that those users can access on instance creation." msgstr "Имена баз данных, доступных пользователям при создании экземпляра." msgid "" "Namespace to group this software config by when delivered to a server. This " "may imply what configuration tool is going to perform the configuration." msgstr "" "Пространство имен для объединения конфигурации этого программного " "обеспечения при доставке на сервер. Оно может подразумевать, какой " "инструмент конфигурации будет выполнять настройку." msgid "Need more arguments" msgstr "Аргументов недостаточно" msgid "Negotiation mode for the ike policy." msgstr "Режим согласования для стратегии ike." #, python-format msgid "Neither image nor bootable volume is specified for instance %s" msgstr "Для экземпляра %s не указан ни образ, ни самозагружаемый том" msgid "Network in CIDR notation." msgstr "Сеть в нотации CIDR." msgid "Network interface ID to associate with EIP." 
msgstr "ИД сетевого интерфейса для связи с EIP." msgid "Network interfaces to associate with instance." msgstr "Сетевые интерфейсы для связи с экземпляром." #, python-format msgid "" "Network this port belongs to. If you plan to use current port to assign " "Floating IP, you should specify %(fixed_ips)s with %(subnet)s. Note if this " "changes to a different network update, the port will be replaced." msgstr "" "Сеть, которой принадлежит этот порт. Для того чтобы использовать текущий " "порт для назначения нефиксированных IP, укажите %(fixed_ips)s с %(subnet)s. " "Если будет изменена сеть, то порт будет заменен." msgid "Network to allocate floating IP from." msgstr "Сеть для выделения нефиксированного IP." msgid "Neutron network id." msgstr "ИД сети Neutron." msgid "Neutron subnet id." msgstr "ИД подсети Neutron." msgid "Nexthop IP address." msgstr "IP адрес следующего узла в маршруте." #, python-format msgid "No %s specified" msgstr "Не указан %s" msgid "No Template provided." msgstr "Шаблон не предоставлен." msgid "No action specified" msgstr "Действие не указано" msgid "No constraint expressed" msgstr "Ограничение не выражено" #, python-format msgid "" "No content found in the \"files\" section for %(fn_name)s path: %(file_key)s" msgstr "" "В разделе \"files\" не найдено содержимое для пути %(fn_name)s: %(file_key)s" #, python-format msgid "No event %s found" msgstr "Событие %s не найдено" #, python-format msgid "No events found for resource %s" msgstr "События для ресурса %s не найдены" msgid "No resource data found" msgstr "Данные о ресурсах не найдены" #, python-format msgid "No stack exists with id \"%s\"" msgstr "Отсутствует стек с ИД \"%s\"" msgid "No stack name specified" msgstr "Имя стека не указано" msgid "No template specified" msgstr "Шаблон не указан" msgid "No volume service available." msgstr "Нет доступных служб тома." msgid "Node groups." msgstr "Группы узлов." msgid "Nodes list in the cluster." msgstr "Список узлов в кластере." msgid "Non HA routers can only have one L3 agent." msgstr "Маршрутизаторы не высокой готовности могут иметь только один агент L3." #, python-format msgid "Non-empty resource type is required for resource \"%s\"" msgstr "Ресурс \"%s\" должен иметь непустой тип" msgid "Not Implemented." msgstr "Не реализован." #, python-format msgid "Not allowed - %(dsver)s without %(dstype)s." msgstr "Не разрешено - %(dsver)s без %(dstype)s." msgid "Not found" msgstr "Не найден" msgid "Not waiting for outputs signal" msgstr "Не ожидает сигнала выходов" msgid "" "Notional service where encryption is performed For example, front-end. For " "Nova." msgstr "" "Фиктивная служба, где выполняется шифрование. Пример: front-end. Для Nova." msgid "Nova instance type (flavor)." msgstr "Тип экземпляра Nova (разновидность)." msgid "Nova network id." msgstr "ИД сети Nova." msgid "Number of VCPUs for the flavor." msgstr "Количество VCPU для разновидности." msgid "Number of backlog requests to configure the socket with." msgstr "Количество непереданных запросов для настройки сокета." msgid "Number of instances in the Node group." msgstr "Число экземпляров в группе узлов." msgid "Number of minutes to wait for this stack creation." msgstr "Число минут ожидания создания стека." msgid "Number of periods to evaluate over." msgstr "Число периодов для выполнения оценки." msgid "" "Number of permissible connection failures before changing the member status " "to INACTIVE." msgstr "" "Число разрешенных сбоев соединения перед изменением состояния участника на " "INACTIVE." 
msgid "Number of remaining executions." msgstr "Число еще не выполненных операций." msgid "Number of seconds for the DPD delay." msgstr "Время задержки DPD (с)." msgid "Number of seconds for the DPD timeout." msgstr "Время тайм-аута DPD (с)." msgid "" "Number of times to check whether an interface has been attached or detached." msgstr "Количество проверок того, был ли интерфейс присоединен или отсоединен." msgid "" "Number of times to retry to bring a resource to a non-error state. Set to 0 " "to disable retries." msgstr "" "Количество повторных попыток для перевода ресурса в исправное состояние. " "Укажите 0 для отключения повторных попыток." msgid "" "Number of times to retry when a client encounters an expected intermittent " "error. Set to 0 to disable retries." msgstr "" "Количество повторных попыток клиента при возникновении ожидаемой временной " "ошибки. Укажите 0 для отключения повторных попыток." msgid "Number of workers for Heat service." msgstr "Число исполнителей в службе Heat." msgid "" "Number of workers for Heat service. Default value 0 means, that service will " "start number of workers equal number of cores on server." msgstr "" "Количество рабочих процессов для службы Heat. Значение по умолчанию 0 " "означает, что будет запущено число процессов, равно количеству доступных " "ядер процессора на сервере." msgid "Number value for delay during resolve constraint." msgstr "Время задержки при обработке ограничения." msgid "Number value for timeout during resolving output value." msgstr "Тайм-айт для обработки выходного значения." #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "Действие объекта %(action)s не выполнено, причина: %(reason)s" msgid "" "On update, enables heat to collect existing resource properties from reality " "and converge to updated template." msgstr "" "При обновлении heat будет собирать все реально существующие свойства " "ресурсов и сохранять их в обновленном шаблоне." msgid "One of predefined health monitor types." msgstr "Один из стандартных типов монитора работоспособности." msgid "One or more listeners for this load balancer." msgstr "Один или несколько приемников для этого распределителя нагрузки." msgid "Only ISO 8601 duration format of the form PT#H#M#S is supported." msgstr "" "В форме PT#H#M#S поддерживается единственный формат продолжительности - ISO " "8601." msgid "Only Templates with an extension of .yaml or .template are supported" msgstr "Поддерживаются шаблоны только с расширением .yaml или .template" #, python-format msgid "Only integer is acceptable by '%(name)s'." msgstr "Для %(name)s допустимо только целое число." #, python-format msgid "Only non-zero integer is acceptable by '%(name)s'." msgstr "Для %(name)s допустимо только целое число, отличное от нуля." msgid "Operator used to compare specified statistic with threshold." msgstr "Оператор для сравнения указанной статистики с порогом." msgid "Optional CA cert file to use in SSL connections." msgstr "" "Необязательный файл сертификата CA для использования в соединениях SSL." msgid "Optional Nova keypair name." msgstr "Необязательное имя пары ключей Nova." msgid "Optional PEM-formatted certificate chain file." msgstr "Необязательный файл цепочки сертификатов в формате PEM." msgid "Optional PEM-formatted file that contains the private key." msgstr "Необязательный файл в формате PEM, содержащий личный ключ." msgid "Optional filename to associate with part." msgstr "Необязательное имя файла для связи с элементом." 
#, python-format msgid "Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s." msgstr "" "Необязательный url heat в формате вида http://0.0.0.0:8004/v1/%(tenant_id)s." msgid "Optional subtype to specify with the type." msgstr "Необязательный подтип, указываемый с типом." msgid "Options for simulating waiting." msgstr "Параметры для эмуляции ожидания." #, python-format msgid "Order '%(name)s' failed: %(code)s - %(reason)s" msgstr "Заказ '%(name)s' не выполнен: %(code)s - %(reason)s" msgid "Outputs received" msgstr "Выходы получены" msgid "Owner of the source security group." msgstr "Владелец исходной группы защиты." msgid "PATCH update to non-COMPLETE stack" msgstr "Пакетное обновление стека в состоянии, отличном от COMPLETE" #, python-format msgid "Parameter '%(name)s' is invalid: %(exp)s" msgstr "Параметр '%(name)s' недопустим: %(exp)s" msgid "Parameter Groups error" msgstr "Ошибка группы параметров" msgid "" "Parameter Groups error: parameter_groups.: The grouped parameter key_name " "does not reference a valid parameter." msgstr "" "Ошибка группы параметров - parameter_groups. Сгруппированный параметр " "key_name не ссылается на допустимый параметр." msgid "" "Parameter Groups error: parameter_groups.: The key_name parameter must be " "assigned to one parameter group only." msgstr "" "Ошибка группы параметров - parameter_groups. Параметр key_name должен быть " "указан только для одной группы параметров." msgid "" "Parameter Groups error: parameter_groups.: The parameters of parameter group " "should be a list." msgstr "" "Ошибка группы параметров - parameter_groups. Параметры группы параметров " "должны быть списком." msgid "" "Parameter Groups error: parameter_groups.Database Group: The InstanceType " "parameter must be assigned to one parameter group only." msgstr "" "Ошибка группы параметров - parameter_groups.Database Group. Параметр " "InstanceType должен быть указан только для одной группы параметров." msgid "" "Parameter Groups error: parameter_groups.Database Group: The grouped " "parameter SomethingNotHere does not reference a valid parameter." msgstr "" "Ошибка группы параметров - parameter_groups.Database Group. Сгруппированный " "параметр SomethingNotHere не ссылается на допустимый параметр." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters must " "be provided for each parameter group." msgstr "" "Ошибка группы параметров - parameter_groups.Server Group. Параметры должны " "быть указаны для каждой группы параметров." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters of " "parameter group should be a list." msgstr "" "Ошибка группы параметров - parameter_groups.Server Group. Параметры группы " "параметров должны быть списком." msgid "" "Parameter Groups error: parameter_groups: The parameter_groups should be a " "list." msgstr "" "Ошибка группы параметров - parameter_groups. parameter_groups должен быть " "списком." #, python-format msgid "Parameter name in \"%s\" must be string" msgstr "Имя параметра в \"%s\" должно быть строкой" #, python-format msgid "Params must be a map, find a %s" msgstr "Параметры должны быть картой, обнаружено %s" msgid "Parent network of the subnet." msgstr "Родительская сеть подсети." msgid "Parts belonging to this message." msgstr "Элементы, относящиеся к этому сообщению." msgid "Password for API authentication" msgstr "Пароль для идентификации API" msgid "Password for accessing the data source URL." msgstr "Пароль для доступа к URL источника данных." 
msgid "Password for accessing the job binary URL." msgstr "Пароль для доступа к URL двоичного файла задания." msgid "Password for those users on instance creation." msgstr "Пароль для пользователей при создании экземпляра." msgid "Password of keystone user." msgstr "Пароль пользователя keystone." msgid "Password used by user." msgstr "Пароль для пользователя." #, python-format msgid "Path components in \"%s\" must be strings" msgstr "Компоненты пути в \"%s\" должны быть строками" msgid "Path components in attributes must be strings" msgstr "Компоненты пути в атрибутах должны быть строками" msgid "Payload exceeds maximum allowed size" msgstr "Размер полезной нагрузки превышает максимально допустимый" msgid "Perfect forward secrecy for the ipsec policy." msgstr "Полное прямое шифрование для стратегии ipsec." msgid "Perfect forward secrecy in lowercase for the ike policy." msgstr "" "Полное прямое шифрование в символах нижнего регистра для стратегии ike." msgid "" "Perform a check on the input values passed to verify that each required " "input has a corresponding value. When the property is set to STRICT and no " "value is passed, an exception is raised." msgstr "" "Выполнить проверку переданных входных значений на соответствие каждого " "обязательного входа и значения. Если свойству присвоено значение STRICT, а " "значение не передано, возникает исключительная ситуация." msgid "Period (seconds) to evaluate over." msgstr "Период (с) для выполнения оценки." msgid "Physical ID of the VPC. Not implemented." msgstr "Физический ИД VPC. Не реализовано." #, python-format msgid "" "Plugin %(plugin)s doesn't support the following node processes: " "%(unsupported)s. Allowed processes are: %(allowed)s" msgstr "" "Модуль %(plugin)s не поддерживает следующие процессы в узле: " "%(unsupported)s. Допустимые процессы: %(allowed)s" msgid "Plugin name." msgstr "Имя модуля." msgid "Policies for removal of resources on update." msgstr "Стратегии для удаления ресурсов при обновлении." msgid "Policy for rolling updates for this scaling group." msgstr "Стратегия параллельных обновлений для этой группы масштабирования." msgid "" "Policy on how to apply a flavor update; either by requesting a server resize " "or by replacing the entire server." msgstr "" "Стратегия применения обновлений разновидности; возможные варианты: запрос на " "изменение размера сервера или замена сервера целиком." msgid "" "Policy on how to apply an image-id update; either by requesting a server " "rebuild or by replacing the entire server." msgstr "" "Стратегия применения обновлений image-id; возможные варианты: запрос на " "изменение конфигурации сервера или замена сервера целиком." msgid "" "Policy on how to respond to a stack-update for this resource. REPLACE_ALWAYS " "will replace the port regardless of any property changes. AUTO will update " "the existing port for any changed update-allowed property." msgstr "" "Стратегия ответа на обновление стека для этого ресурса. Значение " "REPLACE_ALWAYS заменит порт независимо от любых изменений свойства. Значение " "AUTO обновит существующий порт для любого измененного свойства, для которого " "разрешено обновление." msgid "" "Policy to be processed when doing an update which requires removal of " "specific resources." msgstr "" "Стратегия для обработки во время обновления, для которого требуется удаление " "конкретных ресурсов." 
msgid "Pool creation failed" msgstr "Не удалось создать пул" msgid "Pool creation failed due to vip" msgstr "Не удалось создать пул из-за vip" msgid "Pool from which floating IP is allocated." msgstr "Пул для выделения нефиксированных IP." msgid "Port number on which the servers are running on the members." msgstr "Номер порта, на котором серверы работают у элементов." msgid "Port on which the pool member listens for requests or connections." msgstr "Порт, на котором участники пула принимают запросы или соединения." msgid "Port security enabled of the network." msgstr "Защита портов включена в сети." msgid "Port security enabled of the port." msgstr "Защита портов включена для порта." msgid "Position of the rule within the firewall policy." msgstr "Позиция правила в стратегии брандмауэра." msgid "Pre-shared key string for the ipsec site connection." msgstr "Строка PSK для соединения с сайтом ipsec." msgid "Prefix length for subnet allocation from subnet pool." msgstr "Длина префикса при выделении подсети из пула." msgid "Private DNS name of the specified instance." msgstr "Имя частной DNS указанного экземпляра." msgid "Private IP address of the network interface." msgstr "Частный IP-адрес сетевого интерфейса." msgid "Private IP address of the specified instance." msgstr "Частный IP-адрес указанного экземпляра." msgid "Project ID" msgstr " ID проекта" msgid "" "Projects to add volume type access to. NOTE: This property is only supported " "since Cinder API V2." msgstr "" "Проекты, в которые добавляются функции доступа к этому типу тома. " "Примечание: это свойство поддерживается только начиная с Cinder API v2." #, python-format msgid "" "Properties %(algorithm)s and %(bit_length)s are required for %(type)s type " "of order." msgstr "" "Свойства %(algorithm)s и %(bit_length)s являются обязательными для типа " "заказа %(type)s." msgid "Properties for profile." msgstr "Свойства профайла." msgid "Properties of this policy." msgstr "Свойства стратегии." msgid "Properties to pass to each resource being created in the chain." msgstr "Свойства, передаваемые каждому ресурсу, создаваемому в цепочке." #, python-format msgid "Property %(cookie)s is required when %(sp)s type is set to %(app)s." msgstr "" "Свойство %(cookie)s является обязательным, если тип %(sp)s задан как %(app)s." #, python-format msgid "" "Property %(cookie)s must NOT be specified when %(sp)s type is set to %(ip)s." msgstr "" "Свойство %(cookie)s нельзя указывать, если тип %(sp)s задан как %(ip)s." #, python-format msgid "" "Property %(key)s updated value %(new)s should be superset of existing value " "%(old)s." msgstr "" "Для свойства %(key)s обновленное значение %(new)s должно быть надмножеством " "существующего значения %(old)s." #, python-format msgid "" "Property %(n)s type mismatch between facade %(type)s (%(fs_type)s) and " "provider (%(ps_type)s)" msgstr "" "Разные типы свойства %(n)s указаны в фасаде %(type)s (%(fs_type)s) и " "поставщике (%(ps_type)s)" #, python-format msgid "Property %(policies)s and %(item)s cannot be used both at one time." msgstr "Свойства %(policies)s и %(item)s нельзя использовать одновременно." #, python-format msgid "Property %(ref)s required when protocol is %(term)s." msgstr "Свойство %(ref)s является обязательным, если протокол - это %(term)s." 
#, python-format msgid "Property %s not assigned" msgstr "Свойство %s не присвоено" #, python-format msgid "Property %s not implemented yet" msgstr "Свойство %s еще не реализовано" msgid "" "Property cookie_name is required when session_persistence type is set to " "APP_COOKIE." msgstr "" "Свойство cookie_name является обязательным, если тип session_persistence " "задан как APP_COOKIE." msgid "" "Property cookie_name is required, when session_persistence type is set to " "APP_COOKIE." msgstr "" "Свойство cookie_name является обязательным, если тип session_persistence " "задан как APP_COOKIE." msgid "" "Property cookie_name must NOT be specified when session_persistence type is " "set to SOURCE_IP." msgstr "" "Свойство cookie_name запрещено указывать, если тип session_persistence задан " "как SOURCE_IP." msgid "Property values for the resources in the group." msgstr "Значения свойств для ресурсов в группе." msgid "Protocol for balancing." msgstr "Протокол для распределения нагрузки." msgid "Protocol for the firewall rule." msgstr "Протокол правила брандмауэра." msgid "Protocol of the pool." msgstr "Протокол пула." msgid "Protocol on which to listen for the client traffic." msgstr "Протокол для приема данных клиента." msgid "Protocol to balance." msgstr "Протокол для распределения нагрузки." msgid "Protocol value for this firewall rule." msgstr "Значение протокола для этого правила брандмауэра." msgid "" "Provide access to nodes using other nodes of the cluster as proxy gateways." msgstr "" "Предоставить доступ к узлам посредством других узлов кластера, работающих " "как проксирующие шлюзы." msgid "" "Provide old encryption key. New encryption key would be used from config " "file." msgstr "" "Укажите прежний ключ шифрования. Новый ключ шифрования использовался бы из " "файла конфигурации." msgid "Provider for this Load Balancer." msgstr "Поставщик балансировщика нагрузки." msgid "Provider implementing this load balancer instance." msgstr "Поставщик для реализации этого экземпляра балансировщика нагрузки." #, python-format msgid "Provider requires property %(n)s unknown in facade %(type)s" msgstr "Для поставщика требуется свойство %(n)s, неизвестное в фасаде %(type)s" msgid "Public DNS name of the specified instance." msgstr "Имя общей DNS указанного экземпляра." msgid "Public IP address of the specified instance." msgstr "Общедоступный IP-адрес указанного экземпляра." msgid "" "RPC timeout for the engine liveness check that is used for stack locking." msgstr "" "Тайм-аут RPC проверки активности модуля, который используется для блокировки " "стека." msgid "RX/TX factor." msgstr "Фактор RX/TX." #, python-format msgid "Rebuilding server failed, status '%s'" msgstr "Изменение конфигурации сервера не выполнено, состояние '%s'" msgid "Record name." msgstr "Имя записи." #, python-format msgid "Recursion depth exceeds %d." msgstr "Глубина рекурсии превышает %d." msgid "" "Ref structure that contains the ID of the VPC on which you want to create " "the subnet." msgstr "" "Основная структура, содержащая ИД соединения VPC, на котором будет создана " "подсеть." msgid "Reference to a flavor for creating DB instance." msgstr "Ссылка на разновидность для создания экземпляра БД." msgid "Reference to certificate." msgstr "Ссылка на сертификат." msgid "Reference to intermediates." msgstr "Ссылка на промежуточные сертификаты." msgid "Reference to private key passphrase." msgstr "Ссылка на пароль для личного ключа." msgid "Reference to private key." msgstr "Ссылка на личный ключ." 
msgid "Reference to public key." msgstr "Ссылка на открытый ключ." msgid "Reference to the secret." msgstr "Ссылка на секретный ключ." msgid "References to secrets that will be stored in container." msgstr "Ссылки на секретные ключи будут сохранены в контейнере." msgid "Region name in which this stack will be created." msgstr "Имя области, в которой будет создан этот стек." msgid "Remaining executions." msgstr "Еще не выполненные операции." msgid "Remote branch router identity." msgstr "Идентификатор удаленного маршрутизатора ветви." msgid "Remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "" "Общедоступный адрес IPv4 или адрес IPv6 удаленного маршрутизатора ветви, " "либо FQDN." msgid "Remote subnet(s) in CIDR format." msgstr "Удаленная подсеть(и) в формате CIDR." msgid "" "Replacement policy used to work around flawed nova/neutron port interaction " "which has been fixed since Liberty." msgstr "" "Правила замены для изолирования ошибочных взаимодействий портов nova/" "neutron, которые были исправлены в выпуске после Liberty." msgid "Request expired or more than 15mins in the future" msgstr "Время действия запроса истекло или истекает через 15 минут" #, python-format msgid "Request limit exceeded: %(message)s" msgstr "Превышено ограничение по числу запросов: %(message)s" msgid "Request missing required header X-Auth-Url" msgstr "В запросе не указан обязательный заголовок X-Auth-Url" msgid "Request was denied due to request throttling" msgstr "Запрос отклонен в процессе регулирования количества запросов" #, python-format msgid "" "Requested plugin '%(plugin)s' doesn't support version '%(version)s'. Allowed " "versions are %(allowed)s" msgstr "" "Модуль '%(plugin)s' не поддерживает версию '%(version)s'. Допустимые версии: " "%(allowed)s" msgid "" "Required extra specification. Defines if share drivers handles share servers." msgstr "" "Обязательный дополнительный параметр. Определяет, будут ли драйверы общих " "ресурсов обрабатывать общие серверы." #, python-format msgid "Required property %(n)s for facade %(type)s missing in provider" msgstr "" "Обязательное свойство %(n)s для фасада %(type)s не указано в поставщике" #, python-format msgid "Resizing to '%(flavor)s' failed, status '%(status)s'" msgstr "Изменение размера на '%(flavor)s' не выполнено, состояние '%(status)s'" #, python-format msgid "Resource \"%s\" has no type" msgstr "Ресурс \"%s\" не имеет типа" #, python-format msgid "Resource \"%s\" type is not a string" msgstr "Тип \"%s\" ресурса не является строкой" #, python-format msgid "Resource %(name)s %(key)s type must be %(typename)s" msgstr "Требуемый тип ресурса %(name)s %(key)s: %(typename)s" #, python-format msgid "Resource %(name)s is missing \"%(type_key)s\"" msgstr "В ресурсе %(name)s отсутствует \"%(type_key)s\"" #, python-format msgid "" "Resource %s's property user_data_format should be set to SOFTWARE_CONFIG " "since there are software deployments on it." msgstr "" "Свойству user_data_format ресурса %s должно быть присвоено значение " "SOFTWARE_CONFIG, поскольку в нем есть развертывания программного обеспечения." msgid "Resource ID was not provided." msgstr "Не указан ИД ресурса." msgid "" "Resource definition for the resources in the group, in HOT format. The value " "of this property is the definition of a resource just as if it had been " "declared in the template itself." msgstr "" "Определение ресурса для ресурсов группы в формате HOT. 
Значением этого " "свойства является такое определение ресурса, какое было бы объявлено в самом " "шаблоне." msgid "" "Resource definition for the resources in the group. The value of this " "property is the definition of a resource just as if it had been declared in " "the template itself." msgstr "" "Определение ресурса для ресурсов группы. Значением этого свойства является " "такое определение ресурса, какое было бы объявлено в самом шаблоне." msgid "Resource failed" msgstr "Сбой ресурса" msgid "Resource is not built" msgstr "Ресурс не скомпонован" msgid "Resource name may not contain \"/\"" msgstr "Имя ресурса не может содержать \"/\"" msgid "Resource type." msgstr "Тип ресурса." msgid "Resource update already requested" msgstr "Запрос на обновление ресурса уже отправлен" msgid "Resource with the name requested already exists" msgstr "Ресурс с запрашиваемым именем уже существует" msgid "" "ResourceInError: resources.remote_stack: Went to status UPDATE_FAILED due to " "\"Remote stack update failed\"" msgstr "" "ResourceInError: resources.remote_stack: переход в состояние UPDATE_FAILED с " "причиной \"Ошибка обновления удаленного стека\"" #, python-format msgid "Resources must contain Resource. Found a [%s] instead" msgstr "Ресурсы должны содержать Ресурс. Вместо него обнаружен [%s]" msgid "" "Resources that users are allowed to access by the DescribeStackResource API." msgstr "" "Ресурсы, к которым разрешается обращаться пользователям из API " "DescribeStackResource." msgid "Returned status code from the configuration execution." msgstr "Код состояния, возвращаемый при выполнении настройки." msgid "Route duplicates an existing route." msgstr "Маршрут повторяет уже существующий." msgid "Route table ID." msgstr "ИД таблицы маршрутов." msgid "Safety assessment lifetime configuration for the ike policy." msgstr "Настройка времени существования оценки безопасности для стратегии ike." msgid "Safety assessment lifetime configuration for the ipsec policy." msgstr "" "Настройка времени существования оценки безопасности для стратегии ipsec." msgid "Safety assessment lifetime units." msgstr "Единицы времени существования оценки безопасности." msgid "Safety assessment lifetime value in specified units." msgstr "" "Значение времени существования оценки безопасности в указанных единицах." msgid "Scheduler hints to pass to Nova (Heat extension)." msgstr "Рекомендации планировщика для передачи в Nova (расширение Heat)." msgid "Schema representing the inputs that this software config is expecting." msgstr "" "Схема, представляющая входные данные, которые ожидаются в этой конфигурации " "программного обеспечения." msgid "Schema representing the outputs that this software config will produce." msgstr "" "Схема, представляющая выходные данные, создаваемые в этой конфигурации " "программного обеспечения." #, python-format msgid "Schema valid only for %(ltype)s or %(mtype)s, not %(utype)s" msgstr "" "Схема допустима только для %(ltype)s или %(mtype)s и недопустима для " "%(utype)s" msgid "" "Scope of flavor accessibility. Public or private. Default value is True, " "means public, shared across all projects." msgstr "" "Область доступности разновидности, открытая или частная. Значение по " "умолчанию - True, то есть открытая и общедоступная во всех проектах." #, python-format msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden." msgstr "Поиск арендатора %(target)s из арендатора %(actual)s запрещен." msgid "Seconds between running periodic tasks."
msgstr "Интервал в секундах между запусками периодических задач." msgid "Seconds to wait after a create. Defaults to the global wait_secs." msgstr "" "Время ожидания в секундах после создания. По умолчанию - глобальное значение " "wait_secs." msgid "Seconds to wait after a delete. Defaults to the global wait_secs." msgstr "" "Время ожидания в секундах после удаления. По умолчанию - глобальное значение " "wait_secs." msgid "Seconds to wait after an action (-1 is infinite)." msgstr "Время ожидания в секундах после действия (-1 - бесконечное ожидание)." msgid "Seconds to wait after an update. Defaults to the global wait_secs." msgstr "" "Время ожидания в секундах после обновления. По умолчанию - глобальное " "значение wait_secs." #, python-format msgid "Section %s can not be accessed directly." msgstr "Раздел %s недоступен напрямую." #, python-format msgid "Security Group \"%(group_name)s\" not found" msgstr "Группа защиты \"%(group_name)s\" не найдена" msgid "Security group IDs to assign." msgstr "ИД группы защиты для назначения." msgid "Security group IDs to associate with this port." msgstr "ИД групп защиты для связи с этим портом." msgid "Security group names to assign." msgstr "Имена групп защиты для назначения." msgid "Security groups cannot be assigned the name \"default\"." msgstr "Группам защиты не может быть назначено имя \"default\"." msgid "Security service IP address or hostname." msgstr "IP-адрес или имя хоста службы защиты." msgid "Security service description." msgstr "Описание службы защиты." msgid "Security service domain." msgstr "Домен службы защиты." msgid "Security service name." msgstr "Имя службы защиты." msgid "Security service type." msgstr "Тип службы защиты." msgid "Security service user or group used by tenant." msgstr "Пользователь или группа службы защиты, используемые арендатором." msgid "Select deferred auth method, stored password or trusts." msgstr "" "Выбрать метод отложенной идентификации, хранимый пароль или группы доверия." msgid "Sequence of characters to build the random string from." msgstr "Последовательность символов для создания произвольной строки." #, python-format msgid "Server %(name)s delete failed: (%(code)s) %(message)s" msgstr "Удалить сервер %(name)s не удалось: (%(code)s) %(message)s" msgid "Server Group name." msgstr "Имя группы сервера." msgid "Server name." msgstr "Имя сервера." msgid "Server to assign floating IP to." msgstr "Сервер, которому будет назначен нефиксированный IP." #, python-format msgid "" "Service %(service_name)s is not available for resource type " "%(resource_type)s, reason: %(reason)s" msgstr "" "Служба %(service_name)s недоступна для типа ресурса %(resource_type)s, " "причина: %(reason)s" msgid "Service misconfigured" msgstr "Неверная конфигурация службы" msgid "Service temporarily unavailable" msgstr "Служба временно недоступна" msgid "Set of parameters passed to this stack." msgstr "Набор параметров, переданных в этот стек." msgid "Set of rules for comparing characters in a character set." msgstr "Набор правил для сравнения символов в наборе символов." msgid "Set of symbols and encodings." msgstr "Набор символов и кодировок." msgid "Set to \"vpc\" to have IP address allocation associated to your VPC." msgstr "Если указано \"vpc\", выделение IP-адреса будет связано с VPC." msgid "Set to true if DHCP is enabled and false if DHCP is disabled." msgstr "Укажите значение true, если DHCP включен, и false, если DHCP выключен." msgid "Severity of the alarm." msgstr "Серьезность предупреждения." msgid "Share description." 
msgstr "Описание общего ресурса." msgid "Share host." msgstr "Хост общего ресурса." msgid "Share name." msgstr "Имя общего ресурса." msgid "Share network description." msgstr "Описание сети общего ресурса." msgid "Share project ID." msgstr " ID проекта общего ресурса." msgid "Share protocol supported by shared filesystem." msgstr "Протокол, обеспечивающий общее использование файловой системы." msgid "Share storage size in GB." msgstr "Размер хранилища общих ресурсов в ГБ." msgid "Shared status of the metering label." msgstr "Общее состояние уровня измерений." msgid "Shared status of this firewall policy." msgstr "Общее состояние этой стратегии брандмауэра." msgid "Shared status of this firewall rule." msgstr "Общее состояние этого правила брандмауэра." msgid "Shared status of this firewall." msgstr "Общее состояние этого брандмауэра." msgid "Shrinking volume" msgstr "Сжатие тома" msgid "Signal data error" msgstr "Ошибка данных сигнала" #, python-format msgid "Signal resource during %s" msgstr "Отправить сигнал ресурсу во время %s" #, python-format msgid "Single schema valid only for %(ltype)s, not %(utype)s" msgstr "Одна схема допустима только для %(ltype)s и недопустима для %(utype)s" msgid "Size of a secondary ephemeral data disk in GB." msgstr "Размер вторичного временного диска в ГБ." msgid "Size of adjustment." msgstr "Размер корректировки." msgid "Size of encryption key, in bits. For example, 128 or 256." msgstr "Размер ключа шифрования в битах. Например, 128 или 256." msgid "" "Size of local disk in GB. The \"0\" size is a special case that uses the " "native base image size as the size of the ephemeral root volume." msgstr "" "Объем локального диска в ГБ. \"0\" указывает, что используется базовый " "размер встроенного образа в качестве размера временного корневого тома." msgid "" "Size of the block device in GB. If it is omitted, hypervisor driver " "calculates size." msgstr "" "Размер блочного устройства в ГБ. Если не указан, драйвер гипервизора " "вычисляет этот размер." msgid "Size of the instance disk volume in GB." msgstr "Размер тома диска экземпляра в ГБ." msgid "Size of the volumes, in GB." msgstr "Размер томов в ГБ." msgid "Smallest prefix size that can be allocated from the subnet pool." msgstr "" "Наименьший размер префикса, разрешенный при выделении подсетей из пула." #, python-format msgid "Snapshot with id %s not found" msgstr "Моментальная копия с ИД %s не найдена" msgid "" "SnapshotId is missing, this is required when specifying BlockDeviceMappings." msgstr "" "SnapshotId отсутствует, но он необходим при указании BlockDeviceMappings." #, python-format msgid "Software config with id %s not found" msgstr "Конфигурация программного обеспечения с ИД %s не найдена" msgid "Source IP address or CIDR." msgstr "Исходный IP-адрес или CIDR." msgid "Source ip_address for this firewall rule." msgstr "Исходный ip_address для этого правила брандмауэра." msgid "Source port number or a range." msgstr "Исходный порт или диапазон портов." msgid "Source port range for this firewall rule." msgstr "Диапазон исходных портов для этого правила брандмауэра." #, python-format msgid "Specified output key %s not found." msgstr "Указанный ключ вывода %s не найден." #, python-format msgid "Specified status is invalid, defaulting to %s" msgstr "указано неверное состояние, по умолчанию используется %s" #, python-format msgid "Specified subnet %(subnet)s does not belongs to network %(network)s." msgstr "Подсеть %(subnet)s не принадлежит сети %(network)s." 
msgid "Specifies a custom discovery url for node discovery." msgstr "Пользовательский url для поиска узлов." msgid "Specifies database names for creating databases on instance creation." msgstr "" "Указывает имена баз данных для создания баз данных при создании экземпляра." msgid "Specify the ACL permissions on who can read objects in the container." msgstr "" "Укажите права доступа ACL, определяющие, кто может читать объекты в " "контейнере." msgid "Specify the ACL permissions on who can write objects to the container." msgstr "" "Укажите права доступа ACL, определяющие, кто может записывать объекты в " "контейнер." msgid "" "Specify whether the remote_ip_prefix will be excluded or not from traffic " "counters of the metering label. For example to not count the traffic of a " "specific IP address of a range." msgstr "" "Указывает, следует ли исключать remote_ip_prefix в счетчиках потока данных " "метки измерений. Например, не включать потоки данных по отдельному IP-адресу " "диапазона." #, python-format msgid "Stack %(stack_name)s already has an action (%(action)s) in progress." msgstr "В стеке %(stack_name)s уже выполняется действие (%(action)s)." msgid "Stack ID" msgstr "ID стека" msgid "Stack Name" msgstr "Имя стека" msgid "Stack name may not contain \"/\"" msgstr "Имя стека не может содержать \"/\"" msgid "Stack resource id" msgstr "ИД ресурса стека" msgid "Stack unknown status" msgstr "Неизвестное состояние стека" #, python-format msgid "Stack with id %s not found" msgstr "Стек с ИД %s не найден" msgid "" "Stacks containing these tag names will be hidden. Multiple tags should be " "given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too)." msgstr "" "Стеки, содержащие данные имена тегов, будут скрыты. Несколько тегов " "разделяются запятой, например, hidden_stack_tags=hide_me,me_too." msgid "Start address for the allocation pool." msgstr "Начальный адрес для пула выделения." #, python-format msgid "Start resizing the group %(group)s" msgstr "Запуск изменения размера группы %(group)s" msgid "Start time for the time constraint. A CRON expression property." msgstr "Начальное время для ограничения. В формате выражения CRON." #, python-format msgid "State %s invalid for create" msgstr "Неверное состояние %s для создания" #, python-format msgid "State %s invalid for resume" msgstr "Неверное состояние %s для возобновления" #, python-format msgid "State %s invalid for suspend" msgstr "Неверное состояние %s для приостановки" msgid "Status" msgstr "Статус" #, python-format msgid "String to split must be string; got %s" msgstr "Разделяемая строка должна быть строкой; получено %s" msgid "String value with which to compare." msgstr "Строковое значение для сравнения." msgid "Subnet ID to associate with this interface." msgstr "ИД подсети для связи с этим интерфейсом." msgid "Subnet ID to launch instance in." msgstr "ИД подсети для запуска экземпляра." msgid "Subnet ID." msgstr "ИД подсети." msgid "Subnet in which the vpn service will be created." msgstr "Подсеть, в которой будет создана служба vpn." msgid "" "Subnet in which to allocate the IP address for port. Used for creating port, " "based on derived properties. If subnet is specified, network property " "becomes optional." msgstr "" "Подсеть, в которой выделяются IP-адреса для этого порта. Используется при " "создании порта на основе производных свойств. Если указана подсеть, параметр " "network является необязательным." msgid "Subnet in which to allocate the IP address for this port." 
msgstr "Подсеть, в которой выделяются IP-адреса для этого порта." msgid "Subnet name or ID of this member." msgstr "Имя подсети или ИД этого участника." msgid "Subnet of external fixed IP address." msgstr "Подсеть внешних фиксированных IP-адресов." msgid "Subnet of the vip." msgstr "Подсеть vip." msgid "Subnets of this network." msgstr "Подсети этой сети." msgid "" "Subset of trustor roles to be delegated to heat. If left unset, all roles of " "a user will be delegated to heat when creating a stack." msgstr "" "Подмножество ролей доверителя для делегирования в heat. Если не задано, то " "при создании стека в heat будут делегированы все роли пользователя." msgid "Supplied metadata for the resources in the group." msgstr "Предоставленные метаданные для ресурсов в группе." msgid "Supported versions: keystone v3" msgstr "Поддерживаемые версии: keystone v3" #, python-format msgid "Suspend of instance %s failed" msgstr "Не удалось приостановить экземпляр %s" #, python-format msgid "Suspend of server %s failed" msgstr "Не удалось приостановить сервер %s" msgid "Swap space in MB." msgstr "Размер пространства подкачки в МБ." msgid "System SIGHUP signal received." msgstr "Получен системный сигнал SIGHUP." msgid "TCP or UDP port on which to listen for client traffic." msgstr "Порт TCP или UDP для приема данных клиента." msgid "TCP port on which the instance server is listening." msgstr "Порт TCP, на котором ведет прием сервер экземпляра." msgid "TCP port on which the pool member listens for requests or connections." msgstr "Порт TCP, через который элемент пула принимает запросы или соединения." msgid "" "TCP port on which to listen for client traffic that is associated with the " "vip address." msgstr "Порт TCP для приема потоков данных клиента, связанных с vip-адресами." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of OpenStack service finder functions." msgstr "" "Время в секундах хранения элемента в кэше в области dogpile.cache, " "использованной для кэширования функций поиска службы OpenStack." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of service extensions." msgstr "" "Время в секундах хранения элемента в кэше в области dogpile.cache, " "использованной для кэширования расширений служб." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of validation constraints." msgstr "" "Время в секундах хранения элемента в кэше в области dogpile.cache, " "использованной для кэширования ограничений проверки." msgid "Tag key." msgstr "Ключ тега." msgid "Tag value." msgstr "Значение тега." msgid "Tags to add to the image." msgstr "Теги, добавляемые к образу." msgid "Tags to attach to instance." msgstr "Теги для прикрепления к экземпляру." msgid "Tags to attach to the bucket." msgstr "Теги для прикрепления к этой комплекту." msgid "Tags to attach to this group." msgstr "Теги для прикрепления к этой группе." msgid "Task description." msgstr "Описание задачи." msgid "Task name." msgstr "Имя задачи." msgid "" "Template default for how the server should receive the metadata required for " "software configuration. POLL_SERVER_CFN will allow calls to the cfn API " "action DescribeStackResource authenticated with the provided keypair " "(requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the " "Heat API resource-show using the provided keystone credentials (requires " "keystone v3 API, and configured stack_user_* config options). 
POLL_TEMP_URL " "will create and populate a Swift TempURL with metadata for polling (requires " "object-store endpoint which supports TempURL).ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Способ по умолчанию сбора сервером метаданных для конфигурации ПО. " "POLL_SERVER_CFN разрешает вызовы DescribeStackResource из cfn API с " "идентификацией по паре ключей (требуется heat-api-cfn). POLL_SERVER_HEAT " "разрешает вызовы resource-show из Heat API с идентификационными данными " "keystone (требуется keystone v3 API, а также настроенные параметры " "конфигурации stack_user_*). POLL_TEMP_URL создает и заполняет Swift TempURL " "с помощью метаданных для опроса (требуется конечная точка хранения объектов, " "поддерживающая TempURL). ZAQAR_MESSAGE создает выделенную очередь zaqar и " "отправляет метаданные для опроса." msgid "" "Template default for how the server should signal to heat with the " "deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN " "keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will " "create a Swift TempURL to be signaled via HTTP PUT (requires object-store " "endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat " "API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL " "will create a dedicated zaqar queue to be signaled using the provided " "keystone credentials." msgstr "" "Способ по умолчанию отправки сервером сигналов в heat с выходными значениями " "развертывания. CFN_SIGNAL разрешает HTTP POST для URL, подписанного парой " "ключей CFN (требуется heat-api-cfn). TEMP_URL_SIGNAL создает Swift TempURL, " "принимающий сигнал HTTP PUT (требуется конечная точка хранения объектов, " "поддерживающая TempURL). HEAT_SIGNAL разрешает вызовы resource-signal из " "Heat API с идентификационными данными keystone. ZAQAR_SIGNAL создает " "выделенную очередь zaqar, принимающую сигналы с идентификационными данными " "keystone." msgid "Template format version not found." msgstr "Версия формата шаблона не найдена." #, python-format msgid "" "Template size (%(actual_len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "Размер шаблона (%(actual_len)s байт) превышает максимально допустимый " "(%(limit)s байт)." msgid "Template that specifies the stack to be created as a resource." msgstr "Шаблон, задающий стек для создания в качестве ресурса." #, python-format msgid "Template type is not supported: %s" msgstr "Тип шаблона %s не поддерживается" msgid "Template version was not provided" msgstr "Версия шаблона не была предоставлена" #, python-format msgid "Template with version %s not found" msgstr "Шаблон с версией %s не найден" msgid "TemplateBody or TemplateUrl were not given." msgstr "TemplateBody или TemplateUrl не заданы." msgid "Tenant owning the health monitor." msgstr "Арендатор - владелец монитора работоспособности." msgid "Tenant owning the pool member." msgstr "Арендатор - владелец элемента пула." msgid "Tenant owning the pool." msgstr "Арендатор - владелец пула." msgid "Tenant owning the port." msgstr "Арендатор - владелец порта." msgid "Tenant owning the router." msgstr "Арендатор - владелец маршрутизатора." msgid "Tenant owning the subnet." msgstr "Арендатор - владелец подсети." 
#, python-format msgid "Testing message %(text)s" msgstr "Тестовое сообщение %(text)s" #, python-format msgid "The \"%(hook)s\" hook is not defined on %(resource)s" msgstr "Перехватчик \"%(hook)s\" не определен для %(resource)s" #, python-format msgid "The \"for_each\" argument to \"%s\" must contain a map" msgstr "Аргумент \"for_each\" функции \"%s\" должен содержать карту связей" #, python-format msgid "The %(entity)s (%(name)s) could not be found." msgstr "%(entity)s (%(name)s) не найден." #, python-format msgid "The %s must be provided for each parameter group." msgstr "Необходимо указать %s для каждой группы параметров." #, python-format msgid "The %s of parameter group should be a list." msgstr "%s для группы параметров должен быть списком." #, python-format msgid "The %s parameter must be assigned to one parameter group only." msgstr "Параметр %s должен быть указан только для одной группы параметров." #, python-format msgid "The %s should be a list." msgstr "%s должен быть списком." msgid "The API paste config file to use." msgstr "Используемый файл конфигурации вставки API." msgid "The AWS Access Key ID needs a subscription for the service" msgstr "Для ИД ключа доступа AWS требуется подписка на службу" msgid "The Availability Zone where the specified instance is launched." msgstr "Область доступности, в котором запускается указанный экземпляр." msgid "The Availability Zones in which to create the load balancer." msgstr "Области доступности для создания распределителя нагрузки." msgid "The CIDR." msgstr "CIDR." msgid "The DNS name for the LoadBalancer." msgstr "Имя DNS для LoadBalancer." msgid "The DNS name of the specified bucket." msgstr "Имя DNS указанного комплекта." msgid "The DNS nameserver address." msgstr "Адрес сервера DNS." msgid "The HTTP method used for requests by the monitor of type HTTP." msgstr "Метод HTTP, используемый для запросов монитором типа HTTP." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health." msgstr "" "Путь HTTP, используемый в запросе HTTP монитором для проверки " "работоспособности участника." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health. A valid value is a string the begins with a forward slash (/)." msgstr "" "Путь HTTP, используемый в запросе HTTP монитором для проверки " "работоспособности участника. Допустимое значение - это строка, начинающаяся " "с косой черты (/)." msgid "" "The HTTP status codes expected in response from the member to declare it " "healthy. Specify one of the following values: a single value, such as 200. a " "list, such as 200, 202. a range, such as 200-204." msgstr "" "Коды ответа HTTP, ожидаемые от работоспособного участника. Укажите одно " "значение, например, 200, список, например 200, 202, или диапазон значений, " "например 200-204." msgid "" "The ID of an existing instance to use to create the Auto Scaling group. If " "specify this property, will create the group use an existing instance " "instead of a launch configuration." msgstr "" "ИД существующего экземпляра для создания группы автоматического " "масштабирования. Если это свойство указано, при создании группы будет " "использоваться существующий экземпляр, а не конфигурация запуска." msgid "" "The ID of an existing instance you want to use to create the launch " "configuration. All properties are derived from the instance with the " "exception of BlockDeviceMapping." msgstr "" "ИД существующего экземпляра для создания конфигурации запуска. 
От экземпляра " "наследуются все свойства, кроме BlockDeviceMapping." msgid "The ID of the attached network." msgstr "ИД подключенной сети." msgid "The ID of the firewall policy that this firewall is associated with." msgstr "ИД стратегии брандмауэра, с которой связан этот брандмауэр." msgid "" "The ID of the hosted zone name that is associated with the LoadBalancer." msgstr "ИД размещаемой области, связанной с LoadBalancer." msgid "The ID of the image to create a volume from." msgstr "ИД образа, из которого будет создаваться том." msgid "The ID of the image to create the volume from." msgstr "ИД образа для создания тома." msgid "The ID of the instance to which the volume attaches." msgstr "ИД экземпляра, к которому подключен том." msgid "The ID of the load balancing pool." msgstr "ИД пула распределения нагрузки." msgid "The ID of the pool to which the pool member belongs." msgstr "ИД пула, которому принадлежит участник." msgid "The ID of the server to which the volume attaches." msgstr "ИД сервера, к которому подключен том." msgid "The ID of the snapshot to create a volume from." msgstr "ИД моментальной копии для создания тома." msgid "" "The ID of the tenant which will own the network. Only administrative users " "can set the tenant identifier; this cannot be changed using authorization " "policies." msgstr "" "ИД арендатора, который будет владельцем сети. Идентификатор арендатора может " "быть задан только администратором. Это нельзя изменить стратегиями " "предоставления доступа." msgid "" "The ID of the tenant who owns the Load Balancer. Only administrative users " "can specify a tenant ID other than their own." msgstr "" "ИД арендатора, который является владельцем балансировщика нагрузки. Только " "администраторы могут указать ИД арендатора, не совпадающий с собственным." msgid "The ID of the tenant who owns the listener." msgstr "ИД арендатора - владельца получателя запросов." msgid "" "The ID of the tenant who owns the network. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "ИД арендатора, который является владельцем сети. Только администраторы могут " "указать ИД арендатора, не совпадающий с собственным." msgid "" "The ID of the tenant who owns the subnet pool. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "ИД арендатора, который является владельцем пула подсетей. Только " "администраторы могут указать ИД арендатора, не совпадающий с собственным." msgid "The ID of the volume to be attached." msgstr "ИД тома для прикрепления." msgid "" "The ID of the volume to boot from. Only one of volume_id or snapshot_id " "should be provided." msgstr "" "ИД тома для загрузки. Следует указать только один из параметров volume_id " "или snapshot_id." msgid "The ID or name of the flavor to boot onto." msgstr "ИД или имя разновидности для загрузки." msgid "The ID or name of the image to boot with." msgstr "ИД или имя образа для загрузки." msgid "" "The IDs of the DHCP agent to schedule the network. Note that the default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "ИД агента DHCP для планирования сети. Заметьте, что стандартная стратегия " "Neutron разрешает использование этого свойства только администраторам." msgid "The IP address of the pool member." msgstr "IP-адрес участника пула." msgid "The IP version, which is 4 or 6." msgstr "Версия IP, 4 или 6." #, python-format msgid "The Parameter (%(key)s) was not defined in template."
msgstr "Параметр (%(key)s) не определен в шаблоне." #, python-format msgid "The Parameter (%(key)s) was not provided." msgstr "Параметр (%(key)s) не предоставлен." msgid "The QoS policy ID attached to this network." msgstr "ИД стратегии QoS для данной сети." msgid "The QoS policy ID attached to this port." msgstr "ИД стратегии QoS для данного порта." #, python-format msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect." msgstr "Опорный атрибут (%(resource)s %(key)s) неверен." #, python-format msgid "The Resource %s requires replacement." msgstr "Ресурс %s требует замены." #, python-format msgid "" "The Resource (%(resource_name)s) could not be found in Stack %(stack_name)s." msgstr "Ресурс (%(resource_name)s) не найден в стеке %(stack_name)s." #, python-format msgid "The Resource (%(resource_name)s) is not available." msgstr "Ресурс (%(resource_name)s) недоступен." #, python-format msgid "The Snapshot (%(snapshot)s) for Stack (%(stack)s) could not be found." msgstr "Не найдена моментальная копия (%(snapshot)s) для стека (%(stack)s)." #, python-format msgid "The Stack (%(stack_name)s) already exists." msgstr "Стек (%(stack_name)s) уже существует." msgid "The Template must be a JSON or YAML document." msgstr "Шаблон должен быть документом JSON или YAML." msgid "The URI to the container." msgstr "URI контейнера." msgid "The URI to the created container." msgstr "URI созданного контейнера." msgid "The URI to the created secret." msgstr "URI созданного секретного ключа." msgid "The URI to the order." msgstr "URI заказа." msgid "The URIs to container consumers." msgstr "URI потребителей контейнера." msgid "The URIs to secrets stored in container." msgstr "URI секретных ключей, хранимых в контейнере." msgid "" "The URL of a template that specifies the stack to be created as a resource." msgstr "URL шаблона, указывающий стек для создания в качестве ресурса." msgid "The URL of the container." msgstr "URL контейнера." msgid "The VIP address of the LoadBalancer." msgstr "Адрес VIP балансировщика нагрузки." msgid "The VIP port of the LoadBalancer." msgstr "Порт VIP балансировщика нагрузки." msgid "The VIP subnet of the LoadBalancer." msgstr "Подсеть VIP балансировщика нагрузки." msgid "The action or operation requested is invalid" msgstr "Запрашивается неверное действие или операция" msgid "The action to be executed when the receiver is signaled." msgstr "Действие, выполняемое при приеме сигнала получателем." msgid "The administrative state of the firewall." msgstr "Административное состояние брандмауэра." msgid "The administrative state of the health monitor." msgstr "Административное состояние монитора работоспособности." msgid "The administrative state of the ipsec site connection." msgstr "Административное состояние соединения с сайтом ipsec." msgid "The administrative state of the pool member." msgstr "Административное состояние элемента пула." msgid "The administrative state of the router." msgstr "Административное состояние маршрутизатора." msgid "The administrative state of the vpn service." msgstr "Административное состояние службы vpn." msgid "The administrative state of this Load Balancer." msgstr "Административное состояние балансировщика нагрузки." msgid "The administrative state of this health monitor." msgstr "Административное состояние монитора работоспособности." msgid "The administrative state of this listener." msgstr "Административное состояние получателя запросов." msgid "The administrative state of this pool member." 
msgstr "Административное состояние этого элемента пула." msgid "The administrative state of this pool." msgstr "Административное состояние пула." msgid "The administrative state of this port." msgstr "Административное состояние порта." msgid "The administrative state of this vip." msgstr "Административное состояние этого vip." msgid "The administrative status of the network." msgstr "Административное состояние сети." msgid "The administrator password for the server." msgstr "Пароль администратора для сервера." msgid "The aggregation method to compare to the threshold." msgstr "Способ объединения данных для сравнения с порогом." msgid "The algorithm type used to generate the secret." msgstr "Алгоритм генерации секретного ключа." msgid "" "The algorithm type used to generate the secret. Required for key and " "asymmetric types of order." msgstr "" "Алгоритм генерации секретного ключа. Должен быть указан для заказов ключа " "или асимметричных типов." msgid "The algorithm used to distribute load between the members of the pool." msgstr "Алгоритм распределения нагрузки между участниками пула." msgid "The allocated address of this IP." msgstr "Выделенный адрес этого IP." msgid "" "The approximate interval, in seconds, between health checks of an individual " "instance." msgstr "" "Приблизительный интервал (с) между проверками работоспособности отдельного " "экземпляра." msgid "The authentication hash algorithm of the ipsec policy." msgstr "Хэш-алгоритм идентификации для стратегии ipsec." msgid "The authentication hash algorithm used by the ike policy." msgstr "Хэш-алгоритм идентификации для стратегии ike." msgid "The authentication mode of the ipsec site connection." msgstr "Режим идентификации соединения с сайтом ipsec." msgid "The availability zone in which the volume is located." msgstr "Область доступности, в которой расположен том." msgid "The availability zone in which the volume will be created." msgstr "Область доступности, в которой будет создан том." msgid "The availability zone of shared filesystem." msgstr "Зона доступности для общих файловых систем." msgid "The bay name." msgstr "Имя отсека." msgid "The bit-length of the secret." msgstr "Число разрядов секретного ключа." msgid "" "The bit-length of the secret. Required for key and asymmetric types of order." msgstr "" "Длина секретного ключа в битах. Должна быть указана для заказов ключа или " "асимметричных типов." #, python-format msgid "The bucket you tried to delete is not empty (%s)." msgstr "Удаляемый сегмент не пуст (%s)." msgid "The can be used to unmap a defined device." msgstr "Можно использовать для аннулирования связи определенного устройства." msgid "The certificate or AWS Key ID provided does not exist" msgstr "Указанный сертификат или ИД ключа AWS не существует" msgid "The channel for receiving signals." msgstr "Канал для приема сигналов." msgid "" "The class that provides encryption support. For example, nova.volume." "encryptors.luks.LuksEncryptor." msgstr "" "Класс, обеспечивающий поддержку шифрования. Пример: nova.volume.encryptors." "luks.LuksEncryptor." #, python-format msgid "The client (%(client_name)s) is not available." msgstr "Клиент (%(client_name)s) недоступен." msgid "The cluster ID this node belongs to." msgstr "ИД кластера, которому принадлежит этот узел." msgid "The config value of the software config." msgstr "Значение config конфигурации программного обеспечения." msgid "" "The configuration tool used to actually apply the configuration on a server. 
" "This string property has to be understood by in-instance tools running " "inside deployed servers." msgstr "" "Инструмент настройки, используемый для фактического применения конфигурации " "на сервере. Это строковое свойство должно быть распознаваемо инструментами в " "экземпляре, выполняемыми внутри развернутых серверов." msgid "The content of the CSR. Only for certificate orders." msgstr "Содержимое CSR. Только для заказа сертификата." #, python-format msgid "" "The contents of personality file \"%(path)s\" is larger than the maximum " "allowed personality file size (%(max_size)s bytes)." msgstr "" "Содержимое файла личных параметров \"%(path)s\" превышает максимально " "допустимый размер (%(max_size)s байтов)." msgid "The current size of AutoscalingResourceGroup." msgstr "Текущий размер AutoscalingResourceGroup." msgid "The current status of the volume." msgstr "Текущее состояние тома." msgid "" "The database instance was created, but heat failed to set up the datastore. " "If a database instance is in the FAILED state, it should be deleted and a " "new one should be created." msgstr "" "Экземпляр базы данных создан, но heat не удалось настроить хранилище данных. " "Если экземпляр базы данных находится в состоянии Сбой, его необходимо " "удалить, а затем создать новый экземпляр." msgid "" "The dead peer detection protocol configuration of the ipsec site connection." msgstr "" "Конфигурация протокола обнаружения мертвого равноправного узла для " "соединения с сайтом ipsec. " msgid "The decrypted secret payload." msgstr "Расшифрованная полезная нагрузка секретного ключа." msgid "" "The default cloud-init user set up for each image (e.g. \"ubuntu\" for " "Ubuntu 12.04+, \"fedora\" for Fedora 19+ and \"cloud-user\" for CentOS/RHEL " "6.5)." msgstr "" "Пользователь по умолчанию для cloud-init в образах (\"ubuntu\" для Ubuntu " "12.04+, \"fedora\" для Fedora 19+ и \"cloud-user\" для CentOS/RHEL 6.5)." msgid "The description for the QoS policy." msgstr "Описание стратегии QoS." msgid "The description of the ike policy." msgstr "Описание стратегии ike." msgid "The description of the ipsec policy." msgstr "Описание стратегии ipsec." msgid "The description of the ipsec site connection." msgstr "Описание соединения с сайтом ipsec." msgid "The description of the vpn service." msgstr "Описание службы vpn." msgid "The destination for static route." msgstr "Конечная цель статического маршрута." msgid "The details of physical object." msgstr "Сведения о физическом объекте." msgid "The device id for the network gateway." msgstr "ИД устройства для этого сетевого шлюза." msgid "" "The device where the volume is exposed on the instance. This assignment may " "not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "Устройство, в котором том предоставляется экземпляру. Такое назначение может " "не приниматься, поэтому рекомендуется использовать путь /dev/disk/by-id/" "virtio-." msgid "" "The direction in which metering rule is applied, either ingress or egress." msgstr "" "Направление, в котором будет применяться правило измерения (вход/выход)." msgid "The direction in which metering rule is applied." msgstr "Направление, в котором будет применяться правило измерения." msgid "" "The direction in which the security group rule is applied. For a compute " "instance, an ingress security group rule matches traffic that is incoming " "(ingress) for that instance. An egress rule is applied to traffic leaving " "the instance." 
msgstr "" "Направление применения правила группы защиты. Для экземпляра вычисления " "правило доступа группы защиты соответствует потоку данных, который является " "входящим (доступ) для этого экземпляра. Правило выхода применяется к потоку " "данных в направлении от экземпляра." msgid "The directory to search for environment files." msgstr "Каталог для поиска файлов среды." msgid "The ebs volume to attach to the instance." msgstr "Том ebs для прикрепления к экземпляру." msgid "The encapsulation mode of the ipsec policy." msgstr "Режим инкапсуляции стратегии ipsec." msgid "The encoding format used to provide the payload data." msgstr "Кодировка данных полезной нагрузки." msgid "The encryption algorithm of the ipsec policy." msgstr "Алгоритм идентификации для стратегии ipsec." msgid "The encryption algorithm or mode. For example, aes-xts-plain64." msgstr "Алгоритм шифрования или режим. Например, aes-xts-plain64." msgid "The encryption algorithm used by the ike policy." msgstr "Алгоритм шифрования для стратегии ike." msgid "The environment is not a valid YAML mapping data type." msgstr "Среда имеет неверный тип данных связывания YAML." msgid "The expiration date for the secret in ISO-8601 format." msgstr "Дата истечения срока действия секретного ключа в формате ISO-8601." msgid "The external load balancer port number." msgstr "Номер порта внешнего распределителя нагрузки." msgid "The extra specs key and value pairs of the volume type." msgstr "Дополнительные пары ключ-значение, определенные для типа тома." msgid "The flavor to use." msgstr "Используемая разновидность." #, python-format msgid "The following parameters are immutable and may not be updated: %(keys)s" msgstr "" "Следующие параметры объявлены как неизменяемые, и их обновление невозможно: " "%(keys)s" #, python-format msgid "The function %s is not supported in this version of HOT." msgstr "Функция %s не поддерживается в этой версии HOT." msgid "" "The gateway IP address. Set to any of [ null | ~ | \"\" ] to create/update a " "subnet without a gateway. If omitted when creation, neutron will assign the " "first free IP address within the subnet to the gateway automatically. If " "remove this from template when update, the old gateway IP address will be " "detached." msgstr "" "IP-адрес шлюза. Укажите любое из следующих значений: [ null | ~ | \"\" ],- " "чтобы создать или обновить подсеть без шлюза. Если параметр не указан, то " "neutron автоматически присваивает шлюзу первый свободный IP-адрес в " "подсети . Если параметр удален из шаблона при обновлении, то старый IP-адрес " "шлюза будет освобожден." #, python-format msgid "The grouped parameter %s does not reference a valid parameter." msgstr "Сгруппированный параметр %s не ссылается на допустимый параметр." msgid "The host from the container URL." msgstr "Хост из URL контейнера." msgid "The host from which a user is allowed to connect to the database." msgstr "Хост, с которого пользователю разрешается подключиться к базе данных." msgid "" "The id for L2 segment on the external side of the network gateway. Must be " "specified when using vlan." msgstr "" "ИД для сегментации L2 с внешней стороны сетевого шлюза. Должен быть указан " "при использовании vlan." msgid "The identifier of the CA to use." msgstr "Идентификатор центра сертификации (CA)." msgid "The image ID. Glance will generate a UUID if not specified." msgstr "ИД образа. Glance создаст UUID, если тот не указан." msgid "The initiator of the ipsec site connection." msgstr "Инициатор соединения с сайтом ipsec." 
msgid "The input string to be stored." msgstr "Входная строка для сохранения." msgid "The interface name for the network gateway." msgstr "Имя интерфейса для сетевого шлюза." msgid "The internal network to connect on the network gateway." msgstr "Внутренняя сеть для подключения к сетевому шлюзу." msgid "The last operation for the database instance failed due to an error." msgstr "Последняя операция над базой данных не выполнена из-за ошибки." #, python-format msgid "The length must be at least %(min)s." msgstr "Длина не может быть меньше %(min)s." #, python-format msgid "The length must be in the range %(min)s to %(max)s." msgstr "Длина должна находиться в диапазоне %(min)s - %(max)s." #, python-format msgid "The length must be no greater than %(max)s." msgstr "Длина не может быть больше %(max)s." msgid "The length of time, in minutes, to wait for the nested stack creation." msgstr "Время ожидания (мин) создания вложенного стека." msgid "" "The list of HTTP status codes expected in response from the member to " "declare it healthy." msgstr "" "Список кодов состояний HTTP, ожидаемых в ответе от участника, для признания " "его работоспособным." msgid "The list of Nova server IDs load balanced." msgstr "Список ИД серверов Nova, выравненных по нагрузке." msgid "The list of Pools related to this monitor." msgstr "Список пулов, связанных с этим монитором." msgid "The list of attachments of the volume." msgstr "Список вложений тома." msgid "" "The list of configurations for the different lifecycle actions of the " "represented software component." msgstr "" "Список конфигураций для других действий жизненного цикла представленного " "компонента программного обеспечения." msgid "The list of instance IDs load balanced." msgstr "Список ИД экземпляров, выравненных по нагрузке." msgid "" "The list of resource types to create. This list may contain type names or " "aliases defined in the resource registry. Specific template names are not " "supported." msgstr "" "Список создаваемых типов ресурсов. В списке могут быть имена типов или " "псевдонимы, определенные в реестре ресурсов. Особые имена шаблонов не " "поддерживаются." msgid "The list of tags to associate with the volume." msgstr "Список тегов для связи с томом." msgid "The load balancer transport protocol to use." msgstr "Транспортный протокол распределителя нагрузки для использования." msgid "" "The location where the volume is exposed on the instance. This assignment " "may not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "Расположение, в котором том предоставляется экземпляру. Такое назначение " "может не приниматься, поэтому рекомендуется использовать путь /dev/disk/by-" "id/virtio-." msgid "The manually assigned alternative public IPv4 address of the server." msgstr "" "Назначенный в неавтоматическом режиме альтернативный общедоступный адрес " "IPv4 сервера." msgid "The manually assigned alternative public IPv6 address of the server." msgstr "" "Назначенный в неавтоматическом режиме альтернативный общедоступный адрес " "IPv6 сервера." msgid "The maximum number of connections per second allowed for the vip." msgstr "Максимальное число соединений в секунду, разрешенное для vip." msgid "" "The maximum number of connections permitted for this load balancer. Defaults " "to -1, which is infinite." msgstr "" "Максимальное количество соединений, разрешенных для этого балансировщика " "нагрузки. Значение по умолчанию -1 означает отсутствие ограничений." 
msgid "The maximum number of resources to create at once." msgstr "Максимальное количество ресурсов, создаваемых за один раз." msgid "The maximum number of resources to replace at once." msgstr "Максимальное количество ресурсов, заменяемых одновременно." msgid "" "The maximum number of seconds to wait for the resource to signal completion. " "Once the timeout is reached, creation of the signal resource will fail." msgstr "" "Максимальное время ожидания поступления от ресурса сигнала о выполнении, в " "секундах. По истечении этого времени создать сигнальный ресурс не удастся." msgid "" "The maximum port number in the range that is matched by the security group " "rule. The port_range_min attribute constrains the port_range_max attribute. " "If the protocol is ICMP, this value must be an ICMP type." msgstr "" "Максимальный номер порта из диапазона, который соответствует правилу группы " "защиты. Атрибут port_range_min ограничивает атрибут port_range_max. Если " "используется протокол ICMP, это значение должно иметь тип ICMP." msgid "" "The maximum transmission unit size (in bytes) of the ipsec site connection." msgstr "" "Максимальный размер блока передачи (в байтах) для соединения с сайтом ipsec. " msgid "The maximum transmission unit size(in bytes) for the network." msgstr "Размер MTU в байтах в сети." msgid "The metering label ID to associate with this metering rule." msgstr "ИД метки измерений для связи с этим правилом измерения." msgid "" "The metric dimensions to match to the alarm dimensions. One or more " "dimension key names separated by a comma." msgstr "" "Связь размерностей показателей и предупреждений. Имена ключей размерностей " "разделяются запятыми." msgid "" "The minimum number of characters from this character class that will be in " "the generated string." msgstr "" "Минимальное число символов из этого класса символов в генерируемой строке." msgid "" "The minimum number of characters from this sequence that will be in the " "generated string." msgstr "" "Минимальное число символов из этой последовательности в генерируемой строке." msgid "" "The minimum number of resources in service while rolling updates are being " "executed." msgstr "" "Минимальное количество задействованных ресурсов во время выполнения " "параллельных обновлений." msgid "" "The minimum port number in the range that is matched by the security group " "rule. If the protocol is TCP or UDP, this value must be less than or equal " "to the value of the port_range_max attribute. If the protocol is ICMP, this " "value must be an ICMP type." msgstr "" "Минимальный номер порта из диапазона, который соответствует правилу группы " "защиты. Если используется протокол TCP или UDP, это значение должно быть " "меньше или равно значению атрибута port_range_max. Если используется " "протокол ICMP, это значение должно иметь тип ICMP." msgid "The name for the QoS policy." msgstr "Имя стратегии QoS." msgid "The name for the address scope." msgstr "Имя адресной области." msgid "" "The name of the driver used for instantiating container networks. By " "default, Magnum will choose the pre-configured network driver based on COE " "type." msgstr "" "Имя драйвера для создания экземпляров сетей контейнеров. По умолчанию Magnum " "выбирает уже настроенный сетевой драйвер на основе типа модуля оркестрации " "контейнеров." msgid "The name of the error document." msgstr "Имя документа об ошибках." msgid "The name of the hosted zone that is associated with the LoadBalancer." msgstr "Имя размещаемой области, связанной с LoadBalancer." 
msgid "The name of the ike policy." msgstr "Имя стратегии ike." msgid "The name of the index document." msgstr "Имя документа индекса." msgid "The name of the ipsec policy." msgstr "Имя стратегии ipsec." msgid "The name of the ipsec site connection." msgstr "Имя соединения с сайтом ipsec." msgid "The name of the key pair." msgstr "Имя пары ключей." msgid "The name of the network gateway." msgstr "Имя сетевого шлюза." msgid "The name of the network." msgstr "Имя сети." msgid "The name of the router." msgstr "Имя маршрутизатора." msgid "The name of the subnet." msgstr "Имя подсети." msgid "The name of the user that the new key will belong to." msgstr "Имя пользователя, к которому будет относиться новый ключ." msgid "" "The name of the virtual device. The name must be in the form ephemeralX " "where X is a number starting from zero (0); for example, ephemeral0." msgstr "" "Имя виртуального устройства. Имя должно быть задано в формате ephemeralX, " "где X - это номер, начиная с нуля (0); например, ephemeral0." msgid "The name of the vpn service." msgstr "Имя службы vpn." msgid "The name or ID of QoS policy to attach to this network." msgstr "Имя или ИД стратегии QoS для данной сети." msgid "The name or ID of QoS policy to attach to this port." msgstr "Имя или ИД стратегии QoS для данного порта." msgid "The name or ID of parent of this keystone project in hierarchy." msgstr "Имя или ИД родительского объекта проекта keystone в иерархии." msgid "The name or ID of target cluster." msgstr "Имя или ИД целевого кластера." msgid "The name or ID of the bay model." msgstr "Имя или ИД модели отсека." msgid "The name or ID of the subnet on which to allocate the VIP address." msgstr "Имя или ИД подсети, в которой выделяется адрес VIP." msgid "The name or ID of the subnet pool." msgstr "Имя или ИД пула подсетей." msgid "The name or id of the Senlin profile." msgstr "Имя или ИД профайла senlin." msgid "The negotiation mode of the ike policy." msgstr "Режим согласования стратегии ike." msgid "The next hop for the destination." msgstr "Следующий промежуточный узел в маршруте." msgid "The node count for this bay." msgstr "Количество узлов для отсека." msgid "The notification methods to use when an alarm state is ALARM." msgstr "" "Методы уведомления для использования с состоянием ALARM предупреждения." msgid "The notification methods to use when an alarm state is OK." msgstr "Методы уведомления для использования с состоянием OK предупреждения." msgid "The notification methods to use when an alarm state is UNDETERMINED." msgstr "" "Методы уведомления для использования с состоянием UNDETERMINED " "предупреждения." msgid "The number of I/O operations per second that the volume supports." msgstr "Количество операций ввода-вывода в секунду, поддерживаемое томом." msgid "The number of bytes stored in the container." msgstr "Число байтов, хранимых в контейнере." msgid "" "The number of consecutive health probe failures required before moving the " "instance to the unhealthy state" msgstr "" "Число последовательных сбоев при выполнении тестов работоспособности, после " "которых экземпляр будет переведен в неработоспособное состояние" msgid "" "The number of consecutive health probe successes required before moving the " "instance to the healthy state." msgstr "" "Число последовательных успешных выполнений тестов работоспособности, после " "которых экземпляр будет переведен в работоспособное состояние." msgid "The number of master nodes for this bay." msgstr "Количество главных узлов для отсека." 
msgid "The number of objects stored in the container." msgstr "Число объектов, хранимых в контейнере." msgid "The number of replicas to be created." msgstr "Количество создаваемых реплик." msgid "The number of resources to create." msgstr "Число создаваемых ресурсов." msgid "The number of seconds to wait between batches of updates." msgstr "Время ожидания между пакетами обновлений, в секундах." msgid "The number of seconds to wait between batches." msgstr "Время ожидания между пакетами, в секундах." msgid "The number of seconds to wait for the cluster actions." msgstr "Время ожидания действий кластера, в секундах." msgid "" "The number of seconds to wait for the correct number of signals to arrive." msgstr "Время ожидания (с) поступления верного числа сигналов." msgid "" "The number of success signals that must be received before the stack " "creation process continues." msgstr "" "Число сигналов об успешном выполнении, после получения которых можно " "продолжать процесс создания стека." msgid "" "The optional public key. This allows users to supply the public key from a " "pre-existing key pair. If not supplied, a new key pair will be generated." msgstr "" "Необязательный общий ключ. Позволяет пользователям указывать общий ключ из " "готовой пары ключей. Если не указан, будет создана новая пара ключей. " msgid "" "The owner tenant ID of the address scope. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "ИД арендатора, который является владельцем адресной области. Только " "администраторы могут указать ИД арендатора, не совпадающий с собственным." msgid "The owner tenant ID of this QoS policy." msgstr "ИД арендатора - владельца стратегии QoS." msgid "The owner tenant ID of this rule." msgstr "ИД арендатора - владельца правила." msgid "" "The owner tenant ID. Only required if the caller has an administrative role " "and wants to create a RBAC for another tenant." msgstr "" "ИД арендатора-владельца. Требуется только в том случае, если инициатор в " "роли администратора собирается создать RBAC для другого арендатора." msgid "The parameters passed to action when the receiver is signaled." msgstr "" "Параметры, передаваемые действию, выполняемому при приеме сигнала " "получателем." msgid "The parent URL of the container." msgstr "Родительский URL контейнера." msgid "The payload of the created certificate, if available." msgstr "Полезная нагрузка созданного сертификата, если она задана." msgid "The payload of the created intermediates, if available." msgstr "" "Полезная нагрузка созданных промежуточных сертификатов, если она задана." msgid "The payload of the created private key, if available." msgstr "Полезная нагрузка созданного личного ключа, если она задана." msgid "The payload of the created public key, if available." msgstr "Полезная нагрузка созданного открытого ключа, если она задана." msgid "The perfect forward secrecy of the ike policy." msgstr "Полное прямое шифрование стратегии ike." msgid "The perfect forward secrecy of the ipsec policy." msgstr "Полное прямое шифрование стратегии ipsec." #, python-format msgid "The personality property may not contain greater than %s entries." msgstr "" "Свойство personality (личные параметры) может содержать не более %s записей." msgid "The physical mechanism by which the virtual network is implemented." msgstr "Физический механизм, с помощью которого реализована виртуальная сеть." msgid "The port being checked." msgstr "Проверяемый порт." msgid "The port id, either subnet or port_id should be specified." 
msgstr "ИД порта. Допустимые значения: subnet или port_id." msgid "The port on which the server will listen." msgstr "Порт, через который сервер будет вести прием." msgid "The port, either subnet or port should be specified." msgstr "Порт. Допустимые значения: subnet или port." msgid "The pre-shared key string of the ipsec site connection." msgstr "Строка PSK для соединения с сайтом ipsec." msgid "The private key if it has been saved." msgstr "Личный ключ, если он сохранен." msgid "The profile of certificate to use." msgstr "Профиль сертификата для использования." msgid "" "The protocol that is matched by the security group rule. Valid values " "include tcp, udp, and icmp." msgstr "" "Протокол, который соответствует правилу группы защиты. Верными значениями " "являются tcp, udp и icmp." msgid "The public key." msgstr "Общий ключ." msgid "The query string is malformed" msgstr "Неверный формат строки запроса" msgid "The query to filter the metrics." msgstr "Запрос для фильтрации показателей." msgid "" "The random string generated by this resource. This value is also available " "by referencing the resource." msgstr "" "Произвольная строка, созданная этим ресурсом. Это значение также доступно по " "ссылке на ресурс." msgid "The reference to a LaunchConfiguration resource." msgstr "Указатель на ресурс LaunchConfiguration." msgid "" "The remote IP prefix (CIDR) to be associated with this security group rule." msgstr "Префикс удаленного IP (CIDR) для связи с этим правилом группы защиты. " msgid "The remote branch router identity of the ipsec site connection." msgstr "" "Идентификатор маршрутизатора удаленной ветви соединения с сайтом ipsec." msgid "The remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "" "Общедоступный адрес IPv4 или адрес IPv6 удаленного маршрутизатора ветви, " "либо FQDN." msgid "" "The remote group ID to be associated with this security group rule. If no " "value is specified then this rule will use this security group for the " "remote_group_id. The remote mode parameter must be set to \"remote_group_id" "\"." msgstr "" "ИД удаленной группы для связи с этим правилом группы защиты. Если значение " "не указано, это правило будет использовать эту группу защиты для " "remote_group_id. Параметру удаленного режима должно быть присвоено значение " "\"remote_group_id\"." msgid "The remote subnet(s) in CIDR format of the ipsec site connection." msgstr "Удаленная подсеть(и) в формате CIDR соединения с сайтом ipsec." msgid "The request is missing an action or operation parameter" msgstr "В запросе не указан параметр действия или операции" msgid "The request processing has failed due to an internal error" msgstr "Обработка запроса не выполнена из-за внутренней ошибки" msgid "The request signature does not conform to AWS standards" msgstr "Подпись запроса не соответствует стандартам AWS" msgid "" "The request signature we calculated does not match the signature you provided" msgstr "" "Подпись запроса, вычисленная нами, не соответствует подписи, которая указана " "вами" msgid "The requested action is not yet implemented" msgstr "Запрашиваемое действие пока не реализовано" #, python-format msgid "The resource %s is already being updated." msgstr "Ресурс %s уже обновляется." msgid "The resource href of the queue." msgstr "Ресурс href очереди." msgid "The route mode of the ipsec site connection." msgstr "Режим маршрутизации соединения с сайтом ipsec." msgid "The router id." msgstr "ИД маршрутизатора." msgid "The router to which the vpn service will be inserted." 
msgstr "Маршрутизатор, к которому будет добавлена служба vpn." msgid "The router." msgstr "Маршрутизатор." msgid "The safety assessment lifetime configuration for the ike policy." msgstr "Настройка времени существования оценки безопасности для стратегии ike." msgid "The safety assessment lifetime configuration of the ipsec policy." msgstr "" "Настройка времени существования оценки безопасности для стратегии ipsec." msgid "" "The security group that you can use as part of your inbound rules for your " "LoadBalancer's back-end instances." msgstr "" "Группа защиты, которую можно использовать в составе входящих правил для " "базовых экземпляров LoadBalancer." msgid "" "The server could not comply with the request since it is either malformed or " "otherwise incorrect." msgstr "" "Серверу не удалось выполнить запрос, поскольку тот неправильно сформирован " "или по какой-либо еще причине недопустим." msgid "The set of parameters passed to this nested stack." msgstr "Набор параметров, передаваемый этому вложенному стеку." msgid "The size in GB of the docker volume." msgstr "Размер тома Docker в ГБ." msgid "The size of AutoScalingGroup can not be less than zero" msgstr "Размер AutoScalingGroup не может быть меньше нуля" msgid "" "The size of the prefix to allocate when the cidr or prefixlen attributes are " "not specified while creating a subnet." msgstr "" "Размер префикса при создании подсетей, когда не указаны атрибуты cidr или " "prefixlen." msgid "The size of the swap, in MB." msgstr "Размер пространства подкачки в МБ." msgid "The size of the volume in GB." msgstr "Размер тома в ГБ." msgid "" "The size of the volume, in GB. It is safe to leave this blank and have the " "Compute service infer the size." msgstr "" "Размер тома в ГБ. Разрешается не указывать значение, тогда размер " "определяется службой вычислений." msgid "" "The size of the volume, in GB. Must be equal or greater than the size of the " "snapshot. It is safe to leave this blank and have the Compute service infer " "the size." msgstr "" "Размер тома в ГБ. Должен быть не меньше размера моментальной копии. " "Разрешается не указывать значение, тогда размер определяется службой " "вычислений." msgid "The snapshot the volume was created from, if any." msgstr "" "Моментальная копия, на основе которой был создан том (в соответствующем " "случае)." msgid "The source of certificate request." msgstr "Источник запроса на сертификат." #, python-format msgid "The specified reference \"%(resource)s\" (in %(key)s) is incorrect." msgstr "Указана неверная ссылка \"%(resource)s\" (в %(key)s)." msgid "The start and end addresses for the allocation pools." msgstr "Начальный и конечный адреса пулов выделения." msgid "The status of the container." msgstr "Состояние контейнера." msgid "The status of the firewall." msgstr "Состояние брандмауэра." msgid "The status of the ipsec site connection." msgstr "Состояние соединения с сайтом ipsec." msgid "The status of the network." msgstr "Состояние сети." msgid "The status of the order." msgstr "Состояние заказа." msgid "The status of the port." msgstr "Состояние порта." msgid "The status of the router." msgstr "Состояние маршрутизатора." msgid "The status of the secret." msgstr "Состояние секретного ключа." msgid "The status of the vpn service." msgstr "Состояние службы vpn." msgid "" "The string that was stored. This value is also available by referencing the " "resource." msgstr "Сохраненная строка. Это значение также доступно по ссылке на ресурс." msgid "The subject of the certificate request." 
msgstr "Субъект запроса на сертификат." msgid "" "The subnet for the port on which the members of the pool will be connected." msgstr "Подсеть порта, к которой будут подключены участники пула. " msgid "The subnet, either subnet or port should be specified." msgstr "Подсеть. Допустимые значения: subnet или port." msgid "The tag key name." msgstr "Имя ключа тега." msgid "The tag value." msgstr "Значение тега." msgid "The template is not a JSON object or YAML mapping." msgstr "Шаблон не является объектом JSON или связыванием YAML." #, python-format msgid "The template section is invalid: %(section)s" msgstr "Раздел шаблона недопустим: %(section)s" #, python-format msgid "The template version is invalid: %(explanation)s" msgstr "Неверная версия шаблона: %(explanation)s" msgid "The tenant owning this floating IP." msgstr "Арендатор - владелец этого нефиксированного IP." msgid "The tenant owning this network." msgstr "Арендатор - владелец этой сети." msgid "The time range in seconds." msgstr "Диапазон времени в секундах." msgid "The timestamp indicating volume creation." msgstr "Системное время создания тома." msgid "The transform protocol of the ipsec policy." msgstr "Протокол преобразования стратегии ipsec." msgid "The type of profile." msgstr "Тип профайла." msgid "The type of senlin policy." msgstr "Тип стратегии senlin." msgid "The type of the certificate request." msgstr "Тип запроса на сертификат." msgid "The type of the order." msgstr "Тип заказа." msgid "The type of the resources in the group." msgstr "Тип ресурсов в группе." msgid "The type of the secret." msgstr "Тип секретного ключа." msgid "The type of the volume mapping to a backend, if any." msgstr "Тип связывания тома с базовым сервером (в соответствующем случае)." msgid "The type/format the secret data is provided in." msgstr "Тип или формат для данных секретного ключа." msgid "The type/mode of the algorithm associated with the secret information." msgstr "Тип или режим алгоритма для данных секретного ключа." msgid "The unencrypted plain text of the secret." msgstr "Незашифрованный текст секретного ключа." msgid "" "The unique identifier of ike policy associated with the ipsec site " "connection." msgstr "" "Уникальный идентификатор для стратегии ike, связанной с соединением с сайтом " "ipsec. " msgid "" "The unique identifier of ipsec policy associated with the ipsec site " "connection." msgstr "" "Уникальный идентификатор для стратегии ipsec, связанной с соединением с " "сайтом ipsec. " msgid "" "The unique identifier of the router to which the vpn service was inserted." msgstr "" "Уникальный идентификатор маршрутизатора, в который добавлена служба vpn." msgid "" "The unique identifier of the subnet in which the vpn service was created." msgstr "Уникальный идентификатор подсети, в которой создана служба vpn." msgid "The unique identifier of the tenant owning the ike policy." msgstr "Уникальный идентификатор арендатора - владельца стратегии ike." msgid "The unique identifier of the tenant owning the ipsec policy." msgstr "Уникальный идентификатор арендатора - владельца стратегии ipsec." msgid "The unique identifier of the tenant owning the ipsec site connection." msgstr "" "Уникальный идентификатор арендатора - владельца соединения с сайтом ipsec." msgid "The unique identifier of the tenant owning the vpn service." msgstr "Уникальный идентификатор арендатора - владельца службы vpn." msgid "" "The unique identifier of vpn service associated with the ipsec site " "connection." 
msgstr "" "Уникальный идентификатор для службы vpn, связанной с соединением с сайтом " "ipsec. " msgid "" "The user-defined region ID and should unique to the OpenStack deployment. " "While creating the region, heat will url encode this ID." msgstr "" "Заданный пользователем ИД области, который должен быть уникальным в " "развертывании OpenStack. При создании области heat закодирует этот ИД в " "формате url." msgid "" "The value for the socket option TCP_KEEPIDLE. This is the time in seconds " "that the connection must be idle before TCP starts sending keepalive probes." msgstr "" "Значение для опции сокета TCP_KEEPIDLE. Это время в секундах, в течение " "которого соединение должно простаивать, прежде чем TCP начнет отправлять " "пакеты keepalive." #, python-format msgid "The value must be at least %(min)s." msgstr "Значение не может быть меньше %(min)s." #, python-format msgid "The value must be in the range %(min)s to %(max)s." msgstr "Значение должно находиться в диапазоне %(min)s - %(max)s." #, python-format msgid "The value must be no greater than %(max)s." msgstr "Значение не может быть больше %(max)s." #, python-format msgid "The values of the \"for_each\" argument to \"%s\" must be lists" msgstr "Значения аргумента \"for_each\" функции \"%s\" должны быть списками" msgid "The version of the ike policy." msgstr "Версия стратегии ike." msgid "" "The vnic type to be bound on the neutron port. To support SR-IOV PCI " "passthrough networking, you can request that the neutron port to be realized " "as normal (virtual nic), direct (pci passthrough), or macvtap (virtual " "interface with a tap-like software interface). Note that this only works for " "Neutron deployments that support the bindings extension." msgstr "" "Тип vnic для ограничения порта neutron. Для поддержки транзитной сети SR-IOV " "PCI можно запросить реализацию порта neutron как обычного (виртуальный nic), " "прямого (транзитная передача pci) или macvtap (виртуальный интерфейс с " "программным интерфейсом, похожим на tap). Обратите внимание, что эта схема " "работает только для развертывания Neutron, поддерживающего расширение " "привязок." msgid "The volume type." msgstr "Тип тома." msgid "The volume used as source, if any." msgstr "Том, используемый в качестве ресурса (в соответствующем случае)." msgid "The volume_id can be boot or non-boot device to the server." msgstr "" "volume_id может быть как загрузочным, так и незагрузочным устройством для " "сервера." msgid "The website endpoint for the specified bucket." msgstr "Конечная точка веб-сайта для указанного комплекта." #, python-format msgid "There is no rule %(rule)s. List of allowed rules is: %(rules)s." msgstr "Правило %(rule)s не определено. Допустимые правила: %(rules)s." msgid "" "There is no such option during 5.0.0, so need to make this attribute " "unsupported, otherwise error will raised." msgstr "" "В версии 5.0.0 нет такой опции. Неиспользуемые атрибуты должны быть помечены " "как неподдерживаемые, в противном случае возникает ошибка." msgid "" "There is no such option during 5.0.0, so need to make this property " "unsupported while it not used." msgstr "" "В версии 5.0.0 нет такой опции. Неиспользуемые свойства должны быть помечены " "как неподдерживаемые." #, python-format msgid "" "There was an error loading the definition of the global resource type " "%(type_name)s." msgstr "" "Ошибка при загрузке определения глобального типа ресурса %(type_name)s." msgid "This endpoint is enabled or disabled." msgstr "Указывает, включена ли конечная точка." 
msgid "This project is enabled or disabled." msgstr "Включен или выключен проект." msgid "This region is enabled or disabled." msgstr "Указывает, включена ли область." msgid "This service is enabled or disabled." msgstr "Указывает, включена ли служба." msgid "Threshold to evaluate against." msgstr "Порог для сравнения." msgid "Time To Live (Seconds)." msgstr "Время жизни (TTL) в секундах." msgid "Time of the first execution in format \"YYYY-MM-DD HH:MM\"." msgstr "Время первого выполнения в формате \"ГГГГ-ММ-ДД чч:мм\"." msgid "Time of the next execution in format \"YYYY-MM-DD HH:MM:SS\"." msgstr "Время следующего выполнения в формате \"ГГГГ-ММ-ДД чч:мм\"." msgid "" "Timeout for client connections' socket operations. If an incoming connection " "is idle for this number of seconds it will be closed. A value of '0' means " "wait forever." msgstr "" "Тайм-аут операций клиентского соединения с сокетом. Если входящее соединение " "простаивает в течение этого времени, оно будет закрыто. Значение '0' " "означает неограниченное ожидание." msgid "Timeout for creating the bay in minutes. Set to 0 for no timeout." msgstr "Тайм-аут в минутах для создания отсека. Значение 0 отменяет тайм-аут." msgid "Timeout in seconds for stack action (ie. create or update)." msgstr "" "Тайм-аут в секундах для действия над стеком (например, создать или обновить)." msgid "" "Toggle to enable/disable caching when Orchestration Engine looks for other " "OpenStack service resources using name or id. Please note that the global " "toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use " "this feature." msgstr "" "Включает или выключает кэширование, когда модуль оркестровки ищет другие " "ресурсы служб OpenStack по имени или ИД. Для использования этой функции " "необходимо глобально задать параметр oslo.cache (enabled=True в группе " "[cache])." msgid "" "Toggle to enable/disable caching when Orchestration Engine retrieves " "extensions from other OpenStack services. Please note that the global toggle " "for oslo.cache(enabled=True in [cache] group) must be enabled to use this " "feature." msgstr "" "Включает или выключает кэширование, когда модуль оркестровки получает " "расширения из других служб OpenStack. Для использования этой функции " "необходимо глобально задать параметр oslo.cache (enabled=True в группе " "[cache])." msgid "" "Toggle to enable/disable caching when Orchestration Engine validates " "property constraints of stack.During property validation with constraints " "Orchestration Engine caches requests to other OpenStack services. Please " "note that the global toggle for oslo.cache(enabled=True in [cache] group) " "must be enabled to use this feature." msgstr "" "Включает или выключает кэширование, когда модуль оркестровки проверяет " "ограничения свойств стека. В ходе проверки ограничений модуль оркестровки " "кэширует запросы к другим службам OpenStack. Для использования этой функции " "необходимо глобально задать параметр oslo.cache (enabled=True в группе " "[cache])." msgid "" "Token for stack-user which can be used for signalling handle when " "signal_transport is set to TOKEN_SIGNAL. None for all other signal " "transports." msgstr "" "Маркер для пользователя стека, который можно использовать для информирования " "обработчика о том, что signal_transport задан равным TOKEN_SIGNAL. Для всех " "прочих signal_transport не задается." msgid "" "Tokens are not needed for Swift TempURLs. This attribute is being kept for " "compatibility with the OS::Heat::WaitConditionHandle resource." 
msgstr "" "Маркеры не нужны для TempURL Swift. Этот атрибут оставлен для совместимости " "с ресурсом OS::Heat::WaitConditionHandle." msgid "Topic" msgstr "Тема" msgid "Transform protocol for the ipsec policy." msgstr "Протокол преобразования для стратегии ipsec." msgid "True if alarm evaluation/actioning is enabled." msgstr "" "True, если включено определение/выполнение действия при предупреждении." msgid "" "True if the system should remember a generated private key; False otherwise." msgstr "" "True, если в системе необходимо запомнить созданный личный ключ; False в " "других случаях." msgid "Type of access that should be provided to guest." msgstr "Тип гостевого доступа." msgid "Type of adjustment (absolute or percentage)." msgstr "Тип корректировки (абсолютная или выраженная в процентах)." msgid "" "Type of endpoint in Identity service catalog to use for communication with " "the OpenStack service." msgstr "" "Укажите конечную точку в каталоге службы идентификаторов для использования в " "связи со службой OpenStack." msgid "Type of keystone Service." msgstr "Тип службы keystone." msgid "Type of receiver." msgstr "Тип получателя." msgid "Type of the data source." msgstr "Тип источника данных." msgid "Type of the notification." msgstr "Тип уведомления." msgid "Type of the object that RBAC policy affects." msgstr "Тип объекта, на который влияет стратегия RBAC." msgid "Type of the value of the input." msgstr "Тип значения входа." msgid "Type of the value of the output." msgstr "Тип значения выхода." msgid "Type of the volume to create on Cinder backend." msgstr "Тип тома для создания в базовой программе Cinder." msgid "URL for API authentication" msgstr "URL для идентификации API" msgid "URL for the data source." msgstr "URL источника данных." msgid "" "URL for the job binary. Must be in the format swift:/// or " "internal-db://." msgstr "" "URL двоичного файла задания. Указывается в формате swift:///" " или internal-db://." msgid "" "URL of TempURL where resource will signal completion and optionally upload " "data." msgstr "" "URL TempURL, через который ресурс будет сигнализировать об окончании, а " "также, при необходимости, - загружать данные." msgid "URL of keystone service endpoint." msgstr "URL конечной точки службы keystone." msgid "URL of the Heat CloudWatch server." msgstr "URL сервера Heat CloudWatch." msgid "" "URL of the Heat metadata server. NOTE: Setting this is only needed if you " "require instances to use a different endpoint than in the keystone catalog" msgstr "" "URL сервера метаданных Heat. Примечание: этот параметр используется, только " "если экземпляры будут использовать конечную точку, отличную от указанной в " "каталоге keystone." msgid "URL of the Heat waitcondition server." msgstr "URL сервера Heat waitcondition." msgid "" "URL where the data for this image already resides. For example, if the image " "data is stored in swift, you could specify \"swift://example.com/container/" "obj\"." msgstr "" "URL местонахождения данных для этого образа. Например, если данные образа " "хранятся в swift, то вы можете указать \"swift://example.com/container/obj\"." msgid "UUID of the internal subnet to which the instance will be attached." msgstr "UUID внутренней подсети, к которой будет прикреплен экземпляр." #, python-format msgid "" "Unable to find neutron provider '%(provider)s', available providers are " "%(providers)s." msgstr "" "Не найден поставщик neutron '%(provider)s'. Доступные поставщики: " "%(providers)s." 
#, python-format msgid "" "Unable to find senlin policy type '%(pt)s', available policy types are " "%(pts)s." msgstr "" "Не найден тип стратегии senlin '%(pt)s'. Доступные типы стратегии: %(pts)s." #, python-format msgid "" "Unable to find senlin profile type '%(pt)s', available profile types are " "%(pts)s." msgstr "" "Не найден тип профайла senlin '%(pt)s'. Доступные типы профайла: %(pts)s." #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "Невозможно загрузить %(app_name)s из файла конфигурации %(conf_file)s.\n" "Ошибка: %(e)r" #, python-format msgid "Unable to locate config file [%s]" msgstr "Не найден файл конфигурации [%s]" #, python-format msgid "Unexpected action %(action)s" msgstr "Непредвиденное действие %(action)s" #, python-format msgid "Unexpected action %s" msgstr "Непредвиденное действие %s" #, python-format msgid "" "Unexpected properties: %(unexpected)s. Only these properties are allowed for " "%(type)s type of order: %(allowed)s." msgstr "" "Непредвиденные свойства: %(unexpected)s. Для типа заказа %(type)s допустимы " "только следующие свойства: %(allowed)s." msgid "Unique identifier for the device." msgstr "Уникальный идентификатор устройства." msgid "" "Unique identifier for the ike policy associated with the ipsec site " "connection." msgstr "" "Уникальный идентификатор для стратегии ike, связанной с соединением с сайтом " "ipsec. " msgid "" "Unique identifier for the ipsec policy associated with the ipsec site " "connection." msgstr "" "Уникальный идентификатор для стратегии ipsec, связанной с соединением с " "сайтом ipsec. " msgid "Unique identifier for the network owning the port." msgstr "Уникальный идентификатор сети - владельца порта." msgid "" "Unique identifier for the router to which the vpn service will be inserted." msgstr "" "Уникальный идентификатор маршрутизатора, в который будет добавлена служба " "vpn. " msgid "" "Unique identifier for the vpn service associated with the ipsec site " "connection." msgstr "" "Уникальный идентификатор для службы vpn, связанной с соединением с сайтом " "ipsec. " msgid "" "Unique identifier of the firewall policy to which this firewall rule belongs." msgstr "" "Уникальный идентификатор стратегии брандмауэра, которой принадлежит это " "правило брандмауэра." msgid "Unique identifier of the firewall policy used to create the firewall." msgstr "" "Уникальный идентификатор стратегии брандмауэра, используемый для создания " "брандмауэра." 
msgid "Unknown" msgstr "Неизвестно" #, python-format msgid "Unknown Property %s" msgstr "Неизвестное свойство %s" #, python-format msgid "Unknown attribute \"%s\"" msgstr "Неизвестный атрибут \"%s\"" #, python-format msgid "Unknown error retrieving %s" msgstr "Неизвестная ошибка при получении %s" #, python-format msgid "Unknown input %s" msgstr "Неизвестные входные данные %s" #, python-format msgid "Unknown key(s) %s" msgstr "Неизвестный ключ(и) %s" msgid "Unknown share_status during creation of share \"{0}\"" msgstr "Неизвестный share_status при создании общего ресурса \"{0}\"" #, python-format msgid "Unknown status creating Bay '%(name)s' - %(reason)s" msgstr "Неизвестное состояние при создании отсека '%(name)s' - %(reason)s" msgid "Unknown status during deleting share \"{0}\"" msgstr "Неизвестный код состояния при удалении общего ресурса \"{0}\"" #, python-format msgid "Unknown status updating Bay '%(name)s' - %(reason)s" msgstr "Неизвестное состояние при обновлении отсека '%(name)s' - %(reason)s" #, python-format msgid "Unknown status: %s" msgstr "Неизвестное состояние: %s" #, python-format msgid "" "Unrecognized value \"%(value)s\" for \"%(name)s\", acceptable values are: " "true, false." msgstr "" "Нераспознанное значение \"%(value)s\" для \"%(name)s\". Допустимые значения: " "true, false." #, python-format msgid "Unsupported object type %(objtype)s" msgstr "Неподдерживаемый тип объекта %(objtype)s" #, python-format msgid "Unsupported resource '%s' in LoadBalancerNames" msgstr "Неподдерживаемый ресурс '%s' в LoadBalancerNames" msgid "Unversioned keystone url in format like http://0.0.0.0:5000." msgstr "URL keystone без указания версии, в формате http://0.0.0.0:5000." #, python-format msgid "Update to properties %(props)s of %(name)s (%(res)s)" msgstr "Обновить до свойств %(props)s для %(name)s (%(res)s)" msgid "Updated At" msgstr "Обновлено" msgid "Updating a stack when it is deleting" msgstr "Обновление стек во время его удаления" msgid "Updating a stack when it is suspended" msgstr "Обновление приостановленного стека" msgid "" "Use get_resource|Ref command instead. For example: { get_resource : " " }" msgstr "" "Используйте команду get_resource|Ref. Пример: { get_resource : <имя-" "ресурса> }" msgid "" "Use only with Neutron, to list the internal subnet to which the instance " "will be attached; needed only if multiple exist; list length must be exactly " "1." msgstr "" "Используйте только с Neutron, для просмотра внутренней подсети, к которой " "будет прикреплен экземпляр; необходимо только при наличии нескольких; длина " "списка должна быть равна в точности 1." #, python-format msgid "Use property %s" msgstr "Используйте свойство %s" #, python-format msgid "Use property %s." msgstr "Используйте свойство %s." msgid "" "Use the `external_gateway_info` property in the router resource to set up " "the gateway." msgstr "" "Для настройки шлюза используйте свойство external_gateway_info в ресурсе " "маршрутизатора." msgid "" "Use the networks attribute instead of first_address. For example: " "\"{get_attr: [, networks, , 0]}\"" msgstr "" "Использовать атрибут networks вместо first_address. Например: \"{get_attr: " "[<имя сервера>, networks, <имя сети>, 0]}\"" msgid "Use this resource at your own risk." msgstr "Результаты использования этого ресурса могут быть непредсказуемыми." 
#, python-format msgid "User %s in invalid domain" msgstr "Пользователь %s из неверного домена" #, python-format msgid "User %s in invalid project" msgstr "Пользователь %s из неверного проекта" msgid "User ID for API authentication" msgstr "ИД пользователя для идентификации API" msgid "User data to pass to instance." msgstr "Пользовательские данные для передачи экземпляру." msgid "User is not authorized to perform action" msgstr "Пользователь не имеет права выполнять это действие." msgid "User name to create a user on instance creation." msgstr "Имя пользователя для создания пользователя при создании экземпляра." msgid "Username associated with the AccessKey." msgstr "Имя пользователя, связанное с AccessKey." msgid "Username for API authentication" msgstr "Имя пользователя для идентификации API" msgid "Username for accessing the data source URL." msgstr "Имя пользователя для доступа к URL источника данных." msgid "Username for accessing the job binary URL." msgstr "Имя пользователя для доступа к URL двоичного файла задания." msgid "Username of privileged user in the image." msgstr "Имя привилегированного пользователя в образе." msgid "VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN networks." msgstr "VLAN ID для сетей VLAN или tunnel-id для сетей GRE/VXLAN." msgid "VPC ID for this gateway association." msgstr "ИД VPC для этой связи шлюза." msgid "VPC ID for where the route table is created." msgstr "ИД VPC для создания таблицы маршрутов." msgid "" "Valid values are encrypt or decrypt. The heat-engine processes must be " "stopped to use this." msgstr "" "Допустимые значения: зашифровать или расшифровать. Предварительно необходимо " "остановить процессы модуля heat." #, python-format msgid "Value \"%(val)s\" is invalid for data type \"%(type)s\"." msgstr "Значение \"%(val)s\" недопустимо для типа данных \"%(type)s\"." #, python-format msgid "Value '%(value)s' is invalid for '%(name)s' which only accepts integer." msgstr "" "Значение %(value)s недопустимо для %(name)s. Допустимы только целые числа." #, python-format msgid "" "Value '%(value)s' is invalid for '%(name)s' which only accepts non-negative " "integer." msgstr "" "Значение %(value)s недопустимо для %(name)s. Допустимы тольконеотрицательные " "целые числа." #, python-format msgid "Value '%s' is not an integer" msgstr "Значение '%s' не является целым" #, python-format msgid "Value must be a comma-delimited list string: %s" msgstr "Значение должно быть строкой списка, разделенного запятыми: %s" #, python-format msgid "Value must be of type %s" msgstr "Значение должно иметь тип %s" #, python-format msgid "Value must be valid JSON: %s" msgstr "Значение должно быть верным JSON: %s" #, python-format msgid "Value must match pattern: %s" msgstr "Значение должно быть шаблоном: %s" msgid "" "Value which can be set or changed on stack update to trigger the resource " "for replacement with a new random string. The salt value itself is ignored " "by the random generator." msgstr "" "Значение, которое может быть задано или изменено при обновлении стека, чтобы " "запустить ресурс для замены на новую произвольную строку. Само значение соли " "игнорируется генератором псевдослучайных чисел." msgid "" "Value which can be set to fail the resource operation to test failure " "scenarios." msgstr "" "Значение, которое может быть задано для того, чтобы операция с ресурсом не " "прошла тестовый сценарий проверки." msgid "" "Value which can be set to trigger update replace for the particular resource." 
msgstr "" "Значение, которое может быть задано для того, активировать замену с " "обновлением для заданного ресурса." #, python-format msgid "Version %(objver)s of %(objname)s is not supported" msgstr "Версия %(objver)s %(objname)s не поддерживается" msgid "Version for the ike policy." msgstr "Версия стратегии ike." msgid "Version of Hadoop running on instances." msgstr "Версия Hadoop, работающего на экземплярах." msgid "Version of IP address." msgstr "Версия IP-адреса." msgid "Vip associated with the pool." msgstr "Vip, связанный с пулом." msgid "Volume attachment failed" msgstr "Не удалось подключить том" msgid "Volume backup failed" msgstr "Не удалось создать резервную копию тома" msgid "Volume backup restore failed" msgstr "Ошибка восстановления тома из резервной копии" msgid "Volume create failed" msgstr "Не удалось создать том" msgid "Volume detachment failed" msgstr "Не удалось отключить том" msgid "Volume in use" msgstr "Том используется" msgid "Volume resize failed" msgstr "Не удалось изменить размер тома" msgid "Volumes per node." msgstr "Томов на узел." msgid "Volumes to attach to instance." msgstr "Тома для подключения к экземпляру." #, python-format msgid "WaitCondition invalid Handle %s" msgstr "Неверный обработчик %s WaitCondition" #, python-format msgid "WaitCondition invalid Handle stack %s" msgstr "Неверный стек %s обработчика WaitCondition" #, python-format msgid "WaitCondition invalid Handle tenant %s" msgstr "Неверный арендатор %s обработчика WaitCondition" msgid "Weight of pool member in the pool (default to 1)." msgstr "Вес элемента пула в пуле (значение по умолчанию равно 1)." msgid "Weight of the pool member in the pool." msgstr "Вес элемента пула в пуле." #, python-format msgid "Went to status %(resource_status)s due to \"%(status_reason)s\"" msgstr "Перешел в состояние %(resource_status)s из-за \"%(status_reason)s\"" msgid "" "When both ipv6_ra_mode and ipv6_address_mode are set, they must be equal." msgstr "" "Если заданы и ipv6_ra_mode, и ipv6_address_mode, их значения должны быть " "одинаковыми." msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "При работе сервера в режиме SSL необходимо указать cert_file и key_file в " "файле конфигурации" msgid "Whether enable this policy on that cluster." msgstr "Указывает, следует ли включить стратегию в этом кластере." msgid "" "Whether the address scope should be shared to other tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only, and restricts changing of shared address scope to unshared with " "update." msgstr "" "Указывает, является ли адресная область общей для всех арендаторов. Обратите " "внимание, что стандартное значение стратегии ограничивает использование " "этого атрибута, которое разрешается только администраторам, а при обновлении " "общую адресную область можно сделать только необщей." msgid "Whether the flavor is shared across all projects." msgstr "Указывает, является ли разновидность общедоступной для всех проектов." msgid "" "Whether the image can be deleted. If the value is True, the image is " "protected and cannot be deleted." msgstr "" "Указывает, можно ли удалить образ. Если значение - True, то образ защищен и " "не может быть удален." msgid "Whether the metering label should be shared across all tenants." msgstr "Должна ли метка измерения использоваться совместно всеми клиентами." msgid "Whether the network contains an external router." 
msgstr "Указывает, содержит ли сеть внешний маршрутизатор." msgid "Whether the part content is text or multipart." msgstr "Является ли содержимое элемента текстовым или составным." msgid "" "Whether the subnet pool will be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Указывает, будет ли пул подсетей общим для всех арендаторов. Стратегия по " "умолчанию разрешает использовать этот атрибут только администраторам." msgid "Whether the volume type is accessible to the public." msgstr "Указывает, будет ли тип тома общедоступным." msgid "Whether this QoS policy should be shared to other tenants." msgstr "Указывает, является ли эта стратегия QoS общей для других арендаторов." msgid "" "Whether this firewall should be shared across all tenants. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "Должен ли этот брандмауэр использоваться совместно всеми клиентами. " "Примечание: Параметр значение стратегии по умолчанию в Neutron позволяет " "применять это свойство которое разрешается только администраторам." msgid "" "Whether this is default IPv4/IPv6 subnet pool. There can only be one default " "subnet pool for each IP family. Note that the default policy setting " "restricts administrative users to set this to True." msgstr "" "Указывает, является ли это пул подсетей IPv4/IPv6 пулом по умолчанию. Для " "каждого семейства протоколов IP может быть только один пул подсетей по " "умолчанию. Стратегия по умолчанию требует, чтобы администратор присвоил " "этому параметру значение True." msgid "Whether this network should be shared across all tenants." msgstr "Является ли сеть общей для всех арендаторов." msgid "" "Whether this network should be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Является ли сеть общей для всех арендаторов. Заметьте, что стандартное " "значение стратегии ограничивает использование этого атрибута, которое " "разрешается только администраторам." msgid "" "Whether this policy should be audited. When set to True, each time the " "firewall policy or the associated firewall rules are changed, this attribute " "will be set to False and will have to be explicitly set to True through an " "update operation." msgstr "" "Указывает, должна ли контролироваться эта стратегия. В случае True при " "каждом изменении стратегии брандмауэра или связанных правил брандмауэра " "этому атрибуту присваивается False, и ему нужно будет явно присваивать True " "через операцию обновления." msgid "Whether this policy should be shared across all tenants." msgstr "Указывает, является ли эта стратегия общей для всех арендаторов." msgid "Whether this rule should be enabled." msgstr "Указывает, следует ли включить это правило." msgid "Whether this rule should be shared across all tenants." msgstr "Указывает, является ли правило общим для всех арендаторов." msgid "Whether to enable the actions or not." msgstr "Указывает, следует ли включить действия." msgid "Whether to specify a remote group or a remote IP prefix." msgstr "Указывать ли удаленную группу или префикс удаленного IP." msgid "" "Which lifecycle actions of the deployment resource will result in this " "deployment being triggered." msgstr "" "Указывает, какие действия ресурса развертывания приведут к активации " "развертывания." msgid "" "Workflow additional parameters. 
If Workflow is reverse typed, params " "requires 'task_name', which defines initial task." msgstr "" "Дополнительные параметры потока операций. Если поток операций имеет обратный " "тип, то в params должен быть элемент 'task_name', определяющий начальную " "задачу." msgid "Workflow description." msgstr "Описание потока операций." msgid "Workflow name." msgstr "Имя потока операций." msgid "Workflow to execute." msgstr "Поток операций, который требуется выполнить." msgid "Workflow type." msgstr "Тип потока операций." #, python-format msgid "Wrong Arguments try: \"%s\"" msgstr "Неверные аргументы, попробуйте: \"%s\"" msgid "You are not authenticated." msgstr "Вы не прошли идентификацию." msgid "You are not authorized to complete this action." msgstr "У вас нет прав на выполнение этого действия." #, python-format msgid "You are not authorized to use %(action)s." msgstr "Нет прав на выполнение действия %(action)s." #, python-format msgid "" "You have reached the maximum stacks per tenant, %d. Please delete some " "stacks." msgstr "" "Достигнуто максимальное число стеков на одного арендатора: %d. Удалите " "несколько стеков." #, python-format msgid "could not find user %s" msgstr "пользователь %s не найден" msgid "deployment_id must be specified" msgstr "Необходимо указать deployment_id" msgid "" "deployments key not allowed in resource metadata with user_data_format of " "SOFTWARE_CONFIG" msgstr "" "ключ deployments недопустим в метаданных ресурса, если user_data_format " "равен SOFTWARE_CONFIG" #, python-format msgid "environment has wrong section \"%s\"" msgstr "в среде имеется неверный раздел \"%s\"" msgid "error in pool" msgstr "ошибка в пуле" msgid "error in vip" msgstr "ошибка в vip" msgid "external network for the gateway." msgstr "Внешняя сеть для шлюза." msgid "granularity should be days, hours, minutes, or seconds" msgstr "степенью детализации должны быть дни, часы, минуты или секунды" msgid "heat.conf misconfigured, auth_encryption_key must be 32 characters" msgstr "" "Ошибка в heat.conf. Длина auth_encryption_key должна быть равна 32 символам" msgid "" "heat.conf misconfigured, cannot specify \"stack_user_domain_id\" or " "\"stack_user_domain_name\" without \"stack_domain_admin\" and " "\"stack_domain_admin_password\"" msgstr "" "Конфигурация heat.conf неверна, указать \"stack_user_domain_id\" или " "\"stack_user_domain_name\" нельзя без \"stack_domain_admin\" и " "\"stack_domain_admin_password\"" msgid "ipv6_ra_mode and ipv6_address_mode are not supported for ipv4." msgstr "ipv6_ra_mode и ipv6_address_mode не поддерживаются для ipv4." msgid "limit cannot be less than 4" msgstr "ограничение не может быть меньше 4" msgid "min/max length must be integral" msgstr "мин/макс длина должна быть целым числом" msgid "min/max must be numeric" msgstr "мин/макс должно быть числовым" msgid "need more memory." msgstr "необходимо больше памяти." msgid "no resource data found" msgstr "данные о ресурсах не найдены" msgid "no resources were found" msgstr "ресурсы не найдены" msgid "nova server metadata needs to be a Map." msgstr "Метаданные сервера nova должны быть картой."
#, python-format msgid "previous_status must be SupportStatus instead of %s" msgstr "previous_status должен быть объектом типа SupportStatus, а не %s" #, python-format msgid "raw template with id %s not found" msgstr "необработанный шаблон с ИД %s не найден" #, python-format msgid "resource with id %s not found" msgstr "ресурс с ИД %s не найден" #, python-format msgid "roles %s" msgstr "роли %s" msgid "segmentation_id cannot be specified except 0 for using flat" msgstr "" "segmentation_id не может быть указан (кроме 0) для использования этого флага" msgid "segmentation_id must be specified for using vlan" msgstr "segmentation_id должен быть указан для использования vlan" msgid "segmentation_id not allowed for flat network type." msgstr "segmentation_id не разрешен для однородной сети." msgid "server_id must be specified" msgstr "Не указан server_id" #, python-format msgid "" "task %(task)s contains property 'requires' in case of direct workflow. Only " "reverse workflows can contain property 'requires'." msgstr "" "Для задачи %(task)s с прямым потоком операций указано свойство 'requires'. " "Свойство 'requires' применяется только для обратных потоков операций." heat-10.0.2/heat/locale/fr/0000775000175000017500000000000013343562672015337 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/fr/LC_MESSAGES/0000775000175000017500000000000013343562672017124 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/fr/LC_MESSAGES/heat.po0000666000175000017500000100464713343562351020415 0ustar zuulzuul00000000000000# Translations template for heat. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the heat project. # # Translators: # Maxime COQUEREL , 2014 # Andrew Melim , 2014 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: heat VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-05 10:35+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 05:27+0000\n" "Last-Translator: Copied by Zanata \n" "Language: fr\n" "Plural-Forms: nplurals=2; plural=(n > 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: French\n" #, python-format msgid "\"%%s\" is not a valid keyword inside a %s definition" msgstr "\"%%s\" n'est pas un mot clé valide dans une définition %s" #, python-format msgid "\"%(fn_name)s\": %(err)s" msgstr "\"%(fn_name)s\" : %(err)s" #, python-format msgid "" "\"%(name)s\" params must be strings, numbers, list or map. Failed to json " "serialize %(value)s" msgstr "" "Les paramètres \"%(name)s\" doivent être des chaînes, des nombres, une liste " "ou une mappe. Echec de sérialisation JSON de %(value)s" #, python-format msgid "" "\"%(section)s\" must contain a map of %(obj_name)s maps. Found a [%(_type)s] " "instead" msgstr "" "\"%(section)s\" doit contenir une mappe de mappes d'objets %(obj_name)s. " "[%(_type)s] trouvé à la place" #, python-format msgid "\"%(url)s\" is not a valid SwiftSignalHandle. The %(part)s is invalid" msgstr "" "\"%(url)s\" n'est pas une adresse SwiftSignalHandle valide. La partie " "%(part)s n'est pas valide" #, python-format msgid "\"%(value)s\" does not validate %(name)s" msgstr "\"%(value)s\" ne valide pas %(name)s" #, python-format msgid "\"%(value)s\" does not validate %(name)s (constraint not found)" msgstr "\"%(value)s\" ne valide pas %(name)s (contrainte introuvable)" #, python-format msgid "\"%(version)s\". 
\"%(version_type)s\" should be one of: %(available)s" msgstr "" "\"%(version)s\". \"%(version_type)s\" doit appartenir à ce qui suit : " "%(available)s" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be: %(available)s" msgstr "\"%(version)s\". \"%(version_type)s\" doit être : %(available)s" #, python-format msgid "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" msgstr "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" #, python-format msgid "\"%s\" argument must be a string" msgstr "L'argument \"%s\" doit être une chaîne" #, python-format msgid "\"%s\" can't traverse path" msgstr "\"%s\" ne peut pas passer à travers le chemin" #, python-format msgid "\"%s\" deletion policy not supported" msgstr "Règle de suppression \"%s\" non prise en charge" #, python-format msgid "\"%s\" delimiter must be a string" msgstr "Le délimiteur \"%s\" doit être une chaîne" #, python-format msgid "\"%s\" is not a list" msgstr "\"%s\" n'est pas une liste" #, python-format msgid "\"%s\" is not a map" msgstr "\"%s\" n'est pas une map" #, python-format msgid "\"%s\" is not a valid ARN" msgstr "\"%s\" n'est pas un valide ARN" #, python-format msgid "\"%s\" is not a valid ARN URL" msgstr "\"%s\" n'est pas une URL ARN valide" #, python-format msgid "\"%s\" is not a valid Heat ARN" msgstr "\"%s\" n'est pas un valide Heat ARN" #, python-format msgid "\"%s\" is not a valid URL" msgstr "\"%s\" n'est pas une URL valide" #, python-format msgid "\"%s\" is not a valid boolean" msgstr "\"%s\" n'est pas un booléen valide" #, python-format msgid "\"%s\" is not a valid template section" msgstr "\"%s\" n'est pas une section de modèle valide" #, python-format msgid "\"%s\" must operate on a list" msgstr "\"%s\" doit s'exercer sur une liste" #, python-format msgid "\"%s\" param placeholders must be strings" msgstr "" "Les marques de réservation de paramètres \"%s\" doivent être des chaînes" #, python-format msgid "\"%s\" parameters must be a mapping" msgstr "Les paramètres \"%s\" doivent être un mappage" #, python-format msgid "\"%s\" params must be a map" msgstr "\"%s\" paramètre doit etre une map" #, python-format msgid "\"%s\" params must be strings, numbers, list or map." msgstr "" "Les paramètres \"%s\" doivent être des chaînes, des nombres, une liste ou " "une mappe." #, python-format msgid "\"%s\" template must be a string" msgstr "\"%s\" template doit etre une chaine de caractère" #, python-format msgid "\"repeat\" syntax should be %s" msgstr "la syntaxe de \"repeat\" doit être %s" #, python-format msgid "%(a)s paused until Hook %(h)s is cleared" msgstr "%(a)s mis en pause jusqu'à ce que le point d'ancrage %(h)s soit effacé" #, python-format msgid "%(action)s is not supported for resource." msgstr "%(action)s n'est pas supporté par la ressource." #, python-format msgid "%(action)s is restricted for resource." msgstr "L'action %(action)s est limitée pour la ressource." #, python-format msgid "%(desired_capacity)s must be between %(min_size)s and %(max_size)s" msgstr "" "%(desired_capacity)s doit être comprise entre %(min_size)s et %(max_size)s" #, python-format msgid "%(error)s%(path)s%(message)s" msgstr "%(error)s%(path)s%(message)s" #, python-format msgid "%(feature)s is not supported." msgstr "%(feature)s n'est pas supporté" #, python-format msgid "" "%(img)s must be provided: Referenced cluster template %(tmpl)s has no " "default_image_id defined." msgstr "" "%(img)s doit être fourni : default_image_id n'est pas défini pour le modèle " "de cluster référencé %(tmpl)s ." 
#, python-format msgid "%(lc)s (%(ref)s) reference can not be found." msgstr "Référence %(lc)s (%(ref)s) introuvable." #, python-format msgid "" "%(lc)s (%(ref)s) requires a reference to the configuration not just the name " "of the resource." msgstr "" "%(lc)s (%(ref)s) nécessite une référence à la configuration, pas uniquement " "le nom de la ressource." #, python-format msgid "%(len)d of %(count)d received" msgstr "%(len)d sur %(count)d reçus" #, python-format msgid "%(len)d of %(count)d received - %(reasons)s" msgstr "%(len)d sur %(count)d reçus - %(reasons)s" #, python-format msgid "%(message)s" msgstr "%(message)s" #, python-format msgid "%(min_size)s can not be greater than %(max_size)s" msgstr "%(min_size)s ne peut pas être supérieure à %(max_size)s" #, python-format msgid "%(name)s constraint invalid for %(utype)s" msgstr "contrainte %(name)s non valide pour %(utype)s" #, python-format msgid "%(prop1)s cannot be specified without %(prop2)s." msgstr "%(prop1)s ne peut pas être spécifié sans %(prop2)s." #, python-format msgid "" "%(prop1)s property should only be specified for %(prop2)s with value " "%(value)s." msgstr "" "La propriété %(prop1)s doit être uniquement spécifiée pour %(prop2)s avec la " "valeur %(value)s." #, python-format msgid "%(resource)s: Invalid attribute %(key)s" msgstr "%(resource)s : Attribut non valide %(key)s" #, python-format msgid "" "%(result)s - Unknown status %(resource_status)s due to \"%(status_reason)s\"" msgstr "" "%(result)s - Statut inconnu %(resource_status)s. Cause : \"%(status_reason)s" "\"" #, python-format msgid "%(schema)s supplied for %(type)s %(data)s" msgstr "%(schema)s fourni pour %(type)s %(data)s" #, python-format msgid "%(server)s-port-%(number)s" msgstr "%(server)s-port-%(number)s" #, python-format msgid "%(type)s not in valid format: %(error)s" msgstr "%(type)s n'est pas dans un format valide: %(error)s" #, python-format msgid "%s Key Name must be a string" msgstr "Le nom de clé %s doit être une chaîne" #, python-format msgid "%s Timed out" msgstr "%s a dépassé le délai d'attente" #, python-format msgid "%s Value Name must be a string" msgstr "Le nom de valeur %s doit être une chaîne" #, python-format msgid "%s is not a valid job location." msgstr "%s n'est pas un emplacement de travail valide." #, python-format msgid "%s is not active" msgstr "%s n'est pas actif/ve" #, python-format msgid "%s is not an integer." msgstr "%s n'est pas un entier." #, python-format msgid "%s must be provided" msgstr "%s doit être fourni" #, python-format msgid "'%(attr)s': expected '%(expected)s', got '%(current)s'" msgstr "'%(attr)s' : '%(expected)s' attendu, '%(current)s' obtenu" msgid "" "'task_name' is not assigned in 'params' in case of reverse type workflow." msgstr "" "'task_name' n'est pas affecté dans 'params' dans le cas d'un flux de travail " "de type inversé." msgid "'true' if DHCP is enabled for this subnet; 'false' otherwise." msgstr "'true' si DHCP est activé pour ce sous-réseau ; sinon 'false'." msgid "A UUID for the set of servers being requested." msgstr "UUID pour l'ensemble de serveurs demandé." msgid "A bad or out-of-range value was supplied" msgstr "Une valeur incorrecte ou hors plage a été fournie" msgid "A boolean value of default flag." msgstr "Valeur booléenne de l'indicateur par défaut." msgid "A boolean value specifying the administrative status of the network." msgstr "Valeur booléenne indiquant le statut administratif du réseau." 
#, python-format msgid "" "A character class and its corresponding %(min)s constraint to generate the " "random string from." msgstr "" "Classe de caractères et contrainte %(min)s correspondante à partir " "desquelles générer la chaîne aléatoire." #, python-format msgid "" "A character sequence and its corresponding %(min)s constraint to generate " "the random string from." msgstr "" "Séquence de caractères et contrainte %(min)s correspondante à partir " "desquelles générer la chaîne aléatoire." msgid "A comma-delimited list of server ip addresses. (Heat extension)." msgstr "" "Une liste délimitée par des virgules d'adresses IP de serveur. (Extension de " "Heat)." msgid "A description of the volume." msgstr "Description du volume." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name. This value is typically vda." msgstr "" "Un nom de périphérique auquel le volume sera rattaché dans le système, à " "l'emplacement /dev/device_name. Cette valeur est généralement vda." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name.e.g. vdb" msgstr "" "Un nom de périphérique auquel le volume sera rattaché dans le système, à " "l'emplacement /dev/device_name.par ex. vdb" msgid "" "A dict of all network addresses with corresponding port_id. Each network " "will have two keys in dict, they are network name and network id. The port " "ID may be obtained through the following expression: \"{get_attr: [, " "addresses, , 0, port]}\"." msgstr "" "Dictionnaire de toutes les adresses réseau avec port_id correspondant. Deux " "clés figurent pour chaque réseau dans le dictionnaire : nom du réseau et ID " "du réseau. L'ID de port peut être obtenu à l'aide de l'expression suivante : " "\"{get_attr: [, addresses, , 0, port]}\"." msgid "" "A dict of assigned network addresses of the form: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Each network will have two keys in dict, they " "are network name and network id." msgstr "" "Dictionnaire des adresses réseau affectées au format : {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Deux clés figurent pour chaque réseau dans le " "dictionnaire : nom du réseau et ID du réseau." msgid "A dict of key-value pairs output from the stack." msgstr "Dictionnaire de paires clé-valeur de sortie de la pile." msgid "A dictionary which contains name and input of the workflow." msgstr "Dictionnaire contenant le nom et l'entrée du flux de travail." msgid "A length constraint must have a min value and/or a max value specified." msgstr "" "Une contrainte de longueur doit avoir une valeur minimale et/ou une valeur " "maximale indiquées." msgid "A list of URLs (webhooks) to invoke when state transitions to alarm." msgstr "Liste d'URL (webhooks) à appeler quand l'état passe à alarme." msgid "" "A list of URLs (webhooks) to invoke when state transitions to insufficient-" "data." msgstr "" "Liste d'URL (webhooks) à appeler quand l'état passe à insufficient-data." msgid "A list of URLs (webhooks) to invoke when state transitions to ok." msgstr "Liste d'URL (webhooks) à appeler quand l'état passe à OK." msgid "A list of access rules that define access from IP to Share." msgstr "" "Liste de règles d'accès définissant l'accès au partage depuis une adresse IP." msgid "A list of all rules for the QoS policy." 
msgstr "Liste de toutes les règles pour la stratégie de qualité de service." msgid "A list of all subnet attributes for the port." msgstr "Liste de tous les attributs de sous-réseau du port." msgid "" "A list of character class and their constraints to generate the random " "string from." msgstr "" "Liste de classe de caractères et de leurs contraintes à partir de laquelle " "générer la chaîne." msgid "" "A list of character sequences and their constraints to generate the random " "string from." msgstr "" "Liste de séquences de caractères et de leurs contraintes à partir de " "laquelle générer la chaîne aléatoire." msgid "A list of cluster instance IPs." msgstr "Liste des IP d'instance de cluster." msgid "A list of clusters to which this policy is attached." msgstr "Liste des clusters auxquels cette stratégie est connectée." msgid "A list of host route dictionaries for the subnet." msgstr "Liste des dictionnaires de route hôte pour le sous-réseau." msgid "A list of instances ids." msgstr "Liste d'ID d'instance." msgid "A list of metric ids." msgstr "Liste des ID de métrique." msgid "" "A list of query factors, each comparing a Sample attribute with a value. " "Implicitly combined with matching_metadata, if any." msgstr "" "Liste de facteurs d'interrogation, chacun d'eux comparant un attribut Sample " "et une valeur. Implicitement combiné au paramètre matching_metadata, si " "celui-ci est indiqué." msgid "A list of resource IDs for the resources in the chain." msgstr "Liste d'identificateurs de ressource pour les ressources de la chaîne." msgid "A list of resource IDs for the resources in the group." msgstr "Liste d'identificateurs de ressource pour les ressources du groupe." msgid "A list of security groups for the port." msgstr "Liste de groupes de sécurité pour le port." msgid "A list of security services IDs or names." msgstr "Liste des ID ou noms des services de sécurité." msgid "A list of string policies to apply. Defaults to anti-affinity." msgstr "Liste de règles de chaîne à appliquer. Par défaut : anti-affinity." msgid "A login profile for the user." msgstr "Un profil de connexion pour l'utilisateur." msgid "A mandatory input parameter is missing" msgstr "Un paramètre d'entrée obligatoire est manquant" msgid "A map containing all headers for the container." msgstr "Mappe contenant tous les en-têtes pour le conteneur." msgid "" "A map of Nova names and captured stderrs from the configuration execution to " "each server." msgstr "" "Mappe de noms Nova et d'erreurs standard (stderr) capturées depuis " "l'exécution de la configuration sur chaque serveur." msgid "" "A map of Nova names and captured stdouts from the configuration execution to " "each server." msgstr "" "Mappe de noms Nova et de sorties standard (stdout) capturées depuis " "l'exécution de la configuration sur chaque serveur." msgid "" "A map of Nova names and returned status code from the configuration " "execution." msgstr "" "Mappe de noms Nova et de codes de statut renvoyés à partir de l'exécution de " "la configuration." msgid "" "A map of files to create/overwrite on the server upon boot. Keys are file " "names and values are the file contents." msgstr "" "Une mappe de fichiers à créer/remplacer sur le serveur lors de l'amorçage. " "Les clés sont des noms de fichier et les valeurs sont le contenu des " "fichiers." msgid "" "A map of resource names to the specified attribute of each individual " "resource." msgstr "" "Mappe de noms de ressource vers l'attribut indiqué de chaque ressource " "individuelle." 
msgid "" "A map of resource names to the specified attribute of each individual " "resource. Requires heat_template_version: 2014-10-16." msgstr "" "Mappe de noms de ressource vers l'attribut indiqué de chaque ressource " "individuelle. Nécessite heat_template_version : 2014-10-16." msgid "" "A map of user-defined meta data to associate with the account. Each key in " "the map will set the header X-Account-Meta-{key} with the corresponding " "value." msgstr "" "Mappe de métadonnées définies par l'utilisateur à associer au compte. Chaque " "clé de la mappe définira l'en-tête X-Account-Meta-{key} avec la valeur " "correspondante." msgid "" "A map of user-defined meta data to associate with the container. Each key in " "the map will set the header X-Container-Meta-{key} with the corresponding " "value." msgstr "" "Mappe de métadonnées définies par l'utilisateur à associer au conteneur. " "Chaque clé de la mappe définira l'en-tête X-Container-Meta-{key} avec la " "valeur correspondante." msgid "A name used to distinguish the volume." msgstr "Nom utilisé pour distinguer le volume." msgid "" "A per-tenant quota on the prefix space that can be allocated from the subnet " "pool for tenant subnets." msgstr "" "Quota par locataire dans l'espace préfixe qui peut être alloué depuis le " "pool de sous-réseau pour les sous-réseaux du locataire." msgid "" "A predefined access control list (ACL) that grants permissions on the bucket." msgstr "" "Une liste de contrôle d'accès (ACL) prédéfinie qui accorde des droits sur le " "compartiment." msgid "A range constraint must have a min value and/or a max value specified." msgstr "" "Une contrainte d'intervalle doit avoir une valeur minimale et/ou une valeur " "maximale indiquées." msgid "" "A reference to the wait condition handle used to signal this wait condition." msgstr "" "Référence au descripteur de condition d'attente utilisé pour signaler cette " "condition d'attente." msgid "" "A signed url to create executions for workflows specified in Workflow " "resource." msgstr "" "URL signée permettant de créer des exécutions pour les flux de travail " "spécifiés dans la ressource Workflow." msgid "A signed url to handle the alarm." msgstr "Une URL signée pour traiter l'alarme." msgid "A signed url to handle the alarm. (Heat extension)." msgstr "Une URL signée pour traiter l'alarme. (Extension de Heat)." msgid "A specified set of DNS name servers to be used." msgstr "Ensemble de serveurs de noms DNS à utiliser." msgid "" "A string specifying a symbolic name for the network, which is not required " "to be unique." msgstr "" "Une chaîne indiquant un nom symbolique pour le réseau, qui ne doit pas " "forcément être unique." msgid "" "A string specifying a symbolic name for the security group, which is not " "required to be unique." msgstr "" "Une chaîne indiquant un nom symbolique pour le groupe de sécurité, qui ne " "doit pas forcément être unique." msgid "A string specifying physical network mapping for the network." msgstr "Chaîne indiquant le mappage de réseau physique pour le réseau." msgid "A string specifying the provider network type for the network." msgstr "Chaîne indiquant le type de réseau du fournisseur pour le réseau." msgid "A string specifying the segmentation id for the network." msgstr "Chaîne indiquant l'ID de segmentation pour le réseau." msgid "A symbolic name for this port." msgstr "Nom symbolique pour ce port." msgid "A url to handle the alarm using native API." msgstr "URL de traitement de l'alarme à l'aide d'une API native." 
msgid "" "A variable that this resource will use to replace with the current index of " "a given resource in the group. Can be used, for example, to customize the " "name property of grouped servers in order to differentiate them when listed " "with nova client." msgstr "" "Variable que cette ressource utilisera pour remplacer l'index en cours d'une " "ressource donnée dans le groupe. Peut être utilisé, par exemple, pour " "personnaliser la propriété du nom des serveurs groupés afin de les " "différencier lorsqu'ils sont répertoriés avec le client nova." msgid "AWS compatible instance name." msgstr "Nom d'instance compatible AWS." msgid "AWS query string is malformed, does not adhere to AWS spec" msgstr "" "La chaîne de demande AWS est incorrectement formée, ne respecte pas la " "spécification AWS" msgid "Access policies to apply to the user." msgstr "Règles d'accès à appliquer à l'utilisateur." #, python-format msgid "AccessPolicy resource %s not in stack" msgstr "La ressource AccessPolicy %s n'est pas dans la pile" #, python-format msgid "Action %s not allowed for user" msgstr "L'action %s n'est pas autorisé pour l'utilisateur" msgid "Action to be performed on the traffic matching the rule." msgstr "Action à effectuer sur le trafic appartenant à la règle." msgid "Actual input parameter values of the task." msgstr "Valeurs de paramètre d'entrée réelles de la tâche." msgid "Add needed policies directly to the task, Policy keyword is not needed" msgstr "" "Ajoutez les stratégies requises directement à la tâche ; le mot clé Policy " "n'est pas nécessaire." msgid "Additional MAC/IP address pairs allowed to pass through a port." msgstr "" "Paires d'adresses IP/MAC supplémentaires autorisées à passer par un port." msgid "Additional MAC/IP address pairs allowed to pass through the port." msgstr "" "Paires d'adresses IP/MAC supplémentaires autorisées à passer par le port." msgid "Additional routes for this subnet." msgstr "Routes supplémentaires pour ce sous-réseau." msgid "Address family of the address scope, which is 4 or 6." msgstr "Famille d'adresses du périmètre d'adresse, à savoir 4 ou 6." msgid "" "Address of the notification. It could be a valid email address, url or " "service key based on notification type." msgstr "" "Adresse de la notification. Il peut s'agir d'une adresse e-mail, d'une URL " "ou d'une clé de service valide basée sur le type de notification." msgid "" "Address to bind the server. Useful when selecting a particular network " "interface." msgstr "" "Adresse de liaison du serveur. Utile lors de la sélection d'une interface " "réseau en particulier." msgid "Administrative state for the ipsec site connection." msgstr "Etat administratif pour la connexion de site IPSec." msgid "Administrative state for the vpn service." msgstr "Etat administratif pour le service VPN." msgid "" "Administrative state of the firewall. If false (down), firewall does not " "forward packets and will drop all traffic to/from VMs behind the firewall." msgstr "" "Etat d'administration du pare-feu. Si false (en panne), le pare-feu ne " "transfère pas les paquets et supprimera l'ensemble du trafic entrant/sortant " "des machines virtuelles derrière le pare-feu." msgid "Administrative state of the router." msgstr "Etat administratif du routeur." 
#, python-format msgid "Alarm %(alarm)s could not find scaling group named \"%(group)s\"" msgstr "" "L'alarme %(alarm)s n'a pas pu trouver le groupe de mise à l'échelle nommé " "\"%(group)s\"" #, python-format msgid "Algorithm must be one of %s" msgstr "L'algorithme devrait faire partie des paramètres %s" msgid "All heat engines are down." msgstr "Tous les moteurs Heat sont en panne." msgid "Allocated floating IP address." msgstr "Adresse IP flottante allouée." msgid "Allocation ID for VPC EIP address." msgstr "ID d'allocation pour l'adresse EIP de VPC." msgid "Allow client's debug log output." msgstr "Autorisez la sortir du journal de débogage du client." msgid "Allow or deny action for this firewall rule." msgstr "Autoriser ou refuser l'action pour cette règle de pare-feu." msgid "Allow orchestration of multiple clouds." msgstr "Autoriser l'orchestration de multiple clouds." msgid "" "Allow reauthentication on token expiry, such that long-running tasks may " "complete. Note this defeats the expiry of any provided user tokens." msgstr "" "Autorisez la réauthentification à expiration du jeton, de sorte que les " "tâches à exécution longue puissent se terminer. Notez que cela désactive " "l'expiration de tout jeton utilisateur fourni." msgid "" "Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At " "least one endpoint needs to be specified." msgstr "" "Noeuds finals keystone autorisés pour auth_uri lorsque multi_cloud est " "activé. Au moins un noeud final doit être spécifié." msgid "" "Allowed tenancy of instances launched in the VPC. default - any tenancy; " "dedicated - instance will be dedicated, regardless of the tenancy option " "specified at instance launch." msgstr "" "Location autorisée des instances lancées dans le VPC. par défaut - toute " "location ; dédié - l'instance sera dédiée, indépendamment de l'option de " "location indiquée lors du lancement de l'instance." #, python-format msgid "Allowed values: %s" msgstr "Valeurs autorisées : %s" msgid "AllowedPattern must be a string" msgstr "AllowedPattern doit être une chaîne" msgid "AllowedValues must be a list" msgstr "AllowedValues doit être une liste" msgid "Allowing not to store action results after task completion." msgstr "" "Autorisation de ne pas stocker les résultats d'action après l'achèvement de " "la tâche." msgid "" "Allows to synchronize multiple parallel workflow branches and aggregate " "their data. Valid inputs: all - the task will run only if all upstream tasks " "are completed. Any numeric value - then the task will run once at least this " "number of upstream tasks are completed and corresponding conditions have " "triggered." msgstr "" "Autorise la synchronisation de plusieurs branches de flux de travail " "parallèles et l'agrégation de leurs données. Entrées valides : all - La " "tâche est exécutée uniquement si toutes les tâches en amont sont terminées. " "Toute valeur numérique - La tâche est exécutée une fois au moins quand ces " "tâches en amont seront achevées et que les conditions correspondantes auront " "été déclenchées." #, python-format msgid "Ambiguous versions (%s)" msgstr "Versions ambiguës (%s)" msgid "" "Amount of disk space (in GB) required to boot image. Default value is 0 if " "not specified and means no limit on the disk size." msgstr "" "Quantité d'espace disque (en Go) requise pour l'image d'initialisation. La " "valeur par défaut est 0 si non spécifié et signifie taille de disque " "illimitée." msgid "" "Amount of ram (in MB) required to boot image. 
Default value is 0 if not " "specified and means no limit on the ram size." msgstr "" "Quantité de RAM (en Mo) requise pour l'image d'initialisation. La valeur par " "défaut est 0 si non spécifié et signifie taille de RAM illimitée." msgid "An address scope ID to assign to the subnet pool." msgstr "ID de portée d'adresse à affecter au pool de sous-réseau." msgid "An application health check for the instances." msgstr "Un diagnostic d'intégrité d'application pour les instances." msgid "An ordered list of firewall rules to apply to the firewall." msgstr "Liste ordonnée de règles de pare-feu à appliquer au pare-feu." msgid "" "An ordered list of nics to be added to this server, with information about " "connected networks, fixed ips, port etc." msgstr "" "Liste ordonnée des contrôleurs NIC à ajouter à ce serveur, avec des " "informations sur les réseaux connectés, les adresses IP fixes, les ports, " "etc." msgid "An unknown exception occurred." msgstr "Une exception inconnue s'est produite." msgid "" "Any data structure arbitrarily containing YAQL expressions that defines " "workflow output. May be nested." msgstr "" "Toute structure de données contenant de manière arbitraire des expressions " "YAQL et qui définit la sortie de flux de travail. Elle peut être imbriquée." msgid "Anything other than one VPCZoneIdentifier" msgstr "Elément autre qu'un ID VPCZoneIdentifier" msgid "Api endpoint reference of the instance." msgstr "Référence de noeud final d'API de l'instance." msgid "" "Arbitrary key-value pairs specified by the client to help boot a server." msgstr "" "Paires clé-valeur arbitraires indiquées par le client pour faciliter " "l'amorçage d'un serveur." msgid "" "Arbitrary key-value pairs specified by the client to help the Cinder " "scheduler creating a volume." msgstr "" "Paires clé-valeur arbitraires indiquées par le client pour aider le " "planificateur Cinder à créer un volume." msgid "" "Arbitrary key/value metadata to store contextual information about this " "queue." msgstr "" "Métadonnées de clé/valeur arbitraires pour le stockage d'informations " "contextuelles sur cette file d'attente." msgid "" "Arbitrary key/value metadata to store for this server. Both keys and values " "must be 255 characters or less. Non-string values will be serialized to JSON " "(and the serialized string must be 255 characters or less)." msgstr "" "Métadonnées de clé/valeur arbitraires à stocker pour ce serveur. Les clés et " "les valeurs ne doivent pas dépasser 255 caractères. Les valeurs qui ne sont " "pas des chaînes seront sérialisées au format JSON (et la chaîne sérialisée " "ne doit pas dépasser 255 caractères)." msgid "Arbitrary key/value metadata to store information for aggregate." msgstr "" "Métadonnées de clé/valeur arbitraires pour le stockage des informations de " "l'agrégat."
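The "Arbitrary key/value metadata" messages above apply to properties such as OS::Nova::Server metadata, including the 255-character and JSON-serialization rules they state. A hedged sketch (names and values are assumptions):

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: cirros            # assumed image name
      flavor: m1.small         # assumed flavor name
      metadata:
        role: frontend         # plain string, stored as-is
        ports: [80, 443]       # non-string value, serialized to the JSON string "[80, 443]"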
#, python-format msgid "Argument to \"%s\" must be a list" msgstr "L'argument pour \"%s\" doit être une liste" #, python-format msgid "Argument to \"%s\" must be a string" msgstr "L'argument pour \"%s\" doit être une chaîne" #, python-format msgid "Argument to \"%s\" must be string or list" msgstr "L'argument pour \"%s\" doit être une chaîne ou une liste" #, python-format msgid "Argument to function \"%s\" must be a list of strings" msgstr "L'argument pour la fonction \"%s\" doit être une liste de chaînes" #, python-format msgid "" "Arguments to \"%s\" can be of the next forms: [resource_name] or " "[resource_name, attribute, (path), ...]" msgstr "" "Les arguments pour \"%s\" peuvent être aux formats suivants : " "[nom_ressource] ou [nom_ressource, attribut, (chemin), ...]" #, python-format msgid "Arguments to \"%s\" must be a map" msgstr "Les arguments pour \"%s\" doivent être une mappe" #, python-format msgid "Arguments to \"%s\" must be of the form [index, collection]" msgstr "Les arguments pour \"%s\" doivent avoir le format [index, collection]" #, python-format msgid "" "Arguments to \"%s\" must be of the form [resource_name, attribute, " "(path), ...]" msgstr "" "Les arguments pour \"%s\" doivent avoir le format [nom_ressource, attribut, " "(chemin d'accès), ...]" #, python-format msgid "Arguments to \"%s\" must be of the form [resource_name, attribute]" msgstr "" "Les arguments pour \"%s\" doivent avoir le format [nom_ressource, attribut]" #, python-format msgid "Arguments to %s not fully resolved" msgstr "Les arguments pour %s ne sont pas entièrement résolus" #, python-format msgid "Attempt to delete a stack with id: %(id)s %(msg)s" msgstr "Tentative de suppression d'une pile avec l'ID : %(id)s %(msg)s" #, python-format msgid "Attempt to delete user creds with id %(id)s that does not exist" msgstr "" "Tentative de suppression des données d'identification utilisateur avec l'ID " "%(id)s qui n'existe pas" #, python-format msgid "Attempt to delete watch_rule: %(id)s %(msg)s" msgstr "Tentative de suppression de watch_rule : %(id)s %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(msg)s" msgstr "Tentative de mise à jour d'une pile avec l'ID : %(id)s %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(traversal)s %(msg)s" msgstr "" "Tentative de mise à jour d'une pile avec l'ID : %(id)s %(traversal)s %(msg)s" #, python-format msgid "Attempt to update a watch with id: %(id)s %(msg)s" msgstr "Tentative de mise à jour d'une surveillance avec l'ID : %(id)s %(msg)s" msgid "Attempt to use stored_context with no user_creds" msgstr "Tentative d'utilisation de stored_context sans user_creds" #, python-format msgid "Attribute %(attr)s for facade %(type)s missing in provider" msgstr "" "L'attribut %(attr)s pour la façade %(type)s est manquant dans le fournisseur" msgid "Audit status of this firewall policy." msgstr "Statut d'audit de ces règles d'administration de pare-feu." msgid "Authentication Endpoint URI." msgstr "URI du nœud final d'authentification." msgid "Authentication hash algorithm for the ike policy." msgstr "Algorithme de hachage d'authentification pour la stratégie IKE." msgid "Authentication hash algorithm for the ipsec policy." msgstr "Algorithme de hachage d'authentification pour la stratégie IPSec." msgid "Authorization failed." msgstr "Echec de l'autorisation." msgid "AutoScaling group ID to apply policy to." msgstr "ID de groupe AutoScaling auquel appliquer la règle." msgid "AutoScaling group name to apply policy to." 
msgstr "Nom de groupe AutoScaling auquel appliquer la règle." msgid "Availability Zone of the subnet." msgstr "Zone de disponibilité du sous-réseau." msgid "Availability zone in which you want the subnet." msgstr "Zone de disponibilité dans laquelle vous voulez placer le sous-réseau." msgid "Availability zone to create servers in." msgstr "Zone de disponibilité dans laquelle créer des serveurs." msgid "Availability zone to create volumes in." msgstr "Zone de disponibilité dans laquelle créer des volumes." msgid "Availability zone to launch the instance in." msgstr "Zone de disponibilité dans laquelle lancer l'instance." msgid "Backend authentication failed" msgstr "L'authentification d'arrière-plan a échoué" msgid "Binary" msgstr "binaire" msgid "Block device mappings for this server." msgstr "Mappages d'unités par bloc pour ce serveur." msgid "Block device mappings to attach to instance." msgstr "Mappages d'unité par bloc à connecter à l'instance." msgid "Block device mappings v2 for this server." msgstr "Mappages d'unités par bloc v2 pour ce serveur." msgid "" "Boolean extra spec that used for filtering of backends by their capability " "to create share snapshots." msgstr "" "Spécification booléenne supplémentaire utilisée pour le filtrage des back-" "end en fonction de leur capacité à créer des instantanés de partage." msgid "Boolean indicating if the volume can be booted or not." msgstr "Booléen indiquant si le volume peut être amorcé ou pas." msgid "Boolean indicating if the volume is encrypted or not." msgstr "Booléen indiquant si le volume est chiffré." msgid "" "Boolean indicating whether allow the volume to be attached more than once." msgstr "" "Valeur booléenne indiquant si le volume peut ou non être connecté plusieurs " "fois." msgid "" "Bus of the device: hypervisor driver chooses a suitable default if omitted." msgstr "" "Bus de l'unité : le pilote de l'hyperviseur choisit un paramètre par défaut " "convenable si celui-ci est omis." msgid "CIDR block notation for this subnet." msgstr "Notation de bloc CIDR pour ce sous-réseau." msgid "CIDR block to apply to subnet." msgstr "Bloc CIDR à appliquer au sous-réseau." msgid "CIDR block to apply to the VPC." msgstr "Bloc CIDR à appliquer au VPC." msgid "CIDR of subnet." msgstr "CIDR de sous-réseau." msgid "CIDR to be associated with this metering rule." msgstr "CIDR à associer à cette règle de mesure." #, python-format msgid "Can not specify property \"%s\" if the volume type is public." msgstr "" "Impossible de spécifier la propriété \"%s\" si le type de volume est public." #, python-format msgid "Can not use %s property on Nova-network." msgstr "Impossible d'utiliser la propriété %s sur le réseau Nova." 
#, python-format msgid "Can't find role %s" msgstr "Ne peut pas trouvé un role %s" msgid "Can't get user token without password" msgstr "Impossible d'obtenir le jeton utilisateur sans mot de passe" msgid "Can't get user token, user not yet created" msgstr "Impossible d'obtenir le jeton utilisateur, utilisateur pas encore créé" msgid "Can't traverse attribute path" msgstr "Impossible de traverser le chemin d'attribut" #, python-format msgid "Cancelling update when stack is %s" msgstr "Annulation de la mise à jour lorsque la pile est %s" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "Pas d'appel de %(method)s sur un objet %(objtype)s orphelin" #, python-format msgid "Cannot check %s, stack not created" msgstr "Impossible de contrôler %s, pile non créée" #, python-format msgid "Cannot define the following properties at the same time: %(props)s." msgstr "" "Impossible de définir simultanément les propriétés suivantes : %(props)s." #, python-format msgid "" "Cannot establish connection to Heat endpoint at region \"%(region)s\" due to " "\"%(exc)s\"" msgstr "" "Impossible d'établir une connexion au noeud final Heat dans la région " "\"%(region)s\". Cause : \"%(exc)s\"" msgid "" "Cannot get stack domain user token, no stack domain id configured, please " "fix your heat.conf" msgstr "" "Impossible d'obtenir le jeton d'utilisateur de domaine de pile, aucun " "identificateur de domaine de pile configuré, veuillez corriger votre fichier " "heat.conf" msgid "Cannot migrate to lower schema version." msgstr "Impossible de migrer vers une version de schéma inférieure." #, python-format msgid "Cannot modify readonly field %(field)s" msgstr "Impossible de modifier le champ en lecture seule %(field)s" #, python-format msgid "Cannot resume %s, resource not found" msgstr "Impossible de reprendre %s. Ressource introuvable" #, python-format msgid "Cannot resume %s, resource_id not set" msgstr "Impossible de reprendre %s, resource_id non défini" #, python-format msgid "Cannot resume %s, stack not created" msgstr "Impossible de reprendre %s, pile non créée" #, python-format msgid "Cannot suspend %s, resource not found" msgstr "Impossible d'interrompre %s. Ressource introuvable" #, python-format msgid "Cannot suspend %s, resource_id not set" msgstr "Impossible d'interrompre %s, resource_id non défini" #, python-format msgid "Cannot suspend %s, stack not created" msgstr "Impossible d'interrompre %s, pile non créée" msgid "Captured stderr from the configuration execution." msgstr "Stderr capturé à partir de l'exécution de configuration." msgid "Captured stdout from the configuration execution." msgstr "Stdout capturé à partir de l'exécution de configuration." #, python-format msgid "Circular Dependency Found: %(cycle)s" msgstr "Dépendance en boucle trouvée : %(cycle)s" msgid "Client entity to poll." msgstr "Entité client à interroger." msgid "Client name and resource getter name must be specified." msgstr "" "Le nom du client et le nom de la méthode d'accès get de la ressource doivent " "être spécifiés." msgid "Client to poll." msgstr "Client à interroger." msgid "Cluster configs dictionary." msgstr "Dictionnaire de configurations de cluster." msgid "Cluster information." msgstr "Information du cluster." msgid "Cluster metadata." msgstr "Métadonnées du cluster." msgid "Cluster name." msgstr "Nom du cluster." msgid "Cluster status." msgstr "Status du cluster." msgid "Comparison operator." msgstr "Opérateur de comparaison." 
#, python-format msgid "Concurrent transaction for %(action)s" msgstr "Transaction concurrente pour %(action)s" msgid "Configuration of session persistence." msgstr "Configuration de persistance de session." msgid "" "Configuration script or manifest which specifies what actual configuration " "is performed." msgstr "" "Script ou manifeste de configuration qui indique quelle configuration réelle " "est effectuée." msgid "Configure most important configs automatically." msgstr "Configurez les config les plus importantes automatiquement." #, python-format msgid "Confirm resize for server %s failed" msgstr "Echec de confirmation du redimensionnement du serveur %s " msgid "Connection info for this network gateway." msgstr "Information de connexion pour la passerelle réseau." #, python-format msgid "Container '%(name)s' creation failed: %(code)s - %(reason)s" msgstr "Echec de création du conteneur '%(name)s' : %(code)s - %(reason)s" msgid "Container format of image." msgstr "Format de conteneur de l'image." msgid "" "Content of part to attach, either inline or by referencing the ID of another " "software config resource." msgstr "" "Contenu de la partie à lier, soit en ligne, soit en référençant l'ID d'une " "autre ressource de configuration logicielle." msgid "Context for this stack." msgstr "Contexte de cette pile." msgid "Control how the disk is partitioned when the server is created." msgstr "" "Contrôler la façon dont le disque est partitionné lors de la création du " "serveur." msgid "Controls DPD protocol mode." msgstr "Contrôle le mode de protocole de détection d'homologue inactif (DPD)." msgid "" "Convenience attribute to fetch the first assigned network address, or an " "empty string if nothing has been assigned at this time. Result may not be " "predictable if the server has addresses from more than one network." msgstr "" "Attribut de commodité pour extraire la première adresse de réseau affectée, " "ou une chaîne vide si rien n'a été affecté pour le moment. Le résultat peut " "s'avérer imprévisible si le serveur a des adresses provenant de plusieurs " "réseaux." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure when signal_transport is set to " "TOKEN_SIGNAL. You can signal success by adding --data-binary '{\"status\": " "\"SUCCESS\"}' , or signal failure by adding --data-binary '{\"status\": " "\"FAILURE\"}'. This attribute is set to None for all other signal transports." msgstr "" "Attribut de commodité, fournit le préfixe de commande CLI curl, qui peut " "être utilisé pour signaler l'achèvement ou l'incident de descripteur quand " "signal_transport est défini sur TOKEN_SIGNAL. Vous pouvez signaler la " "réussite en ajoutant --data-binary '{\"status\": \"SUCCESS\"}' ou signaler " "l'échec en ajoutant --data-binary '{\"status\": \"FAILURE\"}'. Cet attribut " "est défini sur None pour tout autre transport de signal." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure. You can signal success by " "adding --data-binary '{\"status\": \"SUCCESS\"}' , or signal failure by " "adding --data-binary '{\"status\": \"FAILURE\"}'." msgstr "" "Attribut de commodité, fournit le préfixe de commande CLI curl, qui peut " "être utilisé pour signaler l'achèvement ou l'incident de descripteur. 
Vous " "pouvez signaler la réussite en ajoutant --data-binary '{\"status\": \"SUCCESS" "\"}' ou signaler l'échec en ajoutant --data-binary '{\"status\": \"FAILURE" "\"}'." msgid "Cooldown period, in seconds." msgstr "Période de Cooldown, en secondes." #, python-format msgid "Could not confirm resize of server %s" msgstr "Impossible de confirmer le redimensionnement du serveur %s" #, python-format msgid "Could not detach attachment %(att)s from server %(srv)s." msgstr "Impossible de déconnecter %(att)s du serveur %(srv)s." #, python-format msgid "Could not fetch remote template \"%(name)s\": %(exc)s" msgstr "Impossible d'extraire le modèle distant \"%(name)s\" : %(exc)s" #, python-format msgid "Could not fetch remote template '%(url)s': %(exc)s" msgstr "Impossible d'extraire le modèle distant '%(url)s' : %(exc)s" #, python-format msgid "Could not load %(name)s: %(error)s" msgstr "Impossible de charger %(name)s : %(error)s" #, python-format msgid "Could not retrieve template: %s" msgstr "Impossible de recevoir le template: %s" msgid "Create volumes on the same physical port as an instance." msgstr "Créez des volumes sur le même port physique qu'une instance." msgid "" "Credentials used for swift. Not required if sahara is configured to use " "proxy users and delegated trusts for access." msgstr "" "Informations d'identification utilisées pour Swift. Non requises si Sahara " "est configuré pour l'utilisation d'utilisateurs proxy et de confiances " "déléguées pour l'accès." msgid "Cron expression." msgstr "Expression CRON." msgid "Current share status." msgstr "Statut en cours du partage." msgid "Custom LoadBalancer template can not be found" msgstr "Le modèle LoadBalancer personnalisé est introuvable" msgid "DB instance restore point." msgstr "Point de restauration de l'instance de base de données." msgid "DNS Domain id or name." msgstr "ID ou nom du domaine DNS." msgid "DNS IP address used inside tenant's network." msgstr "Adresse IP DNS utilisée dans le réseau du locataire." msgid "DNS Record type." msgstr "Type d'enregistrement DNS." msgid "DNS domain serial." msgstr "Numéro de série du domaine DNS." msgid "" "DNS record data, varies based on the type of record. For more details, " "please refer rfc 1035." msgstr "" "Données d'enregistrement DNS, varient en fonction du type d'enregistrement. " "Pour plus de détails, voir rfc 1035." msgid "" "DNS record priority. It is considered only for MX and SRV types, otherwise, " "it is ignored." msgstr "" "Priorité de l'enregistrement DNS. Prise compte uniquement pour les types MX " "et SRV (sinon est ignorée)." #, python-format msgid "Data supplied was not valid: %(reason)s" msgstr "Les données fournies n'étaient pas valides: %(reason)s " #, python-format msgid "" "Database %(dbs)s specified for user does not exist in databases for resource " "%(name)s." msgstr "" "La base de données %(dbs)s indiquée pour l'utilisateur n'existe pas dans les " "bases de données pour la ressource %(name)s." msgid "Database volume size in GB." msgstr "Taille de volume de base de données en Go." #, python-format msgid "" "Databases property is required if users property is provided for resource %s." msgstr "" "La propriété de base de données est requise si la propriété utilisateurs est " "fournie pour la ressource %s." #, python-format msgid "" "Datastore version %(dsversion)s for datastore type %(dstype)s is not valid. " "Allowed versions are %(allowed)s." 
msgstr "" "La version du magasin de données %(dsversion)s pour le type du magasin de " "données %(dstype)s n'est pas correcte. Les versions autorisées sont " "%(allowed)s." msgid "Datetime when a share was created." msgstr "Date et heure de création d'un partage." msgid "" "Dead Peer Detection protocol configuration for the ipsec site connection." msgstr "" "Configuration du protocole de détection d'homologue inactif pour la " "connexion de site IPSec." msgid "Dead engines are removed." msgstr "Dead engines are removed." msgid "Default TLS container reference to retrieve TLS information." msgstr "" "Référence du conteneur TLS par défaut pour l'extraction des informations TLS." #, python-format msgid "Default must be a comma-delimited list string: %s" msgstr "" "La valeur par défaut doit être une chaîne de liste délimitée par des " "virgules : %s" msgid "Default name or UUID of the image used to boot Hadoop nodes." msgstr "" "Nom par défaut ou UUID de l'image utilisée pour amorcer les noeuds Hadoop." msgid "Default region name used to get services endpoints." msgstr "" "Nom de région par défaut utilisé pour obtenir des noeuds finals de service." msgid "Default settings for some of task attributes defined at workflow level." msgstr "" "Paramètres par défaut de certains attributs de tâche définis au niveau du " "flux de travail." msgid "Default value for the input if none is specified." msgstr "Valeur par défaut pour l'entrée si rien n'est indiqué." msgid "" "Defines a delay in seconds that Mistral Engine should wait after a task has " "completed before starting next tasks defined in on-success, on-error or on-" "complete." msgstr "" "Définit un délai, en secondes, que le moteur Mistral doit observer après " "l'exécution d'une tâche et avant le démarrage des tâches suivantes définies " "à l'état on-success, on-error ou on-complete." msgid "" "Defines a delay in seconds that Mistral Engine should wait before starting a " "task." msgstr "" "Définit un délai, en secondes, que le moteur Mistral doit observer avant de " "démarrer une tâche." msgid "Defines a pattern how task should be repeated in case of an error." msgstr "" "Définit un modèle (pattern) de la façon dont la tâche doit être répétée en " "cas d'erreur." msgid "" "Defines a period of time in seconds after which a task will be failed " "automatically by engine if hasn't completed." msgstr "" "Définit une durée, en secondes, au bout de laquelle une tâche est " "automatiquement considérée comme en échec par le moteur si elle n'a pas " "abouti." msgid "Defines if share type is accessible to the public." msgstr "Définit si le type de partage est accessible au public." msgid "Defines if shared filesystem is public or private." msgstr "Défini si le système de fichiers partagé est public ou privé." msgid "" "Defines the method in which the request body for signaling a workflow would " "be parsed. In case this property is set to True, the body would be parsed as " "a simple json where each key is a workflow input, in other cases body would " "be parsed expecting a specific json format with two keys: \"input\" and " "\"params\"." msgstr "" "Définit la méthode selon laquelle le corps de la demande permettant de " "signaler un flux de travail devrait être analysée syntaxiquement. Dans le " "cas où cette propriété est définie sur True, le corps serait analysé en tant " "qu'élément json simple dans lequel chaque clé est une entrée de flux de " "travail. 
Dans les autres cas, le corps serait analysé avec comme résultat un " "format json spécifique comportant deux clés : \"input\" et \"params\"." msgid "" "Defines whether Mistral Engine should put the workflow on hold or not before " "starting a task." msgstr "" "Détermine si le moteur Mistral doit ou non mettre le flux de travail en " "attente avant de démarrer une tâche." msgid "Defines whether auto-assign security group to this Node Group template." msgstr "" "Définit si un groupe de sécurité doit être automatiquement affecté à ce " "modèle de groupe de noeuds." #, python-format msgid "" "Defining more than one configuration for the same action in " "SoftwareComponent \"%s\" is not allowed." msgstr "" "La définition de plusieurs configurations pour la même action dans le " "composant logiciel \"%s\" n'est pas autorisée." msgid "Deleting in-progress snapshot" msgstr "Suppression de l'instantané en cours" #, python-format msgid "Deleting non-empty container (%(id)s) when %(prop)s is False" msgstr "" "Suppression d'un conteneur non vide (%(id)s) lorsque %(prop)s a pour valeur " "False" #, python-format msgid "Delimiter for %s must be string" msgstr "Le délimiteur pour %s doit être une chaîne" msgid "" "Denotes that the deployment is in an error state if this output has a value." msgstr "" "Dénote que le déploiement est dans un état d'erreur si cette sortie a une " "valeur." msgid "Deploy data available" msgstr "Données de déploiement disponibles" #, python-format msgid "Deployment exited with non-zero status code: %s" msgstr "Le déploiement s'est terminé avec un code de statut non nul : %s" #, python-format msgid "Deployment to server failed: %s" msgstr "Echec du déploiement vers le serveur : %s" #, python-format msgid "Deployment with id %s not found" msgstr "Déploiement avec l'ID %s introuvable" msgid "Deprecated." msgstr "Obsolète." msgid "" "Describe time constraints for the alarm. Only evaluate the alarm if the time " "at evaluation is within this time constraint. Start point(s) of the " "constraint are specified with a cron expression, whereas its duration is " "given in seconds." msgstr "" "Décrit les contraintes de l'alarme. Evalue l'alarme uniquement si l'heure de " "l'évaluation se situe dans la contrainte de temps. Le ou les points de " "départ de la contrainte sont spécifiés par une expression CRON, et sa durée " "est indiquée en secondes." msgid "Description for the alarm." msgstr "Description pour l'alarme." msgid "Description for the firewall policy." msgstr "Description des règles d'administration de pare-feu." msgid "Description for the firewall rule." msgstr "Description pour la règle du pare-feu." msgid "Description for the firewall." msgstr "Description pour le pare-feu." msgid "Description for the ike policy." msgstr "Description pour la stratégie IKE." msgid "Description for the ipsec policy." msgstr "Description pour la stratégie IPSec." msgid "Description for the ipsec site connection." msgstr "Description pour la connexion de site IPSec." msgid "Description for the time constraint." msgstr "Description de la contrainte de temps." msgid "Description for the vpn service." msgstr "Description du service vpn." msgid "Description for this interface." msgstr "Description pour cette interface." msgid "Description of domain." msgstr "Description du domaine." msgid "Description of keystone group." msgstr "Description du groupe Keystone." msgid "Description of keystone project." msgstr "Description du projet Keystone." msgid "Description of keystone region." 
msgstr "Description de la région Keystone." msgid "Description of keystone service." msgstr "Description du service Keystone." msgid "Description of keystone user." msgstr "Description de l'utilisateur Keystone." msgid "Description of record." msgstr "Description de l'enregistrement." msgid "Description of the Node Group Template." msgstr "Description du modèle de groupe de noeuds." msgid "Description of the Sahara Group Template." msgstr "Description du modèle de groupe Sahara." msgid "Description of the alarm." msgstr "Description de l'alarme." msgid "Description of the data source." msgstr "Description de la source de données." msgid "Description of the firewall policy." msgstr "Description des règles d'administration de pare-feu." msgid "Description of the firewall rule." msgstr "Description de la règle de pare-feu." msgid "Description of the firewall." msgstr "Description du pare-feu" msgid "Description of the image." msgstr "Description de l'image." msgid "Description of the input." msgstr "Description de l'entrée" msgid "Description of the job binary." msgstr "Description du binaire de travail." msgid "Description of the metering label." msgstr "Description de l'étiquette de mesure." msgid "Description of the output." msgstr "Description de la sortie" msgid "Description of the pool." msgstr "Description du pool." msgid "Description of the security group." msgstr "Description du groupe de sécurité." msgid "Description of the vip." msgstr "Description de vip." msgid "Description of the volume type." msgstr "Description du type de volume." msgid "Description of the volume." msgstr "Description du volume." msgid "Description of this Load Balancer." msgstr "Description de cet équilibreur de charge." msgid "Description of this listener." msgstr "Description de ce programme d'écoute." msgid "Description of this pool." msgstr "Description de ce pool." msgid "Desired IPs for this port." msgstr "Les IP souhaitées pour ce port." msgid "Desired capacity of the cluster." msgstr "Capacité de cluster souhaitée." msgid "Desired initial number of instances." msgstr "Nombre d'instances initial souhaité." msgid "Desired initial number of resources in cluster." msgstr "Nombre initial souhaité de ressources dans le cluster." msgid "Desired initial number of resources." msgstr "Nombre de ressources initial souhaité." msgid "Desired number of instances." msgstr "Nombre d'instances souhaité." msgid "DesiredCapacity must be between MinSize and MaxSize" msgstr "DesiredCapacity doit être compris entre MinSize et MaxSize" msgid "Destination IP address or CIDR." msgstr "Adresse IP de destination ou routage CIDR" msgid "Destination ip_address for this firewall rule." msgstr "Destination ip_address pour la règle du pare-feu." msgid "Destination port number or a range." msgstr "Numéro de port de destination ou intervalle." msgid "Destination port range for this firewall rule." msgstr "Plage de ports de destination pour cette règle de pare-feu." msgid "Detailed information about resource." msgstr "Informations détaillées sur la ressource." msgid "Device ID of this port." msgstr "ID unité de ce port." msgid "Device info for this network gateway." msgstr "Infos d'unité pour cette passerelle réseau." msgid "" "Device type: at the moment we can make distinction only between disk and " "cdrom." msgstr "" "Type d'unité : à cet instant, nous pouvons faire la distinction uniquement " "entre le disque et CD-ROM." msgid "" "Dict, which has expand properties for port. 
Used only if port property is " "not specified for creating port." msgstr "" "Dictionnaire comportant les propriétés d'expansion pour le port. Utilisé " "uniquement si la propriété de port n'est pas spécifiée pour la création du " "port." msgid "Dictionary containing workflow tasks." msgstr "Dictionnaire contenant des tâches de flux de travail." msgid "Dictionary of node configurations." msgstr "Dictionnaire de configurations de noeud." msgid "Dictionary of variables to publish to the workflow context." msgstr "" "Dictionnaire des variables à publier dans le contexte de flux de travail." msgid "Dictionary which contains input for workflow." msgstr "Dictionnaire contenant l'entrée du flux de travail." msgid "" "Dictionary-like section defining task policies that influence how Mistral " "Engine runs tasks. Must satisfy Mistral DSL v2." msgstr "" "Section de type dictionnaire qui définit les stratégies de tâche qui " "influencent la façon dont le moteur Mistral exécute les tâches. Doit être " "conforme à Mistral DSL v2." msgid "DisableRollback and OnFailure may not be used together" msgstr "DisableRollback et OnFailure ne peuvent pas être utilisés ensemble" msgid "Disk format of image." msgstr "Format de disque de l'image." msgid "Does not contain a valid AWS Access Key or certificate" msgstr "Ne contient pas de clé d'accès ou de certificat AWS valide" msgid "Domain email." msgstr "E-mail du domaine." msgid "Domain name." msgstr "Nom du domaine." #, python-format msgid "Duplicate names %s" msgstr "Noms en double %s" msgid "Duplicate refs are not allowed." msgstr "Les références dupliquées ne sont pas autorisées." msgid "Duration for the time constraint." msgstr "Durée de la contrainte de temps." msgid "EIP address to associate with instance." msgstr "Adresse EIP à associer à l'instance." #, python-format msgid "Each %(object_name)s must contain a %(sub_section)s key." msgstr "Chaque objet %(object_name)s doit contenir une clé %(sub_section)s." msgid "Each Resource must contain a Type key." msgstr "Chaque ressource doit contenir une clé Type." msgid "Ebs is missing, this is required when specifying BlockDeviceMappings." msgstr "Ebs manquant, requis lors de la spécification de BlockDeviceMappings." msgid "" "Egress rules are only allowed when Neutron is used and the 'VpcId' property " "is set." msgstr "" "Les règles Egress sont autorisées uniquement si Neutron est utilisé et que " "la propriété 'VpcId' est définie." #, python-format msgid "Either %(net)s or %(port)s must be provided." msgstr "Soit %(net)s, soit %(port)s doit être fourni." msgid "Either 'EIP' or 'AllocationId' must be provided." msgstr "'EIP' ou 'AllocationId' doit être fourni." msgid "Either 'InstanceId' or 'LaunchConfigurationName' must be provided." msgstr "'InstanceId' ou 'LaunchConfigurationName' doit être indiqué." #, python-format msgid "Either project or domain must be specified for role %s" msgstr "Le projet ou le domaine doit être spécifié pour le rôle %s" #, python-format msgid "Either volume_id or snapshot_id must be specified for device mapping %s" msgstr "" "volume_id ou snapshot_id doit être spécifié pour le mappage d'unités %s" msgid "Email address of keystone user." msgstr "Adresse e-mail de l'utilisateur Keystone." msgid "Enable the legacy OS::Heat::CWLiteAlarm resource." msgstr "Activer l'ancienne ressource OS::Heat::CWLiteAlarm." msgid "Enable the preview Stack Abandon feature." msgstr "Activer la fonction préliminaire d'abandon de pile (Stack Abandon)." msgid "Enable the preview Stack Adopt feature." 
msgstr "Activer la fonction préliminaire d'adoption de pile (Stack Adopt)." msgid "" "Enables Source NAT on the router gateway. NOTE: The default policy setting " "in Neutron restricts usage of this property to administrative users only." msgstr "" "Active la conversion NAT source sur la passerelle de routeur. Remarque : Le " "paramètre de règle par défaut dans Neutron limite l'utilisation de cette " "propriété aux utilisateurs administrateurs uniquement." msgid "" "Enables engine with convergence architecture. All stacks with this option " "will be created using convergence engine." msgstr "" "Active le moteur avec l'architecture de convergence. Toutes les piles avec " "cette option seront créées à l'aide du moteur de convergence." msgid "Enables or disables read-only access mode of volume." msgstr "Active ou désactive le mode d'accès en lecture seule du volume." msgid "Encapsulation mode for the ipsec policy." msgstr "Mode d'encapsulation pour la stratégie IPSec." msgid "" "Encrypt template parameters that were marked as hidden and also all the " "resource properties before storing them in database." msgstr "" "Chiffrez les paramètres de modèle marqués comme masqués ainsi que les " "propriétés de ressource avant de les stocker dans la base de données." msgid "Encryption algorithm for the ike policy." msgstr "Algorithme de chiffrement pour la stratégie IKE." msgid "Encryption algorithm for the ipsec policy." msgstr "Algorithme de chiffrement pour la stratégie IPSec." msgid "End address for the allocation pool." msgstr "Adresse de fin pour le pool d'allocation." #, python-format msgid "End resizing the group %(group)s" msgstr "Finir de redimensionner le groupe %(group)s" msgid "" "Endpoint/url which can be used for signalling handle when signal_transport " "is set to TOKEN_SIGNAL. None for all other signal transports." msgstr "" "Noeud final/URL d'utilisateur de pile qui peut être utilisé(e) pour signaler " "le descripteur quand signal_transport est défini sur TOKEN_SIGNAL. Valeur " "None pour tout autre transport de signal." msgid "Endpoint/url which can be used for signalling handle." msgstr "Noeud final/URL utilisable pour signaler le descripteur." msgid "Engine_Id" msgstr "Engine_Id" msgid "Error" msgstr "Erreur" #, python-format msgid "Error authorizing action %s" msgstr "Erreur d'autorisation sur l'action %s" #, python-format msgid "Error creating ec2 keypair for user %s" msgstr "" "Erreur lors de la création de la paire de clés ec2 pour l'utilisateur %s" msgid "" "Error during applying access rules to share \"{0}\". The root cause of the " "problem is the following: {1}." msgstr "" "Erreur lors de l'application des règles d'accès au partage \"{0}\". Cause " "racine du problème : {1}." msgid "Error during creation of share \"{0}\"" msgstr "Erreur lors de la création du partage \"{0}\"" msgid "Error during deleting share \"{0}\"." msgstr "Erreur lors de la suppression du partage \"{0}\"." #, python-format msgid "Error validating value '%(value)s'" msgstr "Erreur lors de la validation de la valeur '%(value)s'" #, python-format msgid "Error validating value '%(value)s': %(message)s" msgstr "Erreur lors de la validation de la valeur '%(value)s' : %(message)s" msgid "Ethertype of the traffic." msgstr "Ethertype du trafic." msgid "Exclude state for cidr." msgstr "Exclure l'état pour le cidr." #, python-format msgid "Expected 1 external network, found %d" msgstr "1 réseau externe attendu, %d trouvé(s)" msgid "Export locations of share." msgstr "Emplacements d'exportation du partage." 
msgid "Expression of the alarm to evaluate." msgstr "Expression de l'alarme à évaluer." msgid "External fixed IP address." msgstr "Adresse IP fixe externe." msgid "External fixed IP addresses for the gateway." msgstr "Adresses IP fixes externes pour la passerelle." msgid "External network gateway configuration for a router." msgstr "Configuration de la passerelle réseau externe pour un routeur." msgid "" "Extra parameters to include in the \"floatingip\" object in the creation " "request. Parameters are often specific to installed hardware or extensions." msgstr "" "Paramètres supplémentaires à inclure dans l'objet \"floatingip\" dans la " "demande de création. Les paramètres sont souvent spécifiques au matériel ou " "aux extensions installés." msgid "Extra parameters to include in the creation request." msgstr "Paramètres supplémentaires à inclure dans la demande de création." msgid "Extra parameters to include in the request." msgstr "Paramètres supplémentaires à inclure dans la demande." msgid "" "Extra parameters to include in the request. Parameters are often specific to " "installed hardware or extensions." msgstr "" "Paramètres supplémentaires à inclure dans la demande. Les paramètres sont " "souvent spécifiques au matériel ou aux extensions installés." msgid "Extra specs key-value pairs defined for share type." msgstr "" "Paires de valeurs clé de spécification supplémentaire définies pour le type " "de partage." #, python-format msgid "Failed to attach interface (%(port)s) to server (%(server)s)" msgstr "Echec de connexion de l'interface (%(port)s) au serveur (%(server)s)" #, python-format msgid "Failed to attach volume %(vol)s to server %(srv)s - %(err)s" msgstr "Echec de connexion du volume %(vol)s au serveur %(srv)s - %(err)s" #, python-format msgid "Failed to create Bay '%(name)s' - %(reason)s" msgstr "Echec de la création de la baie '%(name)s' - %(reason)s" #, python-format msgid "Failed to detach interface (%(port)s) from server (%(server)s)" msgstr "Echec de déconnexion de l'interface (%(port)s) du serveur (%(server)s)" #, python-format msgid "Failed to execute %(action)s for %(cluster)s: %(reason)s" msgstr "Echec d'exécution de %(action)s pour %(cluster)s : %(reason)s" #, python-format msgid "Failed to extend volume %(vol)s - %(err)s" msgstr "Echec de l'extension du volume %(vol)s - %(err)s" #, python-format msgid "Failed to fetch template: %s" msgstr "Echec de l'extraction du modele: %s" #, python-format msgid "Failed to find instance %s" msgstr "Echec pour trouver instance %s" #, python-format msgid "Failed to find server %s" msgstr "Echec pour trouver le serveur %s" #, python-format msgid "Failed to parse JSON data: %s" msgstr "Echec de l'analyse des données JSON : %s" #, python-format msgid "Failed to restore volume %(vol)s from backup %(backup)s - %(err)s" msgstr "" "Echec de restauration du volume %(vol)s à partir de la sauvegarde %(backup)s " "- %(err)s" msgid "Failed to retrieve template" msgstr "Echec de l'extraction du modèle" #, python-format msgid "Failed to retrieve template data: %s" msgstr "Echec de l'extraction des données du modèle : %s" #, python-format msgid "Failed to retrieve template: %s" msgstr "Echec de réception du template: %s" #, python-format msgid "" "Failed to send message to stack (%(stack_name)s) on other engine " "(%(engine_id)s)" msgstr "" "Echec de l'envoi du message à la pile (%(stack_name)s) sur l'autre moteur " "(%(engine_id)s)" #, python-format msgid "Failed to stop stack (%(stack_name)s) on other engine (%(engine_id)s)" msgstr "" 
"Echec de l'arrêt de la pile (%(stack_name)s) sur l'autre moteur " "(%(engine_id)s)" #, python-format msgid "Failed to update Bay '%(name)s' - %(reason)s" msgstr "Echec de mise à jour de la baie '%(name)s' - %(reason)s" msgid "Failed to update, can not found port info." msgstr "Echec de la mise à jour, les informations de port sont introuvables." #, python-format msgid "" "Failed validating stack template using Heat endpoint at region \"%(region)s" "\" due to \"%(exc)s\"" msgstr "" "Echec de la validation du modèle de pile à l'aide du noeud final Heat dans " "la région \"%(region)s\". Cause : \"%(exc)s\"" msgid "Fake attribute !a." msgstr "Attribut factice !a." msgid "Fake attribute a." msgstr "Attribut factice a." msgid "Fake property !a." msgstr "Propriété factice !a." msgid "Fake property !c." msgstr "Propriété factice !c." msgid "Fake property a." msgstr "Propriété factice a." msgid "Fake property c." msgstr "Propriété factice c." msgid "Fake property ca." msgstr "Propriété factice ca." msgid "" "False to trigger actions when the threshold is reached AND the alarm's state " "has changed. By default, actions are called each time the threshold is " "reached." msgstr "" "False pour déclencher les actions lorsque le seuil est atteint ET que l'état " "d'alarme a changé. Par défaut, les actions sont appelées chaque fois que le " "seuil est atteint." #, python-format msgid "Field %(field)s of %(objname)s is not an instance of Field" msgstr "Le champ %(field)s de %(objname)s n'est pas une instance de Champ" msgid "" "Fixed IP address to specify for the port created on the requested network." msgstr "Adresse IP fixe à indiquer pour le port créé sur le réseau demandé." msgid "Fixed IP addresses." msgstr "Adresses IP fixes." msgid "Fixed IPv4 address for this NIC." msgstr "Adresse IPv4 fixe de ce contrôleur NIC." msgid "Flag indicating if traffic to or from instance is validated." msgstr "" "Indicateur signalant si le trafic en provenance ou à destination de " "l'instance est validé." msgid "" "Flag to enable/disable port security on the network. It provides the default " "value for the attribute of the ports created on this network." msgstr "" "Indicateur permettant d'activer/de désactiver la sécurité de port sur le " "réseau. Fournit la valeur par défaut pour l'attribut des ports créés sur ce " "réseau." msgid "" "Flag to enable/disable port security on the port. When disable this " "feature(set it to False), there will be no packages filtering, like security-" "group and address-pairs." msgstr "" "Indicateur permettant d'activer/de désactiver la sécurité de port sur le " "port. Lorsque cette fonction est désactivée (définie sur False), il n'y a " "aucun filtrage des packages, comme groupe de sécurité et paires d'adresses." msgid "Flavor of the instance." msgstr "Version de l'instance." msgid "Friendly name of the port." msgstr "Nom usuel du port." msgid "Friendly name of the router." msgstr "Nom usuel du routeur." msgid "Friendly name of the subnet." msgstr "Nom usuel du sous-réseau." #, python-format msgid "Function \"%s\" must have arguments" msgstr "La fonction \"%s\" doit avoir des arguments" #, python-format msgid "Function \"%s\" usage: [\"\", \"\"]" msgstr "Syntaxe de la fonction \"%s\" : [\"\", \"\"]" #, python-format msgid "Gateway IP address \"%(gateway)s\" is in invalid format." msgstr "" "L'adresse IP de passerelle \"%(gateway)s\" n'est pas dans un format valide." msgid "Gateway network for the router." msgstr "Passerelle réseau du routeur." 
msgid "Generic HeatAPIException, please use specific subclasses!" msgstr "" "HeatAPIException générique, veuillez utiliser des sous-classes spécifiques !" msgid "Glance image ID or name." msgstr "ID image Glance ou nom." msgid "Governs permissions set in manila for the cluster ips." msgstr "Régit les droits définis dans manila pour les IP de cluster." msgid "Granularity to use for age argument, defaults to days." msgstr "" "Granularité à utiliser pour l'argument d'ancienneté, par défaut il s'agit de " "jours." msgid "Hadoop cluster name." msgstr "Nom du cluster Hadoop." #, python-format msgid "Header X-Auth-Url \"%s\" not an allowed endpoint" msgstr "L'en-tête X-Auth-Url \"%s\" n'est pas un noeud final autorisé" msgid "Health probe timeout, in seconds." msgstr "Délai d'attente de l'analyse d'intégrité, en secondes." msgid "" "Heat build revision. If you would prefer to manage your build revision " "separately, you can move this section to a different file and add it as " "another config option." msgstr "" "Révision de la version de Heat. Si vous préférez gérer votre révision de " "version séparément, vous pouvez déplacer cette section dans un fichier " "différent et l'ajouter comme une autre option de configuration." msgid "Host" msgstr "Host" msgid "Hostname" msgstr "Nom d'Hôte" msgid "Hostname of the instance." msgstr "Nom d'hôte de l'instance." msgid "How long to preserve deleted data." msgstr "Délai de conservation des données supprimées." msgid "" "How the client will signal the wait condition. CFN_SIGNAL will allow an HTTP " "POST to a CFN keypair signed URL. TEMP_URL_SIGNAL will create a Swift " "TempURL to be signalled via HTTP PUT. HEAT_SIGNAL will allow calls to the " "Heat API resource-signal using the provided keystone credentials. " "ZAQAR_SIGNAL will create a dedicated zaqar queue to be signalled using the " "provided keystone credentials. TOKEN_SIGNAL will allow and HTTP POST to a " "Heat API endpoint with the provided keystone token. NO_SIGNAL will result in " "the resource going to a signalled state without waiting for any signal." msgstr "" "Indique comment le client va signaler la condition d'attente. CFN_SIGNAL va " "autoriser un HTTP POST vers une URL signée par une paire de clés CFN. " "TEMP_URL_SIGNAL va créer une TempURL Swift pour l'envoi d'un signal via HTTP " "PUT. HEAT_SIGNAL va autoriser les appels vers le signal de ressource d'API " "Heat à l'aide des informations d'identification Keystone fournies. " "ZAQAR_SIGNAL va créer une file d'attente zaqar dédiée pour l'envoi d'un " "signal à l'aide des informations d'identification Keystone fournies. " "TOKEN_SIGNAL va autoriser un HTTP POST vers un nœud final d'API Heat à " "l'aide du jeton Keystone fourni. NO_SIGNAL va se produire dans la ressource " "qui passe à l'état COMPLETE sans attendre aucun signal." msgid "" "How the server should receive the metadata required for software " "configuration. POLL_SERVER_CFN will allow calls to the cfn API action " "DescribeStackResource authenticated with the provided keypair. " "POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the " "provided keystone credentials. POLL_TEMP_URL will create and populate a " "Swift TempURL with metadata for polling. ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Indique comment le serveur doit recevoir les métadonnées requises pour la " "configuration logicielle. 
POLL_SERVER_CFN va autoriser les appels à l'action " "cfn API DescribeStackResource authentifiée avec la paire de clés fournie. " "POLL_SERVER_HEAT va autoriser les appels à l'API Heat resource-show à l'aide " "des données d'identification Keystone fournies. POLL_TEMP_URL va créer et " "remplir une TempURL Swift avec les métadonnées de l'interrogation. " "ZAQAR_MESSAGE va créer une file d'attente zaqar dédiée et publier les " "métadonnées pour l'interrogation." msgid "How the server should signal to heat with the deployment output values." msgstr "" "Indique comment le serveur doit envoyer un signal à Heat avec les valeurs de " "sortie de déploiement." msgid "" "How the server should signal to heat with the deployment output values. " "CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL. " "TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT. " "HEAT_SIGNAL will allow calls to the Heat API resource-signal using the " "provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar " "queue to be signaled using the provided keystone credentials. NO_SIGNAL will " "result in the resource going to the COMPLETE state without waiting for any " "signal." msgstr "" "Indique comment le serveur doit envoyer un signal à Heat avec les valeurs de " "sortie de déploiement. CFN_SIGNAL va autoriser un HTTP POST vers une URL " "signée par une paire de clés CFN. TEMP_URL_SIGNAL va créer une TempURL Swift " "pour l'envoi d'un signal via HTTP PUT. HEAT_SIGNAL va autoriser les appels " "vers le signal de ressource d'API Heat à l'aide des informations " "d'identification Keystone fournies. ZAQAR_SIGNAL va créer une file d'attente " "zaqar dédiée pour l'envoi d'un signal à l'aide des informations " "d'identification Keystone fournies. NO_SIGNAL fera passer la " "ressource à l'état COMPLETE sans attendre aucun signal." msgid "" "How the user_data should be formatted for the server. For HEAT_CFNTOOLS, the " "user_data is bundled as part of the heat-cfntools cloud-init boot " "configuration data. For RAW the user_data is passed to Nova unmodified. For " "SOFTWARE_CONFIG user_data is bundled as part of the software config data, " "and metadata is derived from any associated SoftwareDeployment resources." msgstr "" "Indique comment les user_data doivent être formatées pour le serveur. Pour " "HEAT_CFNTOOLS, les user_data sont regroupées en tant qu'élément des données " "de configuration d'amorçage heat-cfntools cloud-init. Pour RAW, les " "user_data sont transmises à Nova non modifiées. Pour SOFTWARE_CONFIG, les " "user_data font partie des données de configuration logicielle et les " "métadonnées sont dérivées des ressources SoftwareDeployment associées." msgid "Human readable name for the secret." msgstr "Nom lisible du secret." msgid "Human-readable name for the container." msgstr "Nom lisible du conteneur." msgid "" "ID list of the L3 agent. User can specify multi-agents for highly available " "router. NOTE: The default policy setting in Neutron restricts usage of this " "property to administrative users only." msgstr "" "Liste d'ID de l'agent L3. L'utilisateur peut indiquer plusieurs agents pour " "le routeur à haute disponibilité. REMARQUE : le paramètre de règle par " "défaut dans Neutron limite l'utilisation de cette propriété aux " "administrateurs uniquement." msgid "ID of an existing port to associate with this server." msgstr "ID d'un port existant à associer à ce serveur." 
msgid "" "ID of an existing port with at least one IP address to associate with this " "floating IP." msgstr "" "ID d'un port existant avec au moins une adresse IP à associer à cette IP " "flottante." msgid "ID of network to create a port on." msgstr "ID du réseau sur lequel créer un port." msgid "ID of project for API authentication" msgstr "Identifiant du projet pour API authentification " msgid "ID of queue to use for signaling output values" msgstr "ID de la file d'attente à utiliser pour signaler des valeurs de sortie" msgid "" "ID of resource to apply configuration to. Normally this should be a Nova " "server ID." msgstr "" "ID de la resource à laquelle appliquer la configuration. Il s'agit " "normalement d'un ID serveur Nova." msgid "" "ID of server (VM, etc...) on host that is used for exporting network file-" "system." msgstr "" "ID du serveur (VM, etc.) sur l'hôte utilisé pour l'exportation du système de " "fichiers réseau." msgid "ID of signal to use for signaling output values" msgstr "ID du signal à utiliser pour signaler les valeurs de sortie" msgid "" "ID of software configuration resource to execute when applying to the server." msgstr "" "ID de la ressource de configuration logicielle à exécuter lors de " "l'application au serveur." msgid "ID of the Cluster Template used for Node Groups and configurations." msgstr "" "ID du modèle de cluster utilisé pour les groupes de noeuds et les " "configurations." msgid "ID of the InternetGateway." msgstr "ID de la passerelleInternet" msgid "" "ID of the L3 agent. NOTE: The default policy setting in Neutron restricts " "usage of this property to administrative users only." msgstr "" "ID de l'agent L3. REMARQUE : le paramètre de règle par défaut dans Neutron " "limite l'utilisation de cette propriété aux administrateurs uniquement." msgid "ID of the Node Group Template." msgstr "ID du modèle de groupe de noeuds." msgid "ID of the VPNGateway to attach to the VPC." msgstr "ID du VPNGateway à lier au VPC." msgid "ID of the default image to use for the template." msgstr "ID de l'image par défaut à utiliser pour le modèle." msgid "ID of the default pool this listener is associated to." msgstr "ID du pool par défaut auquel ce programme d'écoute est associé." msgid "ID of the floating IP to assign to the server." msgstr "ID de l'IP flottante à affecter au serveur." msgid "ID of the floating IP to associate." msgstr "ID de l'IP flottante à associer." msgid "ID of the health monitor associated with this pool." msgstr "ID du moniteur d'état associé à ce pool." msgid "ID of the image to use for the template." msgstr "ID de l'image à utiliser pour le modèle." msgid "ID of the load balancer this listener is associated to." msgstr "" "ID de l'équilibreur de charge auquel est associé ce programme d'écoute." msgid "ID of the network in which this IP is allocated." msgstr "Identificateur du réseau dans lequel cette adresse IP est allouée." msgid "ID of the port associated with this IP." msgstr "ID du port associé à cette IP." msgid "ID of the queue." msgstr "ID de la file d'attente." msgid "ID of the router used as gateway, set when associated with a port." msgstr "" "ID du routeur utilisé comme passerelle, défini lors de l'association à un " "port." msgid "ID of the router." msgstr "Identifiant du routeur." 
msgid "ID of the server being deployed to" msgstr "ID du serveur vers lequel le déploiement s'effectue" msgid "ID of the stack this deployment belongs to" msgstr "ID de la pile à laquelle ce déploiement appartient" msgid "ID of the tenant to which the RBAC policy will be enforced." msgstr "ID du locataire pour lequel la stratégie RBAC va être appliquée." msgid "ID of the tenant who owns the health monitor." msgstr "ID du locataire qui détient le moniteur d'état." msgid "ID or name of the QoS policy." msgstr "ID ou nom de la stratégie de qualité de service." msgid "ID or name of the RBAC object." msgstr "ID ou nom de l'objet RBAC." msgid "ID or name of the external network for the gateway." msgstr "ID ou nom du réseau externe pour la passerelle." msgid "ID or name of the image to register." msgstr "ID ou nom de l'image d'enregistrement." msgid "ID or name of the load balancer with which listener is associated." msgstr "ID ou nom de l'équilibreur de charge associé au programme d'écoute." msgid "ID or name of the load balancing pool." msgstr "ID ou nom du pool d'équilibrage de charge." msgid "" "ID that AWS assigns to represent the allocation of the address for use with " "Amazon VPC. Returned only for VPC elastic IP addresses." msgstr "" "ID qu'AWS affecte pour représenter l'allocation de l'adresse à utiliser avec " "Amazon VPC. Renvoyé uniquement pour les adresses IP élastiques de VPC." msgid "IP address and port of the pool." msgstr "Adresse IP et port du pool." msgid "IP address desired in the subnet for this port." msgstr "L'adresse IP désirée dans le sous-réseau pour ce port." msgid "IP address for the VIP." msgstr "Adresse IP du VIP." msgid "IP address of the associated port, if specified." msgstr "Adresse IP du port associé, si indiqué." msgid "" "IP address of the floating IP. NOTE: The default policy setting in Neutron " "restricts usage of this property to administrative users only." msgstr "" "Adresse IP de l'IP flottante. REMARQUE : Le paramètre de stratégie par " "défaut dans Neutron restreint l'utilisation de cette propriété aux seuls " "administrateurs." msgid "IP address of the pool member on the pool network." msgstr "Adresse IP du membre du pool sur le réseau du pool." msgid "IP address of the pool member." msgstr "Adresse IP du membre de pool." msgid "IP address of the vip." msgstr "Adresse IP de vip." msgid "IP address to allow through this port." msgstr "Adresse IP à autoriser via ce port." msgid "IP address to use if the port has multiple addresses." msgstr "Adresse IP à utiliser si le port a plusieurs adresses." msgid "" "IP or other address information about guest that allowed to access to Share." msgstr "" "IP ou autre information d'adresse concernant l'invité qui est autorisé à " "accéder au partage." msgid "IPv6 RA (Router Advertisement) mode." msgstr "Mode IPv6 RA (Router Advertisement)." msgid "IPv6 address mode." msgstr "Mode d'adressage IPv6." msgid "Id of a resource." msgstr "ID de la ressource." msgid "Id of the manila share." msgstr "ID du partage Manila." msgid "Id of the tenant owning the firewall policy." msgstr "" "Identificateur du locataire possédant les règles d'administration de pare-" "feu." msgid "Id of the tenant owning the firewall." msgstr "Identificateur du locataire possédant le pare-feu." msgid "Identifier of the source instance to replicate." msgstr "Identificateur de l'instance de source à répliquer." 
#, python-format msgid "" "If \"%(size)s\" is provided, only one of \"%(image)s\", \"%(image_ref)s\", " "\"%(source_vol)s\", \"%(snapshot_id)s\" can be specified, but currently " "specified options: %(exclusive_options)s." msgstr "" "Si \"%(size)s\" est indiqué, une seule des options \"%(image)s\", " "\"%(image_ref)s\", \"%(source_vol)s\", \"%(snapshot_id)s\" peut être " "spécifiée. Options actuellement spécifiées : %(exclusive_options)s." msgid "If False, closes the client socket connection explicitly." msgstr "" "Si la valeur est False, ferme explicitement la connexion au socket client." msgid "" "If True, delete any objects in the container when the container is deleted. " "Otherwise, deleting a non-empty container will result in an error." msgstr "" "Si True, supprimez d'abord tous les objets du conteneur à supprimer. Sinon, " "la suppression d'un conteneur non vide peut provoquer une erreur." msgid "If True, enable config drive on the server." msgstr "Si True, activez l'unité de configuration sur le serveur." msgid "" "If configured, it allows to run action or workflow associated with a task " "multiple times on a provided list of items." msgstr "" "Si configuré, permet d'exécuter plusieurs fois l'action ou le flux de " "travail associé à une tâche sur une liste d'éléments fournie." msgid "If set, then the server's certificate will not be verified." msgstr "S'il est défini, alors le certificat du serveur ne sera pas vérifié." msgid "If specified, the backup to create the volume from." msgstr "Si indiqué, la sauvegarde à partir de laquelle créer le volume." msgid "If specified, the backup used as the source to create the volume." msgstr "Si indiquée, la sauvegarde utilisée comme source pour créer le volume." msgid "If specified, the name or ID of the image to create the volume from." msgstr "" "Si indiqué, le nom ou l'ID de l'image à partir de laquelle créer le volume." msgid "If specified, the snapshot to create the volume from." msgstr "Si indiqué, l'image instantanée à partir de laquelle créer le volume." msgid "If specified, the type of volume to use, mapping to a specific backend." msgstr "" "Si indiqué, le type de volume à utiliser, mappé à un backend spécifique." msgid "If specified, the volume to use as source." msgstr "Si indiqué, le volume à utiliser comme source." msgid "" "If the region is hierarchically a child of another region, set this " "parameter to the ID of the parent region." msgstr "" "Si, d'un point de vue hiérarchique, la région est un enfant d'une autre " "région, définissez ce paramètre sur l'ID de la région parent." msgid "" "If true, the resources in the chain will be created concurrently. If false " "or omitted, each resource will be treated as having a dependency on the " "previous resource in the list." msgstr "" "Si la valeur est true, les ressources de la chaîne seront créées " "simultanément. Si la valeur est false ou omitted, chaque ressource sera " "traitée comme étant dépendante de la ressource qui la précède dans la liste." msgid "If without InstanceId, ImageId and InstanceType are required." msgstr "Si InstanceId est absent, ImageId et InstanceType sont obligatoires." #, python-format msgid "Illegal prefix bounds: %(key1)s=%(value1)s, %(key2)s=%(value2)s." msgstr "" "Limite de préfixe non conforme : %(key1)s=%(value1)s, %(key2)s=%(value2)s." #, python-format msgid "" "Image %(image)s requires %(imram)s minimum ram. Flavor %(flavor)s has only " "%(flram)s." msgstr "" "L'image %(image)s requiert au minimum %(imram)s de RAM. 
La version " "%(flavor)s n'a que %(flram)s." #, python-format msgid "" "Image %(image)s requires %(imsz)s GB minimum disk space. Flavor %(flavor)s " "has only %(flsz)s GB." msgstr "" "L'image %(image)s requiert au minimum un espace disque de %(imsz)s Go. La " "version %(flavor)s n'a que %(flsz)s Go." #, python-format msgid "Image status is required to be %(cstatus)s not %(wstatus)s." msgstr "Le statut de l'image doit être %(cstatus)s et non %(wstatus)s." msgid "Incompatible parameters were used together" msgstr "Des paramètres incompatibles ont été utilisés ensemble" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be one of: %(allowed)s" msgstr "" "Arguments incorrects pour \"%(fn_name)s\", devraient être l'un des arguments " "suivants : %(allowed)s" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be: %(example)s" msgstr "" "Arguments incorrects pour \"%(fn_name)s\", devraient être : %(example)s" msgid "Incorrect arguments: Items to merge must be maps." msgstr "" "Arguments incorrects : les éléments à fusionner doivent être des mappes." #, python-format msgid "" "Incorrect index to \"%(fn_name)s\" should be between 0 and %(max_index)s" msgstr "" "Index incorrect pour \"%(fn_name)s\" : doit être comprsi entre 0 et " "%(max_index)s" #, python-format msgid "Incorrect index to \"%(fn_name)s\" should be: %(example)s" msgstr "Index incorrect pour \"%(fn_name)s\", doit être %(example)s" #, python-format msgid "Index to \"%s\" must be a string" msgstr "Index de %s doit etre un string" #, python-format msgid "Index to \"%s\" must be an integer" msgstr "Index de %s doit etre un entier" msgid "" "Indicate whether the volume should be deleted when the instance is " "terminated." msgstr "" "Indiquez si le volume doit être supprimé quand l'instance est interrompu." msgid "" "Indicate whether the volume should be deleted when the server is terminated." msgstr "" "Indiquez si le volume doit être supprimé quand le serveur est interrompu." msgid "Indicates remote IP prefix to be associated with this metering rule." msgstr "Indique le préfixe d'IP distante à associer à cette règle de mesure." msgid "" "Indicates whether or not to create a distributed router. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only. This property can not be used in conjunction with the L3 agent " "ID." msgstr "" "Indique si la création d'un routeur réparti est nécessaire. REMARQUE : Dans " "Neutron, le paramètre de règle par défaut limite l'utilisation de cette " "propriété aux administrateurs seulement. Cette propriété ne peut pas être " "utilisée avec l'ID agent de niveau 3." msgid "" "Indicates whether or not to create a highly available router. NOTE: The " "default policy setting in Neutron restricts usage of this property to " "administrative users only. And now neutron do not support distributed and ha " "at the same time." msgstr "" "Indique si la création d'un routeur à haute disponibilité est nécessaire. " "REMARQUE : Dans Neutron, le paramètre de règle par défaut limite " "l'utilisation de cette propriété aux administrateurs uniquement. Désormais, " "neutron ne prend pas en charge les routeurs répartis et à haute " "disponibilité simultanément." msgid "Indicates whether this firewall rule is enabled or not." msgstr "Indique si cette règle de pare-feu est activée ou pas." msgid "Information used to configure the bucket as a static website." 
msgstr "" "Informations utilisées pour configurer le compartiment en tant que site Web " "statique." msgid "Initiator state in lowercase for the ipsec site connection." msgstr "Etat d'initiateur en minuscules pour la connexion de site IPSec." #, python-format msgid "Input in signal data must be a map, find a %s" msgstr "L'entrée dans les données de signal doit être une mappe, recherchez %s" msgid "Input values for the workflow." msgstr "Valeurs en entrée pour le flux de travail." msgid "Input values to apply to the software configuration on this server." msgstr "" "Valeurs en entrée à appliquer à la configuration logicielle sur ce serveur." msgid "Instance ID to associate with EIP specified by EIP property." msgstr "ID instance à associer à l'EIP indiqué par la propriété EIP." msgid "Instance ID to associate with EIP." msgstr "ID instance à associer à EIP." msgid "Instance connection to CFN/CW API validate certs if SSL is used." msgstr "" "La connexion d'instance à l'API CFN/CW valide des certificats si SSL est " "utilisé." msgid "Instance connection to CFN/CW API via https." msgstr "Connexion d'instance à l'API CFN/CW via HTTPS." #, python-format msgid "Instance is not ACTIVE (was: %s)" msgstr "L'instance n'est pas active (état antérieur : %s)" #, python-format msgid "" "Instance metadata must not contain greater than %s entries. This is the " "maximum number allowed by your service provider" msgstr "" "Les métadonnées d'instance ne doivent pas contenir plus de %s entrées. Il " "s'agit du nombre maximal autorisé par votre fournisseur de services" msgid "Interface type of keystone service endpoint." msgstr "Type d'interface du noeud final de service Keystone." msgid "Internet protocol version." msgstr "Version IP." #, python-format msgid "Invalid %s, expected a mapping" msgstr "%s non valide, mappage attendu" #, python-format msgid "Invalid CRON expression: %s" msgstr "Expression CRON non valide : %s" #, python-format msgid "Invalid Parameter type \"%s\"" msgstr "Type de paramètre invalide \"%s\"" #, python-format msgid "Invalid Property %s" msgstr "Propriété invalide %s" msgid "Invalid Stack address" msgstr "Adresses Stack invalide" msgid "Invalid Template URL" msgstr "Modèle d'URL non valide" #, python-format msgid "Invalid URL scheme %s" msgstr "URL du schéma invalide %s" #, python-format msgid "Invalid UUID version (%d)" msgstr "Version (%d) UUID invalide" #, python-format msgid "Invalid action %s" msgstr "Action non valide %s" #, python-format msgid "Invalid action %s specified" msgstr "Action spécifié non valide %s" #, python-format msgid "Invalid adopt data: %s" msgstr "Données d'adoption non valides : %s" #, python-format msgid "Invalid cloud_backend setting in heat.conf detected - %s" msgstr "Paramètre cloud_backend non valide détecté dans heat.conf - %s" #, python-format msgid "Invalid codes in ignore_errors : %s" msgstr "Codes non valides dans ignore_errors : %s" #, python-format msgid "Invalid content type %(content_type)s" msgstr "Type de contenu non valide %(content_type)s" #, python-format msgid "Invalid default %(default)s (%(exc)s)" msgstr "Valeur par défaut non valide %(default)s (%(exc)s)" #, python-format msgid "Invalid deletion policy \"%s\"" msgstr "Règle de suppression non valide \"%s\"" #, python-format msgid "Invalid filter parameters %s" msgstr "Paramètres de filtre %s non valides" #, python-format msgid "Invalid hook type \"%(hook)s\" for %(resource)s" msgstr "Type de point d'ancrage \"%(hook)s\" non valide pour %(resource)s" #, python-format msgid "" "Invalid 
hook type \"%(value)s\" for resource breakpoint, acceptable hook " "types are: %(types)s" msgstr "" "Type de point d'ancrage \"%(value)s\" non valide pour le point d'arrêt de " "ressource ; les types acceptables sont : %(types)s" #, python-format msgid "Invalid key %s" msgstr "Clé invalide %s" #, python-format msgid "Invalid key '%(key)s' for %(entity)s" msgstr "Clé invalide '%(key)s' pour %(entity)s" #, python-format msgid "Invalid keys in resource mark unhealthy %s" msgstr "" "Clés non valides dans la ressource pour la demande 'resource-mark-unhealthy' " "%s" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "Combinaison non valide de formats de disque et de conteneur. Si vous " "définissez un disque ou un conteneur au format 'aki', 'ari' ou 'ami', les " "formats du disque et du conteneur doivent correspondre." #, python-format msgid "Invalid parameter constraints for parameter %s, expected a list" msgstr "" "Contraintes de paramètre non valides pour le paramètre %s, liste attendue" #, python-format msgid "" "Invalid restricted_action type \"%(value)s\" for resource, acceptable " "restricted_action types are: %(types)s" msgstr "" "Type restricted_action \"%(value)s\" non valide pour la ressource ; les " "types acceptables sont : %(types)s" #, python-format msgid "" "Invalid stack name %s must contain only alphanumeric or \"_-.\" characters, " "must start with alpha and must be 255 characters or less." msgstr "" "Nom de pile %s non valide ; doit comporter uniquement des caractères " "alphanumériques ou \"_-.\" ; doit commencer par un caractère alphanumérique " "et ne pas dépasser 255 caractères." #, python-format msgid "Invalid stack name %s, must be a string" msgstr "Nom de pile %s non valide, doit être une chaîne" #, python-format msgid "Invalid status %s" msgstr "Status non valide %s" #, python-format msgid "Invalid support status and should be one of %s" msgstr "Statut de support non valide. Le statut doit être l'un des %s" #, python-format msgid "Invalid tag, \"%s\" contains a comma" msgstr "Balise non valide : \"%s\" contient une virgule" #, python-format msgid "Invalid tag, \"%s\" is longer than 80 characters" msgstr "Balise non valide, \"%s\" dépasse 80 caractères" #, python-format msgid "Invalid tag, \"%s\" is not a string" msgstr "Balise non valide : \"%s\" n'est pas une chaîne" #, python-format msgid "Invalid tags, not a list: %s" msgstr "Balises non valides, pas en liste: %s" #, python-format msgid "Invalid template type \"%(value)s\", valid types are: cfn, hot." msgstr "" "Type de modèle non valide \"%(value)s\" ; les types valides sont cfn et hot." #, python-format msgid "Invalid timeout value %s" msgstr "Valeur de délai d'attente non valide %s" #, python-format msgid "Invalid timezone: %s" msgstr "Fuseau horaire non valide : %s" #, python-format msgid "Invalid type (%s)" msgstr "Type invalide (%s)" msgid "Ip allocation pools and their ranges." msgstr "Pools d'allocation d'IP et leurs plages." msgid "Ip of the subnet's gateway." msgstr "IP de la passerelle du sous-réseau." msgid "Ip version for the subnet." msgstr "Version IP pour le sous-réseau." msgid "Ip_version for this firewall rule." msgstr "Ip_version pour la règle du pare-feu." msgid "It defines an executor to which task action should be sent to." msgstr "Définit un exécuteur auquel l'action de tâche doit être envoyée." 
#, python-format msgid "Items to join must be string, map or list not %s" msgstr "" "Les éléments à joindre doivent être des chaînes, une mappe ou une liste et " "non des %s" #, python-format msgid "Items to join must be string, map or list. %s failed json serialization" msgstr "" "Les éléments à joindre doivent être de type chaîne, mappe ou liste. Echec de " "sérialisation JSON de %s" #, python-format msgid "Items to join must be strings not %s" msgstr "Les éléments à joindre doivent être des chaînes et non des %s" #, python-format msgid "" "JSON body size (%(len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "La taille du corps JSON (%(len)s octets) dépasse la taille maximale " "autorisée (%(limit)s octets)." msgid "JSON data that was uploaded via the SwiftSignalHandle." msgstr "" "Données JSON qui ont été téléchargées par l'intermédiaire du " "SwiftSignalHandle." msgid "" "JSON serialized map that includes the endpoint, token and/or other " "attributes the client must use for signalling this handle. The contents of " "this map depend on the type of signal selected in the signal_transport " "property." msgstr "" "Mappe sérialisée JSON qui inclut le nœud final, le jeton et/ou les autres " "attributs que le client doit utiliser pour signaler ce descripteur. Le " "contenu de cette mappe dépend du type de signal sélectionné dans la " "propriété signal_transport." msgid "" "JSON string containing data associated with wait condition signals sent to " "the handle." msgstr "" "Chaîne JSON contenant les données associées aux signaux de condition " "d'attente envoyées au descripteur." msgid "" "Key used to encrypt authentication info in the database. Length of this key " "must be 32 characters." msgstr "" "Clé utilisée pour le chiffrement des informations d'authentification dans la " "base de données. La longueur de cette clé doit être de 32 caractères." msgid "Key/Value pairs to extend the capabilities of the flavor." msgstr "Paires clé/valeur permettant d'étendre les capacités de la version." msgid "Key/value pairs associated with the volume in raw dict form." msgstr "" "Paires clé/valeur associées au volume dans la forme de dictionnaire brut." msgid "Key/value pairs associated with the volume." msgstr "Paires clé/valeur associées au volume." msgid "Key/value pairs to associate with the volume." msgstr "Paires clé/valeur à associer au volume." msgid "Keypair added to instances to make them accessible for user." msgstr "" "Paire de clés ajoutée aux instances afin de les rendre accessibles à " "l'utilisateur." msgid "Keypair secret key." msgstr "Clé confidentielle de Keypair." msgid "" "Keystone domain ID which contains heat template-defined users. If this " "option is set, stack_user_domain_name option will be ignored." msgstr "" "L'identificateur de domaine Keystone contient des utilisateurs définis par " "modèle Heat. Si cette option est définie, stack_user_domain_name est ignoré." msgid "" "Keystone domain name which contains heat template-defined users. If " "`stack_user_domain_id` option is set, this option is ignored." msgstr "" "Le nom de domaine Keystone contient des utilisateurs définis par modèle " "Heat. Si `stack_user_domain_id` est défini, cette option est ignorée." msgid "Keystone domain." msgstr "Domaine Keystone." #, python-format msgid "" "Keystone has more than one service with same name %(service)s. Please use " "service id instead of name" msgstr "" "Keystone comporte plusieurs services avec le même nom %(service)s. 
Utilisez " "un ID service à la place du nom." msgid "Keystone password for stack_domain_admin user." msgstr "Mot de passe Keystone de l'utilisateur stack_domain_admin." msgid "Keystone project." msgstr "Projet Keystone." msgid "Keystone role for heat template-defined users." msgstr "Rôle Keystone des utilisateurs définis par modèle Heat." msgid "Keystone role." msgstr "Rôle Keystone." msgid "Keystone user group." msgstr "Groupe d'utilisateurs Keystone." msgid "Keystone user groups." msgstr "Groupes d'utilisateurs Keystone." msgid "Keystone user is enabled or disabled." msgstr "L'utilisateur Keystone est activé ou désactivé." msgid "" "Keystone username, a user with roles sufficient to manage users and projects " "in the stack_user_domain." msgstr "" "Nom d'utilisateur Keystone, doté de rôles suffisants pour gérer des " "utilisateurs et des projets dans stack_user_domain." msgid "L2 segmentation strategy on the external side of the network gateway." msgstr "Stratégie de segmentation L2 du côté externe de la passerelle réseau." msgid "LBaaS provider to implement this load balancer instance." msgstr "" "Fournisseur LBaaS pour l'implémentation de cette instance de l'équilibreur " "de charge." msgid "Length of OS_PASSWORD after encryption exceeds Heat limit (255 chars)" msgstr "" "La longueur d'OS_PASSWORD après le chiffrement dépasse la limite Heat (255 " "caractères)" msgid "Length of the string to generate." msgstr "Longueur de la chaîne à générer." msgid "" "Length property cannot be smaller than combined character class and " "character sequence minimums" msgstr "" "La propriété de longueur ne peut pas être inférieure à la classe de " "caractères combinée et aux minimas de séquence de caractères" msgid "Level of access that need to be provided for guest." msgstr "Niveau d'accès devant être fourni pour l'invité." msgid "" "Lifecycle actions to which the configuration applies. The string values " "provided for this property can include the standard resource actions CREATE, " "DELETE, UPDATE, SUSPEND and RESUME supported by Heat." msgstr "" "Actions de cycle de vie auxquelles la configuration s'applique. Les valeurs " "de chaîne fournies pour cette propriété peuvent inclure les actions de " "ressource standard CREATE, DELETE, UPDATE, SUSPEND et RESUME prises en " "charge par Heat." msgid "List of LoadBalancer resources." msgstr "Liste des ressources LoadBalancer." msgid "List of Security Groups assigned on current LB." msgstr "" "Liste de groupes de sécurité affectée sur l'équilibreur de charge en cours." msgid "List of TLS container references for SNI." msgstr "Liste des références de conteneur TLS pour SNI." msgid "List of database instances." msgstr "Liste des instances de base de données." msgid "List of databases to be created on DB instance creation." msgstr "" "Liste de bases de données à créer lors de la création de l'instance de base " "de données." msgid "List of directories to search for plug-ins." msgstr "Liste des répertoires à utiliser pour la recherche de plug-in." msgid "List of dns nameservers." msgstr "Liste de serveurs de noms DNS." msgid "List of firewall rules in this firewall policy." msgstr "" "Liste de règles de pare-feu dans ces règles d'administration de pare-feu." msgid "List of health monitors associated with the pool." msgstr "Liste de moniteurs d'état associés au pool." msgid "List of hosts to join aggregate." msgstr "Liste des hôtes qui doivent rejoindre l'agrégat." msgid "List of manila shares to be mounted." msgstr "Liste des partages Manila à monter." 
msgid "List of network interfaces to create on instance." msgstr "Liste d'interfaces réseau à créer sur l'instance." msgid "List of processes to enable anti-affinity for." msgstr "Liste de processus pour lesquels l'anti-affinité doit être activée." msgid "List of processes to run on every node." msgstr "Liste de processus à exécuter sur chaque noeud." msgid "List of role assignments." msgstr "Liste des affectations de rôle." msgid "List of security group IDs associated with this interface." msgstr "Liste d'ID de groupe de sécurité associée à cette interface." msgid "List of security group egress rules." msgstr "Liste de règles de sortie de groupe de sécurité." msgid "List of security group ingress rules." msgstr "Liste de règles d'entrée de groupe de sécurité." msgid "" "List of security group names or IDs to assign to this Node Group template." msgstr "" "Liste de noms ou d'identificateurs de groupe de sécurité à affecter à ce " "modèle de groupe de noeuds." msgid "" "List of security group names or IDs. Cannot be used if neutron ports are " "associated with this server; assign security groups to the ports instead." msgstr "" "Liste de noms ou d'ID de groupe de sécurité. Impossible de l'utiliser si les " "ports neutron sont associés à ce serveur ; préférez l'affectation des " "groupes de sécurité aux ports." msgid "List of security group rules." msgstr "Liste de règles de groupe de sécurité." msgid "List of subnet prefixes to assign." msgstr "Liste des préfixes de sous-réseau à affecter." msgid "List of tags associated with this interface." msgstr "Liste des balises associées à cette interface." msgid "List of tags to attach to the instance." msgstr "Liste de balises à lier à l'instance." msgid "List of tags to attach to this resource." msgstr "Liste de balises à lier à cette ressource." msgid "List of tags to be attached to this resource." msgstr "Liste de balises à lier à cette ressource." msgid "" "List of tasks which should be executed before this task. Used only in " "reverse workflows." msgstr "" "Liste des tâches à exécuter avant cette tâche. Utilisée uniquement dans les " "flux de travail inverse." msgid "" "List of tasks which will run after the task has completed regardless of " "whether it is successful or not." msgstr "" "Liste de tâches qui s'exécuteront après que la tâche s'est terminée avec ou " "sans erreur." msgid "List of tasks which will run after the task has completed successfully." msgstr "Liste de tâches qui s'exécuteront après aboutissement de la tâche." msgid "" "List of tasks which will run after the task has completed with an error." msgstr "" "Liste de tâches qui s'exécuteront après que la tâche s'est terminée avec une " "erreur." msgid "List of users to be created on DB instance creation." msgstr "" "Liste d'utilisateurs à créer lors de la création de l'instance de base de " "données." msgid "" "List of workflows' executions, each of them is a dictionary with information " "about execution. Each dictionary returns values for next keys: id, " "workflow_name, created_at, updated_at, state for current execution state, " "input, output." msgstr "" "Liste des exécutions de flux de travail, chacune d'elles étant un " "dictionnaire qui comporte des informations sur l'exécution. Chaque " "dictionnaire retourne des valeurs pour les clés suivantes : id, " "workflow_name, created_at, updated_at, state pour l'état en cours, les " "entrées et les sorties." msgid "Listener associated with this pool." msgstr "Programme d'écoute associé à ce pool." 
msgid "" "Local path on each cluster node on which to mount the share. Defaults to '/" "mnt/{share_id}'." msgstr "" "Chemin local sur chaque nœud de cluster sur lequel monter le partage. Par " "défaut : '/mnt/{share_id}'." msgid "Location of the SSL certificate file to use for SSL mode." msgstr "Emplacement du fichier du certificat SSL à utiliser pour le mode SSL." msgid "Location of the SSL key file to use for enabling SSL mode." msgstr "" "Emplacement du fichier de clés SSL à utiliser pour l'activation du mode SSL" msgid "MAC address of the port." msgstr "Adresse MAC du port." msgid "MAC address to allow through this port." msgstr "Adresse MAC à autoriser via ce port." msgid "Map between role with either project or domain." msgstr "Mappe entre le rôle et le projet ou domaine." msgid "" "Map containing options specific to the configuration management tool used by " "this resource." msgstr "" "Mappe contenant des options propres à l'outil de gestion de configuration " "utilisé par cette ressource." msgid "" "Map representing the cloud-config data structure which will be formatted as " "YAML." msgstr "" "Mappe représentant la structure des données cloud-config qui seront " "formatées en tant que YAML." msgid "" "Map representing the configuration data structure which will be serialized " "to JSON format." msgstr "" "Mappe représentant la structure des données de configuration qui sera " "sérialisée au format JSON." msgid "Max bandwidth in kbps." msgstr "Bande passante maximale en kb/s" msgid "Max burst bandwidth in kbps." msgstr "Bande passante en rafale max en kb/s." msgid "Max size of the cluster." msgstr "Taille maximale du cluster." #, python-format msgid "Maximum %s is 1 hour." msgstr "La valeur maximale pour %s est 1 heure." msgid "Maximum depth allowed when using nested stacks." msgstr "Profondeur maximale admise lors de l'utilisation de piles imbriquées." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs)." msgstr "" "Taille maximale de ligne des en-têtes de message à accepter. max_header_line " "peut avoir besoin d'être augmenté lors de l'utilisation de grands jetons " "(généralement ceux qui sont générés par l'API Keystone v3 avec des " "catalogues de service volumineux)." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs.)" msgstr "" "Taille maximale de ligne des en-têtes de message à accepter. max_header_line " "peut avoir besoin d'être augmenté lors de l'utilisation de grands jetons " "(généralement ceux qui sont générés par l'API Keystone v3 avec des " "catalogues de service volumineux)." msgid "Maximum number of instances in the group." msgstr "Nombre maximal d'instances dans le groupe." msgid "Maximum number of resources in the cluster. -1 means unlimited." msgstr "Nombre maximum de ressources dans le cluster. -1 signifie illimité." msgid "Maximum number of resources in the group." msgstr "Nombre maximal de ressources dans le groupe." msgid "Maximum number of stacks any one tenant may have active at one time." msgstr "" "Nombre maximum de piles pouvant être actives en même temps pour n'importe " "quel locataire." msgid "Maximum prefix size that can be allocated from the subnet pool." 
msgstr "" "Taille de préfixe maximum qui peut être allouée depuis le pool de sous-" "réseau." msgid "" "Maximum raw byte size of JSON request body. Should be larger than " "max_template_size." msgstr "" "Taille maximale en octets bruts du corps de demande JSON. Doit être " "supérieure à la valeur de max_template_size." msgid "Maximum raw byte size of any template." msgstr "Taille maximale en octets bruts de tout modèle." msgid "Maximum resources allowed per top-level stack. -1 stands for unlimited." msgstr "" "Nombre maximal de ressources autorisées par pile de niveau supérieur. La " "valeur -1 correspond à \"illimité\"." msgid "Maximum resources per stack exceeded." msgstr "Ressource maximum par stack dépassé." msgid "" "Maximum transmission unit size (in bytes) for the ipsec site connection." msgstr "" "Taille d'unité de transmission maximale (en octets) pour la connexion de " "site IPSec." msgid "Member list items must be strings" msgstr "Les éléments de la liste de membres doivent être des chaînes" msgid "Member list must be a list" msgstr "La liste de membres doit être une liste" msgid "Members associated with this pool." msgstr "Membres associés à ce pool." msgid "Memory in MB for the flavor." msgstr "Mémoire en Mo pour la version." #, python-format msgid "Message: %(message)s, Code: %(code)s" msgstr "Message : %(message)s, code : %(code)s" msgid "Metadata format invalid" msgstr "Format des métadonnées non valide" msgid "Metadata key-values defined for cluster." msgstr "Métadonnées key-values définies pour le cluster." msgid "Metadata key-values defined for node." msgstr "Valeurs de clé de métadonnées définies pour le noeud." msgid "Metadata key-values defined for profile." msgstr "Valeurs de clé de métadonnées définies pour le profil." msgid "Metadata key-values defined for share." msgstr "Valeurs de clé de métadonnées définies pour le partage." msgid "Meter name watched by the alarm." msgstr "Nom du compteur surveillé par l'alarme." msgid "" "Meter should match this resource metadata (key=value) additionally to the " "meter_name." msgstr "" "Le compteur devrait en plus faire correspondre les métadonnées de cette " "ressource (key=value) à meter_name." msgid "Meter statistic to evaluate." msgstr "Statistique de mesure à évaluer." msgid "Method of implementation of session persistence feature." msgstr "Méthode d'implémentation de la fonction de persistance de session." msgid "Metric name watched by the alarm." msgstr "Nom d'indicateur surveillé par l'alarme." msgid "Min size of the cluster." msgstr "Taille minimale du cluster." msgid "MinSize can not be greater than MaxSize" msgstr "MinSize ne peut pas être supérieur à MaxSize" msgid "Minimum number of instances in the group." msgstr "Nombre minimal d'instances dans le groupe." msgid "Minimum number of resources in the cluster." msgstr "Nombre minimum de ressources dans le cluster." msgid "Minimum number of resources in the group." msgstr "Nombre minimal de ressources dans le groupe." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "PercentChangeInCapacity for the AdjustmentType property." msgstr "" "Nombre minimum de ressources ajoutées ou retirées lorsque le groupe " "AutoScaling s'agrandit ou se réduit. Peut être utilisé uniquement lorsque " "PercentChangeInCapacity est indiqué pour la propriété AdjustmentType." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. 
This can be used only when specifying " "percent_change_in_capacity for the adjustment_type property." msgstr "" "Nombre minimum de ressources qui sont ajoutées ou retirées lorsque le groupe " "AutoScaling s'agrandit ou se réduit. Peut être utilisé uniquement lorsque " "percent_change_in_capacity est indiqué pour la propriété adjustment_type." #, python-format msgid "Missing mandatory (%s) key from mark unhealthy request" msgstr "" "Clé obligatoire (%s) manquante dans la demande 'resource-mark-unhealthy'" #, python-format msgid "Missing parameter type for parameter: %s" msgstr "Type de paramètre manquant pour le paramètre : %s" #, python-format msgid "Missing required credential: %(required)s" msgstr "Données d'identification obligatoires manquantes : %(required)s" msgid "Mistral resource validation error" msgstr "Erreur de validation de ressource Mistral" msgid "Monasca notification." msgstr "Notification Monasca." msgid "Multiple actions specified" msgstr "Plusieurs actions spécifiées" #, python-format msgid "Multiple physical resources were found with name (%(name)s)." msgstr "" "Plusieurs ressources physiques ont été trouvées avec le nom (%(name)s)." #, python-format msgid "Multiple routers found with name %s" msgstr "Plusieurs routeurs trouvés avec le nom %s" msgid "Must specify 'InstanceId' if you specify 'EIP'." msgstr "Vous devez spécifier 'InstanceId' si vous spécifiez 'EIP'." msgid "Name for the Sahara Cluster Template." msgstr "Nom du modèle de cluster Sahara." msgid "Name for the Sahara Node Group Template." msgstr "Nom du modèle de groupe de noeuds Sahara." msgid "Name for the aggregate." msgstr "Nom de l'agrégat." msgid "Name for the availability zone." msgstr "Nom de la zone de disponibilité." msgid "" "Name for the container. If not specified, a unique name will be generated." msgstr "" "Nom pour le conteneur. S'il n'est pas spécifié, un nom unique sera généré." msgid "Name for the firewall policy." msgstr "Nom pour les règles d'administration de pare-feu." msgid "Name for the firewall rule." msgstr "Nom pour la règle du pare-feu." msgid "Name for the firewall." msgstr "Nom pour le pare-feu." msgid "Name for the ike policy." msgstr "Nom pour la stratégie IKE." msgid "" "Name for the image. The name of an image is not unique to a Image Service " "node." msgstr "" "Nom d'image. Le nom d'une image n'est pas unique pour un noeud Image Service." msgid "Name for the ipsec policy." msgstr "Nom pour la stratégie IPSec." msgid "Name for the ipsec site connection." msgstr "Nom pour la connexion de site IPSec." msgid "Name for the time constraint." msgstr "Nom de la contrainte de temps." msgid "Name for the vpn service." msgstr "Nom du service vpn." msgid "" "Name of attribute to compare. Names of the form metadata.user_metadata.X or " "metadata.metering.X are equivalent to what you can address through " "matching_metadata; the former for Nova meters, the latter for all others. To " "see the attributes of your Samples, use `ceilometer --debug sample-list`." msgstr "" "Nom de l'attribut à comparer. Les noms au format metadata.user_metadata.X ou " "metadata.metering.X correspondent aux noms adressables à l'aide du paramètre " "matching_metadata ; le premier format s'utilise pour les compteurs Nova, le " "second pour tous les autres compteurs. Pour voir les attributs de vos " "exemples, utilisez `ceilometer --debug sample-list`." msgid "Name of key to use for substituting inputs during deployment." 
msgstr "" "Nom de la clé à utiliser pour la substitution des entrées durant le " "déploiement." msgid "Name of keypair to inject into the server." msgstr "Nom de la paire de clés à injecter dans le serveur." msgid "Name of keystone endpoint." msgstr "Nom du noeud final Keystone." msgid "Name of keystone group." msgstr "Nom du groupe Keystone." msgid "Name of keystone project." msgstr "Nom du projet Keystone." msgid "Name of keystone role." msgstr "Nom du rôle Keystone." msgid "Name of keystone service." msgstr "Nom du service Keystone." msgid "Name of keystone user." msgstr "Nom de l'utilisateur Keystone." msgid "Name of registered datastore type." msgstr "Nom du type de magasin de données enregistré." msgid "Name of the DB instance to create." msgstr "Nom d'instance de base de données à créer." msgid "Name of the Node group." msgstr "Nom du groupe de noeuds." msgid "" "Name of the action associated with the task. Either action or workflow may " "be defined in the task." msgstr "" "Nom de l'action associée à la tâche. L'action ou le flux de travail peut " "être défini(e) dans la tâche." msgid "Name of the administrative user to use on the server." msgstr "Nom de l'administrateur à utiliser sur le serveur." msgid "Name of the alarm. By default, physical resource name is used." msgstr "Nom de l'alarme. Par défaut, le nom de ressource physique est utilisé." msgid "Name of the availability zone for DB instance." msgstr "Nom de la zone de disponibilité pour l'instance de base de données." msgid "Name of the availability zone for server placement." msgstr "Nom de la zone de disponibilité pour le placement de serveur." msgid "Name of the cluster to create." msgstr "Nom du cluster à créer." msgid "Name of the cluster. By default, physical resource name is used." msgstr "" "Nom du cluster. Par défaut, le nom de la ressource physique est utilisé." msgid "Name of the cookie, required if type is APP_COOKIE." msgstr "Nom du cookie, requis si le type est APP_COOKIE." msgid "Name of the cron trigger." msgstr "Nom du déclencheur CRON." msgid "Name of the current action being deployed" msgstr "Nom de l'action actuellement en cours de déploiement" msgid "Name of the data source." msgstr "Nom de la source de données." msgid "" "Name of the derived config associated with this deployment. This is used to " "apply a sort order to the list of configurations currently deployed to a " "server." msgstr "" "Nom de la configuration dérivée associée au déploiement. Permet d'appliquer " "un ordre de tri à la liste de configurations déployée sur un serveur." msgid "" "Name of the engine node. This can be an opaque identifier. It is not " "necessarily a hostname, FQDN, or IP address." msgstr "" "Nom de noeud du moteur. Par exemple, un identificateur opaque. Il ne s'agit " "pas nécessairement d'un nom d'hôte, d'un nom de domaine complet (FQDN) ou " "d'une adresse IP." msgid "Name of the input." msgstr "Nom de l'entrée" msgid "Name of the job binary." msgstr "Nom du binaire de travail." msgid "Name of the metering label." msgstr "Nom de l'étiquette de mesure." msgid "Name of the network owning the port." msgstr "Nom du réseau possédant le port." msgid "" "Name of the network owning the port. The value is typically network:" "floatingip or network:router_interface or network:dhcp." msgstr "" "Nom du réseau propriétaire du port. La valeur est généralement network:" "floatingip, network:router_interface ou network:dhcp." msgid "Name of the notification. By default, physical resource name is used." 
msgstr "" "Nom de la notification. Par défaut, le nom de ressource physique est utilisé." msgid "Name of the output." msgstr "Nom de sortie" msgid "Name of the pool." msgstr "Nom du pool." msgid "Name of the queue instance to create." msgstr "Nom de l'instance de file d'attente à créer." msgid "" "Name of the registered datastore version. It must exist for provided " "datastore type. Defaults to using single active version. If several active " "versions exist for provided datastore type, explicit value for this " "parameter must be specified." msgstr "" "Nom de la version enregistrée de magasin de données. Elle doit exister pour " "le type de magasin de données fourni. Par défaut, utilisation de la version " "active simple. Si plusieurs versions actives existent pour le type de " "magasin de données fourni, la valeur explicite pour ce paramètre doit être " "indiquée." msgid "Name of the secret." msgstr "Nom du secret." msgid "Name of the senlin node. By default, physical resource name is used." msgstr "" "Nom du noeud senlin. Par défaut, le nom de ressource physique est utilisé." msgid "Name of the senlin policy. By default, physical resource name is used." msgstr "" "Nom de la stratégie senlin. Par défaut, le nom de ressource physique est " "utilisé." msgid "Name of the senlin profile. By default, physical resource name is used." msgstr "" "Nom du profil senlin. Par défaut, le nom de ressource physique est utilisé." msgid "" "Name of the senlin receiver. By default, physical resource name is used." msgstr "" "Nom du récepteur senlin. Par défaut, le nom de la ressource physique est " "utilisé." msgid "Name of the server." msgstr "Nom du serveur." msgid "Name of the share network." msgstr "Nom du réseau de partage." msgid "Name of the share type." msgstr "Nom du type de partage." msgid "Name of the stack." msgstr "Nom de la pile." msgid "Name of the subnet pool." msgstr "Nom du pool de sous-réseaux." msgid "Name of the vip." msgstr "Nom de vip." msgid "Name of the volume type." msgstr "Nom du type de volume." msgid "Name of the volume." msgstr "Nom du volume." msgid "" "Name of the workflow associated with the task. Can be defined by intrinsic " "function get_resource or by name of the referenced workflow, i.e. " "{ workflow: wf_name } or { workflow: { get_resource: wf_name }}. Either " "action or workflow may be defined in the task." msgstr "" "Nom du flux de travail associé à la tâche. Peut être défini par une fonction " "intrinsèque get_resource ou par le nom du flux de travail référencé, par " "exemple { workflow: wf_name } ou { workflow: { get_resource: wf_name }}. Une " "action ou un flux de travail peut être défini dans la tâche." msgid "Name of this Load Balancer." msgstr "Nom de cet équilibreur de charge." msgid "Name of this deployment resource in the stack" msgstr "Nom de cette ressource de déploiement dans la pile" msgid "Name of this listener." msgstr "Nom de ce programme d'écoute." msgid "Name of this pool." msgstr "Nom de ce pool." msgid "Name or ID Nova flavor for the nodes." msgstr "Nom ou ID de la version Nova pour les noeuds." msgid "Name or ID of network to create a port on." msgstr "Nom ou ID du réseau sur lequel créer un port." msgid "Name or ID of senlin profile to create this node." msgstr "Nom ou ID du profil senlin pour créer ce noeud." msgid "" "Name or ID of shared file system snapshot that will be restored and created " "as a new share." 
msgstr "" "Nom pou ID de l'instantané du système de fichiers partagé qui sera restauré " "et créé en tant que nouveau partage." msgid "" "Name or ID of shared filesystem type. Types defines some share filesystem " "profiles that will be used for share creation." msgstr "" "Nom ou ID du type de système de fichiers partagé. Le type définit des " "profils de système de fichiers partagé qui seront utilisés pouar la création " "de partage." msgid "Name or ID of shared network defined for shared filesystem." msgstr "" "Nom ou ID du réseau partagé défini pour le système de fichiers partagé." msgid "Name or ID of target cluster." msgstr "Nom ou ID du cluster cible." msgid "Name or ID of the load balancing pool." msgstr "Nom ou ID du pool d'équilibrage de charge." msgid "Name or Id of keystone region." msgstr "Nom ou ID de la région Keystone." msgid "Name or Id of keystone service." msgstr "Nom ou ID du service Keystone." #, python-format msgid "" "Name or UUID of Neutron port to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Nom ou UUID du port Neutron auquel ce contrôleur NIC doit être connecté. " "Vous devez spécifier %(port)s ou %(net)s." msgid "Name or UUID of network." msgstr "Nom ou UUID du réseau." msgid "" "Name or UUID of the Neutron floating IP network or name of the Nova floating " "ip pool to use. Should not be provided when used with Nova-network that auto-" "assign floating IPs." msgstr "" "Nom (ou UUID) du réseau d'adresses IP flottantes Neutron ou nom du pool " "d'adresses IP flottantes Nova à utiliser. Ne doit pas être indiqué lorsque " "le réseau Novaaffecte automatiquement des adresses IP flottantes." msgid "Name or UUID of the image used to boot Hadoop nodes." msgstr "" "Nom ou identificateur unique universel de l'image utilisée pour amorcer les " "noeuds Hadoop." #, python-format msgid "" "Name or UUID of the network to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Nom ou UUID du réseau auquel ce contrôleur NIC doit être connecté. Vous " "devez spécifier %(port)s ou %(net)s." msgid "Name or id of keystone domain." msgstr "Nom ou ID du domaine Keystone." msgid "Name or id of keystone group." msgstr "Nom ou ID du groupe Keystone." msgid "Name or id of keystone user." msgstr "Nom ou ID de l'utilisateur Keystone." msgid "Name or id of volume type (OS::Cinder::VolumeType)." msgstr "Nom ou ID du type de volume (OS::Cinder::VolumeType)." msgid "Names of databases that those users can access on instance creation." msgstr "" "Noms des bases de données auxquelles ces utilisateurs peuvent accéder lors " "de la création d'instance." msgid "" "Namespace to group this software config by when delivered to a server. This " "may imply what configuration tool is going to perform the configuration." msgstr "" "Espace de nom en fonction duquel regrouper cette config de logiciel " "lorsqu'elle est distribuée à un serveur. Cette propriété peut impliquer " "l'outil de configuration qui va effectuer la configuration." msgid "Need more arguments" msgstr "Besoin de plus d'arguments" msgid "Negotiation mode for the ike policy." msgstr "Mode de négociation pour la stratégie IKE." #, python-format msgid "Neither image nor bootable volume is specified for instance %s" msgstr "" "Aucune image ni aucun volume amorçable n'est indiqué pour l'instance %s" msgid "Network in CIDR notation." msgstr "Réseau en notation CIDR." msgid "Network interface ID to associate with EIP." msgstr "ID d'interface réseau à associer à EIP." 
msgid "Network interfaces to associate with instance." msgstr "Interfaces réseau à associer à l'instance." #, python-format msgid "" "Network this port belongs to. If you plan to use current port to assign " "Floating IP, you should specify %(fixed_ips)s with %(subnet)s. Note if this " "changes to a different network update, the port will be replaced." msgstr "" "Réseau auquel appartient ce port. Si vous prévoyez d'utiliser le port en " "cours pour affecter l'IP flottante, vous devez spécifier %(fixed_ips)s avec " "%(subnet)s. Notez qu'en cas de passage à une autre mise à jour réseau, le " "port sera remplacé." msgid "Network to allocate floating IP from." msgstr "Réseau à partir duquel affecter l'adresse IP flottante." msgid "Neutron network id." msgstr "ID réseau Neutron." msgid "Neutron subnet id." msgstr "ID sous-réseau Neutron." msgid "Nexthop IP address." msgstr "Adresse IP Nexthop." #, python-format msgid "No %s specified" msgstr "Non %s spécifié" msgid "No Template provided." msgstr "Aucun modèle fourni." msgid "No action specified" msgstr "Pas d'action spécifiée" msgid "No constraint expressed" msgstr "Aucune contrainte exprimée" #, python-format msgid "" "No content found in the \"files\" section for %(fn_name)s path: %(file_key)s" msgstr "" "Aucun contenu trouvé dans la section \"files\" pour le chemin d'accès " "%(fn_name)s : %(file_key)s" #, python-format msgid "No event %s found" msgstr "Pas d'évènement trouvé %s" #, python-format msgid "No events found for resource %s" msgstr "Pas d'évènement trouvé pour la ressource %s" msgid "No resource data found" msgstr "Pas de ressource trouvée" #, python-format msgid "No stack exists with id \"%s\"" msgstr "Pas de stack existante avec l'identifiant \"%s\"" msgid "No stack name specified" msgstr "Nom de la pile pas spécifiée" msgid "No template specified" msgstr "Modèle non spécifié" msgid "No volume service available." msgstr "Aucun service de volume n'est disponible." msgid "Node groups." msgstr "Groupes de noeuds." msgid "Nodes list in the cluster." msgstr "Liste de nœuds dans le cluster." msgid "Non HA routers can only have one L3 agent." msgstr "" "Les routeurs qui ne sont pas à haute disponibilité ne peuvent posséder qu'un " "seul agent L3." #, python-format msgid "Non-empty resource type is required for resource \"%s\"" msgstr "Un type de ressource non vide est requis pour la ressource \"%s\"" msgid "Not Implemented." msgstr "Non implémenté" #, python-format msgid "Not allowed - %(dsver)s without %(dstype)s." msgstr "Non autorisé - %(dsver)s sans %(dstype)s." msgid "Not found" msgstr "Non trouvé" msgid "Not waiting for outputs signal" msgstr "N'attend pas le signal des sorties" msgid "" "Notional service where encryption is performed For example, front-end. For " "Nova." msgstr "" "Service théorique où le chiffrement est effectué. Par exemple, front-end " "(frontal). Pour Nova." msgid "Nova instance type (flavor)." msgstr "Type d'instance Nova (version)." msgid "Nova network id." msgstr "ID réseau Nova." msgid "Number of VCPUs for the flavor." msgstr "Nombre de VCPU pour la version." msgid "Number of backlog requests to configure the socket with." msgstr "Nombre de demandes en attente avec lequel configurer le socket." msgid "Number of instances in the Node group." msgstr "Nombre d'instances dans le groupe de noeuds." msgid "Number of minutes to wait for this stack creation." msgstr "Délai d'attente (en minutes) de cette création de pile." msgid "Number of periods to evaluate over." 
msgstr "Nombre de périodes sur lesquelles effectuer l'évaluation." msgid "" "Number of permissible connection failures before changing the member status " "to INACTIVE." msgstr "" "Nombre de pannes de connexion permises avant de faire passer l'état du " "membre à INACTIF." msgid "Number of remaining executions." msgstr "Nombre d'exécutions restantes." msgid "Number of seconds for the DPD delay." msgstr "Nombre de secondes pour le délai DPD." msgid "Number of seconds for the DPD timeout." msgstr "Nombre de secondes pour le délai d'attente DPD." msgid "" "Number of times to check whether an interface has been attached or detached." msgstr "" "Nombre de fois où vérifier si une interface a été connectée ou déconnectée." msgid "" "Number of times to retry to bring a resource to a non-error state. Set to 0 " "to disable retries." msgstr "" "Nombre de nouveaux essais pour rétablir une ressource à un état non erroné. " "Réglez sur 0 pour désactiver les nouveaux essais." msgid "" "Number of times to retry when a client encounters an expected intermittent " "error. Set to 0 to disable retries." msgstr "" "Nombre de nouvelles tentatives quand un client rencontre une erreur " "intermittente attendue. Définissez cette valeur sur 0 pour désactiver les " "nouveaux essais." msgid "Number of workers for Heat service." msgstr "Nombre de workers pour le service Heat." msgid "" "Number of workers for Heat service. Default value 0 means, that service will " "start number of workers equal number of cores on server." msgstr "" "Nombre d'agents pour le service Heat. La valeur par défaut 0 signifie que ce " "service démarrera un nombre d'agents équivalent au nombre de coeurs sur le " "serveur." msgid "Number value for delay during resolve constraint." msgstr "" "Valeur numérique correspondant au délai d'attente lors de la résolution de " "la contrainte." msgid "Number value for timeout during resolving output value." msgstr "" "Valeur numérique correspondant au délai d'attente lors de la résolution de " "la valeur de sortie." #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "L'action de l'objet %(action)s a échoué car : %(reason)s" msgid "" "On update, enables heat to collect existing resource properties from reality " "and converge to updated template." msgstr "" "Lors d'une mise à jour, active la collecte par Heat des propriétées de " "ressource existantes à partir de la réalité et leur convergence vers le " "modèle mis à jour." msgid "One of predefined health monitor types." msgstr "L'un des types prédéfinis de moniteur d'état." msgid "One or more listeners for this load balancer." msgstr "Un ou plusieurs programmes d'écoute pour cet équilibreur de charge." msgid "Only ISO 8601 duration format of the form PT#H#M#S is supported." msgstr "Seul le format de durée ISO 8601 de type PT#H#M#S est pris en charge." msgid "Only Templates with an extension of .yaml or .template are supported" msgstr "" "Seuls les modèles avec une extension .yaml ou .template sont pris en charge" #, python-format msgid "Only integer is acceptable by '%(name)s'." msgstr "'%(name)s' prend en charge un entier uniquement." #, python-format msgid "Only non-zero integer is acceptable by '%(name)s'." msgstr "'%(name)s' prend en charge un entier différent de zéro uniquement." msgid "Operator used to compare specified statistic with threshold." msgstr "" "Opérateur utilisé pour comparer des statistiques spécifiées avec le seuil." msgid "Optional CA cert file to use in SSL connections." 
msgstr "" "Fichier de certificat de l'autorité de certification facultatif à utiliser " "dans les connexions SSL." msgid "Optional Nova keypair name." msgstr "Nom de la paire de clés Nova facultative." msgid "Optional PEM-formatted certificate chain file." msgstr "Fichier de chaîne de certificats au format PEM facultatif." msgid "Optional PEM-formatted file that contains the private key." msgstr "Fichier au format PEM facultatif qui contient la clé privée." msgid "Optional filename to associate with part." msgstr "Nom de fichier facultatif à associer à la partie." #, python-format msgid "Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s." msgstr "" "URL Heat facultative dans un format du type http://0.0.0.0:8004/v1/" "%(tenant_id)s." msgid "Optional subtype to specify with the type." msgstr "Sous-type facultatif à indiquer avec le type." msgid "Options for simulating waiting." msgstr "Options de simulation d'attente." #, python-format msgid "Order '%(name)s' failed: %(code)s - %(reason)s" msgstr "La commande '%(name)s' a échoué : %(code)s - %(reason)s" msgid "Outputs received" msgstr "Sorties reçues" msgid "Owner of the source security group." msgstr "Propriétaire du groupe de sécurité source." msgid "PATCH update to non-COMPLETE stack" msgstr "Mise à jour PATCH dans la pile non-COMPLETE" #, python-format msgid "Parameter '%(name)s' is invalid: %(exp)s" msgstr "Le paramètre '%(name)s' n'est pas valide : %(exp)s" msgid "Parameter Groups error" msgstr "Erreur du groupe de paramèetre" msgid "" "Parameter Groups error: parameter_groups.: The grouped parameter key_name " "does not reference a valid parameter." msgstr "" "Erreur de groupes de paramètre : parameter_groups.: Le paramètre groupé " "key_name ne fait pas référence à un paramètre valide." msgid "" "Parameter Groups error: parameter_groups.: The key_name parameter must be " "assigned to one parameter group only." msgstr "" "Erreur de groupes de paramètre : parameter_groups.: Le paramètre key_name " "doit être affecté à un groupe de paramètres uniquement." msgid "" "Parameter Groups error: parameter_groups.: The parameters of parameter group " "should be a list." msgstr "" "Erreur de groupes de paramètre : parameter_groups.: Les paramètres du groupe " "de paramètres doivent être une liste." msgid "" "Parameter Groups error: parameter_groups.Database Group: The InstanceType " "parameter must be assigned to one parameter group only." msgstr "" "Erreur de groupes de paramètre : parameter_groups.Database Group: Le " "paramètre InstanceType doit être affecté à un groupe de paramètres " "uniquement." msgid "" "Parameter Groups error: parameter_groups.Database Group: The grouped " "parameter SomethingNotHere does not reference a valid parameter." msgstr "" "Erreur de groupes de paramètre : parameter_groups.Database Group: Le " "paramètre groupé SomethingNotHere ne fait pas référence à un paramètre " "valide." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters must " "be provided for each parameter group." msgstr "" "Erreur de groupes de paramètre : parameter_groups.Server Group: Les " "paramètres doivent être fournis pour chaque groupe de paramètres." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters of " "parameter group should be a list." msgstr "" "Erreur de groupes de paramètre : parameter_groups.Server Group: Les " "paramètres du groupe de paramètres doivent être une liste." 
msgid "" "Parameter Groups error: parameter_groups: The parameter_groups should be a " "list." msgstr "" "Erreur de groupes de paramètre : parameter_groups: Le paramètre _groups doit " "être une liste." #, python-format msgid "Parameter name in \"%s\" must be string" msgstr "Le nom de paramètre dans \"%s\" doit être une chaîne" #, python-format msgid "Params must be a map, find a %s" msgstr "Les paramètres doivent être une mappe, recherchez %s" msgid "Parent network of the subnet." msgstr "Réseau parent du sous-réseau." msgid "Parts belonging to this message." msgstr "Parties appartenant à ce message." msgid "Password for API authentication" msgstr "Mot de passe pour API authentification" msgid "Password for accessing the data source URL." msgstr "Mot de passe pour l'accès à l'URL de la source de données." msgid "Password for accessing the job binary URL." msgstr "Mot de passe pour accéder à l'URL du binaire de travail." msgid "Password for those users on instance creation." msgstr "Mot de passe pour ces utilisateurs lors de la création d'instance." msgid "Password of keystone user." msgstr "Mot de passe de l'utilisateur Keystone." msgid "Password used by user." msgstr "Mot de passe de l'utilisateur." #, python-format msgid "Path components in \"%s\" must be strings" msgstr "Les composants de chemin d'accès dans \"%s\" doivent être des chaînes" msgid "Path components in attributes must be strings" msgstr "" "Les composants de chemin d'accès dans les attributs doivent être des chaînes" msgid "Payload exceeds maximum allowed size" msgstr "Le contenu dépasse la taille maximale autorisée" msgid "Perfect forward secrecy for the ipsec policy." msgstr "PFS (Perfect Forward Secrecy) pour la stratégie IPSec." msgid "Perfect forward secrecy in lowercase for the ike policy." msgstr "PFS (Perfect Forward Secrecy) en minuscules pour la stratégie IKE." msgid "" "Perform a check on the input values passed to verify that each required " "input has a corresponding value. When the property is set to STRICT and no " "value is passed, an exception is raised." msgstr "" "Vérifiez les valeurs d'entrée transmises pour vous assurer que chaque entrée " "obligatoire a une valeur correspondante. Lorsque la propriété est définie " "sur STRICT et qu'aucune valeur n'est transmise, une exception est émise." msgid "Period (seconds) to evaluate over." msgstr "Périodes (en secondes) sur lesquelles effectuer l'évaluation." msgid "Physical ID of the VPC. Not implemented." msgstr "ID physique de VPC. Non implémenté." #, python-format msgid "" "Plugin %(plugin)s doesn't support the following node processes: " "%(unsupported)s. Allowed processes are: %(allowed)s" msgstr "" "Le plugin %(plugin)s ne prend pas en charge les processus de nœud suivants : " "%(unsupported)s. Les processus autorisés sont : %(allowed)s" msgid "Plugin name." msgstr "Nom du plugin" msgid "Policies for removal of resources on update." msgstr "Stratégies de retrait de ressources lors d'une mise à jour." msgid "Policy for rolling updates for this scaling group." msgstr "" "Règle pour les mises à jour en continu pour ce groupe de mise à l'échelle." msgid "" "Policy on how to apply a flavor update; either by requesting a server resize " "or by replacing the entire server." msgstr "" "Règle sur la façon d'appliquer une mise à jour de version ; soit en " "demandant le redimensionnement du serveur, soit en remplaçant le serveur " "entier." 
msgid "" "Policy on how to apply an image-id update; either by requesting a server " "rebuild or by replacing the entire server." msgstr "" "Règle sur la façon d'appliquer une mise à jour d'image-id : soit en " "demandant la reconstruction du serveur, soit en remplaçant le serveur " "complet." msgid "" "Policy on how to respond to a stack-update for this resource. REPLACE_ALWAYS " "will replace the port regardless of any property changes. AUTO will update " "the existing port for any changed update-allowed property." msgstr "" "Règle indiquant le mode de réponse à une demande de mise à jour de pile pour " "cette ressource. REPLACE_ALWAYS remplace le port indépendamment des " "changements de propriété. AUTO met à jour le port existant pour toute " "propriété update-allowed." msgid "" "Policy to be processed when doing an update which requires removal of " "specific resources." msgstr "" "Règle à traiter lors d'une mise à jour qui nécessite le retrait de " "ressources spécifiques." msgid "Pool creation failed" msgstr "Echec de création du pool" msgid "Pool creation failed due to vip" msgstr "Echec de la création du pool en raison de l'adresse IP virtuelle" msgid "Pool from which floating IP is allocated." msgstr "Pool à partir duquel l'IP flottante est allouée." msgid "Port number on which the servers are running on the members." msgstr "Numéro de port sur lequel les serveurs s'exécutent sur les membres." msgid "Port on which the pool member listens for requests or connections." msgstr "" "Port sur lequel es membres de pool écoutent les demandes ou connexions." msgid "Port security enabled of the network." msgstr "Sécurité des ports activée pour le réseau." msgid "Port security enabled of the port." msgstr "Sécurité des ports activée pour le port." msgid "Position of the rule within the firewall policy." msgstr "Position de la règle dans les règles d'administration de pare-feu." msgid "Pre-shared key string for the ipsec site connection." msgstr "Chaîne de clés prépartagées pour la connexion de site IPSec." msgid "Prefix length for subnet allocation from subnet pool." msgstr "" "Longueur de préfixe de l'allocation de sous-réseau depuis le pool de sous-" "réseaux." msgid "Private DNS name of the specified instance." msgstr "Nom DNS privé de l'instance indiquée." msgid "Private IP address of the network interface." msgstr "Adresse IP privée de l'interface réseau." msgid "Private IP address of the specified instance." msgstr "Adresse IP privée de l'instance indiquée." msgid "Project ID" msgstr "ID Projet" msgid "" "Projects to add volume type access to. NOTE: This property is only supported " "since Cinder API V2." msgstr "" "Projets auxquels ajouter l'accès aux types de volume. REMARQUE : Cette " "propriété est prise en charge uniquement à partir de l'API Cinder version 2." #, python-format msgid "" "Properties %(algorithm)s and %(bit_length)s are required for %(type)s type " "of order." msgstr "" "Propriétés %(algorithm)s et %(bit_length)s obligatoires pour le type de " "commande %(type)s." msgid "Properties for profile." msgstr "Propriétés du profil." msgid "Properties of this policy." msgstr "Propriétés de cette stratégie." msgid "Properties to pass to each resource being created in the chain." msgstr "" "Propriétés à transmettre à chaque ressource en cours de création dans la " "chaîne." #, python-format msgid "Property %(cookie)s is required when %(sp)s type is set to %(app)s." msgstr "" "La propriété %(cookie)s est obligatoire quand le type %(sp)s est défini sur " "%(app)s." 
#, python-format msgid "" "Property %(cookie)s must NOT be specified when %(sp)s type is set to %(ip)s." msgstr "" "La propriété %(cookie)s ne doit PAS être spécifiée quaand le type %(sp)s est " "défini sur %(ip)s." #, python-format msgid "" "Property %(key)s updated value %(new)s should be superset of existing value " "%(old)s." msgstr "" "La valeur %(new)s mise à jour de la propriété %(key)s doit être un sur-" "ensemble de la valeur existante %(old)s." #, python-format msgid "" "Property %(n)s type mismatch between facade %(type)s (%(fs_type)s) and " "provider (%(ps_type)s)" msgstr "" "Non-concordance du type de propriété %(n)s entre la façade %(type)s " "(%(fs_type)s) et le fournisseur (%(ps_type)s)" #, python-format msgid "Property %(policies)s and %(item)s cannot be used both at one time." msgstr "" "La propriété %(policies)s et %(item)s ne peuvent pas être utilisés en même " "temps." #, python-format msgid "Property %(ref)s required when protocol is %(term)s." msgstr "Propriété %(ref)s obligatoire quand le protocole est %(term)s." #, python-format msgid "Property %s not assigned" msgstr "Propriété %s non assignée" #, python-format msgid "Property %s not implemented yet" msgstr "La propriété %s n'est pas encore implémentée" msgid "" "Property cookie_name is required when session_persistence type is set to " "APP_COOKIE." msgstr "" "La propriété cookie_name est obligatoire quand le type session_persistence " "est défini sur APP_COOKIE." msgid "" "Property cookie_name is required, when session_persistence type is set to " "APP_COOKIE." msgstr "" "La propriété cookie_name est requise lorsque le type session_persistence est " "défini sur APP_COOKIE." msgid "" "Property cookie_name must NOT be specified when session_persistence type is " "set to SOURCE_IP." msgstr "" "La propriété cookie_name ne doit PAS être spécifiquée quand le type " "session_persistence est défini sur SOURCE_IP." msgid "Property values for the resources in the group." msgstr "Valeurs de propriété des ressources du groupe." msgid "Protocol for balancing." msgstr "Protocole pour l'équilibrage." msgid "Protocol for the firewall rule." msgstr "Protocole pour le pare-feu." msgid "Protocol of the pool." msgstr "Protocole du pool." msgid "Protocol on which to listen for the client traffic." msgstr "Protocole pour l'écoute du trafic client." msgid "Protocol to balance." msgstr "Protocole pour l'équilibrage." msgid "Protocol value for this firewall rule." msgstr "Valeur du protocole pour la règle du pare-feu." msgid "" "Provide access to nodes using other nodes of the cluster as proxy gateways." msgstr "" "Fournissez un accès aux nœuds en utilisant d'autres nœuds du cluster comme " "passerelles de proxy." msgid "" "Provide old encryption key. New encryption key would be used from config " "file." msgstr "" "Fournir l'ancienne clé de chiffrement. La nouvelle clé de chiffrement " "utilisée sera obtenue dans le fichier de configuration." msgid "Provider for this Load Balancer." msgstr "Fournisseur de cet équilibreur de charge." msgid "Provider implementing this load balancer instance." msgstr "Fournisseur implémentant cette instance d'équilibreur de charge." #, python-format msgid "Provider requires property %(n)s unknown in facade %(type)s" msgstr "" "Le fournisseur requiert la propriété %(n)s inconnue dans la façade %(type)s" msgid "Public DNS name of the specified instance." msgstr "Nom DNS public de l'instance spécifiée." msgid "Public IP address of the specified instance." msgstr "Adresse IP publique de l'instance indiquée." 
msgid "" "RPC timeout for the engine liveness check that is used for stack locking." msgstr "" "Délai d'attente d'appel de procédure distante pour la vérification " "d'activité du moteur qui est utilisé pour le verrouillage de pile." msgid "RX/TX factor." msgstr "Facteur RX/TX." #, python-format msgid "Rebuilding server failed, status '%s'" msgstr "Echec de la régénération du serveur, statut '%s'" msgid "Record name." msgstr "Nom de l'enregistrement." #, python-format msgid "Recursion depth exceeds %d." msgstr "La profondeur de récursivité dépasse %d." msgid "" "Ref structure that contains the ID of the VPC on which you want to create " "the subnet." msgstr "" "Structure de référence qui contient l'ID du VPC sur lequel vous voulez créer " "le sous-réseau." msgid "Reference to a flavor for creating DB instance." msgstr "Référence à une version pour créer l'instance de base de données." msgid "Reference to certificate." msgstr "Référence au certificat." msgid "Reference to intermediates." msgstr "Référence aux intermédiaires." msgid "Reference to private key passphrase." msgstr "Référence à la phrase passe de la clé privée." msgid "Reference to private key." msgstr "Référence à la clé privée." msgid "Reference to public key." msgstr "Référence à la clé publique." msgid "Reference to the secret." msgstr "Référence au secret." msgid "References to secrets that will be stored in container." msgstr "Références aux secrets qui seront stockés dans le conteneur." msgid "Region name in which this stack will be created." msgstr "Nom de la région dans laquelle la pile sera créée." msgid "Remaining executions." msgstr "Exécutions restantes." msgid "Remote branch router identity." msgstr "Identité du routeur de branche distant." msgid "Remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "" "Adresse IPv4 ou adresse IPv6 ou nom de domaine complet publics de routeur de " "branche distant." msgid "Remote subnet(s) in CIDR format." msgstr "Sous-réseau(x) distant(s) au format CIDR." msgid "" "Replacement policy used to work around flawed nova/neutron port interaction " "which has been fixed since Liberty." msgstr "" "Stratégie de remplacement utilisée pour contourner l'interaction de port " "Nova/Neutron défaillante qui a été corrigée depuis Liberty." msgid "Request expired or more than 15mins in the future" msgstr "La demande a expiré ou intervient à plus de 15 minutes après" #, python-format msgid "Request limit exceeded: %(message)s" msgstr "Limite de requête dépassée : %(message)s" msgid "Request missing required header X-Auth-Url" msgstr "En-tête X-Auth-Url manquant dans la demande" msgid "Request was denied due to request throttling" msgstr "La demande a été refusée en raison d'une régulation des demandes" #, python-format msgid "" "Requested plugin '%(plugin)s' doesn't support version '%(version)s'. Allowed " "versions are %(allowed)s" msgstr "" "L'extension '%(plugin)s' demandée ne prend pas en charge la version " "'%(version)s'. Versions autorisées : %(allowed)s" msgid "" "Required extra specification. Defines if share drivers handles share servers." msgstr "" "Spécification supplémentaire requise. Défini si des pilotes de partage " "gèrent des serveurs de partage." 
#, python-format msgid "Required property %(n)s for facade %(type)s missing in provider" msgstr "" "La propriété requise %(n)s pour la façade %(type)s est manquante dans le " "fournisseur" #, python-format msgid "Resizing to '%(flavor)s' failed, status '%(status)s'" msgstr "Echec du redimensionnement vers '%(flavor)s', état '%(status)s'" #, python-format msgid "Resource \"%s\" has no type" msgstr "La ressource \"%s\" n'a aucun type" #, python-format msgid "Resource \"%s\" type is not a string" msgstr "Le type \"%s\" de ressource n'est pas une chaîne" #, python-format msgid "Resource %(name)s %(key)s type must be %(typename)s" msgstr "Le type de ressource %(name)s %(key)s doit être %(typename)s" #, python-format msgid "Resource %(name)s is missing \"%(type_key)s\"" msgstr "La ressource %(name)s ne comporte pas de \"%(type_key)s\"" #, python-format msgid "" "Resource %s's property user_data_format should be set to SOFTWARE_CONFIG " "since there are software deployments on it." msgstr "" "La propriété de la ressource %s user_data_format doit être réglée sur " "SOFTWARE_CONFIG puisqu'elle possède des déploiements de logiciels." msgid "Resource ID was not provided." msgstr "L'ID ressource n'a pas été fourni." msgid "" "Resource definition for the resources in the group, in HOT format. The value " "of this property is the definition of a resource just as if it had been " "declared in the template itself." msgstr "" "Définition des ressources dans le groupe, au format HOT. La valeur de cette " "propriété est la définition d'une ressource comme si elle avait été déclarée " "dans le modèle lui-même." msgid "" "Resource definition for the resources in the group. The value of this " "property is the definition of a resource just as if it had been declared in " "the template itself." msgstr "" "Définition de ressource pour les ressources du groupe. La valeur de cette " "propriété est la définition d'une ressource comme si elle avait été déclarée " "dans le modèle lui-même." msgid "Resource failed" msgstr "Echec de la ressource" msgid "Resource is not built" msgstr "La ressource n'est pas générée" msgid "Resource name may not contain \"/\"" msgstr "Le nom de la ressource ne peut pas contenir \"/\"" msgid "Resource type." msgstr "Type de ressource." msgid "Resource update already requested" msgstr "Mise à jour de ressource déjà demandée" msgid "Resource with the name requested already exists" msgstr "Le nom de la ressource demandée existe déjà" msgid "" "ResourceInError: resources.remote_stack: Went to status UPDATE_FAILED due to " "\"Remote stack update failed\"" msgstr "" "ResourceInError: resources.remote_stack: Passé au statut UPDATE_FAILED car " "\"La mise à jour de pile éloignée a échoué\"" #, python-format msgid "Resources must contain Resource. Found a [%s] instead" msgstr "Les ressources doivent contenir Ressource. [%s] trouvé à la place" msgid "" "Resources that users are allowed to access by the DescribeStackResource API." msgstr "" "Ressources auxquelles les utilisateurs sont autorisés à accéder par l'API " "DescribeStackResource ." msgid "Returned status code from the configuration execution." msgstr "Code de statut renvoyé à partir de l'exécution de la configuration." msgid "Route duplicates an existing route." msgstr "La route duplique une route existante." msgid "Route table ID." msgstr "ID de table de routage" msgid "Safety assessment lifetime configuration for the ike policy." msgstr "" "Configuration de durée de vie d'évaluation de sécurité pour la stratégie IKE." 
msgid "Safety assessment lifetime configuration for the ipsec policy." msgstr "" "Configuration de durée de vie d'évaluation de sécurité pour la stratégie " "IPSec." msgid "Safety assessment lifetime units." msgstr "Unités de durée de vie d'évaluation de sécurité." msgid "Safety assessment lifetime value in specified units." msgstr "" "Valeur de durée de vie d'évaluation de sécurité dans des unités spécifiées." msgid "Scheduler hints to pass to Nova (Heat extension)." msgstr "Suggestions de planificateur à transmettre à Nova (extension de Heat)." msgid "Schema representing the inputs that this software config is expecting." msgstr "" "Schéma représentant les entrées attendues par cette configuration logicielle." msgid "Schema representing the outputs that this software config will produce." msgstr "" "Schéma représentant les sorties que cette configuration logicielle produira." #, python-format msgid "Schema valid only for %(ltype)s or %(mtype)s, not %(utype)s" msgstr "" "Schéma valide uniquement pour %(ltype)s ou %(mtype)s, pas pour %(utype)s" msgid "" "Scope of flavor accessibility. Public or private. Default value is True, " "means public, shared across all projects." msgstr "" "Portée de l'accessibilité de version. Publique ou privée. La valeur par " "défaut est True, qui signifie une portée publique, partagée par l'ensemble " "des projets." #, python-format msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden." msgstr "" "Recherche du locataire %(target)s à partir du locataire %(actual)s interdite." msgid "Seconds between running periodic tasks." msgstr "Temps en secondes entre deux tâches périodiques." msgid "Seconds to wait after a create. Defaults to the global wait_secs." msgstr "" "Secondes d'attente après une création. Valeur par défaut globale : wait_secs." msgid "Seconds to wait after a delete. Defaults to the global wait_secs." msgstr "" "Secondes d'attente après une suppression. Valeur par défaut globale : " "wait_secs." msgid "Seconds to wait after an action (-1 is infinite)." msgstr "Secondes d'attente d'une action (-1 correspondant à infini)." msgid "Seconds to wait after an update. Defaults to the global wait_secs." msgstr "" "Secondes d'attente après une mise à jour. Valeur par défaut globale : " "wait_secs." #, python-format msgid "Section %s can not be accessed directly." msgstr "La section %s ne peut pas être consultée directement." #, python-format msgid "Security Group \"%(group_name)s\" not found" msgstr "Groupe de sécurité \"%(group_name)s\" introuvable." msgid "Security group IDs to assign." msgstr "ID de groupe de sécurité à affecter." msgid "Security group IDs to associate with this port." msgstr "ID de groupe de sécurité à associer à ce port." msgid "Security group names to assign." msgstr "Noms de groupe de sécurité à affecter." msgid "Security groups cannot be assigned the name \"default\"." msgstr "Le nom \"default\" ne peut pas être affecté aux groupes de sécurité." msgid "Security service IP address or hostname." msgstr "Adresse IP ou nom d'hôte du service de sécurité." msgid "Security service description." msgstr "Description du service de sécurité." msgid "Security service domain." msgstr "Domaine du service de sécurité." msgid "Security service name." msgstr "Nom du service de sécurité." msgid "Security service type." msgstr "Type du service de sécurité." msgid "Security service user or group used by tenant." msgstr "Utilisateur ou groupe du service de sécurité utilisé par le locataire." 
msgid "Select deferred auth method, stored password or trusts." msgstr "" "Sélectionner une méthode d'authentification différée, des mots de passe " "stockés ou des certificats de confiance." msgid "Sequence of characters to build the random string from." msgstr "" "Séquence de caractères à partir de laquelle générer la chaîne aléatoire." #, python-format msgid "Server %(name)s delete failed: (%(code)s) %(message)s" msgstr "Echec de la suppression du serveur %(name)s : (%(code)s) %(message)s" msgid "Server Group name." msgstr "Nom du groupe de serveurs." msgid "Server name." msgstr "Nom du serveur." msgid "Server to assign floating IP to." msgstr "Serveur auquel affecter l'IP flottante." #, python-format msgid "" "Service %(service_name)s is not available for resource type " "%(resource_type)s, reason: %(reason)s" msgstr "" "Le service %(service_name)s n'est pas disponible pour le type de ressource " "%(resource_type)s ; raison : %(reason)s" msgid "Service misconfigured" msgstr "Service mal configuré" msgid "Service temporarily unavailable" msgstr "Service temporairement non disponible" msgid "Set of parameters passed to this stack." msgstr "Jeu de paramètres transmis à cette pile." msgid "Set of rules for comparing characters in a character set." msgstr "" "Ensemble de règles pour comparer des caractères dans un jeu de caractères." msgid "Set of symbols and encodings." msgstr "Ensemble de symboles et de codages." msgid "Set to \"vpc\" to have IP address allocation associated to your VPC." msgstr "" "Définir sur \"vpc\" pour que l'allocation d'adresse IP soit associée à votre " "VPC." msgid "Set to true if DHCP is enabled and false if DHCP is disabled." msgstr "" "Défini avec la valeur true ou false selon que DHCP est activé ou désactivé." msgid "Severity of the alarm." msgstr "Gravité de l'alerte." msgid "Share description." msgstr "Description du partage." msgid "Share host." msgstr "Hôte du partage." msgid "Share name." msgstr "Nom du partage." msgid "Share network description." msgstr "Description du réseau de partage." msgid "Share project ID." msgstr "ID projet du partage." msgid "Share protocol supported by shared filesystem." msgstr "" "Protocole de partage pris en charge par le système de fichiers partagé." msgid "Share storage size in GB." msgstr "Taille du stockage de partage en Go." msgid "Shared status of the metering label." msgstr "Statut partagé de l'étiquette de mesure." msgid "Shared status of this firewall policy." msgstr "Statut partagé de ces règles d'administration de pare-feu." msgid "Shared status of this firewall rule." msgstr "Statut partagé de cette règle de pare-feu." msgid "Shared status of this firewall." msgstr "Statut partagé de ce pare-feu." msgid "Shrinking volume" msgstr "Redimensionnement du volume" msgid "Signal data error" msgstr "Erreur de données de signal" #, python-format msgid "Signal resource during %s" msgstr "Ressource de signal pendant %s" #, python-format msgid "Single schema valid only for %(ltype)s, not %(utype)s" msgstr "Schéma unique valide uniquement pour %(ltype)s, pas pour %(utype)s" msgid "Size of a secondary ephemeral data disk in GB." msgstr "Taille, en Go, d'un disque de données éphémères secondaire." msgid "Size of adjustment." msgstr "Taille de l'ajustement." msgid "Size of encryption key, in bits. For example, 128 or 256." msgstr "Taille de la clé de chiffrement. Par exemple, 128 ou 256." msgid "" "Size of local disk in GB. 
The \"0\" size is a special case that uses the " "native base image size as the size of the ephemeral root volume." msgstr "" "Taille du disque local en Go. La taille \"0\" est un cas de figure " "particulier qui utilise la taille d'image de base native comme taille du " "volume racine éphémère. " msgid "" "Size of the block device in GB. If it is omitted, hypervisor driver " "calculates size." msgstr "" "Taille de l'unité par bloc en Go. Si celle-ci n'est pas indiquée, le pilote " "d'hyperviseur la calcule." msgid "Size of the instance disk volume in GB." msgstr "Taille du volume disque de l'instance en Go." msgid "Size of the volumes, in GB." msgstr "Taille des volumes, en Go." msgid "Smallest prefix size that can be allocated from the subnet pool." msgstr "" "Taille de préfixe la plus petite qui peut être allouée depuis le pool de " "sous-réseau." #, python-format msgid "Snapshot with id %s not found" msgstr "Snapshot avec l'identifiant %s non trouvé" msgid "" "SnapshotId is missing, this is required when specifying BlockDeviceMappings." msgstr "" "SnapshotId manquant, requis lors de la spécification de BlockDeviceMappings." #, python-format msgid "Software config with id %s not found" msgstr "Configuration logiciel avec l'identifiant %s non trouvé" msgid "Source IP address or CIDR." msgstr "Adresse IP source ou CIDR." msgid "Source ip_address for this firewall rule." msgstr "Source ip_address pour la règle du pare-feu." msgid "Source port number or a range." msgstr "Numéro de port source ou intervalle." msgid "Source port range for this firewall rule." msgstr "Plage de ports source pour cette règle de pare-feu." #, python-format msgid "Specified output key %s not found." msgstr "La clé de sortie spécifiée %s est introuvable." #, python-format msgid "Specified status is invalid, defaulting to %s" msgstr "Le statut spécifié n'est pas valide, %s est utilisé par défaut" #, python-format msgid "Specified subnet %(subnet)s does not belongs to network %(network)s." msgstr "" "Le sous-réseau indiqué %(subnet)s n'appartient pas au réseau %(network)s." msgid "Specifies a custom discovery url for node discovery." msgstr "" "Indique une URL de reconnaissance personnalisée pour la reconnaissance de " "noeud." msgid "Specifies database names for creating databases on instance creation." msgstr "" "Indique des noms de base de données pour créer des bases de données lors de " "la création d'instance." msgid "Specify the ACL permissions on who can read objects in the container." msgstr "" "Indiquez les droits ACL sur les utilisateurs qui peuvent lire des objets " "dans le conteneur." msgid "Specify the ACL permissions on who can write objects to the container." msgstr "" "Indiquez les droits ACL sur les utilisateurs qui peuvent écrire des objets " "dans le conteneur." msgid "" "Specify whether the remote_ip_prefix will be excluded or not from traffic " "counters of the metering label. For example to not count the traffic of a " "specific IP address of a range." msgstr "" "Indiquez si remote_ip_prefix sera exclu ou non des compteurs de trafic de " "l'étiquette de mesure. Par exemple, pour ne pas compter le trafic d'une " "adresse IP spécifique d'une plage." #, python-format msgid "Stack %(stack_name)s already has an action (%(action)s) in progress." msgstr "La pile %(stack_name)s a déjà une action (%(action)s) en cours." msgid "Stack ID" msgstr "Id de la pile." 
msgid "Stack Name" msgstr "Nom de la Pile" msgid "Stack name may not contain \"/\"" msgstr "La stack ne doit pas contenir \"/\"" msgid "Stack resource id" msgstr "ID ressource Stack" msgid "Stack unknown status" msgstr "Etat de pile inconnu" #, python-format msgid "Stack with id %s not found" msgstr "La pile portant l'ID %s est introuvable" msgid "" "Stacks containing these tag names will be hidden. Multiple tags should be " "given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too)." msgstr "" "Les piles contenant ces noms de balise seront masquées. Les balises " "multiples doivent être fournies sous forme de liste délimitées par des " "virgules (par ex. hidden_stack_tags=hide_me,me_too)." msgid "Start address for the allocation pool." msgstr "Adresse de début pour le pool d'allocation." #, python-format msgid "Start resizing the group %(group)s" msgstr "Commencer à redimensionner le groupe %(group)s" msgid "Start time for the time constraint. A CRON expression property." msgstr "" "Heure de début de la contrainte de temps. Propriété d'expression CRON." #, python-format msgid "State %s invalid for create" msgstr "Etat %s non valide pour une création" #, python-format msgid "State %s invalid for resume" msgstr "Etat %s non valide pour une reprise" #, python-format msgid "State %s invalid for suspend" msgstr "Etat %s non valide pour une interruption" msgid "Status" msgstr "Statut" #, python-format msgid "String to split must be string; got %s" msgstr "La chaîne à fractionner doit être une chaîne ; %s obtenu" msgid "String value with which to compare." msgstr "Valeur de chaîne avec laquelle effectuer la comparaison." msgid "Subnet ID to associate with this interface." msgstr "Identificateur de sous-réseau à associer à cette interface." msgid "Subnet ID to launch instance in." msgstr "ID de sous-réseau dans lequel lancer l'instance." msgid "Subnet ID." msgstr "ID de sous-réseau." msgid "Subnet in which the vpn service will be created." msgstr "Sous-réseau dans lequel le service VPN sera créé." msgid "" "Subnet in which to allocate the IP address for port. Used for creating port, " "based on derived properties. If subnet is specified, network property " "becomes optional." msgstr "" "Sous-réseau au sein duquel allouer l'adresse IP pour le port. Utilisé pour " "la création du port, à partir de propriétés dérivées. Si un sous-réseau est " "spécifié, la propriété network devient facultative." msgid "Subnet in which to allocate the IP address for this port." msgstr "Le sous-réseau dans lequel allouer l'adresse IP pour ce port." msgid "Subnet name or ID of this member." msgstr "Nom ou ID sous-réseau de ce membre." msgid "Subnet of external fixed IP address." msgstr "Sous-réseau de l'adresse IP fixe externe." msgid "Subnet of the vip." msgstr "Sous-réseau de vip." msgid "Subnets of this network." msgstr "Sous réseaux du réseau." msgid "" "Subset of trustor roles to be delegated to heat. If left unset, all roles of " "a user will be delegated to heat when creating a stack." msgstr "" "Sous-ensemble de rôles de fiduciant à déléguer à Heat. Si cette option reste " "non définie, tous les rôles d'un utilisateur sont délégués à Heat lors de la " "création d'une pile." msgid "Supplied metadata for the resources in the group." msgstr "Métadonnées fournies pour les ressources du groupe." 
msgid "Supported versions: keystone v3" msgstr "Versions prises en charge : Keystone version 3" #, python-format msgid "Suspend of instance %s failed" msgstr "Echec d'interruption de l'instance %s " #, python-format msgid "Suspend of server %s failed" msgstr "La suspension du serveur %s a échoué" msgid "Swap space in MB." msgstr "Espace de swap en Mo." msgid "System SIGHUP signal received." msgstr "Signal SIGHUP du système reçu." msgid "TCP or UDP port on which to listen for client traffic." msgstr "Port TCP ou UDP pour l'écoute du trafic client." msgid "TCP port on which the instance server is listening." msgstr "Port TCP sur lequel le serveur d'instance est en mode écoute." msgid "TCP port on which the pool member listens for requests or connections." msgstr "" "Port TCP sur lequel le membre du pool écoute les demandes ou les connexions." msgid "" "TCP port on which to listen for client traffic that is associated with the " "vip address." msgstr "" "Port TCP sur lequel écouter le trafic client qui est associé à l'adresse du " "VIP." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of OpenStack service finder functions." msgstr "" "Délai d'attente (TTL), en secondes, pour tout élément en cache dans la " "région dogpile.cache utilisée pour le cache des fonctions de recherche de " "service OpenStack." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of service extensions." msgstr "" "Délai d'attente (TTL), en secondes, pour tout élément en cache dans la " "région dogpile.cache utilisée pour le cache des extensions de service." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of validation constraints." msgstr "" "Délai d'attente (TTL), en secondes, pour tous les éléments en cache dans la " "région dogpile.cache utilisée pour le cache des contraintes de validation." msgid "Tag key." msgstr "Clé de balise." msgid "Tag value." msgstr "Valeur de la balise." msgid "Tags to add to the image." msgstr "Balises à ajouter à l'image." msgid "Tags to attach to instance." msgstr "Balises à lier à l'instance." msgid "Tags to attach to the bucket." msgstr "Balises à lier au compartiment." msgid "Tags to attach to this group." msgstr "Balises à lier à ce groupe." msgid "Task description." msgstr "Description de la tâche." msgid "Task name." msgstr "Nom de la tâche." msgid "" "Template default for how the server should receive the metadata required for " "software configuration. POLL_SERVER_CFN will allow calls to the cfn API " "action DescribeStackResource authenticated with the provided keypair " "(requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the " "Heat API resource-show using the provided keystone credentials (requires " "keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL " "will create and populate a Swift TempURL with metadata for polling (requires " "object-store endpoint which supports TempURL).ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Modèle par défaut pour indiquer comment le serveur doit recevoir les " "métadonnées requises pour la configuration logicielle. POLL_SERVER_CFN va " "autoriser les appels vers l'action cfn API DescribeStackResource " "authentifiée avec la paire de clés fournie (nécessite que heat-api-cfn soit " "activé). 
POLL_SERVER_HEAT va autoriser les appels vers Heat API resource-" "show à l'aide des informations d'identification Keystone fournies (nécessite " "v3 API Keystone et les options stack_user_* config de pile configurées). " "POLL_TEMP_URL va créer et remplir une TempURL Swift à l'aide de métadonnées " "pour l'interrogation (nécessite le nœud final object-store prenant en charge " "TempURL). ZAQAR_MESSAGE va créer une file d'attente zaqar dédiée et publier " "les métadonnées pour l'interrogation." msgid "" "Template default for how the server should signal to heat with the " "deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN " "keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will " "create a Swift TempURL to be signaled via HTTP PUT (requires object-store " "endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat " "API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL " "will create a dedicated zaqar queue to be signaled using the provided " "keystone credentials." msgstr "" "Modèle par défaut pour indiquer comment le serveur doit envoyer un signal à " "Heat avec les valeurs de sortie de déploiement. CFN_SIGNAL va autoriser un " "HTTP POST vers une URL signée par une paire de clés CFN (nécessite que heat-" "api-cfn soit activé ). TEMP_URL_SIGNAL va créer une TempURL Swift pour " "l'envoi d'un signal via HTTP PUT (nécessite un noeud final object-store qui " "prenant en charge TempURL). HEAT_SIGNAL va autoriser les appels vers le " "signal ressource d'API Heat à l'aide des informations d'identification " "Keystone fournies. ZAQAR_SIGNAL va créer une file d'attente zaqar dédiée " "pour l'envoi d'un signal à l'aide des informations d'identification Keystone " "fournies." msgid "Template format version not found." msgstr "Le format de la version du template n'est pas trouvé" #, python-format msgid "" "Template size (%(actual_len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "La taille du modèle (%(actual_len)s octets) dépasse la taille maximale " "autorisée (%(limit)s bytes)." msgid "Template that specifies the stack to be created as a resource." msgstr "Modèle indiquant la pile à créer comme ressource." #, python-format msgid "Template type is not supported: %s" msgstr "Type de template non supporté : %s" msgid "Template version was not provided" msgstr "Version de modèle non fournie" #, python-format msgid "Template with version %s not found" msgstr "Modèle de version %s introuvable" msgid "TemplateBody or TemplateUrl were not given." msgstr "TemplateBody ou TemplateUrl n'ont pas été donnés." msgid "Tenant owning the health monitor." msgstr "Locataire possédant le moniteur d'état." msgid "Tenant owning the pool member." msgstr "Locataire possédant le membre de pool." msgid "Tenant owning the pool." msgstr "Locataire possédant le pool." msgid "Tenant owning the port." msgstr "Locataire possédant le port." msgid "Tenant owning the router." msgstr "Locataire possédant le routeur." msgid "Tenant owning the subnet." msgstr "Locataire possédant le sous-réseau." 
#, python-format msgid "Testing message %(text)s" msgstr "Test du message %(text)s" #, python-format msgid "The \"%(hook)s\" hook is not defined on %(resource)s" msgstr "Le point d'ancrage \"%(hook)s\" n'est pas défini sur %(resource)s" #, python-format msgid "The \"for_each\" argument to \"%s\" must contain a map" msgstr "L'argument \"for_each\" pour \"%s\" doit contenir une mappe" #, python-format msgid "The %(entity)s (%(name)s) could not be found." msgstr "%(entity)s (%(name)s) introuvable." #, python-format msgid "The %s must be provided for each parameter group." msgstr "%s doit être fouri pour chaque groupe de paramètres." #, python-format msgid "The %s of parameter group should be a list." msgstr "Pour le groupe de paramètres, %s doit être une liste." #, python-format msgid "The %s parameter must be assigned to one parameter group only." msgstr "" "Le paramètre %s doit être affecté à un groupe de paramètres uniquement." #, python-format msgid "The %s should be a list." msgstr "%s doit être une liste\"" msgid "The API paste config file to use." msgstr "Fichier de configuration de collage d'API à utiliser." msgid "The AWS Access Key ID needs a subscription for the service" msgstr "" "Un identifiant AWS Access Key est nécessaire pour l'inscription du service" msgid "The Availability Zone where the specified instance is launched." msgstr "" "La zone de disponibilité dans laquelle l'instance spécifiée est lancée." msgid "The Availability Zones in which to create the load balancer." msgstr "" "Les zones de disponibilité dans lesquelles créer l'équilibrage de charge." msgid "The CIDR." msgstr "CIDR." msgid "The DNS name for the LoadBalancer." msgstr "Nom DNS du LoadBalancer." msgid "The DNS name of the specified bucket." msgstr "Nom DNS du compartiment spécifié." msgid "The DNS nameserver address." msgstr "Adresse du serveur de noms DNS." msgid "The HTTP method used for requests by the monitor of type HTTP." msgstr "" "La méthode HTTP utilisée pour les demandes par le moniteur de type HTTP." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health." msgstr "" "Le chemin d'accès HTTP utilisé dans la demande HTTP utilisée par le moniteur " "pour tester la santé d'un membre." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health. A valid value is a string the begins with a forward slash (/)." msgstr "" "Chemin d'accès HTTP utilisé dans la demande HTTP utilisée par le moniteur " "pour tester la santé d'un membre. Une valeur valide est une chaîne qui " "commence par une barre oblique (/)." msgid "" "The HTTP status codes expected in response from the member to declare it " "healthy. Specify one of the following values: a single value, such as 200. a " "list, such as 200, 202. a range, such as 200-204." msgstr "" "Liste des codes de statut HTTP attendus dans la réponse du membre pour le " "déclarer sain. Indiquez l'une des valeurs suivantes : valeur unique (par " "exemple, 200), liste (par exemple, 200, 202), plage (par exemple, 200-204)." msgid "" "The ID of an existing instance to use to create the Auto Scaling group. If " "specify this property, will create the group use an existing instance " "instead of a launch configuration." msgstr "" "Identificateur d'une instance existante à utiliser pour créer le groupe " "AutoScaling. Si vous indiquez cette propriété, elle va créer le groupe à " "l'aide d'une instance existante plutôt qu'une configuration de lancement." 
msgid "" "The ID of an existing instance you want to use to create the launch " "configuration. All properties are derived from the instance with the " "exception of BlockDeviceMapping." msgstr "" "Identificateur d'une instance existante à utiliser pour créer la " "configuration de lancement. Toutes les propriétés sont dérivées de " "l'instance, à l'exception de BlockDeviceMapping." msgid "The ID of the attached network." msgstr "ID du réseau rattaché." msgid "The ID of the firewall policy that this firewall is associated with." msgstr "" "ID des règles d'administration de pare-feu auxquelles ce pare-feu est " "associé." msgid "" "The ID of the hosted zone name that is associated with the LoadBalancer." msgstr "ID de la zone hébergée qui est associée au LoadBalancer." msgid "The ID of the image to create a volume from." msgstr "ID de l'image à partir de laquelle créer un volume." msgid "The ID of the image to create the volume from." msgstr "ID de l'image à partir de laquelle créer le volume." msgid "The ID of the instance to which the volume attaches." msgstr "ID de l'instance à laquelle le volume est connecté." msgid "The ID of the load balancing pool." msgstr "L'ID du pool d'équilibrage de charge." msgid "The ID of the pool to which the pool member belongs." msgstr "ID du pool auquel appartient le membre de pool." msgid "The ID of the server to which the volume attaches." msgstr "ID du serveur auquel le volume est connecté." msgid "The ID of the snapshot to create a volume from." msgstr "ID de l'instantané à partir duquel créer un volume." msgid "" "The ID of the tenant which will own the network. Only administrative users " "can set the tenant identifier; this cannot be changed using authorization " "policies." msgstr "" "Identificateur du locataire qui possédera le réseau. Seuls les utilisateurs " "administratifs peuvent définir l'ID locataire ; il ne peut pas être modifié " "à l'aide des règles d'autorisation." msgid "" "The ID of the tenant who owns the Load Balancer. Only administrative users " "can specify a tenant ID other than their own." msgstr "" "Identificateur du locataire qui possède l'équilibreur de charge. Seuls les " "administrateurs peuvent indiquer un ID locataire autre que le leur." msgid "The ID of the tenant who owns the listener." msgstr "ID du locataire propriétaire du programme d'écoute." msgid "" "The ID of the tenant who owns the network. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "Identificateur du locataire qui possède le réseau. Seuls les utilisateurs " "administrateurs peuvent indiquer un ID locataire autre que le leur." msgid "" "The ID of the tenant who owns the subnet pool. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "ID du locataire qui est détenteur du pool de sous-réseau. Seuls les " "administrateurs peuvent spécifier un ID locataire différent du leur." msgid "The ID of the volume to be attached." msgstr "L'ID du volume à lier." msgid "" "The ID of the volume to boot from. Only one of volume_id or snapshot_id " "should be provided." msgstr "" "ID du volume d'amorçage. Un seul volume_id ou snapshot_id doit être fourni." msgid "The ID or name of the flavor to boot onto." msgstr "ID ou nom de la version d'amorçage." msgid "The ID or name of the image to boot with." msgstr "ID ou nom de l'image d'amorçage." msgid "" "The IDs of the DHCP agent to schedule the network. 
Note that the default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "ID de l'agent DHCP devant programmer le réseau. Notez que le paramètre de " "règle par défaut dans Neutron limite l'utilisation de cette propriété " "uniquement aux administrateurs." msgid "The IP address of the pool member." msgstr "Adresse IP du membre de pool." msgid "The IP version, which is 4 or 6." msgstr "Version IP, qui est 4 ou 6." #, python-format msgid "The Parameter (%(key)s) was not defined in template." msgstr "Le paramètre (%(key)s) n'a pas été défini dans le modèle." #, python-format msgid "The Parameter (%(key)s) was not provided." msgstr "Paramètre (%(key)s) non fourni" msgid "The QoS policy ID attached to this network." msgstr "ID stratégie QoS connecté à ce réseau." msgid "The QoS policy ID attached to this port." msgstr "ID stratégie QoS connecté à ce port." #, python-format msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect." msgstr "L'attribut référencé (%(resource)s %(key)s) est incorrect." #, python-format msgid "The Resource %s requires replacement." msgstr "La ressource %s doit être remplacée." #, python-format msgid "" "The Resource (%(resource_name)s) could not be found in Stack %(stack_name)s." msgstr "" "La ressource (%(resource_name)s) est introuvable dans la pile. " "%(stack_name)s." #, python-format msgid "The Resource (%(resource_name)s) is not available." msgstr "La ressource (%(resource_name)s) n'est pas disponible." #, python-format msgid "The Snapshot (%(snapshot)s) for Stack (%(stack)s) could not be found." msgstr "L'instantané (%(snapshot)s) de la pile (%(stack)s) est introuvable." #, python-format msgid "The Stack (%(stack_name)s) already exists." msgstr "La stack (%(stack_name)s) existe déjà." msgid "The Template must be a JSON or YAML document." msgstr "Le modèle doit être un document JSON ou YAML." msgid "The URI to the container." msgstr "URI du conteneur." msgid "The URI to the created container." msgstr "URI du conteneur créé." msgid "The URI to the created secret." msgstr "URI du secret créé." msgid "The URI to the order." msgstr "URI de la commande." msgid "The URIs to container consumers." msgstr "URI des consommateurs de conteneur." msgid "The URIs to secrets stored in container." msgstr "URI des secrets stockés dans le conteneur." msgid "" "The URL of a template that specifies the stack to be created as a resource." msgstr "L'URL d'un modèle qui indique la pile à créer en tant que ressource." msgid "The URL of the container." msgstr "URL du container" msgid "The VIP address of the LoadBalancer." msgstr "Adresse VIP de l'équilibreur de charge." msgid "The VIP port of the LoadBalancer." msgstr "Port VIP de l'équilibreur de charge." msgid "The VIP subnet of the LoadBalancer." msgstr "Sous-réseau VIP de l'équilibreur de charge." msgid "The action or operation requested is invalid" msgstr "L'action ou l’opération demandée est non valide" msgid "The action to be executed when the receiver is signaled." msgstr "Action à exécuter lorsque le récepteur est signalé." msgid "The administrative state of the firewall." msgstr "Etat administratif du pare-feu." msgid "The administrative state of the health monitor." msgstr "L'état administratif du moniteur d'état." msgid "The administrative state of the ipsec site connection." msgstr "L'état administratif de la connexion de site IPSec." msgid "The administrative state of the pool member." msgstr "L'état administratif du membre de pool." 
msgid "The administrative state of the router." msgstr "Etat d'administration du routeur." msgid "The administrative state of the vpn service." msgstr "Description du service vpn." msgid "The administrative state of this Load Balancer." msgstr "Etat administratif de cet équilibreur de charge." msgid "The administrative state of this health monitor." msgstr "L'état administratif de ce moniteur d'état." msgid "The administrative state of this listener." msgstr "Etat administratif de ce programme d'écoute." msgid "The administrative state of this pool member." msgstr "L'état administratif de ce membre de pool." msgid "The administrative state of this pool." msgstr "L'état administratif de ce pool." msgid "The administrative state of this port." msgstr "L'état administratif de ce port." msgid "The administrative state of this vip." msgstr "L'état administratif de ce VIP." msgid "The administrative status of the network." msgstr "Le statut administratif du réseau." msgid "The administrator password for the server." msgstr "Mot de passe de l'administrateur du serveur." msgid "The aggregation method to compare to the threshold." msgstr "Méthode d'agrégation pour la comparaison au seuil." msgid "The algorithm type used to generate the secret." msgstr "Type d'algorithme utilisé pour générer le secret." msgid "" "The algorithm type used to generate the secret. Required for key and " "asymmetric types of order." msgstr "" "Type d'algorithme utilisé pour générer le secret. Obligatoire pour les types " "d'ordre asymétrique et des clés." msgid "The algorithm used to distribute load between the members of the pool." msgstr "" "L'algorithme utilisé pour répartir le chargement entre les membres du pool." msgid "The allocated address of this IP." msgstr "Adresse allouée de cette IP." msgid "" "The approximate interval, in seconds, between health checks of an individual " "instance." msgstr "" "L'intervalle approximatif, en secondes, entre les diagnostics d'intégrité " "d'une instance individuelle." msgid "The authentication hash algorithm of the ipsec policy." msgstr "Algorithme de hachage d'authentification de la stratégie IPSec." msgid "The authentication hash algorithm used by the ike policy." msgstr "Algorithme de hachage d'authentification utilisé par la stratégie IKE." msgid "The authentication mode of the ipsec site connection." msgstr "Mode d'authentification de la connexion de site IPSec." msgid "The availability zone in which the volume is located." msgstr "Zone de disponibilité dans laquelle le volume se trouve." msgid "The availability zone in which the volume will be created." msgstr "Zone de disponibilité dans laquelle le volume sera créé." msgid "The availability zone of shared filesystem." msgstr "Zone de disponibilité du système de fichiers partagé." msgid "The bay name." msgstr "Nom de baie." msgid "The bit-length of the secret." msgstr "Longeur en bits du secret." msgid "" "The bit-length of the secret. Required for key and asymmetric types of order." msgstr "" "Longueur en bits du secret. Obligatoire pour les types d'ordre asymétrique " "et des clés." #, python-format msgid "The bucket you tried to delete is not empty (%s)." msgstr "Le compartiment que vous avez tenté de supprimer n'est pas vide (%s)." msgid "The can be used to unmap a defined device." msgstr "Permet de supprimer le mappage d'un appareil défini." msgid "The certificate or AWS Key ID provided does not exist" msgstr "Le certificat ou l'ID de clé AWS fourni n'existe pas" msgid "The channel for receiving signals." 
msgstr "Canal pour la réception des signaux." msgid "" "The class that provides encryption support. For example, nova.volume." "encryptors.luks.LuksEncryptor." msgstr "" "Classe fournissant le support de chiffrement. Par exemple, nova.volume." "encryptors.luks.LuksEncryptor." #, python-format msgid "The client (%(client_name)s) is not available." msgstr "Le client (%(client_name)s) est indisponible." msgid "The cluster ID this node belongs to." msgstr "ID cluster auquel ce noeud appartient." msgid "The config value of the software config." msgstr "Valeur de configuration de la configuration logicielle." msgid "" "The configuration tool used to actually apply the configuration on a server. " "This string property has to be understood by in-instance tools running " "inside deployed servers." msgstr "" "Outil de configuration utilisé pour appliquer la configuration sur un " "serveur. Cette propriété de chaîne doit être lue par les outils en instance " "exécutés dans les serveurs déployés." msgid "The content of the CSR. Only for certificate orders." msgstr "" "Contenu de la demande de signature de certificat (CSR). Uniquement pour les " "commandes de certificat." #, python-format msgid "" "The contents of personality file \"%(path)s\" is larger than the maximum " "allowed personality file size (%(max_size)s bytes)." msgstr "" "Le contenu du fichier de personnalité \"%(path)s\" est supérieur à la taille " "maximale de fichier de personnalité autorisée (%(max_size)s octets)." msgid "The current size of AutoscalingResourceGroup." msgstr "Taille en cours d'AutoscalingResourceGroup." msgid "The current status of the volume." msgstr "Statut actuel du volume." msgid "" "The database instance was created, but heat failed to set up the datastore. " "If a database instance is in the FAILED state, it should be deleted and a " "new one should be created." msgstr "" "L'instance de base de données a été créée, mais Heat n'a pas installé le " "magasin de données. Si une instance de base de données est à l'état FAILED, " "elle doit être supprimée et une nouvelle instance doit être créée." msgid "" "The dead peer detection protocol configuration of the ipsec site connection." msgstr "" "Configuration du protocole de détection d'homologue inactif de la connexion " "la connexion de site IPSec." msgid "The decrypted secret payload." msgstr "Contenu déchiffré du secret." msgid "" "The default cloud-init user set up for each image (e.g. \"ubuntu\" for " "Ubuntu 12.04+, \"fedora\" for Fedora 19+ and \"cloud-user\" for CentOS/RHEL " "6.5)." msgstr "" "Utilisateur cloud-init par défaut défini pour chaque image (par exemple, " "\"ubuntu\" pour Ubuntu 12.04+, \"fedora\" pour fedora 19+ et \"cloud-user\" " "pour CentOS/RHEL 6.5)." msgid "The description for the QoS policy." msgstr "Description de la stratégie de qualité de service." msgid "The description of the ike policy." msgstr "Description de la stratégie IKE." msgid "The description of the ipsec policy." msgstr "Description de la stratégie IPSec." msgid "The description of the ipsec site connection." msgstr "Description de la connexion de site IPSec." msgid "The description of the vpn service." msgstr "Description du service VPN." msgid "The destination for static route." msgstr "Destination de la route statique." msgid "The details of physical object." msgstr "Détails de l'objet physique." msgid "The device id for the network gateway." msgstr "Identificateur d'unité pour la passerelle réseau." msgid "" "The device where the volume is exposed on the instance. 
This assignment may " "not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "L'unité dans laquelle le volume est exposé sur l'instance. Cette affectation " "peut ne pas être respectée et il est conseillé d'utiliser le chemin /dev/" "disk/by-id/virtio- à la place." msgid "" "The direction in which metering rule is applied, either ingress or egress." msgstr "" "Le sens dans lequel la règle de mesure est appliquée, en entrée ou en sortie." msgid "The direction in which metering rule is applied." msgstr "Le sens dans lequel la règle de mesure est appliquée." msgid "" "The direction in which the security group rule is applied. For a compute " "instance, an ingress security group rule matches traffic that is incoming " "(ingress) for that instance. An egress rule is applied to traffic leaving " "the instance." msgstr "" "Le sens dans lequel la règle de groupe de sécurité est appliquée. Pour une " "instance de calcul, une règle de groupe de sécurité d'entrée met en " "correspondance le trafic qui entre pour cette instance. Une règle de sortie " "est appliquée au trafic qui quitte l'instance." msgid "The directory to search for environment files." msgstr "Répertoire à utiliser pour la recherche de fichiers d'environnement." msgid "The ebs volume to attach to the instance." msgstr "Volume ebs à connecter à l'instance." msgid "The encapsulation mode of the ipsec policy." msgstr "Mode d'encapsulation de la stratégie IPSec." msgid "The encoding format used to provide the payload data." msgstr "Format de codage utilisé pour fournir les données de contenu." msgid "The encryption algorithm of the ipsec policy." msgstr "Algorithme de chiffrement de la stratégie IPSec." msgid "The encryption algorithm or mode. For example, aes-xts-plain64." msgstr "Algorithme ou mode de chiffrement. Par exemple, aes-xts-plain64." msgid "The encryption algorithm used by the ike policy." msgstr "Algorithme de chiffrement utilisé par la stratégie IKE." msgid "The environment is not a valid YAML mapping data type." msgstr "L'environnement n'est pas un type de données de mappage YAML valide." msgid "The expiration date for the secret in ISO-8601 format." msgstr "Date d'expiration du secret au format ISO-8601." msgid "The external load balancer port number." msgstr "Le numéro de port externe de l'équilibreur de charge." msgid "The extra specs key and value pairs of the volume type." msgstr "" "Clé de spécifications et paires de valeur supplémentaires du type de volume." msgid "The flavor to use." msgstr "Version à utiliser." #, python-format msgid "The following parameters are immutable and may not be updated: %(keys)s" msgstr "" "Les paramètres suivants ne peuvent pas être changés et ne seront peut-être " "pas mis à jour : %(keys)s" #, python-format msgid "The function %s is not supported in this version of HOT." msgstr "La fonction %s n'est pas prise en charge dans cette version de HOT." msgid "" "The gateway IP address. Set to any of [ null | ~ | \"\" ] to create/update a " "subnet without a gateway. If omitted when creation, neutron will assign the " "first free IP address within the subnet to the gateway automatically. If " "remove this from template when update, the old gateway IP address will be " "detached." msgstr "" "Adresse IP de la passerelle. Définie sur [ null | ~ | \"\" ] pour la " "création/mise à jour d'un sous-réseau sans passerelle. 
Si elle est omise " "lors de la création, Neutron affecte automatiquement la première adresse IP " "libre au sein du sous-réseau à la passerelle. Si elle est retirée du modèle " "lors d'une mise à jour, l'ancienne adresse IP de passerelle est déconnectée." #, python-format msgid "The grouped parameter %s does not reference a valid parameter." msgstr "Le paramètre groupé %s ne référence pas de paramètre valide." msgid "The host from the container URL." msgstr "Hôte de l'URL de conteneur." msgid "The host from which a user is allowed to connect to the database." msgstr "" "L'hôte depuis lequel un utilisateur est autorisé à se connecter à la base de " "données." msgid "" "The id for L2 segment on the external side of the network gateway. Must be " "specified when using vlan." msgstr "" "ID du segment L2 du côté externe de la passerelle réseau. Doit être indiqué " "lors de l'utilisation de VLAN." msgid "The identifier of the CA to use." msgstr "Identificateur de l'autorité de certification (CA) à utiliser." msgid "The image ID. Glance will generate a UUID if not specified." msgstr "ID image. Glance générera un UUID si non spécifié." msgid "The initiator of the ipsec site connection." msgstr "Initiateur de la connexion de site IPSec." msgid "The input string to be stored." msgstr "Chaîne d'entrée à stocker." msgid "The interface name for the network gateway." msgstr "Nom de l'interface pour la passerelle réseau." msgid "The internal network to connect on the network gateway." msgstr "Réseau interne pour se connecter à la passerelle réseau." msgid "The last operation for the database instance failed due to an error." msgstr "" "La dernière opération d'instance de base de données a échoué en raison d'une " "erreur." #, python-format msgid "The length must be at least %(min)s." msgstr "La longueur minimale doit être %(min)s." #, python-format msgid "The length must be in the range %(min)s to %(max)s." msgstr "La longueur doit être comprise entre %(min)s et %(max)s." #, python-format msgid "The length must be no greater than %(max)s." msgstr "La longueur maximale doit être %(max)s." msgid "The length of time, in minutes, to wait for the nested stack creation." msgstr "La durée, en minutes, d'attente de la création de la pile imbriquée." msgid "" "The list of HTTP status codes expected in response from the member to " "declare it healthy." msgstr "" "La liste des codes de statut HTTP attendus dans la réponse du membre pour le " "déclarer sain." msgid "The list of Nova server IDs load balanced." msgstr "Liste d'ID de serveur Nova avec équilibrage de charge." msgid "The list of Pools related to this monitor." msgstr "Liste des pools associés à ce moniteur." msgid "The list of attachments of the volume." msgstr "Liste des connexions du volume." msgid "" "The list of configurations for the different lifecycle actions of the " "represented software component." msgstr "" "Liste de configurations pour les différentes actions de cycle de vie du " "composant logiciel représenté." msgid "The list of instance IDs load balanced." msgstr "La liste d'ID d'instance avec équilibrage de charge." msgid "" "The list of resource types to create. This list may contain type names or " "aliases defined in the resource registry. Specific template names are not " "supported." msgstr "" "Liste des types de ressource à créer. Cette liste peut comporter des noms de " "type ou des alias définis dans le registre de ressources. Les noms de " "modèles spécifiques ne sont pas pris en charge." 
msgid "The list of tags to associate with the volume." msgstr "Liste des balises à associer au volume." msgid "The load balancer transport protocol to use." msgstr "Le protocole de transport d'équilibreur de charge à utiliser." msgid "" "The location where the volume is exposed on the instance. This assignment " "may not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "Emplacement où le volume est exposé sur l'instance. Cette affectation peut " "ne pas être respectée et il est conseillé d'utiliser le chemin /dev/disk/by-" "id/virtio- à la place." msgid "The manually assigned alternative public IPv4 address of the server." msgstr "Adresse IPv4 publique alternative du serveur affectée manuellement." msgid "The manually assigned alternative public IPv6 address of the server." msgstr "Adresse IPv6 publique alternative du serveur affectée manuellement." msgid "The maximum number of connections per second allowed for the vip." msgstr "Nombre maximum de connexion par seconde autorisé pour vip." msgid "" "The maximum number of connections permitted for this load balancer. Defaults " "to -1, which is infinite." msgstr "" "Nombre maximal de connexions autorisées pour cet équilibreur de charge. La " "valeur par défaut est -1, qui correspond à infini." msgid "The maximum number of resources to create at once." msgstr "Nombre maximal de ressources à créer à la fois." msgid "The maximum number of resources to replace at once." msgstr "Nombre maximum de ressources à remplacer immédiatement." msgid "" "The maximum number of seconds to wait for the resource to signal completion. " "Once the timeout is reached, creation of the signal resource will fail." msgstr "" "Nombre de secondes d'attente pour la ressource du signalement de " "l'achèvement. Une fois le délai atteint, la création de la ressource du " "signal échoue." msgid "" "The maximum port number in the range that is matched by the security group " "rule. The port_range_min attribute constrains the port_range_max attribute. " "If the protocol is ICMP, this value must be an ICMP type." msgstr "" "Le numéro de port maximum de la plage qui est mis en correspondance par la " "règle de groupe de sécurité. L'attribut port_range_min contraint l'attribut " "port_range_max. Si le protocole est ICMP, cette valeur doit être un type " "ICMP." msgid "" "The maximum transmission unit size (in bytes) of the ipsec site connection." msgstr "" "Taille d'unité de transmission maximale (en octets) de la connexion la " "connexion de site IPSec." msgid "The maximum transmission unit size(in bytes) for the network." msgstr "Taille MTU maximale (en octets) pour le réseau." msgid "The metering label ID to associate with this metering rule." msgstr "ID d'étiquette de mesure à associer à cette règle de mesure." msgid "" "The metric dimensions to match to the alarm dimensions. One or more " "dimension key names separated by a comma." msgstr "" "Dimensions de mesure qui doivent correspondre aux dimensions d'alarme. Un ou " "plusieurs noms de clé de dimension séparés par une virgule." msgid "" "The minimum number of characters from this character class that will be in " "the generated string." msgstr "" "Nombre minimal de caractères de cette classe qui figureront dans la chaîne " "générée." msgid "" "The minimum number of characters from this sequence that will be in the " "generated string." msgstr "" "Nombre minimal de caractères de cette séquence qui figureront dans la chaîne " "générée." 
msgid "" "The minimum number of resources in service while rolling updates are being " "executed." msgstr "" "Nombre minimal de ressources en service tandis que les mises à jour en " "continu sont exécutées." msgid "" "The minimum port number in the range that is matched by the security group " "rule. If the protocol is TCP or UDP, this value must be less than or equal " "to the value of the port_range_max attribute. If the protocol is ICMP, this " "value must be an ICMP type." msgstr "" "Le numéro de port minimum de la plage qui est mis en correspondance par la " "règle de groupe de sécurité. Si le protocole est TCP ou UDP, cette valeur " "doit être inférieure ou égale à la valeur de l'attribut port_range_max. Si " "le protocole est ICMP, cette valeur doit être un type ICMP." msgid "The name for the QoS policy." msgstr "Nom de la stratégie de qualité de service." msgid "The name for the address scope." msgstr "Nom du périmètre d'adresse." msgid "" "The name of the driver used for instantiating container networks. By " "default, Magnum will choose the pre-configured network driver based on COE " "type." msgstr "" "Nom du pilote utilisé pour l'instanciation des réseaux de conteneurs. Par " "défaut, Magnum choisit le pilote réseau préconfiguré en fonction du type COE." msgid "The name of the error document." msgstr "Nom du document d'erreur." msgid "The name of the hosted zone that is associated with the LoadBalancer." msgstr "Nom de la zone hébergée qui est associée au LoadBalancer." msgid "The name of the ike policy." msgstr "Nom de la stratégie IKE." msgid "The name of the index document." msgstr "Nom du document d'index." msgid "The name of the ipsec policy." msgstr "Nom de la stratégie IPSec." msgid "The name of the ipsec site connection." msgstr "Nom de la connexion de site IPSec." msgid "The name of the key pair." msgstr "Nom de la paire de clés." msgid "The name of the network gateway." msgstr "Nom de la passerelle réseau." msgid "The name of the network." msgstr "Nom du réseau." msgid "The name of the router." msgstr "Nom du routeur." msgid "The name of the subnet." msgstr "Nom du sous réseau." msgid "The name of the user that the new key will belong to." msgstr "Nom d'utilisateur auquel la nouvelle clé appartiendra." msgid "" "The name of the virtual device. The name must be in the form ephemeralX " "where X is a number starting from zero (0); for example, ephemeral0." msgstr "" "Nom du périphérique virtuel. Le nom doit figurer dans le format ephemeralX " "où X est un numéro partant de zéro (0) ; par exemple, ephemeral0." msgid "The name of the vpn service." msgstr "Nom du service vpn." msgid "The name or ID of QoS policy to attach to this network." msgstr "" "Nom ou ID de la stratégie de qualité de service à connecter à ce réseau." msgid "The name or ID of QoS policy to attach to this port." msgstr "Nom ou ID de la stratégie de qualité de service à connecter à ce port." msgid "The name or ID of parent of this keystone project in hierarchy." msgstr "Nom ou ID du parent de ce projet Keystone dans la hiérarchie." msgid "The name or ID of target cluster." msgstr "Nom ou ID du cluster cible." msgid "The name or ID of the bay model." msgstr "Nom ou ID du modèle de baie." msgid "The name or ID of the subnet on which to allocate the VIP address." msgstr "Nom ou ID du sous-réseau sur lequel allouer l'adresse VIP." msgid "The name or ID of the subnet pool." msgstr "Nom ou ID du pool de sous-réseaux." msgid "The name or id of the Senlin profile." msgstr "Nom ou ID du profil Senlin." 
msgid "The negotiation mode of the ike policy." msgstr "Mode de négociation de la stratégie IKE." msgid "The next hop for the destination." msgstr "Prochain saut pour la destination." msgid "The node count for this bay." msgstr "Nombre de noeuds pour cette baie." msgid "The notification methods to use when an alarm state is ALARM." msgstr "" "Méthodes de notification à utiliser quand l'état d'une alarme est ALARM." msgid "The notification methods to use when an alarm state is OK." msgstr "Méthodes de notification à utiliser quand l'état d'une alarme est OK." msgid "The notification methods to use when an alarm state is UNDETERMINED." msgstr "" "Méthodes de notification à utiliser quand l'état d'une alarme est " "UNDETERMINED." msgid "The number of I/O operations per second that the volume supports." msgstr "Nombre d'opérations d'E-S par seconde prises en charge par le volume." msgid "The number of bytes stored in the container." msgstr "Nombre d'octets stockés dans le conteneur." msgid "" "The number of consecutive health probe failures required before moving the " "instance to the unhealthy state" msgstr "" "Le nombre d'échecs d'analyse d'intégrité consécutives requis avant de faire " "passer l'instance à l'état défectueux" msgid "" "The number of consecutive health probe successes required before moving the " "instance to the healthy state." msgstr "" "Le nombre d'analyses d'intégrité consécutives réussies requises avant de " "faire passer l'instance à l'état sain." msgid "The number of master nodes for this bay." msgstr "Nombre de noeuds maître pour cette baie." msgid "The number of objects stored in the container." msgstr "Nombre d'objets stockés dans le conteneur." msgid "The number of replicas to be created." msgstr "Nombre de répliques à créer." msgid "The number of resources to create." msgstr "Nombre de ressources à créer." msgid "The number of seconds to wait between batches of updates." msgstr "Nombre de secondes d'attente entre deux lots de mises à jour." msgid "The number of seconds to wait between batches." msgstr "Nombre de secondes d'attente entre deux lots." msgid "The number of seconds to wait for the cluster actions." msgstr "Délai (en nombre de secondes) à observer pour les actions de cluster." msgid "" "The number of seconds to wait for the correct number of signals to arrive." msgstr "" "Nombre de secondes d'attente de l'arrivée du nombre correct de signaux." msgid "" "The number of success signals that must be received before the stack " "creation process continues." msgstr "" "Nombre de signaux de succès qui doivent être reçus avant que le processus de " "création de la pile se poursuive." msgid "" "The optional public key. This allows users to supply the public key from a " "pre-existing key pair. If not supplied, a new key pair will be generated." msgstr "" "Clé publique facultative. Cela permet aux utilisateurs de fournir la clé " "publique à partir d'une paire de clés préexistante. Si elle n'est pas " "fournie, une nouvelle paire de clés sera générée." msgid "" "The owner tenant ID of the address scope. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "ID locataire du propriétaire du périmètre d'adresse. Seuls des " "administrateurs peuvent spécifier un ID locataire autre que le leur." msgid "The owner tenant ID of this QoS policy." msgstr "ID locataire propriétaire de cette stratégie de qualité de service." msgid "The owner tenant ID of this rule." msgstr "ID locataire propriétaire de cette règle." 
msgid "" "The owner tenant ID. Only required if the caller has an administrative role " "and wants to create a RBAC for another tenant." msgstr "" "ID locataire propriétaire. Requis uniquement si l'appelant a un rôle " "administratif et qu'il souhaite créer une stratégie RBAC pour un autre " "locataire." msgid "The parameters passed to action when the receiver is signaled." msgstr "Paramètres transmis à l'action lorsque le récepteur est signalé." msgid "The parent URL of the container." msgstr "URL parente du conteneur." msgid "The payload of the created certificate, if available." msgstr "Contenu du certificat créé, le cas échéant." msgid "The payload of the created intermediates, if available." msgstr "Contenu des intermédiaires créés, le cas échéant." msgid "The payload of the created private key, if available." msgstr "Contenu de la clé privée créée, le cas échéant." msgid "The payload of the created public key, if available." msgstr "Contenu de la clé publique créée, le cas échéant." msgid "The perfect forward secrecy of the ike policy." msgstr "Mode PFS (Perfect Forward Secrecy) de la stratégie IKE." msgid "The perfect forward secrecy of the ipsec policy." msgstr "Mode PFS (Perfect Forward Secrecy) de la stratégie IPSec." #, python-format msgid "The personality property may not contain greater than %s entries." msgstr "La propriété de personnalité ne peut pas contenir plus de %s entrées." msgid "The physical mechanism by which the virtual network is implemented." msgstr "Mécanisme physique d'implémentation du réseau virtuel." msgid "The port being checked." msgstr "Le port en cours de vérification." msgid "The port id, either subnet or port_id should be specified." msgstr "Le sous-réseau, soit subnet ou port_id, doit être indiqué." msgid "The port on which the server will listen." msgstr "Le port sur lequel le serveur écoutera." msgid "The port, either subnet or port should be specified." msgstr "Le port (ou le sous-réseau) doit être indiqué." msgid "The pre-shared key string of the ipsec site connection." msgstr "Chaîne de clés prépartagées de la connexion de site IPSec." msgid "The private key if it has been saved." msgstr "La clé privée si elle a été enregistrée." msgid "The profile of certificate to use." msgstr "Profil du certificat à utiliser." msgid "" "The protocol that is matched by the security group rule. Valid values " "include tcp, udp, and icmp." msgstr "" "Le protocole qui est mis en correspondance par la règle de groupe de " "sécurité. Les valeurs valides incluent tcp, udp et icmp." msgid "The public key." msgstr "Clé publique." msgid "The query string is malformed" msgstr "La chaine de caractère de la requête est mal-formée" msgid "The query to filter the metrics." msgstr "ReEquête de filtrage des métriques." msgid "" "The random string generated by this resource. This value is also available " "by referencing the resource." msgstr "" "Chaîne aléatoire générée par cette ressource. Cette valeur est également " "disponible par référencement de la ressource." msgid "The reference to a LaunchConfiguration resource." msgstr "Référence à une ressource LaunchConfiguration." msgid "" "The remote IP prefix (CIDR) to be associated with this security group rule." msgstr "" "Le préfixe d'IP distante (CIDR) à associer à cette règle de groupe de " "sécurité." msgid "The remote branch router identity of the ipsec site connection." msgstr "Identité du routeur de branche distant de la connexion de site IPSec." 
msgid "The remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "" "Adresse IPv4 ou adresse IPv6 ou nom de domaine complet publics de routeur de " "branche distant." msgid "" "The remote group ID to be associated with this security group rule. If no " "value is specified then this rule will use this security group for the " "remote_group_id. The remote mode parameter must be set to \"remote_group_id" "\"." msgstr "" "Identificateur de groupe distant à associer à cette règle de groupe de " "sécurité. Si aucune valeur n'est indiquée, cette règle utilise le groupe de " "sécurité pour remote_group_id. Le paramètre de mode distant doit être défini " "sur \"remote_group_id\"." msgid "The remote subnet(s) in CIDR format of the ipsec site connection." msgstr "" "Sous-réseau(x) distant(s) au format CIDR de la connexion de site IPSec." msgid "The request is missing an action or operation parameter" msgstr "Il manque un paramètre d'action ou d'opération dans la demande" msgid "The request processing has failed due to an internal error" msgstr "Le traitement de la demande a échoué en raison d'une erreur interne" msgid "The request signature does not conform to AWS standards" msgstr "La signature de la requête n'est pas conforme au standards AWS" msgid "" "The request signature we calculated does not match the signature you provided" msgstr "" "La signature de demande que nous avons calculée ne correspond pas à la " "signature que vous avez fournie" msgid "The requested action is not yet implemented" msgstr "L'action demandée n'est pas encore implémentée" #, python-format msgid "The resource %s is already being updated." msgstr "La ressource %s est déjà en cours de mise à jour." msgid "The resource href of the queue." msgstr "href ressource de la file d'attente." msgid "The route mode of the ipsec site connection." msgstr "Mode de route de la connexion de site IPSec." msgid "The router id." msgstr "Identifiant du routeur." msgid "The router to which the vpn service will be inserted." msgstr "Routeur vers lequel le service VPN sera inséré." msgid "The router." msgstr "Routeur." msgid "The safety assessment lifetime configuration for the ike policy." msgstr "" "La configuration de durée de vie d'évaluation de sécurité pour la stratégie." msgid "The safety assessment lifetime configuration of the ipsec policy." msgstr "" "Configuration de durée de vie d'évaluation de sécurité de la stratégie IPSec." msgid "" "The security group that you can use as part of your inbound rules for your " "LoadBalancer's back-end instances." msgstr "" "Groupe de sécurité que vous pouvez utiliser dans le cadre de vos règles " "entrantes pour vos instances dorsales de LoadBalancer." msgid "" "The server could not comply with the request since it is either malformed or " "otherwise incorrect." msgstr "" "Le serveur ne peut pas se conformer à la requête car elle est mal formulée " "ou incorrecte. " msgid "The set of parameters passed to this nested stack." msgstr "Jeu de paramètres transmis à cette pile imbriquée." msgid "The size in GB of the docker volume." msgstr "Taille en Go du volume de docker." msgid "The size of AutoScalingGroup can not be less than zero" msgstr "La taille d'AutoScalingGroup ne peut pas être inférieure à zéro" msgid "" "The size of the prefix to allocate when the cidr or prefixlen attributes are " "not specified while creating a subnet." msgstr "" "Taille de préfixe à allouer lorsque les attributs cidr ou prefixlen ne sont " "pas spécifiés lors de la création d'un sous-réseau." 
msgid "The size of the swap, in MB." msgstr "Taille du swap, en Mo." msgid "The size of the volume in GB." msgstr "Taille du volume en Go." msgid "" "The size of the volume, in GB. It is safe to leave this blank and have the " "Compute service infer the size." msgstr "" "Taille du volume en Go. Il est prudent de ne pas remplir cette zone et de " "laisser le service Compute calculer la taille." msgid "" "The size of the volume, in GB. Must be equal or greater than the size of the " "snapshot. It is safe to leave this blank and have the Compute service infer " "the size." msgstr "" "Taille du volume en Go. Doit être supérieure ou égale à la taille de " "l'instantané. Il est prudent de conserver cet espace et de laisser le " "service de calcul se charger de la taille." msgid "The snapshot the volume was created from, if any." msgstr "Instantané à partir duquel le volume a été créé, le cas échéant." msgid "The source of certificate request." msgstr "Source de la demande de certificat." #, python-format msgid "The specified reference \"%(resource)s\" (in %(key)s) is incorrect." msgstr "La référence spécifiée \"%(resource)s\" (dans %(key)s) est incorrecte." msgid "The start and end addresses for the allocation pools." msgstr "Adresses de début et de fin pour les pools d'allocation." msgid "The status of the container." msgstr "Statut du conteneur." msgid "The status of the firewall." msgstr "Status du pare-feu" msgid "The status of the ipsec site connection." msgstr "Statut de la connexion de site IPSec." msgid "The status of the network." msgstr "Statut du réseau." msgid "The status of the order." msgstr "Statut de la commande." msgid "The status of the port." msgstr "Statut du port." msgid "The status of the router." msgstr "Statut du routeur." msgid "The status of the secret." msgstr "Statut du secret." msgid "The status of the vpn service." msgstr "Status du service vpn." msgid "" "The string that was stored. This value is also available by referencing the " "resource." msgstr "" "Chaîne stockée. Cette valeur est également accessible via le référencement " "de la ressource." msgid "The subject of the certificate request." msgstr "Objet de la demande de certificat." msgid "" "The subnet for the port on which the members of the pool will be connected." msgstr "" "Le sous-réseau du port sur lequel les membres du pool seront connectés." msgid "The subnet, either subnet or port should be specified." msgstr "Le sous-réseau (ou le port) doit être indiqué." msgid "The tag key name." msgstr "Nom de clé de balise." msgid "The tag value." msgstr "Valeur de balise." msgid "The template is not a JSON object or YAML mapping." msgstr "Le modèle n'est pas un objet JSON ni un mappage YAML." #, python-format msgid "The template section is invalid: %(section)s" msgstr "La section de modèle n'est pas correcte : %(section)s" #, python-format msgid "The template version is invalid: %(explanation)s" msgstr "La version de modèle n'est pas correcte : %(explanation)s" msgid "The tenant owning this floating IP." msgstr "Locataire possédant cette adresse IP flottante." msgid "The tenant owning this network." msgstr "Locataire possédant ce réseau." msgid "The time range in seconds." msgstr "Plage de temps en secondes." msgid "The timestamp indicating volume creation." msgstr "Horodatage indiquant la création du volume." msgid "The transform protocol of the ipsec policy." msgstr "Protocole de transformation de la stratégie IPSec." msgid "The type of profile." msgstr "Type du profil." msgid "The type of senlin policy." 
msgstr "Type de la stratégie senlin." msgid "The type of the certificate request." msgstr "Type de la demande de certificat." msgid "The type of the order." msgstr "Type de commande." msgid "The type of the resources in the group." msgstr "Type des ressources dans le groupe." msgid "The type of the secret." msgstr "Type du secret." msgid "The type of the volume mapping to a backend, if any." msgstr "Le type du volume mappé à un backend, le cas échéant." msgid "The type/format the secret data is provided in." msgstr "Type/format dans lesquels les données de secret sont fournies." msgid "The type/mode of the algorithm associated with the secret information." msgstr "Type/mode de l'algorithme associé aux informations de secret." msgid "The unencrypted plain text of the secret." msgstr "Texte en clair non chiffré du secret." msgid "" "The unique identifier of ike policy associated with the ipsec site " "connection." msgstr "" "Identificateur unique de la stratégie IKE associée à la connexion la " "connexion de site IPSec." msgid "" "The unique identifier of ipsec policy associated with the ipsec site " "connection." msgstr "" "Identificateur unique de la stratégie IPSec associée à la connexion la " "connexion de site IPSec." msgid "" "The unique identifier of the router to which the vpn service was inserted." msgstr "" "Identificateur unique du routeur dans lequel le service VPN a été inséré." msgid "" "The unique identifier of the subnet in which the vpn service was created." msgstr "" "Identificateur unique du sous-réseau dans lequel le service VPN a été créé." msgid "The unique identifier of the tenant owning the ike policy." msgstr "Identificateur unique du locataire possédant la stratégie IKE." msgid "The unique identifier of the tenant owning the ipsec policy." msgstr "Identificateur unique du locataire possédant la stratégie IPSec." msgid "The unique identifier of the tenant owning the ipsec site connection." msgstr "" "Identificateur unique du locataire possédant la connexion de site IPSec." msgid "The unique identifier of the tenant owning the vpn service." msgstr "Identificateur unique du locataire possédant le service VPN." msgid "" "The unique identifier of vpn service associated with the ipsec site " "connection." msgstr "" "Identificateur unique du service de réseau privé virtuel associé à la " "connexion la connexion de site IPSec." msgid "" "The user-defined region ID and should unique to the OpenStack deployment. " "While creating the region, heat will url encode this ID." msgstr "" "ID région défini par l'utilisateur et qui doit être unique pour le " "déploiement OpenStack. Lors de la création de la région, Heat codera cet ID " "sous forme d'URL." msgid "" "The value for the socket option TCP_KEEPIDLE. This is the time in seconds " "that the connection must be idle before TCP starts sending keepalive probes." msgstr "" "Valeur de l'option de socket TCP_KEEPIDLE. Durée en secondes pendant " "laquelle la connexion doit être inactive avant que le protocole TCP commence " "à envoyer des sondes de signal de présence." #, python-format msgid "The value must be at least %(min)s." msgstr "La valeur doit être au moins %(min)s." #, python-format msgid "The value must be in the range %(min)s to %(max)s." msgstr "La valeur doit être comprise entre %(min)s et %(max)s." #, python-format msgid "The value must be no greater than %(max)s." msgstr "La valeur ne doit pas être plus grande que %(max)s." 
#, python-format msgid "The values of the \"for_each\" argument to \"%s\" must be lists" msgstr "" "Les valeurs de l'argument \"for_each\" pour \"%s\" doivent être des listes" msgid "The version of the ike policy." msgstr "La version de la stratégie IKE." msgid "" "The vnic type to be bound on the neutron port. To support SR-IOV PCI " "passthrough networking, you can request that the neutron port to be realized " "as normal (virtual nic), direct (pci passthrough), or macvtap (virtual " "interface with a tap-like software interface). Note that this only works for " "Neutron deployments that support the bindings extension." msgstr "" "Type vnic à lier au port Neutron. Pour la prise en charge du réseau " "passthrough PCI SR-IOV, vous pouvez demander que le port Neutron soit " "réalisé selon le mode normal (contrôleur NIC virtuel), direct (passthrough " "PCI) ou macvtap (interface virtuelle dotée d'une interface logicielle de " "type TAP). Notez que cette option ne fonctionne que pour les déploiements " "Neutron acceptant l'extension de liaisons." msgid "The volume type." msgstr "Type de volume." msgid "The volume used as source, if any." msgstr "Volume utilisé comme source, le cas échéant." msgid "The volume_id can be boot or non-boot device to the server." msgstr "volume_id peut être une unité d'amorçage ou non sur le serveur." msgid "The website endpoint for the specified bucket." msgstr "Noeud final de site Web pour le compartiment spécifié." #, python-format msgid "There is no rule %(rule)s. List of allowed rules is: %(rules)s." msgstr "" "Il n'existe aucune règle %(rule)s. Liste des règles autorisées : %(rules)s." msgid "" "There is no such option during 5.0.0, so need to make this attribute " "unsupported, otherwise error will raised." msgstr "" "Il n'y a pas d'option de ce type en version 5.0.0 ; cet attribut ne doit " "donc pas être pris en charge, sinon cela va générer une erreur." msgid "" "There is no such option during 5.0.0, so need to make this property " "unsupported while it not used." msgstr "" "Il n'y a pas d'option de ce type en version 5.0.0 ; cette propriété ne doit " "donc pas être prise en charge si elle n'est pas utilisée." #, python-format msgid "" "There was an error loading the definition of the global resource type " "%(type_name)s." msgstr "" "Une erreur s'est produite lors du chargement de la définition du type de " "ressource global %(type_name)s." msgid "This endpoint is enabled or disabled." msgstr "Ce noeud final est activé ou désactivé." msgid "This project is enabled or disabled." msgstr "Ce projet est activé ou désactivé." msgid "This region is enabled or disabled." msgstr "Cette région est activée ou désactivée." msgid "This service is enabled or disabled." msgstr "Ce service est activé ou désactivé." msgid "Threshold to evaluate against." msgstr "Seuil par rapport auquel effectuer l'évaluation." msgid "Time To Live (Seconds)." msgstr "Durée de vie (secondes)." msgid "Time of the first execution in format \"YYYY-MM-DD HH:MM\"." msgstr "Date/heure de la première exécution, au format \"AAAA-MM-JJ HH:MM\"." msgid "Time of the next execution in format \"YYYY-MM-DD HH:MM:SS\"." msgstr "Date/heure de l'exécution suivante, au format \"AAAA-MM-JJ HH:MM:SS\"." msgid "" "Timeout for client connections' socket operations. If an incoming connection " "is idle for this number of seconds it will be closed. A value of '0' means " "wait forever." msgstr "" "Délai d'attente à observer pour les opérations de socket des connexions " "client. 
La connexion entrante est fermée si elle reste en veille pendant ce " "délai en nombre de secondes. La valeur '0' signifie une attente illimitée." msgid "Timeout for creating the bay in minutes. Set to 0 for no timeout." msgstr "" "Délai d'attente, en minutes, pour la création de la baie. Définissez la " "valeur sur 0 pour ne pas appliquer de délai d'attente." msgid "Timeout in seconds for stack action (ie. create or update)." msgstr "" "Délai d'attente en secondes pour l'action de pile (par ex. création ou mise " "à jour)." msgid "" "Toggle to enable/disable caching when Orchestration Engine looks for other " "OpenStack service resources using name or id. Please note that the global " "toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use " "this feature." msgstr "" "Activez/Désactivez la mise en cache lorsque le moteur Orchestration " "recherche d'autres ressources de service OpenStack à l'aide d'un nom ou d'un " "ID. Notez que l'activation/la désactivation globale pour oslo." "cache(enabled=True dans le groupe [cache]) doit être activée pour " "l'utilisation de cette fonction." msgid "" "Toggle to enable/disable caching when Orchestration Engine retrieves " "extensions from other OpenStack services. Please note that the global toggle " "for oslo.cache(enabled=True in [cache] group) must be enabled to use this " "feature." msgstr "" "Activez/Désactivez la mise en cache lorsque le moteur Orchestration extrait " "des extensions d'autres services OpenStack. Notez que l'activation globale " "pour oslo.cache(enabled=True dans le groupe [cache]) doit être activée pour " "l'utilisation de cette fonction." msgid "" "Toggle to enable/disable caching when Orchestration Engine validates " "property constraints of stack.During property validation with constraints " "Orchestration Engine caches requests to other OpenStack services. Please " "note that the global toggle for oslo.cache(enabled=True in [cache] group) " "must be enabled to use this feature." msgstr "" "Activez/Désactivez la mise en cache lorsque le moteur Orchestration valide " "les contraintes de propriété de la pile. Pendant la validation de propriété " "avec des contraintes, le moteur Orchestration met en cache les demandes vers " "d'autres services OpenStack. Notez que l'activation globale pour oslo." "cache(enabled=True dans le groupe [cache]) doit être activée pour " "l'utilisation de cette fonction." msgid "" "Token for stack-user which can be used for signalling handle when " "signal_transport is set to TOKEN_SIGNAL. None for all other signal " "transports." msgstr "" "Jeton d'utilisateur de pile qui peut être utilisé pour signaler le " "descripteur quand signal_transport est défini sur TOKEN_SIGNAL. Valeur None " "pour tout autre transport de signal." msgid "" "Tokens are not needed for Swift TempURLs. This attribute is being kept for " "compatibility with the OS::Heat::WaitConditionHandle resource." msgstr "" "Les jetons ne sont pas nécessaires pour les TempURL Swift. Cet attribut est " "gardé pour la compatibilité avec la ressource OS::Heat::WaitConditionHandle." msgid "Topic" msgstr "Sujet" msgid "Transform protocol for the ipsec policy." msgstr "Protocole de transformation pour la stratégie IPSec." msgid "True if alarm evaluation/actioning is enabled." msgstr "True : l'évaluation/déclenchement d'alarme est activé." msgid "" "True if the system should remember a generated private key; False otherwise." msgstr "" "True si le système doit mémoriser une clé privée générée ; sinon False."
msgid "Type of access that should be provided to guest." msgstr "Type d'accès devant être fourni à l'invité." msgid "Type of adjustment (absolute or percentage)." msgstr "Type d'ajustement (absolu ou pourcentage)." msgid "" "Type of endpoint in Identity service catalog to use for communication with " "the OpenStack service." msgstr "" "Type de noeud final dans le catalogue de service d'identité à utiliser pour " "la communication avec le service OpenStack." msgid "Type of keystone Service." msgstr "Type du service Keystone." msgid "Type of receiver." msgstr "Type de récepteur." msgid "Type of the data source." msgstr "Type de la source de données." msgid "Type of the notification." msgstr "Type de la notification." msgid "Type of the object that RBAC policy affects." msgstr "Type d'objet affecté par cette stratégie RBAC." msgid "Type of the value of the input." msgstr "Type de valeur d'entrée" msgid "Type of the value of the output." msgstr "Type de la valeur de la sortie." msgid "Type of the volume to create on Cinder backend." msgstr "Type de volume à créer sur le backend Cinder." msgid "URL for API authentication" msgstr "URL pour l'authentification d'API" msgid "URL for the data source." msgstr "URL de la source de données." msgid "" "URL for the job binary. Must be in the format swift:/// or " "internal-db://." msgstr "" "URL du binaire de travail. Doit être au format swift :/// " "or internal-db://." msgid "" "URL of TempURL where resource will signal completion and optionally upload " "data." msgstr "" "URL de TempURL où la ressource signalera l'achèvement et le cas échéant " "l'envoi par téléchargement des données." msgid "URL of keystone service endpoint." msgstr "URL du noeud final de service Keystone." msgid "URL of the Heat CloudWatch server." msgstr "URL du serveur Heat CloudWatch." msgid "" "URL of the Heat metadata server. NOTE: Setting this is only needed if you " "require instances to use a different endpoint than in the keystone catalog" msgstr "" "URL du serveur de métadonnées Heat. REMARQUE : La définition de l'URL est " "nécessaire uniquement si vous avez besoin que des instances utilisent un " "noeud final autre que celui du catalogue keystone." msgid "URL of the Heat waitcondition server." msgstr "URL du serveur Heat Waitcondition." msgid "" "URL where the data for this image already resides. For example, if the image " "data is stored in swift, you could specify \"swift://example.com/container/" "obj\"." msgstr "" "URL où les données de cette image résident déjà. Par exemple, si les données " "d'image sont stockées dans SWIFT, vous pouvez indiquer \"swift://exemple.com/" "container/obj\"." msgid "UUID of the internal subnet to which the instance will be attached." msgstr "" "Identificateur unique universel du sous-réseau interne auquel l'instance " "sera jointe." #, python-format msgid "" "Unable to find neutron provider '%(provider)s', available providers are " "%(providers)s." msgstr "" "Fournisseur neutron '%(provider)s' introuvable ; fournisseurs disponibles : " "%(providers)s." #, python-format msgid "" "Unable to find senlin policy type '%(pt)s', available policy types are " "%(pts)s." msgstr "" "Type de stratégie senlin '%(pt)s' introuvable ; types de stratégie " "disponibles : %(pts)s." #, python-format msgid "" "Unable to find senlin profile type '%(pt)s', available profile types are " "%(pts)s." msgstr "" "Type de profil senlin '%(pt)s' introuvable ; types de profil disponibles : " "%(pts)s." 
#, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "Impossible de charger %(app_name)s depuis le fichier de configuration " "%(conf_file)s.\n" "Résultat : %(e)r" #, python-format msgid "Unable to locate config file [%s]" msgstr "Impossible de localiser le fichier de configuration [%s]" #, python-format msgid "Unexpected action %(action)s" msgstr "Action inattendue : %(action)s" #, python-format msgid "Unexpected action %s" msgstr "Action %s inattendue" #, python-format msgid "" "Unexpected properties: %(unexpected)s. Only these properties are allowed for " "%(type)s type of order: %(allowed)s." msgstr "" "Propriétés inattendues : %(unexpected)s. Seules les propriétés suivantes " "sont autorisées pour le type de commande %(type)s : %(allowed)s." msgid "Unique identifier for the device." msgstr "Identificateur unique pour l'unité." msgid "" "Unique identifier for the ike policy associated with the ipsec site " "connection." msgstr "" "Identificateur unique pour la stratégie IKE associée à la connexion de la " "connexion de site IPSec." msgid "" "Unique identifier for the ipsec policy associated with the ipsec site " "connection." msgstr "" "Identificateur unique pour la stratégie IKE associée à la connexion de la " "connexion de site IPSec." msgid "Unique identifier for the network owning the port." msgstr "Identificateur unique pour le réseau possédant le port." msgid "" "Unique identifier for the router to which the vpn service will be inserted." msgstr "" "Identificateur unique pour le routeur dans lequel le service VPN sera inséré." msgid "" "Unique identifier for the vpn service associated with the ipsec site " "connection." msgstr "" "Identificateur unique pour le service VPN associé à la connexion de site " "IPSec." msgid "" "Unique identifier of the firewall policy to which this firewall rule belongs." msgstr "" "Identificateur unique de la stratégie de pare-feu utilisée à laquelle cette " "règle de pare-feu appartient." msgid "Unique identifier of the firewall policy used to create the firewall." msgstr "" "Identificateur unique de la stratégie de pare-feu utilisée pour créer le " "pare-feu." msgid "Unknown" msgstr "Inconnu" #, python-format msgid "Unknown Property %s" msgstr "Propriété inconnue %s" #, python-format msgid "Unknown attribute \"%s\"" msgstr "Attribut inconnu \"%s\"" #, python-format msgid "Unknown error retrieving %s" msgstr "Erreur inconnue lors de l'extraction de %s" #, python-format msgid "Unknown input %s" msgstr "Entrée inconnue %s" #, python-format msgid "Unknown key(s) %s" msgstr "Clé(s) inconnue(s) %s" msgid "Unknown share_status during creation of share \"{0}\"" msgstr "share_status inconnu lors de la création du partage \"{0}\"" #, python-format msgid "Unknown status creating Bay '%(name)s' - %(reason)s" msgstr "Statut inconnu lors de la création de la baie '%(name)s' - %(reason)s" msgid "Unknown status during deleting share \"{0}\"" msgstr "Statut inconnu lors de la suppression du partage \"{0}\"" #, python-format msgid "Unknown status updating Bay '%(name)s' - %(reason)s" msgstr "" "Statut inconnu lors de la mise à jour de la baie '%(name)s' - %(reason)s" #, python-format msgid "Unknown status: %s" msgstr "Status inconnu: %s" #, python-format msgid "" "Unrecognized value \"%(value)s\" for \"%(name)s\", acceptable values are: " "true, false." msgstr "" "Valeur \"%(value)s\" non reconnue pour \"%(name)s\" ; les valeurs " "acceptables sont true et false." 
#, python-format msgid "Unsupported object type %(objtype)s" msgstr "Type d'objet non supporté %(objtype)s" #, python-format msgid "Unsupported resource '%s' in LoadBalancerNames" msgstr "Ressource non prise en charge %s dans LoadBalancerNames" msgid "Unversioned keystone url in format like http://0.0.0.0:5000." msgstr "URL d'origine non versionnée au format http://0.0.0.0:5000." #, python-format msgid "Update to properties %(props)s of %(name)s (%(res)s)" msgstr "Mettre à jour les propriétés %(props)s de %(name)s (%(res)s)" msgid "Updated At" msgstr "Mis à jour à" msgid "Updating a stack when it is deleting" msgstr "Mise à jour d'une pile lors d'une suppression" msgid "Updating a stack when it is suspended" msgstr "Mise à jour d'une pile lorsqu'elle est interrompue" msgid "" "Use get_resource|Ref command instead. For example: { get_resource : " " }" msgstr "" "Utilisez plutôt la commande get_resource|Ref. Par exemple : { get_resource : " " }" msgid "" "Use only with Neutron, to list the internal subnet to which the instance " "will be attached; needed only if multiple exist; list length must be exactly " "1." msgstr "" "Utiliser uniquement avec Neutron pour lister le sous-réseau internet auquel " "l'instance sera joint ; requis uniquement si plusieurs existent ; la liste " "doit avoir une longueur de 1 exactement." #, python-format msgid "Use property %s" msgstr "Utilisez la propriété %s" #, python-format msgid "Use property %s." msgstr "Utilisez la propriété %s." msgid "" "Use the `external_gateway_info` property in the router resource to set up " "the gateway." msgstr "" "Utilisez la propriété `external_gateway_info` dans la ressource de routeur " "pour configurer la passerelle." msgid "" "Use the networks attribute instead of first_address. For example: " "\"{get_attr: [, networks, , 0]}\"" msgstr "" "Utilisez l'attribut networks au lieu de first_address. Par exemple : " "\"{get_attr: [, networks, , 0]}\"" msgid "Use this resource at your own risk." msgstr "Vous utilisez cette ressource à vos propres risques." #, python-format msgid "User %s in invalid domain" msgstr "Utilisateur %s invalide domaine" #, python-format msgid "User %s in invalid project" msgstr "Utilisateur %s invalide projet" msgid "User ID for API authentication" msgstr "Identifiant utilisateur pour API authentification" msgid "User data to pass to instance." msgstr "Données utilisateur à transmettre à l'instance." msgid "User is not authorized to perform action" msgstr "L'utilisateur n'est pas autorisé à exécuter l'action" msgid "User name to create a user on instance creation." msgstr "" "Nom d'utilisateur pour créer un utilisateur lors de la création d'instance." msgid "Username associated with the AccessKey." msgstr "Nom d'utilisateur associé à la clé d'accès." msgid "Username for API authentication" msgstr "Nom d'utilisateur pour API authentification" msgid "Username for accessing the data source URL." msgstr "Nom d'utilisateur pour l'accès à l'URL de la source de données." msgid "Username for accessing the job binary URL." msgstr "Nom d'utilisateur pour accéder à l'URL du binaire de travail." msgid "Username of privileged user in the image." msgstr "Nom d'utilisateur de l'utilisateur avec privilèges dans l'image." msgid "VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN networks." msgstr "ID VLAN pour les réseaux VLAN ou ID tunnel pour les réseaux GRE/VXLAN." msgid "VPC ID for this gateway association." msgstr "ID VPC pour cette association de passerelle." msgid "VPC ID for where the route table is created." 
msgstr "ID VPC pour l'emplacement de création de la table de routage." msgid "" "Valid values are encrypt or decrypt. The heat-engine processes must be " "stopped to use this." msgstr "" "Les valeurs valides sont chiffrer ou déchiffrer. Les processus heat-engine " "doivent être arrêtés avant utilisation." #, python-format msgid "Value \"%(val)s\" is invalid for data type \"%(type)s\"." msgstr "" "La valeur \"%(val)s\" n'est pas valide pour le type de données \"%(type)s\"." #, python-format msgid "Value '%(value)s' is invalid for '%(name)s' which only accepts integer." msgstr "" "La valeur '%(value)s' n'est pas valide pour '%(name)s' qui accepte " "uniquement les nombres entiers." #, python-format msgid "" "Value '%(value)s' is invalid for '%(name)s' which only accepts non-negative " "integer." msgstr "" "La valeur '%(value)s' n'est pas valide pour '%(name)s' qui accepte " "uniquement les nombres entiersnon négatifs." #, python-format msgid "Value '%s' is not an integer" msgstr "La valeur '%s' n'est pas un entier" #, python-format msgid "Value must be a comma-delimited list string: %s" msgstr "" "La valeur doit être une chaîne de liste délimitée par des virgules : %s" #, python-format msgid "Value must be of type %s" msgstr "La valeur doit être de type %s" #, python-format msgid "Value must be valid JSON: %s" msgstr "La valeur doit être un JSON valide : %s" #, python-format msgid "Value must match pattern: %s" msgstr "La valeur doit correspondre au motif : %s" msgid "" "Value which can be set or changed on stack update to trigger the resource " "for replacement with a new random string. The salt value itself is ignored " "by the random generator." msgstr "" "Valeur qui peut être définie ou modifiée lors de la mise à jour de pile pour " "déclencher la ressource en vue d'un remplacement par une nouvelle chaîne " "aléatoire. La valeur de sel de cryptage elle-même est ignorée par le " "générateur aléatoire." msgid "" "Value which can be set to fail the resource operation to test failure " "scenarios." msgstr "" "Valeur pouvant être définie afin de mette en échec l'opération sur la " "ressource pour tester les scénarios d'échec." msgid "" "Value which can be set to trigger update replace for the particular resource." msgstr "" "Valeur pouvant être définie afin de déclencher le remplacement de mise à " "jour pour une ressource particulière." #, python-format msgid "Version %(objver)s of %(objname)s is not supported" msgstr "Version %(objver)s de %(objname)s n'est pas supporté" msgid "Version for the ike policy." msgstr "Version pour la stratégie IKE." msgid "Version of Hadoop running on instances." msgstr "Version de Hadoop exécutée sur les instances." msgid "Version of IP address." msgstr "Version de l'adresse IP." msgid "Vip associated with the pool." msgstr "VIP associé au pool." msgid "Volume attachment failed" msgstr "Echec de connexion du volume" msgid "Volume backup failed" msgstr "Echec de sauvegarde du volume" msgid "Volume backup restore failed" msgstr "Echec de restauration de la sauvegarde du volume" msgid "Volume create failed" msgstr "Echec de création du volume" msgid "Volume detachment failed" msgstr "Echec de déconnexion du volume" msgid "Volume in use" msgstr "Volume en service" msgid "Volume resize failed" msgstr "Échec de redimensionnement du volume" msgid "Volumes per node." msgstr "Volumes par node." msgid "Volumes to attach to instance." msgstr "Volumes à lier à l'instance." 
#, python-format msgid "WaitCondition invalid Handle %s" msgstr "Descripteur non valide %s de WaitCondition" #, python-format msgid "WaitCondition invalid Handle stack %s" msgstr "Pile de descripteur non valide %s de WaitCondition" #, python-format msgid "WaitCondition invalid Handle tenant %s" msgstr "Locataire de descripteur non valide %s de WaitCondition" msgid "Weight of pool member in the pool (default to 1)." msgstr "Pondération du membre du pool dans le pool (1 par défaut)." msgid "Weight of the pool member in the pool." msgstr "Pondération du membre de pool dans le pool." #, python-format msgid "Went to status %(resource_status)s due to \"%(status_reason)s\"" msgstr "" "Passage à l'état %(resource_status)s en raison de \"%(status_reason)s\"" msgid "" "When both ipv6_ra_mode and ipv6_address_mode are set, they must be equal." msgstr "" "Lorsque ipv6_ra_mode et ipv6_address_mode sont définis, ils doivent être " "égaux." msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Lors de l'exécution du serveur en mode SSL, vous devez spécifier une valeur " "d'option cert_file et key_file dans votre fichier de configuration" msgid "Whether enable this policy on that cluster." msgstr "Indique s'il faut ou non activer cette stratégie sur ce cluster." msgid "" "Whether the address scope should be shared to other tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only, and restricts changing of shared address scope to unshared with " "update." msgstr "" "Indique si le périmètre d'adresse doit être partagé avec d'autres " "locataires. notez que le paramètre de stratégie par défaut limite l'usage de " "cet attribut aux seuls administrateurs, et restreint le changement du " "périmètre d'adresse partagé à non partagé avec mise à jour." msgid "Whether the flavor is shared across all projects." msgstr "Indique si la version est partagée par l'ensemble des projets." msgid "" "Whether the image can be deleted. If the value is True, the image is " "protected and cannot be deleted." msgstr "" "Indique si l'image peut être supprimée ou non. Si la valeur est True, " "l'image est protégée et ne peut pas être supprimée." msgid "Whether the metering label should be shared across all tenants." msgstr "" "Indique si l'étiquette de mesure doit être partagée par tous les locataires." msgid "Whether the network contains an external router." msgstr "Indique si le réseau comporte un routeur externe." msgid "Whether the part content is text or multipart." msgstr "Si le contenu de la partie est du texte ou à plusieurs parties." msgid "" "Whether the subnet pool will be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Indique si le pool de sous-réseau va être partagé entre tous les locataires. " "Notez que le paramètre de stratégie par défaut limite l'utilisation de cet " "attribut aux administrateurs seulement." msgid "Whether the volume type is accessible to the public." msgstr "Indique si le type de volume est accessible au public." msgid "Whether this QoS policy should be shared to other tenants." msgstr "" "Indique si cette stratégie de qualité de service doit être partagée avec " "d'autres locataires." msgid "" "Whether this firewall should be shared across all tenants. 
NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "Indiquer si ce pare-feu doit être partagé par tous les locataires. " "REMARQUE : Dans Neutron, le paramètre de règle par défaut limite " "l'utilisation de cette propriété uniquement aux administrateurs." msgid "" "Whether this is default IPv4/IPv6 subnet pool. There can only be one default " "subnet pool for each IP family. Note that the default policy setting " "restricts administrative users to set this to True." msgstr "" "Indique s'il s'agit du pool de sous-réseau IPv4/IPv6 par défaut. Il ne peut " "y avoir qu'un seul pool de sous-réseau par défaut pour chaque famille IP. " "Notez que le paramètre de stratégie par défaut limite les administrateurs à " "définir la valeur sur True." msgid "Whether this network should be shared across all tenants." msgstr "Si ce réseau doit être partagé entre tous les locataires." msgid "" "Whether this network should be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Indique si ce réseau doit être partagé entre tous les locataires. Notez que " "le paramètre de règles par défaut limite l'utilisation de cet attribut " "uniquement aux administrateurs." msgid "" "Whether this policy should be audited. When set to True, each time the " "firewall policy or the associated firewall rules are changed, this attribute " "will be set to False and will have to be explicitly set to True through an " "update operation." msgstr "" "Si cette règle doit être auditée. Lorsqu'elle est réglée sur true, chaque " "fois que la règle de pare-feu ou les règles de pare-feu associées sont " "modifiées, cet attribut sera réglé sur False et devra être explicitement " "réglé sur True lors d'une opération de mise à jour." msgid "Whether this policy should be shared across all tenants." msgstr "Si cette règle doit être partagée entre tous les locataires." msgid "Whether this rule should be enabled." msgstr "Si cette règle doit être activée." msgid "Whether this rule should be shared across all tenants." msgstr "Si cette règle doit être partagée entre tous les locataires." msgid "Whether to enable the actions or not." msgstr "Indique s'il faut ou non activer les actions." msgid "Whether to specify a remote group or a remote IP prefix." msgstr "" "Indique si un préfixe de groupe distant ou d'IP distante doit être spécifié." msgid "" "Which lifecycle actions of the deployment resource will result in this " "deployment being triggered." msgstr "" "Indique les actions de cycle de vie de la ressource de déploiement qui " "seront lancées par le déclenchement de ce déploiement." msgid "" "Workflow additional parameters. If Workflow is reverse typed, params " "requires 'task_name', which defines initial task." msgstr "" "Paramètres supplémentaires du flux de travail. Si le flux de travail est de " "type inversé, ces paramètres requièrent 'task_name', qui définit la tâche " "initiale." msgid "Workflow description." msgstr "Description du flux de travail." msgid "Workflow name." msgstr "Nom du flux de travail." msgid "Workflow to execute." msgstr "Flux de travail à exécuter." msgid "Workflow type." msgstr "Type du flux de travail." #, python-format msgid "Wrong Arguments try: \"%s\"" msgstr "Arguments incorrects, essayez : \"%s\"" msgid "You are not authenticated." msgstr "Vous n'êtes pas authentifié." msgid "You are not authorized to complete this action." 
msgstr "Vous n'êtes pas autorisé à effectuer cette action." #, python-format msgid "You are not authorized to use %(action)s." msgstr "Vous n'êtes pas autorisés à utiliser %(action)s" #, python-format msgid "" "You have reached the maximum stacks per tenant, %d. Please delete some " "stacks." msgstr "" "Vous avez atteint le nombre maximal de piles par locataire : %d. Supprimez " "quelques piles." #, python-format msgid "could not find user %s" msgstr "Ne peut pas trouver l'utilisateur %s" msgid "deployment_id must be specified" msgstr "deployment_id doit être indiqué" msgid "" "deployments key not allowed in resource metadata with user_data_format of " "SOFTWARE_CONFIG" msgstr "" "clés de déploiement non autorisées dans les métadonnées de ressource avec le " "format_données_utilisateur de SOFTWARE_CONFIG" #, python-format msgid "deployments of server %s" msgstr "déploiements du serveur %s" #, python-format msgid "environment has wrong section \"%s\"" msgstr "l'environnement contient une section incorrecte \"%s\"" msgid "error in pool" msgstr "Erreur dans le pool" msgid "error in vip" msgstr "erreur dans vip" msgid "external network for the gateway." msgstr "réseau externe pour la passerelle." msgid "granularity should be days, hours, minutes, or seconds" msgstr "la granularité devrait être en jours, heures, minutes ou secondes" msgid "heat.conf misconfigured, auth_encryption_key must be 32 characters" msgstr "" "Configuration de heat.conf erronée, la clé auth_encryption_key doit " "comporter 32 caractères" msgid "" "heat.conf misconfigured, cannot specify \"stack_user_domain_id\" or " "\"stack_user_domain_name\" without \"stack_domain_admin\" and " "\"stack_domain_admin_password\"" msgstr "" "heat.conf mal configuré, impossible d'indiquer \"stack_user_domain_id\" ou " "\"stack_user_domain_name\" sans \"stack_domain_admin\" et " "\"stack_domain_admin_password\"" msgid "ipv6_ra_mode and ipv6_address_mode are not supported for ipv4." msgstr "" "ipv6_ra_mode et ipv6_address_mode ne sont pas pris en charge pour ipv4." msgid "limit cannot be less than 4" msgstr "la limite ne peut pas être inférieure à 4" #, python-format msgid "metadata setting for resource %s" msgstr "paramètre de métadonnées pour la ressource %s" msgid "min/max length must be integral" msgstr "La longueur minimale/maximale doit être intégrale" msgid "min/max must be numeric" msgstr "min/max doit être numérique" msgid "need more memory." msgstr "besoin de plus de mémoire." msgid "no resource data found" msgstr "Pas de ressource trouvée" msgid "no resources were found" msgstr "Pas de ressources trouvées" msgid "nova server metadata needs to be a Map." msgstr "Les métadonnées de serveur Nova doivent être une mappe." #, python-format msgid "previous_status must be SupportStatus instead of %s" msgstr "previous_status doit être SupportStatus plutôt que %s" #, python-format msgid "raw template with id %s not found" msgstr "le modèle brut avec l'ID %s est introuvable" #, python-format msgid "resource with id %s not found" msgstr "ressource avec identifiant %s non trouvé" #, python-format msgid "roles %s" msgstr "roles %s" msgid "segmentation_id cannot be specified except 0 for using flat" msgstr "" "segmentation_id ne peut pas être indiqué excepté 0 pour l'utilisation de flat" msgid "segmentation_id must be specified for using vlan" msgstr "segmentation_id doit être indiqué pour l'utilisation de VLAN" msgid "segmentation_id not allowed for flat network type." msgstr "segmentation_id non autorisé pour le type de réseau centralisé." 
msgid "server_id must be specified" msgstr "server_id doit être indiqué" #, python-format msgid "" "task %(task)s contains property 'requires' in case of direct workflow. Only " "reverse workflows can contain property 'requires'." msgstr "" "La tâche %(task)s contient la propriété 'requires' dans le cas d'un flux de " "travail direct. Seuls les flux de travail de type inversé peuvent contenir " "la propriété 'requires'." heat-10.0.2/heat/locale/de/0000775000175000017500000000000013343562672015320 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/de/LC_MESSAGES/0000775000175000017500000000000013343562672017105 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/de/LC_MESSAGES/heat.po0000666000175000017500000130635413343562351020376 0ustar zuulzuul00000000000000# Translations template for heat. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the heat project. # # Translators: # Andreas Jaeger , 2014 # Ettore Atalan , 2014 # Andreas Jaeger , 2016. #zanata # Robert Simai , 2016. #zanata # Frank Kloeker , 2018. #zanata msgid "" msgstr "" "Project-Id-Version: heat VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-21 16:05+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2018-02-21 05:44+0000\n" "Last-Translator: Frank Kloeker \n" "Language-Team: German\n" "Language: de\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" #, python-format msgid "\"%%s\" is not a valid keyword inside a %s definition" msgstr "\"%%s\" ist kein gültiges Schlüsselwort in einer %s-Definition" #, python-format msgid "" "\"%(fn)s\" expected 2 arguments of the form [values, sequence] but got " "%(len)d arguments instead" msgstr "" "\"%(fn)s\" erwartet 2 Argumente der Form [Werte, Sequenz], aber stattdessen " "Argumente %(len)d" #, python-format msgid "\"%(fn)s\" filters a list of values" msgstr "\"%(fn)s\" filtert eine Liste von Werten" #, python-format msgid "\"%(fn_name)s\" syntax should be %(example)s" msgstr "\"%(fn_name)s\" -Syntax sollte %(example)s sein" #, python-format msgid "\"%(fn_name)s\": %(err)s" msgstr "\"%(fn_name)s\": %(err)s" #, python-format msgid "" "\"%(name)s\" params must be strings or numbers, param %(param)s is not valid" msgstr "" "\"%(name)s\" Parameter müssen Strings oder Zahlen sein, Param %(param)s ist " "nicht gültig" #, python-format msgid "" "\"%(name)s\" params must be strings, numbers, list or map. Failed to json " "serialize %(value)s" msgstr "" "\"%(name)s\"-Parameter müssen Zeichenfolgen, Zahlen, Listen oder Karten " "sein. Fehler bei der JSerialisierung von %(value)s" #, python-format msgid "\"%(name)s\" syntax should be %(example)s" msgstr "\"%(name)s\"-Syntax sollte %(example)s sein" #, python-format msgid "" "\"%(section)s\" must contain a map of %(obj_name)s maps. Found a [%(_type)s] " "instead" msgstr "" "\"%(section)s\" muss eine Zuordnung von %(obj_name)s Maps enthalten. " "Stattdessen wurde [%(_type)s] gefunden" #, python-format msgid "\"%(url)s\" is not a valid SwiftSignalHandle. The %(part)s is invalid" msgstr "" "\"%(url)s\" ist kein gültiger SwiftSignalHandle. 
Der %(part)s ist ungültig" #, python-format msgid "\"%(value)s\" does not validate %(name)s" msgstr "\"%(value)s\" validiert %(name)s nicht" #, python-format msgid "\"%(value)s\" does not validate %(name)s (constraint not found)" msgstr "\"%(value)s\" validiert nicht %(name)s (Einschränkung nicht gefunden)" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be one of: %(available)s" msgstr "" "\"%(version)s\". \"%(version_type)s\" sollte einer der folgenden sein: " "%(available)s" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be: %(available)s" msgstr "\"%(version)s\". \"%(version_type)s\" sollte sein: %(available)s" #, python-format msgid "\"%s\" : [ [ , ], [ , ] ]" msgstr "\"%s\": [[ , ], [ , ]]" #, python-format msgid "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" msgstr "\"%s\": [{\"key1\": \"val1\"}, {\"key2\": \"val2\"}]" #, python-format msgid "" "\"%s\" : [ { \"key1\": \"val1\" }, {\"keys\": {\"key1\": \"key2\"}, \"values" "\": {\"val1\": \"val2\"}}]" msgstr "" "\"%s\" : [ { \"key1\": \"val1\" }, {\"keys\": {\"key1\": \"key2\"}, \"values" "\": {\"val1\": \"val2\"}}]" #, python-format msgid "\"%s\" argument must be a string" msgstr "Das Argument \"%s\" muss eine Zeichenfolge sein" #, python-format msgid "\"%s\" can't traverse path" msgstr "\"%s\" kann den Pfad nicht durchlaufen" #, python-format msgid "\"%s\" deletion policy not supported" msgstr "Die Löschrichtlinie \"%s\" wird nicht unterstützt" #, python-format msgid "" "\"%s\" deletion policy not supported - volume backup service is not enabled." msgstr "" "Löschrichtlinie \"%s\" wird nicht unterstützt - Volume-Sicherungsdienst ist " "nicht aktiviert." #, python-format msgid "\"%s\" delimiter must be a string" msgstr "Das Trennzeichen \"%s\" muss eine Zeichenfolge sein" #, python-format msgid "\"%s\" is not a list" msgstr "\"%s\" ist keine Liste" #, python-format msgid "\"%s\" is not a map" msgstr "\"%s\" ist keine Karte" #, python-format msgid "\"%s\" is not a valid ARN" msgstr "\"%s\" ist kein gültiger ARN" #, python-format msgid "\"%s\" is not a valid ARN URL" msgstr "\"%s\" ist keine gültige ARN-URL" #, python-format msgid "\"%s\" is not a valid Heat ARN" msgstr "\"%s\" ist kein gültiger Heat ARN" #, python-format msgid "\"%s\" is not a valid URL" msgstr "\"%s\" ist keine gültige URL" #, python-format msgid "\"%s\" is not a valid boolean" msgstr "\"%s\" ist kein gültiger boolescher Wert" #, python-format msgid "\"%s\" is not a valid template section" msgstr "\"%s\" ist kein gültiger Vorlagenabschnitt" #, python-format msgid "\"%s\" must operate on a list" msgstr "\"%s\" muss auf einer Liste operieren" #, python-format msgid "\"%s\" only works with lists" msgstr "\"%s\" funktioniert nur mit Listen" #, python-format msgid "\"%s\" param placeholders must be strings" msgstr "\"%s\" -Param-Platzhalter müssen Zeichenfolgen sein" #, python-format msgid "\"%s\" parameters must be a mapping" msgstr "\"%s\" -Parameter müssen ein Mapping sein" #, python-format msgid "\"%s\" params must be a map" msgstr "\"%s\" Parameter müssen eine Karte sein" #, python-format msgid "\"%s\" params must be strings, numbers, list or map." msgstr "" "\"%s\" Parameter müssen Zeichenfolgen, Zahlen, Listen oder Karten sein." 
#, python-format msgid "\"%s\" template must be a string" msgstr "\"%s\" Vorlage muss eine Zeichenfolge sein" #, python-format msgid "\"client_plugin\" and \"finder\" should be specified for %s rule" msgstr "" "\"client_plugin\" und \"finder\" sollten für die Regel %s angegeben werden" #, python-format msgid "\"permutations\" should be boolean type for %s function." msgstr "\"Permutationen\" sollten boolescher Typ für die Funktion %s sein." #, python-format msgid "\"repeat\" syntax should be %s" msgstr "Die \"repeat\"-Syntax sollte %s sein" msgid "\"translation_path\" should be non-empty list with path to translate." msgstr "" "\"translation_path\" sollte eine nicht leere Liste mit dem zu übersetzenden " "Pfad sein." msgid "\"value\" must be list type when rule is Add." msgstr "\"value\" muss Listentyp sein, wenn die Regel Add ist." msgid "" "\"value_path\", \"value\" and \"value_name\" are mutually exclusive and " "cannot be specified at the same time." msgstr "" "\"value_path\", \"value\" und \"value_name\" schließen sich gegenseitig aus " "und können nicht gleichzeitig angegeben werden." #, python-format msgid "%(a)s paused until Hook %(h)s is cleared" msgstr "%(a)s wurde angehalten, bis Hook %(h)s gelöscht wurde" #, python-format msgid "%(action)s is not supported for resource." msgstr "%(action)s wird für die Ressource nicht unterstützt." #, python-format msgid "%(action)s is restricted for resource." msgstr "%(action)s ist für die Ressource eingeschränkt." #, python-format msgid "%(desired_capacity)s must be between %(min_size)s and %(max_size)s" msgstr "" "%(desired_capacity)s muss zwischen %(min_size)s und %(max_size)s liegen" #, python-format msgid "%(error)s%(path)s%(message)s" msgstr "%(error)s%(path)s%(message)s" #, python-format msgid "%(feature)s is not supported." msgstr "%(feature)s wird nicht unterstützt." #, python-format msgid "" "%(img)s must be provided: Referenced cluster template %(tmpl)s has no " "default_image_id defined." msgstr "" "%(img)s muss angegeben werden: Referenzierte Clustervorlage %(tmpl)s hat " "keine default_image_id definiert." #, python-format msgid "%(lc)s (%(ref)s) reference can not be found." msgstr "%(lc)s (%(ref)s) Referenz kann nicht gefunden werden." #, python-format msgid "" "%(lc)s (%(ref)s) requires a reference to the configuration not just the name " "of the resource." msgstr "" "%(lc)s (%(ref)s) erfordert einen Verweis auf die Konfiguration, nicht nur " "den Namen der Ressource." #, python-format msgid "%(len)d of %(count)d received" msgstr "%(len)d von %(count)d erhalten" #, python-format msgid "%(len)d of %(count)d received - %(reasons)s" msgstr "%(len)d von %(count)d erhalten - %(reasons)s" #, python-format msgid "%(message)s" msgstr "%(message)s" #, python-format msgid "%(min_size)s can not be greater than %(max_size)s" msgstr "%(min_size)s darf nicht größer als %(max_size)s sein" #, python-format msgid "" "%(msg)s List items should be properties list-type names with format \"[prop, " "prop_child, prop_sub_child, ...]\" or nested properties group schemas." msgstr "" "%(msg)s Listenelemente sollten Namen von Eigenschaftenlistentypen im Format " "\"[prop, prop_child, prop_sub_child, ...]\" oder verschachtelte " "Eigenschaftengruppenschemas sein." #, python-format msgid "" "%(msg)s Properties group schema key should be one of the operators: %(op)s." msgstr "" "%(msg)s Der Schlüssel des Schemas der Eigenschaftengruppe sollte einer der " "Operatoren sein:%(op)s." 
#, python-format msgid "%(msg)s Schema should be a mapping, found %(t)s instead." msgstr "%(msg)s Schema sollte ein Mapping sein, stattdessen%(t)s." #, python-format msgid "%(msg)s Schema should be one-key dict." msgstr "%(msg)s Schema sollte ein Ein-Tasten-Diktat sein." #, python-format msgid "" "%(msg)s Schemas' values should be lists of properties names or nested " "schemas." msgstr "" "%(msg)s Die Werte von Schemas sollten Listen mit Eigenschaftsnamen oder " "verschachtelten Schemas sein." #, python-format msgid "%(name)s constraint invalid for %(utype)s" msgstr "%(name)s Einschränkung ist ungültig für %(utype)s" #, python-format msgid "" "%(name)s has an undefined or empty value for param %(param)s, must be a " "defined non-empty value" msgstr "" "%(name)s hat einen undefinierten oder leeren Wert für param%(param)s, muss " "ein definierter, nicht leerer Wert sein" #, python-format msgid "%(prop)s is prohibited for %(type)s provider network." msgstr "%(prop)s ist für das Providernetzwerk von %(type)s nicht zulässig." #, python-format msgid "%(prop)s is required for %(type)s provider network." msgstr "%(prop)s ist für das Providernetzwerk von %(type)s erforderlich." #, python-format msgid "%(prop1)s cannot be specified without %(prop2)s." msgstr "%(prop1)s kann nicht ohne %(prop2)s angegeben werden." #, python-format msgid "" "%(prop1)s property should only be specified for %(prop2)s with value " "%(value)s." msgstr "" "Die Eigenschaft %(prop1)s sollte nur für %(prop2)s mit dem Wert %(value)s " "angegeben werden." #, python-format msgid "%(resource)s: Invalid attribute %(key)s" msgstr "%(resource)s: Ungültiges Attribut %(key)s" #, python-format msgid "" "%(result)s - Unknown status %(resource_status)s due to \"%(status_reason)s\"" msgstr "" "%(result)s - Unbekannter Status %(resource_status)s aufgrund von " "\"%(status_reason)s\"" #, python-format msgid "%(schema)s supplied for %(type)s %(data)s" msgstr "%(schema)s für %(type)s %(data)s angegeben" #, python-format msgid "%(server)s-port-%(number)s" msgstr "%(server)s-Anschluss- %(number)s" #, python-format msgid "%(type)s not in valid format: %(error)s" msgstr "%(type)s nicht im gültigen Format: %(error)s" #, python-format msgid "%s Key Name must be a string" msgstr "%s Schlüsselname muss eine Zeichenfolge sein" #, python-format msgid "%s Timed out" msgstr "%s Zeitüberschreitung" #, python-format msgid "%s Value Name must be a string" msgstr "%s Wert Name muss eine Zeichenfolge sein" #, python-format msgid "%s flag only supported in stack update (or update preview) request." msgstr "" "Das %s-Flag wird nur in der Anforderung für die Stapelaktualisierung (oder " "Aktualisierungsvorschau) unterstützt." #, python-format msgid "%s is not a valid job location." msgstr "%s ist kein gültiger Jobstandort." #, python-format msgid "%s is not a valid wait condition handle." msgstr "%s ist kein gültiges Wait-Condition-Handle." #, python-format msgid "%s is not active" msgstr "%s ist nicht aktiv" #, python-format msgid "%s is not an integer." msgstr "%s ist keine ganze Zahl." 
#, python-format msgid "%s must be provided" msgstr "%s muss angegeben werden" #, python-format msgid "%s should be a positive integer" msgstr "%s sollte eine positive ganze Zahl sein" #, python-format msgid "%s should be an integer" msgstr "%s sollte eine ganze Zahl sein" #, python-format msgid "" "%s:\n" " template: This is var1 template var2\n" " params:\n" " var1: a\n" " var2: string" msgstr "%s: template: Dies ist var1 template var2 params: var1: a var2: string" #, python-format msgid "'%(attr)s': expected '%(expected)s', got '%(current)s'" msgstr "'%(attr)s': erwartet '%(expected)s', hat '%(current)s' erhalten" #, python-format msgid "'%(data)s' exceeds the %(max_len)s character FQDN limit" msgstr "'%(data)s' überschreitet den FQDN-Grenzwert von %(max_len)s Zeichen" #, python-format msgid "" "'%(value)s' contains '%(length)s' characters. Adding a domain name will " "cause it to exceed the maximum length of a FQDN of '%(max_len)s'." msgstr "" "'%(value)s' enthält '%(length)s' Zeichen. Durch das Hinzufügen eines " "Domänennamens wird die maximale Länge eines vollqualifizierten Domänennamens " "von '%(max_len)s' überschritten." #, python-format msgid "'%s' is a FQDN. It should be a relative domain name." msgstr "' %s' ist ein FQDN. Es sollte ein relativer Domain-Name sein." msgid "" "'task_name' is not assigned in 'params' in case of reverse type workflow." msgstr "" "'task_name' wird bei 'reverse type workflow' nicht in 'params' zugewiesen." msgid "'true' if DHCP is enabled for this subnet; 'false' otherwise." msgstr "'true', wenn DHCP für dieses Subnetz aktiviert ist; sonst 'false'." msgid "A UUID for the set of servers being requested." msgstr "Eine UUID für die Servergruppe, die angefordert wird." msgid "A bad or out-of-range value was supplied" msgstr "Ein ungültiger oder nicht zulässiger Wert wurde angegeben" msgid "" "A boolean value for L2 adjacency, True means that you can expect L2 " "connectivity throughout the Network." msgstr "" "Ein boolescher Wert für die L2-Umgebung, True bedeutet, dass Sie L2-" "Konnektivität im gesamten Netzwerk erwarten können." msgid "A boolean value of default flag." msgstr "Ein boolescher Wert des Standard-Flags." msgid "A boolean value specifying the administrative status of the network." msgstr "" "Ein boolescher Wert, der den administrativen Status des Netzwerks angibt." #, python-format msgid "" "A character class and its corresponding %(min)s constraint to generate the " "random string from." msgstr "" "Eine Zeichenklasse und die entsprechende%(min)s-Einschränkung, aus der die " "zufällige Zeichenfolge generiert wird." #, python-format msgid "" "A character sequence and its corresponding %(min)s constraint to generate " "the random string from." msgstr "" "Eine Zeichenfolge und die entsprechende%(min)s-Einschränkung, aus der die " "zufällige Zeichenfolge generiert wird." msgid "" "A comma separated list of addresses for which proxies should not be used in " "the cluster." msgstr "" "Eine durch Kommas getrennte Liste von Adressen, für die keine Proxies im " "Cluster verwendet werden sollen." msgid "A comma-delimited list of server ip addresses. (Heat extension)." msgstr "" "Eine durch Kommas getrennte Liste von Server-IP-Adressen. (Heat-Erweiterung)." msgid "A description of the volume." msgstr "Eine Beschreibung des Datenträgers." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name. This value is typically vda." 
msgstr "" "Ein Gerätename, unter dem der Datenträger im System unter /dev/device_name " "angehängt wird. Dieser Wert ist typischerweise vda." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name.e.g. vdb" msgstr "" "Ein Gerätename, an den das Volume im System unter /dev/device_name,z.B. vdb " "angehängt wird" msgid "" "A dict of all network addresses with corresponding port_id. Each network " "will have two keys in dict, they are network name and network id. The port " "ID may be obtained through the following expression: \"{get_attr: " "[, addresses, , 0, port]}\"." msgstr "" "Ein Diktat aller Netzwerkadressen mit entsprechender Port-ID. Jedes Netzwerk " "hat zwei Schlüssel in dict, sie sind Netzwerkname und Netzwerk-ID. Die Port-" "ID kann durch den folgenden Ausdruck erhalten werden: \"{get_attr: " "[, addresss, , 0, port]} \"." msgid "" "A dict of all network addresses with corresponding port_id. Each network " "will have two keys in dict, they are network name and network id. The port " "ID may be obtained through the following expression: \"{get_attr: [, " "addresses, , 0, port]}\"." msgstr "" "Ein Diktat aller Netzwerkadressen mit entsprechender Port-ID. Jedes Netzwerk " "hat zwei Schlüssel in dict, sie sind Netzwerkname und Netzwerk-ID. Die Port-" "ID kann durch den folgenden Ausdruck erhalten werden: \"{get_attr: " "[, addresses, , 0, port]} \"." msgid "" "A dict of assigned network addresses of the form: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Each network will have two keys in dict, they " "are network name and network id." msgstr "" "Ein dict der zugewiesenen Netzwerkadressen der Form: {\"public\": [ip1, " "ip2 ...], \"privat\": [ip3, ip4], \"public_uuid\": [ip1, ip2 ...], " "\"private_uuid\" : [ip3, ip4]}. Jedes Netzwerk hat zwei Schlüssel in dict, " "sie sind Netzwerkname und Netzwerk-ID." msgid "A dict of key-value pairs output from the stack." msgstr "" "Ein Diktat von Schlüssel-Wert-Paaren, die vom Stapel ausgegeben werden." msgid "A dictionary which contains name and input of the workflow." msgstr "Ein Dictionary, das den Namen und die Eingabe des Workflows enthält." msgid "A length constraint must have a min value and/or a max value specified." msgstr "" "Eine Längenbeschränkung muss einen minimalen Wert und/oder einen maximalen " "Wert aufweisen." msgid "" "A list for filtering events. Query conditions used to filter specific events " "when evaluating the alarm." msgstr "" "Eine Liste zum Filtern von Ereignissen. Abfragebedingungen zum Filtern " "bestimmter Ereignisse bei der Auswertung des Alarms" msgid "A list of Port Pair IDs or names to apply." msgstr "Eine Liste der zu verwendenden Portpaar-IDs oder -namen." msgid "A list of URLs (webhooks) to invoke when state transitions to alarm." msgstr "" "Eine Liste von URLs (Webhooks), die aufgerufen werden, wenn der Status in " "einen Alarmzustand übergeht." msgid "" "A list of URLs (webhooks) to invoke when state transitions to insufficient-" "data." msgstr "" "Eine Liste von URLs (Webhooks), die aufgerufen werden sollen, wenn der " "Status zu nicht ausreichenden Daten wechselt." msgid "A list of URLs (webhooks) to invoke when state transitions to ok." msgstr "" "Eine Liste von URLs (Webhooks), die aufgerufen werden, wenn der Status in ok " "übergeht." msgid "A list of Zaqar queues to post to when state transitions to alarm." 
msgstr "" "Eine Liste der Zaqar-Warteschlangen, an die der Status bei Alarmstatus " "gesendet werden soll." msgid "" "A list of Zaqar queues to post to when state transitions to insufficient-" "data." msgstr "" "Eine Liste der Zaqar-Warteschlangen, in die der Status bei einem Übergang in " "nicht ausreichende Daten gesendet werden soll." msgid "A list of Zaqar queues to post to when state transitions to ok." msgstr "" "Eine Liste der Zaqar-Warteschlangen, in die der Status nach dem Übergang in " "den Status \"OK\" hochgeladen werden soll." msgid "A list of access rules that define access from IP to Share." msgstr "" "Eine Liste mit Zugriffsregeln, die den Zugriff von IP auf Freigabe " "definieren." msgid "A list of all rules for the QoS policy." msgstr "Eine Liste aller Regeln für die QoS-Richtlinie." msgid "A list of all subnet attributes for the port." msgstr "Eine Liste aller Subnetzattribute für den Port." msgid "" "A list of character class and their constraints to generate the random " "string from." msgstr "" "Eine Liste der Zeichenklassen und ihrer Abhängigkeiten, aus denen die " "zufällige Zeichenfolge generiert werden soll." msgid "" "A list of character sequences and their constraints to generate the random " "string from." msgstr "" "Eine Liste von Zeichenfolgen und ihren Einschränkungen zum Generieren der " "zufälligen Zeichenfolge aus." msgid "A list of cluster instance IPs." msgstr "Eine Liste der Clusterinstanz-IPs." msgid "A list of clusters to which this policy is attached." msgstr "Eine Liste von Clustern, an die diese Richtlinie angehängt ist." msgid "" "A list of data for this RecordSet. Each item will be a separate record in " "Designate These items should conform to the DNS spec for the record type - e." "g. A records must be IPv4 addresses, CNAME records must be a hostname. DNS " "record data varies based on the type of record. For more details, please " "refer rfc 1035." msgstr "" "Eine Liste von Daten für dieses RecordSet. Jedes Element ist ein separater " "Datensatz in Designate. Diese Elemente sollten der DNS-Spezifikation für den " "Datensatztyp entsprechen. Beispiel: A-Einträge müssen IPv4-Adressen sein, " "CNAME-Einträge müssen ein Hostname sein. DNS-Datensatzdaten variieren je " "nach Art des Datensatzes. Weitere Informationen finden Sie in RFC 1035." msgid "A list of flow classifiers to apply to the Port Chain." msgstr "" "Eine Liste von Flussklassifikatoren, die auf die Portkette angewendet werden." msgid "A list of host route dictionaries for the subnet." msgstr "Eine Liste der Host-Route-Wörterbücher für das Subnetz." msgid "" "A list of inputs that should cause the resource to be replaced when their " "values change." msgstr "" "Eine Liste von Eingaben, die dazu führen sollten, dass die Ressource ersetzt " "wird, wenn sich ihre Werte ändern." msgid "A list of instances ids." msgstr "Eine Liste der Instanzen IDs." msgid "A list of metric ids." msgstr "Eine Liste der metrischen IDs." msgid "A list of policies to attach to this cluster." msgstr "Eine Liste der Richtlinien, die an diesen Cluster angehängt werden." msgid "A list of port pair groups to apply to the Port Chain." msgstr "" "Eine Liste von Portpaargruppen, die auf die Portkette angewendet werden " "sollen." msgid "" "A list of query factors, each comparing a Sample attribute with a value. " "Implicitly combined with matching_metadata, if any." msgstr "" "Eine Liste von Abfragefaktoren, die jeweils ein Sample-Attribut mit einem " "Wert vergleichen. 
Implizit kombiniert mit matching_metadata, falls vorhanden." msgid "A list of removed resource names." msgstr "Eine Liste der entfernten Ressourcennamen." msgid "A list of resource IDs for the resources in the chain." msgstr "Eine Liste der Ressourcen-IDs für die Ressourcen in der Kette." msgid "A list of resource IDs for the resources in the group." msgstr "Eine Liste der Ressourcen-IDs für die Ressourcen in der Gruppe." msgid "A list of security groups for the port." msgstr "Eine Liste der Sicherheitsgruppen für den Port." msgid "A list of security services IDs or names." msgstr "Eine Liste von IDs oder Namen von Sicherheitsdiensten." msgid "A list of string policies to apply. Defaults to anti-affinity." msgstr "" "Eine Liste der anzuwendenden Zeichenfolgenrichtlinien. Standardmäßig Anti-" "Affinität." msgid "A list of tags for labeling and sorting projects." msgstr "Eine Liste von Tags zum Beschriften und Sortieren von Projekten." msgid "" "A list of the specified attribute of each individual resource that is part " "of the AutoScalingGroup. This list of attributes is available as an output " "once the AutoScalingGroup has been instantiated." msgstr "" "Eine Liste des angegebenen Attributs jeder einzelnen Ressource, die Teil der " "AutoScalingGroup ist. Diese Liste von Attributen steht als Ausgabe zur " "Verfügung, sobald die AutoScalingGroup instanziiert wurde." msgid "A list of volumes mounted inside the container." msgstr "Eine Liste der Datenträger, die im Container eingehängt sind." msgid "A login profile for the user." msgstr "Ein Anmeldeprofil für den Benutzer." msgid "A mandatory input parameter is missing" msgstr "Ein obligatorischer Eingabeparameter fehlt" msgid "A map containing all headers for the container." msgstr "Eine Map, die alle Header für den Container enthält." msgid "" "A map of Nova names and captured stderrs from the configuration execution to " "each server." msgstr "" "Eine Karte von Nova-Namen und erfassten stderrs von der " "Konfigurationsausführung zu jedem Server." msgid "" "A map of Nova names and captured stdouts from the configuration execution to " "each server." msgstr "" "Eine Karte mit Nova-Namen und erfassten Stdouts von der " "Konfigurationsausführung zu jedem Server." msgid "" "A map of Nova names and returned status code from the configuration " "execution." msgstr "" "Eine Karte von Nova-Namen und Rückgabe des Statuscodes aus der " "Konfigurationsausführung." msgid "" "A map of files to create/overwrite on the server upon boot. Keys are file " "names and values are the file contents." msgstr "" "Eine Zuordnung von Dateien zum Erstellen/Überschreiben auf dem Server beim " "Start. Schlüssel sind Dateinamen und Werte sind der Dateiinhalt." msgid "" "A map of names and server IDs to apply configuration to. The name is " "arbitrary and is used as the Heat resource name for the corresponding " "deployment." msgstr "" "Eine Zuordnung von Namen und Server-IDs, auf die die Konfiguration " "angewendet werden soll. Der Name ist willkürlich und wird als Heat-" "Ressourcenname für die entsprechende Bereitstellung verwendet." msgid "A map of resource names to IDs for the resources in the group." msgstr "" "Eine Zuordnung von Ressourcennamen zu IDs für die Ressourcen in der Gruppe." msgid "" "A map of resource names to the specified attribute of each individual " "resource that is part of the AutoScalingGroup. This map specifies output " "parameters that are available once the AutoScalingGroup has been " "instantiated." 
msgstr "" "Eine Zuordnung von Ressourcennamen zum angegebenen Attribut jeder einzelnen " "Ressource, die Teil der AutoScalingGroup ist. Diese Zuordnung gibt " "Ausgabeparameter an, die verfügbar sind, nachdem die AutoScalingGroup " "instanziiert wurde." msgid "" "A map of resource names to the specified attribute of each individual " "resource." msgstr "" "Eine Zuordnung der Ressourcennamen zum angegebenen Attribut jeder einzelnen " "Ressource." msgid "" "A map of resource names to the specified attribute of each individual " "resource. Requires heat_template_version: 2014-10-16." msgstr "" "Eine Zuordnung der Ressourcennamen zum angegebenen Attribut jeder einzelnen " "Ressource. Erfordert heat_template_version: 2014-10-16." msgid "" "A map of user-defined meta data to associate with the account. Each key in " "the map will set the header X-Account-Meta-{key} with the corresponding " "value." msgstr "" "Eine Zuordnung benutzerdefinierter Metadaten, die mit dem Konto verknüpft " "werden sollen. Jeder Schlüssel in der Map setzt die Kopfzeile X-Account-" "Meta{key} mit dem entsprechenden Wert." msgid "" "A map of user-defined meta data to associate with the container. Each key in " "the map will set the header X-Container-Meta-{key} with the corresponding " "value." msgstr "" "Eine Zuordnung benutzerdefinierter Metadaten, die dem Container zugeordnet " "werden. Jeder Schlüssel in der Map setzt den Header X-Container-Meta-{key} " "mit dem entsprechenden Wert." msgid "" "A modulo constraint must have a step value and an offset value specified." msgstr "" "Eine Modulo-Abhängigkeit muss einen Schrittwert und einen Offset-Wert haben." msgid "A name used to distinguish the volume." msgstr "Ein Name, der zur Unterscheidung des Datenträgers verwendet wird." msgid "" "A per-tenant quota on the prefix space that can be allocated from the subnet " "pool for tenant subnets." msgstr "" "Ein pro-Tenant-Kontingent für den Präfixbereich, der aus dem Subnetzpool für " "Tenant-Subnetze zugewiesen werden kann." msgid "" "A predefined access control list (ACL) that grants permissions on the bucket." msgstr "" "Eine vordefinierte Zugriffssteuerungsliste (ACL), die Berechtigungen für den " "Bucket erteilt." msgid "A range constraint must have a min value and/or a max value specified." msgstr "" "Für eine Bereichsbeschränkung muss ein Mindestwert und/oder ein Maximalwert " "angegeben werden." msgid "" "A reference to the wait condition handle used to signal this wait condition." msgstr "" "Ein Verweis auf den Wartezustands-Handle, der zum Signalisieren dieser " "Wartebedingung verwendet wird." msgid "" "A signed url to create execution specified in default_execution_data " "property." msgstr "" "Eine signierte URL zum Erstellen der in der Eigenschaft " "\"default_execution_data\" angegebenen Ausführung." msgid "" "A signed url to create executions for workflows specified in Workflow " "resource." msgstr "" "Eine signierte URL zum Erstellen von Ausführungen für Arbeitsabläufen, die " "in der Workflow-Ressource angegeben sind." msgid "A signed url to handle the alarm." msgstr "Eine signierte URL zur Behandlung des Alarms" msgid "A signed url to handle the alarm. (Heat extension)." msgstr "Eine signierte URL zur Behandlung des Alarms (Heat-Erweiterung)." msgid "A specified set of DNS name servers to be used." msgstr "" "Eine bestimmte Gruppe von DNS-Nameservern, die verwendet werden sollen." msgid "A string representation of the list of attachments of the volume." 
msgstr "Eine Zeichenfolgendarstellung der Liste der Anlagen des Datenträgers." msgid "" "A string specifying a symbolic name for the network, which is not required " "to be unique." msgstr "" "Eine Zeichenfolge, die einen symbolischen Namen für das Netzwerk angibt, der " "nicht eindeutig sein muss." msgid "" "A string specifying a symbolic name for the security group, which is not " "required to be unique." msgstr "" "Eine Zeichenfolge, die einen symbolischen Namen für die Sicherheitsgruppe " "angibt, die nicht eindeutig sein muss." msgid "" "A string specifying a symbolic name for the trunk, which is not required to " "be uniqe." msgstr "" "Eine Zeichenfolge, die einen symbolischen Namen für den Stamm angibt, der " "nicht einheitlich sein muss." msgid "A string specifying physical network mapping for the network." msgstr "" "Eine Zeichenfolge, die die physische Netzwerkzuordnung für das Netzwerk " "angibt." msgid "A string specifying the provider network type for the network." msgstr "" "Eine Zeichenfolge, die den Netzwerktyp des Anbieters für das Netzwerk angibt." msgid "A string specifying the segmentation id for the network." msgstr "Eine Zeichenfolge, die die Segmentierungs-ID für das Netzwerk angibt." msgid "A symbolic name for this port." msgstr "Ein symbolischer Name für diesen Port." #, python-format msgid "" "A template version alias %(version)s was added for a template class that has " "no official YYYY-MM-DD version." msgstr "" "Ein Vorlagenversionsalias %(version)s wurde für eine Vorlagenklasse " "hinzugefügt, die keine offizielle Version von YYYY-MM-DD hat." msgid "A url to handle the alarm using native API." msgstr "Eine URL zum Behandeln des Alarms mithilfe der nativen API." msgid "" "A variable that this resource will use to replace with the current index of " "a given resource in the group. Can be used, for example, to customize the " "name property of grouped servers in order to differentiate them when listed " "with nova client." msgstr "" "Eine Variable, die diese Ressource zum Ersetzen durch den aktuellen Index " "einer bestimmten Ressource in der Gruppe verwendet. Kann zum Beispiel " "verwendet werden, um die Namenseigenschaft von gruppierten Servern " "anzupassen, um sie zu unterscheiden, wenn sie mit dem nova-Client " "aufgelistet werden." msgid "A volume type to attach specs." msgstr "Ein Volumetyp zum Anfügen von Spezifikationen." msgid "AWS compatible instance name." msgstr "AWS-kompatibler Instanzname" msgid "AWS query string is malformed, does not adhere to AWS spec" msgstr "" "Die AWS-Abfragezeichenfolge ist fehlerhaft, entspricht nicht der AWS-" "Spezifikation" msgid "Access policies to apply to the user." msgstr "Zugreifen auf Richtlinien, die auf den Benutzer angewendet werden" #, python-format msgid "AccessPolicy resource %s not in stack" msgstr "AccessPolicy-Ressource %s nicht im Stapel" #, python-format msgid "Action %s not allowed for user" msgstr "Aktion %s ist für Benutzer nicht zulässig" #, python-format msgid "" "Action for the RBAC policy. The allowed actions differ for different object " "types - only %(network)s objects can have an %(external)s action." msgstr "" "Aktion für die RBAC-Richtlinie. Die zulässigen Aktionen unterscheiden sich " "für verschiedene Objekttypen - nuri %(network)s Objekte können eine " "%(external)s Aktion haben." msgid "Action to be performed on the traffic matching the rule." msgstr "" "Aktion, die für den Verkehr ausgeführt werden soll, der der Regel entspricht." msgid "Action type of the policy." 
msgstr "Aktionstyp der Richtlinie" msgid "Actual input parameter values of the task." msgstr "Tatsächliche Eingabeparameterwerte der Aufgabe." msgid "Add needed policies directly to the task, Policy keyword is not needed" msgstr "" "Fügen Sie die erforderlichen Richtlinien direkt zur Aufgabe hinzu. Das " "Richtlinienschlüsselwort wird nicht benötigt" msgid "Additional MAC/IP address pairs allowed to pass through a port." msgstr "Zusätzliche MAC/IP-Adresspaare, die einen Port passieren dürfen." msgid "Additional MAC/IP address pairs allowed to pass through the port." msgstr "Zusätzliche MAC/IP-Adresspaare dürfen den Port passieren." msgid "Additional routes for this subnet." msgstr "Zusätzliche Routen für dieses Subnetz" #, python-format msgid "" "Address \"%(addr)s\" doesn't satisfies allowed format for \"%(email)s\" type " "of \"%(type)s\" property" msgstr "" "Die Adresse \"%(addr)s\" erfüllt nicht das zulässige Format für den " "\"%(email)s\"-Typ der Eigenschaft \"%(type)s\"" #, python-format msgid "Address \"%(addr)s\" doesn't satisfies allowed schemes: %(schemes)s" msgstr "" "Die Adresse \"%(addr)s\" erfüllt die zulässigen Schemata nicht: %(schemes)s" #, python-format msgid "" "Address \"%(addr)s\" should have correct format required by \"%(wh)s\" type " "of \"%(type)s\" property" msgstr "" "Die Adresse \"%(addr)s\" sollte das korrekte Format haben, das für den Typ " "\"%(wh)s\" der Eigenschaft \"%(type)s\" erforderlich ist" #, python-format msgid "Address \"%s\" doesn't have required URL scheme" msgstr "Die Adresse \"%s\" hat kein erforderliches URL-Schema" #, python-format msgid "Address \"%s\" doesn't have required network location" msgstr "Adresse \"%s\" hat keinen Netzwerkstandort benötigt" msgid "Address family of the address scope, which is 4 or 6." msgstr "Adressfamilie des Adressbereichs, der 4 oder 6 ist." msgid "" "Address of the notification. It could be a valid email address, url or " "service key based on notification type." msgstr "" "Adresse der Benachrichtigung Je nach Benachrichtigungstyp könnte es sich um " "eine gültige E-Mail-Adresse, URL oder einen Service-Schlüssel handeln." msgid "" "Address to bind the server. Useful when selecting a particular network " "interface." msgstr "" "Adresse zum Binden des Servers. Nützlich bei der Auswahl einer bestimmten " "Netzwerkschnittstelle." msgid "Adds a map of labels to a container. May be used multiple times." msgstr "" "Fügt einem Container eine Zuordnung von Beschriftungen hinzu. Kann mehrmals " "verwendet werden." msgid "Administrative state for the ipsec site connection." msgstr "Verwaltungsstatus für die IPSec-Standortverbindung." msgid "Administrative state for the vpn service." msgstr "Verwaltungsstatus für den VPN-Dienst." msgid "" "Administrative state of the firewall. If false (down), firewall does not " "forward packets and will drop all traffic to/from VMs behind the firewall." msgstr "" "Verwaltungsstatus der Firewall. Wenn false (down), leitet die Firewall keine " "Pakete weiter und lässt den gesamten Datenverkehr zu/von VMs hinter der " "Firewall fallen." msgid "Administrative state of the router." msgstr "Verwaltungszustand des Routers" #, python-format msgid "Alarm %(alarm)s could not find scaling group named \"%(group)s\"" msgstr "" "Alarm %(alarm)s konnte die Skalierungsgruppe \"%(group)s\" nicht finden" #, python-format msgid "Algorithm must be one of %s" msgstr "Der Algorithmus muss einer von %s sein" msgid "All heat engines are down." msgstr "Alle Heat-Engines sind ausgefallen." 
msgid "" "Allocate a floating IP from a given floating IP pool. Now that nova-network " "is not supported this represents the external network." msgstr "" "Vergeben Sie eine Floating-IP aus einem gegebenen Floating-IP-Pool. Jetzt, " "da das Nova-Netzwerk nicht unterstützt wird, repräsentiert dies das externe " "Netzwerk." msgid "Allocated floating IP address." msgstr "Zugewiesene Floating-IP-Adresse." msgid "Allocation ID for VPC EIP address." msgstr "Zuordnungs-ID für die VPC EIP-Adresse." msgid "Allow client's debug log output." msgstr "Erlaube die Debugprotokollausgabe des Clients." msgid "Allow or deny action for this firewall rule." msgstr "Aktion für diese Firewallregel zulassen oder verweigern" msgid "Allow orchestration of multiple clouds." msgstr "Orchestrierung mehrerer Clouds zulassen" msgid "" "Allow reauthentication on token expiry, such that long-running tasks may " "complete. Note this defeats the expiry of any provided user tokens." msgstr "" "Erlaube die erneute Authentifizierung bei Ablauf des Tokens, so dass lang " "andauernde Aufgaben abgeschlossen werden können. Beachten Sie, dass dies den " "Ablauf der angegebenen Benutzer-Token verhindert." msgid "" "Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At " "least one endpoint needs to be specified." msgstr "" "Erlaubte Keystone-Endpunkte für auth_uri, wenn multi_cloud aktiviert ist. " "Mindestens ein Endpunkt muss angegeben werden." msgid "" "Allowed tenancy of instances launched in the VPC. default - any tenancy; " "dedicated - instance will be dedicated, regardless of the tenancy option " "specified at instance launch." msgstr "" "Erlaubte Mandantenfähigkeit von Instanzen, die in der VPC gestartet wurden. " "Standard - jeder Mandant; dedicated - Instanz wird dediziert, unabhängig von " "der beim Start der Instanz angegebenen Paketierungsoption." #, python-format msgid "Allowed values: %s" msgstr "Erlaubte Werte: %s" msgid "AllowedPattern must be a string" msgstr "AllowedPattern muss eine Zeichenfolge sein" msgid "AllowedValues must be a list" msgstr "AllowedValues muss eine Liste sein" msgid "Allowing not to store action results after task completion." msgstr "" "Möglichkeit, nach Beendigung der Aufgabe keine Aktionsergebnisse zu " "speichern." msgid "" "Allows to synchronize multiple parallel workflow branches and aggregate " "their data. Valid inputs: all - the task will run only if all upstream tasks " "are completed. Any numeric value - then the task will run once at least this " "number of upstream tasks are completed and corresponding conditions have " "triggered." msgstr "" "Ermöglicht das Synchronisieren mehrerer paralleler Workflow-Zweige und das " "Aggregieren ihrer Daten. Gültige Eingaben: alle - Die Aufgabe wird nur " "ausgeführt, wenn alle vorgelagerten Aufgaben abgeschlossen sind. Beliebiger " "numerischer Wert - dann wird die Aufgabe ausgeführt, sobald mindestens die " "Anzahl der vorgelagerten Aufgaben abgeschlossen ist und entsprechende " "Bedingungen ausgelöst wurden." msgid "Alternate IP address which health monitor can use for health check." msgstr "" "Alternative IP-Adresse, die der Integritätsmonitor für die " "Integritätsprüfung verwenden kann." msgid "Alternate Port which health monitor can use for health check." msgstr "" "Alternativer Port, den der Integritätsmonitor für die Integritätsprüfung " "verwenden kann." #, python-format msgid "Ambiguous versions (%s)" msgstr "Mehrdeutige Versionen (%s)" msgid "" "Amount of disk space (in GB) required to boot image. 
Default value is 0 if " "not specified and means no limit on the disk size." msgstr "" "Menge an Speicherplatz (in GB), die zum Starten des Abbildes benötigt wird. " "Der Standardwert ist 0, wenn nicht angegeben, und bedeutet keine Begrenzung " "der Festplattengröße." msgid "" "Amount of ram (in MB) required to boot image. Default value is 0 if not " "specified and means no limit on the ram size." msgstr "" "Die Menge an RAM (in MB), die zum Starten des Abbildes benötigt wird. Der " "Standardwert ist 0, wenn nicht angegeben, und bedeutet keine Begrenzung der " "RAM-Größe." msgid "An HTTP URI query fragment." msgstr "Ein HTTP-URI-Abfragefragment." msgid "An address scope ID to assign to the subnet pool." msgstr "Eine Adressbereichs-ID, die dem Subnetzpool zugewiesen werden soll." msgid "An application health check for the instances." msgstr "Eine Anwendungszustandsprüfung für die Instanzen." msgid "An ordered list of firewall rules to apply to the firewall." msgstr "" "Eine geordnete Liste von Firewallregeln, die auf die Firewall angewendet " "werden." msgid "" "An ordered list of nics to be added to this server, with information about " "connected networks, fixed ips, port etc." msgstr "" "Eine geordnete Liste von Nics, die diesem Server hinzugefügt werden, mit " "Informationen über verbundene Netzwerke, feste IPs, Port usw." msgid "An unknown exception occurred." msgstr "Eine unbekannte Ausnahme ist aufgetreten." msgid "" "Any data structure arbitrarily containing YAQL expressions that defines " "workflow output. May be nested." msgstr "" "Jede Datenstruktur, die willkürlich YAQL-Ausdrücke enthält, die die Workflow-" "Ausgabe definieren. Kann verschachtelt sein." msgid "Anything other than one VPCZoneIdentifier" msgstr "Alles andere als ein VPCZoneIdentifier" msgid "Api endpoint reference of the instance." msgstr "Api-Endpunktverweis der Instanz" msgid "Arbitrary key-value pairs for scheduler to select host." msgstr "Beliebige Schlüssel/Wert-Paare, damit der Scheduler den Host auswählt." msgid "" "Arbitrary key-value pairs specified by the client to help boot a server." msgstr "" "Beliebige Schlüssel-Wert-Paare, die vom Client zum Starten eines Servers " "angegeben wurden." msgid "" "Arbitrary key-value pairs specified by the client to help the Cinder " "scheduler creating a volume." msgstr "" "Beliebige Schlüssel/Wert-Paare, die vom Client angegeben werden, um dem " "Cinder-Scheduler beim Erstellen eines Datenträgers zu helfen." msgid "" "Arbitrary key/value metadata to store contextual information about this " "queue." msgstr "" "Beliebige Schlüssel/Wert-Metadaten zum Speichern von Kontextinformationen " "über diese Warteschlange." msgid "" "Arbitrary key/value metadata to store for this server. Both keys and values " "must be 255 characters or less. Non-string values will be serialized to JSON " "(and the serialized string must be 255 characters or less)." msgstr "" "Beliebige Schlüssel/Wert-Metadaten zum Speichern für diesen Server. Beide " "Schlüssel und Werte müssen maximal 255 Zeichen lang sein. Nicht-" "Zeichenfolgenwerte werden in JSON serialisiert (und die serialisierte " "Zeichenfolge muss maximal 255 Zeichen enthalten)." msgid "Arbitrary key/value metadata to store information for aggregate." msgstr "" "Beliebige Schlüssel/Wert-Metadaten zum Speichern von Informationen für " "Aggregate." msgid "" "Arbitrary labels in the form of key=value pairs to associate with cluster." 
msgstr "" "Beliebige Beschriftungen in Form von key=value-Paaren, die dem Cluster " "zugeordnet werden sollen." msgid "Arbitrary properties to associate with the image." msgstr "Beliebige Eigenschaften, die mit dem Abbild verknüpft werden sollen." #, python-format msgid "Argument to \"%s\" must be a condition" msgstr "Das Argument \"%s\" muss eine Bedingung sein" #, python-format msgid "Argument to \"%s\" must be a list" msgstr "Argument zu \"%s\" muss eine Liste sein" #, python-format msgid "Argument to \"%s\" must be a string" msgstr "Argument zu \"%s\" muss eine Zeichenfolge sein" #, python-format msgid "Argument to \"%s\" must be string or list" msgstr "Argument zu \"%s\" muss String oder Liste sein" #, python-format msgid "Argument to function \"%s\" must be a list of strings" msgstr "Das Argument zur Funktion \"%s\" muss eine Liste von Strings sein" #, python-format msgid "" "Arguments to \"%s\" can be of the next forms: [resource_name] or " "[resource_name, attribute, (path), ...]" msgstr "" "Argumente für \"%s\" können die folgenden Formen haben: [resource_name] oder " "[resource_name, attribute, (path), ...]" #, python-format msgid "Arguments to \"%s\" must be a list of conditions" msgstr "Argumente für \"%s\" müssen eine Liste von Bedingungen sein" #, python-format msgid "Arguments to \"%s\" must be a map" msgstr "Argumente für \"%s\" müssen eine Map sein" #, python-format msgid "Arguments to \"%s\" must be a map." msgstr "Argumente für \"%s\" müssen eine Map sein." #, python-format msgid "Arguments to \"%s\" must be of the form [index, collection]" msgstr "Argumente für \"%s\" müssen die Form [Index, Sammlung] haben" #, python-format msgid "" "Arguments to \"%s\" must be of the form [resource_name, attribute, " "(path), ...]" msgstr "" "Argumente für \"%s\" müssen die Form [Ressourcenname, Attribut, (Pfad), ...] " "haben" #, python-format msgid "Arguments to \"%s\" must be of the form [resource_name, attribute]" msgstr "Argumente für \"%s\" müssen die Form [Ressourcenname, Attribut] haben" #, python-format msgid "Arguments to \"%s\" must be of the form: [condition]" msgstr "Argumente für \"%s\" müssen folgende Form haben: [Bedingung]" #, python-format msgid "" "Arguments to \"%s\" must be of the form: [condition_name, value_if_true, " "value_if_false]" msgstr "" "Argumente für \"%s\" müssen folgende Form haben: [condition_name, " "value_if_true, value_if_false]" #, python-format msgid "Arguments to \"%s\" must be of the form: [value1, [value1, value2]]" msgstr "" "Argumente für \"%s\" müssen folgende Form haben: [Wert1, [Wert1, Wert2]]" #, python-format msgid "Arguments to \"%s\" must be of the form: [value_1, value_2]" msgstr "Argumente für \"%s\" müssen folgende Form haben: [value_1, value_2]" #, python-format msgid "Arguments to %s not fully resolved" msgstr "Argumente für %s wurden nicht vollständig aufgelöst" msgid "Arguments to add to the job." msgstr "Argumente, die dem Job hinzugefügt werden sollen." #, python-format msgid "At least one of the following properties must be specified: %(props)s." msgstr "" "Mindestens eine der folgenden Eigenschaften muss angegeben werden: %(props)s." 
#, python-format msgid "Attempt to delete a stack with id: %(id)s %(msg)s" msgstr "Versuch, einen Stapel mit der ID zu löschen: %(id)s %(msg)s" #, python-format msgid "Attempt to delete user creds with id %(id)s that does not exist" msgstr "" "Versuch, Benutzer-Creds mit der ID %(id)s zu löschen, die nicht existiert" #, python-format msgid "Attempt to delete watch_rule: %(id)s %(msg)s" msgstr "Versuch, watch_rule zu löschen: %(id)s%(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(msg)s" msgstr "Versuch, einen Stapel mit der ID:%(id)s%(msg)s zu aktualisieren" #, python-format msgid "Attempt to update a stack with id: %(id)s %(traversal)s %(msg)s" msgstr "" "Versuch, einen Stapel mit der ID zu aktualisieren:%(id)s %(traversal)s " "%(msg)s" #, python-format msgid "Attempt to update a watch with id: %(id)s %(msg)s" msgstr "Versuch, eine Uhr mit der ID zu aktualisieren: %(id)s%(msg)s" msgid "Attempt to use stored_context with no user_creds" msgstr "Versuch, stored_context ohne user_creds zu verwenden" #, python-format msgid "Attempted to %s an IN_PROGRESS stack" msgstr "Versucht %s einen IN_PROGRESS-Stapel zu erstellen" #, python-format msgid "Attribute %(attr)s for facade %(type)s missing in provider" msgstr "Attribut %(attr)s für Fassade %(type)s fehlt im Provider" msgid "" "Attributes collected from cluster. According to the jsonpath following this " "attribute, it will return a list of attributes collected from the nodes of " "this cluster." msgstr "" "Attribute aus dem Cluster gesammelt. Entsprechend dem jsonpath, der diesem " "Attribut folgt, gibt es eine Liste der Attribute zurück, die von den Knoten " "dieses Clusters gesammelt wurden." msgid "Audit status of this firewall policy." msgstr "Überwachungsstatus dieser Firewall-Richtlinie" msgid "Authentication Endpoint URI." msgstr "Authentifizierungsendpunkt-URI." msgid "Authentication hash algorithm for the ike policy." msgstr "Authentifizierungs-Hash-Algorithmus für die Ike-Richtlinie." msgid "Authentication hash algorithm for the ipsec policy." msgstr "Authentifizierungs-Hash-Algorithmus für die IPSec-Richtlinie." msgid "Authorization failed." msgstr "Autorisation fehlgeschlagen." msgid "AutoScaling group ID to apply policy to." msgstr "AutoScaling-Gruppen-ID, auf die die Richtlinie angewendet werden soll." msgid "AutoScaling group name to apply policy to." msgstr "" "AutoScaling-Gruppenname, auf den die Richtlinie angewendet werden soll." msgid "Availability Zone of the subnet." msgstr "Availability Zone des Subnetzes." msgid "Availability zone in which you want the subnet." msgstr "Verfügbarkeitszone, in der Sie das Subnetz haben möchten." msgid "Availability zone to create servers in." msgstr "Verfügbarkeitszone zum Erstellen von Servern in." msgid "Availability zone to create volumes in." msgstr "Verfügbarkeitszone zum Erstellen von Datenträgern." msgid "Availability zone to launch the instance in." msgstr "Verfügbarkeitszone zum Starten der Instanz in." msgid "Backend authentication failed" msgstr "Die Back-End-Authentifizierung ist fehlgeschlagen" #, python-format msgid "Bad expression %s." msgstr "Schlechter Ausdruck %s." msgid "Binary" msgstr "Binär" msgid "Block device mappings for this server." msgstr "Blockgeräte-Zuordnungen für diesen Server." msgid "Block device mappings to attach to instance." msgstr "Blockieren Sie Gerätezuordnungen zum Anhängen an die Instanz." msgid "Block device mappings v2 for this server." msgstr "Blockieren Sie Gerätezuordnungen v2 für diesen Server." 
msgid "" "Boolean extra spec that used for filtering of backends by their capability " "to create share snapshots." msgstr "" "Boolesche zusätzliche Spezifikation, die zum Filtern von Back-Ends durch " "ihre Fähigkeit zum Erstellen von Freigabe-Snapshots verwendet wird." msgid "Boolean indicating if the volume can be booted or not." msgstr "" "Boolescher Wert, der angibt, ob der Datenträger gebootet werden kann oder " "nicht." msgid "Boolean indicating if the volume is encrypted or not." msgstr "" "Boolescher Wert, der angibt, ob der Datenträger verschlüsselt ist oder nicht." msgid "" "Boolean indicating whether allow the volume to be attached more than once." msgstr "" "Boolescher Wert, der angibt, ob der Datenträger mehr als einmal angehängt " "werden darf." msgid "" "Bus of the device: hypervisor driver chooses a suitable default if omitted." msgstr "" "Bus des Geräts: Der Hypervisor-Treiber wählt einen geeigneten Standard aus, " "wenn er weggelassen wird." msgid "CIDR block notation for this subnet." msgstr "CIDR-Blocknotation für dieses Subnetz." msgid "CIDR block to apply to subnet." msgstr "CIDR-Block, der auf das Subnetz angewendet werden soll." msgid "CIDR block to apply to the VPC." msgstr "CIDR-Block zum Anwenden auf die VPC." msgid "CIDR of subnet." msgstr "CIDR des Subnetzes." msgid "CIDR to be associated with this metering rule." msgstr "CIDR, die dieser Messregel zugeordnet werden soll." #, python-format msgid "Can not check %s, resource not created yet." msgstr "" "%s kann nicht überprüft werden, die Ressource wurde noch nicht erstellt." msgid "Can not decrypt data with the auth_encryption_key in heat config." msgstr "" "Kann Daten nicht mit dem auth_encryption_key in heat config entschlüsseln." #, python-format msgid "Can not specify \"%s\" with other keys of networks at the same time." msgstr "" "\"%s\" kann nicht gleichzeitig mit anderen Schlüsseln von Netzwerken " "angegeben werden." msgid "" "Can not specify \"allocate_network\" with other keys of networks at the same " "time." msgstr "" "Kann \"allocate_network\" nicht gleichzeitig mit anderen Schlüsseln von " "Netzwerken angeben." #, python-format msgid "Can not specify property \"%s\" if the volume type is public." msgstr "" "Die Eigenschaft \"%s\" kann nicht angegeben werden, wenn der Datenträgertyp " "öffentlich ist." #, python-format msgid "Can not use %s property on Nova-network." msgstr "%s-Eigenschaft kann nicht im Nova-Netzwerk verwendet werden." #, python-format msgid "Can't find role %s" msgstr "Die Rolle %s kann nicht gefunden werden" msgid "Can't get user token without password" msgstr "Benutzer-Token kann nicht ohne Passwort abgerufen werden" msgid "Can't get user token, user not yet created" msgstr "" "Benutzer-Token kann nicht abgerufen werden, Benutzer wurde noch nicht " "erstellt" msgid "Can't traverse attribute path" msgstr "Attributpfad kann nicht durchlaufen werden" #, python-format msgid "Cancelling update when stack is %s" msgstr "Das Update wird abgebrochen, wenn der Stapel %s ist" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "" "%(method)s kann nicht für verwaistes %(objtype)s-Objekt aufgerufen werden" #, python-format msgid "" "Cannot cancel stack %(stack_name)s: lock held by unknown engine %(engine_id)s" msgstr "" "Stapel %(stack_name)s kann nicht abgebrochen werden. lock wird von " "unbekannter Engine %(engine_id)s gehalten." 
#, python-format msgid "Cannot check %s, stack not created" msgstr "%s kann nicht überprüft werden, Stapel wurde nicht erstellt" #, python-format msgid "Cannot define the following properties at the same time: %(props)s." msgstr "" "Folgende Eigenschaften können nicht gleichzeitig definiert werden: %(props)s." #, python-format msgid "Cannot define the following properties at the same time: %s" msgstr "Folgende Eigenschaften können nicht gleichzeitig definiert werden: %s" #, python-format msgid "" "Cannot establish connection to Heat endpoint at region \"%(region)s\" due to " "\"%(exc)s\"" msgstr "" "Verbindung zum Heat-Endpunkt in der Region \"%(region)s\" kann aufgrund von " "\"%(exc)s\" nicht hergestellt werden" #, python-format msgid "Cannot get console url: %s" msgstr "Die Konsolen-URL kann nicht abgerufen werden: %s" msgid "" "Cannot get stack domain user token, no stack domain id configured, please " "fix your heat.conf" msgstr "" "Stack-Domain-Benutzer-Token kann nicht abgerufen werden, keine Stack-Domain-" "ID konfiguriert, bitte reparieren Sie Ihre heat.conf" msgid "Cannot migrate to lower schema version." msgstr "Kann nicht zur niedrigeren Schemaversion migrieren." #, python-format msgid "Cannot modify readonly field %(field)s" msgstr "Das schreibgeschützte Feld %(field)s kann nicht geändert werden" #, python-format msgid "Cannot resume %s, resource not found" msgstr "Kann %s nicht wieder aufnehmen, Ressource nicht gefunden" #, python-format msgid "Cannot resume %s, resource_id not set" msgstr "Kann %s nicht fortsetzen, resource_id nicht eingestellt" #, python-format msgid "Cannot resume %s, stack not created" msgstr "Kann %s nicht fortsetzen, Stapel nicht erstellt" #, python-format msgid "Cannot suspend %s, resource not found" msgstr "Kann %s nicht anhalten, Ressource nicht gefunden" #, python-format msgid "Cannot suspend %s, resource_id not set" msgstr "Kann %s nicht anhalten, resource_id nicht gesetzt" #, python-format msgid "Cannot suspend %s, stack not created" msgstr "Kann %s nicht anhalten, Stapel nicht erstellt" #, python-format msgid "Cannot use \"%(prop)s\" properties - nova does not support: %(error)s" msgstr "" "Eigenschaften von \"%(prop)s\" können nicht verwendet werden - nova " "unterstützt nicht: %(error)s" #, python-format msgid "" "Cannot use \"%(prop)s\" property - compute service does not support the " "required api microversion: %(ex)s" msgstr "" "Die Eigenschaft \"%(prop)s\" kann nicht verwendet werden - der Compute " "Service unterstützt nicht die erforderliche API-Mikroversion: %(ex)s" #, python-format msgid "" "Cannot use \"%(tag)s\" property in networks - nova does not support it: " "%(error)s" msgstr "" "Die Eigenschaft \"%(tag)s\" kann nicht in Netzwerken verwendet werden - nova " "unterstützt sie nicht: %(error)s" #, python-format msgid "Cannot use \"tags\" property - nova does not support it: %s" msgstr "" "Eigenschaft \"Tags\" kann nicht verwendet werden - nova unterstützt sie " "nicht: %s" msgid "Captured stderr from the configuration execution." msgstr "Eingefangener Stderr aus der Konfigurationsausführung." msgid "Captured stdout from the configuration execution." msgstr "Eingefangener Stdout von der Konfigurationsausführung." #, python-format msgid "Circular Dependency Found: %(cycle)s" msgstr "Kreisabhängigkeit gefunden: %(cycle)s" #, python-format msgid "Circular definition for condition \"%s\"" msgstr "Kreisdefinition für Bedingung \"%s\"" msgid "Client entity to poll." msgstr "Client-Entität zum Abfragen." 
msgid "Client name and resource getter name must be specified." msgstr "" "Der Name des Clients und der Name des Ressourcengetters müssen angegeben " "werden." msgid "Client to poll." msgstr "Kunde zum Abstimmen." msgid "Cluster configs dictionary." msgstr "Cluster konfiguriert das Dictionary." msgid "Cluster information." msgstr "Clusterinformationen" msgid "Cluster metadata." msgstr "Cluster-Metadaten" msgid "Cluster name." msgstr "Clustername" msgid "Cluster status." msgstr "Clusterstatus" msgid "Comma-delimited list of methods for convenience." msgstr "Komma-getrennte Liste von Methoden zur Vereinfachung." msgid "Comma-delimited list of paths for convenience." msgstr "Durch Kommas getrennte Liste von Pfaden zur Vereinfachung." msgid "Comparison operator." msgstr "Vergleichsoperator" msgid "Composite threshold rules in JSON format." msgstr "Zusammengesetzte Schwellenwertregeln im JSON-Format." #, python-format msgid "Concurrent transaction for %(action)s" msgstr "Gleichzeitige Transaktion für %(action)s" #, python-format msgid "Condition definitions must be a map. Found a %s instead" msgstr "" "Bedingungsdefinitionen müssen eine Map sein. Stattdessen wurde ein %s " "gefunden" msgid "Config parameters to add to the job." msgstr "Config-Parameter, die dem Job hinzugefügt werden sollen." msgid "Configuration of session persistence." msgstr "Konfiguration der Sitzungspersistenz" msgid "" "Configuration script or manifest which specifies what actual configuration " "is performed." msgstr "" "Konfigurationsskript oder Manifest, das angibt, welche tatsächliche " "Konfiguration ausgeführt wird." msgid "Configure most important configs automatically." msgstr "Konfigurieren Sie die wichtigsten Konfigurationen automatisch." #, python-format msgid "Confirm resize for server %s failed" msgstr "Bestätigen Sie die Größenänderung für Server %s fehlgeschlagen" #, python-format msgid "" "Conflicting merge strategy '%(strategy)s' for parameter '%(param)s' in file " "'%(env_file)s'." msgstr "" "Konfliktzusammenführungsstrategie '%(strategy)s' für Parameter '%(param)s' " "in Datei '%(env_file)s'." msgid "Connection info for this network gateway." msgstr "Verbindungsinformationen für dieses Netzwerk-Gateway." #, python-format msgid "Container '%(name)s' creation failed: %(code)s - %(reason)s" msgstr "" "Container '%(name)s' konnte nicht erstellt werden: %(code)s - %(reason)s" msgid "Container format of image." msgstr "Containerformat des Abbildes." msgid "" "Content of part to attach, either inline or by referencing the ID of another " "software config resource." msgstr "" "Inhalt des anzuhängenden Teils, entweder inline oder durch Verweis auf die " "ID einer anderen Softwarekonfigurationsressource." msgid "Context for this stack." msgstr "Kontext für diesen Stapel." msgid "Continue ? [y/N]" msgstr "Fortsetzen ? [J/N]" msgid "Control how the disk is partitioned when the server is created." msgstr "" "Steuern Sie, wie die Festplatte partitioniert wird, wenn der Server erstellt " "wird." msgid "Controls DPD protocol mode." msgstr "Steuert den DPD-Protokollmodus." msgid "" "Controls how many events will be pruned whenever a stack's events are " "purged. Set this lower to keep more events at the expense of more frequent " "purges." msgstr "" "Steuert, wie viele Ereignisse gelöscht werden, wenn die Ereignisse eines " "Stapels gelöscht werden. Setzen Sie dies niedriger, um mehr Ereignisse auf " "Kosten häufigerer Bereinigungen zu behalten." 
msgid "" "Convenience attribute to fetch the first assigned network address, or an " "empty string if nothing has been assigned at this time. Result may not be " "predictable if the server has addresses from more than one network." msgstr "" "Convenience-Attribut zum Abrufen der ersten zugewiesenen Netzwerkadresse " "oder einer leeren Zeichenfolge, wenn zu diesem Zeitpunkt nichts zugewiesen " "wurde. Das Ergebnis ist möglicherweise nicht vorhersehbar, wenn der Server " "Adressen von mehr als einem Netzwerk hat." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure when signal_transport is set to " "TOKEN_SIGNAL. You can signal success by adding --data-binary '{\"status\": " "\"SUCCESS\"}' , or signal failure by adding --data-binary '{\"status\": " "\"FAILURE\"}'. This attribute is set to None for all other signal transports." msgstr "" "Convenience-Attribut, bietet curl-CLI-Befehlspräfix, der verwendet werden " "kann, um die Beendigung oder den Ausfall des Handle zu signalisieren, wenn " "signal_transport auf TOKEN_SIGNAL gesetzt ist. Sie können Erfolg melden, " "indem Sie --data-binary '{\"status\": \"SUCCESS\"}' hinzufügen oder einen " "Signalfehler melden, indem Sie --data-binary '{\"status\": \"FAILURE\"} " "hinzufügen. Dieses Attribut wird für alle anderen Signaltransporte auf None " "gesetzt." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure. You can signal success by " "adding --data-binary '{\"status\": \"SUCCESS\"}' , or signal failure by " "adding --data-binary '{\"status\": \"FAILURE\"}'." msgstr "" "Convenience-Attribut, bietet curl-CLI-Befehlspräfix, der für die " "Signalisierung des Abschlusses oder des Fehlers des Handle verwendet werden " "kann. Sie können Erfolg melden, indem Sie --data-binary '{\"status\": " "\"SUCCESS\"}' hinzufügen oder einen Signalfehler melden, indem Sie --data-" "binary '{\"status\": \"FAILURE\"} hinzufügen." msgid "Cooldown period, in seconds." msgstr "Abklingzeit in Sekunden" #, python-format msgid "Could not bind to %(bind_addr)s after trying for 30 seconds" msgstr "" "Konnte nicht an%(bind_addr)s binden, nachdem ich 30 Sekunden lang versucht " "habe" #, python-format msgid "Could not confirm resize of server %s" msgstr "Die Größenänderung des Servers %s konnte nicht bestätigt werden" #, python-format msgid "Could not detach attachment %(att)s from server %(srv)s." msgstr "Anhang %(att)s konnte nicht vom Server %(srv)s getrennt werden." #, python-format msgid "Could not fetch remote template \"%(name)s\": %(exc)s" msgstr "Die Remote-Vorlage \"%(name)s\" konnte nicht abgerufen werden: %(exc)s" #, python-format msgid "Could not fetch remote template '%(url)s': %(exc)s" msgstr "Die Remote-Vorlage '%(url)s' konnte nicht abgerufen werden: %(exc)s" #, python-format msgid "Could not load %(name)s: %(error)s" msgstr "%(name)s konnte nicht geladen werden: %(error)s" #, python-format msgid "Could not retrieve template: %s" msgstr "Die Vorlage konnte nicht abgerufen werden: %s" msgid "Create volumes on the same physical port as an instance." msgstr "" "Erstellen Sie Datenträger auf demselben physischen Port wie eine Instanz." msgid "" "Creating a Glance Image based on an existing URL location requires the " "Glance v1 API, which is deprecated." msgstr "" "Erstellen eines Überblicks Image basierend auf einem vorhandenen URL-" "Speicherort erfordert die Glance v1-API, die veraltet ist." 
msgid "" "Credentials used for swift. Not required if sahara is configured to use " "proxy users and delegated trusts for access." msgstr "" "Anmeldeinformationen für swift. Nicht erforderlich, wenn Sahara für die " "Verwendung von Proxy-Benutzern und delegierten Vertrauensstellungen für den " "Zugriff konfiguriert ist." msgid "Cron expression." msgstr "Cron Ausdruck." msgid "Current share status." msgstr "Aktueller Status der Freigabe" msgid "Currently, no value is supported for this option." msgstr "Derzeit wird für diese Option kein Wert unterstützt." msgid "Custom LoadBalancer template can not be found" msgstr "Benutzerdefinierte LoadBalancer-Vorlage kann nicht gefunden werden" msgid "Custom template for the built-in loadbalancer nested stack." msgstr "" "Benutzerdefinierte Vorlage für den integrierten Loadbalancer-Nested-Stack" msgid "DB instance restore point." msgstr "DB-Instanzwiederherstellungspunkt" msgid "DNS Domain id or name." msgstr "DNS-Domänen-ID oder -Name" msgid "DNS IP address used inside tenant's network." msgstr "DNS-IP-Adresse, die im Netzwerk des Mandanten verwendet wird." msgid "DNS Name for the zone." msgstr "DNS-Name für die Zone." msgid "DNS Record type." msgstr "DNS-Aufnahmetyp" msgid "DNS RecordSet type." msgstr "DNS-Datensatztyp." msgid "DNS Zone id or name." msgstr "DNS-Zonen-ID oder Name" msgid "DNS domain associated with floating ip." msgstr "DNS-Domäne mit Floating-IP verbunden." msgid "DNS domain associated with this network." msgstr "DNS-Domäne, die diesem Netzwerk zugeordnet ist." msgid "DNS domain serial." msgstr "DNS-Domänen-Serial" msgid "DNS name associated with floating ip." msgstr "DNS-Name mit Floating-IP verbunden." msgid "DNS name associated with the port." msgstr "DNS-Name, der dem Port zugeordnet ist." msgid "" "DNS record data, varies based on the type of record. For more details, " "please refer rfc 1035." msgstr "" "DNS-Datensatzdaten variieren je nach Art des Datensatzes. Weitere " "Informationen finden Sie in RFC 1035." msgid "" "DNS record priority. It is considered only for MX and SRV types, otherwise, " "it is ignored." msgstr "" "DNS-Eintragspriorität Es wird nur für MX- und SRV-Typen berücksichtigt, " "andernfalls wird es ignoriert." msgid "DNS zone serial number." msgstr "DNS-Zonen-Seriennummer" msgid "DSCP mark between 0 and 56, except 2-6, 42, 44, and 50-54." msgstr "DSCP-Markierung zwischen 0 und 56, außer 2-6, 42, 44 und 50-54." #, python-format msgid "Data supplied was not valid: %(reason)s" msgstr "Daten geliefert war nicht gültig: %(reason)s" #, python-format msgid "" "Database %(dbs)s specified for user does not exist in databases for resource " "%(name)s." msgstr "" "Die für den Benutzer angegebene Datenbank %(dbs)s existiert nicht in den " "Datenbanken für die Ressource %(name)s." msgid "Database volume size in GB." msgstr "Größe des Datenbankdatenträgers in GB" #, python-format msgid "" "Databases property is required if users property is provided for resource %s." msgstr "" "Die Eigenschaft \"Datenbanken\" ist erforderlich, wenn die " "Benutzereigenschaft für die Ressource %s bereitgestellt wird." #, python-format msgid "" "Datastore version %(dsversion)s for datastore type %(dstype)s is not valid. " "Allowed versions are %(allowed)s." msgstr "" "Die Datenspeicherversion %(dsversion)s für den Datenspeichertyp %(dstype)s " "ist nicht gültig. Erlaubte Versionen sind %(allowed)s." msgid "Datetime when a share was created." msgstr "Datetime, wenn eine Freigabe erstellt wurde." 
msgid "" "Dead Peer Detection protocol configuration for the ipsec site connection." msgstr "" "Dead Peer Detection-Protokollkonfiguration für die IPSec-Standortverbindung." msgid "Dead engines are removed." msgstr "Tote Engines werden entfernt." msgid "Default TLS container reference to retrieve TLS information." msgstr "Standard-TLS-Containerreferenz zum Abrufen von TLS-Informationen." msgid "Default execution data to use when run signal." msgstr "Standard-Ausführungsdaten zur Verwendung beim Laufsignal." #, python-format msgid "Default must be a comma-delimited list string: %s" msgstr "Der Standardwert muss eine durch Kommas getrennte Liste sein: %s" msgid "Default name or UUID of the image used to boot Hadoop nodes." msgstr "" "Standardname oder UUID des Abbildes, das zum Starten von Hadoop-Knoten " "verwendet wird." msgid "Default notification level for outgoingnotifications." msgstr "Standardbenachrichtigungsstufe für ausgehende Benachrichtigungen" msgid "Default project id for user." msgstr "Standard-Projekt-ID für den Benutzer" msgid "Default publisher_id for outgoing notifications." msgstr "Standard-Publisher-ID für ausgehende Benachrichtigungen" msgid "Default region name used to get services endpoints." msgstr "" "Name der Standardregion, der zum Abrufen von Dienstendpunkten verwendet wird." msgid "Default settings for some of task attributes defined at workflow level." msgstr "" "Standardeinstellungen für einige der auf Workflowebene definierten " "Aufgabenattribute" msgid "Default value for the input if none is specified." msgstr "Standardwert für die Eingabe, wenn keine angegeben ist." msgid "" "Defines a delay in seconds that Mistral Engine should wait after a task has " "completed before starting next tasks defined in on-success, on-error or on-" "complete." msgstr "" "Definiert eine Verzögerung in Sekunden, die Mistral Engine nach Abschluss " "einer Aufgabe warten soll, bevor die nächsten Aufgaben gestartet werden, die " "bei Erfolg, bei Fehler oder bei Abschluss definiert sind." msgid "" "Defines a delay in seconds that Mistral Engine should wait before starting a " "task." msgstr "" "Definiert eine Verzögerung in Sekunden, die Mistral Engine vor dem Start " "einer Aufgabe warten soll." msgid "" "Defines a max number of actions running simultaneously in a task. Applicable " "only for tasks that have with-items." msgstr "" "Definiert eine maximale Anzahl von Aktionen, die gleichzeitig in einer " "Aufgabe ausgeführt werden. Gilt nur für Aufgaben, die über Elemente verfügen." msgid "Defines a pattern how task should be repeated in case of an error." msgstr "" "Definiert ein Muster, wie die Aufgabe im Falle eines Fehlers wiederholt " "werden soll." msgid "" "Defines a period of time in seconds after which a task will be failed " "automatically by engine if hasn't completed." msgstr "" "Definiert einen Zeitraum in Sekunden, nach dem eine Aufgabe von der Engine " "automatisch fehlgeschlagen ist, wenn sie nicht abgeschlossen wurde." msgid "Defines if share type is accessible to the public." msgstr "Definiert, ob der Freigabetyp öffentlich zugänglich ist." msgid "Defines if shared filesystem is public or private." msgstr "Definiert, ob das freigegebene Dateisystem öffentlich oder privat ist." msgid "" "Defines the method in which the request body for signaling a workflow would " "be parsed. 
In case this property is set to True, the body would be parsed as " "a simple json where each key is a workflow input, in other cases body would " "be parsed expecting a specific json format with two keys: \"input\" and " "\"params\"." msgstr "" "Definiert die Methode, mit der der Anfragetext für die Signalisierung eines " "Workflows analysiert wird. Wenn diese Eigenschaft auf \"True\" gesetzt ist, " "wird der Body als einfacher JSON analysiert, wobei jeder Schlüssel eine " "Workflow-Eingabe ist. In anderen Fällen wird der Body analysiert, wobei ein " "bestimmtes JSON-Format mit zwei Schlüsseln erwartet wird: \"input\" und " "\"params\"." msgid "" "Defines whether Mistral Engine should put the workflow on hold or not before " "starting a task." msgstr "" "Definiert, ob Mistral Engine den Arbeitsablauf in den Wartezustand versetzen " "soll oder nicht, bevor eine Aufgabe gestartet wird." msgid "Defines whether auto-assign security group to this Node Group template." msgstr "" "Definiert, ob die Sicherheitsgruppen automatisch dieser Knotengruppenvorlage " "zugewiesen werden." #, python-format msgid "" "Defining more than one configuration for the same action in " "SoftwareComponent \"%s\" is not allowed." msgstr "" "Das Definieren mehrerer Konfigurationen für dieselbe Aktion in " "SoftwareComponent \"%s\" ist nicht zulässig." msgid "Deleting in-progress snapshot" msgstr "Löschen eines laufenden Snapshots" #, python-format msgid "Deleting non-empty container (%(id)s) when %(prop)s is False" msgstr "" "Löschen eines nicht leeren Containers (%(id)s), wenn %(prop)s False ist" #, python-format msgid "Delimiter for %s must be string" msgstr "Trennzeichen für %s muss eine Zeichenfolge sein" msgid "" "Denotes that the deployment is in an error state if this output has a value." msgstr "" "Bezeichnet, dass sich die Bereitstellung in einem Fehlerstatus befindet, " "wenn diese Ausgabe einen Wert aufweist." msgid "Deploy data available" msgstr "Stellen Sie verfügbare Daten bereit" msgid "Deployment cancelled." msgstr "Bereitstellung abgebrochen" #, python-format msgid "Deployment exited with non-zero status code: %s" msgstr "Die Bereitstellung wurde mit Statuscode ungleich Null beendet: %s" #, python-format msgid "Deployment to server failed: %s" msgstr "Bereitstellung auf Server fehlgeschlagen: %s" #, python-format msgid "Deployment with id %s not found" msgstr "Bereitstellung mit ID %s nicht gefunden" msgid "Deprecated." msgstr "Veraltet" msgid "" "Describe time constraints for the alarm. Only evaluate the alarm if the time " "at evaluation is within this time constraint. Start point(s) of the " "constraint are specified with a cron expression, whereas its duration is " "given in seconds." msgstr "" "Beschreiben Sie Zeitbeschränkungen für den Alarm. Bewerten Sie den Alarm nur " "dann, wenn der Zeitpunkt der Auswertung innerhalb dieser Zeitbedingung " "liegt. Startpunkte der Einschränkung sind mit einem Cron-Ausdruck angegeben, " "während ihre Dauer in Sekunden angegeben ist." msgid "Description for the Flow Classifier." msgstr "Beschreibung für den Klassifizierer." msgid "Description for the Port Chain." msgstr "Beschreibung für die Port-Kette." msgid "Description for the Port Pair Group." msgstr "Beschreibung für die Portpaargruppe." msgid "Description for the Port Pair." msgstr "Beschreibung für das Portpaar." msgid "Description for the alarm." msgstr "Beschreibung für den Alarm" msgid "Description for the firewall policy." 
msgstr "Beschreibung für die Firewallrichtlinie" msgid "Description for the firewall rule." msgstr "Beschreibung für die Firewallregel" msgid "Description for the firewall." msgstr "Beschreibung für die Firewall" msgid "Description for the ike policy." msgstr "Beschreibung für die Ike-Richtlinie." msgid "Description for the ipsec policy." msgstr "Beschreibung für die IPSec-Richtlinie." msgid "Description for the ipsec site connection." msgstr "Beschreibung für die IPSec-Standortverbindung." msgid "Description for the time constraint." msgstr "Beschreibung für die Zeitbeschränkung." msgid "Description for the trunk." msgstr "Beschreibung für den Trunk." msgid "Description for the vpn service." msgstr "Beschreibung für den VPN-Dienst." msgid "Description for this interface." msgstr "Beschreibung für diese Schnittstelle" msgid "Description of RecordSet." msgstr "Beschreibung von RecordSet." msgid "Description of domain." msgstr "Beschreibung der Domäne" msgid "Description of keystone domain." msgstr "Beschreibung der Keystone-Domäne" msgid "Description of keystone group." msgstr "Beschreibung der Keystone-Gruppe." msgid "Description of keystone project." msgstr "Beschreibung des Keystone-Projekts." msgid "Description of keystone region." msgstr "Beschreibung der Keystone-Region." msgid "Description of keystone service." msgstr "Beschreibung des Keystone-Service." msgid "Description of keystone user." msgstr "Beschreibung des Keystone-Benutzers" msgid "Description of record." msgstr "Beschreibung des Datensatzes" msgid "Description of the Node Group Template." msgstr "Beschreibung der Knotengruppenvorlage" msgid "Description of the Sahara Group Template." msgstr "Beschreibung der Sahara Group Vorlage." msgid "Description of the alarm." msgstr "Beschreibung des Alarms" msgid "Description of the data source." msgstr "Beschreibung der Datenquelle" msgid "Description of the firewall policy." msgstr "Beschreibung der Firewall-Richtlinie" msgid "Description of the firewall rule." msgstr "Beschreibung der Firewall-Regel" msgid "Description of the firewall." msgstr "Beschreibung der Firewall" msgid "Description of the image." msgstr "Beschreibung des Abbildes" msgid "Description of the input." msgstr "Beschreibung der Eingabe" msgid "Description of the job binary." msgstr "Beschreibung der Job-Binärdatei." msgid "Description of the job." msgstr "Beschreibung des Jobs" msgid "Description of the metering label." msgstr "Beschreibung des Zumeßschildes." msgid "Description of the output." msgstr "Beschreibung der Ausgabe" msgid "Description of the policy." msgstr "Beschreibung der Richtlinie" msgid "Description of the pool." msgstr "Beschreibung des Pools." msgid "Description of the security group rule." msgstr "Beschreibung der Sicherheitsgruppenregel" msgid "Description of the security group." msgstr "Beschreibung der Sicherheitsgruppe" msgid "Description of the segment." msgstr "Beschreibung des Segments" msgid "Description of the vip." msgstr "Beschreibung des vip." msgid "Description of the volume type." msgstr "Beschreibung des Datenträgertyps" msgid "Description of the volume." msgstr "Beschreibung des Datenträgers." msgid "Description of this Load Balancer." msgstr "Beschreibung dieses Load Balancers." msgid "Description of this listener." msgstr "Beschreibung dieses Listeners" msgid "Description of this pool." msgstr "Beschreibung dieses Pools." msgid "Description of zone." msgstr "Beschreibung der Zone." msgid "Desired IPs for this port." msgstr "Gewünschte IPs für diesen Port." 
msgid "Desired capacity of the cluster." msgstr "Gewünschte Kapazität des Clusters" msgid "Desired initial number of instances." msgstr "Gewünschte Anfangsanzahl von Instanzen" msgid "Desired initial number of resources in cluster." msgstr "Gewünschte Anfangsanzahl von Ressourcen im Cluster." msgid "Desired initial number of resources." msgstr "Gewünschte Anfangsanzahl von Ressourcen" msgid "Desired number of instances." msgstr "Gewünschte Anzahl von Instanzen" msgid "DesiredCapacity must be between MinSize and MaxSize" msgstr "Die gewünschte Kapazität muss zwischen MinSize und MaxSize liegen" msgid "Destination IP address or CIDR." msgstr "Ziel-IP-Adresse oder CIDR." msgid "Destination IP prefix or subnet." msgstr "Ziel-IP-Präfix oder Subnetz." msgid "Destination ip_address for this firewall rule." msgstr "Ziel-IP-Adresse für diese Firewallregel" msgid "Destination port number or a range." msgstr "Zielportnummer oder ein Bereich." msgid "Destination port range for this firewall rule." msgstr "Zielportbereich für diese Firewallregel" msgid "Destination protocol port maximum." msgstr "Zielprotokoll-Port maximal." msgid "Destination protocol port minimum." msgstr "Zielprotokollportminimum." msgid "Detailed information about resource." msgstr "Detaillierte Informationen über die Ressource." msgid "Device ID of this port." msgstr "Geräte-ID dieses Ports" msgid "Device info for this network gateway." msgstr "Geräteinfo für dieses Netzwerk-Gateway." msgid "" "Device type: at the moment we can make distinction only between disk and " "cdrom." msgstr "" "Gerätetyp: Zur Zeit können wir nur zwischen Disk und CD-ROM unterscheiden." msgid "" "Dict, which has expand properties for port. Used only if port property is " "not specified for creating port." msgstr "" "Dict, das Eigenschaften für Port erweitert hat. Wird nur verwendet, wenn die " "Port-Eigenschaft nicht zum Erstellen des Ports angegeben ist." msgid "Dictionary containing workflow tasks." msgstr "Dictionary mit Workflow-Aufgaben." msgid "Dictionary of L7-parameters." msgstr "Dictionary der L7-Parameter." msgid "" "Dictionary of chain parameters. Currently, only correlation=mpls is " "supported by default." msgstr "" "Dictionary der Kettenparameter. Momentan wird nur die Korrelation=mpls " "standardmäßig unterstützt." msgid "Dictionary of node configurations." msgstr "Dictionary der Knotenkonfigurationen." msgid "" "Dictionary of service function parameter. Currently only correlation=None is " "supported." msgstr "" "Dictionary des Servicefunktionsparameters. Momentan wird nur die " "Korrelation=None unterstützt." msgid "Dictionary of variables to publish to the workflow context." msgstr "" "Verzeichnis der Variablen, die im Workflow-Kontext veröffentlicht werden " "sollen." msgid "Dictionary which contains input for the workflows." msgstr "Dictionary, das eine Eingabe für die Workflows enthält." msgid "Dictionary which contains input for workflow." msgstr "Dictionary, das eine Eingabe für den Workflow enthält." msgid "Dictionary which defines the workflow to run and its params." msgstr "" "Dictionary, das den auszuführenden Workflow und seine Parameter definiert." msgid "" "Dictionary-like section defining task policies that influence how Mistral " "Engine runs tasks. Must satisfy Mistral DSL v2." msgstr "" "Dictionary-ähnlicher Abschnitt zum Definieren von Task-Richtlinien, die " "beeinflussen, wie Mistral Engine Aufgaben ausführt. Muss Mistral DSL v2 " "erfüllen." msgid "Disable TLS in the cluster." msgstr "Deaktivieren Sie TLS im Cluster." 
msgid "DisableRollback and OnFailure may not be used together" msgstr "DisableRollback und OnFailure können nicht zusammen verwendet werden" msgid "Disk format of image." msgstr "Disk Format des Abbildes." msgid "Does not contain a valid AWS Access Key or certificate" msgstr "" "Enthält keinen gültigen AWS-Zugriffsschlüssel oder ein gültiges Zertifikat" msgid "Domain email." msgstr "Domänen-E-Mail" msgid "Domain id for project." msgstr "Domänen-ID für das Projekt" msgid "Domain id for user." msgstr "Domänen-ID für den Benutzer" msgid "Domain name." msgstr "Domänenname" #, python-format msgid "Duplicate names %s" msgstr "Doppelte Namen %s" msgid "Duplicate refs are not allowed." msgstr "Doppelte Referenzen sind nicht erlaubt." msgid "Duration for the time constraint." msgstr "Dauer für die Zeitbeschränkung." msgid "" "E-mail for the zone. Used in SOA records for the zone. It is required for " "PRIMARY Type, otherwise ignored." msgstr "" "E-Mail für die Zone. Wird in SOA-Datensätzen für die Zone verwendet. Er wird " "für PRIMARY Type benötigt, ansonsten ignoriert." msgid "EIP address to associate with instance." msgstr "EIP-Adresse, die mit der Instanz verknüpft werden soll." #, python-format msgid "Each %(object_name)s must contain a %(sub_section)s key." msgstr "Jeder %(object_name)s muss einen Schlüssel %(sub_section)s enthalten." msgid "Each Resource must contain a Type key." msgstr "Jede Ressource muss einen Typschlüssel enthalten." #, python-format msgid "Each output definition must contain a %s key." msgstr "Jede Ausgabedefinition muss einen %s Schlüssel enthalten." msgid "Ebs is missing, this is required when specifying BlockDeviceMappings." msgstr "" "Ebs fehlt, dies ist erforderlich, wenn Sie BlockDeviceMappings angeben." msgid "" "Egress rules are only allowed when Neutron is used and the 'VpcId' property " "is set." msgstr "" "Egress-Regeln sind nur erlaubt, wenn Neutron verwendet wird und die " "Eigenschaft 'VpcId' gesetzt ist." #, python-format msgid "Either %(net)s or %(port)s must be provided." msgstr "Entweder müssen %(net)s oder %(port)s angegeben werden." msgid "Either 'EIP' or 'AllocationId' must be provided." msgstr "Entweder \"EIP\" oder \"AllocationId\" muss angegeben werden." msgid "Either 'InstanceId' or 'LaunchConfigurationName' must be provided." msgstr "" "Entweder \"InstanceId\" oder \"LaunchConfigurationName\" muss angegeben " "werden." #, python-format msgid "Either project or domain must be specified for role %s" msgstr "" "Für die Rolle %s muss entweder das Projekt oder die Domäne angegeben werden" #, python-format msgid "Either volume_id or snapshot_id must be specified for device mapping %s" msgstr "" "Für die Gerätezuordnung %s muss entweder volume_id oder snapshot_id " "angegeben werden" msgid "" "Either volume_id, snapshot_id, image_id, swap_size, ephemeral_size or " "ephemeral_format must be specified." msgstr "" "Entweder volume_id, snapshot_id, image_id, swap_size, ephemeral_size oder " "ephemeral_format müssen angegeben werden." msgid "Email address of keystone user." msgstr "E-Mail-Adresse des Keystone-Benutzers" msgid "Enable the docker registry in the cluster." msgstr "Aktivieren Sie die Docker-Registrierung im Cluster." msgid "Enable the legacy OS::Heat::CWLiteAlarm resource." msgstr "Aktivieren Sie die ältere OS::Heat::CWLiteAlarm-Ressource." msgid "Enable the preview Stack Abandon feature." msgstr "Aktivieren Sie die Vorschau-Funktion Stapel-Abandon." msgid "Enable the preview Stack Adopt feature." 
msgstr "Aktivieren Sie die Vorschau Stack Adopt-Funktion." msgid "Enable/disable subport addition, removal and trunk delete." msgstr "" "Aktivieren/deaktivieren Sie das Hinzufügen, Entfernen von Subports und " "Löschen von Trunks." msgid "" "Enables Source NAT on the router gateway. NOTE: The default policy setting " "in Neutron restricts usage of this property to administrative users only." msgstr "" "Aktiviert Quell-NAT auf dem Router-Gateway. HINWEIS: Die " "Standardrichtlinieneinstellung in Neutron beschränkt die Verwendung dieser " "Eigenschaft auf administrative Benutzer." msgid "" "Enables engine with convergence architecture. All stacks with this option " "will be created using convergence engine." msgstr "" "Aktiviert Engine mit Konvergenzarchitektur. Alle Stapel mit dieser Option " "werden mit der Konvergenz-Engine erstellt." msgid "Enables or disables read-only access mode of volume." msgstr "" "Aktiviert oder deaktiviert den schreibgeschützten Zugriffsmodus des " "Datenträgers." msgid "Encapsulation mode for the ipsec policy." msgstr "Kapselungsmodus für die IPSec-Richtlinie" msgid "Encountered an empty component." msgstr "Eine leere Komponente gefunden." msgid "" "Encrypt template parameters that were marked as hidden and also all the " "resource properties before storing them in database." msgstr "" "Verschlüsseln Sie Vorlagenparameter, die als ausgeblendet markiert wurden, " "sowie alle Ressourceneigenschaften, bevor Sie sie in der Datenbank speichern." msgid "Encryption algorithm for the ike policy." msgstr "Verschlüsselungsalgorithmus für die IKE-Richtlinie." msgid "Encryption algorithm for the ipsec policy." msgstr "Verschlüsselungsalgorithmus für die IPSec-Richtlinie." msgid "End address for the allocation pool." msgstr "Endadresse für den Zuordnungspool." #, python-format msgid "End resizing the group %(group)s" msgstr "Beenden Sie die Größenänderung der Gruppe %(group)s" msgid "" "Endpoint/url which can be used for signalling handle when signal_transport " "is set to TOKEN_SIGNAL. None for all other signal transports." msgstr "" "Endpunkt/URL, der für die Signalisierung verwendet werden kann, wenn " "signal_transport auf TOKEN_SIGNAL gesetzt ist. Keine für alle anderen " "Signaltransporte." msgid "Endpoint/url which can be used for signalling handle." msgstr "Endpunkt/URL, der zur Signalisierung verwendet werden kann." #, python-format msgid "Engine went down during stack %s" msgstr "Die Engine ist während des Stapel %s ausgefallen" msgid "Engine_Id" msgstr "Engine_Id" msgid "Error" msgstr "Fehler" #, python-format msgid "Error authorizing action %s" msgstr "Fehler beim Autorisieren der Aktion %s" #, python-format msgid "Error creating ec2 keypair for user %s" msgstr "Fehler beim Erstellen des ec2-Schlüsselpaars für Benutzer %s" msgid "" "Error during applying access rules to share \"{0}\". The root cause of the " "problem is the following: {1}." msgstr "" "Fehler beim Anwenden von Zugriffsregeln für die Freigabe von \"{0}\". Die " "Ursache des Problems ist folgende: {1}." msgid "Error during creation of share \"{0}\"" msgstr "Fehler beim Erstellen der Freigabe \"{0}\"" msgid "Error during deleting share \"{0}\"." msgstr "Fehler beim Löschen der Freigabe \"{0}\"." 
#, python-format msgid "Error in %(resource)s output %(attribute)s: %(message)s" msgstr "Fehler in %(resource)s Ausgabe %(attribute)s: %(message)s" msgid "Error in Firewall" msgstr "Fehler in der Firewall" msgid "Error in RecordSet" msgstr "Fehler in RecordSet" #, python-format msgid "Error in creating container '%(name)s' - %(reason)s" msgstr "Fehler beim Erstellen des Containers '%(name)s' - %(reason)s" #, python-format msgid "" "Error in creating container '%(name)s' - interactive mode was enabled but " "the container has stopped running" msgstr "" "Fehler beim Erstellen des Containers '%(name)s' - Der interaktive Modus " "wurde aktiviert, der Container wurde jedoch nicht mehr ausgeführt" msgid "Error in zone" msgstr "Fehler in der Zone" #, python-format msgid "Error parsing template %(tmpl)s %(yea)s" msgstr "Fehler beim Analysieren der Vorlage%(tmpl)s%(yea)s" #, python-format msgid "Error retrieving %(entity)s list from sahara: %(err)s" msgstr "Fehler beim Abrufen der %(entity)s-Liste von Sahara: %(err)s" #, python-format msgid "Error validating value '%(value)s'" msgstr "Fehler beim Validieren des Wertes '%(value)s'" #, python-format msgid "Error validating value '%(value)s': %(message)s" msgstr "Fehler beim Validieren des Wertes '%(value)s':%(message)s" msgid "Ethertype of the traffic." msgstr "Ethertype des Verkehrs." msgid "Event type to evaluate against. If not specified will match all events." msgstr "" "Ereignistyp für die Auswertung. Wenn nicht angegeben, werden alle Ereignisse " "übereinstimmen." msgid "Exclude state for cidr." msgstr "Status für cidr ausschließen" #, python-format msgid "Expected 1 external network, found %d" msgstr "Erwartete 1 externes Netzwerk, gefunden %d" #, python-format msgid "Expected dict, got %(cname)s for files, (value is %(val)s)" msgstr "" "Erwartetes dict, hat %(cname)s für Dateien erhalten, (Wert ist %(val)s)" msgid "Expiration date of the URL." msgstr "Ablaufdatum der URL" msgid "Expiration time is out of date." msgstr "Ablaufzeit ist veraltet." msgid "Expiration {0} is invalid: {1}" msgstr "Ablauf {0} ist ungültig: {1}" msgid "Export locations of share." msgstr "Freigabestandorte exportieren" msgid "Expression of the alarm to evaluate." msgstr "Ausdruck des auszuwertenden Alarms" msgid "External fixed IP address." msgstr "Externe feste IP-Adresse" msgid "External fixed IP addresses for the gateway." msgstr "Externe feste IP-Adressen für das Gateway." msgid "External network gateway configuration for a router." msgstr "Externe Netzwerk-Gateway-Konfiguration für einen Router." msgid "" "Extra parameters to include in the \"floatingip\" object in the creation " "request. Parameters are often specific to installed hardware or extensions." msgstr "" "Zusätzliche Parameter, die in das \"FloatingIP\" -Objekt in der " "Erstellungsanfrage aufgenommen werden sollen. Parameter sind oft spezifisch " "für installierte Hardware oder Erweiterungen." msgid "Extra parameters to include in the creation request." msgstr "" "Zusätzliche Parameter, die in die Erstellungsanfrage aufgenommen werden " "sollen." msgid "Extra parameters to include in the request." msgstr "Zusätzliche Parameter, die in die Anfrage aufgenommen werden sollen." msgid "" "Extra parameters to include in the request. Parameters are often specific to " "installed hardware or extensions." msgstr "" "Zusätzliche Parameter, die in die Anfrage aufgenommen werden sollen. " "Parameter sind oft spezifisch für installierte Hardware oder Erweiterungen." 
msgid "Extra specs key-value pairs defined for share type." msgstr "" "Zusätzliche Schlüssel/Wert-Schlüsselpaare, die für den Freigabetyp definiert " "sind." msgid "Extra specs of the flavor in key-value pairs." msgstr "Zusätzliche Angaben zur Variante in Schlüssel/Wert-Paaren." #, python-format msgid "Failed to attach interface (%(port)s) to server (%(server)s)" msgstr "" "Fehler beim Anfügen der Schnittstelle (%(port)s) an den Server (%(server)s)" #, python-format msgid "Failed to attach volume %(vol)s to server %(srv)s - %(err)s" msgstr "" "Fehler beim Verbinden von Datenträger %(vol)s mit Server %(srv)s - %(err)s" #, python-format msgid "Failed to create Bay '%(name)s' - %(reason)s" msgstr "Fehler beim Erstellen der Bay '%(name)s' - %(reason)s" #, python-format msgid "Failed to create Cluster '%(name)s' - %(reason)s" msgstr "Fehler beim Erstellen des Clusters '%(name)s' - %(reason)s" #, python-format msgid "Failed to detach interface (%(port)s) from server (%(server)s)" msgstr "" "Fehler beim Trennen der Schnittstelle (%(port)s) vom Server (%(server)s)" #, python-format msgid "Failed to execute %(action)s for %(cluster)s: %(reason)s" msgstr "Fehler beim Ausführen von %(action)s für %(cluster)s: %(reason)s" #, python-format msgid "Failed to extend volume %(vol)s - %(err)s" msgstr "Fehler beim Erweitern des Datenträgers %(vol)s - %(err)s" #, python-format msgid "Failed to fetch template: %s" msgstr "Fehler beim Abrufen der Vorlage: %s" #, python-format msgid "Failed to find instance %s" msgstr "Die Instanz %s konnte nicht gefunden werden" #, python-format msgid "Failed to find server %s" msgstr "Der Server %s konnte nicht gefunden werden" #, python-format msgid "Failed to parse JSON data: %s" msgstr "Fehler beim Analysieren von JSON-Daten: %s" #, python-format msgid "Failed to restore volume %(vol)s from backup %(backup)s - %(err)s" msgstr "" "Fehler beim Wiederherstellen von Datenträger %(vol)s aus Backup %(backup)s - " "%(err)s" msgid "Failed to retrieve template" msgstr "Fehler beim Abrufen der Vorlage" #, python-format msgid "Failed to retrieve template data: %s" msgstr "Vorlagendaten konnten nicht abgerufen werden: %s" #, python-format msgid "Failed to retrieve template from %s" msgstr "Fehler beim Abrufen der Vorlage von %s" #, python-format msgid "Failed to retrieve template: %s" msgstr "Fehler beim Abrufen der Vorlage: %s" #, python-format msgid "" "Failed to send message to stack (%(stack_name)s) on other engine " "(%(engine_id)s)" msgstr "" "Fehler beim Senden der Nachricht an den Stapel (%(stack_name)s) auf einer " "anderen Engine (%(engine_id)s)" #, python-format msgid "Failed to stop stack (%(stack_name)s) on other engine (%(engine_id)s)" msgstr "" "Fehler beim Stoppen des Stapels (%(stack_name)s) auf einer anderen Engine " "(%(engine_id)s)" #, python-format msgid "Failed to update Bay '%(name)s' - %(reason)s" msgstr "Fehler beim Aktualisieren der Bay '%(name)s' - %(reason)s" #, python-format msgid "Failed to update Cluster '%(name)s' - %(reason)s" msgstr "Fehler beim Aktualisieren von Cluster '%(name)s' - %(reason)s" #, python-format msgid "Failed to update Cluster Template '%(name)s' - %(reason)s" msgstr "Fehler beim Aktualisieren der Clustervorlage '%(name)s' - %(reason)s" msgid "Failed to update, can not found port info." msgstr "Fehler beim Aktualisieren, keine Port-Informationen gefunden." 
#, python-format msgid "" "Failed validating stack template using Heat endpoint at region \"%(region)s" "\" due to \"%(exc)s\"" msgstr "" "Fehler beim Überprüfen der Stapelvorlage mit dem Heat-Endpunkt in der Region " "\"%(region)s\" aufgrund von \"%(exc)s\"" msgid "Fake attribute !a." msgstr "Gefälschtes Attribut !a." msgid "Fake attribute a." msgstr "Gefälschtes Attribut a." msgid "Fake property !a." msgstr "Gefälschte Eigenschaft !a." msgid "Fake property !c." msgstr "Gefälschte Eigenschaft !c." msgid "Fake property a." msgstr "Gefälschte Eigenschaft a." msgid "Fake property c." msgstr "Gefälschte Eigenschaft c." msgid "Fake property ca." msgstr "Gefälschte Eigenschaft ca." msgid "" "False to trigger actions when the threshold is reached AND the alarm's state " "has changed. By default, actions are called each time the threshold is " "reached." msgstr "" "False, um Aktionen auszulösen, wenn der Schwellenwert erreicht UND der " "Zustand des Alarms geändert wurde. Standardmäßig werden Aktionen jedes Mal " "aufgerufen, wenn der Schwellenwert erreicht wird." #, python-format msgid "Field %(field)s of %(objname)s is not an instance of Field" msgstr "Feld %(field)s von %(objname)s ist keine Instanz von Field" msgid "Firewall creation failed" msgstr "Firewall-Erstellung fehlgeschlagen" msgid "" "Fixed IP address to specify for the port created on the requested network." msgstr "" "Feste IP-Adresse für den Port, der im angeforderten Netzwerk erstellt wurde." msgid "Fixed IP addresses." msgstr "Feste IP-Adressen" msgid "Fixed IPv4 address for this NIC." msgstr "IPv4-Adresse für diese Netzwerkkarte wurde korrigiert." msgid "Flag indicating if traffic to or from instance is validated." msgstr "" "Flag, das angibt, ob der Datenverkehr zu oder von der Instanz validiert " "wurde." msgid "Flag of enable project." msgstr "Flag des Aktivierungsprojekts." msgid "Flag of enable user." msgstr "Flag des Benutzers aktivieren." msgid "" "Flag to enable/disable port security on the network. It provides the default " "value for the attribute of the ports created on this network." msgstr "" "Flag zum Aktivieren/Deaktivieren der Portsicherheit im Netzwerk. Es bietet " "den Standardwert für das Attribut der in diesem Netzwerk erstellten Ports." msgid "" "Flag to enable/disable port security on the port. When disable this " "feature(set it to False), there will be no packages filtering, like security-" "group and address-pairs." msgstr "" "Flag zum Aktivieren/Deaktivieren der Portsicherheit am Port. Wenn Sie diese " "Funktion deaktivieren (auf \"False\" setzen), werden keine Pakete gefiltert, " "wie Sicherheitsgruppen und Adresspaare." msgid "Flavor of the instance." msgstr "Variante der Instanz." msgid "Flow Classifier ID or Name ." msgstr "Flow Classifier ID oder Name." #, python-format msgid "" "For %s, the length of for_each values should be equal if no nested loop." msgstr "" "Für %s sollte die Länge von for_each-Werten gleich sein, wenn keine " "verschachtelte Schleife vorhanden ist." msgid "Friendly name of the port." msgstr "Freundlicher Name des Ports." msgid "Friendly name of the router." msgstr "Freundlicher Name des Routers." msgid "Friendly name of the subnet." msgstr "Freundlicher Name des Subnetzes." msgid "Fully qualified class name to use as a client backend." msgstr "Vollqualifizierter Klassenname zur Verwendung als Client-Back-End." msgid "Fully qualified class name to use as a keystone backend." msgstr "Vollqualifizierter Klassenname zur Verwendung als Keystone-Back-End." 
#, python-format msgid "Function \"%s\" must have arguments" msgstr "Die Funktion \"%s\" muss Argumente haben" #, python-format msgid "Function \"%s\" usage: [\"\", \"\"]" msgstr "Funktion \"%s\": [\" \",\" \"]" #, python-format msgid "Gateway IP address \"%(gateway)s\" is in invalid format." msgstr "Die Gateway-IP-Adresse \"%(gateway)s\" hat ein ungültiges Format." msgid "Gateway network for the router." msgstr "Gateway-Netzwerk für den Router." msgid "Generic HeatAPIException, please use specific subclasses!" msgstr "Generic HeatAPIException, bitte verwenden Sie spezielle Unterklassen!" msgid "Glance image ID or name." msgstr "Glance Abbild ID oder Name." msgid "Governs permissions set in manila for the cluster ips." msgstr "Steuert die Berechtigungen in Manila für die Cluster ips." msgid "Granularity to use for age argument, defaults to days." msgstr "" "Die Granularität, die für das Argument \"Alter\" verwendet wird, lautet " "standardmäßig Tage." msgid "HTTP verb to use for signaling outputvalues" msgstr "HTTP-Verb zur Signalisierung von Ausgabewerten" msgid "Hadoop cluster name." msgstr "Name des Hadoop-Clusters" #, python-format msgid "Header X-Auth-Url \"%s\" not an allowed endpoint" msgstr "Header X-Auth-URL \"%s\" kein erlaubter Endpunkt" msgid "Health probe timeout, in seconds." msgstr "Health Probe Timeout, in Sekunden." msgid "" "Heat build revision. If you would prefer to manage your build revision " "separately, you can move this section to a different file and add it as " "another config option." msgstr "" "Heat-Build-Revision. Wenn Sie Ihre Build-Revision lieber getrennt verwalten " "möchten, können Sie diesen Abschnitt in eine andere Datei verschieben und " "sie als weitere Konfigurationsoption hinzufügen." msgid "Host" msgstr "Gastgeber" msgid "Hostname" msgstr "Hostname" msgid "Hostname of the instance." msgstr "Hostname der Instanz" msgid "How long to preserve deleted data." msgstr "Wie lange werden gelöschte Daten beibehalten?" msgid "" "How the client will signal the wait condition. CFN_SIGNAL will allow an HTTP " "POST to a CFN keypair signed URL. TEMP_URL_SIGNAL will create a Swift " "TempURL to be signalled via HTTP PUT. HEAT_SIGNAL will allow calls to the " "Heat API resource-signal using the provided keystone credentials. " "ZAQAR_SIGNAL will create a dedicated zaqar queue to be signalled using the " "provided keystone credentials. TOKEN_SIGNAL will allow and HTTP POST to a " "Heat API endpoint with the provided keystone token. NO_SIGNAL will result in " "the resource going to a signalled state without waiting for any signal." msgstr "" "Wie der Client die Wartebedingung signalisiert. CFN_SIGNAL ermöglicht einen " "HTTP-POST zu einer CFN-Schlüsselpaar-signierten URL. TEMP_URL_SIGNAL " "erstellt eine Swift TempURL, die über HTTP PUT signalisiert wird. " "HEAT_SIGNAL ermöglicht Aufrufe an das Heat-API-Ressourcensignal unter " "Verwendung der bereitgestellten Keystone-Anmeldeinformationen. ZAQAR_SIGNAL " "erstellt eine dedizierte Zaqar-Warteschlange, die mit den bereitgestellten " "Keystone-Anmeldeinformationen signalisiert wird. TOKEN_SIGNAL ermöglicht und " "HTTP-POST zu einem Heat-API-Endpunkt mit dem bereitgestellten Keystone-" "Token. NO_SIGNAL führt dazu, dass die Ressource in einen signalisierten " "Zustand übergeht, ohne auf irgendein Signal zu warten." msgid "" "How the server should receive the metadata required for software " "configuration. 
POLL_SERVER_CFN will allow calls to the cfn API action " "DescribeStackResource authenticated with the provided keypair. " "POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the " "provided keystone credentials. POLL_TEMP_URL will create and populate a " "Swift TempURL with metadata for polling. ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Wie der Server die für die Softwarekonfiguration erforderlichen Metadaten " "erhalten soll. POLL_SERVER_CFN erlaubt Aufrufe an die cfn-API-Aktion " "DescribeStackResource, die mit dem angegebenen Schlüsselpaar authentifiziert " "wurde. POLL_SERVER_HEAT erlaubt Aufrufe an die Heat API Resource-Show mit " "den bereitgestellten Keystone-Zugangsdaten. POLL_TEMP_URL erstellt und füllt " "eine Swift-TempURL mit Metadaten für die Abfrage. ZAQAR_MESSAGE erstellt " "eine dedizierte Zaqar-Warteschlange und stellt die Metadaten für die Abfrage " "bereit." msgid "How the server should signal to heat with the deployment output values." msgstr "" "Wie sollte der Server signalisieren, dass er mit den " "Bereitstellungsausgabewerten heizen soll?" msgid "" "How the server should signal to heat with the deployment output values. " "CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL. " "TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT. " "HEAT_SIGNAL will allow calls to the Heat API resource-signal using the " "provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar " "queue to be signaled using the provided keystone credentials. NO_SIGNAL will " "result in the resource going to the COMPLETE state without waiting for any " "signal." msgstr "" "Wie sollte der Server zu Heat die Bereitstellungsausgabewerten signalisieren " "soll? CFN_SIGNAL ermöglicht einen HTTP-POST zu einer CFN-Schlüsselpaar-" "signierten URL. TEMP_URL_SIGNAL erstellt eine Swift TempURL, die über HTTP " "PUT signalisiert wird. HEAT_SIGNAL ermöglicht Aufrufe an das Heat-API-" "Ressourcensignal unter Verwendung der bereitgestellten Keystone-" "Anmeldeinformationen. ZAQAR_SIGNAL erstellt eine dedizierte Zaqar-" "Warteschlange, die mit den bereitgestellten Keystone-Anmeldeinformationen " "signalisiert wird. NO_SIGNAL führt dazu, dass die Ressource in den Zustand " "COMPLETE übergeht, ohne auf ein Signal zu warten." msgid "" "How the user_data should be formatted for the server. For HEAT_CFNTOOLS, the " "user_data is bundled as part of the heat-cfntools cloud-init boot " "configuration data. For RAW the user_data is passed to Nova unmodified. For " "SOFTWARE_CONFIG user_data is bundled as part of the software config data, " "and metadata is derived from any associated SoftwareDeployment resources." msgstr "" "Wie die user_data für den Server formatiert werden soll. Für HEAT_CFNTOOLS " "wird die Benutzerdaten als Teil der Konfigurationsdaten für die Cloud-" "Initialisierung von heat-cfntools gebündelt. Für RAW werden die " "Benutzerdaten unverändert an Nova übergeben. Für SOFTWARE_CONFIG werden " "Benutzerdaten als Teil der Softwarekonfigurationsdaten gebündelt, und " "Metadaten werden von allen zugehörigen SoftwareDeployment-Ressourcen " "abgeleitet." msgid "" "How to handle changes to removal_policies on update. The default \"append\" " "mode appends to the internal list, \"update\" replaces it on update." msgstr "" "Wie werden Änderungen an removal_policies beim Update gehandhabt? Der " "Standardmodus \"append\" hängt an die interne Liste an, \"update\" ersetzt " "sie beim Update." 
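# Translator note (illustrative only, not part of the extracted catalog): the
# transport and user_data format choices described above surface as
# OS::Nova::Server properties in a HOT template. A minimal sketch, assuming
# the image and flavor names:
#
#   resources:
#     server:
#       type: OS::Nova::Server
#       properties:
#         image: ubuntu            # assumed image name
#         flavor: m1.small         # assumed flavor name
#         user_data_format: SOFTWARE_CONFIG
#         software_config_transport: POLL_SERVER_HEAT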
msgid "Human readable name for the secret." msgstr "Vom Menschen lesbarer Name für das Geheimnis." msgid "Human-readable name for the container." msgstr "Vom Menschen lesbarer Name für den Container." msgid "" "ID list of the L3 agent. User can specify multi-agents for highly available " "router. NOTE: The default policy setting in Neutron restricts usage of this " "property to administrative users only." msgstr "" "ID-Liste des L3-Agenten. Benutzer kann Multi-Agenten für hoch verfügbare " "Router angeben. HINWEIS: Die Standardrichtlinieneinstellung in Neutron " "beschränkt die Verwendung dieser Eigenschaft auf administrative Benutzer." msgid "ID of an existing port to associate with this server." msgstr "ID eines vorhandenen Ports, der diesem Server zugeordnet werden soll." msgid "" "ID of an existing port with at least one IP address to associate with this " "floating IP." msgstr "" "ID eines vorhandenen Ports mit mindestens einer IP-Adresse, die dieser " "Floating-IP zugeordnet werden soll." msgid "" "ID of image stored in Glance that should be used as the kernel when booting " "an AMI-style image." msgstr "" "ID des in Glance gespeicherten Images, das beim Booten eines AMI-basierten " "Abbildes als Kernel verwendet werden soll." msgid "" "ID of image stored in Glance that should be used as the ramdisk when booting " "an AMI-style image." msgstr "" "ID des in Glance gespeicherten Abbildes, das beim Booten eines AMI-Abbildes " "als Ramdisk verwendet werden soll." msgid "ID of job's main job binary." msgstr "ID des Hauptauftragsbinärs des Jobs." msgid "ID of network to create a port on." msgstr "ID des Netzwerks, um einen Port zu erstellen." msgid "ID of project for API authentication" msgstr "ID des Projekts für die API-Authentifizierung" msgid "ID of queue to use for signaling output values" msgstr "" "ID der Warteschlange, die zum Signalisieren von Ausgabewerten verwendet " "werden soll" msgid "" "ID of resource to apply configuration to. Normally this should be a Nova " "server ID." msgstr "" "ID der Ressource, auf die die Konfiguration angewendet werden soll. " "Normalerweise sollte dies eine Nova Server ID sein." msgid "" "ID of server (VM, etc...) on host that is used for exporting network file-" "system." msgstr "" "ID des Servers (VM, etc ...) auf dem Host, der für den Export des " "Netzwerkdateisystems verwendet wird." msgid "ID of signal to use for signaling output values" msgstr "ID des Signals zur Signalisierung der Ausgabewerte" msgid "" "ID of software configuration resource to execute when applying to the server." msgstr "" "ID der Softwarekonfigurationsressource, die beim Anwenden auf den Server " "ausgeführt werden soll." msgid "ID of the Cluster Template used for Node Groups and configurations." msgstr "" "ID der Cluster-Vorlage, die für Knotengruppen und Konfigurationen verwendet " "wird." msgid "ID of the InternetGateway." msgstr "ID des InternetGateways." msgid "" "ID of the L3 agent. NOTE: The default policy setting in Neutron restricts " "usage of this property to administrative users only." msgstr "" "ID des L3-Agenten. HINWEIS: Die Standardrichtlinieneinstellung in Neutron " "beschränkt die Verwendung dieser Eigenschaft auf administrative Benutzer." msgid "ID of the Node Group Template." msgstr "ID der Knotengruppenvorlage." msgid "ID of the VPNGateway to attach to the VPC." msgstr "ID des VPNGateway zum Anhängen an die VPC." msgid "ID of the default image to use for the template." msgstr "ID des Standardbildes, das für die Vorlage verwendet werden soll." 
msgid "ID of the default pool this listener is associated to." msgstr "ID des Standardpools, dem dieser Listener zugeordnet ist." msgid "ID of the floating IP to assign to the server." msgstr "ID der Floating-IP, die dem Server zugewiesen werden soll." msgid "ID of the floating IP to associate." msgstr "ID der zu assoziierenden Floating IP." msgid "ID of the health monitor associated with this pool." msgstr "ID des Gesundheitsmonitors, der diesem Pool zugeordnet ist." msgid "ID of the image to use for the template." msgstr "ID des Abbildes, das für die Vorlage verwendet werden soll." msgid "ID of the load balancer this listener is associated to." msgstr "ID des Load Balancers, dem dieser Listener zugeordnet ist." msgid "ID of the network in which this IP is allocated." msgstr "ID des Netzwerks, in dem diese IP zugewiesen ist." msgid "ID of the port associated with this IP." msgstr "ID des Ports, der dieser IP zugeordnet ist." msgid "ID of the queue." msgstr "ID der Warteschlange" msgid "ID of the router used as gateway, set when associated with a port." msgstr "" "ID des Routers, der als Gateway verwendet wird, wenn er einem Port " "zugeordnet ist." msgid "ID of the router." msgstr "ID des Routers." msgid "ID of the server being deployed to" msgstr "ID des Servers, auf dem bereitgestellt wird" msgid "ID of the stack this deployment belongs to" msgstr "ID des Stacks, zu dem diese Bereitstellung gehört" msgid "ID of the tenant to which the RBAC policy will be enforced." msgstr "ID des Mandanten, für den die RBAC-Richtlinie durchgesetzt wird." msgid "ID of the tenant who owns the health monitor." msgstr "ID des Mandanten, der den Gesundheitsmonitor besitzt." msgid "ID or Name of the QoS specs." msgstr "ID oder Name der QoS-Spezifikationen." msgid "ID or name of L7 policy this rule belongs to." msgstr "ID oder Name der L7-Richtlinie, zu der diese Regel gehört." msgid "ID or name of a port to be used as a parent port." msgstr "" "ID oder Name eines Ports, der als übergeordneter Port verwendet werden soll." msgid "ID or name of a port to be used as a subport." msgstr "ID oder Name eines Ports, der als Unterport verwendet werden soll." msgid "ID or name of a port used as a parent port." msgstr "ID oder Name eines Ports, der als übergeordneter Port verwendet wird." msgid "ID or name of the QoS policy." msgstr "ID oder Name der QoS-Richtlinie." msgid "ID or name of the RBAC object." msgstr "ID oder Name des RBAC-Objekts." msgid "ID or name of the cluster to run the job in." msgstr "ID oder Name des Clusters, in dem der Job ausgeführt werden soll." msgid "ID or name of the default pool for the listener." msgstr "ID oder Name des Standardpools für den Listener." msgid "" "ID or name of the default pool for the listener. Requires shared_pools " "service extension." msgstr "" "ID oder Name des Standardpools für den Listener. Erfordert die " "Serviceerweiterung shared_pools." msgid "ID or name of the egress neutron port." msgstr "ID oder Name des Austritts-Neutronen-Ports." msgid "ID or name of the external network for the gateway." msgstr "ID oder Name des externen Netzwerks für das Gateway." msgid "ID or name of the image to register." msgstr "ID oder Name des zu registrierenden Abbildes." msgid "ID or name of the ingress neutron port." msgstr "ID oder Name des Eingangs-Neutronen-Ports." msgid "ID or name of the input data source." msgstr "ID oder Name der Eingabedatenquelle" msgid "ID or name of the listener this policy belongs to." msgstr "ID oder Name des Listeners, zu dem diese Richtlinie gehört." 
msgid "ID or name of the load balancer with which listener is associated." msgstr "" "ID oder Name des Lastenausgleichsmoduls, dem der Listener zugeordnet ist." msgid "ID or name of the load balancing pool." msgstr "ID oder Name des Lastverteilungspools." msgid "ID or name of the neutron destination port." msgstr "ID oder Name des Neutronenzielorts." msgid "ID or name of the neutron source port." msgstr "ID oder Name des Neutronenquellen-Ports." msgid "ID or name of the output data source." msgstr "ID oder Name der Ausgabedatenquelle" msgid "ID or name of the pool for REDIRECT_TO_POOL action type." msgstr "ID oder Name des Pools für den Aktionstyp REDIRECT_TO_POOL." msgid "" "ID or name of user to whom to add key-pair. The usage of this property is " "limited to being used by administrators only. Supported since Nova api " "version 2.10." msgstr "" "ID oder Name des Benutzers, dem ein Schlüsselpaar hinzugefügt werden soll. " "Die Verwendung dieser Eigenschaft ist darauf beschränkt, nur von " "Administratoren verwendet zu werden. Unterstützt seit Nova api Version 2.10." msgid "" "ID that AWS assigns to represent the allocation of the address for use with " "Amazon VPC. Returned only for VPC elastic IP addresses." msgstr "" "ID, die von AWS zugewiesen wird, um die Zuordnung der Adresse für die " "Verwendung mit Amazon VPC zu repräsentieren. Nur für elastische VPC-IP-" "Adressen zurückgegeben." msgid "IDs or names of job's lib job binaries." msgstr "IDs oder Namen der Job-Binärdateien des Jobs." msgid "" "IDs or names of job's main job binary. In case of specific Sahara service, " "this property designed as a list, but accepts only one item." msgstr "" "IDs oder Namen des Hauptauftragsbinärs des Jobs. Im Falle eines bestimmten " "Sahara-Dienstes ist diese Eigenschaft als Liste konzipiert, akzeptiert " "jedoch nur einen Gegenstand." msgid "IP Protocol for the Flow Classifier." msgstr "IP-Protokoll für den Klassifizierer." msgid "IP address and port of the pool." msgstr "IP-Adresse und Port des Pools." msgid "IP address desired in the subnet for this port." msgstr "IP-Adresse, die im Subnetz für diesen Port gewünscht wird." msgid "IP address for the VIP." msgstr "IP-Adresse für den VIP." msgid "IP address of the associated port, if specified." msgstr "IP-Adresse des zugeordneten Ports, falls angegeben." msgid "" "IP address of the floating IP. NOTE: The default policy setting in Neutron " "restricts usage of this property to administrative users only." msgstr "" "IP-Adresse der Floating-IP. HINWEIS: Die Standardrichtlinieneinstellung in " "Neutron beschränkt die Verwendung dieser Eigenschaft auf administrative " "Benutzer." msgid "IP address of the pool member on the pool network." msgstr "IP-Adresse des Pool-Mitglieds im Pool-Netzwerk." msgid "IP address of the pool member." msgstr "IP-Adresse des Poolmitglieds" msgid "IP address of the vip." msgstr "IP-Adresse des vip." msgid "IP address to allow through this port." msgstr "IP-Adresse, um diesen Port zuzulassen." msgid "IP address to use if the port has multiple addresses." msgstr "Zu verwendende IP-Adresse, wenn der Port mehrere Adressen hat." msgid "" "IP or other address information about guest that allowed to access to Share." msgstr "" "IP- oder andere Adressinformationen zu Gästen, die auf Share zugreifen " "dürfen." msgid "IPv6 RA (Router Advertisement) mode." msgstr "IPv6 RA (Router Advertisement) Modus." msgid "IPv6 address mode." msgstr "IPv6-Adressmodus" msgid "Id of a resource." msgstr "ID einer Ressource." 
msgid "Id of the manila share." msgstr "ID der Manila Freigabe." msgid "Id of the tenant owning the firewall policy." msgstr "ID des Mandanten, der die Firewall-Richtlinie besitzt." msgid "Id of the tenant owning the firewall." msgstr "ID des Mandaten, der die Firewall besitzt." msgid "Identifier of the source instance to replicate." msgstr "ID der zu replizierenden Quellinstanz" #, python-format msgid "" "If \"%(size)s\" is provided, only one of \"%(image)s\", \"%(image_ref)s\", " "\"%(source_vol)s\", \"%(snapshot_id)s\" can be specified, but currently " "specified options: %(exclusive_options)s." msgstr "" "Wenn \"%(size)s\" angegeben wird, kann nur eines von \"%(image)s\", " "\"%(image_ref)s\", \"%(source_vol)s\", \"%(snapshot_id)s\" angegeben werden " "aktuell angegebene Optionen: %(exclusive_options)s." msgid "If False, closes the client socket connection explicitly." msgstr "Wenn False, wird die Client-Socket-Verbindung explizit geschlossen." msgid "" "If True, delete any objects in the container when the container is deleted. " "Otherwise, deleting a non-empty container will result in an error." msgstr "" "Wenn True, löschen Sie alle Objekte im Container, wenn der Container " "gelöscht wird. Andernfalls führt das Löschen eines nicht leeren Containers " "zu einem Fehler." msgid "If True, enable config drive on the server." msgstr "Wenn True, aktivieren Sie das Konfigurationslaufwerk auf dem Server." msgid "If True, execution will be shared across the tenants." msgstr "Wenn True, wird die Ausführung über die Mandanten verteilt." msgid "" "If True, job will be protected from modifications and can not be deleted " "until this property is set to False." msgstr "" "Wenn True, wird der Job vor Änderungen geschützt und kann erst gelöscht " "werden, wenn diese Eigenschaft auf False gesetzt ist." msgid "If True, job will be shared across the tenants." msgstr "Wenn True, wird der Job über die Mandanten verteilt." msgid "" "If configured, it allows to run action or workflow associated with a task " "multiple times on a provided list of items." msgstr "" "Wenn konfiguriert, können Aktionen oder Arbeitsabläufe, die mit einer " "Aufgabe verknüpft sind, mehrere Male in einer bereitgestellten Liste von " "Elementen ausgeführt werden." #, python-format msgid "" "If neither \"%(backup_id)s\" nor \"%(size)s\" is provided, one and only one " "of \"%(source_vol)s\", \"%(snapshot_id)s\" must be specified, but currently " "specified options: %(exclusive_options)s." msgstr "" "Wenn weder \"%(backup_id)s\" noch \"%(size)s\" angegeben ist, muss nur einer " "von \"%(source_vol)s\", \"%(snapshot_id)s\" angegeben werden, aber die " "aktuell angegebenen Optionen: %(exclusive_options)s." msgid "If set, then the server's certificate will not be verified." msgstr "" "Wenn dies eingestellt ist, wird das Zertifikat des Servers nicht verifiziert." msgid "If specified, the backup to create the volume from." msgstr "" "Wenn angegeben, die Sicherung, aus der der Datenträger erstellt werden soll." msgid "If specified, the backup used as the source to create the volume." msgstr "" "Wenn angegeben, wird die Sicherung als Quelle zum Erstellen des Volumes " "verwendet." msgid "If specified, the name or ID of the image to create the volume from." msgstr "" "Wenn angegeben, der Name oder die ID des Abbildes, aus dem der Datenträger " "erstellt werden soll." msgid "If specified, the snapshot to create the volume from." msgstr "" "Wenn angegeben, die Schattenkopie, aus dem der Datenträger erstellt werden " "soll." 
msgid "If specified, the type of volume to use, mapping to a specific backend." msgstr "" "Wenn angegeben, der Typ des zu verwendenden Datenträgers, Zuordnung zu einem " "bestimmten Backend." msgid "If specified, the volume to use as source." msgstr "Wenn angegeben, das Datenträger, das als Quelle verwendet werden soll." msgid "" "If the region is hierarchically a child of another region, set this " "parameter to the ID of the parent region." msgstr "" "Wenn die Region hierarchisch ein Kind einer anderen Region ist, legen Sie " "diesen Parameter auf die ID der übergeordneten Region fest." msgid "" "If true, the resources in the chain will be created concurrently. If false " "or omitted, each resource will be treated as having a dependency on the " "previous resource in the list." msgstr "" "Wenn dies der Fall ist, werden die Ressourcen in der Kette gleichzeitig " "erstellt. Wenn false oder nicht angegeben, wird jede Ressource als von der " "vorherigen Ressource in der Liste abhängig behandelt." msgid "If without InstanceId, ImageId and InstanceType are required." msgstr "Wenn ohne InstanceId, ImageId und InstanceType erforderlich sind." #, python-format msgid "Illegal prefix bounds: %(key1)s=%(value1)s, %(key2)s=%(value2)s." msgstr "Ungültige Präfixgrenzen: %(key1)s=%(value1)s, %(key2)s=%(value2)s." #, python-format msgid "" "Image %(image)s requires %(imram)s minimum ram. Flavor %(flavor)s has only " "%(flram)s." msgstr "" "Abbild %(image)s benötigt %(imram)s Minimum RAM. Variante %(flavor)s hat nur " "%(flram)s." #, python-format msgid "" "Image %(image)s requires %(imsz)s GB minimum disk space. Flavor %(flavor)s " "has only %(flsz)s GB." msgstr "" "Abbild %(image)s benötigt %(imsz)s GB minimalen Speicherplatz. Variante " "%(flavor)s hat nur %(flsz)s GB." #, python-format msgid "Image status is required to be %(cstatus)s not %(wstatus)s." msgstr "Der Abbildstatus muss %(cstatus)s nicht %(wstatus)s sein." msgid "Incompatible parameters were used together" msgstr "Inkompatible Parameter wurden zusammen verwendet" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be one of: %(allowed)s" msgstr "" "Falsche Argumente zu \"%(fn_name)s\" sollten eine der folgenden sein: " "%(allowed)s" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be: %(example)s" msgstr "Falsche Argumente zu \"%(fn_name)s\" sollten sein: %(example)s" msgid "Incorrect arguments: Items to concat must be lists." msgstr "Falsche Argumente: Zu concat gehörende Elemente müssen Listen sein." msgid "Incorrect arguments: Items to merge must be maps." msgstr "Falsche Argumente: Zu vereinende Elemente müssen Maps sein." #, python-format msgid "" "Incorrect arguments: to \"%(fn_name)s\", arguments must be a list of maps. " "Example: %(example)s" msgstr "" "Falsche Argumente: Zu \"%(fn_name)s\" müssen Argumente eine Liste von Maps " "sein. Beispiel: %(example)s" #, python-format msgid "" "Incorrect index to \"%(fn_name)s\" should be between 0 and %(max_index)s" msgstr "" "Falscher Index zu \"%(fn_name)s\" sollte zwischen 0 und %(max_index)s liegen" #, python-format msgid "Incorrect index to \"%(fn_name)s\" should be: %(example)s" msgstr "Falscher Index zu \"%(fn_name)s\" sollte sein: %(example)s" #, python-format msgid "" "Incorrect translation rule using - cannot resolve Add rule for non-list " "translation value \"%s\"." msgstr "" "Falsche Übersetzungsregel benutzt - kann die Regel für den Umsetzungswert " "\"%s\" für die Liste ohne Liste nicht auflösen." 
#, python-format msgid "Index to \"%s\" must be a string" msgstr "Der Index auf \"%s\" muss eine Zeichenfolge sein" #, python-format msgid "Index to \"%s\" must be an integer" msgstr "Der Index auf \"%s\" muss eine ganze Zahl sein" msgid "" "Indicate if cinder-backup service is enabled. This is a temporary workaround " "until cinder-backup service becomes discoverable, see LP#1334856." msgstr "" "Geben Sie an, ob der Cinder-Backup-Dienst aktiviert ist. Dies ist eine " "vorübergehende Problemumgehung, bis der Schinder-Sicherungsdienst erkennbar " "wird, siehe LP # 1334856." msgid "" "Indicate whether the volume should be deleted when the instance is " "terminated." msgstr "" "Geben Sie an, ob der Datenträger beim Beenden der Instanz gelöscht werden " "soll." msgid "" "Indicate whether the volume should be deleted when the server is terminated." msgstr "" "Geben Sie an, ob der Datenträger beim Beenden des Servers gelöscht werden " "soll." msgid "Indicates remote IP prefix to be associated with this metering rule." msgstr "" "Gibt das Remote-IP-Präfix an, das dieser Messregel zugeordnet werden soll." msgid "Indicates whether created clusters should have a floating ip or not." msgstr "Gibt an, ob erstellte Cluster eine Floating-IP haben oder nicht." msgid "" "Indicates whether created clusters should have a load balancer for master " "nodes or not." msgstr "" "Gibt an, ob erstellte Cluster einen Load Balancer für Master-Knoten haben " "oder nicht." msgid "" "Indicates whether or not to create a distributed router. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only. This property can not be used in conjunction with the L3 agent " "ID." msgstr "" "Gibt an, ob ein verteilter Router erstellt werden soll oder nicht. HINWEIS: " "Die Standardrichtlinieneinstellung in Neutron beschränkt die Verwendung " "dieser Eigenschaft auf administrative Benutzer. Diese Eigenschaft kann nicht " "zusammen mit der L3-Agenten-ID verwendet werden." msgid "" "Indicates whether or not to create a highly available router. NOTE: The " "default policy setting in Neutron restricts usage of this property to " "administrative users only. And now neutron do not support distributed and ha " "at the same time." msgstr "" "Gibt an, ob ein hochverfügbarer Router erstellt werden soll oder nicht. " "HINWEIS: Die Standardrichtlinieneinstellung in Neutron beschränkt die " "Verwendung dieser Eigenschaft auf administrative Benutzer. Und jetzt " "unterstützen Neutron nicht verteilt und ha zur gleichen Zeit." msgid "Indicates whether the project also acts as a domain." msgstr "Gibt an, ob das Projekt auch als Domäne fungiert." msgid "Indicates whether this firewall rule is enabled or not." msgstr "Gibt an, ob diese Firewallregel aktiviert ist oder nicht." msgid "Information used to configure the bucket as a static website." msgstr "" "Informationen, die zum Konfigurieren des Buckets als statische Website " "verwendet werden." msgid "Initiator state in lowercase for the ipsec site connection." msgstr "Initiatorstatus in Kleinbuchstaben für die IPSec-Standortverbindung." #, python-format msgid "Input in signal data must be a map, find a %s" msgstr "Eingabe in Signaldaten muss eine Map sein, finde ein %s" msgid "Input values for the workflow." msgstr "Eingabewerte für den Workflow" msgid "Input values to apply to the software configuration on this server." msgstr "" "Eingabewerte, die für die Softwarekonfiguration auf diesem Server gelten." 
msgid "Input values to pass to the Mistral workflow." msgstr "Eingabewerte, die an den Mistral-Workflow übergeben werden." msgid "Instance ID to associate with EIP specified by EIP property." msgstr "" "Instanz-ID, die mit EIP verknüpft werden soll, die von der EIP-Eigenschaft " "angegeben wird." msgid "Instance ID to associate with EIP." msgstr "Instanz-ID, die mit EIP verknüpft werden soll." msgid "Instance connection to CFN/CW API validate certs if SSL is used." msgstr "" "Die Instanzverbindung zur CFN/CW API validiert Zertifikate, wenn SSL " "verwendet wird." msgid "Instance connection to CFN/CW API via https." msgstr "Instanzverbindung zur CFN/CW API über https." #, python-format msgid "Instance is not ACTIVE (was: %s)" msgstr "Instanz ist nicht AKTIV (war: %s)" #, python-format msgid "" "Instance metadata must not contain greater than %s entries. This is the " "maximum number allowed by your service provider" msgstr "" "Instanzmetadaten dürfen keine Einträge größer als %s enthalten. Dies ist die " "maximale Anzahl, die Ihr Dienstanbieter zulässt" msgid "" "Integer used for ordering the boot disks. If it is not specified, value " "\"0\" will be set for bootable sources (volume, snapshot, image); value " "\"-1\" will be set for non-bootable sources." msgstr "" "Integer für die Bestellung der Bootdisketten. Wenn es nicht angegeben ist, " "wird der Wert \"0\" für bootfähige Quellen (Datenträger, Schattenkopie, " "Abbild) gesetzt; Der Wert \"-1\" wird für nicht bootfähige Quellen " "festgelegt." msgid "Interface arguments to add to the job." msgstr "Schnittstellenargumente zum Hinzufügen zum Job." msgid "Interface type of keystone service endpoint." msgstr "Schnittstellentyp des Keystone-Service-Endpunkts." msgid "Internet protocol version." msgstr "Internetprotokollversion." msgid "" "Interval in seconds to invoke webhooks if the alarm state does not " "transition away from the defined trigger state. A value of 0 will disable " "continuous notifications. This property is only applicable for the webhook " "notification type and has default period interval of 60 seconds." msgstr "" "Intervall in Sekunden, um Webhooks aufzurufen, wenn der Alarmstatus nicht " "vom definierten Trigger-Status abweicht. Ein Wert von 0 deaktiviert " "fortlaufende Benachrichtigungen. Diese Eigenschaft gilt nur für den Webhook-" "Benachrichtigungstyp und hat ein Standardperiodenintervall von 60 Sekunden." #, python-format msgid "" "Invalid %(prop1)s for specified '%(value)s' value of '%(prop2)s' property." msgstr "" "Ungültige%(prop1)s für den angegebenen Wert ' %(value)s' der Eigenschaft " "'%(prop2)s'." 
#, python-format msgid "Invalid %s, expected a mapping" msgstr "Ungültige %s, erwartete eine Zuordnung" #, python-format msgid "Invalid CRON expression: %s" msgstr "Ungültiger CRON-Ausdruck: %s" #, python-format msgid "Invalid Parameter type \"%s\"" msgstr "Ungültiger Parametertyp \"%s\"" #, python-format msgid "Invalid Property %s" msgstr "Ungültige Eigenschaft %s" msgid "Invalid Stack address" msgstr "Ungültige Stapeladresse" msgid "Invalid Template URL" msgstr "Ungültige Vorlagen-URL" #, python-format msgid "Invalid URL port \"%(port)s\" for %(fn_name)s called with %(args)s" msgstr "" "Ungültiger URL-Port \"%(port)s\" für %(fn_name)s wird mit %(args)s aufgerufen" #, python-format msgid "Invalid URL port %d, must be in range 1-65535" msgstr "Ungültiger URL-Port %d muss im Bereich 1-65535 liegen" #, python-format msgid "Invalid URL scheme %s" msgstr "Ungültiges URL-Schema %s" #, python-format msgid "Invalid UUID version (%d)" msgstr "Ungültige UUID-Version (%d)" #, python-format msgid "" "Invalid action \"%(action)s\" for object type %(obj_type)s. Valid actions: " "%(valid_actions)s" msgstr "" "Ungültige Aktion \"%(action)s\" für Objekttyp %(obj_type)s. Gültige " "Aktionen: %(valid_actions)s" #, python-format msgid "Invalid action %s" msgstr "Ungültige Aktion %s" #, python-format msgid "Invalid action %s specified" msgstr "Die ungültige Aktion %s wurde angegeben" #, python-format msgid "Invalid adopt data: %s" msgstr "Ungültige Adoptionsdaten: %s" #, python-format msgid "Invalid arguments to \"%(fn)s\": %(args)s" msgstr "Ungültige Argumente für \"%(fn)s\": %(args)s" #, python-format msgid "Invalid cloud_backend setting in heat.conf detected - %s" msgstr "Ungültige cloud_backend-Einstellung in heat.conf erkannt - %s" #, python-format msgid "Invalid codes in ignore_errors : %s" msgstr "Ungültige Codes in ignore_errors: %s" #, python-format msgid "Invalid condition \"%s\"" msgstr "Ungültige Bedingung \"%s\"" #, python-format msgid "Invalid content type %(content_type)s" msgstr "Ungültiger Inhaltstyp %(content_type)s" #, python-format msgid "Invalid default %(default)s (%(exc)s)" msgstr "Ungültiger Standard %(default)s (%(exc)s)" #, python-format msgid "Invalid deletion policy \"%s\"" msgstr "Ungültige Löschrichtlinie \"%s\"" #, python-format msgid "" "Invalid dependency with external %(resource_type)s resource: %(external_id)s" msgstr "" "Ungültige Abhängigkeit mit externer Ressource %(resource_type)s: " "%(external_id)s" #, python-format msgid "" "Invalid external resource: Resource %(external_id)s (%(type)s) can not be " "found." msgstr "" "Ungültige externe Ressource: Ressource %(external_id)s (%(type)s) konnte " "nicht gefunden werden." 
#, python-format msgid "Invalid filter parameters %s" msgstr "Ungültige Filterparameter %s" #, python-format msgid "Invalid hook type \"%(hook)s\" for %(resource)s" msgstr "Ungültiger Hook-Typ \"%(hook)s\" für %(resource)s" #, python-format msgid "" "Invalid hook type \"%(value)s\" for resource breakpoint, acceptable hook " "types are: %(types)s" msgstr "" "Ungültiger Hook-Typ \"%(value)s\" für Ressourcen-Haltepunkt, zulässige Hook-" "Typen sind: %(types)s" #, python-format msgid "Invalid key %s" msgstr "Ungültiger Schlüssel %s" #, python-format msgid "Invalid key '%(key)s' for %(entity)s" msgstr "Ungültiger Schlüssel '%(key)s' für %(entity)s" #, python-format msgid "Invalid keys in resource mark unhealthy %s" msgstr "Ungültige Schlüssel in der Ressource markieren ungesunde %s" #, python-format msgid "Invalid keyword(s) inside a resource definition: %s" msgstr "Ungültige Schlüsselwörter in einer Ressourcendefinition: %s" #, python-format msgid "Invalid merge strategy '%(strategy)s' for parameter '%(param)s'." msgstr "" "Ungültige Zusammenführungsstrategie '%(strategy)s' für den Parameter " "'%(param)s'." msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "Ungültige Mischung aus Disk- und Containerformaten Wenn Sie ein Disketten- " "oder Containerformat auf 'aki', 'ari' oder 'ami' setzen, müssen die " "Container- und Festplattenformate übereinstimmen." #, python-format msgid "Invalid parameter constraints for parameter %s, expected a list" msgstr "" "Ungültige Parameterbeschränkungen für Parameter %s, erwartet eine Liste" #, python-format msgid "Invalid parameter in environment %s." msgstr "Ungültiger Parameter in der Umgebung %s" #, python-format msgid "" "Invalid quota %(property)s value(s): %(value)s. Can not be less than the " "current usage value(s): %(total)s." msgstr "" "Ungültiges Kontingent %(property)s Wert(e): %(value)s. Kann nicht kleiner " "als der aktuelle Nutzungswert sein: %(total)s." #, python-format msgid "" "Invalid restricted_action type \"%(value)s\" for resource, acceptable " "restricted_action types are: %(types)s" msgstr "" "Ungültiger restricted_action-Typ \" %(value)s\" für die Ressource, zulässige " "restricted_action-Typen sind: %(types)s" #, python-format msgid "Invalid service %(service)s version %(version)s" msgstr "Ungültiger Service %(service)s Version %(version)s" #, python-format msgid "" "Invalid stack name %s must contain only alphanumeric or \"_-.\" characters, " "must start with alpha and must be 255 characters or less." msgstr "" "Der ungültige Stapel-Name %s darf nur alphanumerisch oder \"_-\" enthalten. " "Zeichen müssen mit Alpha beginnen und dürfen maximal 255 Zeichen lang sein." 
#, python-format msgid "Invalid stack name %s, must be a string" msgstr "Der ungültige Stapelname %s muss eine Zeichenfolge sein" #, python-format msgid "Invalid status %s" msgstr "Ungültiger Status %s" #, python-format msgid "Invalid support status and should be one of %s" msgstr "Ungültiger Support-Status und sollte einer von %s sein" #, python-format msgid "Invalid tag, \"%s\" contains a comma" msgstr "Ungültiges Tag, \"%s\" enthält ein Komma" #, python-format msgid "Invalid tag, \"%s\" is longer than 80 characters" msgstr "Ungültiges Tag, \"%s\" ist länger als 80 Zeichen" #, python-format msgid "Invalid tag, \"%s\" is not a string" msgstr "Ungültiges Tag, \"%s\" ist keine Zeichenfolge" #, python-format msgid "Invalid tags, not a list: %s" msgstr "Ungültige Tags, keine Liste: %s" #, python-format msgid "Invalid template type \"%(value)s\", valid types are: cfn, hot." msgstr "Ungültiger Vorlagentyp \"%(value)s\", gültige Typen sind: cfn, hot." #, python-format msgid "Invalid timeout value %s" msgstr "Ungültiger Zeitüberschreitungswert %s" #, python-format msgid "Invalid timezone: %s" msgstr "Ungültige Zeitzone: %s" #, python-format msgid "Invalid type (%s)" msgstr "Ungültiger Typ (%s)" msgid "Invert the compare type." msgstr "Kehrt den Vergleichstyp um." msgid "Ip allocation pools and their ranges." msgstr "IP-Zuteilungspools und ihre Bereiche." msgid "Ip of the subnet's gateway." msgstr "IP des Gateway des Subnetzes." msgid "Ip version for the subnet." msgstr "IP-Version für das Subnetz." msgid "Ip_version for this firewall rule." msgstr "Ip_version für diese Firewall-Regel." msgid "It defines an executor to which task action should be sent to." msgstr "" "Es definiert einen Executor, an den die Task-Aktion gesendet werden soll." msgid "It is advised to shutdown all Heat engines beforehand." msgstr "Es wird empfohlen, alle Heat-Engines vorher auszuschalten." #, python-format msgid "Items to join must be string, map or list not %s" msgstr "" "Zu verbindende Elemente müssen Zeichenfolge, Map oder Liste nicht %s sein" #, python-format msgid "Items to join must be string, map or list. %s failed json serialization" msgstr "" "Zu verbindende Elemente müssen Zeichenfolge, Map oder Liste sein. %s hat " "die json-Serialisierung nicht bestanden" #, python-format msgid "Items to join must be strings not %s" msgstr "Zu verbindende Elemente müssen Zeichenfolgen nicht %s sein" #, python-format msgid "" "JSON body size (%(len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "Die JSON-Körpergröße (%(len)s Bytes) überschreitet die maximal zulässige " "Größe (%(limit)s Byte)." msgid "JSON data that was uploaded via the SwiftSignalHandle." msgstr "JSON-Daten, die über den SwiftSignalHandle hochgeladen wurden." msgid "JSON file containing the content returned by the noauth middleware." msgstr "" "JSON-Datei, die den Inhalt enthält, der von der Middleware noauth " "zurückgegeben wird." msgid "" "JSON serialized map that includes the endpoint, token and/or other " "attributes the client must use for signalling this handle. The contents of " "this map depend on the type of signal selected in the signal_transport " "property." msgstr "" "Serialisierte JSON-Zuordnung, die den Endpunkt, das Token und/oder andere " "Attribute enthält, die der Client zur Signalisierung dieses Handles " "verwenden muss. Der Inhalt dieser Map hängt vom Signaltyp ab, der in der " "Eigenschaft signal_transport ausgewählt wurde." 
msgid "" "JSON string containing data associated with wait condition signals sent to " "the handle." msgstr "" "JSON-Zeichenfolge, die Daten enthält, die Wartezustandssignalen zugeordnet " "sind, die an das Handle gesendet werden." msgid "Keep STDIN open even if not attached." msgstr "Halten Sie STDIN offen, auch wenn es nicht angehängt ist." msgid "Key to compare. Relevant for HEADER and COOKIE types only." msgstr "Schlüssel zum vergleichen. Nur für HEADER- und COOKIE-Typen relevant." msgid "" "Key used to encrypt authentication info in the database. Length of this key " "must be 32 characters." msgstr "" "Schlüssel zum Verschlüsseln der Authentifizierungsinformationen in der " "Datenbank. Die Länge dieses Schlüssels muss 32 Zeichen lang sein." msgid "Key/Value pairs to extend the capabilities of the flavor." msgstr "Schlüssel/Wert-Paare, um die Fähigkeiten der Variante zu erweitern." msgid "Key/value pairs associated with the volume in raw dict form." msgstr "" "Schlüssel/Wert-Paare, die dem Datenträger im Rohdict-Formular zugeordnet " "sind." msgid "Key/value pairs associated with the volume." msgstr "Schlüssel/Wert-Paare, die dem Datenträger zugeordnet sind." msgid "Key/value pairs to associate with the volume." msgstr "Schlüssel/Wert-Paare, die dem Datenträger zugeordnet werden sollen." msgid "Keypair added to instances to make them accessible for user." msgstr "" "Schlüsselpaar zu Instanzen hinzugefügt, um sie für Benutzer zugänglich zu " "machen." msgid "Keypair secret key." msgstr "Schlüsselschlüssel geheimer Schlüssel." msgid "Keypair type. Supported since Nova api version 2.2." msgstr "Schlüsselpaartyp Unterstützt seit Nova api Version 2.2." msgid "" "Keystone domain ID which contains heat template-defined users. If this " "option is set, stack_user_domain_name option will be ignored." msgstr "" "Keystone-Domänen-ID, die durch Heat-Template definierte Benutzer enthält. " "Wenn diese Option aktiviert ist, wird die Option stack_user_domain_name " "ignoriert." msgid "" "Keystone domain name which contains heat template-defined users. If " "`stack_user_domain_id` option is set, this option is ignored." msgstr "" "Keystone-Domain-Name, der Heat-Template-definierte Benutzer enthält. Wenn " "die Option `stack_user_domain_id` gesetzt ist, wird diese Option ignoriert." msgid "Keystone domain." msgstr "Keystone-Domäne" #, python-format msgid "" "Keystone has more than one service with same name %(service)s. Please use " "service id instead of name" msgstr "" "Keystone hat mehr als einen Service mit demselben Namen %(service)s. Bitte " "verwenden Sie die Service-ID anstelle des Namens" msgid "Keystone password for stack_domain_admin user." msgstr "Keystone-Passwort für den Benutzer stack_domain_admin." msgid "Keystone project." msgstr "Keystone-Projekt." msgid "Keystone role for heat template-defined users." msgstr "Keystone-Rolle für Heat-Template-definierte Benutzer." msgid "Keystone role." msgstr "Schlüsselrolle" msgid "Keystone user group." msgstr "Keystone-Benutzergruppe" msgid "Keystone user groups." msgstr "Keystone Benutzergruppen." msgid "Keystone user is enabled or disabled." msgstr "Keystone-Benutzer ist aktiviert oder deaktiviert." msgid "" "Keystone username, a user with roles sufficient to manage users and projects " "in the stack_user_domain." msgstr "" "Keystone-Benutzername, ein Benutzer mit Rollen, die ausreichen, um Benutzer " "und Projekte in der stack_user_domain zu verwalten." msgid "L2 ethertype." msgstr "L2 Ethertype." 
msgid "L2 segmentation strategy on the external side of the network gateway." msgstr "" "L2-Segmentierungsstrategie auf der externen Seite des Netzwerk-Gateways." msgid "" "L7 policy position in ordered policies list. This must be an integer " "starting from 1. If not specified, policy will be placed at the tail of " "existing policies list." msgstr "" "L7-Position in der Liste der geordneten Richtlinien. Dies muss eine ganze " "Zahl sein, die bei 1 beginnt. Wenn nicht angegeben, wird die Richtlinie am " "Ende der vorhandenen Richtlinienliste platziert." msgid "L7Rules associated with this policy." msgstr "L7Regeln, die mit dieser Richtlinie verknüpft sind." msgid "LBaaS provider to implement this load balancer instance." msgstr "LBaaS-Provider zur Implementierung dieser Load-Balancer-Instanz" msgid "Length of OS_PASSWORD after encryption exceeds Heat limit (255 chars)" msgstr "" "Die Länge von OS_PASSWORD nach der Verschlüsselung überschreitet die " "Wärmegrenze (255 Zeichen)" msgid "Length of the string to generate." msgstr "Länge der zu generierenden Zeichenfolge" msgid "" "Length property cannot be smaller than combined character class and " "character sequence minimums" msgstr "" "Die length-Eigenschaft darf nicht kleiner sein als die Mindestanzahl " "kombinierter Zeichenklassen und Zeichenfolgen" msgid "Level of access that need to be provided for guest." msgstr "Zugriffsebene, die für den Gast bereitgestellt werden muss." msgid "" "Lifecycle actions to which the configuration applies. The string values " "provided for this property can include the standard resource actions CREATE, " "DELETE, UPDATE, SUSPEND and RESUME supported by Heat." msgstr "" "Lebenszyklusaktionen, für die die Konfiguration gilt. Die für diese " "Eigenschaft bereitgestellten Zeichenfolgenwerte können die " "Standardressourcenaktionen CREATE, DELETE, UPDATE, SUSPEND und RESUME " "enthalten, die von Heat unterstützt werden." msgid "List of LoadBalancer resources." msgstr "Liste der LoadBalancer-Ressourcen" msgid "List of Security Groups assigned on current LB." msgstr "" "Liste der Sicherheitsgruppen, die auf der aktuellen LB zugewiesen sind." msgid "List of TLS container references for SNI." msgstr "Liste der TLS-Containerreferenzen für SNI." msgid "List of allowed HTTP methods to be used. Default to allow GET." msgstr "" "Liste der zulässigen HTTP-Methoden, die verwendet werden sollen. Standard, " "um GET zu erlauben." msgid "" "List of allowed paths to be accessed. Default to allow queue messages URL." msgstr "" "Liste der zulässigen Pfade, auf die zugegriffen werden soll. Standard, um " "Warteschlangennachrichten-URL zuzulassen." msgid "List of database instances." msgstr "Liste der Datenbankinstanzen" msgid "List of databases to be created on DB instance creation." msgstr "" "Liste der Datenbanken, die bei der Erstellung der DB-Instanz erstellt werden." msgid "List of directories to search for plug-ins." msgstr "Liste der Verzeichnisse, in denen nach Plug-ins gesucht werden soll." msgid "List of dns nameservers." msgstr "Liste der DNS-Nameserver." msgid "List of firewall rules in this firewall policy." msgstr "Liste der Firewall-Regeln in dieser Firewall-Richtlinie." msgid "List of floating IP of all master nodes." msgstr "Liste der Floating IP aller Master-Knoten." msgid "List of floating IP of all servers that serve as node." msgstr "Liste der Floating-IP aller Server, die als Knoten dienen." msgid "List of health monitors associated with the pool." 
msgstr "Liste der mit dem Pool verbundenen Systemmonitore" msgid "List of hosts to join aggregate." msgstr "Liste der Hosts, die dem Aggregat beitreten sollen." msgid "List of image tags." msgstr "Liste der Abbild-Tags" msgid "List of manila shares to be mounted." msgstr "Liste der zu montierenden Manila-Freigaben." msgid "List of network interfaces to create on instance." msgstr "" "Liste der Netzwerkschnittstellen, die für die Instanz erstellt werden sollen." msgid "List of processes to enable anti-affinity for." msgstr "Liste der Prozesse, für die die Anti-Affinität aktiviert werden soll." msgid "List of processes to run on every node." msgstr "Liste der Prozesse, die auf jedem Knoten ausgeführt werden sollen." msgid "" "List of resources to be removed when doing an update which requires removal " "of specific resources. The resource may be specified several ways: (1) The " "resource name, as in the nested stack, (2) The resource reference returned " "from get_resource in a template, as available via the 'refs' attribute. Note " "this is destructive on update when specified; even if the count is not being " "reduced, and once a resource name is removed, its name is never reused in " "subsequent updates." msgstr "" "Liste der Ressourcen, die entfernt werden sollen, wenn eine Aktualisierung " "durchgeführt wird, bei der bestimmte Ressourcen entfernt werden müssen. Die " "Ressource kann auf verschiedene Arten angegeben werden: (1) Der " "Ressourcenname, wie im verschachtelten Stapel. (2) Der von get_resource in " "einer Vorlage zurückgegebene Ressourcenverweis, wie er über das Attribut " "'refs' verfügbar ist. Beachten Sie, dass dies für die Aktualisierung " "destruktiv ist, wenn sie angegeben wird Selbst wenn der Zählerstand nicht " "reduziert wird und der Ressourcenname entfernt wird, wird sein Name in " "nachfolgenden Aktualisierungen nie wieder verwendet." msgid "List of role assignments." msgstr "Liste der Rollenzuordnungen" msgid "List of security group IDs associated with this interface." msgstr "" "Liste der Sicherheitsgruppen-IDs, die dieser Schnittstelle zugeordnet sind." msgid "List of security group egress rules." msgstr "Liste der Ausgangsregeln für Sicherheitsgruppen" msgid "List of security group ingress rules." msgstr "Liste der Sicherheitsregeln für Sicherheitsgruppen" msgid "" "List of security group names or IDs to assign to this Node Group template." msgstr "" "Liste der Sicherheitsgruppennamen oder IDs, die dieser Knotengruppenvorlage " "zugewiesen werden sollen." msgid "List of security group names or IDs." msgstr "Liste der Sicherheitsgruppennamen oder IDs" msgid "" "List of security group names or IDs. Cannot be used if neutron ports are " "associated with this server; assign security groups to the ports instead." msgstr "" "Liste der Sicherheitsgruppennamen oder IDs. Kann nicht verwendet werden, " "wenn diesem Server Neutronen-Ports zugeordnet sind. Weisen Sie stattdessen " "den Ports Sicherheitsgruppen zu." msgid "List of security group rules." msgstr "Liste der Sicherheitsgruppenregeln" msgid "List of subnet prefixes to assign." msgstr "Liste der zuzuweisenden Subnetzpräfixe" msgid "List of tags associated with this interface." msgstr "Liste der Tags, die mit dieser Schnittstelle verknüpft sind." msgid "List of tags to attach to the instance." msgstr "Liste der Tags, die an die Instanz angehängt werden sollen." msgid "List of tags to attach to this resource." msgstr "Liste der Tags, die an diese Ressource angehängt werden sollen." 
msgid "List of tags to be attached to this resource." msgstr "Liste der Tags, die an diese Ressource angehängt werden sollen." msgid "List of tags to set on the workflow." msgstr "Liste der Tags, die im Workflow festgelegt werden sollen" msgid "" "List of tasks which should be executed before this task. Used only in " "reverse workflows." msgstr "" "Liste der Aufgaben, die vor dieser Aufgabe ausgeführt werden sollen. Wird " "nur in umgekehrten Arbeitsabläufen verwendet." msgid "" "List of tasks which will run after the task has completed regardless of " "whether it is successful or not." msgstr "" "Liste der Aufgaben, die ausgeführt werden, nachdem die Aufgabe abgeschlossen " "wurde, unabhängig davon, ob sie erfolgreich war oder nicht." msgid "List of tasks which will run after the task has completed successfully." msgstr "" "Liste der Aufgaben, die ausgeführt werden, nachdem die Aufgabe erfolgreich " "abgeschlossen wurde." msgid "" "List of tasks which will run after the task has completed with an error." msgstr "" "Liste der Aufgaben, die ausgeführt werden, nachdem die Aufgabe mit einem " "Fehler abgeschlossen wurde." msgid "List of tenants." msgstr "Liste der Mandanten." msgid "List of the job executions." msgstr "Liste der Jobausführungen." msgid "List of users to be created on DB instance creation." msgstr "" "Liste der Benutzer, die bei der Erstellung der DB-Instanz erstellt werden " "sollen." msgid "List of volume type IDs or Names to be attached to QoS specs." msgstr "" "Liste der Volumetyp-IDs oder Namen, die an QoS-Spezifikationen angehängt " "werden sollen." msgid "" "List of workflows' executions, each of them is a dictionary with information " "about execution. Each dictionary returns values for next keys: id, " "workflow_name, created_at, updated_at, state for current execution state, " "input, output." msgstr "" "Liste der Ausführungen von Arbeitsabläufen, jede von ihnen ist ein " "Wörterbuch mit Informationen über die Ausführung. Jedes Dictionary gibt " "Werte für die nächsten Schlüssel zurück: id, workflow_name, created_at, " "updated_at, status für den aktuellen Ausführungsstatus, Eingabe, Ausgabe." msgid "List with 0 or more map elements containing subport details." msgstr "" "Listet mit 0 oder mehr Kartenelementen auf, die Unterport-Details enthalten." msgid "Listener associated with this pool." msgstr "Listener, der diesem Pool zugeordnet ist." msgid "Listener name or ID to be associated with this pool." msgstr "Listenername oder ID, die diesem Pool zugeordnet werden sollen." msgid "Loadbalancer name or ID to be associated with this pool." msgstr "Loadbalancer-Name oder ID, der diesem Pool zugeordnet werden soll." msgid "" "Loadbalancer name or ID to be associated with this pool. Requires " "shared_pools service extension." msgstr "" "Loadbalancer-Name oder ID, der diesem Pool zugeordnet werden soll. Erfordert " "die Serviceerweiterung shared_pools." msgid "" "Local path on each cluster node on which to mount the share. Defaults to '/" "mnt/{share_id}'." msgstr "" "Lokaler Pfad auf jedem Clusterknoten, auf dem die Freigabe bereitgestellt " "werden soll. Der Standardwert ist '/mnt/{share_id}'." msgid "Location of the SSL certificate file to use for SSL mode." msgstr "Speicherort der SSL-Zertifikatsdatei für den SSL-Modus" msgid "Location of the SSL key file to use for enabling SSL mode." msgstr "" "Speicherort der SSL-Schlüsseldatei, die zum Aktivieren des SSL-Modus " "verwendet werden soll." msgid "MAC address of the port." msgstr "MAC-Adresse des Ports." 
msgid "MAC address to allow through this port." msgstr "MAC-Adresse, um diesen Port zuzulassen." msgid "" "MAC address to give to this port. The default update policy of this property " "in neutron is that allow admin role only." msgstr "" "MAC-Adresse für diesen Port. Die Standardaktualisierungsrichtlinie dieser " "Eigenschaft in Neutron erlaubt nur die Admin-Rolle." msgid "" "Make the cluster template public. To enable this option, you must own the " "right to publish in magnum. Which default set to admin only." msgstr "" "Machen Sie die Clustervorlage öffentlich. Um diese Option zu aktivieren, " "müssen Sie das Recht besitzen, in Magnum zu veröffentlichen. Welcher " "Standardwert wird nur auf \"admin\" gesetzt." msgid "Map between role with either project or domain." msgstr "Zuordnung zwischen der Rolle mit einem Projekt oder einer Domäne" msgid "" "Map containing options specific to the configuration management tool used by " "this resource." msgstr "" "Karte mit Optionen, die für das von dieser Ressource verwendete " "Konfigurationsverwaltungstool spezifisch sind." msgid "" "Map representing the cloud-config data structure which will be formatted as " "YAML." msgstr "" "Map, die die Cloud-Config-Datenstruktur darstellt, die als YAML formatiert " "wird." msgid "" "Map representing the configuration data structure which will be serialized " "to JSON format." msgstr "" "Karte, die die Konfigurationsdatenstruktur darstellt, die in das JSON-Format " "serialisiert wird." msgid "Max bandwidth in kbps." msgstr "Maximale Bandbreite in kbps." msgid "Max burst bandwidth in kbps." msgstr "Max-Burst-Bandbreite in kbps." msgid "Max size of the cluster." msgstr "Maximale Größe des Clusters" #, python-format msgid "Maximum %s is 1 hour." msgstr "Maximum %s ist 1 Stunde." msgid "Maximum depth allowed when using nested stacks." msgstr "" "Maximale Tiefe, die bei Verwendung von verschachtelten Stapeln zulässig ist." msgid "Maximum length of a server name to be used in nova." msgstr "Maximale Länge eines Servernamens, der in Nova verwendet werden soll." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs)." msgstr "" "Maximale Zeilengröße von Nachrichtenheadern, die akzeptiert werden sollen. " "max_header_line muss möglicherweise erhöht werden, wenn große Token " "verwendet werden (normalerweise solche, die von der Keystone v3-API mit " "großen Servicekatalogen generiert werden)." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs.)" msgstr "" "Maximale Zeilengröße von Nachrichtenheadern, die akzeptiert werden sollen. " "Möglicherweise muss max_header_line erhöht werden, wenn große Token " "verwendet werden (in der Regel solche, die von der Keystone v3-API mit " "großen Servicekatalogen generiert werden.)" msgid "Maximum number of instances in the group." msgstr "Maximale Anzahl von Instanzen in der Gruppe." msgid "" "Maximum number of milliseconds for a monitor to wait for a connection to be " "established before it times out." msgstr "" "Die maximale Anzahl von Millisekunden, die ein Monitor auf das Herstellen " "einer Verbindung wartet, bevor das Zeitlimit überschritten wird." msgid "Maximum number of resources in the cluster. -1 means unlimited." 
msgstr "Maximale Anzahl von Ressourcen im Cluster -1 bedeutet unbegrenzt." msgid "Maximum number of resources in the group." msgstr "Maximale Anzahl von Ressourcen in der Gruppe." msgid "Maximum number of stacks any one tenant may have active at one time." msgstr "Maximale Anzahl von Stapeln, die ein Mieter gleichzeitig haben darf." msgid "Maximum prefix size that can be allocated from the subnet pool." msgstr "Maximale Präfixgröße, die vom Subnetzpool zugewiesen werden kann." msgid "" "Maximum raw byte size of JSON request body. Should be larger than " "max_template_size." msgstr "" "Maximale Raw-Byte-Größe des JSON-Anfragetexts Sollte größer als " "max_template_size sein." msgid "Maximum raw byte size of any template." msgstr "Maximale Rohbytegröße einer beliebigen Vorlage" msgid "Maximum resources allowed per top-level stack. -1 stands for unlimited." msgstr "" "Maximal zulässige Ressourcen pro Stapel auf oberster Ebene -1 steht für " "unbegrenzt." msgid "Maximum resources per stack exceeded." msgstr "Maximale Ressourcen pro Stapel überschritten" msgid "" "Maximum transmission unit size (in bytes) for the ipsec site connection." msgstr "" "Maximale Übertragungseinheitsgröße (in Byte) für die IPSec-Standortverbindung" #, python-format msgid "Member '%(mem)s' not found in group resource '%(grp)s'." msgstr "Mitglied '%(mem)s' wurde nicht in Gruppenressource '%(grp)s' gefunden." msgid "Member list items must be strings" msgstr "Mitgliederlistenelemente müssen Zeichenfolgen sein" msgid "Member list must be a list" msgstr "Die Mitgliederliste muss eine Liste sein" msgid "Members associated with this pool." msgstr "Mitglieder, die diesem Pool zugeordnet sind." msgid "Memory in MB for the flavor." msgstr "Speicher in MB für die Variante." #, python-format msgid "Message: %(message)s, Code: %(code)s" msgstr "Nachricht: %(message)s, Code: %(code)s" msgid "Metadata format invalid" msgstr "Das Metadatenformat ist ungültig" msgid "Metadata key-values defined for cluster." msgstr "Schlüsselwerte für Metadaten, die für Cluster definiert sind." msgid "Metadata key-values defined for node." msgstr "Schlüsselwerte für Metadaten, die für Knoten definiert sind." msgid "Metadata key-values defined for profile." msgstr "Metadaten Schlüsselwerte, die für das Profil definiert sind." msgid "Metadata key-values defined for share." msgstr "Schlüsselwerte für Metadaten, die für die Freigabe definiert sind." msgid "Meter name watched by the alarm." msgstr "Der Name des Geräts wird vom Alarm überwacht." msgid "" "Meter should match this resource metadata (key=value) additionally to the " "meter_name." msgstr "" "Das Messgerät sollte diese Ressourcen-Metadaten (Schlüssel=Wert) zusätzlich " "zu dem Zähler_Name anpassen." msgid "Meter statistic to evaluate." msgstr "Zu überprüfende Meterstatistik." msgid "Method of implementation of session persistence feature." msgstr "Methode zur Implementierung der Sitzungspersistenz-Funktion." msgid "Metric name watched by the alarm." msgstr "Der metrische Name wird vom Alarm überwacht." msgid "Min size of the cluster." msgstr "Mindestgröße des Clusters" msgid "MinSize can not be greater than MaxSize" msgstr "MinSize kann nicht größer als MaxSize sein" msgid "Minimum number of instances in the group." msgstr "Minimale Anzahl der Instanzen in der Gruppe" msgid "Minimum number of resources in the cluster." msgstr "Minimale Anzahl der Ressourcen im Cluster" msgid "Minimum number of resources in the group." 
msgstr "Mindestanzahl von Ressourcen in der Gruppe" msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "PercentChangeInCapacity for the AdjustmentType property." msgstr "" "Die Mindestanzahl der Ressourcen, die hinzugefügt oder entfernt werden, wenn " "die AutoScaling-Gruppe skaliert wird. Dies kann nur verwendet werden, wenn " "PercentChangeInCapacity für die AdjustmentType-Eigenschaft angegeben wird." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "percent_change_in_capacity for the adjustment_type property." msgstr "" "Die Mindestanzahl der Ressourcen, die hinzugefügt oder entfernt werden, wenn " "die AutoScaling-Gruppe skaliert wird. Dies kann nur verwendet werden, wenn " "percent_change_in_capacity für die Eigenschaft \"adjustment_type\" angegeben " "wird." #, python-format msgid "Missing mandatory (%s) key from mark unhealthy request" msgstr "" "Fehlender obligatorischer Schlüssel (%s) für die Markierung der ungesunden " "Anfrage" #, python-format msgid "Missing parameter type for parameter: %s" msgstr "Fehlender Parametertyp für Parameter: %s" #, python-format msgid "Missing required credential: %(required)s" msgstr "Fehlende erforderliche Anmeldeinformationen: %(required)s" msgid "Mistral execution is in unknown state." msgstr "Mistral Ausführung ist in unbekanntem Zustand." msgid "Mistral resource validation error" msgstr "Mistral-Ressourcenüberprüfungsfehler" msgid "Monasca notification." msgstr "Monasca Benachrichtigung." msgid "Multiple actions specified" msgstr "Mehrere Aktionen angegeben" #, python-format msgid "Multiple bootable sources for instance %s." msgstr "Mehrere startbare Quellen für beispielsweise %s" #, python-format msgid "Multiple physical resources were found with name (%(name)s)." msgstr "Mehrere physische Ressourcen wurden mit Namen (%(name)s) gefunden." #, python-format msgid "Multiple resources were found with the physical ID (%(phys_id)s)." msgstr "" "Mehrere Ressourcen wurden mit der physischen ID (%(phys_id)s) gefunden." #, python-format msgid "Multiple routers found with name %s" msgstr "Mehrere Router mit dem Namen %s gefunden" msgid "Must specify 'InstanceId' if you specify 'EIP'." msgstr "Sie müssen \"InstanceId\" angeben, wenn Sie \"EIP\" angeben." #, python-format msgid "" "Name '%(name)s' must be 1-%(max_len)s characters long, each of which can " "only be alphanumeric or a hyphen." msgstr "" "Der Name '%(name)s' muss 1-%(max_len)s Zeichen lang sein, von denen jedes " "nur alphanumerisch oder ein Bindestrich sein kann." #, python-format msgid "Name '%s' must not start or end with a hyphen." msgstr "Der Name '%s' darf nicht mit einem Bindestrich beginnen oder enden." msgid "Name for the Port Pair Group." msgstr "Name für die Portpaargruppe." msgid "Name for the Port Pair." msgstr "Name für das Portpaar." msgid "Name for the Sahara Cluster Template." msgstr "Name für die Sahara-Cluster-Vorlage." msgid "Name for the Sahara Node Group Template." msgstr "Name für die Sahara-Knotengruppenvorlage." msgid "Name for the aggregate." msgstr "Name für das Aggregat" msgid "Name for the availability zone." msgstr "Name für die Verfügbarkeitszone" msgid "" "Name for the container. If not specified, a unique name will be generated." msgstr "" "Name für den Container. Wenn nicht angegeben, wird ein eindeutiger Name " "generiert." 
msgid "Name for the firewall policy." msgstr "Name für die Firewallrichtlinie" msgid "Name for the firewall rule." msgstr "Name für die Firewallregel" msgid "Name for the firewall." msgstr "Name für die Firewall" msgid "Name for the ike policy." msgstr "Name für die Ike-Richtlinie." msgid "" "Name for the image. The name of an image is not unique to a Image Service " "node." msgstr "" "Name für das Abbild. Der Name eines Abbilds ist nicht eindeutig für einen " "Abbild-Service-Knoten." msgid "Name for the ipsec policy." msgstr "Name für die IPSec-Richtlinie." msgid "Name for the ipsec site connection." msgstr "Name für die IPSec-Standortverbindung" msgid "Name for the time constraint." msgstr "Name für die Zeitbeschränkung" msgid "Name for the vpn service." msgstr "Name für den VPN-Dienst" msgid "Name of attribute to compare." msgstr "Name des zu vergleichenden Attributs" msgid "" "Name of attribute to compare. Names of the form metadata.user_metadata.X or " "metadata.metering.X are equivalent to what you can address through " "matching_metadata; the former for Nova meters, the latter for all others. To " "see the attributes of your Samples, use `ceilometer --debug sample-list`." msgstr "" "Name des zu vergleichenden Attributs. Die Namen des Formulars metadata." "user_metadata.X oder metadata.metering.X entsprechen denen, die Sie mit " "passing_metadata ansprechen können. Ersteres für Nova Meter, letzteres für " "alle anderen. Um die Attribute Ihrer Samples zu sehen, verwenden Sie " "`ceilometer - debug sample-list`." msgid "Name of key to use for substituting inputs during deployment." msgstr "" "Name des Schlüssels, der während der Bereitstellung zum Ersetzen von " "Eingaben verwendet wird." msgid "Name of keypair to inject into the server." msgstr "Name des Schlüsselpaars, das in den Server injiziert werden soll." msgid "Name of keystone endpoint." msgstr "Name des Keystone-Endpunkts" msgid "Name of keystone group." msgstr "Name der Keystone-Gruppe" msgid "Name of keystone project." msgstr "Name des Keystone-Projekts" msgid "Name of keystone role." msgstr "Name der Keystone-Rolle" msgid "Name of keystone service." msgstr "Name des Schlüsseldienstes" msgid "Name of keystone user." msgstr "Name des Keystone-Benutzers" msgid "Name of physical network to associate with this segment." msgstr "" "Name des physikalischen Netzwerks, das diesem Segment zugeordnet werden soll." msgid "Name of registered datastore type." msgstr "Name des registrierten Datenspeichertyps" msgid "Name of the DB instance to create." msgstr "Name der zu erstellenden DB-Instanz" msgid "Name of the Flow Classifier." msgstr "Name des Klassifizierers" msgid "Name of the Node group." msgstr "Name der Knotengruppe" msgid "Name of the Port Chain." msgstr "Name der Portkette" msgid "Name of the QoS." msgstr "Name der QoS." msgid "" "Name of the action associated with the task. Either action or workflow may " "be defined in the task." msgstr "" "Name der Aktion, die der Aufgabe zugeordnet ist. In der Aufgabe können " "entweder eine Aktion oder ein Arbeitsablauf definiert sein." msgid "Name of the administrative user to use on the server." msgstr "" "Name des administrativen Benutzers, der auf dem Server verwendet werden soll." msgid "Name of the alarm. By default, physical resource name is used." msgstr "" "Name des Alarms. Standardmäßig wird der Name der physischen Ressource " "verwendet." msgid "Name of the availability zone for DB instance." 
msgstr "Name der Verfügbarkeitszone für die DB-Instanz" msgid "Name of the availability zone for server placement." msgstr "Name der Verfügbarkeitszone für die Serverplatzierung" msgid "Name of the cluster to create." msgstr "Name des zu erstellenden Clusters" msgid "Name of the cluster. By default, physical resource name is used." msgstr "" "Name des Clusters Standardmäßig wird der Name der physischen Ressource " "verwendet." msgid "Name of the container." msgstr "Name des Containers" msgid "Name of the cookie, required if type is APP_COOKIE." msgstr "Name des Cookies, erforderlich, wenn der Typ APP_COOKIE ist." msgid "Name of the cron trigger." msgstr "Name des Cron-Triggers" msgid "Name of the current action being deployed" msgstr "Name der aktuellen Aktion, die bereitgestellt wird" msgid "Name of the data source." msgstr "Name der Datenquelle" msgid "" "Name of the derived config associated with this deployment. This is used to " "apply a sort order to the list of configurations currently deployed to a " "server." msgstr "" "Name der abgeleiteten Konfiguration, die dieser Bereitstellung zugeordnet " "ist Dies wird verwendet, um eine Sortierreihenfolge auf die Liste der " "Konfigurationen anzuwenden, die derzeit auf einem Server bereitgestellt " "werden." msgid "" "Name of the engine node. This can be an opaque identifier. It is not " "necessarily a hostname, FQDN, or IP address." msgstr "" "Name des Engine-Knotens. Dies kann eine undurchsichtige Kennung sein. Es ist " "nicht unbedingt ein Hostname, FQDN oder IP-Adresse." msgid "Name of the flavor." msgstr "Name der Variante" msgid "Name of the input." msgstr "Name der Eingabe" msgid "Name of the job binary." msgstr "Name der Job-Binärdatei" msgid "Name of the job." msgstr "Name des Jobs" msgid "Name of the metering label." msgstr "Name des Zählers" msgid "Name of the network owning the port." msgstr "Name des Netzwerks, dem der Port gehört." msgid "" "Name of the network owning the port. The value is typically network:" "floatingip or network:router_interface or network:dhcp." msgstr "" "Name des Netzwerks, dem der Port gehört. Der Wert ist normalerweise " "Netzwerk: FloatingIP oder Netzwerk: Router_Interface oder Netzwerk: DHCP." msgid "Name of the notification. By default, physical resource name is used." msgstr "" "Name der Benachrichtigung. Standardmäßig wird der Name der physischen " "Ressource verwendet." msgid "Name of the object." msgstr "Name des Objekts" msgid "Name of the output." msgstr "Name der Ausgabe" msgid "Name of the policy." msgstr "Name der Richtlinie" msgid "Name of the pool." msgstr "Name des Pools" msgid "Name of the queue instance to create a URL for." msgstr "Name der Warteschlangeninstanz, für die eine URL erstellt werden soll." msgid "Name of the queue instance to create." msgstr "Name der zu erstellenden Warteschlangeninstanz" msgid "Name of the queue to subscribe to." msgstr "Name der Warteschlange, die abonniert werden soll." msgid "" "Name of the registered datastore version. It must exist for provided " "datastore type. Defaults to using single active version. If several active " "versions exist for provided datastore type, explicit value for this " "parameter must be specified." msgstr "" "Name der registrierten Datenspeicherversion. Es muss für den " "bereitgestellten Datenspeichertyp vorhanden sein. Standardmäßig wird die " "einzelne aktive Version verwendet. 
"Wenn mehrere aktive Versionen für den angegebenen Datenspeichertyp vorhanden sind, muss ein expliziter Wert für diesen Parameter angegeben werden."
msgid "Name of the resource."
msgstr "Name der Ressource."
msgid "Name of the secret."
msgstr "Name des Geheimnisses."
msgid "Name of the segment."
msgstr "Name des Segments."
msgid "Name of the senlin node. By default, physical resource name is used."
msgstr "Name des Senlin-Knotens. Standardmäßig wird der Name der physischen Ressource verwendet."
msgid "Name of the senlin policy. By default, physical resource name is used."
msgstr "Name der Senlin-Richtlinie. Standardmäßig wird der Name der physischen Ressource verwendet."
msgid "Name of the senlin profile. By default, physical resource name is used."
msgstr "Name des Senlin-Profils. Standardmäßig wird der Name der physischen Ressource verwendet."
msgid "Name of the senlin receiver. By default, physical resource name is used."
msgstr "Name des Senlin-Empfängers. Standardmäßig wird der Name der physischen Ressource verwendet."
msgid "Name of the server."
msgstr "Name des Servers."
msgid "Name of the share network."
msgstr "Name des Freigabenetzwerks."
msgid "Name of the share type."
msgstr "Name des Freigabetyps."
msgid "Name of the stack."
msgstr "Name des Stapels."
msgid "Name of the subnet pool."
msgstr "Name des Subnetzpools."
msgid "Name of the vip."
msgstr "Name des VIPs."
msgid "Name of the volume type."
msgstr "Name des Datenträgertyps."
msgid "Name of the volume."
msgstr "Name des Datenträgers."
msgid "Name of the workflow associated with the task. Can be defined by intrinsic function get_resource or by name of the referenced workflow, i.e. { workflow: wf_name } or { workflow: { get_resource: wf_name }}. Either action or workflow may be defined in the task."
msgstr "Name des mit der Aufgabe verknüpften Arbeitsablaufs. Kann durch die intrinsische Funktion get_resource oder durch den Namen des referenzierten Workflows definiert werden, d.h. { workflow: wf_name } oder { workflow: { get_resource: wf_name }}. In der Aufgabe können entweder eine Aktion oder ein Arbeitsablauf definiert sein."
msgid "Name of this Load Balancer."
msgstr "Name dieses Load Balancers."
msgid "Name of this deployment resource in the stack"
msgstr "Name dieser Bereitstellungsressource im Stapel"
msgid "Name of this listener."
msgstr "Name dieses Listeners."
msgid "Name of this pool."
msgstr "Name dieses Pools."
msgid "Name or ID Nova flavor for the nodes."
msgstr "Name oder ID der Nova-Variante für die Knoten."
msgid "Name or ID of default project of keystone user."
msgstr "Name oder ID des Standardprojekts des Keystone-Benutzers."
msgid "Name or ID of keystone domain."
msgstr "Name oder ID der Keystone-Domäne."
msgid "Name or ID of network to create a port on."
msgstr "Name oder ID des Netzwerks, auf dem ein Port erstellt werden soll."
msgid "Name or ID of senlin profile to create this node."
msgstr "Name oder ID des Senlin-Profils, um diesen Knoten zu erstellen."
msgid "Name or ID of shared file system snapshot that will be restored and created as a new share."
msgstr "Name oder ID des freigegebenen Dateisystem-Snapshots, der wiederhergestellt und als neue Freigabe erstellt wird."
msgid "Name or ID of shared filesystem type. Types defines some share filesystem profiles that will be used for share creation."
msgstr "" "Name oder ID des freigegebenen Dateisystemtyps Types definiert einige " "Freigabe-Dateisystemprofile, die für die Erstellung von Freigaben verwendet " "werden." msgid "Name or ID of shared network defined for shared filesystem." msgstr "" "Name oder ID des freigegebenen Netzwerks, das für das freigegebene " "Dateisystem definiert ist." msgid "Name or ID of target cluster." msgstr "Name oder ID des Zielclusters" msgid "Name or ID of the image." msgstr "Name oder ID des Abbildes" msgid "Name or ID of the load balancing pool." msgstr "Name oder ID des Lastverteilungspools" msgid "Name or ID of the workflow." msgstr "Name oder ID des Workflows" msgid "Name or Id of keystone region." msgstr "Name oder Id der Keystone-Region" msgid "Name or Id of keystone service." msgstr "Name oder ID des Schlüsseldienstes" #, python-format msgid "" "Name or UUID of Neutron port to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Name oder UUID des Neutron-Ports, an den diese Netzwerkkarte angeschlossen " "werden soll. Entweder müssen %(port)s oder %(net)s angegeben werden." msgid "Name or UUID of network." msgstr "Name oder UUID des Netzwerks" msgid "" "Name or UUID of the Neutron floating IP network or name of the Nova floating " "ip pool to use. Should not be provided when used with Nova-network that auto-" "assign floating IPs." msgstr "" "Name oder UUID des Floating IP-Neutron-Netzwerks oder Name des zu " "verwendenden Nova-Floating-IP-Pools. Sollte bei Verwendung mit Nova-network " "nicht zur Verfügung stehen, die Floating IPs automatisch zuweisen." msgid "Name or UUID of the image used to boot Hadoop nodes." msgstr "" "Name oder UUID des Abbildes, das zum Starten von Hadoop-Knoten verwendet " "wird." #, python-format msgid "" "Name or UUID of the network to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Name oder UUID des Netzwerks, an das diese Netzwerkkarte angehängt werden " "soll. Entweder müssen %(port)s oder %(net)s angegeben werden." msgid "Name or id of keystone domain." msgstr "Name oder ID der Keystone-Domäne" msgid "Name or id of keystone group." msgstr "Name oder ID der Keystone-Gruppe" msgid "Name or id of keystone user." msgstr "Name oder ID des Keystone-Benutzers" msgid "Name or id of the project to set the quota for." msgstr "" "Name oder ID des Projekts, für das das Kontingent festgelegt werden soll." msgid "Name or id of volume type (OS::Cinder::VolumeType)." msgstr "Name oder ID des Volumetyps (OS::Cinder::VolumeType)." msgid "Names of databases that those users can access on instance creation." msgstr "" "Namen von Datenbanken, auf die diese Benutzer bei der Instanzerstellung " "zugreifen können." msgid "" "Namespace to group this software config by when delivered to a server. This " "may imply what configuration tool is going to perform the configuration." msgstr "" "Namespace zum Gruppieren dieser Software-Konfiguration, wenn sie an einen " "Server geliefert wird. Dies kann bedeuten, welches Konfigurationswerkzeug " "die Konfiguration durchführen wird." msgid "Need more arguments" msgstr "Brauche mehr Argumente" msgid "Negotiation mode for the ike policy." msgstr "Verhandlungsmodus für die IKE-Richtlinie." #, python-format msgid "Neither image nor bootable volume is specified for instance %s" msgstr "" "Für die Instanz %s wird weder Abbild noch bootfähiger Datenträger angegeben" msgid "Network in CIDR notation." msgstr "Netzwerk in CIDR-Notation." msgid "Network interface ID to associate with EIP." 
msgstr "Netzwerkschnittstellen-ID, die mit EIP verknüpft werden soll." msgid "Network interfaces to associate with instance." msgstr "Netzwerkschnittstellen zur Verknüpfung mit der Instanz" #, python-format msgid "" "Network this port belongs to. If you plan to use current port to assign " "Floating IP, you should specify %(fixed_ips)s with %(subnet)s. Note if this " "changes to a different network update, the port will be replaced." msgstr "" "Netzwerk, zu dem dieser Port gehört. Wenn Sie planen, den aktuellen Port zum " "Zuweisen von Floating IP zu verwenden, sollten Sie %(fixed_ips)s mit " "%(subnet)s angeben. Beachten Sie, wenn sich das zu einem anderen " "Netzwerkupdate ändert, wird der Port ersetzt." msgid "Network to allocate floating IP from." msgstr "Netzwerk zum Zuweisen von Floating IP von." msgid "" "Neutron LBaaS v1 is deprecated in the Liberty release and is planned to be " "removed in a future release. Going forward, the LBaaS V2 should be used." msgstr "" "Neutron LBaaS v1 ist in der Liberty-Version veraltet und soll in einer " "zukünftigen Version entfernt werden. In Zukunft sollte der LBaaS V2 " "verwendet werden." msgid "Neutron network id." msgstr "Neutron-Netzwerk-ID" msgid "Neutron subnet id." msgstr "Neutron-Subnetz-ID" msgid "Nexthop IP address." msgstr "Nexthop IP-Adresse." #, python-format msgid "No %(entity)s matching %(args)s." msgstr "Keine %(entity)s passt zu %(args)s." #, python-format msgid "No %(entity)s unique match found for %(args)s." msgstr "Keine %(entity)s eindeutige Übereinstimmung gefunden für %(args)s." #, python-format msgid "No %s specified" msgstr "Keine %s angegeben" msgid "No Template provided." msgstr "Keine Vorlage zur Verfügung gestellt." msgid "No action specified" msgstr "Keine Aktion angegeben" msgid "No constraint expressed" msgstr "Keine Beschränkung ausgedrückt" #, python-format msgid "" "No content found in the \"files\" section for %(fn_name)s path: %(file_key)s" msgstr "" "Kein Inhalt im Abschnitt \"files\" für den Pfad %(fn_name)s gefunden: " "%(file_key)s" msgid "No description available" msgstr "Keine Beschreibung verfügbar" #, python-format msgid "No event %s found" msgstr "Kein Ereignis %s gefunden" #, python-format msgid "No events found for resource %s" msgstr "Für die Ressource %s wurden keine Ereignisse gefunden" msgid "No resource data found" msgstr "Keine Ressourcendaten gefunden" #, python-format msgid "No stack exists with id \"%s\"" msgstr "Kein Stapel existiert mit der ID \"%s\"" msgid "No stack name specified" msgstr "Kein Stapelname angegeben" msgid "No template specified" msgstr "Keine Vorlage angegeben" msgid "No volume service available." msgstr "Kein Datenträgerdienst verfügbar." msgid "Node groups." msgstr "Knotengruppen." msgid "Nodes list in the cluster." msgstr "Knotenliste im Cluster" msgid "Non HA routers can only have one L3 agent." msgstr "Nicht-HA-Router können nur einen L3-Agenten haben." #, python-format msgid "Non-empty resource type is required for resource \"%s\"" msgstr "" "Ein nicht leerer Ressourcentyp ist für die Ressource \"%s\" erforderlich" msgid "Not Implemented." msgstr "Nicht implementiert." #, python-format msgid "Not allowed - %(dsver)s without %(dstype)s." msgstr "Nicht erlaubt -%(dsver)s ohne %(dstype)s." msgid "Not found" msgstr "Nicht gefunden" msgid "Not waiting for outputs signal" msgstr "Nicht auf das Ausgangssignal warten" msgid "" "Notional service where encryption is performed For example, front-end. For " "Nova." 
msgstr "" "Fiktiver Dienst, bei dem die Verschlüsselung durchgeführt wird. Beispiel: " "Front-End. Für Nova." msgid "Nova instance type (flavor)." msgstr "Nova-Instanztyp (flavor)" msgid "Nova network id." msgstr "Nova-Netzwerk-ID" msgid "Now we only allow vpc here, so no need to set up this tag anymore." msgstr "" "Jetzt erlauben wir nur vpc hier, also brauchen wir dieses Tag nicht mehr " "einzurichten." msgid "Number of VCPUs for the flavor." msgstr "Anzahl der VCPUs für die Variante" msgid "Number of backlog requests to configure the socket with." msgstr "" "Anzahl der Backlog-Anfragen, mit denen der Socket konfiguriert werden soll." msgid "" "Number of heat-engine processes to fork and run. Will default to either to 4 " "or number of CPUs on the host, whichever is greater." msgstr "" "Anzahl der Heat-Engine-Prozesse zum Abzweigen und Laufen. Wird entweder auf " "4 oder auf die Anzahl der CPUs auf dem Host gesetzt, je nachdem, welcher " "Wert größer ist." msgid "Number of instances in the Node group." msgstr "Anzahl der Instanzen in der Knotengruppe" msgid "Number of minutes to wait for this stack creation." msgstr "" "Anzahl der Minuten, die auf diese Stapelerstellung gewartet werden müssen." msgid "Number of periods to evaluate over." msgstr "Anzahl der zu überprüfenden Perioden" msgid "" "Number of permissible connection failures before changing the member status " "to INACTIVE." msgstr "" "Anzahl der zulässigen Verbindungsfehler, bevor der Mitgliedsstatus in " "INACTIVE geändert wird." msgid "Number of remaining executions." msgstr "Anzahl der verbleibenden Ausführungen." msgid "Number of seconds for the DPD delay." msgstr "Anzahl der Sekunden für die DPD-Verzögerung." msgid "Number of seconds for the DPD timeout." msgstr "Anzahl der Sekunden für das DPD-Zeitlimit." msgid "" "Number of stacks to delete at a time (per transaction). Note that a single " "stack may have many db rows (events, etc.) associated with it." msgstr "" "Anzahl der zu löschenden Stapel (pro Transaktion). Beachten Sie, dass einem " "einzelnen Stapel möglicherweise mehrere DB-Zeilen (Ereignisse usw.) " "zugeordnet sind." msgid "" "Number of times to check whether an interface has been attached or detached." msgstr "" "Häufigkeit, mit der überprüft wird, ob eine Schnittstelle angehängt oder " "getrennt wurde." msgid "" "Number of times to retry to bring a resource to a non-error state. Set to 0 " "to disable retries." msgstr "" "Gibt an, wie oft versucht wird, eine Ressource in einen fehlerfreien Zustand " "zu versetzen. Setzen Sie den Wert auf 0, um erneute Versuche zu deaktivieren." msgid "" "Number of times to retry when a client encounters an expected intermittent " "error. Set to 0 to disable retries." msgstr "" "Anzahl der Wiederholungen, wenn ein Client einen erwarteten zeitweiligen " "Fehler feststellt. Setzen Sie den Wert auf 0, um erneute Versuche zu " "deaktivieren." msgid "Number of workers for Heat service." msgstr "Anzahl der Worker für Heat Service." msgid "" "Number of workers for Heat service. Default value 0 means, that service will " "start number of workers equal number of cores on server." msgstr "" "Anzahl der Worker für Heat Service. Der Standardwert 0 bedeutet, dass der " "Dienst die Anzahl der Worker gleich der Anzahl der Cores auf dem Server " "startet." msgid "Number value for delay during resolve constraint." msgstr "Zahlenwert für die Verzögerung während der Auflösungsbedingung." msgid "Number value for timeout during resolving output value." 
msgstr "Zahlenwert für Timeout beim Auflösen des Ausgabewerts." msgid "" "OS::Aodh::CombinationAlarm is deprecated and has been removed from Aodh, use " "OS::Aodh::CompositeAlarm instead." msgstr "" "OS::Aodh::CombinationAlarm ist veraltet und wurde von Aodh entfernt. " "Verwenden Sie stattdessen OS::Aodh::CompositeAlarm." msgid "" "OS::Heat::CWLiteAlarm resource has been removed since version 10.0.0. " "Existing stacks can still use it, where it would do nothing for update/" "delete." msgstr "" "Die OS::Heat::CWLiteAlarm-Ressource wurde seit der Version 10.0.0 entfernt. " "Bestehende Stapel können es immer noch verwenden, wo es nichts zum " "Aktualisieren/Löschen tun würde." #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "Objektaktion %(action)s ist fehlgeschlagen, weil: %(reason)s" msgid "" "On update, enables heat to collect existing resource properties from reality " "and converge to updated template." msgstr "" "Bei Aktualisierung ermöglicht es Heat, vorhandene Ressourceneigenschaften " "aus der Realität zu sammeln und mit aktualisierten Vorlagen zu konvergieren." msgid "One of predefined health monitor types." msgstr "Einer der vordefinierten Gesundheitsmonitortypen." #, python-format msgid "" "One of the properties \"%(id)s\" or \"%(size)s\" should be set for the " "specified mount of container \"%(container)s\"." msgstr "" "Eine der Eigenschaften \"%(id)s\" oder \"%(size)s\" sollte für das " "angegebene Mount des Containers \"%(container)s\" festgelegt werden." #, python-format msgid "" "One of the properties \"%(id)s\", \"%(port_id)s\", \"%(str_network)s\" or " "\"%(subnet)s\" should be set for the specified network of server \"%(server)s" "\"." msgstr "" "Eine der Eigenschaften \"%(id)s\", \"%(port_id)s\", \"%(str_network)s\" oder " "\"%(subnet)s\" sollte für das angegebene Netzwerk von Server \"%(server)s " "festgelegt werden \"." #, python-format msgid "" "One of the properties \"network\", \"port\", \"allocate_network\" or \"subnet" "\" should be set for the specified network of server \"%s\"." msgstr "" "Eine der Eigenschaften \"network\", \"port\", \"allocate_network\" oder " "\"subnet\" sollte für das angegebene Netzwerk des Servers \"%s\" festgelegt " "werden." msgid "One or more listeners for this load balancer." msgstr "Ein oder mehrere Listener für diesen Lastenausgleich." msgid "Only ISO 8601 duration format of the form PT#H#M#S is supported." msgstr "" "Es wird nur das ISO 8601-Dauerformat des Formulars PT#H#M#S unterstützt." msgid "Only Templates with an extension of .yaml or .template are supported" msgstr "" "Es werden nur Vorlagen mit der Erweiterung .yaml oder .template unterstützt" #, python-format msgid "Only integer is acceptable by '%(name)s'." msgstr "Nur eine Ganzzahl ist zulässig für '%(name)s'." #, python-format msgid "Only non-zero integer is acceptable by '%(name)s'." msgstr "Nur eine ganze Zahl ungleich Null ist für '%(name)s' akzeptabel." msgid "OpenStack Keystone Project." msgstr "OpenStack Keystone Projekt." msgid "Operating system architecture." msgstr "Betriebssystemarchitektur" msgid "Operator used to compare specified statistic with threshold." msgstr "" "Operator zum Vergleichen der angegebenen Statistik mit dem Schwellenwert." msgid "Optional CA cert file to use in SSL connections." msgstr "Optionale CA-Cert-Datei zur Verwendung in SSL-Verbindungen." msgid "Optional Nova keypair name." msgstr "Optionaler Nova-Schlüsselpaarname." msgid "Optional PEM-formatted certificate chain file." 
msgstr "Optionale PEM-formatierte Zertifikatskettendatei" msgid "Optional PEM-formatted file that contains the private key." msgstr "Optionale PEM-formatierte Datei, die den privaten Schlüssel enthält." msgid "Optional filename to associate with part." msgstr "Optionaler Dateiname, der mit dem Teil verknüpft werden soll." #, python-format msgid "Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s." msgstr "Optionale Heat-URL im Format http://0.0.0.0:8004/v1/%(tenant_id)s." msgid "Optional subtype to specify with the type." msgstr "Optionaler Subtyp zum Angeben mit dem Typ." #, python-format msgid "Options %(ua)s and %(im)s cannot both be True" msgstr "Die Optionen%(ua)s und%(im)s können nicht beide True sein" msgid "Options for simulating waiting." msgstr "Optionen zum Simulieren von Warten." msgid "Options used to configure this subscription." msgstr "Optionen zum Konfigurieren dieses Abonnements." #, python-format msgid "Order '%(name)s' failed: %(code)s - %(reason)s" msgstr "Bestellung '%(name)s' fehlgeschlagen: %(code)s -%(reason)s" #, python-format msgid "Output definitions must be a map. Found a %s instead" msgstr "" "Ausgabendefinitionen müssen eine Map sein. Stattdessen wurde ein %s gefunden" msgid "Output from the execution." msgstr "Ausgabe von der Ausführung." msgid "Outputs received" msgstr "Ausgaben erhalten" msgid "Owner of the image." msgstr "Besitzer des Abbildes." msgid "Owner of the source security group." msgstr "Besitzer der Quellsicherheitsgruppe." msgid "PATCH update to non-COMPLETE stack" msgstr "PATCH Update auf nicht-COMPLETE-Stack" #, python-format msgid "Parameter '%(name)s' is invalid: %(exp)s" msgstr "Der Parameter '%(name)s' ist ungültig: %(exp)s" msgid "Parameter Groups error" msgstr "Parameter Gruppen Fehler" msgid "" "Parameter Groups error: parameter_groups.: The grouped parameter key_name " "does not reference a valid parameter." msgstr "" "Parametergruppen Fehler: Parametergruppen: Der gruppierte Parameter " "Schlüsselname verweist nicht auf einen gültigen Parameter." msgid "" "Parameter Groups error: parameter_groups.: The key_name parameter must be " "assigned to one parameter group only." msgstr "" "Parametergruppen error: Parametergruppen: Der Parameter key_name muss nur " "einer Parametergruppe zugewiesen werden." msgid "" "Parameter Groups error: parameter_groups.: The parameters of parameter group " "should be a list." msgstr "" "Parametergruppen Fehler: Parametergruppen: Die Parameter der Parametergruppe " "sollten eine Liste sein." msgid "" "Parameter Groups error: parameter_groups.Database Group: The InstanceType " "parameter must be assigned to one parameter group only." msgstr "" "Parametergruppen error: parametergruppen.Datenbankgruppe: Der Parameter " "InstanceType muss nur einer Parametergruppe zugewiesen werden." msgid "" "Parameter Groups error: parameter_groups.Database Group: The grouped " "parameter SomethingNotHere does not reference a valid parameter." msgstr "" "Parametergruppen error: parametergruppen.Datenbankgruppe: Der gruppierte " "Parameter SomethingNotHere verweist nicht auf einen gültigen Parameter." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters must " "be provided for each parameter group." msgstr "" "Parametergruppen error: parametergruppen.Servergruppe: Die Parameter müssen " "für jede Parametergruppe angegeben werden." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters of " "parameter group should be a list." 
msgstr "" "Parametergruppen error: parametergruppen.Servergruppe: Die Parameter der " "Parametergruppe sollten eine Liste sein." msgid "" "Parameter Groups error: parameter_groups: The parameter_groups should be a " "list." msgstr "" "Parametergruppen Fehler: Parametergruppen: Die Parametergruppen sollten eine " "Liste sein." #, python-format msgid "Parameter name in \"%s\" must be string" msgstr "Der Parametername in \"%s\" muss eine Zeichenfolge sein" msgid "Parameters to add to the job." msgstr "Parameter, die dem Job hinzugefügt werden sollen." msgid "" "Parameters to pass to the Mistral workflow execution. The parameters depend " "on the workflow type." msgstr "" "Parameter, die an die Mistral-Workflowausführung übergeben werden. Die " "Parameter hängen vom Workflow-Typ ab." #, python-format msgid "Params must be a map, find a %s" msgstr "Params muss eine Karte sein, finde ein %s" msgid "Parent network of the subnet." msgstr "Übergeordnetes Netzwerk des Subnetzes." msgid "Parent project id." msgstr "Übergeordnete Projekt-ID" msgid "Parts belonging to this message." msgstr "Teile, die zu dieser Nachricht gehören." msgid "Password for API authentication" msgstr "Passwort für die API-Authentifizierung" msgid "Password for accessing the data source URL." msgstr "Passwort für den Zugriff auf die URL der Datenquelle." msgid "Password for accessing the job binary URL." msgstr "Passwort für den Zugriff auf die binäre URL des Jobs." msgid "Password for those users on instance creation." msgstr "Passwort für diese Benutzer bei der Instanzerstellung." msgid "Password of keystone user." msgstr "Passwort des Keystone-Benutzers" msgid "Password used by user." msgstr "Passwort vom Benutzer verwendet." #, python-format msgid "Path components in \"%s\" must be strings" msgstr "Pfadkomponenten in \"%s\" müssen Strings sein" #, python-format msgid "" "Path components in '%s' must be a string that can be parsed into an integer." msgstr "" "Pfadkomponenten in ' %s' müssen eine Zeichenfolge sein, die in eine Ganzzahl " "geparst werden kann." msgid "Path components in attributes must be strings" msgstr "Pfadkomponenten in Attributen müssen Strings sein" msgid "Payload exceeds maximum allowed size" msgstr "Nutzlast überschreitet die maximal zulässige Größe" msgid "Perfect forward secrecy for the ipsec policy." msgstr "Perfekte Weiterleitungsgeheimhaltung für die IPSec-Richtlinie." msgid "Perfect forward secrecy in lowercase for the ike policy." msgstr "Perfect Forward Secrecy in Kleinbuchstaben für die Ike-Politik." msgid "" "Perform a check on the input values passed to verify that each required " "input has a corresponding value. When the property is set to STRICT and no " "value is passed, an exception is raised." msgstr "" "Führen Sie eine Überprüfung der übergebenen Eingabewerte durch, um " "sicherzustellen, dass jede erforderliche Eingabe einen entsprechenden Wert " "aufweist. Wenn die Eigenschaft auf STRICT festgelegt ist und kein Wert " "übergeben wird, wird eine Ausnahme ausgelöst." msgid "Period (seconds) to evaluate over." msgstr "Zeitraum (Sekunden) für die Auswertung." msgid "Physical ID of the VPC. Not implemented." msgstr "Physische ID der VPC. Nicht implementiert." msgid "Please use OS::Heat::SoftwareDeploymentGroup instead." msgstr "Bitte verwenden Sie stattdessen OS::Heat::SoftwareDeploymentGroup." msgid "Please use OS::Heat::StructuredDeploymentGroup instead." msgstr "Bitte verwenden Sie stattdessen OS::Heat::StructuredDeploymentGroup." msgid "Please use OS::Magnum::Cluster instead." 
msgstr "Bitte verwenden Sie stattdessen OS::Magnum::Cluster." msgid "Please use OS::Magnum::ClusterTemplate instead." msgstr "Bitte verwenden Sie stattdessen OS::Magnum::ClusterTemplate." msgid "Please use OS::Neutron::FloatingIP instead." msgstr "Bitte verwenden Sie stattdessen OS::Neutron::FloatingIP." msgid "Please use OS::Neutron::FloatingIPAssociation instead." msgstr "Bitte verwenden Sie stattdessen OS::Neutron::FloatingIPAssociation." #, python-format msgid "" "Plugin %(plugin)s doesn't support the following node processes: " "%(unsupported)s. Allowed processes are: %(allowed)s" msgstr "" "Plugin %(plugin)s unterstützt die folgenden Knotenprozesse nicht: " "%(unsupported)s. Erlaubte Prozesse sind: %(allowed)s" msgid "Plugin name." msgstr "Plug-in-Name" msgid "Policies attached to the cluster." msgstr "Richtlinien, die an den Cluster angehängt sind." msgid "Policies for removal of resources on update." msgstr "Richtlinien zum Entfernen von Ressourcen beim Update." msgid "Policy for rolling updates for this scaling group." msgstr "" "Richtlinie zum Ausführen von Aktualisierungen für diese Skalierungsgruppe." msgid "Policy not registered." msgstr "Richtlinie nicht registriert" msgid "" "Policy on how to apply a flavor update; either by requesting a server resize " "or by replacing the entire server." msgstr "" "Richtlinie zum Anwenden einer Variantenaktualisierung entweder durch " "Anfordern einer Servergrößenänderung oder durch Ersetzen des gesamten " "Servers." msgid "" "Policy on how to apply a user_data update; either by ignoring it or by " "replacing the entire server." msgstr "" "Richtlinie zum Anwenden einer Benutzerdatenaktualisierung entweder durch " "Ignorieren oder durch Ersetzen des gesamten Servers." msgid "" "Policy on how to apply an image-id update; either by requesting a server " "rebuild or by replacing the entire server." msgstr "" "Richtlinie zum Anwenden einer Image-ID-Aktualisierung Entweder indem Sie " "eine Wiederherstellung des Servers anfordern oder indem Sie den gesamten " "Server ersetzen." msgid "" "Policy on how to respond to a stack-update for this resource. REPLACE_ALWAYS " "will replace the port regardless of any property changes. AUTO will update " "the existing port for any changed update-allowed property." msgstr "" "Richtlinie, wie auf ein Stack-Update für diese Ressource reagiert werden " "soll. REPLACE_ALWAYS ersetzt den Port unabhängig von Eigenschaftsänderungen. " "AUTO wird den vorhandenen Port für jede geänderte Eigenschaft aktualisieren, " "die zulässig ist." msgid "" "Policy to be processed when doing an update which requires removal of " "specific resources." msgstr "" "Richtlinie, die bei einer Aktualisierung verarbeitet wird, bei der bestimmte " "Ressourcen entfernt werden müssen." msgid "Pool creation failed" msgstr "Die Poolerstellung ist fehlgeschlagen" msgid "Pool creation failed due to vip" msgstr "Die Pool-Erstellung ist aufgrund von vip fehlgeschlagen" msgid "Pool from which floating IP is allocated." msgstr "Pool, aus dem Floating IP zugewiesen ist." msgid "Pools this LoadBalancer is associated with." msgstr "Pools, denen dieser LoadBalancer zugeordnet ist." msgid "Port Pair Group ID or Name ." msgstr "Portpaar Gruppennummer oder Name" msgid "Port Pair ID or name ." msgstr "Portpaar-ID oder Name" msgid "Port number on which the servers are running on the members." msgstr "Portnummer, auf der die Server auf den Mitgliedern ausgeführt werden." msgid "Port on which the pool member listens for requests or connections." 
msgstr "Port, an dem der Pool-Teilnehmer auf Anfragen oder Verbindungen hört." msgid "Port security enabled of the network." msgstr "Portsicherheit des Netzwerks aktiviert." msgid "Port security enabled of the port." msgstr "Portsicherheit des Ports aktiviert." msgid "" "Port tag. Heat ignores any update on this property as nova does not support " "it." msgstr "" "Port-Tag Heat ignoriert alle Aktualisierungen dieser Eigenschaft, da Nova " "diese nicht unterstützt." msgid "Position of the rule within the firewall policy." msgstr "Position der Regel innerhalb der Firewall-Richtlinie." msgid "Pre-shared key string for the ipsec site connection." msgstr "Pre-Shared Key-Zeichenfolge für die Verbindung der IPSec-Site." msgid "Prefix length for subnet allocation from subnet pool." msgstr "Präfixlänge für die Subnetzzuweisung aus dem Subnetzpool" msgid "" "Print an INFO message when processing of each raw_template or resource " "begins or ends" msgstr "" "Drucken Sie eine INFO-Nachricht, wenn die Verarbeitung jeder raw_template " "oder Ressource beginnt oder endet" msgid "Private DNS name of the specified instance." msgstr "Privater DNS-Name der angegebenen Instanz" msgid "Private IP address of the network interface." msgstr "Private IP-Adresse der Netzwerkschnittstelle." msgid "Private IP address of the specified instance." msgstr "Private IP-Adresse der angegebenen Instanz" msgid "Project ID" msgstr "Projekt-ID" msgid "Project ID to purge deleted stacks." msgstr "Projekt-ID zum Löschen gelöschter Stapel." msgid "Project name." msgstr "Projektname." msgid "" "Projects to add volume type access to. NOTE: This property is only supported " "since Cinder API V2." msgstr "" "Projekte, denen der Zugriff auf den Datenträgertyp hinzugefügt werden soll. " "HINWEIS: Diese Eigenschaft wird nur von Cinder API V2 unterstützt." #, python-format msgid "" "Properties %(algorithm)s and %(bit_length)s are required for %(type)s type " "of order." msgstr "" "Die Eigenschaft %(algorithm)s und %(bit_length)s sind für den Auftragstyp " "%(type)s erforderlich." #, python-format msgid "" "Properties %(pool)s and %(url)s are not required when %(action)s type is set " "to %(action_type)s." msgstr "" "Die Eigenschaften %(pool)s und %(url)s sind nicht erforderlich, wenn der Typ " "%(action)s auf %(action_type)s gesetzt ist." msgid "Properties for profile." msgstr "Eigenschaften für Profil" msgid "Properties group schema incorrectly specified." msgstr "Das Gruppenschema der Eigenschaften wurde falsch angegeben." msgid "Properties of this policy." msgstr "Eigenschaften dieser Richtlinie" msgid "" "Properties redirect_pool and redirect_url are not required when action type " "is set to REJECT." msgstr "" "Die Eigenschaften redirect_pool und redirect_url sind nicht erforderlich, " "wenn der Aktionstyp auf REJECT gesetzt ist." msgid "Properties to pass to each resource being created in the chain." msgstr "" "Eigenschaften, die an jede Ressource übergeben werden, die in der Kette " "erstellt wird." #, python-format msgid "" "Property \"%(fip)s\" is not supported if only \"%(net)s\" is specified, " "because the corresponding port can not be retrieved." msgstr "" "Property \"%(fip)s\" wird nicht unterstützt, wenn nur \"%(net)s\" angegeben " "wird, da der entsprechende Port nicht abgerufen werden kann." #, python-format msgid "" "Property \"%s\" can not be specified if multiple network interfaces set for " "server." 
msgstr "" "Die Eigenschaft \"%s\" kann nicht angegeben werden, wenn mehrere " "Netzwerkschnittstellen für den Server festgelegt sind." msgid "" "Property \"floating_ip\" is not supported if only \"network\" is specified, " "because the corresponding port can not be retrieved." msgstr "" "Die Eigenschaft \"floating_ip\" wird nicht unterstützt, wenn nur \"network\" " "angegeben wird, da der entsprechende Port nicht abgerufen werden kann." #, python-format msgid "Property %(cookie)s is required when %(sp)s type is set to %(app)s." msgstr "" "Eigenschaft %(cookie)s ist erforderlich, wenn %(sp)s type auf %(app)s " "gesetzt ist." #, python-format msgid "" "Property %(cookie)s must NOT be specified when %(sp)s type is set to %(ip)s." msgstr "" "Eigenschaft %(cookie)s darf NICHT angegeben werden, wenn %(sp)s Typ auf " "%(ip)s gesetzt ist." #, python-format msgid "" "Property %(key)s is missing. This property should be specified for rules of " "%(header)s and %(cookie)s types." msgstr "" "Eigenschaft %(key)s fehlt. Diese Eigenschaft sollte für Regeln der Typen " "%(header)s und %(cookie)s angegeben werden." #, python-format msgid "" "Property %(key)s updated value %(new)s should be superset of existing value " "%(old)s." msgstr "" "Eigenschaft %(key)s aktualisierter Wert %(new)s sollte eine Obermenge des " "vorhandenen Werts %(old)s sein." #, python-format msgid "" "Property %(n)s type mismatch between facade %(type)s (%(fs_type)s) and " "provider (%(ps_type)s)" msgstr "" "Eigenschaft %(n)s Typ stimmt nicht überein zwischen %(type)s der Fassade " "(%(fs_type)s) und Provider (%(ps_type)s)" #, python-format msgid "Property %(policies)s and %(item)s cannot be used both at one time." msgstr "" "Property %(policies)s und %(item)s können nicht gleichzeitig verwendet " "werden." #, python-format msgid "" "Property %(pool)s is required when %(action)s type is set to %(action_type)s." msgstr "" "Eigenschaft %(pool)s ist erforderlich, wenn der Typ %(action)s auf " "%(action_type)s gesetzt ist." #, python-format msgid "Property %(prp)s is required for zone type %(zone_type)s" msgstr "Property %(prp)s ist für den Zonentyp %(zone_type)s erforderlich" #, python-format msgid "Property %(ref)s required when protocol is %(term)s." msgstr "Eigenschaft %(ref)s erforderlich, wenn das Protokoll %(term)s ist." #, python-format msgid "" "Property %(url)s is required when %(action)s type is set to %(action_type)s." msgstr "" "Eigenschaft %(url)s ist erforderlich, wenn der Typ %(action)s auf " "%(action_type)s gesetzt ist." #, python-format msgid "Property %s not assigned" msgstr "Eigenschaft %s ist nicht zugewiesen" #, python-format msgid "Property %s not implemented yet" msgstr "Eigenschaft %s ist noch nicht implementiert" msgid "" "Property cookie_name is required when session_persistence type is set to " "APP_COOKIE." msgstr "" "Die Eigenschaft cookie_name ist erforderlich, wenn der Session_Persistenztyp " "auf APP_COOKIE festgelegt ist." msgid "" "Property cookie_name is required, when session_persistence type is set to " "APP_COOKIE." msgstr "" "Die Eigenschaft cookie_name ist erforderlich, wenn der Session_Persistenztyp " "auf APP_COOKIE festgelegt ist." msgid "" "Property cookie_name must NOT be specified when session_persistence type is " "set to SOURCE_IP." msgstr "" "Die Eigenschaft cookie_name darf NICHT angegeben werden, wenn " "session_persistency type auf SOURCE_IP gesetzt ist." msgid "" "Property key is missing. This property should be specified for rules of " "HEADER and COOKIE types." 
msgstr "" "Der Eigenschaftenschlüssel fehlt. Diese Eigenschaft sollte für Regeln der " "HEADER- und COOKIE-Typen angegeben werden." msgid "" "Property redirect_pool is required when action type is set to " "REDIRECT_TO_POOL." msgstr "" "Property redirect_pool ist erforderlich, wenn der Aktionstyp auf " "REDIRECT_TO_POOL gesetzt ist." msgid "" "Property redirect_url is required when action type is set to REDIRECT_TO_URL." msgstr "" "Property redirect_url ist erforderlich, wenn der Aktionstyp auf " "REDIRECT_TO_URL festgelegt ist." #, python-format msgid "" "Property unspecified. For '%(value)s' value of '%(prop1)s' property, " "'%(prop2)s' property must be specified." msgstr "" "Eigenschaft nicht angegeben. Für den Wert ' %(value)s' der Eigenschaft " "'%(prop1)s' muss die Eigenschaft '%(prop2)s' angegeben werden." msgid "Property values for the resources in the group." msgstr "Eigenschaftswerte für die Ressourcen in der Gruppe." msgid "Protocol for balancing." msgstr "Protokoll für den Ausgleich." msgid "Protocol for the firewall rule." msgstr "Protokoll für die Firewall-Regel" msgid "Protocol of the pool." msgstr "Protokoll des Pools." msgid "Protocol on which to listen for the client traffic." msgstr "Protokoll, auf dem der Client-Verkehr überwacht werden soll." msgid "Protocol to balance." msgstr "Protokoll zum Ausgleich." msgid "Protocol value for this firewall rule." msgstr "Protokollwert für diese Firewallregel" msgid "" "Provide access to nodes using other nodes of the cluster as proxy gateways." msgstr "" "Bereitstellen von Zugriff auf Knoten, die andere Knoten des Clusters als " "Proxy-Gateways verwenden" msgid "" "Provide old encryption key. New encryption key would be used from config " "file." msgstr "" "Geben Sie den alten Verschlüsselungsschlüssel ein. Ein neuer " "Verschlüsselungsschlüssel würde aus der Konfigurationsdatei verwendet werden." #, python-format msgid "Provided %(subnet)s does not belong to provided %(network)s." msgstr "" "Bereitgestellte %(subnet)s gehört nicht zu den bereitgestellten %(network)s." msgid "Provider for this Load Balancer." msgstr "Provider für diesen Load Balancer." msgid "Provider implementing this load balancer instance." msgstr "Provider, der diese Load-Balancer-Instanz implementiert." #, python-format msgid "Provider requires property %(n)s unknown in facade %(type)s" msgstr "" "Der Provider benötigt die Eigenschaft %(n)s unbekannt in der Fassade %(type)s" msgid "Public DNS name of the specified instance." msgstr "Öffentlicher DNS-Name der angegebenen Instanz" msgid "Public IP address of the specified instance." msgstr "Öffentliche IP-Adresse der angegebenen Instanz" msgid "Queue name length must be 1-64" msgstr "Die Länge des Warteschlangennamens muss 1-64 sein" msgid "Queue name must be a string" msgstr "Der Warteschlangenname muss eine Zeichenfolge sein" msgid "" "Quota for the amount of disk space (in Gigabytes). Setting the value to -1 " "removes the limit." msgstr "" "Quote für die Menge an Speicherplatz (in Gigabyte). Wenn Sie den Wert auf -1 " "setzen, wird das Limit entfernt." msgid "" "Quota for the amount of ram (in megabytes). Setting the value to -1 removes " "the limit." msgstr "" "Quote für die Menge an RAM (in Megabyte). Wenn Sie den Wert auf -1 setzen, " "wird das Limit entfernt." msgid "" "Quota for the number of cores. Setting the value to -1 removes the limit." msgstr "" "Quote für die Anzahl der Kerne. Wenn Sie den Wert auf -1 setzen, wird das " "Limit entfernt." msgid "" "Quota for the number of fixed IPs. 
"Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der festen IPs. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of floating IPs. Setting -1 means unlimited."
msgstr "Quote für die Anzahl der Floating IPs. Einstellung -1 bedeutet unbegrenzt."
msgid "Quota for the number of floating IPs. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der Floating IPs. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of injected file content bytes. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der Bytes des injizierten Dateiinhalts. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of injected file path bytes. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der injizierten Dateipfadbytes. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of injected files. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der injizierten Dateien. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of instances. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der Instanzen. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of key pairs. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der Schlüsselpaare. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of metadata items. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der Metadatenelemente. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of networks. Setting -1 means unlimited."
msgstr "Quote für die Anzahl der Netzwerke. Einstellung -1 bedeutet unbegrenzt."
msgid "Quota for the number of ports. Setting -1 means unlimited."
msgstr "Quote für die Anzahl der Ports. Einstellung -1 bedeutet unbegrenzt."
msgid "Quota for the number of routers. Setting -1 means unlimited."
msgstr "Quote für die Anzahl der Router. Einstellung -1 bedeutet unbegrenzt."
msgid "Quota for the number of security group rules. Setting -1 means unlimited."
msgstr "Quote für die Anzahl der Sicherheitsgruppenregeln. Einstellung -1 bedeutet unbegrenzt."
msgid "Quota for the number of security group rules. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der Sicherheitsgruppenregeln. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of security groups. Setting -1 means unlimited."
msgstr "Quote für die Anzahl der Sicherheitsgruppen. Einstellung -1 bedeutet unbegrenzt."
msgid "Quota for the number of security groups. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der Sicherheitsgruppen. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of server group members. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der Servergruppenmitglieder. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of server groups. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der Servergruppen. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid ""
"Quota for the number of snapshots. "
"Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der Snapshots. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "Quota for the number of subnets. Setting -1 means unlimited."
msgstr "Quote für die Anzahl der Subnetze. Einstellung -1 bedeutet unbegrenzt."
msgid "Quota for the number of volumes. Setting the value to -1 removes the limit."
msgstr "Quote für die Anzahl der Volumes. Wenn Sie den Wert auf -1 setzen, wird das Limit entfernt."
msgid "RPC timeout for the engine liveness check that is used for stack locking."
msgstr "RPC-Timeout für die Liveness-Prüfung der Engine, die für das Sperren von Stapeln verwendet wird."
msgid "RX/TX factor."
msgstr "RX/TX-Faktor."
#, python-format
msgid "Rebuilding server failed, status '%s'"
msgstr "Neuaufbau des Servers fehlgeschlagen, Status '%s'"
msgid "Record name."
msgstr "Name des Records."
msgid "RecordSet name."
msgstr "RecordSet-Name."
#, python-format
msgid "Recursion depth exceeds %d."
msgstr "Rekursionstiefe überschreitet %d."
msgid "Ref structure that contains the ID of the VPC on which you want to create the subnet."
msgstr "Ref-Struktur, die die ID der VPC enthält, auf der das Subnetz erstellt werden soll."
msgid "Reference to a flavor for creating DB instance."
msgstr "Referenz auf eine Variante zum Erstellen der DB-Instanz."
msgid "Reference to certificate."
msgstr "Verweis auf das Zertifikat."
msgid "Reference to intermediates."
msgstr "Verweis auf Zwischenzertifikate."
msgid "Reference to private key passphrase."
msgstr "Verweis auf die Passphrase des privaten Schlüssels."
msgid "Reference to private key."
msgstr "Verweis auf den privaten Schlüssel."
msgid "Reference to public key."
msgstr "Verweis auf den öffentlichen Schlüssel."
msgid "Reference to the secret."
msgstr "Verweis auf das Geheimnis."
msgid "References to secrets that will be stored in container."
msgstr "Verweise auf Geheimnisse, die im Container gespeichert werden."
msgid "Region name in which this stack will be created."
msgstr "Regionsname, in dem dieser Stapel erstellt wird."
msgid "Remaining executions."
msgstr "Verbleibende Ausführungen."
msgid "Remote branch router identity."
msgstr "Identität des Remote-Branch-Routers."
msgid "Remote branch router public IPv4 address or IPv6 address or FQDN."
msgstr "Öffentliche IPv4-Adresse, IPv6-Adresse oder FQDN des Remote-Branch-Routers."
msgid "Remote subnet(s) in CIDR format."
msgstr "Entfernte Subnetze im CIDR-Format."
msgid "Replace the deployment instead of updating it when the input value changes."
msgstr "Ersetzen Sie die Bereitstellung, anstatt sie zu aktualisieren, wenn sich der Eingabewert ändert."
msgid "Replacement policy used to work around flawed nova/neutron port interaction which has been fixed since Liberty."
msgstr "Ersetzungsrichtlinie, um die fehlerhafte Interaktion zwischen Nova- und Neutron-Ports zu umgehen, die seit Liberty behoben ist."
msgid "Request expired or more than 15mins in the future"
msgstr "Anfrage abgelaufen oder mehr als 15 Minuten in der Zukunft"
#, python-format
msgid "Request limit exceeded: %(message)s"
msgstr "Anforderungslimit überschritten: %(message)s"
msgid "Request missing required header X-Auth-Url"
msgstr "In der Anfrage fehlt die erforderliche Kopfzeile X-Auth-Url"
msgid "Request was denied due to request throttling"
msgstr "Anfrage wurde aufgrund von Anfragedrosselung abgelehnt"
#, python-format
msgid ""
"Requested plugin '%(plugin)s' doesn't support version '%(version)s'. "
Allowed " "versions are %(allowed)s" msgstr "" "Angefordertes Plugin '%(plugin)s' unterstützt die Version '%(version)s' " "nicht. Erlaubte Versionen sind %(allowed)s" msgid "Required extension {0} in {1} service is not available." msgstr "Die erforderliche Erweiterung {0} in Service {1} ist nicht verfügbar." msgid "" "Required extra specification. Defines if share drivers handles share servers." msgstr "" "Erforderliche zusätzliche Spezifikation. Definiert, ob Freigabetreiber Share-" "Server behandeln." #, python-format msgid "Required property %(n)s for facade %(type)s missing in provider" msgstr "Erforderliche Eigenschaft %(n)s für Fassade %(type)s fehlt im Anbieter" #, python-format msgid "Resizing to '%(flavor)s' failed, status '%(status)s'" msgstr "Größenänderung auf '%(flavor)s' fehlgeschlagen, Status '%(status)s'" #, python-format msgid "Resource \"%s\" has no type" msgstr "Ressource \"%s\" hat keinen Typ" #, python-format msgid "Resource \"%s\" type is not a string" msgstr "Ressource \"%s\" Typ ist keine Zeichenfolge" #, python-format msgid "Resource %(name)s %(key)s type must be %(typename)s" msgstr "Ressource %(name)s %(key)s Typ muss %(typename)s sein" #, python-format msgid "Resource %(name)s is missing \"%(type_key)s\"" msgstr "Ressource %(name)s fehlt \"%(type_key)s\"" #, python-format msgid "Resource %s not created yet." msgstr "Die Ressource %s wurde noch nicht erstellt." #, python-format msgid "" "Resource %s's property user_data_format should be set to SOFTWARE_CONFIG " "since there are software deployments on it." msgstr "" "Die Eigenschaft user_data_format der Ressource %s sollte auf SOFTWARE_CONFIG " "festgelegt werden, da sich Softwarebereitstellungen darauf befinden." msgid "Resource ID was not provided." msgstr "Ressourcen-ID wurde nicht angegeben." msgid "Resource action which triggers a workflow execution." msgstr "Ressourcenaktion, die eine Workflow-Ausführung auslöst." msgid "" "Resource definition for the resources in the group, in HOT format. The value " "of this property is the definition of a resource just as if it had been " "declared in the template itself." msgstr "" "Ressourcendefinition für die Ressourcen in der Gruppe im HOT-Format. Der " "Wert dieser Eigenschaft ist die Definition einer Ressource, so als wäre sie " "in der Vorlage selbst deklariert worden." msgid "" "Resource definition for the resources in the group. The value of this " "property is the definition of a resource just as if it had been declared in " "the template itself." msgstr "" "Ressourcendefinition für die Ressourcen in der Gruppe. Der Wert dieser " "Eigenschaft ist die Definition einer Ressource, so als wäre sie in der " "Vorlage selbst deklariert worden." msgid "Resource failed" msgstr "Ressource ist fehlgeschlagen" msgid "Resource is not built" msgstr "Ressource wird nicht erstellt" msgid "Resource name may not contain \"/\"" msgstr "Ressourcenname darf nicht \"/\" enthalten" msgid "Resource type." msgstr "Ressourcentyp." 
msgid "Resource update already requested" msgstr "Ressourcenaktualisierung wurde bereits angefordert" msgid "Resource with the name requested already exists" msgstr "Ressource mit dem angeforderten Namen existiert bereits" msgid "" "ResourceInError: resources.remote_stack: Went to status UPDATE_FAILED due to " "\"Remote stack update failed\"" msgstr "" "ResourceInError: resources.remote_stack: Ging auf Status UPDATE_FAILED wegen " "\"Remote Stack Update fehlgeschlagen\"" #, python-format msgid "ResourcePropertiesData with id %s not found" msgstr "ResourcePropertiesData mit der ID %s wurde nicht gefunden" #, python-format msgid "Resources must contain Resource. Found a [%s] instead" msgstr "" "Ressourcen müssen Ressource enthalten. Stattdessen wurde ein [%s] gefunden" msgid "" "Resources that users are allowed to access by the DescribeStackResource API." msgstr "" "Ressourcen, auf die Benutzer von der DescribeStackResource-API zugreifen " "dürfen." msgid "" "Restart policy to apply when a container exits. Possible values are \"no\", " "\"on-failure[:max-retry]\", \"always\", and \"unless-stopped\"." msgstr "" "Richtlinie neu starten, die beim Beenden eines Containers angewendet wird " "Mögliche Werte sind \"nein\", \"bei Fehler [: max-retry]\", \"immer\" und " "\"wenn nicht gestoppt\"." msgid "Returned status code from the configuration execution." msgstr "Zurückgegebener Statuscode von der Konfigurationsausführung." msgid "" "Rough number of maximum events that will be available per stack. Actual " "number of events can be a bit higher since purge checks take place randomly " "200/event_purge_batch_size percent of the time. Older events are deleted " "when events are purged. Set to 0 for unlimited events per stack." msgstr "" "Ungefähre Anzahl von maximalen Ereignissen, die pro Stapel verfügbar sind. " "Die tatsächliche Anzahl der Ereignisse kann etwas höher sein, da die " "Löschprüfungen nach dem Zufallsprinzip 200/event_purge_batch_size in Prozent " "der Zeit stattfinden. Ältere Ereignisse werden gelöscht, wenn Ereignisse " "bereinigt werden. Setzen Sie den Wert auf 0 für unbegrenzte Ereignisse pro " "Stapel." msgid "Route duplicates an existing route." msgstr "Route dupliziert eine vorhandene Route." msgid "Route table ID." msgstr "Routentabelle ID" msgid "Rule compare type." msgstr "Regelvergleichstyp" msgid "Rule type." msgstr "Regeltyp" msgid "" "Rules list. Basic threshold/gnocchi rules and nested dict which combine " "threshold/gnocchi rules by \"and\" or \"or\" are allowed. For example, the " "form is like: [RULE1, RULE2, {\"and\": [RULE3, RULE4]}], the basic threshold/" "gnocchi rules must include a \"type\" field." msgstr "" "Liste der Regeln. Basic throttle/gnocchi regelt und verschachtelt Dicts, die " "threshold/gnocchi Regeln mit \"and\" oder \"or\" kombinieren, sind erlaubt. " "Zum Beispiel ist das Formular wie folgt: [RULE1, RULE2, {\"and\": [RULE3, " "RULE4]}], die grundlegenden Schwellen/Gnocchi-Regeln müssen ein \"Typ\" -" "Feld enthalten." msgid "Safety assessment lifetime configuration for the ike policy." msgstr "" "Lebenszeitkonfiguration für die Sicherheitsbewertung für die IKE-Richtlinie." msgid "Safety assessment lifetime configuration for the ipsec policy." msgstr "Sicherheitslebensdauerkonfiguration für die IPSec-Richtlinie." msgid "Safety assessment lifetime units." msgstr "Sicherheitsbeurteilung Lebensdauer Einheiten." msgid "Safety assessment lifetime value in specified units." msgstr "Sicherheitsbewertung der Lebensdauer in bestimmten Einheiten." 
msgid "Scheduler hints to pass to Nova (Heat extension)." msgstr "Scheduler Hinweise an Nova (Heat-Erweiterung) zu übergeben." msgid "Schema representing the inputs that this software config is expecting." msgstr "" "Schema, das die Eingänge darstellt, die diese Softwarekonfiguration erwartet." msgid "Schema representing the outputs that this software config will produce." msgstr "" "Schema, das die Ausgaben darstellt, die diese Softwarekonfiguration erzeugt." #, python-format msgid "Schema valid only for %(ltype)s or %(mtype)s, not %(utype)s" msgstr "" "Schema ist nur für %(ltype)s oder %(mtype)s gültig, nicht für %(utype)s" msgid "" "Scope of flavor accessibility. Public or private. Default value is True, " "means public, shared across all projects." msgstr "" "Umfang der Zugänglichkeit der Variante. Öffentlich oder privat. Der " "Standardwert ist \"True\", bedeutet \"öffentlich\" und wird für alle " "Projekte freigegeben." msgid "" "Scope of image accessibility. Public or private. Default value is False " "means private. Note: The policy setting of glance allows only users with " "admin roles to create public image by default." msgstr "" "Umfang der Abbildzugänglichkeit. Öffentlich oder privat. Standardwert ist " "false, bedeutet privat. Hinweis: Die Richtlinieneinstellung von Glance " "ermöglicht nur Benutzern mit Administratorrollen, ein öffentliches Abbild " "standardmäßig zu erstellen." #, python-format msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden." msgstr "Suche Mandant %(target)s von Mandant %(actual)s verboten." #, python-format msgid "Second argument to \"%s\" should be a sequence." msgstr "Das zweite Argument für \"%s\" sollte eine Sequenz sein." msgid "Seconds between running periodic tasks." msgstr "Sekunden zwischen dem Ausführen periodischer Aufgaben." msgid "Seconds to wait after a create. Defaults to the global wait_secs." msgstr "" "Sekunden, um nach einem Erstellen zu warten. Der Standardwert für die " "globalen wait_secs." msgid "Seconds to wait after a delete. Defaults to the global wait_secs." msgstr "" "Sekunden nach dem Löschen warten. Der Standardwert für die globalen " "wait_secs." msgid "Seconds to wait after an action (-1 is infinite)." msgstr "Sekunden, um nach einer Aktion zu warten (-1 ist unendlich)." msgid "Seconds to wait after an update. Defaults to the global wait_secs." msgstr "" "Sekunden, um nach einem Update zu warten. Der Standardwert für die globalen " "wait_secs." #, python-format msgid "Section %s can not be accessed directly." msgstr "Auf den Abschnitt %s kann nicht direkt zugegriffen werden." #, python-format msgid "Security Group \"%(group_name)s\" not found" msgstr "Sicherheitsgruppe \"%(group_name)s\" nicht gefunden" msgid "Security group IDs to assign." msgstr "Sicherheitsgruppen-IDs, die zugewiesen werden sollen." msgid "Security group IDs to associate with this port." msgstr "Sicherheitsgruppen-IDs, die mit diesem Port verknüpft werden sollen." msgid "Security group name or ID to add rule." msgstr "Name oder ID der Sicherheitsgruppe zum Hinzufügen der Regel" msgid "Security group names to assign." msgstr "Zu vergebende Sicherheitsgruppennamen." msgid "Security groups cannot be assigned the name \"default\"." msgstr "" "Sicherheitsgruppen können nicht den Namen \"Standard\" zugewiesen bekommen." msgid "Security service IP address or hostname." msgstr "Sicherheitsdienst-IP-Adresse oder Hostname" msgid "Security service description." msgstr "Beschreibung des Sicherheitsdienstes." msgid "Security service domain." 
msgstr "Sicherheitsdienstdomäne" msgid "Security service name." msgstr "Sicherheitsdienstname" msgid "Security service type." msgstr "Sicherheitsdiensttyp" msgid "Security service user or group used by tenant." msgstr "" "Sicherheitsdienstbenutzer oder -gruppe, die vom Mandanten verwendet wird." msgid "Segmentation ID for this segment." msgstr "Segmentierungs-ID für dieses Segment" msgid "Segmentation type to be used on the subport." msgstr "Segmentierungstyp, der im Unterport verwendet werden soll." msgid "Select a docker storage driver." msgstr "Wählen Sie einen Docker-Speichertreiber aus." msgid "Select deferred auth method, stored password or trusts." msgstr "" "Wählen Sie die verzögerte Authentifizierungsmethode, das gespeicherte " "Kennwort oder die Vertrauensstellungen aus." msgid "Send command to the container." msgstr "Sende den Befehl an den Container." msgid "Sequence of characters to build the random string from." msgstr "" "Reihenfolge der Zeichen, aus denen die zufällige Zeichenfolge erstellt " "werden soll." #, python-format msgid "Server %(name)s delete failed: (%(code)s) %(message)s" msgstr "Server %(name)s Löschen fehlgeschlagen: (%(code)s) %(message)s" #, python-format msgid "Server %s is not in ACTIVE state" msgstr "Server %s befindet sich nicht in aktivem Status" msgid "Server Group name." msgstr "Server Gruppenname." msgid "Server name." msgstr "Servername." msgid "Server tags. Supported since client version 2.26." msgstr "Server-Tags Unterstützt seit Client Version 2.26." msgid "Server to assign floating IP to." msgstr "Server, dem Floating IP zugewiesen werden soll." #, python-format msgid "" "Service %(service_name)s is not available for resource type " "%(resource_type)s, reason: %(reason)s" msgstr "" "Service %(service_name)s ist nicht verfügbar für Ressourcentyp " "%(resource_type)s, Grund: %(reason)s" msgid "Service misconfigured" msgstr "Dienst falsch konfiguriert" msgid "Service temporarily unavailable" msgstr "Dienst vorübergehend nicht verfügbar" msgid "Set of parameters passed to this stack." msgstr "Satz von Parametern, die an diesen Stapel übergeben wurden." msgid "Set of rules for comparing characters in a character set." msgstr "Satz Regeln zum Vergleichen von Zeichen in einem Zeichensatz." msgid "Set of symbols and encodings." msgstr "Satz von Symbolen und Kodierungen." msgid "Set to \"vpc\" to have IP address allocation associated to your VPC." msgstr "" "Setzen Sie den Wert auf \"vpc\", um die Zuordnung der IP-Adresse zu Ihrer " "VPC zu ermöglichen." msgid "Set to true if DHCP is enabled and false if DHCP is disabled." msgstr "" "Auf \"True\" setzen, wenn DHCP aktiviert ist, und auf \"False\", wenn DHCP " "deaktiviert ist." msgid "Severity of the alarm." msgstr "Schweregrad des Alarms" msgid "Share description." msgstr "Freigabebeschreibung." msgid "Share host." msgstr "Freigabe-Host" msgid "Share name." msgstr "Freigabename." msgid "Share network description." msgstr "Netzwerkbeschreibung der Freigabe." msgid "Share project ID." msgstr "Freigabe Projekt-ID." msgid "Share protocol supported by shared filesystem." msgstr "Freigabe-Protokoll unterstützt von freigegebenen Dateisystem." msgid "Share storage size in GB." msgstr "Teilen Sie die Speichergröße in GB." msgid "Shared status of the metering label." msgstr "Geteilter Status des Metering-Labels." msgid "Shared status of this firewall policy." msgstr "Freigegebener Status dieser Firewall-Richtlinie" msgid "Shared status of this firewall rule." 
msgstr "Freigegebener Status dieser Firewallregel" msgid "Shared status of this firewall." msgstr "Freigegebener Status dieser Firewall" msgid "Show available commands." msgstr "Zeige verfügbare Befehle." msgid "Show user password expiration time." msgstr "Zeigt die Ablaufzeit des Benutzerpassworts an." msgid "Shrinking volume" msgstr "Datenträger schrumpfen" msgid "Signal data error" msgstr "Signaldatenfehler" #, python-format msgid "Signal resource during %s" msgstr "Signalressource während %s" msgid "Signature of the URL built by Zaqar." msgstr "Signatur der von Zaqar erstellten URL" #, python-format msgid "Single schema valid only for %(ltype)s, not %(utype)s" msgstr "Ein einzelnes Schema ist nur für %(ltype)s, nicht für %(utype)s gültig" msgid "Size of a secondary ephemeral data disk in GB." msgstr "Größe einer sekundären ephemeren Datenplatte in GB." msgid "Size of adjustment." msgstr "Größe der Anpassung." msgid "Size of encryption key, in bits. For example, 128 or 256." msgstr "" "Größe des Verschlüsselungsschlüssels in Bits. Zum Beispiel 128 oder 256." msgid "" "Size of local disk in GB. The \"0\" size is a special case that uses the " "native base image size as the size of the ephemeral root volume." msgstr "" "Größe der lokalen Festplatte in GB. Die Größe \"0\" ist ein Sonderfall, bei " "dem die Größe des nativen Basisbildes als Größe des ephemeren root-" "Datenträgers verwendet wird." msgid "" "Size of the block device in GB. If it is omitted, hypervisor driver " "calculates size." msgstr "" "Größe des Blockgerätes in GB. Wenn es weggelassen wird, berechnet der " "Hypervisor-Treiber die Größe." msgid "Size of the instance disk volume in GB." msgstr "Größe des Instanzdatenträgers in GB" msgid "Size of the volumes, in GB." msgstr "Größe der Datenträgers, in GB." msgid "Smallest prefix size that can be allocated from the subnet pool." msgstr "Kleinste Präfixgröße, die vom Subnetzpool zugewiesen werden kann." #, python-format msgid "Snapshot with id %s not found" msgstr "Snapshot mit der ID %s wurde nicht gefunden" msgid "" "SnapshotId is missing, this is required when specifying BlockDeviceMappings." msgstr "" "SnapshotId fehlt, dies ist erforderlich, wenn Sie BlockDeviceMappings " "angeben." #, python-format msgid "Software config with id %s can not be deleted as it is referenced." msgstr "" "Softwarekonfig mit der ID %s kann nicht gelöscht werden, da auf sie " "verwiesen wird." #, python-format msgid "Software config with id %s not found" msgstr "Softwarekonfig mit der ID %s wurde nicht gefunden" msgid "Some value that can be stored but can not be updated." msgstr "" "Einige Werte, die gespeichert werden können, aber nicht aktualisiert werden " "können." msgid "Source IP address or CIDR." msgstr "Quell-IP-Adresse oder CIDR." msgid "Source IP prefix or subnet." msgstr "Quell-IP-Präfix oder Subnetz" msgid "Source ip_address for this firewall rule." msgstr "Quelle IP-Adresse für diese Firewall-Regel." msgid "Source port number or a range." msgstr "Quellportnummer oder ein Bereich" msgid "Source port range for this firewall rule." msgstr "Quellportbereich für diese Firewallregel" msgid "Source protocol port Maximum." msgstr "Quellprotokollport Maximum." msgid "Source protocol port Minimum." msgstr "Quellprotokoll-Port Minimum." #, python-format msgid "Specified output key %s not found." msgstr "Angegebener Ausgabeschlüssel %s wurde nicht gefunden." 
#, python-format msgid "Specified status is invalid, defaulting to %s" msgstr "" "Der angegebene Status ist ungültig und wird standardmäßig auf %s gesetzt" #, python-format msgid "Specified subnet %(subnet)s does not belongs to network %(network)s." msgstr "Angegebenes Subnetz %(subnet)s gehört nicht zu Netzwerk %(network)s." msgid "Specifies a custom discovery url for node discovery." msgstr "" "Gibt eine benutzerdefinierte Erkennungs-URL für die Knotenermittlung an." msgid "Specifies database names for creating databases on instance creation." msgstr "" "Gibt Datenbanknamen zum Erstellen von Datenbanken bei der Instanzerstellung " "an." msgid "Specify the ACL permissions on who can read objects in the container." msgstr "" "Geben Sie die ACL-Berechtigungen an, wer Objekte im Container lesen darf." msgid "Specify the ACL permissions on who can write objects to the container." msgstr "" "Geben Sie die ACL-Berechtigungen an, wer Objekte in den Container schreiben " "darf." msgid "Specify the server type to be used." msgstr "Geben Sie den zu verwendenden Servertyp an." msgid "" "Specify whether the remote_ip_prefix will be excluded or not from traffic " "counters of the metering label. For example to not count the traffic of a " "specific IP address of a range." msgstr "" "Geben Sie an, ob der remote_ip_prefix von Verkehrszählern des Messlabels " "ausgeschlossen werden soll oder nicht. Zum Beispiel, um den Verkehr einer " "bestimmten IP-Adresse eines Bereichs nicht zu zählen." #, python-format msgid "Stack %(stack_name)s already has an action (%(action)s) in progress." msgstr "" "Stapel %(stack_name)s hat bereits eine Aktion (%(action)s) in Bearbeitung." msgid "Stack ID" msgstr "Stapel-ID" msgid "Stack Name" msgstr "Stapelname" msgid "Stack id" msgstr "Stapel-ID" msgid "Stack name may not contain \"/\"" msgstr "Stapelname darf nicht \"/\" enthalten" msgid "Stack resource id" msgstr "Stack-Ressourcen-ID" msgid "Stack unknown status" msgstr "Stack unbekannter Status" #, python-format msgid "Stack with id %s can not be found." msgstr "Stack mit ID %s kann nicht gefunden werden." #, python-format msgid "Stack with id %s not found" msgstr "Stack mit ID %s nicht gefunden" msgid "" "Stacks containing these tag names will be hidden. Multiple tags should be " "given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too)." msgstr "" "Stapel mit diesen Tag-Namen werden ausgeblendet. Mehrere Tags sollten in " "einer durch Kommas getrennten Liste angegeben werden (z. B. " "hidden_stack_tags=hide_me, me_too)." msgid "Start address for the allocation pool." msgstr "Startadresse für den Zuordnungspool." #, python-format msgid "Start resizing the group %(group)s" msgstr "Beginnen Sie die Größe der Gruppe %(group)s zu ändern" msgid "Start time for the time constraint. A CRON expression property." msgstr "Startzeit für die Zeitbeschränkung. Eine CRON-Expressionseigenschaft." #, python-format msgid "State %s invalid for create" msgstr "Status %s ist ungültig für create" #, python-format msgid "State %s invalid for resume" msgstr "Status %s ist ungültig für die Fortsetzung" #, python-format msgid "State %s invalid for suspend" msgstr "Status %s ist für Suspend ungültig" msgid "Status" msgstr "Status" #, python-format msgid "String to split must be string; got %s" msgstr "Zu teilende Zeichenfolge muss Zeichenfolge sein; hab %s bekommen" msgid "String value with which to compare." msgstr "String-Wert, mit dem verglichen werden soll." msgid "Subnet ID to associate with this interface." 
msgstr "Subnetz-ID, die mit dieser Schnittstelle verknüpft werden soll." msgid "Subnet ID to launch instance in." msgstr "Subnetz-ID zum Starten der Instanz in." msgid "Subnet ID." msgstr "Subnetz-ID" msgid "Subnet in which the vpn service will be created." msgstr "Subnetz, in dem der VPN-Dienst erstellt wird." msgid "" "Subnet in which to allocate the IP address for port. Used for creating port, " "based on derived properties. If subnet is specified, network property " "becomes optional." msgstr "" "Subnetz, in dem die IP-Adresse für den Port vergeben wird. Wird zum " "Erstellen eines Ports basierend auf abgeleiteten Eigenschaften verwendet. " "Wenn ein Subnetz angegeben ist, wird die Netzwerkeigenschaft optional." msgid "Subnet in which to allocate the IP address for this port." msgstr "Subnetz, in dem die IP-Adresse für diesen Port zugewiesen werden soll." msgid "Subnet name or ID of this member." msgstr "Subnetzname oder ID dieses Mitglieds" msgid "Subnet of external fixed IP address." msgstr "Subnetz der externen festen IP-Adresse." msgid "Subnet of the vip." msgstr "Teilnetz des vip." msgid "Subnet to allocate floating IP from." msgstr "Subnetz zur Zuweisung von Floating IP von." msgid "Subnets of this network." msgstr "Subnetze dieses Netzwerks." msgid "" "Subset of trustor roles to be delegated to heat. If left unset, all roles of " "a user will be delegated to heat when creating a stack." msgstr "" "Teilmenge der Trustor-Rollen, die an Heat delegiert werden sollen. Wenn sie " "nicht gesetzt sind, werden alle Rollen eines Benutzers an Heat delegiert, " "wenn ein Stack erstellt wird." msgid "Supplied metadata for the resources in the group." msgstr "Mitgelieferte Metadaten für die Ressourcen in der Gruppe." msgid "Supported versions: keystone v3" msgstr "Unterstützte Versionen: Keystone v3" #, python-format msgid "Suspend of instance %s failed" msgstr "Suspend der Instanz %s ist fehlgeschlagen" #, python-format msgid "Suspend of server %s failed" msgstr "Suspend des Servers %s ist fehlgeschlagen" msgid "Swap space in MB." msgstr "Tausche Platz in MB." msgid "" "Swift container and object to use for storing deployment data for the server " "resource. The parameter is a map value with the keys \"container\" and " "\"object\", and the values are the corresponding container and object names. " "The software_config_transport parameter must be set to POLL_TEMP_URL for " "swift to be used. If not specified, and software_config_transport is set to " "POLL_TEMP_URL, a container will be automatically created from the resource " "name, and the object name will be a generated uuid." msgstr "" "Swift-Container und -Objekt zum Speichern von Bereitstellungsdaten für die " "Serverressource Der Parameter ist ein Map-Wert mit den Schlüsseln \"Container" "\" und \"Objekt\", und die Werte sind die entsprechenden Container- und " "Objektnamen. Der Parameter \"software_config_transport\" muss auf " "\"POLL_TEMP_URL\" festgelegt sein, damit \"swift\" verwendet werden kann. " "Falls nicht angegeben, und software_config_transport auf POLL_TEMP_URL " "gesetzt ist, wird automatisch ein Container aus dem Ressourcennamen " "erstellt, und der Objektname wird eine generierte UUID sein." msgid "System SIGHUP signal received." msgstr "System SIGHUP-Signal empfangen." msgid "TCP or UDP port on which to listen for client traffic." msgstr "" "TCP- oder UDP-Port, auf dem der Client-Datenverkehr überwacht werden soll." msgid "TCP port on which the instance server is listening." 
msgstr "TCP-Port, auf dem der Instanzserver empfangsbereit ist." msgid "TCP port on which the pool member listens for requests or connections." msgstr "" "TCP-Port, auf dem das Pool-Mitglied auf Anforderungen oder Verbindungen " "wartet." msgid "" "TCP port on which to listen for client traffic that is associated with the " "vip address." msgstr "" "TCP-Port, auf dem der Client-Datenverkehr überwacht werden soll, der der VIP-" "Adresse zugeordnet ist." #, python-format msgid "TLD '%s' must not be all numeric." msgstr "TLD '%s' darf nicht numerisch sein." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of OpenStack service finder functions." msgstr "" "TTL in Sekunden für alle zwischengespeicherten Elemente in der dogpile.cache-" "Region, die zum Zwischenspeichern von OpenStack-Dienstsuchfunktionen " "verwendet werden." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of service extensions." msgstr "" "TTL in Sekunden für alle zwischengespeicherten Elemente in der dogpile.cache-" "Region, die zum Zwischenspeichern von Serviceerweiterungen verwendet werden." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of validation constraints." msgstr "" "TTL in Sekunden für alle zwischengespeicherten Elemente in der dogpile.cache-" "Region, die zum Zwischenspeichern von Validierungsbeschränkungen verwendet " "werden." msgid "Tag key." msgstr "Tag-Schlüssel" msgid "Tag value." msgstr "Tag-Wert" msgid "Tags from the server. Supported since client version 2.26." msgstr "Tags vom Server. Unterstützt seit Client Version 2.26." #, python-format msgid "Tags property should be a list for parameter: %s" msgstr "Die Tags-Eigenschaft sollte eine Liste für den Parameter %s sein" msgid "Tags to add to the image." msgstr "Tags zum Hinzufügen zum Abbild" msgid "Tags to attach to instance." msgstr "Tags zum Anhängen an Instanz." msgid "Tags to attach to the bucket." msgstr "Tags zum Anhängen an den Bucket." msgid "Tags to attach to this group." msgstr "Tags zum Anhängen an diese Gruppe." msgid "Task description." msgstr "Aufgabenbeschreibung." msgid "Task name." msgstr "Aufgabennname." msgid "" "Template default for how the server should receive the metadata required for " "software configuration. POLL_SERVER_CFN will allow calls to the cfn API " "action DescribeStackResource authenticated with the provided keypair " "(requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the " "Heat API resource-show using the provided keystone credentials (requires " "keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL " "will create and populate a Swift TempURL with metadata for polling (requires " "object-store endpoint which supports TempURL).ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Vorlagenstandard für die Art und Weise, wie der Server die für die " "Softwarekonfiguration erforderlichen Metadaten erhalten soll. " "POLL_SERVER_CFN erlaubt Aufrufe an die cfn-API-Aktion DescribeStackResource, " "die mit dem angegebenen Schlüsselpaar authentifiziert wurde (erfordert " "aktiviertes heat-api-cfn). POLL_SERVER_HEAT erlaubt Aufrufe an die Heat-API-" "Ressourcen-Show unter Verwendung der bereitgestellten Keystone-" "Anmeldeinformationen (erfordert die Keystone v3-API und konfigurierte " "stack_user_ *-Konfigurationsoptionen). 
POLL_TEMP_URL erstellt und füllt " "Swift TempURL mit Metadaten für die Abfrage (erfordert einen Objektspeicher-" "Endpunkt, der TempURL unterstützt) .ZAQAR_MESSAGE erstellt eine dedizierte " "Zaqar-Warteschlange und stellt die Metadaten für die Abfrage bereit." msgid "" "Template default for how the server should signal to heat with the " "deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN " "keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will " "create a Swift TempURL to be signaled via HTTP PUT (requires object-store " "endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat " "API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL " "will create a dedicated zaqar queue to be signaled using the provided " "keystone credentials." msgstr "" "Vorlagenstandard für die Art und Weise, in der der Server dem Heat mit den " "Bereitstellungsausgabewerten signalisieren soll. CFN_SIGNAL erlaubt einen " "HTTP-POST zu einer CFN-Schlüsselpaar-signierten URL (erfordert aktiviertes " "heat-api-cfn). TEMP_URL_SIGNAL erstellt eine Swift-TempURL, die über HTTP " "PUT signalisiert wird (erfordert einen Objektspeicher-Endpunkt, der TempURL " "unterstützt). HEAT_SIGNAL ermöglicht Aufrufe an das Heat-API-" "Ressourcensignal unter Verwendung der bereitgestellten Keystone-" "Anmeldeinformationen. ZAQAR_SIGNAL erstellt eine dedizierte Zaqar-" "Warteschlange, die mit den bereitgestellten Keystone-Anmeldeinformationen " "signalisiert wird." msgid "" "Template default for how the user_data should be formatted for the server. " "For HEAT_CFNTOOLS, the user_data is bundled as part of the heat-cfntools " "cloud-init boot configuration data. For RAW the user_data is passed to Nova " "unmodified. For SOFTWARE_CONFIG user_data is bundled as part of the software " "config data, and metadata is derived from any associated SoftwareDeployment " "resources." msgstr "" "Vorlagenstandard für die Art und Weise, wie die Benutzerdaten für den Server " "formatiert werden sollen. Für HEAT_CFNTOOLS wird die Benutzerdaten als Teil " "der Konfigurationsdaten für die Cloud-Initialisierung von heat-cfntools " "gebündelt. Für RAW werden die Benutzerdaten unverändert an Nova übergeben. " "Für SOFTWARE_CONFIG werden Benutzerdaten als Teil der " "Softwarekonfigurationsdaten gebündelt, und Metadaten werden von allen " "zugehörigen SoftwareDeployment-Ressourcen abgeleitet." #, python-format msgid "Template exceeds maximum allowed size (%s bytes)" msgstr "Vorlage überschreitet die maximal zulässige Größe (%s Bytes)" msgid "Template format version not found." msgstr "Vorlagenformatversion nicht gefunden." #, python-format msgid "" "Template size (%(actual_len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "Die Vorlagengröße (%(actual_len)s Bytes) überschreitet die maximal zulässige " "Größe (%(limit)s Bytes)." msgid "Template that specifies the stack to be created as a resource." msgstr "" "Vorlage, die den Stapel angibt, der als Ressource erstellt werden soll." #, python-format msgid "Template type is not supported: %s" msgstr "Der Vorlagentyp wird nicht unterstützt: %s" msgid "Template version was not provided" msgstr "Vorlagenversion wurde nicht bereitgestellt" #, python-format msgid "Template with version %s not found" msgstr "Vorlage mit Version %s nicht gefunden" msgid "TemplateBody or TemplateUrl were not given." msgstr "TemplateBody oder TemplateUrl wurden nicht angegeben." msgid "Tenant owning the health monitor." 
msgstr "Mandant, der den Gesundheitsmonitor besitzt." msgid "Tenant owning the pool member." msgstr "Mandant, der das Poolmitglied besitzt." msgid "Tenant owning the pool." msgstr "Mandant, der den Pool besitzt." msgid "Tenant owning the port." msgstr "Mandant, der den Port besitzt." msgid "Tenant owning the router." msgstr "Mandant, der den Router besitzt." msgid "Tenant owning the subnet." msgstr "Mandant, der das Subnetz besitzt." #, python-format msgid "Testing message %(text)s" msgstr "Testnachricht %(text)s" #, python-format msgid "The \"%(arg)s\" argument to \"%(fn_name)s\" must be a map" msgstr "Das Argument \"%(arg)s\" für \"%(fn_name)s\" muss eine Map sein" #, python-format msgid "The \"%(arg)s\" argument to \"%(fn_name)s\" must be a string" msgstr "" "Das Argument \"%(arg)s\" zu \"%(fn_name)s\" muss eine Zeichenfolge sein" #, python-format msgid "The \"%(hook)s\" hook is not defined on %(resource)s" msgstr "Der \"%(hook)s\" Hook ist nicht auf %(resource)s definiert" #, python-format msgid "The \"expression\" argument to %s must contain a string." msgstr "Das Argument \"expression\" für %s muss eine Zeichenfolge enthalten." #, python-format msgid "The \"for_each\" argument to \"%s\" must contain a map" msgstr "Das Argument \"for_each\" für \"%s\" muss eine Map enthalten" #, python-format msgid "The %(entity)s (%(name)s) could not be found." msgstr "Die %(entity)s (%(name)s) konnte nicht gefunden werden." #, python-format msgid "The %s must be provided for each parameter group." msgstr "Die %s müssen für jede Parametergruppe angegeben werden." #, python-format msgid "The %s of parameter group should be a list." msgstr "Die %s der Parametergruppe sollte eine Liste sein." #, python-format msgid "The %s parameter must be assigned to one parameter group only." msgstr "Der Parameter %s muss nur einer Parametergruppe zugewiesen werden." #, python-format msgid "The %s should be a list." msgstr "Das %s sollte eine Liste sein." msgid "The API paste config file to use." msgstr "Die zu verwendende API-Einfügekonfigurationsdatei." msgid "The AWS Access Key ID needs a subscription for the service" msgstr "Die AWS Access Key ID benötigt ein Abonnement für den Dienst" msgid "The Availability Zone where the specified instance is launched." msgstr "Die Availability Zone, in der die angegebene Instanz gestartet wird." msgid "The Availability Zones in which to create the load balancer." msgstr "" "Die Verfügbarkeitszonen, in denen der Lastenausgleich erstellt werden soll." msgid "The CIDR." msgstr "Die CIDR." msgid "The Container Orchestration Engine for cluster." msgstr "Die Container Orchestration Engine für Cluster." msgid "The DNS assigned to this port." msgstr "Der diesem Port zugewiesene DNS." msgid "The DNS name for the LoadBalancer." msgstr "Der DNS-Name für den LoadBalancer." msgid "The DNS name of the specified bucket." msgstr "Der DNS-Name des angegebenen Buckets." msgid "The DNS nameserver address." msgstr "Die DNS-Nameserveradresse." msgid "" "The HARestarter resource type has been removed. Existing stacks containing " "HARestarter resources can still be used, but the HARestarter resource will " "be a placeholder that does nothing." msgstr "" "Der HARestarter-Ressourcentyp wurde entfernt. Vorhandene Stapel, die " "HARestarter-Ressourcen enthalten, können weiterhin verwendet werden, aber " "die HARestarter-Ressource ist ein Platzhalter, der nichts tut." 
msgid "" "The HARestarter resource type is deprecated and will be removed in a future " "release of Heat, once it has support for auto-healing any type of resource. " "Note that HARestarter does *not* actually restart servers - it deletes and " "then recreates them. It also does the same to all dependent resources, and " "may therefore exhibit unexpected and undesirable behaviour. Instead, use the " "mark-unhealthy API to mark a resource as needing replacement, and then a " "stack update to perform the replacement while respecting the dependencies " "and not deleting them unnecessarily." msgstr "" "Der HARestarter-Ressourcentyp ist veraltet und wird in einer zukünftigen " "Version von Heat entfernt, sobald er die automatische Heilung eines " "beliebigen Ressourcentyps unterstützt. Beachten Sie, dass HARtestarter " "Server * * * nicht wirklich neu startet - es löscht sie und erstellt sie " "neu. Es macht auch dasselbe für alle abhängigen Ressourcen und kann daher " "unerwartetes und unerwünschtes Verhalten zeigen. Verwenden Sie stattdessen " "die API \"mark-unhealthy\", um eine Ressource als Ersatz zu markieren, und " "dann eine Stapelaktualisierung, um die Ersetzung durchzuführen, wobei die " "Abhängigkeiten berücksichtigt und nicht unnötigerweise gelöscht werden." msgid "The HTTP method used for requests by the monitor of type HTTP." msgstr "" "Die HTTP-Methode, die für Anforderungen des Monitors vom Typ HTTP verwendet " "wird." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health." msgstr "" "Der HTTP-Pfad, der in der HTTP-Anforderung verwendet wird, die vom Monitor " "zum Testen eines Mitgliedsstatus verwendet wird." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health. A valid value is a string the begins with a forward slash (/)." msgstr "" "Der HTTP-Pfad, der in der HTTP-Anforderung verwendet wird, die vom Monitor " "zum Testen eines Mitgliedsstatus verwendet wird. Ein gültiger Wert ist eine " "Zeichenfolge, die mit einem Schrägstrich (/) beginnt." msgid "" "The HTTP status codes expected in response from the member to declare it " "healthy. Specify one of the following values: a single value, such as 200. a " "list, such as 200, 202. a range, such as 200-204." msgstr "" "Die HTTP-Statuscodes, die von dem Mitglied erwartet werden, um sie als " "fehlerfrei zu deklarieren. Geben Sie einen der folgenden Werte an: einen " "einzelnen Wert, z. B. 200. Eine Liste, z. B. 200, 202. Einen Bereich, z. B. " "200-204." msgid "" "The ID of an existing instance to use to create the Auto Scaling group. If " "specify this property, will create the group use an existing instance " "instead of a launch configuration." msgstr "" "Die ID einer vorhandenen Instanz, die zum Erstellen der Auto Scaling-Gruppe " "verwendet werden soll. Wenn Sie diese Eigenschaft angeben, erstellt die " "Gruppe eine vorhandene Instanz anstelle einer Startkonfiguration." msgid "" "The ID of an existing instance you want to use to create the launch " "configuration. All properties are derived from the instance with the " "exception of BlockDeviceMapping." msgstr "" "Die ID einer vorhandenen Instanz, die Sie zum Erstellen der " "Startkonfiguration verwenden möchten. Alle Eigenschaften werden von der " "Instanz mit Ausnahme von BlockDeviceMapping abgeleitet." msgid "The ID of the Keystone project containing the queue." msgstr "Die ID des Keystone-Projekts, das die Warteschlange enthält." msgid "The ID of the attached network." 
msgstr "Die ID des verbundenen Netzwerks." msgid "The ID of the firewall policy that this firewall is associated with." msgstr "Die ID der Firewall-Richtlinie, der diese Firewall zugeordnet ist." msgid "" "The ID of the hosted zone name that is associated with the LoadBalancer." msgstr "" "Die ID des Namens der gehosteten Zone, die dem LoadBalancer zugeordnet ist." msgid "The ID of the image to create a volume from." msgstr "Die ID des Abbildes, aus dem ein Datenträger erstellt werden soll." msgid "The ID of the image to create the volume from." msgstr "Die ID des Abbildes, aus dem der Datenträger erstellt werden soll." msgid "The ID of the instance to which the volume attaches." msgstr "Die ID der Instanz, an die das Volume angehängt wird." msgid "The ID of the load balancing pool." msgstr "Die ID des Lastverteilungspools." msgid "The ID of the pool to which the pool member belongs." msgstr "Die ID des Pools, zu dem der Pool-Member gehört." msgid "The ID of the server to which the volume attaches." msgstr "Die ID des Servers, an den der Datenträger angehängt wird." msgid "The ID of the snapshot to create a volume from." msgstr "Die ID des Snapshots, aus dem ein Datenträger erstellt werden soll." msgid "" "The ID of the tenant which will own the network. Only administrative users " "can set the tenant identifier; this cannot be changed using authorization " "policies." msgstr "" "Die ID des Mandanten, dem das Netzwerk gehört. Nur administrative Benutzer " "können die Mandantenkennung festlegen. Dies kann nicht mithilfe von " "Autorisierungsrichtlinien geändert werden." msgid "" "The ID of the tenant who owns the Load Balancer. Only administrative users " "can specify a tenant ID other than their own." msgstr "" "Die ID des Mandanten, dem Load Balancer gehört. Nur administrative Benutzer " "können eine andere Mandanten-ID als ihre eigene angeben." msgid "The ID of the tenant who owns the listener." msgstr "Die ID des Mandanten, dem der Listener gehört." msgid "" "The ID of the tenant who owns the network. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "Die ID des Mandanten, dem das Netzwerk gehört. Nur administrative Benutzer " "können eine andere Mandanten-ID als ihre eigene angeben." msgid "" "The ID of the tenant who owns the subnet pool. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "Die ID des Mandanten, dem der Subnetzpool gehört. Nur administrative " "Benutzer können eine andere Mandanten-ID als ihre eigene angeben." msgid "The ID of the volume to be attached." msgstr "Die ID des hinzuzufügenden Volumes." msgid "" "The ID of the volume to boot from. Only one of volume_id or snapshot_id " "should be provided." msgstr "" "Die ID des Datenträgers, von dem gebootet werden soll. Nur eine von Volume-" "ID oder Snapshot-ID sollte bereitgestellt werden." msgid "The ID or name of the cinder volume mount to the container." msgstr "" "Die ID oder der Name des Cinder-Datenträgers wird an den Container angehängt." msgid "The ID or name of the flavor to boot onto." msgstr "Die ID oder der Name der zu startenden Variante." msgid "The ID or name of the image to boot with." msgstr "Die ID oder der Name des Images, mit dem gebootet werden soll." msgid "The ID or name of the image to create a volume from." msgstr "" "Die ID oder der Name des Abbildes, aus dem ein Datenträger erstellt werden " "soll." msgid "" "The IDs of the DHCP agent to schedule the network. 
Note that the default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "Die IDs des DHCP-Agenten zum Planen des Netzwerks. Beachten Sie, dass die " "Standardrichtlinieneinstellung in Neutron die Verwendung dieser Eigenschaft " "auf administrative Benutzer beschränkt." msgid "The IP address of the pool member." msgstr "Die IP-Adresse des Pool-Mitglieds." msgid "The IP version, which is 4 or 6." msgstr "Die IP-Version, die 4 oder 6 ist." #, python-format msgid "The Parameter (%(key)s) was not defined in template." msgstr "Der Parameter (%(key)s) wurde nicht in der Vorlage definiert." #, python-format msgid "The Parameter (%(key)s) was not provided." msgstr "Der Parameter (%(key)s) wurde nicht angegeben." msgid "The QoS policy ID attached to this network." msgstr "Die an dieses Netzwerk angehängte QoS-Richtlinien-ID." msgid "The QoS policy ID attached to this port." msgstr "Die an diesen Port angehängte QoS-Richtlinien-ID." #, python-format msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect." msgstr "Das referenzierte Attribut (%(resource)s %(key)s) ist falsch." #, python-format msgid "The Resource %s requires replacement." msgstr "Die Ressource %s muss ersetzt werden." #, python-format msgid "" "The Resource (%(resource_name)s) could not be found in Stack %(stack_name)s." msgstr "" "Die Ressource (%(resource_name)s) wurde nicht in Stack %(stack_name)s " "gefunden." #, python-format msgid "The Resource (%(resource_name)s) is not available." msgstr "Die Ressource (%(resource_name)s) ist nicht verfügbar." #, python-format msgid "The Snapshot (%(snapshot)s) for Stack (%(stack)s) could not be found." msgstr "" "Die Schattenkopie (%(snapshot)s) für Stack (%(stack)s) konnte nicht gefunden " "werden." #, python-format msgid "The Stack (%(stack_name)s) already exists." msgstr "Der Stapel (%(stack_name)s) existiert bereits." msgid "The Template must be a JSON or YAML document." msgstr "Die Vorlage muss ein JSON- oder YAML-Dokument sein." msgid "The URI to the container." msgstr "Der URI für den Container." msgid "The URI to the created container." msgstr "Der URI für den erstellten Container." msgid "The URI to the created secret." msgstr "Der URI zum erstellten Geheimnis." msgid "The URI to the order." msgstr "Der URI zur Bestellung." msgid "The URIs to container consumers." msgstr "Die URIs für Containerverbraucher." msgid "The URIs to secrets stored in container." msgstr "Die URIs für im Container gespeicherte Secrets." msgid "" "The URL of a template that specifies the stack to be created as a resource." msgstr "" "Die URL einer Vorlage, die den Stapel angibt, der als Ressource erstellt " "werden soll." msgid "The URL of the container." msgstr "Die URL des Containers" msgid "The UUID of the cluster template." msgstr "Die UUID der Cluster-Vorlage." msgid "The VIP address of the LoadBalancer." msgstr "Die VIP-Adresse des LoadBalancers." msgid "The VIP port of the LoadBalancer." msgstr "Der VIP-Port des LoadBalancers." msgid "The VIP subnet of the LoadBalancer." msgstr "Das VIP-Subnetz des LoadBalancers." msgid "The action or operation requested is invalid" msgstr "Die angeforderte Aktion oder Operation ist ungültig" msgid "The action to be executed when the receiver is signaled." msgstr "" "Die Aktion, die ausgeführt werden soll, wenn der Empfänger signalisiert wird." msgid "The administrative state of the firewall." msgstr "Der Verwaltungsstatus der Firewall." msgid "The administrative state of the health monitor." 
msgstr "Der administrative Status des Gesundheitsmonitors." msgid "The administrative state of the ipsec site connection." msgstr "Der Verwaltungsstatus der IPSec-Standortverbindung." msgid "The administrative state of the policy." msgstr "Der Verwaltungsstatus der Richtlinie." msgid "The administrative state of the pool member." msgstr "Der Verwaltungsstatus des Pool-Mitglieds." msgid "The administrative state of the router." msgstr "Der Verwaltungsstatus des Routers." msgid "The administrative state of the rule." msgstr "Der Verwaltungsstatus der Regel." msgid "The administrative state of the vpn service." msgstr "Der Verwaltungsstatus des VPN-Dienstes." msgid "The administrative state of this Load Balancer." msgstr "Der Verwaltungsstatus dieses Load Balancers." msgid "The administrative state of this health monitor." msgstr "Der administrative Status dieses Gesundheitsmonitors." msgid "The administrative state of this listener." msgstr "Der administrative Status dieses Listeners." msgid "The administrative state of this pool member." msgstr "Der Verwaltungsstatus dieses Poolmitglieds." msgid "The administrative state of this pool." msgstr "Der administrative Status dieses Pools." msgid "The administrative state of this port." msgstr "Der administrative Zustand dieses Hafens." msgid "The administrative state of this vip." msgstr "Der administrative Zustand dieses vip." msgid "The administrative status of the network." msgstr "Der administrative Status des Netzwerks." msgid "The administrator password for the server." msgstr "Das Administratorkennwort für den Server." msgid "The aggregation method to compare to the threshold." msgstr "Die Aggregationsmethode zum Vergleichen mit dem Schwellenwert." msgid "The algorithm type used to generate the secret." msgstr "Der zum Generieren des Geheimnisses verwendete Algorithmus." msgid "" "The algorithm type used to generate the secret. Required for key and " "asymmetric types of order." msgstr "" "Der zum Generieren des Geheimnisses verwendete Algorithmus. Erforderlich für " "Schlüssel- und asymmetrische Auftragsarten." msgid "The algorithm used to distribute load between the members of the pool." msgstr "" "Der Algorithmus, der zum Verteilen der Last zwischen den Mitgliedern des " "Pools verwendet wird." msgid "The allocated address of this IP." msgstr "Die zugewiesene Adresse dieser IP." msgid "" "The amount of time in seconds after an error has occurred that tasks may " "continue to run before being cancelled." msgstr "" "Die Zeit in Sekunden nach dem ein Fehler aufgetreten ist, dass Tasks weiter " "ausgeführt werden können, bevor sie abgebrochen werden." msgid "" "The approximate interval, in seconds, between health checks of an individual " "instance." msgstr "" "Das ungefähre Intervall in Sekunden zwischen den Statusprüfungen einer " "einzelnen Instanz." #, python-format msgid "The arguments to \"%s\" must be a map" msgstr "Die Argumente für \"%s\" müssen eine Map sein" msgid "The authentication hash algorithm of the ipsec policy." msgstr "Der Authentifizierungshashalgorithmus der IPSec-Richtlinie." msgid "The authentication hash algorithm used by the ike policy." msgstr "" "Der Hash-Authentifizierungsalgorithmus, der von der IKE-Richtlinie verwendet " "wird." msgid "The authentication mode of the ipsec site connection." msgstr "Der Authentifizierungsmodus der IPSec-Standortverbindung." msgid "The availability zone in which the volume is located." msgstr "Die Verfügbarkeitszone, in der sich der Datenträger befindet." 
msgid "The availability zone in which the volume will be created." msgstr "Die Verfügbarkeitszone, in der das Volume erstellt wird." msgid "The availability zone of shared filesystem." msgstr "Die Verfügbarkeitszone des freigegebenen Dateisystems." msgid "The bay name." msgstr "Der Name der Bay." msgid "The bit-length of the secret." msgstr "Die Bitlänge des Geheimnisses." msgid "" "The bit-length of the secret. Required for key and asymmetric types of order." msgstr "" "Die Bitlänge des Geheimnisses. Erforderlich für Schlüssel- und asymmetrische " "Auftragsarten." #, python-format msgid "The bucket you tried to delete is not empty (%s)." msgstr "Der Bucket, den Sie löschen wollten, ist nicht leer (%s)." msgid "The can be used to unmap a defined device." msgstr "" "Mit dieser Option kann die Zuordnung eines definierten Geräts aufgehoben " "werden." msgid "The certificate or AWS Key ID provided does not exist" msgstr "Das angegebene Zertifikat oder die AWS-Schlüssel-ID existiert nicht" msgid "The channel for receiving signals." msgstr "Der Kanal für den Empfang von Signalen." msgid "" "The class that provides encryption support. For example, nova.volume." "encryptors.luks.LuksEncryptor." msgstr "" "Die Klasse, die die Verschlüsselung unterstützt. Zum Beispiel nova.volume." "encryptors.luks.LuksEncryptor." #, python-format msgid "The client (%(client_name)s) is not available." msgstr "Der Client (%(client_name)s) ist nicht verfügbar." msgid "The cluster ID this node belongs to." msgstr "Die Cluster-ID, zu der dieser Knoten gehört." msgid "The cluster name." msgstr "Der Clustername." msgid "The cluster template name." msgstr "Der Name der Clustervorlage." msgid "The common name of the operating system distribution in lowercase." msgstr "Der allgemeine Name der Betriebssystemverteilung in Kleinbuchstaben." msgid "The config value of the software config." msgstr "Der Konfigurationswert der Softwarekonfiguration." msgid "" "The configuration tool used to actually apply the configuration on a server. " "This string property has to be understood by in-instance tools running " "inside deployed servers." msgstr "" "Das Konfigurationstool, mit dem die Konfiguration tatsächlich auf einem " "Server angewendet wird. Diese Zeichenfolgeneigenschaft muss von " "instanziierten Tools verstanden werden, die auf bereitgestellten Servern " "ausgeführt werden." msgid "The container memory size in MiB." msgstr "Die Speichergröße des Containers in MiB." msgid "The content of the CSR. Only for certificate orders." msgstr "Der Inhalt des CSR. Nur für Zertifikatsbestellungen." #, python-format msgid "" "The contents of personality file \"%(path)s\" is larger than the maximum " "allowed personality file size (%(max_size)s bytes)." msgstr "" "Der Inhalt der Personality-Datei \"%(path)s\" ist größer als die maximal " "zulässige Größe der Personality-Datei (%(max_size)s Bytes)." msgid "The current size of AutoscalingResourceGroup." msgstr "Die aktuelle Größe von AutoscalingResourceGroup." msgid "The current status of the volume." msgstr "Der aktuelle Status des Datenträgers." msgid "The current update policy will result in stack update timeout." msgstr "" "Die aktuelle Update-Richtlinie führt zu einem Zeitlimit für den Stack-Update." msgid "The custom discovery url for node discovery." msgstr "Die benutzerdefinierte Erkennungs-URL für die Knotenerkennung." msgid "" "The database instance was created, but heat failed to set up the datastore. 
" "If a database instance is in the FAILED state, it should be deleted and a " "new one should be created." msgstr "" "Die Datenbankinstanz wurde erstellt, aber Heat konnte den Datenspeicher " "nicht einrichten. Wenn sich eine Datenbankinstanz im Status FAILED befindet, " "sollte sie gelöscht und eine neue Instanz erstellt werden." msgid "" "The dead peer detection protocol configuration of the ipsec site connection." msgstr "" "Die Dead-Peer-Erkennungsprotokollkonfiguration der IPSec-Standortverbindung." msgid "The decrypted secret payload." msgstr "Die entschlüsselte geheime Nutzlast." msgid "" "The default cloud-init user set up for each image (e.g. \"ubuntu\" for " "Ubuntu 12.04+, \"fedora\" for Fedora 19+ and \"cloud-user\" for CentOS/RHEL " "6.5)." msgstr "" "Der standardmäßige cloud-init-Benutzer wird für jedes Abbild eingerichtet (z." "B. \"ubuntu\" für Ubuntu 12.04+, \"Fedora\" für Fedora 19+ und \"cloud-user" "\" für CentOS/RHEL 6.5)." #, python-format msgid "The definition of condition \"%(cd)s\" is invalid: %(definition)s" msgstr "Die Definition der Bedingung \"%(cd)s\" ist ungültig: %(definition)s" msgid "The description for the QoS policy." msgstr "Die Beschreibung für die QoS-Richtlinie." msgid "The description of the ike policy." msgstr "Die Beschreibung der Ike-Richtlinie." msgid "The description of the ipsec policy." msgstr "Die Beschreibung der IPSec-Richtlinie." msgid "The description of the ipsec site connection." msgstr "Die Beschreibung der IPSec-Standortverbindung." msgid "The description of the vpn service." msgstr "Die Beschreibung des VPN-Dienstes." msgid "The destination for static route." msgstr "Das Ziel für die statische Route." msgid "The details of physical object." msgstr "Die Details des physischen Objekts." msgid "The device id for the network gateway." msgstr "Die Geräte-ID für das Netzwerk-Gateway." msgid "" "The device where the volume is exposed on the instance. This assignment may " "not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "Das Gerät, auf dem das Volume in der Instanz verfügbar gemacht wird. Diese " "Zuweisung wird möglicherweise nicht beachtet und es wird empfohlen, dass der " "Pfad /dev/disk/by-id/virtio- stattdessen verwendet werden." msgid "" "The direction in which metering rule is applied, either ingress or egress." msgstr "" "Die Richtung, in der die Messregel angewendet wird, entweder Eingang oder " "Ausgang." msgid "The direction in which metering rule is applied." msgstr "Die Richtung, in der die Messregel angewendet wird." msgid "" "The direction in which the security group rule is applied. For a compute " "instance, an ingress security group rule matches traffic that is incoming " "(ingress) for that instance. An egress rule is applied to traffic leaving " "the instance." msgstr "" "Die Richtung, in der die Sicherheitsgruppenregel angewendet wird. Bei einer " "Computerinstanz stimmt eine Ingress-Sicherheitsgruppenregel den eingehenden " "Datenverkehr(Ingress) für diese Instanz ab. Eine Egress-Regel wird auf den " "Verkehr angewendet, der die Instanz verlässt." msgid "The directory to search for environment files." msgstr "Das Verzeichnis für die Suche nach Umgebungsdateien." msgid "The directory to search for template files." msgstr "Das Verzeichnis, in dem nach Vorlagendateien gesucht werden soll." msgid "The ebs volume to attach to the instance." msgstr "Das ebs-Volume, das an die Instanz angehängt werden soll." msgid "The encapsulation mode of the ipsec policy." 
msgstr "Der Kapselungsmodus der IPSec-Richtlinie." msgid "The encoding format used to provide the payload data." msgstr "" "Das Codierungsformat, das zum Bereitstellen der Nutzdaten verwendet wird." msgid "The encryption algorithm of the ipsec policy." msgstr "Der Verschlüsselungsalgorithmus der IPSec-Richtlinie." msgid "The encryption algorithm or mode. For example, aes-xts-plain64." msgstr "" "Der Verschlüsselungsalgorithmus oder -modus Zum Beispiel aes-xts-plain64." msgid "The encryption algorithm used by the ike policy." msgstr "" "Der Verschlüsselungsalgorithmus, der von der IKE-Richtlinie verwendet wird." msgid "The endpoint URL of COE API exposed to end-users." msgstr "" "Die Endpunkt-URL der COE-API, die Endbenutzern zur Verfügung gestellt wird." msgid "The environment is not a valid YAML mapping data type." msgstr "Die Umgebung ist kein gültiger YAML-Zuordnungsdatentyp." msgid "The environment variables." msgstr "Die Umgebungsvariablen." msgid "The expiration date for the secret in ISO-8601 format." msgstr "Das Ablaufdatum für das Geheimnis im ISO-8601-Format." msgid "The expression to generate the \"value\" attribute." msgstr "Der Ausdruck zum Generieren des Attributs \"value\"." msgid "The external load balancer port number." msgstr "Die Portnummer des externen Lastenausgleichsmoduls." msgid "The external neutron network name or UUID to attach the Cluster." msgstr "" "Der Name des externen Neutronennetzwerks oder die UUID zum Anhängen des " "Clusters." msgid "The extra specs key and value pairs of the volume type." msgstr "Die zusätzlichen Schlüssel- und Wertepaare des Datenträgertyps." msgid "The filesystem path inside the container." msgstr "Der Dateisystempfad innerhalb des Containers." msgid "The fixed neutron network name or UUID to attach the Cluster." msgstr "" "Der Name des festen Neutron-Netzwerks oder UUID zum Anhängen des Clusters." msgid "The fixed neutron subnet name or UUID to attach the Cluster." msgstr "" "Der Name des festen Neutronen-Subnetzes oder UUID zum Anhängen des Clusters." msgid "The flavor to use." msgstr "Die zu verwendende Variante." #, python-format msgid "The following parameters are immutable and may not be updated: %(keys)s" msgstr "" "Die folgenden Parameter sind unveränderbar und können nicht aktualisiert " "werden: %(keys)s" #, python-format msgid "The following params were not found in the template: %s" msgstr "Die folgenden Parameter wurden in der Vorlage nicht gefunden: %s" msgid "" "The format of the local ephemeral block device. If no format is specified, " "uses default value, defined in nova configuration file." msgstr "" "Das Format des lokalen ephemeren Blockgeräts. Wenn kein Format angegeben " "ist, wird der Standardwert verwendet, der in der nova-Konfigurationsdatei " "definiert ist." #, python-format msgid "The function \"%s\" is invalid in this context" msgstr "Die Funktion \"%s\" ist in diesem Zusammenhang ungültig" #, python-format msgid "The function %s is not supported in this version of HOT." msgstr "Die Funktion %s wird in dieser Version von HOT nicht unterstützt." msgid "" "The gateway IP address. Set to any of [ null | ~ | \"\" ] to create/update a " "subnet without a gateway. If omitted when creation, neutron will assign the " "first free IP address within the subnet to the gateway automatically. If " "remove this from template when update, the old gateway IP address will be " "detached." msgstr "" "Die Gateway-IP-Adresse. 
Setzen Sie den Wert auf einen von [ null | ~ | \"\" ], um ein " "Subnetz ohne Gateway zu erstellen oder zu aktualisieren. Wird diese bei der " "Erstellung weggelassen, weist Neutron dem Gateway die erste freie IP-Adresse " "innerhalb des Subnetzes automatisch zu. Wenn Sie diese beim Aktualisieren " "aus der Vorlage entfernen, wird die alte Gateway-IP-Adresse getrennt." #, python-format msgid "The grouped parameter %s does not reference a valid parameter." msgstr "" "Der gruppierte Parameter %s verweist nicht auf einen gültigen Parameter." msgid "The host from the container URL." msgstr "Der Host von der Container-URL." msgid "The host from which a user is allowed to connect to the database." msgstr "" "Der Host, von dem ein Benutzer eine Verbindung mit der Datenbank herstellen " "darf." msgid "The hostname of the container." msgstr "Der Hostname des Containers." msgid "The http_proxy address to use for nodes in cluster." msgstr "" "Die http_proxy-Adresse, die für Knoten im Cluster verwendet werden soll." msgid "The https_proxy address to use for nodes in cluster." msgstr "" "Die https_proxy-Adresse, die für Knoten im Cluster verwendet werden soll." msgid "" "The id for L2 segment on the external side of the network gateway. Must be " "specified when using vlan." msgstr "" "Die ID für das L2-Segment auf der externen Seite des Netzwerk-Gateways. Muss " "angegeben werden, wenn vlan verwendet wird." msgid "The identifier of the CA to use." msgstr "Die ID der zu verwendenden Zertifizierungsstelle." msgid "The image ID. Glance will generate a UUID if not specified." msgstr "" "Die Abbild-ID. Glance generiert eine UUID, wenn sie nicht angegeben wird." msgid "The image driver to use to pull container image." msgstr "" "Der Abbild-Treiber, der zum Ziehen des Container-Abbildes verwendet wird." msgid "The image name or UUID to use as a base image for cluster." msgstr "" "Der Abbild-Name oder die UUID, die als Basis-Abbild für den Cluster " "verwendet werden soll." msgid "The initiator of the ipsec site connection." msgstr "Der Initiator der IPSec-Standortverbindung." msgid "The input string to be stored." msgstr "Die zu speichernde Eingabezeichenfolge." msgid "The interface name for the network gateway." msgstr "Der Schnittstellenname für das Netzwerk-Gateway." msgid "The internal network to connect on the network gateway." msgstr "Das interne Netzwerk für die Verbindung auf dem Netzwerk-Gateway." msgid "The last operation for the database instance failed due to an error." msgstr "" "Die letzte Operation für die Datenbankinstanz ist aufgrund eines Fehlers " "fehlgeschlagen." #, python-format msgid "The length must be at least %(min)s." msgstr "Die Länge muss mindestens %(min)s betragen." #, python-format msgid "The length must be in the range %(min)s to %(max)s." msgstr "Die Länge muss im Bereich %(min)s bis %(max)s liegen." #, python-format msgid "The length must be no greater than %(max)s." msgstr "Die Länge darf nicht größer als %(max)s sein." msgid "The length of time, in minutes, to wait for the nested stack creation." msgstr "" "Die Zeitdauer in Minuten, die auf die Erstellung des verschachtelten Stapels " "gewartet wird." msgid "" "The list of HTTP status codes expected in response from the member to " "declare it healthy." msgstr "" "Die Liste der HTTP-Statuscodes, die in der Antwort des Mitglieds erwartet " "werden, um es als fehlerfrei zu deklarieren." msgid "The list of Nova server IDs load balanced." msgstr "Die Liste der Nova-Server-IDs, deren Last verteilt wird." msgid "The list of Pools related to this monitor."
msgstr "Die Liste der Pools im Zusammenhang mit diesem Monitor." msgid "The list of attachments of the volume." msgstr "Die Liste der Anlagen des Datenträgers." msgid "" "The list of configurations for the different lifecycle actions of the " "represented software component." msgstr "" "Die Liste der Konfigurationen für die verschiedenen Lebenszyklusaktionen der " "dargestellten Softwarekomponente." msgid "The list of instance IDs load balanced." msgstr "Die Liste der Instanz-IDs wird ausgeglichen angezeigt." msgid "" "The list of resource types to create. This list may contain type names or " "aliases defined in the resource registry. Specific template names are not " "supported." msgstr "" "Die Liste der zu erstellenden Ressourcentypen. Diese Liste kann Typnamen " "oder Aliase enthalten, die in der Ressourcenregistrierung definiert sind. " "Bestimmte Vorlagennamen werden nicht unterstützt." msgid "The list of tags to associate with the volume." msgstr "Die Liste der Tags, die dem Volume zugeordnet werden sollen." msgid "The load balancer transport protocol to use." msgstr "Das zu verwendende Lastenausgleichs-Transportprotokoll." msgid "" "The location where the volume is exposed on the instance. This assignment " "may not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "Der Speicherort, an dem der Datenträger in der Instanz verfügbar gemacht " "wird. Diese Zuweisung wird möglicherweise nicht beachtet und es wird " "empfohlen, dass der Pfad /dev/disk/by-id/virtio- stattdessen " "verwendet werden." msgid "The manually assigned alternative public IPv4 address of the server." msgstr "" "Die manuell zugewiesene alternative öffentliche IPv4-Adresse des Servers." msgid "The manually assigned alternative public IPv6 address of the server." msgstr "" "Die manuell zugewiesene alternative öffentliche IPv6-Adresse des Servers." msgid "The maximum number of connections per second allowed for the vip." msgstr "" "Die maximale Anzahl von Verbindungen pro Sekunde, die für den vip zulässig " "sind." msgid "" "The maximum number of connections permitted for this load balancer. Defaults " "to -1, which is infinite." msgstr "" "Die maximale Anzahl der für diesen Lastenausgleich zulässigen Verbindungen. " "Der Standardwert ist -1, was unendlich ist." msgid "The maximum number of deployments to replace at once." msgstr "" "Die maximale Anzahl von Bereitstellungen, die gleichzeitig ersetzt werden " "müssen." msgid "" "The maximum number of elements in collection expression can take for its " "evaluation." msgstr "" "Die maximale Anzahl von Elementen im Auflistungsausdruck kann für ihre " "Auswertung verwendet werden." msgid "The maximum number of resources to create at once." msgstr "Die maximale Anzahl der gleichzeitig zu erstellenden Ressourcen." msgid "The maximum number of resources to replace at once." msgstr "" "Die maximale Anzahl der Ressourcen, die gleichzeitig ersetzt werden müssen." msgid "" "The maximum number of seconds to wait for the resource to signal completion. " "Once the timeout is reached, creation of the signal resource will fail." msgstr "" "Die maximale Anzahl von Sekunden, die gewartet werden muss, bis die " "Ressource die Beendigung signalisiert. Sobald das Timeout erreicht ist, wird " "die Erstellung der Signalressource fehlschlagen." msgid "" "The maximum port number in the range that is matched by the security group " "rule. The port_range_min attribute constrains the port_range_max attribute. 
" "If the protocol is ICMP, this value must be an ICMP code." msgstr "" "Die maximale Portnummer in dem Bereich, der der Sicherheitsgruppenregel " "entspricht. Das Attribut port_range_min beschränkt das Attribut " "port_range_max. Wenn das Protokoll ICMP ist, muss dieser Wert ein ICMP-Code " "sein." msgid "" "The maximum port number in the range that is matched by the security group " "rule. The port_range_min attribute constrains the port_range_max attribute. " "If the protocol is ICMP, this value must be an ICMP type." msgstr "" "Die maximale Portnummer in dem Bereich, der der Sicherheitsgruppenregel " "entspricht. Das Attribut port_range_min beschränkt das Attribut " "port_range_max. Wenn das Protokoll ICMP ist, muss dieser Wert ein ICMP-Typ " "sein." msgid "" "The maximum size of memory in bytes that expression can take for its " "evaluation." msgstr "" "Die maximale Größe des Speichers in Bytes, die der Ausdruck für seine " "Auswertung benötigt." msgid "" "The maximum transmission unit size (in bytes) of the ipsec site connection." msgstr "" "Die maximale Übertragungseinheitsgröße (in Byte) der IPSec-" "Standortverbindung." msgid "The maximum transmission unit size(in bytes) for the network." msgstr "Die maximale Übertragungseinheitsgröße (in Byte) für das Netzwerk." msgid "The metering label ID to associate with this metering rule." msgstr "Die ID des Messgeräts, die dieser Messregel zugeordnet werden soll." msgid "" "The metric dimensions to match to the alarm dimensions. One or more " "dimension key names separated by a comma." msgstr "" "Die metrischen Dimensionen, die den Alarmdimensionen entsprechen. Ein oder " "mehrere Dimensionsschlüsselnamen, getrennt durch ein Komma." msgid "" "The minimum number of characters from this character class that will be in " "the generated string." msgstr "" "Die Mindestanzahl von Zeichen aus dieser Zeichenklasse, die in der " "generierten Zeichenfolge enthalten sein wird." msgid "" "The minimum number of characters from this sequence that will be in the " "generated string." msgstr "" "Die Mindestanzahl von Zeichen aus dieser Sequenz, die in der generierten " "Zeichenfolge enthalten sind." #, python-format msgid "The minimum number of condition arguments to \"%s\" is 2." msgstr "Die Mindestanzahl von Bedingungsargumenten für \"%s\" ist 2." msgid "" "The minimum number of resources in service while rolling updates are being " "executed." msgstr "" "Die Mindestanzahl von Ressourcen, die während des Rollbacks von " "Aktualisierungen in Betrieb sind, wird ausgeführt." msgid "" "The minimum port number in the range that is matched by the security group " "rule. If the protocol is TCP or UDP, this value must be less than or equal " "to the value of the port_range_max attribute. If the protocol is ICMP, this " "value must be an ICMP type." msgstr "" "Die Mindestportnummer in dem Bereich, der der Sicherheitsgruppenregel " "entspricht. Wenn das Protokoll TCP oder UDP ist, muss dieser Wert kleiner " "oder gleich dem Wert des Attributs port_range_max sein. Wenn das Protokoll " "ICMP ist, muss dieser Wert ein ICMP-Typ sein." msgid "" "The minimum port number must be less than or equal to the maximum port " "number." msgstr "" "Die minimale Portnummer muss kleiner oder gleich der maximalen Portnummer " "sein." msgid "" "The minimum time in milliseconds between regular connections of the member." msgstr "" "Die Mindestzeit in Millisekunden zwischen regulären Verbindungen des " "Mitglieds." msgid "The name for the QoS policy." 
msgstr "Der Name für die QoS-Richtlinie." msgid "The name for the address scope." msgstr "Der Name für den Adressbereich." msgid "The name of senlin cluster to attach to." msgstr "Der Name des Senlin-Clusters zum Anhängen." msgid "The name of the SSH keypair to load into the cluster nodes." msgstr "" "Der Name des SSH-Schlüsselpaars, das in die Clusterknoten geladen werden " "soll." msgid "The name of the domain." msgstr "Der Name der Domäne." msgid "" "The name of the driver used for instantiating container networks. By " "default, Magnum will choose the pre-configured network driver based on COE " "type." msgstr "" "Der Name des Treibers, der zum Instanziieren von Containernetzwerken " "verwendet wird. Standardmäßig wählt Magnum den vorkonfigurierten " "Netzwerktreiber basierend auf dem COE-Typ." msgid "The name of the error document." msgstr "Der Name des Fehlerdokuments." msgid "The name of the hosted zone that is associated with the LoadBalancer." msgstr "Der Name der gehosteten Zone, die dem LoadBalancer zugeordnet ist." msgid "The name of the ike policy." msgstr "Der Name der Ike-Richtlinie." msgid "The name of the index document." msgstr "Der Name des Indexdokuments." msgid "The name of the ipsec policy." msgstr "Der Name der IPSec-Richtlinie." msgid "The name of the ipsec site connection." msgstr "Der Name der IPSec-Standortverbindung." msgid "The name of the key pair." msgstr "Der Name des Schlüsselpaars." msgid "The name of the keypair." msgstr "Der Name des Schlüsselpaars." msgid "" "The name of the keypair. If not presented, use keypair in cluster template." msgstr "" "Der Name des Schlüsselpaars. Falls nicht angezeigt, verwenden Sie das " "Schlüsselpaar in der Cluster-Vorlage." msgid "The name of the network gateway." msgstr "Der Name des Netzwerk-Gateways." msgid "The name of the network." msgstr "Der Name des Netzwerks." msgid "The name of the router." msgstr "Der Name des Routers." msgid "The name of the subnet." msgstr "Der Name des Subnetzes." msgid "The name of the user that the new key will belong to." msgstr "Der Name des Benutzers, zu dem der neue Schlüssel gehört." msgid "" "The name of the virtual device. The name must be in the form ephemeralX " "where X is a number starting from zero (0); for example, ephemeral0." msgstr "" "Der Name des virtuellen Geräts. Der Name muss in der Form ephemeralX sein, " "wobei X eine Zahl ist, die bei Null(0) beginnt; zum Beispiel, ephemeral0." msgid "The name of the vpn service." msgstr "Der Name des VPN-Dienstes." msgid "The name or ID of QoS policy to attach to this network." msgstr "" "Der Name oder die ID der QoS-Richtlinie, die an dieses Netzwerk angehängt " "werden soll." msgid "The name or ID of QoS policy to attach to this port." msgstr "" "Der Name oder die ID der QoS-Richtlinie, die an diesen Port angehängt werden " "soll." msgid "The name or ID of parent of this keystone project in hierarchy." msgstr "" "Der Name oder die ID des übergeordneten Elements dieses Keystone-Projekts in " "der Hierarchie." msgid "The name or ID of target cluster." msgstr "Der Name oder die ID des Zielclusters." msgid "The name or ID of the bay model." msgstr "Der Name oder die ID des Bay-Modells." msgid "The name or ID of the cluster template." msgstr "Der Name oder die ID der Clustervorlage." msgid "The name or ID of the policy." msgstr "Der Name oder die ID der Richtlinie." msgid "The name or ID of the subnet on which to allocate the VIP address." msgstr "" "Der Name oder die ID des Subnetzes, dem die VIP-Adresse zugewiesen werden " "soll." 
msgid "The name or ID of the subnet pool." msgstr "Der Name oder die ID des Subnetzpools." msgid "The name or id of the Senlin profile." msgstr "Der Name oder die ID des Senlin-Profils." msgid "The name/ID of the segment to associate." msgstr "Der Name/die ID des zu verknüpfenden Segments." msgid "The name/id of network to associate with this segment." msgstr "" "Der Name/die ID des Netzwerks, das diesem Segment zugeordnet werden soll." msgid "The negotiation mode of the ike policy." msgstr "Der Verhandlungsmodus der IKE-Richtlinie." msgid "The next hop for the destination." msgstr "Der nächste Hop für das Ziel." msgid "The node count for this bay." msgstr "Die Anzahl der Knoten für diese Bay." msgid "The node count for this cluster." msgstr "Die Knotenzahl für diesen Cluster." msgid "The notification methods to use when an alarm state is ALARM." msgstr "" "Die Benachrichtigungsmethoden, die verwendet werden sollen, wenn ein " "Alarmzustand ALARM ist." msgid "The notification methods to use when an alarm state is OK." msgstr "" "Die Benachrichtigungsmethoden, die verwendet werden sollen, wenn ein " "Alarmzustand OK ist." msgid "The notification methods to use when an alarm state is UNDETERMINED." msgstr "" "Die Benachrichtigungsmethoden, die verwendet werden, wenn der Status eines " "Alarms UNDETERMINED lautet." msgid "The nova flavor name or UUID to use when launching the cluster." msgstr "" "Der Name oder die UUID der Nova-Version, die beim Starten des Clusters " "verwendet werden soll." msgid "" "The nova flavor name or UUID to use when launching the master node of the " "cluster." msgstr "" "Der Name oder die UUID des Nova-Flavors, die beim Starten des Master-Knotens " "des Clusters verwendet werden soll." msgid "The number of I/O operations per second that the volume supports." msgstr "" "Die Anzahl der I/O-Vorgänge pro Sekunde, die der Datenträger unterstützt." msgid "The number of bytes stored in the container." msgstr "Die Anzahl der im Container gespeicherten Bytes." msgid "" "The number of consecutive health probe failures required before moving the " "instance to the unhealthy state" msgstr "" "Die Anzahl aufeinanderfolgender fehlgeschlagener Health Probe-Fehler, bevor " "die Instanz in den fehlerhaften Zustand versetzt wird" msgid "" "The number of consecutive health probe successes required before moving the " "instance to the healthy state." msgstr "" "Die Anzahl aufeinanderfolgender Health Probe-Erfolge, die erforderlich sind, " "bevor die Instanz in den Status \"Healthy\" versetzt wird." msgid "The number of master nodes for this bay." msgstr "Die Anzahl der Master-Knoten für diese Bay." msgid "The number of master nodes for this cluster." msgstr "Die Anzahl der Master-Knoten für diesen Cluster." msgid "The number of objects stored in the container." msgstr "Die Anzahl der im Container gespeicherten Objekte." msgid "The number of replicas to be created." msgstr "Die Anzahl der zu erstellenden Replikate." msgid "The number of resources to create." msgstr "Die Anzahl der zu erstellenden Ressourcen." msgid "The number of seconds to wait between batches of updates." msgstr "" "Die Anzahl der Sekunden, die zwischen den Update-Batches gewartet werden." msgid "The number of seconds to wait between batches." msgstr "" "Die Anzahl der Sekunden, die zwischen den Batches gewartet werden müssen." msgid "The number of seconds to wait for the cluster actions." msgstr "Die Anzahl der Sekunden, die auf die Clusteraktionen gewartet werden." 
msgid "" "The number of seconds to wait for the correct number of signals to arrive." msgstr "" "Die Anzahl der Sekunden, die auf das Eintreffen der richtigen Anzahl von " "Signalen gewartet werden muss." msgid "The number of servers that will serve as master for the cluster." msgstr "Die Anzahl der Server, die als Master für den Cluster dienen." msgid "The number of servers that will serve as node in the cluster." msgstr "Die Anzahl der Server, die als Knoten im Cluster dienen." msgid "" "The number of success signals that must be received before the stack " "creation process continues." msgstr "" "Die Anzahl der Erfolgssignale, die empfangen werden müssen, bevor der Stapel-" "Erstellungsprozess fortgesetzt wird." msgid "The number of virtual cpus." msgstr "Die Anzahl der virtuellen CPUs." msgid "The operator indicates how to combine the rules." msgstr "Der Operator gibt an, wie die Regeln zu kombinieren sind." msgid "" "The optional public key. This allows users to supply the public key from a " "pre-existing key pair. If not supplied, a new key pair will be generated." msgstr "" "Der optionale öffentliche Schlüssel. Dadurch können Benutzer den " "öffentlichen Schlüssel aus einem bereits vorhandenen Schlüsselpaar " "bereitstellen. Wenn nicht angegeben, wird ein neues Schlüsselpaar generiert." msgid "" "The os-collect-config configuration for the server's local agent to be " "configured to connect to Heat to retrieve deployment data." msgstr "" "Die Konfiguration \"os-collect-config\" für den lokalen Agenten des Servers " "muss so konfiguriert sein, dass er sich mit Heat verbindet, um die " "Bereitstellungsdaten abzurufen." msgid "" "The owner tenant ID of the address scope. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "Die Eigentümermandanten-ID des Adressbereichs. Nur administrative Benutzer " "können eine andere Mandanten-ID als ihre eigene angeben." msgid "The owner tenant ID of this QoS policy." msgstr "Die Eigentümermandanten-ID dieser QoS-Richtlinie." msgid "The owner tenant ID of this rule." msgstr "Die Eigentümermandanten-ID dieser Regel." msgid "" "The owner tenant ID. Only required if the caller has an administrative role " "and wants to create a RBAC for another tenant." msgstr "" "Die Eigentümermandanten-ID. Nur erforderlich, wenn der Aufrufer eine " "administrative Rolle hat und eine RBAC für einen anderen Mandanten erstellen " "möchte." msgid "The parameters passed to action when the receiver is signaled." msgstr "" "Die Parameter werden an die Aktion übergeben, wenn der Empfänger " "signalisiert wird." msgid "The parent URL of the container." msgstr "Die übergeordnete URL des Containers." msgid "" "The passphrase of the created key. Can be set only for asymmetric type of " "order." msgstr "" "Die Passphrase des erstellten Schlüssels. Kann nur für asymmetrische " "Auftragsart eingestellt werden." msgid "The payload of the created certificate, if available." msgstr "Die Nutzlast des erstellten Zertifikats, falls verfügbar." msgid "The payload of the created intermediates, if available." msgstr "Die Nutzlast der erstellten Zwischenprodukte, falls verfügbar." msgid "The payload of the created private key, if available." msgstr "Die Nutzlast des erstellten privaten Schlüssels, falls verfügbar." msgid "The payload of the created public key, if available." msgstr "Die Nutzlast des erstellten öffentlichen Schlüssels, falls verfügbar." msgid "The perfect forward secrecy of the ike policy." msgstr "Die perfekte Geheimhaltung der Ike-Politik." 
msgid "The perfect forward secrecy of the ipsec policy." msgstr "Die perfekte Geheimhaltung der IPSec-Richtlinie." msgid "" "The period property can only be specified against a Webhook Notification " "type." msgstr "" "Die period-Eigenschaft kann nur für einen Webhook-Benachrichtigungstyp " "angegeben werden." #, python-format msgid "The personality property may not contain greater than %s entries." msgstr "" "Die Personality-Eigenschaft darf keine Einträge größer als %s enthalten." msgid "The physical mechanism by which the virtual network is implemented." msgstr "" "Der physische Mechanismus, mit dem das virtuelle Netzwerk implementiert wird." #, python-format msgid "The physical resource for (%(name)s) exists." msgstr "Die physische Ressource für (%(name)s) existiert." msgid "" "The policy which determines if the image should be pulled prior to starting " "the container." msgstr "" "Die Richtlinie, die festlegt, ob das Image vor dem Starten des Containers " "abgerufen werden soll." msgid "The port being checked." msgstr "Der Port wird überprüft." msgid "The port id, either subnet or port_id should be specified." msgstr "Die Port-ID, entweder Subnetz oder Port-ID, sollte angegeben werden." msgid "The port on which the server will listen." msgstr "Der Port, an dem der Server zuhören soll." msgid "The port, either subnet or port should be specified." msgstr "Der Port, entweder Subnetz oder Port, sollte angegeben werden." msgid "The pre-shared key string of the ipsec site connection." msgstr "Die Pre-Shared Key-Zeichenfolge der IPSec-Standortverbindung." msgid "The private key if it has been saved." msgstr "Der private Schlüssel, wenn er gespeichert wurde." msgid "The profile of certificate to use." msgstr "Das Profil des zu verwendenden Zertifikats." msgid "" "The protocol that is matched by the security group rule. Allowed values are " "ah, dccp, egp, esp, gre, icmp, icmpv6, igmp, ipv6-encap, ipv6-frag, ipv6-" "icmp, ipv6-nonxt, ipv6-opts, ipv6-route, ospf, pgm, rsvp, sctp, tcp, udp, " "udplite, vrrp and integer representations [0-255]." msgstr "" "Das Protokoll, dem die Sicherheitsgruppenregel entspricht. Erlaubte Werte " "sind: ah, dccp, egp, esp, gre, icmp, icmpv6, igmp, ipv6-encap, ipv6-frag, " "ipv6-icmp, ipv6-nonxt, ipv6-opts, ipv6-route, ospf, pgm, rsvp, sctp , tcp, " "udp, udplite, vrrp und Integer-Repräsentationen [0-255]." msgid "" "The protocol that is matched by the security group rule. Valid values " "include tcp, udp, and icmp." msgstr "" "Das Protokoll, dem die Sicherheitsgruppenregel entspricht. Zu den gültigen " "Werten gehören tcp, udp und icmp." msgid "The public key." msgstr "Der öffentliche Schlüssel." msgid "The query string is malformed" msgstr "Die Abfragezeichenfolge ist fehlerhaft" msgid "The query to filter the metrics." msgstr "Die Abfrage zum Filtern der Messwerte." msgid "" "The random string generated by this resource. This value is also available " "by referencing the resource." msgstr "" "Die zufällige Zeichenfolge, die von dieser Ressource generiert wird. Dieser " "Wert ist auch verfügbar, indem auf die Ressource verwiesen wird." msgid "The reason of cluster current status." msgstr "Der Grund für den aktuellen Clusterstatus." msgid "The reference UUID of orchestration stack for this COE cluster." msgstr "Die Referenz-UUID des Orchestrierungsstapels für diesen COE-Cluster." msgid "The reference to a LaunchConfiguration resource." msgstr "Der Verweis auf eine LaunchConfiguration-Ressource." 
msgid "" "The remote IP prefix (CIDR) to be associated with this security group rule." msgstr "" "Das Remote-IP-Präfix (CIDR), das dieser Sicherheitsgruppenregel zugeordnet " "werden soll." msgid "The remote branch router identity of the ipsec site connection." msgstr "" "Die Identität des entfernten Zweigstellenrouters der IPSec-" "Standortverbindung." msgid "The remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "" "Die entfernte IPv4-Adresse des Zweigstellenrouters oder die IPv6-Adresse " "oder der FQDN." msgid "" "The remote group ID to be associated with this security group rule. If no " "value is specified then this rule will use this security group for the " "remote_group_id. The remote mode parameter must be set to \"remote_group_id" "\"." msgstr "" "Die Remote-Gruppen-ID, die dieser Sicherheitsgruppenregel zugeordnet werden " "soll. Wenn kein Wert angegeben ist, verwendet diese Regel diese " "Sicherheitsgruppe für die remote_group_id. Der Remote-Modus-Parameter muss " "auf \"remote_group_id\" eingestellt sein." msgid "" "The remote group name or ID to be associated with this security group rule." msgstr "" "Der Name oder die ID der Remote-Gruppe, die dieser Sicherheitsgruppenregel " "zugeordnet werden soll." msgid "The remote subnet(s) in CIDR format of the ipsec site connection." msgstr "Die Remote-Subnetze im CIDR-Format der IPSec-Standortverbindung." msgid "The request is missing an action or operation parameter" msgstr "Der Anfrage fehlt ein Aktions- oder Operationsparameter" msgid "The request processing has failed due to an internal error" msgstr "" "Die Anfrageverarbeitung ist aufgrund eines internen Fehlers fehlgeschlagen" msgid "The request signature does not conform to AWS standards" msgstr "Die Anforderungssignatur entspricht nicht den AWS-Standards" msgid "" "The request signature we calculated does not match the signature you provided" msgstr "" "Die von uns berechnete Anforderungssignatur stimmt nicht mit der von Ihnen " "angegebenen Signatur überein" msgid "The requested action is not yet implemented" msgstr "Die angeforderte Aktion ist noch nicht implementiert" #, python-format msgid "The resource %(res)s could not perform scaling action: %(reason)s" msgstr "" "Die Ressource %(res)s konnte keine Skalierungsaktion ausführen: %(reason)s" #, python-format msgid "The resource %s is already being updated." msgstr "Die Ressource %s wird bereits aktualisiert." msgid "The resource href of the queue." msgstr "Die Ressource href der Warteschlange." msgid "The route mode of the ipsec site connection." msgstr "Der Routenmodus der IPSec-Site-Verbindung." msgid "The router id." msgstr "Die Router-ID" msgid "The router to which the vpn service will be inserted." msgstr "Der Router, in den der VPN-Dienst eingefügt wird." msgid "The router." msgstr "Der Router." msgid "The safety assessment lifetime configuration for the ike policy." msgstr "" "Die Lebensdauerkonfiguration für die Sicherheitsbewertung für die IKE-" "Richtlinie." msgid "The safety assessment lifetime configuration of the ipsec policy." msgstr "" "Die Lebensdauerkonfiguration der Sicherheitsbewertung der ipsec-Richtlinie." msgid "" "The security group that you can use as part of your inbound rules for your " "LoadBalancer's back-end instances." msgstr "" "Die Sicherheitsgruppe, die Sie als Teil Ihrer eingehenden Regeln für die " "Back-End-Instanzen Ihres LoadBalancers verwenden können." msgid "" "The segmentation ID on which the subport network is presented to the " "instance." 
msgstr "" "Die Segmentierungs-ID, unter der das Subport-Netzwerk der Instanz " "präsentiert wird." msgid "" "The server could not comply with the request since it is either malformed or " "otherwise incorrect." msgstr "" "Der Server konnte der Anforderung nicht entsprechen, da sie entweder " "fehlerhaft oder auf andere Weise falsch ist." msgid "" "The servers to slave from to get DNS information and is mandatory for zone " "type SECONDARY, otherwise ignored." msgstr "" "Die Server, von denen aus die DNS-Informationen abgerufen werden sollen, " "sind obligatorisch für den Zonentyp SECONDARY, andernfalls ignoriert." msgid "The set of parameters passed to this nested stack." msgstr "" "Die Menge der Parameter, die an diesen verschachtelten Stapel übergeben " "wurden." msgid "The size in GB of the docker volume." msgstr "Die Größe in GB des Docker-Datenträgers." msgid "The size of AutoScalingGroup can not be less than zero" msgstr "Die Größe der AutoScalingGroup darf nicht kleiner als Null sein" msgid "The size of the cinder volume to create." msgstr "Die Größe des zu erstellenden Cinder-Datenträgers." msgid "The size of the local ephemeral block device, in GB." msgstr "Die Größe des lokalen ephemeren Blockgeräts in GB." msgid "" "The size of the prefix to allocate when the cidr or prefixlen attributes are " "not specified while creating a subnet." msgstr "" "Die Größe des Präfixes, das zugewiesen werden soll, wenn die Attribute cidr " "oder prefixlen beim Erstellen eines Subnetzes nicht angegeben werden." msgid "The size of the swap, in MB." msgstr "Die Größe des Swaps in MB." msgid "The size of the volume in GB." msgstr "Die Größe des Volumes in GB." #, python-format msgid "" "The size of the volume in GB. On update only increase in size is supported. " "This property is required unless property %(backup)s or %(vol)s or " "%(snapshot)s is specified." msgstr "" "Die Größe des Datenträgers in GB. Beim Update wird nur die Vergrößerung " "unterstützt. Diese Eigenschaft ist erforderlich, es sei denn, Eigenschaft " "%(backup)s oder %(vol)s oder %(snapshot)s ist angegeben." msgid "" "The size of the volume, in GB. It is safe to leave this blank and have the " "Compute service infer the size." msgstr "" "Die Größe des Datenträgers in GB. Es ist sicher, dieses Feld leer zu lassen " "und den Compute-Service auf die Größe schließen zu lassen." msgid "" "The size of the volume, in GB. Must be equal or greater than the size of the " "snapshot. It is safe to leave this blank and have the Compute service infer " "the size." msgstr "" "Die Größe des Volumes in GB. Muss gleich oder größer als die Größe der " "Schattenkopie sein. Es ist sicher, dieses Feld leer zu lassen und den " "Compute-Service auf die Größe schließen zu lassen." msgid "The snapshot the volume was created from, if any." msgstr "Die Schattenkope, aus dem der Datenträger erstellt wurde." msgid "The source of certificate request." msgstr "Die Quelle der Zertifikatsanforderung." msgid "" "The special string values of network, auto: means either a network that is " "already available to the project will be used, or if one does not exist, " "will be automatically created for the project; none: means no networking " "will be allocated for the created server. Supported by Nova API since " "version \"2.37\". This property can not be used with other network keys." 
msgstr "" "Die speziellen String-Werte für Netzwerk, auto: bedeutet, dass entweder ein " "Netzwerk verwendet wird, das bereits für das Projekt verfügbar ist, oder " "wenn ein solches nicht existiert, wird es automatisch für das Projekt " "erstellt. none: bedeutet, dass kein Netzwerk für den erstellten Server " "zugewiesen wird. Unterstützt von Nova API seit Version \"2.37\". Diese " "Eigenschaft kann nicht mit anderen Netzwerkschlüsseln verwendet werden." #, python-format msgid "The specified reference \"%(resource)s\" (in %(key)s) is incorrect." msgstr "Der angegebene Verweis \"%(resource)s\" (in %(key)s) ist falsch." msgid "The specs key and value pairs of the QoS." msgstr "Die Schlüssel- und Wertepaare der QoS." msgid "" "The stack or some of its nested stacks are in progress. Note, that all the " "stacks should be in COMPLETE state in order to be migrated." msgstr "" "Der Stapel oder einige seiner verschachtelten Stapel sind in Bearbeitung. " "Beachten Sie, dass alle Stapel im Status COMPLETE sein müssen, um migriert " "zu werden." msgid "The start and end addresses for the allocation pools." msgstr "Die Start- und Endadressen für die Zuordnungspools." msgid "The status for this COE cluster." msgstr "Der Status für diesen COE-Cluster." msgid "The status of the container." msgstr "Der Status des Containers." msgid "The status of the firewall." msgstr "Der Status der Firewall." msgid "The status of the ipsec site connection." msgstr "Der Status der ipsec-Standortverbindung." msgid "The status of the network." msgstr "Der Status des Netzwerks." msgid "The status of the order." msgstr "Der Status der Bestellung." msgid "The status of the port." msgstr "Der Status des Ports." msgid "The status of the router." msgstr "Der Status des Routers." msgid "The status of the secret." msgstr "Der Status des Geheimnisses." msgid "The status of the vpn service." msgstr "Der Status des VPN-Dienstes." msgid "" "The string that was stored. This value is also available by referencing the " "resource." msgstr "" "Die Zeichenfolge, die gespeichert wurde. Dieser Wert ist auch verfügbar, " "indem auf die Ressource verwiesen wird." msgid "The subject of the certificate request." msgstr "Der Betreff der Zertifikatsanforderung." msgid "" "The subnet for the port on which the members of the pool will be connected." msgstr "" "Das Subnetz für den Port, mit dem die Mitglieder des Pools verbunden werden." msgid "The subnet, either subnet or port should be specified." msgstr "Das Subnetz, entweder Subnetz oder Port, sollte angegeben werden." #, python-format msgid "The subscriber type of must be one of: %s." msgstr "Der Abonnententyp muss einer der folgenden Werte sein: %s." msgid "The tag key name." msgstr "Der Tag-Schlüsselname." msgid "The tag value." msgstr "Der Tag-Wert." msgid "The tags to be added to the network." msgstr "Die Tags, die dem Netzwerk hinzugefügt werden sollen." msgid "The tags to be added to the port." msgstr "Die Tags, die dem Port hinzugefügt werden sollen." msgid "The tags to be added to the router." msgstr "Die Tags, die dem Router hinzugefügt werden sollen." msgid "The tags to be added to the subnet." msgstr "Die Tags, die dem Subnetz hinzugefügt werden sollen." msgid "The tags to be added to the subnetpool." msgstr "Die Tags, die zum Subnetpool hinzugefügt werden sollen." msgid "The template is not a JSON object or YAML mapping." msgstr "Die Vorlage ist kein JSON-Objekt oder YAML-Mapping." 
#, python-format msgid "The template section is invalid: %(section)s" msgstr "Der Vorlagenbereich ist ungültig: %(section)s" #, python-format msgid "The template version is invalid: %(explanation)s" msgstr "Die Vorlagenversion ist ungültig: %(explanation)s" msgid "The tenant owning this floating IP." msgstr "Der Mandant besitzt diese Floating-IP." msgid "The tenant owning this network." msgstr "Der Mandant, der dieses Netzwerk besitzt." msgid "The time range in seconds." msgstr "Der Zeitbereich in Sekunden." msgid "The timeout for cluster creation in minutes." msgstr "Das Zeitlimit für die Clustererstellung in Minuten." msgid "The timestamp indicating volume creation." msgstr "Der Zeitstempel, der die Erstellung des Datenträgers anzeigt." msgid "The transform protocol of the ipsec policy." msgstr "Das Transformationsprotokoll der ipsec-Richtlinie." msgid "The type of profile." msgstr "Die Art des Profils." msgid "The type of senlin policy." msgstr "Die Art der Senlin-Politik." msgid "The type of the \"value\" property." msgstr "Der Typ der Eigenschaft \"value\"." msgid "The type of the attribute." msgstr "Der Typ des Attributs." msgid "The type of the certificate request." msgstr "Der Typ der Zertifikatsanforderung." msgid "The type of the order." msgstr "Die Art der Bestellung." msgid "The type of the resources in the group." msgstr "Der Typ der Ressourcen in der Gruppe." msgid "The type of the secret." msgstr "Die Art des Geheimnisses." msgid "The type of the volume mapping to a backend, if any." msgstr "Der Typ der Datenträgerzuordnung zu einem Back-End, falls vorhanden." msgid "The type/format the secret data is provided in." msgstr "Der Typ/das Format, in dem die geheimen Daten bereitgestellt werden." msgid "The type/mode of the algorithm associated with the secret information." msgstr "" "Der Typ/Modus des Algorithmus, der der geheimen Information zugeordnet ist." msgid "The unencrypted plain text of the secret." msgstr "Der unverschlüsselte Klartext des Geheimnisses." msgid "" "The unique identifier of ike policy associated with the ipsec site " "connection." msgstr "" "Der eindeutige Bezeichner der ike-Richtlinie, die der ipsec-" "Standortverbindung zugeordnet ist." msgid "" "The unique identifier of ipsec policy associated with the ipsec site " "connection." msgstr "" "Der eindeutige Bezeichner der ipsec-Richtlinie, die der ipsec-" "Standortverbindung zugeordnet ist." msgid "" "The unique identifier of the router to which the vpn service was inserted." msgstr "Die eindeutige ID des Routers, in den der VPN-Dienst eingefügt wurde." msgid "" "The unique identifier of the subnet in which the vpn service was created." msgstr "Die eindeutige ID des Subnetzes, in dem der VPN-Dienst erstellt wurde." msgid "The unique identifier of the tenant owning the ike policy." msgstr "Die eindeutige ID des Mandanten, der die IKE-Richtlinie besitzt." msgid "The unique identifier of the tenant owning the ipsec policy." msgstr "Die eindeutige ID des Mandanten, der die ipsec-Richtlinie besitzt." msgid "The unique identifier of the tenant owning the ipsec site connection." msgstr "" "Der eindeutige Bezeichner des Mandanten, der die ipsec-Standortverbindung " "besitzt." msgid "The unique identifier of the tenant owning the vpn service." msgstr "Die eindeutige ID des Mandanten, der den VPN-Dienst besitzt." msgid "" "The unique identifier of vpn service associated with the ipsec site " "connection." msgstr "" "Der eindeutige Bezeichner des VPN-Dienstes, der mit der IPSec-" "Standortverbindung verknüpft ist." 
msgid "" "The user-defined region ID and should unique to the OpenStack deployment. " "While creating the region, heat will url encode this ID." msgstr "" "Die benutzerdefinierte Regions-ID und sollte für die OpenStack-" "Bereitstellung eindeutig sein. Während der Erstellung der Region wird die " "URL diese ID codieren." msgid "" "The value for the socket option TCP_KEEPIDLE. This is the time in seconds " "that the connection must be idle before TCP starts sending keepalive probes." msgstr "" "Der Wert für die Socketoption TCP_KEEPIDLE. Dies ist die Zeit in Sekunden, " "in der die Verbindung inaktiv sein muss, bevor TCP mit dem Senden von " "Keepalive-Tests beginnt." msgid "" "The value generated by this resource's properties \"value\" expression, with " "type determined from the properties \"type\"." msgstr "" "Der Wert, der vom Ausdruck \"value\" der Eigenschaft \"resources\" generiert " "wird, wobei der Typ aus den Eigenschaften \"type\" ermittelt wird." #, python-format msgid "The value must be a multiple of %(step)s with an offset of %(offset)s." msgstr "" "Der Wert muss ein Vielfaches von %(step)s mit einem Offset von %(offset)s " "sein." #, python-format msgid "The value must be at least %(min)s." msgstr "Der Wert muss mindestens %(min)s betragen." #, python-format msgid "The value must be in the range %(min)s to %(max)s." msgstr "Der Wert muss im Bereich %(min)s bis %(max)s liegen." #, python-format msgid "The value must be no greater than %(max)s." msgstr "Der Wert darf nicht größer als %(max)s sein." msgid "The values must be specified." msgstr "Die Werte müssen angegeben werden." #, python-format msgid "The values of the \"for_each\" argument to \"%s\" must be lists" msgstr "Die Werte des Arguments \"for_each\" für \"%s\" müssen Listen sein" #, python-format msgid "The values of the \"for_each\" argument to \"%s\" must be lists or maps" msgstr "" "Die Werte des Arguments \"for_each\" für \"%s\" müssen Listen oder Maps sein" msgid "The version of the ike policy." msgstr "Die Version der Ike-Richtlinie." msgid "" "The vnic type to be bound on the neutron port. To support SR-IOV PCI " "passthrough networking, you can request that the neutron port to be realized " "as normal (virtual nic), direct (pci passthrough), or macvtap (virtual " "interface with a tap-like software interface). Note that this only works for " "Neutron deployments that support the bindings extension." msgstr "" "Der vnic-Typ, der an den Neutronen-Port gebunden werden soll. Zur " "Unterstützung von SR-IOV PCI Passthrough-Netzwerken können Sie anfordern, " "dass der Neutronen-Port als normal (virtuell nic), direkt (pci passthrough) " "oder macvtap (virtuelles Interface mit einer tapsähnlichen Software-" "Schnittstelle) realisiert wird. Beachten Sie, dass dies nur für Neutron-" "Bereitstellungen funktioniert, die die Bindungsverlängerung unterstützen." msgid "The volume driver name for instantiating container volume." msgstr "" "Der Name des Datenträger-Treibers für die Instanziierung des " "Containerdatenträgers." msgid "The volume type." msgstr "Der Datenträgertyp." msgid "The volume used as source, if any." msgstr "Der Datenträger, das als Quelle verwendet wird, falls vorhanden." msgid "The volume_id can be boot or non-boot device to the server." msgstr "" "Die volume_id kann ein Boot- oder Nicht-Boot-Gerät für den Server sein." msgid "The website endpoint for the specified bucket." msgstr "Der Website-Endpunkt für den angegebenen Bucket." msgid "The working directory for commands to run in." 
msgstr "Das Arbeitsverzeichnis für Befehle, in denen ausgeführt werden soll." #, python-format msgid "There is no rule %(rule)s. List of allowed rules is: %(rules)s." msgstr "" "Es gibt keine Regel %(rule)s. Liste der erlaubten Regeln ist: %(rules)s." msgid "" "There is no such option during 5.0.0, so need to make this attribute " "unsupported, otherwise error will raised." msgstr "" "Während 5.0.0 gibt es keine solche Option, daher muss dieses Attribut nicht " "unterstützt werden, da sonst ein Fehler auftritt." msgid "" "There is no such option during 5.0.0, so need to make this property " "unsupported while it not used." msgstr "" "Während 5.0.0 gibt es keine solche Option, daher muss diese Eigenschaft " "nicht unterstützt werden, solange sie nicht verwendet wird." #, python-format msgid "" "There was an error loading the definition of the global resource type " "%(type_name)s." msgstr "" "Beim Laden der Definition des globalen Ressourcentyps %(type_name)s ist ein " "Fehler aufgetreten." msgid "" "Theshold alarm relies on ceilometer-api and has been deprecated in aodh " "since Ocata. Use OS::Aodh::GnocchiAggregationByResourcesAlarm instead." msgstr "" "Der Schwellwertalarm beruht auf ceilometer-api und ist seit Ocata in aodh " "veraltet. Verwenden Sie stattdessen OS::Aodh::" "GnocchiAggregationByResourcesAlarm." msgid "This domain is enabled or disabled." msgstr "Diese Domain ist aktiviert oder deaktiviert." msgid "This endpoint is enabled or disabled." msgstr "Dieser Endpunkt ist aktiviert oder deaktiviert." msgid "This project is enabled or disabled." msgstr "Dieses Projekt ist aktiviert oder deaktiviert." msgid "This region is enabled or disabled." msgstr "Diese Region ist aktiviert oder deaktiviert." msgid "This service is enabled or disabled." msgstr "Dieser Dienst ist aktiviert oder deaktiviert." msgid "Threshold to evaluate against." msgstr "Schwelle, gegen die zu bewerten ist." msgid "Time To Live (Seconds) for the zone." msgstr "Lebenszeit (Sekunden) für die Zone." msgid "Time To Live (Seconds)." msgstr "Lebenszeit (Sekunden)." msgid "Time of the first execution in format \"YYYY-MM-DD HH:MM\"." msgstr "Uhrzeit der ersten Ausführung im Format \"JJJJ-MM-TT HH:MM\"." msgid "Time of the next execution in format \"YYYY-MM-DD HH:MM:SS\"." msgstr "Zeitpunkt der nächsten Ausführung im Format \"JJJJ-MM-TT HH:MM:SS\"." msgid "Time to live of the subscription in seconds." msgstr "Zeit, um das Abonnement in Sekunden zu leben." msgid "Time validity of the URL, in seconds. Default to one day." msgstr "Gültigkeit der URL in Sekunden Standard auf einen Tag." msgid "" "Timeout for client connections' socket operations. If an incoming connection " "is idle for this number of seconds it will be closed. A value of '0' means " "wait forever." msgstr "" "Timeout für Socket-Operationen von Clientverbindungen Wenn eine eingehende " "Verbindung für diese Anzahl von Sekunden inaktiv ist, wird sie geschlossen. " "Ein Wert von '0' bedeutet ewig warten." msgid "Timeout for creating the bay in minutes. Set to 0 for no timeout." msgstr "" "Timeout zum Erstellen der Bay in Minuten. Für kein Timeout auf 0 setzen." msgid "Timeout for creating the cluster in minutes. Set to 0 for no timeout." msgstr "" "Timeout zum Erstellen des Clusters in Minuten. Für kein Timeout auf 0 setzen." msgid "Timeout in seconds for stack action (ie. create or update)." msgstr "" "Timeout in Sekunden für Stack-Aktion (d.h. erstellen oder aktualisieren)." msgid "" "Timezone for the time constraint (eg. 
'Asia/Taipei', 'Europe/Amsterdam')." msgstr "" "Zeitzone für die Zeitbeschränkung (z.B. 'Asia/Taipei', 'Europe/Amsterdam')." msgid "" "Toggle to enable/disable caching when Orchestration Engine looks for other " "OpenStack service resources using name or id. Please note that the global " "toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use " "this feature." msgstr "" "Umschalten zum Aktivieren/Deaktivieren des Caching, wenn die Orchestrierungs-" "Engine anhand von Name oder ID nach anderen OpenStack-Dienstressourcen " "sucht. Bitte beachten Sie, dass der globale Schalter für oslo.cache " "(enabled=True in der Gruppe [cache]) aktiviert sein muss, um diese Funktion " "zu verwenden." msgid "" "Toggle to enable/disable caching when Orchestration Engine retrieves " "extensions from other OpenStack services. Please note that the global toggle " "for oslo.cache(enabled=True in [cache] group) must be enabled to use this " "feature." msgstr "" "Umschalten zum Aktivieren/Deaktivieren des Caching, wenn die Orchestrierungs-" "Engine Erweiterungen von anderen OpenStack-Diensten abruft. Bitte beachten " "Sie, dass der globale Schalter für oslo.cache (enabled=True in der Gruppe " "[cache]) aktiviert sein muss, um diese Funktion zu verwenden." msgid "" "Toggle to enable/disable caching when Orchestration Engine validates " "property constraints of stack.During property validation with constraints " "Orchestration Engine caches requests to other OpenStack services. Please " "note that the global toggle for oslo.cache(enabled=True in [cache] group) " "must be enabled to use this feature." msgstr "" "Umschalten zum Aktivieren/Deaktivieren des Caching, wenn die Orchestrierungs-" "Engine die Eigenschafteneinschränkungen von Stacks überprüft. Bei der " "Überprüfung der Eigenschaften mit Einschränkungen speichert die " "Orchestrierungs-Engine Anfragen an andere OpenStack-Dienste zwischen. Bitte " "beachten Sie, dass der globale Schalter für oslo.cache (enabled=True in der " "Gruppe [cache]) aktiviert sein muss, um diese Funktion zu verwenden." msgid "" "Token for stack-user which can be used for signalling handle when " "signal_transport is set to TOKEN_SIGNAL. None for all other signal " "transports." msgstr "" "Token für den Stapel-Benutzer, das für die Signalisierung des Handles " "verwendet werden kann, wenn signal_transport auf TOKEN_SIGNAL gesetzt ist. " "None für alle anderen Signaltransporte." msgid "" "Tokens are not needed for Swift TempURLs. This attribute is being kept for " "compatibility with the OS::Heat::WaitConditionHandle resource." msgstr "" "Token werden für Swift-TempURLs nicht benötigt. Dieses Attribut wird aus " "Gründen der Kompatibilität mit der Ressource OS::Heat::WaitConditionHandle " "beibehalten." msgid "Topic" msgstr "Thema" msgid "Transform protocol for the ipsec policy." msgstr "Transform-Protokoll für die IPSec-Richtlinie." msgid "Triggers UPDATE action execution even if input is unchanged." msgstr "" "Löst die Ausführung der UPDATE-Aktion aus, auch wenn die Eingabe unverändert " "ist." msgid "True if alarm evaluation/actioning is enabled." msgstr "True, wenn die Alarmauswertung/-aktion aktiviert ist." msgid "" "True if the system should remember a generated private key; False otherwise." msgstr "" "True, wenn das System sich einen generierten privaten Schlüssel merken " "soll; andernfalls False." msgid "Type of access that should be provided to guest." msgstr "Art des Zugangs, der dem Gast zur Verfügung gestellt werden sollte."
msgid "Type of adjustment (absolute or percentage)." msgstr "Art der Anpassung (absolut oder prozentual)" msgid "" "Type of endpoint in Identity service catalog to use for communication with " "the OpenStack service." msgstr "" "Endpunkttyp im Identitätsdienstekatalog, der für die Kommunikation mit dem " "OpenStack-Dienst verwendet wird." msgid "Type of keystone Service." msgstr "Art des Schlüsseldienstes." msgid "Type of network to associate with this segment." msgstr "Art des Netzwerks, das diesem Segment zugeordnet werden soll." msgid "Type of receiver." msgstr "Art des Empfängers." msgid "Type of the data source." msgstr "Art der Datenquelle" msgid "Type of the job." msgstr "Art des Jobs" msgid "Type of the notification." msgstr "Art der Benachrichtigung" msgid "Type of the object that RBAC policy affects." msgstr "Typ des Objekts, auf das sich die RBAC-Richtlinie auswirkt." msgid "Type of the value of the input." msgstr "Art des Wertes der Eingabe" msgid "Type of the value of the output." msgstr "Art des Wertes der Ausgabe" msgid "Type of the volume to create on Cinder backend." msgstr "Typ des Datenträgers, das auf dem Cinder-Backend erstellt werden soll." msgid "" "Type of zone. PRIMARY is controlled by Designate, SECONDARY zones are slaved " "from another DNS Server." msgstr "" "Art der Zone PRIMARY wird von Designate gesteuert, SECONDARY-Zonen werden " "von einem anderen DNS-Server verwaltet." msgid "" "URI of the subscriber which will be notified. Must be in the format: :" "." msgstr "" "URI des Teilnehmers, der benachrichtigt wird. Muss im Format sein: :" " ." #, python-format msgid "URL \"%s\" should not contain ':'" msgstr "URL \"%s\" sollte nicht \":\" enthalten" msgid "URL for API authentication" msgstr "URL für die API-Authentifizierung" msgid "URL for REDIRECT_TO_URL action type. This should be a valid URL string." msgstr "" "URL für den Aktionstyp REDIRECT_TO_URL Dies sollte eine gültige URL-" "Zeichenfolge sein." msgid "URL for the data source." msgstr "URL für die Datenquelle" msgid "" "URL for the job binary. Must be in the format swift:/// or " "internal-db://." msgstr "" "URL für die Job-Binärdatei Muss im Format swift:/// oder " "internal-db:// sein." msgid "" "URL of TempURL where resource will signal completion and optionally upload " "data." msgstr "" "URL von TempURL, wo die Ressource die Fertigstellung signalisieren und " "optional Daten hochladen kann." msgid "URL of keystone service endpoint." msgstr "URL des Schlüsseldienst-Endpunkts" msgid "URL of the Heat CloudWatch server." msgstr "URL des Heat CloudWatch Servers" msgid "" "URL of the Heat metadata server. NOTE: Setting this is only needed if you " "require instances to use a different endpoint than in the keystone catalog" msgstr "" "URL des Heat-Metadatenservers HINWEIS: Wenn Sie festlegen, dass Instanzen " "einen anderen Endpunkt als den Keystone-Katalog verwenden müssen, ist dies " "nur erforderlich" msgid "URL of the Heat waitcondition server." msgstr "URL des Heatcondition-Servers." msgid "" "URL where the data for this image already resides. For example, if the image " "data is stored in swift, you could specify \"swift://example.com/container/" "obj\"." msgstr "" "URL, auf der sich die Daten für dieses Abbild bereits befinden Wenn die " "Abbilddaten beispielsweise in Swift gespeichert sind, können Sie \"swift: //" "example.com/container/obj\" angeben." msgid "" "URLs of server's consoles. 
To get a specific console type, the requested " "type can be specified as parameter to the get_attr function, e.g. get_attr: " "[ , console_urls, novnc ]. Currently supported types are novnc, " "xvpvnc, spice-html5, rdp-html5, serial and webmks." msgstr "" "URLs der Serverkonsolen. Um einen bestimmten Konsolentyp zu erhalten, kann " "der angeforderte Typ als Parameter für die Funktion get_attr angegeben " "werden, z.B. get_attr: [ , console_urls, novnc ]. Aktuell unterstützte " "Typen sind novnc, xvpvnc, spice-html5, rdp-html5, serial und webmks." msgid "UUID of the Mistral workflow to trigger." msgstr "UUID des Mistral-Workflows zum Auslösen." msgid "UUID of the internal subnet to which the instance will be attached." msgstr "UUID des internen Subnetzes, an das die Instanz angehängt wird." #, python-format msgid "Unable to automatically allocate a network: %(message)s" msgstr "Ein Netzwerk konnte nicht automatisch zugewiesen werden: %(message)s" #, python-format msgid "Unable to find %(resource)s with name or id '%(name_or_id)s'" msgstr "Kann %(resource)s mit Name oder ID '%(name_or_id)s' nicht finden" #, python-format msgid "" "Unable to find neutron provider '%(provider)s', available providers are " "%(providers)s." msgstr "" "Kann den Neutron-Anbieter '%(provider)s' nicht finden, verfügbare Anbieter " "sind %(providers)s." #, python-format msgid "" "Unable to find senlin policy type '%(pt)s', available policy types are " "%(pts)s." msgstr "" "Senlin-Richtlinientyp '%(pt)s' konnte nicht gefunden werden, verfügbare " "Richtlinientypen sind %(pts)s." #, python-format msgid "" "Unable to find senlin profile type '%(pt)s', available profile types are " "%(pts)s." msgstr "" "Der Senlin-Profiltyp '%(pt)s' konnte nicht gefunden werden, verfügbare " "Profiltypen sind %(pts)s." #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "%(app_name)s konnte nicht aus der Konfigurationsdatei %(conf_file)s geladen " "werden. Erhalten: %(e)r" #, python-format msgid "Unable to locate config file [%s]" msgstr "Die Konfigurationsdatei konnte nicht gefunden werden [%s]" #, python-format msgid "Unexpected action %(action)s" msgstr "Unerwartete Aktion %(action)s" #, python-format msgid "Unexpected action %s" msgstr "Unerwartete Aktion %s" msgid "Unexpected exit while IN_PROGRESS." msgstr "Unerwartetes Beenden während IN_PROGRESS." #, python-format msgid "" "Unexpected properties: %(unexpected)s. Only these properties are allowed for " "%(type)s type of order: %(allowed)s." msgstr "" "Unerwartete Eigenschaften: %(unexpected)s. Nur diese Eigenschaften sind für " "den Auftragstyp %(type)s zulässig: %(allowed)s." msgid "" "Unique ID of the flavor. If not specified, an UUID will be auto generated " "and used." msgstr "" "Eindeutige ID der Variante. Wenn nicht angegeben, wird automatisch eine UUID " "generiert und verwendet." msgid "Unique identifier for the device." msgstr "Eindeutige Kennung für das Gerät." msgid "" "Unique identifier for the ike policy associated with the ipsec site " "connection." msgstr "" "Eindeutiger Bezeichner für die IKE-Richtlinie, die der IPSec-" "Standortverbindung zugeordnet ist." msgid "" "Unique identifier for the ipsec policy associated with the ipsec site " "connection." msgstr "" "Eindeutige Kennung für die IPSec-Richtlinie, die der IPSec-" "Standortverbindung zugeordnet ist." msgid "Unique identifier for the network owning the port." msgstr "Eindeutige Kennung für das Netzwerk, das den Port besitzt."
msgid "" "Unique identifier for the router to which the vpn service will be inserted." msgstr "" "Eindeutige Kennung für den Router, in den der VPN-Dienst eingefügt wird." msgid "" "Unique identifier for the vpn service associated with the ipsec site " "connection." msgstr "" "Eindeutige Kennung für den VPN-Dienst, der der IPSec-Standortverbindung " "zugeordnet ist." msgid "" "Unique identifier of the firewall policy to which this firewall rule belongs." msgstr "" "Eindeutige Kennung der Firewallrichtlinie, zu der diese Firewallregel gehört." msgid "Unique identifier of the firewall policy used to create the firewall." msgstr "" "Eindeutige Kennung der Firewall-Richtlinie, die zum Erstellen der Firewall " "verwendet wurde." msgid "Unknown" msgstr "unbekannte" #, python-format msgid "Unknown Property %s" msgstr "Unbekannte Eigenschaft %s" #, python-format msgid "Unknown attribute \"%s\"" msgstr "Unbekanntes Attribut \"%s\"" #, python-format msgid "Unknown error retrieving %s" msgstr "Unbekannter Fehler beim Abrufen von %s" #, python-format msgid "Unknown input %s" msgstr "Unbekannte Eingabe %s" #, python-format msgid "Unknown key(s) %s" msgstr "Unbekannte Taste(n) %s" msgid "Unknown share_status during creation of share \"{0}\"" msgstr "Unbekannter share_status beim Erstellen der Freigabe \"{0}\"" #, python-format msgid "Unknown status Container '%(name)s' - %(reason)s" msgstr "Unbekannter Status Container '%(name)s' - %(reason)s" #, python-format msgid "Unknown status creating Bay '%(name)s' - %(reason)s" msgstr "Unbekannter Status beim Erstellen der Bay '%(name)s' - %(reason)s " #, python-format msgid "Unknown status creating Cluster '%(name)s' - %(reason)s" msgstr "Unbekannter Status beim Erstellen des Clusters '%(name)s' - %(reason)s" msgid "Unknown status during deleting share \"{0}\"" msgstr "Unbekannter Status beim Löschen der Freigabe \"{0}\"" #, python-format msgid "Unknown status updating Bay '%(name)s' - %(reason)s" msgstr "Unbekannter Bay-Status wird aktualisiert. '%(name)s' - %(reason)s" #, python-format msgid "Unknown status updating Cluster '%(name)s' - %(reason)s" msgstr "Unbekannter Status aktualisiert Cluster '%(name)s' - %(reason)s" #, python-format msgid "Unknown status updating Cluster Template '%(name)s' - %(reason)s" msgstr "" "Unbekannter Status aktualisiert Cluster-Vorlage '%(name)s' - %(reason)s" #, python-format msgid "Unknown status: %s" msgstr "Unbekannter Status: %s" #, python-format msgid "" "Unrecognized value \"%(value)s\" for \"%(name)s\", acceptable values are: " "true, false." msgstr "" "Nicht erkannter Wert \"%(value)s\" für \"%(name)s\", zulässige Werte sind: " "true, false." #, python-format msgid "Unsupported object type %(objtype)s" msgstr "Nicht unterstützter Objekttyp %(objtype)s" #, python-format msgid "Unsupported resource '%s' in LoadBalancerNames" msgstr "Nicht unterstützte Ressource '%s' in LoadBalancerNames" msgid "Unversioned keystone url in format like http://0.0.0.0:5000." msgstr "Nicht versionierte Keystone-URL im Format http://0.0.0.0:5000." msgid "Up to 4094 VLAN network segments can exist on each physical_network." msgstr "" "Bis zu 4094 VLAN-Netzwerksegmente können auf jedem physischen Netzwerk " "vorhanden sein." msgid "" "Update status to COMPLETE for FAILED resource neither update nor replace." msgstr "" "Aktualisieren Sie den Status auf COMPLETE für FAILED-Ressourcen, die weder " "aktualisiert noch ersetzt werden." 
#, python-format msgid "Update to properties %(props)s of %(name)s (%(res)s)" msgstr "Aktualisierung auf Eigenschaften %(props)s von %(name)s (%(res)s)" #, python-format msgid "Update to property %(prop)s of %(name)s (%(res)s)" msgstr "Aktualisierung auf Eigenschaft %(prop)s von %(name)s (%(res)s)" msgid "Updated At" msgstr "Aktualisiert am" msgid "Updating a stack when it is deleting" msgstr "Aktualisieren eines Stapels beim Löschen" msgid "Updating a stack when it is suspended" msgstr "Aktualisieren eines Stapels, wenn dieser angehalten ist" msgid "Use LBaaS V2 instead." msgstr "Verwenden Sie stattdessen LBaaS V2." msgid "Use OS::Designate::RecordSet instead." msgstr "Verwenden Sie stattdessen OS::Designate::RecordSet." msgid "Use OS::Designate::Zone instead." msgstr "Verwenden Sie stattdessen OS::Designate::Zone." msgid "" "Use get_resource|Ref command instead. For example: { get_resource : " " }" msgstr "" "Verwenden Sie stattdessen den Befehl get_resource | Ref. Zum Beispiel: " "{ get_resource : }" msgid "" "Use only with Neutron, to list the internal subnet to which the instance " "will be attached; needed only if multiple exist; list length must be exactly " "1." msgstr "" "Verwenden Sie nur mit Neutron, um das interne Subnetz aufzulisten, an das " "die Instanz angehängt werden soll; benötigt nur wenn mehrere existieren; " "Listenlänge muss genau 1 sein." #, python-format msgid "Use property %s" msgstr "Verwenden Sie die Eigenschaft %s" #, python-format msgid "Use property %s." msgstr "Verwenden Sie die Eigenschaft %s." msgid "" "Use the `external_gateway_info` property in the router resource to set up " "the gateway." msgstr "" "Verwenden Sie die Eigenschaft `external_gateway_info` in der Router-" "Ressource, um das Gateway einzurichten." msgid "" "Use the networks attribute instead of first_address. For example: " "\"{get_attr: [, networks, , 0]}\"" msgstr "" "Verwenden Sie das Netzwerkattribut anstelle von first_address. Zum Beispiel: " "\"{get_attr: [ , networks, , 0]} \"" msgid "Use this resource at your own risk." msgstr "Benutzen Sie diese Ressource auf eigene Gefahr." #, python-format msgid "User %s in invalid domain" msgstr "Benutzer %s in ungültiger Domäne" #, python-format msgid "User %s in invalid project" msgstr "Benutzer %s in ungültigem Projekt" msgid "User ID for API authentication" msgstr "Benutzer-ID für die API-Authentifizierung" msgid "" "User data script to be executed by cloud-init. Changes cause replacement of " "the resource by default, but can be ignored altogether by setting the " "`user_data_update_policy` property." msgstr "" "Benutzerdatenskript, das von cloud-init ausgeführt wird. Änderungen " "verursachen standardmäßig das Ersetzen der Ressource, können jedoch durch " "Festlegen der Eigenschaft 'user_data_update_policy' vollständig ignoriert " "werden." msgid "User data to pass to instance." msgstr "Benutzerdaten, die an die Instanz übergeben werden." msgid "User is not authorized to perform action" msgstr "Der Benutzer ist nicht berechtigt, eine Aktion auszuführen" msgid "User name to create a user on instance creation." msgstr "Benutzername zum Erstellen eines Benutzers bei der Instanzerstellung." msgid "User name." msgstr "Nutzername." msgid "Username associated with the AccessKey." msgstr "Benutzername, der dem AccessKey zugeordnet ist." msgid "Username for API authentication" msgstr "Benutzername für die API-Authentifizierung" msgid "Username for accessing the data source URL." msgstr "Benutzername für den Zugriff auf die Datenquellen-URL." 
msgid "Username for accessing the job binary URL." msgstr "Benutzername für den Zugriff auf die binäre URL des Jobs." msgid "Username of privileged user in the image." msgstr "Benutzername des privilegierten Benutzers im Abbild" msgid "VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN networks." msgstr "VLAN-ID für VLAN-Netzwerke oder Tunnel-ID für GRE/VXLAN-Netzwerke." msgid "VPC ID for this gateway association." msgstr "VPC-ID für diese Gateway-Zuordnung." msgid "VPC ID for where the route table is created." msgstr "VPC ID für wo die Routentabelle erstellt wird." msgid "" "Valid values are encrypt or decrypt. The heat-engine processes must be " "stopped to use this." msgstr "" "Gültige Werte sind Verschlüsseln oder Entschlüsseln. Die Heat-Engine-" "Prozesse müssen gestoppt werden, um dies zu verwenden." #, python-format msgid "Value \"%(val)s\" is invalid for data type \"%(type)s\"." msgstr "Der Wert \"%(val)s\" ist für den Datentyp \"%(type)s\" ungültig." #, python-format msgid "Value '%(value)s' is invalid for '%(name)s' which only accepts integer." msgstr "" "Der Wert '%(value)s' ist ungültig für '%(name)s', der nur Integer akzeptiert." #, python-format msgid "" "Value '%(value)s' is invalid for '%(name)s' which only accepts non-negative " "integer." msgstr "" "Wert '%(value)s' ist ungültig für '%(name)s', die nur nicht negative ganze " "Zahl akzeptiert." #, python-format msgid "Value '%s' is not an integer" msgstr "Wert '%s' ist keine ganze Zahl" #, python-format msgid "Value must be a comma-delimited list string: %s" msgstr "Der Wert muss eine durch Kommas getrennte Liste sein: %s" #, python-format msgid "Value must be a string; got %r" msgstr "Wert muss eine Zeichenfolge sein; hab %r bekommen" #, python-format msgid "Value must be of type %s" msgstr "Der Wert muss vom Typ %s sein" #, python-format msgid "Value must be valid JSON: %s" msgstr "Wert muss gültig sein JSON: %s" #, python-format msgid "Value must match pattern: %s" msgstr "Der Wert muss dem Muster entsprechen: %s" msgid "Value to compare." msgstr "Wert zum Vergleichen." msgid "" "Value which can be set or changed on stack update to trigger the resource " "for replacement with a new random string. The salt value itself is ignored " "by the random generator." msgstr "" "Wert, der bei einer Stapelaktualisierung gesetzt oder geändert werden kann, " "um die Ressource zum Ersetzen durch eine neue zufällige Zeichenfolge " "auszulösen. Der Salt-Wert selbst wird vom Zufallsgenerator ignoriert." msgid "" "Value which can be set to fail the resource operation to test failure " "scenarios." msgstr "" "Wert, der so festgelegt werden kann, dass die Ressourcenoperation " "fehlschlägt, um Fehlerszenarien zu testen." msgid "" "Value which can be set to trigger update replace for the particular resource." msgstr "" "Wert, der gesetzt werden kann, um Update zu ersetzen, ersetzt die bestimmte " "Ressource." #, python-format msgid "Version %(objver)s of %(objname)s is not supported" msgstr "Version %(objver)s von %(objname)s wird nicht unterstützt" msgid "Version for the ike policy." msgstr "Version für die Ike-Richtlinie." msgid "" "Version info of chosen COE in cluster for helping client in picking the " "right version of client." msgstr "" "Versionsinformationen des ausgewählten COE im Cluster, um dem Kunden bei der " "Auswahl der richtigen Version des Clients zu helfen." msgid "" "Version info of constainer engine in the chosen COE in cluster for helping " "client in picking the right version of client." 
msgstr "" "Versionsinfo der Constainer-Engine im ausgewählten COE im Cluster, um dem " "Kunden bei der Auswahl der richtigen Version des Clients zu helfen." msgid "Version of Hadoop running on instances." msgstr "Version von Hadoop, die auf Instanzen ausgeführt wird." msgid "Version of IP address." msgstr "Version der IP-Adresse" msgid "Vip associated with the pool." msgstr "Vip mit dem Pool verbunden." msgid "Volume attachment failed" msgstr "Datenträgeranhang ist fehlgeschlagen" msgid "Volume backup failed" msgstr "Volume-Sicherung fehlgeschlagen" msgid "Volume backup restore failed" msgstr "Datenträger-Sicherungs-Wiederherstellung fehlgeschlagen" msgid "Volume create failed" msgstr "Volume-Erstellung fehlgeschlagen" msgid "Volume detachment failed" msgstr "Datenträgertrennung fehlgeschlagen" #, python-format msgid "" "Volume driver type %(driver)s is not supported by COE:%(coe)s, expecting a " "%(supported_volume_driver)s volume driver." msgstr "" "Datenträger-Treibertyp %(driver)s wird von COE nicht unterstützt: %(coe)s, " "erwartet einen %(supported_volume_driver)s Datenträgertreiber." msgid "Volume in use" msgstr "Volumen im Gebrauch" msgid "Volume resize failed" msgstr "Datenträger-Größenänderung fehlgeschlagen" msgid "Volumes per node." msgstr "Datenträger pro Knoten" msgid "Volumes to attach to instance." msgstr "Volumes zum Anhängen an Instanz." #, python-format msgid "WaitCondition invalid Handle %s" msgstr "WaitCondition ungültiges Handle %s" #, python-format msgid "WaitCondition invalid Handle stack %s" msgstr "WaitCondition ungültig Handle Stack %s" #, python-format msgid "WaitCondition invalid Handle tenant %s" msgstr "WaitCondition ungültig Behandle Mandanten %s" msgid "" "Warning: this command is potentially destructive and only intended to " "recover from specific crashes." msgstr "" "Warnung: Dieser Befehl ist möglicherweise destruktiv und nur zur " "Wiederherstellung von bestimmten Abstürzen vorgesehen." msgid "Weight of pool member in the pool (default to 1)." msgstr "Gewicht des Pool-Mitglieds im Pool (standardmäßig 1)" msgid "Weight of the pool member in the pool." msgstr "Gewicht des Pool-Mitglieds im Pool." #, python-format msgid "Went to status %(resource_status)s due to \"%(status_reason)s\"" msgstr "Wegen Status %(resource_status)s wegen \"%(status_reason)s\" gegangen" msgid "" "When both ipv6_ra_mode and ipv6_address_mode are set, they must be equal." msgstr "" "Wenn sowohl ipv6_ra_mode als auch ipv6_address_mode gesetzt sind, müssen sie " "gleich sein." msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Wenn Sie den Server im SSL-Modus ausführen, müssen Sie in der " "Konfigurationsdatei den Wert cert_file und den Wert für die Option key_file " "angeben" msgid "" "When this feature is enabled, scheduler hints identifying the heat stack " "context of a server or volume resource are passed to the configured " "schedulers in nova and cinder, for creates done using heat resource types " "OS::Cinder::Volume, OS::Nova::Server, and AWS::EC2::Instance. 
" "heat_root_stack_id will be set to the id of the root stack of the resource, " "heat_stack_id will be set to the id of the resource's parent stack, " "heat_stack_name will be set to the name of the resource's parent stack, " "heat_path_in_stack will be set to a list of comma delimited strings of " "stackresourcename and stackname with list[0] being 'rootstackname', " "heat_resource_name will be set to the resource's name, and " "heat_resource_uuid will be set to the resource's orchestration id." msgstr "" "Wenn diese Funktion aktiviert ist, werden Scheduler-Hinweise, die den Heat-" "Stack-Kontext eines Servers oder einer Volume-Ressource identifizieren, an " "die konfigurierten Scheduler in nova und cinder übergeben, die mit Heat-" "Ressourcentypen OS::Cinder::Volume, OS::Nova::Server und AWS::EC2::Instance " "erstellt wurden. heat_root_stack_id wird auf die ID des Root-Stacks der " "Ressource gesetzt, heat_stack_id wird auf die ID des übergeordneten Stacks " "der Ressource gesetzt, heat_stack_name wird auf den Namen des übergeordneten " "Stacks der Ressource gesetzt, heat_path_in_stack wird auf eine Liste von " "Kommaseparierte Zeichenfolgen von stackresourcename und stackname mit " "list[0] als 'rootstackname', heat_resource_name wird auf den Namen der " "Ressource und heat_resource_uuid auf die Orchestrierungs-ID der Ressource " "gesetzt." msgid "Whether allow the volume to be attached more than once." msgstr "Gibt an, ob der Datenträger mehr als einmal angehängt werden soll." msgid "Whether enable this policy on that cluster." msgstr "Aktivieren Sie diese Richtlinie für diesen Cluster." msgid "Whether enable this policy on this cluster." msgstr "Aktivieren Sie diese Richtlinie für diesen Cluster." msgid "" "Whether the address scope should be shared to other tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only, and restricts changing of shared address scope to unshared with " "update." msgstr "" "Ob der Adressbereich für andere Mandanten freigegeben werden soll. Beachten " "Sie, dass die Standardrichtlinieneinstellung die Verwendung dieses Attributs " "nur für administrative Benutzer einschränkt und die Änderung des Bereichs " "für gemeinsam genutzte Adressen auf die Freigabe mit Update beschränkt." msgid "Whether the flavor is shared across all projects." msgstr "Ob die Variante in allen Projekten geteilt wird." msgid "" "Whether the image can be deleted. If the value is True, the image is " "protected and cannot be deleted." msgstr "" "Ob das Abbild gelöscht werden kann. Wenn der Wert True ist, ist das Abbild " "geschützt und kann nicht gelöscht werden." msgid "Whether the metering label should be shared across all tenants." msgstr "Ob das Zähler-Label für alle Mandanten gemeinsam genutzt werden soll." msgid "Whether the network contains an external router." msgstr "Gibt an, ob das Netzwerk einen externen Router enthält." msgid "Whether the part content is text or multipart." msgstr "Ob der Teilinhalt Text oder Multipart ist." msgid "" "Whether the subnet pool will be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Gibt an, ob der Subnetzpool für alle Mandanten freigegeben wird. Beachten " "Sie, dass die Standardrichtlinieneinstellung die Verwendung dieses Attributs " "nur für administrative Benutzer einschränkt." msgid "Whether the volume type is accessible to the public." 
msgstr "Ob der Datenträgertyp für die Öffentlichkeit zugänglich ist." msgid "Whether this QoS policy should be shared to other tenants." msgstr "Ob diese QoS-Richtlinie für andere Mandanten freigegeben werden soll." msgid "" "Whether this firewall should be shared across all tenants. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "Ob diese Firewall für alle Mandanten freigegeben werden soll. HINWEIS: Die " "Standardrichtlinieneinstellung in Neutron beschränkt die Verwendung dieser " "Eigenschaft auf administrative Benutzer." msgid "" "Whether this is default IPv4/IPv6 subnet pool. There can only be one default " "subnet pool for each IP family. Note that the default policy setting " "restricts administrative users to set this to True." msgstr "" "Ob dies der Standard-IPv4/IPv6-Subnetzpool ist. Es kann nur einen Standard-" "Subnetz-Pool für jede IP-Familie geben. Beachten Sie, dass die " "Standardrichtlinieneinstellung es Benutzern mit Administratorrechten " "einschränkt, diesen Wert auf \"true\" festzulegen." msgid "Whether this network should be shared across all tenants." msgstr "Ob dieses Netzwerk für alle Mandanten freigegeben werden soll" msgid "" "Whether this network should be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Ob dieses Netzwerk für alle Mandanten freigegeben werden soll Beachten Sie, " "dass die Standardrichtlinieneinstellung die Verwendung dieses Attributs nur " "für administrative Benutzer einschränkt." msgid "" "Whether this policy should be audited. When set to True, each time the " "firewall policy or the associated firewall rules are changed, this attribute " "will be set to False and will have to be explicitly set to True through an " "update operation." msgstr "" "Ob diese Richtlinie überprüft werden sollte. Bei Festlegung auf \"True\" " "wird jedes Mal, wenn die Firewallrichtlinie oder die zugehörigen " "Firewallregeln geändert werden, dieses Attribut auf \"False\" festgelegt und " "muss während einer Aktualisierungsoperation explizit auf \"True\" festgelegt " "werden." msgid "Whether this policy should be shared across all tenants." msgstr "Ob diese Richtlinie für alle Mandanten gilt" msgid "Whether this rule should be enabled." msgstr "Ob diese Regel aktiviert werden soll" msgid "Whether this rule should be shared across all tenants." msgstr "Ob diese Regel für alle Mandanten gilt" msgid "Whether to enable the actions or not." msgstr "Gibt an, ob die Aktionen aktiviert werden sollen oder nicht." msgid "Whether to specify a remote group or a remote IP prefix." msgstr "" "Gibt an, ob eine Remote-Gruppe oder ein Remote-IP-Präfix angegeben werden " "soll." msgid "" "Which lifecycle actions of the deployment resource will result in this " "deployment being triggered." msgstr "" "Welche Lebenszyklusaktionen der Bereitstellungsressource führen dazu, dass " "diese Bereitstellung ausgelöst wird." msgid "" "With Neutron enabled you need to pass Neutron network and Neutron subnet " "instead of Nova network" msgstr "" "Wenn Neutron aktiviert ist, müssen Sie Neutron-Netzwerk und Neutron-Subnetz " "anstelle von Nova-Netzwerk übergeben" msgid "" "Workflow additional parameters. If Workflow is reverse typed, params " "requires 'task_name', which defines initial task." msgstr "" "Workflow zusätzliche Parameter. 
Wenn der Workflow vom Typ 'reverse' ist, " "erfordert params 'task_name', das die Anfangsaufgabe definiert." msgid "" "Workflow additional parameters. If workflow is reverse typed, params " "requires \"task_name\", which defines initial task." msgstr "" "Zusätzliche Workflow-Parameter. Wenn der Workflow vom Typ 'reverse' ist, " "benötigt params \"task_name\", das die Anfangsaufgabe definiert." msgid "Workflow description." msgstr "Workflow-Beschreibung." msgid "Workflow execution description." msgstr "Workflow-Ausführungsbeschreibung." msgid "Workflow name." msgstr "Workflow-Name." msgid "Workflow to execute." msgstr "Auszuführender Workflow." msgid "Workflow type." msgstr "Workflow-Typ." #, python-format msgid "Wrong Arguments try: \"%s\"" msgstr "Falsche Argumente, versuchen Sie: \"%s\"" msgid "You are not authenticated." msgstr "Sie sind nicht authentifiziert." msgid "You are not authorized to complete this action." msgstr "Sie sind nicht berechtigt, diese Aktion abzuschließen." #, python-format msgid "You are not authorized to use %(action)s." msgstr "Sie sind nicht berechtigt, %(action)s zu verwenden." #, python-format msgid "" "You have reached the maximum stacks per tenant, %d. Please delete some " "stacks." msgstr "" "Sie haben die maximale Anzahl an Stapeln pro Mandant erreicht, %d. Bitte " "löschen Sie einige Stapel." #, python-format msgid "could not find user %s" msgstr "Benutzer %s konnte nicht gefunden werden" msgid "decrypt" msgstr "entschlüsseln" msgid "deployment_id must be specified" msgstr "deployment_id muss angegeben werden" msgid "" "deployments key not allowed in resource metadata with user_data_format of " "SOFTWARE_CONFIG" msgstr "" "Der Schlüssel 'deployments' ist in Ressourcenmetadaten mit user_data_format " "SOFTWARE_CONFIG nicht zulässig" #, python-format msgid "deployments of server %s" msgstr "Bereitstellungen von Server %s" #, python-format msgid "due to cooldown, cooldown %s" msgstr "aufgrund der Abklingzeit, Abklingzeit %s" msgid "due to scaling activity" msgstr "aufgrund der Skalierungsaktivität" msgid "encrypt" msgstr "verschlüsseln" #, python-format msgid "environment has empty section \"%s\"" msgstr "Umgebung hat leeren Abschnitt \"%s\"" #, python-format msgid "environment has wrong section \"%s\"" msgstr "Umgebung hat falschen Abschnitt \"%s\"" msgid "error in pool" msgstr "Fehler im Pool" msgid "error in vip" msgstr "Fehler in vip" msgid "external network for the gateway." msgstr "externes Netzwerk für das Gateway." msgid "granularity should be days, hours, minutes, or seconds" msgstr "Granularität sollte days, hours, minutes oder seconds sein" msgid "heat.conf misconfigured, auth_encryption_key must be 32 characters" msgstr "" "heat.conf falsch konfiguriert, auth_encryption_key muss 32 Zeichen lang sein" msgid "" "heat.conf misconfigured, cannot specify \"stack_user_domain_id\" or " "\"stack_user_domain_name\" without \"stack_domain_admin\" and " "\"stack_domain_admin_password\"" msgstr "" "heat.conf falsch konfiguriert, kann nicht \"stack_user_domain_id\" oder " "\"stack_user_domain_name\" ohne \"stack_domain_admin\" und " "\"stack_domain_admin_password\" angeben" msgid "ipv6_ra_mode and ipv6_address_mode are not supported for ipv4." msgstr "ipv6_ra_mode und ipv6_address_mode werden für ipv4 nicht unterstützt."
#, python-format msgid "key replacement %s collides with a key in the input map" msgstr "" "Schlüsselersetzung %s kollidiert mit einem Schlüssel in der Eingabezuordnung" #, python-format msgid "key replacement %s collides with a key in the output map" msgstr "" "Schlüsselersetzung %s kollidiert mit einem Schlüssel in der Ausgabekarte" msgid "limit cannot be less than 4" msgstr "Limit kann nicht weniger als 4 sein" #, python-format msgid "metadata setting for resource %s" msgstr "Metadateneinstellung für die Ressource %s" msgid "min/max length must be integral" msgstr "Die minimale/maximale Länge muss ganzzahlig sein" msgid "min/max must be numeric" msgstr "Min/Max muss numerisch sein" msgid "need more memory." msgstr "brauche mehr Speicher." msgid "no resource data found" msgstr "Keine Ressourcendaten gefunden" msgid "no resources were found" msgstr "Es wurden keine Ressourcen gefunden" msgid "nova server metadata needs to be a Map." msgstr "Die Metadaten des nova-Servers müssen eine Map sein." msgid "offset must be smaller (by absolute value) than step." msgstr "Offset muss kleiner sein (absolut) als Schritt." #, python-format msgid "previous_status must be SupportStatus instead of %s" msgstr "previous_status muss SupportStatus anstelle von %s sein" #, python-format msgid "raw template with id %s not found" msgstr "Rohvorlage mit ID %s nicht gefunden" #, python-format msgid "raw_template_files with files_id %d not found" msgstr "raw_template_files mit files_id %d nicht gefunden" msgid "" "redirect_pool property should only be specified for action with value " "REDIRECT_TO_POOL." msgstr "" "Die Eigenschaft redirect_pool sollte nur für eine Aktion mit dem Wert " "REDIRECT_TO_POOL angegeben werden." msgid "" "redirect_url property should only be specified for action with value " "REDIRECT_TO_URL." msgstr "" "Die Eigenschaft redirect_url sollte nur für die Aktion mit dem Wert " "REDIRECT_TO_URL angegeben werden." #, python-format msgid "resource with id %s not found" msgstr "Ressource mit ID %s nicht gefunden" #, python-format msgid "" "restart_policy \"%s\" is invalid. Valid values are \"no\", \"on-failure[:max-" "retry]\", \"always\", and \"unless-stopped\"." msgstr "" "neustart_policy \"%s\" ist ungültig. Gültige Werte sind \"nein\", \"bei " "Fehler [: max-retry]\", \"immer\" und \"wenn nicht gestoppt\"." #, python-format msgid "roles %s" msgstr "Rollen %s" msgid "segmentation_id cannot be specified except 0 for using flat" msgstr "" "segmentation_id kann nicht angegeben werden außer 0 für die Verwendung von " "flat" msgid "segmentation_id must be specified for using vlan" msgstr "segmentation_id muss für die Verwendung von vlan angegeben werden" msgid "segmentation_id not allowed for flat network type." msgstr "segmentation_id ist für den flachen Netzwerktyp nicht zulässig." msgid "server_id must be specified" msgstr "server_id muss angegeben werden" msgid "step and offset must be both positive or both negative." msgstr "Schritt und Offset müssen beide positiv oder beide negativ sein." msgid "step cannot be 0." msgstr "Schritt kann nicht 0 sein." msgid "step/offset must be integer" msgstr "Schritt/Offset muss ganzzahlig sein" msgid "step/offset must be numeric" msgstr "Schritt/Offset muss numerisch sein" #, python-format msgid "" "task %(task)s contains property 'requires' in case of direct workflow. Only " "reverse workflows can contain property 'requires'." msgstr "" "Aufgabe %(task)s enthält Eigenschaft 'requires' bei direktem Workflow. 
Nur " "umgekehrte Workflows können die Eigenschaft \"requires\" enthalten." msgid "{0} {1} endpoint is not in service catalog." msgstr "Der {0} {1} Endpunkt befindet sich nicht im Servicekatalog." heat-10.0.2/heat/locale/es/0000775000175000017500000000000013343562672015337 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/es/LC_MESSAGES/0000775000175000017500000000000013343562672017124 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/es/LC_MESSAGES/heat.po0000666000175000017500000100701713343562351020407 0ustar zuulzuul00000000000000# Translations template for heat. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the heat project. # # Translators: # Rafael Rivero , 2014 # Andreas Jaeger , 2016. #zanata # Omar Rivera , 2017. #zanata msgid "" msgstr "" "Project-Id-Version: heat VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-05 10:35+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2017-07-01 11:12+0000\n" "Last-Translator: Copied by Zanata \n" "Language: es\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: Spanish\n" #, python-format msgid "\"%%s\" is not a valid keyword inside a %s definition" msgstr "\"%%s\" no es una palabra clave válida dentro de una definición de %s" #, python-format msgid "\"%(fn_name)s\": %(err)s" msgstr "\"%(fn_name)s\": %(err)s" #, python-format msgid "" "\"%(name)s\" params must be strings, numbers, list or map. Failed to json " "serialize %(value)s" msgstr "" "Los parámetros de \"%(name)s\" deben ser cadenas, números, una lista o una " "correlación. No se ha podido serializar %(value)s con JSON" #, python-format msgid "" "\"%(section)s\" must contain a map of %(obj_name)s maps. Found a [%(_type)s] " "instead" msgstr "" "\"%(section)s\" debe contener una correlación de correlaciones %(obj_name)s. " "Se ha encontrado [%(_type)s] en su lugar" #, python-format msgid "\"%(url)s\" is not a valid SwiftSignalHandle. The %(part)s is invalid" msgstr "\"%(url)s\" no es un SwiftSignalHandle válido. %(part)s no es válido" #, python-format msgid "\"%(value)s\" does not validate %(name)s" msgstr "\"%(value)s\" no valida %(name)s" #, python-format msgid "\"%(value)s\" does not validate %(name)s (constraint not found)" msgstr "\"%(value)s\" no valida %(name)s (restricción no encontrada)" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be one of: %(available)s" msgstr "" "\"%(version)s\". \"%(version_type)s\" debe ser uno de los siguientes: " "%(available)s" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be: %(available)s" msgstr "\"%(version)s\". 
\"%(version_type)s\" debe ser: %(available)s" #, python-format msgid "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" msgstr "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" #, python-format msgid "\"%s\" argument must be a string" msgstr "El argumento \"%s\" debe ser una serie" #, python-format msgid "\"%s\" can't traverse path" msgstr "\"%s\" no puede recorrer la vía de acceso" #, python-format msgid "\"%s\" deletion policy not supported" msgstr "La política de supresión \"%s\" no está soportada" #, python-format msgid "\"%s\" delimiter must be a string" msgstr "El delimitador \"%s\" debe ser una serie" #, python-format msgid "\"%s\" is not a list" msgstr "\"%s\" no es una lista" #, python-format msgid "\"%s\" is not a map" msgstr "\"%s\" no es una correlación" #, python-format msgid "\"%s\" is not a valid ARN" msgstr "\"%s\" no es un ARN válido" #, python-format msgid "\"%s\" is not a valid ARN URL" msgstr "\"%s\" no es un URL de ARN válido" #, python-format msgid "\"%s\" is not a valid Heat ARN" msgstr "\"%s\" no es un ARN Heat válido" #, python-format msgid "\"%s\" is not a valid URL" msgstr "\"%s\" no es un URL válido" #, python-format msgid "\"%s\" is not a valid boolean" msgstr "\"%s\" no es un booleano válido" #, python-format msgid "\"%s\" is not a valid template section" msgstr "\"%s\" no es una sección de plantilla válida" #, python-format msgid "\"%s\" must operate on a list" msgstr "\"%s\" debe operar en una lista" #, python-format msgid "\"%s\" param placeholders must be strings" msgstr "Los marcadores de parámetros \"%s\" deben ser series" #, python-format msgid "\"%s\" parameters must be a mapping" msgstr "Los parámetros \"%s\" deben ser una correlación" #, python-format msgid "\"%s\" params must be a map" msgstr "Los parámetros \"%s\" deben ser una correlación" #, python-format msgid "\"%s\" params must be strings, numbers, list or map." msgstr "" "Los parámetros de \"%s\" deben ser cadenas, números, una lista o una " "correlación." #, python-format msgid "\"%s\" template must be a string" msgstr "La plantilla \"%s\" debe ser una serie" #, python-format msgid "\"repeat\" syntax should be %s" msgstr "La sintaxis de \"repeat\" debe ser %s" #, python-format msgid "%(a)s paused until Hook %(h)s is cleared" msgstr "%(a)s se ha detenido hasta que se borre el gancho %(h)s" #, python-format msgid "%(action)s is not supported for resource." msgstr "%(action)s no está soportada para el recurso." #, python-format msgid "%(action)s is restricted for resource." msgstr "%(action)s está restringida para este recurso." #, python-format msgid "%(desired_capacity)s must be between %(min_size)s and %(max_size)s" msgstr "%(desired_capacity)s debe ser entre %(min_size)s y %(max_size)s" #, python-format msgid "%(error)s%(path)s%(message)s" msgstr "%(error)s%(path)s%(message)s" #, python-format msgid "%(feature)s is not supported." msgstr "%(feature)s no está soportada." #, python-format msgid "" "%(img)s must be provided: Referenced cluster template %(tmpl)s has no " "default_image_id defined." msgstr "" "Debe proporcionarse %(img)s: la plantilla de clúster referenciada %(tmpl)s " "no tiene un default_image_id definido." #, python-format msgid "%(lc)s (%(ref)s) reference can not be found." msgstr "No se ha podido encontrar la referencia %(lc)s (%(ref)s)." #, python-format msgid "" "%(lc)s (%(ref)s) requires a reference to the configuration not just the name " "of the resource." 
msgstr "" "%(lc)s (%(ref)s) necesita una referencia a la configuración, no sólo el " "nombre del recurso." #, python-format msgid "%(len)d of %(count)d received" msgstr "Se ha recibido %(len)d de %(count)d" #, python-format msgid "%(len)d of %(count)d received - %(reasons)s" msgstr "Se ha recibido %(len)d de %(count)d - %(reasons)s" #, python-format msgid "%(message)s" msgstr "%(message)s" #, python-format msgid "%(min_size)s can not be greater than %(max_size)s" msgstr "%(min_size)s no puede ser mayor que %(max_size)s" #, python-format msgid "%(name)s constraint invalid for %(utype)s" msgstr "%(name)s restricción inválida para %(utype)s" #, python-format msgid "%(prop1)s cannot be specified without %(prop2)s." msgstr "no se puede especificar %(prop1)s sin %(prop2)s." #, python-format msgid "" "%(prop1)s property should only be specified for %(prop2)s with value " "%(value)s." msgstr "" "Solo se debe especificar la propiedad %(prop1)s para %(prop2)s con el valor " "%(value)s." #, python-format msgid "%(resource)s: Invalid attribute %(key)s" msgstr "%(resource)s: Atributo no válido %(key)s" #, python-format msgid "" "%(result)s - Unknown status %(resource_status)s due to \"%(status_reason)s\"" msgstr "" "%(result)s - Estado desconocido %(resource_status)s debido a " "\"%(status_reason)s\"" #, python-format msgid "%(schema)s supplied for %(type)s %(data)s" msgstr "%(schema)s proporcionado por %(type)s %(data)s" #, python-format msgid "%(server)s-port-%(number)s" msgstr "%(server)s-puerto-%(number)s" #, python-format msgid "%(type)s not in valid format: %(error)s" msgstr "%(type)s no esta en un formato válido: %(error)s" #, python-format msgid "%s Key Name must be a string" msgstr "El nombre de clave %s debe ser una serie" #, python-format msgid "%s Timed out" msgstr "%s se ha excedido el tiempo de espera" #, python-format msgid "%s Value Name must be a string" msgstr "El nombre de valor %s debe ser una serie" #, python-format msgid "%s is not a valid job location." msgstr "%s no es una ubicación de trabajo válida." #, python-format msgid "%s is not active" msgstr "%s no está activo" #, python-format msgid "%s is not an integer." msgstr "%s no es un entero." #, python-format msgid "%s must be provided" msgstr "Debe proporcionarse %s" #, python-format msgid "'%(attr)s': expected '%(expected)s', got '%(current)s'" msgstr "'%(attr)s': se esperaba '%(expected)s', se ha obtenido '%(current)s'" msgid "" "'task_name' is not assigned in 'params' in case of reverse type workflow." msgstr "" "'task_name' no está asignado en 'params' en caso de flujo de trabajo de tipo " "inverso." msgid "'true' if DHCP is enabled for this subnet; 'false' otherwise." msgstr "" "'true' si DHCP se ha habilitado para esta subred; de lo contrario, 'false'." msgid "A UUID for the set of servers being requested." msgstr "Un UUID para el conjunto de servidores que se estaban solicitando." msgid "A bad or out-of-range value was supplied" msgstr "Un valor incorrecto o fuera de rango estuvo proporcionado" msgid "A boolean value of default flag." msgstr "Un valor booleano del distintivo predeterminado." msgid "A boolean value specifying the administrative status of the network." msgstr "Un valor booleano que especifica el estado administrativo de la red." #, python-format msgid "" "A character class and its corresponding %(min)s constraint to generate the " "random string from." msgstr "" "Una clase de carácter y su correspondiente restricción %(min)s a partir de " "la que se genera la serie aleatoria." 
#, python-format msgid "" "A character sequence and its corresponding %(min)s constraint to generate " "the random string from." msgstr "" "Una secuencia de caracteres y su correspondiente restricción %(min)s a " "partir de la que se genera la serie aleatoria." msgid "A comma-delimited list of server ip addresses. (Heat extension)." msgstr "" "Lista delimitada por comas de direcciones IP de servidor. (Extensión de " "Heat)." msgid "A description of the volume." msgstr "Una descripción del volumen." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name. This value is typically vda." msgstr "" "Un nombre de dispositivo donde el volumen se conectará en el sistema en /dev/" "device_name. Este valor suele ser vda." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name.e.g. vdb" msgstr "" "Un nombre de dispositivo donde el volumen se conectará en el sistema en /dev/" "device_name.e.g. vdb" msgid "" "A dict of all network addresses with corresponding port_id. Each network " "will have two keys in dict, they are network name and network id. The port " "ID may be obtained through the following expression: \"{get_attr: [, " "addresses, , 0, port]}\"." msgstr "" "Un diccionario con todas las direcciones de red con su correspondiente " "port_id. Cada red tendrá dos claves en el diccionario: nombre de red y ID de " "red. El ID de puerto se puede obtener mediante la siguiente expresión: " "\"{get_attr: [, addresses, , 0, port]}\"." msgid "" "A dict of assigned network addresses of the form: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Each network will have two keys in dict, they " "are network name and network id." msgstr "" "Un diccionario con las direcciones de red asignadas con el formato: {\"public" "\": [ip1, ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Cada red tendrá dos claves en el diccionario: " "nombre de red y ID de red." msgid "A dict of key-value pairs output from the stack." msgstr "Un dict de pares clave-valor de salida de la pila." msgid "A dictionary which contains name and input of the workflow." msgstr "" "Un diccionario que contiene el nombre y la entrada del flujo de trabajo." msgid "A length constraint must have a min value and/or a max value specified." msgstr "" "Una restricción de longitud debe tener especificados un valor mínimo y/o un " "valor máximo." msgid "A list of URLs (webhooks) to invoke when state transitions to alarm." msgstr "" "Una lista de los URL (webhooks) a invocar cuando el estado cambia a alarma." msgid "" "A list of URLs (webhooks) to invoke when state transitions to insufficient-" "data." msgstr "" "Una lista de los URL (webhooks) a invocar cuando el estado cambia a datos " "insuficientes." msgid "A list of URLs (webhooks) to invoke when state transitions to ok." msgstr "" "Una lista de los URL (webhooks) a invocar cuando el estado cambia a correcto." msgid "A list of access rules that define access from IP to Share." msgstr "" "Una lista de reglas de acceso que definen el acceso desde la IP al recurso " "compartido." msgid "A list of all rules for the QoS policy." msgstr "Lista de todas las reglas de de la política de QoS." msgid "A list of all subnet attributes for the port." msgstr "Lista de todos los atributos de subred del puerto." msgid "" "A list of character class and their constraints to generate the random " "string from." 
msgstr "" "Una lista de clases de caracteres y sus restricciones a partir de la que se " "genera la serie aleatoria." msgid "" "A list of character sequences and their constraints to generate the random " "string from." msgstr "" "Una lista de secuencias de caracteres y sus restricciones a partir de la que " "se genera la serie aleatoria." msgid "A list of cluster instance IPs." msgstr "Una lista de ID de instancias de clúster.." msgid "A list of clusters to which this policy is attached." msgstr "Una lista de clústeres a los que estña conectada esta política." msgid "A list of host route dictionaries for the subnet." msgstr "Una lista de diccionarios de ruta de host para la subred." msgid "A list of instances ids." msgstr "Lista de ID de instancias." msgid "A list of metric ids." msgstr "Una lista de ID de métricas." msgid "" "A list of query factors, each comparing a Sample attribute with a value. " "Implicitly combined with matching_metadata, if any." msgstr "" "Lista de factores de consulta, en la que cada uno compara un atributo de " "ejemplo con un valor. Se combina de forma implícita con matching_metadata, " "si los hubiera." msgid "A list of resource IDs for the resources in the chain." msgstr "Una lista de ID de recurso para los recursos de la cadena." msgid "A list of resource IDs for the resources in the group." msgstr "Una lista de ID de recurso para los recursos del grupo." msgid "A list of security groups for the port." msgstr "Una lista de grupos de seguridad para el puerto." msgid "A list of security services IDs or names." msgstr "Una lista de ID o nombres de servicios de seguridad." msgid "A list of string policies to apply. Defaults to anti-affinity." msgstr "" "Una lista de políticas de serie a aplicar. Valores predeterminados para anti-" "afinidad." msgid "A login profile for the user." msgstr "Perfil de inicio de sesión para el usuario." msgid "A mandatory input parameter is missing" msgstr "Falta un parámetro de entrada obligatorio" msgid "A map containing all headers for the container." msgstr "Una correlación que contiene todas las cabeceras para el contenedor." msgid "" "A map of Nova names and captured stderrs from the configuration execution to " "each server." msgstr "" "Una correlación de nombres y stderrs capturados de Nova procedente de la " "ejecución de la configuración en cada servidor." msgid "" "A map of Nova names and captured stdouts from the configuration execution to " "each server." msgstr "" "Una correlación de nombres y stdouts capturados de Nova procedente de la " "ejecución de la configuración en cada servidor." msgid "" "A map of Nova names and returned status code from the configuration " "execution." msgstr "" "Una correlación de nombres de Nova y el código de estado devuelto a partir " "de la ejecución de la configuración." msgid "" "A map of files to create/overwrite on the server upon boot. Keys are file " "names and values are the file contents." msgstr "" "Una correlación de archivos a crear/sobrescribir en el servidor al arrancar. " "Las claves son nombres de archivo y los valores son el contenido de archivo." msgid "" "A map of resource names to the specified attribute of each individual " "resource." msgstr "" "Una correlación de nombres de recurso con el atributo especificado de cada " "recursos." msgid "" "A map of resource names to the specified attribute of each individual " "resource. Requires heat_template_version: 2014-10-16." msgstr "" "Una correlación de nombres de recursos con el atributo especificado de cada " "recurso. 
Requiere heat_template_version: 2014-10-16." msgid "" "A map of user-defined meta data to associate with the account. Each key in " "the map will set the header X-Account-Meta-{key} with the corresponding " "value." msgstr "" "Una correlación de metadatos definidos por el usuario a asociar con la " "cuenta. Cada clave de la correlación establecerá la cabecera X-Account-Meta-" "{key} con el valor correspondiente." msgid "" "A map of user-defined meta data to associate with the container. Each key in " "the map will set the header X-Container-Meta-{key} with the corresponding " "value." msgstr "" "Una correlación de metadatos definidos por el usuario a asociar con el " "contenedor. Cada clave de la correlación establecerá la cabecera X-Container-" "Meta-{key} con el valor correspondiente." msgid "A name used to distinguish the volume." msgstr "Un nombre utilizado para distinguir el volumen." msgid "" "A per-tenant quota on the prefix space that can be allocated from the subnet " "pool for tenant subnets." msgstr "" "Una cuota por inquilino sobre el espacio de prefijo que se puede asignar " "desde la agrupación de subred para subredes de inquilinos." msgid "" "A predefined access control list (ACL) that grants permissions on the bucket." msgstr "" "Lista de control de acceso (ACL) predefinida que otorga permisos en el grupo." msgid "A range constraint must have a min value and/or a max value specified." msgstr "" "Una restricción de rango debe tener especificados un valor mínimo y/o un " "valor máximo." msgid "" "A reference to the wait condition handle used to signal this wait condition." msgstr "" "Una referencia al descriptor de contexto de condición de espera utilizado " "para señalar esta condición de espera." msgid "" "A signed url to create executions for workflows specified in Workflow " "resource." msgstr "" "Un URL firmado para crear ejecuciones para flujos de trabajo especificados " "en el recurso Flujo de trabajo." msgid "A signed url to handle the alarm." msgstr "Un url firmado para manejar la alarma." msgid "A signed url to handle the alarm. (Heat extension)." msgstr "Un url firmado para manejar la alarma. (Extensión de Heat)." msgid "A specified set of DNS name servers to be used." msgstr "Una serie especificada de servidores DNS para ser utilizados." msgid "" "A string specifying a symbolic name for the network, which is not required " "to be unique." msgstr "" "Una serie que especifica un nombre simbólico para la red, que no necesita " "ser exclusivo." msgid "" "A string specifying a symbolic name for the security group, which is not " "required to be unique." msgstr "" "Una cadena especificando un nombre simbólico para el grupo de seguridad, que " "no es necesario que sea único." msgid "A string specifying physical network mapping for the network." msgstr "Una serie que especifica la correlación de red física para la red." msgid "A string specifying the provider network type for the network." msgstr "Una serie que especifica el tipo de red de proveedor para la red." msgid "A string specifying the segmentation id for the network." msgstr "Una serie que especifica el id de segmentación para la red." msgid "A symbolic name for this port." msgstr "Nombre simbólico para este puerto." msgid "A url to handle the alarm using native API." msgstr "Un URL para manejar la alarma utilizando la API nativa." msgid "" "A variable that this resource will use to replace with the current index of " "a given resource in the group. 
Can be used, for example, to customize the " "name property of grouped servers in order to differentiate them when listed " "with nova client." msgstr "" "Una variable que utilizará este recurso para sustituir el índice actual de " "un determinado recurso en el grupo. Por ejemplo, puede utilizarse para " "personalizar la propiedad de nombre de servidores agrupados para " "diferenciarlos cuando se listen con el cliente nova." msgid "AWS compatible instance name." msgstr "Nombre de instancia compatible con AWS." msgid "AWS query string is malformed, does not adhere to AWS spec" msgstr "" "cadena de consulta AWS esta formada incorrectamente, no se adhiere a la " "especificación AWS" msgid "Access policies to apply to the user." msgstr "Políticas de acceso a aplicar al usuario." #, python-format msgid "AccessPolicy resource %s not in stack" msgstr "El recurso AccessPolicy %s no está en la pila" #, python-format msgid "Action %s not allowed for user" msgstr "Acción %s no esta permitida por el usuario" msgid "Action to be performed on the traffic matching the rule." msgstr "Acción que se debe realizar en el tráfico que coincide con la regla." msgid "Actual input parameter values of the task." msgstr "Valores de los parámetros de entrada reales de la tarea." msgid "Add needed policies directly to the task, Policy keyword is not needed" msgstr "" "Añada las políticas necesarias directamente a las tareas, no es necesaria " "una palabra clave de política." msgid "Additional MAC/IP address pairs allowed to pass through a port." msgstr "" "Pares de direcciones MAC/IP adicionales a los que se permite pasar por un " "puerto." msgid "Additional MAC/IP address pairs allowed to pass through the port." msgstr "" "Pares de direcciones MAC/IP adicionales a los que se permite pasar por el " "puerto." msgid "Additional routes for this subnet." msgstr "Rutas adicionales para esta subred." msgid "Address family of the address scope, which is 4 or 6." msgstr "Familia de direcciones del ámbito de dirección, que es 4 o 6." msgid "" "Address of the notification. It could be a valid email address, url or " "service key based on notification type." msgstr "" "Dirección de la notificación. Puede ser una dirección de correo electrónico " "válida, un URL o una clave de servicio basada en el tipo notificación." msgid "" "Address to bind the server. Useful when selecting a particular network " "interface." msgstr "" "Dirección para enlazar el servidor. Útil al seleccionar una red particular " "interfaz." msgid "Administrative state for the ipsec site connection." msgstr "Estado administrativo para la conexión del sitio ipsec." msgid "Administrative state for the vpn service." msgstr "Estado administrativo para el servicio vpn." msgid "" "Administrative state of the firewall. If false (down), firewall does not " "forward packets and will drop all traffic to/from VMs behind the firewall." msgstr "" "Estado administrativo del cortafuegos. Si es false (apagado), el cortafuegos " "no reenvía paquetes y descartará todo el tráfico hacia/desde las VM detrás " "del cortafuegos." msgid "Administrative state of the router." msgstr "Estado administrativo de direccionador." #, python-format msgid "Alarm %(alarm)s could not find scaling group named \"%(group)s\"" msgstr "" "La alarma %(alarm)s no ha podido encontrar el grupo de escala denominado " "\"%(group)s\"" #, python-format msgid "Algorithm must be one of %s" msgstr "El algoritmo debe ser uno de %s" msgid "All heat engines are down." 
msgstr "Todos los motores de Heat están inactivos." msgid "Allocated floating IP address." msgstr "Dirección IP flotante asignada." msgid "Allocation ID for VPC EIP address." msgstr "ID de asignación para la dirección EIP VPC." msgid "Allow client's debug log output." msgstr "Permitir resultados del log del cliente." msgid "Allow or deny action for this firewall rule." msgstr "Permitir o denegar acción para esta regla de cortafuegos." msgid "Allow orchestration of multiple clouds." msgstr "Permitir orquestración de múltiples nubes." msgid "" "Allow reauthentication on token expiry, such that long-running tasks may " "complete. Note this defeats the expiry of any provided user tokens." msgstr "" "Permitir la reautenticación cuando caduque un token de forma que las tareas " "that de larga duración puedan completarse. Tenga en cuenta que esto vence a " "la caducidad de cualquier token de usuario proporcionado." msgid "" "Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At " "least one endpoint needs to be specified." msgstr "" "Puntos de acceso Keystone permitidos para auth_uri cuando multi_cloud esta " "habilitado. Debe especificar al menos un punto de acceso." msgid "" "Allowed tenancy of instances launched in the VPC. default - any tenancy; " "dedicated - instance will be dedicated, regardless of the tenancy option " "specified at instance launch." msgstr "" "Arrendamiento permitido de instancias iniciadas en VPC. valor predeterminado " "- cualquier arrendamiento; dedicado - se dedicará la instancia, " "independientemente de la opción de arrendamiento especificada en el inicio " "de instancia." #, python-format msgid "Allowed values: %s" msgstr "Valores permitidos: %s" msgid "AllowedPattern must be a string" msgstr "AllowedPattern debe ser una serie" msgid "AllowedValues must be a list" msgstr "Los valores permitidos deben ser una lista" msgid "Allowing not to store action results after task completion." msgstr "" "Permitiendo no almacenar los resultados de la acción tras completar la tarea." msgid "" "Allows to synchronize multiple parallel workflow branches and aggregate " "their data. Valid inputs: all - the task will run only if all upstream tasks " "are completed. Any numeric value - then the task will run once at least this " "number of upstream tasks are completed and corresponding conditions have " "triggered." msgstr "" "Permite sincronizar múltiples ramas de flujo de trabajo paralelas y agregar " "sus datos. Entradas válidas: todas - la tarea solo se ejecutará si todas las " "tareas anteriores se han completado. Cualquier valor numérico - en este " "caso, la tarea se ejecutará una vez si al menos se han completado este " "número de tareas anteriores y se han desencadenado las condiciones " "correspondientes." #, python-format msgid "Ambiguous versions (%s)" msgstr "Versiones ambiguas (%s)" msgid "" "Amount of disk space (in GB) required to boot image. Default value is 0 if " "not specified and means no limit on the disk size." msgstr "" "Cantidad de espacio de disco (en GB) necesario para la imagen de arranque. " "El valor predeterminado es 0 si no se especifica y significa que no hay " "ningún límite en el tamaño del disco." msgid "" "Amount of ram (in MB) required to boot image. Default value is 0 if not " "specified and means no limit on the ram size." msgstr "" "Cantidad de RAM (en MB) necesario para la imagen de arranque. El valor " "predeterminado es 0 si no se especifica y significa que no hay ningún límite " "en el tamaño de ram." 
msgid "An address scope ID to assign to the subnet pool." msgstr "Un ID de ámbito de dirección a asignar a la agrupación de subred." msgid "An application health check for the instances." msgstr "Una comprobación de estado de aplicación para las instancias." msgid "An ordered list of firewall rules to apply to the firewall." msgstr "" "Una lista ordenada de reglas de cortafuegos que se aplican al cortafuegos." msgid "" "An ordered list of nics to be added to this server, with information about " "connected networks, fixed ips, port etc." msgstr "" "Una lista ordenada de nics a añadir a este servidor, con información sobre " "redes conectadas, ip fijas, puerto, etc." msgid "An unknown exception occurred." msgstr "Una excepcion desconocida ha ocurrido" msgid "" "Any data structure arbitrarily containing YAQL expressions that defines " "workflow output. May be nested." msgstr "" "Cualquier estructura de datos que contenga arbitrariamente expresiones YAQL " "que defina la salida del flujo de trabajo. Puede ser anidada." msgid "Anything other than one VPCZoneIdentifier" msgstr "Cualquier cosa aparte de un VPCZoneIdentifier" msgid "Api endpoint reference of the instance." msgstr "Referencia de punto final de Api de la instancia." msgid "" "Arbitrary key-value pairs specified by the client to help boot a server." msgstr "" "Pares de clave-valor arbitrarios especificados por el cliente para ayudar a " "arrancar un servidor." msgid "" "Arbitrary key-value pairs specified by the client to help the Cinder " "scheduler creating a volume." msgstr "" "Pares de clave-valor arbitrarios especificados por el cliente para ayudar al " "planificador de Cinder a crear un volumen." msgid "" "Arbitrary key/value metadata to store contextual information about this " "queue." msgstr "" "Metadatos arbitrarios de pares clave/valor para almacenar información " "contextual sobre esta cola." msgid "" "Arbitrary key/value metadata to store for this server. Both keys and values " "must be 255 characters or less. Non-string values will be serialized to JSON " "(and the serialized string must be 255 characters or less)." msgstr "" "Metadatos de clavees/valores arbitrarios a almacenar para este servidor. Las " "claves y los valores deben tener 255 caracteres o menos. Los valores no de " "serie se serializarán en JSON (y la serie serializada debe tener 255 " "caracteres o menos)." msgid "Arbitrary key/value metadata to store information for aggregate." msgstr "" "Metadatos arbitrarios de pares clave/valor para almacenar información para " "el agregado." 
#, python-format msgid "Argument to \"%s\" must be a list" msgstr "El argumento en \"%s\" debe ser una lista" #, python-format msgid "Argument to \"%s\" must be a string" msgstr "El argumento en \"%s\" debe ser una serie" #, python-format msgid "Argument to \"%s\" must be string or list" msgstr "El argumento en \"%s\" debe ser una serie o lista" #, python-format msgid "Argument to function \"%s\" must be a list of strings" msgstr "El argumento para la función \"%s\" debe ser una lista de series" #, python-format msgid "" "Arguments to \"%s\" can be of the next forms: [resource_name] or " "[resource_name, attribute, (path), ...]" msgstr "" "Los argumentos para \"%s\" pueden tener los siguientes formatos: " "[nombre_recurso] o [nombre_recurso, atributo, (ruta), ...]" #, python-format msgid "Arguments to \"%s\" must be a map" msgstr "Los argumentos en \"%s\" deben ser una correlación" #, python-format msgid "Arguments to \"%s\" must be of the form [index, collection]" msgstr "Los argumentos en \"%s\" deben tener el formato [índice, colección]" #, python-format msgid "" "Arguments to \"%s\" must be of the form [resource_name, attribute, " "(path), ...]" msgstr "" "Los argumentos en \"%s\" deben tener el formato [nombre_recursos, atributo, " "(vía de acceso), ...]" #, python-format msgid "Arguments to \"%s\" must be of the form [resource_name, attribute]" msgstr "" "Los argumentos en \"%s\" deben tener el formato [nombre_recurso, atributo]" #, python-format msgid "Arguments to %s not fully resolved" msgstr "Los argumentos en %s no se han resuelto por completo" #, python-format msgid "Attempt to delete a stack with id: %(id)s %(msg)s" msgstr "Intento de suprimir una pila con el id: %(id)s %(msg)s" #, python-format msgid "Attempt to delete user creds with id %(id)s that does not exist" msgstr "" "Intento de supresión de credenciales de usuario con el id %(id)s que no " "existe" #, python-format msgid "Attempt to delete watch_rule: %(id)s %(msg)s" msgstr "Intento de suprimir watch_rule: %(id)s %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(msg)s" msgstr "Intento de actualizar una pila con el id: %(id)s %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(traversal)s %(msg)s" msgstr "Intento de actualizar una pila con el id: %(id)s %(traversal)s %(msg)s" #, python-format msgid "Attempt to update a watch with id: %(id)s %(msg)s" msgstr "Intento de actualizar una vigilancia con el id: %(id)s %(msg)s" msgid "Attempt to use stored_context with no user_creds" msgstr "Intento de utilizar stored_context sin user_creds" #, python-format msgid "Attribute %(attr)s for facade %(type)s missing in provider" msgstr "Falta el atributo %(attr)s para la fachada %(type)s en el proveedor" msgid "Audit status of this firewall policy." msgstr "Estado de auditoría de esta política de cortafuegos." msgid "Authentication Endpoint URI." msgstr "URI de Autenticación del Punto de Acceso." msgid "Authentication hash algorithm for the ike policy." msgstr "El algoritmo hash de autenticación para la política ike." msgid "Authentication hash algorithm for the ipsec policy." msgstr "El algoritmo hash de autenticación para la política ipsec." msgid "Authorization failed." msgstr "Ha fallado la autorización." msgid "AutoScaling group ID to apply policy to." msgstr "ID de grupo de escalado automático al que aplicar la política." msgid "AutoScaling group name to apply policy to." msgstr "Escalado automático de nombre de grupo al que aplicar política." 
msgid "Availability Zone of the subnet." msgstr "Zona de disponibilidad de la subred." msgid "Availability zone in which you want the subnet." msgstr "Zona de disponibilidad en la que desea la subred." msgid "Availability zone to create servers in." msgstr "Zona de disponibilidad en la que crear los servidores." msgid "Availability zone to create volumes in." msgstr "Zona de disponibilidad en la que crear los volúmenes." msgid "Availability zone to launch the instance in." msgstr "Zona de disponibilidad en la que iniciar la instancia." msgid "Backend authentication failed" msgstr "Ha fallado la autenticación del programa de fondo" msgid "Binary" msgstr "Binario" msgid "Block device mappings for this server." msgstr "Correlaciones de dispositivo de bloque para este servidor." msgid "Block device mappings to attach to instance." msgstr "Correlaciones de dispositivo de bloque que se conectan a la instancia." msgid "Block device mappings v2 for this server." msgstr "Correlaciones de dispositivo de bloque v2 para este servidor." msgid "" "Boolean extra spec that used for filtering of backends by their capability " "to create share snapshots." msgstr "" "Especificación adicional booleana que se utiliza para el filtrado de " "programas de fondo por su capacidad de crear instantáneas de recursos " "compartidos." msgid "Boolean indicating if the volume can be booted or not." msgstr "Booleano que indica si el volumen se puede arrancar o no." msgid "Boolean indicating if the volume is encrypted or not." msgstr "Booleano que indica si el volumen está cifrado o no." msgid "" "Boolean indicating whether allow the volume to be attached more than once." msgstr "Booleano que indica si se permite conectar el volumen más de una vez." msgid "" "Bus of the device: hypervisor driver chooses a suitable default if omitted." msgstr "" "Bus de dispositivo: El controlador de hipervisor elige un valor " "predeterminado si está omitido." msgid "CIDR block notation for this subnet." msgstr "Notación de bloque CIDR para esta subred." msgid "CIDR block to apply to subnet." msgstr "Bloque CIDR a aplicar a la subred." msgid "CIDR block to apply to the VPC." msgstr "Bloque CIDR a aplicar a VPC." msgid "CIDR of subnet." msgstr "CIDR de la subred." msgid "CIDR to be associated with this metering rule." msgstr "CIDR a asociar con esta regla de medición." #, python-format msgid "Can not specify property \"%s\" if the volume type is public." msgstr "" "No se puede especificar la propiedad \"%s\" si el el tipo de volumen es " "público." #, python-format msgid "Can not use %s property on Nova-network." msgstr "No se puede usar la propiedad %s en Nova-red." 
#, python-format msgid "Can't find role %s" msgstr "No se ha podido encontrar rol %s" msgid "Can't get user token without password" msgstr "No se puede obtener la señal de usuario sin una contraseña" msgid "Can't get user token, user not yet created" msgstr "" "No se puede obtener la señal de usuario; el usuario no se ha creado todavía" msgid "Can't traverse attribute path" msgstr "No se puede atravesar la ruta de atributo" #, python-format msgid "Cancelling update when stack is %s" msgstr "Cancelación de actualización cuando la pila es %s" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "No se puede ejecutar %(method)s en un objecto huérfano %(objtype)s" #, python-format msgid "Cannot check %s, stack not created" msgstr "No se puede comprobar %s, pila no creada" #, python-format msgid "Cannot define the following properties at the same time: %(props)s." msgstr "" "No se puede definir las siguientes propiedades al mismo tiempo: %(props)s." #, python-format msgid "" "Cannot establish connection to Heat endpoint at region \"%(region)s\" due to " "\"%(exc)s\"" msgstr "" "No se puede establecer la conexión con el punto final de Heat en la región " "\"%(region)s\" debido a \"%(exc)s\"" msgid "" "Cannot get stack domain user token, no stack domain id configured, please " "fix your heat.conf" msgstr "" "No se ha podido obtener token de usuario del dominio, no está configurado el " "id de dominio de la pila, por favor corriga su heat.conf" msgid "Cannot migrate to lower schema version." msgstr "No se ha podido migrar a una versión inferior del esquema" #, python-format msgid "Cannot modify readonly field %(field)s" msgstr "No se puede modificar el campo de solo lectura %(field)s" #, python-format msgid "Cannot resume %s, resource not found" msgstr "No se puede reanudar %s, no se ha encontrado el recurso" #, python-format msgid "Cannot resume %s, resource_id not set" msgstr "No se puede reanudar %s, no se estableció resource_id" #, python-format msgid "Cannot resume %s, stack not created" msgstr "No se puede reanudar %s, no se ha creado la pila" #, python-format msgid "Cannot suspend %s, resource not found" msgstr "No se puede suspender %s, no se ha encontrado el recurso" #, python-format msgid "Cannot suspend %s, resource_id not set" msgstr "No se puede suspender %s, no se ha establecido resource_id" #, python-format msgid "Cannot suspend %s, stack not created" msgstr "No se puede suspender %s, no se ha creado la pila" msgid "Captured stderr from the configuration execution." msgstr "Error estándar capturado de la ejecución de configuración." msgid "Captured stdout from the configuration execution." msgstr "Salida estándar capturada de la ejecución de configuración." #, python-format msgid "Circular Dependency Found: %(cycle)s" msgstr "Dependencia Circular Encontrada: %(cycle)s" msgid "Client entity to poll." msgstr "Entidad de cliente a sondear." msgid "Client name and resource getter name must be specified." msgstr "" "Se debe especificar el nombre del cliente y el nombre del detter del recurso." msgid "Client to poll." msgstr "Cliente a sondear." msgid "Cluster configs dictionary." msgstr "Diccionario de configuraciones de clúster." msgid "Cluster information." msgstr "Información del clúster." msgid "Cluster metadata." msgstr "Metadatos del clúster." msgid "Cluster name." msgstr "Nombre del clúster." msgid "Cluster status." msgstr "Estado del clúster." msgid "Comparison operator." msgstr "Operador de comparación." 
#, python-format msgid "Concurrent transaction for %(action)s" msgstr "Transacción simultánea para %(action)s" msgid "Configuration of session persistence." msgstr "Configuración de persistencia de sesión." msgid "" "Configuration script or manifest which specifies what actual configuration " "is performed." msgstr "" "Script de configuración o manifiesto que especifica qué configuración real " "se realiza." msgid "Configure most important configs automatically." msgstr "Configure las configuraciones más importantes de forma automática." #, python-format msgid "Confirm resize for server %s failed" msgstr "Ha fallado la confirmación de redimensionamiento del servidor %s" #, python-format msgid "" "Conflicting merge strategy '%(strategy)s' for parameter '%(param)s' in file " "'%(env_file)s'." msgstr "" "Conflicto con la estrategia de unión '%(strategy)s' para el parámetro " "'%(param)s' en archivo '%(env_file)s'." msgid "Connection info for this network gateway." msgstr "Información de conexión para esta pasarela de red." #, python-format msgid "Container '%(name)s' creation failed: %(code)s - %(reason)s" msgstr "" "Ha fallado la creación del contenedor '%(name)s': %(code)s - %(reason)s" msgid "Container format of image." msgstr "Formato de contenedor de la imagen." msgid "" "Content of part to attach, either inline or by referencing the ID of another " "software config resource." msgstr "" "Contenido de la parte a conectar, sea en línea o haciendo referencia al ID " "de otro recurso de configuración de software" msgid "Context for this stack." msgstr "Contexto para esta pila." msgid "Continue ? [y/N]" msgstr "Continuar? Si/No [y/N]" msgid "Control how the disk is partitioned when the server is created." msgstr "Controlar cómo se particiona el disco cuando se crea el servidor." msgid "Controls DPD protocol mode." msgstr "Modalidad de protocolo DPD de controles." msgid "" "Convenience attribute to fetch the first assigned network address, or an " "empty string if nothing has been assigned at this time. Result may not be " "predictable if the server has addresses from more than one network." msgstr "" "Atributo de conveniencia para captar la primera dirección de red asignada o " "una serie vacía si no se ha asignado nada en este momento. El resultado " "puede no ser previsible si el servidor tiene direcciones de más de una red ." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure when signal_transport is set to " "TOKEN_SIGNAL. You can signal success by adding --data-binary '{\"status\": " "\"SUCCESS\"}' , or signal failure by adding --data-binary '{\"status\": " "\"FAILURE\"}'. This attribute is set to None for all other signal transports." msgstr "" "Atributo de conveniencia que proporciona el prefijo de mandato de CLI curl, " "que puede utilizarse para señalar la terminación o el error del manejador " "cuando signal_transport está definido como TOKEN_SIGNAL. Puede señalar el " "éxito añadiendo --data-binary '{\"status\": \"SUCCESS\"}' o señalar el error " "añadiendo --data-binary '{\"status\": \"FAILURE\"}'. Este atributo está " "definido a None en todos los demás transportes de señal." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure. You can signal success by " "adding --data-binary '{\"status\": \"SUCCESS\"}' , or signal failure by " "adding --data-binary '{\"status\": \"FAILURE\"}'." 
msgstr "" "Atributo de conveniencia que proporciona el prefijo de mandato de CLI curl, " "que puede utilizarse para señalar la terminación o el error del manejador. " "Puede señalar el éxito añadiendo --data-binary '{\"status\": \"SUCCESS\"}' o " "señalar el error añadiendo --data-binary '{\"status\": \"FAILURE\"}'." msgid "Cooldown period, in seconds." msgstr "Periodo de enfriamiento, en segundos." #, python-format msgid "Could not confirm resize of server %s" msgstr "No se ha podido confirmar el redimensionamiento del servidor %s" #, python-format msgid "Could not detach attachment %(att)s from server %(srv)s." msgstr "No se ha podido desconectar el adjunto %(att)s del servidor %(srv)s." #, python-format msgid "Could not fetch remote template \"%(name)s\": %(exc)s" msgstr "No se ha podido captar la plantilla remota \"%(name)s\": %(exc)s" #, python-format msgid "Could not fetch remote template '%(url)s': %(exc)s" msgstr "No se ha podido captar la plantilla remota '%(url)s': %(exc)s" #, python-format msgid "Could not load %(name)s: %(error)s" msgstr "No se ha podido cargar %(name)s: %(error)s" #, python-format msgid "Could not retrieve template: %s" msgstr "No se ha podido obtener la plantilla: %s" msgid "Create volumes on the same physical port as an instance." msgstr "Crear volúmenes en el mismo puerto físico como una instancia." msgid "" "Credentials used for swift. Not required if sahara is configured to use " "proxy users and delegated trusts for access." msgstr "" "Credenciales utilizadas para Swift. No son necesarias si Sahara está " "configurado para utilizar usuarios proxy y confianzas delegadas para el " "acceso." msgid "Cron expression." msgstr "Expresión CRON." msgid "Current share status." msgstr "Estado actual del recurso compartido." msgid "Custom LoadBalancer template can not be found" msgstr "No se puede encontrar plantilla de equilibrador de carga personalizada" msgid "DB instance restore point." msgstr "Punto de restauración de instancia de base de datos." msgid "DNS Domain id or name." msgstr "ID o nombre de dominio DNS." msgid "DNS IP address used inside tenant's network." msgstr "Dirección IP DNS utilizada dentro de la red del inquilino." msgid "DNS Record type." msgstr "Tipo de registro DNS." msgid "DNS domain serial." msgstr "Serial de dominio DNS." msgid "" "DNS record data, varies based on the type of record. For more details, " "please refer rfc 1035." msgstr "" "Los datos de registro DNS varían según el tipo de recurso. Para obtener " "información más detallada, consulte rfc 1035." msgid "" "DNS record priority. It is considered only for MX and SRV types, otherwise, " "it is ignored." msgstr "" "Prioridad del registro DNS. Solo se tiene en cuenta para los tipos MX y SRV, " "en otros casos se ignora." #, python-format msgid "Data supplied was not valid: %(reason)s" msgstr "Datos proporcionados no son válidos: %(reason)s" #, python-format msgid "" "Database %(dbs)s specified for user does not exist in databases for resource " "%(name)s." msgstr "" "La base de datos %(dbs)s especificada para el usuario no existe en las bases " "de datos para el recurso %(name)s." msgid "Database volume size in GB." msgstr "Tamaño de volumen de base datos en GB." #, python-format msgid "" "Databases property is required if users property is provided for resource %s." msgstr "" "La propiedad de bases de datos es necesaria si se proporciona la propiedad " "de usuarios para el recurso %s." 
#, python-format msgid "" "Datastore version %(dsversion)s for datastore type %(dstype)s is not valid. " "Allowed versions are %(allowed)s." msgstr "" "La versión de almacén de datos %(dsversion)s para el tipo de almacén de " "datos %(dstype)s no es válida. Las versiones permitidas son %(allowed)s." msgid "Datetime when a share was created." msgstr "Fecha y hora en que se ha creado un recurso compartido." msgid "" "Dead Peer Detection protocol configuration for the ipsec site connection." msgstr "" "Configuración de protocolo de detección de igual muerto para la conexión de " "sitio ipsec." msgid "Dead engines are removed." msgstr "Los motores inactivos se eliminan." msgid "Default TLS container reference to retrieve TLS information." msgstr "" "Referencia de contenedor TLS predeterminado para recuperar información de " "TLS." #, python-format msgid "Default must be a comma-delimited list string: %s" msgstr "" "El valor predeterminado debe ser una serie de lista delimitada por comas: %s" msgid "Default name or UUID of the image used to boot Hadoop nodes." msgstr "" "Nombre predeterminado o UUID de la imagen utilizada para arrancar nodos " "Hadoop." msgid "Default region name used to get services endpoints." msgstr "" "Nombre de región predeterminado utilizado para obtener puntos finales de " "servicios." msgid "Default settings for some of task attributes defined at workflow level." msgstr "" "Configuración predeterminada para algunos atributos de tarea definidos a " "nivel de flujo de trabajo." msgid "Default value for the input if none is specified." msgstr "Valor predeterminado de la entrada si no se especifica ninguno." msgid "" "Defines a delay in seconds that Mistral Engine should wait after a task has " "completed before starting next tasks defined in on-success, on-error or on-" "complete." msgstr "" "Define un retraso en segundos que indica el tiempo que el motor Mistral debe " "esperar una vez se ha completado una tarea hasta empezar las tareas " "siguientes definidas en on-success, on-error u on-complete." msgid "" "Defines a delay in seconds that Mistral Engine should wait before starting a " "task." msgstr "" "Define un retraso en segundos que indica el tiempo que el motor Mistral debe " "esperar antes de iniciar una tarea." msgid "Defines a pattern how task should be repeated in case of an error." msgstr "Define el patrón de cómo se debe repetir la tarea en caso de error." msgid "" "Defines a period of time in seconds after which a task will be failed " "automatically by engine if hasn't completed." msgstr "" "Define un periodo de tiempo en segundos tras el cual el motor hará fallar " "automáticamente una tarea si no se ha completado." msgid "Defines if share type is accessible to the public." msgstr "Define si el tipo de recurso compartido es accesible para el público." msgid "Defines if shared filesystem is public or private." msgstr "Define si el sistema de archivos compartido es público o privado." msgid "" "Defines the method in which the request body for signaling a workflow would " "be parsed. In case this property is set to True, the body would be parsed as " "a simple json where each key is a workflow input, in other cases body would " "be parsed expecting a specific json format with two keys: \"input\" and " "\"params\"." msgstr "" "Define el método con que se debe analizar el cuerpo de la solicitud para " "señalar un flijo de trabajo. 
Si esta propiedad está definida como True, el " "cuerpo se analiza como un json sencillo, donde cada clave es una entrada del " "flujo de trabajo, en otros casos el cuerpo se analizaría esperando un " "formato json específico con dos claves: \"input\" y \"params\"." msgid "" "Defines whether Mistral Engine should put the workflow on hold or not before " "starting a task." msgstr "" "Define si el el motor Mistral debe poner el flujo de trabajo en espera o no " "antes de empezar una tarea." msgid "Defines whether auto-assign security group to this Node Group template." msgstr "" "Define si se debe asignar automáticamente el grupo de seguridad a esta " "plantilla Grupo de nodos." #, python-format msgid "" "Defining more than one configuration for the same action in " "SoftwareComponent \"%s\" is not allowed." msgstr "" "No se permite la definición de más de una configuración para la misma acción " "en SoftwareComponent \"%s\"." msgid "Deleting in-progress snapshot" msgstr "Suprimiento la instantánea en curso" #, python-format msgid "Deleting non-empty container (%(id)s) when %(prop)s is False" msgstr "Supresión del contenedor no vacío (%(id)s) cuando %(prop)s es False" #, python-format msgid "Delimiter for %s must be string" msgstr "El delimitador para %s debe ser una serie" msgid "" "Denotes that the deployment is in an error state if this output has a value." msgstr "" "Indica que el despliegue está en estado de error si esta salida tiene un " "valor." msgid "Deploy data available" msgstr "Datos de despliegue disponibles" #, python-format msgid "Deployment exited with non-zero status code: %s" msgstr "El despliegue ha saldo con un código de estado distinto de cero: %s" #, python-format msgid "Deployment to server failed: %s" msgstr "Ha fallado el despliegue en el servidor: %s" #, python-format msgid "Deployment with id %s not found" msgstr "Despliegue con id %s no encontrado" msgid "Deprecated." msgstr "Descontinuado." msgid "" "Describe time constraints for the alarm. Only evaluate the alarm if the time " "at evaluation is within this time constraint. Start point(s) of the " "constraint are specified with a cron expression, whereas its duration is " "given in seconds." msgstr "" "Describa las restricciones de tiempo para la alarma. Evalúe la alarma solo " "si la hora de la evaluación cae dentro de la restricción de tiempo. El punto " "o puntos iniciales de la restricción se especifican con una expresión cron, " "mientras que la duración se da en segundos." msgid "Description for the alarm." msgstr "Descripción para la alarma." msgid "Description for the firewall policy." msgstr "Descripción de la política de cortafuegos." msgid "Description for the firewall rule." msgstr "Descripción de la regla de cortafuegos." msgid "Description for the firewall." msgstr "Descripción del cortafuegos." msgid "Description for the ike policy." msgstr "Descripción de la política ike." msgid "Description for the ipsec policy." msgstr "Descripción de la política ipsec." msgid "Description for the ipsec site connection." msgstr "Descripción de la conexión de sitio ipsec." msgid "Description for the time constraint." msgstr "Descripción de la restricción de tiempo." msgid "Description for the vpn service." msgstr "Descripción para el servicio vpn." msgid "Description for this interface." msgstr "Descripción para esta interfaz." msgid "Description of domain." msgstr "Descripción del dominio." msgid "Description of keystone group." msgstr "Descripción del grupo de keystone." 
msgid "Description of keystone project." msgstr "Descripción del proyecto de keystone." msgid "Description of keystone region." msgstr "Descripción de la región de keystone." msgid "Description of keystone service." msgstr "Descripción del servicio de keystone." msgid "Description of keystone user." msgstr "Descripción del usuario de keystone." msgid "Description of record." msgstr "Descripción del registro." msgid "Description of the Node Group Template." msgstr "Descripción de la plantilla del grupo de nodos." msgid "Description of the Sahara Group Template." msgstr "Descripción de la plantilla del grupo Sahara." msgid "Description of the alarm." msgstr "Descripción de la alarma." msgid "Description of the data source." msgstr "Descripción del origen de datos." msgid "Description of the firewall policy." msgstr "Descripción de la política de cortafuegos." msgid "Description of the firewall rule." msgstr "Descripción de la regla de cortafuegos." msgid "Description of the firewall." msgstr "Descripción del cortafuegos." msgid "Description of the image." msgstr "Descripción de la imagen." msgid "Description of the input." msgstr "Descripción de la entrada." msgid "Description of the job binary." msgstr "Descripción del binario del trabajo." msgid "Description of the metering label." msgstr "Descripción de la etiqueta de medición." msgid "Description of the output." msgstr "Descripción de la salida." msgid "Description of the pool." msgstr "Descripción de la agrupación." msgid "Description of the security group." msgstr "Descripción del grupo de seguridad." msgid "Description of the vip." msgstr "Descripción de vip." msgid "Description of the volume type." msgstr "Descripción del tipo de volumen." msgid "Description of the volume." msgstr "Descripción del volumen." msgid "Description of this Load Balancer." msgstr "Descripción de este equilibrador de carga." msgid "Description of this listener." msgstr "Descripción de este escucha." msgid "Description of this pool." msgstr "Descripción de esta agrupación." msgid "Desired IPs for this port." msgstr "Las IP deseadas para este puerto." msgid "Desired capacity of the cluster." msgstr "Capacidad deseada del clúster." msgid "Desired initial number of instances." msgstr "Número inicial deseado de instancias." msgid "Desired initial number of resources in cluster." msgstr "Número inicial deseado de recursos en el clúster." msgid "Desired initial number of resources." msgstr "Número inicial deseado de recursos." msgid "Desired number of instances." msgstr "Número deseado de instancias." msgid "DesiredCapacity must be between MinSize and MaxSize" msgstr "DesiredCapacity debe estar entre MinSize y MaxSize" msgid "Destination IP address or CIDR." msgstr "Dirección IP de destino o CIDR." msgid "Destination ip_address for this firewall rule." msgstr "ip_address de destino para esta regla de cortafuegos." msgid "Destination port number or a range." msgstr "Número de puerto de destino o un rango." msgid "Destination port range for this firewall rule." msgstr "Rango de puertos de destino para esta regla de cortafuegos." msgid "Detailed information about resource." msgstr "Información detallada sobre el recurso." msgid "Device ID of this port." msgstr "ID de dispositivo de este puerto." msgid "Device info for this network gateway." msgstr "Información de dispositivo para esta pasarela de red." msgid "" "Device type: at the moment we can make distinction only between disk and " "cdrom." 
msgstr "" "Tipo de dispositivo: En este momento se puede hacer distinción entre el " "disco y el cdrom." msgid "" "Dict, which has expand properties for port. Used only if port property is " "not specified for creating port." msgstr "" "Dict, que tiene propiedades de expansión para puerto. Se utiliza únicamente " "si no se ha especificado la propiedad puerto para crear el puerto." msgid "Dictionary containing workflow tasks." msgstr "Diccionario que contiene tareas de flujo de trabajo." msgid "Dictionary of node configurations." msgstr "Diccionario de configuraciones de nodo." msgid "Dictionary of variables to publish to the workflow context." msgstr "" "Diccionario de las variables a publicar en el contexto del flujo de trabajo." msgid "Dictionary which contains input for workflow." msgstr "El diccionario que contiene la entrada para el flujo de trabajo." msgid "" "Dictionary-like section defining task policies that influence how Mistral " "Engine runs tasks. Must satisfy Mistral DSL v2." msgstr "" "Sección de tipo diccionario que define las pilótocas de tarea que influyen " "en la forma en que el motor Mistral ejecuta las tareas. Debe satisfacer " "Mistral DSL v2." msgid "DisableRollback and OnFailure may not be used together" msgstr "DisableRollback y OnFailure no se pueden usar juntas" msgid "Disk format of image." msgstr "Formato de disco de la imagen." msgid "Does not contain a valid AWS Access Key or certificate" msgstr "No contiene una clave AWS Access Key o un certificádo válido" msgid "Domain email." msgstr "Correo electrónico del dominio." msgid "Domain name." msgstr "Nombre de dominio." #, python-format msgid "Duplicate names %s" msgstr "Nombres duplicados %s" msgid "Duplicate refs are not allowed." msgstr "No se admiten referencias duplicadas." msgid "Duration for the time constraint." msgstr "Duración de la restricción de tiempo." msgid "EIP address to associate with instance." msgstr "Dirección EIP a asociar con instancia." #, python-format msgid "Each %(object_name)s must contain a %(sub_section)s key." msgstr "Cada %(object_name)s debe contener una clave %(sub_section)s." msgid "Each Resource must contain a Type key." msgstr "Cada recurso debe contener una clave Type (tipo)." msgid "Ebs is missing, this is required when specifying BlockDeviceMappings." msgstr "Falta Ebs; es necesario cuando se especifica BlockDeviceMappings." msgid "" "Egress rules are only allowed when Neutron is used and the 'VpcId' property " "is set." msgstr "" "Reglas de egreso sólo se permiten cuando Neutron está un uso y la propiedad " "'VpcId' esta definida." #, python-format msgid "Either %(net)s or %(port)s must be provided." msgstr "Debe especificarse %(net)s o %(port)s." msgid "Either 'EIP' or 'AllocationId' must be provided." msgstr "Es necesario proporcionar 'EIP' o 'AllocationId'." msgid "Either 'InstanceId' or 'LaunchConfigurationName' must be provided." msgstr "Debe proporcionarse 'InstanceId' o 'LaunchConfigurationName'." #, python-format msgid "Either project or domain must be specified for role %s" msgstr "Se debe especificar el proyecto o bien el dominio para el rol %s." #, python-format msgid "Either volume_id or snapshot_id must be specified for device mapping %s" msgstr "" "Se debe especificar volume_id o snapshot_id para la correlación de " "dispositivo %s" msgid "Email address of keystone user." msgstr "Dirección de correo electrónico del usuario de keystone." msgid "Enable the legacy OS::Heat::CWLiteAlarm resource." msgstr "Habilitar el recurso heredado OS::Heat::CWLiteAlarm." 
msgid "Enable the preview Stack Abandon feature." msgstr "" "Habilitar la característica Stack Abandon (abandono de pila) de vista previa." msgid "Enable the preview Stack Adopt feature." msgstr "" "Habilitar la característica Stack Adopt (adopción de pila) de vista previa." msgid "" "Enables Source NAT on the router gateway. NOTE: The default policy setting " "in Neutron restricts usage of this property to administrative users only." msgstr "" "Habilita el NAT de origen en la pasarela de direccionador. NOTA: el valor de " "política predeterminada en Neutron restringe el uso de esta propiedad sólo a " "los usuarios administrativos." msgid "" "Enables engine with convergence architecture. All stacks with this option " "will be created using convergence engine." msgstr "" "Habilita el motor con arquitectura de convergencia. Todas las pilas con esta " "opción se crearán utilizando el motor de convergencia." msgid "Enables or disables read-only access mode of volume." msgstr "Habilita o deshabilita el modo de acceso de solo lectura del volumen." msgid "Encapsulation mode for the ipsec policy." msgstr "Modalidad de encapsulación de la política ipsec." msgid "" "Encrypt template parameters that were marked as hidden and also all the " "resource properties before storing them in database." msgstr "" "Cifrar los parámetros de plantilla que se han marcado como ocultos y también " "todas las propiedades del recurso antes de almacenarlos en la base de datos." msgid "Encryption algorithm for the ike policy." msgstr "Algoritmo de cifrado para la política ike." msgid "Encryption algorithm for the ipsec policy." msgstr "Algoritmo de cifrado para la política ipsec." msgid "End address for the allocation pool." msgstr "Dirección final de la agrupación de asignación." #, python-format msgid "End resizing the group %(group)s" msgstr "Finalizar redimensionamiento del grupo %(group)s" msgid "" "Endpoint/url which can be used for signalling handle when signal_transport " "is set to TOKEN_SIGNAL. None for all other signal transports." msgstr "" "Punto final/URL que puede utilizarse para señalar el manejador cuando " "signal_transport está definido como TOKEN_SIGNAL. Y hay None en todos los " "demás transportes de señal." msgid "Endpoint/url which can be used for signalling handle." msgstr "" "Punto final/url que puede utilizarse para la señalización del manejador." msgid "Engine_Id" msgstr "Engine_Id" msgid "Error" msgstr "Error" #, python-format msgid "Error authorizing action %s" msgstr "Error autorizando la acción %s" #, python-format msgid "Error creating ec2 keypair for user %s" msgstr "Error al crear par de claves ec2 para el usuario %s" msgid "" "Error during applying access rules to share \"{0}\". The root cause of the " "problem is the following: {1}." msgstr "" "Error al aplicar las reglas de acceso al recurso compartido \"{0}\". La " "causa raíz del problema es la siguiente: {1}." msgid "Error during creation of share \"{0}\"" msgstr "Error durante la creación del recurso compartido \"{0}\"" msgid "Error during deleting share \"{0}\"." msgstr "Error al suprimir el recurso compartido \"{0}\"" #, python-format msgid "Error validating value '%(value)s'" msgstr "Error al validar el valor '%(value)s'" #, python-format msgid "Error validating value '%(value)s': %(message)s" msgstr "Error al validar el valor '%(value)s': %(message)s" msgid "Ethertype of the traffic." msgstr "Ethertype del tráfico." msgid "Exclude state for cidr." msgstr "Excluir estado para cidr." 
#, python-format msgid "Expected 1 external network, found %d" msgstr "Se esperaba 1 red externa, se ha encontrado %d" msgid "Export locations of share." msgstr "Exportar ubicaciones del recurso compartido." msgid "Expression of the alarm to evaluate." msgstr "Expresión de la alarma a evaluar." msgid "External fixed IP address." msgstr "Dirección IP fija externa." msgid "External fixed IP addresses for the gateway." msgstr "Direcciones IP fijas externas de la pasarela." msgid "External network gateway configuration for a router." msgstr "Configuración de pasarela de red externa para un direccionador." msgid "" "Extra parameters to include in the \"floatingip\" object in the creation " "request. Parameters are often specific to installed hardware or extensions." msgstr "" "Parámetros adicionales a incluir el objeto \"floatingip\" en la solicitud de " "creación. Normalmente los parámetros son específicos del hardware o de las " "extensiones instalados." msgid "Extra parameters to include in the creation request." msgstr "Parámetros adicionales a incluir en la solicitud de creación." msgid "Extra parameters to include in the request." msgstr "Parámetros adicionales a incluir en la solicitud." msgid "" "Extra parameters to include in the request. Parameters are often specific to " "installed hardware or extensions." msgstr "" "Parámetros adicionales a incluir en la solicitud. Normalmente los parámetros " "son específicos del hardware o de las extensiones instalados." msgid "Extra specs key-value pairs defined for share type." msgstr "" "Pares de clave-valor de especificación adicional definidas para el tipo de " "recurso compartido." #, python-format msgid "Failed to attach interface (%(port)s) to server (%(server)s)" msgstr "" "No se ha podido conectar la interfaz (%(port)s) con el servidor (%(server)s)" #, python-format msgid "Failed to attach volume %(vol)s to server %(srv)s - %(err)s" msgstr "" "No se ha podido conectar el volumen %(vol)s al servidor %(srv)s - %(err)s" #, python-format msgid "Failed to create Bay '%(name)s' - %(reason)s" msgstr "No se ha podido crear la bahía '%(name)s' - %(reason)s" #, python-format msgid "Failed to detach interface (%(port)s) from server (%(server)s)" msgstr "" "No se ha podido desconectar la interfaz (%(port)s) del servidor (%(server)s)" #, python-format msgid "Failed to execute %(action)s for %(cluster)s: %(reason)s" msgstr "No se ha podido ejecutar %(action)s para %(cluster)s: %(reason)s" #, python-format msgid "Failed to extend volume %(vol)s - %(err)s" msgstr "No se ha podido ampliar el volumen %(vol)s - %(err)s" #, python-format msgid "Failed to fetch template: %s" msgstr "No se ha podido obtener la plantilla: %s" #, python-format msgid "Failed to find instance %s" msgstr "No se ha podido encontrar la instancia %s" #, python-format msgid "Failed to find server %s" msgstr "Error al buscar el servidor %s" #, python-format msgid "Failed to parse JSON data: %s" msgstr "No se han podido analizar los datos JSON: %s" #, python-format msgid "Failed to restore volume %(vol)s from backup %(backup)s - %(err)s" msgstr "" "No se ha podido restaurar el volumen %(vol)s de la copia de seguridad " "%(backup)s - %(err)s" msgid "Failed to retrieve template" msgstr "No se ha podido recuperar la plantilla" #, python-format msgid "Failed to retrieve template data: %s" msgstr "No se han podido recuperar los datos de plantilla: %s" #, python-format msgid "Failed to retrieve template: %s" msgstr "No se ha podido recuperar la plantilla: %s" #, python-format msgid "" "Failed 
to send message to stack (%(stack_name)s) on other engine " "(%(engine_id)s)" msgstr "" "Error al enviar el mensaje a la pila ( %(stack_name)s) en otro motor " "(%(engine_id)s)" #, python-format msgid "Failed to stop stack (%(stack_name)s) on other engine (%(engine_id)s)" msgstr "" "No se ha podido detener la pila (%(stack_name)s) en otro motor " "(%(engine_id)s)" #, python-format msgid "Failed to update Bay '%(name)s' - %(reason)s" msgstr "No se ha podido actualizar la bahía '%(name)s' - %(reason)s" msgid "Failed to update, can not found port info." msgstr "Ha fallado la actualización, no se encuentra información de puerto." #, python-format msgid "" "Failed validating stack template using Heat endpoint at region \"%(region)s" "\" due to \"%(exc)s\"" msgstr "" "No se ha podido validar la plantilla de pila utilizando el punto final de " "Heat en la región \"%(region)s\" debido a \"%(exc)s\"" msgid "Fake attribute !a." msgstr "Atributo ficticio !a." msgid "Fake attribute a." msgstr "Atributo ficticio a. " msgid "Fake property !a." msgstr "Propiedad ficticia !a." msgid "Fake property !c." msgstr "Propiedad ficticia !c." msgid "Fake property a." msgstr "Propiedad ficticia a." msgid "Fake property c." msgstr "Propiedad ficticia c." msgid "Fake property ca." msgstr "Propiedad ficticia ca." msgid "" "False to trigger actions when the threshold is reached AND the alarm's state " "has changed. By default, actions are called each time the threshold is " "reached." msgstr "" "Falso para desencadenar acciones cuando se alcanza el umbral Y el estado de " "la alarma ha cambiado. De forma predeterminada, se llaman a las acciones " "cada vez que se alcanza el umbral." #, python-format msgid "Field %(field)s of %(objname)s is not an instance of Field" msgstr "El campo %(field)s de %(objname)s no es una instancia de campo." msgid "" "Fixed IP address to specify for the port created on the requested network." msgstr "" "Dirección IP fija a especificar para el puerto creado en la red solicitada." msgid "Fixed IP addresses." msgstr "Direcciones IP fijas." msgid "Fixed IPv4 address for this NIC." msgstr "Dirección IPv4 fija para este NIC." msgid "Flag indicating if traffic to or from instance is validated." msgstr "" "Distintivo que indica si se valida el tráfico hacia o desde la instancia." msgid "" "Flag to enable/disable port security on the network. It provides the default " "value for the attribute of the ports created on this network." msgstr "" "Distintivo para habilitar/deshabilitar la seguridad de puertos en la red. " "Proporciona el valor predeterminado para el atributo de los puertos creados " "en esta red." msgid "" "Flag to enable/disable port security on the port. When disable this " "feature(set it to False), there will be no packages filtering, like security-" "group and address-pairs." msgstr "" "Distintivo para habilitar/deshabilitar la seguridad de puertos en el puerto. " "Si se deshabilita esta característica (definiéndola como False), no se " "realizará filtrado de paquetes, como grupo de seguridad y pares de " "direcciones." msgid "Flavor of the instance." msgstr "Tipo de la instancia." msgid "Friendly name of the port." msgstr "Nombre descriptivo del puerto." msgid "Friendly name of the router." msgstr "Nombre descriptivo del direccionador." msgid "Friendly name of the subnet." msgstr "Nombre familiár de la subred." 
#, python-format msgid "Function \"%s\" must have arguments" msgstr "La función \"%s\" debe tener argumentos" #, python-format msgid "Function \"%s\" usage: [\"\", \"\"]" msgstr "Uso de la función \"%s\": [\"\", \"\"]" #, python-format msgid "Gateway IP address \"%(gateway)s\" is in invalid format." msgstr "" "La dirección IP de pasarela \"%(gateway)s\" tiene un formato no válido." msgid "Gateway network for the router." msgstr "Red de pasarela para el direccionador." msgid "Generic HeatAPIException, please use specific subclasses!" msgstr "HeatAPIException genérica, por favor use sub-classes especificas!" msgid "Glance image ID or name." msgstr "ID o nombre de imagen de Glance." msgid "Governs permissions set in manila for the cluster ips." msgstr "Governa los permisos definidos en Manila para las IP del clúster." msgid "Granularity to use for age argument, defaults to days." msgstr "" "Granularidad para el uso del argumento de envejecimiento, predeterminado a " "días." msgid "Hadoop cluster name." msgstr "Nombre de clúster de Hadoop." #, python-format msgid "Header X-Auth-Url \"%s\" not an allowed endpoint" msgstr "Cabecera X-Auth-Url \"%s\" no es un punto de acceso permitido" msgid "Health probe timeout, in seconds." msgstr "Tiempo de espera de análisis de salud, en segundos." msgid "" "Heat build revision. If you would prefer to manage your build revision " "separately, you can move this section to a different file and add it as " "another config option." msgstr "" "Revisión de compilación Heat. Si prefieres administrar su revisión de " "compilación por separado, puedes pasar esta sección a un archivo distinto y " "añadirlo como otra opción de configuración." msgid "Host" msgstr "Host" msgid "Hostname" msgstr "Nombre del host" msgid "Hostname of the instance." msgstr "Nombre de host de la instancia." msgid "How long to preserve deleted data." msgstr "Cuanto tiempo a preservár datos borrados." msgid "" "How the client will signal the wait condition. CFN_SIGNAL will allow an HTTP " "POST to a CFN keypair signed URL. TEMP_URL_SIGNAL will create a Swift " "TempURL to be signalled via HTTP PUT. HEAT_SIGNAL will allow calls to the " "Heat API resource-signal using the provided keystone credentials. " "ZAQAR_SIGNAL will create a dedicated zaqar queue to be signalled using the " "provided keystone credentials. TOKEN_SIGNAL will allow and HTTP POST to a " "Heat API endpoint with the provided keystone token. NO_SIGNAL will result in " "the resource going to a signalled state without waiting for any signal." msgstr "" "Forma en que el servidor señalará la condición de espera. CFN_SIGNAL " "permitirá un POST HTTP en un URL firmado con un par de claves de CFN. " "TEMP_URL_SIGNAL creará un TempURL de Swift que se señalará vía HTTP PUT. " "HEAT_SIGNAL permitirá llamadas a señal de recursos de la API de Heat " "utilizando las credenciales de keystone proporcionadas. ZAQAR_SIGNAL creará " "una cola zaqar dedicada que se señalará utilizando las credenciales de " "keystone proporcionadas. TOKEN_SIGNAL permitirá un POST HTTP a un punto " "final de la API de API con el token de keystone proporcionado. NO_SIGNAL " "dará como resultado que el recurso se ponga en estado COMPLETO, sin esperar " "ninguna señal." msgid "" "How the server should receive the metadata required for software " "configuration. POLL_SERVER_CFN will allow calls to the cfn API action " "DescribeStackResource authenticated with the provided keypair. 
" "POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the " "provided keystone credentials. POLL_TEMP_URL will create and populate a " "Swift TempURL with metadata for polling. ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Forma en que el servidor debe recibir los metadatos necesarios para la " "configuración del software. POLL_SERVER_CFN permitirá llamadas a la acción " "de la API de CFN DescribeStackResource autenticadas con el par de claves " "proporcionado. POLL_SERVER_HEAT permitirá llamadas a la muestra de recursos " "de la API de Heat utilizando las credenciales de keystone proporcionadas. " "POLL_TEMP_URL creará y cumplimentará un TempURL de Swift con metadatos para " "sondeo. ZAQAR_MESSAGE creará una cola zaqar dedicada y publicará los " "metadatos para el sondeo." msgid "How the server should signal to heat with the deployment output values." msgstr "" "Cómo debe el servidor señalar en Heat con los valores de salida de " "despliegue." msgid "" "How the server should signal to heat with the deployment output values. " "CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL. " "TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT. " "HEAT_SIGNAL will allow calls to the Heat API resource-signal using the " "provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar " "queue to be signaled using the provided keystone credentials. NO_SIGNAL will " "result in the resource going to the COMPLETE state without waiting for any " "signal." msgstr "" "Forma en que el servidor debe indicar a Heat los valores de salida del " "despliegue. CFN_SIGNAL permitirá un POST HTTP en un URL firmado con un par " "de claves de CFN. TEMP_URL_SIGNAL creará un TempURL Swift que se señalará " "vía HTTP PUT. HEAT_SIGNAL permitirá llamadas a señal de recursos de la API " "de Heat utilizando las credenciales de keystone proporcionadas. NO_SIGNAL " "dará como resultado que el recurso se ponga en estado COMPLETO, sin esperar " "ninguna señal." msgid "" "How the user_data should be formatted for the server. For HEAT_CFNTOOLS, the " "user_data is bundled as part of the heat-cfntools cloud-init boot " "configuration data. For RAW the user_data is passed to Nova unmodified. For " "SOFTWARE_CONFIG user_data is bundled as part of the software config data, " "and metadata is derived from any associated SoftwareDeployment resources." msgstr "" "La manera en que los datos de usuario se deben formatear para el servidor. " "Para HEAT_CFNTOOLS, los datos de usuario se empaquetan como parte de los " "datos de configuración de arranque heat-cfntools cloud-init. Para RAW, los " "datos de usuario se pasan a Nova sin modificarlos. Para SOFTWARE_CONFIG, los " "datos de usuario se empaquetan como parte de los datos de configuración de " "software y los metadatos se obtienen de los recursos SoftwareDeployment ." msgid "Human readable name for the secret." msgstr "El nombre del secreto legible por humanos." msgid "Human-readable name for the container." msgstr "Nombre legible para humanos del contenedor." msgid "" "ID list of the L3 agent. User can specify multi-agents for highly available " "router. NOTE: The default policy setting in Neutron restricts usage of this " "property to administrative users only." msgstr "" "Lista de ID del agente L3. El usuario puede especificar varios agentes para " "un direccionador de alta disponibilidad. 
NOTA: El valor de política " "predeterminada en Neutron restringe el uso de esta propiedad a usuarios " "administrativos solamente." msgid "ID of an existing port to associate with this server." msgstr "ID de un puerto existente a asociar con este servidor." msgid "" "ID of an existing port with at least one IP address to associate with this " "floating IP." msgstr "" "ID de un puerto existente con al menos una dirección IP a asociar con esta " "IP flotante." msgid "ID of network to create a port on." msgstr "ID de red en la que crear un puerto." msgid "ID of project for API authentication" msgstr "ID de proyecto para autenticación de API" msgid "ID of queue to use for signaling output values" msgstr "El ID de cola a utilizar para indicar valores de salida" msgid "" "ID of resource to apply configuration to. Normally this should be a Nova " "server ID." msgstr "" "ID del recurso al que se desea aplicar la configuración. Normalmente será un " "ID de servidor Nova." msgid "" "ID of server (VM, etc...) on host that is used for exporting network file-" "system." msgstr "" "ID del servidor (VM, etc...) en el host que se utiliza para exportar el " "sistema de archivos de red." msgid "ID of signal to use for signaling output values" msgstr "El ID de señal a utilizar para indicar valores de salida" msgid "" "ID of software configuration resource to execute when applying to the server." msgstr "" "ID de recurso de configuración de software a ejecutar cuando se aplica al " "servidor." msgid "ID of the Cluster Template used for Node Groups and configurations." msgstr "" "ID de la plantilla de clúster utilizada para los grupos de nodos y las " "configuraciones." msgid "ID of the InternetGateway." msgstr "ID de la pasarela de Internet." msgid "" "ID of the L3 agent. NOTE: The default policy setting in Neutron restricts " "usage of this property to administrative users only." msgstr "" "ID del agente L3. NOTA: El valor de política predeterminado en Neutron " "restringe el uso de esta propiedad a usuarios administrativos solamente." msgid "ID of the Node Group Template." msgstr "ID de la plantilla de grupos de nodos." msgid "ID of the VPNGateway to attach to the VPC." msgstr "ID de la pasarela VPN a adjuntar a la VPC." msgid "ID of the default image to use for the template." msgstr "ID de la imagen predeterminada para utilizar con la plantilla." msgid "ID of the default pool this listener is associated to." msgstr "" "ID de la agrupación predeterminada a la que está asociado este escucha." msgid "ID of the floating IP to assign to the server." msgstr "ID de la IP flotante a asignar al servidor." msgid "ID of the floating IP to associate." msgstr "ID de la IP flotante a asociar." msgid "ID of the health monitor associated with this pool." msgstr "ID del supervisor de estado asociado a esta agrupación." msgid "ID of the image to use for the template." msgstr "ID de la imagen a utilizar para la plantilla." msgid "ID of the load balancer this listener is associated to." msgstr "ID del equilibrador de carga al que está asociado este escucha." msgid "ID of the network in which this IP is allocated." msgstr "ID de la red en la que está asignada esta IP." msgid "ID of the port associated with this IP." msgstr "ID del puerto asociado con esta IP." msgid "ID of the queue." msgstr "ID de la cola." msgid "ID of the router used as gateway, set when associated with a port." msgstr "" "ID del direccionador utilizado como pasarela, establecido cuando se asocia " "con un puerto." msgid "ID of the router." 
msgstr "ID del direccionador." msgid "ID of the server being deployed to" msgstr "ID del servidor en el que se está desplegando" msgid "ID of the stack this deployment belongs to" msgstr "ID de la pila a la que pertenece este despliegue" msgid "ID of the tenant to which the RBAC policy will be enforced." msgstr "Id del arrendatario al que se aplicará la política RBAC." msgid "ID of the tenant who owns the health monitor." msgstr "ID del inquilino propietario del supervisor de estado." msgid "ID or name of the QoS policy." msgstr "ID o nombre de la política de QoS." msgid "ID or name of the RBAC object." msgstr "ID o nombre del objeto RBAC." msgid "ID or name of the external network for the gateway." msgstr "ID o nombre de la red externa para la pasarela." msgid "ID or name of the image to register." msgstr "ID o nombre de la imagen a registrar." msgid "ID or name of the load balancer with which listener is associated." msgstr "" "ID o nombre del equilibrador de carga con el que está asociado el escucha." msgid "ID or name of the load balancing pool." msgstr "ID o nombre de la agrupación de equilibrio de carga." msgid "" "ID that AWS assigns to represent the allocation of the address for use with " "Amazon VPC. Returned only for VPC elastic IP addresses." msgstr "" "ID que AWS asigna para representar la asignación de la dirección para " "utilizar con VPC Amazon. Sólo se devuelve para direcciones IP elásticas de " "VPC." msgid "IP address and port of the pool." msgstr "Dirección IP y puerto de la agrupación." msgid "IP address desired in the subnet for this port." msgstr "Dirección IP deseada en la subred para este puerto." msgid "IP address for the VIP." msgstr "Dirección IP de la VIP." msgid "IP address of the associated port, if specified." msgstr "Dirección IP del puerto asociado, si se especifica." msgid "" "IP address of the floating IP. NOTE: The default policy setting in Neutron " "restricts usage of this property to administrative users only." msgstr "" "Dirección IP de la IP flotante. NOTA: el valor de política predeterminado en " "Neutron restringe el uso de esta propiedad solo a usuarios administrativos." msgid "IP address of the pool member on the pool network." msgstr "Dirección IP del miembro de agrupación en la red de agrupación." msgid "IP address of the pool member." msgstr "Dirección IP del miembro de agrupación." msgid "IP address of the vip." msgstr "Dirección IP de vip." msgid "IP address to allow through this port." msgstr "Dirección IP a permitir a través de este puerto." msgid "IP address to use if the port has multiple addresses." msgstr "Dirección IP a utilizar si el puerto tiene varias direcciones." msgid "" "IP or other address information about guest that allowed to access to Share." msgstr "" "IP u otra información de dirección sobre el invitado que le han permitido " "acceder al recurso compartido." msgid "IPv6 RA (Router Advertisement) mode." msgstr "Modalidad RA (Router Advertisement) de IPv6. " msgid "IPv6 address mode." msgstr "Modalidad de dirección IPv6 ." msgid "Id of a resource." msgstr "El ID de un recurso." msgid "Id of the manila share." msgstr "ID del recurso compartido de Manila." msgid "Id of the tenant owning the firewall policy." msgstr "Id del arrendatario propietario de la política de cortafuegos." msgid "Id of the tenant owning the firewall." msgstr "Id del arrendatario propietario del cortafuegos." msgid "Identifier of the source instance to replicate." msgstr "Identificador de la instancia de origen a replicar." 
#, python-format msgid "" "If \"%(size)s\" is provided, only one of \"%(image)s\", \"%(image_ref)s\", " "\"%(source_vol)s\", \"%(snapshot_id)s\" can be specified, but currently " "specified options: %(exclusive_options)s." msgstr "" "Si se proporciona \"%(size)s\", se debe especificar solo una de las " "siguientes opciones: \"%(image)s\", \"%(image_ref)s\", \"%(source_vol)s\", " "\"%(snapshot_id)s\", sin embargo,las opciones especificadas actualmente " "son: %(exclusive_options)s." msgid "If False, closes the client socket connection explicitly." msgstr "" "Si está establecido en False, cierra explícitamente la conexión con el " "socket cliente." msgid "" "If True, delete any objects in the container when the container is deleted. " "Otherwise, deleting a non-empty container will result in an error." msgstr "" "Si es True, se suprimen todos los objetos del contenedor cuando se suprime " "el contenedor. De lo contrario, la supresión de un contenedor no vacío " "provocará un error." msgid "If True, enable config drive on the server." msgstr "Si es True, habilite la unidad de configuración en el servidor." msgid "" "If configured, it allows to run action or workflow associated with a task " "multiple times on a provided list of items." msgstr "" "Si está configurado, permite ejecutar una acción o un flujo de trabajo " "asociados a una tarea múltiples veces en una lista de elementos que se haya " "proporcionado." msgid "If set, then the server's certificate will not be verified." msgstr "Si establecido, el certificado del servidor no va a estár verificado." msgid "If specified, the backup to create the volume from." msgstr "" "Si se especifica, la copia de seguridad desde la que se debe crear el " "volumen." msgid "If specified, the backup used as the source to create the volume." msgstr "" "Si se especifica, la copia de seguridad utilizada como origen para crear el " "volumen." msgid "If specified, the name or ID of the image to create the volume from." msgstr "" "Si se especifica, el nombre o ID de la imagen desde la que se debe crear el " "volumen." msgid "If specified, the snapshot to create the volume from." msgstr "" "Si se especifica, la instantánea desde la que se debe crear el volumen." msgid "If specified, the type of volume to use, mapping to a specific backend." msgstr "" "Si se especifica, el tipo de volumen a utilizar, correlacionándose a un " "programa de fondo específico." msgid "If specified, the volume to use as source." msgstr "Si se especifica, el volumen a utilizar como origen." msgid "" "If the region is hierarchically a child of another region, set this " "parameter to the ID of the parent region." msgstr "" "Si la región, jerárquicamente, es hija de otra región, defina este parámetro " "con el ID de la región padre." msgid "" "If true, the resources in the chain will be created concurrently. If false " "or omitted, each resource will be treated as having a dependency on the " "previous resource in the list." msgstr "" "Si está definido en true, los recursos de la cadena se crearán de forma " "simultánea. Si es false o si se ha omitido, cada recurso se tratará como si " "tuviera una dependencia sobre el recurso anterior de la lista." msgid "If without InstanceId, ImageId and InstanceType are required." msgstr "" "Si no se establece InstanceId, ImageId e InstanceType son obligatorias." #, python-format msgid "Illegal prefix bounds: %(key1)s=%(value1)s, %(key2)s=%(value2)s." msgstr "" "Límites de prefijo no permitidos: %(key1)s=%(value1)s, %(key2)s=%(value2)s." 
#, python-format msgid "" "Image %(image)s requires %(imram)s minimum ram. Flavor %(flavor)s has only " "%(flram)s." msgstr "" "La imagen %(image)s requiere una RAM mínima de %(imram)s. El tipo %(flavor)s " "tiene solo %(flram)s." #, python-format msgid "" "Image %(image)s requires %(imsz)s GB minimum disk space. Flavor %(flavor)s " "has only %(flsz)s GB." msgstr "" "La imagen %(image)s requiere un espacio de disco mínimo de %(imsz)s GB. El " "tipo %(flavor)s tiene solo %(flsz)s GB." #, python-format msgid "Image status is required to be %(cstatus)s not %(wstatus)s." msgstr "El estado de la imagen tiene que ser %(cstatus)s y no %(wstatus)s." msgid "Incompatible parameters were used together" msgstr "Parámetros incompatibles estaban usados juntos" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be one of: %(allowed)s" msgstr "Argumentos incorrectos en \"%(fn_name)s\" debe ser uno de: %(allowed)s" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be: %(example)s" msgstr "Argumentos incorrectos en \"%(fn_name)s\" deben ser: %(example)s" msgid "Incorrect arguments: Items to merge must be maps." msgstr "" "Argumentos no correctos: los elementos a mezclar deben ser correlaciones." #, python-format msgid "" "Incorrect index to \"%(fn_name)s\" should be between 0 and %(max_index)s" msgstr "" "Índice no correcto a \"%(fn_name)s\" debería ser entre 0 y %(max_index)s" #, python-format msgid "Incorrect index to \"%(fn_name)s\" should be: %(example)s" msgstr "Índice no correcto a \"%(fn_name)s\" debería ser: %(example)s" #, python-format msgid "Index to \"%s\" must be a string" msgstr "El índice en \"%s\" debe ser una serie" #, python-format msgid "Index to \"%s\" must be an integer" msgstr "El índice en \"%s\" debe ser un entero" msgid "" "Indicate whether the volume should be deleted when the instance is " "terminated." msgstr "Indique si el volumen se debe suprimir cuando la instancia se termina." msgid "" "Indicate whether the volume should be deleted when the server is terminated." msgstr "Indique si el volumen se debe suprimir cuando el servidor se termina." msgid "Indicates remote IP prefix to be associated with this metering rule." msgstr "" "Indica el prefijo IP remoto que se debe asociar con esta regla de medición." msgid "" "Indicates whether or not to create a distributed router. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only. This property can not be used in conjunction with the L3 agent " "ID." msgstr "" "Indica si se debe crear o no un direccionador distribuido. NOTA: el valor de " "política predeterminado en Neutron restringe el uso de esta propiedad sólo a " "usuarios administrativos. Esta propiedad no se puede utilizar junto al ID de " "agente L3." msgid "" "Indicates whether or not to create a highly available router. NOTE: The " "default policy setting in Neutron restricts usage of this property to " "administrative users only. And now neutron do not support distributed and ha " "at the same time." msgstr "" "Indica si desea crear un direccionador de alta disponibilidad NOTA: El valor " "de política predeterminado en Neutron restringe el uso de esta propiedad a " "usuarios administrativos únicamente. Y ahora Neutron no soporta " "direccionadores de alta disponibilidad y distribuidos al mismo tiempo." msgid "Indicates whether this firewall rule is enabled or not." msgstr "Indica si esta regla de cortafuegos está habilitada o no." 
msgid "Information used to configure the bucket as a static website." msgstr "" "Información utilizada para configurar el grupo como sitio web estático." msgid "Initiator state in lowercase for the ipsec site connection." msgstr "Estado de iniciador en minúsculas para la conexión de sitio ipsec." #, python-format msgid "Input in signal data must be a map, find a %s" msgstr "La entrada de datos de señal debe ser una correlación, busque un %s." msgid "Input values for the workflow." msgstr "Valores de entrada del flujo de trabajo." msgid "Input values to apply to the software configuration on this server." msgstr "" "Valores de entrada a aplicar a la configuración de software en este servidor." msgid "Instance ID to associate with EIP specified by EIP property." msgstr "ID de instancia a asociar con EIP especificado por la propiedad EIP." msgid "Instance ID to associate with EIP." msgstr "ID de instancia a asociar con EIP." msgid "Instance connection to CFN/CW API validate certs if SSL is used." msgstr "" "Conexión de instancia con la API CFN/CW, validar certificados si se utiliza " "SSL." msgid "Instance connection to CFN/CW API via https." msgstr "Conexión de instancia con la API CFN/CW a través de HTTPS." #, python-format msgid "Instance is not ACTIVE (was: %s)" msgstr "La instancia no está ACTIVA (era: %s)" #, python-format msgid "" "Instance metadata must not contain greater than %s entries. This is the " "maximum number allowed by your service provider" msgstr "" "Los metadatos de instancia no deben contener más de %s entradas. Este es el " "número máximo permitido por el proveedor de servicio" msgid "Interface type of keystone service endpoint." msgstr "Tipo de interfaz del punto final del servicio de keystone." msgid "Internet protocol version." msgstr "Versión del protocolo de Internet." 
#, python-format msgid "Invalid %s, expected a mapping" msgstr "%s no válido, se esperaba una correlación" #, python-format msgid "Invalid CRON expression: %s" msgstr "Expresión CRON no válida: %s" #, python-format msgid "Invalid Parameter type \"%s\"" msgstr "Tipo de parámetro no válido \"%s\"" #, python-format msgid "Invalid Property %s" msgstr "Propiedad %s no válida" msgid "Invalid Stack address" msgstr "Dirección de Pila inválida" msgid "Invalid Template URL" msgstr "Plantilla de URL inválida" #, python-format msgid "Invalid URL scheme %s" msgstr "Esquema de URL inválida %s" #, python-format msgid "Invalid UUID version (%d)" msgstr "Version UUID inválida (%d)" #, python-format msgid "Invalid action %s" msgstr "Acción inválida %s" #, python-format msgid "Invalid action %s specified" msgstr "Acción inválida %s especificada" #, python-format msgid "Invalid adopt data: %s" msgstr "Datos de adopción no válidos: %s" #, python-format msgid "Invalid cloud_backend setting in heat.conf detected - %s" msgstr "Se ha detectado un ajuste de cloud_backend no válido en heat.conf: %s" #, python-format msgid "Invalid codes in ignore_errors : %s" msgstr "Códigos no válidos en ignore_errors : %s" #, python-format msgid "Invalid content type %(content_type)s" msgstr "Tipo de contenido no válido %(content_type)s" #, python-format msgid "Invalid default %(default)s (%(exc)s)" msgstr "Predeterminada inválida %(default)s (%(exc)s)" #, python-format msgid "Invalid deletion policy \"%s\"" msgstr "Política de supresión no válida \"%s\"" #, python-format msgid "Invalid filter parameters %s" msgstr "Parámetros de filtro no válidos %s" #, python-format msgid "Invalid hook type \"%(hook)s\" for %(resource)s" msgstr "Tipo de gancho no válido \"%(hook)s\" para %(resource)s" #, python-format msgid "" "Invalid hook type \"%(value)s\" for resource breakpoint, acceptable hook " "types are: %(types)s" msgstr "" "Tipo de gancho no válido \"%(value)s\" para el punto de interrupción del " "recurso, los tipos de gancho aceptados son: %(types)s" #, python-format msgid "Invalid key %s" msgstr "Clave inválida %s" #, python-format msgid "Invalid key '%(key)s' for %(entity)s" msgstr "Clave no válida '%(key)s' para %(entity)s" #, python-format msgid "Invalid keys in resource mark unhealthy %s" msgstr "Las claves no válidas del recurso marcan %s no saludable " msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "Mezcla no válida de formatos de disco y contenedor. Al definir un formato de " "disco o de contenedor como 'aki', 'ari' o 'ami', los formatos de contenedor " "y de disco deben coincidir." #, python-format msgid "Invalid parameter constraints for parameter %s, expected a list" msgstr "" "Restricciones de parámetro no válidas para el parámetro %s, se esperaba una " "lista" #, python-format msgid "Invalid parameter in environment %s." msgstr "Parámetro invalido en el ambiente %s." #, python-format msgid "" "Invalid restricted_action type \"%(value)s\" for resource, acceptable " "restricted_action types are: %(types)s" msgstr "" "Tipo de acción restringida (restricted_action) no válida \"%(value)s\" para " "el recurso, los tipos de restricted_action aceptados son: %(types)s" #, python-format msgid "" "Invalid stack name %s must contain only alphanumeric or \"_-.\" characters, " "must start with alpha and must be 255 characters or less." 
msgstr "" "El nombre de pila no válido %s debe contener solo caracteres alfanuméricos o " "los caracteres \"_-.\" , debe empezar con alfa y debe tener 255 caracteres o " "menos." #, python-format msgid "Invalid stack name %s, must be a string" msgstr "El nombre de pila no válido \"%s\" debe ser una cadena" #, python-format msgid "Invalid status %s" msgstr "Estado no válido %s" #, python-format msgid "Invalid support status and should be one of %s" msgstr "Estado de soporte no válido, debe ser uno de los siguientes: %s" #, python-format msgid "Invalid tag, \"%s\" contains a comma" msgstr "Etiqueta no válida, \"%s\" contiene una coma" #, python-format msgid "Invalid tag, \"%s\" is longer than 80 characters" msgstr "Etiqueta no válida, \"%s\" tiene más de 80 caracteres" #, python-format msgid "Invalid tag, \"%s\" is not a string" msgstr "Etiqueta no válida, \"%s\" no es una cadena" #, python-format msgid "Invalid tags, not a list: %s" msgstr "Etiquetas no válidas, no forman una lista: %s" #, python-format msgid "Invalid template type \"%(value)s\", valid types are: cfn, hot." msgstr "" "Tipo de plantilla no válido \"%(value)s\", los valores válidos son: cfn, hot." #, python-format msgid "Invalid timeout value %s" msgstr "Valor de tiempo de espera no válido %s" #, python-format msgid "Invalid timezone: %s" msgstr "Zona horaria no válida: %s" #, python-format msgid "Invalid type (%s)" msgstr "Tipo inválido (%s)" msgid "Ip allocation pools and their ranges." msgstr "Pool de asignación IP y sus rangos." msgid "Ip of the subnet's gateway." msgstr "Ip de la pasarela de la subred." msgid "Ip version for the subnet." msgstr "Versión de Ip para la subred." msgid "Ip_version for this firewall rule." msgstr "Ip_version para esta regla de cortafuegos." msgid "It defines an executor to which task action should be sent to." msgstr "Define un ejecutor al que se debe enviar la acción de la tarea." msgid "It is advised to shutdown all Heat engines beforehand." msgstr "" "De antemano, es recomendado cerrar todas las sesiones de 'Heat engine'." #, python-format msgid "Items to join must be string, map or list not %s" msgstr "Los elementos a unir deben de tipo cadena, correlación o lista y no %s" #, python-format msgid "Items to join must be string, map or list. %s failed json serialization" msgstr "" "Los elementos a unir deben de tipo cadena, correlación o lista. %s ha dado " "error en la serialización con JSON" #, python-format msgid "Items to join must be strings not %s" msgstr "Los elementos a unir deben ser cadenas, no %s" #, python-format msgid "" "JSON body size (%(len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "El tamaño del cuerpo de JSON (%(len)s bytes) excede el tamaño máximo " "permitido (%(limit)s bytes)." msgid "JSON data that was uploaded via the SwiftSignalHandle." msgstr "Datos JSON que se han cargado mediante SwiftSignalHandle." msgid "" "JSON serialized map that includes the endpoint, token and/or other " "attributes the client must use for signalling this handle. The contents of " "this map depend on the type of signal selected in the signal_transport " "property." msgstr "" "Correlación serializada de JSON que incluye el punto final, el token y/o " "otros atributos que el cliente debe utilizar para señalar este manejador. El " "contenido de esta correlación depende del tipo de señal seleccionado en la " "propiedad signal_transport." msgid "" "JSON string containing data associated with wait condition signals sent to " "the handle." 
msgstr "" "Cadena JSON que contiene datos asociados con las señales de condición de " "espera que se envían al descriptor de contexto." msgid "" "Key used to encrypt authentication info in the database. Length of this key " "must be 32 characters." msgstr "" "Clave utilizada para cifrar la información de autenticación en la base de " "datos. Esta clave debe tener 32 caracteres de longitud." msgid "Key/Value pairs to extend the capabilities of the flavor." msgstr "Pares de clave/valor para ampliar las capacidades del tipo." msgid "Key/value pairs associated with the volume in raw dict form." msgstr "Pares de clave/valor asociados con el volumen en el dict sin formato." msgid "Key/value pairs associated with the volume." msgstr "Pares de clave/valor asociados con el volumen." msgid "Key/value pairs to associate with the volume." msgstr "Pares de clave/valor a asociar con el volumen." msgid "Keypair added to instances to make them accessible for user." msgstr "" "Par de claves añadido a las instancias para que pueda acceder a ellas el " "usuario." msgid "Keypair secret key." msgstr "Clave secreta de par de claves." msgid "" "Keystone domain ID which contains heat template-defined users. If this " "option is set, stack_user_domain_name option will be ignored." msgstr "" "ID de dominio de Keystone que contiene usuarios definidos por plantilla de " "Heat. Si se establece esta opción, no se tendrá en cuenta la opción " "stack_user_domain_name." msgid "" "Keystone domain name which contains heat template-defined users. If " "`stack_user_domain_id` option is set, this option is ignored." msgstr "" "Nombre de dominio de Keystone que contiene usuarios definidos por plantilla " "de Heat. Si se establece la opción `stack_user_domain_id`, se pasará por " "alto esta opción." msgid "Keystone domain." msgstr "Dominio de keystone." #, python-format msgid "" "Keystone has more than one service with same name %(service)s. Please use " "service id instead of name" msgstr "" "Keystone tiene más de un servicio con el mismo nombre %(service)s. Utilice " "el ID del servicio en lugar del nombre" msgid "Keystone password for stack_domain_admin user." msgstr "Contraseña de Keystone para el usuario stack_domain_admin." msgid "Keystone project." msgstr "Proyecto de keystone." msgid "Keystone role for heat template-defined users." msgstr "Rol de Keystone para usuarios definidos por plantilla de Heat." msgid "Keystone role." msgstr "Rol de keystone." msgid "Keystone user group." msgstr "Grupo de usuarios de keystone." msgid "Keystone user groups." msgstr "Grupos de usuarios de keystone." msgid "Keystone user is enabled or disabled." msgstr "El usuario de keystone está habilitado o deshabilitado." msgid "" "Keystone username, a user with roles sufficient to manage users and projects " "in the stack_user_domain." msgstr "" "Nombre de usuario de Keystone, un usuario con roles suficientes para " "gestionar usuarios y proyectos en stack_user_domain." msgid "L2 segmentation strategy on the external side of the network gateway." msgstr "" "Estrategia de segmentación L2 en el lado externo de la pasarela de red." msgid "LBaaS provider to implement this load balancer instance." msgstr "" "Proveedor de LBaaS que implementará esta instancia de equilibrador de carga." msgid "Length of OS_PASSWORD after encryption exceeds Heat limit (255 chars)" msgstr "" "La longitud de OS_PASSWORD después del cifrado supera el límite de Heat (255 " "caracteres)" msgid "Length of the string to generate." msgstr "Longitud de la serie a generar." 
msgid "" "Length property cannot be smaller than combined character class and " "character sequence minimums" msgstr "" "La propiedad de longitud no puede ser menor que los mínimos combinados de " "clase de carácter y secuencia de caracteres" msgid "Level of access that need to be provided for guest." msgstr "Nivel de acceso que se tiene que proporcionar para el invitado." msgid "" "Lifecycle actions to which the configuration applies. The string values " "provided for this property can include the standard resource actions CREATE, " "DELETE, UPDATE, SUSPEND and RESUME supported by Heat." msgstr "" "Las acciones de ciclo de vida a las que se aplica la configuración. Los " "valores de cadena proporcionados para esta propiedad pueden incluir las " "acciones de recurso estándar CREATE, DELETE, UPDATE, SUSPEND y RESUME " "admitidas por Heat." msgid "List of LoadBalancer resources." msgstr "Lista de recursos de LoadBalancer." msgid "List of Security Groups assigned on current LB." msgstr "Lista de grupos de seguridad asignada al LB actual." msgid "List of TLS container references for SNI." msgstr "Lista de referencias de contenedor TLS para SNI." msgid "List of database instances." msgstr "Lista de instancias de base de datos." msgid "List of databases to be created on DB instance creation." msgstr "" "Lista de bases de datos que se deben crear en creación de instancias de base " "de datos." msgid "List of directories to search for plug-ins." msgstr "Lista de directorios en los que buscar los plugins." msgid "List of dns nameservers." msgstr "Lista de servidores de nombre dns." msgid "List of firewall rules in this firewall policy." msgstr "Lista de las reglas de cortafuegos en esta política de cortafuegos." msgid "List of health monitors associated with the pool." msgstr "Lista de supervisores de estado asociados con la agrupación." msgid "List of hosts to join aggregate." msgstr "Lista de hosts a unir al agregado." msgid "List of manila shares to be mounted." msgstr "Lista de recursos compartidos de Manila que hay que montar." msgid "List of network interfaces to create on instance." msgstr "Lista de interfaces de red para crear en instancia." msgid "List of processes to enable anti-affinity for." msgstr "Lista de procesos para habilitar la anti-afinidad." msgid "List of processes to run on every node." msgstr "Lista de procesos para ejecutar en cada nodo." msgid "List of role assignments." msgstr "Lista de asignaciones de rol." msgid "List of security group IDs associated with this interface." msgstr "Lista de ID de grupo de seguridad asociados con esta interfaz." msgid "List of security group egress rules." msgstr "Lista de reglas de salida de grupo de seguridad." msgid "List of security group ingress rules." msgstr "Lista de reglas de acceso de grupo de seguridad." msgid "" "List of security group names or IDs to assign to this Node Group template." msgstr "" "Lista de nombres o ID de grupo de seguridad a asignar a esta plantilla Grupo " "de nodos." msgid "" "List of security group names or IDs. Cannot be used if neutron ports are " "associated with this server; assign security groups to the ports instead." msgstr "" "Lista de nombres o ID de grupo de seguridad. No se puede utilizar si hay " "puertos neutron asociados con este servidor; en lugar de ello, asigne grupos " "de seguridad a los puertos." msgid "List of security group rules." msgstr "Lista de reglas de grupos de seguridad." msgid "List of subnet prefixes to assign." msgstr "Lista de prefijos de subred a asignar." 
msgid "List of tags associated with this interface." msgstr "Lista de códigos asociados con esta interfaz." msgid "List of tags to attach to the instance." msgstr "Lista de códigos a adjuntar a la instancia." msgid "List of tags to attach to this resource." msgstr "Lista de códigos a adjuntar a este recurso." msgid "List of tags to be attached to this resource." msgstr "Lista de códigos que se deben adjuntar a este recurso." msgid "" "List of tasks which should be executed before this task. Used only in " "reverse workflows." msgstr "" "Lista de tareas que debe ejecutar antes de esta tarea. Se utiliza únicamente " "en flujos de trabajo inversos." msgid "" "List of tasks which will run after the task has completed regardless of " "whether it is successful or not." msgstr "" "Lista de tareas que se ejecutarán después de que la tarea se haya " "completado, independientemente de si se ha ejecutado satisfactoriamente o no." msgid "List of tasks which will run after the task has completed successfully." msgstr "" "Lista de tareas que se ejecutarán después de que la tarea se haya completado " "satisfactoriamente." msgid "" "List of tasks which will run after the task has completed with an error." msgstr "" "Lista de tareas que se ejecutarán después de que la tarea se haya completado " "con un error." msgid "List of users to be created on DB instance creation." msgstr "" "Lista de usuarios que se deben crear en creación de instancias de base de " "datos." msgid "" "List of workflows' executions, each of them is a dictionary with information " "about execution. Each dictionary returns values for next keys: id, " "workflow_name, created_at, updated_at, state for current execution state, " "input, output." msgstr "" "Lista de ejecuciones de flujos de trabajo, cada una es un diccionario con " "información sobre la ejecución. Cada diccionario devuelve valores para la " "claves siguientes: ID, nombre de flujo de trabajo (workflow_name), creado a " "las (created_at), actualizado a las (updated_at), estado de la ejecución " "actual, entrada, salida." msgid "Listener associated with this pool." msgstr "Escucha asociado a esta agrupación." msgid "" "Local path on each cluster node on which to mount the share. Defaults to '/" "mnt/{share_id}'." msgstr "" "Ruta local en cada nodo de clúster en el que se vaya a montar el recurso " "compartido. El valor predeterminado es '/mnt/{share_id}'." msgid "Location of the SSL certificate file to use for SSL mode." msgstr "" "Ubicación del archivo de certificado SSL a utilizar para la modalidad SSL." msgid "Location of the SSL key file to use for enabling SSL mode." msgstr "" "Ubicación del archivo de claves SSL a utilizar para habilitar la modalidad " "SSL." msgid "MAC address of the port." msgstr "Dirección MAC del puerto." msgid "MAC address to allow through this port." msgstr "Dirección MAC a permitir a través de este puerto." msgid "Map between role with either project or domain." msgstr "Correlación entre el rol y el proyecto o el dominio." msgid "" "Map containing options specific to the configuration management tool used by " "this resource." msgstr "" "Mapa que contiene opciones específicas de la herramienta de gestión de " "configuración utilizada por este recurso." msgid "" "Map representing the cloud-config data structure which will be formatted as " "YAML." msgstr "" "Mapa que representa la estructura de datos de configuración de nube que se " "formateará como YAML." 
msgid "" "Map representing the configuration data structure which will be serialized " "to JSON format." msgstr "" "Correlación que representa la estructura de datos de configuración que se " "serializará en el formato JSON." msgid "Max bandwidth in kbps." msgstr "Ancho de banda máximo en kbps." msgid "Max burst bandwidth in kbps." msgstr "Ráfaga máxima de ancho de banda en kbps." msgid "Max size of the cluster." msgstr "Tamaño máximo del clúster." #, python-format msgid "Maximum %s is 1 hour." msgstr "El %s máximo es una hora." msgid "Maximum depth allowed when using nested stacks." msgstr "Profundidad máxima permitida al utilizar pilas anidadas." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs)." msgstr "" "Tamaño de línea máximo de cabeceras de mensaje que se deben aceptar. Es " "posible que max_header_line se necesite incrementar al utilizar señales " "grandes (normalmente las generadas por la API de Keystone v3 con catálogos " "de servicio grandes)." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs.)" msgstr "" "Tamaño de línea máximo de cabeceras de mensaje que se deben aceptar. Es " "posible que max_header_line se necesite incrementar al utilizar señales " "grandes (normalmente las generadas por la API de Keystone v3 con catálogos " "de servicio grandes.)" msgid "Maximum number of instances in the group." msgstr "Número máximo de instancias en el grupo." msgid "Maximum number of resources in the cluster. -1 means unlimited." msgstr "Número máximo de recursos del clúster. -1 significa ilimitado." msgid "Maximum number of resources in the group." msgstr "Número máximo de recursos del grupo." msgid "Maximum number of stacks any one tenant may have active at one time." msgstr "" "Número máximo de pilas que cualquier arrendatario puede tener activas " "simultáneamente." msgid "Maximum prefix size that can be allocated from the subnet pool." msgstr "" "El tamaño máximo de prefijo que se puede asignar desde la agrupación de " "subred." msgid "" "Maximum raw byte size of JSON request body. Should be larger than " "max_template_size." msgstr "" "Tamaño máximo en bytes del cuerpo de la solicitud JSON. Debe ser superior a " "max_template_size." msgid "Maximum raw byte size of any template." msgstr "Tamaño máximo en bytes de plantilla." msgid "Maximum resources allowed per top-level stack. -1 stands for unlimited." msgstr "" "Recursos máximos permitidos por cada pila de nivel superior. -1 significa " "ilimitado." msgid "Maximum resources per stack exceeded." msgstr "Se ha superado los recursos máximos por pila." msgid "" "Maximum transmission unit size (in bytes) for the ipsec site connection." msgstr "" "Tamaño de unidad de transmisión máxima (en bytes) para la conexión de sitio " "ipsec." msgid "Member list items must be strings" msgstr "Los elementos de lista de miembros deben ser series" msgid "Member list must be a list" msgstr "La lista de miembros debe ser una lista" msgid "Members associated with this pool." msgstr "Miembros asociados a esta agrupación." msgid "Memory in MB for the flavor." msgstr "Memoria en MB del tipo." 
#, python-format msgid "Message: %(message)s, Code: %(code)s" msgstr "Mensaje: %(message)s, Código: %(code)s" msgid "Metadata format invalid" msgstr "Formato de metadatos no válido" msgid "Metadata key-values defined for cluster." msgstr "Pares de clave/valor de metadatos definidos para el clúster." msgid "Metadata key-values defined for node." msgstr "Pares de clave/valor de metadatos definidos para el nodo." msgid "Metadata key-values defined for profile." msgstr "Pares de clave/valor de metadatos definidos para el perfil." msgid "Metadata key-values defined for share." msgstr "" "Pares de clave/valor de metadatos definidos para el recurso compartido." msgid "Meter name watched by the alarm." msgstr "Nombre de medidor observado por la alarma." msgid "" "Meter should match this resource metadata (key=value) additionally to the " "meter_name." msgstr "" "El medidor debe coincidir con estos metadatos de recurso (clave=valor) " "adicionalmente al meter_name." msgid "Meter statistic to evaluate." msgstr "Estadísticas de medidor a evaluar." msgid "Method of implementation of session persistence feature." msgstr "" "Método de implementación de la característica de persistencia de sesión." msgid "Metric name watched by the alarm." msgstr "Nombre de medida observado por la alarma." msgid "Min size of the cluster." msgstr "Tamaño mínimo del clúster." msgid "MinSize can not be greater than MaxSize" msgstr "MinSize no puede ser mayor que MaxSize" msgid "Minimum number of instances in the group." msgstr "Número mínimo de instancias en el grupo." msgid "Minimum number of resources in the cluster." msgstr "Número mínimo de recursos del clúster." msgid "Minimum number of resources in the group." msgstr "Número mínimo de recursos del grupo." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "PercentChangeInCapacity for the AdjustmentType property." msgstr "" "Número mínimo de recursos que se añaden o se eliminan cuando el grupo de " "Escalado automático escala hacia arriba o hacia abajo. Esto solo se puede " "utilizar al especificar PercentChangeInCapacity para la propiedad " "AdjustmentType." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "percent_change_in_capacity for the adjustment_type property." msgstr "" "Número mínimo de recursos que se añaden o se eliminan cuando el grupo de " "Escalado automático escala hacia arriba o hacia abajo. Esto solo se puede " "utilizar cuando se especifique percent_change_in_capacity en la propiedad " "AdjustmentType." #, python-format msgid "Missing mandatory (%s) key from mark unhealthy request" msgstr "" "Falta la clave obligatoria (%s) en la solicitud de marcar como no saludable" #, python-format msgid "Missing parameter type for parameter: %s" msgstr "Falta el tipo de parámetro para el parámetro: %s" #, python-format msgid "Missing required credential: %(required)s" msgstr "Falta la credencial necesaria :%(required)s " msgid "Mistral resource validation error" msgstr "Error de validación de recurso de Mistral" msgid "Monasca notification." msgstr "Notificación Monasca." msgid "Multiple actions specified" msgstr "Múltiples acciónes especificadas" #, python-format msgid "Multiple physical resources were found with name (%(name)s)." msgstr "" "Múltiples recursos físicos fueron encontrados con el nombre (%(name)s)." 
#, python-format msgid "Multiple routers found with name %s" msgstr "Se han encontrado varios direccionadores con el nombre %s" msgid "Must specify 'InstanceId' if you specify 'EIP'." msgstr "Debe especificar 'InstanceId' si indica 'EIP'." msgid "Name for the Sahara Cluster Template." msgstr "Nombre de la plantilla de clústeres Sahara" msgid "Name for the Sahara Node Group Template." msgstr "Nombre de la plantilla del grupo de nodos Sahara" msgid "Name for the aggregate." msgstr "Nombre del agregado." msgid "Name for the availability zone." msgstr "Nombre de la zona de disponibilidad." msgid "" "Name for the container. If not specified, a unique name will be generated." msgstr "" "Nombre del contenedor. Si no se especifica, se generará un nombre exclusivo." msgid "Name for the firewall policy." msgstr "Nombre para la política de firewall." msgid "Name for the firewall rule." msgstr "Nombre de regla de cortafuegos." msgid "Name for the firewall." msgstr "Nombre del cortafuegos." msgid "Name for the ike policy." msgstr "Nombre para la política ike." msgid "" "Name for the image. The name of an image is not unique to a Image Service " "node." msgstr "" "Nombre de la imagen. El nombre de una imagen no es exclusivo para un nodo de " "servicio de imagen." msgid "Name for the ipsec policy." msgstr "Nombre de la política ipsec." msgid "Name for the ipsec site connection." msgstr "Nombre de la conexión de sitio ipsec." msgid "Name for the time constraint." msgstr "Nombre de la restricción de tiempo." msgid "Name for the vpn service." msgstr "Nombre para el servicio vpn." msgid "" "Name of attribute to compare. Names of the form metadata.user_metadata.X or " "metadata.metering.X are equivalent to what you can address through " "matching_metadata; the former for Nova meters, the latter for all others. To " "see the attributes of your Samples, use `ceilometer --debug sample-list`." msgstr "" "Nombre del atributo a comparar. Los nombres con formato metadata." "user_metadata.X o metadata.metering.X son equivalentes a lo que puede " "direccionar a través de matching_metadata; el primero es para medidores de " "Nova, el último para todos los demás. Para ver los atributos de los " "ejemplos, utilice `ceilometer --debug sample-list`." msgid "Name of key to use for substituting inputs during deployment." msgstr "" "Nombre de la clave a utilizar para sustituir entradas durante el despliegue." msgid "Name of keypair to inject into the server." msgstr "Nombre del par de claves a inyectar en el servidor." msgid "Name of keystone endpoint." msgstr "Nombre del punto final de keystone." msgid "Name of keystone group." msgstr "Nombre del grupo de keystone." msgid "Name of keystone project." msgstr "Nombre del proyecto de keystone." msgid "Name of keystone role." msgstr "Nombre del rol de keystone." msgid "Name of keystone service." msgstr "Nombre del servicio de keystone." msgid "Name of keystone user." msgstr "Nombre del usuario de keystone." msgid "Name of registered datastore type." msgstr "Nombre del tipo de almacén de datos registrado." msgid "Name of the DB instance to create." msgstr "Nombre de la instancia de BD a crear." msgid "Name of the Node group." msgstr "Nombre del grupo de nodos." msgid "" "Name of the action associated with the task. Either action or workflow may " "be defined in the task." msgstr "" "Nombre de la acción asociada con la tarea. Se puede definir acción o flujo " "de trabajo en la tarea." msgid "Name of the administrative user to use on the server." 
msgstr "Nombre del usuario administrativo a utilizar en el servidor." msgid "Name of the alarm. By default, physical resource name is used." msgstr "" "El nombre de la alarma. De forma predeterminada, se utiliza el nombre del " "recurso físico." msgid "Name of the availability zone for DB instance." msgstr "" "Nombre de la zona de disponibilidad para la instancia de base de datos." msgid "Name of the availability zone for server placement." msgstr "Nombre de la zona de disponibilidad para la colocación de servidor." msgid "Name of the cluster to create." msgstr "Nombre del clúster a crear." msgid "Name of the cluster. By default, physical resource name is used." msgstr "" "El nombre del clúster. De forma predeterminada, se utiliza el nombre del " "recurso físico." msgid "Name of the cookie, required if type is APP_COOKIE." msgstr "Nombre del cookie, necesario si el tipo es APP_COOKIE." msgid "Name of the cron trigger." msgstr "Nombre del desencadenante CRON." msgid "Name of the current action being deployed" msgstr "Nombre de la acción actual que se está desplegando" msgid "Name of the data source." msgstr "Nombre del origen de datos." msgid "" "Name of the derived config associated with this deployment. This is used to " "apply a sort order to the list of configurations currently deployed to a " "server." msgstr "" "Nombre de la configuración derivada asociada con este despliegue. Se utiliza " "para aplicar un orden de clasificación a la lista de configuraciones " "desplegadas actualmente en un servidor." msgid "" "Name of the engine node. This can be an opaque identifier. It is not " "necessarily a hostname, FQDN, or IP address." msgstr "" "Nombre del nodo del motor. Puede ser un identificador opaco. No es " "necesariamente un nombre de host, FQDN o dirección IP." msgid "Name of the input." msgstr "Nombre de la entrada." msgid "Name of the job binary." msgstr "Nombre del binario del trabajo." msgid "Name of the metering label." msgstr "Nombre de la etiqueta de medición." msgid "Name of the network owning the port." msgstr "Nombre de la red propietaria del puerto." msgid "" "Name of the network owning the port. The value is typically network:" "floatingip or network:router_interface or network:dhcp." msgstr "" "Nombre de la red propietaria del puerto. Normalmente el valor es network:" "floatingip, network:router_interface o network:dhcp." msgid "Name of the notification. By default, physical resource name is used." msgstr "" "El nombre de la notificación. De forma predeterminada, se utiliza el nombre " "del recurso físico." msgid "Name of the output." msgstr "Nombre de la salida." msgid "Name of the pool." msgstr "Nombre de la agrupación." msgid "Name of the queue instance to create." msgstr "Nombre de la instancia de cola a crear." msgid "" "Name of the registered datastore version. It must exist for provided " "datastore type. Defaults to using single active version. If several active " "versions exist for provided datastore type, explicit value for this " "parameter must be specified." msgstr "" "Nombre de la versión del almacén de datos registrado. Debe existir para el " "tipo de almacén de datos proporcionado. De forma predeterminada, utiliza la " "versión activa única. Si hay varias versiones activas para el tipo de " "almacén de datos proporcionado, debe especificarse un valor explícito para " "este parámetro." msgid "Name of the secret." msgstr "El nombre del secreto." msgid "Name of the senlin node. By default, physical resource name is used." 
msgstr "" "El nombre del nodo senlin. De forma predeterminada, se utiliza el nombre del " "recurso físico." msgid "Name of the senlin policy. By default, physical resource name is used." msgstr "" "El nombre de la política senlin. De forma predeterminada, se utiliza el " "nombre del recurso físico." msgid "Name of the senlin profile. By default, physical resource name is used." msgstr "" "El nombre del perfil senlin. De forma predeterminada, se utiliza el nombre " "del recurso físico." msgid "" "Name of the senlin receiver. By default, physical resource name is used." msgstr "" "El nombre del receptor senlin. De forma predeterminada, se utiliza el nombre " "del recurso físico." msgid "Name of the server." msgstr "El nombre del servidor." msgid "Name of the share network." msgstr "El nombre de la red del recurso compartido." msgid "Name of the share type." msgstr "Nombre del tipo de recurso compartido." msgid "Name of the stack." msgstr "Nombre de la pila." msgid "Name of the subnet pool." msgstr "Nombre de la agrupación de subred." msgid "Name of the vip." msgstr "Nombre de vip." msgid "Name of the volume type." msgstr "Nombre del tipo de volumen." msgid "Name of the volume." msgstr "Nombre del volumen." msgid "" "Name of the workflow associated with the task. Can be defined by intrinsic " "function get_resource or by name of the referenced workflow, i.e. " "{ workflow: wf_name } or { workflow: { get_resource: wf_name }}. Either " "action or workflow may be defined in the task." msgstr "" "Nombre del flujo de trabajo asociado con la tarea. Se puede definir mediante " "la función intrínseca get_resource o por el nombre del flujo de trabajo al " "que se hace referencia, es decir { workflow: wf_name } o { workflow: " "{ get_resource: wf_name }}. Se puede definir acción o flujo de trabajo en la " "tarea." msgid "Name of this Load Balancer." msgstr "Nombre de este equilibrador de carga." msgid "Name of this deployment resource in the stack" msgstr "Nombre de este recurso de despliegue en la pila" msgid "Name of this listener." msgstr "Nombre de este escucha." msgid "Name of this pool." msgstr "Nombre de esta agrupación." msgid "Name or ID Nova flavor for the nodes." msgstr "Nombre o ID del tipo Nova para los nodos." msgid "Name or ID of network to create a port on." msgstr "Nombre o ID de red en la que crear un puerto." msgid "Name or ID of senlin profile to create this node." msgstr "Nombre o ID del perfil Senlin para crear este nodo." msgid "" "Name or ID of shared file system snapshot that will be restored and created " "as a new share." msgstr "" "Nombre o ID de la instantánea del sistema de archivos compartido que se " "restaurará y se creará como nuevo recurso compartido." msgid "" "Name or ID of shared filesystem type. Types defines some share filesystem " "profiles that will be used for share creation." msgstr "" "Nombre o ID del tipo del sistema de archivos compartido. Los tipos definen " "algunos perfiles de sistema de archivos que se utilizarán para la creación " "de recursos compartidos." msgid "Name or ID of shared network defined for shared filesystem." msgstr "" "Nombre o ID de la red compartida definida para el sistema de archivos " "compartido." msgid "Name or ID of target cluster." msgstr "Nombre o ID del clúster de destino." msgid "Name or ID of the load balancing pool." msgstr "Nombre o ID de la agrupación de equilibrio de carga." msgid "Name or Id of keystone region." msgstr "Nombre o ID de la región de keystone." msgid "Name or Id of keystone service." 
msgstr "Nombre o ID del servicio de keystone." #, python-format msgid "" "Name or UUID of Neutron port to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Nombre o UUID del puerto de Neutron al que conectar este NIC. Es necesario " "especificar %(port)s o %(net)s." msgid "Name or UUID of network." msgstr "Nombre o UUID de red." msgid "" "Name or UUID of the Neutron floating IP network or name of the Nova floating " "ip pool to use. Should not be provided when used with Nova-network that auto-" "assign floating IPs." msgstr "" "Nombre o UUID de la red de IP flotante de Neutron o nombre de la agrupación " "de IP flotante de Nova a utilizar. No se debe proporcionar si se utiliza con " "una redNova que asigne automáticamente IP flotantes." msgid "Name or UUID of the image used to boot Hadoop nodes." msgstr "Nombre o UUID de la imagen utilizada para arrancar nodos Hadoop." #, python-format msgid "" "Name or UUID of the network to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Nombre o UUID de la red a la que conectar este NIC. Es necesario especificar " "%(port)s o %(net)s." msgid "Name or id of keystone domain." msgstr "Nombre o ID del dominio de keystone." msgid "Name or id of keystone group." msgstr "Nombre o ID del grupo de keystone." msgid "Name or id of keystone user." msgstr "Nombre o ID del usuario de keystone." msgid "Name or id of volume type (OS::Cinder::VolumeType)." msgstr "Nombre o ID de tipo de volumen (OS::Cinder::VolumeType)." msgid "Names of databases that those users can access on instance creation." msgstr "" "Nombres de las bases de datos a las que esos usuarios pueden acceder en la " "creación de instancias." msgid "" "Namespace to group this software config by when delivered to a server. This " "may imply what configuration tool is going to perform the configuration." msgstr "" "Espacio de nombre para agrupar esta configuración de software cuando se " "entrega a un servidor. Esto puede implicar que la herramienta de " "configuración va a realizar la configuración." msgid "Need more arguments" msgstr "Necesita más argumentos" msgid "Negotiation mode for the ike policy." msgstr "Modalidad de negociación para la política ike." #, python-format msgid "Neither image nor bootable volume is specified for instance %s" msgstr "" "No se ha especificado una imagen ni un volumen arrancable para la instancia " "%s" msgid "Network in CIDR notation." msgstr "La red en notación CIDR." msgid "Network interface ID to associate with EIP." msgstr "ID de interfaz de red a asociar con EIP." msgid "Network interfaces to associate with instance." msgstr "Interfaces de red a asociar con instancia." #, python-format msgid "" "Network this port belongs to. If you plan to use current port to assign " "Floating IP, you should specify %(fixed_ips)s with %(subnet)s. Note if this " "changes to a different network update, the port will be replaced." msgstr "" "Red a la que pertenece este puerto. Si tiene previsto utilizar el puerto " "actual para asignar un IP flotante, debe especificar %(fixed_ips)s con " "%(subnet)s. Tenga en cuenta que si cambia a una actualización de red " "distinta, se sustituirá el puerto." msgid "Network to allocate floating IP from." msgstr "Red a partir de la que se asigna la IP flotante." msgid "Neutron network id." msgstr "ID de red de Neutron." msgid "Neutron subnet id." msgstr "ID de subred de Neutron." msgid "Nexthop IP address." msgstr "Dirección IP del siguiente salto." 
#, python-format msgid "No %s specified" msgstr "No se ha especificado %s" msgid "No Template provided." msgstr "No se ha proporcionado una plantilla." msgid "No action specified" msgstr "Ninguna acción especificada" msgid "No constraint expressed" msgstr "No se ha expresado ninguna restricción" #, python-format msgid "" "No content found in the \"files\" section for %(fn_name)s path: %(file_key)s" msgstr "" "No se ha encontrado contenido en la sección \"files\" para la vía de acceso " "%(fn_name)s: %(file_key)s" #, python-format msgid "No event %s found" msgstr "Ningún evento %s se ha encontrado" #, python-format msgid "No events found for resource %s" msgstr "No se ha encontrado eventos para el recurso %s" msgid "No resource data found" msgstr "Datos de recursos no se han encontrado" #, python-format msgid "No stack exists with id \"%s\"" msgstr "No existe ninguna pila con el id \"%s\"" msgid "No stack name specified" msgstr "Nombre de pila no especificada" msgid "No template specified" msgstr "Plantilla no especificada" msgid "No volume service available." msgstr "No hay ningún servicio de volumen disponible." msgid "Node groups." msgstr "Grupos de nodos." msgid "Nodes list in the cluster." msgstr "Lista de nodos del clúster." msgid "Non HA routers can only have one L3 agent." msgstr "" "Los direccionadores que no son de alta disponibilidad sólo pueden tener un " "agente L3." #, python-format msgid "Non-empty resource type is required for resource \"%s\"" msgstr "Tipo de Recurso no vacío es obligatorio para el recurso \"%s\"" msgid "Not Implemented." msgstr "No implementado." #, python-format msgid "Not allowed - %(dsver)s without %(dstype)s." msgstr "No permitido - %(dsver)s sin %(dstype)s." msgid "Not found" msgstr "No se ha encontrado" msgid "Not waiting for outputs signal" msgstr "No se espera señal de salidas" msgid "" "Notional service where encryption is performed For example, front-end. For " "Nova." msgstr "" "Servicio hipotético donde se realiza el cifrado. Por ejemplo, front-end. " "Para Nova." msgid "Nova instance type (flavor)." msgstr "Tipo de instancia Nova (tipo)." msgid "Nova network id." msgstr "ID de red de Nova." msgid "Number of VCPUs for the flavor." msgstr "Número de VCPU para el tipo." msgid "Number of backlog requests to configure the socket with." msgstr "Número de solicitudes de retraso con las que configurar el socket." msgid "Number of instances in the Node group." msgstr "Número de instancias del grupo de nodos." msgid "Number of minutes to wait for this stack creation." msgstr "Número de minutos a esperar en la creación de esta pila." msgid "Number of periods to evaluate over." msgstr "Número de periodos durante los cuales se debe evaluar." msgid "" "Number of permissible connection failures before changing the member status " "to INACTIVE." msgstr "" "Número de anomalías de conexión permitidas antes de cambiar el estado de " "miembro a INACTIVO." msgid "Number of remaining executions." msgstr "Número de ejecuciones restantes." msgid "Number of seconds for the DPD delay." msgstr "Número de segundos para el retardo de DPD." msgid "Number of seconds for the DPD timeout." msgstr "Número de segundos para el tiempo de espera DPD." msgid "" "Number of times to check whether an interface has been attached or detached." msgstr "" "Número de veces que se comprobará si una interfaz se ha asociado o " "desasociado." msgid "" "Number of times to retry to bring a resource to a non-error state. Set to 0 " "to disable retries." 
msgstr "" "Número de veces que se reintenta poner un recurso en un estado distinto de " "error. Establézcalo en 0 para inhabilitar los reintentos." msgid "" "Number of times to retry when a client encounters an expected intermittent " "error. Set to 0 to disable retries." msgstr "" "Número de veces que se reintenta cuando un cliente encuentra un error " "intermitente esperado. Establézcalo en 0 para inhabilitar los reintentos." msgid "Number of workers for Heat service." msgstr "Número de trabajadores para el servicio Heat." msgid "" "Number of workers for Heat service. Default value 0 means, that service will " "start number of workers equal number of cores on server." msgstr "" "Número de trabajadores del servicio del Heat. El valor predeterminado 0 " "significa que el servicio se iniciará con un número de trabajadores igual al " "número de núcleos del servidor." msgid "Number value for delay during resolve constraint." msgstr "" "Valor numérico para especificar el retraso durante la resolución de la " "restricción." msgid "Number value for timeout during resolving output value." msgstr "" "Valor numérico para especificar el tiempo de espera al resolver el valor de " "salida." #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "La acción objeto %(action)s falló debido a: %(reason)s" msgid "" "On update, enables heat to collect existing resource properties from reality " "and converge to updated template." msgstr "" "Al actualizarlo, permite a Heat recopilar las propiedades de los recursos " "existentes de la realidad y converger en una plantilla actualizada." msgid "One of predefined health monitor types." msgstr "Uno de los tipos de supervisor de estado predefinidos." msgid "One or more listeners for this load balancer." msgstr "Una o más escuchas para este equilibrador de carga." msgid "Only ISO 8601 duration format of the form PT#H#M#S is supported." msgstr "" "Sólo se soporta el formato de duración ISO 8601 con el formato PT#H#M#S." msgid "Only Templates with an extension of .yaml or .template are supported" msgstr "Sólo se soportan las plantillas con una extensión de .yaml o .template" #, python-format msgid "Only integer is acceptable by '%(name)s'." msgstr "'%(name)s' sólo acepta el entero." #, python-format msgid "Only non-zero integer is acceptable by '%(name)s'." msgstr "'%(name)s' sólo acepta un entero que no sea cero." msgid "Operator used to compare specified statistic with threshold." msgstr "Operador utilizado para comparar estadísticas específicas con umbral." msgid "Optional CA cert file to use in SSL connections." msgstr "Certificado CA opcional a utilizar en conexiónes SSL." msgid "Optional Nova keypair name." msgstr "Nombre de par de claves Nova opcional." msgid "Optional PEM-formatted certificate chain file." msgstr "Certificado opcional en archivo de cadena en formato PEM." msgid "Optional PEM-formatted file that contains the private key." msgstr "Archivo opcional en formato PEM conteniendo la clave privada." msgid "Optional filename to associate with part." msgstr "Nombre de archivo opcional a asociar con la parte." #, python-format msgid "Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s." msgstr "" "Heat URL opcional en un formato parecido a http://0.0.0.0:8004/v1/" "%(tenant_id)s." msgid "Optional subtype to specify with the type." msgstr "Subtipo opcional a especificar con el tipo." msgid "Options for simulating waiting." msgstr "Opciones para simular la espera." 
#, python-format msgid "Order '%(name)s' failed: %(code)s - %(reason)s" msgstr "La orden '%(name)s' ha fallado: %(code)s - %(reason)s" msgid "Outputs received" msgstr "Salidas recibidas" msgid "Owner of the source security group." msgstr "Propietario del grupo de seguridad de origen." msgid "PATCH update to non-COMPLETE stack" msgstr "Actualización de PARCHE a pila no COMPLETA" #, python-format msgid "Parameter '%(name)s' is invalid: %(exp)s" msgstr "El parámetro '%(name)s' no es válido: %(exp)s" msgid "Parameter Groups error" msgstr "Error en el grupo de parámetros" msgid "" "Parameter Groups error: parameter_groups.: The grouped parameter key_name " "does not reference a valid parameter." msgstr "" "Error de grupos de parámetros: parameter_groups.: El parámetro agrupado " "key_name no hace referencia a un parámetro válido." msgid "" "Parameter Groups error: parameter_groups.: The key_name parameter must be " "assigned to one parameter group only." msgstr "" "Error de grupos de parámetros: parameter_groups.: El parámetro key_name se " "debe asignar a un solo grupo de parámetros." msgid "" "Parameter Groups error: parameter_groups.: The parameters of parameter group " "should be a list." msgstr "" "Error de grupos de parámetros: parameter_groups.: Los parámetros del grupo " "de parámetros deben ser una lista." msgid "" "Parameter Groups error: parameter_groups.Database Group: The InstanceType " "parameter must be assigned to one parameter group only." msgstr "" "Error de grupos de parámetros: parameter_groups. Grupo de bases de datos: El " "parámetro InstanceType se debe asignar a un solo grupo de parámetros." msgid "" "Parameter Groups error: parameter_groups.Database Group: The grouped " "parameter SomethingNotHere does not reference a valid parameter." msgstr "" "Error de grupos de parámetros: parameter_groups. Grupo de bases de datos: El " "parámetro agrupado SomethingNotHere no hace referencia a un parámetro válido." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters must " "be provided for each parameter group." msgstr "" "Error de grupos de parámetros: parameter_groups. Grupo de servidores: Se " "deben proporcionar los parámetros para cada grupo de parámetros." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters of " "parameter group should be a list." msgstr "" "Error de grupos de parámetros: parameter_groups. Grupo de servidores: Los " "parámetros del grupo de parámetros deben ser una lista." msgid "" "Parameter Groups error: parameter_groups: The parameter_groups should be a " "list." msgstr "" "Error de grupos de parámetros: parameter_groups.: parameter_groups debe ser " "una lista." #, python-format msgid "Parameter name in \"%s\" must be string" msgstr "El nombre de parámetro en \"%s\" debe ser una serie" #, python-format msgid "Params must be a map, find a %s" msgstr "Params debe ser una correlación, busque un %s." msgid "Parent network of the subnet." msgstr "Red padre de la subred." msgid "Parts belonging to this message." msgstr "Partes que pertenecen a este mensaje." msgid "Password for API authentication" msgstr "Contraseña para autenticación de API" msgid "Password for accessing the data source URL." msgstr "Contraseña para acceder al URL del origen de datos." msgid "Password for accessing the job binary URL." msgstr "Contraseña para acceder al URL del binario del trabajo." msgid "Password for those users on instance creation." msgstr "Contraseña para los usuarios en la creación de instancias." 
msgid "Password of keystone user." msgstr "Contraseña del usuario de keystone." msgid "Password used by user." msgstr "Contraseña utilizada por el usuario." #, python-format msgid "Path components in \"%s\" must be strings" msgstr "Los componentes de vía de acceso en \"%s\" deben ser series" msgid "Path components in attributes must be strings" msgstr "Los componentes de vía de acceso en los atributos deben ser series" msgid "Payload exceeds maximum allowed size" msgstr "La carga útil supera el tamaño máximo permitido" msgid "Perfect forward secrecy for the ipsec policy." msgstr "Secreto de reenvío perfecto para la política ipsec." msgid "Perfect forward secrecy in lowercase for the ike policy." msgstr "Secreto de reenvío perfecto en minúsculas para la política ike." msgid "" "Perform a check on the input values passed to verify that each required " "input has a corresponding value. When the property is set to STRICT and no " "value is passed, an exception is raised." msgstr "" "Realice una comprobación de los valores de entrada pasados para verificar " "que cada entrada necesaria tiene un valor correspondiente. Cuando la " "propiedad se establece en STRICT y no se pasa ningún valor, se produce una " "excepción." msgid "Period (seconds) to evaluate over." msgstr "Periodo (segundos) durante el cual evaluar." msgid "Physical ID of the VPC. Not implemented." msgstr "ID físico de VPC. No implementado." #, python-format msgid "" "Plugin %(plugin)s doesn't support the following node processes: " "%(unsupported)s. Allowed processes are: %(allowed)s" msgstr "" "El plug-in %(plugin)s no da soporte a los siguientes procesos de nodo: " "%(unsupported)s. Los procesos permitidos son: %(allowed)s" msgid "Plugin name." msgstr "Nombre de plugin." msgid "Policies for removal of resources on update." msgstr "Políticas para la eliminación de recursos durante la actualización." msgid "Policy for rolling updates for this scaling group." msgstr "" "Política para las actualizaciones de transferencia de este grupo de escalado." msgid "" "Policy on how to apply a flavor update; either by requesting a server resize " "or by replacing the entire server." msgstr "" "Política sobre cómo aplicar una actualización de tipo, ya sea solicitando un " "cambio de tamaño de servidor o sustituyendo el servidor entero." msgid "" "Policy on how to apply an image-id update; either by requesting a server " "rebuild or by replacing the entire server." msgstr "" "Política sobre cómo aplicar una actualización de id de imagen; sea " "solicitando una reconstrucción de servidor o sustituyendo el servidor entero." msgid "" "Policy on how to respond to a stack-update for this resource. REPLACE_ALWAYS " "will replace the port regardless of any property changes. AUTO will update " "the existing port for any changed update-allowed property." msgstr "" "Política sobre cómo responder a una actualización de pila (stack-update) " "para este recurso. REPLACE_ALWAYS sustituirá el puerto independientemente de " "cualquier cambio de propiedad. AUTO actualizará el puerto existente para " "cualquier propiedad permitida por la actualización que se haya modificado." msgid "" "Policy to be processed when doing an update which requires removal of " "specific resources." msgstr "" "Política a procesar al realizar una actualización que requiere la " "eliminación de recursos específicos." 
msgid "Pool creation failed" msgstr "Ha fallado la creación de la agrupación" msgid "Pool creation failed due to vip" msgstr "La creación de la agrupación ha fallado debido a vip" msgid "Pool from which floating IP is allocated." msgstr "Agrupación de la que se asignan la IP flotante." msgid "Port number on which the servers are running on the members." msgstr "Número de puerto en el que los servidores se ejecutan en los miembros." msgid "Port on which the pool member listens for requests or connections." msgstr "" "Puerto en el que el miembro de la agrupación escucha las solicitudes o " "conexiones." msgid "Port security enabled of the network." msgstr "Seguridad de puertos habilitada en la red." msgid "Port security enabled of the port." msgstr "Seguridad de puertos habilitada en el puerto." msgid "Position of the rule within the firewall policy." msgstr "Posición de la regla en la política de cortafuegos." msgid "Pre-shared key string for the ipsec site connection." msgstr "Serie de clave precompartida para la conexión de sitio ipsec." msgid "Prefix length for subnet allocation from subnet pool." msgstr "" "Longitud de prefijo para la asignación de subred desde la agrupación de " "subred." msgid "Private DNS name of the specified instance." msgstr "Nombre de DNS privado de la instancia especificada." msgid "Private IP address of the network interface." msgstr "Dirección IP privada de la interfaz de red." msgid "Private IP address of the specified instance." msgstr "Dirección IP privada de la instancia especificada." msgid "Project ID" msgstr "ID del proyecto" msgid "" "Projects to add volume type access to. NOTE: This property is only supported " "since Cinder API V2." msgstr "" "Proyectos a los que se desea añadir acceso de tipo de volumen. NOTA: esta " "propiedad solo está soportada a partir de Cinder API V2." #, python-format msgid "" "Properties %(algorithm)s and %(bit_length)s are required for %(type)s type " "of order." msgstr "" "Las propiedades %(algorithm)s y %(bit_length)s son obligatorias para el tipo " "de orden %(type)s." msgid "Properties for profile." msgstr "Propiedades del perfil." msgid "Properties of this policy." msgstr "Propiedades de esta política." msgid "Properties to pass to each resource being created in the chain." msgstr "" "Propiedades que se tienen qu pasar a cada recurso que se cree en la cadena." #, python-format msgid "Property %(cookie)s is required when %(sp)s type is set to %(app)s." msgstr "" "La propiedad %(cookie)s es obligatoria cuando el tipo %(sp)s está " "establecido en %(app)s." #, python-format msgid "" "Property %(cookie)s must NOT be specified when %(sp)s type is set to %(ip)s." msgstr "" "La propiedad %(cookie)s NO se puede especificar cuando el tipo %(sp)s está " "establecido en %(ip)s." #, python-format msgid "" "Property %(key)s updated value %(new)s should be superset of existing value " "%(old)s." msgstr "" "Propiedad %(key)s, el valor actualizado %(new)s debería ser un subconjunto " "del valor existente %(old)s." #, python-format msgid "" "Property %(n)s type mismatch between facade %(type)s (%(fs_type)s) and " "provider (%(ps_type)s)" msgstr "" "Discrepancia de tipo de propiedad %(n)s entre la fachada %(type)s " "(%(fs_type)s) y el proveedor (%(ps_type)s)" #, python-format msgid "Property %(policies)s and %(item)s cannot be used both at one time." msgstr "La propiedad %(policies)s y %(item)s no se pueden usar a la vez." #, python-format msgid "Property %(ref)s required when protocol is %(term)s." 
msgstr "La propiedad %(ref)s es obligatoria cuando el protocolo es %(term)s." #, python-format msgid "Property %s not assigned" msgstr "Propiedad %s no asignada" #, python-format msgid "Property %s not implemented yet" msgstr "La propiedad %s aún no está implementada" msgid "" "Property cookie_name is required when session_persistence type is set to " "APP_COOKIE." msgstr "" "La propiedad cookie_name es obligatoria cuando el tipo de " "session_persistence está definido a APP_COOKIE." msgid "" "Property cookie_name is required, when session_persistence type is set to " "APP_COOKIE." msgstr "" "La propiedad cookie_name es necesaria, cuando el tipo session_persistence " "está establecido en APP_COOKIE." msgid "" "Property cookie_name must NOT be specified when session_persistence type is " "set to SOURCE_IP." msgstr "" "La propiedad cookie_name NO se debe especificar cuando el tipo " "session_persistence está establecido en SOURCE_IP." msgid "Property values for the resources in the group." msgstr "Valores de las propiedades para los recursos del grupo." msgid "Protocol for balancing." msgstr "Protocolo para equilibrado." msgid "Protocol for the firewall rule." msgstr "Protocolo de la regla de cortafuegos." msgid "Protocol of the pool." msgstr "Protocolo de la agrupación." msgid "Protocol on which to listen for the client traffic." msgstr "Protocolo en el que escuchar el tráfico de cliente." msgid "Protocol to balance." msgstr "Protocolo a equilibrar." msgid "Protocol value for this firewall rule." msgstr "Valor de protocolo para esta regla de cortafuegos." msgid "" "Provide access to nodes using other nodes of the cluster as proxy gateways." msgstr "" "Proporcionar acceso a nodos utilizando otros nodos del clúster como " "pasarelas proxy." msgid "" "Provide old encryption key. New encryption key would be used from config " "file." msgstr "" "Indique la clave de cifrado antigua. La nueva clave de cifrado se " "utilizará desde el archivo de configuración." msgid "Provider for this Load Balancer." msgstr "Proveedor de este equilibrador de carga." msgid "Provider implementing this load balancer instance." msgstr "Proveedor que implementa esta instancia de equilibrador de carga." #, python-format msgid "Provider requires property %(n)s unknown in facade %(type)s" msgstr "" "El proveedor necesita la propiedad %(n)s desconocida en la fachada %(type)s" msgid "Public DNS name of the specified instance." msgstr "Nombre de DNS público de la instancia especificada." msgid "Public IP address of the specified instance." msgstr "Dirección IP pública de la instancia especificada." msgid "" "RPC timeout for the engine liveness check that is used for stack locking." msgstr "" "Tiempo de espera de RPC para la comprobación de actividad de motor que se " "utiliza para el bloqueo de pila." msgid "RX/TX factor." msgstr "Factor RX/TX." #, python-format msgid "Rebuilding server failed, status '%s'" msgstr "La reconstrucción del servidor ha fallado, estado '%s'" msgid "Record name." msgstr "Nombre de registro." #, python-format msgid "Recursion depth exceeds %d." msgstr "La profundidad de recursión excede %d." msgid "" "Ref structure that contains the ID of the VPC on which you want to create " "the subnet." msgstr "" "Estructura de referencia que contiene el ID de VPC en el que desea crear la " "subred." msgid "Reference to a flavor for creating DB instance." msgstr "Referencia a un tipo para crear instancia de BD." msgid "Reference to certificate." msgstr "Referencia al certificado." 
msgid "Reference to intermediates." msgstr "Referencia a los intermedios." msgid "Reference to private key passphrase." msgstr "Referencia a la frase de contraseña de la clave privada." msgid "Reference to private key." msgstr "Referencia a la clave privada." msgid "Reference to public key." msgstr "Referencia a la clave pública." msgid "Reference to the secret." msgstr "Referencia al secreto." msgid "References to secrets that will be stored in container." msgstr "Referencias a los secretos que se guardarán en el contenedor." msgid "Region name in which this stack will be created." msgstr "Nombre de región en el que se va a crear esta pila." msgid "Remaining executions." msgstr "Ejecuciones restantes." msgid "Remote branch router identity." msgstr "Identidad de direccionador de bifurcación remoto." msgid "Remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "" "Dirección IPv4 o dirección IPv6 pública de direccionador de bifurcación " "remoto o FQDN." msgid "Remote subnet(s) in CIDR format." msgstr "Subred(es) remotas en formato CIDR." msgid "" "Replacement policy used to work around flawed nova/neutron port interaction " "which has been fixed since Liberty." msgstr "" "Política de sustitución utilizada para solucionar temporalmente una " "interacción de puerto de nova/neutron con error que se ha arreglado desde " "Liberty." msgid "Request expired or more than 15mins in the future" msgstr "Solicitud expirada o más de 15 minutos en el futúro" #, python-format msgid "Request limit exceeded: %(message)s" msgstr "Se ha superado el límite de solicitud: %(message)s" msgid "Request missing required header X-Auth-Url" msgstr "Solicitud sin cabecera requisita X-Auth-Url" msgid "Request was denied due to request throttling" msgstr "Solicitud denegada debido a desaceleración de solicitudes" #, python-format msgid "" "Requested plugin '%(plugin)s' doesn't support version '%(version)s'. Allowed " "versions are %(allowed)s" msgstr "" "El plug-in solicitado '%(plugin)s' no da soporte a la versión '%(version)s'. " "Las versiones permitidas son %(allowed)s" msgid "" "Required extra specification. Defines if share drivers handles share servers." msgstr "" "Especificación adicional obligatoria. Define si los manejadores de " "controladores del recurso compartido comparten servidores." #, python-format msgid "Required property %(n)s for facade %(type)s missing in provider" msgstr "" "Falta la propiedad necesaria %(n)s para la fachada %(type)s en el proveedor" #, python-format msgid "Resizing to '%(flavor)s' failed, status '%(status)s'" msgstr "Cambiar el tamaño a '%(flavor)s' ha fallado, estado '%(status)s'" #, python-format msgid "Resource \"%s\" has no type" msgstr "El recurso \"%s\" no tiene tipo" #, python-format msgid "Resource \"%s\" type is not a string" msgstr "El tipo \"%s\" de recurso no es una serie" #, python-format msgid "Resource %(name)s %(key)s type must be %(typename)s" msgstr "El tipo %(key)s del recurso %(name)s debe ser %(typename)s" #, python-format msgid "Resource %(name)s is missing \"%(type_key)s\"" msgstr "En el recurso %(name)s falta \"%(type_key)s\"" #, python-format msgid "" "Resource %s's property user_data_format should be set to SOFTWARE_CONFIG " "since there are software deployments on it." msgstr "" "La propiedad user_data_format del recurso %s debe establecerse en " "SOFTWARE_CONFIG ya que incluye despliegues de software." msgid "Resource ID was not provided." msgstr "No se ha proporcionado el ID de recurso." 
msgid "" "Resource definition for the resources in the group, in HOT format. The value " "of this property is the definition of a resource just as if it had been " "declared in the template itself." msgstr "" "Definición de recurso para los recursos del grupo, en formato HOT. El valor " "de esta propiedad es la definición de un recurso como si se hubiera " "declarado en la propia plantilla." msgid "" "Resource definition for the resources in the group. The value of this " "property is the definition of a resource just as if it had been declared in " "the template itself." msgstr "" "Definición de recurso para los recursos del grupo. El valor de esta " "propiedad es la definición de un recurso igual que si se hubiera declarado " "en la propia plantilla." msgid "Resource failed" msgstr "Ha fallado el recurso" msgid "Resource is not built" msgstr "El recurso no se ha creado" msgid "Resource name may not contain \"/\"" msgstr "El nombre del recurso no puede incluir \"/\"" msgid "Resource type." msgstr "Tipo de recurso." msgid "Resource update already requested" msgstr "La actualización de recurso ya se ha solicitado" msgid "Resource with the name requested already exists" msgstr "Recurso con el nombre solicitado ya existe" msgid "" "ResourceInError: resources.remote_stack: Went to status UPDATE_FAILED due to " "\"Remote stack update failed\"" msgstr "" "ResourceInError: resources.remote_stack: Se ha puesto en estado " "UPDATE_FAILED debido a \"Actualización de pila remota fallida\"" #, python-format msgid "Resources must contain Resource. Found a [%s] instead" msgstr "" "Recursos debe contener un recurso. Se ha encontrado un [%s] en su lugar" msgid "" "Resources that users are allowed to access by the DescribeStackResource API." msgstr "" "Recursos a los que se les permite acceder a los usuarios mediante la API " "DescribeStackResource ." msgid "Returned status code from the configuration execution." msgstr "" "Se ha devuelto un código de estado a partir de la ejecución de la " "configuración." msgid "Route duplicates an existing route." msgstr "La ruta duplica una ruta existente." msgid "Route table ID." msgstr "ID de tabla de ruta." msgid "Safety assessment lifetime configuration for the ike policy." msgstr "" "Configuración de tiempo de vida de evaluación de seguridad para la política " "ike." msgid "Safety assessment lifetime configuration for the ipsec policy." msgstr "" "Configuración de tiempo de vida de evaluación de seguridad para la política " "ipsec." msgid "Safety assessment lifetime units." msgstr "Unidades de tiempo de vida de evaluación de seguridad." msgid "Safety assessment lifetime value in specified units." msgstr "" "Valor de tiempo de vida de evaluación de seguridad en unidades especificadas." msgid "Scheduler hints to pass to Nova (Heat extension)." msgstr "Sugerencias de planificador a pasar a Nova (extensión de Heat)." msgid "Schema representing the inputs that this software config is expecting." msgstr "" "Esquema que representa las entradas que está esperando esta configuración de " "software." msgid "Schema representing the outputs that this software config will produce." msgstr "" "Esquema que representa las salidas que esta configuración de software " "producirá." #, python-format msgid "Schema valid only for %(ltype)s or %(mtype)s, not %(utype)s" msgstr "Esquema sólo válido para %(ltype)s o %(mtype)s, no %(utype)s" msgid "" "Scope of flavor accessibility. Public or private. Default value is True, " "means public, shared across all projects." 
msgstr "" "Ámbito de accesibilidad del tipo. Público o privado. El valor predeterminado " "es True, que significa público, compartido entre todos los proyectos." #, python-format msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden." msgstr "" "Buscando Inquilino %(target)s desde Inquilino %(actual)s esta prohibido." msgid "Seconds between running periodic tasks." msgstr "Segundos transcurridos entre la ejecución de tareas periódicas." msgid "Seconds to wait after a create. Defaults to the global wait_secs." msgstr "" "Número de segundos a esperar después de una creación. De forma " "predeterminada se utiliza el valor de wait_secs global." msgid "Seconds to wait after a delete. Defaults to the global wait_secs." msgstr "" "Número de segundos a esperar después de una supresión. De forma " "predeterminada se utiliza el valor de wait_secs global." msgid "Seconds to wait after an action (-1 is infinite)." msgstr "" "Número de segundos a esperar después de una acción (-1 significa infinito)." msgid "Seconds to wait after an update. Defaults to the global wait_secs." msgstr "" "Número de segundos a esperar después de una actualización. De forma " "predeterminada se utiliza el valor de wait_secs global." #, python-format msgid "Section %s can not be accessed directly." msgstr "No se puede acceder directamente a la sección %s." #, python-format msgid "Security Group \"%(group_name)s\" not found" msgstr "No se ha encontrado el grupo de seguridad \"%(group_name)s\"" msgid "Security group IDs to assign." msgstr "ID de grupo de seguridad a asignar." msgid "Security group IDs to associate with this port." msgstr "ID de grupo de seguridad a asociar con este puerto." msgid "Security group names to assign." msgstr "Nombres de grupo de seguridad a asignar." msgid "Security groups cannot be assigned the name \"default\"." msgstr "Grupos de seguridad no pueden ser asignados a el nombre \"default\"." msgid "Security service IP address or hostname." msgstr "Dirección IP o nombre de host del servicio de seguridad." msgid "Security service description." msgstr "Descripción del servicio de seguridad." msgid "Security service domain." msgstr "Dominio del servicio de seguridad." msgid "Security service name." msgstr "Nombre del servicio de seguridad." msgid "Security service type." msgstr "Tipo del servicio de seguridad." msgid "Security service user or group used by tenant." msgstr "Usuario o grupo del servicio de seguridad utilizado por el inquilino." msgid "Select deferred auth method, stored password or trusts." msgstr "" "Seleccióna método de autenticación deferido, claves almacenadas, o " "confianzas." msgid "Sequence of characters to build the random string from." msgstr "Secuencia de caracteres desde la que crear la una serie aleatoria." #, python-format msgid "Server %(name)s delete failed: (%(code)s) %(message)s" msgstr "La supresión del servidor %(name)s ha fallado: (%(code)s) %(message)s" msgid "Server Group name." msgstr "Nombre de grupo de servidores." msgid "Server name." msgstr "Nombre de servidor." msgid "Server to assign floating IP to." msgstr "Servidor al que asignar IP flotante." 
#, python-format msgid "" "Service %(service_name)s is not available for resource type " "%(resource_type)s, reason: %(reason)s" msgstr "" "El servicio %(service_name)s no está disponible para el tipo de recurso " "%(resource_type)s, motivo: %(reason)s" msgid "Service misconfigured" msgstr "Servicio mal configurado" msgid "Service temporarily unavailable" msgstr "Servicio temporalmente no disponible" msgid "Set of parameters passed to this stack." msgstr "Conjunto de parámetros pasados a esta pila." msgid "Set of rules for comparing characters in a character set." msgstr "" "Conjunto de reglas para comparar caracteres de un conjunto de caracteres." msgid "Set of symbols and encodings." msgstr "Conjunto de símbolos y codificaciones." msgid "Set to \"vpc\" to have IP address allocation associated to your VPC." msgstr "" "Establezca en \"vpc\" para tener asignación de dirección IP asociada a VPC." msgid "Set to true if DHCP is enabled and false if DHCP is disabled." msgstr "" "Establecer a true si DHCP está habilitado y a false si DHCP está " "deshabilitado." msgid "Severity of the alarm." msgstr "Gravedad de la alarma." msgid "Share description." msgstr "Descripción del recurso compartido." msgid "Share host." msgstr "Host del recurso compartido." msgid "Share name." msgstr "Nombre de recurso compartido." msgid "Share network description." msgstr "Descripción de la red del recurso compartido." msgid "Share project ID." msgstr "ID de proyecto del recurso compartido." msgid "Share protocol supported by shared filesystem." msgstr "" "Protocolo de uso compartido que soporta el sistema de archivos compartido." msgid "Share storage size in GB." msgstr "Tamaño del almacenamiento para uso compartido en GB." msgid "Shared status of the metering label." msgstr "Estado compartido de la etiqueta de medición." msgid "Shared status of this firewall policy." msgstr "Estado compartido de esta política de cortafuegos." msgid "Shared status of this firewall rule." msgstr "Estado compartido de esta regla de cortafuegos." msgid "Shared status of this firewall." msgstr "Estado compartido de este cortafuegos." msgid "Shrinking volume" msgstr "Reduciendo el volumen" msgid "Signal data error" msgstr "Error de datos de señal" #, python-format msgid "Signal resource during %s" msgstr "Señalar recurso durante %s" #, python-format msgid "Single schema valid only for %(ltype)s, not %(utype)s" msgstr "Esquema único sólo válido para %(ltype)s, no %(utype)s" msgid "Size of a secondary ephemeral data disk in GB." msgstr "Tamaño de un disco de datos efímero secundario en GB." msgid "Size of adjustment." msgstr "Tamaño de ajuste." msgid "Size of encryption key, in bits. For example, 128 or 256." msgstr "El tamaño de la clave de cifrado, en bits. Por ejemplo, 128 o 256." msgid "" "Size of local disk in GB. The \"0\" size is a special case that uses the " "native base image size as the size of the ephemeral root volume." msgstr "" "Tamaño del disco local en GB. El tamaño \"0\" es un caso especial que " "utiliza el tamaño de imagen base nativa como tamaño del volumen raíz efímero." msgid "" "Size of the block device in GB. If it is omitted, hypervisor driver " "calculates size." msgstr "" "Tamaño del dispositivo de bloque en GB. Si se omite, el controlador de " "hipervisor calcula el tamaño." msgid "Size of the instance disk volume in GB." msgstr "Tamaño del volumen de disco de la instancia en GB." msgid "Size of the volumes, in GB." msgstr "Tamaño de los volúmenes, en GB." 
msgid "Smallest prefix size that can be allocated from the subnet pool." msgstr "" "El menor tamaño de prefijo que se puede asignar desde la agrupación de " "subred." #, python-format msgid "Snapshot with id %s not found" msgstr "Instantánea con id %s no encontrado" msgid "" "SnapshotId is missing, this is required when specifying BlockDeviceMappings." msgstr "" "Falta SnapshotId; es necesario cuando se especifica BlockDeviceMappings." #, python-format msgid "Software config with id %s not found" msgstr "No se ha encontrado configuración de software con el id %s" msgid "Source IP address or CIDR." msgstr "Dirección IP de origen o CIDR." msgid "Source ip_address for this firewall rule." msgstr "ip_address de origen para esta regla de cortafuegos." msgid "Source port number or a range." msgstr "Número de puerto de origen o un rango." msgid "Source port range for this firewall rule." msgstr "Rango de puertos de origen para esta regla de cortafuegos." #, python-format msgid "Specified output key %s not found." msgstr "No se ha encontrado la clave de salida especificada, %s." #, python-format msgid "Specified status is invalid, defaulting to %s" msgstr "" "El estado especificado no es válido, se toma de forma predeterminada %s" #, python-format msgid "Specified subnet %(subnet)s does not belongs to network %(network)s." msgstr "La subred especificada %(subnet)s no pertenece a la red %(network)s." msgid "Specifies a custom discovery url for node discovery." msgstr "" "Especifica un URL de descubrimiento personalizado para el descubrimiento de " "nodos." msgid "Specifies database names for creating databases on instance creation." msgstr "" "Especifica nombres de base de datos para crear bases de datos en la creación " "de instancias." msgid "Specify the ACL permissions on who can read objects in the container." msgstr "" "Especifique los permisos de ACL sobre quién puede leer los objetos del " "contenedor." msgid "Specify the ACL permissions on who can write objects to the container." msgstr "" "Especifique los permisos ACL sobre quién puede grabar objetos en el " "contenedor." msgid "" "Specify whether the remote_ip_prefix will be excluded or not from traffic " "counters of the metering label. For example to not count the traffic of a " "specific IP address of a range." msgstr "" "Especifique si el prefijo ip remoto se excluirá o no de los contadores de " "tráfico de la etiqueta de medición. Por ejemplo para no contar el tráfico de " "una dirección IP específica de un rango." #, python-format msgid "Stack %(stack_name)s already has an action (%(action)s) in progress." msgstr "La Pila %(stack_name)s ya tiene una acción (%(action)s) en progreso." msgid "Stack ID" msgstr "ID de pila" msgid "Stack Name" msgstr "Nombre de la pila" msgid "Stack name may not contain \"/\"" msgstr "El nombre de pila no puede contener \"/\"" msgid "Stack resource id" msgstr "ID del recurso de pila" msgid "Stack unknown status" msgstr "Estado desconocido de la pila" #, python-format msgid "Stack with id %s can not be found." msgstr "Stack con id %s no pudo ser encontrado." #, python-format msgid "Stack with id %s not found" msgstr "No se ha encontrado la pila con el ID %s" msgid "" "Stacks containing these tag names will be hidden. Multiple tags should be " "given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too)." msgstr "" "Las pilas que contengan estos nombres de etiquetas se ocultarán. Se deben " "indicar varias etiquetas en una lista delimitada por comas (p.e. " "hidden_stack_tags=hide_me,me_too)." 
msgid "Start address for the allocation pool." msgstr "Dirección de inicio de la agrupación de asignación." #, python-format msgid "Start resizing the group %(group)s" msgstr "Iniciar redimensionamiento del grupo %(group)s" msgid "Start time for the time constraint. A CRON expression property." msgstr "" "Hora de inicio de la restricción de tiempo. Una propiedad de la expresión " "CRON." #, python-format msgid "State %s invalid for create" msgstr "Estado %s no válido para crear" #, python-format msgid "State %s invalid for resume" msgstr "El estado %s no es válido para reanudar" #, python-format msgid "State %s invalid for suspend" msgstr "El estado %s no es válido para suspender" msgid "Status" msgstr "Estado" #, python-format msgid "String to split must be string; got %s" msgstr "La serie a dividir debe ser string; se ha obtenido %s" msgid "String value with which to compare." msgstr "Valor de cadena con el que comparar." msgid "Subnet ID to associate with this interface." msgstr "ID de subred a asociar con esta interfaz." msgid "Subnet ID to launch instance in." msgstr "ID de subred en el que iniciar instancia." msgid "Subnet ID." msgstr "ID de subred." msgid "Subnet in which the vpn service will be created." msgstr "Subred en la que se creará el servicio vpn." msgid "" "Subnet in which to allocate the IP address for port. Used for creating port, " "based on derived properties. If subnet is specified, network property " "becomes optional." msgstr "" "Subred en la que asignar la dirección IP para el puerto. Se utiliza para " "crear el puerto, basándose en propiedades derivadas. Si se ha especificado " "la subred, la propiedad red es opcional." msgid "Subnet in which to allocate the IP address for this port." msgstr "Subred en la que asignar la dirección IP para este puerto." msgid "Subnet name or ID of this member." msgstr "Nombre o ID de subred de este miembro." msgid "Subnet of external fixed IP address." msgstr "Subred de la dirección IP fija externa." msgid "Subnet of the vip." msgstr "Subred de vip." msgid "Subnets of this network." msgstr "Subredes de esta red." msgid "" "Subset of trustor roles to be delegated to heat. If left unset, all roles of " "a user will be delegated to heat when creating a stack." msgstr "" "Subconjunto de roles de confianza a delegar a Heat. Si se deja sin " "establecer, todos los roles de un usuario se delegarán a Heat al crear una " "pila." msgid "Supplied metadata for the resources in the group." msgstr "Metadoas suministrados para los recursos del grupo." msgid "Supported versions: keystone v3" msgstr "Versiones soportadas: keystone v3" #, python-format msgid "Suspend of instance %s failed" msgstr "Ha fallado la suspensión de la instancia %s" #, python-format msgid "Suspend of server %s failed" msgstr "Ha fallado la suspensión del servidor %s" msgid "Swap space in MB." msgstr "Espacio de intercambio en MB." msgid "System SIGHUP signal received." msgstr "Se ha recibido señal de sistema SIGHUP." msgid "TCP or UDP port on which to listen for client traffic." msgstr "Puerto TCP o UDP en el que escuchar el tráfico de cliente." msgid "TCP port on which the instance server is listening." msgstr "Puerto TCP en la que está escuchando el servidor de instancia." msgid "TCP port on which the pool member listens for requests or connections." msgstr "" "Puerto TCP en el que el miembro de agrupación escucha las solicitudes o " "conexiones." msgid "" "TCP port on which to listen for client traffic that is associated with the " "vip address." 
msgstr "" "Puerto TCP en el que se debe escuchar el tráfico de cliente que está " "asociado con la dirección vip." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of OpenStack service finder functions." msgstr "" "TTL, en segundos, para cualquier elemento almacenado en caché de la región " "dogpile.cache utilizado para el almacenamiento en memoria caché de las " "funciones de búsqueda del servicio OpenStack." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of service extensions." msgstr "" "TTL, en segundos, para cualquier elemento almacenado en caché de la región " "dogpile.cache utilizado para el almacenamiento en memoria caché de las " "extensiones de servicios." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of validation constraints." msgstr "" "TTL, en segundos, para cualquier elemento almacenado en caché de la región " "dogpile.cache utilizado para el almacenamiento en memoria caché de las " "restricciones de validación." msgid "Tag key." msgstr "Clave de etiqueta." msgid "Tag value." msgstr "Valor de etiqueta." msgid "Tags to add to the image." msgstr "Etiquetas a añadir a la imagen." msgid "Tags to attach to instance." msgstr "Códigos a adjuntar a instancia." msgid "Tags to attach to the bucket." msgstr "Códigos a adjuntar a este grupo." msgid "Tags to attach to this group." msgstr "Códigos a adjuntar a este grupo." msgid "Task description." msgstr "Descripción de la tarea." msgid "Task name." msgstr "Nombre de la tarea." msgid "" "Template default for how the server should receive the metadata required for " "software configuration. POLL_SERVER_CFN will allow calls to the cfn API " "action DescribeStackResource authenticated with the provided keypair " "(requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the " "Heat API resource-show using the provided keystone credentials (requires " "keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL " "will create and populate a Swift TempURL with metadata for polling (requires " "object-store endpoint which supports TempURL).ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Plantilla predeterminada para la forma en que el servidor debe recibir los " "metadatos necesarios para la configuración del software. POLL_SERVER_CFN " "permitirá llamadas a la acción de la API de CFN DescribeStackResource " "autenticadas con el par de claves proporcionado (es necesario tener " "habilitado heat-api-cfn). POLL_SERVER_HEAT permitirá llamadas a la muestra " "de recursos de la API de Heat utilizando las credenciales de keystone " "proporcionadas (requiere la API de la v3 de keystone y las opciones de " "configuración stack_user_* configuradas). POLL_TEMP_URL creará y " "cumplimentará un TempURL de Swift con metadatos para sondeo (requiere un " "punto final almacén de objetos que admitaTempURL). ZAQAR_MESSAGE creará una " "cola zaqar dedicada y publicará los metadatos para el sondeo." msgid "" "Template default for how the server should signal to heat with the " "deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN " "keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will " "create a Swift TempURL to be signaled via HTTP PUT (requires object-store " "endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat " "API resource-signal using the provided keystone credentials. 
ZAQAR_SIGNAL " "will create a dedicated zaqar queue to be signaled using the provided " "keystone credentials." msgstr "" "Plantilla predeterminada para la forma en que el servidor debe indicar a " "Heat los valores de salida del despliegue. CFN_SIGNAL permitirá un POST HTTP " "en un URL firmado con un par de claves de CFN (requiere que heat-api-cfn " "esté habilitado). TEMP_URL_SIGNAL creará un TempURL Swift que se señalará " "vía HTTP PUT (requiere un punto final de almacén de objetos que admita " "TempURL). HEAT_SIGNAL permitirá llamadas a señal de recursos de la API de " "Heat utilizando las credenciales de keystone proporcionadas. ZAQAR_SIGNAL " "creará una cola zaqar dedicada que se señalará utilizando las credenciales " "de keystone proporcionadas." msgid "Template format version not found." msgstr "Versión de formato de plantilla no encontrada." #, python-format msgid "" "Template size (%(actual_len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "El tamaño de la plantilla (%(actual_len)s bytes) supera el tamaño máximo " "permitido (%(limit)s bytes)." msgid "Template that specifies the stack to be created as a resource." msgstr "Plantilla que especifica la pila a crear como recurso." #, python-format msgid "Template type is not supported: %s" msgstr "Este tipo de plantilla no está soportado: %s" msgid "Template version was not provided" msgstr "No se ha proporcionado la versión de plantilla" #, python-format msgid "Template with version %s not found" msgstr "No se ha encontrado la plantilla con la versión %s" msgid "TemplateBody or TemplateUrl were not given." msgstr "TemplateBody or TemplateUrl no han sido proporcionadas." msgid "Tenant owning the health monitor." msgstr "Arrendatario propietario del supervisor de estado." msgid "Tenant owning the pool member." msgstr "Arrendatario propietario del miembro de agrupación." msgid "Tenant owning the pool." msgstr "Arrendatario propietario de la agrupación." msgid "Tenant owning the port." msgstr "Proyecto propietario del puerto." msgid "Tenant owning the router." msgstr "Arrendatario propietario del direccionador." msgid "Tenant owning the subnet." msgstr "Proyecto propietario de la subred." #, python-format msgid "Testing message %(text)s" msgstr "Probando mensaje %(text)s" #, python-format msgid "The \"%(hook)s\" hook is not defined on %(resource)s" msgstr "El gancho \"%(hook)s\" no está definido en %(resource)s" #, python-format msgid "The \"for_each\" argument to \"%s\" must contain a map" msgstr "El argumento de \"for_each\" para \"%s\" debe contener una correlación" #, python-format msgid "The %(entity)s (%(name)s) could not be found." msgstr "No se ha podido encontrar la entidad %(entity)s (%(name)s)." #, python-format msgid "The %s must be provided for each parameter group." msgstr "Se debe indicar el %s de cada grupo de parámetros." #, python-format msgid "The %s of parameter group should be a list." msgstr "El %s del grupo de parámetros debería ser una lista." #, python-format msgid "The %s parameter must be assigned to one parameter group only." msgstr "El parámetro %s solo se puede asignar a un grupo de parámetros." #, python-format msgid "The %s should be a list." msgstr "El %s debería ser una lista." msgid "The API paste config file to use." 
msgstr "El archivo de configuración de pegar de API a utilizar" msgid "The AWS Access Key ID needs a subscription for the service" msgstr "La cuenta AWS Access Key ID necesita una subscripción para el servicio" msgid "The Availability Zone where the specified instance is launched." msgstr "La zona de disponibilidad donde se inicia la instancia especificada." msgid "The Availability Zones in which to create the load balancer." msgstr "Las zonas de disponibilidad en las que crear el equilibrador de carga." msgid "The CIDR." msgstr "El CIDR." msgid "The DNS name for the LoadBalancer." msgstr "El nombre DNS para el equilibrador de carga." msgid "The DNS name of the specified bucket." msgstr "El nombre DNS del grupo especificado." msgid "The DNS nameserver address." msgstr "La dirección del servidor de nombres DNS" msgid "The HTTP method used for requests by the monitor of type HTTP." msgstr "" "El método HTTP utilizado para las solicitudes por el supervisor de tipo HTTP." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health." msgstr "" "La vía de acceso HTTP utilizada en la solicitud HTTP utilizada por el " "supervisor para probar un estado de miembro." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health. A valid value is a string the begins with a forward slash (/)." msgstr "" "La vía de acceso HTTP utilizada en la solicitud HTTP utilizada por el " "supervisor para probar el estado de salud de un miembro. Un valor válido es " "una cadena que empiece con una barra inclinada (/)." msgid "" "The HTTP status codes expected in response from the member to declare it " "healthy. Specify one of the following values: a single value, such as 200. a " "list, such as 200, 202. a range, such as 200-204." msgstr "" "Los códigos de estado HTTP esperados como respuesta del miembro para " "declararlo saludable. Especifique uno de los valores siguientes: un valor " "único, como por ejemplo 200; una lista, como por ejemplo 200, 202; un rango, " "como por ejemplo 200-204." msgid "" "The ID of an existing instance to use to create the Auto Scaling group. If " "specify this property, will create the group use an existing instance " "instead of a launch configuration." msgstr "" "ID de una instancia existente a utilizar para crear el grupo de escalado " "automático. Si se especifica esta propiedad, se creará el grupo utilizando " "una instancia existente en lugar de una configuración de lanzamiento." msgid "" "The ID of an existing instance you want to use to create the launch " "configuration. All properties are derived from the instance with the " "exception of BlockDeviceMapping." msgstr "" "ID de una instancia existente que desee utilizar para crear la configuración " "de lanzamiento. Todas las propiedades se derivan de la instancia, a " "excepción de BlockDeviceMapping." msgid "The ID of the attached network." msgstr "El ID de la red asociada." msgid "The ID of the firewall policy that this firewall is associated with." msgstr "" "ID de la política de cortafuegos con el que está asociado este cortafuegos." msgid "" "The ID of the hosted zone name that is associated with the LoadBalancer." msgstr "" "El ID del nombre de zona de host que está asociado con el equilibrador de " "carga." msgid "The ID of the image to create a volume from." msgstr "El ID de la imagen desde la que crear el volumen." msgid "The ID of the image to create the volume from." msgstr "El ID de la imagen desde la que crear el volumen." 
msgid "The ID of the instance to which the volume attaches." msgstr "El ID de la instancia a la que se adjunta el volumen." msgid "The ID of the load balancing pool." msgstr "El ID de la agrupación de equilibrio de carga." msgid "The ID of the pool to which the pool member belongs." msgstr "El ID de la agrupación a la que pertenece el miembro." msgid "The ID of the server to which the volume attaches." msgstr "El ID del servidor al que se adjunta el volumen." msgid "The ID of the snapshot to create a volume from." msgstr "El ID de la instantánea desde la que se debe crear un volumen." msgid "" "The ID of the tenant which will own the network. Only administrative users " "can set the tenant identifier; this cannot be changed using authorization " "policies." msgstr "" "El ID del arrendatario que será propietario de la red. Sólo los usuarios " "administrativos pueden establecer el identificador de arrendatario; esto no " "se puede cambiar utilizando políticas de autorización." msgid "" "The ID of the tenant who owns the Load Balancer. Only administrative users " "can specify a tenant ID other than their own." msgstr "" "El ID del inquilino propietario del equilibrador de carga. Sólo los usuarios " "administrativos pueden especificar un ID de inquilino distinto del suyo " "propio." msgid "The ID of the tenant who owns the listener." msgstr "ID del inquilino propietario del escucha." msgid "" "The ID of the tenant who owns the network. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "El ID del inquilino propietario de la red. Sólo los usuarios administrativos " "pueden especificar un ID de inquilino distinto del suyo propio." msgid "" "The ID of the tenant who owns the subnet pool. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "El ID del inquilino propietario de la agrupación de subred. Sólo los " "usuarios administrativos pueden especificar un ID de inquilino distinto del " "suyo propio." msgid "The ID of the volume to be attached." msgstr "El ID del volumen que se debe adjuntar." msgid "" "The ID of the volume to boot from. Only one of volume_id or snapshot_id " "should be provided." msgstr "" "El ID del volumen desde el que se debe iniciar. Solo se debe proporcionar " "uno de volume_id o snapshot_id. " msgid "The ID or name of the flavor to boot onto." msgstr "El ID o nombre del tipo en el que se debe arrancar." msgid "The ID or name of the image to boot with." msgstr "El ID o nombre de la imagen con el que arrancar." msgid "" "The IDs of the DHCP agent to schedule the network. Note that the default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "Los ID del agente DHCP para planificar la red. Tenga en cuenta que el valor " "de política predeterminado en Neutron restringe el uso de esta propiedad " "sólo a usuarios administrativos." msgid "The IP address of the pool member." msgstr "La dirección IP del miembro de agrupación." msgid "The IP version, which is 4 or 6." msgstr "La version IP, cuales son 4 o 6." #, python-format msgid "The Parameter (%(key)s) was not defined in template." msgstr "El Parametro (%(key)s) no estaba definido en la plantilla." #, python-format msgid "The Parameter (%(key)s) was not provided." msgstr "El Parametro (%(key)s) no estaba proporcionado." msgid "The QoS policy ID attached to this network." msgstr "El ID de política de QoS conectado a esta red." msgid "The QoS policy ID attached to this port." 
msgstr "El ID de política de QoS conectado a este puerto." #, python-format msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect." msgstr "El Atributo Referenciado (%(resource)s %(key)s) esta incorrecto." #, python-format msgid "The Resource %s requires replacement." msgstr "El recurso %s requiere reemplazo." #, python-format msgid "" "The Resource (%(resource_name)s) could not be found in Stack %(stack_name)s." msgstr "" "El Recurso (%(resource_name)s) no se pudo encontrar en la Pila " "%(stack_name)s." #, python-format msgid "The Resource (%(resource_name)s) is not available." msgstr "El Recurso (%(resource_name)s) no está disponible." #, python-format msgid "The Snapshot (%(snapshot)s) for Stack (%(stack)s) could not be found." msgstr "" "No se ha podido encontrar la instantánea (%(snapshot)s) de la pila " "(%(stack)s)." #, python-format msgid "The Stack (%(stack_name)s) already exists." msgstr "La Pila (%(stack_name)s) ya existe." msgid "The Template must be a JSON or YAML document." msgstr "La plantilla debe ser un documento JSON o YAML." msgid "The URI to the container." msgstr "El URI al contenedor." msgid "The URI to the created container." msgstr "El URI al contenedor creado." msgid "The URI to the created secret." msgstr "El URI al secreto creado." msgid "The URI to the order." msgstr "El URI a la orden." msgid "The URIs to container consumers." msgstr "El URI a los consumidores del contenedor." msgid "The URIs to secrets stored in container." msgstr "El URI a los secretos guardados en el contenedor." msgid "" "The URL of a template that specifies the stack to be created as a resource." msgstr "" "El URL de una plantilla que especifica la pila que se debe crear como una " "recurso individual." msgid "The URL of the container." msgstr "El URL del contenedor." msgid "The VIP address of the LoadBalancer." msgstr "La dirección VIP del equilibrador de carga." msgid "The VIP port of the LoadBalancer." msgstr "El puerto VIP del equilibrador de carga." msgid "The VIP subnet of the LoadBalancer." msgstr "La subred VIP del equilibrador de carga." msgid "The action or operation requested is invalid" msgstr "La acción o operación solicitada es invalída" msgid "The action to be executed when the receiver is signaled." msgstr "Acción a ejecutar cuando se señala el receptor." msgid "The administrative state of the firewall." msgstr "El estado administrativo del cortafuegos." msgid "The administrative state of the health monitor." msgstr "El estado administrativo del supervisor de estado." msgid "The administrative state of the ipsec site connection." msgstr "El estado administrativo de la conexión del sitio ipsec." msgid "The administrative state of the pool member." msgstr "El estado administrativo del miembro de agrupación." msgid "The administrative state of the router." msgstr "El estado administrativo del direccionador." msgid "The administrative state of the vpn service." msgstr "El estado administrativo del servicio vpn." msgid "The administrative state of this Load Balancer." msgstr "El estado administrativo de este equilibrador de carga." msgid "The administrative state of this health monitor." msgstr "El estado administrativo de este supervisor de estado." msgid "The administrative state of this listener." msgstr "El estado administrativo de este escucha." msgid "The administrative state of this pool member." msgstr "El estado administrativo de este miembro de agrupación." msgid "The administrative state of this pool." 
msgstr "El estado administrativo de esta agrupación." msgid "The administrative state of this port." msgstr "El estado administrativo de este puerto." msgid "The administrative state of this vip." msgstr "El estado administrativo de este vip." msgid "The administrative status of the network." msgstr "El estado administrativo de la red." msgid "The administrator password for the server." msgstr "La contraseña de administrador para el servidor." msgid "The aggregation method to compare to the threshold." msgstr "El método de agregación para comparar con el umbral." msgid "The algorithm type used to generate the secret." msgstr "El tipo de algoritmo utilizado para generar el secreto." msgid "" "The algorithm type used to generate the secret. Required for key and " "asymmetric types of order." msgstr "" "El tipo de algoritmo utilizado para generar el secreto. Obligatorio para " "claves y tipos asimétricos de orden." msgid "The algorithm used to distribute load between the members of the pool." msgstr "" "El algoritmo utilizado para distribuir la carga entre los miembros de la " "agrupación." msgid "The allocated address of this IP." msgstr "La dirección asignada de esta IP." msgid "" "The approximate interval, in seconds, between health checks of an individual " "instance." msgstr "" "El intervalo aproximado, en segundos, entre comprobaciones de estado de una " "instancia individual." msgid "The authentication hash algorithm of the ipsec policy." msgstr "El algoritmo hash de autenticación de la política ipsec." msgid "The authentication hash algorithm used by the ike policy." msgstr "El algoritmo hash de autenticación utilizado por la política IKE." msgid "The authentication mode of the ipsec site connection." msgstr "La modalidad de autenticación de la conexión del sitio ipsec." msgid "The availability zone in which the volume is located." msgstr "La zona de disponibilidad en la que está ubicado el volumen." msgid "The availability zone in which the volume will be created." msgstr "La zona de disponibilidad en la que se creará el volumen." msgid "The availability zone of shared filesystem." msgstr "La zona de disponibilidad del sistema de archivos compartido." msgid "The bay name." msgstr "El nombre de la bahía." msgid "The bit-length of the secret." msgstr "La longitud en bits del secreto." msgid "" "The bit-length of the secret. Required for key and asymmetric types of order." msgstr "" "La longitud en bits del secreto. Obligatorio para claves y tipos " "asimétricos de orden." #, python-format msgid "The bucket you tried to delete is not empty (%s)." msgstr "El grupo que ha intentado suprimir no está vacío (%s)." msgid "The can be used to unmap a defined device." msgstr "" "Puede utilizarse para anular la correlación de un dispositivo definido." msgid "The certificate or AWS Key ID provided does not exist" msgstr "El certificado o AWS Key ID proporcionado no existe" msgid "The channel for receiving signals." msgstr "El canal para recibir señales." msgid "" "The class that provides encryption support. For example, nova.volume." "encryptors.luks.LuksEncryptor." msgstr "" "La clase que proporciona soporte para el cifrado. Por ejemplo, nova.volume." "encryptors.luks.LuksEncryptor." #, python-format msgid "The client (%(client_name)s) is not available." msgstr "El cliente (%(client_name)s) no está disponible." msgid "The cluster ID this node belongs to." msgstr "El ID de clúster al que pertenece este nodo." msgid "The config value of the software config." 
msgstr "El valor de la configuración de software." msgid "" "The configuration tool used to actually apply the configuration on a server. " "This string property has to be understood by in-instance tools running " "inside deployed servers." msgstr "" "Herramienta de configuración utilizada para aplicar la configuración en un " "servidor. Esta propiedad de cadena debe ser comprendida por las herramientas " "dentro de la instancia que se ejecutan en el interior de los servidores " "desplegados." msgid "The content of the CSR. Only for certificate orders." msgstr "El contenido del CSR. Solo para órdenes de certificados." #, python-format msgid "" "The contents of personality file \"%(path)s\" is larger than the maximum " "allowed personality file size (%(max_size)s bytes)." msgstr "" "El contenido del archivo de personalidad \"%(path)s\" es mayor que el tamaño " "máximo de archivo de personalidad permitido (%(max_size)s bytes)." msgid "The current size of AutoscalingResourceGroup." msgstr "Tamaño actual de AutoscalingResourceGroup." msgid "The current status of the volume." msgstr "El estado actual del volumen." msgid "" "The database instance was created, but heat failed to set up the datastore. " "If a database instance is in the FAILED state, it should be deleted and a " "new one should be created." msgstr "" "Se ha creado la instancia de base de datos, pero Heat no ha podido " "configurar el almacén de datos. Si hay una instancia de base de datos en " "estado FAILED, se debe suprimir y debe crearse una nueva." msgid "" "The dead peer detection protocol configuration of the ipsec site connection." msgstr "" "La configuración de protocolo de detección de igual muerto de la conexión " "del sitio ipsec." msgid "The decrypted secret payload." msgstr "La carga útil secreta sin cifrar." msgid "" "The default cloud-init user set up for each image (e.g. \"ubuntu\" for " "Ubuntu 12.04+, \"fedora\" for Fedora 19+ and \"cloud-user\" for CentOS/RHEL " "6.5)." msgstr "" "El usuario cloud-init predeterminado configurado para cada imagen (p.e. " "\"ubuntu\" para Ubuntu 12.04+, \"fedora\" para Fedora 19+ y \"cloud-user\" " "para CentOS/RHEL 6.5)." msgid "The description for the QoS policy." msgstr "Descripción de la política de QoS." msgid "The description of the ike policy." msgstr "La descripción de la política ike." msgid "The description of the ipsec policy." msgstr "La descripción de la política ipsec." msgid "The description of the ipsec site connection." msgstr "La descripción de la conexión del sitio ipsec." msgid "The description of the vpn service." msgstr "La descripción del servicio vpn." msgid "The destination for static route." msgstr "El destino para la ruta estática." msgid "The details of physical object." msgstr "La información detallada del objeto físico." msgid "The device id for the network gateway." msgstr "El ID de dispositivo para la pasarela de red." msgid "" "The device where the volume is exposed on the instance. This assignment may " "not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "El dispositivo donde el volumen se expone en la instancia. Esta asignación " "puede no concederse y se aconseja utilizar la vía de acceso /dev/disk/by-id/" "virtio- en su lugar." msgid "" "The direction in which metering rule is applied, either ingress or egress." msgstr "" "La dirección en la que se aplica la regla de medición, de entrada o salida." msgid "The direction in which metering rule is applied." 
msgstr "La dirección en la que se aplica la regla de medición." msgid "" "The direction in which the security group rule is applied. For a compute " "instance, an ingress security group rule matches traffic that is incoming " "(ingress) for that instance. An egress rule is applied to traffic leaving " "the instance." msgstr "" "La dirección en la que se aplica la regla de grupo de seguridad. Para una " "instancia de cálculo, una regla de grupo de seguridad de entrada compara el " "tráfico que entra (entrada) para esa instancia. Se aplica una regla de " "salida al tráfico que sale de la instancia." msgid "The directory to search for environment files." msgstr "Directorio en el que buscar archivos de entorno." msgid "The ebs volume to attach to the instance." msgstr "El volumen ebs que se adjunta a la instancia." msgid "The encapsulation mode of the ipsec policy." msgstr "La modalidad de encapsulación de la política ipsec." msgid "The encoding format used to provide the payload data." msgstr "" "El formato de cifrado utilizado para proporcionar los datos de la carga útil." msgid "The encryption algorithm of the ipsec policy." msgstr "El algoritmo de cifrado de la política ipsec." msgid "The encryption algorithm or mode. For example, aes-xts-plain64." msgstr "El modo o el algoritmo de cifrado. Por ejemplo, aes-xts-plain64." msgid "The encryption algorithm used by the ike policy." msgstr "El algoritmo de cifrado utilizado por la política IKE." msgid "The environment is not a valid YAML mapping data type." msgstr "El entorno no es un tipo de datos de correlación YAML válido." msgid "The expiration date for the secret in ISO-8601 format." msgstr "La fecha de caducidad del secreto en formato ISO-8601." msgid "The external load balancer port number." msgstr "Número de puerto de equilibrador de carga externo." msgid "The extra specs key and value pairs of the volume type." msgstr "Pares de clave-valor de especificación adicional del tipo de volumen." msgid "The flavor to use." msgstr "El sabor a utilizar." #, python-format msgid "The following parameters are immutable and may not be updated: %(keys)s" msgstr "" "Los siguientes parámetros son inmutables y no se pueden actualizar: %(keys)s" #, python-format msgid "The function %s is not supported in this version of HOT." msgstr "La función %s no está soportada en esta versión de HOT." msgid "" "The gateway IP address. Set to any of [ null | ~ | \"\" ] to create/update a " "subnet without a gateway. If omitted when creation, neutron will assign the " "first free IP address within the subnet to the gateway automatically. If " "remove this from template when update, the old gateway IP address will be " "detached." msgstr "" "La dirección IP de la pasarela. Definir con cualquiera de los valores " "siguientes: [ null | ~ | \"\" ] para crear/actualizar una subred sin una " "pasarela. Si se omite, durante la creación, neutron asignará automáticamente " "la primera dirección IP libre dentro de la subred a la pasarela. Si se " "elimina de la plantilla al actualizar, se desasociará la dirección IP " "antigua de pasarela." #, python-format msgid "The grouped parameter %s does not reference a valid parameter." msgstr "El parámetro agrupado %s no hace referencia a un parámetro válido." msgid "The host from the container URL." msgstr "El host del URL de contenedor." msgid "The host from which a user is allowed to connect to the database." msgstr "" "El host desde el que se permite que un usuario se conecte a la base de datos." 
msgid "" "The id for L2 segment on the external side of the network gateway. Must be " "specified when using vlan." msgstr "" "El id para el segmento L2 en el lado externo de la pasarela de red. Se debe " "especificar al utilizar vlan." msgid "The identifier of the CA to use." msgstr "El identificador de la CA a utilizar." msgid "The image ID. Glance will generate a UUID if not specified." msgstr "El ID de imagen. Glance generará un UUID si no se especifica." msgid "The initiator of the ipsec site connection." msgstr "El iniciador de la conexión del sitio ipsec." msgid "The input string to be stored." msgstr "La cadena de entrada a almacenar." msgid "The interface name for the network gateway." msgstr "El nombre de interfaz para la pasarela de red." msgid "The internal network to connect on the network gateway." msgstr "La red interna a la que se conecta en la pasarela de red." msgid "The last operation for the database instance failed due to an error." msgstr "" "No se ha podido realizar la última operación de la instancia de base de " "datos debido a un error." #, python-format msgid "The length must be at least %(min)s." msgstr "La longitud debe estar al menos %(min)s." #, python-format msgid "The length must be in the range %(min)s to %(max)s." msgstr "La longitud debe estar en el rango de %(min)s a %(max)s." #, python-format msgid "The length must be no greater than %(max)s." msgstr "La longitud no debe ser mayor que %(max)s." msgid "The length of time, in minutes, to wait for the nested stack creation." msgstr "" "Periodo de tiempo, en minutos, durante el cual se debe esperar a que se cree " "la pila anidada." msgid "" "The list of HTTP status codes expected in response from the member to " "declare it healthy." msgstr "" "La lista de códigos de estado HTTP esperados en respuesta del miembro para " "declararlo saludable." msgid "The list of Nova server IDs load balanced." msgstr "La lista de ID de servidor Nova con la carga equilibrada." msgid "The list of Pools related to this monitor." msgstr "La lista de agrupaciones relacionadas con este supervisor." msgid "The list of attachments of the volume." msgstr "Lista de conexiones del volumen." msgid "" "The list of configurations for the different lifecycle actions of the " "represented software component." msgstr "" "Lista de configuraciones de las distintas acciones del ciclo de vida del " "componente de software representado." msgid "The list of instance IDs load balanced." msgstr "La lista de ID de instancia con equilibrio de carga." msgid "" "The list of resource types to create. This list may contain type names or " "aliases defined in the resource registry. Specific template names are not " "supported." msgstr "" "La lista de tipos de recurso a crear. Esta lista puede contener nombres o " "alias de tipos definidos en el registro del recurso. No se admiten nombres " "de plantilla específicos." msgid "The list of tags to associate with the volume." msgstr "La lista de códigos a asociar con el volumen." msgid "The load balancer transport protocol to use." msgstr "Protocolo de transporte de equilibrador de carga a utilizar." msgid "" "The location where the volume is exposed on the instance. This assignment " "may not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "La ubicación donde se expone el volumen en la instancia. Esta asignación " "puede no concederse y se aconseja utilizar la vía de acceso /dev/disk/by-id/" "virtio- en su lugar." 
msgid "The manually assigned alternative public IPv4 address of the server." msgstr "" "La dirección IPv4 pública alternativa asignada manualmente del servidor." msgid "The manually assigned alternative public IPv6 address of the server." msgstr "" "La dirección IPv6 pública alternativa asignada manualmente del servidor." msgid "The maximum number of connections per second allowed for the vip." msgstr "El número máximo de conexiones por segundo permitidas para el vip." msgid "" "The maximum number of connections permitted for this load balancer. Defaults " "to -1, which is infinite." msgstr "" "El número máximo de conexiones permitidas para este equilibrador de carga. " "El valor predeterminado es -1, que es infinito." msgid "The maximum number of resources to create at once." msgstr "El número máximo de recursos que se crearán a la vez." msgid "The maximum number of resources to replace at once." msgstr "El número máximo de recursos que se sustituyen a la vez." msgid "" "The maximum number of seconds to wait for the resource to signal completion. " "Once the timeout is reached, creation of the signal resource will fail." msgstr "" "El número máximo de segundos que se espera a que el recurso señale la " "finalización. Una vez se ha alcanzado el tiempo de espera, la creación de la " "señal de recurso fallará." msgid "" "The maximum port number in the range that is matched by the security group " "rule. The port_range_min attribute constrains the port_range_max attribute. " "If the protocol is ICMP, this value must be an ICMP type." msgstr "" "El número de puerto máximo en el rango coincidido con la regla de grupo de " "seguridad. El atributo port_range_min limíta el atributo port_range_max. Si " "el protocolo es ICMP, este valor debe ser uno de tipo ICMP." msgid "" "The maximum transmission unit size (in bytes) of the ipsec site connection." msgstr "" "El tamaño de unidad de transmisión máxima (en bytes) de la conexión del " "sitio ipsec." msgid "The maximum transmission unit size(in bytes) for the network." msgstr "Tamaño máximo de la unidad de transmisión (en bytes) para la red." msgid "The metering label ID to associate with this metering rule." msgstr "El ID de etiqueta de medición a asociar con esta regla de medición." msgid "" "The metric dimensions to match to the alarm dimensions. One or more " "dimension key names separated by a comma." msgstr "" "Las dimensiones métricas que deben coincidir con las dimensiones de la " "alarma. Uno o más nombres clave de dimensión separados por una coma." msgid "" "The minimum number of characters from this character class that will be in " "the generated string." msgstr "" "El número mínimo de caracteres de esta clase de carácter que estarán en la " "serie generada." msgid "" "The minimum number of characters from this sequence that will be in the " "generated string." msgstr "" "El número mínimo de caracteres de esta secuencia que estarán en la serie " "generada." msgid "" "The minimum number of resources in service while rolling updates are being " "executed." msgstr "" "El número mínimo de recursos en servicio cuando están ejecutando " "actualizaciones de transferencia." msgid "" "The minimum port number in the range that is matched by the security group " "rule. If the protocol is TCP or UDP, this value must be less than or equal " "to the value of the port_range_max attribute. If the protocol is ICMP, this " "value must be an ICMP type." msgstr "" "El número de puerto mínimo en el rango que coincide con la regla de grupo de " "seguridad. 
Si el protocolo es TCP o UDP, este valor debe ser menor que o " "igual al valor del atributo port_range_max. Si el protocolo es ICMP, este " "valor debe ser un tipo ICMP." msgid "The name for the QoS policy." msgstr "El nombre de la política de QoS." msgid "The name for the address scope." msgstr "El nombre del ámbito de dirección." msgid "" "The name of the driver used for instantiating container networks. By " "default, Magnum will choose the pre-configured network driver based on COE " "type." msgstr "" "El nombre del controlador utilizado para instanciar las redes de contenedor. " "De forma predeterminada, Magnum elegirá el controlador de red preconfigurado " "en base al tipo COE." msgid "The name of the error document." msgstr "El nombre del documento de error." msgid "The name of the hosted zone that is associated with the LoadBalancer." msgstr "" "El nombre de la zona de host que está asociada con el equilibrador de carga." msgid "The name of the ike policy." msgstr "El nombre de la política ike." msgid "The name of the index document." msgstr "El nombre del documento de índice." msgid "The name of the ipsec policy." msgstr "El nombre de la política ipsec." msgid "The name of the ipsec site connection." msgstr "El nombre de la conexión del sitio ipsec." msgid "The name of the key pair." msgstr "El nombre del par de claves." msgid "The name of the network gateway." msgstr "El nombre de la pasarela de red." msgid "The name of the network." msgstr "El nombre de la red." msgid "The name of the router." msgstr "El nombre del direccionador." msgid "The name of the subnet." msgstr "El nombre de la subred." msgid "The name of the user that the new key will belong to." msgstr "El nombre del usuario al que pertenecerá la nueva clave." msgid "" "The name of the virtual device. The name must be in the form ephemeralX " "where X is a number starting from zero (0); for example, ephemeral0." msgstr "" "El nombre del dispositivo virtual. El nombre debe tener el formato " "ephemeralX, donde X es un número a partir de cero (0); por ejemplo, " "ephemeral0." msgid "The name of the vpn service." msgstr "El nombre del servicio vpn." msgid "The name or ID of QoS policy to attach to this network." msgstr "El nombre o el ID de la política de QoS a conectar a esta red." msgid "The name or ID of QoS policy to attach to this port." msgstr "El nombre o el ID de la política de QoS a conectar a este puerto." msgid "The name or ID of parent of this keystone project in hierarchy." msgstr "El nombre o el ID de este proyecto de keystone en la jerarquía." msgid "The name or ID of target cluster." msgstr "El nombre o el ID del clúster de destino." msgid "The name or ID of the bay model." msgstr "El nombre o el ID del modelo de bahía." msgid "The name or ID of the subnet on which to allocate the VIP address." msgstr "El nombre o el ID de la subred donde asignar la dirección VIP." msgid "The name or ID of the subnet pool." msgstr "El nombre o el ID de la agrupación de subred." msgid "The name or id of the Senlin profile." msgstr "El nombre o el ID del perfil Senlin." msgid "The negotiation mode of the ike policy." msgstr "La modalidad de negociación de la política ike." msgid "The next hop for the destination." msgstr "El siguiente salto del destino." msgid "The node count for this bay." msgstr "El recuento de nodos para esta bahía." msgid "The notification methods to use when an alarm state is ALARM." msgstr "" "Los métodos de notificación a utilizar cuando el estado de una alarma es " "ALARMA." 
msgid "The notification methods to use when an alarm state is OK." msgstr "" "Los métodos de notificación a utilizar cuando el estado de una alarma es " "Correcto." msgid "The notification methods to use when an alarm state is UNDETERMINED." msgstr "" "Los métodos de notificación a utilizar cuando el estado de una alarma es " "INDETERMINADO." msgid "The number of I/O operations per second that the volume supports." msgstr "El número de operaciones de E/S por segundo que admite el volumen." msgid "The number of bytes stored in the container." msgstr "El número de bytes almacenados en el contenedor." msgid "" "The number of consecutive health probe failures required before moving the " "instance to the unhealthy state" msgstr "" "El número de anomalías de análisis de estado consecutivas necesarias antes " "de mover la instancia al estado no saludable." msgid "" "The number of consecutive health probe successes required before moving the " "instance to the healthy state." msgstr "" "El número de éxitos de análisis de estado consecutivos necesarios antes de " "mover la instancia al estado saludable." msgid "The number of master nodes for this bay." msgstr "El número de nodos mestros para esta bahía." msgid "The number of objects stored in the container." msgstr "El número de objetos almacenados en el contenedor." msgid "The number of replicas to be created." msgstr "El número de réplicas a crear." msgid "The number of resources to create." msgstr "El número de recursos a crear." msgid "The number of seconds to wait between batches of updates." msgstr "El número de segundos que se espera entre lotes de actualizaciones." msgid "The number of seconds to wait between batches." msgstr "El número de segundos que se espera entre lotes." msgid "The number of seconds to wait for the cluster actions." msgstr "El número de segundos que se espera para acciones de clúster." msgid "" "The number of seconds to wait for the correct number of signals to arrive." msgstr "" "El número de segundo a esperar a que llegue el número correcto de señales." msgid "" "The number of success signals that must be received before the stack " "creation process continues." msgstr "" "El número de señales de éxito que se deben recibir antes de que el proceso " "de creación de pila continúe." msgid "" "The optional public key. This allows users to supply the public key from a " "pre-existing key pair. If not supplied, a new key pair will be generated." msgstr "" "Clave pública opcional. Permite a los usuarios proporcionar la clave pública " "desde un par de claves existente previamente. Si no se proporciona, se " "generará un nuevo par de claves." msgid "" "The owner tenant ID of the address scope. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "El ID de inquilino propietario del ámbito de dirección. Sólo los usuarios " "administrativos pueden especificar un ID de inquilino distinto del suyo " "propio." msgid "The owner tenant ID of this QoS policy." msgstr "ID del arrendatario propietario de esta política de QoS." msgid "The owner tenant ID of this rule." msgstr "ID del arrendatario propietario de esta regla." msgid "" "The owner tenant ID. Only required if the caller has an administrative role " "and wants to create a RBAC for another tenant." msgstr "" "El ID de inquilino propietario. Solo es necesario si el interlocutor tiene " "un rol administrativo y dese crear un RBAC para otro inquilino." msgid "The parameters passed to action when the receiver is signaled." 
msgstr "Los parámetros que se pasan a acción cuando se señala el receptor." msgid "The parent URL of the container." msgstr "El URL padre del contenedor." msgid "The payload of the created certificate, if available." msgstr "La carga útil del certificado creado, si está disponible." msgid "The payload of the created intermediates, if available." msgstr "La carga útil de los intermedios creados, si está disponible." msgid "The payload of the created private key, if available." msgstr "La carga útil de la clave privada creada, si está disponible." msgid "The payload of the created public key, if available." msgstr "La carga útil de la clave pública creada, si está disponible." msgid "The perfect forward secrecy of the ike policy." msgstr "El secreto de reenvío perfecto de la política ike." msgid "The perfect forward secrecy of the ipsec policy." msgstr "El secreto de reenvío perfecto de la política ipsec." #, python-format msgid "The personality property may not contain greater than %s entries." msgstr "La propiedad de personalidad no puede contener más de %s entradas." msgid "The physical mechanism by which the virtual network is implemented." msgstr "El mecanismo físico por el que está implementada la red virtual." msgid "The port being checked." msgstr "El puerto que se está comprobando." msgid "The port id, either subnet or port_id should be specified." msgstr "Se debe especificar el id de puerto, subnet o port_id." msgid "The port on which the server will listen." msgstr "El puerto en el cual el servidor escuchará." msgid "The port, either subnet or port should be specified." msgstr "El puerto; debe especificarse la subred o el puerto." msgid "The pre-shared key string of the ipsec site connection." msgstr "La serie de clave precompartida de la conexión del sitio ipsec." msgid "The private key if it has been saved." msgstr "La clave privada si se ha guardado." msgid "The profile of certificate to use." msgstr "El perfil del certificado a utilizar." msgid "" "The protocol that is matched by the security group rule. Valid values " "include tcp, udp, and icmp." msgstr "" "El protocólo que está coincidido con la regla de grupo de seguridad. Valores " "válido incluyen tcp, udp, y icmp." msgid "The public key." msgstr "La clave pública." msgid "The query string is malformed" msgstr "La cadena de consulta esta formada incorrectamente" msgid "The query to filter the metrics." msgstr "La consulta para filtrar las métricas." msgid "" "The random string generated by this resource. This value is also available " "by referencing the resource." msgstr "" "La serie aleatoria generada por este recurso. Este valor también está " "disponible al hacer referencia al recurso." msgid "The reference to a LaunchConfiguration resource." msgstr "Referencia a un recurso LaunchConfiguration." msgid "" "The remote IP prefix (CIDR) to be associated with this security group rule." msgstr "" "El prefijo del IP remote (CIDR) para ser asociado con esta regla de grupo de " "seguridad." msgid "The remote branch router identity of the ipsec site connection." msgstr "" "La identidad de direccionador de bifurcación remoto de la conexión del sitio " "ipsec." msgid "The remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "" "La dirección IPv4 o dirección IPv6 pública de direccionador de bifurcación " "remoto o FQDN." msgid "" "The remote group ID to be associated with this security group rule. If no " "value is specified then this rule will use this security group for the " "remote_group_id. 
The remote mode parameter must be set to \"remote_group_id" "\"." msgstr "" "El Id de grupo remoto que se debe asociar con esta regla de grupo de " "seguridad. Si no se especifica ningún valor, esta regla utilizará este grupo " "de seguridad para el remote_group_id. El parámetro de modalidad remota debe " "establecerse en \"remote_group_id\"." msgid "The remote subnet(s) in CIDR format of the ipsec site connection." msgstr "" "La subred(es) remota(s) en formato CIDR de la conexión del sitio ipsec." msgid "The request is missing an action or operation parameter" msgstr "A la petición le falta una acción o un parámetro de operación" msgid "The request processing has failed due to an internal error" msgstr "El procesamiento de la solicitud ha fallado debido a un error interno" msgid "The request signature does not conform to AWS standards" msgstr "La firma de solicitud no cumple los estándares de AWS" msgid "" "The request signature we calculated does not match the signature you provided" msgstr "" "La firma de solicitud que calculamos no coincide con la firma que ha " "proporcionado" msgid "The requested action is not yet implemented" msgstr "La acción solicitada aún no está implementada" #, python-format msgid "The resource %s is already being updated." msgstr "El recurso %s ya se está actualizando." msgid "The resource href of the queue." msgstr "El href del recurso de la cola." msgid "The route mode of the ipsec site connection." msgstr "La modalidad de ruta de la conexión del sitio ipsec." msgid "The router id." msgstr "El ID del router." msgid "The router to which the vpn service will be inserted." msgstr "Direccionador en el que se va a insertar el servicio vpn." msgid "The router." msgstr "El direccionador." msgid "The safety assessment lifetime configuration for the ike policy." msgstr "" "La configuración de tiempo de vida de evaluación de seguridad para la " "política ike." msgid "The safety assessment lifetime configuration of the ipsec policy." msgstr "" "La configuración de tiempo de vida de evaluación de seguridad de la política " "ipsec." msgid "" "The security group that you can use as part of your inbound rules for your " "LoadBalancer's back-end instances." msgstr "" "El grupo de seguridad que puede utilizar como parte de las reglas de entrada " "para las instancias de programa de fondo del equilibrador de carga." msgid "" "The server could not comply with the request since it is either malformed or " "otherwise incorrect." msgstr "" "El servidor no puede satisfacer la solicitud ya que tiene un formato " "incorrecto o es incorrecta por otro motivo." msgid "The set of parameters passed to this nested stack." msgstr "El conjunto de parámetros pasados a esta pila anidada." msgid "The size in GB of the docker volume." msgstr "El tamaño del volumen docker en GB." msgid "The size of AutoScalingGroup can not be less than zero" msgstr "El tamaño de AutoScalingGroup no puede ser menor que cero" msgid "" "The size of the prefix to allocate when the cidr or prefixlen attributes are " "not specified while creating a subnet." msgstr "" "El tamaño del prefijo a asignar cuando no se han especificado los atributos " "cidr o prefixlen al crear una subred." msgid "The size of the swap, in MB." msgstr "El tamaño del intercambio, en MB." msgid "The size of the volume in GB." msgstr "El tamaño del volumen en GB." msgid "" "The size of the volume, in GB. It is safe to leave this blank and have the " "Compute service infer the size." msgstr "" "El tamaño del volumen, en GB. 
Es seguro dejar esto en blanco y que el " "servicio de cálculo deduzca el tamaño." msgid "" "The size of the volume, in GB. Must be equal or greater than the size of the " "snapshot. It is safe to leave this blank and have the Compute service infer " "the size." msgstr "" "El tamaño del volumen, en GB. Debe ser mayor o igual que el tamaño de la " "instantánea. Es seguro dejar este valor en blanco y que el servicio de " "cálculo deduzca el tamaño." msgid "The snapshot the volume was created from, if any." msgstr "La instantánea desde la que se ha creado el volumen, si existe." msgid "The source of certificate request." msgstr "El origen de la solicitud de certificado." #, python-format msgid "The specified reference \"%(resource)s\" (in %(key)s) is incorrect." msgstr "" "La referencia especificada \"%(resource)s\" (en %(key)s) es incorrecta." msgid "The start and end addresses for the allocation pools." msgstr "Las direcciones iniciales y finales para los bancos de asignación." msgid "The status of the container." msgstr "El estado del contenedor." msgid "The status of the firewall." msgstr "El estado del cortafuegos." msgid "The status of the ipsec site connection." msgstr "El estado de la conexión del sitio ipsec." msgid "The status of the network." msgstr "El estado de la red." msgid "The status of the order." msgstr "El estado de la orden." msgid "The status of the port." msgstr "El estado del puerto." msgid "The status of the router." msgstr "El estado del direccionador." msgid "The status of the secret." msgstr "El estado del secreto." msgid "The status of the vpn service." msgstr "El estado del servicio vpn." msgid "" "The string that was stored. This value is also available by referencing the " "resource." msgstr "" "La cadena que se ha almacenado. Este valor también está disponible al hacer " "referencia al recurso." msgid "The subject of the certificate request." msgstr "El asunto de la solicitud de certificado." msgid "" "The subnet for the port on which the members of the pool will be connected." msgstr "" "La subred para el puerto en el que los miembros de la agrupación se " "conectarán." msgid "The subnet, either subnet or port should be specified." msgstr "La subred; debe especificarse la subred o el puerto." msgid "The tag key name." msgstr "Nombre de clave de código." msgid "The tag value." msgstr "El valor de código." msgid "The template is not a JSON object or YAML mapping." msgstr "La plantilla no es un objeto JSON ni correlación YAML." #, python-format msgid "The template section is invalid: %(section)s" msgstr "La sección de plantillas es inválida: %(section)s" #, python-format msgid "The template version is invalid: %(explanation)s" msgstr "La versión de la plantilla es inválida: %(explanation)s" msgid "The tenant owning this floating IP." msgstr "El arrendatario propietario de esta IP flotante." msgid "The tenant owning this network." msgstr "El arrendatario propietario de esta red." msgid "The time range in seconds." msgstr "El rango de tiempo en segundos." msgid "The timestamp indicating volume creation." msgstr "La indicación de fecha y hora que señala la creación de volumen." msgid "The transform protocol of the ipsec policy." msgstr "El protocolo de transformación de la política ipsec." msgid "The type of profile." msgstr "El tipo del perfil." msgid "The type of senlin policy." msgstr "El tipo de la política senlin." msgid "The type of the certificate request." msgstr "El tipo de la solicitud de certificado." msgid "The type of the order." 
msgstr "El tipo de la orden." msgid "The type of the resources in the group." msgstr "El tipo de los recursos del grupo." msgid "The type of the secret." msgstr "El tipo del secreto." msgid "The type of the volume mapping to a backend, if any." msgstr "" "El tipo del volumen que se correlaciona con un programa de fondo, si existe." msgid "The type/format the secret data is provided in." msgstr "El tipo/formato en que se proporcionan los datos secretos." msgid "The type/mode of the algorithm associated with the secret information." msgstr "El tipo/modo del algoritmo asociado con la información secreta." msgid "The unencrypted plain text of the secret." msgstr "El texto sin formato y sin cifrar del secreto." msgid "" "The unique identifier of ike policy associated with the ipsec site " "connection." msgstr "" "El identificador exclusivo de la política ike asociada con la conexión del " "sitio ipsec." msgid "" "The unique identifier of ipsec policy associated with the ipsec site " "connection." msgstr "" "El identificador exclusivo de la política ipsec asociada con la conexión del " "sitio ipsec." msgid "" "The unique identifier of the router to which the vpn service was inserted." msgstr "" "El identificador exclusivo del direccionador en el que se ha insertado el " "servicio vpn." msgid "" "The unique identifier of the subnet in which the vpn service was created." msgstr "" "El identificador exclusivo de la subred en la que se ha creado el servicio " "vpn." msgid "The unique identifier of the tenant owning the ike policy." msgstr "" "El identificador exclusivo del arrendatario propietario de la política ike." msgid "The unique identifier of the tenant owning the ipsec policy." msgstr "" "El identificador exclusivo del arrendatario propietario de la política ipsec." msgid "The unique identifier of the tenant owning the ipsec site connection." msgstr "" "El identificador exclusivo del arrendatario propietario de la conexión del " "sitio ipsec." msgid "The unique identifier of the tenant owning the vpn service." msgstr "" "El identificador exclusivo del arrendatario propietario del servicio vpn." msgid "" "The unique identifier of vpn service associated with the ipsec site " "connection." msgstr "" "El identificador exclusivo del servicio vpn asociado con la conexión del " "sitio ipsec." msgid "" "The user-defined region ID and should unique to the OpenStack deployment. " "While creating the region, heat will url encode this ID." msgstr "" "El ID de región definido por el usuario, y debería ser exclusivo en el " "despliegue de OpenStack. Al crear la región, heat codificará en URL este ID." msgid "" "The value for the socket option TCP_KEEPIDLE. This is the time in seconds " "that the connection must be idle before TCP starts sending keepalive probes." msgstr "" "El valor de la opción de pila TCP_KEEPIDLE. Es el tiempo en segundos que " "debe estar la conexión inactiva antes de que TCP comience a enviar sondas " "keepalive." #, python-format msgid "The value must be at least %(min)s." msgstr "El valor debe ser al menos %(min)s." #, python-format msgid "The value must be in the range %(min)s to %(max)s." msgstr "El valor debe estar en el rango %(min)s a %(max)s." #, python-format msgid "The value must be no greater than %(max)s." msgstr "El valor no debe ser mayor que %(max)s." #, python-format msgid "The values of the \"for_each\" argument to \"%s\" must be lists" msgstr "Los valores del argumento \"for_each\" para \"%s\" deben ser listas" msgid "The version of the ike policy." 
msgstr "La versión de la política ike." msgid "" "The vnic type to be bound on the neutron port. To support SR-IOV PCI " "passthrough networking, you can request that the neutron port to be realized " "as normal (virtual nic), direct (pci passthrough), or macvtap (virtual " "interface with a tap-like software interface). Note that this only works for " "Neutron deployments that support the bindings extension." msgstr "" "El tipo vnic a vincular en el puerto Neutron. Para dar soporte a redes de " "paso a través SR-IOV PCI, puede solicitar que el puerto Neutron se tenga en " "cuenta como normal (nic virtual), direct (paso a través pci) o macvtap " "(interfaz virtual con una interfaz de software de tipo táctil). Tenga en " "cuenta que sólo funciona para despliegues de Neutron que admitan la " "ampliación de enlaces." msgid "The volume type." msgstr "El tipo de volumen." msgid "The volume used as source, if any." msgstr "El volumen utilizado como origen, si existe." msgid "The volume_id can be boot or non-boot device to the server." msgstr "" "El volume_id puede ser un dispositivo de arranque o no arranque para el " "servidor." msgid "The website endpoint for the specified bucket." msgstr "El punto final de sitio web para el grupo especificado." #, python-format msgid "There is no rule %(rule)s. List of allowed rules is: %(rules)s." msgstr "" "No hay ninguna regla %(rule)s. La lista de reglas permitidas es: %(rules)s." msgid "" "There is no such option during 5.0.0, so need to make this attribute " "unsupported, otherwise error will raised." msgstr "" "No existe esta opción en 5.0.0, por tanto, es necesario dejar de dar soporte " "a este atributo, de lo contrario surgirá un error." msgid "" "There is no such option during 5.0.0, so need to make this property " "unsupported while it not used." msgstr "" "No existe esta opción en 5.0.0, por tanto, es necesario dejar de dar soporte " "a esta propiedad mientras no se utiliza." #, python-format msgid "" "There was an error loading the definition of the global resource type " "%(type_name)s." msgstr "" "Se ha producido un error al cargar la definición del tipo de recurso global " "%(type_name)s." msgid "This endpoint is enabled or disabled." msgstr "Este punto final está habilitado o deshabilitado." msgid "This project is enabled or disabled." msgstr "Este proyecto está habilitado o deshabilitado." msgid "This region is enabled or disabled." msgstr "Esta región está habilitada o deshabilitada." msgid "This service is enabled or disabled." msgstr "Este servicio está habilitado o deshabilitado." msgid "Threshold to evaluate against." msgstr "Umbral en el que evaluar." msgid "Time To Live (Seconds)." msgstr "Tiempo de vida (segundos)." msgid "Time of the first execution in format \"YYYY-MM-DD HH:MM\"." msgstr "Hora de la primera ejecución, con el formato \"AAAA-MM-DD HH:MM:SS\"." msgid "Time of the next execution in format \"YYYY-MM-DD HH:MM:SS\"." msgstr "" "Hora de la siguiente ejecución, con el formato \"AAAA-MM-DD HH:MM:SS\"." msgid "" "Timeout for client connections' socket operations. If an incoming connection " "is idle for this number of seconds it will be closed. A value of '0' means " "wait forever." msgstr "" "Tiempo de espera para las operaciones de conexión de pila del cliente. Si " "una conexión entrante está inactiva para este número de segundos se cerrará. " "Un valor de '0' significa esperar permanentemente." msgid "Timeout for creating the bay in minutes. Set to 0 for no timeout." 
msgstr "" "Tiempo de espera para crear la bahía en minutos. Establézcalo en 0 si no " "desea que haya tiempo de espera." msgid "Timeout in seconds for stack action (ie. create or update)." msgstr "" "Tiempo de espera en segúndos para una acción de pila (por ejemplo, creár o " "actualizár)." msgid "" "Toggle to enable/disable caching when Orchestration Engine looks for other " "OpenStack service resources using name or id. Please note that the global " "toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use " "this feature." msgstr "" "Conmute para habilitar/deshabilitar el almacenamiento en memoria caché " "cuando el motor de orquestación busca otros recursos del servicio OpenStack " "utilizando el nombre o el ID. Observe que el conmutador global para oslo." "cache(enabled=True del grupo [cache]) debe estar habilitado para poder " "utilizar esta característica." msgid "" "Toggle to enable/disable caching when Orchestration Engine retrieves " "extensions from other OpenStack services. Please note that the global toggle " "for oslo.cache(enabled=True in [cache] group) must be enabled to use this " "feature." msgstr "" "Conmute para habilitar/deshabilitar el almacenamiento en memoria caché " "cuando el motor de orquestación recupera extensiones de otros servicios de " "OpenStack. Observe que el conmutador global para oslo.cache(enabled=True del " "grupo [cache]) debe estar habilitado para poder utilizar esta característica." msgid "" "Toggle to enable/disable caching when Orchestration Engine validates " "property constraints of stack.During property validation with constraints " "Orchestration Engine caches requests to other OpenStack services. Please " "note that the global toggle for oslo.cache(enabled=True in [cache] group) " "must be enabled to use this feature." msgstr "" "Conmute para habilitar/deshabilitar el almacenamiento en memoria caché " "cuando el motor de orquestación valide las restricciones de las propiedades " "de la pila. Durante la validación de propiedades con restricciones, el motor " "de orquestación capta solicitudes dirigidas a otros servicios de OpenStack. " "Observe que el conmutador global para oslo.cache(enabled=True del grupo " "[cache]) debe estar habilitado para poder utilizar esta característica." msgid "" "Token for stack-user which can be used for signalling handle when " "signal_transport is set to TOKEN_SIGNAL. None for all other signal " "transports." msgstr "" "Señal de stack-user que puede utilizarse para señalar el manejador cuando " "signal_transport está definido como TOKEN_SIGNAL. Y hay None en todos los " "demás transportes de señal." msgid "" "Tokens are not needed for Swift TempURLs. This attribute is being kept for " "compatibility with the OS::Heat::WaitConditionHandle resource." msgstr "" "No se necesitan señales para Swift TempURLs. Este atributo se conserva a " "efectos de compatibilidad con el recurso OS::Heat::WaitConditionHandle." msgid "Topic" msgstr "Tema" msgid "Transform protocol for the ipsec policy." msgstr "Protocolo de transformación de la política ipsec." msgid "True if alarm evaluation/actioning is enabled." msgstr "Verdadero si se ha habilitado la evaluación/accionamiento de alarma." msgid "" "True if the system should remember a generated private key; False otherwise." msgstr "" "Verdadero si el sistema debe recordar una clave privada generada; Falso en " "los demás casos." msgid "Type of access that should be provided to guest." msgstr "Tipo de acceso que se debe proporcionar al invitado." 
msgid "Type of adjustment (absolute or percentage)." msgstr "Tipo de ajuste (absoluto o porcentaje)." msgid "" "Type of endpoint in Identity service catalog to use for communication with " "the OpenStack service." msgstr "" "Tipo de punto de acceso en el catálogo del servicio Identity para el uso de " "la comunicacíon con el servicio OpenStack." msgid "Type of keystone Service." msgstr "Tipo del servicio de keystone." msgid "Type of receiver." msgstr "Tipo de receptor." msgid "Type of the data source." msgstr "Tipo del origen de datos." msgid "Type of the notification." msgstr "Tipo de la notificación" msgid "Type of the object that RBAC policy affects." msgstr "Tipo de objeto al que afecta la política RBAC ." msgid "Type of the value of the input." msgstr "Tipo del valor de la entrada." msgid "Type of the value of the output." msgstr "Tipo del valor de la salida." msgid "Type of the volume to create on Cinder backend." msgstr "Tipo del volumen a crear en el programa de fondo de Cinder." msgid "URL for API authentication" msgstr "URL para autenticación de API" msgid "URL for the data source." msgstr "URL del origen de datos." msgid "" "URL for the job binary. Must be in the format swift:/// or " "internal-db://." msgstr "" "URL del binario del trabajo. Debe ser en formato Swift:/// " "o de base de datos interna://." msgid "" "URL of TempURL where resource will signal completion and optionally upload " "data." msgstr "" "URL de TempURL donde el recurso señalará la terminación y opcionalmente " "cargará los datos." msgid "URL of keystone service endpoint." msgstr "URL del punto final del servicio de keystone." msgid "URL of the Heat CloudWatch server." msgstr "URL del servidor CloudWatch de Heat." msgid "" "URL of the Heat metadata server. NOTE: Setting this is only needed if you " "require instances to use a different endpoint than in the keystone catalog" msgstr "" "URL del servidor de metadatos de Heat. NOTA: solo es necesario definirlo si " "desea que las instancias utilicen un punto final distinto que el del " "catálogo de keystone" msgid "URL of the Heat waitcondition server." msgstr "URL del servidor waitcondition de Heat." msgid "" "URL where the data for this image already resides. For example, if the image " "data is stored in swift, you could specify \"swift://example.com/container/" "obj\"." msgstr "" "URL donde residen los datos de esta imagen. Por ejemplo, si los datos de " "imagen se almacenan en Swift, puede especificar \"swift://example.com/" "container/obj\"." msgid "UUID of the internal subnet to which the instance will be attached." msgstr "UUID de la subred interna a la que se conectará la instancia." #, python-format msgid "" "Unable to find neutron provider '%(provider)s', available providers are " "%(providers)s." msgstr "" "No se ha podido encontrar el proveedor de Neutron '%(provider)s', los " "proveedores disponibles son %(providers)s." #, python-format msgid "" "Unable to find senlin policy type '%(pt)s', available policy types are " "%(pts)s." msgstr "" "No se ha podido encontrar el tipo de política senlin '%(pt)s', los tipos de " "política disponibles son %(pts)s." #, python-format msgid "" "Unable to find senlin profile type '%(pt)s', available profile types are " "%(pts)s." msgstr "" "No se ha podido encontrar el tipo de perfil senlin '%(pt)s', los tipos de " "perfil disponibles son %(pts)s." 
#, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "No se ha podido cargar %(app_name)s desde el archivo de configuración " "%(conf_file)s.\n" "Se ha obtenido: %(e)r" #, python-format msgid "Unable to locate config file [%s]" msgstr "No se ha podido encontrar el archivo de configuración [%s]" #, python-format msgid "Unexpected action %(action)s" msgstr "Acción inesperada %(action)s" #, python-format msgid "Unexpected action %s" msgstr "Acción inesperada %s" #, python-format msgid "" "Unexpected properties: %(unexpected)s. Only these properties are allowed for " "%(type)s type of order: %(allowed)s." msgstr "" "Propiedades inesperadas: %(unexpected)s. Solo se permiten estas propiedades " "para el tipo de orden %(type)s: %(allowed)s." msgid "Unique identifier for the device." msgstr "Identificador exclusivo para el dispositivo." msgid "" "Unique identifier for the ike policy associated with the ipsec site " "connection." msgstr "" "Identificador exclusivo para la política ike asociada con la conexión del " "sitio ipsec." msgid "" "Unique identifier for the ipsec policy associated with the ipsec site " "connection." msgstr "" "Identificador exclusivo para la política ipsec asociada con la conexión del " "sitio ipsec." msgid "Unique identifier for the network owning the port." msgstr "Identificador exclusivo para la red propietaria del puerto." msgid "" "Unique identifier for the router to which the vpn service will be inserted." msgstr "" "Identificador exclusivo para el direccionador en el que el servicio vpn se " "insertará." msgid "" "Unique identifier for the vpn service associated with the ipsec site " "connection." msgstr "" "Identificador exclusivo para el servicio vpn asociado con la conexión del " "sitio ipsec." msgid "" "Unique identifier of the firewall policy to which this firewall rule belongs." msgstr "" "Identificador exclusivo de la política de cortafuegos a la que se pertenece " "esta regla de cortafuegos." msgid "Unique identifier of the firewall policy used to create the firewall." msgstr "" "Identificador exclusivo de la política de cortafuegos utilizada para crear " "el cortafuegos." msgid "Unknown" msgstr "Desconocido" #, python-format msgid "Unknown Property %s" msgstr "Propiedad desconocida %s" #, python-format msgid "Unknown attribute \"%s\"" msgstr "Atributo desconocido \"%s\"" #, python-format msgid "Unknown error retrieving %s" msgstr "Error desconocido al recuperar %s" #, python-format msgid "Unknown input %s" msgstr "Entrada desconocida %s" #, python-format msgid "Unknown key(s) %s" msgstr "Clave(s) desconocida(s) %s" msgid "Unknown share_status during creation of share \"{0}\"" msgstr "" "Estado del recurso compartido (share_status) desconocido durante la creación " "del recurso compartido \"{0}\"" #, python-format msgid "Unknown status creating Bay '%(name)s' - %(reason)s" msgstr "Estado desconocido al crear la bahía '%(name)s' - %(reason)s" msgid "Unknown status during deleting share \"{0}\"" msgstr "" "Estado del recurso compartido (share_status) desconocido al suprimir el " "recurso compartido \"{0}\"" #, python-format msgid "Unknown status updating Bay '%(name)s' - %(reason)s" msgstr "Estado desconocido al actualizar la bahía '%(name)s' - %(reason)s" #, python-format msgid "Unknown status: %s" msgstr "Estado desconocido: %s" #, python-format msgid "" "Unrecognized value \"%(value)s\" for \"%(name)s\", acceptable values are: " "true, false." 
msgstr "" "Valor no reconocido \"%(value)s\" para \"%(name)s\", los valores aceptables " "son: true, false." #, python-format msgid "Unsupported object type %(objtype)s" msgstr "Tipo de objeto no soportado %(objtype)s" #, python-format msgid "Unsupported resource '%s' in LoadBalancerNames" msgstr "Recurso '%s' no soportado en LoadBalancerNames" msgid "Unversioned keystone url in format like http://0.0.0.0:5000." msgstr "URL de keystone sin versión con un formato como http://0.0.0.0:5000." #, python-format msgid "Update to properties %(props)s of %(name)s (%(res)s)" msgstr "Actualizar a las propiedades %(props)s de %(name)s (%(res)s)" msgid "Updated At" msgstr "Actualizado el" msgid "Updating a stack when it is deleting" msgstr "Actualización de una pila cuando se está suprimiendo" msgid "Updating a stack when it is suspended" msgstr "Actualizando una pila cuando se suspende" msgid "" "Use get_resource|Ref command instead. For example: { get_resource : " " }" msgstr "" "Utilice el mandato get_resource|Ref en su plugar. Por ejemplo: " "{ get_resource : }" msgid "" "Use only with Neutron, to list the internal subnet to which the instance " "will be attached; needed only if multiple exist; list length must be exactly " "1." msgstr "" "Se utiliza sólo con Neutron, para listar la subred interna a la que se " "conectará la instancia; sólo es necesario si existen varias; la longitud de " "la lista debe ser exactamente 1." #, python-format msgid "Use property %s" msgstr "Utilice la propiedad %s" #, python-format msgid "Use property %s." msgstr "Utilizar propiedad %s." msgid "" "Use the `external_gateway_info` property in the router resource to set up " "the gateway." msgstr "" "Utilice la propiedad `external_gateway_info` del recurso de router para " "configurar la pasarela." msgid "" "Use the networks attribute instead of first_address. For example: " "\"{get_attr: [, networks, , 0]}\"" msgstr "" "Utilice el atributo networks en lugar de las first_address. Por ejemplo: " "\"{get_attr: [, networks, , 0]}\"" msgid "Use this resource at your own risk." msgstr "Utilice este recurso bajo su propia responsabilidad." #, python-format msgid "User %s in invalid domain" msgstr "Usuario %s en dominio inválido" #, python-format msgid "User %s in invalid project" msgstr "Usuario %s en proyecto inválido" msgid "User ID for API authentication" msgstr "ID de usuario para la autenticación de la API" msgid "User data to pass to instance." msgstr "Datos de usuario a pasar a instancia." msgid "User is not authorized to perform action" msgstr "Usuario no está autorizado a realizar la acción" msgid "User name to create a user on instance creation." msgstr "Nombre de usuario para crear un usuario en la creación de instancias." msgid "Username associated with the AccessKey." msgstr "Nombre de usuario asociado con la clave de acceso." msgid "Username for API authentication" msgstr "Nombre de usuario de autenticación de API" msgid "Username for accessing the data source URL." msgstr "Nombre de usuario para acceder al URL del origen de datos." msgid "Username for accessing the job binary URL." msgstr "Nombre de usuario para acceder al URL del binario del trabajo." msgid "Username of privileged user in the image." msgstr "Nombre de usuario del usuario privilegiado en la imagen." msgid "VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN networks." msgstr "ID de VLAN para redes VLAN o ID de túnel para redes GRE/VXLAN." msgid "VPC ID for this gateway association." msgstr "ID de VPC para esta asociación de pasarela." 
msgid "VPC ID for where the route table is created." msgstr "ID VPC para el lugar donde se crea la tabla de ruta." msgid "" "Valid values are encrypt or decrypt. The heat-engine processes must be " "stopped to use this." msgstr "" "Los valores válidos son cifrar o descifrar. Se deben detener los procesos " "del motor de Heat para utilizar esto." #, python-format msgid "Value \"%(val)s\" is invalid for data type \"%(type)s\"." msgstr "El valor \"%(val)s\" no es válido para el tipo de datos \"%(type)s\"." #, python-format msgid "Value '%(value)s' is invalid for '%(name)s' which only accepts integer." msgstr "" "El valor '%(value)s' no es válido para '%(name)s' que sólo acepta entero." #, python-format msgid "" "Value '%(value)s' is invalid for '%(name)s' which only accepts non-negative " "integer." msgstr "" "El valor '%(value)s' no es válido para '%(name)s' que sólo acepta enterono " "negativo." #, python-format msgid "Value '%s' is not an integer" msgstr "El valor '%s' no es un entero" #, python-format msgid "Value must be a comma-delimited list string: %s" msgstr "El valor debe ser una serie de lista delimitada por comas: %s" #, python-format msgid "Value must be of type %s" msgstr "El valor debe ser de tipo %s" #, python-format msgid "Value must be valid JSON: %s" msgstr "El valor debe ser JSON válido: %s" #, python-format msgid "Value must match pattern: %s" msgstr "Valor debe coincidir la forma: %s" msgid "" "Value which can be set or changed on stack update to trigger the resource " "for replacement with a new random string. The salt value itself is ignored " "by the random generator." msgstr "" "Valor que se puede establecer o cambiar en la actualización de pila para " "desencadenar el recurso para sustituirlo por una serie aleatoria nueva. El " "propio valor de sal es ignorado por el generador aleatorio." msgid "" "Value which can be set to fail the resource operation to test failure " "scenarios." msgstr "" "Valor que se puede establecer para hacer fallar la operación del recurso, " "para probar escenarios de fallo." msgid "" "Value which can be set to trigger update replace for the particular resource." msgstr "" "Valor que se puede establecer para desencadenar la sustitución de la " "actualización para el recurso en particular." #, python-format msgid "Version %(objver)s of %(objname)s is not supported" msgstr "Versión %(objver)s de %(objname)s no está soportada" msgid "Version for the ike policy." msgstr "Versión para la política ike." msgid "Version of Hadoop running on instances." msgstr "Versión de Hadoop ejecutándose en instancias." msgid "Version of IP address." msgstr "Versión de la dirección IP." msgid "Vip associated with the pool." msgstr "Vip asociado con la agrupación." msgid "Volume attachment failed" msgstr "Ha fallado la conexión del volumen" msgid "Volume backup failed" msgstr "Ha fallado la copia de seguridad del volumen" msgid "Volume backup restore failed" msgstr "Ha fallado la restauración de la copia de seguridad del volumen" msgid "Volume create failed" msgstr "Ha fallado la creación del volumen" msgid "Volume detachment failed" msgstr "Ha fallado la desconexión del volumen" msgid "Volume in use" msgstr "Volumen en uso" msgid "Volume resize failed" msgstr "Ha fallado la redimensión del volumen" msgid "Volumes per node." msgstr "Volúmenes por nodo." msgid "Volumes to attach to instance." msgstr "Volúmenes a adjuntar a instancia." 
#, python-format msgid "WaitCondition invalid Handle %s" msgstr "Descriptor de contexto no válido de WaitCondition %s" #, python-format msgid "WaitCondition invalid Handle stack %s" msgstr "Pila de descriptor de contexto no válida de WaitCondition %s" #, python-format msgid "WaitCondition invalid Handle tenant %s" msgstr "Arrendatario de descriptor de contexto no válido de WaitCondition %s" msgid "" "Warning: this command is potentially destructive and only intended to " "recover from specific crashes." msgstr "" "Atención: esta operación es potencialmente destructiva y solo es para " "restaurar de fallas de procesos especificas." msgid "Weight of pool member in the pool (default to 1)." msgstr "" "Peso del miembro de agrupación en la agrupación (tomar como valor " "predeterminado 1)." msgid "Weight of the pool member in the pool." msgstr "Peso del miembro de la agrupación." #, python-format msgid "Went to status %(resource_status)s due to \"%(status_reason)s\"" msgstr "" "Se ha cambiado al estado %(resource_status)s debido a \"%(status_reason)s\"" msgid "" "When both ipv6_ra_mode and ipv6_address_mode are set, they must be equal." msgstr "" "Cuando se establece tanto ipv6_ra_mode como ipv6_address_mode, deben ser " "iguales." msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Al ejecutar el servidor en modalidad SSL, debe especificar un valor para las " "opciones cert_file y key_file en el archivo de configuración" msgid "Whether enable this policy on that cluster." msgstr "Si se debe habilitar esta política en ese clúster." msgid "" "Whether the address scope should be shared to other tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only, and restricts changing of shared address scope to unshared with " "update." msgstr "" "Si se debe compartir el ámbito de dirección con otros inquilinos. Tenga en " "cuenta que el valor de política predeterminado restringe el uso de este " "atributo sólo a usuarios administrativos, y restringe cambiar de un ámbito " "de dirección compartido a uno sin compartir con una actualización." msgid "Whether the flavor is shared across all projects." msgstr "Si el tipo es compartido entre todos los proyectos." msgid "" "Whether the image can be deleted. If the value is True, the image is " "protected and cannot be deleted." msgstr "" "Indica si la imagen puede suprimirse. Si el valor es true, la imagen está " "protegida y no puede suprimirse." msgid "Whether the metering label should be shared across all tenants." msgstr "" "Indica si la etiqueta de medición se debe compartir entre todos los " "arrendatarios." msgid "Whether the network contains an external router." msgstr "Si la red contiene un router externo." msgid "Whether the part content is text or multipart." msgstr "Si el contenido de parte es texto o multiparte." msgid "" "Whether the subnet pool will be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Si se compartirá esta agrupación de subred entre todos los inquilinos. Tenga " "en cuenta que el valor de política predeterminado restringe el uso de este " "atributo solo a usuarios administrativos." msgid "Whether the volume type is accessible to the public." msgstr "Si el tipo de volumen es accesible para el público." msgid "Whether this QoS policy should be shared to other tenants." 
msgstr "" "Indica si esta política de QoS se debe compartir con otros arrendatarios." msgid "" "Whether this firewall should be shared across all tenants. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "Indica si el cortafuegos se debe compartir entre todos los arrendatarios. " "NOTA: el valor de política predeterminado en Neutron restringe el uso de " "esta propiedad sólo a usuarios administrativos." msgid "" "Whether this is default IPv4/IPv6 subnet pool. There can only be one default " "subnet pool for each IP family. Note that the default policy setting " "restricts administrative users to set this to True." msgstr "" "Indica si se trata de una agrupación de subred IPv4/IPv6 predeterminada. " "Solo puede haber una agrupación de subred predeterminada para cada familia " "de IP. Tenga en cuenta que el valor de política predeterminado restringe a " "los usuarios administrativos la capacidad de definir este valor aTrue." msgid "Whether this network should be shared across all tenants." msgstr "Si esta red se debe compartir entre todos los arrendatarios." msgid "" "Whether this network should be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Si se debe compartir esta red entre todos los arrendatarios. Tenga en cuenta " "que el valor de política predeterminado restringe el uso de este atributo " "sólo a usuarios administrativos." msgid "" "Whether this policy should be audited. When set to True, each time the " "firewall policy or the associated firewall rules are changed, this attribute " "will be set to False and will have to be explicitly set to True through an " "update operation." msgstr "" "Indica si esta política debe auditarse. Cuando se establece en True, cada " "vez que cambian la política de cortafuegos o las reglas de cortafuegos " "asociadas, este atributo se establecerá en False y deberá establecerse " "explícitamente en True mediante una operación de actualización." msgid "Whether this policy should be shared across all tenants." msgstr "" "Indica si esta política debe estar compartida entre todos los arrendatarios." msgid "Whether this rule should be enabled." msgstr "Indica si esta regla debe estar habilitada." msgid "Whether this rule should be shared across all tenants." msgstr "Indica si esta regla debe compartirse entre todos los arrendatarios." msgid "Whether to enable the actions or not." msgstr "Si se desea habilitar o no las acciones ." msgid "Whether to specify a remote group or a remote IP prefix." msgstr "Si especificar o no un grupo remote o un prefijo de IP remote." msgid "" "Which lifecycle actions of the deployment resource will result in this " "deployment being triggered." msgstr "" "Las acciones de ciclo de vida del recurso de despliegue que tienen como " "resultado que se desencadene este despliegue." msgid "" "Workflow additional parameters. If Workflow is reverse typed, params " "requires 'task_name', which defines initial task." msgstr "" "Parámetros adicionales del flujo de trabajo. Si el flujo de trabajo tiene " "tipo inverso, params requiere 'task_name' (nombre de tarea), que define la " "tarea inicial." msgid "Workflow description." msgstr "Descripción del flujo de trabajo." msgid "Workflow name." msgstr "Nombre del flujo de trabajo." msgid "Workflow to execute." msgstr "Flujo de trabajo a ejecutar." msgid "Workflow type." msgstr "Tipo del flujo de trabajo." 
#, python-format msgid "Wrong Arguments try: \"%s\"" msgstr "Intento de argumentos incorrectos: \"%s\"" msgid "You are not authenticated." msgstr "No está autenticado." msgid "You are not authorized to complete this action." msgstr "No está autorizado a completar esta acción." #, python-format msgid "You are not authorized to use %(action)s." msgstr "No está autorizado para utilizar %(action)s." #, python-format msgid "" "You have reached the maximum stacks per tenant, %d. Please delete some " "stacks." msgstr "" "Ha alcanzado el máximo de pilas por arrendatario, %d. Suprima algunas pilas." #, python-format msgid "could not find user %s" msgstr "no se ha podido encontrar el usuario %s" msgid "deployment_id must be specified" msgstr "Debe especificarse deployment_id" msgid "" "deployments key not allowed in resource metadata with user_data_format of " "SOFTWARE_CONFIG" msgstr "" "clave de despliegues no permitida en los metadatos de recurso con " "user_data_format igual a SOFTWARE_CONFIG" #, python-format msgid "deployments of server %s" msgstr "despliegues del servidor %s" #, python-format msgid "environment has wrong section \"%s\"" msgstr "entorno tiene sección incorrecta \"%s\"" msgid "error in pool" msgstr "error en la agrupación" msgid "error in vip" msgstr "error en vip" msgid "external network for the gateway." msgstr "red externa de la pasarela." msgid "granularity should be days, hours, minutes, or seconds" msgstr "la granularidad debe ser días, horas, minutos o segundos" msgid "heat.conf misconfigured, auth_encryption_key must be 32 characters" msgstr "" "heat.conf mal configurado, auth_encryption_key debe tener 32 caracteres" msgid "" "heat.conf misconfigured, cannot specify \"stack_user_domain_id\" or " "\"stack_user_domain_name\" without \"stack_domain_admin\" and " "\"stack_domain_admin_password\"" msgstr "" "heat.conf configurado incorrectamente, no se puede especificar " "\"stack_user_domain_id\" o \"stack_user_domain_name\" sin " "\"stack_domain_admin\" y \"stack_domain_admin_password\"" msgid "ipv6_ra_mode and ipv6_address_mode are not supported for ipv4." msgstr "ipv6_ra_mode e ipv6_address_mode no se admiten en ipv4." msgid "limit cannot be less than 4" msgstr "límite no puede ser menos de 4" #, python-format msgid "metadata setting for resource %s" msgstr "Valor de metadatos del recurso %s" msgid "min/max length must be integral" msgstr "la longitud mínima-máxima debe ser un entero" msgid "min/max must be numeric" msgstr "min/max debe ser numérico" msgid "need more memory." msgstr "Necesita más memoria." msgid "no resource data found" msgstr "datos de recursos no se han encontrado" msgid "no resources were found" msgstr "recursos no se han encontrado" msgid "nova server metadata needs to be a Map." msgstr "Los metadatos del servidor Nova deben ser una correlación." 
#, python-format msgid "previous_status must be SupportStatus instead of %s" msgstr "previous_status debe ser SupportStatus en lugar de %s" #, python-format msgid "raw template with id %s not found" msgstr "plantilla plana con id %s no encontrada" #, python-format msgid "resource with id %s not found" msgstr "recurso con el id %s no encontrado" #, python-format msgid "roles %s" msgstr "roles %s" msgid "segmentation_id cannot be specified except 0 for using flat" msgstr "" "segmentation_id no se puede especificar excepto 0 para utilizarlo plano" msgid "segmentation_id must be specified for using vlan" msgstr "Se debe especificar segmentation_id para utilizar vlan" msgid "segmentation_id not allowed for flat network type." msgstr "segmentation_id no está permitido para tipo de red plana." msgid "server_id must be specified" msgstr "Se debe especificar server_id" #, python-format msgid "" "task %(task)s contains property 'requires' in case of direct workflow. Only " "reverse workflows can contain property 'requires'." msgstr "" "La tarea %(task)s contine la propiedad 'requires' en un flujo de trabajo " "directo. Solo los flujos de trabajo inversos pueden contener la propiedad " "'requires'." heat-10.0.2/heat/locale/ko_KR/0000775000175000017500000000000013343562672015735 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/ko_KR/LC_MESSAGES/0000775000175000017500000000000013343562672017522 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/ko_KR/LC_MESSAGES/heat.po0000666000175000017500000100604613343562351021006 0ustar zuulzuul00000000000000# Translations template for heat. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the heat project. # # Translators: # Mario Cho , 2014 # Ying Chun Guo , 2015 # Andreas Jaeger , 2016. #zanata # minwook-shin , 2017. #zanata msgid "" msgstr "" "Project-Id-Version: heat VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-23 07:09+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2017-08-05 01:52+0000\n" "Last-Translator: minwook-shin \n" "Language: ko_KR\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Korean (South Korea)\n" #, python-format msgid "\"%%s\" is not a valid keyword inside a %s definition" msgstr "\"%%s\"은(는) %s 정의 내의 올바른 키워드가 아님" #, python-format msgid "\"%(fn_name)s\": %(err)s" msgstr "\"%(fn_name)s\": %(err)s" #, python-format msgid "" "\"%(name)s\" params must be strings, numbers, list or map. Failed to json " "serialize %(value)s" msgstr "" "\"%(name)s\" 매개변수는 문자열, 숫자 또는 맵이어야 합니다. json 직렬화 " "%(value)s에 실패" #, python-format msgid "" "\"%(section)s\" must contain a map of %(obj_name)s maps. Found a [%(_type)s] " "instead" msgstr "" "\"%(section)s\"에 %(obj_name)s 맵의 맵이 포함되어야 함. 대신 [%(_type)s]을" "(를) 발견함" #, python-format msgid "\"%(url)s\" is not a valid SwiftSignalHandle. The %(part)s is invalid" msgstr "" "\"%(url)s\"은(는) 올바른 SwiftSignalHandle이 아닙니다. %(part)s이(가) 올바르" "지 않습니다." #, python-format msgid "\"%(value)s\" does not validate %(name)s" msgstr "\"%(value)s\"이(가) %(name)s을(를) 유효성 검증하지 않음" #, python-format msgid "\"%(value)s\" does not validate %(name)s (constraint not found)" msgstr "" "\"%(value)s\"이(가) %(name)s을(를) 유효성 검증하지 않음(제한조건을 찾을 수 없" "음)" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be one of: %(available)s" msgstr "" "\"%(version)s\". 
\"%(version_type)s\"은(는) 다음 중 하나여야 함: " "%(available)s" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be: %(available)s" msgstr "\"%(version)s\". \"%(version_type)s\"은(는) %(available)s이어야 함" #, python-format msgid "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" msgstr "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" #, python-format msgid "\"%s\" argument must be a string" msgstr "\"%s\" 인수는 문자열이어야 함" #, python-format msgid "\"%s\" can't traverse path" msgstr "\"%s\"은(는) 경로를 순회할 수 없음" #, python-format msgid "\"%s\" deletion policy not supported" msgstr "\"%s\" 삭제 정책이 지원되지 않음" #, python-format msgid "\"%s\" delimiter must be a string" msgstr "\"%s\" 구분 기호는 문자열이어야 함" #, python-format msgid "\"%s\" is not a list" msgstr "\"%s\"이(가) 목록이 아님" #, python-format msgid "\"%s\" is not a map" msgstr "\"%s\"이(가) 맵이 아님" #, python-format msgid "\"%s\" is not a valid ARN" msgstr "\"%s\"은(는) 올바른 ARN이 아님" #, python-format msgid "\"%s\" is not a valid ARN URL" msgstr "\"%s\"은(는) 올바른 ARN URL이 아님" #, python-format msgid "\"%s\" is not a valid Heat ARN" msgstr "\"%s\"은(는) 올바른 히트 ARN이 아님" #, python-format msgid "\"%s\" is not a valid URL" msgstr "\"%s\"은(는) 올바른 URL이 아님" #, python-format msgid "\"%s\" is not a valid boolean" msgstr "\"%s\"이(가) 올바른 부울이 아님" #, python-format msgid "\"%s\" is not a valid template section" msgstr "\"%s\"은(는) 올바른 템플리트 섹션이 아님" #, python-format msgid "\"%s\" must operate on a list" msgstr "\"%s\"이(가) 목록에서 작동해야 함" #, python-format msgid "\"%s\" param placeholders must be strings" msgstr "\"%s\" 매개변수 플레이스 홀더는 문자열이어야 함" #, python-format msgid "\"%s\" parameters must be a mapping" msgstr "\"%s\" 매개변수는 맵핑이어야 함" #, python-format msgid "\"%s\" params must be a map" msgstr "\"%s\" 매개변수는 맵이어야 함" #, python-format msgid "\"%s\" params must be strings, numbers, list or map." msgstr "\"%s\" 매개변수는 문자열, 숫자 또는 맵이어야 합니다." #, python-format msgid "\"%s\" template must be a string" msgstr "\"%s\" 템플리트는 문자열이어야 함" #, python-format msgid "\"repeat\" syntax should be %s" msgstr "\"repeat\" 구문은 %s이어야 함" #, python-format msgid "%(a)s paused until Hook %(h)s is cleared" msgstr "후크 %(h)s을(를) 지울 때까지 %(a)s을(를) 일시정지함" #, python-format msgid "%(action)s is not supported for resource." msgstr "자원에 대해 %(action)s이(가) 지원되지 않습니다." #, python-format msgid "%(action)s is restricted for resource." msgstr "%(action)s은(는) 자원에 대해서는 제한됩니다." #, python-format msgid "%(desired_capacity)s must be between %(min_size)s and %(max_size)s" msgstr "%(desired_capacity)s은(는) %(min_size)s ~ %(max_size)s이어야 함" #, python-format msgid "%(error)s%(path)s%(message)s" msgstr "%(error)s%(path)s%(message)s" #, python-format msgid "%(feature)s is not supported." msgstr "%(feature)s이(가) 지원되지 않습니다." #, python-format msgid "" "%(img)s must be provided: Referenced cluster template %(tmpl)s has no " "default_image_id defined." msgstr "" "%(img)s은(는) 제공되어야 합니다. 참조된 클러스터 템플리트 %(tmpl)s에 정의된 " "default_image_id가 없습니다." #, python-format msgid "%(lc)s (%(ref)s) reference can not be found." msgstr "%(lc)s(%(ref)s) 참조를 찾을 수 없습니다." #, python-format msgid "" "%(lc)s (%(ref)s) requires a reference to the configuration not just the name " "of the resource." msgstr "%(lc)s(%(ref)s)에는 자원 이름만이 아닌 구성에 대한 참조가 필요합니다." 
#, python-format msgid "%(len)d of %(count)d received" msgstr "%(count)d의 %(len)d이(가) 수신됨" #, python-format msgid "%(len)d of %(count)d received - %(reasons)s" msgstr "%(count)d의 %(len)d을(를) 수신함 - %(reasons)s" #, python-format msgid "%(message)s" msgstr "%(message)s" #, python-format msgid "%(min_size)s can not be greater than %(max_size)s" msgstr "%(min_size)s은(는) %(max_size)s보다 클 수 없음" #, python-format msgid "%(name)s constraint invalid for %(utype)s" msgstr "%(utype)s에 대해 올바르지 않은 %(name)s 제한조건" #, python-format msgid "%(prop1)s cannot be specified without %(prop2)s." msgstr "%(prop1)s은(는) %(prop2)s이(가) 없으면 지정할 수 없습니다." #, python-format msgid "" "%(prop1)s property should only be specified for %(prop2)s with value " "%(value)s." msgstr "%(prop1)s 특성은 값이 %(value)s인 %(prop2)s에만 지정해야 합니다." #, python-format msgid "%(resource)s: Invalid attribute %(key)s" msgstr "%(resource)s: 올바르지 않은 속성 %(key)s" #, python-format msgid "" "%(result)s - Unknown status %(resource_status)s due to \"%(status_reason)s\"" msgstr "" "%(result)s - 알 수 없는 상태 %(resource_status)s. 이유: \"%(status_reason)s\"" #, python-format msgid "%(schema)s supplied for %(type)s %(data)s" msgstr "%(type)s %(data)s에 대해 %(schema)s이(가) 제공됨" #, python-format msgid "%(server)s-port-%(number)s" msgstr "%(server)s-port-%(number)s" #, python-format msgid "%(type)s not in valid format: %(error)s" msgstr "%(type)s이(가) 올바른 형식이 아님: %(error)s" #, python-format msgid "%s Key Name must be a string" msgstr "%s 키 이름은 문자열이어야 함" #, python-format msgid "%s Timed out" msgstr "%s 제한시간 초과" #, python-format msgid "%s Value Name must be a string" msgstr "%s 값 이름은 문자열이어야 함" #, python-format msgid "%s is not a valid job location." msgstr "%s이(가) 올바른 작업 위치가 아닙니다." #, python-format msgid "%s is not active" msgstr "%s이(가) 활성 상태가 아님" #, python-format msgid "%s is not an integer." msgstr "%s이(가) 정수가 아닙니다. " #, python-format msgid "%s must be provided" msgstr "%s이(가) 제공되어야 함" #, python-format msgid "'%(attr)s': expected '%(expected)s', got '%(current)s'" msgstr "'%(attr)s': '%(expected)s'을(를) 예상했으나 '%(current)s'을(를) 가져옴" msgid "" "'task_name' is not assigned in 'params' in case of reverse type workflow." msgstr "" "'task_name'은 역방향 입력 워크플로우의 경우 'params'에 할당되지 않습니다." msgid "'true' if DHCP is enabled for this subnet; 'false' otherwise." msgstr "" "이 서브넷에 대해 DHCP가 사용으로 설정된 경우 'true'이며 그렇지 않은 경우 " "'false'입니다." msgid "A UUID for the set of servers being requested." msgstr "요청할 서버 세트에 대한 UUID입니다." msgid "A bad or out-of-range value was supplied" msgstr "잘못되었거나 범위를 벗어난 값이 제공됨" msgid "A boolean value of default flag." msgstr "기본 플래그의 부울 값입니다." msgid "A boolean value specifying the administrative status of the network." msgstr "네트워크의 관리 상태를 지정하는 부울 값입니다." #, python-format msgid "" "A character class and its corresponding %(min)s constraint to generate the " "random string from." msgstr "" "랜덤 문자열을 생성할 문자 클래스 및 그에 해당하는 %(min)s 제한조건입니다." #, python-format msgid "" "A character sequence and its corresponding %(min)s constraint to generate " "the random string from." msgstr "" "랜덤 문자열을 생성할 문자 순서 및 그에 해당하는 %(min)s 제한조건입니다." msgid "A comma-delimited list of server ip addresses. (Heat extension)." msgstr "서버 IP 주소의 쉼표로 구분된 목록입니다(히트 확장)." msgid "A description of the volume." msgstr "볼륨에 대한 설명입니다." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name. This value is typically vda." msgstr "" "시스템의 /dev/device_name에서 볼륨이 접속되는 /dev/device_name. 이 값은 일반" "적으로 vda입니다." 
msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name.e.g. vdb" msgstr "" "시스템의 /dev/device_name에서 볼륨이 접속되는 디바이스 이름입니다(예: vdb)." msgid "" "A dict of all network addresses with corresponding port_id. Each network " "will have two keys in dict, they are network name and network id. The port " "ID may be obtained through the following expression: \"{get_attr: [, " "addresses, , 0, port]}\"." msgstr "" "해당 port_id가 포함된 모든 네트워크 주소의 dict입니다. 각 네트워크의 dict에" "는 두 개의 키가 있습니다. 즉, 네트우크 이름과 네트워크 id입니다. 포트 ID는 다" "음 식을 통해 가져올 수 있습니다. \"{get_attr: [, addresses, , 0, port]}\"." msgid "" "A dict of assigned network addresses of the form: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Each network will have two keys in dict, they " "are network name and network id." msgstr "" "다음 양식으로 할당된 네트워크 주소의 dict입니다. {\"public\": [ip1, ip2...], " "\"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], \"private_uuid\": " "[ip3, ip4]}. 각 네트워크의 dict에는 두 개의 키, 즉 네트워크 이름과 네트워크 " "id가 있습니다." msgid "A dict of key-value pairs output from the stack." msgstr "스택의 키-값 쌍 출력에 대한 사전입니다." msgid "A dictionary which contains name and input of the workflow." msgstr "워크플로우의 입력과 이름을 포함하는 사전입니다." msgid "A length constraint must have a min value and/or a max value specified." msgstr "길이 제한조건에 최소값 및/또는 최대값이 지정되어 있어야 합니다." msgid "A list of URLs (webhooks) to invoke when state transitions to alarm." msgstr "알람으로 상태 전이 시 호출할 URL의 목록(웹 후크)입니다." msgid "" "A list of URLs (webhooks) to invoke when state transitions to insufficient-" "data." msgstr "충분하지 않은 데이터로 상태 전이 시 호출할 URL의 목록(웹 후크)입니다." msgid "A list of URLs (webhooks) to invoke when state transitions to ok." msgstr "확인으로 상태 전이 시 호출할 URL의 목록(웹 후크)입니다." msgid "A list of access rules that define access from IP to Share." msgstr "IP에서 공유로의 액세스를 정의하는 액세스 규칙 목록입니다." msgid "A list of all rules for the QoS policy." msgstr "QoS 정책의 모든 규칙 목록입니다." msgid "A list of all subnet attributes for the port." msgstr "포트의 모든 서브넷 속성 목록입니다." msgid "" "A list of character class and their constraints to generate the random " "string from." msgstr "랜덤 문자열을 생성할 문자 클래스 및 해당 제한조건의 목록입니다." msgid "" "A list of character sequences and their constraints to generate the random " "string from." msgstr "랜덤 문자열을 생성할 문자 순서 및 해당 제한조건의 목록입니다." msgid "A list of cluster instance IPs." msgstr "클러스터 인스턴스 IP 목록입니다." msgid "A list of clusters to which this policy is attached." msgstr "이 정책이 연결될 클러스터 목록입니다." msgid "A list of host route dictionaries for the subnet." msgstr "서브넷의 호스트 경로 사전 목록입니다." msgid "A list of instances ids." msgstr "인스턴스 ID의 목록입니다." msgid "A list of metric ids." msgstr "지표 id 목록." msgid "" "A list of query factors, each comparing a Sample attribute with a value. " "Implicitly combined with matching_metadata, if any." msgstr "" "각각 샘플 속성을 값과 비교하는 조회 요인 목록입니다. matching_metadata와 내재" "적으로 결합됩니다(해당하는 경우)." msgid "A list of resource IDs for the resources in the chain." msgstr "체인에 있는 자원의 자원 ID의 목록입니다." msgid "A list of resource IDs for the resources in the group." msgstr "그룹에서 자원에 대한 자원 ID의 목록." msgid "A list of security groups for the port." msgstr "포트의 보안 그룹 목록입니다." msgid "A list of security services IDs or names." msgstr "보안 서비스 ID 또는 이름 목록입니다." msgid "A list of string policies to apply. Defaults to anti-affinity." msgstr "적용할 문자열 정책 목록입니다. 기본값은 안티 선호도입니다." msgid "A login profile for the user." msgstr "사용자의 로그인 프로파일입니다." 
msgid "A mandatory input parameter is missing" msgstr "필수 입력 매개변수가 누락됨" msgid "A map containing all headers for the container." msgstr "컨테이너의 모든 헤더를 포함하는 맵입니다." msgid "" "A map of Nova names and captured stderrs from the configuration execution to " "each server." msgstr "Nova 이름의 맵 및 각 서버에 대해 구성 실행에서 캡처된 stdout입니다." msgid "" "A map of Nova names and captured stdouts from the configuration execution to " "each server." msgstr "Nova 이름의 맵 및 각 서버에 대해 구성 실행에서 캡처된 stdout입니다." msgid "" "A map of Nova names and returned status code from the configuration " "execution." msgstr "Nova 이름의 맵 및 각 서버에 대해 구성 실행에서 리턴된 상태 코드입니다." msgid "" "A map of files to create/overwrite on the server upon boot. Keys are file " "names and values are the file contents." msgstr "" "부트의 서버에서 작성/겹쳐쓸 파일의 맵입니다. 키는 파일 이름이며 값은 파일 컨" "텐츠입니다." msgid "" "A map of resource names to the specified attribute of each individual " "resource." msgstr "자원 이름과 각 개별 자원의 지정된 속성 간의 맵입니다." msgid "" "A map of resource names to the specified attribute of each individual " "resource. Requires heat_template_version: 2014-10-16." msgstr "" "자원 이름과 각 개별 자원의 지정된 속성 간의 맵. heat_template_version: " "2014-10-16이 필요합니다." msgid "" "A map of user-defined meta data to associate with the account. Each key in " "the map will set the header X-Account-Meta-{key} with the corresponding " "value." msgstr "" "계정과 연관시킬 사용자 정의 메타 데이터의 맵입니다. 맵의 각 키는 헤더 X-" "Account-Meta-{key}를 해당 값으로 설정합니다." msgid "" "A map of user-defined meta data to associate with the container. Each key in " "the map will set the header X-Container-Meta-{key} with the corresponding " "value." msgstr "" "컨테이너와 연관시킬 사용자 정의 메타 데이터의 맵입니다. 맵의 각 키는 헤더 X-" "Container-Meta-{key}를 해당 값으로 설정합니다." msgid "A name used to distinguish the volume." msgstr "볼륨을 식별하기 위해 사용되는 이름입니다." msgid "" "A per-tenant quota on the prefix space that can be allocated from the subnet " "pool for tenant subnets." msgstr "" "테넌트 서브넷의 서브넷 풀에서 할당할 수 있는 접두부 공간의 테넌트별 할당량입" "니다." msgid "" "A predefined access control list (ACL) that grants permissions on the bucket." msgstr "버킷에 대한 권한을 부여하는 사전 정의된 액세스 제어 목록(ACL)입니다." msgid "A range constraint must have a min value and/or a max value specified." msgstr "범위 제한조건에 최소값 및/또는 최대값이 지정되어 있어야 합니다." msgid "" "A reference to the wait condition handle used to signal this wait condition." msgstr "이 대기 조건을 표시하는 데 사용된 대기 조건 핸들에 대한 참조입니다." msgid "" "A signed url to create executions for workflows specified in Workflow " "resource." msgstr "" "워크플로우 자원에 지정된 워크플로우의 실행을 작성하는 서명된 url입니다." msgid "A signed url to handle the alarm." msgstr "알람을 처리할 서명된 URL입니다." msgid "A signed url to handle the alarm. (Heat extension)." msgstr "알람을 처리할 서명된 URL입니다(히트 확장)." msgid "A specified set of DNS name servers to be used." msgstr "사용할 DNS 이름 서버의 지정된 세트입니다." msgid "" "A string specifying a symbolic name for the network, which is not required " "to be unique." msgstr "" "네트워크의 기호 이름을 지정하는 문자열이며 이는 고유하지 않아도 됩니다." msgid "" "A string specifying a symbolic name for the security group, which is not " "required to be unique." msgstr "" "보안 그룹의 기호 이름을 지정하는 문자열이며 이는 고유하지 않아도 됩니다." msgid "A string specifying physical network mapping for the network." msgstr "네트워크에 대한 실제 네트워크 맵핑을 지정하는 문자열입니다." msgid "A string specifying the provider network type for the network." msgstr "네트워크에 대한 제공자 네트워크 유형을 지정하는 문자열입니다." msgid "A string specifying the segmentation id for the network." msgstr "네트워크에 대한 분석 방식 ID를 지정하는 문자열입니다." msgid "A symbolic name for this port." msgstr "이 포트의 기호 이름입니다." msgid "A url to handle the alarm using native API." 
msgstr "네이티브 API를 사용하여 알람을 처리하는 url." msgid "" "A variable that this resource will use to replace with the current index of " "a given resource in the group. Can be used, for example, to customize the " "name property of grouped servers in order to differentiate them when listed " "with nova client." msgstr "" "이 자원이 그룹에서 지정된 자원의 현재 색인으로 대체하기 위해 사용할 변수입니" "다. 예를 들어, nova 클라이언트에서 그룹화된 서버를 나열할 때 이들을 구별할 " "수 있도록 서버의 이름 특성을 사용자 정의하는 데 사용할 수 있습니다." msgid "AWS compatible instance name." msgstr "AWS 호환 가능 인스턴스 이름입니다." msgid "AWS query string is malformed, does not adhere to AWS spec" msgstr "AWS 조회 문자열은 잘못된 형식이며 AWS 스펙을 준수하지 않음" msgid "Access policies to apply to the user." msgstr "사용자에 적용할 액세스 정책입니다." #, python-format msgid "AccessPolicy resource %s not in stack" msgstr "AccessPolicy 자원 %s이(가) 스택에 없음" #, python-format msgid "Action %s not allowed for user" msgstr "사용자에게 조치 %s이(가) 허용되지 않음" msgid "Action to be performed on the traffic matching the rule." msgstr "규칙과 일치하는 트래픽에 대해 수행할 조치입니다." msgid "Actual input parameter values of the task." msgstr "태스크의 실제 입력 매개변수 값입니다." msgid "Add needed policies directly to the task, Policy keyword is not needed" msgstr "필수 정책을 직접 태스크에 추가, 정책 키워드는 필요하지 않음" msgid "Additional MAC/IP address pairs allowed to pass through a port." msgstr "포트를 통해 전달하도록 허용된 추가 MAC/IP 주소 쌍입니다." msgid "Additional MAC/IP address pairs allowed to pass through the port." msgstr "포트를 통해 전달하도록 허용된 추가 MAC/IP 주소 쌍입니다." msgid "Additional routes for this subnet." msgstr "이 서브넷에 대한 추가 라우트입니다." msgid "Address family of the address scope, which is 4 or 6." msgstr "주소 범위의 주소군입니다. 즉, 4 또는 6입니다." msgid "" "Address of the notification. It could be a valid email address, url or " "service key based on notification type." msgstr "" "알림의 주소입니다. 알림 타입에 따라 올바른 이메일 주소, url 또는 서비스 키일 " "수 있습니다." msgid "" "Address to bind the server. Useful when selecting a particular network " "interface." msgstr "서버를 바인드할 주소. 특정 네트워크 인터페이스를 선택할 때유용합니다. " msgid "Administrative state for the ipsec site connection." msgstr "ipsec 사이트 연결의 관리 상태입니다." msgid "Administrative state for the vpn service." msgstr "vpn 서비스의 관리 상태입니다." msgid "" "Administrative state of the firewall. If false (down), firewall does not " "forward packets and will drop all traffic to/from VMs behind the firewall." msgstr "" "방화벽의 관리 상태입니다. false(작동 중지)인 경우, 방화벽은 패킷을 전달하지 " "않으며 방화벽 뒤에 있는 VM과의 모든 트래픽을 삭제합니다." msgid "Administrative state of the router." msgstr "라우터의 관리 상태입니다." #, python-format msgid "Alarm %(alarm)s could not find scaling group named \"%(group)s\"" msgstr "알람 %(alarm)s이(가) 이름이 \"%(group)s\"인 배율 그룹을 찾을 수 없음" #, python-format msgid "Algorithm must be one of %s" msgstr "알고리즘은 %s 중 하나여야 함" msgid "All heat engines are down." msgstr "모든 히트 엔진이 중지됩니다." msgid "Allocated floating IP address." msgstr "부동 IP 주소가 할당되었습니다." msgid "Allocation ID for VPC EIP address." msgstr "VPC EIP 주소에 대한 할당 ID입니다." msgid "Allow client's debug log output." msgstr "클라이언트의 디버그 로그 출력을 허용합니다." msgid "Allow or deny action for this firewall rule." msgstr "이 방화벽에 대한 조치를 허용하거나 거부합니다." msgid "Allow orchestration of multiple clouds." msgstr "여러 클라우드들의 Orchestration을 허용합니다." msgid "" "Allow reauthentication on token expiry, such that long-running tasks may " "complete. Note this defeats the expiry of any provided user tokens." msgstr "" "장기 실행 작업이 완료될 수 있도록 토큰 만료 재인증을 허용합니다. 그러면 제공" "된 사용자 토큰의 만료가 무시됩니다." msgid "" "Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At " "least one endpoint needs to be specified." 
msgstr "" "multi_cloud을 사용한다면 auth_uri에 대한 키스톤 엔드 포인트를 허용합니다. 적" "어도 하나이상의 엔드 포인트를 지정해야합니다." msgid "" "Allowed tenancy of instances launched in the VPC. default - any tenancy; " "dedicated - instance will be dedicated, regardless of the tenancy option " "specified at instance launch." msgstr "" "VPC에서 실행된 인스턴스의 임차가 허용됩니다. 기본값 - 모든 임차: 전용 - 인스" "턴스가 전용이 됩니다(인스턴스 실행 시 지정된 임차 옵션에 관계 없음)." #, python-format msgid "Allowed values: %s" msgstr "허용되는 값: %s" msgid "AllowedPattern must be a string" msgstr "AllowedPattern은 문자열이어야 함" msgid "AllowedValues must be a list" msgstr "AllowedValues는 목록이어야 함" msgid "Allowing not to store action results after task completion." msgstr "태스크가 완료된 후 작업 결과를 저장하지 않게 할 수 있습니다." msgid "" "Allows to synchronize multiple parallel workflow branches and aggregate " "their data. Valid inputs: all - the task will run only if all upstream tasks " "are completed. Any numeric value - then the task will run once at least this " "number of upstream tasks are completed and corresponding conditions have " "triggered." msgstr "" "여러 병렬 워크플로우 분기를 동기화하고 데이터를 집계할 수 있습니다. 올바른 입" "력: 모두 - 업스트림 태스크가 모두 완료된 경우에만 태스크를 실행합니다. 임의" "의 숫자 값 - 이 수의 업스트림 태스크가 완료되고 해당 조건이 트리거되면 태스크" "가 한 번 이상 실행됩니다." #, python-format msgid "Ambiguous versions (%s)" msgstr "부정확한 버전(%s)" msgid "" "Amount of disk space (in GB) required to boot image. Default value is 0 if " "not specified and means no limit on the disk size." msgstr "" "이미지를 부팅하는 데 필요한 디스크 공간의 양(GB)입니다. 값을 지정하지 않는 경" "우 기본값은 0이며 디스크 크기에 제한이 없음을 의미합니다." msgid "" "Amount of ram (in MB) required to boot image. Default value is 0 if not " "specified and means no limit on the ram size." msgstr "" "이미지를 부팅하는 데 필요한 RAM의 양(MB)입니다. 값을 지정하지 않는 경우 기본" "값은 0이며 RAM 크기에 제한이 없음을 의미합니다." msgid "An address scope ID to assign to the subnet pool." msgstr "서브넷 풀에 할당할 주소 범위 ID입니다." msgid "An application health check for the instances." msgstr "인스턴스의 애플리케이션 상태 검사입니다." msgid "An ordered list of firewall rules to apply to the firewall." msgstr "방화벽에 적용할 방화벽 규칙의 순서 지정된 목록입니다." msgid "" "An ordered list of nics to be added to this server, with information about " "connected networks, fixed ips, port etc." msgstr "" "이 서버에 추가할 nics의 정렬된 목록으로 연결된 네트워크, 고정 ips, 포트 등에 " "대한 정보가 포함되어 있습니다." msgid "An unknown exception occurred." msgstr "알 수 없는 예외가 발생했습니다. " msgid "" "Any data structure arbitrarily containing YAQL expressions that defines " "workflow output. May be nested." msgstr "" "워크플로우 출력을 정의하는 YAQL 식을 임의로 포함하는 데이터 구조입니다." msgid "Anything other than one VPCZoneIdentifier" msgstr "하나의 VPCZoneIdentifier 이외 항목" msgid "Api endpoint reference of the instance." msgstr "인스턴스의 Api 엔드포인트 참조입니다." msgid "" "Arbitrary key-value pairs specified by the client to help boot a server." msgstr "서버 부팅을 돕기 위해 클라이언트가 지정한 임의의 키 값 쌍입니다." msgid "" "Arbitrary key-value pairs specified by the client to help the Cinder " "scheduler creating a volume." msgstr "" "Cinder 스케줄러에서 볼륨을 작성할 수 있도록 클라이언트가 지정한 임의의 키-값 " "쌍입니다." msgid "" "Arbitrary key/value metadata to store contextual information about this " "queue." msgstr "이 큐에 대한 컨텍스트 정보를 저장할 임의의 키/값 메타데이터입니다." msgid "" "Arbitrary key/value metadata to store for this server. Both keys and values " "must be 255 characters or less. Non-string values will be serialized to JSON " "(and the serialized string must be 255 characters or less)." msgstr "" "이 서버에 저장할 임의의 키/값 메타데이터입니다. 키 및 값 모두 255자 이하여야 " "합니다. 비문자열 값은 JSON으로 직렬화됩니다(직렬화된 문자열도 255자 이하여야 " "함)." msgid "Arbitrary key/value metadata to store information for aggregate." 
msgstr "집합의 정보를 저장할 임의의 키/값 메타데이터입니다." #, python-format msgid "Argument to \"%s\" must be a list" msgstr "\"%s\"에 대한 인수는 목록이어야 함" #, python-format msgid "Argument to \"%s\" must be a string" msgstr "\"%s\"에 대한 인수는 문자열이어야 함" #, python-format msgid "Argument to \"%s\" must be string or list" msgstr "\"%s\"에 대한 인수는 문자열 또는 목록이어야 함" #, python-format msgid "Argument to function \"%s\" must be a list of strings" msgstr "\"%s\" 함수에 대한 인수는 문자열 목록이어야 함" #, python-format msgid "" "Arguments to \"%s\" can be of the next forms: [resource_name] or " "[resource_name, attribute, (path), ...]" msgstr "" "\"%s\"에 대한 인수는 [resource_name] 또는 [resource_name, attribute, " "(path), ...] 형식 중 하나일 수 있습니다." #, python-format msgid "Arguments to \"%s\" must be a map" msgstr "\"%s\"에 대한 인수는 맵이어야 함" #, python-format msgid "Arguments to \"%s\" must be of the form [index, collection]" msgstr "\"%s\"에 대한 인수는 [index, collection] 양식이어야 함 " #, python-format msgid "" "Arguments to \"%s\" must be of the form [resource_name, attribute, " "(path), ...]" msgstr "" "\"%s\"에 대한 인수는 다음 양식 중 하나여야 함 [resource_name, attribute, " "(path), ...]" #, python-format msgid "Arguments to \"%s\" must be of the form [resource_name, attribute]" msgstr "\"%s\"에 대한 인수는 [resource_name, attribute] 양식이어야 함 " #, python-format msgid "Arguments to %s not fully resolved" msgstr "%s에 대한 인수가 완전히 해결되지 않음" #, python-format msgid "Attempt to delete a stack with id: %(id)s %(msg)s" msgstr "ID로 스택을 삭제하려고 시도: %(id)s %(msg)s" #, python-format msgid "Attempt to delete user creds with id %(id)s that does not exist" msgstr "존재하지 않는 ID가 %(id)s인 사용자 신임 정보를 삭제하려고 시도" #, python-format msgid "Attempt to delete watch_rule: %(id)s %(msg)s" msgstr "watch_rule을 삭제하려고 시도: %(id)s %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(msg)s" msgstr "ID로 스택을 업데이트하려고 시도: %(id)s %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(traversal)s %(msg)s" msgstr "ID로 스택을 업데이트하려고 시도: %(id)s %(traversal)s %(msg)s" #, python-format msgid "Attempt to update a watch with id: %(id)s %(msg)s" msgstr "ID로 감시를 업데이트하려고 시도: %(id)s %(msg)s" msgid "Attempt to use stored_context with no user_creds" msgstr "user_creds 없이 stored_context를 사용하려고 시도했음" #, python-format msgid "Attribute %(attr)s for facade %(type)s missing in provider" msgstr "Facade %(type)s에 대한 속성 %(attr)s이(가) 제공자에서 누락됨" msgid "Audit status of this firewall policy." msgstr "이 방화벽 정책의 감사 상태입니다." msgid "Authentication Endpoint URI." msgstr "인증받은 엔드포인트 URI." msgid "Authentication hash algorithm for the ike policy." msgstr "ike 정책의 인증 해시 알고리즘입니다." msgid "Authentication hash algorithm for the ipsec policy." msgstr "ipsec 정책의 인증 해시 알고리즘입니다." msgid "Authorization failed." msgstr "권한 부여에 실패했습니다. " msgid "AutoScaling group ID to apply policy to." msgstr "정책을 적용할 AutoScaling 그룹 ID입니다." msgid "AutoScaling group name to apply policy to." msgstr "정책을 적용할 AutoScaling 그룹 이름입니다." msgid "Availability Zone of the subnet." msgstr "서브넷의 가용성 구역입니다." msgid "Availability zone in which you want the subnet." msgstr "서브넷을 원하는 가용성 구역입니다." msgid "Availability zone to create servers in." msgstr "서버를 작성할 가용성 구역입니다." msgid "Availability zone to create volumes in." msgstr "볼륨을 작성할 가용성 구역입니다." msgid "Availability zone to launch the instance in." msgstr "인스턴스를 실행할 가용성 구역입니다." msgid "Backend authentication failed" msgstr "백엔드 인증 실패" msgid "Binary" msgstr "2진" msgid "Block device mappings for this server." msgstr "이 서버의 블록 디바이스 맵핑입니다." msgid "Block device mappings to attach to instance." msgstr "인스턴스에 접속할 블록 디바이스 맵핑입니다." 
msgid "Block device mappings v2 for this server." msgstr "이 서버에 대한 블록 디바이스 맵핑 v2입니다." msgid "" "Boolean extra spec that used for filtering of backends by their capability " "to create share snapshots." msgstr "" "공유 스냅샷을 작성하는 기능으로 백엔드를 필터링하는 데 사용하는 부울 추가 사" "양입니다." msgid "Boolean indicating if the volume can be booted or not." msgstr "볼륨을 부팅할 수 있는지 여부를 표시하는 부울입니다." msgid "Boolean indicating if the volume is encrypted or not." msgstr "볼륨이 암호화되었는지 표시하는 부울입니다." msgid "" "Boolean indicating whether allow the volume to be attached more than once." msgstr "볼륨을 두 번 이상 연결할 수 있는지 나타내는 부울입니다." msgid "" "Bus of the device: hypervisor driver chooses a suitable default if omitted." msgstr "" "디바이스의 버스: 이를 생략하는 경우 하이퍼바이저 드라이버는 적절한 기본값을 " "선택합니다. " msgid "CIDR block notation for this subnet." msgstr "이 서브넷의 CIDR 블록 표기법입니다." msgid "CIDR block to apply to subnet." msgstr "서브넷에 적용할 CIDR 블록" msgid "CIDR block to apply to the VPC." msgstr "VPC에 적용할 CIDR 블록입니다." msgid "CIDR of subnet." msgstr "서브넷의 CIDR입니다." msgid "CIDR to be associated with this metering rule." msgstr "이 측정 규칙과 연관시킬 CIDR입니다." #, python-format msgid "Can not specify property \"%s\" if the volume type is public." msgstr "볼륨 타입이 공용인 경우 \"%s\" 특성을 지정할 수 없습니다." #, python-format msgid "Can not use %s property on Nova-network." msgstr "Nova 네트워크에서 %s 특성을 사용할 수 없습니다." #, python-format msgid "Can't find role %s" msgstr "역할 %s을(를) 찾을 수 없음" msgid "Can't get user token without password" msgstr "비밀번호 없이 사용자 토큰을 가져올 수 없음" msgid "Can't get user token, user not yet created" msgstr "사용자 토큰을 가져올 수 없음. 아직 사용자가 작성되지 않음" msgid "Can't traverse attribute path" msgstr "속성 경로를 순회할 수 없음" #, python-format msgid "Cancelling update when stack is %s" msgstr "스택이 %s일 때 업데이트 취소" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "고아 %(objtype)s 오브젝트에서 %(method)s 메소드를 호출할 수 없음" #, python-format msgid "Cannot check %s, stack not created" msgstr "%s을(를) 검사할 수 없음. 스택이 작성되지 않음" #, python-format msgid "Cannot define the following properties at the same time: %(props)s." msgstr "다음 특성을 동시에 정의할 수 없음: %(props)s." #, python-format msgid "" "Cannot establish connection to Heat endpoint at region \"%(region)s\" due to " "\"%(exc)s\"" msgstr "" "다음 이유로 리젼 \"%(region)s\"에서 히트 엔드포인트에 연결을 설정할 수 없음: " "\"%(exc)s\"" msgid "" "Cannot get stack domain user token, no stack domain id configured, please " "fix your heat.conf" msgstr "" "스택 도메인 사용자 토큰을 가져올 수 없습니다. 스택 도메인 ID가 구성되지 않았" "습니다. heat.conf를 수정하십시오." msgid "Cannot migrate to lower schema version." msgstr "낮은 스키마 버전으로 마이그레이션할 수 없습니다." #, python-format msgid "Cannot modify readonly field %(field)s" msgstr "읽기 전용 필드 %(field)s을(를) 수정할 수 없음" #, python-format msgid "Cannot resume %s, resource not found" msgstr "%s을(를) 재개할 수 없음, 자원을 찾을 수 없음" #, python-format msgid "Cannot resume %s, resource_id not set" msgstr "%s을(를) 재개할 수 없음, 자원 ID가 설정되지 않음" #, python-format msgid "Cannot resume %s, stack not created" msgstr "%s을(를) 재개할 수 없음, 스택이 작성되지 않음" #, python-format msgid "Cannot suspend %s, resource not found" msgstr "%s을(를) 일시중단할 수 없음, 자원을 찾을 수 없음" #, python-format msgid "Cannot suspend %s, resource_id not set" msgstr "%s을(를) 일시중단할 수 없음, 자원 ID가 설정되지 않음" #, python-format msgid "Cannot suspend %s, stack not created" msgstr "%s을(를) 일시중단할 수 없음, 스택이 작성되지 않음" msgid "Captured stderr from the configuration execution." msgstr "구성 실행에서 캡처된 stderr입니다." msgid "Captured stdout from the configuration execution." msgstr "구성 실행에서 캡처된 stdout입니다." 
#, python-format msgid "Circular Dependency Found: %(cycle)s" msgstr "순환 종속성 찾음: %(cycle)s" msgid "Client entity to poll." msgstr "폴링할 클라이언트 엔티티입니다." msgid "Client name and resource getter name must be specified." msgstr "" "클라이언트 이름과 자원 가져오기 프로그램(getter) 이름을 지정해야 합니다." msgid "Client to poll." msgstr "폴링할 클라이언트입니다." msgid "Cluster configs dictionary." msgstr "클러스터 구성 사전입니다." msgid "Cluster information." msgstr "클러스터 정보입니다." msgid "Cluster metadata." msgstr "클러스터 메타데이터입니다." msgid "Cluster name." msgstr "클러스터 이름입니다." msgid "Cluster status." msgstr "클러스터 상태입니다." msgid "Comparison operator." msgstr "비교 연산자." #, python-format msgid "Concurrent transaction for %(action)s" msgstr "%(action)s의 동시 트랜잭션" msgid "Configuration of session persistence." msgstr "세션 지속성의 구성입니다." msgid "" "Configuration script or manifest which specifies what actual configuration " "is performed." msgstr "수행되는 실제 구성을 지정하는 구성 스크립트 또는 Manifest입니다." msgid "Configure most important configs automatically." msgstr "가장 중요한 구성을 자동으로 구성합니다." #, python-format msgid "Confirm resize for server %s failed" msgstr "서버 %s의 크기 조정을 확인하는 데 실패" msgid "Connection info for this network gateway." msgstr "이 네트워크 게이트웨이에 대한 연결 정보입니다." #, python-format msgid "Container '%(name)s' creation failed: %(code)s - %(reason)s" msgstr "컨테이너 '%(name)s' 작성 실패: %(code)s - %(reason)s" msgid "Container format of image." msgstr "이미지의 컨테이너 형식입니다." msgid "" "Content of part to attach, either inline or by referencing the ID of another " "software config resource." msgstr "" "접속할 파트의 컨텐츠, 다른 소프트웨어 구성 자원의 ID를 참조하거나 인라인입니" "다." msgid "Context for this stack." msgstr "이 스택의 컨텍스트입니다." msgid "Continue ? [y/N]" msgstr "계속하시겠습니까 ? [y/N]" msgid "Control how the disk is partitioned when the server is created." msgstr "서버가 작성될 때 디스크가 파티셔닝되는 방법을 제어합니다." msgid "Controls DPD protocol mode." msgstr "DPD 프로토콜 모드를 제어합니다." msgid "" "Convenience attribute to fetch the first assigned network address, or an " "empty string if nothing has been assigned at this time. Result may not be " "predictable if the server has addresses from more than one network." msgstr "" "첫 번째 지정된 네트워크 주소를 페치하기 위한 편의 속성 또는 비어 있는 문자열" "(현재 아무 것도 지정되지 않은 경우)입니다. 서버가 둘 이상의 네트워크에서 주소" "를 가지는 경우 결과가 예측 불가능할 수 있습니다." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure when signal_transport is set to " "TOKEN_SIGNAL. You can signal success by adding --data-binary '{\"status\": " "\"SUCCESS\"}' , or signal failure by adding --data-binary '{\"status\": " "\"FAILURE\"}'. This attribute is set to None for all other signal transports." msgstr "" "편의 속성은 curl CLI 명령 접두부를 제공하며, 이 접두부는 signal_transport가 " "TOKEN_SIGNAL로 설정된 경우 핸들 완료 또는 실패를 신호 표시하는 데 사용될 수 " "있습니다. 성공을 표시하려면 --data-binary '{\"status\": \"SUCCESS\"}'를 추가" "하고 실패를 표시하려면 --data-binary '{\"status\": \"FAILURE\"}'를 추가하면 " "됩니다. 이 속성은 기타 모든 신호 전송의 경우 None으로 설정됩니다." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure. You can signal success by " "adding --data-binary '{\"status\": \"SUCCESS\"}' , or signal failure by " "adding --data-binary '{\"status\": \"FAILURE\"}'." msgstr "" "편의 속성은 curl CLI 명령 접두부를 제공하며 이 접두부는 핸들 완료 또는 실패" "를 신호 표시하는 데 사용될 수 있습니다. 성공을 표시하려면 --data-binary " "'{\"status\": \"SUCCESS\"}'를 추가하고 실패를 표시하려면 --data-binary " "'{\"status\": \"FAILURE\"}'를 추가하면 됩니다." msgid "Cooldown period, in seconds." msgstr "쿨다운 기간(초)입니다." 
#, python-format msgid "Could not confirm resize of server %s" msgstr "서버 %s의 크기 조정을 확인할 수 없음" #, python-format msgid "Could not detach attachment %(att)s from server %(srv)s." msgstr "서버 %(srv)s에서 첨부 파일 %(att)s의 연결을 해제할 수 없습니다." #, python-format msgid "Could not fetch remote template \"%(name)s\": %(exc)s" msgstr "원격 템플리트 \"%(name)s\"을(를) 페치할 수 없음: %(exc)s" #, python-format msgid "Could not fetch remote template '%(url)s': %(exc)s" msgstr "원격 템플리트 '%(url)s'을(를) 페치할 수 없음: %(exc)s" #, python-format msgid "Could not load %(name)s: %(error)s" msgstr "%(name)s을(를) 로드할 수 없음: %(error)s" #, python-format msgid "Could not retrieve template: %s" msgstr "템플리트를 검색할 수 없음: %s" msgid "Create volumes on the same physical port as an instance." msgstr "인스턴스와 동일한 실제 포트에서 볼륨을 작성하십시오." msgid "" "Credentials used for swift. Not required if sahara is configured to use " "proxy users and delegated trusts for access." msgstr "" "swift에 사용된 자격 증명입니다. 액세스를 위해 프록시 사용자 및 위임된 트러스" "트를 사용하도록 sahara가 구성된 경우 필요하지 않습니다." msgid "Cron expression." msgstr "Cron 식입니다." msgid "Current share status." msgstr "현재 공유 상태입니다." msgid "Custom LoadBalancer template can not be found" msgstr "사용자 정의 로드 밸런서 템플리트를 찾을 수 없음" msgid "DB instance restore point." msgstr "DB 인스턴스 복원 지점입니다." msgid "DNS Domain id or name." msgstr "DNS 도메인 id 또는 이름." msgid "DNS IP address used inside tenant's network." msgstr "테넌트의 네트워크 내에서 사용하는 DNS IP 주소입니다." msgid "DNS Record type." msgstr "DNS 레코드 유형." msgid "DNS domain serial." msgstr "DNS 도메인 일련 번호." msgid "" "DNS record data, varies based on the type of record. For more details, " "please refer rfc 1035." msgstr "" "DNS 레코드 데이터는 레코드 타입에 따라 다양합니다. 자세한 내용은 rfc 1035를 " "참조하십시오." msgid "" "DNS record priority. It is considered only for MX and SRV types, otherwise, " "it is ignored." msgstr "" "DNS 레코드 우선순위입니다. MX 및 SRV 타입에만 고려되며, 그렇지 않은 경우 무시" "합니다." #, python-format msgid "Data supplied was not valid: %(reason)s" msgstr "제공된 데이터가 올바르지 않음: %(reason)s" #, python-format msgid "" "Database %(dbs)s specified for user does not exist in databases for resource " "%(name)s." msgstr "" "사용자에 대해 지정된 %(dbs)s 데이터베이스가 %(name)s 자원에 대한 데이터베이스" "에 없습니다." msgid "Database volume size in GB." msgstr "데이터베이스 볼륨 크기(GB)입니다." #, python-format msgid "" "Databases property is required if users property is provided for resource %s." msgstr "" "%s 자원에 대한 사용자 특성이 제공된 경우 데이터베이스 특성은 필수입니다." #, python-format msgid "" "Datastore version %(dsversion)s for datastore type %(dstype)s is not valid. " "Allowed versions are %(allowed)s." msgstr "" "데이터 저장소 유형 %(dstype)s의 데이터 저장소 버전 %(dsversion)s이(가) 올바르" "지 않습니다. 허용되는 버전은 %(allowed)s입니다." msgid "Datetime when a share was created." msgstr "공유가 작성된 날짜 시간입니다." msgid "" "Dead Peer Detection protocol configuration for the ipsec site connection." msgstr "ipsec 사이트 연결에 대한 정지된 피어 발견 프로토콜 구성입니다." msgid "Dead engines are removed." msgstr "작동하지 않는 엔진이 제거됩니다." msgid "Default TLS container reference to retrieve TLS information." msgstr "TLS 정보를 검색하는 기본 TLS 컨테이너 참조입니다." #, python-format msgid "Default must be a comma-delimited list string: %s" msgstr "기본값은 쉼표로 구분된 목록 문자열이어야 함: %s" msgid "Default name or UUID of the image used to boot Hadoop nodes." msgstr "Hadoop 노드를 부팅하는 데 사용된 이미지의 기본 이름 또는 UUID입니다." msgid "Default region name used to get services endpoints." msgstr "서비스 엔드포인트를 가져오는 데 사용하는 기본 리젼 이름입니다." msgid "Default settings for some of task attributes defined at workflow level." msgstr "워크플로우 레벨에 정의된 작업 속성의 기본 설정입니다." msgid "Default value for the input if none is specified." msgstr "지정된 사항이 없는 경우 입력에 대한 기본값입니다."
msgid "" "Defines a delay in seconds that Mistral Engine should wait after a task has " "completed before starting next tasks defined in on-success, on-error or on-" "complete." msgstr "" "작업이 완료된 후, on-success, on-error 또는 on-complete에 정의된 다음 작업을 " "시작하기 전에 Mistral Engine이 대기해야 하는 지연 시간(초)을 정의합니다." msgid "" "Defines a delay in seconds that Mistral Engine should wait before starting a " "task." msgstr "" "작업을 시작하기 전에 Mistral Engine이 대기해야 하는 지연 시간(초)을 정의합니" "다." msgid "Defines a pattern how task should be repeated in case of an error." msgstr "오류가 발생하는 경우 작업을 반복해야 하는 패턴을 정의합니다." msgid "" "Defines a period of time in seconds after which a task will be failed " "automatically by engine if hasn't completed." msgstr "" "완료되지 않은 경우 엔진이 자동으로 태스크를 실패로 처리할 때까지 경과될 시간" "(초)을 정의합니다." msgid "Defines if share type is accessible to the public." msgstr "공유 타입에 공개적으로 액세스할 수 있는지 정의합니다." msgid "Defines if shared filesystem is public or private." msgstr "공유 파일 시스템이 공용인지 아니면 개인용인지 정의합니다." msgid "" "Defines the method in which the request body for signaling a workflow would " "be parsed. In case this property is set to True, the body would be parsed as " "a simple json where each key is a workflow input, in other cases body would " "be parsed expecting a specific json format with two keys: \"input\" and " "\"params\"." msgstr "" "워크플로우 신호를 보내기 위해 요청된 본문을 구문 분석하는 메소드를 정의합니" "다. 이 특성이 True로 설정된 경우 각 키가 워크플로우 입력인 단순 json으로 본문" "이 구문 분석되며, 그렇지 않은 경우에는 본문이 두 키, \"input\"과 \"params" "\"가 있는 특정 json 형식이 되도록 구문 분석됩니다." msgid "" "Defines whether Mistral Engine should put the workflow on hold or not before " "starting a task." msgstr "" "Mistral Engine이 태스크를 시작하기 전에 워크플로우를 보류 상태에 두어야 할지 " "정의합니다." msgid "Defines whether auto-assign security group to this Node Group template." msgstr "이 노드 그룹 템플리트에 보안 그룹을 자동 지정할 것인지 정의합니다." #, python-format msgid "" "Defining more than one configuration for the same action in " "SoftwareComponent \"%s\" is not allowed." msgstr "" "소프트웨어 구성요소 \"%s\"에서 동일한 조치에 대한 둘 이상의 구성 정의는 허용" "되지 않습니다." msgid "Deleting in-progress snapshot" msgstr "진행 중인 스냅샷 삭제" #, python-format msgid "Deleting non-empty container (%(id)s) when %(prop)s is False" msgstr "%(prop)s이(가) False일 때 비어 있지 않은 컨테이너 (%(id)s) 삭제" #, python-format msgid "Delimiter for %s must be string" msgstr "%s에 대한 구분 기호은 문자열이어야 함" msgid "" "Denotes that the deployment is in an error state if this output has a value." msgstr "이 출력에 값이 있는 경우 배치가 오류 상태에 있음을 나타냅니다." msgid "Deploy data available" msgstr "사용 가능한 데이터 배치" #, python-format msgid "Deployment exited with non-zero status code: %s" msgstr "0이 아닌 상태 코드와 함께 배치 종료됨: %s" #, python-format msgid "Deployment to server failed: %s" msgstr "서버에 배치 실패: %s" #, python-format msgid "Deployment with id %s not found" msgstr "ID가 %s인 배치를 찾을 수 없음" msgid "Deprecated." msgstr "더 이상 사용되지 않습니다." msgid "" "Describe time constraints for the alarm. Only evaluate the alarm if the time " "at evaluation is within this time constraint. Start point(s) of the " "constraint are specified with a cron expression, whereas its duration is " "given in seconds." msgstr "" "알람의 시간 제한조건을 설명합니다. 평가 시간이 이 시간 제한조건에 포함되는 경" "우 알람만 평가하십시오. 제한조건의 시작점은 cron 식으로 지정하는 반면 기간은 " "초 단위로 제공됩니다." msgid "Description for the alarm." msgstr "알람에 대한 설명입니다." msgid "Description for the firewall policy." msgstr "방화벽 정책에 대한 설명입니다." msgid "Description for the firewall rule." msgstr "방화벽 규칙에 대한 설명입니다." msgid "Description for the firewall." msgstr "방화벽에 대한 설명입니다." msgid "Description for the ike policy." msgstr "ike 정책에 대한 설명입니다." 
msgid "Description for the ipsec policy." msgstr "ipsec 정책에 대한 설명입니다." msgid "Description for the ipsec site connection." msgstr "ipsec 사이트 연결에 대한 설명입니다." msgid "Description for the time constraint." msgstr "시간 제한 조건의 설명입니다." msgid "Description for the vpn service." msgstr "vpn 서비스에 대한 설명입니다." msgid "Description for this interface." msgstr "이 인터페이스에 대한 설명입니다." msgid "Description of domain." msgstr "도메인의 설명입니다." msgid "Description of keystone group." msgstr "keystone 그룹의 설명입니다." msgid "Description of keystone project." msgstr "keystone 프로젝트의 설명입니다." msgid "Description of keystone region." msgstr "keystone 지역의 설명입니다." msgid "Description of keystone service." msgstr "keystone 서비스의 설명입니다." msgid "Description of keystone user." msgstr "keystone 사용자의 설명입니다." msgid "Description of record." msgstr "레코드 설명." msgid "Description of the Node Group Template." msgstr "노드 그룹 템플리트의 설명입니다." msgid "Description of the Sahara Group Template." msgstr "Sahara 그룹 템플리트의 설명입니다." msgid "Description of the alarm." msgstr "알람에 대한 설명입니다." msgid "Description of the data source." msgstr "데이터 소스의 설명입니다." msgid "Description of the firewall policy." msgstr "방화벽 정책의 설명입니다." msgid "Description of the firewall rule." msgstr "방화벽 규칙의 설명입니다." msgid "Description of the firewall." msgstr "방화벽에 대한 설명입니다." msgid "Description of the image." msgstr "이미지에 대한 설명입니다." msgid "Description of the input." msgstr "입력에 대한 설명입니다." msgid "Description of the job binary." msgstr "작업 바이너리에 대한 설명입니다." msgid "Description of the metering label." msgstr "측정 레이블에 대한 설명입니다." msgid "Description of the output." msgstr "출력에 대한 설명입니다." msgid "Description of the pool." msgstr "풀에 대한 설명입니다." msgid "Description of the security group." msgstr "보안 그룹에 대한 설명입니다." msgid "Description of the vip." msgstr "vip에 대한 설명입니다." msgid "Description of the volume type." msgstr "볼륨 유형의 설명입니다." msgid "Description of the volume." msgstr "볼륨에 대한 설명입니다." msgid "Description of this Load Balancer." msgstr "이 로드 밸런서의 설명입니다." msgid "Description of this listener." msgstr "이 리스너의 설명입니다." msgid "Description of this pool." msgstr "이 풀에 대한 설명입니다." msgid "Desired IPs for this port." msgstr "이 포트에 대해 원하는 IP입니다." msgid "Desired capacity of the cluster." msgstr "클러스터에 필요한 용량입니다." msgid "Desired initial number of instances." msgstr "원하는 인스턴스 초기 수입니다." msgid "Desired initial number of resources in cluster." msgstr "클러스터에 필요한 초기 자원 수입니다." msgid "Desired initial number of resources." msgstr "원하는 초기 자원 수입니다." msgid "Desired number of instances." msgstr "원하는 인스턴스 수입니다." msgid "DesiredCapacity must be between MinSize and MaxSize" msgstr "DesiredCapacity는 MinSize와 MaxSize 사이여야 함" msgid "Destination IP address or CIDR." msgstr "대상 IP 주소 또는 CIDR입니다." msgid "Destination ip_address for this firewall rule." msgstr "이 방화벽 규칙의 대상 ip_address입니다." msgid "Destination port number or a range." msgstr "대상 포트 번호 또는 범위입니다." msgid "Destination port range for this firewall rule." msgstr "이 방화벽 규칙의 대상 포트 범위입니다." msgid "Detailed information about resource." msgstr "자원에 대한 자세한 정보." msgid "Device ID of this port." msgstr "이 포트의 디바이스 ID입니다." msgid "Device info for this network gateway." msgstr "이 네트워크 게이트웨이에 대한 디바이스 정보입니다." msgid "" "Device type: at the moment we can make distinction only between disk and " "cdrom." msgstr "디바이스 유형: 디스크 및 cdrom만 구별할 수 있습니다." msgid "" "Dict, which has expand properties for port. Used only if port property is " "not specified for creating port." msgstr "" "포트의 특성을 확장한 Dict. 포트를 작성하는 데 사용할 포트 특성이 지정되지 않" "은 경우에만 사용합니다." msgid "Dictionary containing workflow tasks." 
msgstr "워크플로우 태스크를 포함하는 사전입니다." msgid "Dictionary of node configurations." msgstr "노드 구성의 사전입니다." msgid "Dictionary of variables to publish to the workflow context." msgstr "워크플로우 컨텍스트에 공개할 변수의 사전입니다." msgid "Dictionary which contains input for workflow." msgstr "워크플로우의 입력을 포함하는 사전입니다." msgid "" "Dictionary-like section defining task policies that influence how Mistral " "Engine runs tasks. Must satisfy Mistral DSL v2." msgstr "" "Mistral Engine에서 태스크를 실행하는 방법에 영향을 미치는 태스크 정책을 ㅈ어" "의하는 사전과 유사한 섹션입니다. Mistral DSL v2를 만족시켜야 합니다." msgid "DisableRollback and OnFailure may not be used together" msgstr "DisableRollback 및 OnFailure를 함께 사용할 수 없음" msgid "Disk format of image." msgstr "이미지의 디스크 형식입니다." msgid "Does not contain a valid AWS Access Key or certificate" msgstr "올바른 AWS 액세스 키 또는 인증서를 포함하고 있지 않음" msgid "Domain email." msgstr "도메인 이메일." msgid "Domain name." msgstr "도메인 이름." #, python-format msgid "Duplicate names %s" msgstr "중복된 이름 %s" msgid "Duplicate refs are not allowed." msgstr "중복 참조는 허용되지 않습니다." msgid "Duration for the time constraint." msgstr "시간 제한 조건의 기간입니다." msgid "EIP address to associate with instance." msgstr "인스턴스와 연관시킬 EIP 주소입니다." #, python-format msgid "Each %(object_name)s must contain a %(sub_section)s key." msgstr "각 %(object_name)s에 %(sub_section)s 키가 있어야 합니다." msgid "Each Resource must contain a Type key." msgstr "각 자원에 유형 키가 있어야 합니다." msgid "Ebs is missing, this is required when specifying BlockDeviceMappings." msgstr "" "Ebs가 누락되었습니다. BlockDeviceMappings를 지정할 때 Ebs는 필수입니다." msgid "" "Egress rules are only allowed when Neutron is used and the 'VpcId' property " "is set." msgstr "" "Neutron을 사용하고 'VpcId' 특성을 설정한 경우에만 Egress 규칙이 허용됩니다." #, python-format msgid "Either %(net)s or %(port)s must be provided." msgstr "%(net)s 또는 %(port)s을(를) 지정해야 합니다." msgid "Either 'EIP' or 'AllocationId' must be provided." msgstr "'EIP' 또는 'AllocationId'가 제공되어야 합니다." msgid "Either 'InstanceId' or 'LaunchConfigurationName' must be provided." msgstr "'InstanceId' 또는 'LaunchConfigurationName'을 제공해야 합니다." #, python-format msgid "Either project or domain must be specified for role %s" msgstr "역할 %s에 대해 프로젝트나 도메인을 지정해야 함" #, python-format msgid "Either volume_id or snapshot_id must be specified for device mapping %s" msgstr "디바이스 맵핑 %s에 대해 volume_id 또는 snapshot_id가 지정되어야 함" msgid "Email address of keystone user." msgstr "keystone 사용자의 이메일 주소입니다." msgid "Enable the legacy OS::Heat::CWLiteAlarm resource." msgstr "레거시 OS::Heat::CWLiteAlarm 자원을 사용합니다." msgid "Enable the preview Stack Abandon feature." msgstr "스택 중단 미리보기 기능을 사용합니다." msgid "Enable the preview Stack Adopt feature." msgstr "스택 채택 미리보기 기능을 사용합니다." msgid "" "Enables Source NAT on the router gateway. NOTE: The default policy setting " "in Neutron restricts usage of this property to administrative users only." msgstr "" "라우터 게이트웨이에서 소스 NAT를 사용으로 설정합니다. 참고: Neutron의 기본 정" "책 설정은 이 특성의 사용을 관리 사용자로만 제한합니다." msgid "" "Enables engine with convergence architecture. All stacks with this option " "will be created using convergence engine." msgstr "" "집중 아키텍처로 엔진을 사용합니다. 이 옵션이 있는 모든 스택은집중 엔진을 사용" "하여 작성됩니다." msgid "Enables or disables read-only access mode of volume." msgstr "볼륨의 읽기 전용 액세스 모드를 사용하거나 사용하지 않습니다." msgid "Encapsulation mode for the ipsec policy." msgstr "ipsec 정책의 캡슐화 모드입니다." msgid "" "Encrypt template parameters that were marked as hidden and also all the " "resource properties before storing them in database." msgstr "" "숨겨짐으로 표시된 템플릿 매개변수와 모든 자원 특성을 데이터베이스에 저장하기 " "전에 암호화하십시오." msgid "Encryption algorithm for the ike policy." 
msgstr "ike 정책의 암호화 알고리즘입니다." msgid "Encryption algorithm for the ipsec policy." msgstr "ipsec 정책의 암호화 알고리즘입니다." msgid "End address for the allocation pool." msgstr "할당 풀의 종료 주소입니다." #, python-format msgid "End resizing the group %(group)s" msgstr "그룹 %(group)s 크기 조정 종료" msgid "" "Endpoint/url which can be used for signalling handle when signal_transport " "is set to TOKEN_SIGNAL. None for all other signal transports." msgstr "" "signal_transport가 TOKEN_SIGNAL로 설정되면 핸들 신호 표시에 사용할 수 있는 엔" "드포인트/url입니다. 기타 모든 신호 전송의 경우 None입니다." msgid "Endpoint/url which can be used for signalling handle." msgstr "핸들 신호 표시에 사용할 수 있는 엔드포인트/url입니다." msgid "Engine_Id" msgstr "Engine_Id" msgid "Error" msgstr "에러" #, python-format msgid "Error authorizing action %s" msgstr "조치 %s 권한 부여 중 오류 발생" #, python-format msgid "Error creating ec2 keypair for user %s" msgstr "사용자 %s에 대한 ec2 키 쌍을 작성하는 중에 오류 발생" msgid "" "Error during applying access rules to share \"{0}\". The root cause of the " "problem is the following: {1}." msgstr "" "공유 \"{0}\"에 액세스 규칙을 적용하는 중에 오류가 발생했습니다. 문제점의 근" "본 원인은 다음과 같습니다. {1}." msgid "Error during creation of share \"{0}\"" msgstr "\"{0}\" 공유 작성 중에 오류 발생" msgid "Error during deleting share \"{0}\"." msgstr "공유 \"{0}\" 삭제 중에 오류가 발생했습니다." #, python-format msgid "Error validating value '%(value)s'" msgstr "'%(value)s' 값을 검증하는 중 오류 발생" #, python-format msgid "Error validating value '%(value)s': %(message)s" msgstr "'%(value)s' 값을 검증하는 중 오류 발생: %(message)s" msgid "Ethertype of the traffic." msgstr "트래픽의 이더타입입니다." msgid "Exclude state for cidr." msgstr "cidr의 상태를 제외하십시오." #, python-format msgid "Expected 1 external network, found %d" msgstr "1 외부 네트워크 예상, %d 찾음" msgid "Export locations of share." msgstr "공유 위치를 내보내십시오." msgid "Expression of the alarm to evaluate." msgstr "평가할 알람의 식입니다." msgid "External fixed IP address." msgstr "외부 Fixed IP 주소입니다." msgid "External fixed IP addresses for the gateway." msgstr "게이트웨이의 외부 Fixed IP 주소입니다." msgid "External network gateway configuration for a router." msgstr "라우터의 외부 네트워크 게이트웨이 구성입니다." msgid "" "Extra parameters to include in the \"floatingip\" object in the creation " "request. Parameters are often specific to installed hardware or extensions." msgstr "" "작성 요청에서 \"floatingip\" 오브젝트에 포함시킬 추가 매개변수입니다. 매개변" "수는 종종 설치된 하드웨어 또는 확장기능에 특정합니다." msgid "Extra parameters to include in the creation request." msgstr "작업 오브젝트에 포함할 추가 매개변수입니다." msgid "Extra parameters to include in the request." msgstr "요청에 포함할 추가 매개변수입니다." msgid "" "Extra parameters to include in the request. Parameters are often specific to " "installed hardware or extensions." msgstr "" "요청에서 포함할 추가 매개변수입니다. 매개변수는 종종 설치된 하드웨어 또는 확" "장기능에 특정합니다." msgid "Extra specs key-value pairs defined for share type." msgstr "공유 타입에 정의된 추가 스펙 키-값 쌍입니다." 
#, python-format msgid "Failed to attach interface (%(port)s) to server (%(server)s)" msgstr "인터페이스(%(port)s)를 서버(%(server)s)에 연결하는 데 실패" #, python-format msgid "Failed to attach volume %(vol)s to server %(srv)s - %(err)s" msgstr "볼륨 %(vol)s을(를) 서버 %(srv)s에 연결하는 데 실패 - %(err)s" #, python-format msgid "Failed to create Bay '%(name)s' - %(reason)s" msgstr "Bay '%(name)s' - %(reason)s 작성 실패" #, python-format msgid "Failed to detach interface (%(port)s) from server (%(server)s)" msgstr "서버(%(server)s)에서 인터페이스(%(port)s)의 연결을 해제하는 데 실패" #, python-format msgid "Failed to execute %(action)s for %(cluster)s: %(reason)s" msgstr "%(cluster)s에 대해 %(action)s을(를) 실행할 수 없음: %(reason)s" #, python-format msgid "Failed to extend volume %(vol)s - %(err)s" msgstr "%(vol)s 볼륨을 확장하지 못함 - %(err)s" #, python-format msgid "Failed to fetch template: %s" msgstr "템플리트 페치에 실패: %s" #, python-format msgid "Failed to find instance %s" msgstr "%s 인스턴스를 찾지 못했음" #, python-format msgid "Failed to find server %s" msgstr "%s 서버를 찾지 못함" #, python-format msgid "Failed to parse JSON data: %s" msgstr "JSON 데이터 구문 분석 실패: %s" #, python-format msgid "Failed to restore volume %(vol)s from backup %(backup)s - %(err)s" msgstr "백업 %(backup)s에서 볼륨 %(vol)s을(를) 복원하는 데 실패 - %(err)s" msgid "Failed to retrieve template" msgstr "템플리트를 검색하지 못함" #, python-format msgid "Failed to retrieve template data: %s" msgstr "템플리트 데이터 검색 실패: %s" #, python-format msgid "Failed to retrieve template: %s" msgstr "템플리트 검색 실패: %s" #, python-format msgid "" "Failed to send message to stack (%(stack_name)s) on other engine " "(%(engine_id)s)" msgstr "" "다른 엔진(%(engine_id)s)에서 스택(%(stack_name)s)에 메시지 발송 실패" #, python-format msgid "Failed to stop stack (%(stack_name)s) on other engine (%(engine_id)s)" msgstr "다른 엔진(%(engine_id)s)에서 스택(%(stack_name)s) 중지 실패" #, python-format msgid "Failed to update Bay '%(name)s' - %(reason)s" msgstr "Bay '%(name)s' - %(reason)s 업데이트 실패" msgid "Failed to update, can not found port info." msgstr "업데이트에 실패했습니다. 포트 정보를 찾을 수 없습니다." #, python-format msgid "" "Failed validating stack template using Heat endpoint at region \"%(region)s" "\" due to \"%(exc)s\"" msgstr "" "다음 리젼에서 히트 엔드포인트를 사용하여 스택 템플리트의 유효성을 검증하지 못" "함: \"%(region)s\", 이유: \"%(exc)s\"" msgid "Fake attribute !a." msgstr "Fake 속성 !a." msgid "Fake attribute a." msgstr "Fake 속성 a." msgid "Fake property !a." msgstr "Fake 특성 !a." msgid "Fake property !c." msgstr "Fake 특성 !c." msgid "Fake property a." msgstr "Fake 특성 a." msgid "Fake property c." msgstr "Fake 특성 c." msgid "Fake property ca." msgstr "Fake 특성 ca." msgid "" "False to trigger actions when the threshold is reached AND the alarm's state " "has changed. By default, actions are called each time the threshold is " "reached." msgstr "" "임계값에 도달하고 알람의 상태가 변경되었을 때 조치를 트리거하려면 False를 설" "정하십시오. 기본적으로 조치는 임계값에 도달할 때마다 호출됩니다." #, python-format msgid "Field %(field)s of %(objname)s is not an instance of Field" msgstr "%(objname)s의 %(field)s 필드는 필드 인스턴스가 아님" msgid "" "Fixed IP address to specify for the port created on the requested network." msgstr "" "요청한 네트워크에 작성된 포트에 지정할 고정 IP 주소입니다." msgid "Fixed IP addresses." msgstr "고정 IP 주소입니다." msgid "Fixed IPv4 address for this NIC." msgstr "이 NIC에 대한 고정 IPv4 주소입니다." msgid "Flag indicating if traffic to or from instance is validated." msgstr "" "인스턴스로의 트래픽 또는 인스턴스에서의 트래픽이 유효성 검증되었는지 여부를 " "표시하는 플래그입니다." msgid "" "Flag to enable/disable port security on the network. It provides the default " "value for the attribute of the ports created on this network."
msgstr "" "네트워크에서 포트 보안을 사용/사용하지 않게 설정하는 플래그입니다. 이 네트워" "크에서 작성된 포트 속성의 기본값을 제공합니다." msgid "" "Flag to enable/disable port security on the port. When disable this " "feature(set it to False), there will be no packages filtering, like security-" "group and address-pairs." msgstr "" "포트에서 포트 보안을 사용/사용하지 않게 설정하는 플래그입니다. 이 기능을 사용" "하지 않으면(False로 설정) 보안 그룹 및 주소 쌍과 같은 패키지 필터링이 없습니" "다." msgid "Flavor of the instance." msgstr "인스턴스의 플레이버입니다." msgid "Friendly name of the port." msgstr "포트의 선호 이름입니다." msgid "Friendly name of the router." msgstr "라우터의 선호 이름입니다." msgid "Friendly name of the subnet." msgstr "서브넷의 선호 이름입니다." #, python-format msgid "Function \"%s\" must have arguments" msgstr "\"%s\" 기능은 인수여야 함" #, python-format msgid "Function \"%s\" usage: [\"\", \"\"]" msgstr "함수 \"%s\" 사용법: [\"\", \"\"]" #, python-format msgid "Gateway IP address \"%(gateway)s\" is in invalid format." msgstr "게이트웨이 IP 주소 \"%(gateway)s\"의 형식이 올바르지 않습니다." msgid "Gateway network for the router." msgstr "라우터의 게이트웨이 네트워크입니다." msgid "Generic HeatAPIException, please use specific subclasses!" msgstr "일반 HeatAPIException는 특정 서브 클래스를 이용하세요!" msgid "Glance image ID or name." msgstr "글랜스 이미지 ID 또는 이름입니다." msgid "Governs permissions set in manila for the cluster ips." msgstr "클러스터 ips에 대해 manila에 설정된 권한을 제어합니다." msgid "Granularity to use for age argument, defaults to days." msgstr "기간 인수에 사용할 단위이며 기본값을 일로 설정합니다." msgid "Hadoop cluster name." msgstr "Hadoop 클러스터 이름입니다." #, python-format msgid "Header X-Auth-Url \"%s\" not an allowed endpoint" msgstr "헤더 X-Auth-Url \"%s\"이(가) 허용되는 엔드포인트가 아님" msgid "Health probe timeout, in seconds." msgstr "상태 프로브 제한시간(초)입니다." msgid "" "Heat build revision. If you would prefer to manage your build revision " "separately, you can move this section to a different file and add it as " "another config option." msgstr "" "히트 빌드 개정판입니다. 빌드 개정판을 개별적으로 관리하려는 경우 이 섹션을 다" "른 파일로 이동하고 이를 다른 구성 옵션으로 추가할 수 있습니다." msgid "Host" msgstr "호스트" msgid "Hostname" msgstr "호스트 이름" msgid "Hostname of the instance." msgstr "인스턴스의 호스트 이름입니다." msgid "How long to preserve deleted data." msgstr "삭제된 데이터를 유지하는 기간입니다." msgid "" "How the client will signal the wait condition. CFN_SIGNAL will allow an HTTP " "POST to a CFN keypair signed URL. TEMP_URL_SIGNAL will create a Swift " "TempURL to be signalled via HTTP PUT. HEAT_SIGNAL will allow calls to the " "Heat API resource-signal using the provided keystone credentials. " "ZAQAR_SIGNAL will create a dedicated zaqar queue to be signalled using the " "provided keystone credentials. TOKEN_SIGNAL will allow and HTTP POST to a " "Heat API endpoint with the provided keystone token. NO_SIGNAL will result in " "the resource going to a signalled state without waiting for any signal." msgstr "" "클라이언트가 대기 조건 신호를 보내는 방법입니다. CFN_SIGNAL을 사용하면 CFN " "키 페어 서명 URL에 HTTP POST를 수행할 수 있습니다. TEMP_URL_SIGNAL은 HTTP PUT" "을 통해 신호를 보낼 Swift TempURL을 작성합니다. HEAT_SIGNAL은 제공된 " "keystone 자격 증명을 사용하여 Heat API resource-signal을 호출할 수 있습니다. " "ZAQAR_SIGNAL은 제공된 keystone 자격 증명을 사용하여 신호를 보낼 전용 zaqar 큐" "를 작성합니다. TOKEN_SIGNAL을 사용하면 제공된 keystone 토큰으로 Heat API 엔드" "포인트에 대해 HTTP POST를 수행할 수 있습니다. NO_SIGNAL을 사용하면 신호를 기" "다리지 않고 자원이 신호됨 단계로 이동합니다." msgid "" "How the server should receive the metadata required for software " "configuration. POLL_SERVER_CFN will allow calls to the cfn API action " "DescribeStackResource authenticated with the provided keypair. " "POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the " "provided keystone credentials. 
POLL_TEMP_URL will create and populate a " "Swift TempURL with metadata for polling. ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "서버에서 소프트웨어 구성에 필요한 메타데이터를 받는 방법입니다. " "POLL_SERVER_CFN에서 제공된 키 페어로 인증된 cfn API 작업 " "DescribeStackResource을 호출할 수 있습니다. POLL_SERVER_HEAT에서 제공된 " "keystone 자격 증명을 사용하여 Heat API resource-show를 호출할 수 있습니다. " "POLL_TEMP_URL에서 Swift TempURL을 작성하여 폴링할 메타데이터로 채웁니다. " "ZAQAR_MESSAGE가 전용 zaqar 큐를 작성하고 폴링할 메타데이터를 게시합니다." msgid "How the server should signal to heat with the deployment output values." msgstr "서버에서 배치 출력 값으로 히트에 신호를 보내는 방법입니다. " msgid "" "How the server should signal to heat with the deployment output values. " "CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL. " "TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT. " "HEAT_SIGNAL will allow calls to the Heat API resource-signal using the " "provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar " "queue to be signaled using the provided keystone credentials. NO_SIGNAL will " "result in the resource going to the COMPLETE state without waiting for any " "signal." msgstr "" "서버가 배포 출력 값으로 heat에 신호를 보내는 방법입니다. CFN_SIGNAL을 사용하" "면 CFN 키 페어 서명 URL에 HTTP POST를 수행할 수 있습니다. TEMP_URL_SIGNAL은 " "HTTP PUT을 통해 신호를 보낼 Swift TempURL을 작성합니다. HEAT_SIGNAL은 제공된 " "keystone 자격 증명을 사용하여 Heat API resource-signal을 호출할 수 있습니다. " "ZAQAR_SIGNAL은 제공된 keystone 자격 증명을 사용하여 신호를 보낼 전용 zaqar 큐" "를 작성합니다. NO_SIGNAL을 사용하면 신호를 기다리지 않고 자원이 COMPLETE 단계" "로 이동합니다." msgid "" "How the user_data should be formatted for the server. For HEAT_CFNTOOLS, the " "user_data is bundled as part of the heat-cfntools cloud-init boot " "configuration data. For RAW the user_data is passed to Nova unmodified. For " "SOFTWARE_CONFIG user_data is bundled as part of the software config data, " "and metadata is derived from any associated SoftwareDeployment resources." msgstr "" "서버에 대해 user_data가 형식화되어야 하는 방법입니다. HEAT_CFNTOOLS의 경우 " "user_data는 heat-cfntools cloud-init 부트 구성 데이터의 파트로 번들화됩니다. " "RAW의 경우 user_data는 수정되지 않은 Nova로 전달됩니다.SOFTWARE_CONFIG의 경" "우 user_data는 소프트웨어 구성 데이터의 일부로 번들되며 메타데이터는 연관된 " "SoftwareDeployment 자원에서 간격입니다." msgid "Human readable name for the secret." msgstr "시크릿의 판독 가능한 이름입니다." msgid "Human-readable name for the container." msgstr "컨테이너의 판독 가능 이름입니다." msgid "" "ID list of the L3 agent. User can specify multi-agents for highly available " "router. NOTE: The default policy setting in Neutron restricts usage of this " "property to administrative users only." msgstr "" "L3 에이전트의 ID 목록. 사용자는 고가용성 라우터에 대해 다중 에이전트를 지정" "할 수 있습니다. 참고: Neutron의 기본 정책 설정은 이 특성의 사용을 관리 사용" "자로만 제한합니다." msgid "ID of an existing port to associate with this server." msgstr "이 서버와 연관시킬 기존 포트의 ID입니다." msgid "" "ID of an existing port with at least one IP address to associate with this " "floating IP." msgstr "이 부동 IP와 연관시킬 하나 이상의 IP 주소가 있는 기존 포트의 ID입니다." msgid "ID of network to create a port on." msgstr "포트를 작성할 네트워크의 ID입니다." msgid "ID of project for API authentication" msgstr "API 인증용 프로젝트의 ID" msgid "ID of queue to use for signaling output values" msgstr "출력 값 신호를 보내는 데 사용할 큐의 ID" msgid "" "ID of resource to apply configuration to. Normally this should be a Nova " "server ID." msgstr "구성을 적용할 자원의 ID입니다. 일반적으로 Nova 서버 ID여야 합니다." msgid "" "ID of server (VM, etc...) on host that is used for exporting network file-" "system." msgstr "" "네트워크 파일 시스템을 내보내는 데 사용하는 호스트의 서버(VM 등) ID입니다." 
msgid "ID of signal to use for signaling output values" msgstr "출력 값 신호를 보내는 데 사용할 신호 ID" msgid "" "ID of software configuration resource to execute when applying to the server." msgstr "서버에 적용할 때 실행할 소프트웨어 구성 자원의 ID입니다. " msgid "ID of the Cluster Template used for Node Groups and configurations." msgstr "노드 그룹 및 구성에 사용된 클러스터 템플리트의 ID입니다." msgid "ID of the InternetGateway." msgstr "InternetGateway의 ID입니다." msgid "" "ID of the L3 agent. NOTE: The default policy setting in Neutron restricts " "usage of this property to administrative users only." msgstr "" "L3 에이전트의 ID입니다. 참고: Neutron의 기본 정책 설정은 이 특성의 사용을 관" "리 사용자로만 제한합니다." msgid "ID of the Node Group Template." msgstr "노드 그룹 템플리트의 ID입니다." msgid "ID of the VPNGateway to attach to the VPC." msgstr "VPC에 접속할 VPNGateway의 ID입니다." msgid "ID of the default image to use for the template." msgstr "템플리트에 사용할 기본 이미지의 ID입니다." msgid "ID of the default pool this listener is associated to." msgstr "이 리스너가 연관된 기본 풀의 ID입니다." msgid "ID of the floating IP to assign to the server." msgstr "서버에 지정할 부동 IP의 ID입니다." msgid "ID of the floating IP to associate." msgstr "연관시킬 부동 IP의 ID입니다." msgid "ID of the health monitor associated with this pool." msgstr "이 풀과 연관된 상태 모니터의 ID입니다." msgid "ID of the image to use for the template." msgstr "템플리트에 사용할 이미지 ID입니다." msgid "ID of the load balancer this listener is associated to." msgstr "이 리스너가 연관된 로드 밸런서의 ID입니다." msgid "ID of the network in which this IP is allocated." msgstr "이 IP가 할당되는 네트워크의 ID입니다." msgid "ID of the port associated with this IP." msgstr "이 IP와 연관된 포트의 ID입니다." msgid "ID of the queue." msgstr "큐의 ID입니다." msgid "ID of the router used as gateway, set when associated with a port." msgstr "" "게이트웨이로 사용된 라우터의 ID입니다. 포트와 연관된 경우 설정하십시오." msgid "ID of the router." msgstr "라우터의 ID입니다." msgid "ID of the server being deployed to" msgstr "배치 중인 서버의 ID" msgid "ID of the stack this deployment belongs to" msgstr "이 배치가 속해 있는 스택의 ID" msgid "ID of the tenant to which the RBAC policy will be enforced." msgstr "RBAC 정책이 강제 적용될 테넌트의 ID입니다." msgid "ID of the tenant who owns the health monitor." msgstr "상태 모니터를 소유하는 테넌트의 ID입니다." msgid "ID or name of the QoS policy." msgstr "QoS 정책의 ID 또는 이름입니다." msgid "ID or name of the RBAC object." msgstr "RBAC 오브젝트의 ID 또는 이름입니다." msgid "ID or name of the external network for the gateway." msgstr "게이트웨이에 대한 외부 네트워크의 ID 또는 이름입니다." msgid "ID or name of the image to register." msgstr "등록할 이미지의 ID 또는 이름입니다." msgid "ID or name of the load balancer with which listener is associated." msgstr "리스너가 연관된 로드 밸런서의 ID 또는 이름입니다." msgid "ID or name of the load balancing pool." msgstr "로드 밸런싱 풀의 ID 또는 이름입니다." msgid "" "ID that AWS assigns to represent the allocation of the address for use with " "Amazon VPC. Returned only for VPC elastic IP addresses." msgstr "" "AWS가 Amazon VPC와 사용할 주소의 할당을 표시하기 위해 지정하는 ID입니다. VPC " "신축적인 IP 주소에 대해서만 리턴되었습니다." msgid "IP address and port of the pool." msgstr "풀의 IP 주소 및 포트입니다." msgid "IP address desired in the subnet for this port." msgstr "이 포트의 서브넷에서 원하는 IP 주소입니다." msgid "IP address for the VIP." msgstr "VIP의 IP 주소입니다." msgid "IP address of the associated port, if specified." msgstr "연관된 포트의 IP 주소입니다(지정된 경우)." msgid "" "IP address of the floating IP. NOTE: The default policy setting in Neutron " "restricts usage of this property to administrative users only." msgstr "" "Floating IP의 IP 주소입니다. 참고: Neutron의 참고: Neutron의 기본 정책 설정에" "서는 이 특성의 사용을 관리 사용자로만 참고하십시오." msgid "IP address of the pool member on the pool network." msgstr "풀 네트워크에서 풀 멤버의 IP 주소입니다." 
msgid "IP address of the pool member." msgstr "풀 멤버의 IP 주소입니다." msgid "IP address of the vip." msgstr "vip의 IP 주소입니다." msgid "IP address to allow through this port." msgstr "이 포트를 통해 허용할 IP 주소입니다." msgid "IP address to use if the port has multiple addresses." msgstr "포트에 다중 주소가 있는 경우 사용할 IP 주소입니다." msgid "" "IP or other address information about guest that allowed to access to Share." msgstr "공유에 액세스할 수 있는 게스트에 대한 IP 또는 기타 주소 정보입니다." msgid "IPv6 RA (Router Advertisement) mode." msgstr "IPv6 RA(Router Advertisement) 모드입니다." msgid "IPv6 address mode." msgstr "IPv6 주소 모드입니다." msgid "Id of a resource." msgstr "자원의 ID." msgid "Id of the manila share." msgstr "manila 공유의 id입니다." msgid "Id of the tenant owning the firewall policy." msgstr "방화벽 정책을 소유하는 테넌트의 ID입니다." msgid "Id of the tenant owning the firewall." msgstr "방화벽을 소유하는 테넌트의 ID입니다." msgid "Identifier of the source instance to replicate." msgstr "복제할 소스 인스턴스의 ID입니다." #, python-format msgid "" "If \"%(size)s\" is provided, only one of \"%(image)s\", \"%(image_ref)s\", " "\"%(source_vol)s\", \"%(snapshot_id)s\" can be specified, but currently " "specified options: %(exclusive_options)s." msgstr "" "\"%(size)s\"이(가) 제공된 경우 \"%(image)s\", \"%(image_ref)s\", " "\"%(source_vol)s\", \"%(snapshot_id)s\" 중 하나만 지정할 수 있지만, 현재 지정" "된 옵션은 %(exclusive_options)s입니다." msgid "If False, closes the client socket connection explicitly." msgstr "False인 경우 클라이언트 소켓 연결을 명시적으로 종료합니다." msgid "" "If True, delete any objects in the container when the container is deleted. " "Otherwise, deleting a non-empty container will result in an error." msgstr "" "True인 경우, 컨테이너를 삭제할 때 컨테이너의 오브젝트를 모두 삭제하십시오. 그" "렇지 않으면, 비어 있지 않은 컨테이너를 삭제할 때 오류가 발생합니다." msgid "If True, enable config drive on the server." msgstr "true인 경우, 서버에서 구성 드라이브를 사용으로 설정합니다." msgid "" "If configured, it allows to run action or workflow associated with a task " "multiple times on a provided list of items." msgstr "" "구성된 경우 제공된 항목 목록에서 태스크와 연관된 작업 또는 워크플로우를 여러 " "번 실행할 수 있습니다." msgid "If set, then the server's certificate will not be verified." msgstr "설정되면 서버의 인증서가 확인되지 않습니다." msgid "If specified, the backup to create the volume from." msgstr "지정된 경우 볼륨을 작성하기 위한 백업입니다." msgid "If specified, the backup used as the source to create the volume." msgstr "지정된 경우 볼륨을 작성하기 위한 소스로 사용된 백업입니다." msgid "If specified, the name or ID of the image to create the volume from." msgstr "지정된 경우 볼륨을 작성하기 위한 이미지의 이름 또는 ID입니다." msgid "If specified, the snapshot to create the volume from." msgstr "지정된 경우 볼륨을 작성하기 위한 스냅샷입니다." msgid "If specified, the type of volume to use, mapping to a specific backend." msgstr "지정된 경우 사용할 볼륨의 유형입니다. 특정 백엔드로 맵핑 중입니다." msgid "If specified, the volume to use as source." msgstr "지정된 경우 소스로 사용하기 위한 볼륨입니다." msgid "" "If the region is hierarchically a child of another region, set this " "parameter to the ID of the parent region." msgstr "" "지역이 계층 구조에서 다른 지역의 하위에 있는 경우 이 매개변수를 상위 지역의 " "ID로 설정하십시오." msgid "" "If true, the resources in the chain will be created concurrently. If false " "or omitted, each resource will be treated as having a dependency on the " "previous resource in the list." msgstr "" "true인 경우, 체인의 자원은 동시에 작성됩니다. false이거나 생략된 경우 각 자원" "은 목록의 이전 자원에 종속성이 있는 것으로 처리됩니다." msgid "If without InstanceId, ImageId and InstanceType are required." msgstr "InstanceId가 없으면 ImageId 및 InstanceType이 필요합니다." #, python-format msgid "Illegal prefix bounds: %(key1)s=%(value1)s, %(key2)s=%(value2)s." msgstr "잘못된 접두부 경계: %(key1)s=%(value1)s, %(key2)s=%(value2)s." 
#, python-format msgid "" "Image %(image)s requires %(imram)s minimum ram. Flavor %(flavor)s has only " "%(flram)s." msgstr "" "이미지 %(image)s에는 %(imram)s 최소 RAM이 필요합니다. Flavor %(flavor)s에는 " "%(flram)s만 있습니다." #, python-format msgid "" "Image %(image)s requires %(imsz)s GB minimum disk space. Flavor %(flavor)s " "has only %(flsz)s GB." msgstr "" "이미지 %(image)s에는 %(imsz)sGB 최소 디스크 공간이 필요합니다. Flavor " "%(flavor)s에는 %(flsz)sGB만 있습니다." #, python-format msgid "Image status is required to be %(cstatus)s not %(wstatus)s." msgstr "이미지 상태는 %(wstatus)s이(가) 아니라 %(cstatus)s이어야 합니다." msgid "Incompatible parameters were used together" msgstr "호환되지 않는 매개변수를 함께 사용했습니다." #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be one of: %(allowed)s" msgstr "" "\"%(fn_name)s\"에 대한 올바르지 않은 인수는 다음 중 하나여야 함: %(allowed)s" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be: %(example)s" msgstr "\"%(fn_name)s\"에 대한 올바르지 않은 인수는 %(example)s이어야 함" msgid "Incorrect arguments: Items to merge must be maps." msgstr "잘못된 인수: 병합할 항목은 맵이어야 합니다." #, python-format msgid "" "Incorrect index to \"%(fn_name)s\" should be between 0 and %(max_index)s" msgstr "" "\"%(fn_name)s\"에 대한 올바르지 않은 인덱스는 0과 %(max_index)s 사이여야 함" #, python-format msgid "Incorrect index to \"%(fn_name)s\" should be: %(example)s" msgstr "\"%(fn_name)s\"에 대한 올바르지 않은 인덱스는 %(example)s이어야 함" #, python-format msgid "Index to \"%s\" must be a string" msgstr "\"%s\"에 대한 인덱스는 문자열이어야 함" #, python-format msgid "Index to \"%s\" must be an integer" msgstr "\"%s\"에 대한 인덱스는 정수여야 함" msgid "" "Indicate whether the volume should be deleted when the instance is " "terminated." msgstr "인스턴스가 종료될 때 볼륨을 삭제해야 하는지 여부를 표시합니다." msgid "" "Indicate whether the volume should be deleted when the server is terminated." msgstr "서버가 종료될 때 볼륨을 삭제해야 하는지 여부를 표시합니다." msgid "Indicates remote IP prefix to be associated with this metering rule." msgstr "이 측정 규칙과 연관시킬 원격 IP 접두부를 표시합니다." msgid "" "Indicates whether or not to create a distributed router. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only. This property can not be used in conjunction with the L3 agent " "ID." msgstr "" "분산 라우터를 작성할 것인지 표시합니다. 참고: Neutron의 기본 정책 설정에서는 " "이 특성의 사용을 관리 사용자로만 제한합니다. 이 특성은 L3 에이전트 ID와 함께 " "사용할 수 없습니다." msgid "" "Indicates whether or not to create a highly available router. NOTE: The " "default policy setting in Neutron restricts usage of this property to " "administrative users only. And now neutron do not support distributed and ha " "at the same time." msgstr "" "고가용성 라우터를 작성할 것인지 표시합니다. 참고: Neutron의 기본 정책 설정에" "서는 이 특성의 사용을 관리 사용자로만 제한합니다. 이제 neutron은 분산과 ha를 " "동시에 지원하지 않습니다." msgid "Indicates whether this firewall rule is enabled or not." msgstr "이 방화벽 규칙이 사용되는지 여부를 표시합니다." msgid "Information used to configure the bucket as a static website." msgstr "정적 웹 사이트로 버킷을 구성하도록 사용된 정보입니다." msgid "Initiator state in lowercase for the ipsec site connection." msgstr "ipsec 사이트 연결에 대한 개시자 상태(소문자)입니다." #, python-format msgid "Input in signal data must be a map, find a %s" msgstr "신호 데이터의 입력은 맵이어야 함, %s 찾기" msgid "Input values for the workflow." msgstr "워크플로우의 입력 값입니다." msgid "Input values to apply to the software configuration on this server." msgstr "이 서버에서 소프트웨어 구성에 적용할 입력 값입니다." msgid "Instance ID to associate with EIP specified by EIP property." msgstr "EIP 특성으로 지정된 EIP와 연관시킬 인스턴스 ID입니다." msgid "Instance ID to associate with EIP." msgstr "EIP와 연관시킬 인스턴스 ID입니다."
msgid "Instance connection to CFN/CW API validate certs if SSL is used." msgstr "" "SSL을 사용하는 경우 CFN/CW API 유효성 검증 인증서에 대한 인스턴스 연결입니다." msgid "Instance connection to CFN/CW API via https." msgstr "https를 통한 CFN/CW API에 대한 인스턴스 연결입니다." #, python-format msgid "Instance is not ACTIVE (was: %s)" msgstr "인스턴스가 활성 상태가 아님(was: %s)" #, python-format msgid "" "Instance metadata must not contain greater than %s entries. This is the " "maximum number allowed by your service provider" msgstr "" "인스턴스 메타데이터는 %s개를 초과하는 항목을 포함해서는 안됩니다. 이는 서비" "스 제공자가 허용하는 최대 수입니다." msgid "Interface type of keystone service endpoint." msgstr "keystone 서비스 엔드포인트의 인터페이스 타입입니다." msgid "Internet protocol version." msgstr "인터넷 프로토콜 버전입니다." #, python-format msgid "Invalid %s, expected a mapping" msgstr "올바르지 않은 %s, 맵핑이 예상됨" #, python-format msgid "Invalid CRON expression: %s" msgstr "올바르지 않은 CRON 식: %s" #, python-format msgid "Invalid Parameter type \"%s\"" msgstr "올바르지 않은 매개변수 유형 \"%s\"" #, python-format msgid "Invalid Property %s" msgstr "올바르지 않은 특성 %s" msgid "Invalid Stack address" msgstr "스택 주소가 올바르지 않음" msgid "Invalid Template URL" msgstr "올바르지 않은 템플리트 URL" #, python-format msgid "Invalid URL scheme %s" msgstr "올바르지 않은 URL 스키마 %s" #, python-format msgid "Invalid UUID version (%d)" msgstr "올바르지 않은 UUID 버전(%d)" #, python-format msgid "Invalid action %s" msgstr "올바르지 않은 조치 %s" #, python-format msgid "Invalid action %s specified" msgstr "올바르지 않은 조치 %s이(가) 지정됨" #, python-format msgid "Invalid adopt data: %s" msgstr "올바르지 않은 채택 데이터: %s" #, python-format msgid "Invalid cloud_backend setting in heat.conf detected - %s" msgstr "heat.conf에서 올바르지 않은 cloud_backend 설정 발견 - %s" #, python-format msgid "Invalid codes in ignore_errors : %s" msgstr "ignore_errors의 올바르지 않은 코드: %s" #, python-format msgid "Invalid content type %(content_type)s" msgstr "올바르지 않은 컨텐츠 유형 %(content_type)s" #, python-format msgid "Invalid default %(default)s (%(exc)s)" msgstr "올바르지 않은 기본값 %(default)s(%(exc)s)" #, python-format msgid "Invalid deletion policy \"%s\"" msgstr "올바르지 않은 삭제 정책 \"%s\"" #, python-format msgid "Invalid filter parameters %s" msgstr "올바르지 않은 필터 매개변수 %s" #, python-format msgid "Invalid hook type \"%(hook)s\" for %(resource)s" msgstr "%(resource)s의 올바르지 않은 후크 이름 \"%(hook)s\"" #, python-format msgid "" "Invalid hook type \"%(value)s\" for resource breakpoint, acceptable hook " "types are: %(types)s" msgstr "" "자원 중단점에 올바르지 않은 후크 타입 \"%(value)s\", 허용되는 후크 타입: " "%(types)s" #, python-format msgid "Invalid key %s" msgstr "올바르지 않은 키 %s" #, python-format msgid "Invalid key '%(key)s' for %(entity)s" msgstr "%(entity)s에 대한 올바르지 않은 키 '%(key)s'" #, python-format msgid "Invalid keys in resource mark unhealthy %s" msgstr "자원의 올바르지 않은 키를 상태 비양호 %s(으)로 표시" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "디스크와 컨테이너 형식의 조합이 올바르지 않습니다. 디스크나 컨테이너 형식을 " "'aki', 'ari', 또는 'ami' 중 하나로 설정할 경우 컨테이너와 디스크형식이 일치해" "야 합니다." #, python-format msgid "Invalid parameter constraints for parameter %s, expected a list" msgstr "%s 매개변수에 올바르지 않은 매개변수 제한조건. 
목록이 예상됨" #, python-format msgid "" "Invalid restricted_action type \"%(value)s\" for resource, acceptable " "restricted_action types are: %(types)s" msgstr "" "자원에 올바르지 않은 restricted_action 타입 \"%(value)s\", 허용 가능한 " "restricted_action 타입: %(types)s" #, python-format msgid "" "Invalid stack name %s must contain only alphanumeric or \"_-.\" characters, " "must start with alpha and must be 255 characters or less." msgstr "" "올바르지 않은 스택 이름 %s에는 영숫자 또는 \"_-.\" 문자만 포함되어야 하고, 알" "파벳으로 시작해야 하며 255자 이하여야 합니다." #, python-format msgid "Invalid stack name %s, must be a string" msgstr "올바르지 않은 스택 이름 %s, 문자열이어야 함" #, python-format msgid "Invalid status %s" msgstr "올바르지 않은 상태 %s" #, python-format msgid "Invalid support status and should be one of %s" msgstr "올바르지 않은 지원 상태로, %s 중 하나여야 함" #, python-format msgid "Invalid tag, \"%s\" contains a comma" msgstr "올바르지 않은 태그, \"%s\"에 쉼표가 포함됨" #, python-format msgid "Invalid tag, \"%s\" is longer than 80 characters" msgstr "올바르지 않은 태그 \"%s\"이(가) 80자보다 깁니다." #, python-format msgid "Invalid tag, \"%s\" is not a string" msgstr "올바르지 않은 태그, \"%s\"이(가) 문자열이 아님" #, python-format msgid "Invalid tags, not a list: %s" msgstr "올바르지 않은 태그, 목록이 아님: %s" #, python-format msgid "Invalid template type \"%(value)s\", valid types are: cfn, hot." msgstr "올바르지 않은 템플릿 유형 \"%(value)s\", 올바른 유형: cfn, hot." #, python-format msgid "Invalid timeout value %s" msgstr "올바르지 않은 제한시간 값 %s" #, python-format msgid "Invalid timezone: %s" msgstr "올바르지 않은 시간대: %s" #, python-format msgid "Invalid type (%s)" msgstr "올바르지 않은 유형(%s)" msgid "Ip allocation pools and their ranges." msgstr "Ip 할당 풀 및 해당 범위입니다." msgid "Ip of the subnet's gateway." msgstr "서브넷의 게이트웨이 Ip입니다." msgid "Ip version for the subnet." msgstr "서브넷의 Ip 버전입니다." msgid "Ip_version for this firewall rule." msgstr "이 방화벽 규칙의 Ip_version입니다." msgid "It defines an executor to which task action should be sent to." msgstr "태스크 작업을 수신해야 하는 실행 프로그램을 정의합니다." #, python-format msgid "Items to join must be string, map or list not %s" msgstr "결합할 항목은 %s이(가) 아니라 문자열, 맵 또는 목록이어야 함" #, python-format msgid "Items to join must be string, map or list. %s failed json serialization" msgstr "결합할 항목은 문자열, 맵 또는 목록이어야 합니다. %s에서 json 직렬화 실패" #, python-format msgid "Items to join must be strings not %s" msgstr "결합할 항목은 %s이(가) 아니라 문자열이어야 함" #, python-format msgid "" "JSON body size (%(len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "JSON 본문 크기(%(len)s 바이트)가 허용되는 최대 크기(%(limit)s 바이트)를 초과" "합니다." msgid "JSON data that was uploaded via the SwiftSignalHandle." msgstr "SwiftSignalHandle을 통해 업로드된 JSON 데이터입니다." msgid "" "JSON serialized map that includes the endpoint, token and/or other " "attributes the client must use for signalling this handle. The contents of " "this map depend on the type of signal selected in the signal_transport " "property." msgstr "" "클라이언트가 이 핸들의 신호를 보내는 데 사용해야 하는 엔드포인트, 토큰 및/또" "는 기타 속성을 포함하는 JSON 직렬화된 맵입니다. 이 맵의 컨텐츠는 " "signal_transport 특성에서 선택한 신호 타입에 따라 달라집니다." msgid "" "JSON string containing data associated with wait condition signals sent to " "the handle." msgstr "" "핸들에 전송된 대기 조건 신호와 연관된 데이터를 포함하는 JSON 문자열입니다." msgid "" "Key used to encrypt authentication info in the database. Length of this key " "must be 32 characters." msgstr "" "데이터베이스에서 인증 정보를 암호화하는 데 사용하는 키입니다. 이 키의 길이는 " "32자여야 합니다." msgid "Key/Value pairs to extend the capabilities of the flavor." msgstr "Flavor의 기능을 확장하는 키/값 쌍입니다." msgid "Key/value pairs associated with the volume in raw dict form." msgstr "원시 사전 양식의 볼륨과 연관된 키/값 쌍입니다." 
msgid "Key/value pairs associated with the volume." msgstr "볼륨과 연관된 키/값 쌍입니다." msgid "Key/value pairs to associate with the volume." msgstr "볼륨과 연관시킬 키/값 쌍입니다." msgid "Keypair added to instances to make them accessible for user." msgstr "키 쌍은 사용자들이 액세스할 수 있도록 인스턴스에 추가됩니다." msgid "Keypair secret key." msgstr "키 쌍 비밀 키입니다." msgid "" "Keystone domain ID which contains heat template-defined users. If this " "option is set, stack_user_domain_name option will be ignored." msgstr "" "히트 템플리트 정의 사용자가 포함된 키스톤 도메인 ID입니다. 이 옵션을 설정한 " "경우, stack_user_domain_name 옵션이 무시됩니다." msgid "" "Keystone domain name which contains heat template-defined users. If " "`stack_user_domain_id` option is set, this option is ignored." msgstr "" "히트 템플리트 정의 사용자가 포함된 키스톤 도메인 이름입니다. " "`stack_user_domain_id` 옵션을 설정하면 이 옵션이 무시됩니다." msgid "Keystone domain." msgstr "Keystone 도메인입니다." #, python-format msgid "" "Keystone has more than one service with same name %(service)s. Please use " "service id instead of name" msgstr "" "Keystone에는 이름이 같은 서비스가 하나 이상 있습니다(%(service)s). 이름 대신 " "서비스 id를 사용하십시오." msgid "Keystone password for stack_domain_admin user." msgstr "stack_domain_admin 사용자의 키스톤 비밀번호입니다." msgid "Keystone project." msgstr "Keystone 프로젝트입니다." msgid "Keystone role for heat template-defined users." msgstr "히트 템플리트 정의 사용자의 키스톤 역할입니다." msgid "Keystone role." msgstr "Keystone 역할." msgid "Keystone user group." msgstr "Keystone 사용자 그룹입니다." msgid "Keystone user groups." msgstr "Keystone 사용자 그룹입니다." msgid "Keystone user is enabled or disabled." msgstr "Keystone 사용자를 사용하거나 사용하지 않게 설정합니다." msgid "" "Keystone username, a user with roles sufficient to manage users and projects " "in the stack_user_domain." msgstr "" "키스톤 사용자 이름, stack_user_domain에서 사용자와 프로젝트를 관리할 수 있는 " "역할이 있는 사용자입니다." msgid "L2 segmentation strategy on the external side of the network gateway." msgstr "네트워크 게이트웨이의 외부 측에 있는 L2 분석 방식 전략입니다." msgid "LBaaS provider to implement this load balancer instance." msgstr "이 로드 밸런서 인스턴스를 구현하는 LBaaS 프로바이더입니다." msgid "Length of OS_PASSWORD after encryption exceeds Heat limit (255 chars)" msgstr "암호화 후 OS_PASSWORD 길이가 히트 한계(255자)를 초과함" msgid "Length of the string to generate." msgstr "생성할 문자열의 길이입니다." msgid "" "Length property cannot be smaller than combined character class and " "character sequence minimums" msgstr "" "길이 특성은 문자 클래스와 문자 순서의 최소값을 합한 값보다 작을 수 없음" msgid "Level of access that need to be provided for guest." msgstr "게스트에 제공해야 하는 액세스 레벨입니다." msgid "" "Lifecycle actions to which the configuration applies. The string values " "provided for this property can include the standard resource actions CREATE, " "DELETE, UPDATE, SUSPEND and RESUME supported by Heat." msgstr "" "구성이 적용되는 라이프사이클 조치입니다. 이 특성에 제공된 문자열 값은 히트에 " "의해 지원되는 표준 자원 조치(CREATE, DELETE, UPDATE, SUSPEND 및 RESUME)를 포" "함할 수 있습니다." msgid "List of LoadBalancer resources." msgstr "LoadBalancer 자원의 목록입니다." msgid "List of Security Groups assigned on current LB." msgstr "현재 LB에 할당된 보안 그룹 목록입니다." msgid "List of TLS container references for SNI." msgstr "SNI의 TLS 컨테이너 참조 목록입니다." msgid "List of database instances." msgstr "데이터베이스 인스턴스의 목록입니다." msgid "List of databases to be created on DB instance creation." msgstr "DB 인스턴스 작성 시 작성할 데이터베이스 목록입니다." msgid "List of directories to search for plug-ins." msgstr "플러그인을 검색할 디렉토리 목록입니다." msgid "List of dns nameservers." msgstr "dns 이름 서버의 목록입니다." msgid "List of firewall rules in this firewall policy." msgstr "이 방화벽 정책의 방화벽 규칙 목록입니다." msgid "List of health monitors associated with the pool." msgstr "풀과 연관된 상태 모니터의 목록입니다." 
msgid "List of hosts to join aggregate." msgstr "집합을 결합할 호스트 목록입니다." msgid "List of manila shares to be mounted." msgstr "마운트할 manila 공유 목록입니다." msgid "List of network interfaces to create on instance." msgstr "인스턴스에 작성할 네트워크 인터페이스의 목록입니다." msgid "List of processes to enable anti-affinity for." msgstr "안티 선호도를 사용으로 설정하는 프로세스의 목록입니다." msgid "List of processes to run on every node." msgstr "모든 노드에 실행할 프로세스의 목록입니다." msgid "List of role assignments." msgstr "역할 할당 목록입니다." msgid "List of security group IDs associated with this interface." msgstr "이 인터페이스와 연관된 보안 그룹 ID의 목록입니다." msgid "List of security group egress rules." msgstr "보안 그룹 출구 규칙의 목록입니다." msgid "List of security group ingress rules." msgstr "보안 그룹 입구 규칙의 목록입니다." msgid "" "List of security group names or IDs to assign to this Node Group template." msgstr "이 노드 그룹 템플리트에 지정할 보안 그룹 이름 또는 ID 목록입니다." msgid "" "List of security group names or IDs. Cannot be used if neutron ports are " "associated with this server; assign security groups to the ports instead." msgstr "" "보안 그룹 이름 또는 ID의 목록입니다. neutron 포트가 이 서버와 연관되어 있는 " "경우 사용할 수 없습니다. 대신 포트에 보안 그룹을 지정하십시오." msgid "List of security group rules." msgstr "보안 그룹 규칙의 목록입니다." msgid "List of subnet prefixes to assign." msgstr "할당할 서브넷 접두부 목록입니다." msgid "List of tags associated with this interface." msgstr "이 인터페이스와 연관된 태그의 목록입니다." msgid "List of tags to attach to the instance." msgstr "인스턴스에 연결할 태그의 목록입니다." msgid "List of tags to attach to this resource." msgstr "이 자원에 연결할 태그의 목록입니다." msgid "List of tags to be attached to this resource." msgstr "이 자원에 연결할 태그의 목록입니다." msgid "" "List of tasks which should be executed before this task. Used only in " "reverse workflows." msgstr "" "이 작업 전에 실행되어야 하는 작업 목록입니다. 역방향 워크플로우에서만 사용됩" "니다." msgid "" "List of tasks which will run after the task has completed regardless of " "whether it is successful or not." msgstr "성공 여부에 상관없이 작업이 완료된 후 실행될 작업 목록입니다." msgid "List of tasks which will run after the task has completed successfully." msgstr "작업이 성공적으로 완료된 후 실행될 작업 목록입니다." msgid "" "List of tasks which will run after the task has completed with an error." msgstr "작업이 오류로 인해 완료된 후 실행될 작업 목록입니다." msgid "List of users to be created on DB instance creation." msgstr "DB 인스턴스 작성 시 작성할 사용자의 목록입니다." msgid "" "List of workflows' executions, each of them is a dictionary with information " "about execution. Each dictionary returns values for next keys: id, " "workflow_name, created_at, updated_at, state for current execution state, " "input, output." msgstr "" "워크플로우 실행 목록, 각각 실행에 대한 정보가 있는 사전입니다. 각 사전은 다" "음 키의 값을 리턴합니다. id, workflow_name, created_at, updated_at, 현재 실" "행 상태, 입력, 출력." msgid "Listener associated with this pool." msgstr "이 풀과 연관된 리스너입니다." msgid "" "Local path on each cluster node on which to mount the share. Defaults to '/" "mnt/{share_id}'." msgstr "" "공유를 마운트할 각 클러스터 노드의 로컬 경로입니다. 기본값으로 '/mnt/" "{share_id}'가 지정됩니다." msgid "Location of the SSL certificate file to use for SSL mode." msgstr "SSL 모드에 사용할 SSL 인증서 파일의 위치입니다." msgid "Location of the SSL key file to use for enabling SSL mode." msgstr "SSL 모드 설정에 사용할 SSL 키 파일의 위치입니다." msgid "MAC address of the port." msgstr "포트의 MAC 주소입니다." msgid "MAC address to allow through this port." msgstr "이 포트를 통해 허용할 MAC 주소입니다." msgid "Map between role with either project or domain." msgstr "프로젝트나 도메인이 있는 역할 사이의 맵핑입니다." msgid "" "Map containing options specific to the configuration management tool used by " "this resource." msgstr "이 자원에서 사용되는 구성 관리 도구에 특정한 옵션을 포함하는 맵입니다." 
msgid "" "Map representing the cloud-config data structure which will be formatted as " "YAML." msgstr "YAML로 형식화되는 클라우드 구성 데이터 구조를 나타내는 맵입니다." msgid "" "Map representing the configuration data structure which will be serialized " "to JSON format." msgstr "JSON 형식으로 직렬화될 구성 데이터 구조를 나타내는 맵입니다." msgid "Max bandwidth in kbps." msgstr "최대 대역폭(kbps)." msgid "Max burst bandwidth in kbps." msgstr "최대 버스트 대역폭(kbps)." msgid "Max size of the cluster." msgstr "클러스터의 최대 크기입니다." #, python-format msgid "Maximum %s is 1 hour." msgstr "최대 %s은(는) 1시간입니다." msgid "Maximum depth allowed when using nested stacks." msgstr "중첩 스택 사용 시 허용되는 최대 깊이입니다." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs)." msgstr "" "허용할 메시지 헤더의 최대 행 크기입니다. 더 큰 토큰 사용 시 max_header_line" "을 늘려야 할 수 있습니다(일반적으로 큰 서비스 카탈로그가 있는 키스톤 v3 API" "에 의해 생성됨)." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs.)" msgstr "" "허용할 메시지 헤더의 최대 행 크기입니다. 더 큰 토큰 사용 시 max_header_line" "을 늘려야 할 수 있습니다(일반적으로 큰 서비스 카탈로그가 있는 키스톤 v3 API" "에 의해 생성됨)." msgid "Maximum number of instances in the group." msgstr "그룹의 최대 인스턴스 수입니다." msgid "Maximum number of resources in the cluster. -1 means unlimited." msgstr "클러스터의 최대 자원 수입니다. -1은 무제한을 나타냅니다." msgid "Maximum number of resources in the group." msgstr "그룹의 최대 자원 수입니다." msgid "Maximum number of stacks any one tenant may have active at one time." msgstr "하나의 테넌트가 한 번에 활성으로 가질 수 있는 최대 스택 수입니다." msgid "Maximum prefix size that can be allocated from the subnet pool." msgstr "서브넷 풀에서 할당할 수 있는 최대 접두부 크기입니다." msgid "" "Maximum raw byte size of JSON request body. Should be larger than " "max_template_size." msgstr "" "JSON 요청 본문의 최대 원시 바이트 크기입니다. max_template_size보다 커야 합니" "다." msgid "Maximum raw byte size of any template." msgstr "템플리트의 최대 원시 바이트 크기입니다." msgid "Maximum resources allowed per top-level stack. -1 stands for unlimited." msgstr "최상위 레벨 스택에 허용된 최대 자원입니다. -1은 무제한을 나타냅니다." msgid "Maximum resources per stack exceeded." msgstr "스택당 최대 자원 수를 초과했습니다." msgid "" "Maximum transmission unit size (in bytes) for the ipsec site connection." msgstr "ipsec 사이트 연결에 대한 최대 전송 단위 크기(바이트)입니다." msgid "Member list items must be strings" msgstr "멤버 목록 항목은 문자열이어야 함" msgid "Member list must be a list" msgstr "멤버 목록은 목록이어야 함" msgid "Members associated with this pool." msgstr "이 풀과 연관된 멤버입니다." msgid "Memory in MB for the flavor." msgstr "Flavor의 메모리(MB)입니다." #, python-format msgid "Message: %(message)s, Code: %(code)s" msgstr "메시지: %(message)s, 코드: %(code)s" msgid "Metadata format invalid" msgstr "올바르지 않은 메타데이터 형식" msgid "Metadata key-values defined for cluster." msgstr "클러스터에 대해 정의된 메타데이터 키-값입니다." msgid "Metadata key-values defined for node." msgstr "노드에 정의된 메타데이터 키-값입니다." msgid "Metadata key-values defined for profile." msgstr "프로파일에 대해 정의된 메타데이터 키-값입니다." msgid "Metadata key-values defined for share." msgstr "공유를 위해 정의된 메타데이터 키-값입니다." msgid "Meter name watched by the alarm." msgstr "알람으로 감시하는 메트릭 이름입니다." msgid "" "Meter should match this resource metadata (key=value) additionally to the " "meter_name." msgstr "미터는 이 자원 메타데이터(키=값) 및 meter_name과 일치해야 합니다." msgid "Meter statistic to evaluate." msgstr "평가할 메트릭 통계입니다." msgid "Method of implementation of session persistence feature." msgstr "세션 지속성 기능의 구현 메소드입니다." 
msgid "Metric name watched by the alarm." msgstr "알람으로 감시하는 메트릭 이름입니다." msgid "Min size of the cluster." msgstr "클러스터의 최소 크기입니다." msgid "MinSize can not be greater than MaxSize" msgstr "MinSize는 MaxSize보다 클 수 없음" msgid "Minimum number of instances in the group." msgstr "그룹의 최소 인스턴스 수입니다." msgid "Minimum number of resources in the cluster." msgstr "클러스터의 최소 자원 수입니다." msgid "Minimum number of resources in the group." msgstr "그룹의 최소 자원 수입니다." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "PercentChangeInCapacity for the AdjustmentType property." msgstr "" "AutoScaling 그룹을 확장하거나 축소할 때 추가하거나 제거할 최소 자원 수입니" "다. AdjustmentType 특성의 PercentChangeInCapacity를 지정할 때만 사용할 수 있" "습니다." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "percent_change_in_capacity for the adjustment_type property." msgstr "" "AutoScaling 그룹을 확장하거나 축소할 때 추가하거나 제거할 최소 자원 수입니" "다. adjustment_type 특성의 percent_change_in_capacity를 지정할 때만 사용할 " "수 있습니다." #, python-format msgid "Missing mandatory (%s) key from mark unhealthy request" msgstr "비양호로 표시 요청에서 필수(%s) 키 누락" #, python-format msgid "Missing parameter type for parameter: %s" msgstr "%s 매개변수의 매개변수 유형이 누락됨" #, python-format msgid "Missing required credential: %(required)s" msgstr "필수 신임 정보 누락: %(required)s" msgid "Mistral resource validation error" msgstr "Mistral 자원 유효성 검증 오류" msgid "Monasca notification." msgstr "Monasca 알림입니다." msgid "Multiple actions specified" msgstr "여러 조치가 지정됨" #, python-format msgid "Multiple physical resources were found with name (%(name)s)." msgstr "이름(%(name)s)이 있는 다중 실제 자원을 찾았습니다." #, python-format msgid "Multiple routers found with name %s" msgstr "이름이 %s인 여러 라우터를 찾음" msgid "Must specify 'InstanceId' if you specify 'EIP'." msgstr "'EIP'를 지정한 경우 'InstanceId'를 지정해야 합니다." msgid "Name for the Sahara Cluster Template." msgstr "Sahara 클러스터 템플리트의 이름입니다." msgid "Name for the Sahara Node Group Template." msgstr "Sahara 노드 그룹 템플리트의 이름입니다." msgid "Name for the aggregate." msgstr "집합의 이름입니다." msgid "Name for the availability zone." msgstr "가용 구역의 이름입니다." msgid "" "Name for the container. If not specified, a unique name will be generated." msgstr "컨테이너의 이름입니다. 지정되지 않은 경우 고유 이름이 생성됩니다." msgid "Name for the firewall policy." msgstr "방화벽 정책의 이름입니다." msgid "Name for the firewall rule." msgstr "방화벽 규칙의 이름입니다." msgid "Name for the firewall." msgstr "방화벽의 이름입니다." msgid "Name for the ike policy." msgstr "ike 정책의 이름입니다." msgid "" "Name for the image. The name of an image is not unique to a Image Service " "node." msgstr "" "이미지의 이름입니다. 이미지의 이름이 이미지 서비스 노드에 고유하지 않습니다." msgid "Name for the ipsec policy." msgstr "ipsec 정책의 이름입니다." msgid "Name for the ipsec site connection." msgstr "ipsec 사이트 연결의 이름입니다." msgid "Name for the time constraint." msgstr "시간 제한 조건의 이름입니다." msgid "Name for the vpn service." msgstr "vpn 서비스의 이름입니다." msgid "" "Name of attribute to compare. Names of the form metadata.user_metadata.X or " "metadata.metering.X are equivalent to what you can address through " "matching_metadata; the former for Nova meters, the latter for all others. To " "see the attributes of your Samples, use `ceilometer --debug sample-list`." msgstr "" "비교할 속성의 이름입니다. 양식 metadata.user_metadata.X 또는 metadata." "metering.X의 이름은 matching_metadata를 통해 처리할 수 있는 것과 동일합니다. " "전자는 Nova 미터에 후자는 기타 모든 것에 사용합니다. 샘플의 속성을 확인하려" "면 `ceilometer --debug sample-list`를 확인하십시오. 
" msgid "Name of key to use for substituting inputs during deployment." msgstr "배포 중에 입력을 대체하는 데 사용할 키 이름입니다." msgid "Name of keypair to inject into the server." msgstr "서버에 삽입할 키 쌍의 이름입니다." msgid "Name of keystone endpoint." msgstr "keystone 엔드포인트의 이름입니다." msgid "Name of keystone group." msgstr "keystone 그룹의 이름입니다." msgid "Name of keystone project." msgstr "keystone 프로젝트의 이름입니다." msgid "Name of keystone role." msgstr "keystone 역할의 이름입니다." msgid "Name of keystone service." msgstr "keystone 서비스의 이름입니다." msgid "Name of keystone user." msgstr "keystone 사용자의 이름입니다." msgid "Name of registered datastore type." msgstr "등록된 데이터 저장소 유형의 이름입니다." msgid "Name of the DB instance to create." msgstr "작성할 DB 인스턴스의 이름입니다." msgid "Name of the Node group." msgstr "노드 그룹의 이름입니다." msgid "" "Name of the action associated with the task. Either action or workflow may " "be defined in the task." msgstr "" "태스크과 연관된 작업의 이름입니다. 작업이나 워크플로우를 태스크에 정의할 수 " "있습니다." msgid "Name of the administrative user to use on the server." msgstr "서버에서 사용할 관리자의 이름입니다." msgid "Name of the alarm. By default, physical resource name is used." msgstr "알람의 이름입니다. 기본적으로 실제 자원 이름이 사용됩니다." msgid "Name of the availability zone for DB instance." msgstr "DB 인스턴스의 가용성 구역 이름입니다." msgid "Name of the availability zone for server placement." msgstr "서버 배치의 가용성 구역 이름입니다." msgid "Name of the cluster to create." msgstr "작성할 클러스터의 이름입니다." msgid "Name of the cluster. By default, physical resource name is used." msgstr "클러스터의 이름입니다. 기본적으로 실제 자원 이름이 사용됩니다." msgid "Name of the cookie, required if type is APP_COOKIE." msgstr "유형이 APP_COOKIE인 경우 필요한 쿠키의 이름입니다." msgid "Name of the cron trigger." msgstr "cron 트리거의 이름입니다." msgid "Name of the current action being deployed" msgstr "배치 중인 현재 조치의 이름" msgid "Name of the data source." msgstr "데이터 소스의 이름입니다." msgid "" "Name of the derived config associated with this deployment. This is used to " "apply a sort order to the list of configurations currently deployed to a " "server." msgstr "" "이 배치와 연관된 파생된 구성의 이름입니다. 이는 현재 서버에 배치된 구성의 목" "록에 정렬 순서를 적용하는 데 사용됩니다." msgid "" "Name of the engine node. This can be an opaque identifier. It is not " "necessarily a hostname, FQDN, or IP address." msgstr "" "엔진 노드의 이름입니다. 이 이름은 오파크 ID일 수 있습니다. 호스트 이름, FQDN " "또는 IP 주소가 아닐 수도 있습니다." msgid "Name of the input." msgstr "입력의 이름입니다." msgid "Name of the job binary." msgstr "작업 바이너리의 이름입니다." msgid "Name of the metering label." msgstr "측정 레이블의 이름입니다." msgid "Name of the network owning the port." msgstr "포트를 소유하는 네트워크의 이름입니다." msgid "" "Name of the network owning the port. The value is typically network:" "floatingip or network:router_interface or network:dhcp." msgstr "" "포트를 소유하는 네트워크의 이름입니다. 값은 일반적으로 network:floatingip 또" "는 network:router_interface 또는 network:dhcp입니다. " msgid "Name of the notification. By default, physical resource name is used." msgstr "알림의 이름입니다. 기본적으로 실제 자원 이름이 사용됩니다." msgid "Name of the output." msgstr "출력의 이름입니다." msgid "Name of the pool." msgstr "풀의 이름입니다." msgid "Name of the queue instance to create." msgstr "작성할 큐 인스턴스의 이름입니다." msgid "" "Name of the registered datastore version. It must exist for provided " "datastore type. Defaults to using single active version. If several active " "versions exist for provided datastore type, explicit value for this " "parameter must be specified." msgstr "" "등록된 데이터 저장소 버전의 이름입니다. 제공된 데이터 저장소 유형의 등록된 버" "전이 있어야 합니다. 기본적으로 단일 활성 버전을 사용합니다. 제공된 데이터 저" "장소 유형의 여러 활성 버전이 있는 경우 이 매개변수에 대한 명시적 값을 지정해" "야 합니다." msgid "Name of the secret." msgstr "시크릿의 이름입니다." 
msgid "Name of the senlin node. By default, physical resource name is used." msgstr "senlin 노드의 이름입니다. 기본적으로 실제 자원 이름이 사용됩니다." msgid "Name of the senlin policy. By default, physical resource name is used." msgstr "senlin 정책의 이름입니다. 기본적으로 실제 자원 이름이 사용됩니다." msgid "Name of the senlin profile. By default, physical resource name is used." msgstr "senlin 프로파일의 이름입니다. 기본적으로 실제 자원 이름이 사용됩니다." msgid "" "Name of the senlin receiver. By default, physical resource name is used." msgstr "senlin 수신기의 이름입니다. 기본적으로 실제 자원 이름이 사용됩니다." msgid "Name of the server." msgstr "서버의 이름입니다." msgid "Name of the share network." msgstr "공유 네트워크의 이름입니다." msgid "Name of the share type." msgstr "공유 타입의 이름입니다." msgid "Name of the stack." msgstr "스택 이름입니다." msgid "Name of the subnet pool." msgstr "서브넷 주소의 이름입니다." msgid "Name of the vip." msgstr "vip의 이름입니다." msgid "Name of the volume type." msgstr "볼륨 타입의 이름입니다." msgid "Name of the volume." msgstr "볼륨의 이름입니다." msgid "" "Name of the workflow associated with the task. Can be defined by intrinsic " "function get_resource or by name of the referenced workflow, i.e. " "{ workflow: wf_name } or { workflow: { get_resource: wf_name }}. Either " "action or workflow may be defined in the task." msgstr "" "태스크와 연관된 작업플로우의 이름입니다. 고유 함수 get_resource 또는 참조된 " "워크플로우의 이름(예: { workflow: wf_name } 또는 { workflow: { get_resource: " "wf_name }})으로 정의할 수 있습니다. 작업 또는 워크플로우를 태스크에 정의할 " "수 있습니다." msgid "Name of this Load Balancer." msgstr "이 로드 밸런서의 이름입니다." msgid "Name of this deployment resource in the stack" msgstr "스택에 있는 이 배치 자원의 이름" msgid "Name of this listener." msgstr "이 리스너의 이름입니다." msgid "Name of this pool." msgstr "이 풀의 이름입니다." msgid "Name or ID Nova flavor for the nodes." msgstr "노드에 대한 이름 또는 ID Nova 플레이버입니다." msgid "Name or ID of network to create a port on." msgstr "포트를 작성할 네트워크의 이름 또는 ID입니다." msgid "Name or ID of senlin profile to create this node." msgstr "이 노드를 작성할 senlin 프로파일의 이름 또는 ID입니다." msgid "" "Name or ID of shared file system snapshot that will be restored and created " "as a new share." msgstr "" "새 공유로 작성하여 복원될 공유 파일 시스템 스냅샷의 이름 또는 ID입니다." msgid "" "Name or ID of shared filesystem type. Types defines some share filesystem " "profiles that will be used for share creation." msgstr "" "공유 파일 시스템 타입의 이름 또는 ID입니다. 타입은 공유 작성에 사용할 공유 파" "일 시스템 프로파일을 정의합니다." msgid "Name or ID of shared network defined for shared filesystem." msgstr "공유 파일 시스템에 정의된 공유 네트워크의 이름 또는 ID입니다." msgid "Name or ID of target cluster." msgstr "대상 클러스터의 이름 또는 ID입니다." msgid "Name or ID of the load balancing pool." msgstr "로드 밸런싱 풀의 이름 또는 ID입니다." msgid "Name or Id of keystone region." msgstr "keystone 지역의 이름 또는 Id입니다." msgid "Name or Id of keystone service." msgstr "keystone 서비스의 이름 또는 Id입니다." #, python-format msgid "" "Name or UUID of Neutron port to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "이 NIC에 접속할 Neutron 포트의 이름 또는 UUID입니다. %(port)s 또는 %(net)s을" "(를) 지정해야 합니다." msgid "Name or UUID of network." msgstr "네트워크의 이름 또는 UUID입니다." msgid "" "Name or UUID of the Neutron floating IP network or name of the Nova floating " "ip pool to use. Should not be provided when used with Nova-network that auto-" "assign floating IPs." msgstr "" "Neutron 부동 IP 네트워크의 이름 또는 UUID이거나 사용할 Nova 부동 IP 풀의 이름" "입니다. 부동 IP를 자동 지정하는 Nova 네트워크에서 사용하는 경우에는 제공하지 " "않아야 합니다." msgid "Name or UUID of the image used to boot Hadoop nodes." msgstr "부트 Hadoop 노드에 사용된 이미지의 이름 또는 UUID입니다." #, python-format msgid "" "Name or UUID of the network to attach this NIC to. Either %(port)s or " "%(net)s must be specified." 
msgstr "" "이 NIC에 접속할 네트워크의 이름 또는 UUID입니다. %(port)s 또는 %(net)s을(를) " "지정해야 합니다." msgid "Name or id of keystone domain." msgstr "keystone 도메인의 이름 또는 id입니다." msgid "Name or id of keystone group." msgstr "keystone 그룹의 이름 또는 id입니다." msgid "Name or id of keystone user." msgstr "keystone 사용자의 이름 또는 id입니다." msgid "Name or id of volume type (OS::Cinder::VolumeType)." msgstr "볼륨 타입의 이름 또는 id(OS::Cinder::VolumeType)." msgid "Names of databases that those users can access on instance creation." msgstr "사용자가 인스턴스 작성 시 액세스할 수 있는 데이터베이스의 이름입니다." msgid "" "Namespace to group this software config by when delivered to a server. This " "may imply what configuration tool is going to perform the configuration." msgstr "" "서버에 전달될 때 이 소프트웨어 구성을 그룹화할 네임스페이스입니다. 이는 구성" "을 수행할 구성 도구를 무시합니다." msgid "Need more arguments" msgstr "추가 인수가 필요함" msgid "Negotiation mode for the ike policy." msgstr "ike 정책의 협상 모드입니다." #, python-format msgid "Neither image nor bootable volume is specified for instance %s" msgstr "인스턴스 %s에 대해 이미지 및 부트 가능 볼륨이 지정되지 않음" msgid "Network in CIDR notation." msgstr "CIDR 표기법으로 된 네트워크입니다." msgid "Network interface ID to associate with EIP." msgstr "EIP와 연관시킬 네트워크 인터페이스 ID입니다." msgid "Network interfaces to associate with instance." msgstr "인스턴스와 연관시킬 네트워크 인터페이스입니다." #, python-format msgid "" "Network this port belongs to. If you plan to use current port to assign " "Floating IP, you should specify %(fixed_ips)s with %(subnet)s. Note if this " "changes to a different network update, the port will be replaced." msgstr "" "이 포트가 있는 네트워크. 현재 포트를 사용하여 부동 IP를 지정하려면 " "%(fixed_ips)s을(를) %(subnet)s(으)로 지정해야 합니다. 다른 네트워크 업데이트" "로 변경하면 포트가 대체됩니다." msgid "Network to allocate floating IP from." msgstr "부동 IP를 할당할 네트워크입니다." msgid "Neutron network id." msgstr "Neutron 네트워크 id입니다." msgid "Neutron subnet id." msgstr "Neutron 서브넷 id입니다." msgid "Nexthop IP address." msgstr "Nexthop IP 주소입니다." #, python-format msgid "No %s specified" msgstr "%s이(가) 지정되지 않음" msgid "No Template provided." msgstr "제공된 템플리트가 없습니다. " msgid "No action specified" msgstr "조치가 지정되지 않음" msgid "No constraint expressed" msgstr "제한조건이 표현되지 않음" #, python-format msgid "" "No content found in the \"files\" section for %(fn_name)s path: %(file_key)s" msgstr "" "%(fn_name)s 경로에 대해 \"files\" 섹션에서 컨텐츠를 찾을 수 없음: " "%(file_key)s" #, python-format msgid "No event %s found" msgstr "이벤트 %s을(를) 찾을 수 없음 " #, python-format msgid "No events found for resource %s" msgstr "%s 자원에 대한 이벤트를 찾을 수 없음" msgid "No resource data found" msgstr "자원 데이터를 찾을 수 없음" #, python-format msgid "No stack exists with id \"%s\"" msgstr "ID가 \"%s\"인 스택이 존재하지 않음" msgid "No stack name specified" msgstr "스택 이름이 지정되지 않음" msgid "No template specified" msgstr "템플리트가 지정되지 않음" msgid "No volume service available." msgstr "사용 가능한 볼륨 서비스가 없습니다." msgid "Node groups." msgstr "노드 그룹입니다." msgid "Nodes list in the cluster." msgstr "클러스터의 노드 목록입니다." msgid "Non HA routers can only have one L3 agent." msgstr "비HA 라우터는 하나의 L3 에이전트만 가질 수 있습니다." #, python-format msgid "Non-empty resource type is required for resource \"%s\"" msgstr "자원 \"%s\"에 대해 비어 있지 않은 유형이 필요함" msgid "Not Implemented." msgstr "구현되지 않음" #, python-format msgid "Not allowed - %(dsver)s without %(dstype)s." msgstr "허용되지 않음 - %(dstype)s이(가) 없는 %(dsver)s" msgid "Not found" msgstr "찾을 수 없음" msgid "Not waiting for outputs signal" msgstr "출력 신호를 대기하지 않음" msgid "" "Notional service where encryption is performed For example, front-end. For " "Nova." msgstr "암호화를 수행하는 개념상의 서비스(예: Nova의 경우 front-end)." msgid "Nova instance type (flavor)." 
msgstr "Nova 인스턴스 유형(플레이버)입니다." msgid "Nova network id." msgstr "Nova 네트워크 id입니다." msgid "Number of VCPUs for the flavor." msgstr "Flavor의 VCPU 수입니다." msgid "Number of backlog requests to configure the socket with." msgstr "소켓을 구성하기 위한 백로그 요청 수입니다." msgid "Number of instances in the Node group." msgstr "노드 그룹의 인스턴스 수입니다." msgid "Number of minutes to wait for this stack creation." msgstr "이 스택의 작성을 대기할 시간(분)입니다." msgid "Number of periods to evaluate over." msgstr "평가할 기간의 수입니다." msgid "" "Number of permissible connection failures before changing the member status " "to INACTIVE." msgstr "멤버 상태를 비활성으로 변경하기 전에 허용 가능한 연결 실패 수입니다." msgid "Number of remaining executions." msgstr "남은 실행 수입니다." msgid "Number of seconds for the DPD delay." msgstr "DPD 지연의 시간(초)입니다." msgid "Number of seconds for the DPD timeout." msgstr "DPD 제한시간의 시간(초)입니다." msgid "" "Number of times to check whether an interface has been attached or detached." msgstr "인터페이스가 연결되었는지 연결 해제되었는지 확인하는 횟수." msgid "" "Number of times to retry to bring a resource to a non-error state. Set to 0 " "to disable retries." msgstr "" "자원을 오류가 없는 상태로 만들기 위해 재시도한 횟수입니다. 재시도를 사용하지 " "않으려면 0으로 설정하십시오." msgid "" "Number of times to retry when a client encounters an expected intermittent " "error. Set to 0 to disable retries." msgstr "" "클라이언트에서 예상대로 간헐적 오류가 발생할 때 재시도하는 횟수. 재시도를 사" "용하지 않으려면 0을 설정하십시오." msgid "Number of workers for Heat service." msgstr "히트 서비스의 작업자 수입니다." msgid "" "Number of workers for Heat service. Default value 0 means, that service will " "start number of workers equal number of cores on server." msgstr "" "Heat 서비스의 작업자 수입니다. 기본값인 0을 지정하면 서비스에서 서버의 코어 " "수와 동일한 작업자 수를 시작합니다." msgid "Number value for delay during resolve constraint." msgstr "제한조건 분석 중에 지연될 값(숫자)입니다." msgid "Number value for timeout during resolving output value." msgstr "출력 값 분석 중의 제한시간 값(숫자)입니다." #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "%(action)s 오브젝트 조치가 실패함. 이유: %(reason)s" msgid "" "On update, enables heat to collect existing resource properties from reality " "and converge to updated template." msgstr "" "업데이트 시 heat에서 실제 기존 자원 특성을 수집하여 업데이트된 템플리트로 통" "합할 수 있게 합니다." msgid "One of predefined health monitor types." msgstr "사전 정의된 상태 모니터 유형 중 하나입니다." msgid "One or more listeners for this load balancer." msgstr "이 로드 밸런서에 대한 하나 이상의 리스너입니다." msgid "Only ISO 8601 duration format of the form PT#H#M#S is supported." msgstr "PT#H#M#S 양식의 ISO 8601 지속 기간 형식만 지원됩니다." msgid "Only Templates with an extension of .yaml or .template are supported" msgstr "확장자가 .yaml 또는 .template인 템플리트만 지원됨" #, python-format msgid "Only integer is acceptable by '%(name)s'." msgstr "'%(name)s'에서는 정수만 허용됩니다. " #, python-format msgid "Only non-zero integer is acceptable by '%(name)s'." msgstr "'%(name)s'에서는 0이 아닌 정수만 허용됩니다. " msgid "Operator used to compare specified statistic with threshold." msgstr "지정된 통계를 임계값과 비교하는 데 사용된 연산자입니다." msgid "Optional CA cert file to use in SSL connections." msgstr "SSL 연결에서 사용할 선택적 CA 인증 파일입니다." msgid "Optional Nova keypair name." msgstr "선택적 Nova 키 쌍 이름입니다." msgid "Optional PEM-formatted certificate chain file." msgstr "선택적 PEM 형식의 인증 체인 파일입니다." msgid "Optional PEM-formatted file that contains the private key." msgstr "개인 키를 포함하는 선택적 PEM 형식의 파일입니다." msgid "Optional filename to associate with part." msgstr "파트와 연관시킬 선택적 파일 이름입니다." #, python-format msgid "Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s." msgstr "" "다음과 같은 형식의 선택적 히트 URL입니다. 
http://0.0.0.0:8004/v1/" "%(tenant_id)s" msgid "Optional subtype to specify with the type." msgstr "유형으로 지정할 선택적 하위 유형입니다." msgid "Options for simulating waiting." msgstr "대기를 시뮬레이션하는 옵션입니다." #, python-format msgid "Order '%(name)s' failed: %(code)s - %(reason)s" msgstr "순서 '%(name)s' 실패: %(code)s - %(reason)s" msgid "Outputs received" msgstr "출력 수신" msgid "Owner of the source security group." msgstr "소스 보안 그룹의 소유자입니다." msgid "PATCH update to non-COMPLETE stack" msgstr "비완료 스택의 패치 업데이트" #, python-format msgid "Parameter '%(name)s' is invalid: %(exp)s" msgstr "'%(name)s' 매개변수가 올바르지 않음: %(exp)s" msgid "Parameter Groups error" msgstr "매개변수 그룹 오류" msgid "" "Parameter Groups error: parameter_groups.: The grouped parameter key_name " "does not reference a valid parameter." msgstr "" "매개변수 그룹 오류: parameter_groups.: 그룹화된 매개변수 key_name에서 올바른 " "매개변수를 참조하지 않습니다." msgid "" "Parameter Groups error: parameter_groups.: The key_name parameter must be " "assigned to one parameter group only." msgstr "" "매개변수 그룹 오류: parameter_groups. key_name 매개변수는 한 매개변수 그룹에" "만 할당해야 합니다." msgid "" "Parameter Groups error: parameter_groups.: The parameters of parameter group " "should be a list." msgstr "" "매개변수 그룹 오류: parameter_groups.: 매개변수 그룹의 매개변수는 목록이어야 " "합니다." msgid "" "Parameter Groups error: parameter_groups.Database Group: The InstanceType " "parameter must be assigned to one parameter group only." msgstr "" "매개변수 그룹 오류: parameter_groups. 데이터베이스 그룹: InstanceType 매개변" "수는 한 매개변수 그룹에만 할당해야 합니다." msgid "" "Parameter Groups error: parameter_groups.Database Group: The grouped " "parameter SomethingNotHere does not reference a valid parameter." msgstr "" "매개변수 그룹 오류: parameter_groups. 데이터베이스 그룹: 그룹화된 매개변수 " "SomethingNotHere에서 올바른 매개변수를 참조하지 않습니다." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters must " "be provided for each parameter group." msgstr "" "매개변수 그룹 오류: parameter_groups. 서버 그룹: 각 매개변수 그룹의 매개변수" "를 제공해야 합니다." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters of " "parameter group should be a list." msgstr "" "매개변수 그룹 오류: parameter_groups. 서버 그룹: 매개변수 그룹의 매개변수는 " "목록이어야 합니다." msgid "" "Parameter Groups error: parameter_groups: The parameter_groups should be a " "list." msgstr "" "매개변수 그룹 오류: parameter_groups: parameter_groups는 목록이어야 합니다." #, python-format msgid "Parameter name in \"%s\" must be string" msgstr "\"%s\"의 매개변수 이름은 문자열이어야 함" #, python-format msgid "Params must be a map, find a %s" msgstr "매개변수는 맵이어야 함, %s 찾기" msgid "Parent network of the subnet." msgstr "서브넷의 상위 네트워크입니다." msgid "Parts belonging to this message." msgstr "이 메시지에 속한 파트입니다." msgid "Password for API authentication" msgstr "API 인증용 비밀번호" msgid "Password for accessing the data source URL." msgstr "데이터 소스 URL에 액세스하기 위한 암호입니다." msgid "Password for accessing the job binary URL." msgstr "작업 바이너리 URL에 액세스하기 위한 암호입니다." msgid "Password for those users on instance creation." msgstr "인스턴스 작성 시 해당 사용자의 비밀번호입니다." msgid "Password of keystone user." msgstr "keystone 사용자의 암호입니다." msgid "Password used by user." msgstr "사용자가 사용한 암호입니다." #, python-format msgid "Path components in \"%s\" must be strings" msgstr "\"%s\"의 경로 컴포넌트는 문자열이어야 함" msgid "Path components in attributes must be strings" msgstr "속성의 경로 컴포넌트는 문자열이어야 함" msgid "Payload exceeds maximum allowed size" msgstr "페이로드가 허용되는 최대 크기를 초과함" msgid "Perfect forward secrecy for the ipsec policy." msgstr "ipsec 정책의 전체 전달 비밀 유지입니다." msgid "Perfect forward secrecy in lowercase for the ike policy." msgstr "ike 정책의 전체 전달 비밀 유지(소문자)입니다." 
msgid "" "Perform a check on the input values passed to verify that each required " "input has a corresponding value. When the property is set to STRICT and no " "value is passed, an exception is raised." msgstr "" "개별 필수 입력에 해당 값이 있는지 확인하기 위해서 전달된 입력 값을 검사합니" "다. 특성이 STRICT로 설정되었으나 값을 전달하지 않은 경우, 예외가 발생합니다." msgid "Period (seconds) to evaluate over." msgstr "평가할 기간(초)입니다." msgid "Physical ID of the VPC. Not implemented." msgstr "VPC의 실제 ID입니다. 구현되지 않았습니다." #, python-format msgid "" "Plugin %(plugin)s doesn't support the following node processes: " "%(unsupported)s. Allowed processes are: %(allowed)s" msgstr "" "플러그인 %(plugin)s에서 다음 노드 프로세스를 지원하지 않음: %(unsupported)s. " "허용된 프로세스: %(allowed)s" msgid "Plugin name." msgstr "플러그인 이름입니다." msgid "Policies for removal of resources on update." msgstr "업데이트 시 자원 제거 정책." msgid "Policy for rolling updates for this scaling group." msgstr "이 스케일링 그룹의 롤링 업데이트에 대한 정책입니다." msgid "" "Policy on how to apply a flavor update; either by requesting a server resize " "or by replacing the entire server." msgstr "" "플레이버 업데이트를 적용하는 방법에 대한 정책입니다. 서버 크기 조정을 요청하" "거나 전체 서버를 바꿉니다." msgid "" "Policy on how to apply an image-id update; either by requesting a server " "rebuild or by replacing the entire server." msgstr "" "이미지 ID 업데이트를 적용하는 방법에 대한 정책입니다. 서버 다시 빌드를 요청하" "거나 전체 서버를 바꿉니다." msgid "" "Policy on how to respond to a stack-update for this resource. REPLACE_ALWAYS " "will replace the port regardless of any property changes. AUTO will update " "the existing port for any changed update-allowed property." msgstr "" "이 자원의 스택 업데이트에 대응하는 방법에 대한 정책입니다. REPLACE_ALWAYS는 " "특정 변경에 관계 없이 포트를 대체합니다. AUTO는 업데이트가 허용되는 변경된 특" "성의 기존 포트를 업데이트합니다." msgid "" "Policy to be processed when doing an update which requires removal of " "specific resources." msgstr "특정 자원을 제거해야 하는 업데이트를 수행할 때 처리할 정책입니다." msgid "Pool creation failed" msgstr "풀 작성 실패" msgid "Pool creation failed due to vip" msgstr "vip로 인해 풀 작성 실패" msgid "Pool from which floating IP is allocated." msgstr "부동 IP가 할당되는 풀입니다." msgid "Port number on which the servers are running on the members." msgstr "서버가 멤버에서 실행 중인 포트 번호입니다." msgid "Port on which the pool member listens for requests or connections." msgstr "풀 멤버가 요청 또는 연결을 청취하는 포트입니다." msgid "Port security enabled of the network." msgstr "네트워크의 포트 보안이 사용되었습니다." msgid "Port security enabled of the port." msgstr "포트의 포트 보안이 사용되었습니다." msgid "Position of the rule within the firewall policy." msgstr "방화벽 정책 내에서 규칙의 위치입니다." msgid "Pre-shared key string for the ipsec site connection." msgstr "ipsec 사이트 연결에 대한 사전 공유 키 문자열입니다." msgid "Prefix length for subnet allocation from subnet pool." msgstr "서브넷 풀에서 서브넷을 할당하기 위한 접두부 길이입니다." msgid "Private DNS name of the specified instance." msgstr "지정된 인스턴스의 개인용 DNS 이름입니다." msgid "Private IP address of the network interface." msgstr "네트워크 인터페이스의 개인용 IP 주소입니다." msgid "Private IP address of the specified instance." msgstr "지정된 인스턴스의 개인용 IP 주소입니다." msgid "Project ID" msgstr "프로젝트 ID" msgid "" "Projects to add volume type access to. NOTE: This property is only supported " "since Cinder API V2." msgstr "" "볼륨 유형 액세스를 추가할 프로젝트입니다. 참고: 이 특성은 Cinder API V2 이후" "로만 지원됩니다." #, python-format msgid "" "Properties %(algorithm)s and %(bit_length)s are required for %(type)s type " "of order." msgstr "" "특성 %(algorithm)s 및 %(bit_length)s은(는) %(type)s 타입의 순서에 필수입니다." msgid "Properties for profile." msgstr "프로파일의 특성입니다." msgid "Properties of this policy." msgstr "이 정책의 특성입니다." msgid "Properties to pass to each resource being created in the chain." 
msgstr "체인에 작성 중인 각 자원에 전달되는 특성입니다." #, python-format msgid "Property %(cookie)s is required when %(sp)s type is set to %(app)s." msgstr "" "%(sp)s 타입이 %(app)s(으)로 설정된 경우 특성 %(cookie)s이(가) 필요합니다." #, python-format msgid "" "Property %(cookie)s must NOT be specified when %(sp)s type is set to %(ip)s." msgstr "" "%(sp)s 타입이 %(ip)s(으)로 설정된 경우 특성 %(cookie)s을(를) 지정하지 않아야 " "합니다." #, python-format msgid "" "Property %(key)s updated value %(new)s should be superset of existing value " "%(old)s." msgstr "" "특성 %(key)s 업데이트 값 %(new)s은(는) 기존 값 %(old)s의 수퍼세트여야 합니다." #, python-format msgid "" "Property %(n)s type mismatch between facade %(type)s (%(fs_type)s) and " "provider (%(ps_type)s)" msgstr "" "facade%(type)s (%(fs_type)s) 및 프로바이더(%(ps_type)s) 간의 특성 %(n)s 타입 " "불일치" #, python-format msgid "Property %(policies)s and %(item)s cannot be used both at one time." msgstr "특성 %(policies)s 및 %(item)s을(를) 동시에 사용할 수 없습니다." #, python-format msgid "Property %(ref)s required when protocol is %(term)s." msgstr "프로토콜이 %(term)s인 경우 필요한 특성 %(ref)s입니다." #, python-format msgid "Property %s not assigned" msgstr "특성 %s이(가) 할당되지 않음" #, python-format msgid "Property %s not implemented yet" msgstr "특성 %s이(가) 아직 구현되지 않음" msgid "" "Property cookie_name is required when session_persistence type is set to " "APP_COOKIE." msgstr "" "session_persistence 유형이 APP_COOKIE로 설정되면 특성 cookie_name이 필요합니" "다." msgid "" "Property cookie_name is required, when session_persistence type is set to " "APP_COOKIE." msgstr "" "session_persistence 유형이 APP_COOKIE로 설정되면 특성 cookie_name이 필요합니" "다." msgid "" "Property cookie_name must NOT be specified when session_persistence type is " "set to SOURCE_IP." msgstr "" "session_persistence 타입이 SOURCE_IP로 설정된 경우 특성 cookie_name을 지정하" "지 않아야 합니다." msgid "Property values for the resources in the group." msgstr "그룹에 있는자원의 특성 값입니다." msgid "Protocol for balancing." msgstr "밸런싱을 위한 프로토콜입니다." msgid "Protocol for the firewall rule." msgstr "방화벽 규칙에 대한 프로토콜입니다." msgid "Protocol of the pool." msgstr "풀의 프로토콜입니다." msgid "Protocol on which to listen for the client traffic." msgstr "클라이언트 트래픽을 청취할 프로토콜입니다." msgid "Protocol to balance." msgstr "조정할 프로토콜입니다." msgid "Protocol value for this firewall rule." msgstr "이 방화벽 규칙에 대한 프로토콜 값입니다." msgid "" "Provide access to nodes using other nodes of the cluster as proxy gateways." msgstr "" "클러스터의 다른 노드를 프록시 게이트웨이로 사용하여 노드에 대한 액세스를 제공" "하십시오." msgid "" "Provide old encryption key. New encryption key would be used from config " "file." msgstr "" "이전 암호화 키를 제공하십시오. 구성 파일에서 새 암호화 키를 사용합니다." msgid "Provider for this Load Balancer." msgstr "이 로드 밸런서의 프로바이더입니다." msgid "Provider implementing this load balancer instance." msgstr "이 로드 밸런서 인스턴스를 구현하는 프로바이더입니다." #, python-format msgid "Provider requires property %(n)s unknown in facade %(type)s" msgstr "제공자에 Facade %(type)s에서 알 수 없는 특성 %(n)s이(가) 필요함" msgid "Public DNS name of the specified instance." msgstr "지정된 인스턴스의 공용 DNS 이름입니다." msgid "Public IP address of the specified instance." msgstr "지정된 인스턴스의 공용 IP 주소입니다." msgid "" "RPC timeout for the engine liveness check that is used for stack locking." msgstr "스택 잠금에 사용되는 엔진 활동 확인에 대한 RPC 제한시간입니다." msgid "RX/TX factor." msgstr "RX/TX 요인입니다." #, python-format msgid "Rebuilding server failed, status '%s'" msgstr "서버 다시 빌드 실패. 상태 '%s'" msgid "Record name." msgstr "레코드 이름." #, python-format msgid "Recursion depth exceeds %d." msgstr "순환 깊이가 %d을(를) 초과합니다." msgid "" "Ref structure that contains the ID of the VPC on which you want to create " "the subnet." 
msgstr "서브넷을 작성하려는 VPC의 ID를 포함하는 Ref 구조입니다." msgid "Reference to a flavor for creating DB instance." msgstr "DB 인스턴스 작성을 위한 플레이버에 대한 참조입니다." msgid "Reference to certificate." msgstr "인증서에 대한 참조입니다." msgid "Reference to intermediates." msgstr "중간에 대한 참조입니다." msgid "Reference to private key passphrase." msgstr "개인 키 암호에 대한 참조입니다." msgid "Reference to private key." msgstr "개인 키에 대한 참조입니다." msgid "Reference to public key." msgstr "공개 키에 대한 참조입니다." msgid "Reference to the secret." msgstr "시크릿에 대한 참조입니다." msgid "References to secrets that will be stored in container." msgstr "컨테이너에 저장될 시크릿에 대한 참조입니다." msgid "Region name in which this stack will be created." msgstr "이 스택을 작성할 리젼 이름입니다." msgid "Remaining executions." msgstr "남은 실행 수입니다." msgid "Remote branch router identity." msgstr "원격 분기 라우터 ID입니다." msgid "Remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "원격 분기 라우터 공용 IPv4 주소 또는 IPv6 주소 또는 FQDN입니다." msgid "Remote subnet(s) in CIDR format." msgstr "CIDR 형식의 원격 서브넷입니다." msgid "" "Replacement policy used to work around flawed nova/neutron port interaction " "which has been fixed since Liberty." msgstr "" "Liberty 이후로 수정된 결함이 있는 nova/neutron 포트 상호 작용에 대해 작업하" "는 데 사용한 대체 정책입니다." msgid "Request expired or more than 15mins in the future" msgstr "요청이 만료되었거나 15분 이상 소요됨" #, python-format msgid "Request limit exceeded: %(message)s" msgstr "요청 한도 초과함: %(message)s" msgid "Request missing required header X-Auth-Url" msgstr "요청에 필수 헤더 X-Auth-Url이 누락됨" msgid "Request was denied due to request throttling" msgstr "요청 제한으로 인해 요청이 거부됨" #, python-format msgid "" "Requested plugin '%(plugin)s' doesn't support version '%(version)s'. Allowed " "versions are %(allowed)s" msgstr "" "요청된 플러그인 '%(plugin)s'에서 버전 '%(version)s'을(를) 지원하지 않습니다. " "허용되는 버전은 %(allowed)s입니다." msgid "" "Required extra specification. Defines if share drivers handles share servers." msgstr "" "필수 추가 스펙입니다. 공유 드라이버 핸들이 서버를 공유하는지 정의합니다." #, python-format msgid "Required property %(n)s for facade %(type)s missing in provider" msgstr "Facade %(type)s의 필수 특성 %(n)s이(가) 프로바이더에서 누락됨" #, python-format msgid "Resizing to '%(flavor)s' failed, status '%(status)s'" msgstr "'%(flavor)s'(으)로 크기 조정이 실패함, 상태 '%(status)s'" #, python-format msgid "Resource \"%s\" has no type" msgstr "자원 \"%s\"에 유형이 없음" #, python-format msgid "Resource \"%s\" type is not a string" msgstr "자원 \"%s\" 유형이 문자열이 아님" #, python-format msgid "Resource %(name)s %(key)s type must be %(typename)s" msgstr "자원 %(name)s %(key)s의 유형은 %(typename)s(이)어야 함" #, python-format msgid "Resource %(name)s is missing \"%(type_key)s\"" msgstr "%(name)s 자원에서 \"%(type_key)s\"이(가) 누락됨" #, python-format msgid "" "Resource %s's property user_data_format should be set to SOFTWARE_CONFIG " "since there are software deployments on it." msgstr "" "소프트웨어 배치가 있으므로 자원 %s의 특성 user_data_format을 SOFTWARE_CONFIG" "로 설정해야 합니다. " msgid "Resource ID was not provided." msgstr "자원 ID가 제공되지 않았습니다." msgid "" "Resource definition for the resources in the group, in HOT format. The value " "of this property is the definition of a resource just as if it had been " "declared in the template itself." msgstr "" "그룹의 자원에 대한 자원 정의, HOT 형식입니다. 이 특성의 값은 템플리트 자체에" "서 선언된 것처럼 자원의 정의입니다." msgid "" "Resource definition for the resources in the group. The value of this " "property is the definition of a resource just as if it had been declared in " "the template itself." msgstr "" "그룹의 자원에 대한 자원 정의입니다. 이 특성 값은 템플리트 자체에서 선언된 경" "우와 같은 자원의 정의입니다." 
msgid "Resource failed" msgstr "자원 실패" msgid "Resource is not built" msgstr "자원을 빌드할 수 없음" msgid "Resource name may not contain \"/\"" msgstr "자원 이름에 \"/\"가 포함되지 않아야 합니다. " msgid "Resource type." msgstr "자원 유형." msgid "Resource update already requested" msgstr "자원 업데이트가 이미 요청됨" msgid "Resource with the name requested already exists" msgstr "요청된 이름을 가진 자원이 이미 존재함" msgid "" "ResourceInError: resources.remote_stack: Went to status UPDATE_FAILED due to " "\"Remote stack update failed\"" msgstr "" "ResourceInError: resources.remote_stack: \"원격 스택 업데이트 실패\"로 인해" "UPDATE_FAILED 상태가 됨" #, python-format msgid "Resources must contain Resource. Found a [%s] instead" msgstr "자원에는 자원이 포함되어야 합니다. 대신 [%s]이(가) 발견되었습니다." msgid "" "Resources that users are allowed to access by the DescribeStackResource API." msgstr "사용자가 DescribeStackResource API로 액세스하도록 허용되는 자원입니다." msgid "Returned status code from the configuration execution." msgstr "구성 실행에서 상태 코드가 리턴됩니다." msgid "Route duplicates an existing route." msgstr "경로가 기존 경로를 중복합니다." msgid "Route table ID." msgstr "라우트 테이블 ID입니다." msgid "Rule type." msgstr "규칙 유형" msgid "Safety assessment lifetime configuration for the ike policy." msgstr "ike 정책의 안전 평가 유효시간 구성입니다." msgid "Safety assessment lifetime configuration for the ipsec policy." msgstr "ipsec 정책의 안전 평가 유효시간 구성입니다." msgid "Safety assessment lifetime units." msgstr "안전 평가 유효시간 단위입니다." msgid "Safety assessment lifetime value in specified units." msgstr "지정된 단위의 안전 평가 유효시간 값입니다." msgid "Scheduler hints to pass to Nova (Heat extension)." msgstr "Nova에 전달할 스케줄러 힌트입니다(히트 확장)." msgid "Schema representing the inputs that this software config is expecting." msgstr "이 소프트웨어 구성이 예상하는 입력을 나타내는 스키마입니다." msgid "Schema representing the outputs that this software config will produce." msgstr "이 소프트웨어 구성이 생성하는 출력을 나타내는 스키마입니다." #, python-format msgid "Schema valid only for %(ltype)s or %(mtype)s, not %(utype)s" msgstr "" "스키마는 %(utype)s이(가) 아니라 %(ltype)s 또는 %(mtype)s에 대해서만 유효함" msgid "" "Scope of flavor accessibility. Public or private. Default value is True, " "means public, shared across all projects." msgstr "" "Flavor 접근성 범위입니다. 공용 또는 개인용입니다. 기본값은 True이며, 모든 프" "로젝트에서 공유되는 공용을 나타냅니다." #, python-format msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden." msgstr "테넌트 %(actual)s에서 테넌트 %(target)s의 검색이 금지되었습니다." msgid "Seconds between running periodic tasks." msgstr "주기적 태스크 실행 사이의 시간(초)입니다." msgid "Seconds to wait after a create. Defaults to the global wait_secs." msgstr "작성 후 대기 시간(초)입니다. 글로벌 wait_secs로 기본값이 지정됩니다." msgid "Seconds to wait after a delete. Defaults to the global wait_secs." msgstr "삭제 후 대기할 시간(초)입니다. 글로벌 wait_secs로 기본값이 지정됩니다." msgid "Seconds to wait after an action (-1 is infinite)." msgstr "작업 후의 초 단위 대기 시간(-1은 무한)." msgid "Seconds to wait after an update. Defaults to the global wait_secs." msgstr "" "업데이트 후 대기할 시간(초)입니다. 글로벌 wait_secs로 기본값이 지정됩니다." #, python-format msgid "Section %s can not be accessed directly." msgstr "섹션 %s에 직접 액세스할 수 없습니다." #, python-format msgid "Security Group \"%(group_name)s\" not found" msgstr "보안 그룹 \"%(group_name)s\"을(를) 찾을 수 없음" msgid "Security group IDs to assign." msgstr "지정할 보안 그룹 ID입니다." msgid "Security group IDs to associate with this port." msgstr "이 포트와 연관시킬 보안 그룹 ID입니다." msgid "Security group names to assign." msgstr "지정할 보안 그룹 이름입니다." msgid "Security groups cannot be assigned the name \"default\"." msgstr "보안 그룹에 이름 \"default\"를 지정할 수 없습니다." msgid "Security service IP address or hostname." msgstr "보안 서비스 IP 주소 또는 호스트 이름입니다." 
msgid "Security service description." msgstr "보안 서비스 설명입니다." msgid "Security service domain." msgstr "보안 서비스 도메인입니다." msgid "Security service name." msgstr "보안 서비스 이름입니다." msgid "Security service type." msgstr "보안 서비스 타입입니다." msgid "Security service user or group used by tenant." msgstr "테넌트에서 사용한 보안 서비스 사용자 또는 그룹입니다." msgid "Select deferred auth method, stored password or trusts." msgstr "연기된 인증 방법, 저장된 비밀번호 또는 신뢰를 선택하십시오." msgid "Sequence of characters to build the random string from." msgstr "랜덤 문자열을 빌드할 문자 시퀀스입니다." #, python-format msgid "Server %(name)s delete failed: (%(code)s) %(message)s" msgstr "서버 %(name)s 삭제 실패: (%(code)s) %(message)s" msgid "Server Group name." msgstr "서버 그룹 이름입니다." msgid "Server name." msgstr "서버 이름입니다." msgid "Server to assign floating IP to." msgstr "부동 IP를 지정할 서버입니다." #, python-format msgid "" "Service %(service_name)s is not available for resource type " "%(resource_type)s, reason: %(reason)s" msgstr "" "서비스 %(service_name)s은(는) 자원 유형 %(resource_type)s에 사용할 수 없음, " "이유: %(reason)s" msgid "Service misconfigured" msgstr "서비스 설정이 잘못되었습니다. " msgid "Service temporarily unavailable" msgstr "서비스가 일시적으로 사용 불가능함" msgid "Set of parameters passed to this stack." msgstr "이 스택으로 전달하는 매개변수 세트입니다." msgid "Set of rules for comparing characters in a character set." msgstr "문자 세트에서 문자 비교를 위한 규칙 세트입니다." msgid "Set of symbols and encodings." msgstr "인코딩 및 기호의 세트입니다." msgid "Set to \"vpc\" to have IP address allocation associated to your VPC." msgstr "VPC와 연관된 IP 주소 할당을 가지도록 \"vpc\"로 설정하십시오." msgid "Set to true if DHCP is enabled and false if DHCP is disabled." msgstr "" "DHCP를 사용할 수 있는 경우 true로, DHCP를 사용할 수 없는 경우 false로 설정하" "십시오." msgid "Severity of the alarm." msgstr "알람의 심각도입니다." msgid "Share description." msgstr "공유 설명입니다." msgid "Share host." msgstr "공유 호스트입니다." msgid "Share name." msgstr "공유 이름입니다." msgid "Share network description." msgstr "공유 네트워크 설명입니다." msgid "Share project ID." msgstr "공유 프로젝트 ID입니다." msgid "Share protocol supported by shared filesystem." msgstr "공유 파일 시스템에서 지원하는 공유 프로토콜입니다." msgid "Share storage size in GB." msgstr "보안 스토리지 크기(GB)입니다." msgid "Shared status of the metering label." msgstr "측정 레이블의 공유 상태입니다." msgid "Shared status of this firewall policy." msgstr "이 방화벽 정책의 공유 상태입니다." msgid "Shared status of this firewall rule." msgstr "이 방화벽 규칙의 공유 상태입니다." msgid "Shared status of this firewall." msgstr "이 방화벽의 공유 상태입니다." msgid "Shrinking volume" msgstr "볼륨 축소 중" msgid "Signal data error" msgstr "신호 데이터 오류" #, python-format msgid "Signal resource during %s" msgstr "%s 중에 신호 자원" #, python-format msgid "Single schema valid only for %(ltype)s, not %(utype)s" msgstr "단일 스키마는 %(utype)s이(가) 아니라 %(ltype)s에 대해서만 유효함" msgid "Size of a secondary ephemeral data disk in GB." msgstr "보조 임시 데이터 디스크의 크기(GB)입니다." msgid "Size of adjustment." msgstr "조정 크기입니다." msgid "Size of encryption key, in bits. For example, 128 or 256." msgstr "비트 단위의 암호화 키 크기(예: 128 또는 256)." msgid "" "Size of local disk in GB. The \"0\" size is a special case that uses the " "native base image size as the size of the ephemeral root volume." msgstr "" "로컬 디스크의 크기(GB)입니다. 네이티브 기본 이미지 크기를 임시 루트 볼륨의 크" "기로 사용하는 특수 경우에 \"0\" 크기를 사용합니다." msgid "" "Size of the block device in GB. If it is omitted, hypervisor driver " "calculates size." msgstr "" "블록 디바이스의 크기(GB). 이를 생략하는 경우 하이퍼바이저 드라이버가 크기를 " "계산합니다." msgid "Size of the instance disk volume in GB." msgstr "인스턴스 디스크 볼륨의 크기(GB)입니다." msgid "Size of the volumes, in GB." msgstr "볼륨의 크기(GB)입니다." 
msgid "Smallest prefix size that can be allocated from the subnet pool." msgstr "서브넷 풀에서 할당할 수 있는 가장 작은 접두부 크기입니다." #, python-format msgid "Snapshot with id %s not found" msgstr "ID가 %s인 스냅샷을 찾을 수 없음" msgid "" "SnapshotId is missing, this is required when specifying BlockDeviceMappings." msgstr "" "SnapshotId가 누락되었습니다. BlockDeviceMappings를 지정할 때 SnapshotId는 필" "수입니다." #, python-format msgid "Software config with id %s not found" msgstr "ID가 %s인 소프트웨어 구성을 찾을 수 없음" msgid "Source IP address or CIDR." msgstr "소스 IP 주소 또는 CIDR입니다." msgid "Source ip_address for this firewall rule." msgstr "이 방화벽 규칙의 소스 ip_address입니다." msgid "Source port number or a range." msgstr "소스 포트 번호 또는 범위입니다." msgid "Source port range for this firewall rule." msgstr "이 방화벽 규칙의 소스 포트 범위입니다." #, python-format msgid "Specified output key %s not found." msgstr "지정된 출력 키 %s을(를) 찾을 수 없습니다." #, python-format msgid "Specified status is invalid, defaulting to %s" msgstr "지정된 상태가 올바르지 않으며 %s(으)로 기본 설정됨" #, python-format msgid "Specified subnet %(subnet)s does not belongs to network %(network)s." msgstr "지정된 서브넷 %(subnet)s은(는) 네트워크 %(network)s에 속하지 않습니다." msgid "Specifies a custom discovery url for node discovery." msgstr "노드 검색을 위한 사용자 정의 검색 url을 지정합니다." msgid "Specifies database names for creating databases on instance creation." msgstr "" "인스턴스 작성 시 데이터베이스 작성을 위해 데이터베이스 이름을 지정합니다." msgid "Specify the ACL permissions on who can read objects in the container." msgstr "" "컨테이너에서 오브젝트를 읽을 수 있는 사용자에 대한 ACL 권한을 지정하십시오." msgid "Specify the ACL permissions on who can write objects to the container." msgstr "" "컨테이너에 오브젝트를 쓸 수 있는 사용자에 대한 ACL 권한을 지정하십시오." msgid "" "Specify whether the remote_ip_prefix will be excluded or not from traffic " "counters of the metering label. For example to not count the traffic of a " "specific IP address of a range." msgstr "" "remote_ip_prefix를 측정 레이블의 트래픽 카운터에서 제외할지 여부를 지정하십시" "오. 예를 들어, 특정 IP 주소 범위의 트래픽을 계수할지 여부입니다." #, python-format msgid "Stack %(stack_name)s already has an action (%(action)s) in progress." msgstr "스택 %(stack_name)s에 진행 중인 조치(%(action)s)가 이미 있습니다." msgid "Stack ID" msgstr "스택 ID" msgid "Stack Name" msgstr "스택 이름" msgid "Stack id" msgstr "스택 ID" msgid "Stack name may not contain \"/\"" msgstr "스택 이름이 \"/\"을(를) 포함하지 않을 수 있음 " msgid "Stack resource id" msgstr "스택 자원 ID" msgid "Stack unknown status" msgstr "스택 알 수 없음 상태" #, python-format msgid "Stack with id %s not found" msgstr "ID가 %s인 스택을 찾을 수 없음" msgid "" "Stacks containing these tag names will be hidden. Multiple tags should be " "given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too)." msgstr "" "이러한 태그 이름을 포함하는 스택은 숨깁니다. 쉼표로 구분된 목록으로 여러 태그" "를 제공해야 합니다(예: hidden_stack_tags=hide_me,me_too)." msgid "Start address for the allocation pool." msgstr "할당 풀의 시작 주소입니다." #, python-format msgid "Start resizing the group %(group)s" msgstr "그룹 %(group)s 크기 조정 시작" msgid "Start time for the time constraint. A CRON expression property." msgstr "시간 제한조건의 시작 시간입니다. CRON 식 특성입니다." #, python-format msgid "State %s invalid for create" msgstr "상태 %s이(가) 작성에 올바르지 않음" #, python-format msgid "State %s invalid for resume" msgstr "상태 %s이(가) 재개에 올바르지 않음" #, python-format msgid "State %s invalid for suspend" msgstr "상태 %s이(가) 일시중단에 올바르지 않음" msgid "Status" msgstr "상태" #, python-format msgid "String to split must be string; got %s" msgstr "분할할 문자열은 다음이어야 함; %s을(를) 가져옴" msgid "String value with which to compare." msgstr "비교에 사용할 문자열 값." msgid "Subnet ID to associate with this interface." msgstr "이 인터페이스와 연관시킬 서브넷 ID입니다." 
msgid "Subnet ID to launch instance in." msgstr "인스턴스를 실행할 서브넷 ID입니다." msgid "Subnet ID." msgstr "서브넷 ID입니다." msgid "Subnet in which the vpn service will be created." msgstr "vpn 서비스가 작성되는 서브넷입니다." msgid "" "Subnet in which to allocate the IP address for port. Used for creating port, " "based on derived properties. If subnet is specified, network property " "becomes optional." msgstr "" "포트의 IP 주소를 할당할 서브넷입니다. 파생된 특성을 기반으로 포트를 작성하는 " "데 사용합니다. 서브넷이 지정된 경우 네트워크 특성은 선택 사항이 됩니다." msgid "Subnet in which to allocate the IP address for this port." msgstr "이 포트의 IP 주소를 할당할 서브넷입니다." msgid "Subnet name or ID of this member." msgstr "이 멤버의 서브넷 이름 또는 ID입니다." msgid "Subnet of external fixed IP address." msgstr "외부 Fixed IP 주소의 서브넷입니다." msgid "Subnet of the vip." msgstr "vip의 서브넷입니다." msgid "Subnets of this network." msgstr "이 네트워크의 서브넷입니다." msgid "" "Subset of trustor roles to be delegated to heat. If left unset, all roles of " "a user will be delegated to heat when creating a stack." msgstr "" "히트에 위임할 trustor 역할의 서브세트입니다. 설정하지 않은 상태로 두면 스택" "을 작성할 때 사용자의 모든 역할이 히트로 위임됩니다." msgid "Supplied metadata for the resources in the group." msgstr "그룹에 있는 자원에 제공된 메타데이터입니다." msgid "Supported versions: keystone v3" msgstr "지원 버전: keystone v3" #, python-format msgid "Suspend of instance %s failed" msgstr "인스턴스 %s 일시 중단 실패" #, python-format msgid "Suspend of server %s failed" msgstr "서버 %s 일시 중단 실패" msgid "Swap space in MB." msgstr "스왑 공간(MB)입니다." msgid "System SIGHUP signal received." msgstr "시스템 SIGHUP 신호를 수신했습니다." msgid "TCP or UDP port on which to listen for client traffic." msgstr "클라이언트 트래픽을 청취할 TCP 또는 UDP 포트입니다." msgid "TCP port on which the instance server is listening." msgstr "인스턴스 서버가 청취하는 TCP 포트입니다." msgid "TCP port on which the pool member listens for requests or connections." msgstr "풀 멤버가 요청 또는 연결을 청취하는 TCP 포트입니다." msgid "" "TCP port on which to listen for client traffic that is associated with the " "vip address." msgstr "vip 주소와 연관된 클라이언트 트래픽을 청취하는 TCP 포트입니다." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of OpenStack service finder functions." msgstr "" "OpenStack 서비스 찾기 프로그램 기능을 캐시하는 데 사용한 dogpile.cache 지역" "에 캐시된 항목의 TTL(초)." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of service extensions." msgstr "" "서비스 확장을 캐시하는 데 사용한 dogpile.cache 지역에 캐시된 항목의 TTL(초)." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of validation constraints." msgstr "" "유효성 검증 제한조건을 캐시하는 데 사용한 dogpile.cache 지역에 캐시된 항목의 " "TTL(초)." msgid "Tag key." msgstr "태그 키입니다." msgid "Tag value." msgstr "태그 값입니다." msgid "Tags to add to the image." msgstr "이미지에 추가할 태그입니다." msgid "Tags to attach to instance." msgstr "인스턴스에 연결할 태그입니다." msgid "Tags to attach to the bucket." msgstr "버킷에 연결할 태그입니다." msgid "Tags to attach to this group." msgstr "이 그룹에 연결할 태그입니다." msgid "Task description." msgstr "태스크 설명입니다." msgid "Task name." msgstr "태스크 이름입니다." msgid "" "Template default for how the server should receive the metadata required for " "software configuration. POLL_SERVER_CFN will allow calls to the cfn API " "action DescribeStackResource authenticated with the provided keypair " "(requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the " "Heat API resource-show using the provided keystone credentials (requires " "keystone v3 API, and configured stack_user_* config options). 
POLL_TEMP_URL " "will create and populate a Swift TempURL with metadata for polling (requires " "object-store endpoint which supports TempURL).ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "서버에서 소프트웨어 구성에 필요한 메타데이터를 받는 방법을 나타내는 기본 템플" "릿입니다. POLL_SERVER_CFN에서 제공된 키 페어로 인증된 cfn API 작업 " "DescribeStackResource을 호출할 수 있습니다(사용된 heat-api-cfn 필요). " "POLL_SERVER_HEAT에서 제공된 keystone 자격 증명을 사용하여 Heat API resource-" "show를 호출할 수 있습니다(keystone v3 API 및 구성된 stack_user_* 구성 옵션이 " "필요). POLL_TEMP_URL에서 Swift TempURL을 작성하여 폴링할 메타데이터로 채웁니" "다(TempURL을 지원하는 object-store 엔드포인트 필요). ZAQAR_MESSAGE가 전용 " "zaqar 큐를 작성하고 폴링할 메타데이터를 게시합니다." msgid "" "Template default for how the server should signal to heat with the " "deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN " "keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will " "create a Swift TempURL to be signaled via HTTP PUT (requires object-store " "endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat " "API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL " "will create a dedicated zaqar queue to be signaled using the provided " "keystone credentials." msgstr "" "서버가 배포 출력 값으로 heat에 신호를 보내는 방법을 나타내는 기본 템플릿입니" "다. CFN_SIGNAL을 사용하면 CFN 키 페어 서명 URL에 HTTP POST를 수행할 수 있습니" "다(사용된 heat-api-cfn 필요). TEMP_URL_SIGNAL은 HTTP PUT를 통해 신호를 보낼 " "Swift TempURL을 작성합니다(TempURL을 지원하는 object-store 엔드포인트 필요). " "HEAT_SIGNAL은 제공된 keystone 자격 증명을 사용하여 Heat API resource-signal" "을 호출할 수 있습니다. ZAQAR_SIGNAL은 제공된 keystone 자격 증명을 사용하여 신" "호를 보낼 전용 zaqar 큐를 작성합니다." msgid "Template format version not found." msgstr "템플리트 형식 버전을 찾을 수 없습니다." #, python-format msgid "" "Template size (%(actual_len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "템플릿 크기(%(actual_len)s 바이트)가 허용되는 최대 크기(%(limit)s 바이트)를 " "초과합니다." msgid "Template that specifies the stack to be created as a resource." msgstr "스택을 자원으로 작성하도록 지정하는 템플리트입니다." #, python-format msgid "Template type is not supported: %s" msgstr "템플릿이 지원되지 않음: %s" msgid "Template version was not provided" msgstr "템플리트 버전이 제공되지 않았음" #, python-format msgid "Template with version %s not found" msgstr "버전이 %s인 템플릿을 찾을 수 없음" msgid "TemplateBody or TemplateUrl were not given." msgstr "TemplateBody 또는 TemplateUrl이 제공되지 않음." msgid "Tenant owning the health monitor." msgstr "상태 모니터를 소유하는 테넌트입니다." msgid "Tenant owning the pool member." msgstr "풀 멤버를 소유하는 테넌트입니다." msgid "Tenant owning the pool." msgstr "풀을 소유하는 테넌트입니다." msgid "Tenant owning the port." msgstr "포트를 소유하는 테넌트입니다." msgid "Tenant owning the router." msgstr "라우터를 소유하는 테넌트입니다." msgid "Tenant owning the subnet." msgstr "서브넷을 소유하는 테넌트입니다." #, python-format msgid "Testing message %(text)s" msgstr "테스트 메시지 %(text)s" #, python-format msgid "The \"%(hook)s\" hook is not defined on %(resource)s" msgstr "\"%(hook)s\" 후크는 %(resource)s에 정의되지 않음" #, python-format msgid "The \"for_each\" argument to \"%s\" must contain a map" msgstr "\"%s\"에 대한 \"for_each\" 인수에 맵이 포함되어야 함" #, python-format msgid "The %(entity)s (%(name)s) could not be found." msgstr "%(entity)s (%(name)s)을(를) 찾을 수 없습니다." #, python-format msgid "The %s must be provided for each parameter group." msgstr "각 매개변수 그룹에 대해 %s을(를) 제공해야 합니다." #, python-format msgid "The %s of parameter group should be a list." msgstr "매개변수 그룹의 %s이(가) 목록이어야 합니다." #, python-format msgid "The %s parameter must be assigned to one parameter group only." msgstr "%s 매개변수는 하나의 매개변수 그룹에만 할당되어야 합니다." 
#, python-format msgid "The %s should be a list." msgstr "%s은(는) 목록이어야 합니다." msgid "The API paste config file to use." msgstr "사용할 API 붙여넣기 구성 파일" msgid "The AWS Access Key ID needs a subscription for the service" msgstr "AWS 액세스 키 ID는 서비스에 대한 등록이 필요함" msgid "The Availability Zone where the specified instance is launched." msgstr "지정된 인스턴스가 실행되는 가용성 구역입니다." msgid "The Availability Zones in which to create the load balancer." msgstr "로드 밸런서를 작성할 가용성 구역입니다." msgid "The CIDR." msgstr "CIDR입니다." msgid "The DNS name for the LoadBalancer." msgstr "LoadBalancer의 DNS 이름입니다." msgid "The DNS name of the specified bucket." msgstr "지정된 버킷의 DNS 이름입니다." msgid "The DNS nameserver address." msgstr "DNS 네임 서버 주소입니다." msgid "The HTTP method used for requests by the monitor of type HTTP." msgstr "유형 HTTP의 모니터에 의한 요청에 사용되는 HTTP 메소드입니다." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health." msgstr "" "멤버 상태를 테스트하기 위해 모니터가 사용하는 HTTP 요청에서 사용되는 HTTP 경" "로입니다." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health. A valid value is a string the begins with a forward slash (/)." msgstr "" "멤버 상태를 테스트하기 위해 모니터가 사용하는 HTTP 요청에서 사용되는 HTTP 경" "로입니다. 올바른 값은 슬래시(/)로 시작하는 문자열입니다." msgid "" "The HTTP status codes expected in response from the member to declare it " "healthy. Specify one of the following values: a single value, such as 200. a " "list, such as 200, 202. a range, such as 200-204." msgstr "" "상태를 양호로 선언할 멤버의 응답에 있어야 하는 HTTP 상태 코드입니다. 다음 값 " "중 하나를 지정하십시오. 200과 같은 단일 값, 200, 202와 같은 목록, 200-204와 " "같은 범위." msgid "" "The ID of an existing instance to use to create the Auto Scaling group. If " "specify this property, will create the group use an existing instance " "instead of a launch configuration." msgstr "" "자동 스케일링 그룹을 작성하는 데 사용할 기존 인스턴스의 ID입니다. 이 특성을 " "설정하면 실행 구성 대신 기존 인스턴스를 사용하는 그룹을 작성합니다." msgid "" "The ID of an existing instance you want to use to create the launch " "configuration. All properties are derived from the instance with the " "exception of BlockDeviceMapping." msgstr "" "실행 구성을 작성하는 데 사용할 기존 인스턴스의 ID입니다. BlockDeviceMapping" "을 제외한 모든 특성은 인스턴스에서 파생됩니다." msgid "The ID of the attached network." msgstr "접속된 네트워크의 ID입니다." msgid "The ID of the firewall policy that this firewall is associated with." msgstr "이 방화벽이 연관된 방화벽 정책의 ID입니다." msgid "" "The ID of the hosted zone name that is associated with the LoadBalancer." msgstr "LoadBalancer와 연관된 호스트된 구역 이름의 ID입니다." msgid "The ID of the image to create a volume from." msgstr "볼륨을 작성할 이미지 ID." msgid "The ID of the image to create the volume from." msgstr "볼륨을 작성할 이미지의 ID입니다." msgid "The ID of the instance to which the volume attaches." msgstr "볼륨이 접속하는 인스턴스의 ID입니다." msgid "The ID of the load balancing pool." msgstr "로드 밸런싱 풀의 ID입니다." msgid "The ID of the pool to which the pool member belongs." msgstr "풀 멤버가 속한 풀의 ID입니다." msgid "The ID of the server to which the volume attaches." msgstr "볼륨이 접속하는 서버의 ID입니다." msgid "The ID of the snapshot to create a volume from." msgstr "볼륨을 작성할 스냅샷의 ID입니다." msgid "" "The ID of the tenant which will own the network. Only administrative users " "can set the tenant identifier; this cannot be changed using authorization " "policies." msgstr "" "네트워크를 소유하는 테넌트의 ID입니다. 관리 사용자만 테넌트 ID를 설정할 수 있" "습니다. 이는 권한 정책을 사용하여 변경할 수 없습니다." msgid "" "The ID of the tenant who owns the Load Balancer. Only administrative users " "can specify a tenant ID other than their own." msgstr "" "로드 밸런서를 소유하는 테넌트의 ID입니다. 관리 사용자만 자체 ID가 아닌 테넌" "트 ID를 지정할 수 있습니다." 
msgid "The ID of the tenant who owns the listener." msgstr "리스너를 소유하는 테넌트의 ID입니다." msgid "" "The ID of the tenant who owns the network. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "네트워크를 소유하는 테넌트의 ID입니다. 관리 사용자만 자체 ID가 아닌 테넌트 ID" "를 지정할 수 있습니다." msgid "" "The ID of the tenant who owns the subnet pool. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "서브넷 풀을 소유하는 테넌트의 ID입니다. 관리 사용자만 자체 ID가 아닌 테넌트 " "ID를 지정할 수 있습니다." msgid "The ID of the volume to be attached." msgstr "접속할 볼륨의 ID입니다." msgid "" "The ID of the volume to boot from. Only one of volume_id or snapshot_id " "should be provided." msgstr "" "부팅할 볼륨의 ID입니다. volume_id 또는 snapshot_id 중 하나를 제공해야 합니다." msgid "The ID or name of the flavor to boot onto." msgstr "부팅할 플레이버의 ID 또는 이름입니다." msgid "The ID or name of the image to boot with." msgstr "부팅할 이미지의 ID 또는 이름입니다." msgid "" "The IDs of the DHCP agent to schedule the network. Note that the default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "네트워크를 스케줄할 DHCP의 ID입니다. Neutron의 기본 정책 설정은 이 특성의 사" "용을 관리 사용자로만 제한함을 참고하십시오." msgid "The IP address of the pool member." msgstr "풀 멤버의 IP 주소입니다." msgid "The IP version, which is 4 or 6." msgstr "IP 버전(4 또는 6)입니다." #, python-format msgid "The Parameter (%(key)s) was not defined in template." msgstr "매개변수(%(key)s)이(가) 템플리트에 정의되지 않았습니다. " #, python-format msgid "The Parameter (%(key)s) was not provided." msgstr "매개변수(%(key)s)를 제공하지 않았습니다." msgid "The QoS policy ID attached to this network." msgstr "이 네트워크에 연결된 QoS 정책 ID입니다." msgid "The QoS policy ID attached to this port." msgstr "이 포트에 연결된 QoS 정책 ID입니다." #, python-format msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect." msgstr "참조된 속성(%(resource)s %(key)s)이(가) 올바르지 않습니다. " #, python-format msgid "The Resource %s requires replacement." msgstr "자원 %s을(를) 대체해야 합니다. " #, python-format msgid "" "The Resource (%(resource_name)s) could not be found in Stack %(stack_name)s." msgstr "자원(%(resource_name)s)을%(stack_name)s." #, python-format msgid "The Resource (%(resource_name)s) is not available." msgstr "자원(%(resource_name)s)을 사용할 수 없습니다." #, python-format msgid "The Snapshot (%(snapshot)s) for Stack (%(stack)s) could not be found." msgstr "스택 (%(stack)s)의 스냅샷(%(snapshot)s)을 찾을 수 없습니다." #, python-format msgid "The Stack (%(stack_name)s) already exists." msgstr "스택(%(stack_name)s)이 이미 존재합니다. " msgid "The Template must be a JSON or YAML document." msgstr "템플리트는 JSON 또는 YAML 문서여야 합니다. " msgid "The URI to the container." msgstr "컨테이너의 URI입니다." msgid "The URI to the created container." msgstr "작성된 컨테이너의 URI입니다." msgid "The URI to the created secret." msgstr "작성된 시크릿의 URI입니다." msgid "The URI to the order." msgstr "순서의 URI입니다." msgid "The URIs to container consumers." msgstr "컨테이너 이용자의 URI입니다." msgid "The URIs to secrets stored in container." msgstr "컨테이너에 포함된 시크릿의 URI입니다." msgid "" "The URL of a template that specifies the stack to be created as a resource." msgstr "자원으로 사용할 스택을 지정하는 템플리트의 URL입니다." msgid "The URL of the container." msgstr "컨테이너의 URL입니다." msgid "The VIP address of the LoadBalancer." msgstr "LoadBalancer의 VIP 주소입니다." msgid "The VIP port of the LoadBalancer." msgstr "LoadBalancer의 VIP 포트입니다." msgid "The VIP subnet of the LoadBalancer." msgstr "LoadBalancer의 VIP 서브넷입니다." msgid "The action or operation requested is invalid" msgstr "요청한 동작이나 작업이 잘못되었습니다" msgid "The action to be executed when the receiver is signaled." msgstr "수신기가 신호를 받으면 실행될 작업입니다." 
msgid "The administrative state of the firewall." msgstr "방화벽의 관리 상태입니다." msgid "The administrative state of the health monitor." msgstr "상태 모니터의 관리 상태입니다." msgid "The administrative state of the ipsec site connection." msgstr "ipsec 사이트 연결의 관리 상태입니다." msgid "The administrative state of the pool member." msgstr "풀 멤버의 관리 상태입니다." msgid "The administrative state of the router." msgstr "라우터의 관리 상태입니다." msgid "The administrative state of the vpn service." msgstr "vpn 서비스의 관리 상태입니다." msgid "The administrative state of this Load Balancer." msgstr "이 로드 밸런서의 관리 상태입니다." msgid "The administrative state of this health monitor." msgstr "이 상태 모니터의 관리 상태입니다." msgid "The administrative state of this listener." msgstr "이 리스너의 관리 상태입니다." msgid "The administrative state of this pool member." msgstr "이 풀 멤버의 관리 상태입니다." msgid "The administrative state of this pool." msgstr "이 풀의 관리 상태입니다." msgid "The administrative state of this port." msgstr "이 포트의 관리 상태입니다." msgid "The administrative state of this vip." msgstr "이 vip의 관리 상태입니다." msgid "The administrative status of the network." msgstr "네트워크의 관리 상태입니다." msgid "The administrator password for the server." msgstr "서버에 대한 관리자 비밀번호입니다." msgid "The aggregation method to compare to the threshold." msgstr "임계값과 비교할 집계 메소드." msgid "The algorithm type used to generate the secret." msgstr "시크릿을 생성하는 데 사용하는 알고리즘 타입입니다." msgid "" "The algorithm type used to generate the secret. Required for key and " "asymmetric types of order." msgstr "" "시크릿을 생성하는 데 사용한 알고리즘 타입입니다. 키와 비대칭 타입의 순서에 필" "요합니다." msgid "The algorithm used to distribute load between the members of the pool." msgstr "풀의 멤버 간에 로드를 분배하는 데 사용되는 알고리즘입니다." msgid "The allocated address of this IP." msgstr "이 IP의 할당된 주소입니다." msgid "" "The approximate interval, in seconds, between health checks of an individual " "instance." msgstr "개별 인스턴스 상태 검사 간의 대략적인 간격(초)입니다." msgid "The authentication hash algorithm of the ipsec policy." msgstr "ipsec 정책의 인증 해시 알고리즘입니다." msgid "The authentication hash algorithm used by the ike policy." msgstr "ike 정책에서 사용하는 인증 해시 알고리즘입니다." msgid "The authentication mode of the ipsec site connection." msgstr "ipsec 사이트 연결의 인증 모드입니다." msgid "The availability zone in which the volume is located." msgstr "볼륨이 있는 가용성 구역입니다." msgid "The availability zone in which the volume will be created." msgstr "볼륨이 작성되는 가용성 구역입니다." msgid "The availability zone of shared filesystem." msgstr "공유 파일 시스템의 가용 구역입니다." msgid "The bay name." msgstr "Bay 이름입니다." msgid "The bit-length of the secret." msgstr "시크릿의 비트 길이입니다." msgid "" "The bit-length of the secret. Required for key and asymmetric types of order." msgstr "시크릿의 비트 길이입니다. 키와 비대칭 타입의 순서에 필요합니다." #, python-format msgid "The bucket you tried to delete is not empty (%s)." msgstr "삭제하려는 버킷이 비어 있지 않습니다(%s)." msgid "The can be used to unmap a defined device." msgstr "정의된 디바이스를 맵핑 해제하는 데 사용될 수 있습니다." msgid "The certificate or AWS Key ID provided does not exist" msgstr "인증서 나 AWS Key ID가 없습니다." msgid "The channel for receiving signals." msgstr "신호를 받을 채널입니다." msgid "" "The class that provides encryption support. For example, nova.volume." "encryptors.luks.LuksEncryptor." msgstr "" "암호화 지원을 제공하는 클래스(예: nova.volume.encryptors.luks.LuksEncryptor)." #, python-format msgid "The client (%(client_name)s) is not available." msgstr "클라이언트(%(client_name)s)를 사용할 수 없습니다." msgid "The cluster ID this node belongs to." msgstr "이 노드가 속한 클러스터 ID입니다." msgid "The cluster name." msgstr "클러스터 이름." msgid "The config value of the software config." msgstr "소프트웨어 구성의 구성 값입니다." 
msgid "" "The configuration tool used to actually apply the configuration on a server. " "This string property has to be understood by in-instance tools running " "inside deployed servers." msgstr "" "서버에 구성을 적용하는 데 실제 사용되는 구성도구입니다.이 문자열 특성은 내부 " "인스턴스 도구에 의해 이해되어야 하고 배치된 서버 내에서 실행되어야 합니다." msgid "The content of the CSR. Only for certificate orders." msgstr "CSR의 컨텐츠입니다. 인증서 순서에만 사용합니다." #, python-format msgid "" "The contents of personality file \"%(path)s\" is larger than the maximum " "allowed personality file size (%(max_size)s bytes)." msgstr "" "특성 파일 \"%(path)s\"의 컨텐츠가 최대 허용 특성 파일 크기(%(max_size)s 바이" "트)보다 큽니다." msgid "The current size of AutoscalingResourceGroup." msgstr "AutoscalingResourceGroup의 현재 크기입니다." msgid "The current status of the volume." msgstr "볼륨의 현재 상태입니다." msgid "" "The database instance was created, but heat failed to set up the datastore. " "If a database instance is in the FAILED state, it should be deleted and a " "new one should be created." msgstr "" "데이터베이스 인스턴스가 작성되었으나 히트에서 데이터 저장소를 설정하지 못했습" "니다. 데이터베이스 인스턴스의 상태가 FAILED이면 이를 삭제하고 새로 작성해야 " "합니다." msgid "" "The dead peer detection protocol configuration of the ipsec site connection." msgstr "ipsec 사이트 연결의 정지된 피어 발견 프로토콜 ID입니다." msgid "The decrypted secret payload." msgstr "암호가 해독된 시크릿 페이로드입니다." msgid "" "The default cloud-init user set up for each image (e.g. \"ubuntu\" for " "Ubuntu 12.04+, \"fedora\" for Fedora 19+ and \"cloud-user\" for CentOS/RHEL " "6.5)." msgstr "" "각 이미지에 설정된 기본 cloud-init 사용자(예: Ubuntu 12.04+의 경우 \"ubuntu" "\", Fedora 19+의 경우 \"fedora\" 및 CentOS/RHEL 6.5의 경우 \"cloud-user\")." msgid "The description for the QoS policy." msgstr "QoS 정책에 대한 설명입니다." msgid "The description of the ike policy." msgstr "ike 정책에 대한 설명입니다." msgid "The description of the ipsec policy." msgstr "ipsec 정책에 대한 설명입니다." msgid "The description of the ipsec site connection." msgstr "ipsec 사이트 연결에 대한 설명입니다." msgid "The description of the vpn service." msgstr "vpn 서비스에 대한 설명입니다." msgid "The destination for static route." msgstr "정적 경로의 대상입니다." msgid "The details of physical object." msgstr "실제 오브젝트의 세부 사항입니다." msgid "The device id for the network gateway." msgstr "네트워크 게이트웨이에 대한 디바이스 ID입니다." msgid "" "The device where the volume is exposed on the instance. This assignment may " "not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "볼륨이 인스턴스에서 노출되는 디바이스입니다. 이 지정은 지켜지지 않을 수 있으" "며 경로 /dev/disk/by-id/virtio-를 대신 사용하는 것이 권장됩니다." msgid "" "The direction in which metering rule is applied, either ingress or egress." msgstr "측정 규칙이 적용되는 방향(입구 또는 출구)입니다." msgid "The direction in which metering rule is applied." msgstr "측정 규칙이 적용되는 방향입니다." msgid "" "The direction in which the security group rule is applied. For a compute " "instance, an ingress security group rule matches traffic that is incoming " "(ingress) for that instance. An egress rule is applied to traffic leaving " "the instance." msgstr "" "보안 그룹 규칙이 적용되는 방향입니다. 계산 인스턴스의 경우 입구 보안 그룹 규" "칙은 해당 인스턴스에 대해 수신하는(입구) 트래픽과 일치합니다. 출구 규칙은 인" "스턴스를 남기는 트래픽에 적용됩니다." msgid "The directory to search for environment files." msgstr "환경 파일을 검색할 디렉토리입니다." msgid "The ebs volume to attach to the instance." msgstr "인스턴스에 접속할 ebs 볼륨입니다." msgid "The encapsulation mode of the ipsec policy." msgstr "ipsec 정책의 캡슐화 모드입니다." msgid "The encoding format used to provide the payload data." msgstr "페이로드 데이터를 제공하는 데 사용한 인코딩 형식입니다." msgid "The encryption algorithm of the ipsec policy." msgstr "ipsec 정책의 암호화 알고리즘입니다." msgid "The encryption algorithm or mode. 
For example, aes-xts-plain64." msgstr "암호화 알고리즘 또는 모드(예: aes-xts-plain64)." msgid "The encryption algorithm used by the ike policy." msgstr "ike 정책에서 사용하는 암호화 알고리즘입니다." msgid "The environment is not a valid YAML mapping data type." msgstr "환경이 올바른 YAML 맵핑 데이터 유형이 아닙니다." msgid "The expiration date for the secret in ISO-8601 format." msgstr "ISO-8601 형식으로 된 시크릿의 만기 날짜입니다." msgid "The external load balancer port number." msgstr "외부 로드 밸런서 포트 번호입니다." msgid "The extra specs key and value pairs of the volume type." msgstr "볼륨 타입의 추가 스펙 키 및 값 쌍입니다." msgid "The flavor to use." msgstr "사용할 플레이버입니다." #, python-format msgid "The following parameters are immutable and may not be updated: %(keys)s" msgstr "다음 매개변수는 변경할 수 없으며 업데이트할 수 없음: %(keys)s" #, python-format msgid "The function %s is not supported in this version of HOT." msgstr "%s 함수는 이 버전의 HOT에서 지원되지 않습니다." msgid "" "The gateway IP address. Set to any of [ null | ~ | \"\" ] to create/update a " "subnet without a gateway. If omitted when creation, neutron will assign the " "first free IP address within the subnet to the gateway automatically. If " "remove this from template when update, the old gateway IP address will be " "detached." msgstr "" "게이트웨이 IP 주소입니다. 게이트웨이 없이 서브넷을 작성/업데이트하려면 " "[ null | ~ | \"\" ]로 설정하십시오. 작성 시 생략된 경우 neutron이 서브넷의 " "첫 번째 사용 가능한 IP 주소를 게이트웨이에 자동으로 할당합니다. 업데이트 시 " "템플릿에서 제거하면 이전 게이트웨이 IP 주소의 연결이 해제됩니다." #, python-format msgid "The grouped parameter %s does not reference a valid parameter." msgstr "그룹화된 매개변수 %s이(가) 올바른 매개변수를 참조하지 않습니다." msgid "The host from the container URL." msgstr "컨테이너 URL의 호스트입니다." msgid "The host from which a user is allowed to connect to the database." msgstr "사용자가 데이터베이스에 연결하도록 허용되는 호스트입니다." msgid "" "The id for L2 segment on the external side of the network gateway. Must be " "specified when using vlan." msgstr "" "네트워크 게이트웨이의 외부 측에 있는 L2 세그먼트에 대한 ID입니다. VLAN을 사용" "할 때 지정되어야 합니다." msgid "The identifier of the CA to use." msgstr "사용할 CA의 ID입니다." msgid "The image ID. Glance will generate a UUID if not specified." msgstr "이미지 ID입니다. 지정되어 있지 않으면 글랜스에서 UUID를 생성합니다." msgid "The initiator of the ipsec site connection." msgstr "ipsec 사이트 연결의 개시자입니다." msgid "The input string to be stored." msgstr "저장할 입력 문자열입니다." msgid "The interface name for the network gateway." msgstr "네트워크 게이트웨이에 대한 인터페이스 이름입니다." msgid "The internal network to connect on the network gateway." msgstr "네트워크 게이트웨이에서 연결할 내부 네트워크입니다." msgid "The last operation for the database instance failed due to an error." msgstr "오류가 발생하여 데이터베이스 인스턴스의 마지막 조작이 실패했습니다." #, python-format msgid "The length must be at least %(min)s." msgstr "길이가 최소 %(min)s이어야 합니다. " #, python-format msgid "The length must be in the range %(min)s to %(max)s." msgstr "길이가 %(min)s - %(max)s 범위여야 합니다. " #, python-format msgid "The length must be no greater than %(max)s." msgstr "길이가 %(max)s 이하여야 합니다." msgid "The length of time, in minutes, to wait for the nested stack creation." msgstr "중첩 스택 작성 동안 대기하는 시간의 길이(분)입니다." msgid "" "The list of HTTP status codes expected in response from the member to " "declare it healthy." msgstr "" "상태를 양호로 선언할 멤버의 응답에 있어야 하는 HTTP 상태 코드의 목록입니" "다." msgid "The list of Nova server IDs load balanced." msgstr "로드 밸런싱된 Nova 서버 ID의 목록입니다." msgid "The list of Pools related to this monitor." msgstr "이 모니터와 관련된 풀의 목록입니다." msgid "The list of attachments of the volume." msgstr "볼륨의 첨부 목록입니다." msgid "" "The list of configurations for the different lifecycle actions of the " "represented software component."
msgstr "" "나타낸 소프트웨어 구성요소의 다른 라이프사이클 조치에 대한 구성 목록입니다." msgid "The list of instance IDs load balanced." msgstr "인스턴스 ID 로드 밸런스 목록입니다." msgid "" "The list of resource types to create. This list may contain type names or " "aliases defined in the resource registry. Specific template names are not " "supported." msgstr "" "작성할 자원 유형 목록입니다. 이 목록에는 자원 레지스트리에 정의된 타입 이름 " "또는 별명이 포함될 수 있습니다. 특정 템플리트 이름은 지원되지 않습니다." msgid "The list of tags to associate with the volume." msgstr "볼륨과 연관시킬 태그의 목록입니다." msgid "The load balancer transport protocol to use." msgstr "사용할 로드 밸런서 전송 프로토콜입니다." msgid "" "The location where the volume is exposed on the instance. This assignment " "may not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "볼륨이 인스턴스에서 노출되는 위치입니다. 이 지정은 지켜지지 않을 수 있으며 경" "로 /dev/disk/by-id/virtio-를 대신 사용하는 것이 권장됩니다." msgid "The manually assigned alternative public IPv4 address of the server." msgstr "서버의 수동으로 지정된 대체 공용 IPv4 주소입니다." msgid "The manually assigned alternative public IPv6 address of the server." msgstr "서버의 수동으로 지정된 대체 공용 IPv6 주소입니다." msgid "The maximum number of connections per second allowed for the vip." msgstr "vip에 허용되는 초당 최대 연결 수입니다." msgid "" "The maximum number of connections permitted for this load balancer. Defaults " "to -1, which is infinite." msgstr "" "이 로드 밸런서에 허용된 최대 연결 수입니다. 기본값은 -1로, 무제한을 나타냅니" "다." msgid "The maximum number of resources to create at once." msgstr "한 번에 작성할 최대 자원 수입니다." msgid "The maximum number of resources to replace at once." msgstr "한 번에 대체할 최대 자원 수입니다." msgid "" "The maximum number of seconds to wait for the resource to signal completion. " "Once the timeout is reached, creation of the signal resource will fail." msgstr "" "자원이 완료 신호를 표시할 때가지 기다리는 최대 시간(초)입니다. 제한시간에 도" "달하면 신호 자원 작성이 실패합니다." msgid "" "The maximum port number in the range that is matched by the security group " "rule. The port_range_min attribute constrains the port_range_max attribute. " "If the protocol is ICMP, this value must be an ICMP type." msgstr "" "보안 그룹 규칙과 일치하는 범위의 최대 포트 번호입니다. port_range_min 속성은 " "port_range_max 속성을 포함합니다. 프로토콜이 ICMP인 경우 이 값은 ICMP 유형이" "어야 합니다." msgid "" "The maximum transmission unit size (in bytes) of the ipsec site connection." msgstr "ipsec 사이트 연결의 최대 전송 단위 ID입니다." msgid "The maximum transmission unit size(in bytes) for the network." msgstr "네트워크의 최대 전송 단위 크기(바이트)입니다." msgid "The metering label ID to associate with this metering rule." msgstr "이 측정 규칙과 연관시킬 측정 레이블 ID입니다." msgid "" "The metric dimensions to match to the alarm dimensions. One or more " "dimension key names separated by a comma." msgstr "" "알람 크기와 일치할 지표 크기입니다. 쉼표로 구분된 하나 이상의 크기 키 이름입" "니다." msgid "" "The minimum number of characters from this character class that will be in " "the generated string." msgstr "생성된 문자열에 포함될 이 문자 클래스의 최소 문자 수입니다." msgid "" "The minimum number of characters from this sequence that will be in the " "generated string." msgstr "생성된 문자열에 포함될 이 순서의 최소 문자 수입니다." msgid "" "The minimum number of resources in service while rolling updates are being " "executed." msgstr "롤링 업데이트가 실행되는 동안 서비스의 최소 자원 수입니다." msgid "" "The minimum port number in the range that is matched by the security group " "rule. If the protocol is TCP or UDP, this value must be less than or equal " "to the value of the port_range_max attribute. If the protocol is ICMP, this " "value must be an ICMP type." msgstr "" "보안 그룹 규칙과 일치하는 범위의 최소 포트 번호입니다. 프로토콜이 TCP 또는 " "UDP인 경우 이 값은 port_range_max 속성과 같거나 작아야 합니다. 프로토콜이 " "ICMP인 경우 이 값은 ICMP 유형이어야 합니다." 
msgid "The name for the QoS policy." msgstr "QoS 정책의 이름입니다." msgid "The name for the address scope." msgstr "주소 범위의 이름입니다." msgid "" "The name of the driver used for instantiating container networks. By " "default, Magnum will choose the pre-configured network driver based on COE " "type." msgstr "" "컨테이너 네트워크를 인스턴스화하는 데 사용하는 드라이버의 이름입니다. 기본적" "으로 Magnum에서 COE 타입을 기반으로 사전 구성된 네트워크 드라이버를 선택합니" "다." msgid "The name of the error document." msgstr "오류 문서의 이름입니다." msgid "The name of the hosted zone that is associated with the LoadBalancer." msgstr "LoadBalancer와 연관된 호스트된 구역의 이름입니다." msgid "The name of the ike policy." msgstr "ike 정책의 이름입니다." msgid "The name of the index document." msgstr "색인 문서의 이름입니다." msgid "The name of the ipsec policy." msgstr "ipsec 정책의 이름입니다." msgid "The name of the ipsec site connection." msgstr "ipsec 사이트 연결의 이름입니다." msgid "The name of the key pair." msgstr "키 쌍의 이름입니다." msgid "The name of the network gateway." msgstr "네트워크 게이트웨이의 이름입니다." msgid "The name of the network." msgstr "네트워크의 이름입니다." msgid "The name of the router." msgstr "라우터의 이름입니다." msgid "The name of the subnet." msgstr "서브넷의 이름입니다." msgid "The name of the user that the new key will belong to." msgstr "새 키가 속한 사용자의 이름입니다." msgid "" "The name of the virtual device. The name must be in the form ephemeralX " "where X is a number starting from zero (0); for example, ephemeral0." msgstr "" "가상 디바이스의 이름입니다. 이름은 ephemeralX 양식이어야 하며 여기서 X는 영" "(0)에서 시작하는 숫자입니다(예: ephemeral0)." msgid "The name of the vpn service." msgstr "vpn 서비스의 이름입니다." msgid "The name or ID of QoS policy to attach to this network." msgstr "이 네트워크에 연결할 QoS 정책의 이름 또는 ID입니다." msgid "The name or ID of QoS policy to attach to this port." msgstr "이 포트에 연결할 QoS 정책의 이름 또는 ID입니다." msgid "The name or ID of parent of this keystone project in hierarchy." msgstr "계층 구조에 있는 이 keystone 프로젝트의 상위 이름 또는 ID입니다." msgid "The name or ID of target cluster." msgstr "대상 클러스터의 이름 또는 ID입니다." msgid "The name or ID of the bay model." msgstr "Bay 모델의 이름 또는 ID입니다." msgid "The name or ID of the subnet on which to allocate the VIP address." msgstr "VIP 주소를 할당할 서브넷의 이름 또는 ID입니다." msgid "The name or ID of the subnet pool." msgstr "서브넷 풀의 이름 또는 ID입니다." msgid "The name or id of the Senlin profile." msgstr "Senlin 프로파일의 이름 또는 id입니다." msgid "The negotiation mode of the ike policy." msgstr "ike 정책의 협상 모드입니다." msgid "The next hop for the destination." msgstr "대상의 다음 hop입니다." msgid "The node count for this bay." msgstr "이 Bay의 노드 수입니다." msgid "The notification methods to use when an alarm state is ALARM." msgstr "알람 상태가 ALARM이면 사용할 알림 방법입니다." msgid "The notification methods to use when an alarm state is OK." msgstr "알람 상태가 OK이면 사용할 알림 방법입니다." msgid "The notification methods to use when an alarm state is UNDETERMINED." msgstr "알람 상태가 UNDETERMINED이면 사용할 알림 방법입니다." msgid "The number of I/O operations per second that the volume supports." msgstr "볼륨이 지원하는 초당 I/O 조작 수입니다." msgid "The number of bytes stored in the container." msgstr "컨테이너에 저장된 바이트 수입니다." msgid "" "The number of consecutive health probe failures required before moving the " "instance to the unhealthy state" msgstr "" "인스턴스를 비양호 상태로 이동하기 전에 필요한 연속 상태 프로브 실패 수입니다." msgid "" "The number of consecutive health probe successes required before moving the " "instance to the healthy state." msgstr "" "인스턴스를 양호 상태로 이동하기 전에 필요한 연속 상태 프로브 성공 수입니다." msgid "The number of master nodes for this bay." msgstr "Bay의 마스터 노드 수입니다." msgid "The number of objects stored in the container." msgstr "컨테이너에 저장된 오브젝트 수입니다." 
msgid "The number of replicas to be created." msgstr "작성할 복제본 수입니다." msgid "The number of resources to create." msgstr "작성할 자원 수입니다." msgid "The number of seconds to wait between batches of updates." msgstr "업데이트 일괄처리 사이에 대기하는 시간(초)입니다." msgid "The number of seconds to wait between batches." msgstr "일괄처리 사이에 대기하는 시간(초)입니다." msgid "The number of seconds to wait for the cluster actions." msgstr "클러스터 작업을 대기하는 시간(초)입니다." msgid "" "The number of seconds to wait for the correct number of signals to arrive." msgstr "올바른 신호 수가 도착할 때까지 대기하는 시간(초)입니다." msgid "" "The number of success signals that must be received before the stack " "creation process continues." msgstr "스택 작성 프로세스가 계속되기 전에 수신되어야 하는 성공 신호 수입니다." msgid "" "The optional public key. This allows users to supply the public key from a " "pre-existing key pair. If not supplied, a new key pair will be generated." msgstr "" "선택적 공개 키입니다. 이로 인해 사용자가 이전의 기존 키 쌍에서 공개 키를 제공" "할 수 있습니다. 제공되지 않는 경우 새 키 쌍이 생성됩니다." msgid "" "The owner tenant ID of the address scope. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "주소 범위의 소유자 테넌트 ID입니다. 관리 사용자만 자체 ID가 아닌 테넌트 ID를 " "지정할 수 있습니다." msgid "The owner tenant ID of this QoS policy." msgstr "이 QoS 정책의 소유자 테넌트 ID입니다." msgid "The owner tenant ID of this rule." msgstr "이 규칙의 소유자 테넌트 ID입니다." msgid "" "The owner tenant ID. Only required if the caller has an administrative role " "and wants to create a RBAC for another tenant." msgstr "" "소유자 테넌트 ID입니다. 호출자에게 관리 역할이 있고 다른 테넌트의 RBAC를 작성" "하려는 경우에만 필요합니다." msgid "The parameters passed to action when the receiver is signaled." msgstr "수신기에서 신호를 받을 때 작업에 전달된 매개변수입니다." msgid "The parent URL of the container." msgstr "컨테이너의 상위 URL입니다." msgid "The payload of the created certificate, if available." msgstr "사용 가능한 경우, 작성된 인증서의 페이로드입니다." msgid "The payload of the created intermediates, if available." msgstr "사용 가능한 경우, 작성된 중간의 페이로드입니다." msgid "The payload of the created private key, if available." msgstr "사용 가능한 경우, 작성된 개인 키의 페이로드입니다." msgid "The payload of the created public key, if available." msgstr "사용 가능한 경우, 작성된 공개 키의 페이로드입니다." msgid "The perfect forward secrecy of the ike policy." msgstr "ike 정책의 전제 전달 비밀 유지입니다." msgid "The perfect forward secrecy of the ipsec policy." msgstr "ipsec 정책의 전체 전달 비밀 유지입니다." #, python-format msgid "The personality property may not contain greater than %s entries." msgstr "개인 정보 특성은 %s개를 초과하는 항목을 포함할 수 없습니다." msgid "The physical mechanism by which the virtual network is implemented." msgstr "가상 네트워크를 이용하는 물리적 메커니즘을 구현합니다." msgid "The port being checked." msgstr "포트를 확인합니다." msgid "The port id, either subnet or port_id should be specified." msgstr "포트 ID, subnet 또는 port_id가 지정되어야 합니다." msgid "The port on which the server will listen." msgstr "서버가 청취할 포트입니다. " msgid "The port, either subnet or port should be specified." msgstr "포트, subnet 또는 port가 지정되어야 합니다." msgid "The pre-shared key string of the ipsec site connection." msgstr "ipsec 사이트 연결의 사전 공유 키 문자열입니다." msgid "The private key if it has been saved." msgstr "저장된 경우 개인 키입니다." msgid "The profile of certificate to use." msgstr "사용할 인증서 프로파일입니다." msgid "" "The protocol that is matched by the security group rule. Valid values " "include tcp, udp, and icmp." msgstr "" "보안 그룹 규칙과 일치하는 프로토콜입니다. 올바른 값은 tcp, udp, icmp를 포함합" "니다." msgid "The public key." msgstr "공개 키입니다." msgid "The query string is malformed" msgstr "조회 문자열은 잘못된 형식임" msgid "The query to filter the metrics." msgstr "지표를 필터링하기 위한 조회." msgid "" "The random string generated by this resource. 
This value is also available " "by referencing the resource." msgstr "" "이 자원으로 생성된 랜덤 문자열입니다. 이 값은 자원을 참조하여 사용할 수 있습" "니다." msgid "The reference to a LaunchConfiguration resource." msgstr "LaunchConfiguration 자원에 대한 참조입니다." msgid "" "The remote IP prefix (CIDR) to be associated with this security group rule." msgstr "이 보안 그룹 규칙과 연관시킬 원격 IP 접두부(CIDR)입니다." msgid "The remote branch router identity of the ipsec site connection." msgstr "ipsec 사이트 연결의 원격 분기 라우터 ID입니다." msgid "The remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "원격 분기 라우터 공용 IPv4 주소 또는 IPv6 주소 또는 FQDN입니다." msgid "" "The remote group ID to be associated with this security group rule. If no " "value is specified then this rule will use this security group for the " "remote_group_id. The remote mode parameter must be set to \"remote_group_id" "\"." msgstr "" "이 보안 그룹 규칙과 연관시킬 원격 그룹 ID입니다. 값이 지정되지 않는 경우 이 " "규칙은 remote_group_id의 이 보안 그룹을 사용합니다. 원격 모드 매개변수는 " "\"remote_group_id\"로 지정해야 합니다." msgid "The remote subnet(s) in CIDR format of the ipsec site connection." msgstr "ipsec 사이트 연결의 CIDR 형식의 원격 서브넷입니다." msgid "The request is missing an action or operation parameter" msgstr "요청에서 조치 또는 조작 매개변수가 누락됨" msgid "The request processing has failed due to an internal error" msgstr "요청한 프로세스는 내부 오류로 실패했습니다" msgid "The request signature does not conform to AWS standards" msgstr "요청한 서명은 AWS 표준을 따르지 않습니다." msgid "" "The request signature we calculated does not match the signature you provided" msgstr "계산된 요청 서명이 사용자가 제공한 서명과 일치하지 않음" msgid "The requested action is not yet implemented" msgstr "요청된 조치가 아직 구현되지 않음" #, python-format msgid "The resource %s is already being updated." msgstr "자원 %s이(가) 이미 업데이트 중입니다." msgid "The resource href of the queue." msgstr "큐의 자원 href입니다." msgid "The route mode of the ipsec site connection." msgstr "ipsec 사이트 연결의 라우트 모드입니다." msgid "The router id." msgstr "라우터 ID입니다." msgid "The router to which the vpn service will be inserted." msgstr "vpn 서비스를 삽입할 라우터입니다." msgid "The router." msgstr "라우터입니다." msgid "The safety assessment lifetime configuration for the ike policy." msgstr "ike 정책의 안전 평가 유효시간 구성입니다." msgid "The safety assessment lifetime configuration of the ipsec policy." msgstr "ipsec 정책의 안전 평가 유효시간 구성입니다." msgid "" "The security group that you can use as part of your inbound rules for your " "LoadBalancer's back-end instances." msgstr "" "LoadBalancer의 백엔드 인스턴스에 대한 인바운드 규칙의 일부로 사용할 수 있는 " "보안 그룹입니다." msgid "" "The server could not comply with the request since it is either malformed or " "otherwise incorrect." msgstr "" "요청 양식이 잘못되었거나 요청이 올바르지 않기 때문에 서버가 요청을 준수할 수 " "없습니다." msgid "The set of parameters passed to this nested stack." msgstr "이 중첩 스택에 전달되는 매개변수 세트입니다." msgid "The size in GB of the docker volume." msgstr "Docker 볼륨의 크기(GB)입니다." msgid "The size of AutoScalingGroup can not be less than zero" msgstr "AutoScalingGroup 크기는 0 미만일 수 없음" msgid "" "The size of the prefix to allocate when the cidr or prefixlen attributes are " "not specified while creating a subnet." msgstr "" "서브넷을 작성할 때 cidr 또는 prefixlen 속성이 지정되지 않은 경우 할당할 접두" "부의 크기입니다." msgid "The size of the swap, in MB." msgstr "스왑의 크기(MB)입니다." msgid "The size of the volume in GB." msgstr "볼륨의 크기(GB)입니다." msgid "" "The size of the volume, in GB. It is safe to leave this blank and have the " "Compute service infer the size." msgstr "" "볼륨의 크기(GB)입니다. 이 항목을 비워두고 Compute 서비스에서 크기를 추론하도" "록 하는 것이 안전합니다." msgid "" "The size of the volume, in GB. Must be equal or greater than the size of the " "snapshot. 
It is safe to leave this blank and have the Compute service infer " "the size." msgstr "" "볼륨의 크기(GB)입니다. 스냅샷의 크기보다 크거나 같아야 합니다. 이 항목을 비워" "두고 계산 서비스에서 크기를 추론하도록 하는 것이 좋습니다." msgid "The snapshot the volume was created from, if any." msgstr "볼륨이 작성된 스냅샷입니다(해당하는 경우)." msgid "The source of certificate request." msgstr "인증서 요청의 소스입니다." #, python-format msgid "The specified reference \"%(resource)s\" (in %(key)s) is incorrect." msgstr "지정된 참조 \"%(resource)s\"(%(key)s)이(가) 올바르지 않습니다." msgid "The start and end addresses for the allocation pools." msgstr "할당 풀에 대한 시작 및 끝 주소입니다." msgid "The status of the container." msgstr "컨테이너의 상태입니다." msgid "The status of the firewall." msgstr "방화벽의 상태입니다." msgid "The status of the ipsec site connection." msgstr "ipsec 사이트 연결의 상태입니다." msgid "The status of the network." msgstr "네트워크의 상태입니다." msgid "The status of the order." msgstr "순서의 상태입니다." msgid "The status of the port." msgstr "포트의 상태입니다." msgid "The status of the router." msgstr "라우터의 상태입니다." msgid "The status of the secret." msgstr "시크릿의 상태입니다." msgid "The status of the vpn service." msgstr "vpn 서비스의 상태입니다." msgid "" "The string that was stored. This value is also available by referencing the " "resource." msgstr "저장된 문자열입니다. 이 값은 자원을 참조해서도 사용할 수 있습니다." msgid "The subject of the certificate request." msgstr "인증서 요청의 주제입니다." msgid "" "The subnet for the port on which the members of the pool will be connected." msgstr "풀의 멤버가 연결될 포트의 서브넷입니다." msgid "The subnet, either subnet or port should be specified." msgstr "서브넷, subnet 또는 port가 지정되어야 합니다." msgid "The tag key name." msgstr "태그 키 이름입니다." msgid "The tag value." msgstr "태그 값입니다." msgid "The template is not a JSON object or YAML mapping." msgstr "템플리트가 JSON 오브젝트 또는 YAML 맵핑이 아닙니다." #, python-format msgid "The template section is invalid: %(section)s" msgstr "템플리트 섹션이 올바르지 않음: %(section)s" #, python-format msgid "The template version is invalid: %(explanation)s" msgstr "템플리트 버전이 올바르지 않음: %(explanation)s" msgid "The tenant owning this floating IP." msgstr "이 부동 IP를 소유하는 테넌트입니다." msgid "The tenant owning this network." msgstr "이 네트워크를 소유하는 테넌트입니다." msgid "The time range in seconds." msgstr "시간 범위(초)." msgid "The timestamp indicating volume creation." msgstr "볼륨 작성을 표시하는 시간소인입니다." msgid "The transform protocol of the ipsec policy." msgstr "ipsec 정책의 변환 프로토콜입니다." msgid "The type of profile." msgstr "프로파일의 타입입니다." msgid "The type of senlin policy." msgstr "senlin 정책의 타입입니다." msgid "The type of the certificate request." msgstr "인증서 요청의 타입입니다." msgid "The type of the order." msgstr "순서의 타입입니다." msgid "The type of the resources in the group." msgstr "그룹의 자원 타입입니다." msgid "The type of the secret." msgstr "시크릿의 타입입니다." msgid "The type of the volume mapping to a backend, if any." msgstr "백엔드에 대한 볼륨 맵핑의 유형입니다(있는 경우)." msgid "The type/format the secret data is provided in." msgstr "시크릿 데이터가 제공된 타입/형식입니다." msgid "The type/mode of the algorithm associated with the secret information." msgstr "시크릿 정보와 연관된 알고리즘의 타입/모드입니다." msgid "The unencrypted plain text of the secret." msgstr "시크릿의 암호화되지 않은 일반 텍스트입니다." msgid "" "The unique identifier of ike policy associated with the ipsec site " "connection." msgstr "ipsec 사이트 연결과 연관된 ike 정책의 고유 ID입니다." msgid "" "The unique identifier of ipsec policy associated with the ipsec site " "connection." msgstr "ipsec 사이트 연결과 연관된 ipsec 정책의 고유 ID입니다." msgid "" "The unique identifier of the router to which the vpn service was inserted." msgstr "vpn 서비스가 삽입되는 라우터의 고유 ID입니다." 
msgid "" "The unique identifier of the subnet in which the vpn service was created." msgstr "vpn 서비스가 작성된 서브넷의 고유 ID입니다." msgid "The unique identifier of the tenant owning the ike policy." msgstr "ike 정책을 소유하는 테넌트의 고유 ID입니다." msgid "The unique identifier of the tenant owning the ipsec policy." msgstr "ipsec 정책을 소유하는 테넌트의 고유 ID입니다." msgid "The unique identifier of the tenant owning the ipsec site connection." msgstr "ipsec 사이트 연결을 소유하는 테넌트의 고유 ID입니다." msgid "The unique identifier of the tenant owning the vpn service." msgstr "vpn 서비스를 소유하는 테넌트의 고유 ID입니다." msgid "" "The unique identifier of vpn service associated with the ipsec site " "connection." msgstr "ipsec 사이트 연결과 연관된 vpn 서비스의 고유 ID입니다." msgid "" "The user-defined region ID and should unique to the OpenStack deployment. " "While creating the region, heat will url encode this ID." msgstr "" "사용자 정의 지역 ID이며, OpenStack 배포에 고유해야 합니다. 지역을 작성하는 동" "안 heat url에서 이 ID를 인코딩합니다." msgid "" "The value for the socket option TCP_KEEPIDLE. This is the time in seconds " "that the connection must be idle before TCP starts sending keepalive probes." msgstr "" "소켓 옵션 TCP_KEEPIDLE의 값입니다. 이는 TCP가 활성 유지 프로브 전송을 시작하" "기 전에 연결이 유휴되어야 하는 시간(초)입니다." #, python-format msgid "The value must be at least %(min)s." msgstr "값이 최소 %(min)s이어야 합니다." #, python-format msgid "The value must be in the range %(min)s to %(max)s." msgstr "값이 %(min)s - %(max)s 범위여야 합니다. " #, python-format msgid "The value must be no greater than %(max)s." msgstr "값이 %(max)s 이하여야 합니다. " #, python-format msgid "The values of the \"for_each\" argument to \"%s\" must be lists" msgstr "\"%s\"에 대한 \"for_each\" 인수 값은 목록이어야 함" msgid "The version of the ike policy." msgstr "ike 정책의 버전입니다." msgid "" "The vnic type to be bound on the neutron port. To support SR-IOV PCI " "passthrough networking, you can request that the neutron port to be realized " "as normal (virtual nic), direct (pci passthrough), or macvtap (virtual " "interface with a tap-like software interface). Note that this only works for " "Neutron deployments that support the bindings extension." msgstr "" "Neutron 포트에서 바인드할 vnic 유형입니다. SR-IOV PCI 패스스루 네트워킹을 지" "원하려는 경우, Neutron 포트를 정상(가상 nic), 직접(pci 패스스루) 또는 " "macvtap(탭 형식 소프트웨어 인터페이스가 있는 가상 인터페이스)으로 실현하도록 " "요청할 수 있습니다. 단, 이러한 요청은 바인딩 확장을 지원하는 Neutron 배치에 " "대해서만 작동합니다." msgid "The volume type." msgstr "볼륨 유형입니다." msgid "The volume used as source, if any." msgstr "소스로 사용된 볼륨입니다(해당하는 경우)." msgid "The volume_id can be boot or non-boot device to the server." msgstr "volume_id는 서버에 대한 부팅 또는 비부팅 디바이스가 될 수 있습니다." msgid "The website endpoint for the specified bucket." msgstr "지정된 버킷의 웹 사이트 엔드포인트입니다." #, python-format msgid "There is no rule %(rule)s. List of allowed rules is: %(rules)s." msgstr "규칙 %(rule)s이(가) 없습니다. 허용되는 규칙 목록: %(rules)s." msgid "" "There is no such option during 5.0.0, so need to make this attribute " "unsupported, otherwise error will raised." msgstr "" "5.0.0 중에는 이러한 옵션이 없으므로 이 특성을 지원하지 않게 만들어야 합니다. " "그렇지 않으면 오류가 발생합니다." msgid "" "There is no such option during 5.0.0, so need to make this property " "unsupported while it not used." msgstr "" "5.0.0 중에는 이러한 옵션이 없으므로, 이 옵션을 사용하지 않는 동안에는 이 특성" "을 지원하지 않게 만들어야 합니다." #, python-format msgid "" "There was an error loading the definition of the global resource type " "%(type_name)s." msgstr "" "글로벌 자원 유형 %(type_name)s의 정의를 로드하는 중에 오류가 발생했습니다." msgid "This endpoint is enabled or disabled." msgstr "이 엔드포인트를 사용하거나 사용하지 않게 설정합니다." msgid "This project is enabled or disabled." msgstr "이 프로젝트를 사용하거나 사용하지 않게 설정합니다." 
msgid "This region is enabled or disabled." msgstr "이 지역을 사용하거나 사용하지 않게 설정합니다." msgid "This service is enabled or disabled." msgstr "이 서비스를 사용하거나 사용하지 않게 설정합니다." msgid "Threshold to evaluate against." msgstr "평가할 임계값입니다." msgid "Time To Live (Seconds)." msgstr "초 단위의 지속 시간(TTL)." msgid "Time of the first execution in format \"YYYY-MM-DD HH:MM\"." msgstr "\"YYYY-MM-DD HH:MM\" 형식의 첫 번째 실행 시간입니다." msgid "Time of the next execution in format \"YYYY-MM-DD HH:MM:SS\"." msgstr "\"YYYY-MM-DD HH:MM:SS\" 형식의 다음 실행 시간입니다." msgid "" "Timeout for client connections' socket operations. If an incoming connection " "is idle for this number of seconds it will be closed. A value of '0' means " "wait forever." msgstr "" "클라이언트 연결의 소켓 조작에 대한 제한시간입니다. 수신 연결이 이 기간(초) 동" "안 유휴 상태이면 연결이 닫힙니다. 값이 '0'이면 무기한 대기합니다." msgid "Timeout for creating the bay in minutes. Set to 0 for no timeout." msgstr "" "Bay를 작성하기 위한 제한시간(분)입니다. 제한시간이 없게 하려면 0으로 설정하십" "시오." msgid "Timeout in seconds for stack action (ie. create or update)." msgstr "스택 조치(즉, 작성 또는 업데이트)에 대한 제한시간(초)입니다." msgid "" "Toggle to enable/disable caching when Orchestration Engine looks for other " "OpenStack service resources using name or id. Please note that the global " "toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use " "this feature." msgstr "" "Orchestration Engine이 이름 또는 id를 사용하여 다른 OpenStack 서비스 자원을 " "검색할 때 캐싱을 사용/사용하지 않게 토글하십시오. 이 기능을 사용하려면 oslo." "cache([cache] 그룹에서 enabled=True)의 글로벌 토글을 사용해야 합니다. " msgid "" "Toggle to enable/disable caching when Orchestration Engine retrieves " "extensions from other OpenStack services. Please note that the global toggle " "for oslo.cache(enabled=True in [cache] group) must be enabled to use this " "feature." msgstr "" "Orchestration Engine이 다른 OpenStack 서비스의 확장을 검색할 때 캐싱을 사용/" "사용하지 않게 토글하십시오. 이 기능을 사용하려면 oslo.cache([cache] 그룹에서 " "enabled=True)의 글로벌 토글을 사용해야 합니다. " msgid "" "Toggle to enable/disable caching when Orchestration Engine validates " "property constraints of stack.During property validation with constraints " "Orchestration Engine caches requests to other OpenStack services. Please " "note that the global toggle for oslo.cache(enabled=True in [cache] group) " "must be enabled to use this feature." msgstr "" "Orchestration Engine이 스택의 특성 제한조건을 검증할 때 캐싱을 사용/사용하지 " "않게 토글하십시오. 제한조건으로 특성을 검증하는 동안 Orchestration Engine이 " "다른 OpenStack 서비스에 대한 요청을 캐시합니다. 이 기능을 사용하려면 oslo." "cache([cache] 그룹에서 enabled=True)의 글로벌 토글을 사용해야 합니다. " msgid "" "Token for stack-user which can be used for signalling handle when " "signal_transport is set to TOKEN_SIGNAL. None for all other signal " "transports." msgstr "" "signal_transport가 TOKEN_SIGNAL로 설정되면 핸들 신호 표시에 사용할 수 있는 " "stack-user의 토큰입니다. 기타 모든 신호 전송의 경우 없습니다." msgid "" "Tokens are not needed for Swift TempURLs. This attribute is being kept for " "compatibility with the OS::Heat::WaitConditionHandle resource." msgstr "" "Swift TempURL에는 토큰이 필요하지 않습니다. 이 속성은 OS::Heat::" "WaitConditionHandle 자원과의 호환성을 위해 유지됩니다." msgid "Topic" msgstr "Topic" msgid "Transform protocol for the ipsec policy." msgstr "ipsec 정책에 대한 변환 프로토콜입니다." msgid "True if alarm evaluation/actioning is enabled." msgstr "알람 평가/조치가 사용으로 설정된 경우 true입니다." msgid "" "True if the system should remember a generated private key; False otherwise." msgstr "" "시스템이 생성된 개인 키를 기억해야 하는 경우 true, 그렇지 않은 경우 false입니" "다." msgid "Type of access that should be provided to guest." msgstr "게스트에 제공해야 하는 액세스 타입입니다." msgid "Type of adjustment (absolute or percentage)." msgstr "조정 유형입니다(절대 또는 백분율)." 
msgid "" "Type of endpoint in Identity service catalog to use for communication with " "the OpenStack service." msgstr "" "OpenStack 서비스와 통신하는 데 사용할 ID 서비스 카탈로그의 엔드포인트 유형입" "니다." msgid "Type of keystone Service." msgstr "keystone 서비스의 타입입니다." msgid "Type of receiver." msgstr "수신기의 타입입니다." msgid "Type of the data source." msgstr "데이터 소스의 타입입니다." msgid "Type of the notification." msgstr "알림의 타입입니다." msgid "Type of the object that RBAC policy affects." msgstr "RBAC 정책이 영향을 미치는 오브젝트 타입입니다." msgid "Type of the value of the input." msgstr "입력 값의 유형입니다." msgid "Type of the value of the output." msgstr "출력 값의 유형입니다." msgid "Type of the volume to create on Cinder backend." msgstr "Cinder 백엔드에 작성할 볼륨 유형입니다." msgid "URL for API authentication" msgstr "API 인증용 URL" msgid "URL for the data source." msgstr "데이터 소스의 URL입니다." msgid "" "URL for the job binary. Must be in the format swift:/// or " "internal-db://." msgstr "" "작업 바이너리의 URL입니다. swift:/// 또는 internal-db://" " 형식이어야 합니다." msgid "" "URL of TempURL where resource will signal completion and optionally upload " "data." msgstr "" "자원이 완료를 신호 표시하고 선택적으로 데이터를 업로드하는 TempURL의 URL입니" "다." msgid "URL of keystone service endpoint." msgstr "keystone 서비스 엔드포인트의 URL입니다." msgid "URL of the Heat CloudWatch server." msgstr "히트 CloudWatch 서버의 URL입니다." msgid "" "URL of the Heat metadata server. NOTE: Setting this is only needed if you " "require instances to use a different endpoint than in the keystone catalog" msgstr "" "Heat 메타데이터 서버의 URL입니다. 참고: keystone 카탈로그와 다른 엔드포인트" "를 사용하는 인스턴스가 필요한 경우에만 이 설정이 필요합니다." msgid "URL of the Heat waitcondition server." msgstr "히트 waitcondition 서버의 URL입니다." msgid "" "URL where the data for this image already resides. For example, if the image " "data is stored in swift, you could specify \"swift://example.com/container/" "obj\"." msgstr "" "이 이미지에 대한 데이터가 이미 상주하는 URL입니다. 예를 들어, 이미지 데이터" "가 swift에 저장되어 있는 경우에는 \"swift://example.com/container/obj\"를 지" "정할 수 있습니다." msgid "UUID of the internal subnet to which the instance will be attached." msgstr "인스턴스가 접속되는 내부 서브넷의 UUID입니다." #, python-format msgid "" "Unable to find neutron provider '%(provider)s', available providers are " "%(providers)s." msgstr "" "neutron 프로바이더 '%(provider)s'을(를) 찾을 수 없음, 사용 가능한 프로바이더" "는 %(providers)s입니다." #, python-format msgid "" "Unable to find senlin policy type '%(pt)s', available policy types are " "%(pts)s." msgstr "" "senlin 정책 타입 '%(pt)s'을(를) 찾을 수 없음, 사용 가능한 정책 타입은 %(pts)s" "입니다." #, python-format msgid "" "Unable to find senlin profile type '%(pt)s', available profile types are " "%(pts)s." msgstr "" "senlin 프로파일 타입 '%(pt)s'을(를) 찾을 수 없음, 사용 가능한 프로파일 타입" "은 %(pts)s입니다." #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "구성 파일 %(conf_file)s에서 %(app_name)s을(를) 로드할 수 없습니다.\n" "오류 발생: %(e)r" #, python-format msgid "Unable to locate config file [%s]" msgstr "구성 파일 [%s]을(를) 찾을 수 없음" #, python-format msgid "Unexpected action %(action)s" msgstr "예상치 않은 조치 %(action)s" #, python-format msgid "Unexpected action %s" msgstr "예상치 않은 조치 %s" #, python-format msgid "" "Unexpected properties: %(unexpected)s. Only these properties are allowed for " "%(type)s type of order: %(allowed)s." msgstr "" "예상치 못한 특성: %(unexpected)s. 이러한 특성만 %(type)s 타입의 순서에 허용" "됨: %(allowed)s." msgid "Unique identifier for the device." msgstr "디바이스의 공유 ID입니다." msgid "" "Unique identifier for the ike policy associated with the ipsec site " "connection." msgstr "ipsec 사이트 연결과 연관된 ike 정책의 고유 ID입니다." 
msgid "" "Unique identifier for the ipsec policy associated with the ipsec site " "connection." msgstr "ipsec 사이트 연결과 연관된 ipsec 정책의 고유 ID입니다." msgid "Unique identifier for the network owning the port." msgstr "포트를 소유하는 네트워크의 고유 ID입니다." msgid "" "Unique identifier for the router to which the vpn service will be inserted." msgstr "vpn 서비스가 삽입되는 라우터의 고유 ID입니다." msgid "" "Unique identifier for the vpn service associated with the ipsec site " "connection." msgstr "ipsec 사이트와 연관된 vpn 서비스의 고유 ID입니다." msgid "" "Unique identifier of the firewall policy to which this firewall rule belongs." msgstr "이 방화벽 규칙이 속하는 방화벽 정책의 고유 ID입니다." msgid "Unique identifier of the firewall policy used to create the firewall." msgstr "방화벽을 작성하는 데 사용되는 방화벽 정책의 고유 ID입니다." msgid "Unknown" msgstr "알 수 없음" #, python-format msgid "Unknown Property %s" msgstr "알 수 없는 특성 %s" #, python-format msgid "Unknown attribute \"%s\"" msgstr "알 수 없는 속성 \"%s\"" #, python-format msgid "Unknown error retrieving %s" msgstr "%s 검색 중 알 수 없는 오류 발생" #, python-format msgid "Unknown input %s" msgstr "알 수 없는 입력 %s" #, python-format msgid "Unknown key(s) %s" msgstr "알 수 없는 키 %s" msgid "Unknown share_status during creation of share \"{0}\"" msgstr "공유 \"{0}\" 작성 중에 알 수 없는 share_status" #, python-format msgid "Unknown status creating Bay '%(name)s' - %(reason)s" msgstr "알 수 없는 Bay '%(name)s' - %(reason)s 작성 상태" msgid "Unknown status during deleting share \"{0}\"" msgstr "공유 \"{0}\" 삭제 중에 알 수 없는 상태" #, python-format msgid "Unknown status updating Bay '%(name)s' - %(reason)s" msgstr "알 수 없는 Bay '%(name)s' - %(reason)s 업데이트 상태" #, python-format msgid "Unknown status: %s" msgstr "알 수 없는 상태: %s" #, python-format msgid "" "Unrecognized value \"%(value)s\" for \"%(name)s\", acceptable values are: " "true, false." msgstr "" "\"%(name)s\"에 인식되지 않는 값 \"%(value)s\", 허용되는 값: true, false." #, python-format msgid "Unsupported object type %(objtype)s" msgstr "지원되지 않는 오브젝트 유형 %(objtype)s" #, python-format msgid "Unsupported resource '%s' in LoadBalancerNames" msgstr "LoadBalancerNames의 지원되지 않는 자원 '%s' " msgid "Unversioned keystone url in format like http://0.0.0.0:5000." msgstr "http://0.0.0.0:5000과 같은 형식의 버전이 지정되지 않은 keystone url." #, python-format msgid "Update to properties %(props)s of %(name)s (%(res)s)" msgstr "%(name)s(%(res)s)의 %(props)s 특성으로 업데이트" msgid "Updated At" msgstr "업데이트" msgid "Updating a stack when it is deleting" msgstr "삭제 중에 스택 업데이트" msgid "Updating a stack when it is suspended" msgstr "일시중단될 때 스택 업데이트" msgid "" "Use get_resource|Ref command instead. For example: { get_resource : " " }" msgstr "" "대신 get_resource|Ref 명령을 사용하십시오(예: { get_resource : " " })." msgid "" "Use only with Neutron, to list the internal subnet to which the instance " "will be attached; needed only if multiple exist; list length must be exactly " "1." msgstr "" "Neutron에서만 사용됩니다. 인스턴스가 접속되는 내부 서브넷을 나열합니다. 여러 " "개가 존재하는 경우에만 필요하며 목록 길이는 정확히 1이어야 합니다." #, python-format msgid "Use property %s" msgstr "%s 특성 사용" #, python-format msgid "Use property %s." msgstr "%s 특성을 사용하십시오." msgid "" "Use the `external_gateway_info` property in the router resource to set up " "the gateway." msgstr "" "라우터 자원에서 `external_gateway_info` 특성을 사용하여 게이트웨이를 설정합니" "다." msgid "" "Use the networks attribute instead of first_address. For example: " "\"{get_attr: [, networks, , 0]}\"" msgstr "" "first_address 대신 네트워크 속성을 사용하십시오. 예를 들어, 다음과 같습니다. " "\"{get_attr: [, networks, , 0]}\"" msgid "Use this resource at your own risk." msgstr "이 자원을 사용하면 위험이 따릅니다." 
#, python-format msgid "User %s in invalid domain" msgstr "사용자 %s이(가) 올바르지 않은 도메인에 있음" #, python-format msgid "User %s in invalid project" msgstr "사용자 %s이(가) 올바르지 않은 프로젝트에 있음" msgid "User ID for API authentication" msgstr "API 인증용 사용자 ID" msgid "User data to pass to instance." msgstr "인스턴스에 전달할 사용자 데이터입니다." msgid "User is not authorized to perform action" msgstr "사용자에게 조치를 수행할 권한이 부여되지 않음" msgid "User name to create a user on instance creation." msgstr "인스턴스 작성 시 사용자를 작성할 사용자 이름입니다." msgid "User name." msgstr "사용자 이름." msgid "Username associated with the AccessKey." msgstr "액세스 키와 연관된 사용자 이름입니다." msgid "Username for API authentication" msgstr "API 인증용 사용자 이름" msgid "Username for accessing the data source URL." msgstr "데이터 소스 URL에 액세스하기 위한 사용자 이름입니다." msgid "Username for accessing the job binary URL." msgstr "작업 바이너리 URL에 액세스하기 위한 사용자 이름입니다." msgid "Username of privileged user in the image." msgstr "이미지에서 권한이 부여된 사용자의 사용자 이름입니다." msgid "VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN networks." msgstr "VLAN 네트워크의 VLAN ID 또는 GRE/VXLAN 네트워크의 tunnel-id입니다." msgid "VPC ID for this gateway association." msgstr "이 게이트웨이 연관에 대한 VPC ID입니다." msgid "VPC ID for where the route table is created." msgstr "라우트 테이블이 작성되는 VPC ID입니다." msgid "" "Valid values are encrypt or decrypt. The heat-engine processes must be " "stopped to use this." msgstr "" "올바른 값이 암호화되거나 암호화 해제됩니다. 이것을 사용하려면 heat-engine 프" "로세스를 중지해야 합니다." #, python-format msgid "Value \"%(val)s\" is invalid for data type \"%(type)s\"." msgstr "\"%(val)s\" 값이 데이터 유형 \"%(type)s\"에 올바르지 않습니다." #, python-format msgid "Value '%(value)s' is invalid for '%(name)s' which only accepts integer." msgstr "'%(value)s' 값은 정수만 허용되는 '%(name)s'에 올바르지 않습니다." #, python-format msgid "" "Value '%(value)s' is invalid for '%(name)s' which only accepts non-negative " "integer." msgstr "" "'%(value)s' 값은 음수가 아닌 정수만 허용되는 '%(name)s'에 올바르지않습니다." #, python-format msgid "Value '%s' is not an integer" msgstr "값 '%s'이(가) 정수가 아님" #, python-format msgid "Value must be a comma-delimited list string: %s" msgstr "값은 쉼표로 구분된 목록 문자열이어야 함: %s" #, python-format msgid "Value must be of type %s" msgstr "값은 %s 유형이어야 함" #, python-format msgid "Value must be valid JSON: %s" msgstr "값은 올바른 JSON이어야 함: %s" #, python-format msgid "Value must match pattern: %s" msgstr "값이 패턴에 일치해야 함: %s" msgid "" "Value which can be set or changed on stack update to trigger the resource " "for replacement with a new random string. The salt value itself is ignored " "by the random generator." msgstr "" "임의의 새 문자열로 대체하기 위해 자원을 트리거하도록 스택 업데이트에서 설정 " "또는 변경할 수 있는 값입니다. salt 값 자체는 임의 생성기에서 무시합니다." msgid "" "Value which can be set to fail the resource operation to test failure " "scenarios." msgstr "실패 시나리오를 테스트하는 자원 조작을 실패로 설정할 수 있는 값." msgid "" "Value which can be set to trigger update replace for the particular resource." msgstr "특정 자원의 업데이트 대체를 트리거하도록 설정할 수 있는 값." #, python-format msgid "Version %(objver)s of %(objname)s is not supported" msgstr "%(objname)s의 버전 %(objver)s이(가) 지원되지 않음" msgid "Version for the ike policy." msgstr "ike 정책의 버전입니다." msgid "Version of Hadoop running on instances." msgstr "Hadoop 버전을 인스턴스에서 실행 중입니다." msgid "Version of IP address." msgstr "IP 주소의 버전입니다." msgid "Vip associated with the pool." msgstr "풀과 연관된 Vip입니다." 
msgid "Volume attachment failed" msgstr "볼륨 접속 실패" msgid "Volume backup failed" msgstr "볼륨 백업 실패" msgid "Volume backup restore failed" msgstr "볼륨 백업 복원 실패" msgid "Volume create failed" msgstr "볼륨 작성 실패" msgid "Volume detachment failed" msgstr "볼륨 분리 실패" msgid "Volume in use" msgstr "사용 중인 볼륨" msgid "Volume resize failed" msgstr "볼륨 크기 조정 실패" msgid "Volumes per node." msgstr "노드당 볼륨 수입니다." msgid "Volumes to attach to instance." msgstr "인스턴스에 접속할 볼륨입니다." #, python-format msgid "WaitCondition invalid Handle %s" msgstr "WaitCondition 올바르지 않은 핸들 %s" #, python-format msgid "WaitCondition invalid Handle stack %s" msgstr "WaitCondition 올바르지 않은 핸들 스택 %s" #, python-format msgid "WaitCondition invalid Handle tenant %s" msgstr "WaitCondition 올바르지 않은 핸들 테넌트 %s" msgid "Weight of pool member in the pool (default to 1)." msgstr "풀에서 풀 멤버의 중량입니다(기본값 1로 설정)." msgid "Weight of the pool member in the pool." msgstr "풀에서 풀 멤버의 중량입니다." #, python-format msgid "Went to status %(resource_status)s due to \"%(status_reason)s\"" msgstr "\"%(status_reason)s\" 때문에 %(resource_status)s 상태로 이동함" msgid "" "When both ipv6_ra_mode and ipv6_address_mode are set, they must be equal." msgstr "" "ipv6_ra_mode 및 ipv6_address_mode를 모두 설정한 경우, 이들이 동등해야 합니다." msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "SSL 모드로 서버를 실행하는 경우 구성 파일에서 cert_file 및 key_file 옵션 값 " "둘 다를 지정해야 합니다. " msgid "Whether enable this policy on that cluster." msgstr "해당 클러스터에서 이 정책 사용 여부입니다." msgid "" "Whether the address scope should be shared to other tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only, and restricts changing of shared address scope to unshared with " "update." msgstr "" "주소 범위를 다른 테넌트와 공유해야 하는지 여부입니다. 기본 정책 설정은 이 속" "성의 사용을 관리 사용자로만 한정하며 공유 주소 범위 변경을 업데이트와 공유하" "지 않음으로 제한합니다." msgid "Whether the flavor is shared across all projects." msgstr "모든 프로젝트에서 Flavor가 공유되는지 여부입니다." msgid "" "Whether the image can be deleted. If the value is True, the image is " "protected and cannot be deleted." msgstr "" "이미지를 삭제할 수 있는지 여부입니다. 값이 true이면 이미지가 보호되고 삭제할 " "수 없습니다." msgid "Whether the metering label should be shared across all tenants." msgstr "측정 레이블을 전체 테넌트 간에 공유해야 하는지 여부입니다. " msgid "Whether the network contains an external router." msgstr "네트워크에 외부 라우터가 포함되었는지 여부입니다." msgid "Whether the part content is text or multipart." msgstr "파트 컨텐츠가 텍스트인지 다중 파트인지 여부입니다." msgid "" "Whether the subnet pool will be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "서브넷 풀을 전체 테넌트 간에 공유해야 하는지 여부입니다. 기본 정책 설정에서" "는 관리 사용자만 이 속성을 사용하도록 제한합니다." msgid "Whether the volume type is accessible to the public." msgstr "볼륨 타입에 공개적으로 액세스할 수 있는지 여부입니다." msgid "Whether this QoS policy should be shared to other tenants." msgstr "이 QoS 정책을 다른 테넌트와 공유해야 하는지 여부입니다." msgid "" "Whether this firewall should be shared across all tenants. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "이 방화벽을 모든 테넌트에서 공유해야 하는지 여부입니다. 참고: Neutron의 참" "고: Neutron의 기본 정책 설정에서는 이 특성의 사용을 관리 사용자로만 참고하십" "시오." msgid "" "Whether this is default IPv4/IPv6 subnet pool. There can only be one default " "subnet pool for each IP family. Note that the default policy setting " "restricts administrative users to set this to True." msgstr "" "이것이 기본 IPv4/IPv6 서브넷 풀인지 여부입니다. 
각 IP군의 기본 서브넷 풀은 하" "나만 있을 수 있습니다. 기본 정책 설정에서는 관리 사용자가 이 값을 True로 설정" "하도록 제한합니다." msgid "Whether this network should be shared across all tenants." msgstr "이 네트워크를 전체 테넌트 간에 공유해야 하는지 여부입니다. " msgid "" "Whether this network should be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "이 네트워크를 전체 테넌트 간에 공유해야 하는지 여부입니다. 기본 정책 설정은 " "이 속성의 사용을 관리 사용자로만 제한함을 참고하십시오." msgid "" "Whether this policy should be audited. When set to True, each time the " "firewall policy or the associated firewall rules are changed, this attribute " "will be set to False and will have to be explicitly set to True through an " "update operation." msgstr "" "이 정책을 감사해야 하는지 여부입니다. true로 설정한 경우, 방화벽 정책 또는 연" "관된 방화벽 규칙이 변경될 때마다 이 설정이 false로 설정되므로 업데이트 조작" "을 통해 명시적으로 true로 설정해야 합니다." msgid "Whether this policy should be shared across all tenants." msgstr "이 정책을 전체 테넌트 간에 공유해야 하는지 여부입니다. " msgid "Whether this rule should be enabled." msgstr "이 규칙을 사용해야 하는지 여부입니다." msgid "Whether this rule should be shared across all tenants." msgstr "이 규칙을 전체 테넌트 간에 공유해야 하는지 여부입니다. " msgid "Whether to enable the actions or not." msgstr "작업 사용 여부입니다." msgid "Whether to specify a remote group or a remote IP prefix." msgstr "원격 그룹 또는 원격 IP 접두부를 지정할지 여부입니다." msgid "" "Which lifecycle actions of the deployment resource will result in this " "deployment being triggered." msgstr "이 배포를 트리거할 배포 자원의 라이프사이클 작업입니다." msgid "" "Workflow additional parameters. If Workflow is reverse typed, params " "requires 'task_name', which defines initial task." msgstr "" "워크플로우 추가 매개변수. 워크플로우가 역방향으로 입력된 경우 매개변수에는 초" "기 작업을 정의하는 'task_name'이 필요합니다." msgid "Workflow description." msgstr "워크플로우 설명입니다." msgid "Workflow name." msgstr "워크플로우 이름입니다." msgid "Workflow to execute." msgstr "실행할 워크플로우입니다." msgid "Workflow type." msgstr "워크플로우 유형입니다." #, python-format msgid "Wrong Arguments try: \"%s\"" msgstr "잘못된 인수 시도: \"%s\"" msgid "You are not authenticated." msgstr "인증되지 않은 사용자입니다." msgid "You are not authorized to complete this action." msgstr "이 조치를 완료할 권한이 없습니다. " #, python-format msgid "You are not authorized to use %(action)s." msgstr "%(action)s을(를) 사용할 권한이 없습니다." #, python-format msgid "" "You have reached the maximum stacks per tenant, %d. Please delete some " "stacks." msgstr "테넌트당 최대 스택 수(%d)에 도달했습니다. 일부 스택을 삭제하십시오." #, python-format msgid "could not find user %s" msgstr "사용자 %s을(를) 찾을 수 없음" msgid "decrypt" msgstr "복호화" msgid "deployment_id must be specified" msgstr "deployment_id를 지정해야 함" msgid "" "deployments key not allowed in resource metadata with user_data_format of " "SOFTWARE_CONFIG" msgstr "" "user_data_format의 SOFTWARE_CONFIG가 있는 자원 메타데이터에서 허용되지 않는 " "배포 키" #, python-format msgid "deployments of server %s" msgstr "서버 %s 배포" msgid "encrypt" msgstr "암호화" #, python-format msgid "environment has wrong section \"%s\"" msgstr "환경에 잘못된 섹션 \"%s\"이(가) 있음" msgid "error in pool" msgstr "풀 오류" msgid "error in vip" msgstr "vip 오류" msgid "external network for the gateway." msgstr "게이트웨이에 대한 외부 네트워크입니다." msgid "granularity should be days, hours, minutes, or seconds" msgstr "단위는 일, 시간, 분 또는 초여야 함" msgid "heat.conf misconfigured, auth_encryption_key must be 32 characters" msgstr "heat.conf가 잘못 구성됨, auth_encryption_key는 32자여야 함" msgid "" "heat.conf misconfigured, cannot specify \"stack_user_domain_id\" or " "\"stack_user_domain_name\" without \"stack_domain_admin\" and " "\"stack_domain_admin_password\"" msgstr "" "heat.conf가 잘못 구성되었습니다. 
\"stack_domain_admin\" 및 " "\"stack_domain_admin_password\" 없이 \"stack_user_domain_id\" 또는 " "\"stack_user_domain_name\"을 지정할 수 없습니다." msgid "ipv6_ra_mode and ipv6_address_mode are not supported for ipv4." msgstr "ipv6_ra_mode 및 ipv6_address_mode는 ipv4에 지원되지 않습니다." msgid "limit cannot be less than 4" msgstr "한계는 4 미만일 수 없음" #, python-format msgid "metadata setting for resource %s" msgstr "자원 %s의 메타데이터 설정" msgid "min/max length must be integral" msgstr "최소/최대 길이는 적분이어야 함" msgid "min/max must be numeric" msgstr "최소/최대는 숫자여야 함" msgid "need more memory." msgstr "추가 메모리가 필요합니다." msgid "no resource data found" msgstr "자원 데이터를 찾을 수 없음" msgid "no resources were found" msgstr "자원을 찾을 수 없음" msgid "nova server metadata needs to be a Map." msgstr "nova 서버 메타데이터는 맵이어야 합니다." #, python-format msgid "previous_status must be SupportStatus instead of %s" msgstr "previous_status는 %s이(가) 아니라 SupportStatus여야 함" #, python-format msgid "raw template with id %s not found" msgstr "ID가 %s인 원시 템플리트를 찾을 수 없음" #, python-format msgid "resource with id %s not found" msgstr "ID가 %s인 자원을 찾을 수 없음" #, python-format msgid "roles %s" msgstr "역할 %s" msgid "segmentation_id cannot be specified except 0 for using flat" msgstr "일반을 사용하기 위해 0을 제외하고 segmentation_id를 지정할 수 없음" msgid "segmentation_id must be specified for using vlan" msgstr "VLAN을 사용하려면 segmentation_id가 지정되어야 함" msgid "segmentation_id not allowed for flat network type." msgstr "segmentation_id는 일반 네트워크 유형에 대해 허용되지 않습니다." msgid "server_id must be specified" msgstr "server_id를 지정해야 함" #, python-format msgid "" "task %(task)s contains property 'requires' in case of direct workflow. Only " "reverse workflows can contain property 'requires'." msgstr "" "직접 워크플로우의 경우 태스크 %(task)s에 특성 'requires'가 포함됩니다. 역방" "향 워크플로우에만 특성 'requires'가 포함될 수 있습니다." heat-10.0.2/heat/locale/zh_CN/0000775000175000017500000000000013343562672015731 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/zh_CN/LC_MESSAGES/0000775000175000017500000000000013343562672017516 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/zh_CN/LC_MESSAGES/heat.po0000666000175000017500000070776113343562351021015 0ustar zuulzuul00000000000000# Translations template for heat. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the heat project. # # Translators: # Xiao Xi LIU , 2014 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: heat VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-23 07:09+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 05:32+0000\n" "Last-Translator: Copied by Zanata \n" "Language: zh_CN\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Chinese (China)\n" #, python-format msgid "\"%%s\" is not a valid keyword inside a %s definition" msgstr "在 %s 定义内,“%%s”不是有效关键字" #, python-format msgid "\"%(fn_name)s\": %(err)s" msgstr "“%(fn_name)s”:%(err)s" #, python-format msgid "" "\"%(name)s\" params must be strings, numbers, list or map. Failed to json " "serialize %(value)s" msgstr "" "“%(name)s”参数必须为字符串、数字、列表或映射。对 %(value)s 执行 JSON 序列化失" "败。" #, python-format msgid "" "\"%(section)s\" must contain a map of %(obj_name)s maps. Found a [%(_type)s] " "instead" msgstr "" "“%(section)s”必须包含 %(obj_name)s 映射的映射。但发现包含的是 [%(_type)s]" #, python-format msgid "\"%(url)s\" is not a valid SwiftSignalHandle. 
The %(part)s is invalid" msgstr "“%(url)s”是无效 SwiftSignalHandle。%(part)s 无效" #, python-format msgid "\"%(value)s\" does not validate %(name)s" msgstr "“%(value)s”不会验证 %(name)s" #, python-format msgid "\"%(value)s\" does not validate %(name)s (constraint not found)" msgstr "“%(value)s”不会验证 %(name)s(找不到约束)" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be one of: %(available)s" msgstr "“%(version)s”.“%(version_type)s”应该为下列其中一项:%(available)s" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be: %(available)s" msgstr "“%(version)s”.“%(version_type)s”应该为:%(available)s" #, python-format msgid "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" msgstr "“%s”:[ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" #, python-format msgid "\"%s\" argument must be a string" msgstr "参数\"%s\"必须为字符串" #, python-format msgid "\"%s\" can't traverse path" msgstr "“%s”无法遍历路径" #, python-format msgid "\"%s\" deletion policy not supported" msgstr "“%s”删除策略不受支持" #, python-format msgid "\"%s\" delimiter must be a string" msgstr "“%s”定界符必须是字符串" #, python-format msgid "\"%s\" is not a list" msgstr "“%s”不是列表" #, python-format msgid "\"%s\" is not a map" msgstr "“%s”不是映射" #, python-format msgid "\"%s\" is not a valid ARN" msgstr "“%s”不是有效 ARN" #, python-format msgid "\"%s\" is not a valid ARN URL" msgstr "“%s”不是有效 ARN URL" #, python-format msgid "\"%s\" is not a valid Heat ARN" msgstr "“%s”不是有效 Heat ARN" #, python-format msgid "\"%s\" is not a valid URL" msgstr "“%s”不是有效 URL" #, python-format msgid "\"%s\" is not a valid boolean" msgstr "“%s”不是有效布尔值" #, python-format msgid "\"%s\" is not a valid template section" msgstr "“%s”不是有效的模板部分" #, python-format msgid "\"%s\" must operate on a list" msgstr "“%s”必须对列表起作用" #, python-format msgid "\"%s\" param placeholders must be strings" msgstr "“%s”参数占位符必须是字符串" #, python-format msgid "\"%s\" parameters must be a mapping" msgstr "“%s”参数必须是映射" #, python-format msgid "\"%s\" params must be a map" msgstr "参数\"%s\"必须为map类型" #, python-format msgid "\"%s\" params must be strings, numbers, list or map." msgstr "“%s”参数必须是字符串、数字、列表或映射。" #, python-format msgid "\"%s\" template must be a string" msgstr "模板\"%s\"必须为字符串" #, python-format msgid "\"repeat\" syntax should be %s" msgstr "“repeat”语法应该是 %s" #, python-format msgid "%(a)s paused until Hook %(h)s is cleared" msgstr "%(a)s 暂停到清除挂钩 %(h)s 为止" #, python-format msgid "%(action)s is not supported for resource." msgstr "%(action)s 对于资源而言不受支持。" #, python-format msgid "%(action)s is restricted for resource." msgstr "%(action)s 在获取资源时受到限制。" #, python-format msgid "%(desired_capacity)s must be between %(min_size)s and %(max_size)s" msgstr "%(desired_capacity)s 必须在 %(min_size)s 到 %(max_size)s 之间" #, python-format msgid "%(error)s%(path)s%(message)s" msgstr "%(error)s%(path)s%(message)s" #, python-format msgid "%(feature)s is not supported." msgstr "%(feature)s 不受支持。" #, python-format msgid "" "%(img)s must be provided: Referenced cluster template %(tmpl)s has no " "default_image_id defined." msgstr "必须提供 %(img)s:引用的集群模板 %(tmpl)s 未定义 default_image_id。" #, python-format msgid "%(lc)s (%(ref)s) reference can not be found." msgstr "找不到 %(lc)s (%(ref)s) 引用。" #, python-format msgid "" "%(lc)s (%(ref)s) requires a reference to the configuration not just the name " "of the resource." 
msgstr "%(lc)s (%(ref)s) 需要对配置的引用,而不是仅需要资源的名称。" #, python-format msgid "%(len)d of %(count)d received" msgstr "接收到 %(len)d 个(共 %(count)d 个)" #, python-format msgid "%(len)d of %(count)d received - %(reasons)s" msgstr "接收到 %(len)d 个(共 %(count)d 个)- %(reasons)s" #, python-format msgid "%(message)s" msgstr "%(message)s" #, python-format msgid "%(min_size)s can not be greater than %(max_size)s" msgstr "%(min_size)s 不能大于 %(max_size)s" #, python-format msgid "%(name)s constraint invalid for %(utype)s" msgstr "%(name)s 约束对于 %(utype)s 无效" #, python-format msgid "%(prop1)s cannot be specified without %(prop2)s." msgstr "无法在没有 %(prop2)s 的情况下指定 %(prop1)s。" #, python-format msgid "" "%(prop1)s property should only be specified for %(prop2)s with value " "%(value)s." msgstr "仅应对值为 %(value)s 的 %(prop2)s 指定 %(prop1)s 属性。" #, python-format msgid "%(resource)s: Invalid attribute %(key)s" msgstr "%(resource)s:属性 %(key)s 无效" #, python-format msgid "" "%(result)s - Unknown status %(resource_status)s due to \"%(status_reason)s\"" msgstr "" "%(result)s - 由于以下原因,状态 %(resource_status)s 未知:“%(status_reason)s”" #, python-format msgid "%(schema)s supplied for %(type)s %(data)s" msgstr "已为 %(type)s %(data)s 提供 %(schema)s" #, python-format msgid "%(server)s-port-%(number)s" msgstr "%(server)s-port-%(number)s" #, python-format msgid "%(type)s not in valid format: %(error)s" msgstr "%(type)s 没有采用有效格式:%(error)s" #, python-format msgid "%s Key Name must be a string" msgstr "%s键名必须为字符串" #, python-format msgid "%s Timed out" msgstr "%s 已超时" #, python-format msgid "%s Value Name must be a string" msgstr "%s值名必须为字符串" #, python-format msgid "%s is not a valid job location." msgstr "%s 是无效作业位置。" #, python-format msgid "%s is not active" msgstr "%s 未处于活动状态" #, python-format msgid "%s is not an integer." msgstr "%s 不是整数。" #, python-format msgid "%s must be provided" msgstr "必须提供 %s" #, python-format msgid "'%(attr)s': expected '%(expected)s', got '%(current)s'" msgstr "“%(attr)s”:需要“%(expected)s”,已获得“%(current)s”" msgid "" "'task_name' is not assigned in 'params' in case of reverse type workflow." msgstr "未在“params”中指定“task_name”,以防出现保留类型工作流程。" msgid "'true' if DHCP is enabled for this subnet; 'false' otherwise." msgstr "如果对此子网启用 DHCP,那么为“true”;否则,为“false”。" msgid "A UUID for the set of servers being requested." msgstr "要请求的服务器集的 UUID。" msgid "A bad or out-of-range value was supplied" msgstr "已提供的值不正确或超出范围" msgid "A boolean value of default flag." msgstr "缺省标记的布尔值。" msgid "A boolean value specifying the administrative status of the network." msgstr "用于指定网络的管理状态的布尔值。" #, python-format msgid "" "A character class and its corresponding %(min)s constraint to generate the " "random string from." msgstr "字符类及要据其生成随机字符串的对应 %(min)s约束。" #, python-format msgid "" "A character sequence and its corresponding %(min)s constraint to generate " "the random string from." msgstr "字符序列及要据其生成随机字符串的对应 %(min)s约束。" msgid "A comma-delimited list of server ip addresses. (Heat extension)." msgstr "服务器 IP 地址的逗号定界列表。(Heat 扩展)。" msgid "A description of the volume." msgstr "云硬盘的描述信息。" msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name. This value is typically vda." msgstr "设备名,该卷将连接至系统中的以下位置:此值通常为 vda。" msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name.e.g. vdb" msgstr "设备名,该卷将连接至系统中的以下位置:/dev/device_name,例如,vdb" msgid "" "A dict of all network addresses with corresponding port_id. Each network " "will have two keys in dict, they are network name and network id. 
The port " "ID may be obtained through the following expression: \"{get_attr: [, " "addresses, , 0, port]}\"." msgstr "" "带有对应 port_id 的所有网络地址的字典。每个网络在字典中有两个键(网络名称和网" "络标识)。可通过以下表达式包含端口标识:“{get_attr: [, addresses, " ", 0, port]}”。" msgid "" "A dict of assigned network addresses of the form: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Each network will have two keys in dict, they " "are network name and network id." msgstr "" "以下格式的已分配网址地址的字典:{\"public\": [ip1, ip2...], \"private\": " "[ip3, ip4], \"public_uuid\": [ip1, ip2...], \"private_uuid\": [ip3, ip4]}。每" "个网络在字典中有两个键(网络名称和网络标识)。" msgid "A dict of key-value pairs output from the stack." msgstr "来自堆栈的键/值对输出的字典。" msgid "A dictionary which contains name and input of the workflow." msgstr "包含工作流程的名称和输入的字典。" msgid "A length constraint must have a min value and/or a max value specified." msgstr "长度限制必须指定最小值和/或最大值。" msgid "A list of URLs (webhooks) to invoke when state transitions to alarm." msgstr "当状态过渡至警报时要调用的 URL (webhook) 的列表。" msgid "" "A list of URLs (webhooks) to invoke when state transitions to insufficient-" "data." msgstr "当状态过渡至数据不足时要调用的 URL (webhook) 的的列表。" msgid "A list of URLs (webhooks) to invoke when state transitions to ok." msgstr "当状态过渡至正常时要调用的 URL (webhook) 的列表。" msgid "A list of access rules that define access from IP to Share." msgstr "用于定义通过 IP 访问共享的访问规则的列表。" msgid "A list of all rules for the QoS policy." msgstr "QoS 策略的所有规则列表。" msgid "A list of all subnet attributes for the port." msgstr "端口的所有子网属性的列表。" msgid "" "A list of character class and their constraints to generate the random " "string from." msgstr "字符类及要据其生成随机字符串的约束的列表。" msgid "" "A list of character sequences and their constraints to generate the random " "string from." msgstr "字符序列及要据其生成随机字符串的约束的列表。" msgid "A list of cluster instance IPs." msgstr "集群实例 IP 的列表。" msgid "A list of clusters to which this policy is attached." msgstr "此策略附加至的集群列表。" msgid "A list of host route dictionaries for the subnet." msgstr "子网的主机路由字典列表。" msgid "A list of instances ids." msgstr "实例标识的列表。" msgid "A list of metric ids." msgstr "指标标识列表。" msgid "" "A list of query factors, each comparing a Sample attribute with a value. " "Implicitly combined with matching_metadata, if any." msgstr "" "查询因子的列表,每个查询因子都将一个 Sample 属性与一个值进行比较。隐式与 " "matching_metadata(如果有)组合。" msgid "A list of resource IDs for the resources in the chain." msgstr "链中资源的资源标识的列表。" msgid "A list of resource IDs for the resources in the group." msgstr "组中资源的资源标识的列表。" msgid "A list of security groups for the port." msgstr "端口的安全组列表。" msgid "A list of security services IDs or names." msgstr "安全服务标识或名称的列表。" msgid "A list of string policies to apply. Defaults to anti-affinity." msgstr "要应用的字符串策略列表。缺省为“反亲缘关系”。" msgid "A login profile for the user." msgstr "用户的登录概要文件。" msgid "A mandatory input parameter is missing" msgstr "未输入必填参数" msgid "A map containing all headers for the container." msgstr "包含容器的所有头的映射。" msgid "" "A map of Nova names and captured stderrs from the configuration execution to " "each server." msgstr "Nova 名称与所捕获的针对每个服务器的配置执行产生的标准错误建立的映射。" msgid "" "A map of Nova names and captured stdouts from the configuration execution to " "each server." msgstr "Nova 名称与来自针对每个服务器的配置执行的所捕获标准输出的映射。" msgid "" "A map of Nova names and returned status code from the configuration " "execution." msgstr "Nova 名称与通过执行配置返回的状态码的映射。" msgid "" "A map of files to create/overwrite on the server upon boot. 
Keys are file " "names and values are the file contents." msgstr "" "在进行引导时要在服务器上创建/覆盖的文件的映射。键是文件名,值是文件内容。" msgid "" "A map of resource names to the specified attribute of each individual " "resource." msgstr "资源名称至每个资源的指定属性的映射。" msgid "" "A map of resource names to the specified attribute of each individual " "resource. Requires heat_template_version: 2014-10-16." msgstr "" "资源名称至每个资源的指定属性的映射。需要 heat_template_version:2014-10-16。" msgid "" "A map of user-defined meta data to associate with the account. Each key in " "the map will set the header X-Account-Meta-{key} with the corresponding " "value." msgstr "" "要与帐户关联的用户定义的元数据的映射。映射中的每个键都将使用对应值设置头 X-" "Account-Meta-{key}。" msgid "" "A map of user-defined meta data to associate with the container. Each key in " "the map will set the header X-Container-Meta-{key} with the corresponding " "value." msgstr "" "要与容器关联的用户定义的元数据的映射。映射中的每个键都将使用对应值设置头 X-" "Container-Meta-{key}。" msgid "A name used to distinguish the volume." msgstr "用于区别卷的名称。" msgid "" "A per-tenant quota on the prefix space that can be allocated from the subnet " "pool for tenant subnets." msgstr "前缀空间上可从子网池针对租户子网分配的每租户配额。" msgid "" "A predefined access control list (ACL) that grants permissions on the bucket." msgstr "预定义的访问控制表 (ACL),用于针对存储区授予许可权。" msgid "A range constraint must have a min value and/or a max value specified." msgstr "范围限制必须指定最小值和/或最大值。" msgid "" "A reference to the wait condition handle used to signal this wait condition." msgstr "对用于为此等待条件发出信号的等待条件句柄的引用。" msgid "" "A signed url to create executions for workflows specified in Workflow " "resource." msgstr "已签署 URL,用于为工作流程资源中指定的工作流程创建执行。" msgid "A signed url to handle the alarm." msgstr "用于处理警报的带签名的 URL。" msgid "A signed url to handle the alarm. (Heat extension)." msgstr "用于处理警报的带签名的 URL。(Heat 扩展)。" msgid "A specified set of DNS name servers to be used." msgstr "要使用的所指定 DNS 名称服务器集。" msgid "" "A string specifying a symbolic name for the network, which is not required " "to be unique." msgstr "用于为网络指定符号名称(不要求唯一)的字符串。" msgid "" "A string specifying a symbolic name for the security group, which is not " "required to be unique." msgstr "用于为安全组指定符号名称(不要求唯一)的字符串。" msgid "A string specifying physical network mapping for the network." msgstr "用于指定网络的物理网络映射的字符串。" msgid "A string specifying the provider network type for the network." msgstr "用于指定网络的提供程序网络类型的字符串。" msgid "A string specifying the segmentation id for the network." msgstr "用于指定网络的分段标识的字符串。" msgid "A symbolic name for this port." msgstr "此端口的符号名称。" msgid "A url to handle the alarm using native API." msgstr "用于使用本机 API 处理警报的 URL。" msgid "" "A variable that this resource will use to replace with the current index of " "a given resource in the group. Can be used, for example, to customize the " "name property of grouped servers in order to differentiate them when listed " "with nova client." msgstr "" "此资源将用于替换组中给定资源的当前索引的变量。例如,可用于定制分组服务器的名" "称属性,以在与 nova 客户机一起列示时进行区分。" msgid "AWS compatible instance name." msgstr "AWS 兼容的实例名称。" msgid "AWS query string is malformed, does not adhere to AWS spec" msgstr "AWS 查询字符串格式不正确,不符合 AWS 规范" msgid "Access policies to apply to the user." msgstr "要应用于用户的访问策略。" #, python-format msgid "AccessPolicy resource %s not in stack" msgstr "AccessPolicy 资源 %s 未在堆栈中" #, python-format msgid "Action %s not allowed for user" msgstr "用户不允许执行动作%s" msgid "Action to be performed on the traffic matching the rule." msgstr "要对与规则匹配的流量执行的操作。" msgid "Actual input parameter values of the task." 
msgstr "任务的实际输入参数值。" msgid "Add needed policies directly to the task, Policy keyword is not needed" msgstr "将所需策略直接添加至该任务。不需要策略关键字。" msgid "Additional MAC/IP address pairs allowed to pass through a port." msgstr "允许通过端口传递的其他 MAC/IP 地址对。" msgid "Additional MAC/IP address pairs allowed to pass through the port." msgstr "允许通过该端口传递的其他 MAC/IP 地址对。" msgid "Additional routes for this subnet." msgstr "此子网的其他路由。" msgid "Address family of the address scope, which is 4 or 6." msgstr "地址范围的地址系列(4 或 6)。" msgid "" "Address of the notification. It could be a valid email address, url or " "service key based on notification type." msgstr "通知的地址。它可以是有效电子邮件地址、URL 或基于通知类型的服务关键字。" msgid "" "Address to bind the server. Useful when selecting a particular network " "interface." msgstr "用于绑定服务器的地址。当选择特定网络接口时,此项很有用。" msgid "Administrative state for the ipsec site connection." msgstr "IPSec 站点连接的管理状态。" msgid "Administrative state for the vpn service." msgstr "VPN 服务的管理状态。" msgid "" "Administrative state of the firewall. If false (down), firewall does not " "forward packets and will drop all traffic to/from VMs behind the firewall." msgstr "" "防火墙的管理状态。如果为 false(关闭),那么防火墙不会转发包,并且会删除至/自" "防火墙后的 VM的所有流量。" msgid "Administrative state of the router." msgstr "路由器的管理状态。" #, python-format msgid "Alarm %(alarm)s could not find scaling group named \"%(group)s\"" msgstr "警报 %(alarm)s 找不到名为“%(group)s”的缩放组" #, python-format msgid "Algorithm must be one of %s" msgstr "算法必须为 %s 的其中一项" msgid "All heat engines are down." msgstr "所有 Heat 引擎都处于关闭状态。" msgid "Allocated floating IP address." msgstr "已分配浮动 IP 地址。" msgid "Allocation ID for VPC EIP address." msgstr "用于 VPC EIP 地址的分配标识。" msgid "Allow client's debug log output." msgstr "允许客户的调试日志输出。" msgid "Allow or deny action for this firewall rule." msgstr "允许或拒绝对此防火墙规则执行操作。" msgid "Allow orchestration of multiple clouds." msgstr "允许由多个云组成的业务流程。" msgid "" "Allow reauthentication on token expiry, such that long-running tasks may " "complete. Note this defeats the expiry of any provided user tokens." msgstr "" "允许在令牌到期时重新认证,以便长时间运行的任务可完成。任何所提供用户令牌到期" "时请注意此缺陷。" msgid "" "Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At " "least one endpoint needs to be specified." msgstr "" "在启用 multi_cloud 的情况下,允许用于 auth_uri 的 Keystone 端点。至少需要指定" "一个端点。" msgid "" "Allowed tenancy of instances launched in the VPC. default - any tenancy; " "dedicated - instance will be dedicated, regardless of the tenancy option " "specified at instance launch." msgstr "" "VPC 中所启动实例的允许租用。缺省值 - 任何租用;专用 - 实例将专用,无论在实例" "启动时指定的租用选项如何。" #, python-format msgid "Allowed values: %s" msgstr "允许值:%s" msgid "AllowedPattern must be a string" msgstr "AllowedPattern 必须为字符串" msgid "AllowedValues must be a list" msgstr "AllowedValues 必须为列表" msgid "Allowing not to store action results after task completion." msgstr "允许在任务完成后不存储操作结果。" msgid "" "Allows to synchronize multiple parallel workflow branches and aggregate " "their data. Valid inputs: all - the task will run only if all upstream tasks " "are completed. Any numeric value - then the task will run once at least this " "number of upstream tasks are completed and corresponding conditions have " "triggered." msgstr "" "允许同步多个并行工作流程分支并汇总其数据。有效输入:all - 仅当所有上行任务完" "成后,才会运行此任务。任何数字值 - 完成的上行任务至少达到此数目并且已触发对应" "条件时,将运行一次此任务。" #, python-format msgid "Ambiguous versions (%s)" msgstr "版本 (%s) 含糊" msgid "" "Amount of disk space (in GB) required to boot image. Default value is 0 if " "not specified and means no limit on the disk size." 
msgstr "" "引导映像时所需的磁盘空间量(以 GB 计)。缺省值为 0(如果未指定),表示对磁盘" "大小没有限制。" msgid "" "Amount of ram (in MB) required to boot image. Default value is 0 if not " "specified and means no limit on the ram size." msgstr "" "引导映像所需的 RAM 数量(以 MB 计)。缺省值为 0(如果未指定),表示对 RAM 大" "小没有限制。" msgid "An address scope ID to assign to the subnet pool." msgstr "要分配给子网池的地址范围标识。" msgid "An application health check for the instances." msgstr "针对实例的应用程序运行状况检查。" msgid "An ordered list of firewall rules to apply to the firewall." msgstr "要对防火墙应用的防火墙规则的有序列表。" msgid "" "An ordered list of nics to be added to this server, with information about " "connected networks, fixed ips, port etc." msgstr "" "要对此服务器添加的 nic 的有序列表,包含有关已连接的网络、固定 IP 以及端口等的" "信息。" msgid "An unknown exception occurred." msgstr "发生未知异常。" msgid "" "Any data structure arbitrarily containing YAQL expressions that defines " "workflow output. May be nested." msgstr "包含用于定义工作流程输出的 YAQL 表达式的任何数据结构。可能嵌套。" msgid "Anything other than one VPCZoneIdentifier" msgstr "除了一个 VPCZoneIdentifier 之外的任何内容" msgid "Api endpoint reference of the instance." msgstr "实例的 Api 端点引用。" msgid "" "Arbitrary key-value pairs specified by the client to help boot a server." msgstr "由客户机指定来帮助引导服务器的任意键/值对。" msgid "" "Arbitrary key-value pairs specified by the client to help the Cinder " "scheduler creating a volume." msgstr "由客户机指定来帮助 Cinder 调度程序创建卷的任意键/值对。" msgid "" "Arbitrary key/value metadata to store contextual information about this " "queue." msgstr "用于存储此队列的相关上下文信息的任意键/值元数据。" msgid "" "Arbitrary key/value metadata to store for this server. Both keys and values " "must be 255 characters or less. Non-string values will be serialized to JSON " "(and the serialized string must be 255 characters or less)." msgstr "" "要对此服务器存储的任意键/值元数据。键和值不能超过 255 个字符。非字符串值将序" "列化至 JSON(已序列化字符串不能超过 255 个字符)。" msgid "Arbitrary key/value metadata to store information for aggregate." 
msgstr "用于存储聚集信息的任意键/值元数据。" #, python-format msgid "Argument to \"%s\" must be a list" msgstr "针对“%s”的自变量必须是列表" #, python-format msgid "Argument to \"%s\" must be a string" msgstr "针对“%s”的自变量必须是字符串" #, python-format msgid "Argument to \"%s\" must be string or list" msgstr "针对“%s”的自变量必须是字符串或列表" #, python-format msgid "Argument to function \"%s\" must be a list of strings" msgstr "针对函数“%s”的自变量必须是字符串的列表" #, python-format msgid "" "Arguments to \"%s\" can be of the next forms: [resource_name] or " "[resource_name, attribute, (path), ...]" msgstr "" "“%s”的自变量可为下列其中一种格式:[resource_name] 或 [resource_name, " "attribute, (path), ...]" #, python-format msgid "Arguments to \"%s\" must be a map" msgstr "针对“%s”的自变量必须是映射" #, python-format msgid "Arguments to \"%s\" must be of the form [index, collection]" msgstr "针对“%s”的自变量必须采用格式 [index, collection]" #, python-format msgid "" "Arguments to \"%s\" must be of the form [resource_name, attribute, " "(path), ...]" msgstr "针对“%s”的自变量必须采用格式 [resource_name, attribute, (path), ...]" #, python-format msgid "Arguments to \"%s\" must be of the form [resource_name, attribute]" msgstr "针对“%s”的自变量必须采用格式 [resource_name, attribute]" #, python-format msgid "Arguments to %s not fully resolved" msgstr "针对 %s 的自变量未完全解析" #, python-format msgid "Attempt to delete a stack with id: %(id)s %(msg)s" msgstr "请尝试删除具有标识 %(id)s 的堆栈:%(msg)s" #, python-format msgid "Attempt to delete user creds with id %(id)s that does not exist" msgstr "尝试删除的用户凭证(标识为 %(id)s)不存在" #, python-format msgid "Attempt to delete watch_rule: %(id)s %(msg)s" msgstr "请尝试删除 watch_rule:%(id)s %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(msg)s" msgstr "请尝试更新具有标识 %(id)s 的堆栈:%(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(traversal)s %(msg)s" msgstr "尝试更新带有以下标识的堆栈:%(id)s %(traversal)s %(msg)s " #, python-format msgid "Attempt to update a watch with id: %(id)s %(msg)s" msgstr "请尝试更新具有标识 %(id)s 的监视:%(msg)s" msgid "Attempt to use stored_context with no user_creds" msgstr "尝试使用不带 user_creds 的 stored_context" #, python-format msgid "Attribute %(attr)s for facade %(type)s missing in provider" msgstr "提供程序中缺少外观 %(type)s 的属性 %(attr)s" msgid "Audit status of this firewall policy." msgstr "此防火墙策略的审计状态。" msgid "Authentication Endpoint URI." msgstr "认证端点 URI。" msgid "Authentication hash algorithm for the ike policy." msgstr "ike 策略的认证散列算法。" msgid "Authentication hash algorithm for the ipsec policy." msgstr "IPSec 策略的认证散列算法。" msgid "Authorization failed." msgstr "授权失败。" msgid "AutoScaling group ID to apply policy to." msgstr "要对其应用策略的 AutoScaling 组标识。" msgid "AutoScaling group name to apply policy to." msgstr "要对其应用策略的 AutoScaling 组名。" msgid "Availability Zone of the subnet." msgstr "子网的可用域。" msgid "Availability zone in which you want the subnet." msgstr "在其中需要子网的可用性区域。" msgid "Availability zone to create servers in." msgstr "要在其中创建服务器的可用性区域。" msgid "Availability zone to create volumes in." msgstr "要在其中创建卷的可用性区域。" msgid "Availability zone to launch the instance in." msgstr "要在其中启动实例的可用性区域。" msgid "Backend authentication failed" msgstr "后端认证失败" msgid "Binary" msgstr "二进制" msgid "Block device mappings for this server." msgstr "此服务器的块设备映射。" msgid "Block device mappings to attach to instance." msgstr "要连接至实例的块设备映射。" msgid "Block device mappings v2 for this server." msgstr "此服务器的块设备映射 V2。" msgid "" "Boolean extra spec that used for filtering of backends by their capability " "to create share snapshots." 
msgstr "额外指定的布尔值,用于过滤后端(按其创建共享快照的能力)。" msgid "Boolean indicating if the volume can be booted or not." msgstr "指示是否可引导卷的布尔值。" msgid "Boolean indicating if the volume is encrypted or not." msgstr "指示卷是否已加密的布尔值。" msgid "" "Boolean indicating whether allow the volume to be attached more than once." msgstr "用于指示是否允许多次附加该卷的布尔值。" msgid "" "Bus of the device: hypervisor driver chooses a suitable default if omitted." msgstr "设备总线:管理程序驱动程序会选择合适的缺省设备总线(如果已省略)。" msgid "CIDR block notation for this subnet." msgstr "此子网的 CIDR 块表示法。" msgid "CIDR block to apply to subnet." msgstr "要应用于子网的 CIDR 块。" msgid "CIDR block to apply to the VPC." msgstr "要应用于 VPC 的 CIDR 块。" msgid "CIDR of subnet." msgstr "子网的 CIDR。" msgid "CIDR to be associated with this metering rule." msgstr "要与此测量规则关联的 CIDR。" #, python-format msgid "Can not specify property \"%s\" if the volume type is public." msgstr "如果卷类型为 public,那么不能指定属性“%s”。" #, python-format msgid "Can not use %s property on Nova-network." msgstr "无法在 Nova-network 上使用 %s 属性。" #, python-format msgid "Can't find role %s" msgstr "找不到角色 %s" msgid "Can't get user token without password" msgstr "无法在没有密码的情况下获取用户令牌" msgid "Can't get user token, user not yet created" msgstr "无法获取用户令牌,尚未创建用户" msgid "Can't traverse attribute path" msgstr "无法遍历属性路径" #, python-format msgid "Cancelling update when stack is %s" msgstr "正在取消堆栈为 %s 时的更新" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "对于孤立对象 %(objtype)s 无法调用 %(method)s" #, python-format msgid "Cannot check %s, stack not created" msgstr "无法检查 %s,未创建堆栈" #, python-format msgid "Cannot define the following properties at the same time: %(props)s." msgstr "无法同时定义下列属性:%(props)s。" #, python-format msgid "" "Cannot establish connection to Heat endpoint at region \"%(region)s\" due to " "\"%(exc)s\"" msgstr "" "由于发生以下异常,无法与区域“%(region)s”上 Heat 端点建立连接:“%(exc)s”" msgid "" "Cannot get stack domain user token, no stack domain id configured, please " "fix your heat.conf" msgstr "无法获取堆栈域用户令牌,未配置堆栈域标识,请修正 heat.conf" msgid "Cannot migrate to lower schema version." msgstr "无法迁移至较低模式版本。" #, python-format msgid "Cannot modify readonly field %(field)s" msgstr "无法修改只读字段 %(field)s" #, python-format msgid "Cannot resume %s, resource not found" msgstr "无法恢复 %s,找不到资源" #, python-format msgid "Cannot resume %s, resource_id not set" msgstr "无法恢复 %s,未设置 resource_id" #, python-format msgid "Cannot resume %s, stack not created" msgstr "无法恢复 %s,未创建堆栈" #, python-format msgid "Cannot suspend %s, resource not found" msgstr "无法暂挂 %s,找不到资源" #, python-format msgid "Cannot suspend %s, resource_id not set" msgstr "无法暂挂 %s,未设置 resource_id" #, python-format msgid "Cannot suspend %s, stack not created" msgstr "无法暂挂 %s,未创建堆栈" msgid "Captured stderr from the configuration execution." msgstr "已从配置执行捕获标准错误。" msgid "Captured stdout from the configuration execution." msgstr "已从配置执行捕获标准输出。" #, python-format msgid "Circular Dependency Found: %(cycle)s" msgstr "找到循环依赖关系:%(cycle)s" msgid "Client entity to poll." msgstr "要轮询的客户机实体。" msgid "Client name and resource getter name must be specified." msgstr "必须指定客户机名称和资源获取者名称。" msgid "Client to poll." msgstr "要轮询的客户机。" msgid "Cluster configs dictionary." msgstr "集群配置字典。" msgid "Cluster information." msgstr "集群信息。" msgid "Cluster metadata." msgstr "集群元数据。" msgid "Cluster name." msgstr "集群名称。" msgid "Cluster status." msgstr "集群状态。" msgid "Comparison operator." msgstr "比较运算符。" #, python-format msgid "Concurrent transaction for %(action)s" msgstr "%(action)s 的并行事务" msgid "Configuration of session persistence." 
msgstr "会话持久性的配置。" msgid "" "Configuration script or manifest which specifies what actual configuration " "is performed." msgstr "用于指定执行的实际配置的配置脚本或清单。" msgid "Configure most important configs automatically." msgstr "自动配置最重要的配置。" #, python-format msgid "Confirm resize for server %s failed" msgstr "确认调整服务器 %s 大小的操作失败" msgid "Connection info for this network gateway." msgstr "此网络网关的连接信息。" #, python-format msgid "Container '%(name)s' creation failed: %(code)s - %(reason)s" msgstr "创建容器“%(name)s”失败:%(code)s - %(reason)s" msgid "Container format of image." msgstr "映像的容器格式。" msgid "" "Content of part to attach, either inline or by referencing the ID of another " "software config resource." msgstr "要在线附加或通过引用另一软件配置资源的标识来附加的部件的内容。" msgid "Context for this stack." msgstr "此堆栈的上下文。" msgid "Control how the disk is partitioned when the server is created." msgstr "控制当创建服务器时如何对磁盘分区。" msgid "Controls DPD protocol mode." msgstr "控制 DPD 协议方式。" msgid "" "Convenience attribute to fetch the first assigned network address, or an " "empty string if nothing has been assigned at this time. Result may not be " "predictable if the server has addresses from more than one network." msgstr "" "用于访存第一个已分配网络地址的方便属性,或空字符串(如果此时尚未分配任何" "项)。如果服务器具有来自多个网络的地址,那么结果可能无法预测。" msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure when signal_transport is set to " "TOKEN_SIGNAL. You can signal success by adding --data-binary '{\"status\": " "\"SUCCESS\"}' , or signal failure by adding --data-binary '{\"status\": " "\"FAILURE\"}'. This attribute is set to None for all other signal transports." msgstr "" "方便属性,提供 curl CLI 命令前缀,signal_transport 设置为 TOKEN_SIGNAL 时可用" "于通告句柄完成或故障。可通过添加 --data-binary '{\"status\": \"SUCCESS\"}' 通" "告成功,或通过添加 --data-binary '{\"status\": \"FAILURE\"}' 通告失败。对于所" "有其他信号传输,此属性设置为 None。" msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure. You can signal success by " "adding --data-binary '{\"status\": \"SUCCESS\"}' , or signal failure by " "adding --data-binary '{\"status\": \"FAILURE\"}'." msgstr "" "方便属性,提供 curl CLI 命令前缀,可用于通告处理完成或故障。可通过添加 --" "data-binary '{\"status\": \"SUCCESS\"}' 通告成功或通过添加 --data-binary " "'{\"status\": \"FAILURE\"}' 通告失败。" msgid "Cooldown period, in seconds." msgstr "冷却时间段,以秒计。" #, python-format msgid "Could not confirm resize of server %s" msgstr "无法确认调整服务器 %s 大小的操作" #, python-format msgid "Could not detach attachment %(att)s from server %(srv)s." msgstr "无法从服务器 %(srv)s 拆离附件 %(att)s。" #, python-format msgid "Could not fetch remote template \"%(name)s\": %(exc)s" msgstr "未能访存远程模板“%(name)s”:%(exc)s" #, python-format msgid "Could not fetch remote template '%(url)s': %(exc)s" msgstr "未能访存远程模板“%(url)s”:%(exc)s" #, python-format msgid "Could not load %(name)s: %(error)s" msgstr "未能装入 %(name)s:%(error)s" #, python-format msgid "Could not retrieve template: %s" msgstr "未能检索到模板:%s" msgid "Create volumes on the same physical port as an instance." msgstr "在实例所在的物理端口上创建卷。" msgid "" "Credentials used for swift. Not required if sahara is configured to use " "proxy users and delegated trusts for access." msgstr "" "用于 swift 的证书。如果 sahara 配置为使用代理用户和代理信托来进行访问,那么不" "需要此项。" msgid "Cron expression." msgstr "cron 表达式。" msgid "Current share status." msgstr "当前共享状态。" msgid "Custom LoadBalancer template can not be found" msgstr "找不到定制负载均衡器模板" msgid "DB instance restore point." msgstr "数据库实例复原点。" msgid "DNS Domain id or name." 
msgstr "DNS 域标识或名称。" msgid "DNS IP address used inside tenant's network." msgstr "租户网络中使用的 DNS IP 地址。" msgid "DNS Record type." msgstr "DNS 记录类型。" msgid "DNS domain serial." msgstr "DNS 域序列号。" msgid "" "DNS record data, varies based on the type of record. For more details, " "please refer rfc 1035." msgstr "" "DNS 记录数据,根据记录类型不同而变化。有关更多详细信息,请参阅 rfc 1035。" msgid "" "DNS record priority. It is considered only for MX and SRV types, otherwise, " "it is ignored." msgstr "DNS 记录优先级。仅针对 MX 和 SRV 类型考虑此项,否则它被忽略。" #, python-format msgid "Data supplied was not valid: %(reason)s" msgstr "提供的数据无效:%(reason)s" #, python-format msgid "" "Database %(dbs)s specified for user does not exist in databases for resource " "%(name)s." msgstr "对用户指定的数据库 %(dbs)s 在资源 %(name)s 的数据库中不存在。" msgid "Database volume size in GB." msgstr "数据库卷大小,以 GB 计。" #, python-format msgid "" "Databases property is required if users property is provided for resource %s." msgstr "如果已对资源 %s 提供用户属性,那么需要数据库属性。" #, python-format msgid "" "Datastore version %(dsversion)s for datastore type %(dstype)s is not valid. " "Allowed versions are %(allowed)s." msgstr "" "数据存储器类型 %(dstype)s 的数据存储器版本 %(dsversion)s 无效。允许的版本为 " "%(allowed)s。" msgid "Datetime when a share was created." msgstr "创建共享的日期时间。" msgid "" "Dead Peer Detection protocol configuration for the ipsec site connection." msgstr "IPSec 站点连接的已终止同级检测协议配置。" msgid "Dead engines are removed." msgstr "处于关闭状态的引擎已被移除。" msgid "Default TLS container reference to retrieve TLS information." msgstr "用于检索 TLS 信息的缺省 TLS 容器引用。" #, python-format msgid "Default must be a comma-delimited list string: %s" msgstr "缺省值必须是以逗号定界的列表字符串:%s" msgid "Default name or UUID of the image used to boot Hadoop nodes." msgstr "用于引导 Hadoop 节点的映像的缺省名称或 UUID。" msgid "Default region name used to get services endpoints." msgstr "用于获取服务端点的缺省区域名称。" msgid "Default settings for some of task attributes defined at workflow level." msgstr "在工作流程级别定义的一些任务属性的缺省设置。" msgid "Default value for the input if none is specified." msgstr "如果未指定任何内容,那么表示输入的缺省值。" msgid "" "Defines a delay in seconds that Mistral Engine should wait after a task has " "completed before starting next tasks defined in on-success, on-error or on-" "complete." msgstr "" "定义延迟(以秒计),完成一个任务后,Mistral 引擎应在等待此时间段后启动 on-" "success、on-error 或 on-complete 中定义的后续任务。" msgid "" "Defines a delay in seconds that Mistral Engine should wait before starting a " "task." msgstr "定义延迟(以秒计),Mistral 引擎在启动任务前应等待此时间段。" msgid "Defines a pattern how task should be repeated in case of an error." msgstr "定义一种模式,此模式确定发生错误时如何再次执行该任务。" msgid "" "Defines a period of time in seconds after which a task will be failed " "automatically by engine if hasn't completed." msgstr "" "定义一个时间段(以秒计),如果任务在经过此时间段后未完成,该引擎自动认定此任" "务失败。" msgid "Defines if share type is accessible to the public." msgstr "定义共享类型是否可供公众访问。" msgid "Defines if shared filesystem is public or private." msgstr "定义共享文件系统是公用还是专用。" msgid "" "Defines the method in which the request body for signaling a workflow would " "be parsed. In case this property is set to True, the body would be parsed as " "a simple json where each key is a workflow input, in other cases body would " "be parsed expecting a specific json format with two keys: \"input\" and " "\"params\"." msgstr "" "定义用于解析请求主体(此请求主体用于通告工作流程)的方法。如果此属性设置为 " "True,那么该主体将解析为简单 JSON,其中每个键是工作流程输入,在其他情况下,解" "析该主体时需要带有两个键(“input”和“params”)的特定 JSON 格式。" msgid "" "Defines whether Mistral Engine should put the workflow on hold or not before " "starting a task." 
msgstr "定义 Mistral 引擎在启动任务前是否应搁置此工作流程。" msgid "Defines whether auto-assign security group to this Node Group template." msgstr "定义是否将安全组自动分配给此“节点组”模板。" #, python-format msgid "" "Defining more than one configuration for the same action in " "SoftwareComponent \"%s\" is not allowed." msgstr "不允许在软件组件“%s”中为同一操作定义多个配置。" msgid "Deleting in-progress snapshot" msgstr "正在删除进行中的快照" #, python-format msgid "Deleting non-empty container (%(id)s) when %(prop)s is False" msgstr "当 %(prop)s 为 False 时,正在删除非空容器 (%(id)s)" #, python-format msgid "Delimiter for %s must be string" msgstr "用于 %s 的定界符必须是字符串" msgid "" "Denotes that the deployment is in an error state if this output has a value." msgstr "如果此输出具有值,那么表示部署处于错误状态。" msgid "Deploy data available" msgstr "可用的部署数据" #, python-format msgid "Deployment exited with non-zero status code: %s" msgstr "已退出部署,返回了非零状态码:%s" #, python-format msgid "Deployment to server failed: %s" msgstr "部署至云主机失败:%s" #, python-format msgid "Deployment with id %s not found" msgstr "找不到标识为 %s 的部署" msgid "Deprecated." msgstr "不推荐。" msgid "" "Describe time constraints for the alarm. Only evaluate the alarm if the time " "at evaluation is within this time constraint. Start point(s) of the " "constraint are specified with a cron expression, whereas its duration is " "given in seconds." msgstr "" "描述警报的时间约束。仅当评估时间在此时间约束范围内时,才会评估该警报。约束的" "起始点是使用 cron 表达式指定的,其持续时间以秒数形式给出。" msgid "Description for the alarm." msgstr "警报的描述。" msgid "Description for the firewall policy." msgstr "防火墙策略的描述信息。" msgid "Description for the firewall rule." msgstr "防火墙规则的描述信息。" msgid "Description for the firewall." msgstr "防火墙的描述信息。" msgid "Description for the ike policy." msgstr "ike 策略的描述。" msgid "Description for the ipsec policy." msgstr "IPSec 策略的描述。" msgid "Description for the ipsec site connection." msgstr "IPSec 站点连接的描述。" msgid "Description for the time constraint." msgstr "时间约束的描述。" msgid "Description for the vpn service." msgstr "VPN 服务的描述。" msgid "Description for this interface." msgstr "此接口的描述。" msgid "Description of domain." msgstr "域的描述。" msgid "Description of keystone group." msgstr "keystone 组的描述。" msgid "Description of keystone project." msgstr "keystone 项目的描述。" msgid "Description of keystone region." msgstr "keystone 区域的描述。" msgid "Description of keystone service." msgstr "keystone 服务的描述。" msgid "Description of keystone user." msgstr "keystone 用户的描述。" msgid "Description of record." msgstr "记录的描述。" msgid "Description of the Node Group Template." msgstr "节点组模板的描述。" msgid "Description of the Sahara Group Template." msgstr "Sahara 组模板的描述。" msgid "Description of the alarm." msgstr "警报的描述。" msgid "Description of the data source." msgstr "数据源的描述。" msgid "Description of the firewall policy." msgstr "防火墙策略的描述。" msgid "Description of the firewall rule." msgstr "防火墙规则的描述。" msgid "Description of the firewall." msgstr "防火墙的描述信息。" msgid "Description of the image." msgstr "映像的描述。" msgid "Description of the input." msgstr "输入的描述信息。" msgid "Description of the job binary." msgstr "作业二进制文件的描述。" msgid "Description of the metering label." msgstr "测量标签的描述。" msgid "Description of the output." msgstr "输出的描述。" msgid "Description of the pool." msgstr "池的描述。" msgid "Description of the security group." msgstr "安全组的描述。" msgid "Description of the vip." msgstr "vip 的描述。" msgid "Description of the volume type." msgstr "卷类型的描述。" msgid "Description of the volume." msgstr "卷的描述。" msgid "Description of this Load Balancer." msgstr "此负载均衡器的描述。" msgid "Description of this listener." msgstr "此侦听器的描述。" msgid "Description of this pool." 
msgstr "此池的描述。" msgid "Desired IPs for this port." msgstr "此端口的所需 IP。" msgid "Desired capacity of the cluster." msgstr "集群的期望容量。" msgid "Desired initial number of instances." msgstr "所需初始实例数。" msgid "Desired initial number of resources in cluster." msgstr "集群中的期望初始资源数。" msgid "Desired initial number of resources." msgstr "所需初始资源数。" msgid "Desired number of instances." msgstr "所需实例数。" msgid "DesiredCapacity must be between MinSize and MaxSize" msgstr "DesiredCapacity 必须介于 MinSize 与 MaxSize 之间" msgid "Destination IP address or CIDR." msgstr "目的IP地址或无类间或路由。" msgid "Destination ip_address for this firewall rule." msgstr "此防火墙规则的目标 ip_address。" msgid "Destination port number or a range." msgstr "目标端口号或范围。" msgid "Destination port range for this firewall rule." msgstr "此防火墙规则的目标端口范围。" msgid "Detailed information about resource." msgstr "有关资源的详细信息。" msgid "Device ID of this port." msgstr "此端口的设备标识。" msgid "Device info for this network gateway." msgstr "此网络网关的设备信息。" msgid "" "Device type: at the moment we can make distinction only between disk and " "cdrom." msgstr "设备类型:此刻,我们只能区别磁盘和 cdrom。" msgid "" "Dict, which has expand properties for port. Used only if port property is " "not specified for creating port." msgstr "已扩展端口属性的字典。仅当未对创建端口指定端口属性时使用。" msgid "Dictionary containing workflow tasks." msgstr "包含工作流程任务的字典。" msgid "Dictionary of node configurations." msgstr "节点配置的字典。" msgid "Dictionary of variables to publish to the workflow context." msgstr "要发布至工作流程上下文的变量的字典。" msgid "Dictionary which contains input for workflow." msgstr "包含工作流程的输入的字典。" msgid "" "Dictionary-like section defining task policies that influence how Mistral " "Engine runs tasks. Must satisfy Mistral DSL v2." msgstr "" "类似字典的节,用于定义影响 Mistral 引擎运行任务的方式的任务策略。必须满足 " "Mistral DSL v2。" msgid "DisableRollback and OnFailure may not be used together" msgstr "不能将 DisableRollback 与 OnFailure 一起使用" msgid "Disk format of image." msgstr "映像的磁盘格式。" msgid "Does not contain a valid AWS Access Key or certificate" msgstr "未包含有效 AWS 访问密钥或证书" msgid "Domain email." msgstr "域电子邮件。" msgid "Domain name." msgstr "域名。" #, python-format msgid "Duplicate names %s" msgstr "名称 %s 重复" msgid "Duplicate refs are not allowed." msgstr "不允许重复引用。" msgid "Duration for the time constraint." msgstr "时间约束的持续时间。" msgid "EIP address to associate with instance." msgstr "要与实例关联的 EIP 地址。" #, python-format msgid "Each %(object_name)s must contain a %(sub_section)s key." msgstr "每个 %(object_name)s 都必须包含一个 %(sub_section)s 键。" msgid "Each Resource must contain a Type key." msgstr "每个资源都必须包含一个“类型”键。" msgid "Ebs is missing, this is required when specifying BlockDeviceMappings." msgstr "缺少 Ebs,指定 BlockDeviceMappings 时需要此项。" msgid "" "Egress rules are only allowed when Neutron is used and the 'VpcId' property " "is set." msgstr "仅当使用了 Neutron 并且设置了“VpcId”属性时,才允许出口规则。" #, python-format msgid "Either %(net)s or %(port)s must be provided." msgstr "必须提供 %(net)s 或 %(port)s。" msgid "Either 'EIP' or 'AllocationId' must be provided." msgstr "必须提供“EIP”或“AllocationId”。" msgid "Either 'InstanceId' or 'LaunchConfigurationName' must be provided." msgstr "必须提供“InstanceId”或“LaunchConfigurationName”。" #, python-format msgid "Either project or domain must be specified for role %s" msgstr "必须对角色 %s 指定项目或域" #, python-format msgid "Either volume_id or snapshot_id must be specified for device mapping %s" msgstr "必须为设备映射 %s 指定 volume_id 或 snapshot_id" msgid "Email address of keystone user." msgstr "keystone 用户的电子邮件地址。" msgid "Enable the legacy OS::Heat::CWLiteAlarm resource." 
msgstr "请启用旧的 OS::Heat::CWLiteAlarm 资源。" msgid "Enable the preview Stack Abandon feature." msgstr "请启用预览“堆栈放弃”功能。" msgid "Enable the preview Stack Adopt feature." msgstr "请启用预览“堆栈采用”功能。" msgid "" "Enables Source NAT on the router gateway. NOTE: The default policy setting " "in Neutron restricts usage of this property to administrative users only." msgstr "" "在路由器网关上启用源 NAT。注意:Neutron 中的缺省策略设置将此属性限制为仅供管" "理用户使用。" msgid "" "Enables engine with convergence architecture. All stacks with this option " "will be created using convergence engine." msgstr "启用具有汇合体系结构的引擎。将使用汇合引擎来创建带有此选项的所有堆栈。" msgid "Enables or disables read-only access mode of volume." msgstr "启用或禁用卷的只读访问方式。" msgid "Encapsulation mode for the ipsec policy." msgstr "IPSec 策略的封装方式。" msgid "" "Encrypt template parameters that were marked as hidden and also all the " "resource properties before storing them in database." msgstr "" "对标记为隐藏的加密模板参数及所有资源属性进行加密,然后将它们存储在数据库中。" msgid "Encryption algorithm for the ike policy." msgstr "ike 策略的加密算法。" msgid "Encryption algorithm for the ipsec policy." msgstr "IPSec 策略的加密算法。" msgid "End address for the allocation pool." msgstr "分配池的结束地址。" #, python-format msgid "End resizing the group %(group)s" msgstr "请结束对组 %(group)s 调整大小" msgid "" "Endpoint/url which can be used for signalling handle when signal_transport " "is set to TOKEN_SIGNAL. None for all other signal transports." msgstr "" "signal_transport 设置为 TOKEN_SIGNAL 时可用于通告句柄的端点/URL。对于所有其他" "信号传输,此项为 None。" msgid "Endpoint/url which can be used for signalling handle." msgstr "可用于通告句柄的端点/URL。" msgid "Engine_Id" msgstr "Engine_Id" msgid "Error" msgstr "错误" #, python-format msgid "Error authorizing action %s" msgstr "授权操作 %s 时出错" #, python-format msgid "Error creating ec2 keypair for user %s" msgstr "为用户 %s 创建 EC2 密钥对时出错" msgid "" "Error during applying access rules to share \"{0}\". The root cause of the " "problem is the following: {1}." msgstr "对共享“{0}”应用访问规则时出错。问题的根本原因如下:{1}。" msgid "Error during creation of share \"{0}\"" msgstr "创建共享“{0}”时出错" msgid "Error during deleting share \"{0}\"." msgstr "删除共享“{0}”时出错" #, python-format msgid "Error validating value '%(value)s'" msgstr "验证值“%(value)s”时出错" #, python-format msgid "Error validating value '%(value)s': %(message)s" msgstr "验证值“%(value)s”时出错:%(message)s" msgid "Ethertype of the traffic." msgstr "流量的以太网类型。" msgid "Exclude state for cidr." msgstr "请排除 cidr 的状态。" #, python-format msgid "Expected 1 external network, found %d" msgstr "需要 1 个外部网络,找到 %d 个" msgid "Export locations of share." msgstr "导出共享位置。" msgid "Expression of the alarm to evaluate." msgstr "要求值的警报表达式。" msgid "External fixed IP address." msgstr "外部固定 IP 地址。" msgid "External fixed IP addresses for the gateway." msgstr "网关的外部固定 IP 地址。" msgid "External network gateway configuration for a router." msgstr "路由器的外部网络网关配置。" msgid "" "Extra parameters to include in the \"floatingip\" object in the creation " "request. Parameters are often specific to installed hardware or extensions." msgstr "" "要包括在创建请求中的“floatingip”对象内的额外参数。参数通常特定于所安装硬件或" "扩展。" msgid "Extra parameters to include in the creation request." msgstr "要包括在创建请求中的额外参数。" msgid "Extra parameters to include in the request." msgstr "要包含在请求中的额外参数。" msgid "" "Extra parameters to include in the request. Parameters are often specific to " "installed hardware or extensions." msgstr "要包含在请求中的额外参数。参数通常特定于所安装硬件或扩展。" msgid "Extra specs key-value pairs defined for share type." 
msgstr "对共享类型额外指定的键值对。" #, python-format msgid "Failed to attach interface (%(port)s) to server (%(server)s)" msgstr "无法将接口 (%(port)s) 附加至服务器 (%(server)s)" #, python-format msgid "Failed to attach volume %(vol)s to server %(srv)s - %(err)s" msgstr "无法将卷 %(vol)s 附加至服务器 %(srv)s - %(err)s" #, python-format msgid "Failed to create Bay '%(name)s' - %(reason)s" msgstr "无法创建支架“%(name)s” - %(reason)s" #, python-format msgid "Failed to detach interface (%(port)s) from server (%(server)s)" msgstr "无法从服务器 (%(server)s) 拆离接口 (%(port)s)" #, python-format msgid "Failed to execute %(action)s for %(cluster)s: %(reason)s" msgstr "对 %(cluster)s 执行 %(action)s 失败:%(reason)s" #, python-format msgid "Failed to extend volume %(vol)s - %(err)s" msgstr "无法扩展卷 %(vol)s - %(err)s" #, python-format msgid "Failed to fetch template: %s" msgstr "未能访存模板:%s" #, python-format msgid "Failed to find instance %s" msgstr "找不到实例 %s" #, python-format msgid "Failed to find server %s" msgstr "无法找到服务器 %s" #, python-format msgid "Failed to parse JSON data: %s" msgstr "无法解析 JSON 数据:%s" #, python-format msgid "Failed to restore volume %(vol)s from backup %(backup)s - %(err)s" msgstr "无法从备份 %(backup)s 复原卷 %(vol)s - %(err)s" msgid "Failed to retrieve template" msgstr "获取模板失败" #, python-format msgid "Failed to retrieve template data: %s" msgstr "未能检索到模板数据:%s" #, python-format msgid "Failed to retrieve template: %s" msgstr "未能检索到模板:%s" #, python-format msgid "" "Failed to send message to stack (%(stack_name)s) on other engine " "(%(engine_id)s)" msgstr "未能对其他引擎 (%(engine_id)s) 将消息发送至堆栈 (%(stack_name)s)" #, python-format msgid "Failed to stop stack (%(stack_name)s) on other engine (%(engine_id)s)" msgstr "未能对其他引擎 (%(engine_id)s) 停止堆栈 (%(stack_name)s)" #, python-format msgid "Failed to update Bay '%(name)s' - %(reason)s" msgstr "无法更新支架“%(name)s” - %(reason)s" msgid "Failed to update, can not found port info." msgstr "未能更新,找不到端口信息。" #, python-format msgid "" "Failed validating stack template using Heat endpoint at region \"%(region)s" "\" due to \"%(exc)s\"" msgstr "" "由于发生以下异常,未能使用区域“%(region)s”上 Heat 端点验证堆栈模板:“%(exc)s”" msgid "Fake attribute !a." msgstr "伪属性 !a。" msgid "Fake attribute a." msgstr "伪属性 a。" msgid "Fake property !a." msgstr "伪属性 !a。" msgid "Fake property !c." msgstr "伪属性 !c。" msgid "Fake property a." msgstr "伪属性 a。" msgid "Fake property c." msgstr "伪属性 c。" msgid "Fake property ca." msgstr "伪属性 ca。" msgid "" "False to trigger actions when the threshold is reached AND the alarm's state " "has changed. By default, actions are called each time the threshold is " "reached." msgstr "" "False 以触发操作(当达到阈值并且警报的状态更改时)。缺省情况下,每次达到阈值" "时都会调用操作。" #, python-format msgid "Field %(field)s of %(objname)s is not an instance of Field" msgstr "%(objname)s 的字段 %(field)s 不是字段实例" msgid "" "Fixed IP address to specify for the port created on the requested network." msgstr "要对所请求网络上创建的端口指定的固定 IP 地址。" msgid "Fixed IP addresses." msgstr "固定 IP 地址。" msgid "Fixed IPv4 address for this NIC." msgstr "此 NIC 的固定 IPv4 地址。" msgid "Flag indicating if traffic to or from instance is validated." msgstr "指示是否已验证实例出入流量的标记。" msgid "" "Flag to enable/disable port security on the network. It provides the default " "value for the attribute of the ports created on this network." msgstr "" "用于在网络上启用/禁用端口安全性的标记。它为此网络上创建的端口的该属性提供缺省" "值。" msgid "" "Flag to enable/disable port security on the port. When disable this " "feature(set it to False), there will be no packages filtering, like security-" "group and address-pairs." 
msgstr "" "用于在端口上启用/禁用端口安全性的标记。如果禁用此功能(将其设置为 False),那" "么不会进行包过滤,例如,安全组和地址对。" msgid "Flavor of the instance." msgstr "实例的实例类型。" msgid "Friendly name of the port." msgstr "端口的友好名称。" msgid "Friendly name of the router." msgstr "路由器的友好名称。" msgid "Friendly name of the subnet." msgstr "子网的友好名称。" #, python-format msgid "Function \"%s\" must have arguments" msgstr "函数“%s”必须具有自变量" #, python-format msgid "Function \"%s\" usage: [\"\", \"\"]" msgstr "函数“%s”用法:[\"\", \"\"]" #, python-format msgid "Gateway IP address \"%(gateway)s\" is in invalid format." msgstr "网关 IP 地址“%(gateway)s”为无效格式。" msgid "Gateway network for the router." msgstr "路由器的网关网络。" msgid "Generic HeatAPIException, please use specific subclasses!" msgstr "发生一般性 HeatAPIException,请使用特定子类!" msgid "Glance image ID or name." msgstr "Glance 映像标识或名称。" msgid "Governs permissions set in manila for the cluster ips." msgstr "管理 manila 中为集群 IP 设置的许可权。" msgid "Granularity to use for age argument, defaults to days." msgstr "要用于 age 自变量的粒度,缺省为天。" msgid "Hadoop cluster name." msgstr "Hadoop 集群名称。" #, python-format msgid "Header X-Auth-Url \"%s\" not an allowed endpoint" msgstr "头 X-Auth-Url“%s”不是允许的端点" msgid "Health probe timeout, in seconds." msgstr "运行状况探测器超时,以秒计。" msgid "" "Heat build revision. If you would prefer to manage your build revision " "separately, you can move this section to a different file and add it as " "another config option." msgstr "" "Heat 构建修订版。如果宁愿单独管理构建修订版,那么可将此部分移至另一文件并将它" "作为另一配置选项添加。" msgid "Host" msgstr "主机" msgid "Hostname" msgstr "主机名" msgid "Hostname of the instance." msgstr "实例的主机名。" msgid "How long to preserve deleted data." msgstr "用于保留已删除的数据的时间。" msgid "" "How the client will signal the wait condition. CFN_SIGNAL will allow an HTTP " "POST to a CFN keypair signed URL. TEMP_URL_SIGNAL will create a Swift " "TempURL to be signalled via HTTP PUT. HEAT_SIGNAL will allow calls to the " "Heat API resource-signal using the provided keystone credentials. " "ZAQAR_SIGNAL will create a dedicated zaqar queue to be signalled using the " "provided keystone credentials. TOKEN_SIGNAL will allow and HTTP POST to a " "Heat API endpoint with the provided keystone token. NO_SIGNAL will result in " "the resource going to a signalled state without waiting for any signal." msgstr "" "客户机如何通告等待条件。CFN_SIGNAL 指示将允许向 CFN 密钥对签署的 URL 执行 " "HTTP POST。TEMP_URL_SIGNAL 指示将创建通过 HTTP PUT 通告的 Swift TempURL。" "HEAT_SIGNAL 指示将允许使用所提供的 keystone 证书调用 Heat API resource-" "signal。ZAQAR_SIGNAL 指示将创建使用所提供的 keystone 证书通告的专用 zaqar 队" "列。TOKEN_SIGNAL 将允许对带有所提供 keystone 令牌的 Heat API 端点执行 HTTP " "POST。NO_SIGNAL 将导致资源进入 COMPLETE 状态而不等待任何信号。" msgid "" "How the server should receive the metadata required for software " "configuration. POLL_SERVER_CFN will allow calls to the cfn API action " "DescribeStackResource authenticated with the provided keypair. " "POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the " "provided keystone credentials. POLL_TEMP_URL will create and populate a " "Swift TempURL with metadata for polling. ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "有关服务器如何接收软件配置所需的元数据。POLL_SERVER_CFN 指示将允许调用已使用" "所提供密钥对认证的 cfn API 操作 DescribeStackResource。POLL_SERVER_HEAT 指示" "将允许使用所提供 keystone 证书调用 Heat API resource-show。POLL_TEMP_URL 指示" "将创建 Swift TempURL 并使用要轮询的元数据进行填充。ZAQAR_MESSAGE 指示将创建专" "用 zaqar 队列并发布元数据以进行轮询。" msgid "How the server should signal to heat with the deployment output values." 
msgstr "服务器应该如何向 heat 通告部署输出值。" msgid "" "How the server should signal to heat with the deployment output values. " "CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL. " "TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT. " "HEAT_SIGNAL will allow calls to the Heat API resource-signal using the " "provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar " "queue to be signaled using the provided keystone credentials. NO_SIGNAL will " "result in the resource going to the COMPLETE state without waiting for any " "signal." msgstr "" "服务器应如何向 heat 通告部署输出值。CFN_SIGNAL 指示将允许向 CFN 密钥对签署的 " "URL 执行 HTTP POST。TEMP_URL_SIGNAL 指示将创建通过 HTTP PUT 通告的 Swift " "TempURL。HEAT_SIGNAL 指示将允许使用所提供的 keystone 证书调用 Heat API " "resource-signal。ZAQAR_SIGNAL 指示将创建使用所提供的 keystone 证书通告的专用 " "zaqar 队列。NO_SIGNAL 将导致资源进入 COMPLETE 状态而不等待任何信号。" msgid "" "How the user_data should be formatted for the server. For HEAT_CFNTOOLS, the " "user_data is bundled as part of the heat-cfntools cloud-init boot " "configuration data. For RAW the user_data is passed to Nova unmodified. For " "SOFTWARE_CONFIG user_data is bundled as part of the software config data, " "and metadata is derived from any associated SoftwareDeployment resources." msgstr "" "应该为服务器对 user_data 进行格式设置的方式。对于 HEAT_CFNTOOLS,user_data 作" "为 heat-cfntools cloud-init 引导配置数据的一部分捆绑。对于 RAW,会将 " "user_data 按原样传递至 Nova。对于 SOFTWARE_CONFIG,user_data 作为软件配置数据" "的一部分捆绑,并且元数据派生自任何关联的 SoftwareDeployment 资源。" msgid "Human readable name for the secret." msgstr "密钥的人类可读名称。" msgid "Human-readable name for the container." msgstr "容器的人类可读名称。" msgid "" "ID list of the L3 agent. User can specify multi-agents for highly available " "router. NOTE: The default policy setting in Neutron restricts usage of this " "property to administrative users only." msgstr "" "L3 代理的标识列表。用户可为高可用的路由器指定多个代理。注:Neutron 中的缺省策" "略设置将此属性限制为仅供管理用户使用。" msgid "ID of an existing port to associate with this server." msgstr "要与此服务器关联的现有端口的标识。" msgid "" "ID of an existing port with at least one IP address to associate with this " "floating IP." msgstr "具有至少一个要与此浮动 IP 关联的 IP 地址的现有端口的标识。" msgid "ID of network to create a port on." msgstr "要在其上创建端口的网络的标识。" msgid "ID of project for API authentication" msgstr "API认证的项目ID" msgid "ID of queue to use for signaling output values" msgstr "用于通告输出值的的队列的标识" msgid "" "ID of resource to apply configuration to. Normally this should be a Nova " "server ID." msgstr "要对其应用配置的资源的标识,通常应该为 Nova 服务器标识。" msgid "" "ID of server (VM, etc...) on host that is used for exporting network file-" "system." msgstr "主机上用于导出网络文件系统的服务器(VM,等等...)的标识。" msgid "ID of signal to use for signaling output values" msgstr "要用于发出输出值的信号的信号标识" msgid "" "ID of software configuration resource to execute when applying to the server." msgstr "应用于服务器时,要执行的软件配置资源的标识。" msgid "ID of the Cluster Template used for Node Groups and configurations." msgstr "用于节点组和配置的集群模板的标识。" msgid "ID of the InternetGateway." msgstr "InternetGateway 的标识。" msgid "" "ID of the L3 agent. NOTE: The default policy setting in Neutron restricts " "usage of this property to administrative users only." msgstr "" "L3 代理的标识。注:Neutron 中的缺省策略设置将此属性限制为仅供管理用户使用。" msgid "ID of the Node Group Template." msgstr "节点组模板的标识。" msgid "ID of the VPNGateway to attach to the VPC." msgstr "要连接至 VPC 的 VPNGateway 的标识。" msgid "ID of the default image to use for the template." msgstr "要用于模板的缺省映像的标识。" msgid "ID of the default pool this listener is associated to." 
msgstr "与此侦听器相关联的缺省池的标识。" msgid "ID of the floating IP to assign to the server." msgstr "要分配给服务器的浮动 IP 的标识。" msgid "ID of the floating IP to associate." msgstr "要关联的浮动 IP 的标识。" msgid "ID of the health monitor associated with this pool." msgstr "与此池相关联的运行状况监视器标识。" msgid "ID of the image to use for the template." msgstr "要用于模板的映像的标识。" msgid "ID of the load balancer this listener is associated to." msgstr "与此侦听器相关联的负载均衡器的标识。" msgid "ID of the network in which this IP is allocated." msgstr "在其中分配此 IP 的网络的标识。" msgid "ID of the port associated with this IP." msgstr "与此 IP 关联的端口的标识。" msgid "ID of the queue." msgstr "队列的标识。" msgid "ID of the router used as gateway, set when associated with a port." msgstr "用作网关的路由器的标识,在与端口关联时设置。" msgid "ID of the router." msgstr "路由ID。" msgid "ID of the server being deployed to" msgstr "要部署至的服务器的标识" msgid "ID of the stack this deployment belongs to" msgstr "此部署所属的栈ID" msgid "ID of the tenant to which the RBAC policy will be enforced." msgstr "将对其实施 RBAC 策略的租户的标识。" msgid "ID of the tenant who owns the health monitor." msgstr "拥有运行状况监视器的租户的标识。" msgid "ID or name of the QoS policy." msgstr "QoS 策略的标识或名称。" msgid "ID or name of the RBAC object." msgstr "RBAC 对象的标识或名称。" msgid "ID or name of the external network for the gateway." msgstr "网关的外部网络的标识或名称。" msgid "ID or name of the image to register." msgstr "要注册的映像的标识或名称。" msgid "ID or name of the load balancer with which listener is associated." msgstr "与侦听器相关联的负载均衡器的标识或名称。" msgid "ID or name of the load balancing pool." msgstr "负载均衡池的标识或名称。" msgid "" "ID that AWS assigns to represent the allocation of the address for use with " "Amazon VPC. Returned only for VPC elastic IP addresses." msgstr "" "AWS 分配的标识,用于表示要与 Amazon VPC 配合使用的地址分配。仅对于 VPC 弹性 " "IP 地址才返回。" msgid "IP address and port of the pool." msgstr "池的 IP 地址和端口。" msgid "IP address desired in the subnet for this port." msgstr "此端口的子网中所需的 IP 地址。" msgid "IP address for the VIP." msgstr "VIP 的 IP 地址。" msgid "IP address of the associated port, if specified." msgstr "所关联端口的 IP 地址(如果已指定)。" msgid "" "IP address of the floating IP. NOTE: The default policy setting in Neutron " "restricts usage of this property to administrative users only." msgstr "" "浮动 IP 的 IP 地址。注意:Neutron 中的缺省策略设置将此属性的使用者限制为仅管" "理用户。" msgid "IP address of the pool member on the pool network." msgstr "池网络上池成员的 IP 地址。" msgid "IP address of the pool member." msgstr "池成员的 IP 地址。" msgid "IP address of the vip." msgstr "vip 的 IP 地址。" msgid "IP address to allow through this port." msgstr "要通过此端口允许的 IP 地址。" msgid "IP address to use if the port has multiple addresses." msgstr "要使用的 IP 地址(如果端口具有多个地址)。" msgid "" "IP or other address information about guest that allowed to access to Share." msgstr "有关允许访问共享的访客的 IP 或其他地址信息。" msgid "IPv6 RA (Router Advertisement) mode." msgstr "IPv6 RA(路由器广告)方式。" msgid "IPv6 address mode." msgstr "IPv6 地址方式。" msgid "Id of a resource." msgstr "资源标识。" msgid "Id of the manila share." msgstr "manila 共享的标识。" msgid "Id of the tenant owning the firewall policy." msgstr "拥有此防火墙策略的租户的标识。" msgid "Id of the tenant owning the firewall." msgstr "拥有防火墙的租户的标识。" msgid "Identifier of the source instance to replicate." msgstr "要复制的源实例的标识。" #, python-format msgid "" "If \"%(size)s\" is provided, only one of \"%(image)s\", \"%(image_ref)s\", " "\"%(source_vol)s\", \"%(snapshot_id)s\" can be specified, but currently " "specified options: %(exclusive_options)s." 
msgstr "" "如果提供了“%(size)s”,那么只能指定下列其中一" "项:“%(image)s”、“%(image_ref)s”、“%(source_vol)s”和“%(snapshot_id)s”,但当前" "指定了以下选项:%(exclusive_options)s。" msgid "If False, closes the client socket connection explicitly." msgstr "如果为 False,那么将显式关闭客户机套接字连接。" msgid "" "If True, delete any objects in the container when the container is deleted. " "Otherwise, deleting a non-empty container will result in an error." msgstr "" "如果为 True,请在删除容器时删除容器中的任何对象。否则,删除非空容器将导致错" "误。" msgid "If True, enable config drive on the server." msgstr "如果为 true,请在服务器上启用配置驱动器。" msgid "" "If configured, it allows to run action or workflow associated with a task " "multiple times on a provided list of items." msgstr "" "如果配置了此项,那么允许针对所提供项列表多次运行与任务相关联的操作或工作流" "程。" msgid "If set, then the server's certificate will not be verified." msgstr "如果已设置,那么将不验证服务器的证书。" msgid "If specified, the backup to create the volume from." msgstr "如果已指定,那么表示要从其创建卷的备份。" msgid "If specified, the backup used as the source to create the volume." msgstr "如果已指定,那么表示用作创建卷的源的备份。" msgid "If specified, the name or ID of the image to create the volume from." msgstr "如果已指定,那么表示要从其创建卷的映像的名称或标识。" msgid "If specified, the snapshot to create the volume from." msgstr "如果已指定,那么表示要从其创建卷的快照。" msgid "If specified, the type of volume to use, mapping to a specific backend." msgstr "如果已指定,那么表示要使用的卷(映射至特定后端)的类型。" msgid "If specified, the volume to use as source." msgstr "如果已指定,那么表示要用作源的卷。" msgid "" "If the region is hierarchically a child of another region, set this " "parameter to the ID of the parent region." msgstr "如果此区域在层次结构中是另一区域的子代,请将此参数设置此父区域的标识。" msgid "" "If true, the resources in the chain will be created concurrently. If false " "or omitted, each resource will be treated as having a dependency on the " "previous resource in the list." msgstr "" "如果为 true,那么链中的资源将以并行方式创建。如果为 false 或被省略,那么每个" "资源将被视为与列表中的前一资源存在依赖关系。" msgid "If without InstanceId, ImageId and InstanceType are required." msgstr "如果没有 InstanceId,那么需要 ImageId 和 InstanceType。" #, python-format msgid "Illegal prefix bounds: %(key1)s=%(value1)s, %(key2)s=%(value2)s." msgstr "非法前缀界限:%(key1)s=%(value1)s,%(key2)s=%(value2)s。" #, python-format msgid "" "Image %(image)s requires %(imram)s minimum ram. Flavor %(flavor)s has only " "%(flram)s." msgstr "" "映像 %(image)s 最低需要 %(imram)s RAM。类型 %(flavor)s 只有 %(flram)s。" #, python-format msgid "" "Image %(image)s requires %(imsz)s GB minimum disk space. Flavor %(flavor)s " "has only %(flsz)s GB." msgstr "" "映像 %(image)s 最低需要 %(imsz)s GB 磁盘空间。类型 %(flavor)s 只有 %(flsz)s " "GB。" #, python-format msgid "Image status is required to be %(cstatus)s not %(wstatus)s." msgstr "映像状态必须为 %(cstatus)s 而不是 %(wstatus)s。" msgid "Incompatible parameters were used together" msgstr "不兼容的参数被混用在一起" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be one of: %(allowed)s" msgstr "针对“%(fn_name)s”的不正确自变量应该是下列其中一项:%(allowed)s" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be: %(example)s" msgstr "针对“%(fn_name)s”的不正确自变量应该如下:%(example)s" msgid "Incorrect arguments: Items to merge must be maps." 
msgstr "自变量不正确:要合并的项必须为映射。" #, python-format msgid "" "Incorrect index to \"%(fn_name)s\" should be between 0 and %(max_index)s" msgstr "“%(fn_name)s”的索引不正确,应该在 0 到 %(max_index)s 之间" #, python-format msgid "Incorrect index to \"%(fn_name)s\" should be: %(example)s" msgstr "“%(fn_name)s”的索引不正确,应该为:%(example)s" #, python-format msgid "Index to \"%s\" must be a string" msgstr "针对“%s”的索引必须是字符串" #, python-format msgid "Index to \"%s\" must be an integer" msgstr "针对“%s”的索引必须是整数" msgid "" "Indicate whether the volume should be deleted when the instance is " "terminated." msgstr "指示实例终止时是否应删除该卷。" msgid "" "Indicate whether the volume should be deleted when the server is terminated." msgstr "指示当服务器终止时是否应该删除该卷。" msgid "Indicates remote IP prefix to be associated with this metering rule." msgstr "指示要与此测量规则关联的远程 IP 前缀。" msgid "" "Indicates whether or not to create a distributed router. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only. This property can not be used in conjunction with the L3 agent " "ID." msgstr "" "指示是否创建分布式路由器。注:Neutron 中的缺省策略设置将此属性限制为仅由管理" "用户使用。此属性不能与 L3 代理标识配合使用。" msgid "" "Indicates whether or not to create a highly available router. NOTE: The " "default policy setting in Neutron restricts usage of this property to " "administrative users only. And now neutron do not support distributed and ha " "at the same time." msgstr "" "指示是否创建高可用的路由器。注:Neutron 中的缺省策略设置将此属性限制为仅由管" "理用户使用。并且,现在 Neutron 不同时支持分布式和 HA。" msgid "Indicates whether this firewall rule is enabled or not." msgstr "指示此防火墙规则是否已启用。" msgid "Information used to configure the bucket as a static website." msgstr "用于将存储区配置为静态 Web 站点的信息。" msgid "Initiator state in lowercase for the ipsec site connection." msgstr "IPSec 站点连接的发起方状态(以小写表示)。" #, python-format msgid "Input in signal data must be a map, find a %s" msgstr "信号数据中的输入必须为映射,但发现 %s" msgid "Input values for the workflow." msgstr "工作流程的输入值。" msgid "Input values to apply to the software configuration on this server." msgstr "要应用于此服务器上的软件配置的输入值。" msgid "Instance ID to associate with EIP specified by EIP property." msgstr "要与由 EIP 属性指定的 EIP 关联的实例标识。" msgid "Instance ID to associate with EIP." msgstr "要与 EIP 关联的实例标识。" msgid "Instance connection to CFN/CW API validate certs if SSL is used." msgstr "实例至 CFN/CW API 验证证书的连接(如果使用了 SSL)。" msgid "Instance connection to CFN/CW API via https." msgstr "实例至 CFN/CW API 的连接(通过 https)。" #, python-format msgid "Instance is not ACTIVE (was: %s)" msgstr "实例未处于活动状态(曾为 %s)" #, python-format msgid "" "Instance metadata must not contain greater than %s entries. This is the " "maximum number allowed by your service provider" msgstr "实例元数据包含的条目数不能超过 %s。这是服务提供程序所允许的最大数目" msgid "Interface type of keystone service endpoint." msgstr "keystone 服务端点的接口类型。" msgid "Internet protocol version." 
msgstr "因特网协议版本。" #, python-format msgid "Invalid %s, expected a mapping" msgstr "%s 无效,需要映射" #, python-format msgid "Invalid CRON expression: %s" msgstr "无效 CRON 表达式:%s" #, python-format msgid "Invalid Parameter type \"%s\"" msgstr "参数类型“%s”无效" #, python-format msgid "Invalid Property %s" msgstr "属性 %s 无效" msgid "Invalid Stack address" msgstr "堆栈地址无效" msgid "Invalid Template URL" msgstr "模板URL无效" #, python-format msgid "Invalid URL scheme %s" msgstr "URL 方案 %s 无效" #, python-format msgid "Invalid UUID version (%d)" msgstr "UUID 版本 (%d) 无效" #, python-format msgid "Invalid action %s" msgstr "操作 %s 无效" #, python-format msgid "Invalid action %s specified" msgstr "指定的操作 %s 无效" #, python-format msgid "Invalid adopt data: %s" msgstr "以下采用数据无效:%s" #, python-format msgid "Invalid cloud_backend setting in heat.conf detected - %s" msgstr "在 heat.conf 中检测到无效 cloud_backend 设置 - %s" #, python-format msgid "Invalid codes in ignore_errors : %s" msgstr "ignore_errors 中的代码无效:%s" #, python-format msgid "Invalid content type %(content_type)s" msgstr "内容类型 %(content_type)s 无效" #, python-format msgid "Invalid default %(default)s (%(exc)s)" msgstr "缺省 %(default)s 无效 (%(exc)s)" #, python-format msgid "Invalid deletion policy \"%s\"" msgstr "无效删除策略“%s”" #, python-format msgid "Invalid filter parameters %s" msgstr "过滤器参数 %s 无效" #, python-format msgid "Invalid hook type \"%(hook)s\" for %(resource)s" msgstr "挂钩类型“%(hook)s”对 %(resource)s 无效" #, python-format msgid "" "Invalid hook type \"%(value)s\" for resource breakpoint, acceptable hook " "types are: %(types)s" msgstr "挂钩类型“%(value)s”对资源断点无效,可接受挂钩类型为:%(types)s" #, python-format msgid "Invalid key %s" msgstr "键 %s 无效" #, python-format msgid "Invalid key '%(key)s' for %(entity)s" msgstr "对于 %(entity)s,键“%(key)s”无效" #, python-format msgid "Invalid keys in resource mark unhealthy %s" msgstr "资源标记“不正常”中的键 %s 无效" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "磁盘格式和容器格式的混合用法无效。将磁盘格式或容器格式设置" "为“aki”、“ari”或“ami”的其中之一时,容器格式与磁盘格式必须匹配。" #, python-format msgid "Invalid parameter constraints for parameter %s, expected a list" msgstr "对参数 %s 无效的参数约束,需要列表" #, python-format msgid "" "Invalid restricted_action type \"%(value)s\" for resource, acceptable " "restricted_action types are: %(types)s" msgstr "" "restricted_action 类型“%(value)s”对资源无效,可接受 restricted_action 类型" "为:%(types)s" #, python-format msgid "" "Invalid stack name %s must contain only alphanumeric or \"_-.\" characters, " "must start with alpha and must be 255 characters or less." msgstr "" "堆栈名称 %s 无效,只能包含一个字母数据字符或“_-.”字符,必须以字母开头,长度不" "能超过 255 个字符。" #, python-format msgid "Invalid stack name %s, must be a string" msgstr "堆栈名称 %s 无效,必须为字符串" #, python-format msgid "Invalid status %s" msgstr "状态 %s 无效" #, python-format msgid "Invalid support status and should be one of %s" msgstr "无效支持状态,应该为下列其中一项:%s" #, python-format msgid "Invalid tag, \"%s\" contains a comma" msgstr "无效标记,“%s”包含逗号" #, python-format msgid "Invalid tag, \"%s\" is longer than 80 characters" msgstr "无效标记,“%s”的长度超过 80 个字符" #, python-format msgid "Invalid tag, \"%s\" is not a string" msgstr "无效标记,“%s”并非字符串" #, python-format msgid "Invalid tags, not a list: %s" msgstr "无效标记,并非列表:%s" #, python-format msgid "Invalid template type \"%(value)s\", valid types are: cfn, hot." 
msgstr "无效模板类型“%(value)s”,有效类型为:cfn 和 hot。" #, python-format msgid "Invalid timeout value %s" msgstr "无效超时值 %s" #, python-format msgid "Invalid timezone: %s" msgstr "无效时区:%s" #, python-format msgid "Invalid type (%s)" msgstr "类型 (%s) 无效" msgid "Ip allocation pools and their ranges." msgstr "IP 分配池及其范围。" msgid "Ip of the subnet's gateway." msgstr "子网的网关 IP。" msgid "Ip version for the subnet." msgstr "子网的 IP 版本。" msgid "Ip_version for this firewall rule." msgstr "此防火墙规则的 Ip_version。" msgid "It defines an executor to which task action should be sent to." msgstr "它定义应将任务操作发送至的执行程序。" #, python-format msgid "Items to join must be string, map or list not %s" msgstr "要加入的项必须为字符串、映射或列表,而不是 %s" #, python-format msgid "Items to join must be string, map or list. %s failed json serialization" msgstr "要加入的项必须为字符串、映射或列表。%s 进行 JSON 序列化失败" #, python-format msgid "Items to join must be strings not %s" msgstr "要加入的项必须为字符串,而不是 %s" #, python-format msgid "" "JSON body size (%(len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "JSON 主体大小(%(len)s 字节)超过最大所允许大小(%(limit)s 字节)。" msgid "JSON data that was uploaded via the SwiftSignalHandle." msgstr "通过 SwiftSignalHandle 上载的 JSON 数据。" msgid "" "JSON serialized map that includes the endpoint, token and/or other " "attributes the client must use for signalling this handle. The contents of " "this map depend on the type of signal selected in the signal_transport " "property." msgstr "" "JSON 已序列化包含该端点、令牌和/或客户机必须用于通告此句柄的其他属性的映射。" "此映射的内容取决于 signal_transport 属性中选择的信号的类型。" msgid "" "JSON string containing data associated with wait condition signals sent to " "the handle." msgstr "JSON 字符串,包含与发送至句柄的等待条件信号相关联的数据。" msgid "" "Key used to encrypt authentication info in the database. Length of this key " "must be 32 characters." msgstr "用于在数据库中加密认证信息的密钥。此密钥的长度必须为 32 个字符。" msgid "Key/Value pairs to extend the capabilities of the flavor." msgstr "用于控制该类型的功能的键/值对。" msgid "Key/value pairs associated with the volume in raw dict form." msgstr "与该卷相关联的键/值对(使用原始字典形式)。" msgid "Key/value pairs associated with the volume." msgstr "与卷关联的键/值对。" msgid "Key/value pairs to associate with the volume." msgstr "要与卷关联的键/值对。" msgid "Keypair added to instances to make them accessible for user." msgstr "已添加至实例(以使这些实例可供用户访问)的密钥对。" msgid "Keypair secret key." msgstr "密钥对密钥。" msgid "" "Keystone domain ID which contains heat template-defined users. If this " "option is set, stack_user_domain_name option will be ignored." msgstr "" "包含 Heat 模板定义的用户的 Keystone 域标识。如果设置了此选项,那么将忽略 " "stack_user_domain_name 选项。" msgid "" "Keystone domain name which contains heat template-defined users. If " "`stack_user_domain_id` option is set, this option is ignored." msgstr "" "包含 Heat 模板定义的用户的 Keystone 域名。如果设置了“stack_user_domain_id”选" "项,那么将忽略此选项。" msgid "Keystone domain." msgstr "Keystone 域。" #, python-format msgid "" "Keystone has more than one service with same name %(service)s. Please use " "service id instead of name" msgstr "" "Keystone 有多个服务具有相同名称 %(service)s。请使用服务标识而不是名称。" msgid "Keystone password for stack_domain_admin user." msgstr "stack_domain_admin 用户的 Keystone 密码。" msgid "Keystone project." msgstr "Keystone 项目。" msgid "Keystone role for heat template-defined users." msgstr "Heat 模板定义的用户的 Keystone 角色。" msgid "Keystone role." msgstr "Keystone 角色。" msgid "Keystone user group." msgstr "keystone 用户组。" msgid "Keystone user groups." msgstr "keystone 用户组。" msgid "Keystone user is enabled or disabled." 
msgstr "keystone 用户已启用或已禁用。" msgid "" "Keystone username, a user with roles sufficient to manage users and projects " "in the stack_user_domain." msgstr "" "Keystone 用户名,具有的角色足以管理 stack_user_domain 中的用户和项目的用户。" msgid "L2 segmentation strategy on the external side of the network gateway." msgstr "网络网关外端上的 L2 分段策略。" msgid "LBaaS provider to implement this load balancer instance." msgstr "用于实现此负载均衡器实例的 LBaaS 提供程序。" msgid "Length of OS_PASSWORD after encryption exceeds Heat limit (255 chars)" msgstr "在加密超过 Heat 限制(255 个字符)之后,OS_PASSWORD 的长度" msgid "Length of the string to generate." msgstr "要生成的字符串的长度。" msgid "" "Length property cannot be smaller than combined character class and " "character sequence minimums" msgstr "长度属性不能小于组合字符类及字符序列最小值" msgid "Level of access that need to be provided for guest." msgstr "需要对访客提供的访问权的级别。" msgid "" "Lifecycle actions to which the configuration applies. The string values " "provided for this property can include the standard resource actions CREATE, " "DELETE, UPDATE, SUSPEND and RESUME supported by Heat." msgstr "" "配置所适用的生命周期操作。为此属性提供的字符串值可包括受 Heat 支持的标准资源" "操作 CREATE、DELETE、UPDATE、SUSPEND 和 RESUME。" msgid "List of LoadBalancer resources." msgstr "负载均衡器资源列表。" msgid "List of Security Groups assigned on current LB." msgstr "对当前 LB 分配的安全组的列表。" msgid "List of TLS container references for SNI." msgstr "SNI 的 TLS 容器引用列表。" msgid "List of database instances." msgstr "数据库实例的列表。" msgid "List of databases to be created on DB instance creation." msgstr "进行数据库实例创建时要创建的数据库的列表。" msgid "List of directories to search for plug-ins." msgstr "用于搜索插件的目录的列表。" msgid "List of dns nameservers." msgstr "DNS 名称服务器的列表。" msgid "List of firewall rules in this firewall policy." msgstr "此防火墙策略中的防火墙规则列表。" msgid "List of health monitors associated with the pool." msgstr "与池关联的运行状况监视器的列表。" msgid "List of hosts to join aggregate." msgstr "加入聚集的主机的列表。" msgid "List of manila shares to be mounted." msgstr "要安装的 manila 共享列表。" msgid "List of network interfaces to create on instance." msgstr "要对实例创建的网络接口的列表。" msgid "List of processes to enable anti-affinity for." msgstr "要对其启用反亲缘关系的进程的列表。" msgid "List of processes to run on every node." msgstr "要在每个节点上运行的进程的列表。" msgid "List of role assignments." msgstr "角色分配的列表。" msgid "List of security group IDs associated with this interface." msgstr "与此接口关联的安全组标识的列表。" msgid "List of security group egress rules." msgstr "安全组外出权规则的列表。" msgid "List of security group ingress rules." msgstr "安全组进入权规则的列表。" msgid "" "List of security group names or IDs to assign to this Node Group template." msgstr "要分配给此“节点组”模板的安全组名称或标识的列表。" msgid "" "List of security group names or IDs. Cannot be used if neutron ports are " "associated with this server; assign security groups to the ports instead." msgstr "" "安全组名称或标识的列表。如果 neutron 端口与此服务器关联,那么不能使用;请改为" "将安全组分配给这些端口。" msgid "List of security group rules." msgstr "安全组规则的列表。" msgid "List of subnet prefixes to assign." msgstr "要分配的子网前缀列表。" msgid "List of tags associated with this interface." msgstr "与此接口关联的标记的列表。" msgid "List of tags to attach to the instance." msgstr "要连接至实例的标记的列表。" msgid "List of tags to attach to this resource." msgstr "要连接至此资源的标记的列表。" msgid "List of tags to be attached to this resource." msgstr "要连接至此资源的标记的列表。" msgid "" "List of tasks which should be executed before this task. Used only in " "reverse workflows." msgstr "应在此任务前执行的任务列表。仅应在保留工作流程中使用。" msgid "" "List of tasks which will run after the task has completed regardless of " "whether it is successful or not." 
msgstr "将在该任务完成(而不理会是否成功)后运行的任务列表。" msgid "List of tasks which will run after the task has completed successfully." msgstr "将在该任务成功完成后运行的任务列表。" msgid "" "List of tasks which will run after the task has completed with an error." msgstr "将在该任务完成(但带有错误)后运行的任务列表。" msgid "List of users to be created on DB instance creation." msgstr "要在数据库实例创建时创建的用户的列表。" msgid "" "List of workflows' executions, each of them is a dictionary with information " "about execution. Each dictionary returns values for next keys: id, " "workflow_name, created_at, updated_at, state for current execution state, " "input, output." msgstr "" "工作流程的执行列表,其中每项是一个带有相关执行信息的字典。每个字典返回以下键" "的值:id、workflow_name、created_at、updated_at、state for current execution " "state、input 和 output。" msgid "Listener associated with this pool." msgstr "与此池相关联的侦听器。" msgid "" "Local path on each cluster node on which to mount the share. Defaults to '/" "mnt/{share_id}'." msgstr "要在每个集群节点上安装该共享的本地路径。缺省为 '/mnt/{share_id}'。" msgid "Location of the SSL certificate file to use for SSL mode." msgstr "要用于 SSL 方式的 SSL 证书文件的位置。" msgid "Location of the SSL key file to use for enabling SSL mode." msgstr "要用于启用 SSL 方式的 SSL 密钥文件的位置。" msgid "MAC address of the port." msgstr "端口的 MAC 地址。" msgid "MAC address to allow through this port." msgstr "要通过此端口允许的 MAC 地址。" msgid "Map between role with either project or domain." msgstr "角色与另一项目或域之间的映射。" msgid "" "Map containing options specific to the configuration management tool used by " "this resource." msgstr "包含特定于由此资源使用的配置管理工具的选项的映射。" msgid "" "Map representing the cloud-config data structure which will be formatted as " "YAML." msgstr "表示 cloud-config 数据结构的映射,格式将设置为 YAML。" msgid "" "Map representing the configuration data structure which will be serialized " "to JSON format." msgstr "表示配置数据结构的映射,该结构将序列化为 JSON 格式。" msgid "Max bandwidth in kbps." msgstr "最大带宽(以 kbps 计)。" msgid "Max burst bandwidth in kbps." msgstr "最大脉冲带宽(以 kbps 计)。" msgid "Max size of the cluster." msgstr "集群的最大大小。" #, python-format msgid "Maximum %s is 1 hour." msgstr "最大 %s 为 1 小时。" msgid "Maximum depth allowed when using nested stacks." msgstr "使用嵌套堆栈时允许的最大深度。" msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs)." msgstr "" "要接受的消息头的最大行大小。将大型令牌(通常是由 Keystone V3 API 生成的那些令" "牌)与大型服务目录配合使用时,可能需要增大 max_header_line。" msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs.)" msgstr "" "要接受的消息头的最大行大小。将大型令牌(通常是由 Keystone V3 API 生成的那些令" "牌)与大型服务目录配合使用时,可能需要增大 max_header_line。" msgid "Maximum number of instances in the group." msgstr "组中的最大实例数。" msgid "Maximum number of resources in the cluster. -1 means unlimited." msgstr "集群中的最大资源数。-1 表示无限制。" msgid "Maximum number of resources in the group." msgstr "组中的最大资源数。" msgid "Maximum number of stacks any one tenant may have active at one time." msgstr "一个租户可同时持有的最大活动堆栈数。" msgid "Maximum prefix size that can be allocated from the subnet pool." msgstr "可从子网池分配的最大前缀大小。" msgid "" "Maximum raw byte size of JSON request body. Should be larger than " "max_template_size." msgstr "JSON 请求主体的最大原始字节大小。长度应该超过 max_template_size。" msgid "Maximum raw byte size of any template." msgstr "任何模板的最大原始字节大小。" msgid "Maximum resources allowed per top-level stack. -1 stands for unlimited." 
msgstr "允许每个顶级堆栈使用的最大资源数。-1 表示无限制。" msgid "Maximum resources per stack exceeded." msgstr "已超过每个堆栈的最大资源数。" msgid "" "Maximum transmission unit size (in bytes) for the ipsec site connection." msgstr "IPSec 站点连接的最大传输单元大小(以字节计)。" msgid "Member list items must be strings" msgstr "成员列表项必须是字符串" msgid "Member list must be a list" msgstr "成员列表必须为一个列表" msgid "Members associated with this pool." msgstr "与此池相关联的成员。" msgid "Memory in MB for the flavor." msgstr "对应该类型的内存(以 MB 计)。" #, python-format msgid "Message: %(message)s, Code: %(code)s" msgstr "消息:%(message)s,代码:%(code)s" msgid "Metadata format invalid" msgstr "元数据格式无效" msgid "Metadata key-values defined for cluster." msgstr "对集群定义的元数据键/值。" msgid "Metadata key-values defined for node." msgstr "对节点定义的元数据键/值。" msgid "Metadata key-values defined for profile." msgstr "对概要文件定义的元数据键/值。" msgid "Metadata key-values defined for share." msgstr "为共享定义的元数据链值。" msgid "Meter name watched by the alarm." msgstr "警报监视的计量表名称。" msgid "" "Meter should match this resource metadata (key=value) additionally to the " "meter_name." msgstr "除了 meter_name 之外,计量表还应该与此资源元数据 (key=value) 匹配。" msgid "Meter statistic to evaluate." msgstr "要评估的计量表统计信息。" msgid "Method of implementation of session persistence feature." msgstr "会话持久性功能的实现方法。" msgid "Metric name watched by the alarm." msgstr "警报监视的度量值名称。" msgid "Min size of the cluster." msgstr "集群的最小大小。" msgid "MinSize can not be greater than MaxSize" msgstr "MinSize 不能超过 MaxSize" msgid "Minimum number of instances in the group." msgstr "组中的最小实例数。" msgid "Minimum number of resources in the cluster." msgstr "集群中的最小资源数。" msgid "Minimum number of resources in the group." msgstr "组中的最小资源数。" msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "PercentChangeInCapacity for the AdjustmentType property." msgstr "" "AutoScaling 组规模增大或缩小时添加或移除的最小资源数。仅当对 AdjustmentType " "属性指定 PercentChangeInCapacity 时才能使用此项。" msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "percent_change_in_capacity for the adjustment_type property." msgstr "" "AutoScaling 组规模增大或缩小时添加或移除的最小资源数。仅当对 adjustment_type " "属性指定 percent_change_in_capacity 时才能使用此项。" #, python-format msgid "Missing mandatory (%s) key from mark unhealthy request" msgstr "标记“不正常”请求中缺少必需 (%s) 键" #, python-format msgid "Missing parameter type for parameter: %s" msgstr "参数 %s 缺少参数类型" #, python-format msgid "Missing required credential: %(required)s" msgstr "缺少必需凭证:%(required)s" msgid "Mistral resource validation error" msgstr "Mistral 资源验证错误" msgid "Monasca notification." msgstr "Monasca 通知。" msgid "Multiple actions specified" msgstr "已指定多个操作" #, python-format msgid "Multiple physical resources were found with name (%(name)s)." msgstr "找到名称为 (%(name)s) 的多个物理资源。" #, python-format msgid "Multiple routers found with name %s" msgstr "找到名称为 %s 的多个路由器" msgid "Must specify 'InstanceId' if you specify 'EIP'." msgstr "如果指定“EIP”,那么必须指定“InstanceId”。" msgid "Name for the Sahara Cluster Template." msgstr "Sahara 集群模板的名称。" msgid "Name for the Sahara Node Group Template." msgstr "Sahara 节点组模板的名称。" msgid "Name for the aggregate." msgstr "聚集的名称。" msgid "Name for the availability zone." msgstr "可用区域的名称。" msgid "" "Name for the container. If not specified, a unique name will be generated." msgstr "容器的名称。如果未指定,那么将生成唯一名称。" msgid "Name for the firewall policy." msgstr "防火墙策略的名称。" msgid "Name for the firewall rule." 
msgstr "防火墙规则的名称。" msgid "Name for the firewall." msgstr "防火墙的名称。" msgid "Name for the ike policy." msgstr "ike 策略的名称。" msgid "" "Name for the image. The name of an image is not unique to a Image Service " "node." msgstr "映像的名称。映像的名称对映像服务节点并非唯一。" msgid "Name for the ipsec policy." msgstr "IPSec策略的名称。" msgid "Name for the ipsec site connection." msgstr "IPSec 站点连接的名称。" msgid "Name for the time constraint." msgstr "时间约束的名称。" msgid "Name for the vpn service." msgstr "VPN 服务的名称。" msgid "" "Name of attribute to compare. Names of the form metadata.user_metadata.X or " "metadata.metering.X are equivalent to what you can address through " "matching_metadata; the former for Nova meters, the latter for all others. To " "see the attributes of your Samples, use `ceilometer --debug sample-list`." msgstr "" "要比较的属性的名称。格式为 metadata.user_metadata.X 或 metadata.metering.X 的" "名称等价于可通过 matching_metadata 访问的内容;前者用于 Nova 计量表,后者用于" "所有其他情况。要查看样本的属性,请使用“ceilometer --debug sample-list”。" msgid "Name of key to use for substituting inputs during deployment." msgstr "在部署期间要用于替换输入的键的名称。" msgid "Name of keypair to inject into the server." msgstr "要插入到服务器中的密钥对的名称。" msgid "Name of keystone endpoint." msgstr "keystone 端点的名称。" msgid "Name of keystone group." msgstr "keystone 组的名称。" msgid "Name of keystone project." msgstr "keystone 项目的名称。" msgid "Name of keystone role." msgstr "keystone 角色的名称。" msgid "Name of keystone service." msgstr "keystone 服务的名称。" msgid "Name of keystone user." msgstr "keystone 用户的名称。" msgid "Name of registered datastore type." msgstr "已注册数据存储器类型的名称。" msgid "Name of the DB instance to create." msgstr "要创建的数据库实例的名称。" msgid "Name of the Node group." msgstr "节点组的名称。" msgid "" "Name of the action associated with the task. Either action or workflow may " "be defined in the task." msgstr "与任务相关联的操作的名称。可在该任务中定义操作或工作流程。" msgid "Name of the administrative user to use on the server." msgstr "要在服务器上使用的管理用户名称。" msgid "Name of the alarm. By default, physical resource name is used." msgstr "警报的名称。缺省情况下,将使用物理资源名称。" msgid "Name of the availability zone for DB instance." msgstr "用于数据库实例的可用性区域的名称。" msgid "Name of the availability zone for server placement." msgstr "用于服务器放置的可用性区域的名称。" msgid "Name of the cluster to create." msgstr "要创建的集群的名称。" msgid "Name of the cluster. By default, physical resource name is used." msgstr "集群的名称。缺省情况下,将使用物理资源名称。" msgid "Name of the cookie, required if type is APP_COOKIE." msgstr "cookie 的名称(如果类型为 APP_COOKIE,那么为必需)。" msgid "Name of the cron trigger." msgstr "cron 触发器的名称。" msgid "Name of the current action being deployed" msgstr "正在部署的当前操作的名称" msgid "Name of the data source." msgstr "数据源的名称。" msgid "" "Name of the derived config associated with this deployment. This is used to " "apply a sort order to the list of configurations currently deployed to a " "server." msgstr "" "与此部署关联的派生配置的名称。这用来将排序顺序应用于当前部署至服务器的配置的" "列表。" msgid "" "Name of the engine node. This can be an opaque identifier. It is not " "necessarily a hostname, FQDN, or IP address." msgstr "" "引擎节点的名称。这可以是不透明的标识。它不一定是主机名、FQDN 或 IP 地址。" msgid "Name of the input." msgstr "输入的名称。" msgid "Name of the job binary." msgstr "作业二进制文件的名称。" msgid "Name of the metering label." msgstr "测量标签的名称。" msgid "Name of the network owning the port." msgstr "拥有该端口的网络的名称。" msgid "" "Name of the network owning the port. The value is typically network:" "floatingip or network:router_interface or network:dhcp." msgstr "" "拥有该端口的网络的名称。此值通常为 network:floatingip、network:" "router_interface 或 network:dhcp。" msgid "Name of the notification. 
By default, physical resource name is used." msgstr "通知的名称。缺省情况下,将使用物理资源名称。" msgid "Name of the output." msgstr "输出的名称。" msgid "Name of the pool." msgstr "池的名称。" msgid "Name of the queue instance to create." msgstr "要创建的队列实例的名称。" msgid "" "Name of the registered datastore version. It must exist for provided " "datastore type. Defaults to using single active version. If several active " "versions exist for provided datastore type, explicit value for this " "parameter must be specified." msgstr "" "已注册数据存储器版本的名称。所提供的数据存储器类型必须存在此项。缺省为使用单" "一活动版本。如果所提供数据存储器类型已存在某些活动版本,那么必须对此参数指定" "显式值。" msgid "Name of the secret." msgstr "密钥的名称。" msgid "Name of the senlin node. By default, physical resource name is used." msgstr "senlin 节点的名称。缺省情况下,将使用物理资源名称。" msgid "Name of the senlin policy. By default, physical resource name is used." msgstr "senlin 策略的名称。缺省情况下,将使用物理资源名称。" msgid "Name of the senlin profile. By default, physical resource name is used." msgstr "senlin 概要文件的名称。缺省情况下,将使用物理资源名称。" msgid "" "Name of the senlin receiver. By default, physical resource name is used." msgstr "senlin 接收器的名称。缺省情况下,将使用物理资源名称。" msgid "Name of the server." msgstr "服务器名称。" msgid "Name of the share network." msgstr "共享网络的名称。" msgid "Name of the share type." msgstr "共享类型的名称。" msgid "Name of the stack." msgstr "堆栈的名称。" msgid "Name of the subnet pool." msgstr "子网池的名称。" msgid "Name of the vip." msgstr "vip 的名称。" msgid "Name of the volume type." msgstr "卷类型的名称。" msgid "Name of the volume." msgstr "卷名。" msgid "" "Name of the workflow associated with the task. Can be defined by intrinsic " "function get_resource or by name of the referenced workflow, i.e. " "{ workflow: wf_name } or { workflow: { get_resource: wf_name }}. Either " "action or workflow may be defined in the task." msgstr "" "与此任务相关联的工作流程的名称。可通过内部函数 get_resource 或所引用工作流程" "的名称(即 { workflow: wf_name } 或 { workflow: { get_resource: wf_name }})" "定义。可在该任务中定义操作或工作流程。" msgid "Name of this Load Balancer." msgstr "此负载均衡器的名称。" msgid "Name of this deployment resource in the stack" msgstr "堆栈中此部署资源的名称" msgid "Name of this listener." msgstr "此侦听器的名称。" msgid "Name of this pool." msgstr "此池的名称。" msgid "Name or ID Nova flavor for the nodes." msgstr "节点的名称或标识 Nova 实例类型。" msgid "Name or ID of network to create a port on." msgstr "要在其上创建端口的网络的名称或标识。" msgid "Name or ID of senlin profile to create this node." msgstr "要创建此节点的 senlin 概要文件的名称或标识。" msgid "" "Name or ID of shared file system snapshot that will be restored and created " "as a new share." msgstr "将复原或作为新的对象创建的共享文件系统快照的名称或标识。" msgid "" "Name or ID of shared filesystem type. Types defines some share filesystem " "profiles that will be used for share creation." msgstr "" "共享文件系统类型的名称或标识。类型定义将用于创建共享的一些共享文件系统概要文" "件。" msgid "Name or ID of shared network defined for shared filesystem." msgstr "对共享文件系统定义的共享网络的名称或标识。" msgid "Name or ID of target cluster." msgstr "目标集群的名称或标识。" msgid "Name or ID of the load balancing pool." msgstr "负载均衡池的名称或标识。" msgid "Name or Id of keystone region." msgstr "keystone 区域的名称或标识。" msgid "Name or Id of keystone service." msgstr "keystone 服务的名称或标识。" #, python-format msgid "" "Name or UUID of Neutron port to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "要将此 NIC 连接至的 Neutron 端口的名称或 UUID。必须指定 %(port)s 或 %(net)s。" msgid "Name or UUID of network." msgstr "网络的名称或 UUID。" msgid "" "Name or UUID of the Neutron floating IP network or name of the Nova floating " "ip pool to use. Should not be provided when used with Nova-network that auto-" "assign floating IPs." 
msgstr "" "要使用的 Neutron 浮动 IP 网络的名称或 UUID,或要使用的 Nova 浮动 IP 池的名" "称。与自动分配浮动 IP 的 Nova 网络配合使用时,不应该提供。" msgid "Name or UUID of the image used to boot Hadoop nodes." msgstr "用于引导 Hadoop 节点的映像的名称或 UUID。" #, python-format msgid "" "Name or UUID of the network to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "要将此 NIC 连接至的网络的名称或 UUID。必须指定 %(port)s 或 %(net)s。" msgid "Name or id of keystone domain." msgstr "keystone 域的名称或标识。" msgid "Name or id of keystone group." msgstr "keystone 组的名称或标识。" msgid "Name or id of keystone user." msgstr "keystone 用户的名称或标识。" msgid "Name or id of volume type (OS::Cinder::VolumeType)." msgstr "卷类型 (OS::Cinder::VolumeType) 的名称或标识。" msgid "Names of databases that those users can access on instance creation." msgstr "在进行实例创建时,那些用户可访问的数据库的名称。" msgid "" "Namespace to group this software config by when delivered to a server. This " "may imply what configuration tool is going to perform the configuration." msgstr "" "当传递至服务器时,要用作此软件配置的分组依据的名称空间。这可指示哪种配置工具" "将执行该配置。" msgid "Need more arguments" msgstr "需要更多参数" msgid "Negotiation mode for the ike policy." msgstr "ike 策略的协商方式。" #, python-format msgid "Neither image nor bootable volume is specified for instance %s" msgstr "对于实例 %s,既未指定映像,也未指定可引导卷" msgid "Network in CIDR notation." msgstr "CIDR 注释中的网络。" msgid "Network interface ID to associate with EIP." msgstr "要与 EIP 关联的网络接口标识。" msgid "Network interfaces to associate with instance." msgstr "要与实例关联的网络接口。" #, python-format msgid "" "Network this port belongs to. If you plan to use current port to assign " "Floating IP, you should specify %(fixed_ips)s with %(subnet)s. Note if this " "changes to a different network update, the port will be replaced." msgstr "" "此端口所属的网络。如果计划使用当前端口分配浮动 IP,那么应指定带 %(subnet)s " "的 %(fixed_ips)s。请注意,如果此操作会切换为另一网络更新,那么此端口将被替" "换。" msgid "Network to allocate floating IP from." msgstr "要从中分配浮动 IP 的网络。" msgid "Neutron network id." msgstr "Neutron 网络标识。" msgid "Neutron subnet id." msgstr "Neutron 子网标识。" msgid "Nexthop IP address." msgstr "下一中继段 IP 地址。" #, python-format msgid "No %s specified" msgstr "未指定任何 %s" msgid "No Template provided." msgstr "未提供模板。" msgid "No action specified" msgstr "未指定动作" msgid "No constraint expressed" msgstr "未表示任何约束" #, python-format msgid "" "No content found in the \"files\" section for %(fn_name)s path: %(file_key)s" msgstr "对于 %(fn_name)s 路径,在“文件”部分中找不到任何内容:%(file_key)s" #, python-format msgid "No event %s found" msgstr "找不到任何事件 %s" #, python-format msgid "No events found for resource %s" msgstr "对于资源 %s,找不到任何事件" msgid "No resource data found" msgstr "找不到任何资源数据" #, python-format msgid "No stack exists with id \"%s\"" msgstr "不存在任何具有标识“%s”的堆栈" msgid "No stack name specified" msgstr "未指定栈名称" msgid "No template specified" msgstr "未指定任何模板" msgid "No volume service available." msgstr "没有任何卷服务可用。" msgid "Node groups." msgstr "节点组。" msgid "Nodes list in the cluster." msgstr "集群中的节点列表。" msgid "Non HA routers can only have one L3 agent." msgstr "非 HA 路由器只能具有一个 L3 代理。" #, python-format msgid "Non-empty resource type is required for resource \"%s\"" msgstr "资源“%s”需要非空资源类型" msgid "Not Implemented." msgstr "尚未实现。" #, python-format msgid "Not allowed - %(dsver)s without %(dstype)s." msgstr "不允许 - 没有 %(dstype)s 的 %(dsver)s。" msgid "Not found" msgstr "找不到" msgid "Not waiting for outputs signal" msgstr "未在等待输出信号" msgid "" "Notional service where encryption is performed For example, front-end. For " "Nova." msgstr "在其中执行加密的注释服务。例如,front-end,用于 Nova。" msgid "Nova instance type (flavor)." 
msgstr "Nova 实例类型。" msgid "Nova network id." msgstr "Nova 网络标识。" msgid "Number of VCPUs for the flavor." msgstr "对应该类型的 VCPU 数。" msgid "Number of backlog requests to configure the socket with." msgstr "用于配置套接字的储备请求数。" msgid "Number of instances in the Node group." msgstr "节点组中的实例数。" msgid "Number of minutes to wait for this stack creation." msgstr "用于等待此堆栈创建的分钟数。" msgid "Number of periods to evaluate over." msgstr "要对其进行评估的时间段数。" msgid "" "Number of permissible connection failures before changing the member status " "to INACTIVE." msgstr "将成员状态更改为不活动之前,允许的连接失败次数。" msgid "Number of remaining executions." msgstr "余下执行次数。" msgid "Number of seconds for the DPD delay." msgstr "DPD 延迟的秒数。" msgid "Number of seconds for the DPD timeout." msgstr "表示 DPD 超时的秒数。" msgid "" "Number of times to check whether an interface has been attached or detached." msgstr "要检查接口是已附加还是已拆离的次数。" msgid "" "Number of times to retry to bring a resource to a non-error state. Set to 0 " "to disable retries." msgstr "使资源进入无错状态的重试次数。设置为 0 以禁用重试。" msgid "" "Number of times to retry when a client encounters an expected intermittent " "error. Set to 0 to disable retries." msgstr "客户机遇到预期间歇性错误的次数。设置为 0 将禁用重试。" msgid "Number of workers for Heat service." msgstr "用于 Heat 服务的工作程序数。" msgid "" "Number of workers for Heat service. Default value 0 means, that service will " "start number of workers equal number of cores on server." msgstr "" "Heat 服务的工作程序数。缺省值 0 意味着该服务将启动的工作程序数等于服务器上的" "核心数。" msgid "Number value for delay during resolve constraint." msgstr "表示解析约束期间的延迟的数字值。" msgid "Number value for timeout during resolving output value." msgstr "表示解析输出值期间的超时的数字值。" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "由于 %(reason)s,对象操作 %(action)s 失败" msgid "" "On update, enables heat to collect existing resource properties from reality " "and converge to updated template." msgstr "更新时,允许 heat 从现实收集现有资源属性并汇合至所更新模板。" msgid "One of predefined health monitor types." msgstr "其中一种预定义的运行状况监视器类型。" msgid "One or more listeners for this load balancer." msgstr "用于此负载均衡器的一个或多个侦听器。" msgid "Only ISO 8601 duration format of the form PT#H#M#S is supported." msgstr "仅支持 ISO 8601 持续时间格式(格式为 PT#H#M#S)。" msgid "Only Templates with an extension of .yaml or .template are supported" msgstr "仅支持扩展名为 .yaml 或 .template 的模板" #, python-format msgid "Only integer is acceptable by '%(name)s'." msgstr "“%(name)s”仅接受整数。" #, python-format msgid "Only non-zero integer is acceptable by '%(name)s'." msgstr "“%(name)s”仅接受非零整数。" msgid "Operator used to compare specified statistic with threshold." msgstr "用于将指定统计信息与阈值进行比较的运算符。" msgid "Optional CA cert file to use in SSL connections." msgstr "SSL 连接中要使用的可选 CA 证书文件。" msgid "Optional Nova keypair name." msgstr "可选 Nova 密钥对名称。" msgid "Optional PEM-formatted certificate chain file." msgstr "可选 PEM 格式证书链文件。" msgid "Optional PEM-formatted file that contains the private key." msgstr "包含专用密钥的可选 PEM 格式文件。" msgid "Optional filename to associate with part." msgstr "要与部件关联的可选文件名。" #, python-format msgid "Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s." msgstr "" "可选 Heat URL,采用诸如 http://0.0.0.0:8004/v1/%(tenant_id)s 之类的格式。" msgid "Optional subtype to specify with the type." msgstr "要与类型一起指定的可选子类型。" msgid "Options for simulating waiting." msgstr "用于模拟等待的选项。" #, python-format msgid "Order '%(name)s' failed: %(code)s - %(reason)s" msgstr "指令“%(name)s”失败:%(code)s - %(reason)s" msgid "Outputs received" msgstr "接收到的输出" msgid "Owner of the source security group." 
msgstr "源安全组的所有者。" msgid "PATCH update to non-COMPLETE stack" msgstr "对未完成堆栈的 PATCH 更新" #, python-format msgid "Parameter '%(name)s' is invalid: %(exp)s" msgstr "参数“%(name)s”无效:%(exp)s" msgid "Parameter Groups error" msgstr "参数组错误" msgid "" "Parameter Groups error: parameter_groups.: The grouped parameter key_name " "does not reference a valid parameter." msgstr "参数组错误:parameter_groups。分组参数 key_name 未引用有效参数。" msgid "" "Parameter Groups error: parameter_groups.: The key_name parameter must be " "assigned to one parameter group only." msgstr "参数组错误:parameter_groups。只能对一个参数组指定 key_name 参数。" msgid "" "Parameter Groups error: parameter_groups.: The parameters of parameter group " "should be a list." msgstr "参数组错误:parameter_groups。参数组的这些参数应该为列表。" msgid "" "Parameter Groups error: parameter_groups.Database Group: The InstanceType " "parameter must be assigned to one parameter group only." msgstr "" "参数组错误:parameter_groups。数据组:只能对一个参数组指定 InstanceType 参" "数。" msgid "" "Parameter Groups error: parameter_groups.Database Group: The grouped " "parameter SomethingNotHere does not reference a valid parameter." msgstr "" "参数组错误:parameter_groups。数据组:分组参数 SomethingNotHere 未引用有效参" "数。" msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters must " "be provided for each parameter group." msgstr "参数组错误:parameter_groups。服务器组:必须为每个参数组提供这些参数。" msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters of " "parameter group should be a list." msgstr "参数组错误:parameter_groups。服务器组:参数组的这些参数应该为列表。" msgid "" "Parameter Groups error: parameter_groups: The parameter_groups should be a " "list." msgstr "参数组错误:parameter_groups。parameter_groups 应该为列表。" #, python-format msgid "Parameter name in \"%s\" must be string" msgstr "“%s”中的参数名必须是字符串" #, python-format msgid "Params must be a map, find a %s" msgstr "params 必须为映射,但发现 %s" msgid "Parent network of the subnet." msgstr "子网的父网络。" msgid "Parts belonging to this message." msgstr "属于此消息的部件。" msgid "Password for API authentication" msgstr "API认证的密码" msgid "Password for accessing the data source URL." msgstr "要访问数据源 URL 的密码。" msgid "Password for accessing the job binary URL." msgstr "要访问作业二进制文件 URL 的密码。" msgid "Password for those users on instance creation." msgstr "在进行实例创建时那些用户的密码。" msgid "Password of keystone user." msgstr "keystone 用户的密码。" msgid "Password used by user." msgstr "用户使用的密码。" #, python-format msgid "Path components in \"%s\" must be strings" msgstr "“%s”中的路径部分必须是字符串" msgid "Path components in attributes must be strings" msgstr "属性中的路径部分必须为字符串" msgid "Payload exceeds maximum allowed size" msgstr "有效内容超过最大允许大小" msgid "Perfect forward secrecy for the ipsec policy." msgstr "IPSec 策略的完全正向保密。" msgid "Perfect forward secrecy in lowercase for the ike policy." msgstr "ike 策略的完全正向保密(以小写表示)。" msgid "" "Perform a check on the input values passed to verify that each required " "input has a corresponding value. When the property is set to STRICT and no " "value is passed, an exception is raised." msgstr "" "请对已传递的输入值执行检查,以验证每个必需输入是否具有对应值。当该属性设置为 " "STRICT 并且未传递任何值时,会产生异常。" msgid "Period (seconds) to evaluate over." msgstr "要对其进行评估的时间段(秒)。" msgid "Physical ID of the VPC. Not implemented." msgstr "VPC 的物理标识。未实现。" #, python-format msgid "" "Plugin %(plugin)s doesn't support the following node processes: " "%(unsupported)s. Allowed processes are: %(allowed)s" msgstr "" "插件 %(plugin)s 不支持以下节点进程:%(unsupported)s。允许的进程为:" "%(allowed)s" msgid "Plugin name." msgstr "插件名称。" msgid "Policies for removal of resources on update." 
msgstr "用于在更新时移除资源的策略。" msgid "Policy for rolling updates for this scaling group." msgstr "用于此缩放组的滚动更新的策略。" msgid "" "Policy on how to apply a flavor update; either by requesting a server resize " "or by replacing the entire server." msgstr "" "关于如何应用实例类型更新的策略;通过请求服务器调整大小或通过替换整个服务器。" msgid "" "Policy on how to apply an image-id update; either by requesting a server " "rebuild or by replacing the entire server." msgstr "" "有关如何应用 image-id 更新的策略;通过请求服务重建或替换整个服务器进行。" msgid "" "Policy on how to respond to a stack-update for this resource. REPLACE_ALWAYS " "will replace the port regardless of any property changes. AUTO will update " "the existing port for any changed update-allowed property." msgstr "" "有关如何针对此资源响应堆栈更新的策略。REPLACE_ALWAYS 将替换端口,而不考虑任何" "属性更改。AUTO 将更新任何已更改更新允许的属性的现有端口。" msgid "" "Policy to be processed when doing an update which requires removal of " "specific resources." msgstr "当执行需要删除特定资源的更新时要处理的策略。" msgid "Pool creation failed" msgstr "池创建失败" msgid "Pool creation failed due to vip" msgstr "由于 vip,池创建失败" msgid "Pool from which floating IP is allocated." msgstr "从中分配浮动 IP 的池。" msgid "Port number on which the servers are running on the members." msgstr "正在其上对成员运行服务器的端口的端口号。" msgid "Port on which the pool member listens for requests or connections." msgstr "池成员在其上侦听请求或连接的端口。" msgid "Port security enabled of the network." msgstr "网络的已启用端口安全性。" msgid "Port security enabled of the port." msgstr "端口的已启用端口安全性。" msgid "Position of the rule within the firewall policy." msgstr "该规则在防火墙策略内的位置" msgid "Pre-shared key string for the ipsec site connection." msgstr "IPSec 站点连接的预先共享密钥字符串。" msgid "Prefix length for subnet allocation from subnet pool." msgstr "从子网池分配的子网的前缀长度。" msgid "Private DNS name of the specified instance." msgstr "所指定实例的专用 DNS 名称。" msgid "Private IP address of the network interface." msgstr "网络接口的专用 IP 地址。" msgid "Private IP address of the specified instance." msgstr "所指定实例的专用 IP 地址。" msgid "Project ID" msgstr "项目ID" msgid "" "Projects to add volume type access to. NOTE: This property is only supported " "since Cinder API V2." msgstr "" "要向其添加卷类型访问权的项目。注意:仅 Cinder API V2 及以上版本支持此属性。" #, python-format msgid "" "Properties %(algorithm)s and %(bit_length)s are required for %(type)s type " "of order." msgstr "" "对于 %(type)s 类型的指令,属性 %(algorithm)s 和 %(bit_length)s 是必需的。" msgid "Properties for profile." msgstr "概要文件的属性。" msgid "Properties of this policy." msgstr "此策略的属性。" msgid "Properties to pass to each resource being created in the chain." msgstr "要传递至要在链中创建的每个资源的属性。" #, python-format msgid "Property %(cookie)s is required when %(sp)s type is set to %(app)s." msgstr "%(sp)s 类型设置为 %(app)s 时需要属性 %(cookie)s。" #, python-format msgid "" "Property %(cookie)s must NOT be specified when %(sp)s type is set to %(ip)s." msgstr "%(sp)s 类型设置为 %(ip)s 时不能指定属性 %(cookie)s。" #, python-format msgid "" "Property %(key)s updated value %(new)s should be superset of existing value " "%(old)s." msgstr "属性 %(key)s 更新值 %(new)s 应是现有值 %(old)s 的超集。" #, python-format msgid "" "Property %(n)s type mismatch between facade %(type)s (%(fs_type)s) and " "provider (%(ps_type)s)" msgstr "" "在外观 %(type)s (%(fs_type)s) 与提供程序 (%(ps_type)s) 之间,属性 %(n)s 类型" "不匹配" #, python-format msgid "Property %(policies)s and %(item)s cannot be used both at one time." msgstr "不能同时使用属性 %(policies)s 和 %(item)s。" #, python-format msgid "Property %(ref)s required when protocol is %(term)s." 
msgstr "协议为 %(term)s 时需要属性 %(ref)s。" #, python-format msgid "Property %s not assigned" msgstr "未分配属性 %s" #, python-format msgid "Property %s not implemented yet" msgstr "尚未实现属性 %s" msgid "" "Property cookie_name is required when session_persistence type is set to " "APP_COOKIE." msgstr "session_persistence 类型设置为 APP_COOKIE 时,需要属性 cookie_name。" msgid "" "Property cookie_name is required, when session_persistence type is set to " "APP_COOKIE." msgstr "" "需要属性 cookie_name(当 session_persistence 类型设置为 APP_COOKIE 时)。" msgid "" "Property cookie_name must NOT be specified when session_persistence type is " "set to SOURCE_IP." msgstr "" "session_persistence 类型设置为 SOURCE_IP 时,不能指定属性 cookie_name。" msgid "Property values for the resources in the group." msgstr "组中资源的属性值。" msgid "Protocol for balancing." msgstr "用于进行均衡的协议。" msgid "Protocol for the firewall rule." msgstr "防火墙规则的协议。" msgid "Protocol of the pool." msgstr "此池的协议。" msgid "Protocol on which to listen for the client traffic." msgstr "侦听客户机流量时使用的协议。" msgid "Protocol to balance." msgstr "用于进行均衡的协议。" msgid "Protocol value for this firewall rule." msgstr "此防火墙规则的协议值。" msgid "" "Provide access to nodes using other nodes of the cluster as proxy gateways." msgstr "通过将集群的其他节点用作代理网关来提供对某些节点的访问。" msgid "" "Provide old encryption key. New encryption key would be used from config " "file." msgstr "提供旧的加密密钥。新的加密密钥将通过配置文件使用。" msgid "Provider for this Load Balancer." msgstr "此负载均衡器的供应商。" msgid "Provider implementing this load balancer instance." msgstr "用于实现此负载均衡器实例的提供程序。" #, python-format msgid "Provider requires property %(n)s unknown in facade %(type)s" msgstr "提供程序需要外观 %(type)s 中的未知属性 %(n)s" msgid "Public DNS name of the specified instance." msgstr "所指定实例的公共 DNS 名称。" msgid "Public IP address of the specified instance." msgstr "所指定实例的公共 IP 地址。" msgid "" "RPC timeout for the engine liveness check that is used for stack locking." msgstr "用于堆栈锁定的引擎活性检查的 RPC 超时。" msgid "RX/TX factor." msgstr "RX/TX 因子。" #, python-format msgid "Rebuilding server failed, status '%s'" msgstr "重建服务器失败,状态为“%s”" msgid "Record name." msgstr "记录名称。" #, python-format msgid "Recursion depth exceeds %d." msgstr "递归深度超过 %d。" msgid "" "Ref structure that contains the ID of the VPC on which you want to create " "the subnet." msgstr "包含要在其上创建子网的 VPC 的标识的参照结构。" msgid "Reference to a flavor for creating DB instance." msgstr "请对实例类型进行引用,以便创建数据库实例。" msgid "Reference to certificate." msgstr "对证书的引用。" msgid "Reference to intermediates." msgstr "对中间项的引用。" msgid "Reference to private key passphrase." msgstr "对专用密钥密码的引用。" msgid "Reference to private key." msgstr "对专用密钥的引用。" msgid "Reference to public key." msgstr "对公用密钥的引用。" msgid "Reference to the secret." msgstr "对密钥的引用。" msgid "References to secrets that will be stored in container." msgstr "对将存储在容器中的密钥的引用。" msgid "Region name in which this stack will be created." msgstr "将在其中创建此堆栈的区域名称。" msgid "Remaining executions." msgstr "余下执行。" msgid "Remote branch router identity." msgstr "远程分支路由器身份。" msgid "Remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "远程分支路由器公共 IPv4 地址或 IPv6 地址,或者 FQDN。" msgid "Remote subnet(s) in CIDR format." msgstr "采用 CIDR 格式的远程子网。" msgid "" "Replacement policy used to work around flawed nova/neutron port interaction " "which has been fixed since Liberty." 
msgstr "" "用于解决存在缺陷的 nova/neutron 端口交互(自 Liberty 开始已修正)的替换策略。" msgid "Request expired or more than 15mins in the future" msgstr "请求已到期,或在将来超过 15 分钟" #, python-format msgid "Request limit exceeded: %(message)s" msgstr "已超过请求限制:%(message)s" msgid "Request missing required header X-Auth-Url" msgstr "请求缺少必需的头 X-Auth-Url" msgid "Request was denied due to request throttling" msgstr "由于请求调速,已拒绝请求" #, python-format msgid "" "Requested plugin '%(plugin)s' doesn't support version '%(version)s'. Allowed " "versions are %(allowed)s" msgstr "" "所请求插件“%(plugin)s”不支持版本“%(version)s”。允许的版本为 %(allowed)s" msgid "" "Required extra specification. Defines if share drivers handles share servers." msgstr "需要额外指定。定义共享驱动程序句柄是否共享服务器。" #, python-format msgid "Required property %(n)s for facade %(type)s missing in provider" msgstr "提供程序中缺少外观 %(type)s 的必需属性 %(n)s" #, python-format msgid "Resizing to '%(flavor)s' failed, status '%(status)s'" msgstr "将大小调整至“%(flavor)s”失败,状态为“%(status)s”" #, python-format msgid "Resource \"%s\" has no type" msgstr "资源“%s”没有任何类型" #, python-format msgid "Resource \"%s\" type is not a string" msgstr "资源“%s”类型不是字符串" #, python-format msgid "Resource %(name)s %(key)s type must be %(typename)s" msgstr "资源 %(name)s %(key)s 类型必须为 %(typename)s" #, python-format msgid "Resource %(name)s is missing \"%(type_key)s\"" msgstr "资源 %(name)s 缺少“%(type_key)s”" #, python-format msgid "" "Resource %s's property user_data_format should be set to SOFTWARE_CONFIG " "since there are software deployments on it." msgstr "" "资源 %s 的属性 user_data_format 应设置为 SOFTWARE_CONFIG,因为其上有软件部" "署。" msgid "Resource ID was not provided." msgstr "未提供资源标识。" msgid "" "Resource definition for the resources in the group, in HOT format. The value " "of this property is the definition of a resource just as if it had been " "declared in the template itself." msgstr "" "组中资源的资源定义,以热格式表示。此属性的值是资源的定义,正如同已在模板本身" "中声明该资源定义一样。" msgid "" "Resource definition for the resources in the group. The value of this " "property is the definition of a resource just as if it had been declared in " "the template itself." msgstr "" "组中资源的资源定义。此属性的值是资源的定义,正如同已在模板本身中声明该资源定" "义一样。" msgid "Resource failed" msgstr "资源发生故障" msgid "Resource is not built" msgstr "资源未构建" msgid "Resource name may not contain \"/\"" msgstr "资源名称不能包含“/”" msgid "Resource type." msgstr "资源类型。" msgid "Resource update already requested" msgstr "已请求资源更新" msgid "Resource with the name requested already exists" msgstr "具有所请求名称的资源已存在" msgid "" "ResourceInError: resources.remote_stack: Went to status UPDATE_FAILED due to " "\"Remote stack update failed\"" msgstr "" "ResourceInError:resources.remote_stack:已变为状态 UPDATE_FAILED,因为“远程" "堆栈更新失败”" #, python-format msgid "Resources must contain Resource. Found a [%s] instead" msgstr "资源必须包含“资源”。但发现包含的是 [%s]" msgid "" "Resources that users are allowed to access by the DescribeStackResource API." msgstr "允许用户通过 DescribeStackResource API 访问的资源。" msgid "Returned status code from the configuration execution." msgstr "通过执行配置返回的状态码。" msgid "Route duplicates an existing route." msgstr "路由与现有路由重复。" msgid "Route table ID." msgstr "路由表标识。" msgid "Safety assessment lifetime configuration for the ike policy." msgstr "ike 策略的安全评估生存期配置。" msgid "Safety assessment lifetime configuration for the ipsec policy." msgstr "IPSec 策略的安全评估生存期配置。" msgid "Safety assessment lifetime units." msgstr "安全评估生存期单位。" msgid "Safety assessment lifetime value in specified units." msgstr "以指定单位表示的安全评估生存期值。" msgid "Scheduler hints to pass to Nova (Heat extension)." 
msgstr "要传递至 Nova 的调度程序提示(Heat 扩展)。" msgid "Schema representing the inputs that this software config is expecting." msgstr "表示此软件配置所需的输入的模式。" msgid "Schema representing the outputs that this software config will produce." msgstr "表示此软件配置将产生的输出的模式。" #, python-format msgid "Schema valid only for %(ltype)s or %(mtype)s, not %(utype)s" msgstr "模式仅对于 %(ltype)s 或 %(mtype)s(而非 %(utype)s)有效" msgid "" "Scope of flavor accessibility. Public or private. Default value is True, " "means public, shared across all projects." msgstr "" "类型可访问性的范围。公用或专用。缺省值为 True,表示公用(在所有项目间共享)。" #, python-format msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden." msgstr "已禁止从租户 %(actual)s 搜索租户 %(target)s。" msgid "Seconds between running periodic tasks." msgstr "运行定期任务之间的秒数。" msgid "Seconds to wait after a create. Defaults to the global wait_secs." msgstr "执行创建操作后要等待的秒数,缺省为 global wait_secs。" msgid "Seconds to wait after a delete. Defaults to the global wait_secs." msgstr "执行删除操作后要等待的秒数,缺省为 global wait_secs。" msgid "Seconds to wait after an action (-1 is infinite)." msgstr "执行操作后要等待的秒数(-1 表示无限期等待)。" msgid "Seconds to wait after an update. Defaults to the global wait_secs." msgstr "执行更新操作后要等待的秒数,缺省为 global wait_secs。" #, python-format msgid "Section %s can not be accessed directly." msgstr "无法直接访问部分 %s。" #, python-format msgid "Security Group \"%(group_name)s\" not found" msgstr "找不到安全组“%(group_name)s”" msgid "Security group IDs to assign." msgstr "要分配的安全组标识。" msgid "Security group IDs to associate with this port." msgstr "要与此端口关联的安全组标识。" msgid "Security group names to assign." msgstr "要分配的安全组名称。" msgid "Security groups cannot be assigned the name \"default\"." msgstr "不能对安全组分配名称“default”。" msgid "Security service IP address or hostname." msgstr "安全服务 IP 地址或主机名。" msgid "Security service description." msgstr "安全服务描述。" msgid "Security service domain." msgstr "安全服务域。" msgid "Security service name." msgstr "安全服务名称。" msgid "Security service type." msgstr "安全服务类型。" msgid "Security service user or group used by tenant." msgstr "租户使用的安全服务用户或组。" msgid "Select deferred auth method, stored password or trusts." msgstr "请选择延期的认证方法、已存储密码或信任。" msgid "Sequence of characters to build the random string from." msgstr "构建随机字符串时要遵循的字符顺序。" #, python-format msgid "Server %(name)s delete failed: (%(code)s) %(message)s" msgstr "服务器 %(name)s 删除失败:(%(code)s) %(message)s" msgid "Server Group name." msgstr "服务器组名称。" msgid "Server name." msgstr "服务器名称。" msgid "Server to assign floating IP to." msgstr "要对其分配浮动 IP 的服务器。" #, python-format msgid "" "Service %(service_name)s is not available for resource type " "%(resource_type)s, reason: %(reason)s" msgstr "" "服务 %(service_name)s 对资源类型 %(resource_type)s 不可用,原因:%(reason)s" msgid "Service misconfigured" msgstr "服务配置有误" msgid "Service temporarily unavailable" msgstr "服务暂时不可用" msgid "Set of parameters passed to this stack." msgstr "传递至此堆栈的参数的集合。" msgid "Set of rules for comparing characters in a character set." msgstr "用于比较字符集内字符的规则的集合。" msgid "Set of symbols and encodings." msgstr "符号和编码的集合。" msgid "Set to \"vpc\" to have IP address allocation associated to your VPC." msgstr "请设置为“vpc”,以使 IP 地址分配与 VPC 关联。" msgid "Set to true if DHCP is enabled and false if DHCP is disabled." msgstr "如果已启用 DHCP,那么设置为 true;如果已禁用 DHCP,那么设置为 false。" msgid "Severity of the alarm." msgstr "警报的严重性。" msgid "Share description." msgstr "共享描述。" msgid "Share host." msgstr "共享主机。" msgid "Share name." msgstr "共享名称。" msgid "Share network description." msgstr "共享网络描述。" msgid "Share project ID." 
msgstr "共享项目标识。" msgid "Share protocol supported by shared filesystem." msgstr "共享文件系统支持的共享协议。" msgid "Share storage size in GB." msgstr "共享存储器大小(以 GB 计)。" msgid "Shared status of the metering label." msgstr "测量标签的共享状态。" msgid "Shared status of this firewall policy." msgstr "此防火墙策略的共享状态。" msgid "Shared status of this firewall rule." msgstr "此防火墙规则的共享状态。" msgid "Shared status of this firewall." msgstr "此防火墙的共享状态。" msgid "Shrinking volume" msgstr "正在收缩卷" msgid "Signal data error" msgstr "信号数据错误" #, python-format msgid "Signal resource during %s" msgstr "%s 期间的通告资源" #, python-format msgid "Single schema valid only for %(ltype)s, not %(utype)s" msgstr "单个模式仅对于 %(ltype)s(而非 %(utype)s)有效" msgid "Size of a secondary ephemeral data disk in GB." msgstr "辅助临时数据磁盘的大小(以 GB 计)。" msgid "Size of adjustment." msgstr "调整的大小。" msgid "Size of encryption key, in bits. For example, 128 or 256." msgstr "加密密钥的大小(以位计),例如,128 或 256。" msgid "" "Size of local disk in GB. The \"0\" size is a special case that uses the " "native base image size as the size of the ephemeral root volume." msgstr "" "本地磁盘大小(以 GB 计)。大小为“0”表示以下特殊情况:使用本机基本映像大小作为" "临时根卷的大小。" msgid "" "Size of the block device in GB. If it is omitted, hypervisor driver " "calculates size." msgstr "" "块设备的大小,以 GB 计。如果此项已省略,那么管理程序驱动程序会计算大小。" msgid "Size of the instance disk volume in GB." msgstr "实例磁盘卷的大小,以 GB 计。" msgid "Size of the volumes, in GB." msgstr "卷大小,以 GB 计。" msgid "Smallest prefix size that can be allocated from the subnet pool." msgstr "可从子网池分配的最小前缀大小。" #, python-format msgid "Snapshot with id %s not found" msgstr "找不到标识为 %s 的快照" msgid "" "SnapshotId is missing, this is required when specifying BlockDeviceMappings." msgstr "缺少 SnapshotId,指定BlockDeviceMappings 时需要此项。" #, python-format msgid "Software config with id %s not found" msgstr "找不到标识为 %s 的软件配置" msgid "Source IP address or CIDR." msgstr "源IP地址或无类间或路由。" msgid "Source ip_address for this firewall rule." msgstr "此防火墙规则的源 ip_address。" msgid "Source port number or a range." msgstr "源端口号或范围。" msgid "Source port range for this firewall rule." msgstr "此防火墙规则的源端口范围。" #, python-format msgid "Specified output key %s not found." msgstr "找不到指定的输出键 %s。" #, python-format msgid "Specified status is invalid, defaulting to %s" msgstr "指定的状态无效,缺省为 %s" #, python-format msgid "Specified subnet %(subnet)s does not belongs to network %(network)s." msgstr "指定的子网 %(subnet)s 不属于网络 %(network)s。" msgid "Specifies a custom discovery url for node discovery." msgstr "指定定制发现 URL 以发现节点。" msgid "Specifies database names for creating databases on instance creation." msgstr "为在实例创建时创建数据库指定数据库名称。" msgid "Specify the ACL permissions on who can read objects in the container." msgstr "请指定 ACL 许可权,用户可通过这些许可权对容器中的对象执行读操作。" msgid "Specify the ACL permissions on who can write objects to the container." msgstr "请指定 ACL 许可权,用户可通过这些许可权将对象写入容器。" msgid "" "Specify whether the remote_ip_prefix will be excluded or not from traffic " "counters of the metering label. For example to not count the traffic of a " "specific IP address of a range." msgstr "" "指定是否将从测量标签的流量计数器排除 remote_ip_prefix。例如,不对某个范围的特" "定 IP 地址的流量进行计数。" #, python-format msgid "Stack %(stack_name)s already has an action (%(action)s) in progress." 
msgstr "堆栈 %(stack_name)s 已具有正在进行的操作 (%(action)s)。" msgid "Stack ID" msgstr "栈ID" msgid "Stack Name" msgstr "栈名" msgid "Stack name may not contain \"/\"" msgstr "堆栈名称不能包含“/”" msgid "Stack resource id" msgstr "堆栈资源标识" msgid "Stack unknown status" msgstr "堆栈的状态未知" #, python-format msgid "Stack with id %s not found" msgstr "找不到标识为 %s 的堆栈" msgid "" "Stacks containing these tag names will be hidden. Multiple tags should be " "given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too)." msgstr "" "包含这些标记名称的堆栈将被隐藏。应以逗号分隔列表的形式给出多个标记(例如," "hidden_stack_tags=hide_me,me_too)。" msgid "Start address for the allocation pool." msgstr "分配池的起始地址。" #, python-format msgid "Start resizing the group %(group)s" msgstr "请开始对组 %(group)s 调整大小" msgid "Start time for the time constraint. A CRON expression property." msgstr "时间约束的起始时间。这是一个 CRON 表达式属性。" #, python-format msgid "State %s invalid for create" msgstr "对于创建,状态 %s 无效" #, python-format msgid "State %s invalid for resume" msgstr "对于恢复,状态 %s 无效" #, python-format msgid "State %s invalid for suspend" msgstr "对于暂挂,状态 %s 无效" msgid "Status" msgstr "状态" #, python-format msgid "String to split must be string; got %s" msgstr "要拆分的字符串必须是字符串;已获取 %s" msgid "String value with which to compare." msgstr "要比较的字符串值。" msgid "Subnet ID to associate with this interface." msgstr "要与此接口关联的子网标识。" msgid "Subnet ID to launch instance in." msgstr "要在其中启动实例的子网的标识。" msgid "Subnet ID." msgstr "子网标识。" msgid "Subnet in which the vpn service will be created." msgstr "将在其中创建 vpn 服务的子网。" msgid "" "Subnet in which to allocate the IP address for port. Used for creating port, " "based on derived properties. If subnet is specified, network property " "becomes optional." msgstr "" "在其中为端口分配 IP 地址的子网。用于创建端口(根据所派生属性)。如果指定了子" "网,那么网络属性变为可选。" msgid "Subnet in which to allocate the IP address for this port." msgstr "要在其中分配此端口的 IP 地址的子网。" msgid "Subnet name or ID of this member." msgstr "此成员的子网名称或标识。" msgid "Subnet of external fixed IP address." msgstr "外部固定 IP 地址的子网。" msgid "Subnet of the vip." msgstr "vip 的子网。" msgid "Subnets of this network." msgstr "此网络的子网。" msgid "" "Subset of trustor roles to be delegated to heat. If left unset, all roles of " "a user will be delegated to heat when creating a stack." msgstr "" "要委派给 Heat 的信任者角色的子集。如果保留为取消设置,那么当创建堆栈时,会将" "用户的所有角色都委派给 Heat。" msgid "Supplied metadata for the resources in the group." msgstr "对组中资源提供的元数据。" msgid "Supported versions: keystone v3" msgstr "受支持的版本:keystone v3" #, python-format msgid "Suspend of instance %s failed" msgstr "暂挂实例 %s 失败" #, python-format msgid "Suspend of server %s failed" msgstr "暂挂服务器 %s 失败" msgid "Swap space in MB." msgstr "交换空间(以 MB 计)。" msgid "System SIGHUP signal received." msgstr "接收到系统 SIGHUP 信号。" msgid "TCP or UDP port on which to listen for client traffic." msgstr "要侦听客户机流量的 TCP 或 UDP 端口。" msgid "TCP port on which the instance server is listening." msgstr "实例服务器正在侦听的 TCP 端口。" msgid "TCP port on which the pool member listens for requests or connections." msgstr "池成员在其上侦听请求或连接的 TCP 端口。" msgid "" "TCP port on which to listen for client traffic that is associated with the " "vip address." msgstr "要在其上侦听与 vip 地址关联的客户机流量的 TCP 端口。" msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of OpenStack service finder functions." msgstr "" "用于缓存 OpenStack 服务查找器功能的 dogpile.cache 区域中任何已缓存项的 " "TTL(以秒计)。" msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of service extensions." 
msgstr "用于缓存服务扩展的 dogpile.cache 区域中任何已缓存项的 TTL(以秒计)。" msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of validation constraints." msgstr "用于缓存验证约束的 dogpile.cache 区域中任何已缓存项的 TTL(以秒计)。" msgid "Tag key." msgstr "标记键。" msgid "Tag value." msgstr "标记值。" msgid "Tags to add to the image." msgstr "要添加至映像的标记。" msgid "Tags to attach to instance." msgstr "要连接至实例的标记。" msgid "Tags to attach to the bucket." msgstr "要连接至存储区的标记。" msgid "Tags to attach to this group." msgstr "要连接至此组的标记。" msgid "Task description." msgstr "任务描述。" msgid "Task name." msgstr "任务名称。" msgid "" "Template default for how the server should receive the metadata required for " "software configuration. POLL_SERVER_CFN will allow calls to the cfn API " "action DescribeStackResource authenticated with the provided keypair " "(requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the " "Heat API resource-show using the provided keystone credentials (requires " "keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL " "will create and populate a Swift TempURL with metadata for polling (requires " "object-store endpoint which supports TempURL).ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "模板缺省值,用于指示服务器如何接收软件配置所需的元数据。POLL_SERVER_CFN 指示" "将允许调用已使用所提供密钥对认证的 cfn API 操作 DescribeStackResource(需要已" "启用 heat-api-cfn)。POLL_SERVER_HEAT 指示将允许使用所提供 keystone 证书调用 " "Heat API resource-show(需要 keystone v3 API,并已配置 stack_user_* config 选" "项)。POLL_TEMP_URL 指示将创建 Swift TempURL 并使用要轮询的元数据填充它(需要" "支持 TempURL 的 object-store 端点)。ZAQAR_MESSAGE 指示将创建专用 zaqar 队列" "并发布元数据以进行轮询。" msgid "" "Template default for how the server should signal to heat with the " "deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN " "keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will " "create a Swift TempURL to be signaled via HTTP PUT (requires object-store " "endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat " "API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL " "will create a dedicated zaqar queue to be signaled using the provided " "keystone credentials." msgstr "" "模板缺省值,用于指示服务器如何向 heat 通告部署缺省值。CFN_SIGNAL 指示将允许" "对 CFN 密钥对签署的 URL 执行 HTTP POST(需要已启用 heat-api-cfn)。" "TEMP_URL_SIGNAL 指示将创建通过 HTTP PUT 通告的 Swift TempURL(需要支持 " "TempURL 的 object-store 端点)。HEAT_SIGNAL 指示将允许使用所提供 TempURL 证书" "调用 Heat API resource-signal。ZAQAR_SIGNAL 指示将创建使用所提供 keystone 证" "书通告的专用 zaqar 队列。" msgid "Template format version not found." msgstr "找不到模板格式版本。" #, python-format msgid "" "Template size (%(actual_len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "模板大小(%(actual_len)s 字节)超出最大允许大小(%(limit)s 字节)。" msgid "Template that specifies the stack to be created as a resource." msgstr "用于指定要作为资源创建的堆栈的模板。" #, python-format msgid "Template type is not supported: %s" msgstr "模板类型不受支持:%s" msgid "Template version was not provided" msgstr "未提供模板版本" #, python-format msgid "Template with version %s not found" msgstr "找不到带有版本 %s 的模板" msgid "TemplateBody or TemplateUrl were not given." msgstr "未给定 TemplateBody 或 TemplateUrl。" msgid "Tenant owning the health monitor." msgstr "拥有运行状况监视器的租户。" msgid "Tenant owning the pool member." msgstr "拥有池成员的租户。" msgid "Tenant owning the pool." msgstr "拥有池的租户。" msgid "Tenant owning the port." msgstr "拥有端口的租户。" msgid "Tenant owning the router." msgstr "拥有路由器的租户。" msgid "Tenant owning the subnet." 
msgstr "拥有子网的租户。" #, python-format msgid "Testing message %(text)s" msgstr "测试消息 %(text)s" #, python-format msgid "The \"%(hook)s\" hook is not defined on %(resource)s" msgstr "未对 %(resource)s 定义“%(hook)s”挂钩" #, python-format msgid "The \"for_each\" argument to \"%s\" must contain a map" msgstr "针对“%s”的“for_each”自变量必须包含映射" #, python-format msgid "The %(entity)s (%(name)s) could not be found." msgstr "找不到 %(entity)s (%(name)s)。" #, python-format msgid "The %s must be provided for each parameter group." msgstr "必须对每个参数组提供 %s。" #, python-format msgid "The %s of parameter group should be a list." msgstr "参数组的 %s 应该为列表。" #, python-format msgid "The %s parameter must be assigned to one parameter group only." msgstr "只能将 %s 参数分配给一个参数组。" #, python-format msgid "The %s should be a list." msgstr "%s 应该为列表。" msgid "The API paste config file to use." msgstr "要使用的 API 粘贴配置文件。" msgid "The AWS Access Key ID needs a subscription for the service" msgstr "AWS 访问密钥标识需要对服务的预订" msgid "The Availability Zone where the specified instance is launched." msgstr "在其中启动了指定实例的可用性区域。" msgid "The Availability Zones in which to create the load balancer." msgstr "要在其中创建负载均衡器的可用性区域。" msgid "The CIDR." msgstr "CIDR。" msgid "The DNS name for the LoadBalancer." msgstr "负载均衡器的 DNS 名称。" msgid "The DNS name of the specified bucket." msgstr "所指定存储区的 DNS 名称。" msgid "The DNS nameserver address." msgstr "DNS 名称服务器地址。" msgid "The HTTP method used for requests by the monitor of type HTTP." msgstr "用于通过类型为 HTTP 的监视器进行的请求的 HTTP 方法。" msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health." msgstr "由监视器用来测试成员运行状况的 HTTP 请求中使用的 HTTP 路径。" msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health. A valid value is a string the begins with a forward slash (/)." msgstr "" "监视器使用的 HTTP 请求中用于测试成员运行状况的 HTTP 路径。有效值为以正斜杠 " "(/) 开头的字符串。" msgid "" "The HTTP status codes expected in response from the member to declare it " "healthy. Specify one of the following values: a single value, such as 200. a " "list, such as 200, 202. a range, such as 200-204." msgstr "" "成员响应中应该出现以声明其状态正常的 HTTP 状态码。指定下列其中一个值:单个" "值,例如,200;列表,例如,200, 202;范围,例如,200-204。" msgid "" "The ID of an existing instance to use to create the Auto Scaling group. If " "specify this property, will create the group use an existing instance " "instead of a launch configuration." msgstr "" "要用于创建“自动缩放”组的现有实例的标识。如果指定此属性,那么将使用现有实例创" "建组而不是创建启动配置。" msgid "" "The ID of an existing instance you want to use to create the launch " "configuration. All properties are derived from the instance with the " "exception of BlockDeviceMapping." msgstr "" "要用于创建启动配置的现有实例的标识。所有属性都派生自该实例,但 " "BlockDeviceMapping 例外。" msgid "The ID of the attached network." msgstr "所连接网络的标识。" msgid "The ID of the firewall policy that this firewall is associated with." msgstr "与此防火墙关联的防火墙策略的标识。" msgid "" "The ID of the hosted zone name that is associated with the LoadBalancer." msgstr "与负载均衡器关联的托管区域名称的标识。" msgid "The ID of the image to create a volume from." msgstr "要从其创建卷的映像的标识。" msgid "The ID of the image to create the volume from." msgstr "要从其创建卷的映像的标识。" msgid "The ID of the instance to which the volume attaches." msgstr "卷连接至的实例的标识。" msgid "The ID of the load balancing pool." msgstr "负载均衡池的标识。" msgid "The ID of the pool to which the pool member belongs." msgstr "池成员所属的池的标识。" msgid "The ID of the server to which the volume attaches." msgstr "卷连接至的服务器的标识。" msgid "The ID of the snapshot to create a volume from." 
msgstr "要从其创建卷的快照的标识。" msgid "" "The ID of the tenant which will own the network. Only administrative users " "can set the tenant identifier; this cannot be changed using authorization " "policies." msgstr "" "将拥有网络的租户的标识。仅管理用户才能设置租户标识;不能使用授权策略对此进行" "更改。" msgid "" "The ID of the tenant who owns the Load Balancer. Only administrative users " "can specify a tenant ID other than their own." msgstr "" "拥有此负载均衡器的租户的标识。只有管理用户才能指定他们自身以外的租户标识。" msgid "The ID of the tenant who owns the listener." msgstr "拥有此侦听器的租户的标识。" msgid "" "The ID of the tenant who owns the network. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "拥有网络的租户的标识。仅管理用户才能指定除了其本身的标识之外的租户标识。" msgid "" "The ID of the tenant who owns the subnet pool. Only administrative users can " "specify a tenant ID other than their own." msgstr "拥有子网池的租户的标识。只有管理用户才能指定他们自身之外的租户标识。" msgid "The ID of the volume to be attached." msgstr "要连接的卷的标识。" msgid "" "The ID of the volume to boot from. Only one of volume_id or snapshot_id " "should be provided." msgstr "" "要从中进行引导的卷的标识。仅应该提供 volume_id 或 snapshot_id 中的一个。" msgid "The ID or name of the flavor to boot onto." msgstr "要引导到其上的实例类型的标识或名称。" msgid "The ID or name of the image to boot with." msgstr "要用于进行引导的映像的标识或名称。" msgid "" "The IDs of the DHCP agent to schedule the network. Note that the default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "要安排网络的 DHCP 代理的标识。请注意,Neutron 中的缺省策略设置会将此属性限制" "为仅供管理用户使用。" msgid "The IP address of the pool member." msgstr "池成员的 IP 地址。" msgid "The IP version, which is 4 or 6." msgstr "IP 版本,即 4 或 6。" #, python-format msgid "The Parameter (%(key)s) was not defined in template." msgstr "模板中未定义参数 (%(key)s)。" #, python-format msgid "The Parameter (%(key)s) was not provided." msgstr "未提供参数 (%(key)s)。" msgid "The QoS policy ID attached to this network." msgstr "附加至此网络的 QoS 策略标识。" msgid "The QoS policy ID attached to this port." msgstr "附加至此端口的 QoS 策略标识。" #, python-format msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect." msgstr "引用的属性 (%(resource)s %(key)s) 不正确。" #, python-format msgid "The Resource %s requires replacement." msgstr "需要替换资源 %s。" #, python-format msgid "" "The Resource (%(resource_name)s) could not be found in Stack %(stack_name)s." msgstr "在堆栈 %(stack_name)s 中找不到资源 (%(resource_name)s)。" #, python-format msgid "The Resource (%(resource_name)s) is not available." msgstr "资源 (%(resource_name)s) 不可用。" #, python-format msgid "The Snapshot (%(snapshot)s) for Stack (%(stack)s) could not be found." msgstr "找不到堆栈 (%(stack)s) 的快照 (%(snapshot)s)。" #, python-format msgid "The Stack (%(stack_name)s) already exists." msgstr "堆栈 (%(stack_name)s) 已存在。" msgid "The Template must be a JSON or YAML document." msgstr "模板必须为 JSON 或 YAML 文档。" msgid "The URI to the container." msgstr "容器的 URI。" msgid "The URI to the created container." msgstr "所创建容器的 URI。" msgid "The URI to the created secret." msgstr "所创建密钥的 URI。" msgid "The URI to the order." msgstr "指令的 URI。" msgid "The URIs to container consumers." msgstr "容器使用者的 URI。" msgid "The URIs to secrets stored in container." msgstr "存储在容器中的密钥的 URI。" msgid "" "The URL of a template that specifies the stack to be created as a resource." msgstr "模板的 URL,该模板用于指定要作为资源创建的堆栈。" msgid "The URL of the container." msgstr "容器的 URL。" msgid "The VIP address of the LoadBalancer." msgstr "负载均衡器的 VIP 地址。" msgid "The VIP port of the LoadBalancer." msgstr "负载均衡器的 VIP 端口。" msgid "The VIP subnet of the LoadBalancer." 
msgstr "负载均衡器的 VIP 子刚刚。" msgid "The action or operation requested is invalid" msgstr "请求的动作或操作无效" msgid "The action to be executed when the receiver is signaled." msgstr "通告接收器时要执行的操作。" msgid "The administrative state of the firewall." msgstr "防火墙的管理状态。" msgid "The administrative state of the health monitor." msgstr "运行状况监视器的管理状态。" msgid "The administrative state of the ipsec site connection." msgstr "IPSec 站点连接的管理状态。" msgid "The administrative state of the pool member." msgstr "池成员的管理状态。" msgid "The administrative state of the router." msgstr "路由器的管理状态。" msgid "The administrative state of the vpn service." msgstr "VPN 服务的管理状态。" msgid "The administrative state of this Load Balancer." msgstr "此负载均衡器的管理状态。" msgid "The administrative state of this health monitor." msgstr "此运行状况监视器的管理状态。" msgid "The administrative state of this listener." msgstr "此侦听器的管理状态。" msgid "The administrative state of this pool member." msgstr "此池成员的管理状态。" msgid "The administrative state of this pool." msgstr "此池的管理状态。" msgid "The administrative state of this port." msgstr "此端口的管理状态。" msgid "The administrative state of this vip." msgstr "此 vip 的管理状态。" msgid "The administrative status of the network." msgstr "网络的管理状态。" msgid "The administrator password for the server." msgstr "服务器的管理员密码。" msgid "The aggregation method to compare to the threshold." msgstr "用于与阈值进行比较的汇总方法。" msgid "The algorithm type used to generate the secret." msgstr "用于生成密钥的算法类型。" msgid "" "The algorithm type used to generate the secret. Required for key and " "asymmetric types of order." msgstr "用于生成密钥的算法类型。对于密钥类型和非对称类型的指令是必需的。" msgid "The algorithm used to distribute load between the members of the pool." msgstr "用于在池的成员之间分配负载的算法。" msgid "The allocated address of this IP." msgstr "此 IP 的已分配地址。" msgid "" "The approximate interval, in seconds, between health checks of an individual " "instance." msgstr "单个实例运行状况检查的大概时间间隔(以秒计)。" msgid "The authentication hash algorithm of the ipsec policy." msgstr "IPSec 策略的认证散列算法。" msgid "The authentication hash algorithm used by the ike policy." msgstr "由 ike 策略使用的认证散列算法。" msgid "The authentication mode of the ipsec site connection." msgstr "IPSec 站点连接的认证方式。" msgid "The availability zone in which the volume is located." msgstr "卷所在的可用性区域。" msgid "The availability zone in which the volume will be created." msgstr "将在其中创建卷的可用性区域。" msgid "The availability zone of shared filesystem." msgstr "共享文件系统的可用区域。" msgid "The bay name." msgstr "支架名称。" msgid "The bit-length of the secret." msgstr "密钥的位长度。" msgid "" "The bit-length of the secret. Required for key and asymmetric types of order." msgstr "密钥的位长度。对于密钥类型和非对称类型的指令是必需的。" #, python-format msgid "The bucket you tried to delete is not empty (%s)." msgstr "已尝试删除的存储区不是空的 (%s)。" msgid "The can be used to unmap a defined device." msgstr "它可用于对已定义设备取消映射。" msgid "The certificate or AWS Key ID provided does not exist" msgstr "已提供的证书或 AWS 密钥标识不存在" msgid "The channel for receiving signals." msgstr "用于接收信号的通道。" msgid "" "The class that provides encryption support. For example, nova.volume." "encryptors.luks.LuksEncryptor." msgstr "" "用于提供加密支持的类。例如,nova.volume.encryptors.luks.LuksEncryptor。" #, python-format msgid "The client (%(client_name)s) is not available." msgstr "客户机 (%(client_name)s) 不可用。" msgid "The cluster ID this node belongs to." msgstr "此节点所属的集群标识。" msgid "The config value of the software config." msgstr "软件配置的配置值。" msgid "" "The configuration tool used to actually apply the configuration on a server. 
" "This string property has to be understood by in-instance tools running " "inside deployed servers." msgstr "" "用于在服务器上实际地应用配置的配置工具。此字符串属性必须能够由已部署的服务器" "内运行的“实例中”工具所理解。" msgid "The content of the CSR. Only for certificate orders." msgstr "CSR 的内容。仅适用于证书指令。" #, python-format msgid "" "The contents of personality file \"%(path)s\" is larger than the maximum " "allowed personality file size (%(max_size)s bytes)." msgstr "" "个性化文件“%(path)s”的内容超过允许的最大个性化文件大小(%(max_size)s 个字" "节)。" msgid "The current size of AutoscalingResourceGroup." msgstr "AutoscalingResourceGroup 的当前大小。" msgid "The current status of the volume." msgstr "卷的当前状态。" msgid "" "The database instance was created, but heat failed to set up the datastore. " "If a database instance is in the FAILED state, it should be deleted and a " "new one should be created." msgstr "" "数据库实例已创建,但是 Heat 未能设置数据存储器。如果数据库实例处于故障状态," "那么应该将其删除并且应该创建新的数据库实例。" msgid "" "The dead peer detection protocol configuration of the ipsec site connection." msgstr "IPSec 站点连接的已终止同级检测协议配置。" msgid "The decrypted secret payload." msgstr "已解密的密钥有效类型。" msgid "" "The default cloud-init user set up for each image (e.g. \"ubuntu\" for " "Ubuntu 12.04+, \"fedora\" for Fedora 19+ and \"cloud-user\" for CentOS/RHEL " "6.5)." msgstr "" "为每个映像设置的缺省 cloud-init 用户(例如中,对于 Ubuntu 12.04+ 为“ubuntu”," "对于 Fedora 19+ 为“fedora”,对于 CentOS/RHEL 6.5 为“cloud-user”)。" msgid "The description for the QoS policy." msgstr "QoS 策略的描述。" msgid "The description of the ike policy." msgstr "ike 策略的描述。" msgid "The description of the ipsec policy." msgstr "IPSec 策略的描述。" msgid "The description of the ipsec site connection." msgstr "IPSec 站点连接的描述。" msgid "The description of the vpn service." msgstr "VPN 服务的描述。" msgid "The destination for static route." msgstr "静态路由的目标。" msgid "The details of physical object." msgstr "物理对象的详细信息。" msgid "The device id for the network gateway." msgstr "网络网关的设备标识。" msgid "" "The device where the volume is exposed on the instance. This assignment may " "not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "在其上对实例公开卷的设备。此分配可能不被采纳,建议改为使用路径 /dev/disk/by-" "id/virtio-。" msgid "" "The direction in which metering rule is applied, either ingress or egress." msgstr "应用测量规则的方向(进入或离开)。" msgid "The direction in which metering rule is applied." msgstr "应用测量规则的方向。" msgid "" "The direction in which the security group rule is applied. For a compute " "instance, an ingress security group rule matches traffic that is incoming " "(ingress) for that instance. An egress rule is applied to traffic leaving " "the instance." msgstr "" "应用安全组规则的方向。对于计算实例,进入权安全组规则与该实例的引入(进入)流" "量匹配。外出权规则应用于离开该实例的流量。" msgid "The directory to search for environment files." msgstr "用于搜索环境文件的目录。" msgid "The ebs volume to attach to the instance." msgstr "要连接至实例的 ebs 卷。" msgid "The encapsulation mode of the ipsec policy." msgstr "IPSec 策略的封装方式。" msgid "The encoding format used to provide the payload data." msgstr "用于提供有效数据的加密格式。" msgid "The encryption algorithm of the ipsec policy." msgstr "IPSec策略的加密算法。" msgid "The encryption algorithm or mode. For example, aes-xts-plain64." msgstr "加密算法或方式,例如,aes-xts-plain64。" msgid "The encryption algorithm used by the ike policy." msgstr "由 ike 策略使用的加密算法。" msgid "The environment is not a valid YAML mapping data type." msgstr "该环境不是有效的 YAML 映射数据类型。" msgid "The expiration date for the secret in ISO-8601 format." msgstr "密钥的到期日期为 ISO-8601 格式。" msgid "The external load balancer port number." 
msgstr "外部负载均衡器端口号。" msgid "The extra specs key and value pairs of the volume type." msgstr "卷类型的额外规范键值对。" msgid "The flavor to use." msgstr "要使用的实例类型。" #, python-format msgid "The following parameters are immutable and may not be updated: %(keys)s" msgstr "以下参数不可变,因此无法更新:%(keys)s" #, python-format msgid "The function %s is not supported in this version of HOT." msgstr "函数 %s 在此版本的 HOT 中不受支持。" msgid "" "The gateway IP address. Set to any of [ null | ~ | \"\" ] to create/update a " "subnet without a gateway. If omitted when creation, neutron will assign the " "first free IP address within the subnet to the gateway automatically. If " "remove this from template when update, the old gateway IP address will be " "detached." msgstr "" "网关 IP 地址。设置为 [ null | ~ | \"\" ] 中的任何一项以在没有网关的情况下创" "建/更新子网。如果在创建时省略此项,那么 neutron 会自动将该子网内的第一个可用 " "IP 地址分配给网关。如果更新时从模板移除此项,那么旧网关 IP 地址将被拆离。" #, python-format msgid "The grouped parameter %s does not reference a valid parameter." msgstr "分组参数 %s 未引用有效参数。" msgid "The host from the container URL." msgstr "源自容器 URL 的主机。" msgid "The host from which a user is allowed to connect to the database." msgstr "允许用户从其连接至数据库的主机。" msgid "" "The id for L2 segment on the external side of the network gateway. Must be " "specified when using vlan." msgstr "网络网关外端上的 L2 分段的标识。当使用 vlan 时,必须指定。" msgid "The identifier of the CA to use." msgstr "要使用的 CA 的标识。" msgid "The image ID. Glance will generate a UUID if not specified." msgstr "映像标识。Glance 将生成 UUID(如果未指定)。" msgid "The initiator of the ipsec site connection." msgstr "IPSec 站点连接的发起方。" msgid "The input string to be stored." msgstr "要存储的输入字符串。" msgid "The interface name for the network gateway." msgstr "网络网关的接口名称。" msgid "The internal network to connect on the network gateway." msgstr "要在网络网关上连接的内部网络。" msgid "The last operation for the database instance failed due to an error." msgstr "由于发生错误,对数据库实例的最后一个操作失败。" #, python-format msgid "The length must be at least %(min)s." msgstr "长度必须至少为 %(min)s。" #, python-format msgid "The length must be in the range %(min)s to %(max)s." msgstr "长度必须在范围 %(min)s 到 %(max)s 中。" #, python-format msgid "The length must be no greater than %(max)s." msgstr "长度不能超过 %(max)s。" msgid "The length of time, in minutes, to wait for the nested stack creation." msgstr "用于等待嵌套堆栈创建的时间长度(以分钟计)。" msgid "" "The list of HTTP status codes expected in response from the member to " "declare it healthy." msgstr "希望在成员响应中看到的 HTTP 状态码(用于声明其处于正常状态)的列表。" msgid "The list of Nova server IDs load balanced." msgstr "已实现负载均衡的 Nova 服务器的标识列表。" msgid "The list of Pools related to this monitor." msgstr "与此监视器相关的池的列表。" msgid "The list of attachments of the volume." msgstr "卷的附件的列表。" msgid "" "The list of configurations for the different lifecycle actions of the " "represented software component." msgstr "所表示软件组件的不同生命周期操作的配置的列表。" msgid "The list of instance IDs load balanced." msgstr "已实现负载均衡的实例标识的列表。" msgid "" "The list of resource types to create. This list may contain type names or " "aliases defined in the resource registry. Specific template names are not " "supported." msgstr "" "要创建的资源类型的列表。此列表可能包含资源注册表中定义的类型名称或别名。特定" "模板名称不受支持。" msgid "The list of tags to associate with the volume." msgstr "要与卷关联的标记的列表。" msgid "The load balancer transport protocol to use." msgstr "要使用的负载均衡器传输协议。" msgid "" "The location where the volume is exposed on the instance. This assignment " "may not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." 
msgstr "" "在其上对实例公开卷的位置。此分配可能不被采纳,建议改为使用路径 /dev/disk/by-" "id/virtio-。" msgid "The manually assigned alternative public IPv4 address of the server." msgstr "服务器的手动分配替代公共 IPv4 地址。" msgid "The manually assigned alternative public IPv6 address of the server." msgstr "服务器的手动分配替代公共 IPv6 地址。" msgid "The maximum number of connections per second allowed for the vip." msgstr "vip 所允许的每秒最大连接数。" msgid "" "The maximum number of connections permitted for this load balancer. Defaults " "to -1, which is infinite." msgstr "此负载均衡器许可的最大连接数。缺省为 -1,表示无限制。" msgid "The maximum number of resources to create at once." msgstr "要立即创建的最大资源数。" msgid "The maximum number of resources to replace at once." msgstr "一次要替换的最大资源数。" msgid "" "The maximum number of seconds to wait for the resource to signal completion. " "Once the timeout is reached, creation of the signal resource will fail." msgstr "等待资源通告完成的最大秒数。达到此超时后,通告资源创建操作将失败。" msgid "" "The maximum port number in the range that is matched by the security group " "rule. The port_range_min attribute constrains the port_range_max attribute. " "If the protocol is ICMP, this value must be an ICMP type." msgstr "" "通过安全组规则匹配的范围中的最大端口号。port_range_min 属性包含 " "port_range_max 属性。如果协议为 ICMP,那么此值必须为 ICMP 类型。" msgid "" "The maximum transmission unit size (in bytes) of the ipsec site connection." msgstr "IPSec 站点连接的最大传输单元大小(按字节计)。" msgid "The maximum transmission unit size(in bytes) for the network." msgstr "网络的最大传输单元大小(以字节计)。" msgid "The metering label ID to associate with this metering rule." msgstr "要与此测量规则关联的测量标签标识。" msgid "" "The metric dimensions to match to the alarm dimensions. One or more " "dimension key names separated by a comma." msgstr "要与警报维匹配的指标维。一个或多个维键名称之间用逗号分隔。" msgid "" "The minimum number of characters from this character class that will be in " "the generated string." msgstr "将包含在所生成字符串中的此字符类中的最小字符数。" msgid "" "The minimum number of characters from this sequence that will be in the " "generated string." msgstr "将包含在所生成字符串中的此序列中的最小字符数。" msgid "" "The minimum number of resources in service while rolling updates are being " "executed." msgstr "执行滚动更新时,处于服务状态的最小资源数。" msgid "" "The minimum port number in the range that is matched by the security group " "rule. If the protocol is TCP or UDP, this value must be less than or equal " "to the value of the port_range_max attribute. If the protocol is ICMP, this " "value must be an ICMP type." msgstr "" "通过安全组规则匹配的范围中的最小端口号。如果协议为 TCP 或 UDP,那么此值必须小" "于或等于 port_range_max 属性的值。如果协议为 ICMP,那么此值必须为 ICMP 类型。" msgid "The name for the QoS policy." msgstr "QoS 策略的名称。" msgid "The name for the address scope." msgstr "地址范围的名称。" msgid "" "The name of the driver used for instantiating container networks. By " "default, Magnum will choose the pre-configured network driver based on COE " "type." msgstr "" "用于实例化容器网络的驱动程序的名称。缺省情况下,Magnum 将根据 COE 类型选择预" "先配置的网络驱动程序。" msgid "The name of the error document." msgstr "错误文档的名称。" msgid "The name of the hosted zone that is associated with the LoadBalancer." msgstr "与负载均衡器关联的托管区域的名称。" msgid "The name of the ike policy." msgstr "IKE策略的名称。" msgid "The name of the index document." msgstr "索引文档的名称。" msgid "The name of the ipsec policy." msgstr "IPSec策略的名称。" msgid "The name of the ipsec site connection." msgstr "IPSec 站点连接的名称。" msgid "The name of the key pair." msgstr "密钥对的名称。" msgid "The name of the network gateway." msgstr "网络网关的名称。" msgid "The name of the network." msgstr "网络的名称。" msgid "The name of the router." msgstr "路由的名称。" msgid "The name of the subnet." 
msgstr "子网的名称。" msgid "The name of the user that the new key will belong to." msgstr "新密钥将所属的用户的名称。" msgid "" "The name of the virtual device. The name must be in the form ephemeralX " "where X is a number starting from zero (0); for example, ephemeral0." msgstr "" "虚拟设备的名称。该名称必须为 ephemeralX 格式,其中 X 是从 0 开始的数字;例" "如,ephemeral0。" msgid "The name of the vpn service." msgstr "VPN 服务的名称。" msgid "The name or ID of QoS policy to attach to this network." msgstr "要附加至此网络的 QoS 策略的名称或标识。" msgid "The name or ID of QoS policy to attach to this port." msgstr "要附加至此端口的 QoS 策略的名称或标识。" msgid "The name or ID of parent of this keystone project in hierarchy." msgstr "此 keystone 项目在层次结构中的父代的名称或标识。" msgid "The name or ID of target cluster." msgstr "目标集群的名称或标识。" msgid "The name or ID of the bay model." msgstr "支架模型的名称或标识。" msgid "The name or ID of the subnet on which to allocate the VIP address." msgstr "要在其上分配 VIP 地址的子网的名称或标识。" msgid "The name or ID of the subnet pool." msgstr "子网池的名称或标识。" msgid "The name or id of the Senlin profile." msgstr "Senlin 概要文件的名称或标识。" msgid "The negotiation mode of the ike policy." msgstr "ike 策略的协商方式。" msgid "The next hop for the destination." msgstr "目标的下一中断段。" msgid "The node count for this bay." msgstr "此支架的节点计数。" msgid "The notification methods to use when an alarm state is ALARM." msgstr "要在警报状态为 ALARM 时使用的通知方法。" msgid "The notification methods to use when an alarm state is OK." msgstr "要在警报状态为 OK 时使用的通知方法。" msgid "The notification methods to use when an alarm state is UNDETERMINED." msgstr "要在警报状态为 UNDETERMINED 时使用的通知方法。" msgid "The number of I/O operations per second that the volume supports." msgstr "该卷支持的每秒 I/O 操作数。" msgid "The number of bytes stored in the container." msgstr "容器中存储的字节数。" msgid "" "The number of consecutive health probe failures required before moving the " "instance to the unhealthy state" msgstr "将实例移至不正常状态之前需要的连续运行状况探测器失败次数" msgid "" "The number of consecutive health probe successes required before moving the " "instance to the healthy state." msgstr "将实例移至正常状态之前需要的连续运行状况探测器成功次数。" msgid "The number of master nodes for this bay." msgstr "此支架的主节点数。" msgid "The number of objects stored in the container." msgstr "容器中存储的对象数。" msgid "The number of replicas to be created." msgstr "要创建的副本数。" msgid "The number of resources to create." msgstr "要创建的资源数目。" msgid "The number of seconds to wait between batches of updates." msgstr "更新批次之间等待的秒数。" msgid "The number of seconds to wait between batches." msgstr "要在批处理之间等待的秒数。" msgid "The number of seconds to wait for the cluster actions." msgstr "要等待集群操作的秒数。" msgid "" "The number of seconds to wait for the correct number of signals to arrive." msgstr "用于等待正确数目的信号到达的秒数。" msgid "" "The number of success signals that must be received before the stack " "creation process continues." msgstr "在堆栈创建过程继续之前,必须接收到的成功信号数。" msgid "" "The optional public key. This allows users to supply the public key from a " "pre-existing key pair. If not supplied, a new key pair will be generated." msgstr "" "可选公用密钥。这允许用户提供来自预先存在的密钥对的公用密钥。如果未提供,那么" "将生成新密钥对。" msgid "" "The owner tenant ID of the address scope. Only administrative users can " "specify a tenant ID other than their own." msgstr "地址范围的所有者租户标识。只有管理用户才能指定他们自身以外的租户标识。" msgid "The owner tenant ID of this QoS policy." msgstr "此 QoS 策略的所有者租户标识。" msgid "The owner tenant ID of this rule." msgstr "此规则的所有者租户标识。" msgid "" "The owner tenant ID. Only required if the caller has an administrative role " "and wants to create a RBAC for another tenant." 
msgstr "" "所有者租户标识。仅当调用者具有管理角色并且想要为另一租户创建 RBAC 时,才需要" "此项。" msgid "The parameters passed to action when the receiver is signaled." msgstr "通告接收器时要传递至操作的参数。" msgid "The parent URL of the container." msgstr "容器的父 URL。" msgid "The payload of the created certificate, if available." msgstr "所创建证书的有效内容(如果提供)。" msgid "The payload of the created intermediates, if available." msgstr "所创建中间项的有效内容(如果提供)。" msgid "The payload of the created private key, if available." msgstr "所创建私钥的有效内容(如果提供)。" msgid "The payload of the created public key, if available." msgstr "所创建公钥的有效内容(如果提供)。" msgid "The perfect forward secrecy of the ike policy." msgstr "ike 策略的完全正向保密。" msgid "The perfect forward secrecy of the ipsec policy." msgstr "IPSec 策略的完全正向保密。" #, python-format msgid "The personality property may not contain greater than %s entries." msgstr "个性化属性包含的条目数不能超过 %s。" msgid "The physical mechanism by which the virtual network is implemented." msgstr "用来实现虚拟网络的物理机制。" msgid "The port being checked." msgstr "正在检查端口。" msgid "The port id, either subnet or port_id should be specified." msgstr "应该指定端口标识(subnet 或 port_id)。" msgid "The port on which the server will listen." msgstr "服务器将侦听的端口。" msgid "The port, either subnet or port should be specified." msgstr "应该指定端口(subnet 或 port)。" msgid "The pre-shared key string of the ipsec site connection." msgstr "IPSec 站点连接的预先共享密钥字符串。" msgid "The private key if it has been saved." msgstr "专用密钥(如果已保存)。" msgid "The profile of certificate to use." msgstr "要使用的证书的概要文件。" msgid "" "The protocol that is matched by the security group rule. Valid values " "include tcp, udp, and icmp." msgstr "通过安全组规则匹配的协议。有效值包括 tcp、udp 和 icmp。" msgid "The public key." msgstr "公用密钥。" msgid "The query string is malformed" msgstr "查询字符串格式不正确" msgid "The query to filter the metrics." msgstr "用于过滤指标的查询。" msgid "" "The random string generated by this resource. This value is also available " "by referencing the resource." msgstr "由此资源生成的随机字符串。此值也可通过引用该资源获得。" msgid "The reference to a LaunchConfiguration resource." msgstr "对 LaunchConfiguration 资源的引用。" msgid "" "The remote IP prefix (CIDR) to be associated with this security group rule." msgstr "要与此安全组规则关联的远程 IP 前缀 (CIDR)。" msgid "The remote branch router identity of the ipsec site connection." msgstr "IPSec 站点连接的远程分支路由器身份。" msgid "The remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "远程分支路由器公共 IPv4 地址或 IPv6 地址,或者 FQDN。" msgid "" "The remote group ID to be associated with this security group rule. If no " "value is specified then this rule will use this security group for the " "remote_group_id. The remote mode parameter must be set to \"remote_group_id" "\"." msgstr "" "要与此安全组规则关联的远程组标识。如果未指定任何值,那么此规则会将此安全组用" "于 remote_group_id。远程方式参数必须设置为“remote_group_id”。" msgid "The remote subnet(s) in CIDR format of the ipsec site connection." msgstr "采用 IPSec 站点连接的 CIDR 格式的远程子网。" msgid "The request is missing an action or operation parameter" msgstr "请求缺少行动或操作参数" msgid "The request processing has failed due to an internal error" msgstr "由于内部错误,请求处理失败" msgid "The request signature does not conform to AWS standards" msgstr "请求的签名不符合AWS标准" msgid "" "The request signature we calculated does not match the signature you provided" msgstr "我们已计算的请求签名与您提供的签名不匹配" msgid "The requested action is not yet implemented" msgstr "请求的动作还未实现" #, python-format msgid "The resource %s is already being updated." msgstr "已在更新资源 %s。" msgid "The resource href of the queue." msgstr "队列的资源 href。" msgid "The route mode of the ipsec site connection." 
msgstr "IPSec 站点连接的路由方式。" msgid "The router id." msgstr "路由ID。" msgid "The router to which the vpn service will be inserted." msgstr "将对其插入 VPN 服务的路由器。" msgid "The router." msgstr "路由器。" msgid "The safety assessment lifetime configuration for the ike policy." msgstr "ike 策略的安全评估生存期配置。" msgid "The safety assessment lifetime configuration of the ipsec policy." msgstr "IPSec 策略的安全评估生存期配置。" msgid "" "The security group that you can use as part of your inbound rules for your " "LoadBalancer's back-end instances." msgstr "安全组,可将它用作负载均衡器的后端实例的入站规则的一部分。" msgid "" "The server could not comply with the request since it is either malformed or " "otherwise incorrect." msgstr "服务器无法遵循请求,因为请求格式不正确。" msgid "The set of parameters passed to this nested stack." msgstr "传递至此嵌套堆栈的参数的集合。" msgid "The size in GB of the docker volume." msgstr "底座卷的大小(以 GB 计)。" msgid "The size of AutoScalingGroup can not be less than zero" msgstr "AutoScalingGroup 的大小不能小于零" msgid "" "The size of the prefix to allocate when the cidr or prefixlen attributes are " "not specified while creating a subnet." msgstr "创建子网期间未指定 cidr 或 prefixlen 属性的大小时要分配的前缀大小。" msgid "The size of the swap, in MB." msgstr "交换的大小,以 MB 计。" msgid "The size of the volume in GB." msgstr "以GB为单位的卷大小。" msgid "" "The size of the volume, in GB. It is safe to leave this blank and have the " "Compute service infer the size." msgstr "卷的大小,以 GB 计。安全的做法是将此保留为空白并让计算服务推断该大小。" msgid "" "The size of the volume, in GB. Must be equal or greater than the size of the " "snapshot. It is safe to leave this blank and have the Compute service infer " "the size." msgstr "" "卷的大小,以 GB 计。必须等于或大于快照的大小。安全的做法是将此项留为空白并让" "计算服务推断大小。" msgid "The snapshot the volume was created from, if any." msgstr "已从其创建卷的快照(如果有)。" msgid "The source of certificate request." msgstr "证书请求的源。" #, python-format msgid "The specified reference \"%(resource)s\" (in %(key)s) is incorrect." msgstr "指定的引用“%(resource)s”(在 %(key)s 中)不正确。" msgid "The start and end addresses for the allocation pools." msgstr "分配池的起始和结束地址。" msgid "The status of the container." msgstr "容器的状态。" msgid "The status of the firewall." msgstr "防火墙的状态。" msgid "The status of the ipsec site connection." msgstr "IPSec 站点连接的状态。" msgid "The status of the network." msgstr "此网络的状态。" msgid "The status of the order." msgstr "指令的状态。" msgid "The status of the port." msgstr "端口状态。" msgid "The status of the router." msgstr "路由器的状态。" msgid "The status of the secret." msgstr "密钥的状态。" msgid "The status of the vpn service." msgstr "VPN 服务的状态。" msgid "" "The string that was stored. This value is also available by referencing the " "resource." msgstr "所存储的字符串。此值还可通过引用资源来提供。" msgid "The subject of the certificate request." msgstr "证书请求的主题。" msgid "" "The subnet for the port on which the members of the pool will be connected." msgstr "端口的子网,将在该端口上连接池的成员。" msgid "The subnet, either subnet or port should be specified." msgstr "应该指定子网(subnet 或 port)。" msgid "The tag key name." msgstr "标记键名。" msgid "The tag value." msgstr "标注值。" msgid "The template is not a JSON object or YAML mapping." msgstr "该模板不是 JSON 对象或 YAML 映射。" #, python-format msgid "The template section is invalid: %(section)s" msgstr "模板部分无效:%(section)s" #, python-format msgid "The template version is invalid: %(explanation)s" msgstr "模板版本无效:%(explanation)s" msgid "The tenant owning this floating IP." msgstr "拥有此浮动 IP 的租户。" msgid "The tenant owning this network." msgstr "拥有此网络的租户。" msgid "The time range in seconds." msgstr "时间范围(以秒计)。" msgid "The timestamp indicating volume creation." 
msgstr "指示卷创建的时间戳记。" msgid "The transform protocol of the ipsec policy." msgstr "IPSec 策略的变换协议。" msgid "The type of profile." msgstr "概要文件的类型。" msgid "The type of senlin policy." msgstr "senlin 策略的类型。" msgid "The type of the certificate request." msgstr "证书请求的类型。" msgid "The type of the order." msgstr "指令的类型。" msgid "The type of the resources in the group." msgstr "组中资源的类型。" msgid "The type of the secret." msgstr "密钥的类型。" msgid "The type of the volume mapping to a backend, if any." msgstr "映射至后端的卷的类型(如果有)。" msgid "The type/format the secret data is provided in." msgstr "提供密钥数据时使用的类型/格式。" msgid "The type/mode of the algorithm associated with the secret information." msgstr "与密钥信息相关联的算法的类型/方式。" msgid "The unencrypted plain text of the secret." msgstr "密钥的未加密纯文本。" msgid "" "The unique identifier of ike policy associated with the ipsec site " "connection." msgstr "与 IPSec 站点连接相关联的 ike 策略的唯一标识。" msgid "" "The unique identifier of ipsec policy associated with the ipsec site " "connection." msgstr "与 IPSec 站点连接相关联的 IPSec 策略的唯一标识。" msgid "" "The unique identifier of the router to which the vpn service was inserted." msgstr "已对其插入 VPN 服务的路由器的唯一标识。" msgid "" "The unique identifier of the subnet in which the vpn service was created." msgstr "已在其中创建 VPN 服务的子网的唯一标识。" msgid "The unique identifier of the tenant owning the ike policy." msgstr "拥有 ike 策略的租户的唯一标识。" msgid "The unique identifier of the tenant owning the ipsec policy." msgstr "拥有 IPSec 策略的租户的唯一标识。" msgid "The unique identifier of the tenant owning the ipsec site connection." msgstr "拥有 IPSec 站点连接的租户的唯一标识。" msgid "The unique identifier of the tenant owning the vpn service." msgstr "拥有 VPN 服务的租户的唯一标识。" msgid "" "The unique identifier of vpn service associated with the ipsec site " "connection." msgstr "与 IPSec 站点连接相关联的 VPN 服务的唯一标识。" msgid "" "The user-defined region ID and should unique to the OpenStack deployment. " "While creating the region, heat will url encode this ID." msgstr "" "用户定义的区域标识,对于 OpenStack 部署应该是唯一的。创建区域时,heat 会使用 " "URL 对此标识进行编码。" msgid "" "The value for the socket option TCP_KEEPIDLE. This is the time in seconds " "that the connection must be idle before TCP starts sending keepalive probes." msgstr "" "套接字选项 TCP_KEEPIDLE 的值。这是 TCP 开始发送 keepalive 探针之前连接必须保" "持空闲的时间(以秒计)。" #, python-format msgid "The value must be at least %(min)s." msgstr "值必须至少为 %(min)s。" #, python-format msgid "The value must be in the range %(min)s to %(max)s." msgstr "值必须在范围 %(min)s 到 %(max)s 中。" #, python-format msgid "The value must be no greater than %(max)s." msgstr "值不能大于 %(max)s。" #, python-format msgid "The values of the \"for_each\" argument to \"%s\" must be lists" msgstr "针对“%s”的“for_each”自变量的值必须为列表" msgid "The version of the ike policy." msgstr "IKE策略的版本。" msgid "" "The vnic type to be bound on the neutron port. To support SR-IOV PCI " "passthrough networking, you can request that the neutron port to be realized " "as normal (virtual nic), direct (pci passthrough), or macvtap (virtual " "interface with a tap-like software interface). Note that this only works for " "Neutron deployments that support the bindings extension." msgstr "" "要在 neutron 端口上绑定的 vnic 类型。为了支持 SR-IOV PCI passthrough 联网,可" "请求将 neutron 端口以 normal(虚拟 nic)、direct (pci passthrough) 或 " "macvtap(具有分接头之类的软件接口的虚拟接口)形式实现。请注意,这仅对于支持绑" "定扩展的 Neutron 部署起作用。" msgid "The volume type." msgstr "卷类型。" msgid "The volume used as source, if any." msgstr "用作源的卷(如果有)。" msgid "The volume_id can be boot or non-boot device to the server." 
msgstr "对于服务器,volume_id 可以是引导设备或非引导设备。" msgid "The website endpoint for the specified bucket." msgstr "所指定存储区的 Web 站点端点。" #, python-format msgid "There is no rule %(rule)s. List of allowed rules is: %(rules)s." msgstr "没有规则 %(rule)s。允许的规则列表为:%(rules)s。" msgid "" "There is no such option during 5.0.0, so need to make this attribute " "unsupported, otherwise error will raised." msgstr "5.0.0 中没有这类选项,所以需要将此属性标记为不受支持,否则将发生错误。" msgid "" "There is no such option during 5.0.0, so need to make this property " "unsupported while it not used." msgstr "5.0.0 中没有这类选项,所以需要在未使用此属性时将其标记为不受支持。" #, python-format msgid "" "There was an error loading the definition of the global resource type " "%(type_name)s." msgstr "装入全局资源类型 %(type_name)s 的定义时发生了错误。" msgid "This endpoint is enabled or disabled." msgstr "此端点已启用或已禁用。" msgid "This project is enabled or disabled." msgstr "此项目已启用或已禁用。" msgid "This region is enabled or disabled." msgstr "此区域已启用或已禁用。" msgid "This service is enabled or disabled." msgstr "此服务已启用或已禁用。" msgid "Threshold to evaluate against." msgstr "要对其进行评估的阈值。" msgid "Time To Live (Seconds)." msgstr "生存时间(秒)。" msgid "Time of the first execution in format \"YYYY-MM-DD HH:MM\"." msgstr "第一次执行时间,格式为“YYYY-MM-DD HH:MM”。" msgid "Time of the next execution in format \"YYYY-MM-DD HH:MM:SS\"." msgstr "下一次执行时间,格式为“YYYY-MM-DD HH:MM:SS”。" msgid "" "Timeout for client connections' socket operations. If an incoming connection " "is idle for this number of seconds it will be closed. A value of '0' means " "wait forever." msgstr "" "客户机连接的套接字操作的超时。如果入局连接的空闲时间达到此秒数,那么该连接将" "被关闭。值“0”意味着永远等待。" msgid "Timeout for creating the bay in minutes. Set to 0 for no timeout." msgstr "针对创建支架的超时(以分钟计)。设置为 0 表示无超时。" msgid "Timeout in seconds for stack action (ie. create or update)." msgstr "堆栈操作(即,创建或更新)的超时,以秒计。" msgid "" "Toggle to enable/disable caching when Orchestration Engine looks for other " "OpenStack service resources using name or id. Please note that the global " "toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use " "this feature." msgstr "" "进行切换以在编排引擎使用名称或标识查找其他 OpenStack 服务资源时启用/禁用缓" "存。请注意,必须对 oslo.cache 启用全局切换(在 [cache] 组中为 enabled=True)" "以使用此功能部件。" msgid "" "Toggle to enable/disable caching when Orchestration Engine retrieves " "extensions from other OpenStack services. Please note that the global toggle " "for oslo.cache(enabled=True in [cache] group) must be enabled to use this " "feature." msgstr "" "进行切换以在编排引擎从其他 OpenStack 服务检索扩展时启用/禁用缓存。请注意,必" "须对 oslo.cache 启用全局切换(在 [cache] 组中为 enabled=True)以使用此功能部" "件。" msgid "" "Toggle to enable/disable caching when Orchestration Engine validates " "property constraints of stack.During property validation with constraints " "Orchestration Engine caches requests to other OpenStack services. Please " "note that the global toggle for oslo.cache(enabled=True in [cache] group) " "must be enabled to use this feature." msgstr "" "进行切换以在编排引擎验证堆栈的属性约束时启用/禁用缓存。使用约束验证属性时,编" "排引擎将请求缓存至其他 OpenStack 服务,请注意,必须对 oslo.cache 启用全局切换" "(在 [cache] 组中为 enabled=True)以使用此功能部件。" msgid "" "Token for stack-user which can be used for signalling handle when " "signal_transport is set to TOKEN_SIGNAL. None for all other signal " "transports." msgstr "" "signal_transport 设置为 TOKEN_SIGNAL 时可用于通告句柄的堆栈用户令牌。对于所有" "其他信号传输,此项为 None。" msgid "" "Tokens are not needed for Swift TempURLs. This attribute is being kept for " "compatibility with the OS::Heat::WaitConditionHandle resource." 
msgstr "" "Swift TempURL 不需要令牌。保留此属性是为了与 OS::Heat::WaitConditionHandle 资" "源兼容。" msgid "Topic" msgstr "主题" msgid "Transform protocol for the ipsec policy." msgstr "IPSec 策略的变换协议。" msgid "True if alarm evaluation/actioning is enabled." msgstr "如果已启用警报评估/进行操作,那么为 true。" msgid "" "True if the system should remember a generated private key; False otherwise." msgstr "如果系统应该记住已生成的专用密钥,那么为 True;否则,为 False。" msgid "Type of access that should be provided to guest." msgstr "应对访客提供的访问权的类型。" msgid "Type of adjustment (absolute or percentage)." msgstr "调整的类型(完全或百分比)。" msgid "" "Type of endpoint in Identity service catalog to use for communication with " "the OpenStack service." msgstr "要用于与 OpenStack 服务通信的身份服务目录中的端点类型。" msgid "Type of keystone Service." msgstr "keystone 服务的类型。" msgid "Type of receiver." msgstr "接收器的类型。" msgid "Type of the data source." msgstr "数据源的类型。" msgid "Type of the notification." msgstr "通知的类型。" msgid "Type of the object that RBAC policy affects." msgstr "RBAC 策略影响的对象类型。" msgid "Type of the value of the input." msgstr "输入值的类型。" msgid "Type of the value of the output." msgstr "输出的值类型。" msgid "Type of the volume to create on Cinder backend." msgstr "要在 Cinder 后端上创建的卷类型。" msgid "URL for API authentication" msgstr "API认证的URL" msgid "URL for the data source." msgstr "数据源的 URL。" msgid "" "URL for the job binary. Must be in the format swift:/// or " "internal-db://." msgstr "" "作业二进制文件的 URL。必须为以下格式:swift:/// 或 internal-" "db://。" msgid "" "URL of TempURL where resource will signal completion and optionally upload " "data." msgstr "资源将在其中发出信号指示完成及(可选)上载数据的 TempURL的 URL。" msgid "URL of keystone service endpoint." msgstr "keystone 服务端点的 URL。" msgid "URL of the Heat CloudWatch server." msgstr "Heat CloudWatch 服务器的 URL。" msgid "" "URL of the Heat metadata server. NOTE: Setting this is only needed if you " "require instances to use a different endpoint than in the keystone catalog" msgstr "" "Heat 元数据服务器的 URL。注意:仅当您要求实例使用的端点不同于 keystone 目录中" "的端点时,才需要设置此项。" msgid "URL of the Heat waitcondition server." msgstr "Heat 等待条件服务器的 URL。" msgid "" "URL where the data for this image already resides. For example, if the image " "data is stored in swift, you could specify \"swift://example.com/container/" "obj\"." msgstr "" "此映像的数据已驻留的 URL。例如,如果映像数据存储在 swift 中,那么可指" "定“swift://example.com/container/obj”。" msgid "UUID of the internal subnet to which the instance will be attached." msgstr "实例将连接至的内部子网的 UUID。" #, python-format msgid "" "Unable to find neutron provider '%(provider)s', available providers are " "%(providers)s." msgstr "找不到 neutron 提供程序“%(provider)s”,可用提供程序为 %(providers)s。" #, python-format msgid "" "Unable to find senlin policy type '%(pt)s', available policy types are " "%(pts)s." msgstr "找不到 senlin 策略类型“%(pt)s”,可用策略类型为 %(pts)s。" #, python-format msgid "" "Unable to find senlin profile type '%(pt)s', available profile types are " "%(pts)s." msgstr "找不到 senlin 概要文件类型“%(pt)s”,可用概要文件类型为 %(pts)s。" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "无法从配置文件 %(conf_file)s 装入 %(app_name)s。\n" "发生错误:%(e)r" #, python-format msgid "Unable to locate config file [%s]" msgstr "找不到配置文件 [%s]" #, python-format msgid "Unexpected action %(action)s" msgstr "存在意外操作 %(action)s" #, python-format msgid "Unexpected action %s" msgstr "存在意外操作 %s" #, python-format msgid "" "Unexpected properties: %(unexpected)s. Only these properties are allowed for " "%(type)s type of order: %(allowed)s." 
msgstr "" "意外属性:%(unexpected)s。对于 %(type)s 类型的指令,仅允许使用以下属性:" "%(allowed)s。" msgid "Unique identifier for the device." msgstr "设备的唯一标识。" msgid "" "Unique identifier for the ike policy associated with the ipsec site " "connection." msgstr "与 IPSec 站点连接相关联的 ike 策略的唯一标识。" msgid "" "Unique identifier for the ipsec policy associated with the ipsec site " "connection." msgstr "与 IPSec 站点连接相关联的 IPSec 策略的唯一标识。" msgid "Unique identifier for the network owning the port." msgstr "拥有端口的网络的唯一标识。" msgid "" "Unique identifier for the router to which the vpn service will be inserted." msgstr "将对其插入 VPN 服务的路由器的唯一标识。" msgid "" "Unique identifier for the vpn service associated with the ipsec site " "connection." msgstr "与 IPSec 站点连接相关联的 vpn 服务的唯一标识。" msgid "" "Unique identifier of the firewall policy to which this firewall rule belongs." msgstr "此防火墙规则所属的防火墙策略的唯一标识。" msgid "Unique identifier of the firewall policy used to create the firewall." msgstr "用于创建防火墙的防火墙策略的唯一标识。" msgid "Unknown" msgstr "未知" #, python-format msgid "Unknown Property %s" msgstr "属性 %s 未知" #, python-format msgid "Unknown attribute \"%s\"" msgstr "属性“%s”未知" #, python-format msgid "Unknown error retrieving %s" msgstr "检索 %s 时发生未知错误" #, python-format msgid "Unknown input %s" msgstr "未知输入 %s" #, python-format msgid "Unknown key(s) %s" msgstr "键 %s 未知" msgid "Unknown share_status during creation of share \"{0}\"" msgstr "创建共享“{0}”期间出现未知 share_status" #, python-format msgid "Unknown status creating Bay '%(name)s' - %(reason)s" msgstr "创建支架“%(name)s” 时出现未知状态 - %(reason)s" msgid "Unknown status during deleting share \"{0}\"" msgstr "删除共享“{0}”期间出现未知状态" #, python-format msgid "Unknown status updating Bay '%(name)s' - %(reason)s" msgstr "更新支架“%(name)s” 时出现未知状态 - %(reason)s" #, python-format msgid "Unknown status: %s" msgstr "未知状态:%s" #, python-format msgid "" "Unrecognized value \"%(value)s\" for \"%(name)s\", acceptable values are: " "true, false." msgstr "“%(name)s”的“%(value)s”不可识别,可接受的值为:true 和 false。" #, python-format msgid "Unsupported object type %(objtype)s" msgstr "不支持的对象类型 %(objtype)s" #, python-format msgid "Unsupported resource '%s' in LoadBalancerNames" msgstr "LoadBalancerNames 中不支持资源“%s”" msgid "Unversioned keystone url in format like http://0.0.0.0:5000." msgstr "未版本化 keystone URL 为以下格式:http://0.0.0.0:5000。" #, python-format msgid "Update to properties %(props)s of %(name)s (%(res)s)" msgstr "对 %(name)s (%(res)s) 的属性 %(props)s 的更新" msgid "Updated At" msgstr "已更新于" msgid "Updating a stack when it is deleting" msgstr "删除堆栈时正在对其进行更新" msgid "Updating a stack when it is suspended" msgstr "当堆栈已暂挂时,正在对其进行更新" msgid "" "Use get_resource|Ref command instead. For example: { get_resource : " " }" msgstr "改用 get_resource|Ref 命令。例如:{ get_resource : }" msgid "" "Use only with Neutron, to list the internal subnet to which the instance " "will be attached; needed only if multiple exist; list length must be exactly " "1." msgstr "" "仅与 Neutron 配合使用,以列示实例将连接至的内部子网;仅当多项存在时才需要;列" "表长必须正好为1。" #, python-format msgid "Use property %s" msgstr "请使用属性 %s" #, python-format msgid "Use property %s." msgstr "请使用属性 %s。" msgid "" "Use the `external_gateway_info` property in the router resource to set up " "the gateway." msgstr "在路由器资源中使用“external_gateway_info”属性以设置网关。" msgid "" "Use the networks attribute instead of first_address. For example: " "\"{get_attr: [, networks, , 0]}\"" msgstr "" "使用网络属性而不是 first_address。例如:\"{get_attr: [, " "networks, , 0]}\"" msgid "Use this resource at your own risk." 
msgstr "使用此资源的风险由您自己承担。" #, python-format msgid "User %s in invalid domain" msgstr "用户 %s 处于无效域中" #, python-format msgid "User %s in invalid project" msgstr "用户 %s 处于无效项目中" msgid "User ID for API authentication" msgstr "API认证的用户ID" msgid "User data to pass to instance." msgstr "要传递到实例的用户数据。" msgid "User is not authorized to perform action" msgstr "用户无权执行操作" msgid "User name to create a user on instance creation." msgstr "用于在实例创建时创建用户的用户名。" msgid "Username associated with the AccessKey." msgstr "与访问码关联的用户名。" msgid "Username for API authentication" msgstr "API认证的用户名" msgid "Username for accessing the data source URL." msgstr "要访问数据源 URL 的用户名。" msgid "Username for accessing the job binary URL." msgstr "要访问作业二进制文件 URL 的用户名。" msgid "Username of privileged user in the image." msgstr "映像中的特权用户的用户名。" msgid "VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN networks." msgstr " VLAN 网络的 VLAN 标识,或 GRE/VXLAN 网络的通道标识。" msgid "VPC ID for this gateway association." msgstr "用于此网关关联的 VPC 标识。" msgid "VPC ID for where the route table is created." msgstr "用于创建路由表的位置的 VPC 标识。" msgid "" "Valid values are encrypt or decrypt. The heat-engine processes must be " "stopped to use this." msgstr "有效值为 encrypt 或 decrypt。heat-engine 进程必须停止以使用此项。" #, python-format msgid "Value \"%(val)s\" is invalid for data type \"%(type)s\"." msgstr "值“%(val)s”对数据类型“%(type)s”无效。" #, python-format msgid "Value '%(value)s' is invalid for '%(name)s' which only accepts integer." msgstr "值“%(value)s”对于仅接受整数的“%(name)s”无效。" #, python-format msgid "" "Value '%(value)s' is invalid for '%(name)s' which only accepts non-negative " "integer." msgstr "值“%(value)s”对于仅接受非负整数的“%(name)s”无效。" #, python-format msgid "Value '%s' is not an integer" msgstr "值“%s”不是整数" #, python-format msgid "Value must be a comma-delimited list string: %s" msgstr "值必须是以逗号定界的列表字符串:%s" #, python-format msgid "Value must be of type %s" msgstr "值的类型必须为 %s" #, python-format msgid "Value must be valid JSON: %s" msgstr "值必须是有效 JSON:%s" #, python-format msgid "Value must match pattern: %s" msgstr "值必须与模式匹配:%s" msgid "" "Value which can be set or changed on stack update to trigger the resource " "for replacement with a new random string. The salt value itself is ignored " "by the random generator." msgstr "" "可在堆栈更新上设置或更改的值,此值触发资源以与新随机字符串替换。残值本身被随" "机生成器忽略。" msgid "" "Value which can be set to fail the resource operation to test failure " "scenarios." msgstr "一个值,可设置此值以使资源操作失败,从而测试失败场景。" msgid "" "Value which can be set to trigger update replace for the particular resource." msgstr "一个值,可设置此值以针对特定资源触发更新替换。" #, python-format msgid "Version %(objver)s of %(objname)s is not supported" msgstr "%(objname)s 的版本 %(objver)s 不被支持" msgid "Version for the ike policy." msgstr "IKE策略的版本。" msgid "Version of Hadoop running on instances." msgstr "实例上运行的 Hadoop 的版本。" msgid "Version of IP address." msgstr "IP 地址的版本。" msgid "Vip associated with the pool." msgstr "与池关联的 Vip。" msgid "Volume attachment failed" msgstr "卷连接失败" msgid "Volume backup failed" msgstr "卷备份失败" msgid "Volume backup restore failed" msgstr "卷备份复原失败" msgid "Volume create failed" msgstr "创建云硬盘失败" msgid "Volume detachment failed" msgstr "与卷断开连接失败" msgid "Volume in use" msgstr "正在使用卷" msgid "Volume resize failed" msgstr "调整云硬盘大小失败" msgid "Volumes per node." msgstr "每个节点的卷数。" msgid "Volumes to attach to instance." 
msgstr "要连接至实例的卷。" #, python-format msgid "WaitCondition invalid Handle %s" msgstr "WaitCondition 无效句柄 %s" #, python-format msgid "WaitCondition invalid Handle stack %s" msgstr "WaitCondition 无效句柄堆栈 %s" #, python-format msgid "WaitCondition invalid Handle tenant %s" msgstr "WaitCondition 无效句柄租户 %s" msgid "Weight of pool member in the pool (default to 1)." msgstr "池中池成员的权重(缺省为 1)。" msgid "Weight of the pool member in the pool." msgstr "池中池成员的权重。" #, python-format msgid "Went to status %(resource_status)s due to \"%(status_reason)s\"" msgstr "因为“%(status_reason)s”变为状态 %(resource_status)s" msgid "" "When both ipv6_ra_mode and ipv6_address_mode are set, they must be equal." msgstr "如果设置了 ipv6_ra_mode 和 ipv6_address_mode,那么它们必须相等。" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "以 SSL 方式运行服务器时,必须在配置文件中同时指定 cert_file 和 key_file 选项" "值" msgid "Whether enable this policy on that cluster." msgstr "是否在该集群上启用此策略。" msgid "" "Whether the address scope should be shared to other tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only, and restricts changing of shared address scope to unshared with " "update." msgstr "" "地址范围是否应与其他租户共享。请注意,缺省策略设置将此属性的使用对象限制为仅" "管理用户。并将共享地址范围的的更改限制为不与更新共享。" msgid "Whether the flavor is shared across all projects." msgstr "是否在所有项目间共享该类型。" msgid "" "Whether the image can be deleted. If the value is True, the image is " "protected and cannot be deleted." msgstr "能否删除该映像。如果值为 true,那么该映像受保护且无法删除。" msgid "Whether the metering label should be shared across all tenants." msgstr "是否应该在所有租户之间共享测量标签。" msgid "Whether the network contains an external router." msgstr "网络是否包含外部路由器。" msgid "Whether the part content is text or multipart." msgstr "部件内容是文本还是多重部件。" msgid "" "Whether the subnet pool will be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "是否在所有租户间共享此子网池。请注意,缺省策略设置将此属性的使用者限制为仅管" "理用户。" msgid "Whether the volume type is accessible to the public." msgstr "该卷类型是否可供公众访问。" msgid "Whether this QoS policy should be shared to other tenants." msgstr "是否应与其他租户共享此 QoS 策略。" msgid "" "Whether this firewall should be shared across all tenants. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "是否应该在所有租户之间共享此防火墙。注:Neutron 中的缺省策略设置将此属性限制" "为仅由管理用户使用。" msgid "" "Whether this is default IPv4/IPv6 subnet pool. There can only be one default " "subnet pool for each IP family. Note that the default policy setting " "restricts administrative users to set this to True." msgstr "" "此项是否为缺省 IPv4/IPv6 subnet 池。对于每个 IP 系列,只能有一个缺省子网池。" "请注意,缺省策略设置存在以下限制:管理用户必须将此项设置为 True。" msgid "Whether this network should be shared across all tenants." msgstr "是否应该在所有租户之间共享此网络。" msgid "" "Whether this network should be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "是否应该在所有租户之间共享此网络。请注意,缺省策略设置将此属性限制为仅供管理" "用户使用。" msgid "" "Whether this policy should be audited. When set to True, each time the " "firewall policy or the associated firewall rules are changed, this attribute " "will be set to False and will have to be explicitly set to True through an " "update operation." 
msgstr "" "是否应审计此策略。如果设置为 true,那么每次更改防火墙策略或关联防火墙规则时," "此属性将设置为 false,并且必须通过更新操作显式设置为true。" msgid "Whether this policy should be shared across all tenants." msgstr "是否应该在所有租户间共享此策略。" msgid "Whether this rule should be enabled." msgstr "是否应该启用此规则。" msgid "Whether this rule should be shared across all tenants." msgstr "是否应该在所有租户之间共享此规则。" msgid "Whether to enable the actions or not." msgstr "是否启用这些操作。" msgid "Whether to specify a remote group or a remote IP prefix." msgstr "是指定远程组还是远程 IP 前缀。" msgid "" "Which lifecycle actions of the deployment resource will result in this " "deployment being triggered." msgstr "部署资源的哪些生命周期操作将导致此部署被触发。" msgid "" "Workflow additional parameters. If Workflow is reverse typed, params " "requires 'task_name', which defines initial task." msgstr "" "工作流程附加参数。如果工作流程为保留类型,那么 params 需要“task_name”,它用于" "定义初始任务。" msgid "Workflow description." msgstr "工作流程描述。" msgid "Workflow name." msgstr "工作流程名称。" msgid "Workflow to execute." msgstr "要执行的工作流程。" msgid "Workflow type." msgstr "工作流程类型。" #, python-format msgid "Wrong Arguments try: \"%s\"" msgstr "自变量尝试“%s”不正确" msgid "You are not authenticated." msgstr "您未经认证。" msgid "You are not authorized to complete this action." msgstr "您无权完成此操作。" #, python-format msgid "You are not authorized to use %(action)s." msgstr "您无权使用 %(action)s。" #, python-format msgid "" "You have reached the maximum stacks per tenant, %d. Please delete some " "stacks." msgstr "已达到每个租户的最大堆栈数 %d。请删除一些堆栈。" #, python-format msgid "could not find user %s" msgstr "无法找到用户%s" msgid "deployment_id must be specified" msgstr "必须指定 deployment_id" msgid "" "deployments key not allowed in resource metadata with user_data_format of " "SOFTWARE_CONFIG" msgstr "不允许在 user_data_format 为 SOFTWARE_CONFIG 的资源元数据中使用部署键" #, python-format msgid "deployments of server %s" msgstr "服务器 %s 的部署" #, python-format msgid "environment has wrong section \"%s\"" msgstr "环境具有不正确的部分“%s”" msgid "error in pool" msgstr "池中出错" msgid "error in vip" msgstr "vip 中出错" msgid "external network for the gateway." msgstr "网关的外部网络。" msgid "granularity should be days, hours, minutes, or seconds" msgstr "粒度应该为天、小时、分钟或秒" msgid "heat.conf misconfigured, auth_encryption_key must be 32 characters" msgstr "未正确配置 heat.conf,auth_encryption_key 必须为 32 个字符" msgid "" "heat.conf misconfigured, cannot specify \"stack_user_domain_id\" or " "\"stack_user_domain_name\" without \"stack_domain_admin\" and " "\"stack_domain_admin_password\"" msgstr "" "heat.conf 的配置错误,无法指" "定“stack_user_domain_id”或“stack_user_domain_name”而不指" "定“stack_domain_admin”和“stack_domain_admin_password”" msgid "ipv6_ra_mode and ipv6_address_mode are not supported for ipv4." msgstr "ipv4 不支持 ipv6_ra_mode 和 ipv6_address_mode。" msgid "limit cannot be less than 4" msgstr "限制不能小于 4" #, python-format msgid "metadata setting for resource %s" msgstr "资源 %s 的元数据设置" msgid "min/max length must be integral" msgstr "最小值/最大值长度必须为整数" msgid "min/max must be numeric" msgstr "最小值/最大值必须为数字" msgid "need more memory." msgstr "需要更多内存。" msgid "no resource data found" msgstr "找不到任何资源数据" msgid "no resources were found" msgstr "找不到任何资源" msgid "nova server metadata needs to be a Map." 
msgstr "nova 服务器元数据需要是映射。" #, python-format msgid "previous_status must be SupportStatus instead of %s" msgstr "previous_status 必须为 SupportStatus 而不是 %s" #, python-format msgid "raw template with id %s not found" msgstr "找不到具有标识 %s 的原始模板" #, python-format msgid "resource with id %s not found" msgstr "找不到具有标识 %s 的资源" #, python-format msgid "roles %s" msgstr "角色 %s" msgid "segmentation_id cannot be specified except 0 for using flat" msgstr "对于使用平面网络的情况,不能指定除了 0 之外的 segmentation_id" msgid "segmentation_id must be specified for using vlan" msgstr "对于使用 vlan 网络的情况,必须指定 segmentation_id" msgid "segmentation_id not allowed for flat network type." msgstr "对于平面网络类型,不允许 segmentation_id。" msgid "server_id must be specified" msgstr "必须指定server_id" #, python-format msgid "" "task %(task)s contains property 'requires' in case of direct workflow. Only " "reverse workflows can contain property 'requires'." msgstr "" "任务 %(task)s 包含属性“requires”,以防出现直接工作流程。仅保留工作流程可包含" "属性“requires”。" heat-10.0.2/heat/locale/zh_TW/0000775000175000017500000000000013343562672015763 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/zh_TW/LC_MESSAGES/0000775000175000017500000000000013343562672017550 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/zh_TW/LC_MESSAGES/heat.po0000666000175000017500000071271413343562351021041 0ustar zuulzuul00000000000000# Translations template for heat. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the heat project. # # Translators: # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: heat VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-23 07:09+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 05:32+0000\n" "Last-Translator: Copied by Zanata \n" "Language: zh_TW\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Chinese (Taiwan)\n" #, python-format msgid "\"%%s\" is not a valid keyword inside a %s definition" msgstr "\"%%s\" 不是 %s 定義內的有效關鍵字" #, python-format msgid "\"%(fn_name)s\": %(err)s" msgstr "\"%(fn_name)s\":%(err)s" #, python-format msgid "" "\"%(name)s\" params must be strings, numbers, list or map. Failed to json " "serialize %(value)s" msgstr "" "\"%(name)s\" 參數必須是字串、數字、清單或對映。無法執行 JSON 序列化 %(value)s" #, python-format msgid "" "\"%(section)s\" must contain a map of %(obj_name)s maps. Found a [%(_type)s] " "instead" msgstr "" "\"%(section)s\" 必須包含 %(obj_name)s 對映的一個對映。但找到[%(_type)s]" #, python-format msgid "\"%(url)s\" is not a valid SwiftSignalHandle. The %(part)s is invalid" msgstr "\"%(url)s\" 不是有效的 SwiftSignalHandle。%(part)s 無效" #, python-format msgid "\"%(value)s\" does not validate %(name)s" msgstr "\"%(value)s\" 沒有驗證 %(name)s" #, python-format msgid "\"%(value)s\" does not validate %(name)s (constraint not found)" msgstr "\"%(value)s\" 沒有驗證 %(name)s(找不到限制項)" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be one of: %(available)s" msgstr "" "\"%(version)s\"。\"%(version_type)s\" 應該是下列其中一種:%(available)s" #, python-format msgid "\"%(version)s\". 
\"%(version_type)s\" should be: %(available)s" msgstr "\"%(version)s\"。\"%(version_type)s\" 應該是:%(available)s" #, python-format msgid "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" msgstr "\"%s\":[ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" #, python-format msgid "\"%s\" argument must be a string" msgstr "\"%s\" 引數必須是字串" #, python-format msgid "\"%s\" can't traverse path" msgstr "\"%s\" 無法遍訪路徑" #, python-format msgid "\"%s\" deletion policy not supported" msgstr "\"%s\" 刪除原則不受支援" #, python-format msgid "\"%s\" delimiter must be a string" msgstr "\"%s\" 定界字元必須是字串" #, python-format msgid "\"%s\" is not a list" msgstr "\"%s\" 不是清單" #, python-format msgid "\"%s\" is not a map" msgstr "\"%s\" 不是對映" #, python-format msgid "\"%s\" is not a valid ARN" msgstr "\"%s\" 不是有效的 ARN" #, python-format msgid "\"%s\" is not a valid ARN URL" msgstr "\"%s\" 不是有效的 ARN URL" #, python-format msgid "\"%s\" is not a valid Heat ARN" msgstr "\"%s\" 不是有效的 Heat ARN" #, python-format msgid "\"%s\" is not a valid URL" msgstr "\"%s\" 不是有效的 URL" #, python-format msgid "\"%s\" is not a valid boolean" msgstr "\"%s\" 不是有效的布林值" #, python-format msgid "\"%s\" is not a valid template section" msgstr "\"%s\" 不是有效的範本區段" #, python-format msgid "\"%s\" must operate on a list" msgstr "\"%s\" 必須是對清單進行操作" #, python-format msgid "\"%s\" param placeholders must be strings" msgstr "\"%s\" 參數位置保留元必須是字串" #, python-format msgid "\"%s\" parameters must be a mapping" msgstr "\"%s\" 參數必須是對映" #, python-format msgid "\"%s\" params must be a map" msgstr "\"%s\" 參數必須是對映" #, python-format msgid "\"%s\" params must be strings, numbers, list or map." msgstr "\"%s\" 參數必須是字串、數字、清單或對映。" #, python-format msgid "\"%s\" template must be a string" msgstr "\"%s\" 範本必須是字串" #, python-format msgid "\"repeat\" syntax should be %s" msgstr "\"repeat\" 語法應該為 %s" #, python-format msgid "%(a)s paused until Hook %(h)s is cleared" msgstr "%(a)s 已暫停,直到清除連結鉤 %(h)s 為止" #, python-format msgid "%(action)s is not supported for resource." msgstr "資源不支援 %(action)s。" #, python-format msgid "%(action)s is restricted for resource." msgstr "已對資源限制 %(action)s。" #, python-format msgid "%(desired_capacity)s must be between %(min_size)s and %(max_size)s" msgstr "%(desired_capacity)s 必須介於 %(min_size)s 和 %(max_size)s 之間" #, python-format msgid "%(error)s%(path)s%(message)s" msgstr "%(error)s%(path)s%(message)s" #, python-format msgid "%(feature)s is not supported." msgstr "不支援 %(feature)s。" #, python-format msgid "" "%(img)s must be provided: Referenced cluster template %(tmpl)s has no " "default_image_id defined." msgstr "必須提供 %(img)s:所參照的叢集範本 %(tmpl)s 未定義 default_image_id。" #, python-format msgid "%(lc)s (%(ref)s) reference can not be found." msgstr "找不到 %(lc)s (%(ref)s) 參照。" #, python-format msgid "" "%(lc)s (%(ref)s) requires a reference to the configuration not just the name " "of the resource." msgstr "%(lc)s (%(ref)s) 需要參照資源配置而不只是需要資源名稱。" #, python-format msgid "%(len)d of %(count)d received" msgstr "收到 %(len)d 個(共 %(count)d 個)" #, python-format msgid "%(len)d of %(count)d received - %(reasons)s" msgstr "收到 %(len)d 個(共 %(count)d 個)- %(reasons)s" #, python-format msgid "%(message)s" msgstr "%(message)s" #, python-format msgid "%(min_size)s can not be greater than %(max_size)s" msgstr "%(min_size)s 不能大於 %(max_size)s" #, python-format msgid "%(name)s constraint invalid for %(utype)s" msgstr "%(name)s 限制項不適用於 %(utype)s" #, python-format msgid "%(prop1)s cannot be specified without %(prop2)s." 
msgstr "如果不指定 %(prop2)s,則無法指定 %(prop1)s。" #, python-format msgid "" "%(prop1)s property should only be specified for %(prop2)s with value " "%(value)s." msgstr "應該只針對值為 %(value)s 的 %(prop2)s 指定 %(prop1)s 內容。" #, python-format msgid "%(resource)s: Invalid attribute %(key)s" msgstr "%(resource)s:無效的屬性 %(key)s" #, python-format msgid "" "%(result)s - Unknown status %(resource_status)s due to \"%(status_reason)s\"" msgstr "%(result)s - 不明狀態 %(resource_status)s,因為 \"%(status_reason)s\"" #, python-format msgid "%(schema)s supplied for %(type)s %(data)s" msgstr "%(schema)s 已提供給 %(type)s %(data)s" #, python-format msgid "%(server)s-port-%(number)s" msgstr "%(server)s-port-%(number)s" #, python-format msgid "%(type)s not in valid format: %(error)s" msgstr "%(type)s 的格式無效:%(error)s" #, python-format msgid "%s Key Name must be a string" msgstr "%s 索引鍵名稱必須是字串" #, python-format msgid "%s Timed out" msgstr "%s 已逾時" #, python-format msgid "%s Value Name must be a string" msgstr "%s 值名稱必須是字串" #, python-format msgid "%s is not a valid job location." msgstr "%s 不是有效的工作位置。" #, python-format msgid "%s is not active" msgstr "%s 未處於作用中狀態" #, python-format msgid "%s is not an integer." msgstr "%s 不是整數。" #, python-format msgid "%s must be provided" msgstr "必須提供 %s" #, python-format msgid "'%(attr)s': expected '%(expected)s', got '%(current)s'" msgstr "'%(attr)s':預期 '%(expected)s',取得 '%(current)s'" msgid "" "'task_name' is not assigned in 'params' in case of reverse type workflow." msgstr "在逆向類型工作流程中,未在 'params' 中指派 'task_name'。" msgid "'true' if DHCP is enabled for this subnet; 'false' otherwise." msgstr "如果對此子網路啟用 DHCP,則為 'true';否則為 'false'。" msgid "A UUID for the set of servers being requested." msgstr "所要求伺服器集的 UUID。" msgid "A bad or out-of-range value was supplied" msgstr "提供的值不正確或超出範圍" msgid "A boolean value of default flag." msgstr "預設旗標的布林值。" msgid "A boolean value specifying the administrative status of the network." msgstr "用來指定網路管理狀態的布林值。" #, python-format msgid "" "A character class and its corresponding %(min)s constraint to generate the " "random string from." msgstr "字元類別及用於從其產生隨機字串的對應 %(min)s限制。" #, python-format msgid "" "A character sequence and its corresponding %(min)s constraint to generate " "the random string from." msgstr "字元序列及用於從其產生隨機字串的對應 %(min)s限制。" msgid "A comma-delimited list of server ip addresses. (Heat extension)." msgstr "伺服器 IP 位址的逗點定界清單。(Heat 延伸)。" msgid "A description of the volume." msgstr "磁區的說明。" msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name. This value is typically vda." msgstr "" "將在系統中 /dev/device_name 處連接磁區,所在裝置的名稱。此值通常是 vda。" msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name.e.g. vdb" msgstr "將在系統中 /dev/device_name 處連接磁區,所在裝置的名稱。例如 vdb" msgid "" "A dict of all network addresses with corresponding port_id. Each network " "will have two keys in dict, they are network name and network id. The port " "ID may be obtained through the following expression: \"{get_attr: [, " "addresses, , 0, port]}\"." msgstr "" "所有網路位址(包含相應的 port_id)的字典。每一個網路在字典中都將具有 2 個索引" "鍵,即網路名稱和網路 ID。埠 ID 可透過下列表示式取得:\"{get_attr: [, " "addresses, , 0, port]}\"。" msgid "" "A dict of assigned network addresses of the form: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Each network will have two keys in dict, they " "are network name and network id." 
msgstr "" "已指派的網路位址的字典,格式為:{\"public\": [ip1, ip2...], \"private\": " "[ip3, ip4], \"public_uuid\": [ip1, ip2...], \"private_uuid\": [ip3, ip4]}。每" "一個網路在字典中都將具有 2 個索引鍵,即網路名稱和網路 ID。" msgid "A dict of key-value pairs output from the stack." msgstr "來自堆疊的鍵值組輸出字典。" msgid "A dictionary which contains name and input of the workflow." msgstr "包含工作流程的名稱及輸入的字典。" msgid "A length constraint must have a min value and/or a max value specified." msgstr "長度限制必須指定最小值及/或最大值。" msgid "A list of URLs (webhooks) to invoke when state transitions to alarm." msgstr "狀態轉移至警示時要呼叫的 URL(Web 連結鉤)清單。" msgid "" "A list of URLs (webhooks) to invoke when state transitions to insufficient-" "data." msgstr "狀態轉移至資料不足時要呼叫的 URL(Web 連結鉤)清單。" msgid "A list of URLs (webhooks) to invoke when state transitions to ok." msgstr "狀態轉移至正常時要呼叫的 URL(Web 連結鉤)清單。" msgid "A list of access rules that define access from IP to Share." msgstr "存取規則的清單,這些規則用來定義從 IP 到共用項目的存取。" msgid "A list of all rules for the QoS policy." msgstr "服務品質原則的所有規則清單。" msgid "A list of all subnet attributes for the port." msgstr "埠的所有子網路屬性清單。" msgid "" "A list of character class and their constraints to generate the random " "string from." msgstr "字元類別及用於從其產生隨機字串的限制清單。" msgid "" "A list of character sequences and their constraints to generate the random " "string from." msgstr "字元序列及用於從其產生隨機字串的限制清單。" msgid "A list of cluster instance IPs." msgstr "叢集實例 IP 清單。" msgid "A list of clusters to which this policy is attached." msgstr "要將此原則連接至的叢集清單。" msgid "A list of host route dictionaries for the subnet." msgstr "子網路的主機路由字典清單。" msgid "A list of instances ids." msgstr "實例 ID 清單。" msgid "A list of metric ids." msgstr "度量 ID 的清單。" msgid "" "A list of query factors, each comparing a Sample attribute with a value. " "Implicitly combined with matching_metadata, if any." msgstr "" "查詢因數清單,每個因數都會將 Sample 屬性與值進行比較。隱含地與 " "matching_metadata 結合(如果有的話)。" msgid "A list of resource IDs for the resources in the chain." msgstr "鏈中資源的資源 ID 清單。" msgid "A list of resource IDs for the resources in the group." msgstr "群組中資源的資源 ID 清單。" msgid "A list of security groups for the port." msgstr "埠的安全群組清單。" msgid "A list of security services IDs or names." msgstr "安全服務 ID 或名稱清單。" msgid "A list of string policies to apply. Defaults to anti-affinity." msgstr "要套用的字串原則清單。預設為反親緣性。" msgid "A login profile for the user." msgstr "使用者的登入設定檔。" msgid "A mandatory input parameter is missing" msgstr "遺漏了必要的輸入參數" msgid "A map containing all headers for the container." msgstr "儲存器的所有標頭所在的對映。" msgid "" "A map of Nova names and captured stderrs from the configuration execution to " "each server." msgstr "配置執行中 Nova 名稱與擷取之標準錯誤至每一個伺服器的對映。" msgid "" "A map of Nova names and captured stdouts from the configuration execution to " "each server." msgstr "Nova 名稱與從配置執行擷取至每一個伺服器之標準輸出的對映。" msgid "" "A map of Nova names and returned status code from the configuration " "execution." msgstr "Nova 名稱與從配置執行傳回之狀態碼的對映。" msgid "" "A map of files to create/overwrite on the server upon boot. Keys are file " "names and values are the file contents." msgstr "啟動時要在伺服器上建立/改寫的檔案對映。索引鍵是檔名,值是檔案內容。" msgid "" "A map of resource names to the specified attribute of each individual " "resource." msgstr "資源名稱與每一個個別資源之指定屬性的對映。" msgid "" "A map of resource names to the specified attribute of each individual " "resource. Requires heat_template_version: 2014-10-16." msgstr "" "資源名稱與每一個個別資源之指定屬性的對映。需要 heat_template_version:" "2014-10-16。" msgid "" "A map of user-defined meta data to associate with the account. 
Each key in " "the map will set the header X-Account-Meta-{key} with the corresponding " "value." msgstr "" "要與帳戶產生關聯的使用者定義 meta 資料對映。對映中的每個索引鍵將以對應值設定" "標頭 X-Account-Meta-{key}。" msgid "" "A map of user-defined meta data to associate with the container. Each key in " "the map will set the header X-Container-Meta-{key} with the corresponding " "value." msgstr "" "要與儲存器產生關聯的使用者定義 meta 資料對映。對映中的每個索引鍵將以對應值設" "定標頭 X-Container-Meta-{key}。" msgid "A name used to distinguish the volume." msgstr "用來識別磁區的名稱。" msgid "" "A per-tenant quota on the prefix space that can be allocated from the subnet " "pool for tenant subnets." msgstr "字首空間上每個租戶的配額,可以從子網路儲存區為租戶子網路配置該配額。" msgid "" "A predefined access control list (ACL) that grants permissions on the bucket." msgstr "用來授與儲存區許可權的預先定義存取控制清單 (ACL)。" msgid "A range constraint must have a min value and/or a max value specified." msgstr "範圍限制必須指定最小值及/或最大值。" msgid "" "A reference to the wait condition handle used to signal this wait condition." msgstr "用來向此等待條件發出信號的等待條件控點參照。" msgid "" "A signed url to create executions for workflows specified in Workflow " "resource." msgstr "已簽署的 URL,用來為工作流程資源中指定的工作流程建立執行。" msgid "A signed url to handle the alarm." msgstr "要用來處理警示的已簽署 URL。" msgid "A signed url to handle the alarm. (Heat extension)." msgstr "要用來處理警示的已簽署 URL。(Heat 延伸)。" msgid "A specified set of DNS name servers to be used." msgstr "要使用的指定 DNS 名稱伺服器集。" msgid "" "A string specifying a symbolic name for the network, which is not required " "to be unique." msgstr "用來指定網路符號名稱的字串,不要求是唯一的。" msgid "" "A string specifying a symbolic name for the security group, which is not " "required to be unique." msgstr "用來指定安全群組符號名稱的字串,不要求是唯一的。" msgid "A string specifying physical network mapping for the network." msgstr "用來給網路指定實體網路對映的字串。" msgid "A string specifying the provider network type for the network." msgstr "用來給網路指定提供者網路類型的字串。" msgid "A string specifying the segmentation id for the network." msgstr "用來給網路指定分段 ID 的字串。" msgid "A symbolic name for this port." msgstr "此埠的符號名稱。" msgid "A url to handle the alarm using native API." msgstr "用來利用原生 API 來處理警示的 URL。" msgid "" "A variable that this resource will use to replace with the current index of " "a given resource in the group. Can be used, for example, to customize the " "name property of grouped servers in order to differentiate them when listed " "with nova client." msgstr "" "這是一個變數,此資源會將它取代為群組中給定資源的現行索引。例如,可用於自訂已" "分組伺服器的名稱內容,以在與 Nova用戶端一起列出時進行區分。" msgid "AWS compatible instance name." msgstr "AWS 相容的實例名稱。" msgid "AWS query string is malformed, does not adhere to AWS spec" msgstr "AWS 查詢字串的格式不正確,不符合 AWS 規格" msgid "Access policies to apply to the user." msgstr "要套用至使用者的存取原則。" #, python-format msgid "AccessPolicy resource %s not in stack" msgstr "AccessPolicy 資源 %s 不在堆疊中" #, python-format msgid "Action %s not allowed for user" msgstr "不容許使用者執行動作 %s" msgid "Action to be performed on the traffic matching the rule." msgstr "要對與規則相符之資料流量執行的動作。" msgid "Actual input parameter values of the task." msgstr "作業的實際輸入參數值。" msgid "Add needed policies directly to the task, Policy keyword is not needed" msgstr "將所需原則直接新增至作業,不需要「原則」關鍵字" msgid "Additional MAC/IP address pairs allowed to pass through a port." msgstr "容許通過埠的其他 MAC/IP 位址配對。" msgid "Additional MAC/IP address pairs allowed to pass through the port." msgstr "容許通過此埠的其他 MAC/IP 位址配對。" msgid "Additional routes for this subnet." msgstr "此子網路的其他路由。" msgid "Address family of the address scope, which is 4 or 6." msgstr "位址範圍的位址系列,即 4 或 6。" msgid "" "Address of the notification. 
It could be a valid email address, url or " "service key based on notification type." msgstr "" "通知的位址。它可以是有效的電子郵件位址、URL 或基於通知類型的服務索引鍵。" msgid "" "Address to bind the server. Useful when selecting a particular network " "interface." msgstr "用於連結伺服器的位址。在選取特定網路時很有用。" msgid "Administrative state for the ipsec site connection." msgstr "IPSec 網站連線的管理狀態。" msgid "Administrative state for the vpn service." msgstr "VPN 服務的管理狀態。" msgid "" "Administrative state of the firewall. If false (down), firewall does not " "forward packets and will drop all traffic to/from VMs behind the firewall." msgstr "" "防火牆的管理狀態。如果為 false(關閉),則防火牆不轉遞封包,並將捨棄在防火牆" "保護之 VM 之間傳送的所有資料流量。" msgid "Administrative state of the router." msgstr "路由器的管理狀態。" #, python-format msgid "Alarm %(alarm)s could not find scaling group named \"%(group)s\"" msgstr "警示 %(alarm)s 找不到名稱為 \"%(group)s\" 的調整大小群組" #, python-format msgid "Algorithm must be one of %s" msgstr "演算法必須是 %s 的其中之一" msgid "All heat engines are down." msgstr "所有的 Heat 引擎都已關閉。" msgid "Allocated floating IP address." msgstr "已配置浮動 IP 位址。" msgid "Allocation ID for VPC EIP address." msgstr "VPC EIP 位址的配置 ID。" msgid "Allow client's debug log output." msgstr "容許用戶端的除錯日誌輸出。" msgid "Allow or deny action for this firewall rule." msgstr "此防火牆規則的容許或拒絕動作。" msgid "Allow orchestration of multiple clouds." msgstr "容許編排多個雲端。" msgid "" "Allow reauthentication on token expiry, such that long-running tasks may " "complete. Note this defeats the expiry of any provided user tokens." msgstr "" "容許對記號期限進行重新鑑別,以便長時間執行的作業可以完成。請注意,這會讓任何" "所提供之使用者記號的期限失效。" msgid "" "Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At " "least one endpoint needs to be specified." msgstr "" "啟用 multi_cloud 後,接受 auth_uri 的 Keystone 端點。必須至少指定一個端點。" msgid "" "Allowed tenancy of instances launched in the VPC. default - any tenancy; " "dedicated - instance will be dedicated, regardless of the tenancy option " "specified at instance launch." msgstr "" "容許租用 VPC 中啟動的實例。預設值 - 任意租用;專用 - 實例將會是專用實例,而不" "論在啟動實例時指定的租用選項為何。" #, python-format msgid "Allowed values: %s" msgstr "容許的值:%s" msgid "AllowedPattern must be a string" msgstr "AllowedPattern 必須是字串" msgid "AllowedValues must be a list" msgstr "AllowedValues 必須是清單" msgid "Allowing not to store action results after task completion." msgstr "容許在作業完成之後不儲存動作結果。" msgid "" "Allows to synchronize multiple parallel workflow branches and aggregate " "their data. Valid inputs: all - the task will run only if all upstream tasks " "are completed. Any numeric value - then the task will run once at least this " "number of upstream tasks are completed and corresponding conditions have " "triggered." msgstr "" "容許同步多個平行工作流程分支並聚集它們的資料。有效輸入:全部 - 僅當所有上游作" "業均已完成時,作業才將執行。任何數值 - 在至少此數目的上游作業均已完成且已觸發" "相應的條件之後,作業才將執行。" #, python-format msgid "Ambiguous versions (%s)" msgstr "版本不詳:(%s)" msgid "" "Amount of disk space (in GB) required to boot image. Default value is 0 if " "not specified and means no limit on the disk size." msgstr "" "啟動映像檔所需的磁碟空間量(以 GB 為單位)。如果未指定,則預設值為 0,這表示" "磁碟大小沒有限制。" msgid "" "Amount of ram (in MB) required to boot image. Default value is 0 if not " "specified and means no limit on the ram size." msgstr "" "啟動映像檔所需的 RAM 量(以 MB 為單位)。如果未指定,則預設值為 0,這表示 " "RAM 大小沒有限制。" msgid "An address scope ID to assign to the subnet pool." msgstr "要指派給子網路儲存區的位址範圍 ID。" msgid "An application health check for the instances." msgstr "實例的應用程式性能檢查。" msgid "An ordered list of firewall rules to apply to the firewall." 
msgstr "要套用至防火牆的防火牆規則有序清單。" msgid "" "An ordered list of nics to be added to this server, with information about " "connected networks, fixed ips, port etc." msgstr "" "要新增至此伺服器的有序 NIC 清單,含有所連接網路、固定 IP 及埠等的相關資訊。" msgid "An unknown exception occurred." msgstr "發生一個未知例外" msgid "" "Any data structure arbitrarily containing YAQL expressions that defines " "workflow output. May be nested." msgstr "" "任意包含 YAQL 表示式的任何資料結構,該資料結構定義工作流程輸出。可能是巢狀" "的。" msgid "Anything other than one VPCZoneIdentifier" msgstr "一個 VPCZoneIdentifier 之外的任何項目" msgid "Api endpoint reference of the instance." msgstr "實例的 API 端點參照。" msgid "" "Arbitrary key-value pairs specified by the client to help boot a server." msgstr "任意鍵值組,由用戶端指定來協助啟動伺服器。" msgid "" "Arbitrary key-value pairs specified by the client to help the Cinder " "scheduler creating a volume." msgstr "用戶端指定用來協助 Cinder 排程器建立磁區的任意鍵值組。" msgid "" "Arbitrary key/value metadata to store contextual information about this " "queue." msgstr "用於儲存此佇列相關環境定義資訊的任意鍵/值 meta 資料。" msgid "" "Arbitrary key/value metadata to store for this server. Both keys and values " "must be 255 characters or less. Non-string values will be serialized to JSON " "(and the serialized string must be 255 characters or less)." msgstr "" "要儲存用於此伺服器的任意鍵/值 meta 資料。索引鍵及值不得超過 255 個字元。非字" "串值將序列化為 JSON(並且已序列化的字串不得超過 255 個字元)。" msgid "Arbitrary key/value metadata to store information for aggregate." msgstr "用於儲存聚集資訊的任意鍵/值 meta 資料。" #, python-format msgid "Argument to \"%s\" must be a list" msgstr "\"%s\" 的引數必須是清單" #, python-format msgid "Argument to \"%s\" must be a string" msgstr "\"%s\" 的引數必須是字串" #, python-format msgid "Argument to \"%s\" must be string or list" msgstr "\"%s\" 的引數必須是字串或清單" #, python-format msgid "Argument to function \"%s\" must be a list of strings" msgstr "函數 \"%s\" 的引數必須是字串清單" #, python-format msgid "" "Arguments to \"%s\" can be of the next forms: [resource_name] or " "[resource_name, attribute, (path), ...]" msgstr "" "\"%s\" 的引數可以是下列形式:[resource_name] 或 [resource_name, attribute, " "(path), ...]" #, python-format msgid "Arguments to \"%s\" must be a map" msgstr "\"%s\" 的引數必須是對映" #, python-format msgid "Arguments to \"%s\" must be of the form [index, collection]" msgstr "\"%s\" 的引數格式必須是 [index, collection]" #, python-format msgid "" "Arguments to \"%s\" must be of the form [resource_name, attribute, " "(path), ...]" msgstr "\"%s\" 的引數格式必須是 [resource_name, attribute, (path), ...]" #, python-format msgid "Arguments to \"%s\" must be of the form [resource_name, attribute]" msgstr "\"%s\" 的引數格式必須是 [resource_name, attribute]" #, python-format msgid "Arguments to %s not fully resolved" msgstr "%s 的引數未完全解析" #, python-format msgid "Attempt to delete a stack with id: %(id)s %(msg)s" msgstr "嘗試刪除 ID 為 %(id)s 的堆疊:%(msg)s" #, python-format msgid "Attempt to delete user creds with id %(id)s that does not exist" msgstr "嘗試刪除 ID 為 %(id)s 的使用者認證,但它不存在" #, python-format msgid "Attempt to delete watch_rule: %(id)s %(msg)s" msgstr "嘗試刪除 ID 為 %(id)s 的 watch_rule:%(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(msg)s" msgstr "嘗試更新 ID 為 %(id)s 的堆疊:%(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(traversal)s %(msg)s" msgstr "嘗試更新 ID 為 %(id)s 的堆疊:%(traversal)s %(msg)s" #, python-format msgid "Attempt to update a watch with id: %(id)s %(msg)s" msgstr "嘗試更新 ID 為 %(id)s 的監看:%(msg)s" msgid "Attempt to use stored_context with no user_creds" msgstr "嘗試在不使用 user_creds 的情況下使用 stored_context" #, python-format msgid "Attribute %(attr)s for facade %(type)s 
missing in provider" msgstr "提供者中遺漏了 Facade %(type)s 的屬性 %(attr)s" msgid "Audit status of this firewall policy." msgstr "此防火牆原則的審核狀態。" msgid "Authentication Endpoint URI." msgstr "鑑別端點 URI。" msgid "Authentication hash algorithm for the ike policy." msgstr "IKE 原則的鑑別雜湊演算法。" msgid "Authentication hash algorithm for the ipsec policy." msgstr "IPSec 原則的鑑別雜湊演算法。" msgid "Authorization failed." msgstr "授權失敗。" msgid "AutoScaling group ID to apply policy to." msgstr "要套用原則的 AutoScaling 群組 ID。" msgid "AutoScaling group name to apply policy to." msgstr "要套用原則的 AutoScaling 群組名稱。" msgid "Availability Zone of the subnet." msgstr "子網路的「可用性區域」。" msgid "Availability zone in which you want the subnet." msgstr "子網路預期位於的可用性區域。" msgid "Availability zone to create servers in." msgstr "要在其中建立伺服器的可用性區域。" msgid "Availability zone to create volumes in." msgstr "要在其中建立磁區的可用性區域。" msgid "Availability zone to launch the instance in." msgstr "要在其中啟動實例的可用性區域。" msgid "Backend authentication failed" msgstr "後端鑑別失敗" msgid "Binary" msgstr "二進位" msgid "Block device mappings for this server." msgstr "此伺服器的區塊裝置對映。" msgid "Block device mappings to attach to instance." msgstr "要連接至實例的區塊裝置對映。" msgid "Block device mappings v2 for this server." msgstr "此伺服器的區塊裝置對映第 2 版。" msgid "" "Boolean extra spec that used for filtering of backends by their capability " "to create share snapshots." msgstr "布林額外規格,用於依後端的能力對後端進行過濾,以建立共用 Snapshot。" msgid "Boolean indicating if the volume can be booted or not." msgstr "用來指示磁區是否可啟動的布林值。" msgid "Boolean indicating if the volume is encrypted or not." msgstr "布林,指出磁區是否加密。" msgid "" "Boolean indicating whether allow the volume to be attached more than once." msgstr "布林值,用來指示是否容許多次連接該磁區。" msgid "" "Bus of the device: hypervisor driver chooses a suitable default if omitted." msgstr "裝置匯流排:如果省略,則 Hypervisor 驅動程式會選擇適當的預設值。" msgid "CIDR block notation for this subnet." msgstr "此子網路的 CIDR 區塊表示法。" msgid "CIDR block to apply to subnet." msgstr "要套用至子網路的 CIDR 區塊。" msgid "CIDR block to apply to the VPC." msgstr "要套用至 VPC 的 CIDR 區塊。" msgid "CIDR of subnet." msgstr "子網路的 CIDR。" msgid "CIDR to be associated with this metering rule." msgstr "要與此計量規則產生關聯的 CIDR。" #, python-format msgid "Can not specify property \"%s\" if the volume type is public." msgstr "如果磁區類型是公用的,則無法指定內容 \"%s\"。" #, python-format msgid "Can not use %s property on Nova-network." msgstr "無法在 Nova 網路上使用 %s 內容。" #, python-format msgid "Can't find role %s" msgstr "找不到角色 %s" msgid "Can't get user token without password" msgstr "如果沒有密碼,則無法取得使用者記號" msgid "Can't get user token, user not yet created" msgstr "無法取得使用者記號,尚未建立使用者" msgid "Can't traverse attribute path" msgstr "無法遍訪屬性路徑" #, python-format msgid "Cancelling update when stack is %s" msgstr "當堆疊為 %s 時,取消更新" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "無法對孤立的 %(objtype)s 物件呼叫 %(method)s" #, python-format msgid "Cannot check %s, stack not created" msgstr "無法檢查 %s,未建立堆疊" #, python-format msgid "Cannot define the following properties at the same time: %(props)s." msgstr "無法同時定義下列內容:%(props)s。" #, python-format msgid "" "Cannot establish connection to Heat endpoint at region \"%(region)s\" due to " "\"%(exc)s\"" msgstr "無法建立與區域 \"%(region)s\" 中 Heat 端點的連線,因為\"%(exc)s\"" msgid "" "Cannot get stack domain user token, no stack domain id configured, please " "fix your heat.conf" msgstr "無法取得堆疊網域使用者記號,未配置任何堆疊網域 ID,請修正 heat.conf" msgid "Cannot migrate to lower schema version." 
msgstr "無法移轉至較低的綱目版本。" #, python-format msgid "Cannot modify readonly field %(field)s" msgstr "無法修改唯讀欄位 %(field)s" #, python-format msgid "Cannot resume %s, resource not found" msgstr "無法回復 %s,找不到資源" #, python-format msgid "Cannot resume %s, resource_id not set" msgstr "無法回復 %s,未設定 resource_id" #, python-format msgid "Cannot resume %s, stack not created" msgstr "無法回復 %s,未建立堆疊" #, python-format msgid "Cannot suspend %s, resource not found" msgstr "無法暫停 %s,找不到資源" #, python-format msgid "Cannot suspend %s, resource_id not set" msgstr "無法暫停 %s,未設定 resource_id" #, python-format msgid "Cannot suspend %s, stack not created" msgstr "無法暫停 %s,未建立堆疊" msgid "Captured stderr from the configuration execution." msgstr "從配置執行中擷取的標準錯誤。" msgid "Captured stdout from the configuration execution." msgstr "從配置執行中擷取的標準輸出。" #, python-format msgid "Circular Dependency Found: %(cycle)s" msgstr "發現循環相依關係:%(cycle)s" msgid "Client entity to poll." msgstr "要輪詢的用戶端實體。" msgid "Client name and resource getter name must be specified." msgstr "必須指定用戶端名稱及資源取得程式名稱。" msgid "Client to poll." msgstr "要輪詢的用戶端。" msgid "Cluster configs dictionary." msgstr "叢集配置字典。" msgid "Cluster information." msgstr "叢集資訊。" msgid "Cluster metadata." msgstr "叢集 meta 資料。" msgid "Cluster name." msgstr "叢集名稱。" msgid "Cluster status." msgstr "叢集狀態。" msgid "Comparison operator." msgstr "比較運算子。" #, python-format msgid "Concurrent transaction for %(action)s" msgstr "%(action)s 的並行交易" msgid "Configuration of session persistence." msgstr "階段作業持續性的配置。" msgid "" "Configuration script or manifest which specifies what actual configuration " "is performed." msgstr "用來指定所執行實際配置的配置Script 或資訊清單。" msgid "Configure most important configs automatically." msgstr "自動配置最重要的配置。" #, python-format msgid "Confirm resize for server %s failed" msgstr "確認調整伺服器 %s 的大小失敗" msgid "Connection info for this network gateway." msgstr "此網路閘道的連線資訊。" #, python-format msgid "Container '%(name)s' creation failed: %(code)s - %(reason)s" msgstr "建立儲存器 '%(name)s' 失敗:%(code)s - %(reason)s" msgid "Container format of image." msgstr "映像檔的儲存器格式。" msgid "" "Content of part to attach, either inline or by referencing the ID of another " "software config resource." msgstr "要連接的組件內容(連接方式為行內或者參照另一個軟體配置資源的 ID)。" msgid "Context for this stack." msgstr "此堆疊的環境定義。" msgid "Control how the disk is partitioned when the server is created." msgstr "控制在建立伺服器時如何分割磁碟。" msgid "Controls DPD protocol mode." msgstr "控制 DPD 通訊協定模式。" msgid "" "Convenience attribute to fetch the first assigned network address, or an " "empty string if nothing has been assigned at this time. Result may not be " "predictable if the server has addresses from more than one network." msgstr "" "此便利屬性用來提取第一個所指派的網址,或者是空字串(如果此時尚未指派任何內" "容)。如果伺服器具有來自多個網路的位址,則結果可能無法預期。" msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure when signal_transport is set to " "TOKEN_SIGNAL. You can signal success by adding --data-binary '{\"status\": " "\"SUCCESS\"}' , or signal failure by adding --data-binary '{\"status\": " "\"FAILURE\"}'. This attribute is set to None for all other signal transports." msgstr "" "此便利屬性提供了 Curl CLI 指令字首,該字首可用於在 signal_transport 設為 " "TOKEN_SIGNAL 時傳送控點完成或失敗的信號。您可以透過新增 --data-binary " "'{\"status\": \"SUCCESS\"}' 來傳送成功信號,或者透過新增 --data-binary " "'{\"status\": \"FAILURE\"}' 來傳送失敗信號。針對所有其他信號傳輸,此屬性設為" "「無」。" msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure. 
You can signal success by " "adding --data-binary '{\"status\": \"SUCCESS\"}' , or signal failure by " "adding --data-binary '{\"status\": \"FAILURE\"}'." msgstr "" "此便利屬性提供了 Curl CLI 指令字首,該字首可用於傳送控點完成或失敗的信號。您" "可以透過新增 --data-binary '{\"status\": \"SUCCESS\"}' 來傳送成功信號,或者透" "過新增 --data-binary '{\"status\": \"FAILURE\"}' 來傳送失敗信號。" msgid "Cooldown period, in seconds." msgstr "冷卻期間(以秒為單位)。" #, python-format msgid "Could not confirm resize of server %s" msgstr "無法確認調整伺服器 %s 的大小" #, python-format msgid "Could not detach attachment %(att)s from server %(srv)s." msgstr "無法從伺服器 %(srv)s 分離連接 %(att)s。" #, python-format msgid "Could not fetch remote template \"%(name)s\": %(exc)s" msgstr "無法提取遠端範本 \"%(name)s\":%(exc)s" #, python-format msgid "Could not fetch remote template '%(url)s': %(exc)s" msgstr "無法提取遠端範本 '%(url)s':%(exc)s" #, python-format msgid "Could not load %(name)s: %(error)s" msgstr "無法載入 %(name)s:%(error)s" #, python-format msgid "Could not retrieve template: %s" msgstr "無法擷取範本:%s" msgid "Create volumes on the same physical port as an instance." msgstr "在與實例相同的實體埠上建立磁區。" msgid "" "Credentials used for swift. Not required if sahara is configured to use " "proxy users and delegated trusts for access." msgstr "" "用於 Swift 的認證。如果將 Sahara 配置成使用 Proxy 使用者及委派的存取信任,則" "不需要此項。" msgid "Cron expression." msgstr "CRON 表示式。" msgid "Current share status." msgstr "現行共用項目狀態。" msgid "Custom LoadBalancer template can not be found" msgstr "找不到自訂負載平衡器範本" msgid "DB instance restore point." msgstr "DB 實例還原點。" msgid "DNS Domain id or name." msgstr "DNS 網域 ID 或名稱。" msgid "DNS IP address used inside tenant's network." msgstr "租戶網路內使用的 DNS IP 位址。" msgid "DNS Record type." msgstr "DNS 記錄類型。" msgid "DNS domain serial." msgstr "DNS 網域序列。" msgid "" "DNS record data, varies based on the type of record. For more details, " "please refer rfc 1035." msgstr "" "DNS 記錄資料,根據記錄的類型而有所不同。 如需更多詳細資料,請參閱 RFC 1035。" msgid "" "DNS record priority. It is considered only for MX and SRV types, otherwise, " "it is ignored." msgstr "DNS 記錄優先順序。只將其視為 MX 和 SRV 類型,否則會將其忽略。" #, python-format msgid "Data supplied was not valid: %(reason)s" msgstr "提供的資料無效:%(reason)s" #, python-format msgid "" "Database %(dbs)s specified for user does not exist in databases for resource " "%(name)s." msgstr "指定給使用者的資料庫 %(dbs)s 不在下列資源的資料庫中:%(name)s。" msgid "Database volume size in GB." msgstr "資料庫磁區大小 (GB)。" #, python-format msgid "" "Databases property is required if users property is provided for resource %s." msgstr "如果為下列資源提供了使用者內容,則需要資料庫內容:%s。" #, python-format msgid "" "Datastore version %(dsversion)s for datastore type %(dstype)s is not valid. " "Allowed versions are %(allowed)s." msgstr "" "資料儲存庫類型 %(dstype)s 的資料儲存庫版本 %(dsversion)s 無效。所容許的版本" "為 %(allowed)s。" msgid "Datetime when a share was created." msgstr "判定共用項目的建立時間。" msgid "" "Dead Peer Detection protocol configuration for the ipsec site connection." msgstr "IPSec 網站連線的已停用同層級偵測通訊協定配置。" msgid "Dead engines are removed." msgstr "已移除停用的引擎。" msgid "Default TLS container reference to retrieve TLS information." msgstr "用來擷取 TLS 資訊的預設 TLS 儲存器參照。" #, python-format msgid "Default must be a comma-delimited list string: %s" msgstr "預設值必須是以逗點定界的清單字串:%s" msgid "Default name or UUID of the image used to boot Hadoop nodes." msgstr "用來啟動 Hadoop 節點之映像檔的預設名稱或 UUID。" msgid "Default region name used to get services endpoints." msgstr "用來取得服務端點的預設區域名稱。" msgid "Default settings for some of task attributes defined at workflow level." 
msgstr "在工作流程層次定義之部分作業屬性的預設設定。" msgid "Default value for the input if none is specified." msgstr "輸入的預設值(如果未指定任何值)。" msgid "" "Defines a delay in seconds that Mistral Engine should wait after a task has " "completed before starting next tasks defined in on-success, on-error or on-" "complete." msgstr "" "定義在某個作業已完成之後且在啟動「成功時」、「發生錯誤時」或「完成時」中所定" "義的下一個作業之前,Mistral 引擎應該等待的延遲(以秒為單位)。" msgid "" "Defines a delay in seconds that Mistral Engine should wait before starting a " "task." msgstr "定義 Mistral 引擎在啟動作業之前應該等待的延遲(以秒為單位)。" msgid "Defines a pattern how task should be repeated in case of an error." msgstr "定義在發生錯誤時應該如何重複作業的型樣。" msgid "" "Defines a period of time in seconds after which a task will be failed " "automatically by engine if hasn't completed." msgstr "" "定義一個時段(以秒為單位),在經歷該時段之後,如果作業仍未完成,則引擎自動讓" "該作業失敗。" msgid "Defines if share type is accessible to the public." msgstr "定義是否可公開存取共用類型。" msgid "Defines if shared filesystem is public or private." msgstr "定義共用檔案系統是公用的還是專用的。" msgid "" "Defines the method in which the request body for signaling a workflow would " "be parsed. In case this property is set to True, the body would be parsed as " "a simple json where each key is a workflow input, in other cases body would " "be parsed expecting a specific json format with two keys: \"input\" and " "\"params\"." msgstr "" "定義解析用於傳送工作流程信號的要求內文時所使用的方法。當此內容設為 True 時," "會將內文作為簡式 JSON 進行解析,其中每一個鍵都是一個工作流程輸入;在其他情況" "下,解析內文時,預期為含有兩個鍵(\"input\" 及 \"params\")的指定 JSON 格式。" msgid "" "Defines whether Mistral Engine should put the workflow on hold or not before " "starting a task." msgstr "定義在啟動作業之前,Mistral 引擎是否應該將工作流程置於保留狀態。" msgid "Defines whether auto-assign security group to this Node Group template." msgstr "定義是否自動將安全群組指派給此「節點群組」範本。" #, python-format msgid "" "Defining more than one configuration for the same action in " "SoftwareComponent \"%s\" is not allowed." msgstr "不容許對 SoftwareComponent \"%s\" 中的同一個動作定義多個配置。" msgid "Deleting in-progress snapshot" msgstr "正在刪除 Snapshot" #, python-format msgid "Deleting non-empty container (%(id)s) when %(prop)s is False" msgstr "當 %(prop)s 為 False 時,刪除非空儲存器 (%(id)s)" #, python-format msgid "Delimiter for %s must be string" msgstr "%s 的定界字元必須是字串" msgid "" "Denotes that the deployment is in an error state if this output has a value." msgstr "如果此輸出具有值,則表示該部署處於錯誤狀態。" msgid "Deploy data available" msgstr "部署可用的資料" #, python-format msgid "Deployment exited with non-zero status code: %s" msgstr "部署已結束,傳回非零狀態碼:%s" #, python-format msgid "Deployment to server failed: %s" msgstr "部署至伺服器時失敗:%s" #, python-format msgid "Deployment with id %s not found" msgstr "找不到 ID 為 %s 的部署" msgid "Deprecated." msgstr "已淘汰。" msgid "" "Describe time constraints for the alarm. Only evaluate the alarm if the time " "at evaluation is within this time constraint. Start point(s) of the " "constraint are specified with a cron expression, whereas its duration is " "given in seconds." msgstr "" "說明警示的時間限制。如果執行評估的時間位於此時間限制內,則只評估警示。限制的" "起始點透過 CRON 表示式進行指定,而它的持續時間則以秒為單位進行提供。" msgid "Description for the alarm." msgstr "警示的說明。" msgid "Description for the firewall policy." msgstr "防火牆原則的說明。" msgid "Description for the firewall rule." msgstr "防火牆規則的說明。" msgid "Description for the firewall." msgstr "防火牆的說明。" msgid "Description for the ike policy." msgstr "IKE 原則的說明。" msgid "Description for the ipsec policy." msgstr "IPSec 原則的說明。" msgid "Description for the ipsec site connection." msgstr "IPSec 網站連線的說明。" msgid "Description for the time constraint." msgstr "時間限制的說明。" msgid "Description for the vpn service." 
msgstr "VPN 服務的說明。" msgid "Description for this interface." msgstr "此介面的說明。" msgid "Description of domain." msgstr "網域的說明。" msgid "Description of keystone group." msgstr "Keystone 群組的說明。" msgid "Description of keystone project." msgstr "Keystone 專案的說明。" msgid "Description of keystone region." msgstr "Keystone 區域的說明。" msgid "Description of keystone service." msgstr "Keystone 服務的說明。" msgid "Description of keystone user." msgstr "Keystone 使用者的說明。" msgid "Description of record." msgstr "記錄的說明。" msgid "Description of the Node Group Template." msgstr "節點群組範本的說明。" msgid "Description of the Sahara Group Template." msgstr "Sahara 群組範本的說明。" msgid "Description of the alarm." msgstr "警示的說明。" msgid "Description of the data source." msgstr "資料來源的說明。" msgid "Description of the firewall policy." msgstr "防火牆原則的說明。" msgid "Description of the firewall rule." msgstr "防火牆規則的說明。" msgid "Description of the firewall." msgstr "防火牆的說明。" msgid "Description of the image." msgstr "映像檔的說明。" msgid "Description of the input." msgstr "輸入的說明。" msgid "Description of the job binary." msgstr "工作二進位檔的說明。" msgid "Description of the metering label." msgstr "計量標籤的說明。" msgid "Description of the output." msgstr "輸出的說明。" msgid "Description of the pool." msgstr "儲存區的說明。" msgid "Description of the security group." msgstr "安全群組的說明。" msgid "Description of the vip." msgstr "VIP 的說明。" msgid "Description of the volume type." msgstr "磁區類型的說明。" msgid "Description of the volume." msgstr "磁區的說明。" msgid "Description of this Load Balancer." msgstr "此負載平衡器的說明。" msgid "Description of this listener." msgstr "此監視器的說明。" msgid "Description of this pool." msgstr "此儲存區的說明。" msgid "Desired IPs for this port." msgstr "此埠需要的 IP。" msgid "Desired capacity of the cluster." msgstr "需要的叢集容量。" msgid "Desired initial number of instances." msgstr "需要的起始實例數。" msgid "Desired initial number of resources in cluster." msgstr "叢集中需要的起始資源數目。" msgid "Desired initial number of resources." msgstr "需要的起始資源數目。" msgid "Desired number of instances." msgstr "需要的實例數。" msgid "DesiredCapacity must be between MinSize and MaxSize" msgstr "DesiredCapacity 必須介於 MinSize 和 MaxSize 之間" msgid "Destination IP address or CIDR." msgstr "目的地 IP 位址或 CIDR。" msgid "Destination ip_address for this firewall rule." msgstr "此防火牆規則的目的地 ip_address。" msgid "Destination port number or a range." msgstr "目的地埠號或範圍。" msgid "Destination port range for this firewall rule." msgstr "此防火牆規則的目的地埠範圍。" msgid "Detailed information about resource." msgstr "資源的相關詳細資訊。" msgid "Device ID of this port." msgstr "此埠的裝置 ID。" msgid "Device info for this network gateway." msgstr "此網路閘道的裝置資訊。" msgid "" "Device type: at the moment we can make distinction only between disk and " "cdrom." msgstr "裝置類型:目前,我們只能區分磁碟與CDROM。" msgid "" "Dict, which has expand properties for port. Used only if port property is " "not specified for creating port." msgstr "字典,具有埠的延伸內容。只在未指定埠內容以用於建立埠時,才使用。" msgid "Dictionary containing workflow tasks." msgstr "包含工作流程作業的字典。" msgid "Dictionary of node configurations." msgstr "節點配置的字典。" msgid "Dictionary of variables to publish to the workflow context." msgstr "要發佈至工作流程環境定義的變數字典。" msgid "Dictionary which contains input for workflow." msgstr "包含工作流程輸入的字典。" msgid "" "Dictionary-like section defining task policies that influence how Mistral " "Engine runs tasks. Must satisfy Mistral DSL v2." msgstr "" "與字典類似的區段,用來定義可影響 Mistral 引擎如何執行作業的作業原則。必須滿" "足 Mistral DSL 第 2 版。" msgid "DisableRollback and OnFailure may not be used together" msgstr "DisableRollback 及 OnFailure 不能一起使用" msgid "Disk format of image." 
msgstr "映像檔的磁碟格式。" msgid "Does not contain a valid AWS Access Key or certificate" msgstr "不包含有效的「AWS 存取金鑰」或憑證" msgid "Domain email." msgstr "網域電子郵件。" msgid "Domain name." msgstr "地域名稱。" #, python-format msgid "Duplicate names %s" msgstr "重複名稱 %s" msgid "Duplicate refs are not allowed." msgstr "不允許重複的參照。" msgid "Duration for the time constraint." msgstr "時間限制的持續時間。" msgid "EIP address to associate with instance." msgstr "要與實例產生關聯的 EIP 位址。" #, python-format msgid "Each %(object_name)s must contain a %(sub_section)s key." msgstr "每一個 %(object_name)s 都必須包含 %(sub_section)s 索引鍵。" msgid "Each Resource must contain a Type key." msgstr "每一個資源都必須包含「類型」索引鍵。" msgid "Ebs is missing, this is required when specifying BlockDeviceMappings." msgstr "EBS 遺失,指定 BlockDeviceMappings 時,這是必要項目。" msgid "" "Egress rules are only allowed when Neutron is used and the 'VpcId' property " "is set." msgstr "只有在使用 Neutron 且設定 'VpcId' 內容時,才接受 Egress 規則。" #, python-format msgid "Either %(net)s or %(port)s must be provided." msgstr "必須提供 %(net)s 或 %(port)s。" msgid "Either 'EIP' or 'AllocationId' must be provided." msgstr "必須提供 'EIP' 或 'AllocationId'。" msgid "Either 'InstanceId' or 'LaunchConfigurationName' must be provided." msgstr "必須提供 'InstanceId' 或 'LaunchConfigurationName'。" #, python-format msgid "Either project or domain must be specified for role %s" msgstr "必須為角色 %s 指定專案或網域" #, python-format msgid "Either volume_id or snapshot_id must be specified for device mapping %s" msgstr "必須給裝置對映 %s 指定 volume_id 或 snapshot_id" msgid "Email address of keystone user." msgstr "Keystone 使用者的電子郵件位址。" msgid "Enable the legacy OS::Heat::CWLiteAlarm resource." msgstr "啟用舊式 OS::Heat::CWLiteAlarm 資源。" msgid "Enable the preview Stack Abandon feature." msgstr "啟用預覽「堆疊放棄」功能。" msgid "Enable the preview Stack Adopt feature." msgstr "啟用預覽「堆疊採用」功能。" msgid "" "Enables Source NAT on the router gateway. NOTE: The default policy setting " "in Neutron restricts usage of this property to administrative users only." msgstr "" "在路由器閘道上啟用「來源 NAT」。附註:Neutron 中的預設原則設定將此內容限制為" "僅管理使用者使用。" msgid "" "Enables engine with convergence architecture. All stacks with this option " "will be created using convergence engine." msgstr "啟用具有聚合架構的引擎。將使用聚合引擎來建立具有此選項的所有堆疊。" msgid "Enables or disables read-only access mode of volume." msgstr "啟用或停用磁區的唯讀存取模式。" msgid "Encapsulation mode for the ipsec policy." msgstr "IPSec 原則的封裝模式。" msgid "" "Encrypt template parameters that were marked as hidden and also all the " "resource properties before storing them in database." msgstr "" "先將已標示為隱藏的範本參數以及所有資源內容進行加密,然後將它們儲存在資料庫" "中。" msgid "Encryption algorithm for the ike policy." msgstr "IKE 原則的加密演算法。" msgid "Encryption algorithm for the ipsec policy." msgstr "IPSec 原則的加密演算法。" msgid "End address for the allocation pool." msgstr "配置儲存區的結束位址。" #, python-format msgid "End resizing the group %(group)s" msgstr "停止調整群組 %(group)s 的大小" msgid "" "Endpoint/url which can be used for signalling handle when signal_transport " "is set to TOKEN_SIGNAL. None for all other signal transports." msgstr "" "端點 / URL,可用於在 signal_transport 設為 TOKEN_SIGNAL 時傳送控點信號。針對" "所有其他信號傳輸,值為「無」。" msgid "Endpoint/url which can be used for signalling handle." msgstr "可用於傳送控點信號的端點 / URL。" msgid "Engine_Id" msgstr "引擎 ID" msgid "Error" msgstr "錯誤" #, python-format msgid "Error authorizing action %s" msgstr "授權動作 %s 時發生錯誤" #, python-format msgid "Error creating ec2 keypair for user %s" msgstr "給使用者 %s 建立 EC2 金鑰組時發生錯誤" msgid "" "Error during applying access rules to share \"{0}\". 
The root cause of the " "problem is the following: {1}." msgstr "" "將存取規則套用至共用項目 \"{0}\" 時發生錯誤。此問題的根本原因如下:{1}。" msgid "Error during creation of share \"{0}\"" msgstr "建立共用項目 \"{0}\" 時發生錯誤" msgid "Error during deleting share \"{0}\"." msgstr "刪除共用項目 \"{0}\" 時發生錯誤。" #, python-format msgid "Error validating value '%(value)s'" msgstr "驗證值 '%(value)s' 時發生錯誤" #, python-format msgid "Error validating value '%(value)s': %(message)s" msgstr "驗證值 '%(value)s' 時發生錯誤:%(message)s" msgid "Ethertype of the traffic." msgstr "資料流量的乙太網路類型。" msgid "Exclude state for cidr." msgstr "排除 CIDR 的狀態。" #, python-format msgid "Expected 1 external network, found %d" msgstr "預期 1 個外部網路,找到 %d 個" msgid "Export locations of share." msgstr "共用項目的匯出位置。" msgid "Expression of the alarm to evaluate." msgstr "要評估之警示的表示式。" msgid "External fixed IP address." msgstr "外部固定 IP 位址。" msgid "External fixed IP addresses for the gateway." msgstr "閘道的外部固定 IP 位址。" msgid "External network gateway configuration for a router." msgstr "路由器的外部網路閘道配置。" msgid "" "Extra parameters to include in the \"floatingip\" object in the creation " "request. Parameters are often specific to installed hardware or extensions." msgstr "" "要併入到建立要求中 \"floatingip\" 物件的額外參數。參數通常為所安裝硬體或延伸" "所特有。" msgid "Extra parameters to include in the creation request." msgstr "要併到建立要求中的額外參數。" msgid "Extra parameters to include in the request." msgstr "要併入要求中的額外參數。" msgid "" "Extra parameters to include in the request. Parameters are often specific to " "installed hardware or extensions." msgstr "要併入到要求中的額外參數。參數通常為所安裝硬體或延伸所特有。" msgid "Extra specs key-value pairs defined for share type." msgstr "為共用類型定義的額外規格鍵值組。" #, python-format msgid "Failed to attach interface (%(port)s) to server (%(server)s)" msgstr "無法將介面 (%(port)s) 連接至伺服器 (%(server)s)" #, python-format msgid "Failed to attach volume %(vol)s to server %(srv)s - %(err)s" msgstr "無法將磁區 %(vol)s 連接至伺服器 %(srv)s - %(err)s" #, python-format msgid "Failed to create Bay '%(name)s' - %(reason)s" msgstr "無法建立機架 '%(name)s' - %(reason)s" #, python-format msgid "Failed to detach interface (%(port)s) from server (%(server)s)" msgstr "無法將介面 (%(port)s) 從伺服器 (%(server)s) 分離" #, python-format msgid "Failed to execute %(action)s for %(cluster)s: %(reason)s" msgstr "無法對 %(cluster)s 執行 %(action)s:%(reason)s" #, python-format msgid "Failed to extend volume %(vol)s - %(err)s" msgstr "無法延伸磁區 %(vol)s - %(err)s" #, python-format msgid "Failed to fetch template: %s" msgstr "無法提取範本:%s" #, python-format msgid "Failed to find instance %s" msgstr "找不到實例 %s" #, python-format msgid "Failed to find server %s" msgstr "找不到伺服器 %s" #, python-format msgid "Failed to parse JSON data: %s" msgstr "無法剖析 JSON 資料:%s" #, python-format msgid "Failed to restore volume %(vol)s from backup %(backup)s - %(err)s" msgstr "無法從備份 %(backup)s 還原磁區 %(vol)s - %(err)s" msgid "Failed to retrieve template" msgstr "無法擷取範本" #, python-format msgid "Failed to retrieve template data: %s" msgstr "無法擷取範本資料:%s" #, python-format msgid "Failed to retrieve template: %s" msgstr "無法擷取範本:%s" #, python-format msgid "" "Failed to send message to stack (%(stack_name)s) on other engine " "(%(engine_id)s)" msgstr "無法將訊息傳送至其他引擎 (%(engine_id)s) 上的堆疊 (%(stack_name)s)" #, python-format msgid "Failed to stop stack (%(stack_name)s) on other engine (%(engine_id)s)" msgstr "無法在其他引擎 (%(engine_id)s) 上停止堆疊 (%(stack_name)s)" #, python-format msgid "Failed to update Bay '%(name)s' - %(reason)s" msgstr "無法更新機架 '%(name)s' - %(reason)s" msgid "Failed to update, can not found port info." 
msgstr "無法更新,找不到埠資訊。" #, python-format msgid "" "Failed validating stack template using Heat endpoint at region \"%(region)s" "\" due to \"%(exc)s\"" msgstr "" "使用區域 \"%(region)s\" 中的 Heat 端點來驗證堆疊範本失敗,因為 \"%(exc)s\"" msgid "Fake attribute !a." msgstr "偽造屬性 !a。" msgid "Fake attribute a." msgstr "偽造屬性 a。" msgid "Fake property !a." msgstr "偽造內容 !a。" msgid "Fake property !c." msgstr "偽造內容 !c。" msgid "Fake property a." msgstr "偽造內容 a。" msgid "Fake property c." msgstr "偽造內容 c。" msgid "Fake property ca." msgstr "偽造內容 ca。" msgid "" "False to trigger actions when the threshold is reached AND the alarm's state " "has changed. By default, actions are called each time the threshold is " "reached." msgstr "" "如果為 False,則在達到臨界值且警示狀態變更時觸發動作。依預設,每次達到臨界值" "時,都會呼叫動作。" #, python-format msgid "Field %(field)s of %(objname)s is not an instance of Field" msgstr "%(objname)s 的欄位 %(field)s 不是「欄位」的實例" msgid "" "Fixed IP address to specify for the port created on the requested network." msgstr "固定 IP 位址,指定給在所要求網路上建立的埠。" msgid "Fixed IP addresses." msgstr "固定 IP 位址。" msgid "Fixed IPv4 address for this NIC." msgstr "此 NIC 的固定 IPv4 位址。" msgid "Flag indicating if traffic to or from instance is validated." msgstr "已驗證用來指示資料流量是送入實例還是從實例送出的旗標。" msgid "" "Flag to enable/disable port security on the network. It provides the default " "value for the attribute of the ports created on this network." msgstr "" "用來在網路上啟用/停用埠安全的旗標。它為此網路上所建立之埠的屬性提供預設值。" msgid "" "Flag to enable/disable port security on the port. When disable this " "feature(set it to False), there will be no packages filtering, like security-" "group and address-pairs." msgstr "" "用來在埠上啟用/停用埠安全的旗標。停用此功能(將其設為 False)時,將不執行套件" "過濾,例如安全群組和位址配對。" msgid "Flavor of the instance." msgstr "實例的特性。" msgid "Friendly name of the port." msgstr "埠的一般名稱。" msgid "Friendly name of the router." msgstr "路由器的一般名稱。" msgid "Friendly name of the subnet." msgstr "子網路的一般名稱。" #, python-format msgid "Function \"%s\" must have arguments" msgstr "函數 \"%s\" 必須是引數" #, python-format msgid "Function \"%s\" usage: [\"\", \"\"]" msgstr "函數 \"%s\" 用法:[\"\", \"\"]" #, python-format msgid "Gateway IP address \"%(gateway)s\" is in invalid format." msgstr "閘道 IP 位址 \"%(gateway)s\" 的格式無效。" msgid "Gateway network for the router." msgstr "路由器的閘道網路。" msgid "Generic HeatAPIException, please use specific subclasses!" msgstr "一般 HeatAPIException,請使用特定子類別!" msgid "Glance image ID or name." msgstr "Glance 映像檔 ID 或名稱。" msgid "Governs permissions set in manila for the cluster ips." msgstr "針對叢集 IP,控管 Manila 中設定的許可權。" msgid "Granularity to use for age argument, defaults to days." msgstr "要用於經歷時間引數的精度,預設為天數。" msgid "Hadoop cluster name." msgstr "Hadoop 叢集名稱。" #, python-format msgid "Header X-Auth-Url \"%s\" not an allowed endpoint" msgstr "標頭 X-Auth-Url \"%s\" 並非容許的端點" msgid "Health probe timeout, in seconds." msgstr "性能探測逾時秒數。" msgid "" "Heat build revision. If you would prefer to manage your build revision " "separately, you can move this section to a different file and add it as " "another config option." msgstr "" "Heat 建置修訂。如果想要個別管理您的建置修訂,則可以將此區段移至其他檔案,並將" "它新增為另一個配置選項。" msgid "Host" msgstr "主機" msgid "Hostname" msgstr "主機名稱" msgid "Hostname of the instance." msgstr "實例的主機名稱。" msgid "How long to preserve deleted data." msgstr "已刪除資料的保留時間長度。" msgid "" "How the client will signal the wait condition. CFN_SIGNAL will allow an HTTP " "POST to a CFN keypair signed URL. TEMP_URL_SIGNAL will create a Swift " "TempURL to be signalled via HTTP PUT. 
HEAT_SIGNAL will allow calls to the " "Heat API resource-signal using the provided keystone credentials. " "ZAQAR_SIGNAL will create a dedicated zaqar queue to be signalled using the " "provided keystone credentials. TOKEN_SIGNAL will allow an HTTP POST to a " "Heat API endpoint with the provided keystone token. NO_SIGNAL will result in " "the resource going to a signalled state without waiting for any signal." msgstr "" "用戶端將傳送等待條件信號的方式。CFN_SIGNAL 將容許對 CFN 金鑰組簽署的 URL 執" "行 HTTP POST 動作。TEMP_URL_SIGNAL 將透過 HTTP PUT 建立要向其傳送信號的 " "Swift TempURL。HEAT_SIGNAL 將容許呼叫使用所提供之 Keystone 認證的 Heat API 資" "源信號。ZAQAR_SIGNAL 將建立要使用所提供之 Keystone 認證向其傳送信號的專用 " "zaqar 佇列。TOKEN_SIGNAL 將容許使用所提供之 Keystone 記號對 Heat API 端點執" "行 HTTP POST 動作。NO_SIGNAL 將導致資源跳至「已傳送信號」狀態,而不會等待任何" "信號。" msgid "" "How the server should receive the metadata required for software " "configuration. POLL_SERVER_CFN will allow calls to the cfn API action " "DescribeStackResource authenticated with the provided keypair. " "POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the " "provided keystone credentials. POLL_TEMP_URL will create and populate a " "Swift TempURL with metadata for polling. ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "伺服器應該如何接收軟體配置所需 meta 資料。POLL_SERVER_CFN 將容許呼叫利用所提" "供之金鑰組進行鑑別的 cfn API 動作 DescribeStackResource。POLL_SERVER_HEAT 將" "容許呼叫使用所提供之 Keystone 認證的 Heat API 資源顯示。POLL_TEMP_URL 將利用" "用於進行輪詢的 meta 資料來建立和移入 Swift TempURL。ZAQAR_MESSAGE 將建立專用 " "zaqar 佇列並公佈用於進行輪詢的 meta 資料。" msgid "How the server should signal to heat with the deployment output values." msgstr "伺服器應該如何用信號將部署輸出值傳送至 Heat。" msgid "" "How the server should signal to heat with the deployment output values. " "CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL. " "TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT. " "HEAT_SIGNAL will allow calls to the Heat API resource-signal using the " "provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar " "queue to be signaled using the provided keystone credentials. NO_SIGNAL will " "result in the resource going to the COMPLETE state without waiting for any " "signal." msgstr "" "伺服器應該如何用信號將部署輸出值傳送至 Heat。CFN_SIGNAL 將容許對 CFN 金鑰組簽" "署的 URL 執行 HTTP POST 動作。TEMP_URL_SIGNAL 將透過 HTTP PUT 建立要向其傳送" "信號的 Swift TempURL。HEAT_SIGNAL 將容許呼叫使用所提供之 Keystone 認證的 " "Heat API 資源信號。ZAQAR_SIGNAL 將建立要使用所提供之 Keystone 認證向其傳送信" "號的專用 zaqar 佇列。NO_SIGNAL 將導致資源跳至「完成」狀態,而不會等待任何信" "號。" msgid "" "How the user_data should be formatted for the server. For HEAT_CFNTOOLS, the " "user_data is bundled as part of the heat-cfntools cloud-init boot " "configuration data. For RAW the user_data is passed to Nova unmodified. For " "SOFTWARE_CONFIG user_data is bundled as part of the software config data, " "and metadata is derived from any associated SoftwareDeployment resources." msgstr "" "應該如何給伺服器格式化 user_data。對於 HEAT_CFNTOOLS,user_data 會組合到 " "heat-cfntools cloud-init 啟動配置資料,成為其一部分。對於 RAW,user_data 會按" "原狀傳遞給 Nova。對於 SOFTWARE_CONFIG,user_data 會組合到軟體配置資料,成為其" "一部分,而 meta 資料衍生自任何相關聯的 SoftwareDeployment 資源。" msgid "Human readable name for the secret." msgstr "人類可讀的密碼名稱。" msgid "Human-readable name for the container." msgstr "人類可讀的儲存器名稱。" msgid "" "ID list of the L3 agent. User can specify multi-agents for highly available " "router. NOTE: The default policy setting in Neutron restricts usage of this " "property to administrative users only." 
msgstr "" "L3 代理程式的 ID 清單。使用者可以為高可用性路由器指定多個代理程式。附註:" "Neutron 中的預設原則設定會將此內容限制為僅供管理使用者使用。" msgid "ID of an existing port to associate with this server." msgstr "要與此伺服器產生關聯的現有埠 ID。" msgid "" "ID of an existing port with at least one IP address to associate with this " "floating IP." msgstr "要與此浮動 IP 產生關聯的現有埠 ID,此埠至少具有一個 IP 位址。" msgid "ID of network to create a port on." msgstr "要在其上建立埠的網路 ID。" msgid "ID of project for API authentication" msgstr "用於 API 鑑別的專案 ID" msgid "ID of queue to use for signaling output values" msgstr "使用信號傳送輸出值時要使用的佇列 ID" msgid "" "ID of resource to apply configuration to. Normally this should be a Nova " "server ID." msgstr "要向其套用配置的資源 ID。通常,這應該是 Nova 伺服器 ID。" msgid "" "ID of server (VM, etc...) on host that is used for exporting network file-" "system." msgstr "用於匯出網路檔案系統之主機上伺服器(VM 等)的 ID。" msgid "ID of signal to use for signaling output values" msgstr "使用信號傳送輸出值時要用的信號 ID" msgid "" "ID of software configuration resource to execute when applying to the server." msgstr "套用至伺服器時要執行的軟體配置資源 ID。" msgid "ID of the Cluster Template used for Node Groups and configurations." msgstr "用於節點群組及配置之叢集範本的 ID。" msgid "ID of the InternetGateway." msgstr "InternetGateway 的 ID。" msgid "" "ID of the L3 agent. NOTE: The default policy setting in Neutron restricts " "usage of this property to administrative users only." msgstr "" "L3 代理程式的 ID。附註:Neutron 中的預設原則設定將此內容限制為僅供管理使用者" "使用。" msgid "ID of the Node Group Template." msgstr "節點群組範本的 ID。" msgid "ID of the VPNGateway to attach to the VPC." msgstr "要連接至 VPC 的 VPNGateway ID。" msgid "ID of the default image to use for the template." msgstr "要用於範本之預設映像檔的 ID。" msgid "ID of the default pool this listener is associated to." msgstr "與此接聽器相關聯的預設儲存區 ID。" msgid "ID of the floating IP to assign to the server." msgstr "要指派給伺服器的浮動 IP ID。" msgid "ID of the floating IP to associate." msgstr "要產生關聯之浮動 IP 的 ID。" msgid "ID of the health monitor associated with this pool." msgstr "與此儲存區相關聯的性能監視器 ID。" msgid "ID of the image to use for the template." msgstr "要用於範本的映像檔 ID。" msgid "ID of the load balancer this listener is associated to." msgstr "與此接聽器相關聯的負載平衡器 ID。" msgid "ID of the network in which this IP is allocated." msgstr "在其中配置此 IP 的網路 ID。" msgid "ID of the port associated with this IP." msgstr "與此 IP 相關聯的埠 ID。" msgid "ID of the queue." msgstr "佇列的 ID。" msgid "ID of the router used as gateway, set when associated with a port." msgstr "用來作為閘道的路由器 ID,與埠產生關聯時設定。" msgid "ID of the router." msgstr "路由器的 ID。" msgid "ID of the server being deployed to" msgstr "要部署至的伺服器 ID" msgid "ID of the stack this deployment belongs to" msgstr "此部署所屬的堆疊 ID" msgid "ID of the tenant to which the RBAC policy will be enforced." msgstr "將對其實施 RBAC 原則的租戶 ID。" msgid "ID of the tenant who owns the health monitor." msgstr "擁有性能監視器的租戶 ID。" msgid "ID or name of the QoS policy." msgstr "服務品質原則的 ID 或名稱。" msgid "ID or name of the RBAC object." msgstr "RBAC 物件的 ID 或名稱。" msgid "ID or name of the external network for the gateway." msgstr "閘道的外部網路 ID 或名稱。" msgid "ID or name of the image to register." msgstr "要登錄的映像檔 ID 或名稱。" msgid "ID or name of the load balancer with which listener is associated." msgstr "與監視器相關聯之負載平衡器的 ID 或名稱。" msgid "ID or name of the load balancing pool." msgstr "負載平衡儲存區的 ID 或名稱。" msgid "" "ID that AWS assigns to represent the allocation of the address for use with " "Amazon VPC. Returned only for VPC elastic IP addresses." msgstr "" "AWS 指派此 ID 以表示配置位址來與 Amazon VPC 一起使用。只針對 VPC 彈性 IP 位址" "傳回。" msgid "IP address and port of the pool." 
msgstr "儲存區的 IP 位址及埠。" msgid "IP address desired in the subnet for this port." msgstr "子網路中此埠需要的 IP 位址。" msgid "IP address for the VIP." msgstr "VIP 的 IP 位址。" msgid "IP address of the associated port, if specified." msgstr "如果指定的話,則為相關聯埠的 IP 位址。" msgid "" "IP address of the floating IP. NOTE: The default policy setting in Neutron " "restricts usage of this property to administrative users only." msgstr "" "浮動 IP 的 IP 位址。附註:Neutron 中的預設原則設定會將此內容限制為僅供管理使" "用者使用。" msgid "IP address of the pool member on the pool network." msgstr "儲存區網路上儲存區成員的 IP 位址。" msgid "IP address of the pool member." msgstr "儲存區成員的 IP 位址。" msgid "IP address of the vip." msgstr "VIP 的 IP 位址。" msgid "IP address to allow through this port." msgstr "容許通過此埠的 IP 位址。" msgid "IP address to use if the port has multiple addresses." msgstr "埠具有多個位址時要使用的 IP 位址。" msgid "" "IP or other address information about guest that allowed to access to Share." msgstr "容許其存取共用項目之訪客的相關 IP 或其他位址資訊。" msgid "IPv6 RA (Router Advertisement) mode." msgstr "IPv6 RA(路由器通告)模式。" msgid "IPv6 address mode." msgstr "IPv6 位址模式。" msgid "Id of a resource." msgstr "資源的 ID。" msgid "Id of the manila share." msgstr "Manila 共用項目的 ID。" msgid "Id of the tenant owning the firewall policy." msgstr "擁有防火牆原則的承租人 ID。" msgid "Id of the tenant owning the firewall." msgstr "擁有防火牆的承租人 ID。" msgid "Identifier of the source instance to replicate." msgstr "要抄寫的來源實例 ID。" #, python-format msgid "" "If \"%(size)s\" is provided, only one of \"%(image)s\", \"%(image_ref)s\", " "\"%(source_vol)s\", \"%(snapshot_id)s\" can be specified, but currently " "specified options: %(exclusive_options)s." msgstr "" "如果提供了 \"%(size)s\",則只能指定 \"%(image)s\"、\"%(image_ref)s" "\"、\"%(source_vol)s\"、\"%(snapshot_id)s\" 中的一個,但目前指定的選項為:" "%(exclusive_options)s。" msgid "If False, closes the client socket connection explicitly." msgstr "如果為 False,則將明確地關閉用戶端 Socket 連線。" msgid "" "If True, delete any objects in the container when the container is deleted. " "Otherwise, deleting a non-empty container will result in an error." msgstr "" "如果為 True,則在刪除儲存器時刪除儲存器中的所有物件。否則,刪除非空儲存器將導" "致錯誤。" msgid "If True, enable config drive on the server." msgstr "如果為 True,則將在伺服器上啟用配置驅動。" msgid "" "If configured, it allows to run action or workflow associated with a task " "multiple times on a provided list of items." msgstr "" "如果已配置,則它將容許對所提供的項目清單多次執行與作業相關聯的動作或工作流" "程。" msgid "If set, then the server's certificate will not be verified." msgstr "如果設定,則將不會驗證伺服器的憑證。" msgid "If specified, the backup to create the volume from." msgstr "如果指定的話,則為要從其建立磁區的備份。" msgid "If specified, the backup used as the source to create the volume." msgstr "如果指定的話,則為用作建立磁區之來源的備份。" msgid "If specified, the name or ID of the image to create the volume from." msgstr "如果指定的話,則為要從其建立磁區的映像檔名稱或 ID。" msgid "If specified, the snapshot to create the volume from." msgstr "如果指定的話,則為要從其建立磁區的 Snapshot。" msgid "If specified, the type of volume to use, mapping to a specific backend." msgstr "如果指定的話,則為所使用磁區(對映至特定後端)的類型。" msgid "If specified, the volume to use as source." msgstr "如果指定的話,則為要用來作為來源的磁區。" msgid "" "If the region is hierarchically a child of another region, set this " "parameter to the ID of the parent region." msgstr "如果該區域是另一個區域的階層式子項,請將此參數設為母項區域的 ID。" msgid "" "If true, the resources in the chain will be created concurrently. If false " "or omitted, each resource will be treated as having a dependency on the " "previous resource in the list." 
msgstr "" "如果為 true,則將同時建立鏈中的資源。如果為 false 或予以省略,則會將每一個資" "源都視為與清單中的上一個資源具有相依關係。" msgid "If without InstanceId, ImageId and InstanceType are required." msgstr "如果沒有 InstanceId,則需要 ImageId 和 InstanceType。" #, python-format msgid "Illegal prefix bounds: %(key1)s=%(value1)s, %(key2)s=%(value2)s." msgstr "無效的字首範圍:%(key1)s=%(value1)s,%(key2)s=%(value2)s。" #, python-format msgid "" "Image %(image)s requires %(imram)s minimum ram. Flavor %(flavor)s has only " "%(flram)s." msgstr "" "映像檔 %(image)s 至少需要 %(imram)s RAM。特性 %(flavor)s 只有 %(flram)s。" #, python-format msgid "" "Image %(image)s requires %(imsz)s GB minimum disk space. Flavor %(flavor)s " "has only %(flsz)s GB." msgstr "" "映像檔 %(image)s 至少需要 %(imsz)s GB 的磁碟空間。特性 %(flavor)s 只有 " "%(flsz)s GB。" #, python-format msgid "Image status is required to be %(cstatus)s not %(wstatus)s." msgstr "需要映像檔狀態為 %(cstatus)s,而不是 %(wstatus)s。" msgid "Incompatible parameters were used together" msgstr "一起使用的參數不相容" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be one of: %(allowed)s" msgstr "\"%(fn_name)s\" 的引數不正確,應該是下列其中一項:%(allowed)s" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be: %(example)s" msgstr "\"%(fn_name)s\" 的引數不正確,應該是:%(example)s" msgid "Incorrect arguments: Items to merge must be maps." msgstr "引數不正確:要合併的項目必須是對映。" #, python-format msgid "" "Incorrect index to \"%(fn_name)s\" should be between 0 and %(max_index)s" msgstr "\"%(fn_name)s\" 的索引不正確,應該介於 0 和 %(max_index)s 之間" #, python-format msgid "Incorrect index to \"%(fn_name)s\" should be: %(example)s" msgstr "\"%(fn_name)s\" 的索引不正確,應該是:%(example)s" #, python-format msgid "Index to \"%s\" must be a string" msgstr "\"%s\" 的索引必須是字串" #, python-format msgid "Index to \"%s\" must be an integer" msgstr "\"%s\" 的索引必須是整數" msgid "" "Indicate whether the volume should be deleted when the instance is " "terminated." msgstr "指示在終止實例時是否應該刪除磁區。" msgid "" "Indicate whether the volume should be deleted when the server is terminated." msgstr "指示在終止伺服器時是否應該刪除磁區。" msgid "Indicates remote IP prefix to be associated with this metering rule." msgstr "指示要與此計量規則產生關聯的遠端 IP 字首。" msgid "" "Indicates whether or not to create a distributed router. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only. This property can not be used in conjunction with the L3 agent " "ID." msgstr "" "指出是否要建立分散式路由器。附註:Neutron 中的預設原則設定會將此內容限制為僅" "供管理使用者使用。此內容不能與L3 代理程式 ID 結合使用。" msgid "" "Indicates whether or not to create a highly available router. NOTE: The " "default policy setting in Neutron restricts usage of this property to " "administrative users only. And now neutron do not support distributed and ha " "at the same time." msgstr "" "指出是否要建立高可用性路由器。附註:Neutron 中的預設原則設定會將此內容限制為" "僅供管理使用者使用。而現在 Neutron 不能同時支援分散式和高可用性路由器。" msgid "Indicates whether this firewall rule is enabled or not." msgstr "指示是否已啟用此防火牆規則。" msgid "Information used to configure the bucket as a static website." msgstr "用來將儲存區配置為靜態網站的資訊。" msgid "Initiator state in lowercase for the ipsec site connection." msgstr "IPSec 網站連線的起始器狀態(小寫)。" #, python-format msgid "Input in signal data must be a map, find a %s" msgstr "信號資料中的輸入必須是對映,尋找 %s" msgid "Input values for the workflow." msgstr "工作流程的輸入值。" msgid "Input values to apply to the software configuration on this server." msgstr "要套用至此伺服器上軟體配置的輸入值。" msgid "Instance ID to associate with EIP specified by EIP property." msgstr "要與 EIP 內容所指定的 EIP 產生關聯的實例 ID。" msgid "Instance ID to associate with EIP." 
msgstr "要與 EIP 產生關聯的實例 ID。" msgid "Instance connection to CFN/CW API validate certs if SSL is used." msgstr "與 CFN/CW API 驗證憑證的實例連線(如果使用 SSL)。" msgid "Instance connection to CFN/CW API via https." msgstr "透過 HTTPS 與 CFN/CW API 進行的實例連線。" #, python-format msgid "Instance is not ACTIVE (was: %s)" msgstr "實例不在作用中(曾經是:%s)" #, python-format msgid "" "Instance metadata must not contain greater than %s entries. This is the " "maximum number allowed by your service provider" msgstr "實例 meta 資料不得包含 %s 個以上的項目。這是服務提供者所容許的數目上限" msgid "Interface type of keystone service endpoint." msgstr "Keystone 服務端點的介面類型。" msgid "Internet protocol version." msgstr "網際網路通訊協定版本。" #, python-format msgid "Invalid %s, expected a mapping" msgstr "%s 無效,預期為對映" #, python-format msgid "Invalid CRON expression: %s" msgstr "無效的 CRON 表示式:%s" #, python-format msgid "Invalid Parameter type \"%s\"" msgstr "無效的參數類型 \"%s\"" #, python-format msgid "Invalid Property %s" msgstr "無效的內容 %s" msgid "Invalid Stack address" msgstr "無效的堆疊位址" msgid "Invalid Template URL" msgstr "無效的範本 URL" #, python-format msgid "Invalid URL scheme %s" msgstr "無效的 URL 架構 %s" #, python-format msgid "Invalid UUID version (%d)" msgstr "無效的 UUID 版本 (%d)" #, python-format msgid "Invalid action %s" msgstr "無效的動作 %s" #, python-format msgid "Invalid action %s specified" msgstr "指定的動作 %s 無效" #, python-format msgid "Invalid adopt data: %s" msgstr "採用資料無效:%s" #, python-format msgid "Invalid cloud_backend setting in heat.conf detected - %s" msgstr "偵測到 heat.conf 中的 cloud_backend 設定無效 - %s" #, python-format msgid "Invalid codes in ignore_errors : %s" msgstr "ignore_errors 中的程式碼無效:%s" #, python-format msgid "Invalid content type %(content_type)s" msgstr "無效的內容類型 %(content_type)s" #, python-format msgid "Invalid default %(default)s (%(exc)s)" msgstr "無效的預設值 %(default)s (%(exc)s)" #, python-format msgid "Invalid deletion policy \"%s\"" msgstr "無效的刪除原則 \"%s\"" #, python-format msgid "Invalid filter parameters %s" msgstr "過濾器參數 %s 無效" #, python-format msgid "Invalid hook type \"%(hook)s\" for %(resource)s" msgstr "%(resource)s 的連結鉤類型 \"%(hook)s\" 無效" #, python-format msgid "" "Invalid hook type \"%(value)s\" for resource breakpoint, acceptable hook " "types are: %(types)s" msgstr "" "資源岔斷點的連結鉤類型 \"%(value)s\" 無效,接受的連結鉤類型為:%(types)s" #, python-format msgid "Invalid key %s" msgstr "無效的索引鍵 %s" #, python-format msgid "Invalid key '%(key)s' for %(entity)s" msgstr "%(entity)s 的索引鍵 '%(key)s' 無效" #, python-format msgid "Invalid keys in resource mark unhealthy %s" msgstr "資源標記狀況不良之 %s 中的索引鍵無效" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "磁碟格式與儲存器格式的混合無效。將磁碟或儲存器格式設為 'aki'、'ari' 或 'ami' " "中的一個時,儲存器與磁碟格式必須相符。" #, python-format msgid "Invalid parameter constraints for parameter %s, expected a list" msgstr "參數 %s 的參數限制無效,預期為清單" #, python-format msgid "" "Invalid restricted_action type \"%(value)s\" for resource, acceptable " "restricted_action types are: %(types)s" msgstr "" "資源的 restricted_action 類型 \"%(value)s\" 無效,接受的 restricted_action 類" "型為:%(types)s" #, python-format msgid "" "Invalid stack name %s must contain only alphanumeric or \"_-.\" characters, " "must start with alpha and must be 255 characters or less." 
msgstr "" "堆疊名稱 %s 無效,只能包含英數或 \"_-.\" 字元,必須以字母開頭,且不得超過 " "255 個字元。" #, python-format msgid "Invalid stack name %s, must be a string" msgstr "堆疊名稱 %s 無效,必須是字串" #, python-format msgid "Invalid status %s" msgstr "無效的狀態 %s" #, python-format msgid "Invalid support status and should be one of %s" msgstr "無效的支援狀態,應該是下列其中一個:%s" #, python-format msgid "Invalid tag, \"%s\" contains a comma" msgstr "無效的標記,\"%s\" 包含逗點" #, python-format msgid "Invalid tag, \"%s\" is longer than 80 characters" msgstr "無效的標記,\"%s\" 的長度超過 80 個字元" #, python-format msgid "Invalid tag, \"%s\" is not a string" msgstr "無效的標記,\"%s\" 不是字串" #, python-format msgid "Invalid tags, not a list: %s" msgstr "無效的標記,不是清單:%s" #, python-format msgid "Invalid template type \"%(value)s\", valid types are: cfn, hot." msgstr "無效的範本類型 \"%(value)s\",有效的類型為:cfn 和 hot。" #, python-format msgid "Invalid timeout value %s" msgstr "無效的逾時值 %s" #, python-format msgid "Invalid timezone: %s" msgstr "無效的時區:%s" #, python-format msgid "Invalid type (%s)" msgstr "無效的類型 (%s)" msgid "Ip allocation pools and their ranges." msgstr "IP 配置儲存區及其範圍。" msgid "Ip of the subnet's gateway." msgstr "子網路閘道的 IP。" msgid "Ip version for the subnet." msgstr "子網路的 IP 版本。" msgid "Ip_version for this firewall rule." msgstr "此防火牆規則的 Ip_version。" msgid "It defines an executor to which task action should be sent to." msgstr "它會定義一個應該向其傳送作業動作的執行程式。" #, python-format msgid "Items to join must be string, map or list not %s" msgstr "要結合的項目必須是字串、對映或清單,而不是 %s" #, python-format msgid "Items to join must be string, map or list. %s failed json serialization" msgstr "要結合的項目必須是字串、對映或清單。%s 無法執行 JSON 序列化" #, python-format msgid "Items to join must be strings not %s" msgstr "要結合的項目必須是字串,而不是 %s" #, python-format msgid "" "JSON body size (%(len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "JSON 主體大小(%(len)s 個位元組)超出容許的大小上限(%(limit)s 個位元組)。" msgid "JSON data that was uploaded via the SwiftSignalHandle." msgstr "透過 SwiftSignalHandle 上傳的 JSON 資料。" msgid "" "JSON serialized map that includes the endpoint, token and/or other " "attributes the client must use for signalling this handle. The contents of " "this map depend on the type of signal selected in the signal_transport " "property." msgstr "" "JSON 序列化對映,其中包含用戶端在傳送此控點信號時必須使用的端點、記號及/或其" "他屬性。此對映的內容依賴於 signal_transport 內容中選取的信號類型。" msgid "" "JSON string containing data associated with wait condition signals sent to " "the handle." msgstr "包含資料的 JSON 字串,該資料與傳送以進行處理的等待條件信號相關聯。" msgid "" "Key used to encrypt authentication info in the database. Length of this key " "must be 32 characters." msgstr "用於對資料庫中的鑑別資訊進行加密的金鑰。此金鑰的長度應該是 32 個字元。" msgid "Key/Value pairs to extend the capabilities of the flavor." msgstr "用於延伸特性功能的鍵值組。" msgid "Key/value pairs associated with the volume in raw dict form." msgstr "與磁區相關聯的鍵值組(採用原始字典格式)。" msgid "Key/value pairs associated with the volume." msgstr "與磁區相關聯的鍵值組。" msgid "Key/value pairs to associate with the volume." msgstr "要與磁區產生關聯的鍵值組。" msgid "Keypair added to instances to make them accessible for user." msgstr "已將金鑰組新增至實例,讓使用者可以存取這些實例。" msgid "Keypair secret key." msgstr "金鑰組秘密金鑰。" msgid "" "Keystone domain ID which contains heat template-defined users. If this " "option is set, stack_user_domain_name option will be ignored." msgstr "" "包含 Heat 範本定義使用者的 Keystone 網域 ID。如果設定此選項,將忽略 " "stack_user_domain_name 選項。" msgid "" "Keystone domain name which contains heat template-defined users. If " "`stack_user_domain_id` option is set, this option is ignored." 
msgstr "" "包含 Heat 範本定義使用者的 Keystone 網域名稱。如果設定 " "'stack_user_domain_id' 選項,將忽略此選項。" msgid "Keystone domain." msgstr "Keystone 網域。" #, python-format msgid "" "Keystone has more than one service with same name %(service)s. Please use " "service id instead of name" msgstr "" "Keystone 包含多個具有相同名稱 %(service)s 的服務。請使用服務 ID,而不使用名稱" msgid "Keystone password for stack_domain_admin user." msgstr "stack_domain_admin 使用者的 Keystone 密碼。" msgid "Keystone project." msgstr "Keystone 專案。" msgid "Keystone role for heat template-defined users." msgstr "Heat 範本定義使用者的 Keystone 角色。" msgid "Keystone role." msgstr "Keystone 角色。" msgid "Keystone user group." msgstr "Keystone 使用者群組。" msgid "Keystone user groups." msgstr "Keystone 使用者群組。" msgid "Keystone user is enabled or disabled." msgstr "已啟用或已停用 Keystone 使用者。" msgid "" "Keystone username, a user with roles sufficient to manage users and projects " "in the stack_user_domain." msgstr "" "Keystone 使用者名稱,該使用者所具有的角色足以管理stack_user_domain 中的使用者" "和專案。" msgid "L2 segmentation strategy on the external side of the network gateway." msgstr "網路閘道外部端的 L2 分段策略。" msgid "LBaaS provider to implement this load balancer instance." msgstr "用來實作此負載平衡器實例的 LBaaS 提供者。" msgid "Length of OS_PASSWORD after encryption exceeds Heat limit (255 chars)" msgstr "加密之後的 OS_PASSWORD 長度超出 Heat 限制(255 個字元)" msgid "Length of the string to generate." msgstr "要產生的字串長度。" msgid "" "Length property cannot be smaller than combined character class and " "character sequence minimums" msgstr "長度內容不能小於所結合的字元類別和字元序列下限" msgid "Level of access that need to be provided for guest." msgstr "需要提供給訪客的存取層次。" msgid "" "Lifecycle actions to which the configuration applies. The string values " "provided for this property can include the standard resource actions CREATE, " "DELETE, UPDATE, SUSPEND and RESUME supported by Heat." msgstr "" "配置所套用至的生命週期動作。對此內容提供的字串值可以包括 Heat 支援的標準資源" "動作「建立」、「刪除」、「更新」、「暫停」及「回復」。" msgid "List of LoadBalancer resources." msgstr "LoadBalancer 資源的清單。" msgid "List of Security Groups assigned on current LB." msgstr "現行 LB 上指派的安全群組清單。" msgid "List of TLS container references for SNI." msgstr "SNI 的 TLS 儲存器參照清單。" msgid "List of database instances." msgstr "資料庫實例清單。" msgid "List of databases to be created on DB instance creation." msgstr "要在建立 DB 實例時建立的資料庫清單。" msgid "List of directories to search for plug-ins." msgstr "要在其中搜尋外掛程式的目錄清單。" msgid "List of dns nameservers." msgstr "DNS 名稱伺服器的清單。" msgid "List of firewall rules in this firewall policy." msgstr "此防火牆原則中的防火牆規則清單。" msgid "List of health monitors associated with the pool." msgstr "與儲存區相關聯的性能監視器清單。" msgid "List of hosts to join aggregate." msgstr "要結合聚集的主機清單。" msgid "List of manila shares to be mounted." msgstr "要裝載的 Manila 共用項目清單。" msgid "List of network interfaces to create on instance." msgstr "要在實例上建立的網路介面清單。" msgid "List of processes to enable anti-affinity for." msgstr "要對其啟用反親緣性的程序清單。" msgid "List of processes to run on every node." msgstr "要在每個節點上執行的程序清單。" msgid "List of role assignments." msgstr "角色指派的清單。" msgid "List of security group IDs associated with this interface." msgstr "與此介面相關聯的安全群組 ID 清單。" msgid "List of security group egress rules." msgstr "安全群組出口規則的清單。" msgid "List of security group ingress rules." msgstr "安全群組入口規則的清單。" msgid "" "List of security group names or IDs to assign to this Node Group template." msgstr "要指派給此「節點群組」範本的安全群組名稱或 ID 清單。" msgid "" "List of security group names or IDs. Cannot be used if neutron ports are " "associated with this server; assign security groups to the ports instead." 
msgstr "" "安全群組名稱或 ID 的清單。如果 Neutron 埠已與此伺服器產生關聯,則無法使用;請" "改為將安全群組指派給埠。" msgid "List of security group rules." msgstr "安全群組規則的清單。" msgid "List of subnet prefixes to assign." msgstr "要指派的子網路字首清單。" msgid "List of tags associated with this interface." msgstr "與此介面相關聯的標籤清單。" msgid "List of tags to attach to the instance." msgstr "要連接至實例的標籤清單。" msgid "List of tags to attach to this resource." msgstr "要連接至此資源的標籤清單。" msgid "List of tags to be attached to this resource." msgstr "要連接至此資源的標籤清單。" msgid "" "List of tasks which should be executed before this task. Used only in " "reverse workflows." msgstr "應該在此作業之前執行的作業清單。僅用在逆向工作流程中。" msgid "" "List of tasks which will run after the task has completed regardless of " "whether it is successful or not." msgstr "作業清單,那些作業將在該作業完成(不論是否成功)之後執行。" msgid "List of tasks which will run after the task has completed successfully." msgstr "作業清單,那些作業將在該作業順利完成之後執行。" msgid "" "List of tasks which will run after the task has completed with an error." msgstr "作業清單,那些作業將在該作業完成但發生錯誤之後執行。" msgid "List of users to be created on DB instance creation." msgstr "要在建立 DB 實例時建立的使用者清單。" msgid "" "List of workflows' executions, each of them is a dictionary with information " "about execution. Each dictionary returns values for next keys: id, " "workflow_name, created_at, updated_at, state for current execution state, " "input, output." msgstr "" "工作流程的執行清單,其中每一個都是含有執行相關資訊的字典。每一個字典都傳回後" "續關鍵字的值:ID、workflow_name、created_at、updated_at、現行執行狀態的狀態、" "輸入、輸出。" msgid "Listener associated with this pool." msgstr "與此儲存區相關聯的接聽器。" msgid "" "Local path on each cluster node on which to mount the share. Defaults to '/" "mnt/{share_id}'." msgstr "" "在其中裝載共用項目之每一個叢集節點上的本端路徑。預設為 '/mnt/{share_id}'。" msgid "Location of the SSL certificate file to use for SSL mode." msgstr "要用於 SSL 模式的 SSL 憑證檔位置。" msgid "Location of the SSL key file to use for enabling SSL mode." msgstr "要用於啟用 SSL 模式的 SSL 金鑰檔位置。" msgid "MAC address of the port." msgstr "埠的 MAC 位址。" msgid "MAC address to allow through this port." msgstr "容許通過此埠的 MAC 位址。" msgid "Map between role with either project or domain." msgstr "角色與專案或網域之間的對映。" msgid "" "Map containing options specific to the configuration management tool used by " "this resource." msgstr "此對映包含此資源所使用之配置管理工具的專用選項。" msgid "" "Map representing the cloud-config data structure which will be formatted as " "YAML." msgstr "用來表示雲端配置資料結構的對映,將格式化為YAML。" msgid "" "Map representing the configuration data structure which will be serialized " "to JSON format." msgstr "此對映表示將序列化為 JSON 格式的配置資料結構。" msgid "Max bandwidth in kbps." msgstr "頻寬上限 (kbps)。" msgid "Max burst bandwidth in kbps." msgstr "激增頻寬上限 (kbps)。" msgid "Max size of the cluster." msgstr "叢集大小上限。" #, python-format msgid "Maximum %s is 1 hour." msgstr "上限 %s 為 1 小時。" msgid "Maximum depth allowed when using nested stacks." msgstr "使用巢狀堆疊時容許的深度上限。" msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs)." msgstr "" "要接受的訊息標頭行大小上限。如果使用大記號(通常是那些由 Keystone 第 3 版 " "API 透過大型服務型錄所產生的記號),則可能需要增加 max_header_line 值。" msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs.)" msgstr "" "要接受的訊息標頭行大小上限。如果使用大記號(通常是那些由 Keystone 第 3 版 " "API 透過大型服務型錄所產生的記號),則可能需要增加 max_header_line 值。" msgid "Maximum number of instances in the group." 
msgstr "群組中的實例數目上限。" msgid "Maximum number of resources in the cluster. -1 means unlimited." msgstr "叢集中的資源數目上限。-1 表示無限。" msgid "Maximum number of resources in the group." msgstr "群組中的資源數目上限。" msgid "Maximum number of stacks any one tenant may have active at one time." msgstr "任何一個承租人一次可具有的作用中堆疊數目上限。" msgid "Maximum prefix size that can be allocated from the subnet pool." msgstr "可從子網路儲存區配置的字首大小上限。" msgid "" "Maximum raw byte size of JSON request body. Should be larger than " "max_template_size." msgstr "JSON 要求內文的原始位元組大小上限。應該大於 max_template_size。" msgid "Maximum raw byte size of any template." msgstr "任何範本的原始位元組大小上限。" msgid "Maximum resources allowed per top-level stack. -1 stands for unlimited." msgstr "每個最上層堆疊所容許的資源數上限。-1 表示沒有限制。" msgid "Maximum resources per stack exceeded." msgstr "已超出每個堆疊的資源數上限。" msgid "" "Maximum transmission unit size (in bytes) for the ipsec site connection." msgstr "IPSec 網站連線的最大傳輸單位大小(以位元組為單位)。" msgid "Member list items must be strings" msgstr "成員清單項目必須是字串" msgid "Member list must be a list" msgstr "成員清單必須是清單" msgid "Members associated with this pool." msgstr "與此儲存區相關聯的成員。" msgid "Memory in MB for the flavor." msgstr "用於該特性的記憶體(以 MB 為單位)。" #, python-format msgid "Message: %(message)s, Code: %(code)s" msgstr "訊息:%(message)s,程式碼:%(code)s" msgid "Metadata format invalid" msgstr "meta 資料格式無效" msgid "Metadata key-values defined for cluster." msgstr "為叢集定義的 meta 資料鍵值。" msgid "Metadata key-values defined for node." msgstr "為節點定義的 meta 資料鍵值。" msgid "Metadata key-values defined for profile." msgstr "為設定檔定義的 meta 資料鍵值。" msgid "Metadata key-values defined for share." msgstr "為共用項目定義的 meta 資料鍵值。" msgid "Meter name watched by the alarm." msgstr "警示所監看的計量名稱。" msgid "" "Meter should match this resource metadata (key=value) additionally to the " "meter_name." msgstr "計量應該將此資源 meta 資料 (key=value) 與meter_name 進行額外比對。" msgid "Meter statistic to evaluate." msgstr "要評估的計量統計資料。" msgid "Method of implementation of session persistence feature." msgstr "階段作業持續性功能的實作方法。" msgid "Metric name watched by the alarm." msgstr "警示所監看的度量名稱。" msgid "Min size of the cluster." msgstr "叢集大小下限。" msgid "MinSize can not be greater than MaxSize" msgstr "MinSize 不能大於 MaxSize" msgid "Minimum number of instances in the group." msgstr "群組中的實例數目下限。" msgid "Minimum number of resources in the cluster." msgstr "叢集中的資源數目下限。" msgid "Minimum number of resources in the group." msgstr "群組中的資源數目下限。" msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "PercentChangeInCapacity for the AdjustmentType property." msgstr "" "當 AutoScaling 群組擴充或縮減時,所新增或移除的資源數目下限。僅當為 " "AdjustmentType 內容指定 PercentChangeInCapacity 時,才可以使用此項。" msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "percent_change_in_capacity for the adjustment_type property." msgstr "" "當 AutoScaling 群組擴充或縮減時,所新增或移除的資源數目下限。僅當為 " "adjustment_type 內容指定 percent_change_in_capacity 時,才可以使用此項。" #, python-format msgid "Missing mandatory (%s) key from mark unhealthy request" msgstr "標記狀況不良要求中遺漏必要的 (%s) 索引鍵" #, python-format msgid "Missing parameter type for parameter: %s" msgstr "遺漏了參數的參數類型:%s" #, python-format msgid "Missing required credential: %(required)s" msgstr "遺漏了必要認證:%(required)s" msgid "Mistral resource validation error" msgstr "Mistral 資源驗證錯誤" msgid "Monasca notification." 
msgstr "Monasca 通知。" msgid "Multiple actions specified" msgstr "指定了多個動作" #, python-format msgid "Multiple physical resources were found with name (%(name)s)." msgstr "找到多個名稱為 (%(name)s) 的實體資源。" #, python-format msgid "Multiple routers found with name %s" msgstr "找到多個名稱為 %s 的路由器" msgid "Must specify 'InstanceId' if you specify 'EIP'." msgstr "如果指定 'EIP',則必須指定 'InstanceId'。" msgid "Name for the Sahara Cluster Template." msgstr "Sahara 叢集範本的名稱。" msgid "Name for the Sahara Node Group Template." msgstr "Sahara 節點群組範本的名稱。" msgid "Name for the aggregate." msgstr "聚集的名稱。" msgid "Name for the availability zone." msgstr "可用性區域的名稱。" msgid "" "Name for the container. If not specified, a unique name will be generated." msgstr "儲存器的名稱。如果未指定,則將產生唯一名稱。" msgid "Name for the firewall policy." msgstr "防火牆原則的名稱。" msgid "Name for the firewall rule." msgstr "防火牆規則的名稱。" msgid "Name for the firewall." msgstr "防火牆的名稱。" msgid "Name for the ike policy." msgstr "IKE 原則的名稱。" msgid "" "Name for the image. The name of an image is not unique to a Image Service " "node." msgstr "映像檔的名稱。映像檔的名稱對「映像檔服務」節點不是唯一的。" msgid "Name for the ipsec policy." msgstr "IPSec 原則的名稱。" msgid "Name for the ipsec site connection." msgstr "IPSec 網站連線的名稱。" msgid "Name for the time constraint." msgstr "時間限制的名稱。" msgid "Name for the vpn service." msgstr "VPN 服務的名稱。" msgid "" "Name of attribute to compare. Names of the form metadata.user_metadata.X or " "metadata.metering.X are equivalent to what you can address through " "matching_metadata; the former for Nova meters, the latter for all others. To " "see the attributes of your Samples, use `ceilometer --debug sample-list`." msgstr "" "要比較的屬性名稱。格式為 metadata.user_metadata.X 或 metadata.metering.X 的名" "稱相當於您可以透過matching_metadata 取得的內容;前者用於 Nova 計量,後者用於" "所有其他類型。若要查看您的 Sample 屬性,請使用 'ceilometer --debug sample-" "list'。" msgid "Name of key to use for substituting inputs during deployment." msgstr "部署期間要用於替代輸入的索引鍵名稱。" msgid "Name of keypair to inject into the server." msgstr "要注入伺服器的金鑰組名稱。" msgid "Name of keystone endpoint." msgstr "Keystone 端點的名稱。" msgid "Name of keystone group." msgstr "Keystone 群組的名稱。" msgid "Name of keystone project." msgstr "Keystone 專案的名稱。" msgid "Name of keystone role." msgstr "Keystone 角色的名稱。" msgid "Name of keystone service." msgstr "Keystone 服務的名稱。" msgid "Name of keystone user." msgstr "Keystone 使用者的名稱。" msgid "Name of registered datastore type." msgstr "已登錄資料儲存庫類型的名稱。" msgid "Name of the DB instance to create." msgstr "要建立的 DB 實例名稱。" msgid "Name of the Node group." msgstr "節點群組的名稱。" msgid "" "Name of the action associated with the task. Either action or workflow may " "be defined in the task." msgstr "與作業相關聯之動作的名稱。可以在作業中定義動作或工作流程。" msgid "Name of the administrative user to use on the server." msgstr "要在伺服器上使用的管理使用者名稱。" msgid "Name of the alarm. By default, physical resource name is used." msgstr "警示的名稱。依預設,將使用實體資源名稱。" msgid "Name of the availability zone for DB instance." msgstr "DB 實例的可用性區域名稱。" msgid "Name of the availability zone for server placement." msgstr "用於伺服器放置的可用性區域名稱。" msgid "Name of the cluster to create." msgstr "要建立的叢集名稱。" msgid "Name of the cluster. By default, physical resource name is used." msgstr "叢集的名稱。依預設,將使用實體資源名稱。" msgid "Name of the cookie, required if type is APP_COOKIE." msgstr "類型為 APP_COOKIE 時所需的 Cookie 名稱。" msgid "Name of the cron trigger." msgstr "CRON 觸發程式的名稱。" msgid "Name of the current action being deployed" msgstr "要部署的現行動作名稱" msgid "Name of the data source." 
msgstr "資料來源的名稱。" msgid "" "Name of the derived config associated with this deployment. This is used to " "apply a sort order to the list of configurations currently deployed to a " "server." msgstr "" "與此部署相關聯的衍生配置名稱。這是用來將排序套用至目前已在伺服器中部署的配置" "清單。" msgid "" "Name of the engine node. This can be an opaque identifier. It is not " "necessarily a hostname, FQDN, or IP address." msgstr "" "引擎節點的名稱。它可以是一個不明確的 ID。不一定非要為主機名稱、FQDN 或 IP 位" "址。" msgid "Name of the input." msgstr "輸入的名稱。" msgid "Name of the job binary." msgstr "工作二進位檔的名稱。" msgid "Name of the metering label." msgstr "計量標籤的名稱。" msgid "Name of the network owning the port." msgstr "擁有埠的網路名稱。" msgid "" "Name of the network owning the port. The value is typically network:" "floatingip or network:router_interface or network:dhcp." msgstr "" "擁有該埠的網路名稱。此值通常是network:floatingip、network:router_interface " "或 network:dhcp。" msgid "Name of the notification. By default, physical resource name is used." msgstr "通知的名稱。依預設,將使用實體資源名稱。" msgid "Name of the output." msgstr "輸出的名稱。" msgid "Name of the pool." msgstr "儲存區的名稱。" msgid "Name of the queue instance to create." msgstr "要建立的佇列實例名稱。" msgid "" "Name of the registered datastore version. It must exist for provided " "datastore type. Defaults to using single active version. If several active " "versions exist for provided datastore type, explicit value for this " "parameter must be specified." msgstr "" "已登錄資料儲存庫版本的名稱。對於所提供的資料儲存庫類型,該名稱必須存在。預設" "是使用單一作用中的版本。對於所提供的資料儲存庫類型,如果存在數個作用中的版" "本,則必須為此參數指定明確的值。" msgid "Name of the secret." msgstr "密碼的名稱。" msgid "Name of the senlin node. By default, physical resource name is used." msgstr "Senlin 節點的名稱。依預設,將使用實體資源名稱。" msgid "Name of the senlin policy. By default, physical resource name is used." msgstr "Senlin 原則的名稱。依預設,將使用實體資源名稱。" msgid "Name of the senlin profile. By default, physical resource name is used." msgstr "Senlin 設定檔的名稱。依預設,將使用實體資源名稱。" msgid "" "Name of the senlin receiver. By default, physical resource name is used." msgstr "Senlin 接收端的名稱。依預設,將使用實體資源名稱。" msgid "Name of the server." msgstr "伺服器的名稱。" msgid "Name of the share network." msgstr "共用網路的名稱。" msgid "Name of the share type." msgstr "共用類型的名稱。" msgid "Name of the stack." msgstr "堆疊名稱。" msgid "Name of the subnet pool." msgstr "子網路儲存區的名稱。" msgid "Name of the vip." msgstr "VIP 的名稱。" msgid "Name of the volume type." msgstr "磁區類型的名稱。" msgid "Name of the volume." msgstr "磁區的名稱。" msgid "" "Name of the workflow associated with the task. Can be defined by intrinsic " "function get_resource or by name of the referenced workflow, i.e. " "{ workflow: wf_name } or { workflow: { get_resource: wf_name }}. Either " "action or workflow may be defined in the task." msgstr "" "與作業相關聯之工作流程的名稱。可以透過自身的函數 get_resource 進行定義,也可" "以透過所參照之工作流程的名稱進行定義,例如 { workflow: wf_name } 或 " "{ workflow: { get_resource: wf_name }}。可以在作業中定義動作或工作流程。" msgid "Name of this Load Balancer." msgstr "此負載平衡器的名稱。" msgid "Name of this deployment resource in the stack" msgstr "堆疊中此部署資源的名稱" msgid "Name of this listener." msgstr "此監視器的名稱。" msgid "Name of this pool." msgstr "此儲存區的名稱。" msgid "Name or ID Nova flavor for the nodes." msgstr "節點之 Nova 特性的名稱或 ID。" msgid "Name or ID of network to create a port on." msgstr "要在其上建立埠的網路名稱或 ID。" msgid "Name or ID of senlin profile to create this node." msgstr "用於建立此節點之 Senlin 設定檔的名稱或 ID。" msgid "" "Name or ID of shared file system snapshot that will be restored and created " "as a new share." msgstr "將作為新共用項目進行還原和建立之共用檔案系統 Snapshot 的名稱或 ID。" msgid "" "Name or ID of shared filesystem type. 
Types defines some share filesystem " "profiles that will be used for share creation." msgstr "" "共用檔案系統類型的名稱或 ID。類型定義部分將用於建立共用項目的共用檔案系統設定" "檔。" msgid "Name or ID of shared network defined for shared filesystem." msgstr "為共用檔案系統定義之共用網路的名稱或 ID。" msgid "Name or ID of target cluster." msgstr "目標叢集的名稱或 ID。" msgid "Name or ID of the load balancing pool." msgstr "負載平衡儲存區的名稱或 ID。" msgid "Name or Id of keystone region." msgstr "Keystone 區域的名稱或 ID。" msgid "Name or Id of keystone service." msgstr "Keystone 服務的名稱或 ID。" #, python-format msgid "" "Name or UUID of Neutron port to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "要將此 NIC 連接至之 Neutron 埠的名稱或 UUID。必須指定 %(port)s 或%(net)s。" msgid "Name or UUID of network." msgstr "網路的名稱或 UUID。" msgid "" "Name or UUID of the Neutron floating IP network or name of the Nova floating " "ip pool to use. Should not be provided when used with Nova-network that auto-" "assign floating IPs." msgstr "" "要使用之 Neutron 浮動 IP 網路的名稱或 UUID,或 Nova 浮動 IP 儲存區的名稱。當" "與自動指派浮動 IP 的 Nova 網路搭配使用時,不應提供此項。" msgid "Name or UUID of the image used to boot Hadoop nodes." msgstr "用於啟動 Hadoop 節點之映像檔的名稱或 UUID。" #, python-format msgid "" "Name or UUID of the network to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "要將此 NIC 連接至之網路的名稱或 UUID。必須指定 %(port)s 或%(net)s。" msgid "Name or id of keystone domain." msgstr "Keystone 網域的名稱或 ID。" msgid "Name or id of keystone group." msgstr "Keystone 群組的名稱或 ID。" msgid "Name or id of keystone user." msgstr "Keystone 使用者的名稱或 ID。" msgid "Name or id of volume type (OS::Cinder::VolumeType)." msgstr "磁區類型的名稱或 ID (OS::Cinder::VolumeType)。" msgid "Names of databases that those users can access on instance creation." msgstr "建立實例時那些使用者可以存取的資料庫名稱。" msgid "" "Namespace to group this software config by when delivered to a server. This " "may imply what configuration tool is going to perform the configuration." msgstr "" "交付至伺服器時,此軟體配置據以分組的名稱空間。這可能暗示哪個配置工具將執行配" "置。" msgid "Need more arguments" msgstr "需要更多引數" msgid "Negotiation mode for the ike policy." msgstr "IKE 原則的協議模式。" #, python-format msgid "Neither image nor bootable volume is specified for instance %s" msgstr "沒有給實例 %s 指定映像檔或可啟動磁區" msgid "Network in CIDR notation." msgstr "CIDR 表示法中的網路。" msgid "Network interface ID to associate with EIP." msgstr "要與 EIP 產生關聯的網路介面 ID。" msgid "Network interfaces to associate with instance." msgstr "要與實例產生關聯的網路介面。" #, python-format msgid "" "Network this port belongs to. If you plan to use current port to assign " "Floating IP, you should specify %(fixed_ips)s with %(subnet)s. Note if this " "changes to a different network update, the port will be replaced." msgstr "" "此埠所屬的網路。如果您計劃使用現行埠來指派浮動 IP,則您應指定 %(fixed_ips)s " "及 %(subnet)s。附註:如果這變更為不同的網路更新,則將取代該埠。" msgid "Network to allocate floating IP from." msgstr "從其配置浮動 IP 的網路。" msgid "Neutron network id." msgstr "Neutron 網路 ID。" msgid "Neutron subnet id." msgstr "Neutron 子網路 ID。" msgid "Nexthop IP address." msgstr "下一個中繼站 IP 位址。" #, python-format msgid "No %s specified" msgstr "沒有指定任何 %s" msgid "No Template provided." 
msgstr "未提供範本。" msgid "No action specified" msgstr "未指定動作" msgid "No constraint expressed" msgstr "未表示限制項" #, python-format msgid "" "No content found in the \"files\" section for %(fn_name)s path: %(file_key)s" msgstr "在 %(fn_name)s 路徑的 \"files\" 區段中找不到任何內容:%(file_key)s" #, python-format msgid "No event %s found" msgstr "找不到事件 %s" #, python-format msgid "No events found for resource %s" msgstr "找不到資源 %s 的事件" msgid "No resource data found" msgstr "找不到任何資源資料" #, python-format msgid "No stack exists with id \"%s\"" msgstr "不存在 ID 為 \"%s\" 的堆疊" msgid "No stack name specified" msgstr "未指定堆疊名稱" msgid "No template specified" msgstr "未指定範本" msgid "No volume service available." msgstr "沒有可用的磁區服務。" msgid "Node groups." msgstr "節點群組。" msgid "Nodes list in the cluster." msgstr "叢集中的節點清單。" msgid "Non HA routers can only have one L3 agent." msgstr "非高可用性路由器只能具有一個 L3 代理程式。" #, python-format msgid "Non-empty resource type is required for resource \"%s\"" msgstr "資源 \"%s\" 需要非空白資源類型" msgid "Not Implemented." msgstr "未實作。" #, python-format msgid "Not allowed - %(dsver)s without %(dstype)s." msgstr "不容許 - %(dsver)s(不帶 %(dstype)s)。" msgid "Not found" msgstr "找不到" msgid "Not waiting for outputs signal" msgstr "不等待輸出信號" msgid "" "Notional service where encryption is performed For example, front-end. For " "Nova." msgstr "在其中執行加密的抽象服務(例如,前端)。適用於 Nova。" msgid "Nova instance type (flavor)." msgstr "Nova 實例類型(特性)。" msgid "Nova network id." msgstr "Nova 網路 ID。" msgid "Number of VCPUs for the flavor." msgstr "用於該特性的 VCPU 數目。" msgid "Number of backlog requests to configure the socket with." msgstr "要配置給 Socket 的待辦事項要求數目。" msgid "Number of instances in the Node group." msgstr "節點群組中的實例數。" msgid "Number of minutes to wait for this stack creation." msgstr "建立此堆疊要等待的分鐘數。" msgid "Number of periods to evaluate over." msgstr "要進行評估的期間數。" msgid "" "Number of permissible connection failures before changing the member status " "to INACTIVE." msgstr "將成員狀態變更成「非作用中」之前所允許的連線失敗次數。" msgid "Number of remaining executions." msgstr "剩餘的執行作業數目。" msgid "Number of seconds for the DPD delay." msgstr "DPD 延遲的秒數。" msgid "Number of seconds for the DPD timeout." msgstr "DPD 逾時秒數。" msgid "" "Number of times to check whether an interface has been attached or detached." msgstr "檢查某個介面是已連接還是已分離的次數。" msgid "" "Number of times to retry to bring a resource to a non-error state. Set to 0 " "to disable retries." msgstr "將資源變更為無錯誤狀態的重試次數。設定為0 可停用重試功能。" msgid "" "Number of times to retry when a client encounters an expected intermittent " "error. Set to 0 to disable retries." msgstr "用戶端在遇到預期的間歇性錯誤時所重試的次數。設為 0 可停用重試。" msgid "Number of workers for Heat service." msgstr "Heat 服務的工作者數目。" msgid "" "Number of workers for Heat service. Default value 0 means, that service will " "start number of workers equal number of cores on server." msgstr "" "Heat 服務的工作程式數目。預設值 0 表示,服務將啟動與伺服器上的核心數目相等的" "工作程式數目。" msgid "Number value for delay during resolve constraint." msgstr "解析限制期間延遲的數值。" msgid "Number value for timeout during resolving output value." msgstr "解析輸出值期間逾時的數值。" #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "物件動作 %(action)s 失敗,原因:%(reason)s" msgid "" "On update, enables heat to collect existing resource properties from reality " "and converge to updated template." msgstr "更新時,容許 Heat 將現實和聚合中的現有資源內容收集到已更新的範本。" msgid "One of predefined health monitor types." msgstr "其中一個預先定義的性能監視器類型。" msgid "One or more listeners for this load balancer." 
msgstr "此負載平衡器的一個以上接聽器。" msgid "Only ISO 8601 duration format of the form PT#H#M#S is supported." msgstr "僅支援格式為 PT#H#M#S 的 ISO 8601 持續時間格式。" msgid "Only Templates with an extension of .yaml or .template are supported" msgstr "僅支援副檔名為 .yaml 或 .template 的範本" #, python-format msgid "Only integer is acceptable by '%(name)s'." msgstr "'%(name)s' 僅接受整數。" #, python-format msgid "Only non-zero integer is acceptable by '%(name)s'." msgstr "'%(name)s' 僅接受非零整數。" msgid "Operator used to compare specified statistic with threshold." msgstr "用來將所指定統計資料與臨界值相比較的運算子。" msgid "Optional CA cert file to use in SSL connections." msgstr "要用在 SSL 連線中的選用 CA 憑證檔。" msgid "Optional Nova keypair name." msgstr "選用的 Nova 金鑰組名稱。" msgid "Optional PEM-formatted certificate chain file." msgstr "選用的 PEM 格式憑證鏈檔案。" msgid "Optional PEM-formatted file that contains the private key." msgstr "選用的 PEM 格式檔案(內含私密金鑰)。" msgid "Optional filename to associate with part." msgstr "要與組件產生關聯的選用檔名。" #, python-format msgid "Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s." msgstr "選用的 Heat URL,格式如下:http://0.0.0.0:8004/v1/%(tenant_id)s。" msgid "Optional subtype to specify with the type." msgstr "要與類型一起指定的選用子類型。" msgid "Options for simulating waiting." msgstr "用來模擬等待的選項。" #, python-format msgid "Order '%(name)s' failed: %(code)s - %(reason)s" msgstr "順序 '%(name)s' 失敗:%(code)s - %(reason)s" msgid "Outputs received" msgstr "收到輸出" msgid "Owner of the source security group." msgstr "來源安全群組的擁有者。" msgid "PATCH update to non-COMPLETE stack" msgstr "修補程式更新至不完整堆疊" #, python-format msgid "Parameter '%(name)s' is invalid: %(exp)s" msgstr "參數 '%(name)s' 無效:%(exp)s" msgid "Parameter Groups error" msgstr "參數群組錯誤" msgid "" "Parameter Groups error: parameter_groups.: The grouped parameter key_name " "does not reference a valid parameter." msgstr "" "參數群組錯誤:parameter_groups.:已分組的參數 key_name 未參照有效參數。" msgid "" "Parameter Groups error: parameter_groups.: The key_name parameter must be " "assigned to one parameter group only." msgstr "" "參數群組錯誤:parameter_groups.:必須只將 key_name 參數指派給一個參數群組。" msgid "" "Parameter Groups error: parameter_groups.: The parameters of parameter group " "should be a list." msgstr "參數群組錯誤:parameter_groups.:參數群組的參數應該是清單。" msgid "" "Parameter Groups error: parameter_groups.Database Group: The InstanceType " "parameter must be assigned to one parameter group only." msgstr "" "參數群組錯誤:parameter_groups.Database 群組:必須只將 InstanceType 參數指派" "給一個參數群組。" msgid "" "Parameter Groups error: parameter_groups.Database Group: The grouped " "parameter SomethingNotHere does not reference a valid parameter." msgstr "" "參數群組錯誤:parameter_groups.Database 群組:已分組的參數 SomethingNotHere " "未參照有效參數。" msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters must " "be provided for each parameter group." msgstr "" "參數群組錯誤:parameter_groups.Server 群組:必須為每一個參數群組提供參數。" msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters of " "parameter group should be a list." msgstr "參數群組錯誤:parameter_groups.Server 群組:參數群組的參數應該是清單。" msgid "" "Parameter Groups error: parameter_groups: The parameter_groups should be a " "list." msgstr "參數群組錯誤:parameter_groups:parameter_groups 應該是清單。" #, python-format msgid "Parameter name in \"%s\" must be string" msgstr "\"%s\" 中的參數名稱必須是字串" #, python-format msgid "Params must be a map, find a %s" msgstr "參數必須是對映,尋找 %s" msgid "Parent network of the subnet." msgstr "子網路的母項網路。" msgid "Parts belonging to this message." 
msgstr "屬於此訊息的組件。" msgid "Password for API authentication" msgstr "用於 API 鑑別的密碼" msgid "Password for accessing the data source URL." msgstr "用於存取資料來源 URL 的密碼。" msgid "Password for accessing the job binary URL." msgstr "用於存取工作二進位檔 URL 的密碼。" msgid "Password for those users on instance creation." msgstr "建立實例時那些使用者的密碼。" msgid "Password of keystone user." msgstr "Keystone 使用者的密碼。" msgid "Password used by user." msgstr "使用者所使用的密碼。" #, python-format msgid "Path components in \"%s\" must be strings" msgstr "\"%s\" 中的路徑元件必須是字串" msgid "Path components in attributes must be strings" msgstr "屬性中的路徑元件必須是字串" msgid "Payload exceeds maximum allowed size" msgstr "有效負載超出了容許的大小上限" msgid "Perfect forward secrecy for the ipsec policy." msgstr "IPSec 原則的完整轉遞保密。" msgid "Perfect forward secrecy in lowercase for the ike policy." msgstr "IKE 原則的完整轉遞保密(小寫)。" msgid "" "Perform a check on the input values passed to verify that each required " "input has a corresponding value. When the property is set to STRICT and no " "value is passed, an exception is raised." msgstr "" "檢查所傳遞的輸入值,以驗證每一個必要輸入是否都具有對應值。當內容設定為 " "STRICT 且未傳遞任何值時,會發生異常狀況。" msgid "Period (seconds) to evaluate over." msgstr "要進行評估的期間(秒)。" msgid "Physical ID of the VPC. Not implemented." msgstr "VPC 的實體 ID。未實作。" #, python-format msgid "" "Plugin %(plugin)s doesn't support the following node processes: " "%(unsupported)s. Allowed processes are: %(allowed)s" msgstr "" "外掛程式 %(plugin)s 不支援下列節點程序:%(unsupported)s。容許的程序為:" "%(allowed)s" msgid "Plugin name." msgstr "外掛程式名稱。" msgid "Policies for removal of resources on update." msgstr "用於在更新時移除資源的原則。" msgid "Policy for rolling updates for this scaling group." msgstr "用於漸進式更新此調整大小群組的原則。" msgid "" "Policy on how to apply a flavor update; either by requesting a server resize " "or by replacing the entire server." msgstr "" "如何套用特性更新的相關原則:透過要求調整伺服器大小,或透過取代整個伺服器。" msgid "" "Policy on how to apply an image-id update; either by requesting a server " "rebuild or by replacing the entire server." msgstr "" "如何套用 image-id 更新的相關原則:透過要求重建伺服器,或透過取代整個伺服器。" msgid "" "Policy on how to respond to a stack-update for this resource. REPLACE_ALWAYS " "will replace the port regardless of any property changes. AUTO will update " "the existing port for any changed update-allowed property." msgstr "" "關於如何回應此資源之堆疊更新的原則。REPLACE_ALWAYS 將取代埠,而無論是否有任何" "內容變更。AUTO 將更新任何已變更且容許更新之內容的現有埠。" msgid "" "Policy to be processed when doing an update which requires removal of " "specific resources." msgstr "執行需要移除特定資源的更新作業時要處理的原則。" msgid "Pool creation failed" msgstr "儲存區建立失敗" msgid "Pool creation failed due to vip" msgstr "由於 VIP,儲存區建立失敗" msgid "Pool from which floating IP is allocated." msgstr "從中配置浮動 IP 的儲存區。" msgid "Port number on which the servers are running on the members." msgstr "正在成員上執行的伺服器埠號。" msgid "Port on which the pool member listens for requests or connections." msgstr "儲存區成員在其上接聽要求或連線的埠。" msgid "Port security enabled of the network." msgstr "已啟用網路的埠安全。" msgid "Port security enabled of the port." msgstr "已啟用埠的埠安全。" msgid "Position of the rule within the firewall policy." msgstr "防火牆原則中規則的位置。" msgid "Pre-shared key string for the ipsec site connection." msgstr "IPSec 網站連線的預先共用金鑰字串。" msgid "Prefix length for subnet allocation from subnet pool." msgstr "子網路儲存區中子網路配置的字首長度。" msgid "Private DNS name of the specified instance." msgstr "所指定實例的專用 DNS 名稱。" msgid "Private IP address of the network interface." msgstr "網路介面的專用 IP 位址。" msgid "Private IP address of the specified instance." 
msgstr "所指定實例的專用 IP 位址。" msgid "Project ID" msgstr "專案識別號" msgid "" "Projects to add volume type access to. NOTE: This property is only supported " "since Cinder API V2." msgstr "" "要向其新增磁區類型存取的專案。附註:自 Cinder API 第 2 版以來,才支援此內容。" #, python-format msgid "" "Properties %(algorithm)s and %(bit_length)s are required for %(type)s type " "of order." msgstr "%(type)s 類型的順序需要內容 %(algorithm)s 和 %(bit_length)s。" msgid "Properties for profile." msgstr "設定檔的內容。" msgid "Properties of this policy." msgstr "此原則的內容。" msgid "Properties to pass to each resource being created in the chain." msgstr "要傳遞至正在鏈中建立之每一個資源的內容。" #, python-format msgid "Property %(cookie)s is required when %(sp)s type is set to %(app)s." msgstr "當 %(sp)s 類型設為 %(app)s 時,需要內容 %(cookie)s。" #, python-format msgid "" "Property %(cookie)s must NOT be specified when %(sp)s type is set to %(ip)s." msgstr "當 %(sp)s 類型設為 %(ip)s 時,不得指定內容 %(cookie)s。" #, python-format msgid "" "Property %(key)s updated value %(new)s should be superset of existing value " "%(old)s." msgstr "內容 %(key)s 更新後的值 %(new)s 應該是現有值 %(old)s 的超集。" #, python-format msgid "" "Property %(n)s type mismatch between facade %(type)s (%(fs_type)s) and " "provider (%(ps_type)s)" msgstr "" "Facade %(type)s (%(fs_type)s) 與提供者 (%(ps_type)s 之間的內容 %(n)s 類型不符" #, python-format msgid "Property %(policies)s and %(item)s cannot be used both at one time." msgstr "不能同時使用內容 %(policies)s 和 %(item)s。" #, python-format msgid "Property %(ref)s required when protocol is %(term)s." msgstr "當通訊協定為 %(term)s 時,需要內容 %(ref)s。" #, python-format msgid "Property %s not assigned" msgstr "未指派內容 %s" #, python-format msgid "Property %s not implemented yet" msgstr "尚未實作內容 %s" msgid "" "Property cookie_name is required when session_persistence type is set to " "APP_COOKIE." msgstr "當 session_persistence 類型設為 APP_COOKIE 時,需要內容 cookie_name。" msgid "" "Property cookie_name is required, when session_persistence type is set to " "APP_COOKIE." msgstr "session_persistence 類型設為 APP_COOKIE 時需要內容 cookie_name。" msgid "" "Property cookie_name must NOT be specified when session_persistence type is " "set to SOURCE_IP." msgstr "" "當 session_persistence 類型設為 SOURCE_IP 時,不得指定內容 cookie_name。" msgid "Property values for the resources in the group." msgstr "群組中資源的內容值。" msgid "Protocol for balancing." msgstr "用於平衡的通訊協定。" msgid "Protocol for the firewall rule." msgstr "防火牆規則的通訊協定。" msgid "Protocol of the pool." msgstr "儲存區的通訊協定。" msgid "Protocol on which to listen for the client traffic." msgstr "要透過其接聽用戶端資料流量的通訊協定。" msgid "Protocol to balance." msgstr "要平衡的通訊協定。" msgid "Protocol value for this firewall rule." msgstr "此防火牆規則的通訊協定值。" msgid "" "Provide access to nodes using other nodes of the cluster as proxy gateways." msgstr "透過將叢集的其他節點用作 Proxy 閘道,來提供對節點的存取。" msgid "" "Provide old encryption key. New encryption key would be used from config " "file." msgstr "請提供舊加密索引鍵。將從配置檔中使用新加密索引鍵。" msgid "Provider for this Load Balancer." msgstr "此負載平衡器的提供者。" msgid "Provider implementing this load balancer instance." msgstr "用來實作此負載平衡器實例的提供者。" #, python-format msgid "Provider requires property %(n)s unknown in facade %(type)s" msgstr "提供者需要 Facade %(type)s 中的不明內容 %(n)s" msgid "Public DNS name of the specified instance." msgstr "所指定實例的公用 DNS 名稱。" msgid "Public IP address of the specified instance." msgstr "所指定實例的公用 IP 位址。" msgid "" "RPC timeout for the engine liveness check that is used for stack locking." msgstr "堆疊鎖定所使用引擎活躍度檢查的 RPC 逾時。" msgid "RX/TX factor." 
msgstr "RX/TX 因數。" #, python-format msgid "Rebuilding server failed, status '%s'" msgstr "重建伺服器時失敗,狀態為 '%s'" msgid "Record name." msgstr "記錄名稱。" #, python-format msgid "Recursion depth exceeds %d." msgstr "遞迴深度超過 %d。" msgid "" "Ref structure that contains the ID of the VPC on which you want to create " "the subnet." msgstr "此參照結構包含您想要在其中建立子網路的 VPC ID。" msgid "Reference to a flavor for creating DB instance." msgstr "用於建立 DB 實例的特性參照。" msgid "Reference to certificate." msgstr "對憑證的參照。" msgid "Reference to intermediates." msgstr "對中間項目的參照。" msgid "Reference to private key passphrase." msgstr "對私密金鑰通行詞組的參照。" msgid "Reference to private key." msgstr "對私密金鑰的參照。" msgid "Reference to public key." msgstr "對公開金鑰的參照。" msgid "Reference to the secret." msgstr "對密碼的參照。" msgid "References to secrets that will be stored in container." msgstr "對將儲存在儲存器中之密碼的參照。" msgid "Region name in which this stack will be created." msgstr "將在其中建立此堆疊的區域名稱。" msgid "Remaining executions." msgstr "剩餘的執行作業。" msgid "Remote branch router identity." msgstr "遠端分支路由器身分。" msgid "Remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "遠端分支路由器公用 IPv4 位址或 IPv6 位址或 FQDN。" msgid "Remote subnet(s) in CIDR format." msgstr "遠端子網路(CIDR 格式)。" msgid "" "Replacement policy used to work around flawed nova/neutron port interaction " "which has been fixed since Liberty." msgstr "" "用於暫行解決錯誤 Nova/Neutron 埠互動(自 Liberty 以來,已得到修正)的取代原" "則。" msgid "Request expired or more than 15mins in the future" msgstr "要求已過期,或者發生時間是距現在超過 15 分鐘的未來時間" #, python-format msgid "Request limit exceeded: %(message)s" msgstr "已超出要求限制:%(message)s" msgid "Request missing required header X-Auth-Url" msgstr "要求遺漏了必要的標頭 X-Auth-Url" msgid "Request was denied due to request throttling" msgstr "要求已由於要求節流控制而遭拒絕" #, python-format msgid "" "Requested plugin '%(plugin)s' doesn't support version '%(version)s'. Allowed " "versions are %(allowed)s" msgstr "" "要求的外掛程式 '%(plugin)s' 不支援 '%(version)s' 版。所容許的版本是 " "%(allowed)s" msgid "" "Required extra specification. Defines if share drivers handles share servers." msgstr "必需的額外規格。定義共用驅動程式是否處理共用伺服器。" #, python-format msgid "Required property %(n)s for facade %(type)s missing in provider" msgstr "提供者中遺漏了 Facade %(type)s 的必要內容 %(n)s" #, python-format msgid "Resizing to '%(flavor)s' failed, status '%(status)s'" msgstr "調整大小至 '%(flavor)s' 失敗,狀態為 '%(status)s'" #, python-format msgid "Resource \"%s\" has no type" msgstr "資源 \"%s\" 沒有任何類型" #, python-format msgid "Resource \"%s\" type is not a string" msgstr "資源類型 \"%s\" 不是字串" #, python-format msgid "Resource %(name)s %(key)s type must be %(typename)s" msgstr "資源 %(name)s %(key)s 類型必須為 %(typename)s" #, python-format msgid "Resource %(name)s is missing \"%(type_key)s\"" msgstr "資源 %(name)s 遺漏了 \"%(type_key)s\"" #, python-format msgid "" "Resource %s's property user_data_format should be set to SOFTWARE_CONFIG " "since there are software deployments on it." msgstr "" "資源 %s 的內容 user_data_format 應該設定為 SOFTWARE_CONFIG,因為該資源上有軟" "體部署。" msgid "Resource ID was not provided." msgstr "未提供資源 ID。" msgid "" "Resource definition for the resources in the group, in HOT format. The value " "of this property is the definition of a resource just as if it had been " "declared in the template itself." msgstr "" "群組中資源的資源定義,採用 HOT 格式。此內容的值是資源的定義,就像它已在範本本" "身中宣告。" msgid "" "Resource definition for the resources in the group. The value of this " "property is the definition of a resource just as if it had been declared in " "the template itself." 
msgstr "" "群組中資源的資源定義。此內容的值是資源的定義,就像它已在範本本身中宣告。" msgid "Resource failed" msgstr "資源失敗" msgid "Resource is not built" msgstr "未建置資源" msgid "Resource name may not contain \"/\"" msgstr "資源名稱不能包含 \"/\"" msgid "Resource type." msgstr "資源類型。" msgid "Resource update already requested" msgstr "已要求資源更新" msgid "Resource with the name requested already exists" msgstr "具有所要求名稱的資源已存在" msgid "" "ResourceInError: resources.remote_stack: Went to status UPDATE_FAILED due to " "\"Remote stack update failed\"" msgstr "" "ResourceInError:resources.remote_stack:已跳址狀態 UPDATE_FAILED,原因:「遠" "端堆疊更新失敗」" #, python-format msgid "Resources must contain Resource. Found a [%s] instead" msgstr "資源必須包含「資源」。但卻找到 [%s]" msgid "" "Resources that users are allowed to access by the DescribeStackResource API." msgstr "容許使用者透過 DescribeStackResourceAPI 來存取的資源。" msgid "Returned status code from the configuration execution." msgstr "從配置執行中傳回的狀態碼。" msgid "Route duplicates an existing route." msgstr "路由與現有路由重複。" msgid "Route table ID." msgstr "遞送表 ID。" msgid "Safety assessment lifetime configuration for the ike policy." msgstr "IKE 原則的安全評量生命期限配置。" msgid "Safety assessment lifetime configuration for the ipsec policy." msgstr "IPSec 原則的安全評量生命期限配置。" msgid "Safety assessment lifetime units." msgstr "安全評量生命期限單位。" msgid "Safety assessment lifetime value in specified units." msgstr "安全評量生命期限值(採用所指定的單位)。" msgid "Scheduler hints to pass to Nova (Heat extension)." msgstr "要傳遞給 Nova 的排定器提示。(Heat 延伸)。" msgid "Schema representing the inputs that this software config is expecting." msgstr "此綱目表示此軟體配置預期的輸入。" msgid "Schema representing the outputs that this software config will produce." msgstr "此綱目表示此軟體配置將產生的輸出。" #, python-format msgid "Schema valid only for %(ltype)s or %(mtype)s, not %(utype)s" msgstr "綱目僅適用於 %(ltype)s 或 %(mtype)s,不適用於 %(utype)s" msgid "" "Scope of flavor accessibility. Public or private. Default value is True, " "means public, shared across all projects." msgstr "" "特性的可存取性範圍。公用或專用。預設值為 True,這表示專用,在所有專案之間共" "用。" #, python-format msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden." msgstr "禁止從承租人 %(actual)s 搜尋承租人 %(target)s。" msgid "Seconds between running periodic tasks." msgstr "兩次執行定期作業之間的秒數。" msgid "Seconds to wait after a create. Defaults to the global wait_secs." msgstr "在執行建立動作之後要等待的秒數。預設為廣域 wait_secs。" msgid "Seconds to wait after a delete. Defaults to the global wait_secs." msgstr "在執行刪除動作之後要等待的秒數。預設為廣域 wait_secs。" msgid "Seconds to wait after an action (-1 is infinite)." msgstr "在執行某個動作之後要等待的秒數(-1 表示無限)。" msgid "Seconds to wait after an update. Defaults to the global wait_secs." msgstr "在執行更新動作之後要等待的秒數。預設為廣域 wait_secs。" #, python-format msgid "Section %s can not be accessed directly." msgstr "無法直接存取區段 %s。" #, python-format msgid "Security Group \"%(group_name)s\" not found" msgstr "找不到安全群組 \"%(group_name)s\"" msgid "Security group IDs to assign." msgstr "要指派的安全群組 ID。" msgid "Security group IDs to associate with this port." msgstr "要與此埠產生關聯的安全群組 ID。" msgid "Security group names to assign." msgstr "要指派的安全群組名稱。" msgid "Security groups cannot be assigned the name \"default\"." msgstr "無法指派名稱 \"default\" 給安全群組。" msgid "Security service IP address or hostname." msgstr "安全服務 IP 位址或主機名稱。" msgid "Security service description." msgstr "安全服務說明。" msgid "Security service domain." msgstr "安全服務網域。" msgid "Security service name." msgstr "安全服務名稱。" msgid "Security service type." msgstr "安全服務類型。" msgid "Security service user or group used by tenant." 
msgstr "租戶所使用的安全服務使用者或群組。" msgid "Select deferred auth method, stored password or trusts." msgstr "選取延遲鑑別方法、儲存的密碼或信任。" msgid "Sequence of characters to build the random string from." msgstr "要從其建置隨機字串的字元序列。" #, python-format msgid "Server %(name)s delete failed: (%(code)s) %(message)s" msgstr "伺服器 %(name)s 刪除失敗:(%(code)s) %(message)s" msgid "Server Group name." msgstr "「伺服器群組」名稱。" msgid "Server name." msgstr "伺服器名稱。" msgid "Server to assign floating IP to." msgstr "要給其指派浮動 IP 的伺服器。" #, python-format msgid "" "Service %(service_name)s is not available for resource type " "%(resource_type)s, reason: %(reason)s" msgstr "" "資源類型 %(resource_type)s 無法使用服務 %(service_name)s,原因:%(reason)s" msgid "Service misconfigured" msgstr "服務配置錯誤" msgid "Service temporarily unavailable" msgstr "服務暫時無法使用" msgid "Set of parameters passed to this stack." msgstr "傳遞給此堆疊的參數集。" msgid "Set of rules for comparing characters in a character set." msgstr "用來比較字集中字元的規則集。" msgid "Set of symbols and encodings." msgstr "符號及編碼集。" msgid "Set to \"vpc\" to have IP address allocation associated to your VPC." msgstr "設為 \"vpc\",以使 IP 位址配置與 VPC 產生關聯。" msgid "Set to true if DHCP is enabled and false if DHCP is disabled." msgstr "如果啟用了 DHCP,則設為 True;如果停用了 DHCP,則設為 False。" msgid "Severity of the alarm." msgstr "警示的嚴重性。" msgid "Share description." msgstr "共用說明。" msgid "Share host." msgstr "共用項目主機。" msgid "Share name." msgstr "共用項目名稱。" msgid "Share network description." msgstr "共用網路說明。" msgid "Share project ID." msgstr "共用項目專案 ID。" msgid "Share protocol supported by shared filesystem." msgstr "共用檔案系統支援的共用通訊協定。" msgid "Share storage size in GB." msgstr "共用儲存體大小(以 GB 為單位)。" msgid "Shared status of the metering label." msgstr "計量標籤的共用狀態。" msgid "Shared status of this firewall policy." msgstr "此防火牆原則的共用狀態。" msgid "Shared status of this firewall rule." msgstr "此防火牆規則的共用狀態。" msgid "Shared status of this firewall." msgstr "此防火牆的共用狀態。" msgid "Shrinking volume" msgstr "正在收縮磁區" msgid "Signal data error" msgstr "信號資料錯誤" #, python-format msgid "Signal resource during %s" msgstr "%s 期間的信號資源" #, python-format msgid "Single schema valid only for %(ltype)s, not %(utype)s" msgstr "單一綱目僅適用於 %(ltype)s,不適用於 %(utype)s" msgid "Size of a secondary ephemeral data disk in GB." msgstr "次要贊時性資料磁碟的大小(以 GB 為單位)。" msgid "Size of adjustment." msgstr "調整的大小。" msgid "Size of encryption key, in bits. For example, 128 or 256." msgstr "加密金鑰的大小(以位元為單位)。例如 128 或 256。" msgid "" "Size of local disk in GB. The \"0\" size is a special case that uses the " "native base image size as the size of the ephemeral root volume." msgstr "" "本端磁碟的大小(以 GB 為單位)。大小 \"0\" 是一種特殊情況,它使用原生基本映像" "檔大小作為暫時性根磁區的大小。" msgid "" "Size of the block device in GB. If it is omitted, hypervisor driver " "calculates size." msgstr "區塊裝置大小 (GB)。如果省略,則 Hypervisor 驅動程式會計算大小。" msgid "Size of the instance disk volume in GB." msgstr "實例磁碟區的大小 (GB)。" msgid "Size of the volumes, in GB." msgstr "磁區大小 (GB)。" msgid "Smallest prefix size that can be allocated from the subnet pool." msgstr "可從子網路儲存區配置的字首大小下限。" #, python-format msgid "Snapshot with id %s not found" msgstr "找不到 ID 為 %s 的 Snapshot" msgid "" "SnapshotId is missing, this is required when specifying BlockDeviceMappings." msgstr "SnapshotId 遺失,指定 BlockDeviceMappings 時,這是必要項目。" #, python-format msgid "Software config with id %s not found" msgstr "找不到 ID 為 %s 的軟體配置" msgid "Source IP address or CIDR." msgstr "來源 IP 位址或 CIDR。" msgid "Source ip_address for this firewall rule." msgstr "此防火牆規則的來源 ip_address。" msgid "Source port number or a range." 
msgstr "來源埠號或範圍。" msgid "Source port range for this firewall rule." msgstr "此防火牆規則的來源埠範圍。" #, python-format msgid "Specified output key %s not found." msgstr "找不到指定的輸出金鑰 %s。" #, python-format msgid "Specified status is invalid, defaulting to %s" msgstr "指定的狀態無效,正在設定為預設值 %s" #, python-format msgid "Specified subnet %(subnet)s does not belongs to network %(network)s." msgstr "指定的子網路 %(subnet)s 不屬於網路 %(network)s。" msgid "Specifies a custom discovery url for node discovery." msgstr "指定自訂探索 URL 以進行節點探索。" msgid "Specifies database names for creating databases on instance creation." msgstr "指定建立實例時用來建立資料庫的資料庫名稱。" msgid "Specify the ACL permissions on who can read objects in the container." msgstr "指定關於誰可以讀取儲存器中物件的 ACL 許可權。" msgid "Specify the ACL permissions on who can write objects to the container." msgstr "指定關於誰可以將物件寫入儲存器的 ACL 許可權。" msgid "" "Specify whether the remote_ip_prefix will be excluded or not from traffic " "counters of the metering label. For example to not count the traffic of a " "specific IP address of a range." msgstr "" "指定是否將 remote_ip_prefix 從計量標籤的資料流量計數器中排除。例如,不將某一" "範圍之特定 IP 位址的資料流量計算在內。" #, python-format msgid "Stack %(stack_name)s already has an action (%(action)s) in progress." msgstr "堆疊 %(stack_name)s 已經有一個動作 (%(action)s) 正在進行中。" msgid "Stack ID" msgstr "機櫃識別號" msgid "Stack Name" msgstr "機櫃名稱" msgid "Stack name may not contain \"/\"" msgstr "堆疊名稱不能包含 \"/\"" msgid "Stack resource id" msgstr "堆疊資源 ID" msgid "Stack unknown status" msgstr "堆疊狀態不明" #, python-format msgid "Stack with id %s not found" msgstr "找不到 ID 為 %s 的堆疊" msgid "" "Stacks containing these tag names will be hidden. Multiple tags should be " "given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too)." msgstr "" "包含這些標記名稱的堆疊將予以隱藏。應該在以逗點定界的清單中提供多個標記(例" "如,hidden_stack_tags=hide_me,me_too)。" msgid "Start address for the allocation pool." msgstr "配置儲存區的起始位址。" #, python-format msgid "Start resizing the group %(group)s" msgstr "開始調整群組 %(group)s 的大小" msgid "Start time for the time constraint. A CRON expression property." msgstr "時間限制的開始時間。這是一個 CRON 表示式內容。" #, python-format msgid "State %s invalid for create" msgstr "狀態 %s 不適用於建立作業" #, python-format msgid "State %s invalid for resume" msgstr "狀態 %s 不適用於回復作業" #, python-format msgid "State %s invalid for suspend" msgstr "狀態 %s 不適用於暫停作業" msgid "Status" msgstr "狀態" #, python-format msgid "String to split must be string; got %s" msgstr "要分割的字串必須是字串;已取得 %s" msgid "String value with which to compare." msgstr "要與之比較的字串值。" msgid "Subnet ID to associate with this interface." msgstr "要與此介面產生關聯的子網路 ID。" msgid "Subnet ID to launch instance in." msgstr "要在其中啟動實例的子網路 ID。" msgid "Subnet ID." msgstr "子網路 ID。" msgid "Subnet in which the vpn service will be created." msgstr "將在其中建立 VPN 服務的子網路。" msgid "" "Subnet in which to allocate the IP address for port. Used for creating port, " "based on derived properties. If subnet is specified, network property " "becomes optional." msgstr "" "要在其中將 IP 位址配置給埠的子網路。用於根據衍生內容來建立埠。如果指定了子網" "路,則網路內容將變成選用項目。" msgid "Subnet in which to allocate the IP address for this port." msgstr "要在其中將 IP 位址配置給此埠的子網路。" msgid "Subnet name or ID of this member." msgstr "此成員的子網路名稱或 ID。" msgid "Subnet of external fixed IP address." msgstr "外部固定 IP 位址的子網路。" msgid "Subnet of the vip." msgstr "VIP 的子網路。" msgid "Subnets of this network." msgstr "此網路的子網路。" msgid "" "Subset of trustor roles to be delegated to heat. If left unset, all roles of " "a user will be delegated to heat when creating a stack." 
msgstr "" "要委派給 Heat 的委託人角色子集。如果保留未設定,則在建立堆疊時,會將使用者的" "所有角色委派給 Heat。" msgid "Supplied metadata for the resources in the group." msgstr "群組中資源的已提供 meta 資料。" msgid "Supported versions: keystone v3" msgstr "支援的版本:Keystone 第 3 版" #, python-format msgid "Suspend of instance %s failed" msgstr "停止實例 %s 失敗" #, python-format msgid "Suspend of server %s failed" msgstr "停止伺服器 %s 失敗" msgid "Swap space in MB." msgstr "交換空間(以 MB 為單位)。" msgid "System SIGHUP signal received." msgstr "接收到系統 SIGHUP 信號。" msgid "TCP or UDP port on which to listen for client traffic." msgstr "要在其上接聽用戶端資料流量的 TCP 埠或 UDP 埠。" msgid "TCP port on which the instance server is listening." msgstr "實例伺服器正在其上進行接聽的 TCP 埠。" msgid "TCP port on which the pool member listens for requests or connections." msgstr "儲存區成員在其上接聽要求或連線的 TCP 埠。" msgid "" "TCP port on which to listen for client traffic that is associated with the " "vip address." msgstr "要在其上接聽用戶端資料流量(與 VIP 位址相關聯)的 TCP 埠。" msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of OpenStack service finder functions." msgstr "" "dogpile.cache 區域(用於快取 OpenStack 服務搜尋器功能)中任何已快取項目的 " "TTL(以秒為單位)。" msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of service extensions." msgstr "" "dogpile.cache 區域(用於快取服務延伸)中任何已快取項目的 TTL(以秒為單位)。" msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of validation constraints." msgstr "" "dogpile.cache 區域(用於快取驗證限制)中任何已快取項目的 TTL(以秒為單位)。" msgid "Tag key." msgstr "標記鍵。" msgid "Tag value." msgstr "標記值。" msgid "Tags to add to the image." msgstr "要新增至映像檔的標記。" msgid "Tags to attach to instance." msgstr "要連接至實例的標籤。" msgid "Tags to attach to the bucket." msgstr "要連接至儲存區的標籤。" msgid "Tags to attach to this group." msgstr "要連接至此群組的標籤。" msgid "Task description." msgstr "作業說明。" msgid "Task name." msgstr "作業名稱。" msgid "" "Template default for how the server should receive the metadata required for " "software configuration. POLL_SERVER_CFN will allow calls to the cfn API " "action DescribeStackResource authenticated with the provided keypair " "(requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the " "Heat API resource-show using the provided keystone credentials (requires " "keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL " "will create and populate a Swift TempURL with metadata for polling (requires " "object-store endpoint which supports TempURL).ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "用於伺服器應該如何接收軟體配置所需 meta 資料的範本預設值。POLL_SERVER_CFN 將" "容許呼叫利用所提供之金鑰組進行鑑別的 cfn API 動作 DescribeStackResource(需要" "啟用 heat-api-cfn)。POLL_SERVER_HEAT 將容許呼叫使用所提供之 Keystone 認證的 " "Heat API 資源顯示(需要 Keystone 第 3 版 API,並且已配置 stack_user_* 配置選" "項)。POLL_TEMP_URL 將利用用於進行輪詢的 meta 資料來建立和移入 Swift " "TempURL(需要支援 TempURL 的物件儲存庫端點)。ZAQAR_MESSAGE 將建立專用 zaqar " "佇列並公佈用於進行輪詢的 meta 資料。" msgid "" "Template default for how the server should signal to heat with the " "deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN " "keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will " "create a Swift TempURL to be signaled via HTTP PUT (requires object-store " "endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat " "API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL " "will create a dedicated zaqar queue to be signaled using the provided " "keystone credentials." 
msgstr "" "用於伺服器應該如何用信號將部署輸出值傳送至 Heat 的範本預設值。CFN_SIGNAL 將容" "許對 CFN 金鑰組簽署的 URL 執行 HTTP POST(需要啟用 heat-api-cfn)。" "TEMP_URL_SIGNAL 將透過 HTTP PUT 建立要向其傳送信號的 Swift TempURL(需要支援 " "TempURL 的物件儲存庫端點)。HEAT_SIGNAL 將容許呼叫使用所提供之 Keystone 認證" "的 Heat API 資源信號。ZAQAR_SIGNAL 將建立要使用所提供之 Keystone 認證向其傳送" "信號的專用 zaqar 佇列。" msgid "Template format version not found." msgstr "找不到範本格式版本。" #, python-format msgid "" "Template size (%(actual_len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "範本大小(%(actual_len)s 個位元組)超出容許的大小上限(%(limit)s 個位元組)。" msgid "Template that specifies the stack to be created as a resource." msgstr "用於指定要作為資源建立的堆疊的範本。" #, python-format msgid "Template type is not supported: %s" msgstr "範本類型不受支援:%s" msgid "Template version was not provided" msgstr "未提供範本版本" #, python-format msgid "Template with version %s not found" msgstr "找不到版本為 %s 的範本" msgid "TemplateBody or TemplateUrl were not given." msgstr "未提供 TemplateBody 及 TemplateUrl。" msgid "Tenant owning the health monitor." msgstr "擁有性能監視器的承租人。" msgid "Tenant owning the pool member." msgstr "擁有儲存區成員的承租人。" msgid "Tenant owning the pool." msgstr "擁有儲存區的承租人。" msgid "Tenant owning the port." msgstr "擁有埠的承租人。" msgid "Tenant owning the router." msgstr "擁有路由器的承租人。" msgid "Tenant owning the subnet." msgstr "擁有子網路的承租人。" #, python-format msgid "Testing message %(text)s" msgstr "測試訊息 %(text)s" #, python-format msgid "The \"%(hook)s\" hook is not defined on %(resource)s" msgstr "\"%(hook)s\" 連結鉤未在 %(resource)s 上進行定義" #, python-format msgid "The \"for_each\" argument to \"%s\" must contain a map" msgstr "\"%s\" 的 \"for_each\" 引數必須包含對映" #, python-format msgid "The %(entity)s (%(name)s) could not be found." msgstr "找不到 %(entity)s (%(name)s)。" #, python-format msgid "The %s must be provided for each parameter group." msgstr "必須為每一個參數群組提供 %s。" #, python-format msgid "The %s of parameter group should be a list." msgstr "參數群組的 %s 應該是清單。" #, python-format msgid "The %s parameter must be assigned to one parameter group only." msgstr "必須只將 %s 參數指派給一個參數群組。" #, python-format msgid "The %s should be a list." msgstr "%s 應該是清單。" msgid "The API paste config file to use." msgstr "要使用的 API paste 配置檔。" msgid "The AWS Access Key ID needs a subscription for the service" msgstr "「AWS 存取金鑰 ID」需要服務的訂閱" msgid "The Availability Zone where the specified instance is launched." msgstr "在其中啟動所指定實例的可用性區域。" msgid "The Availability Zones in which to create the load balancer." msgstr "要在其中建立負載平衡器的可用性區域。" msgid "The CIDR." msgstr "CIDR。" msgid "The DNS name for the LoadBalancer." msgstr "負載平衡器的 DNS 名稱。" msgid "The DNS name of the specified bucket." msgstr "所指定儲存區的 DNS 名稱。" msgid "The DNS nameserver address." msgstr "DNS 名稱伺服器位址。" msgid "The HTTP method used for requests by the monitor of type HTTP." msgstr "用於 HTTP 類型監視器所發出要求的 HTTP 方法。" msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health." msgstr "HTTP 要求中供監視器用來測試成員性能的 HTTP 路徑。" msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health. A valid value is a string the begins with a forward slash (/)." msgstr "" "HTTP 要求中供監視器用來測試成員性能的 HTTP 路徑。有效值是以正斜線 (/) 開頭的" "字串。" msgid "" "The HTTP status codes expected in response from the member to declare it " "healthy. Specify one of the following values: a single value, such as 200. a " "list, such as 200, 202. a range, such as 200-204." 
msgstr "" "成員所發出回應中預期的 HTTP 狀態碼,用來宣告成員狀況良好。請指定下列其中一個" "值:單一值(例如 200)、清單(例如:200,202)、範圍(例如 200-204)。" msgid "" "The ID of an existing instance to use to create the Auto Scaling group. If " "specify this property, will create the group use an existing instance " "instead of a launch configuration." msgstr "" "用來建立「自動調整大小」群組之現有實例的 ID。如果指定此內容,將會使用現有實例" "而非啟動配置來建立群組。" msgid "" "The ID of an existing instance you want to use to create the launch " "configuration. All properties are derived from the instance with the " "exception of BlockDeviceMapping." msgstr "" "您想用來建立啟動配置之現有實例的 ID。除了 BlockDeviceMapping 之外,其他所有內" "容都衍生自該實例。" msgid "The ID of the attached network." msgstr "所連接網路的 ID。" msgid "The ID of the firewall policy that this firewall is associated with." msgstr "與此防火牆相關聯之防火牆原則的 ID。" msgid "" "The ID of the hosted zone name that is associated with the LoadBalancer." msgstr "與負載平衡器相關聯的代管區域名稱 ID。" msgid "The ID of the image to create a volume from." msgstr "要從中建立磁區之映像檔的 ID。" msgid "The ID of the image to create the volume from." msgstr "要從其建立磁區的映像檔 ID。" msgid "The ID of the instance to which the volume attaches." msgstr "磁區所連接的實例 ID。" msgid "The ID of the load balancing pool." msgstr "負載平衡儲存區的 ID。" msgid "The ID of the pool to which the pool member belongs." msgstr "儲存區成員所屬之儲存區的 ID。" msgid "The ID of the server to which the volume attaches." msgstr "磁區所連接的伺服器 ID。" msgid "The ID of the snapshot to create a volume from." msgstr "要從其建立磁區的 Snapshot ID。" msgid "" "The ID of the tenant which will own the network. Only administrative users " "can set the tenant identifier; this cannot be changed using authorization " "policies." msgstr "" "將擁有網路的承租人 ID。僅管理使用者可以設定承租人 ID;這無法使用授權原則進行" "修改。" msgid "" "The ID of the tenant who owns the Load Balancer. Only administrative users " "can specify a tenant ID other than their own." msgstr "擁有負載平衡器的租戶 ID。僅管理使用者可以指定其他使用者的租戶 ID。" msgid "The ID of the tenant who owns the listener." msgstr "擁有接聽器的租戶 ID。" msgid "" "The ID of the tenant who owns the network. Only administrative users can " "specify a tenant ID other than their own." msgstr "擁有網路的承租人 ID。僅管理使用者可以指定其他使用者的承租人 ID。" msgid "" "The ID of the tenant who owns the subnet pool. Only administrative users can " "specify a tenant ID other than their own." msgstr "擁有子網路儲存區的租戶 ID。僅管理使用者可以指定其他使用者的租戶 ID。" msgid "The ID of the volume to be attached." msgstr "要連接的磁區 ID。" msgid "" "The ID of the volume to boot from. Only one of volume_id or snapshot_id " "should be provided." msgstr "要從中啟動的磁區 ID。只應該提供 volume_id 或 snapshot_id 的其中之一。" msgid "The ID or name of the flavor to boot onto." msgstr "要用來啟動的特性 ID 或名稱。" msgid "The ID or name of the image to boot with." msgstr "要用來啟動的映像檔 ID 或名稱。" msgid "" "The IDs of the DHCP agent to schedule the network. Note that the default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "用來排定網路的 DHCP 代理程式 ID。請注意,Neutron 中的預設原則設定將此內容限制" "為僅供管理使用者使用。" msgid "The IP address of the pool member." msgstr "儲存區成員的 IP 位址。" msgid "The IP version, which is 4 or 6." msgstr "IP 版本(4 或 6)。" #, python-format msgid "The Parameter (%(key)s) was not defined in template." msgstr "沒有在範本中定義參數 (%(key)s)。" #, python-format msgid "The Parameter (%(key)s) was not provided." msgstr "未提供參數 (%(key)s)。" msgid "The QoS policy ID attached to this network." msgstr "已連接至此網路的服務品質原則 ID。" msgid "The QoS policy ID attached to this port." msgstr "已連接至此埠的服務品質原則 ID。" #, python-format msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect." 
msgstr "參照屬性 (%(resource)s %(key)s) 不正確。" #, python-format msgid "The Resource %s requires replacement." msgstr "資源 %s 需要取代項目。" #, python-format msgid "" "The Resource (%(resource_name)s) could not be found in Stack %(stack_name)s." msgstr "在堆疊 %(stack_name)s 中找不到資源(%(resource_name)s)。" #, python-format msgid "The Resource (%(resource_name)s) is not available." msgstr "資源 (%(resource_name)s) 無法使用。" #, python-format msgid "The Snapshot (%(snapshot)s) for Stack (%(stack)s) could not be found." msgstr "找不到堆疊 (%(stack)s) 的 Snapshot (%(snapshot)s)。" #, python-format msgid "The Stack (%(stack_name)s) already exists." msgstr "堆疊 (%(stack_name)s) 已存在。" msgid "The Template must be a JSON or YAML document." msgstr "範本必須是 JSON 或 YAML 文件。" msgid "The URI to the container." msgstr "儲存器的 URI。" msgid "The URI to the created container." msgstr "所建立之儲存器的 URI。" msgid "The URI to the created secret." msgstr "所建立之密碼的 URI。" msgid "The URI to the order." msgstr "順序的 URI。" msgid "The URIs to container consumers." msgstr "儲存器消費者的 URI。" msgid "The URIs to secrets stored in container." msgstr "指向儲存在儲存器中之密碼的 URI。" msgid "" "The URL of a template that specifies the stack to be created as a resource." msgstr "範本的 URL,此範本指定要作為資源建立的堆疊。" msgid "The URL of the container." msgstr "儲存器的 URL。" msgid "The VIP address of the LoadBalancer." msgstr "負載平衡器的 VIP 位址。" msgid "The VIP port of the LoadBalancer." msgstr "負載平衡器的 VIP 埠。" msgid "The VIP subnet of the LoadBalancer." msgstr "負載平衡器的 VIP 子網路。" msgid "The action or operation requested is invalid" msgstr "所要求的動作或作業無效" msgid "The action to be executed when the receiver is signaled." msgstr "向接收端傳送信號時要執行的動作。" msgid "The administrative state of the firewall." msgstr "防火牆的管理狀態。" msgid "The administrative state of the health monitor." msgstr "性能監視器的管理狀態。" msgid "The administrative state of the ipsec site connection." msgstr "IPSec 網站連線的管理狀態。" msgid "The administrative state of the pool member." msgstr "儲存區成員的管理狀態。" msgid "The administrative state of the router." msgstr "路由器的管理狀態。" msgid "The administrative state of the vpn service." msgstr "VPN 服務的管理狀態。" msgid "The administrative state of this Load Balancer." msgstr "此負載平衡器的管理狀態。" msgid "The administrative state of this health monitor." msgstr "此性能監視器的管理狀態。" msgid "The administrative state of this listener." msgstr "此監視器的管理狀態。" msgid "The administrative state of this pool member." msgstr "此儲存區成員的管理狀態。" msgid "The administrative state of this pool." msgstr "此儲存區的管理狀態。" msgid "The administrative state of this port." msgstr "此埠的管理狀態。" msgid "The administrative state of this vip." msgstr "此 VIP 的管理狀態。" msgid "The administrative status of the network." msgstr "網路的管理狀態。" msgid "The administrator password for the server." msgstr "伺服器的管理者密碼。" msgid "The aggregation method to compare to the threshold." msgstr "要與臨界值進行比較的聚集方法。" msgid "The algorithm type used to generate the secret." msgstr "用於產生密碼的演算法類型。" msgid "" "The algorithm type used to generate the secret. Required for key and " "asymmetric types of order." msgstr "用於產生密碼的演算法類型。這對於金鑰及非對稱類型的順序是必要的。" msgid "The algorithm used to distribute load between the members of the pool." msgstr "用來在儲存區成員之間分佈負載的演算法。" msgid "The allocated address of this IP." msgstr "此 IP 的已配置位址。" msgid "" "The approximate interval, in seconds, between health checks of an individual " "instance." msgstr "個別實例的兩次性能檢查之間的估計間隔(以秒為單位)。" msgid "The authentication hash algorithm of the ipsec policy." msgstr "IPSec 原則的鑑別雜湊演算法。" msgid "The authentication hash algorithm used by the ike policy." 
msgstr "IKE 原則所使用的鑑別雜湊演算法。" msgid "The authentication mode of the ipsec site connection." msgstr "IPSec 網站連線的鑑別模式。" msgid "The availability zone in which the volume is located." msgstr "磁區所在的可用性區域。" msgid "The availability zone in which the volume will be created." msgstr "將在其中建立磁區的可用性區域。" msgid "The availability zone of shared filesystem." msgstr "共用檔案系統的可用性區域。" msgid "The bay name." msgstr "機架名稱。" msgid "The bit-length of the secret." msgstr "密碼的位元長度。" msgid "" "The bit-length of the secret. Required for key and asymmetric types of order." msgstr "密碼的位元長度。這對於金鑰及非對稱類型的順序是必要的。" #, python-format msgid "The bucket you tried to delete is not empty (%s)." msgstr "您嘗試刪除的儲存區不是空的 (%s)。" msgid "The can be used to unmap a defined device." msgstr "可用於取消對映所定義的裝置。" msgid "The certificate or AWS Key ID provided does not exist" msgstr "提供的憑證或「AWS 金鑰 ID」不存在" msgid "The channel for receiving signals." msgstr "用於接收信號的通道。" msgid "" "The class that provides encryption support. For example, nova.volume." "encryptors.luks.LuksEncryptor." msgstr "" "用來提供加密支援的類別。例如 nova.volume.encryptors.luks.LuksEncryptor。" #, python-format msgid "The client (%(client_name)s) is not available." msgstr "無法使用用戶端 (%(client_name)s)。" msgid "The cluster ID this node belongs to." msgstr "此節點所屬的叢集 ID。" msgid "The config value of the software config." msgstr "軟體配置的配置值。" msgid "" "The configuration tool used to actually apply the configuration on a server. " "This string property has to be understood by in-instance tools running " "inside deployed servers." msgstr "" "用於在伺服器上實際套用配置的配置工具。此字串內容必須被所部署伺服器內執行的實" "例中工具瞭解。" msgid "The content of the CSR. Only for certificate orders." msgstr "CSR 的內容。僅適用於憑證順序。" #, python-format msgid "" "The contents of personality file \"%(path)s\" is larger than the maximum " "allowed personality file size (%(max_size)s bytes)." msgstr "" "特質檔案 \"%(path)s\" 的內容,超過了所容許的特質檔案大小上限(%(max_size)s 個" "位元組)。" msgid "The current size of AutoscalingResourceGroup." msgstr "AutoscalingResourceGroup 的現行大小。" msgid "The current status of the volume." msgstr "磁區的現行狀態。" msgid "" "The database instance was created, but heat failed to set up the datastore. " "If a database instance is in the FAILED state, it should be deleted and a " "new one should be created." msgstr "" "已建立資料庫實例,但 Heat 無法設定資料儲存庫。如果資料庫實例處於 FAILED 狀" "態,則應刪除它,並應建立一個新實例。" msgid "" "The dead peer detection protocol configuration of the ipsec site connection." msgstr "IPSec 網站連線的已停用同層級偵測通訊協定配置。" msgid "The decrypted secret payload." msgstr "已解密的密碼有效負載。" msgid "" "The default cloud-init user set up for each image (e.g. \"ubuntu\" for " "Ubuntu 12.04+, \"fedora\" for Fedora 19+ and \"cloud-user\" for CentOS/RHEL " "6.5)." msgstr "" "為每一個映像檔設定的預設 cloud-init 使用者(例如 \"ubuntu\" 適用於 Ubuntu " "12.04+,\"fedora\" 適用於 Fedora 19+,\"cloud-user\" 適用於 CentOS/RHEL " "6.5)。" msgid "The description for the QoS policy." msgstr "服務品質原則的說明。" msgid "The description of the ike policy." msgstr "IKE 原則的說明。" msgid "The description of the ipsec policy." msgstr "IPSec 原則的說明。" msgid "The description of the ipsec site connection." msgstr "IPSec 網站連線的說明。" msgid "The description of the vpn service." msgstr "VPN 服務的說明。" msgid "The destination for static route." msgstr "靜態路由的目標。" msgid "The details of physical object." msgstr "實體物件的詳細資料。" msgid "The device id for the network gateway." msgstr "網路閘道的裝置 ID。" msgid "" "The device where the volume is exposed on the instance. This assignment may " "not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." 
msgstr "" "實例上公開磁區的裝置。可能不允許使用此指派,建議改為使用路徑 /dev/disk/by-id/" "virtio-。" msgid "" "The direction in which metering rule is applied, either ingress or egress." msgstr "計量規則的套用方向(入口或出口)。" msgid "The direction in which metering rule is applied." msgstr "計量規則的套用方向。" msgid "" "The direction in which the security group rule is applied. For a compute " "instance, an ingress security group rule matches traffic that is incoming " "(ingress) for that instance. An egress rule is applied to traffic leaving " "the instance." msgstr "" "安全群組規則的套用方向。對於計算實例,入口安全群組規則符合該實例的送入(入" "口)資料流量。出口規則會套用至離開該實例的資料流量。" msgid "The directory to search for environment files." msgstr "要在其中搜尋環境檔案的目錄。" msgid "The ebs volume to attach to the instance." msgstr "要連接至實例的 EBS 磁區。" msgid "The encapsulation mode of the ipsec policy." msgstr "IPSec 原則的封裝模式。" msgid "The encoding format used to provide the payload data." msgstr "用於提供有效負載資料的編碼格式。" msgid "The encryption algorithm of the ipsec policy." msgstr "IPSec 原則的加密演算法。" msgid "The encryption algorithm or mode. For example, aes-xts-plain64." msgstr "加密演算法或模式。例如 aes-xts-plain64。" msgid "The encryption algorithm used by the ike policy." msgstr "IKE 原則所使用的加密演算法。" msgid "The environment is not a valid YAML mapping data type." msgstr "環境不是有效的 YAML 對映資料類型。" msgid "The expiration date for the secret in ISO-8601 format." msgstr "密碼的到期日期(採用 ISO-8601 格式)。" msgid "The external load balancer port number." msgstr "外部負載平衡器埠號。" msgid "The extra specs key and value pairs of the volume type." msgstr "磁區類型的額外規格鍵值組。" msgid "The flavor to use." msgstr "要使用的特性。" #, python-format msgid "The following parameters are immutable and may not be updated: %(keys)s" msgstr "下列參數是不可變的,可能無法予以更新:%(keys)s" #, python-format msgid "The function %s is not supported in this version of HOT." msgstr "此版本的 HOT 中不支援函數 %s。" msgid "" "The gateway IP address. Set to any of [ null | ~ | \"\" ] to create/update a " "subnet without a gateway. If omitted when creation, neutron will assign the " "first free IP address within the subnet to the gateway automatically. If " "remove this from template when update, the old gateway IP address will be " "detached." msgstr "" "閘道 IP 位址。設定為 [ null | ~ | \"\" ] 中的任一個,以建立/更新不含閘道的子" "網路。如果在建立時省略此項,則 Neutron 自動將子網路內的第一個可用 IP 位址指派" "給閘道。如果在更新時從範本中移除此項,則將分離舊閘道 IP 位址。" #, python-format msgid "The grouped parameter %s does not reference a valid parameter." msgstr "已分組的參數 %s 未參照有效參數。" msgid "The host from the container URL." msgstr "來自儲存器 URL 的主機。" msgid "The host from which a user is allowed to connect to the database." msgstr "容許使用者從其連接至資料庫的主機。" msgid "" "The id for L2 segment on the external side of the network gateway. Must be " "specified when using vlan." msgstr "網路閘道外部端的 L2 分段 ID。使用VLAN 時必須指定。" msgid "The identifier of the CA to use." msgstr "要使用之 CA 的 ID。" msgid "The image ID. Glance will generate a UUID if not specified." msgstr "映像檔 ID。如果未指定,則 Glance 將產生 UUID。" msgid "The initiator of the ipsec site connection." msgstr "IPSec 網站連線的起始器。" msgid "The input string to be stored." msgstr "要儲存的輸入字串。" msgid "The interface name for the network gateway." msgstr "網路閘道的介面名稱。" msgid "The internal network to connect on the network gateway." msgstr "要在網路閘道上連接的內部網路。" msgid "The last operation for the database instance failed due to an error." msgstr "資料庫實例的前次作業由於錯誤而失敗。" #, python-format msgid "The length must be at least %(min)s." msgstr "長度必須至少是 %(min)s。" #, python-format msgid "The length must be in the range %(min)s to %(max)s." 
msgstr "長度必須在 %(min)s 到 %(max)s 的範圍內。" #, python-format msgid "The length must be no greater than %(max)s." msgstr "長度不得大於 %(max)s。" msgid "The length of time, in minutes, to wait for the nested stack creation." msgstr "等待建立巢狀堆疊的時間長度(分鐘)。" msgid "" "The list of HTTP status codes expected in response from the member to " "declare it healthy." msgstr "成員所發出回應中預期的 HTTP 狀態碼清單,用來宣告成員狀況良好。" msgid "The list of Nova server IDs load balanced." msgstr "進行負載平衡的 Nova 伺服器 ID 清單。" msgid "The list of Pools related to this monitor." msgstr "與此監視器相關的儲存區清單。" msgid "The list of attachments of the volume." msgstr "磁區附件的清單。" msgid "" "The list of configurations for the different lifecycle actions of the " "represented software component." msgstr "所代表之軟體元件的不同生命週期動作的配置清單。" msgid "The list of instance IDs load balanced." msgstr "進行負載平衡的實例 ID 清單。" msgid "" "The list of resource types to create. This list may contain type names or " "aliases defined in the resource registry. Specific template names are not " "supported." msgstr "" "要建立的資源類型清單。此清單可能包含資源登錄中定義的類型名稱或別名。不支援特" "定的範本名稱。" msgid "The list of tags to associate with the volume." msgstr "要與磁區產生關聯的標籤清單。" msgid "The load balancer transport protocol to use." msgstr "要使用的負載平衡器傳輸通訊協定。" msgid "" "The location where the volume is exposed on the instance. This assignment " "may not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "實例上公開磁區的位置。可能不允許使用此指派,建議改為使用路徑 /dev/disk/by-id/" "virtio-。" msgid "The manually assigned alternative public IPv4 address of the server." msgstr "手動指派給伺服器的替代公用 IPv4 位址。" msgid "The manually assigned alternative public IPv6 address of the server." msgstr "手動指派給伺服器的替代公用 IPv6 位址。" msgid "The maximum number of connections per second allowed for the vip." msgstr "VIP 每秒所容許的連線數上限。" msgid "" "The maximum number of connections permitted for this load balancer. Defaults " "to -1, which is infinite." msgstr "此負載平衡器所允許的連線數上限。預設值為 -1,表示無限。" msgid "The maximum number of resources to create at once." msgstr "一次要建立的資源數目上限。" msgid "The maximum number of resources to replace at once." msgstr "一次要取代的資源數目上限。" msgid "" "The maximum number of seconds to wait for the resource to signal completion. " "Once the timeout is reached, creation of the signal resource will fail." msgstr "" "等待資源傳送完成信號的秒數上限。達到該逾時值之後,信號資源的建立將失敗。" msgid "" "The maximum port number in the range that is matched by the security group " "rule. The port_range_min attribute constrains the port_range_max attribute. " "If the protocol is ICMP, this value must be an ICMP type." msgstr "" "範圍中安全群組規則所符合的埠號上限。port_range_min 屬性會限制 port_range_max " "屬性。如果通訊協定是 ICMP,則此值必須是 ICMP 類型。" msgid "" "The maximum transmission unit size (in bytes) of the ipsec site connection." msgstr "IPSec 網站連線的最大傳輸單位大小(以位元組為單位)。" msgid "The maximum transmission unit size(in bytes) for the network." msgstr "網路傳輸單位大小上限(以位元組為單位)。" msgid "The metering label ID to associate with this metering rule." msgstr "要與此計量規則產生關聯的計量標籤 ID。" msgid "" "The metric dimensions to match to the alarm dimensions. One or more " "dimension key names separated by a comma." msgstr "要與警示維度相符的度量維度。使用逗點來區隔多個維度索引鍵名稱。" msgid "" "The minimum number of characters from this character class that will be in " "the generated string." msgstr "此字元類別中的字元數目下限,所產生的字串將包含此字元類別。" msgid "" "The minimum number of characters from this sequence that will be in the " "generated string." msgstr "此序列中的字元數目下限,所產生的字串將包含此序列。" msgid "" "The minimum number of resources in service while rolling updates are being " "executed." 
msgstr "執行滾動更新期間,服務中的資源數目下限。" msgid "" "The minimum port number in the range that is matched by the security group " "rule. If the protocol is TCP or UDP, this value must be less than or equal " "to the value of the port_range_max attribute. If the protocol is ICMP, this " "value must be an ICMP type." msgstr "" "範圍中安全群組規則所符合的埠號下限。如果通訊協定是 TCP 或 UDP,此值必須小於或" "等於 port_range_max 屬性的值。如果通訊協定是ICMP,則此值必須是 ICMP 類型。" msgid "The name for the QoS policy." msgstr "服務品質原則的名稱。" msgid "The name for the address scope." msgstr "位址範圍的名稱。" msgid "" "The name of the driver used for instantiating container networks. By " "default, Magnum will choose the pre-configured network driver based on COE " "type." msgstr "" "用於實例化儲存器網路的驅動程式名稱。依預設,Magnum 將根據 COE 類型來選擇預先" "配置的網路驅動程式。" msgid "The name of the error document." msgstr "錯誤文件的名稱。" msgid "The name of the hosted zone that is associated with the LoadBalancer." msgstr "與負載平衡器相關聯的代管區域名稱。" msgid "The name of the ike policy." msgstr "IKE 原則的名稱。" msgid "The name of the index document." msgstr "索引文件的名稱。" msgid "The name of the ipsec policy." msgstr "IPSec 原則的名稱。" msgid "The name of the ipsec site connection." msgstr "IPSec 網站連線的名稱。" msgid "The name of the key pair." msgstr "金鑰組的名稱。" msgid "The name of the network gateway." msgstr "網路閘道的名稱。" msgid "The name of the network." msgstr "網路的名稱。" msgid "The name of the router." msgstr "路由器的名稱。" msgid "The name of the subnet." msgstr "子網路的名稱。" msgid "The name of the user that the new key will belong to." msgstr "新金鑰將隸屬於的使用者名稱。" msgid "" "The name of the virtual device. The name must be in the form ephemeralX " "where X is a number starting from zero (0); for example, ephemeral0." msgstr "" "虛擬裝置的名稱。該名稱的格式必須是 ephemeralX,其中,X 是從 0 開始的數字;例" "如,ephemeral0。" msgid "The name of the vpn service." msgstr "VPN 服務的名稱。" msgid "The name or ID of QoS policy to attach to this network." msgstr "要連接至此網路之服務品質原則的名稱或 ID。" msgid "The name or ID of QoS policy to attach to this port." msgstr "要連接至此埠之服務品質原則的名稱或 ID。" msgid "The name or ID of parent of this keystone project in hierarchy." msgstr "在階層中,此 Keystone 專案之母項的名稱或 ID。" msgid "The name or ID of target cluster." msgstr "目標叢集的名稱或 ID。" msgid "The name or ID of the bay model." msgstr "機架型號的名稱或 ID。" msgid "The name or ID of the subnet on which to allocate the VIP address." msgstr "在其中配置 VIP 位址之子網路的名稱或 ID。" msgid "The name or ID of the subnet pool." msgstr "子網路儲存區的名稱或 ID。" msgid "The name or id of the Senlin profile." msgstr "Senlin 設定檔的名稱或 ID。" msgid "The negotiation mode of the ike policy." msgstr "IKE 原則的協議模式。" msgid "The next hop for the destination." msgstr "目標的下一個中繼站。" msgid "The node count for this bay." msgstr "此機架的節點計數。" msgid "The notification methods to use when an alarm state is ALARM." msgstr "在警示狀態為「警示」時,要使用的通知方法。" msgid "The notification methods to use when an alarm state is OK." msgstr "在警示狀態為「正常」時,要使用的通知方法。" msgid "The notification methods to use when an alarm state is UNDETERMINED." msgstr "在警示狀態為「無法判定」時,要使用的通知方法。" msgid "The number of I/O operations per second that the volume supports." msgstr "磁區所支援的每秒 I/O 作業數目。" msgid "The number of bytes stored in the container." msgstr "儲存器中所儲存的位元組數。" msgid "" "The number of consecutive health probe failures required before moving the " "instance to the unhealthy state" msgstr "將實例移至狀況不良狀態之前所需的連續性能探測失敗次數" msgid "" "The number of consecutive health probe successes required before moving the " "instance to the healthy state." msgstr "將實例移至狀況良好狀態之前所需的連續性能探測成功次數。" msgid "The number of master nodes for this bay." 
msgstr "此機架的主要節點數目。" msgid "The number of objects stored in the container." msgstr "儲存器中所儲存的物件數。" msgid "The number of replicas to be created." msgstr "要建立的抄本數目。" msgid "The number of resources to create." msgstr "要建立的資源數目。" msgid "The number of seconds to wait between batches of updates." msgstr "在分批更新之間等待的秒數。" msgid "The number of seconds to wait between batches." msgstr "在批次之間等待的秒數。" msgid "The number of seconds to wait for the cluster actions." msgstr "等待叢集動作的秒數。" msgid "" "The number of seconds to wait for the correct number of signals to arrive." msgstr "等待正確數目的信號到達的秒數。" msgid "" "The number of success signals that must be received before the stack " "creation process continues." msgstr "繼續執行堆疊建立程序之前,必須收到的成功信號數目。" msgid "" "The optional public key. This allows users to supply the public key from a " "pre-existing key pair. If not supplied, a new key pair will be generated." msgstr "" "選用的公開金鑰。這容許使用者從預先存在的金鑰組提供公開金鑰。如果沒有提供,則" "將產生新金鑰組。" msgid "" "The owner tenant ID of the address scope. Only administrative users can " "specify a tenant ID other than their own." msgstr "位址範圍的擁有者租戶 ID。僅管理使用者可以指定其他使用者的租戶 ID。" msgid "The owner tenant ID of this QoS policy." msgstr "此服務品質原則的擁有者租戶 ID。" msgid "The owner tenant ID of this rule." msgstr "此規則的擁有者租戶 ID。" msgid "" "The owner tenant ID. Only required if the caller has an administrative role " "and wants to create a RBAC for another tenant." msgstr "" "擁有者租戶 ID。僅當呼叫程式具有管理角色並且要為另一個租戶建立 RBAC 時,才需要" "此項。" msgid "The parameters passed to action when the receiver is signaled." msgstr "向接收端傳送信號時傳遞給動作的參數。" msgid "The parent URL of the container." msgstr "儲存器的母項 URL。" msgid "The payload of the created certificate, if available." msgstr "所建立之憑證的有效負載(如果有的話)。" msgid "The payload of the created intermediates, if available." msgstr "所建立之中間項目的有效負載(如果有的話)。" msgid "The payload of the created private key, if available." msgstr "所建立之私密金鑰的有效負載(如果有的話)。" msgid "The payload of the created public key, if available." msgstr "所建立之公開金鑰的有效負載(如果有的話)。" msgid "The perfect forward secrecy of the ike policy." msgstr "IKE 原則的完整轉遞保密。" msgid "The perfect forward secrecy of the ipsec policy." msgstr "IPSec 原則的完整轉遞保密。" #, python-format msgid "The personality property may not contain greater than %s entries." msgstr "特質內容不能包含 %s 個以上的項目。" msgid "The physical mechanism by which the virtual network is implemented." msgstr "來自已實作的虛擬網路的實體機制。" msgid "The port being checked." msgstr "正在檢查的埠。" msgid "The port id, either subnet or port_id should be specified." msgstr "埠 ID,應該指定子網路或埠 ID。" msgid "The port on which the server will listen." msgstr "伺服器將在其上進行接聽的埠。" msgid "The port, either subnet or port should be specified." msgstr "埠,應該指定子網路或埠。" msgid "The pre-shared key string of the ipsec site connection." msgstr "IPSec 網站連線的預先共用金鑰字串。" msgid "The private key if it has been saved." msgstr "私密金鑰(如果已儲存私密金鑰)。" msgid "The profile of certificate to use." msgstr "要使用之憑證的設定檔。" msgid "" "The protocol that is matched by the security group rule. Valid values " "include tcp, udp, and icmp." msgstr "安全群組規則所符合的通訊協定。有效值包括 TCP、UDP 及 ICMP。" msgid "The public key." msgstr "公開金鑰。" msgid "The query string is malformed" msgstr "查詢字串的格式不正確" msgid "The query to filter the metrics." msgstr "用來過濾度量的查詢。" msgid "" "The random string generated by this resource. This value is also available " "by referencing the resource." msgstr "由此資源產生的隨機字串。此值也可以透過參照資源來提供。" msgid "The reference to a LaunchConfiguration resource." 
msgstr "對 LaunchConfiguration 資源的參照。" msgid "" "The remote IP prefix (CIDR) to be associated with this security group rule." msgstr "要與此安全群組規則產生關聯的遠端 IP 字首(CIDR)。" msgid "The remote branch router identity of the ipsec site connection." msgstr "IPSec 網站連線的遠端分支路由器身分。" msgid "The remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "遠端分支路由器公用 IPv4 位址或 IPv6 位址或 FQDN。" msgid "" "The remote group ID to be associated with this security group rule. If no " "value is specified then this rule will use this security group for the " "remote_group_id. The remote mode parameter must be set to \"remote_group_id" "\"." msgstr "" "要與此安全群組規則產生關聯的遠端群組 ID。如果沒有指定任何值,則此規則會將此安" "全群組用於remote_group_id。遠端模式參數必須設定為\"remote_group_id\"。" msgid "The remote subnet(s) in CIDR format of the ipsec site connection." msgstr "IPSec 網站連線的遠端子網路(CIDR 格式)。" msgid "The request is missing an action or operation parameter" msgstr "要求遺漏了動作或作業參數" msgid "The request processing has failed due to an internal error" msgstr "要求處理程序由於內部錯誤而失敗" msgid "The request signature does not conform to AWS standards" msgstr "要求簽章不符合 AWS 標準" msgid "" "The request signature we calculated does not match the signature you provided" msgstr "我們計算出的要求簽章與您所提供的簽章不符" msgid "The requested action is not yet implemented" msgstr "尚未實作所要求的動作" #, python-format msgid "The resource %s is already being updated." msgstr "已經在更新資源 %s。" msgid "The resource href of the queue." msgstr "佇列的資源 href。" msgid "The route mode of the ipsec site connection." msgstr "IPSec 網站連線的路由模式。" msgid "The router id." msgstr "路由器 ID。" msgid "The router to which the vpn service will be inserted." msgstr "將在其中插入 VPN 服務的路由器。" msgid "The router." msgstr "路由器。" msgid "The safety assessment lifetime configuration for the ike policy." msgstr "IKE 原則的安全評量生命期限配置。" msgid "The safety assessment lifetime configuration of the ipsec policy." msgstr "IPSec 原則的安全評量生命期限配置。" msgid "" "The security group that you can use as part of your inbound rules for your " "LoadBalancer's back-end instances." msgstr "可以用來作為負載平衡器後端實例入埠規則一部分的安全群組。" msgid "" "The server could not comply with the request since it is either malformed or " "otherwise incorrect." msgstr "伺服器無法遵守要求,因為它的格式不規範,或不正確。" msgid "The set of parameters passed to this nested stack." msgstr "傳遞給此巢狀堆疊的參數集。" msgid "The size in GB of the docker volume." msgstr "Docker 磁區的大小(以 GB 為單位)。" msgid "The size of AutoScalingGroup can not be less than zero" msgstr "AutoScalingGroup 的大小不能小於零" msgid "" "The size of the prefix to allocate when the cidr or prefixlen attributes are " "not specified while creating a subnet." msgstr "建立子網路期間,未指定 cidr 或 prefixlen 屬性時要配置的字首大小。" msgid "The size of the swap, in MB." msgstr "交換大小 (MB)。" msgid "The size of the volume in GB." msgstr "磁區大小 (GB)。" msgid "" "The size of the volume, in GB. It is safe to leave this blank and have the " "Compute service infer the size." msgstr "磁區的大小 (GB)。保留此欄位空白並讓計算服務推斷大小,無安全之虞。" msgid "" "The size of the volume, in GB. Must be equal or greater than the size of the " "snapshot. It is safe to leave this blank and have the Compute service infer " "the size." msgstr "" "磁區的大小 (GB)。必須等於或大於 Snapshot 的大小。將此留空並讓「計算」服務推斷" "該大小並無安全之虞。" msgid "The snapshot the volume was created from, if any." msgstr "如果有的話,則為從其建立了磁區的 Snapshot。" msgid "The source of certificate request." msgstr "憑證請求的來源。" #, python-format msgid "The specified reference \"%(resource)s\" (in %(key)s) is incorrect." msgstr "指定的參照 \"%(resource)s\"(在 %(key)s 中)不正確。" msgid "The start and end addresses for the allocation pools." 
msgstr "配置儲存區的起始位址及結束位址。" msgid "The status of the container." msgstr "儲存器的狀態。" msgid "The status of the firewall." msgstr "防火牆的狀態。" msgid "The status of the ipsec site connection." msgstr "IPSec 網站連線的狀態。" msgid "The status of the network." msgstr "網路的狀態。" msgid "The status of the order." msgstr "順序的狀態。" msgid "The status of the port." msgstr "埠的狀態。" msgid "The status of the router." msgstr "路由器的狀態。" msgid "The status of the secret." msgstr "密碼的狀態。" msgid "The status of the vpn service." msgstr "VPN 服務的狀態。" msgid "" "The string that was stored. This value is also available by referencing the " "resource." msgstr "已經儲存的字串。此值也可以透過參照資源來提供。" msgid "The subject of the certificate request." msgstr "憑證請求的主題。" msgid "" "The subnet for the port on which the members of the pool will be connected." msgstr "儲存區成員將在其上進行連接的埠子網路。" msgid "The subnet, either subnet or port should be specified." msgstr "子網路,應該指定子網路或埠。" msgid "The tag key name." msgstr "標籤索引鍵名稱。" msgid "The tag value." msgstr "標籤值。" msgid "The template is not a JSON object or YAML mapping." msgstr "範本不是 JSON 物件或 YAML 對映。" #, python-format msgid "The template section is invalid: %(section)s" msgstr "範本區段無效:%(section)s" #, python-format msgid "The template version is invalid: %(explanation)s" msgstr "範本版本無效:%(explanation)s" msgid "The tenant owning this floating IP." msgstr "擁有此浮動 IP 的承租人。" msgid "The tenant owning this network." msgstr "擁有此網路的承租人。" msgid "The time range in seconds." msgstr "時間範圍(以秒為單位)。" msgid "The timestamp indicating volume creation." msgstr "用來指示磁區建立作業的時間戳記。" msgid "The transform protocol of the ipsec policy." msgstr "IPSec 原則的轉換通訊協定。" msgid "The type of profile." msgstr "設定檔的類型。" msgid "The type of senlin policy." msgstr "Senlin 原則的類型。" msgid "The type of the certificate request." msgstr "憑證請求的類型。" msgid "The type of the order." msgstr "順序的類型。" msgid "The type of the resources in the group." msgstr "群組中資源的類型。" msgid "The type of the secret." msgstr "密碼的類型。" msgid "The type of the volume mapping to a backend, if any." msgstr "如果有的話,則為對映至後端的磁區類型。" msgid "The type/format the secret data is provided in." msgstr "所提供之私密資料所採用的類型/格式。" msgid "The type/mode of the algorithm associated with the secret information." msgstr "與密碼資訊相關聯之演算法的類型/模式。" msgid "The unencrypted plain text of the secret." msgstr "密碼的未加密純文字。" msgid "" "The unique identifier of ike policy associated with the ipsec site " "connection." msgstr "與 IPSec 網站連線相關聯之 IKE 原則的唯一 ID。" msgid "" "The unique identifier of ipsec policy associated with the ipsec site " "connection." msgstr "與 IPSec 網站連線相關聯之 IPSec 原則的唯一 ID。" msgid "" "The unique identifier of the router to which the vpn service was inserted." msgstr "在其中插入了 VPN 服務之路由器的唯一 ID。" msgid "" "The unique identifier of the subnet in which the vpn service was created." msgstr "在其中建立了 VPN 服務之子網路的唯一 ID。" msgid "The unique identifier of the tenant owning the ike policy." msgstr "擁有 IKE 原則之承租人的唯一 ID。" msgid "The unique identifier of the tenant owning the ipsec policy." msgstr "擁有 IPSec 原則之承租人的唯一 ID。" msgid "The unique identifier of the tenant owning the ipsec site connection." msgstr "擁有 IPSec 網站連線之承租人的唯一 ID。" msgid "The unique identifier of the tenant owning the vpn service." msgstr "擁有 VPN 服務之承租人的唯一 ID。" msgid "" "The unique identifier of vpn service associated with the ipsec site " "connection." msgstr "與 IPSec 網站連線相關聯之 VPN 服務的唯一 ID。" msgid "" "The user-defined region ID and should unique to the OpenStack deployment. " "While creating the region, heat will url encode this ID." 
msgstr "" "使用者定義的區域 ID,對 OpenStack 部署應該是唯一的。建立區域時,Heat 將對此 " "ID 進行 URL 編碼。" msgid "" "The value for the socket option TCP_KEEPIDLE. This is the time in seconds " "that the connection must be idle before TCP starts sending keepalive probes." msgstr "" "Socket 選項 TCP_KEEPIDLE 的值。這是在 TCP 開始傳送保持作用中探針之前,連線必" "須處於閒置狀態的秒數。" #, python-format msgid "The value must be at least %(min)s." msgstr "值必須至少是 %(min)s。" #, python-format msgid "The value must be in the range %(min)s to %(max)s." msgstr "值必須在 %(min)s 到 %(max)s 的範圍內。" #, python-format msgid "The value must be no greater than %(max)s." msgstr "值不得大於 %(max)s。" #, python-format msgid "The values of the \"for_each\" argument to \"%s\" must be lists" msgstr "\"%s\" 的 \"for_each\" 引數值必須是清單" msgid "The version of the ike policy." msgstr "IKE 原則的版本。" msgid "" "The vnic type to be bound on the neutron port. To support SR-IOV PCI " "passthrough networking, you can request that the neutron port to be realized " "as normal (virtual nic), direct (pci passthrough), or macvtap (virtual " "interface with a tap-like software interface). Note that this only works for " "Neutron deployments that support the bindings extension." msgstr "" "要在 Neutron 埠上連結的 Vnic 類型。若要支援 SR-IOV PCIpassthrough 網路,您可" "以要求將 Neutron 埠實現為一般(虛擬 NIC)、直接(PCI passthrough)或 macvtap " "(具有點選之類軟體介面的虛擬介面)。請注意,此項僅適用於支援連結延伸的 " "Neutron 部署。" msgid "The volume type." msgstr "磁區類型。" msgid "The volume used as source, if any." msgstr "如果有的話,則為用來作為來源的磁區。" msgid "The volume_id can be boot or non-boot device to the server." msgstr "volume_id 可以是伺服器的啟動或非啟動裝置。" msgid "The website endpoint for the specified bucket." msgstr "所指定儲存區的網站端點。" #, python-format msgid "There is no rule %(rule)s. List of allowed rules is: %(rules)s." msgstr "沒有規則 %(rule)s。所容許的規則清單為:%(rules)s。" msgid "" "There is no such option during 5.0.0, so need to make this attribute " "unsupported, otherwise error will raised." msgstr "5.0.0 中沒有此類選項,所以需要將此屬性變成不受支援,否則將發生錯誤。" msgid "" "There is no such option during 5.0.0, so need to make this property " "unsupported while it not used." msgstr "5.0.0 中沒有此類選項,所以在不使用此內容時,需要將其變成不受支援。" #, python-format msgid "" "There was an error loading the definition of the global resource type " "%(type_name)s." msgstr "載入廣域資源類型 %(type_name)s 的定義時發生錯誤。" msgid "This endpoint is enabled or disabled." msgstr "已啟用或已停用此端點。" msgid "This project is enabled or disabled." msgstr "已啟用或已停用此專案。" msgid "This region is enabled or disabled." msgstr "已啟用或已停用此區域。" msgid "This service is enabled or disabled." msgstr "已啟用或已停用此服務。" msgid "Threshold to evaluate against." msgstr "據以評估的臨界值。" msgid "Time To Live (Seconds)." msgstr "存活時間(秒)。" msgid "Time of the first execution in format \"YYYY-MM-DD HH:MM\"." msgstr "第一次執行的時間,格式為:\"YYYY-MM-DD HH:MM\"。" msgid "Time of the next execution in format \"YYYY-MM-DD HH:MM:SS\"." msgstr "下一次執行的時間,格式為:\"YYYY-MM-DD HH:MM:SS\"。" msgid "" "Timeout for client connections' socket operations. If an incoming connection " "is idle for this number of seconds it will be closed. A value of '0' means " "wait forever." msgstr "" "用戶端連線的 Socket 作業逾時。如果送入的連線處於閒置狀態的時間達到此秒數,則" "會將其關閉。值 '0' 表示永久等待。" msgid "Timeout for creating the bay in minutes. Set to 0 for no timeout." msgstr "用於建立機架的逾時(以分鐘為單位)。設定為 0 表示沒有逾時。" msgid "Timeout in seconds for stack action (ie. create or update)." msgstr "堆疊動作(例如,建立或更新)的逾時值(以秒為單位)。" msgid "" "Toggle to enable/disable caching when Orchestration Engine looks for other " "OpenStack service resources using name or id. 
Please note that the global " "toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use " "this feature." msgstr "" "切換以在編排引擎使用名稱或 ID 尋找其他 OpenStack 服務資源時啟用/停用快取功能。" "請注意,必須啟用 oslo.cache(enabled=True in [cache] group) 的廣域切換,才能使用" "此功能。" msgid "" "Toggle to enable/disable caching when Orchestration Engine retrieves " "extensions from other OpenStack services. Please note that the global toggle " "for oslo.cache(enabled=True in [cache] group) must be enabled to use this " "feature." msgstr "" "切換以在編排引擎從其他 OpenStack 服務擷取延伸時啟用/停用快取功能。請注意,必" "須啟用 oslo.cache(enabled=True in [cache] group) 的廣域切換,才能使用此功能。" msgid "" "Toggle to enable/disable caching when Orchestration Engine validates " "property constraints of stack.During property validation with constraints " "Orchestration Engine caches requests to other OpenStack services. Please " "note that the global toggle for oslo.cache(enabled=True in [cache] group) " "must be enabled to use this feature." msgstr "" "切換以在編排引擎驗證堆疊的內容限制時啟用/停用快取功能。在對限制進行內容驗證期" "間,編排引擎會快取對其他 OpenStack 服務的要求。請注意,必須啟用 oslo." "cache(enabled=True in [cache] group) 的廣域切換,才能使用此功能。" msgid "" "Token for stack-user which can be used for signalling handle when " "signal_transport is set to TOKEN_SIGNAL. None for all other signal " "transports." msgstr "" "堆疊使用者的記號,可用於在 signal_transport 設為 TOKEN_SIGNAL 時傳送控點信" "號。針對所有其他信號傳輸,值為「無」。" msgid "" "Tokens are not needed for Swift TempURLs. This attribute is being kept for " "compatibility with the OS::Heat::WaitConditionHandle resource." msgstr "" "Swift TempURL 不需要記號。保留此屬性的目的是為了與 OS::Heat::" "WaitConditionHandle 資源相容。" msgid "Topic" msgstr "主題" msgid "Transform protocol for the ipsec policy." msgstr "IPSec 原則的轉換通訊協定。" msgid "True if alarm evaluation/actioning is enabled." msgstr "如果已啟用警示評估/動作,則為 True。" msgid "" "True if the system should remember a generated private key; False otherwise." msgstr "如果系統應該記住所產生的私密金鑰,則為 True;否則,為 False。" msgid "Type of access that should be provided to guest." msgstr "應該提供給訪客的存取類型。" msgid "Type of adjustment (absolute or percentage)." msgstr "調整的類型(絕對或百分比)。" msgid "" "Type of endpoint in Identity service catalog to use for communication with " "the OpenStack service." msgstr "身分服務型錄中要用於與 OpenStack 服務進行通訊的端點類型。" msgid "Type of keystone Service." msgstr "Keystone 服務的類型。" msgid "Type of receiver." msgstr "接收端的類型。" msgid "Type of the data source." msgstr "資料來源的類型。" msgid "Type of the notification." msgstr "通知的類型。" msgid "Type of the object that RBAC policy affects." msgstr "RBAC 原則所影響的物件類型。" msgid "Type of the value of the input." msgstr "輸入的值類型。" msgid "Type of the value of the output." msgstr "輸出的值類型。" msgid "Type of the volume to create on Cinder backend." msgstr "要在 Cinder 後端上建立的磁區類型。" msgid "URL for API authentication" msgstr "用於 API 鑑別的 URL" msgid "URL for the data source." msgstr "資料來源的 URL。" msgid "" "URL for the job binary. Must be in the format swift:/// or " "internal-db://." msgstr "" "工作二進位檔的 URL。格式必須是:swift:/// 或 internal-db://" "。" msgid "" "URL of TempURL where resource will signal completion and optionally upload " "data." msgstr "TempURL 的 URL,資源將在該 URL 處傳送完成信號,並選擇性地上傳資料。" msgid "URL of keystone service endpoint." msgstr "Keystone 服務端點的 URL。" msgid "URL of the Heat CloudWatch server." msgstr "Heat CloudWatch 伺服器的 URL。" msgid "" "URL of the Heat metadata server. 
NOTE: Setting this is only needed if you " "require instances to use a different endpoint than in the keystone catalog" msgstr "" "Heat meta 資料伺服器的 URL。附註:僅當需要實例使用除 Keystone 型錄中端點外的" "其他端點時,才需要設定此項" msgid "URL of the Heat waitcondition server." msgstr "Heat waitcondition 伺服器的 URL。" msgid "" "URL where the data for this image already resides. For example, if the image " "data is stored in swift, you could specify \"swift://example.com/container/" "obj\"." msgstr "" "已包含此映像檔的資料的 URL。例如,如果將映像檔資料儲存在 Swift 中,則您可以指" "定\"swift://example.com/container/obj\"。" msgid "UUID of the internal subnet to which the instance will be attached." msgstr "要將實例連接至的內部子網路的 UUID。" #, python-format msgid "" "Unable to find neutron provider '%(provider)s', available providers are " "%(providers)s." msgstr "找不到 Neutron 提供者 '%(provider)s',可用的提供者為 %(providers)s。" #, python-format msgid "" "Unable to find senlin policy type '%(pt)s', available policy types are " "%(pts)s." msgstr "找不到 Senlin 原則類型 '%(pt)s',可用的原則類型為 %(pts)s。" #, python-format msgid "" "Unable to find senlin profile type '%(pt)s', available profile types are " "%(pts)s." msgstr "找不到 Senlin 設定檔類型 '%(pt)s',可用的設定檔類型為 %(pts)s。" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "無法從配置檔 %(conf_file)s 載入 %(app_name)s。\n" "發生錯誤:%(e)r" #, python-format msgid "Unable to locate config file [%s]" msgstr "找不到配置檔 [%s]" #, python-format msgid "Unexpected action %(action)s" msgstr "非預期的動作 %(action)s" #, python-format msgid "Unexpected action %s" msgstr "非預期的動作 %s" #, python-format msgid "" "Unexpected properties: %(unexpected)s. Only these properties are allowed for " "%(type)s type of order: %(allowed)s." msgstr "" "非預期的內容:%(unexpected)s。%(type)s 類型的順序只容許使用下列內容:" "%(allowed)s。" msgid "Unique identifier for the device." msgstr "裝置的唯一 ID。" msgid "" "Unique identifier for the ike policy associated with the ipsec site " "connection." msgstr "與 IPSec 網站連線相關聯之 IKE 原則的唯一 ID。" msgid "" "Unique identifier for the ipsec policy associated with the ipsec site " "connection." msgstr "與 IPSec 網站連線相關聯之 IPSec 原則的唯一 ID。" msgid "Unique identifier for the network owning the port." msgstr "擁有埠之網路的唯一 ID。" msgid "" "Unique identifier for the router to which the vpn service will be inserted." msgstr "將在其中插入 VPN 服務之路由器的唯一 ID。" msgid "" "Unique identifier for the vpn service associated with the ipsec site " "connection." msgstr "與 IPSec 網站連線相關聯之 VPN 服務的唯一 ID。" msgid "" "Unique identifier of the firewall policy to which this firewall rule belongs." msgstr "此防火牆規則所屬之防火牆原則的唯一 ID。" msgid "Unique identifier of the firewall policy used to create the firewall." 
msgstr "建立防火牆時所使用之防火牆原則的唯一 ID。" msgid "Unknown" msgstr "未知" #, python-format msgid "Unknown Property %s" msgstr "不明的內容 %s" #, python-format msgid "Unknown attribute \"%s\"" msgstr "不明的屬性 \"%s\"" #, python-format msgid "Unknown error retrieving %s" msgstr "擷取 %s 時發生不明錯誤" #, python-format msgid "Unknown input %s" msgstr "不明的輸入 %s" #, python-format msgid "Unknown key(s) %s" msgstr "不明的索引鍵 %s" msgid "Unknown share_status during creation of share \"{0}\"" msgstr "建立共用項目 \"{0}\" 時的不明 share_status" #, python-format msgid "Unknown status creating Bay '%(name)s' - %(reason)s" msgstr "建立機架 '%(name)s' 時的不明狀態 - %(reason)s" msgid "Unknown status during deleting share \"{0}\"" msgstr "刪除共用項目 \"{0}\" 時的不明狀態" #, python-format msgid "Unknown status updating Bay '%(name)s' - %(reason)s" msgstr "更新機架 '%(name)s' 時的不明狀態 - %(reason)s" #, python-format msgid "Unknown status: %s" msgstr "不明狀態:%s" #, python-format msgid "" "Unrecognized value \"%(value)s\" for \"%(name)s\", acceptable values are: " "true, false." msgstr "無法辨識 \"%(name)s\" 的值 \"%(value)s\",接受的值為:true 和 false。" #, python-format msgid "Unsupported object type %(objtype)s" msgstr "不支援的物件類型 %(objtype)s" #, python-format msgid "Unsupported resource '%s' in LoadBalancerNames" msgstr "LoadBalancerNames 中的資源 '%s' 不受支援" msgid "Unversioned keystone url in format like http://0.0.0.0:5000." msgstr "未版本化之 Keystone URL 的格式類似於 http://0.0.0.0:5000。" #, python-format msgid "Update to properties %(props)s of %(name)s (%(res)s)" msgstr "更新為 %(name)s 的內容 %(props)s (%(res)s)" msgid "Updated At" msgstr "已更新" msgid "Updating a stack when it is deleting" msgstr "在刪除堆疊時,正在對其進行更新" msgid "Updating a stack when it is suspended" msgstr "在堆疊已暫停時更新堆疊" msgid "" "Use get_resource|Ref command instead. For example: { get_resource : " " }" msgstr "改用 get_resource|Ref 指令。例如:{ get_resource : }" msgid "" "Use only with Neutron, to list the internal subnet to which the instance " "will be attached; needed only if multiple exist; list length must be exactly " "1." msgstr "" "僅與 Neutron 配合使用,以列出要將實例連接至的內部子網路;僅當存在多項時,才需" "要此項;清單長度必須正好為 1。" #, python-format msgid "Use property %s" msgstr "使用內容 %s" #, python-format msgid "Use property %s." msgstr "使用內容 %s。" msgid "" "Use the `external_gateway_info` property in the router resource to set up " "the gateway." msgstr "使用路由器資源中的 'external_gateway_info' 內容來設定閘道。" msgid "" "Use the networks attribute instead of first_address. For example: " "\"{get_attr: [, networks, , 0]}\"" msgstr "" "使用網路屬性,而不使用 first_address。例如:\"{get_attr: [, " "networks, , 0]}\"" msgid "Use this resource at your own risk." msgstr "您需要自行承擔使用此資源的風險。" #, python-format msgid "User %s in invalid domain" msgstr "使用者 %s 所在的網域無效" #, python-format msgid "User %s in invalid project" msgstr "使用者 %s 所在的專案無效" msgid "User ID for API authentication" msgstr "用於 API 鑑別的使用者 ID" msgid "User data to pass to instance." msgstr "要傳遞給實例的使用者資料。" msgid "User is not authorized to perform action" msgstr "使用者未獲授權來執行動作" msgid "User name to create a user on instance creation." msgstr "建立實例時要用來建立使用者的使用者名稱。" msgid "Username associated with the AccessKey." msgstr "與 AccessKey 相關聯的使用者名稱。" msgid "Username for API authentication" msgstr "用於 API 鑑別的使用者名稱" msgid "Username for accessing the data source URL." msgstr "用於存取資料來源 URL 的使用者名稱。" msgid "Username for accessing the job binary URL." msgstr "用於存取工作二進位檔 URL 的使用者名稱。" msgid "Username of privileged user in the image." msgstr "映像檔中特許使用者的使用者名稱。" msgid "VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN networks." 
msgstr " VLAN 網路的 VLAN ID,或者 GRE/VXLAN 網路的通道 ID。" msgid "VPC ID for this gateway association." msgstr "此閘道關聯的 VPC ID。" msgid "VPC ID for where the route table is created." msgstr "在其中建立遞送表的 VPC ID。" msgid "" "Valid values are encrypt or decrypt. The heat-engine processes must be " "stopped to use this." msgstr "有效值為 encrypt 或 decrypt。Heat 引擎程序必須已停止,才能使用此項。" #, python-format msgid "Value \"%(val)s\" is invalid for data type \"%(type)s\"." msgstr "值 \"%(val)s\" 不適用於資料類型 \"%(type)s\"。" #, python-format msgid "Value '%(value)s' is invalid for '%(name)s' which only accepts integer." msgstr "值 '%(value)s' 對於僅接受整數的 '%(name)s' 而言無效。" #, python-format msgid "" "Value '%(value)s' is invalid for '%(name)s' which only accepts non-negative " "integer." msgstr "值 '%(value)s' 對於僅接受非負整數的 '%(name)s'而言無效。" #, python-format msgid "Value '%s' is not an integer" msgstr "值 '%s' 不是整數" #, python-format msgid "Value must be a comma-delimited list string: %s" msgstr "值必須是以逗點定界的清單字串:%s" #, python-format msgid "Value must be of type %s" msgstr "值必須是 %s 類型" #, python-format msgid "Value must be valid JSON: %s" msgstr "值必須是有效的 JSON:%s" #, python-format msgid "Value must match pattern: %s" msgstr "值必須符合型樣:%s" msgid "" "Value which can be set or changed on stack update to trigger the resource " "for replacement with a new random string. The salt value itself is ignored " "by the random generator." msgstr "" "此值可以在更新堆疊時予以設定或變更,用來觸發資源以取代為新的隨機字串。隨機產" "生器會忽略 salt 值本身。" msgid "" "Value which can be set to fail the resource operation to test failure " "scenarios." msgstr "可以設定的值,該值用來讓資源作業失敗,以測試失敗實務。" msgid "" "Value which can be set to trigger update replace for the particular resource." msgstr "可以設定的值,該值用來對特定資源觸發更新取代。" #, python-format msgid "Version %(objver)s of %(objname)s is not supported" msgstr "不支援 %(objname)s %(objver)s 版" msgid "Version for the ike policy." msgstr "IKE 原則的版本。" msgid "Version of Hadoop running on instances." msgstr "在實例上執行之 Hadoop 的版本。" msgid "Version of IP address." msgstr "IP 位址的版本。" msgid "Vip associated with the pool." msgstr "與儲存區相關聯的 VIP。" msgid "Volume attachment failed" msgstr "磁區連接失敗" msgid "Volume backup failed" msgstr "磁區備份失敗" msgid "Volume backup restore failed" msgstr "磁區備份還原失敗" msgid "Volume create failed" msgstr "磁區建立失敗" msgid "Volume detachment failed" msgstr "磁區分離失敗" msgid "Volume in use" msgstr "磁區在使用中" msgid "Volume resize failed" msgstr "磁區調整大小失敗" msgid "Volumes per node." msgstr "每個節點的磁區數目。" msgid "Volumes to attach to instance." msgstr "要連接至實例的磁區。" #, python-format msgid "WaitCondition invalid Handle %s" msgstr "WaitCondition:無效的控點 %s" #, python-format msgid "WaitCondition invalid Handle stack %s" msgstr "WaitCondition:無效的控點堆疊 %s" #, python-format msgid "WaitCondition invalid Handle tenant %s" msgstr "WaitCondition:無效的控點承租人 %s" msgid "Weight of pool member in the pool (default to 1)." msgstr "儲存區中儲存區成員的加權(預設為 1)。" msgid "Weight of the pool member in the pool." msgstr "儲存區中儲存區成員的加權。" #, python-format msgid "Went to status %(resource_status)s due to \"%(status_reason)s\"" msgstr "已轉至狀態 %(resource_status)s,原因:\"%(status_reason)s\"" msgid "" "When both ipv6_ra_mode and ipv6_address_mode are set, they must be equal." msgstr "同時設定 ipv6_ra_mode 和 ipv6_address_mode 時,兩者必須相等。" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "在 SSL 模式下執行伺服器時,必須在配置檔中指定 cert_file 及 key_file 選項值" msgid "Whether enable this policy on that cluster." 
msgstr "是否在該叢集上啟用此原則。" msgid "" "Whether the address scope should be shared to other tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only, and restricts changing of shared address scope to unshared with " "update." msgstr "" "是否應該向其他租戶共用位址範圍。請注意,預設原則設定將此屬性限制為僅供管理使" "用者使用,並限制在更新時將共用位址範圍變為取消共用。" msgid "Whether the flavor is shared across all projects." msgstr "是否在所有專案之間共用該特性。" msgid "" "Whether the image can be deleted. If the value is True, the image is " "protected and cannot be deleted." msgstr "是否可以刪除映像檔。如果值為 True,則表示映像檔受保護,無法將其刪除。" msgid "Whether the metering label should be shared across all tenants." msgstr "是否應該在所有承租人之間共用計量標籤。" msgid "Whether the network contains an external router." msgstr "網路是否包含外部路由器。" msgid "Whether the part content is text or multipart." msgstr "組件內容是文字還是多組件。" msgid "" "Whether the subnet pool will be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "是否應該在所有租戶之間共用此子網路儲存區。請注意,預設原則設定將此屬性限制為" "僅供管理使用者使用。" msgid "Whether the volume type is accessible to the public." msgstr "是否可公開存取該磁區類型。" msgid "Whether this QoS policy should be shared to other tenants." msgstr "是否應該向其他租戶共用此服務品質原則。" msgid "" "Whether this firewall should be shared across all tenants. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "是否應該在所有承租人之間共用此防火牆。附註:Neutron 中的預設原則設定會將此內" "容限制為僅供管理使用者使用。" msgid "" "Whether this is default IPv4/IPv6 subnet pool. There can only be one default " "subnet pool for each IP family. Note that the default policy setting " "restricts administrative users to set this to True." msgstr "" "這是否是預設 IPv4/IPv6 子網路儲存區。每一個 IP 系列只能有一個預設子網路儲存" "區。請注意,預設原則設定會限制管理使用者將此項設為 True。" msgid "Whether this network should be shared across all tenants." msgstr "是否應該在所有承租人之間共用此網路。" msgid "" "Whether this network should be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "是否應該在所有承租人之間共用此網路。請注意,預設原則設定將此屬性限制為僅供管" "理使用者使用。" msgid "" "Whether this policy should be audited. When set to True, each time the " "firewall policy or the associated firewall rules are changed, this attribute " "will be set to False and will have to be explicitly set to True through an " "update operation." msgstr "" "是否應該審核此原則。當設定為 True 時,每當變更防火牆原則或相關聯的防火牆規則" "時,會將此屬性設定為 False,並且必須透過更新作業明確地設定為 True。" msgid "Whether this policy should be shared across all tenants." msgstr "是否應該在所有承租人之間共用此原則。" msgid "Whether this rule should be enabled." msgstr "是否應該啟用此規則。" msgid "Whether this rule should be shared across all tenants." msgstr "是否應該在所有承租人之間共用此規則。" msgid "Whether to enable the actions or not." msgstr "是否啟用動作。" msgid "Whether to specify a remote group or a remote IP prefix." msgstr "是要指定遠端群組,還是要指定遠端 IP 字首。" msgid "" "Which lifecycle actions of the deployment resource will result in this " "deployment being triggered." msgstr "將導致觸發此部署的部署資源生命週期動作。" msgid "" "Workflow additional parameters. If Workflow is reverse typed, params " "requires 'task_name', which defines initial task." msgstr "" "工作流程額外參數。如果反轉了工作流程的類型,則參數需要 'task_name'(用來定義" "起始作業)。" msgid "Workflow description." msgstr "工作流程說明。" msgid "Workflow name." msgstr "工作流程名稱。" msgid "Workflow to execute." msgstr "要執行的工作流程。" msgid "Workflow type." 
msgstr "工作流程類型。" #, python-format msgid "Wrong Arguments try: \"%s\"" msgstr "嘗試的引數錯誤:\"%s\"" msgid "You are not authenticated." msgstr "您沒有進行鑑別。" msgid "You are not authorized to complete this action." msgstr "您未獲授權來完成此動作。" #, python-format msgid "You are not authorized to use %(action)s." msgstr "您未獲授權來使用 %(action)s。" #, python-format msgid "" "You have reached the maximum stacks per tenant, %d. Please delete some " "stacks." msgstr "已達到每個承租人的堆疊數目上限 (%d)。請刪除一些堆疊。" #, python-format msgid "could not find user %s" msgstr "找不到使用者 %s" msgid "deployment_id must be specified" msgstr "必須指定 deployment_id" msgid "" "deployments key not allowed in resource metadata with user_data_format of " "SOFTWARE_CONFIG" msgstr "" "在 user_data_format 為 SOFTWARE_CONFIG 的資源 meta 資料中,不容許使用部署索引" "鍵" #, python-format msgid "deployments of server %s" msgstr "伺服器 %s 的部署" #, python-format msgid "environment has wrong section \"%s\"" msgstr "環境具有錯誤的區段 \"%s\"" msgid "error in pool" msgstr "儲存區發生錯誤" msgid "error in vip" msgstr "VIP 發生錯誤" msgid "external network for the gateway." msgstr "閘道的外部網路。" msgid "granularity should be days, hours, minutes, or seconds" msgstr "精度應該是天數、小時數、分鐘數或秒數" msgid "heat.conf misconfigured, auth_encryption_key must be 32 characters" msgstr "heat.conf 配置錯誤,auth_encryption_key 必須是 32 個字元" msgid "" "heat.conf misconfigured, cannot specify \"stack_user_domain_id\" or " "\"stack_user_domain_name\" without \"stack_domain_admin\" and " "\"stack_domain_admin_password\"" msgstr "" "heat.conf 配置錯誤,不能指定 \"stack_user_domain_id\" 或" "\"stack_user_domain_name\",除非是將它們與 \"stack_domain_admin\" 和" "\"stack_domain_admin_password\" 配合使用" msgid "ipv6_ra_mode and ipv6_address_mode are not supported for ipv4." msgstr "ipv4 不支援 ipv6_ra_mode 及 ipv6_address_mode。" msgid "limit cannot be less than 4" msgstr "限制不能小於 4" #, python-format msgid "metadata setting for resource %s" msgstr "資源 %s 的 meta 資料設定" msgid "min/max length must be integral" msgstr "長度下限/上限必須是整數" msgid "min/max must be numeric" msgstr "下限/上限必須是數值" msgid "need more memory." msgstr "需要更多記憶體。" msgid "no resource data found" msgstr "找不到任何資源資料" msgid "no resources were found" msgstr "找不到任何資源" msgid "nova server metadata needs to be a Map." msgstr "nova 伺服器 meta 資料需要為「對映」。" #, python-format msgid "previous_status must be SupportStatus instead of %s" msgstr "previous_status 必須是 SupportStatus,而不是 %s" #, python-format msgid "raw template with id %s not found" msgstr "找不到 ID 為 %s 的原始範本" #, python-format msgid "resource with id %s not found" msgstr "找不到 ID 為 %s 的資源" #, python-format msgid "roles %s" msgstr "角色 %s" msgid "segmentation_id cannot be specified except 0 for using flat" msgstr "使用平面網路類型時無法指定 segmentation_id(0 除外)" msgid "segmentation_id must be specified for using vlan" msgstr "使用 VLAN 時必須指定 segmentation_id" msgid "segmentation_id not allowed for flat network type." msgstr "平面網路類型不容許 segmentation_id。" msgid "server_id must be specified" msgstr "必須指定 server_id" #, python-format msgid "" "task %(task)s contains property 'requires' in case of direct workflow. Only " "reverse workflows can contain property 'requires'." 
msgstr "" "在直接工作流程中,作業 %(task)s 包含內容 'requires'。只有逆向工作流程才可以包" "含內容 'requires'。" heat-10.0.2/heat/locale/it/0000775000175000017500000000000013343562672015344 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/it/LC_MESSAGES/0000775000175000017500000000000013343562672017131 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/it/LC_MESSAGES/heat.po0000666000175000017500000077515513343562351020432 0ustar zuulzuul00000000000000# Translations template for heat. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the heat project. # # Translators: # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: heat VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-05 10:35+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 05:27+0000\n" "Last-Translator: Copied by Zanata \n" "Language: it\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: Italian\n" #, python-format msgid "\"%%s\" is not a valid keyword inside a %s definition" msgstr "" "\"%%s\" non è una parola chiave valida all'interno di una definizione %s" #, python-format msgid "\"%(fn_name)s\": %(err)s" msgstr "\"%(fn_name)s\": %(err)s" #, python-format msgid "" "\"%(name)s\" params must be strings, numbers, list or map. Failed to json " "serialize %(value)s" msgstr "" "I parametri \"%(name)s\" devono essere stringhe, numeri, elenco di mappe. " "Impossibile per json serializzare %(value)s" #, python-format msgid "" "\"%(section)s\" must contain a map of %(obj_name)s maps. Found a [%(_type)s] " "instead" msgstr "" "\"%(section)s\" deve contenere una mappa di associazioni %(obj_name)s. " "Trovato [%(_type)s] invece" #, python-format msgid "\"%(url)s\" is not a valid SwiftSignalHandle. The %(part)s is invalid" msgstr "" "\"%(url)s\" non è un SwiftSignalHandle valido. Il %(part)s non è valido" #, python-format msgid "\"%(value)s\" does not validate %(name)s" msgstr "\"%(value)s\" non convalida %(name)s" #, python-format msgid "\"%(value)s\" does not validate %(name)s (constraint not found)" msgstr "\"%(value)s\" non convalida %(name)s (vincolo non trovato)" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be one of: %(available)s" msgstr "" "\"%(version)s\". \"%(version_type)s\" dovrebbe essere una di: %(available)s" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be: %(available)s" msgstr "\"%(version)s\". 
\"%(version_type)s\" dovrebbe essere: %(available)s" #, python-format msgid "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" msgstr "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" #, python-format msgid "\"%s\" argument must be a string" msgstr "L'argomento \"%s\" deve essere una stringa" #, python-format msgid "\"%s\" can't traverse path" msgstr "\"%s\" non può attraversare il percorso" #, python-format msgid "\"%s\" deletion policy not supported" msgstr "\"%s\" politica di eliminazione non supportata" #, python-format msgid "\"%s\" delimiter must be a string" msgstr "Il delimitatore \"%s\" deve essere una stringa" #, python-format msgid "\"%s\" is not a list" msgstr "\"%s\" non è un elenco" #, python-format msgid "\"%s\" is not a map" msgstr "\"%s\" non è una mappa" #, python-format msgid "\"%s\" is not a valid ARN" msgstr "\"%s\" non è un ARN valido" #, python-format msgid "\"%s\" is not a valid ARN URL" msgstr "\"%s\" non è un URL ARN valido" #, python-format msgid "\"%s\" is not a valid Heat ARN" msgstr "\"%s\" non è un ARN Heat valido" #, python-format msgid "\"%s\" is not a valid URL" msgstr "\"%s\" non è un URL valido" #, python-format msgid "\"%s\" is not a valid boolean" msgstr "\"%s\" non è un valore booleano valido" #, python-format msgid "\"%s\" is not a valid template section" msgstr "\"%s\" non è una sezione template valida" #, python-format msgid "\"%s\" must operate on a list" msgstr "\"%s\" deve operare su un elenco" #, python-format msgid "\"%s\" param placeholders must be strings" msgstr "I segnaposto di parametro \"%s\" devono essere stringhe" #, python-format msgid "\"%s\" parameters must be a mapping" msgstr "I parametri \"%s\" devono essere una mappatura" #, python-format msgid "\"%s\" params must be a map" msgstr "I parametri \"%s\" devono essere una mappa" #, python-format msgid "\"%s\" params must be strings, numbers, list or map." msgstr "\"%s\" parametri devono essere stringhe, numeri, elenco di mappe." #, python-format msgid "\"%s\" template must be a string" msgstr "Il modello \"%s\" deve essere una stringa" #, python-format msgid "\"repeat\" syntax should be %s" msgstr "la sintassi \"repeat\" deve essere %s" #, python-format msgid "%(a)s paused until Hook %(h)s is cleared" msgstr "%(a)s messo in pausa fino a quando non è annullato hook %(h)s" #, python-format msgid "%(action)s is not supported for resource." msgstr "%(action)s non è supportata per la risorsa." #, python-format msgid "%(action)s is restricted for resource." msgstr "%(action)s è limitato alla risorsa." #, python-format msgid "%(desired_capacity)s must be between %(min_size)s and %(max_size)s" msgstr "" "%(desired_capacity)s deve essere compresa tra %(min_size)s e %(max_size)s" #, python-format msgid "%(error)s%(path)s%(message)s" msgstr "%(error)s%(path)s%(message)s" #, python-format msgid "%(feature)s is not supported." msgstr "%(feature)s non è supportato." #, python-format msgid "" "%(img)s must be provided: Referenced cluster template %(tmpl)s has no " "default_image_id defined." msgstr "" "è necessario fornire %(img)s: Il template cluster indicato %(tmpl)s non " "dispone di default_image_id definito." #, python-format msgid "%(lc)s (%(ref)s) reference can not be found." msgstr "Impossibile trovare il riferimento %(lc)s (%(ref)s)." #, python-format msgid "" "%(lc)s (%(ref)s) requires a reference to the configuration not just the name " "of the resource." msgstr "" "%(lc)s (%(ref)s) richiede un riferimento alla configurazione non solo il " "nome della risorsa." 
#, python-format msgid "%(len)d of %(count)d received" msgstr "%(len)d di %(count)d ricevuto" #, python-format msgid "%(len)d of %(count)d received - %(reasons)s" msgstr "%(len)d di %(count)d ricevuti - %(reasons)s" #, python-format msgid "%(message)s" msgstr "%(message)s" #, python-format msgid "%(min_size)s can not be greater than %(max_size)s" msgstr "%(min_size)s non può essere maggiore di %(max_size)s" #, python-format msgid "%(name)s constraint invalid for %(utype)s" msgstr "Vincolo %(name)s non valido per %(utype)s" #, python-format msgid "%(prop1)s cannot be specified without %(prop2)s." msgstr "%(prop1)s non può essere specificato senza %(prop2)s." #, python-format msgid "" "%(prop1)s property should only be specified for %(prop2)s with value " "%(value)s." msgstr "" "La proprietà %(prop1)s deve essere specificata solo per %(prop2)s con valore " "%(value)s." #, python-format msgid "%(resource)s: Invalid attribute %(key)s" msgstr "%(resource)s: attributo non valido %(key)s" #, python-format msgid "" "%(result)s - Unknown status %(resource_status)s due to \"%(status_reason)s\"" msgstr "" "%(result)s - Stato sconosciuto %(resource_status)s dovuto a " "\"%(status_reason)s\"" #, python-format msgid "%(schema)s supplied for %(type)s %(data)s" msgstr "%(schema)s fornito per %(type)s %(data)s" #, python-format msgid "%(server)s-port-%(number)s" msgstr "%(server)s-porta-%(number)s" #, python-format msgid "%(type)s not in valid format: %(error)s" msgstr "%(type)s non in formato valido: %(error)s" #, python-format msgid "%s Key Name must be a string" msgstr "Il nome chiave %s deve essere una stringa" #, python-format msgid "%s Timed out" msgstr "%s Timeout" #, python-format msgid "%s Value Name must be a string" msgstr "Il nome valore %s deve essere una stringa" #, python-format msgid "%s is not a valid job location." msgstr "%s non è un percorso job valido." #, python-format msgid "%s is not active" msgstr "%s non è attivo" #, python-format msgid "%s is not an integer." msgstr "%s non è un numero intero." #, python-format msgid "%s must be provided" msgstr "è necessario fornire %s" #, python-format msgid "'%(attr)s': expected '%(expected)s', got '%(current)s'" msgstr "'%(attr)s': previsto '%(expected)s', ricevuto '%(current)s'" msgid "" "'task_name' is not assigned in 'params' in case of reverse type workflow." msgstr "" "'task_name' non è assegnato in 'params' in caso di flusso di lavoro di tipo " "inverso." msgid "'true' if DHCP is enabled for this subnet; 'false' otherwise." msgstr "" "'true' se DHCP è abilitato per questa sottorete; 'false' in caso contrario." msgid "A UUID for the set of servers being requested." msgstr "In fase di richiesta un UUID per l'insieme di server." msgid "A bad or out-of-range value was supplied" msgstr "È stato fornito un valore errato o fuori intervallo" msgid "A boolean value of default flag." msgstr "Un valore booleano del flag predefinito." msgid "A boolean value specifying the administrative status of the network." msgstr "Un valore booleano che specifica lo stato amministrativo della rete." #, python-format msgid "" "A character class and its corresponding %(min)s constraint to generate the " "random string from." msgstr "" "Una classe di caratteri e il corrispondente vincolo %(min)s da cui generare " "la stringa casuale." #, python-format msgid "" "A character sequence and its corresponding %(min)s constraint to generate " "the random string from." 
msgstr "" "Una sequenza di caratteri e il corrispondente vincolo %(min)s da cui " "generare la stringa casuale." msgid "A comma-delimited list of server ip addresses. (Heat extension)." msgstr "" "Un elenco delimitato da virgole di indirizzi ip del server. (Estensione " "heat)." msgid "A description of the volume." msgstr "Una descrizione del volume." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name. This value is typically vda." msgstr "" "Un nome dispositivo in cui il volume sarà collegato nel sistema in /dev/" "device_name. Questo valore è generalmente vda." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name.e.g. vdb" msgstr "" "Un nome dispositivo in cui il volume sarà collegato nel sistema in /dev/" "device_name.e.g. vdb" msgid "" "A dict of all network addresses with corresponding port_id. Each network " "will have two keys in dict, they are network name and network id. The port " "ID may be obtained through the following expression: \"{get_attr: [, " "addresses, , 0, port]}\"." msgstr "" "Un dict di tutti gli indirizzi di rete con port_id corrispondente. Ciascuna " "rete avrà due chiavi in dict, il nome rete e l'id rete. L'ID porta può " "essere ottenuto attraverso la seguente espressione: \"{get_attr: [, " "addresses, , 0, port]}\"." msgid "" "A dict of assigned network addresses of the form: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Each network will have two keys in dict, they " "are network name and network id." msgstr "" "Un dict degli indirizzi di rete assegnati del modulo: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Ciascuna rete avrà due chiavi in dict, il " "nome rete e l'id rete." msgid "A dict of key-value pairs output from the stack." msgstr "Un dizionario di coppie chiave-valore di output dallo stack." msgid "A dictionary which contains name and input of the workflow." msgstr "Dizionario contenente il nome e l'input del flusso di lavoro." msgid "A length constraint must have a min value and/or a max value specified." msgstr "" "Su un vincolo di lunghezza deve essere specificato un valore minimo e/o un " "valore massimo." msgid "A list of URLs (webhooks) to invoke when state transitions to alarm." msgstr "" "Un elenco di URL (webhooks) da richiamare quando lo stato cambia in " "segnalazione." msgid "" "A list of URLs (webhooks) to invoke when state transitions to insufficient-" "data." msgstr "" "Un elenco di URL (webhooks) da richiamare quando lo stato cambia in dati " "insufficienti." msgid "A list of URLs (webhooks) to invoke when state transitions to ok." msgstr "" "Un elenco di URL (webhooks) da richiamare quando lo stato cambia in ok." msgid "A list of access rules that define access from IP to Share." msgstr "" "Un elenco di regole di accesso che definisce l'accesso da IP a Condivisione." msgid "A list of all rules for the QoS policy." msgstr "Un elenco di regole per la politica QoS." msgid "A list of all subnet attributes for the port." msgstr "Un elenco di tutti gli attributi della sottorete per la porta." msgid "" "A list of character class and their constraints to generate the random " "string from." msgstr "" "Un elenco di classe di caratteri e dei relativi vincoli da cui generare la " "stringa casuale." msgid "" "A list of character sequences and their constraints to generate the random " "string from." 
msgstr "" "Un elenco di sequenze di caratteri e dei relativi vincoli da cui generare la " "stringa casuale." msgid "A list of cluster instance IPs." msgstr "Un elenco di IP istanze del cluster." msgid "A list of clusters to which this policy is attached." msgstr "Un elenco di cluster a cui è collegata questa politica." msgid "A list of host route dictionaries for the subnet." msgstr "Un elenco di dizionari di instradamenti host per la sottorete." msgid "A list of instances ids." msgstr "Un elenco di id istanze." msgid "A list of metric ids." msgstr "Un elenco di id metriche." msgid "" "A list of query factors, each comparing a Sample attribute with a value. " "Implicitly combined with matching_metadata, if any." msgstr "" "Un elenco di fattori di query, ognuno dei quali confronta un attributo di " "esempio con un valore. Combinato implicitamente con matching_metadata, se " "presente." msgid "A list of resource IDs for the resources in the chain." msgstr "Un elenco di ID risorse per le risorse nella catena." msgid "A list of resource IDs for the resources in the group." msgstr "Un elenco di ID risorse per le risorse nel gruppo." msgid "A list of security groups for the port." msgstr "Un elenco di gruppi di sicurezza per la porta." msgid "A list of security services IDs or names." msgstr "Un elenco di ID o nomi dei servizi di sicurezza." msgid "A list of string policies to apply. Defaults to anti-affinity." msgstr "" "Un elenco di politiche stringa da applicare. L'impostazione predefinita è " "anti-affinity." msgid "A login profile for the user." msgstr "Un profilo di login per l'utente." msgid "A mandatory input parameter is missing" msgstr "Un parametro di input obbligatorio è mancante" msgid "A map containing all headers for the container." msgstr "Una mappa contenente tutte le intestazioni per il contenitore." msgid "" "A map of Nova names and captured stderrs from the configuration execution to " "each server." msgstr "" "Una mappa di nomi e stderr Nova rilevati dall'esecuzione della " "configurazione per ogni server." msgid "" "A map of Nova names and captured stdouts from the configuration execution to " "each server." msgstr "" "Una mappa di nomi Nova e stdout rilevati dall'esecuzione della " "configurazione per ogni server." msgid "" "A map of Nova names and returned status code from the configuration " "execution." msgstr "" "Una mappa di nomi Nova e codici di stato restituiti dall'esecuzione della " "configurazione." msgid "" "A map of files to create/overwrite on the server upon boot. Keys are file " "names and values are the file contents." msgstr "" "Una mappa di file da creare/sovrascrivere sul server in seguito all'avvio. " "Le chiavi sono i nomi file e i valori sono i contenuti file." msgid "" "A map of resource names to the specified attribute of each individual " "resource." msgstr "" "Un'associazione di nomi risorsa per l'attributo specificato di ogni singola " "risorsa." msgid "" "A map of resource names to the specified attribute of each individual " "resource. Requires heat_template_version: 2014-10-16." msgstr "" "Un'associazione di nomi risorsa per l'attributo specificato di ogni singola " "risorsa. Richiede heat_template_version: 2014-10-16." msgid "" "A map of user-defined meta data to associate with the account. Each key in " "the map will set the header X-Account-Meta-{key} with the corresponding " "value." msgstr "" "Una mappa di metadati definiti dall'utente da associare all'account. 
" "Ciascuna chiave nella mappa imposterà l'intestazione X-Account-Meta-{key} " "con il valore corrispondente." msgid "" "A map of user-defined meta data to associate with the container. Each key in " "the map will set the header X-Container-Meta-{key} with the corresponding " "value." msgstr "" "Una mappa di metadati definiti dall'utente da associare al contenitore. " "Ciascuna chiave nella mappa imposterà l'intestazione X-Container-Meta-{key} " "con il valore corrispondente." msgid "A name used to distinguish the volume." msgstr "Un nome utilizzato per distinguere il volume." msgid "" "A per-tenant quota on the prefix space that can be allocated from the subnet " "pool for tenant subnets." msgstr "" "Una quota pre-tenant dello spazio del prefisso può essere allocata dal pool " "di sottorete per le sottoreti del tenant." msgid "" "A predefined access control list (ACL) that grants permissions on the bucket." msgstr "Un ACL (Access Control List) che concede le autorizzazioni sul bucket." msgid "A range constraint must have a min value and/or a max value specified." msgstr "" "Su un vincolo di intervallo deve essere specificato un valore minimo e/o un " "valore massimo." msgid "" "A reference to the wait condition handle used to signal this wait condition." msgstr "" "Un riferimento al gestore della condizione di attesa utilizzato per " "segnalare questa condizione di attesa." msgid "" "A signed url to create executions for workflows specified in Workflow " "resource." msgstr "" "Un URL firmato per creare esecuzioni per i flussi di lavoro specificati " "nella risorsa Workflow." msgid "A signed url to handle the alarm." msgstr "Url firmato per gestire la segnalazione." msgid "A signed url to handle the alarm. (Heat extension)." msgstr "Un url firmato per gestire la segnalazione. (Estensione heat)." msgid "A specified set of DNS name servers to be used." msgstr "Una serie specificata di server dei nomi DNS da utilizzare." msgid "" "A string specifying a symbolic name for the network, which is not required " "to be unique." msgstr "" "Una stringa che specifica un nome simbolico per la rete, che non deve essere " "necessariamente univoco." msgid "" "A string specifying a symbolic name for the security group, which is not " "required to be unique." msgstr "" "Una stringa che specifica un nome simbolico per il gruppo di sicurezza, che " "non deve essere necessariamente univoco." msgid "A string specifying physical network mapping for the network." msgstr "Una stringa che specifica la mappatura di rete fisica per la rete." msgid "A string specifying the provider network type for the network." msgstr "Una stringa che specifica il tipo di rete del provider per la rete." msgid "A string specifying the segmentation id for the network." msgstr "Una stringa che specifica l'id di segmentazione per la rete." msgid "A symbolic name for this port." msgstr "Un nome simbolico per questa porta." msgid "A url to handle the alarm using native API." msgstr "Un URL per gestire la segnalazione mediante l'API nativa." msgid "" "A variable that this resource will use to replace with the current index of " "a given resource in the group. Can be used, for example, to customize the " "name property of grouped servers in order to differentiate them when listed " "with nova client." msgstr "" "Una variabile che questa risorsa utilizzerà per sostituire con un indice " "corrente di una risorsa fornita nel gruppo. 
Può essere utilizzata ad esempio " "per personalizzare la proprietà del nome di server raggruppati in modo da " "differenziarli quando sono elencati con il client nova." msgid "AWS compatible instance name." msgstr "Nome istanza compatibile AWS." msgid "AWS query string is malformed, does not adhere to AWS spec" msgstr "" "La stringa di query AWS non è formata correttamente, non è conforme alle " "specifiche AWS" msgid "Access policies to apply to the user." msgstr "Le politiche di accesso da applicare all'utente." #, python-format msgid "AccessPolicy resource %s not in stack" msgstr "La risorsa AccessPolicy %s non presente nello stack" #, python-format msgid "Action %s not allowed for user" msgstr "Azione %s non consentita per l'utente" msgid "Action to be performed on the traffic matching the rule." msgstr "Azione da eseguire sul traffico corrispondente alla regola." msgid "Actual input parameter values of the task." msgstr "Valori effettivi dei parametri di input dell'attività." msgid "Add needed policies directly to the task, Policy keyword is not needed" msgstr "" "Aggiungere le politiche necessarie direttamente all'attività. La parola " "chiave Policy non è necessaria" msgid "Additional MAC/IP address pairs allowed to pass through a port." msgstr "" "Le coppie di indirizzi MAC/IP aggiuntivi che è possibile trasmettere tramite " "una porta." msgid "Additional MAC/IP address pairs allowed to pass through the port." msgstr "" "Le coppie di indirizzi MAC/IP aggiuntivi che è possibile trasmettere tramite " "la porta." msgid "Additional routes for this subnet." msgstr "Instradamenti aggiuntivi per questa sottorete." msgid "Address family of the address scope, which is 4 or 6." msgstr "Famiglia di indirizzi dell'ambito dell'indirizzo, che è 4 o 6." msgid "" "Address of the notification. It could be a valid email address, url or " "service key based on notification type." msgstr "" "Indirizzo della notifica. Potrebbe essere un indirizzo email valido, un url " "o una chiave di servizio basata sul tipo di notifica." msgid "" "Address to bind the server. Useful when selecting a particular network " "interface." msgstr "" "Indirizzo per il bind del server. Utile quando si seleziona una particolare " "interfaccia di rete." msgid "Administrative state for the ipsec site connection." msgstr "Stato amministrativo per la connessione al sito ipsec." msgid "Administrative state for the vpn service." msgstr "Stato amministrativo per il servizio vpn." msgid "" "Administrative state of the firewall. If false (down), firewall does not " "forward packets and will drop all traffic to/from VMs behind the firewall." msgstr "" "Stato amministrativo del firewall. Se false (inattivo), il firewall non " "inoltra i pacchetti ed eliminerà tutto il traffico a/da VM dietro il " "firewall." msgid "Administrative state of the router." msgstr "Stato amministrativo del router." #, python-format msgid "Alarm %(alarm)s could not find scaling group named \"%(group)s\"" msgstr "" "La segnalazione %(alarm)s non è in grado di trovare il gruppo di scaling " "denominato \"%(group)s\"" #, python-format msgid "Algorithm must be one of %s" msgstr "L'algoritmo deve essere uno di %s" msgid "All heat engines are down." msgstr "Tutti i motori Heat sono spenti." msgid "Allocated floating IP address." msgstr "Indirizzo IP mobile assegnato." msgid "Allocation ID for VPC EIP address." msgstr "ID assegnazione per l'indirizzo EIP VPC." msgid "Allow client's debug log output." msgstr "Consenti output log di debug del client." 
msgid "Allow or deny action for this firewall rule." msgstr "Consentire o negare l'azione per questa regola del firewall." msgid "Allow orchestration of multiple clouds." msgstr "Consenti orchestrazione di più cloud." msgid "" "Allow reauthentication on token expiry, such that long-running tasks may " "complete. Note this defeats the expiry of any provided user tokens." msgstr "" "Consentire la riautenticazione alla scadenza del token, in modo che le " "attività con lunga esecuzione possano essere completate. In questo modo " "viene annullata la scadenza di qualsiasi token fornito dall'utente." msgid "" "Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At " "least one endpoint needs to be specified." msgstr "" "Endpoint keystone consentiti per auth_uri quando è abilitato multi_cloud. " "Specificare almeno un endpoint." msgid "" "Allowed tenancy of instances launched in the VPC. default - any tenancy; " "dedicated - instance will be dedicated, regardless of the tenancy option " "specified at instance launch." msgstr "" "Tenancy consentito delle istanze avviate in VPC. default - qualsiasi " "tenancy; dedicated - l'istanza sarà dedicata, indipendentemente dall'opzione " "tenancy specificata all'avvio dell'istanza." #, python-format msgid "Allowed values: %s" msgstr "Valori consentiti: %s" msgid "AllowedPattern must be a string" msgstr "AllowedPattern deve essere una stringa" msgid "AllowedValues must be a list" msgstr "AllowedValues deve essere un elenco" msgid "Allowing not to store action results after task completion." msgstr "" "Consentire di non memorizzare i risultati dell'azione dopo il completamento " "dell'attività." msgid "" "Allows to synchronize multiple parallel workflow branches and aggregate " "their data. Valid inputs: all - the task will run only if all upstream tasks " "are completed. Any numeric value - then the task will run once at least this " "number of upstream tasks are completed and corresponding conditions have " "triggered." msgstr "" "Consente di sincronizzare più rami paralleli del flusso di lavoro e " "aggregare i relativi dati. Valori validi: all - l'attività viene eseguita " "solo se tutte le attività upstream vengono completate. Qualsiasi valore " "numerico - l'attività viene eseguita almeno una volta, questo numero di " "attività upstream viene completato e le condizioni corrispondenti vengono " "attivate." #, python-format msgid "Ambiguous versions (%s)" msgstr "Versioni non valide (%s)" msgid "" "Amount of disk space (in GB) required to boot image. Default value is 0 if " "not specified and means no limit on the disk size." msgstr "" "Quantità di spazio su disco (in GB) richiesto per avviare l'immagine. Se non " "specificato, il valore predefinito è 0 e indica che non esiste alcun limite " "alla dimensione del disco." msgid "" "Amount of ram (in MB) required to boot image. Default value is 0 if not " "specified and means no limit on the ram size." msgstr "" "Quantità di ram (in MB) richiesta per avviare l'immagine. Se non è " "specificato, il valore predefinito è 0 e indica che non esiste alcun limite " "per la dimensione della ram." msgid "An address scope ID to assign to the subnet pool." msgstr "Un ID dell'ambito di indirizzi da assegnare al pool di sottorete." msgid "An application health check for the instances." msgstr "Una verifica stato dell'applicazione per le istanze." msgid "An ordered list of firewall rules to apply to the firewall." msgstr "Un elenco ordinato di regole del firewall da applicare al firewall." 
msgid "" "An ordered list of nics to be added to this server, with information about " "connected networks, fixed ips, port etc." msgstr "" "Un elenco ordinato di appellativi da aggiungere a questo server, con le " "informazioni sulle reti connesse, sugli ip fissi, sulla porta, ecc." msgid "An unknown exception occurred." msgstr "E' stato riscontrato un errore sconosciuto" msgid "" "Any data structure arbitrarily containing YAQL expressions that defines " "workflow output. May be nested." msgstr "" "Una struttura di dati che contiene espressioni YAQL che definisce l'output " "del flusso di lavoro. Può essere nidificata." msgid "Anything other than one VPCZoneIdentifier" msgstr "Qualsiasi valore diverso da un VPCZoneIdentifier" msgid "Api endpoint reference of the instance." msgstr "Riferimento endpoint Api dell'istanza." msgid "" "Arbitrary key-value pairs specified by the client to help boot a server." msgstr "" "Coppie chiave-valore arbitrarie specificate dal client per consentire " "l'avvio di un server." msgid "" "Arbitrary key-value pairs specified by the client to help the Cinder " "scheduler creating a volume." msgstr "" "Coppie chiave-valore specificate arbitrariamente dal client per aiutare lo " "scheduler Cinder a creare un volume." msgid "" "Arbitrary key/value metadata to store contextual information about this " "queue." msgstr "" "Metadati chiave/valore arbitrari per memorizzare le informazioni contestuali " "sull'aggregato." msgid "" "Arbitrary key/value metadata to store for this server. Both keys and values " "must be 255 characters or less. Non-string values will be serialized to JSON " "(and the serialized string must be 255 characters or less)." msgstr "" "Metadati chiave/valore arbitrari da memorizzare per questo server. Sia le " "chiavi che i valori devono avere una lunghezza pari a 255 caratteri o meno. " "I valori non stringa verranno serializzati su JSON (mentre la stringa " "serializzata deve avere una lunghezza pari a 255 caratteri o meno)." msgid "Arbitrary key/value metadata to store information for aggregate." msgstr "" "Metadati chiave/valore arbitrari per memorizzare le informazioni " "sull'aggregato." 
#, python-format msgid "Argument to \"%s\" must be a list" msgstr "L'argomento \"%s\" deve essere un elenco" #, python-format msgid "Argument to \"%s\" must be a string" msgstr "L'argomento per \"%s\" deve essere una stringa" #, python-format msgid "Argument to \"%s\" must be string or list" msgstr "L'argomento per \"%s\" deve essere una stringa o un elenco" #, python-format msgid "Argument to function \"%s\" must be a list of strings" msgstr "L'argomento per la funzione \"%s\" deve essere un elenco di stringhe" #, python-format msgid "" "Arguments to \"%s\" can be of the next forms: [resource_name] or " "[resource_name, attribute, (path), ...]" msgstr "" "Gli argomenti per \"%s\" possono essere dei seguenti formati: " "[resource_name] o [resource_name, attribute, (path), ...]" #, python-format msgid "Arguments to \"%s\" must be a map" msgstr "Gli argomenti per \"%s\" devono essere una mappa" #, python-format msgid "Arguments to \"%s\" must be of the form [index, collection]" msgstr "Gli argomenti per \"%s\" devono essere nel formato [index, collection]" #, python-format msgid "" "Arguments to \"%s\" must be of the form [resource_name, attribute, " "(path), ...]" msgstr "" "Gli argomenti per \"%s\" devono essere nel formato [resource_name, " "attribute, (path), ...]" #, python-format msgid "Arguments to \"%s\" must be of the form [resource_name, attribute]" msgstr "" "Gli argomenti per \"%s\" devono essere nel formato [resource_name, attribute]" #, python-format msgid "Arguments to %s not fully resolved" msgstr "Argomenti per %s non completamente risolti" #, python-format msgid "Attempt to delete a stack with id: %(id)s %(msg)s" msgstr "Tentativo di eliminare uno stack con id: %(id)s %(msg)s" #, python-format msgid "Attempt to delete user creds with id %(id)s that does not exist" msgstr "" "Tentativo di eliminazione dei crediti utente con l'id %(id)s che non esiste" #, python-format msgid "Attempt to delete watch_rule: %(id)s %(msg)s" msgstr "Tentativo di eliminare una regola watch (watch_rule): %(id)s %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(msg)s" msgstr "Tentativo di aggiornare uno stack con id: %(id)s %(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(traversal)s %(msg)s" msgstr "Tentativo di aggiornare uno stack con id: %(id)s %(traversal)s %(msg)s" #, python-format msgid "Attempt to update a watch with id: %(id)s %(msg)s" msgstr "Tentativo di aggiornare una regola watch con id: %(id)s %(msg)s" msgid "Attempt to use stored_context with no user_creds" msgstr "Tentativo di utilizzare stored_context senza nessun user_creds" #, python-format msgid "Attribute %(attr)s for facade %(type)s missing in provider" msgstr "" "L'attributo %(attr)s per il facade %(type)s non è presente nel provider" msgid "Audit status of this firewall policy." msgstr "Verifica lo stato di questa politica del firewall." msgid "Authentication Endpoint URI." msgstr "URI endpoint di autenticazione." msgid "Authentication hash algorithm for the ike policy." msgstr "Algoritmo hash di autenticazione per la politica ike." msgid "Authentication hash algorithm for the ipsec policy." msgstr "Algoritmo hash di autenticazione per la politica ipsec." msgid "Authorization failed." msgstr "Autorizzazione non riuscita." msgid "AutoScaling group ID to apply policy to." msgstr "ID gruppo AutoScaling su cui applicare la politica." msgid "AutoScaling group name to apply policy to." msgstr "Nome del gruppo AutoScaling a cui applicare la politica." 
msgid "Availability Zone of the subnet." msgstr "Zona di disponibilità della sottorete." msgid "Availability zone in which you want the subnet." msgstr "La zona di disponibilità in cui si desidera la sottorete." msgid "Availability zone to create servers in." msgstr "Zona di disponibilità in cui creare i server." msgid "Availability zone to create volumes in." msgstr "Zona di disponibilità in cui creare i volumi." msgid "Availability zone to launch the instance in." msgstr "Zona di disponibilità in cui avviare l'istanza." msgid "Backend authentication failed" msgstr "Autenticazione backend non riuscita" msgid "Binary" msgstr "Valore binario" msgid "Block device mappings for this server." msgstr "Le associazioni del dispositivo di blocco per questo server." msgid "Block device mappings to attach to instance." msgstr "Associazioni unità di blocco da collegare all'istanza." msgid "Block device mappings v2 for this server." msgstr "Associazioni del dispositivo di blocco v2 per questo server." msgid "" "Boolean extra spec that used for filtering of backends by their capability " "to create share snapshots." msgstr "" "La specifica supplementare booleana utilizzata per il filtraggio dei backend " "in base alla capacità di creare istantanee di condivisione." msgid "Boolean indicating if the volume can be booted or not." msgstr "Booleano che indica se il volume può essere avviato o meno." msgid "Boolean indicating if the volume is encrypted or not." msgstr "Valore Booleano che indica se il volume è codificato o meno." msgid "" "Boolean indicating whether allow the volume to be attached more than once." msgstr "" "Valore Booleano che indica se consentire che il volume sia collegato più di " "una volta." msgid "" "Bus of the device: hypervisor driver chooses a suitable default if omitted." msgstr "" "Bus del dispositivo: il driver hypervisor sceglie un solo valore predefinito " "adattabile se omesso." msgid "CIDR block notation for this subnet." msgstr "Notazione del blocco CIDR per questa sottorete." msgid "CIDR block to apply to subnet." msgstr "Il blocco CIDR da applicare alla sottorete." msgid "CIDR block to apply to the VPC." msgstr "Blocco CIDR da applicare a VPC." msgid "CIDR of subnet." msgstr "CIDR della sottorete." msgid "CIDR to be associated with this metering rule." msgstr "CIDR da associare a questa regola di misurazione." #, python-format msgid "Can not specify property \"%s\" if the volume type is public." msgstr "" "Impossibile specificare la proprietà \"%s\" se il tipo di volume è pubblico." #, python-format msgid "Can not use %s property on Nova-network." msgstr "Non utilizzare la proprietà %s sulla rete Nova." 
#, python-format msgid "Can't find role %s" msgstr "Impossibile trovare il ruolo %s" msgid "Can't get user token without password" msgstr "Impossibile ottenere il token utente senza password" msgid "Can't get user token, user not yet created" msgstr "" "Impossibile ottenere il token utente, l'utente non è stato ancora creato" msgid "Can't traverse attribute path" msgstr "Impossibile passare il percorso dell'attributo" #, python-format msgid "Cancelling update when stack is %s" msgstr "Annullare l'aggiornamento quando lo stack è %s" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "Impossibile chiamare %(method)s su oggetto orfano %(objtype)s" #, python-format msgid "Cannot check %s, stack not created" msgstr "Impossibile verificare uno stack %s, non creato" #, python-format msgid "Cannot define the following properties at the same time: %(props)s." msgstr "" "Impossibile definire le seguenti proprietà contemporaneamente: %(props)s." #, python-format msgid "" "Cannot establish connection to Heat endpoint at region \"%(region)s\" due to " "\"%(exc)s\"" msgstr "" "Impossibile stabilire una connessione all'endpoint Heat nella regione " "\"%(region)s\" a causa di \"%(exc)s\"" msgid "" "Cannot get stack domain user token, no stack domain id configured, please " "fix your heat.conf" msgstr "" "Impossibile ottenere il token utente del dominio stack, non è configurato " "nessun ID dominio stack, correggere heat.conf" msgid "Cannot migrate to lower schema version." msgstr "Impossibile migrare a una versione schema inferiore." #, python-format msgid "Cannot modify readonly field %(field)s" msgstr "Impossibile modificare il campo di sola lettura %(field)s" #, python-format msgid "Cannot resume %s, resource not found" msgstr "Impossibile riprendere %s, risorsa non trovata" #, python-format msgid "Cannot resume %s, resource_id not set" msgstr "Impossibile riprendere %s, resource_id non impostato" #, python-format msgid "Cannot resume %s, stack not created" msgstr "Impossibile riprendere %s, stack non creato" #, python-format msgid "Cannot suspend %s, resource not found" msgstr "Impossibile sospendere %s, risorsa non trovata" #, python-format msgid "Cannot suspend %s, resource_id not set" msgstr "Impossibile sospendere %s, resource_id non impostato" #, python-format msgid "Cannot suspend %s, stack not created" msgstr "Impossibile sospendere %s, stack non creato" msgid "Captured stderr from the configuration execution." msgstr "stderr catturato dall'esecuzione della configurazione." msgid "Captured stdout from the configuration execution." msgstr "stdout catturato dall'esecuzione della configurazione." #, python-format msgid "Circular Dependency Found: %(cycle)s" msgstr "Trovata dipendenza circolare: %(cycle)s" msgid "Client entity to poll." msgstr "L'entità client per il polling." msgid "Client name and resource getter name must be specified." msgstr "" "Il nome client e il nome getter della risorsa devono essere specificati." msgid "Client to poll." msgstr "Il client per il polling." msgid "Cluster configs dictionary." msgstr "Dizionario configurazioni cluster." msgid "Cluster information." msgstr "Informazioni cluster." msgid "Cluster metadata." msgstr "Metadati cluster." msgid "Cluster name." msgstr "Nome cluster." msgid "Cluster status." msgstr "Stato cluster." msgid "Comparison operator." msgstr "Operatore di confronto." 
#, python-format msgid "Concurrent transaction for %(action)s" msgstr "Transazione contemporanea per %(action)s" msgid "Configuration of session persistence." msgstr "Configurazione della persistenza di sessione." msgid "" "Configuration script or manifest which specifies what actual configuration " "is performed." msgstr "" "File manifest o script di configurazione che specifica quale effettiva " "configurazione viene eseguita." msgid "Configure most important configs automatically." msgstr "Configurare le configurazioni più importanti automaticamente." #, python-format msgid "Confirm resize for server %s failed" msgstr "Conferma del ridimensionamento del server %s non riuscita" msgid "Connection info for this network gateway." msgstr "Informazioni sulla connessione per questo gateway di rete." #, python-format msgid "Container '%(name)s' creation failed: %(code)s - %(reason)s" msgstr "" "Creazione del contenitore '%(name)s' non riuscita: %(code)s - %(reason)s" msgid "Container format of image." msgstr "Formato contenitore dell'immagine." msgid "" "Content of part to attach, either inline or by referencing the ID of another " "software config resource." msgstr "" "Il contenuto della parte da allegare, sia inline o facendo riferimento " "all'ID di un'altra risorsa di configurazione software." msgid "Context for this stack." msgstr "Il contesto di questo stack." msgid "Control how the disk is partitioned when the server is created." msgstr "" "Controlla come il disco è suddiviso in partizioni quando viene creato il " "server." msgid "Controls DPD protocol mode." msgstr "Controlla la modalità del protocollo DPD." msgid "" "Convenience attribute to fetch the first assigned network address, or an " "empty string if nothing has been assigned at this time. Result may not be " "predictable if the server has addresses from more than one network." msgstr "" "Attributo di convenienza per richiamare il primo indirizzo di rete " "assegnato, oppure una stringa vuota se non è stato assegnato nulla in questa " "fase. Il risultato potrebbe non essere prevedibile se il server dispone di " "indirizzi da più di una rete." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure when signal_transport is set to " "TOKEN_SIGNAL. You can signal success by adding --data-binary '{\"status\": " "\"SUCCESS\"}' , or signal failure by adding --data-binary '{\"status\": " "\"FAILURE\"}'. This attribute is set to None for all other signal transports." msgstr "" "L'attributo di convenienza fornisce il prefisso del comando CLI curl, che " "può essere utilizzato per la segnalazione dell'handle di completamento o di " "errore quando signal_transport è impostato su TOKEN_SIGNAL. È possibile " "segnalare l'esito positivo aggiungendo --data-binary '{\"status\": \"SUCCESS" "\"}' o segnalare un errore aggiungendo --data-binary '{\"status\": \"FAILURE" "\"}'. Questo attibuto è impostato su None per tutti gli altri trasporti di " "segnale." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure. You can signal success by " "adding --data-binary '{\"status\": \"SUCCESS\"}' , or signal failure by " "adding --data-binary '{\"status\": \"FAILURE\"}'." msgstr "" "L'attributo di convenienza fornisce il prefisso del comando CLI curl, che " "può essere utilizzato per la segnalazione dell'handle di completamento o di " "errore. 
È possibile segnalare l'esito positivo aggiungendo --data-binary " "'{\"status\": \"SUCCESS\"}' o segnalare un errore aggiungendo --data-binary " "'{\"status\": \"FAILURE\"}'." msgid "Cooldown period, in seconds." msgstr "Periodo di cooldown, in secondi." #, python-format msgid "Could not confirm resize of server %s" msgstr "Impossibile confermare il ridimensionamento del server %s" #, python-format msgid "Could not detach attachment %(att)s from server %(srv)s." msgstr "Impossibile scollegare l'allegato %(att)s dal server %(srv)s." #, python-format msgid "Could not fetch remote template \"%(name)s\": %(exc)s" msgstr "Impossibile recuperare il template remoto \"%(name)s\": %(exc)s" #, python-format msgid "Could not fetch remote template '%(url)s': %(exc)s" msgstr "Impossibile recuperare il template remoto '%(url)s': %(exc)s" #, python-format msgid "Could not load %(name)s: %(error)s" msgstr "Impossibile caricare %(name)s: %(error)s" #, python-format msgid "Could not retrieve template: %s" msgstr "Impossibile richiamare il template: %s" msgid "Create volumes on the same physical port as an instance." msgstr "Creare volumi sulla stessa porta fisica di un'istanza." msgid "" "Credentials used for swift. Not required if sahara is configured to use " "proxy users and delegated trusts for access." msgstr "" "Credenziali utilizzate per swift. Non richieste se sahara è configurato per " "utilizzare gli utenti proxy e i trust delegati per l'accesso." msgid "Cron expression." msgstr "Espressione cron." msgid "Current share status." msgstr "Stato corrente della condivisione." msgid "Custom LoadBalancer template can not be found" msgstr "Impossibile trovare il template LoadBalancer personalizzato" msgid "DB instance restore point." msgstr "Punto di ripristino dell'istanza DB." msgid "DNS Domain id or name." msgstr "ID o nome del dominio DNS." msgid "DNS IP address used inside tenant's network." msgstr "Indirizzo IP DNS utilizzato all'interno della rete del tenant." msgid "DNS Record type." msgstr "Tipo di record DNS." msgid "DNS domain serial." msgstr "Numero di serie del dominio DNS." msgid "" "DNS record data, varies based on the type of record. For more details, " "please refer rfc 1035." msgstr "" "Dati del record DNS, variano in base al tipo di record. Per maggiori " "dettagli, fare riferimento a rfc 1035." msgid "" "DNS record priority. It is considered only for MX and SRV types, otherwise, " "it is ignored." msgstr "" "Priorità del record DNS. Viene considerata solo per i tipi MX e SRV, " "altrimenti, viene ignorata." #, python-format msgid "Data supplied was not valid: %(reason)s" msgstr "I dati forniti non erano validi: %(reason)s" #, python-format msgid "" "Database %(dbs)s specified for user does not exist in databases for resource " "%(name)s." msgstr "" "Il database %(dbs)s specificato per l'utente non esiste nei database per la " "risorsa %(name)s." msgid "Database volume size in GB." msgstr "La dimensione del volume di database in GB." #, python-format msgid "" "Databases property is required if users property is provided for resource %s." msgstr "" "Se per la risorsa %s è fornita la proprietà utenti, è necessaria la " "proprietà dei database." #, python-format msgid "" "Datastore version %(dsversion)s for datastore type %(dstype)s is not valid. " "Allowed versions are %(allowed)s." msgstr "" "La versione dell'archivio dati %(dsversion)s per il tipo di archivio dati " "%(dstype)s non è valida. Le versioni consentite sono %(allowed)s." msgid "Datetime when a share was created." 
msgstr "Data e ora di creazione della condivisione." msgid "" "Dead Peer Detection protocol configuration for the ipsec site connection." msgstr "" "Configurazione del protocollo di rilevamento peer non attivo per la " "connessione al sito ipsec." msgid "Dead engines are removed." msgstr "I motori inattivi vengono rimossi." msgid "Default TLS container reference to retrieve TLS information." msgstr "" "Il riferimento del contenitore TLS predefinito per richiamare le " "informazioni TLS." #, python-format msgid "Default must be a comma-delimited list string: %s" msgstr "" "L'impostazione predefinita deve essere una stringa di elenco delimitata da " "virgole: %s" msgid "Default name or UUID of the image used to boot Hadoop nodes." msgstr "" "Nome predefinito o UUID dell'immagine utilizzata per avviare i nodi Hadoop." msgid "Default region name used to get services endpoints." msgstr "" "Nome regione predefinito utilizzato per acquisire gli endpoint dei servizi." msgid "Default settings for some of task attributes defined at workflow level." msgstr "" "Le impostazioni predefinite per alcuni degli attributi delle attività " "definiti a livello di flusso di lavoro." msgid "Default value for the input if none is specified." msgstr "Valore predefinito per l'input se none è specificato." msgid "" "Defines a delay in seconds that Mistral Engine should wait after a task has " "completed before starting next tasks defined in on-success, on-error or on-" "complete." msgstr "" "Definisce un ritardo in secondi che Mistral Engine deve attendere dopo che " "un'attività è stata completata prima di avviare le attività successive " "definite come on-success, on-error o on-complete." msgid "" "Defines a delay in seconds that Mistral Engine should wait before starting a " "task." msgstr "" "Definisce un ritardo in secondi che Mistral Engine deve attendere prima di " "avviare un'attività." msgid "Defines a pattern how task should be repeated in case of an error." msgstr "" "Definisce un modello di come l'attività deve essere ripetuta in caso di " "errore." msgid "" "Defines a period of time in seconds after which a task will be failed " "automatically by engine if hasn't completed." msgstr "" "Definisce un periodo di tempo in secondi dopo il quale un'attività viene " "definita automaticamente non riuscita dal motore se non è stata completata. " msgid "Defines if share type is accessible to the public." msgstr "Definisce se il tipo di condivisione è accessibile al pubblico." msgid "Defines if shared filesystem is public or private." msgstr "Definisce se il filesystem condiviso è pubblico o privato." msgid "" "Defines the method in which the request body for signaling a workflow would " "be parsed. In case this property is set to True, the body would be parsed as " "a simple json where each key is a workflow input, in other cases body would " "be parsed expecting a specific json format with two keys: \"input\" and " "\"params\"." msgstr "" "Definisce il metodo in cui il corpo della richiesta per la segnalazione di " "un flusso di lavoro verrà analizzato. Nel caso in cui questa proprietà sia " "impostata su True, il corpo verrà analizzato come json semplice dove " "ciascuna chiave è un input del flusso di lavoro, negli altri casi il corpo " "verrà analizzato prevedendo un formato json specifico con due chiavi: \"input" "\" e \"params\"." msgid "" "Defines whether Mistral Engine should put the workflow on hold or not before " "starting a task." 
msgstr "" "Definisce se Mistral Engine deve posizionare il flusso di lavro in attesa o " "meno prima di avviare un'attività." msgid "Defines whether auto-assign security group to this Node Group template." msgstr "" "Definisce se assegnare automaticamente un gruppo di sicurezza a questo " "template Gruppo di nodi." #, python-format msgid "" "Defining more than one configuration for the same action in " "SoftwareComponent \"%s\" is not allowed." msgstr "" "Non è consentita la definizione di più di una configurazione per la stessa " "azione in SoftwareComponent \"%s\"." msgid "Deleting in-progress snapshot" msgstr "Eliminazione dell'istantanea in corso" #, python-format msgid "Deleting non-empty container (%(id)s) when %(prop)s is False" msgstr "" "Eliminazione del contenitore (%(id)s) non vuoto quando %(prop)s è False" #, python-format msgid "Delimiter for %s must be string" msgstr "Il delimitatore per %s deve essere una stringa" msgid "" "Denotes that the deployment is in an error state if this output has a value." msgstr "" "Indica che la distribuzione si trova in uno stato di errore se questo output " "ha un valore." msgid "Deploy data available" msgstr "Distribuzione dati disponibile" #, python-format msgid "Deployment exited with non-zero status code: %s" msgstr "La distribuzione è terminata con codice di stato diverso da zero: %s" #, python-format msgid "Deployment to server failed: %s" msgstr "Distribuzione sul server non riuscita: %s" #, python-format msgid "Deployment with id %s not found" msgstr "Distribuzione con id %s non trovata" msgid "Deprecated." msgstr "Obsoleto." msgid "" "Describe time constraints for the alarm. Only evaluate the alarm if the time " "at evaluation is within this time constraint. Start point(s) of the " "constraint are specified with a cron expression, whereas its duration is " "given in seconds." msgstr "" "Descrivere i vincoli di tempo dell'allarme. Valutare l'allarme solo se il " "tempo al momento della valutazione è compreso in questi vincoli di tempo. I " "punti di inizio del vincolo sono specificati con un'espressione cron, mentre " "la durata è espressa in secondi." msgid "Description for the alarm." msgstr "Descrizione per la segnalazione." msgid "Description for the firewall policy." msgstr "Descrizione della politica del firewall." msgid "Description for the firewall rule." msgstr "Descrizione della politica del firewall." msgid "Description for the firewall." msgstr "Descrizione per il firewall." msgid "Description for the ike policy." msgstr "Descrizione per la politica ike." msgid "Description for the ipsec policy." msgstr "Descrizione per la politica ipsec." msgid "Description for the ipsec site connection." msgstr "Descrizione per la connessione al sito ipsec." msgid "Description for the time constraint." msgstr "Descrizione del vincolo di tempo." msgid "Description for the vpn service." msgstr "Descrizione per il servizio vpn." msgid "Description for this interface." msgstr "Descrizione per questa interfaccia." msgid "Description of domain." msgstr "Descrizione del dominio." msgid "Description of keystone group." msgstr "Descrizione del gruppo keystone." msgid "Description of keystone project." msgstr "Descrizione del progetto keystone." msgid "Description of keystone region." msgstr "Descrizione della regione keystone." msgid "Description of keystone service." msgstr "Descrizione del servizio keystone." msgid "Description of keystone user." msgstr "Descrizione dell'utente keystone." msgid "Description of record." 
msgstr "Descrizione del record." msgid "Description of the Node Group Template." msgstr "Descrizione del template del gruppo di nodi." msgid "Description of the Sahara Group Template." msgstr "Descrizione del template del gruppo Sahara." msgid "Description of the alarm." msgstr "Descrizione della segnalazione." msgid "Description of the data source." msgstr "Descrizione dell'origine dati." msgid "Description of the firewall policy." msgstr "Descrizione della politica del firewall." msgid "Description of the firewall rule." msgstr "Descrizione della regola del firewall." msgid "Description of the firewall." msgstr "Descrizione del firewall." msgid "Description of the image." msgstr "Descrizione dell'immagine." msgid "Description of the input." msgstr "Descrizione dell'input." msgid "Description of the job binary." msgstr "Descrizione del job binary." msgid "Description of the metering label." msgstr "Descrizione dell'etichetta di misurazione." msgid "Description of the output." msgstr "Descrizione dell'output." msgid "Description of the pool." msgstr "Descrizione del pool." msgid "Description of the security group." msgstr "Descrizione del gruppo di sicurezza." msgid "Description of the vip." msgstr "Descrizione del vip." msgid "Description of the volume type." msgstr "Descrizione del tipo di volume." msgid "Description of the volume." msgstr "Descrizione del volume." msgid "Description of this Load Balancer." msgstr "Descrizione di questo bilanciatore del carico." msgid "Description of this listener." msgstr "Descrizione di questo listener." msgid "Description of this pool." msgstr "Descrizione di questo pool." msgid "Desired IPs for this port." msgstr "Gli IP richiesti per questa porta." msgid "Desired capacity of the cluster." msgstr "Capacità desiderata del cluster." msgid "Desired initial number of instances." msgstr "Numero iniziale desiderato di istanze." msgid "Desired initial number of resources in cluster." msgstr "Numero iniziale desiderato di risorse nel cluster." msgid "Desired initial number of resources." msgstr "Numero iniziale desiderato di risorse." msgid "Desired number of instances." msgstr "Numero desiderato di istanze." msgid "DesiredCapacity must be between MinSize and MaxSize" msgstr "DesiredCapacity deve essere compreso tra MinSize e MaxSize" msgid "Destination IP address or CIDR." msgstr "Indirizzo IP di destinazione o CIDR." msgid "Destination ip_address for this firewall rule." msgstr "Ip_address di destinazione per questa regola del firewall." msgid "Destination port number or a range." msgstr "Numero della porta di destinazione o un intervallo." msgid "Destination port range for this firewall rule." msgstr "Intervallo della porta di destinazione per questa regola del firewall." msgid "Detailed information about resource." msgstr "Informazioni dettagliate sulla risorsa." msgid "Device ID of this port." msgstr "ID dispositivo di questa porta." msgid "Device info for this network gateway." msgstr "Informazioni sul dispositivo per questo gateway di rete." msgid "" "Device type: at the moment we can make distinction only between disk and " "cdrom." msgstr "" "Tipo di dispositivo: al momento è possibile solo effettuare una distinzione " "tra il disco e il cdrom." msgid "" "Dict, which has expand properties for port. Used only if port property is " "not specified for creating port." msgstr "" "Dict, che ha proprietà di espansione per la porta. Utilizzato solo se la " "proprietà della porta non è specificata per la creazione della porta." 
msgid "Dictionary containing workflow tasks." msgstr "Dizionario contenente le attività del flusso di lavoro." msgid "Dictionary of node configurations." msgstr "Dizionario delle configurazioni del nodo." msgid "Dictionary of variables to publish to the workflow context." msgstr "" "Dizionario di variabili da pubblicare nel contesto del flusso di lavoro." msgid "Dictionary which contains input for workflow." msgstr "Dizionario contenente l'input per il flusso di lavoro." msgid "" "Dictionary-like section defining task policies that influence how Mistral " "Engine runs tasks. Must satisfy Mistral DSL v2." msgstr "" "Sezione simile a un dizionario che definisce le politiche delle attività che " "influenzano il modo in cui Mistral Engine esegue le attività. Deve " "soddisfare Mistral DSL v2." msgid "DisableRollback and OnFailure may not be used together" msgstr "DisableRollback e OnFailure non possono essere utilizzati insieme" msgid "Disk format of image." msgstr "Formato disco dell'immagine." msgid "Does not contain a valid AWS Access Key or certificate" msgstr "Non contiene un certificato o una chiave di accesso AWS valida" msgid "Domain email." msgstr "Email del dominio." msgid "Domain name." msgstr "Nome dominio." #, python-format msgid "Duplicate names %s" msgstr "Nomi duplicati %s" msgid "Duplicate refs are not allowed." msgstr "I riferimenti duplicati non sono consentiti." msgid "Duration for the time constraint." msgstr "Durata del vincolo di tempo." msgid "EIP address to associate with instance." msgstr "Indirizzo EIP da associare all'istanza." #, python-format msgid "Each %(object_name)s must contain a %(sub_section)s key." msgstr "Ciascun %(object_name)s deve contenere una chiave %(sub_section)s." msgid "Each Resource must contain a Type key." msgstr "Ciascuna risorsa deve contenere una chiave Type." msgid "Ebs is missing, this is required when specifying BlockDeviceMappings." msgstr "" "Manca Ebs, questo è necessario quando si specifica BlockDeviceMappings." msgid "" "Egress rules are only allowed when Neutron is used and the 'VpcId' property " "is set." msgstr "" "Le regole Egress sono consentite solo quando viene utilizzato Neutron ed è " "impostata la proprietà 'VpcId'." #, python-format msgid "Either %(net)s or %(port)s must be provided." msgstr "È necessario specificare %(net)s o %(port)s." msgid "Either 'EIP' or 'AllocationId' must be provided." msgstr "È necessario fornire 'EIP' o 'AllocationId'." msgid "Either 'InstanceId' or 'LaunchConfigurationName' must be provided." msgstr "È necessario fornire 'InstanceId' o 'LaunchConfigurationName'." #, python-format msgid "Either project or domain must be specified for role %s" msgstr "Il progetto o il dominio devono essere specificati per il ruolo %s" #, python-format msgid "Either volume_id or snapshot_id must be specified for device mapping %s" msgstr "" "È necessario specificare volume_id o snapshot_id per l'associazione del " "dispositivo %s" msgid "Email address of keystone user." msgstr "Indirizzo email dell'utente keystone." msgid "Enable the legacy OS::Heat::CWLiteAlarm resource." msgstr "Abilita la risorsa OS::Heat::CWLiteAlarm legacy." msgid "Enable the preview Stack Abandon feature." msgstr "Abilitare la funzione Stack Abandon." msgid "Enable the preview Stack Adopt feature." msgstr "Abilitare la funzione Stack Adopt dell'anteprima." msgid "" "Enables Source NAT on the router gateway. NOTE: The default policy setting " "in Neutron restricts usage of this property to administrative users only." 
msgstr "" "Abilita il Source NAT sul gateway del router. NOTA: l'impostazione della " "politica predefinita in Neutron limita l'utilizzo di questa proprietà solo " "agli utenti amministrativi." msgid "" "Enables engine with convergence architecture. All stacks with this option " "will be created using convergence engine." msgstr "" "Abilita il motore con l'architettura di convergenza, Tutti gli stack con " "questa opzione verranno creati utilizzando il motore di convergenza." msgid "Enables or disables read-only access mode of volume." msgstr "Abilita o disabilita la modalità di accesso in sola letura del volume." msgid "Encapsulation mode for the ipsec policy." msgstr "Modalità di incapsulamento per la politica ipsec." msgid "" "Encrypt template parameters that were marked as hidden and also all the " "resource properties before storing them in database." msgstr "" "Crittografare i parametri di template contrassegnati come nascosti e anche " "tutte le proprietà delle risorse prima di memorizzarle nel database." msgid "Encryption algorithm for the ike policy." msgstr "Algoritmo di codifica per la politica ike." msgid "Encryption algorithm for the ipsec policy." msgstr "Algoritmo di codifica per la politica ipsec." msgid "End address for the allocation pool." msgstr "Indirizzo finale per il pool di allocazione." #, python-format msgid "End resizing the group %(group)s" msgstr "Fine ridimensionamento del gruppo %(group)s" msgid "" "Endpoint/url which can be used for signalling handle when signal_transport " "is set to TOKEN_SIGNAL. None for all other signal transports." msgstr "" "L'endpoint/url che è possibile utilizzare per la segnalazione dell'handle " "quando signal_transport è impostato su TOKEN_SIGNAL. None per tutti gli " "altri trasporti di segnale." msgid "Endpoint/url which can be used for signalling handle." msgstr "" "Endpoint/url che è possibile utilizzare per la segnalazione dell'handle." msgid "Engine_Id" msgstr "Engine_Id" msgid "Error" msgstr "Errore" #, python-format msgid "Error authorizing action %s" msgstr "Errore di autorizzazione dell'azione %s" #, python-format msgid "Error creating ec2 keypair for user %s" msgstr "Errore durante la creazione della coppia di chiavi ec2 per l'utente %s" msgid "" "Error during applying access rules to share \"{0}\". The root cause of the " "problem is the following: {1}." msgstr "" "Errore durante l'applicazione delle regole di accesso per la condivisione di " "\"{0}\". La causa root del problema è la seguente: {1}." msgid "Error during creation of share \"{0}\"" msgstr "Errore durante la creazione della condivisione \"{0}\"" msgid "Error during deleting share \"{0}\"." msgstr "Errore durante l'eliminazione della condivisione \"{0}\"." #, python-format msgid "Error validating value '%(value)s'" msgstr "Errore di convalida del valore '%(value)s'" #, python-format msgid "Error validating value '%(value)s': %(message)s" msgstr "Errore di convalida del valore '%(value)s': %(message)s" msgid "Ethertype of the traffic." msgstr "Ethertype del traffico." msgid "Exclude state for cidr." msgstr "Stato di esclusione per cidr." #, python-format msgid "Expected 1 external network, found %d" msgstr "Prevista 1 rete esterna, trovate %d" msgid "Export locations of share." msgstr "Ubicazioni di esportazione della condivisione." msgid "Expression of the alarm to evaluate." msgstr "Espressione della segnalazione da valutare." msgid "External fixed IP address." msgstr "Indirizzo IP fisso esterno." msgid "External fixed IP addresses for the gateway." 
msgstr "Indirizzi IP fissi esterni per il gateway." msgid "External network gateway configuration for a router." msgstr "Configurazione del gateway della rete esterna per un router." msgid "" "Extra parameters to include in the \"floatingip\" object in the creation " "request. Parameters are often specific to installed hardware or extensions." msgstr "" "Parametri aggiuntivi da includere nell'oggetto \"floatingip\" nella " "richiesta di creazione. I parametri sono spesso specifici di hardware " "installato o di estensioni." msgid "Extra parameters to include in the creation request." msgstr "Parametri supplementari da includere nella richiesta di creazione." msgid "Extra parameters to include in the request." msgstr "Parametri supplementari da includere nella richiesta." msgid "" "Extra parameters to include in the request. Parameters are often specific to " "installed hardware or extensions." msgstr "" "Parametri supplementari da includere nella richiesta. I parametri sono " "spesso specifici di hardware o estensioni installati." msgid "Extra specs key-value pairs defined for share type." msgstr "" "Le coppie chiave-valore delle specifiche supplementari definite per il tipo " "di condivisione." #, python-format msgid "Failed to attach interface (%(port)s) to server (%(server)s)" msgstr "Impossibile collegare l'interfaccia (%(port)s) al server (%(server)s)" #, python-format msgid "Failed to attach volume %(vol)s to server %(srv)s - %(err)s" msgstr "" "Collegamento del volume %(vol)s al server %(srv)snon riuscito - %(err)s" #, python-format msgid "Failed to create Bay '%(name)s' - %(reason)s" msgstr "Impossibile creare l'alloggiamento '%(name)s' - %(reason)s" #, python-format msgid "Failed to detach interface (%(port)s) from server (%(server)s)" msgstr "" "Impossibile scollegare l'interfaccia (%(port)s) dal server (%(server)s)" #, python-format msgid "Failed to execute %(action)s for %(cluster)s: %(reason)s" msgstr "Impossibile eseguire %(action)s per %(cluster)s: %(reason)s" #, python-format msgid "Failed to extend volume %(vol)s - %(err)s" msgstr "Impossibile estendere il volume %(vol)s - %(err)s" #, python-format msgid "Failed to fetch template: %s" msgstr "Impossibile recuperare il template: %s" #, python-format msgid "Failed to find instance %s" msgstr "Impossibile trovare l'istanza %s" #, python-format msgid "Failed to find server %s" msgstr "Impossibile rilevare il server %s" #, python-format msgid "Failed to parse JSON data: %s" msgstr "Impossibile analizzare i dati JSON: %s" #, python-format msgid "Failed to restore volume %(vol)s from backup %(backup)s - %(err)s" msgstr "" "Impossibile ripristinare il volume %(vol)s dal backup %(backup)s - %(err)s" msgid "Failed to retrieve template" msgstr "Impossibile richiamare il template" #, python-format msgid "Failed to retrieve template data: %s" msgstr "Impossibile richiamare i dati del template: %s" #, python-format msgid "Failed to retrieve template: %s" msgstr "Impossibile recuperare il template: %s" #, python-format msgid "" "Failed to send message to stack (%(stack_name)s) on other engine " "(%(engine_id)s)" msgstr "" "Impossibile inviare messaggi allo stack (%(stack_name)s) su un altro motore " "(%(engine_id)s)" #, python-format msgid "Failed to stop stack (%(stack_name)s) on other engine (%(engine_id)s)" msgstr "" "Impossibile arrestare lo stack (%(stack_name)s) su un altro motore " "(%(engine_id)s)" #, python-format msgid "Failed to update Bay '%(name)s' - %(reason)s" msgstr "Impossibile aggiornare l'alloggiamento '%(name)s' - 
%(reason)s" msgid "Failed to update, can not found port info." msgstr "" "Impossibile aggiornare, non è possibile reperire le informazioni sulla porta." #, python-format msgid "" "Failed validating stack template using Heat endpoint at region \"%(region)s" "\" due to \"%(exc)s\"" msgstr "" "Impossibile convalidare il template dello stack utilizzando l'endpoint Heat " "nella regione \"%(region)s\" a causa di \"%(exc)s\"" msgid "Fake attribute !a." msgstr "Attributo falso !a." msgid "Fake attribute a." msgstr "Attributo falso a." msgid "Fake property !a." msgstr "Proprietà falsa !a." msgid "Fake property !c." msgstr "Proprietà falsa !c." msgid "Fake property a." msgstr "Proprietà falsa a." msgid "Fake property c." msgstr "Proprietà falsa c." msgid "Fake property ca." msgstr "Proprietà falsa ca." msgid "" "False to trigger actions when the threshold is reached AND the alarm's state " "has changed. By default, actions are called each time the threshold is " "reached." msgstr "" "False per attivare le azioni quando la soglia è raggiunta e lo stato della " "segnalazione è cambiato. Per impostazione predefinita, le azioni vengono " "chiamate ogni volta che la soglia viene raggiunta." #, python-format msgid "Field %(field)s of %(objname)s is not an instance of Field" msgstr "Il campo %(field)s di %(objname)s non è un'istanza o campo" msgid "" "Fixed IP address to specify for the port created on the requested network." msgstr "" "Indirizzo IP fisso da specificare per la porta creata sulla rete richiesta." msgid "Fixed IP addresses." msgstr "Indirizzi IP fissi." msgid "Fixed IPv4 address for this NIC." msgstr "Indirizzo IPv4 fisso per questo NIC." msgid "Flag indicating if traffic to or from instance is validated." msgstr "" "Indicatore che stabilisce se il traffico da o verso l'istanza è convalidato." msgid "" "Flag to enable/disable port security on the network. It provides the default " "value for the attribute of the ports created on this network." msgstr "" "Indicatore per abilitare/disabilitare la sicurezza della porta sulla rete. " "Fornisce il valore predefinito per l'attributo delle porte create su questa " "rete." msgid "" "Flag to enable/disable port security on the port. When disable this " "feature(set it to False), there will be no packages filtering, like security-" "group and address-pairs." msgstr "" "Indicatore per abilitare/disabilitare la sicurezza sulla porta. Quando si " "disabilita questa funzione (impostare su False), non ci sarà il filtraggio " "dei pacchetti, come gruppo di sicurezza e coppie di indirizzi." msgid "Flavor of the instance." msgstr "Flavor dell'istanza." msgid "Friendly name of the port." msgstr "Nome semplice della porta." msgid "Friendly name of the router." msgstr "Nome semplice del router." msgid "Friendly name of the subnet." msgstr "Nome semplice della sottorete." #, python-format msgid "Function \"%s\" must have arguments" msgstr "La funzione \"%s\" deve contenere gli argomenti" #, python-format msgid "Function \"%s\" usage: [\"\", \"\"]" msgstr "Utilizzo funzione \"%s\": [\"\", \"\"]" #, python-format msgid "Gateway IP address \"%(gateway)s\" is in invalid format." msgstr "L'indirizzo IP del gateway \"%(gateway)s\" non è in formato valido." msgid "Gateway network for the router." msgstr "La rete del gateway per il router." msgid "Generic HeatAPIException, please use specific subclasses!" msgstr "HeatAPIException generico, utilizzare le sottoclassi specifiche!" msgid "Glance image ID or name." msgstr "Nome o ID immagine glance." 
msgid "Governs permissions set in manila for the cluster ips." msgstr "Gestisce le autorizzazioni impostate in manila per gli IP cluster." msgid "Granularity to use for age argument, defaults to days." msgstr "" "Granularità da utilizare per l'argomento age, assume il valore predefinito " "days." msgid "Hadoop cluster name." msgstr "Nome cluster Hadoop." #, python-format msgid "Header X-Auth-Url \"%s\" not an allowed endpoint" msgstr "L'intestazione X-Auth-Url \"%s\" non è un endpoint consentito" msgid "Health probe timeout, in seconds." msgstr "Timeout del probe di stato, in secondi." msgid "" "Heat build revision. If you would prefer to manage your build revision " "separately, you can move this section to a different file and add it as " "another config option." msgstr "" "Revisione del build Heat. Se si desidera gestire separatamente la propria " "revisione del build, è possibile spostare questa sezione in un file " "differente e aggiungerla come altra opzione di configurazione." msgid "Host" msgstr "Host" msgid "Hostname" msgstr "Hostname" msgid "Hostname of the instance." msgstr "Nome host dell'istanza." msgid "How long to preserve deleted data." msgstr "Durata di conservazione dei dati eliminati." msgid "" "How the client will signal the wait condition. CFN_SIGNAL will allow an HTTP " "POST to a CFN keypair signed URL. TEMP_URL_SIGNAL will create a Swift " "TempURL to be signalled via HTTP PUT. HEAT_SIGNAL will allow calls to the " "Heat API resource-signal using the provided keystone credentials. " "ZAQAR_SIGNAL will create a dedicated zaqar queue to be signalled using the " "provided keystone credentials. TOKEN_SIGNAL will allow and HTTP POST to a " "Heat API endpoint with the provided keystone token. NO_SIGNAL will result in " "the resource going to a signalled state without waiting for any signal." msgstr "" "Il modo in cui il client segnala la condizione di attesa. CFN_SIGNAL " "consente un HTTP POST su un URL firmato da una coppia di chiavi CFN. " "TEMP_URL_SIGNAL crea uno Swift TempURL da segnalare mediante HTTP PUT. " "HEAT_SIGNAL consente le chiamate all'API Heat resource-signal utilizzando le " "credenziali keystone fornite. ZAQAR_SIGNAL crea una coda zaqar dedicata da " "segnalare utilizzando le credenziali keystone fornite. TOKEN_SIGNAL consente " "un HTTP POST su un endpoint API Heat con il token keystone fornito. " "NO_SIGNAL fa in modo che la risorsa passi in uno stato segnalato senza " "attendere segnali." msgid "" "How the server should receive the metadata required for software " "configuration. POLL_SERVER_CFN will allow calls to the cfn API action " "DescribeStackResource authenticated with the provided keypair. " "POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the " "provided keystone credentials. POLL_TEMP_URL will create and populate a " "Swift TempURL with metadata for polling. ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Il modo in cui il server deve ricevere i metadati richiesti per la " "configurazione del software. POLL_SERVER_CFN consente le chiamate all'azione " "DescribeStackResource dell'API cfn autenticata con la coppia di chiavi " "fornita. POLL_SERVER_HEAT consente le chiamate all'API Heat resource-show " "utilizzando le credenziali keystone fornite. POLL_TEMP_URL crea e popola uno " "Swift TempURL con metadati per il polling (richiede l'endpoint object-store " "che supporta TempURL). ZAQAR_MESSAGE crea una coda zaqar dedicata e invia i " "metadati per il polling." 
msgid "How the server should signal to heat with the deployment output values." msgstr "" "Come il server deve segnalare a heat i valori di output della distribuzione." msgid "" "How the server should signal to heat with the deployment output values. " "CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL. " "TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT. " "HEAT_SIGNAL will allow calls to the Heat API resource-signal using the " "provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar " "queue to be signaled using the provided keystone credentials. NO_SIGNAL will " "result in the resource going to the COMPLETE state without waiting for any " "signal." msgstr "" "Il modo in cui il server deve segnalare l'attivazione con i valori di output " "della distribuzione. CFN_SIGNAL consente un HTTP POST su un URL firmato da " "una coppia di chiavi CFN. TEMP_URL_SIGNAL crea uno Swift TempURL da " "segnalare mediante HTTP PUT. HEAT_SIGNAL consente le chiamate all'API Heat " "resource-signal utilizzando le credenziali keystone fornite. ZAQAR_SIGNAL " "crea una coda zaqar dedicata da segnalare utilizzando le credenziali " "keystone fornite. NO_SIGNAL fa in modo che la risorsa passi nello stato " "COMPLETE senza attendere segnali." msgid "" "How the user_data should be formatted for the server. For HEAT_CFNTOOLS, the " "user_data is bundled as part of the heat-cfntools cloud-init boot " "configuration data. For RAW the user_data is passed to Nova unmodified. For " "SOFTWARE_CONFIG user_data is bundled as part of the software config data, " "and metadata is derived from any associated SoftwareDeployment resources." msgstr "" "Il modo in cui user_data deve essere formattato per il server. Per " "HEAT_CFNTOOLS, user_data viene fornito come parte dei dati di configurazione " "di avvio Per RAW user_data viene passato a Nova senza modifiche. Per " "SOFTWARE_CONFIG user_data è fornito come parte dei dati della configurazione " "software e i metadati derivano dalle risorse SoftwareDeployment associato." msgid "Human readable name for the secret." msgstr "Nome leggibile dall'uomo del segreto." msgid "Human-readable name for the container." msgstr "Il nome leggibile dall'uomo per il contenitore." msgid "" "ID list of the L3 agent. User can specify multi-agents for highly available " "router. NOTE: The default policy setting in Neutron restricts usage of this " "property to administrative users only." msgstr "" "L'elenco ID dell'agent L3. L'utente può specificare più agent per un router " "altamente disponibile. NOTA: l'impostazione predefinita della politica in " "Neutron limita l'uso di questa proprietà solo per gli utenti amministrativi." msgid "ID of an existing port to associate with this server." msgstr "ID di una porta esistente da associare a questo server." msgid "" "ID of an existing port with at least one IP address to associate with this " "floating IP." msgstr "" "L'ID di una porta esistente con almeno un indirizzo IP da associare a questo " "IP mobile." msgid "ID of network to create a port on." msgstr "ID di rete per creare una porta su." msgid "ID of project for API authentication" msgstr "L'ID del progetto per l'autenticazione API" msgid "ID of queue to use for signaling output values" msgstr "L'ID della coda da utilizzare per la segnalazione dei valori di output" msgid "" "ID of resource to apply configuration to. Normally this should be a Nova " "server ID." msgstr "" "L'ID della risorsa a cui applicare la configurazione. 
In genere deve essere " "un ID server Nova." msgid "" "ID of server (VM, etc...) on host that is used for exporting network file-" "system." msgstr "" "ID del server (VM, etc...) sull'host utilizzato per l'esportazione del " "filesystem di rete." msgid "ID of signal to use for signaling output values" msgstr "" "L'ID del segnale da utilizzare per la segnalazione dei valori di output" msgid "" "ID of software configuration resource to execute when applying to the server." msgstr "" "L'ID della risorsa di configurazione software da eseguire quando si applica " "al server." msgid "ID of the Cluster Template used for Node Groups and configurations." msgstr "" "ID del template cluster utilizzato per i gruppi di nodi e le configurazioni." msgid "ID of the InternetGateway." msgstr "ID del gateway Internet." msgid "" "ID of the L3 agent. NOTE: The default policy setting in Neutron restricts " "usage of this property to administrative users only." msgstr "" "ID dell'agent L3. NOTA: l'impostazione della politica predefinita in Neutron " "limita l'uso di questa proprietà solo per gli utenti amministrativi." msgid "ID of the Node Group Template." msgstr "ID del template gruppo di nodi." msgid "ID of the VPNGateway to attach to the VPC." msgstr "ID del gateway VPN da collegare alla VPC." msgid "ID of the default image to use for the template." msgstr "ID dell'immagine predefinita da utilizzare per il template." msgid "ID of the default pool this listener is associated to." msgstr "ID del pool predefinito a cui è associato questo listener." msgid "ID of the floating IP to assign to the server." msgstr "L'ID dell'IP mobile da assegnare al server." msgid "ID of the floating IP to associate." msgstr "ID dell'IP mobile da associare." msgid "ID of the health monitor associated with this pool." msgstr "ID del monitor di integrità associato a questo pool." msgid "ID of the image to use for the template." msgstr "ID dell'immagine da utilizzare per il template." msgid "ID of the load balancer this listener is associated to." msgstr "ID del bilanciatore del carico a cui è associato il listener." msgid "ID of the network in which this IP is allocated." msgstr "ID della rete in cui questo IP è allocato." msgid "ID of the port associated with this IP." msgstr "ID della porta associata a questo IP." msgid "ID of the queue." msgstr "ID della coda." msgid "ID of the router used as gateway, set when associated with a port." msgstr "" "ID del router utilizzato come gateway, impostato quando associato a una " "porta." msgid "ID of the router." msgstr "ID del router." msgid "ID of the server being deployed to" msgstr "ID del server su cui viene eseguita la distribuzione" msgid "ID of the stack this deployment belongs to" msgstr "ID dello stack a cui appartiene questa distribuzione" msgid "ID of the tenant to which the RBAC policy will be enforced." msgstr "ID del tenant su cui verrà rafforzata la politica RBAC." msgid "ID of the tenant who owns the health monitor." msgstr "ID del tenant che possiede il monitor di integrità." msgid "ID or name of the QoS policy." msgstr "ID o nome della politica QoS." msgid "ID or name of the RBAC object." msgstr "ID o nome dell'oggetto RBAC." msgid "ID or name of the external network for the gateway." msgstr "ID o nome della rete esterna per il gateway." msgid "ID or name of the image to register." msgstr "ID o nome dell'immagine da registrare." msgid "ID or name of the load balancer with which listener is associated." msgstr "ID o nome del bilanciatore di carico a cui è associato il listener." 
msgid "ID or name of the load balancing pool." msgstr "ID o nome del pool di bilanciamento del carico." msgid "" "ID that AWS assigns to represent the allocation of the address for use with " "Amazon VPC. Returned only for VPC elastic IP addresses." msgstr "" "ID che AWS assegna per rappresentare l'assegnazione dell'indirizzo da " "utilizzare con Amazon VPC. Restituito solo per indirizzi IP elastici VPC." msgid "IP address and port of the pool." msgstr "Indirizzo IP e porta del pool." msgid "IP address desired in the subnet for this port." msgstr "L'indirizzo IP desiderato nella sottorete per questa porta." msgid "IP address for the VIP." msgstr "Indirizzo IP del VIP." msgid "IP address of the associated port, if specified." msgstr "Indirizzo IP della porta associata, se specificato." msgid "" "IP address of the floating IP. NOTE: The default policy setting in Neutron " "restricts usage of this property to administrative users only." msgstr "" "Indirizzo IP dell'IP mobile. NOTA: l'impostazione predefinita della politica " "in Neutron limita l'utilizzo di questa proprietà solo agli utenti " "amministrativi. " msgid "IP address of the pool member on the pool network." msgstr "Indirizzo IP del membro del pool nella rete di pool." msgid "IP address of the pool member." msgstr "Indirizzo IP del membro del pool." msgid "IP address of the vip." msgstr "Indirizzo IP del vip." msgid "IP address to allow through this port." msgstr "L'indirizzo IP da consentire tramite questa porta." msgid "IP address to use if the port has multiple addresses." msgstr "Indirizzo IP da utilizzare se la porta ha più indirizzi." msgid "" "IP or other address information about guest that allowed to access to Share." msgstr "" "L'IP o altre informazioni sull'indirizzo relativo al guest a cui è " "consentito accedere a Condivisione." msgid "IPv6 RA (Router Advertisement) mode." msgstr "Modalità IPv6 RA (Router Advertisement)." msgid "IPv6 address mode." msgstr "Modalità indirizzo IPv6." msgid "Id of a resource." msgstr "ID di una risorsa." msgid "Id of the manila share." msgstr "ID della condivisione manila." msgid "Id of the tenant owning the firewall policy." msgstr "ID del tenant che possiede la politica del firewall." msgid "Id of the tenant owning the firewall." msgstr "L'ID del tenant che possiede il firewall." msgid "Identifier of the source instance to replicate." msgstr "Identificativo dell'istanza di origine da replicare." #, python-format msgid "" "If \"%(size)s\" is provided, only one of \"%(image)s\", \"%(image_ref)s\", " "\"%(source_vol)s\", \"%(snapshot_id)s\" can be specified, but currently " "specified options: %(exclusive_options)s." msgstr "" "Se viene fornito \"%(size)s\", solo uno fra \"%(image)s\", \"%(image_ref)s" "\", \"%(source_vol)s\", \"%(snapshot_id)s\" può essere specificato, ma le " "opzioni attualmente specificate: %(exclusive_options)s." msgid "If False, closes the client socket connection explicitly." msgstr "" "Se False, la connessione socket del client viene chiusa in modo esplicito." msgid "" "If True, delete any objects in the container when the container is deleted. " "Otherwise, deleting a non-empty container will result in an error." msgstr "" "Se True, cancellare tutti gli oggetti nel contenitore quando il contenitore " "viene eliminato. In caso contrario, l'eliminazione di un contenitore non " "vuoto si tradurrà in un errore." msgid "If True, enable config drive on the server." msgstr "Se True, abilita l'unità di configurazione sul server." 
msgid "" "If configured, it allows to run action or workflow associated with a task " "multiple times on a provided list of items." msgstr "" "Se configurato, consente di eseguire l'azione o il flusso di lavoro " "associato ad un'attività più volte in un determinato elenco di elementi." msgid "If set, then the server's certificate will not be verified." msgstr "Se impostato, il certificato del server non verrà verificato." msgid "If specified, the backup to create the volume from." msgstr "Se specificato, il backup da cui creare il volume." msgid "If specified, the backup used as the source to create the volume." msgstr "" "Se specificato, il backup utilizzato come origine per creare il volume." msgid "If specified, the name or ID of the image to create the volume from." msgstr "Se specificato, il nome o l'ID dell'immagine da cui creare il volume." msgid "If specified, the snapshot to create the volume from." msgstr "Se specificato, l'istantanea da cui creare il volume." msgid "If specified, the type of volume to use, mapping to a specific backend." msgstr "" "Se specificato, il tipo di volume da utilizzare, con associazione a un " "backend specifico." msgid "If specified, the volume to use as source." msgstr "Se specificato, il volume da utilizzare come origine." msgid "" "If the region is hierarchically a child of another region, set this " "parameter to the ID of the parent region." msgstr "" "Se la regione è gerarchicamente un elemento child di un'altra regione, " "impostare questo parametro sull'ID della regione parent." msgid "" "If true, the resources in the chain will be created concurrently. If false " "or omitted, each resource will be treated as having a dependency on the " "previous resource in the list." msgstr "" "Se true, le risorse nella catena verranno create simultaneamente. Se false o " "omesso, ciascuna risorsa verrà considerata come se avesse una dipendenza " "dalla risorsa precedente nell'elenco." msgid "If without InstanceId, ImageId and InstanceType are required." msgstr "Senza InstanceId, ImageId e InstanceType sono obbligatori." #, python-format msgid "Illegal prefix bounds: %(key1)s=%(value1)s, %(key2)s=%(value2)s." msgstr "" "Limiti di prefisso non consentiti: %(key1)s=%(value1)s, %(key2)s=%(value2)s." #, python-format msgid "" "Image %(image)s requires %(imram)s minimum ram. Flavor %(flavor)s has only " "%(flram)s." msgstr "" "L'immagine %(image)s richiede una ram minima di %(imram)s. Il flavor " "%(flavor)s ha solo %(flram)s." #, python-format msgid "" "Image %(image)s requires %(imsz)s GB minimum disk space. Flavor %(flavor)s " "has only %(flsz)s GB." msgstr "" "L'immagine %(image)s richiede uno spazio su disco minimo di %(imsz)s. Il " "flavor %(flavor)s ha solo %(flsz)s." #, python-format msgid "Image status is required to be %(cstatus)s not %(wstatus)s." msgstr "" "È richiesto che lo stato dell'immagine sia %(cstatus)s non %(wstatus)s." msgid "Incompatible parameters were used together" msgstr "Sono stati utilizzati insieme parametri incompatibili" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be one of: %(allowed)s" msgstr "" "Gli argomenti non corretti per \"%(fn_name)s\" devono essere tra: %(allowed)s" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be: %(example)s" msgstr "Argomenti non corretti per \"%(fn_name)s\"; devono essere: %(example)s" msgid "Incorrect arguments: Items to merge must be maps." msgstr "Argomenti non corretti: gli elementi da unire devono essere mappe." 
#, python-format msgid "" "Incorrect index to \"%(fn_name)s\" should be between 0 and %(max_index)s" msgstr "" "Indice non corretto per \"%(fn_name)s\" deve essere compreso tra 0 e " "%(max_index)s" #, python-format msgid "Incorrect index to \"%(fn_name)s\" should be: %(example)s" msgstr "Indice non corretto per \"%(fn_name)s\" deve essere: %(example)s" #, python-format msgid "Index to \"%s\" must be a string" msgstr "Index di \"%s\" deve essere una stringa" #, python-format msgid "Index to \"%s\" must be an integer" msgstr "Index di \"%s\" deve essere un numero intero" msgid "" "Indicate whether the volume should be deleted when the instance is " "terminated." msgstr "" "Indica se il volume deve essere eliminato quando l'istanza è terminata." msgid "" "Indicate whether the volume should be deleted when the server is terminated." msgstr "" "Indicare se il volume deve essere eliminato quando il server è terminata." msgid "Indicates remote IP prefix to be associated with this metering rule." msgstr "" "Indica il prefisso IP remoto da associare a questa regola di misurazione." msgid "" "Indicates whether or not to create a distributed router. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only. This property can not be used in conjunction with the L3 agent " "ID." msgstr "" "Indica se creare o meno un router distribuito. NOTA: L'impostazione " "predefinita della politica in Neutron limita l'utilizzo di questa proprietà " "ai soli utenti amministrativi. Questa proprietà non può essere utilizzata " "con l'ID agent L3." msgid "" "Indicates whether or not to create a highly available router. NOTE: The " "default policy setting in Neutron restricts usage of this property to " "administrative users only. And now neutron do not support distributed and ha " "at the same time." msgstr "" "Indica se creare o meno un router altamente disponibile. NOTA: " "l'impostazione predefinita della politica in Neutron limita l'utilizzo di " "questa proprietà solo agli utenti amministrativi. Neutron non supporta ora " "distributed e ha contemporaneamente." msgid "Indicates whether this firewall rule is enabled or not." msgstr "Indica se questa regola del firewall è abilitata o meno." msgid "Information used to configure the bucket as a static website." msgstr "" "Le informazioni utilizzate per configurare il bucket come sito Web statico." msgid "Initiator state in lowercase for the ipsec site connection." msgstr "Stato iniziatore in minuscolo per la connessione al sito ipsec." #, python-format msgid "Input in signal data must be a map, find a %s" msgstr "" "L'input nei dati del segnale deve essere un'associazione, cercare un %s" msgid "Input values for the workflow." msgstr "Valori di input per il flusso di lavoro." msgid "Input values to apply to the software configuration on this server." msgstr "" "I valori di input da applicare alla configurazione software su questo server." msgid "Instance ID to associate with EIP specified by EIP property." msgstr "ID istanza da associare a EIP specificata dalla proprietà EIP." msgid "Instance ID to associate with EIP." msgstr "ID istanza da associare a EIP." msgid "Instance connection to CFN/CW API validate certs if SSL is used." msgstr "" "Connessione dell'istanza ai certificati di convalida dell'API CFN/CW se si " "utilizza SSL." msgid "Instance connection to CFN/CW API via https." msgstr "Connessione dell'istanza all'API CFN/CW tramite https." 
#, python-format msgid "Instance is not ACTIVE (was: %s)" msgstr "L'istanza non è ATTIVA (era: %s)" #, python-format msgid "" "Instance metadata must not contain greater than %s entries. This is the " "maximum number allowed by your service provider" msgstr "" "I metadati dell'istanza non devono contenere più di %s voci. Questo è il " "numero massimo consentito dal proprio provider del servizio" msgid "Interface type of keystone service endpoint." msgstr "Tipo di interfaccia dell'endpoint del servizio keystone. " msgid "Internet protocol version." msgstr "Versione del protocollo Internet." #, python-format msgid "Invalid %s, expected a mapping" msgstr "%s non valido; è prevista una mappatura" #, python-format msgid "Invalid CRON expression: %s" msgstr "Espressione CRON non valida: %s" #, python-format msgid "Invalid Parameter type \"%s\"" msgstr "Tipo di parametro non valido \"%s\"" #, python-format msgid "Invalid Property %s" msgstr "Proprietà %s non valida" msgid "Invalid Stack address" msgstr "Indirizzo stack non valido" msgid "Invalid Template URL" msgstr "URL template non valido" #, python-format msgid "Invalid URL scheme %s" msgstr "Schema URL non valido %s" #, python-format msgid "Invalid UUID version (%d)" msgstr "Versione UUID non valida (%d)" #, python-format msgid "Invalid action %s" msgstr "Azione non valida %s" #, python-format msgid "Invalid action %s specified" msgstr "Azione specificata %s non valida" #, python-format msgid "Invalid adopt data: %s" msgstr "Dati adottati non validi: %s" #, python-format msgid "Invalid cloud_backend setting in heat.conf detected - %s" msgstr "Impostazione cloud_backend non valida in heat.conf detected - %s" #, python-format msgid "Invalid codes in ignore_errors : %s" msgstr "Codici non validi in ignore_errors : %s" #, python-format msgid "Invalid content type %(content_type)s" msgstr "Tipo contenuto non valido %(content_type)s" #, python-format msgid "Invalid default %(default)s (%(exc)s)" msgstr "Impostazione predefinita non valida %(default)s (%(exc)s)" #, python-format msgid "Invalid deletion policy \"%s\"" msgstr "Politica di eliminazione non valida \"%s\"" #, python-format msgid "Invalid filter parameters %s" msgstr "Parametri di filtro non validi %s" #, python-format msgid "Invalid hook type \"%(hook)s\" for %(resource)s" msgstr "Tipo hook non valido \"%(hook)s\" per %(resource)s" #, python-format msgid "" "Invalid hook type \"%(value)s\" for resource breakpoint, acceptable hook " "types are: %(types)s" msgstr "" "Tipo hook non valido \"%(value)s\" per il punto di interruzione della " "risorsa, i tipi hook accettabili sono: %(types)s" #, python-format msgid "Invalid key %s" msgstr "Chiave non valida %s" #, python-format msgid "Invalid key '%(key)s' for %(entity)s" msgstr "Chiave '%(key)s' non valida per %(entity)s" #, python-format msgid "Invalid keys in resource mark unhealthy %s" msgstr "Chiavi non valide nel contrassegno di risorsa non funzionante %s" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "Combinazione di formati di disco e contenitore non valida. Quando si imposta " "un formato disco o contenitore in uno dei seguenti 'aki', 'ari' o 'ami', i " "formati contenitore e disco devono corrispondere." 
#, python-format msgid "Invalid parameter constraints for parameter %s, expected a list" msgstr "" "Vincoli del parametro per il parametro %s, non validi, previsto un elenco" #, python-format msgid "" "Invalid restricted_action type \"%(value)s\" for resource, acceptable " "restricted_action types are: %(types)s" msgstr "" "Tipo restricted_action \"%(value)s\" non valido per la risorsa, i tipi " "restricted_action accettabili sono : %(types)s" #, python-format msgid "" "Invalid stack name %s must contain only alphanumeric or \"_-.\" characters, " "must start with alpha and must be 255 characters or less." msgstr "" "Nome stack non valido %s deve contenere solo alfanumerici o caratteri \"_-." "\", deve iniziare con Alpha e deve contenere un numero massimo di 255 " "caratteri." #, python-format msgid "Invalid stack name %s, must be a string" msgstr "Il nome stack non valido %s deve essere una stringa" #, python-format msgid "Invalid status %s" msgstr "Stato non valido %s" #, python-format msgid "Invalid support status and should be one of %s" msgstr "Stato supporto non valido, deve essere uno di %s" #, python-format msgid "Invalid tag, \"%s\" contains a comma" msgstr "Tag non valido, \"%s\" contiene una virgola" #, python-format msgid "Invalid tag, \"%s\" is longer than 80 characters" msgstr "Tag non valido, \"%s\" è più lungo di 80 caratteri" #, python-format msgid "Invalid tag, \"%s\" is not a string" msgstr "Tag non valido, \"%s\" non è una stringa" #, python-format msgid "Invalid tags, not a list: %s" msgstr "Tag non validi, non un elenco: %s" #, python-format msgid "Invalid template type \"%(value)s\", valid types are: cfn, hot." msgstr "" "Tipo di template non valido \"%(value)s\", i tipi validi sono: cfn, hot." #, python-format msgid "Invalid timeout value %s" msgstr "Valore timeout non valido %s" #, python-format msgid "Invalid timezone: %s" msgstr "Fuso orario non valido: %s" #, python-format msgid "Invalid type (%s)" msgstr "Tipo non valido (%s)" msgid "Ip allocation pools and their ranges." msgstr "Pool di allocazione Ip ed i relativi intervalli." msgid "Ip of the subnet's gateway." msgstr "Ip del gateway della sottorete." msgid "Ip version for the subnet." msgstr "Versione Ip per la sottorete." msgid "Ip_version for this firewall rule." msgstr "Ip_version per questa regola del firewall." msgid "It defines an executor to which task action should be sent to." msgstr "" "Definisce un esecutore a cui deve essere inviata l'azione dell'attività." #, python-format msgid "Items to join must be string, map or list not %s" msgstr "Gli elementi da unire devono essere stringhe, mappe o elenchi, non %s" #, python-format msgid "Items to join must be string, map or list. %s failed json serialization" msgstr "" "Gli elementi da unire devono essere stringhe, mappe o elenchi. %s non è " "riuscito a eseguire la serializzazione json" #, python-format msgid "Items to join must be strings not %s" msgstr "Gli elementi da unire devono essere stringhe non %s" #, python-format msgid "" "JSON body size (%(len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "La dimensione del corpo JSON (%(len)s byte) supera la dimensione massima " "consentita (%(limit)s byte)." msgid "JSON data that was uploaded via the SwiftSignalHandle." msgstr "I dati JSON che sono stati caricati mediante SwiftSignalHandle." msgid "" "JSON serialized map that includes the endpoint, token and/or other " "attributes the client must use for signalling this handle. 
The contents of " "this map depend on the type of signal selected in the signal_transport " "property." msgstr "" "L'associazione JSON serializzata che include l'endpoint, il token e/o altri " "attributi che il client deve utilizzare per la segnalazione di questo " "handle. Il contenuto di questa associazione dipende dal tipo di segnale " "selezionato nella proprietà signal_transport." msgid "" "JSON string containing data associated with wait condition signals sent to " "the handle." msgstr "" "La stringa JSON contenente i dati associati ai segnali della condizione di " "attesa inviati all'handle." msgid "" "Key used to encrypt authentication info in the database. Length of this key " "must be 32 characters." msgstr "" "Chiave utilizzata per codificare le informazioni sull'autenticazione nel " "database. La lunghezza di questa chiave deve essere di 32 caratteri." msgid "Key/Value pairs to extend the capabilities of the flavor." msgstr "Coppie di chiavi/valori per estendere le capacità del flavor." msgid "Key/value pairs associated with the volume in raw dict form." msgstr "" "Coppie chiave/valore associate al volume nel formato dizionario non " "elaborato." msgid "Key/value pairs associated with the volume." msgstr "Coppie di chiavi/valori associate al volume." msgid "Key/value pairs to associate with the volume." msgstr "Coppie di chiavi/valori da associare al volume." msgid "Keypair added to instances to make them accessible for user." msgstr "" "Coppia di chiavi aggiunta alle istanze per renderle accessibili all'utente." msgid "Keypair secret key." msgstr "Chiave segreta Keypair." msgid "" "Keystone domain ID which contains heat template-defined users. If this " "option is set, stack_user_domain_name option will be ignored." msgstr "" "L'ID del dominio Keystone, che contiene gli utenti definiti con template di " "heat. Se questa opzione è impostata, l'opzione stack_user_domain_name verrà " "ignorata." msgid "" "Keystone domain name which contains heat template-defined users. If " "`stack_user_domain_id` option is set, this option is ignored." msgstr "" "Nome del dominio Keystone, che contiene gli utenti definiti con template di " "heat. Se l'opzione `stack_user_domain_id` è impostata, questa opzione verrà " "ignorata." msgid "Keystone domain." msgstr "Dominio keystone." #, python-format msgid "" "Keystone has more than one service with same name %(service)s. Please use " "service id instead of name" msgstr "" "Keystone ha più di un servizio con lo stesso nome %(service)s. Utilizzare " "l'id servizio anziché il nome" msgid "Keystone password for stack_domain_admin user." msgstr "Password Keystone per l'utente stack_domain_admin." msgid "Keystone project." msgstr "Progetto keystone." msgid "Keystone role for heat template-defined users." msgstr "Ruolo Keystone per gli utenti definiti con template di heat." msgid "Keystone role." msgstr "Ruolo keystone." msgid "Keystone user group." msgstr "Gruppo di utenti keystone." msgid "Keystone user groups." msgstr "Gruppi di utenti keystone." msgid "Keystone user is enabled or disabled." msgstr "L'utente keystone è abilitato o disabilitato." msgid "" "Keystone username, a user with roles sufficient to manage users and projects " "in the stack_user_domain." msgstr "" "Nome utente di Keystone, un utente con ruoli sufficienti a gestire gli " "utenti ed i progetti in stack_user_domain." msgid "L2 segmentation strategy on the external side of the network gateway." msgstr "Strategia di segmentazione L2 sulla parte esterna del gateway di rete." 
msgid "LBaaS provider to implement this load balancer instance." msgstr "" "Il provider LBaaS per implementare questa istanza del bilanciatore del " "carico." msgid "Length of OS_PASSWORD after encryption exceeds Heat limit (255 chars)" msgstr "" "La lunghezza di OS_PASSWORD dopo la codifica supera il limite di Heat (255 " "caratteri)" msgid "Length of the string to generate." msgstr "Lunghezza della stringa da generare." msgid "" "Length property cannot be smaller than combined character class and " "character sequence minimums" msgstr "" "La proprietà lunghezza non può essere inferiore alla classe di caratteri " "combinata con la sequenza minima di caratteri" msgid "Level of access that need to be provided for guest." msgstr "Livello di accesso che deve essere fornito per guest." msgid "" "Lifecycle actions to which the configuration applies. The string values " "provided for this property can include the standard resource actions CREATE, " "DELETE, UPDATE, SUSPEND and RESUME supported by Heat." msgstr "" "Azioni del ciclo di vita a cui si applica la configurazione. I valori della " "stringa forniti per questa proprietà possono includere le azioni standard " "della risorsa CREA, ELIMINA, AGGIORNA, SOSPENDI e RIPRISTINA supportate da " "Heat." msgid "List of LoadBalancer resources." msgstr "Elenco di risorse LoadBalancer." msgid "List of Security Groups assigned on current LB." msgstr "List of Security Groups assigned on current LB." msgid "List of TLS container references for SNI." msgstr "Elenco di riferimenti al contenitore TLS per SNI." msgid "List of database instances." msgstr "Elenco di istanze del database." msgid "List of databases to be created on DB instance creation." msgstr "Elenco di database da creare durante la creazione dell'istanza DB." msgid "List of directories to search for plug-ins." msgstr "Elenco di directory in cui effettuare la ricerca dei plug-in." msgid "List of dns nameservers." msgstr "Elenco di nameserver dns." msgid "List of firewall rules in this firewall policy." msgstr "Elenco delle regole del firewall in questa politica del firewall." msgid "List of health monitors associated with the pool." msgstr "Elenco di monitor di integrità associati al pool." msgid "List of hosts to join aggregate." msgstr "Elenco di host per unire l'aggregato." msgid "List of manila shares to be mounted." msgstr "Elenco di condivisioni manila da montare." msgid "List of network interfaces to create on instance." msgstr "Elenco delle interfacce di rete per creare un'istanza." msgid "List of processes to enable anti-affinity for." msgstr "Elenco dei processi da abilitare per l'anti-affinità." msgid "List of processes to run on every node." msgstr "Elenco dei processi da eseguire su ogni nodo." msgid "List of role assignments." msgstr "Elenco di assegnazioni di ruoli." msgid "List of security group IDs associated with this interface." msgstr "" "Elenco degli ID del gruppo di sicurezza associati a questa interfaccia." msgid "List of security group egress rules." msgstr "Elenco di regole egress del gruppo di sicurezza." msgid "List of security group ingress rules." msgstr "Elenco di regole ingress del gruppo di sicurezza." msgid "" "List of security group names or IDs to assign to this Node Group template." msgstr "" "Elenco di ID o nomi del gruppo di sicurezza da assegnare a questo template " "Gruppo nodi." msgid "" "List of security group names or IDs. Cannot be used if neutron ports are " "associated with this server; assign security groups to the ports instead." 
msgstr "" "Elenco dei nomi o ID del gruppo di sicurezza. Non può essere utilizzato se " "le porte neutron sono associate a questo server; assegnare invece i gruppi " "di sicurezza alle porte." msgid "List of security group rules." msgstr "Elenco di regole del gruppo di sicurezza." msgid "List of subnet prefixes to assign." msgstr "Elenco di prefissi di sottorete da assegnare." msgid "List of tags associated with this interface." msgstr "Elenco di tag associati a questa interfaccia." msgid "List of tags to attach to the instance." msgstr "Elenco di tag da allegare all'istanza." msgid "List of tags to attach to this resource." msgstr "Elenco di tag da allegare a questa risorsa." msgid "List of tags to be attached to this resource." msgstr "Elenco di tag da allegare a questa risorsa." msgid "" "List of tasks which should be executed before this task. Used only in " "reverse workflows." msgstr "" "Elenco di attività che devono essere eseguite prima di questa attività. " "Utilizzato solo nei flussi di lavoro inversi." msgid "" "List of tasks which will run after the task has completed regardless of " "whether it is successful or not." msgstr "" "Elenco di attività che verranno eseguite dopo che l'attività è stata " "completata correttamente o meno." msgid "List of tasks which will run after the task has completed successfully." msgstr "" "Elenco di attività che verranno eseguite dopo che l'attività è stata " "completata correttamente." msgid "" "List of tasks which will run after the task has completed with an error." msgstr "" "Elenco di attività che verranno eseguite dopo che l'attività è stata " "completata con un errore." msgid "List of users to be created on DB instance creation." msgstr "Elenco di utenti da creare durante la creazione dell'istanza DB." msgid "" "List of workflows' executions, each of them is a dictionary with information " "about execution. Each dictionary returns values for next keys: id, " "workflow_name, created_at, updated_at, state for current execution state, " "input, output." msgstr "" "Elenco di esecuzioni del flusso di lavoro, ciascuna delle quali è un " "dizionario con informazioni sull'esecuzione. Ciascun dizionario restituisce " "valori per le chiavi successive: id, workflow_name, created_at, updated_at, " "state for current execution state, input, output." msgid "Listener associated with this pool." msgstr "Listener associato a questo pool." msgid "" "Local path on each cluster node on which to mount the share. Defaults to '/" "mnt/{share_id}'." msgstr "" "Percorso locale su ciascun nodo cluster su cui montare la condivisione. " "Impostato per valore predefinito su '/mnt/{share_id}'." msgid "Location of the SSL certificate file to use for SSL mode." msgstr "Ubicazione del file di certificato SSL da utilizzare per il modo SSL." msgid "Location of the SSL key file to use for enabling SSL mode." msgstr "" "Ubicazione del file di chiavi SSL da utilizzare per abilitare il modo SSL." msgid "MAC address of the port." msgstr "Indirizzo MAC della porta." msgid "MAC address to allow through this port." msgstr "L'indirizzo MAC da consentire tramite questa porta." msgid "Map between role with either project or domain." msgstr "Associazione tra ruolo con progetto o dominio." msgid "" "Map containing options specific to the configuration management tool used by " "this resource." msgstr "" "Mappa contenente le opzioni specifiche per lo strumento di gestione della " "configurazione da questa risorsa." 
msgid "" "Map representing the cloud-config data structure which will be formatted as " "YAML." msgstr "" "Mappa che rappresenta la struttura di dati cloud-config che verrà formattata " "come YAML." msgid "" "Map representing the configuration data structure which will be serialized " "to JSON format." msgstr "" "Mappa che rappresenta la struttura di dati di configurazione che verrà " "serializzata in formato JSON." msgid "Max bandwidth in kbps." msgstr "Larghezza di banda massima in kbps." msgid "Max burst bandwidth in kbps." msgstr "Larghezza di banda burst massima in kbps." msgid "Max size of the cluster." msgstr "Dimensione massima del cluster." #, python-format msgid "Maximum %s is 1 hour." msgstr "Il valore massimo di %s è 1 ora." msgid "Maximum depth allowed when using nested stacks." msgstr "Profondità massima consentita quando si utilizzano stack nidificati." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs)." msgstr "" "Dimensione massima della riga di intestazioni del messaggio che deve essere " "accettata. max_header_line dovrebbe essere incrementato quando si utilizzano " "token grandi (in genere quelli generati dall'API Keystone v3 con cataloghi " "del servizio di grandi dimensioni)." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs.)" msgstr "" "Dimensione massima della riga di intestazioni del messaggio che deve essere " "accettata. max_header_line dovrebbe essere incrementato quando si utilizzano " "token grandi (in genere quelli generati dall'API Keystone v3 con cataloghi " "del servizio di grandi dimensioni)." msgid "Maximum number of instances in the group." msgstr "Numero massimo di istanze nel gruppo." msgid "Maximum number of resources in the cluster. -1 means unlimited." msgstr "Numero massimo di risorse nel cluster. -1 indica illimitato." msgid "Maximum number of resources in the group." msgstr "Numero massimo di risorse nel gruppo." msgid "Maximum number of stacks any one tenant may have active at one time." msgstr "" "Numero massimo di stack un qualsiasi tenant può avere attivo " "contemporaneamente." msgid "Maximum prefix size that can be allocated from the subnet pool." msgstr "" "La dimensione massima del prefisso che può essere allocata dal pool di " "sottorete." msgid "" "Maximum raw byte size of JSON request body. Should be larger than " "max_template_size." msgstr "" "Dimensione massima dei byte non elaborati del corpo della richiesta JSON. " "Deve essere maggiore di max_template_size." msgid "Maximum raw byte size of any template." msgstr "Dimensione massima dei byte non elaborati in qualsiasi template." msgid "Maximum resources allowed per top-level stack. -1 stands for unlimited." msgstr "" "Il numero massimo di risorse consentite per stack di livello superiore. -1 " "sta per non limitato." msgid "Maximum resources per stack exceeded." msgstr "Il numero massimo di risorse per stack è stato superato." msgid "" "Maximum transmission unit size (in bytes) for the ipsec site connection." msgstr "" "Dimensione massima dell'unità di trasmissione (in byte) per la connessione " "al sito ipsec." 
msgid "Member list items must be strings" msgstr "Gli elementi dell'elenco dei membri devono essere stringhe" msgid "Member list must be a list" msgstr "L'elenco dei membri deve essere un elenco" msgid "Members associated with this pool." msgstr "Membri associati a questo pool." msgid "Memory in MB for the flavor." msgstr "Memoria in MB per il flavor." #, python-format msgid "Message: %(message)s, Code: %(code)s" msgstr "Messaggio: %(message)s, Codice: %(code)s" msgid "Metadata format invalid" msgstr "Formato Metadata non valido" msgid "Metadata key-values defined for cluster." msgstr "Valori chiave dei metadati definiti per il cluster." msgid "Metadata key-values defined for node." msgstr "Valori chiave dei metadati definiti per il nodo." msgid "Metadata key-values defined for profile." msgstr "Valori chiave dei metadati definiti per il profilo." msgid "Metadata key-values defined for share." msgstr "Valori chiave dei metadati definiti per la condivisione." msgid "Meter name watched by the alarm." msgstr "Nome contatore osservato dalla segnalazione." msgid "" "Meter should match this resource metadata (key=value) additionally to the " "meter_name." msgstr "" "Il contatore deve corrispondere al valore metadati risorsa (key=value) in " "aggiunta a meter_name." msgid "Meter statistic to evaluate." msgstr "Statistiche contatore da valutare." msgid "Method of implementation of session persistence feature." msgstr "" "Metodo di implementazione della funzione di persistenza della sessione." msgid "Metric name watched by the alarm." msgstr "Nome della metrica osservata dalla segnalazione." msgid "Min size of the cluster." msgstr "Dimensione minima del cluster." msgid "MinSize can not be greater than MaxSize" msgstr "MinSize non può essere maggiore di MaxSize" msgid "Minimum number of instances in the group." msgstr "Numero minimo di istanze nel gruppo." msgid "Minimum number of resources in the cluster." msgstr "Numero minimo di risorse nel cluster." msgid "Minimum number of resources in the group." msgstr "Numero minimo di risorse nel gruppo." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "PercentChangeInCapacity for the AdjustmentType property." msgstr "" "Il numero minimo di risorse che vengono aggiunte o rimosse quando il gruppo " "AutoScaling viene ridimensionato verso l'alto o verso il basso. Può essere " "utilizzato solo quando si specifica PercentChangeInCapacity per la proprietà " "AdjustmentType." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "percent_change_in_capacity for the adjustment_type property." msgstr "" "Il numero minimo di risorse che vengono aggiunte o rimosse quando il gruppo " "AutoScaling viene ridimensionato verso l'alto o verso il basso. Può essere " "utilizzato solo quando si specifica percent_change_in_capacity per la " "proprietà adjustment_type." 
#, python-format msgid "Missing mandatory (%s) key from mark unhealthy request" msgstr "" "Chiave obbligatoria (%s) mancante nel contrassegno di richiesta non " "funzionante" #, python-format msgid "Missing parameter type for parameter: %s" msgstr "Tipo parametro mancante per il parametro: %s" #, python-format msgid "Missing required credential: %(required)s" msgstr "Credenziale richiesta mancante: %(required)s" msgid "Mistral resource validation error" msgstr "Errore di convalida della risorsa Mistral" msgid "Monasca notification." msgstr "Notifica Monasca." msgid "Multiple actions specified" msgstr "Più azioni specificate" #, python-format msgid "Multiple physical resources were found with name (%(name)s)." msgstr "Sono state trovate più risorse fisiche con il name (%(name)s)." #, python-format msgid "Multiple routers found with name %s" msgstr "Trovati più router con il nome %s" msgid "Must specify 'InstanceId' if you specify 'EIP'." msgstr "È necessario specificare 'InstanceId' se si specifica 'EIP'." msgid "Name for the Sahara Cluster Template." msgstr "Nome del template cluster Sahara." msgid "Name for the Sahara Node Group Template." msgstr "Nome per il template del gruppo di nodi Sahara." msgid "Name for the aggregate." msgstr "Nome dell'aggregato." msgid "Name for the availability zone." msgstr "Nome della zona di disponibilità." msgid "" "Name for the container. If not specified, a unique name will be generated." msgstr "" "Nome per il contenitore. Se non specificato, verrà generato un nome univoco." msgid "Name for the firewall policy." msgstr "Nome della politica del firewall." msgid "Name for the firewall rule." msgstr "Nome della regola del firewall." msgid "Name for the firewall." msgstr "Nome per il firewall." msgid "Name for the ike policy." msgstr "Nome per la politica ike." msgid "" "Name for the image. The name of an image is not unique to a Image Service " "node." msgstr "" "Il nome per l'immagine. Il nome dell'immagine non è univoco per un nodo del " "servizio immagini." msgid "Name for the ipsec policy." msgstr "Nome per la politica ipsec." msgid "Name for the ipsec site connection." msgstr "Nome per la connessione al sito ipsec." msgid "Name for the time constraint." msgstr "Nome per il vincolo di tempo." msgid "Name for the vpn service." msgstr "Nome per il servizio vpn." msgid "" "Name of attribute to compare. Names of the form metadata.user_metadata.X or " "metadata.metering.X are equivalent to what you can address through " "matching_metadata; the former for Nova meters, the latter for all others. To " "see the attributes of your Samples, use `ceilometer --debug sample-list`." msgstr "" "Il nome dell'attributo da confrontare. I nomi nel formato metadata." "user_metadata.X o metadata.metering.X sono equivalenti a ciò che si può " "gestire con matching_metadata; i primi per i contatori Nova, i secondi per " "tutti gli altri. Per visualizzare gli attributi del propri esempi, di " "utilizzare `ceilometer --debug sample-list`." msgid "Name of key to use for substituting inputs during deployment." msgstr "" "Nome della chiave da utilizzare per la sostituzione degli input durante la " "distribuzione." msgid "Name of keypair to inject into the server." msgstr "Il nome della coppia di chiavi da inserire nel server." msgid "Name of keystone endpoint." msgstr "Nome dell'endpoint keystone." msgid "Name of keystone group." msgstr "Nome del gruppo keystone." msgid "Name of keystone project." msgstr "Nome del progetto keystone." msgid "Name of keystone role." 
msgstr "Nome del ruolo keystone." msgid "Name of keystone service." msgstr "Nome del servizio keystone." msgid "Name of keystone user." msgstr "Nome dell'utente keystone." msgid "Name of registered datastore type." msgstr "Nome del tipo di archivio dati registrato." msgid "Name of the DB instance to create." msgstr "Il nome dell'istanza DB da creare." msgid "Name of the Node group." msgstr "Nome del gruppo di nodi." msgid "" "Name of the action associated with the task. Either action or workflow may " "be defined in the task." msgstr "" "Nome dell'azione associata all'all'attività. L'azione o il flusso di lavoro " "possono essere definiti nell'attività." msgid "Name of the administrative user to use on the server." msgstr "Nome dell'utente amministrativo da utilizzare sul server." msgid "Name of the alarm. By default, physical resource name is used." msgstr "" "Nome della segnalazione. Per impostazione predefinita, viene utilizzato il " "nome della risorsa fisica." msgid "Name of the availability zone for DB instance." msgstr "Nome della zona di disponibilità per l'istanza DB." msgid "Name of the availability zone for server placement." msgstr "Il nome della zona di disponibilità per il posizionamento del server." msgid "Name of the cluster to create." msgstr "Nome del cluster da creare." msgid "Name of the cluster. By default, physical resource name is used." msgstr "" "Nome del cluster. Per impostazione predefinita, viene utilizzato il nome " "della risorsa fisica." msgid "Name of the cookie, required if type is APP_COOKIE." msgstr "Nome del cookie, richiesto se il tipo è APP_COOKIE." msgid "Name of the cron trigger." msgstr "Nome del trigger cron." msgid "Name of the current action being deployed" msgstr "Il nome dell'azione corrente distribuita" msgid "Name of the data source." msgstr "Nome dell'origine dati." msgid "" "Name of the derived config associated with this deployment. This is used to " "apply a sort order to the list of configurations currently deployed to a " "server." msgstr "" "Nome della configurazione derivata associata a questa distribuzione. Viene " "utilizzato per applicare un ordinamento all'elenco delle configurazioni al " "momento distribuite su un server." msgid "" "Name of the engine node. This can be an opaque identifier. It is not " "necessarily a hostname, FQDN, or IP address." msgstr "" "Nome del nodo motore. Questo può essere un identificativo opaco. Non è " "necessario un nome host, FQDN o l'indirizzo IP." msgid "Name of the input." msgstr "Nome dell'input." msgid "Name of the job binary." msgstr "Nome del job binary." msgid "Name of the metering label." msgstr "Nome dell'etichetta di misurazione." msgid "Name of the network owning the port." msgstr "Nome della rete che possiede la porta." msgid "" "Name of the network owning the port. The value is typically network:" "floatingip or network:router_interface or network:dhcp." msgstr "" "Nome della rete proprietaria della porta. In genere il valore è network:" "floatingip o network:router_interface o network:dhcp." msgid "Name of the notification. By default, physical resource name is used." msgstr "" "Nome della notifica. Per impostazione predefinita, viene utilizzato il nome " "della risorsa fisica." msgid "Name of the output." msgstr "Nome dell'output." msgid "Name of the pool." msgstr "Nome del pool." msgid "Name of the queue instance to create." msgstr "Nome dell'istanza della coda da creare." msgid "" "Name of the registered datastore version. It must exist for provided " "datastore type. 
Defaults to using single active version. If several active " "versions exist for provided datastore type, explicit value for this " "parameter must be specified." msgstr "" "Nome della versione dell'archivio dati registrato. Deve esistere per il tipo " "di archivio dati fornito. L'impostazione predefinita è utilizzare una sola " "versione attiva. Se esistono diverse versioni attive per il tipo di archivio " "dati fornito, è necessario specificare un valore esplicito per questo " "parametro." msgid "Name of the secret." msgstr "Nome del segreto." msgid "Name of the senlin node. By default, physical resource name is used." msgstr "" "Nome del nodo senlin. Per impostazione predefinita, viene utilizzato il nome " "della risorsa fisica." msgid "Name of the senlin policy. By default, physical resource name is used." msgstr "" "Nome della politica senlin. Per impostazione predefinita, viene utilizzato " "il nome della risorsa fisica." msgid "Name of the senlin profile. By default, physical resource name is used." msgstr "" "Nome del profilo senlin. Per impostazione predefinita, viene utilizzato il " "nome della risorsa fisica." msgid "" "Name of the senlin receiver. By default, physical resource name is used." msgstr "" "Nome del ricevitore senlin. Per impostazione predefinita, viene utilizzato " "il nome della risorsa fisica." msgid "Name of the server." msgstr "Nome del server." msgid "Name of the share network." msgstr "Nome della rete di condivisione." msgid "Name of the share type." msgstr "Nome del tipo di condivisione." msgid "Name of the stack." msgstr "Nome dello stack." msgid "Name of the subnet pool." msgstr "Nome del pool di sottorete." msgid "Name of the vip." msgstr "Nome del vip." msgid "Name of the volume type." msgstr "Nome del tipo di volume." msgid "Name of the volume." msgstr "Nome del volume." msgid "" "Name of the workflow associated with the task. Can be defined by intrinsic " "function get_resource or by name of the referenced workflow, i.e. " "{ workflow: wf_name } or { workflow: { get_resource: wf_name }}. Either " "action or workflow may be defined in the task." msgstr "" "Nome del flusso di lavoro associato all'attività. Può essere definito " "mediante la funzione intrinseca get_resource o mediante il nome del flusso " "di lavoro di riferimento, ad esempio, { workflow: wf_name } o { workflow: " "{ get_resource: wf_name }}. L'azione o il flusso di lavoro possono essere " "definiti nell'attività." msgid "Name of this Load Balancer." msgstr "Nome di questo bilanciatore del carico." msgid "Name of this deployment resource in the stack" msgstr "Nome di questa risorsa di distribuzione nello stack" msgid "Name of this listener." msgstr "Nome di questo listener." msgid "Name of this pool." msgstr "Nome di questo pool." msgid "Name or ID Nova flavor for the nodes." msgstr "Nome o ID del flavor Nova per i nodi." msgid "Name or ID of network to create a port on." msgstr "Nome o ID della rete su cui creare una porta." msgid "Name or ID of senlin profile to create this node." msgstr "Nome o ID del profilo senlin per creare questo nodo." msgid "" "Name or ID of shared file system snapshot that will be restored and created " "as a new share." msgstr "" "Nome o ID dell'istantanea del file system condiviso che verrà ripristinata e " "creata come una nuova condivisione." msgid "" "Name or ID of shared filesystem type. Types defines some share filesystem " "profiles that will be used for share creation." msgstr "" "Nome o ID del tipo di filesystem condiviso.
I tipi definiscono alcuni " "profili del filesystem di condivisione che verranno utilizzati per la " "creazione della condivisione." msgid "Name or ID of shared network defined for shared filesystem." msgstr "Nome o ID della rete condivisa definita per il filesystem condiviso." msgid "Name or ID of target cluster." msgstr "Nome o ID del cluster di destinazione." msgid "Name or ID of the load balancing pool." msgstr "Nome o ID del pool di bilanciamento del carico." msgid "Name or Id of keystone region." msgstr "Nome o ID della regione keystone." msgid "Name or Id of keystone service." msgstr "Nome o ID del servizio keystone." #, python-format msgid "" "Name or UUID of Neutron port to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Nome o UUID della porta Neutron a cui collegare questo NIC. È necessario " "specificare %(port)s o %(net)s." msgid "Name or UUID of network." msgstr "Nome o UUID della rete." msgid "" "Name or UUID of the Neutron floating IP network or name of the Nova floating " "ip pool to use. Should not be provided when used with Nova-network that auto-" "assign floating IPs." msgstr "" "Nome o UUID della rete dell'IP mobile Neutron oppure il nome del pool di IP " "mobili di Nova da utilizzare. Non deve essere fornito quando lo si " "utilizza in una rete Nova che assegna automaticamente IP mobili." msgid "Name or UUID of the image used to boot Hadoop nodes." msgstr "Nome o UUID dell'immagine utilizzata per avviare i nodi Hadoop." #, python-format msgid "" "Name or UUID of the network to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Nome o UUID della rete a cui collegare questo NIC. È necessario specificare " "%(port)s o %(net)s." msgid "Name or id of keystone domain." msgstr "Nome o ID del dominio keystone." msgid "Name or id of keystone group." msgstr "Nome o ID del gruppo keystone." msgid "Name or id of keystone user." msgstr "Nome o ID dell'utente keystone." msgid "Name or id of volume type (OS::Cinder::VolumeType)." msgstr "Il nome o l'ID del tipo di volume (OS::Cinder::VolumeType)." msgid "Names of databases that those users can access on instance creation." msgstr "" "I nomi dei database a cui quegli utenti possono accedere durante la " "creazione dell'istanza." msgid "" "Namespace to group this software config by when delivered to a server. This " "may imply what configuration tool is going to perform the configuration." msgstr "" "Spazio dei nomi mediante cui raggruppare questa configurazione software " "quando si distribuisce a un server. Ciò potrebbe determinare lo strumento di " "configurazione che eseguirà la configurazione." msgid "Need more arguments" msgstr "Sono necessari più argomenti" msgid "Negotiation mode for the ike policy." msgstr "Modalità di negoziazione per la politica ike." #, python-format msgid "Neither image nor bootable volume is specified for instance %s" msgstr "Né l'immagine né il volume avviabile sono specificati per l'istanza %s" msgid "Network in CIDR notation." msgstr "Rete in notazione CIDR." msgid "Network interface ID to associate with EIP." msgstr "ID dell'interfaccia di rete da associare a EIP." msgid "Network interfaces to associate with instance." msgstr "Interfacce di rete da associare con l'istanza." #, python-format msgid "" "Network this port belongs to. If you plan to use current port to assign " "Floating IP, you should specify %(fixed_ips)s with %(subnet)s. Note if this " "changes to a different network update, the port will be replaced."
msgstr "" "La rete a cui appartiene questa porta. Se si prevede di utilizzare la porta " "corrente per assegnare l'IP mobile, si dovrebbe specificare %(fixed_ips)s " "con %(subnet)s. Se ciò determina un aggiornamento di rete diverso, la porta " "verrà sostituita." msgid "Network to allocate floating IP from." msgstr "Rete da cui assegnare l'IP mobile." msgid "Neutron network id." msgstr "ID della rete Neutron." msgid "Neutron subnet id." msgstr "ID della sottorete Neutron." msgid "Nexthop IP address." msgstr "Indirizzo IP Nexthop." #, python-format msgid "No %s specified" msgstr "Nessun %s specificato" msgid "No Template provided." msgstr "Nessun modello fornito." msgid "No action specified" msgstr "Nessuna azione specificata" msgid "No constraint expressed" msgstr "Non è stato espresso alcun vincolo" #, python-format msgid "" "No content found in the \"files\" section for %(fn_name)s path: %(file_key)s" msgstr "" "Nessun contenuto rilevato nella sezione \"files\" per il percorso " "%(fn_name)s: %(file_key)s" #, python-format msgid "No event %s found" msgstr "Nessun evento %s trovato" #, python-format msgid "No events found for resource %s" msgstr "Nessun evento trovato per la risorsa %s" msgid "No resource data found" msgstr "Nessuna dato di risorsa trovato" #, python-format msgid "No stack exists with id \"%s\"" msgstr "Nessuno stack esistente con id \"%s\"" msgid "No stack name specified" msgstr "Nessun nome stack specificato" msgid "No template specified" msgstr "Nessun template specificato" msgid "No volume service available." msgstr "Nessun servizio di volume disponibile." msgid "Node groups." msgstr "Gruppi di nodi." msgid "Nodes list in the cluster." msgstr "Elenco di nodi nel cluster." msgid "Non HA routers can only have one L3 agent." msgstr "I router HA possono solo avere un agent L3." #, python-format msgid "Non-empty resource type is required for resource \"%s\"" msgstr "Il tipo di risorsa non vuota è richiesto per la risorsa \"%s\"" msgid "Not Implemented." msgstr "Non implementato." #, python-format msgid "Not allowed - %(dsver)s without %(dstype)s." msgstr "Non consentito - %(dsver)s senza %(dstype)s." msgid "Not found" msgstr "Non trovato" msgid "Not waiting for outputs signal" msgstr "Non in attesa di segnale di output" msgid "" "Notional service where encryption is performed For example, front-end. For " "Nova." msgstr "" "Il servizio ipotetico in cui viene eseguita la codifica, ad esempio front-" "end. Per Nova." msgid "Nova instance type (flavor)." msgstr "Tipo di istanza Nova (flavor)." msgid "Nova network id." msgstr "ID della rete Nova." msgid "Number of VCPUs for the flavor." msgstr "Numero di VCPU per il flavor." msgid "Number of backlog requests to configure the socket with." msgstr "Numero di richieste di backlog con cui configurare il socket." msgid "Number of instances in the Node group." msgstr "Numero di istanze nel gruppo di nodi." msgid "Number of minutes to wait for this stack creation." msgstr "Numero di minuti da attendere per la creazione di questo stack." msgid "Number of periods to evaluate over." msgstr "Numero di periodi su cui eseguire una valutazione." msgid "" "Number of permissible connection failures before changing the member status " "to INACTIVE." msgstr "" "Numero di errori di connessione consentiti prima di cambiare lo stato del " "membro in INACTIVE." msgid "Number of remaining executions." msgstr "Numero di esecuzioni restanti." msgid "Number of seconds for the DPD delay." msgstr "Numero di secondi per il ritardo DPD." 
msgid "Number of seconds for the DPD timeout." msgstr "Numero di secondi per il timeout DPD." msgid "" "Number of times to check whether an interface has been attached or detached." msgstr "" "Numero di volte in cui viene controllato se un'interfaccia è collegata o " "scollegata." msgid "" "Number of times to retry to bring a resource to a non-error state. Set to 0 " "to disable retries." msgstr "" "Numero di tentativi per riportare una risorsa nello stato non di errore. " "Impostare su 0 per disabilitare i tentativi." msgid "" "Number of times to retry when a client encounters an expected intermittent " "error. Set to 0 to disable retries." msgstr "" "Numero di tentativi quando un client rileva un errore intermittente " "previsto. Impostare su 0 per disabilitare i tentativi." msgid "Number of workers for Heat service." msgstr "Numero di lavoratori per il servizio Heat." msgid "" "Number of workers for Heat service. Default value 0 means, that service will " "start number of workers equal number of cores on server." msgstr "" "Numero di operatori per il servizio Heat. Il valore predefinito 0 indica che " "il servizio avvierà il numero di operatori uguale al numero di elementi " "principali sul server." msgid "Number value for delay during resolve constraint." msgstr "Il valore numerico per il ritardo durante la risoluzione del vincolo." msgid "Number value for timeout during resolving output value." msgstr "" "Il valore numerico per il timeout durante la risoluzione del valore di " "output." #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "Azione dell'oggetto %(action)s non riuscita perché: %(reason)s" msgid "" "On update, enables heat to collect existing resource properties from reality " "and converge to updated template." msgstr "" "Al momento dell'aggiornamento, abilita il motore heat a raccogliere le " "proprietà delle risorse esistenti dalla realtà e convergerle nel template " "aggiornato." msgid "One of predefined health monitor types." msgstr "Uno dei tipi di monitor di stato predefinito." msgid "One or more listeners for this load balancer." msgstr "Uno o più listener per questo bilanciatore di carico." msgid "Only ISO 8601 duration format of the form PT#H#M#S is supported." msgstr "" "Solo il formato della durata ISO 8601 del modulo PT#H#M#S è supportato." msgid "Only Templates with an extension of .yaml or .template are supported" msgstr "Sono supportati solo i template con un'estensione .yaml o .template" #, python-format msgid "Only integer is acceptable by '%(name)s'." msgstr "Solo un valore intero è accettabile da '%(name)s'." #, python-format msgid "Only non-zero integer is acceptable by '%(name)s'." msgstr "Solo un numero intero diverso da zero è accettabile da '%(name)s'." msgid "Operator used to compare specified statistic with threshold." msgstr "" "Operatore utilizzato per confrontare la statistica specificata con la soglia." msgid "Optional CA cert file to use in SSL connections." msgstr "File certificato CA facoltativo da utilizzare nelle connessioni SSL." msgid "Optional Nova keypair name." msgstr "Nome coppia di chiavi Nova facoltativo." msgid "Optional PEM-formatted certificate chain file." msgstr "File catena del certificato formattato PEM facoltativo." msgid "Optional PEM-formatted file that contains the private key." msgstr "File formattato PEM facoltativo che contiene la chiave privata." msgid "Optional filename to associate with part." msgstr "Nome file facoltativo da associare alla parte." 
#, python-format msgid "Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s." msgstr "" "Url heat facoltativo in formato tipo http://0.0.0.0:8004/v1/%(tenant_id)s." msgid "Optional subtype to specify with the type." msgstr "Sottotipo facoltativo da specificare con il tipo." msgid "Options for simulating waiting." msgstr "Opzioni per la simulazione dell'attesa." #, python-format msgid "Order '%(name)s' failed: %(code)s - %(reason)s" msgstr "Ordine '%(name)s' non riuscito: %(code)s - %(reason)s" msgid "Outputs received" msgstr "Output ricevuti" msgid "Owner of the source security group." msgstr "Proprietario del gruppo di sicurezza di origine." msgid "PATCH update to non-COMPLETE stack" msgstr "Aggiornamento PATCH su stack non COMPLETO" #, python-format msgid "Parameter '%(name)s' is invalid: %(exp)s" msgstr "Il parametro '%(name)s' non è valido: %(exp)s" msgid "Parameter Groups error" msgstr "Errore dei gruppi di parametri" msgid "" "Parameter Groups error: parameter_groups.: The grouped parameter key_name " "does not reference a valid parameter." msgstr "" "Errore dei gruppi di parametri: parameter_groups. Il parametro di gruppo " "key_name non fa riferimento a un parametro valido." msgid "" "Parameter Groups error: parameter_groups.: The key_name parameter must be " "assigned to one parameter group only." msgstr "" "Errore dei gruppi di parametri: parameter_groups. Il parametro key_name deve " "essere assegnato solo a un gruppo di parametri." msgid "" "Parameter Groups error: parameter_groups.: The parameters of parameter group " "should be a list." msgstr "" "Errore dei gruppi di parametri: parameter_groups. I parametri del gruppo di " "parametri devono essere un elenco." msgid "" "Parameter Groups error: parameter_groups.Database Group: The InstanceType " "parameter must be assigned to one parameter group only." msgstr "" "Errore dei gruppi di parametri: parameter_groups. Gruppo di database: il " "parametro InstanceType deve essere assegnato solo a un gruppo di parametri." msgid "" "Parameter Groups error: parameter_groups.Database Group: The grouped " "parameter SomethingNotHere does not reference a valid parameter." msgstr "" "Errore dei gruppi di parametri: parameter_groups. Gruppo di database: il " "parametro di gruppo SomethingNotHere non fa riferimento a un parametro " "valido." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters must " "be provided for each parameter group." msgstr "" "Errore dei gruppi di parametri: parameter_groups. Gruppo di server: i " "parametri devono essere forniti per ciascun gruppo di parametri." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters of " "parameter group should be a list." msgstr "" "Errore dei gruppi di parametri: parameter_groups. Gruppo di server: i " "parametri del gruppo di parametri devono essere un elenco." msgid "" "Parameter Groups error: parameter_groups: The parameter_groups should be a " "list." msgstr "" "Errore dei gruppi di parametri: parameter_groups: parameter_groups deve " "essere un elenco." #, python-format msgid "Parameter name in \"%s\" must be string" msgstr "Il nome parametro in \"%s\" deve essere una stringa" #, python-format msgid "Params must be a map, find a %s" msgstr "Params deve essere un'associazione, cercare un %s" msgid "Parent network of the subnet." msgstr "Rete parent della sottorete." msgid "Parts belonging to this message." msgstr "Parti appartenenti a questo messaggio." 
msgid "Password for API authentication" msgstr "Password per l'autenticazione API" msgid "Password for accessing the data source URL." msgstr "Password per l'accesso all'URL dell'origine dati." msgid "Password for accessing the job binary URL." msgstr "Password per l'accesso all'URL del job binary." msgid "Password for those users on instance creation." msgstr "Password per quegli utenti durante la creazione dell'istanza." msgid "Password of keystone user." msgstr "Password dell'utente keystone." msgid "Password used by user." msgstr "Password utilizzata dall'utente." #, python-format msgid "Path components in \"%s\" must be strings" msgstr "I componenti del percorso in \"%s\" devono essere stringhe" msgid "Path components in attributes must be strings" msgstr "I componenti del percorso negli attributi devono essere stringhe" msgid "Payload exceeds maximum allowed size" msgstr "Il payload ha superato la dimensione massima consentita" msgid "Perfect forward secrecy for the ipsec policy." msgstr "Il PFS (Perfect Forward Secrecy) per la politica ipsec." msgid "Perfect forward secrecy in lowercase for the ike policy." msgstr "Il PFS (Perfect Forward Secrecy) in minuscolo per la politica ike." msgid "" "Perform a check on the input values passed to verify that each required " "input has a corresponding value. When the property is set to STRICT and no " "value is passed, an exception is raised." msgstr "" "Eseguire una verifica sui valori di input inoltrati per controllare che " "ciascun input richiesto abbia un valore corrispondente. Quando la proprietà " "è impostata su RIGOROSA e non viene trasmesso alcun valore, viene generata " "un'eccezione." msgid "Period (seconds) to evaluate over." msgstr "Periodo (secondi) su cui eseguire la valutazione." msgid "Physical ID of the VPC. Not implemented." msgstr "ID fisico di VPC. Non implementato." #, python-format msgid "" "Plugin %(plugin)s doesn't support the following node processes: " "%(unsupported)s. Allowed processes are: %(allowed)s" msgstr "" "Il plugin %(plugin)s non supporta i seguenti processi del nodo: " "%(unsupported)s. I processi consentiti sono: %(allowed)s" msgid "Plugin name." msgstr "Nome plugin." msgid "Policies for removal of resources on update." msgstr "Politiche di rimozione delle risorse al momento dell'aggiornamento" msgid "Policy for rolling updates for this scaling group." msgstr "Politica per ripristinare gli aggiornamenti per scalare questo gruppo." msgid "" "Policy on how to apply a flavor update; either by requesting a server resize " "or by replacing the entire server." msgstr "" "Politica di applicazione dell'aggiornamento flavor; richiesta di " "ridimensionamento server o sostituzione dell'intero server." msgid "" "Policy on how to apply an image-id update; either by requesting a server " "rebuild or by replacing the entire server." msgstr "" "Politica di applicazione dell'aggiornamento image-id; richiesta di una nuova " "creazione server o sostituzione dell'intero server." msgid "" "Policy on how to respond to a stack-update for this resource. REPLACE_ALWAYS " "will replace the port regardless of any property changes. AUTO will update " "the existing port for any changed update-allowed property." msgstr "" "Politica su come rispondere ad un aggiornamento dello stack di questa " "risorsa. REPLACE_ALWAYS sostituirà la porta indipendentemente da eventuali " "modifiche alle proprietà. AUTO aggiornerà la porta esistente di qualsiasi " "proprietà update-allowed modificata." 
msgid "" "Policy to be processed when doing an update which requires removal of " "specific resources." msgstr "" "Politica da elaborare quando si esegue un aggiornamento che richiede la " "rimozione di risorse specifiche." msgid "Pool creation failed" msgstr "Creazione del pool non riuscita" msgid "Pool creation failed due to vip" msgstr "Creazione del pool non riuscita a causa di vip" msgid "Pool from which floating IP is allocated." msgstr "Il pool da cui viene assegnato l'IP mobile." msgid "Port number on which the servers are running on the members." msgstr "Numero della porta su cui i server sono in esecuzione sui membri" msgid "Port on which the pool member listens for requests or connections." msgstr "" "Porta su cui il membro del pool è in ascolto per richieste o connessioni." msgid "Port security enabled of the network." msgstr "La sicurezza della porta abilitata della rete." msgid "Port security enabled of the port." msgstr "La sicurezza della porta abilitata della porta." msgid "Position of the rule within the firewall policy." msgstr "Posizione della regola all'interno della politica del firewall." msgid "Pre-shared key string for the ipsec site connection." msgstr "Stringa PSK (Pre-Shared Key) per la connessione al sito ipsec." msgid "Prefix length for subnet allocation from subnet pool." msgstr "" "Lunghezza del prefisso per l'allocazione della sottorete dal pool di " "sottorete." msgid "Private DNS name of the specified instance." msgstr "Il nome DNS privato dell'istanza specificata." msgid "Private IP address of the network interface." msgstr "Indirizzo IP privato dell'interfaccia di rete." msgid "Private IP address of the specified instance." msgstr "L'indirizzo IP privato dell'istanza specificata." msgid "Project ID" msgstr "Identificativo del Progetto" msgid "" "Projects to add volume type access to. NOTE: This property is only supported " "since Cinder API V2." msgstr "" "Progetti a cui aggiungere l'accesso del tipo di volume. NOTA: questa " "proprietà è supportata solo a partire da Cinder V2." #, python-format msgid "" "Properties %(algorithm)s and %(bit_length)s are required for %(type)s type " "of order." msgstr "" "Le proprietà %(algorithm)s e %(bit_length)s sono richieste per il tipo di " "ordine %(type)s." msgid "Properties for profile." msgstr "Proprietà per il profilo." msgid "Properties of this policy." msgstr "Proprietà di questa politica." msgid "Properties to pass to each resource being created in the chain." msgstr "Le proprietà da trasferire a ciascuna risorsa da creare nella catena." #, python-format msgid "Property %(cookie)s is required when %(sp)s type is set to %(app)s." msgstr "" "Proprietà %(cookie)s richiesta quando il tipo %(sp)s è impostato su %(app)s." #, python-format msgid "" "Property %(cookie)s must NOT be specified when %(sp)s type is set to %(ip)s." msgstr "" "La proprietà %(cookie)s NON deve essere specificata quando il tipo %(sp)s è " "impostato su %(ip)s." #, python-format msgid "" "Property %(key)s updated value %(new)s should be superset of existing value " "%(old)s." msgstr "" "Property %(key)s updated value %(new)s should be superset of existing value " "%(old)s." #, python-format msgid "" "Property %(n)s type mismatch between facade %(type)s (%(fs_type)s) and " "provider (%(ps_type)s)" msgstr "" "Mancata corrispondenza del tipo di proprietà %(n)s tra facade %(type)s " "(%(fs_type)s) e provider (%(ps_type)s)" #, python-format msgid "Property %(policies)s and %(item)s cannot be used both at one time." 
msgstr "" "Le proprietà %(policies)s e %(item)s non possono essere utilizzate " "contemporaneamente." #, python-format msgid "Property %(ref)s required when protocol is %(term)s." msgstr "Proprietà %(ref)s richiesta quando il protocollo è %(term)s." #, python-format msgid "Property %s not assigned" msgstr "Proprietà %s non assegnata" #, python-format msgid "Property %s not implemented yet" msgstr "La proprietà %s non è ancora implementata" msgid "" "Property cookie_name is required when session_persistence type is set to " "APP_COOKIE." msgstr "" "La proprietà cookie_name è richiesta, quando il tipo session_persistence è " "impostato su APP_COOKIE." msgid "" "Property cookie_name is required, when session_persistence type is set to " "APP_COOKIE." msgstr "" "La proprietà cookie_name è richiesta, quando il tipo session_persistence è " "impostato si APP_COOKIE." msgid "" "Property cookie_name must NOT be specified when session_persistence type is " "set to SOURCE_IP." msgstr "" "La proprietà cookie_name NON deve essere specificata quando il tipo " "session_persistence è impostato su SOURCE_IP." msgid "Property values for the resources in the group." msgstr "I valori delle proprietà per le risorse nel gruppo." msgid "Protocol for balancing." msgstr "Protocollo per il bilanciamento." msgid "Protocol for the firewall rule." msgstr "Protocollo per la regola del firewall." msgid "Protocol of the pool." msgstr "Protocollo del pool." msgid "Protocol on which to listen for the client traffic." msgstr "Protocollo su cui si è in ascolto per il traffico client." msgid "Protocol to balance." msgstr "Protocollo da bilanciare." msgid "Protocol value for this firewall rule." msgstr "Valore del protocollo per questa regola del firewall." msgid "" "Provide access to nodes using other nodes of the cluster as proxy gateways." msgstr "" "Fornire l'accesso ai nodi utilizzando altri nodi del cluster come gateway " "del proxy." msgid "" "Provide old encryption key. New encryption key would be used from config " "file." msgstr "" "Fornire la vecchia chiave di crittografia. La nuova chiave di crittografia " "verrebbe utilizzata dal file config." msgid "Provider for this Load Balancer." msgstr "Provider per questo bilanciatore del carico." msgid "Provider implementing this load balancer instance." msgstr "Il provider che implementa questa istanza del bilanciatore del carico." #, python-format msgid "Provider requires property %(n)s unknown in facade %(type)s" msgstr "" "Il provider richiede una proprietà %(n)s sconosciuta in facade %(type)s" msgid "Public DNS name of the specified instance." msgstr "Il nome DNS pubblico dell'istanza specificata." msgid "Public IP address of the specified instance." msgstr "L'indirizzo IP pubblico dell'istanza specificata." msgid "" "RPC timeout for the engine liveness check that is used for stack locking." msgstr "" "Il timeout RPC per il controllo di attività del motore utilizzato per il " "blocco dello stack." msgid "RX/TX factor." msgstr "Fattore RX/TX." #, python-format msgid "Rebuilding server failed, status '%s'" msgstr "Nuova creazione del server non riuscita, stato '%s'" msgid "Record name." msgstr "Nome record." #, python-format msgid "Recursion depth exceeds %d." msgstr "La profondità di ricorsività ha superato %d." msgid "" "Ref structure that contains the ID of the VPC on which you want to create " "the subnet." msgstr "" "La struttura di riferimento che contiene l'ID del VPC su cui si desidera " "creare la sottorete." 
msgid "Reference to a flavor for creating DB instance." msgstr "Riferimento a un flavor per la creazione dell'istanza DB." msgid "Reference to certificate." msgstr "Riferimento al certificato." msgid "Reference to intermediates." msgstr "Riferimento agli intermediari." msgid "Reference to private key passphrase." msgstr "Riferimento alla passphrase della chiave privata." msgid "Reference to private key." msgstr "Riferimento alla chiave privata." msgid "Reference to public key." msgstr "Riferimento alla chiave pubblica." msgid "Reference to the secret." msgstr "Riferimento al segreto." msgid "References to secrets that will be stored in container." msgstr "I riferimenti ai segreti che verranno memorizzati nel contenitore." msgid "Region name in which this stack will be created." msgstr "Il nome della regione in cui questo stack verrà creato." msgid "Remaining executions." msgstr "Esecuzioni restanti." msgid "Remote branch router identity." msgstr "Identità del router con ramo remoto." msgid "Remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "Indirizzo IPv4 o IPv6 pubblico del router con ramo remoto o FQDN." msgid "Remote subnet(s) in CIDR format." msgstr "Le sottoreti remote in formato CIDR." msgid "" "Replacement policy used to work around flawed nova/neutron port interaction " "which has been fixed since Liberty." msgstr "" "La politica di sostituzione utilizzata per risolvere l'interazione difettosa " "delle porte nova/neutron che è stata risolta a partire da Liberty." msgid "Request expired or more than 15mins in the future" msgstr "Richiesta scaduta o più di 15 minuti in seguito" #, python-format msgid "Request limit exceeded: %(message)s" msgstr "Superato il limite di richiesta: %(message)s" msgid "Request missing required header X-Auth-Url" msgstr "La richiesta manca dell'intestazione X-Auth-Url obbligatoria" msgid "Request was denied due to request throttling" msgstr "La richiesta è stata negata a causa di una regolazione della richiesta" #, python-format msgid "" "Requested plugin '%(plugin)s' doesn't support version '%(version)s'. Allowed " "versions are %(allowed)s" msgstr "" "Il plugin '%(plugin)s' richiesto non supporta la versione '%(version)s'. Le " "versioni consentite sono %(allowed)s" msgid "" "Required extra specification. Defines if share drivers handles share servers." msgstr "" "Specifica supplementare richiesta. Definisce se i driver di condivisione " "gestiscono i server di condivisione." #, python-format msgid "Required property %(n)s for facade %(type)s missing in provider" msgstr "" "La proprietà richiesta %(n)s per il facade %(type)s non è presente nel " "provider" #, python-format msgid "Resizing to '%(flavor)s' failed, status '%(status)s'" msgstr "Nuovo dimensionamento su '%(flavor)s' non riuscito, stato '%(status)s'" #, python-format msgid "Resource \"%s\" has no type" msgstr "La risorsa \"%s\" non ha alcun tipo" #, python-format msgid "Resource \"%s\" type is not a string" msgstr "Il tipo risorsa \"%s\" non è una stringa" #, python-format msgid "Resource %(name)s %(key)s type must be %(typename)s" msgstr "Il tipo %(key)s della risorsa %(name)s deve essere %(typename)s" #, python-format msgid "Resource %(name)s is missing \"%(type_key)s\"" msgstr "Manca la risorsa %(name)s \"%(type_key)s\"" #, python-format msgid "" "Resource %s's property user_data_format should be set to SOFTWARE_CONFIG " "since there are software deployments on it." 
msgstr "" "La proprietà user_data_format della risorsa %s's deve essere impostata su " "SOFTWARE_CONFIG poiché su di essa sono presenti distribuzioni software." msgid "Resource ID was not provided." msgstr "L'ID risorsa non è stato fornito." msgid "" "Resource definition for the resources in the group, in HOT format. The value " "of this property is the definition of a resource just as if it had been " "declared in the template itself." msgstr "" "Definizione di risorsa per le risorse nel gruppo, in formato HOT. Il valore " "di questa proprietà è la definizione di una risorsa come se fosse stata " "dichiarata nel modello stesso." msgid "" "Resource definition for the resources in the group. The value of this " "property is the definition of a resource just as if it had been declared in " "the template itself." msgstr "" "La definizione di risorsa per le risorse nel gruppo. Il valore di questa " "proprietà è la definizione di una risorsa come se fosse stata dichiarata nel " "template stesso." msgid "Resource failed" msgstr "Risorsa non riuscita" msgid "Resource is not built" msgstr "Risorsa non creata" msgid "Resource name may not contain \"/\"" msgstr "Il nome risorsa non può contenere \"/\"" msgid "Resource type." msgstr "Tipo di risorsa." msgid "Resource update already requested" msgstr "Aggiornamento risorsa già richiesto" msgid "Resource with the name requested already exists" msgstr "La risorsa con il nome richiesto esiste già" msgid "" "ResourceInError: resources.remote_stack: Went to status UPDATE_FAILED due to " "\"Remote stack update failed\"" msgstr "" "ResourceInError: resources.remote_stack: passato allo stato UPDATE_FAILED a " "causa di \"Aggiornamento dello stack remoto non riuscito\"" #, python-format msgid "Resources must contain Resource. Found a [%s] instead" msgstr "Le risorse devono contenere risorsa. Invece trovato un [%s]" msgid "" "Resources that users are allowed to access by the DescribeStackResource API." msgstr "Le risorse a cui gli utenti possono accedere l'API." msgid "Returned status code from the configuration execution." msgstr "Codice di stato restituito dall'esecuzione della configurazione." msgid "Route duplicates an existing route." msgstr "L'instradamento duplica un instradamento esistente." msgid "Route table ID." msgstr "ID tabella di instradamento." msgid "Safety assessment lifetime configuration for the ike policy." msgstr "" "Configurazione della durata di valutazione della sicurezza per la politica " "ike." msgid "Safety assessment lifetime configuration for the ipsec policy." msgstr "" "Configurazione della durata di valutazione della sicurezza per la politica " "ipsec." msgid "Safety assessment lifetime units." msgstr "Unità della durata di valutazione della sicurezza." msgid "Safety assessment lifetime value in specified units." msgstr "" "Valore della durata di valutazione della sicurezza in unità specificate." msgid "Scheduler hints to pass to Nova (Heat extension)." msgstr "" "Suggerimenti (hint) dello scheduler da passare a Nova (estensione heat)." msgid "Schema representing the inputs that this software config is expecting." msgstr "" "Schema che rappresenta gli input previsti da questa configurazione software." msgid "Schema representing the outputs that this software config will produce." msgstr "" "Schema che rappresenta gli output che verranno generati da questa " "configurazione software." 
#, python-format msgid "Schema valid only for %(ltype)s or %(mtype)s, not %(utype)s" msgstr "Schema valido solo per %(ltype)s o %(mtype)s, non %(utype)s" msgid "" "Scope of flavor accessibility. Public or private. Default value is True, " "means public, shared across all projects." msgstr "" "Ambito di accessibilità del flavor. Pubblico o privato. Il valore " "predefinito è True, che indica pubblico, condiviso tra tutti i progetti." #, python-format msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden." msgstr "Ricerca del tenant %(target)s nel tenant %(actual)s vietato." msgid "Seconds between running periodic tasks." msgstr "Secondi tra l'esecuzione delle attività periodiche." msgid "Seconds to wait after a create. Defaults to the global wait_secs." msgstr "" "I secondi da attendere dopo una creazione. È impostato per valore " "predefinito su wait_secs globale." msgid "Seconds to wait after a delete. Defaults to the global wait_secs." msgstr "" "I secondi da attendere dopo un'eliminazione. È impostato per valore " "predefinito su wait_secs globale. " msgid "Seconds to wait after an action (-1 is infinite)." msgstr "I secondi da attendere dopo un'azione (-1 è infinito)." msgid "Seconds to wait after an update. Defaults to the global wait_secs." msgstr "" "I secondi da attendere dopo un aggiornamento. È impostato per valore " "predefinito su wait_secs globale. " #, python-format msgid "Section %s can not be accessed directly." msgstr "Impossibile accedere alla sezione %s direttamente." #, python-format msgid "Security Group \"%(group_name)s\" not found" msgstr "Gruppo di sicurezza \"%(group_name)s\" non trovato" msgid "Security group IDs to assign." msgstr "ID dei gruppi di sicurezza da assegnare." msgid "Security group IDs to associate with this port." msgstr "Gli ID dei gruppi di sicurezza da associare a questa porta." msgid "Security group names to assign." msgstr "Nomi dei gruppi di sicurezza da assegnare." msgid "Security groups cannot be assigned the name \"default\"." msgstr "Non è possibile assegnare ai gruppi di sicurezza il nome \"default\"." msgid "Security service IP address or hostname." msgstr "Indirizzo IP o nome host del servizio di sicurezza. " msgid "Security service description." msgstr "Descrizione del servizio di sicurezza." msgid "Security service domain." msgstr "Dominio del servizio di sicurezza." msgid "Security service name." msgstr "Nome del servizio di sicurezza." msgid "Security service type." msgstr "Tipo del servizio di sicurezza." msgid "Security service user or group used by tenant." msgstr "Utente o gruppo del servizio di sicurezza utilizzato dal tenant." msgid "Select deferred auth method, stored password or trusts." msgstr "" "Selezionare il metodo auth differito, la password o i trust memorizzati." msgid "Sequence of characters to build the random string from." msgstr "Sequenza di caratteri da cui creare la stringa casuale." #, python-format msgid "Server %(name)s delete failed: (%(code)s) %(message)s" msgstr "" "L'eliminazione del server %(name)s non è riuscita: (%(code)s) %(message)s" msgid "Server Group name." msgstr "Nome gruppo server." msgid "Server name." msgstr "Nome server." msgid "Server to assign floating IP to." msgstr "Il server a cui assegnare l'IP mobile." 
#, python-format msgid "" "Service %(service_name)s is not available for resource type " "%(resource_type)s, reason: %(reason)s" msgstr "" "Servizio %(service_name)s non disponibile per il tipo di risorsa " "%(resource_type)s, motivo: %(reason)s" msgid "Service misconfigured" msgstr "Servizio non configurato correttamente" msgid "Service temporarily unavailable" msgstr "Servizio temporaneamente non disponibile" msgid "Set of parameters passed to this stack." msgstr "L'insieme di parametri passati a questo stack." msgid "Set of rules for comparing characters in a character set." msgstr "" "Insieme di regole per il confronto di caratteri in un set di caratteri." msgid "Set of symbols and encodings." msgstr "Insieme di simboli e codifiche." msgid "Set to \"vpc\" to have IP address allocation associated to your VPC." msgstr "" "Impostare su \"vpc\" per avere l'assegnazione dell'indirizzo IP associata al " "proprio VPC." msgid "Set to true if DHCP is enabled and false if DHCP is disabled." msgstr "Impostare su true se DHCP è abilitato e false se DHCP è disabilitato." msgid "Severity of the alarm." msgstr "Gravità dell'allarme." msgid "Share description." msgstr "Descrizione della condivisione." msgid "Share host." msgstr "Host di condivisione." msgid "Share name." msgstr "Nome condivisione." msgid "Share network description." msgstr "Descrizione della rete di condivisione." msgid "Share project ID." msgstr "ID progetto di condivisione." msgid "Share protocol supported by shared filesystem." msgstr "Il protocollo di condivisione supportato dal filesystem condiviso." msgid "Share storage size in GB." msgstr "La dimensione della memoria di condivisione in GB." msgid "Shared status of the metering label." msgstr "Stato di condivisione dell'etichetta di misurazione." msgid "Shared status of this firewall policy." msgstr "Stato condiviso di questa politica del firewall." msgid "Shared status of this firewall rule." msgstr "Stato condiviso della regola del firewall." msgid "Shared status of this firewall." msgstr "Stato di condivisione di questo firewall." msgid "Shrinking volume" msgstr "Riduzione volume" msgid "Signal data error" msgstr "Errore dei dati del segnale" #, python-format msgid "Signal resource during %s" msgstr "Risorsa segnale durante %s" #, python-format msgid "Single schema valid only for %(ltype)s, not %(utype)s" msgstr "Singolo schema valido solo per %(ltype)s, non %(utype)s" msgid "Size of a secondary ephemeral data disk in GB." msgstr "Dimensione del disco di dati temporaneo secondario in GB." msgid "Size of adjustment." msgstr "Dimensione della regolazione." msgid "Size of encryption key, in bits. For example, 128 or 256." msgstr "La dimensione della chiave di codifica in bit. Ad esempio, 128 o 256." msgid "" "Size of local disk in GB. The \"0\" size is a special case that uses the " "native base image size as the size of the ephemeral root volume." msgstr "" "La dimensione del disco locale in GB. La dimensione \"0\" è un caso speciale " "che utilizza la dimensione dell'immagine di base nativa come dimensione del " "volume root temporaneo." msgid "" "Size of the block device in GB. If it is omitted, hypervisor driver " "calculates size." msgstr "" "Dimensione del dispositivo di blocco in GB. Se viene omesso, il driver " "hypervisor calcola la dimensione." msgid "Size of the instance disk volume in GB." msgstr "Dimensione del volume disco dell'istanza in GB." msgid "Size of the volumes, in GB." msgstr "Dimensione dei volumi in GB." 
msgid "Smallest prefix size that can be allocated from the subnet pool." msgstr "" "La dimensione minima del prefisso che può essere allocata dal pool di " "sottorete." #, python-format msgid "Snapshot with id %s not found" msgstr "Istantanea con ID %s non trovata" msgid "" "SnapshotId is missing, this is required when specifying BlockDeviceMappings." msgstr "" "Manca SnapshotId, questo è necessario quando si specifica " "BlockDeviceMappings." #, python-format msgid "Software config with id %s not found" msgstr "Configurazione software con id %s non trovata" msgid "Source IP address or CIDR." msgstr "Indirizzo IP origine o CIDR." msgid "Source ip_address for this firewall rule." msgstr "Ip_address origine per questa regola del firewall." msgid "Source port number or a range." msgstr "Numero della porta origine o un intervallo." msgid "Source port range for this firewall rule." msgstr "Intervallo della porta origine per questa regola del firewall." #, python-format msgid "Specified output key %s not found." msgstr "Chiave di output %s specificata non trovata." #, python-format msgid "Specified status is invalid, defaulting to %s" msgstr "Lo stato specificato non è valido, si assume il valore predefinito %s" #, python-format msgid "Specified subnet %(subnet)s does not belongs to network %(network)s." msgstr "" "La sottorete %(subnet)s specificata non appartiene alla rete %(network)s." msgid "Specifies a custom discovery url for node discovery." msgstr "" "Specifica un URL di rilevamento personalizzato per il rilevamento dei nodi." msgid "Specifies database names for creating databases on instance creation." msgstr "" "Specifica i nome di database per la creazione di database durante la " "creazione dell'istanza." msgid "Specify the ACL permissions on who can read objects in the container." msgstr "" "Specificare le autorizzazioni ACL per gli utenti che possono leggere gli " "oggetti nel contenitore." msgid "Specify the ACL permissions on who can write objects to the container." msgstr "" "Specificare le autorizzazioni ACL per gli utenti che possono scrivere gli " "oggetti nel contenitore." msgid "" "Specify whether the remote_ip_prefix will be excluded or not from traffic " "counters of the metering label. For example to not count the traffic of a " "specific IP address of a range." msgstr "" "Specificare se remote_ip_prefix verrà escluso o meno dai contatori del " "traffico dell'etichetta di misurazione. Ad esempio, per non calcolare il " "traffico di un indirizzo IP specifico di un intervallo." #, python-format msgid "Stack %(stack_name)s already has an action (%(action)s) in progress." msgstr "Lo stack %(stack_name)s ha già un'azione (%(action)s) in corso." msgid "Stack ID" msgstr "ID stack" msgid "Stack Name" msgstr "Nome stack" msgid "Stack name may not contain \"/\"" msgstr "Il nome di stack potrebbe non contenere \"/\"" msgid "Stack resource id" msgstr "ID risorsa dello stack" msgid "Stack unknown status" msgstr "Stato sconosciuto dello stack" #, python-format msgid "Stack with id %s not found" msgstr "Stack con id %s non trovato" msgid "" "Stacks containing these tag names will be hidden. Multiple tags should be " "given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too)." msgstr "" "Gli stack contenenti i nomi di tag verranno nascosti. Più tag devono essere " "forniti in un elenco delimitato da virgole (ad esempio, " "hidden_stack_tags=hide_me,me_too)" msgid "Start address for the allocation pool." msgstr "Indirizzo iniziale per il pool di allocazione." 
#, python-format msgid "Start resizing the group %(group)s" msgstr "Avvio ridimensionamento del gruppo %(group)s" msgid "Start time for the time constraint. A CRON expression property." msgstr "" "Ora di inizio del vincolo di tempo. Una proprietà dell'espressione CRON." #, python-format msgid "State %s invalid for create" msgstr "Stato %s non valido per la creazione" #, python-format msgid "State %s invalid for resume" msgstr "Stato %s non valido per la ripresa" #, python-format msgid "State %s invalid for suspend" msgstr "Stato %s non valido per la sospensione" msgid "Status" msgstr "Stato" #, python-format msgid "String to split must be string; got %s" msgstr "La stringa da separare deve essere string; ricevuto %s" msgid "String value with which to compare." msgstr "Valore stringa con cui effettuare il confronto." msgid "Subnet ID to associate with this interface." msgstr "ID di sottorete da associare a questa interfaccia." msgid "Subnet ID to launch instance in." msgstr "ID di sottorete in cui avviare l'istanza." msgid "Subnet ID." msgstr "ID di sottorete." msgid "Subnet in which the vpn service will be created." msgstr "Sottorete in cui il servizio vpn verrà creato." msgid "" "Subnet in which to allocate the IP address for port. Used for creating port, " "based on derived properties. If subnet is specified, network property " "becomes optional." msgstr "" "La sottorete in cui allocare l'indirizzo IP per la porta. Utilizzata per la " "creazione della porta, in base alle proprietà derivate. Se è specificata la " "sottorete, la proprietà di rete diventa facoltativa." msgid "Subnet in which to allocate the IP address for this port." msgstr "La sottorete in cui allocare l'indirizzo IP per questa porta." msgid "Subnet name or ID of this member." msgstr "Nome o ID della sottorete di questo membro." msgid "Subnet of external fixed IP address." msgstr "Sottorete dell'indirizzo IP fisso esterno." msgid "Subnet of the vip." msgstr "Sottorete del vip." msgid "Subnets of this network." msgstr "Sottoreti di questa rete." msgid "" "Subset of trustor roles to be delegated to heat. If left unset, all roles of " "a user will be delegated to heat when creating a stack." msgstr "" "Sottoinsieme di ruoli trustor da delegare a heat. Se non impostata, tutti i " "ruoli di un utente verranno delegati a heat quando si crea uno stack." msgid "Supplied metadata for the resources in the group." msgstr "Metadati forniti per le risorse nel gruppo." msgid "Supported versions: keystone v3" msgstr "Versioni supportate: keystone v3" #, python-format msgid "Suspend of instance %s failed" msgstr "Sospensione dell'istanza %s non riuscita" #, python-format msgid "Suspend of server %s failed" msgstr "Sospensione del server %s non riuscita" msgid "Swap space in MB." msgstr "Spazio di scambio in MB." msgid "System SIGHUP signal received." msgstr "Ricevuto segnale SIGHUP di sistema." msgid "TCP or UDP port on which to listen for client traffic." msgstr "Porta TCP o UDP su cui si è in ascolto per il traffico client." msgid "TCP port on which the instance server is listening." msgstr "La porta TCP su cui il server dell'istanza è in ascolto." msgid "TCP port on which the pool member listens for requests or connections." msgstr "" "Porta TCP su cui il membro del pool è in ascolto per richieste o connessioni." msgid "" "TCP port on which to listen for client traffic that is associated with the " "vip address." msgstr "" "La porta TCP su cui restare in ascolto del traffico client associato " "all'indirizzo vip." 
msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of OpenStack service finder functions." msgstr "" "TTL, in secondi, per tutti gli elementi memorizzati nella cache nella " "regione dogpile.cache utilizzata per la memorizzazione nella cache delle " "funzioni di ricerca del servizio OpenStack." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of service extensions." msgstr "" "TTL, in secondi, per tutti gli elementi memorizzati nella cache nella " "regione dogpile.cache utilizzata per la memorizzazione nella cache delle " "estensioni del servizio." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of validation constraints." msgstr "" "TTL, in secondi, per tutti gli elementi memorizzati nella cache nella " "regione dogpile.cache utilizzata per la memorizzazione nella cache dei " "vincoli di convalida." msgid "Tag key." msgstr "Chiave tag." msgid "Tag value." msgstr "Valore tag." msgid "Tags to add to the image." msgstr "Tag da aggiungere all'immagine." msgid "Tags to attach to instance." msgstr "Tag da allegare all'istanza." msgid "Tags to attach to the bucket." msgstr "Tag da allegare al bucket." msgid "Tags to attach to this group." msgstr "Tag da allegare a questo gruppo." msgid "Task description." msgstr "Descrizione attività." msgid "Task name." msgstr "Nome attività." msgid "" "Template default for how the server should receive the metadata required for " "software configuration. POLL_SERVER_CFN will allow calls to the cfn API " "action DescribeStackResource authenticated with the provided keypair " "(requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the " "Heat API resource-show using the provided keystone credentials (requires " "keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL " "will create and populate a Swift TempURL with metadata for polling (requires " "object-store endpoint which supports TempURL).ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Template predefinito per come il server deve ricevere i metadati richiesti " "per la configurazione del software. POLL_SERVER_CFN consente le chiamate " "all'azione DescribeStackResource dell'API cfn autenticata con la coppia di " "chiavi fornita (richiede heat-api-cfn abilitato). POLL_SERVER_HEAT consente " "le chiamate all'API Heat resource-show utilizzando le credenziali keystone " "fornite (richiede keystone v3 API e le opzioni config stack_user_* " "configurate). POLL_TEMP_URL crea e popola uno Swift TempURL con metadati per " "il polling (richiede l'endpoint object-store che supporta TempURL). " "ZAQAR_MESSAGE crea una coda zaqar dedicata e invia i metadati per il polling." msgid "" "Template default for how the server should signal to heat with the " "deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN " "keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will " "create a Swift TempURL to be signaled via HTTP PUT (requires object-store " "endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat " "API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL " "will create a dedicated zaqar queue to be signaled using the provided " "keystone credentials." msgstr "" "Template predefinito per come il server deve segnalare l'attivazione con i " "valori di output della distribuzione. 
CFN_SIGNAL consente un HTTP POST su un " "URL firmato da una coppia di chiavi CFN (richiede heat-api-cfn abilitato). " "TEMP_URL_SIGNAL crea uno Swift TempURL da segnalare mediante HTTP PUT " "(richiede l'endpoint object-store che supporta TempURL). HEAT_SIGNAL " "consente le chiamate all'API Heat resource-signal utilizzando le credenziali " "keystone fornite. ZAQAR_SIGNAL crea una coda zaqar dedicata da segnalare " "utilizzando le credenziali keystone fornite." msgid "Template format version not found." msgstr "Versione del formato template non trovata." #, python-format msgid "" "Template size (%(actual_len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "La dimensione del template (%(actual_len)s byte) supera la dimensione " "massima consentita (%(limit)s byte)." msgid "Template that specifies the stack to be created as a resource." msgstr "" "Il template che specifica lo stack che deve essere creato come risorsa." #, python-format msgid "Template type is not supported: %s" msgstr "Tipo di template non supportato: %s" msgid "Template version was not provided" msgstr "Versione del template non fornita" #, python-format msgid "Template with version %s not found" msgstr "Template con versione %s non trovato" msgid "TemplateBody or TemplateUrl were not given." msgstr "TemplateBody o TemplateUrl non sono stati forniti." msgid "Tenant owning the health monitor." msgstr "Tenant che possiede il monitor di integrità." msgid "Tenant owning the pool member." msgstr "Tenant che possiede il membro del pool." msgid "Tenant owning the pool." msgstr "Tenant che possiede il pool." msgid "Tenant owning the port." msgstr "Il tenant che possiede la porta." msgid "Tenant owning the router." msgstr "Tenant che possiede il router." msgid "Tenant owning the subnet." msgstr "Tenant che possiede la sottorete." #, python-format msgid "Testing message %(text)s" msgstr "Messaggio di test %(text)s" #, python-format msgid "The \"%(hook)s\" hook is not defined on %(resource)s" msgstr "L'hook \"%(hook)s\" non è definito su %(resource)s" #, python-format msgid "The \"for_each\" argument to \"%s\" must contain a map" msgstr "L'argomento \"for_each\" per \"%s\" deve contenere una mappa" #, python-format msgid "The %(entity)s (%(name)s) could not be found." msgstr "Impossibile trovare l'entità %(entity)s (%(name)s)." #, python-format msgid "The %s must be provided for each parameter group." msgstr "%s deve essere fornito per ciascun gruppo di parametri." #, python-format msgid "The %s of parameter group should be a list." msgstr "%s del gruppo di parametri deve essere un elenco." #, python-format msgid "The %s parameter must be assigned to one parameter group only." msgstr "Il parametro %s deve essere assegnato a un solo gruppo di parametri." #, python-format msgid "The %s should be a list." msgstr "%s deve essere un elenco." msgid "The API paste config file to use." msgstr "Il file di configurazione paste API da utilizzare." msgid "The AWS Access Key ID needs a subscription for the service" msgstr "" "L'ID della chiave di accesso AWS necessita di una sottoscrizione per il " "servizio" msgid "The Availability Zone where the specified instance is launched." msgstr "La zona di disponibilità in cui l'istanza specificata è avviata." msgid "The Availability Zones in which to create the load balancer." msgstr "Le zone di disponibilità in cui creare il bilanciatore di carico." msgid "The CIDR." msgstr "Il CIDR." msgid "The DNS name for the LoadBalancer." msgstr "Il nome DNS per LoadBalancer."
msgid "The DNS name of the specified bucket." msgstr "Il nome DNS del bucket specificato." msgid "The DNS nameserver address." msgstr "L'indirizzo del server dei nomi DNS." msgid "The HTTP method used for requests by the monitor of type HTTP." msgstr "Il metodo HTTP utilizzato per le richieste dal monitor di tipo HTTP." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health." msgstr "" "Il percorso HTTP adoperato nella richiesta HTTP utilizzata dal monitor per " "eseguire il test dello stato di un membro." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health. A valid value is a string the begins with a forward slash (/)." msgstr "" "Il percorso HTTP adoperato nella richiesta HTTP utilizzata dal monitor per " "eseguire il test dello stato di un membro. Un valore valido è una stringa " "che inizia con un barra (/)." msgid "" "The HTTP status codes expected in response from the member to declare it " "healthy. Specify one of the following values: a single value, such as 200. a " "list, such as 200, 202. a range, such as 200-204." msgstr "" "Codici stato HTTP previsti in risposta dal membro per dichiararne lo stato " "di integrità. Specificare uno dei seguenti valori: un valore singolo, ad " "esempio 200, un elenco, ad esempio 200, 202, un intervallo, ad esempio " "200-204." msgid "" "The ID of an existing instance to use to create the Auto Scaling group. If " "specify this property, will create the group use an existing instance " "instead of a launch configuration." msgstr "" "L'ID di un'istanza esistente da utilizzare per creare il gruppo Auto " "Scaling. Se si specifica questa proprietà, quando si crea il gruppo " "utilizzare un'istanza esistente invece di una configurazione di avvio." msgid "" "The ID of an existing instance you want to use to create the launch " "configuration. All properties are derived from the instance with the " "exception of BlockDeviceMapping." msgstr "" "L'ID di un'istanza esistente che si desidera utilizzare per creare la " "configurazione di avvio. Tutte le proprietà sono derivate dall'istanza con " "l'eccezione di BlockDeviceMapping." msgid "The ID of the attached network." msgstr "L'ID della rete collegata." msgid "The ID of the firewall policy that this firewall is associated with." msgstr "L'ID della politica firewall a cui questo firewall è associato." msgid "" "The ID of the hosted zone name that is associated with the LoadBalancer." msgstr "L'ID del nome della zona host associato a LoadBalancer." msgid "The ID of the image to create a volume from." msgstr "L'ID dell'immagine da cui creare un volume." msgid "The ID of the image to create the volume from." msgstr "L'ID dell'immagine da cui creare il volume." msgid "The ID of the instance to which the volume attaches." msgstr "L'ID dell'istanza a cui si collega il volume." msgid "The ID of the load balancing pool." msgstr "ID del pool di bilanciamento del carico." msgid "The ID of the pool to which the pool member belongs." msgstr "L'ID del pool a cui appartiene il membro del pool." msgid "The ID of the server to which the volume attaches." msgstr "L'ID del server a cui si collega il volume." msgid "The ID of the snapshot to create a volume from." msgstr "L'ID dell'istantanea da cui creare un volume." msgid "" "The ID of the tenant which will own the network. Only administrative users " "can set the tenant identifier; this cannot be changed using authorization " "policies." 
msgstr "" "L'ID del tenant che sarà proprietario della rete. Solo gli utenti " "amministrativi possono impostare l'identificativo del tenant; ciò non può " "essere modificato utilizzando le politiche di autorizzazione." msgid "" "The ID of the tenant who owns the Load Balancer. Only administrative users " "can specify a tenant ID other than their own." msgstr "" "L'ID del tenant proprietario del bilanciatore del carico. Solo gli utenti " "amministrativi possono specificare un ID tenant diverso dal proprio." msgid "The ID of the tenant who owns the listener." msgstr "L'ID del tenant che possiede il listener." msgid "" "The ID of the tenant who owns the network. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "L'ID del tenant proprietario della rete. Solo gli utenti amministrativi " "possono specificare un ID tenant diverso dal proprio." msgid "" "The ID of the tenant who owns the subnet pool. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "L'ID del tenant proprietario del pool di sottorete. Solo gli utenti " "amministrativi possono specificare un ID tenant diverso dal proprio." msgid "The ID of the volume to be attached." msgstr "L'ID del volume da allegare." msgid "" "The ID of the volume to boot from. Only one of volume_id or snapshot_id " "should be provided." msgstr "" "L'ID del volume per l'avvio. Fornire solo uno di volume_id o snapshot_id." msgid "The ID or name of the flavor to boot onto." msgstr "L'ID o il nome del flavor su cui eseguire l'avvio." msgid "The ID or name of the image to boot with." msgstr "L'ID o il nome dell'immagine con cui eseguire l'avvio." msgid "" "The IDs of the DHCP agent to schedule the network. Note that the default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "Gli ID dell'agent DHCP per pianificare la rete. Considerare che " "l'impostazione predefinita della politica in Neutron limita l'utilizzo di " "questa proprietà solo utenti amministrativi." msgid "The IP address of the pool member." msgstr "L'indirizzo IP del membro del pool." msgid "The IP version, which is 4 or 6." msgstr "La versione IP, che è 4 o 6." #, python-format msgid "The Parameter (%(key)s) was not defined in template." msgstr "Il parametro (%(key)s) non è stato definito nel template." #, python-format msgid "The Parameter (%(key)s) was not provided." msgstr "Il parametro (%(key)s) non è stato fornito." msgid "The QoS policy ID attached to this network." msgstr "L'ID della politica QoS collegato a questa rete. " msgid "The QoS policy ID attached to this port." msgstr "L'ID della politica QoS collegato a questa porta." #, python-format msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect." msgstr "L'attributo di riferimento (%(resource)s %(key)s) non è corretto." #, python-format msgid "The Resource %s requires replacement." msgstr "La risorsa %s richiede la sostituzione." #, python-format msgid "" "The Resource (%(resource_name)s) could not be found in Stack %(stack_name)s." msgstr "" "Impossibile trovare la risorsa (%(resource_name)s) nello stack " "%(stack_name)s." #, python-format msgid "The Resource (%(resource_name)s) is not available." msgstr "La risorsa (%(resource_name)s) non è disponibile." #, python-format msgid "The Snapshot (%(snapshot)s) for Stack (%(stack)s) could not be found." msgstr "" "Impossibile trovare l'istantanea (%(snapshot)s) per lo stack (%(stack)s)." 
#, python-format msgid "The Stack (%(stack_name)s) already exists." msgstr "Lo stack (%(stack_name)s) esiste già." msgid "The Template must be a JSON or YAML document." msgstr "Il template deve essere un documento JSON o YAML." msgid "The URI to the container." msgstr "L'URI per il contenitore." msgid "The URI to the created container." msgstr "L'URI per il contenitore creato." msgid "The URI to the created secret." msgstr "L'URI per il segreto creato." msgid "The URI to the order." msgstr "L'URI per l'ordine." msgid "The URIs to container consumers." msgstr "L'URI per i consumatori del contenitore." msgid "The URIs to secrets stored in container." msgstr "L'URI per i segreti memorizzati nel contenitore." msgid "" "The URL of a template that specifies the stack to be created as a resource." msgstr "" "L'URL di un template che specifica lo stack che deve essere creato come " "risorsa." msgid "The URL of the container." msgstr "L'URL del contenitore." msgid "The VIP address of the LoadBalancer." msgstr "L'indirizzo VIP del bilanciatore del carico." msgid "The VIP port of the LoadBalancer." msgstr "La porta VIP del bilanciatore del carico." msgid "The VIP subnet of the LoadBalancer." msgstr "La sottorete VIP del bilanciatore del carico." msgid "The action or operation requested is invalid" msgstr "L'azione o l'operazione richiesta non è valida" msgid "The action to be executed when the receiver is signaled." msgstr "L'azione da eseguire quando il ricevitore viene segnalato." msgid "The administrative state of the firewall." msgstr "Lo stato amministrativo del firewall." msgid "The administrative state of the health monitor." msgstr "Lo stato amministrativo del monitor di stato." msgid "The administrative state of the ipsec site connection." msgstr "Lo stato amministrativo della connessione al sito ipsec." msgid "The administrative state of the pool member." msgstr "Stato amministrativo del membro del pool." msgid "The administrative state of the router." msgstr "Lo stato amministrativo del router." msgid "The administrative state of the vpn service." msgstr "Lo stato amministrativo del servizio vpn." msgid "The administrative state of this Load Balancer." msgstr "Lo stato amministrativo del bilanciatore del carico." msgid "The administrative state of this health monitor." msgstr "Lo stato amministrativo di questo monitor di integrità." msgid "The administrative state of this listener." msgstr "Lo stato amministrativo di questo listener." msgid "The administrative state of this pool member." msgstr "Stato amministrato di questo membro del pool." msgid "The administrative state of this pool." msgstr "Lo stato amministrativo di questo pool." msgid "The administrative state of this port." msgstr "Lo stato amministrativo di questa porta." msgid "The administrative state of this vip." msgstr "Lo stato amministrativo del presente vip." msgid "The administrative status of the network." msgstr "Lo stato amministrativo della rete." msgid "The administrator password for the server." msgstr "Password dell'amministratore per il server." msgid "The aggregation method to compare to the threshold." msgstr "Il metodo di aggregazione da confrontare alla soglia." msgid "The algorithm type used to generate the secret." msgstr "Il tipo di algoritmo utilizzato per generare il segreto." msgid "" "The algorithm type used to generate the secret. Required for key and " "asymmetric types of order." msgstr "" "Il tipo di algoritmo utilizzato per generare il segreto. Richiesto per tipi " "di ordine chiave e asimmetrico." 
msgid "The algorithm used to distribute load between the members of the pool." msgstr "Algoritmo utilizzato per distribuire il carico tra i membri del pool." msgid "The allocated address of this IP." msgstr "L'indirizzo allocato di questo IP." msgid "" "The approximate interval, in seconds, between health checks of an individual " "instance." msgstr "" "L'intervallo approssimativo, in secondi, tra le verifiche di stato di " "un'istanza individuale." msgid "The authentication hash algorithm of the ipsec policy." msgstr "L'algoritmo hash di autenticazione della politica ipsec." msgid "The authentication hash algorithm used by the ike policy." msgstr "L'algoritmo hash di autenticazione utilizzato dalla politica ike." msgid "The authentication mode of the ipsec site connection." msgstr "La modalità di autenticazione della connessione al sito ipsec." msgid "The availability zone in which the volume is located." msgstr "La zona di disponibilità in cui il volume è ubicato." msgid "The availability zone in which the volume will be created." msgstr "La zona di disponibilità in cui il volume verrà creato." msgid "The availability zone of shared filesystem." msgstr "La zona di disponibilità per il filesystem condiviso." msgid "The bay name." msgstr "Il nome dell'alloggiamento." msgid "The bit-length of the secret." msgstr "La lunghezza in bit del segreto." msgid "" "The bit-length of the secret. Required for key and asymmetric types of order." msgstr "" "La lunghezza in bit del segreto. Richiesta per tipi di ordine chiave e " "asimmetrico." #, python-format msgid "The bucket you tried to delete is not empty (%s)." msgstr "Il bucket che si desidera eliminare non è vuoto (%s)." msgid "The can be used to unmap a defined device." msgstr "" "Può essere utilizzato per annullare l'associazione ad un dispositivo " "definito." msgid "The certificate or AWS Key ID provided does not exist" msgstr "Il certificato o l'ID chiave AWS fornito non esiste" msgid "The channel for receiving signals." msgstr "Il canale per la ricezione di segnali." msgid "" "The class that provides encryption support. For example, nova.volume." "encryptors.luks.LuksEncryptor." msgstr "" "La classe che fornisce il supporto di codifica, ad esempio, nova.volume." "encryptors.luks.LuksEncryptor." #, python-format msgid "The client (%(client_name)s) is not available." msgstr "Il client (%(client_name)s) non è disponibile." msgid "The cluster ID this node belongs to." msgstr "L'ID cluster a cui appartiene questo nodo." msgid "The config value of the software config." msgstr "Il valore della configurazione software." msgid "" "The configuration tool used to actually apply the configuration on a server. " "This string property has to be understood by in-instance tools running " "inside deployed servers." msgstr "" "Lo strumento di configurazione utilizzato per applicare attualmente la " "configurazione ad un server. Questa proprietà stringa deve essere " "riconosciuta dagli strumenti dell'istanza in esecuzione all'interno dei " "server distribuiti." msgid "The content of the CSR. Only for certificate orders." msgstr "Il contenuto del CSR. Solo per ordini di certificati." #, python-format msgid "" "The contents of personality file \"%(path)s\" is larger than the maximum " "allowed personality file size (%(max_size)s bytes)." msgstr "" "Il contenuto del file di personalità \"%(path)s\" è più grande della " "dimensione massima consentita del file di personalità (%(max_size)s byte)." msgid "The current size of AutoscalingResourceGroup." 
msgstr "La dimensione corrente di AutoscalingResourceGroup." msgid "The current status of the volume." msgstr "Lo stato corrente del volume." msgid "" "The database instance was created, but heat failed to set up the datastore. " "If a database instance is in the FAILED state, it should be deleted and a " "new one should be created." msgstr "" "L'istanza del database è stata creata, ma heat non è riuscito ad impostare " "il datastore. Se è presente un'istanza di database nello stato Non riuscito, " "è necessario eliminarla e crearne una nuova." msgid "" "The dead peer detection protocol configuration of the ipsec site connection." msgstr "" "La configurazione del protocollo di rilevamento peer non attivo della " "connessione al sito ipsec." msgid "The decrypted secret payload." msgstr "Il payload del segreto decrittografato." msgid "" "The default cloud-init user set up for each image (e.g. \"ubuntu\" for " "Ubuntu 12.04+, \"fedora\" for Fedora 19+ and \"cloud-user\" for CentOS/RHEL " "6.5)." msgstr "" "L'utente cloud-init predefinito configurato per ciascuna immagine (ad " "esempio, \"ubuntu\" per Ubuntu 12.04+, \"fedora\" per Fedora 19+ e \"cloud-" "user\" per CentOS/RHEL 6.5)." msgid "The description for the QoS policy." msgstr "La descrizione della politica QoS." msgid "The description of the ike policy." msgstr "La descrizione della politica ike." msgid "The description of the ipsec policy." msgstr "La descrizione della politica ipsec." msgid "The description of the ipsec site connection." msgstr "La descrizione della connessione al sito ipsec." msgid "The description of the vpn service." msgstr "La descrizione del servizio vpn." msgid "The destination for static route." msgstr "La destinazione per l'instradamento statico." msgid "The details of physical object." msgstr "I dettagli dell'oggetto fisico." msgid "The device id for the network gateway." msgstr "L'id dispositivo per il gateway di rete." msgid "" "The device where the volume is exposed on the instance. This assignment may " "not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "Il dispositivo in cui il volume è esposto sull'istanza. Questa assegnazione " "potrebbe non essere rispettata e si consiglia che venga utilizzato il " "percorso /dev/disk/by-id/virtio-." msgid "" "The direction in which metering rule is applied, either ingress or egress." msgstr "" "La direzione in cui la regola di misurazione viene applicata, o ingress o " "egress." msgid "The direction in which metering rule is applied." msgstr "La direzione in cui la regola di misurazione viene applicata." msgid "" "The direction in which the security group rule is applied. For a compute " "instance, an ingress security group rule matches traffic that is incoming " "(ingress) for that instance. An egress rule is applied to traffic leaving " "the instance." msgstr "" "La direzione in cui la regola del gruppo di sicurezza viene applicata. Per " "un'istanza di calcolo, una regola del gruppo di sicurezza ingress " "corrisponde al traffico in entrata (ingress) per quella istanza. Una regola " "egress viene applicata al traffico in uscita dall'istanza." msgid "The directory to search for environment files." msgstr "La directory in cui ricercare i file di ambiente." msgid "The ebs volume to attach to the instance." msgstr "Il volume ebs da collegare all'istanza." msgid "The encapsulation mode of the ipsec policy." msgstr "La modalità di incapsulamento della politica ipsec." 
msgid "The encoding format used to provide the payload data." msgstr "Il formato di codifica utilizzato per fornire i dati di payload." msgid "The encryption algorithm of the ipsec policy." msgstr "L'algoritmo di codifica della politica ipsec." msgid "The encryption algorithm or mode. For example, aes-xts-plain64." msgstr "L'algoritmo o la modalità di codifica. Ad esempio, aes-xts-plain64." msgid "The encryption algorithm used by the ike policy." msgstr "L'algoritmo di codifica utilizzato dalla politica ike." msgid "The environment is not a valid YAML mapping data type." msgstr "L'ambiente non è un tipo di dati di associazione YAML valido." msgid "The expiration date for the secret in ISO-8601 format." msgstr "La data di scadenza del segreto in formato ISO-8601." msgid "The external load balancer port number." msgstr "Il numero della porta del bilanciatore di carico esterno." msgid "The extra specs key and value pairs of the volume type." msgstr "" "La chiave delle specifiche supplementari e le coppie di valori del tipo di " "volume." msgid "The flavor to use." msgstr "Il flavor da utilizzare." #, python-format msgid "The following parameters are immutable and may not be updated: %(keys)s" msgstr "" "I seguenti parametri sono immutabili e potrebbero non essere aggiornati: " "%(keys)s" #, python-format msgid "The function %s is not supported in this version of HOT." msgstr "La funzione %s is non è supportata in questa versione di HOT." msgid "" "The gateway IP address. Set to any of [ null | ~ | \"\" ] to create/update a " "subnet without a gateway. If omitted when creation, neutron will assign the " "first free IP address within the subnet to the gateway automatically. If " "remove this from template when update, the old gateway IP address will be " "detached." msgstr "" "L'indirizzo IP del gateway. Impostare su un qualsiasi carattere [ null | ~ | " "\"\" ] per creare/aggiornare una sottorete senza un gateway. Se omesso " "durante la creazione, neutron assegna automaticamente al gateway il primo " "indirizzo IP disponibile all'interno della sottorete. Se rimosso dal " "template durante l'aggiornamento, verrà scollegato il vecchio indirizzo IP " "del gateway." #, python-format msgid "The grouped parameter %s does not reference a valid parameter." msgstr "Il parametro del gruppo %s non fa riferimento a un parametro valido." msgid "The host from the container URL." msgstr "L'host dell'URL contenitore." msgid "The host from which a user is allowed to connect to the database." msgstr "L'host da cui un utente può connettersi al database." msgid "" "The id for L2 segment on the external side of the network gateway. Must be " "specified when using vlan." msgstr "" "L'id per il segmento L2 sul lato esterno del gateway di rete. Deve essere " "specificato quando si utilizza vlan." msgid "The identifier of the CA to use." msgstr "L'identificativo del CA da utilizzare." msgid "The image ID. Glance will generate a UUID if not specified." msgstr "L'ID immagine. Glance genererà un UUID se non è specificato." msgid "The initiator of the ipsec site connection." msgstr "L'iniziatore della connessione al sito ipsec." msgid "The input string to be stored." msgstr "La stringa di input da memorizzare." msgid "The interface name for the network gateway." msgstr "Il nome interfaccia per il gateway di rete." msgid "The internal network to connect on the network gateway." msgstr "La rete interna a cui connettersi sul gateway di rete." msgid "The last operation for the database instance failed due to an error." 
msgstr "" "L'ultima operazione dell'istanza del database non è riuscita a causa di un " "errore." #, python-format msgid "The length must be at least %(min)s." msgstr "La lunghezza deve essere almeno %(min)s." #, python-format msgid "The length must be in the range %(min)s to %(max)s." msgstr "La lunghezza deve essere compresa tra %(min)s e %(max)s." #, python-format msgid "The length must be no greater than %(max)s." msgstr "La lunghezza non può essere maggiore di %(max)s." msgid "The length of time, in minutes, to wait for the nested stack creation." msgstr "" "L'intervallo di tempo, espresso in minuti, di attesa per la creazione dello " "stack nidificato." msgid "" "The list of HTTP status codes expected in response from the member to " "declare it healthy." msgstr "" "Elenco di codici stato HTTP previsti in risposta dal membro per dichiararne " "lo stato di integrità." msgid "The list of Nova server IDs load balanced." msgstr "L'elenco degli ID del server Nova con carico bilanciato." msgid "The list of Pools related to this monitor." msgstr "L'elenco di pool correlati a questo monitor." msgid "The list of attachments of the volume." msgstr "L'elenco degli allegati del volume." msgid "" "The list of configurations for the different lifecycle actions of the " "represented software component." msgstr "" "L'elenco di configurazioni per le diverse azioni del ciclo di vita del " "componente software rappresentato." msgid "The list of instance IDs load balanced." msgstr "Elenco di ID istanze con carico bilanciato." msgid "" "The list of resource types to create. This list may contain type names or " "aliases defined in the resource registry. Specific template names are not " "supported." msgstr "" "L'elenco dei tipi di risorsa da creare. Questo elenco può contenere nomi di " "tipo o alias definiti nel registro delle risorse. I nomi di modelli " "specifici non sono supportati." msgid "The list of tags to associate with the volume." msgstr "Elenco di tag da associare al volume." msgid "The load balancer transport protocol to use." msgstr "Il protocollo di trasporto del bilanciatore di carico da utilizzare." msgid "" "The location where the volume is exposed on the instance. This assignment " "may not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "L'ubicazione in cui il volume è esposto sull'istanza. Questa assegnazione " "potrebbe non essere rispettata e si consiglia che venga utilizzato il " "percorso /dev/disk/by-id/virtio-." msgid "The manually assigned alternative public IPv4 address of the server." msgstr "" "L'indirizzo IPv4 pubblico alternativo assegnato manualmente del server." msgid "The manually assigned alternative public IPv6 address of the server." msgstr "" "L'indirizzo IPv6 pubblico alternativo assegnato manualmente del server." msgid "The maximum number of connections per second allowed for the vip." msgstr "Il numero massimo di connessioni al secondo consentito per il vip." msgid "" "The maximum number of connections permitted for this load balancer. Defaults " "to -1, which is infinite." msgstr "" "Il numero massimo di connessioni consentite per questo bilanciatore del " "carico. L'impostazione predefinita è -1, che è infinito." msgid "The maximum number of resources to create at once." msgstr "Il numero massimo di risorse da creare in una volta." msgid "The maximum number of resources to replace at once." msgstr "Il numero massimo di risorse da sostituire in una volta." 
msgid "" "The maximum number of seconds to wait for the resource to signal completion. " "Once the timeout is reached, creation of the signal resource will fail." msgstr "" "Il numero massimo di secondi da attendere affinché la risorsa segnali il " "completamento. Una volta raggiunto il timeout, la creazione della risorsa " "segnale avrà esito negativo." msgid "" "The maximum port number in the range that is matched by the security group " "rule. The port_range_min attribute constrains the port_range_max attribute. " "If the protocol is ICMP, this value must be an ICMP type." msgstr "" "Il numero di porta massimo nell'intervallo che trova corrispondenza mediante " "la regola del gruppo di sicurezza, L'attributo port_range_min vincola " "l'attributo port_range_max. Se il protocollo è ICMP, questo valore deve " "essere di tipo ICMP." msgid "" "The maximum transmission unit size (in bytes) of the ipsec site connection." msgstr "" "La dimensione massima dell'unità di trasmissionee (in byte) della " "connessione al sito ipsec." msgid "The maximum transmission unit size(in bytes) for the network." msgstr "" "La dimensione massima (in bytes) dell'unità di trasmissione per la rete." msgid "The metering label ID to associate with this metering rule." msgstr "" "L'ID dell'etichetta di misurazione da associare a questa regola di " "misurazione." msgid "" "The metric dimensions to match to the alarm dimensions. One or more " "dimension key names separated by a comma." msgstr "" "Le dimensioni metriche che devono corrispondere alle dimensioni della " "segnalazione. Uno o più nomi chiave delle dimensioni separati da una virgola." msgid "" "The minimum number of characters from this character class that will be in " "the generated string." msgstr "" "Il numero minimo di caratteri da questa classe di caratteri che sarà " "contenuto nella stringa generata." msgid "" "The minimum number of characters from this sequence that will be in the " "generated string." msgstr "" "Il numero minimo di caratteri da questa sequenza di caratteri che sarà " "contenuto nella stringa generata." msgid "" "The minimum number of resources in service while rolling updates are being " "executed." msgstr "" "Il numero minimo di risorse nel servizio che durante il ripristino degli " "aggiornamenti sono state eseguite." msgid "" "The minimum port number in the range that is matched by the security group " "rule. If the protocol is TCP or UDP, this value must be less than or equal " "to the value of the port_range_max attribute. If the protocol is ICMP, this " "value must be an ICMP type." msgstr "" "Il numero di porta minimo nell'intervallo che trova corrispondenza mediante " "la regola del gruppo di sicurezza. Se il protocollo è TCP o UDP, questo " "valore deve essere inferiore o uguale al valore dell'attributo " "port_range_max. Se il protocollo è ICMP, questo valore deve essere di tipo " "ICMP." msgid "The name for the QoS policy." msgstr "Il nome della politica QoS." msgid "The name for the address scope." msgstr "Il nome dell'ambito dell'indirizzo." msgid "" "The name of the driver used for instantiating container networks. By " "default, Magnum will choose the pre-configured network driver based on COE " "type." msgstr "" "Il nome del driver utilizzato per istanziare le reti del contenitore. Per " "impostazione predefinita, Magnum sceglierà il driver di rete preconfigurato " "basato sul tipo COE. " msgid "The name of the error document." msgstr "Il nome del documento di errori." 
msgid "The name of the hosted zone that is associated with the LoadBalancer." msgstr "Il nome della zona host associata a LoadBalancer." msgid "The name of the ike policy." msgstr "Il nome della politica ike." msgid "The name of the index document." msgstr "Il nome del documento di indice." msgid "The name of the ipsec policy." msgstr "Il nome della politica ipsec." msgid "The name of the ipsec site connection." msgstr "Il nome della connessione al sito ipsec." msgid "The name of the key pair." msgstr "Il nome della coppia di chiavi." msgid "The name of the network gateway." msgstr "Il nome del gateway di rete." msgid "The name of the network." msgstr "Il nome della rete." msgid "The name of the router." msgstr "Nome del router." msgid "The name of the subnet." msgstr "Nome della sottorete." msgid "The name of the user that the new key will belong to." msgstr "Il nome dell'utente a cui la nuova chiave apparterà." msgid "" "The name of the virtual device. The name must be in the form ephemeralX " "where X is a number starting from zero (0); for example, ephemeral0." msgstr "" "Il nome del dispositivo virtuale. Il nome deve essere nel formato ephemeralX " "in cui X è un numero a partire da zero (0); ad esempio, ephemeral0." msgid "The name of the vpn service." msgstr "Il nome del servizio vpn." msgid "The name or ID of QoS policy to attach to this network." msgstr "Il nome o l'ID della politica QoS da collegare a questa rete." msgid "The name or ID of QoS policy to attach to this port." msgstr "Il nome o l'ID della politica QoS da collegare a questa porta." msgid "The name or ID of parent of this keystone project in hierarchy." msgstr "" "Il nome o l'ID dell'elemento parent di questo progetto keystone nella " "gerarchia." msgid "The name or ID of target cluster." msgstr "Il nome o l'ID del cluster di destinazione." msgid "The name or ID of the bay model." msgstr "Il nome o l'ID del modello di alloggiamento." msgid "The name or ID of the subnet on which to allocate the VIP address." msgstr "Il nome o l'ID della sottorete su cui allocare l'indirizzo VIP." msgid "The name or ID of the subnet pool." msgstr "Il nome o l'ID del pool di sottorete." msgid "The name or id of the Senlin profile." msgstr "Il nome o l'ID del profilo Senlin." msgid "The negotiation mode of the ike policy." msgstr "La modalità di negoziazione della politica ike." msgid "The next hop for the destination." msgstr "L'hop successivo per la destinazione." msgid "The node count for this bay." msgstr "Il numero di nodi per questo alloggiamento." msgid "The notification methods to use when an alarm state is ALARM." msgstr "" "I metodi di notifica da utilizzare quando uno stato di allarme è ALARM." msgid "The notification methods to use when an alarm state is OK." msgstr "" "I metodi di notifica da utilizzare quando lo stato della segnalazione è OK." msgid "The notification methods to use when an alarm state is UNDETERMINED." msgstr "" "I metodi di notifica da utilizzare quando lo stato della segnalazione è " "UNDETERMINATED." msgid "The number of I/O operations per second that the volume supports." msgstr "Il numero di operazioni di I/O per secondo che il volume supporta." msgid "The number of bytes stored in the container." msgstr "Il numero di byte memorizzati nel contenitore." 
msgid "" "The number of consecutive health probe failures required before moving the " "instance to the unhealthy state" msgstr "" "Il numero di probe di stato consecutivi con esito negativo richiesti prima " "di spostare l'istanza allo stato non funzionante." msgid "" "The number of consecutive health probe successes required before moving the " "instance to the healthy state." msgstr "" "Il numero di probe di stato consecutivi con esito positivo richiesti prima " "di spostare l'istanza allo stato funzionante." msgid "The number of master nodes for this bay." msgstr "Il numero di nodi principali per questo alloggiamento." msgid "The number of objects stored in the container." msgstr "Il numero di oggetti memorizzati nel contenitore." msgid "The number of replicas to be created." msgstr "Il numero di repliche da creare." msgid "The number of resources to create." msgstr "Numero di risorse da creare." msgid "The number of seconds to wait between batches of updates." msgstr "Il numero di secondi da attendere tra i batch degli aggiornamenti." msgid "The number of seconds to wait between batches." msgstr "Il numero di secondi da attendere tra i batch." msgid "The number of seconds to wait for the cluster actions." msgstr "Il numero di secondi da attendere per le azioni del cluster." msgid "" "The number of seconds to wait for the correct number of signals to arrive." msgstr "" "Il numero di secondi di attesa per il numero corretto di segnali in arrivo." msgid "" "The number of success signals that must be received before the stack " "creation process continues." msgstr "" "Il numero di segnali corretti che devono essere ricevuti prima che il " "processo di creazione dello stack continui." msgid "" "The optional public key. This allows users to supply the public key from a " "pre-existing key pair. If not supplied, a new key pair will be generated." msgstr "" "La chiave pubblica facoltativa. Ciò consente agli utenti di fornire la " "chiave pubblica da una coppia di chiavi preesistente. Se non fornita, verrà " "generata una nuova coppia di chiavi." msgid "" "The owner tenant ID of the address scope. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "L'ID tenant del proprietario dell'ambito dell'indirizzo. Solo gli utenti " "amministrativi possono specificare un ID tenant diverso dal proprio." msgid "The owner tenant ID of this QoS policy." msgstr "L'ID tenant proprietario di questa politica QoS." msgid "The owner tenant ID of this rule." msgstr "L'ID tenant proprietario di questa regola." msgid "" "The owner tenant ID. Only required if the caller has an administrative role " "and wants to create a RBAC for another tenant." msgstr "" "L'ID tenant del proprietario. Richiesto solo se il chiamante ha un ruolo " "amministrativo e desidera creare un RBAC per un altro tenant." msgid "The parameters passed to action when the receiver is signaled." msgstr "I parametri passati all'azione quando viene segnalato il ricevitore." msgid "The parent URL of the container." msgstr "L'URL parent del contenitore." msgid "The payload of the created certificate, if available." msgstr "Il payload del certificato creato, se disponibile." msgid "The payload of the created intermediates, if available." msgstr "Il payload degli elementi intermedi creati, se disponibile." msgid "The payload of the created private key, if available." msgstr "Il payload della chiave privata creata, se disponibile." msgid "The payload of the created public key, if available." 
msgstr "Il payload della chiave pubblica creata, se disponibile." msgid "The perfect forward secrecy of the ike policy." msgstr "Il PFS (Perfect Forward Secrecy) della politica ike." msgid "The perfect forward secrecy of the ipsec policy." msgstr "Il PFS (Perfect Forward Secrecy) della politica ipsec." #, python-format msgid "The personality property may not contain greater than %s entries." msgstr "La proprietà di personalità non può contenere più di %s voci." msgid "The physical mechanism by which the virtual network is implemented." msgstr "Il meccanismo fisico da cui viene implementata la rete fisica." msgid "The port being checked." msgstr "La porta sottoposta a verifica." msgid "The port id, either subnet or port_id should be specified." msgstr "L'ID porta, è necessario specificare subnet o port_id." msgid "The port on which the server will listen." msgstr "La porta su cui il server sarà in ascolto." msgid "The port, either subnet or port should be specified." msgstr "La porta, è necessario specificare subnet o port." msgid "The pre-shared key string of the ipsec site connection." msgstr "La stringa PSK (Pre-Shared Key) della connessione al sito ipsec." msgid "The private key if it has been saved." msgstr "La chiave privata se è stata salvata." msgid "The profile of certificate to use." msgstr "Il profilo del certificato da utilizzare." msgid "" "The protocol that is matched by the security group rule. Valid values " "include tcp, udp, and icmp." msgstr "" "Il protocollo che trova corrispondenza mediante la regola del gruppo di " "sicurezza. I valori validi includono tcp, udp e icmp." msgid "The public key." msgstr "La chiave pubblica." msgid "The query string is malformed" msgstr "La stringa di query non è formata correttamente" msgid "The query to filter the metrics." msgstr "La query per filtrare le metriche." msgid "" "The random string generated by this resource. This value is also available " "by referencing the resource." msgstr "" "Stringa a caso generata da questa risorsa. Questo valore è disponibile anche " "facendo riferimento alla risorsa." msgid "The reference to a LaunchConfiguration resource." msgstr "Il riferimento a LaunchConfiguration." msgid "" "The remote IP prefix (CIDR) to be associated with this security group rule." msgstr "" "Il prefisso IP remoto (CIDR) che deve essere associato a questa regola del " "gruppo di sicurezza." msgid "The remote branch router identity of the ipsec site connection." msgstr "L'identità router del ramo remoto della connessione al sito ipsec." msgid "The remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "L'indirizzo IPv4 o IPv6 pubblico del router con ramo remoto o FQDN." msgid "" "The remote group ID to be associated with this security group rule. If no " "value is specified then this rule will use this security group for the " "remote_group_id. The remote mode parameter must be set to \"remote_group_id" "\"." msgstr "" "L'ID del gruppo remoto da associare a questa regola del gruppo di sicurezza. " "Se nessun valore viene specificato, questa regola utilizzerà questo gruppo " "di sicurezza per remote_group_id. Il parametro della modalità remota deve " "essere impostato su \"remote_group_id\"." msgid "The remote subnet(s) in CIDR format of the ipsec site connection." msgstr "Le sottoreti remote in formato CIDR della connessione al sito ipsec." 
msgid "The request is missing an action or operation parameter" msgstr "Alla richiesta manca un parametro di azione o di operazione" msgid "The request processing has failed due to an internal error" msgstr "" "L'elaborazione della richiesta non è riuscita a causa di un errore interno" msgid "The request signature does not conform to AWS standards" msgstr "La firma della richiesta non è conforme agli standard AWS" msgid "" "The request signature we calculated does not match the signature you provided" msgstr "" "La firma della richiesta calcolata non corrisponde alla firma fornita " "dall'utente" msgid "The requested action is not yet implemented" msgstr "L'azione richiesta non è ancora implementata" #, python-format msgid "The resource %s is already being updated." msgstr "La risorsa %s è già stata aggiornata." msgid "The resource href of the queue." msgstr "L'href della risorsa della coda." msgid "The route mode of the ipsec site connection." msgstr "La modalità di instradamento della connessione al sito ipsec." msgid "The router id." msgstr "L'ID router." msgid "The router to which the vpn service will be inserted." msgstr "Il router in cui il servizio vpn verrà inserito." msgid "The router." msgstr "Il router." msgid "The safety assessment lifetime configuration for the ike policy." msgstr "" "La configurazione della durata di valutazione della sicurezza per la " "politica ike." msgid "The safety assessment lifetime configuration of the ipsec policy." msgstr "" "La configurazione della durata di valutazione della sicurezza della politica " "ipsec." msgid "" "The security group that you can use as part of your inbound rules for your " "LoadBalancer's back-end instances." msgstr "" "Il gruppo di sicurezza che è possibile utilizzare come parte delle regole " "interne per le proprie istanze backend di LoadBalancer." msgid "" "The server could not comply with the request since it is either malformed or " "otherwise incorrect." msgstr "" "Il server non è in grado di soddisfare la richiesta poiché non è nel formato " "corretto o o non è corretto per altri motivi." msgid "The set of parameters passed to this nested stack." msgstr "L'insieme di parametri passati a questo stack nidificato." msgid "The size in GB of the docker volume." msgstr "La dimensione in GB del volume docker." msgid "The size of AutoScalingGroup can not be less than zero" msgstr "La dimensione di AutoScalingGroup non può essere inferiore a zero" msgid "" "The size of the prefix to allocate when the cidr or prefixlen attributes are " "not specified while creating a subnet." msgstr "" "La dimensione del prefisso da allocare quando gli attributi cidr o prefixlen " "non sono specificati durante la creazione di una sottorete." msgid "The size of the swap, in MB." msgstr "La dimensione dello scambio, in MB." msgid "The size of the volume in GB." msgstr "La dimensione del volume in GB." msgid "" "The size of the volume, in GB. It is safe to leave this blank and have the " "Compute service infer the size." msgstr "" "La dimensione del volume in GB. Lasciare questo spazio vuoto e lasciare che " "il servizio Compute imposti la dimensione." msgid "" "The size of the volume, in GB. Must be equal or greater than the size of the " "snapshot. It is safe to leave this blank and have the Compute service infer " "the size." msgstr "" "La dimensione del volume in GB. Deve essere uguale o maggiore della " "dimensione dell'istantanea. È prudente lasciare questo vuoto e lasciare che " "il servizio Compute desuma la dimensione." 
msgid "The snapshot the volume was created from, if any." msgstr "L'istantanea da cui è stato creato il volume, se presente." msgid "The source of certificate request." msgstr "L'origine della richiesta di certificato." #, python-format msgid "The specified reference \"%(resource)s\" (in %(key)s) is incorrect." msgstr "" "Il riferimento specificato \"%(resource)s\" (in %(key)s) non è corretto." msgid "The start and end addresses for the allocation pools." msgstr "Gli indirizzi iniziale e finale per i pool di assegnazione." msgid "The status of the container." msgstr "Lo stato del contenitore." msgid "The status of the firewall." msgstr "Lo stato del firewall." msgid "The status of the ipsec site connection." msgstr "Lo stato della connessione al sito ipsec." msgid "The status of the network." msgstr "Lo stato della rete." msgid "The status of the order." msgstr "Lo stato dell'ordine." msgid "The status of the port." msgstr "Lo stato della porta." msgid "The status of the router." msgstr "Lo stato del router." msgid "The status of the secret." msgstr "Lo stato del segreto." msgid "The status of the vpn service." msgstr "Lo stato del servizio vpn." msgid "" "The string that was stored. This value is also available by referencing the " "resource." msgstr "" "La stringa che è stata memorizzata. Questo valore è disponibile anche " "facendo riferimento alla risorsa." msgid "The subject of the certificate request." msgstr "L'oggetto della richiesta di certificato." msgid "" "The subnet for the port on which the members of the pool will be connected." msgstr "" "La sottorete per la porta sulla quale i membri del pool saranno connessi." msgid "The subnet, either subnet or port should be specified." msgstr "La sottorete, è necessario specificare subnet o port." msgid "The tag key name." msgstr "Il nome chiave del tag." msgid "The tag value." msgstr "Il valore del tag." msgid "The template is not a JSON object or YAML mapping." msgstr "Il template non è un oggetto JSON o associazione YAML." #, python-format msgid "The template section is invalid: %(section)s" msgstr "La sezione template non è valida: %(section)s" #, python-format msgid "The template version is invalid: %(explanation)s" msgstr "La versione del template non è valida: %(explanation)s" msgid "The tenant owning this floating IP." msgstr "Il tenant che possiede questo IP mobile." msgid "The tenant owning this network." msgstr "Il tenant che possiede questa rete." msgid "The time range in seconds." msgstr "L'intervallo di tempo in secondi." msgid "The timestamp indicating volume creation." msgstr "La data/ora che indica la creazione del volume." msgid "The transform protocol of the ipsec policy." msgstr "Il protocollo di trasformazione della politica ipsec." msgid "The type of profile." msgstr "Il tipo di profilo." msgid "The type of senlin policy." msgstr "Il tipo della politica senlin." msgid "The type of the certificate request." msgstr "Il tipo della richiesta di certificato." msgid "The type of the order." msgstr "Il tipo dell'ordine." msgid "The type of the resources in the group." msgstr "Tipo di risorse nel gruppo." msgid "The type of the secret." msgstr "Il tipo del segreto." msgid "The type of the volume mapping to a backend, if any." msgstr "Il tipo di volume in fase di associazione a un backend, se presente." msgid "The type/format the secret data is provided in." msgstr "Il tipo/formato in cui sono forniti i dati del segreto." msgid "The type/mode of the algorithm associated with the secret information." 
msgstr "" "Il tipo/la modalità dell'algoritmo associato alle informazioni sul segreto." msgid "The unencrypted plain text of the secret." msgstr "Il testo normale non crittografato del segreto." msgid "" "The unique identifier of ike policy associated with the ipsec site " "connection." msgstr "" "L'identificativo univoco della politica ike associata alla connessione al " "sito ipsec." msgid "" "The unique identifier of ipsec policy associated with the ipsec site " "connection." msgstr "" "L'identificativo univoco della politica ipsec associata alla connessione al " "sito ipsec." msgid "" "The unique identifier of the router to which the vpn service was inserted." msgstr "" "L'identificativo univoco del router in cui è stato inserito il servizio vpn." msgid "" "The unique identifier of the subnet in which the vpn service was created." msgstr "" "L'identificativo univoco della sottorete in cui è stato inserito il servizio " "vpn." msgid "The unique identifier of the tenant owning the ike policy." msgstr "L'identificativo univoco del tenant che possiede la politica ike." msgid "The unique identifier of the tenant owning the ipsec policy." msgstr "L'identificativo univoco del tenant che possiede la politica ipsec." msgid "The unique identifier of the tenant owning the ipsec site connection." msgstr "" "L'identificativo univoco del tenant che possiede la connessione al sito " "ipsec." msgid "The unique identifier of the tenant owning the vpn service." msgstr "L'identificativo univoco del tenant che possiede il servizio vpn." msgid "" "The unique identifier of vpn service associated with the ipsec site " "connection." msgstr "" "L'identificativo univoco del servizio vpn associato alla connessione al sito " "ipsec." msgid "" "The user-defined region ID and should unique to the OpenStack deployment. " "While creating the region, heat will url encode this ID." msgstr "" "L'ID regione definito dall'utente deve essere univoco per la distribuzione " "OpenStack. Durante la creazione della regione, heat eseguirà la codifica URL " "di questo ID." msgid "" "The value for the socket option TCP_KEEPIDLE. This is the time in seconds " "that the connection must be idle before TCP starts sending keepalive probes." msgstr "" "Il valore dell'opzione socket TCP_KEEPIDLE. Rappresenta il periodo di tempo, " "in secondi, per cui la connessione deve essere inattiva prima che il TCP " "inizi ad inviare probe keepalive." #, python-format msgid "The value must be at least %(min)s." msgstr "Il valore deve essere almeno %(min)s." #, python-format msgid "The value must be in the range %(min)s to %(max)s." msgstr "Il valore deve essere compreso tra %(min)s e %(max)s." #, python-format msgid "The value must be no greater than %(max)s." msgstr "Il valore non può essere maggiore di %(max)s." #, python-format msgid "The values of the \"for_each\" argument to \"%s\" must be lists" msgstr "I valori dell'argomento \"for_each\" per \"%s\" devono essere elenchi" msgid "The version of the ike policy." msgstr "La versione della politica ike." msgid "" "The vnic type to be bound on the neutron port. To support SR-IOV PCI " "passthrough networking, you can request that the neutron port to be realized " "as normal (virtual nic), direct (pci passthrough), or macvtap (virtual " "interface with a tap-like software interface). Note that this only works for " "Neutron deployments that support the bindings extension." msgstr "" "Il tipo vnic da collegare alla porta neutron. 
Per supportare la rete " "passthrough SR-IOV PCI, si può richiedere che la porta neutron venga " "realizzata come normal (nic virtuale), direct (passthrough pci), o macvtap " "(interfaccia virtuale con un'interfaccia software di tipo tap). Notare che " "ciò è valido unicamente per le distribuzioni Neutron che supportano " "l'estensione bindings." msgid "The volume type." msgstr "Il tipo di volume." msgid "The volume used as source, if any." msgstr "Il volume utilizzato come origine, se presente." msgid "The volume_id can be boot or non-boot device to the server." msgstr "" "Il volume_id può essere un'unità di avvio o non di avvio per il server." msgid "The website endpoint for the specified bucket." msgstr "L'endpoint del sito Web per il bucket specificato." #, python-format msgid "There is no rule %(rule)s. List of allowed rules is: %(rules)s." msgstr "" "Non esiste alcuna regola %(rule)s. L'elenco di regole consentite è: " "%(rules)s." msgid "" "There is no such option during 5.0.0, so need to make this attribute " "unsupported, otherwise error will raised." msgstr "" "Questa opzione non esiste nella versione 5.0.0; è quindi necessario rendere " "questo attributo non supportato, altrimenti verrà generato un errore." msgid "" "There is no such option during 5.0.0, so need to make this property " "unsupported while it not used." msgstr "" "Questa opzione non esiste nella versione 5.0.0; è quindi necessario rendere " "questa proprietà non supportata finché non viene utilizzata." #, python-format msgid "" "There was an error loading the definition of the global resource type " "%(type_name)s." msgstr "" "Si è verificato un errore durante il caricamento della definizione del tipo " "di risorsa globale %(type_name)s." msgid "This endpoint is enabled or disabled." msgstr "Questo endpoint è abilitato o disabilitato." msgid "This project is enabled or disabled." msgstr "Questo progetto è abilitato o disabilitato." msgid "This region is enabled or disabled." msgstr "Questa regione è abilitata o disabilitata." msgid "This service is enabled or disabled." msgstr "Questo servizio è abilitato o disabilitato." msgid "Threshold to evaluate against." msgstr "Soglia su cui eseguire la valutazione." msgid "Time To Live (Seconds)." msgstr "Durata (secondi)." msgid "Time of the first execution in format \"YYYY-MM-DD HH:MM\"." msgstr "L'ora della prima esecuzione in formato \"YYYY-MM-DD HH:MM\"." msgid "Time of the next execution in format \"YYYY-MM-DD HH:MM:SS\"." msgstr "L'ora dell'esecuzione successiva in formato \"YYYY-MM-DD HH:MM:SS\"." msgid "" "Timeout for client connections' socket operations. If an incoming connection " "is idle for this number of seconds it will be closed. A value of '0' means " "wait forever." msgstr "" "Timeout per le operazioni socket delle connessioni client. Se una " "connessione in entrata è inattiva per questo numero di secondi, verrà " "chiusa. Il valore '0' indica un'attesa illimitata." msgid "Timeout for creating the bay in minutes. Set to 0 for no timeout." msgstr "" "Timeout per la creazione dell'alloggiamento in minuti. Impostare su 0 per " "nessun timeout." msgid "Timeout in seconds for stack action (ie. create or update)." msgstr "" "Timeout in secondi per l'azione stack (ad esempio creare o aggiornare)." msgid "" "Toggle to enable/disable caching when Orchestration Engine looks for other " "OpenStack service resources using name or id. Please note that the global " "toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use " "this feature."
msgstr "" "Attivare per abilitare/disabilitare la memorizzazione nella cache quando il " "motore di orchestrazione ricerca altre risorse del servizio OpenStack. Per " "utilizzare questa funzione l'attivazione globale per oslo.cache(enabled=True " "in [cache] group) deve essere abilitata." msgid "" "Toggle to enable/disable caching when Orchestration Engine retrieves " "extensions from other OpenStack services. Please note that the global toggle " "for oslo.cache(enabled=True in [cache] group) must be enabled to use this " "feature." msgstr "" "Attivare per abilitare/disabilitare la memorizzazione nella cache quando il " "motore di orchestrazione richiama le estensioni da altri servizi OpenStack. " "Per utilizzare questa funzione l'attivazione globale per oslo." "cache(enabled=True in [cache] group) deve essere abilitata." msgid "" "Toggle to enable/disable caching when Orchestration Engine validates " "property constraints of stack.During property validation with constraints " "Orchestration Engine caches requests to other OpenStack services. Please " "note that the global toggle for oslo.cache(enabled=True in [cache] group) " "must be enabled to use this feature." msgstr "" "Attivare per abilitare/disabilitare la memorizzazione nella cache quando il " "motore di orchestrazione convalida i vincoli di proprietà dello stack. " "Durante la convalida delle proprietà con i vincoli, il motore di " "orchestrazione memorizza nella cache le richieste ad altri servizi " "OpenStack. Per utilizzare questa funzione l'attivazione globale per oslo." "cache(enabled=True in [cache] group) deve essere abilitata." msgid "" "Token for stack-user which can be used for signalling handle when " "signal_transport is set to TOKEN_SIGNAL. None for all other signal " "transports." msgstr "" "Il token per l'utente stack che è possibile utilizzare per la segnalazione " "dell'handle quando signal_transport è impostato su TOKEN_SIGNAL. None per " "tutti gli altri trasporti di segnale." msgid "" "Tokens are not needed for Swift TempURLs. This attribute is being kept for " "compatibility with the OS::Heat::WaitConditionHandle resource." msgstr "" "I token non sono necessari per Swift TempURLs. Questo attributo è stato " "conservato per compatibilità con la risorsa OS::Heat::WaitConditionHandle." msgid "Topic" msgstr "Argomento" msgid "Transform protocol for the ipsec policy." msgstr "Protocollo di trasformazione per la politica ipsec." msgid "True if alarm evaluation/actioning is enabled." msgstr "" "True se la funzione di valutazione/azionamento della segnalazione è " "abilitata." msgid "" "True if the system should remember a generated private key; False otherwise." msgstr "" "True se il sistema deve memorizzare una chiave privata generata; False in " "caso contrario." msgid "Type of access that should be provided to guest." msgstr "Tipo di accesso che deve essere fornito a guest." msgid "Type of adjustment (absolute or percentage)." msgstr "Tipo di regolazione (assoluto o percentuale)." msgid "" "Type of endpoint in Identity service catalog to use for communication with " "the OpenStack service." msgstr "" "Tipo di endpoint nel catalogo del servizio identità da utilizzare per le " "comunicazioni con il servizio OpenStack." msgid "Type of keystone Service." msgstr "Tipo di servizio keystone." msgid "Type of receiver." msgstr "Tipo del ricevitore." msgid "Type of the data source." msgstr "Tipo dell'origine dati." msgid "Type of the notification." msgstr "Tipo della notifica." 
msgid "Type of the object that RBAC policy affects." msgstr "Tipo dell'oggetto su cui influisce la politica RBAC." msgid "Type of the value of the input." msgstr "Tipo di valore dell'input." msgid "Type of the value of the output." msgstr "Tipo di valore dell'output." msgid "Type of the volume to create on Cinder backend." msgstr "Il tipo del volume da creare sul backend Cinder." msgid "URL for API authentication" msgstr "URL per l'autenticazione API" msgid "URL for the data source." msgstr "URL dell'origine dati." msgid "" "URL for the job binary. Must be in the format swift:/// or " "internal-db://." msgstr "" "URL del job binary. Deve essere nel formato swift:/// o " "internal-db://." msgid "" "URL of TempURL where resource will signal completion and optionally upload " "data." msgstr "" "URL di TempURL in cui la risorsa segnalerà il completamento e " "facoltativamente caricherà i dati." msgid "URL of keystone service endpoint." msgstr "URL dell'endpoint del servizio keystone." msgid "URL of the Heat CloudWatch server." msgstr "URL del server Heat CloudWatch." msgid "" "URL of the Heat metadata server. NOTE: Setting this is only needed if you " "require instances to use a different endpoint than in the keystone catalog" msgstr "" "URL del server di metadati Heat. NOTA: l'impostazione di questo valore è " "necessaria solo se si richiede che le istanze utilizzino un endpoint diverso " "da quello presente nel catalogo keystone" msgid "URL of the Heat waitcondition server." msgstr "URL del server Heat waitcondition." msgid "" "URL where the data for this image already resides. For example, if the image " "data is stored in swift, you could specify \"swift://example.com/container/" "obj\"." msgstr "" "URL in cui già risiedono i dati per questa immagine. Ad esempio, se i dati " "dell'immagine sono memorizzati in swift, si potrebbe specificare \"swift://" "example.com/container/obj\"." msgid "UUID of the internal subnet to which the instance will be attached." msgstr "UUID della sottorete interna a cui verrà collegata l'istanza." #, python-format msgid "" "Unable to find neutron provider '%(provider)s', available providers are " "%(providers)s." msgstr "" "Impossibile trovare il provider neutron '%(provider)s', i provider " "disponibili sono %(providers)s." #, python-format msgid "" "Unable to find senlin policy type '%(pt)s', available policy types are " "%(pts)s." msgstr "" "Impossibile trovare il tipo di politica senlin '%(pt)s', i tipi di politica " "disponibili sono %(pts)s." #, python-format msgid "" "Unable to find senlin profile type '%(pt)s', available profile types are " "%(pts)s." msgstr "" "Impossibile trovare il tipo di profilo senlin '%(pt)s', i tipi di profilo " "disponibili sono %(pts)s." #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "Impossibile caricare %(app_name)s dal file di configurazione %(conf_file)s.\n" "Ricevuto: %(e)r" #, python-format msgid "Unable to locate config file [%s]" msgstr "Impossibile individuare il file config [%s]" #, python-format msgid "Unexpected action %(action)s" msgstr "Azione imprevista %(action)s" #, python-format msgid "Unexpected action %s" msgstr "Azione imprevista %s" #, python-format msgid "" "Unexpected properties: %(unexpected)s. Only these properties are allowed for " "%(type)s type of order: %(allowed)s." msgstr "" "Proprietà impreviste: %(unexpected)s. Solo queste proprietà sono consentite " "per tipo di ordine %(type)s: %(allowed)s." 
msgid "Unique identifier for the device." msgstr "Identificativo univoco per il dispositivo." msgid "" "Unique identifier for the ike policy associated with the ipsec site " "connection." msgstr "" "Identificativo univoco per la politica ike associata alla connessione al " "sito ipsec." msgid "" "Unique identifier for the ipsec policy associated with the ipsec site " "connection." msgstr "" "Identificativo univoco per la politica ipsec associata alla connessione al " "sito ipsec." msgid "Unique identifier for the network owning the port." msgstr "Identificativo univoco per la rete che possiede la porta." msgid "" "Unique identifier for the router to which the vpn service will be inserted." msgstr "" "Identificativo univoco per il router in cui il servizio vpn verrà inserito." msgid "" "Unique identifier for the vpn service associated with the ipsec site " "connection." msgstr "" "Identificativo univoco per il servizio vpn associato alla connessione al " "sito ipsec." msgid "" "Unique identifier of the firewall policy to which this firewall rule belongs." msgstr "" "Identificativo univoco della politica del firewall a cui questa regola del " "firewall FirewallRule." msgid "Unique identifier of the firewall policy used to create the firewall." msgstr "" "Identificativo univoco della politica firewall utilizzato per creare il " "firewall." msgid "Unknown" msgstr "Sconosciuto" #, python-format msgid "Unknown Property %s" msgstr "Proprietà sconosciuta %s" #, python-format msgid "Unknown attribute \"%s\"" msgstr "Attributo sconosciuto \"%s\"" #, python-format msgid "Unknown error retrieving %s" msgstr "Errore sconosciuto durante il richiamo di %s" #, python-format msgid "Unknown input %s" msgstr "Input sconosciuto %s" #, python-format msgid "Unknown key(s) %s" msgstr "Chiavi sconosciute %s" msgid "Unknown share_status during creation of share \"{0}\"" msgstr "" "share_status sconosciuto durante la creazione della condivisione \"{0}\"" #, python-format msgid "Unknown status creating Bay '%(name)s' - %(reason)s" msgstr "" "Stato sconosciuto durante la creazione dell'alloggiamento '%(name)s' - " "%(reason)s" msgid "Unknown status during deleting share \"{0}\"" msgstr "Stato sconosciuto durante l'eliminazione della condivisione \"{0}\"" #, python-format msgid "Unknown status updating Bay '%(name)s' - %(reason)s" msgstr "" "Stato sconosciuto durante l'aggiornamento dell'alloggiamento '%(name)s' - " "%(reason)s" #, python-format msgid "Unknown status: %s" msgstr "Stato sconosciuto: %s" #, python-format msgid "" "Unrecognized value \"%(value)s\" for \"%(name)s\", acceptable values are: " "true, false." msgstr "" "Valore \"%(value)s\" non riconosciuto per \"%(name)s\", i valori accettabili " "sono: true, false." #, python-format msgid "Unsupported object type %(objtype)s" msgstr "Tipo di oggetto non supportato %(objtype)s" #, python-format msgid "Unsupported resource '%s' in LoadBalancerNames" msgstr "Risorsa '%s' non supportata in LoadBalancerNames" msgid "Unversioned keystone url in format like http://0.0.0.0:5000." msgstr "URL keystone senza versione in formato http://0.0.0.0:5000." 
#, python-format msgid "Update to properties %(props)s of %(name)s (%(res)s)" msgstr "Aggiorna le proprietà %(props)s di %(name)s (%(res)s)" msgid "Updated At" msgstr "Aggiornato a" msgid "Updating a stack when it is deleting" msgstr "Aggiornamento di uno stack mentre era in fase di eliminazione" msgid "Updating a stack when it is suspended" msgstr "Aggiornamento di uno stack quando è sospeso" msgid "" "Use get_resource|Ref command instead. For example: { get_resource : " " }" msgstr "" "Utilizzare il comando get_resource|Ref. Ad esempio: { get_resource : " " }" msgid "" "Use only with Neutron, to list the internal subnet to which the instance " "will be attached; needed only if multiple exist; list length must be exactly " "1." msgstr "" "Utilizzare solo con Neutron, per elencare la sottorete interna a cui " "l'istanza verrà collegata; necessario solo se ne esistono più di una; la " "lunghezza dell'elenco deve essere esattamente 1." #, python-format msgid "Use property %s" msgstr "Utilizzare la proprietà %s" #, python-format msgid "Use property %s." msgstr "Utilizzare la proprietà %s." msgid "" "Use the `external_gateway_info` property in the router resource to set up " "the gateway." msgstr "" "Utilizzare la proprietà `external_gateway_info` nella risorsa del router per " "configurare il gateway." msgid "" "Use the networks attribute instead of first_address. For example: " "\"{get_attr: [, networks, , 0]}\"" msgstr "" "Utilizzare l'attributo di rete invece di first_address. Ad esempio: " "\"{get_attr: [, reti, , 0]}\"" msgid "Use this resource at your own risk." msgstr "Utilizzare questa risorsa a proprio rischio." #, python-format msgid "User %s in invalid domain" msgstr "Utente %s in dominio non valido" #, python-format msgid "User %s in invalid project" msgstr "Utente %s in progetto non valido" msgid "User ID for API authentication" msgstr "ID utente per l'autenticazione API" msgid "User data to pass to instance." msgstr "Dati utenti da passare all'istanza." msgid "User is not authorized to perform action" msgstr "L'utente non è autorizzato ad eseguire l'azione" msgid "User name to create a user on instance creation." msgstr "Nome utente per creare un utente durante la creazione dell'istanza." msgid "Username associated with the AccessKey." msgstr "Nome utente associato con AccessKey." msgid "Username for API authentication" msgstr "Username per l'autenticazione API" msgid "Username for accessing the data source URL." msgstr "Nome utente per l'accesso all'URL dell'origine dati." msgid "Username for accessing the job binary URL." msgstr "Nome utente per l'accesso all'URL del job binary." msgid "Username of privileged user in the image." msgstr "Nome utente dell'utente privilegiato nell'immagine." msgid "VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN networks." msgstr "ID VLAN per le reti VLAN o ID tunnel per le reti GRE/VXLAN." msgid "VPC ID for this gateway association." msgstr "ID VPC per questa associazione di gateway." msgid "VPC ID for where the route table is created." msgstr "L'ID VPC dove la tabella di instradamento viene creata." msgid "" "Valid values are encrypt or decrypt. The heat-engine processes must be " "stopped to use this." msgstr "" "I valori validi sono encrypt o decrypt. I processi heat-engine devono essere " "arrestati per utilizzare questi valori." #, python-format msgid "Value \"%(val)s\" is invalid for data type \"%(type)s\"." msgstr "Il valore \"%(val)s\" non è valido per il tipo di dati \"%(type)s\"." 
#, python-format msgid "Value '%(value)s' is invalid for '%(name)s' which only accepts integer." msgstr "" "Il valore '%(value)s' non è valido per '%(name)s' che accetta solo numeri " "interi." #, python-format msgid "" "Value '%(value)s' is invalid for '%(name)s' which only accepts non-negative " "integer." msgstr "" "Il valore '%(value)s' non è valido per '%(name)s' che accetta solo numeri " "interinon negativi." #, python-format msgid "Value '%s' is not an integer" msgstr "Il valore '%s' non è un numero intero" #, python-format msgid "Value must be a comma-delimited list string: %s" msgstr "Il valore deve essere una stringa di elenco delimitata da virgole: %s" #, python-format msgid "Value must be of type %s" msgstr "Il valore deve essere di tipo %s" #, python-format msgid "Value must be valid JSON: %s" msgstr "Il valore deve essere un JSON valido: %s" #, python-format msgid "Value must match pattern: %s" msgstr "Il valore deve corrispondere al modello: %s" msgid "" "Value which can be set or changed on stack update to trigger the resource " "for replacement with a new random string. The salt value itself is ignored " "by the random generator." msgstr "" "Il valore che può essere impostato o modificato sull'aggiornamento stack per " "l'attivazione della risorsa per la sostituzione con una nuova stringa " "casuale. Il valore salt stesso viene ignorato dal generatore casuale." msgid "" "Value which can be set to fail the resource operation to test failure " "scenarios." msgstr "" "Il valore che può essere impostato in modo da ottenere un esito negativo " "dell'operazione di risorsa per verificare gli scenari di errore." msgid "" "Value which can be set to trigger update replace for the particular resource." msgstr "" "Il valore che può essere impostato per attivare l'aggiornamento della " "sostituzione per una particolare risorsa." #, python-format msgid "Version %(objver)s of %(objname)s is not supported" msgstr "La versione %(objver)s di %(objname)s non è supportata" msgid "Version for the ike policy." msgstr "Versione per la politica ike." msgid "Version of Hadoop running on instances." msgstr "Versione di Hadoop in esecuzione sulle istanze." msgid "Version of IP address." msgstr "Versione dell'indirizzo IP." msgid "Vip associated with the pool." msgstr "Vip associato al pool." msgid "Volume attachment failed" msgstr "Collegamento al volume non riuscito" msgid "Volume backup failed" msgstr "Backup volume non riuscito" msgid "Volume backup restore failed" msgstr "Ripristino del backup del volume non riuscito" msgid "Volume create failed" msgstr "Creazione volume non riuscita" msgid "Volume detachment failed" msgstr "Scollegamento del volume non riuscito" msgid "Volume in use" msgstr "Volume in uso" msgid "Volume resize failed" msgstr "Ridimensionamento volume non riuscito" msgid "Volumes per node." msgstr "Volumi per nodo." msgid "Volumes to attach to instance." msgstr "Volumi da allegare all'istanza." #, python-format msgid "WaitCondition invalid Handle %s" msgstr "WaitCondition, handle non valido %s" #, python-format msgid "WaitCondition invalid Handle stack %s" msgstr "WaitCondition, stack handle non valido %s" #, python-format msgid "WaitCondition invalid Handle tenant %s" msgstr "WaitCondition, tenant handle non valido %s" msgid "Weight of pool member in the pool (default to 1)." msgstr "Peso del membro del pool nel pool (assume il valore predefinito 1)." msgid "Weight of the pool member in the pool." msgstr "Peso del membro del pool nel pool." 
#, python-format msgid "Went to status %(resource_status)s due to \"%(status_reason)s\"" msgstr "Assunto lo stato %(resource_status)s a causa di \"%(status_reason)s\"" msgid "" "When both ipv6_ra_mode and ipv6_address_mode are set, they must be equal." msgstr "" "Quando sono impostate entrambe le modalità ipv6_ra_mode e ipv6_address_mode, " "devono essere uguali." msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Quando si esegue il server in modalità SSL, è necessario specificare sia un " "valore dell'opzione cert_file che key_file nel file di configurazione" msgid "Whether enable this policy on that cluster." msgstr "Se abilitare o meno questa politica su quel cluster." msgid "" "Whether the address scope should be shared to other tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only, and restricts changing of shared address scope to unshared with " "update." msgstr "" "Se l'ambito dell'indirizzo deve essere condiviso con altri tenant. Da notare " "che l'impostazione predefinita della politica limita l'utilizzo di questo " "attributo solo agli utenti amministrativi e limita la modifica dell'ambito " "dell'indirizzo condiviso a non condiviso con l'aggiornamento." msgid "Whether the flavor is shared across all projects." msgstr "Indica se il flavor è condiviso tra tutti i progetti." msgid "" "Whether the image can be deleted. If the value is True, the image is " "protected and cannot be deleted." msgstr "" "Indica se l'immagine può essere eliminata. Se il valore è True, l'immagine è " "protetta e non può essere eliminata." msgid "Whether the metering label should be shared across all tenants." msgstr "" "Indica se questa etichetta di misurazione deve essere condivisa tra tutti i " "tenant." msgid "Whether the network contains an external router." msgstr "Se la rete contiene un router esterno." msgid "Whether the part content is text or multipart." msgstr "Stabilisce se il contenuto della parte è testo o multiparte." msgid "" "Whether the subnet pool will be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Stabilisce se il pool di sottorete sarà condiviso tra tutti i tenant. Da " "notare che l'impostazione predefinita della politica limita l'utilizzo di " "questo attributo solo utenti amministrativi." msgid "Whether the volume type is accessible to the public." msgstr "Se il tipo di volume è accessibile al pubblico." msgid "Whether this QoS policy should be shared to other tenants." msgstr "Indica se questa politica QoS deve essere condivisa con altri tenant." msgid "" "Whether this firewall should be shared across all tenants. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "Indica se questo firewall deve essere condiviso tra tutti i tenant. NOTA: " "L'impostazione predefinita della politica in Neutron limita l'utilizzo di " "questa proprietà utenti amministrativi." msgid "" "Whether this is default IPv4/IPv6 subnet pool. There can only be one default " "subnet pool for each IP family. Note that the default policy setting " "restricts administrative users to set this to True." msgstr "" "Stabilisce se questo è il pool di sottorete IPv4/IPv6 predefinito. Può " "esistere un solo pool di sottorete predefinito per ciascuna famiglia IP. 
" "L'impostazione della politica predefinita limita gli utenti amministrativi a " "impostare questo valore su True." msgid "Whether this network should be shared across all tenants." msgstr "Stabilisce se questa rete deve essere condivisa tra tutti i tenant." msgid "" "Whether this network should be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Stabilisce se questa rete deve essere condivisa tra tutti i tenant. Da " "notare che l'impostazione predefinita della politica limita l'utilizzo di " "questo attributo solo utenti amministrativi." msgid "" "Whether this policy should be audited. When set to True, each time the " "firewall policy or the associated firewall rules are changed, this attribute " "will be set to False and will have to be explicitly set to True through an " "update operation." msgstr "" "Indica se questa politica deve essere verificata. Quando impostato su true, " "ogni volta che la politica del firewall o le regole del firewall associate " "vengono modificate, questo attributo verrà impostato su False e dovrà essere " "impostato esplicitamente su True mediante un'operazione di aggiornamento." msgid "Whether this policy should be shared across all tenants." msgstr "Indica se questa politica deve essere condivisa tra tutti i tenant." msgid "Whether this rule should be enabled." msgstr "Indica se questa regola deve essere abilitata." msgid "Whether this rule should be shared across all tenants." msgstr "Indica se questa regola deve essere condivisa tra tutti i tenant." msgid "Whether to enable the actions or not." msgstr "Se abilitare o meno le azioni." msgid "Whether to specify a remote group or a remote IP prefix." msgstr "Stabilisce se specificare un gruppo remoto o un prefisso IP remoto." msgid "" "Which lifecycle actions of the deployment resource will result in this " "deployment being triggered." msgstr "" "Le azioni del ciclo di vita della risorsa di distribuzione che determinano " "l'attivazione di questa distribuzione." msgid "" "Workflow additional parameters. If Workflow is reverse typed, params " "requires 'task_name', which defines initial task." msgstr "" "Parametri aggiuntivi del flusso di lavoro. Se il flusso di lavoro è immesso " "in senso inverso, params richiede 'task_name', che definisce l'attività " "iniziale." msgid "Workflow description." msgstr "Descrizione del flusso di lavoro." msgid "Workflow name." msgstr "Nome flusso di lavoro." msgid "Workflow to execute." msgstr "Flusso di lavoro da eseguire." msgid "Workflow type." msgstr "Tipo di flusso di lavoro." #, python-format msgid "Wrong Arguments try: \"%s\"" msgstr "Prova argomenti errati: \"%s\"" msgid "You are not authenticated." msgstr "L'utente non è autenticato." msgid "You are not authorized to complete this action." msgstr "Non si è autorizzati a completare questa azione." #, python-format msgid "You are not authorized to use %(action)s." msgstr "Non si è autorizzati a utilizzare %(action)s." #, python-format msgid "" "You have reached the maximum stacks per tenant, %d. Please delete some " "stacks." msgstr "" "È stato raggiunto il numero massimo di stack per tenant, %d. Eliminare " "alcuni stack." 
#, python-format msgid "could not find user %s" msgstr "impossibile trovare l'utente %s" msgid "deployment_id must be specified" msgstr "deployment_id deve essere specificato" msgid "" "deployments key not allowed in resource metadata with user_data_format of " "SOFTWARE_CONFIG" msgstr "" "distribuzioni chiave non consentite nei metadati della risorsa con " "user_data_format di SOFTWARE_CONFIG" #, python-format msgid "deployments of server %s" msgstr "distribuzioni del server %s" #, python-format msgid "environment has wrong section \"%s\"" msgstr "l'ambiente ha una sezione errata \"%s\"" msgid "error in pool" msgstr "errore nel pool" msgid "error in vip" msgstr "errore in vip" msgid "external network for the gateway." msgstr "rete esterna per il gateway." msgid "granularity should be days, hours, minutes, or seconds" msgstr "" "la granularità deve essere specificata in giorni, ore, minuti o secondi" msgid "heat.conf misconfigured, auth_encryption_key must be 32 characters" msgstr "" "heat.conf configurato in modo errato, auth_encryption_key deve essere 32 " "caratteri" msgid "" "heat.conf misconfigured, cannot specify \"stack_user_domain_id\" or " "\"stack_user_domain_name\" without \"stack_domain_admin\" and " "\"stack_domain_admin_password\"" msgstr "" "heat.conf non è configurato correttamente, impossibile specificare " "\"stack_user_domain_id\" o \"stack_user_domain_name\" senza " "\"stack_domain_admin\" e \"stack_domain_admin_password\"" msgid "ipv6_ra_mode and ipv6_address_mode are not supported for ipv4." msgstr "" "Le modalità ipv6_ra_mode e ipv6_address_mode non sono supportate con ipv4." msgid "limit cannot be less than 4" msgstr "il limite non può essere inferiore a 4" #, python-format msgid "metadata setting for resource %s" msgstr "impostazione di metadati per la risorsa %s" msgid "min/max length must be integral" msgstr "lunghezza min/max deve essere integrale" msgid "min/max must be numeric" msgstr "min/max deve essere numerico" msgid "need more memory." msgstr "è necessaria più memoria." msgid "no resource data found" msgstr "nessun dato di risorsa trovato" msgid "no resources were found" msgstr "nessuna risorsa trovata" msgid "nova server metadata needs to be a Map." msgstr "I metadati del server nova devono essere una mappa." #, python-format msgid "previous_status must be SupportStatus instead of %s" msgstr "previous_status deve essere SupportStatus invece di %s" #, python-format msgid "raw template with id %s not found" msgstr "template raw con id %s non trovato" #, python-format msgid "resource with id %s not found" msgstr "risorsa con id %s non trovato" #, python-format msgid "roles %s" msgstr "ruoli %s" msgid "segmentation_id cannot be specified except 0 for using flat" msgstr "" "segmentation_id non può essere specificato e può essere solo 0 per l'uso " "normale" msgid "segmentation_id must be specified for using vlan" msgstr "Per utilizzare vlan specificare segmentation_id" msgid "segmentation_id not allowed for flat network type." msgstr "segmentation_id non è consentito per il tipo di rete flat." msgid "server_id must be specified" msgstr "specificare server_id" #, python-format msgid "" "task %(task)s contains property 'requires' in case of direct workflow. Only " "reverse workflows can contain property 'requires'." msgstr "" "task %(task)s contiene la proprietà 'requires' in caso di flusso di lavoro " "diretto. Solo i flussi di lavoro inversi possono contenere la proprietà " "'requires'." 
heat-10.0.2/heat/locale/pt_BR/0000775000175000017500000000000013343562672015736 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/pt_BR/LC_MESSAGES/0000775000175000017500000000000013343562672017523 5ustar zuulzuul00000000000000heat-10.0.2/heat/locale/pt_BR/LC_MESSAGES/heat.po0000666000175000017500000077173013343562351021017 0ustar zuulzuul00000000000000# Translations template for heat. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the heat project. # # Translators: # Marcelo Dieder , 2013 # Rodrigo Felix de Almeida , 2014 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: heat VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-23 07:09+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 05:31+0000\n" "Last-Translator: Copied by Zanata \n" "Language: pt_BR\n" "Plural-Forms: nplurals=2; plural=(n > 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: Portuguese (Brazil)\n" #, python-format msgid "\"%%s\" is not a valid keyword inside a %s definition" msgstr "\"%%s\" não é uma palavra-chave válida dentro de uma definição de %s" #, python-format msgid "\"%(fn_name)s\": %(err)s" msgstr "\"%(fn_name)s\": %(err)s" #, python-format msgid "" "\"%(name)s\" params must be strings, numbers, list or map. Failed to json " "serialize %(value)s" msgstr "" "Os parâmetros \"%(name)s\" devem ser sequências, números, lista ou mapa. " "Falha ao serializar JSON %(value)s" #, python-format msgid "" "\"%(section)s\" must contain a map of %(obj_name)s maps. Found a [%(_type)s] " "instead" msgstr "" "\"%(section)s\" deve conter um mapa de mapas do %(obj_name)s. Em vez disso, " "foi localizado um [%(_type)s]" #, python-format msgid "\"%(url)s\" is not a valid SwiftSignalHandle. The %(part)s is invalid" msgstr "\"%(url)s\" não é um SwiftSignalHandle válido. O %(part)s é inválido" #, python-format msgid "\"%(value)s\" does not validate %(name)s" msgstr "\"%(value)s\" não valida %(name)s" #, python-format msgid "\"%(value)s\" does not validate %(name)s (constraint not found)" msgstr "\"%(value)s\" não valida %(name)s (restrição não localizada)" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be one of: %(available)s" msgstr "\"%(version)s\". \"%(version_type)s\" deve ser um de: %(available)s" #, python-format msgid "\"%(version)s\". \"%(version_type)s\" should be: %(available)s" msgstr "\"%(version)s\". 
\"%(version_type)s\" deve ser de: %(available)s" #, python-format msgid "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" msgstr "\"%s\" : [ { \"key1\": \"val1\" }, { \"key2\": \"val2\" } ]" #, python-format msgid "\"%s\" argument must be a string" msgstr "Argumento \"%s\" deve ser uma sequência" #, python-format msgid "\"%s\" can't traverse path" msgstr "\"%s\" não pode atravessar o caminho" #, python-format msgid "\"%s\" deletion policy not supported" msgstr "Política de exclusão \"%s\" não suportada" #, python-format msgid "\"%s\" delimiter must be a string" msgstr "\"%s\" delimitador deve ser uma sequência" #, python-format msgid "\"%s\" is not a list" msgstr "\"%s\" não é uma lista" #, python-format msgid "\"%s\" is not a map" msgstr "\"%s\" não é um mapa" #, python-format msgid "\"%s\" is not a valid ARN" msgstr "\"%s\" não é um ARN válido" #, python-format msgid "\"%s\" is not a valid ARN URL" msgstr "\"%s\" não é uma URL ARN válida" #, python-format msgid "\"%s\" is not a valid Heat ARN" msgstr "\"%s\" não é um ARN Heat válido" #, python-format msgid "\"%s\" is not a valid URL" msgstr "\"%s\" não é uma URL válida" #, python-format msgid "\"%s\" is not a valid boolean" msgstr "\"%s\" não é um booleano válido" #, python-format msgid "\"%s\" is not a valid template section" msgstr "\"%s\" não é uma seção de modelo válido" #, python-format msgid "\"%s\" must operate on a list" msgstr "\"%s\" deve operar em uma lista" #, python-format msgid "\"%s\" param placeholders must be strings" msgstr "Sinalizadores do parâmetro \"%s\" devem ser sequências" #, python-format msgid "\"%s\" parameters must be a mapping" msgstr "Os parâmetros \"%s\" devem ser um mapeamento" #, python-format msgid "\"%s\" params must be a map" msgstr "Parâmetros \"%s\" devem ser um mapa" #, python-format msgid "\"%s\" params must be strings, numbers, list or map." msgstr "Os parâmetros \"%s\" devem ser sequências, números, lista ou mapa" #, python-format msgid "\"%s\" template must be a string" msgstr "Modelo \"%s\" deve ser uma sequência" #, python-format msgid "\"repeat\" syntax should be %s" msgstr "a sintaxe \"repeat\" deve ser %s" #, python-format msgid "%(a)s paused until Hook %(h)s is cleared" msgstr "%(a)s pausado até que o Hook %(h)s seja limpo" #, python-format msgid "%(action)s is not supported for resource." msgstr "%(action)s não é suportada para o recurso." #, python-format msgid "%(action)s is restricted for resource." msgstr "%(action)sé restrita para o recurso." #, python-format msgid "%(desired_capacity)s must be between %(min_size)s and %(max_size)s" msgstr "%(desired_capacity)s deve estar entre %(min_size)s e %(max_size)s" #, python-format msgid "%(error)s%(path)s%(message)s" msgstr "%(error)s%(path)s%(message)s" #, python-format msgid "%(feature)s is not supported." msgstr "%(feature)s não é suportado." #, python-format msgid "" "%(img)s must be provided: Referenced cluster template %(tmpl)s has no " "default_image_id defined." msgstr "" "%(img)s deve ser fornecido: o modelo do cluster referenciado %(tmpl)s não " "possui default_image_id definido." #, python-format msgid "%(lc)s (%(ref)s) reference can not be found." msgstr "%(lc)s (%(ref)s) de referência não pode ser localizado." #, python-format msgid "" "%(lc)s (%(ref)s) requires a reference to the configuration not just the name " "of the resource." msgstr "" "%(lc)s (%(ref)s) requer uma referência para a configuração não só o nome do " "recurso." 
#, python-format msgid "%(len)d of %(count)d received" msgstr "%(len)d de %(count)d recebido" #, python-format msgid "%(len)d of %(count)d received - %(reasons)s" msgstr "%(len)d de %(count)d recebidos - %(reasons)s" #, python-format msgid "%(message)s" msgstr "%(message)s" #, python-format msgid "%(min_size)s can not be greater than %(max_size)s" msgstr "%(min_size)s não pode ser maior que %(max_size)s" #, python-format msgid "%(name)s constraint invalid for %(utype)s" msgstr "%(name)s restrição inválida para %(utype)s" #, python-format msgid "%(prop1)s cannot be specified without %(prop2)s." msgstr "%(prop1)s não pode ser especificado sem %(prop2)s." #, python-format msgid "" "%(prop1)s property should only be specified for %(prop2)s with value " "%(value)s." msgstr "" "A propriedade %(prop1)s deve ser especificada somente para %(prop2)s com " "valor %(value)s." #, python-format msgid "%(resource)s: Invalid attribute %(key)s" msgstr "%(resource)s: Atributo inválido %(key)s" #, python-format msgid "" "%(result)s - Unknown status %(resource_status)s due to \"%(status_reason)s\"" msgstr "" "%(result)s – Status desconhecido %(resource_status)s devido a " "\"%(status_reason)s\"" #, python-format msgid "%(schema)s supplied for %(type)s %(data)s" msgstr "%(schema)s fornecido para %(type)s %(data)s" #, python-format msgid "%(server)s-port-%(number)s" msgstr "%(server)s-port-%(number)s" #, python-format msgid "%(type)s not in valid format: %(error)s" msgstr "%(type)s não no formato válido: %(error)s" #, python-format msgid "%s Key Name must be a string" msgstr "Nome da chave %s deve ser uma sequência" #, python-format msgid "%s Timed out" msgstr "%s Tempo Excedido" #, python-format msgid "%s Value Name must be a string" msgstr "Nome do Valor %s deve ser uma sequência" #, python-format msgid "%s is not a valid job location." msgstr "%s não é um local de tarefa válido." #, python-format msgid "%s is not active" msgstr "%s não está ativo" #, python-format msgid "%s is not an integer." msgstr "%s não é um número inteiro." #, python-format msgid "%s must be provided" msgstr "%s deve ser fornecido" #, python-format msgid "'%(attr)s': expected '%(expected)s', got '%(current)s'" msgstr "'%(attr)s': esperado '%(expected)s', obtido '%(current)s'" msgid "" "'task_name' is not assigned in 'params' in case of reverse type workflow." msgstr "" "'task_name' não é designado em 'params' no caso de um fluxo de trabalho do " "tipo reverso. " msgid "'true' if DHCP is enabled for this subnet; 'false' otherwise." msgstr "" "'true' se o DHCP estiver ativado para esta sub-rede; 'false', caso contrário." msgid "A UUID for the set of servers being requested." msgstr "Um UUID para o conjunto de servidores que estão sendo solicitados." msgid "A bad or out-of-range value was supplied" msgstr "Um valor inválido ou fora do intervalo foi fornecido" msgid "A boolean value of default flag." msgstr "Um valor booleano de sinalizador padrão." msgid "A boolean value specifying the administrative status of the network." msgstr "Um valor booleano especificando o status administrativo da rede." #, python-format msgid "" "A character class and its corresponding %(min)s constraint to generate the " "random string from." msgstr "" "Uma classe de caractere e sua restrição %(min)s correspondente da qual gerar " "a sequência aleatória." #, python-format msgid "" "A character sequence and its corresponding %(min)s constraint to generate " "the random string from." 
msgstr "" "Uma sequência de caracteres e sua restrição %(min)s correspondente da qual " "gerar a sequência aleatória." msgid "A comma-delimited list of server ip addresses. (Heat extension)." msgstr "" "Uma lista delimitada por vírgulas de endereços ip do servidor. (extensão " "Heat)." msgid "A description of the volume." msgstr "Uma descrição do volume." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name. This value is typically vda." msgstr "" "Um nome de dispositivo ao qual o volume será anexado no sistema em /dev/" "device_name. Esse valor é geralmente vda." msgid "" "A device name where the volume will be attached in the system at /dev/" "device_name.e.g. vdb" msgstr "" "Um nome de dispositivo ao qual o volume será anexado no sistema em /dev/" "device_name.e.g. vdb" msgid "" "A dict of all network addresses with corresponding port_id. Each network " "will have two keys in dict, they are network name and network id. The port " "ID may be obtained through the following expression: \"{get_attr: [, " "addresses, , 0, port]}\"." msgstr "" "Um dicionário de todos os endereços de rede com o port_id correspondente. " "Cada rede terá duas chaves no dicionário, que são o nome da rede e o ID da " "rede. O ID da porta poderá ser obtido por meio da seguinte expressão: " "\"{get_attr: [, addresses, , 0, port]}\"." msgid "" "A dict of assigned network addresses of the form: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Each network will have two keys in dict, they " "are network name and network id." msgstr "" "Um dicionário de endereços de rede designados no formato: {\"public\": [ip1, " "ip2...], \"private\": [ip3, ip4], \"public_uuid\": [ip1, ip2...], " "\"private_uuid\": [ip3, ip4]}. Cada rede terá duas chaves no dicionário, que " "são o nome da rede e o ID da rede." msgid "A dict of key-value pairs output from the stack." msgstr "Um dicionário de saída de pares chave-valor da pilha." msgid "A dictionary which contains name and input of the workflow." msgstr "Um dicionário que contém o nome e a entrada para o fluxo de trabalho." msgid "A length constraint must have a min value and/or a max value specified." msgstr "" "Uma restrição de comprimento deve ter um valor mín. e/ou um valor máximo " "especificado." msgid "A list of URLs (webhooks) to invoke when state transitions to alarm." msgstr "" "Uma lista de URLs (webhooks) para chamar quando o estado faz a transição " "para o alarme." msgid "" "A list of URLs (webhooks) to invoke when state transitions to insufficient-" "data." msgstr "" "Uma lista de URLs (webhooks) para chamar quando o estado faz a transição " "para dados insuficientes." msgid "A list of URLs (webhooks) to invoke when state transitions to ok." msgstr "" "Uma lista de URLs (webhooks) para chamar quando o estado faz a transição " "para ok." msgid "A list of access rules that define access from IP to Share." msgstr "" "Uma lista de regras de acesso que definem o acesso a partir do IP para " "Share. " msgid "A list of all rules for the QoS policy." msgstr "Uma lista de todas as regras da política do QoS." msgid "A list of all subnet attributes for the port." msgstr "Uma lista com todos os atributos de sub-rede para a porta." msgid "" "A list of character class and their constraints to generate the random " "string from." msgstr "" "Uma lista de classe de caractere e suas restrições da qual gerar a sequência " "aleatória." 
msgid "" "A list of character sequences and their constraints to generate the random " "string from." msgstr "" "Uma lista de sequências de caracteres e suas restrições da qual gerar a " "sequência aleatória." msgid "A list of cluster instance IPs." msgstr "Uma lista de IPs de instância de cluster." msgid "A list of clusters to which this policy is attached." msgstr "Uma lista de clusters à qual essa política é anexada. " msgid "A list of host route dictionaries for the subnet." msgstr "Uma lista de dicionários de rota do host para a sub-rede." msgid "A list of instances ids." msgstr "Uma lista de IDs de instâncias." msgid "A list of metric ids." msgstr "Uma lista de IDs de métricas." msgid "" "A list of query factors, each comparing a Sample attribute with a value. " "Implicitly combined with matching_metadata, if any." msgstr "" "Uma lista de fatores de consulta, cada um comparando um atributo Sample com " "um valor. Implicitamente combinado com matching_metadata, se houver." msgid "A list of resource IDs for the resources in the chain." msgstr "Uma lista de IDs de recurso para os recursos na cadeia. " msgid "A list of resource IDs for the resources in the group." msgstr "Uma lista de IDs de recurso para os recursos no grupo." msgid "A list of security groups for the port." msgstr "Uma lista de grupos de segurança para a porta." msgid "A list of security services IDs or names." msgstr "Uma lista de IDs ou nomes de serviço de segurança." msgid "A list of string policies to apply. Defaults to anti-affinity." msgstr "Uma lista de políticas de sequência a aplicar. Padrões antiafinidade." msgid "A login profile for the user." msgstr "Um perfil de login para o usuário." msgid "A mandatory input parameter is missing" msgstr "Um parâmetro de entrada obrigatório está faltando" msgid "A map containing all headers for the container." msgstr "Um mapa contendo todos os cabeçalhos para o contêiner." msgid "" "A map of Nova names and captured stderrs from the configuration execution to " "each server." msgstr "" "Um mapa de nomes e sderrs capturados do Nova a partir da execução da " "configuração em cada servidor." msgid "" "A map of Nova names and captured stdouts from the configuration execution to " "each server." msgstr "" "Um mapa de nomes e stdouts capturados do Nova a partir da execução da " "configuração em cada servidor." msgid "" "A map of Nova names and returned status code from the configuration " "execution." msgstr "" "Um mapa de nomes e códigos de status retornados do Nova a partir da execução " "da configuração." msgid "" "A map of files to create/overwrite on the server upon boot. Keys are file " "names and values are the file contents." msgstr "" "Um mapa de arquivos para criar/sobrescrever no servidor após a " "inicialização. As chaves são nomes de arquivos e valores são o conteúdo do " "arquivo." msgid "" "A map of resource names to the specified attribute of each individual " "resource." msgstr "" "Um mapa de nomes de recursos para o atributo especificado de cada recurso " "individual. " msgid "" "A map of resource names to the specified attribute of each individual " "resource. Requires heat_template_version: 2014-10-16." msgstr "" "Um mapa de nomes de recursos para o atributo especificado de cada recurso " "individual. Requer heat_template_version: 2014-10-16." msgid "" "A map of user-defined meta data to associate with the account. Each key in " "the map will set the header X-Account-Meta-{key} with the corresponding " "value." 
msgstr "" "Um mapa de metadados definidos pelo usuário a ser associado à conta. Cada " "chave no mapa irá configurar o cabeçalho de X-Account-Meta--{chave} com o " "valor correspondente." msgid "" "A map of user-defined meta data to associate with the container. Each key in " "the map will set the header X-Container-Meta-{key} with the corresponding " "value." msgstr "" "Um mapa de metadados definidos pelo usuário para associar ao contêiner. Cada " "chave no mapa irá configurar o cabeçalho de X-Container-Meta--{chave} com o " "valor correspondente." msgid "A name used to distinguish the volume." msgstr "Um nome utilizado para distinguir o volume." msgid "" "A per-tenant quota on the prefix space that can be allocated from the subnet " "pool for tenant subnets." msgstr "" "Uma cota por locatário no espaço de prefixo que pode ser alocado a partir do " "conjunto de sub-redes para sub-redes do locatário. " msgid "" "A predefined access control list (ACL) that grants permissions on the bucket." msgstr "" "Uma lista de controle de acesso predefinido (ACL) que concede permissões no " "depósito." msgid "A range constraint must have a min value and/or a max value specified." msgstr "" "Uma restrição de variação deve ter um valor mín. e/ou um valor máximo " "especificado." msgid "" "A reference to the wait condition handle used to signal this wait condition." msgstr "" "Uma referência ao identificador de condição de espera utilizado para " "sinalizar esta condição de espera." msgid "" "A signed url to create executions for workflows specified in Workflow " "resource." msgstr "" "URL assinada para criar execuções para fluxos de trabalho especificados no " "recurso de Fluxo de Trabalho." msgid "A signed url to handle the alarm." msgstr "Uma url assinada para lidar com o alarme." msgid "A signed url to handle the alarm. (Heat extension)." msgstr "Uma url assinada para lidar com o alarme. (extensão Heat)." msgid "A specified set of DNS name servers to be used." msgstr "Um conjunto especificado de servidores de nomes DNS a ser utilizado." msgid "" "A string specifying a symbolic name for the network, which is not required " "to be unique." msgstr "" "Uma sequência que especifica um nome simbólico para a rede, que não precisa " "ser exclusivo." msgid "" "A string specifying a symbolic name for the security group, which is not " "required to be unique." msgstr "" "Uma sequência que especifica um nome simbólico para o grupo de segurança, " "que não precisa ser exclusivo." msgid "A string specifying physical network mapping for the network." msgstr "Uma sequência especificando mapeamento de rede física para a rede." msgid "A string specifying the provider network type for the network." msgstr "Uma sequência que especifica o tipo de rede do provedor para a rede." msgid "A string specifying the segmentation id for the network." msgstr "Uma sequência que especifica o ID da segmentação para a rede." msgid "A symbolic name for this port." msgstr "Um nome simbólico para esta porta." msgid "A url to handle the alarm using native API." msgstr "Uma URL para manipular o alarme usando API nativa." msgid "" "A variable that this resource will use to replace with the current index of " "a given resource in the group. Can be used, for example, to customize the " "name property of grouped servers in order to differentiate them when listed " "with nova client." msgstr "" "Uma variável que este recurso usará para substituir com o índice atual de um " "determinado recurso no grupo. 
Pode ser usada, por exemplo, para customizar a " "propriedade de nome de servidores agrupados para diferenciar quando eles " "estiverem listados com clientes novos." msgid "AWS compatible instance name." msgstr "Nome da instância compatível com AWS." msgid "AWS query string is malformed, does not adhere to AWS spec" msgstr "" "Sequência de consulta AWS está malformada, não adere à especificação AWS" msgid "Access policies to apply to the user." msgstr "Políticas de acesso para aplicar ao usuário." #, python-format msgid "AccessPolicy resource %s not in stack" msgstr "Recurso AccessPolicy %s fora da pilha" #, python-format msgid "Action %s not allowed for user" msgstr "Ação %s não permitida para o usuário" msgid "Action to be performed on the traffic matching the rule." msgstr "Ação a ser executada no tráfego correspondendo a regra." msgid "Actual input parameter values of the task." msgstr "Valores de parâmetro de entrada real da tarefa. " msgid "Add needed policies directly to the task, Policy keyword is not needed" msgstr "" "Incluir políticas necessárias diretamente na tarefa. A palavra-chave Policy " "não é necessária. " msgid "Additional MAC/IP address pairs allowed to pass through a port." msgstr "" "Pares de endereço MAC/IP adicionais permitidos para passar por uma porta." msgid "Additional MAC/IP address pairs allowed to pass through the port." msgstr "Pares de endereço MAC/IP adicionais permitidos para passar pela porta." msgid "Additional routes for this subnet." msgstr "Rotas adicionais para essa sub-rede." msgid "Address family of the address scope, which is 4 or 6." msgstr "Família de endereço do escopo de endereço, que é 4 ou 6." msgid "" "Address of the notification. It could be a valid email address, url or " "service key based on notification type." msgstr "" "Endereço da notificação. Pode ser um endereço de e-mail, uma URL ou uma " "chave de serviço válida com base no tipo de notificação. " msgid "" "Address to bind the server. Useful when selecting a particular network " "interface." msgstr "" "Endereço para ligar o servidor. Útil ao selecionar uma interface de rede " "particular." msgid "Administrative state for the ipsec site connection." msgstr "Estado administrativo para a conexão do site ipsec." msgid "Administrative state for the vpn service." msgstr "Estado administrativo para o serviço de vpn." msgid "" "Administrative state of the firewall. If false (down), firewall does not " "forward packets and will drop all traffic to/from VMs behind the firewall." msgstr "" "Estado administrativo do firewall. Se false (desativado), o firewall não " "encaminhará pacotes e irá descartar todo o tráfego para/das máquinas " "virtuais por trás do firewall." msgid "Administrative state of the router." msgstr "Estado administrativo do roteador." #, python-format msgid "Alarm %(alarm)s could not find scaling group named \"%(group)s\"" msgstr "" "Alarme %(alarm)s não pôde localizar grupo de ajuste de escala denominado " "\"%(group)s\"" #, python-format msgid "Algorithm must be one of %s" msgstr "O algoritmo deve ser um de %s" msgid "All heat engines are down." msgstr "Todos os mecanismos de calor estão desativados." msgid "Allocated floating IP address." msgstr "Endereço de IP flutuante alocado." msgid "Allocation ID for VPC EIP address." msgstr "ID de alocação para endereço EIP VPC." msgid "Allow client's debug log output." msgstr "Permita a saída de log de depuração do cliente." msgid "Allow or deny action for this firewall rule." 
msgstr "Permitir ou negar ação para esta regra de firewall." msgid "Allow orchestration of multiple clouds." msgstr "Permitir orquestração de várias nuvens." msgid "" "Allow reauthentication on token expiry, such that long-running tasks may " "complete. Note this defeats the expiry of any provided user tokens." msgstr "" "Permitir reautenticação na expiração do token, de modo que tarefas de longa " "execução possam ser concluídas. Observe que isso evita a expiração de " "qualquer token do usuário fornecido. " msgid "" "Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At " "least one endpoint needs to be specified." msgstr "" "Terminais keystone permitidos para auth_uri quando multi_cloud está ativado. " "Pelo menos um terminal precisa ser especificado." msgid "" "Allowed tenancy of instances launched in the VPC. default - any tenancy; " "dedicated - instance will be dedicated, regardless of the tenancy option " "specified at instance launch." msgstr "" "Ocupação permitida de instâncias ativadas no VPC. padrão – qualquer " "ocupação; dedicado – instância será dedicada, independentemente da opção de " "ocupação especificada na ativação da instância." #, python-format msgid "Allowed values: %s" msgstr "Valores permitidos: %s" msgid "AllowedPattern must be a string" msgstr "AllowedPattern deve ser uma sequência" msgid "AllowedValues must be a list" msgstr "AllowedValues deve ser uma lista" msgid "Allowing not to store action results after task completion." msgstr "" "Permitindo não armazenar resultados da ação após a conclusão da tarefa. " msgid "" "Allows to synchronize multiple parallel workflow branches and aggregate " "their data. Valid inputs: all - the task will run only if all upstream tasks " "are completed. Any numeric value - then the task will run once at least this " "number of upstream tasks are completed and corresponding conditions have " "triggered." msgstr "" "Permite sincronizar diversas ramificações de fluxo de trabalho paralelas e " "agregar seus dados. Entradas válidas: tudo - a tarefa será executada somente " "se todas as tarefas de envio de dados forem concluídas. Qualquer valor " "numérico - a tarefa será executada quando pelo menos esse número de tarefas " "de envio de dados for concluído e quando as condições correspondentes forem " "acionadas." #, python-format msgid "Ambiguous versions (%s)" msgstr "Versões ambíguas (%s)" msgid "" "Amount of disk space (in GB) required to boot image. Default value is 0 if " "not specified and means no limit on the disk size." msgstr "" "Quantidade de espaço em disco (em GB) necessária para inicializar a imagem. " "O valor padrão é 0 se não especificado e significa que não há nenhum limite " "no tamanho do disco." msgid "" "Amount of ram (in MB) required to boot image. Default value is 0 if not " "specified and means no limit on the ram size." msgstr "" "Quantidade de ram (em MB) necessária para inicializar a imagem. O valor " "padrão é 0, se não especificado e significa que não há limite para o tamanho " "da ram." msgid "An address scope ID to assign to the subnet pool." msgstr "ID do escopo de endereço para designar ao conjunto de sub-redes." msgid "An application health check for the instances." msgstr "Uma verificação de funcionamento do aplicativo para as instâncias." msgid "An ordered list of firewall rules to apply to the firewall." msgstr "" "Uma lista ordenada de regras do firewall a serem aplicadas no firewall." 
msgid "" "An ordered list of nics to be added to this server, with information about " "connected networks, fixed ips, port etc." msgstr "" "Uma lista ordenada de nics a serem incluídos neste servidor, com informações " "sobre redes conectadas, IPs fixos, porta, etc." msgid "An unknown exception occurred." msgstr "Ocorreu uma exceção desconhecida." msgid "" "Any data structure arbitrarily containing YAQL expressions that defines " "workflow output. May be nested." msgstr "" "Qualquer estrutura de dados contendo arbitrariamente expressões YAQL que " "definem a saída do fluxo de trabalho. Pode ser aninhado. " msgid "Anything other than one VPCZoneIdentifier" msgstr "Qualquer coisa diferente de VPCZoneIdentifier" msgid "Api endpoint reference of the instance." msgstr "Referência de terminal da API da instância." msgid "" "Arbitrary key-value pairs specified by the client to help boot a server." msgstr "" "Pares de valores de chave arbitrários especificados pelo cliente para ajudar " "a inicializar um servidor." msgid "" "Arbitrary key-value pairs specified by the client to help the Cinder " "scheduler creating a volume." msgstr "" "Pares de valores de chave arbitrários especificados pelo cliente para ajudar " "o planejador do Cinder a criar um volume." msgid "" "Arbitrary key/value metadata to store contextual information about this " "queue." msgstr "" "Metadados de chave/valor arbitrários para armazenar informações contextuais " "sobre essa fila. " msgid "" "Arbitrary key/value metadata to store for this server. Both keys and values " "must be 255 characters or less. Non-string values will be serialized to JSON " "(and the serialized string must be 255 characters or less)." msgstr "" "Metadados de chave/valor arbitrários a serem armazenados para este servidor. " "As chaves e os valores devem ter 255 caracteres ou menos. Valores não " "sequência serão serializados para JSON (e a sequência serializada deve ter " "255 caracteres ou menos)." msgid "Arbitrary key/value metadata to store information for aggregate." msgstr "" "Metadados de chave/valor arbitrários para armazenar informações do agregado. 
" #, python-format msgid "Argument to \"%s\" must be a list" msgstr "Argumento para \"%s\" deve ser uma lista" #, python-format msgid "Argument to \"%s\" must be a string" msgstr "Argumento para \"%s\" deve ser uma sequência" #, python-format msgid "Argument to \"%s\" must be string or list" msgstr "Argumento para \"%s\" deve ser uma sequência ou lista" #, python-format msgid "Argument to function \"%s\" must be a list of strings" msgstr "O argumento para a função \"%s\" deve ser uma lista de sequências" #, python-format msgid "" "Arguments to \"%s\" can be of the next forms: [resource_name] or " "[resource_name, attribute, (path), ...]" msgstr "" "Argumentos para \"%s\" podem ser dos próximos formatos: [resource_name] ou " "[resource_name, attribute, (path), ...]" #, python-format msgid "Arguments to \"%s\" must be a map" msgstr "Argumentos para \"%s\" devem ser um mapa" #, python-format msgid "Arguments to \"%s\" must be of the form [index, collection]" msgstr "Argumentos para \"%s\" devem estar no formato [índice, coleta]" #, python-format msgid "" "Arguments to \"%s\" must be of the form [resource_name, attribute, " "(path), ...]" msgstr "" "Argumentos para \"%s\" devem ser do formulário [resource_name, attribute, " "(caminho), ...]" #, python-format msgid "Arguments to \"%s\" must be of the form [resource_name, attribute]" msgstr "" "Argumentos para \"%s\" devem estar no formato [nome_do_recurso, atributo]" #, python-format msgid "Arguments to %s not fully resolved" msgstr "Argumentos para %s não totalmente resolvidos" #, python-format msgid "Attempt to delete a stack with id: %(id)s %(msg)s" msgstr "Tentativa de excluir uma pilha com ID: %(id)s%(msg)s" #, python-format msgid "Attempt to delete user creds with id %(id)s that does not exist" msgstr "Tentativa de excluir creds do usuário com o ID %(id)s que não existe" #, python-format msgid "Attempt to delete watch_rule: %(id)s %(msg)s" msgstr "Tentativa de excluir watch_rule: %(id)s%(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(msg)s" msgstr "Tentativa de atualizar uma pilha com ID: %(id)s%(msg)s" #, python-format msgid "Attempt to update a stack with id: %(id)s %(traversal)s %(msg)s" msgstr "Tentativa de atualizar uma pilha com ID: %(id)s %(traversal)s %(msg)s" #, python-format msgid "Attempt to update a watch with id: %(id)s %(msg)s" msgstr "Tentativa de atualizar um relógio com ID: %(id)s%(msg)s" msgid "Attempt to use stored_context with no user_creds" msgstr "Tentativa de usar stored_context sem user_creds" #, python-format msgid "Attribute %(attr)s for facade %(type)s missing in provider" msgstr "Atributo %(attr)s para fachada %(type)s ausente no provedor" msgid "Audit status of this firewall policy." msgstr "Status de auditoria desta política de firewall." msgid "Authentication Endpoint URI." msgstr "URI de Terminal de Autenticação." msgid "Authentication hash algorithm for the ike policy." msgstr "Algoritmo hash de autenticação para a política ike." msgid "Authentication hash algorithm for the ipsec policy." msgstr "Algoritmo hash de autenticação para a política ipsec." msgid "Authorization failed." msgstr "Falha de autorização." msgid "AutoScaling group ID to apply policy to." msgstr "ID do grupo AutoScaling para aplicar à política." msgid "AutoScaling group name to apply policy to." msgstr "Nome do grupo de AutoScaling para aplicar à política." msgid "Availability Zone of the subnet." msgstr "Zona de Disponibilidade da sub-rede." msgid "Availability zone in which you want the subnet." 
msgstr "Zona de disponibilidade em que você deseja a sub-rede." msgid "Availability zone to create servers in." msgstr "Zona de disponibilidade para a qual criar servidores." msgid "Availability zone to create volumes in." msgstr "Zona de disponibilidade para a qual criar volumes." msgid "Availability zone to launch the instance in." msgstr "zona de disponibilidade para ativar a instância." msgid "Backend authentication failed" msgstr "Autenticação de backend falhou" msgid "Binary" msgstr "binário" msgid "Block device mappings for this server." msgstr "Mapeamentos de dispositivo de bloco para esse servidor." msgid "Block device mappings to attach to instance." msgstr "Mapeamentos de dispositivo de bloco para anexar à instância." msgid "Block device mappings v2 for this server." msgstr "Mapeamentos de dispositivo de bloco v2 para este servidor." msgid "" "Boolean extra spec that used for filtering of backends by their capability " "to create share snapshots." msgstr "" "Especificação extra booleana usada para filtragem de backends pela " "capacidade que eles têm de criar capturas instantâneas de compartilhamento. " msgid "Boolean indicating if the volume can be booted or not." msgstr "Booleano indicando se o volume pode ser reinicializado ou não." msgid "Boolean indicating if the volume is encrypted or not." msgstr "Booleano indicando se o volume está criptografado ou não." msgid "" "Boolean indicating whether allow the volume to be attached more than once." msgstr "Booleano indicando se o volume pode ser conectado mais de uma vez." msgid "" "Bus of the device: hypervisor driver chooses a suitable default if omitted." msgstr "" "Barramento do dispositivo: o driver do hypervisor escolherá um padrão " "adequado se omitido." msgid "CIDR block notation for this subnet." msgstr "Notação de bloco CIDR para esta sub-rede." msgid "CIDR block to apply to subnet." msgstr "Bloco CIDR para aplicar a sub-rede." msgid "CIDR block to apply to the VPC." msgstr "Bloco CIDR para aplicar ao VPC." msgid "CIDR of subnet." msgstr "CIDR de sub-rede." msgid "CIDR to be associated with this metering rule." msgstr "CIDR a ser associado a essa regra de medição." #, python-format msgid "Can not specify property \"%s\" if the volume type is public." msgstr "" "Não será possível especificar a propriedade \"%s\" se o tipo de volume for " "público. " #, python-format msgid "Can not use %s property on Nova-network." msgstr "Não é possível usar a propriedade %s na rede Nova." #, python-format msgid "Can't find role %s" msgstr "Não é possível localizar a função %s" msgid "Can't get user token without password" msgstr "Não é possível obter o token do usuário sem senha" msgid "Can't get user token, user not yet created" msgstr "" "Não é possível obter o token do usuário, o usuário ainda não foi criado" msgid "Can't traverse attribute path" msgstr "Não é possível atravessar o caminho do atributo" #, python-format msgid "Cancelling update when stack is %s" msgstr "Cancelando atualização quando a pilha for %s" #, python-format msgid "Cannot call %(method)s on orphaned %(objtype)s object" msgstr "Não é possível chamar %(method)s no objeto órfão %(objtype)s" #, python-format msgid "Cannot check %s, stack not created" msgstr "Não é possível verificar %s; pilha não criada" #, python-format msgid "Cannot define the following properties at the same time: %(props)s." msgstr "" "Não é possível definir as propriedades a seguir ao mesmo tempo: %(props)s." 
#, python-format msgid "" "Cannot establish connection to Heat endpoint at region \"%(region)s\" due to " "\"%(exc)s\"" msgstr "" "Não é possível estabelecer conexão com o terminal Heat na região \"%(region)s" "\" devido a \"%(exc)s\"" msgid "" "Cannot get stack domain user token, no stack domain id configured, please " "fix your heat.conf" msgstr "" "Não é possível obter o token de usuário do domínio de pilha, nenhum id de " "domínio de pilha configurado, corrija o heat.conf" msgid "Cannot migrate to lower schema version." msgstr "Não é possível migrar para uma versão de esquema menor." #, python-format msgid "Cannot modify readonly field %(field)s" msgstr "Não é possível modificar o campo somente leitura %(field)s" #, python-format msgid "Cannot resume %s, resource not found" msgstr "Não é possível continuar %s, recurso não localizado" #, python-format msgid "Cannot resume %s, resource_id not set" msgstr "Não é possível continuar %s, resource_id não configurado" #, python-format msgid "Cannot resume %s, stack not created" msgstr "Não é possível continuar %s; pilha não criada" #, python-format msgid "Cannot suspend %s, resource not found" msgstr "Não é possível suspender %s, recurso não localizado" #, python-format msgid "Cannot suspend %s, resource_id not set" msgstr "Não é possível suspender %s, resource_id não configurado" #, python-format msgid "Cannot suspend %s, stack not created" msgstr "Não é possível suspender %s; pilha não criada" msgid "Captured stderr from the configuration execution." msgstr "stderr capturado da execução de configuração." msgid "Captured stdout from the configuration execution." msgstr "stdout capturado da execução de configuração." #, python-format msgid "Circular Dependency Found: %(cycle)s" msgstr "Dependência Circular Localizada: %(cycle)s" msgid "Client entity to poll." msgstr "Entidade do cliente para pesquisa." msgid "Client name and resource getter name must be specified." msgstr "O nome do cliente e o nome getter do recurso devem ser especificados. " msgid "Client to poll." msgstr "Cliente para pesquisa." msgid "Cluster configs dictionary." msgstr "Dicionário de configurações do cluster." msgid "Cluster information." msgstr "Informações do cluster." msgid "Cluster metadata." msgstr "Metadados do cluster." msgid "Cluster name." msgstr "Nome do cluster." msgid "Cluster status." msgstr "Status do cluster." msgid "Comparison operator." msgstr "Operador de comparação." #, python-format msgid "Concurrent transaction for %(action)s" msgstr "Transação simultânea para %(action)s" msgid "Configuration of session persistence." msgstr "Configuração de persistência de sessão." msgid "" "Configuration script or manifest which specifies what actual configuration " "is performed." msgstr "" "Script de configuração ou manifesto que especifica qual configuração " "realmente é executada." msgid "Configure most important configs automatically." msgstr "Definir as configurações mais importantes automaticamente. " #, python-format msgid "Confirm resize for server %s failed" msgstr "A confirmação do redimensionamento do servidor %s falhou" msgid "Connection info for this network gateway." msgstr "Informações de conexão para este gateway de rede." #, python-format msgid "Container '%(name)s' creation failed: %(code)s - %(reason)s" msgstr "A criação do contêiner '%(name)s' falhou: %(code)s - %(reason)s" msgid "Container format of image." msgstr "Formato do Contêiner da imagem." 
msgid "" "Content of part to attach, either inline or by referencing the ID of another " "software config resource." msgstr "" "Conteúdo de parte para anexar, seja sequencial ou referenciando o ID de " "outro recurso de configuração de software." msgid "Context for this stack." msgstr "Contexto para essa pilha." msgid "Control how the disk is partitioned when the server is created." msgstr "Controlar como o disco é particionado quando o servidor for criado." msgid "Controls DPD protocol mode." msgstr "Controla modo de protocolo DPD." msgid "" "Convenience attribute to fetch the first assigned network address, or an " "empty string if nothing has been assigned at this time. Result may not be " "predictable if the server has addresses from more than one network." msgstr "" "Atributo de conveniência para buscar o endereço de rede designado primeiro, " "ou uma sequência vazia se nada tiver sido designado no momento. O resultado " "não pode ser previsível se o servidor tiver endereços a partir de mais de " "uma rede." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure when signal_transport is set to " "TOKEN_SIGNAL. You can signal success by adding --data-binary '{\"status\": " "\"SUCCESS\"}' , or signal failure by adding --data-binary '{\"status\": " "\"FAILURE\"}'. This attribute is set to None for all other signal transports." msgstr "" "Atributo de conveniência, fornece prefixo do comando da CLI de ondulação, " "que pode ser usado para a conclusão ou falha da manipulação de sinalização " "quando signal_transport for configurado para TOKEN_SIGNAL. É possível " "conseguir um sinal com êxito incluindo --data-binary '{\"status\": \"SUCCESS" "\"}' ou falha de sinal incluindo --data-binary '{\"status\": \"FAILURE\"}'. " "Esse atributo é configurado para None para todos os outros transportes de " "sinal." msgid "" "Convenience attribute, provides curl CLI command prefix, which can be used " "for signalling handle completion or failure. You can signal success by " "adding --data-binary '{\"status\": \"SUCCESS\"}' , or signal failure by " "adding --data-binary '{\"status\": \"FAILURE\"}'." msgstr "" "Atributo de conveniência, fornece prefixo do comando da CLI de ondulação, " "que pode ser usado para a conclusão ou falha da manipulação de sinalização. " "É possível conseguir um sinal com êxito incluindo --data-binary '{\"status" "\": \"SUCCESS\"}' ou falha de sinal incluindo --data-binary '{\"status\": " "\"FAILURE\"}'" msgid "Cooldown period, in seconds." msgstr "Período de resfriamento, em segundos." #, python-format msgid "Could not confirm resize of server %s" msgstr "Não foi possível confirmar o redimensionamento do servidor %s" #, python-format msgid "Could not detach attachment %(att)s from server %(srv)s." msgstr "Não foi possível desconectar o anexo %(att)s do servidor %(srv)s." #, python-format msgid "Could not fetch remote template \"%(name)s\": %(exc)s" msgstr "Não foi possível buscar o modelo remoto \"%(name)s\": %(exc)s" #, python-format msgid "Could not fetch remote template '%(url)s': %(exc)s" msgstr "Não foi possível buscar modelo remoto '%(url)s': %(exc)s" #, python-format msgid "Could not load %(name)s: %(error)s" msgstr "Não foi possível carregar %(name)s: %(error)s" #, python-format msgid "Could not retrieve template: %s" msgstr "Não foi possível recuperar o template: %s" msgid "Create volumes on the same physical port as an instance." msgstr "Cria volumes na mesma porta física de uma instância. 
" msgid "" "Credentials used for swift. Not required if sahara is configured to use " "proxy users and delegated trusts for access." msgstr "" "Credenciais usadas no Swift. Não necessárias se o Sahara estiver configurado " "para usar usuários proxy e confianças delegadas para acesso." msgid "Cron expression." msgstr "Expressão cron." msgid "Current share status." msgstr "Status de compartilhamento atual." msgid "Custom LoadBalancer template can not be found" msgstr "Modelo de balanceador de carga customizado não pôde ser localizado" msgid "DB instance restore point." msgstr "Ponto de restauração da instância do BD." msgid "DNS Domain id or name." msgstr "ID ou nome do Domínio do DNS" msgid "DNS IP address used inside tenant's network." msgstr "Endereço IP do DNS usado dentro da rede do locatário. " msgid "DNS Record type." msgstr "Tipo de Registro do DNS" msgid "DNS domain serial." msgstr "Serial do domínio DNS." msgid "" "DNS record data, varies based on the type of record. For more details, " "please refer rfc 1035." msgstr "" "Os dados de registro do DNS variam com base no tipo de registro. Para obter " "mais detalhes, consulte rfc 1035" msgid "" "DNS record priority. It is considered only for MX and SRV types, otherwise, " "it is ignored." msgstr "" "A prioridade do registro do DNS. Ela é considerada para tipos MX e SRV, caso " "contrário, é ignorada." #, python-format msgid "Data supplied was not valid: %(reason)s" msgstr "Dados fornecidos não eram válidos: %(reason)s" #, python-format msgid "" "Database %(dbs)s specified for user does not exist in databases for resource " "%(name)s." msgstr "" "O banco de dados %(dbs)s especificado para o usuário não existe em bancos de " "dados para o recurso %(name)s." msgid "Database volume size in GB." msgstr "Tamanho do volume do banco de dados em GB." #, python-format msgid "" "Databases property is required if users property is provided for resource %s." msgstr "" "Propriedade do banco de dados é necessária se a propriedade de usuários for " "fornecida para o recurso %s." #, python-format msgid "" "Datastore version %(dsversion)s for datastore type %(dstype)s is not valid. " "Allowed versions are %(allowed)s." msgstr "" "A versão do armazenamento de dados %(dsversion)s para o tipo de " "armazenamento de dados %(dstype)s não é válida. Versões permitidas são " "%(allowed)s." msgid "Datetime when a share was created." msgstr "Data e hora quando um compartilhamento foi criado." msgid "" "Dead Peer Detection protocol configuration for the ipsec site connection." msgstr "" "Configuração do protocolo de Detecção de Ponto Morto para a conexão do site " "ipsec." msgid "Dead engines are removed." msgstr "Mecanismo inativos são removidos." msgid "Default TLS container reference to retrieve TLS information." msgstr "" "A referência do contêiner TLS padrão para recuperar informações do TLS." #, python-format msgid "Default must be a comma-delimited list string: %s" msgstr "Padrão deve ser uma sequência de lista delimitada por vírgulas: %s" msgid "Default name or UUID of the image used to boot Hadoop nodes." msgstr "O nome ou UUID da imagem usada para iniciar os nós do Hadoop." msgid "Default region name used to get services endpoints." msgstr "Nome da região padrão usada para obter terminais de serviços." msgid "Default settings for some of task attributes defined at workflow level." msgstr "" "Configurações padrão de alguns dos atributos de tarefa definidos no nível do " "fluxo de trabalho. " msgid "Default value for the input if none is specified." 
msgstr "Valor padrão para a entrada, se nenhum for especificado." msgid "" "Defines a delay in seconds that Mistral Engine should wait after a task has " "completed before starting next tasks defined in on-success, on-error or on-" "complete." msgstr "" "Define um atraso em segundos que o Mistral Engine deve aguardar após uma " "tarefa ter sido concluída antes de iniciar as próximas tarefas definidas no " "sucesso, no erro ou na conclusão. " msgid "" "Defines a delay in seconds that Mistral Engine should wait before starting a " "task." msgstr "" "Define um atraso em segundos que o Mistral Engine deve aguardar antes de " "iniciar uma tarefa. " msgid "Defines a pattern how task should be repeated in case of an error." msgstr "" "Define um padrão de como a tarefa deve ser repetida no caso de um erro." msgid "" "Defines a period of time in seconds after which a task will be failed " "automatically by engine if hasn't completed." msgstr "" "Define um período de tempo em segundos após o qual uma tarefa falhará " "automaticamente pelo mecanismo se ela não tiver sido concluída. " msgid "Defines if share type is accessible to the public." msgstr "Define se o tipo de compartilhamento é acessível para o público. " msgid "Defines if shared filesystem is public or private." msgstr "Define se o sistema de arquivos compartilhado é público ou privado. " msgid "" "Defines the method in which the request body for signaling a workflow would " "be parsed. In case this property is set to True, the body would be parsed as " "a simple json where each key is a workflow input, in other cases body would " "be parsed expecting a specific json format with two keys: \"input\" and " "\"params\"." msgstr "" "Define o método no qual o corpo da solicitação para sinalizar um fluxo de " "trabalho será analisado. Se essa propriedade for configurada para True, o " "corpo será analisado como um JSON simples em que cada chave é uma entrada de " "fluxo de trabalho e, em outros casos, o corpo será analisado esperando um " "formato JSON específico com duas chaves: \"input\" e \"params\"." msgid "" "Defines whether Mistral Engine should put the workflow on hold or not before " "starting a task." msgstr "" "Define se o Mistral Engine deve ou não colocar o fluxo de trabalho em " "suspensão antes de iniciar a tarefa. " msgid "Defines whether auto-assign security group to this Node Group template." msgstr "" "Define se autodesignar o grupo de segurança para este modelo de Grupo de Nós." #, python-format msgid "" "Defining more than one configuration for the same action in " "SoftwareComponent \"%s\" is not allowed." msgstr "" "Definir mais de uma configuração para a mesma ação no SoftwareComponent \"%s" "\" não é permitido." msgid "Deleting in-progress snapshot" msgstr "Excluindo captura instantânea em andamento" #, python-format msgid "Deleting non-empty container (%(id)s) when %(prop)s is False" msgstr "Excluindo contêiner não vazio (%(id)s) quando %(prop)s for False" #, python-format msgid "Delimiter for %s must be string" msgstr "Delimitador para %s deve ser uma sequência" msgid "" "Denotes that the deployment is in an error state if this output has a value." msgstr "" "Denota que a implementação está em um estado de erro se esta saída tem um " "valor." 
msgid "Deploy data available" msgstr "Dados de implementação disponíveis" #, python-format msgid "Deployment exited with non-zero status code: %s" msgstr "Implementação saiu com o código de status diferente de zero: %s" #, python-format msgid "Deployment to server failed: %s" msgstr "Implementação para o servidor com falha: %s" #, python-format msgid "Deployment with id %s not found" msgstr "Implementação com ID %s não localizada" msgid "Deprecated." msgstr "Descontinuado." msgid "" "Describe time constraints for the alarm. Only evaluate the alarm if the time " "at evaluation is within this time constraint. Start point(s) of the " "constraint are specified with a cron expression, whereas its duration is " "given in seconds." msgstr "" "Descrever restrições de tempo para o alarme. Avalie o alarme somente se o " "horário da avaliação estiver dentro da restrição de tempo. Um ou mais " "pontos de início da restrição são especificados com uma expressão cron, ao " "passo que sua duração é fornecida em segundos. " msgid "Description for the alarm." msgstr "Descrição para o alarme." msgid "Description for the firewall policy." msgstr "Descrição para a política de firewall." msgid "Description for the firewall rule." msgstr "Descrição para a regra de firewall." msgid "Description for the firewall." msgstr "Descrição para o firewall." msgid "Description for the ike policy." msgstr "Descrição para a política ike." msgid "Description for the ipsec policy." msgstr "Descrição para a política de ipsec." msgid "Description for the ipsec site connection." msgstr "Descrição para a conexão do site ipsec." msgid "Description for the time constraint." msgstr "Descrição para a restrição de tempo." msgid "Description for the vpn service." msgstr "Descrição para o serviço de vpn." msgid "Description for this interface." msgstr "Descrição para esta interface." msgid "Description of domain." msgstr "Descrição do domínio." msgid "Description of keystone group." msgstr "Descrição do grupo de keystone." msgid "Description of keystone project." msgstr "Descrição do projeto do keystone." msgid "Description of keystone region." msgstr "Descrição da região do keystone." msgid "Description of keystone service." msgstr "Descrição do serviço do keystone." msgid "Description of keystone user." msgstr "Descrição do usuário do keystone." msgid "Description of record." msgstr "Descrição do registro." msgid "Description of the Node Group Template." msgstr "Descrição do Modelo de Grupo de Nós." msgid "Description of the Sahara Group Template." msgstr "Descrição do Modelo de Grupo do Sahara." msgid "Description of the alarm." msgstr "Descrição do alarme." msgid "Description of the data source." msgstr "Descrição da origem de dados." msgid "Description of the firewall policy." msgstr "Descrição da política de firewall." msgid "Description of the firewall rule." msgstr "Descrição da regra de firewall." msgid "Description of the firewall." msgstr "Descrição do firewall." msgid "Description of the image." msgstr "Descrição da imagem." msgid "Description of the input." msgstr "Descrição da entrada." msgid "Description of the job binary." msgstr "Descrição do binário da tarefa." msgid "Description of the metering label." msgstr "Descrição do rótulo de medição." msgid "Description of the output." msgstr "Descrição da saída." msgid "Description of the pool." msgstr "Descrição do pool." msgid "Description of the security group." msgstr "Descrição do grupo de segurança." msgid "Description of the vip." msgstr "Descrição do vip." 
msgid "Description of the volume type." msgstr "Descrição do tipo de volume." msgid "Description of the volume." msgstr "Descrição do volume." msgid "Description of this Load Balancer." msgstr "Descrição desse Balanceador de Carga." msgid "Description of this listener." msgstr "Descrição desse listener." msgid "Description of this pool." msgstr "Descrição desse conjunto." msgid "Desired IPs for this port." msgstr "IPs desejado para esta porta." msgid "Desired capacity of the cluster." msgstr "Capacidade desejada do cluster." msgid "Desired initial number of instances." msgstr "O número inicial de instâncias." msgid "Desired initial number of resources in cluster." msgstr "O número inicial desejado de recursos no cluster." msgid "Desired initial number of resources." msgstr "O número inicial desejado de recursos." msgid "Desired number of instances." msgstr "Número desejado de instâncias." msgid "DesiredCapacity must be between MinSize and MaxSize" msgstr "DesiredCapacity deve estar entre MinSize e MaxSize" msgid "Destination IP address or CIDR." msgstr "Endereço IP de destino ou CIDR." msgid "Destination ip_address for this firewall rule." msgstr "ip_address de destino para esta regra de firewall." msgid "Destination port number or a range." msgstr "Número da porta de destino ou um intervalo." msgid "Destination port range for this firewall rule." msgstr "Intervalo de porta de destino para esta regra de firewall." msgid "Detailed information about resource." msgstr "Informações detalhadas sobre o recurso." msgid "Device ID of this port." msgstr "ID do Dispositivo desta porta." msgid "Device info for this network gateway." msgstr "Informações de Dispositivo para este gateway de rede." msgid "" "Device type: at the moment we can make distinction only between disk and " "cdrom." msgstr "" "Tipo de dispositivo: no momento, é possível fazer distinção somente entre " "disco e cdrom." msgid "" "Dict, which has expand properties for port. Used only if port property is " "not specified for creating port." msgstr "" "Dicionário, que possui propriedades de expansão para a porta. Usado somente " "se a propriedade de porta não estiver especificada para criação da porta." msgid "Dictionary containing workflow tasks." msgstr "Dicionário contendo tarefas de fluxo de trabalho." msgid "Dictionary of node configurations." msgstr "Dicionário das configurações do nó." msgid "Dictionary of variables to publish to the workflow context." msgstr "" "Dicionário de variáveis para publicar no contexto do fluxo de trabalho." msgid "Dictionary which contains input for workflow." msgstr "Dicionário que contém entrada para o fluxo de trabalho." msgid "" "Dictionary-like section defining task policies that influence how Mistral " "Engine runs tasks. Must satisfy Mistral DSL v2." msgstr "" "Seção tipo dicionário que define políticas de tarefa que influenciam como o " "Mistral Engine executa tarefas. Deve satisfazer o Mistral DSL v2." msgid "DisableRollback and OnFailure may not be used together" msgstr "DisableRollback e OnFailure não podem ser utilizados juntos" msgid "Disk format of image." msgstr "Formato do disco da imagem." msgid "Does not contain a valid AWS Access Key or certificate" msgstr "Não contém uma chave de acesso AWS ou um certificado válidos" msgid "Domain email." msgstr "E-mail do domínio." msgid "Domain name." msgstr "Nome do domínio." #, python-format msgid "Duplicate names %s" msgstr "Nomes duplicados %s" msgid "Duplicate refs are not allowed." msgstr "Referências duplicadas não são permitidas." 
msgid "Duration for the time constraint." msgstr "Duração para a restrição de tempo." msgid "EIP address to associate with instance." msgstr "Endereço EIP para associar a instância." #, python-format msgid "Each %(object_name)s must contain a %(sub_section)s key." msgstr "Cada %(object_name)s deve conter uma chave de %(sub_section)s." msgid "Each Resource must contain a Type key." msgstr "Cada Recurso deve conter uma Chave de tipo." msgid "Ebs is missing, this is required when specifying BlockDeviceMappings." msgstr "" "Ebs está ausente, isto é necessário ao especificar BlockDeviceMappings." msgid "" "Egress rules are only allowed when Neutron is used and the 'VpcId' property " "is set." msgstr "" "As regras de saída são permitidas apenas quando Neutron é usado e a " "propriedade 'VpcId' está configurada." #, python-format msgid "Either %(net)s or %(port)s must be provided." msgstr "%(net)s ou %(port)s deve ser fornecido." msgid "Either 'EIP' or 'AllocationId' must be provided." msgstr "'EIP' ou 'AllocationId' deve ser fornecido." msgid "Either 'InstanceId' or 'LaunchConfigurationName' must be provided." msgstr "O 'InstanceId' ou 'LaunchConfigurationName' deve ser fornecido." #, python-format msgid "Either project or domain must be specified for role %s" msgstr "O projeto ou o domínio deve ser especificado para a função %s. " #, python-format msgid "Either volume_id or snapshot_id must be specified for device mapping %s" msgstr "" "O volume_id ou snapshot_id deve ser especificado para o mapeamento de " "dispositivo %s" msgid "Email address of keystone user." msgstr "Endereço de e-mail do usuário do keystone." msgid "Enable the legacy OS::Heat::CWLiteAlarm resource." msgstr "Ative o recurso OS::Heat::CWLiteAlarm de legado." msgid "Enable the preview Stack Abandon feature." msgstr "Ative o recurso Stack Abandon de visualização." msgid "Enable the preview Stack Adopt feature." msgstr "Ative o recurso Stack Adopt de visualização." msgid "" "Enables Source NAT on the router gateway. NOTE: The default policy setting " "in Neutron restricts usage of this property to administrative users only." msgstr "" "Ativa NAT de Origem no gateway do roteador. NOTA: A configuração de política " "padrão em Neutron restringe o uso desta propriedade somente para usuários " "administrativos." msgid "" "Enables engine with convergence architecture. All stacks with this option " "will be created using convergence engine." msgstr "" "Ativa o mecanismo com arquitetura de convergência. Todas as pilhas com essa " "opção serão criadas usando o mecanismo de convergência." msgid "Enables or disables read-only access mode of volume." msgstr "Ativa ou desativa o modo de acesso somente leitura do volume." msgid "Encapsulation mode for the ipsec policy." msgstr "Modo de encapsulamento para a política ipsec." msgid "" "Encrypt template parameters that were marked as hidden and also all the " "resource properties before storing them in database." msgstr "" "Parâmetros do modelo de criptografia que foram marcados como ocultos e " "também todas as propriedades de recurso antes de armazená-las no banco de " "dados." msgid "Encryption algorithm for the ike policy." msgstr "Algoritmo de criptografia para a política ike." msgid "Encryption algorithm for the ipsec policy." msgstr "Algoritmo de criptografia para a política ipsec." msgid "End address for the allocation pool." msgstr "Endereço de encerramento para o conjunto de alocações. 
" #, python-format msgid "End resizing the group %(group)s" msgstr "Término de redimensionamento do grupo %(group)s" msgid "" "Endpoint/url which can be used for signalling handle when signal_transport " "is set to TOKEN_SIGNAL. None for all other signal transports." msgstr "" "Terminal/URL que pode ser usado para manipulação de sinalização quando " "signal_transport é configurado para TOKEN_SIGNAL None para todos os outros " "transportes de sinal." msgid "Endpoint/url which can be used for signalling handle." msgstr "Terminal/URL que pode ser usado para manipulação de sinalização." msgid "Engine_Id" msgstr "Engine_Id" msgid "Error" msgstr "Erro" #, python-format msgid "Error authorizing action %s" msgstr "Erro ao autorizar ação %s" #, python-format msgid "Error creating ec2 keypair for user %s" msgstr "Erro ao criar par de chaves ec2 para o usuário %s" msgid "" "Error during applying access rules to share \"{0}\". The root cause of the " "problem is the following: {1}." msgstr "" "Erro durante aplicação de regras de acesso ao compartilhamento \"{0}\". A " "causa raiz do problema é a seguinte: {1}." msgid "Error during creation of share \"{0}\"" msgstr "Erro durante a criação do compartilhamento \"{0}\"" msgid "Error during deleting share \"{0}\"." msgstr "Erro durante a exclusão do compartilhamento \"{0}\"" #, python-format msgid "Error validating value '%(value)s'" msgstr "Erro ao validar o valor '%(value)s'" #, python-format msgid "Error validating value '%(value)s': %(message)s" msgstr "Erro ao validar o valor '%(value)s': %(message)s" msgid "Ethertype of the traffic." msgstr "Ethertype do tráfego." msgid "Exclude state for cidr." msgstr "Excluir estado para cidr." #, python-format msgid "Expected 1 external network, found %d" msgstr "Esperado 1 rede externa, localizadas %d" msgid "Export locations of share." msgstr "Exportar locais de compartilhamento." msgid "Expression of the alarm to evaluate." msgstr "Expressão do alarme a ser avaliada." msgid "External fixed IP address." msgstr "Endereço IP fixo externo." msgid "External fixed IP addresses for the gateway." msgstr "Endereços IP fixos externos para o gateway." msgid "External network gateway configuration for a router." msgstr "Configuração de gateway de rede externa para um roteador." msgid "" "Extra parameters to include in the \"floatingip\" object in the creation " "request. Parameters are often specific to installed hardware or extensions." msgstr "" "Parâmetros extras a serem incluídos no objeto \"floatingip\" na solicitação " "de criação. Os parâmetros são geralmente específicos do hardware ou das " "extensões instaladas." msgid "Extra parameters to include in the creation request." msgstr "Parâmetros extras a serem incluídos na solicitação de criação." msgid "Extra parameters to include in the request." msgstr "Parâmetros extras a serem incluídos na solicitação." msgid "" "Extra parameters to include in the request. Parameters are often specific to " "installed hardware or extensions." msgstr "" "Parâmetros extras a serem incluídos na solicitação Os parâmetros são " "geralmente específicos do hardware ou das extensões instaladas." msgid "Extra specs key-value pairs defined for share type." msgstr "" "Pares de chave e valor de especificações extras definidos para o tipo de " "compartilhamento." 
#, python-format msgid "Failed to attach interface (%(port)s) to server (%(server)s)" msgstr "Falha ao conectar a interface (%(port)s) ao servidor (%(server)s)" #, python-format msgid "Failed to attach volume %(vol)s to server %(srv)s - %(err)s" msgstr "Falha ao conectar volume %(vol)s ao servidor %(srv)s - %(err)s" #, python-format msgid "Failed to create Bay '%(name)s' - %(reason)s" msgstr "Falha ao criar o Compartimento '%(name)s' - %(reason)s" #, python-format msgid "Failed to detach interface (%(port)s) from server (%(server)s)" msgstr "Falha ao desconectar a interface (%(port)s) do servidor (%(server)s)" #, python-format msgid "Failed to execute %(action)s for %(cluster)s: %(reason)s" msgstr "Failed ao executar %(action)s para %(cluster)s: %(reason)s" #, python-format msgid "Failed to extend volume %(vol)s - %(err)s" msgstr "Falha ao estender volume %(vol)s - %(err)s" #, python-format msgid "Failed to fetch template: %s" msgstr "Falha para buscar a imagem: %s" #, python-format msgid "Failed to find instance %s" msgstr "Falha ao localizar a instância %s" #, python-format msgid "Failed to find server %s" msgstr "Falha ao localizar o servidor %s" #, python-format msgid "Failed to parse JSON data: %s" msgstr "Falha ao analisar dados JSON: %s" #, python-format msgid "Failed to restore volume %(vol)s from backup %(backup)s - %(err)s" msgstr "" "Falha ao restaurar o volume %(vol)s a partir do backup %(backup)s - %(err)s" msgid "Failed to retrieve template" msgstr "Falha ao recuperar modelo" #, python-format msgid "Failed to retrieve template data: %s" msgstr "Falha ao recuperar dados de modelo: %s" #, python-format msgid "Failed to retrieve template: %s" msgstr "Falha ao recuperar o modelo: %s" #, python-format msgid "" "Failed to send message to stack (%(stack_name)s) on other engine " "(%(engine_id)s)" msgstr "" "Falha ao enviar mensagem à pilha (%(stack_name)s) em outro mecanismo " "(%(engine_id)s)" #, python-format msgid "Failed to stop stack (%(stack_name)s) on other engine (%(engine_id)s)" msgstr "" "Falha ao parar pilha (%(stack_name)s) em outro mecanismo (%(engine_id)s)" #, python-format msgid "Failed to update Bay '%(name)s' - %(reason)s" msgstr "Falha ao atualizar o Compartimento '%(name)s' - %(reason)s" msgid "Failed to update, can not found port info." msgstr "" "Falha na atualização, não é possível localizar as informações da porta." #, python-format msgid "" "Failed validating stack template using Heat endpoint at region \"%(region)s" "\" due to \"%(exc)s\"" msgstr "" "Falha ao validar o modelo de pilha usando o terminal Heat na região " "\"%(region)s\" devido a \"%(exc)s\"" msgid "Fake attribute !a." msgstr "Atributo falso !a." msgid "Fake attribute a." msgstr "Atributo falso a." msgid "Fake property !a." msgstr "Propriedade falsa !a." msgid "Fake property !c." msgstr "Propriedade falsa !c." msgid "Fake property a." msgstr "Propriedade falsa a." msgid "Fake property c." msgstr "Propriedade falsa c." msgid "Fake property ca." msgstr "Propriedade falsa ca." msgid "" "False to trigger actions when the threshold is reached AND the alarm's state " "has changed. By default, actions are called each time the threshold is " "reached." msgstr "" "Falso para acionar ações quando o limite é atingido E o status do alarme é " "alterado. Por padrão, as ações são chamadas toda vez que o limite é atingido." 
#, python-format msgid "Field %(field)s of %(objname)s is not an instance of Field" msgstr "Campo %(field)s de %(objname)s não é uma instância de Campo" msgid "" "Fixed IP address to specify for the port created on the requested network." msgstr "" "Endereço IP fixo a ser especificado para a porta criada na rede solicitada." msgid "Fixed IP addresses." msgstr "Endereços IP fixo." msgid "Fixed IPv4 address for this NIC." msgstr "Endereço IPv4 fixo para este NIC." msgid "Flag indicating if traffic to or from instance is validated." msgstr "" "Sinalizador que indica se o tráfego para ou a partir da instância é validado." msgid "" "Flag to enable/disable port security on the network. It provides the default " "value for the attribute of the ports created on this network." msgstr "" "Sinalizar para ativar/desativar a segurança de porta na rede. Fornece o " "valor padrão para o atributo das portas criadas nessa rede." msgid "" "Flag to enable/disable port security on the port. When disable this " "feature(set it to False), there will be no packages filtering, like security-" "group and address-pairs." msgstr "" "Sinalizar para ativar/desativar a segurança de porta na porta. Ao desativar " "esse recurso (configurá-lo para False), não haverá nenhuma filtragem de " "pacotes, como security-group e address-pairs." msgid "Flavor of the instance." msgstr "Tipo da instância." msgid "Friendly name of the port." msgstr "Nome Amigável da porta." msgid "Friendly name of the router." msgstr "Nome amigável do roteador." msgid "Friendly name of the subnet." msgstr "Nome Amigável da sub-rede." #, python-format msgid "Function \"%s\" must have arguments" msgstr "Função \"%s\" deve ter argumentos" #, python-format msgid "Function \"%s\" usage: [\"\", \"\"]" msgstr "O uso da função \"%s\": [\"\", \"\"]" #, python-format msgid "Gateway IP address \"%(gateway)s\" is in invalid format." msgstr "O endereço IP do gateway \"%(gateway)s\" está no formato inválido." msgid "Gateway network for the router." msgstr "Rede do gateway para o roteador." msgid "Generic HeatAPIException, please use specific subclasses!" msgstr "HeatAPIException genérica, use as subclasses específicas!" msgid "Glance image ID or name." msgstr "ID ou nome de imagem Glance." msgid "Governs permissions set in manila for the cluster ips." msgstr "Controla permissões configuradas no Manila para IPs de cluster." msgid "Granularity to use for age argument, defaults to days." msgstr "Granularidade a ser usada para o argumento da idade, o padrão é dias." msgid "Hadoop cluster name." msgstr "Nome do cluster Hadoop." #, python-format msgid "Header X-Auth-Url \"%s\" not an allowed endpoint" msgstr "O cabeçalho X-Auth-Url \"%s\" não é um terminal permitido" msgid "Health probe timeout, in seconds." msgstr "Tempo limite da análise de funcionamento, em segundos." msgid "" "Heat build revision. If you would prefer to manage your build revision " "separately, you can move this section to a different file and add it as " "another config option." msgstr "" "Revisão de construção de utilização. Se você preferir gerenciar sua revisão " "de construção separadamente, é possível mover essa seção para um arquivo " "diferente e incluí-lo como outra opção de configuração." msgid "Host" msgstr "Host" msgid "Hostname" msgstr "Nome do host" msgid "Hostname of the instance." msgstr "Nome do host da instância." msgid "How long to preserve deleted data." msgstr "Quanto tempo para preservar os dados excluídos." msgid "" "How the client will signal the wait condition. 
CFN_SIGNAL will allow an HTTP " "POST to a CFN keypair signed URL. TEMP_URL_SIGNAL will create a Swift " "TempURL to be signalled via HTTP PUT. HEAT_SIGNAL will allow calls to the " "Heat API resource-signal using the provided keystone credentials. " "ZAQAR_SIGNAL will create a dedicated zaqar queue to be signalled using the " "provided keystone credentials. TOKEN_SIGNAL will allow and HTTP POST to a " "Heat API endpoint with the provided keystone token. NO_SIGNAL will result in " "the resource going to a signalled state without waiting for any signal." msgstr "" "Como o cliente sinaliza a condição de espera. CFN_SIGNAL permite um HTTP " "POST em uma URL sinalizada pelo par de chaves CFN. TEMP_URL_SIGNAL cria uma " "TempURL Swift a ser sinalizada por meio do HTTP PUT . HEAT_SIGNAL permite " "chamadas do resource-signal da API do Heat usando as credenciais do keystone " "fornecidas. ZAQAR_SIGNAL cria uma fila zaqar dedicada a ser sinalizada " "usando as credenciais do keystone fornecidas. TOKEN_SIGNAL permite um HTTP " "POST para um terminal da API do Heat com o token de keystone fornecido. " "NO_SIGNAL resulta no recurso entrando no estado COMPLETE sem aguardar nenhum " "sinal." msgid "" "How the server should receive the metadata required for software " "configuration. POLL_SERVER_CFN will allow calls to the cfn API action " "DescribeStackResource authenticated with the provided keypair. " "POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the " "provided keystone credentials. POLL_TEMP_URL will create and populate a " "Swift TempURL with metadata for polling. ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Como o servidor deve receber os metadados necessários para a configuração do " "software. POLL_SERVER_CFN permite chamadas da ação da API cfn " "DescribeStackResource autenticada com o par de chaves fornecido. " "POLL_SERVER_HEAT permite chamadas do resource-show da API do Heat usando as " "credenciais do keystone fornecidas. POLL_TEMP_URL cria e preenche uma " "TempURL Swift com metadados para pesquisa . ZAQAR_MESSAGE cria uma fila " "zaqar dedicada e posta os metadados para pesquisa." msgid "How the server should signal to heat with the deployment output values." msgstr "" "Como o servidor deve sinalizar para utilização com os valores de saída de " "implementação." msgid "" "How the server should signal to heat with the deployment output values. " "CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL. " "TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT. " "HEAT_SIGNAL will allow calls to the Heat API resource-signal using the " "provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar " "queue to be signaled using the provided keystone credentials. NO_SIGNAL will " "result in the resource going to the COMPLETE state without waiting for any " "signal." msgstr "" "O modo com que o servidor deve sinalizar o Heat com os valores de saída da " "implementação. CFN_SIGNAL permite um HTTP POST em uma URL sinalizada pelo " "par de chaves CFN. TEMP_URL_SIGNAL cria uma TempURL Swift a ser sinalizada " "por meio do HTTP PUT . HEAT_SIGNAL permite chamadas do resource-signal da " "API do Heat usando as credenciais do keystone fornecidas. ZAQAR_SIGNAL cria " "uma fila zaqar dedicada a ser sinalizada usando as credenciais do keystone " "fornecidas. NO_SIGNAL resultará no recurso entrando no estado COMPLETE sem " "aguardar nenhum sinal." 
msgid "" "How the user_data should be formatted for the server. For HEAT_CFNTOOLS, the " "user_data is bundled as part of the heat-cfntools cloud-init boot " "configuration data. For RAW the user_data is passed to Nova unmodified. For " "SOFTWARE_CONFIG user_data is bundled as part of the software config data, " "and metadata is derived from any associated SoftwareDeployment resources." msgstr "" "Como o user_data deve ser formatado para o servidor. Para HEAT_CFNTOOLS, o " "user_data é empacotado como parte dos dados de configuração de inicialização " "heat-cfntools cloud-init. Para RAW, o user_data é transmitido para Nova sem " "modificações. Por SOFTWARE_CONFIG, user_data é empacotado como parte dos " "dados de configuração de software e os metadados são derivados de qualquer " "recurso SoftwareDeployment associado." msgid "Human readable name for the secret." msgstr "Nome legível para o segredo." msgid "Human-readable name for the container." msgstr "Nome legível para o contêiner." msgid "" "ID list of the L3 agent. User can specify multi-agents for highly available " "router. NOTE: The default policy setting in Neutron restricts usage of this " "property to administrative users only." msgstr "" "A lista de ID do agente L3. O usuário pode especificar vários agentes para " "um roteador altamente disponível. NOTA: A configuração de política padrão no " "Neutron restringe o uso desta propriedade apenas para usuários " "administrativos." msgid "ID of an existing port to associate with this server." msgstr "ID de uma porta existente para associar a esse servidor." msgid "" "ID of an existing port with at least one IP address to associate with this " "floating IP." msgstr "" "ID de uma porta existente com pelo menos um endereço IP para associar com " "esse IP flutuante." msgid "ID of network to create a port on." msgstr "ID da rede para criar uma porta." msgid "ID of project for API authentication" msgstr "ID do projeto para autenticação da API" msgid "ID of queue to use for signaling output values" msgstr "O ID da fila a ser usado para sinalizar os valores de saída" msgid "" "ID of resource to apply configuration to. Normally this should be a Nova " "server ID." msgstr "" "ID de recurso ao qual aplicar a configuração. Normalmente isso deve ser um " "ID do servidor Nova." msgid "" "ID of server (VM, etc...) on host that is used for exporting network file-" "system." msgstr "" "ID do servidor (MV, etc...) no host que é usado para exportar o sistema de " "arquivos de rede. " msgid "ID of signal to use for signaling output values" msgstr "O ID de sinal a ser usado para sinalizar os valores de saída" msgid "" "ID of software configuration resource to execute when applying to the server." msgstr "" "ID de recurso de configuração de software para executar ao aplicar ao " "rabbitmq." msgid "ID of the Cluster Template used for Node Groups and configurations." msgstr "" "ID do Modelo do Cluster usado para os Grupos de Nós e para configurações." msgid "ID of the InternetGateway." msgstr "ID do InternetGateway." msgid "" "ID of the L3 agent. NOTE: The default policy setting in Neutron restricts " "usage of this property to administrative users only." msgstr "" "ID do agente L3. NOTA: A configuração de política padrão no Neutron " "restringe o uso desta propriedade apenas para usuários administrativos." msgid "ID of the Node Group Template." msgstr "ID do Modelo do Grupo de Nós." msgid "ID of the VPNGateway to attach to the VPC." msgstr "ID do VPNGateway para se conectar ao VPC." 
msgid "ID of the default image to use for the template." msgstr "ID da imagem padrão a ser usada para o modelo." msgid "ID of the default pool this listener is associated to." msgstr "ID do conjunto padrão ao qual esse listener está associado." msgid "ID of the floating IP to assign to the server." msgstr "ID do IP flutuante a ser designado ao servidor." msgid "ID of the floating IP to associate." msgstr "ID do IP flutuante para associar." msgid "ID of the health monitor associated with this pool." msgstr "ID do monitor de funcionamento associado a esse conjunto." msgid "ID of the image to use for the template." msgstr "ID da imagem a ser usada para o modelo." msgid "ID of the load balancer this listener is associated to." msgstr "ID do balanceador de carga ao qual esse listener está associado. " msgid "ID of the network in which this IP is allocated." msgstr "ID da rede na qual esse IP é alocado." msgid "ID of the port associated with this IP." msgstr "ID da porta associado a esse IP." msgid "ID of the queue." msgstr "ID da fila." msgid "ID of the router used as gateway, set when associated with a port." msgstr "" "ID do roteador utilizado como o gateway, definido quando associado a uma " "porta." msgid "ID of the router." msgstr "ID do roteador." msgid "ID of the server being deployed to" msgstr "ID do servidor que está sendo implementado" msgid "ID of the stack this deployment belongs to" msgstr "ID da implementação desta pilha pertence a" msgid "ID of the tenant to which the RBAC policy will be enforced." msgstr "ID do locatário para o qual a política do RBAC será aplicada." msgid "ID of the tenant who owns the health monitor." msgstr "ID do locatário que possui o monitor de funcionamento." msgid "ID or name of the QoS policy." msgstr "ID ou nome da política do QoS." msgid "ID or name of the RBAC object." msgstr "ID ou nome do objeto RBAC." msgid "ID or name of the external network for the gateway." msgstr "ID ou nome da rede externa para o gateway." msgid "ID or name of the image to register." msgstr "O ID ou nome da imagem para registrar." msgid "ID or name of the load balancer with which listener is associated." msgstr "" "ID ou nome do balanceador de carga com o qual o listener está associado. " msgid "ID or name of the load balancing pool." msgstr "ID ou nome do conjunto de balanceamento de carga. " msgid "" "ID that AWS assigns to represent the allocation of the address for use with " "Amazon VPC. Returned only for VPC elastic IP addresses." msgstr "" "ID que AWS designa para representar a alocação do endereço para uso com a " "Amazon VPC. Retornado somente para endereços IP elásticos VPC" msgid "IP address and port of the pool." msgstr "O endereço IP e a porta do conjunto." msgid "IP address desired in the subnet for this port." msgstr "Endereço IP desejado na sub-rede para esta porta." msgid "IP address for the VIP." msgstr "Endereço IP do VIP." msgid "IP address of the associated port, if specified." msgstr "Endereço IP da porta associada, se especificado." msgid "" "IP address of the floating IP. NOTE: The default policy setting in Neutron " "restricts usage of this property to administrative users only." msgstr "" "Endereço IP do IP flutuante. NOTA: A configuração de política padrão no " "Neutron restringe o uso dessa propriedade somente para usuários " "administrativos. " msgid "IP address of the pool member on the pool network." msgstr "Endereço IP do membro do conjunto na rede do conjunto." msgid "IP address of the pool member." msgstr "Endereço IP do membro do conjunto." 
msgid "IP address of the vip." msgstr "Endereço IP do vip." msgid "IP address to allow through this port." msgstr "Endereço IP para permitir através dessa porta." msgid "IP address to use if the port has multiple addresses." msgstr "Endereço IP a ser utilizado se a porta possuir vários endereços." msgid "" "IP or other address information about guest that allowed to access to Share." msgstr "" "IP ou outras informações de endereço sobre o guest que permitiu acesso ao " "Share." msgid "IPv6 RA (Router Advertisement) mode." msgstr "Modo IPv6 RA (Router Advertisement)." msgid "IPv6 address mode." msgstr "Modo de endereço IPv6." msgid "Id of a resource." msgstr "ID de um recurso." msgid "Id of the manila share." msgstr "ID do compartilhamento Manila." msgid "Id of the tenant owning the firewall policy." msgstr "ID do locatário possui a política de firewall." msgid "Id of the tenant owning the firewall." msgstr "ID do locatário que possui o firewall." msgid "Identifier of the source instance to replicate." msgstr "Identificador da instância de origem para replicar." #, python-format msgid "" "If \"%(size)s\" is provided, only one of \"%(image)s\", \"%(image_ref)s\", " "\"%(source_vol)s\", \"%(snapshot_id)s\" can be specified, but currently " "specified options: %(exclusive_options)s." msgstr "" "Se \"%(size)s\" for fornecido, somente um de \"%(image)s\", \"%(image_ref)s" "\", \"%(source_vol)s\", \"%(snapshot_id)s\" poderá ser especificado, mas as " "opções especificadas atualmente são: %(exclusive_options)s." msgid "If False, closes the client socket connection explicitly." msgstr "Se False, encerra a conexão de soquete do cliente explicitamente." msgid "" "If True, delete any objects in the container when the container is deleted. " "Otherwise, deleting a non-empty container will result in an error." msgstr "" "Se True, exclua quaisquer objetos no contêiner quando o contêiner for " "excluído. Caso contrário, excluir um contêiner que não está vazio resultará " "em um erro." msgid "If True, enable config drive on the server." msgstr "Se True, ative a unidade de configuração no servidor." msgid "" "If configured, it allows to run action or workflow associated with a task " "multiple times on a provided list of items." msgstr "" "Se configurado, permite executar ação ou fluxo de trabalho associado a uma " "tarefa diversas vezes em uma lista de termos fornecida." msgid "If set, then the server's certificate will not be verified." msgstr "Se definido, então o certificado do servidor não será verificado." msgid "If specified, the backup to create the volume from." msgstr "Se especificado, o backup a partir do qual criar o volume." msgid "If specified, the backup used as the source to create the volume." msgstr "Se especificado, o backup utilizado como a origem para criar o volume." msgid "If specified, the name or ID of the image to create the volume from." msgstr "" "Se especificado, o nome ou ID da imagem a partir do qual criar o volume." msgid "If specified, the snapshot to create the volume from." msgstr "" "Se especificado, a captura instantânea a partir da qual criar o volume." msgid "If specified, the type of volume to use, mapping to a specific backend." msgstr "" "Se especificado, o tipo de volume para utilizar, mapeando para um backend " "específico." msgid "If specified, the volume to use as source." msgstr "Se especificado, o volume para utilizar como origem." msgid "" "If the region is hierarchically a child of another region, set this " "parameter to the ID of the parent region." 
msgstr "" "Se a região for hierarquicamente uma filha de outra região, configure esse " "parâmetro para o ID da região pai. " msgid "" "If true, the resources in the chain will be created concurrently. If false " "or omitted, each resource will be treated as having a dependency on the " "previous resource in the list." msgstr "" "Se true, os recursos na cadeia serão criados simultaneamente. Se false ou " "omitido, cada recurso será tratado como tendo uma dependência do recurso " "anterior na lista. " msgid "If without InstanceId, ImageId and InstanceType are required." msgstr "Se sem InstanceId, ImageId e InstanceType forem necessários." #, python-format msgid "Illegal prefix bounds: %(key1)s=%(value1)s, %(key2)s=%(value2)s." msgstr "Limites de prefixo ilegais: %(key1)s=%(value1)s, %(key2)s=%(value2)s." #, python-format msgid "" "Image %(image)s requires %(imram)s minimum ram. Flavor %(flavor)s has only " "%(flram)s." msgstr "" "A imagem %(image)s requer um mínimo de %(imram)s de RAM. O tipo %(flavor)s " "possui somente %(flram)s." #, python-format msgid "" "Image %(image)s requires %(imsz)s GB minimum disk space. Flavor %(flavor)s " "has only %(flsz)s GB." msgstr "" "A imagem %(image)s requer um espaço mínimo em disco de %(imsz)s GB. O tipo " "%(flavor)s possui somente %(flsz)s GB." #, python-format msgid "Image status is required to be %(cstatus)s not %(wstatus)s." msgstr "O status da imagem precisa ser %(cstatus)s, não %(wstatus)s." msgid "Incompatible parameters were used together" msgstr "Parâmetros incompatíveis foram usados em conjunto" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be one of: %(allowed)s" msgstr "Argumentos incorretos para \"%(fn_name)s\" deve ser um de: %(allowed)s" #, python-format msgid "Incorrect arguments to \"%(fn_name)s\" should be: %(example)s" msgstr "Argumentos incorretos para \"%(fn_name)s\" deve ser: %(example)s" msgid "Incorrect arguments: Items to merge must be maps." msgstr "Argumentos incorretos: Itens para mesclar devem ser mapas." #, python-format msgid "" "Incorrect index to \"%(fn_name)s\" should be between 0 and %(max_index)s" msgstr "" "Índice incorreto para \"%(fn_name)s\", deve ser entre 0 e %(max_index)s" #, python-format msgid "Incorrect index to \"%(fn_name)s\" should be: %(example)s" msgstr "Índice incorreto para \"%(fn_name)s\", deve ser: %(example)s" #, python-format msgid "Index to \"%s\" must be a string" msgstr "Índice para \"%s\" deve ser uma sequência" #, python-format msgid "Index to \"%s\" must be an integer" msgstr "Índice para \"%s\" deve ser um número inteiro" msgid "" "Indicate whether the volume should be deleted when the instance is " "terminated." msgstr "" "Indique se o volume deve ser excluído quando a instância for encerrado." msgid "" "Indicate whether the volume should be deleted when the server is terminated." msgstr "Indique se o volume deve ser excluído quando o servidor for encerrado." msgid "Indicates remote IP prefix to be associated with this metering rule." msgstr "Indica prefixo de IP remoto a ser associado a esta regra de medição." msgid "" "Indicates whether or not to create a distributed router. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only. This property can not be used in conjunction with the L3 agent " "ID." msgstr "" "Indica se deve ser criado um roteador distribuído ou não. NOTA: A " "configuração de política padrão em Neutron restringe o uso desta propriedade " "para usuários administrativos apenas. 
Essa propriedade não pode ser usada em " "conjunto com o ID do agente L3." msgid "" "Indicates whether or not to create a highly available router. NOTE: The " "default policy setting in Neutron restricts usage of this property to " "administrative users only. And now neutron do not support distributed and ha " "at the same time." msgstr "" "Indica se deve ser criado ou não um roteador altamente disponível. NOTA: A " "configuração de política padrão em Neutron restringe o uso desta propriedade " "para apenas usuários administrativos. E atualmente o Neutron não suporta " "roteadores distribuídos e de alta disponibilidade ao mesmo tempo." msgid "Indicates whether this firewall rule is enabled or not." msgstr "Indica se esta regra de firewall está ativada ou não." msgid "Information used to configure the bucket as a static website." msgstr "" "Informações utilizadas para configurar o depósito como um website estático." msgid "Initiator state in lowercase for the ipsec site connection." msgstr "Estado de inicializador em minúsculo para a conexão do site ipsec." #, python-format msgid "Input in signal data must be a map, find a %s" msgstr "Os dados de sinal de entrada devem ser um mapa; foi encontrado um %s." msgid "Input values for the workflow." msgstr "Valores de entrada para o fluxo de trabalho." msgid "Input values to apply to the software configuration on this server." msgstr "" "Valores de entrada para aplicar à configuração de software neste servidor." msgid "Instance ID to associate with EIP specified by EIP property." msgstr "" "ID da Instância para associar com EIP especificado pela propriedade EIP." msgid "Instance ID to associate with EIP." msgstr "ID da Instância para associar a EIP." msgid "Instance connection to CFN/CW API validate certs if SSL is used." msgstr "" "Conexão da instância para API de CFN/CW validar certificados se o SSL for " "usado." msgid "Instance connection to CFN/CW API via https." msgstr "Conexão da instância para API de CFN/CW via https." #, python-format msgid "Instance is not ACTIVE (was: %s)" msgstr "A instância não está ACTIVE (era: %s)" #, python-format msgid "" "Instance metadata must not contain greater than %s entries. This is the " "maximum number allowed by your service provider" msgstr "" "Metadados da instância não devem conter mais de %s entradas. Este é o número " "máximo permitido pelo seu provedor de serviços" msgid "Interface type of keystone service endpoint." msgstr "Tipo de interface do terminal de serviço do keystone." msgid "Internet protocol version." msgstr "Versão de protocolo da Internet." 
#, python-format msgid "Invalid %s, expected a mapping" msgstr "%s inválido, esperado um mapeamento" #, python-format msgid "Invalid CRON expression: %s" msgstr "Expressão CRON inválida %s" #, python-format msgid "Invalid Parameter type \"%s\"" msgstr "Tipo de parâmetro inválido \"%s\"" #, python-format msgid "Invalid Property %s" msgstr "Propriedade Inválida %s" msgid "Invalid Stack address" msgstr "Endereço Stack inválido" msgid "Invalid Template URL" msgstr "URL do Template inválida" #, python-format msgid "Invalid URL scheme %s" msgstr "Esquema de URL inválido %s" #, python-format msgid "Invalid UUID version (%d)" msgstr "Versão de UUID inválida (%d)" #, python-format msgid "Invalid action %s" msgstr "Ação inválida %s" #, python-format msgid "Invalid action %s specified" msgstr "Ação %s especificada é inválida" #, python-format msgid "Invalid adopt data: %s" msgstr "Dados de adoção inválidos: %s" #, python-format msgid "Invalid cloud_backend setting in heat.conf detected - %s" msgstr "Configuração de cloud_backend inválido no heat.conf detectado - %s" #, python-format msgid "Invalid codes in ignore_errors : %s" msgstr "Códigos inválidos em ignore_errors : %s" #, python-format msgid "Invalid content type %(content_type)s" msgstr "Tipo de conteúdo inválido %(content_type)s" #, python-format msgid "Invalid default %(default)s (%(exc)s)" msgstr "Padrão inválido %(default)s (%(exc)s)" #, python-format msgid "Invalid deletion policy \"%s\"" msgstr "Política de exclusão inválida \"%s\"" #, python-format msgid "Invalid filter parameters %s" msgstr "Parâmetros de filtro inválidos %s" #, python-format msgid "Invalid hook type \"%(hook)s\" for %(resource)s" msgstr "Tipo de gancho inválido \"%(hook)s\" para %(resource)s" #, python-format msgid "" "Invalid hook type \"%(value)s\" for resource breakpoint, acceptable hook " "types are: %(types)s" msgstr "" "Tipo de gancho inválido \"%(value)s\" para o ponto de interrupção do " "recurso, os tipos de gancho aceitáveis são: %(types)s" #, python-format msgid "Invalid key %s" msgstr "Chave inválida %s" #, python-format msgid "Invalid key '%(key)s' for %(entity)s" msgstr "Chave inválida '%(key)s' para %(entity)s" #, python-format msgid "Invalid keys in resource mark unhealthy %s" msgstr "Chaves inválidas na marca de recurso com mau funcionamento %s" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "Combinação inválida de formatos de disco e contêiner. Ao configurar um " "formato de disco ou contêiner para um destes, 'aki', 'ari' ou 'ami', os " "formatos de contêiner e disco devem corresponder." #, python-format msgid "Invalid parameter constraints for parameter %s, expected a list" msgstr "" "Restrições de parâmetro inválidas para o parâmetro %s; esperada uma lista" #, python-format msgid "" "Invalid restricted_action type \"%(value)s\" for resource, acceptable " "restricted_action types are: %(types)s" msgstr "" "restricted_action type \"%(value)s\" inválido para o recurso, os " "restricted_action types aceitáveis são: %(types)s" #, python-format msgid "" "Invalid stack name %s must contain only alphanumeric or \"_-.\" characters, " "must start with alpha and must be 255 characters or less." 
msgstr "" "Nome de pilha inválido %s, deve conter apenas caracteres alfanuméricos ou " "\"_-.\" e deve começar com alfabético" #, python-format msgid "Invalid stack name %s, must be a string" msgstr "Nome de pilha inválido %s, deve ser uma sequência" #, python-format msgid "Invalid status %s" msgstr "Status inválido %s" #, python-format msgid "Invalid support status and should be one of %s" msgstr "Status de suporte inválido, que deve ser um de %s" #, python-format msgid "Invalid tag, \"%s\" contains a comma" msgstr "Tag inválida, \"%s\" contém uma vírgula" #, python-format msgid "Invalid tag, \"%s\" is longer than 80 characters" msgstr "Tag inválida, \"%s\" possui mais de 80 caracteres" #, python-format msgid "Invalid tag, \"%s\" is not a string" msgstr "Tag inválida, \"%s\" não é uma sequência" #, python-format msgid "Invalid tags, not a list: %s" msgstr "Tags inválidas, não é uma lista: %s" #, python-format msgid "Invalid template type \"%(value)s\", valid types are: cfn, hot." msgstr "Tipo de modelo inválido \"%(value)s\", os tipos válidos são: cfn, hot." #, python-format msgid "Invalid timeout value %s" msgstr "Valor de tempo limite inválido %s" #, python-format msgid "Invalid timezone: %s" msgstr "Fuso horário inválido: %s" #, python-format msgid "Invalid type (%s)" msgstr "Tipo inválido (%s)" msgid "Ip allocation pools and their ranges." msgstr "Conjuntos de alocação de IP e seus intervalos." msgid "Ip of the subnet's gateway." msgstr "IP de gateway da sub-rede." msgid "Ip version for the subnet." msgstr "Versão IP da sub-rede." msgid "Ip_version for this firewall rule." msgstr "Ip_version para esta regra de firewall." msgid "It defines an executor to which task action should be sent to." msgstr "Define um executor para o qual a ação da tarefa deve ser enviada." #, python-format msgid "Items to join must be string, map or list not %s" msgstr "Os itens para junção devem ser sequência , mapa ou lista, não %s" #, python-format msgid "Items to join must be string, map or list. %s failed json serialization" msgstr "" "Os itens para junção devem ser sequência , mapa ou lista. %s falhou na " "serialização de JSON." #, python-format msgid "Items to join must be strings not %s" msgstr "Os itens para junção devem ser sequências não %s" #, python-format msgid "" "JSON body size (%(len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "O tamanho do corpo JSON (%(len)s bytes) excede o tamanho máximo permitido " "(%(limit)s bytes)." msgid "JSON data that was uploaded via the SwiftSignalHandle." msgstr "" "Dados JSON que foram transferidos por upload através do SwiftSignalHandle." msgid "" "JSON serialized map that includes the endpoint, token and/or other " "attributes the client must use for signalling this handle. The contents of " "this map depend on the type of signal selected in the signal_transport " "property." msgstr "" "Mapa serializado JSON que inclui o terminal, o token e/ou outros atributos " "que o cliente deve usar para sinalizar essa manipulação. O conteúdo desse " "mapa depende do tipo de sinal selecionado na propriedade signal_transport." msgid "" "JSON string containing data associated with wait condition signals sent to " "the handle." msgstr "" "A sequência JSON contendo dados associados aos sinais de condição de espera " "enviados para a manipulação." msgid "" "Key used to encrypt authentication info in the database. Length of this key " "must be 32 characters." msgstr "" "A chave usada para criptografar informações de autenticação no banco de " "dados. 
O comprimento dessa chave deve ser de 32 caracteres." msgid "Key/Value pairs to extend the capabilities of the flavor." msgstr "Pares de chave/valor para estender os recursos do tipo." msgid "Key/value pairs associated with the volume in raw dict form." msgstr "" "Pares de chave/valor associados ao volume na forma bruta de dicionário." msgid "Key/value pairs associated with the volume." msgstr "Pares de chave/valor associado ao volume." msgid "Key/value pairs to associate with the volume." msgstr "Pares de chave/valor para associar com o volume." msgid "Keypair added to instances to make them accessible for user." msgstr "" "Par de chaves incluído nas instâncias para torná-las acessíveis para o " "usuário." msgid "Keypair secret key." msgstr "Chave secreta de par de chaves." msgid "" "Keystone domain ID which contains heat template-defined users. If this " "option is set, stack_user_domain_name option will be ignored." msgstr "" "ID do domínio do Keystone que contém usuários definidos do modelo do heat. " "Se esta opção estiver configurada, a opção stack_user_domain_name será " "ignorada." msgid "" "Keystone domain name which contains heat template-defined users. If " "`stack_user_domain_id` option is set, this option is ignored." msgstr "" "Nome do domínio do keystone que contém os usuários definidos por modelo do " "heat. Se a opção ‘stack_user_domain_id’ estiver configurada, essa opção será " "ignorada." msgid "Keystone domain." msgstr "Domínio do keystone." #, python-format msgid "" "Keystone has more than one service with same name %(service)s. Please use " "service id instead of name" msgstr "" "O keystone possui mais de um serviço com o mesmo nome %(service)s. Use o ID " "de serviço em vez do nome. " msgid "Keystone password for stack_domain_admin user." msgstr "Senha do Keystone para o usuário stack_domain_admin." msgid "Keystone project." msgstr "Projeto do keystone." msgid "Keystone role for heat template-defined users." msgstr "Função do keystone para usuários definidos por modelo do heat." msgid "Keystone role." msgstr "Função do keystone." msgid "Keystone user group." msgstr "Grupo de usuários do keystone." msgid "Keystone user groups." msgstr "Grupos de usuários do keystone." msgid "Keystone user is enabled or disabled." msgstr "O usuário do keystone é ativado ou desativado." msgid "" "Keystone username, a user with roles sufficient to manage users and projects " "in the stack_user_domain." msgstr "" "Nome de usuário do Keystone, um usuário com funções suficientes para " "gerenciar usuários e projetos no stack_user_domain." msgid "L2 segmentation strategy on the external side of the network gateway." msgstr "Estratégia de segmentação L2 no lado externo do gateway de rede." msgid "LBaaS provider to implement this load balancer instance." msgstr "" "Provedor LBaaS para implementar essa instância do balanceador de carga. " msgid "Length of OS_PASSWORD after encryption exceeds Heat limit (255 chars)" msgstr "" "Comprimento de OS_PASSWORD após a criptografia excede o limite do Heat (255 " "caracteres)" msgid "Length of the string to generate." msgstr "Comprimento da sequência a ser gerada." msgid "" "Length property cannot be smaller than combined character class and " "character sequence minimums" msgstr "" "A propriedade de Comprimento não pode ser menor que a classe de caractere " "combinada e sequência de caracteres mínimos" msgid "Level of access that need to be provided for guest." msgstr "Nível de acesso que precisa ser fornecido para o guest. 
" msgid "" "Lifecycle actions to which the configuration applies. The string values " "provided for this property can include the standard resource actions CREATE, " "DELETE, UPDATE, SUSPEND and RESUME supported by Heat." msgstr "" "Ações do ciclo de vida para as quais a configuração se aplica. Os valores da " "sequência fornecidos para esta propriedade podem incluir as ações padrão do " "recurso CRIAR, EXCLUIR, ATUALIZAR, SUSPENDER e RETOMAR suportadas pelo Heat." msgid "List of LoadBalancer resources." msgstr "Lista de recursos LoadBalancer." msgid "List of Security Groups assigned on current LB." msgstr "Lista de Grupos de Segurança designados no LB atual." msgid "List of TLS container references for SNI." msgstr "Lista de referências do contêiner TLS para SNI." msgid "List of database instances." msgstr "Lista de instâncias de banco de dados." msgid "List of databases to be created on DB instance creation." msgstr "" "Lista de bancos de dados a serem criados na criação da instância do BD." msgid "List of directories to search for plug-ins." msgstr "Lista de diretórios para procurar plug-ins." msgid "List of dns nameservers." msgstr "Lista de nameservers dns." msgid "List of firewall rules in this firewall policy." msgstr "Lista de regras de firewall nesta política de firewall." msgid "List of health monitors associated with the pool." msgstr "Lista de monitores de saúde associados ao conjunto." msgid "List of hosts to join aggregate." msgstr "Lista de hosts para unir o agregado. " msgid "List of manila shares to be mounted." msgstr "Lista de compartilhamentos Manila a serem montados. " msgid "List of network interfaces to create on instance." msgstr "Lista de interfaces de rede para criar na instância." msgid "List of processes to enable anti-affinity for." msgstr "Lista de processos para os quais a antiafinidade deve ser ativada." msgid "List of processes to run on every node." msgstr "Lista de processes a serem executados em cada nó." msgid "List of role assignments." msgstr "Lista de designações da função." msgid "List of security group IDs associated with this interface." msgstr "Lista de IDs de grupo de segurança associada a esta interface." msgid "List of security group egress rules." msgstr "Lista de regras de egresso do grupo de segurança." msgid "List of security group ingress rules." msgstr "Lista de regras de ingresso do grupo de segurança." msgid "" "List of security group names or IDs to assign to this Node Group template." msgstr "" "Lista de nomes ou IDs de grupo de segurança para designar para esse modelo " "de Grupo de Nós." msgid "" "List of security group names or IDs. Cannot be used if neutron ports are " "associated with this server; assign security groups to the ports instead." msgstr "" "Lista de nomes ou IDs de grupo de segurança. Não pode ser utilizado se as " "portas neutron estão associadas a esse servidor; ao invés disso, designe " "grupos de segurança para as portas." msgid "List of security group rules." msgstr "Lista de regras de grupo de segurança." msgid "List of subnet prefixes to assign." msgstr "Lista de prefixos de sub-rede a serem designados. " msgid "List of tags associated with this interface." msgstr "Lista de tags associadas a esta interface." msgid "List of tags to attach to the instance." msgstr "Lista de tags para se conectar à instância." msgid "List of tags to attach to this resource." msgstr "Lista de tags para anexar a este recurso." msgid "List of tags to be attached to this resource." 
msgstr "Lista de tags a serem conectadas a este recurso." msgid "" "List of tasks which should be executed before this task. Used only in " "reverse workflows." msgstr "" "Lista de tarefas que deverão ser executadas antes dessa tarefa. Usada " "somente em fluxos de trabalho reversos. " msgid "" "List of tasks which will run after the task has completed regardless of " "whether it is successful or not." msgstr "" "Lista de tarefas que serão executadas após a tarefa ter sido concluída, " "independentemente se foi bem-sucedida ou não. " msgid "List of tasks which will run after the task has completed successfully." msgstr "" "Lista de tarefas que serão executadas após a tarefa ter sido concluída com " "sucesso. " msgid "" "List of tasks which will run after the task has completed with an error." msgstr "" "Lista de tarefas que serão executadas após a tarefa ter sido concluída com " "um erro. " msgid "List of users to be created on DB instance creation." msgstr "Lista de usuários a serem criados na criação da instância do BD." msgid "" "List of workflows' executions, each of them is a dictionary with information " "about execution. Each dictionary returns values for next keys: id, " "workflow_name, created_at, updated_at, state for current execution state, " "input, output." msgstr "" "Lista de execuções do fluxo de trabalho, em que cada uma delas é um " "dicionário com informações sobre a execução. Cada dicionário retorna valores " "para as próximas chaves: workflow_name, created_at, updated_at, estado para " "o estado da execução atual, entrada e saída. ." msgid "Listener associated with this pool." msgstr "Listener associado a esse conjunto." msgid "" "Local path on each cluster node on which to mount the share. Defaults to '/" "mnt/{share_id}'." msgstr "" "Caminho local em cada nó do cluster no qual montar o compartilhamento. " "Padronizado para '/mnt/{share_id}'." msgid "Location of the SSL certificate file to use for SSL mode." msgstr "Local do arquivo de certificado SSL a ser usado para o modo SSL." msgid "Location of the SSL key file to use for enabling SSL mode." msgstr "Local do arquivo de chave SSL para usar para ativação do modo SSL." msgid "MAC address of the port." msgstr "Endereço MAC da porta." msgid "MAC address to allow through this port." msgstr "Endereço MAC para permitir através dessa porta." msgid "Map between role with either project or domain." msgstr "Mapear entre a função com o projeto ou domínio." msgid "" "Map containing options specific to the configuration management tool used by " "this resource." msgstr "" "Mapa contendo opções específicas para a ferramenta de gerenciamento de " "configuração utilizada por este recurso." msgid "" "Map representing the cloud-config data structure which will be formatted as " "YAML." msgstr "" "Mapa representando a estrutura dos dados de configuração da nuvem que será " "formatado como YAML." msgid "" "Map representing the configuration data structure which will be serialized " "to JSON format." msgstr "" "Mapa representando a estrutura de dados de configuração que será serializada " "no formato JSON." msgid "Max bandwidth in kbps." msgstr "Largura da banda máxima em kbps." msgid "Max burst bandwidth in kbps." msgstr "Largura da banda máxima de burst em kbps." msgid "Max size of the cluster." msgstr "Tamanho máx. do cluster." #, python-format msgid "Maximum %s is 1 hour." msgstr "O máximo é de %s 1 hora." msgid "Maximum depth allowed when using nested stacks." msgstr "Profundidade máxima permitida ao usar pilhas aninhadas." 
msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs)." msgstr "" "Tamanho máximo da linha de cabeçalhos da mensagem a ser aceito. " "max_header_line pode precisar ser aumentada ao utilizar tokens grandes " "(geralmente aqueles gerados pela API Keystone v3 com catálogos de serviço " "grandes)." msgid "" "Maximum line size of message headers to be accepted. max_header_line may " "need to be increased when using large tokens (typically those generated by " "the Keystone v3 API with big service catalogs.)" msgstr "" "Tamanho máximo da linha de cabeçalhos da mensagem a ser aceito. " "max_header_line pode precisar ser aumentada ao utilizar tokens grandes " "(geralmente aqueles gerados pela API Keystone v3 com catálogos de serviço " "grandes)." msgid "Maximum number of instances in the group." msgstr "Número máximo de instâncias no grupo." msgid "Maximum number of resources in the cluster. -1 means unlimited." msgstr "Número máximo de recursos no cluster. O valor -1 significa ilimitado. " msgid "Maximum number of resources in the group." msgstr "Número máximo de recursos no grupo." msgid "Maximum number of stacks any one tenant may have active at one time." msgstr "" "Número máximo de pilhas que um locatário pode ter ativas ao mesmo tempo." msgid "Maximum prefix size that can be allocated from the subnet pool." msgstr "" "Tamanho máximo do prefixo que pode ser alocado a partir do conjunto de sub-" "redes." msgid "" "Maximum raw byte size of JSON request body. Should be larger than " "max_template_size." msgstr "" "Tamanho do byte bruto máximo do corpo da solicitação JSON. Deve ser maior " "que max_template_size." msgid "Maximum raw byte size of any template." msgstr "Tamanho do byte bruto máximo de qualquer modelo." msgid "Maximum resources allowed per top-level stack. -1 stands for unlimited." msgstr "" "Máximo de recursos permitido por pilha de nível superior. O valor -1 " "representa ilimitado." msgid "Maximum resources per stack exceeded." msgstr "Máximo de recursos por pilha excedido." msgid "" "Maximum transmission unit size (in bytes) for the ipsec site connection." msgstr "" "Tamanho máximo da unidade de transmissão (em bytes) para a conexão do site " "ipsec." msgid "Member list items must be strings" msgstr "Itens da lista de membros devem ser sequências" msgid "Member list must be a list" msgstr "Lista de Membros deve ser uma lista" msgid "Members associated with this pool." msgstr "Membros associados a esse conjunto." msgid "Memory in MB for the flavor." msgstr "Memória em MB para o tipo." #, python-format msgid "Message: %(message)s, Code: %(code)s" msgstr "Mensagem:%(message)s, Código:%(code)s" msgid "Metadata format invalid" msgstr "Formato inválido de metadados" msgid "Metadata key-values defined for cluster." msgstr "Valores de chave de metadados definidos para o cluster. " msgid "Metadata key-values defined for node." msgstr "Valores de chave de metadados definidos para o nó. " msgid "Metadata key-values defined for profile." msgstr "Valores de chave de metadados definidos para o perfil. " msgid "Metadata key-values defined for share." msgstr "Valores de chave de metadados definidos para compartilhamento. " msgid "Meter name watched by the alarm." msgstr "Nome do medidor inspecionado pelo alarme." msgid "" "Meter should match this resource metadata (key=value) additionally to the " "meter_name." 
msgstr "" "O medidor deve corresponder a estes metadados de recurso (chave=valor) " "adicionalmente para o meter_name." msgid "Meter statistic to evaluate." msgstr "A estatística do medidor para avaliar." msgid "Method of implementation of session persistence feature." msgstr "Método de implementação do recurso de persistência de sessão." msgid "Metric name watched by the alarm." msgstr "Nome da métrica inspecionada pelo alarme." msgid "Min size of the cluster." msgstr "Tamanho mín. do cluster." msgid "MinSize can not be greater than MaxSize" msgstr "MinSize não pode ser maior que MaxSize" msgid "Minimum number of instances in the group." msgstr "Número mínimo de instâncias neste grupo." msgid "Minimum number of resources in the cluster." msgstr "Número mínimo de recursos no cluster." msgid "Minimum number of resources in the group." msgstr "Número mínimo de recursos no grupo." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "PercentChangeInCapacity for the AdjustmentType property." msgstr "" "Número mínimo de recursos que são incluídos ou removidos quando o grupo " "AutoScaling aumentar ou diminuir. Isso pode ser usado somente quando " "especificar PercentChangeInCapacity para a propriedade AdjustmentType." msgid "" "Minimum number of resources that are added or removed when the AutoScaling " "group scales up or down. This can be used only when specifying " "percent_change_in_capacity for the adjustment_type property." msgstr "" "Número mínimo de recursos que são incluídos ou removidos quando o grupo " "AutoScaling aumentar ou diminuir. Isso pode ser usado somente quando " "especificar percent_change_in_capacity para a propriedade adjustment_type." #, python-format msgid "Missing mandatory (%s) key from mark unhealthy request" msgstr "" "Chave obrigatória ausente (%s) a partir da solicitação de marca com mau " "funcionamento" #, python-format msgid "Missing parameter type for parameter: %s" msgstr "Tipo de parâmetro ausente para o parâmetro: %s" #, python-format msgid "Missing required credential: %(required)s" msgstr "Credencial necessária ausente: %(required)s" msgid "Mistral resource validation error" msgstr "Erro de validação de recurso do Mistral" msgid "Monasca notification." msgstr "Notificação monasca." msgid "Multiple actions specified" msgstr "Múltiplas ações especificadas" #, python-format msgid "Multiple physical resources were found with name (%(name)s)." msgstr "Vários recursos físicos foram localizados com o nome (%(name)s)." #, python-format msgid "Multiple routers found with name %s" msgstr "Vários roteadores localizados com o nome %s" msgid "Must specify 'InstanceId' if you specify 'EIP'." msgstr "'InstanceId' deve ser especificado se 'EIP' for especificado." msgid "Name for the Sahara Cluster Template." msgstr "Nome do Modelo de Cluster do Sahara." msgid "Name for the Sahara Node Group Template." msgstr "Nome do Modelo de Grupo de Nós do Sahara." msgid "Name for the aggregate." msgstr "Nome do agregado. " msgid "Name for the availability zone." msgstr "Nome da zona de disponibilidade." msgid "" "Name for the container. If not specified, a unique name will be generated." msgstr "" "Nome do contêiner. Se nenhum for especificado, um nome exclusivo será gerado." msgid "Name for the firewall policy." msgstr "Nome para a política de firewall." msgid "Name for the firewall rule." msgstr "Nome para a regra de firewall." msgid "Name for the firewall." 
msgstr "Nome para o firewall." msgid "Name for the ike policy." msgstr "Nome da política ike." msgid "" "Name for the image. The name of an image is not unique to a Image Service " "node." msgstr "" "Nome para a imagem. O nome de uma imagem não é exclusivo para um nó Image " "Services." msgid "Name for the ipsec policy." msgstr "Nome da política ipsec." msgid "Name for the ipsec site connection." msgstr "Nome para a conexão do site ipsec." msgid "Name for the time constraint." msgstr "Nome para a restrição de tempo." msgid "Name for the vpn service." msgstr "Nome para o serviço de vpn." msgid "" "Name of attribute to compare. Names of the form metadata.user_metadata.X or " "metadata.metering.X are equivalent to what you can address through " "matching_metadata; the former for Nova meters, the latter for all others. To " "see the attributes of your Samples, use `ceilometer --debug sample-list`." msgstr "" "Nome do atributo a ser comparado. Nomes dos formulários metadata." "user_metadata.X ou metadata.metering.X são equivalentes aos que você pode " "direcionar através de matching_metadata; o antigo para medidores de Nova, o " "último para todos os outros. Para consultar os atributos de suas Amostras, " "use `ceilometer --debug sample-list`." msgid "Name of key to use for substituting inputs during deployment." msgstr "" "Nome da chave a ser utilizada para substituir entradas durante a " "implementação." msgid "Name of keypair to inject into the server." msgstr "Nome de par de chave para injetar no servidor." msgid "Name of keystone endpoint." msgstr "Nome do terminal do keystone." msgid "Name of keystone group." msgstr "Nome do grupo do keystone." msgid "Name of keystone project." msgstr "Nome do projeto do keystone." msgid "Name of keystone role." msgstr "Nome da função do keystone." msgid "Name of keystone service." msgstr "Nome o serviço do keystone." msgid "Name of keystone user." msgstr "Nome do usuário do keystone." msgid "Name of registered datastore type." msgstr "Nome do tipo de armazenamento de dados registrado." msgid "Name of the DB instance to create." msgstr "Nome da instância do BD para criar." msgid "Name of the Node group." msgstr "Nome do grupo de nós." msgid "" "Name of the action associated with the task. Either action or workflow may " "be defined in the task." msgstr "" "Nome da ação associada à tarefa. A ação ou o fluxo de trabalho pode ser " "definido na tarefa." msgid "Name of the administrative user to use on the server." msgstr "Nome do usuário administrativo a ser usado no servidor." msgid "Name of the alarm. By default, physical resource name is used." msgstr "Nome do alarme. Por padrão, o nome do recurso físico é usado." msgid "Name of the availability zone for DB instance." msgstr "Nome da zona de disponibilidade para a instância do BD." msgid "Name of the availability zone for server placement." msgstr "Nome da zona de disponibilidade para colocação do servidor." msgid "Name of the cluster to create." msgstr "Nome do cluster a ser criado." msgid "Name of the cluster. By default, physical resource name is used." msgstr "Nome do cluster. Por padrão, o nome do recurso físico é usado." msgid "Name of the cookie, required if type is APP_COOKIE." msgstr "Nome do cookie, necessário se o tipo for APP_COOKIE." msgid "Name of the cron trigger." msgstr "Nome do acionador cron." msgid "Name of the current action being deployed" msgstr "Nome da ação atual que está sendo implementada" msgid "Name of the data source." msgstr "Nome da origem de dados." 
msgid "" "Name of the derived config associated with this deployment. This is used to " "apply a sort order to the list of configurations currently deployed to a " "server." msgstr "" "Nome da configuração derivada associada a esta implementação. Isso é " "utilizado para aplicar uma ordem de classificação à lista de configurações " "atualmente implementadas em um servidor." msgid "" "Name of the engine node. This can be an opaque identifier. It is not " "necessarily a hostname, FQDN, or IP address." msgstr "" "Nome do nó de mecanismo. Este pode ser um identificador opaco. Não é " "necessariamente um nome do host, FQDN ou endereço IP." msgid "Name of the input." msgstr "Nome da entrada." msgid "Name of the job binary." msgstr "Nome do binário da tarefa." msgid "Name of the metering label." msgstr "Nome do rótulo de medição." msgid "Name of the network owning the port." msgstr "Nome da rede que possui a porta." msgid "" "Name of the network owning the port. The value is typically network:" "floatingip or network:router_interface or network:dhcp." msgstr "" "Nome da rede que possui a porta. O valor é tipicamente network:floatingip ou " "network:router_interface ou network:dhcp." msgid "Name of the notification. By default, physical resource name is used." msgstr "Nome da notificação. Por padrão, o nome do recurso físico é usado." msgid "Name of the output." msgstr "O nome da saída." msgid "Name of the pool." msgstr "Nome do pool." msgid "Name of the queue instance to create." msgstr "Nome da instância de fila para criar." msgid "" "Name of the registered datastore version. It must exist for provided " "datastore type. Defaults to using single active version. If several active " "versions exist for provided datastore type, explicit value for this " "parameter must be specified." msgstr "" "Nome da versão do armazenamento de dados registrado. Ele deve existir para o " "tipo de armazenamento de dados definido. Padrões para usar a única versão " "ativa. Se várias versões ativas existirem para o tipo de armazenamento de " "dados fornecido, o valor explícito para este parâmetro deverá ser " "especificado." msgid "Name of the secret." msgstr "Nome do segredo." msgid "Name of the senlin node. By default, physical resource name is used." msgstr "Nome do nó Senlin. Por padrão, o nome do recurso físico é usado." msgid "Name of the senlin policy. By default, physical resource name is used." msgstr "Nome da política Senlin. Por padrão, o nome do recurso físico é usado." msgid "Name of the senlin profile. By default, physical resource name is used." msgstr "Nome do perfil Senlin. Por padrão, o nome do recurso físico é usado." msgid "" "Name of the senlin receiver. By default, physical resource name is used." msgstr "Nome do receptor Senlin. Por padrão, o nome do recurso físico é usado." msgid "Name of the server." msgstr "Nome do servidor." msgid "Name of the share network." msgstr "Nome da rede de compartilhamento." msgid "Name of the share type." msgstr "Nome do tipo de compartilhamento." msgid "Name of the stack." msgstr "Nome da pilha." msgid "Name of the subnet pool." msgstr "Nome do conjunto de sub-redes." msgid "Name of the vip." msgstr "Nome do vip." msgid "Name of the volume type." msgstr "Nome do tipo de volume." msgid "Name of the volume." msgstr "Nome do volume." msgid "" "Name of the workflow associated with the task. Can be defined by intrinsic " "function get_resource or by name of the referenced workflow, i.e. " "{ workflow: wf_name } or { workflow: { get_resource: wf_name }}. 
Either " "action or workflow may be defined in the task." msgstr "" "Nome do fluxo de trabalho associado à tarefa. Pode ser definido pela função " "intrínssica get_resource ou pelo nome do fluxo de trabalho referenciado, por " "exemplo, { workflow: wf_name } ou { workflow: { get_resource: wf_name }}. A " "ação ou o fluxo de trabalho pode ser definido na tarefa. " msgid "Name of this Load Balancer." msgstr "Nome desse Balanceador de Carga." msgid "Name of this deployment resource in the stack" msgstr "Nome deste recurso de implementação na pilha" msgid "Name of this listener." msgstr "Nome do listener." msgid "Name of this pool." msgstr "Nome desse conjunto." msgid "Name or ID Nova flavor for the nodes." msgstr "Nome ou tipo do ID Nova para os nós." msgid "Name or ID of network to create a port on." msgstr "Nome ou ID da rede no qual criar uma porta." msgid "Name or ID of senlin profile to create this node." msgstr "O nome ou ID do perfil Senlin para criar esse nó." msgid "" "Name or ID of shared file system snapshot that will be restored and created " "as a new share." msgstr "" "Nome ou ID da captura instantânea do sistema de arquivos compartilhado que " "será restaurados e criado como um novo compartilhamento." msgid "" "Name or ID of shared filesystem type. Types defines some share filesystem " "profiles that will be used for share creation." msgstr "" "Nome ou ID do tipo do sistema de arquivos compartilhado. Os tipos definem " "alguns perfis do sistema de arquivos de compartilhamento que serão usados " "para criação do compartilhamento. " msgid "Name or ID of shared network defined for shared filesystem." msgstr "" "Nome ou ID da rede compartilhada definida para o sistema de arquivos " "compartilhado. " msgid "Name or ID of target cluster." msgstr "Nome ou ID do cluster de destino." msgid "Name or ID of the load balancing pool." msgstr "Nome ou ID do conjunto de balanceamento de carga." msgid "Name or Id of keystone region." msgstr "Nome ou ID da região do keystone." msgid "Name or Id of keystone service." msgstr "Nome ou ID do serviço do keystone." #, python-format msgid "" "Name or UUID of Neutron port to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Nome ou UUID da porta Neutron ao qual este NIC deve ser anexado. %(port)s ou " "%(net)s deve ser especificado." msgid "Name or UUID of network." msgstr "Nome ou UUID da rede." msgid "" "Name or UUID of the Neutron floating IP network or name of the Nova floating " "ip pool to use. Should not be provided when used with Nova-network that auto-" "assign floating IPs." msgstr "" "Nome ou UUID da rede IP flutuante do Neutron ou nome do conjunto de IP " "flutuante do Nova a ser usado. Não é necessário ser fornecido quando usado " "com a rededo Nova que auto designa IPs flutuantes." msgid "Name or UUID of the image used to boot Hadoop nodes." msgstr "Nome ou UUID da imagem usada para iniciar os nós do Hadoop." #, python-format msgid "" "Name or UUID of the network to attach this NIC to. Either %(port)s or " "%(net)s must be specified." msgstr "" "Nome ou UUID ao qual este NIC deve ser anexado. %(port)s ou %(net)s deve ser " "especificado." msgid "Name or id of keystone domain." msgstr "Nome ou ID do domínio do keystone." msgid "Name or id of keystone group." msgstr "Nome ou ID do grupo do keystone." msgid "Name or id of keystone user." msgstr "Nome ou ID do usuário do keystone." msgid "Name or id of volume type (OS::Cinder::VolumeType)." msgstr "Nome ou ID do tipo de volume (OS::Cinder::VolumeType)." 
msgid "Names of databases that those users can access on instance creation." msgstr "" "Nomes de bancos de dados que os usuários podem acessar na criação da " "instância." msgid "" "Namespace to group this software config by when delivered to a server. This " "may imply what configuration tool is going to perform the configuration." msgstr "" "Espaço de Nomes para este grupo de configuração de software, quando " "entregues para um servidor. Isto pode significar qual ferramenta de " "configuração irá executar a configuração." msgid "Need more arguments" msgstr "Precisa de mais argumentos" msgid "Negotiation mode for the ike policy." msgstr "Modo de negociação para a política ike." #, python-format msgid "Neither image nor bootable volume is specified for instance %s" msgstr "" "Nem a imagem e nem o volume inicializável estão especificados para a " "instância %s" msgid "Network in CIDR notation." msgstr "Rede na notação CIDR." msgid "Network interface ID to associate with EIP." msgstr "ID da interface de rede a ser associado a EIP." msgid "Network interfaces to associate with instance." msgstr "Interface de rede para associar com a instância." #, python-format msgid "" "Network this port belongs to. If you plan to use current port to assign " "Floating IP, you should specify %(fixed_ips)s with %(subnet)s. Note if this " "changes to a different network update, the port will be replaced." msgstr "" "A rede à qual esta porta pertence. Se você planeja usar a porta atual para " "designar o IP flutuante, é necessário especificar %(fixed_ips)s com " "%(subnet)s. Observe que se isso alterar para uma atualização de rede " "diferente, a porta será substituída. " msgid "Network to allocate floating IP from." msgstr "Rede para alocar IP flutuante." msgid "Neutron network id." msgstr "ID da rede do Neutron." msgid "Neutron subnet id." msgstr "ID da sub-rede do Neutron." msgid "Nexthop IP address." msgstr "Rede IP Nexthop." #, python-format msgid "No %s specified" msgstr "Nenhum %s especificado" msgid "No Template provided." msgstr "Nenhum template especificado. " msgid "No action specified" msgstr "Nenhuma ação especificada" msgid "No constraint expressed" msgstr "Nenhuma restrição expressa" #, python-format msgid "" "No content found in the \"files\" section for %(fn_name)s path: %(file_key)s" msgstr "" "Nenhum conteúdo localizado na seção \"arquivos\" para o caminho %(fn_name)s: " "%(file_key)s" #, python-format msgid "No event %s found" msgstr "Nenhum evento %s encontrado" #, python-format msgid "No events found for resource %s" msgstr "Nenhum evento localizado para o recurso %s" msgid "No resource data found" msgstr "Nenhum dado de recurso localizado" #, python-format msgid "No stack exists with id \"%s\"" msgstr "Não existe pilha com o ID \"%s\"" msgid "No stack name specified" msgstr "Nenhum nome stack especificado" msgid "No template specified" msgstr "Nenhum template especificado. " msgid "No volume service available." msgstr "Nenhum serviço do volume disponível." msgid "Node groups." msgstr "Grupos de nós." msgid "Nodes list in the cluster." msgstr "Lista de nós no cluster." msgid "Non HA routers can only have one L3 agent." msgstr "" "Roteadores que não têm alta disponibilidade podem ter apenas um agente L3." #, python-format msgid "Non-empty resource type is required for resource \"%s\"" msgstr "Tipo de recurso não vazio é obrigatório para o recurso \"%s\"" msgid "Not Implemented." msgstr "Não Implementado." #, python-format msgid "Not allowed - %(dsver)s without %(dstype)s." 
msgstr "Não permitidas - %(dsver)s sem %(dstype)s." msgid "Not found" msgstr "Não localizado" msgid "Not waiting for outputs signal" msgstr "Não aguardando sinal de saídas" msgid "" "Notional service where encryption is performed For example, front-end. For " "Nova." msgstr "" "Serviço ideal em que a criptografia é executada. Por exemplo, front-end. " "Para Nova." msgid "Nova instance type (flavor)." msgstr "Tipo de instância de Nova (tipo)." msgid "Nova network id." msgstr "ID da rede do Nova" msgid "Number of VCPUs for the flavor." msgstr "Número de vCPUs para o tipo." msgid "Number of backlog requests to configure the socket with." msgstr "" "Número de solicitações de lista não processada para configurar o soquete." msgid "Number of instances in the Node group." msgstr "Número de instâncias no grupo de nós." msgid "Number of minutes to wait for this stack creation." msgstr "Número de minutos para aguardar a criação desta pilha." msgid "Number of periods to evaluate over." msgstr "Número de períodos para avaliar." msgid "" "Number of permissible connection failures before changing the member status " "to INACTIVE." msgstr "" "Número de falhas de conexão admissíveis antes de alterar o status do membro " "para INATIVO." msgid "Number of remaining executions." msgstr "Número de execuções restantes." msgid "Number of seconds for the DPD delay." msgstr "Número de segundos para o atraso DPD." msgid "Number of seconds for the DPD timeout." msgstr "Número de segundos para o tempo limite DPD." msgid "" "Number of times to check whether an interface has been attached or detached." msgstr "" "Número de vezes para verificar se uma interface foi conectada ou " "desconectada." msgid "" "Number of times to retry to bring a resource to a non-error state. Set to 0 " "to disable retries." msgstr "" "Número de vezes para tentar novamente colocar um recurso em um estado sem " "erro. Configure para 0 para desativar as novas tentativas." msgid "" "Number of times to retry when a client encounters an expected intermittent " "error. Set to 0 to disable retries." msgstr "" "Número de vezes para tentar novamente quando um cliente encontra um erro " "intermitente esperado. Configure para 0 para desativar as novas tentativas." msgid "Number of workers for Heat service." msgstr "Número de trabalhadores para o serviço Heat." msgid "" "Number of workers for Heat service. Default value 0 means, that service will " "start number of workers equal number of cores on server." msgstr "" "Número de trabalhadores para o serviço do Heat. O valor padrão 0 significa " "que o serviço iniciará o número de trabalhadores igual ao número de núcleos " "no servidor. " msgid "Number value for delay during resolve constraint." msgstr "Valor do número para atraso durante resolução de restrição." msgid "Number value for timeout during resolving output value." msgstr "Valor de número para tempo limite durante resolução do valor de saída." #, python-format msgid "Object action %(action)s failed because: %(reason)s" msgstr "A ação do objeto %(action)s falhou porque: %(reason)s" msgid "" "On update, enables heat to collect existing resource properties from reality " "and converge to updated template." msgstr "" "Na atualização, permite que o Heat colete propriedades de recurso existentes " "da realidade e as convirja para o modelo atualizado." msgid "One of predefined health monitor types." msgstr "Um dos tipos de monitor de funcionamento predefinidos." msgid "One or more listeners for this load balancer." 
msgstr "Um ou mais listeners para esse balanceador de carga." msgid "Only ISO 8601 duration format of the form PT#H#M#S is supported." msgstr "Apenas formato de duração ISO 8601 do formulário PT#H#M#S é suportado." msgid "Only Templates with an extension of .yaml or .template are supported" msgstr "Somente Modelos com uma extensão .yaml ou .template são suportados" #, python-format msgid "Only integer is acceptable by '%(name)s'." msgstr "Apenas números inteiros são aceitáveis por '%(name)s'." #, python-format msgid "Only non-zero integer is acceptable by '%(name)s'." msgstr "Apenas números inteiros diferentes de zero são aceitáveis, '%(name)s'." msgid "Operator used to compare specified statistic with threshold." msgstr "" "O operador utilizado para comparar a estatística especificada com o limite." msgid "Optional CA cert file to use in SSL connections." msgstr "Arquivo de certificado de CA opcional a ser usado em conexões SSL." msgid "Optional Nova keypair name." msgstr "Nome de par de chaves opcional Nova." msgid "Optional PEM-formatted certificate chain file." msgstr "Arquivo de cadeia de certificados formatado por PEM opcional." msgid "Optional PEM-formatted file that contains the private key." msgstr "Arquivo formatado por PEM opcional que contém a chave privada." msgid "Optional filename to associate with part." msgstr "Nome opcional a ser associado à parte." #, python-format msgid "Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s." msgstr "" "URL de utilização opcional no formato http://0.0.0.0:8004/v1/%(tenant_id)s." msgid "Optional subtype to specify with the type." msgstr "Subtipo opcional para especificar com o tipo." msgid "Options for simulating waiting." msgstr "Opções para simular espera. " #, python-format msgid "Order '%(name)s' failed: %(code)s - %(reason)s" msgstr "O pedido '%(name)s' falhou: %(code)s - %(reason)s" msgid "Outputs received" msgstr "Saídas recebidas" msgid "Owner of the source security group." msgstr "Proprietário do grupo de segurança de origem." msgid "PATCH update to non-COMPLETE stack" msgstr "Atualização de PATCH para pilha não COMPLETE" #, python-format msgid "Parameter '%(name)s' is invalid: %(exp)s" msgstr "Parâmetro '%(name)s' é inválido: %(exp)s" msgid "Parameter Groups error" msgstr "Erro de Grupos de Parâmetro" msgid "" "Parameter Groups error: parameter_groups.: The grouped parameter key_name " "does not reference a valid parameter." msgstr "" "Erro de Grupos de Parâmetros: parameter_groups.: O parâmetro agrupado " "key_name não referencia um parâmetro válido. " msgid "" "Parameter Groups error: parameter_groups.: The key_name parameter must be " "assigned to one parameter group only." msgstr "" "Erro de Grupos de Parâmetros: parameter_groups.:O parâmetro key_name deve " "ser designado somente para um grupo de parâmetros. " msgid "" "Parameter Groups error: parameter_groups.: The parameters of parameter group " "should be a list." msgstr "" "Erro de Grupos de Parâmetros: parameter_groups.: Os parâmetros do grupo de " "parâmetros devem ser uma lista." msgid "" "Parameter Groups error: parameter_groups.Database Group: The InstanceType " "parameter must be assigned to one parameter group only." msgstr "" "Erro de Grupos de Parâmetros: parameter_groups.Database Group: O parâmetro " "InstanceType deve ser designado somente para um grupo de parâmetros." msgid "" "Parameter Groups error: parameter_groups.Database Group: The grouped " "parameter SomethingNotHere does not reference a valid parameter." 
msgstr "" "Erro de Grupos de Parâmetros: : parameter_groups.Database Group: O parâmetro " "agrupado SomethingNotHere não referencia um parâmetro válido. " msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters must " "be provided for each parameter group." msgstr "" "Erro de Grupos de Parâmetros: parameter_groups.Server Group: Os parâmetros " "devem ser fornecidos para cada grupo de parâmetros." msgid "" "Parameter Groups error: parameter_groups.Server Group: The parameters of " "parameter group should be a list." msgstr "" "Erro de Grupos de Parâmetros: parameter_groups.Server Group: Os parâmetros " "do grupo de parâmetros devem ser uma lista." msgid "" "Parameter Groups error: parameter_groups: The parameter_groups should be a " "list." msgstr "" "Erro de Grupos de Parâmetros: parameter_groups: O parameter_groups deve ser " "uma lista." #, python-format msgid "Parameter name in \"%s\" must be string" msgstr "Nome do parâmetro em \"%s\" deve ser uma sequência" #, python-format msgid "Params must be a map, find a %s" msgstr "Params deve ser um mapa, localize um %s." msgid "Parent network of the subnet." msgstr "Pai de rede da sub-rede." msgid "Parts belonging to this message." msgstr "Partes pertencentes a esta mensagem." msgid "Password for API authentication" msgstr "Senha para autenticação da API" msgid "Password for accessing the data source URL." msgstr "Senha para acessar a URL da origem de dados." msgid "Password for accessing the job binary URL." msgstr "Senha para acessar a URL do binário da tarefa." msgid "Password for those users on instance creation." msgstr "Senha para esses usuários na criação da instância." msgid "Password of keystone user." msgstr "Senha do usuário do keystone." msgid "Password used by user." msgstr "Senha usada pelo usuário." #, python-format msgid "Path components in \"%s\" must be strings" msgstr "Componentes do caminho em \"%s\" devem ser sequências" msgid "Path components in attributes must be strings" msgstr "Componentes do caminho em atributos devem ser sequências" msgid "Payload exceeds maximum allowed size" msgstr "A carga útil excede o tamanho máximo permitido" msgid "Perfect forward secrecy for the ipsec policy." msgstr "Segredo de avanço perfeito para a política ipsec." msgid "Perfect forward secrecy in lowercase for the ike policy." msgstr "Segredo de avanço perfeito em minúsculas para a política ike." msgid "" "Perform a check on the input values passed to verify that each required " "input has a corresponding value. When the property is set to STRICT and no " "value is passed, an exception is raised." msgstr "" "Execute uma verificação nos valores de entrada aprovados para verificar se " "cada entrada requerida possui um valor correspondente. Quando a propriedade " "for configurada como STRICT e nenhum valor for aprovado, uma exceção será " "levantada." msgid "Period (seconds) to evaluate over." msgstr "Período (segundos) para avaliar." msgid "Physical ID of the VPC. Not implemented." msgstr "ID físico do VPC. Não implementado." #, python-format msgid "" "Plugin %(plugin)s doesn't support the following node processes: " "%(unsupported)s. Allowed processes are: %(allowed)s" msgstr "" "O plug-in %(plugin)s não suporta os seguintes processos do nó: " "%(unsupported)s. Os processos permitidos são: %(allowed)s" msgid "Plugin name." msgstr "Nome do plugin." msgid "Policies for removal of resources on update." msgstr "Políticas para remoção de recursos na atualização." 
msgid "Policy for rolling updates for this scaling group." msgstr "Política para recuperar as atualizações para este grupo de escala." msgid "" "Policy on how to apply a flavor update; either by requesting a server resize " "or by replacing the entire server." msgstr "" "A política sobre como aplicar uma atualização de tipo; solicitando um " "redimensionamento do servidor ou substituindo o servidor inteiro." msgid "" "Policy on how to apply an image-id update; either by requesting a server " "rebuild or by replacing the entire server." msgstr "" "Política sobre como aplicar uma atualização de ID de imagem; solicitando uma " "reconstrução do servidor ou substituindo o servidor inteiro." msgid "" "Policy on how to respond to a stack-update for this resource. REPLACE_ALWAYS " "will replace the port regardless of any property changes. AUTO will update " "the existing port for any changed update-allowed property." msgstr "" "Política sobre como responder a uma atualização de pilha para este recurso. " "REPLACE_ALWAYS irá substituir a porta, independentemente de quaisquer " "alterações na propriedade. AUTO atualizará a porta existente para qualquer " "atualização permitida da propriedade." msgid "" "Policy to be processed when doing an update which requires removal of " "specific resources." msgstr "" "Política a ser processada ao fazer uma atualização que requer a remoção de " "recursos específicos." msgid "Pool creation failed" msgstr "Criação do conjunto com falha" msgid "Pool creation failed due to vip" msgstr "Criação do conjunto com falha devido ao vip" msgid "Pool from which floating IP is allocated." msgstr "Conjunto de onde o IP flutuante está alocado." msgid "Port number on which the servers are running on the members." msgstr "O número da porta no qual os servidores estão em execução nos membros." msgid "Port on which the pool member listens for requests or connections." msgstr "A porta na qual o membro do conjunto atende solicitações ou conexões." msgid "Port security enabled of the network." msgstr "Segurança de porta ativada da rede." msgid "Port security enabled of the port." msgstr "Segurança de porta ativada da porta." msgid "Position of the rule within the firewall policy." msgstr "Posição da regra na política de firewall." msgid "Pre-shared key string for the ipsec site connection." msgstr "Sequência de chave pré-compartilhada para a conexão do site ipsec." msgid "Prefix length for subnet allocation from subnet pool." msgstr "" "Comprimento do prefixo para alocação da sub-rede a partir do conjunto de " "sub-redes." msgid "Private DNS name of the specified instance." msgstr "Nome DNS privado da instância especificada." msgid "Private IP address of the network interface." msgstr "Endereço IP Privado da interface de rede." msgid "Private IP address of the specified instance." msgstr "Endereço IP Privado da instância especificada." msgid "Project ID" msgstr "ID do Projeto" msgid "" "Projects to add volume type access to. NOTE: This property is only supported " "since Cinder API V2." msgstr "" "Projetos aos quais incluir acesso de tipo de volume. NOTA: Essa propriedade " "é suportada somente a partir do Cinder API V2" #, python-format msgid "" "Properties %(algorithm)s and %(bit_length)s are required for %(type)s type " "of order." msgstr "" "As propriedades %(algorithm)s e %(bit_length)s são necessárias para o tipo " "de pedido %(type)s." msgid "Properties for profile." msgstr "Propriedades para o perfil. " msgid "Properties of this policy." msgstr "Propriedades dessa política." 
msgid "Properties to pass to each resource being created in the chain." msgstr "" "Propriedades a serem transmitidas para cada recurso que estiver sendo criado " "na cadeia." #, python-format msgid "Property %(cookie)s is required when %(sp)s type is set to %(app)s." msgstr "" "A propriedade %(cookie)s é necessária quando o tipo %(sp)s é configurado " "para %(app)s." #, python-format msgid "" "Property %(cookie)s must NOT be specified when %(sp)s type is set to %(ip)s." msgstr "" "A propriedade %(cookie)s NÃO deve ser especificada quando o tipo %(sp)s é " "configurado para %(ip)s."
#, python-format msgid "" "Property %(key)s updated value %(new)s should be superset of existing value " "%(old)s." msgstr "" "O valor atualizado %(new)s da propriedade %(key)s deve ser um " "superconjunto do valor existente %(old)s." #, python-format msgid "" "Property %(n)s type mismatch between facade %(type)s (%(fs_type)s) and " "provider (%(ps_type)s)" msgstr "" "Tipo de propriedade %(n)s incompatível entre fachada %(type)s (%(fs_type)s) " "e provedor (%(ps_type)s)" #, python-format msgid "Property %(policies)s and %(item)s cannot be used both at one time." msgstr "" "As propriedades %(policies)s e %(item)s não podem ser usadas ao mesmo tempo." #, python-format msgid "Property %(ref)s required when protocol is %(term)s." msgstr "A propriedade %(ref)s é necessária quando o protocolo é %(term)s." #, python-format msgid "Property %s not assigned" msgstr "Propriedade %s não designada" #, python-format msgid "Property %s not implemented yet" msgstr "Propriedade %s ainda não implementada"
msgid "" "Property cookie_name is required when session_persistence type is set to " "APP_COOKIE." msgstr "" "A propriedade cookie_name é necessária quando o tipo de session_persistence " "está configurado como APP_COOKIE." msgid "" "Property cookie_name is required, when session_persistence type is set to " "APP_COOKIE." msgstr "" "A propriedade cookie_name é necessária, quando o tipo de session_persistence " "está configurado como APP_COOKIE." msgid "" "Property cookie_name must NOT be specified when session_persistence type is " "set to SOURCE_IP." msgstr "" "A propriedade cookie_name NÃO deve ser especificada quando o tipo " "session_persistence é configurado para SOURCE_IP." msgid "Property values for the resources in the group." msgstr "Os valores da propriedade para os recursos no grupo."
msgid "Protocol for balancing." msgstr "Protocolo para o balanceamento." msgid "Protocol for the firewall rule." msgstr "Protocolo para a regra de firewall." msgid "Protocol of the pool." msgstr "Protocolo do conjunto." msgid "Protocol on which to listen for the client traffic." msgstr "Protocolo no qual atender o tráfego do cliente." msgid "Protocol to balance." msgstr "Protocolo para balancear." msgid "Protocol value for this firewall rule." msgstr "Valor de protocolo para esta regra de firewall." msgid "" "Provide access to nodes using other nodes of the cluster as proxy gateways." msgstr "" "Fornece acesso aos nós usando outros nós do cluster como gateways proxy." msgid "" "Provide old encryption key. New encryption key would be used from config " "file." msgstr "" "Forneça a chave de criptografia antiga. A nova chave de criptografia seria " "usada a partir do arquivo de configuração." msgid "Provider for this Load Balancer." msgstr "Provedor desse Balanceador de Carga." msgid "Provider implementing this load balancer instance." msgstr "Provedor implementando essa instância do balanceador de carga." #, python-format msgid "Provider requires property %(n)s unknown in facade %(type)s" msgstr "Provedor requer propriedade %(n)s desconhecida na fachada %(type)s"
msgid "Public DNS name of the specified instance." msgstr "Nome DNS Público da instância especificada." msgid "Public IP address of the specified instance." msgstr "Endereço IP Público da instância especificada." msgid "" "RPC timeout for the engine liveness check that is used for stack locking." msgstr "" "Tempo limite de RPC para a verificação de atividade do mecanismo que é " "utilizado para o bloqueio de pilha." msgid "RX/TX factor." msgstr "Fator RX/TX." #, python-format msgid "Rebuilding server failed, status '%s'" msgstr "Falha ao reconstruir servidor, status '%s'" msgid "Record name." msgstr "Nome do registro." #, python-format msgid "Recursion depth exceeds %d." msgstr "A profundidade de recursão excede %d." msgid "" "Ref structure that contains the ID of the VPC on which you want to create " "the subnet." msgstr "" "Estrutura de referência que contém o ID do VPC no qual você deseja criar a " "sub-rede."
msgid "Reference to a flavor for creating DB instance." msgstr "Referência a um tipo para criação de instância do BD." msgid "Reference to certificate." msgstr "Referência ao certificado." msgid "Reference to intermediates." msgstr "Referência aos intermediários." msgid "Reference to private key passphrase." msgstr "Referência à passphrase da chave privada." msgid "Reference to private key." msgstr "Referência à chave privada." msgid "Reference to public key." msgstr "Referência à chave pública." msgid "Reference to the secret." msgstr "Referência ao segredo." msgid "References to secrets that will be stored in container." msgstr "Referências aos segredos que serão armazenados no contêiner." msgid "Region name in which this stack will be created." msgstr "Nome da região na qual essa pilha será criada." msgid "Remaining executions." msgstr "Execuções restantes."
msgid "Remote branch router identity." msgstr "Identidade do roteador de ramificação remoto." msgid "Remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "" "Endereço IPv4 ou IPv6 público do roteador de ramificação remoto ou FQDN." msgid "Remote subnet(s) in CIDR format." msgstr "Sub-rede(s) remota(s) no formato CIDR." msgid "" "Replacement policy used to work around flawed nova/neutron port interaction " "which has been fixed since Liberty." msgstr "" "Política de substituição usada como uma solução alternativa da interação da " "porta nova/neutron com falha que foi corrigida desde o Liberty." msgid "Request expired or more than 15mins in the future" msgstr "O pedido expirou ou está a mais de 15 min no futuro" #, python-format msgid "Request limit exceeded: %(message)s" msgstr "O limite da requisição foi excedido: %(message)s" msgid "Request missing required header X-Auth-Url" msgstr "Um cabeçalho necessário X-Auth-Url está ausente na solicitação" msgid "Request was denied due to request throttling" msgstr "O pedido foi negado devido à limitação de pedidos"
#, python-format msgid "" "Requested plugin '%(plugin)s' doesn't support version '%(version)s'. Allowed " "versions are %(allowed)s" msgstr "" "O plug-in solicitado '%(plugin)s' não suporta a versão '%(version)s'. As " "versões permitidas são %(allowed)s" msgid "" "Required extra specification. Defines if share drivers handles share servers." msgstr "" "Especificação extra necessária. Define se os drivers de compartilhamento " "manipulam servidores de compartilhamento." #, python-format msgid "Required property %(n)s for facade %(type)s missing in provider" msgstr "" "Propriedade obrigatória %(n)s para fachada %(type)s ausente no provedor" #, python-format msgid "Resizing to '%(flavor)s' failed, status '%(status)s'" msgstr "O redimensionamento para '%(flavor)s' falhou, status '%(status)s'"
#, python-format msgid "Resource \"%s\" has no type" msgstr "O recurso \"%s\" não possui nenhum tipo" #, python-format msgid "Resource \"%s\" type is not a string" msgstr "O tipo do recurso \"%s\" não é uma sequência" #, python-format msgid "Resource %(name)s %(key)s type must be %(typename)s" msgstr "O tipo %(key)s do recurso %(name)s deve ser %(typename)s" #, python-format msgid "Resource %(name)s is missing \"%(type_key)s\"" msgstr "Recurso %(name)s está faltando \"%(type_key)s\"" #, python-format msgid "" "Resource %s's property user_data_format should be set to SOFTWARE_CONFIG " "since there are software deployments on it." msgstr "" "A propriedade user_data_format do recurso %s deve ser configurada para " "SOFTWARE_CONFIG uma vez que existem implementações de software nela." msgid "Resource ID was not provided." msgstr "ID do recurso não foi fornecido."
msgid "" "Resource definition for the resources in the group, in HOT format. The value " "of this property is the definition of a resource just as if it had been " "declared in the template itself." msgstr "" "Definição de recurso para os recursos no grupo, em formato HOT. O valor " "dessa propriedade é a definição de um recurso como se ele tivesse sido " "declarado no próprio modelo." msgid "" "Resource definition for the resources in the group. The value of this " "property is the definition of a resource just as if it had been declared in " "the template itself." msgstr "" "Definição de recurso para os recursos no grupo. O valor desta propriedade é " "a definição de um recurso como se ela tivesse sido declarada no próprio " "modelo." msgid "Resource failed" msgstr "Recurso com falha" msgid "Resource is not built" msgstr "O recurso não foi construído" msgid "Resource name may not contain \"/\"" msgstr "O nome do recurso não pode conter \"/\"" msgid "Resource type." msgstr "Tipo de recurso." msgid "Resource update already requested" msgstr "Atualização do recurso já solicitada" msgid "Resource with the name requested already exists" msgstr "Recurso com o nome requisitado já existe"
msgid "" "ResourceInError: resources.remote_stack: Went to status UPDATE_FAILED due to " "\"Remote stack update failed\"" msgstr "" "ResourceInError: resources.remote_stack: Atingiu status UPDATE_FAILED devido " "a \"A atualização da pilha remota falhou\"" #, python-format msgid "Resources must contain Resource. Found a [%s] instead" msgstr "Os recursos devem conter Recurso. Foi localizado um [%s] em seu lugar" msgid "" "Resources that users are allowed to access by the DescribeStackResource API." msgstr "" "Os recursos que os usuários têm permissão para acessar pela API " "DescribeStackResource." msgid "Returned status code from the configuration execution." msgstr "Código de status retornado da execução de configuração." msgid "Route duplicates an existing route." msgstr "A rota duplica uma rota existente." msgid "Route table ID." msgstr "ID da tabela de roteamento."
msgid "Safety assessment lifetime configuration for the ike policy." msgstr "" "Configuração do tempo de vida da avaliação de segurança para a política ike." msgid "Safety assessment lifetime configuration for the ipsec policy." msgstr "" "Configuração de tempo de vida da avaliação de segurança para política ipsec." msgid "Safety assessment lifetime units." msgstr "Unidades de tempo de vida da avaliação de segurança." msgid "Safety assessment lifetime value in specified units." msgstr "" "Valor de tempo de vida da avaliação de segurança em unidades especificadas." msgid "Scheduler hints to pass to Nova (Heat extension)." msgstr "Sugestões do planejador para passar para Nova (extensão Heat)." msgid "Schema representing the inputs that this software config is expecting." msgstr "" "Esquema representando as entradas que esta configuração de software está " "esperando." msgid "Schema representing the outputs that this software config will produce." msgstr "" "Esquema representando as saídas que esta configuração de software produzirá." #, python-format msgid "Schema valid only for %(ltype)s or %(mtype)s, not %(utype)s" msgstr "Esquema válido apenas para %(ltype)s ou %(mtype)s, não %(utype)s"
msgid "" "Scope of flavor accessibility. Public or private. Default value is True, " "means public, shared across all projects." msgstr "" "Escopo de acessibilidade de tipo. Público ou privado. O valor padrão é True, " "que significa público, compartilhado com todos os projetos." #, python-format msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden." msgstr "A procura do Locatário %(target)s do Locatário %(actual)s é proibida." msgid "Seconds between running periodic tasks." msgstr "Segundos entre a execução de tarefas periódicas." msgid "Seconds to wait after a create. Defaults to the global wait_secs." msgstr "" "Segundos para aguardar após uma criação. Padronizado para o wait_secs global." msgid "Seconds to wait after a delete. Defaults to the global wait_secs." msgstr "" "Segundos para aguardar após uma exclusão. Padronizado para o wait_secs " "global." msgid "Seconds to wait after an action (-1 is infinite)." msgstr "Segundos para aguardar após uma ação (-1 é infinito)." msgid "Seconds to wait after an update. Defaults to the global wait_secs." msgstr "" "Segundos para aguardar após uma atualização. Padronizado para o wait_secs " "global."
#, python-format msgid "Section %s can not be accessed directly." msgstr "Seção %s não pode ser acessada diretamente." #, python-format msgid "Security Group \"%(group_name)s\" not found" msgstr "Grupo de segurança \"%(group_name)s\" não localizado" msgid "Security group IDs to assign." msgstr "IDs do grupo de segurança para designar." msgid "Security group IDs to associate with this port." msgstr "IDs do grupo de segurança para associar a esta porta." msgid "Security group names to assign." msgstr "Nomes de grupo de segurança a designar." msgid "Security groups cannot be assigned the name \"default\"." msgstr "Grupos de segurança não podem ter o nome \"default\" designado." msgid "Security service IP address or hostname." msgstr "Endereço IP ou nome do host do serviço de segurança." msgid "Security service description." msgstr "Descrição do serviço de segurança." msgid "Security service domain." msgstr "Domínio do serviço de segurança." msgid "Security service name." msgstr "Nome do serviço de segurança." msgid "Security service type." msgstr "Tipo de serviço de segurança." msgid "Security service user or group used by tenant." msgstr "Usuário ou grupo do serviço de segurança usado pelo locatário." msgid "Select deferred auth method, stored password or trusts." msgstr "" "Selecionar método de autenticação adiado, senha armazenada ou confianças."
msgid "Sequence of characters to build the random string from." msgstr "Sequência de caracteres para construir a sequência aleatória." #, python-format msgid "Server %(name)s delete failed: (%(code)s) %(message)s" msgstr "A exclusão do servidor %(name)s falhou: (%(code)s) %(message)s" msgid "Server Group name." msgstr "Nome do Grupo de Servidores." msgid "Server name." msgstr "Nome do servidor." msgid "Server to assign floating IP to." msgstr "Servidor para designar IP flutuante." #, python-format msgid "" "Service %(service_name)s is not available for resource type " "%(resource_type)s, reason: %(reason)s" msgstr "" "O serviço %(service_name)s não está disponível para o tipo de recurso " "%(resource_type)s, motivo: %(reason)s" msgid "Service misconfigured" msgstr "Serviço mal-configurado" msgid "Service temporarily unavailable" msgstr "Serviço temporariamente indisponível" msgid "Set of parameters passed to this stack." msgstr "Conjunto de parâmetros transmitidos para essa pilha." msgid "Set of rules for comparing characters in a character set." msgstr "" "Conjunto de regras para comparar caracteres em um conjunto de caracteres." msgid "Set of symbols and encodings." msgstr "Conjunto de símbolos e codificações."
msgid "Set to \"vpc\" to have IP address allocation associated to your VPC." msgstr "" "Configure para \"vpc\" para ter a alocação de endereço IP associada a seu " "VPC." msgid "Set to true if DHCP is enabled and false if DHCP is disabled." msgstr "" "Configure como true se o DHCP estiver ativado e false se o DHCP estiver " "desativado." msgid "Severity of the alarm." msgstr "Severidade do alarme." msgid "Share description." msgstr "Descrição do compartilhamento." msgid "Share host." msgstr "Host do compartilhamento." msgid "Share name." msgstr "Nome do compartilhamento." msgid "Share network description." msgstr "Descrição da rede de compartilhamento." msgid "Share project ID." msgstr "ID do projeto de compartilhamento." msgid "Share protocol supported by shared filesystem." msgstr "" "Protocolo de compartilhamento suportado pelo sistema de arquivos " "compartilhado." msgid "Share storage size in GB." msgstr "Tamanho do armazenamento de compartilhamento em GB."
msgid "Shared status of the metering label." msgstr "O status compartilhado do rótulo de medição." msgid "Shared status of this firewall policy." msgstr "Status compartilhado desta política de firewall." msgid "Shared status of this firewall rule." msgstr "Status compartilhado desta regra de firewall." msgid "Shared status of this firewall." msgstr "Status compartilhado deste firewall." msgid "Shrinking volume" msgstr "Redução do volume" msgid "Signal data error" msgstr "Erro de dados de sinal" #, python-format msgid "Signal resource during %s" msgstr "Sinalizar recurso durante %s" #, python-format msgid "Single schema valid only for %(ltype)s, not %(utype)s" msgstr "Esquema único válido apenas para %(ltype)s, não %(utype)s" msgid "Size of a secondary ephemeral data disk in GB." msgstr "Tamanho de um disco de dados efêmero secundário em GB." msgid "Size of adjustment." msgstr "Tamanho de ajuste." msgid "Size of encryption key, in bits. For example, 128 or 256." msgstr "Tamanho da chave de criptografia, em bits. Por exemplo, 128 ou 256." msgid "" "Size of local disk in GB. The \"0\" size is a special case that uses the " "native base image size as the size of the ephemeral root volume." msgstr "" "Tamanho do disco local em GB.
O tamanho \"0\" é um caso especial que usa o " "tamanho da imagem de base nativo como o tamanho do volume raiz efêmero." msgid "" "Size of the block device in GB. If it is omitted, hypervisor driver " "calculates size." msgstr "" "Tamanho do dispositivo de bloco em GB. Se ele for omitido, o driver do " "hypervisor calculará o tamanho." msgid "Size of the instance disk volume in GB." msgstr "Tamanho do volume do disco da instância em GB." msgid "Size of the volumes, in GB." msgstr "Tamanho dos volumes, em GB." msgid "Smallest prefix size that can be allocated from the subnet pool." msgstr "" "Menor tamanho do prefixo que pode ser alocado a partir do conjunto de sub-" "redes."
#, python-format msgid "Snapshot with id %s not found" msgstr "Captura instantânea com ID %s não localizada" msgid "" "SnapshotId is missing, this is required when specifying BlockDeviceMappings." msgstr "" "SnapshotId está ausente, isso é requerido ao especificar BlockDeviceMappings." #, python-format msgid "Software config with id %s not found" msgstr "Configuração de software com ID %s não localizada" msgid "Source IP address or CIDR." msgstr "Endereço IP ou CIDR de Origem." msgid "Source ip_address for this firewall rule." msgstr "ip_address de origem para esta regra de firewall." msgid "Source port number or a range." msgstr "Número da porta de origem ou um intervalo." msgid "Source port range for this firewall rule." msgstr "Intervalo de porta de origem para esta regra de firewall."
#, python-format msgid "Specified output key %s not found." msgstr "Chave de saída especificada %s não localizada." #, python-format msgid "Specified status is invalid, defaulting to %s" msgstr "O status especificado é inválido, padronizando para %s" #, python-format msgid "Specified subnet %(subnet)s does not belongs to network %(network)s." msgstr "A sub-rede %(subnet)s especificada não pertence à rede %(network)s." msgid "Specifies a custom discovery url for node discovery." msgstr "Especifica uma URL de descoberta customizada para a descoberta do nó." msgid "Specifies database names for creating databases on instance creation." msgstr "" "Especifica nomes de banco de dados para criar bancos de dados na criação da " "instância."
msgid "Specify the ACL permissions on who can read objects in the container." msgstr "" "Especifique as permissões de ACL sobre quem pode ler os objetos no contêiner." msgid "Specify the ACL permissions on who can write objects to the container." msgstr "" "Especifique as permissões de ACL sobre quem pode gravar objetos no " "contêiner." msgid "" "Specify whether the remote_ip_prefix will be excluded or not from traffic " "counters of the metering label. For example to not count the traffic of a " "specific IP address of a range." msgstr "" "Especifique se remote_ip_prefix será excluído ou não dos contadores de " "tráfego do rótulo de medição. Por exemplo, para não contar o tráfego de um " "endereço IP específico de um intervalo." #, python-format msgid "Stack %(stack_name)s already has an action (%(action)s) in progress." msgstr "A pilha %(stack_name)s já possui uma ação (%(action)s) em andamento."
msgid "Stack ID" msgstr "Stack ID" msgid "Stack Name" msgstr "Nome do Stack" msgid "Stack name may not contain \"/\"" msgstr "Nome da pilha não pode conter \"/\"" msgid "Stack resource id" msgstr "ID de recurso de pilha" msgid "Stack unknown status" msgstr "Status desconhecido da pilha" #, python-format msgid "Stack with id %s not found" msgstr "Pilha com ID %s não localizada" msgid "" "Stacks containing these tag names will be hidden. Multiple tags should be " "given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too)." msgstr "" "Pilhas contendo esses nomes de tag serão ocultas. Diversas tags devem ser " "fornecidas em uma lista delimitada por vírgula (por exemplo, " "hidden_stack_tags=hide_me,me_too)."
msgid "Start address for the allocation pool." msgstr "Endereço de início para o conjunto de alocações." #, python-format msgid "Start resizing the group %(group)s" msgstr "Iniciar redimensionamento do grupo %(group)s" msgid "Start time for the time constraint. A CRON expression property." msgstr "" "Horário de início para a restrição de tempo. Uma propriedade da expressão " "CRON." #, python-format msgid "State %s invalid for create" msgstr "Estado %s inválido para criar" #, python-format msgid "State %s invalid for resume" msgstr "Estado %s inválido para continuar" #, python-format msgid "State %s invalid for suspend" msgstr "Estado %s inválido para suspender" msgid "Status" msgstr "Status" #, python-format msgid "String to split must be string; got %s" msgstr "Sequência a ser dividida deve ser uma sequência; obtido %s" msgid "String value with which to compare." msgstr "Valor de sequência com o qual comparar."
msgid "Subnet ID to associate with this interface." msgstr "ID de sub-rede a ser associada a essa interface." msgid "Subnet ID to launch instance in." msgstr "ID de sub-rede para ativar a instância." msgid "Subnet ID." msgstr "ID de sub-rede." msgid "Subnet in which the vpn service will be created." msgstr "Sub-rede na qual o serviço da vpn será criado." msgid "" "Subnet in which to allocate the IP address for port. Used for creating port, " "based on derived properties. If subnet is specified, network property " "becomes optional." msgstr "" "Sub-rede na qual alocar o endereço IP para a porta. Usada para criar a porta " "com base em propriedades derivadas. Se a sub-rede for especificada, a " "propriedade de rede se torna opcional." msgid "Subnet in which to allocate the IP address for this port." msgstr "Sub-rede na qual alocar o endereço IP para esta porta." msgid "Subnet name or ID of this member." msgstr "Nome ou ID da sub-rede desse membro." msgid "Subnet of external fixed IP address." msgstr "Sub-rede do endereço IP fixo externo." msgid "Subnet of the vip." msgstr "Sub-rede do vip." msgid "Subnets of this network." msgstr "Sub-redes desta rede."
msgid "" "Subset of trustor roles to be delegated to heat. If left unset, all roles of " "a user will be delegated to heat when creating a stack." msgstr "" "Subconjunto de funções de fiduciário a serem delegadas para heat. Se for " "deixado desconfigurado, todas as funções de um usuário serão delegadas para " "heat ao criar uma pilha." msgid "Supplied metadata for the resources in the group." msgstr "Metadados fornecidos para os recursos no grupo." msgid "Supported versions: keystone v3" msgstr "Versões suportadas: keystone v3" #, python-format msgid "Suspend of instance %s failed" msgstr "Suspensão da instância %s falhou" #, python-format msgid "Suspend of server %s failed" msgstr "A suspensão do servidor %s falhou" msgid "Swap space in MB." msgstr "O espaço de troca, em MB." msgid "System SIGHUP signal received." msgstr "Sinal SIGHUP do sistema recebido."
msgid "TCP or UDP port on which to listen for client traffic." msgstr "A porta TCP ou UDP na qual o tráfego do cliente é atendido." msgid "TCP port on which the instance server is listening." msgstr "A porta TCP na qual o servidor de instância está atendendo." msgid "TCP port on which the pool member listens for requests or connections." msgstr "" "A porta TCP na qual o membro do conjunto atende solicitações ou conexões." msgid "" "TCP port on which to listen for client traffic that is associated with the " "vip address." msgstr "" "A porta TCP na qual atender o tráfego do cliente que está associado com " "o endereço vip."
msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of OpenStack service finder functions." msgstr "" "TTL, em segundos, para qualquer item em cache na região dogpile.cache usada " "para armazenamento em cache de funções do localizador de serviço do " "OpenStack." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of service extensions." msgstr "" "TTL, em segundos, para qualquer item em cache na região dogpile.cache usada " "para armazenamento em cache de extensões de serviço." msgid "" "TTL, in seconds, for any cached item in the dogpile.cache region used for " "caching of validation constraints." msgstr "" "TTL, em segundos, para qualquer item em cache na região dogpile.cache usada " "para armazenamento em cache de restrições de validação."
msgid "Tag key." msgstr "Chave da tag." msgid "Tag value." msgstr "Valor da tag." msgid "Tags to add to the image." msgstr "Tags para incluir na imagem." msgid "Tags to attach to instance." msgstr "Tags para anexar à instância." msgid "Tags to attach to the bucket." msgstr "Tags para conectar ao depósito." msgid "Tags to attach to this group." msgstr "Tags para anexar a este grupo." msgid "Task description." msgstr "Descrição da tarefa." msgid "Task name." msgstr "Nome da tarefa."
msgid "" "Template default for how the server should receive the metadata required for " "software configuration. POLL_SERVER_CFN will allow calls to the cfn API " "action DescribeStackResource authenticated with the provided keypair " "(requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the " "Heat API resource-show using the provided keystone credentials (requires " "keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL " "will create and populate a Swift TempURL with metadata for polling (requires " "object-store endpoint which supports TempURL).ZAQAR_MESSAGE will create a " "dedicated zaqar queue and post the metadata for polling." msgstr "" "Padrão de modelo para o modo com que o servidor deve receber os metadados " "necessários para a configuração do software. POLL_SERVER_CFN permite chamadas " "da ação da API cfn DescribeStackResource autenticada com o par de chaves " "fornecido (requer o heat-api-cfn ativado). POLL_SERVER_HEAT permite " "chamadas do resource-show da API do Heat usando as credenciais do keystone " "fornecidas (requer a API v3 do keystone e as opções de configuração " "stack_user_* configuradas). POLL_TEMP_URL cria e preenche uma TempURL Swift " "com metadados para pesquisa (requer o terminal object-store que suporta " "TempURL). ZAQAR_MESSAGE cria uma fila zaqar dedicada e posta os metadados " "para pesquisa."
msgid "" "Template default for how the server should signal to heat with the " "deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN " "keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will " "create a Swift TempURL to be signaled via HTTP PUT (requires object-store " "endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat " "API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL " "will create a dedicated zaqar queue to be signaled using the provided " "keystone credentials." msgstr "" "O padrão de modelo para o modo com que o servidor deve sinalizar o Heat com " "os valores de saída da implementação. CFN_SIGNAL permite um HTTP POST em uma " "URL sinalizada pelo par de chaves CFN (requer heat-api-cfn ativado). " "TEMP_URL_SIGNAL cria uma TempURL Swift a ser sinalizada por meio do HTTP PUT " "(requer o terminal object-store que suporta TempURL). HEAT_SIGNAL permite " "chamadas do resource-signal da API do Heat usando as credenciais do keystone " "fornecidas. ZAQAR_SIGNAL cria uma fila zaqar dedicada a ser sinalizada " "usando as credenciais do keystone fornecidas."
msgid "Template format version not found." msgstr "Versão do formato do modelo não localizada." #, python-format msgid "" "Template size (%(actual_len)s bytes) exceeds maximum allowed size (%(limit)s " "bytes)." msgstr "" "O tamanho do modelo (%(actual_len)s bytes) excede o tamanho máximo permitido " "de (%(limit)s bytes)." msgid "Template that specifies the stack to be created as a resource." msgstr "Modelo que especifica a pilha a ser criada como um recurso." #, python-format msgid "Template type is not supported: %s" msgstr "O tipo de modelo não é suportado: %s" msgid "Template version was not provided" msgstr "A versão do modelo não foi fornecida" #, python-format msgid "Template with version %s not found" msgstr "Modelo com versão %s não localizado" msgid "TemplateBody or TemplateUrl were not given." msgstr "TemplateBody ou TemplateUrl não foram informados."
msgid "Tenant owning the health monitor." msgstr "Locatário que possui o monitor de funcionamento." msgid "Tenant owning the pool member." msgstr "Locatário que possui o membro do conjunto." msgid "Tenant owning the pool." msgstr "Locatário que possui o conjunto." msgid "Tenant owning the port." msgstr "Locatário que possui a porta." msgid "Tenant owning the router." msgstr "Locatário que possui o roteador." msgid "Tenant owning the subnet." msgstr "Locatário proprietário da sub-rede." #, python-format msgid "Testing message %(text)s" msgstr "Testando a mensagem %(text)s" #, python-format msgid "The \"%(hook)s\" hook is not defined on %(resource)s" msgstr "O hook \"%(hook)s\" não está definido no %(resource)s" #, python-format msgid "The \"for_each\" argument to \"%s\" must contain a map" msgstr "O argumento \"for_each\" para \"%s\" deve conter um mapa" #, python-format msgid "The %(entity)s (%(name)s) could not be found." msgstr "O %(entity)s (%(name)s) não pôde ser localizado." #, python-format msgid "The %s must be provided for each parameter group." msgstr "O %s deve ser fornecido para cada grupo de parâmetros." #, python-format msgid "The %s of parameter group should be a list." msgstr "O %s do grupo de parâmetros deve ser uma lista."
#, python-format msgid "The %s parameter must be assigned to one parameter group only." msgstr "O parâmetro %s deve ser designado somente para um grupo de parâmetros." #, python-format msgid "The %s should be a list." msgstr "O %s deve ser uma lista." msgid "The API paste config file to use." msgstr "O arquivo de configuração de colagem da API a utilizar." msgid "The AWS Access Key ID needs a subscription for the service" msgstr "O ID da Chave AWS Access precisa de uma assinatura para o serviço" msgid "The Availability Zone where the specified instance is launched." msgstr "A Zona de Disponibilidade onde a instância especificada foi ativada." msgid "The Availability Zones in which to create the load balancer." msgstr "As Zonas de Disponibilidade nas quais criar o balanceador de carga." msgid "The CIDR." msgstr "O CIDR." msgid "The DNS name for the LoadBalancer." msgstr "O nome DNS para o LoadBalancer." msgid "The DNS name of the specified bucket." msgstr "O nome DNS do depósito especificado." msgid "The DNS nameserver address." msgstr "O endereço do servidor de nomes do DNS."
msgid "The HTTP method used for requests by the monitor of type HTTP." msgstr "O método HTTP utilizado para pedidos pelo monitor do tipo HTTP." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health." msgstr "" "O caminho HTTP usado no pedido HTTP utilizado pelo monitor para testar o " "funcionamento de um membro." msgid "" "The HTTP path used in the HTTP request used by the monitor to test a member " "health. A valid value is a string the begins with a forward slash (/)." msgstr "" "O caminho HTTP usado na solicitação de HTTP utilizada pelo monitor para " "testar o funcionamento de um membro. Um valor válido é uma sequência que " "inicia com uma barra (/)." msgid "" "The HTTP status codes expected in response from the member to declare it " "healthy. Specify one of the following values: a single value, such as 200. a " "list, such as 200, 202. a range, such as 200-204." msgstr "" "Os códigos de status HTTP esperados em resposta do membro para declará-lo " "como em funcionamento. Especifique um dos seguintes valores: um valor único, " "como 200; uma lista, como 200, 202; ou um intervalo, como 200-204."
msgid "" "The ID of an existing instance to use to create the Auto Scaling group. If " "specify this property, will create the group use an existing instance " "instead of a launch configuration." msgstr "" "O ID de uma instância existente a ser usado para criar o grupo Ajuste de " "Escala Automático. Se essa propriedade for especificada, o grupo será criado " "usando uma instância existente em vez de uma configuração de ativação." msgid "" "The ID of an existing instance you want to use to create the launch " "configuration. All properties are derived from the instance with the " "exception of BlockDeviceMapping." msgstr "" "O ID de uma instância existente que você deseja usar para criar a " "configuração de ativação. Todas as propriedades são derivadas da instância " "com a exceção de BlockDeviceMapping." msgid "The ID of the attached network." msgstr "O ID da rede conectada." msgid "The ID of the firewall policy that this firewall is associated with." msgstr "O ID da política de firewall à qual esse firewall está associado." msgid "" "The ID of the hosted zone name that is associated with the LoadBalancer." msgstr "O ID do nome da zona de host que está associada ao LoadBalancer." msgid "The ID of the image to create a volume from." msgstr "O ID da imagem a partir da qual criar um volume."
msgid "The ID of the image to create the volume from." msgstr "O ID da imagem a partir da qual criar o volume." msgid "The ID of the instance to which the volume attaches." msgstr "O ID da instância para a qual o volume está conectado." msgid "The ID of the load balancing pool." msgstr "O ID do conjunto de balanceamento de carga." msgid "The ID of the pool to which the pool member belongs." msgstr "O ID do conjunto ao qual o membro do conjunto pertence." msgid "The ID of the server to which the volume attaches." msgstr "O ID do servidor para o qual o volume está conectado." msgid "The ID of the snapshot to create a volume from." msgstr "O ID da captura instantânea a partir da qual criar um volume." msgid "" "The ID of the tenant which will own the network. Only administrative users " "can set the tenant identifier; this cannot be changed using authorization " "policies." msgstr "" "O ID do locatário que possuirá a rede. Somente usuários administrativos " "podem definir o identificador de locatário; isso não pode ser alterado " "utilizando políticas de autorização."
msgid "" "The ID of the tenant who owns the Load Balancer. Only administrative users " "can specify a tenant ID other than their own." msgstr "" "O ID do locatário que possui o Balanceador de Carga. Somente usuários " "administrativos podem especificar um ID de locatário diferente do seu " "próprio." msgid "The ID of the tenant who owns the listener." msgstr "O ID do locatário que possui o listener." msgid "" "The ID of the tenant who owns the network. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "O ID do locatário que possui a rede. Somente usuários administrativos podem " "especificar um ID de locatário diferente do seu próprio." msgid "" "The ID of the tenant who owns the subnet pool. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "O ID do locatário que possui o conjunto de sub-redes. Somente usuários " "administrativos podem especificar um ID de locatário diferente do seu " "próprio." msgid "The ID of the volume to be attached." msgstr "O ID do volume a ser conectado."
msgid "" "The ID of the volume to boot from. Only one of volume_id or snapshot_id " "should be provided." msgstr "" "O ID do volume a partir do qual inicializará. Apenas volume_id ou " "snapshot_id deve ser fornecido." msgid "The ID or name of the flavor to boot onto." msgstr "O ID ou nome do tipo no qual inicializará." msgid "The ID or name of the image to boot with." msgstr "O ID ou nome da imagem para inicializar." msgid "" "The IDs of the DHCP agent to schedule the network. Note that the default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "Os IDs do agente DHCP para planejar a rede. Observe que a configuração de " "política padrão em Neutron restringe o uso desta propriedade apenas a " "usuários administrativos." msgid "The IP address of the pool member." msgstr "O endereço IP do membro do conjunto." msgid "The IP version, which is 4 or 6." msgstr "A versão de IP, que é 4 ou 6." #, python-format msgid "The Parameter (%(key)s) was not defined in template." msgstr "O Parâmetro (%(key)s) não foi definido no modelo." #, python-format msgid "The Parameter (%(key)s) was not provided." msgstr "O parâmetro (%(key)s) não foi fornecido." msgid "The QoS policy ID attached to this network." msgstr "O ID da política do QoS anexada a essa rede." msgid "The QoS policy ID attached to this port."
msgstr "O ID da política do QoS anexada a essa porta." #, python-format msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect." msgstr "O Atributo de Referência (%(resource)s %(key)s) está incorreto." #, python-format msgid "The Resource %s requires replacement." msgstr "O Recurso %s requer substituição." #, python-format msgid "" "The Resource (%(resource_name)s) could not be found in Stack %(stack_name)s." msgstr "" "O Recurso (%(resource_name)s) não pôde ser localizado na Pilha " "%(stack_name)s." #, python-format msgid "The Resource (%(resource_name)s) is not available." msgstr "O recurso (%(resource_name)s) não está disponível." #, python-format msgid "The Snapshot (%(snapshot)s) for Stack (%(stack)s) could not be found." msgstr "" "A Captura Instantânea (%(snapshot)s) para a Pilha (%(stack)s) não pôde ser " "localizada." #, python-format msgid "The Stack (%(stack_name)s) already exists." msgstr "A Pilha (%(stack_name)s) já existe." msgid "The Template must be a JSON or YAML document." msgstr "O Modelo deve ser um documento JSON ou YAML."
msgid "The URI to the container." msgstr "O URI do contêiner." msgid "The URI to the created container." msgstr "O URI para o contêiner criado." msgid "The URI to the created secret." msgstr "O URI do segredo criado." msgid "The URI to the order." msgstr "O URI do pedido." msgid "The URIs to container consumers." msgstr "Os URIs para consumidores do contêiner." msgid "The URIs to secrets stored in container." msgstr "Os URIs para segredos armazenados no contêiner." msgid "" "The URL of a template that specifies the stack to be created as a resource." msgstr "" "A URL de um modelo que especifica a pilha a ser criada como um recurso." msgid "The URL of the container." msgstr "A URL do contêiner." msgid "The VIP address of the LoadBalancer." msgstr "O endereço de VIP do LoadBalancer." msgid "The VIP port of the LoadBalancer." msgstr "A porta VIP do LoadBalancer." msgid "The VIP subnet of the LoadBalancer." msgstr "A sub-rede VIP do LoadBalancer."
msgid "The action or operation requested is invalid" msgstr "A ação ou operação solicitada é inválida" msgid "The action to be executed when the receiver is signaled." msgstr "A ação a ser executada quando o receptor é sinalizado." msgid "The administrative state of the firewall." msgstr "O estado administrativo do firewall." msgid "The administrative state of the health monitor." msgstr "O estado administrativo do monitor de funcionamento." msgid "The administrative state of the ipsec site connection." msgstr "O estado administrativo da conexão do site ipsec." msgid "The administrative state of the pool member." msgstr "O estado administrativo do membro do conjunto." msgid "The administrative state of the router." msgstr "O estado administrativo do roteador." msgid "The administrative state of the vpn service." msgstr "O estado administrativo do serviço de vpn." msgid "The administrative state of this Load Balancer." msgstr "O estado administrativo deste Balanceador de Carga." msgid "The administrative state of this health monitor." msgstr "O estado administrativo deste monitor de funcionamento." msgid "The administrative state of this listener." msgstr "O estado administrativo deste listener." msgid "The administrative state of this pool member." msgstr "O estado administrativo deste membro do conjunto." msgid "The administrative state of this pool." msgstr "O estado administrativo deste conjunto." msgid "The administrative state of this port." msgstr "O estado administrativo desta porta."
msgid "The administrative state of this vip." msgstr "O estado administrativo deste vip." msgid "The administrative status of the network." msgstr "O status administrativo da rede." msgid "The administrator password for the server." msgstr "A senha do administrador para o servidor." msgid "The aggregation method to compare to the threshold." msgstr "O método de agregação para comparar com o limite." msgid "The algorithm type used to generate the secret." msgstr "O tipo de algoritmo usado para gerar o segredo. " msgid "" "The algorithm type used to generate the secret. Required for key and " "asymmetric types of order." msgstr "" "O tipo de algoritmo usado para gerar o segredo. Necessário para tipos de " "pedido de chave e assimétricos." msgid "The algorithm used to distribute load between the members of the pool." msgstr "" "O algoritmo utilizado para distribuir carga entre os membros do conjunto." msgid "The allocated address of this IP." msgstr "O endereço alocado desse IP." msgid "" "The approximate interval, in seconds, between health checks of an individual " "instance." msgstr "" "O intervalo aproximado, em segundos, entre as verificações de funcionamento " "de uma instância individual." msgid "The authentication hash algorithm of the ipsec policy." msgstr "O algoritmo hash de autenticação da política ipsec." msgid "The authentication hash algorithm used by the ike policy." msgstr "O algoritmo hash de autenticação utilizado pela política ike." msgid "The authentication mode of the ipsec site connection." msgstr "O modo de autenticação da conexão do site ipsec." msgid "The availability zone in which the volume is located." msgstr "A zona de disponibilidade no qual o volume está localizado." msgid "The availability zone in which the volume will be created." msgstr "A zona de disponibilidade na qual o volume será criado." msgid "The availability zone of shared filesystem." msgstr "A zona de disponibilidade do sistema de arquivos compartilhado." msgid "The bay name." msgstr "O nome do compartimento." msgid "The bit-length of the secret." msgstr "O comprimento de bit do segredo." msgid "" "The bit-length of the secret. Required for key and asymmetric types of order." msgstr "" "O comprimento de bit do segredo. Necessário para tipos de pedido de chave e " "assimétricos." #, python-format msgid "The bucket you tried to delete is not empty (%s)." msgstr "O depósito que você tentou excluir não está vazio (%s)." msgid "The can be used to unmap a defined device." msgstr "Pode ser usado para remover mapeamento de um dispositivo definido." msgid "The certificate or AWS Key ID provided does not exist" msgstr "O certificado ou ID de chave AWS fornecido não existe" msgid "The channel for receiving signals." msgstr "O canal para receber sinais. " msgid "" "The class that provides encryption support. For example, nova.volume." "encryptors.luks.LuksEncryptor." msgstr "" "A classe que fornece o suporte para criptografia, Por exemplo, nova.volume." "encryptors.luks.LuksEncryptor." #, python-format msgid "The client (%(client_name)s) is not available." msgstr "O cliente (%(client_name)s) não está disponível." msgid "The cluster ID this node belongs to." msgstr "O ID do cluster ao qual esse nó pertence." msgid "The config value of the software config." msgstr "O valor de configuração da configuração de software." msgid "" "The configuration tool used to actually apply the configuration on a server. " "This string property has to be understood by in-instance tools running " "inside deployed servers." 
msgstr "" "A ferramenta de configuração usada para aplicar de fato a configuração em um " "servidor. Essa propriedade da sequência tem que ser entendida pelas " "ferramentas na instâncias sendo executadas dentro dos servidores " "implementados." msgid "The content of the CSR. Only for certificate orders." msgstr "O conteúdo do CSR. Somente para pedidos de certificado. " #, python-format msgid "" "The contents of personality file \"%(path)s\" is larger than the maximum " "allowed personality file size (%(max_size)s bytes)." msgstr "" "O conteúdo do arquivo de personalidade \"%(path)s\" é maior que o máximo " "permitido do tamanho do arquivo de personalidade (bytes %(max_size)s)." msgid "The current size of AutoscalingResourceGroup." msgstr "O tamanho atual do AutoscalingResourceGroup." msgid "The current status of the volume." msgstr "O status atual do volume." msgid "" "The database instance was created, but heat failed to set up the datastore. " "If a database instance is in the FAILED state, it should be deleted and a " "new one should be created." msgstr "" "A instância do banco de dados foi criada, mas o heat falhou ao configurar o " "armazenamento de dados. Se uma instância do banco de dados estiver no estado " "COM FALHA, ela deverá ser excluída e uma nova deverá ser criada." msgid "" "The dead peer detection protocol configuration of the ipsec site connection." msgstr "" "A configuração do protocolo de detecção de peer inativo da conexão do site " "ipsec. " msgid "The decrypted secret payload." msgstr "A carga útil do segredo decriptografada. " msgid "" "The default cloud-init user set up for each image (e.g. \"ubuntu\" for " "Ubuntu 12.04+, \"fedora\" for Fedora 19+ and \"cloud-user\" for CentOS/RHEL " "6.5)." msgstr "" "A configuração do usuário cloud-init padrão para cada imagem (por exemplo, " "\"ubuntu\" para Ubuntu 12.04+, \"fedora\" para Fedora 19+ e \"cloud-user\" " "para CentOS/RHEL 6.5)." msgid "The description for the QoS policy." msgstr "A descrição da política do QoS." msgid "The description of the ike policy." msgstr "A descrição da política ike." msgid "The description of the ipsec policy." msgstr "A descrição da política ipsec." msgid "The description of the ipsec site connection." msgstr "A descrição da conexão do site ipsec." msgid "The description of the vpn service." msgstr "A descrição do serviço de vpn." msgid "The destination for static route." msgstr "O destino para a rota estática." msgid "The details of physical object." msgstr "Os detalhes do objeto físico." msgid "The device id for the network gateway." msgstr "O ID do dispositivo para o gateway de rede." msgid "" "The device where the volume is exposed on the instance. This assignment may " "not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "O dispositivo no qual o volume está exposto na instância. Esta designação " "não pode ser honrada e é aconselhável que o caminho /dev/disk/by-id/virtio-" " seja utilizado em seu lugar." msgid "" "The direction in which metering rule is applied, either ingress or egress." msgstr "A direção na qual regra de medição é aplicada, de entrada ou saída." msgid "The direction in which metering rule is applied." msgstr "A direção na qual regra de medição é aplicada." msgid "" "The direction in which the security group rule is applied. For a compute " "instance, an ingress security group rule matches traffic that is incoming " "(ingress) for that instance. An egress rule is applied to traffic leaving " "the instance." 
msgstr "" "A direção na qual a regra do grupo de segurança é aplicada. Para uma " "instância de cálculo, uma regra de grupo de segurança de entrada corresponde " "ao tráfego que é de entrada (entrada) para essa instância. Uma regra de " "saída é aplicada ao tráfego deixando a instância." msgid "The directory to search for environment files." msgstr "O diretório para procurar arquivos de ambiente." msgid "The ebs volume to attach to the instance." msgstr "O volume ebs para se conectar à instância." msgid "The encapsulation mode of the ipsec policy." msgstr "O modo de encapsulamento da política ipsec." msgid "The encoding format used to provide the payload data." msgstr "O formato da codificação usado para fornecer os dados da carga útil. " msgid "The encryption algorithm of the ipsec policy." msgstr "O algoritmo de criptografia da política ipsec." msgid "The encryption algorithm or mode. For example, aes-xts-plain64." msgstr "O algoritmo ou modo de criptografia. Por exemplo, aes-xts-plain64." msgid "The encryption algorithm used by the ike policy." msgstr "O algoritmo de criptografia utilizado pela política ike." msgid "The environment is not a valid YAML mapping data type." msgstr "O ambiente não é um tipo de dados de mapeamento YAML válido." msgid "The expiration date for the secret in ISO-8601 format." msgstr "A data de expiração para o segredo no formato ISO-8601." msgid "The external load balancer port number." msgstr "O número da porta do balanceador de carga externo." msgid "The extra specs key and value pairs of the volume type." msgstr "Os pares de chave e valor de especificações extras do tipo de volume." msgid "The flavor to use." msgstr "O tipo a ser utilizado." #, python-format msgid "The following parameters are immutable and may not be updated: %(keys)s" msgstr "" "Os seguintes parâmetros são imutáveis e podem não ser atualizados: %(keys)s" #, python-format msgid "The function %s is not supported in this version of HOT." msgstr "A função %s não é suportada nesta versão do HOT." msgid "" "The gateway IP address. Set to any of [ null | ~ | \"\" ] to create/update a " "subnet without a gateway. If omitted when creation, neutron will assign the " "first free IP address within the subnet to the gateway automatically. If " "remove this from template when update, the old gateway IP address will be " "detached." msgstr "" "O endereço IP do gateway. Configure para qualquer [ null | ~ | \"\" ] para " "criar/atualizar uma sub-rede sem um gateway. Se omitido na criação, o " "Neutron designará automaticamente o primeiro endereço IP livre dentro da sub-" "rede para o gateway. Se remover desse modelo ao atualizar, o endereço IP do " "gateway será desconectado." #, python-format msgid "The grouped parameter %s does not reference a valid parameter." msgstr "O grupo de parâmetros %s não referencia um parâmetro válido." msgid "The host from the container URL." msgstr "O host da URL do contêiner." msgid "The host from which a user is allowed to connect to the database." msgstr "" "O host a partir do qual um usuário tem permissão para se conectar ao banco " "de dados." msgid "" "The id for L2 segment on the external side of the network gateway. Must be " "specified when using vlan." msgstr "" "O ID para o segmento L2 no lado externo do gateway de rede. Deve ser " "especificado ao utilizar a vlan." msgid "The identifier of the CA to use." msgstr "O identificador da CA a ser utilizada." msgid "The image ID. Glance will generate a UUID if not specified." msgstr "O ID da imagem. 
Glance irá gerar um UUID se não for especificado." msgid "The initiator of the ipsec site connection." msgstr "O iniciador da conexão do site ipsec." msgid "The input string to be stored." msgstr "A sequência de entrada a ser armazenada." msgid "The interface name for the network gateway." msgstr "O nome da interface para o gateway de rede." msgid "The internal network to connect on the network gateway." msgstr "A rede interna para conexão no gateway de rede." msgid "The last operation for the database instance failed due to an error." msgstr "" "A última operação para a instância do banco de dados falhou devido a um erro." #, python-format msgid "The length must be at least %(min)s." msgstr "O comprimento deve ser pelo menos %(min)s." #, python-format msgid "The length must be in the range %(min)s to %(max)s." msgstr "O comprimento deve estar no intervalo de %(min)s a %(max)s." #, python-format msgid "The length must be no greater than %(max)s." msgstr "O comprimento não deve ser superior a %(max)s." msgid "The length of time, in minutes, to wait for the nested stack creation." msgstr "" "O período de tempo, em minutos, para aguardar a criação da pilha aninhada." msgid "" "The list of HTTP status codes expected in response from the member to " "declare it healthy." msgstr "" "A lista de códigos de status HTTP esperados na resposta do membro para " "declará-lo em funcionamento." msgid "The list of Nova server IDs load balanced." msgstr "A lista de IDs do servidor Nova com balanceamento de carga." msgid "The list of Pools related to this monitor." msgstr "A lista de Conjuntos relacionados a esse monitor. " msgid "The list of attachments of the volume." msgstr "A lista de anexos do volume." msgid "" "The list of configurations for the different lifecycle actions of the " "represented software component." msgstr "" "A lista das configurações para as diferentes ações do ciclo de vida do " "componente de software representado." msgid "The list of instance IDs load balanced." msgstr "A lista de IDs de instância de carga balanceada." msgid "" "The list of resource types to create. This list may contain type names or " "aliases defined in the resource registry. Specific template names are not " "supported." msgstr "" "A lista de tipos de recurso a serem criados. Essa lista pode conter nomes ou " "aliases de tipo definidos no registro do recurso. Nomes de modelo " "específicos não são suportados. " msgid "The list of tags to associate with the volume." msgstr "A lista de tags para associar com o volume." msgid "The load balancer transport protocol to use." msgstr "O protocolo de transporte do balanceador de carga para utilizar." msgid "" "The location where the volume is exposed on the instance. This assignment " "may not be honored and it is advised that the path /dev/disk/by-id/virtio-" " be used instead." msgstr "" "O local no qual o volume está exposto na instância. Esta designação pode não " "ser honrada e é aconselhável que o caminho /dev/disk/by-id/virtio- " "seja utilizado em seu lugar." msgid "The manually assigned alternative public IPv4 address of the server." msgstr "O endereço IPv4 público alternativo designado manualmente do servidor." msgid "The manually assigned alternative public IPv6 address of the server." msgstr "O endereço IPv6 público alternativo designado manualmente do servidor." msgid "The maximum number of connections per second allowed for the vip." msgstr "O número máximo de conexões por segundo permitido para o vip."
msgid "" "The maximum number of connections permitted for this load balancer. Defaults " "to -1, which is infinite." msgstr "" "O número máximo de conexões permitidas para esse balanceador de carga, " "Padronizado para -1, que é infinito." msgid "The maximum number of resources to create at once." msgstr "O número máximo de recursos para criar de uma vez." msgid "The maximum number of resources to replace at once." msgstr "O número máximo de recursos para substituir de uma vez." msgid "" "The maximum number of seconds to wait for the resource to signal completion. " "Once the timeout is reached, creation of the signal resource will fail." msgstr "" "O número máximo de segundos de espera para o recurso ao sinal de conclusão. " "Depois que o tempo limite for atingido, a criação do recurso de sinal " "falhará." msgid "" "The maximum port number in the range that is matched by the security group " "rule. The port_range_min attribute constrains the port_range_max attribute. " "If the protocol is ICMP, this value must be an ICMP type." msgstr "" "O número máximo de porta no intervalo que corresponde à regra do grupo de " "segurança. O atributo port_range_min restringe o atributo port_range_max. Se " "o protocolo for ICMP, esse valor deve ser um tipo ICMP." msgid "" "The maximum transmission unit size (in bytes) of the ipsec site connection." msgstr "" "O tamanho da unidade máxima de transmissão (em bytes) da conexão do site " "ipsec. " msgid "The maximum transmission unit size(in bytes) for the network." msgstr "O tamanho máximo da unidade de transmissão (em bytes) para a rede." msgid "The metering label ID to associate with this metering rule." msgstr "O ID de rótulo do medidor para associar a essa regra de medição." msgid "" "The metric dimensions to match to the alarm dimensions. One or more " "dimension key names separated by a comma." msgstr "" "As dimensões da métrica para corresponder às dimensões do alarme. Um ou mais " "nomes de chave de dimensão separados por uma vírgula. " msgid "" "The minimum number of characters from this character class that will be in " "the generated string." msgstr "" "O número mínimo de caracteres desta classe de caractere que estará na " "sequência gerada." msgid "" "The minimum number of characters from this sequence that will be in the " "generated string." msgstr "" "O número mínimo de caracteres desta sequência que estará na sequência gerada." msgid "" "The minimum number of resources in service while rolling updates are being " "executed." msgstr "" "O número mínimo de recursos em serviço enquanto as atualizações em andamento " "estiverem sendo executadas." msgid "" "The minimum port number in the range that is matched by the security group " "rule. If the protocol is TCP or UDP, this value must be less than or equal " "to the value of the port_range_max attribute. If the protocol is ICMP, this " "value must be an ICMP type." msgstr "" "O número mínimo da porta no intervalo que é compensada pela regra de grupo " "de segurança. Se o protocolo for TCP ou UDP, esse valor deve ser menor que " "ou igual ao valor do atributo port_range_max. Se o protocolo for ICMP, esse " "valor deve ser um tipo ICMP." msgid "The name for the QoS policy." msgstr "O nome da política do QoS." msgid "The name for the address scope." msgstr "O nome do escopo de endereço." msgid "" "The name of the driver used for instantiating container networks. By " "default, Magnum will choose the pre-configured network driver based on COE " "type." 
msgstr "" "O nome do driver usado para instanciar redes do contêiner. Por padrão, o " "Magnum escolherá o driver de rede pré-configurado com base no tipo de COE." msgid "The name of the error document." msgstr "O nome do documento de erro." msgid "The name of the hosted zone that is associated with the LoadBalancer." msgstr "O nome da zona de host que está associada ao LoadBalancer." msgid "The name of the ike policy." msgstr "O nome da política ike." msgid "The name of the index document." msgstr "O nome do documento de índice." msgid "The name of the ipsec policy." msgstr "O nome da política ipsec." msgid "The name of the ipsec site connection." msgstr "O nome da conexão do site ipsec." msgid "The name of the key pair." msgstr "O nome do par de chaves." msgid "The name of the network gateway." msgstr "O nome do gateway de rede." msgid "The name of the network." msgstr "O nome da rede." msgid "The name of the router." msgstr "O nome do roteador." msgid "The name of the subnet." msgstr "O nome da sub-rede." msgid "The name of the user that the new key will belong to." msgstr "O nome do usuário ao qual a nova chave pertencerá." msgid "" "The name of the virtual device. The name must be in the form ephemeralX " "where X is a number starting from zero (0); for example, ephemeral0." msgstr "" "O nome do dispositivo virtual. O nome deve estar no formato ephemeralX em " "que X é um número que começa com zero (0), por exemplo, ephemeral0." msgid "The name of the vpn service." msgstr "O nome do serviço de vpn." msgid "The name or ID of QoS policy to attach to this network." msgstr "O nome ou o ID da política do QoS para anexar a essa rede." msgid "The name or ID of QoS policy to attach to this port." msgstr "O nome ou o ID da política do QoS para anexar a essa porta." msgid "The name or ID of parent of this keystone project in hierarchy." msgstr "O nome ou ID do pai desse projeto do keystone na hierarquia." msgid "The name or ID of target cluster." msgstr "O nome ou ID do cluster de destino." msgid "The name or ID of the bay model." msgstr "O nome ou ID do modelo do compartimento." msgid "The name or ID of the subnet on which to allocate the VIP address." msgstr "O nome ou ID da sub-rede no qual alocar o endereço VIP." msgid "The name or ID of the subnet pool." msgstr "O nome ou ID do conjunto de sub-redes." msgid "The name or id of the Senlin profile." msgstr "O nome ou ID do perfil Senlin." msgid "The negotiation mode of the ike policy." msgstr "O modo de negociação da política ike." msgid "The next hop for the destination." msgstr "O próximo hop para o destino." msgid "The node count for this bay." msgstr "A contagens de nós para esse compartimento." msgid "The notification methods to use when an alarm state is ALARM." msgstr "" "Os métodos de notificação a serem usados quando um estado de alarme for " "ALARM." msgid "The notification methods to use when an alarm state is OK." msgstr "" "Os métodos de notificação a serem usados quando um estado de alarme for OK." msgid "The notification methods to use when an alarm state is UNDETERMINED." msgstr "" "Os métodos de notificação a serem usados quando um estado de alarme for " "UNDETERMINED." msgid "The number of I/O operations per second that the volume supports." msgstr "O número de operações de E/S por segundo que o volume suporta." msgid "The number of bytes stored in the container." msgstr "O número de bytes armazenados no contêiner." 
msgid "" "The number of consecutive health probe failures required before moving the " "instance to the unhealthy state" msgstr "" "O número de falhas consecutivas de análise de funcionamento necessárias " "antes de mover a instância para o estado de mau-funcionamento" msgid "" "The number of consecutive health probe successes required before moving the " "instance to the healthy state." msgstr "" "O número de sucessos de análise consecutivas de funcionamento necessária " "antes de mover a instância para o estado saudável." msgid "The number of master nodes for this bay." msgstr "O número de nós principais para esse compartimento." msgid "The number of objects stored in the container." msgstr "O número de objetos armazenados no contêiner." msgid "The number of replicas to be created." msgstr "O número de réplicas a serem criadas." msgid "The number of resources to create." msgstr "O número de recursos a serem criados." msgid "The number of seconds to wait between batches of updates." msgstr "O número de segundos para aguardar entre lotes de atualizações." msgid "The number of seconds to wait between batches." msgstr "O número de segundos para aguardar entre lotes." msgid "The number of seconds to wait for the cluster actions." msgstr "O número de segundos para aguardar as ações do cluster." msgid "" "The number of seconds to wait for the correct number of signals to arrive." msgstr "" "O número de segundos a esperar para que o número correto de sinais chegue." msgid "" "The number of success signals that must be received before the stack " "creation process continues." msgstr "" "O número de sinais de sucesso que deve ser recebido antes que o processo de " "criação da pilha continue." msgid "" "The optional public key. This allows users to supply the public key from a " "pre-existing key pair. If not supplied, a new key pair will be generated." msgstr "" "A chave pública opcional. Isso permite que os usuários forneçam a chave " "pública a partir de um par de chaves preexistente. Se não fornecido, um novo " "par de chaves será gerado." msgid "" "The owner tenant ID of the address scope. Only administrative users can " "specify a tenant ID other than their own." msgstr "" "O ID do locatário proprietário do escopo de endereço. Somente usuários " "administrativos podem especificar um ID do locatário diferente do que eles " "possuem." msgid "The owner tenant ID of this QoS policy." msgstr "O ID do locatário proprietário dessa política do QoS." msgid "The owner tenant ID of this rule." msgstr "O ID do locatário proprietário dessa regra." msgid "" "The owner tenant ID. Only required if the caller has an administrative role " "and wants to create a RBAC for another tenant." msgstr "" "O ID do locatário proprietário. Necessário somente se o responsável pela " "chamada possuir função administrativa e desejar criar um RBAC para outro " "locatário. " msgid "The parameters passed to action when the receiver is signaled." msgstr "" "Os parâmetros transmitidos para a ação quando o receptor é sinalizado. " msgid "The parent URL of the container." msgstr "A URL pai do contêiner." msgid "The payload of the created certificate, if available." msgstr "A carga útil do certificado criado, se disponível." msgid "The payload of the created intermediates, if available." msgstr "A carga útil dos intermediários criados, se disponível." msgid "The payload of the created private key, if available." msgstr "A carga útil da chave privada criada, se disponível." 
msgid "The payload of the created public key, if available." msgstr "A carga útil da chave pública criada, se disponível." msgid "The perfect forward secrecy of the ike policy." msgstr "O segredo de avanço perfeito da política ike." msgid "The perfect forward secrecy of the ipsec policy." msgstr "O segredo de avanço perfeito da política ipsec." #, python-format msgid "The personality property may not contain greater than %s entries." msgstr "A propriedade personalidade não pode conter mais de %s entradas." msgid "The physical mechanism by which the virtual network is implemented." msgstr "O mecanismo físico pelo qual a rede virtual está implementada." msgid "The port being checked." msgstr "A porta que está sendo verificada." msgid "The port id, either subnet or port_id should be specified." msgstr "O ID da porta, sub-rede ou port_id deve ser especificado." msgid "The port on which the server will listen." msgstr "A porta em que o servidor irá escutar." msgid "The port, either subnet or port should be specified." msgstr "A porta, tanto a sub-rede quanto a porta precisam ser especificadas." msgid "The pre-shared key string of the ipsec site connection." msgstr "A sequência de chave pre-compartilhada da conexão do site ipsec." msgid "The private key if it has been saved." msgstr "A chave privada se ele tiver sido salva." msgid "The profile of certificate to use." msgstr "O perfil de certificado pra uso. " msgid "" "The protocol that is matched by the security group rule. Valid values " "include tcp, udp, and icmp." msgstr "" "O protocolo que é correspondido pela regra de grupo de segurança. Os valores " "válidos incluem tcp, udp, icmp." msgid "The public key." msgstr "A chave pública." msgid "The query string is malformed" msgstr "A query string está mal-formada" msgid "The query to filter the metrics." msgstr "A consulta para filtrar as métricas." msgid "" "The random string generated by this resource. This value is also available " "by referencing the resource." msgstr "" "A sequência aleatória gerada por este recurso. Este valor também está " "disponível para referência ao recurso." msgid "The reference to a LaunchConfiguration resource." msgstr "A referência a um recurso LaunchConfiguration." msgid "" "The remote IP prefix (CIDR) to be associated with this security group rule." msgstr "" "O prefixo do IP remoto (CIDR) a ser associado a esta regra de grupo de " "segurança." msgid "The remote branch router identity of the ipsec site connection." msgstr "" "A identidade do roteador de ramificação remota da conexão do site ipsec." msgid "The remote branch router public IPv4 address or IPv6 address or FQDN." msgstr "" "O endereço IPv4 ou IPv6 público do roteador de ramificação remoto ou FQDN." msgid "" "The remote group ID to be associated with this security group rule. If no " "value is specified then this rule will use this security group for the " "remote_group_id. The remote mode parameter must be set to \"remote_group_id" "\"." msgstr "" "O ID do grupo remoto a ser associado a esta regra de grupo de segurança. Se " "nenhum valor for especificado, essa regra irá utilizar este grupo de " "segurança para o remote_group_id. O parâmetro do modo remoto deve ser " "configurado para \"remote_group_id\"." msgid "The remote subnet(s) in CIDR format of the ipsec site connection." msgstr "A sub-rede(s) remota(s) no formato CIDR da conexão do site ipsec." 
msgid "The request is missing an action or operation parameter" msgstr "A requisição está sem uma ação or um parâmetro de operação" msgid "The request processing has failed due to an internal error" msgstr "O processamento do pedido falhou devido a um erro interno" msgid "The request signature does not conform to AWS standards" msgstr "A assinatura do pedido não está em conformidade com padrões AWS" msgid "" "The request signature we calculated does not match the signature you provided" msgstr "" "A assinatura da requisição que calculamos não bate com a assinatura que você " "forneceu" msgid "The requested action is not yet implemented" msgstr "A ação requisitada não está implementada ainda" #, python-format msgid "The resource %s is already being updated." msgstr "O recurso %s já está sendo atualizado." msgid "The resource href of the queue." msgstr "O recurso href para a fila. " msgid "The route mode of the ipsec site connection." msgstr "O modo de roteamento da conexão do site ipsec." msgid "The router id." msgstr "O ID do roteador" msgid "The router to which the vpn service will be inserted." msgstr "O roteador no qual o serviço de vpn será inserido." msgid "The router." msgstr "O roteador." msgid "The safety assessment lifetime configuration for the ike policy." msgstr "" "A configuração de tempo de vida da avaliação de segurança para a política " "ike." msgid "The safety assessment lifetime configuration of the ipsec policy." msgstr "" "A configuração de tempo de vida da avaliação de segurança da política ipsec." msgid "" "The security group that you can use as part of your inbound rules for your " "LoadBalancer's back-end instances." msgstr "" "O grupo de segurança que é possível utilizar como parte de suas regras de " "entrada para suas instâncias de backend do LoadBalancer." msgid "" "The server could not comply with the request since it is either malformed or " "otherwise incorrect." msgstr "" "O servidor não pôde obedecer a solicitação, visto que está malformada ou de " "maneira incorreta." msgid "The set of parameters passed to this nested stack." msgstr "O conjunto de parâmetros transmitidos para essa pilha aninhada." msgid "The size in GB of the docker volume." msgstr "O tamanho em GB do volume do docker." msgid "The size of AutoScalingGroup can not be less than zero" msgstr "O tamanho do AutoScalingGroup não pode ser menor que zero" msgid "" "The size of the prefix to allocate when the cidr or prefixlen attributes are " "not specified while creating a subnet." msgstr "" "O tamanho do prefixo para alocar quando os atributos cidr ou prefixlen não " "forem especificados ao criar uma sub-rede." msgid "The size of the swap, in MB." msgstr "O tamanho da troca, em MB." msgid "The size of the volume in GB." msgstr "O tamanho do volume em GB." msgid "" "The size of the volume, in GB. It is safe to leave this blank and have the " "Compute service infer the size." msgstr "" "O tamanho do volume, em GB. É seguro deixar isso em branco e permitir o " "serviço de Cálculo deduzir o tamanho." msgid "" "The size of the volume, in GB. Must be equal or greater than the size of the " "snapshot. It is safe to leave this blank and have the Compute service infer " "the size." msgstr "" "O tamanho do volume, em GB. Deve ser igual ou maior que o tamanho da captura " "instantânea. É seguro deixar em branco e o serviço de Cálculo deduzir o " "tamanho." msgid "The snapshot the volume was created from, if any." msgstr "A captura instantânea a partir da qual o volume foi criado, se houver." 
msgid "The source of certificate request." msgstr "A origem da solicitação de certificado. " #, python-format msgid "The specified reference \"%(resource)s\" (in %(key)s) is incorrect." msgstr "" "A referência especificada \"%(resource)s\" (em %(key)s) está incorreta." msgid "The start and end addresses for the allocation pools." msgstr "Os endereços de início e fim para os conjuntos de alocação." msgid "The status of the container." msgstr "O status do contêiner." msgid "The status of the firewall." msgstr "O status do firewall." msgid "The status of the ipsec site connection." msgstr "O status da conexão do site ipsec." msgid "The status of the network." msgstr "O status da rede." msgid "The status of the order." msgstr "O status do pedido." msgid "The status of the port." msgstr "O status da porta." msgid "The status of the router." msgstr "O status do roteador." msgid "The status of the secret." msgstr "O status do segredo. " msgid "The status of the vpn service." msgstr "O status do serviço de vpn." msgid "" "The string that was stored. This value is also available by referencing the " "resource." msgstr "" "A sequência que foi armazenada. Este valor também está disponível para " "referência ao recurso." msgid "The subject of the certificate request." msgstr "O assunto da solicitação de certificado. " msgid "" "The subnet for the port on which the members of the pool will be connected." msgstr "" "A sub-rede para a porta na qual os membros do conjunto estarão conectados." msgid "The subnet, either subnet or port should be specified." msgstr "" "A sub-rede, tanto a sub-rede ou quanto a porta precisam ser especificadas." msgid "The tag key name." msgstr "O nome da chave da tag." msgid "The tag value." msgstr "O valor da tag." msgid "The template is not a JSON object or YAML mapping." msgstr "O modelo não é um objeto JSON ou mapeamento YAML." #, python-format msgid "The template section is invalid: %(section)s" msgstr "A seção do modelo é inválida: %(section)s" #, python-format msgid "The template version is invalid: %(explanation)s" msgstr "A versão do modelo é inválida: %(explanation)s" msgid "The tenant owning this floating IP." msgstr "O locatário possui esse IP flutuante." msgid "The tenant owning this network." msgstr "O locatário proprietário dessa rede." msgid "The time range in seconds." msgstr "O intervalo de tempo em segundos." msgid "The timestamp indicating volume creation." msgstr "O registro de data e hora indicando a criação de volume." msgid "The transform protocol of the ipsec policy." msgstr "O protocolo de conversão da política ipsec." msgid "The type of profile." msgstr "O tipo do perfil." msgid "The type of senlin policy." msgstr "O tipo da política Senlin." msgid "The type of the certificate request." msgstr "O tipo da solicitação de certificado. " msgid "The type of the order." msgstr "O tipo do pedido." msgid "The type of the resources in the group." msgstr "O tipo de recursos no grupo." msgid "The type of the secret." msgstr "O tipo do segredo." msgid "The type of the volume mapping to a backend, if any." msgstr "O tipo de mapeamento de volume para um backend, se houver." msgid "The type/format the secret data is provided in." msgstr "O Tipo/formato em que os dados do segredo são fornecidos." msgid "The type/mode of the algorithm associated with the secret information." msgstr "O tipo/modo do algoritmo associado às instâncias de segredo. " msgid "The unencrypted plain text of the secret." msgstr "O texto simples não criptografado do segredo. 
" msgid "" "The unique identifier of ike policy associated with the ipsec site " "connection." msgstr "O identificador exclusivo da política ike associada ao site ipsec " msgid "" "The unique identifier of ipsec policy associated with the ipsec site " "connection." msgstr "O identificador exclusivo da política ipsec com o site ipsec " msgid "" "The unique identifier of the router to which the vpn service was inserted." msgstr "" "O identificador exclusivo do roteador para o qual o serviço de vpn foi " "inserido." msgid "" "The unique identifier of the subnet in which the vpn service was created." msgstr "" "O identificador exclusivo da sub-rede na qual o serviço de vpn foi criado." msgid "The unique identifier of the tenant owning the ike policy." msgstr "O identificador exclusivo do locatário possui a política ike." msgid "The unique identifier of the tenant owning the ipsec policy." msgstr "O identificador exclusivo do locatário possui a política ipsec." msgid "The unique identifier of the tenant owning the ipsec site connection." msgstr "" "O identificador exclusivo do locatário proprietário da conexão do site ipsec." msgid "The unique identifier of the tenant owning the vpn service." msgstr "O identificador exclusivo do locatário que possui o serviço de vpn." msgid "" "The unique identifier of vpn service associated with the ipsec site " "connection." msgstr "O identificador exclusivo do serviço de vpn associado ao site ipsec " msgid "" "The user-defined region ID and should unique to the OpenStack deployment. " "While creating the region, heat will url encode this ID." msgstr "" "O ID da região definido pelo usuário que deverá ser exclusivo para " "implementação do OpenStack. Ao criar a região, o Heat codificará esse ID em " "forma de URL." msgid "" "The value for the socket option TCP_KEEPIDLE. This is the time in seconds " "that the connection must be idle before TCP starts sending keepalive probes." msgstr "" "O valor para a opção de soquete TCP_KEEPIDLE. Esse é o tempo em segundos que " "a conexão deve ficar inativa antes que o TCP comece a enviar análises keep-" "alive." #, python-format msgid "The value must be at least %(min)s." msgstr "O valor deve ser pelo menos %(min)s." #, python-format msgid "The value must be in the range %(min)s to %(max)s." msgstr "O valor deve estar no intervalo de %(min)s a %(max)s." #, python-format msgid "The value must be no greater than %(max)s." msgstr "O valor deve não deve ser superior a %(max)s." #, python-format msgid "The values of the \"for_each\" argument to \"%s\" must be lists" msgstr "Os valores do argumento \"for_each\" para \"%s\" devem ser listas" msgid "The version of the ike policy." msgstr "A versão da política ike." msgid "" "The vnic type to be bound on the neutron port. To support SR-IOV PCI " "passthrough networking, you can request that the neutron port to be realized " "as normal (virtual nic), direct (pci passthrough), or macvtap (virtual " "interface with a tap-like software interface). Note that this only works for " "Neutron deployments that support the bindings extension." msgstr "" "O tipo vnic a ser ligado à porta neutron. Para suportar a rede de passagem " "de PCI SR-IOV, é possível solicitar que a porta neutron seja realizada como " "normal (nic virtual), direta (passagem de pci) ou macvtap (interface virtual " "com uma interface de software de toque semelhante). Observe que isso só " "funciona para implementações do Neutron que suportam a extensão de ligações." msgid "The volume type." msgstr "O tipo de volume." 
msgid "The volume used as source, if any." msgstr "O volume utilizado como origem, se houver." msgid "The volume_id can be boot or non-boot device to the server." msgstr "" "O volume_id pode ser um dispositivo de inicialização ou não para o servidor." msgid "The website endpoint for the specified bucket." msgstr "O terminal do website para o depósito especificado." #, python-format msgid "There is no rule %(rule)s. List of allowed rules is: %(rules)s." msgstr "" "Não há nenhuma regra %(rule)s. A lista de regras permitidas é: %(rules)s." msgid "" "There is no such option during 5.0.0, so need to make this attribute " "unsupported, otherwise error will raised." msgstr "" "Não existe nenhuma opção desse tipo durante a 5.0.0, portanto, é necessário " "tornar esse atributo como não suportado, ou um erro será emitido." msgid "" "There is no such option during 5.0.0, so need to make this property " "unsupported while it not used." msgstr "" "Não existe nenhuma opção desse tipo durante a 5.0.0, portanto, é necessário " "tornar essa propriedade como não suportada enquanto não for utilizada." #, python-format msgid "" "There was an error loading the definition of the global resource type " "%(type_name)s." msgstr "" "Ocorreu um erro ao carregar a definição do tipo de recurso global " "%(type_name)s." msgid "This endpoint is enabled or disabled." msgstr "Esse terminal é ativado ou desativado. " msgid "This project is enabled or disabled." msgstr "Esse projeto é ativado ou desativado. " msgid "This region is enabled or disabled." msgstr "Essa região é ativada ou desativada. " msgid "This service is enabled or disabled." msgstr "Esse serviço é ativado ou desativado. " msgid "Threshold to evaluate against." msgstr "O limite para avaliar." msgid "Time To Live (Seconds)." msgstr "Tempo de Vida (Segundos)." msgid "Time of the first execution in format \"YYYY-MM-DD HH:MM\"." msgstr "Horário da primeira execução no formato \"YYYY-MM-DD HH:MM\"." msgid "Time of the next execution in format \"YYYY-MM-DD HH:MM:SS\"." msgstr "Horário da próxima execução no formato \"YYYY-MM-DD HH:MM:SS\"." msgid "" "Timeout for client connections' socket operations. If an incoming connection " "is idle for this number of seconds it will be closed. A value of '0' means " "wait forever." msgstr "" "Tempo limite para operações de soquete de conexões do cliente. Se uma " "conexão recebida estiver inativa por esse número de segundos, ela será " "encerrada. Um valor de '0' significa aguardar para sempre." msgid "Timeout for creating the bay in minutes. Set to 0 for no timeout." msgstr "" "Tempo limite para criar o compartimento em minutos. Configure para 0 para " "nenhum tempo limite. " msgid "Timeout in seconds for stack action (ie. create or update)." msgstr "" "Tempo limite em segundos para a ação da pilha (por exemplo, a criação ou " "atualização)." msgid "" "Toggle to enable/disable caching when Orchestration Engine looks for other " "OpenStack service resources using name or id. Please note that the global " "toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use " "this feature." msgstr "" "Alternar para ativar/desativar o armazenamento em cache quando o " "Orchestration Engine procura por outros recursos de serviço do OpenStack " "usando nome ou ID. Observe que a alternância global para oslo." "cache(enabled=True no [cache] group) deve ser ativada para uso desse recurso." msgid "" "Toggle to enable/disable caching when Orchestration Engine retrieves " "extensions from other OpenStack services. 
Please note that the global toggle " "for oslo.cache(enabled=True in [cache] group) must be enabled to use this " "feature." msgstr "" "Alternar para ativar/desativar o armazenamento em cache quando o " "Orchestration Engine recupera extensões de outros serviços do OpenStack. " "Observe que a alternância global para oslo.cache(enabled=True no [cache] " "group) deve ser ativada para uso desse recurso." msgid "" "Toggle to enable/disable caching when Orchestration Engine validates " "property constraints of stack.During property validation with constraints " "Orchestration Engine caches requests to other OpenStack services. Please " "note that the global toggle for oslo.cache(enabled=True in [cache] group) " "must be enabled to use this feature." msgstr "" "Alternar para ativar/desativar o armazenamento em cache quando o " "Orchestration Engine validar as restrições de propriedade da pilha. Durante " "a validação da propriedade com restrições, o Orchestration Engine armazena " "em cache solicitações de outros serviços do OpenStack. Observe que a " "alternância global para oslo.cache(enabled=True no [cache] group) deve ser " "ativada para uso desse recurso." msgid "" "Token for stack-user which can be used for signalling handle when " "signal_transport is set to TOKEN_SIGNAL. None for all other signal " "transports." msgstr "" "Token para o usuário da pilha que pode ser usado para manipulação de " "sinalização quando signal_transport é configurado para TOKEN_SIGNAL. None " "para todos os outros transportes de sinal." msgid "" "Tokens are not needed for Swift TempURLs. This attribute is being kept for " "compatibility with the OS::Heat::WaitConditionHandle resource." msgstr "" "Tokens não são necessários para TempURLs do Swift. Este atributo está sendo " "mantido para compatibilidade com o recurso OS::Heat::WaitConditionHandle." msgid "Topic" msgstr "Tópico" msgid "Transform protocol for the ipsec policy." msgstr "Protocolo de conversão para a política ipsec." msgid "True if alarm evaluation/actioning is enabled." msgstr "Verdadeiro se avaliação/acionamento do alarme é ativado." msgid "" "True if the system should remember a generated private key; False otherwise." msgstr "" "Verdadeiro se o sistema tiver que lembrar uma chave privada gerada; Falso, " "caso contrário." msgid "Type of access that should be provided to guest." msgstr "Tipo de acesso que deve ser fornecido para o guest." msgid "Type of adjustment (absolute or percentage)." msgstr "Tipo de ajuste (absoluto ou porcentagem)." msgid "" "Type of endpoint in Identity service catalog to use for communication with " "the OpenStack service." msgstr "" "Tipo de terminal no catálogo de serviços de identidade a ser utilizado para " "comunicação com o serviço OpenStack." msgid "Type of keystone Service." msgstr "Tipo de Serviço do keystone." msgid "Type of receiver." msgstr "Tipo de receptor." msgid "Type of the data source." msgstr "Tipo da origem de dados." msgid "Type of the notification." msgstr "O tipo da notificação." msgid "Type of the object that RBAC policy affects." msgstr "Tipo de objeto que a política RBAC afeta. " msgid "Type of the value of the input." msgstr "Tipo do valor da entrada." msgid "Type of the value of the output." msgstr "Tipo do valor da saída." msgid "Type of the volume to create on Cinder backend." msgstr "Tipo do volume para criar no backend do Cinder." msgid "URL for API authentication" msgstr "URL para autenticação da API" msgid "URL for the data source." msgstr "URL para a origem de dados."
msgid "" "URL for the job binary. Must be in the format swift:/// or " "internal-db://." msgstr "" "URL para o binário da tarefa. Deve estar no formato swift:///" " ou internal-db://." msgid "" "URL of TempURL where resource will signal completion and optionally upload " "data." msgstr "" "URL de TempURL onde o recurso irá sinalizar a conclusão e, opcionalmente, " "fazer upload de dados." msgid "URL of keystone service endpoint." msgstr "URL do terminal de serviço do keystone." msgid "URL of the Heat CloudWatch server." msgstr "URL do servidor CloudWatch." msgid "" "URL of the Heat metadata server. NOTE: Setting this is only needed if you " "require instances to use a different endpoint than in the keystone catalog" msgstr "" "URL do servidor de metadados Heat. NOTA: Essa configuração será necessária " "somente se você requerer que as instâncias usem um terminal diferente do que " "está no catálogo de keystone." msgid "URL of the Heat waitcondition server." msgstr "URL do servidor waitcondition Heat." msgid "" "URL where the data for this image already resides. For example, if the image " "data is stored in swift, you could specify \"swift://example.com/container/" "obj\"." msgstr "" "URL em que os dados para esta imagem já residem. Por exemplo, se os dados de " "imagem forem armazenados no swift, você poderia especificar \"swift://" "example.com/container/obj\"." msgid "UUID of the internal subnet to which the instance will be attached." msgstr "UUID da sub-rede interna à qual a instância será conectada." #, python-format msgid "" "Unable to find neutron provider '%(provider)s', available providers are " "%(providers)s." msgstr "" "Não é possível localizar o provedor Neutron '%(provider)s', os provedores " "disponíveis são %(providers)s." #, python-format msgid "" "Unable to find senlin policy type '%(pt)s', available policy types are " "%(pts)s." msgstr "" "Não é possível localizar o tipo de política senlin '%(pt)s', os tipos de " "políticas disponíveis são %(pts)s." #, python-format msgid "" "Unable to find senlin profile type '%(pt)s', available profile types are " "%(pts)s." msgstr "" "Não é possível localizar o tipo de perfil senlin '%(pt)s', os tipos de " "perfis disponíveis são %(pts)s." #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "Não é possível carregar %(app_name)s do arquivo de configuração " "%(conf_file)s.\n" "Obtido: %(e)r" #, python-format msgid "Unable to locate config file [%s]" msgstr "Não é possível localizar o arquivo de configuração [%s]" #, python-format msgid "Unexpected action %(action)s" msgstr "Ação %(action)s não esperada" #, python-format msgid "Unexpected action %s" msgstr "Ação %s não esperada" #, python-format msgid "" "Unexpected properties: %(unexpected)s. Only these properties are allowed for " "%(type)s type of order: %(allowed)s." msgstr "" "Propriedades inesperadas: %(unexpected)s. Somente essas propriedades são " "permitidas para o tipo de pedido %(type)s: %(allowed)s." msgid "Unique identifier for the device." msgstr "Identificador exclusivo para o dispositivo." msgid "" "Unique identifier for the ike policy associated with the ipsec site " "connection." msgstr "Identificador exclusivo para a política ike associada ao site ipsec " msgid "" "Unique identifier for the ipsec policy associated with the ipsec site " "connection." msgstr "Identificador exclusivo para a política ipsec associada ao site ipsec " msgid "Unique identifier for the network owning the port." 
msgstr "Identificador exclusivo para a rede que possui a porta." msgid "" "Unique identifier for the router to which the vpn service will be inserted." msgstr "" "Identificador exclusivo para o roteador no qual o serviço de vpn será " "inserido." msgid "" "Unique identifier for the vpn service associated with the ipsec site " "connection." msgstr "Identificador exclusivo para o serviço da vpn associado ao site ipsec " msgid "" "Unique identifier of the firewall policy to which this firewall rule belongs." msgstr "" "Identificador exclusivo da política de firewall ao qual esta regra de " "firewall pertence." msgid "Unique identifier of the firewall policy used to create the firewall." msgstr "" "O identificador exclusivo da política de firewall usada para criar o " "firewall." msgid "Unknown" msgstr "Desconhecido" #, python-format msgid "Unknown Property %s" msgstr "Propriedade desconhecida %s" #, python-format msgid "Unknown attribute \"%s\"" msgstr "Atributo desconhecido \"%s\"" #, python-format msgid "Unknown error retrieving %s" msgstr "Erro desconhecido ao recuperar %s" #, python-format msgid "Unknown input %s" msgstr "Entrada desconhecida %s" #, python-format msgid "Unknown key(s) %s" msgstr "Chave(s) desconhecida(s) %s" msgid "Unknown share_status during creation of share \"{0}\"" msgstr "" "share_status desconhecido durante a criação do compartilhamento \"{0}\"" #, python-format msgid "Unknown status creating Bay '%(name)s' - %(reason)s" msgstr "Status desconhecido ao criar o Compartimento '%(name)s' - %(reason)s" msgid "Unknown status during deleting share \"{0}\"" msgstr "Status desconhecido durante a exclusão do compartilhamento \"{0}\"" #, python-format msgid "Unknown status updating Bay '%(name)s' - %(reason)s" msgstr "" "Status desconhecido ao atualizar o Compartimento '%(name)s' - %(reason)s" #, python-format msgid "Unknown status: %s" msgstr "Status desconhecido: %s" #, python-format msgid "" "Unrecognized value \"%(value)s\" for \"%(name)s\", acceptable values are: " "true, false." msgstr "" "Valor não reconhecido \"%(value)s\" para \"%(name)s\", os valores aceitáveis " "são: true, false." #, python-format msgid "Unsupported object type %(objtype)s" msgstr "Tipo de objeto não suportado %(objtype)s" #, python-format msgid "Unsupported resource '%s' in LoadBalancerNames" msgstr "Recurso não suportado '%s' em LoadBalancerNames" msgid "Unversioned keystone url in format like http://0.0.0.0:5000." msgstr "URL do keystone sem versão no formato tipo http://0.0.0.0:5000." #, python-format msgid "Update to properties %(props)s of %(name)s (%(res)s)" msgstr "Atualize para propriedades %(props)s de %(name)s (%(res)s)" msgid "Updated At" msgstr "Atualizado em" msgid "Updating a stack when it is deleting" msgstr "Atualizando uma pilha quando ela estiver excluindo" msgid "Updating a stack when it is suspended" msgstr "Atualizando uma pilha quando ela estiver suspensa" msgid "" "Use get_resource|Ref command instead. For example: { get_resource : " " }" msgstr "" "Use o comando get_resource|Ref. Por exemplo: { get_resource : " " }" msgid "" "Use only with Neutron, to list the internal subnet to which the instance " "will be attached; needed only if multiple exist; list length must be exactly " "1." msgstr "" "Use apenas com Neutron, para listar a sub-rede interna à qual a instância " "será conectada; necessário apenas se existirem múltiplos; lista deve ter " "exatamente 1 de comprimento." 
#, python-format msgid "Use property %s" msgstr "Use a propriedade %s" #, python-format msgid "Use property %s." msgstr "Utilize propriedade %s." msgid "" "Use the `external_gateway_info` property in the router resource to set up " "the gateway." msgstr "" "Use a propriedade `xternal_gateway_info` no recurso do roteador para " "configurar o gateway." msgid "" "Use the networks attribute instead of first_address. For example: " "\"{get_attr: [, networks, , 0]}\"" msgstr "" "Use o atributo de redes em vez de first_address. Por exemplo: \"{get_attr: " "[, networks, , 0]}\"" msgid "Use this resource at your own risk." msgstr "Use esse recurso por sua conta e risco. " #, python-format msgid "User %s in invalid domain" msgstr "Usuário %s no domínio inválido" #, python-format msgid "User %s in invalid project" msgstr "Usuário %s no projeto inválido" msgid "User ID for API authentication" msgstr "ID do usuário para autenticação da API" msgid "User data to pass to instance." msgstr "Dados do usuário a serem transmitidos para a instância." msgid "User is not authorized to perform action" msgstr "Usuário não está autorizado a executar a ação" msgid "User name to create a user on instance creation." msgstr "Nome de usuário para criar um usuário na criação da instância." msgid "Username associated with the AccessKey." msgstr "O nome de usuário associado ao AccessKey." msgid "Username for API authentication" msgstr "O nome de usuário para autenticação da API" msgid "Username for accessing the data source URL." msgstr "Nome do usuário para acessar a URL da origem de dados." msgid "Username for accessing the job binary URL." msgstr "Nome do usuário para acessar a URL do binário da tarefa." msgid "Username of privileged user in the image." msgstr "Nome de usuário do usuário privilegiado na imagem. " msgid "VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN networks." msgstr "ID do VLAN para redes VLAN ou ID de túnel para redes GRE/VXLAN." msgid "VPC ID for this gateway association." msgstr "ID de VPC para esta associação de gateway." msgid "VPC ID for where the route table is created." msgstr "ID VPC para onde a tabela de rotas é criada." msgid "" "Valid values are encrypt or decrypt. The heat-engine processes must be " "stopped to use this." msgstr "" "Valores válidos são criptografar ou decriptografar. Os processos do " "mecanismo Heat devem ser interrompidos." #, python-format msgid "Value \"%(val)s\" is invalid for data type \"%(type)s\"." msgstr "Valor \"%(val)s\" é inválido para tipo de dados \"%(type)s\"." #, python-format msgid "Value '%(value)s' is invalid for '%(name)s' which only accepts integer." msgstr "" "O valor '%(value)s' é inválido para '%(name)s' que só aceita um número " "inteiro." #, python-format msgid "" "Value '%(value)s' is invalid for '%(name)s' which only accepts non-negative " "integer." msgstr "" "O valor '%(value)s' é inválido para '%(name)s' que só aceita números " "inteirosnão negativos." 
#, python-format msgid "Value '%s' is not an integer" msgstr "Valor '%s' não é um número inteiro" #, python-format msgid "Value must be a comma-delimited list string: %s" msgstr "Valor deve ser uma sequência de lista delimitada por vírgulas: %s" #, python-format msgid "Value must be of type %s" msgstr "O valor deve ser do tipo %s" #, python-format msgid "Value must be valid JSON: %s" msgstr "O valor deve ser JSON válido: %s" #, python-format msgid "Value must match pattern: %s" msgstr "O valor deve corresponder ao padrão: %s" msgid "" "Value which can be set or changed on stack update to trigger the resource " "for replacement with a new random string. The salt value itself is ignored " "by the random generator." msgstr "" "Valor que pode ser configurado ou alterado na atualização da pilha para " "acionar o recurso para substituição com uma nova sequência aleatória. O " "valor do salt em si é ignorado pelo gerador aleatório." msgid "" "Value which can be set to fail the resource operation to test failure " "scenarios." msgstr "" "Valor que pode ser configurado para falhar a operação de recurso para " "cenários de falha de teste. " msgid "" "Value which can be set to trigger update replace for the particular resource." msgstr "" "Valor que pode ser configurado para acionar substituição de atualização para " "o recurso específico. " #, python-format msgid "Version %(objver)s of %(objname)s is not supported" msgstr "A versão %(objver)s do %(objname)s não é suportada" msgid "Version for the ike policy." msgstr "Versão para a política ike." msgid "Version of Hadoop running on instances." msgstr "Versão do Hadoop executando nas instâncias." msgid "Version of IP address." msgstr "Versão do endereço IP." msgid "Vip associated with the pool." msgstr "VIP associado com o pool." msgid "Volume attachment failed" msgstr "Anexo do volume com falha" msgid "Volume backup failed" msgstr "Backup do volume com falha" msgid "Volume backup restore failed" msgstr "Falha ao restaurar o backup do volume" msgid "Volume create failed" msgstr "Volume criado com falha" msgid "Volume detachment failed" msgstr "Remoção do anexo do volume com falha" msgid "Volume in use" msgstr "O volume está em uso" msgid "Volume resize failed" msgstr "Redimensionamento do volume com falha" msgid "Volumes per node." msgstr "Volumes por nó." msgid "Volumes to attach to instance." msgstr "Volumes para anexar à instância." #, python-format msgid "WaitCondition invalid Handle %s" msgstr "Identificador inválido de WaitCondition %s" #, python-format msgid "WaitCondition invalid Handle stack %s" msgstr "Pilha do identificador inválida de WaitCondition %s" #, python-format msgid "WaitCondition invalid Handle tenant %s" msgstr "Locatário do identificador inválido de WaitCondition %s" msgid "Weight of pool member in the pool (default to 1)." msgstr "Peso do membro do conjunto no conjunto (padrão 1)." msgid "Weight of the pool member in the pool." msgstr "Peso do membro do conjunto no conjunto." #, python-format msgid "Went to status %(resource_status)s due to \"%(status_reason)s\"" msgstr "Foi para o status %(resource_status)s devido a \"%(status_reason)s\"" msgid "" "When both ipv6_ra_mode and ipv6_address_mode are set, they must be equal." msgstr "" "Quando ambos ipv6_ra_mode e ipv6_address_mode forem configurados, eles devem " "ser iguais." 
msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Ao executar o servidor no modo SSL, você deve especificar um valor de opção " "cert_file e key_file no seu arquivo de configuração" msgid "Whether enable this policy on that cluster." msgstr "Especifica se essa política é ativada nesse cluster. " msgid "" "Whether the address scope should be shared to other tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only, and restricts changing of shared address scope to unshared with " "update." msgstr "" "Especifica se o escopo de endereço deve ser compartilhado com outros " "locatários. Observe que a configuração da política padrão restringe o uso " "desse atributo somente para usuários administrativos, e restringe a mudança " "do escopo de endereço compartilhado para não compartilhado com atualização. " msgid "Whether the flavor is shared across all projects." msgstr "Especifica se o tipo é compartilhado com todos os projetos." msgid "" "Whether the image can be deleted. If the value is True, the image is " "protected and cannot be deleted." msgstr "" "Se a imagem puder ser excluída. Se o valor for True, a imagem será protegida " "e não poderá ser excluída." msgid "Whether the metering label should be shared across all tenants." msgstr "" "Se o rótulo de medição precisar ser compartilhado por todos os locatários." msgid "Whether the network contains an external router." msgstr "Especifica se a rede contém um roteador externo." msgid "Whether the part content is text or multipart." msgstr "Se o conteúdo da parte é texto ou parte de multipartes." msgid "" "Whether the subnet pool will be shared across all tenants. Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Especifica se este conjunto de sub-redes precisa ser compartilhado por todos " "os locatários. Observe que a configuração de política padrão restringe o uso " "deste atributo apenas para usuários administrativos." msgid "Whether the volume type is accessible to the public." msgstr "Especifica se o tipo de volume é acessível para o público. " msgid "Whether this QoS policy should be shared to other tenants." msgstr "" "Especifica se essa política do QoS deve ser compartilhada com outros " "locatários. " msgid "" "Whether this firewall should be shared across all tenants. NOTE: The default " "policy setting in Neutron restricts usage of this property to administrative " "users only." msgstr "" "Especifica se este firewall precisa ser compartilhado por todos os " "locatários. NOTA: A configuração de política padrão no Neutron restringe o " "uso desta propriedade apenas para usuários administrativos." msgid "" "Whether this is default IPv4/IPv6 subnet pool. There can only be one default " "subnet pool for each IP family. Note that the default policy setting " "restricts administrative users to set this to True." msgstr "" "Especifica se é o conjunto de sub-redes IPv4/IPv6 padrão. Pode haver somente " "um conjunto de sub-redes padrão para cada família de IP. Observe que a " "configuração da política padrão restringe usuários administrativos a " "configurarem isso para True." msgid "Whether this network should be shared across all tenants." msgstr "Se esta rede deve ser compartilhada por todos os locatários." msgid "" "Whether this network should be shared across all tenants. 
Note that the " "default policy setting restricts usage of this attribute to administrative " "users only." msgstr "" "Se esta rede deve ser compartilhada por todos os locatários. Observe que a " "configuração de política padrão restringe uso deste atributo para apenas " "usuários administrativos." msgid "" "Whether this policy should be audited. When set to True, each time the " "firewall policy or the associated firewall rules are changed, this attribute " "will be set to False and will have to be explicitly set to True through an " "update operation." msgstr "" "Se esta política deve ser auditado. Quando for configurada como True, cada " "vez que a política de firewall ou as regras de firewall associadas forem " "alteradas, este atributo será configurado como False e terá que ser " "explicitamente configurado como True por meio de uma operação de atualização." msgid "Whether this policy should be shared across all tenants." msgstr "Se esta política deve ser compartilhada por todos os locatários." msgid "Whether this rule should be enabled." msgstr "Se esta regra deve ser ativada." msgid "Whether this rule should be shared across all tenants." msgstr "Se esta regra deve ser compartilhada por todos os locatários." msgid "Whether to enable the actions or not." msgstr "Especifica se as ações devem ser ativadas ou não. " msgid "Whether to specify a remote group or a remote IP prefix." msgstr "Especificar um grupo remoto ou um prefixo de IP remoto." msgid "" "Which lifecycle actions of the deployment resource will result in this " "deployment being triggered." msgstr "" "Quais ações do ciclo de vida do recurso de implementação resultarão no " "acionamento dessa implementação. " msgid "" "Workflow additional parameters. If Workflow is reverse typed, params " "requires 'task_name', which defines initial task." msgstr "" "Parâmetros adicionais do fluxo de trabalho. Se o Fluxo de trabalho for de um " "tipo reverso, params requererá 'task_name', que define a tarefa inicial. " msgid "Workflow description." msgstr "Descrição do fluxo de trabalho." msgid "Workflow name." msgstr "Nome do fluxo de trabalho." msgid "Workflow to execute." msgstr "Fluxo de trabalho para executar." msgid "Workflow type." msgstr "Tipo de fluxo de trabalho." #, python-format msgid "Wrong Arguments try: \"%s\"" msgstr "Argumentos Errados. Tente: \"%s\"" msgid "You are not authenticated." msgstr "Você não está autenticado." msgid "You are not authorized to complete this action." msgstr "Você não está autorizado a concluir esta ação." #, python-format msgid "You are not authorized to use %(action)s." msgstr "Você não está autorizado a usar %(action)s." #, python-format msgid "" "You have reached the maximum stacks per tenant, %d. Please delete some " "stacks." msgstr "" "Você atingiu o máximo de pilhas por locatário, %d. Exclua algumas pilhas." #, python-format msgid "could not find user %s" msgstr "não foi possível localizar usuário %s" msgid "deployment_id must be specified" msgstr "deployment_id deve ser especificado" msgid "" "deployments key not allowed in resource metadata with user_data_format of " "SOFTWARE_CONFIG" msgstr "" "Chave de implementações não permitida nos metadados de recurso com " "user_data_format de SOFTWARE_CONFIG" #, python-format msgid "environment has wrong section \"%s\"" msgstr "ambiente possui seção errada \"%s\"" msgid "error in pool" msgstr "erro no conjunto" msgid "error in vip" msgstr "erro no vip" msgid "external network for the gateway." msgstr "rede externa para o gateway." 
msgid "granularity should be days, hours, minutes, or seconds" msgstr "granularidade deve ser dias, horas, minutos ou segundos" msgid "heat.conf misconfigured, auth_encryption_key must be 32 characters" msgstr "" "heat.conf configurado incorretamente, auth_encryption_key deve possuir 32 " "caracteres" msgid "" "heat.conf misconfigured, cannot specify \"stack_user_domain_id\" or " "\"stack_user_domain_name\" without \"stack_domain_admin\" and " "\"stack_domain_admin_password\"" msgstr "" "heat.conf configurado incorretamente, não é possível especificar " "\"stack_user_domain_id\" ou \"stack_user_domain_name\" sem " "\"stack_domain_admin\" e \"stack_domain_admin_password\"" msgid "ipv6_ra_mode and ipv6_address_mode are not supported for ipv4." msgstr "ipv6_ra_mode e ipv6_address_mode não são suportados para ipv4." msgid "limit cannot be less than 4" msgstr "limite não pode ser menor que 4" msgid "min/max length must be integral" msgstr "comprimento mín. /máx. deve ser integral" msgid "min/max must be numeric" msgstr "mín/máx deve ser numérico" msgid "need more memory." msgstr "mais memória é necessária." msgid "no resource data found" msgstr "nenhum dado de recurso localizado" msgid "no resources were found" msgstr "não foram localizados recursos" msgid "nova server metadata needs to be a Map." msgstr "Metadados do servidor nova precisam ser um Mapa." #, python-format msgid "previous_status must be SupportStatus instead of %s" msgstr "previous_status deve ser SupportStatus em vez de %s" #, python-format msgid "raw template with id %s not found" msgstr "modelo bruto com ID %s não localizado" #, python-format msgid "resource with id %s not found" msgstr "recurso com ID %s não localizado" #, python-format msgid "roles %s" msgstr "funções %s" msgid "segmentation_id cannot be specified except 0 for using flat" msgstr "segmentation_id não pode ser especificada, exceto 0 para uso simples" msgid "segmentation_id must be specified for using vlan" msgstr "segmentation_id deve ser especificado para utilizar a vlan" msgid "segmentation_id not allowed for flat network type." msgstr "segmentation_id não é permitido para o tipo de rede simples." msgid "server_id must be specified" msgstr "server_id deve ser especificado" #, python-format msgid "" "task %(task)s contains property 'requires' in case of direct workflow. Only " "reverse workflows can contain property 'requires'." msgstr "" "A tarefa %(task)s contém a propriedade 'requires' no caso de um fluxo de " "trabalho direto. Somente fluxos de trabalho reversos podem conter a " "propriedade 'requires'." heat-10.0.2/heat/engine/0000775000175000017500000000000013343562672014736 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/worker.py0000666000175000017500000002414513343562340016621 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import eventlet.queue import functools from oslo_log import log as logging import oslo_messaging from oslo_utils import excutils from oslo_utils import uuidutils from osprofiler import profiler from heat.common import context from heat.common import messaging as rpc_messaging from heat.db.sqlalchemy import api as db_api from heat.engine import check_resource from heat.engine import node_data from heat.engine import stack as parser from heat.engine import sync_point from heat.objects import stack as stack_objects from heat.rpc import api as rpc_api from heat.rpc import worker_client as rpc_client LOG = logging.getLogger(__name__) CANCEL_RETRIES = 3 def log_exceptions(func): @functools.wraps(func) def exception_wrapper(*args, **kwargs): try: return func(*args, **kwargs) except Exception: with excutils.save_and_reraise_exception(): LOG.exception('Unhandled exception in %s', func.__name__) return exception_wrapper @profiler.trace_cls("rpc") class WorkerService(object): """Service hosting the 'worker' actor in convergence. This service is dedicated to handling internal messages to the 'worker' (a.k.a. 'converger') actor in convergence. Messages on this bus will use the 'cast' rather than 'call' method to anycast the message to an engine that will handle it asynchronously. It won't wait for or expect replies from these messages. """ RPC_API_VERSION = '1.4' def __init__(self, host, topic, engine_id, thread_group_mgr): self.host = host self.topic = topic self.engine_id = engine_id self.thread_group_mgr = thread_group_mgr self._rpc_client = rpc_client.WorkerClient() self._rpc_server = None self.target = None def start(self): target = oslo_messaging.Target( version=self.RPC_API_VERSION, server=self.engine_id, topic=self.topic) self.target = target LOG.info("Starting %(topic)s (%(version)s) in engine %(engine)s.", {'topic': self.topic, 'version': self.RPC_API_VERSION, 'engine': self.engine_id}) self._rpc_server = rpc_messaging.get_rpc_server(target, self) self._rpc_server.start() def stop(self): if self._rpc_server is None: return # Stop the RPC connection first to prevent new requests LOG.info("Stopping %(topic)s in engine %(engine)s.", {'topic': self.topic, 'engine': self.engine_id}) try: self._rpc_server.stop() self._rpc_server.wait() except Exception as e: LOG.error("%(topic)s failed to stop: %(exc)s", {'topic': self.topic, 'exc': e}) def stop_traversal(self, stack): """Update current traversal to stop workers from propagating. Marks the stack as FAILED due to cancellation, but allows all in_progress resources to complete normally; no worker is stopped abruptly. """ _stop_traversal(stack) db_child_stacks = stack_objects.Stack.get_all_by_root_owner_id( stack.context, stack.id) for db_child in db_child_stacks: if db_child.status == parser.Stack.IN_PROGRESS: child = parser.Stack.load(stack.context, stack_id=db_child.id, stack=db_child, load_template=False) _stop_traversal(child) def stop_all_workers(self, stack): """Cancel all existing worker threads for the stack. Threads will stop running at their next yield point, whether or not the resource operations are complete.
""" cancelled = _cancel_workers(stack, self.thread_group_mgr, self.engine_id, self._rpc_client) if not cancelled: LOG.error("Failed to stop all workers of stack %s, " "stack cancel not complete", stack.name) return False LOG.info('[%(name)s(%(id)s)] Stopped all active workers for stack ' '%(action)s', {'name': stack.name, 'id': stack.id, 'action': stack.action}) return True def _retrigger_replaced(self, is_update, rsrc, stack, check_resource): graph = stack.convergence_dependencies.graph() key = parser.ConvergenceNode(rsrc.id, is_update) if key not in graph and rsrc.replaces is not None: # This resource replaces old one and is not needed in # current traversal. You need to mark the resource as # DELETED so that it gets cleaned up in purge_db. values = {'action': rsrc.DELETE} db_api.resource_update_and_save(stack.context, rsrc.id, values) # The old resource might be in the graph (a rollback case); # just re-trigger it. key = parser.ConvergenceNode(rsrc.replaces, is_update) check_resource.retrigger_check_resource(stack.context, is_update, key.rsrc_id, stack) @context.request_context @log_exceptions def check_resource(self, cnxt, resource_id, current_traversal, data, is_update, adopt_stack_data, converge=False): """Process a node in the dependency graph. The node may be associated with either an update or a cleanup of its associated resource. """ in_data = sync_point.deserialize_input_data(data) resource_data = node_data.load_resources_data(in_data if is_update else {}) rsrc, stk_defn, stack = check_resource.load_resource(cnxt, resource_id, resource_data, current_traversal, is_update) if rsrc is None: return rsrc.converge = converge msg_queue = eventlet.queue.LightQueue() try: self.thread_group_mgr.add_msg_queue(stack.id, msg_queue) cr = check_resource.CheckResource(self.engine_id, self._rpc_client, self.thread_group_mgr, msg_queue, in_data) if current_traversal != stack.current_traversal: LOG.debug('[%s] Traversal cancelled; re-trigerring.', current_traversal) self._retrigger_replaced(is_update, rsrc, stack, cr) else: cr.check(cnxt, resource_id, current_traversal, resource_data, is_update, adopt_stack_data, rsrc, stack) finally: self.thread_group_mgr.remove_msg_queue(None, stack.id, msg_queue) @context.request_context @log_exceptions def cancel_check_resource(self, cnxt, stack_id): """Cancel check_resource for given stack. All the workers running for the given stack will be cancelled. 
""" _cancel_check_resource(stack_id, self.engine_id, self.thread_group_mgr) def _stop_traversal(stack): old_trvsl = stack.current_traversal updated = _update_current_traversal(stack) if not updated: LOG.warning("Failed to update stack %(name)s with new " "traversal, aborting stack cancel", stack.name) return reason = 'Stack %(action)s cancelled' % {'action': stack.action} updated = stack.state_set(stack.action, stack.FAILED, reason) if not updated: LOG.warning("Failed to update stack %(name)s status " "to %(action)s_%(state)s", {'name': stack.name, 'action': stack.action, 'state': stack.FAILED}) return sync_point.delete_all(stack.context, stack.id, old_trvsl) def _update_current_traversal(stack): previous_traversal = stack.current_traversal stack.current_traversal = uuidutils.generate_uuid() values = {'current_traversal': stack.current_traversal} return stack_objects.Stack.select_and_update( stack.context, stack.id, values, exp_trvsl=previous_traversal) def _cancel_check_resource(stack_id, engine_id, tgm): LOG.debug('Cancelling workers for stack [%s] in engine [%s]', stack_id, engine_id) tgm.send(stack_id, rpc_api.THREAD_CANCEL) def _wait_for_cancellation(stack, wait=5): # give enough time to wait till cancel is completed retries = CANCEL_RETRIES while retries > 0: retries -= 1 eventlet.sleep(wait) engines = db_api.engine_get_all_locked_by_stack( stack.context, stack.id) if not engines: return True return False def _cancel_workers(stack, tgm, local_engine_id, rpc_client): engines = db_api.engine_get_all_locked_by_stack(stack.context, stack.id) if not engines: return True # cancel workers running locally if local_engine_id in engines: _cancel_check_resource(stack.id, local_engine_id, tgm) engines.remove(local_engine_id) # cancel workers on remote engines for engine_id in engines: rpc_client.cancel_check_resource(stack.context, stack.id, engine_id) return _wait_for_cancellation(stack) heat-10.0.2/heat/engine/rsrc_defn.py0000666000175000017500000004246013343562351017257 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import copy import itertools import operator import six from heat.common import exception from heat.common.i18n import repr_wrapper from heat.engine import function from heat.engine import properties __all__ = ['ResourceDefinition'] # Field names that can be passed to Template.get_section_name() in order to # determine the appropriate name for a particular template format. 
FIELDS = ( TYPE, PROPERTIES, METADATA, DELETION_POLICY, UPDATE_POLICY, DEPENDS_ON, DESCRIPTION, EXTERNAL_ID, ) = ( 'Type', 'Properties', 'Metadata', 'DeletionPolicy', 'UpdatePolicy', 'DependsOn', 'Description', 'external_id', ) @repr_wrapper class ResourceDefinition(object): """A definition of a resource, independent of any template format.""" class Diff(object): """A diff between two versions of the same resource definition.""" def __init__(self, old_defn, new_defn): if not (isinstance(old_defn, ResourceDefinition) and isinstance(new_defn, ResourceDefinition)): raise TypeError self.old_defn = old_defn self.new_defn = new_defn def properties_changed(self): """Return True if the resource properties have changed.""" return self.old_defn._properties != self.new_defn._properties def metadata_changed(self): """Return True if the resource metadata has changed.""" return self.old_defn._metadata != self.new_defn._metadata def update_policy_changed(self): """Return True if the resource update policy has changed.""" return self.old_defn._update_policy != self.new_defn._update_policy def __bool__(self): """Return True if anything has changed.""" return (self.properties_changed() or self.metadata_changed() or self.update_policy_changed()) __nonzero__ = __bool__ DELETION_POLICIES = ( DELETE, RETAIN, SNAPSHOT, ) = ( 'Delete', 'Retain', 'Snapshot', ) def __init__(self, name, resource_type, properties=None, metadata=None, depends=None, deletion_policy=None, update_policy=None, description=None, external_id=None, condition=None): """Initialise with the parsed definition of a resource. Any intrinsic functions present in any of the sections should have been parsed into Function objects before constructing the definition. :param name: The name of the resource (for use in error messages) :param resource_type: The resource type :param properties: A dictionary of supplied property values :param metadata: The supplied metadata :param depends: A list of resource names on which this resource depends :param deletion_policy: The deletion policy for the resource :param update_policy: A dictionary of supplied update policies :param description: A string describing the resource :param external_id: A uuid of an external resource :param condition: A condition name associated with the resource """ self.name = name self.resource_type = resource_type self.description = description or '' self._properties = properties self._metadata = metadata self._depends = depends self._deletion_policy = deletion_policy self._update_policy = update_policy self._external_id = external_id self._condition = condition self._hash = hash(self.resource_type) self._rendering = None self._dep_names = None self._all_dep_attrs = None assert isinstance(self.description, six.string_types) if properties is not None: assert isinstance(properties, (collections.Mapping, function.Function)) self._hash ^= _hash_data(properties) if metadata is not None: assert isinstance(metadata, (collections.Mapping, function.Function)) self._hash ^= _hash_data(metadata) if depends is not None: assert isinstance(depends, (collections.Sequence, function.Function)) assert not isinstance(depends, six.string_types) self._hash ^= _hash_data(depends) if deletion_policy is not None: assert deletion_policy in self.DELETION_POLICIES self._hash ^= _hash_data(deletion_policy) if update_policy is not None: assert isinstance(update_policy, (collections.Mapping, function.Function)) self._hash ^= _hash_data(update_policy) if external_id is not None: assert isinstance(external_id, 
(six.string_types, function.Function)) self._hash ^= _hash_data(external_id) self._deletion_policy = self.RETAIN if condition is not None: assert isinstance(condition, (six.string_types, bool, function.Function)) self._hash ^= _hash_data(condition) self.set_translation_rules() def freeze(self, **overrides): """Return a frozen resource definition, with all functions resolved. This returns a new resource definition with fixed data (containing no intrinsic functions). Named arguments passed to this method override the values passed as arguments to the constructor. """ if getattr(self, '_frozen', False) and not overrides: return self def arg_item(attr_name): name = attr_name.lstrip('_') if name in overrides: value = overrides[name] if not value and getattr(self, attr_name) is None: value = None else: value = function.resolve(getattr(self, attr_name)) return name, value args = ('name', 'resource_type', '_properties', '_metadata', '_depends', '_deletion_policy', '_update_policy', 'description', '_external_id', '_condition') defn = type(self)(**dict(arg_item(a) for a in args)) defn._frozen = True return defn def reparse(self, stack, template): """Reinterpret the resource definition in the context of a new stack. This returns a new resource definition, with all of the functions parsed in the context of the specified stack and template. """ assert not getattr(self, '_frozen', False), "Cannot re-parse a frozen definition" def reparse_snippet(snippet): return template.parse(stack, copy.deepcopy(snippet)) return type(self)( self.name, self.resource_type, properties=reparse_snippet(self._properties), metadata=reparse_snippet(self._metadata), depends=reparse_snippet(self._depends), deletion_policy=reparse_snippet(self._deletion_policy), update_policy=reparse_snippet(self._update_policy), external_id=reparse_snippet(self._external_id), condition=self._condition) def validate(self): """Validate intrinsic functions that appear in the definition.""" function.validate(self._properties, PROPERTIES) function.validate(self._metadata, METADATA) function.validate(self._depends, DEPENDS_ON) function.validate(self._deletion_policy, DELETION_POLICY) function.validate(self._update_policy, UPDATE_POLICY) function.validate(self._external_id, EXTERNAL_ID) def dep_attrs(self, resource_name, load_all=False): """Iterate over attributes of a given resource that this references. Return an iterator over dependent attributes for the specified resource_name in resources' properties and metadata fields. """ if self._all_dep_attrs is None and load_all: attr_map = collections.defaultdict(set) atts = itertools.chain(function.all_dep_attrs(self._properties), function.all_dep_attrs(self._metadata)) for res_name, att_name in atts: attr_map[res_name].add(att_name) self._all_dep_attrs = attr_map if self._all_dep_attrs is not None: return self._all_dep_attrs[resource_name] return itertools.chain(function.dep_attrs(self._properties, resource_name), function.dep_attrs(self._metadata, resource_name)) def required_resource_names(self): """Return a set of names of all resources on which this depends. Note that this is done entirely in isolation from the rest of the template, so the resource names returned may refer to resources that don't actually exist, or would have strict_dependency=False. Use the dependencies() method to get validated dependencies.
""" if self._dep_names is None: explicit_depends = [] if self._depends is None else self._depends def path(section): return '.'.join([self.name, section]) prop_deps = function.dependencies(self._properties, path(PROPERTIES)) metadata_deps = function.dependencies(self._metadata, path(METADATA)) implicit_depends = six.moves.map(lambda rp: rp.name, itertools.chain(prop_deps, metadata_deps)) # (ricolin) External resource should not depend on any other # resources. This operation is not allowed for now. if self.external_id(): if explicit_depends: raise exception.InvalidExternalResourceDependency( external_id=self.external_id(), resource_type=self.resource_type ) self._dep_names = set() else: self._dep_names = set(itertools.chain(explicit_depends, implicit_depends)) return self._dep_names def dependencies(self, stack): """Return the Resource objects in given stack on which this depends.""" def get_resource(res_name): if res_name not in stack: if res_name in stack.defn.all_rsrc_names(): # The resource is conditionally defined, allow dependencies # on it return raise exception.InvalidTemplateReference(resource=res_name, key=self.name) res = stack[res_name] if getattr(res, 'strict_dependency', True): return res return six.moves.filter(None, six.moves.map(get_resource, self.required_resource_names())) def set_translation_rules(self, rules=None, client_resolve=True): """Helper method to update properties with translation rules.""" self._rules = rules or [] self._client_resolve = client_resolve def properties(self, schema, context=None): """Return a Properties object representing the resource properties. The Properties object is constructed from the given schema, and may require a context to validate constraints. """ props = properties.Properties(schema, self._properties or {}, function.resolve, context=context, section=PROPERTIES) props.update_translation(self._rules, self._client_resolve) return props def deletion_policy(self): """Return the deletion policy for the resource. The policy will be one of those listed in DELETION_POLICIES. """ return function.resolve(self._deletion_policy) or self.DELETE def update_policy(self, schema, context=None): """Return a Properties object representing the resource update policy. The Properties object is constructed from the given schema, and may require a context to validate constraints. """ props = properties.Properties(schema, self._update_policy or {}, function.resolve, context=context, section=UPDATE_POLICY) props.update_translation(self._rules, self._client_resolve) return props def metadata(self): """Return the resource metadata.""" return function.resolve(self._metadata) or {} def external_id(self): """Return the external resource id.""" return function.resolve(self._external_id) def condition(self): """Return the name of the conditional inclusion rule, if any. Returns None if the resource is included unconditionally. 
""" return function.resolve(self._condition) def render_hot(self): """Return a HOT snippet for the resource definition.""" if self._rendering is None: attrs = { 'type': 'resource_type', 'properties': '_properties', 'metadata': '_metadata', 'deletion_policy': '_deletion_policy', 'update_policy': '_update_policy', 'depends_on': '_depends', 'external_id': '_external_id', 'condition': '_condition' } def rawattrs(): """Get an attribute with function objects stripped out.""" for key, attr in attrs.items(): value = getattr(self, attr) if value is not None: yield key, copy.deepcopy(value) self._rendering = dict(rawattrs()) return self._rendering def __sub__(self, previous): """Calculate the difference between this definition and a previous one. Return a Diff object that can be used to establish differences between this definition and a previous definition of the same resource. """ if not isinstance(previous, ResourceDefinition): return NotImplemented return self.Diff(previous, self) def __eq__(self, other): """Compare this resource definition for equality with another. Two resource definitions are considered to be equal if they can be generated from the same template snippet. The name of the resource is ignored, as are the actual values that any included functions resolve to. """ if not isinstance(other, ResourceDefinition): return NotImplemented return self.render_hot() == other.render_hot() def __ne__(self, other): """Compare this resource definition for inequality with another. See __eq__() for the definition of equality. """ equal = self.__eq__(other) if equal is NotImplemented: return NotImplemented return not equal def __hash__(self): """Return a hash value for this resource definition. Resource definitions that compare equal will have the same hash. (In particular, the resource name is *not* taken into account.) See the __eq__() method for the definition of equality. """ return self._hash def __repr__(self): """Return a string representation of the resource definition.""" def arg_repr(arg_name): return '='.join([arg_name, repr(getattr(self, '_%s' % arg_name))]) args = ('properties', 'metadata', 'depends', 'deletion_policy', 'update_policy', 'condition') data = { 'classname': type(self).__name__, 'name': repr(self.name), 'type': repr(self.resource_type), 'args': ', '.join(arg_repr(n) for n in args) } return '%(classname)s(%(name)s, %(type)s, %(args)s)' % data def _hash_data(data): """Return a stable hash value for an arbitrary parsed-JSON data snippet.""" if isinstance(data, function.Function): data = copy.deepcopy(data) if not isinstance(data, six.string_types): if isinstance(data, collections.Sequence): return hash(tuple(_hash_data(d) for d in data)) if isinstance(data, collections.Mapping): item_hashes = (hash(k) ^ _hash_data(v) for k, v in data.items()) return six.moves.reduce(operator.xor, item_hashes, 0) return hash(data) heat-10.0.2/heat/engine/template_files.py0000666000175000017500000001126613343562351020307 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections import six import weakref from heat.common import context from heat.common.i18n import _ from heat.db.sqlalchemy import api as db_api from heat.objects import raw_template_files _d = weakref.WeakValueDictionary() class ReadOnlyDict(dict): def __setitem__(self, key, value): raise ValueError("Attempted to write to internal TemplateFiles cache") class TemplateFiles(collections.Mapping): def __init__(self, files): self.files = None self.files_id = None if files is None: return if isinstance(files, TemplateFiles): self.files_id = files.files_id self.files = files.files return if isinstance(files, six.integer_types): self.files_id = files if self.files_id in _d: self.files = _d[self.files_id] return if not isinstance(files, dict): raise ValueError(_('Expected dict, got %(cname)s for files, ' '(value is %(val)s)') % {'cname': files.__class__, 'val': str(files)}) # the dict has not been persisted as a raw_template_files db obj # yet, so no self.files_id self.files = ReadOnlyDict(files) def __getitem__(self, key): self._refresh_if_needed() if self.files is None: raise KeyError return self.files[key] def __setitem__(self, key, value): self.update({key: value}) def __len__(self): self._refresh_if_needed() if not self.files: return 0 return len(self.files) def __contains__(self, key): self._refresh_if_needed() if not self.files: return False return key in self.files def __iter__(self): self._refresh_if_needed() if self.files is None: return iter(ReadOnlyDict({})) return iter(self.files) def _refresh_if_needed(self): # retrieve files from db if needed if self.files_id is None: return if self.files_id in _d: self.files = _d[self.files_id] return self._refresh() def _refresh(self): ctxt = context.get_admin_context() rtf_obj = db_api.raw_template_files_get(ctxt, self.files_id) _files_dict = ReadOnlyDict(rtf_obj.files) self.files = _files_dict _d[self.files_id] = _files_dict def store(self, ctxt): if not self.files or self.files_id is not None: # Do not persist an empty raw_template_files obj. If we # already have a non-null self.files_id, the (immutable) # raw_template_files object has already been persisted, so just # return the id. return self.files_id rtf_obj = raw_template_files.RawTemplateFiles.create( ctxt, {'files': self.files}) self.files_id = rtf_obj.id _d[self.files_id] = self.files return self.files_id def update(self, files): # Sets up the next call to store() to create a new # raw_template_files db obj. It seems like we *could* just # update the existing raw_template_files obj, but the problem # with that is other heat-engine processes' _d dictionaries # would have stale data for a given raw_template_files.id with # no way of knowing whether that data should be refreshed or # not. So, just avoid the potential for weird race conditions # and create another db obj in the next store(). if len(files) == 0: return if not isinstance(files, dict): raise ValueError(_('Expected dict, got %(cname)s for files, ' '(value is %(val)s)') % {'cname': files.__class__, 'val': str(files)}) self._refresh_if_needed() if self.files: new_files = self.files.copy() new_files.update(files) else: new_files = files self.files_id = None # not persisted yet self.files = ReadOnlyDict(new_files) heat-10.0.2/heat/engine/template_common.py0000666000175000017500000002140013343562351020464 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import functools import weakref import six from heat.common import exception from heat.common.i18n import _ from heat.engine import conditions from heat.engine import function from heat.engine import output from heat.engine import template class CommonTemplate(template.Template): """A class of the common implementation for HOT and CFN templates. This is *not* a stable interface, and any third-parties who create derived classes from it do so at their own risk. """ def __init__(self, template, template_id=None, files=None, env=None): super(CommonTemplate, self).__init__(template, template_id=template_id, files=files, env=env) self._conditions_cache = None, None @classmethod def _parse_resource_field(cls, key, valid_types, typename, rsrc_name, rsrc_data, parse_func): """Parse a field in a resource definition. :param key: The name of the key :param valid_types: Valid types for the parsed output :param typename: Description of valid type to include in error output :param rsrc_name: The resource name :param rsrc_data: The unparsed resource definition data :param parse_func: A function to parse the data, which takes the contents of the field and its path in the template as arguments. """ if key in rsrc_data: data = parse_func(rsrc_data[key], '.'.join([cls.RESOURCES, rsrc_name, key])) if not isinstance(data, valid_types): args = {'name': rsrc_name, 'key': key, 'typename': typename} message = _('Resource %(name)s %(key)s type ' 'must be %(typename)s') % args raise TypeError(message) return data else: return None def _rsrc_defn_args(self, stack, name, data): if self.RES_TYPE not in data: args = {'name': name, 'type_key': self.RES_TYPE} msg = _('Resource %(name)s is missing "%(type_key)s"') % args raise KeyError(msg) parse = functools.partial(self.parse, stack) def no_parse(field, path): return field yield ('resource_type', self._parse_resource_field(self.RES_TYPE, six.string_types, 'string', name, data, parse)) yield ('properties', self._parse_resource_field(self.RES_PROPERTIES, (collections.Mapping, function.Function), 'object', name, data, parse)) yield ('metadata', self._parse_resource_field(self.RES_METADATA, (collections.Mapping, function.Function), 'object', name, data, parse)) depends = self._parse_resource_field(self.RES_DEPENDS_ON, collections.Sequence, 'list or string', name, data, no_parse) if isinstance(depends, six.string_types): depends = [depends] yield 'depends', depends del_policy = self._parse_resource_field(self.RES_DELETION_POLICY, (six.string_types, function.Function), 'string', name, data, parse) deletion_policy = function.resolve(del_policy) if deletion_policy is not None: if deletion_policy not in self.deletion_policies: msg = _('Invalid deletion policy "%s"') % deletion_policy raise exception.StackValidationFailed(message=msg) else: deletion_policy = self.deletion_policies[deletion_policy] yield 'deletion_policy', deletion_policy yield ('update_policy', self._parse_resource_field(self.RES_UPDATE_POLICY, (collections.Mapping, function.Function), 'object', name, data, parse)) yield ('description', self._parse_resource_field(self.RES_DESCRIPTION, 
six.string_types, 'string', name, data, no_parse)) def _get_condition_definitions(self): """Return the condition definitions of template.""" return {} def conditions(self, stack): get_cache_stack, cached_conds = self._conditions_cache if (cached_conds is not None and get_cache_stack is not None and get_cache_stack() is stack): return cached_conds raw_defs = self._get_condition_definitions() if not isinstance(raw_defs, collections.Mapping): message = _('Condition definitions must be a map. Found a ' '%s instead') % type(raw_defs).__name__ raise exception.StackValidationFailed( error='Conditions validation error', message=message) parsed = {n: self.parse_condition(stack, c, '.'.join([self.CONDITIONS, n])) for n, c in raw_defs.items()} conds = conditions.Conditions(parsed) get_cache_stack = weakref.ref(stack) if stack is not None else None self._conditions_cache = get_cache_stack, conds return conds def outputs(self, stack): conds = self.conditions(stack) outputs = self.t.get(self.OUTPUTS) or {} def get_outputs(): for key, val in outputs.items(): if not isinstance(val, collections.Mapping): message = _('Output definitions must be a map. Found a ' '%s instead') % type(val).__name__ raise exception.StackValidationFailed( error='Output validation error', path=[self.OUTPUTS, key], message=message) if self.OUTPUT_VALUE not in val: message = _('Each output definition must contain ' 'a %s key.') % self.OUTPUT_VALUE raise exception.StackValidationFailed( error='Output validation error', path=[self.OUTPUTS, key], message=message) description = val.get(self.OUTPUT_DESCRIPTION) if hasattr(self, 'OUTPUT_CONDITION'): path = [self.OUTPUTS, key, self.OUTPUT_CONDITION] cond = self.parse_condition(stack, val.get(self.OUTPUT_CONDITION), '.'.join(path)) try: enabled = conds.is_enabled(function.resolve(cond)) except ValueError as exc: path = [self.OUTPUTS, key, self.OUTPUT_CONDITION] message = six.text_type(exc) raise exception.StackValidationFailed(path=path, message=message) if not enabled: yield key, output.OutputDefinition(key, None, description) continue value_def = self.parse(stack, val[self.OUTPUT_VALUE], path='.'.join([self.OUTPUTS, key, self.OUTPUT_VALUE])) yield key, output.OutputDefinition(key, value_def, description) return dict(get_outputs()) heat-10.0.2/heat/engine/cfn/0000775000175000017500000000000013343562672015504 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/cfn/template.py0000666000175000017500000002230113343562337017667 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
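# --- Editorial example (illustrative sketch; not part of the original
# Heat source) --- Section access on the CFN template classes below is
# guarded: the version keys and 'Parameters' may not be indexed
# directly, an empty section comes back as {}, and a missing
# Description defaults to 'No description'. 'tmpl' is assumed to be a
# CfnTemplate built elsewhere from a parsed template dict.
def _example_section_access(tmpl):
    resources = tmpl['Resources']      # {} when the section is empty/None
    description = tmpl['Description']  # 'No description' when absent
    try:
        tmpl['Parameters']             # direct access is disallowed
    except KeyError:
        pass
    return resources, description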
import functools import six from heat.common import exception from heat.common.i18n import _ from heat.engine.cfn import functions as cfn_funcs from heat.engine.cfn import parameters as cfn_params from heat.engine import function from heat.engine import parameters from heat.engine import rsrc_defn from heat.engine import template_common class CfnTemplateBase(template_common.CommonTemplate): """The base implementation of cfn template.""" SECTIONS = ( VERSION, ALTERNATE_VERSION, DESCRIPTION, MAPPINGS, PARAMETERS, RESOURCES, OUTPUTS, ) = ( 'AWSTemplateFormatVersion', 'HeatTemplateFormatVersion', 'Description', 'Mappings', 'Parameters', 'Resources', 'Outputs', ) OUTPUT_KEYS = ( OUTPUT_DESCRIPTION, OUTPUT_VALUE, ) = ( 'Description', 'Value', ) SECTIONS_NO_DIRECT_ACCESS = set([PARAMETERS, VERSION, ALTERNATE_VERSION]) _RESOURCE_KEYS = ( RES_TYPE, RES_PROPERTIES, RES_METADATA, RES_DEPENDS_ON, RES_DELETION_POLICY, RES_UPDATE_POLICY, RES_DESCRIPTION, ) = ( 'Type', 'Properties', 'Metadata', 'DependsOn', 'DeletionPolicy', 'UpdatePolicy', 'Description', ) functions = { 'Fn::FindInMap': cfn_funcs.FindInMap, 'Fn::GetAZs': cfn_funcs.GetAZs, 'Ref': cfn_funcs.Ref, 'Fn::GetAtt': cfn_funcs.GetAtt, 'Fn::Select': cfn_funcs.Select, 'Fn::Join': cfn_funcs.Join, 'Fn::Base64': cfn_funcs.Base64, } deletion_policies = { 'Delete': rsrc_defn.ResourceDefinition.DELETE, 'Retain': rsrc_defn.ResourceDefinition.RETAIN, 'Snapshot': rsrc_defn.ResourceDefinition.SNAPSHOT } HOT_TO_CFN_RES_ATTRS = {'type': RES_TYPE, 'properties': RES_PROPERTIES, 'metadata': RES_METADATA, 'depends_on': RES_DEPENDS_ON, 'deletion_policy': RES_DELETION_POLICY, 'update_policy': RES_UPDATE_POLICY} HOT_TO_CFN_OUTPUT_ATTRS = {'description': OUTPUT_DESCRIPTION, 'value': OUTPUT_VALUE} def __getitem__(self, section): """Get the relevant section in the template.""" if section not in self.SECTIONS: raise KeyError(_('"%s" is not a valid template section') % section) if section in self.SECTIONS_NO_DIRECT_ACCESS: raise KeyError( _('Section %s can not be accessed directly.') % section) if section == self.DESCRIPTION: default = 'No description' else: default = {} # if a section is None (empty yaml section) return {} # to be consistent with an empty json section. 
return self.t.get(section) or default def param_schemata(self, param_defaults=None): params = self.t.get(self.PARAMETERS) or {} pdefaults = param_defaults or {} for name, schema in six.iteritems(params): if name in pdefaults: params[name][parameters.DEFAULT] = pdefaults[name] return dict((name, parameters.Schema.from_dict(name, schema)) for name, schema in six.iteritems(params)) def get_section_name(self, section): return section def parameters(self, stack_identifier, user_params, param_defaults=None): return cfn_params.CfnParameters(stack_identifier, self, user_params=user_params, param_defaults=param_defaults) def resource_definitions(self, stack): resources = self.t.get(self.RESOURCES) or {} conditions = self.conditions(stack) def defns(): for name, snippet in resources.items(): try: defn_data = dict(self._rsrc_defn_args(stack, name, snippet)) except (TypeError, ValueError, KeyError) as ex: msg = six.text_type(ex) raise exception.StackValidationFailed(message=msg) defn = rsrc_defn.ResourceDefinition(name, **defn_data) cond_name = defn.condition() if cond_name is not None: try: enabled = conditions.is_enabled(cond_name) except ValueError as exc: path = [self.RESOURCES, name, self.RES_CONDITION] message = six.text_type(exc) raise exception.StackValidationFailed(path=path, message=message) if not enabled: continue yield name, defn return dict(defns()) def add_resource(self, definition, name=None): if name is None: name = definition.name hot_tmpl = definition.render_hot() if self.t.get(self.RESOURCES) is None: self.t[self.RESOURCES] = {} cfn_tmpl = dict((self.HOT_TO_CFN_RES_ATTRS[k], v) for k, v in hot_tmpl.items()) dep_list = cfn_tmpl.get(self.RES_DEPENDS_ON, []) if len(dep_list) == 1: dep_res = cfn_tmpl[self.RES_DEPENDS_ON][0] if dep_res in self.t[self.RESOURCES]: cfn_tmpl[self.RES_DEPENDS_ON] = dep_res else: del cfn_tmpl[self.RES_DEPENDS_ON] elif dep_list: cfn_tmpl[self.RES_DEPENDS_ON] = [d for d in dep_list if d in self.t[self.RESOURCES]] self.t[self.RESOURCES][name] = cfn_tmpl def add_output(self, definition): hot_op = definition.render_hot() cfn_op = dict((self.HOT_TO_CFN_OUTPUT_ATTRS[k], v) for k, v in hot_op.items()) if self.t.get(self.OUTPUTS) is None: self.t[self.OUTPUTS] = {} self.t[self.OUTPUTS][definition.name] = cfn_op class CfnTemplate(CfnTemplateBase): CONDITIONS = 'Conditions' SECTIONS = CfnTemplateBase.SECTIONS + (CONDITIONS,) SECTIONS_NO_DIRECT_ACCESS = (CfnTemplateBase.SECTIONS_NO_DIRECT_ACCESS | set([CONDITIONS])) RES_CONDITION = 'Condition' _RESOURCE_KEYS = CfnTemplateBase._RESOURCE_KEYS + (RES_CONDITION,) HOT_TO_CFN_RES_ATTRS = CfnTemplateBase.HOT_TO_CFN_RES_ATTRS HOT_TO_CFN_RES_ATTRS.update({'condition': RES_CONDITION}) OUTPUT_CONDITION = 'Condition' OUTPUT_KEYS = CfnTemplateBase.OUTPUT_KEYS + (OUTPUT_CONDITION,) functions = { 'Fn::FindInMap': cfn_funcs.FindInMap, 'Fn::GetAZs': cfn_funcs.GetAZs, 'Ref': cfn_funcs.Ref, 'Fn::GetAtt': cfn_funcs.GetAtt, 'Fn::Select': cfn_funcs.Select, 'Fn::Join': cfn_funcs.Join, 'Fn::Split': cfn_funcs.Split, 'Fn::Replace': cfn_funcs.Replace, 'Fn::Base64': cfn_funcs.Base64, 'Fn::MemberListToMap': cfn_funcs.MemberListToMap, 'Fn::ResourceFacade': cfn_funcs.ResourceFacade, 'Fn::If': cfn_funcs.If, } condition_functions = { 'Fn::Equals': cfn_funcs.Equals, 'Ref': cfn_funcs.ParamRef, 'Fn::FindInMap': cfn_funcs.FindInMap, 'Fn::Not': cfn_funcs.Not, 'Fn::And': cfn_funcs.And, 'Fn::Or': cfn_funcs.Or } def __init__(self, tmpl, template_id=None, files=None, env=None): super(CfnTemplate, self).__init__(tmpl, template_id, files, env) self.merge_sections = 
[self.PARAMETERS, self.CONDITIONS] def _get_condition_definitions(self): return self.t.get(self.CONDITIONS, {}) def _rsrc_defn_args(self, stack, name, data): for arg in super(CfnTemplate, self)._rsrc_defn_args(stack, name, data): yield arg parse_cond = functools.partial(self.parse_condition, stack) yield ('condition', self._parse_resource_field(self.RES_CONDITION, (six.string_types, bool, function.Function), 'string or boolean', name, data, parse_cond)) class HeatTemplate(CfnTemplateBase): functions = { 'Fn::FindInMap': cfn_funcs.FindInMap, 'Fn::GetAZs': cfn_funcs.GetAZs, 'Ref': cfn_funcs.Ref, 'Fn::GetAtt': cfn_funcs.GetAtt, 'Fn::Select': cfn_funcs.Select, 'Fn::Join': cfn_funcs.Join, 'Fn::Split': cfn_funcs.Split, 'Fn::Replace': cfn_funcs.Replace, 'Fn::Base64': cfn_funcs.Base64, 'Fn::MemberListToMap': cfn_funcs.MemberListToMap, 'Fn::ResourceFacade': cfn_funcs.ResourceFacade, } heat-10.0.2/heat/engine/cfn/__init__.py0000666000175000017500000000000013343562337017603 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/cfn/parameters.py0000666000175000017500000000431713343562337020226 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints as constr from heat.engine import parameters class CfnParameters(parameters.Parameters): PSEUDO_PARAMETERS = ( PARAM_STACK_ID, PARAM_STACK_NAME, PARAM_REGION ) = ( 'AWS::StackId', 'AWS::StackName', 'AWS::Region' ) def _pseudo_parameters(self, stack_identifier): stack_id = (stack_identifier.arn() if stack_identifier is not None else 'None') stack_name = stack_identifier and stack_identifier.stack_name yield parameters.Parameter( self.PARAM_STACK_ID, parameters.Schema(parameters.Schema.STRING, _('Stack ID'), default=str(stack_id))) if stack_name: yield parameters.Parameter( self.PARAM_STACK_NAME, parameters.Schema(parameters.Schema.STRING, _('Stack Name'), default=stack_name)) yield parameters.Parameter( self.PARAM_REGION, parameters.Schema(parameters.Schema.STRING, default='ap-southeast-1', constraints=[ constr.AllowedValues( ['us-east-1', 'us-west-1', 'us-west-2', 'sa-east-1', 'eu-west-1', 'ap-southeast-1', 'ap-northeast-1'])])) heat-10.0.2/heat/engine/cfn/functions.py0000666000175000017500000003345313343562337020076 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
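# --- Editorial example (illustrative sketch; not part of the original
# Heat source) --- Fn::Select below resolves out-of-range or missing
# indexes to '' and accepts both list and map collections. Passing
# stack=None is safe here because Select never consults the stack.
def _example_select_semantics():
    assert Select(None, 'Fn::Select', ['1', ['a', 'b']]).result() == 'b'
    assert Select(None, 'Fn::Select', ['5', ['a', 'b']]).result() == ''
    assert Select(None, 'Fn::Select', ['k', {'k': 'v'}]).result() == 'v'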
import collections from oslo_serialization import jsonutils import six from heat.api.aws import utils as aws_utils from heat.common import exception from heat.common.i18n import _ from heat.engine import function from heat.engine.hot import functions as hot_funcs class FindInMap(function.Function): """A function for resolving keys in the template mappings. Takes the form:: { "Fn::FindInMap" : [ "mapping", "key", "value" ] } """ def __init__(self, stack, fn_name, args): super(FindInMap, self).__init__(stack, fn_name, args) try: self._mapname, self._mapkey, self._mapvalue = self.args except ValueError as ex: raise KeyError(six.text_type(ex)) def result(self): mapping = self.stack.t.maps[function.resolve(self._mapname)] key = function.resolve(self._mapkey) value = function.resolve(self._mapvalue) return mapping[key][value] class GetAZs(function.Function): """A function for retrieving the availability zones. Takes the form:: { "Fn::GetAZs" : "<string>" } """ def result(self): # TODO(therve): Implement region scoping if self.stack is None: return ['nova'] else: return self.stack.get_availability_zones() class ParamRef(function.Function): """A function for resolving parameter references. Takes the form:: { "Ref" : "<param_name>" } """ def __init__(self, stack, fn_name, args): super(ParamRef, self).__init__(stack, fn_name, args) self.parameters = self.stack.parameters def result(self): param_name = function.resolve(self.args) try: return self.parameters[param_name] except KeyError: raise exception.InvalidTemplateReference(resource=param_name, key='unknown') def Ref(stack, fn_name, args): """A function for resolving parameters or resource references. Takes the form:: { "Ref" : "<param_name>" } or:: { "Ref" : "<resource_name>" } """ if stack is None or args in stack: RefClass = hot_funcs.GetResource else: RefClass = ParamRef return RefClass(stack, fn_name, args) class GetAtt(hot_funcs.GetAttThenSelect): """A function for resolving resource attributes. Takes the form:: { "Fn::GetAtt" : [ "<resource_name>", "<attribute_name>" ] } """ def _parse_args(self): try: resource_name, attribute = self.args except ValueError: raise ValueError(_('Arguments to "%s" must be of the form ' '[resource_name, attribute]') % self.fn_name) return resource_name, attribute, [] class Select(function.Function): """A function for selecting an item from a list or map. Takes the form (for a list lookup):: { "Fn::Select" : [ "<index>", [ "<value_1>", "<value_2>", ... ] ] } or (for a map lookup):: { "Fn::Select" : [ "<index>", { "<key_1>": "<value_1>", ... } ] } If the selected index is not found, this function resolves to an empty string. """ def __init__(self, stack, fn_name, args): super(Select, self).__init__(stack, fn_name, args) try: self._lookup, self._strings = self.args except ValueError: raise ValueError(_('Arguments to "%s" must be of the form ' '[index, collection]') % self.fn_name) def result(self): index = function.resolve(self._lookup) strings = function.resolve(self._strings) if strings == '': # an empty string is a common response from other # functions when result is not currently available. # Handle by returning an empty string return '' if isinstance(strings, six.string_types): # might be serialized json.
try: strings = jsonutils.loads(strings) except ValueError as json_ex: fmt_data = {'fn_name': self.fn_name, 'err': json_ex} raise ValueError(_('"%(fn_name)s": %(err)s') % fmt_data) if isinstance(strings, collections.Mapping): if not isinstance(index, six.string_types): raise TypeError(_('Index to "%s" must be a string') % self.fn_name) return strings.get(index, '') try: index = int(index) except (ValueError, TypeError): pass if (isinstance(strings, collections.Sequence) and not isinstance(strings, six.string_types)): if not isinstance(index, six.integer_types): raise TypeError(_('Index to "%s" must be an integer') % self.fn_name) try: return strings[index] except IndexError: return '' if strings is None: return '' raise TypeError(_('Arguments to %s not fully resolved') % self.fn_name) class Join(hot_funcs.Join): """A function for joining strings. Takes the form:: { "Fn::Join" : [ "<delim>", [ "<string_1>", "<string_2>", ... ] ] } And resolves to:: "<string_1><delim><string_2><delim>..." """ class Split(function.Function): """A function for splitting strings. Takes the form:: { "Fn::Split" : [ "<delim>", "<string_1><delim><string_2><delim>..." ] } And resolves to:: [ "<string_1>", "<string_2>", ... ] """ def __init__(self, stack, fn_name, args): super(Split, self).__init__(stack, fn_name, args) example = '"%s" : [ ",", "str1,str2"]' % self.fn_name fmt_data = {'fn_name': self.fn_name, 'example': example} if isinstance(self.args, (six.string_types, collections.Mapping)): raise TypeError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % fmt_data) try: self._delim, self._strings = self.args except ValueError: raise ValueError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % fmt_data) def result(self): strings = function.resolve(self._strings) if not isinstance(self._delim, six.string_types): raise TypeError(_("Delimiter for %s must be string") % self.fn_name) if not isinstance(strings, six.string_types): raise TypeError(_("String to split must be string; got %s") % type(strings)) return strings.split(self._delim) class Replace(hot_funcs.Replace): """A function for performing string substitutions. Takes the form:: { "Fn::Replace" : [ { "<key_1>": "<value_1>", "<key_2>": "<value_2>", ... }, "<key_1> <key_2>" ] } And resolves to:: "<value_1> <value_2>" When keys overlap in the template, longer matches are preferred. For keys of equal length, lexicographically smaller keys are preferred. """ def _parse_args(self): example = ('{"%s": ' '[ {"$var1": "foo", "%%var2%%": "bar"}, ' '"$var1 is %%var2%%"]}' % self.fn_name) fmt_data = {'fn_name': self.fn_name, 'example': example} if isinstance(self.args, (six.string_types, collections.Mapping)): raise TypeError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % fmt_data) try: mapping, string = self.args except ValueError: raise ValueError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % fmt_data) else: return mapping, string class Base64(function.Function): """A placeholder function for converting to base64. Takes the form:: { "Fn::Base64" : "<string>" } This function actually performs no conversion. It is included for the benefit of templates that convert UserData to Base64. Heat accepts UserData in plain text. """ def result(self): resolved = function.resolve(self.args) if not isinstance(resolved, six.string_types): raise TypeError(_('"%s" argument must be a string') % self.fn_name) return resolved class MemberListToMap(function.Function): """A function to convert lists with enumerated keys and values to mapping. Takes the form:: { 'Fn::MemberListToMap' : [ 'Name', 'Value', [ '.member.0.Name=<key>', '.member.0.Value=<value>', ... ] ] } And resolves to:: { "<key>" : "<value>", ...
} The first two arguments are the names of the key and value. """ def __init__(self, stack, fn_name, args): super(MemberListToMap, self).__init__(stack, fn_name, args) try: self._keyname, self._valuename, self._list = self.args except ValueError: correct = ''' {'Fn::MemberListToMap': ['Name', 'Value', ['.member.0.Name=key', '.member.0.Value=door']]} ''' raise TypeError(_('Wrong Arguments try: "%s"') % correct) if not isinstance(self._keyname, six.string_types): raise TypeError(_('%s Key Name must be a string') % self.fn_name) if not isinstance(self._valuename, six.string_types): raise TypeError(_('%s Value Name must be a string') % self.fn_name) def result(self): member_list = function.resolve(self._list) if not isinstance(member_list, collections.Iterable): raise TypeError(_('Member list must be a list')) def item(s): if not isinstance(s, six.string_types): raise TypeError(_("Member list items must be strings")) return s.split('=', 1) partials = dict(item(s) for s in member_list) return aws_utils.extract_param_pairs(partials, prefix='', keyname=self._keyname, valuename=self._valuename) class ResourceFacade(hot_funcs.ResourceFacade): """A function for retrieving data in a parent provider template. A function for obtaining data from the facade resource from within the corresponding provider template. Takes the form:: { "Fn::ResourceFacade": "<attribute_type>" } where the valid attribute types are "Metadata", "DeletionPolicy" and "UpdatePolicy". """ _RESOURCE_ATTRIBUTES = ( METADATA, DELETION_POLICY, UPDATE_POLICY, ) = ( 'Metadata', 'DeletionPolicy', 'UpdatePolicy' ) class If(hot_funcs.If): """A function to return the corresponding value based on condition evaluation. Takes the form:: { "Fn::If" : [ "<condition_name>", "<value_if_true>", "<value_if_false>" ] } The value_if_true is returned if the specified condition evaluates to true, and the value_if_false is returned if it evaluates to false. """ class Equals(hot_funcs.Equals): """A function for comparing whether two values are equal. Takes the form:: { "Fn::Equals" : [ "<value_1>", "<value_2>" ] } The value can be any type that you want to compare. Returns true if the two values are equal or false if they aren't. """ class Not(hot_funcs.Not): """A function that acts as a NOT operator on a condition. Takes the form:: { "Fn::Not" : [ "<condition>" ] } Returns true for a condition that evaluates to false or returns false for a condition that evaluates to true. """ def _check_args(self): msg = _('Arguments to "%s" must be of the form: ' '[condition]') % self.fn_name if (not self.args or not isinstance(self.args, collections.Sequence) or isinstance(self.args, six.string_types)): raise ValueError(msg) if len(self.args) != 1: raise ValueError(msg) self.condition = self.args[0] class And(hot_funcs.And): """A function that acts as an AND operator on conditions. Takes the form:: { "Fn::And" : [ "<condition_1>", "<condition_2>", ... ] } Returns true if all the specified conditions evaluate to true, or returns false if any one of the conditions evaluates to false. The minimum number of conditions that you can include is 2. """ class Or(hot_funcs.Or): """A function that acts as an OR operator on conditions. Takes the form:: { "Fn::Or" : [ "<condition_1>", "<condition_2>", ... ] } Returns true if any one of the specified conditions evaluates to true, or returns false if all of the conditions evaluate to false. The minimum number of conditions that you can include is 2.
""" heat-10.0.2/heat/engine/plugin_manager.py0000666000175000017500000001032013343562337020274 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import itertools import sys from oslo_config import cfg from oslo_log import log import six from heat.common import plugin_loader LOG = log.getLogger(__name__) class PluginManager(object): """A class for managing plugin modules.""" def __init__(self, *extra_packages): """Initialise the Heat Engine plugin package, and any others. The heat.engine.plugins package is always created, if it does not exist, from the plugin directories specified in the config file, and searched for modules. In addition, any extra packages specified are also searched for modules. e.g. >>> PluginManager('heat.engine.resources') will load all modules in the heat.engine.resources package as well as any user-supplied plugin modules. """ def packages(): for package_name in extra_packages: yield sys.modules[package_name] cfg.CONF.import_opt('plugin_dirs', 'heat.common.config') yield plugin_loader.create_subpackage(cfg.CONF.plugin_dirs, 'heat.engine') def modules(): pkg_modules = six.moves.map(plugin_loader.load_modules, packages()) return itertools.chain.from_iterable(pkg_modules) self.modules = list(modules()) def map_to_modules(self, function): """Iterate over the results of calling a function on every module.""" return six.moves.map(function, self.modules) class PluginMapping(object): """A class for managing plugin mappings.""" def __init__(self, names, *args, **kwargs): """Initialise with the mapping name(s) and arguments. `names` can be a single name or a list of names. The first name found in a given module is the one used. Each module is searched for a function called _mapping() which is called to retrieve the mappings provided by that module. Any other arguments passed will be passed to the mapping functions. """ if isinstance(names, six.string_types): names = [names] self.names = ['%s_mapping' % name for name in names] self.args = args self.kwargs = kwargs def load_from_module(self, module): """Return the mapping specified in the given module. If no such mapping is specified, an empty dictionary is returned. """ for mapping_name in self.names: mapping_func = getattr(module, mapping_name, None) if callable(mapping_func): fmt_data = {'mapping_name': mapping_name, 'module': module} try: mapping_dict = mapping_func(*self.args, **self.kwargs) except Exception: LOG.error('Failed to load %(mapping_name)s ' 'from %(module)s', fmt_data) raise else: if isinstance(mapping_dict, collections.Mapping): return mapping_dict elif mapping_dict is not None: LOG.error('Invalid type for %(mapping_name)s ' 'from %(module)s', fmt_data) return {} def load_all(self, plugin_manager): """Iterate over the mappings from all modules in the plugin manager. Mappings are returned as a list of (key, value) tuples. 
""" mod_dicts = plugin_manager.map_to_modules(self.load_from_module) return itertools.chain.from_iterable(six.iteritems(d) for d in mod_dicts) heat-10.0.2/heat/engine/scheduler.py0000666000175000017500000004261713343562340017272 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys import types import eventlet from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils import six from heat.common.i18n import _ from heat.common.i18n import repr_wrapper from heat.common import timeutils LOG = logging.getLogger(__name__) # Whether TaskRunner._sleep actually does an eventlet sleep when called. ENABLE_SLEEP = True def task_description(task): """Return a human-readable string description of a task. The description is used to identify the task when logging its status. """ name = task.__name__ if hasattr(task, '__name__') else None if name is not None and isinstance(task, (types.MethodType, types.FunctionType)): if getattr(task, '__self__', None) is not None: return '%s from %s' % (six.text_type(name), task.__self__) else: return six.text_type(name) return encodeutils.safe_decode(repr(task)) class Timeout(BaseException): """Raised when task has exceeded its allotted (wallclock) running time. This allows the task to perform any necessary cleanup, as well as use a different exception to notify the controlling task if appropriate. If the task suppresses the exception altogether, it will be cancelled but the controlling task will not be notified of the timeout. """ def __init__(self, task_runner, timeout): """Initialise with the TaskRunner and a timeout period in seconds.""" message = _('%s Timed out') % six.text_type(task_runner) super(Timeout, self).__init__(message) self._duration = timeutils.Duration(timeout) def expired(self): return self._duration.expired() def trigger(self, generator): """Trigger the timeout on a given generator.""" try: generator.throw(self) except StopIteration: return True else: # Clean up in case task swallows exception without exiting generator.close() return False def earlier_than(self, other): if other is None: return True assert isinstance(other, Timeout), "Invalid type for Timeout compare" return self._duration.endtime() < other._duration.endtime() class TimedCancel(Timeout): def trigger(self, generator): """Trigger the timeout on a given generator.""" generator.close() return False @six.python_2_unicode_compatible class ExceptionGroup(Exception): """Container for multiple exceptions. This exception is used by DependencyTaskGroup when the flag aggregate_exceptions is set to True and it's re-raised again when all tasks are finished. This way it can be caught later on so that the individual exceptions can be acted upon. 
""" def __init__(self, exceptions=None): if exceptions is None: exceptions = list() self.exceptions = list(exceptions) def __str__(self): return str([str(ex) for ex in self.exceptions]) @six.python_2_unicode_compatible class TaskRunner(object): """Wrapper for a resumable task (co-routine).""" def __init__(self, task, *args, **kwargs): """Initialise with a task function and arguments. The arguments are passed to task when it is started. The task function may be a co-routine that yields control flow between steps. If the task co-routine wishes to be advanced only on every nth step of the TaskRunner, it may yield an integer which is the period of the task. e.g. "yield 2" will result in the task being advanced on every second step. """ assert callable(task), "Task is not callable" self._task = task self._args = args self._kwargs = kwargs self._runner = None self._done = False self._timeout = None self._poll_period = 1 self.name = task_description(task) def __str__(self): """Return a human-readable string representation of the task.""" text = 'Task %s' % self.name return six.text_type(text) def _sleep(self, wait_time): """Sleep for the specified number of seconds.""" if ENABLE_SLEEP and wait_time is not None: LOG.debug('%s sleeping', six.text_type(self)) eventlet.sleep(wait_time) def __call__(self, wait_time=1, timeout=None, progress_callback=None): """Start and run the task to completion. The task will first sleep for zero seconds, then sleep for `wait_time` seconds between steps. To avoid sleeping, pass `None` for `wait_time`. """ assert self._runner is None, "Task already started" started = False for step in self.as_task(timeout=timeout, progress_callback=progress_callback): self._sleep(wait_time if (started or wait_time is None) else 0) started = True def start(self, timeout=None): """Initialise the task and run its first step. If a timeout is specified, any attempt to step the task after that number of seconds has elapsed will result in a Timeout being raised inside the task. """ assert self._runner is None, "Task already started" assert not self._done, "Task already cancelled" LOG.debug('%s starting', six.text_type(self)) if timeout is not None: self._timeout = Timeout(self, timeout) result = self._task(*self._args, **self._kwargs) if isinstance(result, types.GeneratorType): self._runner = result self.step() else: self._runner = False self._done = True LOG.debug('%s done (not resumable)', six.text_type(self)) def step(self): """Run another step of the task. Return True if the task is complete; False otherwise. """ if not self.done(): assert self._runner is not None, "Task not started" if self._poll_period > 1: self._poll_period -= 1 return False if self._timeout is not None and self._timeout.expired(): LOG.info('%s timed out', self) self._done = True self._timeout.trigger(self._runner) else: LOG.debug('%s running', six.text_type(self)) try: poll_period = next(self._runner) except StopIteration: self._done = True LOG.debug('%s complete', six.text_type(self)) else: if isinstance(poll_period, six.integer_types): self._poll_period = max(poll_period, 1) else: self._poll_period = 1 return self._done def run_to_completion(self, wait_time=1, progress_callback=None): """Run the task to completion. The task will sleep for `wait_time` seconds between steps. To avoid sleeping, pass `None` for `wait_time`. 
""" assert self._runner is not None, "Task not started" for step in self.as_task(progress_callback=progress_callback): self._sleep(wait_time) def as_task(self, timeout=None, progress_callback=None): """Return a task that drives the TaskRunner.""" resuming = self.started() if not resuming: self.start(timeout=timeout) else: if timeout is not None: new_timeout = Timeout(self, timeout) if new_timeout.earlier_than(self._timeout): self._timeout = new_timeout done = self.step() if resuming else self.done() while not done: try: yield if progress_callback is not None: progress_callback() except GeneratorExit: self.cancel() raise except: # noqa self._done = True try: self._runner.throw(*sys.exc_info()) except StopIteration: return else: self._done = False else: done = self.step() def cancel(self, grace_period=None): """Cancel the task and mark it as done.""" if self.done(): return if not self.started() or grace_period is None: LOG.debug('%s cancelled', six.text_type(self)) self._done = True if self.started(): self._runner.close() else: timeout = TimedCancel(self, grace_period) if timeout.earlier_than(self._timeout): self._timeout = timeout def started(self): """Return True if the task has been started.""" return self._runner is not None def done(self): """Return True if the task is complete.""" return self._done def __nonzero__(self): """Return True if there are steps remaining.""" return not self.done() def __bool__(self): """Return True if there are steps remaining.""" return self.__nonzero__() def wrappertask(task): """Decorator for a task that needs to drive a subtask. This is essentially a replacement for the Python 3-only "yield from" keyword (PEP 380), using the "yield" keyword that is supported in Python 2. For example:: @wrappertask def parent_task(self): self.setup() yield self.child_task() self.cleanup() """ @six.wraps(task) def wrapper(*args, **kwargs): parent = task(*args, **kwargs) try: subtask = next(parent) except StopIteration: return while True: try: if isinstance(subtask, types.GeneratorType): subtask_running = True try: step = next(subtask) except StopIteration: subtask_running = False while subtask_running: try: yield step except GeneratorExit: subtask.close() raise except: # noqa try: step = subtask.throw(*sys.exc_info()) except StopIteration: subtask_running = False else: try: step = next(subtask) except StopIteration: subtask_running = False else: yield subtask except GeneratorExit: parent.close() raise except: # noqa try: subtask = parent.throw(*sys.exc_info()) except StopIteration: return else: try: subtask = next(parent) except StopIteration: return return wrapper @repr_wrapper class DependencyTaskGroup(object): """Task which manages group of subtasks that have ordering dependencies.""" def __init__(self, dependencies, task=lambda o: o(), reverse=False, name=None, error_wait_time=None, aggregate_exceptions=False): """Initialise with the task dependencies. A task to run on each dependency may optionally be specified. If no task is supplied, it is assumed that the tasks are stored directly in the dependency tree. If a task is supplied, the object stored in the dependency tree is passed as an argument. If an error_wait_time is specified, tasks that are already running at the time of an error will continue to run for up to the specified time before being cancelled. Once all remaining tasks are complete or have been cancelled, the original exception is raised. 
If error_wait_time is a callable function, it will be called for each task, passing the dependency key as an argument, to determine the error_wait_time for that particular task. If aggregate_exceptions is True, then execution of parallel operations will not be cancelled in the event of an error (operations downstream of the error will be cancelled). Once all chains are complete, any errors will be rolled up into an ExceptionGroup exception. """ self._keys = list(dependencies) self._runners = dict((o, TaskRunner(task, o)) for o in self._keys) self._graph = dependencies.graph(reverse=reverse) self.error_wait_time = error_wait_time self.aggregate_exceptions = aggregate_exceptions if name is None: name = '(%s) %s' % (getattr(task, '__name__', task_description(task)), six.text_type(dependencies)) self.name = name def __repr__(self): """Return a string representation of the task.""" text = '%s(%s)' % (type(self).__name__, self.name) return text def __call__(self): """Return a co-routine which runs the task group.""" raised_exceptions = [] thrown_exceptions = [] try: while any(six.itervalues(self._runners)): try: for k, r in self._ready(): r.start() if not r: del self._graph[k] if self._graph: try: yield except Exception: thrown_exceptions.append(sys.exc_info()) raise for k, r in self._running(): if r.step(): del self._graph[k] except Exception: exc_info = None try: exc_info = sys.exc_info() if self.aggregate_exceptions: self._cancel_recursively(k, r) else: self.cancel_all(grace_period=self.error_wait_time) raised_exceptions.append(exc_info) finally: del exc_info except: # noqa with excutils.save_and_reraise_exception(): self.cancel_all() if raised_exceptions: if self.aggregate_exceptions: raise ExceptionGroup(v for t, v, tb in raised_exceptions) else: if thrown_exceptions: six.reraise(*thrown_exceptions[-1]) else: six.reraise(*raised_exceptions[0]) finally: del raised_exceptions del thrown_exceptions def cancel_all(self, grace_period=None): if callable(grace_period): get_grace_period = grace_period else: def get_grace_period(key): return grace_period for k, r in six.iteritems(self._runners): if not r.started() or r.done(): gp = None else: gp = get_grace_period(k) try: r.cancel(grace_period=gp) except Exception as ex: LOG.debug('Exception cancelling task: %s', six.text_type(ex)) def _cancel_recursively(self, key, runner): try: runner.cancel() except Exception as ex: LOG.debug('Exception cancelling task: %s', six.text_type(ex)) node = self._graph[key] for dependent_node in node.required_by(): node_runner = self._runners[dependent_node] self._cancel_recursively(dependent_node, node_runner) del self._graph[key] def _ready(self): """Iterate over all subtasks that are ready to start. Ready subtasks are subtasks whose dependencies have all been satisfied, but which have not yet been started. """ for k in self._keys: if not self._graph.get(k, True): runner = self._runners[k] if runner and not runner.started(): yield k, runner def _running(self): """Iterate over all subtasks that are currently running. Running subtasks are subtasks that have been started but have not yet completed. """ def running(k_r): return k_r[0] in self._graph and k_r[1].started() return six.moves.filter(running, six.iteritems(self._runners)) heat-10.0.2/heat/engine/conditions.py0000666000175000017500000000532513343562337017466 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import six from heat.common.i18n import _ from heat.common import exception from heat.engine import function _in_progress = object() class Conditions(object): def __init__(self, conditions_dict): assert isinstance(conditions_dict, collections.Mapping) self._conditions = conditions_dict self._resolved = {} def validate(self): for name, cond in six.iteritems(self._conditions): self._check_condition_type(name, cond) function.validate(cond) def _resolve(self, condition_name): resolved = function.resolve(self._conditions[condition_name]) self._check_condition_type(condition_name, resolved) return resolved def _check_condition_type(self, condition_name, condition_defn): if not isinstance(condition_defn, (bool, function.Function)): msg_data = {'cd': condition_name, 'definition': condition_defn} message = _('The definition of condition "%(cd)s" is invalid: ' '%(definition)s') % msg_data raise exception.StackValidationFailed( error='Condition validation error', message=message) def is_enabled(self, condition_name): if condition_name is None: return True if isinstance(condition_name, bool): return condition_name if not (isinstance(condition_name, six.string_types) and condition_name in self._conditions): raise ValueError(_('Invalid condition "%s"') % condition_name) if condition_name not in self._resolved: self._resolved[condition_name] = _in_progress self._resolved[condition_name] = self._resolve(condition_name) result = self._resolved[condition_name] if result is _in_progress: message = _('Circular definition for condition ' '"%s"') % condition_name raise exception.StackValidationFailed( error='Condition validation error', message=message) return result def __repr__(self): return 'Conditions(%r)' % self._conditions heat-10.0.2/heat/engine/stack_lock.py0000666000175000017500000001477513343562340017435 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib from oslo_log import log as logging from oslo_utils import excutils from heat.common import exception from heat.common import service_utils from heat.objects import stack as stack_object from heat.objects import stack_lock as stack_lock_object LOG = logging.getLogger(__name__) class StackLock(object): def __init__(self, context, stack_id, engine_id): self.context = context self.stack_id = stack_id self.engine_id = engine_id self.listener = None def get_engine_id(self): """Return the ID of the engine which currently holds the lock. Returns None if there is no lock held on the stack. 
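For example (the IDs are illustrative)::

    lock = StackLock(context, stack.id, engine_id)
    lock.get_engine_id()  # e.g. 'a1b2c3d4' if another engine holds it, else None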
""" return stack_lock_object.StackLock.get_engine_id(self.context, self.stack_id) def try_acquire(self): """Try to acquire a stack lock. Don't raise an ActionInProgress exception or try to steal lock. """ return stack_lock_object.StackLock.create(self.context, self.stack_id, self.engine_id) def acquire(self, retry=True): """Acquire a lock on the stack. :param retry: When True, retry if lock was released while stealing. :type retry: boolean """ lock_engine_id = stack_lock_object.StackLock.create(self.context, self.stack_id, self.engine_id) if lock_engine_id is None: LOG.debug("Engine %(engine)s acquired lock on stack " "%(stack)s" % {'engine': self.engine_id, 'stack': self.stack_id}) return stack = stack_object.Stack.get_by_id(self.context, self.stack_id, show_deleted=True, eager_load=False) if (lock_engine_id == self.engine_id or service_utils.engine_alive(self.context, lock_engine_id)): LOG.debug("Lock on stack %(stack)s is owned by engine " "%(engine)s" % {'stack': self.stack_id, 'engine': lock_engine_id}) raise exception.ActionInProgress(stack_name=stack.name, action=stack.action) else: LOG.info("Stale lock detected on stack %(stack)s. Engine " "%(engine)s will attempt to steal the lock", {'stack': self.stack_id, 'engine': self.engine_id}) result = stack_lock_object.StackLock.steal(self.context, self.stack_id, lock_engine_id, self.engine_id) if result is None: LOG.info("Engine %(engine)s successfully stole the lock " "on stack %(stack)s", {'engine': self.engine_id, 'stack': self.stack_id}) return elif result is True: if retry: LOG.info("The lock on stack %(stack)s was released " "while engine %(engine)s was stealing it. " "Trying again", {'stack': self.stack_id, 'engine': self.engine_id}) return self.acquire(retry=False) else: new_lock_engine_id = result LOG.info("Failed to steal lock on stack %(stack)s. " "Engine %(engine)s stole the lock first", {'stack': self.stack_id, 'engine': new_lock_engine_id}) raise exception.ActionInProgress( stack_name=stack.name, action=stack.action) def release(self): """Release a stack lock.""" # Only the engine that owns the lock will be releasing it. result = stack_lock_object.StackLock.release(self.context, self.stack_id, self.engine_id) if result is True: LOG.warning("Lock was already released on stack %s!", self.stack_id) else: LOG.debug("Engine %(engine)s released lock on stack " "%(stack)s" % {'engine': self.engine_id, 'stack': self.stack_id}) def __enter__(self): self.acquire() return self def __exit__(self, exc_type, exc_val, exc_tb): self.release() return False @contextlib.contextmanager def thread_lock(self, retry=True): """Acquire a lock and release it only if there is an exception. The release method still needs to be scheduled to be run at the end of the thread using the Thread.link method. :param retry: When True, retry if lock was released while stealing. :type retry: boolean """ try: self.acquire(retry) yield except exception.ActionInProgress: raise except: # noqa with excutils.save_and_reraise_exception(): self.release() @contextlib.contextmanager def try_thread_lock(self): """Similar to thread_lock, but acquire the lock using try_acquire. Only release it upon any exception after a successful acquisition. 
""" result = None try: result = self.try_acquire() yield result except: # noqa if result is None: # Lock was successfully acquired with excutils.save_and_reraise_exception(): self.release() raise heat-10.0.2/heat/engine/update.py0000666000175000017500000003050313343562351016567 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import six from heat.common import exception from heat.common.i18n import repr_wrapper from heat.engine import dependencies from heat.engine import resource from heat.engine import scheduler from heat.engine import stk_defn from heat.objects import resource as resource_objects LOG = logging.getLogger(__name__) @repr_wrapper class StackUpdate(object): """A Task to perform the update of an existing stack to a new template.""" def __init__(self, existing_stack, new_stack, previous_stack, rollback=False): """Initialise with the existing stack and the new stack.""" self.existing_stack = existing_stack self.new_stack = new_stack self.previous_stack = previous_stack self.rollback = rollback self.existing_snippets = dict((n, r.frozen_definition()) for n, r in self.existing_stack.items()) def __repr__(self): if self.rollback: return '%s Rollback' % str(self.existing_stack) else: return '%s Update' % str(self.existing_stack) @scheduler.wrappertask def __call__(self): """Return a co-routine that updates the stack.""" cleanup_prev = scheduler.DependencyTaskGroup( self.previous_stack.dependencies, self._remove_backup_resource, reverse=True) def get_error_wait_time(resource): return resource.cancel_grace_period() updater = scheduler.DependencyTaskGroup( self.dependencies(), self._resource_update, error_wait_time=get_error_wait_time) if not self.rollback: yield cleanup_prev() try: yield updater() finally: self.previous_stack.reset_dependencies() def _resource_update(self, res): if res.name in self.new_stack and self.new_stack[res.name] is res: return self._process_new_resource_update(res) else: return self._process_existing_resource_update(res) @scheduler.wrappertask def _remove_backup_resource(self, prev_res): if prev_res.state not in ((prev_res.INIT, prev_res.COMPLETE), (prev_res.DELETE, prev_res.COMPLETE)): LOG.debug("Deleting backup resource %s", prev_res.name) yield prev_res.destroy() @staticmethod def _exchange_stacks(existing_res, prev_res): resource_objects.Resource.exchange_stacks(existing_res.stack.context, existing_res.id, prev_res.id) prev_stack, existing_stack = prev_res.stack, existing_res.stack prev_stack.add_resource(existing_res) existing_stack.add_resource(prev_res) @scheduler.wrappertask def _create_resource(self, new_res): res_name = new_res.name # Clean up previous resource if res_name in self.previous_stack: prev_res = self.previous_stack[res_name] if prev_res.state not in ((prev_res.INIT, prev_res.COMPLETE), (prev_res.DELETE, prev_res.COMPLETE)): # Swap in the backup resource if it is in a valid state, # instead of creating a new resource if prev_res.status == prev_res.COMPLETE: LOG.debug("Swapping in backup 
Resource %s", res_name) self._exchange_stacks(self.existing_stack[res_name], prev_res) return LOG.debug("Deleting backup Resource %s", res_name) yield prev_res.destroy() # Back up existing resource if res_name in self.existing_stack: LOG.debug("Backing up existing Resource %s", res_name) existing_res = self.existing_stack[res_name] self.previous_stack.add_resource(existing_res) existing_res.state_set(existing_res.UPDATE, existing_res.COMPLETE) self.existing_stack.add_resource(new_res) # Save new resource definition to backup stack if it is not # present in backup stack template already # it allows to resolve all dependencies that existing resource # can have if it was copied to backup stack if (res_name not in self.previous_stack.t[self.previous_stack.t.RESOURCES]): LOG.debug("Storing definition of new Resource %s", res_name) self.previous_stack.t.add_resource(new_res.t) self.previous_stack.t.store(self.previous_stack.context) yield new_res.create() self._update_resource_data(new_res) def _check_replace_restricted(self, res): registry = res.stack.env.registry restricted_actions = registry.get_rsrc_restricted_actions(res.name) existing_res = self.existing_stack[res.name] if 'replace' in restricted_actions: ex = exception.ResourceActionRestricted(action='replace') failure = exception.ResourceFailure(ex, existing_res, existing_res.UPDATE) existing_res._add_event(existing_res.UPDATE, existing_res.FAILED, six.text_type(ex)) raise failure def _update_resource_data(self, resource): # Use the *new* template to determine the attrs to cache node_data = resource.node_data(self.new_stack.defn) stk_defn.update_resource_data(self.existing_stack.defn, resource.name, node_data) # Also update the new stack's definition with the data, so that # following resources can calculate dep_attr values correctly (e.g. if # the actual attribute name in a get_attr function also comes from a # get_attr function.) 
stk_defn.update_resource_data(self.new_stack.defn, resource.name, node_data) @scheduler.wrappertask def _process_new_resource_update(self, new_res): res_name = new_res.name if res_name in self.existing_stack: existing_res = self.existing_stack[res_name] is_substituted = existing_res.check_is_substituted(type(new_res)) if type(existing_res) is type(new_res) or is_substituted: try: yield self._update_in_place(existing_res, new_res, is_substituted) except resource.UpdateReplace: pass else: # Save updated resource definition to backup stack # because it allows the backup stack resources to be # synchronized LOG.debug("Storing definition of updated Resource %s", res_name) self.previous_stack.t.add_resource(new_res.t) self.previous_stack.t.store(self.previous_stack.context) LOG.info("Resource %(res_name)s for stack " "%(stack_name)s updated", {'res_name': res_name, 'stack_name': self.existing_stack.name}) self._update_resource_data(existing_res) return else: self._check_replace_restricted(new_res) yield self._create_resource(new_res) def _update_in_place(self, existing_res, new_res, is_substituted=False): existing_snippet = self.existing_snippets[existing_res.name] prev_res = self.previous_stack.get(new_res.name) # Note the new resource snippet is resolved in the context # of the existing stack (which is the stack being updated) # but with the template of the new stack (in case the update # is switching template implementations) new_snippet = new_res.t.reparse(self.existing_stack.defn, self.new_stack.t) if is_substituted: substitute = type(new_res)(existing_res.name, existing_res.t, existing_res.stack) existing_res.stack.resources[existing_res.name] = substitute existing_res = substitute existing_res.converge = self.new_stack.converge return existing_res.update(new_snippet, existing_snippet, prev_resource=prev_res) @scheduler.wrappertask def _process_existing_resource_update(self, existing_res): res_name = existing_res.name if res_name in self.previous_stack: yield self._remove_backup_resource(self.previous_stack[res_name]) if res_name in self.new_stack: new_res = self.new_stack[res_name] if new_res.state == (new_res.INIT, new_res.COMPLETE): # Already updated in-place return if existing_res.stack is not self.previous_stack: yield existing_res.destroy() if res_name not in self.new_stack: self.existing_stack.remove_resource(res_name) def dependencies(self): """Return the Dependencies graph for the update. Returns a Dependencies object representing the dependencies between update operations to move from an existing stack definition to a new one.
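For example, the result can be iterated in dependency order, or handed to a DependencyTaskGroup as in __call__ above (a sketch; the stacks are assumed)::

    update = StackUpdate(existing_stack, new_stack, backup_stack)
    for node in update.dependencies():
        ...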
""" existing_deps = self.existing_stack.dependencies new_deps = self.new_stack.dependencies def edges(): # Create/update the new stack's resources in create order for e in new_deps.graph().edges(): yield e # Destroy/cleanup the old stack's resources in delete order for e in existing_deps.graph(reverse=True).edges(): yield e # Don't cleanup old resources until after they have been replaced for name, res in six.iteritems(self.existing_stack): if name in self.new_stack: yield (res, self.new_stack[name]) return dependencies.Dependencies(edges()) def preview(self): upd_keys = set(self.new_stack.resources.keys()) cur_keys = set(self.existing_stack.resources.keys()) common_keys = cur_keys.intersection(upd_keys) deleted_keys = cur_keys.difference(upd_keys) added_keys = upd_keys.difference(cur_keys) updated_keys = [] replaced_keys = [] for key in common_keys: current_res = self.existing_stack.resources[key] updated_res = self.new_stack.resources[key] current_props = current_res.frozen_definition().properties( current_res.properties_schema, current_res.context) updated_props = updated_res.frozen_definition().properties( updated_res.properties_schema, updated_res.context) # type comparison must match that in _process_new_resource_update if type(current_res) is not type(updated_res): replaced_keys.append(key) continue try: if current_res.preview_update(updated_res.frozen_definition(), current_res.frozen_definition(), updated_props, current_props, None): updated_keys.append(key) except resource.UpdateReplace: replaced_keys.append(key) return { 'unchanged': list(set(common_keys).difference( set(updated_keys + replaced_keys))), 'updated': updated_keys, 'replaced': replaced_keys, 'added': list(added_keys), 'deleted': list(deleted_keys), } heat-10.0.2/heat/engine/constraint/0000775000175000017500000000000013343562672017122 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/constraint/common_constraints.py0000666000175000017500000001273313343562351023415 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import croniter import eventlet import netaddr import pytz import six from oslo_utils import netutils from oslo_utils import timeutils from heat.common.i18n import _ from heat.common import netutils as heat_netutils from heat.engine import constraints class TestConstraintDelay(constraints.BaseCustomConstraint): def validate_with_client(self, client, value): eventlet.sleep(value) class IPConstraint(constraints.BaseCustomConstraint): def validate(self, value, context, template=None): self._error_message = 'Invalid IP address' return netutils.is_valid_ip(value) class MACConstraint(constraints.BaseCustomConstraint): def validate(self, value, context, template=None): self._error_message = 'Invalid MAC address.' return netaddr.valid_mac(value) class DNSNameConstraint(constraints.BaseCustomConstraint): def validate(self, value, context): try: heat_netutils.validate_dns_format(value) except ValueError as ex: self._error_message = ("'%(value)s' not in valid format." 
" Reason: %(reason)s") % { 'value': value, 'reason': six.text_type(ex)} return False return True class RelativeDNSNameConstraint(DNSNameConstraint): def validate(self, value, context): if not value: return True if value.endswith('.'): self._error_message = _("'%s' is a FQDN. It should be a " "relative domain name.") % value return False length = len(value) if length > heat_netutils.FQDN_MAX_LEN - 3: self._error_message = _("'%(value)s' contains '%(length)s' " "characters. Adding a domain name will " "cause it to exceed the maximum length " "of a FQDN of '%(max_len)s'.") % { "value": value, "length": length, "max_len": heat_netutils.FQDN_MAX_LEN} return False return super(RelativeDNSNameConstraint, self).validate(value, context) class DNSDomainConstraint(DNSNameConstraint): def validate(self, value, context): if not value: return True if not super(DNSDomainConstraint, self).validate(value, context): return False if not value.endswith('.'): self._error_message = ("'%s' must end with '.'.") % value return False return True class CIDRConstraint(constraints.BaseCustomConstraint): def _validate_whitespace(self, data): self._error_message = ("Invalid net cidr '%s' contains " "whitespace" % data) if len(data.split()) > 1: return False return True def validate(self, value, context, template=None): try: netaddr.IPNetwork(value) return self._validate_whitespace(value) except Exception as ex: self._error_message = 'Invalid net cidr %s ' % six.text_type(ex) return False class ISO8601Constraint(constraints.BaseCustomConstraint): def validate(self, value, context, template=None): try: timeutils.parse_isotime(value) except Exception: return False else: return True class CRONExpressionConstraint(constraints.BaseCustomConstraint): def validate(self, value, context, template=None): if not value: return True try: croniter.croniter(value) return True except Exception as ex: self._error_message = _( 'Invalid CRON expression: %s') % six.text_type(ex) return False class TimezoneConstraint(constraints.BaseCustomConstraint): def validate(self, value, context, template=None): if not value: return True try: pytz.timezone(value) return True except Exception as ex: self._error_message = _( 'Invalid timezone: %s') % six.text_type(ex) return False class ExpirationConstraint(constraints.BaseCustomConstraint): def validate(self, value, context): if not value: return True try: expiration_tz = timeutils.parse_isotime(value.strip()) expiration = timeutils.normalize_time(expiration_tz) if expiration > timeutils.utcnow(): return True raise ValueError(_('Expiration time is out of date.')) except Exception as ex: self._error_message = (_( 'Expiration {0} is invalid: {1}').format(value, six.text_type(ex))) return False heat-10.0.2/heat/engine/constraint/__init__.py0000666000175000017500000000000013343562337021221 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/properties.py0000666000175000017500000006353413343562337017517 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections from oslo_serialization import jsonutils import six from heat.common import exception from heat.common.i18n import _ from heat.common import param_utils from heat.engine import constraints as constr from heat.engine import function from heat.engine.hot import parameters as hot_param from heat.engine import parameters from heat.engine import support from heat.engine import translation as trans SCHEMA_KEYS = ( REQUIRED, IMPLEMENTED, DEFAULT, TYPE, SCHEMA, ALLOWED_PATTERN, MIN_VALUE, MAX_VALUE, ALLOWED_VALUES, MIN_LENGTH, MAX_LENGTH, DESCRIPTION, UPDATE_ALLOWED, IMMUTABLE, ) = ( 'Required', 'Implemented', 'Default', 'Type', 'Schema', 'AllowedPattern', 'MinValue', 'MaxValue', 'AllowedValues', 'MinLength', 'MaxLength', 'Description', 'UpdateAllowed', 'Immutable', ) class Schema(constr.Schema): """Schema class for validating resource properties. This class is used for defining schema constraints for resource properties. It inherits generic validation features from the base Schema class and adds processing that is specific to resource properties. """ KEYS = ( TYPE, DESCRIPTION, DEFAULT, SCHEMA, REQUIRED, CONSTRAINTS, UPDATE_ALLOWED, IMMUTABLE, ) = ( 'type', 'description', 'default', 'schema', 'required', 'constraints', 'update_allowed', 'immutable', ) def __init__(self, data_type, description=None, default=None, schema=None, required=False, constraints=None, implemented=True, update_allowed=False, immutable=False, support_status=support.SupportStatus(), allow_conversion=False): super(Schema, self).__init__(data_type, description, default, schema, required, constraints, immutable=immutable) self.implemented = implemented self.update_allowed = update_allowed self.support_status = support_status self.allow_conversion = allow_conversion # validate structural correctness of schema itself self.validate() def validate(self, context=None): super(Schema, self).validate() # check that update_allowed and immutable # do not contradict each other if self.update_allowed and self.immutable: msg = _("Options %(ua)s and %(im)s " "cannot both be True") % { 'ua': UPDATE_ALLOWED, 'im': IMMUTABLE} raise exception.InvalidSchemaError(message=msg) @classmethod def from_legacy(cls, schema_dict): """Return a Property Schema object from a legacy schema dictionary.""" # Check for fully-fledged Schema objects if isinstance(schema_dict, cls): return schema_dict unknown = [k for k in schema_dict if k not in SCHEMA_KEYS] if unknown: raise exception.InvalidSchemaError( message=_('Unknown key(s) %s') % unknown) def constraints(): def get_num(key): val = schema_dict.get(key) if val is not None: val = Schema.str_to_num(val) return val if MIN_VALUE in schema_dict or MAX_VALUE in schema_dict: yield constr.Range(get_num(MIN_VALUE), get_num(MAX_VALUE)) if MIN_LENGTH in schema_dict or MAX_LENGTH in schema_dict: yield constr.Length(get_num(MIN_LENGTH), get_num(MAX_LENGTH)) if ALLOWED_VALUES in schema_dict: yield constr.AllowedValues(schema_dict[ALLOWED_VALUES]) if ALLOWED_PATTERN in schema_dict: yield constr.AllowedPattern(schema_dict[ALLOWED_PATTERN]) try: data_type = schema_dict[TYPE] except KeyError: raise exception.InvalidSchemaError( message=_('No %s specified') % TYPE) if SCHEMA in schema_dict: if data_type == Schema.LIST: ss = cls.from_legacy(schema_dict[SCHEMA]) elif data_type == Schema.MAP: schema_dicts = schema_dict[SCHEMA].items() ss = dict((n, cls.from_legacy(sd)) for n, sd in schema_dicts) else: raise exception.InvalidSchemaError( message=_('%(schema)s supplied for %(type)s %(data)s') % dict(schema=SCHEMA,
type=TYPE, data=data_type)) else: ss = None return cls(data_type, description=schema_dict.get(DESCRIPTION), default=schema_dict.get(DEFAULT), schema=ss, required=schema_dict.get(REQUIRED, False), constraints=list(constraints()), implemented=schema_dict.get(IMPLEMENTED, True), update_allowed=schema_dict.get(UPDATE_ALLOWED, False), immutable=schema_dict.get(IMMUTABLE, False)) @classmethod def from_parameter(cls, param): """Return a Property Schema corresponding to a Parameter Schema. Convert a parameter schema from a provider template to a property Schema for the corresponding resource facade. """ # map param types to property types param_type_map = { param.STRING: cls.STRING, param.NUMBER: cls.NUMBER, param.LIST: cls.LIST, param.MAP: cls.MAP, param.BOOLEAN: cls.BOOLEAN } # allow_conversion allows slightly more flexible type conversion # where property->parameter types don't align, primarily when # a json parameter value is passed via a Map property, which requires # some coercion to pass strings or lists (which are both valid for # Json parameters but not for Map properties). allow_conversion = (param.type == param.MAP or param.type == param.LIST) # make update_allowed true by default on TemplateResources # as the template should deal with this. return cls(data_type=param_type_map.get(param.type, cls.MAP), description=param.description, required=param.required, constraints=param.constraints, update_allowed=True, immutable=False, allow_conversion=allow_conversion) def allowed_param_prop_type(self): """Return allowed type of Property Schema converted from parameter. Especially, when generating Schema from parameter, Integer Property Schema will be supplied by Number parameter. """ param_type_map = { self.INTEGER: self.NUMBER, self.STRING: self.STRING, self.NUMBER: self.NUMBER, self.BOOLEAN: self.BOOLEAN, self.LIST: self.LIST, self.MAP: self.MAP } return param_type_map[self.type] def __getitem__(self, key): if key == self.UPDATE_ALLOWED: return self.update_allowed elif key == self.IMMUTABLE: return self.immutable else: return super(Schema, self).__getitem__(key) def schemata(schema_dicts): """Return dictionary of Schema objects for given dictionary of schemata. The input schemata are converted from the legacy (dictionary-based) format to Schema objects where necessary. """ return dict((n, Schema.from_legacy(s)) for n, s in schema_dicts.items()) class Property(object): def __init__(self, schema, name=None, context=None, path=None): self.schema = Schema.from_legacy(schema) self.name = name self.context = context self.path = self.make_path(name, path) def required(self): return self.schema.required def implemented(self): return self.schema.implemented def update_allowed(self): return self.schema.update_allowed def immutable(self): return self.schema.immutable def has_default(self): return self.schema.default is not None def default(self): return self.schema.default def type(self): return self.schema.type def support_status(self): return self.schema.support_status def make_path(self, name, path=None): if path is None: path = '' if name is None: name = '' if isinstance(name, int) or name.isdigit(): name = str(name) delim = '' if not path or path.endswith('.') else '.' 
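# e.g. make_path('net_id', 'properties') -> 'properties.net_id', while make_path('net_id') with no parent path -> 'net_id'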
return delim.join([path, name]) def _get_integer(self, value): if value is None: value = self.has_default() and self.default() or 0 try: value = int(value) except ValueError: raise TypeError(_("Value '%s' is not an integer") % value) else: return value def _get_number(self, value): if value is None: value = self.has_default() and self.default() or 0 return Schema.str_to_num(value) def _get_string(self, value): if value is None: value = self.has_default() and self.default() or '' if not isinstance(value, six.string_types): if isinstance(value, (bool, int)): value = six.text_type(value) else: raise ValueError(_('Value must be a string; got %r') % value) return value def _get_children(self, child_values, keys=None, validate=False, translation=None): if self.schema.schema is not None: if keys is None: keys = list(self.schema.schema) schemata = dict((k, self.schema.schema[k]) for k in keys) properties = Properties(schemata, dict(child_values), context=self.context, parent_name=self.path, translation=translation) if validate: properties.validate() return ((k, properties[k]) for k in keys) else: return child_values def _get_map(self, value, validate=False, translation=None): if value is None: value = self.default() if self.has_default() else {} if not isinstance(value, collections.Mapping): # This is to handle passing Lists via Json parameters exposed # via a provider resource, in particular lists-of-dicts which # cannot be handled correctly via comma_delimited_list if self.schema.allow_conversion: if isinstance(value, six.string_types): return value elif isinstance(value, collections.Sequence): return jsonutils.dumps(value) raise TypeError(_('"%s" is not a map') % value) return dict(self._get_children(six.iteritems(value), validate=validate, translation=translation)) def _get_list(self, value, validate=False, translation=None): if value is None: value = self.has_default() and self.default() or [] if self.schema.allow_conversion and isinstance(value, six.string_types): value = param_utils.delim_string_to_list(value) if (not isinstance(value, collections.Sequence) or isinstance(value, six.string_types)): raise TypeError(_('"%s" is not a list') % repr(value)) return [v[1] for v in self._get_children(enumerate(value), range(len(value)), validate=validate, translation=translation)] def _get_bool(self, value): """Get value for boolean property. Explicitly checking for bool, or string with lower value "true" or "false", to avoid integer values. 
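For example, "true"/"True" and "false"/"False" are accepted, while the integer 1 or the string "yes" are rejected with an error.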
""" if value is None: value = self.has_default() and self.default() or False if isinstance(value, bool): return value if isinstance(value, six.string_types): normalised = value.lower() if normalised not in ['true', 'false']: raise ValueError(_('"%s" is not a valid boolean') % normalised) return normalised == 'true' raise TypeError(_('"%s" is not a valid boolean') % value) def get_value(self, value, validate=False, translation=None): """Get value from raw value and sanitize according to data type.""" t = self.type() if t == Schema.STRING: _value = self._get_string(value) elif t == Schema.INTEGER: _value = self._get_integer(value) elif t == Schema.NUMBER: _value = self._get_number(value) elif t == Schema.MAP: _value = self._get_map(value, validate, translation) elif t == Schema.LIST: _value = self._get_list(value, validate, translation) elif t == Schema.BOOLEAN: _value = self._get_bool(value) elif t == Schema.ANY: _value = value if validate: self.schema.validate_constraints(_value, self.context) return _value class Properties(collections.Mapping): def __init__(self, schema, data, resolver=lambda d: d, parent_name=None, context=None, section=None, translation=None): self.props = dict((k, Property(s, k, context, path=parent_name)) for k, s in schema.items()) self.resolve = resolver self.data = data self.error_prefix = [section] if section is not None else [] self.parent_name = parent_name self.context = context self.translation = (trans.Translation(properties=self) if translation is None else translation) def update_translation(self, rules, client_resolve=True, ignore_resolve_error=False): self.translation.set_rules(rules, client_resolve=client_resolve, ignore_resolve_error=ignore_resolve_error) @staticmethod def schema_from_params(params_snippet): """Create properties schema from the parameters section of a template. :param params_snippet: parameter definition from a template :returns: equivalent properties schemata for the specified parameters """ if params_snippet: return dict((n, Schema.from_parameter(p)) for n, p in params_snippet.items()) return {} def validate(self, with_value=True): try: for key in self.data: if key not in self.props: msg = _("Unknown Property %s") % key raise exception.StackValidationFailed(message=msg) for (key, prop) in self.props.items(): if (self.translation.is_deleted(prop.path) or self.translation.is_replaced(prop.path)): continue if with_value: try: self._get_property_value(key, validate=True) except exception.StackValidationFailed as ex: path = [key] path.extend(ex.path) raise exception.StackValidationFailed( path=path, message=ex.error_message) except ValueError as e: if prop.required() and key not in self.data: path = [] else: path = [key] raise exception.StackValidationFailed( path=path, message=six.text_type(e)) # are there unimplemented Properties if not prop.implemented() and key in self.data: msg = _("Property %s not implemented yet") % key raise exception.StackValidationFailed(message=msg) except exception.StackValidationFailed as ex: # NOTE(prazumovsky): should reraise exception for adding specific # error name and error_prefix to path for correct error message # building. 
path = self.error_prefix path.extend(ex.path) raise exception.StackValidationFailed( error=ex.error or 'Property error', path=path, message=ex.error_message ) def _find_deps_any_in_init(self, unresolved_value): deps = function.dependencies(unresolved_value) if any(res.action == res.INIT for res in deps): return True def get_user_value(self, key, validate=False): if key not in self: raise KeyError(_('Invalid Property %s') % key) prop = self.props[key] if (self.translation.is_deleted(prop.path) or self.translation.is_replaced(prop.path)): return if key in self.data: try: unresolved_value = self.data[key] if validate: if self._find_deps_any_in_init(unresolved_value): validate = False value = self.resolve(unresolved_value) if self.translation.has_translation(prop.path): value = self.translation.translate(prop.path, value, self.data) return prop.get_value(value, validate, translation=self.translation) # Children can raise StackValidationFailed with unique path which # is necessary for further use in StackValidationFailed exception. # So we need to handle this exception in this method. except exception.StackValidationFailed as e: raise exception.StackValidationFailed(path=e.path, message=e.error_message) # the resolver function could raise any number of exceptions, # so handle this generically except Exception as e: raise ValueError(six.text_type(e)) def _get_property_value(self, key, validate=False): if key not in self: raise KeyError(_('Invalid Property %s') % key) prop = self.props[key] if not self.translation.is_deleted(prop.path) and key in self.data: return self.get_user_value(key, validate) elif self.translation.has_translation(prop.path): value = self.translation.translate(prop.path, prop_data=self.data, validate=validate) if value is not None or prop.has_default(): return prop.get_value(value) elif prop.required(): raise ValueError(_('Property %s not assigned') % key) elif prop.has_default(): return prop.get_value(None, validate, translation=self.translation) elif prop.required(): raise ValueError(_('Property %s not assigned') % key) def __getitem__(self, key): return self._get_property_value(key) def __len__(self): return len(self.props) def __contains__(self, key): return key in self.props def __iter__(self): return iter(self.props) @staticmethod def _param_def_from_prop(schema): """Return a template parameter definition corresponding to property.""" param_type_map = { schema.INTEGER: parameters.Schema.NUMBER, schema.STRING: parameters.Schema.STRING, schema.NUMBER: parameters.Schema.NUMBER, schema.BOOLEAN: parameters.Schema.BOOLEAN, schema.MAP: parameters.Schema.MAP, schema.LIST: parameters.Schema.LIST, } def param_items(): yield parameters.TYPE, param_type_map[schema.type] if schema.description is not None: yield parameters.DESCRIPTION, schema.description if schema.default is not None: yield parameters.DEFAULT, schema.default for constraint in schema.constraints: if isinstance(constraint, constr.Length): if constraint.min is not None: yield parameters.MIN_LENGTH, constraint.min if constraint.max is not None: yield parameters.MAX_LENGTH, constraint.max elif isinstance(constraint, constr.Range): if constraint.min is not None: yield parameters.MIN_VALUE, constraint.min if constraint.max is not None: yield parameters.MAX_VALUE, constraint.max elif isinstance(constraint, constr.AllowedValues): yield parameters.ALLOWED_VALUES, list(constraint.allowed) elif isinstance(constraint, constr.AllowedPattern): yield parameters.ALLOWED_PATTERN, constraint.pattern if schema.type == schema.BOOLEAN: 
yield parameters.ALLOWED_VALUES, ['True', 'true', 'False', 'false'] return dict(param_items()) @staticmethod def _prop_def_from_prop(name, schema): """Return a provider template property definition for a property.""" if schema.type == Schema.LIST: return {'Fn::Split': [',', {'Ref': name}]} else: return {'Ref': name} @staticmethod def _hot_param_def_from_prop(schema): """Parameter definition corresponding to property for hot template.""" param_type_map = { schema.INTEGER: hot_param.HOTParamSchema.NUMBER, schema.STRING: hot_param.HOTParamSchema.STRING, schema.NUMBER: hot_param.HOTParamSchema.NUMBER, schema.BOOLEAN: hot_param.HOTParamSchema.BOOLEAN, schema.MAP: hot_param.HOTParamSchema.MAP, schema.LIST: hot_param.HOTParamSchema.LIST, } def param_items(): yield hot_param.HOTParamSchema.TYPE, param_type_map[schema.type] if schema.description is not None: yield hot_param.HOTParamSchema.DESCRIPTION, schema.description if schema.default is not None: yield hot_param.HOTParamSchema.DEFAULT, schema.default def constraint_items(constraint): def range_min_max(constraint): if constraint.min is not None: yield hot_param.MIN, constraint.min if constraint.max is not None: yield hot_param.MAX, constraint.max if isinstance(constraint, constr.Length): yield hot_param.LENGTH, dict(range_min_max(constraint)) elif isinstance(constraint, constr.Range): yield hot_param.RANGE, dict(range_min_max(constraint)) elif isinstance(constraint, constr.AllowedValues): yield hot_param.ALLOWED_VALUES, list(constraint.allowed) elif isinstance(constraint, constr.AllowedPattern): yield hot_param.ALLOWED_PATTERN, constraint.pattern if schema.constraints: yield (hot_param.HOTParamSchema.CONSTRAINTS, [dict(constraint_items(constraint)) for constraint in schema.constraints]) return dict(param_items()) @staticmethod def _hot_prop_def_from_prop(name, schema): """Return a provider template property definition for a property.""" return {'get_param': name} @classmethod def schema_to_parameters_and_properties(cls, schema, template_type='cfn'): """Convert a schema to template parameters and properties. This can be used to generate a provider template that matches the given properties schemata. :param schema: A resource type's properties_schema :returns: A tuple of params and properties dicts ex: input: {'foo': {'Type': 'List'}} output: {'foo': {'Type': 'CommaDelimitedList'}}, {'foo': {'Fn::Split': {'Ref': 'foo'}}} ex: input: {'foo': {'Type': 'String'}, 'bar': {'Type': 'Map'}} output: {'foo': {'Type': 'String'}, 'bar': {'Type': 'Json'}}, {'foo': {'Ref': 'foo'}, 'bar': {'Ref': 'bar'}} """ def param_prop_def_items(name, schema, template_type): if template_type == 'hot': param_def = cls._hot_param_def_from_prop(schema) prop_def = cls._hot_prop_def_from_prop(name, schema) else: param_def = cls._param_def_from_prop(schema) prop_def = cls._prop_def_from_prop(name, schema) return (name, param_def), (name, prop_def) if not schema: return {}, {} param_prop_defs = [param_prop_def_items(n, s, template_type) for n, s in six.iteritems(schemata(schema)) if s.implemented] param_items, prop_items = zip(*param_prop_defs) return dict(param_items), dict(prop_items) heat-10.0.2/heat/engine/function.py0000666000175000017500000003215113343562351017133 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import collections import itertools import weakref import six from heat.common import exception from heat.common.i18n import _ @six.add_metaclass(abc.ABCMeta) class Function(object): """Abstract base class for template functions.""" def __init__(self, stack, fn_name, args): """Initialise with a Stack, the function name and the arguments. All functions take the form of a single-item map in JSON:: { : } """ super(Function, self).__init__() self._stackref = weakref.ref(stack) if stack is not None else None self.fn_name = fn_name self.args = args @property def stack(self): ref = self._stackref if ref is None: return None stack = ref() assert stack is not None, ("Need a reference to the " "StackDefinition object") return stack def validate(self): """Validate arguments without resolving the function. Function subclasses must override this method to validate their args. """ validate(self.args) @abc.abstractmethod def result(self): """Return the result of resolving the function. Function subclasses must override this method to calculate their results. """ return {self.fn_name: self.args} def dependencies(self, path): return dependencies(self.args, '.'.join([path, self.fn_name])) def dep_attrs(self, resource_name): """Return the attributes of the specified resource that are referenced. Return an iterator over any attributes of the specified resource that this function references. The special value heat.engine.attributes.ALL_ATTRIBUTES may be used to indicate that all attributes of the resource are required. """ return dep_attrs(self.args, resource_name) def all_dep_attrs(self): """Return resource, attribute name pairs of all attributes referenced. Return an iterator over the resource name, attribute name tuples of all attributes that this function references. The special value heat.engine.attributes.ALL_ATTRIBUTES may be used to indicate that all attributes of the resource are required. By default this calls the dep_attrs() method, but subclasses can override to provide a more efficient implementation. """ # If we are using the default dep_attrs method then it will only # return data from the args anyway if type(self).dep_attrs == Function.dep_attrs: return all_dep_attrs(self.args) def res_dep_attrs(resource_name): return six.moves.zip(itertools.repeat(resource_name), self.dep_attrs(resource_name)) resource_names = self.stack.enabled_rsrc_names() return itertools.chain.from_iterable(six.moves.map(res_dep_attrs, resource_names)) def __reduce__(self): """Return a representation of the function suitable for pickling. This allows the copy module (which works by pickling and then unpickling objects) to copy a template. Functions in the copy will return to their original (JSON) form (i.e. a single-element map). """ return dict, ([(self.fn_name, self.args)],) def _repr_result(self): try: return repr(self.result()) except (TypeError, ValueError): return '???' def __repr__(self): """Return a string representation of the function. The representation includes the function name, arguments and result (if available), as well as the name of the function class. 
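For example (an illustrative function and result)::

    <heat.engine.hot.functions.GetParam {get_param: ['flavor']} -> 'm1.small'>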
""" fntype = type(self) classname = '.'.join(filter(None, (getattr(fntype, attr, '') for attr in ('__module__', '__name__')))) return '<%s {%s: %r} -> %s>' % (classname, self.fn_name, self.args, self._repr_result()) def __eq__(self, other): """Compare the result of this function for equality.""" try: result = self.result() if isinstance(other, Function): return result == other.result() else: return result == other except (TypeError, ValueError): return NotImplemented def __ne__(self, other): """Compare the result of this function for inequality.""" eq = self.__eq__(other) if eq is NotImplemented: return NotImplemented return not eq __hash__ = None @six.add_metaclass(abc.ABCMeta) class Macro(Function): """Abstract base class for template macros. A macro differs from a function in that it controls how the template is parsed. As such, it operates on the syntax tree itself, not on the parsed output. """ def __init__(self, stack, fn_name, raw_args, parse_func, template): """Initialise with the argument syntax tree and parser function.""" super(Macro, self).__init__(stack, fn_name, raw_args) self._tmplref = weakref.ref(template) if template is not None else None self.parsed = self.parse_args(parse_func) @property def template(self): ref = self._tmplref if ref is None: return None tmpl = ref() assert tmpl is not None, "Need a reference to the Template object" return tmpl @abc.abstractmethod def parse_args(self, parse_func): """Parse the macro using the supplied parsing function. Macro subclasses should override this method to control parsing of the arguments. """ return parse_func(self.args) def validate(self): """Validate arguments without resolving the result.""" validate(self.parsed) def result(self): """Return the resolved result of the macro contents.""" return resolve(self.parsed) def dependencies(self, path): return dependencies(self.parsed, '.'.join([path, self.fn_name])) def dep_attrs(self, resource_name): """Return the attributes of the specified resource that are referenced. Return an iterator over any attributes of the specified resource that this function references. The special value heat.engine.attributes.ALL_ATTRIBUTES may be used to indicate that all attributes of the resource are required. """ return dep_attrs(self.parsed, resource_name) def all_dep_attrs(self): """Return resource, attribute name pairs of all attributes referenced. Return an iterator over the resource name, attribute name tuples of all attributes that this function references. The special value heat.engine.attributes.ALL_ATTRIBUTES may be used to indicate that all attributes of the resource are required. By default this calls the dep_attrs() method, but subclasses can override to provide a more efficient implementation. """ # If we are using the default dep_attrs method then it will only # return data from the transformed parsed args anyway if type(self).dep_attrs == Macro.dep_attrs: return all_dep_attrs(self.parsed) return super(Macro, self).all_dep_attrs() def __reduce__(self): """Return a representation of the macro result suitable for pickling. This allows the copy module (which works by pickling and then unpickling objects) to copy a template. Functions in the copy will return to their original (JSON) form (i.e. a single-element map). Unlike other functions, macros are *not* preserved during a copy. The the processed (but unparsed) output is returned in their place. 
""" if isinstance(self.parsed, Function): return self.parsed.__reduce__() if self.parsed is None: return lambda x: None, (None,) return type(self.parsed), (self.parsed,) def _repr_result(self): return repr(self.parsed) def resolve(snippet): if isinstance(snippet, Function): return snippet.result() if isinstance(snippet, collections.Mapping): return dict((k, resolve(v)) for k, v in snippet.items()) elif (not isinstance(snippet, six.string_types) and isinstance(snippet, collections.Iterable)): return [resolve(v) for v in snippet] return snippet def validate(snippet, path=None): if path is None: path = [] elif isinstance(path, six.string_types): path = [path] if isinstance(snippet, Function): try: snippet.validate() except AssertionError: raise except Exception as e: raise exception.StackValidationFailed( path=path + [snippet.fn_name], message=six.text_type(e)) elif isinstance(snippet, collections.Mapping): for k, v in six.iteritems(snippet): validate(v, path + [k]) elif (not isinstance(snippet, six.string_types) and isinstance(snippet, collections.Iterable)): basepath = list(path) parent = basepath.pop() if basepath else '' for i, v in enumerate(snippet): validate(v, basepath + ['%s[%d]' % (parent, i)]) def dependencies(snippet, path=''): """Return an iterator over Resource dependencies in a template snippet. The snippet should be already parsed to insert Function objects where appropriate. """ if isinstance(snippet, Function): return snippet.dependencies(path) elif isinstance(snippet, collections.Mapping): def mkpath(key): return '.'.join([path, six.text_type(key)]) deps = (dependencies(value, mkpath(key)) for key, value in snippet.items()) return itertools.chain.from_iterable(deps) elif (not isinstance(snippet, six.string_types) and isinstance(snippet, collections.Iterable)): def mkpath(idx): return ''.join([path, '[%d]' % idx]) deps = (dependencies(value, mkpath(i)) for i, value in enumerate(snippet)) return itertools.chain.from_iterable(deps) else: return [] def dep_attrs(snippet, resource_name): """Iterator over dependent attrs of a resource in a template snippet. The snippet should be already parsed to insert Function objects where appropriate. :returns: an iterator over the attributes of the specified resource that are referenced in the template snippet. """ if isinstance(snippet, Function): return snippet.dep_attrs(resource_name) elif isinstance(snippet, collections.Mapping): attrs = (dep_attrs(val, resource_name) for val in snippet.values()) return itertools.chain.from_iterable(attrs) elif (not isinstance(snippet, six.string_types) and isinstance(snippet, collections.Iterable)): attrs = (dep_attrs(value, resource_name) for value in snippet) return itertools.chain.from_iterable(attrs) return [] def all_dep_attrs(snippet): """Iterator over resource, attribute name pairs referenced in a snippet. The snippet should be already parsed to insert Function objects where appropriate. :returns: an iterator over the resource name, attribute name tuples of all attributes that are referenced in the template snippet. 
""" if isinstance(snippet, Function): return snippet.all_dep_attrs() elif isinstance(snippet, collections.Mapping): res_attrs = (all_dep_attrs(value) for value in snippet.values()) return itertools.chain.from_iterable(res_attrs) elif (not isinstance(snippet, six.string_types) and isinstance(snippet, collections.Iterable)): res_attrs = (all_dep_attrs(value) for value in snippet) return itertools.chain.from_iterable(res_attrs) return [] class Invalid(Function): """A function for checking condition functions and to force failures. This function is used to force failures for functions that are not supported in condition definition. """ def __init__(self, stack, fn_name, args): raise ValueError(_('The function "%s" ' 'is invalid in this context') % fn_name) def result(self): return super(Invalid, self).result() heat-10.0.2/heat/engine/service_software_config.py0000666000175000017500000004003013343562340022176 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import timeutils import requests import six from six.moves.urllib import parse as urlparse from heat.common import crypt from heat.common import exception from heat.common.i18n import _ from heat.db.sqlalchemy import api as db_api from heat.engine import api from heat.engine import resource from heat.engine import scheduler from heat.engine import software_config_io as swc_io from heat.objects import resource as resource_objects from heat.objects import software_config as software_config_object from heat.objects import software_deployment as software_deployment_object from heat.rpc import api as rpc_api LOG = logging.getLogger(__name__) class SoftwareConfigService(object): def show_software_config(self, cnxt, config_id): sc = software_config_object.SoftwareConfig.get_by_id(cnxt, config_id) return api.format_software_config(sc) def list_software_configs(self, cnxt, limit=None, marker=None): scs = software_config_object.SoftwareConfig.get_all( cnxt, limit=limit, marker=marker) result = [api.format_software_config(sc, detail=False, include_project=cnxt.is_admin) for sc in scs] return result def create_software_config(self, cnxt, group, name, config, inputs, outputs, options): swc_io.check_io_schema_list(inputs) in_conf = [swc_io.InputConfig(**i).as_dict() for i in inputs] swc_io.check_io_schema_list(outputs) out_conf = [swc_io.OutputConfig(**o).as_dict() for o in outputs] sc = software_config_object.SoftwareConfig.create(cnxt, { 'group': group, 'name': name, 'config': { rpc_api.SOFTWARE_CONFIG_INPUTS: in_conf, rpc_api.SOFTWARE_CONFIG_OUTPUTS: out_conf, rpc_api.SOFTWARE_CONFIG_OPTIONS: options, rpc_api.SOFTWARE_CONFIG_CONFIG: config }, 'tenant': cnxt.tenant_id}) return api.format_software_config(sc) def delete_software_config(self, cnxt, config_id): software_config_object.SoftwareConfig.delete(cnxt, config_id) def list_software_deployments(self, cnxt, server_id): all_sd = software_deployment_object.SoftwareDeployment.get_all( 
heat-10.0.2/heat/engine/service_software_config.py0000666000175000017500000004003013343562340022202 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import uuid

from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import timeutils
import requests
import six
from six.moves.urllib import parse as urlparse

from heat.common import crypt
from heat.common import exception
from heat.common.i18n import _
from heat.db.sqlalchemy import api as db_api
from heat.engine import api
from heat.engine import resource
from heat.engine import scheduler
from heat.engine import software_config_io as swc_io
from heat.objects import resource as resource_objects
from heat.objects import software_config as software_config_object
from heat.objects import software_deployment as software_deployment_object
from heat.rpc import api as rpc_api

LOG = logging.getLogger(__name__)


class SoftwareConfigService(object):

    def show_software_config(self, cnxt, config_id):
        sc = software_config_object.SoftwareConfig.get_by_id(cnxt, config_id)
        return api.format_software_config(sc)

    def list_software_configs(self, cnxt, limit=None, marker=None):
        scs = software_config_object.SoftwareConfig.get_all(
            cnxt,
            limit=limit,
            marker=marker)
        result = [api.format_software_config(sc, detail=False,
                                             include_project=cnxt.is_admin)
                  for sc in scs]
        return result

    def create_software_config(self, cnxt, group, name, config,
                               inputs, outputs, options):
        swc_io.check_io_schema_list(inputs)
        in_conf = [swc_io.InputConfig(**i).as_dict() for i in inputs]

        swc_io.check_io_schema_list(outputs)
        out_conf = [swc_io.OutputConfig(**o).as_dict() for o in outputs]

        sc = software_config_object.SoftwareConfig.create(cnxt, {
            'group': group,
            'name': name,
            'config': {
                rpc_api.SOFTWARE_CONFIG_INPUTS: in_conf,
                rpc_api.SOFTWARE_CONFIG_OUTPUTS: out_conf,
                rpc_api.SOFTWARE_CONFIG_OPTIONS: options,
                rpc_api.SOFTWARE_CONFIG_CONFIG: config
            },
            'tenant': cnxt.tenant_id})
        return api.format_software_config(sc)
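
    # Illustrative sketch (not part of the Heat source): a plausible
    # RPC-level call to create_software_config(), where "svc" is a
    # hypothetical SoftwareConfigService instance. The exact input/output
    # schema keys are defined by software_config_io; the values here are
    # illustrative assumptions only.
    #
    #     svc.create_software_config(
    #         cnxt,
    #         group='script',
    #         name='install_pkg',
    #         config='#!/bin/sh\napt-get install -y $package',
    #         inputs=[{'name': 'package', 'type': 'String'}],
    #         outputs=[{'name': 'result', 'error_output': False}],
    #         options={})
    #
    # The group names the agent hook that interprets the config on the
    # server; inputs/outputs are normalised via swc_io before storage.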
    def delete_software_config(self, cnxt, config_id):
        software_config_object.SoftwareConfig.delete(cnxt, config_id)

    def list_software_deployments(self, cnxt, server_id):
        all_sd = software_deployment_object.SoftwareDeployment.get_all(
            cnxt, server_id)
        result = [api.format_software_deployment(sd) for sd in all_sd]
        return result

    def metadata_software_deployments(self, cnxt, server_id):
        if not server_id:
            raise ValueError(_('server_id must be specified'))
        all_sd = software_deployment_object.SoftwareDeployment.get_all(
            cnxt, server_id)
        # filter out the sds with None config
        flt_sd = six.moves.filterfalse(lambda sd: sd.config is None,
                                       all_sd)
        # sort the configs by config name, to give the list of metadata a
        # deterministic and controllable order.
        flt_sd_s = sorted(flt_sd, key=lambda sd: sd.config.name)
        result = [api.format_software_config(sd.config) for sd in flt_sd_s]
        return result

    @resource_objects.retry_on_conflict
    def _push_metadata_software_deployments(
            self, cnxt, server_id, stack_user_project_id):
        rs = db_api.resource_get_by_physical_resource_id(cnxt, server_id)
        if not rs:
            return
        cnxt.session.refresh(rs)
        if rs.action == resource.Resource.DELETE:
            return
        deployments = self.metadata_software_deployments(cnxt, server_id)
        md = rs.rsrc_metadata or {}
        md['deployments'] = deployments

        metadata_put_url = None
        metadata_queue_id = None
        for rd in rs.data:
            if rd.key == 'metadata_put_url':
                metadata_put_url = rd.value
            if rd.key == 'metadata_queue_id':
                metadata_queue_id = rd.value

        action = _('deployments of server %s') % server_id
        atomic_key = rs.atomic_key
        rows_updated = db_api.resource_update(
            cnxt, rs.id, {'rsrc_metadata': md}, atomic_key)
        if not rows_updated:
            LOG.debug('Retrying server %s deployment metadata update',
                      server_id)
            raise exception.ConcurrentTransaction(action=action)
        LOG.debug('Updated server %s deployment metadata', server_id)

        if metadata_put_url:
            json_md = jsonutils.dumps(md)
            resp = requests.put(metadata_put_url, json_md)
            try:
                resp.raise_for_status()
            except requests.HTTPError as exc:
                LOG.error('Failed to deliver deployment data to '
                          'server %s: %s', server_id, exc)
        if metadata_queue_id:
            project = stack_user_project_id
            queue = self._get_zaqar_queue(cnxt, rs, project,
                                          metadata_queue_id)
            zaqar_plugin = cnxt.clients.client_plugin('zaqar')
            queue.post({'body': md, 'ttl': zaqar_plugin.DEFAULT_TTL})

        # Bump the atomic key again to serialise updates to the data sent to
        # the server via Swift.
        if metadata_put_url is not None:
            rows_updated = db_api.resource_update(cnxt, rs.id, {},
                                                  atomic_key + 1)
            if not rows_updated:
                LOG.debug('Concurrent update to server %s deployments data '
                          'detected - retrying.', server_id)
                raise exception.ConcurrentTransaction(action=action)
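
    # Illustrative sketch (not part of the Heat source): the updates above
    # are optimistic concurrency control. db_api.resource_update() performs
    # a compare-and-swap keyed on atomic_key, so the generic shape is:
    #
    #     rows = db_api.resource_update(cnxt, rs.id, values, rs.atomic_key)
    #     if not rows:
    #         # another engine won the race; raising ConcurrentTransaction
    #         # lets @retry_on_conflict re-run the method on fresh data
    #         raise exception.ConcurrentTransaction(action=action)
    #
    # The second, empty update at the end bumps atomic_key again so that
    # deliveries of metadata to Swift stay serialised between engines.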
    def _refresh_swift_software_deployment(self, cnxt, sd, deploy_signal_id):
        container, object_name = urlparse.urlparse(
            deploy_signal_id).path.split('/')[-2:]
        swift_plugin = cnxt.clients.client_plugin('swift')
        swift = swift_plugin.client()

        try:
            headers = swift.head_object(container, object_name)
        except Exception as ex:
            # ignore not-found, in case swift is not consistent yet
            if swift_plugin.is_not_found(ex):
                LOG.info('Signal object not found: %(c)s %(o)s', {
                    'c': container, 'o': object_name})
                return sd
            raise

        lm = headers.get('last-modified')
        last_modified = swift_plugin.parse_last_modified(lm)
        prev_last_modified = sd.updated_at
        if prev_last_modified:
            # assume stored as utc, convert to offset-naive datetime
            prev_last_modified = prev_last_modified.replace(tzinfo=None)

        if prev_last_modified and (last_modified <= prev_last_modified):
            return sd

        try:
            (headers, obj) = swift.get_object(container, object_name)
        except Exception as ex:
            # ignore not-found, in case swift is not consistent yet
            if swift_plugin.is_not_found(ex):
                LOG.info(
                    'Signal object not found: %(c)s %(o)s', {
                        'c': container, 'o': object_name})
                return sd
            raise
        if obj:
            self.signal_software_deployment(
                cnxt, sd.id, jsonutils.loads(obj),
                last_modified.isoformat())

        return software_deployment_object.SoftwareDeployment.get_by_id(
            cnxt, sd.id)

    def _get_zaqar_queue(self, cnxt, rs, project, queue_name):
        user = password = signed_url_data = None
        for rd in rs.data:
            if rd.key == 'password':
                password = crypt.decrypt(rd.decrypt_method, rd.value)
            if rd.key == 'user_id':
                user = rd.value
            if rd.key == 'zaqar_queue_signed_url_data':
                signed_url_data = jsonutils.loads(rd.value)

        zaqar_plugin = cnxt.clients.client_plugin('zaqar')
        if signed_url_data is None:
            keystone = cnxt.clients.client('keystone')
            token = keystone.stack_domain_user_token(
                user_id=user, project_id=project, password=password)
            zaqar = zaqar_plugin.create_for_tenant(project, token)
        else:
            signed_url_data.pop('project')
            zaqar = zaqar_plugin.create_from_signed_url(project,
                                                        **signed_url_data)
        return zaqar.queue(queue_name)

    def _refresh_zaqar_software_deployment(self, cnxt, sd, deploy_queue_id):
        rs = db_api.resource_get_by_physical_resource_id(cnxt, sd.id)
        project = sd.stack_user_project_id
        queue = self._get_zaqar_queue(cnxt, rs, project, deploy_queue_id)

        messages = list(queue.pop())
        if messages:
            self.signal_software_deployment(
                cnxt, sd.id, messages[0].body, None)

        return software_deployment_object.SoftwareDeployment.get_by_id(
            cnxt, sd.id)

    def check_software_deployment(self, cnxt, deployment_id, timeout):
        def _check():
            while True:
                sd = self._show_software_deployment(cnxt, deployment_id)
                if sd.status != rpc_api.SOFTWARE_DEPLOYMENT_IN_PROGRESS:
                    return
                yield
        scheduler.TaskRunner(_check)(timeout=timeout)
        sd = software_deployment_object.SoftwareDeployment.get_by_id(
            cnxt, deployment_id)
        return api.format_software_deployment(sd)

    def _show_software_deployment(self, cnxt, deployment_id):
        sd = software_deployment_object.SoftwareDeployment.get_by_id(
            cnxt, deployment_id)
        if sd.status == rpc_api.SOFTWARE_DEPLOYMENT_IN_PROGRESS:
            c = sd.config.config
            input_values = dict(swc_io.InputConfig(**i).input_data()
                                for i in c[rpc_api.SOFTWARE_CONFIG_INPUTS])
            transport = input_values.get('deploy_signal_transport')
            if transport == 'TEMP_URL_SIGNAL':
                sd = self._refresh_swift_software_deployment(
                    cnxt, sd, input_values.get('deploy_signal_id'))
            elif transport == 'ZAQAR_SIGNAL':
                sd = self._refresh_zaqar_software_deployment(
                    cnxt, sd, input_values.get('deploy_queue_id'))
        return sd

    def show_software_deployment(self, cnxt, deployment_id):
        sd = self._show_software_deployment(cnxt, deployment_id)
        return api.format_software_deployment(sd)

    def create_software_deployment(self, cnxt, server_id, config_id,
                                   input_values, action, status,
                                   status_reason, stack_user_project_id,
                                   deployment_id=None):
        if deployment_id is None:
            deployment_id = str(uuid.uuid4())
        sd = software_deployment_object.SoftwareDeployment.create(cnxt, {
            'id': deployment_id,
            'config_id': config_id,
            'server_id': server_id,
            'input_values': input_values,
            'tenant': cnxt.tenant_id,
            'stack_user_project_id': stack_user_project_id,
            'action': action,
            'status': status,
            'status_reason': six.text_type(status_reason)})
        self._push_metadata_software_deployments(
            cnxt, server_id, stack_user_project_id)
        return api.format_software_deployment(sd)

    def signal_software_deployment(self, cnxt, deployment_id, details,
                                   updated_at):
        if not deployment_id:
            raise ValueError(_('deployment_id must be specified'))

        sd = software_deployment_object.SoftwareDeployment.get_by_id(
            cnxt, deployment_id)
        status = sd.status

        if not status == rpc_api.SOFTWARE_DEPLOYMENT_IN_PROGRESS:
            # output values are only expected when in an IN_PROGRESS state
            return

        details = details or {}

        output_status_code = rpc_api.SOFTWARE_DEPLOYMENT_OUTPUT_STATUS_CODE
        ov = sd.output_values or {}
        status = None
        status_reasons = {}
        status_code = details.get(output_status_code)
        if status_code and str(status_code) != '0':
            status = rpc_api.SOFTWARE_DEPLOYMENT_FAILED
            status_reasons[output_status_code] = _(
                'Deployment exited with non-zero status code: %s'
            ) % details.get(output_status_code)
            event_reason = 'deployment %s failed (%s)' % (deployment_id,
                                                          status_code)
        else:
            event_reason = 'deployment %s succeeded' % deployment_id

        for output in sd.config.config['outputs'] or []:
            out_key = output['name']
            if out_key in details:
                ov[out_key] = details[out_key]
                if output.get('error_output', False):
                    status = rpc_api.SOFTWARE_DEPLOYMENT_FAILED
                    status_reasons[out_key] = details[out_key]
                    event_reason = 'deployment %s failed' % deployment_id

        for out_key in rpc_api.SOFTWARE_DEPLOYMENT_OUTPUTS:
            ov[out_key] = details.get(out_key)

        if status == rpc_api.SOFTWARE_DEPLOYMENT_FAILED:
            # build a status reason out of all of the values of outputs
            # flagged as error_output
            status_reasons = [' : '.join((k,
                                          six.text_type(status_reasons[k])))
                              for k in status_reasons]
            status_reason = ', '.join(status_reasons)
        else:
            status = rpc_api.SOFTWARE_DEPLOYMENT_COMPLETE
            status_reason = _('Outputs received')

        self.update_software_deployment(
            cnxt, deployment_id=deployment_id,
            output_values=ov, status=status, status_reason=status_reason,
            config_id=None, input_values=None, action=None,
            updated_at=updated_at)
        # Return a string describing the outcome of handling the signal data
        return event_reason
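
    # Illustrative sketch (not part of the Heat source): a plausible signal
    # body handled by signal_software_deployment() above, where "svc" is a
    # hypothetical SoftwareConfigService instance. Key names other than the
    # standard deploy_* outputs follow the config's declared outputs:
    #
    #     details = {
    #         'deploy_status_code': 0,    # non-zero would mark FAILED
    #         'deploy_stdout': '...',
    #         'deploy_stderr': '',
    #         'result': 'pkg installed',  # matches a declared output name
    #     }
    #     svc.signal_software_deployment(cnxt, deployment_id, details,
    #                                    updated_at=None)
    #
    # On success the deployment flips to COMPLETE with reason
    # 'Outputs received'; any output flagged error_output forces FAILED.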
    def update_software_deployment(self, cnxt, deployment_id, config_id,
                                   input_values, output_values, action,
                                   status, status_reason, updated_at):
        update_data = {}
        if config_id:
            update_data['config_id'] = config_id
        if input_values:
            update_data['input_values'] = input_values
        if output_values:
            update_data['output_values'] = output_values
        if action:
            update_data['action'] = action
        if status:
            update_data['status'] = status
        if status_reason:
            update_data['status_reason'] = six.text_type(status_reason)
        if updated_at:
            update_data['updated_at'] = timeutils.normalize_time(
                timeutils.parse_isotime(updated_at))
        else:
            update_data['updated_at'] = timeutils.utcnow()

        sd = software_deployment_object.SoftwareDeployment.update_by_id(
            cnxt, deployment_id, update_data)

        # only push metadata if this update resulted in the config_id
        # changing, since metadata is just a list of configs
        if config_id:
            self._push_metadata_software_deployments(
                cnxt, sd.server_id, sd.stack_user_project_id)

        return api.format_software_deployment(sd)

    def delete_software_deployment(self, cnxt, deployment_id):
        sd = software_deployment_object.SoftwareDeployment.get_by_id(
            cnxt, deployment_id)
        software_deployment_object.SoftwareDeployment.delete(
            cnxt, deployment_id)
        self._push_metadata_software_deployments(
            cnxt, sd.server_id, sd.stack_user_project_id)
heat-10.0.2/heat/engine/stack.py0000666000175000017500000026002313343562351016414 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import collections
import copy
import eventlet
import functools
import re
import warnings

from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import excutils
from oslo_utils import timeutils as oslo_timeutils
from oslo_utils import uuidutils
from osprofiler import profiler
import six

from heat.common import context as common_context
from heat.common import environment_format as env_fmt
from heat.common import exception
from heat.common.i18n import _
from heat.common import identifier
from heat.common import lifecycle_plugin_utils
from heat.engine import api
from heat.engine import dependencies
from heat.engine import environment
from heat.engine import event
from heat.engine.notification import stack as notification
from heat.engine import parameter_groups as param_groups
from heat.engine import parent_rsrc
from heat.engine import resource
from heat.engine import resources
from heat.engine import scheduler
from heat.engine import stk_defn
from heat.engine import sync_point
from heat.engine import template as tmpl
from heat.engine import update
from heat.objects import raw_template as raw_template_object
from heat.objects import resource as resource_objects
from heat.objects import snapshot as snapshot_object
from heat.objects import stack as stack_object
from heat.objects import stack_tag as stack_tag_object
from heat.objects import user_creds as ucreds_object
from heat.rpc import api as rpc_api
from heat.rpc import worker_client as rpc_worker_client

LOG = logging.getLogger(__name__)


ConvergenceNode = collections.namedtuple('ConvergenceNode',
                                         ['rsrc_id', 'is_update'])


class ForcedCancel(Exception):
    """Exception raised to cancel task execution."""

    def __init__(self, with_rollback=True):
        self.with_rollback = with_rollback

    def __str__(self):
        return "Operation cancelled"


def reset_state_on_error(func):
    @six.wraps(func)
    def handle_exceptions(stack, *args, **kwargs):
        errmsg = None
        try:
            return func(stack, *args, **kwargs)
        except Exception as exc:
            with excutils.save_and_reraise_exception():
                errmsg = six.text_type(exc)
                LOG.error('Unexpected exception in %(func)s: %(msg)s',
                          {'func': func.__name__, 'msg': errmsg})
        except BaseException as exc:
            with excutils.save_and_reraise_exception():
                exc_type = type(exc).__name__
                errmsg = '%s(%s)' % (exc_type, six.text_type(exc))
                LOG.info('Stopped due to %(msg)s in %(func)s',
                         {'func': func.__name__, 'msg': errmsg})
        finally:
            if stack.status == stack.IN_PROGRESS:
                rtnmsg = _("Unexpected exit while IN_PROGRESS.")
                stack.state_set(stack.action, stack.FAILED,
                                errmsg if errmsg is not None else rtnmsg)
                assert errmsg is not None, "Returned while IN_PROGRESS."

    return handle_exceptions
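
# Illustrative sketch (not part of the Heat source): reset_state_on_error
# wraps whole-stack operations so that any exception, or an early return
# that leaves the stack IN_PROGRESS, flips the stack to FAILED before the
# exception propagates. "create" below is a hypothetical operation:
#
#     @reset_state_on_error
#     def create(stack):
#         raise RuntimeError('boom')
#
#     # -> logs the error, re-raises RuntimeError, and, if the stack is
#     #    still IN_PROGRESS, records state (action, FAILED, 'boom').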
""" def _validate_stack_name(name): try: if not re.match("[a-zA-Z][a-zA-Z0-9_.-]{0,254}$", name): message = _('Invalid stack name %s must contain ' 'only alphanumeric or \"_-.\" characters, ' 'must start with alpha and must be 255 ' 'characters or less.') % name raise exception.StackValidationFailed(message=message) except TypeError: message = _('Invalid stack name %s, must be a string') % name raise exception.StackValidationFailed(message=message) if owner_id is None: _validate_stack_name(stack_name) self.id = stack_id self.owner_id = owner_id self.context = context self.name = stack_name self.action = (self.ADOPT if adopt_stack_data else self.CREATE if action is None else action) self.status = self.IN_PROGRESS if status is None else status self.status_reason = status_reason self.timeout_mins = timeout_mins self.disable_rollback = disable_rollback self._outputs = None self._resources = None self._dependencies = None self._implicit_deps_loaded = False self._access_allowed_handlers = {} self._db_resources = None self._tags = tags self.adopt_stack_data = adopt_stack_data self.stack_user_project_id = stack_user_project_id self.created_time = created_time self.updated_time = updated_time self.deleted_time = deleted_time self.user_creds_id = user_creds_id self.nested_depth = nested_depth self.convergence = convergence self.current_traversal = current_traversal self.tags = tags self.prev_raw_template_id = prev_raw_template_id self.current_deps = current_deps self._worker_client = None self._convg_deps = None self.thread_group_mgr = None self.converge = converge # strict_validate can be used to disable value validation # in the resource properties schema, this is useful when # performing validation when properties reference attributes # for not-yet-created resources (which return None) self.strict_validate = strict_validate self.in_convergence_check = cache_data is not None if use_stored_context: self.context = self.stored_context() self.clients = self.context.clients # This will use the provided tenant ID when loading the stack # from the DB or get it from the context for new stacks. self.tenant_id = tenant_id or self.context.tenant_id self.username = username or self.context.username resources.initialise() parent_info = parent_rsrc.ParentResourceProxy(context, parent_resource, owner_id) if tmpl is not None: self.defn = stk_defn.StackDefinition(context, tmpl, self.identifier(), cache_data or {}, parent_info) else: self.defn = None @property def tags(self): if self._tags is None: tags = stack_tag_object.StackTagList.get( self.context, self.id) if tags: self._tags = [t.tag for t in tags] return self._tags @tags.setter def tags(self, value): self._tags = value @property def worker_client(self): """Return a client for making engine RPC calls.""" if not self._worker_client: self._worker_client = rpc_worker_client.WorkerClient() return self._worker_client @property def t(self): """The stack template.""" if self.defn is None: return None return self.defn.t @t.setter def t(self, tmpl): """Set the stack template.""" self.defn = self.defn.clone_with_new_template(tmpl, self.identifier()) @property def parameters(self): return self.defn.parameters @property def env(self): """The stack environment""" return self.defn.env @property def parent_resource_name(self): parent_info = self.defn.parent_resource return parent_info and parent_info.name @property def parent_resource(self): """Dynamically load up the parent_resource. 
    @property
    def tags(self):
        if self._tags is None:
            tags = stack_tag_object.StackTagList.get(
                self.context, self.id)
            if tags:
                self._tags = [t.tag for t in tags]
        return self._tags

    @tags.setter
    def tags(self, value):
        self._tags = value

    @property
    def worker_client(self):
        """Return a client for making engine RPC calls."""
        if not self._worker_client:
            self._worker_client = rpc_worker_client.WorkerClient()
        return self._worker_client

    @property
    def t(self):
        """The stack template."""
        if self.defn is None:
            return None
        return self.defn.t

    @t.setter
    def t(self, tmpl):
        """Set the stack template."""
        self.defn = self.defn.clone_with_new_template(tmpl,
                                                      self.identifier())

    @property
    def parameters(self):
        return self.defn.parameters

    @property
    def env(self):
        """The stack environment"""
        return self.defn.env

    @property
    def parent_resource_name(self):
        parent_info = self.defn.parent_resource
        return parent_info and parent_info.name

    @property
    def parent_resource(self):
        """Dynamically load up the parent_resource.

        Note: this should only be used by "Fn::ResourceFacade"
        """
        return self.defn.parent_resource

    def set_parent_stack(self, parent_stack):
        parent_info = self.defn.parent_resource
        if parent_info is not None:
            parent_rsrc.use_parent_stack(parent_info, parent_stack)

    def stored_context(self):
        if self.user_creds_id:
            creds_obj = ucreds_object.UserCreds.get_by_id(
                self.context, self.user_creds_id)
            # Maintain request_id from self.context so we retain traceability
            # in situations where servicing a request requires switching from
            # the request context to the stored context
            creds = creds_obj.obj_to_primitive()["versioned_object.data"]
            creds['request_id'] = self.context.request_id
            # We don't store roles in the user_creds table, so disable the
            # policy check for admin by setting is_admin=False.
            creds['is_admin'] = False
            creds['overwrite'] = False
            return common_context.StoredContext.from_dict(creds)
        else:
            msg = _("Attempt to use stored_context with no user_creds")
            raise exception.Error(msg)

    @property
    def outputs(self):
        return {n: self.defn.output_definition(n)
                for n in self.defn.enabled_output_names()}

    def _resources_for_defn(self, stack_defn):
        return {
            name: resource.Resource(name,
                                    stack_defn.resource_definition(name),
                                    self)
            for name in stack_defn.enabled_rsrc_names()
        }

    @property
    def resources(self):
        if self._resources is None:
            self._resources = self._resources_for_defn(self.defn)
        return self._resources

    def _update_all_resource_data(self, for_resources, for_outputs):
        for rsrc in self._explicit_dependencies():
            node_data = rsrc.node_data(for_resources=for_resources,
                                       for_outputs=for_outputs)
            stk_defn.update_resource_data(self.defn, rsrc.name, node_data)

    def _find_filtered_resources(self, filters=None):
        if filters:
            assert not self.in_convergence_check, \
                "Resources should not be loaded from the DB"
            resources = resource_objects.Resource.get_all_by_stack(
                self.context, self.id, filters)
        else:
            resources = self._db_resources_get()
        stk_def_cache = {}
        for rsc in six.itervalues(resources):
            loaded_res = self._resource_from_db_resource(rsc, stk_def_cache)
            if loaded_res is not None:
                yield loaded_res

    def iter_resources(self, nested_depth=0, filters=None):
        """Iterates over all the resources in a stack.

        Iterating includes nested stacks up to `nested_depth` levels below.
        """
        for res in self._find_filtered_resources(filters):
            yield res

        resources = self._find_filtered_resources()
        for res in resources:
            if not res.has_nested() or nested_depth == 0:
                continue

            nested_stack = res.nested()
            if nested_stack is None:
                continue
            for nested_res in nested_stack.iter_resources(nested_depth - 1,
                                                          filters):
                yield nested_res

    def db_active_resources_get(self):
        resources = resource_objects.Resource.get_all_active_by_stack(
            self.context, self.id)
        return resources or None

    def db_resource_get(self, name):
        if self.id is None:
            return None
        return self._db_resources_get().get(name)

    def _db_resources_get(self):
        if self._db_resources is None:
            assert not self.in_convergence_check, \
                "Resources should not be loaded from the DB"
            _db_resources = resource_objects.Resource.get_all_by_stack(
                self.context, self.id)
            if not _db_resources:
                return {}
            self._db_resources = _db_resources
        return self._db_resources

    def _resource_from_db_resource(self, db_res, stk_def_cache=None):
        tid = db_res.current_template_id
        if tid is None:
            tid = self.t.id
        if tid == self.t.id:
            cur_res = self.resources.get(db_res.name)
            if cur_res is not None and (cur_res.id == db_res.id):
                return cur_res

            stk_def = self.defn
        elif stk_def_cache and tid in stk_def_cache:
            stk_def = stk_def_cache[tid]
        else:
            try:
                t = tmpl.Template.load(self.context, tid)
            except exception.NotFound:
                return None
            stk_def = self.defn.clone_with_new_template(t,
                                                        self.identifier())
            if stk_def_cache is not None:
                stk_def_cache[tid] = stk_def

        try:
            defn = stk_def.resource_definition(db_res.name)
        except KeyError:
            return None
        res = resource.Resource(db_res.name, defn, self)
        res._load_data(db_res)
        return res

    def resource_get(self, name):
        """Return a stack resource, even if not in the current template."""
        res = self.resources.get(name)
        if res:
            return res

        # fall back to getting the resource from the database
        db_res = self.db_resource_get(name)
        if db_res:
            return self._resource_from_db_resource(db_res)
        return None

    @property
    def dependencies(self):
        if not self._implicit_deps_loaded:
            self._explicit_dependencies()
            self._add_implicit_dependencies(
                self._dependencies, ignore_errors=self.id is not None)
            self._implicit_deps_loaded = True
        return self._dependencies

    def reset_dependencies(self):
        self._implicit_deps_loaded = False
        self._dependencies = None

    def root_stack_id(self):
        if not self.owner_id:
            return self.id
        return stack_object.Stack.get_root_id(self.context, self.owner_id)

    def object_path_in_stack(self):
        """Return stack resources and stacks in path from the root stack.

        If this is not nested return (None, self), else return stack
        resources and stacks in path from the root stack and including this
        stack.

        Note that this is horribly inefficient, as it requires us to load
        every stack in the chain back to the root in memory at the same time.

        :returns: a list of (stack_resource, stack) tuples.
        """
        if self.parent_resource:
            parent_stack = self.parent_resource._stack()
            if parent_stack is not None:
                path = parent_stack.object_path_in_stack()
                path.extend([(self.parent_resource, self)])
                return path
        return [(None, self)]

    def path_in_stack(self):
        """Return tuples of names in path from the root stack.

        If this is not nested return (None, self.name), else return tuples of
        names (stack_resource.name, stack.name) in path from the root stack
        and including this stack.

        :returns: a list of (string, string) tuples.
        """
        opis = self.object_path_in_stack()
        return [(stckres.name if stckres else None,
                 stck.name if stck else None) for stckres, stck in opis]

    def total_resources(self, stack_id=None):
        """Return the total number of resources in a stack.

        Includes nested stacks below.
        """
        if not stack_id:
            if self.id is None:
                # We're not stored yet, so we don't have anything to count
                return 0
            stack_id = self.id
        return stack_object.Stack.count_total_resources(self.context,
                                                        stack_id)

    def _set_param_stackid(self):
        """Update self.parameters with the current ARN.

        The ARN is then provided via the Parameters class as the StackId
        pseudo parameter.
        """
        if not self.parameters.set_stack_id(self.identifier()):
            LOG.warning("Unable to set parameters StackId identifier")

    def _explicit_dependencies(self):
        """Return dependencies without making any resource plugin calls.

        This includes at least all of the dependencies that are explicitly
        expressed in the template (via depends_on or an intrinsic function).
        It may include implicit dependencies defined by resource plugins, but
        only if they have already been calculated.
        """
        if self._dependencies is None:
            deps = dependencies.Dependencies()
            for res in six.itervalues(self.resources):
                res.add_explicit_dependencies(deps)
            self._dependencies = deps
        return self._dependencies

    def _add_implicit_dependencies(self, deps, ignore_errors=True):
        """Augment the given dependencies with implicit ones from plugins."""
        for res in six.itervalues(self.resources):
            try:
                res.add_dependencies(deps)
            except Exception as exc:
                if not ignore_errors:
                    raise
                else:
                    LOG.warning('Ignoring error adding implicit '
                                'dependencies for %(res)s: %(err)s',
                                {'res': six.text_type(res),
                                 'err': six.text_type(exc)})

    @classmethod
    def load(cls, context, stack_id=None, stack=None, show_deleted=True,
             use_stored_context=False, force_reload=False, cache_data=None,
             load_template=True):
        """Retrieve a Stack from the database."""
        if stack is None:
            stack = stack_object.Stack.get_by_id(
                context,
                stack_id,
                show_deleted=show_deleted)
        if stack is None:
            message = _('No stack exists with id "%s"') % str(stack_id)
            raise exception.NotFound(message)

        if force_reload:
            stack.refresh()

        return cls._from_db(context, stack,
                            use_stored_context=use_stored_context,
                            cache_data=cache_data,
                            load_template=load_template)

    @classmethod
    def load_all(cls, context, limit=None, marker=None, sort_keys=None,
                 sort_dir=None, filters=None,
                 show_deleted=False, show_nested=False, show_hidden=False,
                 tags=None, tags_any=None, not_tags=None,
                 not_tags_any=None):
        stacks = stack_object.Stack.get_all(
            context,
            limit=limit,
            sort_keys=sort_keys,
            marker=marker,
            sort_dir=sort_dir,
            filters=filters,
            show_deleted=show_deleted,
            show_nested=show_nested,
            show_hidden=show_hidden,
            tags=tags,
            tags_any=tags_any,
            not_tags=not_tags,
            not_tags_any=not_tags_any,
            eager_load=True)
        for stack in stacks:
            try:
                yield cls._from_db(context, stack)
            except exception.NotFound:
                # We're in a different transaction than the get_all, so a
                # stack returned above can be deleted by the time we try to
                # load it.
                pass

    @classmethod
    def _from_db(cls, context, stack,
                 use_stored_context=False, cache_data=None,
                 load_template=True):
        if load_template:
            template = tmpl.Template.load(
                context, stack.raw_template_id, stack.raw_template)
        else:
            template = None
        return cls(context, stack.name, template,
                   stack_id=stack.id,
                   action=stack.action, status=stack.status,
                   status_reason=stack.status_reason,
                   timeout_mins=stack.timeout,
                   disable_rollback=stack.disable_rollback,
                   parent_resource=stack.parent_resource_name,
                   owner_id=stack.owner_id,
                   stack_user_project_id=stack.stack_user_project_id,
                   created_time=stack.created_at,
                   updated_time=stack.updated_at,
                   user_creds_id=stack.user_creds_id,
                   tenant_id=stack.tenant,
                   use_stored_context=use_stored_context,
                   username=stack.username,
                   convergence=stack.convergence,
                   current_traversal=stack.current_traversal,
                   prev_raw_template_id=stack.prev_raw_template_id,
                   current_deps=stack.current_deps,
                   cache_data=cache_data,
                   nested_depth=stack.nested_depth,
                   deleted_time=stack.deleted_at)

    def get_kwargs_for_cloning(self, keep_status=False, only_db=False,
                               keep_tags=False):
        """Get common kwargs for calling Stack() for cloning.

        The point of this method is to reduce the number of places that we
        need to update when a kwarg to Stack.__init__() is modified. It is
        otherwise easy to forget an option and cause some unexpected error if
        this option is lost.

        Note:

        - This doesn't return the args (name, template) but only the kwargs.
        - We often want to start 'fresh' so don't want to maintain the old
          status, action and status_reason.
        - We sometimes only want the DB attributes.
        """
        stack = {
            'owner_id': self.owner_id,
            'username': self.username,
            'disable_rollback': self.disable_rollback,
            'stack_user_project_id': self.stack_user_project_id,
            'user_creds_id': self.user_creds_id,
            'nested_depth': self.nested_depth,
            'convergence': self.convergence,
            'current_traversal': self.current_traversal,
            'prev_raw_template_id': self.prev_raw_template_id,
            'current_deps': self.current_deps
        }
        if keep_status:
            stack.update({
                'action': self.action,
                'status': self.status,
                'status_reason': six.text_type(self.status_reason)})

        if only_db:
            stack['parent_resource_name'] = self.parent_resource_name
            stack['tenant'] = self.tenant_id
            stack['timeout'] = self.timeout_mins
        else:
            stack['parent_resource'] = self.parent_resource_name
            stack['tenant_id'] = self.tenant_id
            stack['timeout_mins'] = self.timeout_mins
            stack['strict_validate'] = self.strict_validate

        if keep_tags:
            stack['tags'] = self.tags

        return stack

    @profiler.trace('Stack.store', hide_args=False)
    def store(self, backup=False, exp_trvsl=None,
              ignore_traversal_check=False):
        """Store the stack in the database and return its ID.

        If self.id is set, we update the existing stack.
        """
        s = self.get_kwargs_for_cloning(keep_status=True, only_db=True)
        s['name'] = self.name
        s['backup'] = backup
        s['updated_at'] = self.updated_time
        if self.t.id is None:
            stack_object.Stack.encrypt_hidden_parameters(self.t)
            s['raw_template_id'] = self.t.store(self.context)
        else:
            s['raw_template_id'] = self.t.id

        if self.id is not None:
            if exp_trvsl is None and not ignore_traversal_check:
                exp_trvsl = self.current_traversal

            if self.convergence:
                # do things differently for convergence
                updated = stack_object.Stack.select_and_update(
                    self.context, self.id, s, exp_trvsl=exp_trvsl)

                if not updated:
                    return None
            else:
                stack_object.Stack.update_by_id(self.context, self.id, s)

        else:
            if not self.user_creds_id:
                # Create a context containing a trust_id and trustor_user_id
                # if trusts are enabled
                if cfg.CONF.deferred_auth_method == 'trusts':
                    keystone = self.clients.client('keystone')
                    trust_ctx = keystone.create_trust_context()
                    new_creds = ucreds_object.UserCreds.create(trust_ctx)
                else:
                    new_creds = ucreds_object.UserCreds.create(self.context)
                s['user_creds_id'] = new_creds.id
                self.user_creds_id = new_creds.id

            if self.convergence:
                # create a traversal ID
                self.current_traversal = uuidutils.generate_uuid()
                s['current_traversal'] = self.current_traversal

            new_s = stack_object.Stack.create(self.context, s)
            self.id = new_s.id
            self.created_time = new_s.created_at

        if self.tags:
            stack_tag_object.StackTagList.set(self.context, self.id,
                                              self.tags)

        self._set_param_stackid()

        return self.id

    def _backup_name(self):
        return '%s*' % self.name

    def identifier(self):
        """Return an identifier for this stack."""
        return identifier.HeatIdentifier(self.tenant_id, self.name, self.id)

    def __iter__(self):
        """Return an iterator over the resource names."""
        return iter(self.resources)

    def __len__(self):
        """Return the number of resources."""
        return len(self.resources)

    def __getitem__(self, key):
        """Get the resource with the specified name."""
        return self.resources[key]

    def add_resource(self, resource):
        """Insert the given resource into the stack."""
        template = resource.stack.t
        resource.stack = self
        definition = resource.t.reparse(self.defn, template)
        resource.t = definition
        resource.reparse()
        self.resources[resource.name] = resource
        stk_defn.add_resource(self.defn, definition)
        if self.t.id is not None:
            self.t.store(self.context)
        resource.store()

    def remove_resource(self, resource_name):
        """Remove the resource with the specified name."""
        del self.resources[resource_name]
        stk_defn.remove_resource(self.defn, resource_name)
        if self.t.id is not None:
            self.t.store(self.context)

    def __contains__(self, key):
        """Determine whether the stack contains the specified resource."""
        if self._resources is not None:
            return key in self.resources
        else:
            return key in self.t[self.t.RESOURCES]

    def __eq__(self, other):
        """Compare two Stacks for equality.

        Stacks are considered equal only if they are identical.
        """
        return self is other

    def __ne__(self, other):
        return not self.__eq__(other)

    def __str__(self):
        """Return a human-readable string representation of the stack."""
        text = 'Stack "%s" [%s]' % (self.name, self.id)
        return six.text_type(text)
""" for r in six.itervalues(self): if r.state not in ((r.INIT, r.COMPLETE), (r.CREATE, r.IN_PROGRESS), (r.CREATE, r.COMPLETE), (r.RESUME, r.IN_PROGRESS), (r.RESUME, r.COMPLETE), (r.UPDATE, r.IN_PROGRESS), (r.UPDATE, r.COMPLETE), (r.CHECK, r.COMPLETE)): continue proxy = self.defn[r.name] if proxy._resource_data is None: matches = r.FnGetRefId() == refid or r.name == refid else: matches = proxy.FnGetRefId() == refid if matches: if self.in_convergence_check and r.id is not None: # We don't have resources loaded from the database at this # point, so load the data for just this one from the DB. db_res = resource_objects.Resource.get_obj(self.context, r.id) if db_res is not None: r._load_data(db_res) return r def register_access_allowed_handler(self, credential_id, handler): """Register an authorization handler function. Register a function which determines whether the credentials with a given ID can have access to a named resource. """ assert callable(handler), 'Handler is not callable' self._access_allowed_handlers[credential_id] = handler def access_allowed(self, credential_id, resource_name): """Is credential_id authorised to access resource by resource_name.""" if not self.resources: # this also triggers lazy-loading of resources # so is required for register_access_allowed_handler # to be called return False handler = self._access_allowed_handlers.get(credential_id) return handler and handler(resource_name) @profiler.trace('Stack.validate', hide_args=False) def validate(self, ignorable_errors=None, validate_res_tmpl_only=False): """Validates the stack.""" # TODO(sdake) Should return line number of invalid reference # validate overall template (top-level structure) self.t.validate() # Validate parameters self.parameters.validate(context=self.context, validate_value=self.strict_validate) # Validate Parameter Groups parameter_groups = param_groups.ParameterGroups(self.t) parameter_groups.validate() # Continue to call this function, since old third-party Template # plugins may depend on it being called to validate the resource # definitions before actually generating them. if (type(self.t).validate_resource_definitions != tmpl.Template.validate_resource_definitions): warnings.warn("The Template.validate_resource_definitions() " "method is deprecated and will no longer be called " "in future versions of Heat. 
Template subclasses " "should validate resource definitions in the " "resource_definitions() method.", DeprecationWarning) self.t.validate_resource_definitions(self) self.t.conditions(self).validate() # Load the resources definitions (success of which implies the # definitions are valid) resources = self.resources # Check duplicate names between parameters and resources dup_names = set(self.parameters) & set(resources) if dup_names: LOG.debug("Duplicate names %s" % dup_names) raise exception.StackValidationFailed( message=_("Duplicate names %s") % dup_names) self._update_all_resource_data(for_resources=True, for_outputs=True) if self.strict_validate: iter_rsc = self.dependencies else: iter_rsc = self._explicit_dependencies() unique_defns = set(res.t for res in six.itervalues(resources)) unique_defn_names = set(defn.name for defn in unique_defns) for res in iter_rsc: # Don't validate identical definitions multiple times if res.name not in unique_defn_names: continue result = None try: if not validate_res_tmpl_only: if res.external_id is not None: res.validate_external() continue result = res.validate() elif res.external_id is None: result = res.validate_template() except exception.HeatException as ex: LOG.debug('%s', ex) if ignorable_errors and ex.error_code in ignorable_errors: result = None else: raise except AssertionError: raise except Exception as ex: LOG.info("Exception in stack validation", exc_info=True) raise exception.StackValidationFailed(error=ex, resource=res) if result: raise exception.StackValidationFailed(message=result) eventlet.sleep(0) for op_name, output in six.iteritems(self.outputs): try: output.validate() except exception.StackValidationFailed as ex: path = [self.t.OUTPUTS, op_name, self.t.get_section_name(ex.path[0])] path.extend(ex.path[1:]) raise exception.StackValidationFailed( error=ex.error, path=path, message=ex.error_message) def requires_deferred_auth(self): """Determine whether to perform API requests with deferred auth. Returns whether this stack may need to perform API requests during its lifecycle using the configured deferred authentication method. 
""" return any(res.requires_deferred_auth for res in six.itervalues(self)) def _add_event(self, action, status, reason): """Add a state change event to the database.""" ev = event.Event(self.context, self, action, status, reason, self.id, None, None, self.name, 'OS::Heat::Stack') ev.store() self.dispatch_event(ev) def dispatch_event(self, ev): def _dispatch(ctx, sinks, ev): try: for sink in sinks: sink.consume(ctx, ev) except Exception as e: LOG.debug('Got error sending events %s', e) if self.thread_group_mgr is not None: self.thread_group_mgr.start(self.id, _dispatch, self.context, self.env.get_event_sinks(), ev.as_dict()) @profiler.trace('Stack.state_set', hide_args=False) def state_set(self, action, status, reason): """Update the stack state.""" if action not in self.ACTIONS: raise ValueError(_("Invalid action %s") % action) if status not in self.STATUSES: raise ValueError(_("Invalid status %s") % status) self.action = action self.status = status self.status_reason = reason self._log_status() if self.convergence and action in ( self.UPDATE, self.DELETE, self.CREATE, self.ADOPT, self.ROLLBACK, self.RESTORE): # if convergence and stack operation is create/update/rollback/ # delete, stack lock is not used, hence persist state updated = self._persist_state() if not updated: LOG.info("Stack %(name)s traversal %(trvsl_id)s no longer " "active; not setting state to %(action)s_%(status)s", {'name': self.name, 'trvsl_id': self.current_traversal, 'action': action, 'status': status}) return updated # Persist state to db only if status == IN_PROGRESS # or action == UPDATE/DELETE/ROLLBACK. Else, it would # be done before releasing the stack lock. if status == self.IN_PROGRESS or action in ( self.UPDATE, self.DELETE, self.ROLLBACK, self.RESTORE): self._persist_state() def _log_status(self): LOG.info('Stack %(action)s %(status)s (%(name)s): %(reason)s', {'action': self.action, 'status': self.status, 'name': self.name, 'reason': self.status_reason}) def _persist_state(self): """Persist stack state to database""" if self.id is None: return stack = stack_object.Stack.get_by_id(self.context, self.id, eager_load=False) if stack is not None: values = {'action': self.action, 'status': self.status, 'status_reason': six.text_type(self.status_reason)} self._send_notification_and_add_event() if self.convergence: # do things differently for convergence updated = stack_object.Stack.select_and_update( self.context, self.id, values, exp_trvsl=self.current_traversal) return updated else: stack.update_and_save(values) def _send_notification_and_add_event(self): LOG.debug('Persisting stack %(name)s status %(action)s %(status)s', {'action': self.action, 'status': self.status, 'name': self.name}) notification.send(self) self._add_event(self.action, self.status, self.status_reason) def persist_state_and_release_lock(self, engine_id): """Persist stack state to database and release stack lock""" if self.id is None: return stack = stack_object.Stack.get_by_id(self.context, self.id, eager_load=False) if stack is not None: values = {'action': self.action, 'status': self.status, 'status_reason': six.text_type(self.status_reason)} self._send_notification_and_add_event() stack.persist_state_and_release_lock(self.context, self.id, engine_id, values) @property def state(self): """Returns state, tuple of action, status.""" return (self.action, self.status) def timeout_secs(self): """Return the stack action timeout in seconds.""" if self.timeout_mins is None: return cfg.CONF.stack_action_timeout return self.timeout_mins * 60 def 
    def preview_resources(self):
        """Preview the stack with all of the resources."""
        return [resource.preview()
                for resource in six.itervalues(self.resources)]

    def get_nested_parameters(self, filter_func):
        """Return nested parameters schema, if any.

        This introspects the resources to return the parameters of the nested
        stacks. It uses the `get_nested_parameters_stack` API to build the
        stack.
        """
        result = {}
        for name, rsrc in six.iteritems(self.resources):
            nested = rsrc.get_nested_parameters_stack()
            if nested is None:
                continue
            nested_params = nested.parameters.map(
                api.format_validate_parameter,
                filter_func=filter_func)
            params = {
                'Type': rsrc.type(),
                'Description': nested.t.get('Description', ''),
                'Parameters': nested_params
            }
            # Add parameter_groups if it is present in nested stack
            nested_pg = param_groups.ParameterGroups(nested.t)
            if nested_pg.parameter_groups:
                params.update({'ParameterGroups':
                               nested_pg.parameter_groups})
            params.update(nested.get_nested_parameters(filter_func))
            result[name] = params
        return {'NestedParameters': result} if result else {}

    def _store_resources(self):
        for r in reversed(self.dependencies):
            if r.action == r.INIT:
                r.store()

    @profiler.trace('Stack.create', hide_args=False)
    @reset_state_on_error
    def create(self, msg_queue=None):
        """Create the stack and all of the resources."""
        def rollback():
            if not self.disable_rollback and self.state == (self.CREATE,
                                                            self.FAILED):
                self.delete(action=self.ROLLBACK)

        self._store_resources()

        check_message = functools.partial(self._check_for_message, msg_queue)

        creator = scheduler.TaskRunner(
            self.stack_task, action=self.CREATE,
            reverse=False, post_func=rollback)
        creator(timeout=self.timeout_secs(),
                progress_callback=check_message)

    def _adopt_kwargs(self, resource):
        data = self.adopt_stack_data
        if not data or not data.get('resources'):
            return {'resource_data': None}

        return {'resource_data': data['resources'].get(resource.name)}

    @scheduler.wrappertask
    def stack_task(self, action, reverse=False, post_func=None,
                   aggregate_exceptions=False, pre_completion_func=None,
                   notify=None):
        """A task to perform an action on the stack.

        All of the resources are traversed in forward or reverse dependency
        order.

        :param action: action that should be executed with stack resources
        :param reverse: define if action on the resources need to be executed
                        in reverse dependency order (resources first, and
                        then their dependencies)
        :param post_func: function that needs to be executed after the action
                          completes on the stack
        :param aggregate_exceptions: define if exceptions should be aggregated
        :param pre_completion_func: function that needs to be executed right
                                    before action completion; takes stack,
                                    action, status and reason as input
                                    parameters
        """
        try:
            lifecycle_plugin_utils.do_pre_ops(self.context, self,
                                              None, action)
        except Exception as e:
            self.state_set(action, self.FAILED, e.args[0] if e.args else
                           'Failed stack pre-ops: %s' % six.text_type(e))
            if callable(post_func):
                post_func()
            # No need to call notify.signal(), because persistence of the
            # state is always deferred here.
            return

        self.state_set(action, self.IN_PROGRESS,
                       'Stack %s started' % action)
        if notify is not None:
            notify.signal()

        stack_status = self.COMPLETE
        reason = 'Stack %s completed successfully' % action

        action_method = action.lower()
        # If a local _$action_kwargs function exists, call it to get the
        # action specific argument list, otherwise an empty arg list
        handle_kwargs = getattr(self,
                                '_%s_kwargs' % action_method,
                                lambda x: {})

        @functools.wraps(getattr(resource.Resource, action_method))
        @scheduler.wrappertask
        def resource_action(r):
            # Find e.g resource.create and call it
            handle = getattr(r, action_method)

            yield handle(**handle_kwargs(r))

            if action == self.CREATE:
                stk_defn.update_resource_data(self.defn, r.name,
                                              r.node_data())

        def get_error_wait_time(resource):
            return resource.cancel_grace_period()

        action_task = scheduler.DependencyTaskGroup(
            self.dependencies,
            resource_action,
            reverse,
            error_wait_time=get_error_wait_time,
            aggregate_exceptions=aggregate_exceptions)

        try:
            yield action_task()
        except scheduler.Timeout:
            stack_status = self.FAILED
            reason = '%s timed out' % action.title()
        except Exception as ex:
            # We use a catch-all here to ensure any raised exceptions
            # make the stack fail. This is necessary for when
            # aggregate_exceptions is false, as in that case we don't get
            # ExceptionGroup, but the raw exception.
            # see scheduler.py line 395-399
            stack_status = self.FAILED
            reason = 'Resource %s failed: %s' % (action, six.text_type(ex))

        if pre_completion_func:
            pre_completion_func(self, action, stack_status, reason)

        self.state_set(action, stack_status, reason)

        if callable(post_func):
            post_func()
        lifecycle_plugin_utils.do_post_ops(self.context, self, None, action,
                                           (self.status == self.FAILED))

    @profiler.trace('Stack.check', hide_args=False)
    @reset_state_on_error
    def check(self, notify=None):
        self.updated_time = oslo_timeutils.utcnow()
        checker = scheduler.TaskRunner(
            self.stack_task, self.CHECK,
            post_func=self.supports_check_action,
            aggregate_exceptions=True,
            notify=notify)
        checker()

    def supports_check_action(self):
        def is_supported(res):
            if res.has_nested() and res.nested():
                return res.nested().supports_check_action()
            else:
                return hasattr(res, 'handle_%s' % res.CHECK.lower())

        all_supported = all(is_supported(res)
                            for res in six.itervalues(self.resources))

        if not all_supported:
            msg = ". '%s' not fully supported (see resources)" % self.CHECK
            reason = self.status_reason + msg
            self.state_set(self.CHECK, self.status, reason)

        return all_supported
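
    # Illustrative sketch (not part of the Heat source): stack_task()
    # dispatches by naming convention. For action 'CREATE' each resource's
    # create() method is called; for 'ADOPT' the lookup
    # getattr(self, '_adopt_kwargs') succeeds, so each resource is invoked
    # as resource.adopt(resource_data=...). A hypothetical new action FOO
    # would only need a Resource.foo() method and, optionally, a
    # Stack._foo_kwargs() hook to supply extra arguments.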
""" s = stack_object.Stack.get_by_name_and_owner_id( self.context, self._backup_name(), owner_id=self.id) if s is not None: LOG.debug('Loaded existing backup stack') return self.load(self.context, stack=s) elif create_if_missing: kwargs = self.get_kwargs_for_cloning(keep_tags=True) kwargs['owner_id'] = self.id del(kwargs['prev_raw_template_id']) prev = type(self)(self.context, self._backup_name(), copy.deepcopy(self.t), **kwargs) prev.store(backup=True) LOG.debug('Created new backup stack') return prev else: return None @profiler.trace('Stack.adopt', hide_args=False) @reset_state_on_error def adopt(self): """Adopt existing resources into a new stack.""" def rollback(): if not self.disable_rollback and self.state == (self.ADOPT, self.FAILED): # enter the same flow as abandon and just delete the stack for res in six.itervalues(self.resources): res.abandon_in_progress = True self.delete(action=self.ROLLBACK, abandon=True) creator = scheduler.TaskRunner( self.stack_task, action=self.ADOPT, reverse=False, post_func=rollback) creator(timeout=self.timeout_secs()) @profiler.trace('Stack.update', hide_args=False) @reset_state_on_error def update(self, newstack, msg_queue=None, notify=None): """Update the stack. Compare the current stack with newstack, and where necessary create/update/delete the resources until this stack aligns with newstack. Note update of existing stack resources depends on update being implemented in the underlying resource types Update will fail if it exceeds the specified timeout. The default is 60 minutes, set in the constructor """ self.updated_time = oslo_timeutils.utcnow() updater = scheduler.TaskRunner(self.update_task, newstack, msg_queue=msg_queue, notify=notify) updater() @profiler.trace('Stack.converge_stack', hide_args=False) def converge_stack(self, template, action=UPDATE, new_stack=None, pre_converge=None): """Update the stack template and trigger convergence for resources.""" if action not in [self.CREATE, self.ADOPT]: # no back-up template for create action self.prev_raw_template_id = getattr(self.t, 'id', None) # switch template and reset dependencies self.defn = self.defn.clone_with_new_template(template, self.identifier(), clear_resource_data=True) self.reset_dependencies() self._resources = None if action is not self.CREATE: self.updated_time = oslo_timeutils.utcnow() if new_stack is not None: self.disable_rollback = new_stack.disable_rollback self.timeout_mins = new_stack.timeout_mins self.converge = new_stack.converge self.defn = new_stack.defn self._set_param_stackid() self.tags = new_stack.tags if new_stack.tags: stack_tag_object.StackTagList.set(self.context, self.id, new_stack.tags) else: stack_tag_object.StackTagList.delete(self.context, self.id) self.action = action self.status = self.IN_PROGRESS self.status_reason = 'Stack %s started' % self.action # generate new traversal and store previous_traversal = self.current_traversal self.current_traversal = uuidutils.generate_uuid() # we expect to update the stack having previous traversal ID stack_id = self.store(exp_trvsl=previous_traversal) if stack_id is None: LOG.warning("Failed to store stack %(name)s with traversal " "ID %(trvsl_id)s, aborting stack %(action)s", {'name': self.name, 'trvsl_id': previous_traversal, 'action': self.action}) return self._send_notification_and_add_event() # delete the prev traversal sync_points if previous_traversal: sync_point.delete_all(self.context, self.id, previous_traversal) # TODO(later): lifecycle_plugin_utils.do_pre_ops self.thread_group_mgr.start(self.id, 
self._converge_create_or_update, pre_converge=pre_converge) def _converge_create_or_update(self, pre_converge=None): current_resources = self._update_or_store_resources() self._compute_convg_dependencies(self.ext_rsrcs_db, self.dependencies, current_resources) # Store list of edges self.current_deps = { 'edges': [[rqr, rqd] for rqr, rqd in self.convergence_dependencies.graph().edges()]} stack_id = self.store() if stack_id is None: # Failed concurrent update LOG.warning("Failed to store stack %(name)s with traversal " "ID %(trvsl_id)s, aborting stack %(action)s", {'name': self.name, 'trvsl_id': self.current_traversal, 'action': self.action}) return if callable(pre_converge): pre_converge() if self.action == self.DELETE: try: self.delete_all_snapshots() except Exception as exc: self.state_set(self.action, self.FAILED, six.text_type(exc)) self.purge_db() return LOG.info('convergence_dependencies: %s', self.convergence_dependencies) # create sync_points for resources in DB for rsrc_id, is_update in self.convergence_dependencies: sync_point.create(self.context, rsrc_id, self.current_traversal, is_update, self.id) # create sync_point entry for stack sync_point.create( self.context, self.id, self.current_traversal, True, self.id) leaves = set(self.convergence_dependencies.leaves()) if not leaves: self.mark_complete() else: for rsrc_id, is_update in sorted(leaves, key=lambda n: n.is_update): if is_update: LOG.info("Triggering resource %s for update", rsrc_id) else: LOG.info("Triggering resource %s for cleanup", rsrc_id) input_data = sync_point.serialize_input_data({}) self.worker_client.check_resource(self.context, rsrc_id, self.current_traversal, input_data, is_update, self.adopt_stack_data, self.converge) if scheduler.ENABLE_SLEEP: eventlet.sleep(1) def rollback(self): old_tmpl_id = self.prev_raw_template_id if old_tmpl_id is None: rollback_tmpl = tmpl.Template.create_empty_template( version=self.t.version) else: rollback_tmpl = tmpl.Template.load(self.context, old_tmpl_id) self.prev_raw_template_id = None stack_id = self.store() if stack_id is None: # Failed concurrent update LOG.warning("Failed to store stack %(name)s with traversal" " ID %(trvsl_id)s, not triggering rollback.", {'name': self.name, 'trvsl_id': self.current_traversal}) return self.converge_stack(rollback_tmpl, action=self.ROLLBACK) def _get_best_existing_rsrc_db(self, rsrc_name): candidate = None if self.ext_rsrcs_db: for ext_rsrc in self.ext_rsrcs_db.values(): if ext_rsrc.name != rsrc_name: continue if ext_rsrc.current_template_id == self.t.id: # Rollback where the previous resource still exists candidate = ext_rsrc break elif (ext_rsrc.current_template_id == self.prev_raw_template_id): # Current resource is otherwise a good candidate candidate = ext_rsrc elif candidate is None: # In multiple concurrent updates, if candidate is not # found in current/previous template, it could be found # in old tmpl. 
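
    # Illustrative sketch (not part of the Heat source): the convergence
    # graph serialised into current_deps above is a plain list of edges over
    # (resource id, is_update) nodes, e.g.:
    #
    #     {'edges': [[[1, True], [2, True]],   # pairs of [node, node]
    #                [[2, False], None]]}      # None marks a lone node
    #
    # convergence_dependencies (below) rebuilds a Dependencies graph from
    # exactly this structure via ConvergenceNode(*i).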
candidate = ext_rsrc return candidate def _update_or_store_resources(self): self.ext_rsrcs_db = self.db_active_resources_get() curr_name_translated_dep = self.dependencies.translate(lambda res: res.name) rsrcs = {} def update_needed_by(res): new_requirers = set( rsrcs[rsrc_name].id for rsrc_name in curr_name_translated_dep.required_by(res.name) ) old_requirers = set(res.needed_by) if res.needed_by else set() needed_by = old_requirers | new_requirers res.needed_by = list(needed_by) for rsrc in reversed(self.dependencies): existing_rsrc_db = self._get_best_existing_rsrc_db(rsrc.name) if existing_rsrc_db is None: update_needed_by(rsrc) rsrc.current_template_id = self.t.id rsrc.store() rsrcs[rsrc.name] = rsrc else: update_needed_by(existing_rsrc_db) resource.Resource.set_needed_by( existing_rsrc_db, existing_rsrc_db.needed_by ) rsrcs[existing_rsrc_db.name] = existing_rsrc_db return rsrcs def set_resource_deps(self): curr_name_translated_dep = self.dependencies.translate(lambda res: res.id) ext_rsrcs_db = self.db_active_resources_get() for r in self.dependencies: r.needed_by = list(curr_name_translated_dep.required_by(r.id)) r.requires = list(curr_name_translated_dep.requires(r.id)) resource.Resource.set_needed_by(ext_rsrcs_db[r.id], r.needed_by) resource.Resource.set_requires(ext_rsrcs_db[r.id], r.requires) def _compute_convg_dependencies(self, existing_resources, current_template_deps, current_resources): def make_graph_key(rsrc): return ConvergenceNode(current_resources[rsrc.name].id, True) dep = current_template_deps.translate(make_graph_key) if existing_resources: for rsrc_id, rsrc in existing_resources.items(): dep += ConvergenceNode(rsrc_id, False), None for requirement in rsrc.requires: if requirement in existing_resources: dep += (ConvergenceNode(requirement, False), ConvergenceNode(rsrc_id, False)) if rsrc.replaces in existing_resources: dep += (ConvergenceNode(rsrc.replaces, False), ConvergenceNode(rsrc_id, False)) if ConvergenceNode(rsrc.id, True) in dep: dep += (ConvergenceNode(rsrc_id, False), ConvergenceNode(rsrc_id, True)) self._convg_deps = dep @property def convergence_dependencies(self): if self._convg_deps is None: current_deps = ((ConvergenceNode(*i), ConvergenceNode(*j) if j is not None else None) for i, j in self.current_deps['edges']) self._convg_deps = dependencies.Dependencies(edges=current_deps) return self._convg_deps def reset_stack_and_resources_in_progress(self, reason): for name, rsrc in six.iteritems(self.resources): if rsrc.status == rsrc.IN_PROGRESS: rsrc.state_set(rsrc.action, rsrc.FAILED, six.text_type(reason)) self.state_set(self.action, self.FAILED, six.text_type(reason)) @scheduler.wrappertask def update_task(self, newstack, action=UPDATE, msg_queue=None, notify=None): if action not in (self.UPDATE, self.ROLLBACK, self.RESTORE): LOG.error("Unexpected action %s passed to update!", action) self.state_set(self.UPDATE, self.FAILED, "Invalid action %s" % action) if notify is not None: notify.signal() return try: lifecycle_plugin_utils.do_pre_ops(self.context, self, newstack, action) except Exception as e: self.state_set(action, self.FAILED, e.args[0] if e.args else 'Failed stack pre-ops: %s' % six.text_type(e)) if notify is not None: notify.signal() return if self.status == self.IN_PROGRESS: if action == self.ROLLBACK: LOG.debug("Starting update rollback for %s", self.name) else: reason = _('Attempted to %s an IN_PROGRESS ' 'stack') % action self.reset_stack_and_resources_in_progress(reason) if notify is not None: notify.signal() return # Save a copy of the 
new template. To avoid two DB writes # we store the ID at the same time as the action/status prev_tmpl_id = self.prev_raw_template_id # newstack.t may have been pre-stored, so save with that one bu_tmpl, newstack.t = newstack.t, copy.deepcopy(newstack.t) self.prev_raw_template_id = bu_tmpl.store(self.context) self.action = action self.status = self.IN_PROGRESS self.status_reason = 'Stack %s started' % action self._send_notification_and_add_event() self.store() # Notify the caller that the state is stored if notify is not None: notify.signal() if prev_tmpl_id is not None: raw_template_object.RawTemplate.delete(self.context, prev_tmpl_id) if action == self.UPDATE: # Oldstack is useless when the action is not UPDATE, so we don't # need to build it; this avoids some unexpected errors. kwargs = self.get_kwargs_for_cloning(keep_tags=True) self._ensure_encrypted_param_names_valid() oldstack = Stack(self.context, self.name, copy.deepcopy(self.t), **kwargs) backup_stack = self._backup_stack() existing_params = environment.Environment({env_fmt.PARAMETERS: self.t.env.params}) previous_template_id = None should_rollback = False update_task = update.StackUpdate( self, newstack, backup_stack, rollback=action == self.ROLLBACK) try: updater = scheduler.TaskRunner(update_task) self.defn.parameters = newstack.defn.parameters self.defn.t.files = newstack.defn.t.files self.defn.t.env = newstack.defn.t.env self.disable_rollback = newstack.disable_rollback self.timeout_mins = newstack.timeout_mins self._set_param_stackid() self.tags = newstack.tags if newstack.tags: stack_tag_object.StackTagList.set(self.context, self.id, newstack.tags) else: stack_tag_object.StackTagList.delete(self.context, self.id) check_message = functools.partial(self._check_for_message, msg_queue) try: yield updater.as_task(timeout=self.timeout_secs(), progress_callback=check_message) finally: self.reset_dependencies() if action in (self.UPDATE, self.RESTORE, self.ROLLBACK): self.status_reason = 'Stack %s completed successfully' % action self.status = self.COMPLETE except scheduler.Timeout: self.status = self.FAILED self.status_reason = 'Timed out' except Exception as e: # If rollback is enabled when a resource failure occurs, # we do another update with the existing template, # so we roll back to the original state should_rollback = self._update_exception_handler(e, action) if should_rollback: yield self.update_task(oldstack, action=self.ROLLBACK) except BaseException as e: with excutils.save_and_reraise_exception(): self._update_exception_handler(e, action) else: LOG.debug('Deleting backup stack') backup_stack.delete(backup=True) # flip the template to the newstack values previous_template_id = self.t.id self.t = newstack.t self._outputs = None finally: if should_rollback: # Already handled in the rollback task return # Don't use state_set here, so that we make only one update query # and avoid a race condition with the COMPLETE status self.action = action self._log_status() self._send_notification_and_add_event() if self.status == self.FAILED: # Since the template was incrementally updated based on existing # and new stack resources, we should have user params of both.
existing_params.load(newstack.t.env.user_env_as_dict()) self.t.env = existing_params # Update the template version, in case new things were used self.t.t[newstack.t.version[0]] = max( newstack.t.version[1], self.t.version[1]) self.t.merge_snippets(newstack.t) self.t.store(self.context) backup_stack.t.env = existing_params backup_stack.t.t[newstack.t.version[0]] = max( newstack.t.version[1], self.t.version[1]) backup_stack.t.merge_snippets(newstack.t) backup_stack.t.store(self.context) self.store() if previous_template_id is not None: raw_template_object.RawTemplate.delete(self.context, previous_template_id) lifecycle_plugin_utils.do_post_ops(self.context, self, newstack, action, (self.status == self.FAILED)) def _update_exception_handler(self, exc, action): """Handle exceptions in update_task. Decide whether we should cancel tasks, and whether we should roll back, depending on the disable_rollback flag when a forced rollback has not been requested. :returns: True if a rollback is required, False otherwise. """ self.status_reason = six.text_type(exc) self.status = self.FAILED if action != self.UPDATE: return False if isinstance(exc, ForcedCancel): return exc.with_rollback or not self.disable_rollback elif isinstance(exc, exception.ResourceFailure): return not self.disable_rollback else: return False def _ensure_encrypted_param_names_valid(self): # If encryption was enabled when the stack was created but # then disabled when the stack was updated, env.params and # env.encrypted_param_names will be in an inconsistent # state if not cfg.CONF.encrypt_parameters_and_properties: self.t.env.encrypted_param_names = [] @staticmethod def _check_for_message(msg_queue): if msg_queue is None: return try: message = msg_queue.get_nowait() except eventlet.queue.Empty: return if message == rpc_api.THREAD_CANCEL: raise ForcedCancel(with_rollback=False) elif message == rpc_api.THREAD_CANCEL_WITH_ROLLBACK: raise ForcedCancel(with_rollback=True) LOG.error('Unknown message "%s" received', message) def _delete_backup_stack(self, stack): # Delete resources in the backup stack referred to by 'stack' def failed(child): return (child.action == child.CREATE and child.status in (child.FAILED, child.IN_PROGRESS)) def copy_data(source_res, destination_res): if source_res.data(): for key, val in six.iteritems(source_res.data()): destination_res.data_set(key, val) for key, backup_res in stack.resources.items(): # If an UpdateReplace failed, we must restore backup_res # to the existing stack, since it may have dependencies in # these stacks. curr_res is the resource that was just # created and failed, so put it into the stack to delete anyway. backup_res_id = backup_res.resource_id curr_res = self.resources.get(key) if backup_res_id is not None and curr_res is not None: curr_res_id = curr_res.resource_id if (any(failed(child) for child in self.dependencies[curr_res]) or curr_res.status in (curr_res.FAILED, curr_res.IN_PROGRESS)): # If a child resource failed to update, curr_res # should be replaced to resolve dependencies. But this # is not a fundamental solution: if the children contain # both failed and successfully updated resources, the # stack cannot be deleted. # The Stack class owns dependencies as a set of resource # objects, so we switch the members of the resource that # are needed to delete it.
self.resources[key].resource_id = backup_res_id self.resources[key].properties = backup_res.properties copy_data(backup_res, self.resources[key]) stack.resources[key].resource_id = curr_res_id stack.resources[key].properties = curr_res.properties copy_data(curr_res, stack.resources[key]) stack.delete(backup=True) def _try_get_user_creds(self): # There are cases where the user_creds cannot be returned # because the credentials were truncated when being saved to the DB. # Ignore this error instead of blocking stack deletion. try: return ucreds_object.UserCreds.get_by_id(self.context, self.user_creds_id) except exception.Error: LOG.exception("Failed to retrieve user_creds") return None def _delete_credentials(self, stack_status, reason, abandon): # Clean up stored user_creds so they aren't accessible via # the soft-deleted stack which remains in the DB. # The stack_status and reason passed in are current values, which # may get rewritten and returned from this method if self.user_creds_id: user_creds = self._try_get_user_creds() # If we created a trust, delete it if user_creds is not None: trust_id = user_creds.get('trust_id') if trust_id: try: # If the trustor doesn't match the context user then # we have to use the stored context to clean up the # trust, as although the user evidently has # permission to delete the stack, they don't have # rights to delete the trust unless they are an admin trustor_id = user_creds.get('trustor_user_id') if self.context.user_id != trustor_id: LOG.debug("Context user_id doesn't match " "trustor, using stored context") sc = self.stored_context() sc.clients.client('keystone').delete_trust( trust_id) else: self.clients.client('keystone').delete_trust( trust_id) except Exception as ex: LOG.exception("Error deleting trust") stack_status = self.FAILED reason = ("Error deleting trust: %s" % six.text_type(ex)) # Delete the stored credentials try: ucreds_object.UserCreds.delete(self.context, self.user_creds_id) except exception.NotFound: LOG.info("Tried to delete user_creds that do not exist " "(stack=%(stack)s user_creds_id=%(uc)s)", {'stack': self.id, 'uc': self.user_creds_id}) try: self.user_creds_id = None self.store() except exception.NotFound: LOG.info("Tried to store a stack that does not exist %s", self.id) # If the stack has a domain project, delete it if self.stack_user_project_id and not abandon: try: keystone = self.clients.client('keystone') keystone.delete_stack_domain_project( project_id=self.stack_user_project_id) except Exception as ex: LOG.exception("Error deleting project") stack_status = self.FAILED reason = "Error deleting project: %s" % six.text_type(ex) return stack_status, reason @profiler.trace('Stack.delete', hide_args=False) @reset_state_on_error def delete(self, action=DELETE, backup=False, abandon=False, notify=None): """Delete all of the resources, and then the stack itself. The action parameter is used to differentiate between a user-initiated delete and an automatic stack rollback after a failed create, which amount to the same thing, but the states are recorded differently. Note that abandon is a delete where all resources have been set to a RETAIN deletion policy, but we also don't want to delete anything required for those resources, e.g. the stack_user_project.
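For example (a sketch; both call forms are taken from this class's own
usage, such as the rollback inner function of adopt())::

    stack.delete()                       # user-initiated delete
    stack.delete(action=stack.ROLLBACK)  # rollback after a failed create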
""" if action not in (self.DELETE, self.ROLLBACK): LOG.error("Unexpected action %s passed to delete!", action) self.state_set(self.DELETE, self.FAILED, "Invalid action %s" % action) if notify is not None: notify.signal() return stack_status = self.COMPLETE reason = 'Stack %s completed successfully' % action self.state_set(action, self.IN_PROGRESS, 'Stack %s started' % action) if notify is not None: notify.signal() backup_stack = self._backup_stack(False) if backup_stack: self._delete_backup_stack(backup_stack) if backup_stack.status != backup_stack.COMPLETE: errs = backup_stack.status_reason failure = 'Error deleting backup resources: %s' % errs self.state_set(action, self.FAILED, 'Failed to %s : %s' % (action, failure)) return self.delete_all_snapshots() if not backup: try: lifecycle_plugin_utils.do_pre_ops(self.context, self, None, action) except Exception as e: self.state_set(action, self.FAILED, e.args[0] if e.args else 'Failed stack pre-ops: %s' % six.text_type(e)) return action_task = scheduler.DependencyTaskGroup(self.dependencies, resource.Resource.destroy, reverse=True) try: scheduler.TaskRunner(action_task)(timeout=self.timeout_secs()) except exception.ResourceFailure as ex: stack_status = self.FAILED reason = 'Resource %s failed: %s' % (action, six.text_type(ex)) except scheduler.Timeout: stack_status = self.FAILED reason = '%s timed out' % action.title() # If the stack delete succeeded, this is not a backup stack and it's # not a nested stack, we should delete the credentials if stack_status != self.FAILED and not backup and not self.owner_id: stack_status, reason = self._delete_credentials(stack_status, reason, abandon) try: self.state_set(action, stack_status, reason) except exception.NotFound: LOG.info("Tried to delete stack that does not exist " "%s ", self.id) if not backup: lifecycle_plugin_utils.do_post_ops(self.context, self, None, action, (self.status == self.FAILED)) if stack_status != self.FAILED: # delete the stack try: stack_object.Stack.delete(self.context, self.id) except exception.NotFound: LOG.info("Tried to delete stack that does not exist " "%s ", self.id) self.id = None @profiler.trace('Stack.suspend', hide_args=False) @reset_state_on_error def suspend(self, notify=None): """Suspend the stack. Invokes handle_suspend for all stack resources. Waits for all resources to become SUSPEND_COMPLETE then declares the stack SUSPEND_COMPLETE. Note the default implementation for all resources is to do nothing other than move to SUSPEND_COMPLETE, so the resources must implement handle_suspend for this to have any effect. """ LOG.debug("Suspending stack %s", self) # No need to suspend if the stack has been suspended if self.state == (self.SUSPEND, self.COMPLETE): LOG.info('%s is already suspended', self) return self.updated_time = oslo_timeutils.utcnow() sus_task = scheduler.TaskRunner( self.stack_task, action=self.SUSPEND, reverse=True, notify=notify) sus_task(timeout=self.timeout_secs()) @profiler.trace('Stack.resume', hide_args=False) @reset_state_on_error def resume(self, notify=None): """Resume the stack. Invokes handle_resume for all stack resources. Waits for all resources to become RESUME_COMPLETE then declares the stack RESUME_COMPLETE. Note the default implementation for all resources is to do nothing other than move to RESUME_COMPLETE, so the resources must implement handle_resume for this to have any effect. 
""" LOG.debug("Resuming stack %s", self) # No need to resume if the stack has been resumed if self.state == (self.RESUME, self.COMPLETE): LOG.info('%s is already resumed', self) return self.updated_time = oslo_timeutils.utcnow() sus_task = scheduler.TaskRunner( self.stack_task, action=self.RESUME, reverse=False, notify=notify) sus_task(timeout=self.timeout_secs()) @profiler.trace('Stack.snapshot', hide_args=False) @reset_state_on_error def snapshot(self, save_snapshot_func): """Snapshot the stack, invoking handle_snapshot on all resources.""" self.updated_time = oslo_timeutils.utcnow() sus_task = scheduler.TaskRunner( self.stack_task, action=self.SNAPSHOT, reverse=False, pre_completion_func=save_snapshot_func) sus_task(timeout=self.timeout_secs()) def delete_all_snapshots(self): """Remove all snapshots for this stack.""" snapshots = snapshot_object.Snapshot.get_all(self.context, self.id) for snapshot in snapshots: self.delete_snapshot(snapshot) snapshot_object.Snapshot.delete(self.context, snapshot.id) @staticmethod def _template_from_snapshot_data(snapshot_data): env = environment.Environment(snapshot_data['environment']) files = snapshot_data['files'] return tmpl.Template(snapshot_data['template'], env=env, files=files) @profiler.trace('Stack.delete_snapshot', hide_args=False) def delete_snapshot(self, snapshot): """Remove a snapshot from the backends.""" snapshot_data = snapshot.data if snapshot_data: template = self._template_from_snapshot_data(snapshot_data) ss_defn = self.defn.clone_with_new_template(template, self.identifier()) resources = self._resources_for_defn(ss_defn) for name, rsrc in six.iteritems(resources): data = snapshot.data['resources'].get(name) if data: scheduler.TaskRunner(rsrc.delete_snapshot, data)() def restore_data(self, snapshot): template = self._template_from_snapshot_data(snapshot.data) newstack = self.__class__(self.context, self.name, template, timeout_mins=self.timeout_mins, disable_rollback=self.disable_rollback) for name in newstack.defn.enabled_rsrc_names(): defn = newstack.defn.resource_definition(name) rsrc = resource.Resource(name, defn, self) data = snapshot.data['resources'].get(name) handle_restore = getattr(rsrc, 'handle_restore', None) if callable(handle_restore): defn = handle_restore(defn, data) template.add_resource(defn, name) newstack.parameters.set_stack_id(self.identifier()) return newstack, template @reset_state_on_error def restore(self, snapshot, notify=None): """Restore the given snapshot. Invokes handle_restore on all resources. 
""" LOG.debug("Restoring stack %s", self) self.updated_time = oslo_timeutils.utcnow() newstack = self.restore_data(snapshot)[0] updater = scheduler.TaskRunner(self.update_task, newstack, action=self.RESTORE, notify=notify) updater() def get_availability_zones(self): nova = self.clients.client('nova') if self._zones is None: self._zones = [ zone.zoneName for zone in nova.availability_zones.list(detailed=False)] return self._zones def set_stack_user_project_id(self, project_id): self.stack_user_project_id = project_id self.store() @profiler.trace('Stack.create_stack_user_project_id', hide_args=False) def create_stack_user_project_id(self): project_id = self.clients.client( 'keystone').create_stack_domain_project(self.id) self.set_stack_user_project_id(project_id) @profiler.trace('Stack.prepare_abandon', hide_args=False) def prepare_abandon(self): return { 'name': self.name, 'id': self.id, 'action': self.action, 'environment': self.env.user_env_as_dict(), 'files': self.t.files, 'status': self.status, 'template': self.t.t, 'resources': dict((res.name, res.prepare_abandon()) for res in six.itervalues(self.resources)), 'project_id': self.tenant_id, 'stack_user_project_id': self.stack_user_project_id, 'tags': self.tags, } def mark_complete(self): """Mark the update as complete. This currently occurs when all resources have been updated; there may still be resources being cleaned up, but the Stack should now be in service. """ LOG.info('[%(name)s(%(id)s)] update traversal %(tid)s complete', {'name': self.name, 'id': self.id, 'tid': self.current_traversal}) reason = 'Stack %s completed successfully' % self.action updated = self.state_set(self.action, self.COMPLETE, reason) if not updated: return self.purge_db() def purge_db(self): """Cleanup database after stack has completed/failed. 1. Delete the resources from DB. 2. If the stack failed, update the current_traversal to empty string so that the resource workers bail out. 3. Delete previous raw template if stack completes successfully. 4. Deletes all sync points. They are no longer needed after stack has completed/failed. 5. Delete the stack if the action is DELETE. 
""" resource_objects.Resource.purge_deleted(self.context, self.id) exp_trvsl = self.current_traversal if self.status == self.FAILED: self.current_traversal = '' prev_tmpl_id = None if (self.prev_raw_template_id is not None and self.status != self.FAILED): prev_tmpl_id = self.prev_raw_template_id self.prev_raw_template_id = None stack_id = self.store(exp_trvsl=exp_trvsl) if stack_id is None: # Failed concurrent update LOG.warning("Failed to store stack %(name)s with traversal ID " "%(trvsl_id)s, aborting stack purge", {'name': self.name, 'trvsl_id': self.current_traversal}) return if prev_tmpl_id is not None: raw_template_object.RawTemplate.delete(self.context, prev_tmpl_id) sync_point.delete_all(self.context, self.id, exp_trvsl) if (self.action, self.status) == (self.DELETE, self.COMPLETE): if not self.owner_id: status, reason = self._delete_credentials( self.status, self.status_reason, False) if status == self.FAILED: # something wrong when delete credentials, set FAILED self.state_set(self.action, status, reason) return try: stack_object.Stack.delete(self.context, self.id) except exception.NotFound: pass def time_elapsed(self): """Time elapsed in seconds since the stack operation started.""" start_time = self.updated_time or self.created_time return (oslo_timeutils.utcnow() - start_time).total_seconds() def time_remaining(self): """Time left before stack times out.""" return self.timeout_secs() - self.time_elapsed() def has_timed_out(self): """Returns True if this stack has timed-out.""" if self.status == self.IN_PROGRESS: return self.time_elapsed() > self.timeout_secs() return False def migrate_to_convergence(self): values = {'current_template_id': self.t.id} db_rsrcs = self.db_active_resources_get() if db_rsrcs is not None: for res in db_rsrcs.values(): res.update_and_save(values=values) self.set_resource_deps() self.current_traversal = uuidutils.generate_uuid() self.convergence = True prev_raw_template_id = self.prev_raw_template_id self.prev_raw_template_id = None self.store(ignore_traversal_check=True) if prev_raw_template_id: raw_template_object.RawTemplate.delete(self.context, prev_raw_template_id) heat-10.0.2/heat/engine/output.py0000666000175000017500000000626613343562337016662 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import copy import six from heat.common import exception from heat.engine import function # Field names that can be passed to Template.get_section_name() in order to # determine the appropriate name for a particular template format. 
FIELDS = ( VALUE, DESCRIPTION, ) = ( 'Value', 'Description', ) class OutputDefinition(object): """A definition of a stack output, independent of any template format.""" def __init__(self, name, value, description=None): self.name = name self._value = value self._resolved_value = None self._description = description self._deps = None self._all_dep_attrs = None def validate(self): """Validate the output value without resolving it.""" function.validate(self._value, VALUE) def required_resource_names(self): if self._deps is None: try: required_resources = function.dependencies(self._value) self._deps = set(six.moves.map(lambda rp: rp.name, required_resources)) except (exception.InvalidTemplateAttribute, exception.InvalidTemplateReference): # This output ain't gonna work anyway self._deps = set() return self._deps def dep_attrs(self, resource_name, load_all=False): """Iterate over attributes of a given resource that this references. Return an iterator over dependent attributes for specified resource_name in the output's value field. """ if self._all_dep_attrs is None and load_all: attr_map = collections.defaultdict(set) for r, a in function.all_dep_attrs(self._value): attr_map[r].add(a) self._all_dep_attrs = attr_map if self._all_dep_attrs is not None: return iter(self._all_dep_attrs.get(resource_name, [])) return function.dep_attrs(self._value, resource_name) def get_value(self): """Resolve the value of the output.""" if self._resolved_value is None: self._resolved_value = function.resolve(self._value) return self._resolved_value def description(self): """Return a description of the output.""" if self._description is None: return 'No description given' return six.text_type(self._description) def render_hot(self): def items(): if self._description is not None: yield 'description', self._description yield 'value', copy.deepcopy(self._value) return dict(items()) heat-10.0.2/heat/engine/dependencies.py0000666000175000017500000002277513343562337017753 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import itertools import six from heat.common import exception from heat.common.i18n import _ from heat.common.i18n import repr_wrapper class CircularDependencyException(exception.HeatException): msg_fmt = _("Circular Dependency Found: %(cycle)s") @repr_wrapper @six.python_2_unicode_compatible class Node(object): """A node in a dependency graph.""" __slots__ = ('require', 'satisfy') def __init__(self, requires=None, required_by=None): """Initialisation of the node. Initialise the node, optionally with a set of keys this node requires and/or a set of keys that this node is required by. 
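A minimal sketch of both forms, using only the accessors defined on
this class::

    >>> n = Node(requires={'a'}, required_by={'b'})
    >>> sorted(n.requires()), sorted(n.required_by())
    (['a'], ['b'])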
""" self.require = requires and requires.copy() or set() self.satisfy = required_by and required_by.copy() or set() def copy(self): """Return a copy of the node.""" return Node(self.require, self.satisfy) def reverse_copy(self): """Return a copy of the node with the edge directions reversed.""" return Node(self.satisfy, self.require) def required_by(self, source=None): """List the keys that require this node, and optionally add new one.""" if source is not None: self.satisfy.add(source) return iter(self.satisfy) def requires(self, target=None): """List the keys that this node requires, and optionally add a new one. """ if target is not None: self.require.add(target) return iter(self.require) def __isub__(self, target): """Remove a key that this node requires.""" self.require.remove(target) return self def __nonzero__(self): """Return True if this node is not a leaf (it requires other nodes).""" return bool(self.require) def __bool__(self): """Return True if this node is not a leaf (it requires other nodes).""" return self.__nonzero__() def stem(self): """Return True if this node is a stem (required by nothing).""" return not bool(self.satisfy) def disjoint(self): """Return True if this node is both a leaf and a stem.""" return (not self) and self.stem() def __len__(self): """Count the number of keys required by this node.""" return len(self.require) def __iter__(self): """Iterate over the keys required by this node.""" return iter(self.require) def __str__(self): """Return a human-readable string representation of the node.""" text = '{%s}' % ', '.join(six.text_type(n) for n in self) return six.text_type(text) def __repr__(self): """Return a string representation of the node.""" return repr(self.require) @six.python_2_unicode_compatible class Graph(collections.defaultdict): """A mutable mapping of objects to nodes in a dependency graph.""" def __init__(self, *args): super(Graph, self).__init__(Node, *args) def map(self, func): """Map the supplied function onto each node in the graph. Return a dictionary derived from mapping the supplied function onto each node in the graph. """ return dict((k, func(n)) for k, n in self.items()) def copy(self): """Return a copy of the graph.""" return Graph(self.map(lambda n: n.copy())) def reverse_copy(self): """Return a copy of the graph with the edges reversed.""" return Graph(self.map(lambda n: n.reverse_copy())) def edges(self): """Return an iterator over all of the edges in the graph.""" def outgoing_edges(rqr, node): if node.disjoint(): yield (rqr, None) else: for rqd in node: yield (rqr, rqd) return itertools.chain.from_iterable(outgoing_edges(*i) for i in six.iteritems(self)) def __delitem__(self, key): """Delete the node given by the specified key from the graph.""" node = self[key] for src in node.required_by(): src_node = self[src] if key in src_node: src_node -= key return super(Graph, self).__delitem__(key) def __str__(self): """Convert the graph to a human-readable string.""" pairs = ('%s: %s' % (six.text_type(k), six.text_type(v)) for k, v in six.iteritems(self)) text = '{%s}' % ', '.join(pairs) return six.text_type(text) @staticmethod def toposort(graph): """Return a topologically sorted iterator over a dependency graph. This is a destructive operation for the graph. 
""" for iteration in six.moves.xrange(len(graph)): for key, node in six.iteritems(graph): if not node: yield key del graph[key] break else: # There are nodes remaining, but none without # dependencies: a cycle raise CircularDependencyException(cycle=six.text_type(graph)) @repr_wrapper @six.python_2_unicode_compatible class Dependencies(object): """Helper class for calculating a dependency graph.""" def __init__(self, edges=None): """Initialise, optionally with a list of edges. Each edge takes the form of a (requirer, required) tuple. """ edges = edges or [] self._graph = Graph() for e in edges: self += e def __iadd__(self, edge): """Add another edge, in the form of a (requirer, required) tuple.""" requirer, required = edge if required is None: # Just ensure the node is created by accessing the defaultdict self._graph[requirer] else: self._graph[required].required_by(requirer) self._graph[requirer].requires(required) return self def required_by(self, last): """List the keys that require the specified node.""" if last not in self._graph: raise KeyError return self._graph[last].required_by() def requires(self, source): """List the keys that the specified node requires.""" if source not in self._graph: raise KeyError return self._graph[source].requires() def __getitem__(self, last): """Return a partial dependency graph starting with the specified node. Return a subset of the dependency graph consisting of the specified node and all those that require it only. """ if last not in self._graph: raise KeyError def get_edges(key): def requirer_edges(rqr): # Concatenate the dependency on the current node with the # recursive generated list return itertools.chain([(rqr, key)], get_edges(rqr)) # Get the edge list for each node that requires the current node edge_lists = six.moves.map(requirer_edges, self._graph[key].required_by()) # Combine the lists into one long list return itertools.chain.from_iterable(edge_lists) if self._graph[last].stem(): # Nothing requires this, so just add the node itself edges = [(last, None)] else: edges = get_edges(last) return Dependencies(edges) def leaves(self): """Return an iterator over all of the leaf nodes in the graph.""" return (requirer for requirer, required in self._graph.items() if not required) def roots(self): """Return an iterator over all of the root nodes in the graph.""" return (requirer for requirer, required in self.graph( reverse=True).items() if not required) def translate(self, transform): """Translate all of the nodes using a transform function. Returns a new Dependencies object. 
""" def transform_key(key): return transform(key) if key is not None else None edges = self._graph.edges() return type(self)(tuple(map(transform_key, e)) for e in edges) def __str__(self): """Return a human-readable string repr of the dependency graph.""" return six.text_type(self._graph) def __repr__(self): """Return a consistent string representation of the object.""" edge_reprs = sorted(repr(e) for e in self._graph.edges()) text = 'Dependencies([%s])' % ', '.join(edge_reprs) return text def graph(self, reverse=False): """Return a copy of the underlying dependency graph.""" if reverse: return self._graph.reverse_copy() else: return self._graph.copy() def __iter__(self): """Return a topologically sorted iterator.""" return Graph.toposort(self.graph()) def __reversed__(self): """Return a reverse topologically sorted iterator.""" return Graph.toposort(self.graph(reverse=True)) heat-10.0.2/heat/engine/template.py0000666000175000017500000003407013343562351017123 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import collections import copy import functools import hashlib import six from stevedore import extension from heat.common import exception from heat.common.i18n import _ from heat.common import template_format from heat.engine import conditions from heat.engine import environment from heat.engine import function from heat.engine import template_files from heat.objects import raw_template as template_object __all__ = ['Template'] _template_classes = None def get_version(template_data, available_versions): version_keys = set(key for key, version in available_versions) candidate_keys = set(k for k, v in six.iteritems(template_data) if isinstance(v, six.string_types)) keys_present = version_keys & candidate_keys if len(keys_present) > 1: explanation = _('Ambiguous versions (%s)') % ', '.join(keys_present) raise exception.InvalidTemplateVersion(explanation=explanation) try: version_key = keys_present.pop() except KeyError: explanation = _('Template version was not provided') raise exception.InvalidTemplateVersion(explanation=explanation) return version_key, template_data[version_key] def _get_template_extension_manager(): return extension.ExtensionManager( namespace='heat.templates', invoke_on_load=False, on_load_failure_callback=raise_extension_exception) def raise_extension_exception(extmanager, ep, err): raise TemplatePluginNotRegistered(name=ep.name, error=six.text_type(err)) class TemplatePluginNotRegistered(exception.HeatException): msg_fmt = _("Could not load %(name)s: %(error)s") def get_template_class(template_data): available_versions = _template_classes.keys() version = get_version(template_data, available_versions) version_type = version[0] try: return _template_classes[version] except KeyError: av_list = sorted( [v for k, v in available_versions if k == version_type]) msg_data = {'version': ': '.join(version), 'version_type': version_type, 'available': ', '.join(v for v in av_list)} if len(av_list) > 1: explanation = _('"%(version)s". 
"%(version_type)s" ' 'should be one of: %(available)s') % msg_data else: explanation = _('"%(version)s". "%(version_type)s" ' 'should be: %(available)s') % msg_data raise exception.InvalidTemplateVersion(explanation=explanation) class Template(collections.Mapping): """Abstract base class for template format plugins. All template formats (both internal and third-party) should derive from Template and implement the abstract functions to provide resource definitions and other data. This is a stable third-party API. Do not add implementations that are specific to internal template formats. Do not add new abstract methods. """ condition_functions = {} functions = {} def __new__(cls, template, *args, **kwargs): """Create a new Template of the appropriate class.""" global _template_classes if _template_classes is None: mgr = _get_template_extension_manager() _template_classes = dict((tuple(name.split('.')), mgr[name].plugin) for name in mgr.names()) if cls != Template: TemplateClass = cls else: TemplateClass = get_template_class(template) return super(Template, cls).__new__(TemplateClass) def __init__(self, template, template_id=None, files=None, env=None): """Initialise the template with JSON object and set of Parameters.""" self.id = template_id self.t = template self.files = files or {} self.maps = self[self.MAPPINGS] self.env = env or environment.Environment({}) self.merge_sections = [self.PARAMETERS] self.version = get_version(self.t, _template_classes.keys()) self.t_digest = None condition_functions = {n: function.Invalid for n in self.functions} condition_functions.update(self.condition_functions) self._parser_condition_functions = condition_functions def __deepcopy__(self, memo): return Template(copy.deepcopy(self.t, memo), files=self.files, env=self.env) def merge_snippets(self, other): for s in self.merge_sections: if s not in other.t: continue if s not in self.t: self.t[s] = {} self.t[s].update(other.t[s]) @classmethod def load(cls, context, template_id, t=None): """Retrieve a Template with the given ID from the database.""" if t is None: t = template_object.RawTemplate.get_by_id(context, template_id) env = environment.Environment(t.environment) # support loading the legacy t.files, but modern templates will # have a t.files_id t_files = t.files or t.files_id return cls(t.template, template_id=template_id, env=env, files=t_files) def store(self, context): """Store the Template in the database and return its ID.""" rt = { 'template': self.t, 'files_id': self.files.store(context), 'environment': self.env.env_as_dict() } if self.id is None: new_rt = template_object.RawTemplate.create(context, rt) self.id = new_rt.id else: template_object.RawTemplate.update_by_id(context, self.id, rt) return self.id @property def files(self): return self._template_files @files.setter def files(self, files): self._template_files = template_files.TemplateFiles(files) def __iter__(self): """Return an iterator over the section names.""" return (s for s in self.SECTIONS if s not in self.SECTIONS_NO_DIRECT_ACCESS) def __len__(self): """Return the number of sections.""" return len(self.SECTIONS) - len(self.SECTIONS_NO_DIRECT_ACCESS) @abc.abstractmethod def param_schemata(self, param_defaults=None): """Return a dict of parameters.Schema objects for the parameters.""" pass def all_param_schemata(self, files): schema = {} files = files if files is not None else {} for f in files.values(): try: data = template_format.parse(f) except ValueError: continue else: sub_tmpl = Template(data) 
schema.update(sub_tmpl.param_schemata()) # Parent template has precedence, so update the schema last. schema.update(self.param_schemata()) return schema @abc.abstractmethod def get_section_name(self, section): """Get the name of a field within a resource or output definition. Return the name of the given field (specified by the constants given in heat.engine.rsrc_defn and heat.engine.output) in the template format. This is used in error reporting to help users find the location of errors in the template. Note that 'section' here does not refer to a top-level section of the template (like parameters, resources, &c.) as it does everywhere else. """ pass @abc.abstractmethod def parameters(self, stack_identifier, user_params, param_defaults=None): """Return a parameters.Parameters object for the stack.""" pass def validate_resource_definitions(self, stack): """Check validity of resource definitions. This method is deprecated. Subclasses should validate the resource definitions in the process of generating them when calling resource_definitions(). However, for now this method is still called in case any third-party plugins are relying on this for validation and need time to migrate. """ pass def conditions(self, stack): """Return a dictionary of resolved conditions.""" return conditions.Conditions({}) @abc.abstractmethod def outputs(self, stack): """Return a dictionary of OutputDefinition objects.""" pass @abc.abstractmethod def resource_definitions(self, stack): """Return a dictionary of ResourceDefinition objects.""" pass @abc.abstractmethod def add_resource(self, definition, name=None): """Add a resource to the template. The resource is passed as a ResourceDefinition object. If no name is specified, the name from the ResourceDefinition should be used. """ pass def add_output(self, definition): """Add an output to the template. The output is passed as a OutputDefinition object. """ raise NotImplementedError def remove_resource(self, name): """Remove a resource from the template.""" self.t.get(self.RESOURCES, {}).pop(name) def remove_all_resources(self): """Remove all the resources from the template.""" if self.RESOURCES in self.t: self.t.update({self.RESOURCES: {}}) def parse(self, stack, snippet, path=''): return parse(self.functions, stack, snippet, path, self) def parse_condition(self, stack, snippet, path=''): return parse(self._parser_condition_functions, stack, snippet, path, self) def validate(self): """Validate the template. Validates the top-level sections of the template as well as syntax inside select sections. Some sections are not checked here but in code parts that are responsible for working with the respective sections (e.g. parameters are check by parameters schema class). """ t_digest = hashlib.sha256( six.text_type(self.t).encode('utf-8')).hexdigest() # TODO(kanagaraj-manickam) currently t_digest is stored in self. which # is used to check whether already template is validated or not. # But it needs to be loaded from dogpile cache backend once its # available in heat (http://specs.openstack.org/openstack/heat-specs/ # specs/liberty/constraint-validation-cache.html). This is required # as multiple heat-engines may process the same template at least # in case of instance_group. 
And it fixes partially bug 1444316 if t_digest == self.t_digest: return # check top-level sections for k in self.t.keys(): if k not in self.SECTIONS: raise exception.InvalidTemplateSection(section=k) # check resources for res in six.itervalues(self[self.RESOURCES]): try: if not res or not res.get('Type'): message = _('Each Resource must contain ' 'a Type key.') raise exception.StackValidationFailed(message=message) except AttributeError: message = _('Resources must contain Resource. ' 'Found a [%s] instead') % type(res) raise exception.StackValidationFailed(message=message) self.t_digest = t_digest @classmethod def create_empty_template(cls, version=('heat_template_version', '2015-04-30'), from_template=None): """Create an empty template. Creates a new empty template with given version. If version is not provided, a new empty HOT template of version "2015-04-30" is returned. :param version: A tuple containing version header of the template: version key and value. E.g. ("heat_template_version", "2015-04-30") :returns: A new empty template. """ if from_template: # remove resources from the template and return; keep the # env and other things intact tmpl = copy.deepcopy(from_template) tmpl.remove_all_resources() return tmpl else: tmpl = {version[0]: version[1]} return cls(tmpl) def parse(functions, stack, snippet, path='', template=None): recurse = functools.partial(parse, functions, stack, template=template) if isinstance(snippet, collections.Mapping): def mkpath(key): return '.'.join([path, six.text_type(key)]) if len(snippet) == 1: fn_name, args = next(six.iteritems(snippet)) Func = functions.get(fn_name) if Func is not None: try: path = '.'.join([path, fn_name]) if issubclass(Func, function.Macro): return Func(stack, fn_name, args, functools.partial(recurse, path=path), template) else: return Func(stack, fn_name, recurse(args, path)) except (ValueError, TypeError, KeyError) as e: raise exception.StackValidationFailed( path=path, message=six.text_type(e)) return dict((k, recurse(v, mkpath(k))) for k, v in six.iteritems(snippet)) elif (not isinstance(snippet, six.string_types) and isinstance(snippet, collections.Iterable)): def mkpath(idx): return ''.join([path, '[%d]' % idx]) return [recurse(v, mkpath(i)) for i, v in enumerate(snippet)] else: return snippet heat-10.0.2/heat/engine/parent_rsrc.py0000666000175000017500000000647113343562337017642 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import weakref from heat.objects import resource as resource_object class ParentResourceProxy(object): """Proxy for the TemplateResource that owns a provider stack. This is an interface through which the Fn::ResourceFacade/resource_facade intrinsic functions in a stack can access data about the TemplateResource in the parent stack for which it was created. This API can be considered stable by third-party Function plugins, and no part of it should be changed or removed without an appropriate deprecation process. 
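For example (a sketch of how a Function plugin might read the parent
resource's metadata; 'my_server' is a hypothetical resource name, and
the constructor returns None when no parent resource name is given)::

    proxy = ParentResourceProxy(context, 'my_server', parent_stack_id)
    if proxy is not None:
        metadata = proxy.metadata_get()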
""" def __new__(cls, context, parent_resource_name, parent_stack_id): if parent_resource_name is None: return None return super(ParentResourceProxy, cls).__new__(cls) def __init__(self, context, parent_resource_name, parent_stack_id): self._context = context self.name = parent_resource_name self._stack_id = parent_stack_id self._stack_ref = None self._parent_stack = None def _stack(self): if self._stack_ref is not None: stk = self._stack_ref() if stk is not None: return stk assert self._stack_id is not None, "Must provide parent stack or ID" from heat.engine import stack self._parent_stack = stack.Stack.load(self._context, stack_id=self._stack_id) self._stack_ref = weakref.ref(self._parent_stack) return self._parent_stack def metadata_get(self): """Return the resource metadata.""" # If we're using an existing stack that was passed in, assume that its # resources are already in memory. If they haven't been stored to the # DB yet, this avoids an unnecessary attempt to read from it. if self._parent_stack is None: refd_stk = self._stack_ref and self._stack_ref() if refd_stk is not None: return refd_stk[self.name].metadata_get() assert self._stack_id is not None, "Must provide parent stack or ID" # Try to read just this resource from the DB rs = resource_object.Resource.get_by_name_and_stack(self._context, self.name, self._stack_id) if rs is not None: return rs.rsrc_metadata # Resource not stored, just return the data from the template return self.t.metadata() @property def t(self): """The resource definition.""" stk = self._stack() return stk.t.resource_definitions(stk)[self.name] def use_parent_stack(parent_proxy, stack): parent_proxy._stack_ref = weakref.ref(stack) parent_proxy._parent_stack = None heat-10.0.2/heat/engine/notification/0000775000175000017500000000000013343562672017424 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/notification/autoscaling.py0000666000175000017500000000261713343562337022315 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.engine import api as engine_api from heat.engine import notification def send(stack, adjustment=None, adjustment_type=None, capacity=None, groupname=None, message='error', suffix=None): """Send autoscaling notifications to the configured notification driver.""" # see: https://wiki.openstack.org/wiki/SystemUsageData event_type = '%s.%s' % ('autoscaling', suffix) body = engine_api.format_notification_body(stack) body['adjustment_type'] = adjustment_type body['adjustment'] = adjustment body['capacity'] = capacity body['groupname'] = groupname body['message'] = message level = notification.get_default_level() if suffix == 'error': level = notification.ERROR notification.notify(stack.context, event_type, level, body) heat-10.0.2/heat/engine/notification/stack.py0000666000175000017500000000266013343562337021107 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.engine import api as engine_api from heat.engine import notification def send(stack): """Send usage notifications to the configured notification driver.""" # The current notifications have a start/end: # see: https://wiki.openstack.org/wiki/SystemUsageData # so to be consistent we translate our status into a known start/end/error # suffix. level = notification.get_default_level() if stack.status == stack.IN_PROGRESS: suffix = 'start' elif stack.status == stack.COMPLETE: suffix = 'end' else: suffix = 'error' level = notification.ERROR event_type = '%s.%s.%s' % ('stack', stack.action.lower(), suffix) notification.notify(stack.context, event_type, level, engine_api.format_notification_body(stack)) heat-10.0.2/heat/engine/notification/__init__.py0000666000175000017500000000312113343562337021532 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from heat.common.i18n import _ from heat.common import messaging SERVICE = 'orchestration' INFO = 'INFO' ERROR = 'ERROR' notifier_opts = [ cfg.StrOpt('default_notification_level', default=INFO, help=_('Default notification level for outgoing' 'notifications.')), cfg.StrOpt('default_publisher_id', help=_('Default publisher_id for outgoing notifications.')), ] CONF = cfg.CONF CONF.register_opts(notifier_opts) def _get_default_publisher(): publisher_id = CONF.default_publisher_id if publisher_id is None: publisher_id = "%s.%s" % (SERVICE, CONF.host) return publisher_id def get_default_level(): return CONF.default_notification_level.upper() def notify(context, event_type, level, body): client = messaging.get_notifier(_get_default_publisher()) method = getattr(client, level.lower()) method(context, "%s.%s" % (SERVICE, event_type), body) def list_opts(): yield None, notifier_opts heat-10.0.2/heat/engine/sync_point.py0000666000175000017500000001362113343562351017474 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
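# --- Illustrative sketch (not part of the original source) ---
# The notification modules above compose event types as
# '<service>.<entity>.<action>.<suffix>', where notify() prefixes the
# 'orchestration' service name. A minimal sketch of the suffix logic in
# heat.engine.notification.stack.send(), assuming plain action/status
# strings rather than the stack's state constants:


def _example_stack_event_type(action, status):
    """Sketch: derive the event type sent for a stack state change."""
    if status == 'IN_PROGRESS':
        suffix = 'start'
    elif status == 'COMPLETE':
        suffix = 'end'
    else:
        suffix = 'error'
    return '%s.%s.%s.%s' % ('orchestration', 'stack', action.lower(), suffix)


# _example_stack_event_type('CREATE', 'COMPLETE')
# == 'orchestration.stack.create.end'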
import ast import six import tenacity from oslo_log import log as logging from heat.common import exception from heat.objects import sync_point as sync_point_object LOG = logging.getLogger(__name__) KEY_SEPERATOR = ':' def _dump_list(items, separator=', '): return separator.join(map(str, items)) def make_key(*components): assert len(components) >= 2 return _dump_list(components, KEY_SEPERATOR) def create(context, entity_id, traversal_id, is_update, stack_id): """Creates a sync point entry in DB.""" values = {'entity_id': entity_id, 'traversal_id': traversal_id, 'is_update': is_update, 'atomic_key': 0, 'stack_id': stack_id, 'input_data': {}} return sync_point_object.SyncPoint.create(context, values) def get(context, entity_id, traversal_id, is_update): """Retrieves a sync point entry from DB.""" sync_point = sync_point_object.SyncPoint.get_by_key(context, entity_id, traversal_id, is_update) if sync_point is None: key = (entity_id, traversal_id, is_update) raise exception.EntityNotFound(entity='Sync Point', name=key) return sync_point def delete_all(context, stack_id, traversal_id): """Deletes all sync points of a stack associated with a traversal_id.""" return sync_point_object.SyncPoint.delete_all_by_stack_and_traversal( context, stack_id, traversal_id ) def update_input_data(context, entity_id, current_traversal, is_update, atomic_key, input_data): rows_updated = sync_point_object.SyncPoint.update_input_data( context, entity_id, current_traversal, is_update, atomic_key, input_data) return rows_updated def str_pack_tuple(t): return u'tuple:' + str(tuple(t)) def _str_unpack_tuple(s): s = s[s.index(':') + 1:] return ast.literal_eval(s) def _deserialize(d): d2 = {} for k, v in d.items(): if isinstance(k, six.string_types) and k.startswith(u'tuple:('): k = _str_unpack_tuple(k) if isinstance(v, dict): v = _deserialize(v) d2[k] = v return d2 def _serialize(d): d2 = {} for k, v in d.items(): if isinstance(k, tuple): k = str_pack_tuple(k) if isinstance(v, dict): v = _serialize(v) d2[k] = v return d2 def deserialize_input_data(db_input_data): db_input_data = db_input_data.get('input_data') if not db_input_data: return {} return dict(_deserialize(db_input_data)) def serialize_input_data(input_data): return {'input_data': _serialize(input_data)} class wait_random_exponential(tenacity.wait_exponential): """Random wait strategy with a geometrically increasing amount of jitter. Implements the truncated binary exponential backoff algorithm as used in e.g. CSMA media access control. The retry occurs at a random time in a (geometrically) expanding interval constrained by minimum and maximum limits. 
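For example (a sketch mirroring the retry loop in sync() below, which
retries until the update succeeds, waiting a jittered interval whose
upper bound grows geometrically up to 60 seconds)::

    @tenacity.retry(
        retry=tenacity.retry_if_result(lambda r: r is None),
        wait=wait_random_exponential(max=60)
    )
    def _poll():
        ...  # return None to trigger another (jittered) retry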
""" def __init__(self, min=0, multiplier=1, max=tenacity._utils.MAX_WAIT, exp_base=2): super(wait_random_exponential, self).__init__(multiplier=multiplier, max=(max-min), exp_base=exp_base) self._random = tenacity.wait_random(min=min, max=(min + multiplier)) def __call__(self, previous_attempt_number, delay_since_first_attempt): jitter = super(wait_random_exponential, self).__call__(previous_attempt_number, delay_since_first_attempt) self._random.wait_random_max = self._random.wait_random_min + jitter return self._random(previous_attempt_number, delay_since_first_attempt) def sync(cnxt, entity_id, current_traversal, is_update, propagate, predecessors, new_data): # Retry waits up to 60 seconds at most, with exponentially increasing # amounts of jitter per resource still outstanding wait_strategy = wait_random_exponential(max=60) def init_jitter(existing_input_data): nconflicts = max(0, len(predecessors) - len(existing_input_data) - 1) # 10ms per potential conflict, up to a max of 10s in total return min(nconflicts, 1000) * 0.01 @tenacity.retry( retry=tenacity.retry_if_result(lambda r: r is None), wait=wait_strategy ) def _sync(): sync_point = get(cnxt, entity_id, current_traversal, is_update) input_data = deserialize_input_data(sync_point.input_data) wait_strategy.multiplier = init_jitter(input_data) input_data.update(new_data) rows_updated = update_input_data( cnxt, entity_id, current_traversal, is_update, sync_point.atomic_key, serialize_input_data(input_data)) return input_data if rows_updated else None input_data = _sync() waiting = predecessors - set(input_data) key = make_key(entity_id, current_traversal, is_update) if waiting: LOG.debug('[%s] Waiting %s: Got %s; still need %s', key, entity_id, _dump_list(input_data), _dump_list(waiting)) else: LOG.debug('[%s] Ready %s: Got %s', key, entity_id, _dump_list(input_data)) propagate(entity_id, serialize_input_data(input_data)) heat-10.0.2/heat/engine/resource.py0000666000175000017500000032012613343562351017137 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
heat-10.0.2/heat/engine/resource.py0000666000175000017500000032012613343562351017137 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import collections
import contextlib
import datetime as dt
import itertools
import pydoc
import tenacity
import weakref

from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import excutils
from oslo_utils import reflection
import six

from heat.common import exception
from heat.common.i18n import _
from heat.common import identifier
from heat.common import short_id
from heat.common import timeutils
from heat.engine import attributes
from heat.engine.cfn import template as cfn_tmpl
from heat.engine import clients
from heat.engine import environment
from heat.engine import event
from heat.engine import function
from heat.engine.hot import template as hot_tmpl
from heat.engine import node_data
from heat.engine import properties
from heat.engine import resources
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.engine import status
from heat.engine import support
from heat.engine import sync_point
from heat.engine import template
from heat.objects import resource as resource_objects
from heat.objects import resource_data as resource_data_objects
from heat.objects import resource_properties_data as rpd_objects
from heat.rpc import client as rpc_client

cfg.CONF.import_opt('action_retry_limit', 'heat.common.config')
cfg.CONF.import_opt('observe_on_update', 'heat.common.config')
cfg.CONF.import_opt('error_wait_time', 'heat.common.config')

LOG = logging.getLogger(__name__)

datetime = dt.datetime


def _register_class(resource_type, resource_class):
    resources.global_env().register_class(resource_type, resource_class)


# Attention developers about to move/delete this: STOP IT!!!
UpdateReplace = exception.UpdateReplace


# Attention developers about to move this: STOP IT!!!
class NoActionRequired(Exception):
    """Exception raised when a signal is ignored.

    Resource subclasses should raise this exception from handle_signal() to
    suppress recording of an event corresponding to the signal.
    """
    def __init__(self, res_name='Unknown', reason=''):
        msg = (_("The resource %(res)s could not perform "
                 "scaling action: %(reason)s") %
               {'res': res_name, 'reason': reason})
        super(Exception, self).__init__(six.text_type(msg))


class PollDelay(Exception):
    """Exception to delay polling of the resource.

    This exception may be raised by a Resource subclass's check_*_complete()
    methods to indicate that it need not be polled again immediately. If this
    exception is raised, the check_*_complete() method will not be called
    again until the nth time that the resource becomes eligible for polling.
    A PollDelay period of 1 is equivalent to returning False.
    """
    def __init__(self, period):
        assert period >= 1
        self.period = period
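
# NOTE(editor): illustrative sketch only -- how a resource plugin might use
# PollDelay from a check method; the server lookup and status value are
# hypothetical:
#
#     def check_create_complete(self, server_id):
#         server = self.client().servers.get(server_id)
#         if server.status == 'ACTIVE':
#             return True
#         # Only poll this server again on every 4th eligible tick
#         raise PollDelay(4)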

@six.python_2_unicode_compatible
class Resource(status.ResourceStatus):
    BASE_ATTRIBUTES = (SHOW, ) = (attributes.SHOW_ATTR, )

    LOCK_ACTIONS = (
        LOCK_NONE, LOCK_ACQUIRE, LOCK_RELEASE, LOCK_RESPECT,
    ) = (
        None, 1, -1, 0,
    )

    # If True, this resource must be created before it can be referenced.
    strict_dependency = True

    # Resource implementations set this to the subset of resource properties
    # supported for handle_update, used by update_template_diff_properties
    update_allowed_properties = ()

    # Resource implementations set this to the name: description dictionary
    # that describes the appropriate resource attributes
    attributes_schema = {}

    # Resource implementations set this to update policies
    update_policy_schema = {}

    # Default entity of the resource, which is used when resolving the
    # show attribute
    entity = None

    # Description dictionary, that describes the common attributes for all
    # resources
    base_attributes_schema = {
        SHOW: attributes.Schema(
            _("Detailed information about resource."),
            cache_mode=attributes.Schema.CACHE_NONE,
            type=attributes.Schema.MAP
        )
    }

    # If True, this resource may perform authenticated API requests
    # throughout its lifecycle
    requires_deferred_auth = False

    # Limit to apply to physical_resource_name() size reduction algorithm.
    # If set to None no limit will be applied.
    physical_resource_name_limit = 255

    support_status = support.SupportStatus()

    # Default name to use for calls to self.client()
    default_client_name = None

    # Required service extension for this resource
    required_service_extension = None

    # no signal actions
    no_signal_actions = (status.ResourceStatus.SUSPEND,
                         status.ResourceStatus.DELETE)

    # Whether all other resources need a metadata_update() after
    # a signal to this resource
    signal_needs_metadata_updates = True

    def __new__(cls, name, definition, stack):
        """Create a new Resource of the appropriate class for its type."""
        assert isinstance(definition, rsrc_defn.ResourceDefinition)

        if cls != Resource:
            # Call is already for a subclass, so pass it through
            ResourceClass = cls
        else:
            registry = stack.env.registry
            ResourceClass = registry.get_class_to_instantiate(
                definition.resource_type,
                resource_name=name)

            assert issubclass(ResourceClass, Resource)

        return super(Resource, cls).__new__(ResourceClass)

    @classmethod
    def _validate_service_availability(cls, context, resource_type):
        try:
            (svc_available, reason) = cls.is_service_available(context)
        except Exception as exc:
            LOG.exception("Resource type %s unavailable",
                          resource_type)
            ex = exception.ResourceTypeUnavailable(
                resource_type=resource_type,
                service_name=cls.default_client_name,
                reason=six.text_type(exc))
            raise ex
        else:
            if not svc_available:
                ex = exception.ResourceTypeUnavailable(
                    resource_type=resource_type,
                    service_name=cls.default_client_name,
                    reason=reason)
                LOG.info(six.text_type(ex))
                raise ex

    def __init__(self, name, definition, stack):

        def _validate_name(res_name):
            if '/' in res_name:
                message = _('Resource name may not contain "/"')
                raise exception.StackValidationFailed(message=message)

        _validate_name(name)
        self.stack = stack
        self.context = stack.context
        self.name = name
        self.t = definition
        self.reparse(client_resolve=False)
        self.update_policy = self.t.update_policy(self.update_policy_schema,
                                                  self.context)
        self._update_allowed_properties = self.calc_update_allowed(
            self.properties)
        self.attributes_schema.update(self.base_attributes_schema)
        self.attributes = attributes.Attributes(self.name,
                                                self.attributes_schema,
                                                self._make_resolver(
                                                    weakref.ref(self)))
        self.abandon_in_progress = False

        self.resource_id = None
        # If the stack is being deleted, assume we've already been deleted;
        # or, if the resource has not been created yet and the stack is in
        # rollback, set the resource action to rollback
        if stack.action == stack.DELETE or stack.action == stack.ROLLBACK:
            self.action = stack.action
        else:
            self.action = self.INIT
        self.status = self.COMPLETE
        self.status_reason = ''
        self.id = None
        self.uuid = None
        self._data = None
        self._attr_data_id = None
        self._rsrc_metadata = None
        self._rsrc_prop_data_id = None
        self._stored_properties_data = None
        self.created_time = stack.created_time
        self.updated_time = stack.updated_time
        self._rpc_client = None
        self.needed_by = []
        self.requires = []
        self.replaces = None
        self.replaced_by = None
        self.current_template_id = None
        self.root_stack_id = None
        self._calling_engine_id = None
        self._atomic_key = None
        self.converge = False

        if not self.stack.in_convergence_check:
            resource = stack.db_resource_get(name)
            if resource:
                self._load_data(resource)
        else:
            proxy = self.stack.defn[self.name]
            node_data = proxy._resource_data
            if node_data is not None:
                self.action, self.status = proxy.state
                self.id = node_data.primary_key
                self.uuid = node_data.uuid
    def rpc_client(self):
        """Return a client for making engine RPC calls."""
        if not self._rpc_client:
            self._rpc_client = rpc_client.EngineClient()
        return self._rpc_client

    def _load_data(self, resource):
        """Load the resource state from its DB representation."""
        self.resource_id = resource.physical_resource_id
        self.action = resource.action
        self.status = resource.status
        self.status_reason = resource.status_reason
        self.id = resource.id
        self.uuid = resource.uuid
        try:
            self._data = resource_data_objects.ResourceData.get_all(
                self, resource.data)
        except exception.NotFound:
            self._data = {}
        self.attributes.cached_attrs = resource.attr_data or None
        self._attr_data_id = resource.attr_data_id
        self._rsrc_metadata = resource.rsrc_metadata
        self._stored_properties_data = resource.properties_data
        self._rsrc_prop_data_id = resource.rsrc_prop_data_id
        self.created_time = resource.created_at
        self.updated_time = resource.updated_at
        self.needed_by = resource.needed_by
        self.requires = resource.requires
        self.replaces = resource.replaces
        self.replaced_by = resource.replaced_by
        self.current_template_id = resource.current_template_id
        self.root_stack_id = resource.root_stack_id
        self._atomic_key = resource.atomic_key

    @property
    def external_id(self):
        return self.t.external_id()

    @classmethod
    def getdoc(cls):
        if cls.__doc__ is None:
            return _('No description available')
        return pydoc.getdoc(cls)

    @property
    def stack(self):
        stack = self._stackref()
        assert stack is not None, "Need a reference to the Stack object"
        return stack

    @stack.setter
    def stack(self, stack):
        self._stackref = weakref.ref(stack)

    @classmethod
    def load(cls, context, resource_id, current_traversal, is_update, data):
        """Load a specified resource from the database to check.

        Returns a tuple of the Resource, the StackDefinition corresponding to
        the resource's ResourceDefinition (i.e. the one the resource was last
        updated to if it has already been created, or the one it will be
        created with if it hasn't been already), and the Stack containing the
        latest StackDefinition (i.e. the one that the latest traversal is
        updating to). The latter two must remain in-scope, because the
        Resource holds weak references to them.
        """
        from heat.engine import stack as stack_mod
        db_res = resource_objects.Resource.get_obj(context, resource_id)
        curr_stack = stack_mod.Stack.load(context, stack_id=db_res.stack_id,
                                          cache_data=data)

        initial_stk_defn = latest_stk_defn = curr_stack.defn
        if (db_res.current_template_id != curr_stack.t.id and
                (db_res.action != cls.INIT or
                 not is_update or
                 current_traversal != curr_stack.current_traversal)):
            # load the definition associated with the resource's template
            current_template_id = db_res.current_template_id
            current_template = template.Template.load(context,
                                                      current_template_id)
            initial_stk_defn = curr_stack.defn.clone_with_new_template(
                current_template, curr_stack.identifier())
            curr_stack.defn = initial_stk_defn

        res_defn = initial_stk_defn.resource_definition(db_res.name)
        res_type = initial_stk_defn.env.registry.get_class_to_instantiate(
            res_defn.resource_type, resource_name=db_res.name)

        # If the resource type has changed and the new one is a valid
        # substitution, use that as the class to instantiate.
        if is_update and (latest_stk_defn is not initial_stk_defn):
            try:
                new_res_defn = latest_stk_defn.resource_definition(
                    db_res.name)
            except KeyError:
                pass
            else:
                new_registry = latest_stk_defn.env.registry
                new_res_type = new_registry.get_class_to_instantiate(
                    new_res_defn.resource_type, resource_name=db_res.name)
                if res_type.check_is_substituted(new_res_type):
                    res_type = new_res_type

        # Load only the resource in question; don't load all resources
        # by invoking stack.resources. Maintain light-weight stack.
        resource = res_type(db_res.name, res_defn, curr_stack)
        resource._load_data(db_res)

        curr_stack.defn = latest_stk_defn
        return resource, initial_stk_defn, curr_stack
""" from heat.engine import stack as stack_mod db_res = resource_objects.Resource.get_obj(context, resource_id) curr_stack = stack_mod.Stack.load(context, stack_id=db_res.stack_id, cache_data=data) initial_stk_defn = latest_stk_defn = curr_stack.defn if (db_res.current_template_id != curr_stack.t.id and (db_res.action != cls.INIT or not is_update or current_traversal != curr_stack.current_traversal)): # load the definition associated with the resource's template current_template_id = db_res.current_template_id current_template = template.Template.load(context, current_template_id) initial_stk_defn = curr_stack.defn.clone_with_new_template( current_template, curr_stack.identifier()) curr_stack.defn = initial_stk_defn res_defn = initial_stk_defn.resource_definition(db_res.name) res_type = initial_stk_defn.env.registry.get_class_to_instantiate( res_defn.resource_type, resource_name=db_res.name) # If the resource type has changed and the new one is a valid # substitution, use that as the class to instantiate. if is_update and (latest_stk_defn is not initial_stk_defn): try: new_res_defn = latest_stk_defn.resource_definition(db_res.name) except KeyError: pass else: new_registry = latest_stk_defn.env.registry new_res_type = new_registry.get_class_to_instantiate( new_res_defn.resource_type, resource_name=db_res.name) if res_type.check_is_substituted(new_res_type): res_type = new_res_type # Load only the resource in question; don't load all resources # by invoking stack.resources. Maintain light-weight stack. resource = res_type(db_res.name, res_defn, curr_stack) resource._load_data(db_res) curr_stack.defn = latest_stk_defn return resource, initial_stk_defn, curr_stack def make_replacement(self, new_tmpl_id): """Create a replacement resource in the database. Returns the DB ID of the new resource, or None if the new resource cannot be created (generally because the template ID does not exist). Raises UpdateInProgress if another traversal has already locked the current resource. """ # 1. create the replacement with "replaces" = self.id # Don't set physical_resource_id so that a create is triggered. rs = {'stack_id': self.stack.id, 'name': self.name, 'rsrc_prop_data_id': None, 'needed_by': self.needed_by, 'requires': self.requires, 'replaces': self.id, 'action': self.INIT, 'status': self.COMPLETE, 'current_template_id': new_tmpl_id, 'stack_name': self.stack.name, 'root_stack_id': self.root_stack_id} update_data = {'status': self.COMPLETE} # Retry in case a signal has updated the atomic_key attempts = max(cfg.CONF.client_retry_limit, 0) + 1 def prepare_attempt(fn, attempt): if attempt > 1: res_obj = resource_objects.Resource.get_obj( self.context, self.id) if (res_obj.engine_id is not None or res_obj.updated_at != self.updated_time): raise exception.UpdateInProgress(resource_name=self.name) self._atomic_key = res_obj.atomic_key @tenacity.retry( stop=tenacity.stop_after_attempt(attempts), retry=tenacity.retry_if_exception_type( exception.UpdateInProgress), before=prepare_attempt, wait=tenacity.wait_random(max=2), reraise=True) def create_replacement(): return resource_objects.Resource.replacement(self.context, self.id, update_data, rs, self._atomic_key) new_rs = create_replacement() if new_rs is None: return None self._incr_atomic_key(self._atomic_key) self.replaced_by = new_rs.id return new_rs.id def reparse(self, client_resolve=True): """Reparse the resource properties. Optional translate flag for property translation and client_resolve flag for resolving properties by doing client lookup. 
""" self.properties = self.t.properties(self.properties_schema, self.context) self.translate_properties(self.properties, client_resolve) def calc_update_allowed(self, props): update_allowed_set = set(self.update_allowed_properties) for (psk, psv) in six.iteritems(props.props): if psv.update_allowed(): update_allowed_set.add(psk) return update_allowed_set def __eq__(self, other): """Allow == comparison of two resources.""" # For the purposes of comparison, we declare two resource objects # equal if their names and resolved templates are the same if isinstance(other, Resource): return ((self.name == other.name) and (self.t.freeze() == other.t.freeze())) return NotImplemented def __ne__(self, other): """Allow != comparison of two resources.""" result = self.__eq__(other) if result is NotImplemented: return result return not result def __hash__(self): return id(self) def metadata_get(self, refresh=False): if refresh: self._rsrc_metadata = None if self.id is None or self.action == self.INIT: return self.t.metadata() if self._rsrc_metadata is not None: return self._rsrc_metadata rs = resource_objects.Resource.get_obj(self.stack.context, self.id, refresh=True, fields=('rsrc_metadata', )) self._rsrc_metadata = rs.rsrc_metadata return rs.rsrc_metadata @resource_objects.retry_on_conflict def metadata_set(self, metadata, merge_metadata=None): """Write new metadata to the database. The caller may optionally provide a merge_metadata() function, which takes two arguments - the metadata passed to metadata_set() and the current metadata of the resource - and returns the merged metadata to write. If merge_metadata is not provided, the metadata passed to metadata_set() is written verbatim, overwriting any existing metadata. If a race condition is detected, the write will be retried with the new result of merge_metadata() (if it is supplied) or the verbatim data (if it is not). """ if self.id is None or self.action == self.INIT: raise exception.ResourceNotAvailable(resource_name=self.name) refresh = merge_metadata is not None db_res = resource_objects.Resource.get_obj( self.stack.context, self.id, refresh=refresh, fields=('name', 'rsrc_metadata', 'atomic_key', 'engine_id', 'action', 'status')) if db_res.action == self.DELETE: self._db_res_is_deleted = True LOG.debug("resource %(name)s, id: %(id)s is DELETE_%(st)s, " "not setting metadata", {'name': self.name, 'id': self.id, 'st': db_res.status}) raise exception.ResourceNotAvailable(resource_name=self.name) LOG.debug('Setting metadata for %s', six.text_type(self)) if refresh: metadata = merge_metadata(metadata, db_res.rsrc_metadata) if db_res.update_metadata(metadata): self._incr_atomic_key(db_res.atomic_key) self._rsrc_metadata = metadata def handle_metadata_reset(self): """Default implementation; should be overridden by resources. Now we override this method to reset the metadata for scale-policy and scale-group resources, because their metadata might hang in a wrong state ('scaling_in_progress' is always True) if engine restarts while scaling. 
""" pass @classmethod def set_needed_by(cls, db_rsrc, needed_by, expected_engine_id=None): if db_rsrc: db_rsrc.select_and_update( {'needed_by': needed_by}, atomic_key=db_rsrc.atomic_key, expected_engine_id=expected_engine_id ) @classmethod def set_requires(cls, db_rsrc, requires): if db_rsrc: db_rsrc.update_and_save( {'requires': requires} ) def _break_if_required(self, action, hook): """Block the resource until the hook is cleared if there is one.""" if self.stack.env.registry.matches_hook(self.name, hook): self.trigger_hook(hook) self._add_event(self.action, self.status, _("%(a)s paused until Hook %(h)s is cleared") % {'a': action, 'h': hook}) LOG.info('Reached hook on %s', self) while self.has_hook(hook): try: yield except BaseException as exc: self.clear_hook(hook) self._add_event( self.action, self.status, "Failure occurred while waiting.") if (isinstance(exc, AssertionError) or not isinstance(exc, Exception)): raise def has_nested(self): """Return True if the resource has an existing nested stack. For most resource types, this will always return False. StackResource subclasses return True when appropriate. Resource subclasses that may return True must also provide a nested_identifier() method to return the identifier of the nested stack, and a nested() method to return a Stack object for the nested stack. """ return False def get_nested_parameters_stack(self): """Return the nested stack for schema validation. Regular resources don't have such a thing. """ return def has_hook(self, hook): # Clear the cache to make sure the data is up to date: self._data = None return self.data().get(hook) == "True" def trigger_hook(self, hook): self.data_set(hook, "True") def clear_hook(self, hook): self.data_delete(hook) def type(self): return self.t.resource_type def has_interface(self, resource_type): """Check if resource is mapped to resource_type or is "resource_type". Check to see if this resource is either mapped to resource_type or is a "resource_type". """ if self.type() == resource_type: return True try: ri = self.stack.env.get_resource_info(self.type(), self.name) except exception.EntityNotFound: return False else: return ri.name == resource_type def identifier(self): """Return an identifier for this resource.""" return identifier.ResourceIdentifier(resource_name=self.name, **self.stack.identifier()) def frozen_definition(self): """Return a frozen ResourceDefinition with stored property values. The returned definition will contain the property values read from the database, and will have all intrinsic functions resolved (note that this makes it useless for calculating dependencies). """ if self._stored_properties_data is not None: args = {'properties': self._stored_properties_data} else: args = {} return self.t.freeze(**args) @contextlib.contextmanager def frozen_properties(self): """Context manager to use the frozen property values from the database. The live property values are always substituted back when the context ends. """ live_props = self.properties props = self.frozen_definition().properties(self.properties_schema, self.context) try: self.properties = props yield props finally: self.properties = live_props def update_template_diff(self, after, before): """Returns the difference between the before and after json snippets. If something has been removed in after which exists in before we set it to None. """ return after - before def update_template_diff_properties(self, after_props, before_props): """Return changed Properties between the before and after properties. 
    def update_template_diff_properties(self, after_props, before_props):
        """Return changed Properties between the before and after properties.

        If any property with immutable set to True is updated, a
        NotSupported error is raised.
        If any properties not in update_allowed_properties have changed,
        UpdateReplace is raised.
        """
        update_allowed_set = self.calc_update_allowed(after_props)
        immutable_set = set()
        for (psk, psv) in six.iteritems(after_props.props):
            if psv.immutable():
                immutable_set.add(psk)

        def prop_changed(key):
            try:
                before = before_props.get(key)
            except (TypeError, ValueError) as exc:
                # We shouldn't get here usually, but there is a known issue
                # with template resources and new parameters in
                # non-convergence stacks (see bug 1543685). The error should
                # be harmless because we're on the before properties, which
                # have presumably already been validated.
                LOG.warning('Ignoring error in old property value '
                            '%(prop_name)s: %(msg)s',
                            {'prop_name': key, 'msg': six.text_type(exc)})
                return True

            return before != after_props.get(key)

        # Create a set of keys which differ (or are missing/added)
        changed_properties_set = set(k for k in after_props
                                     if prop_changed(k))

        # Create a list of updated properties offending property immutability
        update_replace_forbidden = [k for k in changed_properties_set
                                    if k in immutable_set]

        if update_replace_forbidden:
            msg = _("Update to properties %(props)s of %(name)s (%(res)s)"
                    ) % {'props': ", ".join(sorted(update_replace_forbidden)),
                         'res': self.type(), 'name': self.name}
            raise exception.NotSupported(feature=msg)

        if changed_properties_set and self.needs_replace_with_prop_diff(
                changed_properties_set,
                after_props,
                before_props):
            raise UpdateReplace(self)

        if not changed_properties_set.issubset(update_allowed_set):
            raise UpdateReplace(self.name)

        return dict((k, after_props.get(k))
                    for k in changed_properties_set)
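
    # NOTE(editor): illustrative example of the contract above, with made-up
    # property names. Given before={'size': 1, 'name': 'a'} and
    # after={'size': 2, 'name': 'a'}:
    #
    #   * if 'size' is update-allowed, the method returns {'size': 2} and
    #     the resource is updated in place;
    #   * if 'size' is immutable, exception.NotSupported is raised;
    #   * otherwise UpdateReplace is raised and the resource is replaced.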
""" try: reqd_by = self.stack.dependencies.required_by(self) except KeyError: if self.stack.convergence: # for convergence, fall back to building from needed_by needed_by_ids = self.needed_by or set() reqd_by = [r for r in self.stack.resources.values() if r.id in needed_by_ids] else: LOG.error('Getting required_by list for Resource not in ' 'dependency graph.') return [] return [r.name for r in reqd_by] def client(self, name=None, version=None): client_name = name or self.default_client_name assert client_name, "Must specify client name" return self.stack.clients.client(client_name, version) def client_plugin(self, name=None): client_name = name or self.default_client_name assert client_name, "Must specify client name" return self.stack.clients.client_plugin(client_name) @classmethod def is_service_available(cls, context): # NOTE(kanagaraj-manickam): return True to satisfy the cases like # resource does not have endpoint, such as RandomString, OS::Heat # resources as they are implemented within the engine. if cls.default_client_name is None: return (True, None) client_plugin = clients.Clients(context).client_plugin( cls.default_client_name) if not client_plugin: raise exception.ClientNotAvailable( client_name=cls.default_client_name) service_types = client_plugin.service_types if not service_types: return (True, None) # NOTE(kanagaraj-manickam): if one of the service_type does # exist in the keystone, then considered it as available. for service_type in service_types: endpoint_exists = client_plugin.does_endpoint_exist( service_type=service_type, service_name=cls.default_client_name) if endpoint_exists: req_extension = cls.required_service_extension is_ext_available = ( not req_extension or client_plugin.has_extension( req_extension)) if is_ext_available: return (True, None) else: reason = _('Required extension {0} in {1} service ' 'is not available.') reason = reason.format(req_extension, cls.default_client_name) else: reason = _('{0} {1} endpoint is not in service catalog.') reason = reason.format(cls.default_client_name, service_type) return (False, reason) def keystone(self): return self.client('keystone') def nova(self): return self.client('nova') def swift(self): return self.client('swift') def neutron(self): return self.client('neutron') def cinder(self): return self.client('cinder') def trove(self): return self.client('trove') def ceilometer(self): return self.client('ceilometer') def heat(self): return self.client('heat') def glance(self): return self.client('glance') def _incr_atomic_key(self, last_key): if last_key is None: self._atomic_key = 1 else: self._atomic_key = last_key + 1 def _should_lock_on_action(self, action): """Return whether we should take a resource-level lock for an action. In the legacy path, we always took a lock at the Stack level and never at the Resource level. In convergence, we lock at the Resource level for most operations. However, there are currently some exceptions: the SUSPEND, RESUME, SNAPSHOT, and CHECK actions, and stack abandon. """ return (self.stack.convergence and not self.abandon_in_progress and action in {self.ADOPT, self.CREATE, self.UPDATE, self.ROLLBACK, self.DELETE}) @contextlib.contextmanager def _action_recorder(self, action, expected_exceptions=tuple()): """Return a context manager to record the progress of an action. Upon entering the context manager, the state is set to IN_PROGRESS. Upon exiting, the state will be set to COMPLETE if no exception was raised, or FAILED otherwise. 
    @contextlib.contextmanager
    def _action_recorder(self, action, expected_exceptions=tuple()):
        """Return a context manager to record the progress of an action.

        Upon entering the context manager, the state is set to IN_PROGRESS.
        Upon exiting, the state will be set to COMPLETE if no exception was
        raised, or FAILED otherwise.

        Non-exit exceptions will be translated to ResourceFailure exceptions.
        Expected exceptions are re-raised, with the Resource moved to the
        COMPLETE state.
        """
        attempts = 1
        first_iter = [True]  # work around no nonlocal in py27
        if self.stack.convergence:
            if self._should_lock_on_action(action):
                lock_acquire = self.LOCK_ACQUIRE
                lock_release = self.LOCK_RELEASE
            else:
                lock_acquire = lock_release = self.LOCK_RESPECT

            if action != self.CREATE:
                attempts += max(cfg.CONF.client_retry_limit, 0)
        else:
            lock_acquire = lock_release = self.LOCK_NONE

        # retry for convergence DELETE or UPDATE if we get the usual
        # lock-acquire exception of exception.UpdateInProgress
        @tenacity.retry(
            stop=tenacity.stop_after_attempt(attempts),
            retry=tenacity.retry_if_exception_type(
                exception.UpdateInProgress),
            wait=tenacity.wait_random(max=2),
            reraise=True)
        def set_in_progress():
            if not first_iter[0]:
                res_obj = resource_objects.Resource.get_obj(
                    self.context, self.id)
                self._atomic_key = res_obj.atomic_key
            else:
                first_iter[0] = False
            self.state_set(action, self.IN_PROGRESS, lock=lock_acquire)

        try:
            set_in_progress()
            yield
        except exception.UpdateInProgress as ex:
            with excutils.save_and_reraise_exception():
                LOG.info('Update in progress for %s', self.name)
        except expected_exceptions as ex:
            with excutils.save_and_reraise_exception():
                self.state_set(action, self.COMPLETE, six.text_type(ex),
                               lock=lock_release)
                LOG.debug('%s', six.text_type(ex))
        except Exception as ex:
            LOG.info('%(action)s: %(info)s',
                     {"action": action,
                      "info": six.text_type(self)},
                     exc_info=True)
            failure = exception.ResourceFailure(ex, self, action)
            self.state_set(action, self.FAILED, six.text_type(failure),
                           lock=lock_release)
            raise failure
        except BaseException as exc:
            with excutils.save_and_reraise_exception():
                try:
                    reason = six.text_type(exc)
                    msg = '%s aborted' % action
                    if reason:
                        msg += ' (%s)' % reason
                    self.state_set(action, self.FAILED, msg,
                                   lock=lock_release)
                except Exception:
                    LOG.exception('Error marking resource as failed')
        else:
            self.state_set(action, self.COMPLETE, lock=lock_release)

    def action_handler_task(self, action, args=None, action_prefix=None):
        """A task to call the Resource subclass's handler methods for action.

        Calls the handle_<ACTION>() method for the given action and then
        calls the check_<ACTION>_complete() method with the result in a loop
        until it returns True. If the methods are not provided, the call is
        omitted.

        Any args provided are passed to the handler.

        If a prefix is supplied, the handler method
        handle_<PREFIX>_<ACTION>() is called instead.
        """
        args = args or []
        handler_action = action.lower()
        check = getattr(self, 'check_%s_complete' % handler_action, None)

        if action_prefix:
            handler_action = '%s_%s' % (action_prefix.lower(),
                                        handler_action)
        handler = getattr(self, 'handle_%s' % handler_action, None)

        if callable(handler):
            handler_data = handler(*args)
            yield
            if callable(check):
                try:
                    while True:
                        try:
                            done = check(handler_data)
                        except PollDelay as delay:
                            yield delay.period
                        else:
                            if done:
                                break
                            else:
                                yield
                except Exception:
                    raise
                except:  # noqa
                    with excutils.save_and_reraise_exception():
                        canceller = getattr(
                            self,
                            'handle_%s_cancel' % handler_action,
                            None
                        )
                        if callable(canceller):
                            try:
                                canceller(handler_data)
                            except Exception:
                                LOG.exception(
                                    'Error cancelling resource %s',
                                    action
                                )
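
    # NOTE(editor): illustrative sketch of the handler protocol driven by
    # action_handler_task() above, for a hypothetical plugin. For
    # action=CREATE it calls handle_create(), then loops on
    # check_create_complete() with the returned handler_data until True:
    #
    #     def handle_create(self):
    #         return self.client().servers.create(...)  # handler_data
    #
    #     def check_create_complete(self, server):
    #         return self.client().servers.get(server.id).status == 'ACTIVE'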
    @scheduler.wrappertask
    def _do_action(self, action, pre_func=None, resource_data=None):
        """Perform a transition to a new state via a specified action.

        Action should be e.g. self.CREATE, self.UPDATE etc.; we set the
        status based on this, and the transition is handled by calling the
        corresponding handle_* and check_*_complete functions.
        Note that pre_func is an optional function reference which will be
        called before the handle_<action> function.

        If the resource does not declare a check_<action>_complete function,
        we declare COMPLETE status as soon as the handle_<action> call has
        finished, and if no handle_<action> function is declared, then we do
        nothing, which is useful e.g. if the resource requires no action for
        a given state transition.
        """
        assert action in self.ACTIONS, 'Invalid action %s' % action

        with self._action_recorder(action):
            if callable(pre_func):
                pre_func()

            handler_args = [resource_data] if resource_data is not None \
                else []
            yield self.action_handler_task(action, args=handler_args)

    def _update_stored_properties(self):
        old_props = self._stored_properties_data
        self._stored_properties_data = function.resolve(self.properties.data)
        if self._stored_properties_data != old_props:
            self._rsrc_prop_data_id = None
            self.attributes.reset_resolved_values()

    def referenced_attrs(self, stk_defn=None,
                         in_resources=True, in_outputs=True,
                         load_all=False):
        """Return the set of all attributes referenced in the template.

        This enables the resource to calculate which of its attributes will
        be used. By default, attributes referenced in either other resources
        or outputs will be included. Either can be excluded by setting the
        `in_resources` or `in_outputs` parameters to False. To limit to a
        subset of outputs, pass an iterable of the output names to examine
        for the `in_outputs` parameter.

        The set of referenced attributes is calculated from the
        StackDefinition object provided, or from the stack's current
        definition if none is passed.
        """
        if stk_defn is None:
            stk_defn = self.stack.defn

        def get_dep_attrs(source):
            return set(itertools.chain.from_iterable(
                s.dep_attrs(self.name, load_all) for s in source))

        refd_attrs = set()
        if in_resources:
            enabled_resources = stk_defn.enabled_rsrc_names()
            refd_attrs |= get_dep_attrs(stk_defn.resource_definition(r_name)
                                        for r_name in enabled_resources)

        subset_outputs = isinstance(in_outputs, collections.Iterable)
        if subset_outputs or in_outputs:
            if not subset_outputs:
                in_outputs = stk_defn.enabled_output_names()
            refd_attrs |= get_dep_attrs(stk_defn.output_definition(op_name)
                                        for op_name in in_outputs)

        if attributes.ALL_ATTRIBUTES in refd_attrs:
            refd_attrs.remove(attributes.ALL_ATTRIBUTES)
            refd_attrs |= (set(self.attributes) - {self.SHOW})

        return refd_attrs
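
    # NOTE(editor): illustrative example with made-up names: if a template
    # contains {get_attr: [server, first_address]} in another resource and
    # {get_attr: [server, networks]} in an output, then for the 'server'
    # resource referenced_attrs() returns {'first_address', 'networks'},
    # while referenced_attrs(in_outputs=False) returns {'first_address'}.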
    def node_data(self, stk_defn=None, for_resources=True,
                  for_outputs=False):
        """Return a NodeData object representing the resource.

        The NodeData object returned contains basic data about the resource,
        including its name, ID and state, as well as its reference ID and any
        attribute values that are used.

        By default, those attribute values that are referenced by other
        resources are included. These can be ignored by setting the
        for_resources parameter to False. If the for_outputs parameter is
        True, those attribute values that are referenced by stack outputs are
        included. If the for_outputs parameter is an iterable of output
        names, only those attribute values referenced by the specified stack
        outputs are included.

        The set of referenced attributes is calculated from the
        StackDefinition object provided, or from the stack's current
        definition if none is passed.

        After calling this method, the resource's attribute cache is
        populated with any cacheable attribute values referenced by stack
        outputs, even if they are not also referenced by other resources.
        """
        def get_attrs(attrs, cacheable_only=False):
            for attr in attrs:
                path = (attr,) if isinstance(attr,
                                             six.string_types) else attr
                if (cacheable_only and
                    (self.attributes.get_cache_mode(path[0]) ==
                     attributes.Schema.CACHE_NONE)):
                    continue
                if self.action == self.INIT:
                    if (path[0] in self.attributes or
                        (type(self).get_attribute !=
                         Resource.get_attribute or
                         type(self).FnGetAtt != Resource.FnGetAtt)):
                        # TODO(ricolin) make better placeholder values here
                        yield attr, None
                else:
                    try:
                        yield attr, self.FnGetAtt(*path)
                    except exception.InvalidTemplateAttribute as ita:
                        # Attribute doesn't exist, so don't store it.
                        # Whatever tries to access it will get another
                        # InvalidTemplateAttribute exception at that point
                        LOG.info('%s', ita)
                    except Exception as exc:
                        # Store the exception that occurred. It will be
                        # re-raised when something tries to access it, or
                        # when we try to serialise the NodeData.
                        yield attr, exc

        load_all = not self.stack.in_convergence_check
        dep_attrs = self.referenced_attrs(stk_defn,
                                          in_resources=for_resources,
                                          in_outputs=for_outputs,
                                          load_all=load_all)

        # Ensure all attributes referenced in outputs get cached
        if for_outputs is False and self.stack.convergence:
            out_attrs = self.referenced_attrs(stk_defn, in_resources=False,
                                              load_all=load_all)
            for e in get_attrs(out_attrs - dep_attrs, cacheable_only=True):
                pass

        # Calculate attribute values *before* reference ID, to potentially
        # save an extra RPC call in TemplateResource
        attribute_values = dict(get_attrs(dep_attrs))

        return node_data.NodeData(self.id, self.name, self.uuid,
                                  self.FnGetRefId(),
                                  attribute_values,
                                  self.action, self.status)

    def preview(self):
        """Default implementation of Resource.preview.

        This method should be overridden by child classes for specific
        behavior.
        """
        return self

    def create_convergence(self, template_id, resource_data, engine_id,
                           timeout, progress_callback=None):
        """Creates the resource by invoking the scheduler TaskRunner."""
        self._calling_engine_id = engine_id
        self.requires = list(
            set(data.primary_key for data in resource_data.values()
                if data is not None)
        )
        self.current_template_id = template_id
        if self.stack.adopt_stack_data is None:
            runner = scheduler.TaskRunner(self.create)
        else:
            adopt_data = self.stack._adopt_kwargs(self)
            runner = scheduler.TaskRunner(self.adopt, **adopt_data)

        runner(timeout=timeout, progress_callback=progress_callback)

    def validate_external(self):
        if self.external_id is not None:
            try:
                self.resource_id = self.external_id
                self._show_resource()
            except Exception as ex:
                if self.client_plugin().is_not_found(ex):
                    error_message = (_("Invalid external resource: Resource "
                                       "%(external_id)s (%(type)s) can not "
                                       "be found.") %
                                     {'external_id': self.external_id,
                                      'type': self.type()})
                    raise exception.StackValidationFailed(
                        message="%s" % error_message)
                raise
    @scheduler.wrappertask
    def create(self):
        """Create the resource.

        Subclasses should provide a handle_create() method to customise
        creation.
        """
        action = self.CREATE
        if (self.action, self.status) != (self.INIT, self.COMPLETE):
            exc = exception.Error(_('State %s invalid for create')
                                  % six.text_type(self.state))
            raise exception.ResourceFailure(exc, self, action)

        if self.external_id is not None:
            yield self._do_action(self.ADOPT,
                                  resource_data={
                                      'resource_id': self.external_id})
            self.check()
            return

        # This method can be called when we replace a resource, too. In that
        # case, a hook has already been dealt with in `Resource.update` so we
        # shouldn't do it here again:
        if self.stack.action == self.stack.CREATE:
            yield self._break_if_required(
                self.CREATE, environment.HOOK_PRE_CREATE)

        LOG.info('creating %s', self)

        # Re-resolve the template, since if the resource Ref's
        # the StackId pseudo parameter, it will change after
        # the parser.Stack is stored (which is after the resources
        # are __init__'d, but before they are create()'d). We also
        # do client lookups for RESOLVE translation rules here.
        self.reparse()
        self._update_stored_properties()

        count = {self.CREATE: 0, self.DELETE: 0}

        retry_limit = max(cfg.CONF.action_retry_limit, 0)
        first_failure = None

        while (count[self.CREATE] <= retry_limit and
               count[self.DELETE] <= retry_limit):
            pre_func = None
            if count[action] > 0:
                delay = timeutils.retry_backoff_delay(count[action],
                                                      jitter_max=2.0)
                waiter = scheduler.TaskRunner(self.pause)
                yield waiter.as_task(timeout=delay)
            elif action == self.CREATE:
                # Only validate properties in first create call.
                pre_func = self.properties.validate

            try:
                yield self._do_action(action, pre_func)
                if action == self.CREATE:
                    first_failure = None
                    break
                else:
                    action = self.CREATE
            except exception.ResourceFailure as failure:
                if isinstance(failure.exc, exception.StackValidationFailed):
                    path = [self.t.name]
                    path.extend(failure.exc.path)
                    raise exception.ResourceFailure(
                        exception_or_error=exception.StackValidationFailed(
                            error=failure.exc.error,
                            path=path,
                            message=failure.exc.error_message
                        ),
                        resource=failure.resource,
                        action=failure.action
                    )
                if not isinstance(failure.exc, exception.ResourceInError):
                    raise failure

                count[action] += 1
                if action == self.CREATE:
                    action = self.DELETE
                    count[action] = 0

                if first_failure is None:
                    # Save the first exception
                    first_failure = failure

        if first_failure:
            raise first_failure

        if self.stack.action == self.stack.CREATE:
            yield self._break_if_required(
                self.CREATE, environment.HOOK_POST_CREATE)
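
    # NOTE(editor): the create() retry loop above alternates CREATE and
    # DELETE attempts when the resource lands in ERROR
    # (exception.ResourceInError), backing off between attempts via
    # timeutils.retry_backoff_delay(). For illustration, with
    # action_retry_limit=2 the worst case is CREATE, DELETE, CREATE,
    # DELETE, CREATE -- after which the first failure is re-raised.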
""" rules = self.translation_rules(properties) or [] properties.update_translation( rules, client_resolve=client_resolve, ignore_resolve_error=ignore_resolve_error) def cancel_grace_period(self): canceller = getattr(self, 'handle_%s_cancel' % self.action.lower(), None) if callable(canceller): return None return cfg.CONF.error_wait_time def _get_resource_info(self, resource_data): if not resource_data: return None, None, None return (resource_data.get('resource_id'), resource_data.get('resource_data'), resource_data.get('metadata')) def needs_replace(self, after_props): """Mandatory replace based on certain properties.""" return False def needs_replace_with_prop_diff(self, changed_properties_set, after_props, before_props): """Needs replace based on prop_diff.""" return False def needs_replace_with_tmpl_diff(self, tmpl_diff): """Needs replace based on tmpl_diff.""" return False def needs_replace_failed(self): """Needs replace if resource is in *_FAILED.""" return True def _needs_update(self, after, before, after_props, before_props, prev_resource, check_init_complete=True): if self.status == self.FAILED: # always replace when a resource is in CHECK_FAILED if self.action == self.CHECK or self.needs_replace_failed(): raise UpdateReplace(self) if self.state == (self.DELETE, self.COMPLETE): raise UpdateReplace(self) if (check_init_complete and self.state == (self.INIT, self.COMPLETE)): raise UpdateReplace(self) if self.needs_replace(after_props): raise UpdateReplace(self) if before != after.freeze(): return True try: return before_props != after_props except ValueError: return True def _check_for_convergence_replace(self, restricted_actions): if 'replace' in restricted_actions: ex = exception.ResourceActionRestricted(action='replace') failure = exception.ResourceFailure(ex, self, self.UPDATE) self._add_event(self.UPDATE, self.FAILED, six.text_type(ex)) raise failure else: raise UpdateReplace(self.name) def update_convergence(self, template_id, resource_data, engine_id, timeout, new_stack, progress_callback=None): """Update the resource synchronously. Persist the resource's current_template_id to template_id and resource's requires to list of the required resource ids from the given resource_data and existing resource's requires, then updates the resource by invoking the scheduler TaskRunner. """ def update_templ_id_and_requires(persist=True): self.current_template_id = template_id self.requires = list( set(data.primary_key for data in resource_data.values() if data is not None) ) if not persist: return self.store(lock=self.LOCK_RESPECT) self._calling_engine_id = engine_id # Check that the resource type matches. If the type has changed by a # legitimate substitution, the load()ed resource will already be of # the new type. 
    def _check_for_convergence_replace(self, restricted_actions):
        if 'replace' in restricted_actions:
            ex = exception.ResourceActionRestricted(action='replace')
            failure = exception.ResourceFailure(ex, self, self.UPDATE)
            self._add_event(self.UPDATE, self.FAILED, six.text_type(ex))
            raise failure
        else:
            raise UpdateReplace(self.name)

    def update_convergence(self, template_id, resource_data, engine_id,
                           timeout, new_stack, progress_callback=None):
        """Update the resource synchronously.

        Persist the resource's current_template_id to template_id, and the
        resource's requires to the list of required resource ids from the
        given resource_data and the existing resource's requires, then
        update the resource by invoking the scheduler TaskRunner.
        """
        def update_templ_id_and_requires(persist=True):
            self.current_template_id = template_id
            self.requires = list(
                set(data.primary_key for data in resource_data.values()
                    if data is not None)
            )

            if not persist:
                return

            self.store(lock=self.LOCK_RESPECT)

        self._calling_engine_id = engine_id

        # Check that the resource type matches. If the type has changed by a
        # legitimate substitution, the load()ed resource will already be of
        # the new type.
        registry = new_stack.env.registry
        new_res_def = new_stack.defn.resource_definition(self.name)
        new_res_type = registry.get_class_to_instantiate(
            new_res_def.resource_type, resource_name=self.name)
        if type(self) is not new_res_type:
            restrictions = registry.get_rsrc_restricted_actions(self.name)
            self._check_for_convergence_replace(restrictions)

        action_rollback = self.stack.action == self.stack.ROLLBACK
        status_in_progress = self.stack.status == self.stack.IN_PROGRESS
        if action_rollback and status_in_progress and self.replaced_by:
            try:
                self.restore_prev_rsrc(convergence=True)
            except Exception as e:
                failure = exception.ResourceFailure(e, self, self.action)
                self.state_set(self.UPDATE, self.FAILED,
                               six.text_type(failure))
                raise failure
            self.replaced_by = None

        runner = scheduler.TaskRunner(
            self.update, new_res_def,
            update_templ_func=update_templ_id_and_requires)
        try:
            runner(timeout=timeout, progress_callback=progress_callback)
        except UpdateReplace:
            raise
        except exception.UpdateInProgress:
            raise
        except BaseException:
            with excutils.save_and_reraise_exception():
                update_templ_id_and_requires(persist=True)
    def preview_update(self, after, before, after_props, before_props,
                       prev_resource, check_init_complete=False):
        """Simulates update without actually updating the resource.

        Raises UpdateReplace if replacement is required, or returns True if
        an in-place update is required.
        """
        if self._needs_update(after, before, after_props, before_props,
                              prev_resource, check_init_complete):
            tmpl_diff = self.update_template_diff(after.freeze(), before)
            if tmpl_diff and self.needs_replace_with_tmpl_diff(tmpl_diff):
                raise UpdateReplace(self)

            self.update_template_diff_properties(after_props, before_props)
            return True
        else:
            return False

    def _check_restricted_actions(self, actions, after, before,
                                  after_props, before_props,
                                  prev_resource):
        """Checks for restricted actions.

        Raises ResourceActionRestricted if the resource requires update or
        replace and the required action is restricted.

        Otherwise, raises UpdateReplace if replacement is required, or
        returns True if an in-place update is required.
        """
        try:
            if self.preview_update(after, before, after_props, before_props,
                                   prev_resource, check_init_complete=True):
                if 'update' in actions:
                    raise exception.ResourceActionRestricted(action='update')
                return True
        except UpdateReplace:
            if 'replace' in actions:
                raise exception.ResourceActionRestricted(action='replace')
            raise

        return False

    def _prepare_update_props(self, after, before):

        before_props = before.properties(self.properties_schema,
                                         self.context)

        # Regenerate the schema, else validation would fail
        self.regenerate_info_schema(after)
        after.set_translation_rules(self.translation_rules(self.properties))
        after_props = after.properties(self.properties_schema,
                                       self.context)
        self.translate_properties(after_props)
        self.translate_properties(before_props, ignore_resolve_error=True)

        if (cfg.CONF.observe_on_update or self.converge) and before_props:
            if not self.resource_id:
                raise UpdateReplace(self)

            try:
                resource_reality = self.get_live_state(before_props)
                if resource_reality:
                    self._update_properties_with_live_state(before_props,
                                                            resource_reality)
            except exception.EntityNotFound:
                raise UpdateReplace(self)
            except Exception as ex:
                LOG.warning("Resource cannot be updated with its "
                            "live state in case of next "
                            "error: %s", ex)
        return after_props, before_props

    def _prepare_update_replace_handler(self, action):
        """Return the handler method for preparing to replace a resource.

        This may be either restore_prev_rsrc() (in the case of a legacy
        rollback) or, more typically, prepare_for_replace().

        If the plugin has not overridden the method, then None is returned
        in place of the default method (which is empty anyway).
        """
        if (self.stack.action == 'ROLLBACK' and
                self.stack.status == 'IN_PROGRESS' and
                not self.stack.convergence):
            # handle case, when it's rollback and we should restore
            # old resource
            if self.restore_prev_rsrc != Resource.restore_prev_rsrc:
                return self.restore_prev_rsrc
        else:
            if self.prepare_for_replace != Resource.prepare_for_replace:
                return self.prepare_for_replace
        return None

    def _prepare_update_replace(self, action):
        handler = self._prepare_update_replace_handler(action)
        if handler is None:
            return

        try:
            handler()
        except Exception as e:
            # if any exception happens, we should set the resource to
            # FAILED, then raise ResourceFailure
            failure = exception.ResourceFailure(e, self, action)
            self.state_set(action, self.FAILED, six.text_type(failure))
            raise failure

    @classmethod
    def check_is_substituted(cls, new_res_type):
        support_status = getattr(cls, 'support_status', None)
        if support_status:
            is_substituted = support_status.is_substituted(new_res_type)
            return is_substituted
        return False
    @scheduler.wrappertask
    def update(self, after, before=None, prev_resource=None,
               update_templ_func=None):
        """Return a task to update the resource.

        Subclasses should provide a handle_update() method to customise
        update; the base-class handle_update will fail by default.
        """
        action = self.UPDATE
        assert isinstance(after, rsrc_defn.ResourceDefinition)
        if before is None:
            before = self.frozen_definition()

        after_external_id = after.external_id()
        if self.external_id != after_external_id:
            msg = _("Update to property %(prop)s of %(name)s (%(res)s)"
                    ) % {'prop': hot_tmpl.HOTemplate20161014.RES_EXTERNAL_ID,
                         'res': self.type(), 'name': self.name}
            exc = exception.NotSupported(feature=msg)
            raise exception.ResourceFailure(exc, self, action)
        elif after_external_id is not None:
            LOG.debug("Skip update on external resource.")
            if update_templ_func is not None:
                update_templ_func(persist=True)
            return

        after_props, before_props = self._prepare_update_props(after, before)

        yield self._break_if_required(
            self.UPDATE, environment.HOOK_PRE_UPDATE)

        try:
            registry = self.stack.env.registry
            restr_actions = registry.get_rsrc_restricted_actions(self.name)
            if restr_actions:
                needs_update = self._check_restricted_actions(restr_actions,
                                                              after, before,
                                                              after_props,
                                                              before_props,
                                                              prev_resource)
            else:
                needs_update = self._needs_update(after, before,
                                                  after_props, before_props,
                                                  prev_resource)
        except UpdateReplace:
            with excutils.save_and_reraise_exception():
                if self._prepare_update_replace_handler(action) is not None:
                    with self.lock(self._calling_engine_id):
                        self._prepare_update_replace(action)
        except exception.ResourceActionRestricted as ae:
            failure = exception.ResourceFailure(ae, self, action)
            self._add_event(action, self.FAILED, six.text_type(ae))
            raise failure

        if not needs_update:
            if update_templ_func is not None:
                update_templ_func(persist=True)
            if self.status == self.FAILED:
                status_reason = _('Update status to COMPLETE for '
                                  'FAILED resource neither update '
                                  'nor replace.')
                lock = (self.LOCK_RESPECT if self.stack.convergence
                        else self.LOCK_NONE)
                self.state_set(self.action, self.COMPLETE,
                               status_reason, lock=lock)
            return

        if not self.stack.convergence:
            if (self.action, self.status) in (
                    (self.CREATE, self.IN_PROGRESS),
                    (self.UPDATE, self.IN_PROGRESS),
                    (self.ADOPT, self.IN_PROGRESS)):
                exc = Exception(_('Resource update already requested'))
                raise exception.ResourceFailure(exc, self, action)

        LOG.info('updating %s', self)
        self.updated_time = datetime.utcnow()

        with self._action_recorder(action, UpdateReplace):
            after_props.validate()
            self.properties = before_props
            tmpl_diff = self.update_template_diff(after.freeze(), before)

            try:
                if tmpl_diff and self.needs_replace_with_tmpl_diff(
                        tmpl_diff):
                    raise UpdateReplace(self)

                prop_diff = self.update_template_diff_properties(
                    after_props, before_props)
                yield self.action_handler_task(action,
                                               args=[after, tmpl_diff,
                                                     prop_diff])
            except UpdateReplace:
                with excutils.save_and_reraise_exception():
                    self._prepare_update_replace(action)

            self.t = after
            self.reparse()
            self._update_stored_properties()

            if update_templ_func is not None:
                # template/requires will be persisted by _action_recorder()
                update_templ_func(persist=False)

        yield self._break_if_required(
            self.UPDATE, environment.HOOK_POST_UPDATE)
""" action = self.RESUME # Allow resume a resource if it's SUSPEND_COMPLETE # or RESUME_FAILED or RESUME_COMPLETE. Recommend to check # the real state of physical resource in handle_resume() if self.state not in ((self.SUSPEND, self.COMPLETE), (self.RESUME, self.FAILED), (self.RESUME, self.COMPLETE)): exc = exception.Error(_('State %s invalid for resume') % six.text_type(self.state)) raise exception.ResourceFailure(exc, self, action) LOG.info('resuming %s', self) with self.frozen_properties(): return self._do_action(action) def snapshot(self): """Snapshot the resource and return the created data, if any.""" LOG.info('snapshotting %s', self) with self.frozen_properties(): return self._do_action(self.SNAPSHOT) @scheduler.wrappertask def delete_snapshot(self, data): yield self.action_handler_task('delete_snapshot', args=[data]) def physical_resource_name(self): if self.id is None or self.action == self.INIT: return None name = '%s-%s-%s' % (self.stack.name.rstrip('*'), self.name, short_id.get_id(self.uuid)) if self.physical_resource_name_limit: name = self.reduce_physical_resource_name( name, self.physical_resource_name_limit) return name @staticmethod def reduce_physical_resource_name(name, limit): """Reduce length of physical resource name to a limit. The reduced name will consist of the following: * the first 2 characters of the name * a hyphen * the end of the name, truncated on the left to bring the name length within the limit :param name: The name to reduce the length of :param limit: The max length limit :returns: A name whose length is less than or equal to the limit """ if len(name) <= limit: return name if limit < 4: raise ValueError(_('limit cannot be less than 4')) postfix_length = limit - 3 return name[0:2] + '-' + name[-postfix_length:] def validate(self): """Validate the resource. This may be overridden by resource plugins to add extra validation logic specific to the resource implementation. """ LOG.info('Validating %s', self) return self.validate_template() def validate_template(self): """Validate structural/syntax aspects of the resource definition. Resource plugins should not override this, because this interface is expected to be called pre-create so things normally valid in an overridden validate() such as accessing properties may not work. """ self._validate_service_availability( self.stack.context, self.t.resource_type ) try: self.t.validate() self.validate_deletion_policy(self.t.deletion_policy()) self.t.update_policy(self.update_policy_schema, self.context).validate() validate = self.properties.validate( with_value=self.stack.strict_validate) except exception.StackValidationFailed as ex: path = [self.stack.t.RESOURCES, self.t.name] if ex.path: path.append(self.stack.t.get_section_name(ex.path[0])) path.extend(ex.path[1:]) raise exception.StackValidationFailed( error=ex.error, path=path, message=ex.error_message) return validate @classmethod def validate_deletion_policy(cls, policy): path = rsrc_defn.DELETION_POLICY if policy not in rsrc_defn.ResourceDefinition.DELETION_POLICIES: msg = _('Invalid deletion policy "%s"') % policy raise exception.StackValidationFailed(message=msg, path=path) if policy == rsrc_defn.ResourceDefinition.SNAPSHOT: if not callable(getattr(cls, 'handle_snapshot_delete', None)): msg = _('"%s" deletion policy not supported') % policy raise exception.StackValidationFailed(message=msg, path=path) def _update_replacement_data(self, template_id): # Update the replacement resource's needed_by and replaces # fields. 
    def validate(self):
        """Validate the resource.

        This may be overridden by resource plugins to add extra
        validation logic specific to the resource implementation.
        """
        LOG.info('Validating %s', self)
        return self.validate_template()

    def validate_template(self):
        """Validate structural/syntax aspects of the resource definition.

        Resource plugins should not override this, because this interface
        is expected to be called pre-create so things normally valid
        in an overridden validate() such as accessing properties
        may not work.
        """
        self._validate_service_availability(
            self.stack.context,
            self.t.resource_type
        )
        try:
            self.t.validate()
            self.validate_deletion_policy(self.t.deletion_policy())
            self.t.update_policy(self.update_policy_schema,
                                 self.context).validate()
            validate = self.properties.validate(
                with_value=self.stack.strict_validate)
        except exception.StackValidationFailed as ex:
            path = [self.stack.t.RESOURCES, self.t.name]
            if ex.path:
                path.append(self.stack.t.get_section_name(ex.path[0]))
                path.extend(ex.path[1:])
            raise exception.StackValidationFailed(
                error=ex.error,
                path=path,
                message=ex.error_message)
        return validate

    @classmethod
    def validate_deletion_policy(cls, policy):
        path = rsrc_defn.DELETION_POLICY
        if policy not in rsrc_defn.ResourceDefinition.DELETION_POLICIES:
            msg = _('Invalid deletion policy "%s"') % policy
            raise exception.StackValidationFailed(message=msg, path=path)

        if policy == rsrc_defn.ResourceDefinition.SNAPSHOT:
            if not callable(getattr(cls, 'handle_snapshot_delete', None)):
                msg = _('"%s" deletion policy not supported') % policy
                raise exception.StackValidationFailed(message=msg, path=path)

    def _update_replacement_data(self, template_id):
        # Update the replacement resource's needed_by and replaces
        # fields. Make sure that the replacement belongs to the given
        # template and there is no engine working on it.
        if self.replaced_by is None:
            return

        try:
            db_res = resource_objects.Resource.get_obj(
                self.context, self.replaced_by,
                fields=('current_template_id', 'atomic_key'))
        except exception.NotFound:
            LOG.info("Could not find replacement of resource %(name)s "
                     "with id %(id)s while updating needed_by.",
                     {'name': self.name, 'id': self.replaced_by})
            return

        if (db_res.current_template_id == template_id):
            # Following update failure is ignorable; another
            # update might have locked/updated the resource.
            if db_res.select_and_update(
                    {'needed_by': self.needed_by,
                     'replaces': None},
                    atomic_key=db_res.atomic_key,
                    expected_engine_id=None):
                self._incr_atomic_key(self._atomic_key)

    def delete_convergence(self, template_id, input_data, engine_id,
                           timeout, progress_callback=None):
        """Destroys the resource if it doesn't belong to given template.

        The given template is supposed to be the current template being
        provisioned.

        Also, since this resource is visited as part of the clean-up phase,
        the needed_by should be updated. If this resource was replaced by
        a more recent resource, then delete this one and update the
        replacement resource's needed_by and replaces fields.
        """
        self._calling_engine_id = engine_id
        self.needed_by = list(set(v for v in input_data.values()
                                  if v is not None))

        if self.current_template_id != template_id:
            # just delete the resources in INIT state
            if self.action == self.INIT:
                try:
                    resource_objects.Resource.delete(self.context, self.id)
                except exception.NotFound:
                    pass
            else:
                runner = scheduler.TaskRunner(self.delete)
                runner(timeout=timeout,
                       progress_callback=progress_callback)
                self._update_replacement_data(template_id)

    def handle_delete(self):
        """Default implementation; should be overridden by resources."""
        if self.entity and self.resource_id is not None:
            try:
                obj = getattr(self.client(), self.entity)
                obj.delete(self.resource_id)
            except Exception as ex:
                if self.default_client_name is not None:
                    self.client_plugin().ignore_not_found(ex)
                    return None
                raise
            return self.resource_id
In that # case, a hook has already been dealt with in `Resource.update` so we # shouldn't do it here again: if self.stack.action == self.stack.DELETE: yield self._break_if_required( self.DELETE, environment.HOOK_PRE_DELETE) LOG.info('deleting %s', self) if self._stored_properties_data is not None: # On delete we can't rely on re-resolving the properties # so use the stored frozen_definition instead self.properties = self.frozen_definition().properties( self.properties_schema, self.context) self.translate_properties(self.properties) with self._action_recorder(action): if self.abandon_in_progress: deletion_policy = self.t.RETAIN else: deletion_policy = self.t.deletion_policy() if deletion_policy != self.t.RETAIN: if deletion_policy == self.t.SNAPSHOT: action_args = [[initial_state], 'snapshot'] else: action_args = [] count = -1 retry_limit = max(cfg.CONF.action_retry_limit, 0) while True: count += 1 LOG.info('delete %(name)s attempt %(attempt)d' % {'name': six.text_type(self), 'attempt': count+1}) if count: delay = timeutils.retry_backoff_delay(count, jitter_max=2.0) waiter = scheduler.TaskRunner(self.pause) yield waiter.as_task(timeout=delay) with excutils.exception_filter(should_retry): yield self.action_handler_task(action, *action_args) break if self.stack.action == self.stack.DELETE: yield self._break_if_required( self.DELETE, environment.HOOK_POST_DELETE) @scheduler.wrappertask def destroy(self): """A task to delete the resource and remove it from the database.""" yield self.delete() if self.id is None: return try: resource_objects.Resource.delete(self.context, self.id) except exception.NotFound: # Don't fail on delete if the db entry has # not been created yet. pass self.id = None def resource_id_set(self, inst): self.resource_id = inst if self.id is not None: try: resource_objects.Resource.update_by_id( self.context, self.id, {'physical_resource_id': self.resource_id}) except Exception as ex: LOG.warning('db error %s', ex) def store(self, set_metadata=False, lock=LOCK_NONE): """Create the resource in the database. If self.id is set, we update the existing stack. 
""" if not self.root_stack_id: self.root_stack_id = self.stack.root_stack_id() rs = {'action': self.action, 'status': self.status, 'status_reason': six.text_type(self.status_reason), 'stack_id': self.stack.id, 'physical_resource_id': self.resource_id, 'name': self.name, 'rsrc_prop_data_id': self._create_or_replace_rsrc_prop_data(), 'needed_by': self.needed_by, 'requires': self.requires, 'replaces': self.replaces, 'replaced_by': self.replaced_by, 'current_template_id': self.current_template_id, 'root_stack_id': self.root_stack_id, 'updated_at': self.updated_time, 'properties_data': None} if set_metadata: metadata = self.t.metadata() rs['rsrc_metadata'] = metadata self._rsrc_metadata = metadata if self.id is not None: if (lock == self.LOCK_NONE or (lock in {self.LOCK_ACQUIRE, self.LOCK_RELEASE} and self._calling_engine_id is None)): resource_objects.Resource.update_by_id( self.context, self.id, rs) if lock != self.LOCK_NONE: LOG.error('No calling_engine_id in store() %s', six.text_type(rs)) else: self._store_with_lock(rs, lock) else: new_rs = resource_objects.Resource.create(self.context, rs) self.id = new_rs.id self.uuid = new_rs.uuid self.created_time = new_rs.created_at def _store_with_lock(self, rs, lock): if lock == self.LOCK_ACQUIRE: rs['engine_id'] = self._calling_engine_id expected_engine_id = None elif lock == self.LOCK_RESPECT: expected_engine_id = None elif lock == self.LOCK_RELEASE: expected_engine_id = self._calling_engine_id rs['engine_id'] = None else: assert False, "Invalid lock action: %s" % lock if resource_objects.Resource.select_and_update_by_id( self.context, self.id, rs, expected_engine_id, self._atomic_key): self._incr_atomic_key(self._atomic_key) else: LOG.info('Resource %s is locked or does not exist', six.text_type(self)) LOG.debug('Resource id:%(resource_id)s locked or does not exist. ' 'Expected atomic_key:%(atomic_key)s, ' 'accessing from engine_id:%(engine_id)s', {'resource_id': self.id, 'atomic_key': self._atomic_key, 'engine_id': self._calling_engine_id}) raise exception.UpdateInProgress(self.name) def _add_event(self, action, status, reason): """Add a state change event to the database.""" physical_res_id = self.resource_id or self.physical_resource_name() ev = event.Event(self.context, self.stack, action, status, reason, physical_res_id, self._rsrc_prop_data_id, self._stored_properties_data, self.name, self.type()) ev.store() self.stack.dispatch_event(ev) @contextlib.contextmanager def lock(self, engine_id): self._calling_engine_id = engine_id try: if engine_id is not None: self._store_with_lock({}, self.LOCK_ACQUIRE) yield except exception.UpdateInProgress: raise except BaseException: with excutils.save_and_reraise_exception(): if engine_id is not None: self._store_with_lock({}, self.LOCK_RELEASE) else: if engine_id is not None: self._store_with_lock({}, self.LOCK_RELEASE) def _resolve_any_attribute(self, attr): """Method for resolving any attribute, including base attributes. This method uses basic _resolve_attribute method for resolving plugin-specific attributes. Base attributes will be resolved with corresponding method, which should be defined in each resource class. 
:param attr: attribute name, which will be resolved :returns: the resolved attribute value """ if attr in self.base_attributes_schema: # check resource_id, because it is usually required for getting # information about the resource if not self.resource_id: return None try: return getattr(self, '_{0}_resource'.format(attr))() except Exception as ex: if self.default_client_name is not None: self.client_plugin().ignore_not_found(ex) return None raise else: try: return self._resolve_attribute(attr) except Exception as ex: if self.default_client_name is not None: self.client_plugin().ignore_not_found(ex) return None raise def _show_resource(self): """Default implementation; should be overridden by resources. :returns: the map of resource information or None """ if self.entity: try: obj = getattr(self.client(), self.entity) resource = obj.get(self.resource_id) if isinstance(resource, dict): return resource else: return resource.to_dict() except AttributeError as ex: LOG.warning("Resolving 'show' attribute has failed: %s", ex) return None def get_live_resource_data(self): """Default implementation; can be overridden by resources. Get resource data and handle it with exceptions. """ try: resource_data = self._show_resource() except Exception as ex: if (self.default_client_name is not None and self.client_plugin().is_not_found(ex)): raise exception.EntityNotFound( entity='Resource', name=self.name) raise return resource_data def parse_live_resource_data(self, resource_properties, resource_data): """Default implementation; can be overridden by resources. Parse resource data for use in updating properties with the live state. :param resource_properties: properties of the stored resource plugin. :param resource_data: data from the current live state of a resource. """ resource_result = {} for key in self._update_allowed_properties: if key in resource_data: if key == 'name' and resource_properties.get(key) is None: # Some resources use `physical_resource_name` for the name # property when no name is provided during create, so we # shouldn't add the name to resource_data if it's None in # the property (it is most likely just the generated # `physical_resource_name`). continue resource_result[key] = resource_data.get(key) return resource_result def get_live_state(self, resource_properties): """Default implementation; should be overridden by resources. :param resource_properties: the resource's Properties object. :returns: dict of the real state of the resource's properties. """ resource_data = self.get_live_resource_data() if resource_data is None: return {} return self.parse_live_resource_data(resource_properties, resource_data) def _update_properties_with_live_state(self, resource_properties, live_properties): """Update resource properties data with live state properties. Note that live_properties can contain None values: a property may be set to some value while the live state has no such property (i.e. it resolves to None), in which case the property should be updated to None. """ for key in resource_properties: if key in live_properties: if resource_properties.get(key) != live_properties.get(key): resource_properties.data.update( {key: live_properties.get(key)}) def _resolve_attribute(self, name): """Default implementation of resolving resource's attributes. Should be overridden by resources that expose attributes.
:param name: The attribute to resolve :returns: the resource attribute named key """ # By default, no attributes resolve pass def regenerate_info_schema(self, definition): """Default implementation; should be overridden by resources. Should be overridden by resources that would require schema refresh during update, ex. TemplateResource. :definition: Resource Definition """ # By default, do not regenerate pass def state_reset(self): """Reset state to (INIT, COMPLETE).""" self.action = self.INIT self.status = self.COMPLETE def state_set(self, action, status, reason="state changed", lock=LOCK_NONE): if action not in self.ACTIONS: raise ValueError(_("Invalid action %s") % action) if status not in self.STATUSES: raise ValueError(_("Invalid status %s") % status) old_state = (self.action, self.status) new_state = (action, status) set_metadata = self.action == self.INIT self.action = action self.status = status self.status_reason = reason self.store(set_metadata, lock=lock) if new_state != old_state: self._add_event(action, status, reason) if status != self.COMPLETE: self.clear_stored_attributes() @property def state(self): """Returns state, tuple of action, status.""" return (self.action, self.status) def store_attributes(self): assert self.id is not None if self.status != self.COMPLETE or self.action in (self.INIT, self.DELETE): return if not self.attributes.has_new_cached_attrs(): return try: attr_data_id = resource_objects.Resource.store_attributes( self.context, self.id, self._atomic_key, self.attributes.cached_attrs, self._attr_data_id) if attr_data_id is not None: self._attr_data_id = attr_data_id except Exception as ex: LOG.error('store_attributes rsrc %(name)s %(id)s DB error %(ex)s', {'name': self.name, 'id': self.id, 'ex': ex}) def clear_stored_attributes(self): if self._attr_data_id: resource_objects.Resource.attr_data_delete( self.context, self.id, self._attr_data_id) self.attributes.reset_resolved_values() def get_reference_id(self): """Default implementation for function get_resource. This may be overridden by resource plugins to add extra logic specific to the resource implementation. """ if self.resource_id is not None: return six.text_type(self.resource_id) else: return six.text_type(self.name) def FnGetRefId(self): """For the intrinsic function Ref. :results: the id or name of the resource. """ return self.get_reference_id() def physical_resource_name_or_FnGetRefId(self): res_name = self.physical_resource_name() if res_name is not None: return six.text_type(res_name) else: return Resource.get_reference_id(self) def get_attribute(self, key, *path): """Default implementation for function get_attr and Fn::GetAtt. This may be overridden by resource plugins to add extra logic specific to the resource implementation. """ try: attribute = self.attributes[key] except KeyError: raise exception.InvalidTemplateAttribute(resource=self.name, key=key) return attributes.select_from_attribute(attribute, path) def FnGetAtt(self, key, *path): """For the intrinsic function Fn::GetAtt. :param key: the attribute key. :param path: a list of path components to select from the attribute. :returns: the attribute value. 
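For example (an illustrative call, not from the original code), ``FnGetAtt('addresses', 'private')`` resolves the ``addresses`` attribute and then selects its ``private`` entry via attributes.select_from_attribute().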
""" cache_custom = ((self.attributes.get_cache_mode(key) != attributes.Schema.CACHE_NONE) and (type(self).get_attribute != Resource.get_attribute)) if cache_custom: if path: full_key = sync_point.str_pack_tuple((key,) + path) else: full_key = key if full_key in self.attributes.cached_attrs: return self.attributes.cached_attrs[full_key] attr_val = self.get_attribute(key, *path) if cache_custom: self.attributes.set_cached_attr(full_key, attr_val) return attr_val def _signal_check_action(self): if self.action in self.no_signal_actions: self._add_event(self.action, self.status, 'Cannot signal resource during %s' % self.action) msg = _('Signal resource during %s') % self.action raise exception.NotSupported(feature=msg) def _signal_check_hook(self, details): if details and 'unset_hook' in details: hook = details['unset_hook'] if not environment.valid_hook_type(hook): msg = (_('Invalid hook type "%(hook)s" for %(resource)s') % {'hook': hook, 'resource': six.text_type(self)}) raise exception.InvalidBreakPointHook(message=msg) if not self.has_hook(hook): msg = (_('The "%(hook)s" hook is not defined ' 'on %(resource)s') % {'hook': hook, 'resource': six.text_type(self)}) raise exception.InvalidBreakPointHook(message=msg) def _unset_hook(self, details): # Clear the hook without interfering with resources' # `handle_signal` callbacks: hook = details['unset_hook'] self.clear_hook(hook) LOG.info('Clearing %(hook)s hook on %(resource)s', {'hook': hook, 'resource': six.text_type(self)}) self._add_event(self.action, self.status, "Hook %s is cleared" % hook) def _handle_signal(self, details): if not callable(getattr(self, 'handle_signal', None)): raise exception.ResourceActionNotSupported(action='signal') def get_string_details(): if details is None: return 'No signal details provided' if isinstance(details, six.string_types): return details if isinstance(details, dict): if all(k in details for k in ('previous', 'current', 'reason')): # this is from Ceilometer. auto = '%(previous)s to %(current)s (%(reason)s)' % details return 'alarm state changed from %s' % auto return 'Unknown' try: signal_result = self.handle_signal(details) if signal_result: reason_string = "Signal: %s" % signal_result else: reason_string = get_string_details() self._add_event('SIGNAL', self.status, reason_string) except NoActionRequired: # Don't log an event as it just spams the user. pass except Exception as ex: if hasattr(self, '_db_res_is_deleted'): # No spam required return LOG.info('signal %(name)s : %(msg)s', {'name': six.text_type(self), 'msg': six.text_type(ex)}, exc_info=True) failure = exception.ResourceFailure(ex, self) raise failure def signal(self, details=None, need_check=True): """Signal the resource. Returns True if the metadata for all resources in the stack needs to be regenerated as a result of the signal, False if it should not be. Subclasses should provide a handle_signal() method to implement the signal. The base-class raise an exception if no handler is implemented. 
""" if need_check: self._signal_check_hook(details) if details and 'unset_hook' in details: self._unset_hook(details) return False if need_check: self._signal_check_action() with self.frozen_properties(): self._handle_signal(details) return self.signal_needs_metadata_updates def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: raise UpdateReplace(self.name) def metadata_update(self, new_metadata=None): """No-op for resources which don't explicitly override this method.""" if new_metadata: LOG.warning("Resource %s does not implement metadata update", self.name) @classmethod def resource_to_template(cls, resource_type, template_type='cfn'): """Generate a provider template that mirrors the resource. :param resource_type: The resource type to be displayed in the template :param template_type: the template type to generate, cfn or hot. :returns: A template where the resource's properties_schema is mapped as parameters, and the resource's attributes_schema is mapped as outputs """ props_schema = {} for name, schema_dict in cls.properties_schema.items(): schema = properties.Schema.from_legacy(schema_dict) if schema.support_status.status != support.HIDDEN: props_schema[name] = schema params, props = (properties.Properties. schema_to_parameters_and_properties(props_schema, template_type)) resource_name = cls.__name__ outputs = attributes.Attributes.as_outputs(resource_name, cls, template_type) description = 'Initial template of %s' % resource_name return cls.build_template_dict(resource_name, resource_type, template_type, params, props, outputs, description) @staticmethod def build_template_dict(res_name, res_type, tmpl_type, params, props, outputs, description): if tmpl_type == 'hot': tmpl_dict = { hot_tmpl.HOTemplate20161014.VERSION: '2016-10-14', hot_tmpl.HOTemplate20161014.DESCRIPTION: description, hot_tmpl.HOTemplate20161014.PARAMETERS: params, hot_tmpl.HOTemplate20161014.OUTPUTS: outputs, hot_tmpl.HOTemplate20161014.RESOURCES: { res_name: { hot_tmpl.HOTemplate20161014.RES_TYPE: res_type, hot_tmpl.HOTemplate20161014.RES_PROPERTIES: props}}} else: tmpl_dict = { cfn_tmpl.CfnTemplate.ALTERNATE_VERSION: '2012-12-12', cfn_tmpl.CfnTemplate.DESCRIPTION: description, cfn_tmpl.CfnTemplate.PARAMETERS: params, cfn_tmpl.CfnTemplate.RESOURCES: { res_name: { cfn_tmpl.CfnTemplate.RES_TYPE: res_type, cfn_tmpl.CfnTemplate.RES_PROPERTIES: props} }, cfn_tmpl.CfnTemplate.OUTPUTS: outputs} return tmpl_dict def data(self): """Return the resource data for this resource. Use methods data_set and data_delete to modify the resource data for this resource. :returns: a dict representing the resource data for this resource. """ if self._data is None and self.id is not None: try: self._data = resource_data_objects.ResourceData.get_all(self) except exception.NotFound: pass return self._data or {} def data_set(self, key, value, redact=False): """Set a key in the resource data.""" resource_data_objects.ResourceData.set(self, key, value, redact) # force fetch all resource data from the database again self._data = None def data_delete(self, key): """Remove a key from the resource data. :returns: True if the key existed to delete. 
""" try: resource_data_objects.ResourceData.delete(self, key) except exception.NotFound: return False else: # force fetch all resource data from the database again self._data = None return True def _create_or_replace_rsrc_prop_data(self): if self._rsrc_prop_data_id is not None: return self._rsrc_prop_data_id if not self._stored_properties_data: return None self._rsrc_prop_data_id = \ rpd_objects.ResourcePropertiesData(self.context).create( self.context, self._stored_properties_data).id return self._rsrc_prop_data_id def is_using_neutron(self): try: sess_client = self.client('neutron').httpclient if not sess_client.get_endpoint(): return False except Exception: return False return True @staticmethod def _make_resolver(ref): """Return an attribute resolution method. This builds a resolver without a strong reference to this resource, to break a possible cycle. """ def resolve(attr): res = ref() if res is None: raise RuntimeError("Resource collected") return res._resolve_any_attribute(attr) return resolve heat-10.0.2/heat/engine/__init__.py0000666000175000017500000000000013343562337017035 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/api.py0000666000175000017500000005532413343562337016072 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from oslo_log import log as logging from oslo_utils import timeutils import six from heat.common.i18n import _ from heat.common import param_utils from heat.common import template_format from heat.common import timeutils as heat_timeutils from heat.engine import constraints as constr from heat.rpc import api as rpc_api LOG = logging.getLogger(__name__) def extract_args(params): """Extract arguments passed as parameters and return them as a dictionary. Extract any arguments passed as parameters through the API and return them as a dictionary. 
This allows us to filter the passed args and do type conversion where appropriate. """ kwargs = {} timeout_mins = params.get(rpc_api.PARAM_TIMEOUT) if timeout_mins not in ('0', 0, None): try: timeout = int(timeout_mins) except (ValueError, TypeError): LOG.exception('Timeout conversion failed') else: if timeout > 0: kwargs[rpc_api.PARAM_TIMEOUT] = timeout else: raise ValueError(_('Invalid timeout value %s') % timeout) for name in [rpc_api.PARAM_CONVERGE, rpc_api.PARAM_DISABLE_ROLLBACK]: if name in params: bool_value = param_utils.extract_bool(name, params[name]) kwargs[name] = bool_value adopt_data = params.get(rpc_api.PARAM_ADOPT_STACK_DATA) if adopt_data: try: adopt_data = template_format.simple_parse(adopt_data) except ValueError as exc: raise ValueError(_('Invalid adopt data: %s') % exc) kwargs[rpc_api.PARAM_ADOPT_STACK_DATA] = adopt_data tags = params.get(rpc_api.PARAM_TAGS) if tags: if not isinstance(tags, list): raise ValueError(_('Invalid tags, not a list: %s') % tags) for tag in tags: if not isinstance(tag, six.string_types): raise ValueError(_('Invalid tag, "%s" is not a string') % tag) if len(tag) > 80: raise ValueError(_('Invalid tag, "%s" is longer than 80 ' 'characters') % tag) # Comma is not allowed as per the API WG tagging guidelines if ',' in tag: raise ValueError(_('Invalid tag, "%s" contains a comma') % tag) kwargs[rpc_api.PARAM_TAGS] = tags return kwargs def _parse_object_status(status): """Parse input status into action and status if possible. This function parses a given string (or list of strings) and sees if it contains the action part. The action part is extracted if found. :param status: A string or a list of strings where each string contains a status to be checked. :returns: (actions, statuses) tuple, where actions is a set of actions extracted from the input status and statuses is a set of pure object statuses. """ if not isinstance(status, list): status = [status] status_set = set() action_set = set() for val in status: # Note: cannot reference Stack.STATUSES due to circular reference issue for s in ('COMPLETE', 'FAILED', 'IN_PROGRESS'): index = val.rfind(s) if index != -1: status_set.add(val[index:]) if index > 1: action_set.add(val[:index - 1]) break return action_set, status_set def translate_filters(params): """Translate filter names to their corresponding DB field names. :param params: A dictionary containing keys from engine.api.STACK_KEYS and other keys previously leaked to users. :returns: A dict containing only valid DB field names. """ key_map = { rpc_api.STACK_NAME: 'name', rpc_api.STACK_ACTION: 'action', rpc_api.STACK_STATUS: 'status', rpc_api.STACK_STATUS_DATA: 'status_reason', rpc_api.STACK_DISABLE_ROLLBACK: 'disable_rollback', rpc_api.STACK_TIMEOUT: 'timeout', rpc_api.STACK_OWNER: 'username', rpc_api.STACK_PARENT: 'owner_id', rpc_api.STACK_USER_PROJECT_ID: 'stack_user_project_id' } for key, field in key_map.items(): value = params.pop(key, None) if not value: continue fld_value = params.get(field, None) if fld_value: if not isinstance(fld_value, list): fld_value = [fld_value] if not isinstance(value, list): value = [value] value.extend(fld_value) params[field] = value # Deal with status which might be of form <ACTION>_<STATUS>, e.g. # "CREATE_FAILED". Note this logic is still not ideal due to the fact # that action and status are stored separately.
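# For example (an illustrative input, not from the original code), # translate_filters({'status': 'CREATE_FAILED'}) yields # {'status': 'FAILED', 'action': 'CREATE'}.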
if 'status' in params: a_set, s_set = _parse_object_status(params['status']) statuses = sorted(s_set) params['status'] = statuses[0] if len(statuses) == 1 else statuses if a_set: a = params.get('action', []) action_set = set(a) if isinstance(a, list) else set([a]) actions = sorted(action_set.union(a_set)) params['action'] = actions[0] if len(actions) == 1 else actions return params def format_stack_outputs(outputs, resolve_value=False): """Return a representation of the given output template. Return a representation of the given output template for the given stack that matches the API output expectations. """ return [format_stack_output(outputs[key], resolve_value=resolve_value) for key in outputs] def format_stack_output(output_defn, resolve_value=True): result = { rpc_api.OUTPUT_KEY: output_defn.name, rpc_api.OUTPUT_DESCRIPTION: output_defn.description(), } if resolve_value: value = None try: value = output_defn.get_value() except Exception as ex: # We don't need error raising, just adding output_error to # resulting dict. result.update({rpc_api.OUTPUT_ERROR: six.text_type(ex)}) finally: result.update({rpc_api.OUTPUT_VALUE: value}) return result def format_stack(stack, preview=False, resolve_outputs=True): """Return a representation of the given stack. Return a representation of the given stack that matches the API output expectations. """ updated_time = heat_timeutils.isotime(stack.updated_time) created_time = heat_timeutils.isotime(stack.created_time or timeutils.utcnow()) deleted_time = heat_timeutils.isotime(stack.deleted_time) info = { rpc_api.STACK_NAME: stack.name, rpc_api.STACK_ID: dict(stack.identifier()), rpc_api.STACK_CREATION_TIME: created_time, rpc_api.STACK_UPDATED_TIME: updated_time, rpc_api.STACK_DELETION_TIME: deleted_time, rpc_api.STACK_NOTIFICATION_TOPICS: [], # TODO(therve) Not implemented rpc_api.STACK_PARAMETERS: stack.parameters.map(six.text_type), rpc_api.STACK_DESCRIPTION: stack.t[stack.t.DESCRIPTION], rpc_api.STACK_TMPL_DESCRIPTION: stack.t[stack.t.DESCRIPTION], rpc_api.STACK_CAPABILITIES: [], # TODO(?) Not implemented yet rpc_api.STACK_DISABLE_ROLLBACK: stack.disable_rollback, rpc_api.STACK_TIMEOUT: stack.timeout_mins, rpc_api.STACK_OWNER: stack.username, rpc_api.STACK_PARENT: stack.owner_id, rpc_api.STACK_USER_PROJECT_ID: stack.stack_user_project_id, rpc_api.STACK_TAGS: stack.tags, } if not preview: update_info = { rpc_api.STACK_ACTION: stack.action or '', rpc_api.STACK_STATUS: stack.status or '', rpc_api.STACK_STATUS_DATA: stack.status_reason, } info.update(update_info) # allow users to view the outputs of stacks if (not (stack.action == stack.DELETE and stack.status == stack.COMPLETE) and resolve_outputs): info[rpc_api.STACK_OUTPUTS] = format_stack_outputs(stack.outputs, resolve_value=True) return info def format_stack_db_object(stack): """Return a summary representation of the given stack. Given a stack versioned db object, return a representation of the given stack for a stack listing. 
""" updated_time = heat_timeutils.isotime(stack.updated_at) created_time = heat_timeutils.isotime(stack.created_at) deleted_time = heat_timeutils.isotime(stack.deleted_at) tags = None if stack.tags: tags = [t.tag for t in stack.tags] info = { rpc_api.STACK_ID: dict(stack.identifier()), rpc_api.STACK_NAME: stack.name, rpc_api.STACK_DESCRIPTION: '', rpc_api.STACK_ACTION: stack.action, rpc_api.STACK_STATUS: stack.status, rpc_api.STACK_STATUS_DATA: stack.status_reason, rpc_api.STACK_CREATION_TIME: created_time, rpc_api.STACK_UPDATED_TIME: updated_time, rpc_api.STACK_DELETION_TIME: deleted_time, rpc_api.STACK_OWNER: stack.username, rpc_api.STACK_PARENT: stack.owner_id, rpc_api.STACK_USER_PROJECT_ID: stack.stack_user_project_id, rpc_api.STACK_TAGS: tags, } return info def format_resource_attributes(resource, with_attr=None): resolver = resource.attributes if not with_attr: with_attr = [] # Always return live values for consistency resolver.reset_resolved_values() def resolve(attr, resolver): try: return resolver._resolver(attr) except Exception: return None # if 'show' in attributes_schema, will resolve all attributes of resource # including the ones are not represented in response of show API, such as # 'console_urls' for nova server, user can view it by taking with_attr # parameter if 'show' in resolver: show_attr = resolve('show', resolver) # check if 'show' resolved to dictionary. so it's not None if isinstance(show_attr, collections.Mapping): for a in with_attr: if a not in show_attr: show_attr[a] = resolve(a, resolver) return show_attr else: # remove 'show' attribute if it's None or not a mapping # then resolve all attributes manually del resolver._attributes['show'] attributes = set(resolver) | set(with_attr) return dict((attr, resolve(attr, resolver)) for attr in attributes) def format_resource_properties(resource): def get_property(prop): try: return resource.properties[prop] except (KeyError, ValueError): return None return dict((prop, get_property(prop)) for prop in resource.properties_schema.keys()) def format_stack_resource(resource, detail=True, with_props=False, with_attr=None): """Return a representation of the given resource. Return a representation of the given resource that matches the API output expectations. 
""" created_time = heat_timeutils.isotime(resource.created_time) last_updated_time = heat_timeutils.isotime( resource.updated_time or resource.created_time) res = { rpc_api.RES_UPDATED_TIME: last_updated_time, rpc_api.RES_CREATION_TIME: created_time, rpc_api.RES_NAME: resource.name, rpc_api.RES_PHYSICAL_ID: resource.resource_id or '', rpc_api.RES_ACTION: resource.action, rpc_api.RES_STATUS: resource.status, rpc_api.RES_STATUS_DATA: resource.status_reason, rpc_api.RES_TYPE: resource.type(), rpc_api.RES_ID: dict(resource.identifier()), rpc_api.RES_STACK_ID: dict(resource.stack.identifier()), rpc_api.RES_STACK_NAME: resource.stack.name, rpc_api.RES_REQUIRED_BY: resource.required_by(), } if resource.has_nested(): res[rpc_api.RES_NESTED_STACK_ID] = dict(resource.nested_identifier()) if resource.stack.parent_resource_name: res[rpc_api.RES_PARENT_RESOURCE] = resource.stack.parent_resource_name if detail: res[rpc_api.RES_DESCRIPTION] = resource.t.description res[rpc_api.RES_METADATA] = resource.metadata_get() if with_attr is not False: res[rpc_api.RES_ATTRIBUTES] = format_resource_attributes( resource, with_attr) if with_props: res[rpc_api.RES_PROPERTIES] = format_resource_properties( resource) return res def format_stack_preview(stack): def format_resource(res): if isinstance(res, list): return list(map(format_resource, res)) return format_stack_resource(res, with_props=True) fmt_stack = format_stack(stack, preview=True) fmt_resources = list(map(format_resource, stack.preview_resources())) fmt_stack['resources'] = fmt_resources return fmt_stack def format_event(event, stack_identifier, root_stack_identifier=None, include_rsrc_prop_data=True): result = { rpc_api.EVENT_ID: dict(event.identifier(stack_identifier)), rpc_api.EVENT_STACK_ID: dict(stack_identifier), rpc_api.EVENT_STACK_NAME: stack_identifier.stack_name, rpc_api.EVENT_TIMESTAMP: heat_timeutils.isotime(event.created_at), rpc_api.EVENT_RES_NAME: event.resource_name, rpc_api.EVENT_RES_PHYSICAL_ID: event.physical_resource_id, rpc_api.EVENT_RES_ACTION: event.resource_action, rpc_api.EVENT_RES_STATUS: event.resource_status, rpc_api.EVENT_RES_STATUS_DATA: event.resource_status_reason, rpc_api.EVENT_RES_TYPE: event.resource_type, } if root_stack_identifier: result[rpc_api.EVENT_ROOT_STACK_ID] = dict(root_stack_identifier) if include_rsrc_prop_data: result[rpc_api.EVENT_RES_PROPERTIES] = event.resource_properties return result def format_notification_body(stack): # some other possibilities here are: # - template name # - template size # - resource count if stack.status is not None and stack.action is not None: state = '_'.join(stack.state) else: state = 'Unknown' updated_at = heat_timeutils.isotime(stack.updated_time) result = { rpc_api.NOTIFY_TENANT_ID: stack.context.tenant_id, rpc_api.NOTIFY_USER_ID: stack.context.username, # deprecated: please use rpc_api.NOTIFY_USERID for user id or # rpc_api.NOTIFY_USERNAME for user name. 
rpc_api.NOTIFY_USERID: stack.context.user_id, rpc_api.NOTIFY_USERNAME: stack.context.username, rpc_api.NOTIFY_STACK_ID: stack.id, rpc_api.NOTIFY_STACK_NAME: stack.name, rpc_api.NOTIFY_STATE: state, rpc_api.NOTIFY_STATE_REASON: stack.status_reason, rpc_api.NOTIFY_CREATE_AT: heat_timeutils.isotime(stack.created_time), rpc_api.NOTIFY_TAGS: stack.tags, rpc_api.NOTIFY_UPDATE_AT: updated_at } if stack.t is not None: result[rpc_api.NOTIFY_DESCRIPTION] = stack.t[stack.t.DESCRIPTION] return result def format_watch(watch): updated_time = heat_timeutils.isotime(watch.updated_at or timeutils.utcnow()) result = { rpc_api.WATCH_ACTIONS_ENABLED: watch.rule.get( rpc_api.RULE_ACTIONS_ENABLED), rpc_api.WATCH_ALARM_ACTIONS: watch.rule.get( rpc_api.RULE_ALARM_ACTIONS), rpc_api.WATCH_TOPIC: watch.rule.get(rpc_api.RULE_TOPIC), rpc_api.WATCH_UPDATED_TIME: updated_time, rpc_api.WATCH_DESCRIPTION: watch.rule.get(rpc_api.RULE_DESCRIPTION), rpc_api.WATCH_NAME: watch.name, rpc_api.WATCH_COMPARISON: watch.rule.get(rpc_api.RULE_COMPARISON), rpc_api.WATCH_DIMENSIONS: watch.rule.get( rpc_api.RULE_DIMENSIONS) or [], rpc_api.WATCH_PERIODS: watch.rule.get(rpc_api.RULE_PERIODS), rpc_api.WATCH_INSUFFICIENT_ACTIONS: watch.rule.get(rpc_api.RULE_INSUFFICIENT_ACTIONS), rpc_api.WATCH_METRIC_NAME: watch.rule.get(rpc_api.RULE_METRIC_NAME), rpc_api.WATCH_NAMESPACE: watch.rule.get(rpc_api.RULE_NAMESPACE), rpc_api.WATCH_OK_ACTIONS: watch.rule.get(rpc_api.RULE_OK_ACTIONS), rpc_api.WATCH_PERIOD: watch.rule.get(rpc_api.RULE_PERIOD), rpc_api.WATCH_STATE_REASON: watch.rule.get(rpc_api.RULE_STATE_REASON), rpc_api.WATCH_STATE_REASON_DATA: watch.rule.get(rpc_api.RULE_STATE_REASON_DATA), rpc_api.WATCH_STATE_UPDATED_TIME: heat_timeutils.isotime( watch.rule.get(rpc_api.RULE_STATE_UPDATED_TIME, timeutils.utcnow())), rpc_api.WATCH_STATE_VALUE: watch.state, rpc_api.WATCH_STATISTIC: watch.rule.get(rpc_api.RULE_STATISTIC), rpc_api.WATCH_THRESHOLD: watch.rule.get(rpc_api.RULE_THRESHOLD), rpc_api.WATCH_UNIT: watch.rule.get(rpc_api.RULE_UNIT), rpc_api.WATCH_STACK_ID: watch.stack_id } return result def format_watch_data(wd, rule_names): # Demangle DB format data into something more easily used in the API # We are expecting a dict with exactly two items, Namespace and # a metric key namespace = wd.data['Namespace'] metric = [(k, v) for k, v in wd.data.items() if k != 'Namespace'] if len(metric) == 1: metric_name, metric_data = metric[0] else: LOG.error("Unexpected number of keys in watch_data.data!") return result = { rpc_api.WATCH_DATA_ALARM: rule_names.get(wd.watch_rule_id), rpc_api.WATCH_DATA_METRIC: metric_name, rpc_api.WATCH_DATA_TIME: heat_timeutils.isotime(wd.created_at), rpc_api.WATCH_DATA_NAMESPACE: namespace, rpc_api.WATCH_DATA: metric_data } return result def format_validate_parameter(param): """Format a template parameter for validate template API call. Formats a template parameter and its schema information from the engine's internal representation (i.e. a Parameter object and its associated Schema object) to a representation expected by the current API (for example to be compatible to CFN syntax). 
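For example (an illustrative sketch, not from the original code), a hidden string parameter with a length constraint would be formatted roughly as {rpc_api.PARAM_TYPE: ..., rpc_api.PARAM_DESCRIPTION: ..., rpc_api.PARAM_NO_ECHO: 'true', rpc_api.PARAM_LABEL: ..., rpc_api.PARAM_MIN_LENGTH: 1, rpc_api.PARAM_MAX_LENGTH: 255}, with the constraint keys added by _build_parameter_constraints() below.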
""" # map of Schema object types to API expected types schema_to_api_types = { param.schema.STRING: rpc_api.PARAM_TYPE_STRING, param.schema.NUMBER: rpc_api.PARAM_TYPE_NUMBER, param.schema.LIST: rpc_api.PARAM_TYPE_COMMA_DELIMITED_LIST, param.schema.MAP: rpc_api.PARAM_TYPE_JSON, param.schema.BOOLEAN: rpc_api.PARAM_TYPE_BOOLEAN } res = { rpc_api.PARAM_TYPE: schema_to_api_types.get(param.schema.type, param.schema.type), rpc_api.PARAM_DESCRIPTION: param.description(), rpc_api.PARAM_NO_ECHO: 'true' if param.hidden() else 'false', rpc_api.PARAM_LABEL: param.label() } if param.has_default(): res[rpc_api.PARAM_DEFAULT] = param.default() if param.user_value: res[rpc_api.PARAM_VALUE] = param.user_value if param.tags(): res[rpc_api.PARAM_TAG] = param.tags() _build_parameter_constraints(res, param) return res def _build_parameter_constraints(res, param): constraint_description = [] # build constraints for c in param.schema.constraints: if isinstance(c, constr.Length): if c.min is not None: res[rpc_api.PARAM_MIN_LENGTH] = c.min if c.max is not None: res[rpc_api.PARAM_MAX_LENGTH] = c.max elif isinstance(c, constr.Range): if c.min is not None: res[rpc_api.PARAM_MIN_VALUE] = c.min if c.max is not None: res[rpc_api.PARAM_MAX_VALUE] = c.max elif isinstance(c, constr.Modulo): if c.step is not None: res[rpc_api.PARAM_STEP] = c.step if c.offset is not None: res[rpc_api.PARAM_OFFSET] = c.offset elif isinstance(c, constr.AllowedValues): res[rpc_api.PARAM_ALLOWED_VALUES] = list(c.allowed) elif isinstance(c, constr.AllowedPattern): res[rpc_api.PARAM_ALLOWED_PATTERN] = c.pattern elif isinstance(c, constr.CustomConstraint): res[rpc_api.PARAM_CUSTOM_CONSTRAINT] = c.name if c.description: constraint_description.append(c.description) if constraint_description: res[rpc_api.PARAM_CONSTRAINT_DESCRIPTION] = " ".join( constraint_description) def format_software_config(sc, detail=True, include_project=False): if sc is None: return result = { rpc_api.SOFTWARE_CONFIG_ID: sc.id, rpc_api.SOFTWARE_CONFIG_NAME: sc.name, rpc_api.SOFTWARE_CONFIG_GROUP: sc.group, rpc_api.SOFTWARE_CONFIG_CREATION_TIME: heat_timeutils.isotime(sc.created_at) } if detail: result[rpc_api.SOFTWARE_CONFIG_CONFIG] = sc.config['config'] result[rpc_api.SOFTWARE_CONFIG_INPUTS] = sc.config['inputs'] result[rpc_api.SOFTWARE_CONFIG_OUTPUTS] = sc.config['outputs'] result[rpc_api.SOFTWARE_CONFIG_OPTIONS] = sc.config['options'] if include_project: result[rpc_api.SOFTWARE_CONFIG_PROJECT] = sc.tenant return result def format_software_deployment(sd): if sd is None: return result = { rpc_api.SOFTWARE_DEPLOYMENT_ID: sd.id, rpc_api.SOFTWARE_DEPLOYMENT_SERVER_ID: sd.server_id, rpc_api.SOFTWARE_DEPLOYMENT_INPUT_VALUES: sd.input_values, rpc_api.SOFTWARE_DEPLOYMENT_OUTPUT_VALUES: sd.output_values, rpc_api.SOFTWARE_DEPLOYMENT_ACTION: sd.action, rpc_api.SOFTWARE_DEPLOYMENT_STATUS: sd.status, rpc_api.SOFTWARE_DEPLOYMENT_STATUS_REASON: sd.status_reason, rpc_api.SOFTWARE_DEPLOYMENT_CONFIG_ID: sd.config.id, rpc_api.SOFTWARE_DEPLOYMENT_CREATION_TIME: heat_timeutils.isotime(sd.created_at), } if sd.updated_at: result[rpc_api.SOFTWARE_DEPLOYMENT_UPDATED_TIME] = ( heat_timeutils.isotime(sd.updated_at)) return result def format_snapshot(snapshot): if snapshot is None: return result = { rpc_api.SNAPSHOT_ID: snapshot.id, rpc_api.SNAPSHOT_NAME: snapshot.name, rpc_api.SNAPSHOT_STATUS: snapshot.status, rpc_api.SNAPSHOT_STATUS_REASON: snapshot.status_reason, rpc_api.SNAPSHOT_DATA: snapshot.data, rpc_api.SNAPSHOT_CREATION_TIME: heat_timeutils.isotime(snapshot.created_at), } return result 
heat-10.0.2/heat/engine/clients/0000775000175000017500000000000013343562672016377 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/clients/__init__.py0000666000175000017500000001012513343562337020507 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import weakref from oslo_config import cfg from oslo_log import log as logging from oslo_utils import importutils import six from stevedore import enabled from heat.common import exception from heat.common.i18n import _ from heat.common import pluginutils LOG = logging.getLogger(__name__) _default_backend = "heat.engine.clients.OpenStackClients" cloud_opts = [ cfg.StrOpt('cloud_backend', default=_default_backend, help=_("Fully qualified class name to use as " "a client backend.")) ] cfg.CONF.register_opts(cloud_opts) class OpenStackClients(object): """Convenience class to create and cache client instances.""" def __init__(self, context): self._context = weakref.ref(context) self._clients = {} self._client_plugins = {} @property def context(self): ctxt = self._context() assert ctxt is not None, "Need a reference to the context" return ctxt def client_plugin(self, name): global _mgr if name in self._client_plugins: return self._client_plugins[name] if _mgr and name in _mgr.names(): client_plugin = _mgr[name].plugin(self.context) self._client_plugins[name] = client_plugin return client_plugin def client(self, name, version=None): client_plugin = self.client_plugin(name) if client_plugin: if version: return client_plugin.client(version=version) else: return client_plugin.client() if name in self._clients: return self._clients[name] # call the local method _<name>() if a real client plugin # doesn't exist method_name = '_%s' % name if callable(getattr(self, method_name, None)): client = getattr(self, method_name)() self._clients[name] = client return client LOG.warning('Requested client "%s" not found', name) class ClientBackend(object): """Class for delaying choosing the backend client module. Delay choosing the backend client module until the client's class needs to be initialized.
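For example (illustrative only; ``my.module.MyClients`` is a hypothetical class), setting ``cloud_backend = my.module.MyClients`` in heat.conf makes ``Clients(context)`` return an instance of that class instead of OpenStackClients.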
""" def __new__(cls, context): if cfg.CONF.cloud_backend == _default_backend: return OpenStackClients(context) else: try: return importutils.import_object(cfg.CONF.cloud_backend, context) except (ImportError, RuntimeError, cfg.NoSuchOptError) as err: msg = _('Invalid cloud_backend setting in heat.conf ' 'detected - %s') % six.text_type(err) LOG.error(msg) raise exception.Invalid(reason=msg) Clients = ClientBackend _mgr = None def has_client(name): return _mgr and name in _mgr.names() def initialise(): global _mgr if _mgr: return def client_is_available(client_plugin): if not hasattr(client_plugin.plugin, 'is_available'): # if the client does not have a is_available() class method, then # we assume it wants to be always available return True # let the client plugin decide if it wants to register or not return client_plugin.plugin.is_available() _mgr = enabled.EnabledExtensionManager( namespace='heat.clients', check_func=client_is_available, invoke_on_load=False, on_load_failure_callback=pluginutils.log_fail_msg) def list_opts(): yield None, cloud_opts heat-10.0.2/heat/engine/clients/progress.py0000666000175000017500000001155413343562351020617 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Helper classes that are simple key-value storages meant to be passed between handle_* and check_*_complete, being mutated during subsequent check_*_complete calls. Some of them impose restrictions on client plugin API, thus they are put in this client-plugin-agnostic module. """ class ServerCreateProgress(object): def __init__(self, server_id, complete=False): self.complete = complete self.server_id = server_id class ServerUpdateProgress(ServerCreateProgress): """Keeps track on particular server update task. ``handler`` is a method of client plugin performing required update operation. Its first positional argument must be ``server_id`` and this method must be resilent to intermittent failures, returning ``True`` if API was successfully called, ``False`` otherwise. If result of API call is asynchronous, client plugin must have corresponding ``check_`` method. Its first positional argument must be ``server_id`` and it must return ``True`` or ``False`` indicating completeness of the update operation. For synchronous API calls, set ``complete`` attribute of this object to ``True``. ``[handler|checker]_extra`` arguments, if passed to constructor, should be dictionaries of {'args': tuple(), 'kwargs': dict()} structure and contain parameters with which corresponding ``handler`` and ``check_`` methods of client plugin must be called. ``args`` is automatically prepended with ``server_id``. Missing ``args`` or ``kwargs`` are interpreted as empty tuple/dict respectively. Defaults are interpreted as both ``args`` and ``kwargs`` being empty. 
""" def __init__(self, server_id, handler, complete=False, called=False, handler_extra=None, checker_extra=None): super(ServerUpdateProgress, self).__init__(server_id, complete) self.called = called self.handler = handler self.checker = 'check_%s' % handler # set call arguments basing on incomplete values and defaults hargs = handler_extra or {} self.handler_args = (server_id,) + (hargs.get('args') or ()) self.handler_kwargs = hargs.get('kwargs') or {} cargs = checker_extra or {} self.checker_args = (server_id,) + (cargs.get('args') or ()) self.checker_kwargs = cargs.get('kwargs') or {} class ServerDeleteProgress(object): def __init__(self, server_id, image_id=None, image_complete=True): self.server_id = server_id self.image_id = image_id self.image_complete = image_complete class VolumeDetachProgress(object): def __init__(self, srv_id, vol_id, attach_id, task_complete=False): self.called = task_complete self.cinder_complete = task_complete self.nova_complete = task_complete self.srv_id = srv_id self.vol_id = vol_id self.attach_id = attach_id class VolumeAttachProgress(object): def __init__(self, srv_id, vol_id, device, task_complete=False): self.called = task_complete self.complete = task_complete self.srv_id = srv_id self.vol_id = vol_id self.device = device class VolumeDeleteProgress(object): def __init__(self, task_complete=False): self.backup = {'called': task_complete, 'complete': task_complete} self.delete = {'called': task_complete, 'complete': task_complete} self.backup_id = None class VolumeResizeProgress(object): def __init__(self, task_complete=False, size=None): self.called = task_complete self.complete = task_complete self.size = size class VolumeUpdateAccessModeProgress(object): def __init__(self, task_complete=False, read_only=None): self.called = task_complete self.read_only = read_only class VolumeBackupRestoreProgress(object): def __init__(self, vol_id, backup_id): self.called = False self.complete = False self.vol_id = vol_id self.backup_id = backup_id class PoolDeleteProgress(object): def __init__(self, task_complete=False): self.pool = {'delete_called': task_complete, 'deleted': task_complete} self.vip = {'delete_called': task_complete, 'deleted': task_complete} heat-10.0.2/heat/engine/clients/client_plugin.py0000666000175000017500000001341013343562337021604 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import abc import weakref from keystoneauth1 import exceptions from keystoneauth1.identity import generic from keystoneauth1 import plugin from oslo_config import cfg from oslo_utils import excutils import requests import six from heat.common import config from heat.common import exception as heat_exception cfg.CONF.import_opt('client_retry_limit', 'heat.common.config') @six.add_metaclass(abc.ABCMeta) class ClientPlugin(object): # Module which contains all exceptions classes which the client # may emit exceptions_module = None # supported service types, service like cinder support multiple service # types, so its used in list format service_types = [] # To make the backward compatibility with existing resource plugins default_version = None supported_versions = [] def __init__(self, context): self._context = weakref.ref(context) self._clients = weakref.ref(context.clients) self._client_instances = {} @property def context(self): ctxt = self._context() assert ctxt is not None, "Need a reference to the context" return ctxt @property def clients(self): return self._clients() _get_client_option = staticmethod(config.get_client_option) def client(self, version=None): if not version: version = self.default_version if version in self._client_instances: return self._client_instances[version] # Back-ward compatibility if version is None: self._client_instances[version] = self._create() else: if version not in self.supported_versions: raise heat_exception.InvalidServiceVersion( version=version, service=self._get_service_name()) self._client_instances[version] = self._create(version=version) return self._client_instances[version] @abc.abstractmethod def _create(self, version=None): """Return a newly created client.""" pass def _get_region_name(self): return self.context.region_name or cfg.CONF.region_name_for_services def url_for(self, **kwargs): keystone_session = self.context.keystone_session def get_endpoint(): return keystone_session.get_endpoint(**kwargs) # NOTE(jamielennox): use the session defined by the keystoneclient # options as traditionally the token was always retrieved from # keystoneclient. try: kwargs.setdefault('interface', kwargs.pop('endpoint_type')) except KeyError: pass kwargs.setdefault('region_name', self._get_region_name()) url = None try: url = get_endpoint() except exceptions.EmptyCatalog: endpoint = keystone_session.get_endpoint( None, interface=plugin.AUTH_INTERFACE) token = keystone_session.get_token(None) token_obj = generic.Token(endpoint, token) auth_ref = token_obj.get_access(keystone_session) if auth_ref.has_service_catalog(): self.context.reload_auth_plugin() url = get_endpoint() # NOTE(jamielennox): raising exception maintains compatibility with # older keystoneclient service catalog searching. 
if url is None: raise exceptions.EndpointNotFound() return url def is_client_exception(self, ex): """Returns True if the current exception comes from the client.""" if self.exceptions_module: if isinstance(self.exceptions_module, list): for m in self.exceptions_module: if type(ex) in six.itervalues(m.__dict__): return True else: return type(ex) in six.itervalues( self.exceptions_module.__dict__) return False def is_not_found(self, ex): """Returns True if the exception is a not-found.""" return False def is_over_limit(self, ex): """Returns True if the exception is an over-limit.""" return False def is_conflict(self, ex): """Returns True if the exception is a conflict.""" return False @excutils.exception_filter def ignore_not_found(self, ex): """Raises the exception unless it is a not-found.""" return self.is_not_found(ex) @excutils.exception_filter def ignore_conflict_and_not_found(self, ex): """Raises the exception unless it is a conflict or not-found.""" return self.is_conflict(ex) or self.is_not_found(ex) def does_endpoint_exist(self, service_type, service_name): endpoint_type = self._get_client_option(service_name, 'endpoint_type') try: self.url_for(service_type=service_type, endpoint_type=endpoint_type) return True except exceptions.EndpointNotFound: return False def retry_if_connection_err(exception): return isinstance(exception, requests.ConnectionError) def retry_if_result_is_false(result): return result is False heat-10.0.2/heat/engine/clients/client_exception.py0000666000175000017500000000223413343562337022306 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ class EntityMatchNotFound(exception.HeatException): msg_fmt = _("No %(entity)s matching %(args)s.") def __init__(self, entity=None, args=None, **kwargs): super(EntityMatchNotFound, self).__init__(entity=entity, args=args, **kwargs) class EntityUniqueMatchNotFound(EntityMatchNotFound): msg_fmt = _("No %(entity)s unique match found for %(args)s.") class InterfaceNotFound(exception.HeatException): msg_fmt = _("No network interface found for server %(id)s.") heat-10.0.2/heat/engine/clients/os/0000775000175000017500000000000013343562672017020 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/clients/os/swift.py0000666000175000017500000001154013343562351020523 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime import email.utils import hashlib import logging import random import time import six from six.moves.urllib import parse from swiftclient import client as sc from swiftclient import exceptions from swiftclient import utils as swiftclient_utils from heat.engine.clients import client_plugin IN_PROGRESS = 'in progress' MAX_EPOCH = 2147483647 CLIENT_NAME = 'swift' # silence the swiftclient logging sc_logger = logging.getLogger("swiftclient") sc_logger.setLevel(logging.CRITICAL) class SwiftClientPlugin(client_plugin.ClientPlugin): exceptions_module = exceptions service_types = [OBJECT_STORE] = ['object-store'] def _create(self): endpoint_type = self._get_client_option(CLIENT_NAME, 'endpoint_type') os_options = {'endpoint_type': endpoint_type, 'service_type': self.OBJECT_STORE, 'region_name': self._get_region_name()} return sc.Connection(auth_version=3, session=self.context.keystone_session, os_options=os_options) def is_client_exception(self, ex): return isinstance(ex, exceptions.ClientException) def is_not_found(self, ex): return (isinstance(ex, exceptions.ClientException) and ex.http_status == 404) def is_over_limit(self, ex): return (isinstance(ex, exceptions.ClientException) and ex.http_status == 413) def is_conflict(self, ex): return (isinstance(ex, exceptions.ClientException) and ex.http_status == 409) def is_valid_temp_url_path(self, path): """Return True if path is a valid Swift TempURL path, False otherwise. A Swift TempURL path must: - Be five parts, ['', 'v1', 'account', 'container', 'object'] - Be a v1 request - Have account, container, and object values - Have an object value with more than just '/'s :param path: The TempURL path :type path: string """ parts = path.split('/', 4) return bool(len(parts) == 5 and not parts[0] and parts[1] == 'v1' and parts[2].endswith(self.context.tenant_id) and parts[3] and parts[4].strip('/')) def get_temp_url(self, container_name, obj_name, timeout=None, method='PUT'): """Return a Swift TempURL.""" key_header = 'x-account-meta-temp-url-key' if key_header not in self.client().head_account(): self.client().post_account({ key_header: hashlib.sha224( six.b(six.text_type( random.getrandbits(256)))).hexdigest()[:32]}) key = self.client().head_account()[key_header] path = '/v1/AUTH_%s/%s/%s' % (self.context.tenant_id, container_name, obj_name) if timeout is None: timeout = int(MAX_EPOCH - 60 - time.time()) tempurl = swiftclient_utils.generate_temp_url(path, timeout, key, method) sw_url = parse.urlparse(self.client().url) return '%s://%s%s' % (sw_url.scheme, sw_url.netloc, tempurl) def get_signal_url(self, container_name, obj_name, timeout=None): """Turn on object versioning. We can use a single TempURL for multiple signals and return a Swift TempURL. """ self.client().put_container( container_name, headers={'x-versions-location': container_name}) self.client().put_object(container_name, obj_name, IN_PROGRESS) return self.get_temp_url(container_name, obj_name, timeout) def parse_last_modified(self, lm): """Parses the last-modified value. For example, last-modified values from a swift object header. Returns the datetime.datetime of that value. 
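For example (an illustrative value), parsing 'Wed, 21 Oct 2015 07:28:00 GMT' yields datetime.datetime(2015, 10, 21, 7, 28).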
:param lm: The last-modified value (or None) :type lm: string :returns: An offset-naive UTC datetime of the value (or None) """ if not lm: return None pd = email.utils.parsedate(lm)[:6] # according to RFC 2616, all HTTP time headers must be # in GMT time, so create an offset-naive UTC datetime return datetime.datetime(*pd) heat-10.0.2/heat/engine/clients/os/manila.py0000666000175000017500000001143013343562351020626 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception as heat_exception from heat.engine.clients import client_plugin from heat.engine import constraints from manilaclient import client as manila_client from manilaclient import exceptions MANILACLIENT_VERSION = "2" CLIENT_NAME = 'manila' class ManilaClientPlugin(client_plugin.ClientPlugin): exceptions_module = exceptions service_types = [SHARE] = ['share'] def _create(self): endpoint_type = self._get_client_option(CLIENT_NAME, 'endpoint_type') args = { 'endpoint_type': endpoint_type, 'service_type': self.SHARE, 'session': self.context.keystone_session, 'region_name': self._get_region_name() } client = manila_client.Client(MANILACLIENT_VERSION, **args) return client def is_not_found(self, ex): return isinstance(ex, exceptions.NotFound) def is_over_limit(self, ex): return isinstance(ex, exceptions.RequestEntityTooLarge) def is_conflict(self, ex): return isinstance(ex, exceptions.Conflict) @staticmethod def _find_resource_by_id_or_name(id_or_name, resource_list, resource_type_name): """Find a resource in resource_list by its id or name. The method searches resource_list for an item whose id or name matches id_or_name and returns it. If there is more than one match, or no match at all, an exception is raised. :param id_or_name: resource id or name :param resource_list: list of resources :param resource_type_name: name of the resource type that will be used for exceptions :raises EntityNotFound, NoUniqueMatch :return: the matching resource; an exception is raised otherwise """ search_result_by_id = [res for res in resource_list if res.id == id_or_name] if search_result_by_id: return search_result_by_id[0] else: # try to find resource by name search_result_by_name = [res for res in resource_list if res.name == id_or_name] match_count = len(search_result_by_name) if match_count > 1: message = ("Ambiguous {0} name '{1}'. Found more than one " "{0} for this name in Manila."
).format(resource_type_name, id_or_name) raise exceptions.NoUniqueMatch(message) elif match_count == 1: return search_result_by_name[0] else: raise heat_exception.EntityNotFound(entity=resource_type_name, name=id_or_name) def get_share_type(self, share_type_identity): return self._find_resource_by_id_or_name( share_type_identity, self.client().share_types.list(), "share type" ) def get_share_network(self, share_network_identity): return self._find_resource_by_id_or_name( share_network_identity, self.client().share_networks.list(), "share network" ) def get_share_snapshot(self, snapshot_identity): return self._find_resource_by_id_or_name( snapshot_identity, self.client().share_snapshots.list(), "share snapshot" ) def get_security_service(self, service_identity): return self._find_resource_by_id_or_name( service_identity, self.client().security_services.list(), 'security service' ) class ManilaShareBaseConstraint(constraints.BaseCustomConstraint): # check that exceptions module has been loaded. Without this check # doc tests on gates will fail expected_exceptions = (heat_exception.EntityNotFound, exceptions.NoUniqueMatch) resource_client_name = CLIENT_NAME class ManilaShareNetworkConstraint(ManilaShareBaseConstraint): resource_getter_name = 'get_share_network' class ManilaShareTypeConstraint(ManilaShareBaseConstraint): resource_getter_name = 'get_share_type' class ManilaShareSnapshotConstraint(ManilaShareBaseConstraint): resource_getter_name = 'get_share_snapshot' heat-10.0.2/heat/engine/clients/os/senlin.py0000666000175000017500000001200013343562337020653 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
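As an aside, the id-then-name precedence implemented by ManilaClientPlugin._find_resource_by_id_or_name above is easy to exercise in isolation. A minimal sketch, assuming simple stand-in resource objects (FakeResource is hypothetical, not part of manilaclient):

import collections

FakeResource = collections.namedtuple('FakeResource', ['id', 'name'])

shares = [FakeResource('123', 'gold'), FakeResource('456', 'silver')]

# An ID match wins first; otherwise fall back to a unique name match,
# mirroring the precedence in _find_resource_by_id_or_name().
by_id = [r for r in shares if r.id == '123'][0]          # -> name 'gold'
by_name = [r for r in shares if r.name == 'silver'][0]   # -> id '456'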
from openstack import exceptions from heat.common import exception from heat.common.i18n import _ from heat.engine.clients.os import openstacksdk as sdk_plugin from heat.engine import constraints CLIENT_NAME = 'senlin' class SenlinClientPlugin(sdk_plugin.OpenStackSDKPlugin): exceptions_module = exceptions def _create(self, version=None): client = super(SenlinClientPlugin, self)._create(version=version) return client.clustering def _get_additional_create_args(self, version): return { 'clustering_api_version': version or '1' } def generate_spec(self, spec_type, spec_props): spec = {'properties': spec_props} spec['type'], spec['version'] = spec_type.split('-') return spec def check_action_status(self, action_id): action = self.client().get_action(action_id) if action.status == 'SUCCEEDED': return True elif action.status == 'FAILED': raise exception.ResourceInError( status_reason=action.status_reason, resource_status=action.status, ) return False def get_profile_id(self, profile_name): profile = self.client().get_profile(profile_name) return profile.id def get_cluster_id(self, cluster_name): cluster = self.client().get_cluster(cluster_name) return cluster.id def get_policy_id(self, policy_name): policy = self.client().get_policy(policy_name) return policy.id def is_bad_request(self, ex): return (isinstance(ex, exceptions.HttpException) and ex.status_code == 400) def execute_actions(self, actions): all_executed = True for action in actions: if action['done']: continue all_executed = False if action['action_id'] is None: func = getattr(self.client(), action['func']) ret = func(**action['params']) if isinstance(ret, dict): action['action_id'] = ret['action'] else: action['action_id'] = ret.location.split('/')[-1] else: ret = self.check_action_status(action['action_id']) action['done'] = ret # Execute these actions one by one. 
break return all_executed class ProfileConstraint(constraints.BaseCustomConstraint): # If name is not unique, will raise exceptions.HttpException expected_exceptions = (exceptions.HttpException,) def validate_with_client(self, client, profile): client.client(CLIENT_NAME).get_profile(profile) class ClusterConstraint(constraints.BaseCustomConstraint): # If name is not unique, will raise exceptions.HttpException expected_exceptions = (exceptions.HttpException,) def validate_with_client(self, client, value): client.client(CLIENT_NAME).get_cluster(value) class PolicyConstraint(constraints.BaseCustomConstraint): # If name is not unique, will raise exceptions.HttpException expected_exceptions = (exceptions.HttpException,) def validate_with_client(self, client, value): client.client(CLIENT_NAME).get_policy(value) class ProfileTypeConstraint(constraints.BaseCustomConstraint): expected_exceptions = (exception.StackValidationFailed,) def validate_with_client(self, client, value): conn = client.client(CLIENT_NAME) type_list = conn.profile_types() names = [pt.name for pt in type_list] if value not in names: not_found_message = ( _("Unable to find senlin profile type '%(pt)s', " "available profile types are %(pts)s.") % {'pt': value, 'pts': names} ) raise exception.StackValidationFailed(message=not_found_message) class PolicyTypeConstraint(constraints.BaseCustomConstraint): expected_exceptions = (exception.StackValidationFailed,) def validate_with_client(self, client, value): conn = client.client(CLIENT_NAME) type_list = conn.policy_types() names = [pt.name for pt in type_list] if value not in names: not_found_message = ( _("Unable to find senlin policy type '%(pt)s', " "available policy types are %(pts)s.") % {'pt': value, 'pts': names} ) raise exception.StackValidationFailed(message=not_found_message) heat-10.0.2/heat/engine/clients/os/openstacksdk.py0000666000175000017500000000610713343562351022063 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.config import cloud_region from openstack import connection from openstack import exceptions import os_service_types from heat.common import config from heat.engine.clients import client_plugin from heat.engine import constraints import heat.version CLIENT_NAME = 'openstack' class OpenStackSDKPlugin(client_plugin.ClientPlugin): exceptions_module = exceptions service_types = [NETWORK, CLUSTERING] = ['network', 'clustering'] def _create(self, version=None): config = cloud_region.from_session( # TODO(mordred) The way from_session calculates a cloud name # doesn't interact well with the mocks in the test cases. The # name is used in logging to distinguish requests made to different # clouds. For now, set it to local - but maybe find a way to set # it to something more meaningful later. 
name='local', session=self.context.keystone_session, config=self._get_service_interfaces(), region_name=self._get_region_name(), app_name='heat', app_version=heat.version.version_info.version_string(), **self._get_additional_create_args(version)) return connection.Connection(config=config) def _get_additional_create_args(self, version): return {} def _get_service_interfaces(self): interfaces = {} if not os_service_types: return interfaces types = os_service_types.ServiceTypes() for name, _ in config.list_opts(): if not name or not name.startswith('clients_'): continue project_name = name.split("_", 1)[0] service_data = types.get_service_data_for_project(project_name) if not service_data: continue service_type = service_data['service_type'] interfaces[service_type + '_interface'] = self._get_client_option( service_type, 'endpoint_type') return interfaces def is_not_found(self, ex): return isinstance(ex, exceptions.ResourceNotFound) def find_network_segment(self, value): return self.client().network.find_segment(value).id class SegmentConstraint(constraints.BaseCustomConstraint): expected_exceptions = (exceptions.ResourceNotFound, exceptions.DuplicateResource) def validate_with_client(self, client, value): sdk_plugin = client.client_plugin(CLIENT_NAME) sdk_plugin.find_network_segment(value) heat-10.0.2/heat/engine/clients/os/cinder.py0000666000175000017500000001551013343562337020640 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
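The _get_service_interfaces helper above leans on os-service-types to translate per-project client options (the clients_<project> config groups) into catalog service types. A rough sketch of that mapping, assuming os-service-types is installed; the value returned depends on its bundled service-types data:

import os_service_types

types = os_service_types.ServiceTypes()
data = types.get_service_data_for_project('nova')
if data:
    # e.g. 'compute'; the plugin would then record a
    # 'compute_interface' entry pointing at the configured endpoint type.
    print(data['service_type'])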
from cinderclient import client as cc from cinderclient import exceptions from keystoneauth1 import exceptions as ks_exceptions from oslo_log import log as logging from heat.common import exception from heat.common.i18n import _ from heat.engine.clients import client_plugin from heat.engine.clients import os as os_client from heat.engine import constraints LOG = logging.getLogger(__name__) CLIENT_NAME = 'cinder' class CinderClientPlugin(client_plugin.ClientPlugin): exceptions_module = exceptions service_types = [VOLUME_V2, VOLUME_V3] = ['volumev2', 'volumev3'] def get_volume_api_version(self): '''Returns the most recent API version.''' self.interface = self._get_client_option(CLIENT_NAME, 'endpoint_type') try: self.context.keystone_session.get_endpoint( service_type=self.VOLUME_V3, interface=self.interface) self.service_type = self.VOLUME_V3 self.client_version = '3' except ks_exceptions.EndpointNotFound: try: self.context.keystone_session.get_endpoint( service_type=self.VOLUME_V2, interface=self.interface) self.service_type = self.VOLUME_V2 self.client_version = '2' except ks_exceptions.EndpointNotFound: raise exception.Error(_('No volume service available.')) def _create(self): self.get_volume_api_version() extensions = cc.discover_extensions(self.client_version) args = { 'session': self.context.keystone_session, 'extensions': extensions, 'interface': self.interface, 'service_type': self.service_type, 'region_name': self._get_region_name(), 'http_log_debug': self._get_client_option(CLIENT_NAME, 'http_log_debug') } client = cc.Client(self.client_version, **args) return client @os_client.MEMOIZE_EXTENSIONS def _list_extensions(self): extensions = self.client().list_extensions.show_all() return set(extension.alias for extension in extensions) def has_extension(self, alias): """Check if specific extension is present.""" return alias in self._list_extensions() def get_volume(self, volume): try: return self.client().volumes.get(volume) except exceptions.NotFound: raise exception.EntityNotFound(entity='Volume', name=volume) def get_volume_snapshot(self, snapshot): try: return self.client().volume_snapshots.get(snapshot) except exceptions.NotFound: raise exception.EntityNotFound(entity='VolumeSnapshot', name=snapshot) def get_volume_backup(self, backup): try: return self.client().backups.get(backup) except exceptions.NotFound: raise exception.EntityNotFound(entity='Volume backup', name=backup) def get_volume_type(self, volume_type): vt_id = None volume_type_list = self.client().volume_types.list() for vt in volume_type_list: if volume_type in [vt.name, vt.id]: vt_id = vt.id break if vt_id is None: raise exception.EntityNotFound(entity='VolumeType', name=volume_type) return vt_id def get_qos_specs(self, qos_specs): try: qos = self.client().qos_specs.get(qos_specs) except exceptions.NotFound: qos = self.client().qos_specs.find(name=qos_specs) return qos.id def is_not_found(self, ex): return isinstance(ex, exceptions.NotFound) def is_over_limit(self, ex): return isinstance(ex, exceptions.OverLimit) def is_conflict(self, ex): return (isinstance(ex, exceptions.ClientException) and ex.code == 409) def check_detach_volume_complete(self, vol_id): try: vol = self.client().volumes.get(vol_id) except Exception as ex: self.ignore_not_found(ex) return True if vol.status in ('in-use', 'detaching'): LOG.debug('%s - volume still in use', vol_id) return False LOG.debug('Volume %(id)s - status: %(status)s', { 'id': vol.id, 'status': vol.status}) if vol.status not in ('available', 'deleting'): 
LOG.debug("Detachment failed - volume %(vol)s " "is in %(status)s status", {"vol": vol.id, "status": vol.status}) raise exception.ResourceUnknownStatus( resource_status=vol.status, result=_('Volume detachment failed')) else: return True def check_attach_volume_complete(self, vol_id): vol = self.client().volumes.get(vol_id) if vol.status in ('available', 'attaching', 'reserved'): LOG.debug("Volume %(id)s is being attached - " "volume status: %(status)s", {'id': vol_id, 'status': vol.status}) return False if vol.status != 'in-use': LOG.debug("Attachment failed - volume %(vol)s is " "in %(status)s status", {"vol": vol_id, "status": vol.status}) raise exception.ResourceUnknownStatus( resource_status=vol.status, result=_('Volume attachment failed')) LOG.info('Attaching volume %(id)s complete', {'id': vol_id}) return True class BaseCinderConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME class VolumeConstraint(BaseCinderConstraint): resource_getter_name = 'get_volume' class VolumeSnapshotConstraint(BaseCinderConstraint): resource_getter_name = 'get_volume_snapshot' class VolumeTypeConstraint(BaseCinderConstraint): resource_getter_name = 'get_volume_type' class VolumeBackupConstraint(BaseCinderConstraint): resource_getter_name = 'get_volume_backup' class QoSSpecsConstraint(BaseCinderConstraint): expected_exceptions = (exceptions.NotFound,) resource_getter_name = 'get_qos_specs' heat-10.0.2/heat/engine/clients/os/heat_plugin.py0000666000175000017500000000706613343562337021702 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_config import cfg from heatclient import client as hc from heatclient import exc from heat.engine.clients import client_plugin CLIENT_NAME = 'heat' class HeatClientPlugin(client_plugin.ClientPlugin): exceptions_module = exc service_types = [ORCHESTRATION, CLOUDFORMATION] = ['orchestration', 'cloudformation'] def _create(self): endpoint = self.get_heat_url() args = {} if self._get_client_option(CLIENT_NAME, 'url'): # assume that the heat API URL is manually configured because # it is not in the keystone catalog, so include the credentials # for the standalone auth_password middleware args['username'] = self.context.username args['password'] = self.context.password return hc.Client('1', endpoint_override=endpoint, session=self.context.keystone_session, **args) def is_not_found(self, ex): return isinstance(ex, exc.HTTPNotFound) def is_over_limit(self, ex): return isinstance(ex, exc.HTTPOverLimit) def is_conflict(self, ex): return isinstance(ex, exc.HTTPConflict) def get_heat_url(self): heat_url = self._get_client_option(CLIENT_NAME, 'url') if heat_url: tenant_id = self.context.tenant_id heat_url = heat_url % {'tenant_id': tenant_id} else: endpoint_type = self._get_client_option(CLIENT_NAME, 'endpoint_type') heat_url = self.url_for(service_type=self.ORCHESTRATION, endpoint_type=endpoint_type) return heat_url def get_heat_cfn_url(self): endpoint_type = self._get_client_option(CLIENT_NAME, 'endpoint_type') heat_cfn_url = self.url_for(service_type=self.CLOUDFORMATION, endpoint_type=endpoint_type) return heat_cfn_url def get_cfn_metadata_server_url(self): # Historically, we've required heat_metadata_server_url set in # heat.conf, which simply points to the heat-api-cfn endpoint in # most cases, so fall back to looking in the catalog when not set config_url = cfg.CONF.heat_metadata_server_url if config_url is None: config_url = self.get_heat_cfn_url() # Backwards compatibility, previous heat_metadata_server_url # values didn't have to include the version path suffix # Also, we always added a trailing "/" in nova/server.py, # which looks not required by os-collect-config, but maintain # to avoid any risk other folks have scripts which expect it. if '/v1' not in config_url: config_url += '/v1' if config_url and config_url[-1] != "/": config_url += '/' return config_url def get_insecure_option(self): return self._get_client_option(CLIENT_NAME, 'insecure') heat-10.0.2/heat/engine/clients/os/keystone/0000775000175000017500000000000013343562672020661 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/clients/os/keystone/fake_keystoneclient.py0000666000175000017500000001006213343562337025260 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """A fake FakeKeystoneClient. This can be used during some runtime scenarios where you want to disable Heat's internal Keystone dependencies entirely. One example is the TripleO Undercloud installer. 
To use this class at runtime, set the following heat.conf config setting:

    keystone_backend = heat.engine.clients.os.keystone.fake_keystoneclient\
.FakeKeystoneClient
"""

from keystoneauth1 import session

from heat.common import context


class FakeKeystoneClient(object):
    def __init__(self, username='test_username', password='password',
                 user_id='1234', access='4567', secret='8901',
                 credential_id='abcdxyz', auth_token='abcd1234',
                 context=None, stack_domain_id='4321', client=None):
        self.username = username
        self.password = password
        self.user_id = user_id
        self.access = access
        self.secret = secret
        self.session = session.Session()
        self.credential_id = credential_id
        self.token = auth_token
        self.context = context
        self.v3_endpoint = 'http://localhost:5000/v3'
        self.stack_domain_id = stack_domain_id
        self.client = client

        class FakeCred(object):
            id = self.credential_id
            access = self.access
            secret = self.secret
        self.creds = FakeCred()

    def create_stack_user(self, username, password=''):
        self.username = username
        return self.user_id

    def delete_stack_user(self, user_id):
        self.user_id = None

    def get_ec2_keypair(self, access, user_id):
        if user_id == self.user_id:
            if access == self.access:
                return self.creds
            else:
                raise ValueError("Unexpected access %s" % access)
        else:
            raise ValueError("Unexpected user_id %s" % user_id)

    def create_ec2_keypair(self, user_id):
        if user_id == self.user_id:
            return self.creds

    def delete_ec2_keypair(self, credential_id=None, user_id=None,
                           access=None):
        if user_id == self.user_id and access == self.creds.access:
            self.creds = None
        else:
            raise Exception('Incorrect user_id or access')

    def enable_stack_user(self, user_id):
        pass

    def disable_stack_user(self, user_id):
        pass

    def create_trust_context(self):
        return context.RequestContext(username=self.username,
                                      password=self.password,
                                      is_admin=False,
                                      trust_id='atrust',
                                      trustor_user_id=self.user_id)

    def delete_trust(self, trust_id):
        pass

    def delete_stack_domain_project(self, project_id):
        pass

    def create_stack_domain_project(self, stack_id):
        return 'aprojectid'

    def create_stack_domain_user(self, username, project_id, password=None):
        return self.user_id

    def delete_stack_domain_user(self, user_id, project_id):
        pass

    def create_stack_domain_user_keypair(self, user_id, project_id):
        return self.creds

    def enable_stack_domain_user(self, user_id, project_id):
        pass

    def disable_stack_domain_user(self, user_id, project_id):
        pass

    def delete_stack_domain_user_keypair(self, user_id, project_id,
                                         credential_id):
        pass

    def stack_domain_user_token(self, user_id, project_id, password):
        return 'adomainusertoken'
heat-10.0.2/heat/engine/clients/os/keystone/heat_keystoneclient.py0000666000175000017500000005712613343562337025277 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
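The FakeKeystoneClient above is selected through the keystone_backend option, which KeystoneClient.__new__ (later in this module) resolves with oslo.utils importutils. A minimal sketch of that dotted-path resolution, assuming oslo.utils is installed:

from oslo_utils import importutils

path = ('heat.engine.clients.os.keystone.fake_keystoneclient'
        '.FakeKeystoneClient')
# import_class() turns the dotted path from heat.conf into the class
# object; import_object() would also instantiate it with arguments.
backend_cls = importutils.import_class(path)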
"""Keystone Client functionality for use by resources.""" import collections import uuid import weakref from keystoneauth1 import exceptions as ks_exception from keystoneauth1.identity import generic as ks_auth from keystoneclient.v3 import client as kc_v3 from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import importutils from heat.common import context from heat.common import exception from heat.common.i18n import _ from heat.common import password_gen LOG = logging.getLogger('heat.engine.clients.keystoneclient') AccessKey = collections.namedtuple('AccessKey', ['id', 'access', 'secret']) _default_keystone_backend = ( 'heat.engine.clients.os.keystone.heat_keystoneclient.KsClientWrapper') keystone_opts = [ cfg.StrOpt('keystone_backend', default=_default_keystone_backend, help=_("Fully qualified class name to use as a " "keystone backend.")) ] cfg.CONF.register_opts(keystone_opts) class KsClientWrapper(object): """Wrap keystone client so we can encapsulate logic used in resources. Note this is intended to be initialized from a resource on a per-session basis, so the session context is passed in on initialization Also note that an instance of this is created in each request context as part of a lazy-loaded cloud backend and it can be easily referenced in each resource as ``self.keystone()``, so there should not be any need to directly instantiate instances of this class inside resources themselves. """ def __init__(self, context): # If a trust_id is specified in the context, we immediately # authenticate so we can populate the context with a trust token # otherwise, we delay client authentication until needed to avoid # unnecessary calls to keystone. # # Note that when you obtain a token using a trust, it cannot be # used to reauthenticate and get another token, so we have to # get a new trust-token even if context.auth_token is set. # # - context.auth_url is expected to contain a versioned keystone # path, we will work with either a v2.0 or v3 path self._context = weakref.ref(context) self._client = None self._admin_auth = None self._domain_admin_auth = None self._domain_admin_client = None self.session = self.context.keystone_session self.v3_endpoint = self.context.keystone_v3_endpoint if self.context.trust_id: # Create a client with the specified trust_id, this # populates self.context.auth_token with a trust-scoped token self._client = self._v3_client_init() # The stack domain user ID should be set in heat.conf # It can be created via python-openstackclient # openstack --os-identity-api-version=3 domain create heat # If the domain is specified, then you must specify a domain # admin user. If no domain is specified, we fall back to # legacy behavior with warnings. self._stack_domain_id = cfg.CONF.stack_user_domain_id self.stack_domain_name = cfg.CONF.stack_user_domain_name self.domain_admin_user = cfg.CONF.stack_domain_admin self.domain_admin_password = cfg.CONF.stack_domain_admin_password LOG.debug('Using stack domain %s', self.stack_domain) @property def context(self): ctxt = self._context() assert ctxt is not None, "Need a reference to the context" return ctxt @property def stack_domain(self): """Domain scope data. This is only used for checking for scoping data, not using the value. 
""" return self._stack_domain_id or self.stack_domain_name @property def client(self): if not self._client: # Create connection to v3 API self._client = self._v3_client_init() return self._client @property def region_name(self): return self.context.region_name or cfg.CONF.region_name_for_services @property def domain_admin_auth(self): if not self._domain_admin_auth: # Note we must specify the domain when getting the token # as only a domain scoped token can create projects in the domain auth = ks_auth.Password(username=self.domain_admin_user, password=self.domain_admin_password, auth_url=self.v3_endpoint, domain_id=self._stack_domain_id, domain_name=self.stack_domain_name, user_domain_id=self._stack_domain_id, user_domain_name=self.stack_domain_name) # NOTE(jamielennox): just do something to ensure a valid token try: auth.get_token(self.session) except ks_exception.Unauthorized: LOG.error("Domain admin client authentication failed") raise exception.AuthorizationFailure() self._domain_admin_auth = auth return self._domain_admin_auth @property def domain_admin_client(self): if not self._domain_admin_client: self._domain_admin_client = kc_v3.Client( session=self.session, auth=self.domain_admin_auth, region_name=self.region_name) return self._domain_admin_client def _v3_client_init(self): client = kc_v3.Client(session=self.session, region_name=self.region_name) if hasattr(self.context.auth_plugin, 'get_access'): # NOTE(jamielennox): get_access returns the current token without # reauthenticating if it's present and valid. try: auth_ref = self.context.auth_plugin.get_access(self.session) except ks_exception.Unauthorized: LOG.error("Keystone client authentication failed") raise exception.AuthorizationFailure() if self.context.trust_id: # Sanity check if not auth_ref.trust_scoped: LOG.error("trust token re-scoping failed!") raise exception.AuthorizationFailure() # Sanity check that impersonation is effective if self.context.trustor_user_id != auth_ref.user_id: LOG.error("Trust impersonation failed") raise exception.AuthorizationFailure() return client def create_trust_context(self): """Create a trust using the trustor identity in the current context. The trust is created with the trustee as the heat service user. If the current context already contains a trust_id, we do nothing and return the current context. Returns a context containing the new trust_id. 
""" if self.context.trust_id: return self.context # We need the service admin user ID (not name), as the trustor user # can't lookup the ID in keystoneclient unless they're admin # workaround this by getting the user_id from admin_client try: trustee_user_id = self.context.trusts_auth_plugin.get_user_id( self.session) except ks_exception.Unauthorized: LOG.error("Domain admin client authentication failed") raise exception.AuthorizationFailure() trustor_user_id = self.context.auth_plugin.get_user_id(self.session) trustor_proj_id = self.context.auth_plugin.get_project_id(self.session) # inherit the roles of the trustor, unless set trusts_delegated_roles if cfg.CONF.trusts_delegated_roles: roles = cfg.CONF.trusts_delegated_roles else: roles = self.context.roles try: trust = self.client.trusts.create(trustor_user=trustor_user_id, trustee_user=trustee_user_id, project=trustor_proj_id, impersonation=True, role_names=roles) except ks_exception.NotFound: LOG.debug("Failed to find roles %s for user %s" % (roles, trustor_user_id)) raise exception.MissingCredentialError( required=_("roles %s") % roles) context_data = self.context.to_dict() context_data['overwrite'] = False trust_context = context.RequestContext.from_dict(context_data) trust_context.trust_id = trust.id trust_context.trustor_user_id = trustor_user_id return trust_context def delete_trust(self, trust_id): """Delete the specified trust.""" try: self.client.trusts.delete(trust_id) except (ks_exception.NotFound, ks_exception.Unauthorized): pass def _get_username(self, username): if(len(username) > 255): LOG.warning("Truncating the username %s to the last 255 " "characters.", username) # get the last 255 characters of the username return username[-255:] def create_stack_user(self, username, password=''): """Create a user defined as part of a stack. The user is defined either via template or created internally by a resource. This user will be added to the heat_stack_user_role as defined in the config. Returns the keystone ID of the resulting user. 
""" # FIXME(shardy): There's duplicated logic between here and # create_stack_domain user, but this function is expected to # be removed after the transition of all resources to domain # users has been completed stack_user_role = self.client.roles.list( name=cfg.CONF.heat_stack_user_role) if len(stack_user_role) == 1: role_id = stack_user_role[0].id # Create the user user = self.client.users.create( name=self._get_username(username), password=password, default_project=self.context.tenant_id) # Add user to heat_stack_user_role LOG.debug("Adding user %(user)s to role %(role)s", {'user': user.id, 'role': role_id}) self.client.roles.grant(role=role_id, user=user.id, project=self.context.tenant_id) else: LOG.error("Failed to add user %(user)s to role %(role)s, " "check role exists!", {'user': username, 'role': cfg.CONF.heat_stack_user_role}) raise exception.Error(_("Can't find role %s") % cfg.CONF.heat_stack_user_role) return user.id def stack_domain_user_token(self, user_id, project_id, password): """Get a token for a stack domain user.""" if not self.stack_domain: # Note, no legacy fallback path as we don't want to deploy # tokens for non stack-domain users inside instances msg = _('Cannot get stack domain user token, no stack domain id ' 'configured, please fix your heat.conf') raise exception.Error(msg) # Create a keystone session, then request a token with no # catalog (the token is expected to be used inside an instance # where a specific endpoint will be specified, and user-data # space is limited..) # TODO(rabi): generic auth plugins don't support `include_catalog' # flag yet. We'll add it once it's supported.. auth = ks_auth.Password(auth_url=self.v3_endpoint, user_id=user_id, password=password, project_id=project_id) return auth.get_token(self.session) def create_stack_domain_user(self, username, project_id, password=None): """Create a domain user defined as part of a stack. The user is defined either via template or created internally by a resource. This user will be added to the heat_stack_user_role as defined in the config, and created in the specified project (which is expected to be in the stack_domain). Returns the keystone ID of the resulting user. 
""" if not self.stack_domain: # FIXME(shardy): Legacy fallback for folks using old heat.conf # files which lack domain configuration return self.create_stack_user(username=username, password=password) # We add the new user to a special keystone role # This role is designed to allow easier differentiation of the # heat-generated "stack users" which will generally have credentials # deployed on an instance (hence are implicitly untrusted) stack_user_role = self.domain_admin_client.roles.list( name=cfg.CONF.heat_stack_user_role) if len(stack_user_role) == 1: role_id = stack_user_role[0].id # Create user user = self.domain_admin_client.users.create( name=self._get_username(username), password=password, default_project=project_id, domain=self.stack_domain_id) # Add to stack user role LOG.debug("Adding user %(user)s to role %(role)s", {'user': user.id, 'role': role_id}) self.domain_admin_client.roles.grant(role=role_id, user=user.id, project=project_id) else: LOG.error("Failed to add user %(user)s to role %(role)s, " "check role exists!", {'user': username, 'role': cfg.CONF.heat_stack_user_role}) raise exception.Error(_("Can't find role %s") % cfg.CONF.heat_stack_user_role) return user.id @property def stack_domain_id(self): if not self._stack_domain_id: try: access = self.domain_admin_auth.get_access(self.session) except ks_exception.Unauthorized: LOG.error("Keystone client authentication failed") raise exception.AuthorizationFailure() self._stack_domain_id = access.domain_id return self._stack_domain_id def _check_stack_domain_user(self, user_id, project_id, action): """Sanity check that domain/project is correct.""" user = self.domain_admin_client.users.get(user_id) if user.domain_id != self.stack_domain_id: raise ValueError(_('User %s in invalid domain') % action) if user.default_project_id != project_id: raise ValueError(_('User %s in invalid project') % action) def delete_stack_domain_user(self, user_id, project_id): if not self.stack_domain: # FIXME(shardy): Legacy fallback for folks using old heat.conf # files which lack domain configuration return self.delete_stack_user(user_id) try: self._check_stack_domain_user(user_id, project_id, 'delete') self.domain_admin_client.users.delete(user_id) except ks_exception.NotFound: pass def delete_stack_user(self, user_id): try: self.client.users.delete(user=user_id) except ks_exception.NotFound: pass def create_stack_domain_project(self, stack_id): """Create a project in the heat stack-user domain.""" if not self.stack_domain: # FIXME(shardy): Legacy fallback for folks using old heat.conf # files which lack domain configuration return self.context.tenant_id # Note we use the tenant ID not name to ensure uniqueness in a multi- # domain environment (where the tenant name may not be globally unique) project_name = ('%s-%s' % (self.context.tenant_id, stack_id))[:64] desc = "Heat stack user project" domain_project = self.domain_admin_client.projects.create( name=project_name, domain=self.stack_domain_id, description=desc) return domain_project.id def delete_stack_domain_project(self, project_id): if not self.stack_domain: # FIXME(shardy): Legacy fallback for folks using old heat.conf # files which lack domain configuration return # If stacks are created before configuring the heat domain, they # exist in the default domain, in the user's project, which we # do *not* want to delete! 
However, if the keystone v3cloudsample # policy is used, it's possible that we'll get Forbidden when trying # to get the project, so again we should do nothing try: project = self.domain_admin_client.projects.get(project=project_id) except ks_exception.NotFound: return except ks_exception.Forbidden: LOG.warning('Unable to get details for project %s, ' 'not deleting', project_id) return if project.domain_id != self.stack_domain_id: LOG.warning('Not deleting non heat-domain project') return try: project.delete() except ks_exception.NotFound: pass def _find_ec2_keypair(self, access, user_id=None): """Lookup an ec2 keypair by access ID.""" # FIXME(shardy): add filtering for user_id when keystoneclient # extensible-crud-manager-operations bp lands credentials = self.client.credentials.list() for cr in credentials: ec2_creds = jsonutils.loads(cr.blob) if ec2_creds.get('access') == access: return AccessKey(id=cr.id, access=ec2_creds['access'], secret=ec2_creds['secret']) def delete_ec2_keypair(self, credential_id=None, access=None, user_id=None): """Delete credential containing ec2 keypair.""" if credential_id: try: self.client.credentials.delete(credential_id) except ks_exception.NotFound: pass elif access: cred = self._find_ec2_keypair(access=access, user_id=user_id) if cred: self.client.credentials.delete(cred.id) else: raise ValueError("Must specify either credential_id or access") def get_ec2_keypair(self, credential_id=None, access=None, user_id=None): """Get an ec2 keypair via v3/credentials, by id or access.""" # Note v3/credentials does not support filtering by access # because it's stored in the credential blob, so we expect # all resources to pass credential_id except where backwards # compatibility is required (resource only has access stored) # then we'll have to do a brute-force lookup locally if credential_id: cred = self.client.credentials.get(credential_id) ec2_creds = jsonutils.loads(cred.blob) return AccessKey(id=cred.id, access=ec2_creds['access'], secret=ec2_creds['secret']) elif access: return self._find_ec2_keypair(access=access, user_id=user_id) else: raise ValueError("Must specify either credential_id or access") def create_ec2_keypair(self, user_id=None): user_id = user_id or self.context.get_access(self.session).user_id project_id = self.context.tenant_id data_blob = {'access': uuid.uuid4().hex, 'secret': password_gen.generate_openstack_password()} ec2_creds = self.client.credentials.create( user=user_id, type='ec2', blob=jsonutils.dumps(data_blob), project=project_id) # Return a AccessKey namedtuple for easier access to the blob contents # We return the id as the v3 api provides no way to filter by # access in the blob contents, so it will be much more efficient # if we manage credentials by ID instead return AccessKey(id=ec2_creds.id, access=data_blob['access'], secret=data_blob['secret']) def create_stack_domain_user_keypair(self, user_id, project_id): if not self.stack_domain: # FIXME(shardy): Legacy fallback for folks using old heat.conf # files which lack domain configuration return self.create_ec2_keypair(user_id) data_blob = {'access': uuid.uuid4().hex, 'secret': password_gen.generate_openstack_password()} creds = self.domain_admin_client.credentials.create( user=user_id, type='ec2', blob=jsonutils.dumps(data_blob), project=project_id) return AccessKey(id=creds.id, access=data_blob['access'], secret=data_blob['secret']) def delete_stack_domain_user_keypair(self, user_id, project_id, credential_id): if not self.stack_domain: # FIXME(shardy): Legacy fallback for 
folks using old heat.conf # files which lack domain configuration return self.delete_ec2_keypair(credential_id=credential_id) self._check_stack_domain_user(user_id, project_id, 'delete_keypair') try: self.domain_admin_client.credentials.delete(credential_id) except ks_exception.NotFound: pass def disable_stack_user(self, user_id): self.client.users.update(user=user_id, enabled=False) def enable_stack_user(self, user_id): self.client.users.update(user=user_id, enabled=True) def disable_stack_domain_user(self, user_id, project_id): if not self.stack_domain: # FIXME(shardy): Legacy fallback for folks using old heat.conf # files which lack domain configuration return self.disable_stack_user(user_id) self._check_stack_domain_user(user_id, project_id, 'disable') self.domain_admin_client.users.update(user=user_id, enabled=False) def enable_stack_domain_user(self, user_id, project_id): if not self.stack_domain: # FIXME(shardy): Legacy fallback for folks using old heat.conf # files which lack domain configuration return self.enable_stack_user(user_id) self._check_stack_domain_user(user_id, project_id, 'enable') self.domain_admin_client.users.update(user=user_id, enabled=True) class KeystoneClient(object): """Keystone Auth Client. Delay choosing the backend client module until the client's class needs to be initialized. """ def __new__(cls, context): if cfg.CONF.keystone_backend == _default_keystone_backend: return KsClientWrapper(context) else: return importutils.import_object( cfg.CONF.keystone_backend, context ) def list_opts(): yield None, keystone_opts heat-10.0.2/heat/engine/clients/os/keystone/__init__.py0000666000175000017500000001137113343562337022775 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
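The ec2 keypair helpers above store the access/secret pair as a JSON blob inside a Keystone v3 credential and hand results back as the AccessKey namedtuple defined at module scope. A small sketch of that round trip with made-up values (the real code uses oslo.serialization jsonutils rather than stdlib json):

import collections
import json

AccessKey = collections.namedtuple('AccessKey', ['id', 'access', 'secret'])

blob = json.dumps({'access': 'a' * 32, 'secret': 's3cr3t'})
data = json.loads(blob)
# The credential ID comes from Keystone; 'cred-1' is a placeholder.
key = AccessKey(id='cred-1', access=data['access'], secret=data['secret'])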
from keystoneauth1 import exceptions as ks_exceptions from heat.common import exception from heat.engine.clients import client_plugin from heat.engine.clients.os.keystone import heat_keystoneclient as hkc class KeystoneClientPlugin(client_plugin.ClientPlugin): exceptions_module = [ks_exceptions, exception] service_types = [IDENTITY] = ['identity'] def _create(self): return hkc.KeystoneClient(self.context) def is_not_found(self, ex): return isinstance(ex, (ks_exceptions.NotFound, exception.EntityNotFound)) def is_over_limit(self, ex): return isinstance(ex, ks_exceptions.RequestEntityTooLarge) def is_conflict(self, ex): return isinstance(ex, ks_exceptions.Conflict) def get_role_id(self, role): try: role_obj = self.client().client.roles.get(role) return role_obj.id except ks_exceptions.NotFound: role_list = self.client().client.roles.list(name=role) for role_obj in role_list: if role_obj.name == role: return role_obj.id raise exception.EntityNotFound(entity='KeystoneRole', name=role) def get_project_id(self, project): if project is None: return None try: project_obj = self.client().client.projects.get(project) return project_obj.id except ks_exceptions.NotFound: project_list = self.client().client.projects.list(name=project) for project_obj in project_list: if project_obj.name == project: return project_obj.id raise exception.EntityNotFound(entity='KeystoneProject', name=project) def get_domain_id(self, domain): if domain is None: return None try: domain_obj = self.client().client.domains.get(domain) return domain_obj.id except ks_exceptions.NotFound: domain_list = self.client().client.domains.list(name=domain) for domain_obj in domain_list: if domain_obj.name == domain: return domain_obj.id raise exception.EntityNotFound(entity='KeystoneDomain', name=domain) def get_group_id(self, group): if group is None: return None try: group_obj = self.client().client.groups.get(group) return group_obj.id except ks_exceptions.NotFound: group_list = self.client().client.groups.list(name=group) for group_obj in group_list: if group_obj.name == group: return group_obj.id raise exception.EntityNotFound(entity='KeystoneGroup', name=group) def get_service_id(self, service): if service is None: return None try: service_obj = self.client().client.services.get(service) return service_obj.id except ks_exceptions.NotFound: service_list = self.client().client.services.list(name=service) if len(service_list) == 1: return service_list[0].id elif len(service_list) > 1: raise exception.KeystoneServiceNameConflict(service=service) else: raise exception.EntityNotFound(entity='KeystoneService', name=service) def get_user_id(self, user): if user is None: return None try: user_obj = self.client().client.users.get(user) return user_obj.id except ks_exceptions.NotFound: user_list = self.client().client.users.list(name=user) for user_obj in user_list: if user_obj.name == user: return user_obj.id raise exception.EntityNotFound(entity='KeystoneUser', name=user) def get_region_id(self, region): try: region_obj = self.client().client.regions.get(region) return region_obj.id except ks_exceptions.NotFound: raise exception.EntityNotFound(entity='KeystoneRegion', name=region) heat-10.0.2/heat/engine/clients/os/keystone/keystone_constraints.py0000666000175000017500000000444113343562337025526 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.engine import constraints CLIENT_NAME = 'keystone' class KeystoneBaseConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME entity = None def validate_with_client(self, client, resource_id): # when user specify empty value in template, do not get the # responding resource from backend, otherwise an error will happen if resource_id == '': raise exception.EntityNotFound(entity=self.entity, name=resource_id) super(KeystoneBaseConstraint, self).validate_with_client(client, resource_id) class KeystoneRoleConstraint(KeystoneBaseConstraint): resource_getter_name = 'get_role_id' entity = 'KeystoneRole' class KeystoneDomainConstraint(KeystoneBaseConstraint): resource_getter_name = 'get_domain_id' entity = 'KeystoneDomain' class KeystoneProjectConstraint(KeystoneBaseConstraint): resource_getter_name = 'get_project_id' entity = 'KeystoneProject' class KeystoneGroupConstraint(KeystoneBaseConstraint): resource_getter_name = 'get_group_id' entity = 'KeystoneGroup' class KeystoneServiceConstraint(KeystoneBaseConstraint): expected_exceptions = (exception.EntityNotFound, exception.KeystoneServiceNameConflict,) resource_getter_name = 'get_service_id' entity = 'KeystoneService' class KeystoneUserConstraint(KeystoneBaseConstraint): resource_getter_name = 'get_user_id' entity = 'KeystoneUser' class KeystoneRegionConstraint(KeystoneBaseConstraint): resource_getter_name = 'get_region_id' entity = 'KeystoneRegion' heat-10.0.2/heat/engine/clients/os/zaqar.py0000666000175000017500000000675513343562337020525 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
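Adding a constraint in the style above is mostly declarative: point resource_getter_name at a getter on the client plugin and let expected_exceptions mark validation failures. A hypothetical example (get_widget_id and KeystoneWidget do not exist in the plugin; they stand in for a real getter and entity):

class KeystoneWidgetConstraint(KeystoneBaseConstraint):
    # The framework looks up get_widget_id() on the keystone client
    # plugin and treats EntityNotFound as a validation failure.
    resource_getter_name = 'get_widget_id'
    entity = 'KeystoneWidget'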
import six from oslo_log import log as logging from zaqarclient.queues.v2 import client as zaqarclient from zaqarclient.transport import errors as zaqar_errors from heat.common.i18n import _ from heat.engine.clients import client_plugin from heat.engine import constraints LOG = logging.getLogger(__name__) CLIENT_NAME = 'zaqar' class ZaqarClientPlugin(client_plugin.ClientPlugin): exceptions_module = zaqar_errors service_types = [MESSAGING] = ['messaging'] DEFAULT_TTL = 3600 def _create(self): return zaqarclient.Client(version=2, session=self.context.keystone_session) def create_for_tenant(self, tenant_id, token): con = self.context if token is None: LOG.error("Zaqar connection failed, no auth_token!") return None opts = { 'os_auth_token': token, 'os_auth_url': con.auth_url, 'os_project_id': tenant_id, 'os_service_type': self.MESSAGING, } auth_opts = {'backend': 'keystone', 'options': opts} conf = {'auth_opts': auth_opts} endpoint = self.url_for(service_type=self.MESSAGING) return zaqarclient.Client(url=endpoint, conf=conf, version=2) def create_from_signed_url(self, project_id, paths, expires, methods, signature): opts = { 'paths': paths, 'expires': expires, 'methods': methods, 'signature': signature, 'os_project_id': project_id, } auth_opts = {'backend': 'signed-url', 'options': opts} conf = {'auth_opts': auth_opts} endpoint = self.url_for(service_type=self.MESSAGING) return zaqarclient.Client(url=endpoint, conf=conf, version=2) def is_not_found(self, ex): return isinstance(ex, zaqar_errors.ResourceNotFound) def get_queue(self, queue_name): if not isinstance(queue_name, six.string_types): raise TypeError(_('Queue name must be a string')) if not (0 < len(queue_name) <= 64): raise ValueError(_('Queue name length must be 1-64')) # Queues are created automatically starting with v1.1 of the Zaqar API, # so any queue name is valid for the purposes of constraint validation. return queue_name class ZaqarEventSink(object): def __init__(self, target, ttl=None): self._target = target self._ttl = ttl def consume(self, context, event): zaqar_plugin = context.clients.client_plugin('zaqar') zaqar = zaqar_plugin.client() queue = zaqar.queue(self._target, auto_create=False) ttl = self._ttl if self._ttl is not None else zaqar_plugin.DEFAULT_TTL queue.post({'body': event, 'ttl': ttl}) class QueueConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME resource_getter_name = 'get_queue' heat-10.0.2/heat/engine/clients/os/mistral.py0000666000175000017500000000620113343562337021044 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
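Note that get_queue above only validates the name, since Zaqar v1.1+ creates queues lazily on first use. The accepted names reduce to a simple predicate; a sketch of the same checks:

import six

def is_valid_queue_name(name):
    # Mirrors get_queue(): a string of 1-64 characters.
    return isinstance(name, six.string_types) and 0 < len(name) <= 64

assert is_valid_queue_name('signals')
assert not is_valid_queue_name('q' * 65)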
from keystoneauth1.exceptions import http as ka_exceptions from mistralclient.api import base as mistral_base from mistralclient.api import client as mistral_client from heat.common import exception from heat.engine.clients import client_plugin from heat.engine import constraints CLIENT_NAME = 'mistral' class MistralClientPlugin(client_plugin.ClientPlugin): service_types = [WORKFLOW_V2] = ['workflowv2'] def _create(self): endpoint_type = self._get_client_option(CLIENT_NAME, 'endpoint_type') endpoint = self.url_for(service_type=self.WORKFLOW_V2, endpoint_type=endpoint_type) args = { 'mistral_url': endpoint, 'session': self.context.keystone_session, 'region_name': self._get_region_name() } client = mistral_client.client(**args) return client def is_not_found(self, ex): # check for keystoneauth exceptions till requirements change # to python-mistralclient > 3.1.2 ka_not_found = isinstance(ex, ka_exceptions.NotFound) mistral_not_found = (isinstance(ex, mistral_base.APIException) and ex.error_code == 404) return ka_not_found or mistral_not_found def is_over_limit(self, ex): # check for keystoneauth exceptions till requirements change # to python-mistralclient > 3.1.2 ka_overlimit = isinstance(ex, ka_exceptions.RequestEntityTooLarge) mistral_overlimit = (isinstance(ex, mistral_base.APIException) and ex.error_code == 413) return ka_overlimit or mistral_overlimit def is_conflict(self, ex): # check for keystoneauth exceptions till requirements change # to python-mistralclient > 3.1.2 ka_conflict = isinstance(ex, ka_exceptions.Conflict) mistral_conflict = (isinstance(ex, mistral_base.APIException) and ex.error_code == 409) return ka_conflict or mistral_conflict def get_workflow_by_identifier(self, workflow_identifier): try: return self.client().workflows.get( workflow_identifier) except Exception as ex: if self.is_not_found(ex): raise exception.EntityNotFound( entity="Workflow", name=workflow_identifier) raise class WorkflowConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME resource_getter_name = 'get_workflow_by_identifier' expected_exceptions = (exception.EntityNotFound,) heat-10.0.2/heat/engine/clients/os/aodh.py0000666000175000017500000000273613343562337020315 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
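The three is_* predicates above repeat one pattern: accept either the keystoneauth HTTP exception class or a mistralclient APIException carrying the matching error code. Factored out as a sketch (a helper of this shape is not in the module; it just restates the logic):

from keystoneauth1.exceptions import http as ka_exceptions
from mistralclient.api import base as mistral_base

def _is_http_error(ex, ka_cls, code):
    # True for the keystoneauth exception type, or for a mistral
    # APIException with the equivalent HTTP error code.
    return isinstance(ex, ka_cls) or (
        isinstance(ex, mistral_base.APIException) and
        ex.error_code == code)

# e.g. _is_http_error(ex, ka_exceptions.NotFound, 404)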
from aodhclient import client as ac from aodhclient import exceptions from heat.engine.clients import client_plugin CLIENT_NAME = 'aodh' class AodhClientPlugin(client_plugin.ClientPlugin): exceptions_module = exceptions service_types = [ALARMING] = ['alarming'] supported_versions = [V2] = ['2'] default_version = V2 def _create(self, version=None): interface = self._get_client_option(CLIENT_NAME, 'endpoint_type') return ac.Client( version, session=self.context.keystone_session, interface=interface, service_type=self.ALARMING, region_name=self._get_region_name()) def is_not_found(self, ex): return isinstance(ex, exceptions.NotFound) def is_over_limit(self, ex): return isinstance(ex, exceptions.OverLimit) def is_conflict(self, ex): return isinstance(ex, exceptions.Conflict) heat-10.0.2/heat/engine/clients/os/barbican.py0000666000175000017500000000603213343562337021134 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from barbicanclient import exceptions from barbicanclient.v1 import client as barbican_client from barbicanclient.v1 import containers from heat.common import exception from heat.engine.clients import client_plugin from heat.engine import constraints CLIENT_NAME = 'barbican' class BarbicanClientPlugin(client_plugin.ClientPlugin): service_types = [KEY_MANAGER] = ['key-manager'] def _create(self): interface = self._get_client_option(CLIENT_NAME, 'endpoint_type') client = barbican_client.Client( session=self.context.keystone_session, service_type=self.KEY_MANAGER, interface=interface, region_name=self._get_region_name()) return client def is_not_found(self, ex): return (isinstance(ex, exceptions.HTTPClientError) and ex.status_code == 404) def create_generic_container(self, **props): return containers.Container( self.client().containers._api, **props) def create_certificate(self, **props): return containers.CertificateContainer( self.client().containers._api, **props) def create_rsa(self, **props): return containers.RSAContainer( self.client().containers._api, **props) def get_secret_by_ref(self, secret_ref): try: secret = self.client().secrets.get(secret_ref) # Force lazy loading. 
TODO(therve): replace with to_dict() secret.name return secret except Exception as ex: if self.is_not_found(ex): raise exception.EntityNotFound( entity="Secret", name=secret_ref) raise def get_container_by_ref(self, container_ref): try: # TODO(therve): replace with to_dict() return self.client().containers.get(container_ref) except Exception as ex: if self.is_not_found(ex): raise exception.EntityNotFound( entity="Container", name=container_ref) raise class SecretConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME resource_getter_name = 'get_secret_by_ref' expected_exceptions = (exception.EntityNotFound,) class ContainerConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME resource_getter_name = 'get_container_by_ref' expected_exceptions = (exception.EntityNotFound,) heat-10.0.2/heat/engine/clients/os/magnum.py0000666000175000017500000000514513343562337020663 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from magnumclient import exceptions as mc_exc from magnumclient.v1 import client as magnum_client from heat.common import exception from heat.engine.clients import client_plugin from heat.engine import constraints CLIENT_NAME = 'magnum' class MagnumClientPlugin(client_plugin.ClientPlugin): service_types = [CONTAINER] = ['container-infra'] def _create(self): interface = self._get_client_option(CLIENT_NAME, 'endpoint_type') args = { 'interface': interface, 'service_type': self.CONTAINER, 'session': self.context.keystone_session, 'region_name': self._get_region_name() } client = magnum_client.Client(**args) return client def is_not_found(self, ex): return isinstance(ex, mc_exc.NotFound) def is_over_limit(self, ex): return isinstance(ex, mc_exc.RequestEntityTooLarge) def is_conflict(self, ex): return isinstance(ex, mc_exc.Conflict) def _get_rsrc_name_or_id(self, value, entity, entity_msg): entity_client = getattr(self.client(), entity) try: return entity_client.get(value).uuid except mc_exc.NotFound: # Magnum cli will find the value either is name or id, # so no need to call list() here. raise exception.EntityNotFound(entity=entity_msg, name=value) def get_baymodel(self, value): return self._get_rsrc_name_or_id(value, entity='baymodels', entity_msg='BayModel') def get_cluster_template(self, value): return self._get_rsrc_name_or_id(value, entity='cluster_templates', entity_msg='ClusterTemplate') class ClusterTemplateConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME resource_getter_name = 'get_cluster_template' class BaymodelConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME resource_getter_name = 'get_baymodel' heat-10.0.2/heat/engine/clients/os/ceilometer.py0000666000175000017500000000320513343562351021516 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ceilometerclient import client as cc from ceilometerclient import exc from heat.engine.clients import client_plugin CLIENT_NAME = 'ceilometer' class CeilometerClientPlugin(client_plugin.ClientPlugin): exceptions_module = exc service_types = [METERING, ALARMING] = ['metering', 'alarming'] def _create(self): con = self.context interface = self._get_client_option(CLIENT_NAME, 'endpoint_type') aodh_endpoint = self.url_for(service_type=self.ALARMING, endpoint_type=interface) args = { 'session': con.keystone_session, 'interface': interface, 'service_type': self.METERING, 'aodh_endpoint': aodh_endpoint, 'region_name': self._get_region_name() } return cc.get_client('2', **args) def is_not_found(self, ex): return isinstance(ex, exc.HTTPNotFound) def is_over_limit(self, ex): return isinstance(ex, exc.HTTPOverLimit) def is_conflict(self, ex): return isinstance(ex, exc.HTTPConflict) heat-10.0.2/heat/engine/clients/os/designate.py0000666000175000017500000001043113343562351021330 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
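# The designate plugin below speaks both the v1 (domains/records) and
# v2 (zones) APIs. A minimal usage sketch, assuming ``ctx`` is a
# request context carrying a keystone session (``ctx`` and the zone
# name are illustrative, not part of this module):
#
#     plugin = ctx.clients.client_plugin('designate')
#     zone_id = plugin.get_zone_id('example.org.')  # name or UUID
#     v2 = plugin.client(version='2')               # explicit v2 client
#
# get_zone_id() attempts a direct v2 GET first, then falls back to a
# name-filtered list, raising EntityNotFound when nothing matches.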
from designateclient import client from designateclient import exceptions from designateclient.v1 import domains from designateclient.v1 import records from heat.common import exception as heat_exception from heat.engine.clients import client_plugin from heat.engine import constraints CLIENT_NAME = 'designate' class DesignateClientPlugin(client_plugin.ClientPlugin): exceptions_module = [exceptions] service_types = [DNS] = ['dns'] supported_versions = [V1, V2] = ['1', '2'] default_version = V1 def _create(self, version=default_version): endpoint_type = self._get_client_option(CLIENT_NAME, 'endpoint_type') return client.Client(version=version, session=self.context.keystone_session, endpoint_type=endpoint_type, service_type=self.DNS, region_name=self._get_region_name()) def is_not_found(self, ex): return isinstance(ex, exceptions.NotFound) def get_domain_id(self, domain_id_or_name): try: domain_obj = self.client().domains.get(domain_id_or_name) return domain_obj.id except exceptions.NotFound: for domain in self.client().domains.list(): if domain.name == domain_id_or_name: return domain.id raise heat_exception.EntityNotFound(entity='Designate Domain', name=domain_id_or_name) def get_zone_id(self, zone_id_or_name): try: zone_obj = self.client(version=self.V2).zones.get(zone_id_or_name) return zone_obj['id'] except exceptions.NotFound: # Zones only exist in the v2 API; request the v2 client # explicitly so the fallback does not hit the v1 default. zones = self.client(version=self.V2).zones.list( criterion=dict(name=zone_id_or_name)) if len(zones) == 1: return zones[0]['id'] raise heat_exception.EntityNotFound(entity='Designate Zone', name=zone_id_or_name) def domain_create(self, **kwargs): domain = domains.Domain(**kwargs) return self.client().domains.create(domain) def domain_update(self, **kwargs): # Designate requires passing the Domain object with the updated properties domain = self.client().domains.get(kwargs['id']) for key in kwargs.keys(): setattr(domain, key, kwargs[key]) return self.client().domains.update(domain) def record_create(self, **kwargs): domain_id = self.get_domain_id(kwargs.pop('domain')) record = records.Record(**kwargs) return self.client().records.create(domain_id, record) def record_update(self, **kwargs): # Designate requires passing the Record object with the updated properties domain_id = self.get_domain_id(kwargs.pop('domain')) record = self.client().records.get(domain_id, kwargs['id']) for key in kwargs.keys(): setattr(record, key, kwargs[key]) return self.client().records.update(record.domain_id, record) def record_delete(self, **kwargs): domain_id = self.get_domain_id(kwargs.pop('domain')) return self.client().records.delete(domain_id, kwargs.pop('id')) def record_show(self, **kwargs): domain_id = self.get_domain_id(kwargs.pop('domain')) return self.client().records.get(domain_id, kwargs.pop('id')) class DesignateDomainConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME resource_getter_name = 'get_domain_id' class DesignateZoneConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME resource_getter_name = 'get_zone_id' heat-10.0.2/heat/engine/clients/os/monasca.py0000666000175000017500000000373113343562337021017 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from monascaclient import client from monascaclient import exc as monasca_exc from heat.common import exception as heat_exc from heat.engine.clients import client_plugin from heat.engine import constraints CLIENT_NAME = 'monasca' class MonascaClientPlugin(client_plugin.ClientPlugin): exceptions_module = [monasca_exc] service_types = [MONITORING] = ['monitoring'] VERSION = '2_0' def _create(self): interface = self._get_client_option(CLIENT_NAME, 'endpoint_type') endpoint = self.url_for(service_type=self.MONITORING, endpoint_type=interface) return client.Client( self.VERSION, session=self.context.keystone_session, endpoint=endpoint) def is_not_found(self, ex): return isinstance(ex, monasca_exc.NotFound) def is_un_processable(self, ex): return isinstance(ex, monasca_exc.UnprocessableEntity) def get_notification(self, notification): try: return self.client().notifications.get( notification_id=notification)['id'] except monasca_exc.NotFound: raise heat_exc.EntityNotFound(entity='Monasca Notification', name=notification) class MonascaNotificationConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME resource_getter_name = 'get_notification' heat-10.0.2/heat/engine/clients/os/__init__.py0000666000175000017500000000166613343562337021142 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_cache import core from oslo_config import cfg from heat.common import cache MEMOIZE_EXTENSIONS = core.get_memoization_decorator( conf=cfg.CONF, region=cache.get_cache_region(), group="service_extension_cache") MEMOIZE_FINDER = core.get_memoization_decorator( conf=cfg.CONF, region=cache.get_cache_region(), group="resource_finder_cache") heat-10.0.2/heat/engine/clients/os/zun.py0000666000175000017500000000316513343562351020207 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
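# The monasca plugin above pins API version '2_0' and resolves
# notifications through get_notification(). A hedged sketch, with
# ``ctx`` and ``nid`` as illustrative stand-ins:
#
#     plugin = ctx.clients.client_plugin('monasca')
#     notification_id = plugin.get_notification(nid)
#
# A monasca NotFound is translated into heat's EntityNotFound so the
# MonascaNotificationConstraint can report a clean validation error.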
from zunclient import client as zun_client from zunclient import exceptions as zc_exc from heat.engine.clients import client_plugin CLIENT_NAME = 'zun' class ZunClientPlugin(client_plugin.ClientPlugin): service_types = [CONTAINER] = ['container'] default_version = '1.12' supported_versions = [ V1_12 ] = [ '1.12' ] def _create(self, version=None): if not version: version = self.default_version interface = self._get_client_option(CLIENT_NAME, 'endpoint_type') args = { 'interface': interface, 'service_type': self.CONTAINER, 'session': self.context.keystone_session, 'region_name': self._get_region_name() } client = zun_client.Client(version, **args) return client def is_not_found(self, ex): return isinstance(ex, zc_exc.NotFound) def is_over_limit(self, ex): return isinstance(ex, zc_exc.RequestEntityTooLarge) def is_conflict(self, ex): return isinstance(ex, zc_exc.Conflict) heat-10.0.2/heat/engine/clients/os/octavia.py0000666000175000017500000000671613343562337021032 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from octaviaclient.api import constants from octaviaclient.api.v2 import octavia from osc_lib import exceptions from heat.engine.clients import client_plugin from heat.engine import constraints CLIENT_NAME = 'octavia' DEFAULT_FIND_ATTR = 'name' def _is_translated_exception(ex, code): return (isinstance(ex, octavia.OctaviaClientException) and ex.code == code) class OctaviaClientPlugin(client_plugin.ClientPlugin): exceptions_module = octavia service_types = [LOADBALANCER] = ['load-balancer'] supported_versions = [V2] = ['2'] default_version = V2 def _create(self, version=None): interface = self._get_client_option(CLIENT_NAME, 'endpoint_type') endpoint = self.url_for(service_type=self.LOADBALANCER, endpoint_type=interface) return octavia.OctaviaAPI( session=self.context.keystone_session, service_type=self.LOADBALANCER, endpoint=endpoint) def is_not_found(self, ex): return isinstance( ex, exceptions.NotFound) or _is_translated_exception(ex, 404) def is_over_limit(self, ex): return isinstance( ex, exceptions.OverLimit) or _is_translated_exception(ex, 413) def is_conflict(self, ex): return isinstance( ex, exceptions.Conflict) or _is_translated_exception(ex, 409) def get_pool(self, value): pool = self.client().find(path=constants.BASE_POOL_URL, value=value, attr=DEFAULT_FIND_ATTR) return pool['id'] def get_listener(self, value): lsnr = self.client().find(path=constants.BASE_LISTENER_URL, value=value, attr=DEFAULT_FIND_ATTR) return lsnr['id'] def get_loadbalancer(self, value): lb = self.client().find(path=constants.BASE_LOADBALANCER_URL, value=value, attr=DEFAULT_FIND_ATTR) return lb['id'] def get_l7policy(self, value): policy = self.client().find(path=constants.BASE_L7POLICY_URL, value=value, attr=DEFAULT_FIND_ATTR) return policy['id'] class OctaviaConstraint(constraints.BaseCustomConstraint): expected_exceptions = (exceptions.NotFound, octavia.OctaviaClientException) base_url = None def validate_with_client(self, client, value): octavia_client = client.client(CLIENT_NAME) 
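# NOTE: the find() call below resolves the name/ID and validates
# existence in a single round trip; a miss raises one of the
# expected_exceptions declared on this constraint, which the
# constraint machinery reports as a validation failure.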
octavia_client.find(path=self.base_url, value=value, attr=DEFAULT_FIND_ATTR) class LoadbalancerConstraint(OctaviaConstraint): base_url = constants.BASE_LOADBALANCER_URL class ListenerConstraint(OctaviaConstraint): base_url = constants.BASE_LISTENER_URL class PoolConstraint(OctaviaConstraint): base_url = constants.BASE_POOL_URL class L7PolicyConstraint(OctaviaConstraint): base_url = constants.BASE_L7POLICY_URL heat-10.0.2/heat/engine/clients/os/trove.py0000666000175000017500000000654413343562337020542 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from troveclient import client as tc from troveclient import exceptions from heat.common import exception from heat.common.i18n import _ from heat.engine.clients import client_plugin from heat.engine import constraints CLIENT_NAME = 'trove' class TroveClientPlugin(client_plugin.ClientPlugin): exceptions_module = exceptions service_types = [DATABASE] = ['database'] def _create(self): con = self.context endpoint_type = self._get_client_option(CLIENT_NAME, 'endpoint_type') args = { 'endpoint_type': endpoint_type, 'service_type': self.DATABASE, 'session': con.keystone_session, 'region_name': self._get_region_name() } client = tc.Client('1.0', **args) return client def validate_datastore(self, datastore_type, datastore_version, ds_type_key, ds_version_key): if datastore_type: # get current active versions allowed_versions = self.client().datastore_versions.list( datastore_type) allowed_version_names = [v.name for v in allowed_versions] if datastore_version: if datastore_version not in allowed_version_names: msg = _("Datastore version %(dsversion)s " "for datastore type %(dstype)s is not valid. " "Allowed versions are %(allowed)s.") % { 'dstype': datastore_type, 'dsversion': datastore_version, 'allowed': ', '.join(allowed_version_names)} raise exception.StackValidationFailed(message=msg) else: if datastore_version: msg = _("Not allowed - %(dsver)s without %(dstype)s.") % { 'dsver': ds_version_key, 'dstype': ds_type_key} raise exception.StackValidationFailed(message=msg) def is_not_found(self, ex): return isinstance(ex, exceptions.NotFound) def is_over_limit(self, ex): return isinstance(ex, exceptions.RequestEntityTooLarge) def is_conflict(self, ex): return isinstance(ex, exceptions.Conflict) def find_flavor_by_name_or_id(self, flavor): """Find the specified flavor by name or id. :param flavor: the name of the flavor to find :returns: the id of :flavor: """ try: return self.client().flavors.get(flavor).id except exceptions.NotFound: return self.client().flavors.find(name=flavor).id class FlavorConstraint(constraints.BaseCustomConstraint): expected_exceptions = (exceptions.NotFound,) resource_client_name = CLIENT_NAME resource_getter_name = 'find_flavor_by_name_or_id' heat-10.0.2/heat/engine/clients/os/sahara.py0000666000175000017500000001460713343562351020635 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. 
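# The trove plugin above validates datastore type/version pairs at
# stack-validation time. An illustrative sketch (``ctx`` and the
# property names are assumptions, not part of this module):
#
#     plugin = ctx.clients.client_plugin('trove')
#     flavor_id = plugin.find_flavor_by_name_or_id('m1.small')
#     plugin.validate_datastore('mysql', '5.7',
#                               'datastore_type', 'datastore_version')
#
# validate_datastore() raises StackValidationFailed when the version
# is not among the active versions for the datastore, or when a
# version is supplied without a type.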
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from saharaclient.api import base as sahara_base from saharaclient import client as sahara_client import six from heat.common import exception from heat.common.i18n import _ from heat.engine.clients import client_plugin from heat.engine import constraints CLIENT_NAME = 'sahara' class SaharaClientPlugin(client_plugin.ClientPlugin): exceptions_module = sahara_base service_types = [DATA_PROCESSING] = ['data-processing'] def _create(self): con = self.context endpoint_type = self._get_client_option(CLIENT_NAME, 'endpoint_type') args = { 'endpoint_type': endpoint_type, 'service_type': self.DATA_PROCESSING, 'session': con.keystone_session, 'region_name': self._get_region_name() } client = sahara_client.Client('1.1', **args) return client def validate_hadoop_version(self, plugin_name, hadoop_version): plugin = self.client().plugins.get(plugin_name) allowed_versions = plugin.versions if hadoop_version not in allowed_versions: msg = (_("Requested plugin '%(plugin)s' doesn\'t support version " "'%(version)s'. Allowed versions are %(allowed)s") % {'plugin': plugin_name, 'version': hadoop_version, 'allowed': ', '.join(allowed_versions)}) raise exception.StackValidationFailed(message=msg) def is_not_found(self, ex): return (isinstance(ex, sahara_base.APIException) and ex.error_code == 404) def is_over_limit(self, ex): return (isinstance(ex, sahara_base.APIException) and ex.error_code == 413) def is_conflict(self, ex): return (isinstance(ex, sahara_base.APIException) and ex.error_code == 409) def is_not_registered(self, ex): return (isinstance(ex, sahara_base.APIException) and ex.error_code == 400 and ex.error_name == 'IMAGE_NOT_REGISTERED') def find_resource_by_name_or_id(self, resource_name, value): """Return the ID for the specified name or identifier. :param resource_name: API name of entity :param value: ID or name of entity :returns: the id of the requested :value: :raises: exception.EntityNotFound, exception.PhysicalResourceNameAmbiguity """ try: entity = getattr(self.client(), resource_name) return entity.get(value).id except sahara_base.APIException: return self.find_resource_by_name(resource_name, value) def get_image_id(self, image_identifier): """Return the ID for the specified image name or identifier. :param image_identifier: image name or a UUID-like identifier :returns: the id of the requested :image_identifier: :raises: exception.EntityNotFound, exception.PhysicalResourceNameAmbiguity """ # leave this method for backward compatibility try: return self.find_resource_by_name_or_id('images', image_identifier) except exception.EntityNotFound: raise exception.EntityNotFound(entity='Image', name=image_identifier) def find_resource_by_name(self, resource_name, value): """Return the ID for the specified entity name. 
:raises: exception.EntityNotFound, exception.PhysicalResourceNameAmbiguity """ try: filters = {'name': value} obj = getattr(self.client(), resource_name) obj_list = obj.find(**filters) except sahara_base.APIException as ex: raise exception.Error( _("Error retrieving %(entity)s list from sahara: " "%(err)s") % dict(entity=resource_name, err=six.text_type(ex))) num_matches = len(obj_list) if num_matches == 0: raise exception.EntityNotFound(entity=resource_name or 'entity', name=value) elif num_matches > 1: raise exception.PhysicalResourceNameAmbiguity( name=value) else: return obj_list[0].id def get_plugin_id(self, plugin_name): """Get the id for the specified plugin name. :param plugin_name: the name of the plugin to find :returns: the id of :plugin: :raises: exception.EntityNotFound """ try: self.client().plugins.get(plugin_name) except sahara_base.APIException: raise exception.EntityNotFound(entity='Plugin', name=plugin_name) class SaharaBaseConstraint(constraints.BaseCustomConstraint): expected_exceptions = (exception.EntityNotFound, exception.PhysicalResourceNameAmbiguity,) resource_name = None def validate_with_client(self, client, resource_id): sahara_plugin = client.client_plugin(CLIENT_NAME) sahara_plugin.find_resource_by_name_or_id(self.resource_name, resource_id) class PluginConstraint(constraints.BaseCustomConstraint): # do not touch constraint for compatibility resource_client_name = CLIENT_NAME resource_getter_name = 'get_plugin_id' class ImageConstraint(SaharaBaseConstraint): resource_name = 'images' class JobBinaryConstraint(SaharaBaseConstraint): resource_name = 'job_binaries' class ClusterConstraint(SaharaBaseConstraint): resource_name = 'clusters' class DataSourceConstraint(SaharaBaseConstraint): resource_name = 'data_sources' class ClusterTemplateConstraint(SaharaBaseConstraint): resource_name = 'cluster_templates' class JobTypeConstraint(SaharaBaseConstraint): resource_name = 'job_types' heat-10.0.2/heat/engine/clients/os/glance.py0000666000175000017500000001000313343562337020615 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
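# The sahara plugin above resolves names in two steps:
# find_resource_by_name_or_id() first treats the value as an ID and
# tries a direct GET, then falls back to find_resource_by_name(),
# which raises EntityNotFound on zero matches and
# PhysicalResourceNameAmbiguity on several. A hedged sketch (``ctx``
# is illustrative):
#
#     plugin = ctx.clients.client_plugin('sahara')
#     cluster_id = plugin.find_resource_by_name_or_id('clusters', 'demo')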
from oslo_utils import uuidutils from glanceclient import client as gc from glanceclient import exc from heat.engine.clients import client_exception from heat.engine.clients import client_plugin from heat.engine.clients import os as os_client from heat.engine import constraints CLIENT_NAME = 'glance' class GlanceClientPlugin(client_plugin.ClientPlugin): exceptions_module = [client_exception, exc] service_types = [IMAGE] = ['image'] supported_versions = [V1, V2] = ['1', '2'] default_version = V2 def _create(self, version=None): con = self.context interface = self._get_client_option(CLIENT_NAME, 'endpoint_type') return gc.Client(version, session=con.keystone_session, interface=interface, service_type=self.IMAGE, region_name=self._get_region_name()) def _find_with_attr(self, entity, **kwargs): """Find a item for entity with attributes matching ``**kwargs``.""" matches = list(self._findall_with_attr(entity, **kwargs)) num_matches = len(matches) if num_matches == 0: raise client_exception.EntityMatchNotFound(entity=entity, args=kwargs) elif num_matches > 1: raise client_exception.EntityUniqueMatchNotFound(entity=entity, args=kwargs) else: return matches[0] def _findall_with_attr(self, entity, **kwargs): """Find all items for entity with attributes matching ``**kwargs``.""" func = getattr(self.client(), entity) filters = {'filters': kwargs} return func.list(**filters) def is_not_found(self, ex): return isinstance(ex, (client_exception.EntityMatchNotFound, exc.HTTPNotFound)) def is_over_limit(self, ex): return isinstance(ex, exc.HTTPOverLimit) def is_conflict(self, ex): return isinstance(ex, exc.Conflict) def find_image_by_name_or_id(self, image_identifier): """Return the ID for the specified image name or identifier. :param image_identifier: image name or a UUID-like identifier :returns: the id of the requested :image_identifier: """ return self._find_image_id(self.context.tenant_id, image_identifier) @os_client.MEMOIZE_FINDER def _find_image_id(self, tenant_id, image_identifier): # tenant id in the signature is used for the memoization key, # that would differentiate similar resource names across tenants. return self.get_image(image_identifier).id def get_image(self, image_identifier): """Return the image object for the specified image name/id. :param image_identifier: image name :returns: an image object with name/id :image_identifier: """ if uuidutils.is_uuid_like(image_identifier): try: return self.client().images.get(image_identifier) except exc.HTTPNotFound: pass return self._find_with_attr('images', name=image_identifier) class ImageConstraint(constraints.BaseCustomConstraint): expected_exceptions = (client_exception.EntityMatchNotFound, client_exception.EntityUniqueMatchNotFound) resource_client_name = CLIENT_NAME resource_getter_name = 'find_image_by_name_or_id' heat-10.0.2/heat/engine/clients/os/neutron/0000775000175000017500000000000013343562672020512 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/clients/os/neutron/lbaas_constraints.py0000666000175000017500000000223113343562337024573 0ustar zuulzuul00000000000000# # All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # # Copyright 2015 IBM Corp. from heat.engine.clients.os.neutron import neutron_constraints as nc CLIENT_NAME = 'neutron' class LoadbalancerConstraint(nc.NeutronConstraint): resource_name = 'loadbalancer' extension = 'lbaasv2' class ListenerConstraint(nc.NeutronConstraint): resource_name = 'listener' extension = 'lbaasv2' class PoolConstraint(nc.NeutronConstraint): # Pool constraint for lbaas v2 resource_name = 'pool' extension = 'lbaasv2' class LBaasV2ProviderConstraint(nc.ProviderConstraint): service_type = 'LOADBALANCERV2' heat-10.0.2/heat/engine/clients/os/neutron/__init__.py0000666000175000017500000002227313343562351022625 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from neutronclient.common import exceptions from neutronclient.neutron import v2_0 as neutronV20 from neutronclient.v2_0 import client as nc from oslo_utils import uuidutils from heat.common import exception from heat.common.i18n import _ from heat.engine.clients import client_plugin from heat.engine.clients import os as os_client class NeutronClientPlugin(client_plugin.ClientPlugin): exceptions_module = exceptions service_types = [NETWORK] = ['network'] res_cmdres_mapping = { # resource: cmd_resource 'policy': 'qos_policy', 'loadbalancer': 'lbaas_loadbalancer', 'pool': 'lbaas_pool', 'l7policy': 'lbaas_l7policy' } def _create(self): con = self.context interface = self._get_client_option('neutron', 'endpoint_type') args = { 'session': con.keystone_session, 'service_type': self.NETWORK, 'interface': interface, 'region_name': self._get_region_name() } return nc.Client(**args) def is_not_found(self, ex): if isinstance(ex, (exceptions.NotFound, exceptions.NetworkNotFoundClient, exceptions.PortNotFoundClient)): return True return (isinstance(ex, exceptions.NeutronClientException) and ex.status_code == 404) def is_conflict(self, ex): bad_conflicts = (exceptions.OverQuotaClient,) return (isinstance(ex, exceptions.Conflict) and not isinstance(ex, bad_conflicts)) def is_over_limit(self, ex): if not isinstance(ex, exceptions.NeutronClientException): return False return ex.status_code == 413 def is_no_unique(self, ex): return isinstance(ex, exceptions.NeutronClientNoUniqueMatch) def is_invalid(self, ex): return isinstance(ex, exceptions.StateInvalidClient) def find_resourceid_by_name_or_id(self, resource, name_or_id, cmd_resource=None): cmd_resource = (cmd_resource or self.res_cmdres_mapping.get(resource)) return self._find_resource_id(self.context.tenant_id, resource, name_or_id, cmd_resource) @os_client.MEMOIZE_FINDER def _find_resource_id(self, tenant_id, resource, name_or_id, cmd_resource): # tenant id in the signature is used for the memoization key, # that would differentiate similar resource names across tenants. 
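# MEMOIZE_FINDER caches on the decorated function's argument list,
# so carrying tenant_id here keeps one tenant's "web" network from
# resolving to another tenant's ID of the same name; the lookup
# itself is delegated to neutronclient's generic finder below.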
return neutronV20.find_resourceid_by_name_or_id( self.client(), resource, name_or_id, cmd_resource=cmd_resource) @os_client.MEMOIZE_EXTENSIONS def _list_extensions(self): extensions = self.client().list_extensions().get('extensions') return set(extension.get('alias') for extension in extensions) def has_extension(self, alias): """Check if specific extension is present.""" return alias in self._list_extensions() def _resolve(self, props, key, id_key, key_type): if props.get(key): props[id_key] = self.find_resourceid_by_name_or_id(key_type, props.pop(key)) return props[id_key] def resolve_pool(self, props, pool_key, pool_id_key): if props.get(pool_key): props[pool_id_key] = self.find_resourceid_by_name_or_id( 'pool', props.get(pool_key)) props.pop(pool_key) return props[pool_id_key] def resolve_router(self, props, router_key, router_id_key): return self._resolve(props, router_key, router_id_key, 'router') def network_id_from_subnet_id(self, subnet_id): subnet_info = self.client().show_subnet(subnet_id) return subnet_info['subnet']['network_id'] def check_lb_status(self, lb_id): lb = self.client().show_loadbalancer(lb_id)['loadbalancer'] status = lb['provisioning_status'] if status == 'ERROR': raise exception.ResourceInError(resource_status=status) return status == 'ACTIVE' def get_qos_policy_id(self, policy): """Returns the id of QoS policy. Args: policy: ID or name of the policy. """ return self.find_resourceid_by_name_or_id( 'policy', policy) def get_secgroup_uuids(self, security_groups): '''Returns a list of security group UUIDs. Args: security_groups: List of security group names or UUIDs ''' seclist = [] all_groups = None for sg in security_groups: if uuidutils.is_uuid_like(sg): seclist.append(sg) else: if not all_groups: response = self.client().list_security_groups() all_groups = response['security_groups'] same_name_groups = [g for g in all_groups if g['name'] == sg] groups = [g['id'] for g in same_name_groups] if len(groups) == 0: raise exception.EntityNotFound(entity='Resource', name=sg) elif len(groups) == 1: seclist.append(groups[0]) else: # for admin roles, can get the other users' # securityGroups, so we should match the tenant_id with # the groups, and return the own one own_groups = [g['id'] for g in same_name_groups if g['tenant_id'] == self.context.tenant_id] if len(own_groups) == 1: seclist.append(own_groups[0]) else: raise exception.PhysicalResourceNameAmbiguity(name=sg) return seclist def _resolve_resource_path(self, resource): """Returns sfc resource path.""" if resource == 'port_pair': path = "/sfc/port_pairs" elif resource == 'port_pair_group': path = "/sfc/port_pair_groups" elif resource == 'flow_classifier': path = "/sfc/flow_classifiers" elif resource == 'port_chain': path = "/sfc/port_chains" return path def create_sfc_resource(self, resource, props): """Returns created sfc resource record.""" path = self._resolve_resource_path(resource) record = self.client().create_ext(path, {resource: props} ).get(resource) return record def update_sfc_resource(self, resource, prop_diff, resource_id): """Returns updated sfc resource record.""" path = self._resolve_resource_path(resource) return self.client().update_ext(path + '/%s', resource_id, {resource: prop_diff}) def delete_sfc_resource(self, resource, resource_id): """Deletes sfc resource record and returns status.""" path = self._resolve_resource_path(resource) return self.client().delete_ext(path + '/%s', resource_id) def show_sfc_resource(self, resource, resource_id): """Returns specific sfc resource record.""" path = 
self._resolve_resource_path(resource) return self.client().show_ext(path + '/%s', resource_id ).get(resource) def resolve_ext_resource(self, resource, name_or_id): """Returns the id and validate neutron ext resource.""" path = self._resolve_resource_path(resource) try: record = self.client().show_ext(path + '/%s', name_or_id) return record.get(resource).get('id') except exceptions.NotFound: res_plural = resource + 's' result = self.client().list_ext(collection=res_plural, path=path, retrieve_all=True) resources = result.get(res_plural) matched = [] for res in resources: if res.get('name') == name_or_id: matched.append(res.get('id')) if len(matched) > 1: raise exceptions.NeutronClientNoUniqueMatch(resource=resource, name=name_or_id) elif len(matched) == 0: not_found_message = (_("Unable to find %(resource)s with name " "or id '%(name_or_id)s'") % {'resource': resource, 'name_or_id': name_or_id}) raise exceptions.NotFound(message=not_found_message) else: return matched[0] heat-10.0.2/heat/engine/clients/os/neutron/neutron_constraints.py0000666000175000017500000000750613343562337025215 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Copyright 2015 IBM Corp. from neutronclient.common import exceptions as qe from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints CLIENT_NAME = 'neutron' class NeutronConstraint(constraints.BaseCustomConstraint): expected_exceptions = (qe.NeutronClientException, exception.EntityNotFound) resource_name = None extension = None def validate_with_client(self, client, value): neutron_plugin = client.client_plugin(CLIENT_NAME) if (self.extension and not neutron_plugin.has_extension(self.extension)): raise exception.EntityNotFound(entity='neutron extension', name=self.extension) neutron_plugin.find_resourceid_by_name_or_id( self.resource_name, value) class NeutronExtConstraint(NeutronConstraint): def validate_with_client(self, client, value): neutron_plugin = client.client_plugin(CLIENT_NAME) if (self.extension and not neutron_plugin.has_extension(self.extension)): raise exception.EntityNotFound(entity='neutron extension', name=self.extension) neutron_plugin.resolve_ext_resource(self.resource_name, value) class NetworkConstraint(NeutronConstraint): resource_name = 'network' class PortConstraint(NeutronConstraint): resource_name = 'port' class RouterConstraint(NeutronConstraint): resource_name = 'router' class SubnetConstraint(NeutronConstraint): resource_name = 'subnet' class SubnetPoolConstraint(NeutronConstraint): resource_name = 'subnetpool' class SecurityGroupConstraint(NeutronConstraint): resource_name = 'security_group' class AddressScopeConstraint(NeutronConstraint): resource_name = 'address_scope' extension = 'address-scope' class QoSPolicyConstraint(NeutronConstraint): resource_name = 'policy' extension = 'qos' class PortPairConstraint(NeutronExtConstraint): resource_name = 'port_pair' extension = 'sfc' class PortPairGroupConstraint(NeutronExtConstraint): resource_name = 'port_pair_group' extension = 
'sfc' class FlowClassifierConstraint(NeutronExtConstraint): resource_name = 'flow_classifier' extension = 'sfc' class ProviderConstraint(constraints.BaseCustomConstraint): expected_exceptions = (exception.StackValidationFailed,) service_type = None def validate_with_client(self, client, value): params = {} neutron_client = client.client(CLIENT_NAME) if self.service_type: params['service_type'] = self.service_type providers = neutron_client.list_service_providers( retrieve_all=True, **params )['service_providers'] names = [provider['name'] for provider in providers] if value not in names: not_found_message = ( _("Unable to find neutron provider '%(provider)s', " "available providers are %(providers)s.") % {'provider': value, 'providers': names} ) raise exception.StackValidationFailed(message=not_found_message) class LBaasV1ProviderConstraint(ProviderConstraint): service_type = 'LOADBALANCER' heat-10.0.2/heat/engine/clients/os/nova.py0000666000175000017500000007552213343562351020344 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import email from email.mime import multipart from email.mime import text import os import pkgutil import string from neutronclient.common import exceptions as q_exceptions from novaclient import client as nc from novaclient import exceptions from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import netutils import six from six.moves.urllib import parse as urlparse import tenacity from heat.common import exception from heat.common.i18n import _ from heat.engine.clients import client_exception from heat.engine.clients import client_plugin from heat.engine.clients import os as os_client from heat.engine import constraints LOG = logging.getLogger(__name__) CLIENT_NAME = 'nova' class NovaClientPlugin(client_plugin.ClientPlugin): deferred_server_statuses = ['BUILD', 'HARD_REBOOT', 'PASSWORD', 'REBOOT', 'RESCUE', 'RESIZE', 'REVERT_RESIZE', 'SHUTOFF', 'SUSPENDED', 'VERIFY_RESIZE'] exceptions_module = exceptions NOVA_API_VERSION = '2.1' validate_versions = [ V2_2, V2_8, V2_10, V2_15, V2_26, V2_37, V2_42 ] = [ '2.2', '2.8', '2.10', '2.15', '2.26', '2.37', '2.42' ] supported_versions = [NOVA_API_VERSION] + validate_versions service_types = [COMPUTE] = ['compute'] def _get_service_name(self): return self.COMPUTE def _create(self, version=None): if not version: # TODO(prazumovsky): remove all unexpected calls from tests and # add default_version after that. 
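# Callers may request any entry in supported_versions; entries in
# validate_versions are additionally checked against the running
# nova below via versions.get_current(), so an unavailable
# microversion fails fast with InvalidServiceVersion instead of
# erroring on first use.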
version = self.NOVA_API_VERSION endpoint_type = self._get_client_option(CLIENT_NAME, 'endpoint_type') extensions = nc.discover_extensions(version) args = { 'session': self.context.keystone_session, 'extensions': extensions, 'endpoint_type': endpoint_type, 'service_type': self.COMPUTE, 'region_name': self._get_region_name(), 'http_log_debug': self._get_client_option(CLIENT_NAME, 'http_log_debug') } client = nc.Client(version, **args) # NOTE: check for microversion availability if version in self.validate_versions: try: client.versions.get_current() except exceptions.NotAcceptable: raise exception.InvalidServiceVersion(service=self.COMPUTE, version=version) return client def is_not_found(self, ex): return isinstance(ex, (exceptions.NotFound, q_exceptions.NotFound)) def is_over_limit(self, ex): return isinstance(ex, exceptions.OverLimit) def is_bad_request(self, ex): return isinstance(ex, exceptions.BadRequest) def is_conflict(self, ex): return isinstance(ex, exceptions.Conflict) def is_unprocessable_entity(self, ex): http_status = (getattr(ex, 'http_status', None) or getattr(ex, 'code', None)) return (isinstance(ex, exceptions.ClientException) and http_status == 422) @tenacity.retry( stop=tenacity.stop_after_attempt( max(cfg.CONF.client_retry_limit + 1, 0)), retry=tenacity.retry_if_exception( client_plugin.retry_if_connection_err), reraise=True) def get_server(self, server): """Return fresh server object. Substitutes Nova's NotFound for Heat's EntityNotFound, to be returned to user as HTTP error. """ try: return self.client().servers.get(server) except exceptions.NotFound: raise exception.EntityNotFound(entity='Server', name=server) def fetch_server(self, server_id): """Fetch fresh server object from Nova. Log warnings and return None for non-critical API errors. Use this method in various ``check_*_complete`` resource methods, where intermittent errors can be tolerated. """ server = None try: server = self.client().servers.get(server_id) except exceptions.OverLimit as exc: LOG.warning("Received an OverLimit response when " "fetching server (%(id)s) : %(exception)s", {'id': server_id, 'exception': exc}) except exceptions.ClientException as exc: if ((getattr(exc, 'http_status', getattr(exc, 'code', None)) in (500, 503))): LOG.warning("Received the following exception when " "fetching server (%(id)s) : %(exception)s", {'id': server_id, 'exception': exc}) else: raise return server def refresh_server(self, server): """Refresh server's attributes. Also log warnings for non-critical API errors. """ try: server.get() except exceptions.OverLimit as exc: LOG.warning("Server %(name)s (%(id)s) received an OverLimit " "response during server.get(): %(exception)s", {'name': server.name, 'id': server.id, 'exception': exc}) except exceptions.ClientException as exc: if ((getattr(exc, 'http_status', getattr(exc, 'code', None)) in (500, 503))): LOG.warning('Server "%(name)s" (%(id)s) received the ' 'following exception during server.get(): ' '%(exception)s', {'name': server.name, 'id': server.id, 'exception': exc}) else: raise def get_ip(self, server, net_type, ip_version): """Return the server's IP of the given type and version.""" if net_type in server.addresses: for ip in server.addresses[net_type]: if ip['version'] == ip_version: return ip['addr'] def get_status(self, server): """Return the server's status. 
:param server: server object :returns: status as a string """ # Some clouds append extra (STATUS) strings to the status, strip it return server.status.split('(')[0] def _check_active(self, server, res_name='Server'): """Check server status. Accepts both server IDs and server objects. Returns True if server is ACTIVE, raises errors when server has an ERROR or unknown to Heat status, returns False otherwise. :param res_name: name of the resource to use in the exception message """ # not checking with is_uuid_like as most tests use strings e.g. '1234' if isinstance(server, six.string_types): server = self.fetch_server(server) if server is None: return False else: status = self.get_status(server) else: status = self.get_status(server) if status != 'ACTIVE': self.refresh_server(server) status = self.get_status(server) if status in self.deferred_server_statuses: return False elif status == 'ACTIVE': return True elif status == 'ERROR': fault = getattr(server, 'fault', {}) raise exception.ResourceInError( resource_status=status, status_reason=_("Message: %(message)s, Code: %(code)s") % { 'message': fault.get('message', _('Unknown')), 'code': fault.get('code', _('Unknown')) }) else: raise exception.ResourceUnknownStatus( resource_status=server.status, result=_('%s is not active') % res_name) def find_flavor_by_name_or_id(self, flavor): """Find the specified flavor by name or id. :param flavor: the name of the flavor to find :returns: the id of :flavor: """ return self._find_flavor_id(self.context.tenant_id, flavor) @os_client.MEMOIZE_FINDER def _find_flavor_id(self, tenant_id, flavor): # tenant id in the signature is used for the memoization key, # that would differentiate similar resource names across tenants. return self.get_flavor(flavor).id def get_flavor(self, flavor_identifier): """Get the flavor object for the specified flavor name or id. :param flavor_identifier: the name or id of the flavor to find :returns: a flavor object with name or id :flavor: """ try: flavor = self.client().flavors.get(flavor_identifier) except exceptions.NotFound: flavor = self.client().flavors.find(name=flavor_identifier) return flavor def get_host(self, host_name): """Get the host id specified by name. :param host_name: the name of host to find :returns: the list of match hosts :raises: exception.EntityNotFound """ host_list = self.client().hosts.list() for host in host_list: if host.host_name == host_name and host.service == self.COMPUTE: return host raise exception.EntityNotFound(entity='Host', name=host_name) def get_keypair(self, key_name): """Get the public key specified by :key_name: :param key_name: the name of the key to look for :returns: the keypair (name, public_key) for :key_name: :raises: exception.EntityNotFound """ try: return self.client().keypairs.get(key_name) except exceptions.NotFound: raise exception.EntityNotFound(entity='Key', name=key_name) def build_userdata(self, metadata, userdata=None, instance_user=None, user_data_format='HEAT_CFNTOOLS'): """Build multipart data blob for CloudInit. Data blob includes user-supplied Metadata, user data, and the required Heat in-instance configuration. 
:param metadata: resource metadata to attach as cfn-init-data :type metadata: dict :param userdata: user data string :type userdata: str or None :param instance_user: the user to create on the server :type instance_user: string :param user_data_format: Format of user data to return :type user_data_format: string :returns: multipart mime as a string """ if user_data_format == 'RAW': return userdata is_cfntools = user_data_format == 'HEAT_CFNTOOLS' is_software_config = user_data_format == 'SOFTWARE_CONFIG' def make_subpart(content, filename, subtype=None): if subtype is None: subtype = os.path.splitext(filename)[0] if content is None: content = '' try: content.encode('us-ascii') charset = 'us-ascii' except UnicodeEncodeError: charset = 'utf-8' msg = (text.MIMEText(content, _subtype=subtype, _charset=charset) if subtype else text.MIMEText(content, _charset=charset)) msg.add_header('Content-Disposition', 'attachment', filename=filename) return msg def read_cloudinit_file(fn): return pkgutil.get_data( 'heat', 'cloudinit/%s' % fn).decode('utf-8') if instance_user: config_custom_user = 'user: %s' % instance_user # FIXME(shadower): compatibility workaround for cloud-init 0.6.3. # We can drop this once we stop supporting 0.6.3 (which ships # with Ubuntu 12.04 LTS). # # See bug https://bugs.launchpad.net/heat/+bug/1257410 boothook_custom_user = r"""useradd -m %s echo -e '%s\tALL=(ALL)\tNOPASSWD: ALL' >> /etc/sudoers """ % (instance_user, instance_user) else: config_custom_user = '' boothook_custom_user = '' cloudinit_config = string.Template( read_cloudinit_file('config')).safe_substitute( add_custom_user=config_custom_user) cloudinit_boothook = string.Template( read_cloudinit_file('boothook.sh')).safe_substitute( add_custom_user=boothook_custom_user) attachments = [(cloudinit_config, 'cloud-config'), (cloudinit_boothook, 'boothook.sh', 'cloud-boothook'), (read_cloudinit_file('part_handler.py'), 'part-handler.py')] if is_cfntools: attachments.append((userdata, 'cfn-userdata', 'x-cfninitdata')) elif is_software_config: # attempt to parse userdata as a multipart message, and if it # is, add each part as an attachment userdata_parts = None try: userdata_parts = email.message_from_string(userdata) except Exception: pass if userdata_parts and userdata_parts.is_multipart(): for part in userdata_parts.get_payload(): attachments.append((part.get_payload(), part.get_filename(), part.get_content_subtype())) else: attachments.append((userdata, '')) if is_cfntools: attachments.append((read_cloudinit_file('loguserdata.py'), 'loguserdata.py', 'x-shellscript')) if metadata: attachments.append((jsonutils.dumps(metadata), 'cfn-init-data', 'x-cfninitdata')) if is_cfntools: heat_client_plugin = self.context.clients.client_plugin('heat') cfn_md_url = heat_client_plugin.get_cfn_metadata_server_url() attachments.append((cfn_md_url, 'cfn-metadata-server', 'x-cfninitdata')) # Create a boto config which the cfntools on the host use to know # where the cfn API is to be accessed cfn_url = urlparse.urlparse(cfn_md_url) is_secure = cfg.CONF.instance_connection_is_secure vcerts = cfg.CONF.instance_connection_https_validate_certificates boto_cfg = "\n".join(["[Boto]", "debug = 0", "is_secure = %s" % is_secure, "https_validate_certificates = %s" % vcerts, "cfn_region_name = heat", "cfn_region_endpoint = %s" % cfn_url.hostname]) attachments.append((boto_cfg, 'cfn-boto-cfg', 'x-cfninitdata')) subparts = [make_subpart(*args) for args in attachments] mime_blob = multipart.MIMEMultipart(_subparts=subparts) return mime_blob.as_string()
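# A hedged sketch of driving build_userdata() (``ctx``, ``md`` and
# the script body are illustrative, not part of this module):
#
#     plugin = ctx.clients.client_plugin('nova')
#     md = {'a': 1}
#     blob = plugin.build_userdata(md, userdata='#!/bin/sh\necho hi',
#                                  user_data_format='SOFTWARE_CONFIG')
#
# With RAW the user data is returned untouched; with SOFTWARE_CONFIG a
# multipart userdata is unpacked and each part re-attached; with
# HEAT_CFNTOOLS the cfn-userdata, loguserdata.py and the boto/cfn
# metadata configs are appended to the MIME blob.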
def check_delete_server_complete(self, server_id): """Wait for server to disappear from Nova.""" try: server = self.fetch_server(server_id) except Exception as exc: self.ignore_not_found(exc) return True if not server: return False task_state_in_nova = getattr(server, 'OS-EXT-STS:task_state', None) # the status of server won't change until the delete task has done if task_state_in_nova == 'deleting': return False status = self.get_status(server) if status == 'DELETED': return True if status == 'SOFT_DELETED': self.client().servers.force_delete(server_id) elif status == 'ERROR': fault = getattr(server, 'fault', {}) message = fault.get('message', 'Unknown') code = fault.get('code') errmsg = _("Server %(name)s delete failed: (%(code)s) " "%(message)s") % dict(name=server.name, code=code, message=message) raise exception.ResourceInError(resource_status=status, status_reason=errmsg) return False def rename(self, server, name): """Update the name for a server.""" server.update(name) def resize(self, server_id, flavor_id): """Resize the server.""" server = self.fetch_server(server_id) if server: server.resize(flavor_id) return True else: return False def check_resize(self, server_id, flavor): """Verify that a resizing server is properly resized. If that's the case, confirm the resize, if not raise an error. """ server = self.fetch_server(server_id) # resize operation is asynchronous so the server resize may not start # when checking server status (the server may stay ACTIVE instead # of RESIZE). if not server or server.status in ('RESIZE', 'ACTIVE'): return False if server.status == 'VERIFY_RESIZE': return True else: raise exception.Error( _("Resizing to '%(flavor)s' failed, status '%(status)s'") % dict(flavor=flavor, status=server.status)) def verify_resize(self, server_id): server = self.fetch_server(server_id) if not server: return False status = self.get_status(server) if status == 'VERIFY_RESIZE': server.confirm_resize() return True else: msg = _("Could not confirm resize of server %s") % server_id raise exception.ResourceUnknownStatus( result=msg, resource_status=status) def check_verify_resize(self, server_id): server = self.fetch_server(server_id) if not server: return False status = self.get_status(server) if status == 'ACTIVE': return True if status == 'VERIFY_RESIZE': return False else: msg = _("Confirm resize for server %s failed") % server_id raise exception.ResourceUnknownStatus( result=msg, resource_status=status) def rebuild(self, server_id, image_id, password=None, preserve_ephemeral=False, meta=None, files=None): """Rebuild the server and call check_rebuild to verify.""" server = self.fetch_server(server_id) if server: server.rebuild(image_id, password=password, preserve_ephemeral=preserve_ephemeral, meta=meta, files=files) return True else: return False def check_rebuild(self, server_id): """Verify that a rebuilding server is rebuilt. Raise error if it ends up in an ERROR state. 
""" server = self.fetch_server(server_id) if server is None or server.status == 'REBUILD': return False if server.status == 'ERROR': raise exception.Error( _("Rebuilding server failed, status '%s'") % server.status) else: return True def meta_serialize(self, metadata): """Serialize non-string metadata values before sending them to Nova.""" if not isinstance(metadata, collections.Mapping): raise exception.StackValidationFailed(message=_( "nova server metadata needs to be a Map.")) return dict((key, (value if isinstance(value, six.string_types) else jsonutils.dumps(value)) ) for (key, value) in metadata.items()) def meta_update(self, server, metadata): """Delete/Add the metadata in nova as needed.""" metadata = self.meta_serialize(metadata) current_md = server.metadata to_del = sorted(set(current_md) - set(metadata)) client = self.client() if len(to_del) > 0: client.servers.delete_meta(server, to_del) client.servers.set_meta(server, metadata) def server_to_ipaddress(self, server): """Return the server's IP address, fetching it from Nova.""" try: server = self.client().servers.get(server) except exceptions.NotFound as ex: LOG.warning('Instance (%(server)s) not found: %(ex)s', {'server': server, 'ex': ex}) else: for n in sorted(server.networks, reverse=True): if len(server.networks[n]) > 0: return server.networks[n][0] @tenacity.retry( stop=tenacity.stop_after_attempt( max(cfg.CONF.client_retry_limit + 1, 0)), retry=tenacity.retry_if_exception( client_plugin.retry_if_connection_err), reraise=True) def absolute_limits(self): """Return the absolute limits as a dictionary.""" limits = self.client().limits.get() return dict([(limit.name, limit.value) for limit in list(limits.absolute)]) def get_console_urls(self, server): """Return dict-like structure of server's console urls. The actual console url is lazily resolved on access. 
""" nc = self.client mks_version = self.V2_8 class ConsoleUrls(collections.Mapping): def __init__(self, server): self.console_method = server.get_console_url self.support_console_types = ['novnc', 'xvpvnc', 'spice-html5', 'rdp-html5', 'serial', 'webmks'] def __getitem__(self, key): try: if key not in self.support_console_types: raise exceptions.UnsupportedConsoleType(key) if key == 'webmks': data = nc(mks_version).servers.get_console_url( server, key) else: data = self.console_method(key) console_data = data.get( 'remote_console', data.get('console')) url = console_data['url'] except exceptions.UnsupportedConsoleType as ex: url = ex.message except Exception as e: url = _('Cannot get console url: %s') % six.text_type(e) return url def __len__(self): return len(self.support_console_types) def __iter__(self): return (key for key in self.support_console_types) return ConsoleUrls(server) def attach_volume(self, server_id, volume_id, device): try: va = self.client().volumes.create_server_volume( server_id=server_id, volume_id=volume_id, device=device) except Exception as ex: if self.is_client_exception(ex): raise exception.Error(_( "Failed to attach volume %(vol)s to server %(srv)s " "- %(err)s") % {'vol': volume_id, 'srv': server_id, 'err': ex}) else: raise return va.id def detach_volume(self, server_id, attach_id): # detach the volume using volume_attachment try: self.client().volumes.delete_server_volume(server_id, attach_id) except Exception as ex: if not (self.is_not_found(ex) or self.is_bad_request(ex)): raise exception.Error( _("Could not detach attachment %(att)s " "from server %(srv)s.") % {'srv': server_id, 'att': attach_id}) def check_detach_volume_complete(self, server_id, attach_id): """Check that nova server lost attachment. This check is needed for immediate reattachment when updating: there might be some time between cinder marking volume as 'available' and nova removing attachment from its own objects, so we check that nova already knows that the volume is detached. 
""" try: self.client().volumes.get_server_volume(server_id, attach_id) except Exception as ex: self.ignore_not_found(ex) LOG.info("Volume %(vol)s is detached from server %(srv)s", {'vol': attach_id, 'srv': server_id}) return True else: LOG.debug("Server %(srv)s still has attachment %(att)s.", {'att': attach_id, 'srv': server_id}) return False def associate_floatingip(self, server_id, floatingip_id): iface_list = self.fetch_server(server_id).interface_list() if len(iface_list) == 0: raise client_exception.InterfaceNotFound(id=server_id) if len(iface_list) > 1: LOG.warning("Multiple interfaces found for server %s, " "using the first one.", server_id) port_id = iface_list[0].port_id fixed_ips = iface_list[0].fixed_ips fixed_address = next(ip['ip_address'] for ip in fixed_ips if netutils.is_valid_ipv4(ip['ip_address'])) request_body = { 'floatingip': { 'port_id': port_id, 'fixed_ip_address': fixed_address}} self.clients.client('neutron').update_floatingip(floatingip_id, request_body) def dissociate_floatingip(self, floatingip_id): request_body = { 'floatingip': { 'port_id': None, 'fixed_ip_address': None}} self.clients.client('neutron').update_floatingip(floatingip_id, request_body) def associate_floatingip_address(self, server_id, fip_address): fips = self.clients.client( 'neutron').list_floatingips( floating_ip_address=fip_address)['floatingips'] if len(fips) == 0: args = {'ip_address': fip_address} raise client_exception.EntityMatchNotFound(entity='floatingip', args=args) self.associate_floatingip(server_id, fips[0]['id']) def dissociate_floatingip_address(self, fip_address): fips = self.clients.client( 'neutron').list_floatingips( floating_ip_address=fip_address)['floatingips'] if len(fips) == 0: args = {'ip_address': fip_address} raise client_exception.EntityMatchNotFound(entity='floatingip', args=args) self.dissociate_floatingip(fips[0]['id']) def interface_detach(self, server_id, port_id): with self.ignore_not_found: server = self.fetch_server(server_id) if server: server.interface_detach(port_id) return True def interface_attach(self, server_id, port_id=None, net_id=None, fip=None, security_groups=None): server = self.fetch_server(server_id) if server: attachment = server.interface_attach(port_id, net_id, fip) if not port_id and security_groups: props = {'security_groups': security_groups} self.clients.client('neutron').update_port( attachment.port_id, {'port': props}) return True else: return False @tenacity.retry( stop=tenacity.stop_after_attempt( cfg.CONF.max_interface_check_attempts), wait=tenacity.wait_exponential(multiplier=0.5, max=12.0), retry=tenacity.retry_if_result(client_plugin.retry_if_result_is_false)) def check_interface_detach(self, server_id, port_id): with self.ignore_not_found: server = self.fetch_server(server_id) if server: interfaces = server.interface_list() for iface in interfaces: if iface.port_id == port_id: return False return True @tenacity.retry( stop=tenacity.stop_after_attempt( cfg.CONF.max_interface_check_attempts), wait=tenacity.wait_fixed(0.5), retry=tenacity.retry_if_result(client_plugin.retry_if_result_is_false)) def check_interface_attach(self, server_id, port_id): if not port_id: return True server = self.fetch_server(server_id) if server: interfaces = server.interface_list() for iface in interfaces: if iface.port_id == port_id: return True return False @os_client.MEMOIZE_EXTENSIONS def _list_extensions(self): extensions = self.client().list_extensions.show_all() return set(extension.alias for extension in extensions) def has_extension(self, 
alias): """Check if specific extension is present.""" return alias in self._list_extensions() class NovaBaseConstraint(constraints.BaseCustomConstraint): resource_client_name = CLIENT_NAME class ServerConstraint(NovaBaseConstraint): resource_getter_name = 'get_server' class KeypairConstraint(NovaBaseConstraint): resource_getter_name = 'get_keypair' def validate_with_client(self, client, key_name): if not key_name: # Don't validate empty key, which can happen when you # use a KeyPair resource return True super(KeypairConstraint, self).validate_with_client(client, key_name) class FlavorConstraint(NovaBaseConstraint): expected_exceptions = (exceptions.NotFound,) resource_getter_name = 'find_flavor_by_name_or_id' class HostConstraint(NovaBaseConstraint): resource_getter_name = 'get_host' heat-10.0.2/heat/engine/timestamp.py0000666000175000017500000000307713343562340017314 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception class Timestamp(object): """A descriptor for writing a timestamp to the database.""" def __init__(self, db_fetch, attribute): """Initialise the timestamp descriptor. Initialise with a function to fetch the database representation of an object (given a context and ID) and the name of the attribute to retrieve. """ self.db_fetch = db_fetch self.attribute = attribute def __get__(self, obj, obj_class): """Get timestamp for the given object and class.""" if obj is None or obj.id is None: return None o = self.db_fetch(obj.context, obj.id) return getattr(o, self.attribute) def __set__(self, obj, timestamp): """Update the timestamp for the given object.""" if obj.id is None: raise exception.ResourceNotAvailable(resource_name=obj.name) o = self.db_fetch(obj.context, obj.id) o.update_and_save({self.attribute: timestamp}) heat-10.0.2/heat/engine/environment.py0000666000175000017500000007653513343562351017670 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections import glob import itertools import os.path import re import weakref from oslo_config import cfg from oslo_log import log from oslo_utils import fnmatch import six from heat.common import environment_format as env_fmt from heat.common import exception from heat.common.i18n import _ from heat.common import policy from heat.engine import support LOG = log.getLogger(__name__) HOOK_TYPES = ( HOOK_PRE_CREATE, HOOK_PRE_UPDATE, HOOK_PRE_DELETE, HOOK_POST_CREATE, HOOK_POST_UPDATE, HOOK_POST_DELETE ) = ( 'pre-create', 'pre-update', 'pre-delete', 'post-create', 'post-update', 'post-delete' ) RESTRICTED_ACTIONS = (UPDATE, REPLACE) = ('update', 'replace') def valid_hook_type(hook): return hook in HOOK_TYPES def valid_restricted_actions(action): return action in RESTRICTED_ACTIONS def is_hook_definition(key, value): is_valid_hook = False if key == 'hooks': if isinstance(value, six.string_types): is_valid_hook = valid_hook_type(value) elif isinstance(value, collections.Sequence): is_valid_hook = all(valid_hook_type(hook) for hook in value) if not is_valid_hook: msg = (_('Invalid hook type "%(value)s" for resource ' 'breakpoint, acceptable hook types are: %(types)s') % {'value': value, 'types': HOOK_TYPES}) raise exception.InvalidBreakPointHook(message=msg) return is_valid_hook def is_valid_restricted_action(key, value): valid_action = False if key == 'restricted_actions': if isinstance(value, six.string_types): valid_action = valid_restricted_actions(value) elif isinstance(value, collections.Sequence): valid_action = all(valid_restricted_actions( action) for action in value) if not valid_action: msg = (_('Invalid restricted_action type "%(value)s" for ' 'resource, acceptable restricted_action ' 'types are: %(types)s') % {'value': value, 'types': RESTRICTED_ACTIONS}) raise exception.InvalidRestrictedAction(message=msg) return valid_action class ResourceInfo(object): """Base mapping of resource type to implementation.""" def __new__(cls, registry, path, value): """Create a new ResourceInfo of the appropriate class.""" if cls is not ResourceInfo: # Call is already for a subclass, so pass it through return super(ResourceInfo, cls).__new__(cls) name = path[-1] if name.endswith(('.yaml', '.template')): # a template url for the resource "Type" klass = TemplateResourceInfo elif not isinstance(value, six.string_types): klass = ClassResourceInfo elif value.endswith(('.yaml', '.template')): # a registered template klass = TemplateResourceInfo elif name.endswith('*'): klass = GlobResourceInfo else: klass = MapResourceInfo return super(ResourceInfo, cls).__new__(klass) __slots__ = ('_registry', 'path', 'name', 'value', 'user_resource') def __init__(self, registry, path, value): self._registry = weakref.ref(registry) self.path = path self.name = path[-1] self.value = value self.user_resource = True @property def registry(self): return self._registry() def __eq__(self, other): if other is None: return False return (self.path == other.path and self.value == other.value and self.user_resource == other.user_resource) def __ne__(self, other): return not self.__eq__(other) def __lt__(self, other): if self.user_resource != other.user_resource: # user resource must be sorted above system ones. return self.user_resource > other.user_resource if len(self.path) != len(other.path): # more specific (longer) path must be sorted above shorter ones.
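            # e.g. (illustrative) an entry registered at
            # ['resources', 'my_srv', 'OS::Nova::Server'] sorts before one
            # at ['OS::Nova::Server'], so a per-resource override wins over
            # a blanket mapping.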
return len(self.path) > len(other.path) return self.path < other.path def __gt__(self, other): return other.__lt__(self) def get_resource_info(self, resource_type=None, resource_name=None): return self def matches(self, resource_type): return False def get_class(self): raise NotImplementedError def get_class_to_instantiate(self): return self.get_class() def __str__(self): return '[%s](User:%s) %s -> %s' % (self.description, self.user_resource, self.name, str(self.value)) class ClassResourceInfo(ResourceInfo): """Store the mapping of resource name to python class implementation.""" description = 'Plugin' __slots__ = tuple() def get_class(self, files=None): return self.value class TemplateResourceInfo(ResourceInfo): """Store the info needed to start a TemplateResource.""" description = 'Template' __slots__ = ('template_name',) def __init__(self, registry, path, value): super(TemplateResourceInfo, self).__init__(registry, path, value) if self.name.endswith(('.yaml', '.template')): self.template_name = self.name else: self.template_name = value self.value = self.template_name def get_class(self, files=None): from heat.engine.resources import template_resource if files and self.template_name in files: data = files[self.template_name] else: if self.user_resource: allowed_schemes = template_resource.REMOTE_SCHEMES else: allowed_schemes = template_resource.LOCAL_SCHEMES data = template_resource.TemplateResource.get_template_file( self.template_name, allowed_schemes) param_defaults = self.registry.param_defaults return template_resource.generate_class_from_template(str(self.name), data, param_defaults) def get_class_to_instantiate(self): from heat.engine.resources import template_resource return template_resource.TemplateResource class MapResourceInfo(ResourceInfo): """Store the mapping of one resource type to another. like: OS::Networking::FloatingIp -> OS::Neutron::FloatingIp """ description = 'Mapping' __slots__ = tuple() def get_class(self, files=None): return None def get_resource_info(self, resource_type=None, resource_name=None): return self.registry.get_resource_info(self.value, resource_name) class GlobResourceInfo(MapResourceInfo): """Store the mapping (with wild cards) of one resource type to another. 
like: OS::Networking::* -> OS::Neutron::* Also supports many-to-one mapping (mostly useful together with special "OS::Heat::None" resource) like: OS::* -> OS::Heat::None """ description = 'Wildcard Mapping' __slots__ = tuple() def get_resource_info(self, resource_type=None, resource_name=None): # NOTE(pas-ha) we end up here only when self.name already # ends with * so truncate it orig_prefix = self.name[:-1] if self.value.endswith('*'): new_type = self.value[:-1] + resource_type[len(orig_prefix):] else: new_type = self.value return self.registry.get_resource_info(new_type, resource_name) def matches(self, resource_type): # prevent self-recursion in case of many-to-one mapping match = (resource_type != self.value and resource_type.startswith(self.name[:-1])) return match class ResourceRegistry(object): """By looking at the environment, find the resource implementation.""" def __init__(self, global_registry, param_defaults): self._registry = {'resources': {}} self.global_registry = global_registry self.param_defaults = param_defaults def load(self, json_snippet): self._load_registry([], json_snippet) def register_class(self, resource_type, resource_class, path=None): if path is None: path = [resource_type] ri = ResourceInfo(self, path, resource_class) self._register_info(path, ri) def _load_registry(self, path, registry): for k, v in iter(registry.items()): if v is None: self._register_info(path + [k], None) elif is_hook_definition(k, v) or is_valid_restricted_action(k, v): self._register_item(path + [k], v) elif isinstance(v, dict): self._load_registry(path + [k], v) else: self._register_info(path + [k], ResourceInfo(self, path + [k], v)) def _register_item(self, path, item): name = path[-1] registry = self._registry for key in path[:-1]: if key not in registry: registry[key] = {} registry = registry[key] registry[name] = item def _register_info(self, path, info): """Place the new info in the correct location in the registry. :param path: a list of keys ['resources', 'my_srv', 'OS::Nova::Server'] """ descriptive_path = '/'.join(path) name = path[-1] # create the structure if needed registry = self._registry for key in path[:-1]: if key not in registry: registry[key] = {} registry = registry[key] if info is None: if name.endswith('*'): # delete all matching entries. for res_name, reg_info in list(registry.items()): if (isinstance(reg_info, ResourceInfo) and res_name.startswith(name[:-1])): LOG.warning('Removing %(item)s from %(path)s', { 'item': res_name, 'path': descriptive_path}) del registry[res_name] else: # delete this entry. LOG.warning('Removing %(item)s from %(path)s', { 'item': name, 'path': descriptive_path}) registry.pop(name, None) return if name in registry and isinstance(registry[name], ResourceInfo): if registry[name] == info: return details = { 'path': descriptive_path, 'was': str(registry[name].value), 'now': str(info.value)} LOG.warning('Changing %(path)s from %(was)s to %(now)s', details) if isinstance(info, ClassResourceInfo): if info.value.support_status.status != support.SUPPORTED: if info.value.support_status.message is not None: details = { 'name': info.name, 'status': six.text_type( info.value.support_status.status), 'message': six.text_type( info.value.support_status.message) } LOG.warning('%(name)s is %(status)s. 
%(message)s', details) info.user_resource = (self.global_registry is not None) registry[name] = info def log_resource_info(self, show_all=False, prefix=None): registry = self._registry prefix = '%s ' % prefix if prefix is not None else '' for name in registry: if name == 'resources': continue if show_all or isinstance(registry[name], TemplateResourceInfo): msg = ('%(p)sRegistered: %(t)s' % {'p': prefix, 't': six.text_type(registry[name])}) LOG.info(msg) def remove_item(self, info): if not isinstance(info, TemplateResourceInfo): return registry = self._registry for key in info.path[:-1]: registry = registry[key] if info.path[-1] in registry: registry.pop(info.path[-1]) def get_rsrc_restricted_actions(self, resource_name): """Returns a set of restricted actions. For a given resource we get the set of restricted actions. Actions are set in this format via `resources`: { "restricted_actions": [update, replace] } A restricted_actions value is either `update`, `replace` or a list of those values. Resources support wildcard matching. The asterisk sign matches everything. """ ress = self._registry['resources'] restricted_actions = set() for name_pattern, resource in six.iteritems(ress): if fnmatch.fnmatchcase(resource_name, name_pattern): if 'restricted_actions' in resource: actions = resource['restricted_actions'] if isinstance(actions, six.string_types): restricted_actions.add(actions) elif isinstance(actions, collections.Sequence): restricted_actions |= set(actions) return restricted_actions def matches_hook(self, resource_name, hook): """Return whether a resource has a hook set in the environment. For a given resource and a hook type, we check to see if the passed group of resources has the right hook associated with the name. Hooks are set in this format via `resources`: { "res_name": { "hooks": [pre-create, pre-update] }, "*_suffix": { "hooks": pre-create }, "prefix_*": { "hooks": pre-update } } A hook value is either `pre-create`, `pre-update` or a list of those values. Resources support wildcard matching. The asterisk sign matches everything. """ ress = self._registry['resources'] for name_pattern, resource in six.iteritems(ress): if fnmatch.fnmatchcase(resource_name, name_pattern): if 'hooks' in resource: hooks = resource['hooks'] if isinstance(hooks, six.string_types): if hook == hooks: return True elif isinstance(hooks, collections.Sequence): if hook in hooks: return True return False def remove_resources_except(self, resource_name): ress = self._registry['resources'] new_resources = {} for name, res in six.iteritems(ress): if fnmatch.fnmatchcase(resource_name, name): new_resources.update(res) if resource_name in ress: new_resources.update(ress[resource_name]) self._registry['resources'] = new_resources def iterable_by(self, resource_type, resource_name=None): is_templ_type = resource_type.endswith(('.yaml', '.template')) if self.global_registry is not None and is_templ_type: # we only support dynamic resource types in user environments # not the global environment. # resource with a Type == a template # we dynamically create an entry as it has not been registered. if resource_type not in self._registry: res = ResourceInfo(self, [resource_type], None) self._register_info([resource_type], res) yield self._registry[resource_type] # handle a specific resource mapping.
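            # e.g. (illustrative) a user environment entry such as
            #   resources: {my_srv: {"OS::Nova::Server": my_server.yaml}}
            # is matched here when resource_name == 'my_srv'.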
if resource_name: impl = self._registry['resources'].get(resource_name) if impl and resource_type in impl: yield impl[resource_type] # handle: "OS::Nova::Server" -> "Rackspace::Cloud::Server" impl = self._registry.get(resource_type) if impl: yield impl # handle: "OS::*" -> "Dreamhost::*" def is_a_glob(resource_type): return resource_type.endswith('*') globs = six.moves.filter(is_a_glob, iter(self._registry)) for pattern in globs: if self._registry[pattern].matches(resource_type): yield self._registry[pattern] def get_resource_info(self, resource_type, resource_name=None, registry_type=None, ignore=None): """Find possible matches to the resource type and name. Chain the results from the global and user registry to find a match. """ # use cases # 1) get the impl. # - filter_by(res_type=X), sort_by(res_name=W, is_user=True) # 2) in TemplateResource we need to get both the # TemplateClass and the ResourceClass # - filter_by(res_type=X, impl_type=TemplateResourceInfo), # sort_by(res_name=W, is_user=True) # - filter_by(res_type=X, impl_type=ClassResourceInfo), # sort_by(res_name=W, is_user=True) # 3) get_types() from the api # - filter_by(is_user=False) # 4) as_dict() to write to the db # - filter_by(is_user=True) if self.global_registry is not None: giter = self.global_registry.iterable_by(resource_type, resource_name) else: giter = [] matches = itertools.chain(self.iterable_by(resource_type, resource_name), giter) for info in sorted(matches): try: match = info.get_resource_info(resource_type, resource_name) except exception.EntityNotFound: continue if registry_type is None or isinstance(match, registry_type): if ignore is not None and match == ignore: continue # NOTE(prazumovsky): if resource_type is defined in the outer env # there is a risk of losing it when h-eng restarts, so # store it in the local env (exclude ClassResourceInfo because it # loads from resources; TemplateResourceInfo is handled by the # template_resource module).
if (match and not match.user_resource and not isinstance(info, (TemplateResourceInfo, ClassResourceInfo))): self._register_info([resource_type], info) return match raise exception.EntityNotFound(entity='Resource Type', name=resource_type) def get_class(self, resource_type, resource_name=None, files=None): info = self.get_resource_info(resource_type, resource_name=resource_name) return info.get_class(files=files) def get_class_to_instantiate(self, resource_type, resource_name=None): if resource_type == "": msg = _('Resource "%s" has no type') % resource_name raise exception.StackValidationFailed(message=msg) elif resource_type is None: msg = _('Non-empty resource type is required ' 'for resource "%s"') % resource_name raise exception.StackValidationFailed(message=msg) elif not isinstance(resource_type, six.string_types): msg = _('Resource "%s" type is not a string') % resource_name raise exception.StackValidationFailed(message=msg) try: info = self.get_resource_info(resource_type, resource_name=resource_name) except exception.EntityNotFound as exc: raise exception.StackValidationFailed(message=six.text_type(exc)) return info.get_class_to_instantiate() def as_dict(self): """Return user resources in a dict format.""" def _as_dict(level): tmp = {} for k, v in iter(level.items()): if isinstance(v, dict): tmp[k] = _as_dict(v) elif is_hook_definition( k, v) or is_valid_restricted_action(k, v): tmp[k] = v elif v.user_resource: tmp[k] = v.value return tmp return _as_dict(self._registry) def get_types(self, cnxt=None, support_status=None, type_name=None, version=None, with_description=False): """Return a list of valid resource types.""" # validate the support status if support_status is not None and not support.is_valid_status( support_status): msg = (_('Invalid support status; it should be one of %s') % six.text_type(support.SUPPORT_STATUSES)) raise exception.Invalid(reason=msg) def is_resource(key): return isinstance(self._registry[key], (ClassResourceInfo, TemplateResourceInfo)) def status_matches(cls): return (support_status is None or cls.get_class().support_status.status == support_status) def is_available(cls): if cnxt is None: return True try: return cls.get_class().is_service_available(cnxt)[0] except Exception: return False def not_hidden_matches(cls): return cls.get_class().support_status.status != support.HIDDEN def is_allowed(enforcer, name): if cnxt is None: return True try: enforcer.enforce(cnxt, name, is_registered_policy=True) except enforcer.exc: return False else: return True enforcer = policy.ResourceEnforcer() def name_matches(name): try: return type_name is None or re.match(type_name, name) except: # noqa return False def version_matches(cls): return (version is None or cls.get_class().support_status.version == version) import heat.engine.resource def resource_description(name, info, with_description): if not with_description: return name rsrc_cls = info.get_class() if rsrc_cls is None: rsrc_cls = heat.engine.resource.Resource return { 'resource_type': name, 'description': rsrc_cls.getdoc(), } return [resource_description(name, cls, with_description) for name, cls in six.iteritems(self._registry) if (is_resource(name) and name_matches(name) and status_matches(cls) and is_available(cls) and is_allowed(enforcer, name) and not_hidden_matches(cls) and version_matches(cls))] class Environment(object): def __init__(self, env=None, user_env=True): """Create an Environment from an input dict.
The dict may be in one of two formats: 1) old-school flat parameters; or 2) newer {resource_registry: bla, parameters: foo} :param env: the json environment :param user_env: boolean, if False then we manage python resources too. """ if env is None: env = {} if user_env: from heat.engine import resources global_env = resources.global_env() global_registry = global_env.registry event_sink_classes = global_env.event_sink_classes else: global_registry = None event_sink_classes = {} self.param_defaults = env.get(env_fmt.PARAMETER_DEFAULTS, {}) self.registry = ResourceRegistry(global_registry, self.param_defaults) self.registry.load(env.get(env_fmt.RESOURCE_REGISTRY, {})) self.encrypted_param_names = env.get(env_fmt.ENCRYPTED_PARAM_NAMES, []) if env_fmt.PARAMETERS in env: self.params = env[env_fmt.PARAMETERS] else: self.params = dict((k, v) for (k, v) in six.iteritems(env) if k not in (env_fmt.PARAMETER_DEFAULTS, env_fmt.ENCRYPTED_PARAM_NAMES, env_fmt.EVENT_SINKS, env_fmt.RESOURCE_REGISTRY)) self.event_sink_classes = event_sink_classes self._event_sinks = [] self._built_event_sinks = [] self._update_event_sinks(env.get(env_fmt.EVENT_SINKS, [])) self.constraints = {} self.stack_lifecycle_plugins = [] def load(self, env_snippet): self.registry.load(env_snippet.get(env_fmt.RESOURCE_REGISTRY, {})) self.params.update(env_snippet.get(env_fmt.PARAMETERS, {})) self.param_defaults.update( env_snippet.get(env_fmt.PARAMETER_DEFAULTS, {})) self._update_event_sinks(env_snippet.get(env_fmt.EVENT_SINKS, [])) def env_as_dict(self): """Get the entire environment as a dict.""" user_env = self.user_env_as_dict() user_env.update( # Any data here is to be stored in the DB but not reflected # as part of the user environment (e.g to pass to nested stacks # or made visible to the user via API calls etc {env_fmt.ENCRYPTED_PARAM_NAMES: self.encrypted_param_names}) return user_env def user_env_as_dict(self): """Get the environment as a dict, only user-allowed keys.""" return {env_fmt.RESOURCE_REGISTRY: self.registry.as_dict(), env_fmt.PARAMETERS: self.params, env_fmt.PARAMETER_DEFAULTS: self.param_defaults, env_fmt.EVENT_SINKS: self._event_sinks} def register_class(self, resource_type, resource_class, path=None): self.registry.register_class(resource_type, resource_class, path=path) def register_constraint(self, constraint_name, constraint): self.constraints[constraint_name] = constraint def register_stack_lifecycle_plugin(self, stack_lifecycle_name, stack_lifecycle_class): self.stack_lifecycle_plugins.append((stack_lifecycle_name, stack_lifecycle_class)) def register_event_sink(self, event_sink_name, event_sink_class): self.event_sink_classes[event_sink_name] = event_sink_class def get_class(self, resource_type, resource_name=None, files=None): return self.registry.get_class(resource_type, resource_name, files=files) def get_class_to_instantiate(self, resource_type, resource_name=None): return self.registry.get_class_to_instantiate(resource_type, resource_name) def get_types(self, cnxt=None, support_status=None, type_name=None, version=None, with_description=False): return self.registry.get_types(cnxt, support_status=support_status, type_name=type_name, version=version, with_description=with_description) def get_resource_info(self, resource_type, resource_name=None, registry_type=None, ignore=None): return self.registry.get_resource_info(resource_type, resource_name, registry_type, ignore=ignore) def get_constraint(self, name): return self.constraints.get(name) def get_stack_lifecycle_plugins(self): return 
self.stack_lifecycle_plugins def _update_event_sinks(self, sinks): self._event_sinks.extend(sinks) for sink in sinks: sink = sink.copy() sink_class = sink.pop('type') sink_class = self.event_sink_classes[sink_class] self._built_event_sinks.append(sink_class(**sink)) def get_event_sinks(self): return self._built_event_sinks def get_child_environment(parent_env, child_params, item_to_remove=None, child_resource_name=None): """Build a child environment using the parent environment and params. This is built from the child_params and the parent env so some resources can use user-provided parameters as if they come from an environment. 1. resource_registry must be merged (child env should be loaded after the parent env to take precedence). 2. child parameters must overwrite the parent's as they won't be relevant in the child template. If `child_resource_name` is provided, resources in the registry will be replaced with the contents of the matching child resource plus anything that passes a wildcard match. """ def is_flat_params(env_or_param): if env_or_param is None: return False for sect in env_fmt.SECTIONS: if sect in env_or_param: return False return True child_env = parent_env.user_env_as_dict() child_env[env_fmt.PARAMETERS] = {} flat_params = is_flat_params(child_params) new_env = Environment() if flat_params and child_params is not None: child_env[env_fmt.PARAMETERS] = child_params new_env.load(child_env) if not flat_params and child_params is not None: new_env.load(child_params) if item_to_remove is not None: new_env.registry.remove_item(item_to_remove) if child_resource_name: new_env.registry.remove_resources_except(child_resource_name) return new_env def read_global_environment(env, env_dir=None): if env_dir is None: cfg.CONF.import_opt('environment_dir', 'heat.common.config') env_dir = cfg.CONF.environment_dir try: env_files = glob.glob(os.path.join(env_dir, '*')) except OSError: LOG.exception('Failed to read %s', env_dir) return for file_path in env_files: try: with open(file_path) as env_fd: LOG.info('Loading %s', file_path) env_body = env_fmt.parse(env_fd.read()) env_fmt.default_for_missing(env_body) env.load(env_body) except ValueError: LOG.exception('Failed to parse %s', file_path) except IOError: LOG.exception('Failed to read %s', file_path) heat-10.0.2/heat/engine/software_config_io.py0000666000175000017500000001362413343562340021156 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ APIs for dealing with input and output definitions for Software Configurations. 
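For example (illustrative values only), an input definition handled by
this module looks like:

    {'name': 'web_port', 'type': 'Number', 'default': 8080,
     'replace_on_change': False}

and an output definition like:

    {'name': 'result', 'type': 'String', 'error_output': False}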
""" import collections import copy import six from heat.common.i18n import _ from heat.common import exception from heat.engine import constraints from heat.engine import parameters from heat.engine import properties ( IO_NAME, DESCRIPTION, TYPE, DEFAULT, REPLACE_ON_CHANGE, VALUE, ERROR_OUTPUT, ) = ( 'name', 'description', 'type', 'default', 'replace_on_change', 'value', 'error_output', ) TYPES = ( STRING_TYPE, NUMBER_TYPE, LIST_TYPE, JSON_TYPE, BOOLEAN_TYPE, ) = ( 'String', 'Number', 'CommaDelimitedList', 'Json', 'Boolean', ) input_config_schema = { IO_NAME: properties.Schema( properties.Schema.STRING, _('Name of the input.'), required=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the input.') ), TYPE: properties.Schema( properties.Schema.STRING, _('Type of the value of the input.'), default=STRING_TYPE, constraints=[constraints.AllowedValues(TYPES)] ), DEFAULT: properties.Schema( properties.Schema.ANY, _('Default value for the input if none is specified.'), ), REPLACE_ON_CHANGE: properties.Schema( properties.Schema.BOOLEAN, _('Replace the deployment instead of updating it when the input ' 'value changes.'), default=False, ), } output_config_schema = { IO_NAME: properties.Schema( properties.Schema.STRING, _('Name of the output.'), required=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the output.') ), TYPE: properties.Schema( properties.Schema.STRING, _('Type of the value of the output.'), default=STRING_TYPE, constraints=[constraints.AllowedValues(TYPES)] ), ERROR_OUTPUT: properties.Schema( properties.Schema.BOOLEAN, _('Denotes that the deployment is in an error state if this ' 'output has a value.'), default=False ) } class IOConfig(object): """Base class for the configuration data for a single input or output.""" def __init__(self, **config): self._props = properties.Properties(self.schema, config) try: self._props.validate() except exception.StackValidationFailed as exc: raise ValueError(six.text_type(exc)) def name(self): """Return the name of the input or output.""" return self._props[IO_NAME] def as_dict(self): """Return a dict representation suitable for persisting.""" return {k: v for k, v in self._props.items() if v is not None} def __repr__(self): return '%s(%s)' % (type(self).__name__, ', '.join('%s=%s' % (k, repr(v)) for k, v in self.as_dict().items())) _no_value = object() class InputConfig(IOConfig): """Class representing the configuration data for a single input.""" schema = input_config_schema def __init__(self, value=_no_value, **config): if TYPE in config and DEFAULT in config: if config[DEFAULT] == '' and config[TYPE] != STRING_TYPE: # This is a legacy path, because default used to be of string # type, so we need to skip schema validation in this case. 
pass else: self.schema = copy.deepcopy(self.schema) config_param = parameters.Schema.from_dict( 'config', {'Type': config[TYPE]}) self.schema[DEFAULT] = properties.Schema.from_parameter( config_param) super(InputConfig, self).__init__(**config) self._value = value def default(self): """Return the default value of the input.""" return self._props[DEFAULT] def replace_on_change(self): return self._props[REPLACE_ON_CHANGE] def as_dict(self): """Return a dict representation suitable for persisting.""" d = super(InputConfig, self).as_dict() if not self._props[REPLACE_ON_CHANGE]: del d[REPLACE_ON_CHANGE] if self._value is not _no_value: d[VALUE] = self._value return d def input_data(self): """Return a name, value pair for the input.""" value = self._value if self._value is not _no_value else None return self.name(), value class OutputConfig(IOConfig): """Class representing the configuration data for a single output.""" schema = output_config_schema def error_output(self): """Return True if the presence of the output indicates an error.""" return self._props[ERROR_OUTPUT] def check_io_schema_list(io_configs): """Check that an input or output schema list is of the correct type. Raises TypeError if the list itself is not a list, or if any of the members are not dicts. """ if (not isinstance(io_configs, collections.Sequence) or isinstance(io_configs, collections.Mapping) or isinstance(io_configs, six.string_types)): raise TypeError('Software Config I/O Schema must be in a list') if not all(isinstance(conf, collections.Mapping) for conf in io_configs): raise TypeError('Software Config I/O Schema must be a dict') heat-10.0.2/heat/engine/properties_group.py0000666000175000017500000000705613343562337020720 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ OPERATORS = ( AND, OR, XOR ) = ( 'AND', 'OR', 'XOR' ) class PropertiesGroup(object): """A class for specifying properties relationships. A properties group allows specifying relations between properties or other properties groups, with the operators AND, OR and XOR, as a one-key dict with a list value. For example, if there are two properties: "subprop1", which is a child of property "prop1", and property "prop2", and they should not be specified together, then the properties group for them is written as follows:: {XOR: [["prop1", "subprop1"], ["prop2"]]} where each property name is given as a list of strings. Also, if these properties are exclusive with properties "prop3" and "prop4", which must both be specified, then the properties group is defined this way:: {XOR: [ ["prop1", "subprop1"], ["prop2"], {AND: [ ["prop3"], ["prop4"] ]} ]} where the one-key dict with key "AND" is a nested properties group.
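    A minimal usage sketch (illustrative only; the property names are
    made up):

        PropertiesGroup({XOR: [["prop1"], ["prop2"]]})

    The schema is validated on instantiation, so a malformed group raises
    InvalidSchemaError immediately.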
""" def __init__(self, schema, properties=None): self._properties = properties self.validate_schema(schema) self.schema = schema def validate_schema(self, current_schema): msg = _('Properties group schema incorrectly specified.') if not isinstance(current_schema, dict): msg = _('%(msg)s Schema should be a mapping, found ' '%(t)s instead.') % dict(msg=msg, t=type(current_schema)) raise exception.InvalidSchemaError(message=msg) if len(current_schema.keys()) > 1: msg = _("%(msg)s Schema should be one-key dict.") % dict(msg=msg) raise exception.InvalidSchemaError(message=msg) current_key = next(iter(current_schema)) if current_key not in OPERATORS: msg = _('%(msg)s Properties group schema key should be one of the ' 'operators: %(op)s.') % dict(msg=msg, op=', '.join(OPERATORS)) raise exception.InvalidSchemaError(message=msg) if not isinstance(current_schema[current_key], list): msg = _("%(msg)s Schemas' values should be lists of properties " "names or nested schemas.") % dict(msg=msg) raise exception.InvalidSchemaError(message=msg) next_msg = _('%(msg)s List items should be properties list-type names ' 'with format "[prop, prop_child, prop_sub_child, ...]" ' 'or nested properties group schemas.') % dict(msg=msg) for item in current_schema[current_key]: if isinstance(item, dict): self.validate_schema(item) elif isinstance(item, list): for name in item: if not isinstance(name, six.string_types): raise exception.InvalidSchemaError(message=next_msg) else: raise exception.InvalidSchemaError(message=next_msg) heat-10.0.2/heat/engine/attributes.py0000666000175000017500000002545613343562337017512 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from oslo_utils import strutils import six from heat.common.i18n import _ from heat.common.i18n import repr_wrapper from heat.engine import constraints as constr from heat.engine import support from oslo_log import log as logging LOG = logging.getLogger(__name__) class Schema(constr.Schema): """Simple schema class for attributes. Schema objects are serializable to dictionaries following a superset of the HOT input Parameter schema using dict(). 
""" KEYS = ( DESCRIPTION, TYPE ) = ( 'description', 'type', ) CACHE_MODES = ( CACHE_LOCAL, CACHE_NONE ) = ( 'cache_local', 'cache_none' ) TYPES = ( STRING, MAP, LIST, INTEGER, BOOLEAN ) = ( 'String', 'Map', 'List', 'Integer', 'Boolean' ) def __init__(self, description=None, support_status=support.SupportStatus(), cache_mode=CACHE_LOCAL, type=None): self.description = description self.support_status = support_status self.cache_mode = cache_mode self.type = type def __getitem__(self, key): if key == self.DESCRIPTION: if self.description is not None: return self.description elif key == self.TYPE: if self.type is not None: return self.type.lower() raise KeyError(key) @classmethod def from_attribute(cls, schema_dict): """Return a Property Schema corresponding to a Attribute Schema.""" msg = 'Old attribute schema is not supported' assert isinstance(schema_dict, cls), msg return schema_dict def schemata(schema): """Return dictionary of Schema objects for given dictionary of schemata.""" return dict((n, Schema.from_attribute(s)) for n, s in schema.items()) class Attribute(object): """An Attribute schema.""" def __init__(self, attr_name, schema): """Initialise with a name and schema. :param attr_name: the name of the attribute :param schema: attribute schema """ self.name = attr_name self.schema = Schema.from_attribute(schema) def support_status(self): return self.schema.support_status def as_output(self, resource_name, template_type='cfn'): """Output entry for a provider template with the given resource name. :param resource_name: the logical name of the provider resource :param template_type: the template type to generate :returns: This attribute as a template 'Output' entry for cfn template and 'output' entry for hot template """ if template_type == 'hot': return { "value": {"get_attr": [resource_name, self.name]}, "description": self.schema.description } else: return { "Value": {"Fn::GetAtt": [resource_name, self.name]}, "Description": self.schema.description } def _stack_id_output(resource_name, template_type='cfn'): if template_type == 'hot': return { "value": {"get_resource": resource_name}, } else: return { "Value": {"Ref": resource_name}, } BASE_ATTRIBUTES = (SHOW_ATTR, ) = ('show', ) # Returned by function.dep_attrs() to indicate that all attributes are # referenced ALL_ATTRIBUTES = '*' @repr_wrapper class Attributes(collections.Mapping): """Models a collection of Resource Attributes.""" def __init__(self, res_name, schema, resolver): self._resource_name = res_name self._resolver = resolver self.set_schema(schema) self.reset_resolved_values() assert ALL_ATTRIBUTES not in schema, \ "Invalid attribute name '%s'" % ALL_ATTRIBUTES def reset_resolved_values(self): if hasattr(self, '_resolved_values'): self._has_new_resolved = len(self._resolved_values) > 0 else: self._has_new_resolved = False self._resolved_values = {} def set_schema(self, schema): self._attributes = self._make_attributes(schema) def get_cache_mode(self, attribute_name): """Return the cache mode for the specified attribute. If the attribute is not defined in the schema, the default cache mode (CACHE_LOCAL) is returned. """ try: return self._attributes[attribute_name].schema.cache_mode except KeyError: return Schema.CACHE_LOCAL @staticmethod def _make_attributes(schema): return dict((n, Attribute(n, d)) for n, d in schema.items()) @staticmethod def as_outputs(resource_name, resource_class, template_type='cfn'): """Dict of Output entries for a provider template with resource name. 
:param resource_name: logical name of the resource :param resource_class: resource implementation class :returns: The attributes of the specified resource_class as a template Output map """ attr_schema = {} for name, schema_data in resource_class.attributes_schema.items(): schema = Schema.from_attribute(schema_data) if schema.support_status.status != support.HIDDEN: attr_schema[name] = schema attr_schema.update(resource_class.base_attributes_schema) attribs = Attributes._make_attributes(attr_schema).items() outp = dict((n, att.as_output(resource_name, template_type)) for n, att in attribs) outp['OS::stack_id'] = _stack_id_output(resource_name, template_type) return outp @staticmethod def schema_from_outputs(json_snippet): if json_snippet: return dict((k, Schema(v.get("Description"))) for k, v in json_snippet.items()) return {} def _validate_type(self, attrib, value): if attrib.schema.type == attrib.schema.STRING: if not isinstance(value, six.string_types): LOG.warning("Attribute %(name)s is not of type " "%(att_type)s", {'name': attrib.name, 'att_type': attrib.schema.STRING}) elif attrib.schema.type == attrib.schema.LIST: if (not isinstance(value, collections.Sequence) or isinstance(value, six.string_types)): LOG.warning("Attribute %(name)s is not of type " "%(att_type)s", {'name': attrib.name, 'att_type': attrib.schema.LIST}) elif attrib.schema.type == attrib.schema.MAP: if not isinstance(value, collections.Mapping): LOG.warning("Attribute %(name)s is not of type " "%(att_type)s", {'name': attrib.name, 'att_type': attrib.schema.MAP}) elif attrib.schema.type == attrib.schema.INTEGER: if not isinstance(value, int): LOG.warning("Attribute %(name)s is not of type " "%(att_type)s", {'name': attrib.name, 'att_type': attrib.schema.INTEGER}) elif attrib.schema.type == attrib.schema.BOOLEAN: try: strutils.bool_from_string(value, strict=True) except ValueError: LOG.warning("Attribute %(name)s is not of type " "%(att_type)s", {'name': attrib.name, 'att_type': attrib.schema.BOOLEAN}) @property def cached_attrs(self): return self._resolved_values @cached_attrs.setter def cached_attrs(self, c_attrs): if c_attrs is None: self._resolved_values = {} else: self._resolved_values = c_attrs self._has_new_resolved = False def set_cached_attr(self, key, value): self._resolved_values[key] = value self._has_new_resolved = True def has_new_cached_attrs(self): """Returns True if cached_attrs have changed. Allows the caller to determine if this instance's cached_attrs have been updated since they were initially set (if at all). """ return self._has_new_resolved def __getitem__(self, key): if key not in self: raise KeyError(_('%(resource)s: Invalid attribute %(key)s') % dict(resource=self._resource_name, key=key)) attrib = self._attributes.get(key) if attrib.schema.cache_mode == Schema.CACHE_NONE: return self._resolver(key) if key in self._resolved_values: return self._resolved_values[key] value = self._resolver(key) if value is not None: # validate the value against its type self._validate_type(attrib, value) # only store if not None, it may resolve to an actual value # on subsequent calls self.set_cached_attr(key, value) return value def __len__(self): return len(self._attributes) def __contains__(self, key): return key in self._attributes def __iter__(self): return iter(self._attributes) def __repr__(self): return ("Attributes for %s:\n\t" % self._resource_name + '\n\t'.join(six.itervalues(self))) def select_from_attribute(attribute_value, path): """Select an element from an attribute value.
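    For example (illustrative):

        select_from_attribute({'addresses': [{'addr': '10.0.0.1'}]},
                              ['addresses', 0, 'addr'])

    returns '10.0.0.1'; a path that cannot be resolved returns None.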
:param attribute_value: the attribute value. :param path: a list of path components to select from the attribute. :returns: the selected attribute component value. """ def get_path_component(collection, key): if not isinstance(collection, (collections.Mapping, collections.Sequence)): raise TypeError(_("Can't traverse attribute path")) if not isinstance(key, (six.string_types, int)): raise TypeError(_('Path components in attributes must be strings')) return collection[key] try: return six.moves.reduce(get_path_component, path, attribute_value) except (KeyError, IndexError, TypeError): return None heat-10.0.2/heat/engine/service.py0000666000175000017500000030627713343562351016763 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import datetime import functools import itertools import pydoc import socket import eventlet from oslo_config import cfg from oslo_context import context as oslo_context from oslo_log import log as logging import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_service import service from oslo_service import threadgroup from oslo_utils import timeutils from oslo_utils import uuidutils from osprofiler import profiler import six import webob from heat.common import context from heat.common import environment_format as env_fmt from heat.common import environment_util as env_util from heat.common import exception from heat.common.i18n import _ from heat.common import identifier from heat.common import messaging as rpc_messaging from heat.common import policy from heat.common import service_utils from heat.engine import api from heat.engine import attributes from heat.engine.cfn import template as cfntemplate from heat.engine import clients from heat.engine import environment from heat.engine.hot import functions as hot_functions from heat.engine import parameter_groups from heat.engine import properties from heat.engine import resources from heat.engine import service_software_config from heat.engine import stack as parser from heat.engine import stack_lock from heat.engine import stk_defn from heat.engine import support from heat.engine import template as templatem from heat.engine import update from heat.engine import worker from heat.objects import event as event_object from heat.objects import resource as resource_objects from heat.objects import service as service_objects from heat.objects import snapshot as snapshot_object from heat.objects import stack as stack_object from heat.rpc import api as rpc_api from heat.rpc import worker_api as rpc_worker_api cfg.CONF.import_opt('engine_life_check_timeout', 'heat.common.config') cfg.CONF.import_opt('max_resources_per_stack', 'heat.common.config') cfg.CONF.import_opt('max_stacks_per_tenant', 'heat.common.config') cfg.CONF.import_opt('enable_stack_abandon', 'heat.common.config') cfg.CONF.import_opt('enable_stack_adopt', 'heat.common.config') cfg.CONF.import_opt('convergence_engine', 'heat.common.config') # Time to wait for a stack to stop when cancelling running 
threads, before # giving up on being able to start a delete. STOP_STACK_TIMEOUT = 30 LOG = logging.getLogger(__name__) class ThreadGroupManager(object): def __init__(self): super(ThreadGroupManager, self).__init__() self.groups = {} self.msg_queues = collections.defaultdict(list) # Create dummy service task, because when there is nothing queued # on self.tg the process exits self.add_timer(cfg.CONF.periodic_interval, self._service_task) def _service_task(self): """Dummy task which gets queued on the service.Service threadgroup. Without this, service.Service sees nothing running, i.e. has nothing to wait() on, so the process exits. This could also be used to trigger periodic non-stack-specific housekeeping tasks. """ pass def _serialize_profile_info(self): prof = profiler.get() trace_info = None if prof: trace_info = { "hmac_key": prof.hmac_key, "base_id": prof.get_base_id(), "parent_id": prof.get_id() } return trace_info def _start_with_trace(self, cnxt, trace, func, *args, **kwargs): if trace: profiler.init(**trace) if cnxt is not None: cnxt.update_store() return func(*args, **kwargs) def start(self, stack_id, func, *args, **kwargs): """Run the given method in a sub-thread.""" if stack_id not in self.groups: self.groups[stack_id] = threadgroup.ThreadGroup() def log_exceptions(gt): try: gt.wait() except Exception: LOG.exception('Unhandled error in asynchronous task') except BaseException: pass req_cnxt = oslo_context.get_current() th = self.groups[stack_id].add_thread(self._start_with_trace, req_cnxt, self._serialize_profile_info(), func, *args, **kwargs) th.link(log_exceptions) return th def start_with_lock(self, cnxt, stack, engine_id, func, *args, **kwargs): """Run the method in a sub-thread after acquiring the stack lock. Release the lock when the thread finishes. :param cnxt: RPC context :param stack: Stack to be operated on :type stack: heat.engine.parser.Stack :param engine_id: The UUID of the engine/worker acquiring the lock :param func: Callable to be invoked in sub-thread :type func: function or instancemethod :param args: Args to be passed to func :param kwargs: Keyword-args to be passed to func. """ lock = stack_lock.StackLock(cnxt, stack.id, engine_id) with lock.thread_lock(): th = self.start_with_acquired_lock(stack, lock, func, *args, **kwargs) return th def start_with_acquired_lock(self, stack, lock, func, *args, **kwargs): """Run the given method in a sub-thread with an existing stack lock. Release the provided lock when the thread finishes. :param stack: Stack to be operated on :type stack: heat.engine.parser.Stack :param lock: The acquired stack lock :type lock: heat.engine.stack_lock.StackLock :param func: Callable to be invoked in sub-thread :type func: function or instancemethod :param args: Args to be passed to func :param kwargs: Keyword-args to be passed to func """ def release(gt): """Callback function that will be passed to GreenThread.link(). Persist the stack state to COMPLETE and FAILED close to releasing the lock to avoid race conditions.
""" if (stack is not None and stack.status != stack.IN_PROGRESS and stack.action not in (stack.DELETE, stack.ROLLBACK, stack.UPDATE)): stack.persist_state_and_release_lock(lock.engine_id) notify = kwargs.get('notify') if notify is not None: assert not notify.signalled() notify.signal() else: lock.release() # Link to self to allow the stack to run tasks stack.thread_group_mgr = self th = self.start(stack.id, func, *args, **kwargs) th.link(release) return th def add_timer(self, stack_id, func, *args, **kwargs): """Define a periodic task in the stack threadgroups. The task is run in a separate greenthread. Periodicity is cfg.CONF.periodic_interval """ if stack_id not in self.groups: self.groups[stack_id] = threadgroup.ThreadGroup() self.groups[stack_id].add_timer(cfg.CONF.periodic_interval, func, *args, **kwargs) def add_msg_queue(self, stack_id, msg_queue): self.msg_queues[stack_id].append(msg_queue) def remove_msg_queue(self, gt, stack_id, msg_queue): for q in self.msg_queues.pop(stack_id, []): if q is not msg_queue: self.add_msg_queue(stack_id, q) def stop_timers(self, stack_id): if stack_id in self.groups: self.groups[stack_id].stop_timers() def stop(self, stack_id, graceful=False): """Stop any active threads on a stack.""" if stack_id in self.groups: self.msg_queues.pop(stack_id, None) threadgroup = self.groups.pop(stack_id) threads = threadgroup.threads[:] threadgroup.stop(graceful) threadgroup.wait() # Wait for link()ed functions (i.e. lock release) links_done = dict((th, False) for th in threads) def mark_done(gt, th): links_done[th] = True for th in threads: th.link(mark_done, th) while not all(six.itervalues(links_done)): eventlet.sleep() def send(self, stack_id, message): for msg_queue in self.msg_queues.get(stack_id, []): msg_queue.put_nowait(message) class NotifyEvent(object): def __init__(self): self._queue = eventlet.queue.LightQueue(1) self._signalled = False def signalled(self): return self._signalled def signal(self): """Signal the event.""" if self._signalled: return self._signalled = True self._queue.put(None) # Yield control so that the waiting greenthread will get the message # as soon as possible, so that the API handler can respond to the user. # Another option would be to set the queue length to 0 (which would # cause put() to block until the event has been seen, but many unit # tests run in a single greenthread and would thus deadlock. eventlet.sleep(0) def wait(self): """Wait for the event.""" try: # There's no timeout argument to eventlet.event.Event available # until eventlet 0.22.1, so use a queue. self._queue.get(timeout=cfg.CONF.rpc_response_timeout) except eventlet.queue.Empty: LOG.warning('Timed out waiting for operation to start') @profiler.trace_cls("rpc") class EngineListener(object): """Listen on an AMQP queue named for the engine. Allows individual engines to communicate with each other for multi-engine support. 
""" ACTIONS = (STOP_STACK, SEND) = ('stop_stack', 'send') def __init__(self, host, engine_id, thread_group_mgr): self.thread_group_mgr = thread_group_mgr self.engine_id = engine_id self.host = host self._server = None def start(self): self.target = messaging.Target( server=self.engine_id, topic=rpc_api.LISTENER_TOPIC) self._server = rpc_messaging.get_rpc_server(self.target, self) self._server.start() def stop(self): if self._server is not None: LOG.debug("Attempting to stop engine listener...") try: self._server.stop() self._server.wait() LOG.info("Engine listener is stopped successfully") except Exception as e: LOG.error("Failed to stop engine listener, %s", e) def listening(self, ctxt): """Respond to a watchdog request. Respond affirmatively to confirm that the engine performing the action is still alive. """ return True def stop_stack(self, ctxt, stack_identity): """Stop any active threads on a stack.""" stack_id = stack_identity['stack_id'] self.thread_group_mgr.stop(stack_id) def send(self, ctxt, stack_identity, message): stack_id = stack_identity['stack_id'] self.thread_group_mgr.send(stack_id, message) @profiler.trace_cls("rpc") class EngineService(service.ServiceBase): """Manages the running instances from creation to destruction. All the methods in here are called from the RPC backend. This is all done dynamically so if a call is made via RPC that does not have a corresponding method here, an exception will be thrown when it attempts to call into this class. Arguments to these methods are also dynamically added and will be named as keyword arguments by the RPC caller. """ RPC_API_VERSION = '1.35' def __init__(self, host, topic): resources.initialise() self.host = host self.topic = topic self.binary = 'heat-engine' self.hostname = socket.gethostname() # The following are initialized here, but assigned in start() which # happens after the fork when spawning multiple worker processes self.listener = None self.worker_service = None self.engine_id = None self.thread_group_mgr = None self.target = None self.service_id = None self.manage_thread_grp = None self._rpc_server = None self.software_config = service_software_config.SoftwareConfigService() self.resource_enforcer = policy.ResourceEnforcer() if cfg.CONF.trusts_delegated_roles: LOG.warning('The default value of "trusts_delegated_roles" ' 'option in heat.conf is changed to [] in Kilo ' 'and heat will delegate all roles of trustor. 
' 'Please keep it the same if you do not want to ' 'delegate a subset of roles when upgrading.') def start(self): self.engine_id = service_utils.generate_engine_id() if self.thread_group_mgr is None: self.thread_group_mgr = ThreadGroupManager() self.listener = EngineListener(self.host, self.engine_id, self.thread_group_mgr) LOG.debug("Starting listener for engine %s", self.engine_id) self.listener.start() if cfg.CONF.convergence_engine: self.worker_service = worker.WorkerService( host=self.host, topic=rpc_worker_api.TOPIC, engine_id=self.engine_id, thread_group_mgr=self.thread_group_mgr ) self.worker_service.start() target = messaging.Target( version=self.RPC_API_VERSION, server=self.host, topic=self.topic) self.target = target self._rpc_server = rpc_messaging.get_rpc_server(target, self) self._rpc_server.start() self._client = rpc_messaging.get_rpc_client( version=self.RPC_API_VERSION) self._configure_db_conn_pool_size() self.service_manage_cleanup() if self.manage_thread_grp is None: self.manage_thread_grp = threadgroup.ThreadGroup() self.manage_thread_grp.add_timer(cfg.CONF.periodic_interval, self.service_manage_report) self.manage_thread_grp.add_thread(self.reset_stack_status) def _configure_db_conn_pool_size(self): # bug #1491185 # Set the DB max_overflow to match the thread pool size. # The overflow connections are automatically closed when they are # not used; setting it is better than setting DB max_pool_size. worker_pool_size = cfg.CONF.executor_thread_pool_size # Update max_overflow only if it is not adequate if ((cfg.CONF.database.max_overflow is None) or (cfg.CONF.database.max_overflow < worker_pool_size)): cfg.CONF.set_override('max_overflow', worker_pool_size, group='database') def _stop_rpc_server(self): # Stop the RPC connection first to prevent new requests if self._rpc_server is None: return LOG.debug("Attempting to stop engine service...") try: self._rpc_server.stop() self._rpc_server.wait() LOG.info("Engine service stopped successfully") except Exception as e: LOG.error("Failed to stop engine service, %s", e) def stop(self): self._stop_rpc_server() if self.listener: self.listener.stop() if cfg.CONF.convergence_engine and self.worker_service: # Stop the WorkerService self.worker_service.stop() # Wait for all active threads to be finished if self.thread_group_mgr: for stack_id in list(self.thread_group_mgr.groups.keys()): # Ignore dummy service task if stack_id == cfg.CONF.periodic_interval: continue LOG.info("Waiting for stack %s processing to finish", stack_id) # Stop threads gracefully self.thread_group_mgr.stop(stack_id, True) LOG.info("Stack %s processing finished", stack_id) if self.manage_thread_grp: self.manage_thread_grp.stop() ctxt = context.get_admin_context() service_objects.Service.delete(ctxt, self.service_id) LOG.info('Service %s is deleted', self.service_id) # Terminate the engine process LOG.info("All threads are gone, terminating engine") def wait(self): pass def reset(self): logging.setup(cfg.CONF, 'heat') @context.request_context def identify_stack(self, cnxt, stack_name): """The full stack identifier for a single, live stack with stack_name. :param cnxt: RPC context. :param stack_name: Name or UUID of the stack to look up.
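        Returns the identifier as a dict, e.g. (illustrative values):

            {'stack_name': 'mystack', 'stack_id': '<uuid>',
             'tenant': '<project-id>', 'path': ''}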
""" s = None if uuidutils.is_uuid_like(stack_name): s = stack_object.Stack.get_by_id( cnxt, stack_name, show_deleted=True, eager_load=False) # may be the name is in uuid format, so if get by id returns None, # we should get the info by name again if not s: s = stack_object.Stack.get_by_name(cnxt, stack_name) if not s: raise exception.EntityNotFound(entity='Stack', name=stack_name) return dict(s.identifier()) def _get_stack(self, cnxt, stack_identity, show_deleted=False): identity = identifier.HeatIdentifier(**stack_identity) s = stack_object.Stack.get_by_id( cnxt, identity.stack_id, show_deleted=show_deleted) if s is None: raise exception.EntityNotFound(entity='Stack', name=identity.stack_name) if not cnxt.is_admin and cnxt.tenant_id not in ( identity.tenant, s.stack_user_project_id): # The DB API should not allow this, but sanity-check anyway.. raise exception.InvalidTenant(target=identity.tenant, actual=cnxt.tenant_id) if identity.path or s.name != identity.stack_name: raise exception.EntityNotFound(entity='Stack', name=identity.stack_name) return s @context.request_context def show_stack(self, cnxt, stack_identity, resolve_outputs=True): """Return detailed information about one or all stacks. :param cnxt: RPC context. :param stack_identity: Name of the stack you want to show, or None to show all :param resolve_outputs: If True, outputs for given stack/stacks will be resolved """ if stack_identity is not None: db_stack = self._get_stack(cnxt, stack_identity, show_deleted=True) stacks = [parser.Stack.load(cnxt, stack=db_stack)] else: stacks = parser.Stack.load_all(cnxt) def show(stack): if resolve_outputs: for res in stack._explicit_dependencies(): ensure_cache = stack.convergence and res.id is not None node_data = res.node_data(for_resources=ensure_cache, for_outputs=True) stk_defn.update_resource_data(stack.defn, res.name, node_data) # Cases where stored attributes may not exist for a # resource: # * The resource is an AutoScalingGroup that received a # signal # * Near simultaneous updates (say by an update and a # signal) # * The first time resolving a pre-Pike stack if ensure_cache: res.store_attributes() return api.format_stack(stack, resolve_outputs=resolve_outputs) return [show(stack) for stack in stacks] def get_revision(self, cnxt): return cfg.CONF.revision['heat_revision'] @context.request_context def list_stacks(self, cnxt, limit=None, marker=None, sort_keys=None, sort_dir=None, filters=None, tenant_safe=True, show_deleted=False, show_nested=False, show_hidden=False, tags=None, tags_any=None, not_tags=None, not_tags_any=None): """Returns attributes of all stacks. It supports pagination (``limit`` and ``marker``), sorting (``sort_keys`` and ``sort_dir``) and filtering (``filters``) of the results. :param cnxt: RPC context :param limit: the number of stacks to list (integer or string) :param marker: the ID of the last item in the previous page :param sort_keys: an array of fields used to sort the list :param sort_dir: the direction of the sort ('asc' or 'desc') :param filters: a dict with attribute:value to filter the list :param tenant_safe: DEPRECATED, if true, scope the request by the current tenant :param show_deleted: if true, show soft-deleted stacks :param show_nested: if true, show nested stacks :param show_hidden: if true, show hidden stacks :param tags: show stacks containing these tags. If multiple tags are passed, they will be combined using the boolean AND expression :param tags_any: show stacks containing these tags. 
If multiple tags are passed, they will be combined using the boolean OR expression :param not_tags: show stacks not containing these tags. If multiple tags are passed, they will be combined using the boolean AND expression :param not_tags_any: show stacks not containing these tags. If multiple tags are passed, they will be combined using the boolean OR expression :returns: a list of formatted stacks """ if filters is not None: filters = api.translate_filters(filters) if not tenant_safe: cnxt = context.get_admin_context() stacks = stack_object.Stack.get_all( cnxt, limit=limit, sort_keys=sort_keys, marker=marker, sort_dir=sort_dir, filters=filters, show_deleted=show_deleted, show_nested=show_nested, show_hidden=show_hidden, tags=tags, tags_any=tags_any, not_tags=not_tags, not_tags_any=not_tags_any) return [api.format_stack_db_object(stack) for stack in stacks] @context.request_context def count_stacks(self, cnxt, filters=None, tenant_safe=True, show_deleted=False, show_nested=False, show_hidden=False, tags=None, tags_any=None, not_tags=None, not_tags_any=None): """Return the number of stacks that match the given filters. :param cnxt: RPC context. :param filters: a dict of ATTR:VALUE to match against stacks :param tenant_safe: DEPRECATED, if true, scope the request by the current tenant :param show_deleted: if true, count will include the deleted stacks :param show_nested: if true, count will include nested stacks :param show_hidden: if true, count will include hidden stacks :param tags: count stacks containing these tags. If multiple tags are passed, they will be combined using the boolean AND expression :param tags_any: count stacks containing these tags. If multiple tags are passed, they will be combined using the boolean OR expression :param not_tags: count stacks not containing these tags. If multiple tags are passed, they will be combined using the boolean AND expression :param not_tags_any: count stacks not containing these tags. If multiple tags are passed, they will be combined using the boolean OR expression :returns: an integer representing the number of matched stacks """ if not tenant_safe: cnxt = context.get_admin_context() return stack_object.Stack.count_all( cnxt, filters=filters, show_deleted=show_deleted, show_nested=show_nested, show_hidden=show_hidden, tags=tags, tags_any=tags_any, not_tags=not_tags, not_tags_any=not_tags_any) def _validate_deferred_auth_context(self, cnxt, stack): if cfg.CONF.deferred_auth_method != 'password': return if not stack.requires_deferred_auth(): return if cnxt.username is None: raise exception.MissingCredentialError(required='X-Auth-User') if cnxt.password is None: raise exception.MissingCredentialError(required='X-Auth-Key') def _validate_new_stack(self, cnxt, stack_name, parsed_template): if stack_object.Stack.get_by_name(cnxt, stack_name): raise exception.StackExists(stack_name=stack_name) tenant_limit = cfg.CONF.max_stacks_per_tenant if stack_object.Stack.count_all(cnxt) >= tenant_limit: message = _("You have reached the maximum stacks per tenant, " "%d. 
Please delete some stacks.") % tenant_limit raise exception.RequestLimitExceeded(message=message) self._validate_template(cnxt, parsed_template) def _validate_template(self, cnxt, parsed_template): try: parsed_template.validate() except AssertionError: raise except Exception as ex: raise exception.StackValidationFailed(message=six.text_type(ex)) max_resources = cfg.CONF.max_resources_per_stack if max_resources == -1: return num_resources = len(parsed_template[parsed_template.RESOURCES]) if num_resources > max_resources: message = exception.StackResourceLimitExceeded.msg_fmt raise exception.RequestLimitExceeded(message=message) def _parse_template_and_validate_stack(self, cnxt, stack_name, template, params, files, environment_files, args, owner_id=None, nested_depth=0, user_creds_id=None, stack_user_project_id=None, convergence=False, parent_resource_name=None, template_id=None): common_params = api.extract_args(args) # If it is stack-adopt, use parameters from adopt_stack_data if rpc_api.PARAM_ADOPT_STACK_DATA in common_params: if not cfg.CONF.enable_stack_adopt: raise exception.NotSupported(feature='Stack Adopt') # Override the params with values given with -P option new_params = {} if 'environment' in common_params[rpc_api.PARAM_ADOPT_STACK_DATA]: new_params = common_params[rpc_api.PARAM_ADOPT_STACK_DATA][ 'environment'].get(rpc_api.STACK_PARAMETERS, {}).copy() new_params.update(params.get(rpc_api.STACK_PARAMETERS, {})) params[rpc_api.STACK_PARAMETERS] = new_params if template_id is not None: tmpl = templatem.Template.load(cnxt, template_id) else: tmpl = templatem.Template(template, files=files) env_util.merge_environments(environment_files, files, params, tmpl.all_param_schemata(files)) tmpl.env = environment.Environment(params) self._validate_new_stack(cnxt, stack_name, tmpl) stack = parser.Stack(cnxt, stack_name, tmpl, owner_id=owner_id, nested_depth=nested_depth, user_creds_id=user_creds_id, stack_user_project_id=stack_user_project_id, convergence=convergence, parent_resource=parent_resource_name, **common_params) self.resource_enforcer.enforce_stack(stack, is_registered_policy=True) self._validate_deferred_auth_context(cnxt, stack) is_root = stack.nested_depth == 0 stack.validate() # For the root stack, log a summary of the TemplateResources loaded if is_root: tmpl.env.registry.log_resource_info(prefix=stack_name) return stack @context.request_context def preview_stack(self, cnxt, stack_name, template, params, files, args, environment_files=None): """Simulate a new stack using the provided template. Note that at this stage the template has already been fetched from the heat-api process if using a template-url. :param cnxt: RPC context. :param stack_name: Name of the stack you want to create. :param template: Template of stack you want to create. 
:param params: Stack Input Params :param files: Files referenced from the template :param args: Request parameters/args passed from API :param environment_files: optional ordered list of environment file names included in the files dict :type environment_files: list or None """ LOG.info('previewing stack %s', stack_name) conv_eng = cfg.CONF.convergence_engine stack = self._parse_template_and_validate_stack(cnxt, stack_name, template, params, files, environment_files, args, convergence=conv_eng) return api.format_stack_preview(stack) @context.request_context def create_stack(self, cnxt, stack_name, template, params, files, args, environment_files=None, owner_id=None, nested_depth=0, user_creds_id=None, stack_user_project_id=None, parent_resource_name=None, template_id=None): """Create a new stack using the template provided. Note that at this stage the template has already been fetched from the heat-api process if using a template-url. :param cnxt: RPC context. :param stack_name: Name of the stack you want to create. :param template: Template of stack you want to create. :param params: Stack Input Params :param files: Files referenced from the template :param args: Request parameters/args passed from API :param environment_files: optional ordered list of environment file names included in the files dict :type environment_files: list or None :param owner_id: parent stack ID for nested stacks, only expected when called from another heat-engine (not a user option) :param nested_depth: the nested depth for nested stacks, only expected when called from another heat-engine :param user_creds_id: the parent user_creds record for nested stacks :param stack_user_project_id: the parent stack_user_project_id for nested stacks :param parent_resource_name: the parent resource name :param template_id: the ID of a pre-stored template in the DB """ LOG.info('Creating stack %s', stack_name) def _create_stack_user(stack): if not stack.stack_user_project_id: try: stack.create_stack_user_project_id() except exception.AuthorizationFailure as ex: stack.state_set(stack.action, stack.FAILED, six.text_type(ex)) def _stack_create(stack, msg_queue=None): # Create/Adopt a stack, and create the periodic task if successful if stack.adopt_stack_data: stack.adopt() elif stack.status != stack.FAILED: stack.create(msg_queue=msg_queue) convergence = cfg.CONF.convergence_engine stack = self._parse_template_and_validate_stack( cnxt, stack_name, template, params, files, environment_files, args, owner_id, nested_depth, user_creds_id, stack_user_project_id, convergence, parent_resource_name, template_id) stack_id = stack.store() if cfg.CONF.reauthentication_auth_method == 'trusts': stack = parser.Stack.load( cnxt, stack_id=stack_id, use_stored_context=True) _create_stack_user(stack) if convergence: action = stack.CREATE if stack.adopt_stack_data: action = stack.ADOPT stack.thread_group_mgr = self.thread_group_mgr stack.converge_stack(template=stack.t, action=action) else: msg_queue = eventlet.queue.LightQueue() th = self.thread_group_mgr.start_with_lock(cnxt, stack, self.engine_id, _stack_create, stack, msg_queue=msg_queue) th.link(self.thread_group_mgr.remove_msg_queue, stack.id, msg_queue) self.thread_group_mgr.add_msg_queue(stack.id, msg_queue) return dict(stack.identifier()) def _prepare_stack_updates(self, cnxt, current_stack, template, params, environment_files, files, args, template_id=None): """Return the current and updated stack for a given transition. 
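
        When PARAM_EXISTING is set, the existing parameters are merged
        with the new ones roughly as follows (an illustrative sketch of
        the logic below, not the implementation itself):

            retained = {k: v for k, v in existing_params.items()
                        if k not in clear_params}
            new_env = environment.Environment(existing_env)
            new_env.load(params)  # new values take precedence
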
Changes *will not* be persisted, this is a helper method for update_stack and preview_update_stack. :param cnxt: RPC context. :param stack: A stack to be updated. :param template: Template of stack you want to update to. :param params: Stack Input Params :param files: Files referenced from the template :param args: Request parameters/args passed from API :param template_id: the ID of a pre-stored template in the DB """ # Now parse the template and any parameters for the updated # stack definition. If PARAM_EXISTING is specified, we merge # any environment provided into the existing one and attempt # to use the existing stack template, if one is not provided. if args.get(rpc_api.PARAM_EXISTING): assert template_id is None, \ "Cannot specify template_id with PARAM_EXISTING" if template is not None: new_template = template elif (current_stack.convergence or current_stack.status == current_stack.COMPLETE): # If convergence is enabled, or the stack is complete, we can # just use the current template... new_template = current_stack.t.t else: # ..but if it's FAILED without convergence things may be in an # inconsistent state, so we try to fall back on a stored copy # of the previous template if current_stack.prev_raw_template_id is not None: # Use the stored previous template prev_t = templatem.Template.load( cnxt, current_stack.prev_raw_template_id) new_template = prev_t.t else: # Nothing we can do, the failed update happened before # we started storing prev_raw_template_id LOG.error('PATCH update to FAILED stack only ' 'possible if convergence enabled or ' 'previous template stored') msg = _('PATCH update to non-COMPLETE stack') raise exception.NotSupported(feature=msg) new_files = current_stack.t.files new_files.update(files or {}) tmpl = templatem.Template(new_template, files=new_files) env_util.merge_environments(environment_files, files, params, tmpl.all_param_schemata(files)) existing_env = current_stack.env.env_as_dict() existing_params = existing_env[env_fmt.PARAMETERS] clear_params = set(args.get(rpc_api.PARAM_CLEAR_PARAMETERS, [])) retained = dict((k, v) for k, v in existing_params.items() if k not in clear_params) existing_env[env_fmt.PARAMETERS] = retained new_env = environment.Environment(existing_env) new_env.load(params) for key in list(new_env.params.keys()): if key not in tmpl.param_schemata(): new_env.params.pop(key) tmpl.env = new_env else: if template_id is not None: tmpl = templatem.Template.load(cnxt, template_id) else: tmpl = templatem.Template(template, files=files) env_util.merge_environments(environment_files, files, params, tmpl.all_param_schemata(files)) tmpl.env = environment.Environment(params) max_resources = cfg.CONF.max_resources_per_stack if max_resources != -1 and len(tmpl[tmpl.RESOURCES]) > max_resources: raise exception.RequestLimitExceeded( message=exception.StackResourceLimitExceeded.msg_fmt) stack_name = current_stack.name current_kwargs = current_stack.get_kwargs_for_cloning() common_params = api.extract_args(args) common_params.setdefault(rpc_api.PARAM_TIMEOUT, current_stack.timeout_mins) common_params.setdefault(rpc_api.PARAM_DISABLE_ROLLBACK, current_stack.disable_rollback) common_params.setdefault(rpc_api.PARAM_CONVERGE, current_stack.converge) if args.get(rpc_api.PARAM_EXISTING): common_params.setdefault(rpc_api.STACK_TAGS, current_stack.tags) current_kwargs.update(common_params) updated_stack = parser.Stack(cnxt, stack_name, tmpl, **current_kwargs) invalid_params = current_stack.parameters.immutable_params_modified( updated_stack.parameters, 
tmpl.env.params) if invalid_params: raise exception.ImmutableParameterModified(*invalid_params) self.resource_enforcer.enforce_stack(updated_stack, is_registered_policy=True) updated_stack.parameters.set_stack_id(current_stack.identifier()) self._validate_deferred_auth_context(cnxt, updated_stack) updated_stack.validate() return tmpl, current_stack, updated_stack @context.request_context def update_stack(self, cnxt, stack_identity, template, params, files, args, environment_files=None, template_id=None): """Update an existing stack based on the provided template and params. Note that at this stage the template has already been fetched from the heat-api process if using a template-url. :param cnxt: RPC context. :param stack_identity: Name of the stack you want to create. :param template: Template of stack you want to create. :param params: Stack Input Params :param files: Files referenced from the template :param args: Request parameters/args passed from API :param environment_files: optional ordered list of environment file names included in the files dict :type environment_files: list or None :param template_id: the ID of a pre-stored template in the DB """ # Get the database representation of the existing stack db_stack = self._get_stack(cnxt, stack_identity) LOG.info('Updating stack %s', db_stack.name) if cfg.CONF.reauthentication_auth_method == 'trusts': current_stack = parser.Stack.load( cnxt, stack=db_stack, use_stored_context=True) else: current_stack = parser.Stack.load(cnxt, stack=db_stack) self.resource_enforcer.enforce_stack(current_stack, is_registered_policy=True) if current_stack.action == current_stack.SUSPEND: msg = _('Updating a stack when it is suspended') raise exception.NotSupported(feature=msg) if current_stack.action == current_stack.DELETE: msg = _('Updating a stack when it is deleting') raise exception.NotSupported(feature=msg) tmpl, current_stack, updated_stack = self._prepare_stack_updates( cnxt, current_stack, template, params, environment_files, files, args, template_id) if current_stack.convergence: current_stack.thread_group_mgr = self.thread_group_mgr current_stack.converge_stack(template=tmpl, new_stack=updated_stack) else: msg_queue = eventlet.queue.LightQueue() stored_event = NotifyEvent() th = self.thread_group_mgr.start_with_lock(cnxt, current_stack, self.engine_id, current_stack.update, updated_stack, msg_queue=msg_queue, notify=stored_event) th.link(self.thread_group_mgr.remove_msg_queue, current_stack.id, msg_queue) self.thread_group_mgr.add_msg_queue(current_stack.id, msg_queue) stored_event.wait() return dict(current_stack.identifier()) @context.request_context def preview_update_stack(self, cnxt, stack_identity, template, params, files, args, environment_files=None): """Shows the resources that would be updated. The preview_update_stack method shows the resources that would be changed with an update to an existing stack based on the provided template and parameters. See update_stack for description of parameters. This method *cannot* guarantee that an update will have the actions specified because resource plugins can influence changes/replacements at runtime. Note that at this stage the template has already been fetched from the heat-api process if using a template-url. 
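
        The returned value is a map of action names to lists of formatted
        resources, of the form (illustrative):

            {'unchanged': [...], 'updated': [...], 'replaced': [...],
             'added': [...], 'deleted': [...]}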
""" # Get the database representation of the existing stack db_stack = self._get_stack(cnxt, stack_identity) LOG.info('Previewing update of stack %s', db_stack.name) current_stack = parser.Stack.load(cnxt, stack=db_stack) tmpl, current_stack, updated_stack = self._prepare_stack_updates( cnxt, current_stack, template, params, environment_files, files, args) update_task = update.StackUpdate(current_stack, updated_stack, None) actions = update_task.preview() def fmt_action_map(current, updated, act): def fmt_updated_res(k): return api.format_stack_resource(updated.resources.get(k)) def fmt_current_res(k): return api.format_stack_resource(current.resources.get(k)) return { 'unchanged': list( map(fmt_updated_res, act.get('unchanged', []))), 'updated': list(map(fmt_current_res, act.get('updated', []))), 'replaced': list( map(fmt_updated_res, act.get('replaced', []))), 'added': list(map(fmt_updated_res, act.get('added', []))), 'deleted': list(map(fmt_current_res, act.get('deleted', []))), } updated_stack.id = current_stack.id fmt_actions = fmt_action_map(current_stack, updated_stack, actions) if args.get(rpc_api.PARAM_SHOW_NESTED): # Note preview_resources is needed here to build the tree # of nested resources/stacks in memory, otherwise the # nested/has_nested() tests below won't work updated_stack.preview_resources() def nested_fmt_actions(current, updated, act): updated.id = current.id # Recurse for resources deleted from the current stack, # which is all those marked as deleted or replaced def _n_deleted(stk, deleted): for rsrc in deleted: deleted_rsrc = stk.resources.get(rsrc) if deleted_rsrc.has_nested(): nested_stk = deleted_rsrc.nested() nested_rsrc = nested_stk.resources.keys() n_fmt = fmt_action_map( nested_stk, None, {'deleted': nested_rsrc}) fmt_actions['deleted'].extend(n_fmt['deleted']) _n_deleted(nested_stk, nested_rsrc) _n_deleted(current, act['deleted'] + act['replaced']) # Recurse for all resources added to the updated stack, # which is all those marked added or replaced def _n_added(stk, added): for rsrc in added: added_rsrc = stk.resources.get(rsrc) if added_rsrc.has_nested(): nested_stk = added_rsrc.nested() nested_rsrc = nested_stk.resources.keys() n_fmt = fmt_action_map( None, nested_stk, {'added': nested_rsrc}) fmt_actions['added'].extend(n_fmt['added']) _n_added(nested_stk, nested_rsrc) _n_added(updated, act['added'] + act['replaced']) # Recursively preview all "updated" resources for rsrc in act['updated']: current_rsrc = current.resources.get(rsrc) updated_rsrc = updated.resources.get(rsrc) if current_rsrc.has_nested() and updated_rsrc.has_nested(): current_nested = current_rsrc.nested() updated_nested = updated_rsrc.nested() update_task = update.StackUpdate( current_nested, updated_nested, None) n_actions = update_task.preview() n_fmt_actions = fmt_action_map( current_nested, updated_nested, n_actions) for k in fmt_actions: fmt_actions[k].extend(n_fmt_actions[k]) nested_fmt_actions(current_nested, updated_nested, n_actions) # Start the recursive nested_fmt_actions with the parent stack. nested_fmt_actions(current_stack, updated_stack, actions) return fmt_actions @context.request_context def stack_cancel_update(self, cnxt, stack_identity, cancel_with_rollback=True): """Cancel currently running stack update. :param cnxt: RPC context. :param stack_identity: Name of the stack for which to cancel update. :param cancel_with_rollback: Force rollback when cancel update. 
""" # Get the database representation of the existing stack db_stack = self._get_stack(cnxt, stack_identity) current_stack = parser.Stack.load(cnxt, stack=db_stack) if cancel_with_rollback: allowed_actions = (current_stack.UPDATE,) else: allowed_actions = (current_stack.UPDATE, current_stack.CREATE) if not (current_stack.status == current_stack.IN_PROGRESS and current_stack.action in allowed_actions): state = '_'.join(current_stack.state) msg = _("Cancelling update when stack is %s") % str(state) raise exception.NotSupported(feature=msg) LOG.info('Starting cancel of updating stack %s', db_stack.name) if current_stack.convergence: current_stack.thread_group_mgr = self.thread_group_mgr if cancel_with_rollback: func = current_stack.rollback else: func = functools.partial(self.worker_service.stop_traversal, current_stack) self.thread_group_mgr.start(current_stack.id, func) return lock = stack_lock.StackLock(cnxt, current_stack.id, self.engine_id) engine_id = lock.get_engine_id() if engine_id is None: LOG.debug('No lock found on stack %s', db_stack.name) return if cancel_with_rollback: cancel_message = rpc_api.THREAD_CANCEL_WITH_ROLLBACK else: cancel_message = rpc_api.THREAD_CANCEL # Current engine has the lock if engine_id == self.engine_id: self.thread_group_mgr.send(current_stack.id, cancel_message) # Another active engine has the lock elif service_utils.engine_alive(cnxt, engine_id): cancel_result = self._remote_call( cnxt, engine_id, cfg.CONF.engine_life_check_timeout, self.listener.SEND, stack_identity=stack_identity, message=cancel_message) if cancel_result is None: LOG.debug("Successfully sent %(msg)s message " "to remote task on engine %(eng)s" % { 'eng': engine_id, 'msg': cancel_message}) else: raise exception.EventSendFailed(stack_name=current_stack.name, engine_id=engine_id) else: LOG.warning(_('Cannot cancel stack %(stack_name)s: lock held by ' 'unknown engine %(engine_id)s') % { 'stack_name': db_stack.name, 'engine_id': engine_id}) @context.request_context def validate_template(self, cnxt, template, params=None, files=None, environment_files=None, show_nested=False, ignorable_errors=None): """Check the validity of a template. Checks, so far as we can, that a template is valid, and returns information about the parameters suitable for producing a user interface through which to specify the parameter values. :param cnxt: RPC context. :param template: Template of stack you want to create. 
:param params: Stack Input Params :param files: Files referenced from the template :param environment_files: optional ordered list of environment file names included in the files dict :type environment_files: list or None :param show_nested: if True, any nested templates will be checked :param ignorable_errors: List of error_code to be ignored as part of validation """ LOG.info('validate_template') if template is None: msg = _("No Template provided.") return webob.exc.HTTPBadRequest(explanation=msg) if ignorable_errors: invalid_codes = (set(ignorable_errors) - set(exception.ERROR_CODE_MAP.keys())) if invalid_codes: msg = (_("Invalid codes in ignore_errors : %s") % list(invalid_codes)) return webob.exc.HTTPBadRequest(explanation=msg) tmpl = templatem.Template(template, files=files) env_util.merge_environments(environment_files, files, params, tmpl.all_param_schemata(files)) tmpl.env = environment.Environment(params) try: self._validate_template(cnxt, tmpl) except Exception as ex: return {'Error': six.text_type(ex)} stack_name = 'dummy' stack = parser.Stack(cnxt, stack_name, tmpl, strict_validate=False) try: stack.validate(ignorable_errors=ignorable_errors, validate_res_tmpl_only=True) except exception.StackValidationFailed as ex: return {'Error': six.text_type(ex)} def filter_parameter(p): return p.name not in stack.parameters.PSEUDO_PARAMETERS params = stack.parameters.map(api.format_validate_parameter, filter_func=filter_parameter) result = { 'Description': tmpl.get('Description', ''), 'Parameters': params } param_groups = parameter_groups.ParameterGroups(tmpl) if param_groups.parameter_groups: result['ParameterGroups'] = param_groups.parameter_groups if show_nested: result.update(stack.get_nested_parameters(filter_parameter)) result['Environment'] = tmpl.env.user_env_as_dict() return result @context.request_context def authenticated_to_backend(self, cnxt): """Validate the credentials in the RPC context. Verify that the credentials in the RPC context are valid for the current cloud backend. """ return clients.Clients(cnxt).authenticated() @context.request_context def get_template(self, cnxt, stack_identity): """Get the template. :param cnxt: RPC context. :param stack_identity: Name of the stack you want to see. """ s = self._get_stack(cnxt, stack_identity, show_deleted=True) return s.raw_template.template @context.request_context def get_environment(self, cnxt, stack_identity): """Returns the environment for an existing stack. :param cnxt: RPC context :param stack_identity: identifies the stack :rtype: dict """ s = self._get_stack(cnxt, stack_identity, show_deleted=True) return s.raw_template.environment @context.request_context def get_files(self, cnxt, stack_identity): """Returns the files for an existing stack. :param cnxt: RPC context :param stack_identity: identifies the stack :rtype: dict """ s = self._get_stack(cnxt, stack_identity, show_deleted=True) template = templatem.Template.load( cnxt, s.raw_template_id, s.raw_template) return dict(template.files) @context.request_context def list_outputs(self, cntx, stack_identity): """Get a list of stack outputs. :param cntx: RPC context. :param stack_identity: Name of the stack you want to see. :return: list of stack outputs in defined format. 
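
        Example return value (illustrative; output values are not
        resolved here, because ``resolve_value=False`` is used below):

            [{'output_key': 'first_address',
              'description': 'Address of the first server'}]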
""" s = self._get_stack(cntx, stack_identity) stack = parser.Stack.load(cntx, stack=s) return api.format_stack_outputs(stack.outputs, resolve_value=False) @context.request_context def show_output(self, cntx, stack_identity, output_key): """Returns dict with specified output key, value and description. :param cntx: RPC context. :param stack_identity: Name of the stack you want to see. :param output_key: key of desired stack output. :return: dict with output key, value and description in defined format. """ s = self._get_stack(cntx, stack_identity) stack = parser.Stack.load(cntx, stack=s) outputs = stack.outputs if output_key not in outputs: raise exception.NotFound(_('Specified output key %s not ' 'found.') % output_key) stack._update_all_resource_data(for_resources=False, for_outputs={output_key}) return api.format_stack_output(outputs[output_key]) def _remote_call(self, cnxt, lock_engine_id, timeout, call, **kwargs): self.cctxt = self._client.prepare( version='1.0', timeout=timeout, topic=rpc_api.LISTENER_TOPIC, server=lock_engine_id) try: self.cctxt.call(cnxt, call, **kwargs) except messaging.MessagingTimeout: return False @context.request_context def delete_stack(self, cnxt, stack_identity): """Delete a given stack. :param cnxt: RPC context. :param stack_identity: Name of the stack you want to delete. """ st = self._get_stack(cnxt, stack_identity) if (st.status == parser.Stack.COMPLETE and st.action == parser.Stack.DELETE): raise exception.EntityNotFound(entity='Stack', name=st.name) LOG.info('Deleting stack %s', st.name) stack = parser.Stack.load(cnxt, stack=st) self.resource_enforcer.enforce_stack(stack, is_registered_policy=True) if stack.convergence and cfg.CONF.convergence_engine: stack.thread_group_mgr = self.thread_group_mgr template = templatem.Template.create_empty_template( from_template=stack.t) # stop existing traversal; mark stack as FAILED if stack.status == stack.IN_PROGRESS: self.worker_service.stop_traversal(stack) def stop_workers(): self.worker_service.stop_all_workers(stack) stack.converge_stack(template=template, action=stack.DELETE, pre_converge=stop_workers) return lock = stack_lock.StackLock(cnxt, stack.id, self.engine_id) with lock.try_thread_lock() as acquire_result: # Successfully acquired lock if acquire_result is None: self.thread_group_mgr.stop_timers(stack.id) stored = NotifyEvent() self.thread_group_mgr.start_with_acquired_lock(stack, lock, stack.delete, notify=stored) stored.wait() return # Current engine has the lock if acquire_result == self.engine_id: # give threads which are almost complete an opportunity to # finish naturally before force stopping them self.thread_group_mgr.send(stack.id, rpc_api.THREAD_CANCEL) # Another active engine has the lock elif service_utils.engine_alive(cnxt, acquire_result): cancel_result = self._remote_call( cnxt, acquire_result, cfg.CONF.engine_life_check_timeout, self.listener.SEND, stack_identity=stack_identity, message=rpc_api.THREAD_CANCEL) if cancel_result is None: LOG.debug("Successfully sent %(msg)s message " "to remote task on engine %(eng)s" % { 'eng': acquire_result, 'msg': rpc_api.THREAD_CANCEL}) else: raise exception.EventSendFailed(stack_name=stack.name, engine_id=acquire_result) def reload(): st = self._get_stack(cnxt, stack_identity) stack = parser.Stack.load(cnxt, stack=st) self.resource_enforcer.enforce_stack(stack, is_registered_policy=True) return stack def wait_then_delete(stack): watch = timeutils.StopWatch(cfg.CONF.error_wait_time + 10) watch.start() while not watch.expired(): LOG.debug('Waiting for 
stack cancel to complete: %s', stack.name) with lock.try_thread_lock() as acquire_result: if acquire_result is None: stack = reload() # do the actual delete with the aquired lock self.thread_group_mgr.start_with_acquired_lock( stack, lock, stack.delete) return eventlet.sleep(1.0) if acquire_result == self.engine_id: # cancel didn't finish in time, attempt a stop instead self.thread_group_mgr.stop(stack.id) elif service_utils.engine_alive(cnxt, acquire_result): # Another active engine has the lock stop_result = self._remote_call( cnxt, acquire_result, STOP_STACK_TIMEOUT, self.listener.STOP_STACK, stack_identity=stack_identity) if stop_result is None: LOG.debug("Successfully stopped remote task " "on engine %s", acquire_result) else: raise exception.StopActionFailed( stack_name=stack.name, engine_id=acquire_result) stack = reload() # do the actual delete in a locked task self.thread_group_mgr.start_with_lock(cnxt, stack, self.engine_id, stack.delete) # Cancelling the stack could take some time, so do it in a task self.thread_group_mgr.start(stack.id, wait_then_delete, stack) @context.request_context def export_stack(self, cnxt, stack_identity): """Exports the stack data json. Intended to be used to safely retrieve the stack data before performing the abandon action. :param cnxt: RPC context. :param stack_identity: Name of the stack you want to export. """ return self.abandon_stack(cnxt, stack_identity, abandon=False) @context.request_context def abandon_stack(self, cnxt, stack_identity, abandon=True): """Abandon a given stack. :param cnxt: RPC context. :param stack_identity: Name of the stack you want to abandon. :param abandon: Delete Heat stack but not physical resources. """ if not cfg.CONF.enable_stack_abandon: raise exception.NotSupported(feature='Stack Abandon') def _stack_abandon(stk, abandon): if abandon: LOG.info('abandoning stack %s', stk.name) stk.delete(abandon=abandon) else: LOG.info('exporting stack %s', stk.name) st = self._get_stack(cnxt, stack_identity) stack = parser.Stack.load(cnxt, stack=st) lock = stack_lock.StackLock(cnxt, stack.id, self.engine_id) with lock.thread_lock(): # Get stack details before deleting it. stack_info = stack.prepare_abandon() self.thread_group_mgr.start_with_acquired_lock(stack, lock, _stack_abandon, stack, abandon) return stack_info def list_resource_types(self, cnxt, support_status=None, type_name=None, heat_version=None, with_description=False): """Get a list of supported resource types. :param cnxt: RPC context. :param support_status: Support status of resource type :param type_name: Resource type's name (regular expression allowed) :param heat_version: Heat version :param with_description: Either return resource type description or not """ result = resources.global_env().get_types( cnxt, support_status=support_status, type_name=type_name, version=heat_version, with_description=with_description) return result def list_template_versions(self, cnxt): def find_version_class(versions, cls): for version in versions: if version['class'] is cls: return version mgr = templatem._get_template_extension_manager() _template_classes = [(name, mgr[name].plugin) for name in mgr.names()] versions = [] for t in sorted(_template_classes): # Sort to ensure dates come first if issubclass(t[1], cfntemplate.CfnTemplateBase): type = 'cfn' else: type = 'hot' # Official versions are in '%Y-%m-%d' format. 
Default # version aliases are the Heat release code name try: datetime.datetime.strptime(t[0].split('.')[-1], '%Y-%m-%d') versions.append({'version': t[0], 'type': type, 'class': t[1], 'aliases': []}) except ValueError: version = find_version_class(versions, t[1]) if version is not None: version['aliases'].append(t[0]) else: raise exception.InvalidTemplateVersions(version=t[0]) # 'class' was just used to find the version that the alias # maps to. Remove it so it will not show up in the output for version in versions: del version['class'] return versions def list_template_functions(self, cnxt, template_version, with_condition=False): mgr = templatem._get_template_extension_manager() try: tmpl_class = mgr[template_version] except KeyError: raise exception.NotFound(_("Template with version %s not found") % template_version) supported_funcs = tmpl_class.plugin.functions if with_condition: supported_funcs.update(tmpl_class.plugin.condition_functions) functions = [] for func_name, func in six.iteritems(supported_funcs): if func is not hot_functions.Removed: desc = pydoc.splitdoc(pydoc.getdoc(func))[0] functions.append( {'functions': func_name, 'description': desc} ) return functions def resource_schema(self, cnxt, type_name, with_description=False): """Return the schema of the specified type. :param cnxt: RPC context. :param type_name: Name of the resource type to obtain the schema of. :param with_description: Return result with description or not. """ self.resource_enforcer.enforce(cnxt, type_name, is_registered_policy=True) try: resource_class = resources.global_env().get_class(type_name) except exception.NotFound: LOG.exception('Error loading resource type %s ' 'from global environment.', type_name) raise exception.InvalidGlobalResource(type_name=type_name) assert resource_class is not None if resource_class.support_status.status == support.HIDDEN: raise exception.NotSupported(feature=type_name) try: svc_available = resource_class.is_service_available(cnxt)[0] except Exception as exc: raise exception.ResourceTypeUnavailable( service_name=resource_class.default_client_name, resource_type=type_name, reason=six.text_type(exc)) else: if not svc_available: raise exception.ResourceTypeUnavailable( service_name=resource_class.default_client_name, resource_type=type_name, reason='Service endpoint not in service catalog.') def properties_schema(): for name, schema_dict in resource_class.properties_schema.items(): schema = properties.Schema.from_legacy(schema_dict) if (schema.implemented and schema.support_status.status != support.HIDDEN): yield name, dict(schema) def attributes_schema(): for name, schema_data in itertools.chain( resource_class.attributes_schema.items(), resource_class.base_attributes_schema.items()): schema = attributes.Schema.from_attribute(schema_data) if schema.support_status.status != support.HIDDEN: yield name, dict(schema) result = { rpc_api.RES_SCHEMA_RES_TYPE: type_name, rpc_api.RES_SCHEMA_PROPERTIES: dict(properties_schema()), rpc_api.RES_SCHEMA_ATTRIBUTES: dict(attributes_schema()), rpc_api.RES_SCHEMA_SUPPORT_STATUS: resource_class.support_status.to_dict() } if with_description: result[rpc_api.RES_SCHEMA_DESCRIPTION] = resource_class.getdoc() return result def generate_template(self, cnxt, type_name, template_type='cfn'): """Generate a template based on the specified type. :param cnxt: RPC context. :param type_name: Name of the resource type to generate a template for. :param template_type: the template type to generate, cfn or hot. 
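
        Example (illustrative):

            engine.generate_template(ctxt, 'OS::Heat::TestResource',
                                     template_type='hot')

        The result is a template skeleton whose parameters, resources and
        outputs are derived from the resource type's schema.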
""" self.resource_enforcer.enforce(cnxt, type_name, is_registered_policy=True) try: resource_class = resources.global_env().get_class(type_name) except exception.NotFound: LOG.exception('Error loading resource type %s ' 'from global environment.', type_name) raise exception.InvalidGlobalResource(type_name=type_name) else: if resource_class.support_status.status == support.HIDDEN: raise exception.NotSupported(feature=type_name) return resource_class.resource_to_template(type_name, template_type) @context.request_context def list_events(self, cnxt, stack_identity, filters=None, limit=None, marker=None, sort_keys=None, sort_dir=None, nested_depth=None): """Lists all events associated with a given stack. It supports pagination (``limit`` and ``marker``), sorting (``sort_keys`` and ``sort_dir``) and filtering(filters) of the results. :param cnxt: RPC context. :param stack_identity: Name of the stack you want to get events for :param filters: a dict with attribute:value to filter the list :param limit: the number of events to list (integer or string) :param marker: the ID of the last event in the previous page :param sort_keys: an array of fields used to sort the list :param sort_dir: the direction of the sort ('asc' or 'desc'). :param nested_depth: Levels of nested stacks to list events for. """ stack_identifiers = None root_stack_identifier = None if stack_identity: st = self._get_stack(cnxt, stack_identity, show_deleted=True) if nested_depth: root_stack_identifier = st.identifier() # find all stacks with resources associated with a root stack ResObj = resource_objects.Resource stack_ids = ResObj.get_all_stack_ids_by_root_stack(cnxt, st.id) # find stacks to the requested nested_depth stack_filters = { 'id': stack_ids, 'nested_depth': list(range(nested_depth + 1)) } stacks = stack_object.Stack.get_all(cnxt, filters=stack_filters, show_nested=True) stack_identifiers = {s.id: s.identifier() for s in stacks} if filters is None: filters = {} filters['stack_id'] = list(stack_identifiers.keys()) events = list(event_object.Event.get_all_by_tenant( cnxt, limit=limit, marker=marker, sort_keys=sort_keys, sort_dir=sort_dir, filters=filters)) else: events = list(event_object.Event.get_all_by_stack( cnxt, st.id, limit=limit, marker=marker, sort_keys=sort_keys, sort_dir=sort_dir, filters=filters)) stack_identifiers = {st.id: st.identifier()} else: events = list(event_object.Event.get_all_by_tenant( cnxt, limit=limit, marker=marker, sort_keys=sort_keys, sort_dir=sort_dir, filters=filters)) stack_ids = {e.stack_id for e in events} stacks = stack_object.Stack.get_all(cnxt, filters={'id': stack_ids}, show_nested=True) stack_identifiers = {s.id: s.identifier() for s in stacks} # a 'uuid' in filters indicates we are showing a full event, i.e. # the only time we need to load the event's rsrc prop data. include_rsrc_prop_data = (filters and 'uuid' in filters) return [api.format_event(e, stack_identifiers.get(e.stack_id), root_stack_identifier, include_rsrc_prop_data) for e in events] def _authorize_stack_user(self, cnxt, stack, resource_name): """Filter access to describe_stack_resource for in-instance users. 
- The user must map to a User resource defined in the requested stack - The user resource must validate OK against any Policy specified """ # first check whether access is allowed by context user_id if stack.access_allowed(cnxt.user_id, resource_name): return True # fall back to looking for EC2 credentials in the context try: ec2_creds = jsonutils.loads(cnxt.aws_creds).get('ec2Credentials') except (TypeError, AttributeError): ec2_creds = None if not ec2_creds: return False access_key = ec2_creds.get('access') return stack.access_allowed(access_key, resource_name) @context.request_context def describe_stack_resource(self, cnxt, stack_identity, resource_name, with_attr=None): s = self._get_stack(cnxt, stack_identity) stack = parser.Stack.load(cnxt, stack=s) if cfg.CONF.heat_stack_user_role in cnxt.roles: if not self._authorize_stack_user(cnxt, stack, resource_name): LOG.warning("Access denied to resource %s", resource_name) raise exception.Forbidden() resource = stack.resource_get(resource_name) if not resource: raise exception.ResourceNotFound(resource_name=resource_name, stack_name=stack.name) return api.format_stack_resource(resource, with_attr=with_attr) @context.request_context def resource_signal(self, cnxt, stack_identity, resource_name, details, sync_call=False): """Calls resource's signal for the specified resource. :param sync_call: indicates whether a synchronized call behavior is expected. This is reserved for CFN WaitCondition implementation. """ def _resource_signal(stack, rsrc, details, need_check): LOG.debug("signaling resource %s:%s" % (stack.name, rsrc.name)) needs_metadata_updates = rsrc.signal(details, need_check) if not needs_metadata_updates: return # Refresh the metadata for all other resources, since signals can # update metadata which is used by other resources, e.g # when signalling a WaitConditionHandle resource, and other # resources may refer to WaitCondition Fn::GetAtt Data for r in stack._explicit_dependencies(): if r.action != r.INIT: if r.name != rsrc.name: r.metadata_update() stk_defn.update_resource_data(stack.defn, r.name, r.node_data()) s = self._get_stack(cnxt, stack_identity) # This is not "nice" converting to the stored context here, # but this happens because the keystone user associated with the # signal doesn't have permission to read the secret key of # the user associated with the cfn-credentials file stack = parser.Stack.load(cnxt, stack=s, use_stored_context=True) rsrc = stack.resource_get(resource_name) if rsrc is None: raise exception.ResourceNotFound(resource_name=resource_name, stack_name=stack.name) if rsrc.id is None: raise exception.ResourceNotAvailable(resource_name=resource_name) if callable(rsrc.signal): rsrc._signal_check_action() rsrc._signal_check_hook(details) if sync_call or not callable(getattr(rsrc, 'handle_signal', None)): _resource_signal(stack, rsrc, details, False) else: self.thread_group_mgr.start(stack.id, _resource_signal, stack, rsrc, details, False) if sync_call: return rsrc.metadata_get() @context.request_context def resource_mark_unhealthy(self, cnxt, stack_identity, resource_name, mark_unhealthy, resource_status_reason=None): """Mark the resource as healthy or unhealthy. Put the resource in CHECK_FAILED state if 'mark_unhealthy' is true. Put the resource in CHECK_COMPLETE if 'mark_unhealthy' is false and the resource is in CHECK_FAILED state. Otherwise, make no change. :param resource_name: either the logical name of the resource or the physical resource ID. 
        :param mark_unhealthy: indicates whether the resource is
            unhealthy.
        :param resource_status_reason: reason for the health change.
        """
        def lock(rsrc):
            if rsrc.stack.convergence:
                return rsrc.lock(self.engine_id)
            else:
                return stack_lock.StackLock(cnxt, rsrc.stack.id,
                                            self.engine_id)

        if not isinstance(mark_unhealthy, bool):
            raise exception.Invalid(reason="mark_unhealthy is not a boolean")

        s = self._get_stack(cnxt, stack_identity)
        stack = parser.Stack.load(cnxt, stack=s)
        rsrc = self._find_resource_in_stack(cnxt, resource_name, stack)
        reason = (resource_status_reason or
                  "state changed by resource_mark_unhealthy api")
        try:
            with lock(rsrc):
                if mark_unhealthy:
                    if rsrc.action != rsrc.DELETE:
                        rsrc.state_set(rsrc.CHECK, rsrc.FAILED,
                                       reason=reason)
                elif rsrc.state == (rsrc.CHECK, rsrc.FAILED):
                    rsrc.handle_metadata_reset()
                    rsrc.state_set(rsrc.CHECK, rsrc.COMPLETE, reason=reason)
        except exception.UpdateInProgress:
            raise exception.ActionInProgress(stack_name=stack.name,
                                             action=stack.action)

    @staticmethod
    def _find_resource_in_stack(cnxt, resource_name, stack):
        """Find a resource in a stack by either name or physical ID."""
        if resource_name in stack:
            return stack[resource_name]

        rsrcs = resource_objects.Resource.get_all_by_physical_resource_id(
            cnxt, resource_name)

        def in_stack(rs):
            return rs.stack_id == stack.id and stack[rs.name].id == rs.id

        matches = [stack[rs.name] for rs in rsrcs if in_stack(rs)]
        if matches:
            if len(matches) == 1:
                return matches[0]
            raise exception.PhysicalResourceIDAmbiguity(
                phys_id=resource_name)

        # Try it the slow way
        match = stack.resource_by_refid(resource_name)
        if match is not None:
            return match

        raise exception.ResourceNotFound(resource_name=resource_name,
                                         stack_name=stack.name)

    @context.request_context
    def find_physical_resource(self, cnxt, physical_resource_id):
        """Return an identifier for the specified resource.

        :param cnxt: RPC context.
        :param physical_resource_id: The physical resource ID to look up.
        """
        rsrcs = resource_objects.Resource.get_all_by_physical_resource_id(
            cnxt, physical_resource_id)
        if not rsrcs:
            raise exception.EntityNotFound(entity='Resource',
                                           name=physical_resource_id)
        # This call is used only in the cfn API, which only cares about
        # finding the stack anyway. So allow duplicate resource IDs
        # within the same stack.
        if len({rs.stack_id for rs in rsrcs}) > 1:
            raise exception.PhysicalResourceIDAmbiguity(
                phys_id=physical_resource_id)
        rs = rsrcs[0]
        stack = parser.Stack.load(cnxt, stack_id=rs.stack_id)
        resource = stack[rs.name]

        return dict(resource.identifier())

    @context.request_context
    def describe_stack_resources(self, cnxt, stack_identity, resource_name):
        s = self._get_stack(cnxt, stack_identity)

        stack = parser.Stack.load(cnxt, stack=s)

        return [api.format_stack_resource(resource)
                for name, resource in six.iteritems(stack)
                if resource_name is None or name == resource_name]

    @context.request_context
    def list_stack_resources(self, cnxt, stack_identity,
                             nested_depth=0, with_detail=False,
                             filters=None):
        s = self._get_stack(cnxt, stack_identity, show_deleted=True)
        stack = parser.Stack.load(cnxt, stack=s)
        depth = min(nested_depth, cfg.CONF.max_nested_stack_depth)

        res_type = None
        if filters is not None:
            filters = api.translate_filters(filters)
            # There is no corresponding `type` column in the Resource
            # table, so SQLAlchemy filters can't be used for it.
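            # For example (illustrative), filters={'type': 'OS::Nova::Server'}
            # pops the 'type' key here, and resources are then filtered
            # by type in Python via filter_type() below.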
res_type = filters.pop('type', None) if depth > 0: # populate context with resources from all nested depths resource_objects.Resource.get_all_by_root_stack( cnxt, stack.id, filters, cache=True) def filter_type(res_iter): for res in res_iter: if res_type not in res.type(): continue yield res if res_type is None: rsrcs = stack.iter_resources(depth, filters=filters) else: rsrcs = filter_type(stack.iter_resources(depth, filters=filters)) return [api.format_stack_resource(resource, detail=with_detail) for resource in rsrcs] @context.request_context def stack_suspend(self, cnxt, stack_identity): """Handle request to perform suspend action on a stack.""" s = self._get_stack(cnxt, stack_identity) stack = parser.Stack.load(cnxt, stack=s) self.resource_enforcer.enforce_stack(stack, is_registered_policy=True) stored_event = NotifyEvent() self.thread_group_mgr.start_with_lock(cnxt, stack, self.engine_id, stack.suspend, notify=stored_event) stored_event.wait() @context.request_context def stack_resume(self, cnxt, stack_identity): """Handle request to perform a resume action on a stack.""" s = self._get_stack(cnxt, stack_identity) stack = parser.Stack.load(cnxt, stack=s) self.resource_enforcer.enforce_stack(stack, is_registered_policy=True) stored_event = NotifyEvent() self.thread_group_mgr.start_with_lock(cnxt, stack, self.engine_id, stack.resume, notify=stored_event) stored_event.wait() @context.request_context def stack_snapshot(self, cnxt, stack_identity, name): def _stack_snapshot(stack, snapshot): def save_snapshot(stack, action, status, reason): """Function that saves snapshot before snapshot complete.""" data = stack.prepare_abandon() data["status"] = status snapshot_object.Snapshot.update( cnxt, snapshot.id, {'data': data, 'status': status, 'status_reason': reason}) LOG.debug("Snapshotting stack %s", stack.name) stack.snapshot(save_snapshot_func=save_snapshot) s = self._get_stack(cnxt, stack_identity) stack = parser.Stack.load(cnxt, stack=s) if stack.status == stack.IN_PROGRESS: LOG.info('%(stack)s is in state %(action)s_IN_PROGRESS, ' 'snapshot is not permitted.', { 'stack': six.text_type(stack), 'action': stack.action}) raise exception.ActionInProgress(stack_name=stack.name, action=stack.action) lock = stack_lock.StackLock(cnxt, stack.id, self.engine_id) with lock.thread_lock(): snapshot = snapshot_object.Snapshot.create(cnxt, { 'tenant': cnxt.tenant_id, 'name': name, 'stack_id': stack.id, 'status': 'IN_PROGRESS'}) self.thread_group_mgr.start_with_acquired_lock( stack, lock, _stack_snapshot, stack, snapshot) return api.format_snapshot(snapshot) @context.request_context def show_snapshot(self, cnxt, stack_identity, snapshot_id): s = self._get_stack(cnxt, stack_identity) snapshot = snapshot_object.Snapshot.get_snapshot_by_stack( cnxt, snapshot_id, s) return api.format_snapshot(snapshot) @context.request_context def delete_snapshot(self, cnxt, stack_identity, snapshot_id): def _delete_snapshot(stack, snapshot): stack.delete_snapshot(snapshot) snapshot_object.Snapshot.delete(cnxt, snapshot_id) s = self._get_stack(cnxt, stack_identity) stack = parser.Stack.load(cnxt, stack=s) snapshot = snapshot_object.Snapshot.get_snapshot_by_stack( cnxt, snapshot_id, s) if snapshot.status == stack.IN_PROGRESS: msg = _('Deleting in-progress snapshot') raise exception.NotSupported(feature=msg) self.thread_group_mgr.start( stack.id, _delete_snapshot, stack, snapshot) @context.request_context def stack_check(self, cnxt, stack_identity): """Handle request to perform a check action on a stack.""" s = 
        self._get_stack(cnxt, stack_identity)
        stack = parser.Stack.load(cnxt, stack=s)
        LOG.info("Checking stack %s", stack.name)
        stored_event = NotifyEvent()
        self.thread_group_mgr.start_with_lock(cnxt, stack, self.engine_id,
                                              stack.check,
                                              notify=stored_event)
        stored_event.wait()

    @context.request_context
    def stack_restore(self, cnxt, stack_identity, snapshot_id):
        s = self._get_stack(cnxt, stack_identity)
        stack = parser.Stack.load(cnxt, stack=s)
        self.resource_enforcer.enforce_stack(stack,
                                             is_registered_policy=True)
        snapshot = snapshot_object.Snapshot.get_snapshot_by_stack(
            cnxt, snapshot_id, s)
        # FIXME(pas-ha) has to be amended to deny restoring stacks
        # that have resources disallowed for the current user
        if stack.convergence:
            new_stack, tmpl = stack.restore_data(snapshot)
            stack.thread_group_mgr = self.thread_group_mgr
            stack.converge_stack(template=tmpl, action=stack.RESTORE,
                                 new_stack=new_stack)
        else:
            stored_event = NotifyEvent()
            self.thread_group_mgr.start_with_lock(
                cnxt, stack, self.engine_id,
                stack.restore, snapshot,
                notify=stored_event)
            stored_event.wait()

    @context.request_context
    def stack_list_snapshots(self, cnxt, stack_identity):
        s = self._get_stack(cnxt, stack_identity)
        data = snapshot_object.Snapshot.get_all(cnxt, s.id)
        return [api.format_snapshot(snapshot) for snapshot in data]

    @context.request_context
    def show_software_config(self, cnxt, config_id):
        return self.software_config.show_software_config(cnxt, config_id)

    @context.request_context
    def list_software_configs(self, cnxt, limit=None, marker=None,
                              tenant_safe=True):
        if not tenant_safe:
            cnxt = context.get_admin_context()
        return self.software_config.list_software_configs(
            cnxt, limit=limit, marker=marker)

    @context.request_context
    def create_software_config(self, cnxt, group, name, config,
                               inputs, outputs, options):
        return self.software_config.create_software_config(
            cnxt,
            group=group,
            name=name,
            config=config,
            inputs=inputs,
            outputs=outputs,
            options=options)

    @context.request_context
    def delete_software_config(self, cnxt, config_id):
        return self.software_config.delete_software_config(cnxt, config_id)

    @context.request_context
    def list_software_deployments(self, cnxt, server_id):
        return self.software_config.list_software_deployments(
            cnxt, server_id)

    @context.request_context
    def metadata_software_deployments(self, cnxt, server_id):
        return self.software_config.metadata_software_deployments(
            cnxt, server_id)

    @context.request_context
    def show_software_deployment(self, cnxt, deployment_id):
        return self.software_config.show_software_deployment(
            cnxt, deployment_id)

    @context.request_context
    def check_software_deployment(self, cnxt, deployment_id, timeout):
        return self.software_config.check_software_deployment(
            cnxt, deployment_id, timeout)

    @context.request_context
    def create_software_deployment(self, cnxt, server_id, config_id,
                                   input_values, action, status,
                                   status_reason, stack_user_project_id,
                                   deployment_id=None):
        return self.software_config.create_software_deployment(
            cnxt, server_id=server_id,
            config_id=config_id,
            deployment_id=deployment_id,
            input_values=input_values,
            action=action,
            status=status,
            status_reason=status_reason,
            stack_user_project_id=stack_user_project_id)

    @context.request_context
    def signal_software_deployment(self, cnxt, deployment_id, details,
                                   updated_at):
        return self.software_config.signal_software_deployment(
            cnxt,
            deployment_id=deployment_id,
            details=details,
            updated_at=updated_at)

    @context.request_context
    def update_software_deployment(self, cnxt, deployment_id, config_id,
                                   input_values, output_values, action,
                                   status, status_reason,
                                   updated_at):
        return self.software_config.update_software_deployment(
            cnxt,
            deployment_id=deployment_id,
            config_id=config_id,
            input_values=input_values,
            output_values=output_values,
            action=action,
            status=status,
            status_reason=status_reason,
            updated_at=updated_at)

    @context.request_context
    def delete_software_deployment(self, cnxt, deployment_id):
        return self.software_config.delete_software_deployment(
            cnxt, deployment_id)

    @context.request_context
    def list_services(self, cnxt):
        result = [service_utils.format_service(srv)
                  for srv in service_objects.Service.get_all(cnxt)]
        return result

    @context.request_context
    def migrate_convergence_1(self, ctxt, stack_id):
        parent_stack = parser.Stack.load(ctxt,
                                         stack_id=stack_id,
                                         show_deleted=False)
        if parent_stack.owner_id is not None:
            msg = _("Migration of nested stack %s") % stack_id
            raise exception.NotSupported(feature=msg)
        if parent_stack.status != parent_stack.COMPLETE:
            raise exception.ActionNotComplete(stack_name=parent_stack.name,
                                              action=parent_stack.action)
        if parent_stack.convergence:
            LOG.info("Convergence was already enabled for stack %s",
                     stack_id)
            return
        db_stacks = stack_object.Stack.get_all_by_root_owner_id(
            ctxt, parent_stack.id)
        stacks = [parser.Stack.load(ctxt, stack_id=st.id,
                                    stack=st) for st in db_stacks]
        # check if any of the nested stacks is in IN_PROGRESS/FAILED state
        for stack in stacks:
            if stack.status != stack.COMPLETE:
                raise exception.ActionNotComplete(stack_name=stack.name,
                                                  action=stack.action)
        stacks.append(parent_stack)
        locks = []
        try:
            for st in stacks:
                lock = stack_lock.StackLock(ctxt, st.id, self.engine_id)
                lock.acquire()
                locks.append(lock)
            sess = ctxt.session
            sess.begin(subtransactions=True)
            try:
                for st in stacks:
                    if not st.convergence:
                        st.migrate_to_convergence()
                sess.commit()
            except Exception:
                sess.rollback()
                raise
        finally:
            for lock in locks:
                lock.release()

    def service_manage_report(self):
        cnxt = context.get_admin_context()

        if self.service_id is None:
            service_ref = service_objects.Service.create(
                cnxt,
                dict(host=self.host,
                     hostname=self.hostname,
                     binary=self.binary,
                     engine_id=self.engine_id,
                     topic=self.topic,
                     report_interval=cfg.CONF.periodic_interval)
            )
            self.service_id = service_ref['id']
            LOG.debug('Service %s is started', self.service_id)

        try:
            service_objects.Service.update_by_id(
                cnxt,
                self.service_id,
                dict(deleted_at=None))
            LOG.debug('Service %s is updated', self.service_id)
        except Exception as ex:
            LOG.error('Service %(service_id)s update '
                      'failed: %(error)s',
                      {'service_id': self.service_id, 'error': ex})

    def service_manage_cleanup(self):
        cnxt = context.get_admin_context()
        last_updated_window = (3 * cfg.CONF.periodic_interval)
        time_line = timeutils.utcnow() - datetime.timedelta(
            seconds=last_updated_window)

        service_refs = service_objects.Service.get_all_by_args(
            cnxt, self.host, self.binary, self.hostname)
        for service_ref in service_refs:
            if (service_ref['id'] == self.service_id or
                    service_ref['deleted_at'] is not None or
                    service_ref['updated_at'] is None):
                continue
            if service_ref['updated_at'] < time_line:
                # hasn't been updated, assuming it's died.
                LOG.debug('Service %s was aborted', service_ref['id'])
                service_objects.Service.delete(cnxt, service_ref['id'])

    def reset_stack_status(self):
        filters = {
            'status': parser.Stack.IN_PROGRESS,
            'convergence': False
        }
        stacks = stack_object.Stack.get_all(context.get_admin_context(),
                                            filters=filters,
                                            show_nested=True)
        for s in stacks:
            # Build one context per stack, so that it can safely be passed
            # to the thread.
            cnxt = context.get_admin_context()
            stack_id = s.id
            lock = stack_lock.StackLock(cnxt, stack_id, self.engine_id)
            engine_id = lock.get_engine_id()
            try:
                with lock.thread_lock(retry=False):
                    # refetch stack and confirm it is still IN_PROGRESS
                    s = stack_object.Stack.get_by_id(
                        cnxt,
                        stack_id)
                    if s.status != parser.Stack.IN_PROGRESS:
                        lock.release()
                        continue

                    stk = parser.Stack.load(cnxt, stack=s)
                    LOG.info('Engine %(engine)s went down when stack '
                             '%(stack_id)s was in action %(action)s',
                             {'engine': engine_id,
                              'action': stk.action,
                              'stack_id': stk.id})

                    reason = _('Engine went down during stack %s') % stk.action
                    # Set stack and resources status to FAILED in sub thread
                    self.thread_group_mgr.start_with_acquired_lock(
                        stk, lock,
                        stk.reset_stack_and_resources_in_progress,
                        reason
                    )
            except exception.ActionInProgress:
                continue
            except Exception:
                LOG.exception('Error while resetting stack: %s', stack_id)
                continue
heat-10.0.2/heat/engine/parameters.py0000666000175000017500000005114013343562337017454 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc
import collections
import itertools

from oslo_serialization import jsonutils
from oslo_utils import encodeutils
from oslo_utils import strutils
import six

from heat.common import exception
from heat.common.i18n import _
from heat.common import param_utils
from heat.engine import constraints as constr


PARAMETER_KEYS = (
    TYPE, DEFAULT, NO_ECHO, ALLOWED_VALUES, ALLOWED_PATTERN,
    MAX_LENGTH, MIN_LENGTH, MAX_VALUE, MIN_VALUE,
    DESCRIPTION, CONSTRAINT_DESCRIPTION, LABEL
) = (
    'Type', 'Default', 'NoEcho', 'AllowedValues', 'AllowedPattern',
    'MaxLength', 'MinLength', 'MaxValue', 'MinValue',
    'Description', 'ConstraintDescription', 'Label'
)


class Schema(constr.Schema):
    """Parameter schema."""

    KEYS = (
        TYPE, DESCRIPTION, DEFAULT, SCHEMA, CONSTRAINTS, HIDDEN, LABEL,
        IMMUTABLE, TAGS,
    ) = (
        'Type', 'Description', 'Default', 'Schema', 'Constraints', 'NoEcho',
        'Label', 'Immutable', 'Tags',
    )

    PARAMETER_KEYS = PARAMETER_KEYS

    # For Parameters the type name for Schema.LIST is CommaDelimitedList
    # and the type name for Schema.MAP is Json
    TYPES = (
        STRING, NUMBER, LIST, MAP, BOOLEAN,
    ) = (
        'String', 'Number', 'CommaDelimitedList', 'Json', 'Boolean',
    )

    def __init__(self, data_type, description=None, default=None,
                 schema=None, constraints=None, hidden=False, label=None,
                 immutable=False, tags=None):
        super(Schema, self).__init__(data_type=data_type,
                                     description=description,
                                     default=default,
                                     schema=schema,
                                     required=default is None,
                                     constraints=constraints,
                                     label=label,
                                     immutable=immutable)
        self.hidden = hidden
        self.tags = tags

    # Schema class validates default value for lists assuming list type. For
    # comma delimited list string supported in parameters Schema class, the
    # default value has to be parsed into a list if necessary so that
    # validation works.
    def _validate_default(self, context):
        if self.default is not None:
            default_value = self.default
            if self.type == self.LIST and not isinstance(self.default, list):
                try:
                    default_value = self.default.split(',')
                except (KeyError, AttributeError) as err:
                    raise exception.InvalidSchemaError(
                        message=_('Default must be a comma-delimited list '
                                  'string: %s') % err)
            elif self.type == self.LIST and isinstance(self.default, list):
                default_value = [(six.text_type(x)) for x in self.default]
            try:
                self.validate_constraints(default_value, context,
                                          [constr.CustomConstraint])
            except (ValueError, TypeError,
                    exception.StackValidationFailed) as exc:
                raise exception.InvalidSchemaError(
                    message=_('Invalid default %(default)s (%(exc)s)') %
                    dict(default=self.default, exc=exc))

    def set_default(self, default=None):
        super(Schema, self).set_default(default)
        self.required = default is None

    @staticmethod
    def get_num(key, context):
        val = context.get(key)
        if val is not None:
            val = Schema.str_to_num(val)
        return val

    @staticmethod
    def _check_dict(schema_dict, allowed_keys, entity):
        if not isinstance(schema_dict, dict):
            raise exception.InvalidSchemaError(
                message=_("Invalid %s, expected a mapping") % entity)
        for key in schema_dict:
            if key not in allowed_keys:
                raise exception.InvalidSchemaError(
                    message=_("Invalid key '%(key)s' for %(entity)s") % {
                        "key": key, "entity": entity})

    @classmethod
    def _validate_dict(cls, param_name, schema_dict):
        cls._check_dict(schema_dict, cls.PARAMETER_KEYS,
                        "parameter (%s)" % param_name)

        if cls.TYPE not in schema_dict:
            raise exception.InvalidSchemaError(
                message=_("Missing parameter type for parameter: %s") %
                param_name)

        if not isinstance(schema_dict.get(cls.TAGS, []), list):
            raise exception.InvalidSchemaError(
                message=_("Tags property should be a list for parameter: %s")
                % param_name)

    @classmethod
    def from_dict(cls, param_name, schema_dict):
        """Return a Parameter Schema object from a legacy schema dictionary.

        :param param_name: name of the parameter owning the schema; used
                           for more verbose logging
        :type param_name: str
        """
        cls._validate_dict(param_name, schema_dict)

        def constraints():
            desc = schema_dict.get(CONSTRAINT_DESCRIPTION)

            if MIN_VALUE in schema_dict or MAX_VALUE in schema_dict:
                yield constr.Range(Schema.get_num(MIN_VALUE, schema_dict),
                                   Schema.get_num(MAX_VALUE, schema_dict),
                                   desc)
            if MIN_LENGTH in schema_dict or MAX_LENGTH in schema_dict:
                yield constr.Length(Schema.get_num(MIN_LENGTH, schema_dict),
                                    Schema.get_num(MAX_LENGTH, schema_dict),
                                    desc)
            if ALLOWED_VALUES in schema_dict:
                yield constr.AllowedValues(schema_dict[ALLOWED_VALUES], desc)
            if ALLOWED_PATTERN in schema_dict:
                yield constr.AllowedPattern(schema_dict[ALLOWED_PATTERN],
                                            desc)

        # make update_allowed true by default on TemplateResources
        # as the template should deal with this.
        return cls(schema_dict[TYPE],
                   description=schema_dict.get(DESCRIPTION),
                   default=schema_dict.get(DEFAULT),
                   constraints=list(constraints()),
                   hidden=str(schema_dict.get(NO_ECHO,
                                              'false')).lower() == 'true',
                   label=schema_dict.get(LABEL))

    def validate_value(self, value, context=None):
        super(Schema, self).validate_constraints(value, context)

    def __getitem__(self, key):
        if key == self.TYPE:
            return self.type
        if key == self.HIDDEN:
            return self.hidden
        else:
            return super(Schema, self).__getitem__(key)


@six.python_2_unicode_compatible
class Parameter(object):
    """A template parameter."""

    def __new__(cls, name, schema, value=None):
        """Create a new Parameter of the appropriate type."""
        if cls is not Parameter:
            return super(Parameter, cls).__new__(cls)

        # Check for fully-fledged Schema objects
        if not isinstance(schema, Schema):
            schema = Schema.from_dict(name, schema)

        if schema.type == schema.STRING:
            ParamClass = StringParam
        elif schema.type == schema.NUMBER:
            ParamClass = NumberParam
        elif schema.type == schema.LIST:
            ParamClass = CommaDelimitedListParam
        elif schema.type == schema.MAP:
            ParamClass = JsonParam
        elif schema.type == schema.BOOLEAN:
            ParamClass = BooleanParam
        else:
            raise ValueError(_('Invalid Parameter type "%s"') % schema.type)

        return super(Parameter, cls).__new__(ParamClass)

    __slots__ = ('name', 'schema', 'user_value', 'user_default')

    def __init__(self, name, schema, value=None):
        """Initialise the parameter.

        Initialise the Parameter with a name, schema and optional
        user-supplied value.
        """
        self.name = name
        self.schema = schema
        self.user_value = value
        self.user_default = None

    def validate(self, validate_value=True, context=None):
        """Validates the parameter.

        This method validates if the parameter's schema is valid,
        and if the default value - if present - or the user-provided
        value for the parameter comply with the schema.
        """
        err_msg = _("Parameter '%(name)s' is invalid: %(exp)s")

        try:
            self.schema.validate(context)

            if not validate_value:
                return

            if self.user_value is not None:
                self._validate(self.user_value, context)
            elif self.has_default():
                self._validate(self.default(), context)
            else:
                raise exception.UserParameterMissing(key=self.name)
        except exception.StackValidationFailed as ex:
            msg = err_msg % dict(name=self.name, exp=six.text_type(ex))
            raise exception.StackValidationFailed(message=msg)
        except exception.InvalidSchemaError as ex:
            msg = err_msg % dict(name=self.name, exp=six.text_type(ex))
            raise exception.InvalidSchemaError(message=msg)

    def value(self):
        """Get the parameter value, optionally sanitising it for output."""
        if self.user_value is not None:
            return self.user_value

        if self.has_default():
            return self.default()

        raise exception.UserParameterMissing(key=self.name)

    def has_value(self):
        """Parameter has a user or default value."""
        return self.user_value is not None or self.has_default()

    def hidden(self):
        """Return whether the parameter is hidden.

        Hidden parameters should be sanitised in any output to the user.
""" return self.schema.hidden def description(self): """Return the description of the parameter.""" return self.schema.description or '' def label(self): """Return the label or param name.""" return self.schema.label or self.name def tags(self): """Return the tags associated with the parameter""" return self.schema.tags or [] def has_default(self): """Return whether the parameter has a default value.""" return (self.schema.default is not None or self.user_default is not None) def default(self): """Return the default value of the parameter.""" if self.user_default is not None: return self.user_default return self.schema.default def set_default(self, value): self.user_default = value @classmethod def _value_as_text(cls, value): return six.text_type(value) def __str__(self): """Return a string representation of the parameter.""" value = self.value() if self.hidden(): return six.text_type('******') else: return self._value_as_text(value) class NumberParam(Parameter): """A template parameter of type "Number".""" __slots__ = tuple() def __int__(self): """Return an integer representation of the parameter.""" return int(super(NumberParam, self).value()) def __float__(self): """Return a float representation of the parameter.""" return float(super(NumberParam, self).value()) def _validate(self, val, context): try: Schema.str_to_num(val) except (ValueError, TypeError) as ex: raise exception.StackValidationFailed(message=six.text_type(ex)) self.schema.validate_value(val, context) def value(self): return Schema.str_to_num(super(NumberParam, self).value()) class BooleanParam(Parameter): """A template parameter of type "Boolean".""" __slots__ = tuple() def _validate(self, val, context): try: strutils.bool_from_string(val, strict=True) except ValueError as ex: raise exception.StackValidationFailed(message=six.text_type(ex)) self.schema.validate_value(val, context) def value(self): if self.user_value is not None: raw_value = self.user_value else: raw_value = self.default() return strutils.bool_from_string(str(raw_value), strict=True) class StringParam(Parameter): """A template parameter of type "String".""" __slots__ = tuple() def _validate(self, val, context): self.schema.validate_value(val, context=context) def value(self): return self.schema.to_schema_type(super(StringParam, self).value()) class ParsedParameter(Parameter): """A template parameter with cached parsed value.""" __slots__ = ('parsed',) def __init__(self, name, schema, value=None): super(ParsedParameter, self).__init__(name, schema, value) self._update_parsed() def set_default(self, value): super(ParsedParameter, self).set_default(value) self._update_parsed() def _update_parsed(self): if self.has_value(): if self.user_value is not None: self.parsed = self.parse(self.user_value) else: self.parsed = self.parse(self.default()) class CommaDelimitedListParam(ParsedParameter, collections.Sequence): """A template parameter of type "CommaDelimitedList".""" __slots__ = ('parsed',) def __init__(self, name, schema, value=None): self.parsed = [] super(CommaDelimitedListParam, self).__init__(name, schema, value) def parse(self, value): # only parse when value is not already a list if isinstance(value, list): return [(six.text_type(x)) for x in value] try: return param_utils.delim_string_to_list(value) except (KeyError, AttributeError) as err: message = _('Value must be a comma-delimited list string: %s') raise ValueError(message % six.text_type(err)) return value def value(self): if self.has_value(): return self.parsed raise 
exception.UserParameterMissing(key=self.name) def __len__(self): """Return the length of the list.""" return len(self.parsed) def __getitem__(self, index): """Return an item from the list.""" return self.parsed[index] @classmethod def _value_as_text(cls, value): return ",".join(value) def _validate(self, val, context): try: parsed = self.parse(val) except ValueError as ex: raise exception.StackValidationFailed(message=six.text_type(ex)) self.schema.validate_value(parsed, context) class JsonParam(ParsedParameter): """A template parameter who's value is map or list.""" __slots__ = ('parsed',) def __init__(self, name, schema, value=None): self.parsed = {} super(JsonParam, self).__init__(name, schema, value) def parse(self, value): try: val = value if not isinstance(val, six.string_types): # turn off oslo_serialization's clever to_primitive() val = jsonutils.dumps(val, default=None) if val: return jsonutils.loads(val) except (ValueError, TypeError) as err: message = _('Value must be valid JSON: %s') % err raise ValueError(message) return value def value(self): if self.has_value(): return self.parsed raise exception.UserParameterMissing(key=self.name) def __getitem__(self, key): return self.parsed[key] def __iter__(self): return iter(self.parsed) def __len__(self): return len(self.parsed) @classmethod def _value_as_text(cls, value): return encodeutils.safe_decode(jsonutils.dumps(value)) def _validate(self, val, context): try: parsed = self.parse(val) except ValueError as ex: raise exception.StackValidationFailed(message=six.text_type(ex)) self.schema.validate_value(parsed, context) @six.add_metaclass(abc.ABCMeta) class Parameters(collections.Mapping): """Parameters of a stack. The parameters of a stack, with type checking, defaults, etc. specified by the stack's template. """ def __init__(self, stack_identifier, tmpl, user_params=None, param_defaults=None): """Initialisation of the parameter. Create the parameter container for a stack from the stack name and template, optionally setting the user-supplied parameter values. """ user_params = user_params or {} param_defaults = param_defaults or {} def user_parameter(schema_item): name, schema = schema_item return Parameter(name, schema, user_params.get(name)) self.tmpl = tmpl self.user_params = user_params schemata = self.tmpl.param_schemata() user_parameters = (user_parameter(si) for si in six.iteritems(schemata)) pseudo_parameters = self._pseudo_parameters(stack_identifier) self.params = dict((p.name, p) for p in itertools.chain(pseudo_parameters, user_parameters)) self.non_pseudo_param_keys = [p for p in self.params if p not in self.PSEUDO_PARAMETERS] for pd_name, param_default in param_defaults.items(): if pd_name in self.params: self.params[pd_name].set_default(param_default) def validate(self, validate_value=True, context=None): """Validates all parameters. This method validates if all user-provided parameters are actually defined in the template, and if all parameters are valid. 
""" self._validate_user_parameters() for param in six.itervalues(self.params): param.validate(validate_value, context) def __contains__(self, key): """Return whether the specified parameter exists.""" return key in self.params def __iter__(self): """Return an iterator over the parameter names.""" return iter(self.params) def __len__(self): """Return the number of parameters defined.""" return len(self.params) def __getitem__(self, key): """Get a parameter value.""" return self.params[key].value() def map(self, func, filter_func=lambda p: True): """Map the supplied function onto each Parameter. Map the supplied function onto each Parameter (with an optional filter function) and return the resulting dictionary. """ return dict((n, func(p)) for n, p in six.iteritems(self.params) if filter_func(p)) def set_stack_id(self, stack_identifier): """Set the StackId pseudo parameter value.""" if stack_identifier is not None: self.params[self.PARAM_STACK_ID].schema.set_default( stack_identifier.arn()) return True return False def _validate_user_parameters(self): schemata = self.tmpl.param_schemata() for param in self.user_params: if param not in schemata: raise exception.UnknownUserParameter(key=param) @abc.abstractmethod def _pseudo_parameters(self, stack_identifier): pass def immutable_params_modified(self, new_parameters, input_params): # A parameter must have been present in the old stack for its # immutability to be enforced common_params = list(set(new_parameters.non_pseudo_param_keys) & set(self.non_pseudo_param_keys)) invalid_params = [] for param in common_params: old_value = self.params[param] if param in input_params: new_value = input_params[param] else: new_value = new_parameters[param] immutable = new_parameters.params[param].schema.immutable if immutable and old_value.value() != new_value: invalid_params.append(param) if invalid_params: return invalid_params heat-10.0.2/heat/engine/parameter_groups.py0000666000175000017500000000756013343562337020677 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from heat.common import exception from heat.common.i18n import _ LOG = logging.getLogger(__name__) PARAMETER_GROUPS = 'parameter_groups' PARAMETERS = 'parameters' class ParameterGroups(object): """The ParameterGroups specified by the stack's template.""" def __init__(self, tmpl): self.tmpl = tmpl self.parameters = tmpl.parameters(None, {}, param_defaults={}) self.parameter_names = [] if self.parameters: self.parameter_names = [param for param in self.parameters] self.parameter_groups = tmpl.get(PARAMETER_GROUPS) def validate(self): """Validate the parameter group. Validate that each parameter belongs to only one Parameter Group and that each parameter name in the group references a valid parameter. 
""" LOG.debug('Validating Parameter Groups: %s', ', '.join(self.parameter_names)) if self.parameter_groups: if not isinstance(self.parameter_groups, list): raise exception.StackValidationFailed( error=_('Parameter Groups error'), path=[PARAMETER_GROUPS], message=_('The %s should be a list.') % PARAMETER_GROUPS) # Loop through groups and validate parameters grouped_parameters = [] for group in self.parameter_groups: parameters = group.get(PARAMETERS) if parameters is None: raise exception.StackValidationFailed( error=_('Parameter Groups error'), path=[PARAMETER_GROUPS, group.get('label', '')], message=_('The %s must be provided for ' 'each parameter group.') % PARAMETERS) if not isinstance(parameters, list): raise exception.StackValidationFailed( error=_('Parameter Groups error'), path=[PARAMETER_GROUPS, group.get('label', '')], message=_('The %s of parameter group ' 'should be a list.') % PARAMETERS) for param in parameters: # Check if param has been added to a previous group if param in grouped_parameters: raise exception.StackValidationFailed( error=_('Parameter Groups error'), path=[PARAMETER_GROUPS, group.get('label', '')], message=_( 'The %s parameter must be assigned to one ' 'parameter group only.') % param) else: grouped_parameters.append(param) # Check that grouped parameter references a valid Parameter if param not in self.parameter_names: raise exception.StackValidationFailed( error=_('Parameter Groups error'), path=[PARAMETER_GROUPS, group.get('label', '')], message=_( 'The grouped parameter %s does not reference ' 'a valid parameter.') % param) heat-10.0.2/heat/engine/node_data.py0000666000175000017500000001045413343562337017232 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six class NodeData(object): """Data about a node in the graph, to be passed along to other nodes.""" __slots__ = ('primary_key', 'name', 'uuid', '_reference_id', '_attributes', 'action', 'status') def __init__(self, primary_key, resource_name, uuid, reference_id, attributes, action, status): """Initialise with data about the resource processed by the node. :param primary_key: the ID of the resource in the database :param name: the logical resource name :param uuid: the UUID of the resource :param reference_id: the value to be returned by get_resource :param attributes: dict of attributes values to be returned by get_attr :param action: the last resource action :param status: the status of the last action """ self.primary_key = primary_key self.name = resource_name self.uuid = uuid self._reference_id = reference_id self._attributes = attributes self.action = action self.status = status def reference_id(self): """Return the reference ID of the resource. i.e. the result that the {get_resource: } intrinsic function should return for this resource. 
""" return self._reference_id def attributes(self): """Return a dict of all available top-level attribute values.""" attrs = {k: v for k, v in self._attributes.items() if isinstance(k, six.string_types)} for v in six.itervalues(attrs): if isinstance(v, Exception): raise v return attrs def attribute(self, attr_name): """Return the specified attribute value.""" val = self._attributes[attr_name] if isinstance(val, Exception): raise val return val def attribute_names(self): """Iterate over valid top-level attribute names.""" for key in self._attributes: if isinstance(key, six.string_types): yield key else: yield key[0] def as_dict(self): """Return a dict representation of the data. This is the format that is serialised and stored in the database's SyncPoints. """ for v in six.itervalues(self._attributes): if isinstance(v, Exception): raise v return { 'id': self.primary_key, 'name': self.name, 'reference_id': self.reference_id(), 'attrs': dict(self._attributes), 'status': self.status, 'action': self.action, 'uuid': self.uuid, } @classmethod def from_dict(cls, node_data): """Create a new NodeData object from deserialised data. This reads the format that is stored in the database, and is the inverse of as_dict(). """ if isinstance(node_data, cls): return node_data return cls(node_data.get('id'), node_data.get('name'), node_data.get('uuid'), node_data.get('reference_id'), node_data.get('attrs', {}), node_data.get('action'), node_data.get('status')) def load_resources_data(data): """Return the data for all of the resources that meet at a SyncPoint. The input is the input_data dict from a SyncPoint received over RPC. The keys (which are ignored) are resource primary keys. The output is a dict of NodeData objects with the resource names as the keys. """ nodes = (NodeData.from_dict(nd) for nd in data.values() if nd is not None) return {node.name: node for node in nodes} heat-10.0.2/heat/engine/event.py0000666000175000017500000000722013343562337016432 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import identifier from heat.objects import event as event_object class Event(object): """Class representing a Resource state change.""" def __init__(self, context, stack, action, status, reason, physical_resource_id, resource_prop_data_id, resource_properties, resource_name, resource_type, uuid=None, timestamp=None, id=None): """Initialise from a context, stack, and event information. The timestamp and database ID may also be initialised if the event is already in the database. 
""" self.context = context self._stack_identifier = stack.identifier() self.action = action self.status = status self.reason = reason self.physical_resource_id = physical_resource_id self.resource_name = resource_name self.resource_type = resource_type self.rsrc_prop_data_id = resource_prop_data_id self.resource_properties = resource_properties if self.resource_properties is None: self.resource_properties = {} self.uuid = uuid self.timestamp = timestamp self.id = id def store(self): """Store the Event in the database.""" ev = { 'resource_name': self.resource_name, 'physical_resource_id': self.physical_resource_id, 'stack_id': self._stack_identifier.stack_id, 'resource_action': self.action, 'resource_status': self.status, 'resource_status_reason': self.reason, 'resource_type': self.resource_type, } if self.uuid is not None: ev['uuid'] = self.uuid if self.timestamp is not None: ev['created_at'] = self.timestamp if self.rsrc_prop_data_id is not None: ev['rsrc_prop_data_id'] = self.rsrc_prop_data_id new_ev = event_object.Event.create(self.context, ev) self.id = new_ev.id self.timestamp = new_ev.created_at self.uuid = new_ev.uuid return self.id def identifier(self): """Return a unique identifier for the event.""" if self.uuid is None: return None res_id = identifier.ResourceIdentifier( resource_name=self.resource_name, **self._stack_identifier) return identifier.EventIdentifier(event_id=str(self.uuid), **res_id) def as_dict(self): return { 'timestamp': self.timestamp.isoformat(), 'version': '0.1', 'type': 'os.heat.event', 'id': self.uuid, 'payload': { 'resource_name': self.resource_name, 'physical_resource_id': self.physical_resource_id, 'stack_id': self._stack_identifier.stack_id, 'resource_action': self.action, 'resource_status': self.status, 'resource_status_reason': self.reason, 'resource_type': self.resource_type, 'resource_properties': self.resource_properties, 'version': '0.1' } } heat-10.0.2/heat/engine/resources/0000775000175000017500000000000013343562672016750 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/signal_responder.py0000666000175000017500000003615713343562340022666 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from keystoneclient.contrib.ec2 import utils as ec2_utils from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils from six.moves.urllib import parse as urlparse from heat.common import exception from heat.common.i18n import _ from heat.common import password_gen from heat.engine.clients.os import swift from heat.engine.resources import stack_user LOG = logging.getLogger(__name__) SIGNAL_TYPES = ( WAITCONDITION, SIGNAL ) = ( '/waitcondition', '/signal' ) SIGNAL_VERB = {WAITCONDITION: 'PUT', SIGNAL: 'POST'} class SignalResponder(stack_user.StackUser): PROPERTIES = ( SIGNAL_TRANSPORT, ) = ( 'signal_transport', ) ATTRIBUTES = ( SIGNAL_ATTR, ) = ( 'signal', ) # Anything which subclasses this may trigger authenticated # API operations as a consequence of handling a signal requires_deferred_auth = True def handle_delete(self): self._delete_signals() return super(SignalResponder, self).handle_delete() def _delete_signals(self): self._delete_ec2_signed_url() self._delete_heat_signal_url() self._delete_swift_signal_url() self._delete_zaqar_signal_queue() @property def password(self): return self.data().get('password') @password.setter def password(self, password): if password is None: self.data_delete('password') else: self.data_set('password', password, True) def _signal_transport_cfn(self): return self.properties[ self.SIGNAL_TRANSPORT] == self.CFN_SIGNAL def _signal_transport_heat(self): return self.properties[ self.SIGNAL_TRANSPORT] == self.HEAT_SIGNAL def _signal_transport_none(self): return self.properties[ self.SIGNAL_TRANSPORT] == self.NO_SIGNAL def _signal_transport_temp_url(self): return self.properties[ self.SIGNAL_TRANSPORT] == self.TEMP_URL_SIGNAL def _signal_transport_zaqar(self): return self.properties.get( self.SIGNAL_TRANSPORT) == self.ZAQAR_SIGNAL def _get_heat_signal_credentials(self): """Return OpenStack credentials that can be used to send a signal. These credentials are for the user associated with this resource in the heat stack user domain. """ if self._get_user_id() is None: if self.password is None: self.password = password_gen.generate_openstack_password() self._create_user() return {'auth_url': self.keystone().v3_endpoint, 'username': self.physical_resource_name(), 'user_id': self._get_user_id(), 'password': self.password, 'project_id': self.stack.stack_user_project_id, 'domain_id': self.keystone().stack_domain_id, 'region_name': (self.context.region_name or cfg.CONF.region_name_for_services)} def _get_ec2_signed_url(self, signal_type=SIGNAL): """Create properly formatted and pre-signed URL. This uses the created user for the credentials. See boto/auth.py::QuerySignatureV2AuthHandler :param signal_type: either WAITCONDITION or SIGNAL. 
""" stored = self.data().get('ec2_signed_url') if stored is not None: return stored access_key = self.data().get('access_key') secret_key = self.data().get('secret_key') if not access_key or not secret_key: if self.id is None: # it is too early return if self._get_user_id() is None: self._create_user() self._create_keypair() access_key = self.data().get('access_key') secret_key = self.data().get('secret_key') if not access_key or not secret_key: LOG.warning('Cannot generate signed url, ' 'unable to create keypair') return config_url = cfg.CONF.heat_waitcondition_server_url if config_url: signal_url = config_url.replace('/waitcondition', signal_type) else: heat_client_plugin = self.stack.clients.client_plugin('heat') endpoint = heat_client_plugin.get_heat_cfn_url() signal_url = ''.join([endpoint, signal_type]) host_url = urlparse.urlparse(signal_url) path = self.identifier().arn_url_path() # Note the WSGI spec apparently means that the webob request we end up # processing in the CFN API (ec2token.py) has an unquoted path, so we # need to calculate the signature with the path component unquoted, but # ensure the actual URL contains the quoted version... unquoted_path = urlparse.unquote(host_url.path + path) request = {'host': host_url.netloc.lower(), 'verb': SIGNAL_VERB[signal_type], 'path': unquoted_path, 'params': {'SignatureMethod': 'HmacSHA256', 'SignatureVersion': '2', 'AWSAccessKeyId': access_key, 'Timestamp': self.created_time.strftime("%Y-%m-%dT%H:%M:%SZ") }} # Sign the request signer = ec2_utils.Ec2Signer(secret_key) request['params']['Signature'] = signer.generate(request) qs = urlparse.urlencode(request['params']) url = "%s%s?%s" % (signal_url.lower(), path, qs) self.data_set('ec2_signed_url', url) return url def _delete_ec2_signed_url(self): self.data_delete('ec2_signed_url') self._delete_keypair() def _get_heat_signal_url(self, project_id=None): """Return a heat-api signal URL for this resource. This URL is not pre-signed, valid user credentials are required. If a project_id is provided, it is used in place of the original project_id. This is useful to generate a signal URL that uses the heat stack user project instead of the user's. """ stored = self.data().get('heat_signal_url') if stored is not None: return stored if self.id is None: # it is too early return url = self.client_plugin('heat').get_heat_url() path = self.identifier().url_path() if project_id is not None: path = project_id + path[path.find('/'):] url = urlparse.urljoin(url, '%s/signal' % path) self.data_set('heat_signal_url', url) return url def _delete_heat_signal_url(self): self.data_delete('heat_signal_url') def _get_swift_signal_url(self, multiple_signals=False): """Create properly formatted and pre-signed Swift signal URL. This uses a Swift pre-signed temp_url. If multiple_signals is requested, the Swift object referenced by the returned URL will have versioning enabled. 
""" put_url = self.data().get('swift_signal_url') if put_url: return put_url if self.id is None: # it is too early return container = self.stack.id object_name = self.physical_resource_name() self.client('swift').put_container(container) if multiple_signals: put_url = self.client_plugin('swift').get_signal_url(container, object_name) else: put_url = self.client_plugin('swift').get_temp_url(container, object_name) self.client('swift').put_object(container, object_name, '') self.data_set('swift_signal_url', put_url) self.data_set('swift_signal_object_name', object_name) return put_url def _delete_swift_signal_url(self): object_name = self.data().get('swift_signal_object_name') if not object_name: return with self.client_plugin('swift').ignore_not_found: container_name = self.stack.id swift = self.client('swift') # delete all versions of the object, in case there are some # signals that are waiting to be handled container = swift.get_container(container_name) filtered = [obj for obj in container[1] if object_name in obj['name']] for obj in filtered: # we delete the main object every time, swift takes # care of restoring the previous version after each delete swift.delete_object(container_name, object_name) headers = swift.head_container(container_name) if int(headers['x-container-object-count']) == 0: swift.delete_container(container_name) self.data_delete('swift_signal_object_name') self.data_delete('swift_signal_url') def _get_zaqar_signal_queue_id(self): """Return a zaqar queue_id for signaling this resource. This uses the created user for the credentials. """ queue_id = self.data().get('zaqar_signal_queue_id') if queue_id: return queue_id if self.id is None: # it is too early return if self._get_user_id() is None: if self.password is None: self.password = password_gen.generate_openstack_password() self._create_user() queue_id = self.physical_resource_name() zaqar_plugin = self.client_plugin('zaqar') zaqar = zaqar_plugin.create_for_tenant( self.stack.stack_user_project_id, self._user_token()) queue = zaqar.queue(queue_id) signed_url_data = queue.signed_url( ['messages'], methods=['GET', 'DELETE']) self.data_set('zaqar_queue_signed_url_data', jsonutils.dumps(signed_url_data)) self.data_set('zaqar_signal_queue_id', queue_id) return queue_id def _delete_zaqar_signal_queue(self): queue_id = self.data().get('zaqar_signal_queue_id') if not queue_id: return zaqar_plugin = self.client_plugin('zaqar') zaqar = zaqar_plugin.create_for_tenant( self.stack.stack_user_project_id, self._user_token()) with zaqar_plugin.ignore_not_found: zaqar.queue(queue_id).delete() self.data_delete('zaqar_signal_queue_id') def _get_signal(self, signal_type=SIGNAL, multiple_signals=False): """Return a dictionary with signal details. Subclasses can invoke this method to retrieve information of the resource signal for the specified transport. 
""" signal = None if self._signal_transport_cfn(): signal = {'alarm_url': self._get_ec2_signed_url( signal_type=signal_type)} elif self._signal_transport_heat(): signal = self._get_heat_signal_credentials() signal['alarm_url'] = self._get_heat_signal_url( project_id=self.stack.stack_user_project_id) elif self._signal_transport_temp_url(): signal = {'alarm_url': self._get_swift_signal_url( multiple_signals=multiple_signals)} elif self._signal_transport_zaqar(): signal = self._get_heat_signal_credentials() signal['queue_id'] = self._get_zaqar_signal_queue_id() elif self._signal_transport_none(): signal = {} return signal def _service_swift_signal(self): swift_client = self.client('swift') try: container = swift_client.get_container(self.stack.id) except Exception as exc: self.client_plugin('swift').ignore_not_found(exc) LOG.debug("Swift container %s was not found", self.stack.id) return index = container[1] if not index: # Swift objects were deleted by user LOG.debug("Swift objects in container %s were not found", self.stack.id) return # Remove objects that are for other resources, given that # multiple swift signals in the same stack share a container object_name = self.physical_resource_name() filtered = [obj for obj in index if object_name in obj['name']] # Fetch objects from Swift and filter results signal_names = [] for obj in filtered: try: signal = swift_client.get_object(self.stack.id, obj['name']) except Exception as exc: self.client_plugin('swift').ignore_not_found(exc) continue body = signal[1] if body == swift.IN_PROGRESS: # Ignore the initial object continue signal_names.append(obj['name']) if body == "": self.signal(details={}) continue try: self.signal(details=jsonutils.loads(body)) except ValueError: raise exception.Error(_("Failed to parse JSON data: %s") % body) # remove the signals that were consumed for signal_name in signal_names: if signal_name != object_name: swift_client.delete_object(self.stack.id, signal_name) if object_name in signal_names: swift_client.delete_object(self.stack.id, object_name) def _service_zaqar_signal(self): zaqar_plugin = self.client_plugin('zaqar') zaqar = zaqar_plugin.create_for_tenant( self.stack.stack_user_project_id, self._user_token()) try: queue = zaqar.queue(self._get_zaqar_signal_queue_id()) except Exception as ex: self.client_plugin('zaqar').ignore_not_found(ex) return messages = list(queue.pop()) for message in messages: self.signal(details=message.body) def _service_signal(self): """Service the signal, when necessary. This method must be called repeatedly by subclasses to update the state of the signals that require polling, which are the ones based on Swift temp URLs and Zaqar queues. The "NO_SIGNAL" case is also handled here by triggering the signal once per call. """ if self._signal_transport_temp_url(): self._service_swift_signal() elif self._signal_transport_zaqar(): self._service_zaqar_signal() elif self._signal_transport_none(): self.signal(details={}) heat-10.0.2/heat/engine/resources/server_base.py0000666000175000017500000003144713343562340021625 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import uuid from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils from heat.common import exception from heat.common import password_gen from heat.engine.clients import progress from heat.engine.resources import stack_user cfg.CONF.import_opt('max_server_name_length', 'heat.common.config') LOG = logging.getLogger(__name__) class BaseServer(stack_user.StackUser): """Base Server resource.""" physical_resource_name_limit = cfg.CONF.max_server_name_length entity = 'servers' def __init__(self, name, json_snippet, stack): super(BaseServer, self).__init__(name, json_snippet, stack) self.default_collectors = [] def _server_name(self): name = self.properties[self.NAME] if name: return name return self.physical_resource_name() def _container_and_object_name(self, props): deployment_swift_data = props.get( self.DEPLOYMENT_SWIFT_DATA, self.properties[self.DEPLOYMENT_SWIFT_DATA]) container_name = deployment_swift_data[self.CONTAINER] if container_name is None: container_name = self.physical_resource_name() object_name = deployment_swift_data[self.OBJECT] if object_name is None: object_name = self.data().get('metadata_object_name') if object_name is None: object_name = str(uuid.uuid4()) return container_name, object_name def _populate_deployments_metadata(self, meta, props): meta['deployments'] = meta.get('deployments', []) meta['os-collect-config'] = meta.get('os-collect-config', {}) occ = meta['os-collect-config'] collectors = list(self.default_collectors) occ['collectors'] = collectors region_name = (self.context.region_name or cfg.CONF.region_name_for_services) # set existing values to None to override any boot-time config occ_keys = ('heat', 'zaqar', 'cfn', 'request') for occ_key in occ_keys: if occ_key not in occ: continue existing = occ[occ_key] for k in existing: existing[k] = None queue_id = self.data().get('metadata_queue_id') if self.transport_poll_server_heat(props): occ.update({'heat': { 'user_id': self._get_user_id(), 'password': self.password, 'auth_url': self.context.auth_url, 'project_id': self.stack.stack_user_project_id, 'stack_id': self.stack.identifier().stack_path(), 'resource_name': self.name, 'region_name': region_name}}) collectors.append('heat') elif self.transport_zaqar_message(props): queue_id = queue_id or self.physical_resource_name() occ.update({'zaqar': { 'user_id': self._get_user_id(), 'password': self.password, 'auth_url': self.context.auth_url, 'project_id': self.stack.stack_user_project_id, 'queue_id': queue_id, 'region_name': region_name}}) collectors.append('zaqar') elif self.transport_poll_server_cfn(props): heat_client_plugin = self.stack.clients.client_plugin('heat') config_url = heat_client_plugin.get_cfn_metadata_server_url() occ.update({'cfn': { 'metadata_url': config_url, 'access_key_id': self.access_key, 'secret_access_key': self.secret_key, 'stack_name': self.stack.name, 'path': '%s.Metadata' % self.name}}) collectors.append('cfn') elif self.transport_poll_temp_url(props): container_name, object_name = self._container_and_object_name( props) self.client('swift').put_container(container_name) url = self.client_plugin('swift').get_temp_url( container_name, object_name, method='GET') put_url = self.client_plugin('swift').get_temp_url( container_name, object_name) self.data_set('metadata_put_url', put_url) self.data_set('metadata_object_name', object_name) collectors.append('request') occ.update({'request': 
{'metadata_url': url}}) collectors.append('local') self.metadata_set(meta) # push replacement polling config to any existing push-based sources if queue_id: zaqar_plugin = self.client_plugin('zaqar') zaqar = zaqar_plugin.create_for_tenant( self.stack.stack_user_project_id, self._user_token()) queue = zaqar.queue(queue_id) queue.post({'body': meta, 'ttl': zaqar_plugin.DEFAULT_TTL}) self.data_set('metadata_queue_id', queue_id) object_name = self.data().get('metadata_object_name') if object_name: container_name, object_name = self._container_and_object_name( props) self.client('swift').put_object( container_name, object_name, jsonutils.dumps(meta)) def _create_transport_credentials(self, props): if self.transport_poll_server_cfn(props): self._create_user() self._create_keypair() elif (self.transport_poll_server_heat(props) or self.transport_zaqar_message(props)): if self.password is None: self.password = password_gen.generate_openstack_password() self._create_user() self._register_access_key() @property def access_key(self): return self.data().get('access_key') @property def secret_key(self): return self.data().get('secret_key') @property def password(self): return self.data().get('password') @password.setter def password(self, password): if password is None: self.data_delete('password') else: self.data_set('password', password, True) def transport_poll_server_cfn(self, props): return props[ self.SOFTWARE_CONFIG_TRANSPORT] == self.POLL_SERVER_CFN def transport_poll_server_heat(self, props): return props[ self.SOFTWARE_CONFIG_TRANSPORT] == self.POLL_SERVER_HEAT def transport_poll_temp_url(self, props): return props[ self.SOFTWARE_CONFIG_TRANSPORT] == self.POLL_TEMP_URL def transport_zaqar_message(self, props): return props[ self.SOFTWARE_CONFIG_TRANSPORT] == self.ZAQAR_MESSAGE def check_create_complete(self, server_id): return True def _resolve_attribute(self, name): if self.resource_id is None: return if name == self.NAME_ATTR: return self._server_name() if name == self.OS_COLLECT_CONFIG: return self.metadata_get().get('os-collect-config', {}) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if tmpl_diff.metadata_changed(): # If SOFTWARE_CONFIG user_data_format is enabled we require # the "deployments" and "os-collect-config" keys for Deployment # polling. We can attempt to merge the occ data, but any # metadata update containing deployments will be discarded. 
new_md = json_snippet.metadata() if self.user_data_software_config(): metadata = self.metadata_get(True) or {} new_occ_md = new_md.get('os-collect-config', {}) occ_md = metadata.get('os-collect-config', {}) occ_md.update(new_occ_md) new_md['os-collect-config'] = occ_md deployment_md = metadata.get('deployments', []) new_md['deployments'] = deployment_md self.metadata_set(new_md) updaters = [] if self.SOFTWARE_CONFIG_TRANSPORT in prop_diff: self._update_software_config_transport(prop_diff) # NOTE(pas-ha) optimization is possible (starting first task # right away), but we'd rather not, as this method already might # have called several APIs return updaters def _update_software_config_transport(self, prop_diff): if not self.user_data_software_config(): return try: metadata = self.metadata_get(True) or {} self._create_transport_credentials(prop_diff) self._populate_deployments_metadata(metadata, prop_diff) # push new metadata to all sources by creating a dummy # deployment sc = self.rpc_client().create_software_config( self.context, 'ignored', 'ignored', '') sd = self.rpc_client().create_software_deployment( self.context, self.resource_id, sc['id']) self.rpc_client().delete_software_deployment( self.context, sd['id']) self.rpc_client().delete_software_config( self.context, sc['id']) except Exception: # Updating the software config transport is on a best-effort # basis as any raised exception here would result in the resource # going into an ERROR state, which will be replaced on the next # stack update. This is not desirable for a server. The old # transport will continue to work, and the new transport may work # despite exceptions in the above block. LOG.exception( 'Error while updating software config transport' ) def metadata_update(self, new_metadata=None): """Refresh the metadata if new_metadata is None.""" if new_metadata is None: # Re-resolve the template metadata and merge it with the # current resource metadata. This is necessary because the # attributes referenced in the template metadata may change # and the resource itself adds keys to the metadata which # are not specified in the template (e.g the deployments data) meta = self.metadata_get(refresh=True) or {} tmpl_meta = self.t.metadata() meta.update(tmpl_meta) self.metadata_set(meta) @staticmethod def _check_maximum(count, maximum, msg): """Check a count against a maximum. Unless maximum is -1 which indicates that there is no limit. 
""" if maximum != -1 and count > maximum: raise exception.StackValidationFailed(message=msg) def _delete_temp_url(self): object_name = self.data().get('metadata_object_name') if not object_name: return with self.client_plugin('swift').ignore_not_found: container = self.properties[self.DEPLOYMENT_SWIFT_DATA].get( 'container') container = container or self.physical_resource_name() swift = self.client('swift') swift.delete_object(container, object_name) headers = swift.head_container(container) if int(headers['x-container-object-count']) == 0: swift.delete_container(container) def _delete_queue(self): queue_id = self.data().get('metadata_queue_id') if not queue_id: return client_plugin = self.client_plugin('zaqar') zaqar = client_plugin.create_for_tenant( self.stack.stack_user_project_id, self._user_token()) with client_plugin.ignore_not_found: zaqar.queue(queue_id).delete() self.data_delete('metadata_queue_id') def handle_snapshot_delete(self, state): if state[1] != self.FAILED and self.resource_id: image_id = self.client().servers.create_image( self.resource_id, self.physical_resource_name()) return progress.ServerDeleteProgress( self.resource_id, image_id, False) return self._delete() def handle_delete(self): return self._delete() def check_delete_complete(self, prg): if not prg: return True def _show_resource(self): rsrc_dict = super(BaseServer, self)._show_resource() rsrc_dict.setdefault( self.OS_COLLECT_CONFIG, self.metadata_get().get('os-collect-config', {})) return rsrc_dict heat-10.0.2/heat/engine/resources/template_resource.py0000666000175000017500000003366413343562351023054 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_serialization import jsonutils from requests import exceptions import six from heat.common import exception from heat.common import grouputils from heat.common.i18n import _ from heat.common import template_format from heat.common import urlfetch from heat.engine import attributes from heat.engine import environment from heat.engine import properties from heat.engine.resources import stack_resource from heat.engine import template from heat.rpc import api as rpc_api LOG = logging.getLogger(__name__) REMOTE_SCHEMES = ('http', 'https') LOCAL_SCHEMES = ('file',) STACK_ID_OUTPUT = 'OS::stack_id' def generate_class_from_template(name, data, param_defaults): tmpl = template.Template(template_format.parse(data)) props, attrs = TemplateResource.get_schemas(tmpl, param_defaults) cls = type(name, (TemplateResource,), {'properties_schema': props, 'attributes_schema': attrs, '__doc__': tmpl.t.get(tmpl.DESCRIPTION)}) return cls class TemplateResource(stack_resource.StackResource): """A resource implemented by a nested stack. This implementation passes resource properties as parameters to the nested stack. Outputs of the nested stack are exposed as attributes of this resource. 
""" def __init__(self, name, json_snippet, stack): self._parsed_nested = None self.stack = stack self.validation_exception = None tri = self._get_resource_info(json_snippet) self.properties_schema = {} self.attributes_schema = {} # run Resource.__init__() so we can call self.nested() super(TemplateResource, self).__init__(name, json_snippet, stack) self.resource_info = tri if self.validation_exception is None: self._generate_schema() self.reparse() def _get_resource_info(self, rsrc_defn): try: tri = self.stack.env.get_resource_info( rsrc_defn.resource_type, resource_name=rsrc_defn.name, registry_type=environment.TemplateResourceInfo) except exception.EntityNotFound: self.validation_exception = ValueError(_( 'Only Templates with an extension of .yaml or ' '.template are supported')) else: self._template_name = tri.template_name self.resource_type = tri.name self.resource_path = tri.path if tri.user_resource: self.allowed_schemes = REMOTE_SCHEMES else: self.allowed_schemes = REMOTE_SCHEMES + LOCAL_SCHEMES return tri @staticmethod def get_template_file(template_name, allowed_schemes): try: return urlfetch.get(template_name, allowed_schemes=allowed_schemes) except (IOError, exceptions.RequestException) as r_exc: args = {'name': template_name, 'exc': six.text_type(r_exc)} msg = _('Could not fetch remote template ' '"%(name)s": %(exc)s') % args raise exception.NotFound(msg_fmt=msg) @staticmethod def get_schemas(tmpl, param_defaults): return ((properties.Properties.schema_from_params( tmpl.param_schemata(param_defaults))), (attributes.Attributes.schema_from_outputs( tmpl[tmpl.OUTPUTS]))) def _generate_schema(self): self._parsed_nested = None try: tmpl = template.Template(self.child_template()) except (exception.NotFound, ValueError) as download_error: self.validation_exception = download_error tmpl = template.Template( {"HeatTemplateFormatVersion": "2012-12-12"}) # re-generate the properties and attributes from the template. self.properties_schema, self.attributes_schema = self.get_schemas( tmpl, self.stack.env.param_defaults) self.attributes_schema.update(self.base_attributes_schema) self.attributes.set_schema(self.attributes_schema) def child_params(self): """Override method of child_params for the resource. :return: parameter values for our nested stack based on our properties """ params = {} for pname, pval in iter(self.properties.props.items()): if not pval.implemented(): continue try: val = self.properties.get_user_value(pname) except ValueError: if self.action == self.INIT: prop = self.properties.props[pname] val = prop.get_value(None) else: raise if val is not None: # take a list and create a CommaDelimitedList if pval.type() == properties.Schema.LIST: if len(val) == 0: params[pname] = '' elif isinstance(val[0], dict): flattened = [] for (count, item) in enumerate(val): for (ik, iv) in iter(item.items()): mem_str = '.member.%d.%s=%s' % (count, ik, iv) flattened.append(mem_str) params[pname] = ','.join(flattened) else: # When None is returned from get_attr, creating a # delimited list with it fails during validation. # we should sanitize the None values to empty strings. # FIXME(rabi) this needs a permanent solution # to sanitize attributes and outputs in the future. 
params[pname] = ','.join( (x if x is not None else '') for x in val) else: # for MAP, the JSON param takes either a collection or # string, so just pass it on and let the param validate # as appropriate params[pname] = val return params def child_template(self): if not self._parsed_nested: self._parsed_nested = template_format.parse(self.template_data(), self.template_url) return self._parsed_nested def regenerate_info_schema(self, definition): self._get_resource_info(definition) self._generate_schema() @property def template_url(self): return self._template_name def template_data(self): # we want to have the latest possible template. # 1. look in files # 2. try download # 3. look in the db reported_excp = None t_data = self.stack.t.files.get(self.template_url) stored_t_data = t_data if not t_data and self.template_url.endswith((".yaml", ".template")): try: t_data = self.get_template_file(self.template_url, self.allowed_schemes) except exception.NotFound as err: if self.action == self.UPDATE: raise reported_excp = err if t_data is None: if self.resource_id is not None: t_data = jsonutils.dumps(self.nested().t.t) if t_data is not None: if t_data != stored_t_data: self.stack.t.files[self.template_url] = t_data self.stack.t.env.register_class(self.resource_type, self.template_url, path=self.resource_path) return t_data if reported_excp is None: reported_excp = ValueError(_('Unknown error retrieving %s') % self.template_url) raise reported_excp def _validate_against_facade(self, facade_cls): facade_schemata = properties.schemata(facade_cls.properties_schema) for n, fs in facade_schemata.items(): if fs.required and n not in self.properties_schema: msg = (_("Required property %(n)s for facade %(type)s " "missing in provider") % {'n': n, 'type': self.type()}) raise exception.StackValidationFailed(message=msg) ps = self.properties_schema.get(n) if (n in self.properties_schema and (fs.allowed_param_prop_type() != ps.type)): # Type mismatch msg = (_("Property %(n)s type mismatch between facade %(type)s" " (%(fs_type)s) and provider (%(ps_type)s)") % { 'n': n, 'type': self.type(), 'fs_type': fs.type, 'ps_type': ps.type}) raise exception.StackValidationFailed(message=msg) for n, ps in self.properties_schema.items(): if ps.required and n not in facade_schemata: # Required property for template not present in facade msg = (_("Provider requires property %(n)s " "unknown in facade %(type)s") % { 'n': n, 'type': self.type()}) raise exception.StackValidationFailed(message=msg) facade_attrs = facade_cls.attributes_schema.copy() facade_attrs.update(facade_cls.base_attributes_schema) for attr in facade_attrs: if attr not in self.attributes_schema: msg = (_("Attribute %(attr)s for facade %(type)s " "missing in provider") % { 'attr': attr, 'type': self.type()}) raise exception.StackValidationFailed(message=msg) def validate(self): if self.validation_exception is not None: msg = six.text_type(self.validation_exception) raise exception.StackValidationFailed(message=msg) try: self.template_data() except ValueError as ex: msg = _("Failed to retrieve template data: %s") % ex raise exception.StackValidationFailed(message=msg) # If we're using an existing resource type as a facade for this # template, check for compatibility between the interfaces. 
try: fri = self.stack.env.get_resource_info( self.type(), resource_name=self.name, ignore=self.resource_info) except exception.EntityNotFound: pass else: facade_cls = fri.get_class(files=self.stack.t.files) self._validate_against_facade(facade_cls) return super(TemplateResource, self).validate() def handle_adopt(self, resource_data=None): return self.create_with_template(self.child_template(), self.child_params(), adopt_data=resource_data) def handle_create(self): return self.create_with_template(self.child_template(), self.child_params()) def metadata_update(self, new_metadata=None): """Refresh the metadata if new_metadata is None.""" if new_metadata is None: self.metadata_set(self.t.metadata()) def handle_update(self, json_snippet, tmpl_diff, prop_diff): self.properties = json_snippet.properties(self.properties_schema, self.context) return self.update_with_template(self.child_template(), self.child_params()) def get_reference_id(self): if self.resource_id is None: return six.text_type(self.name) if STACK_ID_OUTPUT in self.attributes.cached_attrs: return self.attributes.cached_attrs[STACK_ID_OUTPUT] stack_identity = self.nested_identifier() reference_id = stack_identity.arn() try: if self._outputs is not None: reference_id = self.get_output(STACK_ID_OUTPUT) elif STACK_ID_OUTPUT in self.attributes: output = self.rpc_client().show_output(self.context, dict(stack_identity), STACK_ID_OUTPUT) if rpc_api.OUTPUT_ERROR in output: raise exception.TemplateOutputError( resource=self.name, attribute=STACK_ID_OUTPUT, message=output[rpc_api.OUTPUT_ERROR]) reference_id = output[rpc_api.OUTPUT_VALUE] except exception.TemplateOutputError as err: LOG.info('%s', err) except exception.NotFound: pass self.attributes.set_cached_attr(STACK_ID_OUTPUT, reference_id) return reference_id def get_attribute(self, key, *path): if self.resource_id is None: return None # first look for explicit resource.x.y if key.startswith('resource.'): return grouputils.get_nested_attrs(self, key, False, *path) # then look for normal outputs try: return attributes.select_from_attribute(self.get_output(key), path) except exception.NotFound: raise exception.InvalidTemplateAttribute(resource=self.name, key=key) heat-10.0.2/heat/engine/resources/aws/0000775000175000017500000000000013343562672017542 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/autoscaling/0000775000175000017500000000000013343562672022053 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/autoscaling/launch_config.py0000666000175000017500000002433013343562337025226 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
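# A minimal, illustrative CFN fragment using the resource defined below
# (resource name, image, flavor and key names are placeholders):
#
#   "Resources": {
#     "MyLaunchConfig": {
#       "Type": "AWS::AutoScaling::LaunchConfiguration",
#       "Properties": {
#         "ImageId": "fedora-20.x86_64",
#         "InstanceType": "m1.small",
#         "KeyName": "heat_key"
#       }
#     }
#   }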
import six from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import function from heat.engine import properties from heat.engine import resource class LaunchConfiguration(resource.Resource): PROPERTIES = ( IMAGE_ID, INSTANCE_TYPE, KEY_NAME, USER_DATA, SECURITY_GROUPS, KERNEL_ID, RAM_DISK_ID, BLOCK_DEVICE_MAPPINGS, NOVA_SCHEDULER_HINTS, INSTANCE_ID, ) = ( 'ImageId', 'InstanceType', 'KeyName', 'UserData', 'SecurityGroups', 'KernelId', 'RamDiskId', 'BlockDeviceMappings', 'NovaSchedulerHints', 'InstanceId', ) _NOVA_SCHEDULER_HINT_KEYS = ( NOVA_SCHEDULER_HINT_KEY, NOVA_SCHEDULER_HINT_VALUE, ) = ( 'Key', 'Value', ) _BLOCK_DEVICE_MAPPING_KEYS = ( DEVICE_NAME, EBS, NO_DEVICE, VIRTUAL_NAME, ) = ( 'DeviceName', 'Ebs', 'NoDevice', 'VirtualName', ) _EBS_KEYS = ( DELETE_ON_TERMINATION, IOPS, SNAPSHOT_ID, VOLUME_SIZE, VOLUME_TYPE, ) = ( 'DeleteOnTermination', 'Iops', 'SnapshotId', 'VolumeSize', 'VolumeType' ) properties_schema = { IMAGE_ID: properties.Schema( properties.Schema.STRING, _('Glance image ID or name.'), constraints=[ constraints.CustomConstraint('glance.image') ] ), INSTANCE_TYPE: properties.Schema( properties.Schema.STRING, _('Nova instance type (flavor).'), constraints=[ constraints.CustomConstraint('nova.flavor') ] ), INSTANCE_ID: properties.Schema( properties.Schema.STRING, _('The ID of an existing instance you want to use to create ' 'the launch configuration. All properties are derived from ' 'the instance with the exception of BlockDeviceMapping.'), constraints=[ constraints.CustomConstraint("nova.server") ] ), KEY_NAME: properties.Schema( properties.Schema.STRING, _('Optional Nova keypair name.'), constraints=[ constraints.CustomConstraint("nova.keypair") ] ), USER_DATA: properties.Schema( properties.Schema.STRING, _('User data to pass to instance.') ), SECURITY_GROUPS: properties.Schema( properties.Schema.LIST, _('Security group names to assign.') ), KERNEL_ID: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), implemented=False ), RAM_DISK_ID: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), implemented=False ), BLOCK_DEVICE_MAPPINGS: properties.Schema( properties.Schema.LIST, _('Block device mappings to attach to instance.'), schema=properties.Schema( properties.Schema.MAP, schema={ DEVICE_NAME: properties.Schema( properties.Schema.STRING, _('A device name where the volume will be ' 'attached in the system at /dev/device_name.' 'e.g. vdb'), required=True, ), EBS: properties.Schema( properties.Schema.MAP, _('The ebs volume to attach to the instance.'), schema={ DELETE_ON_TERMINATION: properties.Schema( properties.Schema.BOOLEAN, _('Indicate whether the volume should be ' 'deleted when the instance is terminated.'), default=True ), IOPS: properties.Schema( properties.Schema.NUMBER, _('The number of I/O operations per second ' 'that the volume supports.'), implemented=False ), SNAPSHOT_ID: properties.Schema( properties.Schema.STRING, _('The ID of the snapshot to create ' 'a volume from.'), constraints=[ constraints.CustomConstraint( 'cinder.snapshot') ] ), VOLUME_SIZE: properties.Schema( properties.Schema.STRING, _('The size of the volume, in GB. Must be ' 'equal or greater than the size of the ' 'snapshot. 
It is safe to leave this blank ' 'and have the Compute service infer ' 'the size.'), ), VOLUME_TYPE: properties.Schema( properties.Schema.STRING, _('The volume type.'), implemented=False ), }, ), NO_DEVICE: properties.Schema( properties.Schema.MAP, _('The can be used to unmap a defined device.'), implemented=False ), VIRTUAL_NAME: properties.Schema( properties.Schema.STRING, _('The name of the virtual device. The name must be ' 'in the form ephemeralX where X is a number ' 'starting from zero (0); for example, ephemeral0.'), implemented=False ), }, ), ), NOVA_SCHEDULER_HINTS: properties.Schema( properties.Schema.LIST, _('Scheduler hints to pass to Nova (Heat extension).'), schema=properties.Schema( properties.Schema.MAP, schema={ NOVA_SCHEDULER_HINT_KEY: properties.Schema( properties.Schema.STRING, required=True ), NOVA_SCHEDULER_HINT_VALUE: properties.Schema( properties.Schema.STRING, required=True ), }, ) ), } def rebuild_lc_properties(self, instance_id): server = self.client_plugin('nova').get_server(instance_id) instance_props = { self.IMAGE_ID: server.image['id'], self.INSTANCE_TYPE: server.flavor['id'], self.KEY_NAME: server.key_name, self.SECURITY_GROUPS: [sg['name'] for sg in server.security_groups] } lc_props = function.resolve(self.properties.data) for key, value in six.iteritems(instance_props): # the properties which are specified in launch configuration, # will override the attributes from the instance lc_props.setdefault(key, value) return lc_props def handle_create(self): instance_id = self.properties.get(self.INSTANCE_ID) if instance_id: lc_props = self.rebuild_lc_properties(instance_id) defn = self.t.freeze(properties=lc_props) self.properties = defn.properties( self.properties_schema, self.context) self._update_stored_properties() def needs_replace_with_tmpl_diff(self, tmpl_diff): return tmpl_diff.metadata_changed() def get_reference_id(self): return self.physical_resource_name_or_FnGetRefId() def validate(self): """Validate any of the provided params.""" super(LaunchConfiguration, self).validate() # now we don't support without snapshot_id in bdm bdm = self.properties.get(self.BLOCK_DEVICE_MAPPINGS) if bdm: for mapping in bdm: ebs = mapping.get(self.EBS) if ebs: snapshot_id = ebs.get(self.SNAPSHOT_ID) if not snapshot_id: msg = _("SnapshotId is missing, this is required " "when specifying BlockDeviceMappings.") raise exception.StackValidationFailed(message=msg) else: msg = _("Ebs is missing, this is required " "when specifying BlockDeviceMappings.") raise exception.StackValidationFailed(message=msg) # validate the 'InstanceId', 'ImageId' and 'InstanceType', # if without 'InstanceId', 'ImageId' and 'InstanceType' are required instance_id = self.properties.get(self.INSTANCE_ID) if not instance_id: image_id = self.properties.get(self.IMAGE_ID) instance_type = self.properties.get(self.INSTANCE_TYPE) if not image_id or not instance_type: msg = _('If without InstanceId, ' 'ImageId and InstanceType are required.') raise exception.StackValidationFailed(message=msg) def resource_mapping(): return { 'AWS::AutoScaling::LaunchConfiguration': LaunchConfiguration, } heat-10.0.2/heat/engine/resources/aws/autoscaling/autoscaling_group.py0000666000175000017500000004016713343562337026162 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_utils import excutils import six from heat.common import exception from heat.common import grouputils from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import function from heat.engine.notification import autoscaling as notification from heat.engine import properties from heat.engine import resource from heat.engine.resources.openstack.heat import instance_group as instgrp from heat.engine import rsrc_defn from heat.engine import support from heat.scaling import cooldown from heat.scaling import scalingutil as sc_util LOG = logging.getLogger(__name__) class AutoScalingGroup(cooldown.CooldownMixin, instgrp.InstanceGroup): support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( AVAILABILITY_ZONES, LAUNCH_CONFIGURATION_NAME, MAX_SIZE, MIN_SIZE, COOLDOWN, DESIRED_CAPACITY, HEALTH_CHECK_GRACE_PERIOD, HEALTH_CHECK_TYPE, LOAD_BALANCER_NAMES, VPCZONE_IDENTIFIER, TAGS, INSTANCE_ID, ) = ( 'AvailabilityZones', 'LaunchConfigurationName', 'MaxSize', 'MinSize', 'Cooldown', 'DesiredCapacity', 'HealthCheckGracePeriod', 'HealthCheckType', 'LoadBalancerNames', 'VPCZoneIdentifier', 'Tags', 'InstanceId', ) _TAG_KEYS = ( TAG_KEY, TAG_VALUE, ) = ( 'Key', 'Value', ) _UPDATE_POLICY_SCHEMA_KEYS = ( ROLLING_UPDATE ) = ( 'AutoScalingRollingUpdate' ) _ROLLING_UPDATE_SCHEMA_KEYS = ( MIN_INSTANCES_IN_SERVICE, MAX_BATCH_SIZE, PAUSE_TIME ) = ( 'MinInstancesInService', 'MaxBatchSize', 'PauseTime' ) ATTRIBUTES = ( INSTANCE_LIST, ) = ( 'InstanceList', ) properties_schema = { AVAILABILITY_ZONES: properties.Schema( properties.Schema.LIST, _('Not Implemented.'), required=True ), LAUNCH_CONFIGURATION_NAME: properties.Schema( properties.Schema.STRING, _('The reference to a LaunchConfiguration resource.'), update_allowed=True ), INSTANCE_ID: properties.Schema( properties.Schema.STRING, _('The ID of an existing instance to use to ' 'create the Auto Scaling group. 
If specify this property, ' 'will create the group use an existing instance instead of ' 'a launch configuration.'), constraints=[ constraints.CustomConstraint("nova.server") ] ), MAX_SIZE: properties.Schema( properties.Schema.INTEGER, _('Maximum number of instances in the group.'), required=True, update_allowed=True ), MIN_SIZE: properties.Schema( properties.Schema.INTEGER, _('Minimum number of instances in the group.'), required=True, update_allowed=True ), COOLDOWN: properties.Schema( properties.Schema.INTEGER, _('Cooldown period, in seconds.'), update_allowed=True ), DESIRED_CAPACITY: properties.Schema( properties.Schema.INTEGER, _('Desired initial number of instances.'), update_allowed=True ), HEALTH_CHECK_GRACE_PERIOD: properties.Schema( properties.Schema.INTEGER, _('Not Implemented.'), implemented=False ), HEALTH_CHECK_TYPE: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), constraints=[ constraints.AllowedValues(['EC2', 'ELB']), ], implemented=False ), LOAD_BALANCER_NAMES: properties.Schema( properties.Schema.LIST, _('List of LoadBalancer resources.') ), VPCZONE_IDENTIFIER: properties.Schema( properties.Schema.LIST, _('Use only with Neutron, to list the internal subnet to ' 'which the instance will be attached; ' 'needed only if multiple exist; ' 'list length must be exactly 1.'), schema=properties.Schema( properties.Schema.STRING, _('UUID of the internal subnet to which the instance ' 'will be attached.') ) ), TAGS: properties.Schema( properties.Schema.LIST, _('Tags to attach to this group.'), schema=properties.Schema( properties.Schema.MAP, schema={ TAG_KEY: properties.Schema( properties.Schema.STRING, required=True ), TAG_VALUE: properties.Schema( properties.Schema.STRING, required=True ), }, ) ), } attributes_schema = { INSTANCE_LIST: attributes.Schema( _("A comma-delimited list of server ip addresses. 
" "(Heat extension)."), type=attributes.Schema.STRING ), } rolling_update_schema = { MIN_INSTANCES_IN_SERVICE: properties.Schema(properties.Schema.INTEGER, default=0), MAX_BATCH_SIZE: properties.Schema(properties.Schema.INTEGER, default=1), PAUSE_TIME: properties.Schema(properties.Schema.STRING, default='PT0S') } update_policy_schema = { ROLLING_UPDATE: properties.Schema(properties.Schema.MAP, schema=rolling_update_schema) } def handle_create(self): return self.create_with_template(self.child_template()) def _make_launch_config_resource(self, name, props): lc_res_type = 'AWS::AutoScaling::LaunchConfiguration' lc_res_def = rsrc_defn.ResourceDefinition(name, lc_res_type, props) lc_res = resource.Resource(name, lc_res_def, self.stack) return lc_res def _get_conf_properties(self): instance_id = self.properties.get(self.INSTANCE_ID) if instance_id: server = self.client_plugin('nova').get_server(instance_id) instance_props = { 'ImageId': server.image['id'], 'InstanceType': server.flavor['id'], 'KeyName': server.key_name, 'SecurityGroups': [sg['name'] for sg in server.security_groups] } conf = self._make_launch_config_resource(self.name, instance_props) props = function.resolve(conf.properties.data) else: conf, props = super(AutoScalingGroup, self)._get_conf_properties() vpc_zone_ids = self.properties.get(self.VPCZONE_IDENTIFIER) if vpc_zone_ids: props['SubnetId'] = vpc_zone_ids[0] return conf, props def check_create_complete(self, task): """Update cooldown timestamp after create succeeds.""" done = super(AutoScalingGroup, self).check_create_complete(task) cooldown = self.properties[self.COOLDOWN] if done: self._finished_scaling(cooldown, "%s : %s" % (sc_util.CFN_EXACT_CAPACITY, grouputils.get_size(self))) return done def check_update_complete(self, cookie): """Update the cooldown timestamp after update succeeds.""" done = super(AutoScalingGroup, self).check_update_complete(cookie) cooldown = self.properties[self.COOLDOWN] if done: self._finished_scaling(cooldown, "%s : %s" % (sc_util.CFN_EXACT_CAPACITY, grouputils.get_size(self))) return done def _get_new_capacity(self, capacity, adjustment, adjustment_type=sc_util.CFN_EXACT_CAPACITY, min_adjustment_step=None): lower = self.properties[self.MIN_SIZE] upper = self.properties[self.MAX_SIZE] return sc_util.calculate_new_capacity(capacity, adjustment, adjustment_type, min_adjustment_step, lower, upper) def resize(self, capacity): try: super(AutoScalingGroup, self).resize(capacity) finally: # allow InstanceList to be re-resolved self.clear_stored_attributes() def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Updates self.properties, if Properties has changed. If Properties has changed, update self.properties, so we get the new values during any subsequent adjustment. """ if tmpl_diff: # parse update policy if tmpl_diff.update_policy_changed(): up = json_snippet.update_policy(self.update_policy_schema, self.context) self.update_policy = up self.properties = json_snippet.properties(self.properties_schema, self.context) if prop_diff: # Replace instances first if launch configuration has changed self._try_rolling_update(prop_diff) # Update will happen irrespective of whether auto-scaling # is in progress or not. 
capacity = grouputils.get_size(self) desired_capacity = self.properties[self.DESIRED_CAPACITY] or capacity new_capacity = self._get_new_capacity(capacity, desired_capacity) self.resize(new_capacity) def adjust(self, adjustment, adjustment_type=sc_util.CFN_CHANGE_IN_CAPACITY, min_adjustment_step=None, cooldown=None): """Adjust the size of the scaling group if the cooldown permits.""" if self.status != self.COMPLETE: LOG.info("%s NOT performing scaling adjustment, " "when status is not COMPLETE", self.name) raise resource.NoActionRequired capacity = grouputils.get_size(self) new_capacity = self._get_new_capacity(capacity, adjustment, adjustment_type, min_adjustment_step) if new_capacity == capacity: LOG.info("%s NOT performing scaling adjustment, " "as there is no change in capacity.", self.name) raise resource.NoActionRequired if cooldown is None: cooldown = self.properties[self.COOLDOWN] self._check_scaling_allowed(cooldown) # send a notification before, on-error and on-success. notif = { 'stack': self.stack, 'adjustment': adjustment, 'adjustment_type': adjustment_type, 'capacity': capacity, 'groupname': self.FnGetRefId(), 'message': _("Start resizing the group %(group)s") % { 'group': self.FnGetRefId()}, 'suffix': 'start', } size_changed = False try: notification.send(**notif) try: self.resize(new_capacity) except Exception as resize_ex: with excutils.save_and_reraise_exception(): try: notif.update({'suffix': 'error', 'message': six.text_type(resize_ex), 'capacity': grouputils.get_size(self), }) notification.send(**notif) except Exception: LOG.exception('Failed sending error notification') else: size_changed = True notif.update({ 'suffix': 'end', 'capacity': new_capacity, 'message': _("End resizing the group %(group)s") % { 'group': notif['groupname']}, }) notification.send(**notif) except Exception: LOG.error("Error in performing scaling adjustment for " "group %s.", self.name) raise finally: self._finished_scaling(cooldown, "%s : %s" % (adjustment_type, adjustment), size_changed=size_changed) def _tags(self): """Add Identifying Tags to all servers in the group. This is so the Dimensions received from cfn-push-stats all include the groupname and stack id. Note: the group name must match what is returned from FnGetRefId """ autoscaling_tag = [{self.TAG_KEY: 'metering.AutoScalingGroupName', self.TAG_VALUE: self.FnGetRefId()}] return super(AutoScalingGroup, self)._tags() + autoscaling_tag def validate(self): # check validity of group size min_size = self.properties[self.MIN_SIZE] max_size = self.properties[self.MAX_SIZE] if max_size < min_size: msg = _("MinSize can not be greater than MaxSize") raise exception.StackValidationFailed(message=msg) if min_size < 0: msg = _("The size of AutoScalingGroup can not be less than zero") raise exception.StackValidationFailed(message=msg) if self.properties[self.DESIRED_CAPACITY] is not None: desired_capacity = self.properties[self.DESIRED_CAPACITY] if desired_capacity < min_size or desired_capacity > max_size: msg = _("DesiredCapacity must be between MinSize and MaxSize") raise exception.StackValidationFailed(message=msg) # TODO(pasquier-s): once Neutron is able to assign subnets to # availability zones, it will be possible to specify multiple subnets. # For now, only one subnet can be specified. The bug #1096017 tracks # this issue. 
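# e.g. VPCZoneIdentifier: ['<subnet-uuid>'] is accepted (the value is
# illustrative), while a list of two or more subnet IDs is rejected
# below with NotSupported.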
if (self.properties.get(self.VPCZONE_IDENTIFIER) and len(self.properties[self.VPCZONE_IDENTIFIER]) != 1): raise exception.NotSupported(feature=_("Anything other than one " "VPCZoneIdentifier")) # validate properties InstanceId and LaunchConfigurationName # for aws auto scaling group. # should provide just only one of if self.type() == 'AWS::AutoScaling::AutoScalingGroup': instanceId = self.properties.get(self.INSTANCE_ID) launch_config = self.properties.get( self.LAUNCH_CONFIGURATION_NAME) if bool(instanceId) == bool(launch_config): msg = _("Either 'InstanceId' or 'LaunchConfigurationName' " "must be provided.") raise exception.StackValidationFailed(message=msg) super(AutoScalingGroup, self).validate() def child_template(self): if self.properties[self.DESIRED_CAPACITY]: num_instances = self.properties[self.DESIRED_CAPACITY] else: num_instances = self.properties[self.MIN_SIZE] return self._create_template(num_instances) def resource_mapping(): return { 'AWS::AutoScaling::AutoScalingGroup': AutoScalingGroup, } heat-10.0.2/heat/engine/resources/aws/autoscaling/__init__.py0000666000175000017500000000000013343562337024152 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/autoscaling/scaling_policy.py0000666000175000017500000000740113343562337025426 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.heat import scaling_policy as heat_sp from heat.scaling import scalingutil as sc_util class AWSScalingPolicy(heat_sp.AutoScalingPolicy): PROPERTIES = ( AUTO_SCALING_GROUP_NAME, SCALING_ADJUSTMENT, ADJUSTMENT_TYPE, COOLDOWN, MIN_ADJUSTMENT_STEP, ) = ( 'AutoScalingGroupName', 'ScalingAdjustment', 'AdjustmentType', 'Cooldown', 'MinAdjustmentStep', ) ATTRIBUTES = ( ALARM_URL, ) = ( 'AlarmUrl', ) properties_schema = { AUTO_SCALING_GROUP_NAME: properties.Schema( properties.Schema.STRING, _('AutoScaling group name to apply policy to.'), required=True ), SCALING_ADJUSTMENT: properties.Schema( properties.Schema.INTEGER, _('Size of adjustment.'), required=True, update_allowed=True ), ADJUSTMENT_TYPE: properties.Schema( properties.Schema.STRING, _('Type of adjustment (absolute or percentage).'), required=True, constraints=[ constraints.AllowedValues( [sc_util.CFN_CHANGE_IN_CAPACITY, sc_util.CFN_EXACT_CAPACITY, sc_util.CFN_PERCENT_CHANGE_IN_CAPACITY]), ], update_allowed=True ), COOLDOWN: properties.Schema( properties.Schema.INTEGER, _('Cooldown period, in seconds.'), update_allowed=True ), MIN_ADJUSTMENT_STEP: properties.Schema( properties.Schema.INTEGER, _('Minimum number of resources that are added or removed ' 'when the AutoScaling group scales up or down. 
This can ' 'be used only when specifying PercentChangeInCapacity ' 'for the AdjustmentType property.'), constraints=[ constraints.Range( min=0, ), ], update_allowed=True ), } attributes_schema = { ALARM_URL: attributes.Schema( _("A signed url to handle the alarm. (Heat extension)."), type=attributes.Schema.STRING ), } def _validate_min_adjustment_step(self): adjustment_type = self.properties.get(self.ADJUSTMENT_TYPE) adjustment_step = self.properties.get(self.MIN_ADJUSTMENT_STEP) if (adjustment_type != sc_util.CFN_PERCENT_CHANGE_IN_CAPACITY and adjustment_step is not None): raise exception.ResourcePropertyValueDependency( prop1=self.MIN_ADJUSTMENT_STEP, prop2=self.ADJUSTMENT_TYPE, value=sc_util.CFN_PERCENT_CHANGE_IN_CAPACITY) def get_reference_id(self): if self.resource_id is not None: return six.text_type(self._get_ec2_signed_url()) else: return six.text_type(self.name) def resource_mapping(): return { 'AWS::AutoScaling::ScalingPolicy': AWSScalingPolicy, } heat-10.0.2/heat/engine/resources/aws/s3/0000775000175000017500000000000013343562672020067 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/s3/__init__.py0000666000175000017500000000000013343562337022166 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/s3/s3.py0000666000175000017500000001447113343562337020775 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
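# As a sketch of the Swift mapping implemented below (the header prefix
# is real, the tag values illustrative), a bucket tag
#   {'Key': 'env', 'Value': 'prod'}
# is stored as the container header
#   'X-Container-Meta-S3-Tag-env': 'prod'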
import six from six.moves.urllib import parse as urlparse from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource class S3Bucket(resource.Resource): PROPERTIES = ( ACCESS_CONTROL, WEBSITE_CONFIGURATION, TAGS, ) = ( 'AccessControl', 'WebsiteConfiguration', 'Tags', ) _WEBSITE_CONFIGURATION_KEYS = ( WEBSITE_CONFIGURATION_INDEX_DOCUMENT, WEBSITE_CONFIGURATION_ERROR_DOCUMENT, ) = ( 'IndexDocument', 'ErrorDocument', ) _TAG_KEYS = ( TAG_KEY, TAG_VALUE, ) = ( 'Key', 'Value', ) ATTRIBUTES = ( DOMAIN_NAME, WEBSITE_URL, ) = ( 'DomainName', 'WebsiteURL', ) properties_schema = { ACCESS_CONTROL: properties.Schema( properties.Schema.STRING, _('A predefined access control list (ACL) that grants ' 'permissions on the bucket.'), constraints=[ constraints.AllowedValues(['Private', 'PublicRead', 'PublicReadWrite', 'AuthenticatedRead', 'BucketOwnerRead', 'BucketOwnerFullControl']), ] ), WEBSITE_CONFIGURATION: properties.Schema( properties.Schema.MAP, _('Information used to configure the bucket as a static website.'), schema={ WEBSITE_CONFIGURATION_INDEX_DOCUMENT: properties.Schema( properties.Schema.STRING, _('The name of the index document.') ), WEBSITE_CONFIGURATION_ERROR_DOCUMENT: properties.Schema( properties.Schema.STRING, _('The name of the error document.') ), } ), TAGS: properties.Schema( properties.Schema.LIST, _('Tags to attach to the bucket.'), schema=properties.Schema( properties.Schema.MAP, schema={ TAG_KEY: properties.Schema( properties.Schema.STRING, _('The tag key name.'), required=True ), TAG_VALUE: properties.Schema( properties.Schema.STRING, _('The tag value.'), required=True ), }, ) ), } attributes_schema = { DOMAIN_NAME: attributes.Schema( _('The DNS name of the specified bucket.'), type=attributes.Schema.STRING ), WEBSITE_URL: attributes.Schema( _('The website endpoint for the specified bucket.'), type=attributes.Schema.STRING ), } default_client_name = 'swift' def tags_to_headers(self): if self.properties[self.TAGS] is None: return {} return dict( ('X-Container-Meta-S3-Tag-' + tm[self.TAG_KEY], tm[self.TAG_VALUE]) for tm in self.properties[self.TAGS]) def handle_create(self): """Create a bucket.""" container = self.physical_resource_name() headers = self.tags_to_headers() if self.properties[self.WEBSITE_CONFIGURATION] is not None: sc = self.properties[self.WEBSITE_CONFIGURATION] index_doc = sc[self.WEBSITE_CONFIGURATION_INDEX_DOCUMENT] error_doc = sc[self.WEBSITE_CONFIGURATION_ERROR_DOCUMENT] # we will assume that swift is configured for the staticweb # wsgi middleware headers['X-Container-Meta-Web-Index'] = index_doc headers['X-Container-Meta-Web-Error'] = error_doc con = self.context ac = self.properties[self.ACCESS_CONTROL] tenant_username = '%s:%s' % (con.project_name, con.username) if ac in ('PublicRead', 'PublicReadWrite'): headers['X-Container-Read'] = '.r:*' elif ac == 'AuthenticatedRead': headers['X-Container-Read'] = con.project_name else: headers['X-Container-Read'] = tenant_username if ac == 'PublicReadWrite': headers['X-Container-Write'] = '.r:*' else: headers['X-Container-Write'] = tenant_username self.client().put_container(container, headers) self.resource_id_set(container) def handle_delete(self): """Perform specified delete policy.""" if self.resource_id is None: return try: self.client().delete_container(self.resource_id) except Exception as ex: if self.client_plugin().is_conflict(ex): container, objects = 
self.client().get_container( self.resource_id) if objects: msg = _("The bucket you tried to delete is not empty (%s)." ) % self.resource_id raise exception.ResourceActionNotSupported(action=msg) self.client_plugin().ignore_not_found(ex) def get_reference_id(self): return six.text_type(self.resource_id) def _resolve_attribute(self, name): url = self.client().get_auth()[0] parsed = list(urlparse.urlparse(url)) if name == self.DOMAIN_NAME: return parsed[1].split(':')[0] elif name == self.WEBSITE_URL: return '%s://%s%s/%s' % (parsed[0], parsed[1], parsed[2], self.resource_id) def resource_mapping(): return { 'AWS::S3::Bucket': S3Bucket, } heat-10.0.2/heat/engine/resources/aws/cfn/0000775000175000017500000000000013343562672020310 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/cfn/stack.py0000666000175000017500000001031513343562351021763 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from requests import exceptions import six from heat.common import exception from heat.common.i18n import _ from heat.common import template_format from heat.common import urlfetch from heat.engine import attributes from heat.engine import properties from heat.engine.resources import stack_resource class NestedStack(stack_resource.StackResource): """Represents a child stack to allow composition of templates.""" PROPERTIES = ( TEMPLATE_URL, TIMEOUT_IN_MINS, PARAMETERS, ) = ( 'TemplateURL', 'TimeoutInMinutes', 'Parameters', ) properties_schema = { TEMPLATE_URL: properties.Schema( properties.Schema.STRING, _('The URL of a template that specifies the stack to be created ' 'as a resource.'), required=True, update_allowed=True ), TIMEOUT_IN_MINS: properties.Schema( properties.Schema.INTEGER, _('The length of time, in minutes, to wait for the nested stack ' 'creation.'), update_allowed=True ), PARAMETERS: properties.Schema( properties.Schema.MAP, _('The set of parameters passed to this nested stack.'), update_allowed=True ), } def child_template(self): template_url = self.properties[self.TEMPLATE_URL] try: template_data = urlfetch.get(template_url) except (exceptions.RequestException, IOError) as r_exc: raise ValueError(_("Could not fetch remote template '%(url)s': " "%(exc)s") % {'url': template_url, 'exc': r_exc}) return template_format.parse(template_data, template_url) def child_params(self): return self.properties[self.PARAMETERS] def handle_adopt(self, resource_data=None): return self._create_with_template(resource_adopt_data=resource_data) def handle_create(self): return self._create_with_template() def _create_with_template(self, resource_adopt_data=None): template = self.child_template() return self.create_with_template(template, self.child_params(), self.properties[self.TIMEOUT_IN_MINS], adopt_data=resource_adopt_data) def get_attribute(self, key, *path): if key and not key.startswith('Outputs.'): raise exception.InvalidTemplateAttribute(resource=self.name, key=key) try: attribute = self.get_output(key.partition('.')[-1]) except exception.NotFound: raise 
exception.InvalidTemplateAttribute(resource=self.name, key=key) return attributes.select_from_attribute(attribute, path) def get_reference_id(self): if self.nested() is None: return six.text_type(self.name) return self.nested().identifier().arn() def handle_update(self, json_snippet, tmpl_diff, prop_diff): # Nested stack template may be changed even if the prop_diff is empty. self.properties = json_snippet.properties(self.properties_schema, self.context) return self.update_with_template(self.child_template(), self.properties[self.PARAMETERS], self.properties[self.TIMEOUT_IN_MINS]) def resource_mapping(): return { 'AWS::CloudFormation::Stack': NestedStack, } heat-10.0.2/heat/engine/resources/aws/cfn/wait_condition_handle.py0000666000175000017500000000443613343562351025212 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.engine.resources import signal_responder from heat.engine.resources import wait_condition as wc_base from heat.engine import support class WaitConditionHandle(wc_base.BaseWaitConditionHandle): """AWS WaitConditionHandle resource. the main point of this class is to : have no dependencies (so the instance can reference it) generate a unique url (to be returned in the reference) then the cfn-signal will use this url to post to and WaitCondition will poll it to see if has been written to. """ support_status = support.SupportStatus(version='2014.1') METADATA_KEYS = ( DATA, REASON, STATUS, UNIQUE_ID ) = ( 'Data', 'Reason', 'Status', 'UniqueId' ) def get_reference_id(self): if self.resource_id: wc = signal_responder.WAITCONDITION return six.text_type(self._get_ec2_signed_url(signal_type=wc)) else: return six.text_type(self.name) def metadata_update(self, new_metadata=None): """DEPRECATED. Should use handle_signal instead.""" self.handle_signal(details=new_metadata) def handle_signal(self, details=None): """Validate and update the resource metadata. metadata must use the following format: { "Status" : "Status (must be SUCCESS or FAILURE)", "UniqueId" : "Some ID, should be unique for Count>1", "Data" : "Arbitrary Data", "Reason" : "Reason String" } """ if details is None: return return super(WaitConditionHandle, self).handle_signal(details) def resource_mapping(): return { 'AWS::CloudFormation::WaitConditionHandle': WaitConditionHandle, } heat-10.0.2/heat/engine/resources/aws/cfn/__init__.py0000666000175000017500000000000013343562337022407 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/cfn/wait_condition.py0000666000175000017500000000726413343562337023705 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.common import identifier from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.aws.cfn import wait_condition_handle as aws_wch from heat.engine.resources.openstack.heat import wait_condition as heat_wc from heat.engine import support class WaitCondition(heat_wc.HeatWaitCondition): support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( HANDLE, TIMEOUT, COUNT, ) = ( 'Handle', 'Timeout', 'Count', ) ATTRIBUTES = ( DATA, ) = ( 'Data', ) properties_schema = { HANDLE: properties.Schema( properties.Schema.STRING, _('A reference to the wait condition handle used to signal this ' 'wait condition.'), required=True ), TIMEOUT: properties.Schema( properties.Schema.INTEGER, _('The number of seconds to wait for the correct number of ' 'signals to arrive.'), required=True, constraints=[ constraints.Range(1, 43200), ] ), COUNT: properties.Schema( properties.Schema.INTEGER, _('The number of success signals that must be received before ' 'the stack creation process continues.'), constraints=[ constraints.Range(min=1), ], default=1, update_allowed=True ), } attributes_schema = { DATA: attributes.Schema( _('JSON string containing data associated with wait ' 'condition signals sent to the handle.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ), } def _validate_handle_url(self): handle_url = self.properties[self.HANDLE] handle_id = identifier.ResourceIdentifier.from_arn_url(handle_url) if handle_id.tenant != self.stack.context.tenant_id: raise ValueError(_("WaitCondition invalid Handle tenant %s") % handle_id.tenant) if handle_id.stack_name != self.stack.name: raise ValueError(_("WaitCondition invalid Handle stack %s") % handle_id.stack_name) if handle_id.stack_id != self.stack.id: raise ValueError(_("WaitCondition invalid Handle stack %s") % handle_id.stack_id) if handle_id.resource_name not in self.stack: raise ValueError(_("WaitCondition invalid Handle %s") % handle_id.resource_name) if not isinstance(self.stack[handle_id.resource_name], aws_wch.WaitConditionHandle): raise ValueError(_("WaitCondition invalid Handle %s") % handle_id.resource_name) def handle_create(self): self._validate_handle_url() return super(WaitCondition, self).handle_create() def resource_mapping(): return { 'AWS::CloudFormation::WaitCondition': WaitCondition, } heat-10.0.2/heat/engine/resources/aws/ec2/0000775000175000017500000000000013343562672020213 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/ec2/internet_gateway.py0000666000175000017500000001045713343562351024141 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
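# A minimal sketch of the external-network lookup performed by
# get_external_network_id() below, using python-neutronclient (the
# client is assumed to be an authenticated neutron Client):
#   ext_nets = client.list_networks(**{'router:external': True})['networks']
# Exactly one such network must exist, or an Error is raised.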
import six from heat.common import exception from heat.common.i18n import _ from heat.engine import properties from heat.engine import resource from heat.engine.resources.aws.ec2 import route_table class InternetGateway(resource.Resource): PROPERTIES = ( TAGS, ) = ( 'Tags', ) _TAG_KEYS = ( TAG_KEY, TAG_VALUE, ) = ( 'Key', 'Value', ) properties_schema = { TAGS: properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, schema={ TAG_KEY: properties.Schema( properties.Schema.STRING, required=True ), TAG_VALUE: properties.Schema( properties.Schema.STRING, required=True ), }, implemented=False, ) ), } def handle_create(self): self.resource_id_set(self.physical_resource_name()) def handle_delete(self): pass @staticmethod def get_external_network_id(client): ext_filter = {'router:external': True} ext_nets = client.list_networks(**ext_filter)['networks'] if len(ext_nets) != 1: # TODO(sbaker) if there is more than one external network # add a heat configuration variable to set the ID of # the default one raise exception.Error( _('Expected 1 external network, found %d') % len(ext_nets)) external_network_id = ext_nets[0]['id'] return external_network_id class VPCGatewayAttachment(resource.Resource): PROPERTIES = ( VPC_ID, INTERNET_GATEWAY_ID, VPN_GATEWAY_ID, ) = ( 'VpcId', 'InternetGatewayId', 'VpnGatewayId', ) properties_schema = { VPC_ID: properties.Schema( properties.Schema.STRING, _('VPC ID for this gateway association.'), required=True ), INTERNET_GATEWAY_ID: properties.Schema( properties.Schema.STRING, _('ID of the InternetGateway.') ), VPN_GATEWAY_ID: properties.Schema( properties.Schema.STRING, _('ID of the VPNGateway to attach to the VPC.'), implemented=False ), } default_client_name = 'neutron' def _vpc_route_tables(self): for res in six.itervalues(self.stack): if (res.has_interface('AWS::EC2::RouteTable') and res.properties.get(route_table.RouteTable.VPC_ID) == self.properties.get(self.VPC_ID)): yield res def add_dependencies(self, deps): super(VPCGatewayAttachment, self).add_dependencies(deps) # Depend on any route table in this template with the same # VpcId as this VpcId. # All route tables must exist before gateway attachment # as attachment happens to routers (not VPCs) for route_tbl in self._vpc_route_tables(): deps += (self, route_tbl) def handle_create(self): client = self.client() external_network_id = InternetGateway.get_external_network_id(client) for router in self._vpc_route_tables(): client.add_gateway_router(router.resource_id, { 'network_id': external_network_id}) def handle_delete(self): for router in self._vpc_route_tables(): with self.client_plugin().ignore_not_found: self.client().remove_gateway_router(router.resource_id) def resource_mapping(): return { 'AWS::EC2::InternetGateway': InternetGateway, 'AWS::EC2::VPCGatewayAttachment': VPCGatewayAttachment, } heat-10.0.2/heat/engine/resources/aws/ec2/eip.py0000666000175000017500000003533713343562337021355 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
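# An illustrative template fragment pairing the two resources in this
# module (the "WebServer" instance name is a placeholder):
#
#   "IPAddress": {"Type": "AWS::EC2::EIP"},
#   "IPAssoc": {
#     "Type": "AWS::EC2::EIPAssociation",
#     "Properties": {"InstanceId": {"Ref": "WebServer"},
#                    "EIP": {"Ref": "IPAddress"}}
#   }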
from oslo_log import log as logging import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine.clients import client_exception from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources.aws.ec2 import internet_gateway from heat.engine.resources.aws.ec2 import vpc from heat.engine import support LOG = logging.getLogger(__name__) class ElasticIp(resource.Resource): PROPERTIES = ( DOMAIN, INSTANCE_ID, ) = ( 'Domain', 'InstanceId', ) ATTRIBUTES = ( ALLOCATION_ID, ) = ( 'AllocationId', ) properties_schema = { DOMAIN: properties.Schema( properties.Schema.STRING, _('Set to "vpc" to have IP address allocation associated to your ' 'VPC.'), support_status=support.SupportStatus( status=support.DEPRECATED, message=_('Now we only allow vpc here, so no need to set up ' 'this tag anymore.'), version='9.0.0' ), constraints=[ constraints.AllowedValues(['vpc']), ] ), INSTANCE_ID: properties.Schema( properties.Schema.STRING, _('Instance ID to associate with EIP.'), update_allowed=True, constraints=[ constraints.CustomConstraint('nova.server') ] ), } attributes_schema = { ALLOCATION_ID: attributes.Schema( _('ID that AWS assigns to represent the allocation of the address ' 'for use with Amazon VPC. Returned only for VPC elastic IP ' 'addresses.'), type=attributes.Schema.STRING ), } default_client_name = 'nova' def __init__(self, name, json_snippet, stack): super(ElasticIp, self).__init__(name, json_snippet, stack) self.ipaddress = None def _ipaddress(self): if self.ipaddress is None and self.resource_id is not None: try: ips = self.neutron().show_floatingip(self.resource_id) except Exception as ex: self.client_plugin('neutron').ignore_not_found(ex) else: self.ipaddress = ips['floatingip']['floating_ip_address'] return self.ipaddress or '' def handle_create(self): """Allocate a floating IP for the current tenant.""" ips = None ext_net = internet_gateway.InternetGateway.get_external_network_id( self.neutron()) props = {'floating_network_id': ext_net} ips = self.neutron().create_floatingip({ 'floatingip': props})['floatingip'] self.resource_id_set(ips['id']) self.ipaddress = ips['floating_ip_address'] LOG.info('ElasticIp create %s', str(ips)) instance_id = self.properties[self.INSTANCE_ID] if instance_id: self.client_plugin().associate_floatingip(instance_id, ips['id']) def handle_delete(self): if self.resource_id is None: return # may be just create an eip when creation, or create the association # failed when creation, there will be no association, if we attempt to # disassociate, an exception will raised, we need # to catch and ignore it, and then to deallocate the eip instance_id = self.properties[self.INSTANCE_ID] if instance_id: with self.client_plugin().ignore_not_found: self.client_plugin().dissociate_floatingip(self.resource_id) # deallocate the eip with self.client_plugin('neutron').ignore_not_found: self.neutron().delete_floatingip(self.resource_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: if self.INSTANCE_ID in prop_diff: instance_id = prop_diff[self.INSTANCE_ID] if instance_id: self.client_plugin().associate_floatingip( instance_id, self.resource_id) else: self.client_plugin().dissociate_floatingip( self.resource_id) def get_reference_id(self): eip = self._ipaddress() if eip: return six.text_type(eip) else: return six.text_type(self.name) def _resolve_attribute(self, name): if name == self.ALLOCATION_ID: return 
six.text_type(self.resource_id) class ElasticIpAssociation(resource.Resource): PROPERTIES = ( INSTANCE_ID, EIP, ALLOCATION_ID, NETWORK_INTERFACE_ID, ) = ( 'InstanceId', 'EIP', 'AllocationId', 'NetworkInterfaceId', ) properties_schema = { INSTANCE_ID: properties.Schema( properties.Schema.STRING, _('Instance ID to associate with EIP specified by EIP property.'), update_allowed=True, constraints=[ constraints.CustomConstraint('nova.server') ] ), EIP: properties.Schema( properties.Schema.STRING, _('EIP address to associate with instance.'), update_allowed=True, constraints=[ constraints.CustomConstraint('ip_addr') ] ), ALLOCATION_ID: properties.Schema( properties.Schema.STRING, _('Allocation ID for VPC EIP address.'), update_allowed=True ), NETWORK_INTERFACE_ID: properties.Schema( properties.Schema.STRING, _('Network interface ID to associate with EIP.'), update_allowed=True ), } default_client_name = 'nova' def get_reference_id(self): return self.physical_resource_name_or_FnGetRefId() def validate(self): """Validate any of the provided parameters.""" super(ElasticIpAssociation, self).validate() eip = self.properties[self.EIP] allocation_id = self.properties[self.ALLOCATION_ID] instance_id = self.properties[self.INSTANCE_ID] ni_id = self.properties[self.NETWORK_INTERFACE_ID] # to check EIP and ALLOCATION_ID, should provide one of if bool(eip) == bool(allocation_id): msg = _("Either 'EIP' or 'AllocationId' must be provided.") raise exception.StackValidationFailed(message=msg) # to check if has EIP, also should specify InstanceId if eip and not instance_id: msg = _("Must specify 'InstanceId' if you specify 'EIP'.") raise exception.StackValidationFailed(message=msg) # to check InstanceId and NetworkInterfaceId, should provide # at least one if not instance_id and not ni_id: raise exception.PropertyUnspecifiedError('InstanceId', 'NetworkInterfaceId') def _get_port_info(self, ni_id=None, instance_id=None): port_id = None port_rsrc = None if ni_id: port_rsrc = self.neutron().list_ports(id=ni_id)['ports'][0] port_id = ni_id elif instance_id: ports = self.neutron().list_ports(device_id=instance_id) port_rsrc = ports['ports'][0] port_id = port_rsrc['id'] return port_id, port_rsrc def _neutron_add_gateway_router(self, float_id, network_id): router = vpc.VPC.router_for_vpc(self.neutron(), network_id) if router is not None: floatingip = self.neutron().show_floatingip(float_id) floating_net_id = floatingip['floatingip']['floating_network_id'] self.neutron().add_gateway_router( router['id'], {'network_id': floating_net_id}) def _neutron_update_floating_ip(self, allocationId, port_id=None, ignore_not_found=False): try: self.neutron().update_floatingip( allocationId, {'floatingip': {'port_id': port_id}}) except Exception as e: if not (ignore_not_found and self.client_plugin( 'neutron').is_not_found(e)): raise def _remove_floating_ip_address(self, eip, ignore_not_found=False): try: self.client_plugin().dissociate_floatingip_address(eip) except Exception as e: addr_not_found = isinstance( e, client_exception.EntityMatchNotFound) fip_not_found = self.client_plugin().is_not_found(e) not_found = addr_not_found or fip_not_found if not (ignore_not_found and not_found): raise def _floatingIp_detach(self): eip = self.properties[self.EIP] allocation_id = self.properties[self.ALLOCATION_ID] if eip: # if has eip_old, to remove the eip_old from the instance self._remove_floating_ip_address(eip) else: # if hasn't eip_old, to update neutron floatingIp self._neutron_update_floating_ip(allocation_id, None) def 
_handle_update_eipInfo(self, prop_diff): eip_update = prop_diff.get(self.EIP) allocation_id_update = prop_diff.get(self.ALLOCATION_ID) instance_id = self.properties[self.INSTANCE_ID] ni_id = self.properties[self.NETWORK_INTERFACE_ID] if eip_update: self._floatingIp_detach() self.client_plugin().associate_floatingip_address(instance_id, eip_update) self.resource_id_set(eip_update) elif allocation_id_update: self._floatingIp_detach() port_id, port_rsrc = self._get_port_info(ni_id, instance_id) if not port_id or not port_rsrc: LOG.error('Port not specified.') raise exception.NotFound(_('Failed to update, can not found ' 'port info.')) network_id = port_rsrc['network_id'] self._neutron_add_gateway_router(allocation_id_update, network_id) self._neutron_update_floating_ip(allocation_id_update, port_id) self.resource_id_set(allocation_id_update) def _handle_update_portInfo(self, prop_diff): instance_id_update = prop_diff.get(self.INSTANCE_ID) ni_id_update = prop_diff.get(self.NETWORK_INTERFACE_ID) eip = self.properties[self.EIP] allocation_id = self.properties[self.ALLOCATION_ID] # if update portInfo, no need to detach the port from # old instance/floatingip. if eip: self.client_plugin().associate_floatingip_address( instance_id_update, eip) else: port_id, port_rsrc = self._get_port_info(ni_id_update, instance_id_update) if not port_id or not port_rsrc: LOG.error('Port not specified.') raise exception.NotFound(_('Failed to update, can not found ' 'port info.')) network_id = port_rsrc['network_id'] self._neutron_add_gateway_router(allocation_id, network_id) self._neutron_update_floating_ip(allocation_id, port_id) def handle_create(self): """Add a floating IP address to a server.""" eip = self.properties[self.EIP] if eip: self.client_plugin().associate_floatingip_address( self.properties[self.INSTANCE_ID], eip) self.resource_id_set(eip) LOG.debug('ElasticIpAssociation ' '%(instance)s.add_floating_ip(%(eip)s)', {'instance': self.properties[self.INSTANCE_ID], 'eip': eip}) elif self.properties[self.ALLOCATION_ID]: ni_id = self.properties[self.NETWORK_INTERFACE_ID] instance_id = self.properties[self.INSTANCE_ID] port_id, port_rsrc = self._get_port_info(ni_id, instance_id) if not port_id or not port_rsrc: LOG.warning('Skipping association, resource not specified') return float_id = self.properties[self.ALLOCATION_ID] network_id = port_rsrc['network_id'] self._neutron_add_gateway_router(float_id, network_id) self._neutron_update_floating_ip(float_id, port_id) self.resource_id_set(float_id) def handle_delete(self): """Remove a floating IP address from a server or port.""" if self.resource_id is None: return if self.properties[self.EIP]: eip = self.properties[self.EIP] self._remove_floating_ip_address(eip, ignore_not_found=True) elif self.properties[self.ALLOCATION_ID]: float_id = self.properties[self.ALLOCATION_ID] self._neutron_update_floating_ip(float_id, port_id=None, ignore_not_found=True) def needs_replace_with_prop_diff(self, changed_properties_set, after_props, before_props): if (self.ALLOCATION_ID in changed_properties_set or self.EIP in changed_properties_set): instance_id, ni_id = None, None if self.INSTANCE_ID in changed_properties_set: instance_id = after_props.get(self.INSTANCE_ID) if self.NETWORK_INTERFACE_ID in changed_properties_set: ni_id = after_props.get(self.NETWORK_INTERFACE_ID) return bool(instance_id or ni_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: if self.ALLOCATION_ID in prop_diff or self.EIP in prop_diff: self._handle_update_eipInfo(prop_diff) 
elif (self.INSTANCE_ID in prop_diff or self.NETWORK_INTERFACE_ID in prop_diff): self._handle_update_portInfo(prop_diff) def resource_mapping(): return { 'AWS::EC2::EIP': ElasticIp, 'AWS::EC2::EIPAssociation': ElasticIpAssociation, } heat-10.0.2/heat/engine/resources/aws/ec2/route_table.py0000666000175000017500000001306713343562337023101 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources.aws.ec2 import vpc from heat.engine.resources.openstack.neutron import neutron from heat.engine import support class RouteTable(resource.Resource): support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( VPC_ID, TAGS, ) = ( 'VpcId', 'Tags', ) _TAG_KEYS = ( TAG_KEY, TAG_VALUE, ) = ( 'Key', 'Value', ) properties_schema = { VPC_ID: properties.Schema( properties.Schema.STRING, _('VPC ID for where the route table is created.'), required=True ), TAGS: properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, _('List of tags to be attached to this resource.'), schema={ TAG_KEY: properties.Schema( properties.Schema.STRING, required=True ), TAG_VALUE: properties.Schema( properties.Schema.STRING, required=True ), }, implemented=False, ) ), } default_client_name = 'neutron' def handle_create(self): client = self.client() props = {'name': self.physical_resource_name()} router = client.create_router({'router': props})['router'] self.resource_id_set(router['id']) def check_create_complete(self, *args): client = self.client() attributes = client.show_router( self.resource_id)['router'] if not neutron.NeutronResource.is_built(attributes): return False network_id = self.properties.get(self.VPC_ID) default_router = vpc.VPC.router_for_vpc(client, network_id) if default_router and default_router.get('external_gateway_info'): # the default router for the VPC is connected # to the external router, so do it for this too. 
external_network_id = default_router[ 'external_gateway_info']['network_id'] client.add_gateway_router(self.resource_id, { 'network_id': external_network_id}) return True def handle_delete(self): client = self.client() router_id = self.resource_id with self.client_plugin().ignore_not_found: client.delete_router(router_id) # just in case this router has been added to a gateway, remove it with self.client_plugin().ignore_not_found: client.remove_gateway_router(router_id) class SubnetRouteTableAssociation(resource.Resource): PROPERTIES = ( ROUTE_TABLE_ID, SUBNET_ID, ) = ( 'RouteTableId', 'SubnetId', ) properties_schema = { ROUTE_TABLE_ID: properties.Schema( properties.Schema.STRING, _('Route table ID.'), required=True ), SUBNET_ID: properties.Schema( properties.Schema.STRING, _('Subnet ID.'), required=True, constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), } default_client_name = 'neutron' def handle_create(self): client = self.client() subnet_id = self.properties.get(self.SUBNET_ID) router_id = self.properties.get(self.ROUTE_TABLE_ID) # remove the default router association for this subnet. with self.client_plugin().ignore_not_found: previous_router = self._router_for_subnet(subnet_id) if previous_router: client.remove_interface_router( previous_router['id'], {'subnet_id': subnet_id}) client.add_interface_router( router_id, {'subnet_id': subnet_id}) def _router_for_subnet(self, subnet_id): client = self.client() subnet = client.show_subnet( subnet_id)['subnet'] network_id = subnet['network_id'] return vpc.VPC.router_for_vpc(client, network_id) def handle_delete(self): client = self.client() subnet_id = self.properties.get(self.SUBNET_ID) router_id = self.properties.get(self.ROUTE_TABLE_ID) with self.client_plugin().ignore_not_found: client.remove_interface_router(router_id, { 'subnet_id': subnet_id}) # add back the default router with self.client_plugin().ignore_not_found: default_router = self._router_for_subnet(subnet_id) if default_router: client.add_interface_router( default_router['id'], {'subnet_id': subnet_id}) def resource_mapping(): return { 'AWS::EC2::RouteTable': RouteTable, 'AWS::EC2::SubnetRouteTableAssociation': SubnetRouteTableAssociation, } heat-10.0.2/heat/engine/resources/aws/ec2/instance.py0000666000175000017500000010722113343562337022374 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
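# A minimal, illustrative CFN fragment for the resource defined below
# (image, flavor and key names are placeholders):
#
#   "MyInstance": {
#     "Type": "AWS::EC2::Instance",
#     "Properties": {
#       "ImageId": "fedora-20.x86_64",
#       "InstanceType": "m1.small",
#       "KeyName": "heat_key",
#       "UserData": {"Fn::Base64": "#!/bin/bash -v\n"}
#     }
#   }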
import copy from oslo_config import cfg from oslo_log import log as logging import six cfg.CONF.import_opt('max_server_name_length', 'heat.common.config') from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine.clients import progress from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources import scheduler_hints as sh LOG = logging.getLogger(__name__) class Instance(resource.Resource, sh.SchedulerHintsMixin): PROPERTIES = ( IMAGE_ID, INSTANCE_TYPE, KEY_NAME, AVAILABILITY_ZONE, DISABLE_API_TERMINATION, KERNEL_ID, MONITORING, PLACEMENT_GROUP_NAME, PRIVATE_IP_ADDRESS, RAM_DISK_ID, SECURITY_GROUPS, SECURITY_GROUP_IDS, NETWORK_INTERFACES, SOURCE_DEST_CHECK, SUBNET_ID, TAGS, NOVA_SCHEDULER_HINTS, TENANCY, USER_DATA, VOLUMES, BLOCK_DEVICE_MAPPINGS ) = ( 'ImageId', 'InstanceType', 'KeyName', 'AvailabilityZone', 'DisableApiTermination', 'KernelId', 'Monitoring', 'PlacementGroupName', 'PrivateIpAddress', 'RamDiskId', 'SecurityGroups', 'SecurityGroupIds', 'NetworkInterfaces', 'SourceDestCheck', 'SubnetId', 'Tags', 'NovaSchedulerHints', 'Tenancy', 'UserData', 'Volumes', 'BlockDeviceMappings' ) _TAG_KEYS = ( TAG_KEY, TAG_VALUE, ) = ( 'Key', 'Value', ) _NOVA_SCHEDULER_HINT_KEYS = ( NOVA_SCHEDULER_HINT_KEY, NOVA_SCHEDULER_HINT_VALUE, ) = ( 'Key', 'Value', ) _VOLUME_KEYS = ( VOLUME_DEVICE, VOLUME_ID, ) = ( 'Device', 'VolumeId', ) _BLOCK_DEVICE_MAPPINGS_KEYS = ( DEVICE_NAME, EBS, NO_DEVICE, VIRTUAL_NAME, ) = ( 'DeviceName', 'Ebs', 'NoDevice', 'VirtualName', ) _EBS_KEYS = ( DELETE_ON_TERMINATION, IOPS, SNAPSHOT_ID, VOLUME_SIZE, VOLUME_TYPE, ) = ( 'DeleteOnTermination', 'Iops', 'SnapshotId', 'VolumeSize', 'VolumeType' ) ATTRIBUTES = ( AVAILABILITY_ZONE_ATTR, PRIVATE_DNS_NAME, PUBLIC_DNS_NAME, PRIVATE_IP, PUBLIC_IP, ) = ( 'AvailabilityZone', 'PrivateDnsName', 'PublicDnsName', 'PrivateIp', 'PublicIp', ) properties_schema = { IMAGE_ID: properties.Schema( properties.Schema.STRING, _('Glance image ID or name.'), constraints=[ constraints.CustomConstraint('glance.image') ], required=True ), # AWS does not require InstanceType but Heat does because the nova # create api call requires a flavor INSTANCE_TYPE: properties.Schema( properties.Schema.STRING, _('Nova instance type (flavor).'), required=True, update_allowed=True, constraints=[ constraints.CustomConstraint('nova.flavor') ] ), KEY_NAME: properties.Schema( properties.Schema.STRING, _('Optional Nova keypair name.'), constraints=[ constraints.CustomConstraint("nova.keypair") ] ), AVAILABILITY_ZONE: properties.Schema( properties.Schema.STRING, _('Availability zone to launch the instance in.') ), DISABLE_API_TERMINATION: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), implemented=False ), KERNEL_ID: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), implemented=False ), MONITORING: properties.Schema( properties.Schema.BOOLEAN, _('Not Implemented.'), implemented=False ), PLACEMENT_GROUP_NAME: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), implemented=False ), PRIVATE_IP_ADDRESS: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), implemented=False ), RAM_DISK_ID: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), implemented=False ), SECURITY_GROUPS: properties.Schema( properties.Schema.LIST, _('Security group names to assign.') ), SECURITY_GROUP_IDS: properties.Schema( properties.Schema.LIST, _('Security group IDs to assign.') ), 
NETWORK_INTERFACES: properties.Schema( properties.Schema.LIST, _('Network interfaces to associate with instance.'), update_allowed=True ), SOURCE_DEST_CHECK: properties.Schema( properties.Schema.BOOLEAN, _('Not Implemented.'), implemented=False ), SUBNET_ID: properties.Schema( properties.Schema.STRING, _('Subnet ID to launch instance in.'), update_allowed=True ), TAGS: properties.Schema( properties.Schema.LIST, _('Tags to attach to instance.'), schema=properties.Schema( properties.Schema.MAP, schema={ TAG_KEY: properties.Schema( properties.Schema.STRING, required=True ), TAG_VALUE: properties.Schema( properties.Schema.STRING, required=True ), }, ), update_allowed=True ), NOVA_SCHEDULER_HINTS: properties.Schema( properties.Schema.LIST, _('Scheduler hints to pass to Nova (Heat extension).'), schema=properties.Schema( properties.Schema.MAP, schema={ NOVA_SCHEDULER_HINT_KEY: properties.Schema( properties.Schema.STRING, required=True ), NOVA_SCHEDULER_HINT_VALUE: properties.Schema( properties.Schema.STRING, required=True ), }, ) ), TENANCY: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), constraints=[ constraints.AllowedValues(['dedicated', 'default']), ], implemented=False ), USER_DATA: properties.Schema( properties.Schema.STRING, _('User data to pass to instance.') ), VOLUMES: properties.Schema( properties.Schema.LIST, _('Volumes to attach to instance.'), default=[], schema=properties.Schema( properties.Schema.MAP, schema={ VOLUME_DEVICE: properties.Schema( properties.Schema.STRING, _('The device where the volume is exposed on the ' 'instance. This assignment may not be honored and ' 'it is advised that the path ' '/dev/disk/by-id/virtio- be used ' 'instead.'), required=True ), VOLUME_ID: properties.Schema( properties.Schema.STRING, _('The ID of the volume to be attached.'), required=True, constraints=[ constraints.CustomConstraint('cinder.volume') ] ), } ) ), BLOCK_DEVICE_MAPPINGS: properties.Schema( properties.Schema.LIST, _('Block device mappings to attach to instance.'), schema=properties.Schema( properties.Schema.MAP, schema={ DEVICE_NAME: properties.Schema( properties.Schema.STRING, _('A device name where the volume will be ' 'attached in the system at /dev/device_name.' 'e.g. vdb'), required=True, ), EBS: properties.Schema( properties.Schema.MAP, _('The ebs volume to attach to the instance.'), schema={ DELETE_ON_TERMINATION: properties.Schema( properties.Schema.BOOLEAN, _('Indicate whether the volume should be ' 'deleted when the instance is terminated.'), default=True ), IOPS: properties.Schema( properties.Schema.NUMBER, _('The number of I/O operations per second ' 'that the volume supports.'), implemented=False ), SNAPSHOT_ID: properties.Schema( properties.Schema.STRING, _('The ID of the snapshot to create ' 'a volume from.'), constraints=[ constraints.CustomConstraint( 'cinder.snapshot') ] ), VOLUME_SIZE: properties.Schema( properties.Schema.STRING, _('The size of the volume, in GB. Must be ' 'equal or greater than the size of the ' 'snapshot. It is safe to leave this blank ' 'and have the Compute service infer ' 'the size.'), ), VOLUME_TYPE: properties.Schema( properties.Schema.STRING, _('The volume type.'), implemented=False ), }, ), NO_DEVICE: properties.Schema( properties.Schema.MAP, _('The can be used to unmap a defined device.'), implemented=False ), VIRTUAL_NAME: properties.Schema( properties.Schema.STRING, _('The name of the virtual device. 
The name must be ' 'in the form ephemeralX where X is a number ' 'starting from zero (0); for example, ephemeral0.'), implemented=False ), }, ), ), } attributes_schema = { AVAILABILITY_ZONE_ATTR: attributes.Schema( _('The Availability Zone where the specified instance is ' 'launched.'), type=attributes.Schema.STRING ), PRIVATE_DNS_NAME: attributes.Schema( _('Private DNS name of the specified instance.'), type=attributes.Schema.STRING ), PUBLIC_DNS_NAME: attributes.Schema( _('Public DNS name of the specified instance.'), type=attributes.Schema.STRING ), PRIVATE_IP: attributes.Schema( _('Private IP address of the specified instance.'), type=attributes.Schema.STRING ), PUBLIC_IP: attributes.Schema( _('Public IP address of the specified instance.'), type=attributes.Schema.STRING ), } physical_resource_name_limit = cfg.CONF.max_server_name_length default_client_name = 'nova' def __init__(self, name, json_snippet, stack): super(Instance, self).__init__(name, json_snippet, stack) self.ipaddress = None def _set_ipaddress(self, networks): """Set IP address to self.ipaddress from a list of networks. Read the server's IP address from a list of networks provided by Nova. """ # Just record the first ipaddress for n in sorted(networks, reverse=True): if len(networks[n]) > 0: self.ipaddress = networks[n][0] break def _ipaddress(self): """Return the server's IP address. Fetching it from Nova if necessary. """ if self.ipaddress is None: self.ipaddress = self.client_plugin().server_to_ipaddress( self.resource_id) return self.ipaddress or '0.0.0.0' def _availability_zone(self): """Return Server's Availability Zone. Fetching it from Nova if necessary. """ availability_zone = self.properties[self.AVAILABILITY_ZONE] if availability_zone is None: try: server = self.client().servers.get(self.resource_id) except Exception as e: self.client_plugin().ignore_not_found(e) return # Default to None if Nova's # OS-EXT-AZ:availability_zone extension is disabled return getattr(server, 'OS-EXT-AZ:availability_zone', None) def _resolve_attribute(self, name): res = None if name == self.AVAILABILITY_ZONE_ATTR: res = self._availability_zone() elif name in self.ATTRIBUTES[1:]: res = self._ipaddress() LOG.info('%(name)s._resolve_attribute(%(attname)s) == %(res)s', {'name': self.name, 'attname': name, 'res': res}) return six.text_type(res) if res else None def _port_data_delete(self): # delete the port data which implicit-created port_id = self.data().get('port_id') if port_id: with self.client_plugin('neutron').ignore_not_found: self.neutron().delete_port(port_id) self.data_delete('port_id') def _build_nics(self, network_interfaces, security_groups=None, subnet_id=None): nics = None if network_interfaces: unsorted_nics = [] for entry in network_interfaces: nic = (entry if not isinstance(entry, six.string_types) else {'NetworkInterfaceId': entry, 'DeviceIndex': len(unsorted_nics)}) unsorted_nics.append(nic) sorted_nics = sorted(unsorted_nics, key=lambda nic: int(nic['DeviceIndex'])) nics = [{'port-id': snic['NetworkInterfaceId']} for snic in sorted_nics] else: # if SubnetId property in Instance, ensure subnet exists if subnet_id: neutronclient = self.neutron() network_id = self.client_plugin( 'neutron').network_id_from_subnet_id(subnet_id) # if subnet verified, create a port to use this subnet # if port is not created explicitly, nova will choose # the first subnet in the given network. 
if network_id: fixed_ip = {'subnet_id': subnet_id} props = { 'admin_state_up': True, 'network_id': network_id, 'fixed_ips': [fixed_ip] } if security_groups: props['security_groups'] = self.client_plugin( 'neutron').get_secgroup_uuids(security_groups) port = neutronclient.create_port({'port': props})['port'] # after create the port, set the port-id to # resource data, so that the port can be deleted on # instance delete. self.data_set('port_id', port['id']) nics = [{'port-id': port['id']}] return nics def _get_security_groups(self): security_groups = [] for key in (self.SECURITY_GROUPS, self.SECURITY_GROUP_IDS): if self.properties.get(key) is not None: for sg in self.properties.get(key): security_groups.append(sg) if not security_groups: security_groups = None return security_groups def _build_block_device_mapping(self, bdm): if not bdm: return None bdm_dict = {} for mapping in bdm: device_name = mapping.get(self.DEVICE_NAME) ebs = mapping.get(self.EBS) if ebs: mapping_parts = [] snapshot_id = ebs.get(self.SNAPSHOT_ID) volume_size = ebs.get(self.VOLUME_SIZE) delete = ebs.get(self.DELETE_ON_TERMINATION) if snapshot_id: mapping_parts.append(snapshot_id) mapping_parts.append('snap') if volume_size: mapping_parts.append(str(volume_size)) else: mapping_parts.append('') if delete is not None: mapping_parts.append(str(delete)) bdm_dict[device_name] = ':'.join(mapping_parts) return bdm_dict def _get_nova_metadata(self, properties): if properties is None or properties.get(self.TAGS) is None: return None return dict((tm[self.TAG_KEY], tm[self.TAG_VALUE]) for tm in properties[self.TAGS]) def handle_create(self): security_groups = self._get_security_groups() userdata = self.properties[self.USER_DATA] or '' flavor = self.properties[self.INSTANCE_TYPE] availability_zone = self.properties[self.AVAILABILITY_ZONE] image_name = self.properties[self.IMAGE_ID] image_id = self.client_plugin( 'glance').find_image_by_name_or_id(image_name) flavor_id = self.client_plugin().find_flavor_by_name_or_id(flavor) scheduler_hints = {} if self.properties[self.NOVA_SCHEDULER_HINTS]: for tm in self.properties[self.NOVA_SCHEDULER_HINTS]: # adopted from novaclient shell hint = tm[self.NOVA_SCHEDULER_HINT_KEY] hint_value = tm[self.NOVA_SCHEDULER_HINT_VALUE] if hint in scheduler_hints: if isinstance(scheduler_hints[hint], six.string_types): scheduler_hints[hint] = [scheduler_hints[hint]] scheduler_hints[hint].append(hint_value) else: scheduler_hints[hint] = hint_value else: scheduler_hints = None scheduler_hints = self._scheduler_hints(scheduler_hints) nics = self._build_nics(self.properties[self.NETWORK_INTERFACES], security_groups=security_groups, subnet_id=self.properties[self.SUBNET_ID]) block_device_mapping = self._build_block_device_mapping( self.properties.get(self.BLOCK_DEVICE_MAPPINGS)) server = None try: server = self.client().servers.create( name=self.physical_resource_name(), image=image_id, flavor=flavor_id, key_name=self.properties[self.KEY_NAME], security_groups=security_groups, userdata=self.client_plugin().build_userdata( self.metadata_get(), userdata, 'ec2-user'), meta=self._get_nova_metadata(self.properties), scheduler_hints=scheduler_hints, nics=nics, availability_zone=availability_zone, block_device_mapping=block_device_mapping) finally: # Avoid a race condition where the thread could be cancelled # before the ID is stored if server is not None: self.resource_id_set(server.id) creator = progress.ServerCreateProgress(server.id) attachers = [] for vol_id, device in self.volumes(): 
attachers.append(progress.VolumeAttachProgress(self.resource_id, vol_id, device)) return creator, tuple(attachers) def check_create_complete(self, cookie): creator, attachers = cookie if not creator.complete: creator.complete = self.client_plugin()._check_active( creator.server_id, 'Instance') if creator.complete: server = self.client_plugin().get_server(creator.server_id) self._set_ipaddress(server.networks) # NOTE(pas-ha) small optimization, # return True if there are no volumes to attach # to save one check_create_complete call return not len(attachers) else: return False return self._attach_volumes(attachers) def _attach_volumes(self, attachers): for attacher in attachers: if not attacher.called: self.client_plugin().attach_volume(attacher.srv_id, attacher.vol_id, attacher.device) attacher.called = True return False for attacher in attachers: if not attacher.complete: attacher.complete = self.client_plugin( 'cinder').check_attach_volume_complete(attacher.vol_id) break out = all(attacher.complete for attacher in attachers) return out def volumes(self): """Return an iterator for all volumes that should be attached. Return an iterator over (volume_id, device) tuples for all volumes that should be attached to this instance. """ volumes = self.properties[self.VOLUMES] return ((vol[self.VOLUME_ID], vol[self.VOLUME_DEVICE]) for vol in volumes) def _remove_matched_ifaces(self, old_network_ifaces, new_network_ifaces): # find matches and remove them from old and new ifaces old_network_ifaces_copy = copy.deepcopy(old_network_ifaces) for iface in old_network_ifaces_copy: if iface in new_network_ifaces: new_network_ifaces.remove(iface) old_network_ifaces.remove(iface) def handle_check(self): server = self.client().servers.get(self.resource_id) if not self.client_plugin()._check_active(server, 'Instance'): raise exception.Error(_("Instance is not ACTIVE (was: %s)") % server.status.strip()) def _update_instance_type(self, prop_diff): flavor = prop_diff[self.INSTANCE_TYPE] flavor_id = self.client_plugin().find_flavor_by_name_or_id(flavor) handler_args = {'args': (flavor_id,)} checker_args = {'args': (flavor_id,)} prg_resize = progress.ServerUpdateProgress(self.resource_id, 'resize', handler_extra=handler_args, checker_extra=checker_args) prg_verify = progress.ServerUpdateProgress(self.resource_id, 'verify_resize') return prg_resize, prg_verify def _update_network_interfaces(self, server, prop_diff): updaters = [] new_network_ifaces = prop_diff.get(self.NETWORK_INTERFACES) old_network_ifaces = self.properties.get(self.NETWORK_INTERFACES) # if there is entrys in old_network_ifaces and new_network_ifaces, # remove the same entrys from old and new ifaces if old_network_ifaces and new_network_ifaces: # there are four situations: # 1.old includes new, such as: old = 2,3, new = 2 # 2.new includes old, such as: old = 2,3, new = 1,2,3 # 3.has overlaps, such as: old = 2,3, new = 1,2 # 4.different, such as: old = 2,3, new = 1,4 # detach unmatched ones in old, attach unmatched ones in new self._remove_matched_ifaces(old_network_ifaces, new_network_ifaces) if old_network_ifaces: old_nics = self._build_nics(old_network_ifaces) for nic in old_nics: updaters.append( progress.ServerUpdateProgress( self.resource_id, 'interface_detach', complete=True, handler_extra={'args': (nic['port-id'],)}) ) if new_network_ifaces: new_nics = self._build_nics(new_network_ifaces) for nic in new_nics: handler_kwargs = {'port_id': nic['port-id']} updaters.append( progress.ServerUpdateProgress( self.resource_id, 'interface_attach', 
complete=True, handler_extra={'kwargs': handler_kwargs}) ) # if there is no change of 'NetworkInterfaces', do nothing, # keep the behavior as creation elif (old_network_ifaces and (self.NETWORK_INTERFACES not in prop_diff)): LOG.warning('There is no change of "%(net_interfaces)s" ' 'for instance %(server)s, do nothing ' 'when updating.', {'net_interfaces': self.NETWORK_INTERFACES, 'server': self.resource_id}) # if the interfaces not come from property 'NetworkInterfaces', # the situation is somewhat complex, so to detach the old ifaces, # and then attach the new ones. else: subnet_id = (prop_diff.get(self.SUBNET_ID) or self.properties.get(self.SUBNET_ID)) security_groups = self._get_security_groups() if not server: server = self.client().servers.get(self.resource_id) interfaces = server.interface_list() for iface in interfaces: updaters.append( progress.ServerUpdateProgress( self.resource_id, 'interface_detach', complete=True, handler_extra={'args': (iface.port_id,)}) ) # first to delete the port which implicit-created by heat self._port_data_delete() nics = self._build_nics(new_network_ifaces, security_groups=security_groups, subnet_id=subnet_id) # 'SubnetId' property is empty(or None) and # 'NetworkInterfaces' property is empty(or None), # _build_nics() will return nics = None,we should attach # first free port, according to similar behavior during # instance creation if not nics: updaters.append( progress.ServerUpdateProgress( self.resource_id, 'interface_attach', complete=True) ) else: for nic in nics: updaters.append( progress.ServerUpdateProgress( self.resource_id, 'interface_attach', complete=True, handler_extra={'kwargs': {'port_id': nic['port-id']}}) ) return updaters def handle_update(self, json_snippet, tmpl_diff, prop_diff): if tmpl_diff.metadata_changed(): self.metadata_set(json_snippet.metadata()) updaters = [] server = None if self.TAGS in prop_diff: server = self.client().servers.get(self.resource_id) self.client_plugin().meta_update( server, self._get_nova_metadata(prop_diff)) if self.INSTANCE_TYPE in prop_diff: updaters.extend(self._update_instance_type(prop_diff)) if (self.NETWORK_INTERFACES in prop_diff or self.SUBNET_ID in prop_diff): updaters.extend(self._update_network_interfaces(server, prop_diff)) # NOTE(pas-ha) optimization is possible (starting first task # right away), but we'd rather not, as this method already might # have called several APIs return updaters def check_update_complete(self, updaters): """Push all updaters to completion in list order.""" for prg in updaters: if not prg.called: handler = getattr(self.client_plugin(), prg.handler) prg.called = handler(*prg.handler_args, **prg.handler_kwargs) return False if not prg.complete: check_complete = getattr(self.client_plugin(), prg.checker) prg.complete = check_complete(*prg.checker_args, **prg.checker_kwargs) break return all(prg.complete for prg in updaters) def metadata_update(self, new_metadata=None): """Refresh the metadata if new_metadata is None.""" if new_metadata is None: self.metadata_set(self.t.metadata()) def validate(self): """Validate any of the provided params.""" res = super(Instance, self).validate() if res: return res # check validity of security groups vs. 
network interfaces security_groups = self._get_security_groups() network_interfaces = self.properties.get(self.NETWORK_INTERFACES) if security_groups and network_interfaces: raise exception.ResourcePropertyConflict( '/'.join([self.SECURITY_GROUPS, self.SECURITY_GROUP_IDS]), self.NETWORK_INTERFACES) # check bdm property # now we don't support without snapshot_id in bdm bdm = self.properties.get(self.BLOCK_DEVICE_MAPPINGS) if bdm: for mapping in bdm: ebs = mapping.get(self.EBS) if ebs: snapshot_id = ebs.get(self.SNAPSHOT_ID) if not snapshot_id: msg = _("SnapshotId is missing, this is required " "when specifying BlockDeviceMappings.") raise exception.StackValidationFailed(message=msg) else: msg = _("Ebs is missing, this is required " "when specifying BlockDeviceMappings.") raise exception.StackValidationFailed(message=msg) subnet_id = self.properties.get(self.SUBNET_ID) if network_interfaces and subnet_id: # consider the old templates, we only to log to warn user # NetworkInterfaces has higher priority than SubnetId LOG.warning('"%(subnet)s" will be ignored if specified ' '"%(net_interfaces)s". So if you specified the ' '"%(net_interfaces)s" property, ' 'do not specify "%(subnet)s" property.', {'subnet': self.SUBNET_ID, 'net_interfaces': self.NETWORK_INTERFACES}) def handle_delete(self): # make sure to delete the port which implicit-created by heat self._port_data_delete() if self.resource_id is None: return try: self.client().servers.delete(self.resource_id) except Exception as e: self.client_plugin().ignore_not_found(e) return return self.resource_id def check_delete_complete(self, server_id): if not server_id: return True return self.client_plugin().check_delete_server_complete(server_id) def handle_suspend(self): """Suspend an instance. Note we do not wait for the SUSPENDED state, this is polled for by check_suspend_complete in a similar way to the create logic so we can take advantage of coroutines. """ if self.resource_id is None: raise exception.Error(_('Cannot suspend %s, resource_id not set') % self.name) try: server = self.client().servers.get(self.resource_id) except Exception as e: if self.client_plugin().is_not_found(e): raise exception.NotFound(_('Failed to find instance %s') % self.resource_id) else: raise else: # if the instance has been suspended successful, # no need to suspend again if self.client_plugin().get_status(server) != 'SUSPENDED': LOG.debug("suspending instance %s", self.resource_id) server.suspend() return server.id def check_suspend_complete(self, server_id): cp = self.client_plugin() server = cp.fetch_server(server_id) if not server: return False status = cp.get_status(server) LOG.debug('%(name)s check_suspend_complete status = %(status)s', {'name': self.name, 'status': status}) if status in list(cp.deferred_server_statuses + ['ACTIVE']): return status == 'SUSPENDED' else: exc = exception.ResourceUnknownStatus( result=_('Suspend of instance %s failed') % server.name, resource_status=status) raise exc def handle_resume(self): """Resume an instance. Note we do not wait for the ACTIVE state, this is polled for by check_resume_complete in a similar way to the create logic so we can take advantage of coroutines. 
""" if self.resource_id is None: raise exception.Error(_('Cannot resume %s, resource_id not set') % self.name) try: server = self.client().servers.get(self.resource_id) except Exception as e: if self.client_plugin().is_not_found(e): raise exception.NotFound(_('Failed to find instance %s') % self.resource_id) else: raise else: # if the instance has been resumed successful, # no need to resume again if self.client_plugin().get_status(server) != 'ACTIVE': LOG.debug("resuming instance %s", self.resource_id) server.resume() return server.id def check_resume_complete(self, server_id): return self.client_plugin()._check_active(server_id, 'Instance') def resource_mapping(): return { 'AWS::EC2::Instance': Instance, } heat-10.0.2/heat/engine/resources/aws/ec2/network_interface.py0000666000175000017500000001401713343562337024301 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource class NetworkInterface(resource.Resource): PROPERTIES = ( DESCRIPTION, GROUP_SET, PRIVATE_IP_ADDRESS, SOURCE_DEST_CHECK, SUBNET_ID, TAGS, ) = ( 'Description', 'GroupSet', 'PrivateIpAddress', 'SourceDestCheck', 'SubnetId', 'Tags', ) _TAG_KEYS = ( TAG_KEY, TAG_VALUE, ) = ( 'Key', 'Value', ) ATTRIBUTES = ( PRIVATE_IP_ADDRESS_ATTR, ) = ( 'PrivateIpAddress', ) properties_schema = { DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for this interface.') ), GROUP_SET: properties.Schema( properties.Schema.LIST, _('List of security group IDs associated with this interface.'), update_allowed=True ), PRIVATE_IP_ADDRESS: properties.Schema( properties.Schema.STRING ), SOURCE_DEST_CHECK: properties.Schema( properties.Schema.BOOLEAN, _('Flag indicating if traffic to or from instance is validated.'), implemented=False ), SUBNET_ID: properties.Schema( properties.Schema.STRING, _('Subnet ID to associate with this interface.'), required=True, constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), TAGS: properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, _('List of tags associated with this interface.'), schema={ TAG_KEY: properties.Schema( properties.Schema.STRING, required=True ), TAG_VALUE: properties.Schema( properties.Schema.STRING, required=True ), }, implemented=False, ) ), } attributes_schema = { PRIVATE_IP_ADDRESS: attributes.Schema( _('Private IP address of the network interface.'), type=attributes.Schema.STRING ), } default_client_name = 'neutron' @staticmethod def network_id_from_subnet_id(neutronclient, subnet_id): subnet_info = neutronclient.show_subnet(subnet_id) return subnet_info['subnet']['network_id'] def __init__(self, name, json_snippet, stack): super(NetworkInterface, self).__init__(name, json_snippet, stack) self.fixed_ip_address = None def handle_create(self): subnet_id = self.properties[self.SUBNET_ID] network_id = self.client_plugin().network_id_from_subnet_id( 
subnet_id) fixed_ip = {'subnet_id': subnet_id} if self.properties[self.PRIVATE_IP_ADDRESS]: fixed_ip['ip_address'] = self.properties[self.PRIVATE_IP_ADDRESS] props = { 'name': self.physical_resource_name(), 'admin_state_up': True, 'network_id': network_id, 'fixed_ips': [fixed_ip] } # if without group_set, don't set the 'security_groups' property, # neutron will create the port with the 'default' securityGroup, # if has the group_set and the value is [], which means to create the # port without securityGroup(same as the behavior of neutron) if self.properties[self.GROUP_SET] is not None: sgs = self.client_plugin().get_secgroup_uuids( self.properties.get(self.GROUP_SET)) props['security_groups'] = sgs port = self.client().create_port({'port': props})['port'] self.resource_id_set(port['id']) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client().delete_port(self.resource_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: update_props = {} if self.GROUP_SET in prop_diff: group_set = prop_diff.get(self.GROUP_SET) # update should keep the same behavior as creation, # if without the GroupSet in update template, we should # update the security_groups property to referent # the 'default' security group if group_set is not None: sgs = self.client_plugin().get_secgroup_uuids(group_set) else: sgs = self.client_plugin().get_secgroup_uuids(['default']) update_props['security_groups'] = sgs self.client().update_port(self.resource_id, {'port': update_props}) def _get_fixed_ip_address(self): if self.fixed_ip_address is None: port = self.client().show_port(self.resource_id)['port'] if port['fixed_ips'] and len(port['fixed_ips']) > 0: self.fixed_ip_address = port['fixed_ips'][0]['ip_address'] return self.fixed_ip_address def _resolve_attribute(self, name): if name == self.PRIVATE_IP_ADDRESS: return self._get_fixed_ip_address() def resource_mapping(): return { 'AWS::EC2::NetworkInterface': NetworkInterface, } heat-10.0.2/heat/engine/resources/aws/ec2/volume.py0000666000175000017500000001043313343562337022075 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
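#
# Illustrative usage (editor's note, not part of the upstream module): a
# minimal snippet pairing the Volume and VolumeAttachment resources defined
# below. The availability zone, size and device path are placeholders; per
# the schema, Device must match the pattern /dev/vd[b-z] and "AppServer" is
# assumed to be an AWS::EC2::Instance elsewhere in the template:
#
#   "DataVolume": {
#     "Type": "AWS::EC2::Volume",
#     "Properties": {
#       "AvailabilityZone": "nova",
#       "Size": 10
#     }
#   },
#   "DataVolumeAttachment": {
#     "Type": "AWS::EC2::VolumeAttachment",
#     "Properties": {
#       "InstanceId": {"Ref": "AppServer"},
#       "VolumeId": {"Ref": "DataVolume"},
#       "Device": "/dev/vdb"
#     }
#   }
#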
from heat.common.i18n import _
from heat.engine import constraints
from heat.engine import properties
from heat.engine.resources import volume_base as vb


class Volume(vb.BaseVolume):

    PROPERTIES = (
        AVAILABILITY_ZONE, SIZE, BACKUP_ID, TAGS,
    ) = (
        'AvailabilityZone', 'Size', 'SnapshotId', 'Tags',
    )

    _TAG_KEYS = (
        TAG_KEY, TAG_VALUE,
    ) = (
        'Key', 'Value',
    )

    properties_schema = {
        AVAILABILITY_ZONE: properties.Schema(
            properties.Schema.STRING,
            _('The availability zone in which the volume will be created.'),
            required=True,
            immutable=True
        ),
        SIZE: properties.Schema(
            properties.Schema.INTEGER,
            _('The size of the volume in GB.'),
            immutable=True,
            constraints=[
                constraints.Range(min=1),
            ]
        ),
        BACKUP_ID: properties.Schema(
            properties.Schema.STRING,
            _('If specified, the backup used as the source to create the '
              'volume.'),
            immutable=True,
            constraints=[
                constraints.CustomConstraint('cinder.backup')
            ]
        ),
        TAGS: properties.Schema(
            properties.Schema.LIST,
            _('The list of tags to associate with the volume.'),
            immutable=True,
            schema=properties.Schema(
                properties.Schema.MAP,
                schema={
                    TAG_KEY: properties.Schema(
                        properties.Schema.STRING,
                        required=True
                    ),
                    TAG_VALUE: properties.Schema(
                        properties.Schema.STRING,
                        required=True
                    ),
                },
            )
        ),
    }

    _volume_creating_status = ['creating', 'restoring-backup']

    def _create_arguments(self):
        if self.properties[self.TAGS]:
            tags = dict((tm[self.TAG_KEY], tm[self.TAG_VALUE])
                        for tm in self.properties[self.TAGS])
        else:
            tags = None

        return {
            'size': self.properties[self.SIZE],
            'availability_zone': (self.properties[self.AVAILABILITY_ZONE] or
                                  None),
            'metadata': tags
        }


class VolumeAttachment(vb.BaseVolumeAttachment):

    PROPERTIES = (
        INSTANCE_ID, VOLUME_ID, DEVICE,
    ) = (
        'InstanceId', 'VolumeId', 'Device',
    )

    properties_schema = {
        INSTANCE_ID: properties.Schema(
            properties.Schema.STRING,
            _('The ID of the instance to which the volume attaches.'),
            immutable=True,
            required=True,
            constraints=[
                constraints.CustomConstraint('nova.server')
            ]
        ),
        VOLUME_ID: properties.Schema(
            properties.Schema.STRING,
            _('The ID of the volume to be attached.'),
            immutable=True,
            required=True,
            constraints=[
                constraints.CustomConstraint('cinder.volume')
            ]
        ),
        DEVICE: properties.Schema(
            properties.Schema.STRING,
            _('The device where the volume is exposed on the instance. This '
              'assignment may not be honored and it is advised that the path '
              '/dev/disk/by-id/virtio-<VolumeId> be used instead.'),
            immutable=True,
            required=True,
            constraints=[
                constraints.AllowedPattern('/dev/vd[b-z]'),
            ]
        ),
    }


def resource_mapping():
    return {
        'AWS::EC2::Volume': Volume,
        'AWS::EC2::VolumeAttachment': VolumeAttachment,
    }
heat-10.0.2/heat/engine/resources/aws/ec2/__init__.py0000666000175000017500000000000013343562337022312 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/ec2/vpc.py0000666000175000017500000001036213343562337021357 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
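#
# Illustrative usage (editor's note, not part of the upstream module): the
# VPC resource below maps to a Neutron network plus a router that share the
# same generated name. The CIDR value here is a placeholder:
#
#   "MyVPC": {
#     "Type": "AWS::EC2::VPC",
#     "Properties": {
#       "CidrBlock": "10.0.0.0/16"
#     }
#   }
#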
from heat.common import exception
from heat.common.i18n import _
from heat.engine import constraints
from heat.engine import properties
from heat.engine import resource
from heat.engine.resources.openstack.neutron import neutron


class VPC(resource.Resource):

    PROPERTIES = (
        CIDR_BLOCK, INSTANCE_TENANCY, TAGS,
    ) = (
        'CidrBlock', 'InstanceTenancy', 'Tags',
    )

    _TAG_KEYS = (
        TAG_KEY, TAG_VALUE,
    ) = (
        'Key', 'Value',
    )

    properties_schema = {
        CIDR_BLOCK: properties.Schema(
            properties.Schema.STRING,
            _('CIDR block to apply to the VPC.')
        ),
        INSTANCE_TENANCY: properties.Schema(
            properties.Schema.STRING,
            _('Allowed tenancy of instances launched in the VPC. default - '
              'any tenancy; dedicated - instance will be dedicated, '
              'regardless of the tenancy option specified at instance '
              'launch.'),
            default='default',
            constraints=[
                constraints.AllowedValues(['default', 'dedicated']),
            ],
            implemented=False
        ),
        TAGS: properties.Schema(
            properties.Schema.LIST,
            schema=properties.Schema(
                properties.Schema.MAP,
                _('List of tags to attach to the instance.'),
                schema={
                    TAG_KEY: properties.Schema(
                        properties.Schema.STRING,
                        required=True
                    ),
                    TAG_VALUE: properties.Schema(
                        properties.Schema.STRING,
                        required=True
                    ),
                },
                implemented=False,
            )
        ),
    }

    default_client_name = 'neutron'

    def handle_create(self):
        # The VPC's net and router are associated by having identical names.
        net_props = {'name': self.physical_resource_name()}
        router_props = {'name': self.physical_resource_name()}

        net = self.client().create_network({'network': net_props})['network']
        self.resource_id_set(net['id'])
        self.client().create_router({'router': router_props})['router']

    @staticmethod
    def network_for_vpc(client, network_id):
        return client.show_network(network_id)['network']

    @staticmethod
    def router_for_vpc(client, network_id):
        # first get the neutron net
        net = VPC.network_for_vpc(client, network_id)
        # then find a router with the same name
        routers = client.list_routers(name=net['name'])['routers']
        if len(routers) == 0:
            # There may be no router if the net was created manually
            # instead of in another stack.
            return None
        if len(routers) > 1:
            raise exception.Error(
                _('Multiple routers found with name %s') % net['name'])
        return routers[0]

    def check_create_complete(self, *args):
        net = self.network_for_vpc(self.client(), self.resource_id)
        if not neutron.NeutronResource.is_built(net):
            return False
        router = self.router_for_vpc(self.client(), self.resource_id)
        return neutron.NeutronResource.is_built(router)

    def handle_delete(self):
        if self.resource_id is None:
            return

        with self.client_plugin().ignore_not_found:
            router = self.router_for_vpc(self.client(), self.resource_id)
            if router:
                self.client().delete_router(router['id'])

        with self.client_plugin().ignore_not_found:
            self.client().delete_network(self.resource_id)


def resource_mapping():
    return {
        'AWS::EC2::VPC': VPC,
    }
heat-10.0.2/heat/engine/resources/aws/ec2/security_group.py0000666000175000017500000002377013343562337023651 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
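#
# Illustrative usage (editor's note, not part of the upstream module): a
# minimal ingress-only group for the SecurityGroup resource defined below.
# The CIDR and port values are placeholders; GroupDescription is the only
# required property per the schema:
#
#   "WebSecurityGroup": {
#     "Type": "AWS::EC2::SecurityGroup",
#     "Properties": {
#       "GroupDescription": "Allow inbound HTTP",
#       "SecurityGroupIngress": [
#         {"IpProtocol": "tcp", "FromPort": "80", "ToPort": "80",
#          "CidrIp": "0.0.0.0/0"}
#       ]
#     }
#   }
#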
import six from heat.common import exception from heat.common.i18n import _ from heat.engine import properties from heat.engine import resource class NeutronSecurityGroup(object): def __init__(self, sg): self.sg = sg self.client = sg.client('neutron') self.plugin = sg.client_plugin('neutron') def _convert_to_neutron_rule(self, sg_rule): return { 'direction': sg_rule['direction'], 'ethertype': 'IPv4', 'remote_ip_prefix': sg_rule.get(self.sg.RULE_CIDR_IP), 'port_range_min': sg_rule.get(self.sg.RULE_FROM_PORT), 'port_range_max': sg_rule.get(self.sg.RULE_TO_PORT), 'protocol': sg_rule.get(self.sg.RULE_IP_PROTOCOL), 'remote_group_id': sg_rule.get( self.sg.RULE_SOURCE_SECURITY_GROUP_ID), 'security_group_id': self.sg.resource_id } def _res_rules_to_common(self, api_rules): rules = {} for nr in api_rules: rule = {} rule[self.sg.RULE_FROM_PORT] = nr['port_range_min'] rule[self.sg.RULE_TO_PORT] = nr['port_range_max'] rule[self.sg.RULE_IP_PROTOCOL] = nr['protocol'] rule['direction'] = nr['direction'] rule[self.sg.RULE_CIDR_IP] = nr['remote_ip_prefix'] rule[self.sg.RULE_SOURCE_SECURITY_GROUP_ID ] = nr['remote_group_id'] rules[nr['id']] = rule return rules def _prop_rules_to_common(self, props, direction): rules = [] prs = props.get(direction) or [] for pr in prs: rule = dict(pr) rule.pop(self.sg.RULE_SOURCE_SECURITY_GROUP_OWNER_ID) # Neutron only accepts positive ints from_port = pr.get(self.sg.RULE_FROM_PORT) if from_port is not None: from_port = int(from_port) if from_port < 0: from_port = None rule[self.sg.RULE_FROM_PORT] = from_port to_port = pr.get(self.sg.RULE_TO_PORT) if to_port is not None: to_port = int(to_port) if to_port < 0: to_port = None rule[self.sg.RULE_TO_PORT] = to_port if (pr.get(self.sg.RULE_FROM_PORT) is None and pr.get(self.sg.RULE_TO_PORT) is None): rule[self.sg.RULE_CIDR_IP] = None else: rule[self.sg.RULE_CIDR_IP] = pr.get(self.sg.RULE_CIDR_IP) # Neutron understands both names and ids rule[self.sg.RULE_SOURCE_SECURITY_GROUP_ID] = ( pr.get(self.sg.RULE_SOURCE_SECURITY_GROUP_ID) or pr.get(self.sg.RULE_SOURCE_SECURITY_GROUP_NAME) ) rule.pop(self.sg.RULE_SOURCE_SECURITY_GROUP_NAME) rules.append(rule) return rules def create(self): sec = self.client.create_security_group({'security_group': { 'name': self.sg.physical_resource_name(), 'description': self.sg.properties[self.sg.GROUP_DESCRIPTION]} })['security_group'] self.sg.resource_id_set(sec['id']) self.delete_default_egress_rules(sec) if self.sg.properties[self.sg.SECURITY_GROUP_INGRESS]: rules_in = self._prop_rules_to_common( self.sg.properties, self.sg.SECURITY_GROUP_INGRESS) for rule in rules_in: rule['direction'] = 'ingress' self.create_rule(rule) if self.sg.properties[self.sg.SECURITY_GROUP_EGRESS]: rules_e = self._prop_rules_to_common( self.sg.properties, self.sg.SECURITY_GROUP_EGRESS) for rule in rules_e: rule['direction'] = 'egress' self.create_rule(rule) def create_rule(self, rule): try: self.client.create_security_group_rule({ 'security_group_rule': self._convert_to_neutron_rule(rule) }) except Exception as ex: # ignore error if the group already exists if not self.plugin.is_conflict(ex): raise def delete(self): if self.sg.resource_id is not None: try: sec = self.client.show_security_group( self.sg.resource_id)['security_group'] except Exception as ex: self.plugin.ignore_not_found(ex) else: for rule in sec['security_group_rules']: self.delete_rule(rule['id']) with self.plugin.ignore_not_found: self.client.delete_security_group(self.sg.resource_id) def delete_rule(self, rule_id): with self.plugin.ignore_not_found: 
self.client.delete_security_group_rule(rule_id) def delete_default_egress_rules(self, sec): """Delete the default rules which allow all egress traffic.""" if self.sg.properties[self.sg.SECURITY_GROUP_EGRESS]: for rule in sec['security_group_rules']: if rule['direction'] == 'egress': self.client.delete_security_group_rule(rule['id']) def update(self, props): sec = self.client.show_security_group( self.sg.resource_id)['security_group'] existing = self._res_rules_to_common( sec['security_group_rules']) updated = {} updated[self.sg.SECURITY_GROUP_EGRESS ] = self._prop_rules_to_common( props, self.sg.SECURITY_GROUP_EGRESS) updated[self.sg.SECURITY_GROUP_INGRESS ] = self._prop_rules_to_common( props, self.sg.SECURITY_GROUP_INGRESS) ids, new = self.diff_rules(existing, updated) for id in ids: self.delete_rule(id) for rule in new: self.create_rule(rule) def diff_rules(self, existing, updated): for rule in updated[self.sg.SECURITY_GROUP_EGRESS]: rule['direction'] = 'egress' for rule in updated[self.sg.SECURITY_GROUP_INGRESS]: rule['direction'] = 'ingress' updated_rules = list(six.itervalues(updated)) updated_all = updated_rules[0] + updated_rules[1] ids_to_delete = [id for id, rule in existing.items() if rule not in updated_all] rules_to_create = [rule for rule in updated_all if rule not in six.itervalues(existing)] return ids_to_delete, rules_to_create class SecurityGroup(resource.Resource): PROPERTIES = ( GROUP_DESCRIPTION, VPC_ID, SECURITY_GROUP_INGRESS, SECURITY_GROUP_EGRESS, ) = ( 'GroupDescription', 'VpcId', 'SecurityGroupIngress', 'SecurityGroupEgress', ) _RULE_KEYS = ( RULE_CIDR_IP, RULE_FROM_PORT, RULE_TO_PORT, RULE_IP_PROTOCOL, RULE_SOURCE_SECURITY_GROUP_ID, RULE_SOURCE_SECURITY_GROUP_NAME, RULE_SOURCE_SECURITY_GROUP_OWNER_ID, ) = ( 'CidrIp', 'FromPort', 'ToPort', 'IpProtocol', 'SourceSecurityGroupId', 'SourceSecurityGroupName', 'SourceSecurityGroupOwnerId', ) _rule_schema = { RULE_CIDR_IP: properties.Schema( properties.Schema.STRING ), RULE_FROM_PORT: properties.Schema( properties.Schema.STRING ), RULE_TO_PORT: properties.Schema( properties.Schema.STRING ), RULE_IP_PROTOCOL: properties.Schema( properties.Schema.STRING ), RULE_SOURCE_SECURITY_GROUP_ID: properties.Schema( properties.Schema.STRING ), RULE_SOURCE_SECURITY_GROUP_NAME: properties.Schema( properties.Schema.STRING ), RULE_SOURCE_SECURITY_GROUP_OWNER_ID: properties.Schema( properties.Schema.STRING, implemented=False ), } properties_schema = { GROUP_DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the security group.'), required=True ), VPC_ID: properties.Schema( properties.Schema.STRING, _('Physical ID of the VPC. 
Not implemented.') ), SECURITY_GROUP_INGRESS: properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, _('List of security group ingress rules.'), schema=_rule_schema, ), update_allowed=True ), SECURITY_GROUP_EGRESS: properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, _('List of security group egress rules.'), schema=_rule_schema, ), update_allowed=True ), } def handle_create(self): NeutronSecurityGroup(self).create() def handle_delete(self): NeutronSecurityGroup(self).delete() def handle_update(self, json_snippet, tmpl_diff, prop_diff): if (self.SECURITY_GROUP_INGRESS in prop_diff or self.SECURITY_GROUP_EGRESS in prop_diff): props = json_snippet.properties(self.properties_schema, self.context) NeutronSecurityGroup(self).update(props) class SecurityGroupNotFound(exception.HeatException): msg_fmt = _('Security Group "%(group_name)s" not found') def resource_mapping(): return { 'AWS::EC2::SecurityGroup': SecurityGroup, } heat-10.0.2/heat/engine/resources/aws/ec2/subnet.py0000666000175000017500000000764213343562337022076 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import attributes from heat.engine import properties from heat.engine import resource from heat.engine.resources.aws.ec2 import vpc class Subnet(resource.Resource): PROPERTIES = ( AVAILABILITY_ZONE, CIDR_BLOCK, VPC_ID, TAGS, ) = ( 'AvailabilityZone', 'CidrBlock', 'VpcId', 'Tags', ) _TAG_KEYS = ( TAG_KEY, TAG_VALUE, ) = ( 'Key', 'Value', ) ATTRIBUTES = ( AVAILABILITY_ZONE, ) properties_schema = { AVAILABILITY_ZONE: properties.Schema( properties.Schema.STRING, _('Availability zone in which you want the subnet.') ), CIDR_BLOCK: properties.Schema( properties.Schema.STRING, _('CIDR block to apply to subnet.'), required=True ), VPC_ID: properties.Schema( properties.Schema.STRING, _('Ref structure that contains the ID of the VPC on which you ' 'want to create the subnet.'), required=True ), TAGS: properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, _('List of tags to attach to this resource.'), schema={ TAG_KEY: properties.Schema( properties.Schema.STRING, required=True ), TAG_VALUE: properties.Schema( properties.Schema.STRING, required=True ), }, implemented=False, ) ), } attributes_schema = { AVAILABILITY_ZONE: attributes.Schema( _('Availability Zone of the subnet.'), type=attributes.Schema.STRING ), } default_client_name = 'neutron' def handle_create(self): # TODO(sbaker) Verify that this CidrBlock is within the vpc CidrBlock network_id = self.properties.get(self.VPC_ID) props = { 'network_id': network_id, 'cidr': self.properties.get(self.CIDR_BLOCK), 'name': self.physical_resource_name(), 'ip_version': 4 } subnet = self.client().create_subnet({'subnet': props})['subnet'] self.resource_id_set(subnet['id']) router = vpc.VPC.router_for_vpc(self.client(), network_id) if router: self.client().add_interface_router( router['id'], {'subnet_id': subnet['id']}) def 
handle_delete(self): if self.resource_id is None: return network_id = self.properties.get(self.VPC_ID) subnet_id = self.resource_id with self.client_plugin().ignore_not_found: router = vpc.VPC.router_for_vpc(self.client(), network_id) if router: self.client().remove_interface_router( router['id'], {'subnet_id': subnet_id}) with self.client_plugin().ignore_not_found: self.client().delete_subnet(subnet_id) def _resolve_attribute(self, name): if name == self.AVAILABILITY_ZONE: return self.properties.get(self.AVAILABILITY_ZONE) def resource_mapping(): return { 'AWS::EC2::Subnet': Subnet, } heat-10.0.2/heat/engine/resources/aws/__init__.py0000666000175000017500000000000013343562337021641 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/iam/0000775000175000017500000000000013343562672020310 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/iam/__init__.py0000666000175000017500000000000013343562337022407 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/iam/user.py0000666000175000017500000002401613343562337021643 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources import stack_user LOG = logging.getLogger(__name__) # # We are ignoring Groups as keystone does not support them. 
# For now support users and accesskeys, # We also now support a limited heat-native Policy implementation, and # the native access policy resource is located at: # heat/engine/resources/openstack/access_policy.py # class User(stack_user.StackUser): PROPERTIES = ( PATH, GROUPS, LOGIN_PROFILE, POLICIES, ) = ( 'Path', 'Groups', 'LoginProfile', 'Policies', ) _LOGIN_PROFILE_KEYS = ( LOGIN_PROFILE_PASSWORD, ) = ( 'Password', ) properties_schema = { PATH: properties.Schema( properties.Schema.STRING, _('Not Implemented.') ), GROUPS: properties.Schema( properties.Schema.LIST, _('Not Implemented.') ), LOGIN_PROFILE: properties.Schema( properties.Schema.MAP, _('A login profile for the user.'), schema={ LOGIN_PROFILE_PASSWORD: properties.Schema( properties.Schema.STRING ), } ), POLICIES: properties.Schema( properties.Schema.LIST, _('Access policies to apply to the user.') ), } def _validate_policies(self, policies): for policy in (policies or []): # When we support AWS IAM style policies, we will have to accept # either a ref to an AWS::IAM::Policy defined in the stack, or # and embedded dict describing the policy directly, but for now # we only expect this list to contain strings, which must map # to an OS::Heat::AccessPolicy in this stack # If a non-string (e.g embedded IAM dict policy) is passed, we # ignore the policy (don't reject it because we previously ignored # and we don't want to break templates which previously worked if not isinstance(policy, six.string_types): LOG.debug("Ignoring policy %s, must be string " "resource name", policy) continue try: policy_rsrc = self.stack[policy] except KeyError: LOG.debug("Policy %(policy)s does not exist in stack " "%(stack)s", {'policy': policy, 'stack': self.stack.name}) return False if not callable(getattr(policy_rsrc, 'access_allowed', None)): LOG.debug("Policy %s is not an AccessPolicy resource", policy) return False return True def handle_create(self): profile = self.properties[self.LOGIN_PROFILE] if profile and self.LOGIN_PROFILE_PASSWORD in profile: self.password = profile[self.LOGIN_PROFILE_PASSWORD] if self.properties[self.POLICIES]: if not self._validate_policies(self.properties[self.POLICIES]): raise exception.InvalidTemplateAttribute(resource=self.name, key=self.POLICIES) super(User, self).handle_create() self.resource_id_set(self._get_user_id()) def get_reference_id(self): return self.physical_resource_name_or_FnGetRefId() def access_allowed(self, resource_name): policies = (self.properties[self.POLICIES] or []) for policy in policies: if not isinstance(policy, six.string_types): LOG.debug("Ignoring policy %s, must be string " "resource name", policy) continue policy_rsrc = self.stack[policy] if not policy_rsrc.access_allowed(resource_name): return False return True class AccessKey(resource.Resource): PROPERTIES = ( SERIAL, USER_NAME, STATUS, ) = ( 'Serial', 'UserName', 'Status', ) ATTRIBUTES = ( USER_NAME, SECRET_ACCESS_KEY, ) = ( 'UserName', 'SecretAccessKey', ) properties_schema = { SERIAL: properties.Schema( properties.Schema.INTEGER, _('Not Implemented.'), implemented=False ), USER_NAME: properties.Schema( properties.Schema.STRING, _('The name of the user that the new key will belong to.'), required=True ), STATUS: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), constraints=[ constraints.AllowedValues(['Active', 'Inactive']), ], implemented=False ), } attributes_schema = { USER_NAME: attributes.Schema( _('Username associated with the AccessKey.'), cache_mode=attributes.Schema.CACHE_NONE, 
type=attributes.Schema.STRING ), SECRET_ACCESS_KEY: attributes.Schema( _('Keypair secret key.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ), } def __init__(self, name, json_snippet, stack): super(AccessKey, self).__init__(name, json_snippet, stack) self._secret = None if self.resource_id: self._register_access_key() def _get_user(self): """Derive the keystone userid, stored in the User resource_id. Helper function to derive the keystone userid, which is stored in the resource_id of the User associated with this key. We want to avoid looking the name up via listing keystone users, as this requires admin rights in keystone, so FnGetAtt which calls _secret_accesskey won't work for normal non-admin users. """ # Lookup User resource by intrinsic reference (which is what is passed # into the UserName parameter. Would be cleaner to just make the User # resource return resource_id for FnGetRefId but the AWS definition of # user does say it returns a user name not ID return self.stack.resource_by_refid(self.properties[self.USER_NAME]) def handle_create(self): user = self._get_user() if user is None: raise exception.NotFound(_('could not find user %s') % self.properties[self.USER_NAME]) # The keypair is actually created and owned by the User resource kp = user._create_keypair() self.resource_id_set(kp.access) self._secret = kp.secret self._register_access_key() # Store the secret key, encrypted, in the DB so we don't have lookup # the user every time someone requests the SecretAccessKey attribute self.data_set('secret_key', kp.secret, redact=True) self.data_set('credential_id', kp.id, redact=True) def handle_delete(self): self._secret = None if self.resource_id is None: return user = self._get_user() if user is None: LOG.debug('Error deleting %s - user not found', str(self)) return user._delete_keypair() def _secret_accesskey(self): """Return the user's access key. Fetching it from keystone if necessary. 
""" if self._secret is None: if not self.resource_id: LOG.info('could not get secret for %(username)s ' 'Error:%(msg)s', {'username': self.properties[self.USER_NAME], 'msg': "resource_id not yet set"}) else: # First try to retrieve the secret from resource_data, but # for backwards compatibility, fall back to requesting from # keystone self._secret = self.data().get('secret_key') if self._secret is None: try: user_id = self._get_user().resource_id kp = self.keystone().get_ec2_keypair( user_id=user_id, access=self.resource_id) self._secret = kp.secret # Store the key in resource_data self.data_set('secret_key', kp.secret, redact=True) # And the ID of the v3 credential self.data_set('credential_id', kp.id, redact=True) except Exception as ex: LOG.info('could not get secret for %(username)s ' 'Error:%(msg)s', {'username': self.properties[self.USER_NAME], 'msg': ex}) return self._secret or '000-000-000' def _resolve_attribute(self, name): if name == self.USER_NAME: return self.properties[self.USER_NAME] elif name == self.SECRET_ACCESS_KEY: return self._secret_accesskey() def _register_access_key(self): def access_allowed(resource_name): return self._get_user().access_allowed(resource_name) self.stack.register_access_allowed_handler( self.resource_id, access_allowed) def resource_mapping(): return { 'AWS::IAM::User': User, 'AWS::IAM::AccessKey': AccessKey, } heat-10.0.2/heat/engine/resources/aws/lb/0000775000175000017500000000000013343562672020137 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/aws/lb/loadbalancer.py0000666000175000017500000005543613343562337023135 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os from oslo_config import cfg from oslo_log import log as logging import six from heat.common import exception from heat.common.i18n import _ from heat.common import template_format from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources import stack_resource LOG = logging.getLogger(__name__) lb_template_default = r''' { "AWSTemplateFormatVersion": "2010-09-09", "Description": "Built in HAProxy server using Fedora 21 x64 cloud image", "Parameters" : { "KeyName" : { "Type" : "String" }, "LbImageId" : { "Type" : "String", "Default" : "Fedora-Cloud-Base-20141203-21.x86_64" }, "LbFlavor" : { "Type" : "String", "Default" : "m1.small" }, "LBTimeout" : { "Type" : "String", "Default" : "600" }, "SecurityGroups" : { "Type" : "CommaDelimitedList", "Default" : [] } }, "Resources": { "latency_watcher": { "Type": "AWS::CloudWatch::Alarm", "Properties": { "MetricName": "Latency", "Namespace": "AWS/ELB", "Statistic": "Average", "Period": "60", "EvaluationPeriods": "1", "Threshold": "2", "AlarmActions": [], "ComparisonOperator": "GreaterThanThreshold" } }, "CfnLBUser" : { "Type" : "AWS::IAM::User" }, "CfnLBAccessKey" : { "Type" : "AWS::IAM::AccessKey", "Properties" : { "UserName" : {"Ref": "CfnLBUser"} } }, "LB_instance": { "Type": "AWS::EC2::Instance", "Metadata": { "AWS::CloudFormation::Init": { "config": { "packages": { "yum": { "haproxy" : [], "socat" : [] } }, "services": { "systemd": { "crond" : { "enabled" : "true", "ensureRunning" : "true" } } }, "files": { "/etc/cfn/cfn-credentials" : { "content" : { "Fn::Join" : ["", [ "AWSAccessKeyId=", { "Ref" : "CfnLBAccessKey" }, "\n", "AWSSecretKey=", {"Fn::GetAtt": ["CfnLBAccessKey", "SecretAccessKey"]}, "\n" ]]}, "mode" : "000400", "owner" : "root", "group" : "root" }, "/etc/cfn/cfn-hup.conf" : { "content" : { "Fn::Join" : ["", [ "[main]\n", "stack=", { "Ref" : "AWS::StackId" }, "\n", "credential-file=/etc/cfn/cfn-credentials\n", "region=", { "Ref" : "AWS::Region" }, "\n", "interval=60\n" ]]}, "mode" : "000400", "owner" : "root", "group" : "root" }, "/etc/cfn/hooks.conf" : { "content": { "Fn::Join" : ["", [ "[cfn-init]\n", "triggers=post.update\n", "path=Resources.LB_instance.Metadata\n", "action=/opt/aws/bin/cfn-init -s ", { "Ref": "AWS::StackId" }, " -r LB_instance ", " --region ", { "Ref": "AWS::Region" }, "\n", "runas=root\n", "\n", "[reload]\n", "triggers=post.update\n", "path=Resources.LB_instance.Metadata\n", "action=systemctl reload-or-restart haproxy.service\n", "runas=root\n" ]]}, "mode" : "000400", "owner" : "root", "group" : "root" }, "/etc/haproxy/haproxy.cfg": { "content": "", "mode": "000644", "owner": "root", "group": "root" }, "/root/haproxy_tmp.te": { "mode": "000600", "owner": "root", "group": "root", "content": { "Fn::Join" : [ "", [ "module haproxy_tmp 1.0;\n", "require { type tmp_t; type haproxy_t;", "class sock_file { rename write create unlink link };", "class dir { write remove_name add_name };}\n", "allow haproxy_t ", "tmp_t:dir { write remove_name add_name };\n", "allow haproxy_t ", "tmp_t:sock_file { rename write create unlink link};\n" ]]} }, "/tmp/cfn-hup-crontab.txt" : { "content" : { "Fn::Join" : ["", [ "MAIL=\"\"\n", "\n", "* * * * * /opt/aws/bin/cfn-hup -f\n", "* * * * * /opt/aws/bin/cfn-push-stats ", " --watch ", { "Ref" : "latency_watcher" }, " --haproxy\n" ]]}, "mode" : "000600", "owner" : "root", "group" : "root" } } } } }, "Properties": { "ImageId": { "Ref": "LbImageId" }, "InstanceType": { "Ref": "LbFlavor" }, "KeyName": { "Ref": 
"KeyName" }, "SecurityGroups": { "Ref": "SecurityGroups" }, "UserData": { "Fn::Base64": { "Fn::Join": ["", [ "#!/bin/bash -v\n", "# Helper function\n", "function error_exit\n", "{\n", " /opt/aws/bin/cfn-signal -e 1 -r \"$1\" '", { "Ref" : "WaitHandle" }, "'\n", " exit 1\n", "}\n", "/opt/aws/bin/cfn-init -s ", { "Ref": "AWS::StackId" }, " -r LB_instance ", " --region ", { "Ref": "AWS::Region" }, " || error_exit 'Failed to run cfn-init'\n", "# HAProxy+SELinux, https://www.mankier.com/8/haproxy_selinux \n", "# this is exported by selinux-policy >=3.12.1.196\n", "setsebool haproxy_connect_any 1\n", "# when the location of haproxy stats file is fixed\n", "# in heat-cfntools and AWS::ElasticLoadBalancing::LoadBalancer\n", "# to point to /var/lib/haproxy/stats, \n", "# this next block can be removed.\n", "# compile custom module to allow /tmp files and sockets access\n", "cd /root\n", "checkmodule -M -m -o haproxy_tmp.mod haproxy_tmp.te\n", "semodule_package -o haproxy_tmp.pp -m haproxy_tmp.mod\n", "semodule -i haproxy_tmp.pp\n", "touch /tmp/.haproxy-stats\n", "semanage fcontext -a -t haproxy_tmpfs_t /tmp/.haproxy-stats\n", "restorecon -R -v /tmp/.haproxy-stats\n", "# install cfn-hup crontab\n", "crontab /tmp/cfn-hup-crontab.txt\n", "# restart haproxy service to catch initial changes\n", "systemctl reload-or-restart haproxy.service\n", "# LB setup completed, signal success\n", "/opt/aws/bin/cfn-signal -e 0 -r \"LB server setup complete\" '", { "Ref" : "WaitHandle" }, "'\n" ]]}} } }, "WaitHandle" : { "Type" : "AWS::CloudFormation::WaitConditionHandle" }, "WaitCondition" : { "Type" : "AWS::CloudFormation::WaitCondition", "DependsOn" : "LB_instance", "Properties" : { "Handle" : {"Ref" : "WaitHandle"}, "Timeout" : {"Ref" : "LBTimeout"} } } }, "Outputs": { "PublicIp": { "Value": { "Fn::GetAtt": [ "LB_instance", "PublicIp" ] }, "Description": "instance IP" } } } ''' # Allow user to provide alternative nested stack template to the above loadbalancer_opts = [ cfg.StrOpt('loadbalancer_template', help=_('Custom template for the built-in ' 'loadbalancer nested stack.'))] cfg.CONF.register_opts(loadbalancer_opts) class LoadBalancer(stack_resource.StackResource): """Implements a HAProxy-bearing instance as a nested stack. The template for the nested stack can be redefined with ``loadbalancer_template`` option in ``heat.conf``. Generally the image used for the instance must have the following packages installed or available for installation at runtime:: - heat-cfntools and its dependencies like python-psutil - cronie - socat - haproxy Current default builtin template uses Fedora 21 x86_64 base cloud image (https://getfedora.org/cloud/download/) and apart from installing packages goes through some hoops around SELinux due to pecularities of heat-cfntools. 
""" PROPERTIES = ( AVAILABILITY_ZONES, HEALTH_CHECK, INSTANCES, LISTENERS, APP_COOKIE_STICKINESS_POLICY, LBCOOKIE_STICKINESS_POLICY, SECURITY_GROUPS, SUBNETS, ) = ( 'AvailabilityZones', 'HealthCheck', 'Instances', 'Listeners', 'AppCookieStickinessPolicy', 'LBCookieStickinessPolicy', 'SecurityGroups', 'Subnets', ) _HEALTH_CHECK_KEYS = ( HEALTH_CHECK_HEALTHY_THRESHOLD, HEALTH_CHECK_INTERVAL, HEALTH_CHECK_TARGET, HEALTH_CHECK_TIMEOUT, HEALTH_CHECK_UNHEALTHY_THRESHOLD, ) = ( 'HealthyThreshold', 'Interval', 'Target', 'Timeout', 'UnhealthyThreshold', ) _LISTENER_KEYS = ( LISTENER_INSTANCE_PORT, LISTENER_LOAD_BALANCER_PORT, LISTENER_PROTOCOL, LISTENER_SSLCERTIFICATE_ID, LISTENER_POLICY_NAMES, ) = ( 'InstancePort', 'LoadBalancerPort', 'Protocol', 'SSLCertificateId', 'PolicyNames', ) ATTRIBUTES = ( CANONICAL_HOSTED_ZONE_NAME, CANONICAL_HOSTED_ZONE_NAME_ID, DNS_NAME, SOURCE_SECURITY_GROUP_GROUP_NAME, SOURCE_SECURITY_GROUP_OWNER_ALIAS, ) = ( 'CanonicalHostedZoneName', 'CanonicalHostedZoneNameID', 'DNSName', 'SourceSecurityGroup.GroupName', 'SourceSecurityGroup.OwnerAlias', ) properties_schema = { AVAILABILITY_ZONES: properties.Schema( properties.Schema.LIST, _('The Availability Zones in which to create the load balancer.'), required=True ), HEALTH_CHECK: properties.Schema( properties.Schema.MAP, _('An application health check for the instances.'), schema={ HEALTH_CHECK_HEALTHY_THRESHOLD: properties.Schema( properties.Schema.INTEGER, _('The number of consecutive health probe successes ' 'required before moving the instance to the ' 'healthy state.'), required=True ), HEALTH_CHECK_INTERVAL: properties.Schema( properties.Schema.INTEGER, _('The approximate interval, in seconds, between ' 'health checks of an individual instance.'), required=True ), HEALTH_CHECK_TARGET: properties.Schema( properties.Schema.STRING, _('The port being checked.'), required=True ), HEALTH_CHECK_TIMEOUT: properties.Schema( properties.Schema.INTEGER, _('Health probe timeout, in seconds.'), required=True ), HEALTH_CHECK_UNHEALTHY_THRESHOLD: properties.Schema( properties.Schema.INTEGER, _('The number of consecutive health probe failures ' 'required before moving the instance to the ' 'unhealthy state'), required=True ), } ), INSTANCES: properties.Schema( properties.Schema.LIST, _('The list of instance IDs load balanced.'), update_allowed=True ), LISTENERS: properties.Schema( properties.Schema.LIST, _('One or more listeners for this load balancer.'), schema=properties.Schema( properties.Schema.MAP, schema={ LISTENER_INSTANCE_PORT: properties.Schema( properties.Schema.INTEGER, _('TCP port on which the instance server is ' 'listening.'), required=True ), LISTENER_LOAD_BALANCER_PORT: properties.Schema( properties.Schema.INTEGER, _('The external load balancer port number.'), required=True ), LISTENER_PROTOCOL: properties.Schema( properties.Schema.STRING, _('The load balancer transport protocol to use.'), required=True, constraints=[ constraints.AllowedValues(['TCP', 'HTTP']), ] ), LISTENER_SSLCERTIFICATE_ID: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), implemented=False ), LISTENER_POLICY_NAMES: properties.Schema( properties.Schema.LIST, _('Not Implemented.'), implemented=False ), }, ), required=True ), APP_COOKIE_STICKINESS_POLICY: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), implemented=False ), LBCOOKIE_STICKINESS_POLICY: properties.Schema( properties.Schema.STRING, _('Not Implemented.'), implemented=False ), SECURITY_GROUPS: properties.Schema( properties.Schema.LIST, _('List of Security 
Groups assigned on current LB.'), update_allowed=True ), SUBNETS: properties.Schema( properties.Schema.LIST, _('Not Implemented.'), implemented=False ), } attributes_schema = { CANONICAL_HOSTED_ZONE_NAME: attributes.Schema( _("The name of the hosted zone that is associated with the " "LoadBalancer."), type=attributes.Schema.STRING ), CANONICAL_HOSTED_ZONE_NAME_ID: attributes.Schema( _("The ID of the hosted zone name that is associated with the " "LoadBalancer."), type=attributes.Schema.STRING ), DNS_NAME: attributes.Schema( _("The DNS name for the LoadBalancer."), type=attributes.Schema.STRING ), SOURCE_SECURITY_GROUP_GROUP_NAME: attributes.Schema( _("The security group that you can use as part of your inbound " "rules for your LoadBalancer's back-end instances."), type=attributes.Schema.STRING ), SOURCE_SECURITY_GROUP_OWNER_ALIAS: attributes.Schema( _("Owner of the source security group."), type=attributes.Schema.STRING ), } def _haproxy_config_global(self): return ''' global daemon maxconn 256 stats socket /tmp/.haproxy-stats defaults mode http timeout connect 5000ms timeout client 50000ms timeout server 50000ms ''' def _haproxy_config_frontend(self): listener = self.properties[self.LISTENERS][0] lb_port = listener[self.LISTENER_LOAD_BALANCER_PORT] return ''' frontend http bind *:%s default_backend servers ''' % (lb_port) def _haproxy_config_backend(self): health_chk = self.properties[self.HEALTH_CHECK] if health_chk: timeout = int(health_chk[self.HEALTH_CHECK_TIMEOUT]) timeout_check = 'timeout check %ds' % timeout spaces = ' ' else: timeout_check = '' spaces = '' return ''' backend servers balance roundrobin option http-server-close option forwardfor option httpchk %s%s ''' % (spaces, timeout_check) def _haproxy_config_servers(self, instances): listener = self.properties[self.LISTENERS][0] inst_port = listener[self.LISTENER_INSTANCE_PORT] spaces = ' ' check = '' health_chk = self.properties[self.HEALTH_CHECK] if health_chk: check = ' check inter %ss fall %s rise %s' % ( health_chk[self.HEALTH_CHECK_INTERVAL], health_chk[self.HEALTH_CHECK_UNHEALTHY_THRESHOLD], health_chk[self.HEALTH_CHECK_HEALTHY_THRESHOLD]) servers = [] n = 1 nova_cp = self.client_plugin('nova') for i in instances or []: ip = nova_cp.server_to_ipaddress(i) or '0.0.0.0' LOG.debug('haproxy server:%s', ip) servers.append('%sserver server%d %s:%s%s' % (spaces, n, ip, inst_port, check)) n = n + 1 return '\n'.join(servers) def _haproxy_config(self, instances): # initial simplifications: # - only one Listener # - only http (no tcp or ssl) # # option httpchk HEAD /check.txt HTTP/1.0 return '%s%s%s%s\n' % (self._haproxy_config_global(), self._haproxy_config_frontend(), self._haproxy_config_backend(), self._haproxy_config_servers(instances)) def get_parsed_template(self): if cfg.CONF.loadbalancer_template: with open(cfg.CONF.loadbalancer_template) as templ_fd: LOG.info('Using custom loadbalancer template %s', cfg.CONF.loadbalancer_template) contents = templ_fd.read() else: contents = lb_template_default return template_format.parse(contents) def child_params(self): params = {} params['SecurityGroups'] = self.properties[self.SECURITY_GROUPS] # If the owning stack defines KeyName, we use that key for the nested # template, otherwise use no key for magic_param in ('KeyName', 'LbFlavor', 'LBTimeout', 'LbImageId'): if magic_param in self.stack.parameters: params[magic_param] = self.stack.parameters[magic_param] return params def child_template(self): templ = self.get_parsed_template() # If the owning stack defines KeyName, we use that 
key for the nested # template, otherwise use no key if 'KeyName' not in self.stack.parameters: del templ['Resources']['LB_instance']['Properties']['KeyName'] del templ['Parameters']['KeyName'] return templ def handle_create(self): templ = self.child_template() params = self.child_params() if self.properties[self.INSTANCES]: md = templ['Resources']['LB_instance']['Metadata'] files = md['AWS::CloudFormation::Init']['config']['files'] cfg = self._haproxy_config(self.properties[self.INSTANCES]) files['/etc/haproxy/haproxy.cfg']['content'] = cfg return self.create_with_template(templ, params) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Re-generate the Metadata. Save it to the db. Rely on the cfn-hup to reconfigure HAProxy. """ new_props = json_snippet.properties(self.properties_schema, self.context) # Valid use cases are: # - Membership controlled by members property in template # - Empty members property in template; membership controlled by # "updates" triggered from autoscaling group. # Mixing the two will lead to undefined behaviour. if (self.INSTANCES in prop_diff and (self.properties[self.INSTANCES] is not None or new_props[self.INSTANCES] is not None)): cfg = self._haproxy_config(prop_diff[self.INSTANCES]) md = self.nested()['LB_instance'].metadata_get() files = md['AWS::CloudFormation::Init']['config']['files'] files['/etc/haproxy/haproxy.cfg']['content'] = cfg self.nested()['LB_instance'].metadata_set(md) if self.SECURITY_GROUPS in prop_diff: templ = self.child_template() params = self.child_params() params['SecurityGroups'] = new_props[self.SECURITY_GROUPS] self.update_with_template(templ, params) def check_update_complete(self, updater): """Because we are not calling update_with_template, return True.""" return True def validate(self): """Validate any of the provided params.""" res = super(LoadBalancer, self).validate() if res: return res if (cfg.CONF.loadbalancer_template and not os.access(cfg.CONF.loadbalancer_template, os.R_OK)): msg = _('Custom LoadBalancer template can not be found') raise exception.StackValidationFailed(message=msg) health_chk = self.properties[self.HEALTH_CHECK] if health_chk: interval = float(health_chk[self.HEALTH_CHECK_INTERVAL]) timeout = float(health_chk[self.HEALTH_CHECK_TIMEOUT]) if interval < timeout: return {'Error': 'Interval must be larger than Timeout'} def get_reference_id(self): return six.text_type(self.name) def _resolve_attribute(self, name): """We don't really support any of these yet.""" if name == self.DNS_NAME: try: return self.get_output('PublicIp') except exception.NotFound: raise exception.InvalidTemplateAttribute(resource=self.name, key=name) elif name in self.attributes_schema: # Not sure if we should return anything for the other attribs # since they aren't really supported in any meaningful way return '' def resource_mapping(): return { 'AWS::ElasticLoadBalancing::LoadBalancer': LoadBalancer, } heat-10.0.2/heat/engine/resources/aws/lb/__init__.py0000666000175000017500000000000013343562337022236 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/0000775000175000017500000000000013343562672020737 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/heat/0000775000175000017500000000000013343562672021660 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/heat/none_resource.py0000666000175000017500000000445513343562340025102 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except 
in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six import uuid from heat.engine import properties from heat.engine import resource from heat.engine import support class NoneResource(resource.Resource): """Enables easily disabling certain resources via the resource_registry. It does nothing, but can effectively stub out any other resource because it will accept any properties and return any attribute (as None). Note that this resource always does nothing on update (e.g. it is not replaced even if a change to the stubbed resource's properties would cause replacement). """ support_status = support.SupportStatus(version='5.0.0') properties_schema = {} attributes_schema = {} IS_PLACEHOLDER = 'is_placeholder' def _needs_update(self, after, before, after_props, before_props, prev_resource, check_init_complete=True): return False def reparse(self, client_resolve=True): self.properties = properties.Properties(schema={}, data={}) self.translate_properties(self.properties, client_resolve) def handle_create(self): self.resource_id_set(six.text_type(uuid.uuid4())) # set the is_placeholder flag when this resource is used to replace # the original resource with a placeholder. self.data_set(self.IS_PLACEHOLDER, 'True') def validate(self): pass def get_attribute(self, key, *path): return None def handle_delete(self): # Will not trigger the delete method in the client if this is not # a placeholder resource. if not self.data().get(self.IS_PLACEHOLDER): return super(NoneResource, self).handle_delete() return self.resource_id def resource_mapping(): return { 'OS::Heat::None': NoneResource, } heat-10.0.2/heat/engine/resources/openstack/heat/structured_config.py0000666000175000017500000002155313343562340025763 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import copy import functools import six from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.heat import software_config as sc from heat.engine.resources.openstack.heat import software_deployment as sd from heat.engine import rsrc_defn from heat.engine import support class StructuredConfig(sc.SoftwareConfig): """A resource with the same logic as OS::Heat::SoftwareConfig. This resource is like OS::Heat::SoftwareConfig except that the config property is represented by a Map rather than a String. This is useful for configuration tools which use YAML or JSON as their configuration syntax. The resulting configuration is transferred, stored and returned by the software_configs API as parsed JSON.
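For example, a minimal illustrative snippet (the keys under ``config`` are arbitrary)::

    resources:
      my_config:
        type: OS::Heat::StructuredConfig
        properties:
          config:
            users:
              - name: admin
                groups: wheel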
""" support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( GROUP, CONFIG, OPTIONS, INPUTS, OUTPUTS ) = ( sc.SoftwareConfig.GROUP, sc.SoftwareConfig.CONFIG, sc.SoftwareConfig.OPTIONS, sc.SoftwareConfig.INPUTS, sc.SoftwareConfig.OUTPUTS ) properties_schema = { GROUP: sc.SoftwareConfig.properties_schema[GROUP], OPTIONS: sc.SoftwareConfig.properties_schema[OPTIONS], INPUTS: sc.SoftwareConfig.properties_schema[INPUTS], OUTPUTS: sc.SoftwareConfig.properties_schema[OUTPUTS], CONFIG: properties.Schema( properties.Schema.MAP, _('Map representing the configuration data structure which will ' 'be serialized to JSON format.') ) } class StructuredDeployment(sd.SoftwareDeployment): """A resource which has same logic with OS::Heat::SoftwareDeployment. A deployment resource like OS::Heat::SoftwareDeployment, but which performs input value substitution on the config defined by a OS::Heat::StructuredConfig resource. Some configuration tools have no concept of inputs, so the input value substitution needs to occur in the deployment resource. An example of this is the JSON metadata consumed by the cfn-init tool. Where the config contains {get_input: input_name} this will be substituted with the value of input_name in this resource's input_values. If get_input needs to be passed through to the substituted configuration then a different input_key property value can be specified. """ support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( CONFIG, SERVER, INPUT_VALUES, DEPLOY_ACTIONS, NAME, SIGNAL_TRANSPORT, INPUT_KEY, INPUT_VALUES_VALIDATE ) = ( sd.SoftwareDeployment.CONFIG, sd.SoftwareDeployment.SERVER, sd.SoftwareDeployment.INPUT_VALUES, sd.SoftwareDeployment.DEPLOY_ACTIONS, sd.SoftwareDeployment.NAME, sd.SoftwareDeployment.SIGNAL_TRANSPORT, 'input_key', 'input_values_validate' ) _sd_ps = sd.SoftwareDeployment.properties_schema properties_schema = { CONFIG: _sd_ps[CONFIG], SERVER: _sd_ps[SERVER], INPUT_VALUES: _sd_ps[INPUT_VALUES], DEPLOY_ACTIONS: _sd_ps[DEPLOY_ACTIONS], SIGNAL_TRANSPORT: _sd_ps[SIGNAL_TRANSPORT], NAME: _sd_ps[NAME], INPUT_KEY: properties.Schema( properties.Schema.STRING, _('Name of key to use for substituting inputs during deployment.'), default='get_input', ), INPUT_VALUES_VALIDATE: properties.Schema( properties.Schema.STRING, _('Perform a check on the input values passed to verify that ' 'each required input has a corresponding value. 
' 'When the property is set to STRICT and no value is passed, ' 'an exception is raised.'), default='LAX', constraints=[ constraints.AllowedValues(['LAX', 'STRICT']), ], ) } def empty_config(self): return {} def _build_derived_config(self, action, source, derived_inputs, derived_options): cfg = source.get(sc.SoftwareConfig.CONFIG) input_key = self.properties[self.INPUT_KEY] check_input_val = self.properties[self.INPUT_VALUES_VALIDATE] inputs = dict(i.input_data() for i in derived_inputs) return self.parse(inputs, input_key, cfg, check_input_val) @staticmethod def get_input_key_arg(snippet, input_key): if len(snippet) != 1: return None fn_name, fn_arg = next(six.iteritems(snippet)) if (fn_name == input_key and isinstance(fn_arg, six.string_types)): return fn_arg @staticmethod def get_input_key_value(fn_arg, inputs, check_input_val='LAX'): if check_input_val == 'STRICT' and fn_arg not in inputs: raise exception.UserParameterMissing(key=fn_arg) return inputs.get(fn_arg) @staticmethod def parse(inputs, input_key, snippet, check_input_val='LAX'): parse = functools.partial( StructuredDeployment.parse, inputs, input_key, check_input_val=check_input_val) if isinstance(snippet, collections.Mapping): fn_arg = StructuredDeployment.get_input_key_arg(snippet, input_key) if fn_arg is not None: return StructuredDeployment.get_input_key_value(fn_arg, inputs, check_input_val ) return dict((k, parse(v)) for k, v in six.iteritems(snippet)) elif (not isinstance(snippet, six.string_types) and isinstance(snippet, collections.Iterable)): return [parse(v) for v in snippet] else: return snippet class StructuredDeploymentGroup(sd.SoftwareDeploymentGroup): """This resource associates a group of servers with some configuration. This resource works similarly to OS::Heat::SoftwareDeploymentGroup, but for structured resources.
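For example, a minimal illustrative snippet (the server names and IDs are placeholders)::

    resources:
      my_deployments:
        type: OS::Heat::StructuredDeploymentGroup
        properties:
          config: {get_resource: my_config}
          servers:
            server1: ec14c864-096e-4e27-bb8a-2c2b4dc6f3f5
            server2: 0e86b067-49a4-4528-b2df-2c0a4b3d5dcd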
""" PROPERTIES = ( SERVERS, CONFIG, INPUT_VALUES, DEPLOY_ACTIONS, NAME, SIGNAL_TRANSPORT, INPUT_KEY, INPUT_VALUES_VALIDATE, ) = ( sd.SoftwareDeploymentGroup.SERVERS, sd.SoftwareDeploymentGroup.CONFIG, sd.SoftwareDeploymentGroup.INPUT_VALUES, sd.SoftwareDeploymentGroup.DEPLOY_ACTIONS, sd.SoftwareDeploymentGroup.NAME, sd.SoftwareDeploymentGroup.SIGNAL_TRANSPORT, StructuredDeployment.INPUT_KEY, StructuredDeployment.INPUT_VALUES_VALIDATE ) _sds_ps = sd.SoftwareDeploymentGroup.properties_schema properties_schema = { SERVERS: _sds_ps[SERVERS], CONFIG: _sds_ps[CONFIG], INPUT_VALUES: _sds_ps[INPUT_VALUES], DEPLOY_ACTIONS: _sds_ps[DEPLOY_ACTIONS], SIGNAL_TRANSPORT: _sds_ps[SIGNAL_TRANSPORT], NAME: _sds_ps[NAME], INPUT_KEY: StructuredDeployment.properties_schema[INPUT_KEY], INPUT_VALUES_VALIDATE: StructuredDeployment.properties_schema[INPUT_VALUES_VALIDATE], } def build_resource_definition(self, res_name, res_defn): props = copy.deepcopy(res_defn) servers = props.pop(self.SERVERS) props[StructuredDeployment.SERVER] = servers.get(res_name) return rsrc_defn.ResourceDefinition(res_name, 'OS::Heat::StructuredDeployment', props, None) class StructuredDeployments(StructuredDeploymentGroup): hidden_msg = _('Please use OS::Heat::StructuredDeploymentGroup instead.') support_status = support.SupportStatus( status=support.HIDDEN, message=hidden_msg, version='7.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2'), substitute_class=StructuredDeploymentGroup) def resource_mapping(): return { 'OS::Heat::StructuredConfig': StructuredConfig, 'OS::Heat::StructuredDeployment': StructuredDeployment, 'OS::Heat::StructuredDeploymentGroup': StructuredDeploymentGroup, 'OS::Heat::StructuredDeployments': StructuredDeployments, } heat-10.0.2/heat/engine/resources/openstack/heat/cloud_config.py0000666000175000017500000000460113343562337024666 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.common import template_format from heat.engine import properties from heat.engine.resources.openstack.heat import software_config from heat.engine import support from heat.rpc import api as rpc_api class CloudConfig(software_config.SoftwareConfig): """A configuration resource for representing cloud-init cloud-config. This resource allows cloud-config YAML to be defined and stored by the config API. Any intrinsic functions called in the config will be resolved before storing the result. This resource will generally be referenced by OS::Nova::Server user_data, or OS::Heat::MultipartMime parts config. Since cloud-config is boot-only configuration, any changes to the definition will result in the replacement of all servers which reference it. 
""" support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( CLOUD_CONFIG ) = ( 'cloud_config' ) properties_schema = { CLOUD_CONFIG: properties.Schema( properties.Schema.MAP, _('Map representing the cloud-config data structure which will ' 'be formatted as YAML.') ) } def handle_create(self): cloud_config = template_format.yaml.dump( self.properties[self.CLOUD_CONFIG], Dumper=template_format.yaml_dumper) props = { rpc_api.SOFTWARE_CONFIG_NAME: self.physical_resource_name(), rpc_api.SOFTWARE_CONFIG_CONFIG: '#cloud-config\n%s' % cloud_config, rpc_api.SOFTWARE_CONFIG_GROUP: 'Heat::Ungrouped' } sc = self.rpc_client().create_software_config(self.context, **props) self.resource_id_set(sc[rpc_api.SOFTWARE_CONFIG_ID]) def resource_mapping(): return { 'OS::Heat::CloudConfig': CloudConfig, } heat-10.0.2/heat/engine/resources/openstack/heat/software_component.py0000666000175000017500000001455113343562340026146 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints as constr from heat.engine import properties from heat.engine import resource from heat.engine.resources.openstack.heat import software_config as sc from heat.engine import support from heat.rpc import api as rpc_api class SoftwareComponent(sc.SoftwareConfig): """A resource for describing and storing a software component. This resource is similar to OS::Heat::SoftwareConfig. In contrast to SoftwareConfig which allows for storing only one configuration (e.g. one script), SoftwareComponent allows for storing multiple configurations to address handling of all lifecycle hooks (CREATE, UPDATE, SUSPEND, RESUME, DELETE) for a software component in one place. This resource is backed by the persistence layer and the API of the SoftwareConfig resource, and only adds handling for the additional 'configs' property and attribute. """ support_status = support.SupportStatus(version='2014.2') PROPERTIES = ( CONFIGS, INPUTS, OUTPUTS, OPTIONS, ) = ( 'configs', sc.SoftwareConfig.INPUTS, sc.SoftwareConfig.OUTPUTS, sc.SoftwareConfig.OPTIONS, ) CONFIG_PROPERTIES = ( CONFIG_ACTIONS, CONFIG_CONFIG, CONFIG_TOOL, ) = ( 'actions', 'config', 'tool', ) ATTRIBUTES = ( CONFIGS_ATTR, ) = ( 'configs', ) # properties schema for one entry in the 'configs' list config_schema = properties.Schema( properties.Schema.MAP, schema={ CONFIG_ACTIONS: properties.Schema( # Note: This properties schema allows for custom actions to be # specified, which will however require special handling in # in-instance hooks. By default, only the standard actions # stated below will be handled. properties.Schema.LIST, _('Lifecycle actions to which the configuration applies. 
' 'The string values provided for this property can include ' 'the standard resource actions CREATE, DELETE, UPDATE, ' 'SUSPEND and RESUME supported by Heat.'), default=[resource.Resource.CREATE, resource.Resource.UPDATE], schema=properties.Schema(properties.Schema.STRING), constraints=[ constr.Length(min=1), ] ), CONFIG_CONFIG: sc.SoftwareConfig.properties_schema[ sc.SoftwareConfig.CONFIG ], CONFIG_TOOL: properties.Schema( properties.Schema.STRING, _('The configuration tool used to actually apply the ' 'configuration on a server. This string property has ' 'to be understood by in-instance tools running inside ' 'deployed servers.'), required=True ) } ) properties_schema = { CONFIGS: properties.Schema( properties.Schema.LIST, _('The list of configurations for the different lifecycle actions ' 'of the represented software component.'), schema=config_schema, constraints=[constr.Length(min=1)], required=True ), INPUTS: sc.SoftwareConfig.properties_schema[ sc.SoftwareConfig.INPUTS], OUTPUTS: sc.SoftwareConfig.properties_schema[ sc.SoftwareConfig.OUTPUTS], OPTIONS: sc.SoftwareConfig.properties_schema[ sc.SoftwareConfig.OPTIONS], } def handle_create(self): props = dict(self.properties) props[rpc_api.SOFTWARE_CONFIG_NAME] = self.physical_resource_name() # use config property of SoftwareConfig to store configs list configs = props.pop(self.CONFIGS) props[rpc_api.SOFTWARE_CONFIG_CONFIG] = {self.CONFIGS: configs} # set 'group' to enable component processing by in-instance hook props[rpc_api.SOFTWARE_CONFIG_GROUP] = 'component' sc = self.rpc_client().create_software_config(self.context, **props) self.resource_id_set(sc[rpc_api.SOFTWARE_CONFIG_ID]) def _resolve_attribute(self, name): """Retrieve attributes of the SoftwareComponent resource. 'configs' returns the list of configurations for the software component's lifecycle actions. If the attribute does not exist, an empty list is returned. """ if name == self.CONFIGS_ATTR and self.resource_id: with self.rpc_client().ignore_error_by_name('NotFound'): sc = self.rpc_client().show_software_config( self.context, self.resource_id) # configs list is stored in 'config' property of parent class # (see handle_create) return sc[rpc_api.SOFTWARE_CONFIG_CONFIG].get(self.CONFIGS) def validate(self): """Validate SoftwareComponent properties consistency.""" super(SoftwareComponent, self).validate() # One lifecycle action (e.g. CREATE) can only be associated with one # config; otherwise a way to define ordering would be required. configs = self.properties[self.CONFIGS] config_actions = set() for config in configs: actions = config.get(self.CONFIG_ACTIONS) if any(action in config_actions for action in actions): msg = _('Defining more than one configuration for the same ' 'action in SoftwareComponent "%s" is not allowed.' ) % self.name raise exception.StackValidationFailed(message=msg) config_actions.update(actions) def resource_mapping(): return { 'OS::Heat::SoftwareComponent': SoftwareComponent, } heat-10.0.2/heat/engine/resources/openstack/heat/access_policy.py0000666000175000017500000000417413343562337025060 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import properties from heat.engine import resource class AccessPolicy(resource.Resource): """Resource for defining which resources can be accessed by users. NOTE: Now this resource is actually associated with an AWS user resource, not any OS:: resource though it is registered under the OS namespace below. Resource for defining resources that users are allowed to access by the DescribeStackResource API. """ PROPERTIES = ( ALLOWED_RESOURCES, ) = ( 'AllowedResources', ) properties_schema = { ALLOWED_RESOURCES: properties.Schema( properties.Schema.LIST, _('Resources that users are allowed to access by the ' 'DescribeStackResource API.'), required=True ), } def handle_create(self): pass def validate(self): """Make sure all the AllowedResources are present.""" super(AccessPolicy, self).validate() resources = self.properties[self.ALLOWED_RESOURCES] # All of the provided resource names must exist in this stack for res in resources: if res not in self.stack: msg = _("AccessPolicy resource %s not in stack") % res raise exception.StackValidationFailed(message=msg) def access_allowed(self, resource_name): return resource_name in self.properties[self.ALLOWED_RESOURCES] def resource_mapping(): return { 'OS::Heat::AccessPolicy': AccessPolicy, } heat-10.0.2/heat/engine/resources/openstack/heat/resource_group.py0000666000175000017500000007674313343562351025312 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import copy import functools import itertools import six from oslo_log import log as logging from heat.common import exception from heat.common import grouputils from heat.common.i18n import _ from heat.common import timeutils from heat.engine import attributes from heat.engine import constraints from heat.engine import function from heat.engine import output from heat.engine import properties from heat.engine.resources import stack_resource from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import support from heat.scaling import rolling_update from heat.scaling import template as scl_template LOG = logging.getLogger(__name__) class ResourceGroup(stack_resource.StackResource): """Creates one or more identically configured nested resources. In addition to the `refs` attribute, this resource implements synthetic attributes that mirror those of the resources in the group. When getting an attribute from this resource, however, a list of attribute values for each resource in the group is returned. 
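For example, if the group wraps OS::Nova::Server members, ``{get_attr: [my_group, first_address]}`` would return one ``first_address`` value per member as a list (the group and attribute names here are illustrative).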
To get attribute values for a single resource in the group, synthetic attributes of the form `resource.{resource index}.{attribute name}` can be used. The resource ID of a particular resource in the group can be obtained via the synthetic attribute `resource.{resource index}`. Note that if you get an attribute without `{resource index}`, e.g. `[resource, {attribute_name}]`, you'll get a list of the attribute's values for all resources in the group. While each resource in the group will be identically configured, this resource does allow for some index-based customization of the properties of the resources in the group. For example:: resources: my_indexed_group: type: OS::Heat::ResourceGroup properties: count: 3 resource_def: type: OS::Nova::Server properties: # create a unique name for each server # using its index in the group name: my_server_%index% image: CentOS 6.5 flavor: 4GB Performance would result in a group of three servers having the same image and flavor, but names of `my_server_0`, `my_server_1`, and `my_server_2`. The variable used for substitution can be customized by using the `index_var` property. """ support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( COUNT, INDEX_VAR, RESOURCE_DEF, REMOVAL_POLICIES, REMOVAL_POLICIES_MODE, ) = ( 'count', 'index_var', 'resource_def', 'removal_policies', 'removal_policies_mode' ) _RESOURCE_DEF_KEYS = ( RESOURCE_DEF_TYPE, RESOURCE_DEF_PROPERTIES, RESOURCE_DEF_METADATA, ) = ( 'type', 'properties', 'metadata', ) _REMOVAL_POLICIES_KEYS = ( REMOVAL_RSRC_LIST, ) = ( 'resource_list', ) _REMOVAL_POLICY_MODES = ( REMOVAL_POLICY_APPEND, REMOVAL_POLICY_UPDATE ) = ( 'append', 'update' ) _ROLLING_UPDATES_SCHEMA_KEYS = ( MIN_IN_SERVICE, MAX_BATCH_SIZE, PAUSE_TIME, ) = ( 'min_in_service', 'max_batch_size', 'pause_time', ) _BATCH_CREATE_SCHEMA_KEYS = ( MAX_BATCH_SIZE, PAUSE_TIME, ) = ( 'max_batch_size', 'pause_time', ) _UPDATE_POLICY_SCHEMA_KEYS = ( ROLLING_UPDATE, BATCH_CREATE, ) = ( 'rolling_update', 'batch_create', ) ATTRIBUTES = ( REFS, REFS_MAP, ATTR_ATTRIBUTES, REMOVED_RSRC_LIST ) = ( 'refs', 'refs_map', 'attributes', 'removed_rsrc_list' ) properties_schema = { COUNT: properties.Schema( properties.Schema.INTEGER, _('The number of resources to create.'), default=1, constraints=[ constraints.Range(min=0), ], update_allowed=True ), INDEX_VAR: properties.Schema( properties.Schema.STRING, _('A variable that this resource will use to replace with the ' 'current index of a given resource in the group. Can be used, ' 'for example, to customize the name property of grouped ' 'servers in order to differentiate them when listed with ' 'the nova client.'), default="%index%", constraints=[ constraints.Length(min=3) ], support_status=support.SupportStatus(version='2014.2') ), RESOURCE_DEF: properties.Schema( properties.Schema.MAP, _('Resource definition for the resources in the group.
The value ' 'of this property is the definition of a resource just as if ' 'it had been declared in the template itself.'), schema={ RESOURCE_DEF_TYPE: properties.Schema( properties.Schema.STRING, _('The type of the resources in the group.'), required=True ), RESOURCE_DEF_PROPERTIES: properties.Schema( properties.Schema.MAP, _('Property values for the resources in the group.') ), RESOURCE_DEF_METADATA: properties.Schema( properties.Schema.MAP, _('Supplied metadata for the resources in the group.'), support_status=support.SupportStatus(version='5.0.0') ), }, required=True, update_allowed=True ), REMOVAL_POLICIES: properties.Schema( properties.Schema.LIST, _('Policies for removal of resources on update.'), schema=properties.Schema( properties.Schema.MAP, _('Policy to be processed when doing an update which ' 'requires removal of specific resources.'), schema={ REMOVAL_RSRC_LIST: properties.Schema( properties.Schema.LIST, _("List of resources to be removed " "when doing an update which requires removal of " "specific resources. " "The resource may be specified several ways: " "(1) The resource name, as in the nested stack, " "(2) The resource reference returned from " "get_resource in a template, as available via " "the 'refs' attribute. " "Note this is destructive on update when specified; " "even if the count is not being reduced, and once " "a resource name is removed, its name is never " "reused in subsequent updates." ), default=[] ), }, ), update_allowed=True, default=[], support_status=support.SupportStatus(version='2015.1') ), REMOVAL_POLICIES_MODE: properties.Schema( properties.Schema.STRING, _('How to handle changes to removal_policies on update. ' 'The default "append" mode appends to the internal list, ' '"update" replaces it on update.'), default=REMOVAL_POLICY_APPEND, constraints=[ constraints.AllowedValues(_REMOVAL_POLICY_MODES) ], update_allowed=True, support_status=support.SupportStatus(version='10.0.0') ), } attributes_schema = { REFS: attributes.Schema( _("A list of resource IDs for the resources in the group."), type=attributes.Schema.LIST ), REFS_MAP: attributes.Schema( _("A map of resource names to IDs for the resources in " "the group."), type=attributes.Schema.MAP, support_status=support.SupportStatus(version='7.0.0'), ), ATTR_ATTRIBUTES: attributes.Schema( _("A map of resource names to the specified attribute of each " "individual resource. 
" "Requires heat_template_version: 2014-10-16."), support_status=support.SupportStatus(version='2014.2'), type=attributes.Schema.MAP ), REMOVED_RSRC_LIST: attributes.Schema( _("A list of removed resource names."), support_status=support.SupportStatus(version='7.0.0'), type=attributes.Schema.LIST ), } rolling_update_schema = { MIN_IN_SERVICE: properties.Schema( properties.Schema.INTEGER, _('The minimum number of resources in service while ' 'rolling updates are being executed.'), constraints=[constraints.Range(min=0)], default=0), MAX_BATCH_SIZE: properties.Schema( properties.Schema.INTEGER, _('The maximum number of resources to replace at once.'), constraints=[constraints.Range(min=1)], default=1), PAUSE_TIME: properties.Schema( properties.Schema.NUMBER, _('The number of seconds to wait between batches of ' 'updates.'), constraints=[constraints.Range(min=0)], default=0), } batch_create_schema = { MAX_BATCH_SIZE: properties.Schema( properties.Schema.INTEGER, _('The maximum number of resources to create at once.'), constraints=[constraints.Range(min=1)], default=1 ), PAUSE_TIME: properties.Schema( properties.Schema.NUMBER, _('The number of seconds to wait between batches.'), constraints=[constraints.Range(min=0)], default=0 ), } update_policy_schema = { ROLLING_UPDATE: properties.Schema( properties.Schema.MAP, schema=rolling_update_schema, support_status=support.SupportStatus(version='5.0.0') ), BATCH_CREATE: properties.Schema( properties.Schema.MAP, schema=batch_create_schema, support_status=support.SupportStatus(version='5.0.0') ) } def get_size(self): return self.properties.get(self.COUNT) def validate_nested_stack(self): # Only validate the resource definition (which may be a # nested template) if count is non-zero, to enable folks # to disable features via a zero count if they wish if not self.get_size(): return first_name = next(self._resource_names()) test_tmpl = self._assemble_nested([first_name], include_all=True) res_def = next(six.itervalues(test_tmpl.resource_definitions(None))) # make sure we can resolve the nested resource type self.stack.env.get_class_to_instantiate(res_def.resource_type) try: name = "%s-%s" % (self.stack.name, self.name) nested_stack = self._parse_nested_stack( name, test_tmpl, self.child_params()) nested_stack.strict_validate = False nested_stack.validate() except Exception as ex: path = "%s<%s>" % (self.name, self.template_url) raise exception.StackValidationFailed( ex, path=[self.stack.t.RESOURCES, path]) def _current_blacklist(self): db_rsrc_names = self.data().get('name_blacklist') if db_rsrc_names: return db_rsrc_names.split(',') else: return [] def _get_new_blacklist_entries(self, properties, current_blacklist): insp = grouputils.GroupInspector.from_parent_resource(self) # Now we iterate over the removal policies, and update the blacklist # with any additional names for r in properties.get(self.REMOVAL_POLICIES, []): if self.REMOVAL_RSRC_LIST in r: # Tolerate string or int list values for n in r[self.REMOVAL_RSRC_LIST]: str_n = six.text_type(n) if (str_n in current_blacklist or self.resource_id is None or str_n in insp.member_names(include_failed=True)): yield str_n elif isinstance(n, six.string_types): try: refids = self.get_output(self.REFS_MAP) except (exception.NotFound, exception.TemplateOutputError) as op_err: LOG.debug('Falling back to resource_by_refid() ' ' due to %s', op_err) rsrc = self.nested().resource_by_refid(n) if rsrc is not None: yield rsrc.name else: if refids is not None: for name, refid in refids.items(): if refid == n: yield name 
break # Clear output cache from prior to stack update, so we don't get # outdated values after stack update. self._outputs = None def _update_name_blacklist(self, properties): """Resolve the remove_policies to names for removal.""" # To avoid reusing names after removal, we store a comma-separated # blacklist in the resource data - in cases where you want to # overwrite the stored data, removal_policies_mode: update can be used curr_bl = set(self._current_blacklist()) p_mode = properties.get(self.REMOVAL_POLICIES_MODE, self.REMOVAL_POLICY_APPEND) if p_mode == self.REMOVAL_POLICY_UPDATE: init_bl = set() else: init_bl = curr_bl updated_bl = init_bl | set(self._get_new_blacklist_entries(properties, curr_bl)) # If the blacklist has changed, update the resource data if updated_bl != curr_bl: self.data_set('name_blacklist', ','.join(sorted(updated_bl))) def _name_blacklist(self): """Get the list of resource names to blacklist.""" bl = set(self._current_blacklist()) if self.resource_id is None: bl |= set(self._get_new_blacklist_entries(self.properties, bl)) return bl def _resource_names(self, size=None): name_blacklist = self._name_blacklist() if size is None: size = self.get_size() def is_blacklisted(name): return name in name_blacklist candidates = six.moves.map(six.text_type, itertools.count()) return itertools.islice(six.moves.filterfalse(is_blacklisted, candidates), size) def _count_black_listed(self, existing_members): """Return the number of current resource names that are blacklisted.""" return len(self._name_blacklist() & set(existing_members)) def handle_create(self): self._update_name_blacklist(self.properties) if self.update_policy.get(self.BATCH_CREATE) and self.get_size(): batch_create = self.update_policy[self.BATCH_CREATE] max_batch_size = batch_create[self.MAX_BATCH_SIZE] pause_sec = batch_create[self.PAUSE_TIME] checkers = self._replace(0, max_batch_size, pause_sec) if checkers: checkers[0].start() return checkers else: names = self._resource_names() self.create_with_template(self._assemble_nested(names), self.child_params()) def check_create_complete(self, checkers=None): if checkers is None: return super(ResourceGroup, self).check_create_complete() for checker in checkers: if not checker.started(): checker.start() if not checker.step(): return False return True def _run_to_completion(self, template, timeout): updater = self.update_with_template(template, {}, timeout) while not super(ResourceGroup, self).check_update_complete(updater): yield def _run_update(self, total_capacity, max_updates, timeout): template = self._assemble_for_rolling_update(total_capacity, max_updates) return self._run_to_completion(template, timeout) def check_update_complete(self, checkers): for checker in checkers: if not checker.started(): checker.start() if not checker.step(): return False return True def res_def_changed(self, prop_diff): return self.RESOURCE_DEF in prop_diff def handle_update(self, json_snippet, tmpl_diff, prop_diff): if tmpl_diff: # parse update policy if tmpl_diff.update_policy_changed(): up = json_snippet.update_policy(self.update_policy_schema, self.context) self.update_policy = up checkers = [] self.properties = json_snippet.properties(self.properties_schema, self.context) self._update_name_blacklist(self.properties) if prop_diff and self.res_def_changed(prop_diff): updaters = self._try_rolling_update() if updaters: checkers.extend(updaters) if not checkers: resizer = scheduler.TaskRunner( self._run_to_completion, self._assemble_nested(self._resource_names()), 
self.stack.timeout_mins) checkers.append(resizer) checkers[0].start() return checkers def _attribute_output_name(self, *attr_path): if attr_path[0] == self.REFS: return self.REFS return ', '.join(six.text_type(a) for a in attr_path) def get_attribute(self, key, *path): if key == self.REMOVED_RSRC_LIST: return self._current_blacklist() if key == self.ATTR_ATTRIBUTES and not path: raise exception.InvalidTemplateAttribute(resource=self.name, key=key) is_resource_ref = (key.startswith("resource.") and not path and (len(key.split('.', 2)) == 2)) if is_resource_ref: output_name = self.REFS_MAP else: output_name = self._attribute_output_name(key, *path) if self.resource_id is not None: try: output = self.get_output(output_name) except (exception.NotFound, exception.TemplateOutputError) as op_err: LOG.debug('Falling back to grouputils due to %s', op_err) else: if is_resource_ref: try: target = key.split('.', 2)[1] return output[target] except KeyError: raise exception.NotFound(_("Member '%(mem)s' not " "found in group resource " "'%(grp)s'.") % {'mem': target, 'grp': self.name}) if key == self.REFS: return attributes.select_from_attribute(output, path) return output if key.startswith("resource."): return grouputils.get_nested_attrs(self, key, False, *path) names = self._resource_names() if key == self.REFS: vals = [grouputils.get_rsrc_id(self, key, False, n) for n in names] return attributes.select_from_attribute(vals, path) if key == self.REFS_MAP: refs_map = {n: grouputils.get_rsrc_id(self, key, False, n) for n in names} return refs_map if key == self.ATTR_ATTRIBUTES: return dict((n, grouputils.get_rsrc_attr( self, key, False, n, *path)) for n in names) path = [key] + list(path) return [grouputils.get_rsrc_attr(self, key, False, n, *path) for n in names] def _nested_output_defns(self, resource_names, get_attr_fn, get_res_fn): for attr in self.referenced_attrs(): if isinstance(attr, six.string_types): key, path = attr, [] else: key, path = attr[0], list(attr[1:]) output_name = self._attribute_output_name(key, *path) value = None if key.startswith("resource."): keycomponents = key.split('.', 2) res_name = keycomponents[1] attr_path = keycomponents[2:] + path if attr_path: if res_name in resource_names: value = get_attr_fn([res_name] + attr_path) else: output_name = key = self.REFS_MAP elif key == self.ATTR_ATTRIBUTES and path: value = {r: get_attr_fn([r] + path) for r in resource_names} elif key not in self.ATTRIBUTES: value = [get_attr_fn([r, key] + path) for r in resource_names] if key == self.REFS: value = [get_res_fn(r) for r in resource_names] if value is not None: yield output.OutputDefinition(output_name, value) value = {r: get_res_fn(r) for r in resource_names} yield output.OutputDefinition(self.REFS_MAP, value) def build_resource_definition(self, res_name, res_defn): res_def = copy.deepcopy(res_defn) props = res_def.get(self.RESOURCE_DEF_PROPERTIES) if props: props = self._handle_repl_val(res_name, props) res_type = res_def[self.RESOURCE_DEF_TYPE] meta = res_def[self.RESOURCE_DEF_METADATA] return rsrc_defn.ResourceDefinition(res_name, res_type, props, meta) def get_resource_def(self, include_all=False): """Returns the resource definition portion of the group. :param include_all: if False, only properties for the resource definition that are not empty will be included :type include_all: bool :return: resource definition for the group :rtype: dict """ # At this stage, we don't mind if all of the parameters have values # assigned. 
Pass in a custom resolver to the properties to not # error when a parameter does not have a user entered value. def ignore_param_resolve(snippet): if isinstance(snippet, function.Function): try: return snippet.result() except exception.UserParameterMissing: return None if isinstance(snippet, collections.Mapping): return dict((k, ignore_param_resolve(v)) for k, v in snippet.items()) elif (not isinstance(snippet, six.string_types) and isinstance(snippet, collections.Iterable)): return [ignore_param_resolve(v) for v in snippet] return snippet self.properties.resolve = ignore_param_resolve res_def = self.properties[self.RESOURCE_DEF] if not include_all: return self._clean_props(res_def) return res_def def _clean_props(self, res_defn): res_def = copy.deepcopy(res_defn) props = res_def.get(self.RESOURCE_DEF_PROPERTIES) if props: clean = dict((k, v) for k, v in props.items() if v is not None) props = clean res_def[self.RESOURCE_DEF_PROPERTIES] = props return res_def def _handle_repl_val(self, res_name, val): repl_var = self.properties[self.INDEX_VAR] def recurse(x): return self._handle_repl_val(res_name, x) if isinstance(val, six.string_types): return val.replace(repl_var, res_name) elif isinstance(val, collections.Mapping): return {k: recurse(v) for k, v in val.items()} elif isinstance(val, collections.Sequence): return [recurse(v) for v in val] return val def _add_output_defns_to_template(self, tmpl, resource_names): att_func = 'get_attr' get_attr = functools.partial(tmpl.functions[att_func], None, att_func) res_func = 'get_resource' get_res = functools.partial(tmpl.functions[res_func], None, res_func) for odefn in self._nested_output_defns(resource_names, get_attr, get_res): tmpl.add_output(odefn) def _assemble_nested(self, names, include_all=False, template_version=('heat_template_version', '2015-04-30')): def_dict = self.get_resource_def(include_all) definitions = [(k, self.build_resource_definition(k, def_dict)) for k in names] tmpl = scl_template.make_template(definitions, version=template_version) self._add_output_defns_to_template(tmpl, [k for k, d in definitions]) return tmpl def _assemble_for_rolling_update(self, total_capacity, max_updates, include_all=False, template_version=('heat_template_version', '2015-04-30')): names = list(self._resource_names(total_capacity)) name_blacklist = self._name_blacklist() valid_resources = [(n, d) for n, d in grouputils.get_member_definitions(self) if n not in name_blacklist] targ_cap = self.get_size() def replace_priority(res_item): name, defn = res_item try: index = names.index(name) except ValueError: # High priority - delete immediately return 0 else: if index < targ_cap: # Update higher indices first return targ_cap - index else: # Low priority - don't update return total_capacity old_resources = sorted(valid_resources, key=replace_priority) existing_names = set(n for n, d in valid_resources) new_names = six.moves.filterfalse(lambda n: n in existing_names, names) res_def = self.get_resource_def(include_all) definitions = scl_template.member_definitions( old_resources, res_def, total_capacity, max_updates, lambda: next(new_names), self.build_resource_definition) tmpl = scl_template.make_template(definitions, version=template_version) self._add_output_defns_to_template(tmpl, names) return tmpl def _try_rolling_update(self): if self.update_policy[self.ROLLING_UPDATE]: policy = self.update_policy[self.ROLLING_UPDATE] return self._replace(policy[self.MIN_IN_SERVICE], policy[self.MAX_BATCH_SIZE], policy[self.PAUSE_TIME]) def _resolve_attribute(self, 
name): if name == self.REMOVED_RSRC_LIST: return self._current_blacklist() def _update_timeout(self, batch_cnt, pause_sec): total_pause_time = pause_sec * max(batch_cnt - 1, 0) if total_pause_time >= self.stack.timeout_secs(): msg = _('The current update policy will result in stack update ' 'timeout.') raise ValueError(msg) return self.stack.timeout_secs() - total_pause_time @staticmethod def _get_batches(targ_cap, curr_cap, batch_size, min_in_service): updated = 0 while rolling_update.needs_update(targ_cap, curr_cap, updated): new_cap, total_new = rolling_update.next_batch(targ_cap, curr_cap, updated, batch_size, min_in_service) yield new_cap, total_new updated += total_new - max(new_cap - max(curr_cap, targ_cap), 0) curr_cap = new_cap def _replace(self, min_in_service, batch_size, pause_sec): def pause_between_batch(pause_sec): duration = timeutils.Duration(pause_sec) while not duration.expired(): yield # current capacity not including existing blacklisted inspector = grouputils.GroupInspector.from_parent_resource(self) num_blacklist = self._count_black_listed( inspector.member_names(include_failed=False)) num_resources = inspector.size(include_failed=True) curr_cap = num_resources - num_blacklist batches = list(self._get_batches(self.get_size(), curr_cap, batch_size, min_in_service)) update_timeout = self._update_timeout(len(batches), pause_sec) def tasks(): for index, (curr_cap, max_upd) in enumerate(batches): yield scheduler.TaskRunner(self._run_update, curr_cap, max_upd, update_timeout) if index < (len(batches) - 1) and pause_sec > 0: yield scheduler.TaskRunner(pause_between_batch, pause_sec) return list(tasks()) def child_template(self): names = self._resource_names() return self._assemble_nested(names) def child_params(self): return {} def handle_adopt(self, resource_data): names = self._resource_names() if names: return self.create_with_template(self._assemble_nested(names), {}, adopt_data=resource_data) def get_nested_parameters_stack(self): """Return a nested group of size 1 for validation.""" names = self._resource_names(1) child_template = self._assemble_nested(names) params = self.child_params() name = "%s-%s" % (self.stack.name, self.name) return self._parse_nested_stack(name, child_template, params) def resource_mapping(): return { 'OS::Heat::ResourceGroup': ResourceGroup, } heat-10.0.2/heat/engine/resources/openstack/heat/instance_group.py0000666000175000017500000004216013343562351025251 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
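# A worked example of the pause-time arithmetic used by _update_timeout()
# below (the numbers are illustrative): with batch_cnt=3 and pause_sec=60,
# total_pause_time = 60 * max(3 - 1, 0) = 120 seconds. If the stack's
# timeout_secs() is 120 or less, ValueError is raised; otherwise the
# remaining timeout_secs() - 120 seconds become the timeout for the update
# work itself.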
import functools import six from heat.common import environment_format from heat.common import grouputils from heat.common.i18n import _ from heat.common import short_id from heat.common import timeutils as iso8601utils from heat.engine import attributes from heat.engine import environment from heat.engine import output from heat.engine import properties from heat.engine.resources import stack_resource from heat.engine import rsrc_defn from heat.engine import scheduler from heat.scaling import lbutils from heat.scaling import rolling_update from heat.scaling import template (SCALED_RESOURCE_TYPE,) = ('OS::Heat::ScaledResource',) class InstanceGroup(stack_resource.StackResource): """An instance group that can scale arbitrary instances. A resource that allows for the creation of a number of instances, each defined by an AWS::AutoScaling::LaunchConfiguration resource, and that can associate the scaled instances with loadbalancer resources. """ PROPERTIES = ( AVAILABILITY_ZONES, LAUNCH_CONFIGURATION_NAME, SIZE, LOAD_BALANCER_NAMES, TAGS, ) = ( 'AvailabilityZones', 'LaunchConfigurationName', 'Size', 'LoadBalancerNames', 'Tags', ) _TAG_KEYS = ( TAG_KEY, TAG_VALUE, ) = ( 'Key', 'Value', ) _ROLLING_UPDATE_SCHEMA_KEYS = ( MIN_INSTANCES_IN_SERVICE, MAX_BATCH_SIZE, PAUSE_TIME ) = ( 'MinInstancesInService', 'MaxBatchSize', 'PauseTime' ) _UPDATE_POLICY_SCHEMA_KEYS = (ROLLING_UPDATE,) = ('RollingUpdate',) ATTRIBUTES = ( INSTANCE_LIST, ) = ( 'InstanceList', ) properties_schema = { AVAILABILITY_ZONES: properties.Schema( properties.Schema.LIST, _('Not Implemented.'), required=True ), LAUNCH_CONFIGURATION_NAME: properties.Schema( properties.Schema.STRING, _('The reference to a LaunchConfiguration resource.'), required=True, update_allowed=True ), SIZE: properties.Schema( properties.Schema.INTEGER, _('Desired number of instances.'), required=True, update_allowed=True ), LOAD_BALANCER_NAMES: properties.Schema( properties.Schema.LIST, _('List of LoadBalancer resources.') ), TAGS: properties.Schema( properties.Schema.LIST, _('Tags to attach to this group.'), schema=properties.Schema( properties.Schema.MAP, schema={ TAG_KEY: properties.Schema( properties.Schema.STRING, _('Tag key.'), required=True ), TAG_VALUE: properties.Schema( properties.Schema.STRING, _('Tag value.'), required=True ), }, ) ), } attributes_schema = { INSTANCE_LIST: attributes.Schema( _("A comma-delimited list of server IP addresses. " "(Heat extension)."), type=attributes.Schema.STRING ), } rolling_update_schema = { MIN_INSTANCES_IN_SERVICE: properties.Schema(properties.Schema.INTEGER, default=0), MAX_BATCH_SIZE: properties.Schema(properties.Schema.INTEGER, default=1), PAUSE_TIME: properties.Schema(properties.Schema.STRING, default='PT0S') } update_policy_schema = { ROLLING_UPDATE: properties.Schema(properties.Schema.MAP, schema=rolling_update_schema) } def validate(self): """Add validation for update_policy.""" self.validate_launchconfig() super(InstanceGroup, self).validate() if self.update_policy is not None: policy_name = self.ROLLING_UPDATE if (policy_name in self.update_policy and self.update_policy[policy_name] is not None): pause_time = self.update_policy[policy_name][self.PAUSE_TIME] if iso8601utils.parse_isoduration(pause_time) > 3600: msg = _('Maximum %s is 1 hour.') % self.PAUSE_TIME raise ValueError(msg) def validate_launchconfig(self): # It seems to be a common error to not have a dependency on the # launch configuration. This can happen if the actual resource # name is used instead of {get_resource: launch_conf} and no # depends_on is used.
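# For example (an illustrative template snippet), this form creates the
# required dependency:
#     LaunchConfigurationName: {get_resource: launch_conf}
# whereas passing the bare resource name as a plain string does not, and
# is rejected by the reference checks below.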
conf_refid = self.properties.get(self.LAUNCH_CONFIGURATION_NAME) if conf_refid: conf = self.stack.resource_by_refid(conf_refid) if conf is None: raise ValueError(_('%(lc)s (%(ref)s)' ' reference can not be found.') % dict(lc=self.LAUNCH_CONFIGURATION_NAME, ref=conf_refid)) if self.name not in conf.required_by(): raise ValueError(_('%(lc)s (%(ref)s)' ' requires a reference to the' ' configuration not just the name of the' ' resource.') % dict( lc=self.LAUNCH_CONFIGURATION_NAME, ref=conf_refid)) def handle_create(self): """Create a nested stack and add the initial resources to it.""" num_instances = self.properties[self.SIZE] initial_template = self._create_template(num_instances) return self.create_with_template(initial_template) def check_create_complete(self, task): """When stack creation is done, update the loadbalancer. If any instances failed to be created, delete them. """ done = super(InstanceGroup, self).check_create_complete(task) if done: self._lb_reload() return done def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Updates self.properties, if Properties has changed. If Properties has changed, update self.properties, so we get the new values during any subsequent adjustment. """ if tmpl_diff: # parse update policy if tmpl_diff.update_policy_changed(): up = json_snippet.update_policy(self.update_policy_schema, self.context) self.update_policy = up self.properties = json_snippet.properties(self.properties_schema, self.context) if prop_diff: # Replace instances first if launch configuration has changed self._try_rolling_update(prop_diff) # Get the current capacity, we may need to adjust if # Size has changed if self.properties[self.SIZE] is not None: self.resize(self.properties[self.SIZE]) else: curr_size = grouputils.get_size(self) self.resize(curr_size) def _tags(self): """Make sure that we add a tag that Ceilometer can pick up. These need to be prepended with 'metering.'. """ tags = self.properties.get(self.TAGS) or [] for t in tags: if t[self.TAG_KEY].startswith('metering.'): # the user has added one, don't add another. return tags return tags + [{self.TAG_KEY: 'metering.groupname', self.TAG_VALUE: self.FnGetRefId()}] def _get_conf_properties(self): conf_refid = self.properties[self.LAUNCH_CONFIGURATION_NAME] conf = self.stack.resource_by_refid(conf_refid) c_props = conf.frozen_definition().properties(conf.properties_schema, conf.context) props = {k: v for k, v in c_props.items() if k in c_props.data} for key in [conf.BLOCK_DEVICE_MAPPINGS, conf.NOVA_SCHEDULER_HINTS]: if props.get(key) is not None: props[key] = [{k: v for k, v in prop.items() if k in c_props.data[key][idx]} for idx, prop in enumerate(props[key])] if 'InstanceId' in props: props = conf.rebuild_lc_properties(props['InstanceId']) props['Tags'] = self._tags() # if the launch configuration is created from an existing instance. # delete the 'InstanceId' property props.pop('InstanceId', None) return conf, props def _get_resource_definition(self): conf, props = self._get_conf_properties() return rsrc_defn.ResourceDefinition(None, SCALED_RESOURCE_TYPE, props, conf.t.metadata()) def _create_template(self, num_instances, num_replace=0, template_version=('HeatTemplateFormatVersion', '2012-12-12')): """Create a template to represent autoscaled instances. Also see heat.scaling.template.member_definitions. 
""" instance_definition = self._get_resource_definition() old_resources = grouputils.get_member_definitions(self, include_failed=True) definitions = list(template.member_definitions( old_resources, instance_definition, num_instances, num_replace, short_id.generate_id)) child_env = environment.get_child_environment( self.stack.env, self.child_params(), item_to_remove=self.resource_info) tmpl = template.make_template(definitions, version=template_version, child_env=child_env) # Subclasses use HOT templates att_func, res_func = 'get_attr', 'get_resource' if att_func not in tmpl.functions or res_func not in tmpl.functions: att_func, res_func = 'Fn::GetAtt', 'Ref' get_attr = functools.partial(tmpl.functions[att_func], None, att_func) get_res = functools.partial(tmpl.functions[res_func], None, res_func) for odefn in self._nested_output_defns([k for k, d in definitions], get_attr, get_res): tmpl.add_output(odefn) return tmpl def _try_rolling_update(self, prop_diff): if (self.update_policy[self.ROLLING_UPDATE] and self.LAUNCH_CONFIGURATION_NAME in prop_diff): policy = self.update_policy[self.ROLLING_UPDATE] pause_sec = iso8601utils.parse_isoduration(policy[self.PAUSE_TIME]) self._replace(policy[self.MIN_INSTANCES_IN_SERVICE], policy[self.MAX_BATCH_SIZE], pause_sec) def _update_timeout(self, batch_cnt, pause_sec): total_pause_time = pause_sec * max(batch_cnt - 1, 0) if total_pause_time >= self.stack.timeout_secs(): msg = _('The current update policy will result in stack update ' 'timeout.') raise ValueError(msg) return self.stack.timeout_secs() - total_pause_time def _replace(self, min_in_service, batch_size, pause_sec): """Replace the instances in the group. Replace the instances in the group using updated launch configuration. """ def changing_instances(tmpl): instances = grouputils.get_members(self) current = set((i.name, i.t) for i in instances) updated = set(tmpl.resource_definitions(None).items()) # includes instances to be updated and deleted affected = set(k for k, v in current ^ updated) return set(i.FnGetRefId() for i in instances if i.name in affected) def pause_between_batch(): while True: try: yield except scheduler.Timeout: return capacity = len(self.nested()) if self.nested() else 0 batches = list(self._get_batches(capacity, batch_size, min_in_service)) update_timeout = self._update_timeout(len(batches), pause_sec) try: for index, (total_capacity, efft_bat_sz) in enumerate(batches): template = self._create_template(total_capacity, efft_bat_sz) self._lb_reload(exclude=changing_instances(template)) updater = self.update_with_template(template) checker = scheduler.TaskRunner(self._check_for_completion, updater) checker(timeout=update_timeout) if index < (len(batches) - 1) and pause_sec > 0: self._lb_reload() waiter = scheduler.TaskRunner(pause_between_batch) waiter(timeout=pause_sec) finally: self._lb_reload() @staticmethod def _get_batches(capacity, batch_size, min_in_service): """Return an iterator over the batches in a batched update. Each batch is a tuple comprising the total size of the group after processing the batch, and the number of members that can receive the new definition in that batch (either by creating a new member or updating an existing one). 
""" efft_capacity = capacity updated = 0 while rolling_update.needs_update(capacity, efft_capacity, updated): batch = rolling_update.next_batch(capacity, efft_capacity, updated, batch_size, min_in_service) yield batch efft_capacity, num_updates = batch updated += num_updates def _check_for_completion(self, updater): while not self.check_update_complete(updater): yield def resize(self, new_capacity): """Resize the instance group to the new capacity. When shrinking, the oldest instances will be removed. """ new_template = self._create_template(new_capacity) try: updater = self.update_with_template(new_template) checker = scheduler.TaskRunner(self._check_for_completion, updater) checker(timeout=self.stack.timeout_secs()) finally: # Reload the LB in any case, so it's only pointing at healthy # nodes. self._lb_reload() def _lb_reload(self, exclude=None): lb_names = self.properties.get(self.LOAD_BALANCER_NAMES) or [] if lb_names: lb_dict = dict((name, self.stack[name]) for name in lb_names) lbutils.reload_loadbalancers(self, lb_dict, exclude) def get_reference_id(self): return self.physical_resource_name_or_FnGetRefId() def _resolve_attribute(self, name): """Resolves the resource's attributes. Heat extension: "InstanceList" returns comma delimited list of server ip addresses. """ if name == self.INSTANCE_LIST: return u','.join(inst.FnGetAtt('PublicIp') or '0.0.0.0' for inst in grouputils.get_members(self)) or None def _nested_output_defns(self, resource_names, get_attr_fn, get_res_fn): for attr in self.referenced_attrs(): if isinstance(attr, six.string_types): key = attr else: key = attr[0] if key == self.INSTANCE_LIST: value = {r: get_attr_fn([r, 'PublicIp']) for r in resource_names} yield output.OutputDefinition(key, value) def child_template(self): num_instances = int(self.properties[self.SIZE]) return self._create_template(num_instances) def child_params(self): """Return the environment for the nested stack.""" return { environment_format.PARAMETERS: {}, environment_format.RESOURCE_REGISTRY: { SCALED_RESOURCE_TYPE: 'AWS::EC2::Instance', }, } def get_nested_parameters_stack(self): """Return a nested group of size 1 for validation.""" child_template = self._create_template(1) params = self.child_params() name = "%s-%s" % (self.stack.name, self.name) return self._parse_nested_stack(name, child_template, params) def resource_mapping(): return { 'OS::Heat::InstanceGroup': InstanceGroup, } heat-10.0.2/heat/engine/resources/openstack/heat/deployed_server.py0000666000175000017500000001476413343562340025433 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_config import cfg from oslo_log import log as logging from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources import server_base from heat.engine import support cfg.CONF.import_opt('default_software_config_transport', 'heat.common.config') cfg.CONF.import_opt('default_user_data_format', 'heat.common.config') cfg.CONF.import_opt('max_server_name_length', 'heat.common.config') LOG = logging.getLogger(__name__) class DeployedServer(server_base.BaseServer): """A resource for managing servers that are already deployed. A DeployedServer resource manages resources for servers that have been deployed externally from OpenStack. These servers can be associated with SoftwareDeployments for further orchestration via Heat. """ PROPERTIES = ( NAME, METADATA, SOFTWARE_CONFIG_TRANSPORT, DEPLOYMENT_SWIFT_DATA ) = ( 'name', 'metadata', 'software_config_transport', 'deployment_swift_data' ) _SOFTWARE_CONFIG_TRANSPORTS = ( POLL_SERVER_CFN, POLL_SERVER_HEAT, POLL_TEMP_URL, ZAQAR_MESSAGE ) = ( 'POLL_SERVER_CFN', 'POLL_SERVER_HEAT', 'POLL_TEMP_URL', 'ZAQAR_MESSAGE' ) _DEPLOYMENT_SWIFT_DATA_KEYS = ( CONTAINER, OBJECT ) = ( 'container', 'object', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Server name.'), update_allowed=True ), METADATA: properties.Schema( properties.Schema.MAP, _('Arbitrary key/value metadata to store for this server. Both ' 'keys and values must be 255 characters or less. Non-string ' 'values will be serialized to JSON (and the serialized ' 'string must be 255 characters or less).'), update_allowed=True, support_status=support.SupportStatus( status=support.DEPRECATED, message='This property will be ignored', version='9.0.0', previous_status=support.SupportStatus( status=support.SUPPORTED, version='8.0.0' ) ) ), SOFTWARE_CONFIG_TRANSPORT: properties.Schema( properties.Schema.STRING, _('How the server should receive the metadata required for ' 'software configuration. POLL_SERVER_CFN will allow calls to ' 'the cfn API action DescribeStackResource authenticated with ' 'the provided keypair. POLL_SERVER_HEAT will allow calls to ' 'the Heat API resource-show using the provided keystone ' 'credentials. POLL_TEMP_URL will create and populate a ' 'Swift TempURL with metadata for polling. ZAQAR_MESSAGE will ' 'create a dedicated zaqar queue and post the metadata ' 'for polling.'), default=cfg.CONF.default_software_config_transport, update_allowed=True, constraints=[ constraints.AllowedValues(_SOFTWARE_CONFIG_TRANSPORTS), ] ), DEPLOYMENT_SWIFT_DATA: properties.Schema( properties.Schema.MAP, _('Swift container and object to use for storing deployment data ' 'for the server resource. The parameter is a map value ' 'with the keys "container" and "object", and the values ' 'are the corresponding container and object names. The ' 'software_config_transport parameter must be set to ' 'POLL_TEMP_URL for swift to be used. 
If not specified, ' 'and software_config_transport is set to POLL_TEMP_URL, a ' 'container will be automatically created from the resource ' 'name, and the object name will be a generated uuid.'), support_status=support.SupportStatus(version='9.0.0'), default={}, update_allowed=True, schema={ CONTAINER: properties.Schema( properties.Schema.STRING, _('Name of the container.'), constraints=[ constraints.Length(min=1) ] ), OBJECT: properties.Schema( properties.Schema.STRING, _('Name of the object.'), constraints=[ constraints.Length(min=1) ] ) } ) } ATTRIBUTES = ( NAME_ATTR, OS_COLLECT_CONFIG ) = ( 'name', 'os_collect_config' ) attributes_schema = { NAME_ATTR: attributes.Schema( _('Name of the server.'), type=attributes.Schema.STRING ), OS_COLLECT_CONFIG: attributes.Schema( _('The os-collect-config configuration for the server\'s local ' 'agent to be configured to connect to Heat to retrieve ' 'deployment data.'), type=attributes.Schema.MAP, support_status=support.SupportStatus(version='9.0.0'), cache_mode=attributes.Schema.CACHE_NONE ), } def __init__(self, name, json_snippet, stack): super(DeployedServer, self).__init__(name, json_snippet, stack) self._register_access_key() def handle_create(self): metadata = self.metadata_get(True) or {} self.resource_id_set(self.uuid) self._create_transport_credentials(self.properties) self._populate_deployments_metadata(metadata, self.properties) return self.resource_id def user_data_software_config(self): return True def _delete(self): self._delete_queue() self._delete_user() self._delete_temp_url() def resource_mapping(): return { 'OS::Heat::DeployedServer': DeployedServer, } heat-10.0.2/heat/engine/resources/openstack/heat/wait_condition_handle.py0000666000175000017500000002354413343562351026563 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from heat.common.i18n import _ from heat.common import password_gen from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources.aws.cfn import wait_condition_handle as aws_wch from heat.engine.resources import signal_responder from heat.engine.resources import wait_condition as wc_base from heat.engine import support class HeatWaitConditionHandle(wc_base.BaseWaitConditionHandle): """Resource for managing instance signals. The main points of this resource are: - have no dependencies (so the instance can reference it). - create credentials to allow for signalling from the instance. - handle signals from the instance, validate and store result. 
""" support_status = support.SupportStatus(version='2014.2') PROPERTIES = ( SIGNAL_TRANSPORT, ) = ( 'signal_transport', ) SIGNAL_TRANSPORTS = ( CFN_SIGNAL, TEMP_URL_SIGNAL, HEAT_SIGNAL, NO_SIGNAL, ZAQAR_SIGNAL, TOKEN_SIGNAL ) = ( 'CFN_SIGNAL', 'TEMP_URL_SIGNAL', 'HEAT_SIGNAL', 'NO_SIGNAL', 'ZAQAR_SIGNAL', 'TOKEN_SIGNAL' ) properties_schema = { SIGNAL_TRANSPORT: properties.Schema( properties.Schema.STRING, _('How the client will signal the wait condition. CFN_SIGNAL ' 'will allow an HTTP POST to a CFN keypair signed URL. ' 'TEMP_URL_SIGNAL will create a Swift TempURL to be ' 'signalled via HTTP PUT. HEAT_SIGNAL will allow calls to the ' 'Heat API resource-signal using the provided keystone ' 'credentials. ZAQAR_SIGNAL will create a dedicated zaqar queue ' 'to be signalled using the provided keystone credentials. ' 'TOKEN_SIGNAL will allow and HTTP POST to a Heat API endpoint ' 'with the provided keystone token. NO_SIGNAL will result in ' 'the resource going to a signalled state without waiting for ' 'any signal.'), default='TOKEN_SIGNAL', constraints=[ constraints.AllowedValues(SIGNAL_TRANSPORTS), ], support_status=support.SupportStatus(version='6.0.0'), ), } ATTRIBUTES = ( TOKEN, ENDPOINT, CURL_CLI, SIGNAL, ) = ( 'token', 'endpoint', 'curl_cli', 'signal', ) attributes_schema = { TOKEN: attributes.Schema( _('Token for stack-user which can be used for signalling handle ' 'when signal_transport is set to TOKEN_SIGNAL. None for all ' 'other signal transports.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ), ENDPOINT: attributes.Schema( _('Endpoint/url which can be used for signalling handle when ' 'signal_transport is set to TOKEN_SIGNAL. None for all ' 'other signal transports.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ), CURL_CLI: attributes.Schema( _('Convenience attribute, provides curl CLI command ' 'prefix, which can be used for signalling handle completion or ' 'failure when signal_transport is set to TOKEN_SIGNAL. You ' 'can signal success by adding ' '--data-binary \'{"status": "SUCCESS"}\' ' ', or signal failure by adding ' '--data-binary \'{"status": "FAILURE"}\'. ' 'This attribute is set to None for all other signal ' 'transports.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ), SIGNAL: attributes.Schema( _('JSON serialized map that includes the endpoint, token and/or ' 'other attributes the client must use for signalling this ' 'handle. 
The contents of this map depend on the type of signal ' 'selected in the signal_transport property.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ) } METADATA_KEYS = ( DATA, REASON, STATUS, UNIQUE_ID ) = ( 'data', 'reason', 'status', 'id' ) def _signal_transport_token(self): return self.properties.get( self.SIGNAL_TRANSPORT) == self.TOKEN_SIGNAL def handle_create(self): self.password = password_gen.generate_openstack_password() super(HeatWaitConditionHandle, self).handle_create() if self._signal_transport_token(): # FIXME(shardy): The assumption here is that token expiry > timeout # but we probably need a check here to fail fast if that's not true # Also need to implement an update property, such that the handle # can be replaced on update which will replace the token token = self._user_token() self.data_set('token', token, True) self.data_set('endpoint', '%s/signal' % self._get_resource_endpoint()) def _get_resource_endpoint(self): # Get the endpoint from stack.clients then replace the context # project_id with the path to the resource (which includes the # context project_id), then replace the context project with # the one needed for signalling from the stack_user_project heat_client_plugin = self.stack.clients.client_plugin('heat') endpoint = heat_client_plugin.get_heat_url() rsrc_ep = endpoint.replace(self.context.tenant_id, self.identifier().url_path()) return rsrc_ep.replace(self.context.tenant_id, self.stack.stack_user_project_id) def _resolve_attribute(self, key): if self.resource_id: if key == self.SIGNAL: return jsonutils.dumps(self._get_signal( signal_type=signal_responder.WAITCONDITION, multiple_signals=True)) elif key == self.TOKEN: return self.data().get('token') elif key == self.ENDPOINT: return self.data().get('endpoint') elif key == self.CURL_CLI: # Construct curl command for template-author convenience endpoint = self.data().get('endpoint') token = self.data().get('token') if endpoint is None or token is None: return None heat_client_plugin = self.stack.clients.client_plugin('heat') insecure_option = heat_client_plugin.get_insecure_option() return ("curl %(insecure)s-i -X POST " "-H 'X-Auth-Token: %(token)s' " "-H 'Content-Type: application/json' " "-H 'Accept: application/json' " "%(endpoint)s" % dict(insecure="--insecure " if insecure_option else "", token=token, endpoint=endpoint)) def get_status(self): # before we check status, we have to update the signal transports # that require constant polling self._service_signal() return super(HeatWaitConditionHandle, self).get_status() def handle_signal(self, details=None): """Validate and update the resource metadata. Metadata is not mandatory, but if passed it must use the following format: { "status" : "Status (must be SUCCESS or FAILURE)", "data" : "Arbitrary data", "reason" : "Reason string" } Optionally "id" may also be specified, but if missing the index of the signal received will be used. 
""" return super(HeatWaitConditionHandle, self).handle_signal(details) def normalise_signal_data(self, signal_data, latest_metadata): signal_num = len(latest_metadata) + 1 reason = 'Signal %s received' % signal_num # Tolerate missing values, default to success metadata = signal_data.copy() if signal_data else {} metadata.setdefault(self.REASON, reason) metadata.setdefault(self.DATA, None) metadata.setdefault(self.UNIQUE_ID, signal_num) metadata.setdefault(self.STATUS, self.STATUS_SUCCESS) return metadata class UpdateWaitConditionHandle(aws_wch.WaitConditionHandle): """WaitConditionHandle that clears signals and changes handle on update. This works similarly to an AWS::CloudFormation::WaitConditionHandle, except that on update it clears all signals received and changes the handle. Using this handle means that you must setup the signal senders to send their signals again any time the update handle changes. This allows us to roll out new configurations and be confident that they are rolled out once UPDATE COMPLETE is reached. """ support_status = support.SupportStatus(version='2014.1') def update(self, after, before=None, prev_resource=None): raise resource.UpdateReplace(self.name) def resource_mapping(): return { 'OS::Heat::WaitConditionHandle': HeatWaitConditionHandle, 'OS::Heat::UpdateWaitConditionHandle': UpdateWaitConditionHandle, } heat-10.0.2/heat/engine/resources/openstack/heat/random_string.py0000666000175000017500000002234213343562340025075 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.common import password_gen from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class RandomString(resource.Resource): """A resource which generates a random string. This is useful for configuring passwords and secrets on services. Random string can be generated from specified character sequences, which means that all characters will be randomly chosen from specified sequences, or with some classes, e.g. letterdigits, which means that all character will be randomly chosen from union of ascii letters and digits. Output string will be randomly generated string with specified length (or with length of 32, if length property doesn't specified). 
""" support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( LENGTH, SEQUENCE, CHARACTER_CLASSES, CHARACTER_SEQUENCES, SALT, ) = ( 'length', 'sequence', 'character_classes', 'character_sequences', 'salt', ) _CHARACTER_CLASSES_KEYS = ( CHARACTER_CLASSES_CLASS, CHARACTER_CLASSES_MIN, ) = ( 'class', 'min', ) _CHARACTER_SEQUENCES = ( CHARACTER_SEQUENCES_SEQUENCE, CHARACTER_SEQUENCES_MIN, ) = ( 'sequence', 'min', ) ATTRIBUTES = ( VALUE, ) = ( 'value', ) properties_schema = { LENGTH: properties.Schema( properties.Schema.INTEGER, _('Length of the string to generate.'), default=32, constraints=[ constraints.Range(1, 512), ] ), SEQUENCE: properties.Schema( properties.Schema.STRING, _('Sequence of characters to build the random string from.'), constraints=[ constraints.AllowedValues(password_gen.CHARACTER_CLASSES), ], support_status=support.SupportStatus( status=support.HIDDEN, version='5.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, message=_('Use property %s.') % CHARACTER_CLASSES, version='2014.2' ) ) ), CHARACTER_CLASSES: properties.Schema( properties.Schema.LIST, _('A list of character class and their constraints to generate ' 'the random string from.'), schema=properties.Schema( properties.Schema.MAP, schema={ CHARACTER_CLASSES_CLASS: properties.Schema( properties.Schema.STRING, (_('A character class and its corresponding %(min)s ' 'constraint to generate the random string from.') % {'min': CHARACTER_CLASSES_MIN}), constraints=[ constraints.AllowedValues( password_gen.CHARACTER_CLASSES), ], default=password_gen.LETTERS_DIGITS), CHARACTER_CLASSES_MIN: properties.Schema( properties.Schema.INTEGER, _('The minimum number of characters from this ' 'character class that will be in the generated ' 'string.'), default=1, constraints=[ constraints.Range(1, 512), ] ) } ), # add defaults for backward compatibility default=[{CHARACTER_CLASSES_CLASS: password_gen.LETTERS_DIGITS, CHARACTER_CLASSES_MIN: 1}] ), CHARACTER_SEQUENCES: properties.Schema( properties.Schema.LIST, _('A list of character sequences and their constraints to ' 'generate the random string from.'), schema=properties.Schema( properties.Schema.MAP, schema={ CHARACTER_SEQUENCES_SEQUENCE: properties.Schema( properties.Schema.STRING, _('A character sequence and its corresponding %(min)s ' 'constraint to generate the random string ' 'from.') % {'min': CHARACTER_SEQUENCES_MIN}, required=True), CHARACTER_SEQUENCES_MIN: properties.Schema( properties.Schema.INTEGER, _('The minimum number of characters from this ' 'sequence that will be in the generated ' 'string.'), default=1, constraints=[ constraints.Range(1, 512), ] ) } ) ), SALT: properties.Schema( properties.Schema.STRING, _('Value which can be set or changed on stack update to trigger ' 'the resource for replacement with a new random string. The ' 'salt value itself is ignored by the random generator.') ), } attributes_schema = { VALUE: attributes.Schema( _('The random string generated by this resource. 
This value is ' 'also available by referencing the resource.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ), } def translation_rules(self, props): if props.get(self.SEQUENCE): return [ translation.TranslationRule( props, translation.TranslationRule.ADD, [self.CHARACTER_CLASSES], [{self.CHARACTER_CLASSES_CLASS: props.get( self.SEQUENCE), self.CHARACTER_CLASSES_MIN: 1}]), translation.TranslationRule( props, translation.TranslationRule.DELETE, [self.SEQUENCE] ) ] def _generate_random_string(self, char_sequences, char_classes, length): seq_mins = [ password_gen.special_char_class( char_seq[self.CHARACTER_SEQUENCES_SEQUENCE], char_seq[self.CHARACTER_SEQUENCES_MIN]) for char_seq in char_sequences] char_class_mins = [ password_gen.named_char_class( char_class[self.CHARACTER_CLASSES_CLASS], char_class[self.CHARACTER_CLASSES_MIN]) for char_class in char_classes] return password_gen.generate_password(length, seq_mins + char_class_mins) def validate(self): super(RandomString, self).validate() char_sequences = self.properties[self.CHARACTER_SEQUENCES] char_classes = self.properties[self.CHARACTER_CLASSES] def char_min(char_dicts, min_prop): if char_dicts: return sum(char_dict[min_prop] for char_dict in char_dicts) return 0 length = self.properties[self.LENGTH] min_length = (char_min(char_sequences, self.CHARACTER_SEQUENCES_MIN) + char_min(char_classes, self.CHARACTER_CLASSES_MIN)) if min_length > length: msg = _("Length property cannot be smaller than combined " "character class and character sequence minimums") raise exception.StackValidationFailed(message=msg) def handle_create(self): char_sequences = self.properties[self.CHARACTER_SEQUENCES] or [] char_classes = self.properties[self.CHARACTER_CLASSES] or [] length = self.properties[self.LENGTH] random_string = self._generate_random_string(char_sequences, char_classes, length) self.data_set('value', random_string, redact=True) self.resource_id_set(self.physical_resource_name()) def _resolve_attribute(self, name): if name == self.VALUE: return self.data().get(self.VALUE) def get_reference_id(self): if self.resource_id is not None: return self.data().get('value') else: return six.text_type(self.name) def resource_mapping(): return { 'OS::Heat::RandomString': RandomString, } heat-10.0.2/heat/engine/resources/openstack/heat/swiftsignal.py0000666000175000017500000002735413343562340024571 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
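# A minimal sketch (not Heat code) of the length rule enforced in
# RandomString.validate above: the requested length must cover the combined
# per-class and per-sequence minimums. The helper name and dict layout are
# illustrative assumptions that mirror the schemas above.

def validate_minimums(length, char_classes, char_sequences):
    # each entry mirrors the schemas above: a mapping with a 'min' key
    min_length = (sum(c['min'] for c in char_classes) +
                  sum(s['min'] for s in char_sequences))
    if min_length > length:
        raise ValueError('Length property cannot be smaller than combined '
                         'character class and character sequence minimums')

# validate_minimums(8, [{'min': 1}], [{'min': 2}]) passes;
# validate_minimums(2, [{'min': 1}], [{'min': 2}]) raises ValueError.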
from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import timeutils import six from six.moves.urllib import parse from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine.clients.os import swift from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support LOG = logging.getLogger(__name__) class SwiftSignalFailure(exception.Error): def __init__(self, wait_cond): reasons = wait_cond.get_status_reason(wait_cond.STATUS_FAILURE) super(SwiftSignalFailure, self).__init__(';'.join(reasons)) class SwiftSignalTimeout(exception.Error): def __init__(self, wait_cond): reasons = wait_cond.get_status_reason(wait_cond.STATUS_SUCCESS) vals = {'len': len(reasons), 'count': wait_cond.properties[wait_cond.COUNT]} if reasons: vals['reasons'] = ';'.join(reasons) message = (_('%(len)d of %(count)d received - %(reasons)s') % vals) else: message = (_('%(len)d of %(count)d received') % vals) super(SwiftSignalTimeout, self).__init__(message) class SwiftSignalHandle(resource.Resource): """Resource for managing signals from Swift resources. This resource is same as WaitConditionHandle, but designed for using by Swift resources. """ support_status = support.SupportStatus(version='2014.2') default_client_name = "swift" properties_schema = {} ATTRIBUTES = ( TOKEN, ENDPOINT, CURL_CLI, ) = ( 'token', 'endpoint', 'curl_cli', ) attributes_schema = { TOKEN: attributes.Schema( _('Tokens are not needed for Swift TempURLs. This attribute is ' 'being kept for compatibility with the ' 'OS::Heat::WaitConditionHandle resource.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ), ENDPOINT: attributes.Schema( _('Endpoint/url which can be used for signalling handle.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ), CURL_CLI: attributes.Schema( _('Convenience attribute, provides curl CLI command ' 'prefix, which can be used for signalling handle completion or ' 'failure. You can signal success by adding ' '--data-binary \'{"status": "SUCCESS"}\' ' ', or signal failure by adding ' '--data-binary \'{"status": "FAILURE"}\'.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ), } def handle_create(self): cplugin = self.client_plugin() url = cplugin.get_signal_url(self.stack.id, self.physical_resource_name()) self.data_set(self.ENDPOINT, url) self.resource_id_set(self.physical_resource_name()) def _resolve_attribute(self, key): if self.resource_id: if key == self.TOKEN: return '' # HeatWaitConditionHandle compatibility elif key == self.ENDPOINT: return self.data().get(self.ENDPOINT) elif key == self.CURL_CLI: return ("curl -i -X PUT '%s'" % self.data().get(self.ENDPOINT)) def handle_delete(self): cplugin = self.client_plugin() client = cplugin.client() # Delete all versioned objects while True: try: client.delete_object(self.stack.id, self.physical_resource_name()) except Exception as exc: cplugin.ignore_not_found(exc) break # Delete the container if it is empty try: client.delete_container(self.stack.id) except Exception as exc: if cplugin.is_not_found(exc) or cplugin.is_conflict(exc): pass else: raise self.data_delete(self.ENDPOINT) def get_reference_id(self): return self.data().get(self.ENDPOINT) class SwiftSignal(resource.Resource): """Resource for handling signals received by SwiftSignalHandle. 
This resource handles signals received by SwiftSignalHandle and is same as WaitCondition resource. """ support_status = support.SupportStatus(version='2014.2') default_client_name = "swift" PROPERTIES = (HANDLE, TIMEOUT, COUNT,) = ('handle', 'timeout', 'count',) properties_schema = { HANDLE: properties.Schema( properties.Schema.STRING, required=True, description=_('URL of TempURL where resource will signal ' 'completion and optionally upload data.') ), TIMEOUT: properties.Schema( properties.Schema.NUMBER, description=_('The maximum number of seconds to wait for the ' 'resource to signal completion. Once the timeout ' 'is reached, creation of the signal resource will ' 'fail.'), required=True, constraints=[ constraints.Range(1, 43200), ] ), COUNT: properties.Schema( properties.Schema.INTEGER, description=_('The number of success signals that must be ' 'received before the stack creation process ' 'continues.'), default=1, constraints=[ constraints.Range(1, 1000), ] ) } ATTRIBUTES = (DATA) = 'data' attributes_schema = { DATA: attributes.Schema( _('JSON data that was uploaded via the SwiftSignalHandle.'), type=attributes.Schema.STRING ) } WAIT_STATUSES = ( STATUS_FAILURE, STATUS_SUCCESS, ) = ( 'FAILURE', 'SUCCESS', ) METADATA_KEYS = ( DATA, REASON, STATUS, UNIQUE_ID ) = ( 'data', 'reason', 'status', 'id' ) def __init__(self, name, json_snippet, stack): super(SwiftSignal, self).__init__(name, json_snippet, stack) self._obj_name = None self._url = None @property def url(self): if not self._url: self._url = parse.urlparse(self.properties[self.HANDLE]) return self._url @property def obj_name(self): if not self._obj_name: self._obj_name = self.url.path.split('/')[4] return self._obj_name def _validate_handle_url(self): parts = self.url.path.split('/') msg = _('"%(url)s" is not a valid SwiftSignalHandle. 
The %(part)s ' 'is invalid') cplugin = self.client_plugin() if not cplugin.is_valid_temp_url_path(self.url.path): raise ValueError(msg % {'url': self.url.path, 'part': 'Swift TempURL path'}) if not parts[3] == self.stack.id: raise ValueError(msg % {'url': self.url.path, 'part': 'container name'}) def handle_create(self): self._validate_handle_url() started_at = timeutils.utcnow() return started_at, float(self.properties[self.TIMEOUT]) def get_signals(self): try: container = self.client().get_container(self.stack.id) except Exception as exc: self.client_plugin().ignore_not_found(exc) LOG.debug("Swift container %s was not found", self.stack.id) return [] index = container[1] if not index: LOG.debug("Swift objects in container %s were not found", self.stack.id) return [] # Remove objects in that are for other handle resources, since # multiple SwiftSignalHandle resources in the same stack share # a container filtered = [obj for obj in index if self.obj_name in obj['name']] # Fetch objects from Swift and filter results obj_bodies = [] for obj in filtered: try: signal = self.client().get_object(self.stack.id, obj['name']) except Exception as exc: self.client_plugin().ignore_not_found(exc) continue body = signal[1] if body == swift.IN_PROGRESS: # Ignore the initial object continue if body == "": obj_bodies.append({}) continue try: obj_bodies.append(jsonutils.loads(body)) except ValueError: raise exception.Error(_("Failed to parse JSON data: %s") % body) # Set default values on each signal signals = [] signal_num = 1 for signal in obj_bodies: # Remove previous signals with the same ID sig_id = self.UNIQUE_ID ids = [s.get(sig_id) for s in signals if sig_id in s] if ids and sig_id in signal and ids.count(signal[sig_id]) > 0: [signals.remove(s) for s in signals if s.get(sig_id) == signal[sig_id]] # Make sure all fields are set, since all are optional signal.setdefault(self.DATA, None) unique_id = signal.setdefault(sig_id, signal_num) reason = 'Signal %s received' % unique_id signal.setdefault(self.REASON, reason) signal.setdefault(self.STATUS, self.STATUS_SUCCESS) signals.append(signal) signal_num += 1 return signals def get_status(self): return [s[self.STATUS] for s in self.get_signals()] def get_status_reason(self, status): return [s[self.REASON] for s in self.get_signals() if s[self.STATUS] == status] def get_data(self): signals = self.get_signals() if not signals: return None data = {} for signal in signals: data[signal[self.UNIQUE_ID]] = signal[self.DATA] return data def check_create_complete(self, create_data): if timeutils.is_older_than(*create_data): raise SwiftSignalTimeout(self) statuses = self.get_status() if not statuses: return False for status in statuses: if status == self.STATUS_FAILURE: failure = SwiftSignalFailure(self) LOG.info('%(name)s Failed (%(failure)s)', {'name': str(self), 'failure': str(failure)}) raise failure elif status != self.STATUS_SUCCESS: raise exception.Error(_("Unknown status: %s") % status) if len(statuses) >= self.properties[self.COUNT]: LOG.info("%s Succeeded", str(self)) return True return False def _resolve_attribute(self, key): if key == self.DATA: return six.text_type(jsonutils.dumps(self.get_data())) def resource_mapping(): return {'OS::Heat::SwiftSignal': SwiftSignal, 'OS::Heat::SwiftSignalHandle': SwiftSignalHandle} heat-10.0.2/heat/engine/resources/openstack/heat/test_resource.py0000666000175000017500000002202713343562351025117 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file 
except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import eventlet from oslo_utils import timeutils import six from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from oslo_log import log as logging LOG = logging.getLogger(__name__) class TestResource(resource.Resource): """A resource which stores the string value that was provided. This resource is to be used only for testing. It has control knobs such as 'update_replace', 'fail', 'wait_secs'. """ support_status = support.SupportStatus(version='5.0.0') ACTION_TIMES = ( CREATE_WAIT_SECS, UPDATE_WAIT_SECS, DELETE_WAIT_SECS ) = ( 'create', 'update', 'delete') PROPERTIES = ( VALUE, UPDATE_REPLACE, FAIL, CLIENT_NAME, ENTITY_NAME, WAIT_SECS, ACTION_WAIT_SECS, ATTR_WAIT_SECS, CONSTRAINT_PROP_SECS, UPDATE_REPLACE_VALUE, ) = ( 'value', 'update_replace', 'fail', 'client_name', 'entity_name', 'wait_secs', 'action_wait_secs', 'attr_wait_secs', 'constraint_prop_secs', 'update_replace_value', ) ATTRIBUTES = ( OUTPUT, ) = ( 'output', ) properties_schema = { CONSTRAINT_PROP_SECS: properties.Schema( properties.Schema.NUMBER, _('Number value for delay during resolve constraint.'), default=0, update_allowed=True, constraints=[ constraints.CustomConstraint('test_constr') ], support_status=support.SupportStatus(version='6.0.0') ), ATTR_WAIT_SECS: properties.Schema( properties.Schema.NUMBER, _('Number value for timeout during resolving output value.'), default=0, update_allowed=True, support_status=support.SupportStatus(version='6.0.0') ), VALUE: properties.Schema( properties.Schema.STRING, _('The input string to be stored.'), default='test_string', update_allowed=True ), UPDATE_REPLACE_VALUE: properties.Schema( properties.Schema.STRING, _('Some value that can be stored but can not be updated.'), support_status=support.SupportStatus(version='7.0.0') ), FAIL: properties.Schema( properties.Schema.BOOLEAN, _('Value which can be set to fail the resource operation ' 'to test failure scenarios.'), update_allowed=True, default=False ), UPDATE_REPLACE: properties.Schema( properties.Schema.BOOLEAN, _('Value which can be set to trigger update replace for ' 'the particular resource.'), update_allowed=True, default=False ), WAIT_SECS: properties.Schema( properties.Schema.NUMBER, _('Seconds to wait after an action (-1 is infinite).'), update_allowed=True, default=0, ), ACTION_WAIT_SECS: properties.Schema( properties.Schema.MAP, _('Options for simulating waiting.'), update_allowed=True, schema={ CREATE_WAIT_SECS: properties.Schema( properties.Schema.NUMBER, _('Seconds to wait after a create. ' 'Defaults to the global wait_secs.'), update_allowed=True, ), UPDATE_WAIT_SECS: properties.Schema( properties.Schema.NUMBER, _('Seconds to wait after an update. ' 'Defaults to the global wait_secs.'), update_allowed=True, ), DELETE_WAIT_SECS: properties.Schema( properties.Schema.NUMBER, _('Seconds to wait after a delete. 
' 'Defaults to the global wait_secs.'), update_allowed=True, ), } ), CLIENT_NAME: properties.Schema( properties.Schema.STRING, _('Client to poll.'), default='', update_allowed=True ), ENTITY_NAME: properties.Schema( properties.Schema.STRING, _('Client entity to poll.'), default='', update_allowed=True ), } attributes_schema = { OUTPUT: attributes.Schema( _('The string that was stored. This value is ' 'also available by referencing the resource.'), cache_mode=attributes.Schema.CACHE_NONE ), } def _wait_secs(self): secs = None if self.properties[self.ACTION_WAIT_SECS]: secs = self.properties[self.ACTION_WAIT_SECS][self.action.lower()] if secs is None: secs = self.properties[self.WAIT_SECS] LOG.info('%(name)s wait_secs:%(wait)s, action:%(action)s', {'name': self.name, 'wait': secs, 'action': self.action.lower()}) return secs def handle_create(self): fail_prop = self.properties.get(self.FAIL) if not fail_prop: value = self.properties.get(self.VALUE) self.data_set('value', value, redact=False) self.resource_id_set(self.physical_resource_name()) return timeutils.utcnow(), self._wait_secs() def needs_replace_with_prop_diff(self, changed_properties_set, after_props, before_props): if self.UPDATE_REPLACE in changed_properties_set: return bool(after_props.get(self.UPDATE_REPLACE)) def handle_update(self, json_snippet, tmpl_diff, prop_diff): self.properties = json_snippet.properties(self.properties_schema, self.context) value = prop_diff.get(self.VALUE) if value: # emulate failure fail_prop = self.properties[self.FAIL] if not fail_prop: # update in place self.data_set('value', value, redact=False) return timeutils.utcnow(), self._wait_secs() return timeutils.utcnow(), 0 def handle_delete(self): return timeutils.utcnow(), self._wait_secs() def check_create_complete(self, cookie): return self._check_status_complete(*cookie) def check_update_complete(self, cookie): return self._check_status_complete(*cookie) def check_delete_complete(self, cookie): return self._check_status_complete(*cookie) def _check_status_complete(self, started_at, wait_secs): def simulated_effort(): client_name = self.properties[self.CLIENT_NAME] self.entity = self.properties[self.ENTITY_NAME] if client_name and self.entity: # Allow the user to set the value to a real resource id. entity_id = self.data().get('value') or self.resource_id try: obj = getattr(self.client(name=client_name), self.entity) obj.get(entity_id) except Exception as exc: LOG.debug('%s.%s(%s) %s' % (client_name, self.entity, entity_id, six.text_type(exc))) else: # just sleep some more eventlet.sleep(1) if isinstance(started_at, six.string_types): started_at = timeutils.parse_isotime(started_at) started_at = timeutils.normalize_time(started_at) waited = timeutils.utcnow() - started_at LOG.info("Resource %(name)s waited %(waited)s/%(sec)s seconds", {'name': self.name, 'waited': waited, 'sec': wait_secs}) # wait_secs < 0 is an infinite wait time. 
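# (a negative wait_secs means the completion check below can never trip,
# so the resource stays in progress until the stack-level timeout fires)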
if wait_secs >= 0 and waited > datetime.timedelta(seconds=wait_secs): fail_prop = self.properties[self.FAIL] if fail_prop and self.action != self.DELETE: raise ValueError("Test Resource failed %s" % self.name) return True simulated_effort() return False def _resolve_attribute(self, name): eventlet.sleep(self.properties[self.ATTR_WAIT_SECS]) if name == self.OUTPUT: return self.data().get('value') def resource_mapping(): return { 'OS::Heat::TestResource': TestResource, } heat-10.0.2/heat/engine/resources/openstack/heat/autoscaling_group.py0000666000175000017500000002356513343562351025766 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common import grouputils from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine.hot import template from heat.engine import output from heat.engine import properties from heat.engine.resources.aws.autoscaling import autoscaling_group as aws_asg from heat.engine import rsrc_defn from heat.engine import support class HOTInterpreter(template.HOTemplate20150430): def __new__(cls): return object.__new__(cls) def __init__(self): version = {'heat_template_version': '2015-04-30'} super(HOTInterpreter, self).__init__(version) def parse(self, stack, snippet, path=''): return snippet def parse_conditions(self, stack, snippet, path=''): return snippet class AutoScalingResourceGroup(aws_asg.AutoScalingGroup): """An autoscaling group that can scale arbitrary resources. An autoscaling group allows the creation of a desired count of similar resources, which are defined with the resource property in HOT format. If there is a need to create many of the same resources (e.g. one hundred sets of Server, WaitCondition and WaitConditionHandle or even Neutron Nets), AutoScalingGroup is a convenient and easy way to do that. """ PROPERTIES = ( RESOURCE, MAX_SIZE, MIN_SIZE, COOLDOWN, DESIRED_CAPACITY, ROLLING_UPDATES, ) = ( 'resource', 'max_size', 'min_size', 'cooldown', 'desired_capacity', 'rolling_updates', ) _ROLLING_UPDATES_SCHEMA = ( MIN_IN_SERVICE, MAX_BATCH_SIZE, PAUSE_TIME, ) = ( 'min_in_service', 'max_batch_size', 'pause_time', ) ATTRIBUTES = ( OUTPUTS, OUTPUTS_LIST, CURRENT_SIZE, REFS, REFS_MAP, ) = ( 'outputs', 'outputs_list', 'current_size', 'refs', 'refs_map', ) properties_schema = { RESOURCE: properties.Schema( properties.Schema.MAP, _('Resource definition for the resources in the group, in HOT ' 'format. 
The value of this property is the definition of a ' 'resource just as if it had been declared in the template ' 'itself.'), required=True, update_allowed=True, ), MAX_SIZE: properties.Schema( properties.Schema.INTEGER, _('Maximum number of resources in the group.'), required=True, update_allowed=True, constraints=[constraints.Range(min=0)], ), MIN_SIZE: properties.Schema( properties.Schema.INTEGER, _('Minimum number of resources in the group.'), required=True, update_allowed=True, constraints=[constraints.Range(min=0)] ), COOLDOWN: properties.Schema( properties.Schema.INTEGER, _('Cooldown period, in seconds.'), update_allowed=True ), DESIRED_CAPACITY: properties.Schema( properties.Schema.INTEGER, _('Desired initial number of resources.'), update_allowed=True ), ROLLING_UPDATES: properties.Schema( properties.Schema.MAP, _('Policy for rolling updates for this scaling group.'), update_allowed=True, schema={ MIN_IN_SERVICE: properties.Schema( properties.Schema.INTEGER, _('The minimum number of resources in service while ' 'rolling updates are being executed.'), constraints=[constraints.Range(min=0)], default=0), MAX_BATCH_SIZE: properties.Schema( properties.Schema.INTEGER, _('The maximum number of resources to replace at once.'), constraints=[constraints.Range(min=1)], default=1), PAUSE_TIME: properties.Schema( properties.Schema.NUMBER, _('The number of seconds to wait between batches of ' 'updates.'), constraints=[constraints.Range(min=0)], default=0), }, # A default policy has all fields with their own default values. default={ MIN_IN_SERVICE: 0, MAX_BATCH_SIZE: 1, PAUSE_TIME: 0, }, ), } attributes_schema = { OUTPUTS: attributes.Schema( _("A map of resource names to the specified attribute of each " "individual resource that is part of the AutoScalingGroup. " "This map specifies output parameters that are available " "once the AutoScalingGroup has been instantiated."), support_status=support.SupportStatus(version='2014.2'), type=attributes.Schema.MAP ), OUTPUTS_LIST: attributes.Schema( _("A list of the specified attribute of each individual resource " "that is part of the AutoScalingGroup. 
This list of attributes " "is available as an output once the AutoScalingGroup has been " "instantiated."), support_status=support.SupportStatus(version='2014.2'), type=attributes.Schema.LIST ), CURRENT_SIZE: attributes.Schema( _("The current size of AutoscalingResourceGroup."), support_status=support.SupportStatus(version='2015.1'), type=attributes.Schema.INTEGER ), REFS: attributes.Schema( _("A list of resource IDs for the resources in the group."), type=attributes.Schema.LIST, support_status=support.SupportStatus(version='7.0.0'), ), REFS_MAP: attributes.Schema( _("A map of resource names to IDs for the resources in " "the group."), type=attributes.Schema.MAP, support_status=support.SupportStatus(version='7.0.0'), ), } update_policy_schema = {} def _get_resource_definition(self): resource_def = self.properties[self.RESOURCE] defn_data = dict(HOTInterpreter()._rsrc_defn_args(None, 'member', resource_def)) return rsrc_defn.ResourceDefinition(None, **defn_data) def _try_rolling_update(self, prop_diff): if self.RESOURCE in prop_diff: policy = self.properties[self.ROLLING_UPDATES] self._replace(policy[self.MIN_IN_SERVICE], policy[self.MAX_BATCH_SIZE], policy[self.PAUSE_TIME]) def _create_template(self, num_instances, num_replace=0, template_version=('heat_template_version', '2015-04-30')): """Create a template in the HOT format for the nested stack.""" return super(AutoScalingResourceGroup, self)._create_template(num_instances, num_replace, template_version=template_version) def get_attribute(self, key, *path): if key == self.CURRENT_SIZE: return grouputils.get_size(self) if key == self.REFS: refs = grouputils.get_member_refids(self) return refs if key == self.REFS_MAP: members = grouputils.get_members(self) refs_map = {m.name: m.resource_id for m in members} return refs_map if path: members = grouputils.get_members(self) attrs = ((rsrc.name, rsrc.FnGetAtt(*path)) for rsrc in members) if key == self.OUTPUTS: return dict(attrs) if key == self.OUTPUTS_LIST: return [value for name, value in attrs] if key.startswith("resource."): return grouputils.get_nested_attrs(self, key, True, *path) raise exception.InvalidTemplateAttribute(resource=self.name, key=key) def _nested_output_defns(self, resource_names, get_attr_fn, get_res_fn): for attr in self.referenced_attrs(): if isinstance(attr, six.string_types): key, path = attr, [] else: key, path = attr[0], list(attr[1:]) # Always use map types, as list order is not defined at # template generation time. if key == self.OUTPUTS_LIST: key = self.OUTPUTS if key == self.REFS: key = self.REFS_MAP if key.startswith("resource."): keycomponents = key.split('.', 2) path = keycomponents[2:] + path if path: key = self.OUTPUTS else: key = self.REFS_MAP output_name = ', '.join(six.text_type(a) for a in [key] + path) value = None if key == self.REFS_MAP: value = {r: get_res_fn(r) for r in resource_names} elif key == self.OUTPUTS and path: value = {r: get_attr_fn([r] + path) for r in resource_names} if value is not None: yield output.OutputDefinition(output_name, value) def resource_mapping(): return { 'OS::Heat::AutoScalingGroup': AutoScalingResourceGroup, } heat-10.0.2/heat/engine/resources/openstack/heat/__init__.py0000666000175000017500000000000013343562337023757 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/heat/ha_restarter.py0000666000175000017500000000433013343562340024707 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from heat.common.i18n import _ from heat.engine.resources.openstack.heat import none_resource from heat.engine import support LOG = logging.getLogger(__name__) class Restarter(none_resource.NoneResource): support_status = support.SupportStatus( status=support.HIDDEN, version='10.0.0', message=_('The HARestarter resource type has been removed. Existing ' 'stacks containing HARestarter resources can still be ' 'used, but the HARestarter resource will be a placeholder ' 'that does nothing.'), previous_status=support.SupportStatus( status=support.DEPRECATED, message=_('The HARestarter resource type is deprecated and will ' 'be removed in a future release of Heat, once it has ' 'support for auto-healing any type of resource. Note ' 'that HARestarter does *not* actually restart ' 'servers - it deletes and then recreates them. It also ' 'does the same to all dependent resources, and may ' 'therefore exhibit unexpected and undesirable ' 'behaviour. Instead, use the mark-unhealthy API to ' 'mark a resource as needing replacement, and then a ' 'stack update to perform the replacement while ' 'respecting the dependencies and not deleting them ' 'unnecessarily.'), version='2015.1')) def resource_mapping(): return { 'OS::Heat::HARestarter': Restarter, } heat-10.0.2/heat/engine/resources/openstack/heat/scaling_policy.py0000666000175000017500000001662113343562340025231 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources import signal_responder from heat.engine import support from heat.scaling import scalingutil as sc_util LOG = logging.getLogger(__name__) class AutoScalingPolicy(signal_responder.SignalResponder): """A resource to manage scaling of `OS::Heat::AutoScalingGroup`. **Note** while it may incidentally support `AWS::AutoScaling::AutoScalingGroup` for now, please don't use it for that purpose and use `AWS::AutoScaling::ScalingPolicy` instead. Resource to manage scaling for `OS::Heat::AutoScalingGroup`, i.e. define which metric should be scaled and scaling adjustment, set cooldown etc. 
""" PROPERTIES = ( AUTO_SCALING_GROUP_NAME, SCALING_ADJUSTMENT, ADJUSTMENT_TYPE, COOLDOWN, MIN_ADJUSTMENT_STEP ) = ( 'auto_scaling_group_id', 'scaling_adjustment', 'adjustment_type', 'cooldown', 'min_adjustment_step', ) ATTRIBUTES = ( ALARM_URL, SIGNAL_URL ) = ( 'alarm_url', 'signal_url' ) properties_schema = { # TODO(Qiming): property name should be AUTO_SCALING_GROUP_ID AUTO_SCALING_GROUP_NAME: properties.Schema( properties.Schema.STRING, _('AutoScaling group ID to apply policy to.'), required=True ), SCALING_ADJUSTMENT: properties.Schema( properties.Schema.NUMBER, _('Size of adjustment.'), required=True, update_allowed=True ), ADJUSTMENT_TYPE: properties.Schema( properties.Schema.STRING, _('Type of adjustment (absolute or percentage).'), required=True, constraints=[ constraints.AllowedValues( [sc_util.CHANGE_IN_CAPACITY, sc_util.EXACT_CAPACITY, sc_util.PERCENT_CHANGE_IN_CAPACITY]), ], update_allowed=True ), COOLDOWN: properties.Schema( properties.Schema.NUMBER, _('Cooldown period, in seconds.'), update_allowed=True ), MIN_ADJUSTMENT_STEP: properties.Schema( properties.Schema.INTEGER, _('Minimum number of resources that are added or removed ' 'when the AutoScaling group scales up or down. This can ' 'be used only when specifying percent_change_in_capacity ' 'for the adjustment_type property.'), constraints=[ constraints.Range( min=0, ), ], update_allowed=True ), } attributes_schema = { ALARM_URL: attributes.Schema( _("A signed url to handle the alarm."), type=attributes.Schema.STRING ), SIGNAL_URL: attributes.Schema( _("A url to handle the alarm using native API."), support_status=support.SupportStatus(version='5.0.0'), type=attributes.Schema.STRING ), } def validate(self): """Add validation for min_adjustment_step.""" super(AutoScalingPolicy, self).validate() self._validate_min_adjustment_step() def _validate_min_adjustment_step(self): adjustment_type = self.properties.get(self.ADJUSTMENT_TYPE) adjustment_step = self.properties.get(self.MIN_ADJUSTMENT_STEP) if (adjustment_type != sc_util.PERCENT_CHANGE_IN_CAPACITY and adjustment_step is not None): raise exception.ResourcePropertyValueDependency( prop1=self.MIN_ADJUSTMENT_STEP, prop2=self.ADJUSTMENT_TYPE, value=sc_util.PERCENT_CHANGE_IN_CAPACITY) def handle_create(self): super(AutoScalingPolicy, self).handle_create() self.resource_id_set(self._get_user_id()) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Updates self.properties, if Properties has changed. If Properties has changed, update self.properties, so we get the new values during any subsequent adjustment. """ if prop_diff: self.properties = json_snippet.properties(self.properties_schema, self.context) def handle_signal(self, details=None): # Template author can use scaling policy with any of the actions # of an alarm (i.e alarm_actions, insufficient_data_actions) and # it would be actioned irrespective of the alarm state. It's # fair to assume that the alarm state would be the appropriate one. # The responsibility of using a scaling policy with desired actions # lies with the template author, though this is normally expected to # be used with 'alarm_actions'. # # We also assume that the alarm state is 'alarm' when 'details' is None # or no 'current'/'state' key in 'details'. Watchrule has upper case # states, so we lower() them. This is only used for logging the alarm # state. 
if details is None: alarm_state = 'alarm' else: alarm_state = details.get('current', details.get('state', 'alarm')).lower() LOG.info('Alarm %(name)s, new state %(state)s', {'name': self.name, 'state': alarm_state}) asgn_id = self.properties[self.AUTO_SCALING_GROUP_NAME] group = self.stack.resource_by_refid(asgn_id) if group is None: raise exception.NotFound(_('Alarm %(alarm)s could not find ' 'scaling group named "%(group)s"' ) % {'alarm': self.name, 'group': asgn_id}) LOG.info('%(name)s alarm, adjusting group %(group)s with id ' '%(asgn_id)s by %(filter)s', {'name': self.name, 'group': group.name, 'asgn_id': asgn_id, 'filter': self.properties[self.SCALING_ADJUSTMENT]}) with group.frozen_properties(): group.adjust( self.properties[self.SCALING_ADJUSTMENT], self.properties[self.ADJUSTMENT_TYPE], self.properties[self.MIN_ADJUSTMENT_STEP], self.properties[self.COOLDOWN]) def _resolve_attribute(self, name): if self.resource_id is None: return if name == self.ALARM_URL: return six.text_type(self._get_ec2_signed_url()) elif name == self.SIGNAL_URL: return six.text_type(self._get_heat_signal_url()) def resource_mapping(): return { 'OS::Heat::ScalingPolicy': AutoScalingPolicy, } heat-10.0.2/heat/engine/resources/openstack/heat/remote_stack.py0000666000175000017500000002652113343562340024712 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils import six from heat.common import context from heat.common import exception from heat.common.i18n import _ from heat.common import template_format from heat.engine import attributes from heat.engine import environment from heat.engine import properties from heat.engine import resource from heat.engine import template class RemoteStack(resource.Resource): """A Resource representing a stack. A resource that allows for the creation of a stack in a (potentially remote) region. The stack template must be provided in HOT format, together with values for any template parameters that have no default, and optionally a timeout for the creation. After creation, the current stack will hold a reference to the remote stack.
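For illustration (hypothetical values, not part of the original docstring), creating the remote stack boils down to a python-heatclient call against the target region, roughly::

    # 'heat' is assumed to be a Client scoped to the target region
    heat.stacks.create(stack_name='child_stack',
                       template=hot_template_dict,
                       parameters={'key_name': 'mykey'},
                       timeout_mins=10,
                       disable_rollback=True)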
""" default_client_name = 'heat' PROPERTIES = ( CONTEXT, TEMPLATE, TIMEOUT, PARAMETERS, ) = ( 'context', 'template', 'timeout', 'parameters', ) ATTRIBUTES = ( NAME_ATTR, OUTPUTS, ) = ( 'stack_name', 'outputs', ) _CONTEXT_KEYS = ( REGION_NAME ) = ( 'region_name' ) properties_schema = { CONTEXT: properties.Schema( properties.Schema.MAP, _('Context for this stack.'), schema={ REGION_NAME: properties.Schema( properties.Schema.STRING, _('Region name in which this stack will be created.'), required=True, ) } ), TEMPLATE: properties.Schema( properties.Schema.STRING, _('Template that specifies the stack to be created as ' 'a resource.'), required=True, update_allowed=True ), TIMEOUT: properties.Schema( properties.Schema.INTEGER, _('Number of minutes to wait for this stack creation.'), update_allowed=True ), PARAMETERS: properties.Schema( properties.Schema.MAP, _('Set of parameters passed to this stack.'), default={}, update_allowed=True ), } attributes_schema = { NAME_ATTR: attributes.Schema( _('Name of the stack.'), type=attributes.Schema.STRING ), OUTPUTS: attributes.Schema( _('A dict of key-value pairs output from the stack.'), type=attributes.Schema.MAP ), } def __init__(self, name, definition, stack): super(RemoteStack, self).__init__(name, definition, stack) self._region_name = None self._local_context = None def _context(self): if self._local_context: return self._local_context ctx_props = self.properties.get(self.CONTEXT) if ctx_props: self._region_name = ctx_props[self.REGION_NAME] else: self._region_name = self.context.region_name # Build RequestContext from existing one dict_ctxt = self.context.to_dict() dict_ctxt.update({'region_name': self._region_name, 'overwrite': False}) self._local_context = context.RequestContext.from_dict(dict_ctxt) return self._local_context def heat(self): # A convenience method overriding Resource.heat() return self._context().clients.client(self.default_client_name) def client_plugin(self): # A convenience method overriding Resource.client_plugin() return self._context().clients.client_plugin(self.default_client_name) def validate(self): super(RemoteStack, self).validate() try: self.heat() except Exception as ex: exc_info = dict(region=self._region_name, exc=six.text_type(ex)) msg = _('Cannot establish connection to Heat endpoint at region ' '"%(region)s" due to "%(exc)s"') % exc_info raise exception.StackValidationFailed(message=msg) try: params = self.properties[self.PARAMETERS] env = environment.get_child_environment(self.stack.env, params) tmpl = template_format.parse(self.properties[self.TEMPLATE]) args = { 'template': tmpl, 'files': self.stack.t.files, 'environment': env.user_env_as_dict(), } self.heat().stacks.validate(**args) except Exception as ex: exc_info = dict(region=self._region_name, exc=six.text_type(ex)) msg = _('Failed validating stack template using Heat endpoint at ' 'region "%(region)s" due to "%(exc)s"') % exc_info raise exception.StackValidationFailed(message=msg) def handle_create(self): params = self.properties[self.PARAMETERS] env = environment.get_child_environment(self.stack.env, params) tmpl = template_format.parse(self.properties[self.TEMPLATE]) args = { 'stack_name': self.physical_resource_name_or_FnGetRefId(), 'template': tmpl, 'timeout_mins': self.properties[self.TIMEOUT], 'disable_rollback': True, 'parameters': params, 'files': self.stack.t.files, 'environment': env.user_env_as_dict(), } remote_stack_id = self.heat().stacks.create(**args)['stack']['id'] self.resource_id_set(remote_stack_id) def handle_delete(self): if 
self.resource_id is not None: with self.client_plugin().ignore_not_found: self.heat().stacks.delete(stack_id=self.resource_id) def handle_resume(self): if self.resource_id is None: raise exception.Error(_('Cannot resume %s, resource not found') % self.name) self.heat().actions.resume(stack_id=self.resource_id) def handle_suspend(self): if self.resource_id is None: raise exception.Error(_('Cannot suspend %s, resource not found') % self.name) self.heat().actions.suspend(stack_id=self.resource_id) def handle_snapshot(self): snapshot = self.heat().stacks.snapshot(stack_id=self.resource_id) self.data_set('snapshot_id', snapshot['id']) def handle_restore(self, defn, restore_data): snapshot_id = restore_data['resource_data']['snapshot_id'] snapshot = self.heat().stacks.snapshot_show(self.resource_id, snapshot_id) s_data = snapshot['snapshot']['data'] env = environment.Environment(s_data['environment']) files = s_data['files'] tmpl = template.Template(s_data['template'], env=env, files=files) props = dict((k, v) for k, v in self.properties.items() if k in self.properties.data) props[self.TEMPLATE] = jsonutils.dumps(tmpl.t) props[self.PARAMETERS] = env.params return defn.freeze(properties=props) def handle_check(self): self.heat().actions.check(stack_id=self.resource_id) def _needs_update(self, after, before, after_props, before_props, prev_resource, check_init_complete=True): # If resource is in CHECK_FAILED state, raise UpdateReplace # to replace the failed stack. if self.state == (self.CHECK, self.FAILED): raise resource.UpdateReplace(self) # Always issue an update to the remote stack and let the individual # resources in it decide if they need updating. return True def handle_update(self, json_snippet, tmpl_diff, prop_diff): # Always issue an update to the remote stack and let the individual # resources in it decide if they need updating. if self.resource_id: self.properties = json_snippet.properties(self.properties_schema, self.context) params = self.properties[self.PARAMETERS] env = environment.get_child_environment(self.stack.env, params) tmpl = template_format.parse(self.properties[self.TEMPLATE]) fields = { 'stack_id': self.resource_id, 'parameters': params, 'template': tmpl, 'timeout_mins': self.properties[self.TIMEOUT], 'disable_rollback': self.stack.disable_rollback, 'files': self.stack.t.files, 'environment': env.user_env_as_dict(), } self.heat().stacks.update(**fields) def _check_action_complete(self, action): stack = self.heat().stacks.get(stack_id=self.resource_id) if stack.action != action: return False if stack.status == self.IN_PROGRESS: return False elif stack.status == self.COMPLETE: return True elif stack.status == self.FAILED: raise exception.ResourceInError( resource_status=stack.stack_status, status_reason=stack.stack_status_reason) else: # Note: this should never happen, so it really means that # the resource/engine is in serious problem if it happens. 
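# (Added note, not in the original source: reaching this branch means
# the remote stack reported a status other than IN_PROGRESS, COMPLETE
# or FAILED for the expected action, so it is surfaced as an unknown
# status rather than silently ignored.)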
raise exception.ResourceUnknownStatus( resource_status=stack.stack_status, status_reason=stack.stack_status_reason) def check_create_complete(self, *args): return self._check_action_complete(action=self.CREATE) def check_delete_complete(self, *args): if self.resource_id is None: return True try: return self._check_action_complete(action=self.DELETE) except Exception as ex: self.client_plugin().ignore_not_found(ex) return True def check_resume_complete(self, *args): return self._check_action_complete(action=self.RESUME) def check_suspend_complete(self, *args): return self._check_action_complete(action=self.SUSPEND) def check_update_complete(self, *args): return self._check_action_complete(action=self.UPDATE) def check_snapshot_complete(self, *args): return self._check_action_complete(action=self.SNAPSHOT) def check_check_complete(self, *args): return self._check_action_complete(action=self.CHECK) def _resolve_attribute(self, name): if self.resource_id is None: return stack = self.heat().stacks.get(stack_id=self.resource_id) if name == self.NAME_ATTR: value = getattr(stack, name, None) return value or self.physical_resource_name_or_FnGetRefId() if name == self.OUTPUTS: outputs = stack.outputs return dict((output['output_key'], output['output_value']) for output in outputs) def get_reference_id(self): return self.resource_id def resource_mapping(): return { 'OS::Heat::Stack': RemoteStack, } heat-10.0.2/heat/engine/resources/openstack/heat/wait_condition.py0000666000175000017500000001311513343562340025237 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import timeutils import six from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources import wait_condition as wc_base from heat.engine import support LOG = logging.getLogger(__name__) class HeatWaitCondition(resource.Resource): """Resource for handling signals received by WaitConditionHandle. This resource takes a WaitConditionHandle and remains in CREATE_IN_PROGRESS status until the handle has received the required number of successful signals (the number can be specified with the count property); it then completes successfully, or fails once the timeout expires.
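An illustrative signal (not part of the original docstring): the server being waited on typically POSTs JSON to the handle's signed URL, e.g.::

    import json
    import requests

    # handle_url is hypothetical: the endpoint exposed by the
    # associated OS::Heat::WaitConditionHandle resource.
    requests.post(handle_url,
                  data=json.dumps({'status': 'SUCCESS',
                                   'data': 'server is ready'}),
                  headers={'Content-Type': 'application/json'})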
""" support_status = support.SupportStatus(version='2014.2') PROPERTIES = ( HANDLE, TIMEOUT, COUNT, ) = ( 'handle', 'timeout', 'count', ) ATTRIBUTES = ( DATA, ) = ( 'data', ) properties_schema = { HANDLE: properties.Schema( properties.Schema.STRING, _('A reference to the wait condition handle used to signal this ' 'wait condition.'), required=True ), TIMEOUT: properties.Schema( properties.Schema.NUMBER, _('The number of seconds to wait for the correct number of ' 'signals to arrive.'), required=True, constraints=[ constraints.Range(1, 43200), ] ), COUNT: properties.Schema( properties.Schema.INTEGER, _('The number of success signals that must be received before ' 'the stack creation process continues.'), constraints=[ constraints.Range(min=1), ], default=1, update_allowed=True ), } attributes_schema = { DATA: attributes.Schema( _('JSON string containing data associated with wait ' 'condition signals sent to the handle.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ), } def _get_handle_resource(self): return self.stack.resource_by_refid(self.properties[self.HANDLE]) def _validate_handle_resource(self, handle): if handle is not None and isinstance( handle, wc_base.BaseWaitConditionHandle): return LOG.debug("Got %r instead of wait condition handle", handle) hn = handle.name if handle else self.properties[self.HANDLE] msg = _('%s is not a valid wait condition handle.') % hn raise ValueError(msg) def _wait(self, handle, started_at, timeout_in): if timeutils.is_older_than(started_at, timeout_in): exc = wc_base.WaitConditionTimeout(self, handle) LOG.info('%(name)s Timed out (%(timeout)s)', {'name': str(self), 'timeout': str(exc)}) raise exc handle_status = handle.get_status() if any(s != handle.STATUS_SUCCESS for s in handle_status): failure = wc_base.WaitConditionFailure(self, handle) LOG.info('%(name)s Failed (%(failure)s)', {'name': str(self), 'failure': str(failure)}) raise failure if len(handle_status) >= self.properties[self.COUNT]: LOG.info("%s Succeeded", str(self)) return True return False def handle_create(self): handle = self._get_handle_resource() self._validate_handle_resource(handle) started_at = timeutils.utcnow() return handle, started_at, float(self.properties[self.TIMEOUT]) def check_create_complete(self, data): return self._wait(*data) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.properties = json_snippet.properties(self.properties_schema, self.context) handle = self._get_handle_resource() started_at = timeutils.utcnow() return handle, started_at, float(self.properties[self.TIMEOUT]) def check_update_complete(self, data): return self._wait(*data) def handle_delete(self): handle = self._get_handle_resource() if handle: handle.metadata_set({}) def _resolve_attribute(self, key): handle = self._get_handle_resource() if handle is None: return '' if key == self.DATA: meta = handle.metadata_get(refresh=True) res = {k: meta[k][handle.DATA] for k in meta} LOG.debug('%(name)s.GetAtt(%(key)s) == %(res)s' % {'name': self.name, 'key': key, 'res': res}) return six.text_type(jsonutils.dumps(res)) def resource_mapping(): return { 'OS::Heat::WaitCondition': HeatWaitCondition, } heat-10.0.2/heat/engine/resources/openstack/heat/multi_part.py0000666000175000017500000001402613343562340024407 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import email from email.mime import multipart from email.mime import text import os from oslo_utils import uuidutils from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.heat import software_config from heat.engine import support from heat.rpc import api as rpc_api class MultipartMime(software_config.SoftwareConfig): """Assembles a collection of software configurations as a multi-part mime. Parts in the message can be populated with inline configuration or references to other config resources. If the referenced resource is itself a valid multi-part mime message, that will be broken into parts and those parts appended to this message. The resulting multi-part mime message will be stored by the configs API and can be referenced in properties such as OS::Nova::Server user_data. This resource is generally used to build a list of cloud-init configuration elements including scripts and cloud-config. Since cloud-init is boot-only configuration, any changes to the definition will result in the replacement of all servers which reference it. """ support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( PARTS, CONFIG, FILENAME, TYPE, SUBTYPE ) = ( 'parts', 'config', 'filename', 'type', 'subtype' ) TYPES = ( TEXT, MULTIPART ) = ( 'text', 'multipart' ) properties_schema = { PARTS: properties.Schema( properties.Schema.LIST, _('Parts belonging to this message.'), default=[], schema=properties.Schema( properties.Schema.MAP, schema={ CONFIG: properties.Schema( properties.Schema.STRING, _('Content of part to attach, either inline or by ' 'referencing the ID of another software config ' 'resource.'), required=True ), FILENAME: properties.Schema( properties.Schema.STRING, _('Optional filename to associate with part.') ), TYPE: properties.Schema( properties.Schema.STRING, _('Whether the part content is text or multipart.'), default=TEXT, constraints=[constraints.AllowedValues(TYPES)] ), SUBTYPE: properties.Schema( properties.Schema.STRING, _('Optional subtype to specify with the type.') ), } ) ) } message = None def handle_create(self): props = { rpc_api.SOFTWARE_CONFIG_NAME: self.physical_resource_name(), rpc_api.SOFTWARE_CONFIG_CONFIG: self.get_message(), rpc_api.SOFTWARE_CONFIG_GROUP: 'Heat::Ungrouped' } sc = self.rpc_client().create_software_config(self.context, **props) self.resource_id_set(sc[rpc_api.SOFTWARE_CONFIG_ID]) def get_message(self): if self.message: return self.message subparts = [] for item in self.properties[self.PARTS]: config = item.get(self.CONFIG) part_type = item.get(self.TYPE, self.TEXT) part = config if uuidutils.is_uuid_like(config): with self.rpc_client().ignore_error_by_name('NotFound'): sc = self.rpc_client().show_software_config( self.context, config) part = sc[rpc_api.SOFTWARE_CONFIG_CONFIG] if part_type == self.MULTIPART: self._append_multiparts(subparts, part) else: filename = item.get(self.FILENAME, '') subtype = item.get(self.SUBTYPE, '') self._append_part(subparts, part, subtype, filename) mime_blob = multipart.MIMEMultipart(_subparts=subparts) self.message = 
mime_blob.as_string() return self.message @staticmethod def _append_multiparts(subparts, multi_part): multi_parts = email.message_from_string(multi_part) if not multi_parts or not multi_parts.is_multipart(): return for part in multi_parts.get_payload(): MultipartMime._append_part( subparts, part.get_payload(), part.get_content_subtype(), part.get_filename()) @staticmethod def _append_part(subparts, part, subtype, filename): if not subtype and filename: subtype = os.path.splitext(filename)[0] msg = MultipartMime._create_message(part, subtype, filename) subparts.append(msg) @staticmethod def _create_message(part, subtype, filename): charset = 'us-ascii' try: part.encode(charset) except UnicodeEncodeError: charset = 'utf-8' msg = (text.MIMEText(part, _subtype=subtype, _charset=charset) if subtype else text.MIMEText(part, _charset=charset)) if filename: msg.add_header('Content-Disposition', 'attachment', filename=filename) return msg def resource_mapping(): return { 'OS::Heat::MultipartMime': MultipartMime, } heat-10.0.2/heat/engine/resources/openstack/heat/resource_chain.py0000666000175000017500000002343713343562340025226 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools import six from oslo_log import log as logging from heat.common import exception from heat.common import grouputils from heat.common.i18n import _ from heat.engine import attributes from heat.engine import output from heat.engine import properties from heat.engine.resources import stack_resource from heat.engine import rsrc_defn from heat.engine import support from heat.scaling import template as scl_template LOG = logging.getLogger(__name__) class ResourceChain(stack_resource.StackResource): """Creates one or more resources with the same configuration. The types of resources to be created are passed into the chain through the ``resources`` property. One resource will be created for each type listed. Each is passed the configuration specified under ``resource_properties``. The ``concurrent`` property controls if the resources will be created concurrently. If omitted or set to false, each resource will be treated as having a dependency on the resource before it in the list. """ support_status = support.SupportStatus(version='6.0.0') PROPERTIES = ( RESOURCES, CONCURRENT, RESOURCE_PROPERTIES, ) = ( 'resources', 'concurrent', 'resource_properties', ) ATTRIBUTES = ( REFS, ATTR_ATTRIBUTES, ) = ( 'refs', 'attributes', ) properties_schema = { RESOURCES: properties.Schema( properties.Schema.LIST, description=_('The list of resource types to create. This list ' 'may contain type names or aliases defined in ' 'the resource registry. Specific template names ' 'are not supported.'), required=True, update_allowed=True ), CONCURRENT: properties.Schema( properties.Schema.BOOLEAN, description=_('If true, the resources in the chain will be ' 'created concurrently. 
If false or omitted, ' 'each resource will be treated as having a ' 'dependency on the previous resource in the list.'), default=False, ), RESOURCE_PROPERTIES: properties.Schema( properties.Schema.MAP, description=_('Properties to pass to each resource being created ' 'in the chain.'), ) } attributes_schema = { REFS: attributes.Schema( description=_('A list of resource IDs for the resources in ' 'the chain.'), type=attributes.Schema.LIST ), ATTR_ATTRIBUTES: attributes.Schema( description=_('A map of resource names to the specified attribute ' 'of each individual resource.'), type=attributes.Schema.MAP ), } def validate_nested_stack(self): # Check each specified resource type to ensure it's valid for resource_type in self.properties[self.RESOURCES]: try: self.stack.env.get_class_to_instantiate(resource_type) except exception.EntityNotFound: # Valid if it's a template resource pass super(ResourceChain, self).validate_nested_stack() def handle_create(self): return self.create_with_template(self.child_template()) def handle_update(self, json_snippet, tmpl_diff, prop_diff): self.properties = json_snippet.properties(self.properties_schema, self.context) return self.update_with_template(self.child_template()) def child_template(self): resource_types = self.properties[self.RESOURCES] resource_names = self._resource_names(resource_types) name_def_tuples = [] for index, rt in enumerate(resource_types): name = resource_names[index] depends_on = None if index > 0 and not self.properties[self.CONCURRENT]: depends_on = [resource_names[index - 1]] t = (name, self._build_resource_definition(name, rt, depends_on=depends_on)) name_def_tuples.append(t) nested_template = scl_template.make_template(name_def_tuples) att_func = 'get_attr' get_attr = functools.partial(nested_template.functions[att_func], None, att_func) res_func = 'get_resource' get_res = functools.partial(nested_template.functions[res_func], None, res_func) res_names = [k for k, d in name_def_tuples] for odefn in self._nested_output_defns(res_names, get_attr, get_res): nested_template.add_output(odefn) return nested_template def child_params(self): return {} def _attribute_output_name(self, *attr_path): return ', '.join(six.text_type(a) for a in attr_path) def get_attribute(self, key, *path): if key == self.ATTR_ATTRIBUTES and not path: raise exception.InvalidTemplateAttribute(resource=self.name, key=key) try: output = self.get_output(self._attribute_output_name(key, *path)) except (exception.NotFound, exception.TemplateOutputError) as op_err: resource_types = self.properties[self.RESOURCES] names = self._resource_names(resource_types) if key.startswith('resource.'): target = key.split('.', 2)[1] if target not in names: raise exception.NotFound(_("Member '%(mem)s' not " "found in group resource " "'%(grp)s'.") % {'mem': target, 'grp': self.name}) LOG.debug('Falling back to grouputils due to %s', op_err) else: if key == self.REFS: return attributes.select_from_attribute(output, path) return output if key.startswith('resource.'): return grouputils.get_nested_attrs(self, key, False, *path) if key == self.REFS: vals = [grouputils.get_rsrc_id(self, key, False, n) for n in names] return attributes.select_from_attribute(vals, path) if key == self.ATTR_ATTRIBUTES: return dict((n, grouputils.get_rsrc_attr( self, key, False, n, *path)) for n in names) path = [key] + list(path) return [grouputils.get_rsrc_attr(self, key, False, n, *path) for n in names] def _nested_output_defns(self, resource_names, get_attr_fn, get_res_fn): for attr in 
self.referenced_attrs(): if isinstance(attr, six.string_types): key, path = attr, [] else: key, path = attr[0], list(attr[1:]) output_name = self._attribute_output_name(key, *path) value = None if key.startswith("resource."): keycomponents = key.split('.', 2) res_name = keycomponents[1] attr_path = keycomponents[2:] + path if res_name in resource_names: if attr_path: value = get_attr_fn([res_name] + attr_path) else: value = get_res_fn(res_name) elif key == self.REFS: value = [get_res_fn(r) for r in resource_names] elif key == self.ATTR_ATTRIBUTES and path: value = {r: get_attr_fn([r] + path) for r in resource_names} elif key not in self.ATTRIBUTES: value = [get_attr_fn([r, key] + path) for r in resource_names] if value is not None: yield output.OutputDefinition(output_name, value) @staticmethod def _resource_names(resource_types): """Returns a list of unique resource names to create.""" return [six.text_type(i) for i, t in enumerate(resource_types)] def _build_resource_definition(self, resource_name, resource_type, depends_on=None): """Creates a definition object for one of the types in the chain. The definition will be built from the given name and type and will use the properties specified in the chain's resource_properties property. All types in the chain are given the same set of properties. :type resource_name: str :type resource_type: str :param depends_on: if specified, the new resource will depend on the resource name specified :type depends_on: str :return: resource definition suitable for adding to a template :rtype: heat.engine.rsrc_defn.ResourceDefinition """ properties = self.properties[self.RESOURCE_PROPERTIES] return rsrc_defn.ResourceDefinition(resource_name, resource_type, properties, depends=depends_on) def resource_mapping(): """Hook to install the type under a specific name.""" return { 'OS::Heat::ResourceChain': ResourceChain, } heat-10.0.2/heat/engine/resources/openstack/heat/software_deployment.py0000666000175000017500000007364113343562351026333 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import six from six import itertools import uuid from oslo_config import cfg from oslo_log import log as logging from oslo_utils import timeutils from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import output from heat.engine import properties from heat.engine import resource from heat.engine.resources.openstack.heat import resource_group from heat.engine.resources import signal_responder from heat.engine import rsrc_defn from heat.engine import software_config_io as swc_io from heat.engine import support from heat.rpc import api as rpc_api cfg.CONF.import_opt('default_deployment_signal_transport', 'heat.common.config') LOG = logging.getLogger(__name__) class SoftwareDeployment(signal_responder.SignalResponder): """This resource associates a server with some configuration. The configuration is to be deployed to that server. 
A deployment allows input values to be specified which map to the inputs schema defined in the config resource. These input values are interpreted by the configuration tool in a tool-specific manner. Whenever this resource goes to an IN_PROGRESS state, it creates an ephemeral config that includes the input values plus a number of extra inputs which have names prefixed with deploy_. The extra inputs relate to the current state of the stack, along with the information and credentials required to signal back the deployment results. Unless signal_transport=NO_SIGNAL, this resource will remain in an IN_PROGRESS state until the server signals it with the output values for that deployment. Those output values are then available as resource attributes, along with the default attributes deploy_stdout, deploy_stderr and deploy_status_code. Specifying actions other than the default CREATE and UPDATE will result in the deployment being triggered in those actions. For example, this would allow cleanup configuration to be performed during actions SUSPEND and DELETE. A config could be designed to only work with some specific actions, or a config can read the value of the deploy_action input to allow conditional logic to perform different configuration for different actions. """ support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( CONFIG, SERVER, INPUT_VALUES, DEPLOY_ACTIONS, NAME, SIGNAL_TRANSPORT ) = ( 'config', 'server', 'input_values', 'actions', 'name', 'signal_transport' ) ALLOWED_DEPLOY_ACTIONS = ( resource.Resource.CREATE, resource.Resource.UPDATE, resource.Resource.DELETE, resource.Resource.SUSPEND, resource.Resource.RESUME, ) ATTRIBUTES = ( STDOUT, STDERR, STATUS_CODE ) = ( 'deploy_stdout', 'deploy_stderr', 'deploy_status_code' ) DERIVED_CONFIG_INPUTS = ( DEPLOY_SERVER_ID, DEPLOY_ACTION, DEPLOY_SIGNAL_ID, DEPLOY_STACK_ID, DEPLOY_RESOURCE_NAME, DEPLOY_AUTH_URL, DEPLOY_USERNAME, DEPLOY_PASSWORD, DEPLOY_PROJECT_ID, DEPLOY_USER_ID, DEPLOY_SIGNAL_VERB, DEPLOY_SIGNAL_TRANSPORT, DEPLOY_QUEUE_ID, DEPLOY_REGION_NAME ) = ( 'deploy_server_id', 'deploy_action', 'deploy_signal_id', 'deploy_stack_id', 'deploy_resource_name', 'deploy_auth_url', 'deploy_username', 'deploy_password', 'deploy_project_id', 'deploy_user_id', 'deploy_signal_verb', 'deploy_signal_transport', 'deploy_queue_id', 'deploy_region_name' ) SIGNAL_TRANSPORTS = ( CFN_SIGNAL, TEMP_URL_SIGNAL, HEAT_SIGNAL, NO_SIGNAL, ZAQAR_SIGNAL ) = ( 'CFN_SIGNAL', 'TEMP_URL_SIGNAL', 'HEAT_SIGNAL', 'NO_SIGNAL', 'ZAQAR_SIGNAL' ) properties_schema = { CONFIG: properties.Schema( properties.Schema.STRING, _('ID of software configuration resource to execute when ' 'applying to the server.'), update_allowed=True ), SERVER: properties.Schema( properties.Schema.STRING, _('ID of resource to apply configuration to. ' 'Normally this should be a Nova server ID.'), required=True, ), INPUT_VALUES: properties.Schema( properties.Schema.MAP, _('Input values to apply to the software configuration on this ' 'server.'), update_allowed=True ), DEPLOY_ACTIONS: properties.Schema( properties.Schema.LIST, _('Which lifecycle actions of the deployment resource will result ' 'in this deployment being triggered.'), update_allowed=True, default=[resource.Resource.CREATE, resource.Resource.UPDATE], constraints=[constraints.AllowedValues(ALLOWED_DEPLOY_ACTIONS)] ), NAME: properties.Schema( properties.Schema.STRING, _('Name of the derived config associated with this deployment. 
' 'This is used to apply a sort order to the list of ' 'configurations currently deployed to a server.'), update_allowed=True ), SIGNAL_TRANSPORT: properties.Schema( properties.Schema.STRING, _('How the server should signal to heat with the deployment ' 'output values. CFN_SIGNAL will allow an HTTP POST to a CFN ' 'keypair signed URL. TEMP_URL_SIGNAL will create a ' 'Swift TempURL to be signaled via HTTP PUT. HEAT_SIGNAL ' 'will allow calls to the Heat API resource-signal using the ' 'provided keystone credentials. ZAQAR_SIGNAL will create a ' 'dedicated zaqar queue to be signaled using the provided ' 'keystone credentials. NO_SIGNAL will result in the resource ' 'going to the COMPLETE state without waiting for any signal.'), default=cfg.CONF.default_deployment_signal_transport, constraints=[ constraints.AllowedValues(SIGNAL_TRANSPORTS), ] ), } attributes_schema = { STDOUT: attributes.Schema( _("Captured stdout from the configuration execution."), type=attributes.Schema.STRING ), STDERR: attributes.Schema( _("Captured stderr from the configuration execution."), type=attributes.Schema.STRING ), STATUS_CODE: attributes.Schema( _("Returned status code from the configuration execution."), type=attributes.Schema.STRING ), } default_client_name = 'heat' no_signal_actions = () # No need to make metadata_update() calls since deployments have a # dedicated API for changing state on signals signal_needs_metadata_updates = False def _build_properties(self, config_id, action): props = { 'config_id': config_id, 'action': action, 'input_values': self.properties.get(self.INPUT_VALUES) } if self._signal_transport_none(): props['status'] = SoftwareDeployment.COMPLETE props['status_reason'] = _('Not waiting for outputs signal') else: props['status'] = SoftwareDeployment.IN_PROGRESS props['status_reason'] = _('Deploy data available') return props def _delete_derived_config(self, derived_config_id): with self.rpc_client().ignore_error_by_name('NotFound'): self.rpc_client().delete_software_config( self.context, derived_config_id) def _create_derived_config(self, action, source_config): derived_params = self._build_derived_config_params( action, source_config) derived_config = self.rpc_client().create_software_config( self.context, **derived_params) return derived_config[rpc_api.SOFTWARE_CONFIG_ID] def _get_derived_config_id(self): sd = self.rpc_client().show_software_deployment(self.context, self.resource_id) return sd[rpc_api.SOFTWARE_DEPLOYMENT_CONFIG_ID] def _load_config(self, config_id=None): if config_id is None: config_id = self.properties.get(self.CONFIG) if config_id: config = self.rpc_client().show_software_config(self.context, config_id) else: config = {} config[rpc_api.SOFTWARE_CONFIG_INPUTS] = [ swc_io.InputConfig(**i) for i in config.get(rpc_api.SOFTWARE_CONFIG_INPUTS, []) ] config[rpc_api.SOFTWARE_CONFIG_OUTPUTS] = [ swc_io.OutputConfig(**o) for o in config.get(rpc_api.SOFTWARE_CONFIG_OUTPUTS, []) ] return config def _handle_action(self, action, config=None, prev_derived_config=None): if config is None: config = self._load_config() if config.get(rpc_api.SOFTWARE_CONFIG_GROUP) == 'component': valid_actions = set() for conf in config[rpc_api.SOFTWARE_CONFIG_CONFIG]['configs']: valid_actions.update(conf['actions']) if action not in valid_actions: return elif action not in self.properties[self.DEPLOY_ACTIONS]: return props = self._build_properties( self._create_derived_config(action, config), action) if self.resource_id is None: resource_id = str(uuid.uuid4()) self.resource_id_set(resource_id) sd = 
self.rpc_client().create_software_deployment( self.context, deployment_id=resource_id, server_id=self.properties[SoftwareDeployment.SERVER], stack_user_project_id=self.stack.stack_user_project_id, **props) else: if prev_derived_config is None: prev_derived_config = self._get_derived_config_id() sd = self.rpc_client().update_software_deployment( self.context, deployment_id=self.resource_id, **props) if prev_derived_config: self._delete_derived_config(prev_derived_config) if not self._signal_transport_none(): # NOTE(pshchelo): sd is a simple dict, easy to serialize, # does not need fixing re LP bug #1393268 return sd def _check_complete(self): sd = self.rpc_client().show_software_deployment( self.context, self.resource_id) status = sd[rpc_api.SOFTWARE_DEPLOYMENT_STATUS] if status == SoftwareDeployment.COMPLETE: return True elif status == SoftwareDeployment.FAILED: status_reason = sd[rpc_api.SOFTWARE_DEPLOYMENT_STATUS_REASON] message = _("Deployment to server failed: %s") % status_reason LOG.info(message) raise exception.Error(message) def _server_exists(self, sd): """Returns whether or not the deployment's server exists.""" nova_client = self.client_plugin('nova') try: nova_client.get_server(sd['server_id']) return True except exception.EntityNotFound: return False def empty_config(self): return '' def _build_derived_config_params(self, action, source): derived_inputs = self._build_derived_inputs(action, source) derived_options = self._build_derived_options(action, source) derived_config = self._build_derived_config( action, source, derived_inputs, derived_options) derived_name = (self.properties.get(self.NAME) or source.get(rpc_api.SOFTWARE_CONFIG_NAME)) return { rpc_api.SOFTWARE_CONFIG_GROUP: source.get(rpc_api.SOFTWARE_CONFIG_GROUP) or 'Heat::Ungrouped', rpc_api.SOFTWARE_CONFIG_CONFIG: derived_config or self.empty_config(), rpc_api.SOFTWARE_CONFIG_OPTIONS: derived_options, rpc_api.SOFTWARE_CONFIG_INPUTS: [i.as_dict() for i in derived_inputs], rpc_api.SOFTWARE_CONFIG_OUTPUTS: [o.as_dict() for o in source[rpc_api.SOFTWARE_CONFIG_OUTPUTS]], rpc_api.SOFTWARE_CONFIG_NAME: derived_name or self.physical_resource_name() } def _build_derived_config(self, action, source, derived_inputs, derived_options): return source.get(rpc_api.SOFTWARE_CONFIG_CONFIG) def _build_derived_options(self, action, source): return source.get(rpc_api.SOFTWARE_CONFIG_OPTIONS) def _build_derived_inputs(self, action, source): inputs = source[rpc_api.SOFTWARE_CONFIG_INPUTS] input_values = dict(self.properties[self.INPUT_VALUES] or {}) def derive_inputs(): for input_config in inputs: value = input_values.pop(input_config.name(), input_config.default()) yield swc_io.InputConfig(value=value, **input_config.as_dict()) # for any input values that do not have a declared input, add # a derived declared input so that they can be used as config # inputs for inpk, inpv in input_values.items(): yield swc_io.InputConfig(name=inpk, value=inpv) yield swc_io.InputConfig( name=self.DEPLOY_SERVER_ID, value=self.properties[self.SERVER], description=_('ID of the server being deployed to')) yield swc_io.InputConfig( name=self.DEPLOY_ACTION, value=action, description=_('Name of the current action being deployed')) yield swc_io.InputConfig( name=self.DEPLOY_STACK_ID, value=self.stack.identifier().stack_path(), description=_('ID of the stack this deployment belongs to')) yield swc_io.InputConfig( name=self.DEPLOY_RESOURCE_NAME, value=self.name, description=_('Name of this deployment resource in the stack')) yield swc_io.InputConfig( 
name=self.DEPLOY_SIGNAL_TRANSPORT, value=self.properties[self.SIGNAL_TRANSPORT], description=_('How the server should signal to heat with ' 'the deployment output values.')) if self._signal_transport_cfn(): yield swc_io.InputConfig( name=self.DEPLOY_SIGNAL_ID, value=self._get_ec2_signed_url(), description=_('ID of signal to use for signaling output ' 'values')) yield swc_io.InputConfig( name=self.DEPLOY_SIGNAL_VERB, value='POST', description=_('HTTP verb to use for signaling output ' 'values')) elif self._signal_transport_temp_url(): yield swc_io.InputConfig( name=self.DEPLOY_SIGNAL_ID, value=self._get_swift_signal_url(), description=_('ID of signal to use for signaling output ' 'values')) yield swc_io.InputConfig( name=self.DEPLOY_SIGNAL_VERB, value='PUT', description=_('HTTP verb to use for signaling output ' 'values')) elif (self._signal_transport_heat() or self._signal_transport_zaqar()): creds = self._get_heat_signal_credentials() yield swc_io.InputConfig( name=self.DEPLOY_AUTH_URL, value=creds['auth_url'], description=_('URL for API authentication')) yield swc_io.InputConfig( name=self.DEPLOY_USERNAME, value=creds['username'], description=_('Username for API authentication')) yield swc_io.InputConfig( name=self.DEPLOY_USER_ID, value=creds['user_id'], description=_('User ID for API authentication')) yield swc_io.InputConfig( name=self.DEPLOY_PASSWORD, value=creds['password'], description=_('Password for API authentication')) yield swc_io.InputConfig( name=self.DEPLOY_PROJECT_ID, value=creds['project_id'], description=_('ID of project for API authentication')) yield swc_io.InputConfig( name=self.DEPLOY_REGION_NAME, value=creds['region_name'], description=_('Region name for API authentication')) if self._signal_transport_zaqar(): yield swc_io.InputConfig( name=self.DEPLOY_QUEUE_ID, value=self._get_zaqar_signal_queue_id(), description=_('ID of queue to use for signaling output ' 'values')) return list(derive_inputs()) def handle_create(self): return self._handle_action(self.CREATE) def check_create_complete(self, sd): if not sd: return True return self._check_complete() def handle_update(self, json_snippet, tmpl_diff, prop_diff): if self.resource_id is None: prev_derived_config = None old_inputs = {} else: prev_derived_config = self._get_derived_config_id() old_config = self._load_config(prev_derived_config) old_inputs = {i.name(): i for i in old_config[rpc_api.SOFTWARE_CONFIG_INPUTS]} self.properties = json_snippet.properties(self.properties_schema, self.context) config = self._load_config() for inp in self._build_derived_inputs(self.UPDATE, config): name = inp.name() if inp.replace_on_change() and name in old_inputs: if inp.input_data() != old_inputs[name].input_data(): LOG.debug('Replacing SW Deployment due to change in ' 'input "%s"', name) raise resource.UpdateReplace return self._handle_action(self.UPDATE, config=config, prev_derived_config=prev_derived_config) def check_update_complete(self, sd): if not sd: return True return self._check_complete() def handle_delete(self): with self.rpc_client().ignore_error_by_name('NotFound'): return self._handle_action(self.DELETE) def check_delete_complete(self, sd=None): if not sd or not self._server_exists(sd) or self._check_complete(): self._delete_resource() return True def _delete_resource(self): derived_config_id = None if self.resource_id is not None: with self.rpc_client().ignore_error_by_name('NotFound'): derived_config_id = self._get_derived_config_id() self.rpc_client().delete_software_deployment( self.context, self.resource_id) if 
derived_config_id: self._delete_derived_config(derived_config_id) self._delete_signals() self._delete_user() def handle_suspend(self): return self._handle_action(self.SUSPEND) def check_suspend_complete(self, sd): if not sd: return True return self._check_complete() def handle_resume(self): return self._handle_action(self.RESUME) def check_resume_complete(self, sd): if not sd: return True return self._check_complete() def handle_signal(self, details): return self.rpc_client().signal_software_deployment( self.context, self.resource_id, details, timeutils.utcnow().isoformat()) def _handle_cancel(self): if self.resource_id is None: return sd = self.rpc_client().show_software_deployment( self.context, self.resource_id) if sd is None: return status = sd[rpc_api.SOFTWARE_DEPLOYMENT_STATUS] if status == SoftwareDeployment.IN_PROGRESS: self.rpc_client().update_software_deployment( self.context, self.resource_id, status=SoftwareDeployment.FAILED, status_reason=_('Deployment cancelled.')) def handle_create_cancel(self, cookie): self._handle_cancel() def handle_update_cancel(self, cookie): self._handle_cancel() def handle_delete_cancel(self, cookie): self._handle_cancel() def handle_suspend_cancel(self, cookie): self._handle_cancel() def handle_resume_cancel(self, cookie): self._handle_cancel() def get_attribute(self, key, *path): """Resource attributes map to deployment outputs values.""" sd = self.rpc_client().show_software_deployment( self.context, self.resource_id) ov = sd[rpc_api.SOFTWARE_DEPLOYMENT_OUTPUT_VALUES] or {} if key in ov: attribute = ov.get(key) return attributes.select_from_attribute(attribute, path) # Since there is no value for this key yet, check the output schemas # to find out if the key is valid sc = self.rpc_client().show_software_config( self.context, self.properties[self.CONFIG]) outputs = sc[rpc_api.SOFTWARE_CONFIG_OUTPUTS] or [] output_keys = [output['name'] for output in outputs] if key not in output_keys and key not in self.ATTRIBUTES: raise exception.InvalidTemplateAttribute(resource=self.name, key=key) return None def validate(self): """Validate any of the provided params. :raises StackValidationFailed: if any property failed validation. """ super(SoftwareDeployment, self).validate() server = self.properties[self.SERVER] if server: res = self.stack.resource_by_refid(server) if res: user_data_format = res.properties.get('user_data_format') if user_data_format and \ not (user_data_format == 'SOFTWARE_CONFIG'): raise exception.StackValidationFailed(message=_( "Resource %s's property user_data_format should be " "set to SOFTWARE_CONFIG since there are software " "deployments on it.") % server) class SoftwareDeploymentGroup(resource_group.ResourceGroup): """This resource associates a group of servers with some configuration. The configuration is to be deployed to all servers in the group. The properties work in a similar way to OS::Heat::SoftwareDeployment, and in addition to the attributes documented, you may pass any attribute supported by OS::Heat::SoftwareDeployment, including those exposing arbitrary outputs, and return a map of deployment names to the specified attribute. 
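For illustration (hypothetical names, not part of the original docstring): with ``servers`` set to ``{'server1': '<id1>', 'server2': '<id2>'}``, the ``deploy_stdouts`` attribute resolves to a per-server map such as::

    {'server1': 'stdout of server1', 'server2': 'stdout of server2'}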
""" support_status = support.SupportStatus(version='5.0.0') PROPERTIES = ( SERVERS, CONFIG, INPUT_VALUES, DEPLOY_ACTIONS, NAME, SIGNAL_TRANSPORT, ) = ( 'servers', SoftwareDeployment.CONFIG, SoftwareDeployment.INPUT_VALUES, SoftwareDeployment.DEPLOY_ACTIONS, SoftwareDeployment.NAME, SoftwareDeployment.SIGNAL_TRANSPORT, ) ATTRIBUTES = ( STDOUTS, STDERRS, STATUS_CODES ) = ( 'deploy_stdouts', 'deploy_stderrs', 'deploy_status_codes' ) _ROLLING_UPDATES_SCHEMA_KEYS = ( MAX_BATCH_SIZE, PAUSE_TIME, ) = ( resource_group.ResourceGroup.MAX_BATCH_SIZE, resource_group.ResourceGroup.PAUSE_TIME, ) _sd_ps = SoftwareDeployment.properties_schema _rg_ps = resource_group.ResourceGroup.properties_schema properties_schema = { SERVERS: properties.Schema( properties.Schema.MAP, _('A map of names and server IDs to apply configuration to. The ' 'name is arbitrary and is used as the Heat resource name ' 'for the corresponding deployment.'), update_allowed=True, required=True ), CONFIG: _sd_ps[CONFIG], INPUT_VALUES: _sd_ps[INPUT_VALUES], DEPLOY_ACTIONS: _sd_ps[DEPLOY_ACTIONS], NAME: _sd_ps[NAME], SIGNAL_TRANSPORT: _sd_ps[SIGNAL_TRANSPORT] } attributes_schema = { STDOUTS: attributes.Schema( _("A map of Nova names and captured stdouts from the " "configuration execution to each server."), type=attributes.Schema.MAP ), STDERRS: attributes.Schema( _("A map of Nova names and captured stderrs from the " "configuration execution to each server."), type=attributes.Schema.MAP ), STATUS_CODES: attributes.Schema( _("A map of Nova names and returned status code from the " "configuration execution."), type=attributes.Schema.MAP ), } rolling_update_schema = { MAX_BATCH_SIZE: properties.Schema( properties.Schema.INTEGER, _('The maximum number of deployments to replace at once.'), constraints=[constraints.Range(min=1)], default=1), PAUSE_TIME: properties.Schema( properties.Schema.NUMBER, _('The number of seconds to wait between batches of ' 'updates.'), constraints=[constraints.Range(min=0)], default=0), } update_policy_schema = { resource_group.ResourceGroup.ROLLING_UPDATE: properties.Schema( properties.Schema.MAP, schema=rolling_update_schema, support_status=support.SupportStatus(version='7.0.0') ), resource_group.ResourceGroup.BATCH_CREATE: properties.Schema( properties.Schema.MAP, schema=resource_group.ResourceGroup.batch_create_schema, support_status=support.SupportStatus(version='7.0.0') ) } def get_size(self): return len(self.properties[self.SERVERS]) def _resource_names(self, size=None, update_rsrc_data=True): candidates = self.properties[self.SERVERS] if size is None: return iter(candidates) return itertools.islice(candidates, size) def res_def_changed(self, prop_diff): return True def _update_name_blacklist(self, properties): pass def _name_blacklist(self): return set() def get_resource_def(self, include_all=False): return dict(self.properties) def build_resource_definition(self, res_name, res_defn): props = copy.deepcopy(res_defn) servers = props.pop(self.SERVERS) props[SoftwareDeployment.SERVER] = servers.get(res_name) return rsrc_defn.ResourceDefinition(res_name, 'OS::Heat::SoftwareDeployment', props, None) def _member_attribute_name(self, key): if key == self.STDOUTS: n_attr = SoftwareDeployment.STDOUT elif key == self.STDERRS: n_attr = SoftwareDeployment.STDERR elif key == self.STATUS_CODES: n_attr = SoftwareDeployment.STATUS_CODE else: # Allow any attribute valid for a single SoftwareDeployment # including arbitrary outputs, so we can't validate here n_attr = key return n_attr def get_attribute(self, key, *path): rg = 
super(SoftwareDeploymentGroup, self) n_attr = self._member_attribute_name(key) rg_attr = rg.get_attribute(rg.ATTR_ATTRIBUTES, n_attr) return attributes.select_from_attribute(rg_attr, path) def _nested_output_defns(self, resource_names, get_attr_fn, get_res_fn): for attr in self.referenced_attrs(): key = attr if isinstance(attr, six.string_types) else attr[0] n_attr = self._member_attribute_name(key) output_name = self._attribute_output_name(self.ATTR_ATTRIBUTES, n_attr) value = {r: get_attr_fn([r, n_attr]) for r in resource_names} yield output.OutputDefinition(output_name, value) def _try_rolling_update(self): if self.update_policy[self.ROLLING_UPDATE]: policy = self.update_policy[self.ROLLING_UPDATE] return self._replace(0, policy[self.MAX_BATCH_SIZE], policy[self.PAUSE_TIME]) class SoftwareDeployments(SoftwareDeploymentGroup): hidden_msg = _('Please use OS::Heat::SoftwareDeploymentGroup instead.') support_status = support.SupportStatus( status=support.HIDDEN, message=hidden_msg, version='7.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2'), substitute_class=SoftwareDeploymentGroup) def resource_mapping(): return { 'OS::Heat::SoftwareDeployment': SoftwareDeployment, 'OS::Heat::SoftwareDeploymentGroup': SoftwareDeploymentGroup, 'OS::Heat::SoftwareDeployments': SoftwareDeployments, } heat-10.0.2/heat/engine/resources/openstack/heat/cloud_watch.py0000666000175000017500000000267313343562340024530 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from heat.common.i18n import _ from heat.engine.resources.openstack.heat import none_resource from heat.engine import support class CloudWatchAlarm(none_resource.NoneResource): support_status = support.SupportStatus( status=support.HIDDEN, message=_('OS::Heat::CWLiteAlarm resource has been removed ' 'since version 10.0.0. Existing stacks can still ' 'use it, where it would do nothing for update/delete.'), version='5.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2') ) def resource_mapping(): cfg.CONF.import_opt('enable_cloud_watch_lite', 'heat.common.config') if cfg.CONF.enable_cloud_watch_lite: return { 'OS::Heat::CWLiteAlarm': CloudWatchAlarm, } else: return {} heat-10.0.2/heat/engine/resources/openstack/heat/value.py0000666000175000017500000000752313343562340023347 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
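# Added illustrative sketch (not part of the original file): a Value
# resource exposes its computed "value" property as the "value"
# attribute. With python-heatclient (hypothetical stack and resource
# names), the resolved attribute could be read back roughly like this:
#
#     res = heat.resources.get('mystack', 'my_value')
#     print(res.attributes['value'])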
from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine.hot import parameters from heat.engine import properties from heat.engine import resource from heat.engine import support class Value(resource.Resource): """A resource which exposes its value property as an attribute. This is useful for exposing a value that is a simple manipulation of other template parameters and/or other resources. """ support_status = support.SupportStatus(version='7.0.0') PROPERTIES = ( VALUE, TYPE, ) = ( 'value', constraints.Schema.TYPE, ) ATTRIBUTES = ( VALUE_ATTR, ) = ( 'value', ) properties_schema = { VALUE: properties.Schema( properties.Schema.ANY, _('The expression to generate the "value" attribute.'), required=True, update_allowed=True, ), TYPE: properties.Schema( properties.Schema.STRING, _('The type of the "value" property.'), constraints=[constraints.AllowedValues( parameters.HOTParamSchema.TYPES)], update_allowed=True, ), } attributes_schema = { VALUE_ATTR: attributes.Schema( _('The value generated by this resource\'s properties "value" ' 'expression, with type determined from the properties "type".'), ), } def _resolve_attribute(self, name): props = self.frozen_definition().properties(self.properties_schema, self.context) if name == self.VALUE_ATTR: return props[self.VALUE] def handle_create(self): self.resource_id_set(self.physical_resource_name()) def handle_update(self, json_snippet, tmpl_diff, prop_diff): # allow update, no replace necessary (do not throw UpdateReplace). # the resource properties are updated appropriately in parent class. pass def reparse(self, *args, **kwargs): super(Value, self).reparse(*args, **kwargs) value_type = self.properties[self.TYPE] if value_type is None: # We don't know what type the value is, anything goes return param_type_map = { parameters.HOTParamSchema.STRING: constraints.Schema.STRING, parameters.HOTParamSchema.NUMBER: constraints.Schema.NUMBER, parameters.HOTParamSchema.LIST: constraints.Schema.LIST, parameters.HOTParamSchema.MAP: constraints.Schema.MAP, parameters.HOTParamSchema.BOOLEAN: constraints.Schema.BOOLEAN, } if value_type not in param_type_map: # the self.TYPE value is not valid, the error will become # apparent appropriately when its constraints are checked. return # We know the value's type, update value's property schema accordingly self.properties.props[self.VALUE] = properties.Property( properties.Schema( param_type_map[value_type], _('The expression to generate the "value" attribute.'), required=True, update_allowed=True, )) def resource_mapping(): return { 'OS::Heat::Value': Value, } heat-10.0.2/heat/engine/resources/openstack/heat/software_config.py0000666000175000017500000001167613343562340025416 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from heat.common.i18n import _ from heat.engine import attributes from heat.engine import properties from heat.engine import resource from heat.engine import software_config_io as swc_io from heat.engine import support from heat.rpc import api as rpc_api class SoftwareConfig(resource.Resource): """A resource for describing and storing software configuration. The software_configs API which backs this resource creates immutable configs, so any change to the template resource definition will result in a new config being created, and the old one being deleted. Configs can be defined in the same template which uses them, or they can be created in one stack, and passed to another stack via a parameter. A config resource can be referenced in other resource properties which are config-aware. This includes the properties OS::Nova::Server user_data, OS::Heat::SoftwareDeployment config and OS::Heat::MultipartMime parts config. Along with the config script itself, this resource can define schemas for inputs and outputs which the config script is expected to consume and produce. Inputs and outputs are optional and will map to concepts which are specific to the configuration tool being used. """ support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( GROUP, CONFIG, OPTIONS, INPUTS, OUTPUTS ) = ( rpc_api.SOFTWARE_CONFIG_GROUP, rpc_api.SOFTWARE_CONFIG_CONFIG, rpc_api.SOFTWARE_CONFIG_OPTIONS, rpc_api.SOFTWARE_CONFIG_INPUTS, rpc_api.SOFTWARE_CONFIG_OUTPUTS, ) ATTRIBUTES = ( CONFIG_ATTR, ) = ( 'config', ) properties_schema = { GROUP: properties.Schema( properties.Schema.STRING, _('Namespace to group this software config by when delivered to ' 'a server. This may imply what configuration tool is going to ' 'perform the configuration.'), default='Heat::Ungrouped' ), CONFIG: properties.Schema( properties.Schema.STRING, _('Configuration script or manifest which specifies what actual ' 'configuration is performed.'), ), OPTIONS: properties.Schema( properties.Schema.MAP, _('Map containing options specific to the configuration ' 'management tool used by this resource.'), ), INPUTS: properties.Schema( properties.Schema.LIST, _('Schema representing the inputs that this software config is ' 'expecting.'), schema=properties.Schema(properties.Schema.MAP, schema=swc_io.input_config_schema) ), OUTPUTS: properties.Schema( properties.Schema.LIST, _('Schema representing the outputs that this software config ' 'will produce.'), schema=properties.Schema(properties.Schema.MAP, schema=swc_io.output_config_schema) ), } attributes_schema = { CONFIG_ATTR: attributes.Schema( _("The config value of the software config."), type=attributes.Schema.STRING ), } def handle_create(self): props = dict(self.properties) props[rpc_api.SOFTWARE_CONFIG_NAME] = self.physical_resource_name() sc = self.rpc_client().create_software_config(self.context, **props) self.resource_id_set(sc[rpc_api.SOFTWARE_CONFIG_ID]) def handle_delete(self): if self.resource_id is None: return with self.rpc_client().ignore_error_by_name('NotFound'): self.rpc_client().delete_software_config( self.context, self.resource_id) def _resolve_attribute(self, name): """Retrieve attributes of the SoftwareConfig resource. "config" returns the config value of the software config. If the software config does not exist, returns an empty string. 
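For example, assuming a config resource named "boot_config" is
defined in the same template, its stored script can be referenced
elsewhere with::

    {get_attr: [boot_config, config]}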
""" if name == self.CONFIG_ATTR and self.resource_id: with self.rpc_client().ignore_error_by_name('NotFound'): sc = self.rpc_client().show_software_config( self.context, self.resource_id) return sc[rpc_api.SOFTWARE_CONFIG_CONFIG] def resource_mapping(): return { 'OS::Heat::SoftwareConfig': SoftwareConfig, } heat-10.0.2/heat/engine/resources/openstack/manila/0000775000175000017500000000000013343562672022200 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/manila/share_network.py0000666000175000017500000002272213343562351025426 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class ManilaShareNetwork(resource.Resource): """A resource that stores network information for share servers. Stores network information that will be used by share servers, where shares are hosted. """ support_status = support.SupportStatus(version='5.0.0') PROPERTIES = ( NAME, NEUTRON_NETWORK, NEUTRON_SUBNET, NOVA_NETWORK, DESCRIPTION, SECURITY_SERVICES, ) = ( 'name', 'neutron_network', 'neutron_subnet', 'nova_network', 'description', 'security_services', ) ATTRIBUTES = ( SEGMENTATION_ID, CIDR, IP_VERSION, NETWORK_TYPE, ) = ( 'segmentation_id', 'cidr', 'ip_version', 'network_type', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the share network.'), update_allowed=True ), NEUTRON_NETWORK: properties.Schema( properties.Schema.STRING, _('Neutron network id.'), update_allowed=True, constraints=[constraints.CustomConstraint('neutron.network')] ), NEUTRON_SUBNET: properties.Schema( properties.Schema.STRING, _('Neutron subnet id.'), update_allowed=True, constraints=[constraints.CustomConstraint('neutron.subnet')] ), NOVA_NETWORK: properties.Schema( properties.Schema.STRING, _('Nova network id.'), update_allowed=True, ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Share network description.'), update_allowed=True ), SECURITY_SERVICES: properties.Schema( properties.Schema.LIST, _('A list of security services IDs or names.'), schema=properties.Schema( properties.Schema.STRING ), update_allowed=True, default=[] ) } attributes_schema = { SEGMENTATION_ID: attributes.Schema( _('VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN ' 'networks.'), type=attributes.Schema.STRING ), CIDR: attributes.Schema( _('CIDR of subnet.'), type=attributes.Schema.STRING ), IP_VERSION: attributes.Schema( _('Version of IP address.'), type=attributes.Schema.STRING ), NETWORK_TYPE: attributes.Schema( _('The physical mechanism by which the virtual network is ' 'implemented.'), type=attributes.Schema.STRING ), } default_client_name = 'manila' entity = 'share_networks' def _request_network(self): return self.client().share_networks.get(self.resource_id) def _resolve_attribute(self, name): if 
self.resource_id is None: return network = self._request_network() return getattr(network, name, None) def validate(self): super(ManilaShareNetwork, self).validate() if (self.properties[self.NEUTRON_NETWORK] and self.properties[self.NOVA_NETWORK]): raise exception.ResourcePropertyConflict(self.NEUTRON_NETWORK, self.NOVA_NETWORK) if (self.properties[self.NOVA_NETWORK] and self.properties[self.NEUTRON_SUBNET]): raise exception.ResourcePropertyConflict(self.NEUTRON_SUBNET, self.NOVA_NETWORK) if self.is_using_neutron() and self.properties[self.NOVA_NETWORK]: msg = _('With Neutron enabled you need to pass Neutron network ' 'and Neutron subnet instead of Nova network') raise exception.StackValidationFailed(message=msg) if (self.properties[self.NEUTRON_NETWORK] and not self.properties[self.NEUTRON_SUBNET]): raise exception.ResourcePropertyDependency( prop1=self.NEUTRON_NETWORK, prop2=self.NEUTRON_SUBNET) if (self.properties[self.NEUTRON_NETWORK] and self.properties[self.NEUTRON_SUBNET]): plg = self.client_plugin('neutron') subnet_id = plg.find_resourceid_by_name_or_id( 'subnet', self.properties[self.NEUTRON_SUBNET]) net_id = plg.network_id_from_subnet_id(subnet_id) provided_net_id = plg.find_resourceid_by_name_or_id( 'network', self.properties[self.NEUTRON_NETWORK]) if net_id != provided_net_id: msg = (_('Provided %(subnet)s does not belong ' 'to provided %(network)s.') % {'subnet': self.NEUTRON_SUBNET, 'network': self.NEUTRON_NETWORK}) raise exception.StackValidationFailed(message=msg) def translation_rules(self, props): translation_rules = [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.NEUTRON_NETWORK], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='network' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.NEUTRON_SUBNET], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='subnet' ) ] return translation_rules def handle_create(self): neutron_subnet_id = self.properties[self.NEUTRON_SUBNET] neutron_net_id = self.properties[self.NEUTRON_NETWORK] if neutron_subnet_id and not neutron_net_id: neutron_net_id = self.client_plugin( 'neutron').network_id_from_subnet_id(neutron_subnet_id) network = self.client().share_networks.create( name=self.properties[self.NAME], neutron_net_id=neutron_net_id, neutron_subnet_id=neutron_subnet_id, nova_net_id=self.properties[self.NOVA_NETWORK], description=self.properties[self.DESCRIPTION]) self.resource_id_set(network.id) for service in self.properties.get(self.SECURITY_SERVICES): self.client().share_networks.add_security_service( self.resource_id, self.client_plugin().get_security_service(service).id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if self.SECURITY_SERVICES in prop_diff: services = prop_diff.pop(self.SECURITY_SERVICES) s_curr = set([self.client_plugin().get_security_service(s).id for s in self.properties.get( self.SECURITY_SERVICES)]) s_new = set([self.client_plugin().get_security_service(s).id for s in services]) for service in s_curr - s_new: self.client().share_networks.remove_security_service( self.resource_id, service) for service in s_new - s_curr: self.client().share_networks.add_security_service( self.resource_id, service) if prop_diff: neutron_subnet_id = prop_diff.get(self.NEUTRON_SUBNET) neutron_net_id = prop_diff.get(self.NEUTRON_NETWORK) if neutron_subnet_id and not neutron_net_id: neutron_net_id = self.client_plugin( 
'neutron').network_id_from_subnet_id(neutron_subnet_id) self.client().share_networks.update( self.resource_id, name=prop_diff.get(self.NAME), neutron_net_id=neutron_net_id, neutron_subnet_id=neutron_subnet_id, nova_net_id=prop_diff.get(self.NOVA_NETWORK), description=prop_diff.get(self.DESCRIPTION)) def parse_live_resource_data(self, resource_properties, resource_data): result = super(ManilaShareNetwork, self).parse_live_resource_data( resource_properties, resource_data) sec_list = self.client().security_services.list( search_opts={'share_network_id': self.resource_id}) result.update({ self.NOVA_NETWORK: resource_data.get('nova_net_id'), self.NEUTRON_NETWORK: resource_data.get('neutron_net_id'), self.NEUTRON_SUBNET: resource_data.get('neutron_subnet_id'), self.SECURITY_SERVICES: [service.id for service in sec_list]} ) return result def resource_mapping(): return {'OS::Manila::ShareNetwork': ManilaShareNetwork} heat-10.0.2/heat/engine/resources/openstack/manila/__init__.py0000666000175000017500000000000013343562340024271 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/manila/share_type.py0000666000175000017500000001001113343562340024700 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import properties from heat.engine import resource from heat.engine import support class ManilaShareType(resource.Resource): """A resource for creating a Manila share type. A share_type is an administrator-defined "type of service", comprising a tenant-visible description and a list of non-tenant-visible key/value pairs (extra_specs) which the Manila scheduler uses to make scheduling decisions for shared filesystem tasks. Please note that share types are intended to be used mostly by administrators, so it is very likely that Manila will prohibit creation of this resource without administrative grants. """ support_status = support.SupportStatus(version='5.0.0') PROPERTIES = ( NAME, IS_PUBLIC, DRIVER_HANDLES_SHARE_SERVERS, EXTRA_SPECS, SNAPSHOT_SUPPORT ) = ( 'name', 'is_public', 'driver_handles_share_servers', 'extra_specs', 'snapshot_support' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the share type.'), required=True ), IS_PUBLIC: properties.Schema( properties.Schema.BOOLEAN, _('Defines if share type is accessible to the public.'), default=True ), DRIVER_HANDLES_SHARE_SERVERS: properties.Schema( properties.Schema.BOOLEAN, _('Required extra specification. ' 'Defines if the share driver handles share servers.'), required=True, ), EXTRA_SPECS: properties.Schema( properties.Schema.MAP, _("Extra specs key-value pairs defined for share type."), update_allowed=True ), SNAPSHOT_SUPPORT: properties.Schema( properties.Schema.BOOLEAN, _('Boolean extra spec that is used for filtering backends by ' 'their capability to create share snapshots.'), support_status=support.SupportStatus(version='6.0.0'), default=True ) } default_client_name = 'manila' entity = 'share_types' def handle_create(self): share_type = self.client().share_types.create( name=self.properties.get(self.NAME), spec_driver_handles_share_servers=self.properties.get( self.DRIVER_HANDLES_SHARE_SERVERS), is_public=self.properties.get(self.IS_PUBLIC), spec_snapshot_support=self.properties.get(self.SNAPSHOT_SUPPORT) ) self.resource_id_set(share_type.id) extra_specs = self.properties.get(self.EXTRA_SPECS) if extra_specs: share_type.set_keys(extra_specs) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if self.EXTRA_SPECS in prop_diff: share_type = self.client().share_types.get(self.resource_id) extra_specs_old = self.properties.get(self.EXTRA_SPECS) if extra_specs_old: share_type.unset_keys(extra_specs_old) share_type.set_keys(prop_diff.get(self.EXTRA_SPECS)) def parse_live_resource_data(self, resource_properties, resource_data): extra_specs = resource_data.pop(self.EXTRA_SPECS) extra_specs.pop(self.SNAPSHOT_SUPPORT) extra_specs.pop(self.DRIVER_HANDLES_SHARE_SERVERS) return {self.EXTRA_SPECS: extra_specs} def resource_mapping(): return { 'OS::Manila::ShareType': ManilaShareType } heat-10.0.2/heat/engine/resources/openstack/manila/share.py0000666000175000017500000003365613343562340023663 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_utils import encodeutils import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support LOG = logging.getLogger(__name__) class ManilaShare(resource.Resource): """A resource that creates a shared mountable file system. The resource creates a Manila share - a shared mountable filesystem that can be attached to any client (or clients) that has network access and permission to mount the filesystem. A share is a unit of storage with a specific size that supports a pre-defined share protocol and an advanced security model (access lists, share networks and security services). 
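For example, a minimal share definition in a HOT template might look
like the following (name, size and access rule values are hypothetical)::

    my_share:
      type: OS::Manila::Share
      properties:
        share_protocol: NFS
        size: 1
        access_rules:
          - access_to: 192.168.0.10
            access_type: ip
            access_level: rw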
""" support_status = support.SupportStatus(version='5.0.0') _ACCESS_RULE_PROPERTIES = ( ACCESS_TO, ACCESS_TYPE, ACCESS_LEVEL ) = ( 'access_to', 'access_type', 'access_level') _SHARE_STATUSES = ( STATUS_CREATING, STATUS_DELETING, STATUS_ERROR, STATUS_ERROR_DELETING, STATUS_AVAILABLE ) = ( 'creating', 'deleting', 'error', 'error_deleting', 'available' ) PROPERTIES = ( SHARE_PROTOCOL, SIZE, SHARE_SNAPSHOT, NAME, METADATA, SHARE_NETWORK, DESCRIPTION, SHARE_TYPE, IS_PUBLIC, ACCESS_RULES ) = ( 'share_protocol', 'size', 'snapshot', 'name', 'metadata', 'share_network', 'description', 'share_type', 'is_public', 'access_rules' ) ATTRIBUTES = ( AVAILABILITY_ZONE_ATTR, HOST_ATTR, EXPORT_LOCATIONS_ATTR, SHARE_SERVER_ID_ATTR, CREATED_AT_ATTR, SHARE_STATUS_ATTR, PROJECT_ID_ATTR ) = ( 'availability_zone', 'host', 'export_locations', 'share_server_id', 'created_at', 'status', 'project_id' ) properties_schema = { SHARE_PROTOCOL: properties.Schema( properties.Schema.STRING, _('Share protocol supported by shared filesystem.'), required=True, constraints=[constraints.AllowedValues( ['NFS', 'CIFS', 'GlusterFS', 'HDFS', 'CEPHFS'])] ), SIZE: properties.Schema( properties.Schema.INTEGER, _('Share storage size in GB.'), required=True ), SHARE_SNAPSHOT: properties.Schema( properties.Schema.STRING, _('Name or ID of shared file system snapshot that ' 'will be restored and created as a new share.'), constraints=[constraints.CustomConstraint('manila.share_snapshot')] ), NAME: properties.Schema( properties.Schema.STRING, _('Share name.'), update_allowed=True ), METADATA: properties.Schema( properties.Schema.MAP, _('Metadata key-values defined for share.'), update_allowed=True ), SHARE_NETWORK: properties.Schema( properties.Schema.STRING, _('Name or ID of shared network defined for shared filesystem.'), constraints=[constraints.CustomConstraint('manila.share_network')] ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Share description.'), update_allowed=True ), SHARE_TYPE: properties.Schema( properties.Schema.STRING, _('Name or ID of shared filesystem type. Types defines some share ' 'filesystem profiles that will be used for share creation.'), constraints=[constraints.CustomConstraint("manila.share_type")] ), IS_PUBLIC: properties.Schema( properties.Schema.BOOLEAN, _('Defines if shared filesystem is public or private.'), default=False, update_allowed=True ), ACCESS_RULES: properties.Schema( properties.Schema.LIST, _('A list of access rules that define access from IP to Share.'), schema=properties.Schema( properties.Schema.MAP, schema={ ACCESS_TO: properties.Schema( properties.Schema.STRING, _('IP or other address information about guest that ' 'allowed to access to Share.'), required=True ), ACCESS_TYPE: properties.Schema( properties.Schema.STRING, _('Type of access that should be provided to guest.'), constraints=[constraints.AllowedValues( ['ip', 'user', 'cert', 'cephx'])], required=True ), ACCESS_LEVEL: properties.Schema( properties.Schema.STRING, _('Level of access that need to be provided for ' 'guest.'), constraints=[constraints.AllowedValues(['ro', 'rw'])] ) } ), update_allowed=True, default=[] ) } attributes_schema = { AVAILABILITY_ZONE_ATTR: attributes.Schema( _('The availability zone of shared filesystem.'), type=attributes.Schema.STRING ), HOST_ATTR: attributes.Schema( _('Share host.'), type=attributes.Schema.STRING ), EXPORT_LOCATIONS_ATTR: attributes.Schema( _('Export locations of share.'), type=attributes.Schema.LIST ), SHARE_SERVER_ID_ATTR: attributes.Schema( _('ID of server (VM, etc...) 
on host that is used for ' 'exporting network file-system.'), type=attributes.Schema.STRING ), CREATED_AT_ATTR: attributes.Schema( _('Datetime when a share was created.'), type=attributes.Schema.STRING ), SHARE_STATUS_ATTR: attributes.Schema( _('Current share status.'), type=attributes.Schema.STRING ), PROJECT_ID_ATTR: attributes.Schema( _('Share project ID.'), type=attributes.Schema.STRING ) } default_client_name = 'manila' entity = 'shares' def _request_share(self): return self.client().shares.get(self.resource_id) def _resolve_attribute(self, name): if self.resource_id is None: return share = self._request_share() return six.text_type(getattr(share, name)) def handle_create(self): # Request IDs of entities from manila # if name of the entity defined in template share_net_identity = self.properties[self.SHARE_NETWORK] if share_net_identity: share_net_identity = self.client_plugin().get_share_network( share_net_identity).id snapshot_identity = self.properties[self.SHARE_SNAPSHOT] if snapshot_identity: snapshot_identity = self.client_plugin().get_share_snapshot( snapshot_identity).id share_type_identity = self.properties[self.SHARE_TYPE] if share_type_identity: share_type_identity = self.client_plugin().get_share_type( share_type_identity).id share = self.client().shares.create( share_proto=self.properties[self.SHARE_PROTOCOL], size=self.properties[self.SIZE], snapshot_id=snapshot_identity, name=self.properties[self.NAME], description=self.properties[self.DESCRIPTION], metadata=self.properties[self.METADATA], share_network=share_net_identity, share_type=share_type_identity, is_public=self.properties[self.IS_PUBLIC]) self.resource_id_set(share.id) def check_create_complete(self, *args): share_status = self._request_share().status if share_status == self.STATUS_CREATING: return False elif share_status == self.STATUS_AVAILABLE: LOG.info('Applying access rules to created Share.') # apply access rules to created share. please note that it is not # possible to define rules for share with share_status = creating access_rules = self.properties.get(self.ACCESS_RULES) try: if access_rules: for rule in access_rules: self.client().shares.allow( share=self.resource_id, access_type=rule.get(self.ACCESS_TYPE), access=rule.get(self.ACCESS_TO), access_level=rule.get(self.ACCESS_LEVEL)) return True except Exception as ex: err_msg = encodeutils.exception_to_unicode(ex) reason = _( 'Error during applying access rules to share "{0}". ' 'The root cause of the problem is the following: {1}.' ).format(self.resource_id, err_msg) raise exception.ResourceInError( status_reason=reason, resource_status=share_status) elif share_status == self.STATUS_ERROR: reason = _('Error during creation of share "{0}"').format( self.resource_id) raise exception.ResourceInError(status_reason=reason, resource_status=share_status) else: reason = _('Unknown share_status during creation of share "{0}"' ).format(self.resource_id) raise exception.ResourceUnknownStatus( status_reason=reason, resource_status=share_status) def check_delete_complete(self, *args): if not self.resource_id: return True try: share = self._request_share() except Exception as ex: self.client_plugin().ignore_not_found(ex) return True else: # when share creation is not finished proceed listening if share.status == self.STATUS_DELETING: return False elif share.status in (self.STATUS_ERROR, self.STATUS_ERROR_DELETING): raise exception.ResourceInError( status_reason=_( 'Error during deleting share "{0}".' 
).format(self.resource_id), resource_status=share.status) else: reason = _('Unknown status during deleting share ' '"{0}"').format(self.resource_id) raise exception.ResourceUnknownStatus( status_reason=reason, resource_status=share.status) def handle_check(self): share = self._request_share() expected_statuses = [self.STATUS_AVAILABLE] checks = [{'attr': 'status', 'expected': expected_statuses, 'current': share.status}] self._verify_check_conditions(checks) def handle_update(self, json_snippet, tmpl_diff, prop_diff): kwargs = {} if self.IS_PUBLIC in prop_diff: kwargs['is_public'] = prop_diff.get(self.IS_PUBLIC) if self.NAME in prop_diff: kwargs['display_name'] = prop_diff.get(self.NAME) if self.DESCRIPTION in prop_diff: kwargs['display_description'] = prop_diff.get(self.DESCRIPTION) if kwargs: self.client().shares.update(self.resource_id, **kwargs) if self.METADATA in prop_diff: metadata = prop_diff.get(self.METADATA) self.client().shares.update_all_metadata( self.resource_id, metadata) if self.ACCESS_RULES in prop_diff: actual_old_rules = [] for rule in self.client().shares.access_list(self.resource_id): old_rule = { self.ACCESS_TO: getattr(rule, self.ACCESS_TO), self.ACCESS_TYPE: getattr(rule, self.ACCESS_TYPE), self.ACCESS_LEVEL: getattr(rule, self.ACCESS_LEVEL) } if old_rule in prop_diff[self.ACCESS_RULES]: actual_old_rules.append(old_rule) else: self.client().shares.deny(share=self.resource_id, id=rule.id) for rule in prop_diff[self.ACCESS_RULES]: if rule not in actual_old_rules: self.client().shares.allow( share=self.resource_id, access_type=rule.get(self.ACCESS_TYPE), access=rule.get(self.ACCESS_TO), access_level=rule.get(self.ACCESS_LEVEL) ) def parse_live_resource_data(self, resource_properties, resource_data): result = super(ManilaShare, self).parse_live_resource_data( resource_properties, resource_data) rules = self.client().shares.access_list(self.resource_id) result[self.ACCESS_RULES] = [] for rule in rules: result[self.ACCESS_RULES].append( {(k, v) for (k, v) in six.iteritems(rule) if k in self._ACCESS_RULE_PROPERTIES}) return result def resource_mapping(): return {'OS::Manila::Share': ManilaShare} heat-10.0.2/heat/engine/resources/openstack/manila/security_service.py0000666000175000017500000000663513343562340026145 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class SecurityService(resource.Resource): """A resource that implements security service of Manila. A security_service is a set of options that defines a security domain for a particular shared filesystem protocol, such as an Active Directory domain or a Kerberos domain. 
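For example, an LDAP security service might be defined as follows
(all values are hypothetical)::

    ldap_service:
      type: OS::Manila::SecurityService
      properties:
        name: my-ldap
        type: ldap
        server: 10.0.0.10
        dns_ip: 10.0.0.2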
""" support_status = support.SupportStatus(version='5.0.0') PROPERTIES = ( NAME, TYPE, DNS_IP, SERVER, DOMAIN, USER, PASSWORD, DESCRIPTION ) = ( 'name', 'type', 'dns_ip', 'server', 'domain', 'user', 'password', 'description' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Security service name.'), update_allowed=True ), TYPE: properties.Schema( properties.Schema.STRING, _('Security service type.'), required=True, constraints=[ constraints.AllowedValues(['ldap', 'kerberos', 'active_directory']) ] ), DNS_IP: properties.Schema( properties.Schema.STRING, _('DNS IP address used inside tenant\'s network.'), update_allowed=True ), SERVER: properties.Schema( properties.Schema.STRING, _('Security service IP address or hostname.'), update_allowed=True ), DOMAIN: properties.Schema( properties.Schema.STRING, _('Security service domain.'), update_allowed=True ), USER: properties.Schema( properties.Schema.STRING, _('Security service user or group used by tenant.'), update_allowed=True ), PASSWORD: properties.Schema( properties.Schema.STRING, _('Password used by user.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Security service description.'), update_allowed=True ) } default_client_name = 'manila' entity = 'security_services' def handle_create(self): args = dict((k, v) for k, v in self.properties.items() if v is not None) security_service = self.client().security_services.create(**args) self.resource_id_set(security_service.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().security_services.update(self.resource_id, **prop_diff) def resource_mapping(): return { 'OS::Manila::SecurityService': SecurityService } heat-10.0.2/heat/engine/resources/openstack/octavia/0000775000175000017500000000000013343562672022365 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/octavia/l7policy.py0000666000175000017500000001755213343562340024505 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.octavia import octavia_base from heat.engine import translation class L7Policy(octavia_base.OctaviaBase): """A resource for managing octavia L7Policies. This resource manages L7Policies, which represent a collection of L7Rules. L7Policy holds the action that should be performed when the rules are matched (Redirect to Pool, Redirect to URL, Reject). L7Policy holds a Listener id, so a Listener can evaluate a collection of L7Policies. L7Policy will return True when all of the L7Rules that belong to this L7Policy are matched. L7Policies under a specific Listener are ordered and the first l7Policy that returns a match will be executed. When none of the policies match the request gets forwarded to listener.default_pool_id. 
""" PROPERTIES = ( NAME, DESCRIPTION, ADMIN_STATE_UP, ACTION, REDIRECT_POOL, REDIRECT_URL, POSITION, LISTENER ) = ( 'name', 'description', 'admin_state_up', 'action', 'redirect_pool', 'redirect_url', 'position', 'listener' ) L7ACTIONS = ( REJECT, REDIRECT_TO_POOL, REDIRECT_TO_URL ) = ( 'REJECT', 'REDIRECT_TO_POOL', 'REDIRECT_TO_URL' ) ATTRIBUTES = (RULES_ATTR) = ('rules') properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the policy.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the policy.'), update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of the policy.'), default=True, update_allowed=True ), ACTION: properties.Schema( properties.Schema.STRING, _('Action type of the policy.'), required=True, constraints=[constraints.AllowedValues(L7ACTIONS)], update_allowed=True ), REDIRECT_POOL: properties.Schema( properties.Schema.STRING, _('ID or name of the pool for REDIRECT_TO_POOL action type.'), constraints=[ constraints.CustomConstraint('octavia.pool') ], update_allowed=True ), REDIRECT_URL: properties.Schema( properties.Schema.STRING, _('URL for REDIRECT_TO_URL action type. ' 'This should be a valid URL string.'), update_allowed=True ), POSITION: properties.Schema( properties.Schema.NUMBER, _('L7 policy position in ordered policies list. This must be ' 'an integer starting from 1. If not specified, policy will be ' 'placed at the tail of existing policies list.'), constraints=[constraints.Range(min=1)], update_allowed=True ), LISTENER: properties.Schema( properties.Schema.STRING, _('ID or name of the listener this policy belongs to.'), required=True, constraints=[ constraints.CustomConstraint('octavia.listener') ] ), } attributes_schema = { RULES_ATTR: attributes.Schema( _('L7Rules associated with this policy.'), type=attributes.Schema.LIST ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.LISTENER], client_plugin=self.client_plugin(), finder='get_listener', ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.REDIRECT_POOL], client_plugin=self.client_plugin(), finder='get_pool', ), ] def validate(self): super(L7Policy, self).validate() if (self.properties[self.ACTION] == self.REJECT and (self.properties[self.REDIRECT_POOL] is not None or self.properties[self.REDIRECT_URL] is not None)): msg = (_('Properties %(pool)s and %(url)s are not required when ' '%(action)s type is set to %(action_type)s.') % {'pool': self.REDIRECT_POOL, 'url': self.REDIRECT_URL, 'action': self.ACTION, 'action_type': self.REJECT}) raise exception.StackValidationFailed(message=msg) if self.properties[self.ACTION] == self.REDIRECT_TO_POOL: if self.properties[self.REDIRECT_URL] is not None: raise exception.ResourcePropertyValueDependency( prop1=self.REDIRECT_URL, prop2=self.ACTION, value=self.REDIRECT_TO_URL) if self.properties[self.REDIRECT_POOL] is None: msg = (_('Property %(pool)s is required when %(action)s ' 'type is set to %(action_type)s.') % {'pool': self.REDIRECT_POOL, 'action': self.ACTION, 'action_type': self.REDIRECT_TO_POOL}) raise exception.StackValidationFailed(message=msg) if self.properties[self.ACTION] == self.REDIRECT_TO_URL: if self.properties[self.REDIRECT_POOL] is not None: raise exception.ResourcePropertyValueDependency( prop1=self.REDIRECT_POOL, prop2=self.ACTION, value=self.REDIRECT_TO_POOL) if self.properties[self.REDIRECT_URL] is None: msg = 
(_('Property %(url)s is required when %(action)s ' 'type is set to %(action_type)s.') % {'url': self.REDIRECT_URL, 'action': self.ACTION, 'action_type': self.REDIRECT_TO_URL}) raise exception.StackValidationFailed(message=msg) def _prepare_args(self, properties): props = dict((k, v) for k, v in properties.items() if v is not None) if self.NAME not in props: props[self.NAME] = self.physical_resource_name() props['listener_id'] = props.pop(self.LISTENER) if self.REDIRECT_POOL in props: props['redirect_pool_id'] = props.pop(self.REDIRECT_POOL) return props def _resource_create(self, properties): return self.client().l7policy_create( json={'l7policy': properties})['l7policy'] def _resource_update(self, prop_diff): if self.REDIRECT_POOL in prop_diff: prop_diff['redirect_pool_id'] = prop_diff.pop(self.REDIRECT_POOL) self.client().l7policy_set( self.resource_id, json={'l7policy': prop_diff}) def _resource_delete(self): self.client().l7policy_delete(self.resource_id) def _show_resource(self): return self.client().l7policy_show(self.resource_id) def resource_mapping(): return { 'OS::Octavia::L7Policy': L7Policy } heat-10.0.2/heat/engine/resources/openstack/octavia/pool_member.py0000666000175000017500000001201013343562351025225 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.octavia import octavia_base from heat.engine import translation class PoolMember(octavia_base.OctaviaBase): """A resource for managing Octavia Pool Members. A pool member represents a single backend node. 
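For example (pool, address and subnet values are hypothetical)::

    member:
      type: OS::Octavia::PoolMember
      properties:
        pool: {get_resource: my_pool}
        address: 10.0.0.5
        protocol_port: 80
        subnet: private-subnet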
""" PROPERTIES = ( POOL, ADDRESS, PROTOCOL_PORT, MONITOR_ADDRESS, MONITOR_PORT, WEIGHT, ADMIN_STATE_UP, SUBNET, ) = ( 'pool', 'address', 'protocol_port', 'monitor_address', 'monitor_port', 'weight', 'admin_state_up', 'subnet' ) ATTRIBUTES = ( ADDRESS_ATTR, POOL_ID_ATTR ) = ( 'address', 'pool_id' ) properties_schema = { POOL: properties.Schema( properties.Schema.STRING, _('Name or ID of the load balancing pool.'), required=True, constraints=[ constraints.CustomConstraint('octavia.pool') ] ), ADDRESS: properties.Schema( properties.Schema.STRING, _('IP address of the pool member on the pool network.'), required=True, constraints=[ constraints.CustomConstraint('ip_addr') ] ), PROTOCOL_PORT: properties.Schema( properties.Schema.INTEGER, _('Port on which the pool member listens for requests or ' 'connections.'), required=True, constraints=[ constraints.Range(1, 65535), ] ), MONITOR_ADDRESS: properties.Schema( properties.Schema.STRING, _('Alternate IP address which health monitor can use for ' 'health check.'), constraints=[ constraints.CustomConstraint('ip_addr') ] ), MONITOR_PORT: properties.Schema( properties.Schema.INTEGER, _('Alternate Port which health monitor can use for health check.'), constraints=[ constraints.Range(1, 65535), ] ), WEIGHT: properties.Schema( properties.Schema.INTEGER, _('Weight of pool member in the pool (default to 1).'), default=1, constraints=[ constraints.Range(0, 256), ], update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of the pool member.'), default=True, update_allowed=True ), SUBNET: properties.Schema( properties.Schema.STRING, _('Subnet name or ID of this member.'), constraints=[ constraints.CustomConstraint('neutron.subnet') ], ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.SUBNET], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='subnet' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.POOL], client_plugin=self.client_plugin(), finder='get_pool' ), ] def _prepare_args(self, properties): props = dict((k, v) for k, v in properties.items() if v is not None) props.pop(self.POOL) if self.SUBNET in props: props['subnet_id'] = props.pop(self.SUBNET) return props def _resource_create(self, properties): return self.client().member_create( self.properties[self.POOL], json={'member': properties})['member'] def _resource_update(self, prop_diff): self.client().member_set(self.properties[self.POOL], self.resource_id, json={'member': prop_diff}) def _resource_delete(self): self.client().member_delete(self.properties[self.POOL], self.resource_id) def _show_resource(self): return self.client().member_show( self.properties[self.POOL], self.resource_id) def resource_mapping(): return { 'OS::Octavia::PoolMember': PoolMember, } heat-10.0.2/heat/engine/resources/openstack/octavia/pool.py0000666000175000017500000002043413343562351023707 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.octavia import octavia_base from heat.engine import translation class Pool(octavia_base.OctaviaBase): """A resource for managing Octavia Pools. This resource manages octavia LBaaS pools, which represent a group of nodes. Pools define the subnet where nodes reside, the balancing algorithm, and the nodes themselves. """ PROPERTIES = ( ADMIN_STATE_UP, DESCRIPTION, SESSION_PERSISTENCE, NAME, LB_ALGORITHM, LISTENER, LOADBALANCER, PROTOCOL, SESSION_PERSISTENCE_TYPE, SESSION_PERSISTENCE_COOKIE_NAME, ) = ( 'admin_state_up', 'description', 'session_persistence', 'name', 'lb_algorithm', 'listener', 'loadbalancer', 'protocol', 'type', 'cookie_name' ) SESSION_PERSISTENCE_TYPES = ( SOURCE_IP, HTTP_COOKIE, APP_COOKIE ) = ( 'SOURCE_IP', 'HTTP_COOKIE', 'APP_COOKIE' ) SUPPORTED_PROTOCOLS = (TCP, HTTP, HTTPS, TERMINATED_HTTPS, PROXY) = ( 'TCP', 'HTTP', 'HTTPS', 'TERMINATED_HTTPS', 'PROXY') ATTRIBUTES = ( HEALTHMONITOR_ID_ATTR, LISTENERS_ATTR, MEMBERS_ATTR ) = ( 'healthmonitor_id', 'listeners', 'members' ) properties_schema = { ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of this pool.'), default=True, update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of this pool.'), update_allowed=True, default='' ), SESSION_PERSISTENCE: properties.Schema( properties.Schema.MAP, _('Configuration of session persistence.'), schema={ SESSION_PERSISTENCE_TYPE: properties.Schema( properties.Schema.STRING, _('Method of implementation of session ' 'persistence feature.'), required=True, constraints=[constraints.AllowedValues( SESSION_PERSISTENCE_TYPES )] ), SESSION_PERSISTENCE_COOKIE_NAME: properties.Schema( properties.Schema.STRING, _('Name of the cookie, ' 'required if type is APP_COOKIE.') ) }, ), NAME: properties.Schema( properties.Schema.STRING, _('Name of this pool.'), update_allowed=True ), LB_ALGORITHM: properties.Schema( properties.Schema.STRING, _('The algorithm used to distribute load between the members of ' 'the pool.'), required=True, constraints=[ constraints.AllowedValues(['ROUND_ROBIN', 'LEAST_CONNECTIONS', 'SOURCE_IP']), ], update_allowed=True, ), LISTENER: properties.Schema( properties.Schema.STRING, _('Listener name or ID to be associated with this pool.'), constraints=[ constraints.CustomConstraint('octavia.listener') ] ), LOADBALANCER: properties.Schema( properties.Schema.STRING, _('Loadbalancer name or ID to be associated with this pool.'), constraints=[ constraints.CustomConstraint('octavia.loadbalancer') ], ), PROTOCOL: properties.Schema( properties.Schema.STRING, _('Protocol of the pool.'), required=True, constraints=[ constraints.AllowedValues(SUPPORTED_PROTOCOLS), ] ), } attributes_schema = { HEALTHMONITOR_ID_ATTR: attributes.Schema( _('ID of the health monitor associated with this pool.'), type=attributes.Schema.STRING ), LISTENERS_ATTR: attributes.Schema( _('Listener associated with this pool.'), type=attributes.Schema.STRING ), MEMBERS_ATTR: attributes.Schema( _('Members associated with this pool.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.LIST ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.LISTENER],
client_plugin=self.client_plugin(), finder='get_listener', ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.LOADBALANCER], client_plugin=self.client_plugin(), finder='get_loadbalancer', ), ] def _prepare_args(self, properties): props = dict((k, v) for k, v in properties.items() if v is not None) if self.NAME not in props: props[self.NAME] = self.physical_resource_name() if self.LISTENER in props: props['listener_id'] = props.pop(self.LISTENER) if self.LOADBALANCER in props: props['loadbalancer_id'] = props.pop(self.LOADBALANCER) session_p = props.get(self.SESSION_PERSISTENCE) if session_p is not None: session_props = dict( (k, v) for k, v in session_p.items() if v is not None) props[self.SESSION_PERSISTENCE] = session_props return props def validate(self): super(Pool, self).validate() if (self.properties[self.LISTENER] is None and self.properties[self.LOADBALANCER] is None): raise exception.PropertyUnspecifiedError(self.LISTENER, self.LOADBALANCER) if self.properties[self.SESSION_PERSISTENCE] is not None: session_p = self.properties[self.SESSION_PERSISTENCE] persistence_type = session_p[self.SESSION_PERSISTENCE_TYPE] if persistence_type == self.APP_COOKIE: if not session_p.get(self.SESSION_PERSISTENCE_COOKIE_NAME): msg = (_('Property %(cookie)s is required when %(sp)s ' 'type is set to %(app)s.') % {'cookie': self.SESSION_PERSISTENCE_COOKIE_NAME, 'sp': self.SESSION_PERSISTENCE, 'app': self.APP_COOKIE}) raise exception.StackValidationFailed(message=msg) elif persistence_type == self.SOURCE_IP: if session_p.get(self.SESSION_PERSISTENCE_COOKIE_NAME): msg = (_('Property %(cookie)s must NOT be specified when ' '%(sp)s type is set to %(ip)s.') % {'cookie': self.SESSION_PERSISTENCE_COOKIE_NAME, 'sp': self.SESSION_PERSISTENCE, 'ip': self.SOURCE_IP}) raise exception.StackValidationFailed(message=msg) def _resource_create(self, properties): return self.client().pool_create(json={'pool': properties})['pool'] def _resource_update(self, prop_diff): self.client().pool_set(self.resource_id, json={'pool': prop_diff}) def _resource_delete(self): self.client().pool_delete(self.resource_id) def _show_resource(self): return self.client().pool_show(self.resource_id) def resource_mapping(): return { 'OS::Octavia::Pool': Pool, } heat-10.0.2/heat/engine/resources/openstack/octavia/loadbalancer.py0000666000175000017500000001250013343562340025336 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.octavia import octavia_base from heat.engine import translation class LoadBalancer(octavia_base.OctaviaBase): """A resource for creating octavia Load Balancers. This resource creates and manages octavia Load Balancers, which allows traffic to be directed between servers. 
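For example, a basic load balancer on an existing subnet might be
declared as follows (the subnet name is hypothetical)::

    lb:
      type: OS::Octavia::LoadBalancer
      properties:
        name: my_lb
        vip_subnet: private-subnet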
""" PROPERTIES = ( DESCRIPTION, NAME, PROVIDER, VIP_ADDRESS, VIP_SUBNET, ADMIN_STATE_UP, TENANT_ID ) = ( 'description', 'name', 'provider', 'vip_address', 'vip_subnet', 'admin_state_up', 'tenant_id' ) ATTRIBUTES = ( VIP_ADDRESS_ATTR, VIP_PORT_ATTR, VIP_SUBNET_ATTR, POOLS_ATTR ) = ( 'vip_address', 'vip_port_id', 'vip_subnet_id', 'pools' ) properties_schema = { DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of this Load Balancer.'), update_allowed=True, default='' ), NAME: properties.Schema( properties.Schema.STRING, _('Name of this Load Balancer.'), update_allowed=True ), PROVIDER: properties.Schema( properties.Schema.STRING, _('Provider for this Load Balancer.'), ), VIP_ADDRESS: properties.Schema( properties.Schema.STRING, _('IP address for the VIP.'), constraints=[ constraints.CustomConstraint('ip_addr') ], ), VIP_SUBNET: properties.Schema( properties.Schema.STRING, _('The name or ID of the subnet on which to allocate the VIP ' 'address.'), constraints=[ constraints.CustomConstraint('neutron.subnet') ], required=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of this Load Balancer.'), default=True, update_allowed=True ), TENANT_ID: properties.Schema( properties.Schema.STRING, _('The ID of the tenant who owns the Load Balancer. Only ' 'administrative users can specify a tenant ID other than ' 'their own.'), constraints=[ constraints.CustomConstraint('keystone.project') ], ) } attributes_schema = { VIP_ADDRESS_ATTR: attributes.Schema( _('The VIP address of the LoadBalancer.'), type=attributes.Schema.STRING ), VIP_PORT_ATTR: attributes.Schema( _('The VIP port of the LoadBalancer.'), type=attributes.Schema.STRING ), VIP_SUBNET_ATTR: attributes.Schema( _('The VIP subnet of the LoadBalancer.'), type=attributes.Schema.STRING ), POOLS_ATTR: attributes.Schema( _('Pools this LoadBalancer is associated with.'), type=attributes.Schema.LIST, ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.VIP_SUBNET], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='subnet' ), ] def _prepare_args(self, properties): props = dict((k, v) for k, v in properties.items() if v is not None) if self.NAME not in props: props[self.NAME] = self.physical_resource_name() props['vip_subnet_id'] = props.pop(self.VIP_SUBNET) return props def handle_create(self): properties = self._prepare_args(self.properties) lb = self.client().load_balancer_create( json={'loadbalancer': properties})['loadbalancer'] self.resource_id_set(lb['id']) def check_create_complete(self, data): return self._check_status() def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().load_balancer_set( self.resource_id, json={'loadbalancer': prop_diff}) return prop_diff def check_update_complete(self, prop_diff): if prop_diff: return self._check_status() return True def _resource_delete(self): self.client().load_balancer_delete(self.resource_id) def _show_resource(self): return self.client().load_balancer_show( self.resource_id) def resource_mapping(): return { 'OS::Octavia::LoadBalancer': LoadBalancer } heat-10.0.2/heat/engine/resources/openstack/octavia/l7rule.py0000666000175000017500000001171513343562340024150 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.octavia import octavia_base from heat.engine import translation class L7Rule(octavia_base.OctaviaBase): """A resource for managing octavia L7Rules. This resource manages L7Rules, which represent a set of attributes that defines which part of the request should be matched and how it should be matched. """ PROPERTIES = ( ADMIN_STATE_UP, L7POLICY, TYPE, COMPARE_TYPE, INVERT, KEY, VALUE ) = ( 'admin_state_up', 'l7policy', 'type', 'compare_type', 'invert', 'key', 'value' ) L7RULE_TYPES = ( HOST_NAME, PATH, FILE_TYPE, HEADER, COOKIE ) = ( 'HOST_NAME', 'PATH', 'FILE_TYPE', 'HEADER', 'COOKIE' ) L7COMPARE_TYPES = ( REGEX, STARTS_WITH, ENDS_WITH, CONTAINS, EQUAL_TO ) = ( 'REGEX', 'STARTS_WITH', 'ENDS_WITH', 'CONTAINS', 'EQUAL_TO' ) properties_schema = { ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of the rule.'), default=True, update_allowed=True ), L7POLICY: properties.Schema( properties.Schema.STRING, _('ID or name of L7 policy this rule belongs to.'), constraints=[ constraints.CustomConstraint('octavia.l7policy') ], required=True ), TYPE: properties.Schema( properties.Schema.STRING, _('Rule type.'), constraints=[constraints.AllowedValues(L7RULE_TYPES)], update_allowed=True, required=True ), COMPARE_TYPE: properties.Schema( properties.Schema.STRING, _('Rule compare type.'), constraints=[constraints.AllowedValues(L7COMPARE_TYPES)], update_allowed=True, required=True ), INVERT: properties.Schema( properties.Schema.BOOLEAN, _('Invert the compare type.'), default=False, update_allowed=True ), KEY: properties.Schema( properties.Schema.STRING, _('Key to compare. Relevant for HEADER and COOKIE types only.'), update_allowed=True ), VALUE: properties.Schema( properties.Schema.STRING, _('Value to compare.'), update_allowed=True, required=True ) } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.L7POLICY], client_plugin=self.client_plugin(), finder='get_l7policy', ) ] def validate(self): super(L7Rule, self).validate() if (self.properties[self.TYPE] in (self.HEADER, self.COOKIE) and self.properties[self.KEY] is None): msg = (_('Property %(key)s is missing. 
' 'This property should be specified for ' 'rules of %(header)s and %(cookie)s types.') % {'key': self.KEY, 'header': self.HEADER, 'cookie': self.COOKIE}) raise exception.StackValidationFailed(message=msg) def _prepare_args(self, properties): props = dict((k, v) for k, v in properties.items() if v is not None) props.pop(self.L7POLICY) return props def _resource_create(self, properties): return self.client().l7rule_create(self.properties[self.L7POLICY], json={'rule': properties})['rule'] def _resource_update(self, prop_diff): self.client().l7rule_set(self.resource_id, self.properties[self.L7POLICY], json={'rule': prop_diff}) def _resource_delete(self): self.client().l7rule_delete(self.resource_id, self.properties[self.L7POLICY]) def _show_resource(self): return self.client().l7rule_show(self.resource_id, self.properties[self.L7POLICY]) def resource_mapping(): return { 'OS::Octavia::L7Rule': L7Rule } heat-10.0.2/heat/engine/resources/openstack/octavia/octavia_base.py0000666000175000017500000000575513343562340025365 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.engine import resource from heat.engine import support class OctaviaBase(resource.Resource): default_client_name = 'octavia' support_status = support.SupportStatus(version='10.0.0') def _check_status(self, expected_status='ACTIVE'): res = self._show_resource() status = res['provisioning_status'] if status == 'ERROR': raise exception.ResourceInError(resource_status=status) return status == expected_status def _check_deleted(self): with self.client_plugin().ignore_not_found: return self._check_status('DELETED') return True def _resolve_attribute(self, name): if self.resource_id is None: return attributes = self._show_resource() return attributes[name] def handle_create(self): return self._prepare_args(self.properties) def check_create_complete(self, properties): if self.resource_id is None: try: res = self._resource_create(properties) self.resource_id_set(res['id']) except Exception as ex: if self.client_plugin().is_conflict(ex): return False raise return self._check_status() def handle_update(self, json_snippet, tmpl_diff, prop_diff): self._update_called = False return prop_diff def check_update_complete(self, prop_diff): if not prop_diff: return True if not self._update_called: try: self._resource_update(prop_diff) self._update_called = True except Exception as ex: if self.client_plugin().is_conflict(ex): return False raise return self._check_status() def handle_delete(self): self._delete_called = False def check_delete_complete(self, data): if self.resource_id is None: return True if not self._delete_called: try: self._resource_delete() self._delete_called = True except Exception as ex: if self.client_plugin().is_conflict(ex): return self._check_status('DELETED') elif self.client_plugin().is_not_found(ex): return True raise return self._check_deleted() heat-10.0.2/heat/engine/resources/openstack/octavia/__init__.py0000666000175000017500000000000013343562340024456 0ustar 
heat-10.0.2/heat/engine/resources/openstack/octavia/__init__.py0000666000175000017500000000000013343562340024456 0ustar zuulzuul00000000000000
heat-10.0.2/heat/engine/resources/openstack/octavia/health_monitor.py0000666000175000017500000001366413343562351025761 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import properties
from heat.engine.resources.openstack.octavia import octavia_base
from heat.engine import translation


class HealthMonitor(octavia_base.OctaviaBase):
    """A resource to handle load balancer health monitors.

    This resource creates and manages octavia healthmonitors, which watch
    the status of the load balanced servers.
    """

    # Properties inputs for the resources create/update.
    PROPERTIES = (
        ADMIN_STATE_UP, DELAY, EXPECTED_CODES, HTTP_METHOD,
        MAX_RETRIES, POOL, TIMEOUT, TYPE, URL_PATH, TENANT_ID
    ) = (
        'admin_state_up', 'delay', 'expected_codes', 'http_method',
        'max_retries', 'pool', 'timeout', 'type', 'url_path', 'tenant_id'
    )

    # Supported HTTP methods
    HTTP_METHODS = (
        GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS,
        CONNECT, PATCH
    ) = (
        'GET', 'HEAD', 'POST', 'PUT', 'DELETE', 'TRACE',
        'OPTIONS', 'CONNECT', 'PATCH'
    )

    # Supported output attributes of the resources.
    ATTRIBUTES = (POOLS_ATTR,) = ('pools',)

    properties_schema = {
        ADMIN_STATE_UP: properties.Schema(
            properties.Schema.BOOLEAN,
            _('The administrative state of the health monitor.'),
            default=True,
            update_allowed=True
        ),
        DELAY: properties.Schema(
            properties.Schema.INTEGER,
            _('The minimum time in seconds between regular connections '
              'of the member.'),
            required=True,
            update_allowed=True,
            constraints=[constraints.Range(min=0)]
        ),
        EXPECTED_CODES: properties.Schema(
            properties.Schema.STRING,
            _('The HTTP status codes expected in response from the '
              'member to declare it healthy. Specify one of the following '
              'values: a single value, such as 200; a list, such as 200, '
              '202; or a range, such as 200-204.'),
            update_allowed=True,
            default='200'
        ),
        HTTP_METHOD: properties.Schema(
            properties.Schema.STRING,
            _('The HTTP method used for requests by the monitor of type '
              'HTTP.'),
            update_allowed=True,
            default=GET,
            constraints=[constraints.AllowedValues(HTTP_METHODS)]
        ),
        MAX_RETRIES: properties.Schema(
            properties.Schema.INTEGER,
            _('Number of permissible connection failures before changing the '
              'member status to INACTIVE.'),
            required=True,
            update_allowed=True,
            constraints=[constraints.Range(min=1, max=10)],
        ),
        POOL: properties.Schema(
            properties.Schema.STRING,
            _('ID or name of the load balancing pool.'),
            required=True,
            constraints=[
                constraints.CustomConstraint('octavia.pool')
            ]
        ),
        TIMEOUT: properties.Schema(
            properties.Schema.INTEGER,
            _('Maximum number of seconds for a monitor to wait for a '
              'connection to be established before it times out.'),
            required=True,
            update_allowed=True,
            constraints=[constraints.Range(min=0)]
        ),
        TYPE: properties.Schema(
            properties.Schema.STRING,
            _('One of predefined health monitor types.'),
            required=True,
            constraints=[
                constraints.AllowedValues(['PING', 'TCP', 'HTTP', 'HTTPS']),
            ]
        ),
        URL_PATH: properties.Schema(
            properties.Schema.STRING,
            _('The HTTP path used in the HTTP request used by the monitor to '
              'test a member health. A valid value is a string that begins '
              'with a forward slash (/).'),
            update_allowed=True,
            default='/'
        ),
        TENANT_ID: properties.Schema(
            properties.Schema.STRING,
            _('ID of the tenant who owns the health monitor.')
        )
    }

    attributes_schema = {
        POOLS_ATTR: attributes.Schema(
            _('The list of Pools related to this monitor.'),
            type=attributes.Schema.LIST
        )
    }

    def translation_rules(self, props):
        return [
            translation.TranslationRule(
                props,
                translation.TranslationRule.RESOLVE,
                [self.POOL],
                client_plugin=self.client_plugin(),
                finder='get_pool',
            ),
        ]

    def _prepare_args(self, properties):
        props = dict((k, v) for k, v in properties.items() if v is not None)
        if self.POOL in props:
            props['pool_id'] = props.pop(self.POOL)
        return props

    def _resource_create(self, properties):
        return self.client().health_monitor_create(
            json={'healthmonitor': properties})['healthmonitor']

    def _resource_update(self, prop_diff):
        self.client().health_monitor_set(
            self.resource_id, json={'healthmonitor': prop_diff})

    def _resource_delete(self):
        self.client().health_monitor_delete(self.resource_id)

    def _show_resource(self):
        return self.client().health_monitor_show(self.resource_id)


def resource_mapping():
    return {
        'OS::Octavia::HealthMonitor': HealthMonitor,
    }
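A small worked illustration of the property-to-API mapping done by
HealthMonitor._prepare_args() above, using made-up values: keys left unset in
the template are dropped, and the resolved 'pool' reference is re-keyed to
'pool_id' before the create call.

# Hypothetical input values for illustration only.
props = {
    'pool': 'pool-uuid',        # resolved by the translation rule
    'type': 'HTTP',
    'delay': 5,
    'timeout': 5,
    'max_retries': 3,
    'admin_state_up': True,
    'tenant_id': None,          # unset in the template -> dropped
}

args = dict((k, v) for k, v in props.items() if v is not None)
if 'pool' in args:
    args['pool_id'] = args.pop('pool')

# args == {'type': 'HTTP', 'delay': 5, 'timeout': 5, 'max_retries': 3,
#          'admin_state_up': True, 'pool_id': 'pool-uuid'}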
heat-10.0.2/heat/engine/resources/openstack/octavia/listener.py0000666000175000017500000001643213343562351024566 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from heat.common import exception
from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import properties
from heat.engine.resources.openstack.octavia import octavia_base
from heat.engine import translation


class Listener(octavia_base.OctaviaBase):
    """A resource for managing octavia Listeners.

    This resource creates and manages octavia Listeners, which represent
    a listening endpoint for the VIP.
    """

    PROPERTIES = (
        PROTOCOL_PORT, PROTOCOL, LOADBALANCER, DEFAULT_POOL, NAME,
        ADMIN_STATE_UP, DESCRIPTION, DEFAULT_TLS_CONTAINER_REF,
        SNI_CONTAINER_REFS, CONNECTION_LIMIT, TENANT_ID
    ) = (
        'protocol_port', 'protocol', 'loadbalancer', 'default_pool', 'name',
        'admin_state_up', 'description', 'default_tls_container_ref',
        'sni_container_refs', 'connection_limit', 'tenant_id'
    )

    SUPPORTED_PROTOCOLS = (TCP, HTTP, HTTPS, TERMINATED_HTTPS, PROXY) = (
        'TCP', 'HTTP', 'HTTPS', 'TERMINATED_HTTPS', 'PROXY')

    ATTRIBUTES = (
        LOADBALANCERS_ATTR, DEFAULT_POOL_ID_ATTR
    ) = (
        'loadbalancers', 'default_pool_id'
    )

    properties_schema = {
        PROTOCOL_PORT: properties.Schema(
            properties.Schema.INTEGER,
            _('TCP or UDP port on which to listen for client traffic.'),
            required=True,
            constraints=[
                constraints.Range(1, 65535),
            ]
        ),
        PROTOCOL: properties.Schema(
            properties.Schema.STRING,
            _('Protocol on which to listen for the client traffic.'),
            required=True,
            constraints=[
                constraints.AllowedValues(SUPPORTED_PROTOCOLS),
            ]
        ),
        LOADBALANCER: properties.Schema(
            properties.Schema.STRING,
            _('ID or name of the load balancer with which listener '
              'is associated.'),
            constraints=[
                constraints.CustomConstraint('octavia.loadbalancer')
            ]
        ),
        DEFAULT_POOL: properties.Schema(
            properties.Schema.STRING,
            _('ID or name of the default pool for the listener.'),
            update_allowed=True,
            constraints=[
                constraints.CustomConstraint('octavia.pool')
            ],
        ),
        NAME: properties.Schema(
            properties.Schema.STRING,
            _('Name of this listener.'),
            update_allowed=True
        ),
        ADMIN_STATE_UP: properties.Schema(
            properties.Schema.BOOLEAN,
            _('The administrative state of this listener.'),
            update_allowed=True,
            default=True
        ),
        DESCRIPTION: properties.Schema(
            properties.Schema.STRING,
            _('Description of this listener.'),
            update_allowed=True,
            default=''
        ),
        DEFAULT_TLS_CONTAINER_REF: properties.Schema(
            properties.Schema.STRING,
            _('Default TLS container reference to retrieve TLS '
              'information.'),
            update_allowed=True
        ),
        SNI_CONTAINER_REFS: properties.Schema(
            properties.Schema.LIST,
            _('List of TLS container references for SNI.'),
            update_allowed=True
        ),
        CONNECTION_LIMIT: properties.Schema(
            properties.Schema.INTEGER,
            _('The maximum number of connections permitted for this '
              'load balancer. Defaults to -1, which is infinite.'),
            update_allowed=True,
            default=-1,
            constraints=[
                constraints.Range(min=-1),
            ]
        ),
        TENANT_ID: properties.Schema(
            properties.Schema.STRING,
            _('The ID of the tenant who owns the listener.')
        ),
    }

    attributes_schema = {
        LOADBALANCERS_ATTR: attributes.Schema(
            _('IDs of the load balancers this listener is associated with.'),
            type=attributes.Schema.LIST
        ),
        DEFAULT_POOL_ID_ATTR: attributes.Schema(
            _('ID of the default pool this listener is associated with.'),
            type=attributes.Schema.STRING
        )
    }

    def translation_rules(self, props):
        return [
            translation.TranslationRule(
                props,
                translation.TranslationRule.RESOLVE,
                [self.LOADBALANCER],
                client_plugin=self.client_plugin(),
                finder='get_loadbalancer',
            ),
            translation.TranslationRule(
                props,
                translation.TranslationRule.RESOLVE,
                [self.DEFAULT_POOL],
                client_plugin=self.client_plugin(),
                finder='get_pool'
            ),
        ]

    def _prepare_args(self, properties):
        props = dict((k, v) for k, v in properties.items() if v is not None)
        if self.NAME not in props:
            props[self.NAME] = self.physical_resource_name()
        if self.LOADBALANCER in props:
            props['loadbalancer_id'] = props.pop(self.LOADBALANCER)
        if self.DEFAULT_POOL in props:
            props['default_pool_id'] = props.pop(self.DEFAULT_POOL)
        return props

    def validate(self):
        super(Listener, self).validate()
        if (self.properties[self.LOADBALANCER] is None and
                self.properties[self.DEFAULT_POOL] is None):
            raise exception.PropertyUnspecifiedError(self.LOADBALANCER,
                                                     self.DEFAULT_POOL)
        if self.properties[self.PROTOCOL] == self.TERMINATED_HTTPS:
            if self.properties[self.DEFAULT_TLS_CONTAINER_REF] is None:
                msg = (_('Property %(ref)s required when protocol is '
                         '%(term)s.') %
                       {'ref': self.DEFAULT_TLS_CONTAINER_REF,
                        'term': self.TERMINATED_HTTPS})
                raise exception.StackValidationFailed(message=msg)

    def _resource_create(self, properties):
        return self.client().listener_create(
            json={'listener': properties})['listener']

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        self._update_called = False
        if self.DEFAULT_POOL in prop_diff:
            prop_diff['default_pool_id'] = prop_diff.pop(self.DEFAULT_POOL)
        return prop_diff

    def _resource_update(self, prop_diff):
        self.client().listener_set(self.resource_id,
                                   json={'listener': prop_diff})

    def _resource_delete(self):
        self.client().listener_delete(self.resource_id)

    def _show_resource(self):
        return self.client().listener_show(self.resource_id)


def resource_mapping():
    return {
        'OS::Octavia::Listener': Listener,
    }
heat-10.0.2/heat/engine/resources/openstack/senlin/0000775000175000017500000000000013343562672022227 5ustar zuulzuul00000000000000
heat-10.0.2/heat/engine/resources/openstack/senlin/node.py0000666000175000017500000001572213343562340023527 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
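The Listener resource above enforces two cross-property rules in validate():
either a load balancer or a default pool must be given, and TERMINATED_HTTPS
requires a TLS container reference. A stand-alone sketch of the same checks,
reduced to plain dicts and ValueError (the real code raises heat exceptions):

def check_listener(props):
    # One of loadbalancer/default_pool must be set.
    if props.get('loadbalancer') is None and \
            props.get('default_pool') is None:
        raise ValueError('One of loadbalancer, default_pool '
                         'must be specified.')
    # TLS termination needs somewhere to fetch the certificate from.
    if (props.get('protocol') == 'TERMINATED_HTTPS' and
            props.get('default_tls_container_ref') is None):
        raise ValueError('default_tls_container_ref is required '
                         'when protocol is TERMINATED_HTTPS.')


check_listener({'protocol': 'HTTP', 'loadbalancer': 'lb1',
                'protocol_port': 80})       # passes
# check_listener({'protocol': 'TERMINATED_HTTPS', 'loadbalancer': 'lb1',
#                 'protocol_port': 443})    # raises ValueError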
import copy from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.senlin import res_base from heat.engine import support from heat.engine import translation class Node(res_base.BaseSenlinResource): """A resource that creates a Senlin Node. Node is an object that belongs to at most one Cluster, it can be created based on a profile. """ entity = 'node' PROPERTIES = ( NAME, METADATA, PROFILE, CLUSTER ) = ( 'name', 'metadata', 'profile', 'cluster' ) _NODE_STATUS = ( INIT, ACTIVE, CREATING, ) = ( 'INIT', 'ACTIVE', 'CREATING', ) ATTRIBUTES = ( ATTR_DETAILS, ATTR_CLUSTER, ) = ( 'details', 'cluster_id' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the senlin node. By default, physical resource name ' 'is used.'), update_allowed=True, ), METADATA: properties.Schema( properties.Schema.MAP, _('Metadata key-values defined for node.'), update_allowed=True, ), PROFILE: properties.Schema( properties.Schema.STRING, _('Name or ID of senlin profile to create this node.'), required=True, update_allowed=True, constraints=[ constraints.CustomConstraint('senlin.profile') ] ), CLUSTER: properties.Schema( properties.Schema.STRING, _('The name of senlin cluster to attach to.'), update_allowed=True, constraints=[ constraints.CustomConstraint('senlin.cluster') ], support_status=support.SupportStatus(version='8.0.0'), ), } attributes_schema = { ATTR_DETAILS: attributes.Schema( _("The details of physical object."), type=attributes.Schema.MAP ), ATTR_CLUSTER: attributes.Schema( _("The cluster ID this node belongs to."), type=attributes.Schema.STRING ), } def translation_rules(self, props): rules = [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.PROFILE], client_plugin=self.client_plugin(), finder='get_profile_id'), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.CLUSTER], client_plugin=self.client_plugin(), finder='get_cluster_id'), ] return rules def handle_create(self): params = { 'name': (self.properties[self.NAME] or self.physical_resource_name()), 'metadata': self.properties[self.METADATA], 'profile_id': self.properties[self.PROFILE], 'cluster_id': self.properties[self.CLUSTER], } node = self.client().create_node(**params) action_id = node.location.split('/')[-1] self.resource_id_set(node.id) return action_id def check_create_complete(self, action_id): return self.client_plugin().check_action_status(action_id) def handle_delete(self): if self.resource_id is not None: with self.client_plugin().ignore_not_found: self.client().delete_node(self.resource_id) return self.resource_id def check_delete_complete(self, res_id): if not res_id: return True try: self.client().get_node(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) return True return False def handle_update(self, json_snippet, tmpl_diff, prop_diff): actions = [] if prop_diff: old_cluster = None new_cluster = None if self.PROFILE in prop_diff: prop_diff['profile_id'] = prop_diff.pop(self.PROFILE) if self.CLUSTER in prop_diff: old_cluster = self.properties[self.CLUSTER] new_cluster = prop_diff.pop(self.CLUSTER) if old_cluster: params = { 'cluster': old_cluster, 'nodes': [self.resource_id], } action = { 'func': 'cluster_del_nodes', 'action_id': None, 'params': params, 'done': False, } actions.append(action) if prop_diff: node = self.client().get_node(self.resource_id) params 
= copy.deepcopy(prop_diff) params['node'] = node action = { 'func': 'update_node', 'action_id': None, 'params': params, 'done': False, } actions.append(action) if new_cluster: params = { 'cluster': new_cluster, 'nodes': [self.resource_id], } action = { 'func': 'cluster_add_nodes', 'action_id': None, 'params': params, 'done': False, } actions.append(action) return actions def check_update_complete(self, actions): return self.client_plugin().execute_actions(actions) def _resolve_attribute(self, name): if self.resource_id is None: return node = self.client().get_node(self.resource_id, details=True) return getattr(node, name, None) def parse_live_resource_data(self, resource_properties, resource_data): reality = {} for key in self._update_allowed_properties: if key == self.PROFILE: value = resource_data.get('profile_id') elif key == self.CLUSTER: value = resource_data.get('cluster_id') else: value = resource_data.get(key) reality.update({key: value}) return reality def resource_mapping(): return { 'OS::Senlin::Node': Node } heat-10.0.2/heat/engine/resources/openstack/senlin/profile.py0000666000175000017500000000557613343562340024250 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Copyright 2015 IBM Corp. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.senlin import res_base class Profile(res_base.BaseSenlinResource): """A resource that creates a Senlin Profile. Profile resource in senlin is a template describing how to create nodes in cluster. """ entity = 'profile' PROPERTIES = ( NAME, TYPE, METADATA, PROFILE_PROPERTIES, ) = ( 'name', 'type', 'metadata', 'properties', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the senlin profile. 
By default, physical resource name ' 'is used.'), update_allowed=True, ), TYPE: properties.Schema( properties.Schema.STRING, _('The type of profile.'), required=True, constraints=[ constraints.CustomConstraint('senlin.profile_type') ] ), METADATA: properties.Schema( properties.Schema.MAP, _('Metadata key-values defined for profile.'), update_allowed=True, ), PROFILE_PROPERTIES: properties.Schema( properties.Schema.MAP, _('Properties for profile.'), ) } def handle_create(self): params = { 'name': (self.properties[self.NAME] or self.physical_resource_name()), 'spec': self.client_plugin().generate_spec( spec_type=self.properties[self.TYPE], spec_props=self.properties[self.PROFILE_PROPERTIES]), 'metadata': self.properties[self.METADATA], } profile = self.client().create_profile(**params) self.resource_id_set(profile.id) def handle_delete(self): if self.resource_id is not None: with self.client_plugin().ignore_not_found: self.client().delete_profile(self.resource_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: profile_obj = self.client().get_profile(self.resource_id) self.client().update_profile(profile_obj, **prop_diff) def resource_mapping(): return { 'OS::Senlin::Profile': Profile } heat-10.0.2/heat/engine/resources/openstack/senlin/policy.py0000666000175000017500000001634513343562340024103 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # import copy from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.senlin import res_base from heat.engine import translation class Policy(res_base.BaseSenlinResource): """A resource that creates a Senlin Policy. A policy is a set of rules that can be checked and/or enforced when an action is performed on a Cluster. """ entity = 'policy' PROPERTIES = ( NAME, TYPE, POLICY_PROPS, BINDINGS, ) = ( 'name', 'type', 'properties', 'bindings' ) _BINDINGS = ( BD_CLUSTER, BD_ENABLED, ) = ( 'cluster', 'enabled' ) _ACTION_STATUS = ( ACTION_SUCCEEDED, ACTION_FAILED, ) = ( 'SUCCEEDED', 'FAILED', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the senlin policy. 
By default, physical resource name ' 'is used.'), update_allowed=True, ), TYPE: properties.Schema( properties.Schema.STRING, _('The type of senlin policy.'), required=True, constraints=[ constraints.CustomConstraint('senlin.policy_type') ] ), POLICY_PROPS: properties.Schema( properties.Schema.MAP, _('Properties of this policy.'), ), BINDINGS: properties.Schema( properties.Schema.LIST, _('A list of clusters to which this policy is attached.'), update_allowed=True, schema=properties.Schema( properties.Schema.MAP, schema={ BD_CLUSTER: properties.Schema( properties.Schema.STRING, _("The name or ID of target cluster."), required=True, constraints=[ constraints.CustomConstraint('senlin.cluster') ] ), BD_ENABLED: properties.Schema( properties.Schema.BOOLEAN, _("Whether enable this policy on that cluster."), default=True, ), } ) ) } def translation_rules(self, props): rules = [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.BINDINGS, self.BD_CLUSTER], client_plugin=self.client_plugin(), finder='get_cluster_id'), ] return rules def remove_bindings(self, bindings): for bd in bindings: try: bd['action'] = self.client().cluster_detach_policy( bd[self.BD_CLUSTER], self.resource_id)['action'] bd['finished'] = False except Exception as ex: # policy didn't attach to cluster, skip. if (self.client_plugin().is_bad_request(ex) or self.client_plugin().is_not_found(ex)): bd['finished'] = True else: raise def add_bindings(self, bindings): for bd in bindings: bd['action'] = self.client().cluster_attach_policy( bd[self.BD_CLUSTER], self.resource_id, enabled=bd[self.BD_ENABLED])['action'] bd['finished'] = False def check_action_done(self, bindings): ret = True if not bindings: return ret for bd in bindings: if bd.get('finished', False): continue action = self.client().get_action(bd['action']) if action.status == self.ACTION_SUCCEEDED: bd['finished'] = True elif action.status == self.ACTION_FAILED: err_msg = _('Failed to execute %(action)s for ' '%(cluster)s: %(reason)s') % { 'action': action.action, 'cluster': bd[self.BD_CLUSTER], 'reason': action.status_reason} raise exception.ResourceInError( status_reason=err_msg, resource_status=self.FAILED) else: ret = False return ret def handle_create(self): params = { 'name': (self.properties[self.NAME] or self.physical_resource_name()), 'spec': self.client_plugin().generate_spec( self.properties[self.TYPE], self.properties[self.POLICY_PROPS] ) } policy = self.client().create_policy(**params) self.resource_id_set(policy.id) bindings = copy.deepcopy(self.properties[self.BINDINGS]) if bindings: self.add_bindings(bindings) return bindings def check_create_complete(self, bindings): return self.check_action_done(bindings) def handle_delete(self): return copy.deepcopy(self.properties[self.BINDINGS]) def check_delete_complete(self, bindings): if not self.resource_id: return True self.remove_bindings(bindings) if self.check_action_done(bindings): with self.client_plugin().ignore_not_found: self.client().delete_policy(self.resource_id) return True return False def handle_update(self, json_snippet, tmpl_diff, prop_diff): if self.NAME in prop_diff: param = {'name': prop_diff[self.NAME]} policy_obj = self.client().get_policy(self.resource_id) self.client().update_policy(policy_obj, **param) actions = dict() if self.BINDINGS in prop_diff: old = self.properties[self.BINDINGS] or [] new = prop_diff[self.BINDINGS] or [] actions['remove'] = [bd for bd in old if bd not in new] actions['add'] = [bd for bd in new if bd not in old] 
self.remove_bindings(actions['remove']) return actions def check_update_complete(self, actions): ret = True remove_done = self.check_action_done(actions.get('remove', [])) # wait until detach finished, then start attach if remove_done and 'add' in actions: if not actions.get('add_started', False): self.add_bindings(actions['add']) actions['add_started'] = True ret = self.check_action_done(actions['add']) return ret def resource_mapping(): return { 'OS::Senlin::Policy': Policy } heat-10.0.2/heat/engine/resources/openstack/senlin/cluster.py0000666000175000017500000003634213343562340024264 0ustar zuulzuul00000000000000# Copyright 2015 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.senlin import res_base from heat.engine import support from heat.engine import translation class Cluster(res_base.BaseSenlinResource): """A resource that creates a Senlin Cluster. Cluster resource in senlin can create and manage objects of the same nature, e.g. Nova servers, Heat stacks, Cinder volumes, etc. The collection of these objects is referred to as a cluster. """ entity = 'cluster' PROPERTIES = ( NAME, PROFILE, DESIRED_CAPACITY, MIN_SIZE, MAX_SIZE, METADATA, TIMEOUT, POLICIES, ) = ( 'name', 'profile', 'desired_capacity', 'min_size', 'max_size', 'metadata', 'timeout', 'policies', ) ATTRIBUTES = ( ATTR_NAME, ATTR_METADATA, ATTR_NODES, ATTR_DESIRED_CAPACITY, ATTR_MIN_SIZE, ATTR_MAX_SIZE, ATTR_POLICIES, ATTR_COLLECT, ) = ( "name", 'metadata', 'nodes', 'desired_capacity', 'min_size', 'max_size', 'policies', 'collect', ) _POLICIES = ( P_POLICY, P_ENABLED, ) = ( "policy", "enabled", ) _CLUSTER_STATUS = ( CLUSTER_INIT, CLUSTER_ACTIVE, CLUSTER_ERROR, CLUSTER_WARNING, CLUSTER_CREATING, CLUSTER_DELETING, CLUSTER_UPDATING ) = ( 'INIT', 'ACTIVE', 'ERROR', 'WARNING', 'CREATING', 'DELETING', 'UPDATING' ) properties_schema = { PROFILE: properties.Schema( properties.Schema.STRING, _('The name or id of the Senlin profile.'), required=True, update_allowed=True, constraints=[ constraints.CustomConstraint('senlin.profile') ] ), NAME: properties.Schema( properties.Schema.STRING, _('Name of the cluster. By default, physical resource name ' 'is used.'), update_allowed=True, ), DESIRED_CAPACITY: properties.Schema( properties.Schema.INTEGER, _('Desired initial number of resources in cluster.'), default=0, update_allowed=True, ), MIN_SIZE: properties.Schema( properties.Schema.INTEGER, _('Minimum number of resources in the cluster.'), default=0, update_allowed=True, constraints=[ constraints.Range(min=0) ] ), MAX_SIZE: properties.Schema( properties.Schema.INTEGER, _('Maximum number of resources in the cluster. 
' '-1 means unlimited.'), default=-1, update_allowed=True, constraints=[ constraints.Range(min=-1) ] ), METADATA: properties.Schema( properties.Schema.MAP, _('Metadata key-values defined for cluster.'), update_allowed=True, default={}, ), TIMEOUT: properties.Schema( properties.Schema.INTEGER, _('The number of seconds to wait for the cluster actions.'), update_allowed=True, constraints=[ constraints.Range(min=0) ] ), POLICIES: properties.Schema( properties.Schema.LIST, _('A list of policies to attach to this cluster.'), update_allowed=True, default=[], support_status=support.SupportStatus(version='8.0.0'), schema=properties.Schema( properties.Schema.MAP, schema={ P_POLICY: properties.Schema( properties.Schema.STRING, _("The name or ID of the policy."), required=True, constraints=[ constraints.CustomConstraint('senlin.policy') ] ), P_ENABLED: properties.Schema( properties.Schema.BOOLEAN, _("Whether enable this policy on this cluster."), default=True, ), } ) ), } attributes_schema = { ATTR_NAME: attributes.Schema( _("Cluster name."), type=attributes.Schema.STRING ), ATTR_METADATA: attributes.Schema( _("Cluster metadata."), type=attributes.Schema.MAP ), ATTR_DESIRED_CAPACITY: attributes.Schema( _("Desired capacity of the cluster."), type=attributes.Schema.INTEGER ), ATTR_NODES: attributes.Schema( _("Nodes list in the cluster."), type=attributes.Schema.LIST, cache_mode=attributes.Schema.CACHE_NONE ), ATTR_MIN_SIZE: attributes.Schema( _("Min size of the cluster."), type=attributes.Schema.INTEGER ), ATTR_MAX_SIZE: attributes.Schema( _("Max size of the cluster."), type=attributes.Schema.INTEGER ), ATTR_POLICIES: attributes.Schema( _("Policies attached to the cluster."), type=attributes.Schema.LIST, support_status=support.SupportStatus(version='8.0.0'), ), ATTR_COLLECT: attributes.Schema( _("Attributes collected from cluster. 
According to the jsonpath " "following this attribute, it will return a list of attributes " "collected from the nodes of this cluster."), type=attributes.Schema.LIST, support_status=support.SupportStatus(version='8.0.0'), cache_mode=attributes.Schema.CACHE_NONE ) } def translation_rules(self, props): rules = [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.PROFILE], client_plugin=self.client_plugin(), finder='get_profile_id'), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.POLICIES, self.P_POLICY], client_plugin=self.client_plugin(), finder='get_policy_id'), ] return rules def handle_create(self): actions = [] params = { 'name': (self.properties[self.NAME] or self.physical_resource_name()), 'profile_id': self.properties[self.PROFILE], 'desired_capacity': self.properties[self.DESIRED_CAPACITY], 'min_size': self.properties[self.MIN_SIZE], 'max_size': self.properties[self.MAX_SIZE], 'metadata': self.properties[self.METADATA], 'timeout': self.properties[self.TIMEOUT] } cluster = self.client().create_cluster(**params) action_id = cluster.location.split('/')[-1] self.resource_id_set(cluster.id) # for cluster creation, we just to check the action status # the action is executed above action = { 'action_id': action_id, 'done': False, } actions.append(action) if self.properties[self.POLICIES]: for p in self.properties[self.POLICIES]: params = { 'cluster': cluster.id, 'policy': p[self.P_POLICY], 'enabled': p[self.P_ENABLED], } action = { 'func': 'cluster_attach_policy', 'params': params, 'action_id': None, 'done': False, } actions.append(action) return actions def check_create_complete(self, actions): return self.client_plugin().execute_actions(actions) def handle_delete(self): if self.resource_id is not None: with self.client_plugin().ignore_not_found: self.client().delete_cluster(self.resource_id) return self.resource_id def check_delete_complete(self, resource_id): if not resource_id: return True try: self.client().get_cluster(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) return True return False def handle_update(self, json_snippet, tmpl_diff, prop_diff): UPDATE_PROPS = (self.NAME, self.METADATA, self.TIMEOUT, self.PROFILE) RESIZE_PROPS = (self.MIN_SIZE, self.MAX_SIZE, self.DESIRED_CAPACITY) actions = [] if not prop_diff: return actions cluster_obj = self.client().get_cluster(self.resource_id) # Update Policies if self.POLICIES in prop_diff: old_policies = self.properties[self.POLICIES] new_policies = prop_diff[self.POLICIES] old_policy_ids = [p[self.P_POLICY] for p in old_policies] update_policies = [p for p in new_policies if p[self.P_POLICY] in old_policy_ids] update_policy_ids = [p[self.P_POLICY] for p in update_policies] add_policies = [p for p in new_policies if p[self.P_POLICY] not in old_policy_ids] remove_policies = [p for p in old_policies if p[self.P_POLICY] not in update_policy_ids] for p in update_policies: params = { 'policy': p[self.P_POLICY], 'cluster': self.resource_id, 'enabled': p[self.P_ENABLED] } action = { 'func': 'cluster_update_policy', 'params': params, 'action_id': None, 'done': False, } actions.append(action) for p in remove_policies: params = { 'policy': p[self.P_POLICY], 'cluster': self.resource_id, 'enabled': p[self.P_ENABLED] } action = { 'func': 'cluster_detach_policy', 'params': params, 'action_id': None, 'done': False, } actions.append(action) for p in add_policies: params = { 'policy': p[self.P_POLICY], 'cluster': 
self.resource_id, 'enabled': p[self.P_ENABLED] } action = { 'func': 'cluster_attach_policy', 'params': params, 'action_id': None, 'done': False, } actions.append(action) # Update cluster if any(p in prop_diff for p in UPDATE_PROPS): params = dict((k, v) for k, v in six.iteritems(prop_diff) if k in UPDATE_PROPS) params['cluster'] = cluster_obj if self.PROFILE in params: params['profile_id'] = params.pop(self.PROFILE) action = { 'func': 'update_cluster', 'params': params, 'action_id': None, 'done': False, } actions.append(action) # Resize Cluster if any(p in prop_diff for p in RESIZE_PROPS): params = dict((k, v) for k, v in six.iteritems(prop_diff) if k in RESIZE_PROPS) if self.DESIRED_CAPACITY in params: params['adjustment_type'] = 'EXACT_CAPACITY' params['number'] = params.pop(self.DESIRED_CAPACITY) params['cluster'] = self.resource_id action = { 'func': 'cluster_resize', 'params': params, 'action_id': None, 'done': False, } actions.append(action) return actions def check_update_complete(self, actions): return self.client_plugin().execute_actions(actions) def validate(self): min_size = self.properties[self.MIN_SIZE] max_size = self.properties[self.MAX_SIZE] desired_capacity = self.properties[self.DESIRED_CAPACITY] if max_size != -1 and max_size < min_size: msg = _("%(min_size)s can not be greater than %(max_size)s") % { 'min_size': self.MIN_SIZE, 'max_size': self.MAX_SIZE, } raise exception.StackValidationFailed(message=msg) if (desired_capacity < min_size or (max_size != -1 and desired_capacity > max_size)): msg = _("%(desired_capacity)s must be between %(min_size)s " "and %(max_size)s") % { 'desired_capacity': self.DESIRED_CAPACITY, 'min_size': self.MIN_SIZE, 'max_size': self.MAX_SIZE, } raise exception.StackValidationFailed(message=msg) def get_attribute(self, key, *path): if self.resource_id is None: return None if key == self.ATTR_COLLECT: if not path: raise exception.InvalidTemplateAttribute( resource=self.name, key=key) attrs = self.client().collect_cluster_attrs( self.resource_id, path[0]) attr = [attr.attr_value for attr in attrs] return attributes.select_from_attribute(attr, path[1:]) else: return super(Cluster, self).get_attribute(key, *path) def _show_resource(self): cluster_dict = super(Cluster, self)._show_resource() cluster_dict[self.ATTR_POLICIES] = self.client().cluster_policies( self.resource_id) return cluster_dict def parse_live_resource_data(self, resource_properties, resource_data): reality = {} for key in self._update_allowed_properties: if key == self.PROFILE: value = resource_data.get('profile_id') elif key == self.POLICIES: value = [] for p in resource_data.get(self.POLICIES): v = { 'policy': p.get('policy_id'), 'enabled': p.get('enabled'), } value.append(v) else: value = resource_data.get(key) reality.update({key: value}) return reality def resource_mapping(): return { 'OS::Senlin::Cluster': Cluster } heat-10.0.2/heat/engine/resources/openstack/senlin/res_base.py0000666000175000017500000000265113343562340024362 0ustar zuulzuul00000000000000# Copyright 2015 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from heat.engine import resource from heat.engine import support LOG = logging.getLogger(__name__) class BaseSenlinResource(resource.Resource): """A base class for Senlin resources.""" support_status = support.SupportStatus(version='6.0.0') default_client_name = 'senlin' def _show_resource(self): method_name = 'get_' + self.entity try: client_method = getattr(self.client(), method_name) res_info = client_method(self.resource_id) return res_info.to_dict() except AttributeError as ex: LOG.warning("No method to get the resource: %s", ex) def _resolve_attribute(self, name): if self.resource_id is None: return res_info = self._show_resource() return res_info.get(name) heat-10.0.2/heat/engine/resources/openstack/senlin/__init__.py0000666000175000017500000000000013343562340024320 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/senlin/receiver.py0000666000175000017500000000674213343562340024410 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.senlin import res_base class Receiver(res_base.BaseSenlinResource): """A resource that creates Senlin Receiver. Receiver is an abstract resource created at the senlin engine that can be used to hook the engine to some external event/alarm sources. """ entity = 'receiver' PROPERTIES = ( CLUSTER, ACTION, NAME, TYPE, PARAMS, ) = ( 'cluster', 'action', 'name', 'type', 'params', ) ATTRIBUTES = ( ATTR_CHANNEL, ) = ( 'channel', ) _ACTIONS = ( CLUSTER_SCALE_OUT, CLUSTER_SCALE_IN, ) = ( 'CLUSTER_SCALE_OUT', 'CLUSTER_SCALE_IN', ) _TYPES = ( WEBHOOK, ) = ( 'webhook', ) properties_schema = { CLUSTER: properties.Schema( properties.Schema.STRING, _('Name or ID of target cluster.'), required=True, constraints=[ constraints.CustomConstraint('senlin.cluster') ] ), ACTION: properties.Schema( properties.Schema.STRING, _('The action to be executed when the receiver is signaled.'), required=True, constraints=[ constraints.AllowedValues(_ACTIONS) ] ), NAME: properties.Schema( properties.Schema.STRING, _('Name of the senlin receiver. 
By default, ' 'physical resource name is used.'), ), TYPE: properties.Schema( properties.Schema.STRING, _('Type of receiver.'), default=WEBHOOK, constraints=[ constraints.AllowedValues(_TYPES) ] ), PARAMS: properties.Schema( properties.Schema.MAP, _('The parameters passed to action when the receiver ' 'is signaled.'), ), } attributes_schema = { ATTR_CHANNEL: attributes.Schema( _("The channel for receiving signals."), type=attributes.Schema.MAP ), } def handle_create(self): params = { 'name': (self.properties[self.NAME] or self.physical_resource_name()), 'cluster_id': self.properties[self.CLUSTER], 'type': self.properties[self.TYPE], 'action': self.properties[self.ACTION], 'params': self.properties[self.PARAMS], } recv = self.client().create_receiver(**params) self.resource_id_set(recv.id) def handle_delete(self): if self.resource_id is not None: with self.client_plugin().ignore_not_found: self.client().delete_receiver(self.resource_id) def resource_mapping(): return { 'OS::Senlin::Receiver': Receiver } heat-10.0.2/heat/engine/resources/openstack/zaqar/0000775000175000017500000000000013343562672022055 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/zaqar/subscription.py0000666000175000017500000002016313343562340025147 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from oslo_serialization import jsonutils class ZaqarSubscription(resource.Resource): """A resource for managing Zaqar subscriptions. A Zaqar subscription listens for messages in a queue and sends a notification over email or webhook. """ default_client_name = "zaqar" support_status = support.SupportStatus(version='8.0.0', status=support.SUPPORTED) PROPERTIES = ( QUEUE_NAME, SUBSCRIBER, TTL, OPTIONS, ) = ( 'queue_name', 'subscriber', 'ttl', 'options', ) properties_schema = { QUEUE_NAME: properties.Schema( properties.Schema.STRING, _("Name of the queue to subscribe to."), constraints=[constraints.CustomConstraint('zaqar.queue')], required=True), SUBSCRIBER: properties.Schema( properties.Schema.STRING, _("URI of the subscriber which will be notified. Must be in the " "format: :."), required=True, update_allowed=True), TTL: properties.Schema( properties.Schema.INTEGER, _("Time to live of the subscription in seconds."), update_allowed=True, default=220367260800, # Seconds until the year 9000 # (ie. 
never expire) constraints=[ constraints.Range( min=60, ), ], ), OPTIONS: properties.Schema( properties.Schema.MAP, _("Options used to configure this subscription."), required=False, update_allowed=True) } VALID_SUBSCRIBER_TYPES = ['http', 'https', 'mailto', 'trust+http', 'trust+https'] def validate(self): super(ZaqarSubscription, self).validate() self._validate_subscriber() def _validate_subscriber(self): subscriber_type = self.properties[self.SUBSCRIBER].split(":", 1)[0] if subscriber_type not in self.VALID_SUBSCRIBER_TYPES: msg = (_("The subscriber type of must be one of: %s.") % ", ".join(self.VALID_SUBSCRIBER_TYPES)) raise exception.StackValidationFailed(message=msg) def _subscriber_url(self): return self.properties[self.SUBSCRIBER] def _subscription_options(self): return self.properties[self.OPTIONS] def _subscription_data(self): return { 'subscriber': self._subscriber_url(), 'ttl': self.properties[self.TTL], 'options': self._subscription_options(), } def handle_create(self): """Create a subscription to a Zaqar message queue.""" subscription = self.client().subscription( self.properties[self.QUEUE_NAME], **self._subscription_data()) self.resource_id_set(subscription.id) def _get_subscription(self): return self.client().subscription( self.properties[self.QUEUE_NAME], id=self.resource_id, auto_create=False ) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Update a subscription to a Zaqar message queue.""" self.properties = json_snippet.properties(self.properties_schema, self.context) subscription = self._get_subscription() subscription.update(self._subscription_data()) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self._get_subscription().delete() def _show_resource(self): subscription = self._get_subscription() return vars(subscription) def parse_live_resource_data(self, resource_properties, resource_data): return { self.QUEUE_NAME: resource_data[self.QUEUE_NAME], self.SUBSCRIBER: resource_data[self.SUBSCRIBER], self.TTL: resource_data[self.TTL], self.OPTIONS: resource_data[self.OPTIONS] } class MistralTrigger(ZaqarSubscription): """A Zaqar subscription for triggering Mistral workflows. This Zaqar subscription type listens for messages in a queue and triggers a Mistral workflow execution each time one is received. The content of the Zaqar message is passed to the workflow in the environment with the name "notification", and thus is accessible from within the workflow as: <% env().notification %> Other environment variables can be set using the 'env' key in the params property. """ support_status = support.SupportStatus(version='8.0.0', status=support.SUPPORTED) PROPERTIES = ( QUEUE_NAME, TTL, WORKFLOW_ID, PARAMS, INPUT, ) = ( ZaqarSubscription.QUEUE_NAME, ZaqarSubscription.TTL, 'workflow_id', 'params', 'input', ) properties_schema = { QUEUE_NAME: ZaqarSubscription.properties_schema[QUEUE_NAME], TTL: ZaqarSubscription.properties_schema[TTL], WORKFLOW_ID: properties.Schema( properties.Schema.STRING, _("UUID of the Mistral workflow to trigger."), required=True, constraints=[constraints.CustomConstraint('mistral.workflow')], update_allowed=True), PARAMS: properties.Schema( properties.Schema.MAP, _("Parameters to pass to the Mistral workflow execution. 
" "The parameters depend on the workflow type."), required=False, default={}, update_allowed=True), INPUT: properties.Schema( properties.Schema.MAP, _("Input values to pass to the Mistral workflow."), required=False, default={}, update_allowed=True), } def _validate_subscriber(self): pass def _subscriber_url(self): mistral_client = self.client('mistral') manager = getattr(mistral_client.executions, 'client', mistral_client.executions) return 'trust+%s/executions' % manager.http_client.base_url def _subscription_options(self): params = dict(self.properties[self.PARAMS]) params.setdefault('env', {}) params['env']['notification'] = "$zaqar_message$" post_data = { self.WORKFLOW_ID: self.properties[self.WORKFLOW_ID], self.PARAMS: params, self.INPUT: self.properties[self.INPUT], } return { 'post_data': jsonutils.dumps(post_data) } def parse_live_resource_data(self, resource_properties, resource_data): options = resource_data.get(self.OPTIONS, {}) post_data = jsonutils.loads(options.get('post_data', '{}')) params = post_data.get(self.PARAMS, {}) env = params.get('env', {}) env.pop('notification', None) if not env: params.pop('env', None) return { self.QUEUE_NAME: resource_data.get(self.QUEUE_NAME), self.TTL: resource_data.get(self.TTL), self.WORKFLOW_ID: post_data.get(self.WORKFLOW_ID), self.PARAMS: params, self.INPUT: post_data.get(self.INPUT), } def resource_mapping(): return { 'OS::Zaqar::Subscription': ZaqarSubscription, 'OS::Zaqar::MistralTrigger': MistralTrigger, } heat-10.0.2/heat/engine/resources/openstack/zaqar/queue.py0000666000175000017500000002232213343562340023546 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from six.moves.urllib import parse as urlparse class ZaqarQueue(resource.Resource): """A resource for managing Zaqar queues. Queue is a logical entity that groups messages. Ideally a queue is created per work type. For example, if you want to compress files, you would create a queue dedicated for this job. Any application that reads from this queue would only compress files. 
""" default_client_name = "zaqar" support_status = support.SupportStatus(version='2014.2') physical_resource_name_limit = 64 PROPERTIES = ( NAME, METADATA, ) = ( 'name', 'metadata', ) ATTRIBUTES = ( QUEUE_ID, HREF, ) = ( 'queue_id', 'href', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _("Name of the queue instance to create."), constraints=[ constraints.Length(max=physical_resource_name_limit) ]), METADATA: properties.Schema( properties.Schema.MAP, description=_("Arbitrary key/value metadata to store " "contextual information about this queue."), update_allowed=True) } attributes_schema = { QUEUE_ID: attributes.Schema( _("ID of the queue."), cache_mode=attributes.Schema.CACHE_NONE, support_status=support.SupportStatus( status=support.HIDDEN, version='6.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, message=_("Use get_resource|Ref command instead. " "For example: { get_resource : " " }"), version='2015.1', previous_status=support.SupportStatus(version='2014.1') ) ) ), HREF: attributes.Schema( _("The resource href of the queue.") ), } def physical_resource_name(self): name = self.properties[self.NAME] if name is not None: return name return super(ZaqarQueue, self).physical_resource_name() def handle_create(self): """Create a zaqar message queue.""" queue_name = self.physical_resource_name() queue = self.client().queue(queue_name, auto_create=False) metadata = self.properties.get('metadata') if metadata: queue.metadata(new_meta=metadata) self.resource_id_set(queue_name) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Update queue metadata.""" if 'metadata' in prop_diff: queue = self.client().queue(self.resource_id, auto_create=False) metadata = prop_diff['metadata'] queue.metadata(new_meta=metadata) def handle_delete(self): """Delete a zaqar message queue.""" if not self.resource_id: return with self.client_plugin().ignore_not_found: self.client().queue(self.resource_id, auto_create=False).delete() def href(self): client = self.client() queue_name = self.physical_resource_name() return '%s/v%s/queues/%s' % (client.api_url.rstrip('/'), client.api_version, urlparse.quote(queue_name)) def _resolve_attribute(self, name): if name == self.QUEUE_ID: return self.resource_id elif name == self.HREF: return self.href() def _show_resource(self): queue = self.client().queue(self.resource_id, auto_create=False) metadata = queue.metadata() return {self.METADATA: metadata} def parse_live_resource_data(self, resource_properties, resource_data): name = self.resource_id if name == super(ZaqarQueue, self).physical_resource_name(): name = None return { self.NAME: name, self.METADATA: resource_data[self.METADATA] } class ZaqarSignedQueueURL(resource.Resource): """A resource for managing signed URLs of Zaqar queues. Signed URLs allow to give specific access to queues, for example to be used as alarm notifications. To supply a signed queue URL to Aodh as an action URL, pass "zaqar://?" followed by the query_str attribute of the signed queue URL resource. 
""" default_client_name = "zaqar" support_status = support.SupportStatus(version='8.0.0') PROPERTIES = ( QUEUE, PATHS, TTL, METHODS, ) = ( 'queue', 'paths', 'ttl', 'methods', ) ATTRIBUTES = ( SIGNATURE, EXPIRES, PATHS_ATTR, METHODS_ATTR, PROJECT, QUERY_STR, ) = ( 'signature', 'expires', 'paths', 'methods', 'project', 'query_str', ) properties_schema = { QUEUE: properties.Schema( properties.Schema.STRING, _("Name of the queue instance to create a URL for."), required=True), PATHS: properties.Schema( properties.Schema.LIST, description=_("List of allowed paths to be accessed. " "Default to allow queue messages URL.")), TTL: properties.Schema( properties.Schema.INTEGER, description=_("Time validity of the URL, in seconds. " "Default to one day.")), METHODS: properties.Schema( properties.Schema.LIST, description=_("List of allowed HTTP methods to be used. " "Default to allow GET."), schema=properties.Schema( properties.Schema.STRING, constraints=[ constraints.AllowedValues(['GET', 'DELETE', 'PATCH', 'POST', 'PUT']), ], )) } attributes_schema = { SIGNATURE: attributes.Schema( _("Signature of the URL built by Zaqar.") ), EXPIRES: attributes.Schema( _("Expiration date of the URL.") ), PATHS_ATTR: attributes.Schema( _("Comma-delimited list of paths for convenience.") ), METHODS_ATTR: attributes.Schema( _("Comma-delimited list of methods for convenience.") ), PROJECT: attributes.Schema( _("The ID of the Keystone project containing the queue.") ), QUERY_STR: attributes.Schema( _("An HTTP URI query fragment.") ), } def handle_create(self): queue = self.client().queue(self.properties[self.QUEUE]) signed_url = queue.signed_url(paths=self.properties[self.PATHS], methods=self.properties[self.METHODS], ttl_seconds=self.properties[self.TTL]) self.data_set(self.SIGNATURE, signed_url['signature']) self.data_set(self.EXPIRES, signed_url['expires']) self.data_set(self.PATHS_ATTR, jsonutils.dumps(signed_url['paths'])) self.data_set(self.METHODS_ATTR, jsonutils.dumps(signed_url['methods'])) self.data_set(self.PROJECT, signed_url['project']) self.resource_id_set(self.physical_resource_name()) def _query_str(self, data): """Return the query fragment of a signed URI. This can be used, for example, for alarming. 
""" paths = jsonutils.loads(data[self.PATHS_ATTR]) methods = jsonutils.loads(data[self.METHODS_ATTR]) query = { 'signature': data[self.SIGNATURE], 'expires': data[self.EXPIRES], 'paths': ','.join(paths), 'methods': ','.join(methods), 'project_id': data[self.PROJECT], 'queue_name': self.properties[self.QUEUE], } return urlparse.urlencode(query) def handle_delete(self): # We can't delete a signed URL return def _resolve_attribute(self, name): if not self.resource_id: return if name in (self.SIGNATURE, self.EXPIRES, self.PROJECT): return self.data()[name] elif name in (self.PATHS_ATTR, self.METHODS_ATTR): return jsonutils.loads(self.data()[name]) elif name == self.QUERY_STR: return self._query_str(self.data()) def resource_mapping(): return { 'OS::Zaqar::Queue': ZaqarQueue, 'OS::Zaqar::SignedQueueURL': ZaqarSignedQueueURL, } heat-10.0.2/heat/engine/resources/openstack/zaqar/__init__.py0000666000175000017500000000000013343562340024146 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/sahara/0000775000175000017500000000000013343562672022176 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/sahara/data_source.py0000666000175000017500000001021413343562340025031 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class DataSource(resource.Resource): """A resource for creating sahara data source. A data source stores an URL which designates the location of input or output data and any credentials needed to access the location. """ support_status = support.SupportStatus(version='5.0.0') PROPERTIES = ( NAME, TYPE, URL, DESCRIPTION, CREDENTIALS ) = ( 'name', 'type', 'url', 'description', 'credentials' ) _CREDENTIAL_KEYS = ( USER, PASSWORD ) = ( 'user', 'password' ) _DATA_SOURCE_TYPES = ( SWIFT, HDFS, MAPRFS, MANILA ) = ( 'swift', 'hdfs', 'maprfs', 'manila' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _("Name of the data source."), update_allowed=True ), TYPE: properties.Schema( properties.Schema.STRING, _('Type of the data source.'), constraints=[ constraints.AllowedValues(_DATA_SOURCE_TYPES), ], required=True, update_allowed=True ), URL: properties.Schema( properties.Schema.STRING, _('URL for the data source.'), required=True, update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the data source.'), default='', update_allowed=True ), CREDENTIALS: properties.Schema( properties.Schema.MAP, _('Credentials used for swift. 
Not required if sahara is ' 'configured to use proxy users and delegated trusts for ' 'access.'), schema={ USER: properties.Schema( properties.Schema.STRING, _('Username for accessing the data source URL.'), required=True ), PASSWORD: properties.Schema( properties.Schema.STRING, _("Password for accessing the data source URL."), required=True ) }, update_allowed=True ) } default_client_name = 'sahara' entity = 'data_sources' def _data_source_name(self): return self.properties[self.NAME] or self.physical_resource_name() def handle_create(self): credentials = self.properties[self.CREDENTIALS] or {} args = { 'name': self._data_source_name(), 'description': self.properties[self.DESCRIPTION], 'data_source_type': self.properties[self.TYPE], 'url': self.properties[self.URL], 'credential_user': credentials.get(self.USER), 'credential_pass': credentials.get(self.PASSWORD) } data_source = self.client().data_sources.create(**args) self.resource_id_set(data_source.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.properties = json_snippet.properties( self.properties_schema, self.context) data = dict(self.properties) if not data.get(self.NAME): data[self.NAME] = self.physical_resource_name() self.client().data_sources.update(self.resource_id, data) def resource_mapping(): return { 'OS::Sahara::DataSource': DataSource } heat-10.0.2/heat/engine/resources/openstack/sahara/image.py0000666000175000017500000000750613343562340023634 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class SaharaImageRegistry(resource.Resource): """A resource for registering an image in sahara. Allows to register an image in the sahara image registry and add tags. 
""" support_status = support.SupportStatus(version='6.0.0') PROPERTIES = ( IMAGE, USERNAME, DESCRIPTION, TAGS ) = ( 'image', 'username', 'description', 'tags' ) properties_schema = { IMAGE: properties.Schema( properties.Schema.STRING, _("ID or name of the image to register."), constraints=[ constraints.CustomConstraint('glance.image') ], required=True ), USERNAME: properties.Schema( properties.Schema.STRING, _('Username of privileged user in the image.'), required=True, update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the image.'), default='', update_allowed=True ), TAGS: properties.Schema( properties.Schema.LIST, _('Tags to add to the image.'), schema=properties.Schema( properties.Schema.STRING ), update_allowed=True, default=[] ) } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.IMAGE], client_plugin=self.client_plugin('glance'), finder='find_image_by_name_or_id') ] default_client_name = 'sahara' entity = 'images' def handle_create(self): self.resource_id_set(self.properties[self.IMAGE]) self.client().images.update_image( self.resource_id, self.properties[self.USERNAME], self.properties[self.DESCRIPTION] ) if self.properties[self.TAGS]: self.client().images.update_tags(self.resource_id, self.properties[self.TAGS]) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.properties = json_snippet.properties( self.properties_schema, self.context) if self.USERNAME in prop_diff or self.DESCRIPTION in prop_diff: self.client().images.update_image( self.resource_id, self.properties[self.USERNAME], self.properties[self.DESCRIPTION] ) if self.TAGS in prop_diff: self.client().images.update_tags(self.resource_id, self.properties[self.TAGS]) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client().images.unregister_image(self.resource_id) def resource_mapping(): return { 'OS::Sahara::ImageRegistry': SaharaImageRegistry } heat-10.0.2/heat/engine/resources/openstack/sahara/job_binary.py0000666000175000017500000001030613343562340024660 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_utils import uuidutils from heat.common import exception from heat.common.i18n import _ from heat.engine import properties from heat.engine import resource from heat.engine import rsrc_defn from heat.engine import support class JobBinary(resource.Resource): """A resource for creating sahara job binary. A job binary stores an URL to a single script or Jar file and any credentials needed to retrieve the file. 
""" support_status = support.SupportStatus(version='5.0.0') PROPERTIES = ( NAME, URL, DESCRIPTION, CREDENTIALS ) = ( 'name', 'url', 'description', 'credentials' ) _CREDENTIAL_KEYS = ( USER, PASSWORD ) = ( 'user', 'password' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the job binary.'), update_allowed=True ), URL: properties.Schema( properties.Schema.STRING, _('URL for the job binary. Must be in the format ' 'swift:/// or internal-db://.'), required=True, update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the job binary.'), default='', update_allowed=True ), CREDENTIALS: properties.Schema( properties.Schema.MAP, _('Credentials used for swift. Not required if sahara is ' 'configured to use proxy users and delegated trusts for ' 'access.'), schema={ USER: properties.Schema( properties.Schema.STRING, _('Username for accessing the job binary URL.'), required=True ), PASSWORD: properties.Schema( properties.Schema.STRING, _('Password for accessing the job binary URL.'), required=True ), }, update_allowed=True ) } default_client_name = 'sahara' entity = 'job_binaries' def _job_binary_name(self): return self.properties[self.NAME] or self.physical_resource_name() def _prepare_properties(self): credentials = self.properties[self.CREDENTIALS] or {} return { 'name': self._job_binary_name(), 'description': self.properties[self.DESCRIPTION], 'url': self.properties[self.URL], 'extra': credentials } def validate(self): super(JobBinary, self).validate() url = self.properties[self.URL] if not (url.startswith('swift://') or (url.startswith('internal-db://') and uuidutils.is_uuid_like(url[len("internal-db://"):]))): msg = _("%s is not a valid job location.") % url raise exception.StackValidationFailed( path=[self.stack.t.RESOURCES, self.name, self.stack.t.get_section_name(rsrc_defn.PROPERTIES)], message=msg) def handle_create(self): args = self._prepare_properties() job_binary = self.client().job_binaries.create(**args) self.resource_id_set(job_binary.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.properties = json_snippet.properties( self.properties_schema, self.context) data = self._prepare_properties() self.client().job_binaries.update(self.resource_id, data) def resource_mapping(): return { 'OS::Sahara::JobBinary': JobBinary } heat-10.0.2/heat/engine/resources/openstack/sahara/job.py0000666000175000017500000002616313343562340023324 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources import signal_responder from heat.engine import support from heat.engine import translation # NOTE(tlashchova): copied from sahara/utils/api_validator.py SAHARA_NAME_REGEX = r"^[a-zA-Z0-9][a-zA-Z0-9\-_\.]*$" class SaharaJob(signal_responder.SignalResponder, resource.Resource): """A resource for creating a Sahara job. A job specifies the type of the job and lists all of the individual job binary objects. It can be launched using a resource signal. """ support_status = support.SupportStatus(version='8.0.0') PROPERTIES = ( NAME, TYPE, MAINS, LIBS, DESCRIPTION, DEFAULT_EXECUTION_DATA, IS_PUBLIC, IS_PROTECTED ) = ( 'name', 'type', 'mains', 'libs', 'description', 'default_execution_data', 'is_public', 'is_protected' ) _EXECUTION_DATA_KEYS = ( CLUSTER, INPUT, OUTPUT, CONFIGS, PARAMS, ARGS, IS_PUBLIC, INTERFACE ) = ( 'cluster', 'input', 'output', 'configs', 'params', 'args', 'is_public', 'interface' ) ATTRIBUTES = ( EXECUTIONS, DEFAULT_EXECUTION_URL ) = ( 'executions', 'default_execution_url' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _("Name of the job."), constraints=[ constraints.Length(min=1, max=50), constraints.AllowedPattern(SAHARA_NAME_REGEX), ], update_allowed=True ), TYPE: properties.Schema( properties.Schema.STRING, _("Type of the job."), constraints=[ constraints.CustomConstraint('sahara.job_type') ], required=True ), MAINS: properties.Schema( properties.Schema.LIST, _("IDs or names of the job's main job binary. Due to the " "specifics of the Sahara service, this property is designed " "as a list, but accepts only one item."), schema=properties.Schema( properties.Schema.STRING, _("ID of job's main job binary."), constraints=[constraints.CustomConstraint('sahara.job_binary')] ), constraints=[constraints.Length(max=1)], default=[] ), LIBS: properties.Schema( properties.Schema.LIST, _("IDs or names of job's lib job binaries."), schema=properties.Schema( properties.Schema.STRING, constraints=[ constraints.CustomConstraint('sahara.job_binary') ] ), default=[] ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the job.'), update_allowed=True ), IS_PUBLIC: properties.Schema( properties.Schema.BOOLEAN, _('If True, job will be shared across the tenants.'), update_allowed=True, default=False ), IS_PROTECTED: properties.Schema( properties.Schema.BOOLEAN, _('If True, job will be protected from modifications and ' 'cannot be deleted until this property is set to False.'), update_allowed=True, default=False ), DEFAULT_EXECUTION_DATA: properties.Schema( properties.Schema.MAP, _('Default execution data to use when the job is run via a ' 'signal.'), schema={ CLUSTER: properties.Schema( properties.Schema.STRING, _('ID or name of the cluster to run the job in.'), constraints=[ constraints.CustomConstraint('sahara.cluster') ], required=True ), INPUT: properties.Schema( properties.Schema.STRING, _('ID or name of the input data source.'), constraints=[ constraints.CustomConstraint('sahara.data_source') ] ), OUTPUT: properties.Schema( properties.Schema.STRING, _('ID or name of the output data source.'), constraints=[ constraints.CustomConstraint('sahara.data_source') ] ), CONFIGS: properties.Schema( properties.Schema.MAP, _('Config parameters to add to the job.'), default={} ), PARAMS: properties.Schema( properties.Schema.MAP, _('Parameters to add to 
the job.'), default={} ), ARGS: properties.Schema( properties.Schema.LIST, _('Arguments to add to the job.'), schema=properties.Schema( properties.Schema.STRING, ), default=[] ), IS_PUBLIC: properties.Schema( properties.Schema.BOOLEAN, _('If True, execution will be shared across the tenants.'), default=False ), INTERFACE: properties.Schema( properties.Schema.MAP, _('Interface arguments to add to the job.'), default={} ) }, update_allowed=True ) } attributes_schema = { DEFAULT_EXECUTION_URL: attributes.Schema( _("A signed url to create execution specified in " "default_execution_data property."), type=attributes.Schema.STRING ), EXECUTIONS: attributes.Schema( _("List of the job executions."), type=attributes.Schema.LIST ) } default_client_name = 'sahara' entity = 'jobs' def translation_rules(self, properties): return [ translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.MAINS], client_plugin=self.client_plugin(), finder='find_resource_by_name_or_id', entity='job_binaries' ), translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.LIBS], client_plugin=self.client_plugin(), finder='find_resource_by_name_or_id', entity='job_binaries' ), translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.DEFAULT_EXECUTION_DATA, self.CLUSTER], client_plugin=self.client_plugin(), finder='find_resource_by_name_or_id', entity='clusters' ), translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.DEFAULT_EXECUTION_DATA, self.INPUT], client_plugin=self.client_plugin(), finder='find_resource_by_name_or_id', entity='data_sources' ), translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.DEFAULT_EXECUTION_DATA, self.OUTPUT], client_plugin=self.client_plugin(), finder='find_resource_by_name_or_id', entity='data_sources' ) ] def handle_create(self): args = { 'name': self.properties[ self.NAME] or self.physical_resource_name(), 'type': self.properties[self.TYPE], # Note: sahara accepts only one main binary but schema demands # that it should be in a list. 
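# [Editor's note: hedged illustration, kept entirely in comments so the
# surrounding dict literal is unchanged. For a hypothetical template naming
# a single main binary, the payload built here would look roughly like:
#   {'name': 'my-job', 'type': 'Pig', 'mains': ['<binary-id>'],
#    'libs': [], 'description': '', 'is_public': False,
#    'is_protected': False}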
'mains': self.properties[self.MAINS], 'libs': self.properties[self.LIBS], 'description': self.properties[self.DESCRIPTION], 'is_public': self.properties[self.IS_PUBLIC], 'is_protected': self.properties[self.IS_PROTECTED] } job = self.client().jobs.create(**args) self.resource_id_set(job.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if self.NAME in prop_diff: name = prop_diff[self.NAME] or self.physical_resource_name() prop_diff[self.NAME] = name if self.DEFAULT_EXECUTION_DATA in prop_diff: del prop_diff[self.DEFAULT_EXECUTION_DATA] if prop_diff: self.client().jobs.update(self.resource_id, **prop_diff) def handle_signal(self, details): data = details or self.properties.get(self.DEFAULT_EXECUTION_DATA) execution_args = { 'job_id': self.resource_id, 'cluster_id': data.get(self.CLUSTER), 'input_id': data.get(self.INPUT), 'output_id': data.get(self.OUTPUT), 'is_public': data.get(self.IS_PUBLIC), 'interface': data.get(self.INTERFACE), 'configs': { 'configs': data.get(self.CONFIGS), 'params': data.get(self.PARAMS), 'args': data.get(self.ARGS) }, 'is_protected': False } try: self.client().job_executions.create(**execution_args) except Exception as ex: raise exception.ResourceFailure(ex, self) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: job_exs = self.client().job_executions.find(id=self.resource_id) for ex in job_exs: self.client().job_executions.delete(ex.id) super(SaharaJob, self).handle_delete() def _resolve_attribute(self, name): if name == self.DEFAULT_EXECUTION_URL: return six.text_type(self._get_ec2_signed_url()) elif name == self.EXECUTIONS: try: job_execs = self.client().job_executions.find( id=self.resource_id) except Exception: return [] return [execution.to_dict() for execution in job_execs] def resource_mapping(): return { 'OS::Sahara::Job': SaharaJob } heat-10.0.2/heat/engine/resources/openstack/sahara/cluster.py0000666000175000017500000002561713343562351024240 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import re from oslo_log import log as logging from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation LOG = logging.getLogger(__name__) # NOTE(jfreud, pshchelo): copied from sahara/utils/api_validator.py SAHARA_NAME_REGEX = (r"^(([a-zA-Z]|[a-zA-Z][a-zA-Z0-9\-]" r"*[a-zA-Z0-9])\.)*([A-Za-z]|[A-Za-z]" r"[A-Za-z0-9\-]*[A-Za-z0-9])$") # NOTE(jfreud): we do not use physical_resource_name_limit attr because we # prefer to truncate _after_ removing invalid characters SAHARA_CLUSTER_NAME_MAX_LENGTH = 80 class SaharaCluster(resource.Resource): """A resource for managing Sahara clusters. The Cluster entity represents a collection of VM instances that all have the same data processing framework installed. 
It is mainly characterized by a VM image with a pre-installed framework which will be used for cluster deployment. Users may choose one of the pre-configured Cluster Templates to start a Cluster. To get access to VMs after a Cluster has started, the user should specify a keypair. """ PROPERTIES = ( NAME, PLUGIN_NAME, HADOOP_VERSION, CLUSTER_TEMPLATE_ID, KEY_NAME, IMAGE, MANAGEMENT_NETWORK, IMAGE_ID, USE_AUTOCONFIG, SHARES ) = ( 'name', 'plugin_name', 'hadoop_version', 'cluster_template_id', 'key_name', 'image', 'neutron_management_network', 'default_image_id', 'use_autoconfig', 'shares' ) _SHARE_KEYS = ( SHARE_ID, PATH, ACCESS_LEVEL ) = ( 'id', 'path', 'access_level' ) ATTRIBUTES = ( STATUS, INFO, ) = ( "status", "info", ) CLUSTER_STATUSES = ( CLUSTER_ACTIVE, CLUSTER_ERROR ) = ( 'Active', 'Error' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Hadoop cluster name.'), constraints=[ constraints.Length(min=1, max=SAHARA_CLUSTER_NAME_MAX_LENGTH), constraints.AllowedPattern(SAHARA_NAME_REGEX), ], ), PLUGIN_NAME: properties.Schema( properties.Schema.STRING, _('Plugin name.'), required=True, constraints=[ constraints.CustomConstraint('sahara.plugin') ] ), HADOOP_VERSION: properties.Schema( properties.Schema.STRING, _('Version of Hadoop running on instances.'), required=True, ), CLUSTER_TEMPLATE_ID: properties.Schema( properties.Schema.STRING, _('ID of the Cluster Template used for ' 'Node Groups and configurations.'), constraints=[ constraints.CustomConstraint('sahara.cluster_template') ], required=True ), KEY_NAME: properties.Schema( properties.Schema.STRING, _('Keypair added to instances to make them accessible for user.'), constraints=[ constraints.CustomConstraint('nova.keypair') ], ), IMAGE: properties.Schema( properties.Schema.STRING, _('Name or UUID of the image used to boot Hadoop nodes.'), support_status=support.SupportStatus( status=support.HIDDEN, version='6.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, message=_('Use property %s.') % IMAGE_ID, version='2015.1', previous_status=support.SupportStatus(version='2014.2')) ), constraints=[ constraints.CustomConstraint('glance.image') ], ), IMAGE_ID: properties.Schema( properties.Schema.STRING, _('Default name or UUID of the image used to boot Hadoop nodes.'), constraints=[ constraints.CustomConstraint('sahara.image'), ], support_status=support.SupportStatus(version='2015.1') ), MANAGEMENT_NETWORK: properties.Schema( properties.Schema.STRING, _('Name or UUID of network.'), required=True, constraints=[ constraints.CustomConstraint('neutron.network') ], ), USE_AUTOCONFIG: properties.Schema( properties.Schema.BOOLEAN, _("Configure most important configs automatically."), support_status=support.SupportStatus(version='5.0.0') ), SHARES: properties.Schema( properties.Schema.LIST, _("List of manila shares to be mounted."), schema=properties.Schema( properties.Schema.MAP, schema={ SHARE_ID: properties.Schema( properties.Schema.STRING, _("Id of the manila share."), required=True ), PATH: properties.Schema( properties.Schema.STRING, _("Local path on each cluster node on which to mount " "the share. 
Defaults to '/mnt/{share_id}'.") ), ACCESS_LEVEL: properties.Schema( properties.Schema.STRING, _("Governs permissions set in manila for the cluster " "ips."), constraints=[ constraints.AllowedValues(['rw', 'ro']), ], default='rw' ) } ), support_status=support.SupportStatus(version='6.0.0') ) } attributes_schema = { STATUS: attributes.Schema( _("Cluster status."), type=attributes.Schema.STRING ), INFO: attributes.Schema( _("Cluster information."), type=attributes.Schema.MAP ), } default_client_name = 'sahara' entity = 'clusters' def translation_rules(self, props): rules = [ translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.IMAGE_ID], value_path=[self.IMAGE]), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.IMAGE_ID], client_plugin=self.client_plugin('glance'), finder='find_image_by_name_or_id'), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.MANAGEMENT_NETWORK], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='network')] return rules def _cluster_name(self): name = self.properties[self.NAME] if name: return name return self.reduce_physical_resource_name( re.sub('[^a-zA-Z0-9-]', '', self.physical_resource_name()), SAHARA_CLUSTER_NAME_MAX_LENGTH) def handle_create(self): plugin_name = self.properties[self.PLUGIN_NAME] hadoop_version = self.properties[self.HADOOP_VERSION] cluster_template_id = self.properties[self.CLUSTER_TEMPLATE_ID] image_id = self.properties[self.IMAGE_ID] # check that image is provided in case when # cluster template is missing one cluster_template = self.client().cluster_templates.get( cluster_template_id) if cluster_template.default_image_id is None and not image_id: msg = _("%(img)s must be provided: Referenced cluster template " "%(tmpl)s has no default_image_id defined.") % { 'img': self.IMAGE_ID, 'tmpl': cluster_template_id} raise exception.StackValidationFailed(message=msg) key_name = self.properties[self.KEY_NAME] net_id = self.properties[self.MANAGEMENT_NETWORK] use_autoconfig = self.properties[self.USE_AUTOCONFIG] shares = self.properties[self.SHARES] cluster = self.client().clusters.create( self._cluster_name(), plugin_name, hadoop_version, cluster_template_id=cluster_template_id, user_keypair_id=key_name, default_image_id=image_id, net_id=net_id, use_autoconfig=use_autoconfig, shares=shares) LOG.info('Cluster "%s" is being started.', cluster.name) self.resource_id_set(cluster.id) return self.resource_id def check_create_complete(self, cluster_id): cluster = self.client().clusters.get(cluster_id) if cluster.status == self.CLUSTER_ERROR: raise exception.ResourceInError(resource_status=cluster.status) if cluster.status != self.CLUSTER_ACTIVE: return False LOG.info("Cluster '%s' has been created", cluster.name) return True def check_delete_complete(self, resource_id): if not resource_id: return True try: cluster = self.client().clusters.get(resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) LOG.info("Cluster '%s' has been deleted", self._cluster_name()) return True else: if cluster.status == self.CLUSTER_ERROR: raise exception.ResourceInError(resource_status=cluster.status) return False def _resolve_attribute(self, name): if self.resource_id is None: return cluster = self.client().clusters.get(self.resource_id) return getattr(cluster, name, None) def validate(self): res = super(SaharaCluster, self).validate() if res: return res self.client_plugin().validate_hadoop_version( 
self.properties[self.PLUGIN_NAME], self.properties[self.HADOOP_VERSION] ) def resource_mapping(): return { 'OS::Sahara::Cluster': SaharaCluster, } heat-10.0.2/heat/engine/resources/openstack/sahara/__init__.py0000666000175000017500000000000013343562340024267 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/sahara/templates.py0000666000175000017500000005476613343562351024554 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import re import six from oslo_log import log as logging from oslo_utils import encodeutils from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import rsrc_defn from heat.engine import support from heat.engine import translation LOG = logging.getLogger(__name__) # NOTE(pshchelo): copied from sahara/utils/api_validator.py SAHARA_NAME_REGEX = (r"^(([a-zA-Z]|[a-zA-Z][a-zA-Z0-9\-]" r"*[a-zA-Z0-9])\.)*([A-Za-z]|[A-Za-z]" r"[A-Za-z0-9\-]*[A-Za-z0-9])$") class SaharaNodeGroupTemplate(resource.Resource): """A resource for managing Sahara node group templates. A Node Group Template describes a group of nodes within a cluster. It contains a list of hadoop processes that will be launched on each instance in a group. A Node Group Template may also provide node-scoped configurations for those processes. 
""" support_status = support.SupportStatus(version='2014.2') PROPERTIES = ( NAME, PLUGIN_NAME, HADOOP_VERSION, FLAVOR, DESCRIPTION, VOLUMES_PER_NODE, VOLUMES_SIZE, VOLUME_TYPE, SECURITY_GROUPS, AUTO_SECURITY_GROUP, AVAILABILITY_ZONE, VOLUMES_AVAILABILITY_ZONE, NODE_PROCESSES, FLOATING_IP_POOL, NODE_CONFIGS, IMAGE_ID, IS_PROXY_GATEWAY, VOLUME_LOCAL_TO_INSTANCE, USE_AUTOCONFIG, SHARES ) = ( 'name', 'plugin_name', 'hadoop_version', 'flavor', 'description', 'volumes_per_node', 'volumes_size', 'volume_type', 'security_groups', 'auto_security_group', 'availability_zone', 'volumes_availability_zone', 'node_processes', 'floating_ip_pool', 'node_configs', 'image_id', 'is_proxy_gateway', 'volume_local_to_instance', 'use_autoconfig', 'shares' ) _SHARE_KEYS = ( SHARE_ID, PATH, ACCESS_LEVEL ) = ( 'id', 'path', 'access_level' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _("Name for the Sahara Node Group Template."), constraints=[ constraints.Length(min=1, max=50), constraints.AllowedPattern(SAHARA_NAME_REGEX), ], update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the Node Group Template.'), default="", update_allowed=True ), PLUGIN_NAME: properties.Schema( properties.Schema.STRING, _('Plugin name.'), required=True, constraints=[ constraints.CustomConstraint('sahara.plugin') ], update_allowed=True ), HADOOP_VERSION: properties.Schema( properties.Schema.STRING, _('Version of Hadoop running on instances.'), required=True, update_allowed=True ), FLAVOR: properties.Schema( properties.Schema.STRING, _('Name or ID Nova flavor for the nodes.'), required=True, constraints=[ constraints.CustomConstraint('nova.flavor') ], update_allowed=True ), VOLUMES_PER_NODE: properties.Schema( properties.Schema.INTEGER, _("Volumes per node."), constraints=[ constraints.Range(min=0), ], default=0, update_allowed=True ), VOLUMES_SIZE: properties.Schema( properties.Schema.INTEGER, _("Size of the volumes, in GB."), constraints=[ constraints.Range(min=1), ], update_allowed=True ), VOLUME_TYPE: properties.Schema( properties.Schema.STRING, _("Type of the volume to create on Cinder backend."), constraints=[ constraints.CustomConstraint('cinder.vtype') ], update_allowed=True ), SECURITY_GROUPS: properties.Schema( properties.Schema.LIST, _("List of security group names or IDs to assign to this " "Node Group template."), schema=properties.Schema( properties.Schema.STRING, ), update_allowed=True ), AUTO_SECURITY_GROUP: properties.Schema( properties.Schema.BOOLEAN, _("Defines whether auto-assign security group to this " "Node Group template."), update_allowed=True ), AVAILABILITY_ZONE: properties.Schema( properties.Schema.STRING, _("Availability zone to create servers in."), update_allowed=True ), VOLUMES_AVAILABILITY_ZONE: properties.Schema( properties.Schema.STRING, _("Availability zone to create volumes in."), update_allowed=True ), NODE_PROCESSES: properties.Schema( properties.Schema.LIST, _("List of processes to run on every node."), required=True, constraints=[ constraints.Length(min=1), ], schema=properties.Schema( properties.Schema.STRING, ), update_allowed=True ), FLOATING_IP_POOL: properties.Schema( properties.Schema.STRING, _("Name or UUID of the Neutron floating IP network or " "name of the Nova floating ip pool to use. 
" "Should not be provided when used with Nova-network " "that auto-assign floating IPs."), update_allowed=True ), NODE_CONFIGS: properties.Schema( properties.Schema.MAP, _("Dictionary of node configurations."), update_allowed=True ), IMAGE_ID: properties.Schema( properties.Schema.STRING, _("ID of the image to use for the template."), constraints=[ constraints.CustomConstraint('sahara.image'), ], update_allowed=True ), IS_PROXY_GATEWAY: properties.Schema( properties.Schema.BOOLEAN, _("Provide access to nodes using other nodes of the cluster " "as proxy gateways."), support_status=support.SupportStatus(version='5.0.0'), update_allowed=True ), VOLUME_LOCAL_TO_INSTANCE: properties.Schema( properties.Schema.BOOLEAN, _("Create volumes on the same physical port as an instance."), support_status=support.SupportStatus(version='5.0.0'), update_allowed=True ), USE_AUTOCONFIG: properties.Schema( properties.Schema.BOOLEAN, _("Configure most important configs automatically."), support_status=support.SupportStatus(version='5.0.0'), update_allowed=True ), SHARES: properties.Schema( properties.Schema.LIST, _("List of manila shares to be mounted."), schema=properties.Schema( properties.Schema.MAP, schema={ SHARE_ID: properties.Schema( properties.Schema.STRING, _("Id of the manila share."), required=True ), PATH: properties.Schema( properties.Schema.STRING, _("Local path on each cluster node on which to mount " "the share. Defaults to '/mnt/{share_id}'.") ), ACCESS_LEVEL: properties.Schema( properties.Schema.STRING, _("Governs permissions set in manila for the cluster " "ips."), constraints=[ constraints.AllowedValues(['rw', 'ro']), ], default='rw' ) } ), support_status=support.SupportStatus(version='6.0.0'), update_allowed=True ) } default_client_name = 'sahara' physical_resource_name_limit = 50 entity = 'node_group_templates' def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.FLAVOR], client_plugin=self.client_plugin('nova'), finder='find_flavor_by_name_or_id'), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.FLOATING_IP_POOL], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='network') ] def _ngt_name(self, name): if name: return name return re.sub('[^a-zA-Z0-9-]', '', self.physical_resource_name()) def _prepare_properties(self, props): """Prepares the property values.""" if self.NAME in props: props['name'] = self._ngt_name(props[self.NAME]) if self.FLAVOR in props: props['flavor_id'] = props.pop(self.FLAVOR) return props def handle_create(self): props = dict((k, v) for k, v in six.iteritems(self.properties)) args = self._prepare_properties(props) node_group_template = self.client().node_group_templates.create(**args) LOG.info("Node Group Template '%s' has been created", node_group_template.name) self.resource_id_set(node_group_template.id) return self.resource_id def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: args = self._prepare_properties(prop_diff) self.client().node_group_templates.update(self.resource_id, **args) def validate(self): res = super(SaharaNodeGroupTemplate, self).validate() if res: return res pool = self.properties[self.FLOATING_IP_POOL] if pool: if self.is_using_neutron(): try: self.client_plugin( 'neutron').find_resourceid_by_name_or_id('network', pool) except Exception as ex: if (self.client_plugin('neutron').is_not_found(ex) or self.client_plugin('neutron').is_no_unique(ex)): err_msg = 
encodeutils.exception_to_unicode(ex) raise exception.StackValidationFailed(message=err_msg) raise else: try: self.client('nova').floating_ip_pools.find(name=pool) except Exception as ex: if self.client_plugin('nova').is_not_found(ex): err_msg = encodeutils.exception_to_unicode(ex) raise exception.StackValidationFailed(message=err_msg) raise self.client_plugin().validate_hadoop_version( self.properties[self.PLUGIN_NAME], self.properties[self.HADOOP_VERSION] ) # validate node processes plugin = self.client().plugins.get_version_details( self.properties[self.PLUGIN_NAME], self.properties[self.HADOOP_VERSION]) allowed_processes = [item for sublist in list(six.itervalues(plugin.node_processes)) for item in sublist] unsupported_processes = [] for process in self.properties[self.NODE_PROCESSES]: if process not in allowed_processes: unsupported_processes.append(process) if unsupported_processes: msg = (_("Plugin %(plugin)s doesn't support the following " "node processes: %(unsupported)s. Allowed processes are: " "%(allowed)s") % {'plugin': self.properties[self.PLUGIN_NAME], 'unsupported': ', '.join(unsupported_processes), 'allowed': ', '.join(allowed_processes)}) raise exception.StackValidationFailed( path=[self.stack.t.RESOURCES, self.name, self.stack.t.get_section_name(rsrc_defn.PROPERTIES)], message=msg) def parse_live_resource_data(self, resource_properties, resource_data): result = super(SaharaNodeGroupTemplate, self).parse_live_resource_data( resource_properties, resource_data) for group in result[self.SHARES] or []: remove_keys = set(group.keys()) - set(self._SHARE_KEYS) for key in remove_keys: del group[key] result[self.FLAVOR] = resource_data.get('flavor_id') return result class SaharaClusterTemplate(resource.Resource): """A resource for managing Sahara cluster templates. A Cluster Template is designed to bring Node Group Templates together to form a Cluster. A Cluster Template defines what Node Groups will be included and how many instances will be created in each. Some data processing framework configurations can not be applied to a single node, but to a whole Cluster. A user can specify these kinds of configurations in a Cluster Template. Sahara enables users to specify which processes should be added to an anti-affinity group within a Cluster Template. If a process is included into an anti-affinity group, it means that VMs where this process is going to be launched should be scheduled to different hardware hosts. 
""" support_status = support.SupportStatus(version='2014.2') PROPERTIES = ( NAME, PLUGIN_NAME, HADOOP_VERSION, DESCRIPTION, ANTI_AFFINITY, MANAGEMENT_NETWORK, CLUSTER_CONFIGS, NODE_GROUPS, IMAGE_ID, USE_AUTOCONFIG, SHARES ) = ( 'name', 'plugin_name', 'hadoop_version', 'description', 'anti_affinity', 'neutron_management_network', 'cluster_configs', 'node_groups', 'default_image_id', 'use_autoconfig', 'shares' ) _NODE_GROUP_KEYS = ( NG_NAME, COUNT, NG_TEMPLATE_ID, ) = ( 'name', 'count', 'node_group_template_id', ) _SHARE_KEYS = ( SHARE_ID, PATH, ACCESS_LEVEL ) = ( 'id', 'path', 'access_level' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _("Name for the Sahara Cluster Template."), constraints=[ constraints.Length(min=1, max=50), constraints.AllowedPattern(SAHARA_NAME_REGEX), ], update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the Sahara Group Template.'), default="", update_allowed=True ), PLUGIN_NAME: properties.Schema( properties.Schema.STRING, _('Plugin name.'), required=True, constraints=[ constraints.CustomConstraint('sahara.plugin') ], update_allowed=True ), HADOOP_VERSION: properties.Schema( properties.Schema.STRING, _('Version of Hadoop running on instances.'), required=True, update_allowed=True ), IMAGE_ID: properties.Schema( properties.Schema.STRING, _("ID of the default image to use for the template."), constraints=[ constraints.CustomConstraint('sahara.image'), ], update_allowed=True ), MANAGEMENT_NETWORK: properties.Schema( properties.Schema.STRING, _('Name or UUID of network.'), constraints=[ constraints.CustomConstraint('neutron.network') ], update_allowed=True ), ANTI_AFFINITY: properties.Schema( properties.Schema.LIST, _("List of processes to enable anti-affinity for."), schema=properties.Schema( properties.Schema.STRING, ), update_allowed=True ), CLUSTER_CONFIGS: properties.Schema( properties.Schema.MAP, _('Cluster configs dictionary.'), update_allowed=True ), NODE_GROUPS: properties.Schema( properties.Schema.LIST, _('Node groups.'), schema=properties.Schema( properties.Schema.MAP, schema={ NG_NAME: properties.Schema( properties.Schema.STRING, _('Name of the Node group.'), required=True ), COUNT: properties.Schema( properties.Schema.INTEGER, _("Number of instances in the Node group."), required=True, constraints=[ constraints.Range(min=1) ] ), NG_TEMPLATE_ID: properties.Schema( properties.Schema.STRING, _("ID of the Node Group Template."), required=True ), } ), update_allowed=True ), USE_AUTOCONFIG: properties.Schema( properties.Schema.BOOLEAN, _("Configure most important configs automatically."), support_status=support.SupportStatus(version='5.0.0') ), SHARES: properties.Schema( properties.Schema.LIST, _("List of manila shares to be mounted."), schema=properties.Schema( properties.Schema.MAP, schema={ SHARE_ID: properties.Schema( properties.Schema.STRING, _("Id of the manila share."), required=True ), PATH: properties.Schema( properties.Schema.STRING, _("Local path on each cluster node on which to mount " "the share. 
Defaults to '/mnt/{share_id}'.") ), ACCESS_LEVEL: properties.Schema( properties.Schema.STRING, _("Governs permissions set in manila for the cluster " "ips."), constraints=[ constraints.AllowedValues(['rw', 'ro']), ], default='rw' ) } ), support_status=support.SupportStatus(version='6.0.0'), update_allowed=True ) } default_client_name = 'sahara' physical_resource_name_limit = 50 entity = 'cluster_templates' def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.MANAGEMENT_NETWORK], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='network')] def _cluster_template_name(self, name): if name: return name return re.sub('[^a-zA-Z0-9-]', '', self.physical_resource_name()) def _prepare_properties(self, props): """Prepares the property values.""" if self.NAME in props: props['name'] = self._cluster_template_name(props[self.NAME]) if self.MANAGEMENT_NETWORK in props: props['net_id'] = props.pop(self.MANAGEMENT_NETWORK) return props def handle_create(self): props = dict((k, v) for k, v in six.iteritems(self.properties)) args = self._prepare_properties(props) cluster_template = self.client().cluster_templates.create(**args) LOG.info("Cluster Template '%s' has been created", cluster_template.name) self.resource_id_set(cluster_template.id) return self.resource_id def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: args = self._prepare_properties(prop_diff) self.client().cluster_templates.update(self.resource_id, **args) def validate(self): res = super(SaharaClusterTemplate, self).validate() if res: return res # check if running on neutron and MANAGEMENT_NETWORK missing if (self.is_using_neutron() and not self.properties[self.MANAGEMENT_NETWORK]): msg = _("%s must be provided" ) % self.MANAGEMENT_NETWORK raise exception.StackValidationFailed(message=msg) self.client_plugin().validate_hadoop_version( self.properties[self.PLUGIN_NAME], self.properties[self.HADOOP_VERSION] ) def parse_live_resource_data(self, resource_properties, resource_data): result = super(SaharaClusterTemplate, self).parse_live_resource_data( resource_properties, resource_data) for group in result[self.NODE_GROUPS] or []: remove_keys = set(group.keys()) - set(self._NODE_GROUP_KEYS) for key in remove_keys: del group[key] for group in result[self.SHARES] or []: remove_keys = set(group.keys()) - set(self._SHARE_KEYS) for key in remove_keys: del group[key] return result def resource_mapping(): return { 'OS::Sahara::NodeGroupTemplate': SaharaNodeGroupTemplate, 'OS::Sahara::ClusterTemplate': SaharaClusterTemplate, } heat-10.0.2/heat/engine/resources/openstack/mistral/0000775000175000017500000000000013343562672022412 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/mistral/cron_trigger.py0000666000175000017500000001321013343562340025437 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_log import log as logging from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support LOG = logging.getLogger(__name__) class CronTrigger(resource.Resource): """A resource that implements a Mistral cron trigger. A cron trigger is an object that allows running a workflow on a schedule. The user specifies which workflow should be run, with what input, and how often it should be run. The pattern property describes the frequency of workflow execution. """ support_status = support.SupportStatus(version='5.0.0') PROPERTIES = ( NAME, PATTERN, WORKFLOW, FIRST_TIME, COUNT ) = ( 'name', 'pattern', 'workflow', 'first_time', 'count' ) _WORKFLOW_KEYS = ( WORKFLOW_NAME, WORKFLOW_INPUT ) = ( 'name', 'input' ) ATTRIBUTES = ( NEXT_EXECUTION_TIME, REMAINING_EXECUTIONS ) = ( 'next_execution_time', 'remaining_executions' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the cron trigger.') ), PATTERN: properties.Schema( properties.Schema.STRING, _('Cron expression.'), constraints=[ constraints.CustomConstraint( 'cron_expression') ] ), WORKFLOW: properties.Schema( properties.Schema.MAP, _('Workflow to execute.'), required=True, schema={ WORKFLOW_NAME: properties.Schema( properties.Schema.STRING, _('Name or ID of the workflow.'), required=True, constraints=[ constraints.CustomConstraint('mistral.workflow') ] ), WORKFLOW_INPUT: properties.Schema( properties.Schema.MAP, _('Input values for the workflow.') ) } ), FIRST_TIME: properties.Schema( properties.Schema.STRING, _('Time of the first execution in format "YYYY-MM-DD HH:MM".') ), COUNT: properties.Schema( properties.Schema.INTEGER, _('Remaining executions.') ) } attributes_schema = { NEXT_EXECUTION_TIME: attributes.Schema( _('Time of the next execution in format "YYYY-MM-DD HH:MM:SS".'), type=attributes.Schema.STRING ), REMAINING_EXECUTIONS: attributes.Schema( _('Number of remaining executions.'), type=attributes.Schema.INTEGER ) } default_client_name = 'mistral' entity = 'cron_triggers' def validate(self): super(CronTrigger, self).validate() if not (self.properties[self.PATTERN] or self.properties[self.FIRST_TIME]): raise exception.PropertyUnspecifiedError(self.PATTERN, self.FIRST_TIME) def _cron_trigger_name(self): return self.properties.get(self.NAME) or self.physical_resource_name() def handle_create(self): workflow = self.properties.get(self.WORKFLOW) name = self._cron_trigger_name() identifier = workflow[self.WORKFLOW_NAME] args = { 'pattern': self.properties.get(self.PATTERN), 'workflow_input': workflow.get(self.WORKFLOW_INPUT), 'first_time': self.properties.get(self.FIRST_TIME), 'count': self.properties.get(self.COUNT) } cron_trigger = self.client().cron_triggers.create(name, identifier, **args) self.resource_id_set(cron_trigger.name) def _resolve_attribute(self, name): if self.resource_id is None: return trigger = self.client().cron_triggers.get(self.resource_id) if name == self.NEXT_EXECUTION_TIME: return trigger.next_execution_time elif name == self.REMAINING_EXECUTIONS: return trigger.remaining_executions def get_live_state(self, resource_properties): # Currently mistral just deletes a cron trigger once it has been # fully executed # (i.e. its remaining executions reach zero). In this case we can't # find the cron trigger via the mistral api. 
Assume that the live state of # the cron trigger is equal to the state stored in heat; otherwise we may # go through an undesirable update-replace. This behaviour might # change after # https://blueprints.launchpad.net/mistral/+spec/mistral-cron-trigger-life-cycle # is merged. LOG.warning("get_live_state isn't implemented for this type of " "resource due to the specific behaviour of cron triggers " "in mistral.") return {} def resource_mapping(): return { 'OS::Mistral::CronTrigger': CronTrigger, } heat-10.0.2/heat/engine/resources/openstack/mistral/workflow.py0000666000175000017500000007062613343562340024633 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_serialization import jsonutils import six import yaml from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources import signal_responder from heat.engine import support from heat.engine import translation class Workflow(signal_responder.SignalResponder, resource.Resource): """A resource that implements a Mistral workflow. A workflow represents a process that can be described in various ways and that performs a job of interest to the end user. Each workflow consists of tasks (at least one) describing the exact steps to take during workflow execution. For a detailed description of how to use Workflow, read the Mistral documentation. 
""" support_status = support.SupportStatus(version='2015.1') default_client_name = 'mistral' entity = 'workflows' PROPERTIES = ( NAME, TYPE, DESCRIPTION, INPUT, OUTPUT, TASKS, PARAMS, TASK_DEFAULTS, USE_REQUEST_BODY_AS_INPUT, TAGS ) = ( 'name', 'type', 'description', 'input', 'output', 'tasks', 'params', 'task_defaults', 'use_request_body_as_input', 'tags' ) _TASKS_KEYS = ( TASK_NAME, TASK_DESCRIPTION, ON_ERROR, ON_COMPLETE, ON_SUCCESS, POLICIES, ACTION, WORKFLOW, PUBLISH, TASK_INPUT, REQUIRES, RETRY, WAIT_BEFORE, WAIT_AFTER, PAUSE_BEFORE, TIMEOUT, WITH_ITEMS, KEEP_RESULT, TARGET, JOIN, CONCURRENCY ) = ( 'name', 'description', 'on_error', 'on_complete', 'on_success', 'policies', 'action', 'workflow', 'publish', 'input', 'requires', 'retry', 'wait_before', 'wait_after', 'pause_before', 'timeout', 'with_items', 'keep_result', 'target', 'join', 'concurrency' ) _TASKS_TASK_DEFAULTS = [ ON_ERROR, ON_COMPLETE, ON_SUCCESS, REQUIRES, RETRY, WAIT_BEFORE, WAIT_AFTER, PAUSE_BEFORE, TIMEOUT, CONCURRENCY ] _SIGNAL_DATA_KEYS = ( SIGNAL_DATA_INPUT, SIGNAL_DATA_PARAMS ) = ( 'input', 'params' ) ATTRIBUTES = ( WORKFLOW_DATA, ALARM_URL, EXECUTIONS ) = ( 'data', 'alarm_url', 'executions' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Workflow name.') ), TYPE: properties.Schema( properties.Schema.STRING, _('Workflow type.'), constraints=[ constraints.AllowedValues(['direct', 'reverse']) ], required=True, update_allowed=True ), USE_REQUEST_BODY_AS_INPUT: properties.Schema( properties.Schema.BOOLEAN, _('Defines the method in which the request body for signaling a ' 'workflow would be parsed. In case this property is set to ' 'True, the body would be parsed as a simple json where each ' 'key is a workflow input, in other cases body would be parsed ' 'expecting a specific json format with two keys: "input" and ' '"params".'), update_allowed=True, support_status=support.SupportStatus(version='6.0.0') ), TAGS: properties.Schema( properties.Schema.LIST, _('List of tags to set on the workflow.'), update_allowed=True, support_status=support.SupportStatus(version='10.0.0') ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Workflow description.'), update_allowed=True ), INPUT: properties.Schema( properties.Schema.MAP, _('Dictionary which contains input for workflow.'), update_allowed=True ), OUTPUT: properties.Schema( properties.Schema.MAP, _('Any data structure arbitrarily containing YAQL ' 'expressions that defines workflow output. May be ' 'nested.'), update_allowed=True ), PARAMS: properties.Schema( properties.Schema.MAP, _("Workflow additional parameters. If Workflow is reverse typed, " "params requires 'task_name', which defines initial task."), update_allowed=True ), TASK_DEFAULTS: properties.Schema( properties.Schema.MAP, _("Default settings for some of task " "attributes defined " "at workflow level."), support_status=support.SupportStatus(version='5.0.0'), schema={ ON_SUCCESS: properties.Schema( properties.Schema.LIST, _('List of tasks which will run after ' 'the task has completed successfully.') ), ON_ERROR: properties.Schema( properties.Schema.LIST, _('List of tasks which will run after ' 'the task has completed with an error.') ), ON_COMPLETE: properties.Schema( properties.Schema.LIST, _('List of tasks which will run after ' 'the task has completed regardless of whether ' 'it is successful or not.') ), REQUIRES: properties.Schema( properties.Schema.LIST, _('List of tasks which should be executed before ' 'this task. 
Used only in reverse workflows.') ), RETRY: properties.Schema( properties.Schema.MAP, _('Defines a pattern how task should be repeated in ' 'case of an error.') ), WAIT_BEFORE: properties.Schema( properties.Schema.INTEGER, _('Defines a delay in seconds that Mistral Engine ' 'should wait before starting a task.') ), WAIT_AFTER: properties.Schema( properties.Schema.INTEGER, _('Defines a delay in seconds that Mistral Engine ' 'should wait after a task has completed before ' 'starting next tasks defined in ' 'on-success, on-error or on-complete.') ), PAUSE_BEFORE: properties.Schema( properties.Schema.BOOLEAN, _('Defines whether Mistral Engine should put the ' 'workflow on hold or not before starting a task.') ), TIMEOUT: properties.Schema( properties.Schema.INTEGER, _('Defines a period of time in seconds after which ' 'a task will be failed automatically ' 'by engine if hasn\'t completed.') ), CONCURRENCY: properties.Schema( properties.Schema.INTEGER, _('Defines a max number of actions running simultaneously ' 'in a task. Applicable only for tasks that have ' 'with-items.'), support_status=support.SupportStatus(version='8.0.0') ) }, update_allowed=True ), TASKS: properties.Schema( properties.Schema.LIST, _('Dictionary containing workflow tasks.'), schema=properties.Schema( properties.Schema.MAP, schema={ TASK_NAME: properties.Schema( properties.Schema.STRING, _('Task name.'), required=True ), TASK_DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Task description.') ), TASK_INPUT: properties.Schema( properties.Schema.MAP, _('Actual input parameter values of the task.') ), ACTION: properties.Schema( properties.Schema.STRING, _('Name of the action associated with the task. ' 'Either action or workflow may be defined in the ' 'task.') ), WORKFLOW: properties.Schema( properties.Schema.STRING, _('Name of the workflow associated with the task. ' 'Can be defined by intrinsic function get_resource ' 'or by name of the referenced workflow, i.e. ' '{ workflow: wf_name } or ' '{ workflow: { get_resource: wf_name }}. Either ' 'action or workflow may be defined in the task.'), constraints=[ constraints.CustomConstraint('mistral.workflow') ] ), PUBLISH: properties.Schema( properties.Schema.MAP, _('Dictionary of variables to publish to ' 'the workflow context.') ), ON_SUCCESS: properties.Schema( properties.Schema.LIST, _('List of tasks which will run after ' 'the task has completed successfully.') ), ON_ERROR: properties.Schema( properties.Schema.LIST, _('List of tasks which will run after ' 'the task has completed with an error.') ), ON_COMPLETE: properties.Schema( properties.Schema.LIST, _('List of tasks which will run after ' 'the task has completed regardless of whether ' 'it is successful or not.') ), POLICIES: properties.Schema( properties.Schema.MAP, _('Dictionary-like section defining task policies ' 'that influence how Mistral Engine runs tasks. Must ' 'satisfy Mistral DSL v2.'), support_status=support.SupportStatus( status=support.HIDDEN, version='8.0.0', message=_('Add needed policies directly to ' 'the task, Policy keyword is not ' 'needed'), previous_status=support.SupportStatus( status=support.DEPRECATED, version='5.0.0', previous_status=support.SupportStatus( version='2015.1') ) ) ), REQUIRES: properties.Schema( properties.Schema.LIST, _('List of tasks which should be executed before ' 'this task. 
Used only in reverse workflows.') ), RETRY: properties.Schema( properties.Schema.MAP, _('Defines a pattern how task should be repeated in ' 'case of an error.'), support_status=support.SupportStatus(version='5.0.0') ), WAIT_BEFORE: properties.Schema( properties.Schema.INTEGER, _('Defines a delay in seconds that Mistral Engine ' 'should wait before starting a task.'), support_status=support.SupportStatus(version='5.0.0') ), WAIT_AFTER: properties.Schema( properties.Schema.INTEGER, _('Defines a delay in seconds that Mistral ' 'Engine should wait after ' 'a task has completed before starting next tasks ' 'defined in on-success, on-error or on-complete.'), support_status=support.SupportStatus(version='5.0.0') ), PAUSE_BEFORE: properties.Schema( properties.Schema.BOOLEAN, _('Defines whether Mistral Engine should ' 'put the workflow on hold ' 'or not before starting a task.'), support_status=support.SupportStatus(version='5.0.0') ), TIMEOUT: properties.Schema( properties.Schema.INTEGER, _('Defines a period of time in seconds after which a ' 'task will be failed automatically by engine ' 'if hasn\'t completed.'), support_status=support.SupportStatus(version='5.0.0') ), WITH_ITEMS: properties.Schema( properties.Schema.STRING, _('If configured, it allows to run action or workflow ' 'associated with a task multiple times ' 'on a provided list of items.'), support_status=support.SupportStatus(version='5.0.0') ), KEEP_RESULT: properties.Schema( properties.Schema.BOOLEAN, _('Allowing not to store action results ' 'after task completion.'), support_status=support.SupportStatus(version='5.0.0') ), CONCURRENCY: properties.Schema( properties.Schema.INTEGER, _('Defines a max number of actions running ' 'simultaneously in a task. Applicable only for ' 'tasks that have with-items.'), support_status=support.SupportStatus(version='8.0.0') ), TARGET: properties.Schema( properties.Schema.STRING, _('It defines an executor to which task action ' 'should be sent to.'), support_status=support.SupportStatus(version='5.0.0') ), JOIN: properties.Schema( properties.Schema.STRING, _('Allows to synchronize multiple parallel workflow ' 'branches and aggregate their data. ' 'Valid inputs: all - the task will run only if ' 'all upstream tasks are completed. ' 'Any numeric value - then the task will run once ' 'at least this number of upstream tasks are ' 'completed and corresponding conditions have ' 'triggered.'), support_status=support.SupportStatus(version='6.0.0') ), }, ), required=True, update_allowed=True, constraints=[constraints.Length(min=1)] ) } attributes_schema = { WORKFLOW_DATA: attributes.Schema( _('A dictionary which contains name and input of the workflow.'), type=attributes.Schema.MAP ), ALARM_URL: attributes.Schema( _("A signed url to create executions for workflows specified in " "Workflow resource."), type=attributes.Schema.STRING ), EXECUTIONS: attributes.Schema( _("List of workflows' executions, each of them is a dictionary " "with information about execution. 
Each dictionary returns " "values for next keys: id, workflow_name, created_at, " "updated_at, state for current execution state, input, output."), type=attributes.Schema.LIST ) } def translation_rules(self, properties): policies_keys = [self.PAUSE_BEFORE, self.WAIT_AFTER, self.WAIT_BEFORE, self.TIMEOUT, self.CONCURRENCY, self.RETRY] rules = [] for key in policies_keys: rules.append( translation.TranslationRule( properties, translation.TranslationRule.REPLACE, [self.TASKS, key], value_name=self.POLICIES, custom_value_path=[key] ) ) # after executing rules above properties data contains policies key # with empty dict value, so need to remove policies from properties. rules.append( translation.TranslationRule( properties, translation.TranslationRule.DELETE, [self.TASKS, self.POLICIES] ) ) return rules def get_reference_id(self): return self._workflow_name() def _get_inputs_and_params(self, data): inputs = None params = None if self.properties.get(self.USE_REQUEST_BODY_AS_INPUT): inputs = data else: if data is not None: inputs = data.get(self.SIGNAL_DATA_INPUT) params = data.get(self.SIGNAL_DATA_PARAMS) return inputs, params def _validate_signal_data(self, inputs, params): if inputs is not None: if not isinstance(inputs, dict): message = (_('Input in signal data must be a map, ' 'find a %s') % type(inputs)) raise exception.StackValidationFailed( error=_('Signal data error'), message=message) for key in inputs: if (self.properties.get(self.INPUT) is None or key not in self.properties.get(self.INPUT)): message = _('Unknown input %s') % key raise exception.StackValidationFailed( error=_('Signal data error'), message=message) if params is not None and not isinstance(params, dict): message = (_('Params must be a map, find a ' '%s') % type(params)) raise exception.StackValidationFailed( error=_('Signal data error'), message=message) def validate(self): super(Workflow, self).validate() if self.properties.get(self.TYPE) == 'reverse': params = self.properties.get(self.PARAMS) if params is None or not params.get('task_name'): raise exception.StackValidationFailed( error=_('Mistral resource validation error'), path=[self.name, ('properties' if self.stack.t.VERSION == 'heat_template_version' else 'Properties'), self.PARAMS], message=_("'task_name' is not assigned in 'params' " "in case of reverse type workflow.") ) for task in self.properties.get(self.TASKS): wf_value = task.get(self.WORKFLOW) action_value = task.get(self.ACTION) if wf_value and action_value: raise exception.ResourcePropertyConflict(self.WORKFLOW, self.ACTION) if not wf_value and not action_value: raise exception.PropertyUnspecifiedError(self.WORKFLOW, self.ACTION) if (task.get(self.REQUIRES) is not None and self.properties.get(self.TYPE)) == 'direct': msg = _("task %(task)s contains property 'requires' " "in case of direct workflow. 
Only reverse workflows " "can contain property 'requires'.") % { 'name': self.name, 'task': task.get(self.TASK_NAME) } raise exception.StackValidationFailed( error=_('Mistral resource validation error'), path=[self.name, ('properties' if self.stack.t.VERSION == 'heat_template_version' else 'Properties'), self.TASKS, task.get(self.TASK_NAME), self.REQUIRES], message=msg) if task.get(self.POLICIES) is not None: for task_item in task.get(self.POLICIES): if task.get(task_item) is not None: msg = _('Properties %(policies)s and %(item)s cannot be ' 'used at the same time.') % { 'policies': self.POLICIES, 'item': task_item } raise exception.StackValidationFailed(message=msg) if (task.get(self.WITH_ITEMS) is None and task.get(self.CONCURRENCY) is not None): raise exception.ResourcePropertyDependency( prop1=self.CONCURRENCY, prop2=self.WITH_ITEMS) def _workflow_name(self): return self.properties.get(self.NAME) or self.physical_resource_name() def build_tasks(self, props): for task in props[self.TASKS]: current_task = {} wf_value = task.get(self.WORKFLOW) if wf_value is not None: current_task.update({self.WORKFLOW: wf_value}) # Backward compatibility with Kilo. if task.get(self.POLICIES) is not None: task.update(task.get(self.POLICIES)) task_keys = [key for key in self._TASKS_KEYS if key not in [ self.WORKFLOW, self.TASK_NAME, self.POLICIES ]] for task_prop in task_keys: if task.get(task_prop) is not None: current_task.update( {task_prop.replace('_', '-'): task[task_prop]}) yield {task[self.TASK_NAME]: current_task} def prepare_properties(self, props): """Prepare a correct YAML-formatted definition for Mistral.""" defn_name = self._workflow_name() definition = {'version': '2.0', defn_name: {self.TYPE: props.get(self.TYPE), self.DESCRIPTION: props.get( self.DESCRIPTION), self.TAGS: props.get(self.TAGS), self.OUTPUT: props.get(self.OUTPUT)}} for key in list(definition[defn_name].keys()): if definition[defn_name][key] is None: del definition[defn_name][key] if props.get(self.INPUT) is not None: definition[defn_name][self.INPUT] = list(props.get( self.INPUT).keys()) definition[defn_name][self.TASKS] = {} for task in self.build_tasks(props): definition.get(defn_name).get(self.TASKS).update(task) if props.get(self.TASK_DEFAULTS) is not None: definition[defn_name][self.TASK_DEFAULTS.replace('_', '-')] = { k.replace('_', '-'): v for k, v in six.iteritems(props.get(self.TASK_DEFAULTS)) if v} return yaml.dump(definition, Dumper=yaml.CSafeDumper if hasattr(yaml, 'CSafeDumper') else yaml.SafeDumper) def handle_create(self): super(Workflow, self).handle_create() props = self.prepare_properties(self.properties) try: workflow = self.client().workflows.create(props) except Exception as ex: raise exception.ResourceFailure(ex, self) # NOTE(prazumovsky): Mistral uses unique names for resource # identification. self.resource_id_set(workflow[0].name) def handle_signal(self, details=None): inputs, params = self._get_inputs_and_params(details) self._validate_signal_data(inputs, params) inputs_result = copy.deepcopy(self.properties[self.INPUT]) params_result = copy.deepcopy(self.properties[self.PARAMS]) or {} # NOTE(prazumovsky): A signal can contain data of interest to the # workflow, e.g. inputs. So, if the signal data contains inputs, we # override those inputs; the others are left as defined in the # template.
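# For illustration only (hypothetical values): with a template-defined
# input of {'count': 1} and a signal body of
#     {'input': {'count': 5}, 'params': {'task_name': 'create_vm'}},
# inputs_result below becomes {'count': 5} and params_result gains
# 'task_name', both of which are then passed to executions.create().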
if inputs: inputs_result.update(inputs) if params: params_result.update(params) try: execution = self.client().executions.create( self._workflow_name(), workflow_input=jsonutils.dumps(inputs_result), **params_result) except Exception as ex: raise exception.ResourceFailure(ex, self) executions = [execution.id] if self.EXECUTIONS in self.data(): executions.extend(self.data().get(self.EXECUTIONS).split(',')) self.data_set(self.EXECUTIONS, ','.join(executions)) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: props = json_snippet.properties(self.properties_schema, self.context) new_props = self.prepare_properties(props) try: workflow = self.client().workflows.update(new_props) except Exception as ex: raise exception.ResourceFailure(ex, self) self.data_set(self.NAME, workflow[0].name) self.resource_id_set(workflow[0].name) def _delete_executions(self): if self.data().get(self.EXECUTIONS): for id in self.data().get(self.EXECUTIONS).split(','): with self.client_plugin().ignore_not_found: self.client().executions.delete(id) self.data_delete('executions') def handle_delete(self): self._delete_executions() return super(Workflow, self).handle_delete() def _resolve_attribute(self, name): if name == self.EXECUTIONS: if self.EXECUTIONS not in self.data(): return [] def parse_execution_response(execution): return { 'id': execution.id, 'workflow_name': execution.workflow_name, 'created_at': execution.created_at, 'updated_at': execution.updated_at, 'state': execution.state, 'input': jsonutils.loads(six.text_type(execution.input)), 'output': jsonutils.loads(six.text_type(execution.output)) } return [parse_execution_response( self.client().executions.get(exec_id)) for exec_id in self.data().get(self.EXECUTIONS).split(',')] elif name == self.WORKFLOW_DATA: return {self.NAME: self.resource_id, self.INPUT: self.properties.get(self.INPUT)} elif name == self.ALARM_URL and self.resource_id is not None: return six.text_type(self._get_ec2_signed_url()) def resource_mapping(): return { 'OS::Mistral::Workflow': Workflow } heat-10.0.2/heat/engine/resources/openstack/mistral/__init__.py0000666000175000017500000000000013343562340024503 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/mistral/external_resource.py0000666000175000017500000002443213343562340026514 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_serialization import jsonutils import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support LOG = logging.getLogger(__name__) class MistralExternalResource(resource.Resource): """A plugin for managing user-defined resources via Mistral workflows. This resource allows users to manage resources that are not known to Heat. The user may specify a Mistral workflow to handle each resource action, such as CREATE, UPDATE, or DELETE. 
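As an illustrative sketch (workflow names are hypothetical), an `actions` map of `{CREATE: {workflow: create_wf}, DELETE: {workflow: delete_wf}}` runs `create_wf` when the stack is created and `delete_wf` when it is deleted.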
The workflows may return an output named 'resource_id', which will be treated as the physical ID of the resource by Heat. Once the resource is created, subsequent workflow runs will receive the output of the last workflow execution in the 'heat_extresource_data' key in the workflow environment (accessible as ``env().heat_extresource_data`` in the workflow). The template author may specify a subset of inputs as causing replacement of the resource when they change, as an alternative to running the UPDATE workflow. """ support_status = support.SupportStatus(version='9.0.0') default_client_name = 'mistral' entity = 'executions' _ACTION_PROPERTIES = ( WORKFLOW, PARAMS ) = ( 'workflow', 'params' ) PROPERTIES = ( EX_ACTIONS, INPUT, DESCRIPTION, REPLACE_ON_CHANGE, ALWAYS_UPDATE ) = ( 'actions', 'input', 'description', 'replace_on_change_inputs', 'always_update' ) ATTRIBUTES = ( OUTPUT, ) = ( 'output', ) _action_properties_schema = properties.Schema( properties.Schema.MAP, _('Dictionary which defines the workflow to run and its params.'), schema={ WORKFLOW: properties.Schema( properties.Schema.STRING, _('Workflow to execute.'), required=True, constraints=[ constraints.CustomConstraint('mistral.workflow') ], ), PARAMS: properties.Schema( properties.Schema.MAP, _('Additional workflow parameters. If the workflow is reverse ' 'typed, params requires "task_name", which defines the ' 'initial task.'), default={} ), } ) properties_schema = { EX_ACTIONS: properties.Schema( properties.Schema.MAP, _('Resource action which triggers a workflow execution.'), schema={ resource.Resource.CREATE: _action_properties_schema, resource.Resource.UPDATE: _action_properties_schema, resource.Resource.SUSPEND: _action_properties_schema, resource.Resource.RESUME: _action_properties_schema, resource.Resource.DELETE: _action_properties_schema, }, required=True ), INPUT: properties.Schema( properties.Schema.MAP, _('Dictionary which contains input for the workflows.'), update_allowed=True, default={} ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Workflow execution description.'), default='Heat managed' ), REPLACE_ON_CHANGE: properties.Schema( properties.Schema.LIST, _('A list of inputs that should cause the resource to be replaced ' 'when their values change.'), default=[] ), ALWAYS_UPDATE: properties.Schema( properties.Schema.BOOLEAN, _('Triggers UPDATE action execution even if input is ' 'unchanged.'), default=False ), } attributes_schema = { OUTPUT: attributes.Schema( _('Output from the execution.'), type=attributes.Schema.MAP ), } def _check_execution(self, action, execution_id): """Check execution status. Returns (False, {}) if the execution is in IDLE, RUNNING or PAUSED; returns (True, the parsed output) if it is in SUCCESS; raises ResourceFailure if it is in ERROR or CANCELLED; and raises ResourceUnknownStatus otherwise. 
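For example (hypothetical values), an execution in state 'SUCCESS' whose output is '{"resource_id": "abc123"}' yields (True, {'resource_id': 'abc123'}).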
""" execution = self.client().executions.get(execution_id) LOG.debug('Mistral execution %(id)s is in state ' '%(state)s' % {'id': execution_id, 'state': execution.state}) if execution.state in ('IDLE', 'RUNNING', 'PAUSED'): return False, {} if execution.state in ('SUCCESS',): return True, jsonutils.loads(execution.output) if execution.state in ('ERROR', 'CANCELLED'): raise exception.ResourceFailure( exception_or_error=execution.state, resource=self, action=action) raise exception.ResourceUnknownStatus( resource_status=execution.state, result=_('Mistral execution is in unknown state.')) def _handle_action(self, action, inputs=None): action_data = self.properties[self.EX_ACTIONS].get(action) if action_data: # bring forward output from previous executions into env if self.resource_id: old_outputs = jsonutils.loads(self.data().get('outputs', '{}')) action_env = action_data[self.PARAMS].get('env', {}) action_env['heat_extresource_data'] = old_outputs action_data[self.PARAMS]['env'] = action_env # inputs is not None when inputs changed on stack UPDATE if not inputs: inputs = self.properties[self.INPUT] execution = self.client().executions.create( action_data[self.WORKFLOW], workflow_input=jsonutils.dumps(inputs), description=self.properties[self.DESCRIPTION], **action_data[self.PARAMS]) LOG.debug('Mistral execution %(id)s params set to ' '%(params)s' % {'id': execution.id, 'params': action_data[self.PARAMS]}) return execution.id def _check_action(self, action, execution_id): success = True # execution_id is None when no data is available for a given action if execution_id: rsrc_id = execution_id success, output = self._check_execution(action, execution_id) # merge output with outputs of previous executions outputs = jsonutils.loads(self.data().get('outputs', '{}')) outputs.update(output) self.data_set('outputs', jsonutils.dumps(outputs)) # set resource id using output, if found if output.get('resource_id'): rsrc_id = output.get('resource_id') LOG.debug('ExternalResource id set to %(rid)s from Mistral ' 'execution %(eid)s output' % {'eid': execution_id, 'rid': rsrc_id}) self.resource_id_set(six.text_type(rsrc_id)[:255]) return success def _resolve_attribute(self, name): if self.resource_id and name == self.OUTPUT: return self.data().get('outputs') def _needs_update(self, after, before, after_props, before_props, prev_resource, check_init_complete=True): # check if we need to force replace first old_inputs = before_props[self.INPUT] new_inputs = after_props[self.INPUT] for i in after_props[self.REPLACE_ON_CHANGE]: if old_inputs.get(i) != new_inputs.get(i): LOG.debug('Replacing ExternalResource %(id)s instead of ' 'updating due to change to input "%(i)s"' % {"id": self.resource_id, "i": i}) raise resource.UpdateReplace(self) # honor always_update if found if self.properties[self.ALWAYS_UPDATE]: return True # call super in all other scenarios else: return super(MistralExternalResource, self)._needs_update(after, before, after_props, before_props, prev_resource, check_init_complete) def handle_create(self): return self._handle_action(self.CREATE) def check_create_complete(self, execution_id): return self._check_action(self.CREATE, execution_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): new_inputs = prop_diff.get(self.INPUT) return self._handle_action(self.UPDATE, new_inputs) def check_update_complete(self, execution_id): return self._check_action(self.UPDATE, execution_id) def handle_suspend(self): return self._handle_action(self.SUSPEND) def check_suspend_complete(self, 
execution_id): return self._check_action(self.SUSPEND, execution_id) def handle_resume(self): return self._handle_action(self.RESUME) def check_resume_complete(self, execution_id): return self._check_action(self.RESUME, execution_id) def handle_delete(self): return self._handle_action(self.DELETE) def check_delete_complete(self, execution_id): return self._check_action(self.DELETE, execution_id) def resource_mapping(): return { 'OS::Mistral::ExternalResource': MistralExternalResource } heat-10.0.2/heat/engine/resources/openstack/nova/0000775000175000017500000000000013343562672021702 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/nova/quota.py0000666000175000017500000002076013343562351023406 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class NovaQuota(resource.Resource): """A resource for creating nova quotas. Nova Quota is used to manage operational limits for projects. Currently, this resource can manage Nova's quotas for: - cores - fixed_ips - floating_ips - instances - injected_files - injected_file_content_bytes - injected_file_path_bytes - key_pairs - metadata_items - ram - security_groups - security_group_rules - server_groups - server_group_members Note that default nova security policy usage of this resource is limited to being used by administrators only. Administrators should be careful to create only one Nova Quota resource per project, otherwise it will be hard for them to manage the quota properly. """ support_status = support.SupportStatus(version='8.0.0') default_client_name = 'nova' entity = 'quotas' required_service_extension = 'os-quota-sets' PROPERTIES = ( PROJECT, CORES, FIXED_IPS, FLOATING_IPS, INSTANCES, INJECTED_FILES, INJECTED_FILE_CONTENT_BYTES, INJECTED_FILE_PATH_BYTES, KEYPAIRS, METADATA_ITEMS, RAM, SECURITY_GROUPS, SECURITY_GROUP_RULES, SERVER_GROUPS, SERVER_GROUP_MEMBERS ) = ( 'project', 'cores', 'fixed_ips', 'floating_ips', 'instances', 'injected_files', 'injected_file_content_bytes', 'injected_file_path_bytes', 'key_pairs', 'metadata_items', 'ram', 'security_groups', 'security_group_rules', 'server_groups', 'server_group_members' ) properties_schema = { PROJECT: properties.Schema( properties.Schema.STRING, _('Name or id of the project to set the quota for.'), required=True, constraints=[ constraints.CustomConstraint('keystone.project') ] ), CORES: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of cores. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), FIXED_IPS: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of fixed IPs. 
' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), FLOATING_IPS: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of floating IPs. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), INSTANCES: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of instances. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), INJECTED_FILES: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of injected files. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), INJECTED_FILE_CONTENT_BYTES: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of injected file content bytes. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), INJECTED_FILE_PATH_BYTES: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of injected file path bytes. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), KEYPAIRS: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of key pairs. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), METADATA_ITEMS: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of metadata items. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), RAM: properties.Schema( properties.Schema.INTEGER, _('Quota for the amount of ram (in megabytes). ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), SECURITY_GROUPS: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of security groups. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), SECURITY_GROUP_RULES: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of security group rules. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), SERVER_GROUPS: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of server groups. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), SERVER_GROUP_MEMBERS: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of server group members. 
' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ) } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.PROJECT], client_plugin=self.client_plugin('keystone'), finder='get_project_id') ] def handle_create(self): self._set_quota() self.resource_id_set(self.physical_resource_name()) def handle_update(self, json_snippet, tmpl_diff, prop_diff): self._set_quota(json_snippet.properties(self.properties_schema, self.context)) def _set_quota(self, props=None): if props is None: props = self.properties kwargs = dict((k, v) for k, v in props.items() if k != self.PROJECT and v is not None) self.client().quotas.update(props.get(self.PROJECT), **kwargs) def handle_delete(self): self.client().quotas.delete(self.properties[self.PROJECT]) def validate(self): super(NovaQuota, self).validate() if sum(1 for p in self.properties.values() if p is not None) <= 1: raise exception.PropertyUnspecifiedError( *sorted(set(self.PROPERTIES) - {self.PROJECT})) def resource_mapping(): return { 'OS::Nova::Quota': NovaQuota } heat-10.0.2/heat/engine/resources/openstack/nova/server_group.py0000666000175000017500000000522313343562351024774 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class ServerGroup(resource.Resource): """A resource for managing a Nova server group. Server groups allow you to make sure instances (VM/VPS) are on the same hypervisor host or on a different one. """ support_status = support.SupportStatus(version='2014.2') default_client_name = 'nova' entity = 'server_groups' required_service_extension = 'os-server-groups' PROPERTIES = ( NAME, POLICIES ) = ( 'name', 'policies' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Server Group name.') ), POLICIES: properties.Schema( properties.Schema.LIST, _('A list of string policies to apply. 
' 'Defaults to anti-affinity.'), default=['anti-affinity'], constraints=[ constraints.AllowedValues(["anti-affinity", "affinity", "soft-anti-affinity", "soft-affinity"]) ], schema=properties.Schema( properties.Schema.STRING, ) ), } def handle_create(self): name = self.physical_resource_name() policies = self.properties[self.POLICIES] if 'soft-affinity' in policies or 'soft-anti-affinity' in policies: client = self.client( version=self.client_plugin().V2_15) else: client = self.client() server_group = client.server_groups.create(name=name, policies=policies) self.resource_id_set(server_group.id) def physical_resource_name(self): name = self.properties[self.NAME] if name: return name return super(ServerGroup, self).physical_resource_name() def resource_mapping(): return {'OS::Nova::ServerGroup': ServerGroup} heat-10.0.2/heat/engine/resources/openstack/nova/host_aggregate.py0000666000175000017500000001174713343562340025243 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class HostAggregate(resource.Resource): """A resource for further partitioning an availability zone with hosts. While availability zones are visible to users, host aggregates are only visible to administrators. Host aggregates started out as a way to use Xen hypervisor resource pools, but have been generalized to provide a mechanism allowing administrators to assign key-value pairs to groups of machines. Each node can have multiple aggregates, each aggregate can have multiple key-value pairs, and the same key-value pair can be assigned to multiple aggregates. This information can be used in the scheduler to enable advanced scheduling, to set up Xen hypervisor resource pools or to define logical groups for migration. 
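For example (illustrative values), an aggregate with metadata {'gpu': 'true'} can be paired with matching flavor extra specs so that the scheduler places GPU flavors only on hosts in that aggregate.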
""" support_status = support.SupportStatus(version='6.0.0') default_client_name = 'nova' entity = 'aggregates' required_service_extension = 'os-aggregates' PROPERTIES = ( NAME, AVAILABILITY_ZONE, HOSTS, METADATA ) = ( 'name', 'availability_zone', 'hosts', 'metadata' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name for the aggregate.'), required=True, update_allowed=True, ), AVAILABILITY_ZONE: properties.Schema( properties.Schema.STRING, _('Name for the availability zone.'), required=True, update_allowed=True, ), HOSTS: properties.Schema( properties.Schema.LIST, _('List of hosts to join aggregate.'), update_allowed=True, schema=properties.Schema( properties.Schema.STRING, constraints=[constraints.CustomConstraint('nova.host')], ), ), METADATA: properties.Schema( properties.Schema.MAP, _('Arbitrary key/value metadata to store information ' 'for aggregate.'), update_allowed=True, default={} ), } def _find_diff(self, update_prps, stored_prps): add_prps = list(set(update_prps or []) - set(stored_prps or [])) remove_prps = list(set(stored_prps or []) - set(update_prps or [])) return add_prps, remove_prps def handle_create(self): name = self.properties[self.NAME] availability_zone = self.properties[self.AVAILABILITY_ZONE] hosts = self.properties[self.HOSTS] or [] metadata = self.properties[self.METADATA] or {} aggregate = self.client().aggregates.create( name=name, availability_zone=availability_zone ) self.resource_id_set(aggregate.id) if metadata: aggregate.set_metadata(metadata) for host in hosts: aggregate.add_host(host) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: aggregate = self.client().aggregates.get(self.resource_id) if self.HOSTS in prop_diff: new_hosts = prop_diff.pop(self.HOSTS) old_hosts = aggregate.hosts add_hosts, remove_hosts = self._find_diff(new_hosts, old_hosts) for host in add_hosts: aggregate.add_host(host) for host in remove_hosts: aggregate.remove_host(host) if self.METADATA in prop_diff: metadata = prop_diff.pop(self.METADATA) if metadata: aggregate.set_metadata(metadata) if prop_diff: aggregate.update(prop_diff) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: aggregate = self.client().aggregates.get(self.resource_id) for host in aggregate.hosts: aggregate.remove_host(host) super(HostAggregate, self).handle_delete() def parse_live_resource_data(self, resource_properties, resource_data): aggregate_reality = {} for key in self.PROPERTIES: aggregate_reality.update({key: resource_data.get(key)}) return aggregate_reality def resource_mapping(): return { 'OS::Nova::HostAggregate': HostAggregate } heat-10.0.2/heat/engine/resources/openstack/nova/flavor.py0000666000175000017500000002131513343562340023541 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_log import log as logging from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation LOG = logging.getLogger(__name__) class NovaFlavor(resource.Resource): """A resource for creating OpenStack virtual hardware templates. Due to the default nova security policy, usage of this resource is limited to administrators only. The rights may also be delegated to other users by redefining the access controls on the nova-api server. Note that if the name or flavorid properties are not specified, they are generated automatically (a physical resource name and an auto-generated UUID, respectively) to avoid potential naming collisions upon flavor creation, as all flavors have a global scope. """ support_status = support.SupportStatus(version='2014.2') default_client_name = 'nova' required_service_extension = 'os-flavor-manage' entity = 'flavors' PROPERTIES = ( TENANTS, ID, NAME, RAM, VCPUS, DISK, SWAP, EPHEMERAL, RXTX_FACTOR, EXTRA_SPECS, IS_PUBLIC ) = ( 'tenants', 'flavorid', 'name', 'ram', 'vcpus', 'disk', 'swap', 'ephemeral', 'rxtx_factor', 'extra_specs', 'is_public', ) ATTRIBUTES = ( IS_PUBLIC_ATTR, EXTRA_SPECS_ATTR ) = ( 'is_public', 'extra_specs' ) properties_schema = { TENANTS: properties.Schema( properties.Schema.LIST, _('List of tenants.'), update_allowed=True, default=[], schema=properties.Schema( properties.Schema.STRING, constraints=[constraints.CustomConstraint('keystone.project')] ), support_status=support.SupportStatus(version='8.0.0') ), ID: properties.Schema( properties.Schema.STRING, _('Unique ID of the flavor. If not specified, ' 'a UUID will be auto-generated and used.'), support_status=support.SupportStatus(version='7.0.0') ), NAME: properties.Schema( properties.Schema.STRING, _('Name of the flavor.'), support_status=support.SupportStatus(version='7.0.0'), ), RAM: properties.Schema( properties.Schema.INTEGER, _('Memory in MB for the flavor.'), required=True ), VCPUS: properties.Schema( properties.Schema.INTEGER, _('Number of VCPUs for the flavor.'), required=True ), DISK: properties.Schema( properties.Schema.INTEGER, _('Size of local disk in GB. The "0" size is a special case that ' 'uses the native base image size as the size of the ephemeral ' 'root volume.'), default=0 ), SWAP: properties.Schema( properties.Schema.INTEGER, _('Swap space in MB.'), default=0 ), EPHEMERAL: properties.Schema( properties.Schema.INTEGER, _('Size of a secondary ephemeral data disk in GB.'), default=0 ), RXTX_FACTOR: properties.Schema( properties.Schema.NUMBER, _('RX/TX factor.'), default=1.0 ), EXTRA_SPECS: properties.Schema( properties.Schema.MAP, _('Key/Value pairs to extend the capabilities of the flavor.'), update_allowed=True, ), IS_PUBLIC: properties.Schema( properties.Schema.BOOLEAN, _('Scope of flavor accessibility. Public or private. 
' 'Default value is True, means public, shared ' 'across all projects.'), default=True, support_status=support.SupportStatus(version='6.0.0'), ), } attributes_schema = { IS_PUBLIC_ATTR: attributes.Schema( _('Whether the flavor is shared across all projects.'), support_status=support.SupportStatus(version='6.0.0'), type=attributes.Schema.BOOLEAN ), EXTRA_SPECS_ATTR: attributes.Schema( _('Extra specs of the flavor in key-value pairs.'), support_status=support.SupportStatus(version='7.0.0'), type=attributes.Schema.MAP ) } def translation_rules(self, properties): return [ translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.TENANTS], client_plugin=self.client_plugin('keystone'), finder='get_project_id' ) ] def handle_create(self): args = dict(self.properties) if not args['flavorid']: args['flavorid'] = 'auto' if not args['name']: args['name'] = self.physical_resource_name() flavor_keys = args.pop(self.EXTRA_SPECS) tenants = args.pop(self.TENANTS) flavor = self.client().flavors.create(**args) self.resource_id_set(flavor.id) if flavor_keys: flavor.set_keys(flavor_keys) if not self.properties[self.IS_PUBLIC]: if not tenants: LOG.info('Tenant property is recommended ' 'for the private flavors.') tenant = self.stack.context.tenant_id self.client().flavor_access.add_tenant_access(flavor, tenant) else: for tenant in tenants: # grant access only to the active project(private flavor) self.client().flavor_access.add_tenant_access(flavor, tenant) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Update nova flavor.""" if self.EXTRA_SPECS in prop_diff: flavor = self.client().flavors.get(self.resource_id) old_keys = flavor.get_keys() flavor.unset_keys(old_keys) new_keys = prop_diff.get(self.EXTRA_SPECS) if new_keys is not None: flavor.set_keys(new_keys) """Update tenant access list.""" if self.TENANTS in prop_diff and not self.properties[self.IS_PUBLIC]: kwargs = {'flavor': self.resource_id} old_tenants = [ x.tenant_id for x in self.client().flavor_access.list(**kwargs) ] or [] new_tenants = prop_diff.get(self.TENANTS) or [] tenants_to_remove = list(set(old_tenants) - set(new_tenants)) tenants_to_add = list(set(new_tenants) - set(old_tenants)) if tenants_to_remove or tenants_to_add: flavor = self.client().flavors.get(self.resource_id) for _tenant in tenants_to_remove: self.client().flavor_access.remove_tenant_access(flavor, _tenant) for _tenant in tenants_to_add: self.client().flavor_access.add_tenant_access(flavor, _tenant) def _resolve_attribute(self, name): if self.resource_id is None: return flavor = self.client().flavors.get(self.resource_id) if name == self.IS_PUBLIC_ATTR: return getattr(flavor, name) if name == self.EXTRA_SPECS_ATTR: return flavor.get_keys() def get_live_resource_data(self): try: flavor = self.client().flavors.get(self.resource_id) resource_data = {self.EXTRA_SPECS: flavor.get_keys()} except Exception as ex: if self.client_plugin().is_not_found(ex): raise exception.EntityNotFound(entity='Resource', name=self.name) raise return resource_data def parse_live_resource_data(self, resource_properties, resource_data): return resource_data def resource_mapping(): return { 'OS::Nova::Flavor': NovaFlavor } heat-10.0.2/heat/engine/resources/openstack/nova/floatingip.py0000666000175000017500000001542113343562351024407 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support LOG = logging.getLogger(__name__) class NovaFloatingIp(resource.Resource): """A resource for managing Nova floating IPs. Floating IP addresses can change their association between instances by action of the user. One of the most common use cases for floating IPs is to provide public IP addresses to a private cloud, where there are a limited number of IP addresses available. Another is for a public cloud user to have a "static" IP address that can be reassigned when an instance is upgraded or moved. """ deprecation_msg = _('Please use OS::Neutron::FloatingIP instead.') support_status = support.SupportStatus( status=support.DEPRECATED, message=deprecation_msg, version='9.0.0', previous_status=support.SupportStatus(version='2014.1') ) required_service_extension = 'os-floating-ips' PROPERTIES = (POOL,) = ('pool',) ATTRIBUTES = ( POOL_ATTR, IP, ) = ( 'pool', 'ip', ) properties_schema = { POOL: properties.Schema( properties.Schema.STRING, description=_('Allocate a floating IP from a given ' 'floating IP pool. Now that nova-network ' 'is not supported this represents the ' 'external network.') ), } attributes_schema = { POOL_ATTR: attributes.Schema( _('Pool from which floating IP is allocated.'), type=attributes.Schema.STRING ), IP: attributes.Schema( _('Allocated floating IP address.'), type=attributes.Schema.STRING ), } def __init__(self, name, json_snippet, stack): super(NovaFloatingIp, self).__init__(name, json_snippet, stack) self._floating_ip = None def _get_resource(self): if self._floating_ip is None and self.resource_id is not None: self._floating_ip = self.neutron().show_floatingip( self.resource_id) return self._floating_ip def get_external_network_id(self, pool=None): if pool: return self.client_plugin( 'neutron').find_resourceid_by_name_or_id('network', pool) ext_filter = {'router:external': True} ext_nets = self.neutron().list_networks(**ext_filter)['networks'] if len(ext_nets) != 1: raise exception.Error( _('Expected 1 external network, found %d') % len(ext_nets)) external_network_id = ext_nets[0]['id'] return external_network_id def handle_create(self): ext_net_id = self.get_external_network_id( pool=self.properties[self.POOL]) floating_ip = self.neutron().create_floatingip( {'floatingip': {'floating_network_id': ext_net_id}}) self.resource_id_set(floating_ip['floatingip']['id']) self._floating_ip = floating_ip def handle_delete(self): with self.client_plugin('neutron').ignore_not_found: self.neutron().delete_floatingip(self.resource_id) def _resolve_attribute(self, key): if self.resource_id is None: return floating_ip = self._get_resource() attributes = { self.POOL_ATTR: floating_ip['floatingip']['floating_network_id'], self.IP: floating_ip['floatingip']['floating_ip_address'] } return six.text_type(attributes[key]) class NovaFloatingIpAssociation(resource.Resource): """A resource associates Nova 
floating IP with a Nova server resource. Resource for associating an existing Nova floating IP with a Nova server. """ deprecation_msg = _( 'Please use OS::Neutron::FloatingIPAssociation instead.') support_status = support.SupportStatus( status=support.DEPRECATED, message=deprecation_msg, version='9.0.0', previous_status=support.SupportStatus(version='2014.1') ) PROPERTIES = ( SERVER, FLOATING_IP ) = ( 'server_id', 'floating_ip' ) properties_schema = { SERVER: properties.Schema( properties.Schema.STRING, _('Server to assign floating IP to.'), required=True, update_allowed=True, constraints=[ constraints.CustomConstraint('nova.server') ] ), FLOATING_IP: properties.Schema( properties.Schema.STRING, _('ID of the floating IP to assign to the server.'), required=True, update_allowed=True ), } default_client_name = 'nova' def get_reference_id(self): return self.physical_resource_name_or_FnGetRefId() def handle_create(self): self.client_plugin().associate_floatingip( self.properties[self.SERVER], self.properties[self.FLOATING_IP]) self.resource_id_set(self.id) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client_plugin().dissociate_floatingip( self.properties[self.FLOATING_IP]) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: # If floating_ip is in prop_diff, we need to remove the old # floating IP from the old server, and then add the new floating # IP to the old (or new, if server_id changed) server. if self.FLOATING_IP in prop_diff: self.handle_delete() server_id = (prop_diff.get(self.SERVER) or self.properties[self.SERVER]) fl_ip_id = (prop_diff.get(self.FLOATING_IP) or self.properties[self.FLOATING_IP]) self.client_plugin().associate_floatingip(server_id, fl_ip_id) self.resource_id_set(self.id) def resource_mapping(): return { 'OS::Nova::FloatingIP': NovaFloatingIp, 'OS::Nova::FloatingIPAssociation': NovaFloatingIpAssociation, } heat-10.0.2/heat/engine/resources/openstack/nova/keypair.py0000666000175000017500000001647313343562351023727 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class KeyPair(resource.Resource): """A resource for creating Nova key pairs. A keypair is an SSH key that can be injected into a server on launch. **Note** that if a new key is generated, setting `save_private_key` to `True` results in the system saving the private key, which can then be retrieved via the `private_key` attribute of this resource. Setting the `public_key` property means that the `private_key` attribute of this resource will always return an empty string regardless of the `save_private_key` setting, since there will be no private key data to save. 
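For example, omitting `public_key` and setting `save_private_key` to `True` causes Nova to generate a new key pair whose private key can later be read back with `{get_attr: [my_key, private_key]}` in a template (resource name hypothetical).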
""" support_status = support.SupportStatus(version='2014.1') required_service_extension = 'os-keypairs' PROPERTIES = ( NAME, SAVE_PRIVATE_KEY, PUBLIC_KEY, KEY_TYPE, USER, ) = ( 'name', 'save_private_key', 'public_key', 'type', 'user', ) ATTRIBUTES = ( PUBLIC_KEY_ATTR, PRIVATE_KEY_ATTR, ) = ( 'public_key', 'private_key', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('The name of the key pair.'), required=True, constraints=[ constraints.Length(min=1, max=255) ] ), SAVE_PRIVATE_KEY: properties.Schema( properties.Schema.BOOLEAN, _('True if the system should remember a generated private key; ' 'False otherwise.'), default=False ), PUBLIC_KEY: properties.Schema( properties.Schema.STRING, _('The optional public key. This allows users to supply the ' 'public key from a pre-existing key pair. If not supplied, a ' 'new key pair will be generated.') ), KEY_TYPE: properties.Schema( properties.Schema.STRING, _('Keypair type. Supported since Nova api version 2.2.'), constraints=[ constraints.AllowedValues(['ssh', 'x509'])], support_status=support.SupportStatus(version='8.0.0') ), USER: properties.Schema( properties.Schema.STRING, _('ID or name of user to whom to add key-pair. The usage of this ' 'property is limited to being used by administrators only. ' 'Supported since Nova api version 2.10.'), constraints=[constraints.CustomConstraint('keystone.user')], support_status=support.SupportStatus(version='9.0.0') ), } attributes_schema = { PUBLIC_KEY_ATTR: attributes.Schema( _('The public key.'), type=attributes.Schema.STRING ), PRIVATE_KEY_ATTR: attributes.Schema( _('The private key if it has been saved.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.STRING ), } default_client_name = 'nova' entity = 'keypairs' def __init__(self, name, json_snippet, stack): super(KeyPair, self).__init__(name, json_snippet, stack) self._public_key = None def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.USER], client_plugin=self.client_plugin('keystone'), finder='get_user_id' ) ] @property def private_key(self): """Return the private SSH key for the resource.""" if self.properties[self.SAVE_PRIVATE_KEY]: return self.data().get('private_key', '') else: return '' @property def public_key(self): """Return the public SSH key for the resource.""" if not self._public_key: if self.properties[self.PUBLIC_KEY]: self._public_key = self.properties[self.PUBLIC_KEY] elif self.resource_id: nova_key = self.client_plugin().get_keypair(self.resource_id) self._public_key = nova_key.public_key return self._public_key def validate(self): super(KeyPair, self).validate() # Check if key_type is allowed to use key_type = self.properties[self.KEY_TYPE] user = self.properties[self.USER] nc_version = None validate_props = [] if key_type: nc_version = self.client_plugin().V2_2 validate_props.append(self.KEY_TYPE) if user: nc_version = self.client_plugin().V2_10 validate_props.append(self.USER) if nc_version: try: self.client(version=nc_version) except exception.InvalidServiceVersion as ex: msg = (_('Cannot use "%(prop)s" properties - nova does not ' 'support: %(error)s') % {'error': six.text_type(ex), 'prop': validate_props}) raise exception.StackValidationFailed(message=msg) def handle_create(self): pub_key = self.properties[self.PUBLIC_KEY] or None user_id = self.properties[self.USER] key_type = self.properties[self.KEY_TYPE] create_kwargs = { 'name': self.properties[self.NAME], 'public_key': pub_key } nc_version = None if 
key_type: nc_version = self.client_plugin().V2_2 create_kwargs[self.KEY_TYPE] = key_type if user_id: nc_version = self.client_plugin().V2_10 create_kwargs['user_id'] = user_id nc = self.client(version=nc_version) new_keypair = nc.keypairs.create(**create_kwargs) if (self.properties[self.SAVE_PRIVATE_KEY] and hasattr(new_keypair, 'private_key')): self.data_set('private_key', new_keypair.private_key, True) self.resource_id_set(new_keypair.id) def handle_check(self): self.client().keypairs.get(self.resource_id) def _resolve_attribute(self, key): attr_fn = {self.PRIVATE_KEY_ATTR: self.private_key, self.PUBLIC_KEY_ATTR: self.public_key} return six.text_type(attr_fn[key]) def get_reference_id(self): return self.resource_id def prepare_for_replace(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client().keypairs.delete(self.resource_id) def resource_mapping(): return {'OS::Nova::KeyPair': KeyPair} heat-10.0.2/heat/engine/resources/openstack/nova/__init__.py0000666000175000017500000000000013343562340023773 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/nova/server.py0000666000175000017500000022423713343562351023570 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_config import cfg from oslo_log import log as logging from oslo_utils import uuidutils import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine.clients import progress from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import port as neutron_port from heat.engine.resources.openstack.neutron import subnet from heat.engine.resources.openstack.nova import server_network_mixin from heat.engine.resources import scheduler_hints as sh from heat.engine.resources import server_base from heat.engine import support from heat.engine import translation from heat.rpc import api as rpc_api cfg.CONF.import_opt('default_software_config_transport', 'heat.common.config') cfg.CONF.import_opt('default_user_data_format', 'heat.common.config') LOG = logging.getLogger(__name__) class Server(server_base.BaseServer, sh.SchedulerHintsMixin, server_network_mixin.ServerNetworkMixin): """A resource for managing Nova instances. A Server resource manages the running virtual machine instance within an OpenStack cloud. 
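In practice, a minimal server definition needs only the `flavor` property plus a boot source: either `image` or a bootable entry in one of the block device mapping properties; the remaining properties refine networking, scheduling and boot-time configuration.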
""" PROPERTIES = ( NAME, IMAGE, BLOCK_DEVICE_MAPPING, BLOCK_DEVICE_MAPPING_V2, FLAVOR, FLAVOR_UPDATE_POLICY, IMAGE_UPDATE_POLICY, KEY_NAME, ADMIN_USER, AVAILABILITY_ZONE, SECURITY_GROUPS, NETWORKS, SCHEDULER_HINTS, METADATA, USER_DATA_FORMAT, USER_DATA, RESERVATION_ID, CONFIG_DRIVE, DISK_CONFIG, PERSONALITY, ADMIN_PASS, SOFTWARE_CONFIG_TRANSPORT, USER_DATA_UPDATE_POLICY, TAGS, DEPLOYMENT_SWIFT_DATA ) = ( 'name', 'image', 'block_device_mapping', 'block_device_mapping_v2', 'flavor', 'flavor_update_policy', 'image_update_policy', 'key_name', 'admin_user', 'availability_zone', 'security_groups', 'networks', 'scheduler_hints', 'metadata', 'user_data_format', 'user_data', 'reservation_id', 'config_drive', 'diskConfig', 'personality', 'admin_pass', 'software_config_transport', 'user_data_update_policy', 'tags', 'deployment_swift_data' ) _BLOCK_DEVICE_MAPPING_KEYS = ( BLOCK_DEVICE_MAPPING_DEVICE_NAME, BLOCK_DEVICE_MAPPING_VOLUME_ID, BLOCK_DEVICE_MAPPING_SNAPSHOT_ID, BLOCK_DEVICE_MAPPING_VOLUME_SIZE, BLOCK_DEVICE_MAPPING_DELETE_ON_TERM, ) = ( 'device_name', 'volume_id', 'snapshot_id', 'volume_size', 'delete_on_termination', ) _BLOCK_DEVICE_MAPPING_V2_KEYS = ( BLOCK_DEVICE_MAPPING_DEVICE_NAME, BLOCK_DEVICE_MAPPING_VOLUME_ID, BLOCK_DEVICE_MAPPING_IMAGE_ID, BLOCK_DEVICE_MAPPING_IMAGE, BLOCK_DEVICE_MAPPING_SNAPSHOT_ID, BLOCK_DEVICE_MAPPING_SWAP_SIZE, BLOCK_DEVICE_MAPPING_DEVICE_TYPE, BLOCK_DEVICE_MAPPING_DISK_BUS, BLOCK_DEVICE_MAPPING_BOOT_INDEX, BLOCK_DEVICE_MAPPING_VOLUME_SIZE, BLOCK_DEVICE_MAPPING_DELETE_ON_TERM, BLOCK_DEVICE_MAPPING_EPHEMERAL_SIZE, BLOCK_DEVICE_MAPPING_EPHEMERAL_FORMAT, ) = ( 'device_name', 'volume_id', 'image_id', 'image', 'snapshot_id', 'swap_size', 'device_type', 'disk_bus', 'boot_index', 'volume_size', 'delete_on_termination', 'ephemeral_size', 'ephemeral_format' ) _NETWORK_KEYS = ( NETWORK_UUID, NETWORK_ID, NETWORK_FIXED_IP, NETWORK_PORT, NETWORK_SUBNET, NETWORK_PORT_EXTRA, NETWORK_FLOATING_IP, ALLOCATE_NETWORK, NIC_TAG, ) = ( 'uuid', 'network', 'fixed_ip', 'port', 'subnet', 'port_extra_properties', 'floating_ip', 'allocate_network', 'tag', ) _IFACE_MANAGED_KEYS = (NETWORK_PORT, NETWORK_ID, NETWORK_FIXED_IP, NETWORK_SUBNET) _SOFTWARE_CONFIG_FORMATS = ( HEAT_CFNTOOLS, RAW, SOFTWARE_CONFIG ) = ( 'HEAT_CFNTOOLS', 'RAW', 'SOFTWARE_CONFIG' ) _SOFTWARE_CONFIG_TRANSPORTS = ( POLL_SERVER_CFN, POLL_SERVER_HEAT, POLL_TEMP_URL, ZAQAR_MESSAGE ) = ( 'POLL_SERVER_CFN', 'POLL_SERVER_HEAT', 'POLL_TEMP_URL', 'ZAQAR_MESSAGE' ) _ALLOCATE_TYPES = ( NETWORK_NONE, NETWORK_AUTO, ) = ( 'none', 'auto', ) _DEPLOYMENT_SWIFT_DATA_KEYS = ( CONTAINER, OBJECT ) = ( 'container', 'object', ) ATTRIBUTES = ( NAME_ATTR, ADDRESSES, NETWORKS_ATTR, FIRST_ADDRESS, INSTANCE_NAME, ACCESSIPV4, ACCESSIPV6, CONSOLE_URLS, TAGS_ATTR, OS_COLLECT_CONFIG ) = ( 'name', 'addresses', 'networks', 'first_address', 'instance_name', 'accessIPv4', 'accessIPv6', 'console_urls', 'tags', 'os_collect_config' ) # Image Statuses IMAGE_STATUSES = (IMAGE_ACTIVE, IMAGE_ERROR, IMAGE_DELETED) = ('active', 'error', 'deleted') properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Server name.'), update_allowed=True ), IMAGE: properties.Schema( properties.Schema.STRING, _('The ID or name of the image to boot with.'), constraints=[ constraints.CustomConstraint('glance.image') ], update_allowed=True ), BLOCK_DEVICE_MAPPING: properties.Schema( properties.Schema.LIST, _('Block device mappings for this server.'), schema=properties.Schema( properties.Schema.MAP, schema={ BLOCK_DEVICE_MAPPING_DEVICE_NAME: properties.Schema( 
properties.Schema.STRING, _('A device name where the volume will be ' 'attached in the system at /dev/device_name. ' 'This value is typically vda.'), required=True ), BLOCK_DEVICE_MAPPING_VOLUME_ID: properties.Schema( properties.Schema.STRING, _('The ID of the volume to boot from. Only one ' 'of volume_id or snapshot_id should be ' 'provided.'), constraints=[ constraints.CustomConstraint('cinder.volume') ] ), BLOCK_DEVICE_MAPPING_SNAPSHOT_ID: properties.Schema( properties.Schema.STRING, _('The ID of the snapshot to create a volume ' 'from.'), constraints=[ constraints.CustomConstraint('cinder.snapshot') ] ), BLOCK_DEVICE_MAPPING_VOLUME_SIZE: properties.Schema( properties.Schema.INTEGER, _('The size of the volume, in GB. It is safe to ' 'leave this blank and have the Compute service ' 'infer the size.') ), BLOCK_DEVICE_MAPPING_DELETE_ON_TERM: properties.Schema( properties.Schema.BOOLEAN, _('Indicate whether the volume should be deleted ' 'when the server is terminated.') ), }, ) ), BLOCK_DEVICE_MAPPING_V2: properties.Schema( properties.Schema.LIST, _('Block device mappings v2 for this server.'), schema=properties.Schema( properties.Schema.MAP, schema={ BLOCK_DEVICE_MAPPING_DEVICE_NAME: properties.Schema( properties.Schema.STRING, _('A device name where the volume will be ' 'attached in the system at /dev/device_name. ' 'This value is typically vda.'), ), BLOCK_DEVICE_MAPPING_VOLUME_ID: properties.Schema( properties.Schema.STRING, _('The volume_id can be boot or non-boot device ' 'to the server.'), constraints=[ constraints.CustomConstraint('cinder.volume') ] ), BLOCK_DEVICE_MAPPING_IMAGE_ID: properties.Schema( properties.Schema.STRING, _('The ID of the image to create a volume from.'), support_status=support.SupportStatus( status=support.HIDDEN, version='9.0.0', message=_('Use property %s.') % BLOCK_DEVICE_MAPPING_IMAGE, previous_status=support.SupportStatus( status=support.DEPRECATED, version='7.0.0', previous_status=support.SupportStatus( version='5.0.0') ) ), constraints=[ constraints.CustomConstraint('glance.image') ], ), BLOCK_DEVICE_MAPPING_IMAGE: properties.Schema( properties.Schema.STRING, _('The ID or name of the image ' 'to create a volume from.'), support_status=support.SupportStatus(version='7.0.0'), constraints=[ constraints.CustomConstraint('glance.image') ], ), BLOCK_DEVICE_MAPPING_SNAPSHOT_ID: properties.Schema( properties.Schema.STRING, _('The ID of the snapshot to create a volume ' 'from.'), constraints=[ constraints.CustomConstraint('cinder.snapshot') ] ), BLOCK_DEVICE_MAPPING_SWAP_SIZE: properties.Schema( properties.Schema.INTEGER, _('The size of the swap, in MB.') ), BLOCK_DEVICE_MAPPING_EPHEMERAL_SIZE: properties.Schema( properties.Schema.INTEGER, _('The size of the local ephemeral block device, ' 'in GB.'), support_status=support.SupportStatus(version='8.0.0'), constraints=[constraints.Range(min=1)] ), BLOCK_DEVICE_MAPPING_EPHEMERAL_FORMAT: properties.Schema( properties.Schema.STRING, _('The format of the local ephemeral block device. 
' 'If no format is specified, uses default value, ' 'defined in nova configuration file.'), constraints=[ constraints.AllowedValues(['ext2', 'ext3', 'ext4', 'xfs', 'ntfs']) ], support_status=support.SupportStatus(version='8.0.0') ), BLOCK_DEVICE_MAPPING_DEVICE_TYPE: properties.Schema( properties.Schema.STRING, _('Device type: at the moment we can make distinction ' 'only between disk and cdrom.'), constraints=[ constraints.AllowedValues(['cdrom', 'disk']), ], ), BLOCK_DEVICE_MAPPING_DISK_BUS: properties.Schema( properties.Schema.STRING, _('Bus of the device: hypervisor driver chooses a ' 'suitable default if omitted.'), constraints=[ constraints.AllowedValues(['ide', 'lame_bus', 'scsi', 'usb', 'virtio']), ], ), BLOCK_DEVICE_MAPPING_BOOT_INDEX: properties.Schema( properties.Schema.INTEGER, _('Integer used for ordering the boot disks. If ' 'it is not specified, value "0" will be set ' 'for bootable sources (volume, snapshot, image); ' 'value "-1" will be set for non-bootable sources.'), ), BLOCK_DEVICE_MAPPING_VOLUME_SIZE: properties.Schema( properties.Schema.INTEGER, _('Size of the block device in GB. If it is omitted, ' 'hypervisor driver calculates size.'), ), BLOCK_DEVICE_MAPPING_DELETE_ON_TERM: properties.Schema( properties.Schema.BOOLEAN, _('Indicate whether the volume should be deleted ' 'when the server is terminated.') ), }, ), support_status=support.SupportStatus(version='2015.1') ), FLAVOR: properties.Schema( properties.Schema.STRING, _('The ID or name of the flavor to boot onto.'), required=True, update_allowed=True, constraints=[ constraints.CustomConstraint('nova.flavor') ] ), FLAVOR_UPDATE_POLICY: properties.Schema( properties.Schema.STRING, _('Policy on how to apply a flavor update; either by requesting ' 'a server resize or by replacing the entire server.'), default='RESIZE', constraints=[ constraints.AllowedValues(['RESIZE', 'REPLACE']), ], update_allowed=True ), IMAGE_UPDATE_POLICY: properties.Schema( properties.Schema.STRING, _('Policy on how to apply an image-id update; either by ' 'requesting a server rebuild or by replacing ' 'the entire server.'), default='REBUILD', constraints=[ constraints.AllowedValues(['REBUILD', 'REPLACE', 'REBUILD_PRESERVE_EPHEMERAL']), ], update_allowed=True ), KEY_NAME: properties.Schema( properties.Schema.STRING, _('Name of keypair to inject into the server.'), constraints=[ constraints.CustomConstraint('nova.keypair') ] ), ADMIN_USER: properties.Schema( properties.Schema.STRING, _('Name of the administrative user to use on the server.'), support_status=support.SupportStatus( status=support.HIDDEN, version='5.0.0', message=_('The default cloud-init user set up for each image ' '(e.g. "ubuntu" for Ubuntu 12.04+, "fedora" for ' 'Fedora 19+ and "cloud-user" for CentOS/RHEL 6.5).'), previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.1', previous_status=support.SupportStatus(version='2013.2') ) ) ), AVAILABILITY_ZONE: properties.Schema( properties.Schema.STRING, _('Name of the availability zone for server placement.') ), SECURITY_GROUPS: properties.Schema( properties.Schema.LIST, _('List of security group names or IDs. 
Cannot be used if ' 'neutron ports are associated with this server; assign ' 'security groups to the ports instead.'), default=[] ), NETWORKS: properties.Schema( properties.Schema.LIST, _('An ordered list of nics to be added to this server, with ' 'information about connected networks, fixed ips, port etc.'), schema=properties.Schema( properties.Schema.MAP, schema={ NETWORK_UUID: properties.Schema( properties.Schema.STRING, _('ID of network to create a port on.'), support_status=support.SupportStatus( status=support.HIDDEN, version='5.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, message=_('Use property %s.') % NETWORK_ID, version='2014.1' ) ), constraints=[ constraints.CustomConstraint('neutron.network') ] ), NETWORK_ID: properties.Schema( properties.Schema.STRING, _('Name or ID of network to create a port on.'), constraints=[ constraints.CustomConstraint('neutron.network') ] ), ALLOCATE_NETWORK: properties.Schema( properties.Schema.STRING, _('The special string values of network, ' 'auto: means either a network that is already ' 'available to the project will be used, or if one ' 'does not exist, will be automatically created for ' 'the project; none: means no networking will be ' 'allocated for the created server. Supported by ' 'Nova API since version "2.37". This property can ' 'not be used with other network keys.'), support_status=support.SupportStatus(version='9.0.0'), constraints=[ constraints.AllowedValues( [NETWORK_NONE, NETWORK_AUTO]) ], update_allowed=True, ), NETWORK_FIXED_IP: properties.Schema( properties.Schema.STRING, _('Fixed IP address to specify for the port ' 'created on the requested network.'), constraints=[ constraints.CustomConstraint('ip_addr') ] ), NETWORK_PORT: properties.Schema( properties.Schema.STRING, _('ID of an existing port to associate with this ' 'server.'), constraints=[ constraints.CustomConstraint('neutron.port') ] ), NETWORK_PORT_EXTRA: properties.Schema( properties.Schema.MAP, _('Dict, which has expand properties for port. ' 'Used only if port property is not specified ' 'for creating port.'), schema=neutron_port.Port.extra_properties_schema, support_status=support.SupportStatus(version='6.0.0') ), NETWORK_SUBNET: properties.Schema( properties.Schema.STRING, _('Subnet in which to allocate the IP address for ' 'port. Used for creating port, based on derived ' 'properties. If subnet is specified, network ' 'property becomes optional.'), support_status=support.SupportStatus(version='5.0.0') ), NETWORK_FLOATING_IP: properties.Schema( properties.Schema.STRING, _('ID of the floating IP to associate.'), support_status=support.SupportStatus(version='6.0.0') ), NIC_TAG: properties.Schema( properties.Schema.STRING, _('Port tag. Heat ignores any update on this property ' 'as nova does not support it.'), support_status=support.SupportStatus(version='9.0.0') ) }, ), update_allowed=True ), SCHEDULER_HINTS: properties.Schema( properties.Schema.MAP, _('Arbitrary key-value pairs specified by the client to help ' 'boot a server.') ), METADATA: properties.Schema( properties.Schema.MAP, _('Arbitrary key/value metadata to store for this server. Both ' 'keys and values must be 255 characters or less. Non-string ' 'values will be serialized to JSON (and the serialized ' 'string must be 255 characters or less).'), update_allowed=True, default={} ), USER_DATA_FORMAT: properties.Schema( properties.Schema.STRING, _('How the user_data should be formatted for the server. 
For ' 'HEAT_CFNTOOLS, the user_data is bundled as part of the ' 'heat-cfntools cloud-init boot configuration data. For RAW ' 'the user_data is passed to Nova unmodified. ' 'For SOFTWARE_CONFIG user_data is bundled as part of the ' 'software config data, and metadata is derived from any ' 'associated SoftwareDeployment resources.'), default=cfg.CONF.default_user_data_format, constraints=[ constraints.AllowedValues(_SOFTWARE_CONFIG_FORMATS), ] ), SOFTWARE_CONFIG_TRANSPORT: properties.Schema( properties.Schema.STRING, _('How the server should receive the metadata required for ' 'software configuration. POLL_SERVER_CFN will allow calls to ' 'the cfn API action DescribeStackResource authenticated with ' 'the provided keypair. POLL_SERVER_HEAT will allow calls to ' 'the Heat API resource-show using the provided keystone ' 'credentials. POLL_TEMP_URL will create and populate a ' 'Swift TempURL with metadata for polling. ZAQAR_MESSAGE will ' 'create a dedicated zaqar queue and post the metadata ' 'for polling.'), default=cfg.CONF.default_software_config_transport, update_allowed=True, constraints=[ constraints.AllowedValues(_SOFTWARE_CONFIG_TRANSPORTS), ] ), USER_DATA_UPDATE_POLICY: properties.Schema( properties.Schema.STRING, _('Policy on how to apply a user_data update; either by ' 'ignoring it or by replacing the entire server.'), default='REPLACE', constraints=[ constraints.AllowedValues(['REPLACE', 'IGNORE']), ], support_status=support.SupportStatus(version='6.0.0'), update_allowed=True ), USER_DATA: properties.Schema( properties.Schema.STRING, _('User data script to be executed by cloud-init. Changes cause ' 'replacement of the resource by default, but can be ignored ' 'altogether by setting the `user_data_update_policy` property.'), default='', update_allowed=True ), RESERVATION_ID: properties.Schema( properties.Schema.STRING, _('A UUID for the set of servers being requested.') ), CONFIG_DRIVE: properties.Schema( properties.Schema.BOOLEAN, _('If True, enable config drive on the server.') ), DISK_CONFIG: properties.Schema( properties.Schema.STRING, _('Control how the disk is partitioned when the server is ' 'created.'), constraints=[ constraints.AllowedValues(['AUTO', 'MANUAL']), ] ), PERSONALITY: properties.Schema( properties.Schema.MAP, _('A map of files to create/overwrite on the server upon boot. ' 'Keys are file names and values are the file contents.'), default={} ), ADMIN_PASS: properties.Schema( properties.Schema.STRING, _('The administrator password for the server.'), update_allowed=True ), TAGS: properties.Schema( properties.Schema.LIST, _('Server tags. Supported since client version 2.26.'), support_status=support.SupportStatus(version='8.0.0'), schema=properties.Schema(properties.Schema.STRING), update_allowed=True ), DEPLOYMENT_SWIFT_DATA: properties.Schema( properties.Schema.MAP, _('Swift container and object to use for storing deployment data ' 'for the server resource. The parameter is a map value ' 'with the keys "container" and "object", and the values ' 'are the corresponding container and object names. The ' 'software_config_transport parameter must be set to ' 'POLL_TEMP_URL for swift to be used. 
If not specified, ' 'and software_config_transport is set to POLL_TEMP_URL, a ' 'container will be automatically created from the resource ' 'name, and the object name will be a generated uuid.'), support_status=support.SupportStatus(version='9.0.0'), default={}, update_allowed=True, schema={ CONTAINER: properties.Schema( properties.Schema.STRING, _('Name of the container.'), constraints=[ constraints.Length(min=1) ] ), OBJECT: properties.Schema( properties.Schema.STRING, _('Name of the object.'), constraints=[ constraints.Length(min=1) ] ) } ) } attributes_schema = { NAME_ATTR: attributes.Schema( _('Name of the server.'), type=attributes.Schema.STRING ), ADDRESSES: attributes.Schema( _('A dict of all network addresses with corresponding port_id. ' 'Each network will have two keys in dict, they are network ' 'name and network id. ' 'The port ID may be obtained through the following expression: ' '"{get_attr: [, addresses, , 0, ' 'port]}".'), type=attributes.Schema.MAP ), NETWORKS_ATTR: attributes.Schema( _('A dict of assigned network addresses of the form: ' '{"public": [ip1, ip2...], "private": [ip3, ip4], ' '"public_uuid": [ip1, ip2...], "private_uuid": [ip3, ip4]}. ' 'Each network will have two keys in dict, they are network ' 'name and network id.'), type=attributes.Schema.MAP ), FIRST_ADDRESS: attributes.Schema( _('Convenience attribute to fetch the first assigned network ' 'address, or an empty string if nothing has been assigned at ' 'this time. Result may not be predictable if the server has ' 'addresses from more than one network.'), support_status=support.SupportStatus( status=support.HIDDEN, version='5.0.0', message=_('Use the networks attribute instead of ' 'first_address. For example: "{get_attr: ' '[, networks, , 0]}"'), previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2', previous_status=support.SupportStatus(version='2013.2') ) ) ), INSTANCE_NAME: attributes.Schema( _('AWS compatible instance name.'), type=attributes.Schema.STRING ), ACCESSIPV4: attributes.Schema( _('The manually assigned alternative public IPv4 address ' 'of the server.'), type=attributes.Schema.STRING ), ACCESSIPV6: attributes.Schema( _('The manually assigned alternative public IPv6 address ' 'of the server.'), type=attributes.Schema.STRING ), CONSOLE_URLS: attributes.Schema( _("URLs of server's consoles. " "To get a specific console type, the requested type " "can be specified as parameter to the get_attr function, " "e.g. get_attr: [ , console_urls, novnc ]. " "Currently supported types are " "novnc, xvpvnc, spice-html5, rdp-html5, serial and webmks."), support_status=support.SupportStatus(version='2015.1'), type=attributes.Schema.MAP ), TAGS_ATTR: attributes.Schema( _('Tags from the server. 
Supported since client version 2.26.'), support_status=support.SupportStatus(version='8.0.0'), type=attributes.Schema.LIST ), OS_COLLECT_CONFIG: attributes.Schema( _('The os-collect-config configuration for the server\'s local ' 'agent to be configured to connect to Heat to retrieve ' 'deployment data.'), support_status=support.SupportStatus(version='9.0.0'), type=attributes.Schema.MAP, cache_mode=attributes.Schema.CACHE_NONE ), } default_client_name = 'nova' def translation_rules(self, props): rules = [ translation.TranslationRule( props, translation.TranslationRule.REPLACE, translation_path=[self.NETWORKS, self.NETWORK_ID], value_name=self.NETWORK_UUID), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.FLAVOR], client_plugin=self.client_plugin('nova'), finder='find_flavor_by_name_or_id'), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.IMAGE], client_plugin=self.client_plugin('glance'), finder='find_image_by_name_or_id'), translation.TranslationRule( props, translation.TranslationRule.REPLACE, translation_path=[self.BLOCK_DEVICE_MAPPING_V2, self.BLOCK_DEVICE_MAPPING_IMAGE], value_name=self.BLOCK_DEVICE_MAPPING_IMAGE_ID), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.BLOCK_DEVICE_MAPPING_V2, self.BLOCK_DEVICE_MAPPING_IMAGE], client_plugin=self.client_plugin('glance'), finder='find_image_by_name_or_id'), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.NETWORKS, self.NETWORK_ID], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='network'), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.NETWORKS, self.NETWORK_SUBNET], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='subnet'), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.NETWORKS, self.NETWORK_PORT], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='port')] return rules def __init__(self, name, json_snippet, stack): super(Server, self).__init__(name, json_snippet, stack) if self.user_data_software_config(): self._register_access_key() self.default_collectors = ['ec2'] def _config_drive(self): # This method is overridden by the derived CloudServer resource return self.properties[self.CONFIG_DRIVE] def user_data_raw(self): return self.properties[self.USER_DATA_FORMAT] == self.RAW def user_data_software_config(self): return self.properties[ self.USER_DATA_FORMAT] == self.SOFTWARE_CONFIG def get_software_config(self, ud_content): with self.rpc_client().ignore_error_by_name('NotFound'): sc = self.rpc_client().show_software_config( self.context, ud_content) return sc[rpc_api.SOFTWARE_CONFIG_CONFIG] return ud_content def handle_create(self): security_groups = self.properties[self.SECURITY_GROUPS] user_data_format = self.properties[self.USER_DATA_FORMAT] ud_content = self.properties[self.USER_DATA] if self.user_data_software_config() or self.user_data_raw(): if uuidutils.is_uuid_like(ud_content): # attempt to load the userdata from software config ud_content = self.get_software_config(ud_content) metadata = self.metadata_get(True) or {} if self.user_data_software_config(): self._create_transport_credentials(self.properties) self._populate_deployments_metadata(metadata, self.properties) userdata = self.client_plugin().build_userdata( 
metadata, ud_content, instance_user=None, user_data_format=user_data_format) availability_zone = self.properties[self.AVAILABILITY_ZONE] instance_meta = self.properties[self.METADATA] if instance_meta: instance_meta = self.client_plugin().meta_serialize( instance_meta) scheduler_hints = self._scheduler_hints( self.properties[self.SCHEDULER_HINTS]) nics = self._build_nics(self.properties[self.NETWORKS], security_groups=security_groups) block_device_mapping = self._build_block_device_mapping( self.properties[self.BLOCK_DEVICE_MAPPING]) block_device_mapping_v2 = self._build_block_device_mapping_v2( self.properties[self.BLOCK_DEVICE_MAPPING_V2]) reservation_id = self.properties[self.RESERVATION_ID] disk_config = self.properties[self.DISK_CONFIG] admin_pass = self.properties[self.ADMIN_PASS] or None personality_files = self.properties[self.PERSONALITY] key_name = self.properties[self.KEY_NAME] flavor = self.properties[self.FLAVOR] image = self.properties[self.IMAGE] server = None try: api_version = None # if 'auto' or 'none' is specified, we get the string type # nics after self._build_nics(), and the string network # is supported since nova microversion 2.37 if isinstance(nics, six.string_types): api_version = self.client_plugin().V2_37 if self._is_nic_tagged(self.properties[self.NETWORKS]): api_version = self.client_plugin().V2_42 nc = self.client(version=api_version) server = nc.servers.create( name=self._server_name(), image=image, flavor=flavor, key_name=key_name, security_groups=security_groups, userdata=userdata, meta=instance_meta, scheduler_hints=scheduler_hints, nics=nics, availability_zone=availability_zone, block_device_mapping=block_device_mapping, block_device_mapping_v2=block_device_mapping_v2, reservation_id=reservation_id, config_drive=self._config_drive(), disk_config=disk_config, files=personality_files, admin_pass=admin_pass) finally: # Avoid a race condition where the thread could be canceled # before the ID is stored if server is not None: self.resource_id_set(server.id) return server.id def check_create_complete(self, server_id): check = self.client_plugin()._check_active(server_id) if check: if self.properties[self.TAGS]: self._update_server_tags(self.properties[self.TAGS]) self.store_external_ports() return check def _update_server_tags(self, tags): server = self.client().servers.get(self.resource_id) self.client(version=self.client_plugin().V2_26 ).servers.set_tags(server, tags) def handle_check(self): server = self.client().servers.get(self.resource_id) status = self.client_plugin().get_status(server) checks = [{'attr': 'status', 'expected': 'ACTIVE', 'current': status}] self._verify_check_conditions(checks) def get_live_resource_data(self): try: server = self.client().servers.get(self.resource_id) server_data = server.to_dict() active = self.client_plugin()._check_active(server) if not active: # There is no difference what error raised, because update # method of resource just silently log it as warning. 
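                # Editor's note: as the comment above says, the exact
                # exception type is unimportant here -- the observe-reality
                # caller logs whatever is raised as a warning and skips the
                # live-state refresh rather than failing the stack.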
                raise exception.Error(_('Server %s is not '
                                        'in ACTIVE state') % self.name)
        except Exception as ex:
            if self.client_plugin().is_not_found(ex):
                raise exception.EntityNotFound(entity='Resource',
                                               name=self.name)
            raise

        try:
            tag_server = self.client(
                version=self.client_plugin().V2_26).servers.get(
                    self.resource_id)
        except Exception:
            LOG.warning('Cannot resolve tags for observe reality: the '
                        'client does not support the minimal microversion '
                        'required for server tags')
        else:
            server_data['tags'] = tag_server.tag_list()
        return server, server_data

    def parse_live_resource_data(self, resource_properties, resource_data):
        server, server_data = resource_data
        result = {
            # there's a risk that flavor id will be int type, so cast to str
            self.FLAVOR: six.text_type(server_data.get(self.FLAVOR)['id']),
            self.IMAGE: six.text_type(server_data.get(self.IMAGE)['id']),
            self.NAME: server_data.get(self.NAME),
            self.METADATA: server_data.get(self.METADATA),
            self.NETWORKS: self._get_live_networks(server,
                                                   resource_properties)
        }
        if 'tags' in server_data:
            result.update({self.TAGS: server_data['tags']})
        return result

    def _get_live_networks(self, server, props):
        reality_nets = self._add_port_for_address(server,
                                                  extend_networks=False)
        reality_net_ids = {}
        client_plugin = self.client_plugin('neutron')
        for net_key in reality_nets:
            try:
                net_id = client_plugin.find_resourceid_by_name_or_id(
                    'network', net_key)
            except Exception as ex:
                if (client_plugin.is_not_found(ex) or
                        client_plugin.is_no_unique(ex)):
                    net_id = None
                else:
                    raise
            if net_id:
                reality_net_ids[net_id] = reality_nets.get(net_key)

        resource_nets = props.get(self.NETWORKS)
        result_nets = []
        for net in resource_nets or []:
            net_id = self._get_network_id(net)
            if reality_net_ids.get(net_id):
                for idx, address in enumerate(reality_net_ids.get(net_id)):
                    if address['addr'] == net[self.NETWORK_FIXED_IP]:
                        result_nets.append(net)
                        reality_net_ids.get(net_id).pop(idx)
                        break

        for key, value in six.iteritems(reality_nets):
            for address in reality_nets[key]:
                new_net = {self.NETWORK_ID: key,
                           self.NETWORK_FIXED_IP: address['addr']}
                if address['port'] not in [port['id'] for port in
                                           self._data_get_ports()]:
                    new_net.update({self.NETWORK_PORT: address['port']})
                result_nets.append(new_net)
        return result_nets

    @classmethod
    def _build_block_device_mapping(cls, bdm):
        # Build the legacy (v1) mapping nova expects: a dict of device name
        # to the colon-separated string
        # '<volume_or_snapshot_id>:<snap|>:<volume_size>:<delete_on_term>'.
        if not bdm:
            return None
        bdm_dict = {}
        for mapping in bdm:
            mapping_parts = []
            snapshot_id = mapping.get(cls.BLOCK_DEVICE_MAPPING_SNAPSHOT_ID)
            if snapshot_id:
                mapping_parts.append(snapshot_id)
                mapping_parts.append('snap')
            else:
                volume_id = mapping.get(cls.BLOCK_DEVICE_MAPPING_VOLUME_ID)
                mapping_parts.append(volume_id)
                mapping_parts.append('')

            volume_size = mapping.get(cls.BLOCK_DEVICE_MAPPING_VOLUME_SIZE)
            delete = mapping.get(cls.BLOCK_DEVICE_MAPPING_DELETE_ON_TERM)
            if volume_size:
                mapping_parts.append(str(volume_size))
            else:
                mapping_parts.append('')
            if delete:
                mapping_parts.append(str(delete))

            device_name = mapping.get(cls.BLOCK_DEVICE_MAPPING_DEVICE_NAME)
            bdm_dict[device_name] = ':'.join(mapping_parts)

        return bdm_dict

    @classmethod
    def _build_block_device_mapping_v2(cls, bdm_v2):
        if not bdm_v2:
            return None
        bdm_v2_list = []
        for mapping in bdm_v2:
            bmd_dict = None
            if mapping.get(cls.BLOCK_DEVICE_MAPPING_VOLUME_ID):
                bmd_dict = {
                    'uuid': mapping.get(cls.BLOCK_DEVICE_MAPPING_VOLUME_ID),
                    'source_type': 'volume',
                    'destination_type': 'volume',
                    'boot_index': 0,
                    'delete_on_termination': False,
                }
            elif mapping.get(cls.BLOCK_DEVICE_MAPPING_SNAPSHOT_ID):
                bmd_dict = {
                    'uuid': mapping.get(
                        cls.BLOCK_DEVICE_MAPPING_SNAPSHOT_ID),
                    'source_type': 'snapshot',
                    'destination_type':
'volume', 'boot_index': 0, 'delete_on_termination': False, } elif mapping.get(cls.BLOCK_DEVICE_MAPPING_IMAGE): bmd_dict = { 'uuid': mapping.get(cls.BLOCK_DEVICE_MAPPING_IMAGE), 'source_type': 'image', 'destination_type': 'volume', 'boot_index': 0, 'delete_on_termination': False, } elif mapping.get(cls.BLOCK_DEVICE_MAPPING_SWAP_SIZE): bmd_dict = { 'source_type': 'blank', 'destination_type': 'local', 'boot_index': -1, 'delete_on_termination': True, 'guest_format': 'swap', 'volume_size': mapping.get( cls.BLOCK_DEVICE_MAPPING_SWAP_SIZE), } elif (mapping.get(cls.BLOCK_DEVICE_MAPPING_EPHEMERAL_SIZE) or mapping.get(cls.BLOCK_DEVICE_MAPPING_EPHEMERAL_FORMAT)): bmd_dict = { 'source_type': 'blank', 'destination_type': 'local', 'boot_index': -1, 'delete_on_termination': True } ephemeral_size = mapping.get( cls.BLOCK_DEVICE_MAPPING_EPHEMERAL_SIZE) if ephemeral_size: bmd_dict.update({'volume_size': ephemeral_size}) ephemeral_format = mapping.get( cls.BLOCK_DEVICE_MAPPING_EPHEMERAL_FORMAT) if ephemeral_format: bmd_dict.update({'guest_format': ephemeral_format}) # NOTE(prazumovsky): In case of server doesn't take empty value of # device name, need to escape from such situation. device_name = mapping.get(cls.BLOCK_DEVICE_MAPPING_DEVICE_NAME) if device_name: bmd_dict[cls.BLOCK_DEVICE_MAPPING_DEVICE_NAME] = device_name update_props = (cls.BLOCK_DEVICE_MAPPING_DEVICE_TYPE, cls.BLOCK_DEVICE_MAPPING_DISK_BUS, cls.BLOCK_DEVICE_MAPPING_BOOT_INDEX, cls.BLOCK_DEVICE_MAPPING_VOLUME_SIZE, cls.BLOCK_DEVICE_MAPPING_DELETE_ON_TERM) for update_prop in update_props: if mapping.get(update_prop) is not None: bmd_dict[update_prop] = mapping.get(update_prop) if bmd_dict: bdm_v2_list.append(bmd_dict) return bdm_v2_list def _add_port_for_address(self, server, extend_networks=True): """Method adds port id to list of addresses. This method is used only for resolving attributes. """ nets = copy.deepcopy(server.addresses) or {} ifaces = server.interface_list() ip_mac_mapping_on_port_id = dict(((iface.fixed_ips[0]['ip_address'], iface.mac_addr), iface.port_id) for iface in ifaces) for net_name in nets: for addr in nets[net_name]: addr['port'] = ip_mac_mapping_on_port_id.get( (addr['addr'], addr['OS-EXT-IPS-MAC:mac_addr'])) if extend_networks: return self._extend_networks(nets) else: return nets def _extend_networks(self, networks): """Method adds same networks with replaced name on network id. This method is used only for resolving attributes. 
""" nets = copy.deepcopy(networks) client_plugin = self.client_plugin('neutron') for key in list(nets.keys()): try: net_id = client_plugin.find_resourceid_by_name_or_id('network', key) except Exception as ex: if (client_plugin.is_not_found(ex) or client_plugin.is_no_unique(ex)): net_id = None else: raise if net_id: nets[net_id] = nets[key] return nets def _resolve_attribute(self, name): if self.resource_id is None: return if name == self.FIRST_ADDRESS: return self.client_plugin().server_to_ipaddress( self.resource_id) or '' if name == self.OS_COLLECT_CONFIG: return self.metadata_get().get('os-collect-config', {}) if name == self.NAME_ATTR: return self._server_name() try: server = self.client().servers.get(self.resource_id) except Exception as e: self.client_plugin().ignore_not_found(e) return '' if name == self.ADDRESSES: return self._add_port_for_address(server) if name == self.NETWORKS_ATTR: return self._extend_networks(server.networks) if name == self.INSTANCE_NAME: return getattr(server, 'OS-EXT-SRV-ATTR:instance_name', None) if name == self.ACCESSIPV4: return server.accessIPv4 if name == self.ACCESSIPV6: return server.accessIPv6 if name == self.CONSOLE_URLS: return self.client_plugin('nova').get_console_urls(server) if name == self.TAGS_ATTR: try: cv = self.client( version=self.client_plugin().V2_26) return cv.servers.tag_list(server) except exception.InvalidServiceVersion: return None def add_dependencies(self, deps): super(Server, self).add_dependencies(deps) # Depend on any Subnet in this template with the same # network_id as the networks attached to this server. # It is not known which subnet a server might be assigned # to so all subnets in a network should be created before # the servers in that network. nets = self.properties[self.NETWORKS] if not nets: return for res in six.itervalues(self.stack): if res.has_interface('OS::Neutron::Subnet'): subnet_net = res.properties.get(subnet.Subnet.NETWORK) # Be wary of the case where we do not know a subnet's # network. If that's the case, be safe and add it as a # dependency. if not subnet_net: deps += (self, res) continue for net in nets: # worry about network_id because that could be the match # assigned to the subnet as well and could have been # created by this stack. Regardless, the server should # still wait on the subnet. net_id = net.get(self.NETWORK_ID) if net_id and net_id == subnet_net: deps += (self, res) break # If we don't know a given net_id right now, it's # plausible this subnet depends on it. 
if not net_id: deps += (self, res) break def _update_flavor(self, after_props): flavor = after_props[self.FLAVOR] handler_args = checker_args = {'args': (flavor,)} prg_resize = progress.ServerUpdateProgress(self.resource_id, 'resize', handler_extra=handler_args, checker_extra=checker_args) prg_verify = progress.ServerUpdateProgress(self.resource_id, 'verify_resize') return prg_resize, prg_verify def _update_image(self, after_props): image_update_policy = after_props[self.IMAGE_UPDATE_POLICY] instance_meta = after_props[self.METADATA] if instance_meta is not None: instance_meta = self.client_plugin().meta_serialize( instance_meta) personality_files = after_props[self.PERSONALITY] image = after_props[self.IMAGE] preserve_ephemeral = ( image_update_policy == 'REBUILD_PRESERVE_EPHEMERAL') password = after_props[self.ADMIN_PASS] kwargs = {'password': password, 'preserve_ephemeral': preserve_ephemeral, 'meta': instance_meta, 'files': personality_files} prg = progress.ServerUpdateProgress(self.resource_id, 'rebuild', handler_extra={'args': (image,), 'kwargs': kwargs}) return prg def _update_networks(self, server, after_props): updaters = [] new_networks = after_props[self.NETWORKS] old_networks = self.properties[self.NETWORKS] security_groups = after_props[self.SECURITY_GROUPS] if not server: server = self.client().servers.get(self.resource_id) interfaces = server.interface_list() remove_ports, add_nets = self.calculate_networks( old_networks, new_networks, interfaces, security_groups) for port in remove_ports: updaters.append( progress.ServerUpdateProgress( self.resource_id, 'interface_detach', handler_extra={'args': (port,)}, checker_extra={'args': (port,)}) ) for args in add_nets: updaters.append( progress.ServerUpdateProgress( self.resource_id, 'interface_attach', handler_extra={'kwargs': args}, checker_extra={'args': (args['port_id'],)}) ) return updaters def needs_replace_with_prop_diff(self, changed_properties_set, after_props, before_props): """Needs replace based on prop_diff.""" if self.FLAVOR in changed_properties_set: flavor_update_policy = ( after_props.get(self.FLAVOR_UPDATE_POLICY) or before_props.get(self.FLAVOR_UPDATE_POLICY)) if flavor_update_policy == 'REPLACE': return True if self.IMAGE in changed_properties_set: image_update_policy = ( after_props.get(self.IMAGE_UPDATE_POLICY) or before_props.get(self.IMAGE_UPDATE_POLICY)) if image_update_policy == 'REPLACE': return True if self.USER_DATA in changed_properties_set: ud_update_policy = ( after_props.get(self.USER_DATA_UPDATE_POLICY) or before_props.get(self.USER_DATA_UPDATE_POLICY)) return ud_update_policy == 'REPLACE' def needs_replace_failed(self): if not self.resource_id: return True with self.client_plugin().ignore_not_found: server = self.client().servers.get(self.resource_id) return server.status in ('ERROR', 'DELETED', 'SOFT_DELETED') return True def handle_update(self, json_snippet, tmpl_diff, prop_diff): updaters = super(Server, self).handle_update( json_snippet, tmpl_diff, prop_diff) server = None after_props = json_snippet.properties(self.properties_schema, self.context) if self.METADATA in prop_diff: server = self.client_plugin().get_server(self.resource_id) self.client_plugin().meta_update(server, after_props[self.METADATA]) if self.TAGS in prop_diff: self._update_server_tags(after_props[self.TAGS] or []) if self.NAME in prop_diff: if not server: server = self.client_plugin().get_server(self.resource_id) self.client_plugin().rename(server, after_props[self.NAME]) if self.NETWORKS in prop_diff: 
updaters.extend(self._update_networks(server, after_props)) if self.FLAVOR in prop_diff: updaters.extend(self._update_flavor(after_props)) if self.IMAGE in prop_diff: updaters.append(self._update_image(after_props)) elif self.ADMIN_PASS in prop_diff: if not server: server = self.client_plugin().get_server(self.resource_id) server.change_password(after_props[self.ADMIN_PASS]) # NOTE(pas-ha) optimization is possible (starting first task # right away), but we'd rather not, as this method already might # have called several APIs return updaters def check_update_complete(self, updaters): """Push all updaters to completion in list order.""" for prg in updaters: if not prg.called: handler = getattr(self.client_plugin(), prg.handler) prg.called = handler(*prg.handler_args, **prg.handler_kwargs) return False if not prg.complete: check_complete = getattr(self.client_plugin(), prg.checker) prg.complete = check_complete(*prg.checker_args, **prg.checker_kwargs) break status = all(prg.complete for prg in updaters) if status: self.store_external_ports() return status def _validate_block_device_mapping(self): # either volume_id or snapshot_id needs to be specified, but not both # for block device mapping. bdm = self.properties[self.BLOCK_DEVICE_MAPPING] or [] bdm_v2 = self.properties[self.BLOCK_DEVICE_MAPPING_V2] or [] image = self.properties[self.IMAGE] if bdm and bdm_v2: raise exception.ResourcePropertyConflict( self.BLOCK_DEVICE_MAPPING, self.BLOCK_DEVICE_MAPPING_V2) bootable = image is not None for mapping in bdm: device_name = mapping[self.BLOCK_DEVICE_MAPPING_DEVICE_NAME] if device_name == 'vda': bootable = True volume_id = mapping.get(self.BLOCK_DEVICE_MAPPING_VOLUME_ID) snapshot_id = mapping.get(self.BLOCK_DEVICE_MAPPING_SNAPSHOT_ID) if volume_id is not None and snapshot_id is not None: raise exception.ResourcePropertyConflict( self.BLOCK_DEVICE_MAPPING_VOLUME_ID, self.BLOCK_DEVICE_MAPPING_SNAPSHOT_ID) if volume_id is None and snapshot_id is None: msg = _('Either volume_id or snapshot_id must be specified for' ' device mapping %s') % device_name raise exception.StackValidationFailed(message=msg) bootable_devs = [image] for mapping in bdm_v2: volume_id = mapping.get(self.BLOCK_DEVICE_MAPPING_VOLUME_ID) snapshot_id = mapping.get(self.BLOCK_DEVICE_MAPPING_SNAPSHOT_ID) image_id = mapping.get(self.BLOCK_DEVICE_MAPPING_IMAGE) boot_index = mapping.get(self.BLOCK_DEVICE_MAPPING_BOOT_INDEX) swap_size = mapping.get(self.BLOCK_DEVICE_MAPPING_SWAP_SIZE) ephemeral = (mapping.get( self.BLOCK_DEVICE_MAPPING_EPHEMERAL_SIZE) or mapping.get( self.BLOCK_DEVICE_MAPPING_EPHEMERAL_FORMAT)) property_tuple = (volume_id, snapshot_id, image_id, swap_size, ephemeral) if property_tuple.count(None) < 4: raise exception.ResourcePropertyConflict( self.BLOCK_DEVICE_MAPPING_VOLUME_ID, self.BLOCK_DEVICE_MAPPING_SNAPSHOT_ID, self.BLOCK_DEVICE_MAPPING_IMAGE, self.BLOCK_DEVICE_MAPPING_SWAP_SIZE, self.BLOCK_DEVICE_MAPPING_EPHEMERAL_SIZE, self.BLOCK_DEVICE_MAPPING_EPHEMERAL_FORMAT ) if property_tuple.count(None) == 5: msg = _('Either volume_id, snapshot_id, image_id, swap_size, ' 'ephemeral_size or ephemeral_format must be ' 'specified.') raise exception.StackValidationFailed(message=msg) if any((volume_id is not None, snapshot_id is not None, image_id is not None)): # boot_index is not specified, set boot_index=0 when # build_block_device_mapping for volume, snapshot, image if boot_index is None or boot_index == 0: bootable = True bootable_devs.append(volume_id) bootable_devs.append(snapshot_id) bootable_devs.append(image_id) if not 
bootable: msg = _('Neither image nor bootable volume is specified for ' 'instance %s') % self.name raise exception.StackValidationFailed(message=msg) if bdm_v2 and len(list( dev for dev in bootable_devs if dev is not None)) != 1: msg = _('Multiple bootable sources for instance %s.') % self.name raise exception.StackValidationFailed(message=msg) def _validate_image_flavor(self, image, flavor): try: image_obj = self.client_plugin('glance').get_image(image) flavor_obj = self.client_plugin().get_flavor(flavor) except Exception as ex: # Flavor or image may not have been created in the backend # yet when they are part of the same stack/template. if (self.client_plugin().is_not_found(ex) or self.client_plugin('glance').is_not_found(ex)): return raise else: if image_obj.status.lower() != self.IMAGE_ACTIVE: msg = _('Image status is required to be %(cstatus)s not ' '%(wstatus)s.') % { 'cstatus': self.IMAGE_ACTIVE, 'wstatus': image_obj.status} raise exception.StackValidationFailed(message=msg) # validate image/flavor combination if flavor_obj.ram < image_obj.min_ram: msg = _('Image %(image)s requires %(imram)s minimum ram. ' 'Flavor %(flavor)s has only %(flram)s.') % { 'image': image, 'imram': image_obj.min_ram, 'flavor': flavor, 'flram': flavor_obj.ram} raise exception.StackValidationFailed(message=msg) # validate image/flavor disk compatibility if flavor_obj.disk < image_obj.min_disk: msg = _('Image %(image)s requires %(imsz)s GB minimum ' 'disk space. Flavor %(flavor)s has only ' '%(flsz)s GB.') % { 'image': image, 'imsz': image_obj.min_disk, 'flavor': flavor, 'flsz': flavor_obj.disk} raise exception.StackValidationFailed(message=msg) def validate(self): """Validate any of the provided params.""" super(Server, self).validate() if self.user_data_software_config(): if 'deployments' in self.t.metadata(): msg = _('deployments key not allowed in resource metadata ' 'with user_data_format of SOFTWARE_CONFIG') raise exception.StackValidationFailed(message=msg) self._validate_block_device_mapping() # make sure the image exists if specified. 
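        # Editor's example (hypothetical names and sizes): for an image with
        # min_ram=2048 booted on a flavor with ram=512,
        # _validate_image_flavor() above raises StackValidationFailed with
        #   Image cirros requires 2048 minimum ram. Flavor m1.tiny has
        #   only 512.
        # and applies the analogous min_disk vs. flavor disk check.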
image = self.properties[self.IMAGE] flavor = self.properties[self.FLAVOR] if image: self._validate_image_flavor(image, flavor) networks = self.properties[self.NETWORKS] or [] for network in networks: self._validate_network(network) has_str_net = self._str_network(networks) is not None if has_str_net: if len(networks) != 1: msg = _('Property "%s" can not be specified if ' 'multiple network interfaces set for ' 'server.') % self.ALLOCATE_NETWORK raise exception.StackValidationFailed(message=msg) # Check if str_network is allowed to use try: self.client( version=self.client_plugin().V2_37) except exception.InvalidServiceVersion as ex: msg = (_('Cannot use "%(prop)s" property - compute service ' 'does not support the required api ' 'microversion: %(ex)s') % {'prop': self.ALLOCATE_NETWORK, 'ex': six.text_type(ex)}) raise exception.StackValidationFailed(message=msg) # record if any networks include explicit ports has_port = any(n[self.NETWORK_PORT] is not None for n in networks) # if 'security_groups' present for the server and explicit 'port' # in one or more entries in 'networks', raise validation error if has_port and self.properties[self.SECURITY_GROUPS]: raise exception.ResourcePropertyConflict( self.SECURITY_GROUPS, "/".join([self.NETWORKS, self.NETWORK_PORT])) # Check if nic tag is allowed to use if self._is_nic_tagged(networks=networks): try: self.client( version=self.client_plugin().V2_42) except exception.InvalidServiceVersion as ex: msg = (_('Cannot use "%(tag)s" property in networks - ' 'nova does not support it: %(error)s')) % { 'tag': self.NIC_TAG, 'error': six.text_type(ex) } raise exception.StackValidationFailed(message=msg) # Check if tags is allowed to use if self.properties[self.TAGS]: try: self.client( version=self.client_plugin().V2_26) except exception.InvalidServiceVersion as ex: msg = _('Cannot use "tags" property - nova does not support ' 'it: %s') % six.text_type(ex) raise exception.StackValidationFailed(message=msg) # retrieve provider's absolute limits if it will be needed metadata = self.properties[self.METADATA] personality = self.properties[self.PERSONALITY] if metadata or personality: limits = self.client_plugin().absolute_limits() # verify that the number of metadata entries is not greater # than the maximum number allowed in the provider's absolute # limits if metadata: msg = _('Instance metadata must not contain greater than %s ' 'entries. 
This is the maximum number allowed by your ' 'service provider') % limits['maxServerMeta'] self._check_maximum(len(metadata), limits['maxServerMeta'], msg) # verify the number of personality files and the size of each # personality file against the provider's absolute limits if personality: msg = _("The personality property may not contain " "greater than %s entries.") % limits['maxPersonality'] self._check_maximum(len(personality), limits['maxPersonality'], msg) for path, contents in personality.items(): msg = (_("The contents of personality file \"%(path)s\" " "is larger than the maximum allowed personality " "file size (%(max_size)s bytes).") % {'path': path, 'max_size': limits['maxPersonalitySize']}) self._check_maximum(len(bytes(contents.encode('utf-8')) ) if contents is not None else 0, limits['maxPersonalitySize'], msg) def _delete(self): if self.user_data_software_config(): self._delete_queue() self._delete_user() self._delete_temp_url() # remove internal and external ports self._delete_internal_ports() self.data_delete('external_ports') if self.resource_id is None: return try: self.client().servers.delete(self.resource_id) except Exception as e: self.client_plugin().ignore_not_found(e) return return progress.ServerDeleteProgress(self.resource_id) def handle_snapshot_delete(self, state): if state[1] != self.FAILED and self.resource_id: image_id = self.client().servers.create_image( self.resource_id, self.physical_resource_name()) return progress.ServerDeleteProgress( self.resource_id, image_id, False) return self._delete() def handle_delete(self): return self._delete() def check_delete_complete(self, prg): if not prg: return True if not prg.image_complete: image = self.client_plugin('glance').get_image(prg.image_id) if image.status.lower() in (self.IMAGE_ERROR, self.IMAGE_DELETED): raise exception.Error(image.status) elif image.status.lower() == self.IMAGE_ACTIVE: prg.image_complete = True if not self._delete(): return True return False return self.client_plugin().check_delete_server_complete( prg.server_id) def handle_suspend(self): """Suspend a server. Note we do not wait for the SUSPENDED state, this is polled for by check_suspend_complete in a similar way to the create logic so we can take advantage of coroutines. """ if self.resource_id is None: raise exception.Error(_('Cannot suspend %s, resource_id not set') % self.name) try: server = self.client().servers.get(self.resource_id) except Exception as e: if self.client_plugin().is_not_found(e): raise exception.NotFound(_('Failed to find server %s') % self.resource_id) else: raise else: # if the server has been suspended successful, # no need to suspend again if self.client_plugin().get_status(server) != 'SUSPENDED': LOG.debug('suspending server %s', self.resource_id) server.suspend() return server.id def check_suspend_complete(self, server_id): cp = self.client_plugin() server = cp.fetch_server(server_id) if not server: return False status = cp.get_status(server) LOG.debug('%(name)s check_suspend_complete status = %(status)s', {'name': self.name, 'status': status}) if status in list(cp.deferred_server_statuses + ['ACTIVE']): return status == 'SUSPENDED' else: exc = exception.ResourceUnknownStatus( result=_('Suspend of server %s failed') % server.name, resource_status=status) raise exc def handle_resume(self): """Resume a server. Note we do not wait for the ACTIVE state, this is polled for by check_resume_complete in a similar way to the create logic so we can take advantage of coroutines. 
""" if self.resource_id is None: raise exception.Error(_('Cannot resume %s, resource_id not set') % self.name) try: server = self.client().servers.get(self.resource_id) except Exception as e: if self.client_plugin().is_not_found(e): raise exception.NotFound(_('Failed to find server %s') % self.resource_id) else: raise else: # if the server has been resumed successful, # no need to resume again if self.client_plugin().get_status(server) != 'ACTIVE': LOG.debug('resuming server %s', self.resource_id) server.resume() return server.id def check_resume_complete(self, server_id): return self.client_plugin()._check_active(server_id) def handle_snapshot(self): image_id = self.client().servers.create_image( self.resource_id, self.physical_resource_name()) self.data_set('snapshot_image_id', image_id) return image_id def check_snapshot_complete(self, image_id): image = self.client_plugin('glance').get_image(image_id) if image.status.lower() == self.IMAGE_ACTIVE: return True elif image.status.lower() in (self.IMAGE_ERROR, self.IMAGE_DELETED): raise exception.Error(image.status) return False def handle_delete_snapshot(self, snapshot): image_id = snapshot['resource_data'].get('snapshot_image_id') with self.client_plugin('glance').ignore_not_found: self.client('glance').images.delete(image_id) def handle_restore(self, defn, restore_data): image_id = restore_data['resource_data']['snapshot_image_id'] props = dict((k, v) for k, v in self.properties.data.items() if v is not None) for key in [self.BLOCK_DEVICE_MAPPING, self.BLOCK_DEVICE_MAPPING_V2, self.NETWORKS]: if props.get(key) is not None: props[key] = list(dict((k, v) for k, v in prop.items() if v is not None) for prop in props[key]) props[self.IMAGE] = image_id return defn.freeze(properties=props) def prepare_for_replace(self): # if the server has not been created yet, do nothing if self.resource_id is None: return self.prepare_ports_for_replace() def restore_prev_rsrc(self, convergence=False): self.restore_ports_after_rollback(convergence=convergence) def resource_mapping(): return { 'OS::Nova::Server': Server, } heat-10.0.2/heat/engine/resources/openstack/nova/server_network_mixin.py0000666000175000017500000006621113343562340026537 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import itertools

import eventlet
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import netutils
import tenacity

from heat.common import exception
from heat.common.i18n import _
from heat.engine import resource
from heat.engine.resources.openstack.neutron import port as neutron_port

LOG = logging.getLogger(__name__)


class ServerNetworkMixin(object):

    def _validate_network(self, network):
        net_id = network.get(self.NETWORK_ID)
        port = network.get(self.NETWORK_PORT)
        subnet = network.get(self.NETWORK_SUBNET)
        fixed_ip = network.get(self.NETWORK_FIXED_IP)
        floating_ip = network.get(self.NETWORK_FLOATING_IP)
        str_network = network.get(self.ALLOCATE_NETWORK)

        if (net_id is None and port is None
                and subnet is None and not str_network):
            msg = _('One of the properties "%(id)s", "%(port_id)s", '
                    '"%(str_network)s" or "%(subnet)s" should be set for the '
                    'specified network of server "%(server)s".'
                    '') % dict(id=self.NETWORK_ID,
                               port_id=self.NETWORK_PORT,
                               subnet=self.NETWORK_SUBNET,
                               str_network=self.ALLOCATE_NETWORK,
                               server=self.name)
            raise exception.StackValidationFailed(message=msg)

        # can not specify str_network together with other network keys
        has_value_keys = [k for k, v in network.items() if v is not None]
        if str_network and len(has_value_keys) != 1:
            msg = _('Can not specify "%s" with other keys of networks '
                    'at the same time.') % self.ALLOCATE_NETWORK
            raise exception.StackValidationFailed(message=msg)

        # Nova doesn't allow specifying an ip and a port at the same time
        if fixed_ip and port is not None:
            raise exception.ResourcePropertyConflict(
                "/".join([self.NETWORKS, self.NETWORK_FIXED_IP]),
                "/".join([self.NETWORKS, self.NETWORK_PORT]))

        # if the user only specifies network and floating ip, the floating
        # ip can't be associated, as the neutron port isn't created/managed
        # by heat
        if floating_ip is not None:
            if net_id is not None and port is None and subnet is None:
                msg = _('Property "%(fip)s" is not supported if only '
                        '"%(net)s" is specified, because the corresponding '
                        'port can not be retrieved.'
                        ) % dict(fip=self.NETWORK_FLOATING_IP,
                                 net=self.NETWORK_ID)
                raise exception.StackValidationFailed(message=msg)

    def _validate_belonging_subnet_to_net(self, network):
        if network.get(self.NETWORK_PORT) is None:
            net = self._get_network_id(network)
            # if both subnet and network are specified, check that the
            # subnet belongs to the specified network
            subnet = network.get(self.NETWORK_SUBNET)
            if (subnet is not None and net is not None):
                subnet_net = self.client_plugin(
                    'neutron').network_id_from_subnet_id(subnet)
                if subnet_net != net:
                    msg = _('Specified subnet %(subnet)s does not belong to '
                            'network %(network)s.') % {
                        'subnet': subnet,
                        'network': net}
                    raise exception.StackValidationFailed(message=msg)

    def _create_internal_port(self, net_data, net_number,
                              security_groups=None):
        name = _('%(server)s-port-%(number)s') % {'server': self.name,
                                                  'number': net_number}

        kwargs = self._prepare_internal_port_kwargs(net_data,
                                                    security_groups)
        kwargs['name'] = name
        port = self.client('neutron').create_port({'port': kwargs})['port']

        # Store ids (used for floating_ip association, updating, etc.)
        # in resource's data.
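        # Editor's note: the ids are stored as a JSON-encoded list under the
        # resource-data key 'internal_ports', e.g. (id is hypothetical):
        #
        #   '[{"id": "9cdf0ae1-1234-5678-9abc-def012345678"}]'
        #
        # _data_update_ports() and _data_get_ports() below read and rewrite
        # this list as ports are created and deleted.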
self._data_update_ports(port['id'], 'add') return port['id'] def _prepare_internal_port_kwargs(self, net_data, security_groups=None): kwargs = {'network_id': self._get_network_id(net_data)} fixed_ip = net_data.get(self.NETWORK_FIXED_IP) subnet = net_data.get(self.NETWORK_SUBNET) body = {} if fixed_ip: body['ip_address'] = fixed_ip if subnet: body['subnet_id'] = subnet # we should add fixed_ips only if subnet or ip were provided if body: kwargs.update({'fixed_ips': [body]}) if security_groups: sec_uuids = self.client_plugin( 'neutron').get_secgroup_uuids(security_groups) kwargs['security_groups'] = sec_uuids extra_props = net_data.get(self.NETWORK_PORT_EXTRA) if extra_props is not None: specs = extra_props.pop(neutron_port.Port.VALUE_SPECS) if specs: kwargs.update(specs) port_extra_keys = list(neutron_port.Port.EXTRA_PROPERTIES) port_extra_keys.remove(neutron_port.Port.ALLOWED_ADDRESS_PAIRS) for key in port_extra_keys: if extra_props.get(key) is not None: kwargs[key] = extra_props.get(key) allowed_address_pairs = extra_props.get( neutron_port.Port.ALLOWED_ADDRESS_PAIRS) if allowed_address_pairs is not None: for pair in allowed_address_pairs: if (neutron_port.Port.ALLOWED_ADDRESS_PAIR_MAC_ADDRESS in pair and pair.get( neutron_port.Port.ALLOWED_ADDRESS_PAIR_MAC_ADDRESS) is None): del pair[ neutron_port.Port.ALLOWED_ADDRESS_PAIR_MAC_ADDRESS] port_address_pairs = neutron_port.Port.ALLOWED_ADDRESS_PAIRS kwargs[port_address_pairs] = allowed_address_pairs return kwargs def _delete_internal_port(self, port_id): """Delete physical port by id.""" with self.client_plugin('neutron').ignore_not_found: self.client('neutron').delete_port(port_id) self._data_update_ports(port_id, 'delete') def _delete_internal_ports(self): for port_data in self._data_get_ports(): self._delete_internal_port(port_data['id']) self.data_delete('internal_ports') def _data_update_ports(self, port_id, action, port_type='internal_ports'): data = self._data_get_ports(port_type) if action == 'add': data.append({'id': port_id}) elif action == 'delete': for port in data: if port_id == port['id']: data.remove(port) break self.data_set(port_type, jsonutils.dumps(data)) def _data_get_ports(self, port_type='internal_ports'): data = self.data().get(port_type) return jsonutils.loads(data) if data else [] def store_external_ports(self): """Store in resource's data IDs of ports created by nova for server. If no port property is specified and no internal port has been created, nova client takes no port-id and calls port creating into server creating. We need to store information about that ports, so store their IDs to data with key `external_ports`. """ # check if os-attach-interfaces extension is available on this cloud. # If it's not, then novaclient's interface_list method cannot be used # to get the list of interfaces. if not self.client_plugin().has_extension('os-attach-interfaces'): return server = self.client().servers.get(self.resource_id) ifaces = server.interface_list() external_port_ids = set(iface.port_id for iface in ifaces) # need to make sure external_ports data doesn't store ids of non-exist # ports. Delete such port_id if it's needed. data_external_port_ids = set( port['id'] for port in self._data_get_ports('external_ports')) for port_id in data_external_port_ids - external_port_ids: self._data_update_ports(port_id, 'delete', port_type='external_ports') internal_port_ids = set(port['id'] for port in self._data_get_ports()) # add ids of new external ports which not contains in external_ports # data yet. 
Also, exclude ids of internal ports. new_ports = ((external_port_ids - internal_port_ids) - data_external_port_ids) for port_id in new_ports: self._data_update_ports(port_id, 'add', port_type='external_ports') def _build_nics(self, networks, security_groups=None): if not networks: return None str_network = self._str_network(networks) if str_network: return str_network nics = [] for idx, net in enumerate(networks): self._validate_belonging_subnet_to_net(net) nic_info = {'net-id': self._get_network_id(net)} if net.get(self.NETWORK_PORT): nic_info['port-id'] = net[self.NETWORK_PORT] elif net.get(self.NETWORK_SUBNET): nic_info['port-id'] = self._create_internal_port( net, idx, security_groups) # if nic_info including 'port-id', do not set ip for nic if not nic_info.get('port-id'): if net.get(self.NETWORK_FIXED_IP): ip = net[self.NETWORK_FIXED_IP] if netutils.is_valid_ipv6(ip): nic_info['v6-fixed-ip'] = ip else: nic_info['v4-fixed-ip'] = ip if net.get(self.NETWORK_FLOATING_IP) and nic_info.get('port-id'): floating_ip_data = {'port_id': nic_info['port-id']} if net.get(self.NETWORK_FIXED_IP): floating_ip_data.update( {'fixed_ip_address': net.get(self.NETWORK_FIXED_IP)}) self._floating_ip_neutron_associate( net.get(self.NETWORK_FLOATING_IP), floating_ip_data) if net.get(self.NIC_TAG): nic_info[self.NIC_TAG] = net.get(self.NIC_TAG) nics.append(nic_info) return nics def _floating_ip_neutron_associate(self, floating_ip, floating_ip_data): self.client('neutron').update_floatingip( floating_ip, {'floatingip': floating_ip_data}) def _floating_ip_disassociate(self, floating_ip): with self.client_plugin('neutron').ignore_not_found: self.client('neutron').update_floatingip( floating_ip, {'floatingip': {'port_id': None}}) def _find_best_match(self, existing_interfaces, specified_net): specified_net_items = set(specified_net.items()) if specified_net.get(self.NETWORK_PORT) is not None: for iface in existing_interfaces: if (iface[self.NETWORK_PORT] == specified_net[self.NETWORK_PORT] and specified_net_items.issubset(set(iface.items()))): return iface elif specified_net.get(self.NETWORK_FIXED_IP) is not None: for iface in existing_interfaces: if (iface[self.NETWORK_FIXED_IP] == specified_net[self.NETWORK_FIXED_IP] and specified_net_items.issubset(set(iface.items()))): return iface else: # Best subset intersection best, matches, num = None, 0, 0 for iface in existing_interfaces: iface_items = set(iface.items()) if specified_net_items.issubset(iface_items): num = len(specified_net_items.intersection(iface_items)) if num > matches: best, matches = iface, num return best def _exclude_not_updated_networks(self, old_nets, new_nets, interfaces): not_updated_nets = [] # Update old_nets to match interfaces self.update_networks_matching_iface_port(old_nets, interfaces) # make networks similar by adding None values for not used keys for key in self._NETWORK_KEYS: # if _net.get(key) is '', convert to None for _net in itertools.chain(new_nets, old_nets): _net[key] = _net.get(key) or None for new_net in list(new_nets): new_net_reduced = {k: v for k, v in new_net.items() if k not in self._IFACE_MANAGED_KEYS or v is not None} match = self._find_best_match(old_nets, new_net_reduced) if match is not None: not_updated_nets.append(match) new_nets.remove(new_net) old_nets.remove(match) return not_updated_nets def _get_network_id(self, net): net_id = net.get(self.NETWORK_ID) or None subnet = net.get(self.NETWORK_SUBNET) or None if not net_id and subnet: net_id = self.client_plugin( 'neutron').network_id_from_subnet_id(subnet) 
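        # Editor's note: this fallback is what makes 'network' optional when
        # only 'subnet' is given -- the parent network is resolved via
        # Neutron, e.g. (ids hypothetical):
        #   network_id_from_subnet_id('91e47a57-...') -> '05d4a9dc-...'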
return net_id def update_networks_matching_iface_port(self, old_nets, interfaces): def get_iface_props(iface): ipaddr = None subnet = None if len(iface.fixed_ips) > 0: ipaddr = iface.fixed_ips[0]['ip_address'] subnet = iface.fixed_ips[0]['subnet_id'] return {self.NETWORK_PORT: iface.port_id, self.NETWORK_ID: iface.net_id, self.NETWORK_FIXED_IP: ipaddr, self.NETWORK_SUBNET: subnet} interfaces_net_props = [get_iface_props(iface) for iface in interfaces] for old_net in old_nets: if old_net[self.NETWORK_PORT] is None: old_net[self.NETWORK_ID] = self._get_network_id(old_net) old_net_reduced = {k: v for k, v in old_net.items() if k in self._IFACE_MANAGED_KEYS and v is not None} match = self._find_best_match(interfaces_net_props, old_net_reduced) if match is not None: old_net.update(match) interfaces_net_props.remove(match) def _get_available_networks(self): # first we get the private networks owned by the tenant search_opts = {'tenant_id': self.context.tenant_id, 'shared': False, 'admin_state_up': True, } nc = self.client('neutron') nets = nc.list_networks(**search_opts).get('networks', []) # second we get the public shared networks search_opts = {'shared': True} nets += nc.list_networks(**search_opts).get('networks', []) ids = [net['id'] for net in nets] return ids def _auto_allocate_network(self): topology = self.client('neutron').get_auto_allocated_topology( self.context.tenant_id)['auto_allocated_topology'] return topology['id'] def _calculate_using_str_network(self, ifaces, str_net, security_groups=None): add_nets = [] remove_ports = [iface.port_id for iface in ifaces or []] if str_net == self.NETWORK_AUTO: nets = self._get_available_networks() if not nets: nets = [self._auto_allocate_network()] if len(nets) > 1: msg = 'Multiple possible networks found.' raise exception.UnableToAutoAllocateNetwork(message=msg) handle_args = {'port_id': None, 'net_id': nets[0], 'fip': None} if security_groups: sg_ids = self.client_plugin( 'neutron').get_secgroup_uuids(security_groups) handle_args['security_groups'] = sg_ids add_nets.append(handle_args) return remove_ports, add_nets def _calculate_using_list_networks(self, old_nets, new_nets, ifaces, security_groups): remove_ports = [] add_nets = [] # if update networks between None and empty, no need to # detach and attach, the server got first free port already. if not new_nets and not old_nets: return remove_ports, add_nets new_nets = new_nets or [] old_nets = old_nets or [] remove_ports, not_updated_nets = self._calculate_remove_ports( old_nets, new_nets, ifaces) add_nets = self._calculate_add_nets(new_nets, not_updated_nets, security_groups) return remove_ports, add_nets def _calculate_remove_ports(self, old_nets, new_nets, ifaces): remove_ports = [] not_updated_nets = [] # if old nets is empty, it means that the server got first # free port. so we should detach this interface. if not old_nets: for iface in ifaces: remove_ports.append(iface.port_id) # if we have any information in networks field, we should: # 1. find similar networks, if they exist # 2. remove these networks from new_nets and old_nets # lists # 3. detach unmatched networks, which were present in old_nets # 4. 
attach unmatched networks, which were present in new_nets else: # if old net is string net, remove the interfaces if self._str_network(old_nets): remove_ports = [iface.port_id for iface in ifaces or []] else: # remove not updated networks from old and new networks lists, # also get list these networks not_updated_nets = self._exclude_not_updated_networks( old_nets, new_nets, ifaces) # according to nova interface-detach command detached port # will be deleted inter_port_data = self._data_get_ports() inter_port_ids = [p['id'] for p in inter_port_data] for net in old_nets: port_id = net.get(self.NETWORK_PORT) # we can't match the port for some user case, like: # the internal port was detached in nova first, then # user update template to detach this nic. The internal # port will remains till we delete the server resource. if port_id: remove_ports.append(port_id) if port_id in inter_port_ids: # if we have internal port with such id, remove it # instantly. self._delete_internal_port(port_id) if net.get(self.NETWORK_FLOATING_IP): self._floating_ip_disassociate( net.get(self.NETWORK_FLOATING_IP)) return remove_ports, not_updated_nets def _calculate_add_nets(self, new_nets, not_updated_nets, security_groups): add_nets = [] # if new_nets is empty (including the non_updated_nets), we should # attach first free port, similar to the behavior for instance # creation if not new_nets and not not_updated_nets: handler_kwargs = {'port_id': None, 'net_id': None, 'fip': None} if security_groups: sec_uuids = self.client_plugin( 'neutron').get_secgroup_uuids(security_groups) handler_kwargs['security_groups'] = sec_uuids add_nets.append(handler_kwargs) else: # attach section similar for both variants that # were mentioned above for idx, net in enumerate(new_nets): handler_kwargs = {'port_id': None, 'net_id': None, 'fip': None} if net.get(self.NETWORK_PORT): handler_kwargs['port_id'] = net.get(self.NETWORK_PORT) elif net.get(self.NETWORK_SUBNET): handler_kwargs['port_id'] = self._create_internal_port( net, idx, security_groups) if not handler_kwargs['port_id']: handler_kwargs['net_id'] = self._get_network_id(net) if security_groups: sec_uuids = self.client_plugin( 'neutron').get_secgroup_uuids(security_groups) handler_kwargs['security_groups'] = sec_uuids if handler_kwargs['net_id']: handler_kwargs['fip'] = net.get('fixed_ip') floating_ip = net.get(self.NETWORK_FLOATING_IP) if floating_ip: flip_associate = {'port_id': handler_kwargs.get('port_id')} if net.get('fixed_ip'): flip_associate['fixed_ip_address'] = net.get( 'fixed_ip') self.update_floating_ip_association(floating_ip, flip_associate) add_nets.append(handler_kwargs) return add_nets def _str_network(self, networks): # if user specify 'allocate_network', return it # otherwise we return None for net in networks or []: str_net = net.get(self.ALLOCATE_NETWORK) if str_net: return str_net def _is_nic_tagged(self, networks): # if user specify 'tag', return True # otherwise return False for net in networks or []: if net.get(self.NIC_TAG): return True return False def calculate_networks(self, old_nets, new_nets, ifaces, security_groups=None): new_str_net = self._str_network(new_nets) if new_str_net: return self._calculate_using_str_network(ifaces, new_str_net, security_groups) else: return self._calculate_using_list_networks( old_nets, new_nets, ifaces, security_groups) def update_floating_ip_association(self, floating_ip, flip_associate): if flip_associate.get('port_id'): self._floating_ip_neutron_associate(floating_ip, flip_associate) @staticmethod def 
get_all_ports(server): return itertools.chain( server._data_get_ports(), server._data_get_ports('external_ports') ) def detach_ports(self, server): existing_server_id = server.resource_id for port in self.get_all_ports(server): detach_called = self.client_plugin().interface_detach( existing_server_id, port['id']) if not detach_called: return try: if self.client_plugin().check_interface_detach( existing_server_id, port['id']): LOG.info('Detached interface %(port)s successfully from ' 'server %(server)s.', {'port': port['id'], 'server': existing_server_id}) except tenacity.RetryError: raise exception.InterfaceDetachFailed( port=port['id'], server=existing_server_id) def attach_ports(self, server): prev_server_id = server.resource_id for port in self.get_all_ports(server): self.client_plugin().interface_attach(prev_server_id, port['id']) try: if self.client_plugin().check_interface_attach( prev_server_id, port['id']): LOG.info('Attached interface %(port)s successfully to ' 'server %(server)s', {'port': port['id'], 'server': prev_server_id}) except tenacity.RetryError: raise exception.InterfaceAttachFailed( port=port['id'], server=prev_server_id) def prepare_ports_for_replace(self): # Check that the interface can be detached server = None # TODO(TheJulia): Once Story #2002001 is underway, # we should be able to replace the query to nova and # the check for the failed status with just a check # to see if the resource has failed. with self.client_plugin().ignore_not_found: server = self.client().servers.get(self.resource_id) if server and server.status != 'ERROR': self.detach_ports(self) else: # If we are replacing an ERROR'ed node, we need to delete # internal ports that we have created, otherwise we can # encounter deployment issues with duplicate internal # port data attempting to be created in instances being # deployed. self._delete_internal_ports() def restore_ports_after_rollback(self, convergence): # In case of convergence, during rollback, the previous rsrc is # already selected and is being acted upon. if convergence: prev_server = self rsrc, rsrc_owning_stack, stack = resource.Resource.load( prev_server.context, prev_server.replaced_by, prev_server.stack.current_traversal, True, prev_server.stack.defn._resource_data ) existing_server = rsrc else: backup_stack = self.stack._backup_stack() prev_server = backup_stack.resources.get(self.name) existing_server = self # Wait until the server moves to the active state. We can't # detach interfaces from a server in the BUILDING state. # In the convergence case, the replacement resource may have been # created but never worked on because a rollback or a new update # was triggered. if existing_server.resource_id is not None: try: while True: active = self.client_plugin()._check_active( existing_server.resource_id) if active: break eventlet.sleep(1) except exception.ResourceInError: pass self.store_external_ports() self.detach_ports(existing_server) self.attach_ports(prev_server) heat-10.0.2/heat/engine/resources/openstack/magnum/0000775000175000017500000000000013343562672022223 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/magnum/bay.py0000666000175000017500000001425213343562351023350 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class Bay(resource.Resource): """A resource that creates a Magnum Bay. This resource has been deprecated in favor of OS::Magnum::Cluster. """ deprecation_msg = _('Please use OS::Magnum::Cluster instead.') support_status = support.SupportStatus( status=support.DEPRECATED, message=deprecation_msg, version='9.0.0', previous_status=support.SupportStatus( status=support.SUPPORTED, version='6.0.0') ) PROPERTIES = ( NAME, BAYMODEL, NODE_COUNT, MASTER_COUNT, DISCOVERY_URL, BAY_CREATE_TIMEOUT ) = ( 'name', 'baymodel', 'node_count', 'master_count', 'discovery_url', 'bay_create_timeout' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('The bay name.') ), BAYMODEL: properties.Schema( properties.Schema.STRING, _('The name or ID of the bay model.'), constraints=[ constraints.CustomConstraint('magnum.baymodel') ], required=True ), NODE_COUNT: properties.Schema( properties.Schema.INTEGER, _('The node count for this bay.'), constraints=[constraints.Range(min=1)], update_allowed=True, default=1 ), MASTER_COUNT: properties.Schema( properties.Schema.INTEGER, _('The number of master nodes for this bay.'), constraints=[constraints.Range(min=1)], update_allowed=True, default=1 ), DISCOVERY_URL: properties.Schema( properties.Schema.STRING, _('Specifies a custom discovery url for node discovery.') ), BAY_CREATE_TIMEOUT: properties.Schema( properties.Schema.INTEGER, _('Timeout for creating the bay in minutes. 
' 'Set to 0 for no timeout.'), constraints=[constraints.Range(min=0)], default=0 ) } default_client_name = 'magnum' entity = 'bays' def handle_create(self): args = { 'name': self.properties[self.NAME], 'baymodel_id': self.properties[self.BAYMODEL], 'node_count': self.properties[self.NODE_COUNT], 'master_count': self.properties[self.MASTER_COUNT], 'discovery_url': self.properties[self.DISCOVERY_URL], 'bay_create_timeout': self.properties[self.BAY_CREATE_TIMEOUT] } bay = self.client().bays.create(**args) self.resource_id_set(bay.uuid) return bay.uuid def check_create_complete(self, id): bay = self.client().bays.get(id) if bay.status == 'CREATE_IN_PROGRESS': return False elif bay.status is None: return False elif bay.status == 'CREATE_COMPLETE': return True elif bay.status == 'CREATE_FAILED': msg = (_("Failed to create Bay '%(name)s' - %(reason)s") % {'name': self.name, 'reason': bay.status_reason}) raise exception.ResourceInError(status_reason=msg, resource_status=bay.status) else: msg = (_("Unknown status creating Bay '%(name)s' - %(reason)s") % {'name': self.name, 'reason': bay.status_reason}) raise exception.ResourceUnknownStatus(status_reason=msg, resource_status=bay.status) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: patch = [{'op': 'replace', 'path': '/' + k, 'value': v} for k, v in six.iteritems(prop_diff)] self.client().bays.update(self.resource_id, patch) return self.resource_id def parse_live_resource_data(self, resource_properties, resource_data): record_reality = {} for key in [self.NODE_COUNT, self.MASTER_COUNT]: record_reality.update({key: resource_data.get(key)}) return record_reality def check_update_complete(self, id): bay = self.client().bays.get(id) # The check-update-complete request might fetch the status before it # has changed to UPDATE_IN_PROGRESS, so we also allow # `CREATE_COMPLETE` here. if bay.status in ['UPDATE_IN_PROGRESS', 'CREATE_COMPLETE']: return False elif bay.status == 'UPDATE_COMPLETE': return True elif bay.status == 'UPDATE_FAILED': msg = (_("Failed to update Bay '%(name)s' - %(reason)s") % {'name': self.name, 'reason': bay.status_reason}) raise exception.ResourceInError(status_reason=msg, resource_status=bay.status) else: msg = (_("Unknown status updating Bay '%(name)s' - %(reason)s") % {'name': self.name, 'reason': bay.status_reason}) raise exception.ResourceUnknownStatus(status_reason=msg, resource_status=bay.status) def check_delete_complete(self, id): if not id: return True try: self.client().bays.get(id) except Exception as exc: self.client_plugin().ignore_not_found(exc) return True return False def resource_mapping(): return { 'OS::Magnum::Bay': Bay } heat-10.0.2/heat/engine/resources/openstack/magnum/cluster.py0000666000175000017500000002343413343562340024256 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
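# NOTE: The handle_update() implementations for Bay (above) and for Cluster /
# ClusterTemplate (below) all translate a Heat property diff into a JSON-patch
# document for the magnum API. A minimal standalone sketch of that
# transformation; the helper name and the sample data are illustrative only,
# not part of this module:
#
#     def prop_diff_to_patch(prop_diff):
#         """Build a JSON-patch 'replace' list from a property diff."""
#         return [{'op': 'replace', 'path': '/' + key, 'value': value}
#                 for key, value in prop_diff.items()]
#
#     # e.g. prop_diff_to_patch({'node_count': 5})
#     # -> [{'op': 'replace', 'path': '/node_count', 'value': 5}]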
import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class Cluster(resource.Resource): """A resource that creates a magnum cluster. This resource creates a magnum cluster, which is a collection of node objects where work is scheduled. """ support_status = support.SupportStatus(version='9.0.0') default_client_name = 'magnum' entity = 'clusters' ATTRIBUTES = ( API_ADDRESS_ATTR, STACK_ID_ATTR, COE_VERSION_ATTR, MASTER_ADDRESSES_ATTR, STATUS_ATTR, MASTER_COUNT_ATTR, NODE_ADDRESSES_ATTR, STATUS_REASON_ATTR, NODE_COUNT_ATTR, NAME_ATTR, CONTAINER_VERSION_ATTR, DISCOVERY_URL_ATTR, CLUSTER_TEMPLATE_ID_ATTR, KEYPAIR_ATTR, CREATE_TIMEOUT_ATTR ) = ( 'api_address', 'stack_id', 'coe_version', 'master_addresses', 'status', 'master_count', 'node_addresses', 'status_reason', 'node_count', 'name', 'container_version', 'discovery_url', 'cluster_template_id', 'keypair', 'create_timeout' ) attributes_schema = { API_ADDRESS_ATTR: attributes.Schema( _('The endpoint URL of the COE API exposed to end-users.'), type=attributes.Schema.STRING ), STACK_ID_ATTR: attributes.Schema( _('The reference UUID of the orchestration stack for this ' 'COE cluster.'), type=attributes.Schema.STRING ), COE_VERSION_ATTR: attributes.Schema( _('Version info of the chosen COE in the cluster, to help the ' 'client pick the right version of the client.'), type=attributes.Schema.STRING ), MASTER_ADDRESSES_ATTR: attributes.Schema( _('List of floating IPs of all master nodes.'), type=attributes.Schema.LIST ), STATUS_ATTR: attributes.Schema( _('The status for this COE cluster.'), type=attributes.Schema.STRING ), MASTER_COUNT_ATTR: attributes.Schema( _('The number of servers that will serve as master for the ' 'cluster.'), type=attributes.Schema.INTEGER ), NODE_ADDRESSES_ATTR: attributes.Schema( _('List of floating IPs of all servers that serve as nodes.'), type=attributes.Schema.LIST ), STATUS_REASON_ATTR: attributes.Schema( _('The reason for the current cluster status.'), type=attributes.Schema.STRING ), NODE_COUNT_ATTR: attributes.Schema( _('The number of servers that will serve as nodes in the cluster.'), type=attributes.Schema.INTEGER ), NAME_ATTR: attributes.Schema( _('Name of the resource.'), type=attributes.Schema.STRING ), CONTAINER_VERSION_ATTR: attributes.Schema( _('Version info of the container engine in the chosen COE in the ' 'cluster, to help the client pick the right version of the ' 'client.'), type=attributes.Schema.STRING ), DISCOVERY_URL_ATTR: attributes.Schema( _('The custom discovery url for node discovery.'), type=attributes.Schema.STRING ), CLUSTER_TEMPLATE_ID_ATTR: attributes.Schema( _('The UUID of the cluster template.'), type=attributes.Schema.STRING ), KEYPAIR_ATTR: attributes.Schema( _('The name of the keypair.'), type=attributes.Schema.STRING ), CREATE_TIMEOUT_ATTR: attributes.Schema( _('The timeout for cluster creation in minutes.'), type=attributes.Schema.INTEGER )} PROPERTIES = ( NAME, CLUSTER_TEMPLATE, KEYPAIR, NODE_COUNT, MASTER_COUNT, DISCOVERY_URL, CREATE_TIMEOUT ) = ( 'name', 'cluster_template', 'keypair', 'node_count', 'master_count', 'discovery_url', 'create_timeout' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('The cluster name.'), ), CLUSTER_TEMPLATE: properties.Schema( properties.Schema.STRING, _('The name or ID of the cluster template.'), constraints=[ 
constraints.CustomConstraint('magnum.cluster_template') ], required=True ), KEYPAIR: properties.Schema( properties.Schema.STRING, _('The name of the keypair. If not provided, the keypair from ' 'the cluster template is used.'), constraints=[ constraints.CustomConstraint('nova.keypair') ] ), NODE_COUNT: properties.Schema( properties.Schema.INTEGER, _('The node count for this cluster.'), constraints=[constraints.Range(min=1)], update_allowed=True, default=1 ), MASTER_COUNT: properties.Schema( properties.Schema.INTEGER, _('The number of master nodes for this cluster.'), constraints=[constraints.Range(min=1)], update_allowed=True, default=1 ), DISCOVERY_URL: properties.Schema( properties.Schema.STRING, _('Specifies a custom discovery url for node discovery.'), update_allowed=True, ), CREATE_TIMEOUT: properties.Schema( properties.Schema.INTEGER, _('Timeout for creating the cluster in minutes. ' 'Set to 0 for no timeout.'), constraints=[constraints.Range(min=0)], update_allowed=True, default=60 ) } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.CLUSTER_TEMPLATE], client_plugin=self.client_plugin('magnum'), finder='get_cluster_template') ] def _resolve_attribute(self, name): if self.resource_id is None: return cluster = self.client().clusters.get(self.resource_id) return getattr(cluster, name, None) def handle_create(self): args = dict(self.properties.items()) args['cluster_template_id'] = self.properties[self.CLUSTER_TEMPLATE] del args[self.CLUSTER_TEMPLATE] cluster = self.client().clusters.create(**args) self.resource_id_set(cluster.uuid) return cluster.uuid def check_create_complete(self, id): cluster = self.client().clusters.get(id) if cluster.status == 'CREATE_IN_PROGRESS': return False elif cluster.status == 'CREATE_COMPLETE': return True elif cluster.status == 'CREATE_FAILED': msg = (_("Failed to create Cluster '%(name)s' - %(reason)s") % {'name': self.name, 'reason': cluster.status_reason}) raise exception.ResourceInError(status_reason=msg, resource_status=cluster.status) else: msg = (_("Unknown status creating Cluster '%(name)s' - %(reason)s") % {'name': self.name, 'reason': cluster.status_reason}) raise exception.ResourceUnknownStatus( status_reason=msg, resource_status=cluster.status) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: patch = [{'op': 'replace', 'path': '/' + k, 'value': v} for k, v in six.iteritems(prop_diff)] self.client().clusters.update(self.resource_id, patch) return self.resource_id def check_update_complete(self, id): cluster = self.client().clusters.get(id) # The check-update-complete request might fetch the status before it # has changed to UPDATE_IN_PROGRESS, so we also allow # `CREATE_COMPLETE` here. # TODO(ricolin): we should find a way to make sure the status check # is only performed after the action has really started. 
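# Status handling below mirrors check_create_complete():
#   UPDATE_IN_PROGRESS or CREATE_COMPLETE -> keep polling (return False)
#   UPDATE_COMPLETE                       -> done (return True)
#   UPDATE_FAILED                         -> raise ResourceInError
#   any other status                      -> raise ResourceUnknownStatus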
if cluster.status in ['UPDATE_IN_PROGRESS', 'CREATE_COMPLETE']: return False elif cluster.status == 'UPDATE_COMPLETE': return True elif cluster.status == 'UPDATE_FAILED': msg = (_("Failed to update Cluster '%(name)s' - %(reason)s") % {'name': self.name, 'reason': cluster.status_reason}) raise exception.ResourceInError( status_reason=msg, resource_status=cluster.status) else: msg = (_("Unknown status updating Cluster '%(name)s' - %(reason)s") % {'name': self.name, 'reason': cluster.status_reason}) raise exception.ResourceUnknownStatus( status_reason=msg, resource_status=cluster.status) def check_delete_complete(self, id): if not id: return True try: self.client().clusters.get(id) except Exception as exc: self.client_plugin().ignore_not_found(exc) return True return False def resource_mapping(): return { 'OS::Magnum::Cluster': Cluster } heat-10.0.2/heat/engine/resources/openstack/magnum/__init__.py0000666000175000017500000000000013343562340024314 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/magnum/baymodel.py0000666000175000017500000000351213343562351024366 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine.resources.openstack.magnum import cluster_template from heat.engine import support from heat.engine import translation class BayModel(cluster_template.ClusterTemplate): """A resource for the BayModel in Magnum. This resource has been deprecated by ClusterTemplate. BayModel is an object that stores template information about the bay which is used to create new bays consistently. """ SSH_AUTHORIZED_KEY = 'ssh_authorized_key' deprecate_msg = _('Please use OS::Magnum::ClusterTemplate instead.') support_status = support.SupportStatus( status=support.DEPRECATED, message=deprecate_msg, version='9.0.0', previous_status=support.SupportStatus( status=support.SUPPORTED, version='5.0.0'), substitute_class=cluster_template.ClusterTemplate ) def translation_rules(self, props): if props.get(self.SSH_AUTHORIZED_KEY): return [ translation.TranslationRule( props, translation.TranslationRule.DELETE, [self.SSH_AUTHORIZED_KEY] ) ] def resource_mapping(): return { 'OS::Magnum::BayModel': BayModel } heat-10.0.2/heat/engine/resources/openstack/magnum/cluster_template.py0000666000175000017500000003037013343562340026146 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
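# NOTE: Resource plugins in this tree report long-running progress through the
# check_<action>_complete(id) contract: return False to be polled again, True
# when the action is finished, or raise to fail the resource. A minimal sketch
# of the polling loop this contract implies (simplified and illustrative; the
# real engine schedules these checks with task timers rather than blocking):
#
#     import time
#
#     def wait_for(check_complete, resource_id, interval=1.0):
#         """Poll a check_*_complete-style callable until it returns True."""
#         while not check_complete(resource_id):
#             time.sleep(interval)
#
#     # e.g. wait_for(cluster_resource.check_create_complete, cluster_uuid)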
import six from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class ClusterTemplate(resource.Resource): """A resource for the ClusterTemplate in Magnum. ClusterTemplate is an object that stores template information about the cluster which is used to create new clusters consistently. """ support_status = support.SupportStatus(version='9.0.0') default_client_name = 'magnum' entity = 'cluster_templates' PROPERTIES = ( NAME, IMAGE, FLAVOR, MASTER_FLAVOR, KEYPAIR, EXTERNAL_NETWORK, FIXED_NETWORK, FIXED_SUBNET, DNS_NAMESERVER, DOCKER_VOLUME_SIZE, DOCKER_STORAGE_DRIVER, COE, NETWORK_DRIVER, VOLUME_DRIVER, HTTP_PROXY, HTTPS_PROXY, NO_PROXY, LABELS, TLS_DISABLED, PUBLIC, REGISTRY_ENABLED, SERVER_TYPE, MASTER_LB_ENABLED, FLOATING_IP_ENABLED ) = ( 'name', 'image', 'flavor', 'master_flavor', 'keypair', 'external_network', 'fixed_network', 'fixed_subnet', 'dns_nameserver', 'docker_volume_size', 'docker_storage_driver', 'coe', 'network_driver', 'volume_driver', 'http_proxy', 'https_proxy', 'no_proxy', 'labels', 'tls_disabled', 'public', 'registry_enabled', 'server_type', 'master_lb_enabled', 'floating_ip_enabled' ) # Change it when magnum supports more function in the future. SUPPORTED_VOLUME_DRIVER = {'kubernetes': ['cinder'], 'swarm': ['rexray'], 'mesos': ['rexray']} properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('The cluster template name.'), ), IMAGE: properties.Schema( properties.Schema.STRING, _('The image name or UUID to use as a base image for cluster.'), constraints=[ constraints.CustomConstraint('glance.image') ], required=True ), FLAVOR: properties.Schema( properties.Schema.STRING, _('The nova flavor name or UUID to use when launching the ' 'cluster.'), constraints=[ constraints.CustomConstraint('nova.flavor') ] ), MASTER_FLAVOR: properties.Schema( properties.Schema.STRING, _('The nova flavor name or UUID to use when launching the ' 'master node of the cluster.'), constraints=[ constraints.CustomConstraint('nova.flavor') ] ), KEYPAIR: properties.Schema( properties.Schema.STRING, _('The name of the SSH keypair to load into the ' 'cluster nodes.'), constraints=[ constraints.CustomConstraint('nova.keypair') ] ), EXTERNAL_NETWORK: properties.Schema( properties.Schema.STRING, _('The external neutron network name or UUID to attach the ' 'Cluster.'), constraints=[ constraints.CustomConstraint('neutron.network') ], required=True ), FIXED_NETWORK: properties.Schema( properties.Schema.STRING, _('The fixed neutron network name or UUID to attach the ' 'Cluster.'), constraints=[ constraints.CustomConstraint('neutron.network') ] ), FIXED_SUBNET: properties.Schema( properties.Schema.STRING, _('The fixed neutron subnet name or UUID to attach the ' 'Cluster.'), constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), DNS_NAMESERVER: properties.Schema( properties.Schema.STRING, _('The DNS nameserver address.'), constraints=[ constraints.CustomConstraint('ip_addr') ] ), DOCKER_VOLUME_SIZE: properties.Schema( properties.Schema.INTEGER, _('The size in GB of the docker volume.'), constraints=[ constraints.Range(min=1), ] ), DOCKER_STORAGE_DRIVER: properties.Schema( properties.Schema.STRING, _('Select a docker storage driver.'), constraints=[ constraints.AllowedValues(['devicemapper', 'overlay']) ], default='devicemapper' ), COE: properties.Schema( properties.Schema.STRING, _('The Container 
Orchestration Engine for the cluster.'), constraints=[ constraints.AllowedValues(['kubernetes', 'swarm', 'mesos']) ], required=True ), NETWORK_DRIVER: properties.Schema( properties.Schema.STRING, _('The name of the driver used for instantiating ' 'container networks. By default, Magnum will choose the ' 'pre-configured network driver based on COE type.') ), VOLUME_DRIVER: properties.Schema( properties.Schema.STRING, _('The volume driver name for instantiating container volume.'), constraints=[ constraints.AllowedValues(['cinder', 'rexray']) ] ), HTTP_PROXY: properties.Schema( properties.Schema.STRING, _('The http_proxy address to use for nodes in cluster.') ), HTTPS_PROXY: properties.Schema( properties.Schema.STRING, _('The https_proxy address to use for nodes in cluster.') ), NO_PROXY: properties.Schema( properties.Schema.STRING, _('A comma separated list of addresses for which proxies should ' 'not be used in the cluster.') ), LABELS: properties.Schema( properties.Schema.MAP, _('Arbitrary labels in the form of key=value pairs to ' 'associate with cluster.') ), TLS_DISABLED: properties.Schema( properties.Schema.BOOLEAN, _('Disable TLS in the cluster.'), default=False ), PUBLIC: properties.Schema( properties.Schema.BOOLEAN, _('Make the cluster template public. To enable this option, you ' 'must own the right to publish in magnum, which by default is ' 'granted to admins only.'), update_allowed=True, default=False ), REGISTRY_ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('Enable the docker registry in the cluster.'), default=False ), SERVER_TYPE: properties.Schema( properties.Schema.STRING, _('Specify the server type to be used.'), constraints=[ constraints.AllowedValues(['vm', 'bm']) ], default='vm' ), MASTER_LB_ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('Indicates whether created clusters should have a load ' 'balancer for master nodes or not.'), default=True ), FLOATING_IP_ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('Indicates whether created clusters should have a floating ' 'ip or not.'), default=True ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.EXTERNAL_NETWORK], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='network' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.FIXED_NETWORK], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='network' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.FIXED_SUBNET], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='subnet' ) ] def validate(self): """Validate the provided params.""" super(ClusterTemplate, self).validate() coe = self.properties[self.COE] volume_driver = self.properties[self.VOLUME_DRIVER] # Confirm that the volume driver is supported by the Magnum COE per # SUPPORTED_VOLUME_DRIVER. 
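# For example, with coe 'swarm' only the 'rexray' volume driver passes
# this check; 'cinder' is accepted only when coe is 'kubernetes'.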
value = self.SUPPORTED_VOLUME_DRIVER[coe] if volume_driver is not None and volume_driver not in value: msg = (_('Volume driver type %(driver)s is not supported by ' 'COE:%(coe)s, expecting a %(supported_volume_driver)s ' 'volume driver.') % { 'driver': volume_driver, 'coe': coe, 'supported_volume_driver': value}) raise exception.StackValidationFailed(message=msg) def handle_create(self): args = { self.NAME: self.properties[ self.NAME] or self.physical_resource_name() } for key in [self.IMAGE, self.FLAVOR, self.MASTER_FLAVOR, self.KEYPAIR, self.EXTERNAL_NETWORK]: if self.properties[key] is not None: args["%s_id" % key] = self.properties[key] for p in [self.FIXED_NETWORK, self.FIXED_SUBNET, self.DNS_NAMESERVER, self.DOCKER_VOLUME_SIZE, self.DOCKER_STORAGE_DRIVER, self.COE, self.NETWORK_DRIVER, self.VOLUME_DRIVER, self.HTTP_PROXY, self.HTTPS_PROXY, self.NO_PROXY, self.LABELS, self.TLS_DISABLED, self.PUBLIC, self.REGISTRY_ENABLED, self.SERVER_TYPE, self.MASTER_LB_ENABLED, self.FLOATING_IP_ENABLED]: if self.properties[p] is not None: args[p] = self.properties[p] ct = self.client().cluster_templates.create(**args) self.resource_id_set(ct.uuid) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: patch = [{'op': 'replace', 'path': '/' + k, 'value': v} for k, v in six.iteritems(prop_diff)] self.client().cluster_templates.update(self.resource_id, patch) return self.resource_id def check_update_complete(self, id): cluster_template = self.client().cluster_templates.get(id) if cluster_template.status == 'UPDATE_IN_PROGRESS': return False elif cluster_template.status == 'UPDATE_COMPLETE': return True elif cluster_template.status == 'UPDATE_FAILED': msg = (_("Failed to update Cluster Template " "'%(name)s' - %(reason)s") % {'name': self.name, 'reason': cluster_template.status_reason}) raise exception.ResourceInError( status_reason=msg, resource_status=cluster_template.status) else: msg = (_("Unknown status updating Cluster Template " "'%(name)s' - %(reason)s") % {'name': self.name, 'reason': cluster_template.status_reason}) raise exception.ResourceUnknownStatus( status_reason=msg, resource_status=cluster_template.status) def resource_mapping(): return { 'OS::Magnum::ClusterTemplate': ClusterTemplate } heat-10.0.2/heat/engine/resources/openstack/swift/0000775000175000017500000000000013343562672022073 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/swift/__init__.py0000666000175000017500000000000013343562340024164 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/swift/container.py0000666000175000017500000002571713343562340024435 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
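# NOTE: handle_create() above maps Heat properties onto magnum client keyword
# arguments, appending '_id' to the properties that reference other resources
# (image, flavor, master_flavor, keypair, external_network). A standalone
# sketch of that mapping rule; the helper name and sample data are
# illustrative only, not part of this module:
#
#     def to_create_args(props, id_keys):
#         """Suffix resource-reference keys with '_id'; drop unset values."""
#         return {(k + '_id' if k in id_keys else k): v
#                 for k, v in props.items() if v is not None}
#
#     # e.g. to_create_args({'image': 'fedora-atomic', 'coe': 'swarm'},
#     #                     id_keys={'image'})
#     # -> {'image_id': 'fedora-atomic', 'coe': 'swarm'}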
from oslo_log import log as logging import six from six.moves.urllib import parse as urlparse from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import properties from heat.engine import resource from heat.engine import support LOG = logging.getLogger(__name__) class SwiftContainer(resource.Resource): """A resource for managing Swift containers. A container defines a namespace for objects. An object with the same name in two different containers represents two different objects. """ PROPERTIES = ( NAME, X_CONTAINER_READ, X_CONTAINER_WRITE, X_CONTAINER_META, X_ACCOUNT_META, PURGE_ON_DELETE, ) = ( 'name', 'X-Container-Read', 'X-Container-Write', 'X-Container-Meta', 'X-Account-Meta', 'PurgeOnDelete', ) ATTRIBUTES = ( DOMAIN_NAME, WEBSITE_URL, ROOT_URL, OBJECT_COUNT, BYTES_USED, HEAD_CONTAINER, ) = ( 'DomainName', 'WebsiteURL', 'RootURL', 'ObjectCount', 'BytesUsed', 'HeadContainer', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name for the container. If not specified, a unique name will ' 'be generated.') ), X_CONTAINER_READ: properties.Schema( properties.Schema.STRING, _('Specify the ACL permissions on who can read objects in the ' 'container.') ), X_CONTAINER_WRITE: properties.Schema( properties.Schema.STRING, _('Specify the ACL permissions on who can write objects to the ' 'container.') ), X_CONTAINER_META: properties.Schema( properties.Schema.MAP, _('A map of user-defined meta data to associate with the ' 'container. Each key in the map will set the header ' 'X-Container-Meta-{key} with the corresponding value.'), default={} ), X_ACCOUNT_META: properties.Schema( properties.Schema.MAP, _('A map of user-defined meta data to associate with the ' 'account. Each key in the map will set the header ' 'X-Account-Meta-{key} with the corresponding value.'), default={} ), PURGE_ON_DELETE: properties.Schema( properties.Schema.BOOLEAN, _("If True, delete any objects in the container " "when the container is deleted. " "Otherwise, deleting a non-empty container " "will result in an error."), default=False, support_status=support.SupportStatus( version='2015.1') ), } attributes_schema = { DOMAIN_NAME: attributes.Schema( _('The host from the container URL.'), type=attributes.Schema.STRING ), WEBSITE_URL: attributes.Schema( _('The URL of the container.'), type=attributes.Schema.STRING ), ROOT_URL: attributes.Schema( _('The parent URL of the container.'), type=attributes.Schema.STRING ), OBJECT_COUNT: attributes.Schema( _('The number of objects stored in the container.'), type=attributes.Schema.INTEGER ), BYTES_USED: attributes.Schema( _('The number of bytes stored in the container.'), type=attributes.Schema.INTEGER ), HEAD_CONTAINER: attributes.Schema( _('A map containing all headers for the container.'), type=attributes.Schema.MAP ), } default_client_name = 'swift' def physical_resource_name(self): name = self.properties[self.NAME] if name: return name return super(SwiftContainer, self).physical_resource_name() @staticmethod def _build_meta_headers(obj_type, meta_props): """Returns a new dict. Each key of new dict is prepended with "X-Container-Meta-". 
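For example, obj_type "container" with meta_props {'color': 'blue'} yields {'X-Container-Meta-color': 'blue'}; only the object type is title-cased, the keys keep their original case. 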
""" if meta_props is None: return {} return dict( ('X-' + obj_type.title() + '-Meta-' + k, v) for (k, v) in meta_props.items()) def handle_create(self): """Create a container.""" container = self.physical_resource_name() container_headers = SwiftContainer._build_meta_headers( "container", self.properties[self.X_CONTAINER_META]) account_headers = SwiftContainer._build_meta_headers( "account", self.properties[self.X_ACCOUNT_META]) for key in (self.X_CONTAINER_READ, self.X_CONTAINER_WRITE): if self.properties[key] is not None: container_headers[key] = self.properties[key] LOG.debug('SwiftContainer create container %(container)s with ' 'container headers %(container_headers)s and ' 'account headers %(account_headers)s', {'container': container, 'account_headers': account_headers, 'container_headers': container_headers}) self.client().put_container(container, container_headers) if account_headers: self.client().post_account(account_headers) self.resource_id_set(container) def _get_objects(self): try: container, objects = self.client().get_container(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) return None return objects def _deleter(self, obj=None): """Delete the underlying container or an object inside it.""" args = [self.resource_id] if obj: deleter = self.client().delete_object args.append(obj['name']) else: deleter = self.client().delete_container with self.client_plugin().ignore_not_found: deleter(*args) def handle_delete(self): if self.resource_id is None: return objects = self._get_objects() if objects: if self.properties[self.PURGE_ON_DELETE]: self._deleter(objects.pop()) # save first container refresh else: msg = _("Deleting non-empty container (%(id)s) " "when %(prop)s is False") % { 'id': self.resource_id, 'prop': self.PURGE_ON_DELETE} raise exception.ResourceActionNotSupported(action=msg) # objects is either None (container is gone already) or (empty) list if objects is not None: objects = len(objects) return objects def check_delete_complete(self, objects): if objects is None: # resource was not created or is gone already return True if objects: # integer >=0 from the first invocation objs = self._get_objects() if objs is None: return True # container is gone already if objs: self._deleter(objs.pop()) if objs: # save one last _get_objects() API call return False self._deleter() return True def handle_check(self): self.client().get_container(self.resource_id) def get_reference_id(self): return six.text_type(self.resource_id) def _resolve_attribute(self, key): parsed = list(urlparse.urlparse(self.client().url)) if key == self.DOMAIN_NAME: return parsed[1].split(':')[0] elif key == self.WEBSITE_URL: return '%s://%s%s/%s' % (parsed[0], parsed[1], parsed[2], self.resource_id) elif key == self.ROOT_URL: return '%s://%s%s' % (parsed[0], parsed[1], parsed[2]) elif self.resource_id and key in ( self.OBJECT_COUNT, self.BYTES_USED, self.HEAD_CONTAINER): try: headers = self.client().head_container(self.resource_id) except Exception as ex: if self.client_plugin().is_client_exception(ex): LOG.warning("Head container failed: %s", ex) return None raise else: if key == self.OBJECT_COUNT: return headers['x-container-object-count'] elif key == self.BYTES_USED: return headers['x-container-bytes-used'] elif key == self.HEAD_CONTAINER: return headers def _show_resource(self): return self.client().head_container(self.resource_id) def get_live_resource_data(self): resource_data = super(SwiftContainer, self).get_live_resource_data() account_data = 
self.client().head_account() resource_data['account_data'] = account_data return resource_data def parse_live_resource_data(self, resource_properties, resource_data): swift_reality = {} # swift container name can't be updated swift_reality.update({self.NAME: resource_properties.get(self.NAME)}) # PURGE_ON_DELETE property is used only on the heat side and isn't # passed to swift, so update it from existing resource properties swift_reality.update({self.PURGE_ON_DELETE: resource_properties.get( self.PURGE_ON_DELETE)}) swift_reality.update({self.X_CONTAINER_META: {}}) swift_reality.update({self.X_ACCOUNT_META: {}}) for key in [self.X_CONTAINER_READ, self.X_CONTAINER_WRITE]: swift_reality.update({key: resource_data.get(key.lower())}) for key in resource_properties.get(self.X_CONTAINER_META): prefixed_key = "%(prefix)s-%(key)s" % { 'prefix': self.X_CONTAINER_META.lower(), 'key': key.lower()} if prefixed_key in resource_data: swift_reality[self.X_CONTAINER_META].update( {key: resource_data[prefixed_key]}) for key in resource_properties.get(self.X_ACCOUNT_META): prefixed_key = "%(prefix)s-%(key)s" % { 'prefix': self.X_ACCOUNT_META.lower(), 'key': key.lower()} if prefixed_key in resource_data['account_data']: swift_reality[self.X_ACCOUNT_META].update( {key: resource_data['account_data'][prefixed_key]}) return swift_reality def resource_mapping(): return { 'OS::Swift::Container': SwiftContainer, } heat-10.0.2/heat/engine/resources/openstack/keystone/0000775000175000017500000000000013343562672022600 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/keystone/project.py0000666000175000017500000001561313343562340024620 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class KeystoneProject(resource.Resource): """Heat Template Resource for Keystone Project. Projects represent the base unit of ownership in OpenStack, in that all resources in OpenStack should be owned by a specific project. A project itself must be owned by a specific domain, and hence all project names are not globally unique, but unique to their domain. If the domain for a project is not specified, then it is added to the default domain. 
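For example, two different domains may each contain their own project named "dev" without any conflict. 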
""" support_status = support.SupportStatus( version='2015.1', message=_('Supported versions: keystone v3')) default_client_name = 'keystone' entity = 'projects' PROPERTIES = ( NAME, DOMAIN, DESCRIPTION, ENABLED, PARENT, TAGS, ) = ( 'name', 'domain', 'description', 'enabled', 'parent', 'tags', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of keystone project.'), update_allowed=True ), DOMAIN: properties.Schema( properties.Schema.STRING, _('Name or id of keystone domain.'), default='default', update_allowed=True, constraints=[constraints.CustomConstraint('keystone.domain')] ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of keystone project.'), default='', update_allowed=True ), ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('This project is enabled or disabled.'), default=True, update_allowed=True ), PARENT: properties.Schema( properties.Schema.STRING, _('The name or ID of parent of this keystone project ' 'in hierarchy.'), support_status=support.SupportStatus(version='6.0.0'), constraints=[constraints.CustomConstraint('keystone.project')] ), TAGS: properties.Schema( properties.Schema.LIST, _('A list of tags for labeling and sorting projects.'), support_status=support.SupportStatus(version='10.0.0'), default=[], update_allowed=True ), } ATTRIBUTES = ( NAME_ATTR, PARENT_ATTR, DOMAIN_ATTR, ENABLED_ATTR, IS_DOMAIN_ATTR ) = ( 'name', 'parent_id', 'domain_id', 'enabled', 'is_domain' ) attributes_schema = { NAME_ATTR: attributes.Schema( _('Project name.'), support_status=support.SupportStatus(version='10.0.0'), type=attributes.Schema.STRING ), PARENT_ATTR: attributes.Schema( _('Parent project id.'), support_status=support.SupportStatus(version='10.0.0'), type=attributes.Schema.STRING ), DOMAIN_ATTR: attributes.Schema( _('Domain id for project.'), support_status=support.SupportStatus(version='10.0.0'), type=attributes.Schema.STRING ), ENABLED_ATTR: attributes.Schema( _('Flag of enable project.'), support_status=support.SupportStatus(version='10.0.0'), type=attributes.Schema.BOOLEAN ), IS_DOMAIN_ATTR: attributes.Schema( _('Indicates whether the project also acts as a domain.'), support_status=support.SupportStatus(version='10.0.0'), type=attributes.Schema.BOOLEAN ), } def _resolve_attribute(self, name): if self.resource_id is None: return project = self.client().projects.get(self.resource_id) return getattr(project, name, None) def translation_rules(self, properties): return [ translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.DOMAIN], client_plugin=self.client_plugin(), finder='get_domain_id' ), translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.PARENT], client_plugin=self.client_plugin(), finder='get_project_id' ), ] def client(self): return super(KeystoneProject, self).client().client def handle_create(self): project_name = (self.properties[self.NAME] or self.physical_resource_name()) description = self.properties[self.DESCRIPTION] domain = self.properties[self.DOMAIN] enabled = self.properties[self.ENABLED] parent = self.properties[self.PARENT] tags = self.properties[self.TAGS] project = self.client().projects.create( name=project_name, domain=domain, description=description, enabled=enabled, parent=parent, tags=tags) self.resource_id_set(project.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: name = None # Don't update the name if no change if self.NAME in prop_diff: name = prop_diff[self.NAME] or 
self.physical_resource_name() description = prop_diff.get(self.DESCRIPTION) enabled = prop_diff.get(self.ENABLED) domain = prop_diff.get(self.DOMAIN, self.properties[self.DOMAIN]) tags = (prop_diff.get(self.TAGS) or self.properties[self.TAGS]) self.client().projects.update( project=self.resource_id, name=name, description=description, enabled=enabled, domain=domain, tags=tags ) def parse_live_resource_data(self, resource_properties, resource_data): result = super(KeystoneProject, self).parse_live_resource_data( resource_properties, resource_data) result[self.DOMAIN] = resource_data.get('domain_id') return result def resource_mapping(): return { 'OS::Keystone::Project': KeystoneProject } heat-10.0.2/heat/engine/resources/openstack/keystone/domain.py0000666000175000017500000000624513343562340024422 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import properties from heat.engine import resource from heat.engine import support class KeystoneDomain(resource.Resource): """Heat Template Resource for Keystone Domain. This plug-in helps to create, update and delete a keystone domain. Also it can be used for enable or disable a given keystone domain. """ support_status = support.SupportStatus( version='8.0.0', message=_('Supported versions: keystone v3')) default_client_name = 'keystone' entity = 'domains' PROPERTIES = ( NAME, DESCRIPTION, ENABLED ) = ( 'name', 'description', 'enabled' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('The name of the domain.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of keystone domain.'), update_allowed=True ), ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('This domain is enabled or disabled.'), default=True, update_allowed=True ) } def client(self): return super(KeystoneDomain, self).client().client def handle_create(self): name = (self.properties[self.NAME] or self.physical_resource_name()) description = self.properties[self.DESCRIPTION] enabled = self.properties[self.ENABLED] domain = self.client().domains.create( name=name, description=description, enabled=enabled) self.resource_id_set(domain.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: description = prop_diff.get(self.DESCRIPTION) enabled = prop_diff.get(self.ENABLED) name = None # Don't update the name if no change if self.NAME in prop_diff: name = prop_diff[self.NAME] or self.physical_resource_name() self.client().domains.update( domain=self.resource_id, name=name, description=description, enabled=enabled ) def parse_live_resource_data(self, resource_properties, resource_data): return {self.NAME: resource_data.get(self.NAME), self.DESCRIPTION: resource_data.get(self.DESCRIPTION), self.ENABLED: resource_data.get(self.ENABLED)} def resource_mapping(): return { 'OS::Keystone::Domain': KeystoneDomain } heat-10.0.2/heat/engine/resources/openstack/keystone/role_assignments.py0000666000175000017500000004307513343562351026533 
0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class KeystoneRoleAssignmentMixin(object): """Implements role assignments between user/groups and project/domain. heat_template_version: 2013-05-23 parameters: ... Group or User parameters group_role: type: string description: role group_role_domain: type: string description: group role domain group_role_project: type: string description: group role project resources: admin_group: type: OS::Keystone::Group OR OS::Keystone::User properties: ... Group or User properties roles: - role: {get_param: group_role} domain: {get_param: group_role_domain} - role: {get_param: group_role} project: {get_param: group_role_project} """ PROPERTIES = ( ROLES ) = ( 'roles' ) _ROLES_MAPPING_PROPERTIES = ( ROLE, DOMAIN, PROJECT ) = ( 'role', 'domain', 'project' ) mixin_properties_schema = { ROLES: properties.Schema( properties.Schema.LIST, _('List of role assignments.'), schema=properties.Schema( properties.Schema.MAP, _('Map between role with either project or domain.'), schema={ ROLE: properties.Schema( properties.Schema.STRING, _('Keystone role.'), required=True, constraints=([constraints. CustomConstraint('keystone.role')]) ), PROJECT: properties.Schema( properties.Schema.STRING, _('Keystone project.'), constraints=([constraints. CustomConstraint('keystone.project')]) ), DOMAIN: properties.Schema( properties.Schema.STRING, _('Keystone domain.'), constraints=([constraints. 
CustomConstraint('keystone.domain')]) ), } ), update_allowed=True ) } def _add_role_assignments_to_group(self, group_id, role_assignments): for role_assignment in self._normalize_to_id(role_assignments): if role_assignment.get(self.PROJECT) is not None: self.client().roles.grant( role=role_assignment.get(self.ROLE), project=role_assignment.get(self.PROJECT), group=group_id ) elif role_assignment.get(self.DOMAIN) is not None: self.client().roles.grant( role=role_assignment.get(self.ROLE), domain=role_assignment.get(self.DOMAIN), group=group_id ) def _add_role_assignments_to_user(self, user_id, role_assignments): for role_assignment in self._normalize_to_id(role_assignments): if role_assignment.get(self.PROJECT) is not None: self.client().roles.grant( role=role_assignment.get(self.ROLE), project=role_assignment.get(self.PROJECT), user=user_id ) elif role_assignment.get(self.DOMAIN) is not None: self.client().roles.grant( role=role_assignment.get(self.ROLE), domain=role_assignment.get(self.DOMAIN), user=user_id ) def _remove_role_assignments_from_group(self, group_id, role_assignments, current_assignments): for role_assignment in self._normalize_to_id(role_assignments): if role_assignment in current_assignments: if role_assignment.get(self.PROJECT) is not None: self.client().roles.revoke( role=role_assignment.get(self.ROLE), project=role_assignment.get(self.PROJECT), group=group_id ) elif role_assignment.get(self.DOMAIN) is not None: self.client().roles.revoke( role=role_assignment.get(self.ROLE), domain=role_assignment.get(self.DOMAIN), group=group_id ) def _remove_role_assignments_from_user(self, user_id, role_assignments, current_assignments): for role_assignment in self._normalize_to_id(role_assignments): if role_assignment in current_assignments: if role_assignment.get(self.PROJECT) is not None: self.client().roles.revoke( role=role_assignment.get(self.ROLE), project=role_assignment.get(self.PROJECT), user=user_id ) elif role_assignment.get(self.DOMAIN) is not None: self.client().roles.revoke( role=role_assignment.get(self.ROLE), domain=role_assignment.get(self.DOMAIN), user=user_id ) def _normalize_to_id(self, role_assignment_prps): role_assignments = [] if role_assignment_prps is None: return role_assignments for role_assignment in role_assignment_prps: role = role_assignment.get(self.ROLE) project = role_assignment.get(self.PROJECT) domain = role_assignment.get(self.DOMAIN) role_assignments.append({ self.ROLE: self.client_plugin().get_role_id(role), self.PROJECT: (self.client_plugin(). get_project_id(project)) if project else None, self.DOMAIN: (self.client_plugin(). 
get_domain_id(domain)) if domain else None }) return role_assignments def _find_differences(self, updated_prps, stored_prps): updated_role_project_assignments = [] updated_role_domain_assignments = [] # Split the updated properties into two sets of role assignments # (project, domain) for role_assignment in updated_prps or []: if role_assignment.get(self.PROJECT) is not None: updated_role_project_assignments.append( '%s:%s' % ( role_assignment[self.ROLE], role_assignment[self.PROJECT])) elif (role_assignment.get(self.DOMAIN) is not None): updated_role_domain_assignments.append( '%s:%s' % (role_assignment[self.ROLE], role_assignment[self.DOMAIN])) stored_role_project_assignments = [] stored_role_domain_assignments = [] # Split the stored properties into two sets of role assignments # (project, domain) for role_assignment in (stored_prps or []): if role_assignment.get(self.PROJECT) is not None: stored_role_project_assignments.append( '%s:%s' % ( role_assignment[self.ROLE], role_assignment[self.PROJECT])) elif (role_assignment.get(self.DOMAIN) is not None): stored_role_domain_assignments.append( '%s:%s' % (role_assignment[self.ROLE], role_assignment[self.DOMAIN])) new_role_assignments = [] removed_role_assignments = [] # NOTE: finding the diff of lists of strings is easier using 'set', # so the properties are converted to strings in the sections above # New items for item in (set(updated_role_project_assignments) - set(stored_role_project_assignments)): new_role_assignments.append( {self.ROLE: item[:item.find(':')], self.PROJECT: item[item.find(':') + 1:]} ) for item in (set(updated_role_domain_assignments) - set(stored_role_domain_assignments)): new_role_assignments.append( {self.ROLE: item[:item.find(':')], self.DOMAIN: item[item.find(':') + 1:]} ) # Old items for item in (set(stored_role_project_assignments) - set(updated_role_project_assignments)): removed_role_assignments.append( {self.ROLE: item[:item.find(':')], self.PROJECT: item[item.find(':') + 1:]} ) for item in (set(stored_role_domain_assignments) - set(updated_role_domain_assignments)): removed_role_assignments.append( {self.ROLE: item[:item.find(':')], self.DOMAIN: item[item.find(':') + 1:]} ) return new_role_assignments, removed_role_assignments def create_assignment(self, user_id=None, group_id=None): if self.properties.get(self.ROLES) is not None: if user_id is not None: self._add_role_assignments_to_user( user_id, self.properties.get(self.ROLES)) elif group_id is not None: self._add_role_assignments_to_group( group_id, self.properties.get(self.ROLES)) def update_assignment(self, prop_diff, user_id=None, group_id=None): # if there is no change do not update if self.ROLES in prop_diff: (new_role_assignments, removed_role_assignments) = self._find_differences( prop_diff.get(self.ROLES), self.properties[self.ROLES]) if len(new_role_assignments) > 0: if user_id is not None: self._add_role_assignments_to_user( user_id, new_role_assignments) elif group_id is not None: self._add_role_assignments_to_group( group_id, new_role_assignments) if len(removed_role_assignments) > 0: current_assignments = self.parse_list_assignments( user_id=user_id, group_id=group_id) if user_id is not None: self._remove_role_assignments_from_user( user_id, removed_role_assignments, current_assignments) elif group_id is not None: self._remove_role_assignments_from_group( group_id, removed_role_assignments, current_assignments) def delete_assignment(self, user_id=None, group_id=None): if self.properties[self.ROLES] is not None: 
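# Revoke only the assignments that still actually exist;
# parse_list_assignments() returns the live assignment list for
# this user or group.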
current_assignments = self.parse_list_assignments( user_id=user_id, group_id=group_id) if user_id is not None: self._remove_role_assignments_from_user( user_id, (self.properties[self.ROLES]), current_assignments) elif group_id is not None: self._remove_role_assignments_from_group( group_id, (self.properties[self.ROLES]), current_assignments) def validate_assignment_properties(self): if self.properties.get(self.ROLES) is not None: for role_assignment in self.properties.get(self.ROLES): project = role_assignment.get(self.PROJECT) domain = role_assignment.get(self.DOMAIN) if project is not None and domain is not None: raise exception.ResourcePropertyConflict(self.PROJECT, self.DOMAIN) if project is None and domain is None: msg = _('Either project or domain must be specified for' ' role %s') % role_assignment.get(self.ROLE) raise exception.StackValidationFailed(message=msg) def parse_list_assignments(self, user_id=None, group_id=None): """Method used for the get_live_state implementation in other resources.""" assignments = [] roles = [] if user_id is not None: assignments = self.client().role_assignments.list(user=user_id) elif group_id is not None: assignments = self.client().role_assignments.list(group=group_id) for assignment in assignments: values = assignment.to_dict() if not values.get('role') or not values.get('role').get('id'): continue role = { self.ROLE: values['role']['id'], self.DOMAIN: (values.get('scope') and values['scope'].get('domain') and values['scope'].get('domain').get('id')), self.PROJECT: (values.get('scope') and values['scope'].get('project') and values['scope'].get('project').get('id')), } roles.append(role) return roles class KeystoneUserRoleAssignment(resource.Resource, KeystoneRoleAssignmentMixin): """Resource for granting roles to a user. Resource for specifying users and their roles. """ support_status = support.SupportStatus( version='5.0.0', message=_('Supported versions: keystone v3')) default_client_name = 'keystone' PROPERTIES = ( USER, ) = ( 'user', ) properties_schema = { USER: properties.Schema( properties.Schema.STRING, _('Name or id of keystone user.'), required=True, update_allowed=True, constraints=[constraints.CustomConstraint('keystone.user')] ) } properties_schema.update( KeystoneRoleAssignmentMixin.mixin_properties_schema) def client(self): return super(KeystoneUserRoleAssignment, self).client().client @property def user_id(self): try: return self.client_plugin().get_user_id( self.properties.get(self.USER)) except Exception as ex: self.client_plugin().ignore_not_found(ex) return None def handle_create(self): self.create_assignment(user_id=self.user_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): self.update_assignment(user_id=self.user_id, prop_diff=prop_diff) def handle_delete(self): self.delete_assignment(user_id=self.user_id) def validate(self): super(KeystoneUserRoleAssignment, self).validate() self.validate_assignment_properties() class KeystoneGroupRoleAssignment(resource.Resource, KeystoneRoleAssignmentMixin): """Resource for granting roles to a group. Resource for specifying groups and their roles. 
""" support_status = support.SupportStatus( version='5.0.0', message=_('Supported versions: keystone v3')) default_client_name = 'keystone' PROPERTIES = ( GROUP, ) = ( 'group', ) properties_schema = { GROUP: properties.Schema( properties.Schema.STRING, _('Name or id of keystone group.'), required=True, update_allowed=True, constraints=[constraints.CustomConstraint('keystone.group')] ) } properties_schema.update( KeystoneRoleAssignmentMixin.mixin_properties_schema) def client(self): return super(KeystoneGroupRoleAssignment, self).client().client @property def group_id(self): try: return self.client_plugin().get_group_id( self.properties.get(self.GROUP)) except Exception as ex: self.client_plugin().ignore_not_found(ex) return None def handle_create(self): self.create_assignment(group_id=self.group_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): self.update_assignment(group_id=self.group_id, prop_diff=prop_diff) def handle_delete(self): self.delete_assignment(group_id=self.group_id) def validate(self): super(KeystoneGroupRoleAssignment, self).validate() self.validate_assignment_properties() def resource_mapping(): return { 'OS::Keystone::UserRoleAssignment': KeystoneUserRoleAssignment, 'OS::Keystone::GroupRoleAssignment': KeystoneGroupRoleAssignment } heat-10.0.2/heat/engine/resources/openstack/keystone/region.py0000666000175000017500000001011613343562340024426 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from six.moves.urllib import parse from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class KeystoneRegion(resource.Resource): """Heat Template Resource for Keystone Region. This plug-in helps to create, update and delete a keystone region. Also it can be used for enable or disable a given keystone region. """ support_status = support.SupportStatus( version='6.0.0', message=_('Supported versions: keystone v3')) default_client_name = 'keystone' entity = 'regions' PROPERTIES = ( ID, PARENT_REGION, DESCRIPTION, ENABLED ) = ( 'id', 'parent_region', 'description', 'enabled' ) properties_schema = { ID: properties.Schema( properties.Schema.STRING, _('The user-defined region ID and should unique to the OpenStack ' 'deployment. 
While creating the region, heat will URL-encode ' 'this ID.') ), PARENT_REGION: properties.Schema( properties.Schema.STRING, _('If the region is hierarchically a child of another region, ' 'set this parameter to the ID of the parent region.'), update_allowed=True, constraints=[constraints.CustomConstraint('keystone.region')] ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of keystone region.'), update_allowed=True ), ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('This region is enabled or disabled.'), default=True, update_allowed=True ) } def translation_rules(self, properties): return [ translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.PARENT_REGION], client_plugin=self.client_plugin(), finder='get_region_id' ) ] def client(self): return super(KeystoneRegion, self).client().client def handle_create(self): region_id = self.properties[self.ID] description = self.properties[self.DESCRIPTION] parent_region = self.properties[self.PARENT_REGION] enabled = self.properties[self.ENABLED] region = self.client().regions.create( id=parse.quote(region_id) if region_id else None, parent_region=parent_region, description=description, enabled=enabled) self.resource_id_set(region.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: description = prop_diff.get(self.DESCRIPTION) enabled = prop_diff.get(self.ENABLED) parent_region = prop_diff.get(self.PARENT_REGION) self.client().regions.update( region=self.resource_id, parent_region=parent_region, description=description, enabled=enabled ) def parse_live_resource_data(self, resource_properties, resource_data): return { self.DESCRIPTION: resource_data.get(self.DESCRIPTION), self.ENABLED: resource_data.get(self.ENABLED), self.PARENT_REGION: resource_data.get('parent_region_id') } def resource_mapping(): return { 'OS::Keystone::Region': KeystoneRegion } heat-10.0.2/heat/engine/resources/openstack/keystone/role.py0000666000175000017500000000576013343562340024115 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class KeystoneRole(resource.Resource): """Heat Template Resource for Keystone Role. Roles dictate the level of authorization the end user can obtain. Roles can be granted at either the domain or project level. A role can be assigned to an individual user or at the group level. Role names are unique within the owning domain. 
""" support_status = support.SupportStatus( version='2015.1', message=_('Supported versions: keystone v3')) default_client_name = 'keystone' entity = 'roles' PROPERTIES = ( NAME, DOMAIN, ) = ( 'name', 'domain', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of keystone role.'), update_allowed=True ), DOMAIN: properties.Schema( properties.Schema.STRING, _('Name or id of keystone domain.'), default='default', constraints=[constraints.CustomConstraint('keystone.domain')], support_status=support.SupportStatus(version='10.0.0') ) } def translation_rules(self, properties): return [ translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.DOMAIN], client_plugin=self.client_plugin(), finder='get_domain_id' ) ] def client(self): return super(KeystoneRole, self).client().client def handle_create(self): role_name = (self.properties[self.NAME] or self.physical_resource_name()) domain = self.properties[self.DOMAIN] role = self.client().roles.create(name=role_name, domain=domain) self.resource_id_set(role.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: # Don't update the name if no change if self.NAME in prop_diff: name = prop_diff[self.NAME] or self.physical_resource_name() self.client().roles.update(role=self.resource_id, name=name) def resource_mapping(): return { 'OS::Keystone::Role': KeystoneRole } heat-10.0.2/heat/engine/resources/openstack/keystone/endpoint.py0000666000175000017500000001266513343562340024776 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class KeystoneEndpoint(resource.Resource): """Heat Template Resource for Keystone Service Endpoint. Keystone endpoint is just the URL that can be used for accessing a service within OpenStack. Endpoint can be accessed by admin, by services or public, i.e. everyone can use this endpoint. 
""" support_status = support.SupportStatus( version='5.0.0', message=_('Supported versions: keystone v3')) default_client_name = 'keystone' entity = 'endpoints' PROPERTIES = ( NAME, REGION, SERVICE, INTERFACE, SERVICE_URL, ENABLED, ) = ( 'name', 'region', 'service', 'interface', 'url', 'enabled', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of keystone endpoint.'), update_allowed=True ), REGION: properties.Schema( properties.Schema.STRING, _('Name or Id of keystone region.'), update_allowed=True, constraints=[constraints.CustomConstraint('keystone.region')] ), SERVICE: properties.Schema( properties.Schema.STRING, _('Name or Id of keystone service.'), update_allowed=True, required=True, constraints=[constraints.CustomConstraint('keystone.service')] ), INTERFACE: properties.Schema( properties.Schema.STRING, _('Interface type of keystone service endpoint.'), update_allowed=True, required=True, constraints=[constraints.AllowedValues( ['public', 'internal', 'admin'] )] ), SERVICE_URL: properties.Schema( properties.Schema.STRING, _('URL of keystone service endpoint.'), update_allowed=True, required=True ), ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('This endpoint is enabled or disabled.'), default=True, update_allowed=True, support_status=support.SupportStatus(version='6.0.0') ) } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.SERVICE], client_plugin=self.client_plugin(), finder='get_service_id' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.REGION], client_plugin=self.client_plugin(), finder='get_region_id' ), ] def client(self): return super(KeystoneEndpoint, self).client().client def handle_create(self): region = self.properties[self.REGION] service = self.properties[self.SERVICE] interface = self.properties[self.INTERFACE] url = self.properties[self.SERVICE_URL] name = (self.properties[self.NAME] or self.physical_resource_name()) enabled = self.properties[self.ENABLED] endpoint = self.client().endpoints.create( region=region, service=service, interface=interface, url=url, name=name, enabled=enabled) self.resource_id_set(endpoint.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: region = prop_diff.get(self.REGION) service = prop_diff.get(self.SERVICE) interface = prop_diff.get(self.INTERFACE) url = prop_diff.get(self.SERVICE_URL) name = None # Don't update the name if no change if self.NAME in prop_diff: name = prop_diff[self.NAME] or self.physical_resource_name() enabled = prop_diff.get(self.ENABLED) self.client().endpoints.update( endpoint=self.resource_id, region=region, service=service, interface=interface, url=url, name=name, enabled=enabled) def parse_live_resource_data(self, resource_properties, resource_data): endpoint_reality = {} endpoint_reality.update( {self.SERVICE: resource_data.get('service_id'), self.REGION: resource_data.get('region_id')}) for key in (set(self.PROPERTIES) - {self.SERVICE, self.REGION}): endpoint_reality.update({key: resource_data.get(key)}) return endpoint_reality def resource_mapping(): return { 'OS::Keystone::Endpoint': KeystoneEndpoint } heat-10.0.2/heat/engine/resources/openstack/keystone/__init__.py0000666000175000017500000000000013343562340024671 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/keystone/service.py0000666000175000017500000000635413343562340024614 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 
(the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import properties from heat.engine import resource from heat.engine import support class KeystoneService(resource.Resource): """Heat Template Resource for Keystone Service. A resource that allows to create new service and manage it by Keystone. """ support_status = support.SupportStatus( version='5.0.0', message=_('Supported versions: keystone v3')) default_client_name = 'keystone' entity = 'services' PROPERTIES = ( NAME, DESCRIPTION, TYPE, ENABLED, ) = ( 'name', 'description', 'type', 'enabled', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of keystone service.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of keystone service.'), update_allowed=True ), TYPE: properties.Schema( properties.Schema.STRING, _('Type of keystone Service.'), update_allowed=True, required=True ), ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('This service is enabled or disabled.'), default=True, update_allowed=True, support_status=support.SupportStatus(version='6.0.0') ) } def client(self): return super(KeystoneService, self).client().client def handle_create(self): name = (self.properties[self.NAME] or self.physical_resource_name()) description = self.properties[self.DESCRIPTION] type = self.properties[self.TYPE] enabled = self.properties[self.ENABLED] service = self.client().services.create( name=name, description=description, type=type, enabled=enabled) self.resource_id_set(service.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: name = None # Don't update the name if no change if self.NAME in prop_diff: name = prop_diff[self.NAME] or self.physical_resource_name() description = prop_diff.get(self.DESCRIPTION) type = prop_diff.get(self.TYPE) enabled = prop_diff.get(self.ENABLED) self.client().services.update( service=self.resource_id, name=name, description=description, type=type, enabled=enabled) def resource_mapping(): return { 'OS::Keystone::Service': KeystoneService } heat-10.0.2/heat/engine/resources/openstack/keystone/user.py0000666000175000017500000002675113343562340024135 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources.openstack.keystone import role_assignments from heat.engine import support from heat.engine import translation class KeystoneUser(resource.Resource, role_assignments.KeystoneRoleAssignmentMixin): """Heat Template Resource for Keystone User. Users represent an individual API consumer. A user itself must be owned by a specific domain, and hence all user names are not globally unique, but only unique to their domain. """ support_status = support.SupportStatus( version='2015.1', message=_('Supported versions: keystone v3')) default_client_name = 'keystone' entity = 'users' PROPERTIES = ( NAME, DOMAIN, DESCRIPTION, ENABLED, EMAIL, PASSWORD, DEFAULT_PROJECT, GROUPS ) = ( 'name', 'domain', 'description', 'enabled', 'email', 'password', 'default_project', 'groups' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of keystone user.'), update_allowed=True ), DOMAIN: properties.Schema( properties.Schema.STRING, _('Name or ID of keystone domain.'), default='default', update_allowed=True, constraints=[constraints.CustomConstraint('keystone.domain')] ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of keystone user.'), default='', update_allowed=True ), ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('Keystone user is enabled or disabled.'), default=True, update_allowed=True ), EMAIL: properties.Schema( properties.Schema.STRING, _('Email address of keystone user.'), update_allowed=True ), PASSWORD: properties.Schema( properties.Schema.STRING, _('Password of keystone user.'), update_allowed=True ), DEFAULT_PROJECT: properties.Schema( properties.Schema.STRING, _('Name or ID of default project of keystone user.'), update_allowed=True, constraints=[constraints.CustomConstraint('keystone.project')] ), GROUPS: properties.Schema( properties.Schema.LIST, _('Keystone user groups.'), update_allowed=True, schema=properties.Schema( properties.Schema.STRING, _('Keystone user group.'), constraints=[constraints.CustomConstraint('keystone.group')] ) ) } properties_schema.update( role_assignments.KeystoneRoleAssignmentMixin.mixin_properties_schema) ATTRIBUTES = ( NAME_ATTR, DEFAULT_PROJECT_ATTR, DOMAIN_ATTR, ENABLED_ATTR, PASSWORD_EXPIRES_AT_ATTR ) = ( 'name', 'default_project_id', 'domain_id', 'enabled', 'password_expires_at' ) attributes_schema = { NAME_ATTR: attributes.Schema( _('User name.'), support_status=support.SupportStatus(version='9.0.0'), type=attributes.Schema.STRING ), DEFAULT_PROJECT_ATTR: attributes.Schema( _('Default project id for user.'), support_status=support.SupportStatus(version='9.0.0'), type=attributes.Schema.STRING ), DOMAIN_ATTR: attributes.Schema( _('Domain id for user.'), support_status=support.SupportStatus(version='9.0.0'), type=attributes.Schema.STRING ), ENABLED_ATTR: attributes.Schema( _('Flag of enable user.'), support_status=support.SupportStatus(version='9.0.0'), type=attributes.Schema.BOOLEAN ), PASSWORD_EXPIRES_AT_ATTR: attributes.Schema( _('Show user password expiration time.'), support_status=support.SupportStatus(version='9.0.0'), type=attributes.Schema.STRING ), } def translation_rules(self, properties): return [ translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.DOMAIN], client_plugin=self.client_plugin(), finder='get_domain_id' ), translation.TranslationRule( properties, 
translation.TranslationRule.RESOLVE, [self.DEFAULT_PROJECT], client_plugin=self.client_plugin(), finder='get_project_id' ), translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.GROUPS], client_plugin=self.client_plugin(), finder='get_group_id' ), ] def validate(self): super(KeystoneUser, self).validate() self.validate_assignment_properties() def client(self): return super(KeystoneUser, self).client().client def _update_user(self, user_id, domain, new_name=None, new_description=None, new_email=None, new_password=None, new_default_project=None, enabled=None): values = dict() if new_name is not None: values['name'] = new_name if new_description is not None: values['description'] = new_description if new_email is not None: values['email'] = new_email if new_password is not None: values['password'] = new_password if new_default_project is not None: values['default_project'] = new_default_project if enabled is not None: values['enabled'] = enabled # If there're no args above, keystone raises BadRequest error with # message about not enough parameters for updating, so return from # this method to prevent raising error. if not values: return values['user'] = user_id values['domain'] = domain return self.client().users.update(**values) def _add_user_to_groups(self, user_id, groups): if groups is not None: for group_id in groups: self.client().users.add_to_group(user_id, group_id) def _remove_user_from_groups(self, user_id, groups): if groups is not None: for group_id in groups: self.client().users.remove_from_group(user_id, group_id) def _find_diff(self, updated_prps, stored_prps): new_group_ids = list(set(updated_prps or []) - set(stored_prps or [])) removed_group_ids = list(set(stored_prps or []) - set(updated_prps or [])) return new_group_ids, removed_group_ids def _resolve_attribute(self, name): if self.resource_id is None: return user = self.client().users.get(self.resource_id) return getattr(user, name, None) def handle_create(self): user_name = (self.properties[self.NAME] or self.physical_resource_name()) description = self.properties[self.DESCRIPTION] domain = self.properties[self.DOMAIN] enabled = self.properties[self.ENABLED] email = self.properties[self.EMAIL] password = self.properties[self.PASSWORD] default_project = self.properties[self.DEFAULT_PROJECT] groups = self.properties[self.GROUPS] user = self.client().users.create( name=user_name, domain=domain, description=description, enabled=enabled, email=email, password=password, default_project=default_project) self.resource_id_set(user.id) self._add_user_to_groups(user.id, groups) self.create_assignment(user_id=user.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: name = None # Don't update the name if no change if self.NAME in prop_diff: name = prop_diff[self.NAME] or self.physical_resource_name() description = prop_diff.get(self.DESCRIPTION) enabled = prop_diff.get(self.ENABLED) email = prop_diff.get(self.EMAIL) password = prop_diff.get(self.PASSWORD) domain = (prop_diff.get(self.DOMAIN) or self.properties[self.DOMAIN]) default_project = prop_diff.get(self.DEFAULT_PROJECT) self._update_user( user_id=self.resource_id, domain=domain, new_name=name, new_description=description, enabled=enabled, new_default_project=default_project, new_email=email, new_password=password ) if self.GROUPS in prop_diff: (new_group_ids, removed_group_ids) = self._find_diff( prop_diff[self.GROUPS], self.properties[self.GROUPS]) if new_group_ids: self._add_user_to_groups(self.resource_id, 
new_group_ids) if removed_group_ids: self._remove_user_from_groups(self.resource_id, removed_group_ids) self.update_assignment(prop_diff=prop_diff, user_id=self.resource_id) def parse_live_resource_data(self, resource_properties, resource_data): user_reality = { self.ROLES: self.parse_list_assignments(user_id=self.resource_id), self.DEFAULT_PROJECT: resource_data.get('default_project_id'), self.DOMAIN: resource_data.get('domain_id'), self.GROUPS: [group.id for group in self.client().groups.list( user=self.resource_id)] } props_keys = [self.NAME, self.DESCRIPTION, self.ENABLED, self.EMAIL] for key in props_keys: user_reality.update({key: resource_data.get(key)}) return user_reality def handle_delete(self): if self.resource_id is not None: with self.client_plugin().ignore_not_found: if self.properties[self.GROUPS] is not None: self._remove_user_from_groups( self.resource_id, self.properties[self.GROUPS]) self.client().users.delete(self.resource_id) def resource_mapping(): return { 'OS::Keystone::User': KeystoneUser } heat-10.0.2/heat/engine/resources/openstack/keystone/group.py0000666000175000017500000001056413343562340024306 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources.openstack.keystone import role_assignments from heat.engine import support from heat.engine import translation class KeystoneGroup(resource.Resource, role_assignments.KeystoneRoleAssignmentMixin): """Heat Template Resource for Keystone Group. Groups are a container representing a collection of users. A group itself must be owned by a specific domain, and hence all group names are not globally unique, but only unique to their domain. 
""" support_status = support.SupportStatus( version='2015.1', message=_('Supported versions: keystone v3')) default_client_name = 'keystone' entity = 'groups' PROPERTIES = ( NAME, DOMAIN, DESCRIPTION ) = ( 'name', 'domain', 'description' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of keystone group.'), update_allowed=True ), DOMAIN: properties.Schema( properties.Schema.STRING, _('Name or id of keystone domain.'), default='default', update_allowed=True, constraints=[constraints.CustomConstraint('keystone.domain')] ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of keystone group.'), default='', update_allowed=True ) } def translation_rules(self, properties): return [ translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.DOMAIN], client_plugin=self.client_plugin(), finder='get_domain_id' ) ] properties_schema.update( role_assignments.KeystoneRoleAssignmentMixin.mixin_properties_schema) def validate(self): super(KeystoneGroup, self).validate() self.validate_assignment_properties() def client(self): return super(KeystoneGroup, self).client().client def handle_create(self): group_name = (self.properties[self.NAME] or self.physical_resource_name()) description = self.properties[self.DESCRIPTION] domain = self.properties[self.DOMAIN] group = self.client().groups.create( name=group_name, domain=domain, description=description) self.resource_id_set(group.id) self.create_assignment(group_id=group.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: name = None # Don't update the name if no change if self.NAME in prop_diff: name = prop_diff[self.NAME] or self.physical_resource_name() description = prop_diff.get(self.DESCRIPTION) domain = (prop_diff.get(self.DOMAIN) or self.properties[self.DOMAIN]) self.client().groups.update( group=self.resource_id, name=name, description=description, domain_id=domain) self.update_assignment(prop_diff=prop_diff, group_id=self.resource_id) def parse_live_resource_data(self, resource_properties, resource_data): return { self.NAME: resource_data.get(self.NAME), self.DESCRIPTION: resource_data.get(self.DESCRIPTION), self.DOMAIN: resource_data.get('domain_id'), self.ROLES: self.parse_list_assignments(group_id=self.resource_id) } def resource_mapping(): return { 'OS::Keystone::Group': KeystoneGroup } heat-10.0.2/heat/engine/resources/openstack/zun/0000775000175000017500000000000013343562672021553 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/zun/__init__.py0000666000175000017500000000000013343562340023644 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/zun/container.py0000666000175000017500000003112113343562351024101 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class Container(resource.Resource): """A resource that creates a Zun Container. Zun is the OpenStack containers service, and this resource creates and manages a container through it. """ support_status = support.SupportStatus(version='9.0.0') PROPERTIES = ( NAME, IMAGE, COMMAND, CPU, MEMORY, ENVIRONMENT, WORKDIR, LABELS, IMAGE_PULL_POLICY, RESTART_POLICY, INTERACTIVE, IMAGE_DRIVER, HINTS, HOSTNAME, SECURITY_GROUPS, MOUNTS, ) = ( 'name', 'image', 'command', 'cpu', 'memory', 'environment', 'workdir', 'labels', 'image_pull_policy', 'restart_policy', 'interactive', 'image_driver', 'hints', 'hostname', 'security_groups', 'mounts', ) _MOUNT_KEYS = ( VOLUME_ID, MOUNT_PATH, VOLUME_SIZE ) = ( 'volume_id', 'mount_path', 'volume_size', ) ATTRIBUTES = ( NAME, ADDRESSES ) = ( 'name', 'addresses' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the container.'), update_allowed=True ), IMAGE: properties.Schema( properties.Schema.STRING, _('Name or ID of the image.'), required=True ), COMMAND: properties.Schema( properties.Schema.STRING, _('Send command to the container.'), ), CPU: properties.Schema( properties.Schema.NUMBER, _('The number of virtual cpus.'), update_allowed=True ), MEMORY: properties.Schema( properties.Schema.INTEGER, _('The container memory size in MiB.'), update_allowed=True ), ENVIRONMENT: properties.Schema( properties.Schema.MAP, _('The environment variables.'), ), WORKDIR: properties.Schema( properties.Schema.STRING, _('The working directory for commands to run in.'), ), LABELS: properties.Schema( properties.Schema.MAP, _('Adds a map of labels to a container. ' 'May be used multiple times.'), ), IMAGE_PULL_POLICY: properties.Schema( properties.Schema.STRING, _('The policy which determines if the image should ' 'be pulled prior to starting the container.'), constraints=[ constraints.AllowedValues(['ifnotpresent', 'always', 'never']), ] ), RESTART_POLICY: properties.Schema( properties.Schema.STRING, _('Restart policy to apply when a container exits. 
Possible ' 'values are "no", "on-failure[:max-retry]", "always", and ' '"unless-stopped".'), ), INTERACTIVE: properties.Schema( properties.Schema.BOOLEAN, _('Keep STDIN open even if not attached.'), ), IMAGE_DRIVER: properties.Schema( properties.Schema.STRING, _('The image driver to use to pull container image.'), constraints=[ constraints.AllowedValues(['docker', 'glance']), ] ), HINTS: properties.Schema( properties.Schema.MAP, _('Arbitrary key-value pairs for scheduler to select host.'), support_status=support.SupportStatus(version='10.0.0'), ), HOSTNAME: properties.Schema( properties.Schema.STRING, _('The hostname of the container.'), support_status=support.SupportStatus(version='10.0.0'), ), SECURITY_GROUPS: properties.Schema( properties.Schema.LIST, _('List of security group names or IDs.'), support_status=support.SupportStatus(version='10.0.0'), default=[] ), MOUNTS: properties.Schema( properties.Schema.LIST, _('A list of volumes mounted inside the container.'), schema=properties.Schema( properties.Schema.MAP, schema={ VOLUME_ID: properties.Schema( properties.Schema.STRING, _('The ID or name of the cinder volume mount to ' 'the container.'), constraints=[ constraints.CustomConstraint('cinder.volume') ] ), VOLUME_SIZE: properties.Schema( properties.Schema.INTEGER, _('The size of the cinder volume to create.'), ), MOUNT_PATH: properties.Schema( properties.Schema.STRING, _('The filesystem path inside the container.'), required=True, ), }, ) ), } attributes_schema = { NAME: attributes.Schema( _('Name of the container.'), type=attributes.Schema.STRING ), ADDRESSES: attributes.Schema( _('A dict of all network addresses with corresponding port_id. ' 'Each network will have two keys in dict, they are network ' 'name and network id. ' 'The port ID may be obtained through the following expression: ' '"{get_attr: [, addresses, , 0, ' 'port]}".'), type=attributes.Schema.MAP ), } default_client_name = 'zun' entity = 'containers' def validate(self): super(Container, self).validate() policy = self.properties[self.RESTART_POLICY] if policy and not self._parse_restart_policy(policy): msg = _('restart_policy "%s" is invalid. Valid values are ' '"no", "on-failure[:max-retry]", "always", and ' '"unless-stopped".') % policy raise exception.StackValidationFailed(message=msg) mounts = self.properties[self.MOUNTS] or [] for mount in mounts: self._validate_mount(mount) def _validate_mount(self, mount): volume_id = mount.get(self.VOLUME_ID) volume_size = mount.get(self.VOLUME_SIZE) if volume_id is None and volume_size is None: msg = _('One of the properties "%(id)s" or "%(size)s" ' 'should be set for the specified mount of ' 'container "%(container)s".' 
'') % dict(id=self.VOLUME_ID, size=self.VOLUME_SIZE, container=self.name) raise exception.StackValidationFailed(message=msg) # Don't allow specifying volume_id and volume_size at the same time if volume_id and volume_size: raise exception.ResourcePropertyConflict( "/".join([self.MOUNTS, self.VOLUME_ID]), "/".join([self.MOUNTS, self.VOLUME_SIZE])) def handle_create(self): args = dict((k, v) for k, v in self.properties.items() if v is not None) policy = args.pop(self.RESTART_POLICY, None) if policy: args[self.RESTART_POLICY] = self._parse_restart_policy(policy) mounts = args.pop(self.MOUNTS, None) if mounts: args[self.MOUNTS] = self._build_mounts(mounts) container = self.client().containers.run(**args) self.resource_id_set(container.uuid) return container.uuid def _parse_restart_policy(self, policy): restart_policy = None if ":" in policy: policy, count = policy.split(":") if policy in ['on-failure']: restart_policy = {"Name": policy, "MaximumRetryCount": count or '0'} else: if policy in ['always', 'unless-stopped', 'on-failure', 'no']: restart_policy = {"Name": policy, "MaximumRetryCount": '0'} return restart_policy def _build_mounts(self, mounts): mnts = [] for mount in mounts: mnt_info = {'destination': mount[self.MOUNT_PATH]} if mount.get(self.VOLUME_ID): mnt_info['source'] = mount[self.VOLUME_ID] if mount.get(self.VOLUME_SIZE): mnt_info['size'] = mount[self.VOLUME_SIZE] mnts.append(mnt_info) return mnts def check_create_complete(self, id): container = self.client().containers.get(id) if container.status in ('Creating', 'Created'): return False elif container.status == 'Running': return True elif container.status == 'Stopped': if container.interactive: msg = (_("Error in creating container '%(name)s' - " "interactive mode was enabled but the container " "has stopped running") % {'name': self.name}) raise exception.ResourceInError( status_reason=msg, resource_status=container.status) return True elif container.status == 'Error': msg = (_("Error in creating container '%(name)s' - %(reason)s") % {'name': self.name, 'reason': container.status_reason}) raise exception.ResourceInError(status_reason=msg, resource_status=container.status) else: msg = (_("Unknown status of container '%(name)s' - %(reason)s") % {'name': self.name, 'reason': container.status_reason}) raise exception.ResourceUnknownStatus(status_reason=msg, resource_status=container.status) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if self.NAME in prop_diff: name = prop_diff.pop(self.NAME) self.client().containers.rename(self.resource_id, name=name) if prop_diff: self.client().containers.update(self.resource_id, **prop_diff) def handle_delete(self): if not self.resource_id: return try: self.client().containers.delete(self.resource_id, stop=True) return self.resource_id except Exception as exc: self.client_plugin().ignore_not_found(exc) def check_delete_complete(self, id): if not id: return True try: self.client().containers.get(id) except Exception as exc: self.client_plugin().ignore_not_found(exc) return True return False def _resolve_attribute(self, name): if self.resource_id is None: return try: container = self.client().containers.get(self.resource_id) except Exception as exc: self.client_plugin().ignore_not_found(exc) return '' if name == self.ADDRESSES: return self._extend_addresses(container) return getattr(container, name, '') def _extend_addresses(self, container): """Method adds network names to the list of addresses. This method is used only for resolving attributes. 
""" nets = self.neutron().list_networks()['networks'] id_name_mapping_on_network = {net['id']: net['name'] for net in nets} addresses = copy.deepcopy(container.addresses) for net_uuid in container.addresses or {}: addr_list = addresses[net_uuid] net_name = id_name_mapping_on_network.get(net_uuid) if not net_name: continue addresses.setdefault(net_name, []) addresses[net_name] += addr_list return addresses def resource_mapping(): return { 'OS::Zun::Container': Container } heat-10.0.2/heat/engine/resources/openstack/designate/0000775000175000017500000000000013343562672022702 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/designate/recordset.py0000666000175000017500000001334113343562337025250 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class DesignateRecordSet(resource.Resource): """Heat Template Resource for Designate RecordSet. Designate provides DNS-as-a-Service services for OpenStack. RecordSet helps to add more than one records. """ support_status = support.SupportStatus( version='8.0.0') PROPERTIES = ( NAME, TTL, DESCRIPTION, TYPE, RECORDS, ZONE ) = ( 'name', 'ttl', 'description', 'type', 'records', 'zone' ) _ALLOWED_TYPES = ( A, AAAA, CNAME, MX, SRV, TXT, SPF, NS, PTR, SSHFP, SOA ) = ( 'A', 'AAAA', 'CNAME', 'MX', 'SRV', 'TXT', 'SPF', 'NS', 'PTR', 'SSHFP', 'SOA' ) properties_schema = { # Based on RFC 1035, length of name is set to max of 255 NAME: properties.Schema( properties.Schema.STRING, _('RecordSet name.'), constraints=[constraints.Length(max=255)] ), # Based on RFC 1035, range for ttl is set to 1 to signed 32 bit number TTL: properties.Schema( properties.Schema.INTEGER, _('Time To Live (Seconds).'), update_allowed=True, constraints=[constraints.Range(min=1, max=2147483647)] ), # designate mandates to the max length of 160 for description DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of RecordSet.'), update_allowed=True, constraints=[constraints.Length(max=160)] ), TYPE: properties.Schema( properties.Schema.STRING, _('DNS RecordSet type.'), required=True, constraints=[constraints.AllowedValues( _ALLOWED_TYPES )] ), RECORDS: properties.Schema( properties.Schema.LIST, _('A list of data for this RecordSet. Each item will be a ' 'separate record in Designate These items should conform to the ' 'DNS spec for the record type - e.g. A records must be IPv4 ' 'addresses, CNAME records must be a hostname. DNS record data ' 'varies based on the type of record. 
For more details, please ' 'refer to RFC 1035.'), update_allowed=True, required=True ), ZONE: properties.Schema( properties.Schema.STRING, _('DNS Zone id or name.'), required=True, constraints=[constraints.CustomConstraint('designate.zone')] ), } default_client_name = 'designate' entity = 'recordsets' def client(self): return super(DesignateRecordSet, self).client(version=self.client_plugin().V2) def handle_create(self): args = dict((k, v) for k, v in six.iteritems(self.properties) if v) args['type_'] = args.pop(self.TYPE) if not args.get(self.NAME): args[self.NAME] = self.physical_resource_name() rs = self.client().recordsets.create(**args) self.resource_id_set(rs['id']) def _check_status_complete(self): recordset = self.client().recordsets.get( recordset=self.resource_id, zone=self.properties[self.ZONE] ) if recordset['status'] == 'ERROR': raise exception.ResourceInError( resource_status=recordset['status'], status_reason=_('Error in RecordSet')) return recordset['status'] != 'PENDING' def check_create_complete(self, handler_data=None): return self._check_status_complete() def handle_update(self, json_snippet, tmpl_diff, prop_diff): args = dict() for prp in (self.TTL, self.DESCRIPTION, self.RECORDS): if prop_diff.get(prp): args[prp] = prop_diff.get(prp) if prop_diff.get(self.TYPE): args['type_'] = prop_diff.get(self.TYPE) if len(args.keys()) > 0: self.client().recordsets.update( recordset=self.resource_id, zone=self.properties[self.ZONE], values=args) def check_update_complete(self, handler_data=None): return self._check_status_complete() def handle_delete(self): if self.resource_id is not None: with self.client_plugin().ignore_not_found: self.client().recordsets.delete( recordset=self.resource_id, zone=self.properties[self.ZONE] ) def check_delete_complete(self, handler_data=None): if self.resource_id is not None: with self.client_plugin().ignore_not_found: return self._check_status_complete() return True def _show_resource(self): return self.client().recordsets.get( recordset=self.resource_id, zone=self.properties[self.ZONE] ) def resource_mapping(): return { 'OS::Designate::RecordSet': DesignateRecordSet } heat-10.0.2/heat/engine/resources/openstack/designate/record.py0000666000175000017500000001512513343562337024536 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class DesignateRecord(resource.Resource): """Heat Template Resource for Designate Record. Designate provides DNS-as-a-Service services for OpenStack. A record is a storage unit in DNS, so a DNS name server is a server that stores the DNS records for a domain. Each record has a type and type-specific data. 
""" support_status = support.SupportStatus( status=support.HIDDEN, version='10.0.0', message=_('Use OS::Designate::RecordSet instead.'), previous_status=support.SupportStatus( status=support.DEPRECATED, version='8.0.0', previous_status=support.SupportStatus(version='5.0.0'))) entity = 'records' default_client_name = 'designate' PROPERTIES = ( NAME, TTL, DESCRIPTION, TYPE, DATA, PRIORITY, DOMAIN ) = ( 'name', 'ttl', 'description', 'type', 'data', 'priority', 'domain' ) _ALLOWED_TYPES = ( A, AAAA, CNAME, MX, SRV, TXT, SPF, NS, PTR, SSHFP, SOA ) = ( 'A', 'AAAA', 'CNAME', 'MX', 'SRV', 'TXT', 'SPF', 'NS', 'PTR', 'SSHFP', 'SOA' ) properties_schema = { # Based on RFC 1035, length of name is set to max of 255 NAME: properties.Schema( properties.Schema.STRING, _('Record name.'), required=True, constraints=[constraints.Length(max=255)] ), # Based on RFC 1035, range for ttl is set to 1 to signed 32 bit number TTL: properties.Schema( properties.Schema.INTEGER, _('Time To Live (Seconds).'), update_allowed=True, constraints=[constraints.Range(min=1, max=2147483647)] ), # designate mandates to the max length of 160 for description DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of record.'), update_allowed=True, constraints=[constraints.Length(max=160)] ), TYPE: properties.Schema( properties.Schema.STRING, _('DNS Record type.'), update_allowed=True, required=True, constraints=[constraints.AllowedValues( _ALLOWED_TYPES )] ), DATA: properties.Schema( properties.Schema.STRING, _('DNS record data, varies based on the type of record. For more ' 'details, please refer rfc 1035.'), update_allowed=True, required=True ), # Based on RFC 1035, range for priority is set to 0 to signed 16 bit # number PRIORITY: properties.Schema( properties.Schema.INTEGER, _('DNS record priority. It is considered only for MX and SRV ' 'types, otherwise, it is ignored.'), update_allowed=True, constraints=[constraints.Range(min=0, max=65536)] ), DOMAIN: properties.Schema( properties.Schema.STRING, _('DNS Domain id or name.'), required=True, constraints=[constraints.CustomConstraint('designate.domain')] ), } def handle_create(self): args = dict( name=self.properties[self.NAME], type=self.properties[self.TYPE], description=self.properties[self.DESCRIPTION], ttl=self.properties[self.TTL], data=self.properties[self.DATA], # priority is considered only for MX and SRV record. priority=(self.properties[self.PRIORITY] if self.properties[self.TYPE] in (self.MX, self.SRV) else None), domain=self.properties[self.DOMAIN] ) domain = self.client_plugin().record_create(**args) self.resource_id_set(domain.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): args = dict() if prop_diff.get(self.TTL): args['ttl'] = prop_diff.get(self.TTL) if prop_diff.get(self.DESCRIPTION): args['description'] = prop_diff.get(self.DESCRIPTION) if prop_diff.get(self.TYPE): args['type'] = prop_diff.get(self.TYPE) # priority is considered only for MX and SRV record. 
if prop_diff.get(self.PRIORITY): args['priority'] = (prop_diff.get(self.PRIORITY) if (prop_diff.get(self.TYPE) or self.properties[self.TYPE]) in (self.MX, self.SRV) else None) if prop_diff.get(self.DATA): args['data'] = prop_diff.get(self.DATA) if len(args.keys()) > 0: args['id'] = self.resource_id args['domain'] = self.properties[self.DOMAIN] self.client_plugin().record_update(**args) def handle_delete(self): if self.resource_id is not None: with self.client_plugin().ignore_not_found: self.client_plugin().record_delete( id=self.resource_id, domain=self.properties[self.DOMAIN] ) # FIXME(kanagaraj-manickam) Remove this method once designate defect # 1485552 is fixed. def _show_resource(self): kwargs = dict(domain=self.properties[self.DOMAIN], id=self.resource_id) return dict(six.iteritems(self.client_plugin().record_show(**kwargs))) def parse_live_resource_data(self, resource_properties, resource_data): record_reality = {} properties_keys = list(set(self.PROPERTIES) - {self.NAME, self.DOMAIN}) for key in properties_keys: record_reality.update({key: resource_data.get(key)}) return record_reality def resource_mapping(): return { 'OS::Designate::Record': DesignateRecord } heat-10.0.2/heat/engine/resources/openstack/designate/domain.py0000666000175000017500000001040413343562337024522 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class DesignateDomain(resource.Resource): """Heat Template Resource for Designate Domain. Designate provides DNS-as-a-Service services for OpenStack. So, domain is a realm with an identification string, unique in DNS. 
""" support_status = support.SupportStatus( status=support.HIDDEN, version='10.0.0', message=_('Use OS::Designate::Zone instead.'), previous_status=support.SupportStatus( status=support.DEPRECATED, version='8.0.0', previous_status=support.SupportStatus(version='5.0.0'))) entity = 'domains' default_client_name = 'designate' PROPERTIES = ( NAME, TTL, DESCRIPTION, EMAIL ) = ( 'name', 'ttl', 'description', 'email' ) ATTRIBUTES = ( SERIAL, ) = ( 'serial', ) properties_schema = { # Based on RFC 1035, length of name is set to max of 255 NAME: properties.Schema( properties.Schema.STRING, _('Domain name.'), required=True, constraints=[constraints.Length(max=255)] ), # Based on RFC 1035, range for ttl is set to 1 to signed 32 bit number TTL: properties.Schema( properties.Schema.INTEGER, _('Time To Live (Seconds).'), update_allowed=True, constraints=[constraints.Range(min=1, max=2147483647)] ), # designate mandates to the max length of 160 for description DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of domain.'), update_allowed=True, constraints=[constraints.Length(max=160)] ), EMAIL: properties.Schema( properties.Schema.STRING, _('Domain email.'), update_allowed=True, required=True ) } attributes_schema = { SERIAL: attributes.Schema( _("DNS domain serial."), type=attributes.Schema.STRING ), } def handle_create(self): args = dict((k, v) for k, v in six.iteritems(self.properties) if v) domain = self.client_plugin().domain_create(**args) self.resource_id_set(domain.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): args = dict() if prop_diff.get(self.EMAIL): args['email'] = prop_diff.get(self.EMAIL) if prop_diff.get(self.TTL): args['ttl'] = prop_diff.get(self.TTL) if prop_diff.get(self.DESCRIPTION): args['description'] = prop_diff.get(self.DESCRIPTION) if len(args.keys()) > 0: args['id'] = self.resource_id self.client_plugin().domain_update(**args) def _resolve_attribute(self, name): if self.resource_id is None: return if name == self.SERIAL: domain = self.client().domains.get(self.resource_id) return domain.serial # FIXME(kanagaraj-manickam) Remove this method once designate defect # 1485552 is fixed. def _show_resource(self): return dict(self.client().domains.get(self.resource_id).items()) def parse_live_resource_data(self, resource_properties, resource_data): domain_reality = {} for key in self.PROPERTIES: domain_reality.update({key: resource_data.get(key)}) return domain_reality def resource_mapping(): return { 'OS::Designate::Domain': DesignateDomain } heat-10.0.2/heat/engine/resources/openstack/designate/__init__.py0000666000175000017500000000000013343562337025001 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/designate/zone.py0000666000175000017500000001340113343562337024226 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class DesignateZone(resource.Resource): """Heat Template Resource for Designate Zone. Designate provides DNS-as-a-Service services for OpenStack. So, zone, part of domain is a realm with an identification string, unique in DNS. """ support_status = support.SupportStatus( version='8.0.0') PROPERTIES = ( NAME, TTL, DESCRIPTION, EMAIL, TYPE, MASTERS ) = ( 'name', 'ttl', 'description', 'email', 'type', 'masters' ) ATTRIBUTES = ( SERIAL, ) = ( 'serial', ) TYPES = ( PRIMARY, SECONDARY ) = ( 'PRIMARY', 'SECONDARY' ) properties_schema = { # Based on RFC 1035, length of name is set to max of 255 NAME: properties.Schema( properties.Schema.STRING, _('DNS Name for the zone.'), required=True, constraints=[constraints.Length(max=255)] ), # Based on RFC 1035, range for ttl is set to 1 to signed 32 bit number TTL: properties.Schema( properties.Schema.INTEGER, _('Time To Live (Seconds) for the zone.'), update_allowed=True, constraints=[constraints.Range(min=1, max=2147483647)] ), # designate mandates to the max length of 160 for description DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of zone.'), update_allowed=True, constraints=[constraints.Length(max=160)] ), EMAIL: properties.Schema( properties.Schema.STRING, _('E-mail for the zone. Used in SOA records for the zone. ' 'It is required for PRIMARY Type, otherwise ignored.'), update_allowed=True, ), TYPE: properties.Schema( properties.Schema.STRING, _('Type of zone. PRIMARY is controlled by Designate, SECONDARY ' 'zones are slaved from another DNS Server.'), default=PRIMARY, constraints=[constraints.AllowedValues( allowed=TYPES)] ), MASTERS: properties.Schema( properties.Schema.LIST, _('The servers to slave from to get DNS information and is ' 'mandatory for zone type SECONDARY, otherwise ignored.'), update_allowed=True ) } attributes_schema = { SERIAL: attributes.Schema( _("DNS zone serial number."), type=attributes.Schema.STRING ), } default_client_name = 'designate' entity = 'zones' def client(self): return super(DesignateZone, self).client(version=self.client_plugin().V2) def validate(self): super(DesignateZone, self).validate() def raise_invalid_exception(zone_type, prp): if self.properties.get(self.TYPE) == zone_type: if not self.properties.get(prp): msg = _('Property %(prp)s is required for zone type ' '%(zone_type)s') % { "prp": prp, "zone_type": zone_type } raise exception.StackValidationFailed(message=msg) raise_invalid_exception(self.PRIMARY, self.EMAIL) raise_invalid_exception(self.SECONDARY, self.MASTERS) def handle_create(self): args = dict((k, v) for k, v in six.iteritems(self.properties) if v) args['type_'] = args.pop(self.TYPE) zone = self.client().zones.create(**args) self.resource_id_set(zone['id']) def _check_status_complete(self): zone = self.client().zones.get(self.resource_id) if zone['status'] == 'ERROR': raise exception.ResourceInError( resource_status=zone['status'], status_reason=_('Error in zone')) return zone['status'] != 'PENDING' def check_create_complete(self, handler_data=None): return self._check_status_complete() def handle_update(self, json_snippet, tmpl_diff, prop_diff): args = dict() for prp in (self.EMAIL, self.TTL, self.DESCRIPTION, self.MASTERS): if prop_diff.get(prp): args[prp] = prop_diff.get(prp) if len(args.keys()) > 0: 
self.client().zones.update(self.resource_id, args) def check_update_complete(self, handler_data=None): return self._check_status_complete() def _resolve_attribute(self, name): if self.resource_id is None: return if name == self.SERIAL: zone = self.client().zones.get(self.resource_id) return zone[name] def check_delete_complete(self, handler_data=None): if handler_data: with self.client_plugin().ignore_not_found: return self._check_status_complete() return True def resource_mapping(): return { 'OS::Designate::Zone': DesignateZone } heat-10.0.2/heat/engine/resources/openstack/cinder/0000775000175000017500000000000013343562672022203 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/cinder/quota.py0000666000175000017500000001454413343562351023712 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class CinderQuota(resource.Resource): """A resource for creating cinder quotas. Cinder Quota is used to manage operational limits for projects. Currently, this resource can manage Cinder's gigabytes, snapshots, and volumes quotas. Note that default cinder security policy usage of this resource is limited to being used by administrators only. Administrators should be careful to create only one Cinder Quota resource per project, otherwise it will be hard for them to manage the quota properly. """ support_status = support.SupportStatus(version='7.0.0') default_client_name = 'cinder' entity = 'quotas' required_service_extension = 'os-quota-sets' PROPERTIES = (PROJECT, GIGABYTES, VOLUMES, SNAPSHOTS) = ( 'project', 'gigabytes', 'volumes', 'snapshots' ) properties_schema = { PROJECT: properties.Schema( properties.Schema.STRING, _('OpenStack Keystone Project.'), required=True, constraints=[ constraints.CustomConstraint('keystone.project') ] ), GIGABYTES: properties.Schema( properties.Schema.INTEGER, _('Quota for the amount of disk space (in Gigabytes). ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), VOLUMES: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of volumes. ' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ), SNAPSHOTS: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of snapshots. 
' 'Setting the value to -1 removes the limit.'), constraints=[ constraints.Range(min=-1), ], update_allowed=True ) } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.PROJECT], client_plugin=self.client_plugin('keystone'), finder='get_project_id') ] def handle_create(self): self._set_quota() self.resource_id_set(self.physical_resource_name()) def handle_update(self, json_snippet, tmpl_diff, prop_diff): self._set_quota(json_snippet.properties(self.properties_schema, self.context)) @classmethod def _validate_quota(cls, quota_property, quota_size, total_size): err_message = _("Invalid quota %(property)s value(s): %(value)s. " "Can not be less than the current usage value(s): " "%(total)s.") if quota_size < total_size: message_format = {'property': quota_property, 'value': quota_size, 'total': total_size} raise ValueError(err_message % message_format) def validate_quotas(self, project, **kwargs): search_opts = {'all_tenants': True, 'project_id': project} volume_list = None snapshot_list = None for key in kwargs: if kwargs[key] == -1: del kwargs[key] if self.GIGABYTES in kwargs: quota_size = kwargs[self.GIGABYTES] volume_list = self.client().volumes.list(search_opts=search_opts) snapshot_list = self.client().volume_snapshots.list( search_opts=search_opts) total_size = sum(item.size for item in ( volume_list + snapshot_list)) self._validate_quota(self.GIGABYTES, quota_size, total_size) if self.VOLUMES in kwargs: quota_size = kwargs[self.VOLUMES] if volume_list is None: volume_list = self.client().volumes.list( search_opts=search_opts) total_size = len(volume_list) self._validate_quota(self.VOLUMES, quota_size, total_size) if self.SNAPSHOTS in kwargs: quota_size = kwargs[self.SNAPSHOTS] if snapshot_list is None: snapshot_list = self.client().volume_snapshots.list( search_opts=search_opts) total_size = len(snapshot_list) self._validate_quota(self.SNAPSHOTS, quota_size, total_size) def _set_quota(self, props=None): if props is None: props = self.properties kwargs = dict((k, v) for k, v in props.items() if k != self.PROJECT and v is not None) # TODO(ricolin): Move this to stack validate stage. In some cases # we still can't get project or other properties form other resources # at validate stage. self.validate_quotas(props[self.PROJECT], **kwargs) self.client().quotas.update(props[self.PROJECT], **kwargs) def handle_delete(self): self.client().quotas.delete(self.properties[self.PROJECT]) def validate(self): super(CinderQuota, self).validate() if sum(1 for p in self.properties.values() if p is not None) <= 1: raise exception.PropertyUnspecifiedError(self.GIGABYTES, self.SNAPSHOTS, self.VOLUMES) def resource_mapping(): return { 'OS::Cinder::Quota': CinderQuota } heat-10.0.2/heat/engine/resources/openstack/cinder/encrypted_volume_type.py0000666000175000017500000001165713343562337027214 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
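# ---------------------------------------------------------------------------
# A minimal standalone sketch (illustrative, not part of the shipped tree)
# of the usage check behind CinderQuota.validate_quotas above: a requested
# limit must not drop below what the project already consumes, and -1
# (unlimited) is skipped entirely. The numbers are hypothetical.
def _demo_validate_quota(prop, quota_size, total_size):
    if quota_size == -1:
        # -1 removes the limit, so there is nothing to check.
        return
    if quota_size < total_size:
        raise ValueError('Invalid quota %s value(s): %s. Can not be less '
                         'than the current usage value(s): %s.'
                         % (prop, quota_size, total_size))

_demo_validate_quota('volumes', -1, 12)      # unlimited is always accepted
_demo_validate_quota('gigabytes', 500, 120)  # 500 >= 120 in use: accepted
# ---------------------------------------------------------------------------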
from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class CinderEncryptedVolumeType(resource.Resource): """A resource for encrypting a cinder volume type. A Volume Encryption Type is a collection of settings used to conduct encryption for a specific volume type. Note that default cinder security policy usage of this resource is limited to being used by administrators only. """ support_status = support.SupportStatus(version='5.0.0') default_client_name = 'cinder' entity = 'volume_encryption_types' required_service_extension = 'encryption' PROPERTIES = ( PROVIDER, CONTROL_LOCATION, CIPHER, KEY_SIZE, VOLUME_TYPE ) = ( 'provider', 'control_location', 'cipher', 'key_size', 'volume_type' ) properties_schema = { PROVIDER: properties.Schema( properties.Schema.STRING, _('The class that provides encryption support. ' 'For example, nova.volume.encryptors.luks.LuksEncryptor.'), required=True, update_allowed=True ), CONTROL_LOCATION: properties.Schema( properties.Schema.STRING, _('Notional service where encryption is performed. ' 'For example, front-end (i.e. performed by Nova).'), constraints=[ constraints.AllowedValues(['front-end', 'back-end']) ], default='front-end', update_allowed=True ), CIPHER: properties.Schema( properties.Schema.STRING, _('The encryption algorithm or mode. ' 'For example, aes-xts-plain64.'), constraints=[ constraints.AllowedValues( ['aes-xts-plain64', 'aes-cbc-essiv'] ) ], default=None, update_allowed=True ), KEY_SIZE: properties.Schema( properties.Schema.INTEGER, _('Size of encryption key, in bits. ' 'For example, 128 or 256.'), default=None, update_allowed=True ), VOLUME_TYPE: properties.Schema( properties.Schema.STRING, _('Name or ID of volume type (OS::Cinder::VolumeType).'), required=True, constraints=[constraints.CustomConstraint('cinder.vtype')] ), } def _get_vol_type_id(self, volume_type): id = self.client_plugin().get_volume_type(volume_type) return id def handle_create(self): body = { 'provider': self.properties[self.PROVIDER], 'cipher': self.properties[self.CIPHER], 'key_size': self.properties[self.KEY_SIZE], 'control_location': self.properties[self.CONTROL_LOCATION] } vol_type_id = self._get_vol_type_id(self.properties[self.VOLUME_TYPE]) encrypted_vol_type = self.client().volume_encryption_types.create( volume_type=vol_type_id, specs=body ) self.resource_id_set(encrypted_vol_type.volume_type_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().volume_encryption_types.update( volume_type=self.resource_id, specs=prop_diff ) def get_live_resource_data(self): try: resource_data = self._show_resource() if not resource_data: # use AttributeError: the API get call raises AttributeError # when the encryption type does not exist or is not ready (cinder bug 1562024). 
raise AttributeError() except Exception as ex: if (self.client_plugin().is_not_found(ex) or isinstance(ex, AttributeError)): raise exception.EntityNotFound(entity='Resource', name=self.name) raise return resource_data def parse_live_resource_data(self, resource_properties, resource_data): resource_reality = {} for key in set(self.PROPERTIES) - {self.VOLUME_TYPE}: resource_reality.update({key: resource_data.get(key)}) return resource_reality def resource_mapping(): return { 'OS::Cinder::EncryptedVolumeType': CinderEncryptedVolumeType } heat-10.0.2/heat/engine/resources/openstack/cinder/volume_type.py0000666000175000017500000001654613343562337025131 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class CinderVolumeType(resource.Resource): """A resource for creating cinder volume types. A volume type resource defines whether volumes that use this type are public, and which projects are allowed to work with it. It can also carry user-defined metadata. Note that default cinder security policy usage of this resource is limited to being used by administrators only. """ support_status = support.SupportStatus(version='2015.1') default_client_name = 'cinder' entity = 'volume_types' required_service_extension = 'os-types-manage' PROPERTIES = ( NAME, METADATA, IS_PUBLIC, DESCRIPTION, PROJECTS, ) = ( 'name', 'metadata', 'is_public', 'description', 'projects', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the volume type.'), required=True, update_allowed=True, ), METADATA: properties.Schema( properties.Schema.MAP, _('The extra specs key and value pairs of the volume type.'), update_allowed=True ), IS_PUBLIC: properties.Schema( properties.Schema.BOOLEAN, _('Whether the volume type is accessible to the public.'), default=True, support_status=support.SupportStatus(version='5.0.0'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the volume type.'), update_allowed=True, support_status=support.SupportStatus(version='5.0.0'), ), PROJECTS: properties.Schema( properties.Schema.LIST, _('Projects to add volume type access to. 
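The get_live_resource_data implementation above folds two failure modes into one outcome: a genuine not-found error from the client and an empty response surfaced as AttributeError both become EntityNotFound. A condensed sketch of that control flow, with simple callables standing in for the client plugin:

class EntityNotFound(Exception):
    pass

def get_live_resource_data(show_resource, is_not_found, name):
    # show_resource() returns the encryption type's data, possibly
    # empty; is_not_found(exc) is the client plugin's NotFound check.
    try:
        resource_data = show_resource()
        if not resource_data:
            # An empty payload is treated like not-found
            # (cf. cinder bug 1562024 referenced above).
            raise AttributeError()
    except Exception as ex:
        if is_not_found(ex) or isinstance(ex, AttributeError):
            raise EntityNotFound('Resource %s not found' % name)
        raise
    return resource_data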
NOTE: This ' 'property is only supported since Cinder API V2.'), support_status=support.SupportStatus(version='5.0.0'), update_allowed=True, schema=properties.Schema( properties.Schema.STRING, constraints=[ constraints.CustomConstraint('keystone.project') ], ), default=[], ), } def _add_projects_access(self, projects): for project in projects: project_id = self.client_plugin('keystone').get_project_id(project) self.client().volume_type_access.add_project_access( self.resource_id, project_id) def handle_create(self): args = { 'name': self.properties[self.NAME], 'is_public': self.properties[self.IS_PUBLIC], 'description': self.properties[self.DESCRIPTION] } volume_type = self.client().volume_types.create(**args) self.resource_id_set(volume_type.id) vtype_metadata = self.properties[self.METADATA] if vtype_metadata: volume_type.set_keys(vtype_metadata) self._add_projects_access(self.properties[self.PROJECTS]) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Update the name, description and metadata for volume type.""" update_args = {} # Update the name, description, is_public of cinder volume type is_public = self.properties[self.IS_PUBLIC] if self.DESCRIPTION in prop_diff: update_args['description'] = prop_diff.get(self.DESCRIPTION) if self.NAME in prop_diff: update_args['name'] = prop_diff.get(self.NAME) if self.IS_PUBLIC in prop_diff: is_public = prop_diff.get(self.IS_PUBLIC) update_args['is_public'] = is_public if update_args: self.client().volume_types.update(self.resource_id, **update_args) # Update the key-value pairs of cinder volume type. if self.METADATA in prop_diff: volume_type = self.client().volume_types.get(self.resource_id) old_keys = volume_type.get_keys() volume_type.unset_keys(old_keys) new_keys = prop_diff.get(self.METADATA) if new_keys is not None: volume_type.set_keys(new_keys) # Update the projects access for volume type if self.PROJECTS in prop_diff and not is_public: old_access_list = self.client().volume_type_access.list( self.resource_id) old_projects = [ac.to_dict()['project_id'] for ac in old_access_list] new_projects = prop_diff.get(self.PROJECTS) # first remove the old projects access for project_id in (set(old_projects) - set(new_projects)): self.client().volume_type_access.remove_project_access( self.resource_id, project_id) # add the new projects access self._add_projects_access(set(new_projects) - set(old_projects)) def validate(self): super(CinderVolumeType, self).validate() if self.properties[self.PROJECTS]: if self.properties[self.IS_PUBLIC]: msg = (_('Can not specify property "%s" ' 'if the volume type is public.') % self.PROJECTS) raise exception.StackValidationFailed(message=msg) def get_live_resource_data(self): try: resource_object = self.client().volume_types.get(self.resource_id) resource_data = resource_object.to_dict() except Exception as ex: if self.client_plugin().is_not_found(ex): raise exception.EntityNotFound(entity='Resource', name=self.name) raise return resource_object, resource_data def parse_live_resource_data(self, resource_properties, resource_data): resource_reality = {} resource_object, resource_data = resource_data resource_reality.update({ self.NAME: resource_data.get(self.NAME), self.DESCRIPTION: resource_data.get(self.DESCRIPTION) }) metadata = resource_object.get_keys() resource_reality.update({self.METADATA: metadata or {}}) is_public = resource_data.get(self.IS_PUBLIC) resource_reality.update({self.IS_PUBLIC: is_public}) projects = [] if not is_public: accesses = 
self.client().volume_type_access.list(self.resource_id) for access in accesses: projects.append(access.to_dict().get('project_id')) resource_reality.update({self.PROJECTS: projects}) return resource_reality def resource_mapping(): return { 'OS::Cinder::VolumeType': CinderVolumeType } heat-10.0.2/heat/engine/resources/openstack/cinder/volume.py0000666000175000017500000007652313343562337024101 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_serialization import jsonutils import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine.clients import progress from heat.engine import constraints from heat.engine import properties from heat.engine.resources import scheduler_hints as sh from heat.engine.resources import volume_base as vb from heat.engine import support from heat.engine import translation LOG = logging.getLogger(__name__) class CinderVolume(vb.BaseVolume, sh.SchedulerHintsMixin): """A resource that implements Cinder volumes. Cinder volume is a storage in the form of block devices. It can be used, for example, for providing storage to instance. Volume supports creation from snapshot, backup or image. Also volume can be created only by size. """ PROPERTIES = ( AVAILABILITY_ZONE, SIZE, SNAPSHOT_ID, BACKUP_ID, NAME, DESCRIPTION, VOLUME_TYPE, METADATA, IMAGE_REF, IMAGE, SOURCE_VOLID, CINDER_SCHEDULER_HINTS, READ_ONLY, MULTI_ATTACH, ) = ( 'availability_zone', 'size', 'snapshot_id', 'backup_id', 'name', 'description', 'volume_type', 'metadata', 'imageRef', 'image', 'source_volid', 'scheduler_hints', 'read_only', 'multiattach', ) ATTRIBUTES = ( AVAILABILITY_ZONE_ATTR, SIZE_ATTR, SNAPSHOT_ID_ATTR, DISPLAY_NAME_ATTR, DISPLAY_DESCRIPTION_ATTR, VOLUME_TYPE_ATTR, METADATA_ATTR, SOURCE_VOLID_ATTR, STATUS, CREATED_AT, BOOTABLE, METADATA_VALUES_ATTR, ENCRYPTED_ATTR, ATTACHMENTS, ATTACHMENTS_LIST, MULTI_ATTACH_ATTR, ) = ( 'availability_zone', 'size', 'snapshot_id', 'display_name', 'display_description', 'volume_type', 'metadata', 'source_volid', 'status', 'created_at', 'bootable', 'metadata_values', 'encrypted', 'attachments', 'attachments_list', 'multiattach', ) properties_schema = { AVAILABILITY_ZONE: properties.Schema( properties.Schema.STRING, _('The availability zone in which the volume will be created.') ), SIZE: properties.Schema( properties.Schema.INTEGER, _('The size of the volume in GB. ' 'On update only increase in size is supported. 
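A sketch of the two volume-type resources above in a HOT template, again rendered via PyYAML; the type name, project and backend metadata are illustrative placeholders, and the encryption type references the volume type with get_resource:

import yaml

# Illustrative HOT snippet: a non-public volume type shared with one
# project, plus an encryption type layered on it.
template = {
    'heat_template_version': '2016-10-14',
    'resources': {
        'fast_type': {
            'type': 'OS::Cinder::VolumeType',
            'properties': {
                'name': 'encrypted-fast',            # placeholder name
                'is_public': False,
                'projects': ['demo-project'],        # placeholder project
                'metadata': {'volume_backend_name': 'lvm'},
            },
        },
        'fast_type_encryption': {
            'type': 'OS::Cinder::EncryptedVolumeType',
            'properties': {
                'provider': 'nova.volume.encryptors.luks.LuksEncryptor',
                'control_location': 'front-end',
                'cipher': 'aes-xts-plain64',
                'key_size': 256,
                'volume_type': {'get_resource': 'fast_type'},
            },
        },
    },
}

print(yaml.safe_dump(template, default_flow_style=False))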
This property ' 'is required unless property %(backup)s or %(vol)s or ' '%(snapshot)s is specified.') % dict(backup=BACKUP_ID, vol=SOURCE_VOLID, snapshot=SNAPSHOT_ID), update_allowed=True, constraints=[ constraints.Range(min=1), ] ), SNAPSHOT_ID: properties.Schema( properties.Schema.STRING, _('If specified, the snapshot to create the volume from.'), constraints=[ constraints.CustomConstraint('cinder.snapshot') ] ), BACKUP_ID: properties.Schema( properties.Schema.STRING, _('If specified, the backup to create the volume from.'), update_allowed=True, constraints=[ constraints.CustomConstraint('cinder.backup') ] ), NAME: properties.Schema( properties.Schema.STRING, _('A name used to distinguish the volume.'), update_allowed=True, ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('A description of the volume.'), update_allowed=True, ), VOLUME_TYPE: properties.Schema( properties.Schema.STRING, _('If specified, the type of volume to use, mapping to a ' 'specific backend.'), constraints=[ constraints.CustomConstraint('cinder.vtype') ], update_allowed=True ), METADATA: properties.Schema( properties.Schema.MAP, _('Key/value pairs to associate with the volume.'), update_allowed=True, default={} ), IMAGE_REF: properties.Schema( properties.Schema.STRING, _('The ID of the image to create the volume from.'), support_status=support.SupportStatus( status=support.HIDDEN, message=_('Use property %s.') % IMAGE, version='5.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.1' ) ) ), IMAGE: properties.Schema( properties.Schema.STRING, _('If specified, the name or ID of the image to create the ' 'volume from.'), constraints=[ constraints.CustomConstraint('glance.image') ] ), SOURCE_VOLID: properties.Schema( properties.Schema.STRING, _('If specified, the volume to use as source.'), constraints=[ constraints.CustomConstraint('cinder.volume') ] ), CINDER_SCHEDULER_HINTS: properties.Schema( properties.Schema.MAP, _('Arbitrary key-value pairs specified by the client to help ' 'the Cinder scheduler creating a volume.'), support_status=support.SupportStatus(version='2015.1') ), READ_ONLY: properties.Schema( properties.Schema.BOOLEAN, _('Enables or disables read-only access mode of volume.'), support_status=support.SupportStatus(version='5.0.0'), update_allowed=True, ), MULTI_ATTACH: properties.Schema( properties.Schema.BOOLEAN, _('Whether allow the volume to be attached more than once.'), support_status=support.SupportStatus(version='6.0.0'), default=False ), } attributes_schema = { AVAILABILITY_ZONE_ATTR: attributes.Schema( _('The availability zone in which the volume is located.'), type=attributes.Schema.STRING ), SIZE_ATTR: attributes.Schema( _('The size of the volume in GB.'), type=attributes.Schema.STRING ), SNAPSHOT_ID_ATTR: attributes.Schema( _('The snapshot the volume was created from, if any.'), type=attributes.Schema.STRING ), DISPLAY_NAME_ATTR: attributes.Schema( _('Name of the volume.'), type=attributes.Schema.STRING ), DISPLAY_DESCRIPTION_ATTR: attributes.Schema( _('Description of the volume.'), type=attributes.Schema.STRING ), VOLUME_TYPE_ATTR: attributes.Schema( _('The type of the volume mapping to a backend, if any.'), type=attributes.Schema.STRING ), METADATA_ATTR: attributes.Schema( _('Key/value pairs associated with the volume.'), type=attributes.Schema.STRING ), SOURCE_VOLID_ATTR: attributes.Schema( _('The volume used as source, if any.'), type=attributes.Schema.STRING ), STATUS: attributes.Schema( _('The current status of the volume.'), 
type=attributes.Schema.STRING ), CREATED_AT: attributes.Schema( _('The timestamp indicating volume creation.'), type=attributes.Schema.STRING ), BOOTABLE: attributes.Schema( _('Boolean indicating if the volume can be booted or not.'), type=attributes.Schema.STRING ), METADATA_VALUES_ATTR: attributes.Schema( _('Key/value pairs associated with the volume in raw dict form.'), type=attributes.Schema.MAP ), ENCRYPTED_ATTR: attributes.Schema( _('Boolean indicating if the volume is encrypted or not.'), type=attributes.Schema.STRING ), ATTACHMENTS: attributes.Schema( _('A string representation of the list of attachments of the ' 'volume.'), type=attributes.Schema.STRING, cache_mode=attributes.Schema.CACHE_NONE, support_status=support.SupportStatus( status=support.DEPRECATED, message=_('Use property %s.') % ATTACHMENTS_LIST, version='9.0.0', previous_status=support.SupportStatus( status=support.SUPPORTED, version='2015.1' ) ) ), ATTACHMENTS_LIST: attributes.Schema( _('The list of attachments of the volume.'), type=attributes.Schema.LIST, cache_mode=attributes.Schema.CACHE_NONE, support_status=support.SupportStatus(version='9.0.0'), ), MULTI_ATTACH_ATTR: attributes.Schema( _('Boolean indicating whether allow the volume to be attached ' 'more than once.'), type=attributes.Schema.BOOLEAN, support_status=support.SupportStatus(version='6.0.0'), ), } _volume_creating_status = ['creating', 'restoring-backup', 'downloading'] entity = 'volumes' def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.IMAGE], value_path=[self.IMAGE_REF] ) ] def _name(self): name = self.properties[self.NAME] if name: return name return super(CinderVolume, self)._name() def _description(self): return self.properties[self.DESCRIPTION] def _create_arguments(self): arguments = { 'size': self.properties[self.SIZE], 'availability_zone': self.properties[self.AVAILABILITY_ZONE], } scheduler_hints = self._scheduler_hints( self.properties[self.CINDER_SCHEDULER_HINTS]) if scheduler_hints: arguments[self.CINDER_SCHEDULER_HINTS] = scheduler_hints if self.properties[self.IMAGE]: arguments['imageRef'] = self.client_plugin( 'glance').find_image_by_name_or_id( self.properties[self.IMAGE]) elif self.properties[self.IMAGE_REF]: arguments['imageRef'] = self.properties[self.IMAGE_REF] optionals = (self.SNAPSHOT_ID, self.VOLUME_TYPE, self.SOURCE_VOLID, self.METADATA, self.MULTI_ATTACH) arguments.update((prop, self.properties[prop]) for prop in optionals if self.properties[prop] is not None) return arguments def _resolve_attribute(self, name): if self.resource_id is None: return cinder = self.client() vol = cinder.volumes.get(self.resource_id) if name == self.METADATA_ATTR: return six.text_type(jsonutils.dumps(vol.metadata)) elif name == self.METADATA_VALUES_ATTR: return vol.metadata if name == self.DISPLAY_NAME_ATTR: return vol.name elif name == self.DISPLAY_DESCRIPTION_ATTR: return vol.description elif name == self.ATTACHMENTS_LIST: return vol.attachments return six.text_type(getattr(vol, name)) def check_create_complete(self, vol_id): complete = super(CinderVolume, self).check_create_complete(vol_id) # Cinder just supports update read only for volume in available, # if we update in handle_create(), maybe the volume still in # creating, then cinder will raise an exception if complete: self._store_config_default_properties() self._update_read_only(self.properties[self.READ_ONLY]) return complete def _store_config_default_properties(self, attributes=None): """Method for storing 
default values of properties in resource data. Some properties have default values, specified in project configuration file, so cannot be hardcoded into properties_schema, but should be stored for further using. So need to get created resource and take required property's value. """ if attributes is None: attributes = self._show_resource() if attributes.get('volume_type') is not None: self.data_set(self.VOLUME_TYPE, attributes['volume_type']) else: self.data_delete(self.VOLUME_TYPE) def _extend_volume(self, new_size): try: self.client().volumes.extend(self.resource_id, new_size) except Exception as ex: if self.client_plugin().is_client_exception(ex): raise exception.Error(_( "Failed to extend volume %(vol)s - %(err)s") % { 'vol': self.resource_id, 'err': six.text_type(ex)}) else: raise return True def _update_read_only(self, read_only_flag): if read_only_flag is not None: self.client().volumes.update_readonly_flag(self.resource_id, read_only_flag) return True def _check_extend_volume_complete(self): vol = self.client().volumes.get(self.resource_id) if vol.status == 'extending': LOG.debug("Volume %s is being extended", vol.id) return False if vol.status != 'available': LOG.info("Resize failed: Volume %(vol)s " "is in %(status)s state.", {'vol': vol.id, 'status': vol.status}) raise exception.ResourceUnknownStatus( resource_status=vol.status, result=_('Volume resize failed')) LOG.info('Volume %(id)s resize complete', {'id': vol.id}) return True def _backup_restore(self, vol_id, backup_id): try: self.client().restores.restore(backup_id, vol_id) except Exception as ex: if self.client_plugin().is_client_exception(ex): raise exception.Error(_( "Failed to restore volume %(vol)s from backup %(backup)s " "- %(err)s") % {'vol': vol_id, 'backup': backup_id, 'err': ex}) else: raise return True def _check_backup_restore_complete(self): vol = self.client().volumes.get(self.resource_id) if vol.status == 'restoring-backup': LOG.debug("Volume %s is being restoring from backup", vol.id) return False if vol.status != 'available': LOG.info("Restore failed: Volume %(vol)s is in %(status)s " "state.", {'vol': vol.id, 'status': vol.status}) raise exception.ResourceUnknownStatus( resource_status=vol.status, result=_('Volume backup restore failed')) LOG.info('Volume %s backup restore complete', vol.id) return True def needs_replace_failed(self): if not self.resource_id: return True with self.client_plugin().ignore_not_found: vol = self.client().volumes.get(self.resource_id) return vol.status in ('error', 'deleting') return True def handle_update(self, json_snippet, tmpl_diff, prop_diff): vol = None cinder = self.client() prg_resize = None prg_attach = None prg_detach = None prg_restore = None prg_access = None # update the name and description for cinder volume if self.NAME in prop_diff or self.DESCRIPTION in prop_diff: vol = cinder.volumes.get(self.resource_id) update_name = (prop_diff.get(self.NAME) or self.properties[self.NAME]) update_description = (prop_diff.get(self.DESCRIPTION) or self.properties[self.DESCRIPTION]) kwargs = self._fetch_name_and_description(update_name, update_description) cinder.volumes.update(vol, **kwargs) # update the metadata for cinder volume if self.METADATA in prop_diff: if not vol: vol = cinder.volumes.get(self.resource_id) metadata = prop_diff.get(self.METADATA) cinder.volumes.update_all_metadata(vol, metadata) # retype if self.VOLUME_TYPE in prop_diff: if not vol: vol = cinder.volumes.get(self.resource_id) new_vol_type = prop_diff.get(self.VOLUME_TYPE) cinder.volumes.retype(vol, 
new_vol_type, 'never') # update read_only access mode if self.READ_ONLY in prop_diff: if not vol: vol = cinder.volumes.get(self.resource_id) flag = prop_diff.get(self.READ_ONLY) prg_access = progress.VolumeUpdateAccessModeProgress( read_only=flag) prg_detach, prg_attach = self._detach_attach_progress(vol) # restore the volume from backup if self.BACKUP_ID in prop_diff: if not vol: vol = cinder.volumes.get(self.resource_id) prg_restore = progress.VolumeBackupRestoreProgress( vol_id=self.resource_id, backup_id=prop_diff.get(self.BACKUP_ID)) prg_detach, prg_attach = self._detach_attach_progress(vol) # extend volume size if self.SIZE in prop_diff: if not vol: vol = cinder.volumes.get(self.resource_id) new_size = prop_diff[self.SIZE] if new_size < vol.size: raise exception.NotSupported(feature=_("Shrinking volume")) elif new_size > vol.size: prg_resize = progress.VolumeResizeProgress(size=new_size) prg_detach, prg_attach = self._detach_attach_progress(vol) return prg_restore, prg_detach, prg_resize, prg_access, prg_attach def _detach_attach_progress(self, vol): prg_attach = None prg_detach = None if vol.attachments: # NOTE(pshchelo): # this relies on current behavior of cinder attachments, # i.e. volume attachments is a list with len<=1, # so the volume can be attached only to single instance, # and id of attachment is the same as id of the volume # it describes, so detach/attach the same volume # will not change volume attachment id. server_id = vol.attachments[0]['server_id'] device = vol.attachments[0]['device'] attachment_id = vol.attachments[0]['id'] prg_detach = progress.VolumeDetachProgress( server_id, vol.id, attachment_id) prg_attach = progress.VolumeAttachProgress( server_id, vol.id, device) return prg_detach, prg_attach def _detach_volume_to_complete(self, prg_detach): if not prg_detach.called: self.client_plugin('nova').detach_volume(prg_detach.srv_id, prg_detach.attach_id) prg_detach.called = True return False if not prg_detach.cinder_complete: cinder_complete_res = self.client_plugin( ).check_detach_volume_complete(prg_detach.vol_id) prg_detach.cinder_complete = cinder_complete_res return False if not prg_detach.nova_complete: prg_detach.nova_complete = self.client_plugin( 'nova').check_detach_volume_complete(prg_detach.srv_id, prg_detach.attach_id) return False def _attach_volume_to_complete(self, prg_attach): if not prg_attach.called: prg_attach.called = self.client_plugin('nova').attach_volume( prg_attach.srv_id, prg_attach.vol_id, prg_attach.device) return False if not prg_attach.complete: prg_attach.complete = self.client_plugin( ).check_attach_volume_complete(prg_attach.vol_id) return prg_attach.complete def check_update_complete(self, checkers): prg_restore, prg_detach, prg_resize, prg_access, prg_attach = checkers # detach volume if prg_detach: if not prg_detach.nova_complete: self._detach_volume_to_complete(prg_detach) return False if prg_restore: if not prg_restore.called: prg_restore.called = self._backup_restore( prg_restore.vol_id, prg_restore.backup_id) return False if not prg_restore.complete: prg_restore.complete = self._check_backup_restore_complete() return prg_restore.complete and not prg_resize # resize volume if prg_resize: if not prg_resize.called: prg_resize.called = self._extend_volume(prg_resize.size) return False if not prg_resize.complete: prg_resize.complete = self._check_extend_volume_complete() return prg_resize.complete and not prg_attach # update read_only access mode if prg_access: if not prg_access.called: prg_access.called = 
self._update_read_only( prg_access.read_only) return False # reattach volume back if prg_attach: return self._attach_volume_to_complete(prg_attach) return True def handle_snapshot(self): backup = self.client().backups.create(self.resource_id, force=True) self.data_set('backup_id', backup.id) return backup.id def check_snapshot_complete(self, backup_id): backup = self.client().backups.get(backup_id) if backup.status == 'creating': return False if backup.status == 'available': return True raise exception.Error(backup.fail_reason) def handle_delete_snapshot(self, snapshot): backup_id = snapshot['resource_data'].get('backup_id') if not backup_id: return try: self.client().backups.delete(backup_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) return else: return backup_id def check_delete_snapshot_complete(self, backup_id): if not backup_id: return True try: self.client().backups.get(backup_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) return True else: return False def _build_exclusive_options(self): exclusive_options = [] allow_no_size_options = [] if self.properties.get(self.SNAPSHOT_ID): exclusive_options.append(self.SNAPSHOT_ID) allow_no_size_options.append(self.SNAPSHOT_ID) if self.properties.get(self.SOURCE_VOLID): exclusive_options.append(self.SOURCE_VOLID) allow_no_size_options.append(self.SOURCE_VOLID) if self.properties.get(self.IMAGE): exclusive_options.append(self.IMAGE) if self.properties.get(self.IMAGE_REF): exclusive_options.append(self.IMAGE_REF) return exclusive_options, allow_no_size_options def _validate_create_sources(self): exclusive_options, allow_no_size_ops = self._build_exclusive_options() size = self.properties.get(self.SIZE) if (size is None and (len(allow_no_size_ops) != 1 or len(exclusive_options) != 1)): msg = (_('If neither "%(backup_id)s" nor "%(size)s" is ' 'provided, one and only one of "%(source_vol)s", ' '"%(snapshot_id)s" must be specified, but currently ' 'specified options: %(exclusive_options)s.') % {'backup_id': self.BACKUP_ID, 'size': self.SIZE, 'source_vol': self.SOURCE_VOLID, 'snapshot_id': self.SNAPSHOT_ID, 'exclusive_options': exclusive_options}) raise exception.StackValidationFailed(message=msg) elif size and len(exclusive_options) > 1: msg = (_('If "%(size)s" is provided, only one of ' '"%(image)s", "%(image_ref)s", "%(source_vol)s", ' '"%(snapshot_id)s" can be specified, but currently ' 'specified options: %(exclusive_options)s.') % {'size': self.SIZE, 'image': self.IMAGE, 'image_ref': self.IMAGE_REF, 'source_vol': self.SOURCE_VOLID, 'snapshot_id': self.SNAPSHOT_ID, 'exclusive_options': exclusive_options}) raise exception.StackValidationFailed(message=msg) def validate(self): """Validate provided params.""" res = super(CinderVolume, self).validate() if res is not None: return res # can not specify both image and imageRef image = self.properties.get(self.IMAGE) imageRef = self.properties.get(self.IMAGE_REF) if image and imageRef: raise exception.ResourcePropertyConflict(self.IMAGE, self.IMAGE_REF) # if not create from backup, need to check other create sources if not self.properties.get(self.BACKUP_ID): self._validate_create_sources() def handle_restore(self, defn, restore_data): backup_id = restore_data['resource_data']['backup_id'] # we can't ignore 'size' property: if user update the size # of volume after snapshot, we need to change to old size # when restore the volume. 
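check_update_complete above is re-entered by the engine on every tick; each progress object remembers whether its API call has been issued and whether it has finished, so the detach/modify/reattach sequence resumes where it left off. A stripped-down sketch of that resumable pattern, with generic phases standing in for the detach, restore, resize and attach steps:

class Phase(object):
    """One step of a long-running update: call once, then poll."""
    def __init__(self, start, poll):
        self.start = start      # issues the API call, e.g. detach
        self.poll = poll        # returns True once the call finished
        self.called = False
        self.complete = False

def check_update_complete(phases):
    # Phases run strictly in order; an unfinished phase short-circuits
    # the check, just as detach must finish before resize and reattach.
    for phase in phases:
        if not phase.called:
            phase.start()
            phase.called = True
            return False
        if not phase.complete:
            phase.complete = phase.poll()
            if not phase.complete:
                return False
    return True

phases = [Phase(start=lambda: None, poll=lambda: True)]
while not check_update_complete(phases):
    pass  # the engine would return here and retry later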
ignore_props = ( self.IMAGE_REF, self.IMAGE, self.SOURCE_VOLID) props = dict( (key, value) for (key, value) in self.properties.data.items() if key not in ignore_props and value is not None) props[self.BACKUP_ID] = backup_id return defn.freeze(properties=props) def parse_live_resource_data(self, resource_properties, resource_data): volume_reality = {} if (resource_data.get(self.METADATA) and resource_data.get(self.METADATA).get( 'readonly') is not None): read_only = resource_data.get(self.METADATA).pop('readonly') volume_reality.update({self.READ_ONLY: read_only}) old_vt = self.data().get(self.VOLUME_TYPE) new_vt = resource_data.get(self.VOLUME_TYPE) if old_vt != new_vt: volume_reality.update({self.VOLUME_TYPE: new_vt}) self._store_config_default_properties(dict(volume_type=new_vt)) props_keys = [self.SIZE, self.NAME, self.DESCRIPTION, self.METADATA, self.BACKUP_ID] for key in props_keys: volume_reality.update({key: resource_data.get(key)}) return volume_reality class CinderVolumeAttachment(vb.BaseVolumeAttachment): """Resource for associating volume to instance. Resource for associating existing volume to instance. Also, the location where the volume is exposed on the instance can be specified. """ PROPERTIES = ( INSTANCE_ID, VOLUME_ID, DEVICE, ) = ( 'instance_uuid', 'volume_id', 'mountpoint', ) properties_schema = { INSTANCE_ID: properties.Schema( properties.Schema.STRING, _('The ID of the server to which the volume attaches.'), required=True, update_allowed=True ), VOLUME_ID: properties.Schema( properties.Schema.STRING, _('The ID of the volume to be attached.'), required=True, update_allowed=True, constraints=[ constraints.CustomConstraint('cinder.volume') ] ), DEVICE: properties.Schema( properties.Schema.STRING, _('The location where the volume is exposed on the instance. 
This ' 'assignment may not be honored and it is advised that the path ' '/dev/disk/by-id/virtio- be used instead.'), update_allowed=True ), } def handle_update(self, json_snippet, tmpl_diff, prop_diff): prg_attach = None prg_detach = None if prop_diff: # Even though some combinations of changed properties # could be updated in UpdateReplace manner, # we still first detach the old resource so that # self.resource_id is not replaced prematurely volume_id = self.properties[self.VOLUME_ID] server_id = self.properties[self.INSTANCE_ID] self.client_plugin('nova').detach_volume(server_id, self.resource_id) prg_detach = progress.VolumeDetachProgress( server_id, volume_id, self.resource_id) prg_detach.called = True if self.VOLUME_ID in prop_diff: volume_id = prop_diff.get(self.VOLUME_ID) device = (self.properties[self.DEVICE] if self.properties[self.DEVICE] else None) if self.DEVICE in prop_diff: device = (prop_diff[self.DEVICE] if prop_diff[self.DEVICE] else None) if self.INSTANCE_ID in prop_diff: server_id = prop_diff.get(self.INSTANCE_ID) prg_attach = progress.VolumeAttachProgress( server_id, volume_id, device) return prg_detach, prg_attach def check_update_complete(self, checkers): prg_detach, prg_attach = checkers if not (prg_detach and prg_attach): return True if not prg_detach.cinder_complete: prg_detach.cinder_complete = self.client_plugin( ).check_detach_volume_complete(prg_detach.vol_id) return False if not prg_detach.nova_complete: prg_detach.nova_complete = self.client_plugin( 'nova').check_detach_volume_complete(prg_detach.srv_id, self.resource_id) return False if not prg_attach.called: prg_attach.called = self.client_plugin('nova').attach_volume( prg_attach.srv_id, prg_attach.vol_id, prg_attach.device) return False if not prg_attach.complete: prg_attach.complete = self.client_plugin( ).check_attach_volume_complete(prg_attach.vol_id) if prg_attach.complete: self.resource_id_set(prg_attach.called) return prg_attach.complete return True def resource_mapping(): return { 'OS::Cinder::Volume': CinderVolume, 'OS::Cinder::VolumeAttachment': CinderVolumeAttachment, } heat-10.0.2/heat/engine/resources/openstack/cinder/__init__.py0000666000175000017500000000000013343562337024302 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/cinder/qos_specs.py0000666000175000017500000001447413343562337024566 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation class QoSSpecs(resource.Resource): """A resource for creating cinder QoS specs. Users can ask for a specific volume type. Part of that volume type is a string that defines the QoS of the volume IO (fast, normal, or slow). Backends that can handle all of the demands of the volume type become candidates for scheduling. Usage of this resource restricted to admins only by default policy. 
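A sketch of the volume and attachment resources above in a HOT template; the server UUID and mountpoint are placeholders, and the property names (instance_uuid, volume_id, mountpoint) follow the CinderVolumeAttachment schema:

import yaml

# Illustrative HOT snippet: a 10 GB volume attached to an existing
# server.
template = {
    'heat_template_version': '2016-10-14',
    'resources': {
        'data_vol': {
            'type': 'OS::Cinder::Volume',
            'properties': {'size': 10, 'name': 'data-vol'},
        },
        'data_vol_attachment': {
            'type': 'OS::Cinder::VolumeAttachment',
            'properties': {
                # Placeholder UUID of the target Nova server.
                'instance_uuid': '11111111-2222-3333-4444-555555555555',
                'volume_id': {'get_resource': 'data_vol'},
                'mountpoint': '/dev/vdb',
            },
        },
    },
}

print(yaml.safe_dump(template, default_flow_style=False))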
""" support_status = support.SupportStatus(version='7.0.0') default_client_name = 'cinder' entity = 'qos_specs' required_service_extension = 'qos-specs' PROPERTIES = ( NAME, SPECS, ) = ( 'name', 'specs', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the QoS.'), ), SPECS: properties.Schema( properties.Schema.MAP, _('The specs key and value pairs of the QoS.'), required=True, update_allowed=True ), } def _find_diff(self, update_prps, stored_prps): remove_prps = list( set(stored_prps.keys() or []) - set(update_prps.keys() or []) ) add_prps = dict(set(update_prps.items() or []) - set( stored_prps.items() or [])) return add_prps, remove_prps def handle_create(self): name = (self.properties[self.NAME] or self.physical_resource_name()) specs = self.properties[self.SPECS] qos = self.client().qos_specs.create(name, specs) self.resource_id_set(qos.id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Update the specs for QoS.""" new_specs = prop_diff.get(self.SPECS) old_specs = self.properties[self.SPECS] add_specs, remove_specs = self._find_diff(new_specs, old_specs) if self.resource_id is not None: # Set new specs to QoS Specs if add_specs: self.client().qos_specs.set_keys(self.resource_id, add_specs) # Unset old specs from QoS Specs if remove_specs: self.client().qos_specs.unset_keys(self.resource_id, remove_specs) def handle_delete(self): if self.resource_id is not None: self.client().qos_specs.disassociate_all(self.resource_id) super(QoSSpecs, self).handle_delete() class QoSAssociation(resource.Resource): """A resource to associate cinder QoS specs with volume types. Usage of this resource restricted to admins only by default policy. """ support_status = support.SupportStatus(version='8.0.0') default_client_name = 'cinder' required_service_extension = 'qos-specs' PROPERTIES = ( QOS_SPECS, VOLUME_TYPES, ) = ( 'qos_specs', 'volume_types', ) properties_schema = { QOS_SPECS: properties.Schema( properties.Schema.STRING, _('ID or Name of the QoS specs.'), required=True, constraints=[ constraints.CustomConstraint('cinder.qos_specs') ], ), VOLUME_TYPES: properties.Schema( properties.Schema.LIST, _('List of volume type IDs or Names to be attached to QoS specs.'), schema=properties.Schema( properties.Schema.STRING, _('A volume type to attach specs.'), constraints=[ constraints.CustomConstraint('cinder.vtype') ], ), update_allowed=True, required=True, ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.VOLUME_TYPES], client_plugin=self.client_plugin(), finder='get_volume_type' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.QOS_SPECS], client_plugin=self.client_plugin(), finder='get_qos_specs' ) ] def _find_diff(self, update_prps, stored_prps): add_prps = list(set(update_prps or []) - set(stored_prps or [])) remove_prps = list(set(stored_prps or []) - set(update_prps or [])) return add_prps, remove_prps def handle_create(self): for vt in self.properties[self.VOLUME_TYPES]: self.client().qos_specs.associate(self.properties[self.QOS_SPECS], vt) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Associate volume types to QoS.""" qos_specs = self.properties[self.QOS_SPECS] new_associate_vts = prop_diff.get(self.VOLUME_TYPES) old_associate_vts = self.properties[self.VOLUME_TYPES] add_associate_vts, remove_associate_vts = self._find_diff( new_associate_vts, old_associate_vts) for vt in add_associate_vts: 
self.client().qos_specs.associate(qos_specs, vt) for vt in remove_associate_vts: self.client().qos_specs.disassociate(qos_specs, vt) def handle_delete(self): volume_types = self.properties[self.VOLUME_TYPES] for vt in volume_types: self.client().qos_specs.disassociate( self.properties[self.QOS_SPECS], vt) def resource_mapping(): return { 'OS::Cinder::QoSSpecs': QoSSpecs, 'OS::Cinder::QoSAssociation': QoSAssociation, } heat-10.0.2/heat/engine/resources/openstack/barbican/0000775000175000017500000000000013343562672022500 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/barbican/order.py0000666000175000017500000002462213343562337024173 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class Order(resource.Resource): """A resource for requesting the generation of secret material by Barbican. The resource generates secret material, such as a key or a certificate. The order encapsulates the workflow and history for the creation of a secret. The time to generate a secret can vary depending on the type of secret. 
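A sketch of the two QoS resources above in a HOT template; the spec key read_iops_sec and the volume type name are illustrative, and the association references the specs with get_resource:

import yaml

# Illustrative HOT snippet pairing QoS specs with a volume type.
template = {
    'heat_template_version': '2016-10-14',
    'resources': {
        'read_limit_qos': {
            'type': 'OS::Cinder::QoSSpecs',
            'properties': {
                'name': 'read-limit',
                'specs': {'read_iops_sec': '500'},
            },
        },
        'read_limit_assoc': {
            'type': 'OS::Cinder::QoSAssociation',
            'properties': {
                'qos_specs': {'get_resource': 'read_limit_qos'},
                'volume_types': ['encrypted-fast'],  # placeholder type
            },
        },
    },
}

print(yaml.safe_dump(template, default_flow_style=False))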
""" support_status = support.SupportStatus(version='2014.2') default_client_name = 'barbican' entity = 'orders' PROPERTIES = ( NAME, PAYLOAD_CONTENT_TYPE, MODE, EXPIRATION, ALGORITHM, BIT_LENGTH, TYPE, REQUEST_TYPE, SUBJECT_DN, SOURCE_CONTAINER_REF, CA_ID, PROFILE, REQUEST_DATA, PASS_PHRASE ) = ( 'name', 'payload_content_type', 'mode', 'expiration', 'algorithm', 'bit_length', 'type', 'request_type', 'subject_dn', 'source_container_ref', 'ca_id', 'profile', 'request_data', 'pass_phrase' ) ATTRIBUTES = ( STATUS, ORDER_REF, SECRET_REF, PUBLIC_KEY, PRIVATE_KEY, CERTIFICATE, INTERMEDIATES, CONTAINER_REF ) = ( 'status', 'order_ref', 'secret_ref', 'public_key', 'private_key', 'certificate', 'intermediates', 'container_ref' ) ORDER_TYPES = ( KEY, ASYMMETRIC, CERTIFICATE ) = ( 'key', 'asymmetric', 'certificate' ) # full-cmc is declared but not yet supported in barbican REQUEST_TYPES = ( STORED_KEY, SIMPLE_CMC, CUSTOM ) = ( 'stored-key', 'simple-cmc', 'custom' ) ALLOWED_PROPERTIES_FOR_TYPE = { KEY: [NAME, ALGORITHM, BIT_LENGTH, MODE, PAYLOAD_CONTENT_TYPE, EXPIRATION], ASYMMETRIC: [NAME, ALGORITHM, BIT_LENGTH, MODE, PASS_PHRASE, PAYLOAD_CONTENT_TYPE, EXPIRATION], CERTIFICATE: [NAME, REQUEST_TYPE, SUBJECT_DN, SOURCE_CONTAINER_REF, CA_ID, PROFILE, REQUEST_DATA] } properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Human readable name for the secret.'), ), PAYLOAD_CONTENT_TYPE: properties.Schema( properties.Schema.STRING, _('The type/format the secret data is provided in.'), ), EXPIRATION: properties.Schema( properties.Schema.STRING, _('The expiration date for the secret in ISO-8601 format.'), constraints=[ constraints.CustomConstraint('expiration'), ], ), ALGORITHM: properties.Schema( properties.Schema.STRING, _('The algorithm type used to generate the secret. ' 'Required for key and asymmetric types of order.'), ), BIT_LENGTH: properties.Schema( properties.Schema.INTEGER, _('The bit-length of the secret. Required for key and ' 'asymmetric types of order.'), ), MODE: properties.Schema( properties.Schema.STRING, _('The type/mode of the algorithm associated with the secret ' 'information.'), ), TYPE: properties.Schema( properties.Schema.STRING, _('The type of the order.'), constraints=[ constraints.AllowedValues(ORDER_TYPES), ], required=True, support_status=support.SupportStatus(version='5.0.0'), ), REQUEST_TYPE: properties.Schema( properties.Schema.STRING, _('The type of the certificate request.'), support_status=support.SupportStatus(version='5.0.0'), constraints=[constraints.AllowedValues(REQUEST_TYPES)] ), SUBJECT_DN: properties.Schema( properties.Schema.STRING, _('The subject of the certificate request.'), support_status=support.SupportStatus(version='5.0.0'), ), SOURCE_CONTAINER_REF: properties.Schema( properties.Schema.STRING, _('The source of certificate request.'), support_status=support.SupportStatus(version='5.0.0'), constraints=[ constraints.CustomConstraint('barbican.container') ], ), CA_ID: properties.Schema( properties.Schema.STRING, _('The identifier of the CA to use.'), support_status=support.SupportStatus(version='5.0.0'), ), PROFILE: properties.Schema( properties.Schema.STRING, _('The profile of certificate to use.'), support_status=support.SupportStatus(version='5.0.0'), ), REQUEST_DATA: properties.Schema( properties.Schema.STRING, _('The content of the CSR. Only for certificate orders.'), support_status=support.SupportStatus(version='5.0.0'), ), PASS_PHRASE: properties.Schema( properties.Schema.STRING, _('The passphrase of the created key. 
Can be set only ' 'for asymmetric type of order.'), support_status=support.SupportStatus(version='5.0.0'), ), } attributes_schema = { STATUS: attributes.Schema( _('The status of the order.'), type=attributes.Schema.STRING ), ORDER_REF: attributes.Schema( _('The URI to the order.'), type=attributes.Schema.STRING ), SECRET_REF: attributes.Schema( _('The URI to the created secret.'), type=attributes.Schema.STRING ), CONTAINER_REF: attributes.Schema( _('The URI to the created container.'), support_status=support.SupportStatus(version='5.0.0'), type=attributes.Schema.STRING ), PUBLIC_KEY: attributes.Schema( _('The payload of the created public key, if available.'), support_status=support.SupportStatus(version='5.0.0'), type=attributes.Schema.STRING ), PRIVATE_KEY: attributes.Schema( _('The payload of the created private key, if available.'), support_status=support.SupportStatus(version='5.0.0'), type=attributes.Schema.STRING ), CERTIFICATE: attributes.Schema( _('The payload of the created certificate, if available.'), support_status=support.SupportStatus(version='5.0.0'), type=attributes.Schema.STRING ), INTERMEDIATES: attributes.Schema( _('The payload of the created intermediates, if available.'), support_status=support.SupportStatus(version='5.0.0'), type=attributes.Schema.STRING ), } def handle_create(self): info = dict((k, v) for k, v in self.properties.items() if v is not None) order = self.client().orders.create(**info) order_ref = order.submit() self.resource_id_set(order_ref) # NOTE(pshchelo): order_ref is HATEOAS reference, i.e a string # need not to be fixed re LP bug #1393268 return order_ref def validate(self): super(Order, self).validate() if self.properties[self.TYPE] != self.CERTIFICATE: if (self.properties[self.ALGORITHM] is None or self.properties[self.BIT_LENGTH] is None): msg = _("Properties %(algorithm)s and %(bit_length)s are " "required for %(type)s type of order.") % { 'algorithm': self.ALGORITHM, 'bit_length': self.BIT_LENGTH, 'type': self.properties[self.TYPE]} raise exception.StackValidationFailed(message=msg) else: if (self.properties[self.PROFILE] and not self.properties[self.CA_ID]): raise exception.ResourcePropertyDependency( prop1=self.PROFILE, prop2=self.CA_ID ) declared_props = sorted([k for k, v in six.iteritems( self.properties) if k != self.TYPE and v is not None]) allowed_props = sorted(self.ALLOWED_PROPERTIES_FOR_TYPE[ self.properties[self.TYPE]]) diff = sorted(set(declared_props) - set(allowed_props)) if diff: msg = _("Unexpected properties: %(unexpected)s. 
Only these " "properties are allowed for %(type)s type of order: " "%(allowed)s.") % { 'unexpected': ', '.join(diff), 'type': self.properties[self.TYPE], 'allowed': ', '.join(allowed_props)} raise exception.StackValidationFailed(message=msg) def check_create_complete(self, order_href): order = self.client().orders.get(order_href) if order.status == 'ERROR': reason = order.error_reason code = order.error_status_code msg = (_("Order '%(name)s' failed: %(code)s - %(reason)s") % {'name': self.name, 'code': code, 'reason': reason}) raise exception.Error(msg) return order.status == 'ACTIVE' def _resolve_attribute(self, name): if self.resource_id is None: return client = self.client() order = client.orders.get(self.resource_id) if name in ( self.PUBLIC_KEY, self.PRIVATE_KEY, self.CERTIFICATE, self.INTERMEDIATES): container = client.containers.get(order.container_ref) secret = getattr(container, name) return secret.payload return getattr(order, name) def resource_mapping(): return { 'OS::Barbican::Order': Order, } heat-10.0.2/heat/engine/resources/openstack/barbican/__init__.py0000666000175000017500000000000013343562337024577 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/barbican/container.py0000666000175000017500000002073413343562337025042 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class GenericContainer(resource.Resource): """A resource for creating Barbican generic container. A generic container is used for any type of secret that a user may wish to aggregate. There are no restrictions on the amount of secrets that can be held within this container. 
""" support_status = support.SupportStatus(version='6.0.0') default_client_name = 'barbican' entity = 'containers' PROPERTIES = ( NAME, SECRETS, ) = ( 'name', 'secrets', ) ATTRIBUTES = ( STATUS, CONTAINER_REF, SECRET_REFS, CONSUMERS, ) = ( 'status', 'container_ref', 'secret_refs', 'consumers', ) _SECRETS_PROPERTIES = ( NAME, REF, ) = ( 'name', 'ref' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Human-readable name for the container.'), ), SECRETS: properties.Schema( properties.Schema.LIST, _('References to secrets that will be stored in container.'), schema=properties.Schema( properties.Schema.MAP, schema={ NAME: properties.Schema( properties.Schema.STRING, _('Name of the secret.'), required=True ), REF: properties.Schema( properties.Schema.STRING, _('Reference to the secret.'), required=True, constraints=[constraints.CustomConstraint( 'barbican.secret')], ), } ), ), } attributes_schema = { STATUS: attributes.Schema( _('The status of the container.'), type=attributes.Schema.STRING ), CONTAINER_REF: attributes.Schema( _('The URI to the container.'), type=attributes.Schema.STRING ), SECRET_REFS: attributes.Schema( _('The URIs to secrets stored in container.'), type=attributes.Schema.MAP ), CONSUMERS: attributes.Schema( _('The URIs to container consumers.'), type=attributes.Schema.LIST ), } def get_refs(self): secrets = self.properties.get(self.SECRETS) or [] return [secret['ref'] for secret in secrets] def validate(self): super(GenericContainer, self).validate() refs = self.get_refs() if len(refs) != len(set(refs)): msg = _("Duplicate refs are not allowed.") raise exception.StackValidationFailed(message=msg) def create_container(self): if self.properties[self.SECRETS]: secrets = dict((secret['name'], secret['ref']) for secret in self.properties[self.SECRETS]) else: secrets = {} info = {'secret_refs': secrets} if self.properties[self.NAME] is not None: info.update({'name': self.properties[self.NAME]}) return self.client_plugin().create_generic_container(**info) def handle_create(self): container_ref = self.create_container().store() self.resource_id_set(container_ref) return container_ref def check_create_complete(self, container_href): container = self.client().containers.get(container_href) if container.status == 'ERROR': reason = container.error_reason code = container.error_status_code msg = (_("Container '%(name)s' creation failed: " "%(code)s - %(reason)s") % {'name': self.name, 'code': code, 'reason': reason}) raise exception.ResourceInError( status_reason=msg, resource_status=container.status) return container.status == 'ACTIVE' def _resolve_attribute(self, name): if self.resource_id is None: return container = self.client().containers.get(self.resource_id) return getattr(container, name, None) class CertificateContainer(GenericContainer): """A resource for creating barbican certificate container. A certificate container is used for storing the secrets that are relevant to certificates. 
""" PROPERTIES = ( NAME, CERTIFICATE_REF, PRIVATE_KEY_REF, PRIVATE_KEY_PASSPHRASE_REF, INTERMEDIATES_REF, ) = ( 'name', 'certificate_ref', 'private_key_ref', 'private_key_passphrase_ref', 'intermediates_ref', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Human-readable name for the container.'), ), CERTIFICATE_REF: properties.Schema( properties.Schema.STRING, _('Reference to certificate.'), constraints=[constraints.CustomConstraint('barbican.secret')], ), PRIVATE_KEY_REF: properties.Schema( properties.Schema.STRING, _('Reference to private key.'), constraints=[constraints.CustomConstraint('barbican.secret')], ), PRIVATE_KEY_PASSPHRASE_REF: properties.Schema( properties.Schema.STRING, _('Reference to private key passphrase.'), constraints=[constraints.CustomConstraint('barbican.secret')], ), INTERMEDIATES_REF: properties.Schema( properties.Schema.STRING, _('Reference to intermediates.'), constraints=[constraints.CustomConstraint('barbican.secret')], ), } def create_container(self): info = dict((k, v) for k, v in six.iteritems(self.properties) if v is not None) return self.client_plugin().create_certificate(**info) def get_refs(self): return [v for k, v in six.iteritems(self.properties) if (k != self.NAME and v is not None)] class RSAContainer(GenericContainer): """A resource for creating barbican RSA container. An RSA container is used for storing RSA public keys, private keys, and private key pass phrases. """ PROPERTIES = ( NAME, PRIVATE_KEY_REF, PRIVATE_KEY_PASSPHRASE_REF, PUBLIC_KEY_REF, ) = ( 'name', 'private_key_ref', 'private_key_passphrase_ref', 'public_key_ref', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Human-readable name for the container.'), ), PRIVATE_KEY_REF: properties.Schema( properties.Schema.STRING, _('Reference to private key.'), constraints=[constraints.CustomConstraint('barbican.secret')], ), PRIVATE_KEY_PASSPHRASE_REF: properties.Schema( properties.Schema.STRING, _('Reference to private key passphrase.'), constraints=[constraints.CustomConstraint('barbican.secret')], ), PUBLIC_KEY_REF: properties.Schema( properties.Schema.STRING, _('Reference to public key.'), constraints=[constraints.CustomConstraint('barbican.secret')], ), } def create_container(self): info = dict((k, v) for k, v in six.iteritems(self.properties) if v is not None) return self.client_plugin().create_rsa(**info) def get_refs(self): return [v for k, v in six.iteritems(self.properties) if (k != self.NAME and v is not None)] def resource_mapping(): return { 'OS::Barbican::GenericContainer': GenericContainer, 'OS::Barbican::CertificateContainer': CertificateContainer, 'OS::Barbican::RSAContainer': RSAContainer } heat-10.0.2/heat/engine/resources/openstack/barbican/secret.py0000666000175000017500000001553413343562337024347 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import base64 from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support class Secret(resource.Resource): """The resource provides access to the secret/keying stored material. A secret is a singular item that stored within Barbican. A secret is anything you want it to be; however, the formal use case is a key that you wish to store away from prying eyes. Secret may include private keys, passwords and so on. """ support_status = support.SupportStatus(version='2014.2') default_client_name = 'barbican' entity = 'secrets' PROPERTIES = ( NAME, PAYLOAD, PAYLOAD_CONTENT_TYPE, PAYLOAD_CONTENT_ENCODING, MODE, EXPIRATION, ALGORITHM, BIT_LENGTH, SECRET_TYPE, ) = ( 'name', 'payload', 'payload_content_type', 'payload_content_encoding', 'mode', 'expiration', 'algorithm', 'bit_length', 'secret_type' ) ATTRIBUTES = ( STATUS, DECRYPTED_PAYLOAD, ) = ( 'status', 'decrypted_payload', ) _SECRET_TYPES = ( SYMMETRIC, PUBLIC, PRIVATE, CERTIFICATE, PASSPHRASE, OPAQUE ) = ( 'symmetric', 'public', 'private', 'certificate', 'passphrase', 'opaque' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Human readable name for the secret.'), ), PAYLOAD: properties.Schema( properties.Schema.STRING, _('The unencrypted plain text of the secret.'), ), SECRET_TYPE: properties.Schema( properties.Schema.STRING, _('The type of the secret.'), constraints=[ constraints.AllowedValues(_SECRET_TYPES), ], support_status=support.SupportStatus(version='5.0.0'), default=OPAQUE ), PAYLOAD_CONTENT_TYPE: properties.Schema( properties.Schema.STRING, _('The type/format the secret data is provided in.'), constraints=[ constraints.AllowedValues([ 'text/plain', 'application/octet-stream', ]), ], ), PAYLOAD_CONTENT_ENCODING: properties.Schema( properties.Schema.STRING, _('The encoding format used to provide the payload data.'), constraints=[ constraints.AllowedValues([ 'base64', ]), ], ), EXPIRATION: properties.Schema( properties.Schema.STRING, _('The expiration date for the secret in ISO-8601 format.'), constraints=[ constraints.CustomConstraint('expiration'), ], ), ALGORITHM: properties.Schema( properties.Schema.STRING, _('The algorithm type used to generate the secret.'), ), BIT_LENGTH: properties.Schema( properties.Schema.INTEGER, _('The bit-length of the secret.'), constraints=[ constraints.Range( min=0, ), ], ), MODE: properties.Schema( properties.Schema.STRING, _('The type/mode of the algorithm associated with the secret ' 'information.'), ), } attributes_schema = { STATUS: attributes.Schema( _('The status of the secret.'), type=attributes.Schema.STRING ), DECRYPTED_PAYLOAD: attributes.Schema( _('The decrypted secret payload.'), type=attributes.Schema.STRING ), } def handle_create(self): info = dict(self.properties) secret = self.client().secrets.create(**info) secret_ref = secret.store() self.resource_id_set(secret_ref) return secret_ref def validate(self): super(Secret, self).validate() if self.properties[self.PAYLOAD_CONTENT_TYPE]: if not self.properties[self.PAYLOAD]: raise exception.ResourcePropertyDependency( prop1=self.PAYLOAD_CONTENT_TYPE, prop2=self.PAYLOAD) if (self.properties[self.PAYLOAD_CONTENT_TYPE] == 'application/octet-stream'): if not self.properties[self.PAYLOAD_CONTENT_ENCODING]: msg = _("Property unspecified. 
    def validate(self):
        super(Secret, self).validate()

        if self.properties[self.PAYLOAD_CONTENT_TYPE]:
            if not self.properties[self.PAYLOAD]:
                raise exception.ResourcePropertyDependency(
                    prop1=self.PAYLOAD_CONTENT_TYPE, prop2=self.PAYLOAD)
            if (self.properties[self.PAYLOAD_CONTENT_TYPE] ==
                    'application/octet-stream'):
                if not self.properties[self.PAYLOAD_CONTENT_ENCODING]:
                    msg = _("Property unspecified. For '%(value)s' value "
                            "of '%(prop1)s' property, '%(prop2)s' property "
                            "must be specified.") % {
                        'value': self.properties[self.PAYLOAD_CONTENT_TYPE],
                        'prop1': self.PAYLOAD_CONTENT_TYPE,
                        'prop2': self.PAYLOAD_CONTENT_ENCODING}
                    raise exception.StackValidationFailed(message=msg)
                try:
                    base64.b64decode(self.properties[self.PAYLOAD])
                except Exception:
                    msg = _("Invalid %(prop1)s for specified '%(value)s' "
                            "value of '%(prop2)s' property.") % {
                        'prop1': self.PAYLOAD,
                        'value': self.properties[
                            self.PAYLOAD_CONTENT_ENCODING],
                        'prop2': self.PAYLOAD_CONTENT_ENCODING}
                    raise exception.StackValidationFailed(message=msg)

        if (self.properties[self.PAYLOAD_CONTENT_ENCODING] and
                (not self.properties[self.PAYLOAD_CONTENT_TYPE] or
                 self.properties[self.PAYLOAD_CONTENT_TYPE] ==
                 'text/plain')):
            raise exception.ResourcePropertyValueDependency(
                prop1=self.PAYLOAD_CONTENT_ENCODING,
                prop2=self.PAYLOAD_CONTENT_TYPE,
                value='application/octet-stream')

    def _resolve_attribute(self, name):
        if self.resource_id is None:
            return
        secret = self.client().secrets.get(self.resource_id)

        if name == self.DECRYPTED_PAYLOAD:
            return secret.payload

        if name == self.STATUS:
            return secret.status


def resource_mapping():
    return {
        'OS::Barbican::Secret': Secret,
    }
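# --- Illustrative sketch (not part of the original module) ---
# The octet-stream branch of Secret.validate() above requires a base64
# payload. The same check can be exercised standalone with a made-up
# payload value:

import base64

payload = 'bm90LXJlYWxseS1hLXNlY3JldA=='   # hypothetical base64 payload
try:
    base64.b64decode(payload)
except Exception:
    raise ValueError('payload is not valid base64')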
heat-10.0.2/heat/engine/resources/openstack/__init__.py
heat-10.0.2/heat/engine/resources/openstack/monasca/
heat-10.0.2/heat/engine/resources/openstack/monasca/alarm_definition.py

#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from heat.common.i18n import _
from heat.engine import constraints
from heat.engine import properties
from heat.engine import resource
from heat.engine import support


class MonascaAlarmDefinition(resource.Resource):
    """Heat Template Resource for Monasca Alarm definition.

    A Monasca alarm definition specifies the expression to evaluate for a
    given alarm situation. This plugin helps to create, update and delete
    alarm definitions. Alarm definitions are necessary to describe and
    manage alarms in a one-to-many relationship, in order to avoid having
    to manually declare each alarm even though they may share many common
    attributes and differ in only one, such as hostname.
    """

    support_status = support.SupportStatus(
        version='7.0.0',
        previous_status=support.SupportStatus(
            version='5.0.0',
            status=support.UNSUPPORTED
        ))

    default_client_name = 'monasca'

    entity = 'alarm_definitions'

    SEVERITY_LEVELS = (
        LOW, MEDIUM, HIGH, CRITICAL
    ) = (
        'low', 'medium', 'high', 'critical'
    )

    PROPERTIES = (
        NAME, DESCRIPTION, EXPRESSION, MATCH_BY, SEVERITY,
        OK_ACTIONS, ALARM_ACTIONS, UNDETERMINED_ACTIONS,
        ACTIONS_ENABLED
    ) = (
        'name', 'description', 'expression', 'match_by', 'severity',
        'ok_actions', 'alarm_actions', 'undetermined_actions',
        'actions_enabled'
    )

    properties_schema = {
        NAME: properties.Schema(
            properties.Schema.STRING,
            _('Name of the alarm. By default, physical resource name is '
              'used.'),
            update_allowed=True
        ),
        DESCRIPTION: properties.Schema(
            properties.Schema.STRING,
            _('Description of the alarm.'),
            update_allowed=True
        ),
        EXPRESSION: properties.Schema(
            properties.Schema.STRING,
            _('Expression of the alarm to evaluate.'),
            update_allowed=False,
            required=True
        ),
        MATCH_BY: properties.Schema(
            properties.Schema.LIST,
            _('The metric dimensions to match to the alarm dimensions. '
              'One or more dimension key names separated by a comma.'),
            default=[],
        ),
        SEVERITY: properties.Schema(
            properties.Schema.STRING,
            _('Severity of the alarm.'),
            update_allowed=True,
            constraints=[constraints.AllowedValues(
                SEVERITY_LEVELS
            )],
            default=LOW
        ),
        OK_ACTIONS: properties.Schema(
            properties.Schema.LIST,
            _('The notification methods to use when an alarm state is OK.'),
            update_allowed=True,
            schema=properties.Schema(
                properties.Schema.STRING,
                _('Monasca notification.'),
                constraints=[constraints.CustomConstraint(
                    'monasca.notification')
                ]
            ),
            default=[],
        ),
        ALARM_ACTIONS: properties.Schema(
            properties.Schema.LIST,
            _('The notification methods to use when an alarm state is '
              'ALARM.'),
            update_allowed=True,
            schema=properties.Schema(
                properties.Schema.STRING,
                _('Monasca notification.'),
                constraints=[constraints.CustomConstraint(
                    'monasca.notification')
                ]
            ),
            default=[],
        ),
        UNDETERMINED_ACTIONS: properties.Schema(
            properties.Schema.LIST,
            _('The notification methods to use when an alarm state is '
              'UNDETERMINED.'),
            update_allowed=True,
            schema=properties.Schema(
                properties.Schema.STRING,
                _('Monasca notification.'),
                constraints=[constraints.CustomConstraint(
                    'monasca.notification')
                ]
            ),
            default=[],
        ),
        ACTIONS_ENABLED: properties.Schema(
            properties.Schema.BOOLEAN,
            _('Whether to enable the actions or not.'),
            update_allowed=True,
            default=True,
        ),
    }

    def handle_create(self):
        args = dict(
            name=(self.properties[self.NAME] or
                  self.physical_resource_name()),
            description=self.properties[self.DESCRIPTION],
            expression=self.properties[self.EXPRESSION],
            match_by=self.properties[self.MATCH_BY],
            severity=self.properties[self.SEVERITY],
            ok_actions=self.properties[self.OK_ACTIONS],
            alarm_actions=self.properties[self.ALARM_ACTIONS],
            undetermined_actions=self.properties[
                self.UNDETERMINED_ACTIONS]
        )

        alarm = self.client().alarm_definitions.create(**args)
        self.resource_id_set(alarm['id'])

        # Monasca enables actions by default
        actions_enabled = self.properties[self.ACTIONS_ENABLED]
        if not actions_enabled:
            self.client().alarm_definitions.patch(
                alarm_id=self.resource_id,
                actions_enabled=actions_enabled
            )

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        args = dict(alarm_id=self.resource_id)

        if prop_diff.get(self.NAME):
            args['name'] = prop_diff.get(self.NAME)

        if prop_diff.get(self.DESCRIPTION):
            args['description'] = prop_diff.get(self.DESCRIPTION)

        if prop_diff.get(self.SEVERITY):
            args['severity'] = prop_diff.get(self.SEVERITY)

        if prop_diff.get(self.OK_ACTIONS):
            args['ok_actions'] = prop_diff.get(self.OK_ACTIONS)

        if prop_diff.get(self.ALARM_ACTIONS):
            args['alarm_actions'] = prop_diff.get(self.ALARM_ACTIONS)

        if prop_diff.get(self.UNDETERMINED_ACTIONS):
            args['undetermined_actions'] = prop_diff.get(
                self.UNDETERMINED_ACTIONS
            )

        # NOTE: a membership test is used here rather than a truthiness
        # test, so that an update setting actions_enabled to False is not
        # silently skipped.
        if self.ACTIONS_ENABLED in prop_diff:
            args['actions_enabled'] = prop_diff.get(self.ACTIONS_ENABLED)

        self.client().alarm_definitions.patch(**args)

    def handle_delete(self):
        if self.resource_id is not None:
            with self.client_plugin().ignore_not_found:
                self.client().alarm_definitions.delete(
                    alarm_id=self.resource_id)
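# --- Illustrative sketch (not part of the original module) ---
# Why handle_update() above needs a membership test for the boolean
# actions_enabled property: a truthiness test would silently skip an
# update that sets the property to False.

prop_diff = {'actions_enabled': False}       # made-up property diff

assert not prop_diff.get('actions_enabled')  # falsy: would be skipped
assert 'actions_enabled' in prop_diff        # membership: change is seen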
def resource_mapping():
    return {
        'OS::Monasca::AlarmDefinition': MonascaAlarmDefinition
    }

heat-10.0.2/heat/engine/resources/openstack/monasca/__init__.py
heat-10.0.2/heat/engine/resources/openstack/monasca/notification.py

#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import re

from six.moves import urllib

from heat.common import exception
from heat.common.i18n import _
from heat.engine import constraints
from heat.engine import properties
from heat.engine import resource
from heat.engine import support


class MonascaNotification(resource.Resource):
    """Heat Template Resource for Monasca Notification.

    A resource used to send a notification when an alarm is triggered.
    A Monasca notification declares the hook points that will be invoked
    once an alarm is generated. This plugin helps to create, update and
    delete notifications.
    """

    support_status = support.SupportStatus(
        version='7.0.0',
        previous_status=support.SupportStatus(
            version='5.0.0',
            status=support.UNSUPPORTED
        ))

    default_client_name = 'monasca'

    entity = 'notifications'

    # NOTE(sirushti): To conform to the autoscaling behaviour in heat, we set
    # the default period interval during create/update to 60 for webhooks
    # only.
    _default_period_interval = 60

    NOTIFICATION_TYPES = (
        EMAIL, WEBHOOK, PAGERDUTY
    ) = (
        'email', 'webhook', 'pagerduty'
    )

    PROPERTIES = (
        NAME, TYPE, ADDRESS, PERIOD
    ) = (
        'name', 'type', 'address', 'period'
    )

    properties_schema = {
        NAME: properties.Schema(
            properties.Schema.STRING,
            _('Name of the notification. By default, physical resource name '
              'is used.'),
            update_allowed=True
        ),
        TYPE: properties.Schema(
            properties.Schema.STRING,
            _('Type of the notification.'),
            update_allowed=True,
            required=True,
            constraints=[constraints.AllowedValues(
                NOTIFICATION_TYPES
            )]
        ),
        ADDRESS: properties.Schema(
            properties.Schema.STRING,
            _('Address of the notification. It could be a valid email '
              'address, url or service key based on notification type.'),
            update_allowed=True,
            required=True,
            constraints=[constraints.Length(max=512)]
        ),
        PERIOD: properties.Schema(
            properties.Schema.INTEGER,
            _('Interval in seconds to invoke webhooks if the alarm state '
              'does not transition away from the defined trigger state. A '
              'value of 0 will disable continuous notifications. This '
              'property is only applicable for the webhook notification '
              'type and has a default period interval of 60 seconds.'),
            support_status=support.SupportStatus(version='7.0.0'),
            update_allowed=True,
            constraints=[constraints.AllowedValues([0, 60])]
        )
    }

    def _period_interval(self):
        period = self.properties[self.PERIOD]
        if period is None:
            period = self._default_period_interval
        return period

    def validate(self):
        super(MonascaNotification, self).validate()

        if self.properties[self.PERIOD] is not None and (
                self.properties[self.TYPE] != self.WEBHOOK):
            msg = _('The period property can only be specified against a '
                    'Webhook Notification type.')
            raise exception.StackValidationFailed(message=msg)

        address = self.properties[self.ADDRESS]
        if not address:
            return

        if self.properties[self.TYPE] == self.WEBHOOK:
            try:
                parsed_address = urllib.parse.urlparse(address)
            except Exception:
                msg = _('Address "%(addr)s" does not have the correct '
                        'format required by the "%(wh)s" type of the '
                        '"%(type)s" property') % {
                    'addr': address, 'wh': self.WEBHOOK, 'type': self.TYPE}
                raise exception.StackValidationFailed(message=msg)
            if not parsed_address.scheme:
                msg = _('Address "%s" doesn\'t have the required URL '
                        'scheme') % address
                raise exception.StackValidationFailed(message=msg)
            if not parsed_address.netloc:
                msg = _('Address "%s" doesn\'t have the required network '
                        'location') % address
                raise exception.StackValidationFailed(message=msg)
            if parsed_address.scheme not in ['http', 'https']:
                msg = _('Address "%(addr)s" doesn\'t satisfy the allowed '
                        'schemes: %(schemes)s') % {
                    'addr': address,
                    'schemes': ', '.join(['http', 'https'])
                }
                raise exception.StackValidationFailed(message=msg)
        elif (self.properties[self.TYPE] == self.EMAIL and
                not re.match(r'^\S+@\S+$', address)):
            msg = _('Address "%(addr)s" doesn\'t satisfy the allowed format '
                    'for the "%(email)s" type of the "%(type)s" '
                    'property') % {
                'addr': address, 'email': self.EMAIL, 'type': self.TYPE}
            raise exception.StackValidationFailed(message=msg)
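# --- Illustrative sketch (not part of the original module) ---
# Standalone run of the webhook address checks performed in validate()
# above, using a made-up address:

from six.moves import urllib

address = 'https://hooks.example.com/v1/alarm'       # hypothetical webhook
parsed_address = urllib.parse.urlparse(address)
assert parsed_address.scheme                         # URL scheme present
assert parsed_address.netloc                         # network location present
assert parsed_address.scheme in ['http', 'https']    # allowed schemes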
    def handle_create(self):
        args = dict(
            name=(self.properties[self.NAME] or
                  self.physical_resource_name()),
            type=self.properties[self.TYPE],
            address=self.properties[self.ADDRESS],
        )
        if args['type'] == self.WEBHOOK:
            args['period'] = self._period_interval()
        notification = self.client().notifications.create(**args)
        self.resource_id_set(notification['id'])

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        args = dict(notification_id=self.resource_id)

        args['name'] = (prop_diff.get(self.NAME) or
                        self.properties[self.NAME])

        args['type'] = (prop_diff.get(self.TYPE) or
                        self.properties[self.TYPE])

        args['address'] = (prop_diff.get(self.ADDRESS) or
                           self.properties[self.ADDRESS])

        if args['type'] == self.WEBHOOK:
            updated_period = prop_diff.get(self.PERIOD)
            args['period'] = (updated_period if updated_period is not None
                              else self._period_interval())

        self.client().notifications.update(**args)

    def handle_delete(self):
        if self.resource_id is not None:
            with self.client_plugin().ignore_not_found:
                self.client().notifications.delete(
                    notification_id=self.resource_id)


def resource_mapping():
    return {
        'OS::Monasca::Notification': MonascaNotification
    }

heat-10.0.2/heat/engine/resources/openstack/glance/
heat-10.0.2/heat/engine/resources/openstack/glance/image.py
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from heat.common import exception
from heat.common.i18n import _
from heat.engine import constraints
from heat.engine import properties
from heat.engine import resource
from heat.engine import support


class GlanceImage(resource.Resource):
    """A resource managing images in Glance.

    This resource provides management of images that are meant to be used
    with other services.
    """

    support_status = support.SupportStatus(
        status=support.DEPRECATED,
        version='8.0.0',
        message=_('Creating a Glance Image based on an existing URL location '
                  'requires the Glance v1 API, which is deprecated.'),
        previous_status=support.SupportStatus(version='2014.2')
    )

    PROPERTIES = (
        NAME, IMAGE_ID, IS_PUBLIC, MIN_DISK, MIN_RAM, PROTECTED,
        DISK_FORMAT, CONTAINER_FORMAT, LOCATION, TAGS, EXTRA_PROPERTIES,
        ARCHITECTURE, KERNEL_ID, OS_DISTRO, OWNER, RAMDISK_ID
    ) = (
        'name', 'id', 'is_public', 'min_disk', 'min_ram', 'protected',
        'disk_format', 'container_format', 'location', 'tags',
        'extra_properties', 'architecture', 'kernel_id', 'os_distro',
        'owner', 'ramdisk_id'
    )

    glance_id_pattern = ('^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}'
                         '-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$')
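# --- Illustrative sketch (not part of the original module) ---
# glance_id_pattern above is a plain UUID check; a standalone test with
# a made-up UUID shows which kernel_id/ramdisk_id values it accepts:

import re

glance_id_pattern = ('^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}'
                     '-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$')
assert re.match(glance_id_pattern, '6d2f2ba0-79e1-432e-8be9-4ce2c7b2b6a8')
assert re.match(glance_id_pattern, 'not-a-uuid') is None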
    properties_schema = {
        NAME: properties.Schema(
            properties.Schema.STRING,
            _('Name for the image. The name of an image is not '
              'unique to a Image Service node.')
        ),
        IMAGE_ID: properties.Schema(
            properties.Schema.STRING,
            _('The image ID. Glance will generate a UUID if not specified.')
        ),
        IS_PUBLIC: properties.Schema(
            properties.Schema.BOOLEAN,
            _('Scope of image accessibility. Public or private. '
              'Default value is False, which means private. Note: by '
              'default, the policy setting of glance allows only users '
              'with admin roles to create a public image.'),
            default=False,
        ),
        MIN_DISK: properties.Schema(
            properties.Schema.INTEGER,
            _('Amount of disk space (in GB) required to boot image. '
              'Default value is 0 if not specified '
              'and means no limit on the disk size.'),
            constraints=[
                constraints.Range(min=0),
            ],
            default=0
        ),
        MIN_RAM: properties.Schema(
            properties.Schema.INTEGER,
            _('Amount of ram (in MB) required to boot image. Default value '
              'is 0 if not specified and means no limit on the ram size.'),
            constraints=[
                constraints.Range(min=0),
            ],
            default=0
        ),
        PROTECTED: properties.Schema(
            properties.Schema.BOOLEAN,
            _('Whether the image can be deleted. If the value is True, '
              'the image is protected and cannot be deleted.'),
            default=False
        ),
        DISK_FORMAT: properties.Schema(
            properties.Schema.STRING,
            _('Disk format of image.'),
            required=True,
            constraints=[
                constraints.AllowedValues(['ami', 'ari', 'aki', 'vhd',
                                           'vmdk', 'raw', 'qcow2', 'vdi',
                                           'iso'])
            ]
        ),
        CONTAINER_FORMAT: properties.Schema(
            properties.Schema.STRING,
            _('Container format of image.'),
            required=True,
            constraints=[
                constraints.AllowedValues(['ami', 'ari', 'aki', 'bare',
                                           'ova', 'ovf'])
            ]
        ),
        LOCATION: properties.Schema(
            properties.Schema.STRING,
            _('URL where the data for this image already resides. For '
              'example, if the image data is stored in swift, you could '
              'specify "swift://example.com/container/obj".'),
            required=True,
        ),
        TAGS: properties.Schema(
            properties.Schema.LIST,
            _('List of image tags.'),
            update_allowed=True,
            support_status=support.SupportStatus(version='7.0.0')
        ),
        EXTRA_PROPERTIES: properties.Schema(
            properties.Schema.MAP,
            _('Arbitrary properties to associate with the image.'),
            update_allowed=True,
            default={},
            support_status=support.SupportStatus(version='7.0.0')
        ),
        ARCHITECTURE: properties.Schema(
            properties.Schema.STRING,
            _('Operating system architecture.'),
            update_allowed=True,
            support_status=support.SupportStatus(version='7.0.0')
        ),
        KERNEL_ID: properties.Schema(
            properties.Schema.STRING,
            _('ID of image stored in Glance that should be used as '
              'the kernel when booting an AMI-style image.'),
            update_allowed=True,
            support_status=support.SupportStatus(version='7.0.0'),
            constraints=[
                constraints.AllowedPattern(glance_id_pattern)
            ]
        ),
        OS_DISTRO: properties.Schema(
            properties.Schema.STRING,
            _('The common name of the operating system distribution '
              'in lowercase.'),
            update_allowed=True,
            support_status=support.SupportStatus(version='7.0.0')
        ),
        OWNER: properties.Schema(
            properties.Schema.STRING,
            _('Owner of the image.'),
            update_allowed=True,
            support_status=support.SupportStatus(version='7.0.0')
        ),
        RAMDISK_ID: properties.Schema(
            properties.Schema.STRING,
            _('ID of image stored in Glance that should be used as '
              'the ramdisk when booting an AMI-style image.'),
            update_allowed=True,
            support_status=support.SupportStatus(version='7.0.0'),
            constraints=[
                constraints.AllowedPattern(glance_id_pattern)
            ]
        )
    }

    default_client_name = 'glance'

    entity = 'images'

    def handle_create(self):
        args = dict((k, v) for k, v in self.properties.items()
                    if v is not None)

        tags = args.pop(self.TAGS, [])
        args['properties'] = args.pop(self.EXTRA_PROPERTIES, {})
        architecture = args.pop(self.ARCHITECTURE, None)
        kernel_id = args.pop(self.KERNEL_ID, None)
        os_distro = args.pop(self.OS_DISTRO, None)
        ramdisk_id = args.pop(self.RAMDISK_ID, None)

        image_id = self.client(version=self.client_plugin().V1).images.create(
            **args).id
        self.resource_id_set(image_id)

        images = self.client().images
        if architecture is not None:
            images.update(image_id, architecture=architecture)
        if kernel_id is not None:
            images.update(image_id, kernel_id=kernel_id)
        if os_distro is not None:
            images.update(image_id, os_distro=os_distro)
        if ramdisk_id is not None:
            images.update(image_id, ramdisk_id=ramdisk_id)

        for tag in tags:
            self.client().image_tags.update(image_id, tag)

        return image_id

    def check_create_complete(self, image_id):
        image = self.client().images.get(image_id)
        return image.status == 'active'
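# --- Illustrative sketch (not part of the original module) ---
# handle_update() below computes tag changes with set arithmetic; a
# standalone illustration with made-up tag lists:

existing_tags = ['linux', 'x86_64']
diff_tags = ['linux', 'prod']

new_tags = set(diff_tags) - set(existing_tags)      # tags to add
removed_tags = set(existing_tags) - set(diff_tags)  # tags to delete
assert new_tags == {'prod'} and removed_tags == {'x86_64'}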
    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        if prop_diff and self.TAGS in prop_diff:
            existing_tags = self.properties.get(self.TAGS) or []
            diff_tags = prop_diff.pop(self.TAGS) or []

            new_tags = set(diff_tags) - set(existing_tags)
            for tag in new_tags:
                self.client().image_tags.update(
                    self.resource_id,
                    tag)

            removed_tags = set(existing_tags) - set(diff_tags)
            for tag in removed_tags:
                with self.client_plugin().ignore_not_found:
                    self.client().image_tags.delete(
                        self.resource_id,
                        tag)

        images = self.client().images

        if self.EXTRA_PROPERTIES in prop_diff:
            old_properties = self.properties.get(self.EXTRA_PROPERTIES) or {}
            new_properties = prop_diff.pop(self.EXTRA_PROPERTIES)
            prop_diff.update(new_properties)
            remove_props = list(set(old_properties) - set(new_properties))

            # Though remove_props defaults to None within the glanceclient,
            # setting it to a list (possibly []) every time ensures only one
            # calling format to images.update
            images.update(self.resource_id, remove_props, **prop_diff)
        else:
            images.update(self.resource_id, **prop_diff)

    def validate(self):
        super(GlanceImage, self).validate()
        container_format = self.properties[self.CONTAINER_FORMAT]
        if (container_format in ['ami', 'ari', 'aki'] and
                self.properties[self.DISK_FORMAT] != container_format):
            msg = _("Invalid mix of disk and container formats. When "
                    "setting a disk or container format to one of 'aki', "
                    "'ari', or 'ami', the container and disk formats must "
                    "match.")
            raise exception.StackValidationFailed(message=msg)

    def get_live_resource_data(self):
        image_data = super(GlanceImage, self).get_live_resource_data()
        if image_data.get('status') in ('deleted', 'killed'):
            raise exception.EntityNotFound(entity='Resource', name=self.name)
        return image_data

    def parse_live_resource_data(self, resource_properties, resource_data):
        image_reality = {}

        # NOTE(prazumovsky): First, there is no way to get the location
        # from glance; second, the location property is doubtful, because
        # glance client v2 doesn't use location, it uses locations. So we
        # should get the location property from the resource properties.
        if self.client().version == 1.0:
            image_reality.update(
                {self.LOCATION: resource_properties[self.LOCATION]})

        for key in self.PROPERTIES:
            if key == self.LOCATION:
                continue
            if key == self.IMAGE_ID:
                if (resource_properties.get(self.IMAGE_ID) is not None or
                        resource_data.get(self.IMAGE_ID) != self.resource_id):
                    image_reality.update({self.IMAGE_ID: resource_data.get(
                        self.IMAGE_ID)})
                else:
                    image_reality.update({self.IMAGE_ID: None})
            else:
                image_reality.update({key: resource_data.get(key)})

        return image_reality


def resource_mapping():
    return {
        'OS::Glance::Image': GlanceImage
    }

heat-10.0.2/heat/engine/resources/openstack/glance/__init__.py
heat-10.0.2/heat/engine/resources/openstack/aodh/
heat-10.0.2/heat/engine/resources/openstack/aodh/alarm.py

#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import six

from heat.common.i18n import _
from heat.engine import constraints
from heat.engine import properties
from heat.engine.resources import alarm_base
from heat.engine.resources.openstack.heat import none_resource
from heat.engine import support


class AodhAlarm(alarm_base.BaseAlarm):
    """A resource that implements the alarming service of Aodh.

    A resource that allows for setting alarms based on threshold evaluation
    for a collection of samples. You can also define actions to take if the
    state of the watched resource satisfies the specified conditions. For
    example, it can watch the memory consumption of a given instance and,
    when it reaches 70% and the instance has been up for more than 10 min,
    call some action.
    """
""" support_status = support.SupportStatus( status=support.DEPRECATED, message=_('Theshold alarm relies on ceilometer-api and has been ' 'deprecated in aodh since Ocata. Use ' 'OS::Aodh::GnocchiAggregationByResourcesAlarm instead.'), version='10.0.0', previous_status=support.SupportStatus(version='2014.1')) PROPERTIES = ( COMPARISON_OPERATOR, EVALUATION_PERIODS, METER_NAME, PERIOD, STATISTIC, THRESHOLD, MATCHING_METADATA, QUERY, ) = ( 'comparison_operator', 'evaluation_periods', 'meter_name', 'period', 'statistic', 'threshold', 'matching_metadata', 'query', ) properties_schema = { COMPARISON_OPERATOR: properties.Schema( properties.Schema.STRING, _('Operator used to compare specified statistic with threshold.'), constraints=[alarm_base.BaseAlarm.QF_OP_VALS], update_allowed=True ), EVALUATION_PERIODS: properties.Schema( properties.Schema.INTEGER, _('Number of periods to evaluate over.'), update_allowed=True ), METER_NAME: properties.Schema( properties.Schema.STRING, _('Meter name watched by the alarm.'), required=True ), PERIOD: properties.Schema( properties.Schema.INTEGER, _('Period (seconds) to evaluate over.'), update_allowed=True ), STATISTIC: properties.Schema( properties.Schema.STRING, _('Meter statistic to evaluate.'), constraints=[ constraints.AllowedValues(['count', 'avg', 'sum', 'min', 'max']), ], update_allowed=True ), THRESHOLD: properties.Schema( properties.Schema.NUMBER, _('Threshold to evaluate against.'), required=True, update_allowed=True ), MATCHING_METADATA: properties.Schema( properties.Schema.MAP, _('Meter should match this resource metadata (key=value) ' 'additionally to the meter_name.'), default={}, update_allowed=True ), QUERY: properties.Schema( properties.Schema.LIST, _('A list of query factors, each comparing ' 'a Sample attribute with a value. ' 'Implicitly combined with matching_metadata, if any.'), update_allowed=True, support_status=support.SupportStatus(version='2015.1'), schema=properties.Schema( properties.Schema.MAP, schema={ alarm_base.BaseAlarm.QF_FIELD: properties.Schema( properties.Schema.STRING, _('Name of attribute to compare. ' 'Names of the form metadata.user_metadata.X ' 'or metadata.metering.X are equivalent to what ' 'you can address through matching_metadata; ' 'the former for Nova meters, ' 'the latter for all others. ' 'To see the attributes of your Samples, ' 'use `ceilometer --debug sample-list`.') ), alarm_base.BaseAlarm.QF_TYPE: properties.Schema( properties.Schema.STRING, _('The type of the attribute.'), default='string', constraints=[alarm_base.BaseAlarm.QF_TYPE_VALS], support_status=support.SupportStatus(version='8.0.0') ), alarm_base.BaseAlarm.QF_OP: properties.Schema( properties.Schema.STRING, _('Comparison operator.'), constraints=[alarm_base.BaseAlarm.QF_OP_VALS] ), alarm_base.BaseAlarm.QF_VALUE: properties.Schema( properties.Schema.STRING, _('String value with which to compare.') ) } ) ) } properties_schema.update(alarm_base.common_properties_schema) def get_alarm_props(self, props): """Apply all relevant compatibility xforms.""" kwargs = self.actions_to_urls(props) kwargs['type'] = self.alarm_type if kwargs.get(self.METER_NAME) in alarm_base.NOVA_METERS: prefix = 'user_metadata.' else: prefix = 'metering.' 
        rule = {}
        for field in ['period', 'evaluation_periods', 'threshold',
                      'statistic', 'comparison_operator', 'meter_name']:
            if field in kwargs:
                rule[field] = kwargs[field]
                del kwargs[field]
        mmd = props.get(self.MATCHING_METADATA) or {}
        query = props.get(self.QUERY) or []

        # make sure the matching_metadata appears in the query like this:
        # {field: metadata.$prefix.x, ...}
        for m_k, m_v in six.iteritems(mmd):
            key = 'metadata.%s' % prefix
            if m_k.startswith('metadata.'):
                m_k = m_k[len('metadata.'):]
            if m_k.startswith('metering.') or m_k.startswith(
                    'user_metadata.'):
                # check prefix
                m_k = m_k.split('.', 1)[-1]
            key = '%s%s' % (key, m_k)
            # NOTE(prazumovsky): type of query value must be a string, but
            # matching_metadata value type can not be a string, so we
            # must convert value to a string type.
            query.append(dict(field=key, op='eq', value=six.text_type(m_v)))
        if self.MATCHING_METADATA in kwargs:
            del kwargs[self.MATCHING_METADATA]
        if self.QUERY in kwargs:
            del kwargs[self.QUERY]
        if query:
            rule['query'] = query
        kwargs['threshold_rule'] = rule
        return kwargs

    def handle_create(self):
        props = self.get_alarm_props(self.properties)
        props['name'] = self.physical_resource_name()
        alarm = self.client().alarm.create(props)
        self.resource_id_set(alarm['alarm_id'])

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        if prop_diff:
            new_props = json_snippet.properties(self.properties_schema,
                                                self.context)
            self.client().alarm.update(self.resource_id,
                                       self.get_alarm_props(new_props))

    def parse_live_resource_data(self, resource_properties, resource_data):
        record_reality = {}
        threshold_data = resource_data.get('threshold_rule').copy()
        threshold_data.update(resource_data)
        props_upd_allowed = (set(self.PROPERTIES +
                                 alarm_base.COMMON_PROPERTIES) -
                             {self.METER_NAME, alarm_base.TIME_CONSTRAINTS} -
                             set(alarm_base.INTERNAL_PROPERTIES))
        for key in props_upd_allowed:
            record_reality.update({key: threshold_data.get(key)})

        return record_reality

    def handle_check(self):
        self.client().alarm.get(self.resource_id)
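# --- Illustrative sketch (not part of the original module) ---
# Standalone run of the matching_metadata -> query rewrite performed in
# get_alarm_props() above, using a made-up Nova-meter metadata key:

import six

prefix = 'user_metadata.'                    # Nova meters use this prefix
m_k, m_v = 'metadata.user_metadata.server_group', 'mygroup'
key = 'metadata.%s' % prefix
if m_k.startswith('metadata.'):
    m_k = m_k[len('metadata.'):]
if m_k.startswith('metering.') or m_k.startswith('user_metadata.'):
    m_k = m_k.split('.', 1)[-1]
key = '%s%s' % (key, m_k)
assert key == 'metadata.user_metadata.server_group'
query_item = dict(field=key, op='eq', value=six.text_type(m_v))
assert query_item['value'] == 'mygroup'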
class CombinationAlarm(none_resource.NoneResource):
    """A resource that implements a combination of Aodh alarms.

    This resource type has been removed from Aodh, so it now inherits
    directly from NoneResource (a placeholder resource). Old resources
    (which are not placeholder resources) can still be deleted through the
    client. Any newly created resource will be considered a placeholder
    resource, like the none resource. Removal of this resource type from
    the heat resources list is scheduled.
    """

    default_client_name = 'aodh'

    entity = 'alarm'

    support_status = support.SupportStatus(
        status=support.HIDDEN,
        message=_('OS::Aodh::CombinationAlarm is deprecated and has been '
                  'removed from Aodh, use OS::Aodh::CompositeAlarm instead.'),
        version='9.0.0',
        previous_status=support.SupportStatus(
            status=support.DEPRECATED,
            version='7.0.0',
            previous_status=support.SupportStatus(version='2014.1')
        )
    )


class EventAlarm(alarm_base.BaseAlarm):
    """A resource that implements event alarms.

    Allows users to define alarms which can be evaluated based on events
    passed from other OpenStack services. The events can be emitted when
    resources from other OpenStack services have been updated, created or
    deleted, such as 'compute.instance.reboot.end' or
    'scheduler.select_destinations.end'.
    """

    alarm_type = 'event'

    support_status = support.SupportStatus(version='8.0.0')

    PROPERTIES = (
        EVENT_TYPE, QUERY
    ) = (
        'event_type', 'query'
    )

    properties_schema = {
        EVENT_TYPE: properties.Schema(
            properties.Schema.STRING,
            _('Event type to evaluate against. '
              'If not specified, will match all events.'),
            update_allowed=True,
            default='*'
        ),
        QUERY: properties.Schema(
            properties.Schema.LIST,
            _('A list for filtering events. Query conditions used '
              'to filter specific events when evaluating the alarm.'),
            update_allowed=True,
            schema=properties.Schema(
                properties.Schema.MAP,
                schema={
                    alarm_base.BaseAlarm.QF_FIELD: properties.Schema(
                        properties.Schema.STRING,
                        _('Name of attribute to compare.')
                    ),
                    alarm_base.BaseAlarm.QF_TYPE: properties.Schema(
                        properties.Schema.STRING,
                        _('The type of the attribute.'),
                        default='string',
                        constraints=[alarm_base.BaseAlarm.QF_TYPE_VALS]
                    ),
                    alarm_base.BaseAlarm.QF_OP: properties.Schema(
                        properties.Schema.STRING,
                        _('Comparison operator.'),
                        constraints=[alarm_base.BaseAlarm.QF_OP_VALS]
                    ),
                    alarm_base.BaseAlarm.QF_VALUE: properties.Schema(
                        properties.Schema.STRING,
                        _('String value with which to compare.')
                    )
                }
            )
        )
    }

    properties_schema.update(alarm_base.common_properties_schema)

    def get_alarm_props(self, props):
        """Apply all relevant compatibility xforms."""
        kwargs = self.actions_to_urls(props)
        kwargs['type'] = self.alarm_type
        rule = {}
        for prop in (self.EVENT_TYPE, self.QUERY):
            if prop in kwargs:
                del kwargs[prop]
        query = props.get(self.QUERY)
        if query:
            rule[self.QUERY] = query
        event_type = props.get(self.EVENT_TYPE)
        if event_type:
            rule[self.EVENT_TYPE] = event_type
        kwargs['event_rule'] = rule
        return kwargs

    def handle_create(self):
        props = self.get_alarm_props(self.properties)
        props['name'] = self.physical_resource_name()
        alarm = self.client().alarm.create(props)
        self.resource_id_set(alarm['alarm_id'])

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        if prop_diff:
            new_props = json_snippet.properties(self.properties_schema,
                                                self.context)
            self.client().alarm.update(self.resource_id,
                                       self.get_alarm_props(new_props))


def resource_mapping():
    return {
        'OS::Aodh::Alarm': AodhAlarm,
        'OS::Aodh::CombinationAlarm': CombinationAlarm,
        'OS::Aodh::EventAlarm': EventAlarm,
    }

heat-10.0.2/heat/engine/resources/openstack/aodh/gnocchi/
heat-10.0.2/heat/engine/resources/openstack/aodh/gnocchi/alarm.py

#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
from heat.common.i18n import _
from heat.engine import constraints
from heat.engine import properties
from heat.engine.resources import alarm_base
from heat.engine import support


COMMON_GNOCCHI_PROPERTIES = (
    COMPARISON_OPERATOR, EVALUATION_PERIODS, GRANULARITY,
    AGGREGATION_METHOD, THRESHOLD,
) = (
    'comparison_operator', 'evaluation_periods', 'granularity',
    'aggregation_method', 'threshold',
)


common_gnocchi_properties_schema = {
    COMPARISON_OPERATOR: properties.Schema(
        properties.Schema.STRING,
        _('Operator used to compare specified statistic with threshold.'),
        constraints=[alarm_base.BaseAlarm.QF_OP_VALS],
        update_allowed=True
    ),
    EVALUATION_PERIODS: properties.Schema(
        properties.Schema.INTEGER,
        _('Number of periods to evaluate over.'),
        update_allowed=True
    ),
    AGGREGATION_METHOD: properties.Schema(
        properties.Schema.STRING,
        _('The aggregation method to compare to the threshold.'),
        constraints=[
            constraints.AllowedValues(['mean', 'sum', 'last', 'max', 'min',
                                       'std', 'median', 'first', 'count']),
        ],
        update_allowed=True
    ),
    GRANULARITY: properties.Schema(
        properties.Schema.INTEGER,
        _('The time range in seconds.'),
        update_allowed=True
    ),
    THRESHOLD: properties.Schema(
        properties.Schema.NUMBER,
        _('Threshold to evaluate against.'),
        required=True,
        update_allowed=True
    ),
}


class AodhGnocchiResourcesAlarm(alarm_base.BaseAlarm):
    """A resource that watches a specified resource.

    An alarm that evaluates a threshold based on some metric for the
    specified resource.
    """

    support_status = support.SupportStatus(version='2015.1')

    PROPERTIES = (
        METRIC, RESOURCE_ID, RESOURCE_TYPE
    ) = (
        'metric', 'resource_id', 'resource_type'
    )
    PROPERTIES += COMMON_GNOCCHI_PROPERTIES

    properties_schema = {
        METRIC: properties.Schema(
            properties.Schema.STRING,
            _('Metric name watched by the alarm.'),
            required=True,
            update_allowed=True
        ),
        RESOURCE_ID: properties.Schema(
            properties.Schema.STRING,
            _('Id of a resource.'),
            required=True,
            update_allowed=True
        ),
        RESOURCE_TYPE: properties.Schema(
            properties.Schema.STRING,
            _('Resource type.'),
            required=True,
            update_allowed=True
        ),
    }
    properties_schema.update(common_gnocchi_properties_schema)
    properties_schema.update(alarm_base.common_properties_schema)

    alarm_type = 'gnocchi_resources_threshold'

    def get_alarm_props(self, props):
        kwargs = self.actions_to_urls(props)
        kwargs = self._reformat_properties(kwargs)

        return kwargs

    def handle_create(self):
        props = self.get_alarm_props(self.properties)
        props['name'] = self.physical_resource_name()
        props['type'] = self.alarm_type
        alarm = self.client().alarm.create(props)
        self.resource_id_set(alarm['alarm_id'])

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        if prop_diff:
            new_props = json_snippet.properties(self.properties_schema,
                                                self.context)
            props = self.get_alarm_props(new_props)
            self.client().alarm.update(self.resource_id, props)

    def parse_live_resource_data(self, resource_properties, resource_data):
        record_reality = {}
        rule = self.alarm_type + '_rule'
        threshold_data = resource_data.get(rule).copy()
        threshold_data.update(resource_data)

        for key in self.properties_schema.keys():
            if key in alarm_base.INTERNAL_PROPERTIES:
                continue
            if self.properties_schema[key].update_allowed:
                record_reality.update({key: threshold_data.get(key)})

        return record_reality
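# --- Illustrative sketch (not part of the original module) ---
# parse_live_resource_data() above flattens the per-type rule dict into
# the top-level alarm data; a standalone run with made-up alarm data:

alarm_type = 'gnocchi_resources_threshold'
rule = alarm_type + '_rule'
resource_data = {'state': 'ok', rule: {'threshold': 42.0, 'granularity': 60}}

threshold_data = resource_data.get(rule).copy()
threshold_data.update(resource_data)
assert threshold_data['threshold'] == 42.0
assert threshold_data['state'] == 'ok'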
""" support_status = support.SupportStatus(version='2015.1') PROPERTIES = (METRICS,) = ('metrics',) PROPERTIES += COMMON_GNOCCHI_PROPERTIES properties_schema = { METRICS: properties.Schema( properties.Schema.LIST, _('A list of metric ids.'), required=True, update_allowed=True, ), } properties_schema.update(common_gnocchi_properties_schema) properties_schema.update(alarm_base.common_properties_schema) alarm_type = 'gnocchi_aggregation_by_metrics_threshold' class AodhGnocchiAggregationByResourcesAlarm( AodhGnocchiResourcesAlarm): """A resource that implements alarm as an aggregation of resources alarms. A resource that implements alarm which uses aggregation of resources alarms with some condition. If state of a system is satisfied alarm condition, alarm is activated. """ support_status = support.SupportStatus(version='2015.1') PROPERTIES = ( METRIC, QUERY, RESOURCE_TYPE ) = ( 'metric', 'query', 'resource_type' ) PROPERTIES += COMMON_GNOCCHI_PROPERTIES properties_schema = { METRIC: properties.Schema( properties.Schema.STRING, _('Metric name watched by the alarm.'), required=True, update_allowed=True ), QUERY: properties.Schema( properties.Schema.STRING, _('The query to filter the metrics.'), required=True, update_allowed=True ), RESOURCE_TYPE: properties.Schema( properties.Schema.STRING, _('Resource type.'), required=True, update_allowed=True ), } properties_schema.update(common_gnocchi_properties_schema) properties_schema.update(alarm_base.common_properties_schema) alarm_type = 'gnocchi_aggregation_by_resources_threshold' def resource_mapping(): return { 'OS::Aodh::GnocchiResourcesAlarm': AodhGnocchiResourcesAlarm, 'OS::Aodh::GnocchiAggregationByMetricsAlarm': AodhGnocchiAggregationByMetricsAlarm, 'OS::Aodh::GnocchiAggregationByResourcesAlarm': AodhGnocchiAggregationByResourcesAlarm, } heat-10.0.2/heat/engine/resources/openstack/aodh/gnocchi/__init__.py0000666000175000017500000000000013343562337025363 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/aodh/composite_alarm.py0000666000175000017500000000664613343562337025416 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources import alarm_base from heat.engine import support class CompositeAlarm(alarm_base.BaseAlarm): """A resource that implements Aodh composite alarm. Allows to specify multiple rules when creating a composite alarm, and the rules combined with logical operators: and, or. """ alarm_type = 'composite' support_status = support.SupportStatus(version='8.0.0') PROPERTIES = ( COMPOSITE_RULE, COMPOSITE_OPERATOR, RULES, ) = ( 'composite_rule', 'operator', 'rules', ) composite_rule_schema = { COMPOSITE_OPERATOR: properties.Schema( properties.Schema.STRING, _('The operator indicates how to combine the rules.'), required=True, update_allowed=True, constraints=[ constraints.AllowedValues(['or', 'and']) ] ), RULES: properties.Schema( properties.Schema.LIST, _('Rules list. 
    def handle_create(self):
        props = self.actions_to_urls(self.properties)
        self.parse_composite_rule(props)
        props['name'] = self.physical_resource_name()
        props['type'] = self.alarm_type

        alarm = self.client().alarm.create(props)
        self.resource_id_set(alarm['alarm_id'])

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        if prop_diff:
            updated_props = self.actions_to_urls(prop_diff)
            if self.COMPOSITE_RULE in prop_diff:
                self.parse_composite_rule(updated_props)

            self.client().alarm.update(self.resource_id, updated_props)


def resource_mapping():
    return {
        'OS::Aodh::CompositeAlarm': CompositeAlarm,
    }

heat-10.0.2/heat/engine/resources/openstack/aodh/__init__.py
heat-10.0.2/heat/engine/resources/openstack/trove/
heat-10.0.2/heat/engine/resources/openstack/trove/instance.py

#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from oslo_log import log as logging
import six

from heat.common import exception
from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import properties
from heat.engine import resource
from heat.engine import support
from heat.engine import translation

LOG = logging.getLogger(__name__)


class Instance(resource.Resource):
    """OpenStack cloud database instance resource.

    Trove is Database as a Service for OpenStack. It's designed to run
    entirely on OpenStack, with the goal of allowing users to quickly and
    easily utilize the features of a relational or non-relational database
    without the burden of handling complex administrative tasks.
    """

    support_status = support.SupportStatus(version='2014.1')

    TROVE_STATUS = (
        ERROR, FAILED, ACTIVE,
    ) = (
        'ERROR', 'FAILED', 'ACTIVE',
    )

    TROVE_STATUS_REASON = {
        FAILED: _('The database instance was created, but heat failed to set '
                  'up the datastore.
If a database instance is in the FAILED ' 'state, it should be deleted and a new one should be ' 'created.'), ERROR: _('The last operation for the database instance failed due to ' 'an error.'), } BAD_STATUSES = (ERROR, FAILED) PROPERTIES = ( NAME, FLAVOR, SIZE, DATABASES, USERS, AVAILABILITY_ZONE, RESTORE_POINT, DATASTORE_TYPE, DATASTORE_VERSION, NICS, REPLICA_OF, REPLICA_COUNT, ) = ( 'name', 'flavor', 'size', 'databases', 'users', 'availability_zone', 'restore_point', 'datastore_type', 'datastore_version', 'networks', 'replica_of', 'replica_count' ) _DATABASE_KEYS = ( DATABASE_CHARACTER_SET, DATABASE_COLLATE, DATABASE_NAME, ) = ( 'character_set', 'collate', 'name', ) _USER_KEYS = ( USER_NAME, USER_PASSWORD, USER_HOST, USER_DATABASES, ) = ( 'name', 'password', 'host', 'databases', ) _NICS_KEYS = ( NET, PORT, V4_FIXED_IP ) = ( 'network', 'port', 'fixed_ip' ) ATTRIBUTES = ( HOSTNAME, HREF, ) = ( 'hostname', 'href', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the DB instance to create.'), update_allowed=True, constraints=[ constraints.Length(max=255), ] ), FLAVOR: properties.Schema( properties.Schema.STRING, _('Reference to a flavor for creating DB instance.'), required=True, update_allowed=True, constraints=[ constraints.CustomConstraint('trove.flavor') ] ), DATASTORE_TYPE: properties.Schema( properties.Schema.STRING, _("Name of registered datastore type."), constraints=[ constraints.Length(max=255) ] ), DATASTORE_VERSION: properties.Schema( properties.Schema.STRING, _("Name of the registered datastore version. " "It must exist for provided datastore type. " "Defaults to using single active version. " "If several active versions exist for provided datastore type, " "explicit value for this parameter must be specified."), constraints=[constraints.Length(max=255)] ), SIZE: properties.Schema( properties.Schema.INTEGER, _('Database volume size in GB.'), required=True, update_allowed=True, constraints=[ constraints.Range(1, 150), ] ), NICS: properties.Schema( properties.Schema.LIST, _("List of network interfaces to create on instance."), default=[], schema=properties.Schema( properties.Schema.MAP, schema={ NET: properties.Schema( properties.Schema.STRING, _('Name or UUID of the network to attach this NIC to. ' 'Either %(port)s or %(net)s must be specified.') % { 'port': PORT, 'net': NET}, constraints=[ constraints.CustomConstraint('neutron.network') ] ), PORT: properties.Schema( properties.Schema.STRING, _('Name or UUID of Neutron port to attach this ' 'NIC to. 
' 'Either %(port)s or %(net)s must be specified.') % { 'port': PORT, 'net': NET}, constraints=[ constraints.CustomConstraint('neutron.port') ], ), V4_FIXED_IP: properties.Schema( properties.Schema.STRING, _('Fixed IPv4 address for this NIC.'), constraints=[ constraints.CustomConstraint('ip_addr') ] ), }, ), ), DATABASES: properties.Schema( properties.Schema.LIST, _('List of databases to be created on DB instance creation.'), default=[], update_allowed=True, schema=properties.Schema( properties.Schema.MAP, schema={ DATABASE_CHARACTER_SET: properties.Schema( properties.Schema.STRING, _('Set of symbols and encodings.'), default='utf8' ), DATABASE_COLLATE: properties.Schema( properties.Schema.STRING, _('Set of rules for comparing characters in a ' 'character set.'), default='utf8_general_ci' ), DATABASE_NAME: properties.Schema( properties.Schema.STRING, _('Specifies database names for creating ' 'databases on instance creation.'), required=True, constraints=[ constraints.Length(max=64), constraints.AllowedPattern(r'[a-zA-Z0-9_\-]+' r'[a-zA-Z0-9_@?#\s\-]*' r'[a-zA-Z0-9_\-]+'), ] ), }, ) ), USERS: properties.Schema( properties.Schema.LIST, _('List of users to be created on DB instance creation.'), default=[], update_allowed=True, schema=properties.Schema( properties.Schema.MAP, schema={ USER_NAME: properties.Schema( properties.Schema.STRING, _('User name to create a user on instance ' 'creation.'), required=True, update_allowed=True, constraints=[ constraints.Length(max=16), constraints.AllowedPattern(r'[a-zA-Z0-9_]+' r'[a-zA-Z0-9_@?#\s]*' r'[a-zA-Z0-9_]+'), ] ), USER_PASSWORD: properties.Schema( properties.Schema.STRING, _('Password for those users on instance ' 'creation.'), required=True, update_allowed=True, constraints=[ constraints.AllowedPattern(r'[a-zA-Z0-9_]+' r'[a-zA-Z0-9_@?#\s]*' r'[a-zA-Z0-9_]+'), ] ), USER_HOST: properties.Schema( properties.Schema.STRING, _('The host from which a user is allowed to ' 'connect to the database.'), default='%', update_allowed=True ), USER_DATABASES: properties.Schema( properties.Schema.LIST, _('Names of databases that those users can ' 'access on instance creation.'), schema=properties.Schema( properties.Schema.STRING, ), required=True, update_allowed=True, constraints=[ constraints.Length(min=1), ] ), }, ) ), AVAILABILITY_ZONE: properties.Schema( properties.Schema.STRING, _('Name of the availability zone for DB instance.') ), RESTORE_POINT: properties.Schema( properties.Schema.STRING, _('DB instance restore point.') ), REPLICA_OF: properties.Schema( properties.Schema.STRING, _('Identifier of the source instance to replicate.'), support_status=support.SupportStatus(version='5.0.0') ), REPLICA_COUNT: properties.Schema( properties.Schema.INTEGER, _('The number of replicas to be created.'), support_status=support.SupportStatus(version='5.0.0') ), } attributes_schema = { HOSTNAME: attributes.Schema( _("Hostname of the instance."), type=attributes.Schema.STRING ), HREF: attributes.Schema( _("Api endpoint reference of the instance."), type=attributes.Schema.STRING ), } default_client_name = 'trove' entity = 'instances' def translation_rules(self, properties): return [ translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.FLAVOR], client_plugin=self.client_plugin(), finder='find_flavor_by_name_or_id' ) ] def __init__(self, name, json_snippet, stack): super(Instance, self).__init__(name, json_snippet, stack) self._href = None self._dbinstance = None @property def dbinstance(self): """Get the trove dbinstance.""" if not 
self._dbinstance and self.resource_id: self._dbinstance = self.client().instances.get(self.resource_id) return self._dbinstance def _dbinstance_name(self): name = self.properties[self.NAME] if name: return name return self.physical_resource_name() def handle_create(self): """Create cloud database instance.""" self.flavor = self.properties[self.FLAVOR] self.volume = {'size': self.properties[self.SIZE]} self.databases = self.properties[self.DATABASES] self.users = self.properties[self.USERS] restore_point = self.properties[self.RESTORE_POINT] if restore_point: restore_point = {"backupRef": restore_point} zone = self.properties[self.AVAILABILITY_ZONE] self.datastore_type = self.properties[self.DATASTORE_TYPE] self.datastore_version = self.properties[self.DATASTORE_VERSION] replica_of = self.properties[self.REPLICA_OF] replica_count = self.properties[self.REPLICA_COUNT] # convert user databases to format required for troveclient. # that is, list of database dictionaries for user in self.users: dbs = [{'name': db} for db in user.get(self.USER_DATABASES, [])] user[self.USER_DATABASES] = dbs # convert networks to format required by troveclient nics = [] for nic in self.properties[self.NICS]: nic_dict = {} net = nic.get(self.NET) if net: net_id = self.client_plugin( 'neutron').find_resourceid_by_name_or_id('network', net) nic_dict['net-id'] = net_id port = nic.get(self.PORT) if port: neutron = self.client_plugin('neutron') nic_dict['port-id'] = neutron.find_resourceid_by_name_or_id( 'port', port) ip = nic.get(self.V4_FIXED_IP) if ip: nic_dict['v4-fixed-ip'] = ip nics.append(nic_dict) # create db instance instance = self.client().instances.create( self._dbinstance_name(), self.flavor, volume=self.volume, databases=self.databases, users=self.users, restorePoint=restore_point, availability_zone=zone, datastore=self.datastore_type, datastore_version=self.datastore_version, nics=nics, replica_of=replica_of, replica_count=replica_count) self.resource_id_set(instance.id) return instance.id def _refresh_instance(self, instance_id): try: instance = self.client().instances.get(instance_id) return instance except Exception as exc: if self.client_plugin().is_over_limit(exc): LOG.warning("Stack %(name)s (%(id)s) received an " "OverLimit response during instance.get():" " %(exception)s", {'name': self.stack.name, 'id': self.stack.id, 'exception': exc}) return None else: raise def check_create_complete(self, instance_id): """Check if cloud DB instance creation is complete.""" instance = self._refresh_instance(instance_id) # refresh attributes if instance is None: return False if instance.status in self.BAD_STATUSES: raise exception.ResourceInError( resource_status=instance.status, status_reason=self.TROVE_STATUS_REASON.get(instance.status, _("Unknown"))) if instance.status != self.ACTIVE: return False LOG.info("Database instance %(database)s created " "(flavor:%(flavor)s, volume:%(volume)s, " "datastore:%(datastore_type)s, " "datastore_version:%(datastore_version)s)", {'database': self._dbinstance_name(), 'flavor': self.flavor, 'volume': self.volume, 'datastore_type': self.datastore_type, 'datastore_version': self.datastore_version}) return True def handle_check(self): instance = self.client().instances.get(self.resource_id) status = instance.status checks = [ {'attr': 'status', 'expected': self.ACTIVE, 'current': status}, ] self._verify_check_conditions(checks) def handle_update(self, json_snippet, tmpl_diff, prop_diff): updates = {} if prop_diff: instance = self.client().instances.get(self.resource_id) if 
self.NAME in prop_diff: updates.update({self.NAME: prop_diff[self.NAME]}) if self.FLAVOR in prop_diff: flv = prop_diff[self.FLAVOR] updates.update({self.FLAVOR: flv}) if self.SIZE in prop_diff: updates.update({self.SIZE: prop_diff[self.SIZE]}) if self.DATABASES in prop_diff: current = [d.name for d in self.client().databases.list(instance)] desired = [d[self.DATABASE_NAME] for d in prop_diff[self.DATABASES]] for db in prop_diff[self.DATABASES]: dbname = db[self.DATABASE_NAME] if dbname not in current: db['ACTION'] = self.CREATE for dbname in current: if dbname not in desired: deleted = {self.DATABASE_NAME: dbname, 'ACTION': self.DELETE} prop_diff[self.DATABASES].append(deleted) updates.update({self.DATABASES: prop_diff[self.DATABASES]}) if self.USERS in prop_diff: current = [u.name for u in self.client().users.list(instance)] desired = [u[self.USER_NAME] for u in prop_diff[self.USERS]] for usr in prop_diff[self.USERS]: if usr[self.USER_NAME] not in current: usr['ACTION'] = self.CREATE for usr in current: if usr not in desired: prop_diff[self.USERS].append({self.USER_NAME: usr, 'ACTION': self.DELETE}) updates.update({self.USERS: prop_diff[self.USERS]}) return updates def check_update_complete(self, updates): instance = self.client().instances.get(self.resource_id) if instance.status in self.BAD_STATUSES: raise exception.ResourceInError( resource_status=instance.status, status_reason=self.TROVE_STATUS_REASON.get(instance.status, _("Unknown"))) if updates: if instance.status != self.ACTIVE: dmsg = ("Instance is in status %(now)s. Waiting on status" " %(stat)s") LOG.debug(dmsg % {"now": instance.status, "stat": self.ACTIVE}) return False try: return ( self._update_name(instance, updates.get(self.NAME)) and self._update_flavor(instance, updates.get(self.FLAVOR)) and self._update_size(instance, updates.get(self.SIZE)) and self._update_databases(instance, updates.get(self.DATABASES)) and self._update_users(instance, updates.get(self.USERS)) ) except Exception as exc: if self.client_plugin().is_client_exception(exc): # the instance could have updated between the time # we retrieve it and try to update it so check again if self.client_plugin().is_over_limit(exc): LOG.debug("API rate limit: %(ex)s. Retrying.", {'ex': six.text_type(exc)}) return False if "No change was requested" in six.text_type(exc): LOG.warning("Unexpected instance state change " "during update. 
Retrying.") return False raise return True def _update_name(self, instance, name): if name and instance.name != name: self.client().instances.edit(instance, name=name) return False return True def _update_flavor(self, instance, new_flavor): if new_flavor: current_flav = six.text_type(instance.flavor['id']) new_flav = six.text_type(new_flavor) if new_flav != current_flav: dmsg = "Resizing instance flavor from %(old)s to %(new)s" LOG.debug(dmsg % {"old": current_flav, "new": new_flav}) self.client().instances.resize_instance(instance, new_flavor) return False return True def _update_size(self, instance, new_size): if new_size and instance.volume['size'] != new_size: dmsg = "Resizing instance storage from %(old)s to %(new)s" LOG.debug(dmsg % {"old": instance.volume['size'], "new": new_size}) self.client().instances.resize_volume(instance, new_size) return False return True def _update_databases(self, instance, databases): if databases: for db in databases: if db.get("ACTION") == self.CREATE: db.pop("ACTION", None) dmsg = "Adding new database %(db)s to instance" LOG.debug(dmsg % {"db": db}) self.client().databases.create(instance, [db]) elif db.get("ACTION") == self.DELETE: dmsg = ("Deleting existing database %(db)s from " "instance") LOG.debug(dmsg % {"db": db['name']}) self.client().databases.delete(instance, db['name']) return True def _update_users(self, instance, users): if users: for usr in users: dbs = [{'name': db} for db in usr.get(self.USER_DATABASES, [])] usr[self.USER_DATABASES] = dbs if usr.get("ACTION") == self.CREATE: usr.pop("ACTION", None) dmsg = "Adding new user %(u)s to instance" LOG.debug(dmsg % {"u": usr}) self.client().users.create(instance, [usr]) elif usr.get("ACTION") == self.DELETE: dmsg = ("Deleting existing user %(u)s from " "instance") LOG.debug(dmsg % {"u": usr['name']}) self.client().users.delete(instance, usr['name']) else: newattrs = {} if usr.get(self.USER_HOST): newattrs[self.USER_HOST] = usr[self.USER_HOST] if usr.get(self.USER_PASSWORD): newattrs[self.USER_PASSWORD] = usr[self.USER_PASSWORD] if newattrs: self.client().users.update_attributes( instance, usr['name'], newuserattr=newattrs, hostname=instance.hostname) current = self.client().users.get(instance, usr[self.USER_NAME]) dbs = [db['name'] for db in current.databases] desired = [db['name'] for db in usr.get(self.USER_DATABASES, [])] grants = [db for db in desired if db not in dbs] revokes = [db for db in dbs if db not in desired] if grants: self.client().users.grant(instance, usr[self.USER_NAME], grants) if revokes: self.client().users.revoke(instance, usr[self.USER_NAME], revokes) return True def parse_live_resource_data(self, resource_properties, resource_data): """A method to parse live resource data to update current resource. NOTE: cannot update users from live resource data in case of impossibility to get required user password. 
""" dbs = [d.name for d in self.client().databases.list(self.resource_id)] dbs_reality = [] for resource_db in resource_properties[self.DATABASES]: if resource_db[self.DATABASE_NAME] in dbs: dbs_reality.append(resource_db) dbs.remove(resource_db[self.DATABASE_NAME]) # cannot get any property for databases except for name, so update # resource with name dbs_reality.extend([{self.DATABASE_NAME: db} for db in dbs]) result = {self.NAME: resource_data.get('name'), self.DATABASES: dbs_reality} if resource_data.get('flavor') is not None: result[self.FLAVOR] = resource_data['flavor'].get('id') if resource_data.get('volume') is not None: result[self.SIZE] = resource_data['volume']['size'] return result def handle_delete(self): """Delete a cloud database instance.""" if not self.resource_id: return try: instance = self.client().instances.get(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: instance.delete() return instance.id def check_delete_complete(self, instance_id): """Check for completion of cloud DB instance deletion.""" if not instance_id: return True try: # For some time trove instance may continue to live self._refresh_instance(instance_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) return True return False def validate(self): """Validate any of the provided params.""" res = super(Instance, self).validate() if res: return res datastore_type = self.properties[self.DATASTORE_TYPE] datastore_version = self.properties[self.DATASTORE_VERSION] self.client_plugin().validate_datastore( datastore_type, datastore_version, self.DATASTORE_TYPE, self.DATASTORE_VERSION) # check validity of user and databases users = self.properties[self.USERS] if users: databases = self.properties[self.DATABASES] if not databases: msg = _('Databases property is required if users property ' 'is provided for resource %s.') % self.name raise exception.StackValidationFailed(message=msg) db_names = set([db[self.DATABASE_NAME] for db in databases]) for user in users: missing_db = [db_name for db_name in user[self.USER_DATABASES] if db_name not in db_names] if missing_db: msg = (_('Database %(dbs)s specified for user does ' 'not exist in databases for resource %(name)s.') % {'dbs': missing_db, 'name': self.name}) raise exception.StackValidationFailed(message=msg) # check validity of NICS is_neutron = self.is_using_neutron() nics = self.properties[self.NICS] for nic in nics: if not is_neutron and nic.get(self.PORT): msg = _("Can not use %s property on Nova-network.") % self.PORT raise exception.StackValidationFailed(message=msg) if bool(nic.get(self.NET)) == bool(nic.get(self.PORT)): msg = _("Either %(net)s or %(port)s must be provided.") % { 'net': self.NET, 'port': self.PORT} raise exception.StackValidationFailed(message=msg) def href(self): if not self._href and self.dbinstance: if not self.dbinstance.links: self._href = None else: for link in self.dbinstance.links: if link['rel'] == 'self': self._href = link[self.HREF] break return self._href def _resolve_attribute(self, name): if self.resource_id is None: return if name == self.HOSTNAME: return self.dbinstance.hostname elif name == self.HREF: return self.href() def resource_mapping(): return { 'OS::Trove::Instance': Instance, } heat-10.0.2/heat/engine/resources/openstack/trove/cluster.py0000666000175000017500000003327513343562340024135 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from heat.engine import translation LOG = logging.getLogger(__name__) class TroveCluster(resource.Resource): """A resource for managing Trove clusters. A Trove cluster is a group of database instances that is created and managed as a single unit. """ support_status = support.SupportStatus(version='2015.1') TROVE_STATUS = ( ERROR, FAILED, ACTIVE, ) = ( 'ERROR', 'FAILED', 'ACTIVE', ) DELETE_STATUSES = ( DELETING, NONE ) = ( 'DELETING', 'NONE' ) TROVE_STATUS_REASON = { FAILED: _('The database instance was created, but heat failed to set ' 'up the datastore. If a database instance is in the FAILED ' 'state, it should be deleted and a new one should be ' 'created.'), ERROR: _('The last operation for the database instance failed due to ' 'an error.'), } BAD_STATUSES = (ERROR, FAILED) PROPERTIES = ( NAME, DATASTORE_TYPE, DATASTORE_VERSION, INSTANCES, ) = ( 'name', 'datastore_type', 'datastore_version', 'instances', ) _INSTANCE_KEYS = ( FLAVOR, VOLUME_SIZE, NETWORKS, ) = ( 'flavor', 'volume_size', 'networks', ) _NICS_KEYS = ( NET, PORT, V4_FIXED_IP ) = ( 'network', 'port', 'fixed_ip' ) ATTRIBUTES = ( INSTANCES_ATTR, IP ) = ( 'instances', 'ip' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the cluster to create.'), constraints=[ constraints.Length(max=255), ] ), DATASTORE_TYPE: properties.Schema( properties.Schema.STRING, _("Name of registered datastore type."), required=True, constraints=[ constraints.Length(max=255) ] ), DATASTORE_VERSION: properties.Schema( properties.Schema.STRING, _("Name of the registered datastore version. " "It must exist for the provided datastore type. " "Defaults to the single active version. " "If several active versions exist for the provided datastore type, " "an explicit value for this parameter must be specified."), required=True, constraints=[constraints.Length(max=255)] ), INSTANCES: properties.Schema( properties.Schema.LIST, _("List of database instances."), required=True, schema=properties.Schema( properties.Schema.MAP, schema={ FLAVOR: properties.Schema( properties.Schema.STRING, _('Flavor of the instance.'), required=True, constraints=[ constraints.CustomConstraint('trove.flavor') ] ), VOLUME_SIZE: properties.Schema( properties.Schema.INTEGER, _('Size of the instance disk volume in GB.'), required=True, constraints=[ constraints.Range(1, 150), ] ), NETWORKS: properties.Schema( properties.Schema.LIST, _("List of network interfaces to create on instance."), support_status=support.SupportStatus(version='10.0.0'), default=[], schema=properties.Schema( properties.Schema.MAP, schema={ NET: properties.Schema( properties.Schema.STRING, _('Name or UUID of the network to attach ' 'this NIC to. 
Either %(port)s or ' '%(net)s must be specified.') % { 'port': PORT, 'net': NET}, constraints=[ constraints.CustomConstraint( 'neutron.network') ] ), PORT: properties.Schema( properties.Schema.STRING, _('Name or UUID of Neutron port to ' 'attach this NIC to. Either %(port)s ' 'or %(net)s must be specified.') % {'port': PORT, 'net': NET}, constraints=[ constraints.CustomConstraint( 'neutron.port') ], ), V4_FIXED_IP: properties.Schema( properties.Schema.STRING, _('Fixed IPv4 address for this NIC.'), constraints=[ constraints.CustomConstraint('ip_addr') ] ), }, ), ), } ) ), } attributes_schema = { INSTANCES: attributes.Schema( _("A list of instances ids."), type=attributes.Schema.LIST ), IP: attributes.Schema( _("A list of cluster instance IPs."), type=attributes.Schema.LIST ) } default_client_name = 'trove' entity = 'clusters' def translation_rules(self, properties): return [ translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, translation_path=[self.INSTANCES, self.NETWORKS, self.NET], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='network'), translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, translation_path=[self.INSTANCES, self.NETWORKS, self.PORT], client_plugin=self.client_plugin('neutron'), finder='find_resourceid_by_name_or_id', entity='port'), translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, translation_path=[self.INSTANCES, self.FLAVOR], client_plugin=self.client_plugin(), finder='find_flavor_by_name_or_id'), ] def _cluster_name(self): return self.properties[self.NAME] or self.physical_resource_name() def handle_create(self): datastore_type = self.properties[self.DATASTORE_TYPE] datastore_version = self.properties[self.DATASTORE_VERSION] # convert instances to format required by troveclient instances = [] for instance in self.properties[self.INSTANCES]: instance_dict = { 'flavorRef': instance[self.FLAVOR], 'volume': {'size': instance[self.VOLUME_SIZE]}, } instance_nics = self.get_instance_nics(instance) if instance_nics: instance_dict["nics"] = instance_nics instances.append(instance_dict) args = { 'name': self._cluster_name(), 'datastore': datastore_type, 'datastore_version': datastore_version, 'instances': instances } cluster = self.client().clusters.create(**args) self.resource_id_set(cluster.id) return cluster.id def get_instance_nics(self, instance): nics = [] for nic in instance[self.NETWORKS]: nic_dict = {} if nic.get(self.NET): nic_dict['net-id'] = nic.get(self.NET) if nic.get(self.PORT): nic_dict['port-id'] = nic.get(self.PORT) ip = nic.get(self.V4_FIXED_IP) if ip: nic_dict['v4-fixed-ip'] = ip nics.append(nic_dict) return nics def _refresh_cluster(self, cluster_id): try: cluster = self.client().clusters.get(cluster_id) return cluster except Exception as exc: if self.client_plugin().is_over_limit(exc): LOG.warning("Stack %(name)s (%(id)s) received an " "OverLimit response during clusters.get():" " %(exception)s", {'name': self.stack.name, 'id': self.stack.id, 'exception': exc}) return None else: raise def check_create_complete(self, cluster_id): cluster = self._refresh_cluster(cluster_id) if cluster is None: return False for instance in cluster.instances: if instance['status'] in self.BAD_STATUSES: raise exception.ResourceInError( resource_status=instance['status'], status_reason=self.TROVE_STATUS_REASON.get( instance['status'], _("Unknown"))) if instance['status'] != self.ACTIVE: return False LOG.info("Cluster '%s' has been created", cluster.name) 
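# A minimal sketch of the polling contract this checker follows, using
# a hypothetical 'things' API purely for illustration (the names below
# are not part of troveclient): handle_create() returns a token, and
# the engine re-invokes check_create_complete(token) until it returns
# True, returns False to keep polling, or raises to fail the resource.
#
#     def handle_create(self):
#         thing = self.client().things.create()
#         self.resource_id_set(thing.id)
#         return thing.id                       # token for the checker
#
#     def check_create_complete(self, thing_id):
#         thing = self.client().things.get(thing_id)
#         if thing.status in self.BAD_STATUSES:
#             raise exception.ResourceInError(resource_status=thing.status)
#         return thing.status == self.ACTIVE    # False -> poll again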
return True def cluster_delete(self, cluster_id): try: cluster = self.client().clusters.get(cluster_id) cluster_status = cluster.task['name'] if cluster_status not in self.DELETE_STATUSES: return False if cluster_status != self.DELETING: # If cluster already started to delete, don't send another one # request for deleting. cluster.delete() except Exception as ex: self.client_plugin().ignore_not_found(ex) return True def handle_delete(self): if not self.resource_id: return try: cluster = self.client().clusters.get(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return cluster.id def check_delete_complete(self, cluster_id): if not cluster_id: return True if not self.cluster_delete(cluster_id): return False try: # For some time trove cluster may continue to live self._refresh_cluster(cluster_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) return True return False def validate(self): res = super(TroveCluster, self).validate() if res: return res datastore_type = self.properties[self.DATASTORE_TYPE] datastore_version = self.properties[self.DATASTORE_VERSION] self.client_plugin().validate_datastore( datastore_type, datastore_version, self.DATASTORE_TYPE, self.DATASTORE_VERSION) # check validity of instances' NETWORKS is_neutron = self.is_using_neutron() for instance in self.properties[self.INSTANCES]: for nic in instance[self.NETWORKS]: # 'nic.get(self.PORT) is not None' including two cases: # 1. has set port value in template # 2. using 'get_resource' to reference a new resource if not is_neutron and nic.get(self.PORT) is not None: msg = (_("Can not use %s property on Nova-network.") % self.PORT) raise exception.StackValidationFailed(message=msg) if (bool(nic.get(self.NET) is not None) == bool(nic.get(self.PORT) is not None)): msg = (_("Either %(net)s or %(port)s must be provided.") % {'net': self.NET, 'port': self.PORT}) raise exception.StackValidationFailed(message=msg) def _resolve_attribute(self, name): if self.resource_id is None: return if name == self.INSTANCES_ATTR: instances = [] cluster = self.client().clusters.get(self.resource_id) for instance in cluster.instances: instances.append(instance['id']) return instances elif name == self.IP: cluster = self.client().clusters.get(self.resource_id) return cluster.ip def resource_mapping(): return { 'OS::Trove::Cluster': TroveCluster } heat-10.0.2/heat/engine/resources/openstack/trove/__init__.py0000666000175000017500000000000013343562340024167 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/neutron/0000775000175000017500000000000013343562672022431 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/neutron/quota.py0000666000175000017500000001271413343562351024135 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
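# A short sketch of how the TroveCluster resource above maps one entry
# of its 'networks' property onto the nic dict that get_instance_nics()
# hands to troveclient; the values here are hypothetical.
#
#     nic_property = {'network': 'a1b2c3d4', 'fixed_ip': '10.0.0.5'}
#     nic_dict = {}
#     if nic_property.get('network'):
#         nic_dict['net-id'] = nic_property['network']
#     if nic_property.get('port'):
#         nic_dict['port-id'] = nic_property['port']
#     if nic_property.get('fixed_ip'):
#         nic_dict['v4-fixed-ip'] = nic_property['fixed_ip']
#     # nic_dict == {'net-id': 'a1b2c3d4', 'v4-fixed-ip': '10.0.0.5'}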
from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class NeutronQuota(neutron.NeutronResource): """A resource for managing neutron quotas. Neutron Quota is used to manage operational limits for projects. Currently, this resource can manage Neutron's quotas for: - subnet - network - floatingip - security_group_rule - security_group - router - port Note that default neutron security policy usage of this resource is limited to being used by administrators only. Administrators should be careful to create only one Neutron Quota resource per project, otherwise it will be hard for them to manage the quota properly. """ support_status = support.SupportStatus(version='8.0.0') required_service_extension = 'quotas' PROPERTIES = ( PROJECT, SUBNET, NETWORK, FLOATINGIP, SECURITY_GROUP_RULE, SECURITY_GROUP, ROUTER, PORT ) = ( 'project', 'subnet', 'network', 'floatingip', 'security_group_rule', 'security_group', 'router', 'port' ) properties_schema = { PROJECT: properties.Schema( properties.Schema.STRING, _('Name or id of the project to set the quota for.'), required=True, constraints=[ constraints.CustomConstraint('keystone.project') ] ), SUBNET: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of subnets. ' 'Setting -1 means unlimited.'), constraints=[constraints.Range(min=-1)], update_allowed=True ), NETWORK: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of networks. ' 'Setting -1 means unlimited.'), constraints=[constraints.Range(min=-1)], update_allowed=True ), FLOATINGIP: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of floating IPs. ' 'Setting -1 means unlimited.'), constraints=[constraints.Range(min=-1)], update_allowed=True ), SECURITY_GROUP_RULE: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of security group rules. ' 'Setting -1 means unlimited.'), constraints=[constraints.Range(min=-1)], update_allowed=True ), SECURITY_GROUP: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of security groups. ' 'Setting -1 means unlimited.'), constraints=[constraints.Range(min=-1)], update_allowed=True ), ROUTER: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of routers. ' 'Setting -1 means unlimited.'), constraints=[constraints.Range(min=-1)], update_allowed=True ), PORT: properties.Schema( properties.Schema.INTEGER, _('Quota for the number of ports. 
' 'Setting -1 means unlimited.'), constraints=[constraints.Range(min=-1)], update_allowed=True ) } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.PROJECT], client_plugin=self.client_plugin('keystone'), finder='get_project_id') ] def handle_create(self): self._set_quota() self.resource_id_set(self.physical_resource_name()) def handle_update(self, json_snippet, tmpl_diff, prop_diff): self._set_quota(json_snippet.properties(self.properties_schema, self.context)) def _set_quota(self, props=None): if props is None: props = self.properties kwargs = dict((k, v) for k, v in props.items() if k != self.PROJECT and v is not None) body = {"quota": kwargs} self.client().update_quota(props[self.PROJECT], body) def handle_delete(self): if self.resource_id is not None: with self.client_plugin().ignore_not_found: self.client().delete_quota(self.resource_id) def validate(self): super(NeutronQuota, self).validate() if sum(1 for p in self.properties.values() if p is not None) <= 1: raise exception.PropertyUnspecifiedError( *sorted(set(self.PROPERTIES) - {self.PROJECT})) def resource_mapping(): return { 'OS::Neutron::Quota': NeutronQuota } heat-10.0.2/heat/engine/resources/openstack/neutron/segment.py0000666000175000017500000001412713343562351024446 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class Segment(neutron.NeutronResource): """A resource for Neutron Segment. This requires enabling the segments service plug-in by appending 'segments' to the list of service_plugins in the neutron.conf. The default policy usage of this resource is limited to administrators only. 
""" required_service_extension = 'segment' support_status = support.SupportStatus(version='9.0.0') NETWORK_TYPES = [LOCAL, VLAN, VXLAN, GRE, GENEVE, FLAT] = ['local', 'vlan', 'vxlan', 'gre', 'geneve', 'flat'] PROPERTIES = ( NETWORK, NETWORK_TYPE, PHYSICAL_NETWORK, SEGMENTATION_ID, NAME, DESCRIPTION ) = ( 'network', 'network_type', 'physical_network', 'segmentation_id', 'name', 'description' ) properties_schema = { NETWORK: properties.Schema( properties.Schema.STRING, _('The name/id of network to associate with this segment.'), constraints=[constraints.CustomConstraint('neutron.network')], required=True ), NETWORK_TYPE: properties.Schema( properties.Schema.STRING, _('Type of network to associate with this segment.'), constraints=[ constraints.AllowedValues(NETWORK_TYPES), ], required=True ), PHYSICAL_NETWORK: properties.Schema( properties.Schema.STRING, _('Name of physical network to associate with this segment.'), ), SEGMENTATION_ID: properties.Schema( properties.Schema.INTEGER, _('Segmentation ID for this segment.'), constraints=[ constraints.Range(min=1), ], ), NAME: properties.Schema( properties.Schema.STRING, _('Name of the segment.'), update_allowed=True, ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the segment.'), update_allowed=True, ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.NETWORK], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='network' )] def validate(self): super(Segment, self).validate() phys_network = self.properties[self.PHYSICAL_NETWORK] network_type = self.properties[self.NETWORK_TYPE] seg_id = self.properties[self.SEGMENTATION_ID] msg_fmt = _('%(prop)s is required for %(type)s provider network.') if network_type in [self.FLAT, self.VLAN] and phys_network is None: msg = msg_fmt % {'prop': self.PHYSICAL_NETWORK, 'type': network_type} raise exception.StackValidationFailed(message=msg) if network_type == self.VLAN and seg_id is None: msg = msg_fmt % {'prop': self.SEGMENTATION_ID, 'type': network_type} raise exception.StackValidationFailed(message=msg) msg_fmt = _('%(prop)s is prohibited for %(type)s provider network.') if network_type in [self.LOCAL, self.FLAT] and seg_id is not None: msg = msg_fmt % {'prop': self.SEGMENTATION_ID, 'type': network_type} raise exception.StackValidationFailed(message=msg) tunnel_types = [self.VXLAN, self.GRE, self.GENEVE] if network_type in tunnel_types and phys_network is not None: msg = msg_fmt % {'prop': self.PHYSICAL_NETWORK, 'type': network_type} raise exception.StackValidationFailed(message=msg) if network_type == self.VLAN and seg_id > 4094: msg = _('Up to 4094 VLAN network segments can exist ' 'on each physical_network.') raise exception.StackValidationFailed(message=msg) def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) props['network_id'] = props.pop(self.NETWORK) segment = self.client('openstack').network.create_segment(**props) self.resource_id_set(segment.id) def handle_delete(self): if self.resource_id is None: return with self.client_plugin('openstack').ignore_not_found: self.client('openstack').network.delete_segment(self.resource_id) def needs_replace_failed(self): if not self.resource_id: return True with self.client_plugin('openstack').ignore_not_found: self._show_resource() return False return True def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) 
self.client('openstack').network.update_segment( self.resource_id, **prop_diff) def _show_resource(self): return self.client('openstack').network.get_segment( self.resource_id).to_dict() def resource_mapping(): return { 'OS::Neutron::Segment': Segment } heat-10.0.2/heat/engine/resources/openstack/neutron/metering.py0000666000175000017500000001463013343562340024613 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support class MeteringLabel(neutron.NeutronResource): """A resource for creating a Neutron metering label. Metering is performed at the L3 router level: operators configure IP ranges and assign a label to each of them. For example, one label can cover internal traffic and another external traffic, with each label measuring the traffic for its own set of IP ranges. Bandwidth measurements for each label are then sent to the Oslo notification system, where they can be collected by Ceilometer. """ support_status = support.SupportStatus(version='2014.1') entity = 'metering_label' PROPERTIES = ( NAME, DESCRIPTION, SHARED, ) = ( 'name', 'description', 'shared', ) ATTRIBUTES = ( NAME_ATTR, DESCRIPTION_ATTR, SHARED_ATTR, ) = ( 'name', 'description', 'shared', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the metering label.') ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the metering label.'), ), SHARED: properties.Schema( properties.Schema.BOOLEAN, _('Whether the metering label should be shared ' 'across all tenants.'), default=False, support_status=support.SupportStatus(version='2015.1'), ), } attributes_schema = { NAME_ATTR: attributes.Schema( _('Name of the metering label.'), type=attributes.Schema.STRING ), DESCRIPTION_ATTR: attributes.Schema( _('Description of the metering label.'), type=attributes.Schema.STRING ), SHARED_ATTR: attributes.Schema( _('Shared status of the metering label.'), type=attributes.Schema.STRING ), } def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) metering_label = self.client().create_metering_label( {'metering_label': props})['metering_label'] self.resource_id_set(metering_label['id']) def handle_delete(self): try: self.client().delete_metering_label(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True class MeteringRule(neutron.NeutronResource): """A resource to create a rule for a metering label. The rule allows the specified label to measure the traffic for a specific set of IP ranges. 
""" support_status = support.SupportStatus(version='2014.1') entity = 'metering_label_rule' PROPERTIES = ( METERING_LABEL_ID, REMOTE_IP_PREFIX, DIRECTION, EXCLUDED, ) = ( 'metering_label_id', 'remote_ip_prefix', 'direction', 'excluded', ) ATTRIBUTES = ( DIRECTION_ATTR, EXCLUDED_ATTR, METERING_LABEL_ID_ATTR, REMOTE_IP_PREFIX_ATTR, ) = ( 'direction', 'excluded', 'metering_label_id', 'remote_ip_prefix', ) properties_schema = { METERING_LABEL_ID: properties.Schema( properties.Schema.STRING, _('The metering label ID to associate with this metering rule.'), required=True ), REMOTE_IP_PREFIX: properties.Schema( properties.Schema.STRING, _('Indicates remote IP prefix to be associated with this ' 'metering rule.'), required=True, ), DIRECTION: properties.Schema( properties.Schema.STRING, _('The direction in which metering rule is applied, ' 'either ingress or egress.'), default='ingress', constraints=[constraints.AllowedValues(( 'ingress', 'egress'))] ), EXCLUDED: properties.Schema( properties.Schema.BOOLEAN, _('Specify whether the remote_ip_prefix will be excluded or ' 'not from traffic counters of the metering label. For example ' 'to not count the traffic of a specific IP address of a range.'), default='False' ) } attributes_schema = { DIRECTION_ATTR: attributes.Schema( _('The direction in which metering rule is applied.'), type=attributes.Schema.STRING ), EXCLUDED_ATTR: attributes.Schema( _('Exclude state for cidr.'), type=attributes.Schema.STRING ), METERING_LABEL_ID_ATTR: attributes.Schema( _('The metering label ID to associate with this metering rule.'), type=attributes.Schema.STRING ), REMOTE_IP_PREFIX_ATTR: attributes.Schema( _('CIDR to be associated with this metering rule.'), type=attributes.Schema.STRING ), } def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) metering_label_rule = self.client().create_metering_label_rule( {'metering_label_rule': props})['metering_label_rule'] self.resource_id_set(metering_label_rule['id']) def handle_delete(self): try: self.client().delete_metering_label_rule(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True def resource_mapping(): return { 'OS::Neutron::MeteringLabel': MeteringLabel, 'OS::Neutron::MeteringRule': MeteringRule, } heat-10.0.2/heat/engine/resources/openstack/neutron/firewall.py0000666000175000017500000004330013343562340024602 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support class Firewall(neutron.NeutronResource): """A resource for the Firewall resource in Neutron FWaaS. Resource for using the Neutron firewall implementation. 
Firewall is a network security system that monitors and controls the incoming and outgoing network traffic based on predetermined security rules. """ required_service_extension = 'fwaas' entity = 'firewall' PROPERTIES = ( NAME, DESCRIPTION, ADMIN_STATE_UP, FIREWALL_POLICY_ID, VALUE_SPECS, SHARED, ) = ( 'name', 'description', 'admin_state_up', 'firewall_policy_id', 'value_specs', 'shared', ) ATTRIBUTES = ( NAME_ATTR, DESCRIPTION_ATTR, ADMIN_STATE_UP_ATTR, FIREWALL_POLICY_ID_ATTR, SHARED_ATTR, STATUS, TENANT_ID, ) = ( 'name', 'description', 'admin_state_up', 'firewall_policy_id', 'shared', 'status', 'tenant_id', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name for the firewall.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the firewall.'), update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('Administrative state of the firewall. If false (down), ' 'firewall does not forward packets and will drop all ' 'traffic to/from VMs behind the firewall.'), default=True, update_allowed=True ), FIREWALL_POLICY_ID: properties.Schema( properties.Schema.STRING, _('The ID of the firewall policy that this firewall is ' 'associated with.'), required=True, update_allowed=True ), VALUE_SPECS: properties.Schema( properties.Schema.MAP, _('Extra parameters to include in the request. Parameters ' 'are often specific to installed hardware or extensions.'), support_status=support.SupportStatus(version='5.0.0'), default={}, update_allowed=True ), SHARED: properties.Schema( properties.Schema.BOOLEAN, _('Whether this firewall should be shared across all tenants. ' 'NOTE: The default policy setting in Neutron restricts usage ' 'of this property to administrative users only.'), update_allowed=True, support_status=support.SupportStatus( status=support.UNSUPPORTED, message=_('There is no such option during 5.0.0, so need to ' 'make this property unsupported while it not used.'), version='6.0.0', previous_status=support.SupportStatus(version='2015.1') ) ), } attributes_schema = { NAME_ATTR: attributes.Schema( _('Name for the firewall.'), type=attributes.Schema.STRING ), DESCRIPTION_ATTR: attributes.Schema( _('Description of the firewall.'), type=attributes.Schema.STRING ), ADMIN_STATE_UP_ATTR: attributes.Schema( _('The administrative state of the firewall.'), type=attributes.Schema.STRING ), FIREWALL_POLICY_ID_ATTR: attributes.Schema( _('Unique identifier of the firewall policy used to create ' 'the firewall.'), type=attributes.Schema.STRING ), SHARED_ATTR: attributes.Schema( _('Shared status of this firewall.'), support_status=support.SupportStatus( status=support.UNSUPPORTED, message=_('There is no such option during 5.0.0, so need to ' 'make this attribute unsupported, otherwise error ' 'will raised.'), version='6.0.0' ), type=attributes.Schema.STRING ), STATUS: attributes.Schema( _('The status of the firewall.'), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _('Id of the tenant owning the firewall.'), type=attributes.Schema.STRING ), } def check_create_complete(self, data): attributes = self._show_resource() status = attributes['status'] if status == 'PENDING_CREATE': return False elif status == 'ACTIVE': return True elif status == 'ERROR': raise exception.ResourceInError( resource_status=status, status_reason=_('Error in Firewall')) else: raise exception.ResourceUnknownStatus( resource_status=status, result=_('Firewall creation failed')) def handle_create(self): props = 
self.prepare_properties( self.properties, self.physical_resource_name()) firewall = self.client().create_firewall({'firewall': props})[ 'firewall'] self.resource_id_set(firewall['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) self.client().update_firewall( self.resource_id, {'firewall': prop_diff}) def handle_delete(self): try: self.client().delete_firewall(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True def _resolve_attribute(self, name): if name == self.SHARED_ATTR: return ('This attribute is currently unsupported in neutron ' 'firewall resource.') return super(Firewall, self)._resolve_attribute(name) def parse_live_resource_data(self, resource_properties, resource_data): result = super(Firewall, self).parse_live_resource_data( resource_properties, resource_data) if self.SHARED in result: result.pop(self.SHARED) return result class FirewallPolicy(neutron.NeutronResource): """A resource for the FirewallPolicy resource in Neutron FWaaS. FirewallPolicy resource is an ordered collection of firewall rules. A firewall policy can be shared across tenants. """ required_service_extension = 'fwaas' entity = 'firewall_policy' PROPERTIES = ( NAME, DESCRIPTION, SHARED, AUDITED, FIREWALL_RULES, ) = ( 'name', 'description', 'shared', 'audited', 'firewall_rules', ) ATTRIBUTES = ( NAME_ATTR, DESCRIPTION_ATTR, FIREWALL_RULES_ATTR, SHARED_ATTR, AUDITED_ATTR, TENANT_ID, ) = ( 'name', 'description', 'firewall_rules', 'shared', 'audited', 'tenant_id', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name for the firewall policy.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the firewall policy.'), update_allowed=True ), SHARED: properties.Schema( properties.Schema.BOOLEAN, _('Whether this policy should be shared across all tenants.'), default=False, update_allowed=True ), AUDITED: properties.Schema( properties.Schema.BOOLEAN, _('Whether this policy should be audited. 
When set to True, ' 'each time the firewall policy or the associated firewall ' 'rules are changed, this attribute will be set to False and ' 'will have to be explicitly set to True through an update ' 'operation.'), default=False, update_allowed=True ), FIREWALL_RULES: properties.Schema( properties.Schema.LIST, _('An ordered list of firewall rules to apply to the firewall.'), required=True, update_allowed=True ), } attributes_schema = { NAME_ATTR: attributes.Schema( _('Name for the firewall policy.'), type=attributes.Schema.STRING ), DESCRIPTION_ATTR: attributes.Schema( _('Description of the firewall policy.'), type=attributes.Schema.STRING ), FIREWALL_RULES_ATTR: attributes.Schema( _('List of firewall rules in this firewall policy.'), type=attributes.Schema.LIST ), SHARED_ATTR: attributes.Schema( _('Shared status of this firewall policy.'), type=attributes.Schema.STRING ), AUDITED_ATTR: attributes.Schema( _('Audit status of this firewall policy.'), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _('Id of the tenant owning the firewall policy.'), type=attributes.Schema.STRING ), } def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) firewall_policy = self.client().create_firewall_policy( {'firewall_policy': props})['firewall_policy'] self.resource_id_set(firewall_policy['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().update_firewall_policy( self.resource_id, {'firewall_policy': prop_diff}) def handle_delete(self): try: self.client().delete_firewall_policy(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True class FirewallRule(neutron.NeutronResource): """A resource for the FirewallRule resource in Neutron FWaaS. FirewallRule represents a collection of attributes like ports, ip addresses etc. which define match criteria and action (allow, or deny) that needs to be taken on the matched data traffic. 
""" required_service_extension = 'fwaas' entity = 'firewall_rule' PROPERTIES = ( NAME, DESCRIPTION, SHARED, PROTOCOL, IP_VERSION, SOURCE_IP_ADDRESS, DESTINATION_IP_ADDRESS, SOURCE_PORT, DESTINATION_PORT, ACTION, ENABLED, ) = ( 'name', 'description', 'shared', 'protocol', 'ip_version', 'source_ip_address', 'destination_ip_address', 'source_port', 'destination_port', 'action', 'enabled', ) ATTRIBUTES = ( NAME_ATTR, DESCRIPTION_ATTR, FIREWALL_POLICY_ID, SHARED_ATTR, PROTOCOL_ATTR, IP_VERSION_ATTR, SOURCE_IP_ADDRESS_ATTR, DESTINATION_IP_ADDRESS_ATTR, SOURCE_PORT_ATTR, DESTINATION_PORT_ATTR, ACTION_ATTR, ENABLED_ATTR, POSITION, TENANT_ID, ) = ( 'name', 'description', 'firewall_policy_id', 'shared', 'protocol', 'ip_version', 'source_ip_address', 'destination_ip_address', 'source_port', 'destination_port', 'action', 'enabled', 'position', 'tenant_id', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name for the firewall rule.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the firewall rule.'), update_allowed=True ), SHARED: properties.Schema( properties.Schema.BOOLEAN, _('Whether this rule should be shared across all tenants.'), default=False, update_allowed=True ), PROTOCOL: properties.Schema( properties.Schema.STRING, _('Protocol for the firewall rule.'), constraints=[ constraints.AllowedValues(['tcp', 'udp', 'icmp', 'any']), ], default='any', update_allowed=True, ), IP_VERSION: properties.Schema( properties.Schema.STRING, _('Internet protocol version.'), default='4', constraints=[ constraints.AllowedValues(['4', '6']), ], update_allowed=True ), SOURCE_IP_ADDRESS: properties.Schema( properties.Schema.STRING, _('Source IP address or CIDR.'), update_allowed=True, constraints=[ constraints.CustomConstraint('net_cidr') ] ), DESTINATION_IP_ADDRESS: properties.Schema( properties.Schema.STRING, _('Destination IP address or CIDR.'), update_allowed=True, constraints=[ constraints.CustomConstraint('net_cidr') ] ), SOURCE_PORT: properties.Schema( properties.Schema.STRING, _('Source port number or a range.'), update_allowed=True ), DESTINATION_PORT: properties.Schema( properties.Schema.STRING, _('Destination port number or a range.'), update_allowed=True ), ACTION: properties.Schema( properties.Schema.STRING, _('Action to be performed on the traffic matching the rule.'), default='deny', constraints=[ constraints.AllowedValues(['allow', 'deny']), ], update_allowed=True ), ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('Whether this rule should be enabled.'), default=True, update_allowed=True ), } attributes_schema = { NAME_ATTR: attributes.Schema( _('Name for the firewall rule.'), type=attributes.Schema.STRING ), DESCRIPTION_ATTR: attributes.Schema( _('Description of the firewall rule.'), type=attributes.Schema.STRING ), FIREWALL_POLICY_ID: attributes.Schema( _('Unique identifier of the firewall policy to which this ' 'firewall rule belongs.'), type=attributes.Schema.STRING ), SHARED_ATTR: attributes.Schema( _('Shared status of this firewall rule.'), type=attributes.Schema.STRING ), PROTOCOL_ATTR: attributes.Schema( _('Protocol value for this firewall rule.'), type=attributes.Schema.STRING ), IP_VERSION_ATTR: attributes.Schema( _('Ip_version for this firewall rule.'), type=attributes.Schema.STRING ), SOURCE_IP_ADDRESS_ATTR: attributes.Schema( _('Source ip_address for this firewall rule.'), type=attributes.Schema.STRING ), DESTINATION_IP_ADDRESS_ATTR: attributes.Schema( _('Destination ip_address for this firewall 
rule.'), type=attributes.Schema.STRING ), SOURCE_PORT_ATTR: attributes.Schema( _('Source port range for this firewall rule.'), type=attributes.Schema.STRING ), DESTINATION_PORT_ATTR: attributes.Schema( _('Destination port range for this firewall rule.'), type=attributes.Schema.STRING ), ACTION_ATTR: attributes.Schema( _('Allow or deny action for this firewall rule.'), type=attributes.Schema.STRING ), ENABLED_ATTR: attributes.Schema( _('Indicates whether this firewall rule is enabled or not.'), type=attributes.Schema.STRING ), POSITION: attributes.Schema( _('Position of the rule within the firewall policy.'), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _('Id of the tenant owning the firewall.'), type=attributes.Schema.STRING ), } def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) if props.get(self.PROTOCOL) == 'any': props[self.PROTOCOL] = None firewall_rule = self.client().create_firewall_rule( {'firewall_rule': props})['firewall_rule'] self.resource_id_set(firewall_rule['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: if prop_diff.get(self.PROTOCOL) == 'any': prop_diff[self.PROTOCOL] = None self.client().update_firewall_rule( self.resource_id, {'firewall_rule': prop_diff}) def handle_delete(self): try: self.client().delete_firewall_rule(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True def resource_mapping(): return { 'OS::Neutron::Firewall': Firewall, 'OS::Neutron::FirewallPolicy': FirewallPolicy, 'OS::Neutron::FirewallRule': FirewallRule, } heat-10.0.2/heat/engine/resources/openstack/neutron/security_group_rule.py0000666000175000017500000001701413343562351027114 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class SecurityGroupRule(neutron.NeutronResource): """A resource for managing Neutron security group rules. Rules to use in security group resource. 
""" required_service_extension = 'security-group' entity = 'security_group_rule' support_status = support.SupportStatus(version='7.0.0') PROPERTIES = ( SECURITY_GROUP, DESCRIPTION, DIRECTION, ETHERTYPE, PORT_RANGE_MIN, PORT_RANGE_MAX, PROTOCOL, REMOTE_GROUP, REMOTE_IP_PREFIX ) = ( 'security_group', 'description', 'direction', 'ethertype', 'port_range_min', 'port_range_max', 'protocol', 'remote_group', 'remote_ip_prefix' ) _allowed_protocols = list(range(256)) + [ 'ah', 'dccp', 'egp', 'esp', 'gre', 'icmp', 'icmpv6', 'igmp', 'ipv6-encap', 'ipv6-frag', 'ipv6-icmp', 'ipv6-nonxt', 'ipv6-opts', 'ipv6-route', 'ospf', 'pgm', 'rsvp', 'sctp', 'tcp', 'udp', 'udplite', 'vrrp' ] properties_schema = { SECURITY_GROUP: properties.Schema( properties.Schema.STRING, _('Security group name or ID to add rule.'), required=True, constraints=[ constraints.CustomConstraint('neutron.security_group') ] ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the security group rule.') ), DIRECTION: properties.Schema( properties.Schema.STRING, _('The direction in which the security group rule is applied. ' 'For a compute instance, an ingress security group rule ' 'matches traffic that is incoming (ingress) for that ' 'instance. An egress rule is applied to traffic leaving ' 'the instance.'), default='ingress', constraints=[ constraints.AllowedValues(['ingress', 'egress']), ] ), ETHERTYPE: properties.Schema( properties.Schema.STRING, _('Ethertype of the traffic.'), default='IPv4', constraints=[ constraints.AllowedValues(['IPv4', 'IPv6']), ] ), PORT_RANGE_MIN: properties.Schema( properties.Schema.INTEGER, _('The minimum port number in the range that is matched by the ' 'security group rule. If the protocol is TCP or UDP, this ' 'value must be less than or equal to the value of the ' 'port_range_max attribute. If the protocol is ICMP, this ' 'value must be an ICMP type.'), constraints=[ constraints.Range(0, 65535) ] ), PORT_RANGE_MAX: properties.Schema( properties.Schema.INTEGER, _('The maximum port number in the range that is matched by the ' 'security group rule. The port_range_min attribute constrains ' 'the port_range_max attribute. If the protocol is ICMP, this ' 'value must be an ICMP code.'), constraints=[ constraints.Range(0, 65535) ] ), PROTOCOL: properties.Schema( properties.Schema.STRING, _('The protocol that is matched by the security group rule. 
' 'Allowed values are ah, dccp, egp, esp, gre, icmp, icmpv6, ' 'igmp, ipv6-encap, ipv6-frag, ipv6-icmp, ipv6-nonxt, ipv6-opts, ' 'ipv6-route, ospf, pgm, rsvp, sctp, tcp, udp, udplite, vrrp ' 'and integer representations [0-255].'), default='tcp', constraints=[constraints.AllowedValues(_allowed_protocols)] ), REMOTE_GROUP: properties.Schema( properties.Schema.STRING, _('The remote group name or ID to be associated with this ' 'security group rule.'), constraints=[ constraints.CustomConstraint('neutron.security_group') ] ), REMOTE_IP_PREFIX: properties.Schema( properties.Schema.STRING, _('The remote IP prefix (CIDR) to be associated with this ' 'security group rule.'), constraints=[ constraints.CustomConstraint('net_cidr') ] ) } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.SECURITY_GROUP], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='security_group' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.REMOTE_GROUP], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='security_group' ), ] def validate(self): super(SecurityGroupRule, self).validate() if (self.properties[self.REMOTE_GROUP] is not None and self.properties[self.REMOTE_IP_PREFIX] is not None): raise exception.ResourcePropertyConflict( self.REMOTE_GROUP, self.REMOTE_IP_PREFIX) port_max = self.properties[self.PORT_RANGE_MAX] port_min = self.properties[self.PORT_RANGE_MIN] protocol = self.properties[self.PROTOCOL] if (port_max is not None and port_min is not None and protocol not in ('icmp', 'icmpv6', 'ipv6-icmp') and port_max < port_min): msg = _('The minimum port number must be less than or equal to ' 'the maximum port number.') raise exception.StackValidationFailed(message=msg) def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) props['security_group_id'] = props.pop(self.SECURITY_GROUP) if self.REMOTE_GROUP in props: props['remote_group_id'] = props.pop(self.REMOTE_GROUP) for key in (self.PORT_RANGE_MIN, self.PORT_RANGE_MAX): if props.get(key) is not None: props[key] = str(props[key]) rule = self.client().create_security_group_rule( {'security_group_rule': props})['security_group_rule'] self.resource_id_set(rule['id']) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client().delete_security_group_rule(self.resource_id) def resource_mapping(): return { 'OS::Neutron::SecurityGroupRule': SecurityGroupRule } heat-10.0.2/heat/engine/resources/openstack/neutron/vpnservice.py0000666000175000017500000007062013343562351025170 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
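# For the SecurityGroupRule resource above, port_range_min and
# port_range_max are overloaded when the protocol is ICMP: min holds
# the ICMP type and max the ICMP code, which is why validate() skips
# the min <= max comparison for icmp, icmpv6 and ipv6-icmp. Two
# illustrative property sets (values hypothetical):
#
#     tcp_rule = {'protocol': 'tcp',
#                 'port_range_min': 8000, 'port_range_max': 8080}
#     icmp_rule = {'protocol': 'icmp',
#                  'port_range_min': 8,   # ICMP type (echo request)
#                  'port_range_max': 0}   # ICMP code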
from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class VPNService(neutron.NeutronResource): """A resource for VPN service in Neutron. VPN service is a high level object that associates VPN with a specific subnet and router. """ required_service_extension = 'vpnaas' entity = 'vpnservice' PROPERTIES = ( NAME, DESCRIPTION, ADMIN_STATE_UP, SUBNET_ID, SUBNET, ROUTER_ID, ROUTER ) = ( 'name', 'description', 'admin_state_up', 'subnet_id', 'subnet', 'router_id', 'router' ) ATTRIBUTES = ( ADMIN_STATE_UP_ATTR, DESCRIPTION_ATTR, NAME_ATTR, ROUTER_ID_ATTR, STATUS, SUBNET_ID_ATTR, TENANT_ID, ) = ( 'admin_state_up', 'description', 'name', 'router_id', 'status', 'subnet_id', 'tenant_id', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name for the vpn service.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the vpn service.'), update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('Administrative state for the vpn service.'), default=True, update_allowed=True ), SUBNET_ID: properties.Schema( properties.Schema.STRING, support_status=support.SupportStatus( status=support.HIDDEN, message=_('Use property %s.') % SUBNET, version='5.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2' ) ), constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), SUBNET: properties.Schema( properties.Schema.STRING, _('Subnet in which the vpn service will be created.'), support_status=support.SupportStatus(version='2014.2'), required=True, constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), ROUTER_ID: properties.Schema( properties.Schema.STRING, _('Unique identifier for the router to which the vpn service ' 'will be inserted.'), support_status=support.SupportStatus( status=support.HIDDEN, version='6.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, message=_('Use property %s') % ROUTER, version='2015.1', previous_status=support.SupportStatus(version='2013.2')) ), constraints=[ constraints.CustomConstraint('neutron.router') ] ), ROUTER: properties.Schema( properties.Schema.STRING, _('The router to which the vpn service will be inserted.'), support_status=support.SupportStatus(version='2015.1'), required=True, constraints=[ constraints.CustomConstraint('neutron.router') ] ) } attributes_schema = { ADMIN_STATE_UP_ATTR: attributes.Schema( _('The administrative state of the vpn service.'), type=attributes.Schema.STRING ), DESCRIPTION_ATTR: attributes.Schema( _('The description of the vpn service.'), type=attributes.Schema.STRING ), NAME_ATTR: attributes.Schema( _('The name of the vpn service.'), type=attributes.Schema.STRING ), ROUTER_ID_ATTR: attributes.Schema( _('The unique identifier of the router to which the vpn service ' 'was inserted.'), type=attributes.Schema.STRING ), STATUS: attributes.Schema( _('The status of the vpn service.'), type=attributes.Schema.STRING ), SUBNET_ID_ATTR: attributes.Schema( _('The unique identifier of the subnet in which the vpn service ' 'was created.'), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _('The unique identifier of the tenant owning the vpn service.'), type=attributes.Schema.STRING ), } def translation_rules(self, props): return [ translation.TranslationRule( props, 
translation.TranslationRule.REPLACE, [self.SUBNET], value_path=[self.SUBNET_ID] ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.SUBNET], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='subnet' ), translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.ROUTER], value_path=[self.ROUTER_ID] ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.ROUTER], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='router' ), ] def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) props['subnet_id'] = props.pop(self.SUBNET) props['router_id'] = props.pop(self.ROUTER) vpnservice = self.client().create_vpnservice({'vpnservice': props})[ 'vpnservice'] self.resource_id_set(vpnservice['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) self.client().update_vpnservice(self.resource_id, {'vpnservice': prop_diff}) def handle_delete(self): try: self.client().delete_vpnservice(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True class IPsecSiteConnection(neutron.NeutronResource): """A resource for IPsec site connection in Neutron. This resource has details for the site-to-site IPsec connection, including the peer CIDRs, MTU, peer address, DPD settings and status. """ required_service_extension = 'vpnaas' entity = 'ipsec_site_connection' PROPERTIES = ( NAME, DESCRIPTION, PEER_ADDRESS, PEER_ID, PEER_CIDRS, MTU, DPD, PSK, INITIATOR, ADMIN_STATE_UP, IKEPOLICY_ID, IPSECPOLICY_ID, VPNSERVICE_ID, ) = ( 'name', 'description', 'peer_address', 'peer_id', 'peer_cidrs', 'mtu', 'dpd', 'psk', 'initiator', 'admin_state_up', 'ikepolicy_id', 'ipsecpolicy_id', 'vpnservice_id', ) _DPD_KEYS = ( DPD_ACTIONS, DPD_INTERVAL, DPD_TIMEOUT, ) = ( 'actions', 'interval', 'timeout', ) ATTRIBUTES = ( ADMIN_STATE_UP_ATTR, AUTH_MODE, DESCRIPTION_ATTR, DPD_ATTR, IKEPOLICY_ID_ATTR, INITIATOR_ATTR, IPSECPOLICY_ID_ATTR, MTU_ATTR, NAME_ATTR, PEER_ADDRESS_ATTR, PEER_CIDRS_ATTR, PEER_ID_ATTR, PSK_ATTR, ROUTE_MODE, STATUS, TENANT_ID, VPNSERVICE_ID_ATTR, ) = ( 'admin_state_up', 'auth_mode', 'description', 'dpd', 'ikepolicy_id', 'initiator', 'ipsecpolicy_id', 'mtu', 'name', 'peer_address', 'peer_cidrs', 'peer_id', 'psk', 'route_mode', 'status', 'tenant_id', 'vpnservice_id', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name for the ipsec site connection.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the ipsec site connection.'), update_allowed=True ), PEER_ADDRESS: properties.Schema( properties.Schema.STRING, _('Remote branch router public IPv4 address or IPv6 address or ' 'FQDN.'), required=True ), PEER_ID: properties.Schema( properties.Schema.STRING, _('Remote branch router identity.'), required=True ), PEER_CIDRS: properties.Schema( properties.Schema.LIST, _('Remote subnet(s) in CIDR format.'), required=True, schema=properties.Schema( properties.Schema.STRING, constraints=[ constraints.CustomConstraint('net_cidr') ] ) ), MTU: properties.Schema( properties.Schema.INTEGER, _('Maximum transmission unit size (in bytes) for the ipsec site ' 'connection.'), default=1500 ), DPD: properties.Schema( properties.Schema.MAP, _('Dead Peer Detection protocol configuration for the ipsec site ' 'connection.'), schema={ DPD_ACTIONS: properties.Schema( 
properties.Schema.STRING, _('Controls DPD protocol mode.'), default='hold', constraints=[ constraints.AllowedValues(['clear', 'disabled', 'hold', 'restart', 'restart-by-peer']), ] ), DPD_INTERVAL: properties.Schema( properties.Schema.INTEGER, _('Number of seconds for the DPD delay.'), default=30 ), DPD_TIMEOUT: properties.Schema( properties.Schema.INTEGER, _('Number of seconds for the DPD timeout.'), default=120 ), } ), PSK: properties.Schema( properties.Schema.STRING, _('Pre-shared key string for the ipsec site connection.'), required=True ), INITIATOR: properties.Schema( properties.Schema.STRING, _('Initiator state in lowercase for the ipsec site connection.'), default='bi-directional', constraints=[ constraints.AllowedValues(['bi-directional', 'response-only']), ] ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('Administrative state for the ipsec site connection.'), default=True, update_allowed=True ), IKEPOLICY_ID: properties.Schema( properties.Schema.STRING, _('Unique identifier for the ike policy associated with the ' 'ipsec site connection.'), required=True ), IPSECPOLICY_ID: properties.Schema( properties.Schema.STRING, _('Unique identifier for the ipsec policy associated with the ' 'ipsec site connection.'), required=True ), VPNSERVICE_ID: properties.Schema( properties.Schema.STRING, _('Unique identifier for the vpn service associated with the ' 'ipsec site connection.'), required=True ), } attributes_schema = { ADMIN_STATE_UP_ATTR: attributes.Schema( _('The administrative state of the ipsec site connection.'), type=attributes.Schema.STRING ), AUTH_MODE: attributes.Schema( _('The authentication mode of the ipsec site connection.'), type=attributes.Schema.STRING ), DESCRIPTION_ATTR: attributes.Schema( _('The description of the ipsec site connection.'), type=attributes.Schema.STRING ), DPD_ATTR: attributes.Schema( _('The dead peer detection protocol configuration of the ipsec ' 'site connection.'), type=attributes.Schema.MAP ), IKEPOLICY_ID_ATTR: attributes.Schema( _('The unique identifier of ike policy associated with the ipsec ' 'site connection.'), type=attributes.Schema.STRING ), INITIATOR_ATTR: attributes.Schema( _('The initiator of the ipsec site connection.'), type=attributes.Schema.STRING ), IPSECPOLICY_ID_ATTR: attributes.Schema( _('The unique identifier of ipsec policy associated with the ' 'ipsec site connection.'), type=attributes.Schema.STRING ), MTU_ATTR: attributes.Schema( _('The maximum transmission unit size (in bytes) of the ipsec ' 'site connection.'), type=attributes.Schema.STRING ), NAME_ATTR: attributes.Schema( _('The name of the ipsec site connection.'), type=attributes.Schema.STRING ), PEER_ADDRESS_ATTR: attributes.Schema( _('The remote branch router public IPv4 address or IPv6 address ' 'or FQDN.'), type=attributes.Schema.STRING ), PEER_CIDRS_ATTR: attributes.Schema( _('The remote subnet(s) in CIDR format of the ipsec site ' 'connection.'), type=attributes.Schema.LIST ), PEER_ID_ATTR: attributes.Schema( _('The remote branch router identity of the ipsec site ' 'connection.'), type=attributes.Schema.STRING ), PSK_ATTR: attributes.Schema( _('The pre-shared key string of the ipsec site connection.'), type=attributes.Schema.STRING ), ROUTE_MODE: attributes.Schema( _('The route mode of the ipsec site connection.'), type=attributes.Schema.STRING ), STATUS: attributes.Schema( _('The status of the ipsec site connection.'), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _('The unique identifier of the tenant owning the ipsec site ' 
'connection.'), type=attributes.Schema.STRING ), VPNSERVICE_ID_ATTR: attributes.Schema( _('The unique identifier of vpn service associated with the ipsec ' 'site connection.'), type=attributes.Schema.STRING ), } def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) ipsec_site_connection = self.client().create_ipsec_site_connection( {'ipsec_site_connection': props})['ipsec_site_connection'] self.resource_id_set(ipsec_site_connection['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().update_ipsec_site_connection( self.resource_id, {'ipsec_site_connection': prop_diff}) def handle_delete(self): try: self.client().delete_ipsec_site_connection(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True class IKEPolicy(neutron.NeutronResource): """A resource for IKE policy in Neutron. The Internet Key Exchange policy identifies the authentication and encryption algorithm used during phase one and phase two negotiation of a VPN connection. """ required_service_extension = 'vpnaas' entity = 'ikepolicy' PROPERTIES = ( NAME, DESCRIPTION, AUTH_ALGORITHM, ENCRYPTION_ALGORITHM, PHASE1_NEGOTIATION_MODE, LIFETIME, PFS, IKE_VERSION, ) = ( 'name', 'description', 'auth_algorithm', 'encryption_algorithm', 'phase1_negotiation_mode', 'lifetime', 'pfs', 'ike_version', ) _LIFETIME_KEYS = ( LIFETIME_UNITS, LIFETIME_VALUE, ) = ( 'units', 'value', ) ATTRIBUTES = ( AUTH_ALGORITHM_ATTR, DESCRIPTION_ATTR, ENCRYPTION_ALGORITHM_ATTR, IKE_VERSION_ATTR, LIFETIME_ATTR, NAME_ATTR, PFS_ATTR, PHASE1_NEGOTIATION_MODE_ATTR, TENANT_ID, ) = ( 'auth_algorithm', 'description', 'encryption_algorithm', 'ike_version', 'lifetime', 'name', 'pfs', 'phase1_negotiation_mode', 'tenant_id', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name for the ike policy.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the ike policy.'), update_allowed=True ), AUTH_ALGORITHM: properties.Schema( properties.Schema.STRING, _('Authentication hash algorithm for the ike policy.'), default='sha1', constraints=[ constraints.AllowedValues(['sha1', 'sha256', 'sha384', 'sha512']), ], update_allowed=True ), ENCRYPTION_ALGORITHM: properties.Schema( properties.Schema.STRING, _('Encryption algorithm for the ike policy.'), default='aes-128', constraints=[ constraints.AllowedValues(['3des', 'aes-128', 'aes-192', 'aes-256']), ] ), PHASE1_NEGOTIATION_MODE: properties.Schema( properties.Schema.STRING, _('Negotiation mode for the ike policy.'), default='main', constraints=[ constraints.AllowedValues(['main']), ] ), LIFETIME: properties.Schema( properties.Schema.MAP, _('Safety assessment lifetime configuration for the ike policy.'), schema={ LIFETIME_UNITS: properties.Schema( properties.Schema.STRING, _('Safety assessment lifetime units.'), default='seconds', constraints=[ constraints.AllowedValues(['seconds', 'kilobytes']), ] ), LIFETIME_VALUE: properties.Schema( properties.Schema.INTEGER, _('Safety assessment lifetime value in specified ' 'units.'), default=3600 ), } ), PFS: properties.Schema( properties.Schema.STRING, _('Perfect forward secrecy in lowercase for the ike policy.'), default='group5', constraints=[ constraints.AllowedValues(['group2', 'group5', 'group14']), ] ), IKE_VERSION: properties.Schema( properties.Schema.STRING, _('Version for the ike policy.'), default='v1', constraints=[ constraints.AllowedValues(['v1', 'v2']), ] ), } 
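    # Illustrative HOT usage of this resource (a sketch only; the resource
    # name and property values below are assumptions picked from the
    # allowed values defined in properties_schema above):
    #
    #   my_ikepolicy:
    #     type: OS::Neutron::IKEPolicy
    #     properties:
    #       auth_algorithm: sha256
    #       encryption_algorithm: aes-256
    #       ike_version: v2
    #       pfs: group14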
attributes_schema = { AUTH_ALGORITHM_ATTR: attributes.Schema( _('The authentication hash algorithm used by the ike policy.'), type=attributes.Schema.STRING ), DESCRIPTION_ATTR: attributes.Schema( _('The description of the ike policy.'), type=attributes.Schema.STRING ), ENCRYPTION_ALGORITHM_ATTR: attributes.Schema( _('The encryption algorithm used by the ike policy.'), type=attributes.Schema.STRING ), IKE_VERSION_ATTR: attributes.Schema( _('The version of the ike policy.'), type=attributes.Schema.STRING ), LIFETIME_ATTR: attributes.Schema( _('The safety assessment lifetime configuration for the ike ' 'policy.'), type=attributes.Schema.MAP ), NAME_ATTR: attributes.Schema( _('The name of the ike policy.'), type=attributes.Schema.STRING ), PFS_ATTR: attributes.Schema( _('The perfect forward secrecy of the ike policy.'), type=attributes.Schema.STRING ), PHASE1_NEGOTIATION_MODE_ATTR: attributes.Schema( _('The negotiation mode of the ike policy.'), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _('The unique identifier of the tenant owning the ike policy.'), type=attributes.Schema.STRING ), } def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) ikepolicy = self.client().create_ikepolicy({'ikepolicy': props})[ 'ikepolicy'] self.resource_id_set(ikepolicy['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().update_ikepolicy(self.resource_id, {'ikepolicy': prop_diff}) def handle_delete(self): try: self.client().delete_ikepolicy(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True class IPsecPolicy(neutron.NeutronResource): """A resource for IPsec policy in Neutron. The IP security policy specifying the authentication and encryption algorithm, and encapsulation mode used for the established VPN connection. 
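    For example, an ESP tunnel-mode policy could be declared as follows
    (an illustrative sketch; the resource name is a placeholder and each
    value shown is one of the documented allowed values)::

      my_ipsecpolicy:
        type: OS::Neutron::IPsecPolicy
        properties:
          transform_protocol: esp
          encapsulation_mode: tunnel
          auth_algorithm: sha1
          encryption_algorithm: aes-128
          pfs: group5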
""" required_service_extension = 'vpnaas' entity = 'ipsecpolicy' PROPERTIES = ( NAME, DESCRIPTION, TRANSFORM_PROTOCOL, ENCAPSULATION_MODE, AUTH_ALGORITHM, ENCRYPTION_ALGORITHM, LIFETIME, PFS, ) = ( 'name', 'description', 'transform_protocol', 'encapsulation_mode', 'auth_algorithm', 'encryption_algorithm', 'lifetime', 'pfs', ) _LIFETIME_KEYS = ( LIFETIME_UNITS, LIFETIME_VALUE, ) = ( 'units', 'value', ) ATTRIBUTES = ( AUTH_ALGORITHM_ATTR, DESCRIPTION_ATTR, ENCAPSULATION_MODE_ATTR, ENCRYPTION_ALGORITHM_ATTR, LIFETIME_ATTR, NAME_ATTR, PFS_ATTR, TENANT_ID, TRANSFORM_PROTOCOL_ATTR, ) = ( 'auth_algorithm', 'description', 'encapsulation_mode', 'encryption_algorithm', 'lifetime', 'name', 'pfs', 'tenant_id', 'transform_protocol', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name for the ipsec policy.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the ipsec policy.'), update_allowed=True ), TRANSFORM_PROTOCOL: properties.Schema( properties.Schema.STRING, _('Transform protocol for the ipsec policy.'), default='esp', constraints=[ constraints.AllowedValues(['esp', 'ah', 'ah-esp']), ] ), ENCAPSULATION_MODE: properties.Schema( properties.Schema.STRING, _('Encapsulation mode for the ipsec policy.'), default='tunnel', constraints=[ constraints.AllowedValues(['tunnel', 'transport']), ] ), AUTH_ALGORITHM: properties.Schema( properties.Schema.STRING, _('Authentication hash algorithm for the ipsec policy.'), default='sha1', constraints=[ constraints.AllowedValues(['sha1']), ] ), ENCRYPTION_ALGORITHM: properties.Schema( properties.Schema.STRING, _('Encryption algorithm for the ipsec policy.'), default='aes-128', constraints=[ constraints.AllowedValues(['3des', 'aes-128', 'aes-192', 'aes-256']), ] ), LIFETIME: properties.Schema( properties.Schema.MAP, _('Safety assessment lifetime configuration for the ipsec ' 'policy.'), schema={ LIFETIME_UNITS: properties.Schema( properties.Schema.STRING, _('Safety assessment lifetime units.'), default='seconds', constraints=[ constraints.AllowedValues(['seconds', 'kilobytes']), ] ), LIFETIME_VALUE: properties.Schema( properties.Schema.INTEGER, _('Safety assessment lifetime value in specified ' 'units.'), default=3600 ), } ), PFS: properties.Schema( properties.Schema.STRING, _('Perfect forward secrecy for the ipsec policy.'), default='group5', constraints=[ constraints.AllowedValues(['group2', 'group5', 'group14']), ] ), } attributes_schema = { AUTH_ALGORITHM_ATTR: attributes.Schema( _('The authentication hash algorithm of the ipsec policy.'), type=attributes.Schema.STRING ), DESCRIPTION_ATTR: attributes.Schema( _('The description of the ipsec policy.'), type=attributes.Schema.STRING ), ENCAPSULATION_MODE_ATTR: attributes.Schema( _('The encapsulation mode of the ipsec policy.'), type=attributes.Schema.STRING ), ENCRYPTION_ALGORITHM_ATTR: attributes.Schema( _('The encryption algorithm of the ipsec policy.'), type=attributes.Schema.STRING ), LIFETIME_ATTR: attributes.Schema( _('The safety assessment lifetime configuration of the ipsec ' 'policy.'), type=attributes.Schema.MAP ), NAME_ATTR: attributes.Schema( _('The name of the ipsec policy.'), type=attributes.Schema.STRING ), PFS_ATTR: attributes.Schema( _('The perfect forward secrecy of the ipsec policy.'), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _('The unique identifier of the tenant owning the ipsec policy.'), type=attributes.Schema.STRING ), TRANSFORM_PROTOCOL_ATTR: attributes.Schema( _('The transform protocol of 
the ipsec policy.'), type=attributes.Schema.STRING ), } def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) ipsecpolicy = self.client().create_ipsecpolicy( {'ipsecpolicy': props})['ipsecpolicy'] self.resource_id_set(ipsecpolicy['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().update_ipsecpolicy(self.resource_id, {'ipsecpolicy': prop_diff}) def handle_delete(self): try: self.client().delete_ipsecpolicy(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True def resource_mapping(): return { 'OS::Neutron::VPNService': VPNService, 'OS::Neutron::IPsecSiteConnection': IPsecSiteConnection, 'OS::Neutron::IKEPolicy': IKEPolicy, 'OS::Neutron::IPsecPolicy': IPsecPolicy, } heat-10.0.2/heat/engine/resources/openstack/neutron/network_gateway.py0000666000175000017500000002266213343562351026221 0ustar zuulzuul00000000000000# # Copyright 2013 NTT Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class NetworkGateway(neutron.NeutronResource): """Network Gateway resource in Neutron. Resource for connecting internal networks with specified devices.
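    For example (an illustrative sketch; the device id, interface name and
    network name below are placeholders)::

      my_gateway:
        type: OS::Neutron::NetworkGateway
        properties:
          name: my-gateway
          devices:
            - id: 7317a897-5d48-4ae4-94f9-71bc2c1e6fcb
              interface_name: breth1
          connections:
            - network: my-network
              segmentation_type: vlan
              segmentation_id: 101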
""" support_status = support.SupportStatus(version='2014.1') entity = 'network_gateway' PROPERTIES = ( NAME, DEVICES, CONNECTIONS, ) = ( 'name', 'devices', 'connections', ) ATTRIBUTES = ( DEFAULT, ) = ( 'default', ) _DEVICES_KEYS = ( ID, INTERFACE_NAME, ) = ( 'id', 'interface_name', ) _CONNECTIONS_KEYS = ( NETWORK_ID, NETWORK, SEGMENTATION_TYPE, SEGMENTATION_ID, ) = ( 'network_id', 'network', 'segmentation_type', 'segmentation_id', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, description=_('The name of the network gateway.'), update_allowed=True ), DEVICES: properties.Schema( properties.Schema.LIST, description=_('Device info for this network gateway.'), required=True, constraints=[constraints.Length(min=1)], update_allowed=True, schema=properties.Schema( properties.Schema.MAP, schema={ ID: properties.Schema( properties.Schema.STRING, description=_('The device id for the network ' 'gateway.'), required=True ), INTERFACE_NAME: properties.Schema( properties.Schema.STRING, description=_('The interface name for the ' 'network gateway.'), required=True ) } ) ), CONNECTIONS: properties.Schema( properties.Schema.LIST, description=_('Connection info for this network gateway.'), default={}, update_allowed=True, schema=properties.Schema( properties.Schema.MAP, schema={ NETWORK_ID: properties.Schema( properties.Schema.STRING, support_status=support.SupportStatus( status=support.HIDDEN, message=_('Use property %s.') % NETWORK, version='5.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2' ) ), constraints=[ constraints.CustomConstraint('neutron.network') ], ), NETWORK: properties.Schema( properties.Schema.STRING, description=_( 'The internal network to connect on ' 'the network gateway.'), support_status=support.SupportStatus(version='2014.2'), required=True, constraints=[ constraints.CustomConstraint('neutron.network') ], ), SEGMENTATION_TYPE: properties.Schema( properties.Schema.STRING, description=_( 'L2 segmentation strategy on the external ' 'side of the network gateway.'), default='flat', constraints=[constraints.AllowedValues( ('flat', 'vlan'))] ), SEGMENTATION_ID: properties.Schema( properties.Schema.INTEGER, description=_( 'The id for L2 segment on the external side ' 'of the network gateway. 
Must be specified ' 'when using vlan.'), constraints=[constraints.Range(0, 4094)] ) } ) ) } attributes_schema = { DEFAULT: attributes.Schema( _("A boolean value of the default flag."), type=attributes.Schema.STRING ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.CONNECTIONS, self.NETWORK], value_name=self.NETWORK_ID ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.CONNECTIONS, self.NETWORK], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='network' ) ] def validate(self): """Validate any of the provided params.""" super(NetworkGateway, self).validate() connections = self.properties[self.CONNECTIONS] for connection in connections: segmentation_type = connection[self.SEGMENTATION_TYPE] segmentation_id = connection.get(self.SEGMENTATION_ID) if segmentation_type == 'vlan' and segmentation_id is None: msg = _("segmentation_id must be specified when using vlan") raise exception.StackValidationFailed(message=msg) if segmentation_type == 'flat' and segmentation_id: msg = _( "segmentation_id must be 0 or unspecified " "when using flat") raise exception.StackValidationFailed(message=msg) def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) connections = props.pop(self.CONNECTIONS) ret = self.client().create_network_gateway( {'network_gateway': props})['network_gateway'] self.resource_id_set(ret['id']) for connection in connections: if self.NETWORK in connection: connection['network_id'] = connection.pop(self.NETWORK) self.client().connect_network_gateway( ret['id'], connection ) def handle_delete(self): if not self.resource_id: return connections = self.properties[self.CONNECTIONS] for connection in connections: with self.client_plugin().ignore_not_found: if self.NETWORK in connection: connection['network_id'] = connection.pop(self.NETWORK) self.client().disconnect_network_gateway( self.resource_id, connection ) try: self.client().delete_network_gateway(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True def handle_update(self, json_snippet, tmpl_diff, prop_diff): connections = None if self.CONNECTIONS in prop_diff: connections = prop_diff.pop(self.CONNECTIONS) if self.DEVICES in prop_diff: self.handle_delete() self.properties.data.update(prop_diff) self.handle_create() return if prop_diff: self.prepare_update_properties(prop_diff) self.client().update_network_gateway( self.resource_id, {'network_gateway': prop_diff}) if connections: for connection in self.properties[self.CONNECTIONS]: with self.client_plugin().ignore_not_found: if self.NETWORK in connection: connection[ 'network_id'] = connection.pop(self.NETWORK) self.client().disconnect_network_gateway( self.resource_id, connection ) for connection in connections: if self.NETWORK in connection: connection['network_id'] = connection.pop(self.NETWORK) self.client().connect_network_gateway( self.resource_id, connection ) def resource_mapping(): return { 'OS::Neutron::NetworkGateway': NetworkGateway, } heat-10.0.2/heat/engine/resources/openstack/neutron/floatingip.py0000666000175000017500000004127713343562351025146 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine.resources.openstack.neutron import port from heat.engine.resources.openstack.neutron import router from heat.engine import support from heat.engine import translation class FloatingIP(neutron.NeutronResource): """A resource for managing Neutron floating ips. Floating IP addresses can change their association between routers by action of the user. One of the most common use cases for floating IPs is to provide public IP addresses to a private cloud, where there are a limited number of IP addresses available. Another is for a public cloud user to have a "static" IP address that can be reassigned when an instance is upgraded or moved. """ entity = 'floatingip' PROPERTIES = ( FLOATING_NETWORK_ID, FLOATING_NETWORK, FLOATING_SUBNET, VALUE_SPECS, PORT_ID, FIXED_IP_ADDRESS, FLOATING_IP_ADDRESS, DNS_NAME, DNS_DOMAIN, ) = ( 'floating_network_id', 'floating_network', 'floating_subnet', 'value_specs', 'port_id', 'fixed_ip_address', 'floating_ip_address', 'dns_name', 'dns_domain', ) ATTRIBUTES = ( ROUTER_ID, TENANT_ID, FLOATING_NETWORK_ID_ATTR, FIXED_IP_ADDRESS_ATTR, FLOATING_IP_ADDRESS_ATTR, PORT_ID_ATTR, ) = ( 'router_id', 'tenant_id', 'floating_network_id', 'fixed_ip_address', 'floating_ip_address', 'port_id', ) properties_schema = { FLOATING_NETWORK_ID: properties.Schema( properties.Schema.STRING, support_status=support.SupportStatus( status=support.HIDDEN, version='5.0.0', message=_('Use property %s.') % FLOATING_NETWORK, previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2' ) ), constraints=[ constraints.CustomConstraint('neutron.network') ], ), FLOATING_NETWORK: properties.Schema( properties.Schema.STRING, _('Network to allocate floating IP from.'), support_status=support.SupportStatus(version='2014.2'), required=True, constraints=[ constraints.CustomConstraint('neutron.network') ], ), FLOATING_SUBNET: properties.Schema( properties.Schema.STRING, _('Subnet to allocate floating IP from.'), support_status=support.SupportStatus(version='9.0.0'), constraints=[ constraints.CustomConstraint('neutron.subnet') ], ), VALUE_SPECS: properties.Schema( properties.Schema.MAP, _('Extra parameters to include in the "floatingip" object in the ' 'creation request. Parameters are often specific to installed ' 'hardware or extensions.'), default={} ), PORT_ID: properties.Schema( properties.Schema.STRING, _('ID of an existing port with at least one IP address to ' 'associate with this floating IP.'), update_allowed=True, constraints=[ constraints.CustomConstraint('neutron.port') ] ), FIXED_IP_ADDRESS: properties.Schema( properties.Schema.STRING, _('IP address to use if the port has multiple addresses.'), update_allowed=True, constraints=[ constraints.CustomConstraint('ip_addr') ] ), FLOATING_IP_ADDRESS: properties.Schema( properties.Schema.STRING, _('IP address of the floating IP. 
NOTE: The default policy ' 'setting in Neutron restricts usage of this property to ' 'administrative users only.'), constraints=[ constraints.CustomConstraint('ip_addr') ], support_status=support.SupportStatus(version='5.0.0'), ), DNS_NAME: properties.Schema( properties.Schema.STRING, _('DNS name associated with floating ip.'), update_allowed=True, constraints=[ constraints.CustomConstraint('rel_dns_name') ], support_status=support.SupportStatus(version='7.0.0'), ), DNS_DOMAIN: properties.Schema( properties.Schema.STRING, _('DNS domain associated with floating ip.'), update_allowed=True, constraints=[ constraints.CustomConstraint('dns_domain') ], support_status=support.SupportStatus(version='7.0.0'), ), } attributes_schema = { ROUTER_ID: attributes.Schema( _('ID of the router used as gateway, set when associated with a ' 'port.'), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _('The tenant owning this floating IP.'), type=attributes.Schema.STRING ), FLOATING_NETWORK_ID_ATTR: attributes.Schema( _('ID of the network in which this IP is allocated.'), type=attributes.Schema.STRING ), FIXED_IP_ADDRESS_ATTR: attributes.Schema( _('IP address of the associated port, if specified.'), type=attributes.Schema.STRING, cache_mode=attributes.Schema.CACHE_NONE ), FLOATING_IP_ADDRESS_ATTR: attributes.Schema( _('The allocated address of this IP.'), type=attributes.Schema.STRING ), PORT_ID_ATTR: attributes.Schema( _('ID of the port associated with this IP.'), type=attributes.Schema.STRING, cache_mode=attributes.Schema.CACHE_NONE ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.FLOATING_NETWORK], value_path=[self.FLOATING_NETWORK_ID] ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.FLOATING_NETWORK], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='network' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.FLOATING_SUBNET], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='subnet', ) ] def _add_router_interface_dependencies(self, deps, resource): def port_on_subnet(resource, subnet): if not resource.has_interface('OS::Neutron::Port'): return False fixed_ips = resource.properties.get(port.Port.FIXED_IPS) if not fixed_ips: # During create we have only unresolved value for # functions, so can not use None value for building # correct dependencies. Depend on all RouterInterfaces # when the port has no fixed IP specified, since we # can't safely assume that any are in different # networks. 
if subnet is None: return True p_net = (resource.properties.get(port.Port.NETWORK) or resource.properties.get(port.Port.NETWORK_ID)) if p_net: network = self.client().show_network(p_net)['network'] return subnet in network['subnets'] else: for fixed_ip in resource.properties.get( port.Port.FIXED_IPS): port_subnet = (fixed_ip.get(port.Port.FIXED_IP_SUBNET) or fixed_ip.get(port.Port.FIXED_IP_SUBNET_ID)) if subnet == port_subnet: return True return False interface_subnet = ( resource.properties.get(router.RouterInterface.SUBNET) or resource.properties.get(router.RouterInterface.SUBNET_ID)) for d in deps.graph()[self]: if port_on_subnet(d, interface_subnet): deps += (self, resource) break def add_dependencies(self, deps): super(FloatingIP, self).add_dependencies(deps) for resource in six.itervalues(self.stack): # depend on any RouterGateway in this template with the same # network_id as this floating_network_id if resource.has_interface('OS::Neutron::RouterGateway'): gateway_network = resource.properties.get( router.RouterGateway.NETWORK) or resource.properties.get( router.RouterGateway.NETWORK_ID) floating_network = self.properties[self.FLOATING_NETWORK] if gateway_network == floating_network: deps += (self, resource) # depend on any RouterInterface in this template which interfaces # with the same subnet that this floating IP's port is assigned # to elif resource.has_interface('OS::Neutron::RouterInterface'): self._add_router_interface_dependencies(deps, resource) # depend on Router with EXTERNAL_GATEWAY_NETWORK property # this template with the same network_id as this # floating_network_id elif resource.has_interface('OS::Neutron::Router'): gateway = resource.properties.get( router.Router.EXTERNAL_GATEWAY) if gateway: gateway_network = gateway.get( router.Router.EXTERNAL_GATEWAY_NETWORK) floating_network = self.properties[self.FLOATING_NETWORK] if gateway_network == floating_network: deps += (self, resource) def validate(self): super(FloatingIP, self).validate() # fixed_ip_address cannot be specified without a port_id if self.properties[self.PORT_ID] is None and self.properties[ self.FIXED_IP_ADDRESS] is not None: raise exception.ResourcePropertyDependency( prop1=self.FIXED_IP_ADDRESS, prop2=self.PORT_ID) def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) props['floating_network_id'] = props.pop(self.FLOATING_NETWORK) if self.FLOATING_SUBNET in props: props['subnet_id'] = props.pop(self.FLOATING_SUBNET) fip = self.client().create_floatingip({ 'floatingip': props})['floatingip'] self.resource_id_set(fip['id']) def handle_delete(self): with self.client_plugin().ignore_not_found: self.client().delete_floatingip(self.resource_id) return True def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: port_id = prop_diff.get(self.PORT_ID, self.properties[self.PORT_ID]) fixed_ip_address = prop_diff.get( self.FIXED_IP_ADDRESS, self.properties[self.FIXED_IP_ADDRESS]) request_body = { 'floatingip': { 'port_id': port_id, 'fixed_ip_address': fixed_ip_address}} self.client().update_floatingip(self.resource_id, request_body) class FloatingIPAssociation(neutron.NeutronResource): """A resource for associating floating ips and ports. This resource allows associating a floating IP to a port with at least one IP address to associate with this floating IP. 
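    For example (an illustrative sketch; the referenced floating IP and
    port resources are assumed to be defined elsewhere in the same
    template)::

      association:
        type: OS::Neutron::FloatingIPAssociation
        properties:
          floatingip_id: { get_resource: my_floating_ip }
          port_id: { get_resource: my_port }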
""" PROPERTIES = ( FLOATINGIP_ID, PORT_ID, FIXED_IP_ADDRESS, ) = ( 'floatingip_id', 'port_id', 'fixed_ip_address', ) properties_schema = { FLOATINGIP_ID: properties.Schema( properties.Schema.STRING, _('ID of the floating IP to associate.'), required=True, update_allowed=True ), PORT_ID: properties.Schema( properties.Schema.STRING, _('ID of an existing port with at least one IP address to ' 'associate with this floating IP.'), required=True, update_allowed=True, constraints=[ constraints.CustomConstraint('neutron.port') ] ), FIXED_IP_ADDRESS: properties.Schema( properties.Schema.STRING, _('IP address to use if the port has multiple addresses.'), update_allowed=True, constraints=[ constraints.CustomConstraint('ip_addr') ] ), } def add_dependencies(self, deps): super(FloatingIPAssociation, self).add_dependencies(deps) for resource in six.itervalues(self.stack): if resource.has_interface('OS::Neutron::RouterInterface'): def port_on_subnet(resource, subnet): if not resource.has_interface('OS::Neutron::Port'): return False fixed_ips = resource.properties.get( port.Port.FIXED_IPS) or [] for fixed_ip in fixed_ips: port_subnet = ( fixed_ip.get(port.Port.FIXED_IP_SUBNET) or fixed_ip.get(port.Port.FIXED_IP_SUBNET_ID)) return subnet == port_subnet return False interface_subnet = ( resource.properties.get(router.RouterInterface.SUBNET) or resource.properties.get(router.RouterInterface.SUBNET_ID)) for d in deps.graph()[self]: if port_on_subnet(d, interface_subnet): deps += (self, resource) break def handle_create(self): props = self.prepare_properties(self.properties, self.name) floatingip_id = props.pop(self.FLOATINGIP_ID) self.client().update_floatingip(floatingip_id, { 'floatingip': props}) self.resource_id_set(self.id) def handle_delete(self): if not self.resource_id: return with self.client_plugin().ignore_not_found: self.client().update_floatingip( self.properties[self.FLOATINGIP_ID], {'floatingip': {'port_id': None}}) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: floatingip_id = self.properties[self.FLOATINGIP_ID] port_id = self.properties[self.PORT_ID] # if the floatingip_id is changed, disassociate the port which # associated with the old floatingip_id if self.FLOATINGIP_ID in prop_diff: with self.client_plugin().ignore_not_found: self.client().update_floatingip( floatingip_id, {'floatingip': {'port_id': None}}) # associate the floatingip with the new port floatingip_id = (prop_diff.get(self.FLOATINGIP_ID) or floatingip_id) port_id = prop_diff.get(self.PORT_ID) or port_id fixed_ip_address = (prop_diff.get(self.FIXED_IP_ADDRESS) or self.properties[self.FIXED_IP_ADDRESS]) request_body = { 'floatingip': { 'port_id': port_id, 'fixed_ip_address': fixed_ip_address}} self.client().update_floatingip(floatingip_id, request_body) self.resource_id_set(self.id) def resource_mapping(): return { 'OS::Neutron::FloatingIP': FloatingIP, 'OS::Neutron::FloatingIPAssociation': FloatingIPAssociation, } heat-10.0.2/heat/engine/resources/openstack/neutron/loadbalancer.py0000666000175000017500000007532613343562351025423 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine.clients import progress from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation DEPR_MSG = _('Neutron LBaaS v1 is deprecated in the Liberty release ' 'and is planned to be removed in a future release. ' 'Going forward, the LBaaS V2 should be used.') class HealthMonitor(neutron.NeutronResource): """A resource for managing health monitors for loadbalancers in Neutron. A health monitor is used to determine whether or not back-end members of the VIP's pool are usable for processing a request. A pool can have several health monitors associated with it. There are different types of health monitors supported by the OpenStack LBaaS service: - PING: used to ping the members using ICMP. - TCP: used to connect to the members using TCP. - HTTP: used to send an HTTP request to the member. - HTTPS: used to send a secure HTTP request to the member. """ required_service_extension = 'lbaas' entity = 'health_monitor' support_status = support.SupportStatus( status=support.HIDDEN, version='9.0.0', message=_('Use LBaaS V2 instead.'), previous_status=support.SupportStatus( status=support.DEPRECATED, message=DEPR_MSG, version='7.0.0' ) ) PROPERTIES = ( DELAY, TYPE, MAX_RETRIES, TIMEOUT, ADMIN_STATE_UP, HTTP_METHOD, EXPECTED_CODES, URL_PATH, ) = ( 'delay', 'type', 'max_retries', 'timeout', 'admin_state_up', 'http_method', 'expected_codes', 'url_path', ) ATTRIBUTES = ( ADMIN_STATE_UP_ATTR, DELAY_ATTR, EXPECTED_CODES_ATTR, HTTP_METHOD_ATTR, MAX_RETRIES_ATTR, TIMEOUT_ATTR, TYPE_ATTR, URL_PATH_ATTR, TENANT_ID, ) = ( 'admin_state_up', 'delay', 'expected_codes', 'http_method', 'max_retries', 'timeout', 'type', 'url_path', 'tenant_id', ) properties_schema = { DELAY: properties.Schema( properties.Schema.INTEGER, _('The minimum time in milliseconds between regular connections ' 'of the member.'), required=True, update_allowed=True ), TYPE: properties.Schema( properties.Schema.STRING, _('One of predefined health monitor types.'), required=True, constraints=[ constraints.AllowedValues(['PING', 'TCP', 'HTTP', 'HTTPS']), ] ), MAX_RETRIES: properties.Schema( properties.Schema.INTEGER, _('Number of permissible connection failures before changing the ' 'member status to INACTIVE.'), required=True, update_allowed=True ), TIMEOUT: properties.Schema( properties.Schema.INTEGER, _('Maximum number of milliseconds for a monitor to wait for a ' 'connection to be established before it times out.'), required=True, update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of the health monitor.'), default=True, update_allowed=True ), HTTP_METHOD: properties.Schema( properties.Schema.STRING, _('The HTTP method used for requests by the monitor of type ' 'HTTP.'), update_allowed=True ), EXPECTED_CODES: properties.Schema( properties.Schema.STRING, _('The list of HTTP status codes expected in response from the ' 'member to declare it healthy.'), update_allowed=True ), URL_PATH: properties.Schema( properties.Schema.STRING, _('The HTTP path used in the HTTP request used by the monitor to ' 'test a member health.'), update_allowed=True ), } attributes_schema = { ADMIN_STATE_UP_ATTR: attributes.Schema( 
_('The administrative state of this health monitor.'), type=attributes.Schema.STRING ), DELAY_ATTR: attributes.Schema( _('The minimum time in milliseconds between regular connections ' 'of the member.'), type=attributes.Schema.STRING ), EXPECTED_CODES_ATTR: attributes.Schema( _('The list of HTTP status codes expected in response ' 'from the member to declare it healthy.'), type=attributes.Schema.LIST ), HTTP_METHOD_ATTR: attributes.Schema( _('The HTTP method used for requests by the monitor of ' 'type HTTP.'), type=attributes.Schema.STRING ), MAX_RETRIES_ATTR: attributes.Schema( _('Number of permissible connection failures before changing ' 'the member status to INACTIVE.'), type=attributes.Schema.STRING ), TIMEOUT_ATTR: attributes.Schema( _('Maximum number of milliseconds for a monitor to wait for a ' 'connection to be established before it times out.'), type=attributes.Schema.STRING ), TYPE_ATTR: attributes.Schema( _('One of predefined health monitor types.'), type=attributes.Schema.STRING ), URL_PATH_ATTR: attributes.Schema( _('The HTTP path used in the HTTP request used by the monitor ' 'to test a member health.'), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _('Tenant owning the health monitor.'), type=attributes.Schema.STRING ), } def handle_create(self): properties = self.prepare_properties( self.properties, self.physical_resource_name()) health_monitor = self.client().create_health_monitor( {'health_monitor': properties})['health_monitor'] self.resource_id_set(health_monitor['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().update_health_monitor( self.resource_id, {'health_monitor': prop_diff}) def handle_delete(self): try: self.client().delete_health_monitor(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True class Pool(neutron.NeutronResource): """A resource for managing load balancer pools in Neutron. A load balancing pool is a logical set of devices, such as web servers, that you group together to receive and process traffic. The loadbalancing function chooses a member of the pool according to the configured load balancing method to handle the new requests or connections received on the VIP address. There is only one pool for a VIP. 
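    A minimal pool could be declared as follows (an illustrative sketch;
    the subnet name and port number are placeholders, and since this
    LBaaS v1 resource is hidden, new templates should use LBaaS v2
    instead)::

      my_pool:
        type: OS::Neutron::Pool
        properties:
          protocol: HTTP
          subnet: my-subnet
          lb_method: ROUND_ROBIN
          vip:
            protocol_port: 80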
""" required_service_extension = 'lbaas' entity = 'pool' support_status = support.SupportStatus( status=support.HIDDEN, version='9.0.0', message=_('Use LBaaS V2 instead.'), previous_status=support.SupportStatus( status=support.DEPRECATED, message=DEPR_MSG, version='7.0.0' ) ) PROPERTIES = ( PROTOCOL, SUBNET_ID, SUBNET, LB_METHOD, NAME, DESCRIPTION, ADMIN_STATE_UP, VIP, MONITORS, PROVIDER, ) = ( 'protocol', 'subnet_id', 'subnet', 'lb_method', 'name', 'description', 'admin_state_up', 'vip', 'monitors', 'provider', ) _VIP_KEYS = ( VIP_NAME, VIP_DESCRIPTION, VIP_SUBNET, VIP_ADDRESS, VIP_CONNECTION_LIMIT, VIP_PROTOCOL_PORT, VIP_SESSION_PERSISTENCE, VIP_ADMIN_STATE_UP, ) = ( 'name', 'description', 'subnet', 'address', 'connection_limit', 'protocol_port', 'session_persistence', 'admin_state_up', ) _VIP_SESSION_PERSISTENCE_KEYS = ( VIP_SESSION_PERSISTENCE_TYPE, VIP_SESSION_PERSISTENCE_COOKIE_NAME, ) = ( 'type', 'cookie_name', ) ATTRIBUTES = ( ADMIN_STATE_UP_ATTR, NAME_ATTR, PROTOCOL_ATTR, SUBNET_ID_ATTR, LB_METHOD_ATTR, DESCRIPTION_ATTR, TENANT_ID, VIP_ATTR, PROVIDER_ATTR, ) = ( 'admin_state_up', 'name', 'protocol', 'subnet_id', 'lb_method', 'description', 'tenant_id', 'vip', 'provider', ) properties_schema = { PROTOCOL: properties.Schema( properties.Schema.STRING, _('Protocol for balancing.'), required=True, constraints=[ constraints.AllowedValues(['TCP', 'HTTP', 'HTTPS']), ] ), SUBNET_ID: properties.Schema( properties.Schema.STRING, support_status=support.SupportStatus( status=support.HIDDEN, version='5.0.0', message=_('Use property %s.') % SUBNET, previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2' ) ), constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), SUBNET: properties.Schema( properties.Schema.STRING, _('The subnet for the port on which the members ' 'of the pool will be connected.'), support_status=support.SupportStatus(version='2014.2'), required=True, constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), LB_METHOD: properties.Schema( properties.Schema.STRING, _('The algorithm used to distribute load between the members of ' 'the pool.'), required=True, constraints=[ constraints.AllowedValues(['ROUND_ROBIN', 'LEAST_CONNECTIONS', 'SOURCE_IP']), ], update_allowed=True ), NAME: properties.Schema( properties.Schema.STRING, _('Name of the pool.') ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the pool.'), update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of this pool.'), default=True, update_allowed=True ), PROVIDER: properties.Schema( properties.Schema.STRING, _('LBaaS provider to implement this load balancer instance.'), support_status=support.SupportStatus(version='5.0.0'), constraints=[ constraints.CustomConstraint('neutron.lb.provider') ], ), VIP: properties.Schema( properties.Schema.MAP, _('IP address and port of the pool.'), schema={ VIP_NAME: properties.Schema( properties.Schema.STRING, _('Name of the vip.') ), VIP_DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the vip.') ), VIP_SUBNET: properties.Schema( properties.Schema.STRING, _('Subnet of the vip.'), constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), VIP_ADDRESS: properties.Schema( properties.Schema.STRING, _('IP address of the vip.'), constraints=[ constraints.CustomConstraint('ip_addr') ] ), VIP_CONNECTION_LIMIT: properties.Schema( properties.Schema.INTEGER, _('The maximum number of connections per second ' 'allowed for the vip.') ), 
VIP_PROTOCOL_PORT: properties.Schema( properties.Schema.INTEGER, _('TCP port on which to listen for client traffic ' 'that is associated with the vip address.'), required=True ), VIP_SESSION_PERSISTENCE: properties.Schema( properties.Schema.MAP, _('Configuration of session persistence.'), schema={ VIP_SESSION_PERSISTENCE_TYPE: properties.Schema( properties.Schema.STRING, _('Method of implementation of session ' 'persistence feature.'), required=True, constraints=[constraints.AllowedValues( ['SOURCE_IP', 'HTTP_COOKIE', 'APP_COOKIE'] )] ), VIP_SESSION_PERSISTENCE_COOKIE_NAME: properties.Schema( properties.Schema.STRING, _('Name of the cookie, ' 'required if type is APP_COOKIE.') ) } ), VIP_ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of this vip.'), default=True ), }, required=True ), MONITORS: properties.Schema( properties.Schema.LIST, _('List of health monitors associated with the pool.'), default=[], update_allowed=True ), } attributes_schema = { ADMIN_STATE_UP_ATTR: attributes.Schema( _('The administrative state of this pool.'), type=attributes.Schema.STRING ), NAME_ATTR: attributes.Schema( _('Name of the pool.'), type=attributes.Schema.STRING ), PROTOCOL_ATTR: attributes.Schema( _('Protocol to balance.'), type=attributes.Schema.STRING ), SUBNET_ID_ATTR: attributes.Schema( _('The subnet for the port on which the members of the pool ' 'will be connected.'), type=attributes.Schema.STRING ), LB_METHOD_ATTR: attributes.Schema( _('The algorithm used to distribute load between the members ' 'of the pool.'), type=attributes.Schema.STRING ), DESCRIPTION_ATTR: attributes.Schema( _('Description of the pool.'), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _('Tenant owning the pool.'), type=attributes.Schema.STRING ), VIP_ATTR: attributes.Schema( _('Vip associated with the pool.'), type=attributes.Schema.MAP ), PROVIDER_ATTR: attributes.Schema( _('Provider implementing this load balancer instance.'), support_status=support.SupportStatus(version='5.0.0'), type=attributes.Schema.STRING, ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.SUBNET], value_path=[self.SUBNET_ID] ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.SUBNET], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='subnet' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.VIP, self.VIP_SUBNET], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='subnet' ) ] def validate(self): res = super(Pool, self).validate() if res: return res session_p = self.properties[self.VIP].get(self.VIP_SESSION_PERSISTENCE) if session_p is None: # session persistence is not configured, skip validation return persistence_type = session_p[self.VIP_SESSION_PERSISTENCE_TYPE] if persistence_type == 'APP_COOKIE': if session_p.get(self.VIP_SESSION_PERSISTENCE_COOKIE_NAME): return msg = _('Property cookie_name is required, when ' 'session_persistence type is set to APP_COOKIE.') raise exception.StackValidationFailed(message=msg) def handle_create(self): properties = self.prepare_properties( self.properties, self.physical_resource_name()) subnet_id = properties.pop(self.SUBNET) properties['subnet_id'] = subnet_id vip_properties = properties.pop(self.VIP) monitors = properties.pop(self.MONITORS) pool = self.client().create_pool({'pool': properties})['pool'] self.resource_id_set(pool['id']) for monitor in 
monitors: self.client().associate_health_monitor( pool['id'], {'health_monitor': {'id': monitor}}) vip_arguments = self.prepare_properties( vip_properties, '%s.vip' % (self.name,)) session_p = vip_arguments.get(self.VIP_SESSION_PERSISTENCE) if session_p is not None: prepared_props = self.prepare_properties(session_p, None) vip_arguments['session_persistence'] = prepared_props vip_arguments['protocol'] = self.properties[self.PROTOCOL] if vip_arguments.get(self.VIP_SUBNET) is None: vip_arguments['subnet_id'] = subnet_id else: vip_arguments['subnet_id'] = vip_arguments.pop(self.VIP_SUBNET) vip_arguments['pool_id'] = pool['id'] vip = self.client().create_vip({'vip': vip_arguments})['vip'] self.metadata_set({'vip': vip['id']}) def check_create_complete(self, data): attributes = self._show_resource() status = attributes['status'] if status == 'PENDING_CREATE': return False elif status == 'ACTIVE': vip_attributes = self.client().show_vip( self.metadata_get()['vip'])['vip'] vip_status = vip_attributes['status'] if vip_status == 'PENDING_CREATE': return False if vip_status == 'ACTIVE': return True if vip_status == 'ERROR': raise exception.ResourceInError( resource_status=vip_status, status_reason=_('error in vip')) raise exception.ResourceUnknownStatus( resource_status=vip_status, result=_('Pool creation failed due to vip')) elif status == 'ERROR': raise exception.ResourceInError( resource_status=status, status_reason=_('error in pool')) else: raise exception.ResourceUnknownStatus( resource_status=status, result=_('Pool creation failed')) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: if self.MONITORS in prop_diff: monitors = set(prop_diff.pop(self.MONITORS)) old_monitors = set(self.properties[self.MONITORS]) for monitor in old_monitors - monitors: self.client().disassociate_health_monitor( self.resource_id, monitor) for monitor in monitors - old_monitors: self.client().associate_health_monitor( self.resource_id, {'health_monitor': {'id': monitor}}) if prop_diff: self.client().update_pool(self.resource_id, {'pool': prop_diff}) def _resolve_attribute(self, name): if name == self.VIP_ATTR: return self.client().show_vip(self.metadata_get()['vip'])['vip'] return super(Pool, self)._resolve_attribute(name) def handle_delete(self): if not self.resource_id: prg = progress.PoolDeleteProgress(True) return prg prg = progress.PoolDeleteProgress() if not self.metadata_get(): prg.vip['delete_called'] = True prg.vip['deleted'] = True return prg def _delete_vip(self): return self._not_found_in_call( self.client().delete_vip, self.metadata_get()['vip']) def _check_vip_deleted(self): return self._not_found_in_call( self.client().show_vip, self.metadata_get()['vip']) def _delete_pool(self): return self._not_found_in_call( self.client().delete_pool, self.resource_id) def check_delete_complete(self, prg): if not prg.vip['delete_called']: prg.vip['deleted'] = self._delete_vip() prg.vip['delete_called'] = True return False if not prg.vip['deleted']: prg.vip['deleted'] = self._check_vip_deleted() return False if not prg.pool['delete_called']: prg.pool['deleted'] = self._delete_pool() prg.pool['delete_called'] = True return prg.pool['deleted'] if not prg.pool['deleted']: prg.pool['deleted'] = super(Pool, self).check_delete_complete(True) return prg.pool['deleted'] return True class PoolMember(neutron.NeutronResource): """A resource to handle loadbalancer members. A pool member represents the application running on a backend server.
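    For example (an illustrative sketch; the address and port values are
    placeholders, and the pool resource is assumed to be defined elsewhere
    in the same template)::

      member:
        type: OS::Neutron::PoolMember
        properties:
          pool_id: { get_resource: my_pool }
          address: 10.0.0.4
          protocol_port: 8080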
""" required_service_extension = 'lbaas' entity = 'member' support_status = support.SupportStatus( status=support.HIDDEN, version='9.0.0', message=_('Use LBaaS V2 instead.'), previous_status=support.SupportStatus( status=support.DEPRECATED, message=DEPR_MSG, version='7.0.0', previous_status=support.SupportStatus(version='2014.1') ) ) PROPERTIES = ( POOL_ID, ADDRESS, PROTOCOL_PORT, WEIGHT, ADMIN_STATE_UP, ) = ( 'pool_id', 'address', 'protocol_port', 'weight', 'admin_state_up', ) ATTRIBUTES = ( ADMIN_STATE_UP_ATTR, TENANT_ID, WEIGHT_ATTR, ADDRESS_ATTR, POOL_ID_ATTR, PROTOCOL_PORT_ATTR, ) = ( 'admin_state_up', 'tenant_id', 'weight', 'address', 'pool_id', 'protocol_port', ) properties_schema = { POOL_ID: properties.Schema( properties.Schema.STRING, _('The ID of the load balancing pool.'), required=True, update_allowed=True ), ADDRESS: properties.Schema( properties.Schema.STRING, _('IP address of the pool member on the pool network.'), required=True, constraints=[ constraints.CustomConstraint('ip_addr') ] ), PROTOCOL_PORT: properties.Schema( properties.Schema.INTEGER, _('TCP port on which the pool member listens for requests or ' 'connections.'), required=True, constraints=[ constraints.Range(0, 65535), ] ), WEIGHT: properties.Schema( properties.Schema.INTEGER, _('Weight of pool member in the pool (default to 1).'), constraints=[ constraints.Range(0, 256), ], update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of the pool member.'), default=True ), } attributes_schema = { ADMIN_STATE_UP_ATTR: attributes.Schema( _('The administrative state of this pool member.'), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _('Tenant owning the pool member.'), type=attributes.Schema.STRING ), WEIGHT_ATTR: attributes.Schema( _('Weight of the pool member in the pool.'), type=attributes.Schema.STRING ), ADDRESS_ATTR: attributes.Schema( _('IP address of the pool member.'), type=attributes.Schema.STRING ), POOL_ID_ATTR: attributes.Schema( _('The ID of the load balancing pool.'), type=attributes.Schema.STRING ), PROTOCOL_PORT_ATTR: attributes.Schema( _('TCP port on which the pool member listens for requests or ' 'connections.'), type=attributes.Schema.STRING ), } def handle_create(self): pool = self.properties[self.POOL_ID] protocol_port = self.properties[self.PROTOCOL_PORT] address = self.properties[self.ADDRESS] admin_state_up = self.properties[self.ADMIN_STATE_UP] weight = self.properties[self.WEIGHT] params = { 'pool_id': pool, 'address': address, 'protocol_port': protocol_port, 'admin_state_up': admin_state_up } if weight is not None: params['weight'] = weight member = self.client().create_member({'member': params})['member'] self.resource_id_set(member['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().update_member( self.resource_id, {'member': prop_diff}) def handle_delete(self): try: self.client().delete_member(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True class LoadBalancer(resource.Resource): """A resource to link a neutron pool with servers. A loadbalancer allows linking a neutron pool with specified servers to some port. 
""" required_service_extension = 'lbaas' support_status = support.SupportStatus( status=support.HIDDEN, version='9.0.0', message=_('Use LBaaS V2 instead.'), previous_status=support.SupportStatus( status=support.DEPRECATED, message=DEPR_MSG, version='7.0.0', previous_status=support.SupportStatus(version='2014.1') ) ) PROPERTIES = ( POOL_ID, PROTOCOL_PORT, MEMBERS, ) = ( 'pool_id', 'protocol_port', 'members', ) properties_schema = { POOL_ID: properties.Schema( properties.Schema.STRING, _('The ID of the load balancing pool.'), required=True, update_allowed=True ), PROTOCOL_PORT: properties.Schema( properties.Schema.INTEGER, _('Port number on which the servers are running on the members.'), required=True, constraints=[ constraints.Range(0, 65535), ] ), MEMBERS: properties.Schema( properties.Schema.LIST, _('The list of Nova server IDs load balanced.'), update_allowed=True ), } default_client_name = 'neutron' def handle_create(self): pool = self.properties[self.POOL_ID] protocol_port = self.properties[self.PROTOCOL_PORT] for member in self.properties[self.MEMBERS] or []: address = self.client_plugin('nova').server_to_ipaddress(member) lb_member = self.client().create_member({ 'member': { 'pool_id': pool, 'address': address, 'protocol_port': protocol_port}})['member'] self.data_set(member, lb_member['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): new_props = json_snippet.properties(self.properties_schema, self.context) # Valid use cases are: # - Membership controlled by members property in template # - Empty members property in template; membership controlled by # "updates" triggered from autoscaling group. # Mixing the two will lead to undefined behaviour. if (self.MEMBERS in prop_diff and (self.properties[self.MEMBERS] is not None or new_props[self.MEMBERS] is not None)): members = set(new_props[self.MEMBERS] or []) rd_members = self.data() old_members = set(rd_members) for member in old_members - members: member_id = rd_members[member] with self.client_plugin().ignore_not_found: self.client().delete_member(member_id) self.data_delete(member) pool = self.properties[self.POOL_ID] protocol_port = self.properties[self.PROTOCOL_PORT] for member in members - old_members: address = self.client_plugin('nova').server_to_ipaddress( member) lb_member = self.client().create_member({ 'member': { 'pool_id': pool, 'address': address, 'protocol_port': protocol_port}})['member'] self.data_set(member, lb_member['id']) def handle_delete(self): # FIXME(pshchelo): this deletes members in a tight loop, # so is prone to OverLimit bug similar to LP 1265937 for member, member_id in self.data().items(): with self.client_plugin().ignore_not_found: self.client().delete_member(member_id) self.data_delete(member) def resource_mapping(): return { 'OS::Neutron::HealthMonitor': HealthMonitor, 'OS::Neutron::Pool': Pool, 'OS::Neutron::PoolMember': PoolMember, 'OS::Neutron::LoadBalancer': LoadBalancer, } heat-10.0.2/heat/engine/resources/openstack/neutron/rbac_policy.py0000666000175000017500000001330313343562351025265 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class RBACPolicy(neutron.NeutronResource): """A resource for managing RBAC policy in Neutron. This resource creates and manages Neutron RBAC policy, which allows sharing Neutron networks and qos-policies with subsets of tenants. """ support_status = support.SupportStatus(version='6.0.0') required_service_extension = 'rbac-policies' entity = 'rbac_policy' PROPERTIES = ( OBJECT_TYPE, TARGET_TENANT, ACTION, OBJECT_ID, TENANT_ID ) = ( 'object_type', 'target_tenant', 'action', 'object_id', 'tenant_id' ) OBJECT_TYPE_KEYS = ( OBJECT_NETWORK, OBJECT_QOS_POLICY, ) = ( 'network', 'qos_policy', ) ACTION_KEYS = ( ACCESS_AS_SHARED, ACCESS_AS_EXTERNAL, ) = ( 'access_as_shared', 'access_as_external', ) # Change it when neutron supports more functionality in the future. SUPPORTED_TYPES_ACTIONS = { OBJECT_NETWORK: [ACCESS_AS_SHARED, ACCESS_AS_EXTERNAL], OBJECT_QOS_POLICY: [ACCESS_AS_SHARED], } properties_schema = { OBJECT_TYPE: properties.Schema( properties.Schema.STRING, _('Type of the object that RBAC policy affects.'), required=True, constraints=[ constraints.AllowedValues(OBJECT_TYPE_KEYS) ] ), TARGET_TENANT: properties.Schema( properties.Schema.STRING, _('ID of the tenant to which the RBAC policy will be enforced.'), required=True, update_allowed=True ), ACTION: properties.Schema( properties.Schema.STRING, _('Action for the RBAC policy. The allowed actions differ for ' 'different object types - only %(network)s objects can have an ' '%(external)s action.') % {'network': OBJECT_NETWORK, 'external': ACCESS_AS_EXTERNAL}, required=True, constraints=[ constraints.AllowedValues(ACTION_KEYS) ] ), OBJECT_ID: properties.Schema( properties.Schema.STRING, _('ID or name of the RBAC object.'), required=True ), TENANT_ID: properties.Schema( properties.Schema.STRING, _('The owner tenant ID. Only required if the caller has an ' 'administrative role and wants to create an RBAC policy ' 'for another tenant.') ) } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.OBJECT_ID], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity=self._get_resource_name(props[self.OBJECT_TYPE]) ) ] def _get_resource_name(self, object_type): resource_name = object_type if object_type == self.OBJECT_QOS_POLICY: resource_name = 'policy' return resource_name def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) rbac = self.client().create_rbac_policy( {'rbac_policy': props})['rbac_policy'] self.resource_id_set(rbac['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().update_rbac_policy( self.resource_id, {'rbac_policy': prop_diff}) def handle_delete(self): if self.resource_id is not None: with self.client_plugin().ignore_not_found: self.client().delete_rbac_policy(self.resource_id) def validate(self): """Validate the provided properties.""" super(RBACPolicy, self).validate() action = self.properties[self.ACTION] obj_type = self.properties[self.OBJECT_TYPE] # Validate obj_type and action per SUPPORTED_TYPES_ACTIONS.
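        # (For example, access_as_external is only valid for network
        # objects; qos_policy objects support only access_as_shared.)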
if action not in self.SUPPORTED_TYPES_ACTIONS[obj_type]: valid_actions = ', '.join(self.SUPPORTED_TYPES_ACTIONS[obj_type]) msg = (_('Invalid action "%(action)s" for object type ' '%(obj_type)s. Valid actions: %(valid_actions)s') % {'action': action, 'obj_type': obj_type, 'valid_actions': valid_actions}) properties_section = self.properties.error_prefix[0] path = [self.stack.t.RESOURCES, self.t.name, self.stack.t.get_section_name(properties_section), self.ACTION] raise exception.StackValidationFailed(error='Property error', path=path, message=msg) def resource_mapping(): return { 'OS::Neutron::RBACPolicy': RBACPolicy, } heat-10.0.2/heat/engine/resources/openstack/neutron/address_scope.py0000666000175000017500000000657013343562340025623 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support class AddressScope(neutron.NeutronResource): """A resource for Neutron address scope. This resource can be associated with multiple subnet pools in a one-to-many relationship. The subnet pools under an address scope must not overlap. """ required_service_extension = 'address-scope' entity = 'address_scope' support_status = support.SupportStatus(version='6.0.0') PROPERTIES = ( NAME, SHARED, TENANT_ID, IP_VERSION, ) = ( 'name', 'shared', 'tenant_id', 'ip_version', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('The name for the address scope.'), update_allowed=True ), SHARED: properties.Schema( properties.Schema.BOOLEAN, _('Whether the address scope should be shared to other ' 'tenants. Note that the default policy setting ' 'restricts usage of this attribute to administrative ' 'users only, and restricts changing of shared address scope ' 'to unshared with update.'), default=False, update_allowed=True ), TENANT_ID: properties.Schema( properties.Schema.STRING, _('The owner tenant ID of the address scope. 
Only ' 'administrative users can specify a tenant ID ' 'other than their own.'), constraints=[constraints.CustomConstraint('keystone.project')] ), IP_VERSION: properties.Schema( properties.Schema.INTEGER, _('Address family of the address scope, which is 4 or 6.'), default=4, constraints=[ constraints.AllowedValues([4, 6]), ] ), } def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) address_scope = self.client().create_address_scope( {'address_scope': props})['address_scope'] self.resource_id_set(address_scope['id']) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client().delete_address_scope(self.resource_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) self.client().update_address_scope( self.resource_id, {'address_scope': prop_diff}) def resource_mapping(): return { 'OS::Neutron::AddressScope': AddressScope } heat-10.0.2/heat/engine/resources/openstack/neutron/__init__.py0000666000175000017500000000000013343562340024522 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/neutron/trunk.py0000666000175000017500000002756613343562351024162 0ustar zuulzuul00000000000000# Copyright 2017 Ericsson # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation LOG = logging.getLogger(__name__) class Trunk(neutron.NeutronResource): """A resource for managing Neutron trunks. Requires Neutron Trunk Extension to be enabled:: $ openstack extension show trunk The network trunk service allows multiple networks to be connected to an instance using a single virtual NIC (vNIC). Multiple networks can be presented to an instance by connecting the instance to a single port. Users can create a port, associate it with a trunk (as the trunk's parent) and launch an instance on that port. Users can dynamically attach and detach additional networks without disrupting operation of the instance. Every trunk has a parent port and can have any number (0, 1, ...) of subports. The parent port is the port that the instance is directly associated with and its traffic is always untagged inside the instance. Users must specify the parent port of the trunk when launching an instance attached to a trunk. A network presented by a subport is the network of the associated port. When creating a subport, a ``segmentation_type`` and ``segmentation_id`` may be required by the driver so the user can distinguish the networks inside the instance. As of release Pike only ``segmentation_type`` ``vlan`` is supported. ``segmentation_id`` defines the segmentation ID on which the subport network is presented to the instance. 
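As an illustrative sketch (the port references and the VLAN ID are placeholders, not part of the original example), a trunk with a single VLAN subport might be defined as::

    trunk:
      type: OS::Neutron::Trunk
      properties:
        port: { get_resource: parent_port }
        sub_ports:
          - port: { get_resource: child_port }
            segmentation_type: vlan
            segmentation_id: 101
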
Note that some Neutron backends (primarily Open vSwitch) only allow trunk creation before an instance is booted on the parent port. To avoid a possible race condition when booting an instance with a trunk it is strongly recommended to refer to the trunk's parent port indirectly in the template via ``get_attr``. For example:: trunk: type: OS::Neutron::Trunk properties: port: ... instance: type: OS::Nova::Server properties: networks: - { port: { get_attr: [trunk, port_id] } } Though other Neutron backends may tolerate the direct port reference (and the possible reverse ordering of API requests implied) it's a good idea to avoid writing Neutron backend specific templates. """ entity = 'trunk' required_service_extension = 'trunk' support_status = support.SupportStatus( status=support.SUPPORTED, version='9.0.0', ) PROPERTIES = ( NAME, PARENT_PORT, SUB_PORTS, DESCRIPTION, ADMIN_STATE_UP, ) = ( 'name', 'port', 'sub_ports', 'description', 'admin_state_up', ) _SUBPORT_KEYS = ( PORT, SEGMENTATION_TYPE, SEGMENTATION_ID, ) = ( 'port', 'segmentation_type', 'segmentation_id', ) _subport_schema = { PORT: properties.Schema( properties.Schema.STRING, _('ID or name of a port to be used as a subport.'), required=True, constraints=[ constraints.CustomConstraint('neutron.port'), ], ), SEGMENTATION_TYPE: properties.Schema( properties.Schema.STRING, _('Segmentation type to be used on the subport.'), required=True, # TODO(nilles): custom constraint 'neutron.trunk_segmentation_type' constraints=[ constraints.AllowedValues(['vlan']), ], ), SEGMENTATION_ID: properties.Schema( properties.Schema.INTEGER, _('The segmentation ID on which the subport network is presented ' 'to the instance.'), required=True, # TODO(nilles): custom constraint 'neutron.trunk_segmentation_id' constraints=[ constraints.Range(1, 4094), ], ), } ATTRIBUTES = ( PORT_ATTR, ) = ( 'port_id', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('A string specifying a symbolic name for the trunk, which is ' 'not required to be unique.'), update_allowed=True, ), PARENT_PORT: properties.Schema( properties.Schema.STRING, _('ID or name of a port to be used as a parent port.'), required=True, immutable=True, constraints=[ constraints.CustomConstraint('neutron.port'), ], ), SUB_PORTS: properties.Schema( properties.Schema.LIST, _('List with 0 or more map elements containing subport details.'), schema=properties.Schema( properties.Schema.MAP, schema=_subport_schema, ), update_allowed=True, ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the trunk.'), update_allowed=True, ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('Enable/disable subport addition, removal and trunk delete.'), update_allowed=True, ), } attributes_schema = { PORT_ATTR: attributes.Schema( _('ID or name of a port used as a parent port.'), type=attributes.Schema.STRING, ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.PARENT_PORT], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='port', ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, translation_path=[self.SUB_PORTS, self.PORT], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='port', ), ] def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) props['port_id'] = props.pop(self.PARENT_PORT) if self.SUB_PORTS in props and props[self.SUB_PORTS]: for
sub_port in props[self.SUB_PORTS]: sub_port['port_id'] = sub_port.pop(self.PORT) LOG.debug('attempt to create trunk: %s', props) trunk = self.client().create_trunk({'trunk': props})['trunk'] self.resource_id_set(trunk['id']) def check_create_complete(self, *args): attributes = self._show_resource() return self.is_built(attributes) def handle_delete(self): if self.resource_id is not None: with self.client_plugin().ignore_not_found: LOG.debug('attempt to delete trunk: %s', self.resource_id) self.client().delete_trunk(self.resource_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Handle update to a trunk in (at most) three neutron calls. Call #1) Update all changed properties but 'sub_ports'. PUT /v2.0/trunks/TRUNK_ID openstack network trunk set Call #2) Delete subports not needed anymore. PUT /v2.0/trunks/TRUNK_ID/remove_subports openstack network trunk unset --subport Call #3) Create new subports. PUT /v2.0/trunks/TRUNK_ID/add_subports openstack network trunk set --subport A single neutron port cannot be two subports at the same time (ie. have two segmentation (type, ID)s on the same trunk or to belong to two trunks). Therefore we have to delete old subports before creating new ones to avoid conflicts. """ LOG.debug('attempt to update trunk %s', self.resource_id) # NOTE(bence romsics): We want to do set operations on the subports, # however we receive subports represented as dicts. In Python # mutable objects like dicts are not hashable so they cannot be # inserted into sets. So we convert subport dicts to (immutable) # frozensets in order to do the set operations. def dict2frozenset(d): """Convert a dict to a frozenset. Create an immutable equivalent of a dict, so it's hashable therefore can be used as an element of a set or a key of another dictionary. """ return frozenset(d.items()) # NOTE(bence romsics): prop_diff contains a shallow diff of the # properties, so if we had used that to update subports we would # re-create all subports even if just a single subport changed. So we # need to forget about prop_diff['sub_ports'] and diff out the real # subport changes from self.properties and json_snippet. 
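        # As a purely illustrative example of the diff below (the port
        # values are placeholders): if the old subports are
        # [{'port': 'p1'}, {'port': 'p2'}] and the new subports are
        # [{'port': 'p2'}, {'port': 'p3'}], converting each dict to a
        # frozenset of its items and taking set differences yields
        # {'port': 'p1'} as the subport to delete and {'port': 'p3'} as
        # the subport to create, while 'p2' is left untouched.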
if 'sub_ports' in prop_diff: del prop_diff['sub_ports'] sub_ports_prop_old = self.properties[self.SUB_PORTS] or [] sub_ports_prop_new = json_snippet.properties( self.properties_schema)[self.SUB_PORTS] or [] subports_old = {dict2frozenset(d): d for d in sub_ports_prop_old} subports_new = {dict2frozenset(d): d for d in sub_ports_prop_new} old_set = set(subports_old.keys()) new_set = set(subports_new.keys()) delete = old_set - new_set create = new_set - old_set dicts_delete = [subports_old[fs] for fs in delete] dicts_create = [subports_new[fs] for fs in create] LOG.debug('attempt to delete subports of trunk %s: %s', self.resource_id, dicts_delete) LOG.debug('attempt to create subports of trunk %s: %s', self.resource_id, dicts_create) if prop_diff: self.prepare_update_properties(prop_diff) self.client().update_trunk(self.resource_id, {'trunk': prop_diff}) if dicts_delete: delete_body = self.prepare_trunk_remove_subports_body(dicts_delete) self.client().trunk_remove_subports(self.resource_id, delete_body) if dicts_create: create_body = self.prepare_trunk_add_subports_body(dicts_create) self.client().trunk_add_subports(self.resource_id, create_body) def check_update_complete(self, *args): attributes = self._show_resource() return self.is_built(attributes) @staticmethod def prepare_trunk_remove_subports_body(subports): """Prepares body for PUT /v2.0/trunks/TRUNK_ID/remove_subports.""" return { 'sub_ports': [ {'port_id': sp['port']} for sp in subports ] } @staticmethod def prepare_trunk_add_subports_body(subports): """Prepares body for PUT /v2.0/trunks/TRUNK_ID/add_subports.""" return { 'sub_ports': [ {'port_id': sp['port'], 'segmentation_type': sp['segmentation_type'], 'segmentation_id': sp['segmentation_id']} for sp in subports ] } def resource_mapping(): return { 'OS::Neutron::Trunk': Trunk, } heat-10.0.2/heat/engine/resources/openstack/neutron/router.py0000666000175000017500000006126413343562351024330 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine.resources.openstack.neutron import subnet from heat.engine import support from heat.engine import translation class Router(neutron.NeutronResource): """A resource that implements Neutron router. Router is a physical or virtual network device that passes network traffic between different networks. 
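For example, a minimal router with an external gateway might be declared as follows (the network name is illustrative)::

    router:
      type: OS::Neutron::Router
      properties:
        name: my_router
        external_gateway_info:
          network: public
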
""" required_service_extension = 'router' entity = 'router' PROPERTIES = ( NAME, EXTERNAL_GATEWAY, VALUE_SPECS, ADMIN_STATE_UP, L3_AGENT_ID, L3_AGENT_IDS, DISTRIBUTED, HA, TAGS, ) = ( 'name', 'external_gateway_info', 'value_specs', 'admin_state_up', 'l3_agent_id', 'l3_agent_ids', 'distributed', 'ha', 'tags', ) _EXTERNAL_GATEWAY_KEYS = ( EXTERNAL_GATEWAY_NETWORK, EXTERNAL_GATEWAY_ENABLE_SNAT, EXTERNAL_GATEWAY_FIXED_IPS, ) = ( 'network', 'enable_snat', 'external_fixed_ips', ) _EXTERNAL_GATEWAY_FIXED_IPS_KEYS = ( IP_ADDRESS, SUBNET ) = ( 'ip_address', 'subnet' ) ATTRIBUTES = ( STATUS, EXTERNAL_GATEWAY_INFO_ATTR, NAME_ATTR, ADMIN_STATE_UP_ATTR, TENANT_ID, ) = ( 'status', 'external_gateway_info', 'name', 'admin_state_up', 'tenant_id', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('The name of the router.'), update_allowed=True ), EXTERNAL_GATEWAY: properties.Schema( properties.Schema.MAP, _('External network gateway configuration for a router.'), schema={ EXTERNAL_GATEWAY_NETWORK: properties.Schema( properties.Schema.STRING, _('ID or name of the external network for the gateway.'), required=True, update_allowed=True ), EXTERNAL_GATEWAY_ENABLE_SNAT: properties.Schema( properties.Schema.BOOLEAN, _('Enables Source NAT on the router gateway. NOTE: The ' 'default policy setting in Neutron restricts usage of ' 'this property to administrative users only.'), update_allowed=True ), EXTERNAL_GATEWAY_FIXED_IPS: properties.Schema( properties.Schema.LIST, _('External fixed IP addresses for the gateway.'), schema=properties.Schema( properties.Schema.MAP, schema={ IP_ADDRESS: properties.Schema( properties.Schema.STRING, _('External fixed IP address.'), constraints=[ constraints.CustomConstraint('ip_addr'), ] ), SUBNET: properties.Schema( properties.Schema.STRING, _('Subnet of external fixed IP address.'), constraints=[ constraints.CustomConstraint( 'neutron.subnet') ] ), } ), update_allowed=True, support_status=support.SupportStatus(version='6.0.0') ), }, update_allowed=True ), VALUE_SPECS: properties.Schema( properties.Schema.MAP, _('Extra parameters to include in the creation request.'), default={}, update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of the router.'), default=True, update_allowed=True ), L3_AGENT_ID: properties.Schema( properties.Schema.STRING, _('ID of the L3 agent. NOTE: The default policy setting in ' 'Neutron restricts usage of this property to administrative ' 'users only.'), update_allowed=True, support_status=support.SupportStatus( status=support.HIDDEN, version='6.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, version='2015.1', message=_('Use property %s.') % L3_AGENT_IDS, previous_status=support.SupportStatus(version='2014.1') ) ), ), L3_AGENT_IDS: properties.Schema( properties.Schema.LIST, _('ID list of the L3 agent. User can specify multi-agents ' 'for highly available router. NOTE: The default policy ' 'setting in Neutron restricts usage of this property to ' 'administrative users only.'), schema=properties.Schema( properties.Schema.STRING, ), update_allowed=True, support_status=support.SupportStatus(version='2015.1') ), DISTRIBUTED: properties.Schema( properties.Schema.BOOLEAN, _('Indicates whether or not to create a distributed router. ' 'NOTE: The default policy setting in Neutron restricts usage ' 'of this property to administrative users only. 
This property ' 'cannot be used in conjunction with the L3 agent ID.'), support_status=support.SupportStatus(version='2015.1') ), HA: properties.Schema( properties.Schema.BOOLEAN, _('Indicates whether or not to create a highly available router. ' 'NOTE: The default policy setting in Neutron restricts usage ' 'of this property to administrative users only. Note that ' 'Neutron does not support distributed and HA at the same time.'), support_status=support.SupportStatus(version='2015.1') ), TAGS: properties.Schema( properties.Schema.LIST, _('The tags to be added to the router.'), schema=properties.Schema(properties.Schema.STRING), update_allowed=True, support_status=support.SupportStatus(version='9.0.0') ), } attributes_schema = { STATUS: attributes.Schema( _("The status of the router."), type=attributes.Schema.STRING ), EXTERNAL_GATEWAY_INFO_ATTR: attributes.Schema( _("Gateway network for the router."), type=attributes.Schema.MAP ), NAME_ATTR: attributes.Schema( _("Friendly name of the router."), type=attributes.Schema.STRING ), ADMIN_STATE_UP_ATTR: attributes.Schema( _("Administrative state of the router."), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _("Tenant owning the router."), type=attributes.Schema.STRING ), } def translation_rules(self, props): rules = [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.EXTERNAL_GATEWAY, self.EXTERNAL_GATEWAY_NETWORK], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='network'), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.EXTERNAL_GATEWAY, self.EXTERNAL_GATEWAY_FIXED_IPS, self.SUBNET], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='subnet') ] if props.get(self.L3_AGENT_ID): rules.extend([ translation.TranslationRule( props, translation.TranslationRule.ADD, [self.L3_AGENT_IDS], [props.get(self.L3_AGENT_ID)]), translation.TranslationRule( props, translation.TranslationRule.DELETE, [self.L3_AGENT_ID] )]) return rules def validate(self): super(Router, self).validate() is_distributed = self.properties[self.DISTRIBUTED] l3_agent_id = self.properties[self.L3_AGENT_ID] l3_agent_ids = self.properties[self.L3_AGENT_IDS] is_ha = self.properties[self.HA] if l3_agent_id and l3_agent_ids: raise exception.ResourcePropertyConflict(self.L3_AGENT_ID, self.L3_AGENT_IDS) # do not specify an l3 agent when creating a distributed router if is_distributed and (l3_agent_id or l3_agent_ids): raise exception.ResourcePropertyConflict( self.DISTRIBUTED, "/".join([self.L3_AGENT_ID, self.L3_AGENT_IDS])) if is_ha and is_distributed: raise exception.ResourcePropertyConflict(self.DISTRIBUTED, self.HA) if not is_ha and l3_agent_ids and len(l3_agent_ids) > 1: msg = _('Non-HA routers can only have one L3 agent.') raise exception.StackValidationFailed(message=msg) def add_dependencies(self, deps): super(Router, self).add_dependencies(deps) external_gw = self.properties[self.EXTERNAL_GATEWAY] if external_gw: external_gw_net = external_gw.get(self.EXTERNAL_GATEWAY_NETWORK) for res in six.itervalues(self.stack): if res.has_interface('OS::Neutron::Subnet'): subnet_net = res.properties.get(subnet.Subnet.NETWORK) if subnet_net == external_gw_net: deps += (self, res) def _resolve_gateway(self, props): gateway = props.get(self.EXTERNAL_GATEWAY) if gateway: gateway['network_id'] = gateway.pop(self.EXTERNAL_GATEWAY_NETWORK) if gateway[self.EXTERNAL_GATEWAY_ENABLE_SNAT] is None: del gateway[self.EXTERNAL_GATEWAY_ENABLE_SNAT] if
gateway[self.EXTERNAL_GATEWAY_FIXED_IPS] is None: del gateway[self.EXTERNAL_GATEWAY_FIXED_IPS] else: self._resolve_subnet(gateway) return props def _get_l3_agent_list(self, props): l3_agent_id = props.pop(self.L3_AGENT_ID, None) l3_agent_ids = props.pop(self.L3_AGENT_IDS, None) if not l3_agent_ids and l3_agent_id: l3_agent_ids = [l3_agent_id] return l3_agent_ids def _resolve_subnet(self, gateway): external_gw_fixed_ips = gateway[self.EXTERNAL_GATEWAY_FIXED_IPS] for fixed_ip in external_gw_fixed_ips: for key, value in six.iteritems(fixed_ip): if value is None: fixed_ip.pop(key) if self.SUBNET in fixed_ip: fixed_ip['subnet_id'] = fixed_ip.pop(self.SUBNET) def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) self._resolve_gateway(props) l3_agent_ids = self._get_l3_agent_list(props) tags = props.pop(self.TAGS, []) router = self.client().create_router({'router': props})['router'] self.resource_id_set(router['id']) if l3_agent_ids: self._replace_agent(l3_agent_ids) if tags: self.set_tags(tags) def check_create_complete(self, *args): attributes = self._show_resource() return self.is_built(attributes) def handle_delete(self): try: self.client().delete_router(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True def handle_update(self, json_snippet, tmpl_diff, prop_diff): if self.EXTERNAL_GATEWAY in prop_diff: self._resolve_gateway(prop_diff) if self.L3_AGENT_IDS in prop_diff or self.L3_AGENT_ID in prop_diff: l3_agent_ids = self._get_l3_agent_list(prop_diff) self._replace_agent(l3_agent_ids) if self.TAGS in prop_diff: tags = prop_diff.pop(self.TAGS) self.set_tags(tags) if prop_diff: self.prepare_update_properties(prop_diff) self.client().update_router( self.resource_id, {'router': prop_diff}) def _replace_agent(self, l3_agent_ids=None): ret = self.client().list_l3_agent_hosting_routers( self.resource_id) for agent in ret['agents']: self.client().remove_router_from_l3_agent( agent['id'], self.resource_id) if l3_agent_ids: for l3_agent_id in l3_agent_ids: self.client().add_router_to_l3_agent( l3_agent_id, {'router_id': self.resource_id}) def parse_live_resource_data(self, resource_properties, resource_data): result = super(Router, self).parse_live_resource_data( resource_properties, resource_data) try: ret = self.client().list_l3_agent_hosting_routers(self.resource_id) if ret: result[self.L3_AGENT_IDS] = list( agent['id'] for agent in ret['agents']) except self.client_plugin().exceptions.Forbidden: # Just pass if forbidden pass gateway = resource_data.get(self.EXTERNAL_GATEWAY) if gateway is not None: result[self.EXTERNAL_GATEWAY] = { self.EXTERNAL_GATEWAY_NETWORK: gateway.get('network_id'), self.EXTERNAL_GATEWAY_ENABLE_SNAT: gateway.get('enable_snat') } return result class RouterInterface(neutron.NeutronResource): """A resource for managing Neutron router interfaces. Router interfaces associate routers with existing subnets or ports. 
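For example, a router can be attached to a subnet as follows (the resource names are illustrative)::

    router_interface:
      type: OS::Neutron::RouterInterface
      properties:
        router: { get_resource: router }
        subnet: { get_resource: subnet }
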
""" required_service_extension = 'router' PROPERTIES = ( ROUTER, ROUTER_ID, SUBNET_ID, SUBNET, PORT_ID, PORT ) = ( 'router', 'router_id', 'subnet_id', 'subnet', 'port_id', 'port' ) properties_schema = { ROUTER: properties.Schema( properties.Schema.STRING, _('The router.'), required=True, constraints=[ constraints.CustomConstraint('neutron.router') ], ), ROUTER_ID: properties.Schema( properties.Schema.STRING, _('ID of the router.'), support_status=support.SupportStatus( status=support.HIDDEN, version='6.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, message=_('Use property %s.') % ROUTER, version='2015.1', previous_status=support.SupportStatus(version='2013.1') ) ), constraints=[ constraints.CustomConstraint('neutron.router') ], ), SUBNET_ID: properties.Schema( properties.Schema.STRING, support_status=support.SupportStatus( status=support.HIDDEN, message=_('Use property %s.') % SUBNET, version='5.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2' ) ), constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), SUBNET: properties.Schema( properties.Schema.STRING, _('The subnet, either subnet or port should be ' 'specified.'), constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), PORT_ID: properties.Schema( properties.Schema.STRING, _('The port id, either subnet or port_id should be specified.'), support_status=support.SupportStatus( status=support.HIDDEN, version='6.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, message=_('Use property %s.') % PORT, version='2015.1', previous_status=support.SupportStatus(version='2014.1') ) ), constraints=[ constraints.CustomConstraint('neutron.port') ] ), PORT: properties.Schema( properties.Schema.STRING, _('The port, either subnet or port should be specified.'), support_status=support.SupportStatus(version='2015.1'), constraints=[ constraints.CustomConstraint('neutron.port') ] ) } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.PORT], value_path=[self.PORT_ID] ), translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.ROUTER], value_path=[self.ROUTER_ID] ), translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.SUBNET], value_path=[self.SUBNET_ID] ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.PORT], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='port' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.ROUTER], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='router' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.SUBNET], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='subnet' ) ] def validate(self): """Validate any of the provided params.""" super(RouterInterface, self).validate() prop_subnet_exists = self.properties.get(self.SUBNET) is not None prop_port_exists = self.properties.get(self.PORT) is not None if prop_subnet_exists and prop_port_exists: raise exception.ResourcePropertyConflict(self.SUBNET, self.PORT) if not prop_subnet_exists and not prop_port_exists: raise exception.PropertyUnspecifiedError(self.SUBNET, self.PORT) def handle_create(self): router_id = dict(self.properties).get(self.ROUTER) key = 'subnet_id' value = dict(self.properties).get(self.SUBNET) if not value: key = 'port_id' value = 
dict(self.properties).get(self.PORT) self.client().add_interface_router( router_id, {key: value}) self.resource_id_set('%s:%s=%s' % (router_id, key, value)) def handle_delete(self): if not self.resource_id: return tokens = self.resource_id.replace('=', ':').split(':') if len(tokens) == 2: # compatible with old data tokens.insert(1, 'subnet_id') (router_id, key, value) = tokens with self.client_plugin().ignore_not_found: self.client().remove_interface_router( router_id, {key: value}) class RouterGateway(neutron.NeutronResource): support_status = support.SupportStatus( status=support.HIDDEN, message=_('Use the `external_gateway_info` property in ' 'the router resource to set up the gateway.'), version='5.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.1' ) ) PROPERTIES = ( ROUTER_ID, NETWORK_ID, NETWORK, ) = ( 'router_id', 'network_id', 'network' ) properties_schema = { ROUTER_ID: properties.Schema( properties.Schema.STRING, _('ID of the router.'), required=True, constraints=[ constraints.CustomConstraint('neutron.router') ] ), NETWORK_ID: properties.Schema( properties.Schema.STRING, support_status=support.SupportStatus( status=support.HIDDEN, message=_('Use property %s.') % NETWORK, version='9.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2' ) ), constraints=[ constraints.CustomConstraint('neutron.network') ], ), NETWORK: properties.Schema( properties.Schema.STRING, _('external network for the gateway.'), constraints=[ constraints.CustomConstraint('neutron.network') ], ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.NETWORK], value_path=[self.NETWORK_ID] ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.NETWORK], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='network' ) ] def add_dependencies(self, deps): super(RouterGateway, self).add_dependencies(deps) for resource in six.itervalues(self.stack): # depend on any RouterInterface in this template with the same # router_id as this router_id if resource.has_interface('OS::Neutron::RouterInterface'): dep_router_id = resource.properties[RouterInterface.ROUTER] router_id = self.properties[self.ROUTER_ID] if dep_router_id == router_id: deps += (self, resource) # depend on any subnet in this template with the same network_id # as this network_id, as the gateway implicitly creates a port # on that subnet if resource.has_interface('OS::Neutron::Subnet'): dep_network = resource.properties[subnet.Subnet.NETWORK] network = self.properties[self.NETWORK] if dep_network == network: deps += (self, resource) def handle_create(self): router_id = self.properties[self.ROUTER_ID] network_id = dict(self.properties).get(self.NETWORK) self.client().add_gateway_router( router_id, {'network_id': network_id}) self.resource_id_set('%s:%s' % (router_id, network_id)) def handle_delete(self): if not self.resource_id: return (router_id, network_id) = self.resource_id.split(':') with self.client_plugin().ignore_not_found: self.client().remove_gateway_router(router_id) def resource_mapping(): return { 'OS::Neutron::Router': Router, 'OS::Neutron::RouterInterface': RouterInterface, 'OS::Neutron::RouterGateway': RouterGateway, } heat-10.0.2/heat/engine/resources/openstack/neutron/sfc/0000775000175000017500000000000013343562672023204 5ustar 
zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/neutron/sfc/port_pair_group.py0000666000175000017500000000723313343562340026770 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class PortPairGroup(neutron.NeutronResource): """Heat Template Resource for networking-sfc port-pair-group. Multiple port-pairs may be included in a port-pair-group to allow the specification of a set of functionally equivalent Service Functions that can be used for load distribution. """ support_status = support.SupportStatus( version='8.0.0', status=support.UNSUPPORTED) required_service_extension = 'sfc' PROPERTIES = ( NAME, DESCRIPTION, PORT_PAIRS, ) = ( 'name', 'description', 'port_pairs', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name for the Port Pair Group.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the Port Pair Group.'), update_allowed=True ), PORT_PAIRS: properties.Schema( properties.Schema.LIST, _('A list of Port Pair IDs or names to apply.'), required=True, update_allowed=True, schema=properties.Schema( properties.Schema.STRING, _('Port Pair ID or name.'), constraints=[ constraints.CustomConstraint('neutron.port_pair') ] ) ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.PORT_PAIRS], client_plugin=self.client_plugin(), finder='resolve_ext_resource', entity='port_pair' ) ] def _show_resource(self): return self.client_plugin().show_sfc_resource('port_pair_group', self.resource_id) def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) port_pair_group = self.client_plugin().create_sfc_resource( 'port_pair_group', props) self.resource_id_set(port_pair_group['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) self.client_plugin().update_sfc_resource( 'port_pair_group', prop_diff, self.resource_id) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client_plugin().delete_sfc_resource('port_pair_group', self.resource_id) def resource_mapping(): return { 'OS::Neutron::PortPairGroup': PortPairGroup, } heat-10.0.2/heat/engine/resources/openstack/neutron/sfc/port_pair.py0000666000175000017500000001107713343562351025557 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class PortPair(neutron.NeutronResource): """A resource for neutron networking-sfc port-pair. This plug-in requires networking-sfc>=1.0.0. So to enable this plug-in, install this library and restart the heat-engine. A Port Pair represents a service function instance. The ingress port and the egress port of the service function may be specified. If a service function has one bidirectional port, the ingress port has the same value as the egress port. """ support_status = support.SupportStatus( version='7.0.0', status=support.UNSUPPORTED) PROPERTIES = ( NAME, DESCRIPTION, INGRESS, EGRESS, SERVICE_FUNCTION_PARAMETERS, ) = ( 'name', 'description', 'ingress', 'egress', 'service_function_parameters', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name for the Port Pair.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the Port Pair.'), update_allowed=True ), INGRESS: properties.Schema( properties.Schema.STRING, _('ID or name of the ingress neutron port.'), constraints=[constraints.CustomConstraint('neutron.port')], required=True, ), EGRESS: properties.Schema( properties.Schema.STRING, _('ID or name of the egress neutron port.'), constraints=[constraints.CustomConstraint('neutron.port')], required=True, ), SERVICE_FUNCTION_PARAMETERS: properties.Schema( properties.Schema.MAP, _('Dictionary of service function parameter. 
Currently ' 'only correlation=None is supported.'), default={'correlation': None}, ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.INGRESS], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='port' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.EGRESS], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='port' ) ] def _show_resource(self): return self.client_plugin().show_sfc_resource('port_pair', self.resource_id) def handle_create(self): props = self.prepare_properties(self.properties, self.physical_resource_name()) props['ingress'] = props.get(self.INGRESS) props['egress'] = props.get(self.EGRESS) port_pair = self.client_plugin().create_sfc_resource('port_pair', props) self.resource_id_set(port_pair['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) self.client_plugin().update_sfc_resource('port_pair', prop_diff, self.resource_id) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client_plugin().delete_sfc_resource('port_pair', self.resource_id) def resource_mapping(): return { 'OS::Neutron::PortPair': PortPair, } heat-10.0.2/heat/engine/resources/openstack/neutron/sfc/__init__.py0000666000175000017500000000000013343562340025295 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/neutron/sfc/flow_classifier.py0000666000175000017500000001537113343562351026734 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class FlowClassifier(neutron.NeutronResource): """Heat Template Resource for networking-sfc flow-classifier. This resource is used to select the traffic that can access the service chain. Traffic that matches any flow classifier will be directed to the first port in the chain.
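As an illustrative sketch (addresses and port numbers are placeholders), a classifier that selects TCP traffic destined for port 80 of a subnet might look like::

    flow_classifier:
      type: OS::Neutron::FlowClassifier
      properties:
        protocol: tcp
        ethertype: IPv4
        destination_ip_prefix: 10.0.0.0/24
        destination_port_range_min: 80
        destination_port_range_max: 80
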
""" support_status = support.SupportStatus( version='8.0.0', status=support.UNSUPPORTED) PROPERTIES = ( NAME, DESCRIPTION, PROTOCOL, ETHERTYPE, SOURCE_IP_PREFIX, DESTINATION_IP_PREFIX, SOURCE_PORT_RANGE_MIN, SOURCE_PORT_RANGE_MAX, DESTINATION_PORT_RANGE_MIN, DESTINATION_PORT_RANGE_MAX, LOGICAL_SOURCE_PORT, LOGICAL_DESTINATION_PORT, L7_PARAMETERS, ) = ( 'name', 'description', 'protocol', 'ethertype', 'source_ip_prefix', 'destination_ip_prefix', 'source_port_range_min', 'source_port_range_max', 'destination_port_range_min', 'destination_port_range_max', 'logical_source_port', 'logical_destination_port', 'l7_parameters', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the Flow Classifier.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the Flow Classifier.'), update_allowed=True ), PROTOCOL: properties.Schema( properties.Schema.STRING, _('IP Protocol for the Flow Classifier.'), constraints=[ constraints.AllowedValues(['tcp', 'udp', 'icmp']), ], ), ETHERTYPE: properties.Schema( properties.Schema.STRING, _('L2 ethertype.'), default='IPv4', constraints=[ constraints.AllowedValues(['IPv4', 'IPv6']), ], ), SOURCE_IP_PREFIX: properties.Schema( properties.Schema.STRING, _('Source IP prefix or subnet.'), constraints=[ constraints.CustomConstraint('net_cidr') ] ), DESTINATION_IP_PREFIX: properties.Schema( properties.Schema.STRING, _('Destination IP prefix or subnet.'), constraints=[ constraints.CustomConstraint('net_cidr') ] ), SOURCE_PORT_RANGE_MIN: properties.Schema( properties.Schema.INTEGER, _('Source protocol port Minimum.'), constraints=[ constraints.Range(1, 65535) ] ), SOURCE_PORT_RANGE_MAX: properties.Schema( properties.Schema.INTEGER, _('Source protocol port Maximum.'), constraints=[ constraints.Range(1, 65535) ] ), DESTINATION_PORT_RANGE_MIN: properties.Schema( properties.Schema.INTEGER, _('Destination protocol port minimum.'), constraints=[ constraints.Range(1, 65535) ] ), DESTINATION_PORT_RANGE_MAX: properties.Schema( properties.Schema.INTEGER, _('Destination protocol port maximum.'), constraints=[ constraints.Range(1, 65535) ] ), LOGICAL_SOURCE_PORT: properties.Schema( properties.Schema.STRING, _('ID or name of the neutron source port.'), constraints=[ constraints.CustomConstraint('neutron.port') ] ), LOGICAL_DESTINATION_PORT: properties.Schema( properties.Schema.STRING, _('ID or name of the neutron destination port.'), constraints=[ constraints.CustomConstraint('neutron.port') ] ), L7_PARAMETERS: properties.Schema( properties.Schema.MAP, _('Dictionary of L7-parameters.'), support_status=support.SupportStatus( status=support.UNSUPPORTED, message=_('Currently, no value is supported for this option.'), ), ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.LOGICAL_SOURCE_PORT], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='port' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.LOGICAL_DESTINATION_PORT], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='port' ) ] def _show_resource(self): return self.client_plugin().show_sfc_resource('flow_classifier', self.resource_id) def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) flow_classifier = self.client_plugin().create_sfc_resource( 'flow_classifier', props) self.resource_id_set(flow_classifier['id']) def 
handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) self.client_plugin().update_sfc_resource('flow_classifier', prop_diff, self.resource_id) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client_plugin().delete_sfc_resource('flow_classifier', self.resource_id) def resource_mapping(): return { 'OS::Neutron::FlowClassifier': FlowClassifier, } heat-10.0.2/heat/engine/resources/openstack/neutron/sfc/port_chain.py0000666000175000017500000001165213343562340025703 0ustar zuulzuul00000000000000# Copyright (c) 2016 Huawei Technologies India Pvt Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class PortChain(neutron.NeutronResource): """A resource for neutron networking-sfc. This resource is used to define the service function path by arranging networking-sfc port-pair-groups and a set of flow classifiers, to specify the classified traffic flows to enter the chain. """ support_status = support.SupportStatus( version='8.0.0', status=support.UNSUPPORTED) required_service_extension = 'sfc' PROPERTIES = ( NAME, DESCRIPTION, PORT_PAIR_GROUPS, FLOW_CLASSIFIERS, CHAIN_PARAMETERS, ) = ( 'name', 'description', 'port_pair_groups', 'flow_classifiers', 'chain_parameters', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the Port Chain.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the Port Chain.'), update_allowed=True ), PORT_PAIR_GROUPS: properties.Schema( properties.Schema.LIST, _('A list of port pair groups to apply to the Port Chain.'), update_allowed=True, required=True, schema=properties.Schema( properties.Schema.STRING, _('Port Pair Group ID or Name.'), constraints=[ constraints.CustomConstraint('neutron.port_pair_group') ] ) ), FLOW_CLASSIFIERS: properties.Schema( properties.Schema.LIST, _('A list of flow classifiers to apply to the Port Chain.'), default=[], update_allowed=True, schema=properties.Schema( properties.Schema.STRING, _('Flow Classifier ID or Name.'), constraints=[ constraints.CustomConstraint('neutron.flow_classifier') ] ) ), CHAIN_PARAMETERS: properties.Schema( properties.Schema.MAP, _('Dictionary of chain parameters.
Currently, only ' 'correlation=mpls is supported by default.'), default={"correlation": "mpls"} ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.PORT_PAIR_GROUPS], client_plugin=self.client_plugin(), finder='resolve_ext_resource', entity='port_pair_group' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.FLOW_CLASSIFIERS], client_plugin=self.client_plugin(), finder='resolve_ext_resource', entity='flow_classifier' ), ] def _show_resource(self): return self.client_plugin().show_sfc_resource('port_chain', self.resource_id) def handle_create(self): props = self.prepare_properties(self.properties, self.physical_resource_name()) port_chain = self.client_plugin().create_sfc_resource( 'port_chain', props) self.resource_id_set(port_chain['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) self.client_plugin().update_sfc_resource('port_chain', prop_diff, self.resource_id) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client_plugin().delete_sfc_resource('port_chain', self.resource_id) def resource_mapping(): return { 'OS::Neutron::PortChain': PortChain, } heat-10.0.2/heat/engine/resources/openstack/neutron/subnetpool.py0000666000175000017500000002231413343562351025173 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.common import netutils from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support class SubnetPool(neutron.NeutronResource): """A resource that implements neutron subnet pool. This resource can be used to create a subnet pool with a large block of addresses and create subnets from it. 
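For example, a pool from which subnets between /24 and /27 can be allocated might be declared as follows (the prefix and name are illustrative)::

    subnet_pool:
      type: OS::Neutron::SubnetPool
      properties:
        name: my_pool
        prefixes:
          - 10.1.0.0/16
        default_prefixlen: 24
        min_prefixlen: 24
        max_prefixlen: 27
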
""" support_status = support.SupportStatus(version='6.0.0') required_service_extension = 'subnet_allocation' entity = 'subnetpool' PROPERTIES = ( NAME, PREFIXES, ADDRESS_SCOPE, DEFAULT_QUOTA, DEFAULT_PREFIXLEN, MIN_PREFIXLEN, MAX_PREFIXLEN, IS_DEFAULT, TENANT_ID, SHARED, TAGS, ) = ( 'name', 'prefixes', 'address_scope', 'default_quota', 'default_prefixlen', 'min_prefixlen', 'max_prefixlen', 'is_default', 'tenant_id', 'shared', 'tags', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the subnet pool.'), update_allowed=True ), PREFIXES: properties.Schema( properties.Schema.LIST, _('List of subnet prefixes to assign.'), schema=properties.Schema( properties.Schema.STRING, constraints=[ constraints.CustomConstraint('net_cidr'), ], ), constraints=[constraints.Length(min=1)], required=True, update_allowed=True, ), ADDRESS_SCOPE: properties.Schema( properties.Schema.STRING, _('An address scope ID to assign to the subnet pool.'), constraints=[ constraints.CustomConstraint('neutron.address_scope') ], update_allowed=True, ), DEFAULT_QUOTA: properties.Schema( properties.Schema.INTEGER, _('A per-tenant quota on the prefix space that can be allocated ' 'from the subnet pool for tenant subnets.'), constraints=[constraints.Range(min=0)], update_allowed=True, ), DEFAULT_PREFIXLEN: properties.Schema( properties.Schema.INTEGER, _('The size of the prefix to allocate when the cidr or ' 'prefixlen attributes are not specified while creating ' 'a subnet.'), constraints=[constraints.Range(min=0)], update_allowed=True, ), MIN_PREFIXLEN: properties.Schema( properties.Schema.INTEGER, _('Smallest prefix size that can be allocated ' 'from the subnet pool.'), constraints=[constraints.Range(min=0)], update_allowed=True, ), MAX_PREFIXLEN: properties.Schema( properties.Schema.INTEGER, _('Maximum prefix size that can be allocated ' 'from the subnet pool.'), constraints=[constraints.Range(min=0)], update_allowed=True, ), IS_DEFAULT: properties.Schema( properties.Schema.BOOLEAN, _('Whether this is default IPv4/IPv6 subnet pool. ' 'There can only be one default subnet pool for each IP family. ' 'Note that the default policy setting restricts administrative ' 'users to set this to True.'), default=False, update_allowed=True, ), TENANT_ID: properties.Schema( properties.Schema.STRING, _('The ID of the tenant who owns the subnet pool. Only ' 'administrative users can specify a tenant ID ' 'other than their own.') ), SHARED: properties.Schema( properties.Schema.BOOLEAN, _('Whether the subnet pool will be shared across all tenants. 
' 'Note that the default policy setting restricts usage of this ' 'attribute to administrative users only.'), default=False, ), TAGS: properties.Schema( properties.Schema.LIST, _('The tags to be added to the subnetpool.'), schema=properties.Schema(properties.Schema.STRING), update_allowed=True, support_status=support.SupportStatus(version='9.0.0') ), } def validate(self): super(SubnetPool, self).validate() self._validate_prefix_bounds() def _validate_prefix_bounds(self): min_prefixlen = self.properties[self.MIN_PREFIXLEN] default_prefixlen = self.properties[self.DEFAULT_PREFIXLEN] max_prefixlen = self.properties[self.MAX_PREFIXLEN] msg_fmt = _('Illegal prefix bounds: %(key1)s=%(value1)s, ' '%(key2)s=%(value2)s.') # min_prefixlen can not be greater than max_prefixlen if min_prefixlen and max_prefixlen and min_prefixlen > max_prefixlen: msg = msg_fmt % dict(key1=self.MAX_PREFIXLEN, value1=max_prefixlen, key2=self.MIN_PREFIXLEN, value2=min_prefixlen) raise exception.StackValidationFailed(message=msg) if default_prefixlen: # default_prefixlen can not be greater than max_prefixlen if max_prefixlen and default_prefixlen > max_prefixlen: msg = msg_fmt % dict(key1=self.MAX_PREFIXLEN, value1=max_prefixlen, key2=self.DEFAULT_PREFIXLEN, value2=default_prefixlen) raise exception.StackValidationFailed(message=msg) # min_prefixlen can not be greater than default_prefixlen if min_prefixlen and min_prefixlen > default_prefixlen: msg = msg_fmt % dict(key1=self.MIN_PREFIXLEN, value1=min_prefixlen, key2=self.DEFAULT_PREFIXLEN, value2=default_prefixlen) raise exception.StackValidationFailed(message=msg) def _validate_prefixes_for_update(self, prop_diff): old_prefixes = self.properties[self.PREFIXES] new_prefixes = prop_diff[self.PREFIXES] # check new_prefixes is a superset of old_prefixes if not netutils.is_prefix_subset(old_prefixes, new_prefixes): msg = (_('Property %(key)s updated value %(new)s should ' 'be superset of existing value ' '%(old)s.') % dict(key=self.PREFIXES, new=sorted(new_prefixes), old=sorted(old_prefixes))) raise exception.StackValidationFailed(message=msg) def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) if self.ADDRESS_SCOPE in props and props[self.ADDRESS_SCOPE]: props['address_scope_id'] = self.client_plugin( ).find_resourceid_by_name_or_id( 'address_scope', props.pop(self.ADDRESS_SCOPE)) tags = props.pop(self.TAGS, []) subnetpool = self.client().create_subnetpool( {'subnetpool': props})['subnetpool'] self.resource_id_set(subnetpool['id']) if tags: self.set_tags(tags) def handle_delete(self): if self.resource_id is not None: with self.client_plugin().ignore_not_found: self.client().delete_subnetpool(self.resource_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): # check that new prefixes are superset of existing prefixes if self.PREFIXES in prop_diff: self._validate_prefixes_for_update(prop_diff) if self.ADDRESS_SCOPE in prop_diff: if prop_diff[self.ADDRESS_SCOPE]: prop_diff[ 'address_scope_id'] = self.client_plugin( ).find_resourceid_by_name_or_id( self.client(), 'address_scope', prop_diff.pop(self.ADDRESS_SCOPE)) else: prop_diff[ 'address_scope_id'] = prop_diff.pop(self.ADDRESS_SCOPE) if self.TAGS in prop_diff: tags = prop_diff.pop(self.TAGS) self.set_tags(tags) if prop_diff: self.prepare_update_properties(prop_diff) self.client().update_subnetpool( self.resource_id, {'subnetpool': prop_diff}) def resource_mapping(): return { 'OS::Neutron::SubnetPool': SubnetPool, } 
heat-10.0.2/heat/engine/resources/openstack/neutron/provider_net.py0000666000175000017500000001555513343562351025512 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import net from heat.engine import support class ProviderNet(net.Net): """A resource for managing Neutron provider networks. Provider networks specify details of physical realisation of the existing network. The default policy usage of this resource is limited to administrators only. """ required_service_extension = 'provider' support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( NAME, PROVIDER_NETWORK_TYPE, PROVIDER_PHYSICAL_NETWORK, PROVIDER_SEGMENTATION_ID, ADMIN_STATE_UP, SHARED, PORT_SECURITY_ENABLED, ROUTER_EXTERNAL, ) = ( 'name', 'network_type', 'physical_network', 'segmentation_id', 'admin_state_up', 'shared', 'port_security_enabled', 'router_external', ) ATTRIBUTES = ( STATUS, SUBNETS, ) = ( 'status', 'subnets', ) properties_schema = { NAME: net.Net.properties_schema[NAME], PROVIDER_NETWORK_TYPE: properties.Schema( properties.Schema.STRING, _('A string specifying the provider network type for the ' 'network.'), update_allowed=True, required=True, constraints=[ constraints.AllowedValues(['vlan', 'flat']), ] ), PROVIDER_PHYSICAL_NETWORK: properties.Schema( properties.Schema.STRING, _('A string specifying physical network mapping for the ' 'network.'), update_allowed=True ), PROVIDER_SEGMENTATION_ID: properties.Schema( properties.Schema.STRING, _('A string specifying the segmentation id for the ' 'network.'), update_allowed=True ), ADMIN_STATE_UP: net.Net.properties_schema[ADMIN_STATE_UP], SHARED: properties.Schema( properties.Schema.BOOLEAN, _('Whether this network should be shared across all tenants.'), default=True, update_allowed=True ), PORT_SECURITY_ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('Flag to enable/disable port security on the network. It ' 'provides the default value for the attribute of the ports ' 'created on this network.'), update_allowed=True, support_status=support.SupportStatus(version='8.0.0') ), ROUTER_EXTERNAL: properties.Schema( properties.Schema.BOOLEAN, _('Whether the network contains an external router.'), default=False, update_allowed=True, support_status=support.SupportStatus(version='6.0.0') ), } attributes_schema = { STATUS: attributes.Schema( _("The status of the network."), type=attributes.Schema.STRING ), SUBNETS: attributes.Schema( _("Subnets of this network."), type=attributes.Schema.LIST ), } def validate(self): """Resource's validation. Validates to ensure that segmentation_id is not there for flat network type. 
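As an illustrative sketch of input that fails this check (property values are placeholders)::

    provider_network:
      type: OS::Neutron::ProviderNet
      properties:
        network_type: flat
        physical_network: physnet1
        segmentation_id: 101  # invalid: flat networks have no segmentation ID
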
""" super(ProviderNet, self).validate() if (self.properties[self.PROVIDER_SEGMENTATION_ID] and self.properties[self.PROVIDER_NETWORK_TYPE] != 'vlan'): msg = _('segmentation_id not allowed for flat network type.') raise exception.StackValidationFailed(message=msg) @staticmethod def add_provider_extension(props, key): props['provider:' + key] = props.pop(key) @staticmethod def prepare_provider_properties(props): if ProviderNet.PROVIDER_NETWORK_TYPE in props: ProviderNet.add_provider_extension( props, ProviderNet.PROVIDER_NETWORK_TYPE) if ProviderNet.PROVIDER_PHYSICAL_NETWORK in props: ProviderNet.add_provider_extension( props, ProviderNet.PROVIDER_PHYSICAL_NETWORK) if ProviderNet.PROVIDER_SEGMENTATION_ID in props: ProviderNet.add_provider_extension( props, ProviderNet.PROVIDER_SEGMENTATION_ID) if ProviderNet.ROUTER_EXTERNAL in props: props['router:external'] = props.pop(ProviderNet.ROUTER_EXTERNAL) def handle_create(self): """Creates the resource with provided properties. Adds 'provider:' extension to the required properties during create. """ props = self.prepare_properties( self.properties, self.physical_resource_name()) ProviderNet.prepare_provider_properties(props) prov_net = self.client().create_network({'network': props})['network'] self.resource_id_set(prov_net['id']) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Updates the resource with provided properties. Adds 'provider:' extension to the required properties during update. """ if prop_diff: ProviderNet.prepare_provider_properties(prop_diff) self.prepare_update_properties(prop_diff) self.client().update_network(self.resource_id, {'network': prop_diff}) def parse_live_resource_data(self, resource_properties, resource_data): # this resource should not have super in case of we don't need to # parse Net resource properties. result = {} provider_keys = [self.PROVIDER_NETWORK_TYPE, self.PROVIDER_PHYSICAL_NETWORK, self.PROVIDER_SEGMENTATION_ID] for key in provider_keys: result[key] = resource_data.get('provider:%s' % key) result[self.ROUTER_EXTERNAL] = resource_data.get('router:external') provider_keys.append(self.ROUTER_EXTERNAL) provider_keys.append(self.SHARED) for key in set(self.PROPERTIES) - set(provider_keys): if key in resource_data: result[key] = resource_data.get(key) return result def resource_mapping(): return { 'OS::Neutron::ProviderNet': ProviderNet, } heat-10.0.2/heat/engine/resources/openstack/neutron/net.py0000666000175000017500000002536413343562351023577 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class Net(neutron.NeutronResource): """A resource for managing Neutron net. 
A network is a virtual isolated layer-2 broadcast domain which is typically reserved to the tenant who created it, unless the network has been explicitly configured to be shared. """ entity = 'network' PROPERTIES = ( NAME, VALUE_SPECS, ADMIN_STATE_UP, TENANT_ID, SHARED, DHCP_AGENT_IDS, PORT_SECURITY_ENABLED, QOS_POLICY, DNS_DOMAIN, TAGS, ) = ( 'name', 'value_specs', 'admin_state_up', 'tenant_id', 'shared', 'dhcp_agent_ids', 'port_security_enabled', 'qos_policy', 'dns_domain', 'tags', ) ATTRIBUTES = ( STATUS, NAME_ATTR, SUBNETS, ADMIN_STATE_UP_ATTR, TENANT_ID_ATTR, PORT_SECURITY_ENABLED_ATTR, MTU_ATTR, QOS_POLICY_ATTR, L2_ADJACENCY ) = ( "status", "name", "subnets", "admin_state_up", "tenant_id", "port_security_enabled", "mtu", 'qos_policy_id', 'l2_adjacency' ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('A string specifying a symbolic name for the network, which is ' 'not required to be unique.'), update_allowed=True ), VALUE_SPECS: properties.Schema( properties.Schema.MAP, _('Extra parameters to include in the request. Parameters are ' 'often specific to installed hardware or extensions.'), default={}, update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('A boolean value specifying the administrative status of the ' 'network.'), default=True, update_allowed=True ), TENANT_ID: properties.Schema( properties.Schema.STRING, _('The ID of the tenant which will own the network. Only ' 'administrative users can set the tenant identifier; this ' 'cannot be changed using authorization policies.') ), SHARED: properties.Schema( properties.Schema.BOOLEAN, _('Whether this network should be shared across all tenants. ' 'Note that the default policy setting restricts usage of this ' 'attribute to administrative users only.'), default=False, update_allowed=True ), DHCP_AGENT_IDS: properties.Schema( properties.Schema.LIST, _('The IDs of the DHCP agent to schedule the network. Note that ' 'the default policy setting in Neutron restricts usage of this ' 'property to administrative users only.'), update_allowed=True ), PORT_SECURITY_ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('Flag to enable/disable port security on the network. 
It ' 'provides the default value for the attribute of the ports ' 'created on this network.'), update_allowed=True, support_status=support.SupportStatus(version='5.0.0') ), QOS_POLICY: properties.Schema( properties.Schema.STRING, _('The name or ID of QoS policy to attach to this network.'), constraints=[ constraints.CustomConstraint('neutron.qos_policy') ], update_allowed=True, support_status=support.SupportStatus(version='6.0.0') ), DNS_DOMAIN: properties.Schema( properties.Schema.STRING, _('DNS domain associated with this network.'), constraints=[ constraints.CustomConstraint('dns_domain') ], update_allowed=True, support_status=support.SupportStatus(version='7.0.0') ), TAGS: properties.Schema( properties.Schema.LIST, _('The tags to be added to the network.'), schema=properties.Schema(properties.Schema.STRING), update_allowed=True, support_status=support.SupportStatus(version='9.0.0') ), } attributes_schema = { STATUS: attributes.Schema( _("The status of the network."), type=attributes.Schema.STRING ), NAME_ATTR: attributes.Schema( _("The name of the network."), type=attributes.Schema.STRING ), SUBNETS: attributes.Schema( _("Subnets of this network."), type=attributes.Schema.LIST ), ADMIN_STATE_UP_ATTR: attributes.Schema( _("The administrative status of the network."), type=attributes.Schema.STRING ), TENANT_ID_ATTR: attributes.Schema( _("The tenant owning this network."), type=attributes.Schema.STRING ), PORT_SECURITY_ENABLED_ATTR: attributes.Schema( _("Port security enabled of the network."), support_status=support.SupportStatus(version='5.0.0'), type=attributes.Schema.BOOLEAN ), MTU_ATTR: attributes.Schema( _("The maximum transmission unit size(in bytes) for the network."), support_status=support.SupportStatus(version='5.0.0'), type=attributes.Schema.INTEGER ), QOS_POLICY_ATTR: attributes.Schema( _("The QoS policy ID attached to this network."), type=attributes.Schema.STRING, support_status=support.SupportStatus(version='6.0.0'), ), L2_ADJACENCY: attributes.Schema( _("A boolean value for L2 adjacency, True means that you can " "expect L2 connectivity throughout the Network."), type=attributes.Schema.BOOLEAN, support_status=support.SupportStatus(version='9.0.0'), ), } def translation_rules(self, properties): return [translation.TranslationRule( properties, translation.TranslationRule.RESOLVE, [self.QOS_POLICY], client_plugin=self.client_plugin(), finder='get_qos_policy_id')] def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) dhcp_agent_ids = props.pop(self.DHCP_AGENT_IDS, None) qos_policy = props.pop(self.QOS_POLICY, None) tags = props.pop(self.TAGS, []) if qos_policy: props['qos_policy_id'] = qos_policy net = self.client().create_network({'network': props})['network'] self.resource_id_set(net['id']) if dhcp_agent_ids: self._replace_dhcp_agents(dhcp_agent_ids) if tags: self.set_tags(tags) def check_create_complete(self, *args): attributes = self._show_resource() self._store_config_default_properties(attributes) return self.is_built(attributes) def handle_delete(self): try: self.client().delete_network(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) if self.DHCP_AGENT_IDS in prop_diff: dhcp_agent_ids = prop_diff.pop(self.DHCP_AGENT_IDS) or [] self._replace_dhcp_agents(dhcp_agent_ids) if self.QOS_POLICY in prop_diff: qos_policy = prop_diff.pop(self.QOS_POLICY) 
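# Map the template-facing name/ID onto the 'qos_policy_id' field the API expects; passing None detaches the currently attached policy.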
prop_diff[ 'qos_policy_id'] = self.client_plugin().get_qos_policy_id( qos_policy) if qos_policy else None if self.TAGS in prop_diff: self.set_tags(prop_diff.pop(self.TAGS)) if prop_diff: self.client().update_network(self.resource_id, {'network': prop_diff}) def check_update_complete(self, *args): attributes = self._show_resource() return self.is_built(attributes) def _replace_dhcp_agents(self, dhcp_agent_ids): ret = self.client().list_dhcp_agent_hosting_networks( self.resource_id) old = set([agent['id'] for agent in ret['agents']]) new = set(dhcp_agent_ids) for dhcp_agent_id in new - old: try: self.client().add_network_to_dhcp_agent( dhcp_agent_id, {'network_id': self.resource_id}) except Exception as ex: # if 409 is happened, the agent is already associated. if not self.client_plugin().is_conflict(ex): raise for dhcp_agent_id in old - new: try: self.client().remove_network_from_dhcp_agent( dhcp_agent_id, self.resource_id) except Exception as ex: # assume 2 patterns about status_code following: # 404: the network or agent is already gone # 409: the network isn't scheduled by the dhcp_agent if not (self.client_plugin().is_conflict(ex) or self.client_plugin().is_not_found(ex)): raise def parse_live_resource_data(self, resource_properties, resource_data): result = super(Net, self).parse_live_resource_data( resource_properties, resource_data) result.pop(self.SHARED) result[self.QOS_POLICY] = resource_data.get('qos_policy_id') try: dhcp = self.client().list_dhcp_agent_hosting_networks( self.resource_id) dhcp_agents = set([agent['id'] for agent in dhcp['agents']]) result.update({self.DHCP_AGENT_IDS: list(dhcp_agents)}) except self.client_plugin().exceptions.Forbidden: # Just don't add dhcp_clients if we can't get values. pass return result def resource_mapping(): return { 'OS::Neutron::Net': Net, } heat-10.0.2/heat/engine/resources/openstack/neutron/extraroute.py0000666000175000017500000001033213343562351025200 0ustar zuulzuul00000000000000 # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support class ExtraRoute(neutron.NeutronResource): """Resource for specifying extra routes for Neutron router. Resource allows to specify nexthop IP and destination network for router. 
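The new route is appended to the router's existing route list, and the resource ID is recorded as '<router_id>:<destination>:<nexthop>' so that exactly this route can be removed again on delete.

A minimal, illustrative snippet (the router reference and the addresses are examples only):

    extra_route:
      type: OS::Neutron::ExtraRoute
      properties:
        router_id: { get_resource: router }
        destination: 172.16.0.0/24
        nexthop: 10.0.0.254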
""" required_service_extension = 'extraroute' support_status = support.SupportStatus( status=support.UNSUPPORTED, message=_('Use this resource at your own risk.')) PROPERTIES = ( ROUTER_ID, DESTINATION, NEXTHOP, ) = ( 'router_id', 'destination', 'nexthop', ) properties_schema = { ROUTER_ID: properties.Schema( properties.Schema.STRING, description=_('The router id.'), required=True, constraints=[ constraints.CustomConstraint('neutron.router') ] ), DESTINATION: properties.Schema( properties.Schema.STRING, description=_('Network in CIDR notation.'), required=True), NEXTHOP: properties.Schema( properties.Schema.STRING, description=_('Nexthop IP address.'), required=True) } def add_dependencies(self, deps): super(ExtraRoute, self).add_dependencies(deps) for resource in six.itervalues(self.stack): # depend on any RouterInterface in this template with the same # router_id as this router_id if resource.has_interface('OS::Neutron::RouterInterface'): router_id = self.properties[self.ROUTER_ID] dep_router_id = resource.properties['router'] if dep_router_id == router_id: deps += (self, resource) # depend on any RouterGateway in this template with the same # router_id as this router_id elif (resource.has_interface('OS::Neutron::RouterGateway') and resource.properties['router_id'] == self.properties['router_id']): deps += (self, resource) def handle_create(self): router_id = self.properties.get(self.ROUTER_ID) routes = self.client().show_router( router_id).get('router').get('routes') if not routes: routes = [] new_route = {'destination': self.properties[self.DESTINATION], 'nexthop': self.properties[self.NEXTHOP]} if new_route in routes: msg = _('Route duplicates an existing route.') raise exception.Error(msg) routes.append(new_route) self.client().update_router(router_id, {'router': {'routes': routes}}) new_route['router_id'] = router_id self.resource_id_set( '%(router_id)s:%(destination)s:%(nexthop)s' % new_route) def handle_delete(self): if not self.resource_id: return (router_id, destination, nexthop) = self.resource_id.split(':') with self.client_plugin().ignore_not_found: routes = self.client().show_router( router_id).get('router').get('routes', []) try: routes.remove({'destination': destination, 'nexthop': nexthop}) except ValueError: return self.client().update_router(router_id, {'router': {'routes': routes}}) def resource_mapping(): return { 'OS::Neutron::ExtraRoute': ExtraRoute, } heat-10.0.2/heat/engine/resources/openstack/neutron/security_group.py0000666000175000017500000002306613343562340026067 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support class SecurityGroup(neutron.NeutronResource): """A resource for managing Neutron security groups. Security groups are sets of IP filter rules that are applied to an instance's networking. 
They are project specific, and project members can edit the default rules for their group and add new rules sets. All projects have a "default" security group, which is applied to instances that have no other security group defined. """ required_service_extension = 'security-group' entity = 'security_group' support_status = support.SupportStatus(version='2014.1') PROPERTIES = ( NAME, DESCRIPTION, RULES, ) = ( 'name', 'description', 'rules', ) _RULE_KEYS = ( RULE_DIRECTION, RULE_ETHERTYPE, RULE_PORT_RANGE_MIN, RULE_PORT_RANGE_MAX, RULE_PROTOCOL, RULE_REMOTE_MODE, RULE_REMOTE_GROUP_ID, RULE_REMOTE_IP_PREFIX, ) = ( 'direction', 'ethertype', 'port_range_min', 'port_range_max', 'protocol', 'remote_mode', 'remote_group_id', 'remote_ip_prefix', ) _rule_schema = { RULE_DIRECTION: properties.Schema( properties.Schema.STRING, _('The direction in which the security group rule is applied. ' 'For a compute instance, an ingress security group rule ' 'matches traffic that is incoming (ingress) for that ' 'instance. An egress rule is applied to traffic leaving ' 'the instance.'), default='ingress', constraints=[ constraints.AllowedValues(['ingress', 'egress']), ] ), RULE_ETHERTYPE: properties.Schema( properties.Schema.STRING, _('Ethertype of the traffic.'), default='IPv4', constraints=[ constraints.AllowedValues(['IPv4', 'IPv6']), ] ), RULE_PORT_RANGE_MIN: properties.Schema( properties.Schema.INTEGER, _('The minimum port number in the range that is matched by the ' 'security group rule. If the protocol is TCP or UDP, this ' 'value must be less than or equal to the value of the ' 'port_range_max attribute. If the protocol is ICMP, this ' 'value must be an ICMP type.'), constraints=[ constraints.Range(0, 65535) ] ), RULE_PORT_RANGE_MAX: properties.Schema( properties.Schema.INTEGER, _('The maximum port number in the range that is matched by the ' 'security group rule. The port_range_min attribute constrains ' 'the port_range_max attribute. If the protocol is ICMP, this ' 'value must be an ICMP type.'), constraints=[ constraints.Range(0, 65535) ] ), RULE_PROTOCOL: properties.Schema( properties.Schema.STRING, _('The protocol that is matched by the security group rule. ' 'Valid values include tcp, udp, and icmp.') ), RULE_REMOTE_MODE: properties.Schema( properties.Schema.STRING, _('Whether to specify a remote group or a remote IP prefix.'), default='remote_ip_prefix', constraints=[ constraints.AllowedValues(['remote_ip_prefix', 'remote_group_id']), ] ), RULE_REMOTE_GROUP_ID: properties.Schema( properties.Schema.STRING, _('The remote group ID to be associated with this security group ' 'rule. If no value is specified then this rule will use this ' 'security group for the remote_group_id. 
The remote mode ' 'parameter must be set to "remote_group_id".'), constraints=[ constraints.CustomConstraint('neutron.security_group') ] ), RULE_REMOTE_IP_PREFIX: properties.Schema( properties.Schema.STRING, _('The remote IP prefix (CIDR) to be associated with this ' 'security group rule.'), constraints=[ constraints.CustomConstraint('net_cidr') ] ), } properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('A string specifying a symbolic name for the security group, ' 'which is not required to be unique.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the security group.'), update_allowed=True ), RULES: properties.Schema( properties.Schema.LIST, _('List of security group rules.'), default=[], schema=properties.Schema( properties.Schema.MAP, schema=_rule_schema ), update_allowed=True ), } default_egress_rules = [ {"direction": "egress", "ethertype": "IPv4"}, {"direction": "egress", "ethertype": "IPv6"} ] def validate(self): super(SecurityGroup, self).validate() if self.properties[self.NAME] == 'default': msg = _('Security groups cannot be assigned the name "default".') raise exception.StackValidationFailed(message=msg) def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) rules = props.pop(self.RULES, []) sec = self.client().create_security_group( {'security_group': props})['security_group'] self.resource_id_set(sec['id']) self._create_rules(rules) def _format_rule(self, r): rule = dict(r) rule['security_group_id'] = self.resource_id if 'remote_mode' in rule: remote_mode = rule.get(self.RULE_REMOTE_MODE) del(rule[self.RULE_REMOTE_MODE]) if remote_mode == self.RULE_REMOTE_GROUP_ID: rule[self.RULE_REMOTE_IP_PREFIX] = None if not rule.get(self.RULE_REMOTE_GROUP_ID): # if remote group is not specified then make this # a self-referencing rule rule[self.RULE_REMOTE_GROUP_ID] = self.resource_id else: rule[self.RULE_REMOTE_GROUP_ID] = None for key in (self.RULE_PORT_RANGE_MIN, self.RULE_PORT_RANGE_MAX): if rule.get(key) is not None: rule[key] = str(rule[key]) return rule def _create_rules(self, rules): egress_deleted = False for i in rules: if i[self.RULE_DIRECTION] == 'egress' and not egress_deleted: # There is at least one egress rule, so delete the default # rules which allow all egress traffic egress_deleted = True def is_egress(rule): return rule[self.RULE_DIRECTION] == 'egress' self._delete_rules(is_egress) rule = self._format_rule(i) try: self.client().create_security_group_rule( {'security_group_rule': rule}) except Exception as ex: if not self.client_plugin().is_conflict(ex): raise def _delete_rules(self, to_delete=None): try: sec = self.client().show_security_group( self.resource_id)['security_group'] except Exception as ex: self.client_plugin().ignore_not_found(ex) else: for rule in sec['security_group_rules']: if to_delete is None or to_delete(rule): with self.client_plugin().ignore_not_found: self.client().delete_security_group_rule(rule['id']) def handle_delete(self): if self.resource_id is None: return self._delete_rules() with self.client_plugin().ignore_not_found: self.client().delete_security_group(self.resource_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): # handle rules changes by: # * deleting all rules # * restoring the default egress rules # * creating new rules rules = None if self.RULES in prop_diff: rules = prop_diff.pop(self.RULES) self._delete_rules() self._create_rules(self.default_egress_rules) if prop_diff: 
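# Anything left in prop_diff (name, description) is a plain attribute update on the group itself.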
self.prepare_update_properties(prop_diff) self.client().update_security_group( self.resource_id, {'security_group': prop_diff}) if rules: self._create_rules(rules) def resource_mapping(): return { 'OS::Neutron::SecurityGroup': SecurityGroup, } heat-10.0.2/heat/engine/resources/openstack/neutron/port.py0000666000175000017500000006214213343562351023770 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_serialization import jsonutils import six from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources.openstack.neutron import neutron from heat.engine.resources.openstack.neutron import subnet from heat.engine import support from heat.engine import translation LOG = logging.getLogger(__name__) class Port(neutron.NeutronResource): """A resource for managing Neutron ports. A port represents a virtual switch port on a logical network switch. Virtual instances attach their interfaces into ports. The logical port also defines the MAC address and the IP address(es) to be assigned to the interfaces plugged into them. When IP addresses are associated to a port, this also implies the port is associated with a subnet, as the IP address was taken from the allocation pool for a specific subnet. 
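A minimal, illustrative example (the network and subnet names are placeholders):

    port:
      type: OS::Neutron::Port
      properties:
        network: private
        fixed_ips:
          - subnet: private-subnet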
""" entity = 'port' PROPERTIES = ( NAME, NETWORK_ID, NETWORK, FIXED_IPS, SECURITY_GROUPS, REPLACEMENT_POLICY, DEVICE_ID, DEVICE_OWNER, DNS_NAME, TAGS, ) = ( 'name', 'network_id', 'network', 'fixed_ips', 'security_groups', 'replacement_policy', 'device_id', 'device_owner', 'dns_name', 'tags', ) EXTRA_PROPERTIES = ( VALUE_SPECS, ADMIN_STATE_UP, MAC_ADDRESS, ALLOWED_ADDRESS_PAIRS, VNIC_TYPE, QOS_POLICY, PORT_SECURITY_ENABLED, ) = ( 'value_specs', 'admin_state_up', 'mac_address', 'allowed_address_pairs', 'binding:vnic_type', 'qos_policy', 'port_security_enabled', ) _FIXED_IP_KEYS = ( FIXED_IP_SUBNET_ID, FIXED_IP_SUBNET, FIXED_IP_IP_ADDRESS, ) = ( 'subnet_id', 'subnet', 'ip_address', ) _ALLOWED_ADDRESS_PAIR_KEYS = ( ALLOWED_ADDRESS_PAIR_MAC_ADDRESS, ALLOWED_ADDRESS_PAIR_IP_ADDRESS, ) = ( 'mac_address', 'ip_address', ) ATTRIBUTES = ( ADMIN_STATE_UP_ATTR, DEVICE_ID_ATTR, DEVICE_OWNER_ATTR, FIXED_IPS_ATTR, MAC_ADDRESS_ATTR, NAME_ATTR, NETWORK_ID_ATTR, SECURITY_GROUPS_ATTR, STATUS, TENANT_ID, ALLOWED_ADDRESS_PAIRS_ATTR, SUBNETS_ATTR, PORT_SECURITY_ENABLED_ATTR, QOS_POLICY_ATTR, DNS_ASSIGNMENT, ) = ( 'admin_state_up', 'device_id', 'device_owner', 'fixed_ips', 'mac_address', 'name', 'network_id', 'security_groups', 'status', 'tenant_id', 'allowed_address_pairs', 'subnets', 'port_security_enabled', 'qos_policy_id', 'dns_assignment', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('A symbolic name for this port.'), update_allowed=True ), NETWORK_ID: properties.Schema( properties.Schema.STRING, support_status=support.SupportStatus( status=support.HIDDEN, version='5.0.0', message=_('Use property %s.') % NETWORK, previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2' ) ), constraints=[ constraints.CustomConstraint('neutron.network') ], ), NETWORK: properties.Schema( properties.Schema.STRING, _('Network this port belongs to. If you plan to use current port ' 'to assign Floating IP, you should specify %(fixed_ips)s ' 'with %(subnet)s. Note if this changes to a different network ' 'update, the port will be replaced.') % {'fixed_ips': FIXED_IPS, 'subnet': FIXED_IP_SUBNET}, support_status=support.SupportStatus(version='2014.2'), required=True, constraints=[ constraints.CustomConstraint('neutron.network') ], ), DEVICE_ID: properties.Schema( properties.Schema.STRING, _('Device ID of this port.'), update_allowed=True, default='' ), DEVICE_OWNER: properties.Schema( properties.Schema.STRING, _('Name of the network owning the port. 
' 'The value is typically network:floatingip ' 'or network:router_interface or network:dhcp.'), update_allowed=True, default='' ), FIXED_IPS: properties.Schema( properties.Schema.LIST, _('Desired IPs for this port.'), schema=properties.Schema( properties.Schema.MAP, schema={ FIXED_IP_SUBNET_ID: properties.Schema( properties.Schema.STRING, support_status=support.SupportStatus( status=support.HIDDEN, version='5.0.0', message=_('Use property %s.') % FIXED_IP_SUBNET, previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2 ' ) ), constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), FIXED_IP_SUBNET: properties.Schema( properties.Schema.STRING, _('Subnet in which to allocate the IP address for ' 'this port.'), support_status=support.SupportStatus(version='2014.2'), constraints=[ constraints.CustomConstraint('neutron.subnet') ] ), FIXED_IP_IP_ADDRESS: properties.Schema( properties.Schema.STRING, _('IP address desired in the subnet for this port.'), constraints=[ constraints.CustomConstraint('ip_addr') ] ), }, ), update_allowed=True ), SECURITY_GROUPS: properties.Schema( properties.Schema.LIST, _('Security group IDs to associate with this port.'), update_allowed=True ), REPLACEMENT_POLICY: properties.Schema( properties.Schema.STRING, _('Policy on how to respond to a stack-update for this resource. ' 'REPLACE_ALWAYS will replace the port regardless of any ' 'property changes. AUTO will update the existing port for any ' 'changed update-allowed property.'), default='AUTO', constraints=[ constraints.AllowedValues(['REPLACE_ALWAYS', 'AUTO']), ], update_allowed=True, support_status=support.SupportStatus( status=support.HIDDEN, version='9.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, version='6.0.0', message=_('Replacement policy used to work around flawed ' 'nova/neutron port interaction which has been ' 'fixed since Liberty.'), previous_status=support.SupportStatus(version='2014.2') ) ) ), DNS_NAME: properties.Schema( properties.Schema.STRING, _('DNS name associated with the port.'), update_allowed=True, constraints=[ constraints.CustomConstraint('dns_name') ], support_status=support.SupportStatus(version='7.0.0'), ), TAGS: properties.Schema( properties.Schema.LIST, _('The tags to be added to the port.'), schema=properties.Schema(properties.Schema.STRING), update_allowed=True, support_status=support.SupportStatus(version='9.0.0') ), } # NOTE(prazumovsky): properties_schema has been separated because some # properties used in server for creating internal port. extra_properties_schema = { VALUE_SPECS: properties.Schema( properties.Schema.MAP, _('Extra parameters to include in the request.'), default={}, update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of this port.'), default=True, update_allowed=True ), MAC_ADDRESS: properties.Schema( properties.Schema.STRING, _('MAC address to give to this port. 
The default update policy ' 'of this property in neutron is that allow admin role only.'), constraints=[ constraints.CustomConstraint('mac_addr') ], update_allowed=True, ), ALLOWED_ADDRESS_PAIRS: properties.Schema( properties.Schema.LIST, _('Additional MAC/IP address pairs allowed to pass through the ' 'port.'), schema=properties.Schema( properties.Schema.MAP, schema={ ALLOWED_ADDRESS_PAIR_MAC_ADDRESS: properties.Schema( properties.Schema.STRING, _('MAC address to allow through this port.'), constraints=[ constraints.CustomConstraint('mac_addr') ] ), ALLOWED_ADDRESS_PAIR_IP_ADDRESS: properties.Schema( properties.Schema.STRING, _('IP address to allow through this port.'), required=True, constraints=[ constraints.CustomConstraint('net_cidr') ] ), }, ), update_allowed=True, ), VNIC_TYPE: properties.Schema( properties.Schema.STRING, _('The vnic type to be bound on the neutron port. ' 'To support SR-IOV PCI passthrough networking, you can request ' 'that the neutron port to be realized as normal (virtual nic), ' 'direct (pci passthrough), or macvtap ' '(virtual interface with a tap-like software interface). Note ' 'that this only works for Neutron deployments that support ' 'the bindings extension.'), constraints=[ constraints.AllowedValues(['normal', 'direct', 'macvtap', 'direct-physical', 'baremetal']), ], support_status=support.SupportStatus(version='2015.1'), update_allowed=True, default='normal' ), PORT_SECURITY_ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('Flag to enable/disable port security on the port. ' 'When disable this feature(set it to False), there will be no ' 'packages filtering, like security-group and address-pairs.'), update_allowed=True, support_status=support.SupportStatus(version='5.0.0') ), QOS_POLICY: properties.Schema( properties.Schema.STRING, _('The name or ID of QoS policy to attach to this port.'), constraints=[ constraints.CustomConstraint('neutron.qos_policy') ], update_allowed=True, support_status=support.SupportStatus(version='6.0.0') ), } # Need to update properties_schema with other properties before # initialisation, because resource should contain all properties before # creating. Also, documentation should correctly resolves resource # properties schema. 
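# dict.update() mutates the class-level schema in place, so the merged mapping is what both validation and the generated documentation see.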
properties_schema.update(extra_properties_schema) attributes_schema = { ADMIN_STATE_UP_ATTR: attributes.Schema( _("The administrative state of this port."), type=attributes.Schema.STRING ), DEVICE_ID_ATTR: attributes.Schema( _("Unique identifier for the device."), type=attributes.Schema.STRING ), DEVICE_OWNER: attributes.Schema( _("Name of the network owning the port."), type=attributes.Schema.STRING ), FIXED_IPS_ATTR: attributes.Schema( _("Fixed IP addresses."), type=attributes.Schema.LIST ), MAC_ADDRESS_ATTR: attributes.Schema( _("MAC address of the port."), type=attributes.Schema.STRING ), NAME_ATTR: attributes.Schema( _("Friendly name of the port."), type=attributes.Schema.STRING ), NETWORK_ID_ATTR: attributes.Schema( _("Unique identifier for the network owning the port."), type=attributes.Schema.STRING ), SECURITY_GROUPS_ATTR: attributes.Schema( _("A list of security groups for the port."), type=attributes.Schema.LIST ), STATUS: attributes.Schema( _("The status of the port."), type=attributes.Schema.STRING ), TENANT_ID: attributes.Schema( _("Tenant owning the port."), type=attributes.Schema.STRING ), ALLOWED_ADDRESS_PAIRS_ATTR: attributes.Schema( _("Additional MAC/IP address pairs allowed to pass through " "a port."), type=attributes.Schema.LIST ), SUBNETS_ATTR: attributes.Schema( _("A list of all subnet attributes for the port."), type=attributes.Schema.LIST ), PORT_SECURITY_ENABLED_ATTR: attributes.Schema( _("Port security enabled of the port."), support_status=support.SupportStatus(version='5.0.0'), type=attributes.Schema.BOOLEAN ), QOS_POLICY_ATTR: attributes.Schema( _("The QoS policy ID attached to this port."), type=attributes.Schema.STRING, support_status=support.SupportStatus(version='6.0.0'), ), DNS_ASSIGNMENT: attributes.Schema( _("The DNS assigned to this port."), type=attributes.Schema.MAP, support_status=support.SupportStatus(version='7.0.0'), ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.NETWORK], value_path=[self.NETWORK_ID] ), translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.FIXED_IPS, self.FIXED_IP_SUBNET], value_name=self.FIXED_IP_SUBNET_ID ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.NETWORK], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='network' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.FIXED_IPS, self.FIXED_IP_SUBNET], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='subnet' ) ] def add_dependencies(self, deps): super(Port, self).add_dependencies(deps) # Depend on any Subnet in this template with the same # network_id as this network_id. # It is not known which subnet a port might be assigned # to so all subnets in a network should be created before # the ports in that network. 
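# Note: has_interface() also matches resources that the environment maps to OS::Neutron::Subnet, not only literal uses of the type.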
for res in six.itervalues(self.stack): if res.has_interface('OS::Neutron::Subnet'): dep_network = res.properties.get(subnet.Subnet.NETWORK) network = self.properties[self.NETWORK] if dep_network == network: deps += (self, res) def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) props['network_id'] = props.pop(self.NETWORK) self._prepare_port_properties(props) qos_policy = props.pop(self.QOS_POLICY, None) tags = props.pop(self.TAGS, []) if qos_policy: props['qos_policy_id'] = self.client_plugin().get_qos_policy_id( qos_policy) port = self.client().create_port({'port': props})['port'] self.resource_id_set(port['id']) if tags: self.set_tags(tags) def _prepare_port_properties(self, props, prepare_for_update=False): if self.FIXED_IPS in props: fixed_ips = props[self.FIXED_IPS] if fixed_ips: for fixed_ip in fixed_ips: for key, value in list(fixed_ip.items()): if value is None: fixed_ip.pop(key) if self.FIXED_IP_SUBNET in fixed_ip: fixed_ip[ 'subnet_id'] = fixed_ip.pop(self.FIXED_IP_SUBNET) else: # Passing empty list would have created a port without # fixed_ips during CREATE and released the existing # fixed_ips during UPDATE (default neutron behaviour). # However, for backward compatibility we will let neutron # assign ip for CREATE and leave the assigned ips during # UPDATE by not passing it. ref bug #1538473. del props[self.FIXED_IPS] # delete empty MAC addresses so that Neutron validation code # wouldn't fail as it not accepts Nones if self.ALLOWED_ADDRESS_PAIRS in props: address_pairs = props[self.ALLOWED_ADDRESS_PAIRS] if address_pairs: for pair in address_pairs: if (self.ALLOWED_ADDRESS_PAIR_MAC_ADDRESS in pair and pair[ self.ALLOWED_ADDRESS_PAIR_MAC_ADDRESS] is None): del pair[self.ALLOWED_ADDRESS_PAIR_MAC_ADDRESS] else: props[self.ALLOWED_ADDRESS_PAIRS] = [] # if without 'security_groups', don't set the 'security_groups' # property when creating, neutron will create the port with the # 'default' securityGroup. If has the 'security_groups' and the # value is [], which means to create the port without securityGroup. if self.SECURITY_GROUPS in props: if props.get(self.SECURITY_GROUPS) is not None: props[self.SECURITY_GROUPS] = self.client_plugin( ).get_secgroup_uuids(props.get(self.SECURITY_GROUPS)) else: # And the update should has the same behavior. if prepare_for_update: props[self.SECURITY_GROUPS] = self.client_plugin( ).get_secgroup_uuids(['default']) if self.REPLACEMENT_POLICY in props: del(props[self.REPLACEMENT_POLICY]) def _store_config_default_properties(self, attrs): """A method for storing properties default values. A method allows to store properties default values, which cannot be defined in schema in case of specifying in config file. 
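Concretely, the port persists the 'binding:vnic_type' value reported back by Neutron, whose effective default may come from deployment configuration rather than from this schema.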
""" super(Port, self)._store_config_default_properties(attrs) if self.VNIC_TYPE in attrs: self.data_set(self.VNIC_TYPE, attrs[self.VNIC_TYPE]) def check_create_complete(self, *args): attributes = self._show_resource() self._store_config_default_properties(attributes) return self.is_built(attributes) def handle_delete(self): try: self.client().delete_port(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True def parse_live_resource_data(self, resource_properties, resource_data): result = super(Port, self).parse_live_resource_data( resource_properties, resource_data) result[self.QOS_POLICY] = resource_data.get('qos_policy_id') result.pop(self.MAC_ADDRESS) fixed_ips = resource_data.get(self.FIXED_IPS) or [] if fixed_ips: result.update({self.FIXED_IPS: []}) for fixed_ip in fixed_ips: result[self.FIXED_IPS].append( {self.FIXED_IP_SUBNET: fixed_ip.get('subnet_id'), self.FIXED_IP_IP_ADDRESS: fixed_ip.get('ip_address')}) return result def _resolve_attribute(self, name): if self.resource_id is None: return if name == self.SUBNETS_ATTR: subnets = [] try: fixed_ips = self._show_resource().get('fixed_ips', []) for fixed_ip in fixed_ips: subnet_id = fixed_ip.get('subnet_id') if subnet_id: subnets.append(self.client().show_subnet( subnet_id)['subnet']) except Exception as ex: LOG.warning("Failed to fetch resource attributes: %s", ex) return return subnets return super(Port, self)._resolve_attribute(name) def needs_replace(self, after_props): """Mandatory replace based on props.""" return after_props.get(self.REPLACEMENT_POLICY) == 'REPLACE_ALWAYS' def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) if self.QOS_POLICY in prop_diff: qos_policy = prop_diff.pop(self.QOS_POLICY) prop_diff['qos_policy_id'] = self.client_plugin( ).get_qos_policy_id(qos_policy) if qos_policy else None if self.TAGS in prop_diff: tags = prop_diff.pop(self.TAGS) self.set_tags(tags) self._prepare_port_properties(prop_diff, prepare_for_update=True) if prop_diff: LOG.debug('updating port with %s', prop_diff) self.client().update_port(self.resource_id, {'port': prop_diff}) def check_update_complete(self, *args): attributes = self._show_resource() return self.is_built(attributes) def prepare_for_replace(self): # if the port has not been created yet, return directly if self.resource_id is None: return # store port fixed_ips for restoring after failed update # Ignore if the port does not exist in neutron (deleted) with self.client_plugin().ignore_not_found: fixed_ips = self._show_resource().get('fixed_ips', []) self.data_set('port_fip', jsonutils.dumps(fixed_ips)) # reset fixed_ips for this port by setting fixed_ips to [] props = {'fixed_ips': []} self.client().update_port(self.resource_id, {'port': props}) def restore_prev_rsrc(self, convergence=False): # In case of convergence, during rollback, the previous rsrc is # already selected and is being acted upon. 
if convergence: prev_port = self existing_port, rsrc_owning_stack, stack = resource.Resource.load( prev_port.context, prev_port.replaced_by, prev_port.stack.current_traversal, True, prev_port.stack.defn._resource_data ) existing_port_id = existing_port.resource_id else: backup_stack = self.stack._backup_stack() prev_port = backup_stack.resources.get(self.name) existing_port_id = self.resource_id if existing_port_id: # reset fixed_ips to [] for new resource props = {'fixed_ips': []} self.client().update_port(existing_port_id, {'port': props}) fixed_ips = prev_port.data().get('port_fip', []) if fixed_ips and prev_port.resource_id: # restore ip for old port prev_port_props = {'fixed_ips': jsonutils.loads(fixed_ips)} self.client().update_port(prev_port.resource_id, {'port': prev_port_props}) def resource_mapping(): return { 'OS::Neutron::Port': Port, } heat-10.0.2/heat/engine/resources/openstack/neutron/neutron.py0000666000175000017500000001632513343562340024476 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from heat.common import exception from heat.common.i18n import _ from heat.engine import resource LOG = logging.getLogger(__name__) class NeutronResource(resource.Resource): default_client_name = 'neutron' res_info_key = None def get_resource_plural(self): """Return the plural of resource type. The default implementation is to return self.entity + 's', the rule is not appropriate for some special resources, e.g. qos_policy, this method should be overridden by the special resources if needed. """ if not self.entity: return return self.entity + 's' def validate(self): """Validate any of the provided params.""" res = super(NeutronResource, self).validate() if res: return res return self.validate_properties(self.properties) @staticmethod def validate_properties(properties): """Validate properties for the resource. Validates to ensure nothing in value_specs overwrites any key that exists in the schema. Also ensures that shared and tenant_id is not specified in value_specs. """ if 'value_specs' in properties: banned_keys = set(['shared', 'tenant_id']).union(set(properties)) found = banned_keys.intersection(set(properties['value_specs'])) if found: return '%s not allowed in value_specs' % ', '.join(found) @staticmethod def prepare_properties(properties, name): """Prepares the property values for correct Neutron create call. Prepares the property values so that they can be passed directly to the Neutron create call. Removes None values and value_specs, merges value_specs with the main values. """ props = dict((k, v) for k, v in properties.items() if v is not None) if 'name' in properties: props.setdefault('name', name) if 'value_specs' in props: NeutronResource.merge_value_specs(props) return props def _store_config_default_properties(self, attrs): """A method for storing properties, which defaults stored in config. A method allows to store properties default values, which cannot be defined in schema in case of specifying in config file. 
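The stored value is consumed by parse_live_resource_data below, which drops 'port_security_enabled' from the live diff when it has not actually changed.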
""" if 'port_security_enabled' in attrs: self.data_set('port_security_enabled', attrs['port_security_enabled']) @staticmethod def merge_value_specs(props): value_spec_props = props.pop('value_specs') props.update(value_spec_props) def prepare_update_properties(self, prop_diff): """Prepares prop_diff values for correct neutron update call. 1. Merges value_specs 2. Defaults resource name to physical resource name if None """ if 'value_specs' in prop_diff and prop_diff['value_specs']: NeutronResource.merge_value_specs(prop_diff) if 'name' in prop_diff and prop_diff['name'] is None: prop_diff['name'] = self.physical_resource_name() @staticmethod def is_built(attributes): status = attributes['status'] if status == 'BUILD': return False if status in ('ACTIVE', 'DOWN'): return True elif status in ('ERROR', 'DEGRADED'): raise exception.ResourceInError( resource_status=status) else: raise exception.ResourceUnknownStatus( resource_status=status, result=_('Resource is not built')) def _res_get_args(self): return [self.resource_id] def _show_resource(self): try: method_name = 'show_' + self.entity client_method = getattr(self.client(), method_name) args = self._res_get_args() res_info = client_method(*args) key = self.res_info_key if self.res_info_key else self.entity return res_info[key] except AttributeError as ex: LOG.warning("Resolving 'show' attribute has failed : %s", ex) def _resolve_attribute(self, name): if self.resource_id is None: return attributes = self._show_resource() return attributes[name] def needs_replace_failed(self): if not self.resource_id: return True with self.client_plugin().ignore_not_found: res_attrs = self._show_resource() if 'status' in res_attrs: return res_attrs['status'] == 'ERROR' return False return True def get_reference_id(self): return self.resource_id def _not_found_in_call(self, func, *args, **kwargs): try: func(*args, **kwargs) except Exception as ex: self.client_plugin().ignore_not_found(ex) return True else: return False def check_delete_complete(self, check): # NOTE(pshchelo): when longer check is needed, check is returned # as True, otherwise None is implicitly returned as check if not check: return True if not self._not_found_in_call(self._show_resource): raise exception.PhysicalResourceExists( name=self.physical_resource_name_or_FnGetRefId()) return True def set_tags(self, tags): resource_plural = self.get_resource_plural() if resource_plural: tags = tags or [] body = {'tags': tags} self.client().replace_tag(resource_plural, self.resource_id, body) def parse_live_resource_data(self, resource_properties, resource_data): result = super(NeutronResource, self).parse_live_resource_data( resource_properties, resource_data) if 'value_specs' in self.properties.keys(): result.update({self.VALUE_SPECS: {}}) for key in self.properties.get(self.VALUE_SPECS): if key in resource_data: result[self.VALUE_SPECS][key] = resource_data.get(key) # We already get real `port_security_enabled` from # super().parse_live_resource_data above, so just check and remove # if that's same value as old port value. 
if 'port_security_enabled' in self.properties.keys(): old_port = bool(self.data().get(self.PORT_SECURITY_ENABLED)) new_port = resource_data.get(self.PORT_SECURITY_ENABLED) if old_port == new_port: del result[self.PORT_SECURITY_ENABLED] return result heat-10.0.2/heat/engine/resources/openstack/neutron/subnet.py0000666000175000017500000003751213343562351024307 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import netutils from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class Subnet(neutron.NeutronResource): """A resource for managing Neutron subnets. A subnet represents an IP address block that can be used for assigning IP addresses to virtual instances. Each subnet must have a CIDR and must be associated with a network. IPs can be either selected from the whole subnet CIDR, or from "allocation pools" that can be specified by the user. """ entity = 'subnet' PROPERTIES = ( NETWORK_ID, NETWORK, SUBNETPOOL, PREFIXLEN, CIDR, VALUE_SPECS, NAME, IP_VERSION, DNS_NAMESERVERS, GATEWAY_IP, ENABLE_DHCP, ALLOCATION_POOLS, TENANT_ID, HOST_ROUTES, IPV6_RA_MODE, IPV6_ADDRESS_MODE, SEGMENT, TAGS, ) = ( 'network_id', 'network', 'subnetpool', 'prefixlen', 'cidr', 'value_specs', 'name', 'ip_version', 'dns_nameservers', 'gateway_ip', 'enable_dhcp', 'allocation_pools', 'tenant_id', 'host_routes', 'ipv6_ra_mode', 'ipv6_address_mode', 'segment', 'tags', ) _ALLOCATION_POOL_KEYS = ( ALLOCATION_POOL_START, ALLOCATION_POOL_END, ) = ( 'start', 'end', ) _HOST_ROUTES_KEYS = ( ROUTE_DESTINATION, ROUTE_NEXTHOP, ) = ( 'destination', 'nexthop', ) _IPV6_DHCP_MODES = ( DHCPV6_STATEFUL, DHCPV6_STATELESS, SLAAC, ) = ( 'dhcpv6-stateful', 'dhcpv6-stateless', 'slaac', ) ATTRIBUTES = ( NAME_ATTR, NETWORK_ID_ATTR, TENANT_ID_ATTR, ALLOCATION_POOLS_ATTR, GATEWAY_IP_ATTR, HOST_ROUTES_ATTR, IP_VERSION_ATTR, CIDR_ATTR, DNS_NAMESERVERS_ATTR, ENABLE_DHCP_ATTR, ) = ( 'name', 'network_id', 'tenant_id', 'allocation_pools', 'gateway_ip', 'host_routes', 'ip_version', 'cidr', 'dns_nameservers', 'enable_dhcp', ) properties_schema = { NETWORK_ID: properties.Schema( properties.Schema.STRING, support_status=support.SupportStatus( status=support.HIDDEN, version='5.0.0', message=_('Use property %s.') % NETWORK, previous_status=support.SupportStatus( status=support.DEPRECATED, version='2014.2' ) ), constraints=[ constraints.CustomConstraint('neutron.network') ], ), NETWORK: properties.Schema( properties.Schema.STRING, _('The ID of the attached network.'), required=True, constraints=[ constraints.CustomConstraint('neutron.network') ], support_status=support.SupportStatus(version='2014.2') ), SUBNETPOOL: properties.Schema( properties.Schema.STRING, _('The name or ID of the subnet pool.'), constraints=[ constraints.CustomConstraint('neutron.subnetpool') ], 
support_status=support.SupportStatus(version='6.0.0'), ), PREFIXLEN: properties.Schema( properties.Schema.INTEGER, _('Prefix length for subnet allocation from subnet pool.'), constraints=[constraints.Range(min=0)], support_status=support.SupportStatus(version='6.0.0'), ), CIDR: properties.Schema( properties.Schema.STRING, _('The CIDR.'), constraints=[ constraints.CustomConstraint('net_cidr') ] ), VALUE_SPECS: properties.Schema( properties.Schema.MAP, _('Extra parameters to include in the request.'), default={}, update_allowed=True ), NAME: properties.Schema( properties.Schema.STRING, _('The name of the subnet.'), update_allowed=True ), IP_VERSION: properties.Schema( properties.Schema.INTEGER, _('The IP version, which is 4 or 6.'), default=4, constraints=[ constraints.AllowedValues([4, 6]), ] ), DNS_NAMESERVERS: properties.Schema( properties.Schema.LIST, _('A specified set of DNS name servers to be used.'), default=[], update_allowed=True ), GATEWAY_IP: properties.Schema( properties.Schema.STRING, _('The gateway IP address. Set to any of [ null | ~ | "" ] ' 'to create/update a subnet without a gateway. ' 'If omitted when creation, neutron will assign the first ' 'free IP address within the subnet to the gateway ' 'automatically. If remove this from template when update, ' 'the old gateway IP address will be detached.'), update_allowed=True ), ENABLE_DHCP: properties.Schema( properties.Schema.BOOLEAN, _('Set to true if DHCP is enabled and false if DHCP is disabled.'), default=True, update_allowed=True ), ALLOCATION_POOLS: properties.Schema( properties.Schema.LIST, _('The start and end addresses for the allocation pools.'), schema=properties.Schema( properties.Schema.MAP, schema={ ALLOCATION_POOL_START: properties.Schema( properties.Schema.STRING, _('Start address for the allocation pool.'), required=True, constraints=[ constraints.CustomConstraint('ip_addr') ] ), ALLOCATION_POOL_END: properties.Schema( properties.Schema.STRING, _('End address for the allocation pool.'), required=True, constraints=[ constraints.CustomConstraint('ip_addr') ] ), }, ), update_allowed=True ), TENANT_ID: properties.Schema( properties.Schema.STRING, _('The ID of the tenant who owns the network. 
Only administrative ' 'users can specify a tenant ID other than their own.') ), HOST_ROUTES: properties.Schema( properties.Schema.LIST, _('A list of host route dictionaries for the subnet.'), schema=properties.Schema( properties.Schema.MAP, schema={ ROUTE_DESTINATION: properties.Schema( properties.Schema.STRING, _('The destination for static route.'), required=True, constraints=[ constraints.CustomConstraint('net_cidr') ] ), ROUTE_NEXTHOP: properties.Schema( properties.Schema.STRING, _('The next hop for the destination.'), required=True, constraints=[ constraints.CustomConstraint('ip_addr') ] ), }, ), update_allowed=True ), IPV6_RA_MODE: properties.Schema( properties.Schema.STRING, _('IPv6 RA (Router Advertisement) mode.'), constraints=[ constraints.AllowedValues([DHCPV6_STATEFUL, DHCPV6_STATELESS, SLAAC]), ], support_status=support.SupportStatus(version='2015.1') ), IPV6_ADDRESS_MODE: properties.Schema( properties.Schema.STRING, _('IPv6 address mode.'), constraints=[ constraints.AllowedValues([DHCPV6_STATEFUL, DHCPV6_STATELESS, SLAAC]), ], support_status=support.SupportStatus(version='2015.1') ), SEGMENT: properties.Schema( properties.Schema.STRING, _('The name/ID of the segment to associate.'), constraints=[ constraints.CustomConstraint('neutron.segment') ], support_status=support.SupportStatus(version='9.0.0') ), TAGS: properties.Schema( properties.Schema.LIST, _('The tags to be added to the subnet.'), schema=properties.Schema(properties.Schema.STRING), update_allowed=True, support_status=support.SupportStatus(version='9.0.0') ), } attributes_schema = { NAME_ATTR: attributes.Schema( _("Friendly name of the subnet."), type=attributes.Schema.STRING ), NETWORK_ID_ATTR: attributes.Schema( _("Parent network of the subnet."), type=attributes.Schema.STRING ), TENANT_ID_ATTR: attributes.Schema( _("Tenant owning the subnet."), type=attributes.Schema.STRING ), ALLOCATION_POOLS_ATTR: attributes.Schema( _("Ip allocation pools and their ranges."), type=attributes.Schema.LIST ), GATEWAY_IP_ATTR: attributes.Schema( _("Ip of the subnet's gateway."), type=attributes.Schema.STRING ), HOST_ROUTES_ATTR: attributes.Schema( _("Additional routes for this subnet."), type=attributes.Schema.LIST ), IP_VERSION_ATTR: attributes.Schema( _("Ip version for the subnet."), type=attributes.Schema.STRING ), CIDR_ATTR: attributes.Schema( _("CIDR block notation for this subnet."), type=attributes.Schema.STRING ), DNS_NAMESERVERS_ATTR: attributes.Schema( _("List of dns nameservers."), type=attributes.Schema.LIST ), ENABLE_DHCP_ATTR: attributes.Schema( _("'true' if DHCP is enabled for this subnet; 'false' otherwise."), type=attributes.Schema.STRING ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.REPLACE, [self.NETWORK], value_path=[self.NETWORK_ID]), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.NETWORK], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='network' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.SUBNETPOOL], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='subnetpool' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.SEGMENT], client_plugin=self.client_plugin('openstack'), finder='find_network_segment' ) ] @classmethod def _null_gateway_ip(cls, props): if cls.GATEWAY_IP not in props: return # Specifying null in the gateway_ip will result in # a property containing an empty string. 
# A null gateway_ip has special meaning in the API # so this needs to be set back to None. # See bug https://bugs.launchpad.net/heat/+bug/1226666 if props.get(cls.GATEWAY_IP) == '': props[cls.GATEWAY_IP] = None def validate(self): super(Subnet, self).validate() subnetpool = self.properties[self.SUBNETPOOL] prefixlen = self.properties[self.PREFIXLEN] cidr = self.properties[self.CIDR] if subnetpool is not None and cidr: raise exception.ResourcePropertyConflict(self.SUBNETPOOL, self.CIDR) if subnetpool is None and not cidr: raise exception.PropertyUnspecifiedError(self.SUBNETPOOL, self.CIDR) if prefixlen and cidr: raise exception.ResourcePropertyConflict(self.PREFIXLEN, self.CIDR) ra_mode = self.properties[self.IPV6_RA_MODE] address_mode = self.properties[self.IPV6_ADDRESS_MODE] if (self.properties[self.IP_VERSION] == 4) and ( ra_mode or address_mode): msg = _('ipv6_ra_mode and ipv6_address_mode are not supported ' 'for ipv4.') raise exception.StackValidationFailed(message=msg) if ra_mode and address_mode and (ra_mode != address_mode): msg = _('When both ipv6_ra_mode and ipv6_address_mode are set, ' 'they must be equal.') raise exception.StackValidationFailed(message=msg) gateway_ip = self.properties.get(self.GATEWAY_IP) if (gateway_ip and gateway_ip not in ['~', ''] and not netutils.is_valid_ip(gateway_ip)): msg = (_('Gateway IP address "%(gateway)s" is in ' 'invalid format.'), gateway_ip) raise exception.StackValidationFailed(message=msg) def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) props['network_id'] = props.pop(self.NETWORK) if self.SEGMENT in props and props[self.SEGMENT]: props['segment_id'] = props.pop(self.SEGMENT) tags = props.pop(self.TAGS, []) if self.SUBNETPOOL in props and props[self.SUBNETPOOL]: props['subnetpool_id'] = props.pop('subnetpool') self._null_gateway_ip(props) subnet = self.client().create_subnet({'subnet': props})['subnet'] self.resource_id_set(subnet['id']) if tags: self.set_tags(tags) def handle_delete(self): try: self.client().delete_subnet(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) else: return True def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) if (self.ALLOCATION_POOLS in prop_diff and prop_diff[self.ALLOCATION_POOLS] is None): prop_diff[self.ALLOCATION_POOLS] = [] # If the new value is '', set to None self._null_gateway_ip(prop_diff) if self.TAGS in prop_diff: tags = prop_diff.pop(self.TAGS) self.set_tags(tags) self.client().update_subnet( self.resource_id, {'subnet': prop_diff}) def resource_mapping(): return { 'OS::Neutron::Subnet': Subnet, } heat-10.0.2/heat/engine/resources/openstack/neutron/lbaas/0000775000175000017500000000000013343562672023513 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/neutron/lbaas/l7policy.py0000666000175000017500000002356213343562340025631 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class L7Policy(neutron.NeutronResource): """A resource for managing LBaaS v2 L7Policies. This resource manages Neutron-LBaaS v2 L7Policies, which represent a collection of L7Rules. L7Policy holds the action that should be performed when the rules are matched (Redirect to Pool, Redirect to URL, Reject). L7Policy holds a Listener id, so a Listener can evaluate a collection of L7Policies. L7Policy will return True when all of the L7Rules that belong to this L7Policy are matched. L7Policies under a specific Listener are ordered and the first l7Policy that returns a match will be executed. When none of the policies match the request gets forwarded to listener.default_pool_id. """ support_status = support.SupportStatus(version='7.0.0') required_service_extension = 'lbaasv2' entity = 'lbaas_l7policy' res_info_key = 'l7policy' PROPERTIES = ( NAME, DESCRIPTION, ADMIN_STATE_UP, ACTION, REDIRECT_POOL, REDIRECT_URL, POSITION, LISTENER ) = ( 'name', 'description', 'admin_state_up', 'action', 'redirect_pool', 'redirect_url', 'position', 'listener' ) L7ACTIONS = ( REJECT, REDIRECT_TO_POOL, REDIRECT_TO_URL ) = ( 'REJECT', 'REDIRECT_TO_POOL', 'REDIRECT_TO_URL' ) ATTRIBUTES = (RULES_ATTR) = ('rules') properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Name of the policy.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of the policy.'), update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of the policy.'), default=True, update_allowed=True ), ACTION: properties.Schema( properties.Schema.STRING, _('Action type of the policy.'), required=True, constraints=[constraints.AllowedValues(L7ACTIONS)], update_allowed=True ), REDIRECT_POOL: properties.Schema( properties.Schema.STRING, _('ID or name of the pool for REDIRECT_TO_POOL action type.'), constraints=[ constraints.CustomConstraint('neutron.lbaas.pool') ], update_allowed=True ), REDIRECT_URL: properties.Schema( properties.Schema.STRING, _('URL for REDIRECT_TO_URL action type. ' 'This should be a valid URL string.'), update_allowed=True ), POSITION: properties.Schema( properties.Schema.NUMBER, _('L7 policy position in ordered policies list. This must be ' 'an integer starting from 1. 
If not specified, policy will be ' 'placed at the tail of existing policies list.'), constraints=[constraints.Range(min=1)], update_allowed=True ), LISTENER: properties.Schema( properties.Schema.STRING, _('ID or name of the listener this policy belongs to.'), required=True, constraints=[ constraints.CustomConstraint('neutron.lbaas.listener') ] ), } attributes_schema = { RULES_ATTR: attributes.Schema( _('L7Rules associated with this policy.'), type=attributes.Schema.LIST ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.LISTENER], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='listener' ), ] def __init__(self, name, definition, stack): super(L7Policy, self).__init__(name, definition, stack) self._lb_id = None @property def lb_id(self): if self._lb_id is None: listener_id = self.properties[self.LISTENER] listener = self.client().show_listener(listener_id)['listener'] self._lb_id = listener['loadbalancers'][0]['id'] return self._lb_id def validate(self): res = super(L7Policy, self).validate() if res: return res if (self.properties[self.ACTION] == self.REJECT and (self.properties[self.REDIRECT_POOL] is not None or self.properties[self.REDIRECT_URL] is not None)): msg = (_('Properties %(pool)s and %(url)s are not required when ' '%(action)s type is set to %(action_type)s.') % {'pool': self.REDIRECT_POOL, 'url': self.REDIRECT_URL, 'action': self.ACTION, 'action_type': self.REJECT}) raise exception.StackValidationFailed(message=msg) if self.properties[self.ACTION] == self.REDIRECT_TO_POOL: if self.properties[self.REDIRECT_URL] is not None: raise exception.ResourcePropertyValueDependency( prop1=self.REDIRECT_URL, prop2=self.ACTION, value=self.REDIRECT_TO_URL) if self.properties[self.REDIRECT_POOL] is None: msg = (_('Property %(pool)s is required when %(action)s ' 'type is set to %(action_type)s.') % {'pool': self.REDIRECT_POOL, 'action': self.ACTION, 'action_type': self.REDIRECT_TO_POOL}) raise exception.StackValidationFailed(message=msg) if self.properties[self.ACTION] == self.REDIRECT_TO_URL: if self.properties[self.REDIRECT_POOL] is not None: raise exception.ResourcePropertyValueDependency( prop1=self.REDIRECT_POOL, prop2=self.ACTION, value=self.REDIRECT_TO_POOL) if self.properties[self.REDIRECT_URL] is None: msg = (_('Property %(url)s is required when %(action)s ' 'type is set to %(action_type)s.') % {'url': self.REDIRECT_URL, 'action': self.ACTION, 'action_type': self.REDIRECT_TO_URL}) raise exception.StackValidationFailed(message=msg) def _check_lb_status(self): return self.client_plugin().check_lb_status(self.lb_id) def handle_create(self): properties = self.prepare_properties( self.properties, self.physical_resource_name()) properties['listener_id'] = properties.pop(self.LISTENER) if self.properties[self.REDIRECT_POOL] is not None: self.client_plugin().resolve_pool( properties, self.REDIRECT_POOL, 'redirect_pool_id') return properties def check_create_complete(self, properties): if self.resource_id is None: try: l7policy = self.client().create_lbaas_l7policy( {'l7policy': properties})['l7policy'] self.resource_id_set(l7policy['id']) except Exception as ex: if self.client_plugin().is_invalid(ex): return False raise return self._check_lb_status() def handle_update(self, json_snippet, tmpl_diff, prop_diff): self._update_called = False if self.REDIRECT_POOL in prop_diff: if prop_diff[self.REDIRECT_POOL] is not None: self.client_plugin().resolve_pool( prop_diff, 
                    self.REDIRECT_POOL, 'redirect_pool_id')
            else:
                prop_diff.pop(self.REDIRECT_POOL)
                prop_diff['redirect_pool_id'] = None
        return prop_diff

    def check_update_complete(self, prop_diff):
        if not prop_diff:
            return True
        if not self._update_called:
            try:
                self.client().update_lbaas_l7policy(
                    self.resource_id, {'l7policy': prop_diff})
                self._update_called = True
            except Exception as ex:
                if self.client_plugin().is_invalid(ex):
                    return False
                raise
        return self._check_lb_status()

    def handle_delete(self):
        self._delete_called = False

    def check_delete_complete(self, data):
        if self.resource_id is None:
            return True
        if not self._delete_called:
            try:
                self.client().delete_lbaas_l7policy(self.resource_id)
                self._delete_called = True
            except Exception as ex:
                if self.client_plugin().is_invalid(ex):
                    return False
                elif self.client_plugin().is_not_found(ex):
                    return True
                raise
        return self._check_lb_status()


def resource_mapping():
    return {
        'OS::Neutron::LBaaS::L7Policy': L7Policy
    }
heat-10.0.2/heat/engine/resources/openstack/neutron/lbaas/pool_member.py0000666000175000017500000001611613343562351026366 0ustar zuulzuul00000000000000
#
# Copyright 2015 IBM Corp.
#
# All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import properties
from heat.engine.resources.openstack.neutron import neutron
from heat.engine import support
from heat.engine import translation


class PoolMember(neutron.NeutronResource):
    """A resource for managing LBaaS v2 Pool Members.

    A pool member represents a single backend node.
    """

    support_status = support.SupportStatus(version='6.0.0')

    required_service_extension = 'lbaasv2'

    entity = 'lbaas_member'

    res_info_key = 'member'

    PROPERTIES = (
        POOL, ADDRESS, PROTOCOL_PORT, WEIGHT, ADMIN_STATE_UP,
        SUBNET,
    ) = (
        'pool', 'address', 'protocol_port', 'weight', 'admin_state_up',
        'subnet'
    )

    ATTRIBUTES = (
        ADDRESS_ATTR, POOL_ID_ATTR
    ) = (
        'address', 'pool_id'
    )

    properties_schema = {
        POOL: properties.Schema(
            properties.Schema.STRING,
            _('Name or ID of the load balancing pool.'),
            required=True,
            constraints=[
                constraints.CustomConstraint('neutron.lbaas.pool')
            ]
        ),
        ADDRESS: properties.Schema(
            properties.Schema.STRING,
            _('IP address of the pool member on the pool network.'),
            required=True,
            constraints=[
                constraints.CustomConstraint('ip_addr')
            ]
        ),
        PROTOCOL_PORT: properties.Schema(
            properties.Schema.INTEGER,
            _('Port on which the pool member listens for requests or '
              'connections.'),
            required=True,
            constraints=[
                constraints.Range(1, 65535),
            ]
        ),
        WEIGHT: properties.Schema(
            properties.Schema.INTEGER,
            _('Weight of the pool member in the pool (defaults to 1).'),
            default=1,
            constraints=[
                constraints.Range(0, 256),
            ],
            update_allowed=True
        ),
        ADMIN_STATE_UP: properties.Schema(
            properties.Schema.BOOLEAN,
            _('The administrative state of the pool member.'),
            default=True,
            update_allowed=True
        ),
        SUBNET: properties.Schema(
            properties.Schema.STRING,
            _('Subnet name or ID of this member.'),
            constraints=[
                constraints.CustomConstraint('neutron.subnet')
            ],
            # Make this required until bug #1585100 is resolved.
            required=True
        ),
    }

    attributes_schema = {
        ADDRESS_ATTR: attributes.Schema(
            _('The IP address of the pool member.'),
            type=attributes.Schema.STRING
        ),
        POOL_ID_ATTR: attributes.Schema(
            _('The ID of the pool to which the pool member belongs.'),
            type=attributes.Schema.STRING
        )
    }

    def translation_rules(self, props):
        return [
            translation.TranslationRule(
                props,
                translation.TranslationRule.RESOLVE,
                [self.SUBNET],
                client_plugin=self.client_plugin(),
                finder='find_resourceid_by_name_or_id',
                entity='subnet'
            ),
        ]

    def __init__(self, name, definition, stack):
        super(PoolMember, self).__init__(name, definition, stack)
        self._pool_id = None
        self._lb_id = None

    @property
    def pool_id(self):
        if self._pool_id is None:
            self._pool_id = self.client_plugin().find_resourceid_by_name_or_id(
                self.POOL, self.properties[self.POOL])
        return self._pool_id

    @property
    def lb_id(self):
        if self._lb_id is None:
            pool = self.client().show_lbaas_pool(self.pool_id)['pool']
            listener_id = pool['listeners'][0]['id']
            listener = self.client().show_listener(listener_id)['listener']
            self._lb_id = listener['loadbalancers'][0]['id']
        return self._lb_id

    def _check_lb_status(self):
        return self.client_plugin().check_lb_status(self.lb_id)

    def handle_create(self):
        properties = self.prepare_properties(
            self.properties,
            self.physical_resource_name())
        self.client_plugin().resolve_pool(
            properties, self.POOL, 'pool_id')
        properties.pop('pool_id')
        properties['subnet_id'] = properties.pop(self.SUBNET)
        return properties

    def check_create_complete(self, properties):
        if self.resource_id is None:
            try:
                member = self.client().create_lbaas_member(
                    self.pool_id, {'member': properties})['member']
                self.resource_id_set(member['id'])
            except Exception as ex:
                if self.client_plugin().is_invalid(ex):
                    return False
                raise
        return self._check_lb_status()

    def _res_get_args(self):
        return [self.resource_id, self.pool_id]

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        self._update_called = False
        return prop_diff

    def check_update_complete(self, prop_diff):
        if not prop_diff:
            return True
        if not self._update_called:
            try:
                self.client().update_lbaas_member(self.resource_id,
                                                  self.pool_id,
                                                  {'member': prop_diff})
                self._update_called = True
            except Exception as ex:
                if self.client_plugin().is_invalid(ex):
                    return False
                raise
        return self._check_lb_status()

    def handle_delete(self):
        self._delete_called = False

    def check_delete_complete(self, data):
        if self.resource_id is None:
            return True
        if not self._delete_called:
            try:
                self.client().delete_lbaas_member(self.resource_id,
                                                  self.pool_id)
                self._delete_called = True
            except Exception as ex:
                if self.client_plugin().is_invalid(ex):
                    return False
                elif self.client_plugin().is_not_found(ex):
                    return True
                raise
        return self._check_lb_status()


def resource_mapping():
    return {
        'OS::Neutron::LBaaS::PoolMember': PoolMember,
    }
heat-10.0.2/heat/engine/resources/openstack/neutron/lbaas/pool.py0000666000175000017500000002475013343562351025042 0ustar zuulzuul00000000000000
#
# Copyright 2015 IBM Corp.
#
# All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from heat.common import exception
from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import properties
from heat.engine.resources.openstack.neutron import neutron
from heat.engine import support
from heat.engine import translation


class Pool(neutron.NeutronResource):
    """A resource for managing LBaaS v2 Pools.

    This resource manages Neutron-LBaaS v2 Pools, which represent a group
    of nodes. Pools define the subnet where nodes reside, the balancing
    algorithm, and the nodes themselves.
""" support_status = support.SupportStatus(version='6.0.0') required_service_extension = 'lbaasv2' entity = 'lbaas_pool' res_info_key = 'pool' PROPERTIES = ( ADMIN_STATE_UP, DESCRIPTION, SESSION_PERSISTENCE, NAME, LB_ALGORITHM, LISTENER, LOADBALANCER, PROTOCOL, SESSION_PERSISTENCE_TYPE, SESSION_PERSISTENCE_COOKIE_NAME, ) = ( 'admin_state_up', 'description', 'session_persistence', 'name', 'lb_algorithm', 'listener', 'loadbalancer', 'protocol', 'type', 'cookie_name' ) SESSION_PERSISTENCE_TYPES = ( SOURCE_IP, HTTP_COOKIE, APP_COOKIE ) = ( 'SOURCE_IP', 'HTTP_COOKIE', 'APP_COOKIE' ) ATTRIBUTES = ( HEALTHMONITOR_ID_ATTR, LISTENERS_ATTR, MEMBERS_ATTR ) = ( 'healthmonitor_id', 'listeners', 'members' ) properties_schema = { ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of this pool.'), default=True, update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of this pool.'), update_allowed=True, default='' ), SESSION_PERSISTENCE: properties.Schema( properties.Schema.MAP, _('Configuration of session persistence.'), schema={ SESSION_PERSISTENCE_TYPE: properties.Schema( properties.Schema.STRING, _('Method of implementation of session ' 'persistence feature.'), required=True, constraints=[constraints.AllowedValues( SESSION_PERSISTENCE_TYPES )] ), SESSION_PERSISTENCE_COOKIE_NAME: properties.Schema( properties.Schema.STRING, _('Name of the cookie, ' 'required if type is APP_COOKIE.') ) }, ), NAME: properties.Schema( properties.Schema.STRING, _('Name of this pool.'), update_allowed=True ), LB_ALGORITHM: properties.Schema( properties.Schema.STRING, _('The algorithm used to distribute load between the members of ' 'the pool.'), required=True, constraints=[ constraints.AllowedValues(['ROUND_ROBIN', 'LEAST_CONNECTIONS', 'SOURCE_IP']), ], update_allowed=True, ), LISTENER: properties.Schema( properties.Schema.STRING, _('Listener name or ID to be associated with this pool.'), constraints=[ constraints.CustomConstraint('neutron.lbaas.listener') ] ), LOADBALANCER: properties.Schema( properties.Schema.STRING, _('Loadbalancer name or ID to be associated with this pool. 
' 'Requires shared_pools service extension.'), constraints=[ constraints.CustomConstraint('neutron.lbaas.loadbalancer') ], support_status=support.SupportStatus(version='9.0.0') ), PROTOCOL: properties.Schema( properties.Schema.STRING, _('Protocol of the pool.'), required=True, constraints=[ constraints.AllowedValues(['TCP', 'HTTP', 'HTTPS']), ] ), } attributes_schema = { HEALTHMONITOR_ID_ATTR: attributes.Schema( _('ID of the health monitor associated with this pool.'), type=attributes.Schema.STRING ), LISTENERS_ATTR: attributes.Schema( _('Listener associated with this pool.'), type=attributes.Schema.STRING ), MEMBERS_ATTR: attributes.Schema( _('Members associated with this pool.'), cache_mode=attributes.Schema.CACHE_NONE, type=attributes.Schema.LIST ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.LISTENER], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='listener' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.LOADBALANCER], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='loadbalancer' ), ] def __init__(self, name, definition, stack): super(Pool, self).__init__(name, definition, stack) self._lb_id = None @property def lb_id(self): if self._lb_id: return self._lb_id self._lb_id = self.properties[self.LOADBALANCER] if self._lb_id is None: listener_id = self.properties[self.LISTENER] listener = self.client().show_listener(listener_id)['listener'] self._lb_id = listener['loadbalancers'][0]['id'] return self._lb_id def validate(self): res = super(Pool, self).validate() if res: return res if (self.properties[self.LISTENER] is None and self.properties[self.LOADBALANCER] is None): raise exception.PropertyUnspecifiedError(self.LISTENER, self.LOADBALANCER) if self.properties[self.SESSION_PERSISTENCE] is not None: session_p = self.properties[self.SESSION_PERSISTENCE] persistence_type = session_p[self.SESSION_PERSISTENCE_TYPE] if persistence_type == self.APP_COOKIE: if not session_p.get(self.SESSION_PERSISTENCE_COOKIE_NAME): msg = (_('Property %(cookie)s is required when %(sp)s ' 'type is set to %(app)s.') % {'cookie': self.SESSION_PERSISTENCE_COOKIE_NAME, 'sp': self.SESSION_PERSISTENCE, 'app': self.APP_COOKIE}) raise exception.StackValidationFailed(message=msg) elif persistence_type == self.SOURCE_IP: if session_p.get(self.SESSION_PERSISTENCE_COOKIE_NAME): msg = (_('Property %(cookie)s must NOT be specified when ' '%(sp)s type is set to %(ip)s.') % {'cookie': self.SESSION_PERSISTENCE_COOKIE_NAME, 'sp': self.SESSION_PERSISTENCE, 'ip': self.SOURCE_IP}) raise exception.StackValidationFailed(message=msg) def _check_lb_status(self): return self.client_plugin().check_lb_status(self.lb_id) def handle_create(self): properties = self.prepare_properties( self.properties, self.physical_resource_name()) if self.LISTENER in properties: properties['listener_id'] = properties.pop(self.LISTENER) if self.LOADBALANCER in properties: properties['loadbalancer_id'] = properties.pop(self.LOADBALANCER) session_p = properties.get(self.SESSION_PERSISTENCE) if session_p is not None: session_props = self.prepare_properties(session_p, None) properties[self.SESSION_PERSISTENCE] = session_props return properties def check_create_complete(self, properties): if self.resource_id is None: try: pool = self.client().create_lbaas_pool( {'pool': properties})['pool'] self.resource_id_set(pool['id']) except Exception as ex: if 
self.client_plugin().is_invalid(ex): return False raise return self._check_lb_status() def handle_update(self, json_snippet, tmpl_diff, prop_diff): self._update_called = False return prop_diff def check_update_complete(self, prop_diff): if not prop_diff: return True if not self._update_called: try: self.client().update_lbaas_pool( self.resource_id, {'pool': prop_diff}) self._update_called = True except Exception as ex: if self.client_plugin().is_invalid(ex): return False raise return self._check_lb_status() def handle_delete(self): self._delete_called = False def check_delete_complete(self, data): if self.resource_id is None: return True if not self._delete_called: try: self.client().delete_lbaas_pool(self.resource_id) self._delete_called = True except Exception as ex: if self.client_plugin().is_invalid(ex): return False elif self.client_plugin().is_not_found(ex): return True raise return self._check_lb_status() def resource_mapping(): return { 'OS::Neutron::LBaaS::Pool': Pool, } heat-10.0.2/heat/engine/resources/openstack/neutron/lbaas/loadbalancer.py0000666000175000017500000001411413343562351026471 0ustar zuulzuul00000000000000# # Copyright 2015 IBM Corp. # # All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from neutronclient.common import exceptions from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class LoadBalancer(neutron.NeutronResource): """A resource for creating LBaaS v2 Load Balancers. This resource creates and manages Neutron LBaaS v2 Load Balancers, which allows traffic to be directed between servers. 
""" support_status = support.SupportStatus(version='6.0.0') required_service_extension = 'lbaasv2' entity = 'loadbalancer' PROPERTIES = ( DESCRIPTION, NAME, PROVIDER, VIP_ADDRESS, VIP_SUBNET, ADMIN_STATE_UP, TENANT_ID ) = ( 'description', 'name', 'provider', 'vip_address', 'vip_subnet', 'admin_state_up', 'tenant_id' ) ATTRIBUTES = ( VIP_ADDRESS_ATTR, VIP_PORT_ATTR, VIP_SUBNET_ATTR, POOLS_ATTR ) = ( 'vip_address', 'vip_port_id', 'vip_subnet_id', 'pools' ) properties_schema = { DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of this Load Balancer.'), update_allowed=True, default='' ), NAME: properties.Schema( properties.Schema.STRING, _('Name of this Load Balancer.'), update_allowed=True ), PROVIDER: properties.Schema( properties.Schema.STRING, _('Provider for this Load Balancer.'), constraints=[ constraints.CustomConstraint('neutron.lbaas.provider') ], ), VIP_ADDRESS: properties.Schema( properties.Schema.STRING, _('IP address for the VIP.'), constraints=[ constraints.CustomConstraint('ip_addr') ], ), VIP_SUBNET: properties.Schema( properties.Schema.STRING, _('The name or ID of the subnet on which to allocate the VIP ' 'address.'), constraints=[ constraints.CustomConstraint('neutron.subnet') ], required=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of this Load Balancer.'), default=True, update_allowed=True ), TENANT_ID: properties.Schema( properties.Schema.STRING, _('The ID of the tenant who owns the Load Balancer. Only ' 'administrative users can specify a tenant ID other than ' 'their own.'), constraints=[ constraints.CustomConstraint('keystone.project') ], ) } attributes_schema = { VIP_ADDRESS_ATTR: attributes.Schema( _('The VIP address of the LoadBalancer.'), type=attributes.Schema.STRING ), VIP_PORT_ATTR: attributes.Schema( _('The VIP port of the LoadBalancer.'), type=attributes.Schema.STRING ), VIP_SUBNET_ATTR: attributes.Schema( _('The VIP subnet of the LoadBalancer.'), type=attributes.Schema.STRING ), POOLS_ATTR: attributes.Schema( _('Pools this LoadBalancer is associated with.'), type=attributes.Schema.LIST, support_status=support.SupportStatus(version='9.0.0') ), } def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.VIP_SUBNET], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='subnet' ), ] def handle_create(self): properties = self.prepare_properties( self.properties, self.physical_resource_name() ) properties['vip_subnet_id'] = properties.pop(self.VIP_SUBNET) lb = self.client().create_loadbalancer( {'loadbalancer': properties})['loadbalancer'] self.resource_id_set(lb['id']) def check_create_complete(self, data): return self.client_plugin().check_lb_status(self.resource_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().update_loadbalancer( self.resource_id, {'loadbalancer': prop_diff}) return prop_diff def check_update_complete(self, prop_diff): if prop_diff: return self.client_plugin().check_lb_status(self.resource_id) return True def handle_delete(self): pass def check_delete_complete(self, data): if self.resource_id is None: return True try: try: if self.client_plugin().check_lb_status(self.resource_id): self.client().delete_loadbalancer(self.resource_id) except exception.ResourceInError: # Still try to delete loadbalancer in error state self.client().delete_loadbalancer(self.resource_id) except exceptions.NotFound: # Resource is gone return True return 
False def resource_mapping(): return { 'OS::Neutron::LBaaS::LoadBalancer': LoadBalancer } heat-10.0.2/heat/engine/resources/openstack/neutron/lbaas/l7rule.py0000666000175000017500000001573313343562351025304 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support class L7Rule(neutron.NeutronResource): """A resource for managing LBaaS v2 L7Rules. This resource manages Neutron-LBaaS v2 L7Rules, which represent a set of attributes that defines which part of the request should be matched and how it should be matched. """ support_status = support.SupportStatus(version='7.0.0') required_service_extension = 'lbaasv2' entity = 'lbaas_l7rule' res_info_key = 'rule' PROPERTIES = ( ADMIN_STATE_UP, L7POLICY, TYPE, COMPARE_TYPE, INVERT, KEY, VALUE ) = ( 'admin_state_up', 'l7policy', 'type', 'compare_type', 'invert', 'key', 'value' ) L7RULE_TYPES = ( HOST_NAME, PATH, FILE_TYPE, HEADER, COOKIE ) = ( 'HOST_NAME', 'PATH', 'FILE_TYPE', 'HEADER', 'COOKIE' ) L7COMPARE_TYPES = ( REGEX, STARTS_WITH, ENDS_WITH, CONTAINS, EQUAL_TO ) = ( 'REGEX', 'STARTS_WITH', 'ENDS_WITH', 'CONTAINS', 'EQUAL_TO' ) properties_schema = { ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of the rule.'), default=True, update_allowed=True ), L7POLICY: properties.Schema( properties.Schema.STRING, _('ID or name of L7 policy this rule belongs to.'), required=True ), TYPE: properties.Schema( properties.Schema.STRING, _('Rule type.'), constraints=[constraints.AllowedValues(L7RULE_TYPES)], update_allowed=True, required=True ), COMPARE_TYPE: properties.Schema( properties.Schema.STRING, _('Rule compare type.'), constraints=[constraints.AllowedValues(L7COMPARE_TYPES)], update_allowed=True, required=True ), INVERT: properties.Schema( properties.Schema.BOOLEAN, _('Invert the compare type.'), default=False, update_allowed=True ), KEY: properties.Schema( properties.Schema.STRING, _('Key to compare. 
Relevant for HEADER and COOKIE types only.'), update_allowed=True ), VALUE: properties.Schema( properties.Schema.STRING, _('Value to compare.'), update_allowed=True, required=True ) } def __init__(self, name, definition, stack): super(L7Rule, self).__init__(name, definition, stack) self._l7p_id = None self._lb_id = None @property def l7policy_id(self): if self._l7p_id is None: self._l7p_id = self.client_plugin().find_resourceid_by_name_or_id( self.L7POLICY, self.properties[self.L7POLICY]) return self._l7p_id @property def lb_id(self): if self._lb_id is None: policy = self.client().show_lbaas_l7policy( self.l7policy_id)['l7policy'] listener_id = policy['listener_id'] listener = self.client().show_listener(listener_id)['listener'] self._lb_id = listener['loadbalancers'][0]['id'] return self._lb_id def _check_lb_status(self): return self.client_plugin().check_lb_status(self.lb_id) def validate(self): res = super(L7Rule, self).validate() if res: return res if (self.properties[self.TYPE] in (self.HEADER, self.COOKIE) and self.properties[self.KEY] is None): msg = (_('Property %(key)s is missing. ' 'This property should be specified for ' 'rules of %(header)s and %(cookie)s types.') % {'key': self.KEY, 'header': self.HEADER, 'cookie': self.COOKIE}) raise exception.StackValidationFailed(message=msg) def handle_create(self): rule_args = dict((k, v) for k, v in self.properties.items() if k is not self.L7POLICY) return rule_args def check_create_complete(self, rule_args): if self.resource_id is None: try: l7rule = self.client().create_lbaas_l7rule( self.l7policy_id, {'rule': rule_args})['rule'] self.resource_id_set(l7rule['id']) except Exception as ex: if self.client_plugin().is_invalid(ex): return False raise return self._check_lb_status() def _res_get_args(self): return [self.resource_id, self.l7policy_id] def handle_update(self, json_snippet, tmpl_diff, prop_diff): self._update_called = False if (prop_diff.get(self.TYPE) in (self.COOKIE, self.HEADER) and prop_diff.get(self.KEY) is None): prop_diff[self.KEY] = tmpl_diff['Properties'].get(self.KEY) return prop_diff def check_update_complete(self, prop_diff): if not prop_diff: return True if not self._update_called: try: self.client().update_lbaas_l7rule( self.resource_id, self.l7policy_id, {'rule': prop_diff}) self._update_called = True except Exception as ex: if self.client_plugin().is_invalid(ex): return False raise return self._check_lb_status() def handle_delete(self): self._delete_called = False def check_delete_complete(self, data): if self.resource_id is None: return True if not self._delete_called: try: self.client().delete_lbaas_l7rule( self.resource_id, self.l7policy_id) self._delete_called = True except Exception as ex: if self.client_plugin().is_invalid(ex): return False elif self.client_plugin().is_not_found(ex): return True raise return self._check_lb_status() def resource_mapping(): return { 'OS::Neutron::LBaaS::L7Rule': L7Rule } heat-10.0.2/heat/engine/resources/openstack/neutron/lbaas/__init__.py0000666000175000017500000000000013343562340025604 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/resources/openstack/neutron/lbaas/health_monitor.py0000666000175000017500000001753113343562351027104 0ustar zuulzuul00000000000000# # Copyright 2015 IBM Corp. # # All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import properties
from heat.engine.resources.openstack.neutron import neutron
from heat.engine import support


class HealthMonitor(neutron.NeutronResource):
    """A resource to handle load balancer health monitors.

    This resource creates and manages Neutron LBaaS v2 healthmonitors,
    which watch the status of the load-balanced servers.
    """

    support_status = support.SupportStatus(version='6.0.0')

    required_service_extension = 'lbaasv2'

    entity = 'lbaas_healthmonitor'

    res_info_key = 'healthmonitor'

    # Property inputs for the resource's create/update.
    PROPERTIES = (
        ADMIN_STATE_UP, DELAY, EXPECTED_CODES, HTTP_METHOD,
        MAX_RETRIES, POOL, TIMEOUT, TYPE, URL_PATH, TENANT_ID
    ) = (
        'admin_state_up', 'delay', 'expected_codes', 'http_method',
        'max_retries', 'pool', 'timeout', 'type', 'url_path', 'tenant_id'
    )

    # Supported HTTP methods
    HTTP_METHODS = (
        GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS,
        CONNECT, PATCH
    ) = (
        'GET', 'HEAD', 'POST', 'PUT', 'DELETE', 'TRACE', 'OPTIONS',
        'CONNECT', 'PATCH'
    )

    # Supported output attributes of the resource.
    ATTRIBUTES = (POOLS_ATTR,) = ('pools',)

    properties_schema = {
        ADMIN_STATE_UP: properties.Schema(
            properties.Schema.BOOLEAN,
            _('The administrative state of the health monitor.'),
            default=True,
            update_allowed=True
        ),
        DELAY: properties.Schema(
            properties.Schema.INTEGER,
            _('The minimum time in milliseconds between regular connections '
              'of the member.'),
            required=True,
            update_allowed=True,
            constraints=[constraints.Range(min=0)]
        ),
        EXPECTED_CODES: properties.Schema(
            properties.Schema.STRING,
            _('The HTTP status codes expected in response from the '
              'member to declare it healthy. Specify one of the following '
              'values: a single value, such as 200; a list, such as '
              '200, 202; or a range, such as 200-204.'),
            update_allowed=True,
            default='200'
        ),
        HTTP_METHOD: properties.Schema(
            properties.Schema.STRING,
            _('The HTTP method used for requests by the monitor of type '
              'HTTP.'),
            update_allowed=True,
            default=GET,
            constraints=[constraints.AllowedValues(HTTP_METHODS)]
        ),
        MAX_RETRIES: properties.Schema(
            properties.Schema.INTEGER,
            _('Number of permissible connection failures before changing the '
              'member status to INACTIVE.'),
            required=True,
            update_allowed=True,
            constraints=[constraints.Range(min=1, max=10)],
        ),
        POOL: properties.Schema(
            properties.Schema.STRING,
            _('ID or name of the load balancing pool.'),
            required=True,
            constraints=[
                constraints.CustomConstraint('neutron.lbaas.pool')
            ]
        ),
        TIMEOUT: properties.Schema(
            properties.Schema.INTEGER,
            _('Maximum number of milliseconds for a monitor to wait for a '
              'connection to be established before it times out.'),
            required=True,
            update_allowed=True,
            constraints=[constraints.Range(min=0)]
        ),
        TYPE: properties.Schema(
            properties.Schema.STRING,
            _('One of predefined health monitor types.'),
            required=True,
            constraints=[
                constraints.AllowedValues(['PING', 'TCP', 'HTTP', 'HTTPS']),
            ]
        ),
        URL_PATH: properties.Schema(
            properties.Schema.STRING,
            _('The HTTP path used in the HTTP request by the monitor to '
              'test a member\'s health. A valid value is a string that '
              'begins with a forward slash (/).'),
            update_allowed=True,
            default='/'
        ),
        TENANT_ID: properties.Schema(
            properties.Schema.STRING,
            _('ID of the tenant who owns the health monitor.')
        )
    }

    attributes_schema = {
        POOLS_ATTR: attributes.Schema(
            _('The list of Pools related to this monitor.'),
            type=attributes.Schema.LIST
        )
    }

    def __init__(self, name, definition, stack):
        super(HealthMonitor, self).__init__(name, definition, stack)
        self._lb_id = None

    @property
    def lb_id(self):
        if self._lb_id is None:
            pool_id = self.client_plugin().find_resourceid_by_name_or_id(
                self.POOL, self.properties[self.POOL])
            pool = self.client().show_lbaas_pool(pool_id)['pool']
            listener_id = pool['listeners'][0]['id']
            listener = self.client().show_listener(listener_id)['listener']
            self._lb_id = listener['loadbalancers'][0]['id']
        return self._lb_id

    def _check_lb_status(self):
        return self.client_plugin().check_lb_status(self.lb_id)

    def handle_create(self):
        properties = self.prepare_properties(
            self.properties,
            self.physical_resource_name())
        self.client_plugin().resolve_pool(
            properties, self.POOL, 'pool_id')
        return properties

    def check_create_complete(self, properties):
        if self.resource_id is None:
            try:
                healthmonitor = self.client().create_lbaas_healthmonitor(
                    {'healthmonitor': properties})['healthmonitor']
                self.resource_id_set(healthmonitor['id'])
            except Exception as ex:
                if self.client_plugin().is_invalid(ex):
                    return False
                raise
        return self._check_lb_status()

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        self._update_called = False
        return prop_diff

    def check_update_complete(self, prop_diff):
        if not prop_diff:
            return True
        if not self._update_called:
            try:
                self.client().update_lbaas_healthmonitor(
                    self.resource_id, {'healthmonitor': prop_diff})
                self._update_called = True
            except Exception as ex:
                if self.client_plugin().is_invalid(ex):
                    return False
                raise
        return self._check_lb_status()

    def handle_delete(self):
        self._delete_called = False

    def check_delete_complete(self, data):
        if self.resource_id is None:
            return True
        if not self._delete_called:
            try:
                self.client().delete_lbaas_healthmonitor(self.resource_id)
                self._delete_called = True
            except Exception as ex:
                if self.client_plugin().is_invalid(ex):
                    return False
                elif self.client_plugin().is_not_found(ex):
                    return True
                raise
        return self._check_lb_status()


def resource_mapping():
    return {
        'OS::Neutron::LBaaS::HealthMonitor': HealthMonitor,
    }
heat-10.0.2/heat/engine/resources/openstack/neutron/lbaas/listener.py0000666000175000017500000002305213343562351025710 0ustar zuulzuul00000000000000
#
# Copyright 2015 IBM Corp.
#
# All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
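
# The HealthMonitor resource above, like the Listener that follows, splits
# creation into handle_create(), which only prepares the request body, and
# check_create_complete(), which issues the API call once and then polls
# the owning load balancer's provisioning status via check_lb_status().
# A minimal standalone sketch of the polling loop this contract assumes;
# illustrative only, since the real engine scheduler adds timeouts and
# cooperative yielding:

import time


def _example_poll_create(resource, poll_interval=2.0):
    # Drive an asynchronous resource create to completion the way the
    # engine would: retry check_create_complete() until it returns True.
    request_body = resource.handle_create()
    while not resource.check_create_complete(request_body):
        time.sleep(poll_interval)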
from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support from heat.engine import translation class Listener(neutron.NeutronResource): """A resource for managing LBaaS v2 Listeners. This resource creates and manages Neutron LBaaS v2 Listeners, which represent a listening endpoint for the vip. """ support_status = support.SupportStatus(version='6.0.0') required_service_extension = 'lbaasv2' entity = 'listener' PROPERTIES = ( PROTOCOL_PORT, PROTOCOL, LOADBALANCER, DEFAULT_POOL, NAME, ADMIN_STATE_UP, DESCRIPTION, DEFAULT_TLS_CONTAINER_REF, SNI_CONTAINER_REFS, CONNECTION_LIMIT, TENANT_ID ) = ( 'protocol_port', 'protocol', 'loadbalancer', 'default_pool', 'name', 'admin_state_up', 'description', 'default_tls_container_ref', 'sni_container_refs', 'connection_limit', 'tenant_id' ) PROTOCOLS = ( TCP, HTTP, HTTPS, TERMINATED_HTTPS, ) = ( 'TCP', 'HTTP', 'HTTPS', 'TERMINATED_HTTPS', ) ATTRIBUTES = ( LOADBALANCERS_ATTR, DEFAULT_POOL_ID_ATTR ) = ( 'loadbalancers', 'default_pool_id' ) properties_schema = { PROTOCOL_PORT: properties.Schema( properties.Schema.INTEGER, _('TCP or UDP port on which to listen for client traffic.'), required=True, constraints=[ constraints.Range(1, 65535), ] ), PROTOCOL: properties.Schema( properties.Schema.STRING, _('Protocol on which to listen for the client traffic.'), required=True, constraints=[ constraints.AllowedValues(PROTOCOLS), ] ), LOADBALANCER: properties.Schema( properties.Schema.STRING, _('ID or name of the load balancer with which listener ' 'is associated.'), constraints=[ constraints.CustomConstraint('neutron.lbaas.loadbalancer') ] ), DEFAULT_POOL: properties.Schema( properties.Schema.STRING, _('ID or name of the default pool for the listener. Requires ' 'shared_pools service extension.'), update_allowed=True, constraints=[ constraints.CustomConstraint('neutron.lbaas.pool') ], support_status=support.SupportStatus(version='9.0.0') ), NAME: properties.Schema( properties.Schema.STRING, _('Name of this listener.'), update_allowed=True ), ADMIN_STATE_UP: properties.Schema( properties.Schema.BOOLEAN, _('The administrative state of this listener.'), update_allowed=True, default=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description of this listener.'), update_allowed=True, default='' ), DEFAULT_TLS_CONTAINER_REF: properties.Schema( properties.Schema.STRING, _('Default TLS container reference to retrieve TLS ' 'information.'), update_allowed=True ), SNI_CONTAINER_REFS: properties.Schema( properties.Schema.LIST, _('List of TLS container references for SNI.'), update_allowed=True ), CONNECTION_LIMIT: properties.Schema( properties.Schema.INTEGER, _('The maximum number of connections permitted for this ' 'load balancer. 
Defaults to -1, which is infinite.'), update_allowed=True, default=-1, constraints=[ constraints.Range(min=-1), ] ), TENANT_ID: properties.Schema( properties.Schema.STRING, _('The ID of the tenant who owns the listener.') ), } attributes_schema = { LOADBALANCERS_ATTR: attributes.Schema( _('ID of the load balancer this listener is associated to.'), type=attributes.Schema.LIST ), DEFAULT_POOL_ID_ATTR: attributes.Schema( _('ID of the default pool this listener is associated to.'), type=attributes.Schema.STRING ) } def __init__(self, name, definition, stack): super(Listener, self).__init__(name, definition, stack) self._lb_id = None def translation_rules(self, props): return [ translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.LOADBALANCER], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='loadbalancer' ), translation.TranslationRule( props, translation.TranslationRule.RESOLVE, [self.DEFAULT_POOL], client_plugin=self.client_plugin(), finder='find_resourceid_by_name_or_id', entity='pool' ), ] def validate(self): res = super(Listener, self).validate() if res: return res if (self.properties[self.LOADBALANCER] is None and self.properties[self.DEFAULT_POOL] is None): raise exception.PropertyUnspecifiedError(self.LOADBALANCER, self.DEFAULT_POOL) if self.properties[self.PROTOCOL] == self.TERMINATED_HTTPS: if self.properties[self.DEFAULT_TLS_CONTAINER_REF] is None: msg = (_('Property %(ref)s required when protocol is ' '%(term)s.') % {'ref': self.DEFAULT_TLS_CONTAINER_REF, 'term': self.TERMINATED_HTTPS}) raise exception.StackValidationFailed(message=msg) @property def lb_id(self): if self._lb_id: return self._lb_id self._lb_id = self.properties[self.LOADBALANCER] if self._lb_id is None: pool_id = self.properties[self.DEFAULT_POOL] pool = self.client().show_pool(pool_id)['pool'] self._lb_id = pool['loadbalancers'][0]['id'] return self._lb_id def _check_lb_status(self): return self.client_plugin().check_lb_status(self.lb_id) def handle_create(self): properties = self.prepare_properties( self.properties, self.physical_resource_name()) if self.LOADBALANCER in properties: properties['loadbalancer_id'] = properties.pop(self.LOADBALANCER) if self.DEFAULT_POOL in properties: properties['default_pool_id'] = properties.pop(self.DEFAULT_POOL) return properties def check_create_complete(self, properties): if self.resource_id is None: try: listener = self.client().create_listener( {'listener': properties})['listener'] self.resource_id_set(listener['id']) except Exception as ex: if self.client_plugin().is_invalid(ex): return False raise return self._check_lb_status() def handle_update(self, json_snippet, tmpl_diff, prop_diff): self._update_called = False self.properties = json_snippet.properties(self.properties_schema, self.context) return prop_diff def check_update_complete(self, prop_diff): if not prop_diff: return True if self.DEFAULT_POOL in prop_diff: prop_diff['default_pool_id'] = prop_diff.pop(self.DEFAULT_POOL) if not self._update_called: try: self.client().update_listener(self.resource_id, {'listener': prop_diff}) self._update_called = True except Exception as ex: if self.client_plugin().is_invalid(ex): return False raise return self._check_lb_status() def handle_delete(self): self._delete_called = False def check_delete_complete(self, data): if self.resource_id is None: return True if not self._delete_called: try: self.client().delete_listener(self.resource_id) self._delete_called = True except Exception as ex: if 
self.client_plugin().is_invalid(ex): return False elif self.client_plugin().is_not_found(ex): return True raise return self._check_lb_status() def resource_mapping(): return { 'OS::Neutron::LBaaS::Listener': Listener, } heat-10.0.2/heat/engine/resources/openstack/neutron/qos.py0000666000175000017500000002052313343562340023601 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.neutron import neutron from heat.engine import support class QoSPolicy(neutron.NeutronResource): """A resource for Neutron QoS Policy. This QoS policy can be associated with neutron resources, such as port and network, to provide QoS capabilities. The default policy usage of this resource is limited to administrators only. """ required_service_extension = 'qos' entity = 'qos_policy' res_info_key = 'policy' support_status = support.SupportStatus(version='6.0.0') PROPERTIES = ( NAME, DESCRIPTION, SHARED, TENANT_ID, ) = ( 'name', 'description', 'shared', 'tenant_id', ) ATTRIBUTES = ( RULES_ATTR, ) = ( 'rules', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('The name for the QoS policy.'), update_allowed=True ), DESCRIPTION: properties.Schema( properties.Schema.STRING, _('The description for the QoS policy.'), update_allowed=True ), SHARED: properties.Schema( properties.Schema.BOOLEAN, _('Whether this QoS policy should be shared to other tenants.'), default=False, update_allowed=True ), TENANT_ID: properties.Schema( properties.Schema.STRING, _('The owner tenant ID of this QoS policy.') ), } attributes_schema = { RULES_ATTR: attributes.Schema( _("A list of all rules for the QoS policy."), type=attributes.Schema.LIST ) } def handle_create(self): props = self.prepare_properties( self.properties, self.physical_resource_name()) policy = self.client().create_qos_policy({'policy': props})['policy'] self.resource_id_set(policy['id']) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client().delete_qos_policy(self.resource_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.prepare_update_properties(prop_diff) self.client().update_qos_policy( self.resource_id, {'policy': prop_diff}) class QoSRule(neutron.NeutronResource): """A resource for Neutron QoS base rule.""" required_service_extension = 'qos' support_status = support.SupportStatus(version='6.0.0') PROPERTIES = ( POLICY, TENANT_ID, ) = ( 'policy', 'tenant_id', ) properties_schema = { POLICY: properties.Schema( properties.Schema.STRING, _('ID or name of the QoS policy.'), required=True, constraints=[constraints.CustomConstraint('neutron.qos_policy')] ), TENANT_ID: properties.Schema( properties.Schema.STRING, _('The owner tenant ID of this rule.') ), } def __init__(self, name, json_snippet, stack): super(QoSRule, self).__init__(name, json_snippet, stack) self._policy_id = 
None @property def policy_id(self): if not self._policy_id: self._policy_id = self.client_plugin().get_qos_policy_id( self.properties[self.POLICY]) return self._policy_id class QoSBandwidthLimitRule(QoSRule): """A resource for Neutron QoS bandwidth limit rule. This rule can be associated with QoS policy, and then the policy can be used by neutron port and network, to provide bandwidth limit QoS capabilities. The default policy usage of this resource is limited to administrators only. """ entity = 'bandwidth_limit_rule' PROPERTIES = ( MAX_BANDWIDTH, MAX_BURST_BANDWIDTH, ) = ( 'max_kbps', 'max_burst_kbps', ) properties_schema = { MAX_BANDWIDTH: properties.Schema( properties.Schema.INTEGER, _('Max bandwidth in kbps.'), required=True, update_allowed=True, constraints=[ constraints.Range(min=0) ] ), MAX_BURST_BANDWIDTH: properties.Schema( properties.Schema.INTEGER, _('Max burst bandwidth in kbps.'), update_allowed=True, constraints=[ constraints.Range(min=0) ], default=0 ) } properties_schema.update(QoSRule.properties_schema) def handle_create(self): props = self.prepare_properties(self.properties, self.physical_resource_name()) props.pop(self.POLICY) rule = self.client().create_bandwidth_limit_rule( self.policy_id, {'bandwidth_limit_rule': props})['bandwidth_limit_rule'] self.resource_id_set(rule['id']) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client().delete_bandwidth_limit_rule( self.resource_id, self.policy_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().update_bandwidth_limit_rule( self.resource_id, self.policy_id, {'bandwidth_limit_rule': prop_diff}) def _res_get_args(self): return [self.resource_id, self.policy_id] class QoSDscpMarkingRule(QoSRule): """A resource for Neutron QoS DSCP marking rule. This rule can be associated with QoS policy, and then the policy can be used by neutron port and network, to provide DSCP marking QoS capabilities. The default policy usage of this resource is limited to administrators only. 
""" support_status = support.SupportStatus(version='7.0.0') entity = 'dscp_marking_rule' PROPERTIES = ( DSCP_MARK, ) = ( 'dscp_mark', ) properties_schema = { DSCP_MARK: properties.Schema( properties.Schema.INTEGER, _('DSCP mark between 0 and 56, except 2-6, 42, 44, and 50-54.'), required=True, update_allowed=True, constraints=[ constraints.AllowedValues([ 0, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 46, 48, 56] ) ] ) } properties_schema.update(QoSRule.properties_schema) def handle_create(self): props = self.prepare_properties(self.properties, self.physical_resource_name()) props.pop(self.POLICY) rule = self.client().create_dscp_marking_rule( self.policy_id, {'dscp_marking_rule': props})['dscp_marking_rule'] self.resource_id_set(rule['id']) def handle_delete(self): if self.resource_id is None: return with self.client_plugin().ignore_not_found: self.client().delete_dscp_marking_rule( self.resource_id, self.policy_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): if prop_diff: self.client().update_dscp_marking_rule( self.resource_id, self.policy_id, {'dscp_marking_rule': prop_diff}) def _res_get_args(self): return [self.resource_id, self.policy_id] def resource_mapping(): return { 'OS::Neutron::QoSPolicy': QoSPolicy, 'OS::Neutron::QoSBandwidthLimitRule': QoSBandwidthLimitRule, 'OS::Neutron::QoSDscpMarkingRule': QoSDscpMarkingRule } heat-10.0.2/heat/engine/resources/__init__.py0000666000175000017500000000641413343562337021066 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from stevedore import extension from heat.common import pluginutils from heat.engine import clients from heat.engine import environment from heat.engine import plugin_manager def _register_resources(env, type_pairs): for res_name, res_class in type_pairs: env.register_class(res_name, res_class) def _register_constraints(env, type_pairs): for constraint_name, constraint in type_pairs: env.register_constraint(constraint_name, constraint) def _register_stack_lifecycle_plugins(env, type_pairs): for stack_lifecycle_name, stack_lifecycle_class in type_pairs: env.register_stack_lifecycle_plugin(stack_lifecycle_name, stack_lifecycle_class) def _register_event_sinks(env, type_pairs): for sink_name, sink_class in type_pairs: env.register_event_sink(sink_name, sink_class) def _get_mapping(namespace): mgr = extension.ExtensionManager( namespace=namespace, invoke_on_load=False, on_load_failure_callback=pluginutils.log_fail_msg) return [[name, mgr[name].plugin] for name in mgr.names()] _environment = None def global_env(): if _environment is None: initialise() return _environment def initialise(): global _environment if _environment is not None: return clients.initialise() global_env = environment.Environment({}, user_env=False) _load_global_environment(global_env) _environment = global_env global_env.registry.log_resource_info(show_all=True) def _load_global_environment(env): _load_global_resources(env) environment.read_global_environment(env) def _load_global_resources(env): _register_constraints(env, _get_mapping('heat.constraints')) _register_stack_lifecycle_plugins( env, _get_mapping('heat.stack_lifecycle_plugins')) _register_event_sinks( env, _get_mapping('heat.event_sinks')) manager = plugin_manager.PluginManager(__name__) # Sometimes resources should not be available for registration in Heat due # to unsatisfied dependencies. We look first for the function # 'available_resource_mapping', which should return the filtered resources. # If it is not found, we look for the legacy 'resource_mapping'. resource_mapping = plugin_manager.PluginMapping(['available_resource', 'resource']) constraint_mapping = plugin_manager.PluginMapping('constraint') _register_resources(env, resource_mapping.load_all(manager)) _register_constraints(env, constraint_mapping.load_all(manager)) def list_opts(): from heat.engine.resources.aws.lb import loadbalancer yield None, loadbalancer.loadbalancer_opts heat-10.0.2/heat/engine/resources/stack_user.py0000666000175000017500000001553613343562340021471 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import keystoneauth1.exceptions as kc_exception from oslo_log import log as logging from heat.common import exception from heat.common.i18n import _ from heat.engine import resource LOG = logging.getLogger(__name__) class StackUser(resource.Resource): # Subclasses create a user, and optionally keypair associated with a # resource in a stack. 
Users are created in the heat stack user domain # (in a project specific to the stack) def handle_create(self): self._create_user() def _create_user(self): if self.data().get('user_id'): # a user has been created already return # Check for stack user project, create if not yet set if not self.stack.stack_user_project_id: project_id = self.keystone().create_stack_domain_project( self.stack.id) self.stack.set_stack_user_project_id(project_id) # Create a keystone user in the stack domain project user_id = self.keystone().create_stack_domain_user( username=self.physical_resource_name(), password=getattr(self, 'password', None), project_id=self.stack.stack_user_project_id) # Store the ID in resource data, for compatibility with SignalResponder self.data_set('user_id', user_id) def _user_token(self): project_id = self.stack.stack_user_project_id if not project_id: raise ValueError(_("Can't get user token, user not yet created")) password = getattr(self, 'password', None) # FIXME(shardy): the create and getattr here could allow insane # passwords, e.g a zero length string, if these happen it almost # certainly means a bug elsewhere in heat, so add assertion to catch if password is None: raise ValueError(_("Can't get user token without password")) return self.keystone().stack_domain_user_token( user_id=self._get_user_id(), project_id=project_id, password=password) def _get_user_id(self): user_id = self.data().get('user_id') if user_id: return user_id def handle_delete(self): self._delete_user() return super(StackUser, self).handle_delete() def _delete_user(self): user_id = self._get_user_id() if user_id is None: return # the user is going away, so we want the keypair gone as well self._delete_keypair() try: self.keystone().delete_stack_domain_user( user_id=user_id, project_id=self.stack.stack_user_project_id) except kc_exception.NotFound: pass except ValueError: # FIXME(shardy): This is a legacy delete path for backwards # compatibility with resources created before the migration # to stack_user.StackUser domain users. After an appropriate # transitional period, this should be removed. LOG.warning('Reverting to legacy user delete path') try: self.keystone().delete_stack_user(user_id) except kc_exception.NotFound: pass self.data_delete('user_id') def handle_suspend(self): user_id = self._get_user_id() try: self.keystone().disable_stack_domain_user( user_id=user_id, project_id=self.stack.stack_user_project_id) except ValueError: # FIXME(shardy): This is a legacy path for backwards compatibility self.keystone().disable_stack_user(user_id=user_id) def handle_resume(self): user_id = self._get_user_id() try: self.keystone().enable_stack_domain_user( user_id=user_id, project_id=self.stack.stack_user_project_id) except ValueError: # FIXME(shardy): This is a legacy path for backwards compatibility self.keystone().enable_stack_user(user_id=user_id) def _create_keypair(self): # Subclasses may optionally call this in handle_create to create # an ec2 keypair associated with the user, the resulting keys are # stored in resource_data if self.data().get('credential_id'): return # a keypair was created already user_id = self._get_user_id() kp = self.keystone().create_stack_domain_user_keypair( user_id=user_id, project_id=self.stack.stack_user_project_id) if not kp: raise exception.Error(_("Error creating ec2 keypair for user %s") % user_id) else: try: credential_id = kp.id except AttributeError: # keystone v2 keypairs do not have an id attribute. Use the # access key instead. 
credential_id = kp.access self.data_set('credential_id', credential_id, redact=True) self.data_set('access_key', kp.access, redact=True) self.data_set('secret_key', kp.secret, redact=True) return kp def _delete_keypair(self): # Subclasses may optionally call this to delete a keypair created # via _create_keypair credential_id = self.data().get('credential_id') if not credential_id: return user_id = self._get_user_id() if user_id is None: return try: self.keystone().delete_stack_domain_user_keypair( user_id=user_id, project_id=self.stack.stack_user_project_id, credential_id=credential_id) except kc_exception.NotFound: pass except ValueError: self.keystone().delete_ec2_keypair( user_id=user_id, credential_id=credential_id) for data_key in ('access_key', 'secret_key', 'credential_id'): self.data_delete(data_key) def _register_access_key(self): """Access is limited to this resource, which created the keypair.""" def access_allowed(resource_name): return resource_name == self.name if self.access_key is not None: self.stack.register_access_allowed_handler( self.access_key, access_allowed) if self._get_user_id() is not None: self.stack.register_access_allowed_handler( self._get_user_id(), access_allowed) heat-10.0.2/heat/engine/resources/wait_condition.py0000666000175000017500000001042613343562351022333 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from oslo_log import log as logging import six from heat.common import exception from heat.common.i18n import _ from heat.engine.resources import signal_responder LOG = logging.getLogger(__name__) class BaseWaitConditionHandle(signal_responder.SignalResponder): """Base WaitConditionHandle resource. The main point of this class is to : - have no dependencies (so the instance can reference it) - create credentials to allow for signalling from the instance. 
- handle signals from the instance, validate and store result """ properties_schema = {} WAIT_STATUSES = ( STATUS_FAILURE, STATUS_SUCCESS, ) = ( 'FAILURE', 'SUCCESS', ) def handle_create(self): super(BaseWaitConditionHandle, self).handle_create() self.resource_id_set(self._get_user_id()) def _status_ok(self, status): return status in self.WAIT_STATUSES def _metadata_format_ok(self, metadata): if not isinstance(metadata, collections.Mapping): return False if set(metadata) != set(self.METADATA_KEYS): return False return self._status_ok(metadata[self.STATUS]) def normalise_signal_data(self, signal_data, latest_metadata): return signal_data def handle_signal(self, details=None): write_attempts = [] def merge_signal_metadata(signal_data, latest_rsrc_metadata): signal_data = self.normalise_signal_data(signal_data, latest_rsrc_metadata) if not self._metadata_format_ok(signal_data): LOG.info("Metadata failed validation for %s", self.name) raise ValueError(_("Metadata format invalid")) new_entry = signal_data.copy() unique_id = new_entry.pop(self.UNIQUE_ID) new_rsrc_metadata = latest_rsrc_metadata.copy() if unique_id in new_rsrc_metadata: LOG.info("Overwriting Metadata item for id %s!", unique_id) new_rsrc_metadata.update({unique_id: new_entry}) write_attempts.append(signal_data) return new_rsrc_metadata self.metadata_set(details, merge_metadata=merge_signal_metadata) data_written = write_attempts[-1] signal_reason = ('status:%s reason:%s' % (data_written[self.STATUS], data_written[self.REASON])) return signal_reason def get_status(self): """Return a list of the Status values for the handle signals.""" return [v[self.STATUS] for v in six.itervalues(self.metadata_get(refresh=True))] def get_status_reason(self, status): """Return a list of reasons associated with a particular status.""" return [v[self.REASON] for v in six.itervalues(self.metadata_get(refresh=True)) if v[self.STATUS] == status] class WaitConditionFailure(exception.Error): def __init__(self, wait_condition, handle): reasons = handle.get_status_reason(handle.STATUS_FAILURE) super(WaitConditionFailure, self).__init__(';'.join(reasons)) class WaitConditionTimeout(exception.Error): def __init__(self, wait_condition, handle): reasons = handle.get_status_reason(handle.STATUS_SUCCESS) vals = {'len': len(reasons), 'count': wait_condition.properties[wait_condition.COUNT]} if reasons: vals['reasons'] = ';'.join(reasons) message = (_('%(len)d of %(count)d received - %(reasons)s') % vals) else: message = (_('%(len)d of %(count)d received') % vals) super(WaitConditionTimeout, self).__init__(message) heat-10.0.2/heat/engine/resources/stack_resource.py0000666000175000017500000006504013343562351022337 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
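An illustrative aside on the wait-condition handle above: the merge performed by BaseWaitConditionHandle.handle_signal is easiest to follow with concrete data. The sketch below is plain Python and not part of this tree; the metadata key names 'data', 'reason', 'status' and 'id' follow the convention of heat's wait-condition handle subclasses and should be read as assumptions.

latest_rsrc_metadata = {}  # what metadata_get() last returned for the handle
signal_data = {'data': 'app ready', 'reason': 'configuration finished',
               'status': 'SUCCESS', 'id': '1'}

# handle_signal() pops the unique id and files the remaining keys under it
new_entry = signal_data.copy()
unique_id = new_entry.pop('id')
new_rsrc_metadata = latest_rsrc_metadata.copy()
new_rsrc_metadata.update({unique_id: new_entry})

# new_rsrc_metadata == {'1': {'data': 'app ready',
#                             'reason': 'configuration finished',
#                             'status': 'SUCCESS'}}
# get_status() would now report ['SUCCESS'] for this handle.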
import json import weakref from oslo_config import cfg from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import reflection import six from heat.common import exception from heat.common.i18n import _ from heat.common import identifier from heat.common import template_format from heat.engine import attributes from heat.engine import environment from heat.engine import resource from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import stk_defn from heat.engine import template from heat.objects import raw_template from heat.objects import stack as stack_object from heat.objects import stack_lock from heat.rpc import api as rpc_api LOG = logging.getLogger(__name__) class StackResource(resource.Resource): """Allows entire stack to be managed as a resource in a parent stack. An abstract Resource subclass that allows the management of an entire Stack as a resource in a parent stack. """ # Assume True as this is evaluated before the stack is created # so there is no way to know for sure without subclass-specific # template parsing. requires_deferred_auth = True def __init__(self, name, json_snippet, stack): super(StackResource, self).__init__(name, json_snippet, stack) self._nested = None self._outputs = None self.resource_info = None def validate(self): super(StackResource, self).validate() # Don't redo a non-strict validation of a nested stack during the # creation of a child stack; only validate a child stack prior to the # creation of the root stack. if self.stack.nested_depth == 0 or not self.stack.strict_validate: self.validate_nested_stack() def validate_nested_stack(self): try: name = "%s-%s" % (self.stack.name, self.name) nested_stack = self._parse_nested_stack( name, self.child_template(), self.child_params()) nested_stack.strict_validate = False nested_stack.validate() except AssertionError: raise except Exception as ex: path = "%s<%s>" % (self.name, self.template_url) raise exception.StackValidationFailed( ex, path=[self.stack.t.RESOURCES, path]) @property def template_url(self): """Template url for the stack resource. When stack resource is a TemplateResource, it's the template location. For group resources like ResourceGroup where the template is constructed dynamically, it's just a placeholder. """ return "nested_stack" def _outputs_to_attribs(self, json_snippet): outputs = json_snippet.get('Outputs') if not self.attributes and outputs: self.attributes_schema = ( attributes.Attributes.schema_from_outputs(outputs)) # Note: it can be updated too and for show return dictionary # with all available outputs self.attributes = attributes.Attributes( self.name, self.attributes_schema, self._make_resolver(weakref.ref(self))) def _needs_update(self, after, before, after_props, before_props, prev_resource, check_init_complete=True): # If the nested stack has not been created, use the default # implementation to determine if we need to replace the resource. Note # that we do *not* return the result. 
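# (That call is useful purely for its side effect: it may raise # UpdateReplace to force replacement of the resource. Its boolean # result is intentionally discarded.)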
if self.resource_id is None: super(StackResource, self)._needs_update(after, before, after_props, before_props, prev_resource, check_init_complete) else: if self.state == (self.CHECK, self.FAILED): nested_stack = self.rpc_client().show_stack( self.context, self.nested_identifier())[0] nested_stack_state = (nested_stack[rpc_api.STACK_ACTION], nested_stack[rpc_api.STACK_STATUS]) if nested_stack_state == (self.stack.CHECK, self.stack.FAILED): # The stack-check action marked the stack resource # CHECK_FAILED, so return True to allow the individual # CHECK_FAILED resources decide if they need updating. return True # The mark-unhealthy action marked the stack resource # CHECK_FAILED, so raise UpdateReplace to replace the # entire failed stack. raise resource.UpdateReplace(self) # Always issue an update to the nested stack and let the individual # resources in it decide if they need updating. return True def nested_identifier(self): if self.resource_id is None: return None return identifier.HeatIdentifier( self.context.tenant_id, self.physical_resource_name(), self.resource_id) def has_nested(self): """Return True if the resource has an existing nested stack.""" return self.resource_id is not None or self._nested is not None def nested(self): """Return a Stack object representing the nested (child) stack. If we catch NotFound exception when loading, return None. """ if self._nested is None and self.resource_id is not None: try: self._nested = parser.Stack.load(self.context, self.resource_id) except exception.NotFound: return None return self._nested def child_template(self): """Default implementation to get the child template. Resources that inherit from StackResource should override this method with specific details about the template used by them. """ raise NotImplementedError() def child_params(self): """Default implementation to get the child params. Resources that inherit from StackResource should override this method with specific details about the parameters used by them. """ raise NotImplementedError() def preview(self): """Preview a StackResource as resources within a Stack. This method overrides the original Resource.preview to return a preview of all the resources contained in this Stack. For this to be possible, the specific resources need to override both ``child_template`` and ``child_params`` with specific information to allow the stack to be parsed correctly. If any of these methods is missing, the entire StackResource will be returned as if it were a regular Resource. """ try: child_template = self.child_template() params = self.child_params() except NotImplementedError: class_name = reflection.get_class_name(self, fully_qualified=False) LOG.warning("Preview of '%s' not yet implemented", class_name) return self name = "%s-%s" % (self.stack.name, self.name) self._nested = self._parse_nested_stack(name, child_template, params) return self.nested().preview_resources() def get_nested_parameters_stack(self): """Return a stack for schema validation. This returns a stack to be introspected for building parameters schema. It can be customized by subclass to return a restricted version of what will be running. 
""" try: child_template = self.child_template() params = self.child_params() except NotImplementedError: class_name = reflection.get_class_name(self, fully_qualified=False) LOG.warning("Nested parameters of '%s' not yet " "implemented", class_name) return name = "%s-%s" % (self.stack.name, self.name) return self._parse_nested_stack(name, child_template, params) def _parse_child_template(self, child_template, child_env): parsed_child_template = child_template if isinstance(parsed_child_template, template.Template): parsed_child_template = parsed_child_template.t return template.Template(parsed_child_template, files=self.stack.t.files, env=child_env) def _parse_nested_stack(self, stack_name, child_template, child_params, timeout_mins=None, adopt_data=None): if timeout_mins is None: timeout_mins = self.stack.timeout_mins stack_user_project_id = self.stack.stack_user_project_id new_nested_depth = self._child_nested_depth() child_env = environment.get_child_environment( self.stack.env, child_params, child_resource_name=self.name, item_to_remove=self.resource_info) parsed_template = self._child_parsed_template(child_template, child_env) self._validate_nested_resources(parsed_template) # Note we disable rollback for nested stacks, since they # should be rolled back by the parent stack on failure nested = parser.Stack(self.context, stack_name, parsed_template, timeout_mins=timeout_mins, disable_rollback=True, parent_resource=self.name, owner_id=self.stack.id, user_creds_id=self.stack.user_creds_id, stack_user_project_id=stack_user_project_id, adopt_stack_data=adopt_data, nested_depth=new_nested_depth) nested.set_parent_stack(self.stack) return nested def _child_nested_depth(self): if self.stack.nested_depth >= cfg.CONF.max_nested_stack_depth: msg = _("Recursion depth exceeds %d." ) % cfg.CONF.max_nested_stack_depth raise exception.RequestLimitExceeded(message=msg) return self.stack.nested_depth + 1 def _child_parsed_template(self, child_template, child_env): parsed_template = self._parse_child_template(child_template, child_env) # Don't overwrite the attributes_schema for subclasses that # define their own attributes_schema. 
if not hasattr(type(self), 'attributes_schema'): self.attributes = None self._outputs_to_attribs(parsed_template) return parsed_template def _validate_nested_resources(self, templ): if cfg.CONF.max_resources_per_stack == -1: return total_resources = (len(templ[templ.RESOURCES]) + self.stack.total_resources(self.root_stack_id)) identity = self.nested_identifier() if identity is not None: existing = self.rpc_client().list_stack_resources(self.context, identity) # Don't double-count existing resources during an update total_resources -= len(existing) if (total_resources > cfg.CONF.max_resources_per_stack): message = exception.StackResourceLimitExceeded.msg_fmt raise exception.RequestLimitExceeded(message=message) def create_with_template(self, child_template, user_params=None, timeout_mins=None, adopt_data=None): """Create the nested stack with the given template.""" name = self.physical_resource_name() if timeout_mins is None: timeout_mins = self.stack.timeout_mins stack_user_project_id = self.stack.stack_user_project_id kwargs = self._stack_kwargs(user_params, child_template, adopt_data) adopt_data_str = None if adopt_data is not None: if 'environment' not in adopt_data: adopt_data['environment'] = kwargs['params'] if 'template' not in adopt_data: if isinstance(child_template, template.Template): adopt_data['template'] = child_template.t else: adopt_data['template'] = child_template adopt_data_str = json.dumps(adopt_data) args = {rpc_api.PARAM_TIMEOUT: timeout_mins, rpc_api.PARAM_DISABLE_ROLLBACK: True, rpc_api.PARAM_ADOPT_STACK_DATA: adopt_data_str} kwargs.update({ 'stack_name': name, 'args': args, 'environment_files': None, 'owner_id': self.stack.id, 'user_creds_id': self.stack.user_creds_id, 'stack_user_project_id': stack_user_project_id, 'nested_depth': self._child_nested_depth(), 'parent_resource_name': self.name }) with self.translate_remote_exceptions: try: result = self.rpc_client()._create_stack(self.context, **kwargs) except exception.HeatException: with excutils.save_and_reraise_exception(): if adopt_data is None: raw_template.RawTemplate.delete(self.context, kwargs['template_id']) self.resource_id_set(result['stack_id']) def child_definition(self, child_template=None, user_params=None, nested_identifier=None): if user_params is None: user_params = self.child_params() if child_template is None: child_template = self.child_template() if nested_identifier is None: nested_identifier = self.nested_identifier() child_env = environment.get_child_environment( self.stack.env, user_params, child_resource_name=self.name, item_to_remove=self.resource_info) parsed_template = self._child_parsed_template(child_template, child_env) return stk_defn.StackDefinition(self.context, parsed_template, nested_identifier, None) def _stack_kwargs(self, user_params, child_template, adopt_data=None): defn = self.child_definition(child_template, user_params) parsed_template = defn.t if adopt_data is None: template_id = parsed_template.store(self.context) return { 'template_id': template_id, 'template': None, 'params': None, 'files': None, } else: return { 'template': parsed_template.t, 'params': defn.env.user_env_as_dict(), 'files': parsed_template.files, } @excutils.exception_filter def translate_remote_exceptions(self, ex): if (isinstance(ex, exception.ActionInProgress) and self.stack.action == self.stack.ROLLBACK): # The update was interrupted and the rollback is already in # progress, so just ignore the error and wait for the rollback to # finish return True class_name = reflection.get_class_name(ex, 
fully_qualified=False) if not class_name.endswith('_Remote'): return False full_message = six.text_type(ex) if full_message.find('\n') > -1: message, msg_trace = full_message.split('\n', 1) else: message = full_message raise exception.ResourceFailure(message, self, action=self.action) def check_create_complete(self, cookie=None): return self._check_status_complete(self.CREATE) def _check_status_complete(self, expected_action, cookie=None): try: data = stack_object.Stack.get_status(self.context, self.resource_id) except exception.NotFound: if expected_action == self.DELETE: return True # It's possible the engine handling the create hasn't persisted # the stack to the DB when we first start polling for state return False action, status, status_reason, updated_time = data if action != expected_action: return False # Has the action really started? # # The rpc call to update does not guarantee that the stack will be # placed into IN_PROGRESS by the time it returns (it runs stack.update # in a thread) so you could also have a situation where we get into # this method and the update hasn't even started. # # So we are using a mixture of state (action+status) and updated_at # to see if the action has actually progressed. # - very fast updates (like something with one RandomString) we will # probably miss the state change, but we should catch the updated_at. # - very slow updates we won't see the updated_at for quite a while, # but should see the state change. if cookie is not None: prev_state = cookie['previous']['state'] prev_updated_at = cookie['previous']['updated_at'] if (prev_updated_at == updated_time and prev_state == (action, status)): return False if status == self.IN_PROGRESS: return False elif status == self.COMPLETE: ret = stack_lock.StackLock.get_engine_id( self.context, self.resource_id) is None if ret: # Reset nested, to indicate we changed status self._nested = None return ret elif status == self.FAILED: raise exception.ResourceFailure(status_reason, self, action=action) else: raise exception.ResourceUnknownStatus( resource_status=status, status_reason=status_reason, result=_('Stack unknown status')) def check_adopt_complete(self, cookie=None): return self._check_status_complete(self.ADOPT) def _try_rollback(self): stack_identity = self.nested_identifier() if stack_identity is None: return False try: self.rpc_client().stack_cancel_update( self.context, dict(stack_identity), cancel_with_rollback=True) except exception.NotSupported: return False try: data = stack_object.Stack.get_status(self.context, self.resource_id) except exception.NotFound: return False action, status, status_reason, updated_time = data # If nested stack is still in progress, it should eventually roll # itself back due to stack_cancel_update(), so we just need to wait # for that to complete return status == self.stack.IN_PROGRESS def update_with_template(self, child_template, user_params=None, timeout_mins=None): """Update the nested stack with the new template.""" if self.id is None: self.store() if self.stack.action == self.stack.ROLLBACK: if self._try_rollback(): LOG.info('Triggered nested stack %s rollback', self.physical_resource_name()) return {'target_action': self.stack.ROLLBACK} if self.resource_id is None: # if the create failed for some reason and the nested # stack was not created, we need to create an empty stack # here so that the update will work. 
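# (the placeholder stack created below contains nothing but a # heat_template_version line; the real template is then applied by # the update that follows)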
def _check_for_completion(): while not self.check_create_complete(): yield empty_temp = template_format.parse( "heat_template_version: '2013-05-23'") self.create_with_template(empty_temp, {}) checker = scheduler.TaskRunner(_check_for_completion) checker(timeout=self.stack.timeout_secs()) if timeout_mins is None: timeout_mins = self.stack.timeout_mins try: status_data = stack_object.Stack.get_status(self.context, self.resource_id) except exception.NotFound: raise resource.UpdateReplace(self) action, status, status_reason, updated_time = status_data kwargs = self._stack_kwargs(user_params, child_template) cookie = {'previous': { 'updated_at': updated_time, 'state': (action, status)}} kwargs.update({ 'stack_identity': dict(self.nested_identifier()), 'args': {rpc_api.PARAM_TIMEOUT: timeout_mins, rpc_api.PARAM_CONVERGE: self.converge} }) with self.translate_remote_exceptions: try: self.rpc_client()._update_stack(self.context, **kwargs) except exception.HeatException: with excutils.save_and_reraise_exception(): raw_template.RawTemplate.delete(self.context, kwargs['template_id']) return cookie def check_update_complete(self, cookie=None): if cookie is not None and 'target_action' in cookie: target_action = cookie['target_action'] cookie = None else: target_action = self.stack.UPDATE return self._check_status_complete(target_action, cookie=cookie) def handle_update_cancel(self, cookie): stack_identity = self.nested_identifier() if stack_identity is not None: try: self.rpc_client().stack_cancel_update( self.context, dict(stack_identity), cancel_with_rollback=False) except exception.NotSupported: LOG.debug('Nested stack %s not in cancellable state', stack_identity.stack_name) def handle_create_cancel(self, cookie): return self.handle_update_cancel(cookie) def delete_nested(self): """Delete the nested stack.""" stack_identity = self.nested_identifier() if stack_identity is None: return with self.rpc_client().ignore_error_by_name('EntityNotFound'): if self.abandon_in_progress: self.rpc_client().abandon_stack(self.context, stack_identity) else: self.rpc_client().delete_stack(self.context, stack_identity, cast=False) def handle_delete(self): return self.delete_nested() def check_delete_complete(self, cookie=None): return self._check_status_complete(self.DELETE) def handle_suspend(self): stack_identity = self.nested_identifier() if stack_identity is None: raise exception.Error(_('Cannot suspend %s, stack not created') % self.name) self.rpc_client().stack_suspend(self.context, dict(stack_identity)) def check_suspend_complete(self, cookie=None): return self._check_status_complete(self.SUSPEND) def handle_resume(self): stack_identity = self.nested_identifier() if stack_identity is None: raise exception.Error(_('Cannot resume %s, stack not created') % self.name) self.rpc_client().stack_resume(self.context, dict(stack_identity)) def check_resume_complete(self, cookie=None): return self._check_status_complete(self.RESUME) def handle_check(self): stack_identity = self.nested_identifier() if stack_identity is None: raise exception.Error(_('Cannot check %s, stack not created') % self.name) self.rpc_client().stack_check(self.context, dict(stack_identity)) def check_check_complete(self, cookie=None): return self._check_status_complete(self.CHECK) def prepare_abandon(self): self.abandon_in_progress = True nested_stack = self.nested() if nested_stack: return nested_stack.prepare_abandon() return {} def get_output(self, op): """Return the specified Output value from the nested stack. 
If the output key does not exist, raise a NotFound exception. """ if (self._outputs is None or (op in self._outputs and rpc_api.OUTPUT_ERROR not in self._outputs[op] and self._outputs[op].get(rpc_api.OUTPUT_VALUE) is None)): stack_identity = self.nested_identifier() if stack_identity is None: return stack = self.rpc_client().show_stack(self.context, dict(stack_identity)) if not stack: return outputs = stack[0].get(rpc_api.STACK_OUTPUTS) or {} self._outputs = {o[rpc_api.OUTPUT_KEY]: o for o in outputs} if op not in self._outputs: raise exception.NotFound(_('Specified output key %s not ' 'found.') % op) output_data = self._outputs[op] if rpc_api.OUTPUT_ERROR in output_data: raise exception.TemplateOutputError( resource=self.name, attribute=op, message=output_data[rpc_api.OUTPUT_ERROR]) return output_data[rpc_api.OUTPUT_VALUE] def _resolve_attribute(self, name): try: return self.get_output(name) except exception.NotFound: raise exception.InvalidTemplateAttribute(resource=self.name, key=name) heat-10.0.2/heat/engine/resources/volume_base.py0000666000175000017500000001707513343562340021627 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from heat.common import exception from heat.common.i18n import _ from heat.engine.clients import progress from heat.engine import resource from heat.engine import rsrc_defn class BaseVolume(resource.Resource): """Base Volume Manager.""" default_client_name = 'cinder' def _create_arguments(self): return {} def handle_create(self): backup_id = self.properties.get(self.BACKUP_ID) cinder = self.client() if backup_id is not None: vol_id = cinder.restores.restore(backup_id).volume_id vol = cinder.volumes.get(vol_id) kwargs = self._fetch_name_and_description() cinder.volumes.update(vol_id, **kwargs) else: kwargs = self._create_arguments() kwargs.update(self._fetch_name_and_description()) vol = cinder.volumes.create(**kwargs) self.resource_id_set(vol.id) return vol.id def check_create_complete(self, vol_id): vol = self.client().volumes.get(vol_id) if vol.status == 'available': return True if vol.status in self._volume_creating_status: return False if vol.status == 'error': raise exception.ResourceInError( resource_status=vol.status) else: raise exception.ResourceUnknownStatus( resource_status=vol.status, result=_('Volume create failed')) def _name(self): return self.physical_resource_name() def _description(self): return self.physical_resource_name() def _fetch_name_and_description(self, name=None, description=None): return {'name': name or self._name(), 'description': description or self._description()} def handle_check(self): vol = self.client().volumes.get(self.resource_id) statuses = ['available', 'in-use'] checks = [ {'attr': 'status', 'expected': statuses, 'current': vol.status}, ] self._verify_check_conditions(checks) def handle_snapshot_delete(self, state): backup = state not in ((self.CREATE, self.FAILED), (self.UPDATE, self.FAILED)) prg = progress.VolumeDeleteProgress() prg.backup['called'] = not backup 
prg.backup['complete'] = not backup return prg def handle_delete(self): if self.resource_id is None: return progress.VolumeDeleteProgress(True) prg = progress.VolumeDeleteProgress() prg.backup['called'] = True prg.backup['complete'] = True return prg def _create_backup(self): backup = self.client().backups.create(self.resource_id) return backup.id def _check_create_backup_complete(self, prg): backup = self.client().backups.get(prg.backup_id) if backup.status == 'creating': return False if backup.status == 'available': return True else: raise exception.ResourceUnknownStatus( resource_status=backup.status, result=_('Volume backup failed')) def _delete_volume(self): """Call the volume delete API. Returns False if further checking of volume status is required, True otherwise. """ try: cinder = self.client() vol = cinder.volumes.get(self.resource_id) if vol.status == 'in-use': raise exception.Error(_('Volume in use')) # if the volume is already in deleting status, # just wait for the deletion to complete if vol.status != 'deleting': cinder.volumes.delete(self.resource_id) return False except Exception as ex: self.client_plugin().ignore_not_found(ex) return True def check_delete_complete(self, prg): if not prg.backup['called']: prg.backup_id = self._create_backup() prg.backup['called'] = True return False if not prg.backup['complete']: prg.backup['complete'] = self._check_create_backup_complete(prg) return False if not prg.delete['called']: prg.delete['complete'] = self._delete_volume() prg.delete['called'] = True return False if not prg.delete['complete']: try: vol = self.client().volumes.get(self.resource_id) except Exception as ex: self.client_plugin().ignore_not_found(ex) prg.delete['complete'] = True return True if vol.status.lower() == 'error_deleting': raise exception.ResourceInError(status_reason='delete', resource_status=vol.status) else: return False return True @classmethod def validate_deletion_policy(cls, policy): res = super(BaseVolume, cls).validate_deletion_policy(policy) if res: return res if (policy == rsrc_defn.ResourceDefinition.SNAPSHOT and not cfg.CONF.volumes.backups_enabled): msg = _('"%s" deletion policy not supported - ' 'volume backup service is not enabled.') % policy raise exception.StackValidationFailed(message=msg) class BaseVolumeAttachment(resource.Resource): """Base Volume Attachment Manager.""" default_client_name = 'cinder' def handle_create(self): server_id = self.properties[self.INSTANCE_ID] volume_id = self.properties[self.VOLUME_ID] dev = (self.properties[self.DEVICE] if self.properties[self.DEVICE] else None) attach_id = self.client_plugin('nova').attach_volume( server_id, volume_id, dev) self.resource_id_set(attach_id) return volume_id def check_create_complete(self, volume_id): return self.client_plugin().check_attach_volume_complete(volume_id) def handle_delete(self): prg = None if self.resource_id: server_id = self.properties[self.INSTANCE_ID] vol_id = self.properties[self.VOLUME_ID] self.client_plugin('nova').detach_volume(server_id, self.resource_id) prg = progress.VolumeDetachProgress( server_id, vol_id, self.resource_id) prg.called = True return prg def check_delete_complete(self, prg): if prg is None: return True if not prg.cinder_complete: prg.cinder_complete = self.client_plugin( ).check_detach_volume_complete(prg.vol_id) return False if not prg.nova_complete: prg.nova_complete = self.client_plugin( 'nova').check_detach_volume_complete(prg.srv_id, prg.attach_id) return prg.nova_complete return True 
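A note on the pattern above: the volume resources follow heat's generic long-running-action contract, where handle_<action>() starts the operation and returns an opaque cookie (here a progress object), and check_<action>_complete(cookie) is then polled until it returns True or raises. A condensed sketch of that driving loop, for illustration only (the real engine schedules these checks through scheduler.TaskRunner rather than sleeping):

import time

def drive_action(resource, action, interval=1.0):
    # e.g. action='delete' calls handle_delete() once, then polls
    # check_delete_complete(cookie) with the cookie it returned
    cookie = getattr(resource, 'handle_%s' % action)()
    while not getattr(resource, 'check_%s_complete' % action)(cookie):
        time.sleep(interval)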
heat-10.0.2/heat/engine/resources/scheduler_hints.py0000666000175000017500000000426013343562340022501 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg cfg.CONF.import_opt('stack_scheduler_hints', 'heat.common.config') class SchedulerHintsMixin(object): """Utility class to encapsulate Scheduler Hint related logic.""" HEAT_ROOT_STACK_ID = 'heat_root_stack_id' HEAT_STACK_ID = 'heat_stack_id' HEAT_STACK_NAME = 'heat_stack_name' HEAT_PATH_IN_STACK = 'heat_path_in_stack' HEAT_RESOURCE_NAME = 'heat_resource_name' HEAT_RESOURCE_UUID = 'heat_resource_uuid' @staticmethod def _path_in_stack(stack): # Note: scheduler_hints can only be of DictOfListOfStrings. # Convert the list of tuples to list of delimited strings. path = [] for parent_res_name, stack_name in stack.path_in_stack(): if parent_res_name is not None: path.append(','.join([parent_res_name, stack_name])) else: path.append(stack_name) return path def _scheduler_hints(self, scheduler_hints): """Augment scheduler hints with supplemental content.""" if cfg.CONF.stack_scheduler_hints: if scheduler_hints is None: scheduler_hints = {} stack = self.stack scheduler_hints[self.HEAT_ROOT_STACK_ID] = stack.root_stack_id() scheduler_hints[self.HEAT_STACK_ID] = stack.id scheduler_hints[self.HEAT_STACK_NAME] = stack.name scheduler_hints[ self.HEAT_PATH_IN_STACK] = self._path_in_stack(stack) scheduler_hints[self.HEAT_RESOURCE_NAME] = self.name scheduler_hints[self.HEAT_RESOURCE_UUID] = self.uuid return scheduler_hints heat-10.0.2/heat/engine/resources/alarm_base.py0000666000175000017500000002370013343562337021412 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
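To make SchedulerHintsMixin above concrete: with the stack_scheduler_hints option enabled, the hints merged in by _scheduler_hints() come out along these lines. All values here are invented for illustration.

# For a resource 'server' in a stack 'web', itself created as resource
# 'app' of a root stack 'prod':
hints = {
    'heat_root_stack_id': '8d4f...',            # stack.root_stack_id()
    'heat_stack_id': '2b9c...',                 # stack.id
    'heat_stack_name': 'web',
    'heat_path_in_stack': ['prod', 'app,web'],  # see _path_in_stack()
    'heat_resource_name': 'server',
    'heat_resource_uuid': 'f31a...',
}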
from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support from six.moves.urllib import parse as urlparse COMMON_PROPERTIES = ( ALARM_ACTIONS, OK_ACTIONS, INSUFFICIENT_DATA_ACTIONS, ALARM_QUEUES, OK_QUEUES, INSUFFICIENT_DATA_QUEUES, REPEAT_ACTIONS, DESCRIPTION, ENABLED, TIME_CONSTRAINTS, SEVERITY, ) = ( 'alarm_actions', 'ok_actions', 'insufficient_data_actions', 'alarm_queues', 'ok_queues', 'insufficient_data_queues', 'repeat_actions', 'description', 'enabled', 'time_constraints', 'severity', ) INTERNAL_PROPERTIES = (ALARM_QUEUES, OK_QUEUES, INSUFFICIENT_DATA_QUEUES) _TIME_CONSTRAINT_KEYS = ( NAME, START, DURATION, TIMEZONE, TIME_CONSTRAINT_DESCRIPTION, ) = ( 'name', 'start', 'duration', 'timezone', 'description', ) common_properties_schema = { DESCRIPTION: properties.Schema( properties.Schema.STRING, _('Description for the alarm.'), update_allowed=True ), ENABLED: properties.Schema( properties.Schema.BOOLEAN, _('True if alarm evaluation/actioning is enabled.'), default='true', update_allowed=True ), ALARM_ACTIONS: properties.Schema( properties.Schema.LIST, _('A list of URLs (webhooks) to invoke when state transitions to ' 'alarm.'), update_allowed=True ), OK_ACTIONS: properties.Schema( properties.Schema.LIST, _('A list of URLs (webhooks) to invoke when state transitions to ' 'ok.'), update_allowed=True ), INSUFFICIENT_DATA_ACTIONS: properties.Schema( properties.Schema.LIST, _('A list of URLs (webhooks) to invoke when state transitions to ' 'insufficient-data.'), update_allowed=True ), ALARM_QUEUES: properties.Schema( properties.Schema.LIST, _('A list of Zaqar queues to post to when state transitions to ' 'alarm.'), support_status=support.SupportStatus(version='8.0.0'), schema=properties.Schema( properties.Schema.STRING, constraints=[constraints.CustomConstraint('zaqar.queue')] ), default=[], update_allowed=True ), OK_QUEUES: properties.Schema( properties.Schema.LIST, _('A list of Zaqar queues to post to when state transitions to ' 'ok.'), support_status=support.SupportStatus(version='8.0.0'), schema=properties.Schema( properties.Schema.STRING, constraints=[constraints.CustomConstraint('zaqar.queue')] ), default=[], update_allowed=True ), INSUFFICIENT_DATA_QUEUES: properties.Schema( properties.Schema.LIST, _('A list of Zaqar queues to post to when state transitions to ' 'insufficient-data.'), support_status=support.SupportStatus(version='8.0.0'), schema=properties.Schema( properties.Schema.STRING, constraints=[constraints.CustomConstraint('zaqar.queue')] ), default=[], update_allowed=True ), REPEAT_ACTIONS: properties.Schema( properties.Schema.BOOLEAN, _("False to trigger actions when the threshold is reached AND " "the alarm's state has changed. By default, actions are called " "each time the threshold is reached."), default='true', update_allowed=True ), SEVERITY: properties.Schema( properties.Schema.STRING, _('Severity of the alarm.'), default='low', constraints=[ constraints.AllowedValues(['low', 'moderate', 'critical']) ], update_allowed=True, support_status=support.SupportStatus(version='5.0.0'), ), TIME_CONSTRAINTS: properties.Schema( properties.Schema.LIST, _('Describe time constraints for the alarm. ' 'Only evaluate the alarm if the time at evaluation ' 'is within this time constraint. Start point(s) of ' 'the constraint are specified with a cron expression, ' 'whereas its duration is given in seconds.' 
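# (e.g. start '0 23 * * *' with duration 600 would evaluate the alarm # only between 23:00 and 23:10 each day; illustrative values)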
), schema=properties.Schema( properties.Schema.MAP, schema={ NAME: properties.Schema( properties.Schema.STRING, _("Name for the time constraint."), required=True ), START: properties.Schema( properties.Schema.STRING, _("Start time for the time constraint. " "A CRON expression property."), constraints=[ constraints.CustomConstraint( 'cron_expression') ], required=True ), TIME_CONSTRAINT_DESCRIPTION: properties.Schema( properties.Schema.STRING, _("Description for the time constraint."), ), DURATION: properties.Schema( properties.Schema.INTEGER, _("Duration for the time constraint."), constraints=[ constraints.Range(min=0) ], required=True ), TIMEZONE: properties.Schema( properties.Schema.STRING, _("Timezone for the time constraint " "(eg. 'Asia/Taipei', 'Europe/Amsterdam')."), constraints=[ constraints.CustomConstraint('timezone') ], ) } ), support_status=support.SupportStatus(version='5.0.0'), default=[], ) } NOVA_METERS = ['instance', 'memory', 'memory.usage', 'memory.resident', 'cpu', 'cpu_util', 'vcpus', 'disk.read.requests', 'disk.read.requests.rate', 'disk.write.requests', 'disk.write.requests.rate', 'disk.read.bytes', 'disk.read.bytes.rate', 'disk.write.bytes', 'disk.write.bytes.rate', 'disk.device.read.requests', 'disk.device.read.requests.rate', 'disk.device.write.requests', 'disk.device.write.requests.rate', 'disk.device.read.bytes', 'disk.device.read.bytes.rate', 'disk.device.write.bytes', 'disk.device.write.bytes.rate', 'disk.root.size', 'disk.ephemeral.size', 'network.incoming.bytes', 'network.incoming.bytes.rate', 'network.outgoing.bytes', 'network.outgoing.bytes.rate', 'network.incoming.packets', 'network.incoming.packets.rate', 'network.outgoing.packets', 'network.outgoing.packets.rate'] class BaseAlarm(resource.Resource): """Base Alarm Manager.""" default_client_name = 'aodh' entity = 'alarm' alarm_type = 'threshold' QUERY_FACTOR_FIELDS = ( QF_FIELD, QF_OP, QF_VALUE, QF_TYPE, ) = ( 'field', 'op', 'value', 'type', ) QF_OP_VALS = constraints.AllowedValues(['le', 'ge', 'eq', 'lt', 'gt', 'ne']) QF_TYPE_VALS = constraints.AllowedValues(['integer', 'float', 'string', 'boolean', 'datetime']) def actions_to_urls(self, props): kwargs = dict(props) def get_urls(action_type, queue_type): for act in kwargs.get(action_type) or []: # if the action is a resource name # we ask the destination resource for an alarm url. # the template writer should really do this in the # template if possible with: # {Fn::GetAtt: ['MyAction', 'AlarmUrl']} if act in self.stack: yield self.stack[act].FnGetAtt('AlarmUrl') elif act: yield act for queue in kwargs.pop(queue_type, []): query = {'queue_name': queue} yield 'trust+zaqar://?%s' % urlparse.urlencode(query) action_props = {arg_types[0]: list(get_urls(*arg_types)) for arg_types in ((ALARM_ACTIONS, ALARM_QUEUES), (OK_ACTIONS, OK_QUEUES), (INSUFFICIENT_DATA_ACTIONS, INSUFFICIENT_DATA_QUEUES))} kwargs.update(action_props) return kwargs def _reformat_properties(self, props): rule = {} # Note that self.PROPERTIES includes only properties specific to the # child class; BaseAlarm properties are not included. 
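# (e.g. for the default alarm_type 'threshold', the child-specific # properties end up regrouped under a single 'threshold_rule' key)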
for name in self.PROPERTIES: if name in props: rule[name] = props.pop(name) if rule: props['%s_rule' % self.alarm_type] = rule return props def handle_suspend(self): if self.resource_id is not None: alarm_update = {'enabled': False} self.client().alarm.update(self.resource_id, alarm_update) def handle_resume(self): if self.resource_id is not None: alarm_update = {'enabled': True} self.client().alarm.update(self.resource_id, alarm_update) def handle_check(self): self.client().alarm.get(self.resource_id) heat-10.0.2/heat/engine/hot/0000775000175000017500000000000013343562672015530 5ustar zuulzuul00000000000000heat-10.0.2/heat/engine/hot/template.py0000666000175000017500000006142213343562351017716 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools import six from heat.common import exception from heat.common.i18n import _ from heat.engine.cfn import functions as cfn_funcs from heat.engine.cfn import template as cfn_template from heat.engine import function from heat.engine.hot import functions as hot_funcs from heat.engine.hot import parameters from heat.engine import rsrc_defn from heat.engine import template_common class HOTemplate20130523(template_common.CommonTemplate): """A Heat Orchestration Template format stack template.""" SECTIONS = ( VERSION, DESCRIPTION, PARAMETER_GROUPS, PARAMETERS, RESOURCES, OUTPUTS, MAPPINGS, ) = ( 'heat_template_version', 'description', 'parameter_groups', 'parameters', 'resources', 'outputs', '__undefined__', ) OUTPUT_KEYS = ( OUTPUT_DESCRIPTION, OUTPUT_VALUE, ) = ( 'description', 'value', ) SECTIONS_NO_DIRECT_ACCESS = set([PARAMETERS, VERSION]) _CFN_TO_HOT_SECTIONS = {cfn_template.CfnTemplate.VERSION: VERSION, cfn_template.CfnTemplate.DESCRIPTION: DESCRIPTION, cfn_template.CfnTemplate.PARAMETERS: PARAMETERS, cfn_template.CfnTemplate.MAPPINGS: MAPPINGS, cfn_template.CfnTemplate.RESOURCES: RESOURCES, cfn_template.CfnTemplate.OUTPUTS: OUTPUTS} _RESOURCE_KEYS = ( RES_TYPE, RES_PROPERTIES, RES_METADATA, RES_DEPENDS_ON, RES_DELETION_POLICY, RES_UPDATE_POLICY, RES_DESCRIPTION, ) = ( 'type', 'properties', 'metadata', 'depends_on', 'deletion_policy', 'update_policy', 'description', ) _RESOURCE_HOT_TO_CFN_ATTRS = { RES_TYPE: cfn_template.CfnTemplate.RES_TYPE, RES_PROPERTIES: cfn_template.CfnTemplate.RES_PROPERTIES, RES_METADATA: cfn_template.CfnTemplate.RES_METADATA, RES_DEPENDS_ON: cfn_template.CfnTemplate.RES_DEPENDS_ON, RES_DELETION_POLICY: cfn_template.CfnTemplate.RES_DELETION_POLICY, RES_UPDATE_POLICY: cfn_template.CfnTemplate.RES_UPDATE_POLICY, RES_DESCRIPTION: cfn_template.CfnTemplate.RES_DESCRIPTION} _HOT_TO_CFN_ATTRS = _RESOURCE_HOT_TO_CFN_ATTRS _HOT_TO_CFN_ATTRS.update( {OUTPUT_VALUE: cfn_template.CfnTemplate.OUTPUT_VALUE}) functions = { 'Fn::GetAZs': cfn_funcs.GetAZs, 'get_param': hot_funcs.GetParam, 'get_resource': hot_funcs.GetResource, 'Ref': cfn_funcs.Ref, 'get_attr': hot_funcs.GetAttThenSelect, 'Fn::Select': cfn_funcs.Select, 'Fn::Join': cfn_funcs.Join, 'list_join': hot_funcs.Join, 'Fn::Split': cfn_funcs.Split, 'str_replace': 
hot_funcs.Replace, 'Fn::Replace': cfn_funcs.Replace, 'Fn::Base64': cfn_funcs.Base64, 'Fn::MemberListToMap': cfn_funcs.MemberListToMap, 'resource_facade': hot_funcs.ResourceFacade, 'Fn::ResourceFacade': cfn_funcs.ResourceFacade, 'get_file': hot_funcs.GetFile, } deletion_policies = { 'Delete': rsrc_defn.ResourceDefinition.DELETE, 'Retain': rsrc_defn.ResourceDefinition.RETAIN, 'Snapshot': rsrc_defn.ResourceDefinition.SNAPSHOT } param_schema_class = parameters.HOTParamSchema def __getitem__(self, section): """"Get the relevant section in the template.""" # first translate from CFN into HOT terminology if necessary if section not in self.SECTIONS: section = HOTemplate20130523._translate( section, self._CFN_TO_HOT_SECTIONS, _('"%s" is not a valid template section')) if section not in self.SECTIONS: raise KeyError(_('"%s" is not a valid template section') % section) if section in self.SECTIONS_NO_DIRECT_ACCESS: raise KeyError( _('Section %s can not be accessed directly.') % section) if section == self.MAPPINGS: return {} if section == self.DESCRIPTION: default = 'No description' else: default = {} # if a section is None (empty yaml section) return {} # to be consistent with an empty json section. the_section = self.t.get(section) or default # In some cases (e.g. parameters), also translate each entry of # a section into CFN format (case, naming, etc) so the rest of the # engine can cope with it. # This is a shortcut for now and might be changed in the future. if section == self.RESOURCES: return self._translate_resources(the_section) if section == self.OUTPUTS: self.validate_section(self.OUTPUTS, self.OUTPUT_VALUE, the_section, self.OUTPUT_KEYS) return the_section @staticmethod def _translate(value, mapping, err_msg=None): try: return mapping[value] except KeyError as ke: if err_msg: raise KeyError(err_msg % value) else: raise ke def validate_section(self, section, sub_section, data, allowed_keys): obj_name = section[:-1] err_msg = _('"%%s" is not a valid keyword inside a %s ' 'definition') % obj_name args = {'object_name': obj_name, 'sub_section': sub_section} message = _('Each %(object_name)s must contain a ' '%(sub_section)s key.') % args for name, attrs in sorted(data.items()): if not attrs: raise exception.StackValidationFailed(message=message) try: for attr, attr_value in six.iteritems(attrs): if attr not in allowed_keys: raise KeyError(err_msg % attr) if sub_section not in attrs: raise exception.StackValidationFailed(message=message) except AttributeError: message = _('"%(section)s" must contain a map of ' '%(obj_name)s maps. 
Found a [%(_type)s] ' 'instead') % {'section': section, '_type': type(attrs), 'obj_name': obj_name} raise exception.StackValidationFailed(message=message) except KeyError as e: # an invalid keyword was found raise exception.StackValidationFailed(message=e.args[0]) def _translate_section(self, section, sub_section, data, mapping): self.validate_section(section, sub_section, data, mapping) cfn_objects = {} for name, attrs in sorted(data.items()): cfn_object = {} for attr, attr_value in six.iteritems(attrs): cfn_attr = mapping[attr] if cfn_attr is not None: cfn_object[cfn_attr] = attr_value cfn_objects[name] = cfn_object return cfn_objects def _translate_resources(self, resources): """Get the resources of the template translated into CFN format.""" return self._translate_section(self.RESOURCES, self.RES_TYPE, resources, self._RESOURCE_HOT_TO_CFN_ATTRS) def get_section_name(self, section): cfn_to_hot_attrs = dict( zip(six.itervalues(self._HOT_TO_CFN_ATTRS), six.iterkeys(self._HOT_TO_CFN_ATTRS))) return cfn_to_hot_attrs.get(section, section) def param_schemata(self, param_defaults=None): parameter_section = self.t.get(self.PARAMETERS) or {} pdefaults = param_defaults or {} for name, schema in six.iteritems(parameter_section): if name in pdefaults: parameter_section[name]['default'] = pdefaults[name] params = six.iteritems(parameter_section) return dict((name, self.param_schema_class.from_dict(name, schema)) for name, schema in params) def parameters(self, stack_identifier, user_params, param_defaults=None): return parameters.HOTParameters(stack_identifier, self, user_params=user_params, param_defaults=param_defaults) def resource_definitions(self, stack): resources = self.t.get(self.RESOURCES) or {} conditions = self.conditions(stack) valid_keys = frozenset(self._RESOURCE_KEYS) def defns(): for name, snippet in six.iteritems(resources): try: invalid_keys = set(snippet) - valid_keys if invalid_keys: raise ValueError(_('Invalid keyword(s) inside a ' 'resource definition: ' '%s') % ', '.join(invalid_keys)) defn_data = dict(self._rsrc_defn_args(stack, name, snippet)) except (TypeError, ValueError, KeyError) as ex: msg = six.text_type(ex) raise exception.StackValidationFailed(message=msg) defn = rsrc_defn.ResourceDefinition(name, **defn_data) cond_name = defn.condition() if cond_name is not None: try: enabled = conditions.is_enabled(cond_name) except ValueError as exc: path = [self.RESOURCES, name, self.RES_CONDITION] message = six.text_type(exc) raise exception.StackValidationFailed(path=path, message=message) if not enabled: continue yield name, defn return dict(defns()) def add_resource(self, definition, name=None): if name is None: name = definition.name if self.t.get(self.RESOURCES) is None: self.t[self.RESOURCES] = {} rendered = definition.render_hot() dep_list = rendered.get(self.RES_DEPENDS_ON) if dep_list: rendered[self.RES_DEPENDS_ON] = [d for d in dep_list if d in self.t[self.RESOURCES]] self.t[self.RESOURCES][name] = rendered def add_output(self, definition): if self.t.get(self.OUTPUTS) is None: self.t[self.OUTPUTS] = {} self.t[self.OUTPUTS][definition.name] = definition.render_hot() class HOTemplate20141016(HOTemplate20130523): functions = { 'get_attr': hot_funcs.GetAtt, 'get_file': hot_funcs.GetFile, 'get_param': hot_funcs.GetParam, 'get_resource': hot_funcs.GetResource, 'list_join': hot_funcs.Join, 'resource_facade': hot_funcs.ResourceFacade, 'str_replace': hot_funcs.Replace, 'Fn::Select': cfn_funcs.Select, # functions removed from 2014-10-16 'Fn::GetAZs': hot_funcs.Removed, 
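# (each function mapped to hot_funcs.Removed causes templates of this # version that still use the old CFN-style name to fail validation # with an error saying the function is no longer supported)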
'Fn::Join': hot_funcs.Removed, 'Fn::Split': hot_funcs.Removed, 'Fn::Replace': hot_funcs.Removed, 'Fn::Base64': hot_funcs.Removed, 'Fn::MemberListToMap': hot_funcs.Removed, 'Fn::ResourceFacade': hot_funcs.Removed, 'Ref': hot_funcs.Removed, } class HOTemplate20150430(HOTemplate20141016): functions = { 'get_attr': hot_funcs.GetAtt, 'get_file': hot_funcs.GetFile, 'get_param': hot_funcs.GetParam, 'get_resource': hot_funcs.GetResource, 'list_join': hot_funcs.Join, 'repeat': hot_funcs.Repeat, 'resource_facade': hot_funcs.ResourceFacade, 'str_replace': hot_funcs.Replace, 'Fn::Select': cfn_funcs.Select, # functions added in 2015-04-30 'digest': hot_funcs.Digest, # functions removed from 2014-10-16 'Fn::GetAZs': hot_funcs.Removed, 'Fn::Join': hot_funcs.Removed, 'Fn::Split': hot_funcs.Removed, 'Fn::Replace': hot_funcs.Removed, 'Fn::Base64': hot_funcs.Removed, 'Fn::MemberListToMap': hot_funcs.Removed, 'Fn::ResourceFacade': hot_funcs.Removed, 'Ref': hot_funcs.Removed, } class HOTemplate20151015(HOTemplate20150430): functions = { 'get_attr': hot_funcs.GetAttAllAttributes, 'get_file': hot_funcs.GetFile, 'get_param': hot_funcs.GetParam, 'get_resource': hot_funcs.GetResource, 'list_join': hot_funcs.JoinMultiple, 'repeat': hot_funcs.Repeat, 'resource_facade': hot_funcs.ResourceFacade, 'str_replace': hot_funcs.ReplaceJson, # functions added in 2015-04-30 'digest': hot_funcs.Digest, # functions added in 2015-10-15 'str_split': hot_funcs.StrSplit, # functions removed from 2015-10-15 'Fn::Select': hot_funcs.Removed, # functions removed from 2014-10-16 'Fn::GetAZs': hot_funcs.Removed, 'Fn::Join': hot_funcs.Removed, 'Fn::Split': hot_funcs.Removed, 'Fn::Replace': hot_funcs.Removed, 'Fn::Base64': hot_funcs.Removed, 'Fn::MemberListToMap': hot_funcs.Removed, 'Fn::ResourceFacade': hot_funcs.Removed, 'Ref': hot_funcs.Removed, } class HOTemplate20160408(HOTemplate20151015): functions = { 'get_attr': hot_funcs.GetAttAllAttributes, 'get_file': hot_funcs.GetFile, 'get_param': hot_funcs.GetParam, 'get_resource': hot_funcs.GetResource, 'list_join': hot_funcs.JoinMultiple, 'repeat': hot_funcs.Repeat, 'resource_facade': hot_funcs.ResourceFacade, 'str_replace': hot_funcs.ReplaceJson, # functions added in 2015-04-30 'digest': hot_funcs.Digest, # functions added in 2015-10-15 'str_split': hot_funcs.StrSplit, # functions added in 2016-04-08 'map_merge': hot_funcs.MapMerge, # functions removed from 2015-10-15 'Fn::Select': hot_funcs.Removed, # functions removed from 2014-10-16 'Fn::GetAZs': hot_funcs.Removed, 'Fn::Join': hot_funcs.Removed, 'Fn::Split': hot_funcs.Removed, 'Fn::Replace': hot_funcs.Removed, 'Fn::Base64': hot_funcs.Removed, 'Fn::MemberListToMap': hot_funcs.Removed, 'Fn::ResourceFacade': hot_funcs.Removed, 'Ref': hot_funcs.Removed, } class HOTemplate20161014(HOTemplate20160408): CONDITIONS = 'conditions' SECTIONS = HOTemplate20160408.SECTIONS + (CONDITIONS,) SECTIONS_NO_DIRECT_ACCESS = (HOTemplate20160408.SECTIONS_NO_DIRECT_ACCESS | set([CONDITIONS])) _CFN_TO_HOT_SECTIONS = HOTemplate20160408._CFN_TO_HOT_SECTIONS _CFN_TO_HOT_SECTIONS.update({ cfn_template.CfnTemplate.CONDITIONS: CONDITIONS}) _EXTRA_RES_KEYS = ( RES_EXTERNAL_ID, RES_CONDITION ) = ( 'external_id', 'condition' ) _RESOURCE_KEYS = HOTemplate20160408._RESOURCE_KEYS + _EXTRA_RES_KEYS _RESOURCE_HOT_TO_CFN_ATTRS = HOTemplate20160408._RESOURCE_HOT_TO_CFN_ATTRS _RESOURCE_HOT_TO_CFN_ATTRS.update({ RES_EXTERNAL_ID: None, RES_CONDITION: cfn_template.CfnTemplate.RES_CONDITION, }) OUTPUT_CONDITION = 'condition' OUTPUT_KEYS = HOTemplate20160408.OUTPUT_KEYS + 
(OUTPUT_CONDITION,) deletion_policies = { 'Delete': rsrc_defn.ResourceDefinition.DELETE, 'Retain': rsrc_defn.ResourceDefinition.RETAIN, 'Snapshot': rsrc_defn.ResourceDefinition.SNAPSHOT, # aliases added in 2016-10-14 'delete': rsrc_defn.ResourceDefinition.DELETE, 'retain': rsrc_defn.ResourceDefinition.RETAIN, 'snapshot': rsrc_defn.ResourceDefinition.SNAPSHOT, } functions = { 'get_attr': hot_funcs.GetAttAllAttributes, 'get_file': hot_funcs.GetFile, 'get_param': hot_funcs.GetParam, 'get_resource': hot_funcs.GetResource, 'list_join': hot_funcs.JoinMultiple, 'repeat': hot_funcs.RepeatWithMap, 'resource_facade': hot_funcs.ResourceFacade, 'str_replace': hot_funcs.ReplaceJson, # functions added in 2015-04-30 'digest': hot_funcs.Digest, # functions added in 2015-10-15 'str_split': hot_funcs.StrSplit, # functions added in 2016-04-08 'map_merge': hot_funcs.MapMerge, # functions added in 2016-10-14 'yaql': hot_funcs.Yaql, 'map_replace': hot_funcs.MapReplace, 'if': hot_funcs.If, # functions removed from 2015-10-15 'Fn::Select': hot_funcs.Removed, # functions removed from 2014-10-16 'Fn::GetAZs': hot_funcs.Removed, 'Fn::Join': hot_funcs.Removed, 'Fn::Split': hot_funcs.Removed, 'Fn::Replace': hot_funcs.Removed, 'Fn::Base64': hot_funcs.Removed, 'Fn::MemberListToMap': hot_funcs.Removed, 'Fn::ResourceFacade': hot_funcs.Removed, 'Ref': hot_funcs.Removed, } condition_functions = { 'get_param': hot_funcs.GetParam, 'equals': hot_funcs.Equals, 'not': hot_funcs.Not, 'and': hot_funcs.And, 'or': hot_funcs.Or } def __init__(self, tmpl, template_id=None, files=None, env=None): super(HOTemplate20161014, self).__init__( tmpl, template_id, files, env) self._parser_condition_functions = {} for n, f in six.iteritems(self.functions): if not f == hot_funcs.Removed: self._parser_condition_functions[n] = function.Invalid else: self._parser_condition_functions[n] = f self._parser_condition_functions.update(self.condition_functions) self.merge_sections = [self.PARAMETERS, self.CONDITIONS] def _get_condition_definitions(self): return self.t.get(self.CONDITIONS, {}) def _rsrc_defn_args(self, stack, name, data): for arg in super(HOTemplate20161014, self)._rsrc_defn_args(stack, name, data): yield arg parse = functools.partial(self.parse, stack) parse_cond = functools.partial(self.parse_condition, stack) yield ('external_id', self._parse_resource_field(self.RES_EXTERNAL_ID, (six.string_types, function.Function), 'string', name, data, parse)) yield ('condition', self._parse_resource_field(self.RES_CONDITION, (six.string_types, bool, function.Function), 'string_or_boolean', name, data, parse_cond)) class HOTemplate20170224(HOTemplate20161014): functions = { 'get_attr': hot_funcs.GetAttAllAttributes, 'get_file': hot_funcs.GetFile, 'get_param': hot_funcs.GetParam, 'get_resource': hot_funcs.GetResource, 'list_join': hot_funcs.JoinMultiple, 'repeat': hot_funcs.RepeatWithMap, 'resource_facade': hot_funcs.ResourceFacade, 'str_replace': hot_funcs.ReplaceJson, # functions added in 2015-04-30 'digest': hot_funcs.Digest, # functions added in 2015-10-15 'str_split': hot_funcs.StrSplit, # functions added in 2016-04-08 'map_merge': hot_funcs.MapMerge, # functions added in 2016-10-14 'yaql': hot_funcs.Yaql, 'map_replace': hot_funcs.MapReplace, 'if': hot_funcs.If, # functions added in 2017-02-24 'filter': hot_funcs.Filter, 'str_replace_strict': hot_funcs.ReplaceJsonStrict, # functions removed from 2015-10-15 'Fn::Select': hot_funcs.Removed, # functions removed from 2014-10-16 'Fn::GetAZs': hot_funcs.Removed, 'Fn::Join': hot_funcs.Removed, 
'Fn::Split': hot_funcs.Removed, 'Fn::Replace': hot_funcs.Removed, 'Fn::Base64': hot_funcs.Removed, 'Fn::MemberListToMap': hot_funcs.Removed, 'Fn::ResourceFacade': hot_funcs.Removed, 'Ref': hot_funcs.Removed, } param_schema_class = parameters.HOTParamSchema20170224 class HOTemplate20170901(HOTemplate20170224): functions = { 'get_attr': hot_funcs.GetAttAllAttributes, 'get_file': hot_funcs.GetFile, 'get_param': hot_funcs.GetParam, 'get_resource': hot_funcs.GetResource, 'list_join': hot_funcs.JoinMultiple, 'repeat': hot_funcs.RepeatWithNestedLoop, 'resource_facade': hot_funcs.ResourceFacade, 'str_replace': hot_funcs.ReplaceJson, # functions added in 2015-04-30 'digest': hot_funcs.Digest, # functions added in 2015-10-15 'str_split': hot_funcs.StrSplit, # functions added in 2016-04-08 'map_merge': hot_funcs.MapMerge, # functions added in 2016-10-14 'yaql': hot_funcs.Yaql, 'map_replace': hot_funcs.MapReplace, 'if': hot_funcs.If, # functions added in 2017-02-24 'filter': hot_funcs.Filter, 'str_replace_strict': hot_funcs.ReplaceJsonStrict, # functions added in 2017-09-01 'make_url': hot_funcs.MakeURL, 'list_concat': hot_funcs.ListConcat, 'str_replace_vstrict': hot_funcs.ReplaceJsonVeryStrict, 'list_concat_unique': hot_funcs.ListConcatUnique, 'contains': hot_funcs.Contains, # functions removed from 2015-10-15 'Fn::Select': hot_funcs.Removed, # functions removed from 2014-10-16 'Fn::GetAZs': hot_funcs.Removed, 'Fn::Join': hot_funcs.Removed, 'Fn::Split': hot_funcs.Removed, 'Fn::Replace': hot_funcs.Removed, 'Fn::Base64': hot_funcs.Removed, 'Fn::MemberListToMap': hot_funcs.Removed, 'Fn::ResourceFacade': hot_funcs.Removed, 'Ref': hot_funcs.Removed, } condition_functions = { 'get_param': hot_funcs.GetParam, 'equals': hot_funcs.Equals, 'not': hot_funcs.Not, 'and': hot_funcs.And, 'or': hot_funcs.Or, # functions added in 2017-09-01 'yaql': hot_funcs.Yaql, 'contains': hot_funcs.Contains } class HOTemplate20180302(HOTemplate20170901): functions = { 'get_attr': hot_funcs.GetAttAllAttributes, 'get_file': hot_funcs.GetFile, 'get_param': hot_funcs.GetParam, 'get_resource': hot_funcs.GetResource, 'list_join': hot_funcs.JoinMultiple, 'repeat': hot_funcs.RepeatWithNestedLoop, 'resource_facade': hot_funcs.ResourceFacade, 'str_replace': hot_funcs.ReplaceJson, # functions added in 2015-04-30 'digest': hot_funcs.Digest, # functions added in 2015-10-15 'str_split': hot_funcs.StrSplit, # functions added in 2016-04-08 'map_merge': hot_funcs.MapMerge, # functions added in 2016-10-14 'yaql': hot_funcs.Yaql, 'map_replace': hot_funcs.MapReplace, 'if': hot_funcs.If, # functions added in 2017-02-24 'filter': hot_funcs.Filter, 'str_replace_strict': hot_funcs.ReplaceJsonStrict, # functions added in 2017-09-01 'make_url': hot_funcs.MakeURL, 'list_concat': hot_funcs.ListConcat, 'str_replace_vstrict': hot_funcs.ReplaceJsonVeryStrict, 'list_concat_unique': hot_funcs.ListConcatUnique, 'contains': hot_funcs.Contains, # functions removed from 2015-10-15 'Fn::Select': hot_funcs.Removed, # functions removed from 2014-10-16 'Fn::GetAZs': hot_funcs.Removed, 'Fn::Join': hot_funcs.Removed, 'Fn::Split': hot_funcs.Removed, 'Fn::Replace': hot_funcs.Removed, 'Fn::Base64': hot_funcs.Removed, 'Fn::MemberListToMap': hot_funcs.Removed, 'Fn::ResourceFacade': hot_funcs.Removed, 'Ref': hot_funcs.Removed, } condition_functions = { 'get_param': hot_funcs.GetParam, 'equals': hot_funcs.Equals, 'not': hot_funcs.Not, 'and': hot_funcs.And, 'or': hot_funcs.Or, # functions added in 2017-09-01 'yaql': hot_funcs.Yaql, 'contains': hot_funcs.Contains } 
param_schema_class = parameters.HOTParamSchema20180302 heat-10.0.2/heat/engine/hot/__init__.py0000666000175000017500000000000013343562337017627 0ustar zuulzuul00000000000000heat-10.0.2/heat/engine/hot/parameters.py0000666000175000017500000001705013343562337020250 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints as constr from heat.engine import parameters PARAM_CONSTRAINTS = ( DESCRIPTION, LENGTH, RANGE, MODULO, ALLOWED_VALUES, ALLOWED_PATTERN, CUSTOM_CONSTRAINT, ) = ( 'description', 'length', 'range', 'modulo', 'allowed_values', 'allowed_pattern', 'custom_constraint', ) RANGE_KEYS = (MIN, MAX) = ('min', 'max') MODULO_KEYS = (STEP, OFFSET) = ('step', 'offset') class HOTParamSchema(parameters.Schema): """HOT parameter schema.""" KEYS = ( TYPE, DESCRIPTION, DEFAULT, SCHEMA, CONSTRAINTS, HIDDEN, LABEL, IMMUTABLE ) = ( 'type', 'description', 'default', 'schema', 'constraints', 'hidden', 'label', 'immutable' ) # For Parameters the type name for Schema.LIST is comma_delimited_list # and the type name for Schema.MAP is json TYPES = ( STRING, NUMBER, LIST, MAP, BOOLEAN, ) = ( 'string', 'number', 'comma_delimited_list', 'json', 'boolean', ) PARAMETER_KEYS = KEYS @classmethod def _constraint_from_def(cls, constraint): desc = constraint.get(DESCRIPTION) if RANGE in constraint: cdef = constraint.get(RANGE) cls._check_dict(cdef, RANGE_KEYS, 'range constraint') return constr.Range(parameters.Schema.get_num(MIN, cdef), parameters.Schema.get_num(MAX, cdef), desc) elif LENGTH in constraint: cdef = constraint.get(LENGTH) cls._check_dict(cdef, RANGE_KEYS, 'length constraint') return constr.Length(parameters.Schema.get_num(MIN, cdef), parameters.Schema.get_num(MAX, cdef), desc) elif ALLOWED_VALUES in constraint: cdef = constraint.get(ALLOWED_VALUES) return constr.AllowedValues(cdef, desc) elif ALLOWED_PATTERN in constraint: cdef = constraint.get(ALLOWED_PATTERN) return constr.AllowedPattern(cdef, desc) elif CUSTOM_CONSTRAINT in constraint: cdef = constraint.get(CUSTOM_CONSTRAINT) return constr.CustomConstraint(cdef, desc) else: raise exception.InvalidSchemaError( message=_("No constraint expressed")) @classmethod def _constraints(cls, param_name, schema_dict): constraints = schema_dict.get(cls.CONSTRAINTS) if constraints is None: return if not isinstance(constraints, list): raise exception.InvalidSchemaError( message=_("Invalid parameter constraints for parameter " "%s, expected a list") % param_name) for constraint in constraints: cls._check_dict(constraint, PARAM_CONSTRAINTS, 'parameter constraints') yield cls._constraint_from_def(constraint) @classmethod def from_dict(cls, param_name, schema_dict): """Return a Parameter Schema object from a legacy schema dictionary. 
:param param_name: name of the parameter owning the schema; used for more verbose logging :type param_name: str """ cls._validate_dict(param_name, schema_dict) # make update_allowed true by default on TemplateResources # as the template should deal with this. return cls(schema_dict[cls.TYPE], description=schema_dict.get(HOTParamSchema.DESCRIPTION), default=schema_dict.get(HOTParamSchema.DEFAULT), constraints=list(cls._constraints(param_name, schema_dict)), hidden=schema_dict.get(HOTParamSchema.HIDDEN, False), label=schema_dict.get(HOTParamSchema.LABEL), immutable=schema_dict.get(HOTParamSchema.IMMUTABLE, False)) class HOTParamSchema20170224(HOTParamSchema): @classmethod def _constraint_from_def(cls, constraint): desc = constraint.get(DESCRIPTION) if MODULO in constraint: cdef = constraint.get(MODULO) cls._check_dict(cdef, MODULO_KEYS, 'modulo constraint') return constr.Modulo(parameters.Schema.get_num(STEP, cdef), parameters.Schema.get_num(OFFSET, cdef), desc) else: return super(HOTParamSchema20170224, cls)._constraint_from_def( constraint) class HOTParamSchema20180302(HOTParamSchema20170224): KEYS_20180302 = (TAGS,) = ('tags',) KEYS = HOTParamSchema20170224.KEYS + KEYS_20180302 PARAMETER_KEYS = KEYS @classmethod def from_dict(cls, param_name, schema_dict): """Return a Parameter Schema object from a legacy schema dictionary. :param param_name: name of the parameter owning the schema; used for more verbose logging :type param_name: str """ cls._validate_dict(param_name, schema_dict) # make update_allowed true by default on TemplateResources # as the template should deal with this. return cls(schema_dict[cls.TYPE], description=schema_dict.get(HOTParamSchema.DESCRIPTION), default=schema_dict.get(HOTParamSchema.DEFAULT), constraints=list(cls._constraints(param_name, schema_dict)), hidden=schema_dict.get(HOTParamSchema.HIDDEN, False), label=schema_dict.get(HOTParamSchema.LABEL), immutable=schema_dict.get(HOTParamSchema.IMMUTABLE, False), tags=schema_dict.get(HOTParamSchema20180302.TAGS)) class HOTParameters(parameters.Parameters): PSEUDO_PARAMETERS = ( PARAM_STACK_ID, PARAM_STACK_NAME, PARAM_REGION, PARAM_PROJECT_ID ) = ( 'OS::stack_id', 'OS::stack_name', 'OS::region', 'OS::project_id' ) def set_stack_id(self, stack_identifier): """Set the StackId pseudo parameter value.""" if stack_identifier is not None: self.params[self.PARAM_STACK_ID].schema.set_default( stack_identifier.stack_id) return True return False def _pseudo_parameters(self, stack_identifier): stack_id = getattr(stack_identifier, 'stack_id', '') stack_name = getattr(stack_identifier, 'stack_name', '') tenant = getattr(stack_identifier, 'tenant', '') yield parameters.Parameter( self.PARAM_STACK_ID, parameters.Schema(parameters.Schema.STRING, _('Stack ID'), default=str(stack_id))) yield parameters.Parameter( self.PARAM_PROJECT_ID, parameters.Schema(parameters.Schema.STRING, _('Project ID'), default=str(tenant))) if stack_name: yield parameters.Parameter( self.PARAM_STACK_NAME, parameters.Schema(parameters.Schema.STRING, _('Stack Name'), default=stack_name)) heat-10.0.2/heat/engine/hot/functions.py0000666000175000017500000015314613343562337020124 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import hashlib import itertools from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils import six from six.moves.urllib import parse as urlparse import yaql from yaql.language import exceptions from heat.common import exception from heat.common.i18n import _ from heat.engine import attributes from heat.engine import function LOG = logging.getLogger(__name__) opts = [ cfg.IntOpt('limit_iterators', default=200, help=_('The maximum number of elements in collection ' 'expression can take for its evaluation.')), cfg.IntOpt('memory_quota', default=10000, help=_('The maximum size of memory in bytes that ' 'expression can take for its evaluation.')) ] cfg.CONF.register_opts(opts, group='yaql') class GetParam(function.Function): """A function for resolving parameter references. Takes the form:: get_param: or:: get_param: - - - ... """ def __init__(self, stack, fn_name, args): super(GetParam, self).__init__(stack, fn_name, args) self.parameters = self.stack.parameters def result(self): args = function.resolve(self.args) if not args: raise ValueError(_('Function "%s" must have arguments') % self.fn_name) if isinstance(args, six.string_types): param_name = args path_components = [] elif isinstance(args, collections.Sequence): param_name = args[0] path_components = args[1:] else: raise TypeError(_('Argument to "%s" must be string or list') % self.fn_name) if not isinstance(param_name, six.string_types): raise TypeError(_('Parameter name in "%s" must be string') % self.fn_name) try: parameter = self.parameters[param_name] except KeyError: raise exception.UserParameterMissing(key=param_name) def get_path_component(collection, key): if not isinstance(collection, (collections.Mapping, collections.Sequence)): raise TypeError(_('"%s" can\'t traverse path') % self.fn_name) if not isinstance(key, (six.string_types, int)): raise TypeError(_('Path components in "%s" ' 'must be strings') % self.fn_name) if isinstance(collection, collections.Sequence ) and isinstance(key, six.string_types): try: key = int(key) except ValueError: raise TypeError(_("Path components in '%s' " "must be a string that can be " "parsed into an " "integer.") % self.fn_name) return collection[key] try: return six.moves.reduce(get_path_component, path_components, parameter) except (KeyError, IndexError, TypeError): return '' class GetResource(function.Function): """A function for resolving resource references. Takes the form:: get_resource: """ def _resource(self, path='unknown'): resource_name = function.resolve(self.args) try: return self.stack[resource_name] except KeyError: raise exception.InvalidTemplateReference(resource=resource_name, key=path) def dependencies(self, path): return itertools.chain(super(GetResource, self).dependencies(path), [self._resource(path)]) def result(self): return self._resource().FnGetRefId() class GetAttThenSelect(function.Function): """A function for resolving resource attributes. Takes the form:: get_attr: - - - - ... 
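For example (an illustrative sketch; the resource, attribute and path names are arbitrary):: get_attr: [server, networks, private, 0] resolves the whole "networks" attribute of the resource named "server" and then selects element 0 of its "private" entry.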
""" def __init__(self, stack, fn_name, args): super(GetAttThenSelect, self).__init__(stack, fn_name, args) (self._resource_name, self._attribute, self._path_components) = self._parse_args() def _parse_args(self): if (not isinstance(self.args, collections.Sequence) or isinstance(self.args, six.string_types)): raise TypeError(_('Argument to "%s" must be a list') % self.fn_name) if len(self.args) < 2: raise ValueError(_('Arguments to "%s" must be of the form ' '[resource_name, attribute, (path), ...]') % self.fn_name) return self.args[0], self.args[1], self.args[2:] def _res_name(self): return function.resolve(self._resource_name) def _resource(self, path='unknown'): resource_name = self._res_name() try: return self.stack[resource_name] except KeyError: raise exception.InvalidTemplateReference(resource=resource_name, key=path) def _attr_path(self): return function.resolve(self._attribute) def dep_attrs(self, resource_name): if self._res_name() == resource_name: try: attrs = [self._attr_path()] except Exception as exc: LOG.debug("Ignoring exception calculating required attributes" ": %s %s", type(exc).__name__, six.text_type(exc)) attrs = [] else: attrs = [] return itertools.chain(super(GetAttThenSelect, self).dep_attrs(resource_name), attrs) def all_dep_attrs(self): try: attrs = [(self._res_name(), self._attr_path())] except Exception: attrs = [] return itertools.chain(function.all_dep_attrs(self.args), attrs) def dependencies(self, path): return itertools.chain(super(GetAttThenSelect, self).dependencies(path), [self._resource(path)]) def _allow_without_attribute_name(self): return False def validate(self): super(GetAttThenSelect, self).validate() res = self._resource() if self._allow_without_attribute_name(): # if allow without attribute_name, then don't check # when attribute_name is None if self._attribute is None: return attr = function.resolve(self._attribute) if attr not in res.attributes_schema: raise exception.InvalidTemplateAttribute( resource=self._resource_name, key=attr) def _result_ready(self, r): if r.action in (r.CREATE, r.ADOPT, r.SUSPEND, r.RESUME, r.UPDATE, r.ROLLBACK, r.SNAPSHOT, r.CHECK): return True return False def result(self): attr_name = function.resolve(self._attribute) resource = self._resource() if self._result_ready(resource): attribute = resource.FnGetAtt(attr_name) else: attribute = None if attribute is None: return None path_components = function.resolve(self._path_components) return attributes.select_from_attribute(attribute, path_components) class GetAtt(GetAttThenSelect): """A function for resolving resource attributes. Takes the form:: get_attr: - - - - ... """ def result(self): path_components = function.resolve(self._path_components) attribute = function.resolve(self._attribute) resource = self._resource() if self._result_ready(resource): return resource.FnGetAtt(attribute, *path_components) else: return None def _attr_path(self): path = function.resolve(self._path_components) attr = function.resolve(self._attribute) if path: return tuple([attr] + path) else: return attr class GetAttAllAttributes(GetAtt): """A function for resolving resource attributes. Takes the form:: get_attr: - - - - ... where and , ... are optional arguments. If there is no , result will be dict of all resource's attributes. Else function returns resolved resource's attribute. 
""" def _parse_args(self): if not self.args: raise ValueError(_('Arguments to "%s" can be of the next ' 'forms: [resource_name] or ' '[resource_name, attribute, (path), ...]' ) % self.fn_name) elif isinstance(self.args, collections.Sequence): if len(self.args) > 1: return super(GetAttAllAttributes, self)._parse_args() else: return self.args[0], None, [] else: raise TypeError(_('Argument to "%s" must be a list') % self.fn_name) def _attr_path(self): if self._attribute is None: return attributes.ALL_ATTRIBUTES return super(GetAttAllAttributes, self)._attr_path() def result(self): if self._attribute is None: r = self._resource() if (r.status in (r.IN_PROGRESS, r.COMPLETE) and r.action in (r.CREATE, r.ADOPT, r.SUSPEND, r.RESUME, r.UPDATE, r.CHECK, r.SNAPSHOT)): return r.FnGetAtts() else: return None else: return super(GetAttAllAttributes, self).result() def _allow_without_attribute_name(self): return True class Replace(function.Function): """A function for performing string substitutions. Takes the form:: str_replace: template: params: : : ... And resolves to:: " " When keys overlap in the template, longer matches are preferred. For keys of equal length, lexicographically smaller keys are preferred. """ _strict = False _allow_empty_value = True def __init__(self, stack, fn_name, args): super(Replace, self).__init__(stack, fn_name, args) self._mapping, self._string = self._parse_args() if not isinstance(self._mapping, (collections.Mapping, function.Function)): raise TypeError(_('"%s" parameters must be a mapping') % self.fn_name) def _parse_args(self): if not isinstance(self.args, collections.Mapping): raise TypeError(_('Arguments to "%s" must be a map') % self.fn_name) try: mapping = self.args['params'] string = self.args['template'] except (KeyError, TypeError): example = _('''%s: template: This is var1 template var2 params: var1: a var2: string''') % self.fn_name raise KeyError(_('"%(fn_name)s" syntax should be %(example)s') % {'fn_name': self.fn_name, 'example': example}) else: return mapping, string def _validate_replacement(self, value, param): if value is None: return '' if not isinstance(value, (six.string_types, six.integer_types, float, bool)): raise TypeError(_('"%(name)s" params must be strings or numbers, ' 'param %(param)s is not valid') % {'name': self.fn_name, 'param': param}) return six.text_type(value) def result(self): template = function.resolve(self._string) mapping = function.resolve(self._mapping) if self._strict: unreplaced_keys = set(mapping) if not isinstance(template, six.string_types): raise TypeError(_('"%s" template must be a string') % self.fn_name) if not isinstance(mapping, collections.Mapping): raise TypeError(_('"%s" params must be a map') % self.fn_name) def replace(strings, keys): if not keys: return strings placeholder = keys[0] if not isinstance(placeholder, six.string_types): raise TypeError(_('"%s" param placeholders must be strings') % self.fn_name) remaining_keys = keys[1:] value = self._validate_replacement(mapping[placeholder], placeholder) def string_split(s): ss = s.split(placeholder) if self._strict and len(ss) > 1: unreplaced_keys.discard(placeholder) return ss return [value.join(replace(string_split(s), remaining_keys)) for s in strings] ret_val = replace([template], sorted(sorted(mapping), key=len, reverse=True))[0] if self._strict and len(unreplaced_keys) > 0: raise ValueError( _("The following params were not found in the template: %s") % ','.join(sorted(sorted(unreplaced_keys), key=len, reverse=True))) return ret_val class 
ReplaceJson(Replace): """A function for performing string substitutions. Takes the form:: str_replace: template: <key_1> <key_2> params: <key_1>: <value_1> <key_2>: <value_2> ... And resolves to:: "<value_1> <value_2>" When keys overlap in the template, longer matches are preferred. For keys of equal length, lexicographically smaller keys are preferred. Non-string param values (e.g. maps or lists) are serialized as JSON before being substituted in. """ def _validate_replacement(self, value, param): def _raise_empty_param_value_error(): raise ValueError( _('%(name)s has an undefined or empty value for param ' '%(param)s, must be a defined non-empty value') % {'name': self.fn_name, 'param': param}) if value is None: if self._allow_empty_value: return '' else: _raise_empty_param_value_error() if not isinstance(value, (six.string_types, six.integer_types, float, bool)): if isinstance(value, (collections.Mapping, collections.Sequence)): if not self._allow_empty_value and len(value) == 0: _raise_empty_param_value_error() try: return jsonutils.dumps(value, default=None, sort_keys=True) except TypeError: raise TypeError(_('"%(name)s" params must be strings, ' 'numbers, list or map. ' 'Failed to json serialize %(value)s' ) % {'name': self.fn_name, 'value': value}) else: raise TypeError(_('"%s" params must be strings, numbers, ' 'list or map.') % self.fn_name) ret_value = six.text_type(value) if not self._allow_empty_value and not ret_value: _raise_empty_param_value_error() return ret_value
class ReplaceJsonStrict(ReplaceJson): """A function for performing string substitutions. str_replace_strict is identical to the str_replace function, except that a ValueError is raised if any of the params are not present in the template. """ _strict = True
class ReplaceJsonVeryStrict(ReplaceJsonStrict): """A function for performing string substitutions. str_replace_vstrict is identical to the str_replace_strict function, except that a ValueError is raised if any of the params are None or empty. """ _allow_empty_value = False
class GetFile(function.Function): """A function for including a file inline. Takes the form:: get_file: <file_key> And resolves to the content stored in the files dictionary under the given key. """ def __init__(self, stack, fn_name, args): super(GetFile, self).__init__(stack, fn_name, args) self.files = self.stack.t.files def result(self): args = function.resolve(self.args) if not (isinstance(args, six.string_types)): raise TypeError(_('Argument to "%s" must be a string') % self.fn_name) f = self.files.get(args) if f is None: fmt_data = {'fn_name': self.fn_name, 'file_key': args} raise ValueError(_('No content found in the "files" section for ' '%(fn_name)s path: %(file_key)s') % fmt_data) return f
class Join(function.Function): """A function for joining strings. Takes the form:: list_join: - <delim> - - <string_1> - <string_2> - ... And resolves to:: "<string_1><delim><string_2><delim>..."
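For example (illustrative values):: list_join: [', ', ['one', 'two', 'three']] resolves to the string "one, two, three".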
""" def __init__(self, stack, fn_name, args): super(Join, self).__init__(stack, fn_name, args) example = '"%s" : [ " ", [ "str1", "str2"]]' % self.fn_name fmt_data = {'fn_name': self.fn_name, 'example': example} if not isinstance(self.args, list): raise TypeError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % fmt_data) try: self._delim, self._strings = self.args except ValueError: raise ValueError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % fmt_data) def result(self): strings = function.resolve(self._strings) if strings is None: strings = [] if (isinstance(strings, six.string_types) or not isinstance(strings, collections.Sequence)): raise TypeError(_('"%s" must operate on a list') % self.fn_name) delim = function.resolve(self._delim) if not isinstance(delim, six.string_types): raise TypeError(_('"%s" delimiter must be a string') % self.fn_name) def ensure_string(s): if s is None: return '' if not isinstance(s, six.string_types): raise TypeError( _('Items to join must be strings not %s' ) % (repr(s)[:200])) return s return delim.join(ensure_string(s) for s in strings) class JoinMultiple(function.Function): """A function for joining one or more lists of strings. Takes the form:: list_join: - - - - - ... - - ... And resolves to:: "..." Optionally multiple lists may be specified, which will also be joined. """ def __init__(self, stack, fn_name, args): super(JoinMultiple, self).__init__(stack, fn_name, args) example = '"%s" : [ " ", [ "str1", "str2"] ...]' % fn_name fmt_data = {'fn_name': fn_name, 'example': example} if not isinstance(args, list): raise TypeError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % fmt_data) try: self._delim = args[0] self._joinlists = args[1:] if len(self._joinlists) < 1: raise ValueError except (IndexError, ValueError): raise ValueError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % fmt_data) def result(self): r_joinlists = function.resolve(self._joinlists) strings = [] for jl in r_joinlists: if jl: if (isinstance(jl, six.string_types) or not isinstance(jl, collections.Sequence)): raise TypeError(_('"%s" must operate on ' 'a list') % self.fn_name) strings += jl delim = function.resolve(self._delim) if not isinstance(delim, six.string_types): raise TypeError(_('"%s" delimiter must be a string') % self.fn_name) def ensure_string(s): msg = _('Items to join must be string, map or list not %s' ) % (repr(s)[:200]) if s is None: return '' elif isinstance(s, six.string_types): return s elif isinstance(s, (collections.Mapping, collections.Sequence)): try: return jsonutils.dumps(s, default=None, sort_keys=True) except TypeError: msg = _('Items to join must be string, map or list. ' '%s failed json serialization' ) % (repr(s)[:200]) raise TypeError(msg) return delim.join(ensure_string(s) for s in strings) class MapMerge(function.Function): """A function for merging maps. 
Takes the form:: map_merge: - : : - : And resolves to:: {"": "", "": ""} """ def __init__(self, stack, fn_name, args): super(MapMerge, self).__init__(stack, fn_name, args) example = (_('"%s" : [ { "key1": "val1" }, { "key2": "val2" } ]') % fn_name) self.fmt_data = {'fn_name': fn_name, 'example': example} def result(self): args = function.resolve(self.args) if not isinstance(args, collections.Sequence): raise TypeError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % self.fmt_data) def ensure_map(m): if m is None: return {} elif isinstance(m, collections.Mapping): return m else: msg = _('Incorrect arguments: Items to merge must be maps.') raise TypeError(msg) ret_map = {} for m in args: ret_map.update(ensure_map(m)) return ret_map class MapReplace(function.Function): """A function for performing substitutions on maps. Takes the form:: map_replace: - : : - keys: : values: : And resolves to:: {"": "", "": ""} """ def __init__(self, stack, fn_name, args): super(MapReplace, self).__init__(stack, fn_name, args) example = (_('"%s" : [ { "key1": "val1" }, ' '{"keys": {"key1": "key2"}, "values": {"val1": "val2"}}]') % fn_name) self.fmt_data = {'fn_name': fn_name, 'example': example} def result(self): args = function.resolve(self.args) def ensure_map(m): if m is None: return {} elif isinstance(m, collections.Mapping): return m else: msg = (_('Incorrect arguments: to "%(fn_name)s", arguments ' 'must be a list of maps. Example: %(example)s') % self.fmt_data) raise TypeError(msg) try: in_map = ensure_map(args.pop(0)) repl_map = ensure_map(args.pop(0)) if args != []: raise IndexError except (IndexError, AttributeError): raise TypeError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % self.fmt_data) for k in repl_map: if k not in ('keys', 'values'): raise ValueError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % self.fmt_data) repl_keys = ensure_map(repl_map.get('keys', {})) repl_values = ensure_map(repl_map.get('values', {})) ret_map = {} for k, v in six.iteritems(in_map): key = repl_keys.get(k) if key is None: key = k elif key in in_map and key != k: # Keys collide msg = _('key replacement %s collides with ' 'a key in the input map') raise ValueError(msg % key) elif key in ret_map: # Keys collide msg = _('key replacement %s collides with ' 'a key in the output map') raise ValueError(msg % key) try: value = repl_values.get(v, v) except TypeError: # If the value is unhashable, we get here value = v ret_map[key] = value return ret_map class ResourceFacade(function.Function): """A function for retrieving data in a parent provider template. A function for obtaining data from the facade resource from within the corresponding provider template. Takes the form:: resource_facade: where the valid attribute types are "metadata", "deletion_policy" and "update_policy". 
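For example (illustrative):: resource_facade: deletion_policy inside a provider template resolves to the deletion policy (e.g. "Retain") declared on the corresponding facade resource in the parent template.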
""" _RESOURCE_ATTRIBUTES = ( METADATA, DELETION_POLICY, UPDATE_POLICY, ) = ( 'metadata', 'deletion_policy', 'update_policy' ) def __init__(self, stack, fn_name, args): super(ResourceFacade, self).__init__(stack, fn_name, args) if self.args not in self._RESOURCE_ATTRIBUTES: fmt_data = {'fn_name': self.fn_name, 'allowed': ', '.join(self._RESOURCE_ATTRIBUTES)} raise ValueError(_('Incorrect arguments to "%(fn_name)s" ' 'should be one of: %(allowed)s') % fmt_data) def result(self): attr = function.resolve(self.args) if attr == self.METADATA: return self.stack.parent_resource.metadata_get() elif attr == self.UPDATE_POLICY: up = self.stack.parent_resource.t._update_policy or {} return function.resolve(up) elif attr == self.DELETION_POLICY: return self.stack.parent_resource.t.deletion_policy() class Removed(function.Function): """This function existed in previous versions of HOT, but has been removed. Check the HOT guide for an equivalent native function. """ def validate(self): exp = (_("The function %s is not supported in this version of HOT.") % self.fn_name) raise exception.InvalidTemplateVersion(explanation=exp) def result(self): return super(Removed, self).result() class Repeat(function.Function): """A function for iterating over a list of items. Takes the form:: repeat: template: for_each: : The result is a new list of the same size as , where each element is a copy of with any occurrences of replaced with the corresponding item of . """ def __init__(self, stack, fn_name, args): super(Repeat, self).__init__(stack, fn_name, args) self._parse_args() def _parse_args(self): if not isinstance(self.args, collections.Mapping): raise TypeError(_('Arguments to "%s" must be a map') % self.fn_name) # We don't check for invalid keys appearing here, which is wrong but # it's probably too late to change try: self._for_each = self.args['for_each'] self._template = self.args['template'] except KeyError: example = ('''repeat: template: This is %var% for_each: %var%: ['a', 'b', 'c']''') raise KeyError(_('"repeat" syntax should be %s') % example) self._nested_loop = True def validate(self): super(Repeat, self).validate() if not isinstance(self._for_each, function.Function): if not isinstance(self._for_each, collections.Mapping): raise TypeError(_('The "for_each" argument to "%s" must ' 'contain a map') % self.fn_name) def _valid_arg(self, arg): if not (isinstance(arg, (collections.Sequence, function.Function)) and not isinstance(arg, six.string_types)): raise TypeError(_('The values of the "for_each" argument to ' '"%s" must be lists') % self.fn_name) def _do_replacement(self, keys, values, template): if isinstance(template, six.string_types): for (key, value) in zip(keys, values): template = template.replace(key, value) return template elif isinstance(template, collections.Sequence): return [self._do_replacement(keys, values, elem) for elem in template] elif isinstance(template, collections.Mapping): return dict((self._do_replacement(keys, values, k), self._do_replacement(keys, values, v)) for (k, v) in template.items()) else: return template def result(self): for_each = function.resolve(self._for_each) keys, lists = six.moves.zip(*for_each.items()) # use empty list for references(None) else validation will fail value_lens = [] values = [] for value in lists: if value is None: values.append([]) else: self._valid_arg(value) values.append(value) value_lens.append(len(value)) if not self._nested_loop and value_lens: if len(set(value_lens)) != 1: raise ValueError(_('For %s, the length of for_each values ' 
'should be equal if no nested ' 'loop.') % self.fn_name) template = function.resolve(self._template) iter_func = itertools.product if self._nested_loop else six.moves.zip return [self._do_replacement(keys, replacements, template) for replacements in iter_func(*values)]
class RepeatWithMap(Repeat): """A function for iterating over a list of items or a dict of keys. Takes the form:: repeat: template: <body> for_each: <var>: <list> or <map> The result is a new list of the same size as <list> or <map>, where each element is a copy of <body> with any occurrences of <var> replaced with the corresponding item of <list> or key of <map>. """ def _valid_arg(self, arg): if not (isinstance(arg, (collections.Sequence, collections.Mapping, function.Function)) and not isinstance(arg, six.string_types)): raise TypeError(_('The values of the "for_each" argument to ' '"%s" must be lists or maps') % self.fn_name)
class RepeatWithNestedLoop(RepeatWithMap): """A function for iterating over a list of items or a dict of keys. Takes the form:: repeat: template: <body> for_each: <var>: <list> or <map> The result is a new list of the same size as <list> or <map>, where each element is a copy of <body> with any occurrences of <var> replaced with the corresponding item of <list> or key of <map>. This function also allows specifying 'permutations' to decide whether to iterate nested over all the permutations of the elements in the given lists. Takes the form:: repeat: template: var: %var% bar: %bar% for_each: %var%: <list1> %bar%: <list2> permutations: false If 'permutations' is not specified, the default value is true, for compatibility with the previous behavior. The args have to be lists instead of dicts if 'permutations' is False, because keys in a dict are unordered, and the list args all have to be of the same length. """ def _parse_args(self): super(RepeatWithNestedLoop, self)._parse_args() self._nested_loop = self.args.get('permutations', True) if not isinstance(self._nested_loop, bool): raise TypeError(_('"permutations" should be boolean type ' 'for %s function.') % self.fn_name) def _valid_arg(self, arg): if self._nested_loop: super(RepeatWithNestedLoop, self)._valid_arg(arg) else: Repeat._valid_arg(self, arg)
class Digest(function.Function): """A function for performing digest operations. Takes the form:: digest: - <algorithm> - <value> Valid algorithms are the ones provided natively by hashlib (md5, sha1, sha224, sha256, sha384, and sha512) or any one provided by OpenSSL. """ def validate_usage(self, args): if not (isinstance(args, list) and all([isinstance(a, six.string_types) for a in args])): msg = _('Argument to function "%s" must be a list of strings') raise TypeError(msg % self.fn_name) if len(args) != 2: msg = _('Function "%s" usage: ["<algorithm>", "<value>"]') raise ValueError(msg % self.fn_name) if six.PY3: algorithms = hashlib.algorithms_available else: algorithms = hashlib.algorithms if args[0].lower() not in algorithms: msg = _('Algorithm must be one of %s') raise ValueError(msg % six.text_type(algorithms)) def digest(self, algorithm, value): _hash = hashlib.new(algorithm) _hash.update(six.b(value)) return _hash.hexdigest() def result(self): args = function.resolve(self.args) self.validate_usage(args) return self.digest(*args)
class StrSplit(function.Function): """A function for splitting delimited strings into a list, optionally extracting a specific list member by index. Takes the form:: str_split: - <delimiter> - <string> - <index> If <index> is specified, the specified list item will be returned; otherwise, the whole list is returned, similar to get_attr with path-based attributes accessing lists.
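For example (illustrative values):: str_split: [',', 'apples,pears', 1] resolves to "pears", while str_split: [',', 'apples,pears'] resolves to the list ['apples', 'pears'].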
""" def __init__(self, stack, fn_name, args): super(StrSplit, self).__init__(stack, fn_name, args) example = '"%s" : [ ",", "apples,pears", ]' % fn_name self.fmt_data = {'fn_name': fn_name, 'example': example} self.fn_name = fn_name if isinstance(args, (six.string_types, collections.Mapping)): raise TypeError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % self.fmt_data) def result(self): args = function.resolve(self.args) try: delim = args.pop(0) str_to_split = args.pop(0) except (AttributeError, IndexError): raise ValueError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % self.fmt_data) if str_to_split is None: return None split_list = str_to_split.split(delim) # Optionally allow an index to be specified if args: try: index = int(args.pop(0)) except ValueError: raise ValueError(_('Incorrect index to "%(fn_name)s" ' 'should be: %(example)s') % self.fmt_data) else: try: res = split_list[index] except IndexError: raise ValueError(_('Incorrect index to "%(fn_name)s" ' 'should be between 0 and ' '%(max_index)s') % {'fn_name': self.fn_name, 'max_index': len(split_list) - 1}) else: res = split_list return res class Yaql(function.Function): """A function for executing a yaql expression. Takes the form:: yaql: expression: data: : Evaluates expression on the given data. """ _parser = None @classmethod def get_yaql_parser(cls): if cls._parser is None: global_options = { 'yaql.limitIterators': cfg.CONF.yaql.limit_iterators, 'yaql.memoryQuota': cfg.CONF.yaql.memory_quota } cls._parser = yaql.YaqlFactory().create(global_options) cls._context = yaql.create_context() return cls._parser def __init__(self, stack, fn_name, args): super(Yaql, self).__init__(stack, fn_name, args) if not isinstance(self.args, collections.Mapping): raise TypeError(_('Arguments to "%s" must be a map.') % self.fn_name) try: self._expression = self.args['expression'] self._data = self.args.get('data', {}) if set(self.args) - set(['expression', 'data']): raise KeyError except (KeyError, TypeError): example = ('''%s: expression: $.data.var1.sum() data: var1: [3, 2, 1]''') % self.fn_name raise KeyError(_('"%(name)s" syntax should be %(example)s') % { 'name': self.fn_name, 'example': example}) def validate(self): super(Yaql, self).validate() if not isinstance(self._expression, function.Function): self._parse(self._expression) def _parse(self, expression): if not isinstance(expression, six.string_types): raise TypeError(_('The "expression" argument to %s must ' 'contain a string.') % self.fn_name) parse = self.get_yaql_parser() try: return parse(expression) except exceptions.YaqlException as yex: raise ValueError(_('Bad expression %s.') % yex) def result(self): statement = self._parse(function.resolve(self._expression)) data = function.resolve(self._data) context = self._context.create_child_context() return statement.evaluate({'data': data}, context) class Equals(function.Function): """A function for comparing whether two values are equal. Takes the form:: equals: - - The value can be any type that you want to compare. Returns true if the two values are equal or false if they aren't. 
""" def __init__(self, stack, fn_name, args): super(Equals, self).__init__(stack, fn_name, args) try: if (not self.args or not isinstance(self.args, list)): raise ValueError() self.value1, self.value2 = self.args except ValueError: msg = _('Arguments to "%s" must be of the form: ' '[value_1, value_2]') raise ValueError(msg % self.fn_name) def result(self): resolved_v1 = function.resolve(self.value1) resolved_v2 = function.resolve(self.value2) return resolved_v1 == resolved_v2 class If(function.Macro): """A function to return corresponding value based on condition evaluation. Takes the form:: if: - - - The value_if_true to be returned if the specified condition evaluates to true, the value_if_false to be returned if the specified condition evaluates to false. """ def parse_args(self, parse_func): try: if (not self.args or not isinstance(self.args, collections.Sequence) or isinstance(self.args, six.string_types)): raise ValueError() condition, value_if_true, value_if_false = self.args except ValueError: msg = _('Arguments to "%s" must be of the form: ' '[condition_name, value_if_true, value_if_false]') raise ValueError(msg % self.fn_name) cond = self.template.parse_condition(self.stack, condition, self.fn_name) cd = self._get_condition(function.resolve(cond)) return parse_func(value_if_true if cd else value_if_false) def _get_condition(self, cond): if isinstance(cond, bool): return cond return self.template.conditions(self.stack).is_enabled(cond) class ConditionBoolean(function.Function): """Abstract parent class of boolean condition functions.""" def __init__(self, stack, fn_name, args): super(ConditionBoolean, self).__init__(stack, fn_name, args) self._check_args() def _check_args(self): if not (isinstance(self.args, collections.Sequence) and not isinstance(self.args, six.string_types)): msg = _('Arguments to "%s" must be a list of conditions') raise ValueError(msg % self.fn_name) if not self.args or len(self.args) < 2: msg = _('The minimum number of condition arguments to "%s" is 2.') raise ValueError(msg % self.fn_name) def _get_condition(self, arg): if isinstance(arg, bool): return arg conditions = self.stack.t.conditions(self.stack) return conditions.is_enabled(arg) class Not(ConditionBoolean): """A function that acts as a NOT operator on a condition. Takes the form:: not: Returns true for a condition that evaluates to false or returns false for a condition that evaluates to true. """ def _check_args(self): self.condition = self.args if self.args is None: msg = _('Argument to "%s" must be a condition') raise ValueError(msg % self.fn_name) def result(self): cd = function.resolve(self.condition) return not self._get_condition(cd) class And(ConditionBoolean): """A function that acts as an AND operator on conditions. Takes the form:: and: - - - ... Returns true if all the specified conditions evaluate to true, or returns false if any one of the conditions evaluates to false. The minimum number of conditions that you can include is 2. """ def result(self): return all(self._get_condition(cd) for cd in function.resolve(self.args)) class Or(ConditionBoolean): """A function that acts as an OR operator on conditions. Takes the form:: or: - - - ... Returns true if any one of the specified conditions evaluate to true, or returns false if all of the conditions evaluates to false. The minimum number of conditions that you can include is 2. 
""" def result(self): return any(self._get_condition(cd) for cd in function.resolve(self.args)) class Filter(function.Function): """A function for filtering out values from lists. Takes the form:: filter: - - Returns a new list without the values. """ def __init__(self, stack, fn_name, args): super(Filter, self).__init__(stack, fn_name, args) self._values, self._sequence = self._parse_args() def _parse_args(self): if (not isinstance(self.args, collections.Sequence) or isinstance(self.args, six.string_types)): raise TypeError(_('Argument to "%s" must be a list') % self.fn_name) if len(self.args) != 2: raise ValueError(_('"%(fn)s" expected 2 arguments of the form ' '[values, sequence] but got %(len)d arguments ' 'instead') % {'fn': self.fn_name, 'len': len(self.args)}) return self.args[0], self.args[1] def result(self): sequence = function.resolve(self._sequence) if not sequence: return sequence if not isinstance(sequence, list): raise TypeError(_('"%s" only works with lists') % self.fn_name) values = function.resolve(self._values) if not values: return sequence if not isinstance(values, list): raise TypeError( _('"%(fn)s" filters a list of values') % self.fn_name) return [i for i in sequence if i not in values] class MakeURL(function.Function): """A function for performing substitutions on maps. Takes the form:: make_url: scheme: username: password: host: port: path: query: : fragment: And resolves to a correctly-escaped URL constructed from the various components. """ _ARG_KEYS = ( SCHEME, USERNAME, PASSWORD, HOST, PORT, PATH, QUERY, FRAGMENT, ) = ( 'scheme', 'username', 'password', 'host', 'port', 'path', 'query', 'fragment', ) def _check_args(self, args): for arg in self._ARG_KEYS: if arg in args: if arg == self.QUERY: if not isinstance(args[arg], (function.Function, collections.Mapping)): raise TypeError(_('The "%(arg)s" argument to ' '"%(fn_name)s" must be a map') % {'arg': arg, 'fn_name': self.fn_name}) return elif arg == self.PORT: port = args[arg] if not isinstance(port, function.Function): if not isinstance(port, six.integer_types): try: port = int(port) except ValueError: raise ValueError( _('Invalid URL port "%(port)s" ' 'for %(fn_name)s called with ' '%(args)s') % {'fn_name': self.fn_name, 'port': port, 'args': args}) if not (0 < port <= 65535): raise ValueError( _('Invalid URL port %d, ' 'must be in range 1-65535') % port) else: if not isinstance(args[arg], (function.Function, six.string_types)): raise TypeError(_('The "%(arg)s" argument to ' '"%(fn_name)s" must be a string') % {'arg': arg, 'fn_name': self.fn_name}) def validate(self): super(MakeURL, self).validate() if not isinstance(self.args, collections.Mapping): raise TypeError(_('The arguments to "%s" must ' 'be a map') % self.fn_name) invalid_keys = set(self.args) - set(self._ARG_KEYS) if invalid_keys: raise ValueError(_('Invalid arguments to "%(fn)s": %(args)s') % {'fn': self.fn_name, 'args': ', '.join(invalid_keys)}) self._check_args(self.args) def result(self): args = function.resolve(self.args) self._check_args(args) scheme = args.get(self.SCHEME, '') if ':' in scheme: raise ValueError(_('URL "%s" should not contain \':\'') % self.SCHEME) def netloc(): username = urlparse.quote(args.get(self.USERNAME, ''), safe='') password = urlparse.quote(args.get(self.PASSWORD, ''), safe='') if username or password: yield username if password: yield ':' yield password yield '@' host = args.get(self.HOST, '') if host.startswith('[') and host.endswith(']'): host = host[1:-1] host = urlparse.quote(host, safe=':') if ':' in host: host 
= '[%s]' % host yield host port = args.get(self.PORT, '') if port: yield ':' yield six.text_type(port) path = urlparse.quote(args.get(self.PATH, '')) query_dict = args.get(self.QUERY, {}) query = urlparse.urlencode(query_dict).replace('%2F', '/') fragment = urlparse.quote(args.get(self.FRAGMENT, '')) return urlparse.urlunsplit((scheme, ''.join(netloc()), path, query, fragment)) class ListConcat(function.Function): """A function for extending lists. Takes the form:: list_concat: - [, ] - [, ] And resolves to:: [, , , ] """ _unique = False def __init__(self, stack, fn_name, args): super(ListConcat, self).__init__(stack, fn_name, args) example = (_('"%s" : [ [ , ], ' '[ , ] ]') % fn_name) self.fmt_data = {'fn_name': fn_name, 'example': example} def result(self): args = function.resolve(self.args) if (isinstance(args, six.string_types) or not isinstance(args, collections.Sequence)): raise TypeError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % self.fmt_data) def ensure_list(m): if m is None: return [] elif (isinstance(m, collections.Sequence) and not isinstance(m, six.string_types)): return m else: msg = _('Incorrect arguments: Items to concat must be lists.') raise TypeError(msg) ret_list = [] for m in args: ret_list.extend(ensure_list(m)) if self._unique: for i in ret_list: while ret_list.count(i) > 1: del ret_list[ret_list.index(i)] return ret_list class ListConcatUnique(ListConcat): """A function for extending lists with unique items. list_concat_unique is identical to the list_concat function, only contains unique items in retuning list. """ _unique = True class Contains(function.Function): """A function for checking whether specific value is in sequence. Takes the form:: contains: - - The value can be any type that you want to check. Returns true if the specific value is in the sequence, otherwise returns false. """ def __init__(self, stack, fn_name, args): super(Contains, self).__init__(stack, fn_name, args) example = '"%s" : [ "value1", [ "value1", "value2"]]' % self.fn_name fmt_data = {'fn_name': self.fn_name, 'example': example} if not self.args or not isinstance(self.args, list): raise TypeError(_('Incorrect arguments to "%(fn_name)s" ' 'should be: %(example)s') % fmt_data) try: self.value, self.sequence = self.args except ValueError: msg = _('Arguments to "%s" must be of the form: ' '[value1, [value1, value2]]') raise ValueError(msg % self.fn_name) def result(self): resolved_value = function.resolve(self.value) resolved_sequence = function.resolve(self.sequence) if not isinstance(resolved_sequence, collections.Sequence): raise TypeError(_('Second argument to "%s" should be ' 'a sequence.') % self.fn_name) return resolved_value in resolved_sequence heat-10.0.2/heat/engine/translation.py0000666000175000017500000003457513343562340017656 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import functools from oslo_log import log as logging import six from heat.common import exception from heat.common.i18n import _ from heat.engine import function LOG = logging.getLogger(__name__)
@functools.total_ordering class TranslationRule(object): """A mechanism for translating one property to another. The mechanism uses a list of rules, each defined by this class, which can be executed. Working principle: during resource creation, after the properties are defined, the resource takes the list of rules returned by the translation_rules method (which should be overridden per resource where needed) and executes each rule using the translate_properties method. The following operations are allowed: - ADD. This rule allows adding a value to list-type properties. Only list-type values can be added to such properties; using it in other cases is prohibited and results in an error. - REPLACE. This rule allows replacing one property value with another. Used for all types of properties. Note that if a property has list type, the value will be replaced in all list elements where needed. If an element in such a property must be replaced by the value of another element of the same property, value_name must be defined. - DELETE. This rule allows deleting a property. If the property has list type, the deletion affects the value in all list elements. - RESOLVE. This rule allows resolving a property using the client and a finder function. Finders may require an additional entity key. """ RULE_KEYS = (ADD, REPLACE, DELETE, RESOLVE) = ('Add', 'Replace', 'Delete', 'Resolve') def __lt__(self, other): rules = [TranslationRule.ADD, TranslationRule.REPLACE, TranslationRule.RESOLVE, TranslationRule.DELETE] idx1 = rules.index(self.rule) idx2 = rules.index(other.rule) return idx1 < idx2 def __init__(self, properties, rule, translation_path, value=None, value_name=None, value_path=None, client_plugin=None, finder=None, entity=None, custom_value_path=None): """Add a new rule to the translation mechanism. :param properties: properties of resource :param rule: rule from RULE_KEYS :param translation_path: list with path to property, whose value will be affected by the rule. :param value: value which will be involved in rule :param value_name: value_name which is used for replacing properties inside list-type properties. :param value_path: path to value, which should be used for translation. :param client_plugin: client plugin that would be used to resolve :param finder: finder method of the client plugin :param entity: some generic finders require an entity to resolve, e.g. the neutron finder function. :param custom_value_path: list-type value path to translate a property which has no schema. """ self.properties = properties self.rule = rule self.translation_path = translation_path self.value = value or None self.value_name = value_name self.value_path = value_path self.client_plugin = client_plugin self.finder = finder self.entity = entity self.custom_value_path = custom_value_path self.validate() def validate(self): if self.rule not in self.RULE_KEYS: raise ValueError(_('There is no rule %(rule)s. List of allowed ' 'rules is: %(rules)s.') % { 'rule': self.rule, 'rules': ', '.join(self.RULE_KEYS)}) if (not isinstance(self.translation_path, list) or len(self.translation_path) == 0): raise ValueError(_('"translation_path" should be non-empty list ' 'with path to translate.')) args = [self.value_path is not None, self.value is not None, self.value_name is not None] if args.count(True) > 1: raise ValueError(_('"value_path", "value" and "value_name" are ' 'mutually exclusive and cannot be specified ' 'at the same time.')) if (self.rule == self.ADD and self.value is not None and not isinstance(self.value, list)): raise ValueError(_('"value" must be list type when rule is Add.')) if (self.rule == self.RESOLVE and not (self.client_plugin or self.finder)): raise ValueError(_('"client_plugin" and "finder" should be ' 'specified for %s rule') % self.RESOLVE) def get_value_absolute_path(self, full_value_name=False): path = [] if self.value_name: if full_value_name: path.extend(self.translation_path[:-1]) path.append(self.value_name) elif self.value_path: path.extend(self.value_path) if self.custom_value_path: path.extend(self.custom_value_path) return path
class Translation(object): """A mechanism for translating one property to another. The mechanism allows properties to be handled: deprecated/hidden properties can be updated to new ones, values resolved, and unnecessary properties removed. It uses a list of TranslationRule objects as the rules for translation. """ def __init__(self, properties=None): """Initialise translation mechanism. :param properties: Properties class object used to resolve rule paths. :var _rules: stores the rules specified via the set_rules method. :var resolved_translations: key-pair dict, where the key is the string-type full path of a property and the value is its resolved value. :var is_active: indicates that a property should not be translated if it is already being translated and is merely having its value fetched. This indicator prevents an infinite loop. :var store_translated_values: defines whether resolved values are stored. Useful for the validation phase, where not all functions can be resolved (``get_attr`` for a not-yet-created resource, for example).
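For example (an illustrative sketch; the property names are arbitrary):: rules = [TranslationRule(props, TranslationRule.REPLACE, ['flavor'], value_path=['instance_type'])] translation = Translation(props) translation.set_rules(rules) would translate a value supplied under the deprecated "instance_type" path to the "flavor" property.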
""" self.properties = properties self._rules = {} self.resolved_translations = {} self.is_active = True self.store_translated_values = True self._ignore_resolve_error = False self._deleted_props = [] self._replaced_props = [] def set_rules(self, rules, client_resolve=True, ignore_resolve_error=False): if not rules: return self._rules = {} self.store_translated_values = client_resolve self._ignore_resolve_error = ignore_resolve_error for rule in rules: if not client_resolve and rule.rule == TranslationRule.RESOLVE: continue key = '.'.join(rule.translation_path) self._rules.setdefault(key, set()).add(rule) if rule.rule == TranslationRule.DELETE: self._deleted_props.append(key) if rule.rule == TranslationRule.REPLACE: path = '.'.join(rule.get_value_absolute_path(True)) self._replaced_props.append(path) def is_deleted(self, key): return (self.is_active and self.cast_key_to_rule(key) in self._deleted_props) def is_replaced(self, key): return (self.is_active and self.cast_key_to_rule(key) in self._replaced_props) def cast_key_to_rule(self, key): return '.'.join([item for item in key.split('.') if not item.isdigit()]) def has_translation(self, key): key = self.cast_key_to_rule(key) return (self.is_active and (key in self._rules or key in self.resolved_translations)) def translate(self, key, prop_value=None, prop_data=None, validate=False): if key in self.resolved_translations: return self.resolved_translations[key] result = prop_value if self._rules.get(self.cast_key_to_rule(key)) is None: return result for rule in sorted(self._rules.get(self.cast_key_to_rule(key))): if rule.rule == TranslationRule.DELETE: if self.store_translated_values: self.resolved_translations[key] = None result = None if rule.rule == TranslationRule.REPLACE: result = self.replace(key, rule, result, prop_data, validate) if rule.rule == TranslationRule.ADD: result = self.add(key, rule, result, prop_data, validate) if rule.rule == TranslationRule.RESOLVE: resolved_value = resolve_and_find(result, rule.client_plugin, rule.finder, rule.entity, self._ignore_resolve_error) if self.store_translated_values: self.resolved_translations[key] = resolved_value result = resolved_value return result def add(self, key, add_rule, prop_value=None, prop_data=None, validate=False): value_path = add_rule.get_value_absolute_path() if prop_value is None: prop_value = [] if not isinstance(prop_value, list): raise ValueError(_('Incorrect translation rule using - cannot ' 'resolve Add rule for non-list translation ' 'value "%s".') % key) translation_value = prop_value if add_rule.value: translation_value.extend(add_rule.value) elif value_path: if self.has_translation('.'.join(value_path)): self.translate('.'.join(value_path), prop_data=prop_data) self.is_active = False value = get_value(value_path, prop_data if add_rule.value_name else self.properties, validate) self.is_active = True if value is not None: translation_value.extend(value if isinstance(value, list) else [value]) if self.store_translated_values: self.resolved_translations[key] = translation_value return translation_value def replace(self, key, replace_rule, prop_value=None, prop_data=None, validate=False): value = None value_path = replace_rule.get_value_absolute_path(full_value_name=True) short_path = replace_rule.get_value_absolute_path() if value_path: if replace_rule.value_name is not None: prop_path = key.split('.')[:-1] prop_path.extend(short_path) prop_path = '.'.join(prop_path) subpath = short_path else: prop_path = '.'.join(value_path) subpath = value_path props = prop_data if 
replace_rule.value_name else self.properties self.is_active = False value = get_value(subpath, props, validate) self.is_active = True if self.has_translation(prop_path): self.translate(prop_path, value, prop_data=prop_data) if value and prop_value: raise exception.StackValidationFailed( message=_('Cannot define the following properties at ' 'the same time: %s') % ', '.join( [self.cast_key_to_rule(key), '.'.join(value_path)])) elif replace_rule.value is not None: value = replace_rule.value result = value if value is not None else prop_value if self.store_translated_values: self.resolved_translations[key] = result if value_path: if replace_rule.value_name: value_path = (key.split('.')[:-1] + short_path) self.resolved_translations['.'.join(value_path)] = None return result def get_value(path, props, validate=False): if not props: return None key = path[0] if isinstance(props, dict): prop = props.get(key) else: prop = props._get_property_value(key, validate) if len(path[1:]) == 0: return prop elif prop is None: return None elif isinstance(prop, list): values = [] for item in prop: values.append(get_value(path[1:], item)) return values elif isinstance(prop, dict): return get_value(path[1:], prop) def resolve_and_find(value, cplugin, finder, entity=None, ignore_resolve_error=False): if isinstance(value, function.Function): value = function.resolve(value) if value: if isinstance(value, list): resolved_value = [] for item in value: resolved_value.append(resolve_and_find(item, cplugin, finder, entity, ignore_resolve_error)) return resolved_value finder = getattr(cplugin, finder) try: if entity: return finder(entity, value) else: return finder(value) except Exception as ex: if ignore_resolve_error: LOG.info("Ignoring error in RESOLVE translation: %s", six.text_type(ex)) return value raise heat-10.0.2/heat/engine/constraints.py0000666000175000017500000006024713343562337017670 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import numbers import re from oslo_cache import core from oslo_config import cfg from oslo_log import log from oslo_utils import reflection from oslo_utils import strutils import six from heat.common import cache from heat.common import exception from heat.common.i18n import _ from heat.engine import resources # decorator that allows to cache the value # of the function based on input arguments MEMOIZE = core.get_memoization_decorator(conf=cfg.CONF, region=cache.get_cache_region(), group="constraint_validation_cache") LOG = log.getLogger(__name__) class Schema(collections.Mapping): """Schema base class for validating properties or parameters. Schema objects are serializable to dictionaries following a superset of the HOT input Parameter schema using dict(). Serialises to JSON in the form:: { 'type': 'list', 'required': False 'constraints': [ { 'length': {'min': 1}, 'description': 'List must not be empty' } ], 'schema': { '*': { 'type': 'string' } }, 'description': 'An example list property.' 
} """ KEYS = ( TYPE, DESCRIPTION, DEFAULT, SCHEMA, REQUIRED, CONSTRAINTS, IMMUTABLE, ) = ( 'type', 'description', 'default', 'schema', 'required', 'constraints', 'immutable', ) # Keywords for data types; each Schema subclass can define its respective # type name used in templates TYPE_KEYS = ( INTEGER_TYPE, STRING_TYPE, NUMBER_TYPE, BOOLEAN_TYPE, MAP_TYPE, LIST_TYPE, ) = ( 'INTEGER', 'STRING', 'NUMBER', 'BOOLEAN', 'MAP', 'LIST', ) # Default type names for data types used in templates; can be overridden by # subclasses TYPES = ( INTEGER, STRING, NUMBER, BOOLEAN, MAP, LIST, ANY, ) = ( 'Integer', 'String', 'Number', 'Boolean', 'Map', 'List', 'Any', ) def __init__(self, data_type, description=None, default=None, schema=None, required=False, constraints=None, label=None, immutable=False): self._len = None self.label = label self.type = data_type if self.type not in self.TYPES: raise exception.InvalidSchemaError( message=_('Invalid type (%s)') % self.type) if required and default is not None: LOG.warning("Option 'required=True' should not be used with " "any 'default' value (%s)", default) self.description = description self.required = required self.immutable = immutable if isinstance(schema, type(self)): if self.type != self.LIST: msg = _('Single schema valid only for ' '%(ltype)s, not %(utype)s') % dict(ltype=self.LIST, utype=self.type) raise exception.InvalidSchemaError(message=msg) self.schema = AnyIndexDict(schema) else: self.schema = schema if self.schema is not None and self.type not in (self.LIST, self.MAP): msg = _('Schema valid only for %(ltype)s or ' '%(mtype)s, not %(utype)s') % dict(ltype=self.LIST, mtype=self.MAP, utype=self.type) raise exception.InvalidSchemaError(message=msg) self.constraints = constraints or [] self.default = default def validate(self, context=None): """Validates the schema. This method checks if the schema itself is valid, and if the default value - if present - complies to the schema's constraints. """ for c in self.constraints: if not self._is_valid_constraint(c): err_msg = _('%(name)s constraint ' 'invalid for %(utype)s') % dict( name=type(c).__name__, utype=self.type) raise exception.InvalidSchemaError(message=err_msg) self._validate_default(context) # validated nested schema(ta) if self.schema: if isinstance(self.schema, AnyIndexDict): self.schema.value.validate(context) else: for nested_schema in six.itervalues(self.schema): nested_schema.validate(context) def _validate_default(self, context): if self.default is not None: try: self.validate_constraints(self.default, context, [CustomConstraint]) except (ValueError, TypeError) as exc: raise exception.InvalidSchemaError( message=_('Invalid default %(default)s (%(exc)s)') % dict(default=self.default, exc=exc)) def set_default(self, default=None): """Set the default value for this Schema object.""" self.default = default def _is_valid_constraint(self, constraint): valid_types = getattr(constraint, 'valid_types', []) return any(self.type == getattr(self, t, None) for t in valid_types) @staticmethod def str_to_num(value): """Convert a string representation of a number into a numeric type.""" if isinstance(value, numbers.Number): return value try: return int(value) except ValueError: return float(value) def to_schema_type(self, value): """Returns the value in the schema's data type.""" try: # We have to be backwards-compatible for Integer and Number # Schema types and try to convert string representations of # number into "real" number types, therefore calling # str_to_num below. 
if self.type == self.INTEGER: num = Schema.str_to_num(value) if isinstance(num, float): raise ValueError(_('%s is not an integer.') % num) return num elif self.type == self.NUMBER: return Schema.str_to_num(value) elif self.type == self.STRING: return six.text_type(value) elif self.type == self.BOOLEAN: return strutils.bool_from_string(str(value), strict=True) except ValueError: raise ValueError(_('Value "%(val)s" is invalid for data type ' '"%(type)s".') % {'val': value, 'type': self.type}) return value def validate_constraints(self, value, context=None, skipped=None): if not skipped: skipped = [] try: for constraint in self.constraints: if type(constraint) not in skipped: constraint.validate(value, self, context) except ValueError as ex: raise exception.StackValidationFailed(message=six.text_type(ex)) def __getitem__(self, key): if key == self.TYPE: return self.type.lower() elif key == self.DESCRIPTION: if self.description is not None: return self.description elif key == self.DEFAULT: if self.default is not None: return self.default elif key == self.SCHEMA: if self.schema is not None: return dict((n, dict(s)) for n, s in self.schema.items()) elif key == self.REQUIRED: return self.required elif key == self.CONSTRAINTS: if self.constraints: return [dict(c) for c in self.constraints] raise KeyError(key) def __iter__(self): for k in self.KEYS: try: self[k] except KeyError: pass else: yield k def __len__(self): if self._len is None: self._len = len(list(iter(self))) return self._len class AnyIndexDict(collections.Mapping): """A Mapping that returns the same value for any integer index. Used for storing the schema for a list. When converted to a dictionary, it contains a single item with the key '*'. """ ANYTHING = '*' def __init__(self, value): self.value = value def __getitem__(self, key): if key != self.ANYTHING and not isinstance(key, six.integer_types): raise KeyError(_('Invalid key %s') % str(key)) return self.value def __iter__(self): yield self.ANYTHING def __len__(self): return 1 class Constraint(collections.Mapping): """Parent class for constraints on allowable values for a Property. Constraints are serializable to dictionaries following the HOT input Parameter constraints schema using dict(). """ (DESCRIPTION,) = ('description',) def __init__(self, description=None): self.description = description def __str__(self): def desc(): if self.description: yield self.description yield self._str() return '\n'.join(desc()) def validate(self, value, schema=None, context=None): if not self._is_valid(value, schema, context): if self.description: err_msg = self.description else: err_msg = self._err_msg(value) raise ValueError(err_msg) @classmethod def _name(cls): return '_'.join(w.lower() for w in re.findall('[A-Z]?[a-z]+', cls.__name__)) def __getitem__(self, key): if key == self.DESCRIPTION: if self.description is None: raise KeyError(key) return self.description if key == self._name(): return self._constraint() raise KeyError(key) def __iter__(self): if self.description is not None: yield self.DESCRIPTION yield self._name() def __len__(self): return 2 if self.description is not None else 1 class Range(Constraint): """Constrain values within a range. 
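For example, a minimal usage sketch (the values are illustrative)::

    constraint = Range(min=1, max=10)
    constraint.validate(5)   # passes
    constraint.validate(11)  # raises ValueError (out of range)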
Serializes to JSON as:: { 'range': {'min': <min>, 'max': <max>}, 'description': <description> } """ (MIN, MAX) = ('min', 'max') valid_types = (Schema.INTEGER_TYPE, Schema.NUMBER_TYPE,) def __init__(self, min=None, max=None, description=None): super(Range, self).__init__(description) self.min = min self.max = max for param in (min, max): if not isinstance(param, (float, six.integer_types, type(None))): raise exception.InvalidSchemaError( message=_('min/max must be numeric')) if min is max is None: raise exception.InvalidSchemaError( message=_('A range constraint must have a min value and/or ' 'a max value specified.')) def _str(self): if self.max is None: fmt = _('The value must be at least %(min)s.') elif self.min is None: fmt = _('The value must be no greater than %(max)s.') else: fmt = _('The value must be in the range %(min)s to %(max)s.') return fmt % self._constraint() def _err_msg(self, value): return '%s is out of range (min: %s, max: %s)' % (value, self.min, self.max) def _is_valid(self, value, schema, context): value = Schema.str_to_num(value) if self.min is not None: if value < self.min: return False if self.max is not None: if value > self.max: return False return True def _constraint(self): def constraints(): if self.min is not None: yield self.MIN, self.min if self.max is not None: yield self.MAX, self.max return dict(constraints()) class Length(Range): """Constrain the length of values within a range. Serializes to JSON as:: { 'length': {'min': <min>, 'max': <max>}, 'description': <description> } """ valid_types = (Schema.STRING_TYPE, Schema.LIST_TYPE, Schema.MAP_TYPE,) def __init__(self, min=None, max=None, description=None): if min is max is None: raise exception.InvalidSchemaError( message=_('A length constraint must have a min value and/or ' 'a max value specified.')) super(Length, self).__init__(min, max, description) for param in (min, max): if not isinstance(param, (six.integer_types, type(None))): msg = _('min/max length must be integral') raise exception.InvalidSchemaError(message=msg) def _str(self): if self.max is None: fmt = _('The length must be at least %(min)s.') elif self.min is None: fmt = _('The length must be no greater than %(max)s.') else: fmt = _('The length must be in the range %(min)s to %(max)s.') return fmt % self._constraint() def _err_msg(self, value): return 'length (%d) is out of range (min: %s, max: %s)' % (len(value), self.min, self.max) def _is_valid(self, value, schema, context): return super(Length, self)._is_valid(len(value), schema, context) class Modulo(Constraint): """Constrain values to an offset modulo a step.
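For example, a minimal usage sketch (the step and offset are illustrative)::

    constraint = Modulo(step=2, offset=1)
    constraint.validate(5)  # passes: 5 % 2 == 1
    constraint.validate(4)  # raises ValueError: 4 % 2 != 1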
Serializes to JSON as:: { 'modulo': {'step': <step>, 'offset': <offset>}, 'description': <description> } """ (STEP, OFFSET) = ('step', 'offset') valid_types = (Schema.INTEGER_TYPE, Schema.NUMBER_TYPE,) def __init__(self, step=None, offset=None, description=None): super(Modulo, self).__init__(description) self.step = step self.offset = offset if step is None or offset is None: raise exception.InvalidSchemaError( message=_('A modulo constraint must have a step value and ' 'an offset value specified.')) for param in (step, offset): if not isinstance(param, (float, six.integer_types, type(None))): raise exception.InvalidSchemaError( message=_('step/offset must be numeric')) if not int(param) == param: raise exception.InvalidSchemaError( message=_('step/offset must be integer')) step, offset = int(step), int(offset) if step == 0: raise exception.InvalidSchemaError(message=_('step cannot be 0.')) if abs(offset) >= abs(step): raise exception.InvalidSchemaError( message=_('offset must be smaller (by absolute value) ' 'than step.')) if step * offset < 0: raise exception.InvalidSchemaError( message=_('step and offset must be both positive or both ' 'negative.')) def _str(self): if self.step is None or self.offset is None: fmt = _('The values must be specified.') else: fmt = _('The value must be a multiple of %(step)s ' 'with an offset of %(offset)s.') return fmt % self._constraint() def _err_msg(self, value): return '%s is not a multiple of %s with an offset of %s' % ( value, self.step, self.offset) def _is_valid(self, value, schema, context): value = Schema.str_to_num(value) if value % self.step != self.offset: return False return True def _constraint(self): def constraints(): if self.step is not None: yield self.STEP, self.step if self.offset is not None: yield self.OFFSET, self.offset return dict(constraints()) class AllowedValues(Constraint): """Constrain values to a predefined set. Serializes to JSON as:: { 'allowed_values': [<value>, <value>, ...], 'description': <description> } """ valid_types = (Schema.STRING_TYPE, Schema.INTEGER_TYPE, Schema.NUMBER_TYPE, Schema.BOOLEAN_TYPE, Schema.LIST_TYPE,) def __init__(self, allowed, description=None): super(AllowedValues, self).__init__(description) if (not isinstance(allowed, collections.Sequence) or isinstance(allowed, six.string_types)): raise exception.InvalidSchemaError( message=_('AllowedValues must be a list')) self.allowed = tuple(allowed) def _str(self): allowed = ', '.join(str(a) for a in self.allowed) return _('Allowed values: %s') % allowed def _err_msg(self, value): allowed = '[%s]' % ', '.join(str(a) for a in self.allowed) return '"%s" is not an allowed value %s' % (value, allowed) def _is_valid(self, value, schema, context): # For list values, check that all elements of the list are contained # in the allowed list. if isinstance(value, list): return all(v in self.allowed for v in value) if schema is not None: _allowed = tuple(schema.to_schema_type(v) for v in self.allowed) return schema.to_schema_type(value) in _allowed return value in self.allowed def _constraint(self): return list(self.allowed) class AllowedPattern(Constraint): """Constrain values to a predefined regular expression pattern.
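For example, a minimal usage sketch (the pattern is illustrative)::

    constraint = AllowedPattern('[a-z]+')
    constraint.validate('abc')   # passes: the whole string matches
    constraint.validate('abc1')  # raises ValueError: trailing '1'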
Serializes to JSON as:: { 'allowed_pattern': <pattern>, 'description': <description> } """ valid_types = (Schema.STRING_TYPE,) def __init__(self, pattern, description=None): super(AllowedPattern, self).__init__(description) if not isinstance(pattern, six.string_types): raise exception.InvalidSchemaError( message=_('AllowedPattern must be a string')) self.pattern = pattern self.match = re.compile(pattern).match def _str(self): return _('Value must match pattern: %s') % self.pattern def _err_msg(self, value): return '"%s" does not match pattern "%s"' % (value, self.pattern) def _is_valid(self, value, schema, context): match = self.match(value) return match is not None and match.end() == len(value) def _constraint(self): return self.pattern class CustomConstraint(Constraint): """A constraint delegating validation to an external class.""" valid_types = (Schema.STRING_TYPE, Schema.INTEGER_TYPE, Schema.NUMBER_TYPE, Schema.BOOLEAN_TYPE, Schema.LIST_TYPE) def __init__(self, name, description=None, environment=None): super(CustomConstraint, self).__init__(description) self.name = name self._environment = environment self._custom_constraint = None def _constraint(self): return self.name @property def custom_constraint(self): if self._custom_constraint is None: if self._environment is None: self._environment = resources.global_env() constraint_class = self._environment.get_constraint(self.name) if constraint_class: self._custom_constraint = constraint_class() return self._custom_constraint def _str(self): message = getattr(self.custom_constraint, "message", None) if not message: message = _('Value must be of type %s') % self.name return message def _err_msg(self, value): constraint = self.custom_constraint if constraint is None: return _('"%(value)s" does not validate %(name)s ' '(constraint not found)') % { "value": value, "name": self.name} error = getattr(constraint, "error", None) if error: return error(value) return _('"%(value)s" does not validate %(name)s') % { "value": value, "name": self.name} def _is_valid(self, value, schema, context): constraint = self.custom_constraint if not constraint: return False return constraint.validate(value, context) class BaseCustomConstraint(object): """A base class for validation using API clients. It provides a better error message and reduces some duplication. Subclasses must provide `expected_exceptions` and implement `validate_with_client`. """ expected_exceptions = (exception.EntityNotFound,) resource_client_name = None resource_getter_name = None _error_message = None def error(self, value): if self._error_message is None: return _("Error validating value '%(value)s'") % {"value": value} return _("Error validating value '%(value)s': %(message)s") % { "value": value, "message": self._error_message} def validate(self, value, context): @MEMOIZE def check_cache_or_validate_value(cache_value_prefix, value_to_validate): """Check for a cached validation result, or validate the value. The function first checks whether the value has already been validated and the result stored in the cache. If not, it performs the validation and stores the result in the cache. If caching is disabled, the value is validated on every request. :param cache_value_prefix: cache prefix used to distinguish the value in the heat cache. The cache key is: cache_value_prefix + value_to_validate.
:param value_to_validate: the value that needs to be validated :return: True if the value is valid, otherwise False """ try: self.validate_with_client(context.clients, value_to_validate) except self.expected_exceptions as e: self._error_message = str(e) return False else: return True class_name = reflection.get_class_name(self, fully_qualified=False) cache_value_prefix = "{0}:{1}".format(class_name, six.text_type(context.tenant_id)) validation_result = check_cache_or_validate_value( cache_value_prefix, value) # If validation failed, do not cache the result: the underlying issue # is likely to be fixed soon (e.g. by an administrator), and the user # should not have to wait for the cache entry to expire if not validation_result: check_cache_or_validate_value.invalidate(cache_value_prefix, value) return validation_result def validate_with_client(self, client, resource_id): if self.resource_client_name and self.resource_getter_name: getattr(client.client_plugin(self.resource_client_name), self.resource_getter_name)(resource_id) else: raise exception.InvalidSchemaError( message=_('Client name and resource getter name must be ' 'specified.')) heat-10.0.2/heat/engine/support.py0000666000175000017500000000521713343562340017023 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common.i18n import _ SUPPORT_STATUSES = (UNKNOWN, SUPPORTED, DEPRECATED, UNSUPPORTED, HIDDEN ) = ('UNKNOWN', 'SUPPORTED', 'DEPRECATED', 'UNSUPPORTED', 'HIDDEN') class SupportStatus(object): def __init__(self, status=SUPPORTED, message=None, version=None, previous_status=None, substitute_class=None): """Use SupportStatus to describe the current status of an object. :param status: current status of the object. :param version: version of OpenStack from which the current status is valid. It may be None, but it needs to be defined for correct doc generation. :param message: specific status message for the object. :param substitute_class: assign a substitute class.
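A minimal sketch of a deprecation chain (the version strings are
illustrative)::

    SupportStatus(status=DEPRECATED, version='10.0.0',
                  previous_status=SupportStatus(version='8.0.0'))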
""" self.status = status self.substitute_class = substitute_class self.message = message self.version = version self.previous_status = previous_status self.validate() def validate(self): if (self.previous_status is not None and not isinstance(self.previous_status, SupportStatus)): raise ValueError(_('previous_status must be SupportStatus ' 'instead of %s') % type(self.previous_status)) if self.status not in SUPPORT_STATUSES: self.status = UNKNOWN self.message = _("Specified status is invalid, defaulting to" " %s") % UNKNOWN self.version = None self.previous_status = None def to_dict(self): return {'status': self.status, 'message': self.message, 'version': self.version, 'previous_status': self.previous_status.to_dict() if self.previous_status is not None else None} def is_substituted(self, substitute_class): if self.substitute_class is None: return False return substitute_class is self.substitute_class def is_valid_status(status): return status in SUPPORT_STATUSES heat-10.0.2/heat/engine/stk_defn.py0000666000175000017500000002561513343562340017110 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools import six from heat.common import exception from heat.engine import attributes from heat.engine import status class StackDefinition(object): """Class representing the definition of a Stack, but not its current state. This is the interface through which template functions will access data about the stack definition, including the template and current values of resource reference IDs and attributes. This API can be considered stable by third-party Template or Function plugins, and no part of it should be changed or removed without an appropriate deprecation process. 
""" def __init__(self, context, template, stack_identifier, resource_data, parent_info=None): self._context = context self._template = template self._resource_data = {} if resource_data is None else resource_data self._parent_info = parent_info self._zones = None self.parameters = template.parameters(stack_identifier, template.env.params, template.env.param_defaults) self._resource_defns = None self._resources = {} self._output_defns = None def clone_with_new_template(self, new_template, stack_identifier, clear_resource_data=False): """Create a new StackDefinition with a different template.""" res_data = {} if clear_resource_data else dict(self._resource_data) return type(self)(self._context, new_template, stack_identifier, res_data, self._parent_info) @property def t(self): """The stack's template.""" return self._template @property def env(self): """The stack's environment.""" return self._template.env def _load_rsrc_defns(self): self._resource_defns = self._template.resource_definitions(self) def resource_definition(self, resource_name): """Return the definition of the given resource.""" if self._resource_defns is None: self._load_rsrc_defns() return self._resource_defns[resource_name] def enabled_rsrc_names(self): """Return the set of names of all enabled resources in the template.""" if self._resource_defns is None: self._load_rsrc_defns() return set(self._resource_defns) def _load_output_defns(self): self._output_defns = self._template.outputs(self) def output_definition(self, output_name): """Return the definition of the given output.""" if self._output_defns is None: self._load_output_defns() return self._output_defns[output_name] def enabled_output_names(self): """Return the set of names of all enabled outputs in the template.""" if self._output_defns is None: self._load_output_defns() return set(self._output_defns) def all_rsrc_names(self): """Return the set of names of all resources in the template. This includes resources that are disabled due to false conditionals. """ if hasattr(self._template, 'RESOURCES'): return set(self._template.get(self._template.RESOURCES, self._resource_defns or [])) else: return self.enabled_rsrc_names() def get_availability_zones(self): """Return the list of Nova availability zones.""" if self._zones is None: nova = self._context.clients.client('nova') zones = nova.availability_zones.list(detailed=False) self._zones = [zone.zoneName for zone in zones] return self._zones def __contains__(self, resource_name): """Return True if the given resource name is present and enabled.""" if self._resource_defns is not None: return resource_name in self._resource_defns else: # In Cfn templates, we need to know whether Ref refers to a # resource or a parameter in order to parse the resource # definitions return resource_name in self._template[self._template.RESOURCES] def __getitem__(self, resource_name): """Return a proxy for the given resource.""" if resource_name not in self._resources: res_proxy = ResourceProxy(resource_name, self.resource_definition(resource_name), self._resource_data.get(resource_name)) self._resources[resource_name] = res_proxy return self._resources[resource_name] @property def parent_resource(self): """Return a proxy for the parent resource. Returns None if the stack is not a provider stack for a TemplateResource. """ return self._parent_info class ResourceProxy(status.ResourceStatus): """A lightweight API for essential data about a resource. 
This is the interface through which template functions will access data about particular resources in the stack definition, such as the resource definition and current values of reference IDs and attributes. Resource proxies for some or all resources in the stack will potentially be loaded for every check resource operation, so it is essential that this API is implemented efficiently, using only the data received over RPC and without reference to the resource data stored in the database. This API can be considered stable by third-party Template or Function plugins, and no part of it should be changed or removed without an appropriate deprecation process. """ __slots__ = ('name', '_definition', '_resource_data') def __init__(self, name, definition, resource_data): self.name = name self._definition = definition self._resource_data = resource_data @property def t(self): """The resource definition.""" return self._definition def _res_data(self): assert self._resource_data is not None, "Resource data not available" return self._resource_data @property def attributes_schema(self): """A set of the valid top-level attribute names. This is provided for backwards-compatibility for functions that require a container with all of the valid attribute names in order to validate the template. Other operations on it are invalid because we don't actually have access to the attributes schema here; hence we return a set instead of a dict. """ return set(self._res_data().attribute_names()) @property def external_id(self): """The external ID of the resource.""" return self._definition.external_id() @property def state(self): """The current state (action, status) of the resource.""" return self.action, self.status @property def action(self): """The current action of the resource.""" if self._resource_data is None: return self.INIT return self._resource_data.action @property def status(self): """The current status of the resource.""" if self._resource_data is None: return self.COMPLETE return self._resource_data.status def FnGetRefId(self): """For the intrinsic function get_resource.""" if self._resource_data is None: return self.name return self._resource_data.reference_id() def FnGetAtt(self, attr, *path): """For the intrinsic function get_attr.""" if path: attr = (attr,) + path try: return self._res_data().attribute(attr) except KeyError: raise exception.InvalidTemplateAttribute(resource=self.name, key=attr) def FnGetAtts(self): """For the intrinsic function get_attr when getting all attributes. :returns: a dict of all of the resource's attribute values, excluding the "show" attribute. """ all_attrs = self._res_data().attributes() return dict((k, v) for k, v in six.iteritems(all_attrs) if k != attributes.SHOW_ATTR) def update_resource_data(stack_definition, resource_name, resource_data): """Store new resource state data for the specified resource. This function enables the legacy (non-convergence) path to store updated NodeData as resources are created/updated in a single StackDefinition that lasts for the entire lifetime of the stack operation. """ stack_definition._resource_data[resource_name] = resource_data stack_definition._resources.pop(resource_name, None) # Clear the cached dep_attrs for any resource or output that directly # depends on the resource whose data we are updating. This ensures that if # any of the data we just updated is referenced in the path of a get_attr # function, future calls to dep_attrs() will reflect this new data. 
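# For example (hypothetical names): if the data for 'server' was just
# updated, an output referencing {get_attr: [server, first_address]}
# has its cached dep_attrs cleared below so that they are recomputed on
# the next dep_attrs() call.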
res_defns = stack_definition._resource_defns or {} op_defns = stack_definition._output_defns or {} all_defns = itertools.chain(six.itervalues(res_defns), six.itervalues(op_defns)) for defn in all_defns: if resource_name in defn.required_resource_names(): defn._all_dep_attrs = None def add_resource(stack_definition, resource_definition): """Insert the given resource definition into the stack definition. Add the resource to the template and store any temporary data. """ resource_name = resource_definition.name stack_definition._resources.pop(resource_name, None) stack_definition._resource_data.pop(resource_name, None) stack_definition.t.add_resource(resource_definition) if stack_definition._resource_defns is not None: stack_definition._resource_defns[resource_name] = resource_definition def remove_resource(stack_definition, resource_name): """Remove the named resource from the stack definition. Remove the resource from the template and eliminate references to it. """ stack_definition.t.remove_resource(resource_name) if stack_definition._resource_defns is not None: stack_definition._resource_defns.pop(resource_name, None) stack_definition._resource_data.pop(resource_name, None) stack_definition._resources.pop(resource_name, None) heat-10.0.2/heat/engine/status.py0000666000175000017500000000174413343562340016633 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. class ResourceStatus(object): __slots__ = tuple() ACTIONS = ( INIT, CREATE, DELETE, UPDATE, ROLLBACK, SUSPEND, RESUME, ADOPT, SNAPSHOT, CHECK, ) = ( 'INIT', 'CREATE', 'DELETE', 'UPDATE', 'ROLLBACK', 'SUSPEND', 'RESUME', 'ADOPT', 'SNAPSHOT', 'CHECK', ) STATUSES = ( IN_PROGRESS, FAILED, COMPLETE, ) = ( 'IN_PROGRESS', 'FAILED', 'COMPLETE', ) heat-10.0.2/heat/engine/check_resource.py0000666000175000017500000004515413343562351020301 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import six import eventlet.queue import functools from oslo_log import log as logging from oslo_utils import excutils from heat.common import exception from heat.engine import resource from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import sync_point from heat.objects import resource as resource_objects from heat.rpc import api as rpc_api from heat.rpc import listener_client LOG = logging.getLogger(__name__) class CancelOperation(BaseException): """Exception to cancel an in-progress operation on a resource. 
This exception is raised when operations on a resource are cancelled. """ def __init__(self): return super(CancelOperation, self).__init__('user triggered cancel') class CheckResource(object): def __init__(self, engine_id, rpc_client, thread_group_mgr, msg_queue, input_data): self.engine_id = engine_id self._rpc_client = rpc_client self.thread_group_mgr = thread_group_mgr self.msg_queue = msg_queue self.input_data = input_data def _stale_resource_needs_retry(self, cnxt, rsrc, prev_template_id): """Determine whether a resource needs retrying after failure to lock. Return True if we need to retry the check operation because of a failure to acquire the lock. This can be either because the engine holding the lock is no longer working, or because no other engine had locked the resource and the data was just out of date. In the former case, the lock will be stolen and the resource status changed to FAILED. """ fields = {'current_template_id', 'engine_id'} rs_obj = resource_objects.Resource.get_obj(cnxt, rsrc.id, refresh=True, fields=fields) if rs_obj.engine_id not in (None, self.engine_id): if not listener_client.EngineListenerClient( rs_obj.engine_id).is_alive(cnxt): # steal the lock. rs_obj.update_and_save({'engine_id': None}) # set the resource state as failed status_reason = ('Worker went down ' 'during resource %s' % rsrc.action) rsrc.state_set(rsrc.action, rsrc.FAILED, six.text_type(status_reason)) return True elif (rs_obj.engine_id is None and rs_obj.current_template_id == prev_template_id): LOG.debug('Resource id=%d stale; retrying check', rsrc.id) return True LOG.debug('Resource id=%d modified by another traversal', rsrc.id) return False def _trigger_rollback(self, stack): LOG.info("Triggering rollback of %(stack_name)s %(action)s ", {'action': stack.action, 'stack_name': stack.name}) stack.rollback() def _handle_failure(self, cnxt, stack, failure_reason): updated = stack.state_set(stack.action, stack.FAILED, failure_reason) if not updated: return False if (not stack.disable_rollback and stack.action in (stack.CREATE, stack.ADOPT, stack.UPDATE, stack.RESTORE)): self._trigger_rollback(stack) else: stack.purge_db() return True def _handle_resource_failure(self, cnxt, is_update, rsrc_id, stack, failure_reason): failure_handled = self._handle_failure(cnxt, stack, failure_reason) if not failure_handled: # Another concurrent update has taken over. But there is a # possibility for that update to be waiting for this rsrc to # complete, hence retrigger current rsrc for latest traversal. 
self._retrigger_new_traversal(cnxt, stack.current_traversal, is_update, stack.id, rsrc_id) def _retrigger_new_traversal(self, cnxt, current_traversal, is_update, stack_id, rsrc_id): latest_stack = parser.Stack.load(cnxt, stack_id=stack_id, force_reload=True) if current_traversal != latest_stack.current_traversal: self.retrigger_check_resource(cnxt, is_update, rsrc_id, latest_stack) def _handle_stack_timeout(self, cnxt, stack): failure_reason = u'Timed out' self._handle_failure(cnxt, stack, failure_reason) def _handle_resource_replacement(self, cnxt, current_traversal, new_tmpl_id, rsrc, stack, adopt_stack_data): """Create a replacement resource and trigger a check on it.""" try: new_res_id = rsrc.make_replacement(new_tmpl_id) except exception.UpdateInProgress: LOG.info("No replacement created - " "resource already locked by new traversal") return if new_res_id is None: LOG.info("No replacement created - " "new traversal already in progress") self._retrigger_new_traversal(cnxt, current_traversal, True, stack.id, rsrc.id) return LOG.info("Replacing resource with new id %s", new_res_id) rpc_data = sync_point.serialize_input_data(self.input_data) self._rpc_client.check_resource(cnxt, new_res_id, current_traversal, rpc_data, True, adopt_stack_data) def _do_check_resource(self, cnxt, current_traversal, tmpl, resource_data, is_update, rsrc, stack, adopt_stack_data): prev_template_id = rsrc.current_template_id try: if is_update: try: check_resource_update(rsrc, tmpl.id, resource_data, self.engine_id, stack, self.msg_queue) except resource.UpdateReplace: self._handle_resource_replacement(cnxt, current_traversal, tmpl.id, rsrc, stack, adopt_stack_data) return False else: check_resource_cleanup(rsrc, tmpl.id, resource_data, self.engine_id, stack.time_remaining(), self.msg_queue) return True except exception.UpdateInProgress: if self._stale_resource_needs_retry(cnxt, rsrc, prev_template_id): rpc_data = sync_point.serialize_input_data(self.input_data) self._rpc_client.check_resource(cnxt, rsrc.id, current_traversal, rpc_data, is_update, adopt_stack_data) except exception.ResourceFailure as ex: action = ex.action or rsrc.action reason = 'Resource %s failed: %s' % (action, six.text_type(ex)) self._handle_resource_failure(cnxt, is_update, rsrc.id, stack, reason) except scheduler.Timeout: self._handle_resource_failure(cnxt, is_update, rsrc.id, stack, u'Timed out') except CancelOperation as ex: # Stack is already marked FAILED, so we just need to retrigger # in case a new traversal has started and is waiting on us. self._retrigger_new_traversal(cnxt, current_traversal, is_update, stack.id, rsrc.id) return False def retrigger_check_resource(self, cnxt, is_update, resource_id, stack): current_traversal = stack.current_traversal graph = stack.convergence_dependencies.graph() key = (resource_id, is_update) if is_update: # When re-trigger received for update in latest traversal, first # check if update key is available in graph. # if No, then latest traversal is waiting for delete. if (resource_id, is_update) not in graph: key = (resource_id, not is_update) else: # When re-trigger received for delete in latest traversal, first # check if update key is available in graph, # if yes, then latest traversal is waiting for update. 
if (resource_id, True) in graph: # not is_update evaluates to True below, which means update key = (resource_id, not is_update) LOG.info('Re-trigger resource: (%(key1)s, %(key2)s)', {'key1': key[0], 'key2': key[1]}) predecessors = set(graph[key]) try: propagate_check_resource(cnxt, self._rpc_client, resource_id, current_traversal, predecessors, key, None, key[1], None) except exception.EntityNotFound as e: if e.entity != "Sync Point": raise def _initiate_propagate_resource(self, cnxt, resource_id, current_traversal, is_update, rsrc, stack): deps = stack.convergence_dependencies graph = deps.graph() graph_key = parser.ConvergenceNode(resource_id, is_update) if graph_key not in graph and rsrc.replaces is not None: # If we are a replacement, impersonate the replaced resource for # the purposes of calculating whether subsequent resources are # ready, since everybody has to work from the same version of the # graph. Our real resource ID is sent in the input_data, so the # dependencies will get updated to point to this resource in time # for the next traversal. graph_key = parser.ConvergenceNode(rsrc.replaces, is_update) def _get_input_data(req_node, input_forward_data=None): if req_node.is_update: if input_forward_data is None: return rsrc.node_data().as_dict() else: # do not re-resolve attrs return input_forward_data else: # Don't send data if initiating clean-up for self i.e. # initiating delete of a replaced resource if req_node.rsrc_id != graph_key.rsrc_id: # send replaced resource as needed_by if it exists return (rsrc.replaced_by if rsrc.replaced_by is not None else resource_id) return None try: input_forward_data = None for req_node in sorted(deps.required_by(graph_key), key=lambda n: n.is_update): input_data = _get_input_data(req_node, input_forward_data) if req_node.is_update: input_forward_data = input_data propagate_check_resource( cnxt, self._rpc_client, req_node.rsrc_id, current_traversal, set(graph[req_node]), graph_key, input_data, req_node.is_update, stack.adopt_stack_data) if is_update: if input_forward_data is None: # we haven't resolved attribute data for the resource, # so clear any old attributes so they may be re-resolved rsrc.clear_stored_attributes() else: rsrc.store_attributes() check_stack_complete(cnxt, stack, current_traversal, graph_key.rsrc_id, deps, graph_key.is_update) except exception.EntityNotFound as e: if e.entity == "Sync Point": # Reload the stack to determine the current traversal, and # check the SyncPoint for the current node to determine if # it is ready. If it is, then retrigger the current node # with the appropriate data for the latest traversal. stack = parser.Stack.load(cnxt, stack_id=rsrc.stack.id, force_reload=True) if current_traversal == stack.current_traversal: LOG.debug('[%s] Traversal sync point missing.', current_traversal) return self.retrigger_check_resource(cnxt, is_update, resource_id, stack) else: raise def check(self, cnxt, resource_id, current_traversal, resource_data, is_update, adopt_stack_data, rsrc, stack): """Process a node in the dependency graph. The node may be associated with either an update or a cleanup of its associated resource. 
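For example, the two node kinds for a single resource (an illustrative
sketch of the convention)::

    (resource_id, True)   # update node: create or update the resource
    (resource_id, False)  # cleanup node: delete a replaced or removed
                          # version of the resource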
""" if stack.has_timed_out(): self._handle_stack_timeout(cnxt, stack) return tmpl = stack.t stack.adopt_stack_data = adopt_stack_data stack.thread_group_mgr = self.thread_group_mgr if is_update: if (rsrc.replaced_by is not None and rsrc.current_template_id != tmpl.id): LOG.debug('Resource %s with id %s already replaced by %s; ' 'not checking', rsrc.name, resource_id, rsrc.replaced_by) return try: check_resource_done = self._do_check_resource(cnxt, current_traversal, tmpl, resource_data, is_update, rsrc, stack, adopt_stack_data) if check_resource_done: # initiate check on next set of resources from graph self._initiate_propagate_resource(cnxt, resource_id, current_traversal, is_update, rsrc, stack) except BaseException as exc: with excutils.save_and_reraise_exception(): msg = six.text_type(exc) LOG.exception("Unexpected exception in resource check.") self._handle_resource_failure(cnxt, is_update, rsrc.id, stack, msg) def load_resource(cnxt, resource_id, resource_data, current_traversal, is_update): try: return resource.Resource.load(cnxt, resource_id, current_traversal, is_update, resource_data) except (exception.ResourceNotFound, exception.NotFound): # can be ignored return None, None, None def check_stack_complete(cnxt, stack, current_traversal, sender_id, deps, is_update): """Mark the stack complete if the update is complete. Complete is currently in the sense that all desired resources are in service, not that superfluous ones have been cleaned up. """ roots = set(deps.roots()) if (sender_id, is_update) not in roots: return def mark_complete(stack_id, data): stack.mark_complete() sender_key = (sender_id, is_update) sync_point.sync(cnxt, stack.id, current_traversal, True, mark_complete, roots, {sender_key: None}) def propagate_check_resource(cnxt, rpc_client, next_res_id, current_traversal, predecessors, sender_key, sender_data, is_update, adopt_stack_data): """Trigger processing of node if all of its dependencies are satisfied.""" def do_check(entity_id, data): rpc_client.check_resource(cnxt, entity_id, current_traversal, data, is_update, adopt_stack_data) sync_point.sync(cnxt, next_res_id, current_traversal, is_update, do_check, predecessors, {sender_key: sender_data}) def _check_for_message(msg_queue): if msg_queue is None: return try: message = msg_queue.get_nowait() except eventlet.queue.Empty: return if message == rpc_api.THREAD_CANCEL: raise CancelOperation LOG.error('Unknown message "%s" received', message) def check_resource_update(rsrc, template_id, resource_data, engine_id, stack, msg_queue): """Create or update the Resource if appropriate.""" check_message = functools.partial(_check_for_message, msg_queue) if rsrc.action == resource.Resource.INIT: rsrc.create_convergence(template_id, resource_data, engine_id, stack.time_remaining(), check_message) else: rsrc.update_convergence(template_id, resource_data, engine_id, stack.time_remaining(), stack, check_message) def check_resource_cleanup(rsrc, template_id, resource_data, engine_id, timeout, msg_queue): """Delete the Resource if appropriate.""" check_message = functools.partial(_check_for_message, msg_queue) rsrc.delete_convergence(template_id, resource_data, engine_id, timeout, check_message) heat-10.0.2/heat/engine/lifecycle_plugin.py0000666000175000017500000000413213343562337020625 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. class LifecyclePlugin(object): """Base class for pre-op and post-op work on a stack. Implementations should extend this class and override the methods. """ def do_pre_op(self, cnxt, stack, current_stack=None, action=None): """Method to be run by heat before stack operations.""" pass def do_post_op(self, cnxt, stack, current_stack=None, action=None, is_stack_failure=False): """Method to be run by heat after stack operations, including failures. On failure to execute all the registered pre_ops, this method will be called if and only if the corresponding pre_op was successfully called. On failures of the actual stack operation, this method will be called if all the pre operations were successfully called. """ pass def get_ordinal(self): """Get the sort order for pre and post operation execution. The values returned by get_ordinal are used to create a partial order for pre and post operation method invocations. The default ordinal value of 100 may be overridden. If class1inst.ordinal() < class2inst.ordinal(), then the method on class1inst will be executed before the method on class2inst. If class1inst.ordinal() > class2inst.ordinal(), then the method on class1inst will be executed after the method on class2inst. If class1inst.ordinal() == class2inst.ordinal(), then the order of method invocation is indeterminate. """ return 100 heat-10.0.2/heat/objects/0000775000175000017500000000000013343562672015122 5ustar zuulzuul00000000000000heat-10.0.2/heat/objects/raw_template_files.py0000666000175000017500000000320013343562340021327 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""RawTemplateFiles object.""" from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base from heat.objects import fields as heat_fields @heat_base.HeatObjectRegistry.register class RawTemplateFiles( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): # Version 1.0: Initial Version VERSION = '1.0' fields = { 'id': fields.IntegerField(), 'files': heat_fields.JsonField(read_only=True), } @staticmethod def _from_db_object(context, tmpl_files, db_tmpl_files): for field in tmpl_files.fields: tmpl_files[field] = db_tmpl_files[field] tmpl_files._context = context tmpl_files.obj_reset_changes() return tmpl_files @classmethod def create(cls, context, values): return cls._from_db_object(context, cls(), db_api.raw_template_files_create(context, values)) heat-10.0.2/heat/objects/stack_lock.py0000666000175000017500000000335413343562340017610 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """StackLock object.""" from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base class StackLock( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'engine_id': fields.StringField(nullable=True), 'stack_id': fields.StringField(), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), } @classmethod def create(cls, context, stack_id, engine_id): return db_api.stack_lock_create(context, stack_id, engine_id) @classmethod def steal(cls, context, stack_id, old_engine_id, new_engine_id): return db_api.stack_lock_steal(context, stack_id, old_engine_id, new_engine_id) @classmethod def release(cls, context, stack_id, engine_id): return db_api.stack_lock_release(context, stack_id, engine_id) @classmethod def get_engine_id(cls, context, stack_id): return db_api.stack_lock_get_engine_id(context, stack_id) heat-10.0.2/heat/objects/base.py0000666000175000017500000000240313343562340016377 0ustar zuulzuul00000000000000# Copyright 2015 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Heat common internal object model""" import weakref from oslo_versionedobjects import base as ovoo_base class HeatObjectRegistry(ovoo_base.VersionedObjectRegistry): pass class HeatObject(ovoo_base.VersionedObject): OBJ_PROJECT_NAMESPACE = 'heat' VERSION = '1.0' @property def _context(self): if self._contextref is None: return ctxt = self._contextref() assert ctxt is not None, "Need a reference to the context" return ctxt @_context.setter def _context(self, context): if context: self._contextref = weakref.ref(context) else: self._contextref = None heat-10.0.2/heat/objects/resource_data.py0000666000175000017500000000545013343562340020312 0ustar zuulzuul00000000000000# Copyright 2014 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ResourceData object.""" from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.common import exception from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base class ResourceData( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'id': fields.IntegerField(), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), 'key': fields.StringField(nullable=True), 'value': fields.StringField(nullable=True), 'redact': fields.BooleanField(nullable=True), 'resource_id': fields.IntegerField(), 'decrypt_method': fields.StringField(nullable=True), } @staticmethod def _from_db_object(sdata, db_sdata): if db_sdata is None: return None for field in sdata.fields: sdata[field] = db_sdata[field] sdata.obj_reset_changes() return sdata @classmethod def get_all(cls, resource, *args, **kwargs): # this method only returns dict, so we won't use objects mechanism here return db_api.resource_data_get_all(resource.context, resource.id, *args, **kwargs) @classmethod def get_obj(cls, resource, key): raise exception.NotSupported(feature='ResourceData.get_obj') @classmethod def get_val(cls, resource, key): return db_api.resource_data_get(resource.context, resource.id, key) @classmethod def set(cls, resource, key, value, *args, **kwargs): db_data = db_api.resource_data_set( resource.context, resource.id, key, value, *args, **kwargs ) return db_data @classmethod def get_by_key(cls, context, resource_id, key): db_rdata = db_api.resource_data_get_by_key(context, resource_id, key) return cls._from_db_object(cls(context), db_rdata) @classmethod def delete(cls, resource, key): db_api.resource_data_delete(resource.context, resource.id, key) heat-10.0.2/heat/objects/stack.py0000666000175000017500000002240113343562351016574 0ustar zuulzuul00000000000000# Copyright 2014 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Stack object.""" from oslo_versionedobjects import base from oslo_versionedobjects import fields import six from heat.common import exception from heat.common.i18n import _ from heat.common import identifier from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base from heat.objects import fields as heat_fields from heat.objects import raw_template from heat.objects import stack_tag class Stack( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'id': fields.StringField(), 'name': fields.StringField(), 'raw_template_id': fields.IntegerField(), 'backup': fields.BooleanField(), 'created_at': fields.DateTimeField(read_only=True), 'deleted_at': fields.DateTimeField(nullable=True), 'disable_rollback': fields.BooleanField(), 'nested_depth': fields.IntegerField(), 'owner_id': fields.StringField(nullable=True), 'stack_user_project_id': fields.StringField(nullable=True), 'tenant': fields.StringField(nullable=True), 'timeout': fields.IntegerField(nullable=True), 'updated_at': fields.DateTimeField(nullable=True), 'user_creds_id': fields.StringField(nullable=True), 'username': fields.StringField(nullable=True), 'action': fields.StringField(nullable=True), 'status': fields.StringField(nullable=True), 'status_reason': fields.StringField(nullable=True), 'raw_template': fields.ObjectField('RawTemplate'), 'convergence': fields.BooleanField(), 'current_traversal': fields.StringField(), 'current_deps': heat_fields.JsonField(), 'prev_raw_template_id': fields.IntegerField(), 'prev_raw_template': fields.ObjectField('RawTemplate'), 'parent_resource_name': fields.StringField(nullable=True), } @staticmethod def _from_db_object(context, stack, db_stack): for field in stack.fields: if field == 'raw_template': raw_template_obj = db_stack.__dict__.get('raw_template') if raw_template_obj is not None: # Object is already lazy loaded raw_template_obj = ( raw_template.RawTemplate.from_db_object( context, raw_template.RawTemplate(), raw_template_obj)) stack['raw_template'] = raw_template_obj else: stack[field] = db_stack.__dict__.get(field) stack._context = context stack.obj_reset_changes() return stack @classmethod def get_root_id(cls, context, stack_id): return db_api.stack_get_root_id(context, stack_id) @classmethod def get_by_id(cls, context, stack_id, **kwargs): db_stack = db_api.stack_get(context, stack_id, **kwargs) if not db_stack: return None stack = cls._from_db_object(context, cls(context), db_stack) return stack @classmethod def get_by_name_and_owner_id(cls, context, stack_name, owner_id): db_stack = db_api.stack_get_by_name_and_owner_id( context, six.text_type(stack_name), owner_id ) if not db_stack: return None stack = cls._from_db_object(context, cls(context), db_stack) return stack @classmethod def get_by_name(cls, context, stack_name): db_stack = db_api.stack_get_by_name(context, six.text_type(stack_name)) if not db_stack: return None stack = cls._from_db_object(context, cls(context), db_stack) return stack @classmethod def get_all(cls, context, limit=None, sort_keys=None, marker=None, sort_dir=None, filters=None, 
show_deleted=False, show_nested=False, show_hidden=False, tags=None, tags_any=None, not_tags=None, not_tags_any=None, eager_load=False): db_stacks = db_api.stack_get_all( context, limit=limit, sort_keys=sort_keys, marker=marker, sort_dir=sort_dir, filters=filters, show_deleted=show_deleted, show_nested=show_nested, show_hidden=show_hidden, tags=tags, tags_any=tags_any, not_tags=not_tags, not_tags_any=not_tags_any, eager_load=eager_load) for db_stack in db_stacks: try: yield cls._from_db_object(context, cls(context), db_stack) except exception.NotFound: pass @classmethod def get_all_by_owner_id(cls, context, owner_id): db_stacks = db_api.stack_get_all_by_owner_id(context, owner_id) for db_stack in db_stacks: try: yield cls._from_db_object(context, cls(context), db_stack) except exception.NotFound: pass @classmethod def get_all_by_root_owner_id(cls, context, root_owner_id): db_stacks = db_api.stack_get_all_by_root_owner_id(context, root_owner_id) for db_stack in db_stacks: try: yield cls._from_db_object(context, cls(context), db_stack) except exception.NotFound: pass @classmethod def count_all(cls, context, **kwargs): return db_api.stack_count_all(context, **kwargs) @classmethod def count_total_resources(cls, context, stack_id): return db_api.stack_count_total_resources(context, stack_id) @classmethod def create(cls, context, values): return cls._from_db_object(context, cls(context), db_api.stack_create(context, values)) @classmethod def update_by_id(cls, context, stack_id, values): """Update the stack, returning a boolean indicating whether it was updated. Note: the underlying stack_update filters by current_traversal and stack_id. """ return db_api.stack_update(context, stack_id, values) @classmethod def select_and_update(cls, context, stack_id, values, exp_trvsl=None): """Update the stack by selecting on traversal ID. Uses UPDATE ... WHERE (compare and swap) to catch any concurrent update problem. If the stack is found with the given traversal, it is updated. If a race occurs while updating, only one writer will succeed; the others will get a return value of False.
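A minimal usage sketch (the names are illustrative)::

    updated = Stack.select_and_update(cnxt, stack_id,
                                      {'status': 'FAILED'},
                                      exp_trvsl=current_traversal)
    if not updated:
        return  # another traversal won the race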
""" return db_api.stack_update(context, stack_id, values, exp_trvsl=exp_trvsl) @classmethod def persist_state_and_release_lock(cls, context, stack_id, engine_id, values): return db_api.persist_state_and_release_lock(context, stack_id, engine_id, values) @classmethod def delete(cls, context, stack_id): db_api.stack_delete(context, stack_id) def update_and_save(self, values): has_updated = self.__class__.update_by_id(self._context, self.id, values) if not has_updated: raise exception.NotFound(_('Attempt to update a stack with id: ' '%(id)s %(traversal)s %(msg)s') % { 'id': self.id, 'traversal': self.current_traversal, 'msg': 'that does not exist'}) def __eq__(self, another): self.refresh() # to make test object comparison work well return super(Stack, self).__eq__(another) def __ne__(self, other): return not self.__eq__(other) def refresh(self): db_stack = db_api.stack_get( self._context, self.id, show_deleted=True) if db_stack is None: message = _('No stack exists with id "%s"') % str(self.id) raise exception.NotFound(message) return self.__class__._from_db_object( self._context, self, db_stack ) @classmethod def encrypt_hidden_parameters(cls, tmpl): raw_template.RawTemplate.encrypt_hidden_parameters(tmpl) @classmethod def get_status(cls, context, stack_id): """Return action and status for the given stack.""" return db_api.stack_get_status(context, stack_id) def identifier(self): """Return an identifier for this stack.""" return identifier.HeatIdentifier(self.tenant, self.name, self.id) @property def tags(self): return stack_tag.StackTagList.get(self._context, self.id) heat-10.0.2/heat/objects/snapshot.py0000666000175000017500000000503613343562340017331 0ustar zuulzuul00000000000000# Copyright 2015 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Snapshot object.""" from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base from heat.objects import fields as heat_fields class Snapshot( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'id': fields.StringField(), 'name': fields.StringField(nullable=True), 'stack_id': fields.StringField(), 'data': heat_fields.JsonField(nullable=True), 'tenant': fields.StringField(), 'status': fields.StringField(nullable=True), 'status_reason': fields.StringField(nullable=True), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), } @staticmethod def _from_db_object(context, snapshot, db_snapshot): for field in snapshot.fields: snapshot[field] = db_snapshot[field] snapshot._context = context snapshot.obj_reset_changes() return snapshot @classmethod def create(cls, context, values): return cls._from_db_object( context, cls(), db_api.snapshot_create(context, values)) @classmethod def get_snapshot_by_stack(cls, context, snapshot_id, stack): return cls._from_db_object( context, cls(), db_api.snapshot_get_by_stack( context, snapshot_id, stack)) @classmethod def update(cls, context, snapshot_id, values): db_snapshot = db_api.snapshot_update(context, snapshot_id, values) return cls._from_db_object(context, cls(), db_snapshot) @classmethod def delete(cls, context, snapshot_id): db_api.snapshot_delete(context, snapshot_id) @classmethod def get_all(cls, context, stack_id): return [cls._from_db_object(context, cls(), db_snapshot) for db_snapshot in db_api.snapshot_get_all(context, stack_id)] heat-10.0.2/heat/objects/stack_tag.py0000666000175000017500000000505013343562340017426 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """StackTag object.""" from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base class StackTag( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'id': fields.IntegerField(), 'tag': fields.StringField(nullable=True), 'stack_id': fields.StringField(), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), } @staticmethod def _from_db_object(context, tag, db_tag): """Method to help with migration to objects. Converts a database entity to a formal object. 
""" if db_tag is None: return None for field in tag.fields: tag[field] = db_tag[field] tag.obj_reset_changes() return tag @classmethod def get_obj(cls, context, tag): return cls._from_db_object(cls(context), tag) class StackTagList( heat_base.HeatObject, base.ObjectListBase, ): fields = { 'objects': fields.ListOfObjectsField('StackTag'), } def __init__(self, *args, **kwargs): self._changed_fields = set() super(StackTagList, self).__init__() @classmethod def get(cls, context, stack_id): db_tags = db_api.stack_tags_get(context, stack_id) if db_tags: return base.obj_make_list(context, cls(), StackTag, db_tags) @classmethod def set(cls, context, stack_id, tags): db_tags = db_api.stack_tags_set(context, stack_id, tags) if db_tags: return base.obj_make_list(context, cls(), StackTag, db_tags) @classmethod def delete(cls, context, stack_id): db_api.stack_tags_delete(context, stack_id) @classmethod def from_db_object(cls, context, db_tags): if db_tags is not None: return base.obj_make_list(context, cls(), StackTag, db_tags) heat-10.0.2/heat/objects/fields.py0000666000175000017500000000227113343562340016736 0ustar zuulzuul00000000000000# Copyright 2014 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils as json from oslo_versionedobjects import fields import six class Json(fields.FieldType): def coerce(self, obj, attr, value): if isinstance(value, six.string_types): loaded = json.loads(value) return loaded return value def from_primitive(self, obj, attr, value): return self.coerce(obj, attr, value) def to_primitive(self, obj, attr, value): return json.dumps(value) class JsonField(fields.AutoTypedField): AUTO_TYPE = Json() class ListField(fields.AutoTypedField): AUTO_TYPE = fields.List(fields.FieldType()) heat-10.0.2/heat/objects/sync_point.py0000666000175000017500000000604113343562340017654 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""SyncPoint object.""" from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base from heat.objects import fields as heat_fields class SyncPoint( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'entity_id': fields.StringField(), 'traversal_id': fields.StringField(), 'is_update': fields.BooleanField(), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), 'atomic_key': fields.IntegerField(), 'stack_id': fields.StringField(), 'input_data': heat_fields.JsonField(nullable=True), } @staticmethod def _from_db_object(context, sdata, db_sdata): if db_sdata is None: return None for field in sdata.fields: sdata[field] = db_sdata[field] sdata._context = context sdata.obj_reset_changes() return sdata @classmethod def get_by_key(cls, context, entity_id, traversal_id, is_update): sync_point_db = db_api.sync_point_get(context, entity_id, traversal_id, is_update) return cls._from_db_object(context, cls(), sync_point_db) @classmethod def create(cls, context, values): sync_point_db = db_api.sync_point_create(context, values) return cls._from_db_object(context, cls(), sync_point_db) @classmethod def update_input_data(cls, context, entity_id, traversal_id, is_update, atomic_key, input_data): return db_api.sync_point_update_input_data( context, entity_id, traversal_id, is_update, atomic_key, input_data) @classmethod def delete_all_by_stack_and_traversal(cls, context, stack_id, traversal_id): return db_api.sync_point_delete_all_by_stack_and_traversal( context, stack_id, traversal_id) heat-10.0.2/heat/objects/resource.py0000666000175000017500000003266613343562340017332 0ustar zuulzuul00000000000000# Copyright 2014 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Resource object.""" import collections from oslo_config import cfg from oslo_log import log as logging from oslo_versionedobjects import base from oslo_versionedobjects import fields import six import tenacity from heat.common import crypt from heat.common import exception from heat.common.i18n import _ from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base from heat.objects import fields as heat_fields from heat.objects import resource_data from heat.objects import resource_properties_data as rpd cfg.CONF.import_opt('encrypt_parameters_and_properties', 'heat.common.config') LOG = logging.getLogger(__name__) def retry_on_conflict(func): wrapper = tenacity.retry( stop=tenacity.stop_after_attempt(11), wait=tenacity.wait_random(max=2), retry=tenacity.retry_if_exception_type( exception.ConcurrentTransaction), reraise=True) return wrapper(func) class ResourceCache(object): def __init__(self): self.delete_all() def delete_all(self): self.by_stack_id_name = collections.defaultdict(dict) def set_by_stack_id(self, resources): for res in six.itervalues(resources): self.by_stack_id_name[res.stack_id][res.name] = res class Resource( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'id': fields.IntegerField(), 'uuid': fields.StringField(), 'stack_id': fields.StringField(), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), 'physical_resource_id': fields.StringField(nullable=True), 'name': fields.StringField(nullable=True), 'status': fields.StringField(nullable=True), 'status_reason': fields.StringField(nullable=True), 'action': fields.StringField(nullable=True), 'attr_data': fields.ObjectField( rpd.ResourcePropertiesData, nullable=True), 'attr_data_id': fields.IntegerField(nullable=True), 'rsrc_metadata': heat_fields.JsonField(nullable=True), 'data': fields.ListOfObjectsField( resource_data.ResourceData, nullable=True ), 'rsrc_prop_data_id': fields.ObjectField( fields.IntegerField(nullable=True)), 'engine_id': fields.StringField(nullable=True), 'atomic_key': fields.IntegerField(nullable=True), 'current_template_id': fields.IntegerField(), 'needed_by': heat_fields.ListField(nullable=True, default=None), 'requires': heat_fields.ListField(nullable=True, default=None), 'replaces': fields.IntegerField(nullable=True), 'replaced_by': fields.IntegerField(nullable=True), 'root_stack_id': fields.StringField(nullable=True), } @staticmethod def _from_db_object(resource, context, db_resource, only_fields=None): if db_resource is None: return None for field in resource.fields: if (only_fields is not None and field not in only_fields and field != 'id'): continue if field == 'data': resource['data'] = [resource_data.ResourceData._from_db_object( resource_data.ResourceData(context), resd ) for resd in db_resource.data] elif field != 'attr_data': resource[field] = db_resource[field] if db_resource['rsrc_prop_data_id'] is not None: if hasattr(db_resource, '__dict__'): rpd_obj = db_resource.__dict__.get('rsrc_prop_data') else: rpd_obj = None if rpd_obj is not None: # Object is already eager loaded rpd_obj = ( rpd.ResourcePropertiesData._from_db_object( rpd.ResourcePropertiesData(), context, rpd_obj)) resource._properties_data = rpd_obj.data else: resource._properties_data = {} if db_resource['properties_data']: LOG.error( 'Unexpected condition where resource.rsrc_prop_data ' 'and resource.properties_data are both not null. 
' 'rsrc_prop_data.id: %(rsrc_prop_data_id)s, ' 'resource id: %(res_id)s', {'rsrc_prop_data_id': resource['rsrc_prop_data'].id, 'res_id': resource['id']}) elif db_resource['properties_data']: # legacy field if db_resource['properties_data_encrypted']: decrypted_data = crypt.decrypted_dict( db_resource['properties_data']) resource._properties_data = decrypted_data else: resource._properties_data = db_resource['properties_data'] else: resource._properties_data = None if db_resource['attr_data'] is not None: resource._attr_data = rpd.ResourcePropertiesData._from_db_object( rpd.ResourcePropertiesData(context), context, db_resource['attr_data']).data else: resource._attr_data = None resource._context = context resource.obj_reset_changes() return resource @property def attr_data(self): return self._attr_data @property def properties_data(self): if (not self._properties_data and self.rsrc_prop_data_id is not None): LOG.info('rsrc_prop_data lazy load') rpd_obj = rpd.ResourcePropertiesData.get_by_id( self._context, self.rsrc_prop_data_id) self._properties_data = rpd_obj.data or {} return self._properties_data @classmethod def get_obj(cls, context, resource_id, refresh=False, fields=None): if fields is None or 'data' in fields: refresh_data = refresh else: refresh_data = False resource_db = db_api.resource_get(context, resource_id, refresh=refresh, refresh_data=refresh_data) return cls._from_db_object(cls(context), context, resource_db, only_fields=fields) @classmethod def get_all(cls, context): resources_db = db_api.resource_get_all(context) resources = [ ( resource_name, cls._from_db_object(cls(context), context, resource_db) ) for resource_name, resource_db in six.iteritems(resources_db) ] return dict(resources) @classmethod def create(cls, context, values): return cls._from_db_object(cls(context), context, db_api.resource_create(context, values)) @classmethod def replacement(cls, context, existing_res_id, existing_res_values, new_res_values, atomic_key=0, expected_engine_id=None): replacement = db_api.resource_create_replacement(context, existing_res_id, existing_res_values, new_res_values, atomic_key, expected_engine_id) if replacement is None: return None return cls._from_db_object(cls(context), context, replacement) @classmethod def delete(cls, context, resource_id): db_api.resource_delete(context, resource_id) @classmethod def attr_data_delete(cls, context, resource_id, attr_id): db_api.resource_attr_data_delete(context, resource_id, attr_id) @classmethod def exchange_stacks(cls, context, resource_id1, resource_id2): return db_api.resource_exchange_stacks( context, resource_id1, resource_id2) @classmethod def get_all_by_stack(cls, context, stack_id, filters=None): cache = context.cache(ResourceCache) resources = cache.by_stack_id_name.get(stack_id) if resources: return dict(resources) resources_db = db_api.resource_get_all_by_stack(context, stack_id, filters) return cls._resources_to_dict(context, resources_db) @classmethod def _resources_to_dict(cls, context, resources_db): resources = [ ( resource_name, cls._from_db_object(cls(context), context, resource_db) ) for resource_name, resource_db in six.iteritems(resources_db) ] return dict(resources) @classmethod def get_all_active_by_stack(cls, context, stack_id): resources_db = db_api.resource_get_all_active_by_stack(context, stack_id) resources = [ ( resource_id, cls._from_db_object(cls(context), context, resource_db) ) for resource_id, resource_db in six.iteritems(resources_db) ] return dict(resources) @classmethod def 
get_all_by_root_stack(cls, context, stack_id, filters, cache=False): resources_db = db_api.resource_get_all_by_root_stack( context, stack_id, filters) all = cls._resources_to_dict(context, resources_db) if cache: context.cache(ResourceCache).set_by_stack_id(all) return all @classmethod def get_all_stack_ids_by_root_stack(cls, context, stack_id): resources_db = db_api.resource_get_all_by_root_stack( context, stack_id, stack_id_only=True) return {db_res.stack_id for db_res in six.itervalues(resources_db)} @classmethod def purge_deleted(cls, context, stack_id): return db_api.resource_purge_deleted(context, stack_id) @classmethod def get_by_name_and_stack(cls, context, resource_name, stack_id): resource_db = db_api.resource_get_by_name_and_stack( context, resource_name, stack_id) return cls._from_db_object(cls(context), context, resource_db) @classmethod def get_all_by_physical_resource_id(cls, context, physical_resource_id): matches = db_api.resource_get_all_by_physical_resource_id( context, physical_resource_id) return [cls._from_db_object(cls(context), context, resource_db) for resource_db in matches] @classmethod def update_by_id(cls, context, resource_id, values): db_api.resource_update_and_save(context, resource_id, values) def update_and_save(self, values): db_api.resource_update_and_save(self._context, self.id, values) def select_and_update(self, values, expected_engine_id=None, atomic_key=0): return db_api.resource_update(self._context, self.id, values, atomic_key=atomic_key, expected_engine_id=expected_engine_id) @classmethod def select_and_update_by_id(cls, context, resource_id, values, expected_engine_id=None, atomic_key=0): return db_api.resource_update(context, resource_id, values, atomic_key=atomic_key, expected_engine_id=expected_engine_id) @classmethod def store_attributes(cls, context, resource_id, atomic_key, attr_data, attr_id): attr_id = rpd.ResourcePropertiesData.create_or_update( context, attr_data, attr_id).id if db_api.resource_attr_id_set( context, resource_id, atomic_key, attr_id): return attr_id return None def refresh(self): resource_db = db_api.resource_get(self._context, self.id, refresh=True) return self.__class__._from_db_object( self, self._context, resource_db) @staticmethod def encrypt_properties_data(data): if cfg.CONF.encrypt_parameters_and_properties and data: result = crypt.encrypted_dict(data) return (True, result) return (False, data) def update_metadata(self, metadata): if self.rsrc_metadata != metadata: rows_updated = self.select_and_update( {'rsrc_metadata': metadata}, self.engine_id, self.atomic_key) if not rows_updated: action = _('metadata setting for resource %s') % self.name raise exception.ConcurrentTransaction(action=action) return True else: return False heat-10.0.2/heat/objects/__init__.py0000666000175000017500000000000013343562340017213 0ustar zuulzuul00000000000000heat-10.0.2/heat/objects/service.py0000666000175000017500000000573213343562340017135 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
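The retry_on_conflict decorator at the top of resource.py above retries a callable on exception.ConcurrentTransaction (up to 11 attempts with short random waits, per its tenacity configuration). A sketch of wrapping a metadata write with it; the function and its arguments are illustrative:

from heat.objects import resource as resource_objects

@resource_objects.retry_on_conflict
def set_resource_metadata(ctxt, resource_id, metadata):
    res = resource_objects.Resource.get_obj(ctxt, resource_id,
                                            refresh=True)
    # update_metadata() raises ConcurrentTransaction when its
    # engine_id/atomic_key compare-and-swap misses, which triggers
    # the decorator's retry loop with a freshly-read resource.
    return res.update_metadata(metadata)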
"""Service object.""" from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base class Service( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'id': fields.StringField(), 'engine_id': fields.StringField(), 'host': fields.StringField(), 'hostname': fields.StringField(), 'binary': fields.StringField(), 'topic': fields.StringField(), 'report_interval': fields.IntegerField(), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), 'deleted_at': fields.DateTimeField(nullable=True) } @staticmethod def _from_db_object(context, service, db_service): for field in service.fields: service[field] = db_service[field] service._context = context service.obj_reset_changes() return service @classmethod def _from_db_objects(cls, context, list_obj): return [cls._from_db_object(context, cls(context), obj) for obj in list_obj] @classmethod def get_by_id(cls, context, service_id): service_db = db_api.service_get(context, service_id) service = cls._from_db_object(context, cls(), service_db) return service @classmethod def create(cls, context, values): return cls._from_db_object( context, cls(), db_api.service_create(context, values)) @classmethod def update_by_id(cls, context, service_id, values): return cls._from_db_object( context, cls(), db_api.service_update(context, service_id, values)) @classmethod def delete(cls, context, service_id, soft_delete=True): db_api.service_delete(context, service_id, soft_delete) @classmethod def get_all(cls, context): return cls._from_db_objects(context, db_api.service_get_all(context)) @classmethod def get_all_by_args(cls, context, host, binary, hostname): return cls._from_db_objects( context, db_api.service_get_all_by_args(context, host, binary, hostname)) heat-10.0.2/heat/objects/raw_template.py0000666000175000017500000001051613343562340020155 0ustar zuulzuul00000000000000# Copyright 2014 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""RawTemplate object.""" import copy from oslo_config import cfg from oslo_log import log as logging from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.common import crypt from heat.common import environment_format as env_fmt from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base from heat.objects import fields as heat_fields LOG = logging.getLogger(__name__) @heat_base.HeatObjectRegistry.register class RawTemplate( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): # Version 1.0: Initial version # Version 1.1: Added files_id VERSION = '1.1' fields = { 'id': fields.IntegerField(), # TODO(cwolfe): remove deprecated files in future release 'files': heat_fields.JsonField(nullable=True), 'files_id': fields.IntegerField(nullable=True), 'template': heat_fields.JsonField(), 'environment': heat_fields.JsonField(), } @staticmethod def from_db_object(context, tpl, db_tpl): for field in tpl.fields: tpl[field] = db_tpl[field] tpl.environment = copy.deepcopy(tpl.environment) # If any of the parameters were encrypted, then decrypt them if (tpl.environment is not None and env_fmt.ENCRYPTED_PARAM_NAMES in tpl.environment): parameters = tpl.environment[env_fmt.PARAMETERS] encrypted_param_names = tpl.environment[ env_fmt.ENCRYPTED_PARAM_NAMES] for param_name in encrypted_param_names: if (isinstance(parameters[param_name], (list, tuple)) and len(parameters[param_name]) == 2): method, enc_value = parameters[param_name] value = crypt.decrypt(method, enc_value) else: value = parameters[param_name] LOG.warning( 'Encountered already-decrypted data while attempting ' 'to decrypt parameter %s. Please file a Heat bug so ' 'this can be fixed.', param_name) parameters[param_name] = value tpl.environment[env_fmt.PARAMETERS] = parameters tpl._context = context tpl.obj_reset_changes() return tpl @classmethod def get_by_id(cls, context, template_id): raw_template_db = db_api.raw_template_get(context, template_id) return cls.from_db_object(context, cls(), raw_template_db) @classmethod def encrypt_hidden_parameters(cls, tmpl): if cfg.CONF.encrypt_parameters_and_properties: for param_name in tmpl.env.params.keys(): if not tmpl.param_schemata()[param_name].hidden: continue clear_text_val = tmpl.env.params.get(param_name) tmpl.env.params[param_name] = crypt.encrypt(clear_text_val) if param_name not in tmpl.env.encrypted_param_names: tmpl.env.encrypted_param_names.append(param_name) @classmethod def create(cls, context, values): return cls.from_db_object(context, cls(), db_api.raw_template_create(context, values)) @classmethod def update_by_id(cls, context, template_id, values): # Only save template files in the new raw_template_files # table, not in the old location of raw_template.files if 'files_id' in values and values['files_id']: values['files'] = None return cls.from_db_object( context, cls(), db_api.raw_template_update(context, template_id, values)) @classmethod def delete(cls, context, template_id): db_api.raw_template_delete(context, template_id) heat-10.0.2/heat/objects/event.py0000666000175000017500000001130013343562340016602 0ustar zuulzuul00000000000000# Copyright 2014 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Event object.""" from oslo_log import log as logging from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.common import identifier from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base from heat.objects import resource_properties_data as rpd LOG = logging.getLogger(__name__) class Event( heat_base.HeatObject, base.VersionedObjectDictCompat, ): fields = { 'id': fields.IntegerField(), 'stack_id': fields.StringField(), 'uuid': fields.StringField(), 'resource_action': fields.StringField(nullable=True), 'resource_status': fields.StringField(nullable=True), 'resource_name': fields.StringField(nullable=True), 'physical_resource_id': fields.StringField(nullable=True), 'resource_status_reason': fields.StringField(nullable=True), 'resource_type': fields.StringField(nullable=True), 'rsrc_prop_data_id': fields.ObjectField( fields.IntegerField(nullable=True)), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), } @staticmethod def _from_db_object(context, event, db_event): event._resource_properties = None for field in event.fields: if field == 'resource_status_reason': # this works whether db_event is a dict or db ref event[field] = db_event['_resource_status_reason'] else: event[field] = db_event[field] if db_event['rsrc_prop_data_id'] is None: event._resource_properties = db_event['resource_properties'] or {} else: if hasattr(db_event, '__dict__'): rpd_obj = db_event.__dict__.get('rsrc_prop_data') elif hasattr(db_event, 'rsrc_prop_data'): rpd_obj = db_event['rsrc_prop_data'] else: rpd_obj = None if rpd_obj is not None: # Object is already eager loaded rpd_obj = ( rpd.ResourcePropertiesData._from_db_object( rpd.ResourcePropertiesData(), context, rpd_obj)) event._resource_properties = rpd_obj.data event._context = context event.obj_reset_changes() return event @property def resource_properties(self): if self._resource_properties is None: LOG.info('rsrc_prop_data lazy load') rpd_obj = rpd.ResourcePropertiesData.get_by_id( self._context, self.rsrc_prop_data_id) self._resource_properties = rpd_obj.data or {} return self._resource_properties @classmethod def get_all_by_tenant(cls, context, **kwargs): return [cls._from_db_object(context, cls(), db_event) for db_event in db_api.event_get_all_by_tenant(context, **kwargs)] @classmethod def get_all_by_stack(cls, context, stack_id, **kwargs): return [cls._from_db_object(context, cls(), db_event) for db_event in db_api.event_get_all_by_stack(context, stack_id, **kwargs)] @classmethod def count_all_by_stack(cls, context, stack_id): return db_api.event_count_all_by_stack(context, stack_id) @classmethod def create(cls, context, values): # Using dict() allows us to be done with the sqlalchemy/model # layer in one call, rather than hitting that layer for every # field in _from_db_object(). 
return cls._from_db_object(context, cls(context=context), dict(db_api.event_create(context, values))) def identifier(self, stack_identifier): """Return a unique identifier for the event.""" res_id = identifier.ResourceIdentifier( resource_name=self.resource_name, **stack_identifier) return identifier.EventIdentifier(event_id=str(self.uuid), **res_id) heat-10.0.2/heat/objects/user_creds.py0000666000175000017500000000514613343562340017632 0ustar zuulzuul00000000000000# Copyright 2014 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """UserCreds object.""" from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base @base.VersionedObjectRegistry.register class UserCreds( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'id': fields.StringField(), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), 'username': fields.StringField(nullable=True), 'password': fields.StringField(nullable=True), 'tenant': fields.StringField(nullable=True), 'tenant_id': fields.StringField(nullable=True), 'trustor_user_id': fields.StringField(nullable=True), 'trust_id': fields.StringField(nullable=True), 'region_name': fields.StringField(nullable=True), 'auth_url': fields.StringField(nullable=True), 'decrypt_method': fields.StringField(nullable=True) } @staticmethod def _from_db_object(ucreds, db_ucreds, context=None): if db_ucreds is None: return db_ucreds ucreds._context = context for field in ucreds.fields: # TODO(Shao HE Feng), now the db layer delete the decrypt_method # field, just skip it here. and will add an encrypted_field later. if field == "decrypt_method": continue ucreds[field] = db_ucreds[field] ucreds.obj_reset_changes() return ucreds @classmethod def create(cls, context): user_creds_db = db_api.user_creds_create(context) return cls._from_db_object(cls(), user_creds_db) @classmethod def delete(cls, context, user_creds_id): db_api.user_creds_delete(context, user_creds_id) @classmethod def get_by_id(cls, context, user_creds_id): user_creds_db = db_api.user_creds_get(context, user_creds_id) user_creds = cls._from_db_object(cls(), user_creds_db) return user_creds heat-10.0.2/heat/objects/software_deployment.py0000666000175000017500000000651313343562340021565 0ustar zuulzuul00000000000000# Copyright 2014 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
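The identifier() methods tie these objects together: a stack identifier plus an event yields a fully qualified event identifier. A sketch, assuming stack and event are the Stack and Event objects defined in this package and that the identifier classes expose url_path() as in heat.common.identifier:

def event_url_path(stack, event):
    stack_ident = stack.identifier()              # HeatIdentifier
    event_ident = event.identifier(stack_ident)   # EventIdentifier
    return event_ident.url_path()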
"""SoftwareDeployment object.""" from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base from heat.objects import fields as heat_fields from heat.objects import software_config class SoftwareDeployment( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'id': fields.StringField(), 'config_id': fields.StringField(), 'server_id': fields.StringField(), 'input_values': heat_fields.JsonField(nullable=True), 'output_values': heat_fields.JsonField(nullable=True), 'tenant': fields.StringField(), 'stack_user_project_id': fields.StringField(nullable=True), 'action': fields.StringField(nullable=True), 'status': fields.StringField(nullable=True), 'status_reason': fields.StringField(nullable=True), 'config': fields.ObjectField('SoftwareConfig'), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), } @staticmethod def _from_db_object(context, deployment, db_deployment): for field in deployment.fields: if field == 'config': deployment[field] = ( software_config.SoftwareConfig._from_db_object( context, software_config.SoftwareConfig(), db_deployment['config']) ) else: deployment[field] = db_deployment[field] deployment._context = context deployment.obj_reset_changes() return deployment @classmethod def create(cls, context, values): return cls._from_db_object( context, cls(), db_api.software_deployment_create(context, values)) @classmethod def get_by_id(cls, context, deployment_id): return cls._from_db_object( context, cls(), db_api.software_deployment_get(context, deployment_id)) @classmethod def get_all(cls, context, server_id=None): return [cls._from_db_object(context, cls(), db_deployment) for db_deployment in db_api.software_deployment_get_all( context, server_id)] @classmethod def update_by_id(cls, context, deployment_id, values): """Note this is a bit unusual as it returns the object. Other update_by_id methods return a bool (was it updated). """ return cls._from_db_object( context, cls(), db_api.software_deployment_update(context, deployment_id, values)) @classmethod def delete(cls, context, deployment_id): db_api.software_deployment_delete(context, deployment_id) heat-10.0.2/heat/objects/resource_properties_data.py0000666000175000017500000000641213343562340022565 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""ResourcePropertiesData object.""" from oslo_config import cfg from oslo_serialization import jsonutils from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.common import crypt from heat.db.sqlalchemy import api as db_api from heat.objects import fields as heat_fields class ResourcePropertiesData( base.VersionedObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'id': fields.IntegerField(), 'data': heat_fields.JsonField(nullable=True), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), } @staticmethod def _from_db_object(rpd, context, db_rpd, data_unencrypted=None): # The data_unencrypted field allows us to avoid an extra # decrypt operation, e.g. when called from create(). for field in rpd.fields: rpd[field] = db_rpd[field] if data_unencrypted: # save a little (decryption) processing rpd['data'] = data_unencrypted elif db_rpd['encrypted'] and rpd['data'] is not None: rpd['data'] = crypt.decrypted_dict(rpd['data']) # TODO(cwolfe) setting the context here should go away, that # should have been done with the initialisation of the rpd # object. For now, maintaining consistency with other # _from_db_object methods. rpd._context = context rpd.obj_reset_changes() return rpd @classmethod def create_or_update(cls, context, data, rpd_id=None): properties_data_encrypted, properties_data = \ ResourcePropertiesData.encrypt_properties_data(data) values = {'encrypted': properties_data_encrypted, 'data': properties_data} db_obj = db_api.resource_prop_data_create_or_update( context, values, rpd_id) return cls._from_db_object(cls(), context, db_obj, data) @classmethod def create(cls, context, data): return ResourcePropertiesData.create_or_update(context, data) @staticmethod def encrypt_properties_data(data): if cfg.CONF.encrypt_parameters_and_properties and data: result = {} for prop_name, prop_value in data.items(): prop_string = jsonutils.dumps(prop_value) encrypted_value = crypt.encrypt(prop_string) result[prop_name] = encrypted_value return (True, result) return (False, data) @staticmethod def get_by_id(context, id): db_ref = db_api.resource_prop_data_get(context, id) return ResourcePropertiesData._from_db_object( ResourcePropertiesData(context=context), context, db_ref) heat-10.0.2/heat/objects/software_config.py0000666000175000017500000000451013343562340020645 0ustar zuulzuul00000000000000# Copyright 2014 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""SoftwareConfig object.""" from oslo_versionedobjects import base from oslo_versionedobjects import fields from heat.db.sqlalchemy import api as db_api from heat.objects import base as heat_base from heat.objects import fields as heat_fields class SoftwareConfig( heat_base.HeatObject, base.VersionedObjectDictCompat, base.ComparableVersionedObject, ): fields = { 'id': fields.StringField(), 'name': fields.StringField(nullable=True), 'group': fields.StringField(nullable=True), 'tenant': fields.StringField(nullable=True), 'config': heat_fields.JsonField(nullable=True), 'created_at': fields.DateTimeField(read_only=True), 'updated_at': fields.DateTimeField(nullable=True), } @staticmethod def _from_db_object(context, config, db_config): # SoftwareDeployment._from_db_object may attempt to load a None config if db_config is None: return None for field in config.fields: config[field] = db_config[field] config._context = context config.obj_reset_changes() return config @classmethod def create(cls, context, values): return cls._from_db_object( context, cls(), db_api.software_config_create(context, values)) @classmethod def get_by_id(cls, context, config_id): return cls._from_db_object( context, cls(), db_api.software_config_get(context, config_id)) @classmethod def get_all(cls, context, **kwargs): scs = db_api.software_config_get_all(context, **kwargs) return [cls._from_db_object(context, cls(), sc) for sc in scs] @classmethod def delete(cls, context, config_id): db_api.software_config_delete(context, config_id) heat-10.0.2/heat/tests/0000775000175000017500000000000013343562672014633 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/test_translation_rule.py0000666000175000017500000012644413343562340021636 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import mock import six from heat.common import exception from heat.engine.cfn import functions as cfn_funcs from heat.engine import function from heat.engine.hot import functions as hot_funcs from heat.engine import parameters from heat.engine import properties from heat.engine import translation from heat.tests import common class TestTranslationRule(common.HeatTestCase): def setUp(self): super(TestTranslationRule, self).setUp() self.props = mock.Mock(spec=properties.Properties) def test_translation_rule(self): for r in translation.TranslationRule.RULE_KEYS: props = properties.Properties({}, {}) rule = translation.TranslationRule( props, r, ['any'], ['value'] if r == 'Add' else None, 'value_name' if r == 'Replace' else None, 'client_plugin' if r == 'Resolve' else None, 'finder' if r == 'Resolve' else None) self.assertEqual(rule.properties, props) self.assertEqual(rule.rule, r) if r == 'Add': self.assertEqual(['value'], rule.value) if r == 'Replace': self.assertEqual('value_name', rule.value_name) else: self.assertIsNone(rule.value_name) def test_cmp_rules(self): rules = [ translation.TranslationRule( mock.Mock(spec=properties.Properties), translation.TranslationRule.DELETE, ['any'] ), translation.TranslationRule( mock.Mock(spec=properties.Properties), translation.TranslationRule.ADD, ['any'] ), translation.TranslationRule( mock.Mock(spec=properties.Properties), translation.TranslationRule.RESOLVE, ['any'], client_plugin=mock.ANY, finder=mock.ANY ), translation.TranslationRule( mock.Mock(spec=properties.Properties), translation.TranslationRule.REPLACE, ['any'] ) ] expected = [translation.TranslationRule.ADD, translation.TranslationRule.REPLACE, translation.TranslationRule.RESOLVE, translation.TranslationRule.DELETE] result = [rule.rule for rule in sorted(rules)] self.assertEqual(expected, result) def test_invalid_translation_rule(self): props = properties.Properties({}, {}) exc = self.assertRaises(ValueError, translation.TranslationRule, props, 'EatTheCookie', mock.ANY, mock.ANY) self.assertEqual('There is no rule EatTheCookie. 
List of allowed ' 'rules is: Add, Replace, Delete, Resolve.', six.text_type(exc)) exc = self.assertRaises(ValueError, translation.TranslationRule, props, translation.TranslationRule.ADD, 'networks.network', 'value') self.assertEqual('"translation_path" should be non-empty list ' 'with path to translate.', six.text_type(exc)) exc = self.assertRaises(ValueError, translation.TranslationRule, props, translation.TranslationRule.ADD, [], mock.ANY) self.assertEqual('"translation_path" should be non-empty list ' 'with path to translate.', six.text_type(exc)) exc = self.assertRaises(ValueError, translation.TranslationRule, props, translation.TranslationRule.ADD, ['any'], 'value', 'value_name', 'some_path') self.assertEqual('"value_path", "value" and "value_name" are ' 'mutually exclusive and cannot be specified ' 'at the same time.', six.text_type(exc)) exc = self.assertRaises(ValueError, translation.TranslationRule, props, translation.TranslationRule.ADD, ['any'], 'value') self.assertEqual('"value" must be list type when rule is Add.', six.text_type(exc)) def test_add_rule_exist(self): schema = { 'far': properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, schema={ 'red': properties.Schema( properties.Schema.STRING ) } ) ), 'bar': properties.Schema( properties.Schema.STRING )} data = { 'far': [ {'red': 'blue'} ], 'bar': 'dak' } props = properties.Properties(schema, copy.copy(data)) rule = translation.TranslationRule( props, translation.TranslationRule.ADD, ['far'], [{'red': props.get('bar')}]) tran = translation.Translation(props) tran.set_rules([rule]) self.assertTrue(tran.has_translation('far')) result = tran.translate('far', data['far']) self.assertEqual([{'red': 'blue'}, {'red': 'dak'}], result) self.assertEqual([{'red': 'blue'}, {'red': 'dak'}], tran.resolved_translations['far']) def test_add_rule_dont_exist(self): schema = { 'far': properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, schema={ 'red': properties.Schema( properties.Schema.STRING ) } ) ), 'bar': properties.Schema( properties.Schema.STRING )} data = { 'bar': 'dak' } props = properties.Properties(schema, copy.copy(data)) rule = translation.TranslationRule( props, translation.TranslationRule.ADD, ['far'], [{'red': props.get('bar')}]) tran = translation.Translation(props) tran.set_rules([rule]) self.assertTrue(tran.has_translation('far')) result = tran.translate('far') self.assertEqual([{'red': 'dak'}], result) self.assertEqual([{'red': 'dak'}], tran.resolved_translations['far']) def test_add_rule_invalid(self): schema = { 'far': properties.Schema( properties.Schema.MAP, schema={ 'red': properties.Schema( properties.Schema.STRING ) } ), 'bar': properties.Schema( properties.Schema.STRING )} data = { 'far': 'tran', 'bar': 'dak' } props = properties.Properties(schema, data) rule = translation.TranslationRule( props, translation.TranslationRule.ADD, ['far'], [props.get('bar')]) tran = translation.Translation(props) tran.set_rules([rule]) self.assertTrue(tran.has_translation('far')) ex = self.assertRaises(ValueError, tran.translate, 'far', 'tran') self.assertEqual('Incorrect translation rule using - cannot ' 'resolve Add rule for non-list translation ' 'value "far".', six.text_type(ex)) def test_replace_rule_map_exist(self): schema = { 'far': properties.Schema( properties.Schema.MAP, schema={ 'red': properties.Schema( properties.Schema.STRING ) } ), 'bar': properties.Schema( properties.Schema.STRING )} data = { 'far': {'red': 'tran'}, 'bar': 'dak' } props = 
properties.Properties(schema, data) rule = translation.TranslationRule( props, translation.TranslationRule.REPLACE, ['far', 'red'], props.get('bar')) tran = translation.Translation(props) tran.set_rules([rule]) self.assertTrue(tran.has_translation('far.red')) result = tran.translate('far.red', data['far']['red']) self.assertEqual('dak', result) self.assertEqual('dak', tran.resolved_translations['far.red']) def test_replace_rule_map_dont_exist(self): schema = { 'far': properties.Schema( properties.Schema.MAP, schema={ 'red': properties.Schema( properties.Schema.STRING ) } ), 'bar': properties.Schema( properties.Schema.STRING )} data = { 'bar': 'dak' } props = properties.Properties(schema, data) rule = translation.TranslationRule( props, translation.TranslationRule.REPLACE, ['far', 'red'], props.get('bar')) tran = translation.Translation(props) tran.set_rules([rule]) self.assertTrue(tran.has_translation('far.red')) result = tran.translate('far.red') self.assertEqual('dak', result) self.assertEqual('dak', tran.resolved_translations['far.red']) def test_replace_rule_list_different(self): schema = { 'far': properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, schema={ 'red': properties.Schema( properties.Schema.STRING ) } ) ), 'bar': properties.Schema( properties.Schema.STRING )} data = { 'far': [{'red': 'blue'}, {'red': 'roses'}], 'bar': 'dak' } props = properties.Properties(schema, data) rule = translation.TranslationRule( props, translation.TranslationRule.REPLACE, ['far', 'red'], props.get('bar')) tran = translation.Translation(props) tran.set_rules([rule]) self.assertTrue(tran.has_translation('far.red')) result = tran.translate('far.0.red', data['far'][0]['red']) self.assertEqual('dak', result) self.assertEqual('dak', tran.resolved_translations['far.0.red']) def test_replace_rule_list_same(self): schema = { 'far': properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, schema={ 'red': properties.Schema( properties.Schema.STRING ), 'blue': properties.Schema( properties.Schema.STRING ) } ) )} data = { 'far': [{'blue': 'white'}, {'red': 'roses'}] } props = properties.Properties(schema, data) rule = translation.TranslationRule( props, translation.TranslationRule.REPLACE, ['far', 'red'], None, 'blue') tran = translation.Translation(props) tran.set_rules([rule]) self.assertTrue(tran.has_translation('far.0.red')) result = tran.translate('far.0.red', data['far'][0].get('red'), data['far'][0]) self.assertEqual('white', result) self.assertEqual('white', tran.resolved_translations['far.0.red']) self.assertIsNone(tran.resolved_translations['far.0.blue']) self.assertTrue(tran.has_translation('far.1.red')) result = tran.translate('far.1.red', data['far'][1]['red'], data['far'][1]) self.assertEqual('roses', result) self.assertEqual('roses', tran.resolved_translations['far.1.red']) self.assertIsNone(tran.resolved_translations['far.1.blue']) def test_replace_rule_str(self): schema = { 'far': properties.Schema(properties.Schema.STRING), 'bar': properties.Schema(properties.Schema.STRING) } data = {'far': 'one', 'bar': 'two'} props = properties.Properties(schema, data) rule = translation.TranslationRule( props, translation.TranslationRule.REPLACE, ['bar'], props.get('far')) tran = translation.Translation(props) tran.set_rules([rule]) self.assertTrue(tran.has_translation('bar')) result = tran.translate('bar', data['bar']) self.assertEqual('one', result) self.assertEqual('one', tran.resolved_translations['bar']) def 
test_replace_rule_str_value_path_error(self):
        schema = {
            'far': properties.Schema(properties.Schema.STRING),
            'bar': properties.Schema(properties.Schema.STRING)
        }
        data = {'far': 'one', 'bar': 'two'}
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props, translation.TranslationRule.REPLACE, ['bar'],
            value_path=['far'])
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('bar'))
        ex = self.assertRaises(exception.StackValidationFailed,
                               tran.translate, 'bar', data['bar'])
        self.assertEqual('Cannot define the following properties at '
                         'the same time: bar, far', six.text_type(ex))

    def test_replace_rule_str_value_path(self):
        schema = {
            'far': properties.Schema(properties.Schema.STRING),
            'bar': properties.Schema(properties.Schema.STRING)
        }
        props = properties.Properties(schema, {'far': 'one'})
        rule = translation.TranslationRule(
            props, translation.TranslationRule.REPLACE, ['bar'],
            value_path=['far'])
        props = properties.Properties(schema, {'far': 'one'})
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('bar'))
        result = tran.translate('bar')
        self.assertEqual('one', result)
        self.assertEqual('one', tran.resolved_translations['bar'])
        self.assertIsNone(tran.resolved_translations['far'])

    def test_replace_rule_str_invalid(self):
        schema = {
            'far': properties.Schema(properties.Schema.STRING),
            'bar': properties.Schema(properties.Schema.INTEGER)
        }
        data = {'far': 'one', 'bar': 2}
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props, translation.TranslationRule.REPLACE, ['bar'],
            props.get('far'))
        props.update_translation([rule])
        exc = self.assertRaises(exception.StackValidationFailed,
                                props.validate)
        self.assertEqual("Property error: bar: Value 'one' is not an integer",
                         six.text_type(exc))

    def test_delete_rule_list(self):
        schema = {
            'far': properties.Schema(
                properties.Schema.LIST,
                schema=properties.Schema(
                    properties.Schema.MAP,
                    schema={
                        'red': properties.Schema(
                            properties.Schema.STRING
                        ),
                        'check': properties.Schema(
                            properties.Schema.STRING
                        )
                    }
                )
            )}
        data = {
            'far': [{'red': 'blue'}, {'red': 'roses'}],
        }
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props, translation.TranslationRule.DELETE, ['far', 'red'])
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far.red'))
        self.assertIsNone(tran.translate('far.red'))
        self.assertIsNone(tran.resolved_translations['far.red'])

    def test_delete_rule_other(self):
        schema = {
            'far': properties.Schema(properties.Schema.STRING)
        }
        data = {'far': 'one'}
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props, translation.TranslationRule.DELETE, ['far'])
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far'))
        self.assertIsNone(tran.translate('far'))
        self.assertIsNone(tran.resolved_translations['far'])

    def _test_resolve_rule(self, is_list=False, check_error=False):
        class FakeClientPlugin(object):
            def find_name_id(self, entity=None, src_value='far'):
                if check_error:
                    raise exception.NotFound()
                if entity == 'rose':
                    return 'pink'
                return 'yellow'

        if is_list:
            schema = {
                'far': properties.Schema(
                    properties.Schema.LIST,
                    schema=properties.Schema(
                        properties.Schema.MAP,
                        schema={
                            'red': properties.Schema(
                                properties.Schema.STRING
                            )
                        }
                    )
                )}
        else:
            schema = {
                'far': properties.Schema(properties.Schema.STRING)
            }
        return FakeClientPlugin(), schema

    def test_resolve_rule_list_populated(self):
        client_plugin, schema = self._test_resolve_rule(is_list=True)
        data = {
            'far': [{'red': 'blue'}, {'red': 'roses'}],
        }
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.RESOLVE,
            ['far', 'red'],
            client_plugin=client_plugin,
            finder='find_name_id'
        )
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far.red'))
        result = tran.translate('far.0.red', data['far'][0]['red'])
        self.assertEqual('yellow', result)
        self.assertEqual('yellow', tran.resolved_translations['far.0.red'])

    def test_resolve_rule_nested_list_populated(self):
        client_plugin, schema = self._test_resolve_rule_nested_list()
        data = {
            'instances': [{'networks': [{'port': 'port1', 'net': 'net1'}]}]
        }
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.RESOLVE,
            ['instances', 'networks', 'port'],
            client_plugin=client_plugin,
            finder='find_name_id',
            entity='port'
        )
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('instances.networks.port'))
        result = tran.translate('instances.0.networks.0.port',
                                data['instances'][0]['networks'][0]['port'])
        self.assertEqual('port1_id', result)
        self.assertEqual('port1_id', tran.resolved_translations[
            'instances.0.networks.0.port'])

    def _test_resolve_rule_nested_list(self):
        class FakeClientPlugin(object):
            def find_name_id(self, entity=None, value=None):
                if entity == 'net':
                    return 'net1_id'
                if entity == 'port':
                    return 'port1_id'

        schema = {
            'instances': properties.Schema(
                properties.Schema.LIST,
                schema=properties.Schema(
                    properties.Schema.MAP,
                    schema={
                        'networks': properties.Schema(
                            properties.Schema.LIST,
                            schema=properties.Schema(
                                properties.Schema.MAP,
                                schema={
                                    'port': properties.Schema(
                                        properties.Schema.STRING,
                                    ),
                                    'net': properties.Schema(
                                        properties.Schema.STRING,
                                    ),
                                }
                            )
                        )
                    }
                )
            )}
        return FakeClientPlugin(), schema

    def test_resolve_rule_list_with_function(self):
        client_plugin, schema = self._test_resolve_rule(is_list=True)
        join_func = cfn_funcs.Join(None, 'Fn::Join', ['.', ['bar', 'baz']])
        data = {
            'far': [{'red': 'blue'}, {'red': join_func}],
        }
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.RESOLVE,
            ['far', 'red'],
            client_plugin=client_plugin,
            finder='find_name_id'
        )
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far.red'))
        result = tran.translate('far.0.red', data['far'][0]['red'])
        self.assertEqual('yellow', result)
        self.assertEqual('yellow', tran.resolved_translations['far.0.red'])

    def test_resolve_rule_list_with_ref(self):
        client_plugin, schema = self._test_resolve_rule(is_list=True)

        class rsrc(object):
            action = INIT = "INIT"

            def FnGetRefId(self):
                return 'resource_id'

        class DummyStack(dict):
            pass

        stack = DummyStack(another_res=rsrc())
        ref = hot_funcs.GetResource(stack, 'get_resource', 'another_res')
        data = {
            'far': [{'red': ref}],
        }
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.RESOLVE,
            ['far', 'red'],
            client_plugin=client_plugin,
            finder='find_name_id'
        )
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far.red'))
        result = tran.translate('far.0.red', data['far'][0]['red'])
        self.assertEqual('yellow', result)
        self.assertEqual('yellow', tran.resolved_translations['far.0.red'])

    def test_resolve_rule_list_strings(self):
        client_plugin, schema = self._test_resolve_rule()
        data = {'far': ['one', 'rose']}
        schema = {'far': properties.Schema(
            properties.Schema.LIST,
            schema=properties.Schema(
                properties.Schema.STRING
            )
        )}
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.RESOLVE,
            ['far'],
            client_plugin=client_plugin,
            finder='find_name_id')
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far'))
        result = tran.translate('far', data['far'])
        self.assertEqual(['yellow', 'pink'], result)
        self.assertEqual(['yellow', 'pink'],
                         tran.resolved_translations['far'])

    def test_resolve_rule_ignore_error(self):
        client_plugin, schema = self._test_resolve_rule(check_error=True)
        data = {'far': 'one'}
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.RESOLVE,
            ['far'],
            client_plugin=client_plugin,
            finder='find_name_id')
        tran = translation.Translation(props)
        tran.set_rules([rule], ignore_resolve_error=True)
        self.assertTrue(tran.has_translation('far'))
        result = tran.translate('far', data['far'])
        self.assertEqual('one', result)
        self.assertEqual('one', tran.resolved_translations['far'])

    def test_resolve_rule_other(self):
        client_plugin, schema = self._test_resolve_rule()
        data = {'far': 'one'}
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.RESOLVE,
            ['far'],
            client_plugin=client_plugin,
            finder='find_name_id')
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far'))
        result = tran.translate('far', data['far'])
        self.assertEqual('yellow', result)
        self.assertEqual('yellow', tran.resolved_translations['far'])

    def test_resolve_rule_other_with_ref(self):
        client_plugin, schema = self._test_resolve_rule()

        class rsrc(object):
            action = INIT = "INIT"

            def FnGetRefId(self):
                return 'resource_id'

        class DummyStack(dict):
            pass

        stack = DummyStack(another_res=rsrc())
        ref = hot_funcs.GetResource(stack, 'get_resource', 'another_res')
        data = {'far': ref}
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.RESOLVE,
            ['far'],
            client_plugin=client_plugin,
            finder='find_name_id')
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far'))
        result = tran.translate('far', data['far'])
        self.assertEqual('yellow', result)

    def test_resolve_rule_other_with_function(self):
        client_plugin, schema = self._test_resolve_rule()
        join_func = cfn_funcs.Join(None, 'Fn::Join', ['.', ['bar', 'baz']])
        data = {'far': join_func}
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.RESOLVE,
            ['far'],
            client_plugin=client_plugin,
            finder='find_name_id')
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far'))
        result = tran.translate('far', data['far'])
        self.assertEqual('yellow', result)
        self.assertEqual('yellow', tran.resolved_translations['far'])

    def test_resolve_rule_other_with_get_attr(self):
        client_plugin, schema = self._test_resolve_rule()

        class DummyStack(dict):
            pass

        class rsrc(object):
            pass

        stack = DummyStack(another_res=rsrc())
        attr_func = cfn_funcs.GetAtt(stack, 'Fn::GetAtt',
                                     ['another_res', 'name'])
        data = {'far': attr_func}
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.RESOLVE,
            ['far'],
            client_plugin=client_plugin,
            finder='find_name_id')
        tran = translation.Translation(props)
        tran.set_rules([rule], client_resolve=False)
        self.assertFalse(tran.store_translated_values)
        self.assertFalse(tran.has_translation('far'))
        result = tran.translate('far', 'no_check', data['far'])
        self.assertEqual('no_check', result)
        self.assertIsNone(tran.resolved_translations.get('far'))

    def test_resolve_rule_other_with_entity(self):
        client_plugin, schema = self._test_resolve_rule()
        data = {'far': 'one'}
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.RESOLVE,
            ['far'],
            client_plugin=client_plugin,
            finder='find_name_id',
            entity='rose')
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far'))
        result = tran.translate('far', data['far'])
        self.assertEqual('pink', result)
        self.assertEqual('pink', tran.resolved_translations['far'])

    def test_property_json_param_correct_translation(self):
        """Test case when property with sub-schema takes json param."""
        schema = {
            'far': properties.Schema(properties.Schema.MAP,
                                     schema={
                                         'bar': properties.Schema(
                                             properties.Schema.STRING,
                                         ),
                                         'dar': properties.Schema(
                                             properties.Schema.STRING
                                         )
                                     })
        }

        class DummyStack(dict):
            @property
            def parameters(self):
                return mock.Mock()

        param = hot_funcs.GetParam(DummyStack(json_far='json_far'),
                                   'get_param',
                                   'json_far')
        param.parameters = {
            'json_far': parameters.JsonParam(
                'json_far',
                {'Type': 'Json'},
                '{"dar": "rad"}').value()}
        data = {'far': param}
        props = properties.Properties(schema, data,
                                      resolver=function.resolve)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.REPLACE,
            ['far', 'bar'],
            value_path=['far', 'dar'])
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far.bar'))
        prop_data = props['far']
        result = tran.translate('far.bar', prop_data['bar'], prop_data)
        self.assertEqual('rad', result)
        self.assertEqual('rad', tran.resolved_translations['far.bar'])

    def test_property_json_param_to_list_correct_translation(self):
        """Test case when list property with sub-schema takes json param."""
        schema = {
            'far': properties.Schema(properties.Schema.LIST,
                                     schema=properties.Schema(
                                         properties.Schema.MAP,
                                         schema={
                                             'bar': properties.Schema(
                                                 properties.Schema.STRING,
                                             ),
                                             'dar': properties.Schema(
                                                 properties.Schema.STRING
                                             )
                                         }
                                     ))
        }

        class DummyStack(dict):
            @property
            def parameters(self):
                return mock.Mock()

        param = hot_funcs.GetParam(DummyStack(json_far='json_far'),
                                   'get_param',
                                   'json_far')
        param.parameters = {
            'json_far': parameters.JsonParam(
                'json_far',
                {'Type': 'Json'},
                '{"dar": "rad"}').value()}
        data = {'far': [param]}
        props = properties.Properties(schema, data,
                                      resolver=function.resolve)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.REPLACE,
            ['far', 'bar'],
            value_name='dar')
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far.0.bar'))
        prop_data = props['far']
        result = tran.translate('far.0.bar', prop_data[0]['bar'],
                                prop_data[0])
        self.assertEqual('rad', result)
        self.assertEqual('rad', tran.resolved_translations['far.0.bar'])

    def test_property_commadelimitedlist_param_correct_translation(self):
        """Test when property with sub-schema takes comma_delimited_list."""
        schema = {
            'far': properties.Schema(
                properties.Schema.LIST,
                schema=properties.Schema(
                    properties.Schema.STRING,
                )
            ),
            'boo': properties.Schema(
                properties.Schema.STRING
            )}

        class DummyStack(dict):
            @property
            def parameters(self):
                return mock.Mock()

        param = hot_funcs.GetParam(DummyStack(list_far='list_far'),
                                   'get_param',
                                   'list_far')
        param.parameters = {
            'list_far': parameters.CommaDelimitedListParam(
                'list_far',
                {'Type': 'CommaDelimitedList'},
                "white,roses").value()}
        data = {'far': param, 'boo': 'chrysanthemums'}
        props = properties.Properties(schema, data,
                                      resolver=function.resolve)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.ADD,
            ['far'],
            [props.get('boo')])
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far'))
        result = tran.translate('far', props['far'])
        self.assertEqual(['white', 'roses', 'chrysanthemums'], result)
        self.assertEqual(['white', 'roses', 'chrysanthemums'],
                         tran.resolved_translations['far'])

    def test_list_list_add_translation_rule(self):
        schema = {
            'far': properties.Schema(
                properties.Schema.LIST,
                schema=properties.Schema(
                    properties.Schema.MAP,
                    schema={
                        'bar': properties.Schema(
                            properties.Schema.LIST,
                            schema=properties.Schema(properties.Schema.STRING)
                        ),
                        'car': properties.Schema(properties.Schema.STRING)
                    }
                )
            )
        }
        data = {'far': [{'bar': ['shar'], 'car': 'man'}, {'car': 'first'}]}
        props = properties.Properties(schema, data,
                                      resolver=function.resolve)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.ADD,
            ['far', 'bar'],
            value_name='car'
        )
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far.0.bar'))
        result = tran.translate('far.0.bar', props['far'][0]['bar'],
                                props['far'][0])
        self.assertEqual(['shar', 'man'], result)
        self.assertEqual(['shar', 'man'],
                         tran.resolved_translations['far.0.bar'])
        result = tran.translate('far.1.bar', prop_data=props['far'][1])
        self.assertEqual(['first'], result)
        self.assertEqual(['first'], tran.resolved_translations['far.1.bar'])

    def test_replace_rule_map_with_custom_value_path(self):
        schema = {
            'far': properties.Schema(
                properties.Schema.MAP,
                schema={
                    'red': properties.Schema(
                        properties.Schema.STRING
                    )
                }
            ),
            'bar': properties.Schema(
                properties.Schema.MAP
            )}
        data = {
            'far': {},
            'bar': {'red': 'dak'}
        }
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.REPLACE,
            ['far', 'red'],
            value_path=['bar'],
            custom_value_path=['red']
        )
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far.red'))
        result = tran.translate('far.red')
        self.assertEqual('dak', result)
        self.assertEqual('dak', tran.resolved_translations['far.red'])

    def test_replace_rule_list_with_custom_value_path(self):
        schema = {
            'far': properties.Schema(
                properties.Schema.LIST,
                schema=properties.Schema(
                    properties.Schema.MAP,
                    schema={
                        'red': properties.Schema(
                            properties.Schema.STRING
                        ),
                        'blue': properties.Schema(
                            properties.Schema.MAP
                        )
                    }
                )
            )}
        data = {
            'far': [{'blue': {'black': {'white': 'daisy'}}},
                    {'red': 'roses'}]
        }
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.REPLACE,
            ['far', 'red'],
            value_name='blue',
            custom_value_path=['black', 'white']
        )
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far.0.red'))
        result = tran.translate('far.0.red', prop_data=data['far'][0])
        self.assertEqual('daisy', result)
        self.assertEqual('daisy', tran.resolved_translations['far.0.red'])

    def test_add_rule_list_with_custom_value_path(self):
        schema = {
            'far': properties.Schema(
                properties.Schema.LIST,
                schema=properties.Schema(
                    properties.Schema.MAP,
                    schema={
                        'red': properties.Schema(
                            properties.Schema.LIST,
                            schema=properties.Schema(properties.Schema.STRING)
                        ),
                        'blue': properties.Schema(
                            properties.Schema.MAP
                        )
                    }
                )
            )}
        data = {
            'far': [{'blue': {'black': {'white': 'daisy',
                                        'check': ['one']}}},
                    {'red': ['roses']}]
        }
        props = properties.Properties(schema, data)
        rule = translation.TranslationRule(
            props,
            translation.TranslationRule.ADD,
            ['far', 'red'],
            value_name='blue',
            custom_value_path=['black', 'check']
        )
        tran = translation.Translation(props)
        tran.set_rules([rule])
        self.assertTrue(tran.has_translation('far.0.red'))
        result = tran.translate('far.0.red', data['far'][0].get('red'),
                                data['far'][0])
        self.assertEqual(['one'], result)
        self.assertEqual(['one'], tran.resolved_translations['far.0.red'])
        self.assertEqual(['roses'], tran.translate('far.1.red',
                                                   data['far'][1]['red'],
                                                   data['far'][1]))

    def test_set_rules_none(self):
        tran = translation.Translation()
        self.assertEqual({}, tran._rules)

    def test_set_no_resolve_rules(self):
        rules = [
            translation.TranslationRule(
                self.props,
                translation.TranslationRule.RESOLVE,
                ['a'],
                client_plugin=mock.ANY,
                finder='finder'
            )
        ]
        tran = translation.Translation()
        tran.set_rules(rules, client_resolve=False)
        self.assertEqual({}, tran._rules)

    def test_translate_add(self):
        rules = [
            translation.TranslationRule(
                self.props,
                translation.TranslationRule.ADD,
                ['a', 'b'],
                value=['check']
            )
        ]
        tran = translation.Translation()
        tran.set_rules(rules)
        result = tran.translate('a.b', ['test'])
        self.assertEqual(['test', 'check'], result)
        self.assertEqual(['test', 'check'], tran.resolved_translations['a.b'])

        # Test without storing
        tran.resolved_translations = {}
        tran.store_translated_values = False
        result = tran.translate('a.b', ['test'])
        self.assertEqual(['test', 'check'], result)
        self.assertEqual({}, tran.resolved_translations)
        tran.store_translated_values = True

        # Test no prop_value
        self.assertEqual(['check'], tran.translate('a.b', None))

        # Check digits in path skipped for rule
        self.assertEqual(['test', 'check'], tran.translate('a.0.b', ['test']))

    def test_translate_delete(self):
        rules = [
            translation.TranslationRule(
                self.props,
                translation.TranslationRule.DELETE,
                ['a']
            )
        ]
        tran = translation.Translation()
        tran.set_rules(rules)
        self.assertIsNone(tran.translate('a'))
        self.assertIsNone(tran.resolved_translations['a'])

        # Test without storing
        tran.resolved_translations = {}
        tran.store_translated_values = False
        self.assertIsNone(tran.translate('a'))
        self.assertEqual({}, tran.resolved_translations)
        tran.store_translated_values = True
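# ---------------------------------------------------------------------------
# Editor's note: the block below is an illustrative sketch, not part of the
# original heat-10.0.2 sources. It condenses the translation API exercised by
# the tests above; the property names 'far' and 'bar' simply mirror the test
# fixtures.
from heat.engine import properties
from heat.engine import translation

schema = {
    'far': properties.Schema(properties.Schema.STRING),
    'bar': properties.Schema(properties.Schema.STRING),
}
props = properties.Properties(schema, {'far': 'one'})
# A REPLACE rule with value_path copies the value of 'far' into 'bar'
# and clears 'far', as verified by test_replace_rule_str_value_path.
rule = translation.TranslationRule(
    props, translation.TranslationRule.REPLACE, ['bar'],
    value_path=['far'])
tran = translation.Translation(props)
tran.set_rules([rule])
assert tran.translate('bar') == 'one'
assert tran.resolved_translations['far'] is None
# ---------------------------------------------------------------------------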
heat-10.0.2/heat/tests/test_rsrc_defn.py0000666000175000017500000002756413343562340020215 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import six

from heat.common import exception
from heat.common import template_format
from heat.engine.cfn import functions as cfn_funcs
from heat.engine.hot import functions as hot_funcs
from heat.engine import properties
from heat.engine import rsrc_defn
from heat.tests import common
from heat.tests import utils


TEMPLATE_WITH_EX_REF_IMPLICIT_DEPEND = '''
heat_template_version: 2016-10-14
resources:
  test1:
    type: OS::Heat::TestResource
    external_id: foobar
    properties:
      value: {get_resource: test2}
  test2:
    type: OS::Heat::TestResource
'''

TEMPLATE_WITH_INVALID_EXPLICIT_DEPEND = '''
heat_template_version: 2016-10-14
resources:
  test1:
    type: OS::Heat::TestResource
  test3:
    type: OS::Heat::TestResource
    depends_on: test2
'''


class ResourceDefinitionTest(common.HeatTestCase):

    def make_me_one_with_everything(self):
        return rsrc_defn.ResourceDefinition(
            'rsrc', 'SomeType',
            properties={'Foo': cfn_funcs.Join(None,
                                              'Fn::Join', ['a', ['b', 'r']]),
                        'Blarg': 'wibble'},
            metadata={'Baz': cfn_funcs.Join(None,
                                            'Fn::Join',
                                            ['u', ['q', '', 'x']])},
            depends=['other_resource'],
            deletion_policy='Retain',
            update_policy={'SomePolicy': {}})

    def test_properties_default(self):
        rd = rsrc_defn.ResourceDefinition('rsrc', 'SomeType')
        self.assertEqual({}, rd.properties({}))

    def test_properties(self):
        rd = self.make_me_one_with_everything()
        schema = {
            'Foo': properties.Schema(properties.Schema.STRING),
            'Blarg': properties.Schema(properties.Schema.STRING,
                                       default=''),
            'Baz': properties.Schema(properties.Schema.STRING,
                                     default='quux'),
        }

        props = rd.properties(schema)

        self.assertEqual('bar', props['Foo'])
        self.assertEqual('wibble', props['Blarg'])
        self.assertEqual('quux', props['Baz'])

    def test_metadata_default(self):
        rd = rsrc_defn.ResourceDefinition('rsrc', 'SomeType')
        self.assertEqual({}, rd.metadata())

    def test_metadata(self):
        rd = self.make_me_one_with_everything()
        metadata = rd.metadata()
        self.assertEqual({'Baz': 'quux'}, metadata)
        self.assertIsInstance(metadata['Baz'], six.string_types)

    def test_dependencies_default(self):
        rd = rsrc_defn.ResourceDefinition('rsrc', 'SomeType')
        stack = {'foo': 'FOO', 'bar': 'BAR'}
        self.assertEqual(set(), rd.required_resource_names())
        self.assertEqual([], list(rd.dependencies(stack)))

    def test_dependencies_explicit(self):
        rd = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                          depends=['foo'])
        stack = {'foo': 'FOO', 'bar': 'BAR'}
        self.assertEqual({'foo'}, rd.required_resource_names())
        self.assertEqual(['FOO'], list(rd.dependencies(stack)))

    def test_dependencies_explicit_ext(self):
        rd = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                          depends=['foo'],
                                          external_id='abc')
        stack = {'foo': 'FOO', 'bar': 'BAR'}
        self.assertRaises(exception.InvalidExternalResourceDependency,
                          rd.dependencies, stack)

    def test_dependencies_implicit_ext(self):
        t = template_format.parse(TEMPLATE_WITH_EX_REF_IMPLICIT_DEPEND)
        stack = utils.parse_stack(t)
        rsrc = stack['test1']
        self.assertEqual([], list(rsrc.t.dependencies(stack)))

    def test_dependencies_explicit_invalid(self):
        t = template_format.parse(TEMPLATE_WITH_INVALID_EXPLICIT_DEPEND)
        stack = utils.parse_stack(t)
        rd = stack.t.resource_definitions(stack)['test3']
        self.assertEqual({'test2'}, rd.required_resource_names())
        self.assertRaises(exception.InvalidTemplateReference,
                          lambda: list(rd.dependencies(stack)))

    def test_deletion_policy_default(self):
        rd = rsrc_defn.ResourceDefinition('rsrc', 'SomeType')
        self.assertEqual(rsrc_defn.ResourceDefinition.DELETE,
                         rd.deletion_policy())

    def test_deletion_policy(self):
        for policy in rsrc_defn.ResourceDefinition.DELETION_POLICIES:
            rd = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                              deletion_policy=policy)
            self.assertEqual(policy, rd.deletion_policy())

    def test_deletion_policy_invalid(self):
        self.assertRaises(AssertionError, rsrc_defn.ResourceDefinition,
                          'rsrc', 'SomeType', deletion_policy='foo')

    def test_update_policy_default(self):
        rd = rsrc_defn.ResourceDefinition('rsrc', 'SomeType')
        self.assertEqual({}, rd.update_policy({}))

    def test_update_policy(self):
        rd = self.make_me_one_with_everything()
        policy_schema = {'Foo': properties.Schema(properties.Schema.STRING,
                                                  default='bar')}
        schema = {
            'SomePolicy': properties.Schema(properties.Schema.MAP,
                                            schema=policy_schema),
        }

        up = rd.update_policy(schema)

        self.assertEqual('bar', up['SomePolicy']['Foo'])

    def test_freeze(self):
        rd = self.make_me_one_with_everything()

        frozen = rd.freeze()

        self.assertEqual('bar', frozen._properties['Foo'])
        self.assertEqual('quux', frozen._metadata['Baz'])

    def test_freeze_override(self):
        rd = self.make_me_one_with_everything()

        frozen = rd.freeze(metadata={'Baz': 'wibble'})

        self.assertEqual('bar', frozen._properties['Foo'])
        self.assertEqual('wibble', frozen._metadata['Baz'])

    def test_render_hot(self):
        rd = self.make_me_one_with_everything()

        expected_hot = {
            'type': 'SomeType',
            'properties': {'Foo': {'Fn::Join': ['a', ['b', 'r']]},
                           'Blarg': 'wibble'},
            'metadata': {'Baz': {'Fn::Join': ['u', ['q', '', 'x']]}},
            'depends_on': ['other_resource'],
            'deletion_policy': 'Retain',
            'update_policy': {'SomePolicy': {}},
        }

        self.assertEqual(expected_hot, rd.render_hot())

    def test_render_hot_empty(self):
        rd = rsrc_defn.ResourceDefinition('rsrc', 'SomeType')

        expected_hot = {
            'type': 'SomeType',
        }

        self.assertEqual(expected_hot, rd.render_hot())

    def test_template_equality(self):
        class FakeStack(object):
            def __init__(self, params):
                self.parameters = params

        def get_param_defn(value):
            stack = FakeStack({'Foo': value})
            param_func = hot_funcs.GetParam(stack, 'get_param', 'Foo')
            return rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                                {'Foo': param_func})

        self.assertEqual(get_param_defn('bar'), get_param_defn('baz'))

    def test_hash_equal(self):
        rd1 = self.make_me_one_with_everything()
        rd2 = self.make_me_one_with_everything()
        self.assertEqual(rd1, rd2)
        self.assertEqual(hash(rd1), hash(rd2))

    def test_hash_names(self):
        rd1 = rsrc_defn.ResourceDefinition('rsrc1', 'SomeType')
        rd2 = rsrc_defn.ResourceDefinition('rsrc2', 'SomeType')
        self.assertEqual(rd1, rd2)
        self.assertEqual(hash(rd1), hash(rd2))

    def test_hash_types(self):
        rd1 = rsrc_defn.ResourceDefinition('rsrc', 'SomeType1')
        rd2 = rsrc_defn.ResourceDefinition('rsrc', 'SomeType2')
        self.assertNotEqual(rd1, rd2)
        self.assertNotEqual(hash(rd1), hash(rd2))


class ResourceDefinitionDiffTest(common.HeatTestCase):
    def test_properties_diff(self):
        before = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                              properties={'Foo': 'blarg'},
                                              update_policy={'baz': 'quux'},
                                              metadata={'baz': 'quux'})
        after = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                             properties={'Foo': 'wibble'},
                                             update_policy={'baz': 'quux'},
                                             metadata={'baz': 'quux'})

        diff = after - before
        self.assertTrue(diff.properties_changed())
        self.assertFalse(diff.update_policy_changed())
        self.assertFalse(diff.metadata_changed())
        self.assertTrue(diff)

    def test_update_policy_diff(self):
        before = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                              properties={'baz': 'quux'},
                                              update_policy={'Foo': 'blarg'},
                                              metadata={'baz': 'quux'})
        after = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                             properties={'baz': 'quux'},
                                             update_policy={'Foo': 'wibble'},
                                             metadata={'baz': 'quux'})

        diff = after - before
        self.assertFalse(diff.properties_changed())
        self.assertTrue(diff.update_policy_changed())
        self.assertFalse(diff.metadata_changed())
        self.assertTrue(diff)

    def test_metadata_diff(self):
        before = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                              properties={'baz': 'quux'},
                                              update_policy={'baz': 'quux'},
                                              metadata={'Foo': 'blarg'})
        after = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                             properties={'baz': 'quux'},
                                             update_policy={'baz': 'quux'},
                                             metadata={'Foo': 'wibble'})

        diff = after - before
        self.assertFalse(diff.properties_changed())
        self.assertFalse(diff.update_policy_changed())
        self.assertTrue(diff.metadata_changed())
        self.assertTrue(diff)

    def test_no_diff(self):
        before = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                              properties={'Foo': 'blarg'},
                                              update_policy={'bar': 'quux'},
                                              metadata={'baz': 'wibble'},
                                              depends=['other_resource'],
                                              deletion_policy='Delete')
        after = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                             properties={'Foo': 'blarg'},
                                             update_policy={'bar': 'quux'},
                                             metadata={'baz': 'wibble'},
                                             depends=['other_other_resource'],
                                             deletion_policy='Retain')

        diff = after - before
        self.assertFalse(diff.properties_changed())
        self.assertFalse(diff.update_policy_changed())
        self.assertFalse(diff.metadata_changed())
        self.assertFalse(diff)
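# ---------------------------------------------------------------------------
# Editor's note: illustrative sketch, not part of the original sources. It
# restates the diff protocol verified by ResourceDefinitionDiffTest above:
# subtracting one ResourceDefinition from another yields a diff object that
# reports which of the three mutable sections changed.
from heat.engine import rsrc_defn

before = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                      properties={'Foo': 'blarg'})
after = rsrc_defn.ResourceDefinition('rsrc', 'SomeType',
                                     properties={'Foo': 'wibble'})
diff = after - before
assert diff.properties_changed()
assert not diff.update_policy_changed()
assert not diff.metadata_changed()
assert bool(diff)
# ---------------------------------------------------------------------------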
heat-10.0.2/heat/tests/test_common_env_util.py0000666000175000017500000002660313343562340021442 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json

from heat.common import environment_util as env_util
from heat.common import exception
from heat.engine import parameters
from heat.tests import common
from heat.tests import utils


class TestEnvironmentUtil(common.HeatTestCase):

    def test_empty_merge_strategies(self):
        merge_strategies = {}
        param_strategy = env_util.get_param_merge_strategy(merge_strategies,
                                                           'param1')
        self.assertEqual(env_util.OVERWRITE, param_strategy)

    def test_default_merge_strategy(self):
        merge_strategies = {'default': 'deep_merge'}
        param_strategy = env_util.get_param_merge_strategy(merge_strategies,
                                                           'param1')
        self.assertEqual(env_util.DEEP_MERGE, param_strategy)

    def test_param_specific_merge_strategy(self):
        merge_strategies = {'default': 'merge',
                            'param1': 'deep_merge'}
        param_strategy = env_util.get_param_merge_strategy(merge_strategies,
                                                           'param1')
        self.assertEqual(env_util.DEEP_MERGE, param_strategy)

    def test_wrong_param_strategy(self):
        merge_strategies = {'default': 'merge',
                            'param1': 'unknown'}
        param_strategy = env_util.get_param_merge_strategy(merge_strategies,
                                                           'param1')
        self.assertEqual(env_util.MERGE, param_strategy)

    def test_merge_strategies_none(self):
        merge_strategies = None
        param_strategy = env_util.get_param_merge_strategy(merge_strategies,
                                                           'param1')
        self.assertEqual(env_util.OVERWRITE, param_strategy)


class TestMergeEnvironments(common.HeatTestCase):
    def setUp(self):
        super(TestMergeEnvironments, self).setUp()
        self.ctx = utils.dummy_context(
            tenant_id='stack_service_test_tenant')
        # Setup
        self.params = {'parameters': {},
                       'resource_registry': {},
                       'parameter_defaults': {}}
        self.env_1 = {'parameters': {
            'str_value1': "string1",
            'str_value2': "string2",
            'del_lst_value1': '1,2',
            'del_lst_value2': '3,4',
            'lst_value1': [1, 2],
            'json_value1': {"1": ["str1", "str2"]},
            'json_value2': {"2": ["test1", "test2"]}},
            'resource_registry': {
                'test::R1': "OS::Heat::RandomString",
                'test::R2': "BROKEN"},
            'parameter_defaults': {
                'lst_value2': [3, 4]}}
        self.env_2 = {'parameters': {
            'str_value1': "string3",
            'str_value2': "string4",
            'del_lst_value1': '5,6',
            'del_lst_value2': '7,8',
            'lst_value1': [5, 6],
            'json_value1': {"3": ["str3", "str4"]},
            'json_value2': {"4": ["test3", "test4"]}},
            'resource_registry': {
                'test::R2': "OS::Heat::None"},
            'parameter_defaults': {
                'lst_value2': [7, 8]}}
        self.env_3 = {'parameters': {
            'lst_value1': [9, 10],
            'json_value1': {"5": ["str5"]}}}
        self.env_4 = {'parameter_defaults': {
            'lst_value2': [9, 10]}}

        self.param_schemata = {
            'str_value1': parameters.Schema(parameters.Schema.STRING),
            'str_value2': parameters.Schema(parameters.Schema.STRING),
            'del_lst_value1': parameters.Schema(parameters.Schema.LIST),
            'del_lst_value2': parameters.Schema(parameters.Schema.LIST),
            'lst_value1': parameters.Schema(parameters.Schema.LIST,
                                            default=[7]),
            'lst_value2': parameters.Schema(parameters.Schema.LIST),
            'json_value1': parameters.Schema(parameters.Schema.MAP),
            'json_value2': parameters.Schema(parameters.Schema.MAP)}

    def test_merge_envs_with_param_default_merge_strategy(self):
        files = {'env_1': json.dumps(self.env_1),
                 'env_2': json.dumps(self.env_2)}
        environment_files = ['env_1', 'env_2']

        # Test
        env_util.merge_environments(environment_files, files, self.params,
                                    self.param_schemata)

        # Verify
        expected = {'parameters': {
            'json_value1': {u'3': [u'str3', u'str4']},
            'json_value2': {u'4': [u'test3', u'test4']},
            'del_lst_value1': '5,6',
            'del_lst_value2': '7,8',
            'lst_value1': [5, 6],
            'str_value1': u'string3',
            'str_value2': u'string4'},
            'resource_registry': {
                'test::R1': "OS::Heat::RandomString",
                'test::R2': "OS::Heat::None"},
            'parameter_defaults': {
                'lst_value2': [7, 8]}}
        self.assertEqual(expected, self.params)

    def test_merge_envs_with_specified_default(self):
        merge_strategies = {'default': 'deep_merge'}
        self.env_2['parameter_merge_strategies'] = merge_strategies
        files = {'env_1': json.dumps(self.env_1),
                 'env_2': json.dumps(self.env_2)}
        environment_files = ['env_1', 'env_2']

        # Test
        env_util.merge_environments(environment_files, files, self.params,
                                    self.param_schemata)

        # Verify
        expected = {'parameters': {
            'json_value1': {u'3': [u'str3', u'str4'],
                            u'1': [u'str1', u'str2']},  # added
            'json_value2': {u'4': [u'test3', u'test4'],
                            u'2': [u'test1', u'test2']},
            'del_lst_value1': '1,2,5,6',
            'del_lst_value2': '3,4,7,8',
            'lst_value1': [1, 2, 5, 6],  # added
            'str_value1': u'string1string3',
            'str_value2': u'string2string4'},
            'resource_registry': {
                'test::R1': "OS::Heat::RandomString",
                'test::R2': "OS::Heat::None"},
            'parameter_defaults': {
                'lst_value2': [3, 4, 7, 8]}}
        self.assertEqual(expected, self.params)

    def test_merge_envs_with_param_specific_merge_strategy(self):
        merge_strategies = {
            'default': 'overwrite',
            'lst_value1': 'merge',
            'lst_value2': 'merge',
            'json_value1': 'deep_merge'}
        self.env_2['parameter_merge_strategies'] = merge_strategies
        files = {'env_1': json.dumps(self.env_1),
                 'env_2': json.dumps(self.env_2)}
        environment_files = ['env_1', 'env_2']

        # Test
        env_util.merge_environments(environment_files, files, self.params,
                                    self.param_schemata)

        # Verify
        expected = {'parameters': {
            'json_value1': {u'3': [u'str3', u'str4'],
                            u'1': [u'str1', u'str2']},  # added
            'json_value2': {u'4': [u'test3', u'test4']},
            'del_lst_value1': '5,6',
            'del_lst_value2': '7,8',
            'lst_value1': [1, 2, 5, 6],  # added
            'str_value1': u'string3',
            'str_value2': u'string4'},
            'resource_registry': {
                'test::R1': 'OS::Heat::RandomString',
                'test::R2': 'OS::Heat::None'},
            'parameter_defaults': {
                'lst_value2': [3, 4, 7, 8]}}
        self.assertEqual(expected, self.params)

    def test_merge_envs_with_param_conflicting_merge_strategy(self):
        merge_strategies = {
            'default': "overwrite",
            'lst_value1': "merge",
            'json_value1': "deep_merge"}
        self.env_2['parameter_merge_strategies'] = merge_strategies
        files = {'env_1': json.dumps(self.env_1),
                 'env_2': json.dumps(self.env_2),
                 'env_3': json.dumps(self.env_3)}
        environment_files = ['env_1', 'env_2', 'env_3']

        # Test
        self.assertRaises(exception.ConflictingMergeStrategyForParam,
                          env_util.merge_environments,
                          environment_files, files,
                          self.params, self.param_schemata)

    def test_merge_envs_with_param_defaults_conflicting_merge_strategy(self):
        merge_strategies = {
            'default': "overwrite",
            'lst_value2': "merge"}
        self.env_2['parameter_merge_strategies'] = merge_strategies
        files = {'env_1': json.dumps(self.env_1),
                 'env_2': json.dumps(self.env_2),
                 'env_4': json.dumps(self.env_4)}
        environment_files = ['env_1', 'env_2', 'env_4']

        # Test
        self.assertRaises(exception.ConflictingMergeStrategyForParam,
                          env_util.merge_environments,
                          environment_files, files,
                          self.params, self.param_schemata)

    def test_merge_environments_no_env_files(self):
        files = {'env_1': json.dumps(self.env_1)}

        # Test - Should ignore env_1 in files
        env_util.merge_environments(None, files, self.params,
                                    self.param_schemata)

        # Verify
        expected = {'parameters': {},
                    'resource_registry': {},
                    'parameter_defaults': {}}
        self.assertEqual(expected, self.params)

    def test_merge_envs_with_zeros(self):
        env1 = {'parameter_defaults': {'value1': 1}}
        env2 = {'parameter_defaults': {'value1': 0}}
        files = {'env_1': json.dumps(env1),
                 'env_2': json.dumps(env2)}
        environment_files = ['env_1', 'env_2']

        param_schemata = {
            'value1': parameters.Schema(parameters.Schema.NUMBER)}
        env_util.merge_environments(environment_files, files, self.params,
                                    param_schemata)

        self.assertEqual({'value1': 0}, self.params['parameter_defaults'])

    def test_merge_envs_with_zeros_in_maps(self):
        env1 = {'parameter_defaults': {'value1': {'foo': 1}}}
        env2 = {'parameter_defaults': {'value1': {'foo': 0}},
                'parameter_merge_strategies': {'value1': 'deep_merge'}}
        files = {'env_1': json.dumps(env1),
                 'env_2': json.dumps(env2)}
        environment_files = ['env_1', 'env_2']

        param_schemata = {
            'value1': parameters.Schema(parameters.Schema.MAP)}
        env_util.merge_environments(environment_files, files, self.params,
                                    param_schemata)

        self.assertEqual({'value1': {'foo': 0}},
                         self.params['parameter_defaults'])
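# ---------------------------------------------------------------------------
# Editor's note: illustrative sketch, not part of the original sources. It
# shows merge_environments as driven by the tests above: later environment
# files win under the default 'overwrite' strategy, including falsy values
# such as 0 (the regression covered by test_merge_envs_with_zeros).
import json

from heat.common import environment_util as env_util
from heat.engine import parameters

params = {'parameters': {}, 'resource_registry': {},
          'parameter_defaults': {}}
files = {'env_1': json.dumps({'parameter_defaults': {'value1': 1}}),
         'env_2': json.dumps({'parameter_defaults': {'value1': 0}})}
schemata = {'value1': parameters.Schema(parameters.Schema.NUMBER)}
env_util.merge_environments(['env_1', 'env_2'], files, params, schemata)
assert params['parameter_defaults'] == {'value1': 0}
# ---------------------------------------------------------------------------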
heat-10.0.2/heat/tests/test_nested_stack.py0000666000175000017500000003440313343562340020711 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_config import cfg
from requests import exceptions
import six
import yaml

from heat.common import exception
from heat.common import identifier
from heat.common import template_format
from heat.common import urlfetch
from heat.engine import api
from heat.engine import node_data
from heat.engine import resource
from heat.engine.resources.aws.cfn import stack as stack_res
from heat.engine import rsrc_defn
from heat.engine import stack as parser
from heat.engine import template
from heat.objects import resource_data as resource_data_object
from heat.tests import common
from heat.tests import generic_resource as generic_rsrc
from heat.tests import utils


class NestedStackTest(common.HeatTestCase):
    test_template = '''
HeatTemplateFormatVersion: '2012-12-12'
Resources:
  the_nested:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://server.test/the.template
      Parameters:
        KeyName: foo
'''

    nested_template = '''
HeatTemplateFormatVersion: '2012-12-12'
Parameters:
  KeyName:
    Type: String
Outputs:
  Foo:
    Value: bar
'''

    update_template = '''
HeatTemplateFormatVersion: '2012-12-12'
Parameters:
  KeyName:
    Type: String
Outputs:
  Bar:
    Value: foo
'''

    def setUp(self):
        super(NestedStackTest, self).setUp()
        self.patchobject(urlfetch, 'get')

    def validate_stack(self, template):
        t = template_format.parse(template)
        stack = self.parse_stack(t)
        res = stack.validate()
        self.assertIsNone(res)
        return stack

    def parse_stack(self, t, data=None):
        ctx = utils.dummy_context('test_username', 'aaaa', 'password')
        stack_name = 'test_stack'
        tmpl = template.Template(t)
        stack = parser.Stack(ctx, stack_name, tmpl, adopt_stack_data=data)
        stack.store()
        return stack

    @mock.patch.object(parser.Stack, 'total_resources')
    def test_nested_stack_three_deep(self, tr):
        root_template = '''
HeatTemplateFormatVersion: 2012-12-12
Resources:
  Nested:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: 'https://server.test/depth1.template'
'''
        depth1_template = '''
HeatTemplateFormatVersion: 2012-12-12
Resources:
  Nested:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: 'https://server.test/depth2.template'
'''
        depth2_template = '''
HeatTemplateFormatVersion: 2012-12-12
Resources:
  Nested:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: 'https://server.test/depth3.template'
      Parameters:
        KeyName: foo
'''
        urlfetch.get.side_effect = [
            depth1_template,
            depth2_template,
            self.nested_template]
        tr.return_value = 2
        self.validate_stack(root_template)
        calls = [mock.call('https://server.test/depth1.template'),
                 mock.call('https://server.test/depth2.template'),
                 mock.call('https://server.test/depth3.template')]
        urlfetch.get.assert_has_calls(calls)

    @mock.patch.object(parser.Stack, 'total_resources')
    def test_nested_stack_six_deep(self, tr):
        tmpl = '''
HeatTemplateFormatVersion: 2012-12-12
Resources:
  Nested:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: 'https://server.test/depth%i.template'
'''
        root_template = tmpl % 1
        depth1_template = tmpl % 2
        depth2_template = tmpl % 3
        depth3_template = tmpl % 4
        depth4_template = tmpl % 5
        depth5_template = tmpl % 6
        depth5_template += '''
      Parameters:
        KeyName: foo
'''
        urlfetch.get.side_effect = [
            depth1_template,
            depth2_template,
            depth3_template,
            depth4_template,
            depth5_template,
            self.nested_template]

        tr.return_value = 5

        t = template_format.parse(root_template)
        stack = self.parse_stack(t)
        stack['Nested'].root_stack_id = '1234'
        res = self.assertRaises(exception.StackValidationFailed,
                                stack.validate)
        self.assertIn('Recursion depth exceeds', six.text_type(res))

        calls = [mock.call('https://server.test/depth1.template'),
                 mock.call('https://server.test/depth2.template'),
                 mock.call('https://server.test/depth3.template'),
                 mock.call('https://server.test/depth4.template'),
                 mock.call('https://server.test/depth5.template'),
                 mock.call('https://server.test/depth6.template')]
        urlfetch.get.assert_has_calls(calls)

    def test_nested_stack_four_wide(self):
        root_template = '''
HeatTemplateFormatVersion: 2012-12-12
Resources:
  Nested:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: 'https://server.test/depth1.template'
      Parameters:
        KeyName: foo
  Nested2:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: 'https://server.test/depth2.template'
      Parameters:
        KeyName: foo
  Nested3:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: 'https://server.test/depth3.template'
      Parameters:
        KeyName: foo
  Nested4:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: 'https://server.test/depth4.template'
      Parameters:
        KeyName: foo
'''
        urlfetch.get.return_value = self.nested_template
        self.validate_stack(root_template)
        calls = [mock.call('https://server.test/depth1.template'),
                 mock.call('https://server.test/depth2.template'),
                 mock.call('https://server.test/depth3.template'),
                 mock.call('https://server.test/depth4.template')]
        urlfetch.get.assert_has_calls(calls, any_order=True)

    @mock.patch.object(parser.Stack, 'total_resources')
    def test_nested_stack_infinite_recursion(self, tr):
        tmpl = '''
HeatTemplateFormatVersion: 2012-12-12
Resources:
  Nested:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: 'https://server.test/the.template'
'''
        urlfetch.get.return_value = tmpl
        t = template_format.parse(tmpl)
        stack = self.parse_stack(t)
        stack['Nested'].root_stack_id = '1234'
        tr.return_value = 2
        res = self.assertRaises(exception.StackValidationFailed,
                                stack.validate)
        self.assertIn('Recursion depth exceeds', six.text_type(res))
        expected_count = cfg.CONF.get('max_nested_stack_depth') + 1
        self.assertEqual(expected_count, urlfetch.get.call_count)

    def test_child_params(self):
        t = template_format.parse(self.test_template)
        stack = self.parse_stack(t)
        nested_stack = stack['the_nested']
        nested_stack.properties.data[nested_stack.PARAMETERS] = {'foo': 'bar'}
        self.assertEqual({'foo': 'bar'}, nested_stack.child_params())

    def test_child_template_when_file_is_fetched(self):
        urlfetch.get.return_value = 'template_file'
        t = template_format.parse(self.test_template)
        stack = self.parse_stack(t)
        nested_stack = stack['the_nested']

        with mock.patch('heat.common.template_format.parse') as mock_parse:
            mock_parse.return_value = 'child_template'
            self.assertEqual('child_template', nested_stack.child_template())
            mock_parse.assert_called_once_with(
                'template_file', 'https://server.test/the.template')

    def test_child_template_when_fetching_file_fails(self):
        urlfetch.get.side_effect = exceptions.RequestException()
        t = template_format.parse(self.test_template)
        stack = self.parse_stack(t)
        nested_stack = stack['the_nested']
        self.assertRaises(ValueError, nested_stack.child_template)

    def test_child_template_when_io_error(self):
        msg = 'Failed to retrieve template'
        urlfetch.get.side_effect = urlfetch.URLFetchError(msg)
        t = template_format.parse(self.test_template)
        stack = self.parse_stack(t)
        nested_stack = stack['the_nested']
        self.assertRaises(ValueError, nested_stack.child_template)

    def test_refid(self):
        t = template_format.parse(self.test_template)
        stack = self.parse_stack(t)
        nested_stack = stack['the_nested']
        self.assertEqual('the_nested', nested_stack.FnGetRefId())

    def test_refid_convergence_cache_data(self):
        t = template_format.parse(self.test_template)
        tmpl = template.Template(t)
        ctx = utils.dummy_context()
        cache_data = {'the_nested': node_data.NodeData.from_dict({
            'uuid': mock.ANY,
            'id': mock.ANY,
            'action': 'CREATE',
            'status': 'COMPLETE',
            'reference_id': 'the_nested_convg_mock'
        })}
        stack = parser.Stack(ctx, 'test_stack', tmpl, cache_data=cache_data)
        nested_stack = stack.defn['the_nested']
        self.assertEqual('the_nested_convg_mock', nested_stack.FnGetRefId())

    def test_get_attribute(self):
        tmpl = template_format.parse(self.test_template)
        ctx = utils.dummy_context('test_username', 'aaaa', 'password')
        stack = parser.Stack(ctx, 'test', template.Template(tmpl))
        stack.store()
        stack_res = stack['the_nested']
        stack_res.store()

        nested_t = template_format.parse(self.nested_template)
        nested_t['Parameters']['KeyName']['Default'] = 'Key'
        nested_stack = parser.Stack(ctx, 'test', template.Template(nested_t))
        nested_stack.store()

        stack_res._rpc_client = mock.MagicMock()
        stack_res._rpc_client.show_stack.return_value = [
            api.format_stack(nested_stack)]
        stack_res.nested_identifier = mock.Mock()
        stack_res.nested_identifier.return_value = {'foo': 'bar'}
        self.assertEqual('bar', stack_res.FnGetAtt('Outputs.Foo'))


class ResDataResource(generic_rsrc.GenericResource):
    def handle_create(self):
        self.data_set("test", 'A secret value', True)


class ResDataStackTest(common.HeatTestCase):

    tmpl = '''
HeatTemplateFormatVersion: "2012-12-12"
Parameters:
  KeyName:
    Type: String
Resources:
  res:
    Type: "res.data.resource"
Outputs:
  Foo:
    Value: bar
'''

    def setUp(self):
        super(ResDataStackTest, self).setUp()
        resource._register_class("res.data.resource", ResDataResource)

    def create_stack(self, template):
        t = template_format.parse(template)
        stack = utils.parse_stack(t)
        stack.create()
        self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state)
        return stack

    def test_res_data_delete(self):
        stack = self.create_stack(self.tmpl)
        res = stack['res']
        stack.delete()
        self.assertEqual((stack.DELETE, stack.COMPLETE), stack.state)
        self.assertRaises(exception.NotFound,
                          resource_data_object.ResourceData.get_val,
                          res, 'test')


class NestedStackCrudTest(common.HeatTestCase):
    nested_template = '''
HeatTemplateFormatVersion: '2012-12-12'
Parameters:
  KeyName:
    Type: String
Outputs:
  Foo:
    Value: bar
'''

    def setUp(self):
        super(NestedStackCrudTest, self).setUp()

        self.ctx = utils.dummy_context('test_username', 'aaaa', 'password')
        empty_template = {"HeatTemplateFormatVersion": "2012-12-12"}
        self.stack = parser.Stack(self.ctx, 'test',
                                  template.Template(empty_template))
        self.stack.store()

        self.patchobject(urlfetch, 'get',
                         return_value=self.nested_template)
        self.nested_parsed = yaml.safe_load(self.nested_template)
        self.nested_params = {"KeyName": "foo"}
        self.defn = rsrc_defn.ResourceDefinition(
            'test_t_res',
            'AWS::CloudFormation::Stack',
            {"TemplateURL": "https://server.test/the.template",
             "Parameters": self.nested_params})
        self.res = stack_res.NestedStack('test_t_res',
                                         self.defn, self.stack)
        self.assertIsNone(self.res.validate())
        self.res.store()

    def test_handle_create(self):
        self.res.create_with_template = mock.Mock(return_value=None)

        self.res.handle_create()

        self.res.create_with_template.assert_called_once_with(
            self.nested_parsed, self.nested_params, None, adopt_data=None)

    def test_handle_adopt(self):
        self.res.create_with_template = mock.Mock(return_value=None)

        self.res.handle_adopt(resource_data={'resource_id': 'fred'})

        self.res.create_with_template.assert_called_once_with(
            self.nested_parsed, self.nested_params, None,
            adopt_data={'resource_id': 'fred'})

    def test_handle_update(self):
        self.res.update_with_template = mock.Mock(return_value=None)

        self.res.handle_update(self.defn, None, None)

        self.res.update_with_template.assert_called_once_with(
            self.nested_parsed, self.nested_params, None)

    def test_handle_delete(self):
        self.res.rpc_client = mock.MagicMock()
        self.res.action = self.res.CREATE
        self.res.nested_identifier = mock.MagicMock()
        stack_identity = identifier.HeatIdentifier(
            self.ctx.tenant_id,
            self.res.physical_resource_name(),
            self.res.resource_id)
        self.res.nested_identifier.return_value = stack_identity
        self.res.resource_id = stack_identity.stack_id
        self.res.handle_delete()
        self.res.rpc_client.return_value.delete_stack.assert_called_once_with(
            self.ctx, stack_identity, cast=False)
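# ---------------------------------------------------------------------------
# Editor's note: illustrative sketch, not part of the original sources. It
# isolates the AWS::CloudFormation::Stack wiring that NestedStackCrudTest
# sets up above; the URL is the same placeholder the tests use.
from heat.engine import rsrc_defn

defn = rsrc_defn.ResourceDefinition(
    'test_t_res', 'AWS::CloudFormation::Stack',
    {'TemplateURL': 'https://server.test/the.template',
     'Parameters': {'KeyName': 'foo'}})
# Create/update/delete of the nested stack are then delegated to
# create_with_template / update_with_template / delete_stack, as the
# handle_* tests above assert.
# ---------------------------------------------------------------------------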
import eventlet import mock import uuid from oslo_config import cfg from heat.common import exception from heat.engine import check_resource from heat.engine import dependencies from heat.engine import resource from heat.engine import scheduler from heat.engine import stack from heat.engine import sync_point from heat.engine import worker from heat.rpc import api as rpc_api from heat.rpc import worker_client from heat.tests import common from heat.tests.engine import tools from heat.tests import utils @mock.patch.object(check_resource, 'check_stack_complete') @mock.patch.object(check_resource, 'propagate_check_resource') @mock.patch.object(check_resource, 'check_resource_cleanup') @mock.patch.object(check_resource, 'check_resource_update') class CheckWorkflowUpdateTest(common.HeatTestCase): @mock.patch.object(worker_client.WorkerClient, 'check_resource', lambda *_: None) def setUp(self): super(CheckWorkflowUpdateTest, self).setUp() thread_group_mgr = mock.Mock() cfg.CONF.set_default('convergence_engine', True) self.worker = worker.WorkerService('host-1', 'topic-1', 'engine_id', thread_group_mgr) self.cr = check_resource.CheckResource(self.worker.engine_id, self.worker._rpc_client, self.worker.thread_group_mgr, mock.Mock(), {}) self.worker._rpc_client = worker_client.WorkerClient() self.ctx = utils.dummy_context() self.stack = tools.get_stack( 'check_workflow_create_stack', self.ctx, template=tools.string_template_five, convergence=True) self.stack.converge_stack(self.stack.t) self.resource = self.stack['A'] self.is_update = True self.graph_key = (self.resource.id, self.is_update) self.orig_load_method = stack.Stack.load stack.Stack.load = mock.Mock(return_value=self.stack) def tearDown(self): super(CheckWorkflowUpdateTest, self).tearDown() stack.Stack.load = self.orig_load_method def test_resource_not_available( self, mock_cru, mock_crc, mock_pcr, mock_csc): self.worker.check_resource( self.ctx, 'non-existant-id', self.stack.current_traversal, {}, True, None) for mocked in [mock_cru, mock_crc, mock_pcr, mock_csc]: self.assertFalse(mocked.called) @mock.patch.object(worker.WorkerService, '_retrigger_replaced') def test_stale_traversal( self, mock_rnt, mock_cru, mock_crc, mock_pcr, mock_csc): self.worker.check_resource(self.ctx, self.resource.id, 'stale-traversal', {}, True, None) self.assertTrue(mock_rnt.called) def test_is_update_traversal( self, mock_cru, mock_crc, mock_pcr, mock_csc): self.worker.check_resource( self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) mock_cru.assert_called_once_with(self.resource, self.resource.stack.t.id, {}, self.worker.engine_id, mock.ANY, mock.ANY) self.assertFalse(mock_crc.called) expected_calls = [] for req, fwd in self.stack.convergence_dependencies.leaves(): expected_calls.append( (mock.call.worker.propagate_check_resource. 
assert_called_once_with( self.ctx, mock.ANY, mock.ANY, self.stack.current_traversal, mock.ANY, self.graph_key, {}, self.is_update))) mock_csc.assert_called_once_with( self.ctx, mock.ANY, self.stack.current_traversal, self.resource.id, mock.ANY, True) @mock.patch.object(resource.Resource, 'load') @mock.patch.object(resource.Resource, 'make_replacement') @mock.patch.object(stack.Stack, 'time_remaining') def test_is_update_traversal_raise_update_replace( self, tr, mock_mr, mock_load, mock_cru, mock_crc, mock_pcr, mock_csc): mock_load.return_value = self.resource, self.stack, self.stack mock_cru.side_effect = resource.UpdateReplace tr.return_value = 317 self.worker.check_resource( self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) mock_cru.assert_called_once_with(self.resource, self.resource.stack.t.id, {}, self.worker.engine_id, mock.ANY, mock.ANY) self.assertTrue(mock_mr.called) self.assertFalse(mock_crc.called) self.assertFalse(mock_pcr.called) self.assertFalse(mock_csc.called) @mock.patch.object(check_resource.CheckResource, '_stale_resource_needs_retry') @mock.patch.object(stack.Stack, 'time_remaining') def test_is_update_traversal_raise_update_inprogress( self, tr, mock_tsl, mock_cru, mock_crc, mock_pcr, mock_csc): mock_cru.side_effect = exception.UpdateInProgress self.worker.engine_id = 'some-thing-else' mock_tsl.return_value = True tr.return_value = 317 self.worker.check_resource( self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) mock_cru.assert_called_once_with(self.resource, self.resource.stack.t.id, {}, self.worker.engine_id, mock.ANY, mock.ANY) self.assertFalse(mock_crc.called) self.assertFalse(mock_pcr.called) self.assertFalse(mock_csc.called) @mock.patch.object(resource.Resource, 'state_set') def test_stale_resource_retry( self, mock_ss, mock_cru, mock_crc, mock_pcr, mock_csc): current_template_id = self.resource.current_template_id res = self.cr._stale_resource_needs_retry(self.ctx, self.resource, current_template_id) self.assertTrue(res) mock_ss.assert_not_called() @mock.patch.object(resource.Resource, 'state_set') def test_try_steal_lock_alive( self, mock_ss, mock_cru, mock_crc, mock_pcr, mock_csc): res = self.cr._stale_resource_needs_retry(self.ctx, self.resource, str(uuid.uuid4())) self.assertFalse(res) mock_ss.assert_not_called() @mock.patch.object(check_resource.listener_client, 'EngineListenerClient') @mock.patch.object(check_resource.resource_objects.Resource, 'get_obj') @mock.patch.object(resource.Resource, 'state_set') def test_try_steal_lock_dead( self, mock_ss, mock_get, mock_elc, mock_cru, mock_crc, mock_pcr, mock_csc): fake_res = mock.Mock() fake_res.engine_id = 'some-thing-else' mock_get.return_value = fake_res mock_elc.return_value.is_alive.return_value = False current_template_id = self.resource.current_template_id res = self.cr._stale_resource_needs_retry(self.ctx, self.resource, current_template_id) self.assertTrue(res) mock_ss.assert_called_once_with(self.resource.action, resource.Resource.FAILED, mock.ANY) @mock.patch.object(check_resource.listener_client, 'EngineListenerClient') @mock.patch.object(check_resource.resource_objects.Resource, 'get_obj') @mock.patch.object(resource.Resource, 'state_set') def test_try_steal_lock_not_dead( self, mock_ss, mock_get, mock_elc, mock_cru, mock_crc, mock_pcr, mock_csc): fake_res = mock.Mock() fake_res.engine_id = self.worker.engine_id mock_get.return_value = fake_res mock_elc.return_value.is_alive.return_value = True current_template_id = 
self.resource.current_template_id res = self.cr._stale_resource_needs_retry(self.ctx, self.resource, current_template_id) self.assertFalse(res) mock_ss.assert_not_called() @mock.patch.object(check_resource.CheckResource, '_trigger_rollback') def test_resource_update_failure_sets_stack_state_as_failed( self, mock_tr, mock_cru, mock_crc, mock_pcr, mock_csc): self.stack.state_set(self.stack.UPDATE, self.stack.IN_PROGRESS, '') self.resource.state_set(self.resource.UPDATE, self.resource.IN_PROGRESS) dummy_ex = exception.ResourceNotAvailable( resource_name=self.resource.name) mock_cru.side_effect = exception.ResourceFailure( dummy_ex, self.resource, action=self.resource.UPDATE) self.worker.check_resource(self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) s = self.stack.load(self.ctx, stack_id=self.stack.id) self.assertEqual((s.UPDATE, s.FAILED), (s.action, s.status)) self.assertEqual('Resource UPDATE failed: ' 'ResourceNotAvailable: resources.A: The Resource (A)' ' is not available.', s.status_reason) @mock.patch.object(check_resource.CheckResource, '_trigger_rollback') def test_resource_cleanup_failure_sets_stack_state_as_failed( self, mock_tr, mock_cru, mock_crc, mock_pcr, mock_csc): self.is_update = False # invokes check_resource_cleanup self.stack.state_set(self.stack.UPDATE, self.stack.IN_PROGRESS, '') self.resource.state_set(self.resource.UPDATE, self.resource.IN_PROGRESS) dummy_ex = exception.ResourceNotAvailable( resource_name=self.resource.name) mock_crc.side_effect = exception.ResourceFailure( dummy_ex, self.resource, action=self.resource.UPDATE) self.worker.check_resource(self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) s = self.stack.load(self.ctx, stack_id=self.stack.id) self.assertEqual((s.UPDATE, s.FAILED), (s.action, s.status)) self.assertEqual('Resource UPDATE failed: ' 'ResourceNotAvailable: resources.A: The Resource (A)' ' is not available.', s.status_reason) @mock.patch.object(check_resource.CheckResource, '_trigger_rollback') def test_resource_update_failure_triggers_rollback_if_enabled( self, mock_tr, mock_cru, mock_crc, mock_pcr, mock_csc): self.stack.disable_rollback = False self.stack.store() dummy_ex = exception.ResourceNotAvailable( resource_name=self.resource.name) mock_cru.side_effect = exception.ResourceFailure( dummy_ex, self.resource, action=self.resource.UPDATE) self.worker.check_resource(self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) self.assertTrue(mock_tr.called) # make sure the rollback is called on given stack call_args, call_kwargs = mock_tr.call_args called_stack = call_args[0] self.assertEqual(self.stack.id, called_stack.id) @mock.patch.object(check_resource.CheckResource, '_trigger_rollback') def test_resource_cleanup_failure_triggers_rollback_if_enabled( self, mock_tr, mock_cru, mock_crc, mock_pcr, mock_csc): self.is_update = False # invokes check_resource_cleanup self.stack.disable_rollback = False self.stack.store() dummy_ex = exception.ResourceNotAvailable( resource_name=self.resource.name) mock_crc.side_effect = exception.ResourceFailure( dummy_ex, self.resource, action=self.resource.UPDATE) self.worker.check_resource(self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) self.assertTrue(mock_tr.called) # make sure the rollback is called on given stack call_args, call_kwargs = mock_tr.call_args called_stack = call_args[0] self.assertEqual(self.stack.id, called_stack.id) @mock.patch.object(check_resource.CheckResource, 
'_trigger_rollback') def test_rollback_is_not_triggered_on_rollback_disabled_stack( self, mock_tr, mock_cru, mock_crc, mock_pcr, mock_csc): self.stack.disable_rollback = True self.stack.store() dummy_ex = exception.ResourceNotAvailable( resource_name=self.resource.name) mock_cru.side_effect = exception.ResourceFailure( dummy_ex, self.resource, action=self.stack.CREATE) self.worker.check_resource(self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) self.assertFalse(mock_tr.called) @mock.patch.object(check_resource.CheckResource, '_trigger_rollback') def test_rollback_not_re_triggered_for_a_rolling_back_stack( self, mock_tr, mock_cru, mock_crc, mock_pcr, mock_csc): self.stack.disable_rollback = False self.stack.action = self.stack.ROLLBACK self.stack.status = self.stack.IN_PROGRESS self.stack.store() dummy_ex = exception.ResourceNotAvailable( resource_name=self.resource.name) mock_cru.side_effect = exception.ResourceFailure( dummy_ex, self.resource, action=self.stack.CREATE) self.worker.check_resource(self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) self.assertFalse(mock_tr.called) def test_resource_update_failure_purges_db_for_stack_failure( self, mock_cru, mock_crc, mock_pcr, mock_csc): self.stack.disable_rollback = True self.stack.store() self.stack.purge_db = mock.Mock() dummy_ex = exception.ResourceNotAvailable( resource_name=self.resource.name) mock_cru.side_effect = exception.ResourceFailure( dummy_ex, self.resource, action=self.resource.UPDATE) self.worker.check_resource(self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) self.assertTrue(self.stack.purge_db.called) def test_resource_cleanup_failure_purges_db_for_stack_failure( self, mock_cru, mock_crc, mock_pcr, mock_csc): self.is_update = False self.stack.disable_rollback = True self.stack.store() self.stack.purge_db = mock.Mock() dummy_ex = exception.ResourceNotAvailable( resource_name=self.resource.name) mock_crc.side_effect = exception.ResourceFailure( dummy_ex, self.resource, action=self.resource.UPDATE) self.worker.check_resource(self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) self.assertTrue(self.stack.purge_db.called) @mock.patch.object(check_resource.CheckResource, 'retrigger_check_resource') @mock.patch.object(stack.Stack, 'load') def test_initiate_propagate_rsrc_retriggers_check_rsrc_on_new_stack_update( self, mock_stack_load, mock_rcr, mock_cru, mock_crc, mock_pcr, mock_csc): key = sync_point.make_key(self.resource.id, self.stack.current_traversal, self.is_update) mock_pcr.side_effect = exception.EntityNotFound(entity='Sync Point', name=key) updated_stack = stack.Stack(self.ctx, self.stack.name, self.stack.t, self.stack.id, current_traversal='some_newy_trvl_uuid') mock_stack_load.return_value = updated_stack self.cr._initiate_propagate_resource(self.ctx, self.resource.id, self.stack.current_traversal, self.is_update, self.resource, self.stack) mock_rcr.assert_called_once_with(self.ctx, self.is_update, self.resource.id, updated_stack) def test_check_stack_complete_is_invoked_for_replaced_resource( self, mock_cru, mock_crc, mock_pcr, mock_csc): resC = self.stack['C'] # lets say C is update-replaced is_update = True trav_id = self.stack.current_traversal replacementC_id = resC.make_replacement(self.stack.t.id) replacementC, stack, _ = resource.Resource.load(self.ctx, replacementC_id, trav_id, is_update, {}) self.cr._initiate_propagate_resource(self.ctx, replacementC_id, trav_id, is_update, 
replacementC, self.stack) # check_stack_complete should be called with resC.id not # replacementC.id mock_csc.assert_called_once_with(self.ctx, self.stack, trav_id, resC.id, mock.ANY, is_update) @mock.patch.object(sync_point, 'sync') def test_retrigger_check_resource(self, mock_sync, mock_cru, mock_crc, mock_pcr, mock_csc): resC = self.stack['C'] # A, B are predecessors to C when is_update is True expected_predecessors = {(self.stack['A'].id, True), (self.stack['B'].id, True)} self.cr.retrigger_check_resource(self.ctx, self.is_update, resC.id, self.stack) mock_pcr.assert_called_once_with(self.ctx, mock.ANY, resC.id, self.stack.current_traversal, mock.ANY, (resC.id, True), None, True, None) call_args, call_kwargs = mock_pcr.call_args actual_predecessors = call_args[4] self.assertItemsEqual(expected_predecessors, actual_predecessors) def test_update_retrigger_check_resource_new_traversal_deletes_rsrc( self, mock_cru, mock_crc, mock_pcr, mock_csc): # mock dependencies to indicate a rsrc with id 2 is not present # in latest traversal self.stack._convg_deps = dependencies.Dependencies([ [(1, False), (1, True)], [(2, False), None]]) # simulate rsrc 2 completing its update for old traversal # and calling rcr self.cr.retrigger_check_resource(self.ctx, True, 2, self.stack) # Ensure that pcr was called with proper delete traversal mock_pcr.assert_called_once_with(self.ctx, mock.ANY, 2, self.stack.current_traversal, mock.ANY, (2, False), None, False, None) def test_delete_retrigger_check_resource_new_traversal_updates_rsrc( self, mock_cru, mock_crc, mock_pcr, mock_csc): # mock dependencies to indicate a rsrc with id 2 has an update # in latest traversal self.stack._convg_deps = dependencies.Dependencies([ [(1, False), (1, True)], [(2, False), (2, True)]]) # simulate rsrc 2 completing its delete for old traversal # and calling rcr self.cr.retrigger_check_resource(self.ctx, False, 2, self.stack) # Ensure that pcr was called with proper delete traversal mock_pcr.assert_called_once_with(self.ctx, mock.ANY, 2, self.stack.current_traversal, mock.ANY, (2, True), None, True, None) @mock.patch.object(stack.Stack, 'purge_db') def test_handle_failure(self, mock_purgedb, mock_cru, mock_crc, mock_pcr, mock_csc): self.cr._handle_failure(self.ctx, self.stack, 'dummy-reason') mock_purgedb.assert_called_once_with() self.assertEqual('dummy-reason', self.stack.status_reason) @mock.patch.object(check_resource.CheckResource, '_trigger_rollback') def test_handle_failure_rollback(self, mock_tr, mock_cru, mock_crc, mock_pcr, mock_csc): self.stack.disable_rollback = False self.stack.state_set(self.stack.UPDATE, self.stack.IN_PROGRESS, '') self.cr._handle_failure(self.ctx, self.stack, 'dummy-reason') mock_tr.assert_called_once_with(self.stack) @mock.patch.object(stack.Stack, 'purge_db') @mock.patch.object(stack.Stack, 'state_set') @mock.patch.object(check_resource.CheckResource, 'retrigger_check_resource') @mock.patch.object(check_resource.CheckResource, '_trigger_rollback') def test_handle_rsrc_failure_when_update_fails( self, mock_tr, mock_rcr, mock_ss, mock_pdb, mock_cru, mock_crc, mock_pcr, mock_csc): # Emulate failure mock_ss.return_value = False self.cr._handle_resource_failure(self.ctx, self.is_update, self.resource.id, self.stack, 'dummy-reason') self.assertTrue(mock_ss.called) self.assertFalse(mock_rcr.called) self.assertFalse(mock_pdb.called) self.assertFalse(mock_tr.called) @mock.patch.object(stack.Stack, 'purge_db') @mock.patch.object(stack.Stack, 'state_set') @mock.patch.object(check_resource.CheckResource, 
                       'retrigger_check_resource')
    @mock.patch.object(check_resource.CheckResource, '_trigger_rollback')
    def test_handle_rsrc_failure_when_update_fails_different_traversal(
            self, mock_tr, mock_rcr, mock_ss, mock_pdb, mock_cru, mock_crc,
            mock_pcr, mock_csc):
        # Emulate failure
        mock_ss.return_value = False

        # Emulate new traversal
        new_stack = tools.get_stack('check_workflow_create_stack', self.ctx,
                                    template=tools.string_template_five,
                                    convergence=True)
        new_stack.current_traversal = 'new_traversal'
        stack.Stack.load = mock.Mock(return_value=new_stack)

        self.cr._handle_resource_failure(self.ctx, self.is_update,
                                         self.resource.id,
                                         self.stack, 'dummy-reason')
        # Ensure retrigger called
        self.assertTrue(mock_rcr.called)
        self.assertTrue(mock_ss.called)
        self.assertFalse(mock_pdb.called)
        self.assertFalse(mock_tr.called)

    @mock.patch.object(check_resource.CheckResource, '_handle_failure')
    def test_handle_stack_timeout(self, mock_hf, mock_cru, mock_crc,
                                  mock_pcr, mock_csc):
        self.cr._handle_stack_timeout(self.ctx, self.stack)
        mock_hf.assert_called_once_with(self.ctx, self.stack, u'Timed out')

    @mock.patch.object(check_resource.CheckResource, '_handle_failure')
    def test_do_check_resource_marks_stack_as_failed_if_stack_timesout(
            self, mock_hf, mock_cru, mock_crc, mock_pcr, mock_csc):
        mock_cru.side_effect = scheduler.Timeout(None, 60)
        self.is_update = True
        self.cr._do_check_resource(self.ctx, self.stack.current_traversal,
                                   self.stack.t, {}, self.is_update,
                                   self.resource, self.stack, {})
        mock_hf.assert_called_once_with(self.ctx, self.stack, u'Timed out')

    @mock.patch.object(check_resource.CheckResource, '_handle_stack_timeout')
    def test_do_check_resource_ignores_timeout_for_new_update(
            self, mock_hst, mock_cru, mock_crc, mock_pcr, mock_csc):
        # Ensure current_traversal is checked before marking the stack as
        # failed due to time-out.
        mock_cru.side_effect = scheduler.Timeout(None, 60)
        self.is_update = True
        old_traversal = self.stack.current_traversal
        self.stack.current_traversal = 'new_traversal'
        self.cr._do_check_resource(self.ctx, old_traversal,
                                   self.stack.t, {}, self.is_update,
                                   self.resource, self.stack, {})
        self.assertFalse(mock_hst.called)

    @mock.patch.object(stack.Stack, 'has_timed_out')
    @mock.patch.object(check_resource.CheckResource, '_handle_stack_timeout')
    def test_check_resource_handles_timeout(self, mock_hst, mock_to,
                                            mock_cru, mock_crc,
                                            mock_pcr, mock_csc):
        mock_to.return_value = True
        self.worker.check_resource(self.ctx, self.resource.id,
                                   self.stack.current_traversal,
                                   {}, self.is_update, {})
        self.assertTrue(mock_hst.called)

    def test_check_resource_does_not_propagate_on_cancel(
            self, mock_cru, mock_crc, mock_pcr, mock_csc):
        # ensure when check_resource is cancelled, the next set of
        # resources are not propagated.
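        # (Editor's note, illustrative only - not part of the original
        # file.) "Cancelled" here means the worker found a cancel message
        # on the resource's message queue: _check_for_message() raises
        # CancelOperation when it sees rpc_api.THREAD_CANCEL (see
        # test_check_message_raises_cancel_exception later in this file),
        # and CheckResource swallows it without calling
        # propagate_check_resource or check_stack_complete, which is
        # exactly what the assertions below pin down.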
mock_cru.side_effect = check_resource.CancelOperation self.worker.check_resource(self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, {}) self.assertFalse(mock_pcr.called) self.assertFalse(mock_csc.called) @mock.patch.object(check_resource, 'check_stack_complete') @mock.patch.object(check_resource, 'propagate_check_resource') @mock.patch.object(check_resource, 'check_resource_cleanup') @mock.patch.object(check_resource, 'check_resource_update') class CheckWorkflowCleanupTest(common.HeatTestCase): @mock.patch.object(worker_client.WorkerClient, 'check_resource', lambda *_: None) def setUp(self): super(CheckWorkflowCleanupTest, self).setUp() thread_group_mgr = mock.Mock() self.worker = worker.WorkerService('host-1', 'topic-1', 'engine_id', thread_group_mgr) self.worker._rpc_client = worker_client.WorkerClient() self.ctx = utils.dummy_context() tstack = tools.get_stack( 'check_workflow_create_stack', self.ctx, template=tools.string_template_five, convergence=True) tstack.converge_stack(tstack.t, action=tstack.CREATE) self.stack = stack.Stack.load(self.ctx, stack_id=tstack.id) self.stack.thread_group_mgr = tools.DummyThreadGroupManager() self.stack.converge_stack(self.stack.t, action=self.stack.DELETE) self.resource = self.stack['A'] self.is_update = False self.graph_key = (self.resource.id, self.is_update) @mock.patch.object(resource.Resource, 'load') @mock.patch.object(stack.Stack, 'time_remaining') def test_is_cleanup_traversal( self, tr, mock_load, mock_cru, mock_crc, mock_pcr, mock_csc): tr.return_value = 317 mock_load.return_value = self.resource, self.stack, self.stack self.worker.check_resource( self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) self.assertFalse(mock_cru.called) mock_crc.assert_called_once_with( self.resource, self.resource.stack.t.id, {}, self.worker.engine_id, tr(), mock.ANY) @mock.patch.object(stack.Stack, 'time_remaining') def test_is_cleanup_traversal_raise_update_inprogress( self, tr, mock_cru, mock_crc, mock_pcr, mock_csc): mock_crc.side_effect = exception.UpdateInProgress tr.return_value = 317 self.worker.check_resource( self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, None) mock_crc.assert_called_once_with(self.resource, self.resource.stack.t.id, {}, self.worker.engine_id, tr(), mock.ANY) self.assertFalse(mock_cru.called) self.assertFalse(mock_pcr.called) self.assertFalse(mock_csc.called) def test_check_resource_does_not_propagate_on_cancelling_cleanup( self, mock_cru, mock_crc, mock_pcr, mock_csc): # ensure when check_resource is cancelled, the next set of # resources are not propagated. 
mock_crc.side_effect = check_resource.CancelOperation self.worker.check_resource(self.ctx, self.resource.id, self.stack.current_traversal, {}, self.is_update, {}) self.assertFalse(mock_pcr.called) self.assertFalse(mock_csc.called) class MiscMethodsTest(common.HeatTestCase): def setUp(self): super(MiscMethodsTest, self).setUp() cfg.CONF.set_default('convergence_engine', True) self.ctx = utils.dummy_context() self.stack = tools.get_stack( 'check_workflow_create_stack', self.ctx, template=tools.attr_cache_template, convergence=True) self.stack.converge_stack(self.stack.t) self.resource = self.stack['A'] def test_node_data_ok(self): self.resource.action = self.resource.CREATE expected_input_data = {'attrs': {(u'flat_dict', u'key2'): 'val2', (u'flat_dict', u'key3'): 'val3', (u'nested_dict', u'dict', u'a'): 1, (u'nested_dict', u'dict', u'b'): 2}, 'id': mock.ANY, 'reference_id': 'A', 'name': 'A', 'uuid': mock.ANY, 'action': mock.ANY, 'status': mock.ANY} actual_input_data = self.resource.node_data() self.assertEqual(expected_input_data, actual_input_data.as_dict()) def test_node_data_exception(self): self.resource.action = self.resource.CREATE expected_input_data = {'attrs': {}, 'id': mock.ANY, 'reference_id': 'A', 'name': 'A', 'uuid': mock.ANY, 'action': mock.ANY, 'status': mock.ANY} self.resource.get_attribute = mock.Mock( side_effect=exception.InvalidTemplateAttribute(resource='A', key='value')) actual_input_data = self.resource.node_data() self.assertEqual(expected_input_data, actual_input_data.as_dict()) @mock.patch.object(sync_point, 'sync') def test_check_stack_complete_root(self, mock_sync): check_resource.check_stack_complete( self.ctx, self.stack, self.stack.current_traversal, self.stack['E'].id, self.stack.convergence_dependencies, True) mock_sync.assert_called_once_with( self.ctx, self.stack.id, self.stack.current_traversal, True, mock.ANY, mock.ANY, {(self.stack['E'].id, True): None}) @mock.patch.object(sync_point, 'sync') def test_check_stack_complete_child(self, mock_sync): check_resource.check_stack_complete( self.ctx, self.stack, self.stack.current_traversal, self.resource.id, self.stack.convergence_dependencies, True) self.assertFalse(mock_sync.called) @mock.patch.object(dependencies.Dependencies, 'roots') @mock.patch.object(stack.Stack, '_persist_state') def test_check_stack_complete_persist_called(self, mock_persist_state, mock_dep_roots): mock_dep_roots.return_value = [(1, True)] check_resource.check_stack_complete( self.ctx, self.stack, self.stack.current_traversal, 1, self.stack.convergence_dependencies, True) self.assertTrue(mock_persist_state.called) @mock.patch.object(sync_point, 'sync') def test_propagate_check_resource(self, mock_sync): check_resource.propagate_check_resource( self.ctx, mock.ANY, mock.ANY, self.stack.current_traversal, mock.ANY, ('A', True), {}, True, None) self.assertTrue(mock_sync.called) @mock.patch.object(resource.Resource, 'create_convergence') @mock.patch.object(resource.Resource, 'update_convergence') def test_check_resource_update_init_action(self, mock_update, mock_create): self.resource.action = 'INIT' check_resource.check_resource_update( self.resource, self.resource.stack.t.id, {}, 'engine-id', self.stack, None) self.assertTrue(mock_create.called) self.assertFalse(mock_update.called) @mock.patch.object(resource.Resource, 'create_convergence') @mock.patch.object(resource.Resource, 'update_convergence') def test_check_resource_update_create_action( self, mock_update, mock_create): self.resource.action = 'CREATE' 
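        # (Editor's sketch, not part of the original file.) These three
        # *_action tests pin down the dispatch rule in
        # check_resource.check_resource_update: only a resource still in
        # INIT (never physically created) takes the create path; CREATE,
        # UPDATE, etc. are treated as updates of an existing resource.
        # Roughly:
        #
        #     if rsrc.action == rsrc.INIT:
        #         rsrc.create_convergence(...)
        #     else:
        #         rsrc.update_convergence(...)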
check_resource.check_resource_update( self.resource, self.resource.stack.t.id, {}, 'engine-id', self.stack, None) self.assertFalse(mock_create.called) self.assertTrue(mock_update.called) @mock.patch.object(resource.Resource, 'create_convergence') @mock.patch.object(resource.Resource, 'update_convergence') def test_check_resource_update_update_action( self, mock_update, mock_create): self.resource.action = 'UPDATE' check_resource.check_resource_update( self.resource, self.resource.stack.t.id, {}, 'engine-id', self.stack, None) self.assertFalse(mock_create.called) self.assertTrue(mock_update.called) @mock.patch.object(resource.Resource, 'delete_convergence') def test_check_resource_cleanup_delete(self, mock_delete): self.resource.current_template_id = 'new-template-id' check_resource.check_resource_cleanup( self.resource, self.resource.stack.t.id, {}, 'engine-id', self.stack.timeout_secs(), None) self.assertTrue(mock_delete.called) def test_check_message_raises_cancel_exception(self): # ensure CancelOperation is raised on receiving # rpc_api.THREAD_CANCEL message msg_queue = eventlet.queue.LightQueue() msg_queue.put_nowait(rpc_api.THREAD_CANCEL) self.assertRaises(check_resource.CancelOperation, check_resource._check_for_message, msg_queue) heat-10.0.2/heat/tests/engine/test_dependencies.py0000666000175000017500000002300313343562340022127 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
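# (Editor's sketch, not part of the original file.) The tests below drive
# heat.engine.dependencies.Dependencies, which is built from
# (requirer, requirement) edges; a requirement of None declares a node with
# no dependencies. Forward iteration yields a valid creation order
# (requirements first) and reversed() the matching deletion order, e.g.:
#
#     d = dependencies.Dependencies([('server', 'net'), ('server', 'vol')])
#     list(iter(d))      # 'net' and 'vol' come before 'server'
#     list(reversed(d))  # 'server' comes before 'net' and 'vol'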
from heat.engine import dependencies from heat.tests import common class dependenciesTest(common.HeatTestCase): def _dep_test(self, func, checkorder, deps): nodes = set.union(*[set(e) for e in deps]) d = dependencies.Dependencies(deps) order = list(func(d)) for n in nodes: self.assertIn(n, order, '"%s" is not in the sequence' % n) self.assertEqual(1, order.count(n)) self.assertEqual(len(nodes), len(order)) for l, f in deps: checkorder(order.index(f), order.index(l)) def _dep_test_fwd(self, *deps): def assertLess(a, b): self.assertTrue(a < b, '"%s" is not less than "%s"' % (str(a), str(b))) self._dep_test(iter, assertLess, deps) def _dep_test_rev(self, *deps): def assertGreater(a, b): self.assertTrue(a > b, '"%s" is not greater than "%s"' % (str(a), str(b))) self._dep_test(reversed, assertGreater, deps) def test_edges(self): input_edges = [('1', None), ('2', '3'), ('2', '4')] dp = dependencies.Dependencies(input_edges) self.assertEqual(set(input_edges), set(dp.graph().edges())) def test_repr(self): dp = dependencies.Dependencies([('1', None), ('2', '3'), ('2', '4')]) s = "Dependencies([('1', None), ('2', '3'), ('2', '4')])" self.assertEqual(s, repr(dp)) def test_single_node(self): d = dependencies.Dependencies([('only', None)]) l = list(iter(d)) self.assertEqual(1, len(l)) self.assertEqual('only', l[0]) def test_disjoint(self): d = dependencies.Dependencies([('1', None), ('2', None)]) l = list(iter(d)) self.assertEqual(2, len(l)) self.assertIn('1', l) self.assertIn('2', l) def test_single_fwd(self): self._dep_test_fwd(('second', 'first')) def test_single_rev(self): self._dep_test_rev(('second', 'first')) def test_chain_fwd(self): self._dep_test_fwd(('third', 'second'), ('second', 'first')) def test_chain_rev(self): self._dep_test_rev(('third', 'second'), ('second', 'first')) def test_diamond_fwd(self): self._dep_test_fwd(('last', 'mid1'), ('last', 'mid2'), ('mid1', 'first'), ('mid2', 'first')) def test_diamond_rev(self): self._dep_test_rev(('last', 'mid1'), ('last', 'mid2'), ('mid1', 'first'), ('mid2', 'first')) def test_complex_fwd(self): self._dep_test_fwd(('last', 'mid1'), ('last', 'mid2'), ('mid1', 'mid3'), ('mid1', 'first'), ('mid3', 'first'), ('mid2', 'first')) def test_complex_rev(self): self._dep_test_rev(('last', 'mid1'), ('last', 'mid2'), ('mid1', 'mid3'), ('mid1', 'first'), ('mid3', 'first'), ('mid2', 'first')) def test_many_edges_fwd(self): self._dep_test_fwd(('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'), ('mid1', 'e2'), ('mid1', 'mid3'), ('mid2', 'mid3'), ('mid3', 'e3')) def test_many_edges_rev(self): self._dep_test_rev(('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'), ('mid1', 'e2'), ('mid1', 'mid3'), ('mid2', 'mid3'), ('mid3', 'e3')) def test_dbldiamond_fwd(self): self._dep_test_fwd(('last', 'a1'), ('last', 'a2'), ('a1', 'b1'), ('a2', 'b1'), ('a2', 'b2'), ('b1', 'first'), ('b2', 'first')) def test_dbldiamond_rev(self): self._dep_test_rev(('last', 'a1'), ('last', 'a2'), ('a1', 'b1'), ('a2', 'b1'), ('a2', 'b2'), ('b1', 'first'), ('b2', 'first')) def test_circular_fwd(self): d = dependencies.Dependencies([('first', 'second'), ('second', 'third'), ('third', 'first')]) self.assertRaises(dependencies.CircularDependencyException, list, iter(d)) def test_circular_rev(self): d = dependencies.Dependencies([('first', 'second'), ('second', 'third'), ('third', 'first')]) self.assertRaises(dependencies.CircularDependencyException, list, reversed(d)) def test_self_ref(self): d = dependencies.Dependencies([('node', 'node')]) self.assertRaises(dependencies.CircularDependencyException, 
list, iter(d)) def test_complex_circular_fwd(self): d = dependencies.Dependencies([('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'), ('mid1', 'e2'), ('mid1', 'mid3'), ('mid2', 'mid3'), ('mid3', 'e3'), ('e3', 'mid1')]) self.assertRaises(dependencies.CircularDependencyException, list, iter(d)) def test_complex_circular_rev(self): d = dependencies.Dependencies([('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'), ('mid1', 'e2'), ('mid1', 'mid3'), ('mid2', 'mid3'), ('mid3', 'e3'), ('e3', 'mid1')]) self.assertRaises(dependencies.CircularDependencyException, list, reversed(d)) def test_noexist_partial(self): d = dependencies.Dependencies([('foo', 'bar')]) def get(i): return d[i] self.assertRaises(KeyError, get, 'baz') def test_single_partial(self): d = dependencies.Dependencies([('last', 'first')]) p = d['last'] l = list(iter(p)) self.assertEqual(1, len(l)) self.assertEqual('last', l[0]) def test_simple_partial(self): d = dependencies.Dependencies([('last', 'middle'), ('middle', 'first')]) p = d['middle'] order = list(iter(p)) self.assertEqual(2, len(order)) for n in ('last', 'middle'): self.assertIn(n, order, "'%s' not found in dependency order" % n) self.assertGreater(order.index('last'), order.index('middle')) def test_simple_multilevel_partial(self): d = dependencies.Dependencies([('last', 'middle'), ('middle', 'target'), ('target', 'first')]) p = d['target'] order = list(iter(p)) self.assertEqual(3, len(order)) for n in ('last', 'middle', 'target'): self.assertIn(n, order, "'%s' not found in dependency order" % n) def test_complex_partial(self): d = dependencies.Dependencies([('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'), ('mid1', 'e2'), ('mid1', 'mid3'), ('mid2', 'mid3'), ('mid3', 'e3')]) p = d['mid3'] order = list(iter(p)) self.assertEqual(4, len(order)) for n in ('last', 'mid1', 'mid2', 'mid3'): self.assertIn(n, order, "'%s' not found in dependency order" % n) def test_required_by(self): d = dependencies.Dependencies([('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'), ('mid1', 'e2'), ('mid1', 'mid3'), ('mid2', 'mid3'), ('mid3', 'e3')]) self.assertEqual(0, len(list(d.required_by('last')))) required_by = list(d.required_by('mid3')) self.assertEqual(2, len(required_by)) for n in ('mid1', 'mid2'): self.assertIn(n, required_by, "'%s' not found in required_by" % n) required_by = list(d.required_by('e2')) self.assertEqual(1, len(required_by)) self.assertIn('mid1', required_by, "'%s' not found in required_by" % n) self.assertRaises(KeyError, d.required_by, 'foo') def test_leaves(self): d = dependencies.Dependencies([('last1', 'mid'), ('last2', 'mid'), ('mid', 'first1'), ('mid', 'first2')]) leaves = sorted(list(d.leaves())) self.assertEqual(['first1', 'first2'], leaves) def test_roots(self): d = dependencies.Dependencies([('last1', 'mid'), ('last2', 'mid'), ('mid', 'first1'), ('mid', 'first2')]) leaves = sorted(list(d.roots())) self.assertEqual(['last1', 'last2'], leaves) heat-10.0.2/heat/tests/engine/test_engine_worker.py0000666000175000017500000002676013343562340022354 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import mock from heat.db.sqlalchemy import api as db_api from heat.engine import check_resource from heat.engine import stack as parser from heat.engine import template as templatem from heat.engine import worker from heat.objects import stack as stack_objects from heat.rpc import worker_client as wc from heat.tests import common from heat.tests import utils class WorkerServiceTest(common.HeatTestCase): def test_make_sure_rpc_version(self): self.assertEqual( '1.4', worker.WorkerService.RPC_API_VERSION, ('RPC version is changed, please update this test to new version ' 'and make sure additional test cases are added for RPC APIs ' 'added in new version')) @mock.patch('heat.common.messaging.get_rpc_server', return_value=mock.Mock()) @mock.patch('oslo_messaging.Target', return_value=mock.Mock()) @mock.patch('heat.rpc.worker_client.WorkerClient', return_value=mock.Mock()) def test_service_start(self, rpc_client_class, target_class, rpc_server_method ): self.worker = worker.WorkerService('host-1', 'topic-1', 'engine_id', mock.Mock()) self.worker.start() # Make sure target is called with proper parameters target_class.assert_called_once_with( version=worker.WorkerService.RPC_API_VERSION, server=self.worker.engine_id, topic=self.worker.topic) # Make sure rpc server creation with proper target # and WorkerService is initialized with it target = target_class.return_value rpc_server_method.assert_called_once_with(target, self.worker) rpc_server = rpc_server_method.return_value self.assertEqual(rpc_server, self.worker._rpc_server, "Failed to create RPC server") # Make sure rpc server is started. 
rpc_server.start.assert_called_once_with() # Make sure rpc client is created and initialized in WorkerService rpc_client = rpc_client_class.return_value rpc_client_class.assert_called_once_with() self.assertEqual(rpc_client, self.worker._rpc_client, "Failed to create RPC client") def test_service_stop(self): self.worker = worker.WorkerService('host-1', 'topic-1', 'engine_id', mock.Mock()) with mock.patch.object(self.worker, '_rpc_server') as mock_rpc_server: self.worker.stop() mock_rpc_server.stop.assert_called_once_with() mock_rpc_server.wait.assert_called_once_with() @mock.patch.object(check_resource, 'load_resource') @mock.patch.object(check_resource.CheckResource, 'check') def test_check_resource_adds_and_removes_msg_queue(self, mock_check, mock_load_resource): mock_tgm = mock.MagicMock() mock_tgm.add_msg_queue = mock.Mock(return_value=None) mock_tgm.remove_msg_queue = mock.Mock(return_value=None) self.worker = worker.WorkerService('host-1', 'topic-1', 'engine_id', mock_tgm) ctx = utils.dummy_context() current_traversal = 'something' fake_res = mock.MagicMock() fake_res.current_traversal = current_traversal mock_load_resource.return_value = (fake_res, fake_res, fake_res) self.worker.check_resource(ctx, mock.Mock(), current_traversal, {}, mock.Mock(), mock.Mock()) self.assertTrue(mock_tgm.add_msg_queue.called) self.assertTrue(mock_tgm.remove_msg_queue.called) @mock.patch.object(check_resource, 'load_resource') @mock.patch.object(check_resource.CheckResource, 'check') def test_check_resource_adds_and_removes_msg_queue_on_exception( self, mock_check, mock_load_resource): # even if the check fails; the message should be removed mock_tgm = mock.MagicMock() mock_tgm.add_msg_queue = mock.Mock(return_value=None) mock_tgm.remove_msg_queue = mock.Mock(return_value=None) self.worker = worker.WorkerService('host-1', 'topic-1', 'engine_id', mock_tgm) ctx = utils.dummy_context() current_traversal = 'something' fake_res = mock.MagicMock() fake_res.current_traversal = current_traversal mock_load_resource.return_value = (fake_res, fake_res, fake_res) mock_check.side_effect = BaseException self.assertRaises(BaseException, self.worker.check_resource, ctx, mock.Mock(), current_traversal, {}, mock.Mock(), mock.Mock()) self.assertTrue(mock_tgm.add_msg_queue.called) # ensure remove is also called self.assertTrue(mock_tgm.remove_msg_queue.called) @mock.patch.object(worker, '_wait_for_cancellation') @mock.patch.object(worker, '_cancel_check_resource') @mock.patch.object(wc.WorkerClient, 'cancel_check_resource') @mock.patch.object(db_api, 'engine_get_all_locked_by_stack') def test_cancel_workers_when_no_resource_found(self, mock_get_locked, mock_ccr, mock_wccr, mock_wc): mock_tgm = mock.Mock() _worker = worker.WorkerService('host-1', 'topic-1', 'engine-001', mock_tgm) stack = mock.MagicMock() stack.id = 'stack_id' mock_get_locked.return_value = [] worker._cancel_workers(stack, mock_tgm, 'engine-001', _worker._rpc_client) self.assertFalse(mock_wccr.called) self.assertFalse(mock_ccr.called) @mock.patch.object(worker, '_wait_for_cancellation') @mock.patch.object(worker, '_cancel_check_resource') @mock.patch.object(wc.WorkerClient, 'cancel_check_resource') @mock.patch.object(db_api, 'engine_get_all_locked_by_stack') def test_cancel_workers_with_resources_found(self, mock_get_locked, mock_ccr, mock_wccr, mock_wc): mock_tgm = mock.Mock() _worker = worker.WorkerService('host-1', 'topic-1', 'engine-001', mock_tgm) stack = mock.MagicMock() stack.id = 'stack_id' mock_get_locked.return_value = ['engine-001', 'engine-007', 
                                        'engine-008']
        worker._cancel_workers(stack, mock_tgm, 'engine-001',
                               _worker._rpc_client)
        mock_wccr.assert_called_once_with(stack.id, 'engine-001', mock_tgm)
        self.assertEqual(2, mock_ccr.call_count)
        calls = [mock.call(stack.context, stack.id, 'engine-007'),
                 mock.call(stack.context, stack.id, 'engine-008')]
        mock_ccr.assert_has_calls(calls, any_order=True)
        self.assertTrue(mock_wc.called)

    @mock.patch.object(worker, '_stop_traversal')
    def test_stop_traversal_stops_nested_stack(self, mock_st):
        mock_tgm = mock.Mock()
        ctx = utils.dummy_context()
        tmpl = templatem.Template.create_empty_template()
        stack1 = parser.Stack(ctx, 'stack1', tmpl, current_traversal='123')
        stack1.store()
        stack2 = parser.Stack(ctx, 'stack2', tmpl, owner_id=stack1.id,
                              current_traversal='456')
        stack2.store()
        _worker = worker.WorkerService('host-1', 'topic-1', 'engine-001',
                                       mock_tgm)
        _worker.stop_traversal(stack1)
        self.assertEqual(2, mock_st.call_count)
        call1, call2 = mock_st.call_args_list
        call_args1, call_args2 = call1[0][0], call2[0][0]
        self.assertEqual('stack1', call_args1.name)
        self.assertEqual('stack2', call_args2.name)

    @mock.patch.object(worker, '_cancel_workers')
    @mock.patch.object(worker.WorkerService, 'stop_traversal')
    def test_stop_all_workers_when_stack_in_progress(self, mock_st, mock_cw):
        mock_tgm = mock.Mock()
        _worker = worker.WorkerService('host-1', 'topic-1', 'engine-001',
                                       mock_tgm)
        stack = mock.MagicMock()
        stack.IN_PROGRESS = 'IN_PROGRESS'
        stack.status = stack.IN_PROGRESS
        stack.id = 'stack_id'
        stack.rollback = mock.MagicMock()

        _worker.stop_all_workers(stack)

        mock_st.assert_not_called()
        mock_cw.assert_called_once_with(stack, mock_tgm, 'engine-001',
                                        _worker._rpc_client)
        self.assertFalse(stack.rollback.called)

    @mock.patch.object(worker, '_cancel_workers')
    @mock.patch.object(worker.WorkerService, 'stop_traversal')
    def test_stop_all_workers_when_stack_not_in_progress(self, mock_st,
                                                         mock_cw):
        mock_tgm = mock.Mock()
        _worker = worker.WorkerService('host-1', 'topic-1', 'engine-001',
                                       mock_tgm)
        stack = mock.MagicMock()
        stack.FAILED = 'FAILED'
        stack.status = stack.FAILED
        stack.id = 'stack_id'
        stack.rollback = mock.MagicMock()

        _worker.stop_all_workers(stack)

        self.assertFalse(mock_st.called)
        mock_cw.assert_called_once_with(stack, mock_tgm, 'engine-001',
                                        _worker._rpc_client)
        self.assertFalse(stack.rollback.called)

        # test when stack complete
        stack.COMPLETE = 'COMPLETE'
        stack.status = stack.COMPLETE

        _worker.stop_all_workers(stack)
        self.assertFalse(mock_st.called)
        mock_cw.assert_called_with(stack, mock_tgm, 'engine-001',
                                   _worker._rpc_client)
        self.assertFalse(stack.rollback.called)

    @mock.patch.object(stack_objects.Stack, 'select_and_update')
    def test_update_current_traversal(self, mock_sau):
        stack = mock.MagicMock()
        stack.current_traversal = 'some-thing'
        old_trvsl = stack.current_traversal
        worker._update_current_traversal(stack)
        self.assertNotEqual(old_trvsl, stack.current_traversal)
        mock_sau.assert_called_once_with(mock.ANY, stack.id, mock.ANY,
                                         exp_trvsl=old_trvsl)
heat-10.0.2/heat/tests/engine/service/0000775000175000017500000000000013343562672017540 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/engine/service/test_stack_resources.py0000666000175000017500000010020513343562351024342 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_config import cfg from oslo_messaging.rpc import dispatcher import six from heat.common import exception from heat.common import identifier from heat.engine.clients.os import keystone from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks from heat.engine import dependencies from heat.engine import resource as res from heat.engine.resources.aws.ec2 import instance as ins from heat.engine import service from heat.engine import stack from heat.engine import stack_lock from heat.engine import template as templatem from heat.objects import stack as stack_object from heat.tests import common from heat.tests.engine import tools from heat.tests import generic_resource as generic_rsrc from heat.tests import utils policy_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "alarming", "Resources" : { "WebServerScaleDownPolicy" : { "Type" : "AWS::AutoScaling::ScalingPolicy", "Properties" : { "AdjustmentType" : "ChangeInCapacity", "AutoScalingGroupName" : "", "Cooldown" : "60", "ScalingAdjustment" : "-1" } }, "Random" : { "Type" : "OS::Heat::RandomString", "DependsOn" : "WebServerScaleDownPolicy" } } } ''' class StackResourcesServiceTest(common.HeatTestCase): def setUp(self): super(StackResourcesServiceTest, self).setUp() self.ctx = utils.dummy_context(tenant_id='stack_resource_test_tenant') self.eng = service.EngineService('a-host', 'a-topic') self.eng.thread_group_mgr = tools.DummyThreadGroupManager() self.eng.engine_id = 'engine-fake-uuid' cfg.CONF.set_default('heat_stack_user_role', 'stack_user_role') @mock.patch.object(stack.Stack, 'load') def _test_describe_stack_resource(self, mock_load): mock_load.return_value = self.stack # Patch _resolve_any_attribute or it tries to call novaclient self.patchobject(res.Resource, '_resolve_any_attribute', return_value=None) r = self.eng.describe_stack_resource(self.ctx, self.stack.identifier(), 'WebServer', with_attr=None) self.assertIn('resource_identity', r) self.assertIn('description', r) self.assertIn('updated_time', r) self.assertIn('stack_identity', r) self.assertIsNotNone(r['stack_identity']) self.assertIn('stack_name', r) self.assertEqual(self.stack.name, r['stack_name']) self.assertIn('metadata', r) self.assertIn('resource_status', r) self.assertIn('resource_status_reason', r) self.assertIn('resource_type', r) self.assertIn('physical_resource_id', r) self.assertIn('resource_name', r) self.assertIn('attributes', r) self.assertEqual('WebServer', r['resource_name']) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) @tools.stack_context('service_stack_resource_describe__test_stack') def test_stack_resource_describe(self): self._test_describe_stack_resource() @mock.patch.object(service.EngineService, '_get_stack') def test_stack_resource_describe_nonexist_stack(self, mock_get): non_exist_identifier = identifier.HeatIdentifier( self.ctx.tenant_id, 'wibble', '18d06e2e-44d3-4bef-9fbf-52480d604b02') mock_get.side_effect = exception.EntityNotFound( entity='Stack', name='test') ex = self.assertRaises(dispatcher.ExpectedException, self.eng.describe_stack_resource, self.ctx, 
non_exist_identifier, 'WebServer') self.assertEqual(exception.EntityNotFound, ex.exc_info[0]) mock_get.assert_called_once_with(self.ctx, non_exist_identifier) @mock.patch.object(stack.Stack, 'load') @tools.stack_context('service_resource_describe_nonexist_test_stack') def test_stack_resource_describe_nonexist_resource(self, mock_load): mock_load.return_value = self.stack ex = self.assertRaises(dispatcher.ExpectedException, self.eng.describe_stack_resource, self.ctx, self.stack.identifier(), 'foo') self.assertEqual(exception.ResourceNotFound, ex.exc_info[0]) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) @tools.stack_context('service_resource_describe_noncreated_test_stack', create_res=False) def test_stack_resource_describe_noncreated_resource(self): self._test_describe_stack_resource() @mock.patch.object(service.EngineService, '_authorize_stack_user') @tools.stack_context('service_resource_describe_user_deny_test_stack') def test_stack_resource_describe_stack_user_deny(self, mock_auth): self.ctx.roles = [cfg.CONF.heat_stack_user_role] mock_auth.return_value = False ex = self.assertRaises(dispatcher.ExpectedException, self.eng.describe_stack_resource, self.ctx, self.stack.identifier(), 'foo') self.assertEqual(exception.Forbidden, ex.exc_info[0]) mock_auth.assert_called_once_with(self.ctx, mock.ANY, 'foo') @mock.patch.object(stack.Stack, 'load') @tools.stack_context('service_resources_describe_test_stack') def test_stack_resources_describe(self, mock_load): mock_load.return_value = self.stack resources = self.eng.describe_stack_resources(self.ctx, self.stack.identifier(), 'WebServer') self.assertEqual(1, len(resources)) r = resources[0] self.assertIn('resource_identity', r) self.assertIn('description', r) self.assertIn('updated_time', r) self.assertIn('stack_identity', r) self.assertIsNotNone(r['stack_identity']) self.assertIn('stack_name', r) self.assertEqual(self.stack.name, r['stack_name']) self.assertIn('resource_status', r) self.assertIn('resource_status_reason', r) self.assertIn('resource_type', r) self.assertIn('physical_resource_id', r) self.assertIn('resource_name', r) self.assertEqual('WebServer', r['resource_name']) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) @mock.patch.object(stack.Stack, 'load') @tools.stack_context('service_resources_describe_no_filter_test_stack') def test_stack_resources_describe_no_filter(self, mock_load): mock_load.return_value = self.stack resources = self.eng.describe_stack_resources( self.ctx, self.stack.identifier(), None) self.assertEqual(1, len(resources)) r = resources[0] self.assertIn('resource_name', r) self.assertEqual('WebServer', r['resource_name']) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) @mock.patch.object(service.EngineService, '_get_stack') def test_stack_resources_describe_bad_lookup(self, mock_get): mock_get.side_effect = TypeError self.assertRaises(TypeError, self.eng.describe_stack_resources, self.ctx, None, 'WebServer') mock_get.assert_called_once_with(self.ctx, None) def test_stack_resources_describe_nonexist_stack(self): non_exist_identifier = identifier.HeatIdentifier( self.ctx.tenant_id, 'wibble', '18d06e2e-44d3-4bef-9fbf-52480d604b02') ex = self.assertRaises(dispatcher.ExpectedException, self.eng.describe_stack_resources, self.ctx, non_exist_identifier, 'WebServer') self.assertEqual(exception.EntityNotFound, ex.exc_info[0]) @tools.stack_context('find_phys_res_stack') def test_find_physical_resource(self): resources = self.eng.describe_stack_resources(self.ctx, 
self.stack.identifier(), None) phys_id = resources[0]['physical_resource_id'] result = self.eng.find_physical_resource(self.ctx, phys_id) self.assertIsInstance(result, dict) resource_identity = identifier.ResourceIdentifier(**result) self.assertEqual(self.stack.identifier(), resource_identity.stack()) self.assertEqual('WebServer', resource_identity.resource_name) def test_find_physical_resource_nonexist(self): ex = self.assertRaises(dispatcher.ExpectedException, self.eng.find_physical_resource, self.ctx, 'foo') self.assertEqual(exception.EntityNotFound, ex.exc_info[0]) @mock.patch.object(stack.Stack, 'load') @tools.stack_context('service_resources_list_test_stack') def test_stack_resources_list(self, mock_load): mock_load.return_value = self.stack resources = self.eng.list_stack_resources(self.ctx, self.stack.identifier()) self.assertEqual(1, len(resources)) r = resources[0] self.assertIn('resource_identity', r) self.assertIn('updated_time', r) self.assertIn('physical_resource_id', r) self.assertIn('resource_name', r) self.assertEqual('WebServer', r['resource_name']) self.assertIn('resource_status', r) self.assertIn('resource_status_reason', r) self.assertIn('resource_type', r) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) @mock.patch.object(stack.Stack, 'load') @tools.stack_context('service_resources_list_test_stack_with_depth') def test_stack_resources_list_with_depth(self, mock_load): mock_load.return_value = self.stack resources = six.itervalues(self.stack) self.stack.iter_resources = mock.Mock(return_value=resources) self.eng.list_stack_resources(self.ctx, self.stack.identifier(), 2) self.stack.iter_resources.assert_called_once_with(2, filters=None) @mock.patch.object(stack.Stack, 'load') @tools.stack_context('service_resources_list_test_stack_with_max_depth') def test_stack_resources_list_with_max_depth(self, mock_load): mock_load.return_value = self.stack resources = six.itervalues(self.stack) self.stack.iter_resources = mock.Mock(return_value=resources) self.eng.list_stack_resources(self.ctx, self.stack.identifier(), 99) max_depth = cfg.CONF.max_nested_stack_depth self.stack.iter_resources.assert_called_once_with(max_depth, filters=None) @mock.patch.object(stack.Stack, 'load') @tools.stack_context('service_resources_list_test_stack') def test_stack_resources_filter_type(self, mock_load): mock_load.return_value = self.stack resources = six.itervalues(self.stack) self.stack.iter_resources = mock.Mock(return_value=resources) filters = {'type': 'AWS::EC2::Instance'} resources = self.eng.list_stack_resources(self.ctx, self.stack.identifier(), filters=filters) self.stack.iter_resources.assert_called_once_with( 0, filters={}) self.assertIn('AWS::EC2::Instance', resources[0]['resource_type']) @mock.patch.object(stack.Stack, 'load') @tools.stack_context('service_resources_list_test_stack') def test_stack_resources_filter_type_not_found(self, mock_load): mock_load.return_value = self.stack resources = six.itervalues(self.stack) self.stack.iter_resources = mock.Mock(return_value=resources) filters = {'type': 'NonExisted'} resources = self.eng.list_stack_resources(self.ctx, self.stack.identifier(), filters=filters) self.stack.iter_resources.assert_called_once_with( 0, filters={}) self.assertEqual(0, len(resources)) @mock.patch.object(stack.Stack, 'load') def test_stack_resources_list_deleted_stack(self, mock_load): stk = tools.setup_stack('resource_list_deleted_stack', self.ctx) stack_id = stk.identifier() mock_load.return_value = stk tools.clean_up_stack(stk) resources = 
self.eng.list_stack_resources(self.ctx, stack_id) self.assertEqual(0, len(resources)) @mock.patch.object(service.EngineService, '_get_stack') def test_stack_resources_list_nonexist_stack(self, mock_get): non_exist_identifier = identifier.HeatIdentifier( self.ctx.tenant_id, 'wibble', '18d06e2e-44d3-4bef-9fbf-52480d604b02') mock_get.side_effect = exception.EntityNotFound(entity='Stack', name='test') ex = self.assertRaises(dispatcher.ExpectedException, self.eng.list_stack_resources, self.ctx, non_exist_identifier) self.assertEqual(exception.EntityNotFound, ex.exc_info[0]) mock_get.assert_called_once_with(self.ctx, non_exist_identifier, show_deleted=True) def _stack_create(self, stack_name): self.patchobject(keystone.KeystoneClientPlugin, '_create', return_value=fake_ks.FakeKeystoneClient()) stk = tools.get_stack(stack_name, self.ctx, policy_template) stk.store() stk.create() s = stack_object.Stack.get_by_id(self.ctx, stk.id) self.patchobject(service.EngineService, '_get_stack', return_value=s) return stk def test_signal_reception_async(self): self.eng.thread_group_mgr = tools.DummyThreadGroupMgrLogStart() stack_name = 'signal_reception_async' self.stack = self._stack_create(stack_name) test_data = {'food': 'yum'} self.eng.resource_signal(self.ctx, dict(self.stack.identifier()), 'WebServerScaleDownPolicy', test_data) self.assertEqual([(self.stack.id, mock.ANY)], self.eng.thread_group_mgr.started) @mock.patch.object(res.Resource, 'signal') def test_signal_reception_sync(self, mock_signal): mock_signal.return_value = None stack_name = 'signal_reception_sync' self.stack = self._stack_create(stack_name) test_data = {'food': 'yum'} self.eng.resource_signal(self.ctx, dict(self.stack.identifier()), 'WebServerScaleDownPolicy', test_data, sync_call=True) mock_signal.assert_called_once_with(mock.ANY, False) def test_signal_reception_get_resource_none(self): stack_name = 'signal_reception_no_resource' self.stack = self._stack_create(stack_name) test_data = {'food': 'yum'} self.patchobject(stack.Stack, 'resource_get', return_value=None) ex = self.assertRaises(dispatcher.ExpectedException, self.eng.resource_signal, self.ctx, dict(self.stack.identifier()), 'WebServerScaleDownPolicy', test_data) self.assertEqual(exception.ResourceNotFound, ex.exc_info[0]) def test_signal_reception_no_resource(self): stack_name = 'signal_reception_no_resource' self.stack = self._stack_create(stack_name) test_data = {'food': 'yum'} ex = self.assertRaises(dispatcher.ExpectedException, self.eng.resource_signal, self.ctx, dict(self.stack.identifier()), 'resource_does_not_exist', test_data) self.assertEqual(exception.ResourceNotFound, ex.exc_info[0]) @mock.patch.object(stack.Stack, 'load') @mock.patch.object(service.EngineService, '_get_stack') def test_signal_reception_unavailable_resource(self, mock_get, mock_load): stack_name = 'signal_reception_unavailable_resource' stk = tools.get_stack(stack_name, self.ctx, policy_template) stk.store() self.stack = stk s = stack_object.Stack.get_by_id(self.ctx, self.stack.id) mock_load.return_value = stk mock_get.return_value = s test_data = {'food': 'yum'} ex = self.assertRaises(dispatcher.ExpectedException, self.eng.resource_signal, self.ctx, dict(self.stack.identifier()), 'WebServerScaleDownPolicy', test_data) self.assertEqual(exception.ResourceNotAvailable, ex.exc_info[0]) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY, use_stored_context=mock.ANY) mock_get.assert_called_once_with(self.ctx, self.stack.identifier()) @mock.patch.object(res.Resource, 'signal') def 
test_signal_returns_metadata(self, mock_signal): mock_signal.return_value = None self.stack = self._stack_create('signal_reception') rsrc = self.stack['WebServerScaleDownPolicy'] test_metadata = {'food': 'yum'} rsrc.metadata_set(test_metadata) md = self.eng.resource_signal(self.ctx, dict(self.stack.identifier()), 'WebServerScaleDownPolicy', None, sync_call=True) self.assertEqual(test_metadata, md) mock_signal.assert_called_once_with(mock.ANY, False) def test_signal_unset_invalid_hook(self): self.stack = self._stack_create('signal_unset_invalid_hook') details = {'unset_hook': 'invalid_hook'} ex = self.assertRaises(dispatcher.ExpectedException, self.eng.resource_signal, self.ctx, dict(self.stack.identifier()), 'WebServerScaleDownPolicy', details) msg = 'Invalid hook type "invalid_hook"' self.assertIn(msg, six.text_type(ex.exc_info[1])) self.assertEqual(exception.InvalidBreakPointHook, ex.exc_info[0]) def test_signal_unset_not_defined_hook(self): self.stack = self._stack_create('signal_unset_not_defined_hook') details = {'unset_hook': 'pre-update'} ex = self.assertRaises(dispatcher.ExpectedException, self.eng.resource_signal, self.ctx, dict(self.stack.identifier()), 'WebServerScaleDownPolicy', details) msg = ('The "pre-update" hook is not defined on ' 'AWSScalingPolicy "WebServerScaleDownPolicy"') self.assertIn(msg, six.text_type(ex.exc_info[1])) self.assertEqual(exception.InvalidBreakPointHook, ex.exc_info[0]) @mock.patch.object(res.Resource, 'metadata_update') @mock.patch.object(res.Resource, 'signal') @mock.patch.object(service.EngineService, '_get_stack') def test_signal_calls_metadata_update(self, mock_get, mock_signal, mock_update): # fake keystone client self.patchobject(keystone.KeystoneClientPlugin, '_create', return_value=fake_ks.FakeKeystoneClient()) stk = tools.get_stack('signal_reception', self.ctx, policy_template) self.stack = stk stk.store() stk.create() s = stack_object.Stack.get_by_id(self.ctx, self.stack.id) mock_get.return_value = s mock_signal.return_value = True # this will be called once for the Random resource mock_update.return_value = None self.eng.resource_signal(self.ctx, dict(self.stack.identifier()), 'WebServerScaleDownPolicy', None, sync_call=True) mock_get.assert_called_once_with(self.ctx, self.stack.identifier()) mock_signal.assert_called_once_with(mock.ANY, False) mock_update.assert_called_once_with() @mock.patch.object(res.Resource, 'metadata_update') @mock.patch.object(res.Resource, 'signal') @mock.patch.object(service.EngineService, '_get_stack') def test_signal_no_calls_metadata_update(self, mock_get, mock_signal, mock_update): # fake keystone client self.patchobject(keystone.KeystoneClientPlugin, '_create', return_value=fake_ks.FakeKeystoneClient()) stk = tools.get_stack('signal_reception', self.ctx, policy_template) self.stack = stk stk.store() stk.create() s = stack_object.Stack.get_by_id(self.ctx, self.stack.id) mock_get.return_value = s mock_signal.return_value = False self.eng.resource_signal(self.ctx, dict(self.stack.identifier()), 'WebServerScaleDownPolicy', None, sync_call=True) mock_get.assert_called_once_with(self.ctx, self.stack.identifier()) mock_signal.assert_called_once_with(mock.ANY, False) # this will never be called self.assertEqual(0, mock_update.call_count) def test_lazy_load_resources(self): stack_name = 'lazy_load_test' lazy_load_template = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'}, 'bar': { 'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'foo'}, } } } } templ = 
templatem.Template(lazy_load_template) stk = stack.Stack(self.ctx, stack_name, templ) self.assertIsNone(stk._resources) self.assertIsNone(stk._dependencies) resources = stk.resources self.assertIsInstance(resources, dict) self.assertEqual(2, len(resources)) self.assertIsInstance(resources.get('foo'), generic_rsrc.GenericResource) self.assertIsInstance(resources.get('bar'), generic_rsrc.ResourceWithProps) stack_dependencies = stk.dependencies self.assertIsInstance(stack_dependencies, dependencies.Dependencies) self.assertEqual(2, len(stack_dependencies.graph())) @tools.stack_context('service_find_resource_logical_name') def test_find_resource_logical_name(self): rsrc = self.stack['WebServer'] physical_rsrc = self.eng._find_resource_in_stack(self.ctx, 'WebServer', self.stack) self.assertEqual(rsrc.id, physical_rsrc.id) @tools.stack_context('service_find_resource_physical_id') def test_find_resource_physical_id(self): rsrc = self.stack['WebServer'] physical_rsrc = self.eng._find_resource_in_stack(self.ctx, rsrc.resource_id, self.stack) self.assertEqual(rsrc.id, physical_rsrc.id) @tools.stack_context('service_find_resource_not_found') def test_find_resource_nonexist(self): self.assertRaises(exception.ResourceNotFound, self.eng._find_resource_in_stack, self.ctx, 'wibble', self.stack) def _test_mark_healthy_asserts(self, action='CHECK', status='FAILED', reason='state changed', meta=None): rs = self.eng.describe_stack_resource( self.ctx, self.stack.identifier(), 'WebServer', with_attr=None) self.assertIn('resource_action', rs) self.assertIn('resource_status', rs) self.assertIn('resource_status_reason', rs) self.assertEqual(action, rs['resource_action']) self.assertEqual(status, rs['resource_status']) self.assertEqual(reason, rs['resource_status_reason']) if meta is not None: self.assertIn('metadata', rs) self.assertEqual(meta, rs['metadata']) @tools.stack_context('service_mark_healthy_create_complete_test_stk') def test_mark_healthy_in_create_complete(self): self.eng.resource_mark_unhealthy(self.ctx, self.stack.identifier(), 'WebServer', False, resource_status_reason='noop') self._test_mark_healthy_asserts(action='CREATE', status='COMPLETE') @tools.stack_context('service_mark_unhealthy_create_complete_test_stk') def test_mark_unhealthy_in_create_complete(self): reason = 'Some Reason' self.eng.resource_mark_unhealthy(self.ctx, self.stack.identifier(), 'WebServer', True, resource_status_reason=reason) self._test_mark_healthy_asserts(reason=reason) @tools.stack_context('service_mark_healthy_check_failed_test_stk') def test_mark_healthy_check_failed(self): reason = 'Some Reason' self.eng.resource_mark_unhealthy(self.ctx, self.stack.identifier(), 'WebServer', True, resource_status_reason=reason) self._test_mark_healthy_asserts(reason=reason) meta = {'for_test': True} def override_metadata_reset(rsrc): rsrc.metadata_set(meta) ins.Instance.handle_metadata_reset = override_metadata_reset reason = 'Good Reason' self.eng.resource_mark_unhealthy(self.ctx, self.stack.identifier(), 'WebServer', False, resource_status_reason=reason) self._test_mark_healthy_asserts(status='COMPLETE', reason=reason, meta=meta) @tools.stack_context('service_mark_unhealthy_check_failed_test_stack') def test_mark_unhealthy_check_failed(self): reason = 'Some Reason' self.eng.resource_mark_unhealthy(self.ctx, self.stack.identifier(), 'WebServer', True, resource_status_reason=reason) self._test_mark_healthy_asserts(reason=reason) new_reason = 'New Reason' self.eng.resource_mark_unhealthy(self.ctx, self.stack.identifier(), 'WebServer', 
True, resource_status_reason=new_reason) self._test_mark_healthy_asserts(reason=new_reason) @tools.stack_context('service_mark_unhealthy_invalid_value_test_stk') def test_mark_unhealthy_invalid_value(self): ex = self.assertRaises(dispatcher.ExpectedException, self.eng.resource_mark_unhealthy, self.ctx, self.stack.identifier(), 'WebServer', "This is wrong", resource_status_reason="Some Reason") self.assertEqual(exception.Invalid, ex.exc_info[0]) @tools.stack_context('service_mark_unhealthy_none_reason_test_stk') def test_mark_unhealthy_none_reason(self): self.eng.resource_mark_unhealthy(self.ctx, self.stack.identifier(), 'WebServer', True) default_reason = 'state changed by resource_mark_unhealthy api' self._test_mark_healthy_asserts(reason=default_reason) @tools.stack_context('service_mark_unhealthy_empty_reason_test_stk') def test_mark_unhealthy_empty_reason(self): self.eng.resource_mark_unhealthy(self.ctx, self.stack.identifier(), 'WebServer', True, resource_status_reason="") default_reason = 'state changed by resource_mark_unhealthy api' self._test_mark_healthy_asserts(reason=default_reason) @tools.stack_context('service_mark_unhealthy_lock_no_converge_test_stk') def test_mark_unhealthy_lock_no_convergence(self): mock_acquire = self.patchobject(stack_lock.StackLock, 'acquire', return_value=None) mock_release = self.patchobject(stack_lock.StackLock, 'release', return_value=None) self.eng.resource_mark_unhealthy(self.ctx, self.stack.identifier(), 'WebServer', True, resource_status_reason="") mock_acquire.assert_called_once_with() mock_release.assert_called_once_with() @tools.stack_context('service_mark_unhealthy_lock_converge_test_stk', convergence=True) def test_mark_unhealthy_stack_lock_convergence(self): mock_store_with_lock = self.patchobject(res.Resource, '_store_with_lock', return_value=None) self.eng.resource_mark_unhealthy(self.ctx, self.stack.identifier(), 'WebServer', True, resource_status_reason="") self.assertEqual(2, mock_store_with_lock.call_count) @tools.stack_context('service_mark_unhealthy_lockexc_converge_test_stk', convergence=True) def test_mark_unhealthy_stack_lock_exc_convergence(self): def _store_with_lock(*args, **kwargs): raise exception.UpdateInProgress(self.stack.name) self.patchobject( res.Resource, '_store_with_lock', return_value=None, side_effect=exception.UpdateInProgress(self.stack.name)) ex = self.assertRaises(dispatcher.ExpectedException, self.eng.resource_mark_unhealthy, self.ctx, self.stack.identifier(), 'WebServer', True, resource_status_reason="") self.assertEqual(exception.ActionInProgress, ex.exc_info[0]) @tools.stack_context('service_mark_unhealthy_lockexc_no_converge_test_stk') def test_mark_unhealthy_stack_lock_exc_no_convergence(self): self.patchobject( stack_lock.StackLock, 'acquire', return_value=None, side_effect=exception.ActionInProgress( stack_name=self.stack.name, action=self.stack.action)) ex = self.assertRaises(dispatcher.ExpectedException, self.eng.resource_mark_unhealthy, self.ctx, self.stack.identifier(), 'WebServer', True, resource_status_reason="") self.assertEqual(exception.ActionInProgress, ex.exc_info[0]) heat-10.0.2/heat/tests/engine/service/test_threadgroup_mgr.py0000666000175000017500000001146313343562340024341 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import eventlet import mock from oslo_context import context from heat.engine import service from heat.tests import common class ThreadGroupManagerTest(common.HeatTestCase): def setUp(self): super(ThreadGroupManagerTest, self).setUp() self.f = 'function' self.fargs = ('spam', 'ham', 'eggs') self.fkwargs = {'foo': 'bar'} self.cnxt = 'ctxt' self.engine_id = 'engine_id' self.stack = mock.Mock() self.lock_mock = mock.Mock() self.stlock_mock = self.patch('heat.engine.service.stack_lock') self.stlock_mock.StackLock.return_value = self.lock_mock self.tg_mock = mock.Mock() self.thg_mock = self.patch('heat.engine.service.threadgroup') self.thg_mock.ThreadGroup.return_value = self.tg_mock self.cfg_mock = self.patch('heat.engine.service.cfg') def test_tgm_start_with_lock(self): thm = service.ThreadGroupManager() with self.patchobject(thm, 'start_with_acquired_lock'): mock_thread_lock = mock.Mock() mock_thread_lock.__enter__ = mock.Mock(return_value=None) mock_thread_lock.__exit__ = mock.Mock(return_value=None) self.lock_mock.thread_lock.return_value = mock_thread_lock thm.start_with_lock(self.cnxt, self.stack, self.engine_id, self.f, *self.fargs, **self.fkwargs) self.stlock_mock.StackLock.assert_called_with(self.cnxt, self.stack.id, self.engine_id) thm.start_with_acquired_lock.assert_called_once_with( self.stack, self.lock_mock, self.f, *self.fargs, **self.fkwargs) def test_tgm_start(self): stack_id = 'test' thm = service.ThreadGroupManager() ret = thm.start(stack_id, self.f, *self.fargs, **self.fkwargs) self.assertEqual(self.tg_mock, thm.groups['test']) self.tg_mock.add_thread.assert_called_with( thm._start_with_trace, context.get_current(), None, self.f, *self.fargs, **self.fkwargs) self.assertEqual(ret, self.tg_mock.add_thread()) def test_tgm_add_timer(self): stack_id = 'test' thm = service.ThreadGroupManager() thm.add_timer(stack_id, self.f, *self.fargs, **self.fkwargs) self.assertEqual(self.tg_mock, thm.groups[stack_id]) self.tg_mock.add_timer.assert_called_with( self.cfg_mock.CONF.periodic_interval, self.f, *self.fargs, **self.fkwargs) def test_tgm_add_msg_queue(self): stack_id = 'add_msg_queues_test' e1, e2 = mock.Mock(), mock.Mock() thm = service.ThreadGroupManager() thm.add_msg_queue(stack_id, e1) thm.add_msg_queue(stack_id, e2) self.assertEqual([e1, e2], thm.msg_queues[stack_id]) def test_tgm_remove_msg_queue(self): stack_id = 'add_msg_queues_test' e1, e2 = mock.Mock(), mock.Mock() thm = service.ThreadGroupManager() thm.add_msg_queue(stack_id, e1) thm.add_msg_queue(stack_id, e2) thm.remove_msg_queue(None, stack_id, e2) self.assertEqual([e1], thm.msg_queues[stack_id]) thm.remove_msg_queue(None, stack_id, e1) self.assertNotIn(stack_id, thm.msg_queues) def test_tgm_send(self): stack_id = 'send_test' e1, e2 = mock.MagicMock(), mock.Mock() thm = service.ThreadGroupManager() thm.add_msg_queue(stack_id, e1) thm.add_msg_queue(stack_id, e2) thm.send(stack_id, 'test_message') class ThreadGroupManagerStopTest(common.HeatTestCase): def test_tgm_stop(self): stack_id = 'test' done = [] def function(): while True: eventlet.sleep() def linked(gt, thread): for i in range(10): eventlet.sleep() 
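            # (Editor's note, illustrative only - not part of the original
            # file.) eventlet's GreenThread.link() registers a callback
            # that runs once the green thread dies, so 'done' gaining the
            # thread here proves stop() really killed and waited for the
            # stack's thread rather than returning while it was still
            # running.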
done.append(thread) thm = service.ThreadGroupManager() thm.add_msg_queue(stack_id, mock.Mock()) thread = thm.start(stack_id, function) thread.link(linked, thread) thm.stop(stack_id) self.assertIn(thread, done) self.assertNotIn(stack_id, thm.groups) self.assertNotIn(stack_id, thm.msg_queues) heat-10.0.2/heat/tests/engine/service/test_stack_events.py0000666000175000017500000002502613343562340023641 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_config import cfg from oslo_messaging import conffixture from heat.engine import resource as res from heat.engine.resources.aws.ec2 import instance as instances from heat.engine import service from heat.engine import stack as parser from heat.objects import event as event_object from heat.objects import stack as stack_object from heat.tests import common from heat.tests.engine import tools from heat.tests import generic_resource as generic_rsrc from heat.tests import utils class StackEventTest(common.HeatTestCase): def setUp(self): super(StackEventTest, self).setUp() self.ctx = utils.dummy_context(tenant_id='stack_event_test_tenant') self.eng = service.EngineService('a-host', 'a-topic') self.eng.thread_group_mgr = service.ThreadGroupManager() @tools.stack_context('service_event_list_test_stack') @mock.patch.object(service.EngineService, '_get_stack') def test_event_list(self, mock_get): mock_get.return_value = stack_object.Stack.get_by_id(self.ctx, self.stack.id) events = self.eng.list_events(self.ctx, self.stack.identifier()) self.assertEqual(4, len(events)) for ev in events: self.assertNotIn('root_stack_id', ev) self.assertIn('event_identity', ev) self.assertIsInstance(ev['event_identity'], dict) self.assertTrue(ev['event_identity']['path'].rsplit('/', 1)[1]) self.assertIn('resource_name', ev) self.assertIn(ev['resource_name'], ('service_event_list_test_stack', 'WebServer')) self.assertIn('physical_resource_id', ev) self.assertEqual('CREATE', ev['resource_action']) self.assertIn(ev['resource_status'], ('IN_PROGRESS', 'COMPLETE')) self.assertIn('resource_status_reason', ev) self.assertIn(ev['resource_status_reason'], ('state changed', 'Stack CREATE started', 'Stack CREATE completed successfully')) self.assertIn('resource_type', ev) self.assertIn(ev['resource_type'], ('AWS::EC2::Instance', 'OS::Heat::Stack')) self.assertIn('stack_identity', ev) self.assertIn('stack_name', ev) self.assertEqual(self.stack.name, ev['stack_name']) self.assertIn('event_time', ev) mock_get.assert_called_once_with(self.ctx, self.stack.identifier(), show_deleted=True) @tools.stack_context('service_event_list_test_stack') @mock.patch.object(service.EngineService, '_get_stack') def test_event_list_nested_depth(self, mock_get): mock_get.return_value = stack_object.Stack.get_by_id(self.ctx, self.stack.id) events = self.eng.list_events(self.ctx, self.stack.identifier(), nested_depth=1) self.assertEqual(4, len(events)) for ev in events: self.assertIn('root_stack_id', ev) mock_get.assert_called_once_with(self.ctx, self.stack.identifier(), 
show_deleted=True) @tools.stack_context('service_event_list_deleted_resource') @mock.patch.object(instances.Instance, 'handle_delete') def test_event_list_deleted_resource(self, mock_delete): self.useFixture(conffixture.ConfFixture(cfg.CONF)) mock_delete.return_value = None res._register_class('GenericResourceType', generic_rsrc.GenericResource) thread = mock.Mock() thread.link = mock.Mock(return_value=None) def run(stack_id, func, *args, **kwargs): func(*args, **kwargs) return thread self.eng.thread_group_mgr.start = run new_tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} result = self.eng.update_stack(self.ctx, self.stack.identifier(), new_tmpl, None, None, {}) # The self.stack reference needs to be updated. Since the underlying # stack is updated in update_stack, the original reference is now # pointing to an orphaned stack object. self.stack = parser.Stack.load(self.ctx, stack_id=result['stack_id']) self.assertEqual(result, self.stack.identifier()) self.assertIsInstance(result, dict) self.assertTrue(result['stack_id']) events = self.eng.list_events(self.ctx, self.stack.identifier()) self.assertEqual(10, len(events)) for ev in events: self.assertIn('event_identity', ev) self.assertIsInstance(ev['event_identity'], dict) self.assertTrue(ev['event_identity']['path'].rsplit('/', 1)[1]) self.assertIn('resource_name', ev) self.assertIn('physical_resource_id', ev) self.assertIn('resource_status_reason', ev) self.assertIn(ev['resource_action'], ('CREATE', 'UPDATE', 'DELETE')) self.assertIn(ev['resource_status'], ('IN_PROGRESS', 'COMPLETE')) self.assertIn('resource_type', ev) self.assertIn(ev['resource_type'], ('AWS::EC2::Instance', 'GenericResourceType', 'OS::Heat::Stack')) self.assertIn('stack_identity', ev) self.assertIn('stack_name', ev) self.assertEqual(self.stack.name, ev['stack_name']) self.assertIn('event_time', ev) mock_delete.assert_called_once_with() expected = [ mock.call(mock.ANY), mock.call(mock.ANY, self.stack.id, mock.ANY) ] self.assertEqual(expected, thread.link.call_args_list) @tools.stack_context('service_event_list_by_tenant') def test_event_list_by_tenant(self): events = self.eng.list_events(self.ctx, None) self.assertEqual(4, len(events)) for ev in events: self.assertIn('event_identity', ev) self.assertIsInstance(ev['event_identity'], dict) self.assertTrue(ev['event_identity']['path'].rsplit('/', 1)[1]) self.assertIn('resource_name', ev) self.assertIn(ev['resource_name'], ('WebServer', 'service_event_list_by_tenant')) self.assertIn('physical_resource_id', ev) self.assertEqual('CREATE', ev['resource_action']) self.assertIn(ev['resource_status'], ('IN_PROGRESS', 'COMPLETE')) self.assertIn('resource_status_reason', ev) self.assertIn(ev['resource_status_reason'], ('state changed', 'Stack CREATE started', 'Stack CREATE completed successfully')) self.assertIn('resource_type', ev) self.assertIn(ev['resource_type'], ('AWS::EC2::Instance', 'OS::Heat::Stack')) self.assertIn('stack_identity', ev) self.assertIn('stack_name', ev) self.assertEqual(self.stack.name, ev['stack_name']) self.assertIn('event_time', ev) @mock.patch.object(event_object.Event, 'get_all_by_stack') @mock.patch.object(service.EngineService, '_get_stack') def test_event_list_with_marker_and_filters(self, mock_get, mock_get_all): limit = object() marker = object() sort_keys = object() sort_dir = object() filters = {} mock_get.return_value = mock.Mock(id=1) self.eng.list_events(self.ctx, 1, limit=limit, marker=marker, sort_keys=sort_keys, 
sort_dir=sort_dir, filters=filters) mock_get_all.assert_called_once_with(self.ctx, 1, limit=limit, sort_keys=sort_keys, marker=marker, sort_dir=sort_dir, filters=filters) @mock.patch.object(event_object.Event, 'get_all_by_tenant') def test_tenant_events_list_with_marker_and_filters(self, mock_get_all): limit = object() marker = object() sort_keys = object() sort_dir = object() filters = {} self.eng.list_events(self.ctx, None, limit=limit, marker=marker, sort_keys=sort_keys, sort_dir=sort_dir, filters=filters) mock_get_all.assert_called_once_with(self.ctx, limit=limit, sort_keys=sort_keys, marker=marker, sort_dir=sort_dir, filters=filters) @tools.stack_context('service_event_list_single_event') @mock.patch.object(service.EngineService, '_get_stack') def test_event_list_single_has_rsrc_prop_data(self, mock_get): mock_get.return_value = stack_object.Stack.get_by_id(self.ctx, self.stack.id) events = self.eng.list_events(self.ctx, self.stack.identifier()) self.assertEqual(4, len(events)) for ev in events: self.assertNotIn('resource_properties', ev) event_objs = event_object.Event.get_all_by_stack( self.ctx, self.stack.id) for i in range(2): event_uuid = event_objs[i]['uuid'] events = self.eng.list_events(self.ctx, self.stack.identifier(), filters={'uuid': event_uuid}) self.assertEqual(1, len(events)) self.assertIn('resource_properties', events[0]) if i > 0: self.assertEqual(4, len(events[0]['resource_properties'])) heat-10.0.2/heat/tests/engine/service/test_stack_delete.py0000666000175000017500000002612513343562340023600 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
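# NOTE: a minimal editorial sketch of the lock arbitration the delete tests
# below exercise; the names mirror the mocks used in this file, not the real
# service internals:
#
#     held_by = lock.try_acquire()
#     if held_by == self.engine_id:
#         # this engine already owns the lock: cancel our own worker threads
#         thread_group_mgr.send(stack.id, 'cancel')
#     elif service_utils.engine_alive(ctx, held_by):
#         # another live engine owns it: ask it over RPC to cancel
#         self._remote_call(ctx, held_by, ..., 'send', message='cancel')
#     else:
#         # the owning engine is dead: steal the lock and delete anyway
#         lock.acquire(True)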
import mock from oslo_config import cfg from oslo_messaging.rpc import dispatcher from oslo_utils import timeutils from heat.common import exception from heat.common import service_utils from heat.engine import service from heat.engine import stack as parser from heat.engine import stack_lock from heat.objects import stack as stack_object from heat.objects import stack_lock as stack_lock_object from heat.tests import common from heat.tests.engine import tools from heat.tests import utils class StackDeleteTest(common.HeatTestCase): def setUp(self): super(StackDeleteTest, self).setUp() self.ctx = utils.dummy_context() self.man = service.EngineService('a-host', 'a-topic') self.man.thread_group_mgr = service.ThreadGroupManager() @mock.patch.object(parser.Stack, 'load') def test_stack_delete(self, mock_load): stack_name = 'service_delete_test_stack' stack = tools.get_stack(stack_name, self.ctx) sid = stack.store() mock_load.return_value = stack s = stack_object.Stack.get_by_id(self.ctx, sid) self.assertIsNone(self.man.delete_stack(self.ctx, stack.identifier())) self.man.thread_group_mgr.groups[sid].wait() mock_load.assert_called_once_with(self.ctx, stack=s) def test_stack_delete_nonexist(self): stack_name = 'service_delete_nonexist_test_stack' stack = tools.get_stack(stack_name, self.ctx) ex = self.assertRaises(dispatcher.ExpectedException, self.man.delete_stack, self.ctx, stack.identifier()) self.assertEqual(exception.EntityNotFound, ex.exc_info[0]) @mock.patch.object(parser.Stack, 'load') @mock.patch.object(stack_lock.StackLock, 'try_acquire') def test_stack_delete_acquired_lock(self, mock_acquire, mock_load): mock_acquire.return_value = self.man.engine_id stack_name = 'service_delete_test_stack_acquired_lock' stack = tools.get_stack(stack_name, self.ctx) sid = stack.store() mock_load.return_value = stack st = stack_object.Stack.get_by_id(self.ctx, sid) self.assertIsNone(self.man.delete_stack(self.ctx, stack.identifier())) self.man.thread_group_mgr.groups[sid].wait() mock_acquire.assert_called_once_with() mock_load.assert_called_once_with(self.ctx, stack=st) @mock.patch.object(parser.Stack, 'load') @mock.patch.object(stack_lock.StackLock, 'try_acquire') def test_stack_delete_acquired_lock_stop_timers(self, mock_acquire, mock_load): mock_acquire.return_value = self.man.engine_id stack_name = 'service_delete_test_stack_stop_timers' stack = tools.get_stack(stack_name, self.ctx) sid = stack.store() mock_load.return_value = stack st = stack_object.Stack.get_by_id(self.ctx, sid) self.man.thread_group_mgr.add_timer(stack.id, 'test') self.assertEqual(1, len(self.man.thread_group_mgr.groups[sid].timers)) self.assertIsNone(self.man.delete_stack(self.ctx, stack.identifier())) self.assertEqual(0, len(self.man.thread_group_mgr.groups[sid].timers)) self.man.thread_group_mgr.groups[sid].wait() mock_acquire.assert_called_once_with() mock_load.assert_called_once_with(self.ctx, stack=st) @mock.patch.object(parser.Stack, 'load') @mock.patch.object(stack_lock.StackLock, 'try_acquire') @mock.patch.object(stack_lock.StackLock, 'acquire') @mock.patch.object(timeutils.StopWatch, 'expired') def test_stack_delete_current_engine_active_lock(self, mock_expired, mock_acquire, mock_try, mock_load): cfg.CONF.set_override('error_wait_time', 0) self.man.engine_id = service_utils.generate_engine_id() stack_name = 'service_delete_test_stack_current_active_lock' stack = tools.get_stack(stack_name, self.ctx) sid = stack.store() # Insert a fake lock into the db stack_lock_object.StackLock.create( self.ctx, stack.id, 
self.man.engine_id) st = stack_object.Stack.get_by_id(self.ctx, sid) mock_load.return_value = stack mock_try.return_value = self.man.engine_id mock_send = self.patchobject(self.man.thread_group_mgr, 'send') mock_expired.side_effect = [False, True] with mock.patch.object(self.man.thread_group_mgr, 'stop') as mock_stop: self.assertIsNone(self.man.delete_stack(self.ctx, stack.identifier())) self.man.thread_group_mgr.groups[sid].wait() mock_load.assert_called_with(self.ctx, stack=st) mock_send.assert_called_once_with(stack.id, 'cancel') mock_stop.assert_called_once_with(stack.id) self.man.thread_group_mgr.stop(sid, graceful=True) self.assertEqual(2, len(mock_load.mock_calls)) mock_try.assert_called_with() mock_acquire.assert_called_once_with(True) @mock.patch.object(parser.Stack, 'load') @mock.patch.object(stack_lock.StackLock, 'try_acquire') @mock.patch.object(service_utils, 'engine_alive') @mock.patch.object(timeutils.StopWatch, 'expired') def test_stack_delete_other_engine_active_lock_failed(self, mock_expired, mock_alive, mock_try, mock_load): cfg.CONF.set_override('error_wait_time', 0) OTHER_ENGINE = "other-engine-fake-uuid" self.man.engine_id = service_utils.generate_engine_id() self.man.listener = service.EngineListener(self.man.host, self.man.engine_id, self.man.thread_group_mgr) stack_name = 'service_delete_test_stack_other_engine_lock_fail' stack = tools.get_stack(stack_name, self.ctx) sid = stack.store() # Insert a fake lock into the db stack_lock_object.StackLock.create(self.ctx, stack.id, OTHER_ENGINE) st = stack_object.Stack.get_by_id(self.ctx, sid) mock_load.return_value = stack mock_try.return_value = OTHER_ENGINE mock_alive.return_value = True mock_expired.side_effect = [False, True] mock_call = self.patchobject(self.man, '_remote_call', return_value=False) ex = self.assertRaises(dispatcher.ExpectedException, self.man.delete_stack, self.ctx, stack.identifier()) self.assertEqual(exception.EventSendFailed, ex.exc_info[0]) mock_load.assert_called_once_with(self.ctx, stack=st) mock_try.assert_called_once_with() mock_alive.assert_called_once_with(self.ctx, OTHER_ENGINE) mock_call.assert_called_once_with(self.ctx, OTHER_ENGINE, mock.ANY, "send", message='cancel', stack_identity=mock.ANY) @mock.patch.object(parser.Stack, 'load') @mock.patch.object(stack_lock.StackLock, 'try_acquire') @mock.patch.object(service_utils, 'engine_alive') @mock.patch.object(stack_lock.StackLock, 'acquire') @mock.patch.object(timeutils.StopWatch, 'expired') def test_stack_delete_other_engine_active_lock_succeeded( self, mock_expired, mock_acquire, mock_alive, mock_try, mock_load): cfg.CONF.set_override('error_wait_time', 0) OTHER_ENGINE = "other-engine-fake-uuid" self.man.engine_id = service_utils.generate_engine_id() self.man.listener = service.EngineListener(self.man.host, self.man.engine_id, self.man.thread_group_mgr) stack_name = 'service_delete_test_stack_other_engine_lock' stack = tools.get_stack(stack_name, self.ctx) sid = stack.store() # Insert a fake lock into the db stack_lock_object.StackLock.create(self.ctx, stack.id, OTHER_ENGINE) st = stack_object.Stack.get_by_id(self.ctx, sid) mock_load.return_value = stack mock_try.return_value = OTHER_ENGINE mock_alive.return_value = True mock_expired.side_effect = [False, True] mock_call = self.patchobject(self.man, '_remote_call', return_value=None) self.assertIsNone(self.man.delete_stack(self.ctx, stack.identifier())) self.man.thread_group_mgr.stop(sid, graceful=True) self.assertEqual(2, len(mock_load.mock_calls)) mock_load.assert_called_with(self.ctx, 
stack=st) mock_try.assert_called_with() mock_alive.assert_called_with(self.ctx, OTHER_ENGINE) mock_call.assert_has_calls([ mock.call(self.ctx, OTHER_ENGINE, mock.ANY, "send", message='cancel', stack_identity=mock.ANY), mock.call(self.ctx, OTHER_ENGINE, mock.ANY, "stop_stack", stack_identity=mock.ANY) ]) mock_acquire.assert_called_once_with(True) @mock.patch.object(parser.Stack, 'load') @mock.patch.object(stack_lock.StackLock, 'try_acquire') @mock.patch.object(service_utils, 'engine_alive') @mock.patch.object(stack_lock.StackLock, 'acquire') @mock.patch.object(timeutils.StopWatch, 'expired') def test_stack_delete_other_dead_engine_active_lock( self, mock_expired, mock_acquire, mock_alive, mock_try, mock_load): cfg.CONF.set_override('error_wait_time', 0) OTHER_ENGINE = "other-engine-fake-uuid" stack_name = 'service_delete_test_stack_other_dead_engine' stack = tools.get_stack(stack_name, self.ctx) sid = stack.store() # Insert a fake lock into the db stack_lock_object.StackLock.create( self.ctx, stack.id, "other-engine-fake-uuid") st = stack_object.Stack.get_by_id(self.ctx, sid) mock_load.return_value = stack mock_try.return_value = OTHER_ENGINE mock_alive.return_value = False mock_expired.side_effect = [False, True] self.assertIsNone(self.man.delete_stack(self.ctx, stack.identifier())) self.man.thread_group_mgr.stop(sid, graceful=True) mock_load.assert_called_with(self.ctx, stack=st) mock_try.assert_called_with() mock_acquire.assert_called_once_with(True) mock_alive.assert_called_with(self.ctx, OTHER_ENGINE) heat-10.0.2/heat/tests/engine/service/test_stack_adopt.py0000666000175000017500000001736113343562340023447 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
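# NOTE: the adopt tests below feed EngineService.create_stack a pre-existing
# stack description via the 'adopt_stack_data' arg instead of creating
# resources from scratch. A minimal sketch of that payload, using only the
# fields built by _get_adopt_data_and_template below:
#
#     adopt_data = {
#         'action': 'CREATE', 'status': 'COMPLETE', 'name': ..., 'id': ...,
#         'template': template, 'environment': {'parameters': {...}},
#         'resources': {'res1': {'resource_id': ..., 'status': 'COMPLETE',
#                                'type': 'GenericResourceType', ...}},
#     }
#
# One mock.patch reminder that matters for the signatures in this file:
# stacked @mock.patch decorators inject mocks bottom-up, so the decorator
# nearest the def supplies the first argument after self.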
import mock from oslo_config import cfg from oslo_messaging.rpc import dispatcher import six from heat.common import exception from heat.engine import service from heat.engine import stack as parser from heat.objects import stack as stack_object from heat.tests import common from heat.tests import utils class StackServiceAdoptTest(common.HeatTestCase): def setUp(self): super(StackServiceAdoptTest, self).setUp() self.ctx = utils.dummy_context() self.man = service.EngineService('a-host', 'a-topic') self.man.thread_group_mgr = service.ThreadGroupManager() def _get_adopt_data_and_template(self, environment=None): template = { "heat_template_version": "2013-05-23", "parameters": {"app_dbx": {"type": "string"}}, "resources": {"res1": {"type": "GenericResourceType"}}} adopt_data = { "status": "COMPLETE", "name": "rtrove1", "environment": environment, "template": template, "action": "CREATE", "id": "8532f0d3-ea84-444e-b2bb-2543bb1496a4", "resources": {"res1": { "status": "COMPLETE", "name": "database_password", "resource_id": "yBpuUROjfGQ2gKOD", "action": "CREATE", "type": "GenericResourceType", "metadata": {}}}} return template, adopt_data def _do_adopt(self, stack_name, template, input_params, adopt_data): result = self.man.create_stack(self.ctx, stack_name, template, input_params, None, {'adopt_stack_data': str(adopt_data)}) self.man.thread_group_mgr.stop(result['stack_id'], graceful=True) return result def test_stack_adopt_with_params(self): cfg.CONF.set_override('enable_stack_adopt', True) cfg.CONF.set_override('convergence_engine', False) env = {'parameters': {"app_dbx": "test"}} template, adopt_data = self._get_adopt_data_and_template(env) result = self._do_adopt("test_adopt_with_params", template, {}, adopt_data) stack = stack_object.Stack.get_by_id(self.ctx, result['stack_id']) self.assertEqual(template, stack.raw_template.template) self.assertEqual(env['parameters'], stack.raw_template.environment['parameters']) @mock.patch.object(parser.Stack, '_converge_create_or_update') @mock.patch.object(parser.Stack, '_send_notification_and_add_event') def test_convergence_stack_adopt_with_params(self, mock_send_notif, mock_converge): cfg.CONF.set_override('enable_stack_adopt', True) cfg.CONF.set_override('convergence_engine', True) env = {'parameters': {"app_dbx": "test"}} template, adopt_data = self._get_adopt_data_and_template(env) result = self._do_adopt("test_adopt_with_params", template, {}, adopt_data) stack = stack_object.Stack.get_by_id(self.ctx, result['stack_id']) self.assertEqual(template, stack.raw_template.template) self.assertEqual(env['parameters'], stack.raw_template.environment['parameters']) self.assertTrue(mock_converge.called) def test_stack_adopt_saves_input_params(self): cfg.CONF.set_override('enable_stack_adopt', True) cfg.CONF.set_override('convergence_engine', False) env = {'parameters': {"app_dbx": "foo"}} input_params = { "parameters": {"app_dbx": "bar"} } template, adopt_data = self._get_adopt_data_and_template(env) result = self._do_adopt("test_adopt_saves_inputs", template, input_params, adopt_data) stack = stack_object.Stack.get_by_id(self.ctx, result['stack_id']) self.assertEqual(template, stack.raw_template.template) self.assertEqual(input_params['parameters'], stack.raw_template.environment['parameters']) @mock.patch.object(parser.Stack, '_converge_create_or_update') @mock.patch.object(parser.Stack, '_send_notification_and_add_event') def test_convergence_stack_adopt_saves_input_params( self, mock_send_notif, mock_converge):
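# With the convergence engine enabled, adopt should hand off to the patched
# Stack._converge_create_or_update rather than converging a real stack; the
# notification hook is patched out so no events are emitted during the test.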
cfg.CONF.set_override('enable_stack_adopt', True) cfg.CONF.set_override('convergence_engine', True) env = {'parameters': {"app_dbx": "foo"}} input_params = { "parameters": {"app_dbx": "bar"} } template, adopt_data = self._get_adopt_data_and_template(env) result = self._do_adopt("test_adopt_saves_inputs", template, input_params, adopt_data) stack = stack_object.Stack.get_by_id(self.ctx, result['stack_id']) self.assertEqual(template, stack.raw_template.template) self.assertEqual(input_params['parameters'], stack.raw_template.environment['parameters']) self.assertTrue(mock_converge.called) def test_stack_adopt_stack_state(self): cfg.CONF.set_override('enable_stack_adopt', True) cfg.CONF.set_override('convergence_engine', False) env = {'parameters': {"app_dbx": "test"}} template, adopt_data = self._get_adopt_data_and_template(env) result = self._do_adopt("test_adopt_stack_state", template, {}, adopt_data) stack = stack_object.Stack.get_by_id(self.ctx, result['stack_id']) self.assertEqual((parser.Stack.ADOPT, parser.Stack.COMPLETE), (stack.action, stack.status)) @mock.patch.object(parser.Stack, '_converge_create_or_update') @mock.patch.object(parser.Stack, '_send_notification_and_add_event') def test_convergence_stack_adopt_stack_state(self, mock_converge, mock_send_notif): cfg.CONF.set_override('enable_stack_adopt', True) cfg.CONF.set_override('convergence_engine', True) env = {'parameters': {"app_dbx": "test"}} template, adopt_data = self._get_adopt_data_and_template(env) result = self._do_adopt("test_adopt_stack_state", template, {}, adopt_data) stack = stack_object.Stack.get_by_id(self.ctx, result['stack_id']) self.assertEqual((parser.Stack.ADOPT, parser.Stack.IN_PROGRESS), (stack.action, stack.status)) self.assertTrue(mock_converge.called) def test_stack_adopt_disabled(self): # to test disable stack adopt cfg.CONF.set_override('enable_stack_adopt', False) env = {'parameters': {"app_dbx": "test"}} template, adopt_data = self._get_adopt_data_and_template(env) ex = self.assertRaises( dispatcher.ExpectedException, self.man.create_stack, self.ctx, "test_adopt_stack_disabled", template, {}, None, {'adopt_stack_data': str(adopt_data)}) self.assertEqual(exception.NotSupported, ex.exc_info[0]) self.assertIn('Stack Adopt', six.text_type(ex.exc_info[1])) heat-10.0.2/heat/tests/engine/service/test_stack_snapshot.py0000666000175000017500000002657613343562340024207 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid import mock from oslo_messaging.rpc import dispatcher import six from heat.common import exception from heat.common import template_format from heat.engine import service from heat.engine import stack from heat.objects import snapshot as snapshot_objects from heat.tests import common from heat.tests.engine import tools from heat.tests import utils class SnapshotServiceTest(common.HeatTestCase): # TODO(Qiming): Rework this test to handle OS::Nova::Server which # has a real snapshot support. 
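# NOTE: engine.stack_snapshot returns as soon as the snapshot thread is
# queued (status IN_PROGRESS); the tests below drive it to completion by
# waiting on the stack's thread group before asserting. The recurring
# pattern, using only calls that appear in this file:
#
#     snapshot = self.engine.stack_snapshot(self.ctx, stk.identifier(), 'snap1')
#     self.engine.thread_group_mgr.groups[stk.id].wait()   # let it finish
#     self.engine.show_snapshot(self.ctx, stk.identifier(), snapshot['id'])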
def setUp(self): super(SnapshotServiceTest, self).setUp() self.ctx = utils.dummy_context() self.engine = service.EngineService('a-host', 'a-topic') self.engine.thread_group_mgr = service.ThreadGroupManager() def _create_stack(self, stack_name, files=None): t = template_format.parse(tools.wp_template) stk = utils.parse_stack(t, stack_name=stack_name, files=files) stk.state_set(stk.CREATE, stk.COMPLETE, 'mock completion') return stk def test_show_snapshot_not_found(self): stk = self._create_stack('stack_snapshot_not_found') snapshot_id = str(uuid.uuid4()) ex = self.assertRaises(dispatcher.ExpectedException, self.engine.show_snapshot, self.ctx, stk.identifier(), snapshot_id) expected = 'Snapshot with id %s not found' % snapshot_id self.assertEqual(exception.NotFound, ex.exc_info[0]) self.assertIn(expected, six.text_type(ex.exc_info[1])) def test_show_snapshot_not_belong_to_stack(self): stk1 = self._create_stack('stack_snapshot_not_belong_to_stack_1') stk1._persist_state() snapshot1 = self.engine.stack_snapshot( self.ctx, stk1.identifier(), 'snap1') self.engine.thread_group_mgr.groups[stk1.id].wait() snapshot_id = snapshot1['id'] stk2 = self._create_stack('stack_snapshot_not_belong_to_stack_2') stk2._persist_state() ex = self.assertRaises(dispatcher.ExpectedException, self.engine.show_snapshot, self.ctx, stk2.identifier(), snapshot_id) expected = ('The Snapshot (%(snapshot)s) for Stack (%(stack)s) ' 'could not be found') % {'snapshot': snapshot_id, 'stack': stk2.name} self.assertEqual(exception.SnapshotNotFound, ex.exc_info[0]) self.assertIn(expected, six.text_type(ex.exc_info[1])) @mock.patch.object(stack.Stack, 'load') def test_create_snapshot(self, mock_load): files = {'a_file': 'the contents'} stk = self._create_stack('stack_snapshot_create', files=files) mock_load.return_value = stk snapshot = self.engine.stack_snapshot( self.ctx, stk.identifier(), 'snap1') self.assertIsNotNone(snapshot['id']) self.assertIsNotNone(snapshot['creation_time']) self.assertEqual('snap1', snapshot['name']) self.assertEqual("IN_PROGRESS", snapshot['status']) self.engine.thread_group_mgr.groups[stk.id].wait() snapshot = self.engine.show_snapshot( self.ctx, stk.identifier(), snapshot['id']) self.assertEqual("COMPLETE", snapshot['status']) self.assertEqual("SNAPSHOT", snapshot['data']['action']) self.assertEqual("COMPLETE", snapshot['data']['status']) self.assertEqual(files, snapshot['data']['files']) self.assertEqual(stk.id, snapshot['data']['id']) self.assertIsNotNone(stk.updated_time) self.assertIsNotNone(snapshot['creation_time']) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) @mock.patch.object(stack.Stack, 'load') def test_create_snapshot_action_in_progress(self, mock_load): stack_name = 'stack_snapshot_action_in_progress' stk = self._create_stack(stack_name) mock_load.return_value = stk stk.state_set(stk.UPDATE, stk.IN_PROGRESS, 'test_override') ex = self.assertRaises(dispatcher.ExpectedException, self.engine.stack_snapshot, self.ctx, stk.identifier(), 'snap_none') self.assertEqual(exception.ActionInProgress, ex.exc_info[0]) msg = ("Stack %(stack)s already has an action (%(action)s) " "in progress.") % {'stack': stack_name, 'action': stk.action} self.assertEqual(msg, six.text_type(ex.exc_info[1])) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) @mock.patch.object(stack.Stack, 'load') def test_delete_snapshot_not_found(self, mock_load): stk = self._create_stack('stack_snapshot_delete_not_found') mock_load.return_value = stk snapshot_id = str(uuid.uuid4()) ex = self.assertRaises(dispatcher.ExpectedException, self.engine.delete_snapshot, self.ctx, stk.identifier(), snapshot_id) self.assertEqual(exception.NotFound, ex.exc_info[0]) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) @mock.patch.object(stack.Stack, 'load') def test_delete_snapshot_not_belong_to_stack(self, mock_load): stk1 = self._create_stack('stack_snapshot_delete_not_belong_1') mock_load.return_value = stk1 snapshot1 = self.engine.stack_snapshot( self.ctx, stk1.identifier(), 'snap1') self.engine.thread_group_mgr.groups[stk1.id].wait() snapshot_id = snapshot1['id'] mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) mock_load.reset_mock() stk2 = self._create_stack('stack_snapshot_delete_not_belong_2') mock_load.return_value = stk2 ex = self.assertRaises(dispatcher.ExpectedException, self.engine.delete_snapshot, self.ctx, stk2.identifier(), snapshot_id) expected = ('The Snapshot (%(snapshot)s) for Stack (%(stack)s) ' 'could not be found') % {'snapshot': snapshot_id, 'stack': stk2.name} self.assertEqual(exception.SnapshotNotFound, ex.exc_info[0]) self.assertIn(expected, six.text_type(ex.exc_info[1])) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) mock_load.reset_mock() @mock.patch.object(stack.Stack, 'load') def test_delete_snapshot_in_progress(self, mock_load): # cannot delete a snapshot that is still being taken stk = self._create_stack('test_delete_snapshot_in_progress') mock_load.return_value = stk snapshot = mock.Mock() snapshot.id = str(uuid.uuid4()) snapshot.status = 'IN_PROGRESS' self.patchobject(snapshot_objects.Snapshot, 'get_snapshot_by_stack').return_value = snapshot ex = self.assertRaises(dispatcher.ExpectedException, self.engine.delete_snapshot, self.ctx, stk.identifier(), snapshot.id) msg = 'Deleting in-progress snapshot is not supported' self.assertIn(msg, six.text_type(ex.exc_info[1])) self.assertEqual(exception.NotSupported, ex.exc_info[0]) @mock.patch.object(stack.Stack, 'load') def test_delete_snapshot(self, mock_load): stk = self._create_stack('stack_snapshot_delete_normal') mock_load.return_value = stk snapshot = self.engine.stack_snapshot( self.ctx, stk.identifier(), 'snap1') self.engine.thread_group_mgr.groups[stk.id].wait() snapshot_id = snapshot['id'] self.engine.delete_snapshot(self.ctx, stk.identifier(), snapshot_id) self.engine.thread_group_mgr.groups[stk.id].wait() ex = self.assertRaises(dispatcher.ExpectedException, self.engine.show_snapshot, self.ctx, stk.identifier(), snapshot_id) self.assertEqual(exception.NotFound, ex.exc_info[0]) self.assertEqual(2, mock_load.call_count) @mock.patch.object(stack.Stack, 'load') def test_list_snapshots(self, mock_load): stk = self._create_stack('stack_snapshot_list') mock_load.return_value = stk snapshot = self.engine.stack_snapshot( self.ctx, stk.identifier(), 'snap1') self.assertIsNotNone(snapshot['id']) self.assertEqual("IN_PROGRESS", snapshot['status']) self.engine.thread_group_mgr.groups[stk.id].wait() snapshots = self.engine.stack_list_snapshots( self.ctx, stk.identifier()) expected = { "id": snapshot["id"], "name": "snap1", "status": "COMPLETE", "status_reason": "Stack SNAPSHOT completed successfully", "data": stk.prepare_abandon(), "creation_time": snapshot['creation_time']} self.assertEqual([expected], snapshots) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) @mock.patch.object(stack.Stack, 'load') def test_restore_snapshot(self, mock_load): stk = self._create_stack('stack_snapshot_restore_normal') mock_load.return_value = stk snapshot = self.engine.stack_snapshot( self.ctx,
stk.identifier(), 'snap1') self.engine.thread_group_mgr.groups[stk.id].wait() snapshot_id = snapshot['id'] self.engine.stack_restore(self.ctx, stk.identifier(), snapshot_id) self.engine.thread_group_mgr.groups[stk.id].wait() self.assertEqual((stk.RESTORE, stk.COMPLETE), stk.state) self.assertEqual(2, mock_load.call_count) @mock.patch.object(stack.Stack, 'load') def test_restore_snapshot_other_stack(self, mock_load): stk1 = self._create_stack('stack_snapshot_restore_other_stack_1') mock_load.return_value = stk1 snapshot1 = self.engine.stack_snapshot( self.ctx, stk1.identifier(), 'snap1') self.engine.thread_group_mgr.groups[stk1.id].wait() snapshot_id = snapshot1['id'] mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) mock_load.reset_mock() stk2 = self._create_stack('stack_snapshot_restore_other_stack_2') mock_load.return_value = stk2 ex = self.assertRaises(dispatcher.ExpectedException, self.engine.stack_restore, self.ctx, stk2.identifier(), snapshot_id) expected = ('The Snapshot (%(snapshot)s) for Stack (%(stack)s) ' 'could not be found') % {'snapshot': snapshot_id, 'stack': stk2.name} self.assertEqual(exception.SnapshotNotFound, ex.exc_info[0]) self.assertIn(expected, six.text_type(ex.exc_info[1])) mock_load.assert_called_once_with(self.ctx, stack=mock.ANY) heat-10.0.2/heat/tests/engine/service/test_software_config.py0000666000175000017500000012601113343562351024325 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
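# NOTE: a rough editorial sketch of the two entities exercised below, inferred
# from the service calls in this file rather than an authoritative schema: a
# software config is an immutable bundle (group, name, config, inputs,
# outputs, options), and a software deployment binds one config to one server
# and tracks its action/status:
#
#     config = engine.create_software_config(ctx, group, name, config,
#                                            inputs, outputs, options)
#     deployment = engine.create_software_deployment(
#         ctx, server_id, config['id'], input_values,
#         action, status, status_reason, stack_user_project_id)
#
# Beware that _create_software_deployment's server_id=str(uuid.uuid4())
# default is evaluated once at definition time, so every call that omits
# server_id reuses the same id; tests that care pass server_id explicitly.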
import datetime import uuid import mock from oslo_messaging.rpc import dispatcher from oslo_serialization import jsonutils as json from oslo_utils import timeutils import six from heat.common import crypt from heat.common import exception from heat.common import template_format from heat.db.sqlalchemy import api as db_api from heat.engine.clients.os import swift from heat.engine.clients.os import zaqar from heat.engine import service from heat.engine import service_software_config from heat.engine import software_config_io as swc_io from heat.objects import resource as resource_objects from heat.objects import software_config as software_config_object from heat.objects import software_deployment as software_deployment_object from heat.tests import common from heat.tests.engine import tools from heat.tests import utils software_config_inputs = ''' heat_template_version: 2013-05-23 description: Validate software config input/output types resources: InputOutputTestConfig: type: OS::Heat::SoftwareConfig properties: group: puppet inputs: - name: boolean_input type: Boolean - name: json_input type: Json - name: number_input type: Number - name: string_input type: String - name: comma_delimited_list_input type: CommaDelimitedList outputs: - name: boolean_output type: Boolean - name: json_output type: Json - name: number_output type: Number - name: string_output type: String - name: comma_delimited_list_output type: CommaDelimitedList ''' class SoftwareConfigServiceTest(common.HeatTestCase): def setUp(self): super(SoftwareConfigServiceTest, self).setUp() self.ctx = utils.dummy_context() self.engine = service.EngineService('a-host', 'a-topic') def _create_software_config( self, group='Heat::Shell', name='config_mysql', config=None, inputs=None, outputs=None, options=None, context=None): cntx = context if context else self.ctx inputs = inputs or [] outputs = outputs or [] options = options or {} return self.engine.create_software_config( cntx, group, name, config, inputs, outputs, options) def _create_dummy_config_object(self): obj_config = software_config_object.SoftwareConfig() obj_config['id'] = str(uuid.uuid4()) obj_config['name'] = 'myconfig' obj_config['group'] = 'mygroup' obj_config['config'] = {'config': 'hello world', 'inputs': [], 'outputs': [], 'options': {}} obj_config['created_at'] = timeutils.utcnow() return obj_config def assert_status_reason(self, expected, actual): expected_dict = dict((i.split(' : ') for i in expected.split(', '))) actual_dict = dict((i.split(' : ') for i in actual.split(', '))) self.assertEqual(expected_dict, actual_dict) def test_list_software_configs(self): config = self._create_software_config() self.assertIsNotNone(config) config_id = config['id'] configs = self.engine.list_software_configs(self.ctx) self.assertIsNotNone(configs) config_ids = [x['id'] for x in configs] self.assertIn(config_id, config_ids) admin_cntx = utils.dummy_context(is_admin=True) admin_config = self._create_software_config(context=admin_cntx) admin_config_id = admin_config['id'] configs = self.engine.list_software_configs(admin_cntx) self.assertIsNotNone(configs) config_ids = [x['id'] for x in configs] project_ids = [x['project'] for x in configs] self.assertEqual(2, len(project_ids)) self.assertEqual(2, len(config_ids)) self.assertIn(config_id, config_ids) self.assertIn(admin_config_id, config_ids) def test_show_software_config(self): config_id = str(uuid.uuid4()) ex = self.assertRaises(dispatcher.ExpectedException, self.engine.show_software_config, self.ctx, config_id) 
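# Engine RPC methods surface errors as dispatcher.ExpectedException wrapping
# the original sys.exc_info(), so ex.exc_info[0] below is the underlying
# exception class (here exception.NotFound).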
self.assertEqual(exception.NotFound, ex.exc_info[0]) config = self._create_software_config() res = self.engine.show_software_config(self.ctx, config['id']) self.assertEqual(config, res) def test_create_software_config_new_ids(self): config1 = self._create_software_config() self.assertIsNotNone(config1) config2 = self._create_software_config() self.assertNotEqual(config1['id'], config2['id']) def test_create_software_config(self): kwargs = { 'group': 'Heat::Chef', 'name': 'config_heat', 'config': '...', 'inputs': [{'name': 'mode'}], 'outputs': [{'name': 'endpoint'}], 'options': {} } config = self._create_software_config(**kwargs) config_id = config['id'] config = self.engine.show_software_config(self.ctx, config_id) self.assertEqual(kwargs['group'], config['group']) self.assertEqual(kwargs['name'], config['name']) self.assertEqual(kwargs['config'], config['config']) self.assertEqual([{'name': 'mode', 'type': 'String'}], config['inputs']) self.assertEqual([{'name': 'endpoint', 'type': 'String', 'error_output': False}], config['outputs']) self.assertEqual(kwargs['options'], config['options']) def test_delete_software_config(self): config = self._create_software_config() self.assertIsNotNone(config) config_id = config['id'] self.engine.delete_software_config(self.ctx, config_id) ex = self.assertRaises(dispatcher.ExpectedException, self.engine.show_software_config, self.ctx, config_id) self.assertEqual(exception.NotFound, ex.exc_info[0]) def test_boolean_inputs_valid(self): stack_name = 'test_boolean_inputs_valid' t = template_format.parse(software_config_inputs) stack = utils.parse_stack(t, stack_name=stack_name) try: stack.validate() except exception.StackValidationFailed as exc: self.fail("Validation should have passed: %s" % six.text_type(exc)) def _create_software_deployment(self, config_id=None, input_values=None, action='INIT', status='COMPLETE', status_reason='', config_group=None, server_id=str(uuid.uuid4()), config_name=None, stack_user_project_id=None): input_values = input_values or {} if config_id is None: config = self._create_software_config(group=config_group, name=config_name) config_id = config['id'] return self.engine.create_software_deployment( self.ctx, server_id, config_id, input_values, action, status, status_reason, stack_user_project_id) def test_list_software_deployments(self): stack_name = 'test_list_software_deployments' t = template_format.parse(tools.wp_template) stack = utils.parse_stack(t, stack_name=stack_name) tools.setup_mocks(self.m, stack) self.m.ReplayAll() stack.store() stack.create() server = stack['WebServer'] server_id = server.resource_id deployment = self._create_software_deployment( server_id=server_id) deployment_id = deployment['id'] self.assertIsNotNone(deployment) deployments = self.engine.list_software_deployments( self.ctx, server_id=None) self.assertIsNotNone(deployments) deployment_ids = [x['id'] for x in deployments] self.assertIn(deployment_id, deployment_ids) self.assertIn(deployment, deployments) deployments = self.engine.list_software_deployments( self.ctx, server_id=str(uuid.uuid4())) self.assertEqual([], deployments) deployments = self.engine.list_software_deployments( self.ctx, server_id=server.resource_id) self.assertEqual([deployment], deployments) rsrcs = resource_objects.Resource.get_all_by_physical_resource_id( self.ctx, server_id) self.assertEqual(deployment['config_id'], rsrcs[0].rsrc_metadata.get('deployments')[0]['id']) def test_metadata_software_deployments(self): stack_name = 'test_metadata_software_deployments' t = 
template_format.parse(tools.wp_template) stack = utils.parse_stack(t, stack_name=stack_name) tools.setup_mocks(self.m, stack) self.m.ReplayAll() stack.store() stack.create() server = stack['WebServer'] server_id = server.resource_id stack_user_project_id = str(uuid.uuid4()) d1 = self._create_software_deployment( config_group='mygroup', server_id=server_id, config_name='02_second', stack_user_project_id=stack_user_project_id) d2 = self._create_software_deployment( config_group='mygroup', server_id=server_id, config_name='01_first', stack_user_project_id=stack_user_project_id) d3 = self._create_software_deployment( config_group='myothergroup', server_id=server_id, config_name='03_third', stack_user_project_id=stack_user_project_id) metadata = self.engine.metadata_software_deployments( self.ctx, server_id=server_id) self.assertEqual(3, len(metadata)) self.assertEqual('mygroup', metadata[1]['group']) self.assertEqual('mygroup', metadata[0]['group']) self.assertEqual('myothergroup', metadata[2]['group']) self.assertEqual(d1['config_id'], metadata[1]['id']) self.assertEqual(d2['config_id'], metadata[0]['id']) self.assertEqual(d3['config_id'], metadata[2]['id']) self.assertEqual('01_first', metadata[0]['name']) self.assertEqual('02_second', metadata[1]['name']) self.assertEqual('03_third', metadata[2]['name']) # assert that metadata via metadata_software_deployments matches # metadata via server resource rsrcs = resource_objects.Resource.get_all_by_physical_resource_id( self.ctx, server_id) self.assertEqual(metadata, rsrcs[0].rsrc_metadata.get('deployments')) deployments = self.engine.metadata_software_deployments( self.ctx, server_id=str(uuid.uuid4())) self.assertEqual([], deployments) # assert get results when the context tenant_id matches # the stored stack_user_project_id ctx = utils.dummy_context(tenant_id=stack_user_project_id) metadata = self.engine.metadata_software_deployments( ctx, server_id=server_id) self.assertEqual(3, len(metadata)) # assert get no results when the context tenant_id is unknown ctx = utils.dummy_context(tenant_id=str(uuid.uuid4())) metadata = self.engine.metadata_software_deployments( ctx, server_id=server_id) self.assertEqual(0, len(metadata)) # assert None config is filtered out obj_conf = self._create_dummy_config_object() side_effect = [obj_conf, obj_conf, None] self.patchobject(software_config_object.SoftwareConfig, '_from_db_object', side_effect=side_effect) metadata = self.engine.metadata_software_deployments( self.ctx, server_id=server_id) self.assertEqual(2, len(metadata)) def test_show_software_deployment(self): deployment_id = str(uuid.uuid4()) ex = self.assertRaises(dispatcher.ExpectedException, self.engine.show_software_deployment, self.ctx, deployment_id) self.assertEqual(exception.NotFound, ex.exc_info[0]) deployment = self._create_software_deployment() self.assertIsNotNone(deployment) deployment_id = deployment['id'] self.assertEqual( deployment, self.engine.show_software_deployment(self.ctx, deployment_id)) def test_check_software_deployment(self): deployment_id = str(uuid.uuid4()) ex = self.assertRaises(dispatcher.ExpectedException, self.engine.check_software_deployment, self.ctx, deployment_id, 10) self.assertEqual(exception.NotFound, ex.exc_info[0]) deployment = self._create_software_deployment() self.assertIsNotNone(deployment) deployment_id = deployment['id'] self.assertEqual( deployment, self.engine.check_software_deployment(self.ctx, deployment_id, 10)) @mock.patch.object(service_software_config.SoftwareConfigService, 
'_push_metadata_software_deployments') def test_signal_software_deployment(self, pmsd): self.assertRaises(ValueError, self.engine.signal_software_deployment, self.ctx, None, {}, None) deployment_id = str(uuid.uuid4()) ex = self.assertRaises(dispatcher.ExpectedException, self.engine.signal_software_deployment, self.ctx, deployment_id, {}, None) self.assertEqual(exception.NotFound, ex.exc_info[0]) deployment = self._create_software_deployment() deployment_id = deployment['id'] # the signal is ignored unless the deployment is IN_PROGRESS self.assertIsNone(self.engine.signal_software_deployment( self.ctx, deployment_id, {}, None)) # simple signal, no data deployment = self._create_software_deployment(action='INIT', status='IN_PROGRESS') deployment_id = deployment['id'] res = self.engine.signal_software_deployment( self.ctx, deployment_id, {}, None) self.assertEqual('deployment %s succeeded' % deployment_id, res) sd = software_deployment_object.SoftwareDeployment.get_by_id( self.ctx, deployment_id) self.assertEqual('COMPLETE', sd.status) self.assertEqual('Outputs received', sd.status_reason) self.assertEqual({ 'deploy_status_code': None, 'deploy_stderr': None, 'deploy_stdout': None }, sd.output_values) self.assertIsNotNone(sd.updated_at) # simple signal, some data config = self._create_software_config(outputs=[{'name': 'foo'}]) deployment = self._create_software_deployment( config_id=config['id'], action='INIT', status='IN_PROGRESS') deployment_id = deployment['id'] result = self.engine.signal_software_deployment( self.ctx, deployment_id, {'foo': 'bar', 'deploy_status_code': 0}, None) self.assertEqual('deployment %s succeeded' % deployment_id, result) sd = software_deployment_object.SoftwareDeployment.get_by_id( self.ctx, deployment_id) self.assertEqual('COMPLETE', sd.status) self.assertEqual('Outputs received', sd.status_reason) self.assertEqual({ 'deploy_status_code': 0, 'foo': 'bar', 'deploy_stderr': None, 'deploy_stdout': None }, sd.output_values) self.assertIsNotNone(sd.updated_at) # failed signal on deploy_status_code config = self._create_software_config(outputs=[{'name': 'foo'}]) deployment = self._create_software_deployment( config_id=config['id'], action='INIT', status='IN_PROGRESS') deployment_id = deployment['id'] result = self.engine.signal_software_deployment( self.ctx, deployment_id, { 'foo': 'bar', 'deploy_status_code': -1, 'deploy_stderr': 'Its gone Pete Tong' }, None) self.assertEqual('deployment %s failed (-1)' % deployment_id, result) sd = software_deployment_object.SoftwareDeployment.get_by_id( self.ctx, deployment_id) self.assertEqual('FAILED', sd.status) self.assert_status_reason( ('deploy_status_code : Deployment exited with non-zero ' 'status code: -1'), sd.status_reason) self.assertEqual({ 'deploy_status_code': -1, 'foo': 'bar', 'deploy_stderr': 'Its gone Pete Tong', 'deploy_stdout': None }, sd.output_values) self.assertIsNotNone(sd.updated_at) # failed signal on error_output foo config = self._create_software_config(outputs=[ {'name': 'foo', 'error_output': True}]) deployment = self._create_software_deployment( config_id=config['id'], action='INIT', status='IN_PROGRESS') deployment_id = deployment['id'] result = self.engine.signal_software_deployment( self.ctx, deployment_id, { 'foo': 'bar', 'deploy_status_code': -1, 'deploy_stderr': 'Its gone Pete Tong' }, None) self.assertEqual('deployment %s failed' % deployment_id, result) sd = software_deployment_object.SoftwareDeployment.get_by_id( self.ctx, deployment_id) self.assertEqual('FAILED', sd.status) self.assert_status_reason(
('foo : bar, deploy_status_code : Deployment exited with ' 'non-zero status code: -1'), sd.status_reason) self.assertEqual({ 'deploy_status_code': -1, 'foo': 'bar', 'deploy_stderr': 'Its gone Pete Tong', 'deploy_stdout': None }, sd.output_values) self.assertIsNotNone(sd.updated_at) def test_create_software_deployment(self): kwargs = { 'group': 'Heat::Chef', 'name': 'config_heat', 'config': '...', 'inputs': [{'name': 'mode'}], 'outputs': [{'name': 'endpoint'}], 'options': {} } config = self._create_software_config(**kwargs) config_id = config['id'] kwargs = { 'config_id': config_id, 'input_values': {'mode': 'standalone'}, 'action': 'INIT', 'status': 'COMPLETE', 'status_reason': '' } deployment = self._create_software_deployment(**kwargs) deployment_id = deployment['id'] deployment = self.engine.show_software_deployment( self.ctx, deployment_id) self.assertEqual(deployment_id, deployment['id']) self.assertEqual(kwargs['input_values'], deployment['input_values']) @mock.patch.object(service_software_config.SoftwareConfigService, '_refresh_swift_software_deployment') def test_show_software_deployment_refresh( self, _refresh_swift_software_deployment): temp_url = ('http://192.0.2.1/v1/AUTH_a/b/c' '?temp_url_sig=ctemp_url_expires=1234') config = self._create_software_config(inputs=[ { 'name': 'deploy_signal_transport', 'type': 'String', 'value': 'TEMP_URL_SIGNAL' }, { 'name': 'deploy_signal_id', 'type': 'String', 'value': temp_url } ]) deployment = self._create_software_deployment( status='IN_PROGRESS', config_id=config['id']) deployment_id = deployment['id'] sd = software_deployment_object.SoftwareDeployment.get_by_id( self.ctx, deployment_id) _refresh_swift_software_deployment.return_value = sd self.assertEqual( deployment, self.engine.show_software_deployment(self.ctx, deployment_id)) self.assertEqual( (self.ctx, sd, temp_url), _refresh_swift_software_deployment.call_args[0]) def test_update_software_deployment_new_config(self): server_id = str(uuid.uuid4()) mock_push = self.patchobject(self.engine.software_config, '_push_metadata_software_deployments') deployment = self._create_software_deployment(server_id=server_id) self.assertIsNotNone(deployment) deployment_id = deployment['id'] deployment_action = deployment['action'] self.assertEqual('INIT', deployment_action) config_id = deployment['config_id'] self.assertIsNotNone(config_id) updated = self.engine.update_software_deployment( self.ctx, deployment_id=deployment_id, config_id=config_id, input_values={}, output_values={}, action='DEPLOY', status='WAITING', status_reason='', updated_at=None) self.assertIsNotNone(updated) self.assertEqual(config_id, updated['config_id']) self.assertEqual('DEPLOY', updated['action']) self.assertEqual('WAITING', updated['status']) self.assertEqual(2, mock_push.call_count) def test_update_software_deployment_status(self): server_id = str(uuid.uuid4()) mock_push = self.patchobject(self.engine.software_config, '_push_metadata_software_deployments') deployment = self._create_software_deployment(server_id=server_id) self.assertIsNotNone(deployment) deployment_id = deployment['id'] deployment_action = deployment['action'] self.assertEqual('INIT', deployment_action) updated = self.engine.update_software_deployment( self.ctx, deployment_id=deployment_id, config_id=None, input_values=None, output_values={}, action='DEPLOY', status='WAITING', status_reason='', updated_at=None) self.assertIsNotNone(updated) self.assertEqual('DEPLOY', updated['action']) self.assertEqual('WAITING', updated['status']) 
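# A status-only update (config_id=None) should still push refreshed
# deployments metadata to the server exactly once, which the following
# assertion pins down.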
mock_push.assert_called_once_with(self.ctx, server_id, None) def test_update_software_deployment_fields(self): deployment = self._create_software_deployment() deployment_id = deployment['id'] config_id = deployment['config_id'] def check_software_deployment_updated(**kwargs): values = { 'config_id': None, 'input_values': {}, 'output_values': {}, 'action': {}, 'status': 'WAITING', 'status_reason': '' } values.update(kwargs) updated = self.engine.update_software_deployment( self.ctx, deployment_id, updated_at=None, **values) for key, value in six.iteritems(kwargs): self.assertEqual(value, updated[key]) check_software_deployment_updated(config_id=config_id) check_software_deployment_updated(input_values={'foo': 'fooooo'}) check_software_deployment_updated(output_values={'bar': 'baaaaa'}) check_software_deployment_updated(action='DEPLOY') check_software_deployment_updated(status='COMPLETE') check_software_deployment_updated(status_reason='Done!') @mock.patch.object(service_software_config.SoftwareConfigService, '_push_metadata_software_deployments') def test_delete_software_deployment(self, pmsd): deployment_id = str(uuid.uuid4()) ex = self.assertRaises(dispatcher.ExpectedException, self.engine.delete_software_deployment, self.ctx, deployment_id) self.assertEqual(exception.NotFound, ex.exc_info[0]) deployment = self._create_software_deployment() self.assertIsNotNone(deployment) deployment_id = deployment['id'] deployments = self.engine.list_software_deployments( self.ctx, server_id=None) deployment_ids = [x['id'] for x in deployments] self.assertIn(deployment_id, deployment_ids) self.engine.delete_software_deployment(self.ctx, deployment_id) # assert one call for the create, and one for the delete pmsd.assert_has_calls([ mock.call(self.ctx, deployment['server_id'], None), mock.call(self.ctx, deployment['server_id'], None) ]) deployments = self.engine.list_software_deployments( self.ctx, server_id=None) deployment_ids = [x['id'] for x in deployments] self.assertNotIn(deployment_id, deployment_ids) @mock.patch.object(service_software_config.SoftwareConfigService, 'metadata_software_deployments') @mock.patch.object(db_api, 'resource_update') @mock.patch.object(db_api, 'resource_get_by_physical_resource_id') @mock.patch.object(service_software_config.requests, 'put') def test_push_metadata_software_deployments( self, put, res_get, res_upd, md_sd): rs = mock.Mock() rs.rsrc_metadata = {'original': 'metadata'} rs.id = '1234' rs.atomic_key = 1 rs.data = [] res_get.return_value = rs res_upd.return_value = 1 deployments = {'deploy': 'this'} md_sd.return_value = deployments result_metadata = { 'original': 'metadata', 'deployments': {'deploy': 'this'} } with mock.patch.object(self.ctx.session, 'refresh'): self.engine.software_config._push_metadata_software_deployments( self.ctx, '1234', None) res_upd.assert_called_once_with( self.ctx, '1234', {'rsrc_metadata': result_metadata}, 1) put.assert_not_called() @mock.patch.object(service_software_config.SoftwareConfigService, 'metadata_software_deployments') @mock.patch.object(db_api, 'resource_update') @mock.patch.object(db_api, 'resource_get_by_physical_resource_id') @mock.patch.object(service_software_config.requests, 'put') def test_push_metadata_software_deployments_retry( self, put, res_get, res_upd, md_sd): rs = mock.Mock() rs.rsrc_metadata = {'original': 'metadata'} rs.id = '1234' rs.atomic_key = 1 rs.data = [] res_get.return_value = rs # a zero row-count means another transaction updated the resource first res_upd.return_value = 0 deployments = {'deploy': 'this'} md_sd.return_value = deployments with mock.patch.object(self.ctx.session, 'refresh'): f = self.engine.software_config._push_metadata_software_deployments self.patchobject(f.retry, 'sleep') self.assertRaises( exception.ConcurrentTransaction, f, self.ctx, '1234', None) # retries ten times, then the final failure self.assertEqual(11, res_upd.call_count) put.assert_not_called() @mock.patch.object(service_software_config.SoftwareConfigService, 'metadata_software_deployments') @mock.patch.object(db_api, 'resource_update') @mock.patch.object(db_api, 'resource_get_by_physical_resource_id') @mock.patch.object(service_software_config.requests, 'put') def test_push_metadata_software_deployments_temp_url( self, put, res_get, res_upd, md_sd): rs = mock.Mock() rs.rsrc_metadata = {'original': 'metadata'} rs.id = '1234' rs.atomic_key = 1 rd = mock.Mock() rd.key = 'metadata_put_url' rd.value = 'http://192.168.2.2/foo/bar' rs.data = [rd] res_get.return_value = rs res_upd.return_value = 1 deployments = {'deploy': 'this'} md_sd.return_value = deployments result_metadata = { 'original': 'metadata', 'deployments': {'deploy': 'this'} } with mock.patch.object(self.ctx.session, 'refresh'): self.engine.software_config._push_metadata_software_deployments( self.ctx, '1234', None) res_upd.assert_has_calls([ mock.call(self.ctx, '1234', {'rsrc_metadata': result_metadata}, 1), mock.call(self.ctx, '1234', {'rsrc_metadata': result_metadata}, 2), ]) put.assert_called_once_with( 'http://192.168.2.2/foo/bar', json.dumps(result_metadata)) @mock.patch.object(service_software_config.SoftwareConfigService, 'metadata_software_deployments') @mock.patch.object(db_api, 'resource_update') @mock.patch.object(db_api, 'resource_get_by_physical_resource_id') @mock.patch.object(zaqar.ZaqarClientPlugin, 'create_for_tenant') def test_push_metadata_software_deployments_queue( self, plugin, res_get, res_upd, md_sd): rs = mock.Mock() rs.rsrc_metadata = {'original': 'metadata'} rs.id = '1234' rs.atomic_key = 1 rd = mock.Mock() rd.key = 'metadata_queue_id' rd.value = '6789' rs.data = [rd] res_get.return_value = rs res_upd.return_value = 1 queue = mock.Mock() zaqar_client = mock.Mock() plugin.return_value = zaqar_client zaqar_client.queue.return_value = queue deployments = {'deploy': 'this'} md_sd.return_value = deployments result_metadata = { 'original': 'metadata', 'deployments': {'deploy': 'this'} } with mock.patch.object(self.ctx.session, 'refresh'): self.engine.software_config._push_metadata_software_deployments( self.ctx, '1234', 'project1') res_upd.assert_called_once_with( self.ctx, '1234', {'rsrc_metadata': result_metadata}, 1) plugin.assert_called_once_with('project1', mock.ANY) zaqar_client.queue.assert_called_once_with('6789') queue.post.assert_called_once_with( {'body': result_metadata, 'ttl': 3600}) @mock.patch.object(service_software_config.SoftwareConfigService, 'signal_software_deployment') @mock.patch.object(swift.SwiftClientPlugin, '_create') def test_refresh_swift_software_deployment(self, scc, ssd): temp_url = ('http://192.0.2.1/v1/AUTH_a/b/c' '?temp_url_sig=ctemp_url_expires=1234') container = 'b' object_name = 'c' config = self._create_software_config(inputs=[ { 'name': 'deploy_signal_transport', 'type': 'String', 'value': 'TEMP_URL_SIGNAL' }, { 'name': 'deploy_signal_id', 'type': 'String', 'value': temp_url } ]) timeutils.set_time_override( datetime.datetime(2013, 1, 23, 22, 48, 5, 0)) self.addCleanup(timeutils.clear_time_override) now = timeutils.utcnow() then = now - datetime.timedelta(0, 60) last_modified_1 = 'Wed,
    @mock.patch.object(service_software_config.SoftwareConfigService,
                       'signal_software_deployment')
    @mock.patch.object(swift.SwiftClientPlugin, '_create')
    def test_refresh_swift_software_deployment(self, scc, ssd):
        temp_url = ('http://192.0.2.1/v1/AUTH_a/b/c'
                    '?temp_url_sig=ctemp_url_expires=1234')
        container = 'b'
        object_name = 'c'

        config = self._create_software_config(inputs=[
            {
                'name': 'deploy_signal_transport',
                'type': 'String',
                'value': 'TEMP_URL_SIGNAL'
            }, {
                'name': 'deploy_signal_id',
                'type': 'String',
                'value': temp_url
            }
        ])

        timeutils.set_time_override(
            datetime.datetime(2013, 1, 23, 22, 48, 5, 0))
        self.addCleanup(timeutils.clear_time_override)

        now = timeutils.utcnow()
        then = now - datetime.timedelta(0, 60)

        last_modified_1 = 'Wed, 23 Jan 2013 22:47:05 GMT'
        last_modified_2 = 'Wed, 23 Jan 2013 22:48:05 GMT'

        sc = mock.MagicMock()
        headers = {
            'last-modified': last_modified_1
        }
        sc.head_object.return_value = headers
        sc.get_object.return_value = (headers, '{"foo": "bar"}')
        scc.return_value = sc

        deployment = self._create_software_deployment(
            status='IN_PROGRESS', config_id=config['id'])

        deployment_id = six.text_type(deployment['id'])
        sd = software_deployment_object.SoftwareDeployment.get_by_id(
            self.ctx, deployment_id)

        # poll with missing object
        swift_exc = swift.SwiftClientPlugin.exceptions_module
        sc.head_object.side_effect = swift_exc.ClientException(
            'Not found', http_status=404)
        self.assertEqual(
            sd,
            self.engine.software_config._refresh_swift_software_deployment(
                self.ctx, sd, temp_url))
        sc.head_object.assert_called_once_with(container, object_name)
        # no call to get_object or signal_last_modified
        self.assertEqual([], sc.get_object.mock_calls)
        self.assertEqual([], ssd.mock_calls)

        # poll with other error
        sc.head_object.side_effect = swift_exc.ClientException(
            'Ouch', http_status=409)
        self.assertRaises(
            swift_exc.ClientException,
            self.engine.software_config._refresh_swift_software_deployment,
            self.ctx,
            sd,
            temp_url)
        # no call to get_object or signal_last_modified
        self.assertEqual([], sc.get_object.mock_calls)
        self.assertEqual([], ssd.mock_calls)
        sc.head_object.side_effect = None

        # first poll populates data signal_last_modified
        self.engine.software_config._refresh_swift_software_deployment(
            self.ctx, sd, temp_url)
        sc.head_object.assert_called_with(container, object_name)
        sc.get_object.assert_called_once_with(container, object_name)
        # signal_software_deployment called with signal
        ssd.assert_called_once_with(self.ctx, deployment_id,
                                    {u"foo": u"bar"}, then.isoformat())

        # second poll updated_at populated with first poll last-modified
        software_deployment_object.SoftwareDeployment.update_by_id(
            self.ctx, deployment_id, {'updated_at': then})
        sd = software_deployment_object.SoftwareDeployment.get_by_id(
            self.ctx, deployment_id)
        self.assertEqual(then, sd.updated_at)
        self.engine.software_config._refresh_swift_software_deployment(
            self.ctx, sd, temp_url)
        sc.get_object.assert_called_once_with(container, object_name)
        # signal_software_deployment has not been called again
        ssd.assert_called_once_with(self.ctx, deployment_id,
                                    {"foo": "bar"}, then.isoformat())

        # third poll last-modified changed, new signal
        headers['last-modified'] = last_modified_2
        sc.head_object.return_value = headers
        sc.get_object.return_value = (headers, '{"bar": "baz"}')
        self.engine.software_config._refresh_swift_software_deployment(
            self.ctx, sd, temp_url)

        # two calls to signal_software_deployment, for then and now
        self.assertEqual(2, len(ssd.mock_calls))
        ssd.assert_called_with(self.ctx, deployment_id,
                               {"bar": "baz"}, now.isoformat())

        # four polls result in only two signals, for then and now
        software_deployment_object.SoftwareDeployment.update_by_id(
            self.ctx, deployment_id, {'updated_at': now})
        sd = software_deployment_object.SoftwareDeployment.get_by_id(
            self.ctx, deployment_id)
        self.engine.software_config._refresh_swift_software_deployment(
            self.ctx, sd, temp_url)
        self.assertEqual(2, len(ssd.mock_calls))
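    # NOTE: illustrative summary, inferred from the assertions above and
    # not part of the original suite. The TempURL refresh loop appears to
    # behave like:
    #
    #     headers = swift.head_object(container, object_name)
    #     if 404: return sd unchanged        # object not written yet
    #     if other error: propagate          # e.g. the 409 case above
    #     if headers['last-modified'] is not newer than sd.updated_at: no-op
    #     else: body = swift.get_object(...)[1]
    #           signal_software_deployment(ctx, id, json.loads(body),
    #                                      last_modified.isoformat())
    #
    # which is why repeated polls without a newer last-modified never
    # produce a second signal.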
    @mock.patch.object(service_software_config.SoftwareConfigService,
                       'signal_software_deployment')
    @mock.patch.object(service_software_config.SoftwareConfigService,
                       'metadata_software_deployments')
    @mock.patch.object(db_api, 'resource_update')
    @mock.patch.object(db_api, 'resource_get_by_physical_resource_id')
    @mock.patch.object(zaqar.ZaqarClientPlugin, 'create_for_tenant')
    def test_refresh_zaqar_software_deployment(self, plugin, res_get,
                                               res_upd, md_sd, ssd):
        rs = mock.Mock()
        rs.rsrc_metadata = {}
        rs.id = '1234'
        rs.atomic_key = 1
        rd1 = mock.Mock()
        rd1.key = 'user'
        rd1.value = 'user1'
        rd2 = mock.Mock()
        rd2.key = 'password'
        rd2.decrypt_method, rd2.value = crypt.encrypt('pass1')
        rs.data = [rd1, rd2]
        res_get.return_value = rs
        res_upd.return_value = 1
        deployments = {'deploy': 'this'}
        md_sd.return_value = deployments

        config = self._create_software_config(inputs=[
            {
                'name': 'deploy_signal_transport',
                'type': 'String',
                'value': 'ZAQAR_SIGNAL'
            }, {
                'name': 'deploy_queue_id',
                'type': 'String',
                'value': '6789'
            }
        ])

        queue = mock.Mock()
        zaqar_client = mock.Mock()
        plugin.return_value = zaqar_client
        zaqar_client.queue.return_value = queue
        queue.pop.return_value = [mock.Mock(body='ok')]

        with mock.patch.object(self.ctx.session, 'refresh'):
            deployment = self._create_software_deployment(
                status='IN_PROGRESS', config_id=config['id'])

            deployment_id = deployment['id']
            self.assertEqual(
                deployment,
                self.engine.show_software_deployment(self.ctx,
                                                     deployment_id))
        zaqar_client.queue.assert_called_once_with('6789')
        queue.pop.assert_called_once_with()
        ssd.assert_called_once_with(self.ctx, deployment_id, 'ok', None)


class SoftwareConfigIOSchemaTest(common.HeatTestCase):
    def test_input_config_empty(self):
        name = 'foo'
        inp = swc_io.InputConfig(name=name)
        self.assertIsNone(inp.default())
        self.assertIs(False, inp.replace_on_change())
        self.assertEqual(name, inp.name())
        self.assertEqual({'name': name, 'type': 'String'}, inp.as_dict())
        self.assertEqual((name, None), inp.input_data())

    def test_input_config(self):
        name = 'bar'
        inp = swc_io.InputConfig(name=name, description='test',
                                 type='Number', default=0,
                                 replace_on_change=True)
        self.assertEqual(0, inp.default())
        self.assertIs(True, inp.replace_on_change())
        self.assertEqual(name, inp.name())
        self.assertEqual({'name': name, 'type': 'Number',
                          'description': 'test', 'default': 0,
                          'replace_on_change': True},
                         inp.as_dict())
        self.assertEqual((name, None), inp.input_data())

    def test_input_config_value(self):
        name = 'baz'
        inp = swc_io.InputConfig(name=name, type='Number',
                                 default=0, value=42)
        self.assertEqual(0, inp.default())
        self.assertIs(False, inp.replace_on_change())
        self.assertEqual(name, inp.name())
        self.assertEqual({'name': name, 'type': 'Number',
                          'default': 0, 'value': 42},
                         inp.as_dict())
        self.assertEqual((name, 42), inp.input_data())

    def test_input_config_no_name(self):
        self.assertRaises(ValueError, swc_io.InputConfig, type='String')

    def test_input_config_extra_key(self):
        self.assertRaises(ValueError, swc_io.InputConfig,
                          name='test', bogus='wat')

    def test_input_types(self):
        swc_io.InputConfig(name='str', type='String').as_dict()
        swc_io.InputConfig(name='num', type='Number').as_dict()
        swc_io.InputConfig(name='list', type='CommaDelimitedList').as_dict()
        swc_io.InputConfig(name='json', type='Json').as_dict()
        swc_io.InputConfig(name='bool', type='Boolean').as_dict()
        self.assertRaises(ValueError, swc_io.InputConfig,
                          name='bogus', type='BogusType')

    def test_output_config_empty(self):
        name = 'foo'
        outp = swc_io.OutputConfig(name=name)
        self.assertEqual(name, outp.name())
        self.assertEqual({'name': name, 'type': 'String',
                          'error_output': False},
                         outp.as_dict())

    def test_output_config(self):
        name = 'bar'
        outp = swc_io.OutputConfig(name=name, description='test',
                                   type='Json', error_output=True)
        self.assertEqual(name, outp.name())
        self.assertIs(True, outp.error_output())
        self.assertEqual({'name': name, 'type': 'Json',
                          'description': 'test', 'error_output': True},
                         outp.as_dict())
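    # NOTE: illustrative usage sketch, not part of the original suite.
    # Putting the schema behaviour exercised above in one place:
    #
    #     inp = swc_io.InputConfig(name='port', type='Number',
    #                              default='80')    # coerced to 80
    #     inp.input_data()   # -> ('port', None)  (no explicit value)
    #     inp.as_dict()      # only includes keys that were supplied
    #
    # Unknown keys and unknown types raise ValueError rather than being
    # silently dropped, as the *_extra_key and *_types cases verify.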
    def test_output_config_no_name(self):
        self.assertRaises(ValueError, swc_io.OutputConfig, type='String')

    def test_output_config_extra_key(self):
        self.assertRaises(ValueError, swc_io.OutputConfig,
                          name='test', bogus='wat')

    def test_output_types(self):
        swc_io.OutputConfig(name='str', type='String').as_dict()
        swc_io.OutputConfig(name='num', type='Number').as_dict()
        swc_io.OutputConfig(name='list', type='CommaDelimitedList').as_dict()
        swc_io.OutputConfig(name='json', type='Json').as_dict()
        swc_io.OutputConfig(name='bool', type='Boolean').as_dict()
        self.assertRaises(ValueError, swc_io.OutputConfig,
                          name='bogus', type='BogusType')

    def test_check_io_schema_empty_list(self):
        swc_io.check_io_schema_list([])

    def test_check_io_schema_string(self):
        self.assertRaises(TypeError, swc_io.check_io_schema_list, '')

    def test_check_io_schema_dict(self):
        self.assertRaises(TypeError, swc_io.check_io_schema_list, {})

    def test_check_io_schema_list_dict(self):
        swc_io.check_io_schema_list([{'name': 'foo'}])

    def test_check_io_schema_list_string(self):
        self.assertRaises(TypeError, swc_io.check_io_schema_list, ['foo'])

    def test_check_io_schema_list_list(self):
        self.assertRaises(TypeError, swc_io.check_io_schema_list, [['foo']])

    def test_check_io_schema_list_none(self):
        self.assertRaises(TypeError, swc_io.check_io_schema_list, [None])

    def test_check_io_schema_list_mixed(self):
        self.assertRaises(TypeError, swc_io.check_io_schema_list,
                          [{'name': 'foo'}, ('name', 'bar')])

    def test_input_config_value_json_default(self):
        name = 'baz'
        inp = swc_io.InputConfig(name=name, type='Json',
                                 default={'a': 1}, value=42)
        self.assertEqual({'a': 1}, inp.default())

    def test_input_config_value_default_coerce(self):
        name = 'baz'
        inp = swc_io.InputConfig(name=name, type='Number', default='0')
        self.assertEqual(0, inp.default())

    def test_input_config_value_ignore_string(self):
        name = 'baz'
        inp = swc_io.InputConfig(name=name, type='Number', default='')
        self.assertEqual({'type': 'Number', 'name': 'baz', 'default': ''},
                         inp.as_dict())
heat-10.0.2/heat/tests/engine/service/test_service_engine.py0000666000175000017500000003771213343562351024144 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime

import mock
from oslo_config import cfg
from oslo_utils import timeutils

from heat.common import context
from heat.common import service_utils
from heat.engine import service
from heat.engine import worker
from heat.objects import service as service_objects
from heat.rpc import worker_api
from heat.tests import common
from heat.tests.engine import tools
from heat.tests import utils


class ServiceEngineTest(common.HeatTestCase):

    def setUp(self):
        super(ServiceEngineTest, self).setUp()

        self.ctx = utils.dummy_context(tenant_id='stack_service_test_tenant')
        self.eng = service.EngineService('a-host', 'a-topic')
        self.eng.engine_id = 'engine-fake-uuid'

    def test_make_sure_rpc_version(self):
        self.assertEqual(
            '1.35',
            service.EngineService.RPC_API_VERSION,
            ('RPC version is changed, please update this test to new version '
             'and make sure additional test cases are added for RPC APIs '
             'added in new version'))

    @mock.patch.object(service_objects.Service, 'get_all')
    @mock.patch.object(service_utils, 'format_service')
    def test_service_get_all(self, mock_format_service, mock_get_all):
        mock_get_all.return_value = [mock.Mock()]
        mock_format_service.return_value = mock.Mock()
        self.assertEqual(1, len(self.eng.list_services(self.ctx)))
        self.assertTrue(mock_get_all.called)
        mock_format_service.assert_called_once_with(mock.ANY)

    @mock.patch.object(service_objects.Service, 'update_by_id')
    @mock.patch.object(service_objects.Service, 'create')
    @mock.patch.object(context, 'get_admin_context')
    def test_service_manage_report_start(self,
                                         mock_admin_context,
                                         mock_service_create,
                                         mock_service_update):
        self.eng.service_id = None
        mock_admin_context.return_value = self.ctx
        srv = dict(id='mock_id')
        mock_service_create.return_value = srv
        self.eng.service_manage_report()
        mock_admin_context.assert_called_once_with()
        mock_service_create.assert_called_once_with(
            self.ctx,
            dict(host=self.eng.host,
                 hostname=self.eng.hostname,
                 binary=self.eng.binary,
                 engine_id=self.eng.engine_id,
                 topic=self.eng.topic,
                 report_interval=cfg.CONF.periodic_interval))
        self.assertEqual(srv['id'], self.eng.service_id)
        mock_service_update.assert_called_once_with(
            self.ctx,
            self.eng.service_id,
            dict(deleted_at=None))

    @mock.patch.object(service_objects.Service, 'get_all_by_args')
    @mock.patch.object(service_objects.Service, 'delete')
    @mock.patch.object(context, 'get_admin_context')
    def test_service_manage_report_cleanup(self,
                                           mock_admin_context,
                                           mock_service_delete,
                                           mock_get_all):
        mock_admin_context.return_value = self.ctx
        ages_ago = timeutils.utcnow() - datetime.timedelta(seconds=4000)
        mock_get_all.return_value = [{'id': 'foo',
                                      'deleted_at': None,
                                      'updated_at': ages_ago}]
        self.eng.service_manage_cleanup()
        mock_admin_context.assert_called_once_with()
        mock_get_all.assert_called_once_with(self.ctx,
                                             self.eng.host,
                                             self.eng.binary,
                                             self.eng.hostname)
        mock_service_delete.assert_called_once_with(self.ctx, 'foo')

    @mock.patch.object(service_objects.Service, 'update_by_id')
    @mock.patch.object(context, 'get_admin_context')
    def test_service_manage_report_update(self,
                                          mock_admin_context,
                                          mock_service_update):
        self.eng.service_id = 'mock_id'
        mock_admin_context.return_value = self.ctx
        self.eng.service_manage_report()
        mock_admin_context.assert_called_once_with()
        mock_service_update.assert_called_once_with(
            self.ctx,
            'mock_id',
            dict(deleted_at=None))
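    # NOTE: illustrative summary, not part of the original suite.
    # Taken together, the report/cleanup cases above pin down the
    # service-heartbeat lifecycle:
    #
    #     service_manage_report():
    #         if self.service_id is None: Service.create(...)   # first run
    #         else:                       Service.update_by_id(...)
    #     service_manage_cleanup():
    #         delete rows for this host/binary whose updated_at is stale
    #         (4000s in the test, far past the default 60s report interval)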
    @mock.patch.object(service_objects.Service, 'update_by_id')
    @mock.patch.object(context, 'get_admin_context')
    def test_service_manage_report_update_fail(self,
                                               mock_admin_context,
                                               mock_service_update):
        self.eng.service_id = 'mock_id'
        mock_admin_context.return_value = self.ctx
        mock_service_update.side_effect = Exception()
        self.eng.service_manage_report()
        msg = 'Service %s update failed' % self.eng.service_id
        self.assertIn(msg, self.LOG.output)

    def test_stop_rpc_server(self):
        with mock.patch.object(self.eng, '_rpc_server') as mock_rpc_server:
            self.eng._stop_rpc_server()
            mock_rpc_server.stop.assert_called_once_with()
            mock_rpc_server.wait.assert_called_once_with()

    def _test_engine_service_start(self, thread_group_class,
                                   worker_service_class,
                                   engine_listener_class,
                                   thread_group_manager_class,
                                   sample_uuid_method,
                                   rpc_client_class, target_class,
                                   rpc_server_method):
        self.patchobject(self.eng, 'service_manage_cleanup')
        self.patchobject(self.eng, 'reset_stack_status')
        self.eng.start()

        # engine id
        sample_uuid_method.assert_called_once_with()
        sample_uuid = sample_uuid_method.return_value
        self.assertEqual(sample_uuid,
                         self.eng.engine_id,
                         'Failed to generate engine_id')

        # Thread group manager
        thread_group_manager_class.assert_called_once_with()
        thread_group_manager = thread_group_manager_class.return_value
        self.assertEqual(thread_group_manager,
                         self.eng.thread_group_mgr,
                         'Failed to create Thread Group Manager')

        # Engine Listener
        engine_listener_class.assert_called_once_with(
            self.eng.host,
            self.eng.engine_id,
            self.eng.thread_group_mgr
        )
        engine_listener = engine_listener_class.return_value
        engine_listener.start.assert_called_once_with()

        # Worker Service
        if cfg.CONF.convergence_engine:
            worker_service_class.assert_called_once_with(
                host=self.eng.host,
                topic=worker_api.TOPIC,
                engine_id=self.eng.engine_id,
                thread_group_mgr=self.eng.thread_group_mgr
            )
            worker_service = worker_service_class.return_value
            worker_service.start.assert_called_once_with()

        # RPC Target
        target_class.assert_called_once_with(
            version=service.EngineService.RPC_API_VERSION,
            server=self.eng.host,
            topic=self.eng.topic)

        # RPC server
        target = target_class.return_value
        rpc_server_method.assert_called_once_with(target, self.eng)
        rpc_server = rpc_server_method.return_value
        self.assertEqual(rpc_server,
                         self.eng._rpc_server,
                         "Failed to create RPC server")
        rpc_server.start.assert_called_once_with()

        # RPC client
        rpc_client = rpc_client_class.return_value
        rpc_client_class.assert_called_once_with(
            version=service.EngineService.RPC_API_VERSION)
        self.assertEqual(rpc_client,
                         self.eng._client,
                         "Failed to create RPC client")

        # Manage Thread group
        thread_group_class.assert_called_once_with()
        manage_thread_group = thread_group_class.return_value
        manage_thread_group.add_timer.assert_called_once_with(
            cfg.CONF.periodic_interval,
            self.eng.service_manage_report
        )
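    # NOTE: illustrative summary, not part of the original suite.
    # _test_engine_service_start is shared by the two cases below; the
    # only branch is cfg.CONF.convergence_engine, which decides whether a
    # heat.engine.worker.WorkerService is started alongside the RPC
    # server:
    #
    #     cfg.CONF.set_default('convergence_engine', True)   # worker runs
    #     cfg.CONF.set_default('convergence_engine', False)  # worker skipped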
    @mock.patch('heat.common.messaging.get_rpc_server',
                return_value=mock.Mock())
    @mock.patch('oslo_messaging.Target',
                return_value=mock.Mock())
    @mock.patch('heat.common.messaging.get_rpc_client',
                return_value=mock.Mock())
    @mock.patch('heat.common.service_utils.generate_engine_id',
                return_value='sample-uuid')
    @mock.patch('heat.engine.service.ThreadGroupManager',
                return_value=mock.Mock())
    @mock.patch('heat.engine.service.EngineListener',
                return_value=mock.Mock())
    @mock.patch('oslo_service.threadgroup.ThreadGroup',
                return_value=mock.Mock())
    @mock.patch.object(service.EngineService, '_configure_db_conn_pool_size')
    def test_engine_service_start_in_non_convergence_mode(
            self,
            configure_db_conn_pool_size,
            thread_group_class,
            engine_listener_class,
            thread_group_manager_class,
            sample_uuid_method,
            rpc_client_class,
            target_class,
            rpc_server_method):
        cfg.CONF.set_default('convergence_engine', False)
        self._test_engine_service_start(
            thread_group_class,
            None,
            engine_listener_class,
            thread_group_manager_class,
            sample_uuid_method,
            rpc_client_class,
            target_class,
            rpc_server_method
        )

    @mock.patch('heat.common.messaging.get_rpc_server',
                return_value=mock.Mock())
    @mock.patch('oslo_messaging.Target',
                return_value=mock.Mock())
    @mock.patch('heat.common.messaging.get_rpc_client',
                return_value=mock.Mock())
    @mock.patch('heat.common.service_utils.generate_engine_id',
                return_value=mock.Mock())
    @mock.patch('heat.engine.service.ThreadGroupManager',
                return_value=mock.Mock())
    @mock.patch('heat.engine.service.EngineListener',
                return_value=mock.Mock())
    @mock.patch('heat.engine.worker.WorkerService',
                return_value=mock.Mock())
    @mock.patch('oslo_service.threadgroup.ThreadGroup',
                return_value=mock.Mock())
    @mock.patch.object(service.EngineService, '_configure_db_conn_pool_size')
    def test_engine_service_start_in_convergence_mode(
            self,
            configure_db_conn_pool_size,
            thread_group_class,
            worker_service_class,
            engine_listener_class,
            thread_group_manager_class,
            sample_uuid_method,
            rpc_client_class,
            target_class,
            rpc_server_method):
        cfg.CONF.set_default('convergence_engine', True)
        self._test_engine_service_start(
            thread_group_class,
            worker_service_class,
            engine_listener_class,
            thread_group_manager_class,
            sample_uuid_method,
            rpc_client_class,
            target_class,
            rpc_server_method
        )

    def _test_engine_service_stop(self,
                                  service_delete_method,
                                  admin_context_method):
        cfg.CONF.set_default('periodic_interval', 60)
        self.patchobject(self.eng, 'service_manage_cleanup')
        self.patchobject(self.eng, 'reset_stack_status')

        self.eng.start()
        # Add dummy thread groups to verify that thread_group_mgr.stop()
        # is executed for each of them
        dtg1 = tools.DummyThreadGroup()
        dtg2 = tools.DummyThreadGroup()
        self.eng.thread_group_mgr.groups['sample-uuid1'] = dtg1
        self.eng.thread_group_mgr.groups['sample-uuid2'] = dtg2
        self.eng.service_id = 'sample-service-uuid'

        self.patchobject(
            self.eng.manage_thread_grp, 'stop',
            new=mock.Mock(wraps=self.eng.manage_thread_grp.stop))
        self.patchobject(self.eng, '_stop_rpc_server',
                         new=mock.Mock(wraps=self.eng._stop_rpc_server))

        orig_stop = self.eng.thread_group_mgr.stop
        with mock.patch.object(self.eng.thread_group_mgr, 'stop') as stop:
            stop.side_effect = orig_stop

            self.eng.stop()

            # RPC server
            self.eng._stop_rpc_server.assert_called_once_with()

            if cfg.CONF.convergence_engine:
                # WorkerService
                self.eng.worker_service.stop.assert_called_once_with()

            # Wait for all active threads to be finished
            calls = [mock.call('sample-uuid1', True),
                     mock.call('sample-uuid2', True)]
            self.eng.thread_group_mgr.stop.assert_has_calls(calls, True)

            # Manage Thread group
            self.eng.manage_thread_grp.stop.assert_called_with()

            # Service delete
            admin_context_method.assert_called_once_with()
            ctxt = admin_context_method.return_value
            service_delete_method.assert_called_once_with(
                ctxt,
                self.eng.service_id
            )

    @mock.patch.object(worker.WorkerService, 'stop')
    @mock.patch('heat.common.context.get_admin_context',
                return_value=mock.Mock())
    @mock.patch('heat.objects.service.Service.delete',
                return_value=mock.Mock())
    def test_engine_service_stop_in_convergence_mode(
            self,
            service_delete_method,
            admin_context_method,
            worker_service_stop):
        cfg.CONF.set_default('convergence_engine', True)
        self._test_engine_service_stop(
            service_delete_method,
            admin_context_method
        )
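    # NOTE: illustrative summary, not part of the original suite.
    # The stop-path assertions above encode a deliberate shutdown order:
    # stop the RPC server first (no new requests), then the convergence
    # worker if one is running, then wait for per-stack thread groups,
    # then the manage thread group, and only then delete this engine's
    # service record so other engines see it as gone.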
    @mock.patch('heat.common.context.get_admin_context',
                return_value=mock.Mock())
    @mock.patch('heat.objects.service.Service.delete',
                return_value=mock.Mock())
    def test_engine_service_stop_in_non_convergence_mode(
            self,
            service_delete_method,
            admin_context_method):
        cfg.CONF.set_default('convergence_engine', False)
        self._test_engine_service_stop(
            service_delete_method,
            admin_context_method
        )

    @mock.patch('oslo_log.log.setup')
    def test_engine_service_reset(self, setup_logging_mock):
        self.eng.reset()
        setup_logging_mock.assert_called_once_with(cfg.CONF, 'heat')

    @mock.patch('heat.common.messaging.get_rpc_client',
                return_value=mock.Mock())
    @mock.patch('heat.common.service_utils.generate_engine_id',
                return_value=mock.Mock())
    @mock.patch('heat.engine.service.ThreadGroupManager',
                return_value=mock.Mock())
    @mock.patch('heat.engine.service.EngineListener',
                return_value=mock.Mock())
    @mock.patch('heat.engine.worker.WorkerService',
                return_value=mock.Mock())
    @mock.patch('oslo_service.threadgroup.ThreadGroup',
                return_value=mock.Mock())
    def test_engine_service_configures_connection_pool(
            self,
            thread_group_class,
            worker_service_class,
            engine_listener_class,
            thread_group_manager_class,
            sample_uuid_method,
            rpc_client_class):
        self.addCleanup(self.eng._stop_rpc_server)
        self.eng.start()
        self.assertEqual(cfg.CONF.executor_thread_pool_size,
                         cfg.CONF.database.max_overflow)
heat-10.0.2/heat/tests/engine/service/test_stack_update.py0000666000175000017500000015051113343562351023617 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid

import eventlet.queue
import mock
from oslo_config import cfg
from oslo_messaging import conffixture
from oslo_messaging.rpc import dispatcher
import six

from heat.common import environment_util as env_util
from heat.common import exception
from heat.common import messaging
from heat.common import service_utils
from heat.common import template_format
from heat.db.sqlalchemy import api as db_api
from heat.engine.clients.os import glance
from heat.engine.clients.os import nova
from heat.engine import environment
from heat.engine import resource
from heat.engine import service
from heat.engine import stack
from heat.engine import stack_lock
from heat.engine import template as templatem
from heat.objects import stack as stack_object
from heat.rpc import api as rpc_api
from heat.tests import common
from heat.tests.engine import tools
from heat.tests import utils


class ServiceStackUpdateTest(common.HeatTestCase):

    def setUp(self):
        super(ServiceStackUpdateTest, self).setUp()
        self.useFixture(conffixture.ConfFixture(cfg.CONF))
        self.ctx = utils.dummy_context()
        self.man = service.EngineService('a-host', 'a-topic')
        self.man.thread_group_mgr = tools.DummyThreadGroupManager()

    def test_stack_update(self):
        stack_name = 'service_update_test_stack'
        params = {'foo': 'bar'}
        template = '{ "Template": "data" }'
        old_stack = tools.get_stack(stack_name, self.ctx)
        sid = old_stack.store()
        old_stack.set_stack_user_project_id('1234')
        s = stack_object.Stack.get_by_id(self.ctx, sid)

        stk = tools.get_stack(stack_name, self.ctx)

        # prepare mocks
        mock_stack = self.patchobject(stack, 'Stack', return_value=stk)
        mock_load = self.patchobject(stack.Stack, 'load',
                                     return_value=old_stack)
        mock_tmpl = self.patchobject(templatem, 'Template', return_value=stk.t)
        mock_env = self.patchobject(environment, 'Environment',
                                    return_value=stk.env)
        mock_validate = self.patchobject(stk, 'validate', return_value=None)
        msgq_mock = mock.Mock()
        self.patchobject(eventlet.queue, 'LightQueue',
                         side_effect=[msgq_mock, eventlet.queue.LightQueue()])

        # do update
        api_args = {'timeout_mins': 60, rpc_api.PARAM_CONVERGE: True}
        result = self.man.update_stack(self.ctx, old_stack.identifier(),
                                       template, params, None, api_args)

        # assertions
        self.assertEqual(old_stack.identifier(), result)
        self.assertIsInstance(result, dict)
        self.assertTrue(result['stack_id'])
        self.assertEqual([msgq_mock], self.man.thread_group_mgr.msg_queues)
        mock_tmpl.assert_called_once_with(template, files=None)
        mock_env.assert_called_once_with(params)
        mock_stack.assert_called_once_with(
            self.ctx, stk.name, stk.t,
            convergence=False,
            current_traversal=old_stack.current_traversal,
            prev_raw_template_id=None,
            current_deps=None,
            disable_rollback=True,
            nested_depth=0,
            owner_id=None,
            parent_resource=None,
            stack_user_project_id='1234',
            strict_validate=True,
            tenant_id='test_tenant_id',
            timeout_mins=60,
            user_creds_id=u'1',
            username='test_username',
            converge=True
        )
        mock_load.assert_called_once_with(self.ctx, stack=s)
        mock_validate.assert_called_once_with()
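    # NOTE: illustrative summary, not part of the original suite.
    # All of the update tests below drive the same RPC entry point:
    #
    #     self.man.update_stack(ctx, stack_identifier, template, params,
    #                           files, api_args, ...)
    #
    # and then assert on how the replacement stack.Stack was constructed,
    # so most of the variation is in api_args (rpc_api.PARAM_* keys) and
    # in which collaborators are mocked out.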
    def test_stack_update_with_environment_files(self):
        # Setup
        stack_name = 'service_update_env_files_stack'
        params = {}
        template = '{ "Template": "data" }'
        old_stack = tools.get_stack(stack_name, self.ctx)
        sid = old_stack.store()
        old_stack.set_stack_user_project_id('1234')
        stack_object.Stack.get_by_id(self.ctx, sid)

        stk = tools.get_stack(stack_name, self.ctx)

        # prepare mocks
        self.patchobject(stack, 'Stack', return_value=stk)
        self.patchobject(stack.Stack, 'load', return_value=old_stack)
        self.patchobject(templatem, 'Template', return_value=stk.t)
        self.patchobject(environment, 'Environment', return_value=stk.env)
        self.patchobject(stk, 'validate', return_value=None)
        self.patchobject(eventlet.queue, 'LightQueue',
                         side_effect=[mock.Mock(),
                                      eventlet.queue.LightQueue()])
        mock_merge = self.patchobject(env_util, 'merge_environments')

        # Test
        environment_files = ['env_1']
        self.man.update_stack(self.ctx, old_stack.identifier(), template,
                              params, None, {rpc_api.PARAM_CONVERGE: False},
                              environment_files=environment_files)

        # Verify
        mock_merge.assert_called_once_with(environment_files, None, params,
                                           mock.ANY)

    def test_stack_update_nested(self):
        stack_name = 'service_update_nested_test_stack'
        parent_stack = tools.get_stack(stack_name + '_parent', self.ctx)
        owner_id = parent_stack.store()
        old_stack = tools.get_stack(stack_name, self.ctx,
                                    owner_id=owner_id,
                                    nested_depth=1,
                                    user_creds_id=parent_stack.user_creds_id)
        sid = old_stack.store()
        old_stack.set_stack_user_project_id('1234')
        s = stack_object.Stack.get_by_id(self.ctx, sid)

        stk = tools.get_stack(stack_name, self.ctx)
        tmpl_id = stk.t.store(self.ctx)

        # prepare mocks
        mock_stack = self.patchobject(stack, 'Stack', return_value=stk)
        mock_load = self.patchobject(stack.Stack, 'load',
                                     return_value=old_stack)
        mock_tmpl = self.patchobject(templatem.Template, 'load',
                                     return_value=stk.t)
        mock_validate = self.patchobject(stk, 'validate', return_value=None)
        msgq_mock = mock.Mock()
        self.patchobject(eventlet.queue, 'LightQueue',
                         side_effect=[msgq_mock, eventlet.queue.LightQueue()])

        # do update
        api_args = {'timeout_mins': 60, rpc_api.PARAM_CONVERGE: False}
        result = self.man.update_stack(self.ctx, old_stack.identifier(),
                                       None, None, None, api_args,
                                       template_id=tmpl_id)

        # assertions
        self.assertEqual(old_stack.identifier(), result)
        self.assertIsInstance(result, dict)
        self.assertTrue(result['stack_id'])
        self.assertEqual([msgq_mock], self.man.thread_group_mgr.msg_queues)
        mock_tmpl.assert_called_once_with(self.ctx, tmpl_id)
        mock_stack.assert_called_once_with(
            self.ctx, stk.name, stk.t,
            convergence=False,
            current_traversal=old_stack.current_traversal,
            prev_raw_template_id=None,
            current_deps=None,
            disable_rollback=True,
            nested_depth=1,
            owner_id=owner_id,
            parent_resource=None,
            stack_user_project_id='1234',
            strict_validate=True,
            tenant_id='test_tenant_id',
            timeout_mins=60,
            user_creds_id=u'1',
            username='test_username',
            converge=False
        )
        mock_load.assert_called_once_with(self.ctx, stack=s)
        mock_validate.assert_called_once_with()
    def test_stack_update_existing_parameters(self):
        # Use a template with existing parameters, then update the stack
        # with a template containing additional parameters and ensure all
        # are preserved.
        stack_name = 'service_update_test_stack_existing_parameters'
        update_params = {'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'parameters': {'newparam': 123},
                         'resource_registry': {'resources': {}}}
        api_args = {rpc_api.PARAM_TIMEOUT: 60,
                    rpc_api.PARAM_EXISTING: True,
                    rpc_api.PARAM_CONVERGE: False}
        t = template_format.parse(tools.wp_template)

        stk = tools.get_stack(stack_name, self.ctx, with_params=True)
        stk.store()
        stk.set_stack_user_project_id('1234')
        self.assertEqual({'KeyName': 'test'}, stk.t.env.params)

        t['parameters']['newparam'] = {'type': 'number'}
        with mock.patch('heat.engine.stack.Stack') as mock_stack:
            stk.update = mock.Mock()
            self.patchobject(service, 'NotifyEvent')
            mock_stack.load.return_value = stk
            mock_stack.validate.return_value = None
            result = self.man.update_stack(self.ctx, stk.identifier(),
                                           t,
                                           update_params,
                                           None, api_args)
            tmpl = mock_stack.call_args[0][2]
            self.assertEqual({'KeyName': 'test', 'newparam': 123},
                             tmpl.env.params)
            self.assertEqual(stk.identifier(), result)

    def test_stack_update_existing_encrypted_parameters(self):
        # Create the stack with encryption enabled.
        # On update, encrypted_param_names should be taken from the
        # existing stack.
        hidden_param_template = u'''
heat_template_version: 2013-05-23
parameters:
  param2:
    type: string
    description: value2.
    hidden: true
resources:
  a_resource:
    type: GenericResourceType
'''
        cfg.CONF.set_override('encrypt_parameters_and_properties', True)

        stack_name = 'service_update_test_stack_encrypted_parameters'
        t = template_format.parse(hidden_param_template)
        env1 = environment.Environment({'param2': 'bar'})
        stk = stack.Stack(self.ctx, stack_name,
                          templatem.Template(t, env=env1))
        stk.store()
        stk.set_stack_user_project_id('1234')

        # Verify that hidden parameters are stored encrypted
        db_tpl = db_api.raw_template_get(self.ctx, stk.t.id)
        db_params = db_tpl.environment['parameters']
        self.assertEqual('cryptography_decrypt_v1', db_params['param2'][0])
        self.assertNotEqual("foo", db_params['param2'][1])

        # Verify that the loaded stack has decrypted parameters
        loaded_stack = stack.Stack.load(self.ctx, stack_id=stk.id)
        params = loaded_stack.t.env.params
        self.assertEqual('bar', params.get('param2'))

        update_params = {'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'parameters': {},
                         'resource_registry': {'resources': {}}}
        api_args = {rpc_api.PARAM_TIMEOUT: 60,
                    rpc_api.PARAM_EXISTING: True,
                    rpc_api.PARAM_CONVERGE: False}

        with mock.patch('heat.engine.stack.Stack') as mock_stack:
            loaded_stack.update = mock.Mock()
            self.patchobject(service, 'NotifyEvent')
            mock_stack.load.return_value = loaded_stack
            mock_stack.validate.return_value = None
            result = self.man.update_stack(self.ctx, stk.identifier(),
                                           t,
                                           update_params,
                                           None, api_args)
            tmpl = mock_stack.call_args[0][2]
            self.assertEqual({u'param2': u'bar'}, tmpl.env.params)
            # encrypted_param_names must be passed from existing to new
            # stack otherwise the updated stack won't decrypt the params
            self.assertEqual([u'param2'], tmpl.env.encrypted_param_names)
            self.assertEqual(stk.identifier(), result)
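    # NOTE: illustrative sketch, not part of the original suite, and the
    # decrypt call shown is approximate. The encrypted-parameter round
    # trip exercised above is roughly:
    #
    #     method, ciphertext = crypt.encrypt('secret')
    #     # stored as (method, ciphertext), e.g.
    #     # ('cryptography_decrypt_v1', '...')
    #     plaintext = crypt.decrypt(method, ciphertext)   # on stack load
    #
    # so a PATCH update that failed to carry encrypted_param_names over
    # would end up treating the ciphertext as the parameter value.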
""" stack_name = 'service_update_test_stack_existing_parameters_remove' update_params = {'encrypted_param_names': [], 'parameter_defaults': {}, 'event_sinks': [], 'parameters': {'newparam': 123}, 'resource_registry': {'resources': {}}} api_args = {rpc_api.PARAM_TIMEOUT: 60, rpc_api.PARAM_EXISTING: True, rpc_api.PARAM_CLEAR_PARAMETERS: ['removeme'], rpc_api.PARAM_CONVERGE: False} t = template_format.parse(tools.wp_template) t['parameters']['removeme'] = {'type': 'string'} stk = utils.parse_stack(t, stack_name=stack_name, params={'KeyName': 'test', 'removeme': 'foo'}) stk.set_stack_user_project_id('1234') self.assertEqual({'KeyName': 'test', 'removeme': 'foo'}, stk.t.env.params) t['parameters']['newparam'] = {'type': 'number'} with mock.patch('heat.engine.stack.Stack') as mock_stack: stk.update = mock.Mock() self.patchobject(service, 'NotifyEvent') mock_stack.load.return_value = stk mock_stack.validate.return_value = None result = self.man.update_stack(self.ctx, stk.identifier(), t, update_params, None, api_args) tmpl = mock_stack.call_args[0][2] self.assertEqual({'KeyName': 'test', 'newparam': 123}, tmpl.env.params) self.assertEqual(stk.identifier(), result) def test_stack_update_with_tags(self): """Test case for updating stack with tags. Create a stack with tags, then update with/without rpc_api.PARAM_EXISTING. """ stack_name = 'service_update_test_stack_existing_tags' api_args = {rpc_api.PARAM_TIMEOUT: 60, rpc_api.PARAM_EXISTING: True} t = template_format.parse(tools.wp_template) stk = utils.parse_stack(t, stack_name=stack_name, tags=['tag1']) stk.set_stack_user_project_id('1234') self.assertEqual(['tag1'], stk.tags) self.patchobject(stack.Stack, 'validate') # update keep old tags _, _, updated_stack = self.man._prepare_stack_updates( self.ctx, stk, t, {}, None, None, api_args, None) self.assertEqual(['tag1'], updated_stack.tags) # with new tags api_args[rpc_api.STACK_TAGS] = ['tag2'] _, _, updated_stack = self.man._prepare_stack_updates( self.ctx, stk, t, {}, None, None, api_args, None) self.assertEqual(['tag2'], updated_stack.tags) # with no PARAM_EXISTING flag and no tags del api_args[rpc_api.PARAM_EXISTING] del api_args[rpc_api.STACK_TAGS] _, _, updated_stack = self.man._prepare_stack_updates( self.ctx, stk, t, {}, None, None, api_args, None) self.assertIsNone(updated_stack.tags) def test_stack_update_existing_registry(self): # Use a template with existing flag and ensure the # environment registry is preserved. 
    def test_stack_update_existing_registry(self):
        # Use a template with existing flag and ensure the
        # environment registry is preserved.
        stack_name = 'service_update_test_stack_existing_registry'
        initial_registry = {'OS::Foo': 'foo.yaml',
                            'OS::Foo2': 'foo2.yaml',
                            'resources': {
                                'myserver': {'OS::Server': 'myserver.yaml'}}}
        initial_params = {'encrypted_param_names': [],
                          'parameter_defaults': {},
                          'parameters': {},
                          'event_sinks': [],
                          'resource_registry': initial_registry}
        initial_files = {'foo.yaml': 'foo',
                         'foo2.yaml': 'foo2',
                         'myserver.yaml': 'myserver'}
        update_registry = {'OS::Foo2': 'newfoo2.yaml',
                           'resources': {
                               'myother': {'OS::Other': 'myother.yaml'}}}
        update_params = {'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'parameters': {},
                         'resource_registry': update_registry}
        update_files = {'newfoo2.yaml': 'newfoo',
                        'myother.yaml': 'myother'}
        api_args = {rpc_api.PARAM_TIMEOUT: 60,
                    rpc_api.PARAM_EXISTING: True,
                    rpc_api.PARAM_CONVERGE: False}
        t = template_format.parse(tools.wp_template)

        stk = utils.parse_stack(t, stack_name=stack_name,
                                params=initial_params, files=initial_files)
        stk.set_stack_user_project_id('1234')
        self.assertEqual(initial_params, stk.t.env.env_as_dict())

        expected_reg = {'OS::Foo': 'foo.yaml',
                        'OS::Foo2': 'newfoo2.yaml',
                        'resources': {
                            'myother': {'OS::Other': 'myother.yaml'},
                            'myserver': {'OS::Server': 'myserver.yaml'}}}
        expected_env = {'encrypted_param_names': [],
                        'parameter_defaults': {},
                        'parameters': {},
                        'event_sinks': [],
                        'resource_registry': expected_reg}
        # FIXME(shardy): Currently we don't prune unused old files
        expected_files = {'foo.yaml': 'foo',
                          'foo2.yaml': 'foo2',
                          'myserver.yaml': 'myserver',
                          'newfoo2.yaml': 'newfoo',
                          'myother.yaml': 'myother'}
        with mock.patch('heat.engine.stack.Stack') as mock_stack:
            stk.update = mock.Mock()
            self.patchobject(service, 'NotifyEvent')
            mock_stack.load.return_value = stk
            mock_stack.validate.return_value = None
            result = self.man.update_stack(self.ctx, stk.identifier(),
                                           t,
                                           update_params,
                                           update_files,
                                           api_args)
            tmpl = mock_stack.call_args[0][2]
            self.assertEqual(expected_env, tmpl.env.env_as_dict())
            self.assertEqual(expected_files, tmpl.files.files)
            self.assertEqual(stk.identifier(), result)
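    # NOTE: illustrative summary, not part of the original suite.
    # The expected_reg value above pins down the PATCH-update merge rule
    # for the resource registry: it is a per-key merge, not a wholesale
    # replacement, e.g.
    #
    #     {'OS::Foo': 'foo.yaml', 'OS::Foo2': 'foo2.yaml'}   (existing)
    #   + {'OS::Foo2': 'newfoo2.yaml'}                       (update)
    #   = {'OS::Foo': 'foo.yaml', 'OS::Foo2': 'newfoo2.yaml'}
    #
    # while files accumulate (see the FIXME: old files are not pruned).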
""" stack_name = 'service_update_test_stack_existing_param_defaults' intial_params = {'encrypted_param_names': [], 'parameter_defaults': {'mydefault': 123}, 'parameters': {}, 'resource_registry': {}} update_params = {'encrypted_param_names': [], 'parameter_defaults': {'default2': 456}, 'parameters': {}, 'resource_registry': {}} api_args = {rpc_api.PARAM_TIMEOUT: 60, rpc_api.PARAM_EXISTING: True, rpc_api.PARAM_CONVERGE: False} t = template_format.parse(tools.wp_template) stk = utils.parse_stack(t, stack_name=stack_name, params=intial_params) stk.set_stack_user_project_id('1234') expected_env = {'encrypted_param_names': [], 'parameter_defaults': { 'mydefault': 123, 'default2': 456}, 'parameters': {}, 'event_sinks': [], 'resource_registry': {'resources': {}}} with mock.patch('heat.engine.stack.Stack') as mock_stack: stk.update = mock.Mock() self.patchobject(service, 'NotifyEvent') mock_stack.load.return_value = stk mock_stack.validate.return_value = None result = self.man.update_stack(self.ctx, stk.identifier(), t, update_params, None, api_args) tmpl = mock_stack.call_args[0][2] self.assertEqual(expected_env, tmpl.env.env_as_dict()) self.assertEqual(stk.identifier(), result) def test_stack_update_reuses_api_params(self): stack_name = 'service_update_stack_reuses_api_params' params = {'foo': 'bar'} template = '{ "Template": "data" }' old_stack = tools.get_stack(stack_name, self.ctx) old_stack.timeout_mins = 1 old_stack.disable_rollback = False sid = old_stack.store() old_stack.set_stack_user_project_id('1234') s = stack_object.Stack.get_by_id(self.ctx, sid) stk = tools.get_stack(stack_name, self.ctx) # prepare mocks mock_stack = self.patchobject(stack, 'Stack', return_value=stk) mock_load = self.patchobject(stack.Stack, 'load', return_value=old_stack) mock_tmpl = self.patchobject(templatem, 'Template', return_value=stk.t) mock_env = self.patchobject(environment, 'Environment', return_value=stk.env) mock_validate = self.patchobject(stk, 'validate', return_value=None) # do update result = self.man.update_stack(self.ctx, old_stack.identifier(), template, params, None, {rpc_api.PARAM_CONVERGE: False}) # assertions self.assertEqual(old_stack.identifier(), result) self.assertIsInstance(result, dict) self.assertTrue(result['stack_id']) mock_validate.assert_called_once_with() mock_tmpl.assert_called_once_with(template, files=None) mock_env.assert_called_once_with(params) mock_load.assert_called_once_with(self.ctx, stack=s) mock_stack.assert_called_once_with( self.ctx, stk.name, stk.t, convergence=False, current_traversal=old_stack.current_traversal, prev_raw_template_id=None, current_deps=None, disable_rollback=False, nested_depth=0, owner_id=None, parent_resource=None, stack_user_project_id='1234', strict_validate=True, tenant_id='test_tenant_id', timeout_mins=1, user_creds_id=u'1', username='test_username', converge=False ) def test_stack_cancel_update_same_engine(self): stack_name = 'service_update_stack_test_cancel_same_engine' stk = tools.get_stack(stack_name, self.ctx) stk.state_set(stk.UPDATE, stk.IN_PROGRESS, 'test_override') stk.disable_rollback = False stk.store() self.man.engine_id = service_utils.generate_engine_id() self.patchobject(stack.Stack, 'load', return_value=stk) self.patchobject(stack_lock.StackLock, 'get_engine_id', return_value=self.man.engine_id) self.patchobject(self.man.thread_group_mgr, 'send') self.man.stack_cancel_update(self.ctx, stk.identifier(), cancel_with_rollback=False) self.man.thread_group_mgr.send.assert_called_once_with(stk.id, 'cancel') def 
    def test_stack_cancel_update_different_engine(self):
        stack_name = 'service_update_stack_test_cancel_different_engine'
        stk = tools.get_stack(stack_name, self.ctx)
        stk.state_set(stk.UPDATE, stk.IN_PROGRESS, 'test_override')
        stk.disable_rollback = False
        stk.store()
        self.patchobject(stack.Stack, 'load', return_value=stk)
        self.patchobject(stack_lock.StackLock, 'get_engine_id',
                         return_value=str(uuid.uuid4()))
        self.patchobject(service_utils, 'engine_alive',
                         return_value=True)
        self.man.listener = mock.Mock()
        self.man.listener.SEND = 'send'
        self.man._client = messaging.get_rpc_client(
            version=self.man.RPC_API_VERSION)

        # In fact the other engine is not alive, so the call will time out
        self.assertRaises(dispatcher.ExpectedException,
                          self.man.stack_cancel_update,
                          self.ctx, stk.identifier())

    def test_stack_cancel_update_no_lock(self):
        stack_name = 'service_update_stack_test_cancel_same_engine'
        stk = tools.get_stack(stack_name, self.ctx)
        stk.state_set(stk.UPDATE, stk.IN_PROGRESS, 'test_override')
        stk.disable_rollback = False
        stk.store()

        self.patchobject(stack.Stack, 'load', return_value=stk)
        self.patchobject(stack_lock.StackLock, 'get_engine_id',
                         return_value=None)
        self.patchobject(self.man.thread_group_mgr, 'send')

        self.man.stack_cancel_update(self.ctx, stk.identifier(),
                                     cancel_with_rollback=False)

        self.assertFalse(self.man.thread_group_mgr.send.called)

    def test_stack_cancel_update_wrong_state_fails(self):
        stack_name = 'service_update_cancel_test_stack'
        stk = tools.get_stack(stack_name, self.ctx)
        stk.state_set(stk.UPDATE, stk.COMPLETE, 'test_override')
        stk.store()
        self.patchobject(stack.Stack, 'load', return_value=stk)

        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.man.stack_cancel_update,
                               self.ctx, stk.identifier())
        self.assertEqual(exception.NotSupported, ex.exc_info[0])
        self.assertIn("Cancelling update when stack is "
                      "UPDATE_COMPLETE",
                      six.text_type(ex.exc_info[1]))
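    # NOTE: illustrative summary, not part of the original suite.
    # The three cancel cases above cover the dispatch logic of
    # stack_cancel_update:
    #
    #     lock holder == this engine  -> thread_group_mgr.send(id, 'cancel')
    #     lock holder == other engine -> forward over RPC to that engine
    #     no lock holder              -> nothing to cancel, no-op
    #
    # and a stack not in an IN_PROGRESS state is rejected with NotSupported.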
    @mock.patch.object(stack_object.Stack, 'count_total_resources')
    def test_stack_update_equals(self, ctr):
        stack_name = 'test_stack_update_equals_resource_limit'
        params = {}
        tpl = {'HeatTemplateFormatVersion': '2012-12-12',
               'Resources': {
                   'A': {'Type': 'GenericResourceType'},
                   'B': {'Type': 'GenericResourceType'},
                   'C': {'Type': 'GenericResourceType'}}}

        template = templatem.Template(tpl)

        old_stack = stack.Stack(self.ctx, stack_name, template)
        sid = old_stack.store()
        old_stack.set_stack_user_project_id('1234')
        s = stack_object.Stack.get_by_id(self.ctx, sid)
        ctr.return_value = 3

        stk = stack.Stack(self.ctx, stack_name, template)

        # prepare mocks
        mock_stack = self.patchobject(stack, 'Stack', return_value=stk)
        mock_load = self.patchobject(stack.Stack, 'load',
                                     return_value=old_stack)
        mock_tmpl = self.patchobject(templatem, 'Template', return_value=stk.t)
        mock_env = self.patchobject(environment, 'Environment',
                                    return_value=stk.env)
        mock_validate = self.patchobject(stk, 'validate', return_value=None)

        # do update
        cfg.CONF.set_override('max_resources_per_stack', 3)
        api_args = {'timeout_mins': 60, rpc_api.PARAM_CONVERGE: False}
        result = self.man.update_stack(self.ctx, old_stack.identifier(),
                                       template, params, None, api_args)

        # assertions
        self.assertEqual(old_stack.identifier(), result)
        self.assertIsInstance(result, dict)
        self.assertTrue(result['stack_id'])
        root_stack_id = old_stack.root_stack_id()
        self.assertEqual(3, old_stack.total_resources(root_stack_id))
        mock_tmpl.assert_called_once_with(template, files=None)
        mock_env.assert_called_once_with(params)
        mock_stack.assert_called_once_with(
            self.ctx, stk.name, stk.t,
            convergence=False,
            current_traversal=old_stack.current_traversal,
            prev_raw_template_id=None,
            current_deps=None,
            disable_rollback=True,
            nested_depth=0,
            owner_id=None,
            parent_resource=None,
            stack_user_project_id='1234',
            strict_validate=True,
            tenant_id='test_tenant_id',
            timeout_mins=60,
            user_creds_id=u'1',
            username='test_username',
            converge=False
        )
        mock_load.assert_called_once_with(self.ctx, stack=s)
        mock_validate.assert_called_once_with()

    def test_stack_update_stack_id_equal(self):
        stack_name = 'test_stack_update_stack_id_equal'
        tpl = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'A': {
                    'Type': 'ResourceWithPropsType',
                    'Properties': {
                        'Foo': {'Ref': 'AWS::StackId'}
                    }
                }
            }
        }
        template = templatem.Template(tpl)
        create_stack = stack.Stack(self.ctx, stack_name, template)
        sid = create_stack.store()
        create_stack.create()
        self.assertEqual((create_stack.CREATE, create_stack.COMPLETE),
                         create_stack.state)
        create_stack._persist_state()
        s = stack_object.Stack.get_by_id(self.ctx, sid)
        old_stack = stack.Stack.load(self.ctx, stack=s)
        self.assertEqual((old_stack.CREATE, old_stack.COMPLETE),
                         old_stack.state)
        self.assertEqual(create_stack.identifier().arn(),
                         old_stack['A'].properties['Foo'])

        mock_load = self.patchobject(stack.Stack, 'load',
                                     return_value=old_stack)

        result = self.man.update_stack(self.ctx, create_stack.identifier(),
                                       tpl, {}, None,
                                       {rpc_api.PARAM_CONVERGE: False})

        old_stack._persist_state()
        self.assertEqual((old_stack.UPDATE, old_stack.COMPLETE),
                         old_stack.state)
        self.assertEqual(create_stack.identifier(), result)
        self.assertIsNotNone(create_stack.identifier().stack_id)
        self.assertEqual(create_stack.identifier().arn(),
                         old_stack['A'].properties['Foo'])

        self.assertEqual(create_stack['A'].id, old_stack['A'].id)
        mock_load.assert_called_once_with(self.ctx, stack=s)

    def test_stack_update_exceeds_resource_limit(self):
        stack_name = 'test_stack_update_exceeds_resource_limit'
        params = {}
        tpl = {'HeatTemplateFormatVersion': '2012-12-12',
               'Resources': {
                   'A': {'Type': 'GenericResourceType'},
                   'B': {'Type': 'GenericResourceType'},
                   'C': {'Type': 'GenericResourceType'}}}
        template = templatem.Template(tpl)
        old_stack = stack.Stack(self.ctx, stack_name, template)
        sid = old_stack.store()
        self.assertIsNotNone(sid)

        cfg.CONF.set_override('max_resources_per_stack', 2)

        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.man.update_stack, self.ctx,
                               old_stack.identifier(), tpl, params,
                               None, {rpc_api.PARAM_CONVERGE: False})
        self.assertEqual(exception.RequestLimitExceeded, ex.exc_info[0])
        self.assertIn(exception.StackResourceLimitExceeded.msg_fmt,
                      six.text_type(ex.exc_info[1]))
    def test_stack_update_verify_err(self):
        stack_name = 'service_update_verify_err_test_stack'
        params = {'foo': 'bar'}
        template = '{ "Template": "data" }'
        old_stack = tools.get_stack(stack_name, self.ctx)
        old_stack.store()
        sid = old_stack.store()
        old_stack.set_stack_user_project_id('1234')
        s = stack_object.Stack.get_by_id(self.ctx, sid)

        stk = tools.get_stack(stack_name, self.ctx)

        # prepare mocks
        mock_stack = self.patchobject(stack, 'Stack', return_value=stk)
        mock_load = self.patchobject(stack.Stack, 'load',
                                     return_value=old_stack)
        mock_tmpl = self.patchobject(templatem, 'Template', return_value=stk.t)
        mock_env = self.patchobject(environment, 'Environment',
                                    return_value=stk.env)
        ex_expected = exception.StackValidationFailed(message='fubar')
        mock_validate = self.patchobject(stk, 'validate',
                                         side_effect=ex_expected)
        # do update
        api_args = {'timeout_mins': 60, rpc_api.PARAM_CONVERGE: False}
        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.man.update_stack,
                               self.ctx, old_stack.identifier(),
                               template, params, None, api_args)

        # assertions
        self.assertEqual(exception.StackValidationFailed, ex.exc_info[0])
        mock_tmpl.assert_called_once_with(template, files=None)
        mock_env.assert_called_once_with(params)
        mock_stack.assert_called_once_with(
            self.ctx, stk.name, stk.t,
            convergence=False,
            current_traversal=old_stack.current_traversal,
            prev_raw_template_id=None,
            current_deps=None,
            disable_rollback=True,
            nested_depth=0,
            owner_id=None,
            parent_resource=None,
            stack_user_project_id='1234',
            strict_validate=True,
            tenant_id='test_tenant_id',
            timeout_mins=60,
            user_creds_id=u'1',
            username='test_username',
            converge=False
        )
        mock_load.assert_called_once_with(self.ctx, stack=s)
        mock_validate.assert_called_once_with()

    def test_stack_update_nonexist(self):
        stack_name = 'service_update_nonexist_test_stack'
        params = {'foo': 'bar'}
        template = '{ "Template": "data" }'
        stk = tools.get_stack(stack_name, self.ctx)

        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.man.update_stack,
                               self.ctx, stk.identifier(), template,
                               params, None,
                               {rpc_api.PARAM_CONVERGE: False})
        self.assertEqual(exception.EntityNotFound, ex.exc_info[0])

    def test_stack_update_no_credentials(self):
        cfg.CONF.set_default('deferred_auth_method', 'password')
        stack_name = 'test_stack_update_no_credentials'
        params = {'foo': 'bar'}
        template = '{ "Template": "data" }'

        stk = tools.get_stack(stack_name, self.ctx)
        # force check for credentials on create
        stk['WebServer'].requires_deferred_auth = True

        sid = stk.store()
        stk.set_stack_user_project_id('1234')
        s = stack_object.Stack.get_by_id(self.ctx, sid)

        self.ctx = utils.dummy_context(password=None)

        # prepare mocks
        mock_get = self.patchobject(self.man, '_get_stack', return_value=s)
        mock_stack = self.patchobject(stack, 'Stack', return_value=stk)
        mock_load = self.patchobject(stack.Stack, 'load', return_value=stk)
        mock_tmpl = self.patchobject(templatem, 'Template', return_value=stk.t)
        mock_env = self.patchobject(environment, 'Environment',
                                    return_value=stk.env)

        api_args = {'timeout_mins': 60, rpc_api.PARAM_CONVERGE: False}
        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.man.update_stack,
                               self.ctx, stk.identifier(),
                               template, params, None, api_args)
        self.assertEqual(exception.MissingCredentialError, ex.exc_info[0])
        self.assertEqual('Missing required credential: X-Auth-Key',
                         six.text_type(ex.exc_info[1]))

        mock_get.assert_called_once_with(self.ctx, stk.identifier())
        mock_tmpl.assert_called_once_with(template, files=None)
        mock_env.assert_called_once_with(params)
        mock_stack.assert_called_once_with(
            self.ctx, stk.name, stk.t,
            convergence=False,
            current_traversal=stk.current_traversal,
            prev_raw_template_id=None,
            current_deps=None,
            disable_rollback=True,
            nested_depth=0,
            owner_id=None,
            parent_resource=None,
            stack_user_project_id='1234',
            strict_validate=True,
            tenant_id='test_tenant_id',
            timeout_mins=60,
            user_creds_id=u'1',
            username='test_username',
            converge=False
        )
        mock_load.assert_called_once_with(self.ctx, stack=s)
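    # NOTE: illustrative summary, not part of the original suite.
    # The "existing" tests below exercise PATCH-style updates: passing
    # template=None together with rpc_api.PARAM_EXISTING: True reuses the
    # stored template, and such updates are only allowed when the stack's
    # current state is *_COMPLETE (a FAILED stack is rejected).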
    def test_stack_update_existing_template(self):
        '''Update a stack using the same template.'''
        stack_name = 'service_update_test_stack_existing_template'
        api_args = {rpc_api.PARAM_TIMEOUT: 60,
                    rpc_api.PARAM_EXISTING: True,
                    rpc_api.PARAM_CONVERGE: False}
        t = template_format.parse(tools.wp_template)
        # Don't actually run the update as the mocking breaks it; instead
        # we just ensure the expected template is passed in to the updated
        # template, and that the update task is scheduled.
        self.man.thread_group_mgr = tools.DummyThreadGroupMgrLogStart()

        params = {}
        stack = utils.parse_stack(t, stack_name=stack_name,
                                  params=params)
        stack.set_stack_user_project_id('1234')
        self.assertEqual(t, stack.t.t)
        stack.action = stack.CREATE
        stack.status = stack.COMPLETE

        with mock.patch('heat.engine.stack.Stack') as mock_stack:
            self.patchobject(service, 'NotifyEvent')
            mock_stack.load.return_value = stack
            mock_stack.validate.return_value = None
            result = self.man.update_stack(self.ctx, stack.identifier(),
                                           None,
                                           params,
                                           None, api_args)
            tmpl = mock_stack.call_args[0][2]
            self.assertEqual(t, tmpl.t)
            self.assertEqual(stack.identifier(), result)
            self.assertEqual(1, len(self.man.thread_group_mgr.started))

    def test_stack_update_existing_failed(self):
        '''Update a stack using the same template doesn't work when FAILED.'''
        stack_name = 'service_update_test_stack_existing_template'
        api_args = {rpc_api.PARAM_TIMEOUT: 60,
                    rpc_api.PARAM_EXISTING: True,
                    rpc_api.PARAM_CONVERGE: False}
        t = template_format.parse(tools.wp_template)
        # Don't actually run the update as the mocking breaks it; instead
        # we just ensure the expected template is passed in to the updated
        # template, and that the update task is scheduled.
        self.man.thread_group_mgr = tools.DummyThreadGroupMgrLogStart()

        params = {}
        stack = utils.parse_stack(t, stack_name=stack_name,
                                  params=params)
        stack.set_stack_user_project_id('1234')
        self.assertEqual(t, stack.t.t)
        stack.action = stack.UPDATE
        stack.status = stack.FAILED

        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.man.update_stack,
                               self.ctx, stack.identifier(),
                               None, params, None, api_args)

        self.assertEqual(exception.NotSupported, ex.exc_info[0])
        self.assertIn("PATCH update to non-COMPLETE stack",
                      six.text_type(ex.exc_info[1]))

    def test_update_immutable_parameter_disallowed(self):
        template = '''
heat_template_version: 2014-10-16
parameters:
  param1:
    type: string
    immutable: true
    default: foo
'''
        self.ctx = utils.dummy_context(password=None)
        stack_name = 'test_update_immutable_parameters'
        old_stack = tools.get_stack(stack_name, self.ctx,
                                    template=template)
        sid = old_stack.store()
        old_stack.set_stack_user_project_id('1234')
        s = stack_object.Stack.get_by_id(self.ctx, sid)

        # prepare mocks
        self.patchobject(self.man, '_get_stack', return_value=s)
        self.patchobject(stack, 'Stack', return_value=old_stack)
        self.patchobject(stack.Stack, 'load', return_value=old_stack)
        params = {'param1': 'bar'}
        exc = self.assertRaises(dispatcher.ExpectedException,
                                self.man.update_stack,
                                self.ctx, old_stack.identifier(),
                                old_stack.t.t, params,
                                None, {rpc_api.PARAM_CONVERGE: False})
        self.assertEqual(exception.ImmutableParameterModified,
                         exc.exc_info[0])
        self.assertEqual('The following parameters are immutable and may '
                         'not be updated: param1',
                         exc.exc_info[1].message)
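    # NOTE: illustrative summary, not part of the original suite.
    # The three immutable-parameter cases pin down the rule: with
    #
    #     parameters:
    #       param1: {type: string, immutable: true, default: foo}
    #
    # supplying a *different* value on update raises
    # ImmutableParameterModified, while re-supplying the same value, or
    # marking the parameter immutable: false, updates normally.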
    def test_update_mutable_parameter_allowed(self):
        template = '''
heat_template_version: 2014-10-16
parameters:
  param1:
    type: string
    immutable: false
    default: foo
'''
        self.ctx = utils.dummy_context(password=None)
        stack_name = 'test_update_immutable_parameters'
        params = {}
        old_stack = tools.get_stack(stack_name, self.ctx,
                                    template=template)
        sid = old_stack.store()
        old_stack.set_stack_user_project_id('1234')
        s = stack_object.Stack.get_by_id(self.ctx, sid)

        # prepare mocks
        self.patchobject(self.man, '_get_stack', return_value=s)
        self.patchobject(stack, 'Stack', return_value=old_stack)
        self.patchobject(stack.Stack, 'load', return_value=old_stack)
        self.patchobject(templatem, 'Template', return_value=old_stack.t)
        self.patchobject(environment, 'Environment',
                         return_value=old_stack.env)

        params = {'param1': 'bar'}
        result = self.man.update_stack(self.ctx, old_stack.identifier(),
                                       templatem.Template(template), params,
                                       None, {rpc_api.PARAM_CONVERGE: False})
        self.assertEqual(s.id, result['stack_id'])

    def test_update_immutable_parameter_same_value(self):
        template = '''
heat_template_version: 2014-10-16
parameters:
  param1:
    type: string
    immutable: true
    default: foo
'''
        self.ctx = utils.dummy_context(password=None)
        stack_name = 'test_update_immutable_parameters'
        params = {}
        old_stack = tools.get_stack(stack_name, self.ctx,
                                    template=template)
        sid = old_stack.store()
        old_stack.set_stack_user_project_id('1234')
        s = stack_object.Stack.get_by_id(self.ctx, sid)

        # prepare mocks
        self.patchobject(self.man, '_get_stack', return_value=s)
        self.patchobject(stack, 'Stack', return_value=old_stack)
        self.patchobject(stack.Stack, 'load', return_value=old_stack)
        self.patchobject(templatem, 'Template', return_value=old_stack.t)
        self.patchobject(environment, 'Environment',
                         return_value=old_stack.env)

        params = {'param1': 'foo'}
        result = self.man.update_stack(self.ctx, old_stack.identifier(),
                                       templatem.Template(template), params,
                                       None, {rpc_api.PARAM_CONVERGE: False})
        self.assertEqual(s.id, result['stack_id'])


class ServiceStackUpdatePreviewTest(common.HeatTestCase):

    old_tmpl = """
heat_template_version: 2014-10-16
resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: F17-x86_64-gold
      flavor: m1.large
      key_name: test
      user_data: wordpress
"""

    new_tmpl = """
heat_template_version: 2014-10-16
resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: F17-x86_64-gold
      flavor: m1.large
      key_name: test
      user_data: wordpress
  password:
    type: OS::Heat::RandomString
    properties:
      length: 8
"""

    def setUp(self):
        super(ServiceStackUpdatePreviewTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.man = service.EngineService('a-host', 'a-topic')
        self.man.thread_group_mgr = tools.DummyThreadGroupManager()
current_traversal=old_stack.current_traversal, prev_raw_template_id=None, current_deps=None, disable_rollback=True, nested_depth=0, owner_id=None, parent_resource=None, stack_user_project_id='1234', strict_validate=True, tenant_id='test_tenant_id', timeout_mins=60, user_creds_id=u'1', username='test_username', converge=False ) mock_load.assert_called_once_with(self.ctx, stack=s) mock_tmpl.assert_called_once_with(new_template, files=None) mock_env.assert_called_once_with(params) mock_validate.assert_called_once_with() if environment_files: mock_merge.assert_called_once_with(environment_files, None, params, mock.ANY) return result def test_stack_update_preview_added_unchanged(self): result = self._test_stack_update_preview(self.old_tmpl, self.new_tmpl) added = [x for x in result['added']][0] self.assertEqual('password', added['resource_name']) unchanged = [x for x in result['unchanged']][0] self.assertEqual('web_server', unchanged['resource_name']) self.assertNotEqual('None', unchanged['resource_identity']['stack_id']) empty_sections = ('deleted', 'replaced', 'updated') for section in empty_sections: section_contents = [x for x in result[section]] self.assertEqual([], section_contents) def test_stack_update_preview_replaced(self): # new template with a different key_name new_tmpl = self.old_tmpl.replace('test', 'test2') result = self._test_stack_update_preview(self.old_tmpl, new_tmpl) replaced = [x for x in result['replaced']][0] self.assertEqual('web_server', replaced['resource_name']) empty_sections = ('added', 'deleted', 'unchanged', 'updated') for section in empty_sections: section_contents = [x for x in result[section]] self.assertEqual([], section_contents) def test_stack_update_preview_replaced_type(self): # new template with a different type for web_server new_tmpl = self.old_tmpl.replace('OS::Nova::Server', 'OS::Heat::None') result = self._test_stack_update_preview(self.old_tmpl, new_tmpl) replaced = [x for x in result['replaced']][0] self.assertEqual('web_server', replaced['resource_name']) empty_sections = ('added', 'deleted', 'unchanged', 'updated') for section in empty_sections: section_contents = [x for x in result[section]] self.assertEqual([], section_contents) def test_stack_update_preview_updated(self): # new template changes to flavor of server new_tmpl = self.old_tmpl.replace('m1.large', 'm1.small') result = self._test_stack_update_preview(self.old_tmpl, new_tmpl) updated = [x for x in result['updated']][0] self.assertEqual('web_server', updated['resource_name']) empty_sections = ('added', 'deleted', 'unchanged', 'replaced') for section in empty_sections: section_contents = [x for x in result[section]] self.assertEqual([], section_contents) def test_stack_update_preview_deleted(self): # do the reverse direction, i.e. 
# delete resources
        result = self._test_stack_update_preview(self.new_tmpl,
                                                 self.old_tmpl)

        deleted = [x for x in result['deleted']][0]
        self.assertEqual('password', deleted['resource_name'])

        unchanged = [x for x in result['unchanged']][0]
        self.assertEqual('web_server', unchanged['resource_name'])

        empty_sections = ('added', 'updated', 'replaced')
        for section in empty_sections:
            section_contents = [x for x in result[section]]
            self.assertEqual([], section_contents)

    def test_stack_update_preview_with_environment_files(self):
        # Setup
        environment_files = ['env_1']

        # Test
        self._test_stack_update_preview(self.old_tmpl, self.new_tmpl,
                                        environment_files=environment_files)

        # Assertions done in _test_stack_update_preview

    def test_reset_stack_and_resources_in_progress(self):

        def mock_stack_resource(name, action, status):
            rs = mock.MagicMock()
            rs.name = name
            rs.action = action
            rs.status = status
            rs.IN_PROGRESS = 'IN_PROGRESS'
            rs.FAILED = 'FAILED'

            def mock_resource_state_set(a, s, reason='engine_down'):
                rs.status = s
                rs.action = a
                rs.status_reason = reason

            rs.state_set = mock_resource_state_set
            return rs

        stk_name = 'test_stack'
        stk = tools.get_stack(stk_name, self.ctx)
        stk.action = 'CREATE'
        stk.status = 'IN_PROGRESS'
        resources = {'r1': mock_stack_resource('r1', 'UPDATE', 'COMPLETE'),
                     'r2': mock_stack_resource('r2', 'UPDATE', 'IN_PROGRESS'),
                     'r3': mock_stack_resource('r3', 'UPDATE', 'FAILED')}
        stk._resources = resources
        reason = 'Test resetting stack and resources in progress'

        stk.reset_stack_and_resources_in_progress(reason)
        self.assertEqual('FAILED', stk.status)
        self.assertEqual('COMPLETE', stk.resources.get('r1').status)
        self.assertEqual('FAILED', stk.resources.get('r2').status)
        self.assertEqual('FAILED', stk.resources.get('r3').status)
heat-10.0.2/heat/tests/engine/service/__init__.py0000666000175000017500000000000013343562340021631 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/engine/service/test_stack_action.py0000666000175000017500000001447213343562340023615 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
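# ---------------------------------------------------------------------------
# Illustrative aside (not part of the original source): the tests in this
# module unwrap engine-side errors from oslo.messaging's ExpectedException
# and then inspect ``exc_info``. Below is a minimal, self-contained sketch of
# that wrapper; ``_failing_call`` is a hypothetical stand-in for an
# EngineService method.
# ---------------------------------------------------------------------------
from oslo_messaging.rpc import dispatcher as _dispatcher


def _failing_call():
    try:
        raise ValueError('stack not found')
    except ValueError:
        # ExpectedException captures sys.exc_info() when constructed, so the
        # RPC layer can re-raise the original error on the client side.
        raise _dispatcher.ExpectedException()


try:
    _failing_call()
except _dispatcher.ExpectedException as _wrapped:
    # exc_info is the usual (type, value, traceback) triple.
    assert _wrapped.exc_info[0] is ValueError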
import mock
from oslo_messaging.rpc import dispatcher

from heat.common import exception
from heat.common import template_format
from heat.engine import service
from heat.engine import stack
from heat.engine import template as templatem
from heat.objects import stack as stack_object
from heat.tests import common
from heat.tests.engine import tools
from heat.tests import utils


class StackServiceActionsTest(common.HeatTestCase):

    def setUp(self):
        super(StackServiceActionsTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.man = service.EngineService('a-host', 'a-topic')
        self.man.thread_group_mgr = service.ThreadGroupManager()

    @mock.patch.object(stack.Stack, 'load')
    @mock.patch.object(service.ThreadGroupManager, 'start')
    def test_stack_suspend(self, mock_start, mock_load):
        stack_name = 'service_suspend_test_stack'
        t = template_format.parse(tools.wp_template)
        stk = utils.parse_stack(t, stack_name=stack_name)
        s = stack_object.Stack.get_by_id(self.ctx, stk.id)
        mock_load.return_value = stk

        thread = mock.MagicMock()
        mock_link = self.patchobject(thread, 'link')
        mock_start.return_value = thread
        self.patchobject(service, 'NotifyEvent')

        result = self.man.stack_suspend(self.ctx, stk.identifier())
        self.assertIsNone(result)

        mock_load.assert_called_once_with(self.ctx, stack=s)
        mock_link.assert_called_once_with(mock.ANY)
        mock_start.assert_called_once_with(stk.id, stk.suspend,
                                           notify=mock.ANY)

        stk.delete()

    @mock.patch.object(stack.Stack, 'load')
    @mock.patch.object(service.ThreadGroupManager, 'start')
    def test_stack_resume(self, mock_start, mock_load):
        stack_name = 'service_resume_test_stack'
        t = template_format.parse(tools.wp_template)
        stk = utils.parse_stack(t, stack_name=stack_name)
        mock_load.return_value = stk

        thread = mock.MagicMock()
        mock_link = self.patchobject(thread, 'link')
        mock_start.return_value = thread
        self.patchobject(service, 'NotifyEvent')

        result = self.man.stack_resume(self.ctx, stk.identifier())
        self.assertIsNone(result)

        mock_load.assert_called_once_with(self.ctx, stack=mock.ANY)
        mock_link.assert_called_once_with(mock.ANY)
        mock_start.assert_called_once_with(stk.id, stk.resume,
                                           notify=mock.ANY)

        stk.delete()

    def test_stack_suspend_nonexist(self):
        stack_name = 'service_suspend_nonexist_test_stack'
        t = template_format.parse(tools.wp_template)
        tmpl = templatem.Template(t)
        stk = stack.Stack(self.ctx, stack_name, tmpl)

        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.man.stack_suspend,
                               self.ctx, stk.identifier())
        self.assertEqual(exception.EntityNotFound, ex.exc_info[0])

    def test_stack_resume_nonexist(self):
        stack_name = 'service_resume_nonexist_test_stack'
        t = template_format.parse(tools.wp_template)
        tmpl = templatem.Template(t)
        stk = stack.Stack(self.ctx, stack_name, tmpl)

        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.man.stack_resume,
                               self.ctx, stk.identifier())
        self.assertEqual(exception.EntityNotFound, ex.exc_info[0])

    def _mock_thread_start(self, stack_id, func, *args, **kwargs):
        func(*args, **kwargs)
        return mock.Mock()

    @mock.patch.object(service.ThreadGroupManager, 'start')
    @mock.patch.object(stack.Stack, 'load')
    def test_stack_check(self, mock_load, mock_start):
        stack_name = 'service_check_test_stack'
        t = template_format.parse(tools.wp_template)
        stk = utils.parse_stack(t, stack_name=stack_name)
        stk.check = mock.Mock()
        self.patchobject(service, 'NotifyEvent')
        mock_load.return_value = stk
        mock_start.side_effect = self._mock_thread_start

        self.man.stack_check(self.ctx, stk.identifier())
        self.assertTrue(stk.check.called)

        stk.delete()


class StackServiceUpdateActionsNotSupportedTest(common.HeatTestCase):

    scenarios = [
        ('suspend_in_progress', dict(action='SUSPEND',
                                     status='IN_PROGRESS')),
        ('suspend_complete', dict(action='SUSPEND', status='COMPLETE')),
        ('suspend_failed', dict(action='SUSPEND', status='FAILED')),
        ('delete_in_progress', dict(action='DELETE',
                                    status='IN_PROGRESS')),
        ('delete_complete', dict(action='DELETE', status='COMPLETE')),
        ('delete_failed', dict(action='DELETE', status='FAILED')),
    ]

    def setUp(self):
        super(StackServiceUpdateActionsNotSupportedTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.man = service.EngineService('a-host', 'a-topic')

    @mock.patch.object(stack.Stack, 'load')
    def test_stack_update_actions_not_supported(self, mock_load):
        stack_name = '%s-%s' % (self.action, self.status)
        t = template_format.parse(tools.wp_template)
        old_stack = utils.parse_stack(t, stack_name=stack_name)
        old_stack.action = self.action
        old_stack.status = self.status

        sid = old_stack.store()
        s = stack_object.Stack.get_by_id(self.ctx, sid)
        mock_load.return_value = old_stack

        params = {'foo': 'bar'}
        template = '{ "Resources": {} }'
        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.man.update_stack,
                               self.ctx, old_stack.identifier(),
                               template, params, None, {})
        self.assertEqual(exception.NotSupported, ex.exc_info[0])
        mock_load.assert_called_once_with(self.ctx, stack=s)

        old_stack.delete()
heat-10.0.2/heat/tests/engine/service/test_stack_create.py0000666000175000017500000004220413343562351023577 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
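# ---------------------------------------------------------------------------
# Illustrative aside (not part of the original source): the ``scenarios``
# attribute on StackServiceUpdateActionsNotSupportedTest above follows the
# testscenarios pattern, where each (name, attrs) pair re-runs every test
# method with the attrs bound onto the instance; HeatTestCase presumably
# mixes this support in. A stand-alone sketch of the mechanism; the class
# and test names here are made up for illustration.
# ---------------------------------------------------------------------------
import testscenarios
import testtools


class _ActionScenarioSketch(testscenarios.WithScenarios, testtools.TestCase):
    scenarios = [
        ('suspend', dict(action='SUSPEND')),
        ('delete', dict(action='DELETE')),
    ]

    def test_action_is_bound(self):
        # Runs once per scenario, with self.action set from the dict.
        self.assertIn(self.action, ('SUSPEND', 'DELETE'))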
import mock from oslo_config import cfg from oslo_messaging.rpc import dispatcher from oslo_service import threadgroup import six from heat.common import environment_util as env_util from heat.common import exception from heat.engine.clients.os import glance from heat.engine.clients.os import nova from heat.engine import environment from heat.engine import properties from heat.engine.resources.aws.ec2 import instance as instances from heat.engine import service from heat.engine import stack from heat.engine import template as templatem from heat.objects import stack as stack_object from heat.tests import common from heat.tests.engine import tools from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils class StackCreateTest(common.HeatTestCase): def setUp(self): super(StackCreateTest, self).setUp() self.ctx = utils.dummy_context() self.man = service.EngineService('a-host', 'a-topic') self.man.thread_group_mgr = service.ThreadGroupManager() @mock.patch.object(threadgroup, 'ThreadGroup') @mock.patch.object(stack.Stack, 'validate') def _test_stack_create(self, stack_name, mock_validate, mock_tg, environment_files=None): mock_tg.return_value = tools.DummyThreadGroup() params = {'foo': 'bar'} template = '{ "Template": "data" }' stk = tools.get_stack(stack_name, self.ctx) mock_tmpl = self.patchobject(templatem, 'Template', return_value=stk.t) mock_env = self.patchobject(environment, 'Environment', return_value=stk.env) mock_stack = self.patchobject(stack, 'Stack', return_value=stk) mock_merge = self.patchobject(env_util, 'merge_environments') result = self.man.create_stack(self.ctx, stack_name, template, params, None, {}, environment_files=environment_files) self.assertEqual(stk.identifier(), result) self.assertIsInstance(result, dict) self.assertTrue(result['stack_id']) mock_tmpl.assert_called_once_with(template, files=None) mock_env.assert_called_once_with(params) mock_stack.assert_called_once_with( self.ctx, stack_name, stk.t, owner_id=None, nested_depth=0, user_creds_id=None, stack_user_project_id=None, convergence=cfg.CONF.convergence_engine, parent_resource=None) if environment_files: mock_merge.assert_called_once_with(environment_files, None, params, mock.ANY) mock_validate.assert_called_once_with() def test_stack_create(self): stack_name = 'service_create_test_stack' self._test_stack_create(stack_name) def test_stack_create_with_environment_files(self): stack_name = 'env_files_test_stack' environment_files = ['env_1', 'env_2'] self._test_stack_create(stack_name, environment_files=environment_files) def test_stack_create_equals_max_per_tenant(self): cfg.CONF.set_override('max_stacks_per_tenant', 1) stack_name = 'service_create_test_stack_equals_max' self._test_stack_create(stack_name) def test_stack_create_exceeds_max_per_tenant(self): cfg.CONF.set_override('max_stacks_per_tenant', 0) stack_name = 'service_create_test_stack_exceeds_max' ex = self.assertRaises(dispatcher.ExpectedException, self._test_stack_create, stack_name) self.assertEqual(exception.RequestLimitExceeded, ex.exc_info[0]) self.assertIn("You have reached the maximum stacks per tenant", six.text_type(ex.exc_info[1])) @mock.patch.object(stack.Stack, 'validate') def test_stack_create_verify_err(self, mock_validate): mock_validate.side_effect = exception.StackValidationFailed(message='') stack_name = 'service_create_verify_err_test_stack' params = {'foo': 'bar'} template = '{ "Template": "data" }' stk = tools.get_stack(stack_name, self.ctx) mock_tmpl = self.patchobject(templatem, 'Template', 
return_value=stk.t) mock_env = self.patchobject(environment, 'Environment', return_value=stk.env) mock_stack = self.patchobject(stack, 'Stack', return_value=stk) ex = self.assertRaises(dispatcher.ExpectedException, self.man.create_stack, self.ctx, stack_name, template, params, None, {}) self.assertEqual(exception.StackValidationFailed, ex.exc_info[0]) mock_tmpl.assert_called_once_with(template, files=None) mock_env.assert_called_once_with(params) mock_stack.assert_called_once_with( self.ctx, stack_name, stk.t, owner_id=None, nested_depth=0, user_creds_id=None, stack_user_project_id=None, convergence=cfg.CONF.convergence_engine, parent_resource=None) def test_stack_create_invalid_stack_name(self): stack_name = 'service_create_test_stack_invalid_name' stack = tools.get_stack('test_stack', self.ctx) self.assertRaises(dispatcher.ExpectedException, self.man.create_stack, self.ctx, stack_name, stack.t.t, {}, None, {}) def test_stack_create_invalid_resource_name(self): stack_name = 'stack_create_invalid_resource_name' stk = tools.get_stack(stack_name, self.ctx) tmpl = dict(stk.t) tmpl['resources']['Web/Server'] = tmpl['resources']['WebServer'] del tmpl['resources']['WebServer'] ex = self.assertRaises(dispatcher.ExpectedException, self.man.create_stack, self.ctx, stack_name, stk.t.t, {}, None, {}) self.assertEqual(exception.StackValidationFailed, ex.exc_info[0]) @mock.patch.object(stack.Stack, 'create_stack_user_project_id') def test_stack_create_authorization_failure(self, mock_create): stack_name = 'stack_create_authorization_failure' stk = tools.get_stack(stack_name, self.ctx) mock_create.side_effect = exception.AuthorizationFailure ex = self.assertRaises(dispatcher.ExpectedException, self.man.create_stack, self.ctx, stack_name, stk.t.t, {}, None, {}) self.assertEqual(exception.StackValidationFailed, ex.exc_info[0]) def test_stack_create_no_credentials(self): cfg.CONF.set_default('deferred_auth_method', 'password') stack_name = 'test_stack_create_no_credentials' params = {'foo': 'bar'} template = '{ "Template": "data" }' stk = tools.get_stack(stack_name, self.ctx) # force check for credentials on create stk['WebServer'].requires_deferred_auth = True mock_tmpl = self.patchobject(templatem, 'Template', return_value=stk.t) mock_env = self.patchobject(environment, 'Environment', return_value=stk.env) mock_stack = self.patchobject(stack, 'Stack', return_value=stk) # test stack create using context without password ctx_no_pwd = utils.dummy_context(password=None) ex = self.assertRaises(dispatcher.ExpectedException, self.man.create_stack, ctx_no_pwd, stack_name, template, params, None, {}, None) self.assertEqual(exception.MissingCredentialError, ex.exc_info[0]) self.assertEqual('Missing required credential: X-Auth-Key', six.text_type(ex.exc_info[1])) mock_tmpl.assert_called_once_with(template, files=None) mock_env.assert_called_once_with(params) mock_stack.assert_called_once_with( ctx_no_pwd, stack_name, stk.t, owner_id=None, nested_depth=0, user_creds_id=None, stack_user_project_id=None, convergence=cfg.CONF.convergence_engine, parent_resource=None) mock_tmpl.reset_mock() mock_env.reset_mock() mock_stack.reset_mock() # test stack create using context without user ctx_no_pwd = utils.dummy_context(password=None) ctx_no_user = utils.dummy_context(user=None) ex = self.assertRaises(dispatcher.ExpectedException, self.man.create_stack, ctx_no_user, stack_name, template, params, None, {}) self.assertEqual(exception.MissingCredentialError, ex.exc_info[0]) self.assertEqual('Missing required credential: 
X-Auth-User', six.text_type(ex.exc_info[1])) mock_tmpl.assert_called_once_with(template, files=None) mock_env.assert_called_once_with(params) mock_stack.assert_called_once_with( ctx_no_user, stack_name, stk.t, owner_id=None, nested_depth=0, user_creds_id=None, stack_user_project_id=None, convergence=cfg.CONF.convergence_engine, parent_resource=None) @mock.patch.object(stack_object.Stack, 'count_total_resources') def test_stack_create_total_resources_equals_max(self, ctr): stack_name = 'stack_create_total_resources_equals_max' params = {} tpl = { 'heat_template_version': '2014-10-16', 'resources': { 'A': {'type': 'GenericResourceType'}, 'B': {'type': 'GenericResourceType'}, 'C': {'type': 'GenericResourceType'} } } template = templatem.Template(tpl) stk = stack.Stack(self.ctx, stack_name, template) ctr.return_value = 3 mock_tmpl = self.patchobject(templatem, 'Template', return_value=stk.t) mock_env = self.patchobject(environment, 'Environment', return_value=stk.env) mock_stack = self.patchobject(stack, 'Stack', return_value=stk) cfg.CONF.set_override('max_resources_per_stack', 3) result = self.man.create_stack(self.ctx, stack_name, template, params, None, {}) mock_tmpl.assert_called_once_with(template, files=None) mock_env.assert_called_once_with(params) mock_stack.assert_called_once_with( self.ctx, stack_name, stk.t, owner_id=None, nested_depth=0, user_creds_id=None, stack_user_project_id=None, convergence=cfg.CONF.convergence_engine, parent_resource=None) self.assertEqual(stk.identifier(), result) root_stack_id = stk.root_stack_id() self.assertEqual(3, stk.total_resources(root_stack_id)) self.man.thread_group_mgr.groups[stk.id].wait() stk.delete() def test_stack_create_total_resources_exceeds_max(self): stack_name = 'stack_create_total_resources_exceeds_max' params = {} tpl = { 'heat_template_version': '2014-10-16', 'resources': { 'A': {'type': 'GenericResourceType'}, 'B': {'type': 'GenericResourceType'}, 'C': {'type': 'GenericResourceType'} } } cfg.CONF.set_override('max_resources_per_stack', 2) ex = self.assertRaises(dispatcher.ExpectedException, self.man.create_stack, self.ctx, stack_name, tpl, params, None, {}) self.assertEqual(exception.RequestLimitExceeded, ex.exc_info[0]) self.assertIn(exception.StackResourceLimitExceeded.msg_fmt, six.text_type(ex.exc_info[1])) @mock.patch.object(threadgroup, 'ThreadGroup') @mock.patch.object(stack.Stack, 'validate') def test_stack_create_nested(self, mock_validate, mock_tg): convergence_engine = cfg.CONF.convergence_engine stack_name = 'service_create_nested_test_stack' parent_stack = tools.get_stack(stack_name + '_parent', self.ctx) owner_id = parent_stack.store() mock_tg.return_value = tools.DummyThreadGroup() stk = tools.get_stack(stack_name, self.ctx, with_params=True, owner_id=owner_id, nested_depth=1) tmpl_id = stk.t.store(self.ctx) mock_load = self.patchobject(templatem.Template, 'load', return_value=stk.t) mock_stack = self.patchobject(stack, 'Stack', return_value=stk) result = self.man.create_stack(self.ctx, stack_name, None, None, None, {}, owner_id=owner_id, nested_depth=1, template_id=tmpl_id) self.assertEqual(stk.identifier(), result) self.assertIsInstance(result, dict) self.assertTrue(result['stack_id']) mock_load.assert_called_once_with(self.ctx, tmpl_id) mock_stack.assert_called_once_with(self.ctx, stack_name, stk.t, owner_id=owner_id, nested_depth=1, user_creds_id=None, stack_user_project_id=None, convergence=convergence_engine, parent_resource=None) mock_validate.assert_called_once_with() def test_stack_validate(self): stack_name 
= 'stack_create_test_validate' stk = tools.get_stack(stack_name, self.ctx) fc = fakes_nova.FakeClient() self.patchobject(nova.NovaClientPlugin, '_create', return_value=fc) self.patchobject(glance.GlanceClientPlugin, 'find_image_by_name_or_id', return_value=744) resource = stk['WebServer'] resource.properties = properties.Properties( resource.properties_schema, { 'ImageId': 'CentOS 5.2', 'KeyName': 'test', 'InstanceType': 'm1.large' }, context=self.ctx) stk.validate() resource.properties = properties.Properties( resource.properties_schema, { 'KeyName': 'test', 'InstanceType': 'm1.large' }, context=self.ctx) self.assertRaises(exception.StackValidationFailed, stk.validate) def test_validate_deferred_auth_context_trusts(self): stk = tools.get_stack('test_deferred_auth', self.ctx) stk['WebServer'].requires_deferred_auth = True ctx = utils.dummy_context(user=None, password=None) cfg.CONF.set_default('deferred_auth_method', 'trusts') # using trusts, no username or password required self.man._validate_deferred_auth_context(ctx, stk) def test_validate_deferred_auth_context_not_required(self): stk = tools.get_stack('test_deferred_auth', self.ctx) stk['WebServer'].requires_deferred_auth = False ctx = utils.dummy_context(user=None, password=None) cfg.CONF.set_default('deferred_auth_method', 'password') # stack performs no deferred operations, so no username or # password required self.man._validate_deferred_auth_context(ctx, stk) def test_validate_deferred_auth_context_missing_credentials(self): stk = tools.get_stack('test_deferred_auth', self.ctx) stk['WebServer'].requires_deferred_auth = True cfg.CONF.set_default('deferred_auth_method', 'password') # missing username ctx = utils.dummy_context(user=None) ex = self.assertRaises(exception.MissingCredentialError, self.man._validate_deferred_auth_context, ctx, stk) self.assertEqual('Missing required credential: X-Auth-User', six.text_type(ex)) # missing password ctx = utils.dummy_context(password=None) ex = self.assertRaises(exception.MissingCredentialError, self.man._validate_deferred_auth_context, ctx, stk) self.assertEqual('Missing required credential: X-Auth-Key', six.text_type(ex)) @mock.patch.object(instances.Instance, 'validate') @mock.patch.object(stack.Stack, 'total_resources') def test_stack_create_max_unlimited(self, total_res_mock, validate_mock): total_res_mock.return_value = 9999 validate_mock.return_value = None cfg.CONF.set_override('max_resources_per_stack', -1) stack_name = 'service_create_test_max_unlimited' self._test_stack_create(stack_name) heat-10.0.2/heat/tests/engine/test_resource_type.py0000666000175000017500000002352013343562351022377 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
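# ---------------------------------------------------------------------------
# Illustrative aside (not part of the original source): the create tests
# above flip engine limits via cfg.CONF.set_override(), relying on the test
# base class to undo the override afterwards. Outside such a base class the
# oslo.config fixture gives the same behaviour explicitly; a minimal sketch
# with a made-up option name:
# ---------------------------------------------------------------------------
from oslo_config import cfg as _cfg
from oslo_config import fixture as _config_fixture

_conf = _cfg.ConfigOpts()
_conf.register_opt(_cfg.IntOpt('example_max_resources', default=1000))

_fix = _config_fixture.Config(_conf)
_fix.setUp()                           # install the fixture
_fix.config(example_max_resources=2)   # override; reverted by cleanUp()
assert _conf.example_max_resources == 2
_fix.cleanUp()
assert _conf.example_max_resources == 1000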
import mock
import six

from heat.common import exception
from heat.engine import environment
from heat.engine import resource as res
from heat.engine import service
from heat.tests import common
from heat.tests import generic_resource as generic_rsrc
from heat.tests import utils


class ResourceTypeTest(common.HeatTestCase):

    def setUp(self):
        super(ResourceTypeTest, self).setUp()
        self.ctx = utils.dummy_context(tenant_id='stack_service_test_tenant')
        self.eng = service.EngineService('a-host', 'a-topic')

    @mock.patch.object(res.Resource, 'is_service_available')
    def test_list_resource_types(self, mock_is_service_available):
        mock_is_service_available.return_value = (True, None)
        resources = self.eng.list_resource_types(self.ctx)
        self.assertIsInstance(resources, list)
        self.assertIn('AWS::EC2::Instance', resources)
        self.assertIn('AWS::RDS::DBInstance', resources)

    @mock.patch.object(res.Resource, 'is_service_available')
    def test_list_resource_types_deprecated(self,
                                            mock_is_service_available):
        mock_is_service_available.return_value = (True, None)
        resources = self.eng.list_resource_types(self.ctx, "DEPRECATED")
        self.assertEqual(set(['OS::Aodh::Alarm',
                              'OS::Magnum::Bay',
                              'OS::Magnum::BayModel',
                              'OS::Glance::Image',
                              'OS::Nova::FloatingIP',
                              'OS::Nova::FloatingIPAssociation']),
                         set(resources))

    @mock.patch.object(res.Resource, 'is_service_available')
    def test_list_resource_types_supported(self,
                                           mock_is_service_available):
        mock_is_service_available.return_value = (True, None)
        resources = self.eng.list_resource_types(self.ctx, "SUPPORTED")
        self.assertNotIn(['OS::Neutron::RouterGateway'], resources)
        self.assertIn('AWS::EC2::Instance', resources)

    @mock.patch.object(res.Resource, 'is_service_available')
    def test_list_resource_types_unavailable(
            self, mock_is_service_available):
        mock_is_service_available.return_value = (
            False, 'Service endpoint not in service catalog.')
        resources = self.eng.list_resource_types(self.ctx)
        # Check for a known resource, not listed
        self.assertNotIn('OS::Nova::Server', resources)

    @mock.patch.object(res.Resource, 'is_service_available')
    def test_list_resource_types_with_descr(self,
                                            mock_is_service_available):
        mock_is_service_available.return_value = (True, None)
        resources = self.eng.list_resource_types(self.ctx,
                                                 with_description=True)
        self.assertIsInstance(resources, list)
        self.assertIn({'resource_type': 'AWS::RDS::DBInstance',
                       'description': 'Builtin AWS::RDS::DBInstance'},
                      resources)
        self.assertIn({'resource_type': 'AWS::EC2::Instance',
                       'description': 'No description available'},
                      resources)

    def test_resource_schema(self):
        type_name = 'ResourceWithPropsType'
        expected = {
            'resource_type': type_name,
            'properties': {
                'Foo': {
                    'type': 'string',
                    'required': False,
                    'update_allowed': False,
                    'immutable': False,
                },
                'FooInt': {
                    'type': 'integer',
                    'required': False,
                    'update_allowed': False,
                    'immutable': False,
                },
            },
            'attributes': {
                'foo': {'description': 'A generic attribute'},
                'Foo': {'description': 'Another generic attribute'},
                'show': {
                    'description': 'Detailed information about resource.',
                    'type': 'map'},
            },
            'support_status': {
                'status': 'SUPPORTED',
                'version': None,
                'message': None,
                'previous_status': None
            },
            'description': 'No description available'
        }
        schema = self.eng.resource_schema(self.ctx, type_name=type_name,
                                          with_description=True)
        self.assertEqual(expected, schema)

    def test_resource_schema_for_hidden_type(self):
        type_name = 'ResourceTypeHidden'
        self.assertRaises(exception.NotSupported, self.eng.resource_schema,
                          self.ctx, type_name)

    def test_generate_template_for_hidden_type(self):
        type_name = 'ResourceTypeHidden'
        self.assertRaises(exception.NotSupported, self.eng.generate_template,
                          self.ctx, type_name)

    def test_resource_schema_with_attr_type(self):
        type_name = 'ResourceWithAttributeType'
        expected = {
            'resource_type': type_name,
            'properties': {},
            'attributes': {
                'attr1': {'description': 'A generic attribute',
                          'type': 'string'},
                'attr2': {'description': 'Another generic attribute',
                          'type': 'map'},
                'show': {
                    'description': 'Detailed information about resource.',
                    'type': 'map'},
            },
            'support_status': {
                'status': 'SUPPORTED',
                'version': None,
                'message': None,
                'previous_status': None
            }
        }
        schema = self.eng.resource_schema(self.ctx, type_name=type_name)
        self.assertEqual(expected, schema)

    def test_resource_schema_with_hidden(self):
        type_name = 'ResourceWithHiddenPropertyAndAttribute'
        expected = {
            'resource_type': type_name,
            'properties': {
                'supported': {
                    'description': "Supported property.",
                    'type': 'list',
                    'immutable': False,
                    'required': False,
                    'update_allowed': False
                }
            },
            'attributes': {
                'supported': {'description': 'Supported attribute.',
                              'type': 'string'},
                'show': {
                    'description': 'Detailed information about resource.',
                    'type': 'map'},
            },
            'support_status': {
                'status': 'SUPPORTED',
                'version': None,
                'message': None,
                'previous_status': None
            }
        }
        schema = self.eng.resource_schema(self.ctx, type_name=type_name)
        self.assertEqual(expected, schema)

    def _no_template_file(self, function):
        env = environment.Environment()
        info = environment.ResourceInfo(env.registry,
                                        ['ResourceWithWrongRefOnFile'],
                                        'not_existing.yaml')
        mock_iterable = mock.MagicMock(return_value=iter([info]))
        with mock.patch(
                'heat.engine.environment.ResourceRegistry.iterable_by',
                new=mock_iterable):
            ex = self.assertRaises(exception.InvalidGlobalResource,
                                   function,
                                   self.ctx,
                                   type_name='ResourceWithWrongRefOnFile')
            msg = ('There was an error loading the definition of the global '
                   'resource type ResourceWithWrongRefOnFile.')
            self.assertIn(msg, six.text_type(ex))

    def test_resource_schema_no_template_file(self):
        self._no_template_file(self.eng.resource_schema)

    def test_generate_template_no_template_file(self):
        self._no_template_file(self.eng.generate_template)

    def test_resource_schema_nonexist(self):
        ex = self.assertRaises(exception.EntityNotFound,
                               self.eng.resource_schema,
                               self.ctx, type_name='Bogus')
        msg = 'The Resource Type (Bogus) could not be found.'
        self.assertEqual(msg, six.text_type(ex))

    def test_resource_schema_unavailable(self):
        type_name = 'ResourceWithDefaultClientName'

        with mock.patch.object(
                generic_rsrc.ResourceWithDefaultClientName,
                'is_service_available') as mock_is_service_available:
            mock_is_service_available.return_value = (
                False, 'Service endpoint not in service catalog.')
            ex = self.assertRaises(exception.ResourceTypeUnavailable,
                                   self.eng.resource_schema,
                                   self.ctx,
                                   type_name)
            msg = ('HEAT-E99001 Service sample is not available for resource '
                   'type ResourceWithDefaultClientName, reason: '
                   'Service endpoint not in service catalog.')
            self.assertEqual(msg, six.text_type(ex),
                             'invalid exception message')
            mock_is_service_available.assert_called_once_with(self.ctx)
heat-10.0.2/heat/tests/engine/__init__.py0000666000175000017500000000000013343562340020171 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/engine/test_scheduler.py0000666000175000017500000014761313343562351021473 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import itertools import eventlet import six from heat.common.i18n import repr_wrapper from heat.common import timeutils from heat.engine import dependencies from heat.engine import scheduler from heat.tests import common class DummyTask(object): def __init__(self, num_steps=3, delays=None): self.num_steps = num_steps if delays is not None: self.delays = iter(delays) else: self.delays = itertools.repeat(None) def __call__(self, *args, **kwargs): for i in range(1, self.num_steps + 1): self.do_step(i, *args, **kwargs) yield next(self.delays) def do_step(self, step_num, *args, **kwargs): pass class ExceptionGroupTest(common.HeatTestCase): def test_contains_exceptions(self): exception_group = scheduler.ExceptionGroup() self.assertIsInstance(exception_group.exceptions, list) def test_can_be_initialized_with_a_list_of_exceptions(self): ex1 = Exception("ex 1") ex2 = Exception("ex 2") exception_group = scheduler.ExceptionGroup([ex1, ex2]) self.assertIn(ex1, exception_group.exceptions) self.assertIn(ex2, exception_group.exceptions) def test_can_add_exceptions_after_init(self): ex = Exception() exception_group = scheduler.ExceptionGroup() exception_group.exceptions.append(ex) self.assertIn(ex, exception_group.exceptions) def test_str_representation_aggregates_all_exceptions(self): ex1 = Exception("ex 1") ex2 = Exception("ex 2") exception_group = scheduler.ExceptionGroup([ex1, ex2]) self.assertEqual("['ex 1', 'ex 2']", six.text_type(exception_group)) class DependencyTaskGroupTest(common.HeatTestCase): def setUp(self): super(DependencyTaskGroupTest, self).setUp() self.addCleanup(self.m.VerifyAll) self.aggregate_exceptions = False self.error_wait_time = None self.reverse_order = False @contextlib.contextmanager def _dep_test(self, *edges): dummy = DummyTask(getattr(self, 'steps', 3)) deps = dependencies.Dependencies(edges) tg = scheduler.DependencyTaskGroup( deps, dummy, reverse=self.reverse_order, error_wait_time=self.error_wait_time, aggregate_exceptions=self.aggregate_exceptions) self.m.StubOutWithMock(dummy, 'do_step') yield dummy self.m.ReplayAll() scheduler.TaskRunner(tg)(wait_time=None) def test_no_steps(self): self.steps = 0 self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') with self._dep_test(('second', 'first')): pass def test_single_node(self): with self._dep_test(('only', None)) as dummy: dummy.do_step(1, 'only').AndReturn(None) dummy.do_step(2, 'only').AndReturn(None) dummy.do_step(3, 'only').AndReturn(None) def test_disjoint(self): with self._dep_test(('1', None), ('2', None)) as dummy: dummy.do_step(1, '1').InAnyOrder('1') dummy.do_step(1, '2').InAnyOrder('1') dummy.do_step(2, '1').InAnyOrder('2') dummy.do_step(2, '2').InAnyOrder('2') dummy.do_step(3, '1').InAnyOrder('3') dummy.do_step(3, '2').InAnyOrder('3') def test_single_fwd(self): with self._dep_test(('second', 'first')) as dummy: dummy.do_step(1, 'first').AndReturn(None) dummy.do_step(2, 'first').AndReturn(None) dummy.do_step(3, 'first').AndReturn(None) dummy.do_step(1, 'second').AndReturn(None) dummy.do_step(2, 'second').AndReturn(None) dummy.do_step(3, 'second').AndReturn(None) def test_chain_fwd(self): with 
self._dep_test(('third', 'second'), ('second', 'first')) as dummy: dummy.do_step(1, 'first').AndReturn(None) dummy.do_step(2, 'first').AndReturn(None) dummy.do_step(3, 'first').AndReturn(None) dummy.do_step(1, 'second').AndReturn(None) dummy.do_step(2, 'second').AndReturn(None) dummy.do_step(3, 'second').AndReturn(None) dummy.do_step(1, 'third').AndReturn(None) dummy.do_step(2, 'third').AndReturn(None) dummy.do_step(3, 'third').AndReturn(None) def test_diamond_fwd(self): with self._dep_test(('last', 'mid1'), ('last', 'mid2'), ('mid1', 'first'), ('mid2', 'first')) as dummy: dummy.do_step(1, 'first').AndReturn(None) dummy.do_step(2, 'first').AndReturn(None) dummy.do_step(3, 'first').AndReturn(None) dummy.do_step(1, 'mid1').InAnyOrder('1') dummy.do_step(1, 'mid2').InAnyOrder('1') dummy.do_step(2, 'mid1').InAnyOrder('2') dummy.do_step(2, 'mid2').InAnyOrder('2') dummy.do_step(3, 'mid1').InAnyOrder('3') dummy.do_step(3, 'mid2').InAnyOrder('3') dummy.do_step(1, 'last').AndReturn(None) dummy.do_step(2, 'last').AndReturn(None) dummy.do_step(3, 'last').AndReturn(None) def test_complex_fwd(self): with self._dep_test(('last', 'mid1'), ('last', 'mid2'), ('mid1', 'mid3'), ('mid1', 'first'), ('mid3', 'first'), ('mid2', 'first')) as dummy: dummy.do_step(1, 'first').AndReturn(None) dummy.do_step(2, 'first').AndReturn(None) dummy.do_step(3, 'first').AndReturn(None) dummy.do_step(1, 'mid2').InAnyOrder('1') dummy.do_step(1, 'mid3').InAnyOrder('1') dummy.do_step(2, 'mid2').InAnyOrder('2') dummy.do_step(2, 'mid3').InAnyOrder('2') dummy.do_step(3, 'mid2').InAnyOrder('3') dummy.do_step(3, 'mid3').InAnyOrder('3') dummy.do_step(1, 'mid1').AndReturn(None) dummy.do_step(2, 'mid1').AndReturn(None) dummy.do_step(3, 'mid1').AndReturn(None) dummy.do_step(1, 'last').AndReturn(None) dummy.do_step(2, 'last').AndReturn(None) dummy.do_step(3, 'last').AndReturn(None) def test_many_edges_fwd(self): with self._dep_test(('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'), ('mid1', 'e2'), ('mid1', 'mid3'), ('mid2', 'mid3'), ('mid3', 'e3')) as dummy: dummy.do_step(1, 'e1').InAnyOrder('1edges') dummy.do_step(1, 'e2').InAnyOrder('1edges') dummy.do_step(1, 'e3').InAnyOrder('1edges') dummy.do_step(2, 'e1').InAnyOrder('2edges') dummy.do_step(2, 'e2').InAnyOrder('2edges') dummy.do_step(2, 'e3').InAnyOrder('2edges') dummy.do_step(3, 'e1').InAnyOrder('3edges') dummy.do_step(3, 'e2').InAnyOrder('3edges') dummy.do_step(3, 'e3').InAnyOrder('3edges') dummy.do_step(1, 'mid3').AndReturn(None) dummy.do_step(2, 'mid3').AndReturn(None) dummy.do_step(3, 'mid3').AndReturn(None) dummy.do_step(1, 'mid2').InAnyOrder('1mid') dummy.do_step(1, 'mid1').InAnyOrder('1mid') dummy.do_step(2, 'mid2').InAnyOrder('2mid') dummy.do_step(2, 'mid1').InAnyOrder('2mid') dummy.do_step(3, 'mid2').InAnyOrder('3mid') dummy.do_step(3, 'mid1').InAnyOrder('3mid') dummy.do_step(1, 'last').AndReturn(None) dummy.do_step(2, 'last').AndReturn(None) dummy.do_step(3, 'last').AndReturn(None) def test_dbldiamond_fwd(self): with self._dep_test(('last', 'a1'), ('last', 'a2'), ('a1', 'b1'), ('a2', 'b1'), ('a2', 'b2'), ('b1', 'first'), ('b2', 'first')) as dummy: dummy.do_step(1, 'first').AndReturn(None) dummy.do_step(2, 'first').AndReturn(None) dummy.do_step(3, 'first').AndReturn(None) dummy.do_step(1, 'b1').InAnyOrder('1b') dummy.do_step(1, 'b2').InAnyOrder('1b') dummy.do_step(2, 'b1').InAnyOrder('2b') dummy.do_step(2, 'b2').InAnyOrder('2b') dummy.do_step(3, 'b1').InAnyOrder('3b') dummy.do_step(3, 'b2').InAnyOrder('3b') dummy.do_step(1, 'a1').InAnyOrder('1a') dummy.do_step(1, 
'a2').InAnyOrder('1a') dummy.do_step(2, 'a1').InAnyOrder('2a') dummy.do_step(2, 'a2').InAnyOrder('2a') dummy.do_step(3, 'a1').InAnyOrder('3a') dummy.do_step(3, 'a2').InAnyOrder('3a') dummy.do_step(1, 'last').AndReturn(None) dummy.do_step(2, 'last').AndReturn(None) dummy.do_step(3, 'last').AndReturn(None) def test_circular_deps(self): d = dependencies.Dependencies([('first', 'second'), ('second', 'third'), ('third', 'first')]) self.assertRaises(dependencies.CircularDependencyException, scheduler.DependencyTaskGroup, d) def test_aggregate_exceptions_raises_all_at_the_end(self): def run_tasks_with_exceptions(e1=None, e2=None): self.aggregate_exceptions = True tasks = (('A', None), ('B', None), ('C', None)) with self._dep_test(*tasks) as dummy: dummy.do_step(1, 'A').InAnyOrder('1') dummy.do_step(1, 'B').InAnyOrder('1') dummy.do_step(1, 'C').InAnyOrder('1').AndRaise(e1) dummy.do_step(2, 'A').InAnyOrder('2') dummy.do_step(2, 'B').InAnyOrder('2').AndRaise(e2) dummy.do_step(3, 'A').InAnyOrder('3') e1 = Exception('e1') e2 = Exception('e2') exc = self.assertRaises(scheduler.ExceptionGroup, run_tasks_with_exceptions, e1, e2) self.assertEqual(set([e1, e2]), set(exc.exceptions)) def test_aggregate_exceptions_cancels_dependent_tasks_recursively(self): def run_tasks_with_exceptions(e1=None, e2=None): self.aggregate_exceptions = True tasks = (('A', None), ('B', 'A'), ('C', 'B')) with self._dep_test(*tasks) as dummy: dummy.do_step(1, 'A').AndRaise(e1) e1 = Exception('e1') exc = self.assertRaises(scheduler.ExceptionGroup, run_tasks_with_exceptions, e1) self.assertEqual([e1], exc.exceptions) def test_aggregate_exceptions_cancels_tasks_in_reverse_order(self): def run_tasks_with_exceptions(e1=None, e2=None): self.reverse_order = True self.aggregate_exceptions = True tasks = (('A', None), ('B', 'A'), ('C', 'B')) with self._dep_test(*tasks) as dummy: dummy.do_step(1, 'C').AndRaise(e1) e1 = Exception('e1') exc = self.assertRaises(scheduler.ExceptionGroup, run_tasks_with_exceptions, e1) self.assertEqual([e1], exc.exceptions) def test_exceptions_on_cancel(self): class TestException(Exception): pass class ExceptionOnExit(Exception): pass cancelled = [] def task_func(arg): for i in range(4): if i > 1: raise TestException try: yield except GeneratorExit: cancelled.append(arg) raise ExceptionOnExit tasks = (('A', None), ('B', None), ('C', None)) deps = dependencies.Dependencies(tasks) tg = scheduler.DependencyTaskGroup(deps, task_func) task = tg() next(task) next(task) self.assertRaises(TestException, next, task) self.assertEqual(len(tasks) - 1, len(cancelled)) def test_exception_grace_period(self): e1 = Exception('e1') def run_tasks_with_exceptions(): self.error_wait_time = 5 tasks = (('A', None), ('B', None), ('C', 'A')) with self._dep_test(*tasks) as dummy: dummy.do_step(1, 'A').InAnyOrder('1') dummy.do_step(1, 'B').InAnyOrder('1') dummy.do_step(2, 'A').InAnyOrder('2').AndRaise(e1) dummy.do_step(2, 'B').InAnyOrder('2') dummy.do_step(3, 'B') exc = self.assertRaises(type(e1), run_tasks_with_exceptions) self.assertEqual(e1, exc) def test_exception_grace_period_expired(self): e1 = Exception('e1') def run_tasks_with_exceptions(): self.steps = 5 self.error_wait_time = 0.05 def sleep(): eventlet.sleep(self.error_wait_time) tasks = (('A', None), ('B', None), ('C', 'A')) with self._dep_test(*tasks) as dummy: dummy.do_step(1, 'A').InAnyOrder('1') dummy.do_step(1, 'B').InAnyOrder('1') dummy.do_step(2, 'A').InAnyOrder('2').AndRaise(e1) dummy.do_step(2, 'B').InAnyOrder('2') dummy.do_step(3, 'B') dummy.do_step(4, 
'B').WithSideEffects(sleep) exc = self.assertRaises(type(e1), run_tasks_with_exceptions) self.assertEqual(e1, exc) def test_exception_grace_period_per_task(self): e1 = Exception('e1') def get_wait_time(key): if key == 'B': return 5 else: return None def run_tasks_with_exceptions(): self.error_wait_time = get_wait_time tasks = (('A', None), ('B', None), ('C', 'A')) with self._dep_test(*tasks) as dummy: dummy.do_step(1, 'A').InAnyOrder('1') dummy.do_step(1, 'B').InAnyOrder('1') dummy.do_step(2, 'A').InAnyOrder('2').AndRaise(e1) dummy.do_step(2, 'B').InAnyOrder('2') dummy.do_step(3, 'B') exc = self.assertRaises(type(e1), run_tasks_with_exceptions) self.assertEqual(e1, exc) def test_thrown_exception_order(self): e1 = Exception('e1') e2 = Exception('e2') tasks = (('A', None), ('B', None), ('C', 'A')) deps = dependencies.Dependencies(tasks) tg = scheduler.DependencyTaskGroup( deps, DummyTask(), reverse=self.reverse_order, error_wait_time=1, aggregate_exceptions=self.aggregate_exceptions) task = tg() next(task) task.throw(e1) next(task) tg.error_wait_time = None exc = self.assertRaises(type(e2), task.throw, e2) self.assertIs(e2, exc) class TaskTest(common.HeatTestCase): def setUp(self): super(TaskTest, self).setUp() scheduler.ENABLE_SLEEP = True self.addCleanup(self.m.VerifyAll) def test_run(self): task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) scheduler.TaskRunner._sleep(0).AndReturn(None) task.do_step(2).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) task.do_step(3).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(task)() def test_run_as_task(self): task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) task.do_step(2).AndReturn(None) task.do_step(3).AndReturn(None) self.m.ReplayAll() tr = scheduler.TaskRunner(task) rt = tr.as_task() for step in rt: pass self.assertTrue(tr.done()) def test_run_as_task_started(self): task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) task.do_step(2).AndReturn(None) task.do_step(3).AndReturn(None) self.m.ReplayAll() tr = scheduler.TaskRunner(task) tr.start() for step in tr.as_task(): pass self.assertTrue(tr.done()) def test_run_as_task_cancel(self): task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) self.m.ReplayAll() tr = scheduler.TaskRunner(task) rt = tr.as_task() next(rt) rt.close() self.assertTrue(tr.done()) def test_run_as_task_exception(self): class TestException(Exception): pass task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) self.m.ReplayAll() tr = scheduler.TaskRunner(task) rt = tr.as_task() next(rt) self.assertRaises(TestException, rt.throw, TestException) self.assertTrue(tr.done()) def test_run_as_task_swallow_exception(self): class TestException(Exception): pass def task(): try: yield except TestException: yield tr = scheduler.TaskRunner(task) rt = tr.as_task() next(rt) rt.throw(TestException) self.assertFalse(tr.done()) self.assertRaises(StopIteration, next, rt) self.assertTrue(tr.done()) def test_run_delays(self): task = DummyTask(delays=itertools.repeat(2)) self.m.StubOutWithMock(task, 'do_step') 
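        # mox record/replay: every expectation recorded on these stubs must
        # be consumed in recording order (or within an InAnyOrder group)
        # once self.m.ReplayAll() is called, and the addCleanup(VerifyAll)
        # in setUp() fails the test if any recorded call never happened.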
self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) scheduler.TaskRunner._sleep(0).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) task.do_step(2).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) task.do_step(3).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(task)() def test_run_delays_dynamic(self): task = DummyTask(delays=[2, 4, 1]) self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) scheduler.TaskRunner._sleep(0).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) task.do_step(2).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) task.do_step(3).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(task)() def test_run_wait_time(self): task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) scheduler.TaskRunner._sleep(0).AndReturn(None) task.do_step(2).AndReturn(None) scheduler.TaskRunner._sleep(42).AndReturn(None) task.do_step(3).AndReturn(None) scheduler.TaskRunner._sleep(42).AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(task)(wait_time=42) def test_start_run(self): task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) task.do_step(2).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) task.do_step(3).AndReturn(None) self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start() runner.run_to_completion() def test_start_run_wait_time(self): task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) scheduler.TaskRunner._sleep(24).AndReturn(None) task.do_step(2).AndReturn(None) scheduler.TaskRunner._sleep(24).AndReturn(None) task.do_step(3).AndReturn(None) self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start() runner.run_to_completion(wait_time=24) def test_run_progress(self): progress_count = [] def progress(): progress_count.append(None) task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) scheduler.TaskRunner._sleep(0).AndReturn(None) task.do_step(2).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) task.do_step(3).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(task)(progress_callback=progress) self.assertEqual(task.num_steps, len(progress_count)) def test_start_run_progress(self): progress_count = [] def progress(): progress_count.append(None) task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) task.do_step(2).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) task.do_step(3).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start() runner.run_to_completion(progress_callback=progress) 
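        # start() already consumed the task's first step before a progress
        # callback was attached, so only the remaining steps report progress: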
self.assertEqual(task.num_steps - 1, len(progress_count)) def test_run_as_task_progress(self): progress_count = [] def progress(): progress_count.append(None) task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) task.do_step(2).AndReturn(None) task.do_step(3).AndReturn(None) self.m.ReplayAll() tr = scheduler.TaskRunner(task) rt = tr.as_task(progress_callback=progress) for step in rt: pass self.assertEqual(task.num_steps, len(progress_count)) def test_run_progress_exception(self): class TestException(Exception): pass progress_count = [] def progress(): if progress_count: raise TestException progress_count.append(None) task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) scheduler.TaskRunner._sleep(0).AndReturn(None) task.do_step(2).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) self.m.ReplayAll() self.assertRaises(TestException, scheduler.TaskRunner(task), progress_callback=progress) self.assertEqual(1, len(progress_count)) def test_start_run_progress_exception(self): class TestException(Exception): pass progress_count = [] def progress(): if progress_count: raise TestException progress_count.append(None) task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) task.do_step(2).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) task.do_step(3).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start() self.assertRaises(TestException, runner.run_to_completion, progress_callback=progress) self.assertEqual(1, len(progress_count)) def test_run_as_task_progress_exception(self): class TestException(Exception): pass progress_count = [] def progress(): if progress_count: raise TestException progress_count.append(None) task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) task.do_step(2).AndReturn(None) self.m.ReplayAll() tr = scheduler.TaskRunner(task) rt = tr.as_task(progress_callback=progress) next(rt) next(rt) self.assertRaises(TestException, next, rt) self.assertEqual(1, len(progress_count)) def test_run_progress_exception_swallow(self): class TestException(Exception): pass progress_count = [] def progress(): try: if not progress_count: raise TestException finally: progress_count.append(None) def task(): try: yield except TestException: yield self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') scheduler.TaskRunner._sleep(0).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(task)(progress_callback=progress) self.assertEqual(2, len(progress_count)) def test_start_run_progress_exception_swallow(self): class TestException(Exception): pass progress_count = [] def progress(): try: if not progress_count: raise TestException finally: progress_count.append(None) def task(): yield try: yield except TestException: yield self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') scheduler.TaskRunner._sleep(1).AndReturn(None) scheduler.TaskRunner._sleep(1).AndReturn(None) self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start() runner.run_to_completion(progress_callback=progress) self.assertEqual(2, len(progress_count)) def test_run_as_task_progress_exception_swallow(self): class 
TestException(Exception): pass progress_count = [] def progress(): try: if not progress_count: raise TestException finally: progress_count.append(None) def task(): try: yield except TestException: yield tr = scheduler.TaskRunner(task) rt = tr.as_task(progress_callback=progress) next(rt) next(rt) self.assertRaises(StopIteration, next, rt) self.assertEqual(2, len(progress_count)) def test_sleep(self): sleep_time = 42 self.m.StubOutWithMock(eventlet, 'sleep') eventlet.sleep(0).AndReturn(None) eventlet.sleep(sleep_time).MultipleTimes().AndReturn(None) self.m.ReplayAll() runner = scheduler.TaskRunner(DummyTask()) runner(wait_time=sleep_time) def test_sleep_zero(self): self.m.StubOutWithMock(eventlet, 'sleep') eventlet.sleep(0).MultipleTimes().AndReturn(None) self.m.ReplayAll() runner = scheduler.TaskRunner(DummyTask()) runner(wait_time=0) def test_sleep_none(self): self.m.StubOutWithMock(eventlet, 'sleep') self.m.ReplayAll() runner = scheduler.TaskRunner(DummyTask()) runner(wait_time=None) def test_args(self): args = ['foo', 'bar'] kwargs = {'baz': 'quux', 'blarg': 'wibble'} self.m.StubOutWithMock(DummyTask, '__call__') task = DummyTask() task(*args, **kwargs) self.m.ReplayAll() runner = scheduler.TaskRunner(task, *args, **kwargs) runner(wait_time=None) def test_non_callable(self): self.assertRaises(AssertionError, scheduler.TaskRunner, object()) def test_stepping(self): task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) task.do_step(2).AndReturn(None) task.do_step(3).AndReturn(None) self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start() self.assertFalse(runner.step()) self.assertTrue(runner) self.assertFalse(runner.step()) self.assertTrue(runner.step()) self.assertFalse(runner) def test_start_no_steps(self): task = DummyTask(0) self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start() self.assertTrue(runner.done()) self.assertTrue(runner.step()) def test_start_only(self): task = DummyTask() self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) self.m.ReplayAll() runner = scheduler.TaskRunner(task) self.assertFalse(runner.started()) runner.start() self.assertTrue(runner.started()) def test_double_start(self): runner = scheduler.TaskRunner(DummyTask()) runner.start() self.assertRaises(AssertionError, runner.start) def test_start_cancelled(self): runner = scheduler.TaskRunner(DummyTask()) runner.cancel() self.assertRaises(AssertionError, runner.start) def test_call_double_start(self): runner = scheduler.TaskRunner(DummyTask()) runner(wait_time=None) self.assertRaises(AssertionError, runner.start) def test_start_function(self): def task(): pass runner = scheduler.TaskRunner(task) runner.start() self.assertTrue(runner.started()) self.assertTrue(runner.done()) self.assertTrue(runner.step()) def test_repeated_done(self): task = DummyTask(0) self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start() self.assertTrue(runner.step()) self.assertTrue(runner.step()) def test_timeout(self): st = timeutils.wallclock() def task(): while True: yield self.m.StubOutWithMock(timeutils, 'wallclock') timeutils.wallclock().AndReturn(st) timeutils.wallclock().AndReturn(st + 0.5) timeutils.wallclock().AndReturn(st + 1.5) 
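        # The stubbed clock passes the 1-second timeout on its third reading
        # (st + 1.5), so the step() call below raises scheduler.Timeout.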
self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start(timeout=1) self.assertTrue(runner) self.assertRaises(scheduler.Timeout, runner.step) def test_timeout_return(self): st = timeutils.wallclock() def task(): while True: try: yield except scheduler.Timeout: return self.m.StubOutWithMock(timeutils, 'wallclock') timeutils.wallclock().AndReturn(st) timeutils.wallclock().AndReturn(st + 0.5) timeutils.wallclock().AndReturn(st + 1.5) self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start(timeout=1) self.assertTrue(runner) self.assertTrue(runner.step()) self.assertFalse(runner) def test_timeout_swallowed(self): st = timeutils.wallclock() def task(): while True: try: yield except scheduler.Timeout: yield self.fail('Task still running') self.m.StubOutWithMock(timeutils, 'wallclock') timeutils.wallclock().AndReturn(st) timeutils.wallclock().AndReturn(st + 0.5) timeutils.wallclock().AndReturn(st + 1.5) self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start(timeout=1) self.assertTrue(runner) self.assertTrue(runner.step()) self.assertFalse(runner) self.assertTrue(runner.step()) def test_as_task_timeout(self): st = timeutils.wallclock() def task(): while True: yield self.m.StubOutWithMock(timeutils, 'wallclock') timeutils.wallclock().AndReturn(st) timeutils.wallclock().AndReturn(st + 0.5) timeutils.wallclock().AndReturn(st + 1.5) self.m.ReplayAll() runner = scheduler.TaskRunner(task) rt = runner.as_task(timeout=1) next(rt) self.assertTrue(runner) self.assertRaises(scheduler.Timeout, next, rt) def test_as_task_timeout_shorter(self): st = timeutils.wallclock() def task(): while True: yield self.m.StubOutWithMock(timeutils, 'wallclock') timeutils.wallclock().AndReturn(st) timeutils.wallclock().AndReturn(st + 0.5) timeutils.wallclock().AndReturn(st + 0.7) timeutils.wallclock().AndReturn(st + 1.6) timeutils.wallclock().AndReturn(st + 2.6) self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start(timeout=10) self.assertTrue(runner) rt = runner.as_task(timeout=1) next(rt) self.assertRaises(scheduler.Timeout, next, rt) def test_as_task_timeout_longer(self): st = timeutils.wallclock() def task(): while True: yield self.m.StubOutWithMock(timeutils, 'wallclock') timeutils.wallclock().AndReturn(st) timeutils.wallclock().AndReturn(st + 0.5) timeutils.wallclock().AndReturn(st + 0.6) timeutils.wallclock().AndReturn(st + 1.5) self.m.ReplayAll() runner = scheduler.TaskRunner(task) runner.start(timeout=1) self.assertTrue(runner) rt = runner.as_task(timeout=10) self.assertRaises(scheduler.Timeout, next, rt) def test_cancel_not_started(self): task = DummyTask(1) self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') self.m.ReplayAll() runner = scheduler.TaskRunner(task) self.assertFalse(runner.started()) runner.cancel() self.assertTrue(runner.done()) def test_cancel_done(self): task = DummyTask(1) self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) self.m.ReplayAll() runner = scheduler.TaskRunner(task) self.assertFalse(runner.started()) runner.start() self.assertTrue(runner.started()) self.assertTrue(runner.step()) self.assertTrue(runner.done()) runner.cancel() self.assertTrue(runner.done()) self.assertTrue(runner.step()) def test_cancel(self): task = DummyTask(3) self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') task.do_step(1).AndReturn(None) task.do_step(2).AndReturn(None) self.m.ReplayAll() runner = 
scheduler.TaskRunner(task) self.assertFalse(runner.started()) runner.start() self.assertTrue(runner.started()) self.assertFalse(runner.step()) runner.cancel() self.assertTrue(runner.step()) def test_cancel_grace_period(self): st = timeutils.wallclock() task = DummyTask(5) self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') self.m.StubOutWithMock(timeutils, 'wallclock') task.do_step(1).AndReturn(None) task.do_step(2).AndReturn(None) timeutils.wallclock().AndReturn(st) timeutils.wallclock().AndReturn(st + 0.5) task.do_step(3).AndReturn(None) timeutils.wallclock().AndReturn(st + 1.0) task.do_step(4).AndReturn(None) timeutils.wallclock().AndReturn(st + 1.5) self.m.ReplayAll() runner = scheduler.TaskRunner(task) self.assertFalse(runner.started()) runner.start() self.assertTrue(runner.started()) self.assertFalse(runner.step()) runner.cancel(grace_period=1.0) self.assertFalse(runner.step()) self.assertFalse(runner.step()) self.assertTrue(runner.step()) def test_cancel_grace_period_before_timeout(self): st = timeutils.wallclock() task = DummyTask(5) self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') self.m.StubOutWithMock(timeutils, 'wallclock') timeutils.wallclock().AndReturn(st) timeutils.wallclock().AndReturn(st + 0.1) task.do_step(1).AndReturn(None) timeutils.wallclock().AndReturn(st + 0.2) task.do_step(2).AndReturn(None) timeutils.wallclock().AndReturn(st + 0.2) timeutils.wallclock().AndReturn(st + 0.5) task.do_step(3).AndReturn(None) timeutils.wallclock().AndReturn(st + 1.0) task.do_step(4).AndReturn(None) timeutils.wallclock().AndReturn(st + 1.5) self.m.ReplayAll() runner = scheduler.TaskRunner(task) self.assertFalse(runner.started()) runner.start(timeout=10) self.assertTrue(runner.started()) self.assertFalse(runner.step()) runner.cancel(grace_period=1.0) self.assertFalse(runner.step()) self.assertFalse(runner.step()) self.assertTrue(runner.step()) def test_cancel_grace_period_after_timeout(self): st = timeutils.wallclock() task = DummyTask(5) self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') self.m.StubOutWithMock(timeutils, 'wallclock') timeutils.wallclock().AndReturn(st) timeutils.wallclock().AndReturn(st + 0.1) task.do_step(1).AndReturn(None) timeutils.wallclock().AndReturn(st + 0.2) task.do_step(2).AndReturn(None) timeutils.wallclock().AndReturn(st + 0.2) timeutils.wallclock().AndReturn(st + 0.5) task.do_step(3).AndReturn(None) timeutils.wallclock().AndReturn(st + 1.0) task.do_step(4).AndReturn(None) timeutils.wallclock().AndReturn(st + 1.5) self.m.ReplayAll() runner = scheduler.TaskRunner(task) self.assertFalse(runner.started()) runner.start(timeout=1.25) self.assertTrue(runner.started()) self.assertFalse(runner.step()) runner.cancel(grace_period=3) self.assertFalse(runner.step()) self.assertFalse(runner.step()) self.assertRaises(scheduler.Timeout, runner.step) def test_cancel_grace_period_not_started(self): task = DummyTask(1) self.m.StubOutWithMock(task, 'do_step') self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep') self.m.ReplayAll() runner = scheduler.TaskRunner(task) self.assertFalse(runner.started()) runner.cancel(grace_period=0.5) self.assertTrue(runner.done()) class TimeoutTest(common.HeatTestCase): def test_compare(self): task = scheduler.TaskRunner(DummyTask()) earlier = scheduler.Timeout(task, 10) eventlet.sleep(0.01) later = scheduler.Timeout(task, 10) self.assertTrue(earlier.earlier_than(later)) 
        self.assertFalse(later.earlier_than(earlier))


class DescriptionTest(common.HeatTestCase):
    def setUp(self):
        super(DescriptionTest, self).setUp()
        self.addCleanup(self.m.VerifyAll)

    def test_func(self):
        def f():
            pass

        self.assertEqual('f', scheduler.task_description(f))

    def test_lambda(self):
        l = lambda: None

        self.assertEqual('<lambda>', scheduler.task_description(l))

    def test_method(self):
        class C(object):
            def __str__(self):
                return 'C "o"'

            def __repr__(self):
                return 'o'

            def m(self):
                pass

        self.assertEqual('m from C "o"', scheduler.task_description(C().m))

    def test_object(self):
        class C(object):
            def __str__(self):
                return 'C "o"'

            def __repr__(self):
                return 'o'

            def __call__(self):
                pass

        self.assertEqual('o', scheduler.task_description(C()))

    def test_unicode(self):
        @repr_wrapper
        @six.python_2_unicode_compatible
        class C(object):
            def __str__(self):
                return u'C "\u2665"'

            def __repr__(self):
                return u'\u2665'

            def __call__(self):
                pass

            def m(self):
                pass

        self.assertEqual(u'm from C "\u2665"',
                         scheduler.task_description(C().m))
        self.assertEqual(u'\u2665', scheduler.task_description(C()))


class WrapperTaskTest(common.HeatTestCase):
    def setUp(self):
        super(WrapperTaskTest, self).setUp()
        self.addCleanup(self.m.VerifyAll)

    def test_wrap(self):
        child_tasks = [DummyTask() for i in range(3)]

        @scheduler.wrappertask
        def task():
            for child_task in child_tasks:
                yield child_task()
            yield

        for child_task in child_tasks:
            self.m.StubOutWithMock(child_task, 'do_step')
        self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep')

        scheduler.TaskRunner._sleep(0).AndReturn(None)
        for child_task in child_tasks:
            child_task.do_step(1).AndReturn(None)
            scheduler.TaskRunner._sleep(1).AndReturn(None)
            child_task.do_step(2).AndReturn(None)
            scheduler.TaskRunner._sleep(1).AndReturn(None)
            child_task.do_step(3).AndReturn(None)
            scheduler.TaskRunner._sleep(1).AndReturn(None)

        self.m.ReplayAll()

        scheduler.TaskRunner(task)()

    def test_parent_yield_value(self):
        @scheduler.wrappertask
        def parent_task():
            yield
            yield 3
            yield iter([1, 2, 4])

        task = parent_task()
        self.assertIsNone(next(task))
        self.assertEqual(3, next(task))
        self.assertEqual([1, 2, 4], list(next(task)))

    def test_child_yield_value(self):
        def child_task():
            yield
            yield 3
            yield iter([1, 2, 4])

        @scheduler.wrappertask
        def parent_task():
            yield child_task()

        task = parent_task()
        self.assertIsNone(next(task))
        self.assertEqual(3, next(task))
        self.assertEqual([1, 2, 4], list(next(task)))

    def test_child_exception(self):
        class MyException(Exception):
            pass

        def child_task():
            yield
            raise MyException()

        @scheduler.wrappertask
        def parent_task():
            try:
                yield child_task()
            except MyException:
                raise
            else:
                self.fail('No exception raised in parent_task')

        task = parent_task()
        next(task)
        self.assertRaises(MyException, next, task)

    def test_child_exception_exit(self):
        class MyException(Exception):
            pass

        def child_task():
            yield
            raise MyException()

        @scheduler.wrappertask
        def parent_task():
            try:
                yield child_task()
            except MyException:
                return
            else:
                self.fail('No exception raised in parent_task')

        task = parent_task()
        next(task)
        self.assertRaises(StopIteration, next, task)

    def test_child_exception_swallow(self):
        class MyException(Exception):
            pass

        def child_task():
            yield
            raise MyException()

        @scheduler.wrappertask
        def parent_task():
            try:
                yield child_task()
            except MyException:
                yield
            else:
                self.fail('No exception raised in parent_task')
            yield

        task = parent_task()
        next(task)
        next(task)

    def test_child_exception_swallow_next(self):
        class MyException(Exception):
            pass

        def child_task():
            yield
            raise MyException()

        dummy = DummyTask()

        @scheduler.wrappertask
def parent_task(): try: yield child_task() except MyException: pass else: self.fail('No exception raised in parent_task') yield dummy() task = parent_task() next(task) self.m.StubOutWithMock(dummy, 'do_step') for i in range(1, dummy.num_steps + 1): dummy.do_step(i).AndReturn(None) self.m.ReplayAll() for i in range(1, dummy.num_steps + 1): next(task) self.assertRaises(StopIteration, next, task) def test_thrown_exception_swallow_next(self): class MyException(Exception): pass dummy = DummyTask() @scheduler.wrappertask def child_task(): try: yield except MyException: yield dummy() else: self.fail('No exception raised in child_task') @scheduler.wrappertask def parent_task(): yield child_task() task = parent_task() self.m.StubOutWithMock(dummy, 'do_step') for i in range(1, dummy.num_steps + 1): dummy.do_step(i).AndReturn(None) self.m.ReplayAll() next(task) task.throw(MyException) for i in range(2, dummy.num_steps + 1): next(task) self.assertRaises(StopIteration, next, task) def test_thrown_exception_raise(self): class MyException(Exception): pass dummy = DummyTask() @scheduler.wrappertask def child_task(): try: yield except MyException: raise else: self.fail('No exception raised in child_task') @scheduler.wrappertask def parent_task(): try: yield child_task() except MyException: yield dummy() task = parent_task() self.m.StubOutWithMock(dummy, 'do_step') for i in range(1, dummy.num_steps + 1): dummy.do_step(i).AndReturn(None) self.m.ReplayAll() next(task) task.throw(MyException) for i in range(2, dummy.num_steps + 1): next(task) self.assertRaises(StopIteration, next, task) def test_thrown_exception_exit(self): class MyException(Exception): pass dummy = DummyTask() @scheduler.wrappertask def child_task(): try: yield except MyException: return else: self.fail('No exception raised in child_task') @scheduler.wrappertask def parent_task(): yield child_task() yield dummy() task = parent_task() self.m.StubOutWithMock(dummy, 'do_step') for i in range(1, dummy.num_steps + 1): dummy.do_step(i).AndReturn(None) self.m.ReplayAll() next(task) task.throw(MyException) for i in range(2, dummy.num_steps + 1): next(task) self.assertRaises(StopIteration, next, task) def test_parent_exception(self): class MyException(Exception): pass def child_task(): yield @scheduler.wrappertask def parent_task(): yield child_task() raise MyException() task = parent_task() next(task) self.assertRaises(MyException, next, task) def test_parent_throw(self): class MyException(Exception): pass @scheduler.wrappertask def parent_task(): try: yield DummyTask()() except MyException: raise else: self.fail('No exception raised in parent_task') task = parent_task() next(task) self.assertRaises(MyException, task.throw, MyException()) def test_parent_throw_exit(self): class MyException(Exception): pass @scheduler.wrappertask def parent_task(): try: yield DummyTask()() except MyException: return else: self.fail('No exception raised in parent_task') task = parent_task() next(task) self.assertRaises(StopIteration, task.throw, MyException()) def test_parent_cancel(self): @scheduler.wrappertask def parent_task(): try: yield except GeneratorExit: raise else: self.fail('parent_task not closed') task = parent_task() next(task) task.close() def test_parent_cancel_exit(self): @scheduler.wrappertask def parent_task(): try: yield except GeneratorExit: return else: self.fail('parent_task not closed') task = parent_task() next(task) task.close() def test_cancel(self): def child_task(): try: yield except GeneratorExit: raise else: self.fail('child_task not 
closed') @scheduler.wrappertask def parent_task(): try: yield child_task() except GeneratorExit: raise else: self.fail('parent_task not closed') task = parent_task() next(task) task.close() def test_cancel_exit(self): def child_task(): try: yield except GeneratorExit: return else: self.fail('child_task not closed') @scheduler.wrappertask def parent_task(): try: yield child_task() except GeneratorExit: raise else: self.fail('parent_task not closed') task = parent_task() next(task) task.close() def test_cancel_parent_exit(self): def child_task(): try: yield except GeneratorExit: return else: self.fail('child_task not closed') @scheduler.wrappertask def parent_task(): try: yield child_task() except GeneratorExit: return else: self.fail('parent_task not closed') task = parent_task() next(task) task.close() heat-10.0.2/heat/tests/engine/test_plugin_manager.py0000666000175000017500000001110013343562340022464 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys import types import six from heat.engine import plugin_manager from heat.tests import common def legacy_test_mapping(): return {'foo': 'bar', 'baz': 'quux'} def current_test_mapping(): return {'blarg': 'wibble', 'bar': 'baz'} def args_test_mapping(*args): return dict(enumerate(args)) def kwargs_test_mapping(**kwargs): return kwargs def error_test_mapping(): raise MappingTestError def error_test_exception_mapping(): raise Exception("exception") def invalid_type_test_mapping(): return 'foo' def none_return_test_mapping(): return class MappingTestError(Exception): pass class TestPluginManager(common.HeatTestCase): @staticmethod def module(): return sys.modules[__name__] def test_load_single_mapping(self): pm = plugin_manager.PluginMapping('current_test') self.assertEqual(current_test_mapping(), pm.load_from_module(self.module())) def test_load_first_alternative_mapping(self): pm = plugin_manager.PluginMapping(['current_test', 'legacy_test']) self.assertEqual(current_test_mapping(), pm.load_from_module(self.module())) def test_load_second_alternative_mapping(self): pm = plugin_manager.PluginMapping(['nonexist', 'current_test']) self.assertEqual(current_test_mapping(), pm.load_from_module(self.module())) def test_load_mapping_args(self): pm = plugin_manager.PluginMapping('args_test', 'baz', 'quux') expected = {0: 'baz', 1: 'quux'} self.assertEqual(expected, pm.load_from_module(self.module())) def test_load_mapping_kwargs(self): pm = plugin_manager.PluginMapping('kwargs_test', baz='quux') self.assertEqual({'baz': 'quux'}, pm.load_from_module(self.module())) def test_load_mapping_non_existent(self): pm = plugin_manager.PluginMapping('nonexist') self.assertEqual({}, pm.load_from_module(self.module())) def test_load_mapping_error(self): pm = plugin_manager.PluginMapping('error_test') self.assertRaises(MappingTestError, pm.load_from_module, self.module()) def test_load_mapping_exception(self): pm = plugin_manager.PluginMapping('error_test_exception') self.assertRaisesRegex(Exception, "exception", pm.load_from_module, 
self.module()) def test_load_mapping_invalidtype(self): pm = plugin_manager.PluginMapping('invalid_type_test') self.assertEqual({}, pm.load_from_module(self.module())) def test_load_mapping_nonereturn(self): pm = plugin_manager.PluginMapping('none_return_test') self.assertEqual({}, pm.load_from_module(self.module())) def test_modules(self): mgr = plugin_manager.PluginManager('heat.tests') for module in mgr.modules: self.assertEqual(types.ModuleType, type(module)) self.assertTrue(module.__name__.startswith('heat.tests') or module.__name__.startswith('heat.engine.plugins')) def test_load_all_skip_tests(self): mgr = plugin_manager.PluginManager('heat.tests') pm = plugin_manager.PluginMapping('current_test') all_items = pm.load_all(mgr) for item in six.iteritems(current_test_mapping()): self.assertNotIn(item, all_items) def test_load_all(self): import heat.tests.engine.test_plugin_manager mgr = plugin_manager.PluginManager('heat.tests') pm = plugin_manager.PluginMapping('current_test') # NOTE(chmou): We force the modules to be ourself so we can get # the current_test_mapping if not we will would be # skipped by plugin_loader.load_modules since we are skipping # the loading of the package with tests in there mgr.modules = [heat.tests.engine.test_plugin_manager] all_items = pm.load_all(mgr) for item in six.iteritems(current_test_mapping()): self.assertIn(item, all_items) heat-10.0.2/heat/tests/engine/tools.py0000666000175000017500000002511713343562351017614 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
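# Illustration (not part of this module): the PluginMapping lookup
# convention exercised by test_plugin_manager.py above. A mapping name
# 'foo' resolves to a module-level callable named 'foo_mapping';
# alternative names are tried in order, and a missing or non-mapping
# result yields {}. The 'docker' mapping name and function below are
# hypothetical.

import sys

from heat.engine import plugin_manager


def docker_mapping():
    return {'DockerInc::Docker::Container': dict}


pm = plugin_manager.PluginMapping(['docker', 'legacy_docker'])
# Falls through the alternatives in order; returns {} if neither
# 'docker_mapping' nor 'legacy_docker_mapping' exists in the module.
assert pm.load_from_module(sys.modules[__name__]) == {
    'DockerInc::Docker::Container': dict}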
import sys import mox import six from heat.common import template_format from heat.engine.clients.os import glance from heat.engine.clients.os import keystone from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks from heat.engine.clients.os import nova from heat.engine import environment from heat.engine.resources.aws.ec2 import instance as instances from heat.engine import stack as parser from heat.engine import template as templatem from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils wp_template = u''' heat_template_version: 2014-10-16 description: WordPress parameters: KeyName: description: KeyName type: string default: test\u2042 resources: WebServer: type: AWS::EC2::Instance properties: ImageId: F17-x86_64-gold InstanceType: m1.large KeyName: test UserData: wordpress ''' string_template_five = ''' heat_template_version: 2013-05-23 description: Random String templates parameters: salt: type: string default: "quickbrownfox" resources: A: type: OS::Heat::RandomString properties: salt: {get_param: salt} B: type: OS::Heat::RandomString properties: salt: {get_param: salt} C: type: OS::Heat::RandomString depends_on: [A, B] properties: salt: {get_attr: [A, value]} D: type: OS::Heat::RandomString depends_on: C properties: salt: {get_param: salt} E: type: OS::Heat::RandomString depends_on: C properties: salt: {get_param: salt} ''' string_template_five_update = ''' heat_template_version: 2013-05-23 description: Random String templates parameters: salt: type: string default: "quickbrownfox123" resources: A: type: OS::Heat::RandomString properties: salt: {get_param: salt} B: type: OS::Heat::RandomString properties: salt: {get_param: salt} F: type: OS::Heat::RandomString depends_on: [A, B] properties: salt: {get_param: salt} G: type: OS::Heat::RandomString depends_on: F properties: salt: {get_param: salt} H: type: OS::Heat::RandomString depends_on: F properties: salt: {get_param: salt} ''' attr_cache_template = ''' heat_template_version: 2016-04-08 resources: A: type: ResourceWithComplexAttributesType B: type: OS::Heat::RandomString properties: salt: {get_attr: [A, flat_dict, key2]} C: type: OS::Heat::RandomString depends_on: [A, B] properties: salt: {get_attr: [A, nested_dict, dict, a]} D: type: OS::Heat::RandomString depends_on: C properties: salt: {get_attr: [A, nested_dict, dict, b]} E: type: OS::Heat::RandomString depends_on: C properties: salt: {get_attr: [A, flat_dict, key3]} ''' def get_stack(stack_name, ctx, template=None, with_params=True, convergence=False, **kwargs): if template is None: t = template_format.parse(wp_template) if with_params: env = environment.Environment({'KeyName': 'test'}) tmpl = templatem.Template(t, env=env) else: tmpl = templatem.Template(t) else: t = template_format.parse(template) tmpl = templatem.Template(t) stack = parser.Stack(ctx, stack_name, tmpl, convergence=convergence, **kwargs) stack.thread_group_mgr = DummyThreadGroupManager() return stack def setup_keystone_mocks(mocks, stack): fkc = fake_ks.FakeKeystoneClient() mocks.StubOutWithMock(keystone.KeystoneClientPlugin, '_create') keystone.KeystoneClientPlugin._create().AndReturn(fkc) def setup_mock_for_image_constraint(mocks, imageId_input, imageId_output=744): mocks.StubOutWithMock(glance.GlanceClientPlugin, 'find_image_by_name_or_id') glance.GlanceClientPlugin.find_image_by_name_or_id( imageId_input).MultipleTimes().AndReturn(imageId_output) def setup_mocks(mocks, stack, mock_image_constraint=True, mock_keystone=True): fc = fakes_nova.FakeClient() 
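    # The fake nova client below stands in for both the Instance resource's
    # client() accessor and NovaClientPlugin._create(), so every nova call
    # made while the stack is created is served from the fixture rather
    # than a real endpoint.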
mocks.StubOutWithMock(instances.Instance, 'client') instances.Instance.client().MultipleTimes().AndReturn(fc) mocks.StubOutWithMock(nova.NovaClientPlugin, '_create') nova.NovaClientPlugin._create().AndReturn(fc) instance = stack['WebServer'] metadata = instance.metadata_get() if mock_image_constraint: setup_mock_for_image_constraint(mocks, instance.properties['ImageId']) if mock_keystone: setup_keystone_mocks(mocks, stack) user_data = instance.properties['UserData'] server_userdata = instance.client_plugin().build_userdata( metadata, user_data, 'ec2-user') mocks.StubOutWithMock(nova.NovaClientPlugin, 'build_userdata') nova.NovaClientPlugin.build_userdata( metadata, user_data, 'ec2-user').AndReturn(server_userdata) mocks.StubOutWithMock(fc.servers, 'create') fc.servers.create( image=744, flavor=3, key_name='test', name=utils.PhysName(stack.name, 'WebServer'), security_groups=None, userdata=server_userdata, scheduler_hints=None, meta=None, nics=None, availability_zone=None, block_device_mapping=None).AndReturn(fc.servers.list()[4]) return fc def setup_stack(stack_name, ctx, create_res=True, convergence=False): stack = get_stack(stack_name, ctx, convergence=convergence) stack.store() if create_res: m = mox.Mox() setup_mocks(m, stack) m.ReplayAll() stack.create() stack._persist_state() m.UnsetStubs() return stack def clean_up_stack(stack, delete_res=True): if delete_res: m = mox.Mox() fc = fakes_nova.FakeClient() m.StubOutWithMock(instances.Instance, 'client') instances.Instance.client().MultipleTimes().AndReturn(fc) m.StubOutWithMock(fc.servers, 'delete') fc.servers.delete(mox.IgnoreArg()).AndRaise( fakes_nova.fake_exception()) m.ReplayAll() stack.delete() if delete_res: m.UnsetStubs() def stack_context(stack_name, create_res=True, convergence=False): """Decorator for creating and deleting stack. Decorator which creates a stack by using the test case's context and deletes it afterwards to ensure tests clean up their stacks regardless of test success/failure. 
""" def stack_delete(test_fn): @six.wraps(test_fn) def wrapped_test(test_case, *args, **kwargs): def create_stack(): ctx = getattr(test_case, 'ctx', None) if ctx is not None: stack = setup_stack(stack_name, ctx, create_res, convergence) setattr(test_case, 'stack', stack) def delete_stack(): stack = getattr(test_case, 'stack', None) if stack is not None and stack.id is not None: clean_up_stack(stack, delete_res=create_res) create_stack() try: test_fn(test_case, *args, **kwargs) except Exception: exc_class, exc_val, exc_tb = sys.exc_info() try: delete_stack() finally: six.reraise(exc_class, exc_val, exc_tb) else: delete_stack() return wrapped_test return stack_delete class DummyThread(object): def link(self, callback, *args): pass class DummyThreadGroup(object): def __init__(self): self.threads = [] def add_timer(self, interval, callback, initial_delay=None, *args, **kwargs): self.threads.append(callback) def stop_timers(self): pass def add_thread(self, callback, cnxt, trace, func, *args, **kwargs): # callback here is _start_with_trace(); func is the 'real' callback self.threads.append(func) return DummyThread() def stop(self, graceful=False): pass def wait(self): pass class DummyThreadGroupManager(object): def __init__(self): self.msg_queues = [] self.messages = [] def start(self, stack, func, *args, **kwargs): # Just run the function, so we know it's completed in the test func(*args, **kwargs) return DummyThread() def start_with_lock(self, cnxt, stack, engine_id, func, *args, **kwargs): # Just run the function, so we know it's completed in the test func(*args, **kwargs) return DummyThread() def start_with_acquired_lock(self, stack, lock, func, *args, **kwargs): # Just run the function, so we know it's completed in the test func(*args, **kwargs) return DummyThread() def send(self, stack_id, message): self.messages.append(message) def add_msg_queue(self, stack_id, msg_queue): self.msg_queues.append(msg_queue) def remove_msg_queue(self, gt, stack_id, msg_queue): for q in self.msg_queues.pop(stack_id, []): if q is not msg_queue: self.add_event(stack_id, q) class DummyThreadGroupMgrLogStart(DummyThreadGroupManager): def __init__(self): super(DummyThreadGroupMgrLogStart, self).__init__() self.started = [] def start_with_lock(self, cnxt, stack, engine_id, func, *args, **kwargs): self.started.append((stack.id, func)) return DummyThread() def start_with_acquired_lock(self, stack, lock, func, *args, **kwargs): self.started.append((stack.id, func)) return DummyThread() def start(self, stack_id, func, *args, **kwargs): # Here we only store the started task so it can be checked self.started.append((stack_id, func)) heat-10.0.2/heat/tests/engine/test_sync_point.py0000666000175000017500000001005213343562340021666 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import mock
from oslo_db import exception

from heat.engine import sync_point
from heat.tests import common
from heat.tests.engine import tools
from heat.tests import utils


class SyncPointTestCase(common.HeatTestCase):
    def setUp(self):
        super(SyncPointTestCase, self).setUp()
        self.dummy_event = mock.MagicMock()
        self.dummy_event.ready.return_value = False

    def test_sync_waiting(self):
        ctx = utils.dummy_context()
        stack = tools.get_stack('test_stack', utils.dummy_context(),
                                template=tools.string_template_five,
                                convergence=True)
        stack.converge_stack(stack.t, action=stack.CREATE)
        resource = stack['C']
        graph = stack.convergence_dependencies.graph()
        sender = (4, True)
        mock_callback = mock.Mock()
        sync_point.sync(ctx, resource.id, stack.current_traversal, True,
                        mock_callback, set(graph[(resource.id, True)]),
                        {sender: None})
        updated_sync_point = sync_point.get(ctx, resource.id,
                                            stack.current_traversal, True)
        input_data = sync_point.deserialize_input_data(
            updated_sync_point.input_data)
        self.assertEqual({sender: None}, input_data)
        self.assertFalse(mock_callback.called)

    def test_sync_non_waiting(self):
        ctx = utils.dummy_context()
        stack = tools.get_stack('test_stack', utils.dummy_context(),
                                template=tools.string_template_five,
                                convergence=True)
        stack.converge_stack(stack.t, action=stack.CREATE)
        resource = stack['A']
        graph = stack.convergence_dependencies.graph()
        sender = (3, True)
        mock_callback = mock.Mock()
        sync_point.sync(ctx, resource.id, stack.current_traversal, True,
                        mock_callback, set(graph[(resource.id, True)]),
                        {sender: None})
        updated_sync_point = sync_point.get(ctx, resource.id,
                                            stack.current_traversal, True)
        input_data = sync_point.deserialize_input_data(
            updated_sync_point.input_data)
        self.assertEqual({sender: None}, input_data)
        self.assertTrue(mock_callback.called)

    def test_serialize_input_data(self):
        res = sync_point.serialize_input_data({(3, 8): None})
        self.assertEqual({'input_data': {u'tuple:(3, 8)': None}}, res)

    @mock.patch('heat.engine.sync_point.update_input_data',
                return_value=None)
    @mock.patch('time.sleep', side_effect=exception.DBError)
    def sync_with_sleep(self, ctx, stack, mock_sleep_time, mock_uid):
        resource = stack['C']
        graph = stack.convergence_dependencies.graph()
        mock_callback = mock.Mock()
        sender = (3, True)
        self.assertRaises(exception.DBError, sync_point.sync, ctx,
                          resource.id, stack.current_traversal, True,
                          mock_callback, set(graph[(resource.id, True)]),
                          {sender: None})
        return mock_sleep_time

    def test_sync_with_time_throttle(self):
        ctx = utils.dummy_context()
        stack = tools.get_stack('test_stack', utils.dummy_context(),
                                template=tools.string_template_five,
                                convergence=True)
        stack.converge_stack(stack.t, action=stack.CREATE)
        mock_sleep_time = self.sync_with_sleep(ctx, stack)
        self.assertTrue(mock_sleep_time.called)
heat-10.0.2/heat/tests/engine/test_node_data.py0000666000175000017500000000443613343562340021430 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
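# Illustration (not part of this module): the tuple-key serialization
# checked by test_serialize_input_data above, round-tripped. JSON columns
# cannot hold tuple keys, so senders like (3, 8) are tagged and stringified
# on the way in and rebuilt on the way out. This assumes the deserializer
# accepts the full serialized dict, which matches the tests above passing
# the stored column value straight through.

from heat.engine import sync_point

data = {(3, 8): None}
wire = sync_point.serialize_input_data(data)
# wire == {'input_data': {u'tuple:(3, 8)': None}}
assert sync_point.deserialize_input_data(wire) == data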
from heat.engine import node_data
from heat.tests import common


def make_test_data():
    return {
        'id': 42,
        'name': 'foo',
        'reference_id': 'foo-000000',
        'attrs': {
            'foo': 'bar',
            ('foo', 'bar', 'baz'): 'quux',
            ('blarg', 'wibble'): 'foo',
        },
        'action': 'CREATE',
        'status': 'COMPLETE',
        'uuid': '000000-0000-0000-0000000',
    }


def make_test_node():
    return node_data.NodeData.from_dict(make_test_data())


class NodeDataTest(common.HeatTestCase):
    def test_round_trip(self):
        in_dict = make_test_data()
        self.assertEqual(in_dict,
                         node_data.NodeData.from_dict(in_dict).as_dict())

    def test_resource_key(self):
        nd = make_test_node()
        self.assertEqual(42, nd.primary_key)

    def test_resource_name(self):
        nd = make_test_node()
        self.assertEqual('foo', nd.name)

    def test_action(self):
        nd = make_test_node()
        self.assertEqual('CREATE', nd.action)

    def test_status(self):
        nd = make_test_node()
        self.assertEqual('COMPLETE', nd.status)

    def test_refid(self):
        nd = make_test_node()
        self.assertEqual('foo-000000', nd.reference_id())

    def test_all_attrs(self):
        nd = make_test_node()
        self.assertEqual({'foo': 'bar'}, nd.attributes())

    def test_attr(self):
        nd = make_test_node()
        self.assertEqual('bar', nd.attribute('foo'))

    def test_path_attr(self):
        nd = make_test_node()
        self.assertEqual('quux', nd.attribute(('foo', 'bar', 'baz')))

    def test_attr_names(self):
        nd = make_test_node()
        self.assertEqual({'foo', 'blarg'}, set(nd.attribute_names()))
heat-10.0.2/heat/tests/test_stack_user.py0000666000175000017500000004055613343562352020412 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
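# Illustration (not part of this module): the NodeData mapping exercised by
# the tests above is the unit of data the convergence engine passes between
# resources, with attribute paths stored as tuple keys. The values below
# are made up.

from heat.engine import node_data

nd = node_data.NodeData.from_dict({
    'id': 7,
    'name': 'server',
    'reference_id': 'server-0001',
    'attrs': {('networks', 'private'): '10.0.0.5'},
    'action': 'CREATE',
    'status': 'COMPLETE',
    'uuid': 'aaaa-bbbb',
})
# Path-style lookup mirrors test_path_attr; as_dict() round-trips the
# original structure, as test_round_trip verifies.
assert nd.attribute(('networks', 'private')) == '10.0.0.5'
assert nd.as_dict()['attrs'][('networks', 'private')] == '10.0.0.5'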
from keystoneauth1 import exceptions as kc_exceptions import six from heat.common import exception from heat.common import short_id from heat.common import template_format from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks from heat.engine.resources import stack_user from heat.engine import scheduler from heat.objects import resource_data as resource_data_object from heat.tests import common from heat.tests import utils user_template = ''' heat_template_version: 2013-05-23 resources: user: type: StackUserResourceType ''' class StackUserTest(common.HeatTestCase): def setUp(self): super(StackUserTest, self).setUp() self.fc = fake_ks.FakeKeystoneClient() def _user_create(self, stack_name, project_id, user_id, resource_name='user', create_project=True, password=None): t = template_format.parse(user_template) self.stack = utils.parse_stack(t, stack_name=stack_name) rsrc = self.stack[resource_name] self.m.StubOutWithMock(stack_user.StackUser, 'keystone') stack_user.StackUser.keystone().MultipleTimes().AndReturn(self.fc) if create_project: self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'create_stack_domain_project') fake_ks.FakeKeystoneClient.create_stack_domain_project( self.stack.id).AndReturn(project_id) else: self.stack.set_stack_user_project_id(project_id) rsrc.store() self.m.StubOutWithMock(short_id, 'get_id') short_id.get_id(rsrc.uuid).MultipleTimes().AndReturn('aabbcc') self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'create_stack_domain_user') expected_username = '%s-%s-%s' % (stack_name, resource_name, 'aabbcc') fake_ks.FakeKeystoneClient.create_stack_domain_user( username=expected_username, password=password, project_id=project_id).AndReturn(user_id) return rsrc def test_handle_create_no_stack_project(self): rsrc = self._user_create(stack_name='stackuser_crnoprj', project_id='aproject123', user_id='auser123') self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rs_data = resource_data_object.ResourceData.get_all(rsrc) self.assertEqual({'user_id': 'auser123'}, rs_data) self.m.VerifyAll() def test_handle_create_existing_project(self): rsrc = self._user_create(stack_name='stackuser_crexistprj', project_id='aproject456', user_id='auser456', create_project=False) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rs_data = resource_data_object.ResourceData.get_all(rsrc) self.assertEqual({'user_id': 'auser456'}, rs_data) self.m.VerifyAll() def test_handle_delete(self): rsrc = self._user_create(stack_name='stackuser_testdel', project_id='aprojectdel', user_id='auserdel') self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'delete_stack_domain_user') fake_ks.FakeKeystoneClient.delete_stack_domain_user( user_id='auserdel', project_id='aprojectdel').AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_handle_delete_not_found(self): rsrc = self._user_create(stack_name='stackuser_testdel_notfound', project_id='aprojectdel2', user_id='auserdel2') self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'delete_stack_domain_user') fake_ks.FakeKeystoneClient.delete_stack_domain_user( user_id='auserdel2', project_id='aprojectdel2').AndRaise( kc_exceptions.NotFound) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, 
rsrc.COMPLETE), rsrc.state) scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_handle_delete_noid(self): rsrc = self._user_create(stack_name='stackuser_testdel_noid', project_id='aprojectdel2', user_id='auserdel2') self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) resource_data_object.ResourceData.delete(rsrc, 'user_id') scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_handle_suspend(self): rsrc = self._user_create(stack_name='stackuser_testsusp', project_id='aprojectdel', user_id='auserdel') self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'disable_stack_domain_user') fake_ks.FakeKeystoneClient.disable_stack_domain_user( user_id='auserdel', project_id='aprojectdel').AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) scheduler.TaskRunner(rsrc.suspend)() self.assertEqual((rsrc.SUSPEND, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_handle_suspend_legacy(self): rsrc = self._user_create(stack_name='stackuser_testsusp_lgcy', project_id='aprojectdel', user_id='auserdel') self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'disable_stack_domain_user') fake_ks.FakeKeystoneClient.disable_stack_domain_user( user_id='auserdel', project_id='aprojectdel').AndRaise(ValueError) self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'disable_stack_user') fake_ks.FakeKeystoneClient.disable_stack_user( user_id='auserdel').AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) scheduler.TaskRunner(rsrc.suspend)() self.assertEqual((rsrc.SUSPEND, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_handle_resume(self): rsrc = self._user_create(stack_name='stackuser_testresume', project_id='aprojectdel', user_id='auserdel') self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'enable_stack_domain_user') fake_ks.FakeKeystoneClient.enable_stack_domain_user( user_id='auserdel', project_id='aprojectdel').AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.state_set(rsrc.SUSPEND, rsrc.COMPLETE) scheduler.TaskRunner(rsrc.resume)() self.assertEqual((rsrc.RESUME, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_handle_resume_legacy(self): rsrc = self._user_create(stack_name='stackuser_testresume_lgcy', project_id='aprojectdel', user_id='auserdel') self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'enable_stack_domain_user') fake_ks.FakeKeystoneClient.enable_stack_domain_user( user_id='auserdel', project_id='aprojectdel').AndRaise(ValueError) self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'enable_stack_user') fake_ks.FakeKeystoneClient.enable_stack_user( user_id='auserdel').AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.state_set(rsrc.SUSPEND, rsrc.COMPLETE) scheduler.TaskRunner(rsrc.resume)() self.assertEqual((rsrc.RESUME, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_keypair(self): rsrc = self._user_create(stack_name='stackuser_test_cr_keypair', project_id='aprojectdel', user_id='auserdel') # create_stack_domain_user_keypair(self, user_id, project_id): self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'create_stack_domain_user_keypair') 
fake_ks.FakeKeystoneClient.create_stack_domain_user_keypair( user_id='auserdel', project_id='aprojectdel').AndReturn( self.fc.creds) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) kp = rsrc._create_keypair() self.assertEqual(self.fc.credential_id, kp.id) self.assertEqual(self.fc.access, kp.access) self.assertEqual(self.fc.secret, kp.secret) rs_data = resource_data_object.ResourceData.get_all(rsrc) self.assertEqual(self.fc.credential_id, rs_data['credential_id']) self.assertEqual(self.fc.access, rs_data['access_key']) self.assertEqual(self.fc.secret, rs_data['secret_key']) self.m.VerifyAll() def test_create_keypair_error(self): rsrc = self._user_create(stack_name='stackuser_test_cr_keypair_err', project_id='aprojectdel', user_id='auserdel') # create_stack_domain_user_keypair(self, user_id, project_id): self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'create_stack_domain_user_keypair') fake_ks.FakeKeystoneClient.create_stack_domain_user_keypair( user_id='auserdel', project_id='aprojectdel').AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertRaises(exception.Error, rsrc._create_keypair) self.m.VerifyAll() def test_delete_keypair(self): rsrc = self._user_create(stack_name='stackuser_testdel_keypair', project_id='aprojectdel', user_id='auserdel') self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'delete_stack_domain_user_keypair') fake_ks.FakeKeystoneClient.delete_stack_domain_user_keypair( user_id='auserdel', project_id='aprojectdel', credential_id='acredential').AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.data_set('credential_id', 'acredential') rsrc.data_set('access_key', 'access123') rsrc.data_set('secret_key', 'verysecret') rsrc._delete_keypair() rs_data = resource_data_object.ResourceData.get_all(rsrc) self.assertEqual({'user_id': 'auserdel'}, rs_data) self.m.VerifyAll() def test_delete_keypair_no_credential_id(self): rsrc = self._user_create(stack_name='stackuser_del_keypair_nocrdid', project_id='aprojectdel', user_id='auserdel') rsrc._delete_keypair() def test_delete_keypair_legacy(self): rsrc = self._user_create(stack_name='stackuser_testdel_keypair_lgcy', project_id='aprojectdel', user_id='auserdel') self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'delete_stack_domain_user_keypair') fake_ks.FakeKeystoneClient.delete_stack_domain_user_keypair( user_id='auserdel', project_id='aprojectdel', credential_id='acredential').AndRaise(ValueError()) self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'delete_ec2_keypair') fake_ks.FakeKeystoneClient.delete_ec2_keypair( user_id='auserdel', credential_id='acredential').AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.data_set('credential_id', 'acredential') rsrc.data_set('access_key', 'access123') rsrc.data_set('secret_key', 'verysecret') rsrc._delete_keypair() rs_data = resource_data_object.ResourceData.get_all(rsrc) self.assertEqual({'user_id': 'auserdel'}, rs_data) self.m.VerifyAll() def test_delete_keypair_notfound(self): rsrc = self._user_create(stack_name='stackuser_testdel_kpr_notfound', project_id='aprojectdel', user_id='auserdel') self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'delete_stack_domain_user_keypair') fake_ks.FakeKeystoneClient.delete_stack_domain_user_keypair( user_id='auserdel', 
project_id='aprojectdel', credential_id='acredential').AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.data_set('credential_id', 'acredential') rsrc._delete_keypair() rs_data = resource_data_object.ResourceData.get_all(rsrc) self.assertEqual({'user_id': 'auserdel'}, rs_data) self.m.VerifyAll() def test_user_token(self): rsrc = self._user_create(stack_name='stackuser_testtoken', project_id='aproject123', user_id='aabbcc', password='apassword') self.m.StubOutWithMock(fake_ks.FakeKeystoneClient, 'stack_domain_user_token') fake_ks.FakeKeystoneClient.stack_domain_user_token( user_id='aabbcc', project_id='aproject123', password='apassword').AndReturn('atoken123') self.m.ReplayAll() rsrc.password = 'apassword' scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertEqual('atoken123', rsrc._user_token()) self.m.VerifyAll() def test_user_token_err_nopassword(self): rsrc = self._user_create(stack_name='stackuser_testtoken_err_nopwd', project_id='aproject123', user_id='auser123') self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) ex = self.assertRaises(ValueError, rsrc._user_token) expected = "Can't get user token without password" self.assertEqual(expected, six.text_type(ex)) self.m.VerifyAll() def test_user_token_err_noproject(self): stack_name = 'user_token_err_noprohect_stack' resource_name = 'user' t = template_format.parse(user_template) stack = utils.parse_stack(t, stack_name=stack_name) rsrc = stack[resource_name] ex = self.assertRaises(ValueError, rsrc._user_token) expected = "Can't get user token, user not yet created" self.assertEqual(expected, six.text_type(ex)) heat-10.0.2/heat/tests/autoscaling/0000775000175000017500000000000013343562672017144 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/autoscaling/test_heat_scaling_group.py0000666000175000017500000007715513343562351024425 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
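# Condensed sketch (not part of this module) of the legacy-fallback pattern
# that test_handle_suspend_legacy, test_handle_resume_legacy and
# test_delete_keypair_legacy pin down above: the stack-domain keystone call
# is tried first, and the pre-domain call is used only when it raises. The
# helper function below is hypothetical.

def disable_user(client, user_id, project_id):
    try:
        # preferred path: keystone v3 stack-domain user
        client.disable_stack_domain_user(user_id=user_id,
                                         project_id=project_id)
    except ValueError:
        # legacy fallback for keystone without stack-domain support
        client.disable_stack_user(user_id=user_id)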
import datetime import json import mock from oslo_utils import timeutils import six from heat.common import exception from heat.common import grouputils from heat.common import template_format from heat.engine import resource from heat.engine import rsrc_defn from heat.tests.autoscaling import inline_templates from heat.tests import common from heat.tests import utils class TestAutoScalingGroupValidation(common.HeatTestCase): def setUp(self): super(TestAutoScalingGroupValidation, self).setUp() self.parsed = template_format.parse(inline_templates.as_heat_template) def test_invalid_min_size(self): self.parsed['resources']['my-group']['properties']['min_size'] = -1 stack = utils.parse_stack(self.parsed) self.assertRaises(exception.StackValidationFailed, stack['my-group'].validate) def test_invalid_max_size(self): self.parsed['resources']['my-group']['properties']['max_size'] = -1 stack = utils.parse_stack(self.parsed) self.assertRaises(exception.StackValidationFailed, stack['my-group'].validate) def test_validate_reference_attr_with_none_ref(self): stack = utils.parse_stack(self.parsed) group = stack['my-group'] self.patchobject(group, 'referenced_attrs', return_value=set([('something', None)])) self.assertIsNone(group.validate()) class TestScalingGroupTags(common.HeatTestCase): def setUp(self): super(TestScalingGroupTags, self).setUp() t = template_format.parse(inline_templates.as_heat_template) self.stack = utils.parse_stack(t, params=inline_templates.as_params) self.group = self.stack['my-group'] def test_tags_default(self): expected = [{'Key': 'metering.groupname', 'Value': u'my-group'}, {'Key': 'metering.AutoScalingGroupName', 'Value': u'my-group'}] self.assertEqual(expected, self.group._tags()) def test_tags_with_extra(self): self.group.properties.data['Tags'] = [ {'Key': 'fee', 'Value': 'foo'}] expected = [{'Key': 'metering.groupname', 'Value': u'my-group'}, {'Key': 'metering.AutoScalingGroupName', 'Value': u'my-group'}] self.assertEqual(expected, self.group._tags()) def test_tags_with_metering(self): self.group.properties.data['Tags'] = [ {'Key': 'metering.fee', 'Value': 'foo'}] expected = [{'Key': 'metering.groupname', 'Value': 'my-group'}, {'Key': 'metering.AutoScalingGroupName', 'Value': u'my-group'}] self.assertEqual(expected, self.group._tags()) class TestInitialGroupSize(common.HeatTestCase): scenarios = [ ('000', dict(mins=0, maxs=0, desired=0, expected=0)), ('040', dict(mins=0, maxs=4, desired=0, expected=0)), ('253', dict(mins=2, maxs=5, desired=3, expected=3)), ('14n', dict(mins=1, maxs=4, desired=None, expected=1)), ] def test_initial_size(self): t = template_format.parse(inline_templates.as_heat_template) properties = t['resources']['my-group']['properties'] properties['min_size'] = self.mins properties['max_size'] = self.maxs properties['desired_capacity'] = self.desired stack = utils.parse_stack(t, params=inline_templates.as_params) group = stack['my-group'] with mock.patch.object(group, '_create_template') as mock_cre_temp: group.child_template() mock_cre_temp.assert_called_once_with(self.expected) class TestGroupAdjust(common.HeatTestCase): def setUp(self): super(TestGroupAdjust, self).setUp() t = template_format.parse(inline_templates.as_heat_template) self.stack = utils.parse_stack(t, params=inline_templates.as_params) self.group = self.stack['my-group'] self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_SnapshotConstraint_validate() self.assertIsNone(self.group.validate()) def test_group_metadata_reset(self): 
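        # A scaling run records 'scaling_in_progress': True in the group's
        # metadata; the reset handler must flip it back to False so later
        # adjustments are not blocked by a stale flag.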
self.group.state_set('CREATE', 'COMPLETE') metadata = {'scaling_in_progress': True} self.group.metadata_set(metadata) self.group.handle_metadata_reset() new_metadata = self.group.metadata_get() self.assertEqual({'scaling_in_progress': False}, new_metadata) def test_scaling_policy_cooldown_toosoon(self): dont_call = self.patchobject(self.group, 'resize') self.patchobject(self.group, '_check_scaling_allowed', side_effect=resource.NoActionRequired) self.assertRaises(resource.NoActionRequired, self.group.adjust, 1) self.assertEqual([], dont_call.call_args_list) def test_scaling_same_capacity(self): """Don't resize when capacity is the same.""" self.patchobject(grouputils, 'get_size', return_value=3) resize = self.patchobject(self.group, 'resize') finished_scaling = self.patchobject(self.group, '_finished_scaling') notify = self.patch('heat.engine.notification.autoscaling.send') self.assertRaises(resource.NoActionRequired, self.group.adjust, 3, adjustment_type='ExactCapacity') expected_notifies = [] self.assertEqual(expected_notifies, notify.call_args_list) self.assertEqual(0, resize.call_count) self.assertEqual(0, finished_scaling.call_count) def test_scale_up_min_adjustment(self): self.patchobject(grouputils, 'get_size', return_value=1) resize = self.patchobject(self.group, 'resize') finished_scaling = self.patchobject(self.group, '_finished_scaling') notify = self.patch('heat.engine.notification.autoscaling.send') self.patchobject(self.group, '_check_scaling_allowed') self.group.adjust(33, adjustment_type='PercentChangeInCapacity', min_adjustment_step=2) expected_notifies = [ mock.call( capacity=1, suffix='start', adjustment_type='PercentChangeInCapacity', groupname=u'my-group', message=u'Start resizing the group my-group', adjustment=33, stack=self.group.stack), mock.call( capacity=3, suffix='end', adjustment_type='PercentChangeInCapacity', groupname=u'my-group', message=u'End resizing the group my-group', adjustment=33, stack=self.group.stack)] self.assertEqual(expected_notifies, notify.call_args_list) resize.assert_called_once_with(3) finished_scaling.assert_called_once_with( None, 'PercentChangeInCapacity : 33', size_changed=True) def test_scale_down_min_adjustment(self): self.patchobject(grouputils, 'get_size', return_value=3) resize = self.patchobject(self.group, 'resize') finished_scaling = self.patchobject(self.group, '_finished_scaling') notify = self.patch('heat.engine.notification.autoscaling.send') self.patchobject(self.group, '_check_scaling_allowed') self.group.adjust(-33, adjustment_type='PercentChangeInCapacity', min_adjustment_step=2) expected_notifies = [ mock.call( capacity=3, suffix='start', adjustment_type='PercentChangeInCapacity', groupname=u'my-group', message=u'Start resizing the group my-group', adjustment=-33, stack=self.group.stack), mock.call( capacity=1, suffix='end', adjustment_type='PercentChangeInCapacity', groupname=u'my-group', message=u'End resizing the group my-group', adjustment=-33, stack=self.group.stack)] self.assertEqual(expected_notifies, notify.call_args_list) resize.assert_called_once_with(1) finished_scaling.assert_called_once_with( None, 'PercentChangeInCapacity : -33', size_changed=True) def test_scaling_policy_cooldown_ok(self): self.patchobject(grouputils, 'get_size', return_value=0) resize = self.patchobject(self.group, 'resize') finished_scaling = self.patchobject(self.group, '_finished_scaling') notify = self.patch('heat.engine.notification.autoscaling.send') self.patchobject(self.group, '_check_scaling_allowed') self.group.adjust(1) 
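        # adjust() should emit exactly two autoscaling notifications: a
        # 'start' carrying the old capacity and an 'end' carrying the new
        # one, in that order.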
expected_notifies = [ mock.call( capacity=0, suffix='start', adjustment_type='ChangeInCapacity', groupname=u'my-group', message=u'Start resizing the group my-group', adjustment=1, stack=self.group.stack), mock.call( capacity=1, suffix='end', adjustment_type='ChangeInCapacity', groupname=u'my-group', message=u'End resizing the group my-group', adjustment=1, stack=self.group.stack)] self.assertEqual(expected_notifies, notify.call_args_list) resize.assert_called_once_with(1) finished_scaling.assert_called_once_with(None, 'ChangeInCapacity : 1', size_changed=True) grouputils.get_size.assert_called_once_with(self.group) def test_scaling_policy_resize_fail(self): self.patchobject(grouputils, 'get_size', return_value=0) self.patchobject(self.group, 'resize', side_effect=ValueError('test error')) notify = self.patch('heat.engine.notification.autoscaling.send') self.patchobject(self.group, '_check_scaling_allowed') self.patchobject(self.group, '_finished_scaling') self.assertRaises(ValueError, self.group.adjust, 1) expected_notifies = [ mock.call( capacity=0, suffix='start', adjustment_type='ChangeInCapacity', groupname=u'my-group', message=u'Start resizing the group my-group', adjustment=1, stack=self.group.stack), mock.call( capacity=0, suffix='error', adjustment_type='ChangeInCapacity', groupname=u'my-group', message=u'test error', adjustment=1, stack=self.group.stack)] self.assertEqual(expected_notifies, notify.call_args_list) grouputils.get_size.assert_called_with(self.group) def test_notification_send_if_resize_failed(self): """If resize failed, the capacity of group might have been changed""" self.patchobject(grouputils, 'get_size', side_effect=[3, 4]) self.patchobject(self.group, 'resize', side_effect=ValueError('test error')) notify = self.patch('heat.engine.notification.autoscaling.send') self.patchobject(self.group, '_check_scaling_allowed') self.patchobject(self.group, '_finished_scaling') self.assertRaises(ValueError, self.group.adjust, 5, adjustment_type='ExactCapacity') expected_notifies = [ mock.call( capacity=3, suffix='start', adjustment_type='ExactCapacity', groupname='my-group', message='Start resizing the group my-group', adjustment=5, stack=self.group.stack), mock.call( capacity=4, suffix='error', adjustment_type='ExactCapacity', groupname='my-group', message=u'test error', adjustment=5, stack=self.group.stack)] self.assertEqual(expected_notifies, notify.call_args_list) self.group.resize.assert_called_once_with(5) grouputils.get_size.assert_has_calls([mock.call(self.group), mock.call(self.group)]) class TestGroupCrud(common.HeatTestCase): def setUp(self): super(TestGroupCrud, self).setUp() self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_SnapshotConstraint_validate() t = template_format.parse(inline_templates.as_heat_template) self.stack = utils.parse_stack(t, params=inline_templates.as_params) self.group = self.stack['my-group'] self.assertIsNone(self.group.validate()) def test_handle_create(self): self.group.create_with_template = mock.Mock(return_value=None) self.group.child_template = mock.Mock(return_value='{}') self.group.handle_create() self.group.child_template.assert_called_once_with() self.group.create_with_template.assert_called_once_with('{}') def test_handle_update_desired_cap(self): self.group._try_rolling_update = mock.Mock(return_value=None) self.group.resize = mock.Mock(return_value=None) props = {'desired_capacity': 4, 'min_size': 0, 'max_size': 6} defn = rsrc_defn.ResourceDefinition( 'nopayload', 
'OS::Heat::AutoScalingGroup', props) self.group.handle_update(defn, None, props) self.group.resize.assert_called_once_with(4) self.group._try_rolling_update.assert_called_once_with(props) def test_handle_update_desired_nocap(self): self.group._try_rolling_update = mock.Mock(return_value=None) self.group.resize = mock.Mock(return_value=None) get_size = self.patchobject(grouputils, 'get_size') get_size.return_value = 6 props = {'min_size': 0, 'max_size': 6} defn = rsrc_defn.ResourceDefinition( 'nopayload', 'OS::Heat::AutoScalingGroup', props) self.group.handle_update(defn, None, props) self.group.resize.assert_called_once_with(6) self.group._try_rolling_update.assert_called_once_with(props) def test_update_in_failed(self): self.group.state_set('CREATE', 'FAILED') # to update the failed asg self.group.resize = mock.Mock(return_value=None) new_defn = rsrc_defn.ResourceDefinition( 'asg', 'OS::Heat::AutoScalingGroup', {'AvailabilityZones': ['nova'], 'LaunchConfigurationName': 'config', 'max_size': 5, 'min_size': 1, 'desired_capacity': 2, 'resource': {'type': 'ResourceWithPropsAndAttrs', 'properties': { 'Foo': 'hello'}}}) self.group.handle_update(new_defn, None, None) self.group.resize.assert_called_once_with(2) class HeatScalingGroupAttrTest(common.HeatTestCase): def setUp(self): super(HeatScalingGroupAttrTest, self).setUp() t = template_format.parse(inline_templates.as_heat_template) self.stack = utils.parse_stack(t, params=inline_templates.as_params) self.group = self.stack['my-group'] self.assertIsNone(self.group.validate()) def test_no_instance_list(self): """Tests inheritance of InstanceList attribute. The InstanceList attribute is not inherited from AutoScalingResourceGroup's superclasses. """ self.assertRaises(exception.InvalidTemplateAttribute, self.group.FnGetAtt, 'InstanceList') def test_output_attribute_list(self): mock_members = self.patchobject(grouputils, 'get_members') members = [] output = [] for ip_ex in six.moves.range(1, 4): inst = mock.Mock() inst.FnGetAtt.return_value = '2.1.3.%d' % ip_ex output.append('2.1.3.%d' % ip_ex) members.append(inst) mock_members.return_value = members self.assertEqual(output, self.group.FnGetAtt('outputs_list', 'Bar')) def test_output_refs(self): # Setup mock_get = self.patchobject(grouputils, 'get_member_refids') mock_get.return_value = ['resource-1', 'resource-2'] # Test found = self.group.FnGetAtt('refs') # Verify expected = ['resource-1', 'resource-2'] self.assertEqual(expected, found) mock_get.assert_called_once_with(self.group) def test_output_refs_map(self): # Setup mock_members = self.patchobject(grouputils, 'get_members') members = [mock.MagicMock(), mock.MagicMock()] members[0].name = 'resource-1-name' members[0].resource_id = 'resource-1-id' members[1].name = 'resource-2-name' members[1].resource_id = 'resource-2-id' mock_members.return_value = members # Test found = self.group.FnGetAtt('refs_map') # Verify expected = {'resource-1-name': 'resource-1-id', 'resource-2-name': 'resource-2-id'} self.assertEqual(expected, found) def test_output_attribute_dict(self): mock_members = self.patchobject(grouputils, 'get_members') members = [] output = {} for ip_ex in six.moves.range(1, 4): inst = mock.Mock() inst.name = str(ip_ex) inst.FnGetAtt.return_value = '2.1.3.%d' % ip_ex output[str(ip_ex)] = '2.1.3.%d' % ip_ex members.append(inst) mock_members.return_value = members self.assertEqual(output, self.group.FnGetAtt('outputs', 'Bar')) def test_attribute_current_size(self): mock_instances = self.patchobject(grouputils, 'get_size') 
mock_instances.return_value = 3 self.assertEqual(3, self.group.FnGetAtt('current_size')) def test_attribute_current_size_with_path(self): mock_instances = self.patchobject(grouputils, 'get_size') mock_instances.return_value = 4 self.assertEqual(4, self.group.FnGetAtt('current_size', 'name')) def test_index_dotted_attribute(self): mock_members = self.patchobject(grouputils, 'get_members') self.group.nested = mock.Mock() members = [] output = [] for ip_ex in six.moves.range(0, 2): inst = mock.Mock() inst.name = str(ip_ex) inst.FnGetAtt.return_value = '2.1.3.%d' % ip_ex output.append('2.1.3.%d' % ip_ex) members.append(inst) mock_members.return_value = members self.assertEqual(output[0], self.group.FnGetAtt('resource.0', 'Bar')) self.assertEqual(output[1], self.group.FnGetAtt('resource.1.Bar')) self.assertRaises(exception.NotFound, self.group.FnGetAtt, 'resource.2') def asg_tmpl_with_bad_updt_policy(): t = template_format.parse(inline_templates.as_heat_template) agp = t['resources']['my-group']['properties'] agp['rolling_updates'] = {"foo": {}} return json.dumps(t) def asg_tmpl_with_default_updt_policy(): t = template_format.parse(inline_templates.as_heat_template) return json.dumps(t) def asg_tmpl_with_updt_policy(props=None): t = template_format.parse(inline_templates.as_heat_template) agp = t['resources']['my-group']['properties'] agp['rolling_updates'] = { "min_in_service": 1, "max_batch_size": 2, "pause_time": 1 } if props is not None: agp.update(props) return json.dumps(t) class RollingUpdatePolicyTest(common.HeatTestCase): def setUp(self): super(RollingUpdatePolicyTest, self).setUp() self.stub_keystoneclient(username='test_stack.CfnLBUser') def test_parse_without_update_policy(self): tmpl = template_format.parse(inline_templates.as_heat_template) stack = utils.parse_stack(tmpl) stack.validate() grp = stack['my-group'] default_policy = { 'min_in_service': 0, 'pause_time': 0, 'max_batch_size': 1 } self.assertEqual(default_policy, grp.properties['rolling_updates']) def test_parse_with_update_policy(self): tmpl = template_format.parse(asg_tmpl_with_updt_policy()) stack = utils.parse_stack(tmpl) stack.validate() tmpl_grp = tmpl['resources']['my-group'] tmpl_policy = tmpl_grp['properties']['rolling_updates'] tmpl_batch_sz = int(tmpl_policy['max_batch_size']) policy = stack['my-group'].properties['rolling_updates'] self.assertTrue(policy) self.assertEqual(3, len(policy)) self.assertEqual(1, int(policy['min_in_service'])) self.assertEqual(tmpl_batch_sz, int(policy['max_batch_size'])) self.assertEqual(1, policy['pause_time']) def test_parse_with_default_update_policy(self): tmpl = template_format.parse(asg_tmpl_with_default_updt_policy()) stack = utils.parse_stack(tmpl) stack.validate() policy = stack['my-group'].properties['rolling_updates'] self.assertTrue(policy) self.assertEqual(3, len(policy)) self.assertEqual(0, int(policy['min_in_service'])) self.assertEqual(1, int(policy['max_batch_size'])) self.assertEqual(0, policy['pause_time']) def test_parse_with_bad_update_policy(self): tmpl = template_format.parse(asg_tmpl_with_bad_updt_policy()) stack = utils.parse_stack(tmpl) error = self.assertRaises( exception.StackValidationFailed, stack.validate) self.assertIn("foo", six.text_type(error)) def test_parse_with_bad_pausetime_in_update_policy(self): tmpl = template_format.parse(asg_tmpl_with_default_updt_policy()) group = tmpl['resources']['my-group'] group['properties']['rolling_updates'] = {'pause_time': 'a-string'} stack = utils.parse_stack(tmpl) error = self.assertRaises( 
exception.StackValidationFailed, stack.validate) self.assertIn("could not convert string to float", six.text_type(error)) class RollingUpdatePolicyDiffTest(common.HeatTestCase): def setUp(self): super(RollingUpdatePolicyDiffTest, self).setUp() self.stub_keystoneclient(username='test_stack.CfnLBUser') def validate_update_policy_diff(self, current, updated): # load current stack current_tmpl = template_format.parse(current) current_stack = utils.parse_stack(current_tmpl) # get the json snippet for the current InstanceGroup resource current_grp = current_stack['my-group'] current_snippets = dict((n, r.frozen_definition()) for n, r in current_stack.items()) current_grp_json = current_snippets[current_grp.name] # load the updated stack updated_tmpl = template_format.parse(updated) updated_stack = utils.parse_stack(updated_tmpl) # get the updated json snippet for the InstanceGroup resource in the # context of the current stack updated_grp = updated_stack['my-group'] updated_grp_json = updated_grp.t.freeze() # identify the template difference tmpl_diff = updated_grp.update_template_diff( updated_grp_json, current_grp_json) updated_policy = (updated_grp.properties['rolling_updates'] if 'rolling_updates' in updated_grp.properties.data else None) self.assertTrue(tmpl_diff.properties_changed()) current_grp._try_rolling_update = mock.MagicMock() current_grp.resize = mock.MagicMock() current_grp.handle_update(updated_grp_json, tmpl_diff, None) if updated_policy is None: self.assertIsNone( current_grp.properties.data.get('rolling_updates')) else: self.assertEqual(updated_policy, current_grp.properties.data['rolling_updates']) def test_update_policy_added(self): self.validate_update_policy_diff(inline_templates.as_heat_template, asg_tmpl_with_updt_policy()) def test_update_policy_updated(self): extra_props = {'rolling_updates': { 'min_in_service': 2, 'max_batch_size': 4, 'pause_time': 30}} self.validate_update_policy_diff( asg_tmpl_with_updt_policy(), asg_tmpl_with_updt_policy(props=extra_props)) def test_update_policy_removed(self): self.validate_update_policy_diff(asg_tmpl_with_updt_policy(), inline_templates.as_heat_template) class IncorrectUpdatePolicyTest(common.HeatTestCase): def setUp(self): super(IncorrectUpdatePolicyTest, self).setUp() self.stub_keystoneclient(username='test_stack.CfnLBUser') def test_with_update_policy_aws(self): t = template_format.parse(inline_templates.as_heat_template) ag = t['resources']['my-group'] ag["update_policy"] = {"AutoScalingRollingUpdate": { "MinInstancesInService": "1", "MaxBatchSize": "2", "PauseTime": "PT1S" }} tmpl = template_format.parse(json.dumps(t)) stack = utils.parse_stack(tmpl) exc = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertIn('Unknown Property AutoScalingRollingUpdate', six.text_type(exc)) def test_with_update_policy_inst_group(self): t = template_format.parse(inline_templates.as_heat_template) ag = t['resources']['my-group'] ag["update_policy"] = {"RollingUpdate": { "MinInstancesInService": "1", "MaxBatchSize": "2", "PauseTime": "PT1S" }} tmpl = template_format.parse(json.dumps(t)) stack = utils.parse_stack(tmpl) exc = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertIn('Unknown Property RollingUpdate', six.text_type(exc)) class TestCooldownMixin(common.HeatTestCase): def setUp(self): super(TestCooldownMixin, self).setUp() t = template_format.parse(inline_templates.as_heat_template) self.stack = utils.parse_stack(t, params=inline_templates.as_params) self.stack.store() self.group = 
self.stack['my-group'] self.group.state_set('CREATE', 'COMPLETE') def test_cooldown_is_in_progress_toosoon(self): cooldown_end = timeutils.utcnow() + datetime.timedelta(seconds=60) previous_meta = {'cooldown_end': { cooldown_end.isoformat(): 'change_in_capacity : 1'}} self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertRaises(resource.NoActionRequired, self.group._check_scaling_allowed, 60) def test_cooldown_is_in_progress_toosoon_legacy(self): now = timeutils.utcnow() previous_meta = {'cooldown': { now.isoformat(): 'change_in_capacity : 1'}} self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertRaises(resource.NoActionRequired, self.group._check_scaling_allowed, 60) def test_cooldown_is_in_progress_scaling_unfinished(self): previous_meta = {'scaling_in_progress': True} self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertRaises(resource.NoActionRequired, self.group._check_scaling_allowed, 60) def test_cooldown_not_in_progress_legacy(self): awhile_ago = timeutils.utcnow() - datetime.timedelta(seconds=100) previous_meta = { 'cooldown': { awhile_ago.isoformat(): 'change_in_capacity : 1' }, 'scaling_in_progress': False } self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertIsNone(self.group._check_scaling_allowed(60)) def test_cooldown_not_in_progress(self): awhile_after = timeutils.utcnow() + datetime.timedelta(seconds=60) previous_meta = { 'cooldown_end': { awhile_after.isoformat(): 'change_in_capacity : 1' }, 'scaling_in_progress': False } timeutils.set_time_override() timeutils.advance_time_seconds(100) self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertIsNone(self.group._check_scaling_allowed(60)) timeutils.clear_time_override() def test_scaling_policy_cooldown_zero(self): now = timeutils.utcnow() previous_meta = {'cooldown_end': { now.isoformat(): 'change_in_capacity : 1'}} self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertIsNone(self.group._check_scaling_allowed(0)) def test_scaling_policy_cooldown_none(self): now = timeutils.utcnow() previous_meta = {'cooldown_end': { now.isoformat(): 'change_in_capacity : 1'}} self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertIsNone(self.group._check_scaling_allowed(None)) def test_no_cooldown_no_scaling_in_progress(self): # no cooldown entry in the metadata awhile_ago = timeutils.utcnow() - datetime.timedelta(seconds=100) previous_meta = {'scaling_in_progress': False, awhile_ago.isoformat(): 'change_in_capacity : 1'} self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertIsNone(self.group._check_scaling_allowed(60)) def test_metadata_is_written(self): nowish = timeutils.utcnow() reason = 'cool as' meta_set = self.patchobject(self.group, 'metadata_set') self.patchobject(timeutils, 'utcnow', return_value=nowish) self.group._finished_scaling(60, reason) cooldown_end = nowish + datetime.timedelta(seconds=60) meta_set.assert_called_once_with( {'cooldown_end': {cooldown_end.isoformat(): reason}, 'scaling_in_progress': False}) def test_metadata_is_written_update(self): nowish = timeutils.utcnow() reason = 'cool as' prev_cooldown_end = nowish + datetime.timedelta(seconds=100) previous_meta = { 'cooldown_end': { prev_cooldown_end.isoformat(): 'change_in_capacity : 1' } } self.patchobject(self.group, 'metadata_get', return_value=previous_meta) meta_set = self.patchobject(self.group, 'metadata_set') 
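        # The pre-existing cooldown_end stamp (nowish + 100s) lies further
        # in the future than the nowish + 60s end this call would compute,
        # so _finished_scaling() is expected to keep the old stamp and only
        # swap in the new reason, as asserted below.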
self.patchobject(timeutils, 'utcnow', return_value=nowish) self.group._finished_scaling(60, reason) meta_set.assert_called_once_with( {'cooldown_end': {prev_cooldown_end.isoformat(): reason}, 'scaling_in_progress': False}) heat-10.0.2/heat/tests/autoscaling/test_scaling_template.py0000666000175000017500000001037513343562340024070 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools from heat.scaling import template from heat.tests import common class ResourceTemplatesTest(common.HeatTestCase): def setUp(self): super(ResourceTemplatesTest, self).setUp() ids = ('stubbed-id-%s' % (i,) for i in itertools.count()) self.next_id = lambda: next(ids) def test_create_template(self): """Test case for creating template. When creating a template from scratch, an empty list is accepted as the "old" resources and new resources are created up to num_resource. """ templates = template.member_definitions([], {'type': 'Foo'}, 2, 0, self.next_id) expected = [ ('stubbed-id-0', {'type': 'Foo'}), ('stubbed-id-1', {'type': 'Foo'})] self.assertEqual(expected, list(templates)) def test_replace_template(self): """Test case for replacing template. If num_replace is the number of old resources, then all of the resources will be replaced. """ old_resources = [ ('old-id-0', {'type': 'Foo'}), ('old-id-1', {'type': 'Foo'})] templates = template.member_definitions(old_resources, {'type': 'Bar'}, 1, 2, self.next_id) expected = [('old-id-1', {'type': 'Bar'})] self.assertEqual(expected, list(templates)) def test_replace_some_units(self): """Test case for making only the number of replacements specified. If the resource definition changes, only the number of replacements specified will be made; beyond that, the original templates are used. """ old_resources = [ ('old-id-0', {'type': 'Foo'}), ('old-id-1', {'type': 'Foo'})] new_spec = {'type': 'Bar'} templates = template.member_definitions(old_resources, new_spec, 2, 1, self.next_id) expected = [ ('old-id-0', {'type': 'Bar'}), ('old-id-1', {'type': 'Foo'})] self.assertEqual(expected, list(templates)) def test_growth_counts_as_replacement(self): """Test case for growing template. If we grow the template and replace some elements at the same time, the number of replacements to perform is reduced by the number of new resources to be created. """ spec = {'type': 'Foo'} old_resources = [ ('old-id-0', spec), ('old-id-1', spec)] new_spec = {'type': 'Bar'} templates = template.member_definitions(old_resources, new_spec, 4, 2, self.next_id) expected = [ ('old-id-0', spec), ('old-id-1', spec), ('stubbed-id-0', new_spec), ('stubbed-id-1', new_spec)] self.assertEqual(expected, list(templates)) def test_replace_units_some_already_up_to_date(self): """Test case for up-to-date resources in template. If some of the old resources already have the new resource definition, then they won't be considered for replacement, and the next resource that is out-of-date will be replaced. 
""" old_resources = [ ('old-id-0', {'type': 'Bar'}), ('old-id-1', {'type': 'Foo'})] new_spec = {'type': 'Bar'} templates = template.member_definitions(old_resources, new_spec, 2, 1, self.next_id) second_batch_expected = [ ('old-id-0', {'type': 'Bar'}), ('old-id-1', {'type': 'Bar'})] self.assertEqual(second_batch_expected, list(templates)) heat-10.0.2/heat/tests/autoscaling/test_scaling_policy.py0000666000175000017500000001755013343562340023556 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six from heat.common import exception from heat.common import template_format from heat.engine import node_data from heat.engine import resource from heat.engine.resources.aws.autoscaling import scaling_policy as aws_sp from heat.engine import scheduler from heat.tests.autoscaling import inline_templates from heat.tests import common from heat.tests import utils as_template = inline_templates.as_template as_params = inline_templates.as_params class TestAutoScalingPolicy(common.HeatTestCase): def create_scaling_policy(self, t, stack, resource_name): rsrc = stack[resource_name] self.assertIsNone(rsrc.validate()) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) return rsrc def test_validate_scaling_policy_ok(self): t = template_format.parse(inline_templates.as_template) t['Resources']['WebServerScaleUpPolicy']['Properties'][ 'ScalingAdjustment'] = 33 t['Resources']['WebServerScaleUpPolicy']['Properties'][ 'AdjustmentType'] = 'PercentChangeInCapacity' t['Resources']['WebServerScaleUpPolicy']['Properties'][ 'MinAdjustmentStep'] = 2 stack = utils.parse_stack(t, params=as_params) self.policy = stack['WebServerScaleUpPolicy'] self.assertIsNone(self.policy.validate()) def test_validate_scaling_policy_error(self): t = template_format.parse(inline_templates.as_template) t['Resources']['WebServerScaleUpPolicy']['Properties'][ 'ScalingAdjustment'] = 1 t['Resources']['WebServerScaleUpPolicy']['Properties'][ 'AdjustmentType'] = 'ChangeInCapacity' t['Resources']['WebServerScaleUpPolicy']['Properties'][ 'MinAdjustmentStep'] = 2 stack = utils.parse_stack(t, params=as_params) self.policy = stack['WebServerScaleUpPolicy'] ex = self.assertRaises(exception.ResourcePropertyValueDependency, self.policy.validate) self.assertIn('MinAdjustmentStep property should only ' 'be specified for AdjustmentType with ' 'value PercentChangeInCapacity.', six.text_type(ex)) def test_scaling_policy_bad_group(self): t = template_format.parse(inline_templates.as_template_bad_group) stack = utils.parse_stack(t, params=as_params) up_policy = self.create_scaling_policy(t, stack, 'WebServerScaleUpPolicy') ex = self.assertRaises(exception.ResourceFailure, up_policy.signal) self.assertIn('Alarm WebServerScaleUpPolicy could ' 'not find scaling group', six.text_type(ex)) def test_scaling_policy_adjust_no_action(self): t = template_format.parse(as_template) stack = utils.parse_stack(t, params=as_params) up_policy = self.create_scaling_policy(t, stack, 'WebServerScaleUpPolicy') 
group = stack['WebServerGroup'] self.patchobject(group, 'adjust', side_effect=resource.NoActionRequired()) self.assertRaises(resource.NoActionRequired, up_policy.handle_signal) def test_scaling_policy_adjust_size_changed(self): t = template_format.parse(as_template) stack = utils.parse_stack(t, params=as_params) up_policy = self.create_scaling_policy(t, stack, 'WebServerScaleUpPolicy') group = stack['WebServerGroup'] self.patchobject(group, 'resize') self.patchobject(group, '_lb_reload') mock_fin_scaling = self.patchobject(group, '_finished_scaling') with mock.patch.object(group, '_check_scaling_allowed') as mock_isa: self.assertIsNone(up_policy.handle_signal()) mock_isa.assert_called_once_with(60) mock_fin_scaling.assert_called_once_with(60, 'ChangeInCapacity : 1', size_changed=True) def test_scaling_policy_cooldown_toosoon(self): t = template_format.parse(as_template) stack = utils.parse_stack(t, params=as_params) pol = self.create_scaling_policy(t, stack, 'WebServerScaleUpPolicy') group = stack['WebServerGroup'] test = {'current': 'alarm'} with mock.patch.object( group, '_check_scaling_allowed', side_effect=resource.NoActionRequired) as mock_isa: self.assertRaises(resource.NoActionRequired, pol.handle_signal, details=test) mock_isa.assert_called_once_with(60) def test_scaling_policy_cooldown_ok(self): t = template_format.parse(as_template) stack = utils.parse_stack(t, params=as_params) pol = self.create_scaling_policy(t, stack, 'WebServerScaleUpPolicy') group = stack['WebServerGroup'] test = {'current': 'alarm'} self.patchobject(group, '_finished_scaling') self.patchobject(group, '_lb_reload') mock_resize = self.patchobject(group, 'resize') with mock.patch.object(group, '_check_scaling_allowed') as mock_isa: pol.handle_signal(details=test) mock_isa.assert_called_once_with(60) mock_resize.assert_called_once_with(1) @mock.patch.object(aws_sp.AWSScalingPolicy, '_get_ec2_signed_url') def test_scaling_policy_refid_signed_url(self, mock_get_ec2_url): t = template_format.parse(as_template) stack = utils.parse_stack(t, params=as_params) rsrc = self.create_scaling_policy(t, stack, 'WebServerScaleUpPolicy') mock_get_ec2_url.return_value = 'http://signed_url' self.assertEqual('http://signed_url', rsrc.FnGetRefId()) def test_scaling_policy_refid_rsrc_name(self): t = template_format.parse(as_template) stack = utils.parse_stack(t, params=as_params) rsrc = self.create_scaling_policy(t, stack, 'WebServerScaleUpPolicy') rsrc.resource_id = None self.assertEqual('WebServerScaleUpPolicy', rsrc.FnGetRefId()) def test_refid_convergence_cache_data(self): t = template_format.parse(as_template) cache_data = {'WebServerScaleUpPolicy': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'http://convg_signed_url' })} stack = utils.parse_stack(t, cache_data=cache_data) rsrc = stack.defn['WebServerScaleUpPolicy'] self.assertEqual('http://convg_signed_url', rsrc.FnGetRefId()) class ScalingPolicyAttrTest(common.HeatTestCase): def setUp(self): super(ScalingPolicyAttrTest, self).setUp() t = template_format.parse(as_template) self.stack = utils.parse_stack(t, params=as_params) self.policy = self.stack['WebServerScaleUpPolicy'] self.assertIsNone(self.policy.validate()) scheduler.TaskRunner(self.policy.create)() self.assertEqual((self.policy.CREATE, self.policy.COMPLETE), self.policy.state) def test_alarm_attribute(self): self.assertIn("WebServerScaleUpPolicy", self.policy.FnGetAtt('AlarmUrl')) 
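# ---------------------------------------------------------------------------
# Illustrative sketch, not code shipped in heat-10.0.2: the cooldown tests in
# this file and in TestCooldownMixin earlier in this section all revolve
# around group metadata shaped like
#     {'scaling_in_progress': <bool>,
#      'cooldown_end': {<ISO 8601 timestamp>: <reason string>}}
# The standalone helper below reconstructs that check only to make the
# asserted semantics explicit. The name scaling_allowed() and its boolean
# return convention are hypothetical; Heat's real _check_scaling_allowed()
# is a method on the group resource, also understands the legacy 'cooldown'
# key (a start time, to which the delay is added), and raises
# resource.NoActionRequired instead of returning False.
import datetime


def _parse_iso8601(stamp):
    # utcnow().isoformat() includes microseconds unless they are exactly 0
    for fmt in ('%Y-%m-%dT%H:%M:%S.%f', '%Y-%m-%dT%H:%M:%S'):
        try:
            return datetime.datetime.strptime(stamp, fmt)
        except ValueError:
            continue
    raise ValueError('unrecognised timestamp %r' % stamp)


def scaling_allowed(metadata, cooldown):
    """Return True if a new scaling action may start now.

    Mirrors what the tests assert: blocked while another scaling action is
    in progress, or while utcnow() is still before a recorded cooldown_end
    stamp; a cooldown of 0 or None never blocks.
    """
    if metadata.get('scaling_in_progress'):
        return False
    if not cooldown:
        return True
    for stamp in (metadata.get('cooldown_end') or {}):
        if datetime.datetime.utcnow() < _parse_iso8601(stamp):
            return False
    return True


if __name__ == '__main__':
    blocked = {'scaling_in_progress': False,
               'cooldown_end': {
                   (datetime.datetime.utcnow() +
                    datetime.timedelta(seconds=60)).isoformat():
                   'change_in_capacity : 1'}}
    assert not scaling_allowed(blocked, 60)
    assert scaling_allowed({'scaling_in_progress': False}, 60)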
heat-10.0.2/heat/tests/autoscaling/test_lbutils.py0000666000175000017500000001124113343562351022226 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
import six

from heat.common import exception
from heat.common import grouputils
from heat.common import template_format
from heat.engine import properties
from heat.engine import resource
from heat.scaling import lbutils
from heat.tests import common
from heat.tests import generic_resource
from heat.tests import utils

lb_stack = '''
heat_template_version: 2013-05-23

resources:
  neutron_lb_1:
    type: Mock::Neutron::LB
  neutron_lb_2:
    type: Mock::Neutron::LB
  aws_lb_1:
    type: Mock::AWS::LB
  aws_lb_2:
    type: Mock::AWS::LB
  non_lb:
    type: Mock::Not::LB
'''


class MockedNeutronLB(generic_resource.GenericResource):
    properties_schema = {
        'members': properties.Schema(
            properties.Schema.LIST,
            update_allowed=True)
    }

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        return


class MockedAWSLB(generic_resource.GenericResource):
    properties_schema = {
        'Instances': properties.Schema(
            properties.Schema.LIST,
            update_allowed=True)
    }

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        return


class LBUtilsTest(common.HeatTestCase):
    # Need to mock a group so that grouputils.get_member_refids will work;
    # load_balancers is a dict of load balancer objects, each of which has
    # properties we can check.
    def setUp(self):
        super(LBUtilsTest, self).setUp()
        resource._register_class('Mock::Neutron::LB', MockedNeutronLB)
        resource._register_class('Mock::AWS::LB', MockedAWSLB)
        resource._register_class('Mock::Not::LB',
                                 generic_resource.GenericResource)

        t = template_format.parse(lb_stack)
        self.stack = utils.parse_stack(t)

    def test_reload_aws_lb(self):
        group = mock.Mock()
        self.patchobject(grouputils, 'get_member_refids',
                         return_value=['ID1', 'ID2', 'ID3'])
        lb1 = self.stack['aws_lb_1']
        lb2 = self.stack['aws_lb_2']
        lbs = {
            'LB_1': lb1,
            'LB_2': lb2
        }
        lb1.action = mock.Mock(return_value=lb1.CREATE)
        lb2.action = mock.Mock(return_value=lb2.CREATE)
        lb1.handle_update = mock.Mock()
        lb2.handle_update = mock.Mock()
        prop_diff = {'Instances': ['ID1', 'ID2', 'ID3']}

        lbutils.reload_loadbalancers(group, lbs)

        # For verification purposes, we just check the prop_diff
        lb1.handle_update.assert_called_with(mock.ANY, mock.ANY, prop_diff)
        lb2.handle_update.assert_called_with(mock.ANY, mock.ANY, prop_diff)

    def test_reload_neutron_lb(self):
        group = mock.Mock()
        self.patchobject(grouputils, 'get_member_refids',
                         return_value=['ID1', 'ID2', 'ID3'])
        lb1 = self.stack['neutron_lb_1']
        lb2 = self.stack['neutron_lb_2']
        lb1.action = mock.Mock(return_value=lb1.CREATE)
        lb2.action = mock.Mock(return_value=lb2.CREATE)
        lbs = {
            'LB_1': lb1,
            'LB_2': lb2
        }
        lb1.handle_update = mock.Mock()
        lb2.handle_update = mock.Mock()
        prop_diff = {'members': ['ID1', 'ID2', 'ID3']}

        lbutils.reload_loadbalancers(group, lbs)

        # For verification purposes, we just check the prop_diff
        lb1.handle_update.assert_called_with(mock.ANY, mock.ANY, prop_diff)
        lb2.handle_update.assert_called_with(mock.ANY, mock.ANY, prop_diff)
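    # Reading the two tests above together: reload_loadbalancers() resolves
    # the group's member refids once, then pushes them to every balancer
    # through handle_update(), using whichever property that balancer type
    # exposes for its member list ('Instances' for the AWS resource,
    # 'members' for the Neutron one). test_reload_non_lb below covers the
    # rejection path for resources that expose neither.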
def test_reload_non_lb(self): group = mock.Mock() self.patchobject(grouputils, 'get_member_refids', return_value=['ID1', 'ID2', 'ID3']) lbs = { 'LB_1': self.stack['non_lb'], } error = self.assertRaises(exception.Error, lbutils.reload_loadbalancers, group, lbs) self.assertIn("Unsupported resource 'LB_1' in LoadBalancerNames", six.text_type(error)) heat-10.0.2/heat/tests/autoscaling/__init__.py0000666000175000017500000000000013343562340021235 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/autoscaling/test_launch_config.py0000666000175000017500000002111113343562340023342 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six from heat.common import exception from heat.common import short_id from heat.common import template_format from heat.engine.clients.os import nova from heat.engine import node_data from heat.engine import scheduler from heat.tests.autoscaling import inline_templates from heat.tests import common from heat.tests import utils class LaunchConfigurationTest(common.HeatTestCase): def validate_launch_config(self, stack, lc_name='LaunchConfig'): # create the launch configuration resource conf = stack[lc_name] self.assertIsNone(conf.validate()) scheduler.TaskRunner(conf.create)() self.assertEqual((conf.CREATE, conf.COMPLETE), conf.state) # check bdm in configuration self.assertIsNotNone(conf.properties['BlockDeviceMappings']) def test_launch_config_get_ref_by_id(self): t = template_format.parse(inline_templates.as_template) stack = utils.parse_stack(t, params=inline_templates.as_params) rsrc = stack['LaunchConfig'] self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_SnapshotConstraint_validate() self.assertIsNone(rsrc.validate()) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # use physical_resource_name when rsrc.id is not None self.assertIsNotNone(rsrc.uuid) expected = '%s-%s-%s' % (rsrc.stack.name, rsrc.name, short_id.get_id(rsrc.uuid)) self.assertEqual(expected, rsrc.FnGetRefId()) # otherwise use parent method rsrc.id = None self.assertIsNone(rsrc.resource_id) self.assertEqual('LaunchConfig', rsrc.FnGetRefId()) def test_launch_config_refid_convergence_cache_data(self): t = template_format.parse(inline_templates.as_template) cache_data = {'LaunchConfig': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'convg_xyz' })} stack = utils.parse_stack(t, params=inline_templates.as_params, cache_data=cache_data) rsrc = stack.defn['LaunchConfig'] self.assertEqual('convg_xyz', rsrc.FnGetRefId()) def test_launch_config_create_with_instanceid(self): t = template_format.parse(inline_templates.as_template) lcp = t['Resources']['LaunchConfig']['Properties'] lcp['InstanceId'] = '5678' stack = utils.parse_stack(t, params=inline_templates.as_params) rsrc = stack['LaunchConfig'] # ImageId, InstanceType and BlockDeviceMappings keep the lc's values # KeyName and SecurityGroups are derived from the instance 
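        # (the merge itself is not under test here: rebuild_lc_properties
        # is stubbed below to return this pre-merged view)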
lc_props = { 'ImageId': 'foo', 'InstanceType': 'bar', 'BlockDeviceMappings': lcp['BlockDeviceMappings'], 'KeyName': 'hth_keypair', 'SecurityGroups': ['hth_test'] } rsrc.rebuild_lc_properties = mock.Mock(return_value=lc_props) self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_SnapshotConstraint_validate() self.stub_ServerConstraint_validate() self.assertIsNone(rsrc.validate()) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) def test_lc_validate_without_InstanceId_and_ImageId(self): t = template_format.parse(inline_templates.as_template) lcp = t['Resources']['LaunchConfig']['Properties'] lcp.pop('ImageId') stack = utils.parse_stack(t, inline_templates.as_params) rsrc = stack['LaunchConfig'] self.stub_SnapshotConstraint_validate() self.stub_FlavorConstraint_validate() e = self.assertRaises(exception.StackValidationFailed, rsrc.validate) ex_msg = ('If without InstanceId, ' 'ImageId and InstanceType are required.') self.assertIn(ex_msg, six.text_type(e)) def test_lc_validate_without_InstanceId_and_InstanceType(self): t = template_format.parse(inline_templates.as_template) lcp = t['Resources']['LaunchConfig']['Properties'] lcp.pop('InstanceType') stack = utils.parse_stack(t, inline_templates.as_params) rsrc = stack['LaunchConfig'] self.stub_SnapshotConstraint_validate() self.stub_ImageConstraint_validate() e = self.assertRaises(exception.StackValidationFailed, rsrc.validate) ex_msg = ('If without InstanceId, ' 'ImageId and InstanceType are required.') self.assertIn(ex_msg, six.text_type(e)) def test_launch_config_create_with_instanceid_not_found(self): t = template_format.parse(inline_templates.as_template) lcp = t['Resources']['LaunchConfig']['Properties'] lcp['InstanceId'] = '5678' stack = utils.parse_stack(t, params=inline_templates.as_params) rsrc = stack['LaunchConfig'] self.stub_SnapshotConstraint_validate() self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.patchobject(nova.NovaClientPlugin, 'get_server', side_effect=exception.EntityNotFound( entity='Server', name='5678')) msg = ("Property error: " "Resources.LaunchConfig.Properties.InstanceId: " "Error validating value '5678': The Server (5678) " "could not be found.") exc = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertIn(msg, six.text_type(exc)) def test_validate_BlockDeviceMappings_without_Ebs_property(self): t = template_format.parse(inline_templates.as_template) lcp = t['Resources']['LaunchConfig']['Properties'] bdm = [{'DeviceName': 'vdb'}] lcp['BlockDeviceMappings'] = bdm stack = utils.parse_stack(t, params=inline_templates.as_params) self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() e = self.assertRaises(exception.StackValidationFailed, self.validate_launch_config, stack) self.assertIn("Ebs is missing, this is required", six.text_type(e)) def test_validate_BlockDeviceMappings_without_SnapshotId_property(self): t = template_format.parse(inline_templates.as_template) lcp = t['Resources']['LaunchConfig']['Properties'] bdm = [{'DeviceName': 'vdb', 'Ebs': {'VolumeSize': '1'}}] lcp['BlockDeviceMappings'] = bdm stack = utils.parse_stack(t, params=inline_templates.as_params) self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() e = self.assertRaises(exception.StackValidationFailed, self.validate_launch_config, stack) self.assertIn("SnapshotId is missing, this is required", six.text_type(e)) def 
test_validate_BlockDeviceMappings_without_DeviceName_property(self): t = template_format.parse(inline_templates.as_template) lcp = t['Resources']['LaunchConfig']['Properties'] bdm = [{'Ebs': {'SnapshotId': '1234', 'VolumeSize': '1'}}] lcp['BlockDeviceMappings'] = bdm stack = utils.parse_stack(t, params=inline_templates.as_params) self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_SnapshotConstraint_validate() e = self.assertRaises(exception.StackValidationFailed, self.validate_launch_config, stack) excepted_error = ( 'Property error: ' 'Resources.LaunchConfig.Properties.BlockDeviceMappings[0]: ' 'Property DeviceName not assigned') self.assertIn(excepted_error, six.text_type(e)) heat-10.0.2/heat/tests/autoscaling/inline_templates.py0000666000175000017500000000753513343562340023056 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. as_params = {'KeyName': 'test', 'ImageId': 'foo'} as_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "AutoScaling Test", "Parameters" : { "ImageId": {"Type": "String"}, "KeyName": {"Type": "String"} }, "Resources" : { "WebServerGroup" : { "Type" : "AWS::AutoScaling::AutoScalingGroup", "Properties" : { "AvailabilityZones" : ["nova"], "LaunchConfigurationName" : { "Ref" : "LaunchConfig" }, "MinSize" : "1", "MaxSize" : "5", "LoadBalancerNames" : [ { "Ref" : "ElasticLoadBalancer" } ] } }, "WebServerScaleUpPolicy" : { "Type" : "AWS::AutoScaling::ScalingPolicy", "Properties" : { "AdjustmentType" : "ChangeInCapacity", "AutoScalingGroupName" : { "Ref" : "WebServerGroup" }, "Cooldown" : "60", "ScalingAdjustment" : "1" } }, "WebServerScaleDownPolicy" : { "Type" : "AWS::AutoScaling::ScalingPolicy", "Properties" : { "AdjustmentType" : "ChangeInCapacity", "AutoScalingGroupName" : { "Ref" : "WebServerGroup" }, "Cooldown" : "60", "ScalingAdjustment" : "-1" } }, "ElasticLoadBalancer" : { "Type" : "AWS::ElasticLoadBalancing::LoadBalancer", "Properties" : { "AvailabilityZones" : ["nova"], "Listeners" : [ { "LoadBalancerPort" : "80", "InstancePort" : "80", "Protocol" : "HTTP" }] } }, "LaunchConfig" : { "Type" : "AWS::AutoScaling::LaunchConfiguration", "Properties": { "ImageId" : {"Ref": "ImageId"}, "InstanceType" : "bar", "BlockDeviceMappings": [ { "DeviceName": "vdb", "Ebs": {"SnapshotId": "9ef5496e-7426-446a-bbc8-01f84d9c9972", "DeleteOnTermination": "True"} }] } } } } ''' as_heat_template = ''' heat_template_version: 2013-05-23 description: AutoScaling Test resources: my-group: type: OS::Heat::AutoScalingGroup properties: max_size: 5 min_size: 1 resource: type: ResourceWithPropsAndAttrs properties: Foo: hello my-policy: type: OS::Heat::ScalingPolicy properties: auto_scaling_group_id: {get_resource: my-group} scaling_adjustment: 1 adjustment_type: change_in_capacity cooldown: 60 ''' as_template_bad_group = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Parameters" : { "ImageId": {"Type": "String"}, "KeyName": {"Type": "String"} }, "Resources" : { "WebServerScaleUpPolicy" : { "Type" : 
"AWS::AutoScaling::ScalingPolicy", "Properties" : { "AdjustmentType" : "ChangeInCapacity", "AutoScalingGroupName" : "not a real group", "Cooldown" : "60", "ScalingAdjustment" : "1" } } } } ''' as_heat_template_bad_group = ''' heat_template_version: 2013-05-23 description: AutoScaling Test resources: my-policy: type: OS::Heat::ScalingPolicy properties: auto_scaling_group_id: bad-group scaling_adjustment: 1 adjustment_type: change_in_capacity cooldown: 60 ''' heat-10.0.2/heat/tests/autoscaling/test_heat_scaling_policy.py0000666000175000017500000002200313343562351024546 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six from heat.common import exception from heat.common import template_format from heat.engine import node_data from heat.engine import resource from heat.engine import scheduler from heat.tests.autoscaling import inline_templates from heat.tests import common from heat.tests import utils as_template = inline_templates.as_heat_template as_params = inline_templates.as_params class TestAutoScalingPolicy(common.HeatTestCase): def create_scaling_policy(self, t, stack, resource_name): rsrc = stack[resource_name] self.assertIsNone(rsrc.validate()) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) return rsrc def test_validate_scaling_policy_ok(self): t = template_format.parse(as_template) t['resources']['my-policy']['properties'][ 'scaling_adjustment'] = 33 t['resources']['my-policy']['properties'][ 'adjustment_type'] = 'percent_change_in_capacity' t['resources']['my-policy']['properties'][ 'min_adjustment_step'] = 2 stack = utils.parse_stack(t) self.assertIsNone(stack.validate()) def test_validate_scaling_policy_error(self): t = template_format.parse(as_template) t['resources']['my-policy']['properties'][ 'scaling_adjustment'] = 1 t['resources']['my-policy']['properties'][ 'adjustment_type'] = 'change_in_capacity' t['resources']['my-policy']['properties'][ 'min_adjustment_step'] = 2 stack = utils.parse_stack(t) ex = self.assertRaises(exception.ResourcePropertyValueDependency, stack.validate) self.assertIn('min_adjustment_step property should only ' 'be specified for adjustment_type with ' 'value percent_change_in_capacity.', six.text_type(ex)) def test_scaling_policy_bad_group(self): t = template_format.parse(inline_templates.as_heat_template_bad_group) stack = utils.parse_stack(t) up_policy = self.create_scaling_policy(t, stack, 'my-policy') ex = self.assertRaises(exception.ResourceFailure, up_policy.signal) self.assertIn('Alarm my-policy could ' 'not find scaling group', six.text_type(ex)) def test_scaling_policy_adjust_no_action(self): t = template_format.parse(as_template) stack = utils.parse_stack(t, params=as_params) up_policy = self.create_scaling_policy(t, stack, 'my-policy') group = stack['my-group'] self.patchobject(group, 'adjust', side_effect=resource.NoActionRequired()) self.assertRaises(resource.NoActionRequired, up_policy.handle_signal) def test_scaling_policy_adjust_size_changed(self): t = 
template_format.parse(as_template) stack = utils.parse_stack(t, params=as_params) up_policy = self.create_scaling_policy(t, stack, 'my-policy') group = stack['my-group'] self.patchobject(group, 'resize') self.patchobject(group, '_lb_reload') mock_fin_scaling = self.patchobject(group, '_finished_scaling') with mock.patch.object(group, '_check_scaling_allowed') as mock_isa: self.assertIsNone(up_policy.handle_signal()) mock_isa.assert_called_once_with(60) mock_fin_scaling.assert_called_once_with(60, 'change_in_capacity : 1', size_changed=True) def test_scaling_policy_cooldown_toosoon(self): t = template_format.parse(as_template) stack = utils.parse_stack(t, params=as_params) pol = self.create_scaling_policy(t, stack, 'my-policy') group = stack['my-group'] test = {'current': 'alarm'} with mock.patch.object( group, '_check_scaling_allowed', side_effect=resource.NoActionRequired) as mock_cip: self.assertRaises(resource.NoActionRequired, pol.handle_signal, details=test) mock_cip.assert_called_once_with(60) def test_scaling_policy_cooldown_ok(self): t = template_format.parse(as_template) stack = utils.parse_stack(t, params=as_params) pol = self.create_scaling_policy(t, stack, 'my-policy') group = stack['my-group'] test = {'current': 'alarm'} self.patchobject(group, '_finished_scaling') self.patchobject(group, '_lb_reload') mock_resize = self.patchobject(group, 'resize') with mock.patch.object(group, '_check_scaling_allowed') as mock_isa: pol.handle_signal(details=test) mock_isa.assert_called_once_with(60) mock_resize.assert_called_once_with(1) def test_scaling_policy_refid(self): t = template_format.parse(as_template) stack = utils.parse_stack(t) rsrc = stack['my-policy'] rsrc.resource_id = 'xyz' self.assertEqual('xyz', rsrc.FnGetRefId()) def test_scaling_policy_refid_convg_cache_data(self): t = template_format.parse(as_template) cache_data = {'my-policy': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'convg_xyz' })} stack = utils.parse_stack(t, cache_data=cache_data) rsrc = stack.defn['my-policy'] self.assertEqual('convg_xyz', rsrc.FnGetRefId()) class ScalingPolicyAttrTest(common.HeatTestCase): def setUp(self): super(ScalingPolicyAttrTest, self).setUp() t = template_format.parse(as_template) self.stack = utils.parse_stack(t, params=as_params) self.stack_name = self.stack.name self.policy = self.stack['my-policy'] self.assertIsNone(self.policy.validate()) scheduler.TaskRunner(self.policy.create)() self.assertEqual((self.policy.CREATE, self.policy.COMPLETE), self.policy.state) def test_alarm_attribute(self): self.m.StubOutWithMock(self.stack.clients.client_plugin('heat'), 'get_heat_cfn_url') self.stack.clients.client_plugin('heat').get_heat_cfn_url().AndReturn( 'http://server.test:8000/v1') self.m.ReplayAll() alarm_url = self.policy.FnGetAtt('alarm_url') base = alarm_url.split('?')[0].split('%3A') self.assertEqual('http://server.test:8000/v1/signal/arn', base[0]) self.assertEqual('openstack', base[1]) self.assertEqual('heat', base[2]) self.assertEqual('test_tenant_id', base[4]) res = base[5].split('/') self.assertEqual('stacks', res[0]) self.assertEqual(self.stack_name, res[1]) self.assertEqual('resources', res[3]) self.assertEqual('my-policy', res[4]) args = sorted(alarm_url.split('?')[1].split('&')) self.assertEqual('AWSAccessKeyId', args[0].split('=')[0]) self.assertEqual('Signature', args[1].split('=')[0]) self.assertEqual('SignatureMethod', args[2].split('=')[0]) self.assertEqual('SignatureVersion', 
args[3].split('=')[0]) self.assertEqual('Timestamp', args[4].split('=')[0]) self.m.VerifyAll() def test_signal_attribute(self): self.m.StubOutWithMock(self.stack.clients.client_plugin('heat'), 'get_heat_url') self.stack.clients.client_plugin('heat').get_heat_url().AndReturn( 'http://server.test:8000/v1/') self.m.ReplayAll() self.assertEqual( 'http://server.test:8000/v1/test_tenant_id/stacks/' '%s/%s/resources/my-policy/signal' % ( self.stack.name, self.stack.id), self.policy.FnGetAtt('signal_url')) self.m.VerifyAll() def test_signal_attribute_with_prefix(self): self.m.StubOutWithMock(self.stack.clients.client_plugin('heat'), 'get_heat_url') self.stack.clients.client_plugin('heat').get_heat_url().AndReturn( 'http://server.test/heat-api/v1/1234') self.m.ReplayAll() self.assertEqual( 'http://server.test/heat-api/v1/test_tenant_id/stacks/' '%s/%s/resources/my-policy/signal' % ( self.stack.name, self.stack.id), self.policy.FnGetAtt('signal_url')) self.m.VerifyAll() heat-10.0.2/heat/tests/autoscaling/test_scaling_group.py0000666000175000017500000010567013343562351023416 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import json import mock from oslo_utils import timeutils import six from heat.common import exception from heat.common import grouputils from heat.common import template_format from heat.engine.clients.os import nova from heat.engine import resource from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests.autoscaling import inline_templates from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils as_template = inline_templates.as_template class TestAutoScalingGroupValidation(common.HeatTestCase): def validate_scaling_group(self, t, stack, resource_name): # create the launch configuration resource conf = stack['LaunchConfig'] self.assertIsNone(conf.validate()) scheduler.TaskRunner(conf.create)() self.assertEqual((conf.CREATE, conf.COMPLETE), conf.state) # create the group resource rsrc = stack[resource_name] self.assertIsNone(rsrc.validate()) return rsrc def test_toomany_vpc_zone_identifier(self): t = template_format.parse(as_template) properties = t['Resources']['WebServerGroup']['Properties'] properties['VPCZoneIdentifier'] = ['xxxx', 'yyyy'] stack = utils.parse_stack(t, params=inline_templates.as_params) self.stub_SnapshotConstraint_validate() self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.m.ReplayAll() self.assertRaises(exception.NotSupported, self.validate_scaling_group, t, stack, 'WebServerGroup') self.m.VerifyAll() def test_invalid_min_size(self): t = template_format.parse(as_template) properties = t['Resources']['WebServerGroup']['Properties'] properties['MinSize'] = '-1' properties['MaxSize'] = '2' stack = utils.parse_stack(t, params=inline_templates.as_params) self.stub_SnapshotConstraint_validate() self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.m.ReplayAll() e = 
self.assertRaises(exception.StackValidationFailed, self.validate_scaling_group, t, stack, 'WebServerGroup') expected_msg = "The size of AutoScalingGroup can not be less than zero" self.assertEqual(expected_msg, six.text_type(e)) self.m.VerifyAll() def test_invalid_max_size(self): t = template_format.parse(as_template) properties = t['Resources']['WebServerGroup']['Properties'] properties['MinSize'] = '3' properties['MaxSize'] = '1' stack = utils.parse_stack(t, params=inline_templates.as_params) self.stub_SnapshotConstraint_validate() self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.m.ReplayAll() e = self.assertRaises(exception.StackValidationFailed, self.validate_scaling_group, t, stack, 'WebServerGroup') expected_msg = "MinSize can not be greater than MaxSize" self.assertEqual(expected_msg, six.text_type(e)) self.m.VerifyAll() def test_invalid_desiredcapacity(self): t = template_format.parse(as_template) properties = t['Resources']['WebServerGroup']['Properties'] properties['MinSize'] = '1' properties['MaxSize'] = '3' properties['DesiredCapacity'] = '4' stack = utils.parse_stack(t, params=inline_templates.as_params) self.stub_SnapshotConstraint_validate() self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.m.ReplayAll() e = self.assertRaises(exception.StackValidationFailed, self.validate_scaling_group, t, stack, 'WebServerGroup') expected_msg = "DesiredCapacity must be between MinSize and MaxSize" self.assertEqual(expected_msg, six.text_type(e)) self.m.VerifyAll() def test_invalid_desiredcapacity_zero(self): t = template_format.parse(as_template) properties = t['Resources']['WebServerGroup']['Properties'] properties['MinSize'] = '1' properties['MaxSize'] = '3' properties['DesiredCapacity'] = '0' stack = utils.parse_stack(t, params=inline_templates.as_params) self.stub_SnapshotConstraint_validate() self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.m.ReplayAll() e = self.assertRaises(exception.StackValidationFailed, self.validate_scaling_group, t, stack, 'WebServerGroup') expected_msg = "DesiredCapacity must be between MinSize and MaxSize" self.assertEqual(expected_msg, six.text_type(e)) self.m.VerifyAll() def test_validate_without_InstanceId_and_LaunchConfigurationName(self): t = template_format.parse(as_template) agp = t['Resources']['WebServerGroup']['Properties'] agp.pop('LaunchConfigurationName') agp.pop('LoadBalancerNames') stack = utils.parse_stack(t, params=inline_templates.as_params) rsrc = stack['WebServerGroup'] error_msg = ("Either 'InstanceId' or 'LaunchConfigurationName' " "must be provided.") exc = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertIn(error_msg, six.text_type(exc)) def test_validate_with_InstanceId_and_LaunchConfigurationName(self): t = template_format.parse(as_template) agp = t['Resources']['WebServerGroup']['Properties'] agp['InstanceId'] = '5678' stack = utils.parse_stack(t, params=inline_templates.as_params) rsrc = stack['WebServerGroup'] error_msg = ("Either 'InstanceId' or 'LaunchConfigurationName' " "must be provided.") exc = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertIn(error_msg, six.text_type(exc)) def _stub_nova_server_get(self, not_found=False): mock_server = mock.MagicMock() mock_server.image = {'id': 'dd619705-468a-4f7d-8a06-b84794b3561a'} mock_server.flavor = {'id': '1'} mock_server.key_name = 'test' mock_server.security_groups = [{u'name': u'hth_test'}] if not_found: 
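            # simulate a vanished server, so that validation of
            # InstanceId='5678' fails with EntityNotFound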
self.patchobject(nova.NovaClientPlugin, 'get_server', side_effect=exception.EntityNotFound( entity='Server', name='5678')) else: self.patchobject(nova.NovaClientPlugin, 'get_server', return_value=mock_server) def test_scaling_group_create_with_instanceid(self): t = template_format.parse(as_template) agp = t['Resources']['WebServerGroup']['Properties'] agp['InstanceId'] = '5678' agp.pop('LaunchConfigurationName') agp.pop('LoadBalancerNames') stack = utils.parse_stack(t, params=inline_templates.as_params) rsrc = stack['WebServerGroup'] self._stub_nova_server_get() self.m.ReplayAll() _config, ins_props = rsrc._get_conf_properties() self.assertEqual('dd619705-468a-4f7d-8a06-b84794b3561a', ins_props['ImageId']) self.assertEqual('test', ins_props['KeyName']) self.assertEqual(['hth_test'], ins_props['SecurityGroups']) self.assertEqual('1', ins_props['InstanceType']) self.m.VerifyAll() def test_scaling_group_create_with_instanceid_not_found(self): t = template_format.parse(as_template) agp = t['Resources']['WebServerGroup']['Properties'] agp.pop('LaunchConfigurationName') agp['InstanceId'] = '5678' stack = utils.parse_stack(t, params=inline_templates.as_params) rsrc = stack['WebServerGroup'] self._stub_nova_server_get(not_found=True) self.m.ReplayAll() msg = ("Property error: " "Resources.WebServerGroup.Properties.InstanceId: " "Error validating value '5678': The Server (5678) could " "not be found.") exc = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertIn(msg, six.text_type(exc)) self.m.VerifyAll() class TestScalingGroupTags(common.HeatTestCase): def setUp(self): super(TestScalingGroupTags, self).setUp() t = template_format.parse(as_template) self.stack = utils.parse_stack(t, params=inline_templates.as_params) self.group = self.stack['WebServerGroup'] def test_tags_default(self): expected = [{'Key': 'metering.groupname', 'Value': u'WebServerGroup'}, {'Key': 'metering.AutoScalingGroupName', 'Value': u'WebServerGroup'}] self.assertEqual(expected, self.group._tags()) def test_tags_with_extra(self): self.group.properties.data['Tags'] = [ {'Key': 'fee', 'Value': 'foo'}] expected = [{'Key': 'fee', 'Value': 'foo'}, {'Key': 'metering.groupname', 'Value': u'WebServerGroup'}, {'Key': 'metering.AutoScalingGroupName', 'Value': u'WebServerGroup'}] self.assertEqual(expected, self.group._tags()) def test_tags_with_metering(self): self.group.properties.data['Tags'] = [ {'Key': 'metering.fee', 'Value': 'foo'}] expected = [{'Key': 'metering.fee', 'Value': 'foo'}, {'Key': 'metering.AutoScalingGroupName', 'Value': u'WebServerGroup'}] self.assertEqual(expected, self.group._tags()) class TestInitialGroupSize(common.HeatTestCase): scenarios = [ ('000', dict(mins=0, maxs=0, desired=0, expected=0)), ('040', dict(mins=0, maxs=4, desired=0, expected=0)), ('253', dict(mins=2, maxs=5, desired=3, expected=3)), ('14n', dict(mins=1, maxs=4, desired=None, expected=1)), ] def test_initial_size(self): t = template_format.parse(as_template) properties = t['Resources']['WebServerGroup']['Properties'] properties['MinSize'] = self.mins properties['MaxSize'] = self.maxs properties['DesiredCapacity'] = self.desired stack = utils.parse_stack(t, params=inline_templates.as_params) group = stack['WebServerGroup'] with mock.patch.object(group, '_create_template') as mock_cre_temp: group.child_template() mock_cre_temp.assert_called_once_with(self.expected) class TestGroupAdjust(common.HeatTestCase): def setUp(self): super(TestGroupAdjust, self).setUp() t = template_format.parse(as_template) self.stack = 
utils.parse_stack(t, params=inline_templates.as_params) self.group = self.stack['WebServerGroup'] self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_SnapshotConstraint_validate() self.assertIsNone(self.group.validate()) def test_scaling_policy_cooldown_toosoon(self): dont_call = self.patchobject(self.group, 'resize') self.patchobject(self.group, '_check_scaling_allowed', side_effect=resource.NoActionRequired) self.assertRaises(resource.NoActionRequired, self.group.adjust, 1) self.assertEqual([], dont_call.call_args_list) def test_scaling_same_capacity(self): """Don't resize when capacity is the same.""" self.patchobject(grouputils, 'get_size', return_value=3) resize = self.patchobject(self.group, 'resize') finished_scaling = self.patchobject(self.group, '_finished_scaling') notify = self.patch('heat.engine.notification.autoscaling.send') self.assertRaises(resource.NoActionRequired, self.group.adjust, 3, adjustment_type='ExactCapacity') expected_notifies = [] self.assertEqual(expected_notifies, notify.call_args_list) self.assertEqual(0, resize.call_count) self.assertEqual(0, finished_scaling.call_count) def test_scaling_update_in_progress(self): """Don't resize when update in progress""" self.group.state_set('UPDATE', 'IN_PROGRESS') resize = self.patchobject(self.group, 'resize') finished_scaling = self.patchobject(self.group, '_finished_scaling') notify = self.patch('heat.engine.notification.autoscaling.send') self.assertRaises(resource.NoActionRequired, self.group.adjust, 3, adjustment_type='ExactCapacity') expected_notifies = [] self.assertEqual(expected_notifies, notify.call_args_list) self.assertEqual(0, resize.call_count) self.assertEqual(0, finished_scaling.call_count) def test_scale_up_min_adjustment(self): self.patchobject(grouputils, 'get_size', return_value=1) resize = self.patchobject(self.group, 'resize') finished_scaling = self.patchobject(self.group, '_finished_scaling') notify = self.patch('heat.engine.notification.autoscaling.send') self.patchobject(self.group, '_check_scaling_allowed') self.group.adjust(33, adjustment_type='PercentChangeInCapacity', min_adjustment_step=2) expected_notifies = [ mock.call( capacity=1, suffix='start', adjustment_type='PercentChangeInCapacity', groupname=u'WebServerGroup', message=u'Start resizing the group WebServerGroup', adjustment=33, stack=self.group.stack), mock.call( capacity=3, suffix='end', adjustment_type='PercentChangeInCapacity', groupname=u'WebServerGroup', message=u'End resizing the group WebServerGroup', adjustment=33, stack=self.group.stack)] self.assertEqual(expected_notifies, notify.call_args_list) resize.assert_called_once_with(3) finished_scaling.assert_called_once_with( None, 'PercentChangeInCapacity : 33', size_changed=True) def test_scale_down_min_adjustment(self): self.patchobject(grouputils, 'get_size', return_value=5) resize = self.patchobject(self.group, 'resize') finished_scaling = self.patchobject(self.group, '_finished_scaling') notify = self.patch('heat.engine.notification.autoscaling.send') self.patchobject(self.group, '_check_scaling_allowed') self.group.adjust(-33, adjustment_type='PercentChangeInCapacity', min_adjustment_step=2) expected_notifies = [ mock.call( capacity=5, suffix='start', adjustment_type='PercentChangeInCapacity', groupname=u'WebServerGroup', message=u'Start resizing the group WebServerGroup', adjustment=-33, stack=self.group.stack), mock.call( capacity=3, suffix='end', adjustment_type='PercentChangeInCapacity', groupname=u'WebServerGroup', message=u'End 
resizing the group WebServerGroup', adjustment=-33, stack=self.group.stack)] self.assertEqual(expected_notifies, notify.call_args_list) resize.assert_called_once_with(3) finished_scaling.assert_called_once_with( None, 'PercentChangeInCapacity : -33', size_changed=True) def test_scaling_policy_cooldown_ok(self): self.patchobject(grouputils, 'get_size', return_value=0) resize = self.patchobject(self.group, 'resize') finished_scaling = self.patchobject(self.group, '_finished_scaling') notify = self.patch('heat.engine.notification.autoscaling.send') self.patchobject(self.group, '_check_scaling_allowed') self.group.adjust(1) expected_notifies = [ mock.call( capacity=0, suffix='start', adjustment_type='ChangeInCapacity', groupname=u'WebServerGroup', message=u'Start resizing the group WebServerGroup', adjustment=1, stack=self.group.stack), mock.call( capacity=1, suffix='end', adjustment_type='ChangeInCapacity', groupname=u'WebServerGroup', message=u'End resizing the group WebServerGroup', adjustment=1, stack=self.group.stack)] self.assertEqual(expected_notifies, notify.call_args_list) resize.assert_called_once_with(1) finished_scaling.assert_called_once_with(None, 'ChangeInCapacity : 1', size_changed=True) grouputils.get_size.assert_called_once_with(self.group) def test_scaling_policy_resize_fail(self): self.patchobject(grouputils, 'get_size', return_value=0) self.patchobject(self.group, 'resize', side_effect=ValueError('test error')) notify = self.patch('heat.engine.notification.autoscaling.send') self.patchobject(self.group, '_check_scaling_allowed') self.patchobject(self.group, '_finished_scaling') self.assertRaises(ValueError, self.group.adjust, 1) expected_notifies = [ mock.call( capacity=0, suffix='start', adjustment_type='ChangeInCapacity', groupname=u'WebServerGroup', message=u'Start resizing the group WebServerGroup', adjustment=1, stack=self.group.stack), mock.call( capacity=0, suffix='error', adjustment_type='ChangeInCapacity', groupname=u'WebServerGroup', message=u'test error', adjustment=1, stack=self.group.stack)] self.assertEqual(expected_notifies, notify.call_args_list) grouputils.get_size.assert_called_with(self.group) def test_notification_send_if_resize_failed(self): """If resize failed, the capacity of group might have been changed""" self.patchobject(grouputils, 'get_size', side_effect=[3, 4]) self.patchobject(self.group, 'resize', side_effect=ValueError('test error')) notify = self.patch('heat.engine.notification.autoscaling.send') self.patchobject(self.group, '_check_scaling_allowed') self.patchobject(self.group, '_finished_scaling') self.assertRaises(ValueError, self.group.adjust, 5, adjustment_type='ExactCapacity') expected_notifies = [ mock.call( capacity=3, suffix='start', adjustment_type='ExactCapacity', groupname=u'WebServerGroup', message=u'Start resizing the group WebServerGroup', adjustment=5, stack=self.group.stack), mock.call( capacity=4, suffix='error', adjustment_type='ExactCapacity', groupname=u'WebServerGroup', message=u'test error', adjustment=5, stack=self.group.stack)] self.assertEqual(expected_notifies, notify.call_args_list) self.group.resize.assert_called_once_with(5) grouputils.get_size.assert_has_calls([mock.call(self.group), mock.call(self.group)]) class TestGroupCrud(common.HeatTestCase): def setUp(self): super(TestGroupCrud, self).setUp() t = template_format.parse(as_template) self.stack = utils.parse_stack(t, params=inline_templates.as_params) self.group = self.stack['WebServerGroup'] self.assertIsNone(self.group.validate()) def 
test_handle_create(self): self.group.create_with_template = mock.Mock(return_value=None) self.group.child_template = mock.Mock(return_value='{}') self.group.handle_create() self.group.child_template.assert_called_once_with() self.group.create_with_template.assert_called_once_with('{}') def test_handle_update_desired_cap(self): self.group._try_rolling_update = mock.Mock(return_value=None) self.group.resize = mock.Mock(return_value=None) props = {'DesiredCapacity': 4, 'MinSize': 0, 'MaxSize': 6} self.group._get_new_capacity = mock.Mock(return_value=4) defn = rsrc_defn.ResourceDefinition( 'nopayload', 'AWS::AutoScaling::AutoScalingGroup', props) self.group.handle_update(defn, None, props) self.group.resize.assert_called_once_with(4) self.group._try_rolling_update.assert_called_once_with(props) def test_handle_update_desired_nocap(self): self.group._try_rolling_update = mock.Mock(return_value=None) self.group.resize = mock.Mock(return_value=None) get_size = self.patchobject(grouputils, 'get_size') get_size.return_value = 4 props = {'MinSize': 0, 'MaxSize': 6} defn = rsrc_defn.ResourceDefinition( 'nopayload', 'AWS::AutoScaling::AutoScalingGroup', props) self.group.handle_update(defn, None, props) self.group.resize.assert_called_once_with(4) self.group._try_rolling_update.assert_called_once_with(props) def test_conf_properties_vpc_zone(self): self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_SnapshotConstraint_validate() t = template_format.parse(as_template) properties = t['Resources']['WebServerGroup']['Properties'] properties['VPCZoneIdentifier'] = ['xxxx'] stack = utils.parse_stack(t, params=inline_templates.as_params) # create the launch configuration resource conf = stack['LaunchConfig'] self.assertIsNone(conf.validate()) scheduler.TaskRunner(conf.create)() self.assertEqual((conf.CREATE, conf.COMPLETE), conf.state) group = stack['WebServerGroup'] config, props = group._get_conf_properties() self.assertEqual('xxxx', props['SubnetId']) conf.delete() def test_update_in_failed(self): self.group.state_set('CREATE', 'FAILED') # to update the failed asg self.group.resize = mock.Mock(return_value=None) new_defn = rsrc_defn.ResourceDefinition( 'asg', 'AWS::AutoScaling::AutoScalingGroup', {'AvailabilityZones': ['nova'], 'LaunchConfigurationName': 'config', 'MaxSize': 5, 'MinSize': 1, 'DesiredCapacity': 2}) self.group.handle_update(new_defn, None, None) self.group.resize.assert_called_once_with(2) def asg_tmpl_with_bad_updt_policy(): t = template_format.parse(inline_templates.as_template) ag = t['Resources']['WebServerGroup'] ag["UpdatePolicy"] = {"foo": {}} return json.dumps(t) def asg_tmpl_with_default_updt_policy(): t = template_format.parse(inline_templates.as_template) ag = t['Resources']['WebServerGroup'] ag["UpdatePolicy"] = {"AutoScalingRollingUpdate": {}} return json.dumps(t) def asg_tmpl_with_updt_policy(): t = template_format.parse(inline_templates.as_template) ag = t['Resources']['WebServerGroup'] ag["UpdatePolicy"] = {"AutoScalingRollingUpdate": { "MinInstancesInService": "1", "MaxBatchSize": "2", "PauseTime": "PT1S" }} return json.dumps(t) class RollingUpdatePolicyTest(common.HeatTestCase): def setUp(self): super(RollingUpdatePolicyTest, self).setUp() self.fc = fakes_nova.FakeClient() self.stub_keystoneclient(username='test_stack.CfnLBUser') self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_SnapshotConstraint_validate() def test_parse_without_update_policy(self): tmpl = 
template_format.parse(inline_templates.as_template) stack = utils.parse_stack(tmpl, params=inline_templates.as_params) stack.validate() grp = stack['WebServerGroup'] self.assertFalse(grp.update_policy['AutoScalingRollingUpdate']) def test_parse_with_update_policy(self): tmpl = template_format.parse(asg_tmpl_with_updt_policy()) stack = utils.parse_stack(tmpl, params=inline_templates.as_params) stack.validate() tmpl_grp = tmpl['Resources']['WebServerGroup'] tmpl_policy = tmpl_grp['UpdatePolicy']['AutoScalingRollingUpdate'] tmpl_batch_sz = int(tmpl_policy['MaxBatchSize']) grp = stack['WebServerGroup'] self.assertTrue(grp.update_policy) self.assertEqual(1, len(grp.update_policy)) self.assertIn('AutoScalingRollingUpdate', grp.update_policy) policy = grp.update_policy['AutoScalingRollingUpdate'] self.assertIsNotNone(policy) self.assertGreater(len(policy), 0) self.assertEqual(1, int(policy['MinInstancesInService'])) self.assertEqual(tmpl_batch_sz, int(policy['MaxBatchSize'])) self.assertEqual('PT1S', policy['PauseTime']) def test_parse_with_default_update_policy(self): tmpl = template_format.parse(asg_tmpl_with_default_updt_policy()) stack = utils.parse_stack(tmpl, params=inline_templates.as_params) stack.validate() grp = stack['WebServerGroup'] self.assertTrue(grp.update_policy) self.assertEqual(1, len(grp.update_policy)) self.assertIn('AutoScalingRollingUpdate', grp.update_policy) policy = grp.update_policy['AutoScalingRollingUpdate'] self.assertIsNotNone(policy) self.assertGreater(len(policy), 0) self.assertEqual(0, int(policy['MinInstancesInService'])) self.assertEqual(1, int(policy['MaxBatchSize'])) self.assertEqual('PT0S', policy['PauseTime']) def test_parse_with_bad_update_policy(self): tmpl = template_format.parse(asg_tmpl_with_bad_updt_policy()) stack = utils.parse_stack(tmpl, params=inline_templates.as_params) error = self.assertRaises( exception.StackValidationFailed, stack.validate) self.assertIn("foo", six.text_type(error)) def test_parse_with_bad_pausetime_in_update_policy(self): tmpl = template_format.parse(asg_tmpl_with_default_updt_policy()) group = tmpl['Resources']['WebServerGroup'] policy = group['UpdatePolicy']['AutoScalingRollingUpdate'] policy['PauseTime'] = 'P1YT1H' stack = utils.parse_stack(tmpl, params=inline_templates.as_params) error = self.assertRaises( exception.StackValidationFailed, stack.validate) self.assertIn("Only ISO 8601 duration format", six.text_type(error)) class RollingUpdatePolicyDiffTest(common.HeatTestCase): def setUp(self): super(RollingUpdatePolicyDiffTest, self).setUp() self.fc = fakes_nova.FakeClient() self.stub_keystoneclient(username='test_stack.CfnLBUser') def validate_update_policy_diff(self, current, updated): # load current stack current_tmpl = template_format.parse(current) current_stack = utils.parse_stack(current_tmpl, params=inline_templates.as_params) # get the json snippet for the current InstanceGroup resource current_grp = current_stack['WebServerGroup'] current_snippets = dict((n, r.frozen_definition()) for n, r in current_stack.items()) current_grp_json = current_snippets[current_grp.name] # load the updated stack updated_tmpl = template_format.parse(updated) updated_stack = utils.parse_stack(updated_tmpl, params=inline_templates.as_params) # get the updated json snippet for the InstanceGroup resource in the # context of the current stack updated_grp = updated_stack['WebServerGroup'] updated_grp_json = updated_grp.t.freeze() # identify the template difference tmpl_diff = updated_grp.update_template_diff( updated_grp_json, 
current_grp_json) self.assertTrue(tmpl_diff.update_policy_changed()) # test application of the new update policy in handle_update current_grp._try_rolling_update = mock.MagicMock() current_grp.resize = mock.MagicMock() current_grp.handle_update(updated_grp_json, tmpl_diff, None) self.assertEqual(updated_grp_json._update_policy or {}, current_grp.update_policy.data) def test_update_policy_added(self): self.validate_update_policy_diff(inline_templates.as_template, asg_tmpl_with_updt_policy()) def test_update_policy_updated(self): updt_template = json.loads(asg_tmpl_with_updt_policy()) grp = updt_template['Resources']['WebServerGroup'] policy = grp['UpdatePolicy']['AutoScalingRollingUpdate'] policy['MinInstancesInService'] = '2' policy['MaxBatchSize'] = '4' policy['PauseTime'] = 'PT1M30S' self.validate_update_policy_diff(asg_tmpl_with_updt_policy(), json.dumps(updt_template)) def test_update_policy_removed(self): self.validate_update_policy_diff(asg_tmpl_with_updt_policy(), inline_templates.as_template) class TestCooldownMixin(common.HeatTestCase): def setUp(self): super(TestCooldownMixin, self).setUp() t = template_format.parse(inline_templates.as_template) self.stack = utils.parse_stack(t, params=inline_templates.as_params) self.stack.store() self.group = self.stack['WebServerGroup'] self.group.state_set('CREATE', 'COMPLETE') def test_cooldown_is_in_progress_toosoon(self): cooldown_end = timeutils.utcnow() + datetime.timedelta(seconds=60) previous_meta = {'cooldown_end': { cooldown_end.isoformat(): 'ChangeInCapacity : 1'}} self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertRaises(resource.NoActionRequired, self.group._check_scaling_allowed, 60) def test_cooldown_is_in_progress_toosoon_legacy(self): now = timeutils.utcnow() previous_meta = {'cooldown': { now.isoformat(): 'ChangeInCapacity : 1'}} self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertRaises(resource.NoActionRequired, self.group._check_scaling_allowed, 60) def test_cooldown_is_in_progress_scaling_unfinished(self): previous_meta = {'scaling_in_progress': True} self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertRaises(resource.NoActionRequired, self.group._check_scaling_allowed, 60) def test_scaling_not_in_progress_legacy(self): awhile_ago = timeutils.utcnow() - datetime.timedelta(seconds=100) previous_meta = { 'cooldown': { awhile_ago.isoformat(): 'ChangeInCapacity : 1' }, 'scaling_in_progress': False } self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertIsNone(self.group._check_scaling_allowed(60)) def test_scaling_not_in_progress(self): awhile_after = timeutils.utcnow() + datetime.timedelta(seconds=60) previous_meta = { 'cooldown_end': { awhile_after.isoformat(): 'ChangeInCapacity : 1' }, 'scaling_in_progress': False } timeutils.set_time_override() timeutils.advance_time_seconds(100) self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertIsNone(self.group._check_scaling_allowed(60)) timeutils.clear_time_override() def test_scaling_policy_cooldown_zero(self): now = timeutils.utcnow() previous_meta = { 'cooldown_end': { now.isoformat(): 'ChangeInCapacity : 1' }, 'scaling_in_progress': False } self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertIsNone(self.group._check_scaling_allowed(60)) def test_scaling_policy_cooldown_none(self): now = timeutils.utcnow() previous_meta = { 'cooldown_end': { now.isoformat(): 'ChangeInCapacity : 1' }, 
'scaling_in_progress': False } self.patchobject(self.group, 'metadata_get', return_value=previous_meta) self.assertIsNone(self.group._check_scaling_allowed(None)) def test_metadata_is_written(self): nowish = timeutils.utcnow() reason = 'cool as' meta_set = self.patchobject(self.group, 'metadata_set') self.patchobject(timeutils, 'utcnow', return_value=nowish) self.group._finished_scaling(60, reason) cooldown_end = nowish + datetime.timedelta(seconds=60) meta_set.assert_called_once_with( {'cooldown_end': {cooldown_end.isoformat(): reason}, 'scaling_in_progress': False}) def test_metadata_is_written_update(self): nowish = timeutils.utcnow() reason = 'cool as' prev_cooldown_end = nowish + datetime.timedelta(seconds=100) previous_meta = { 'cooldown_end': { prev_cooldown_end.isoformat(): 'ChangeInCapacity : 1' } } self.patchobject(self.group, 'metadata_get', return_value=previous_meta) meta_set = self.patchobject(self.group, 'metadata_set') self.patchobject(timeutils, 'utcnow', return_value=nowish) self.group._finished_scaling(60, reason) meta_set.assert_called_once_with( {'cooldown_end': {prev_cooldown_end.isoformat(): reason}, 'scaling_in_progress': False}) heat-10.0.2/heat/tests/autoscaling/test_new_capacity.py0000666000175000017500000001253613343562340023224 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
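# A minimal usage sketch of the function the scenarios below exercise,
# based on the '+p' scenario and on the positional call made in test_calc,
# i.e. (current, adjustment, adjustment_type, min_adjustment_step, minimum,
# maximum): a +50 PERCENT_CHANGE_IN_CAPACITY adjustment on a group of 4,
# bounded to [1, 10], yields 6.
#
#     from heat.scaling import scalingutil as sc_util
#     new_size = sc_util.calculate_new_capacity(
#         4, 50, sc_util.PERCENT_CHANGE_IN_CAPACITY, None, 1, 10)
#     assert new_size == 6  # 50% of 4 is 2, so 4 + 2 = 6, within [1, 10]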
from heat.scaling import scalingutil as sc_util from heat.tests import common class TestCapacityChanges(common.HeatTestCase): # below: # n CHANGE_IN_CAPACITY (+up, -down) # b bounded # r rounded (+up, -down) # e EXACT_CAPACITY # p PERCENT_CHANGE_IN_CAPACITY # s MIN_ADJUSTMENT_STEP scenarios = [ ('+n', dict(current=2, adjustment=3, adjustment_type=sc_util.CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=0, maximum=10, expected=5)), ('-n', dict(current=6, adjustment=-2, adjustment_type=sc_util.CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=0, maximum=5, expected=4)), ('+nb', dict(current=2, adjustment=8, adjustment_type=sc_util.CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=0, maximum=5, expected=5)), ('-nb', dict(current=2, adjustment=-10, adjustment_type=sc_util.CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=1, maximum=5, expected=1)), ('e', dict(current=2, adjustment=4, adjustment_type=sc_util.EXACT_CAPACITY, min_adjustment_step=None, minimum=0, maximum=5, expected=4)), ('+eb', dict(current=2, adjustment=11, adjustment_type=sc_util.EXACT_CAPACITY, min_adjustment_step=None, minimum=0, maximum=5, expected=5)), ('-eb', dict(current=4, adjustment=1, adjustment_type=sc_util.EXACT_CAPACITY, min_adjustment_step=None, minimum=3, maximum=5, expected=3)), ('+p', dict(current=4, adjustment=50, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=1, maximum=10, expected=6)), ('-p', dict(current=4, adjustment=-25, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=1, maximum=10, expected=3)), ('+pb', dict(current=4, adjustment=100, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=1, maximum=6, expected=6)), ('-pb', dict(current=6, adjustment=-50, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=4, maximum=10, expected=4)), ('-p+r', dict(current=2, adjustment=-33, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=0, maximum=10, expected=1)), ('+p+r', dict(current=1, adjustment=33, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=0, maximum=10, expected=2)), ('-p-r', dict(current=2, adjustment=-66, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=0, maximum=10, expected=1)), ('+p-r', dict(current=1, adjustment=225, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=None, minimum=0, maximum=10, expected=3)), ('+ps', dict(current=1, adjustment=100, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=3, minimum=0, maximum=10, expected=4)), ('+p+rs', dict(current=1, adjustment=33, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=2, minimum=0, maximum=10, expected=3)), ('+p-rs', dict(current=1, adjustment=325, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=2, minimum=0, maximum=10, expected=4)), ('-p-rs', dict(current=3, adjustment=-25, adjustment_type=sc_util.PERCENT_CHANGE_IN_CAPACITY, min_adjustment_step=2, minimum=0, maximum=10, expected=1)), ] def test_calc(self): self.assertEqual(self.expected, sc_util.calculate_new_capacity( self.current, self.adjustment, self.adjustment_type, self.min_adjustment_step, self.minimum, self.maximum)) heat-10.0.2/heat/tests/autoscaling/test_rolling_update.py0000666000175000017500000002510113343562340023556 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # 
not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.tests import common from heat.scaling import rolling_update class TestNeedsUpdate(common.HeatTestCase): scenarios = [ ('4_4_0', dict(targ=4, curr=4, updated=0, result=True)), ('4_4_1', dict(targ=4, curr=4, updated=1, result=True)), ('4_4_3', dict(targ=4, curr=4, updated=3, result=True)), ('4_4_4', dict(targ=4, curr=4, updated=4, result=False)), ('4_4_5', dict(targ=4, curr=4, updated=5, result=False)), ('4_5_0', dict(targ=4, curr=5, updated=0, result=True)), ('4_5_1', dict(targ=4, curr=5, updated=1, result=True)), ('4_5_3', dict(targ=4, curr=5, updated=3, result=True)), ('4_5_4', dict(targ=4, curr=5, updated=4, result=True)), ('4_5_5', dict(targ=4, curr=5, updated=5, result=True)), ('4_3_0', dict(targ=4, curr=3, updated=0, result=True)), ('4_3_1', dict(targ=4, curr=3, updated=1, result=True)), ('4_3_2', dict(targ=4, curr=3, updated=2, result=True)), ('4_3_3', dict(targ=4, curr=3, updated=3, result=True)), ('4_3_4', dict(targ=4, curr=3, updated=4, result=True)), ] def test_needs_update(self): needs_update = rolling_update.needs_update(self.targ, self.curr, self.updated) self.assertEqual(self.result, needs_update) class TestNextBatch(common.HeatTestCase): scenarios = [ ('4_4_0_1_0', dict(targ=4, curr=4, updated=0, bat_size=1, min_srv=0, batch=(4, 1))), ('4_4_3_1_0', dict(targ=4, curr=4, updated=3, bat_size=1, min_srv=0, batch=(4, 1))), ('4_4_0_1_4', dict(targ=4, curr=4, updated=0, bat_size=1, min_srv=4, batch=(5, 1))), ('4_5_3_1_4', dict(targ=4, curr=5, updated=3, bat_size=1, min_srv=4, batch=(5, 1))), ('4_5_4_1_4', dict(targ=4, curr=5, updated=4, bat_size=1, min_srv=4, batch=(4, 0))), ('4_4_0_1_5', dict(targ=4, curr=4, updated=0, bat_size=1, min_srv=5, batch=(5, 1))), ('4_5_3_1_5', dict(targ=4, curr=5, updated=3, bat_size=1, min_srv=5, batch=(5, 1))), ('4_5_0_1_4', dict(targ=4, curr=5, updated=0, bat_size=1, min_srv=4, batch=(5, 1))), ('4_5_1_1_4', dict(targ=4, curr=5, updated=1, bat_size=1, min_srv=4, batch=(5, 1))), ('4_5_4_1_5', dict(targ=4, curr=5, updated=4, bat_size=1, min_srv=5, batch=(4, 0))), ('4_4_0_2_0', dict(targ=4, curr=4, updated=0, bat_size=2, min_srv=0, batch=(4, 2))), ('4_4_2_2_0', dict(targ=4, curr=4, updated=2, bat_size=2, min_srv=0, batch=(4, 2))), ('4_4_0_2_4', dict(targ=4, curr=4, updated=0, bat_size=2, min_srv=4, batch=(6, 2))), ('4_6_2_2_4', dict(targ=4, curr=4, updated=0, bat_size=2, min_srv=4, batch=(6, 2))), ('4_6_4_2_4', dict(targ=4, curr=6, updated=4, bat_size=2, min_srv=4, batch=(4, 0))), ('5_5_0_2_0', dict(targ=5, curr=5, updated=0, bat_size=2, min_srv=0, batch=(5, 2))), ('5_5_4_2_0', dict(targ=5, curr=5, updated=4, bat_size=2, min_srv=0, batch=(5, 1))), ('5_5_0_2_4', dict(targ=5, curr=5, updated=0, bat_size=2, min_srv=4, batch=(6, 2))), ('5_6_2_2_4', dict(targ=5, curr=6, updated=2, bat_size=2, min_srv=4, batch=(6, 2))), ('5_6_4_2_4', dict(targ=5, curr=6, updated=4, bat_size=2, min_srv=4, batch=(5, 1))), ('5_7_2_2_4', dict(targ=5, curr=7, updated=2, bat_size=2, min_srv=4, batch=(6, 2))), ('3_3_0_2_0', dict(targ=3, curr=3, updated=0, bat_size=2, min_srv=0, batch=(3, 2))), ('3_3_2_2_0', 
dict(targ=3, curr=3, updated=2, bat_size=2, min_srv=0, batch=(3, 1))), ('3_3_0_2_4', dict(targ=3, curr=3, updated=0, bat_size=2, min_srv=4, batch=(5, 2))), ('3_5_2_2_4', dict(targ=3, curr=5, updated=2, bat_size=2, min_srv=4, batch=(4, 1))), ('3_5_3_2_4', dict(targ=3, curr=5, updated=3, bat_size=2, min_srv=4, batch=(3, 0))), ('4_4_0_4_0', dict(targ=4, curr=4, updated=0, bat_size=4, min_srv=0, batch=(4, 4))), ('4_4_0_5_0', dict(targ=4, curr=4, updated=0, bat_size=5, min_srv=0, batch=(4, 4))), ('4_4_0_4_1', dict(targ=4, curr=4, updated=0, bat_size=4, min_srv=1, batch=(5, 4))), ('4_4_4_4_1', dict(targ=4, curr=4, updated=4, bat_size=4, min_srv=1, batch=(4, 0))), ('4_4_0_6_1', dict(targ=4, curr=4, updated=0, bat_size=6, min_srv=1, batch=(5, 4))), ('4_4_4_6_1', dict(targ=4, curr=4, updated=4, bat_size=6, min_srv=1, batch=(4, 0))), ('4_4_0_4_2', dict(targ=4, curr=4, updated=0, bat_size=4, min_srv=2, batch=(6, 4))), ('4_4_4_4_2', dict(targ=4, curr=4, updated=4, bat_size=4, min_srv=2, batch=(4, 0))), ('4_4_0_4_4', dict(targ=4, curr=4, updated=0, bat_size=4, min_srv=4, batch=(8, 4))), ('4_4_4_4_4', dict(targ=4, curr=4, updated=4, bat_size=4, min_srv=4, batch=(4, 0))), ('4_4_0_5_6', dict(targ=4, curr=4, updated=0, bat_size=5, min_srv=6, batch=(8, 4))), ('4_4_4_5_6', dict(targ=4, curr=4, updated=4, bat_size=5, min_srv=6, batch=(4, 0))), ('6_4_0_1_0', dict(targ=6, curr=4, updated=0, bat_size=1, min_srv=0, batch=(5, 1))), ('6_5_1_1_0', dict(targ=6, curr=5, updated=1, bat_size=1, min_srv=0, batch=(6, 1))), ('6_4_0_1_4', dict(targ=6, curr=4, updated=0, bat_size=1, min_srv=4, batch=(5, 1))), ('6_5_1_1_4', dict(targ=6, curr=5, updated=1, bat_size=1, min_srv=4, batch=(6, 1))), ('6_4_0_2_0', dict(targ=6, curr=4, updated=0, bat_size=2, min_srv=0, batch=(6, 2))), ('6_4_0_2_4', dict(targ=6, curr=4, updated=0, bat_size=2, min_srv=4, batch=(6, 2))), ('6_5_0_2_0', dict(targ=6, curr=5, updated=0, bat_size=2, min_srv=0, batch=(6, 2))), ('6_5_0_2_4', dict(targ=6, curr=5, updated=0, bat_size=2, min_srv=4, batch=(6, 2))), ('6_3_0_2_0', dict(targ=6, curr=3, updated=0, bat_size=2, min_srv=0, batch=(5, 2))), ('6_5_2_2_0', dict(targ=6, curr=5, updated=2, bat_size=2, min_srv=0, batch=(6, 2))), ('6_5_4_2_0', dict(targ=6, curr=5, updated=4, bat_size=2, min_srv=0, batch=(6, 2))), ('6_3_0_2_4', dict(targ=6, curr=3, updated=0, bat_size=2, min_srv=4, batch=(5, 2))), ('6_5_2_2_4', dict(targ=6, curr=5, updated=2, bat_size=2, min_srv=4, batch=(6, 2))), ('6_5_4_2_4', dict(targ=6, curr=5, updated=4, bat_size=2, min_srv=4, batch=(6, 2))), ('6_4_0_4_0', dict(targ=6, curr=4, updated=0, bat_size=4, min_srv=0, batch=(6, 4))), ('6_4_4_4_0', dict(targ=6, curr=4, updated=4, bat_size=4, min_srv=0, batch=(6, 2))), ('6_4_0_5_0', dict(targ=6, curr=4, updated=0, bat_size=5, min_srv=0, batch=(6, 5))), ('6_6_5_4_0', dict(targ=6, curr=6, updated=5, bat_size=4, min_srv=0, batch=(6, 1))), ('6_4_0_4_1', dict(targ=6, curr=4, updated=0, bat_size=4, min_srv=1, batch=(6, 4))), ('6_6_4_4_1', dict(targ=6, curr=6, updated=4, bat_size=4, min_srv=1, batch=(6, 2))), ('6_4_0_6_1', dict(targ=6, curr=4, updated=0, bat_size=6, min_srv=1, batch=(7, 6))), ('6_7_5_6_1', dict(targ=6, curr=7, updated=5, bat_size=6, min_srv=1, batch=(6, 1))), ('6_4_0_4_2', dict(targ=6, curr=4, updated=0, bat_size=4, min_srv=2, batch=(6, 4))), ('6_4_4_4_2', dict(targ=6, curr=4, updated=4, bat_size=4, min_srv=2, batch=(6, 2))), ('6_4_0_4_4', dict(targ=6, curr=4, updated=0, bat_size=4, min_srv=4, batch=(8, 4))), ('6_8_4_4_4', dict(targ=6, curr=4, updated=4, bat_size=4, min_srv=4, batch=(6, 
2))), ('6_8_2_4_4', dict(targ=6, curr=8, updated=2, bat_size=4, min_srv=4, batch=(8, 4))), ('6_8_6_4_4', dict(targ=6, curr=8, updated=6, bat_size=4, min_srv=4, batch=(6, 0))), ('6_4_0_5_6', dict(targ=6, curr=4, updated=0, bat_size=5, min_srv=6, batch=(9, 5))), ('6_9_5_5_6', dict(targ=6, curr=9, updated=5, bat_size=5, min_srv=6, batch=(7, 1))), ('6_9_2_5_6', dict(targ=6, curr=9, updated=2, bat_size=5, min_srv=6, batch=(10, 4))), ('6_A_5_5_6', dict(targ=6, curr=10, updated=5, bat_size=5, min_srv=6, batch=(7, 1))), ('6_7_6_5_6', dict(targ=6, curr=7, updated=6, bat_size=5, min_srv=6, batch=(6, 0))), ] def test_next_batch(self): batch = rolling_update.next_batch(self.targ, self.curr, self.updated, self.bat_size, self.min_srv) self.assertEqual(self.batch, batch) heat-10.0.2/heat/tests/testing-overview.txt0000666000175000017500000000417313343562352020717 0ustar zuulzuul00000000000000Heat testing ------------ All unit tests are to be placed in the heat/tests directory, and tests may be organized by the subsystem under test. Each subsystem directory must contain a separate blank __init__.py for test discovery to function. An example directory structure illustrating the above: heat/tests |-- autoscaling | |-- __init__.py | |-- test1.py | |-- test2.py | |-- test3.py |-- __init__.py |-- test_template_convert.py If a given test has no overlapping requirements (variables or shared routines) with an existing subsystem, it does not need its own subdirectory under the test type. Implementing a test ------------------- Testrepository - http://pypi.python.org/pypi/testrepository - is used to find and run tests, parallelize their runs, and record timing/results. If developing a test introduces new dependencies, the test-requirements.txt file needs to be updated so that the virtual environment can successfully execute all tests. Running the tests ----------------- The advised way of running tests is the same as in the OpenStack testing infrastructure, that is, using tox. $ tox By default this runs the unit test suite with Python 2.7 and the PEP8/HACKING style checks. To run only one type of test, you can explicitly provide tox with the test environment to use. $ tox -epy27 # test suite on python 2.7 $ tox -epep8 # run full source code checker To run only a subset of tests, you can provide tox with a regex argument defining which tests to execute. $ tox -epy27 -- VolumeTests To use a debugger like pdb during a test run, you have to run the tests directly with another, non-concurrent test runner instead of testr. That also presumes that you have a virtualenv with all heat dependencies active. Below is an example bash script using the testtools test runner that also allows running single tests by providing a regex. #! /usr/bin/env sh testlist=$(mktemp) testr list-tests "$1" > $testlist python -m testtools.run --load-list $testlist Another way to use a debugger for testing is to run tox with the command: $ tox -e debug -- heat.tests.test_stack.StackTest.test_stack_reads_tenant Note: the last approach is mostly useful for running single tests. heat-10.0.2/heat/tests/test_identifier.py0000666000175000017500000004346613343562340020375 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import identifier from heat.tests import common class IdentifierTest(common.HeatTestCase): url_prefix = 'http://1.2.3.4/foo/' def test_attrs(self): hi = identifier.HeatIdentifier('t', 's', 'i', 'p') self.assertEqual('t', hi.tenant) self.assertEqual('s', hi.stack_name) self.assertEqual('i', hi.stack_id) self.assertEqual('/p', hi.path) def test_path_default(self): hi = identifier.HeatIdentifier('t', 's', 'i') self.assertEqual('', hi.path) def test_items(self): hi = identifier.HeatIdentifier('t', 's', 'i', 'p') self.assertEqual('t', hi['tenant']) self.assertEqual('s', hi['stack_name']) self.assertEqual('i', hi['stack_id']) self.assertEqual('/p', hi['path']) def test_invalid_attr(self): hi = identifier.HeatIdentifier('t', 's', 'i', 'p') hi.identity['foo'] = 'bar' self.assertRaises(AttributeError, getattr, hi, 'foo') def test_invalid_item(self): hi = identifier.HeatIdentifier('t', 's', 'i', 'p') hi.identity['foo'] = 'bar' self.assertRaises(KeyError, lambda o, k: o[k], hi, 'foo') def test_stack_path(self): hi = identifier.HeatIdentifier('t', 's', 'i', 'p') self.assertEqual('s/i', hi.stack_path()) def test_arn(self): hi = identifier.HeatIdentifier('t', 's', 'i', 'p') self.assertEqual('arn:openstack:heat::t:stacks/s/i/p', hi.arn()) def test_arn_url(self): hi = identifier.HeatIdentifier('t', 's', 'i', 'p') self.assertEqual('/arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p', hi.arn_url_path()) def test_arn_id_int(self): hi = identifier.HeatIdentifier('t', 's', 42, 'p') self.assertEqual('arn:openstack:heat::t:stacks/s/42/p', hi.arn()) def test_arn_parse(self): arn = 'arn:openstack:heat::t:stacks/s/i/p' hi = identifier.HeatIdentifier.from_arn(arn) self.assertEqual('t', hi.tenant) self.assertEqual('s', hi.stack_name) self.assertEqual('i', hi.stack_id) self.assertEqual('/p', hi.path) def test_arn_url_parse(self): url = self.url_prefix + 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p' hi = identifier.HeatIdentifier.from_arn_url(url) self.assertEqual('t', hi.tenant) self.assertEqual('s', hi.stack_name) self.assertEqual('i', hi.stack_id) self.assertEqual('/p', hi.path) def test_arn_parse_path_default(self): arn = 'arn:openstack:heat::t:stacks/s/i' hi = identifier.HeatIdentifier.from_arn(arn) self.assertEqual('t', hi.tenant) self.assertEqual('s', hi.stack_name) self.assertEqual('i', hi.stack_id) self.assertEqual('', hi.path) def test_arn_url_parse_default(self): url = self.url_prefix + 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i' hi = identifier.HeatIdentifier.from_arn_url(url) self.assertEqual('t', hi.tenant) self.assertEqual('s', hi.stack_name) self.assertEqual('i', hi.stack_id) self.assertEqual('', hi.path) def test_arn_parse_upper(self): arn = 'ARN:openstack:heat::t:stacks/s/i/p' hi = identifier.HeatIdentifier.from_arn(arn) self.assertEqual('s', hi.stack_name) self.assertEqual('i', hi.stack_id) self.assertEqual('/p', hi.path) def test_arn_url_parse_upper(self): url = self.url_prefix + 'ARN%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p' hi = identifier.HeatIdentifier.from_arn_url(url) self.assertEqual('t', hi.tenant) self.assertEqual('s', hi.stack_name) self.assertEqual('i', hi.stack_id) 
self.assertEqual('/p', hi.path) def test_arn_url_parse_qs(self): url = (self.url_prefix + 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p?foo=bar') hi = identifier.HeatIdentifier.from_arn_url(url) self.assertEqual('t', hi.tenant) self.assertEqual('s', hi.stack_name) self.assertEqual('i', hi.stack_id) self.assertEqual('/p', hi.path) def test_arn_parse_arn_invalid(self): arn = 'urn:openstack:heat::t:stacks/s/i' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn) def test_arn_url_parse_arn_invalid(self): url = self.url_prefix + 'urn:openstack:heat::t:stacks/s/i/p' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn_url, url) def test_arn_parse_os_invalid(self): arn = 'arn:aws:heat::t:stacks/s/i' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn) def test_arn_url_parse_os_invalid(self): url = self.url_prefix + 'arn:aws:heat::t:stacks/s/i/p' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn_url, url) def test_arn_parse_heat_invalid(self): arn = 'arn:openstack:cool::t:stacks/s/i' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn) def test_arn_url_parse_heat_invalid(self): url = self.url_prefix + 'arn:openstack:cool::t:stacks/s/i/p' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn_url, url) def test_arn_parse_stacks_invalid(self): arn = 'arn:openstack:heat::t:sticks/s/i' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn) def test_arn_url_parse_stacks_invalid(self): url = self.url_prefix + 'arn%3Aopenstack%3Aheat%3A%3At%3Asticks/s/i/p' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn_url, url) def test_arn_parse_missing_field(self): arn = 'arn:openstack:heat::t:stacks/s' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn) def test_arn_url_parse_missing_field(self): url = self.url_prefix + 'arn%3Aopenstack%3Aheat%3A%3At%3Asticks/s/' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn_url, url) def test_arn_parse_empty_field(self): arn = 'arn:openstack:heat::t:stacks//i' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn) def test_arn_url_parse_empty_field(self): url = self.url_prefix + 'arn%3Aopenstack%3Aheat%3A%3At%3Asticks//i' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn_url, url) def test_arn_url_parse_leading_char(self): url = self.url_prefix + 'Aarn%3Aopenstack%3Aheat%3A%3At%3Asticks/s/i/p' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn_url, url) def test_arn_url_parse_leading_space(self): url = self.url_prefix + ' arn%3Aopenstack%3Aheat%3A%3At%3Asticks/s/i/p' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn_url, url) def test_arn_url_parse_badurl_proto(self): url = 'htt://1.2.3.4/foo/arn%3Aopenstack%3Aheat%3A%3At%3Asticks/s/i/p' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn_url, url) def test_arn_url_parse_badurl_host(self): url = 'http:///foo/arn%3Aopenstack%3Aheat%3A%3At%3Asticks/s/i/p' self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn_url, url) def test_arn_round_trip(self): hii = identifier.HeatIdentifier('t', 's', 'i', 'p') hio = identifier.HeatIdentifier.from_arn(hii.arn()) self.assertEqual(hii.tenant, hio.tenant) self.assertEqual(hii.stack_name, hio.stack_name) self.assertEqual(hii.stack_id, hio.stack_id) self.assertEqual(hii.path, hio.path) def test_arn_parse_round_trip(self): arn = 'arn:openstack:heat::t:stacks/s/i/p' hi = identifier.HeatIdentifier.from_arn(arn) self.assertEqual(arn, hi.arn()) def 
test_arn_url_parse_round_trip(self): arn = '/arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p' url = 'http://1.2.3.4/foo' + arn hi = identifier.HeatIdentifier.from_arn_url(url) self.assertEqual(arn, hi.arn_url_path()) def test_dict_round_trip(self): hii = identifier.HeatIdentifier('t', 's', 'i', 'p') hio = identifier.HeatIdentifier(**dict(hii)) self.assertEqual(hii.tenant, hio.tenant) self.assertEqual(hii.stack_name, hio.stack_name) self.assertEqual(hii.stack_id, hio.stack_id) self.assertEqual(hii.path, hio.path) def test_url_path(self): hi = identifier.HeatIdentifier('t', 's', 'i', 'p') self.assertEqual('t/stacks/s/i/p', hi.url_path()) def test_url_path_default(self): hi = identifier.HeatIdentifier('t', 's', 'i') self.assertEqual('t/stacks/s/i', hi.url_path()) def test_url_path_with_unicode_path(self): hi = identifier.HeatIdentifier('t', 's', 'i', u'\u5de5') self.assertEqual('t/stacks/s/i/%E5%B7%A5', hi.url_path()) def test_tenant_escape(self): hi = identifier.HeatIdentifier(':/', 's', 'i') self.assertEqual(':/', hi.tenant) self.assertEqual('%3A%2F/stacks/s/i', hi.url_path()) self.assertEqual('arn:openstack:heat::%3A%2F:stacks/s/i', hi.arn()) def test_name_escape(self): hi = identifier.HeatIdentifier('t', ':%', 'i') self.assertEqual(':%', hi.stack_name) self.assertEqual('t/stacks/%3A%25/i', hi.url_path()) self.assertEqual('arn:openstack:heat::t:stacks/%3A%25/i', hi.arn()) def test_id_escape(self): hi = identifier.HeatIdentifier('t', 's', ':/') self.assertEqual(':/', hi.stack_id) self.assertEqual('t/stacks/s/%3A%2F', hi.url_path()) self.assertEqual('arn:openstack:heat::t:stacks/s/%3A%2F', hi.arn()) def test_id_contains(self): hi = identifier.HeatIdentifier('t', 's', ':/') self.assertNotIn("t", hi) self.assertIn("stack_id", hi) def test_path_escape(self): hi = identifier.HeatIdentifier('t', 's', 'i', ':/') self.assertEqual('/:/', hi.path) self.assertEqual('t/stacks/s/i/%3A/', hi.url_path()) self.assertEqual('arn:openstack:heat::t:stacks/s/i/%3A/', hi.arn()) def test_tenant_decode(self): arn = 'arn:openstack:heat::%3A%2F:stacks/s/i' hi = identifier.HeatIdentifier.from_arn(arn) self.assertEqual(':/', hi.tenant) def test_url_tenant_decode(self): enc_arn = 'arn%3Aopenstack%3Aheat%3A%3A%253A%252F%3Astacks%2Fs%2Fi' url = self.url_prefix + enc_arn hi = identifier.HeatIdentifier.from_arn_url(url) self.assertEqual(':/', hi.tenant) def test_name_decode(self): arn = 'arn:openstack:heat::t:stacks/%3A%25/i' hi = identifier.HeatIdentifier.from_arn(arn) self.assertEqual(':%', hi.stack_name) def test_url_name_decode(self): enc_arn = 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks%2F%253A%2525%2Fi' url = self.url_prefix + enc_arn hi = identifier.HeatIdentifier.from_arn_url(url) self.assertEqual(':%', hi.stack_name) def test_id_decode(self): arn = 'arn:openstack:heat::t:stacks/s/%3A%2F' hi = identifier.HeatIdentifier.from_arn(arn) self.assertEqual(':/', hi.stack_id) def test_url_id_decode(self): enc_arn = 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks%2Fs%2F%253A%252F' url = self.url_prefix + enc_arn hi = identifier.HeatIdentifier.from_arn_url(url) self.assertEqual(':/', hi.stack_id) def test_path_decode(self): arn = 'arn:openstack:heat::t:stacks/s/i/%3A%2F' hi = identifier.HeatIdentifier.from_arn(arn) self.assertEqual('/:/', hi.path) def test_url_path_decode(self): enc_arn = 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks%2Fs%2Fi%2F%253A%252F' url = self.url_prefix + enc_arn hi = identifier.HeatIdentifier.from_arn_url(url) self.assertEqual('/:/', hi.path) def test_arn_escape_decode_round_trip(self): hii = 
identifier.HeatIdentifier(':/', ':%', ':/', ':/') hio = identifier.HeatIdentifier.from_arn(hii.arn()) self.assertEqual(hii.tenant, hio.tenant) self.assertEqual(hii.stack_name, hio.stack_name) self.assertEqual(hii.stack_id, hio.stack_id) self.assertEqual(hii.path, hio.path) def test_arn_decode_escape_round_trip(self): arn = 'arn:openstack:heat::%3A%2F:stacks/%3A%25/%3A%2F/%3A/' hi = identifier.HeatIdentifier.from_arn(arn) self.assertEqual(arn, hi.arn()) def test_arn_url_decode_escape_round_trip(self): enc_arn = "".join(['arn%3Aopenstack%3Aheat%3A%3A%253A%252F%3A', 'stacks%2F%253A%2525%2F%253A%252F%2F%253A']) url = self.url_prefix + enc_arn hi = identifier.HeatIdentifier.from_arn_url(url) hi2 = identifier.HeatIdentifier.from_arn_url(self.url_prefix + hi.arn_url_path()) self.assertEqual(hi, hi2) def test_stack_name_slash(self): self.assertRaises(ValueError, identifier.HeatIdentifier, 't', 's/s', 'i', 'p') def test_equal(self): hi1 = identifier.HeatIdentifier('t', 's', 'i', 'p') hi2 = identifier.HeatIdentifier('t', 's', 'i', 'p') self.assertTrue(hi1 == hi2) def test_equal_dict(self): hi = identifier.HeatIdentifier('t', 's', 'i', 'p') self.assertTrue(hi == dict(hi)) self.assertTrue(dict(hi) == hi) def test_not_equal(self): hi1 = identifier.HeatIdentifier('t', 's', 'i', 'p') hi2 = identifier.HeatIdentifier('t', 's', 'i', 'q') self.assertFalse(hi1 == hi2) self.assertFalse(hi2 == hi1) def test_not_equal_dict(self): hi1 = identifier.HeatIdentifier('t', 's', 'i', 'p') hi2 = identifier.HeatIdentifier('t', 's', 'i', 'q') self.assertFalse(hi1 == dict(hi2)) self.assertFalse(dict(hi1) == hi2) self.assertFalse(hi1 == {'tenant': 't', 'stack_name': 's', 'stack_id': 'i'}) self.assertFalse({'tenant': 't', 'stack_name': 's', 'stack_id': 'i'} == hi1) def test_path_components(self): hi = identifier.HeatIdentifier('t', 's', 'i', 'p1/p2/p3') self.assertEqual(['p1', 'p2', 'p3'], hi._path_components()) class ResourceIdentifierTest(common.HeatTestCase): def test_resource_init_no_path(self): si = identifier.HeatIdentifier('t', 's', 'i') ri = identifier.ResourceIdentifier(resource_name='r', **si) self.assertEqual('/resources/r', ri.path) def test_resource_init_path(self): si = identifier.HeatIdentifier('t', 's', 'i') pi = identifier.ResourceIdentifier(resource_name='p', **si) ri = identifier.ResourceIdentifier(resource_name='r', **pi) self.assertEqual('/resources/p/resources/r', ri.path) def test_resource_init_from_dict(self): hi = identifier.HeatIdentifier('t', 's', 'i', '/resources/r') ri = identifier.ResourceIdentifier(**hi) self.assertEqual(hi, ri) def test_resource_stack(self): si = identifier.HeatIdentifier('t', 's', 'i') ri = identifier.ResourceIdentifier(resource_name='r', **si) self.assertEqual(si, ri.stack()) def test_resource_id(self): ri = identifier.ResourceIdentifier('t', 's', 'i', '', 'r') self.assertEqual('r', ri.resource_name) def test_resource_name_slash(self): self.assertRaises(ValueError, identifier.ResourceIdentifier, 't', 's', 'i', 'p', 'r/r') class EventIdentifierTest(common.HeatTestCase): def test_event_init_integer_id(self): self._test_event_init('42') def test_event_init_uuid_id(self): self._test_event_init('a3455d8c-9f88-404d-a85b-5315293e67de') def _test_event_init(self, event_id): si = identifier.HeatIdentifier('t', 's', 'i') pi = identifier.ResourceIdentifier(resource_name='p', **si) ei = identifier.EventIdentifier(event_id=event_id, **pi) self.assertEqual('/resources/p/events/{0}'.format(event_id), ei.path) def test_event_init_from_dict(self): hi = identifier.HeatIdentifier('t', 's', 'i', 
'/resources/p/events/42') ei = identifier.EventIdentifier(**hi) self.assertEqual(hi, ei) def test_event_stack(self): si = identifier.HeatIdentifier('t', 's', 'i') pi = identifier.ResourceIdentifier(resource_name='r', **si) ei = identifier.EventIdentifier(event_id='e', **pi) self.assertEqual(si, ei.stack()) def test_event_resource(self): si = identifier.HeatIdentifier('t', 's', 'i') pi = identifier.ResourceIdentifier(resource_name='r', **si) ei = identifier.EventIdentifier(event_id='e', **pi) self.assertEqual(pi, ei.resource()) def test_resource_name(self): ei = identifier.EventIdentifier('t', 's', 'i', '/resources/p', 'e') self.assertEqual('p', ei.resource_name) def test_event_id_integer(self): self._test_event_id('42') def test_event_id_uuid(self): self._test_event_id('a3455d8c-9f88-404d-a85b-5315293e67de') def _test_event_id(self, event_id): ei = identifier.EventIdentifier('t', 's', 'i', '/resources/p', event_id) self.assertEqual(event_id, ei.event_id) heat-10.0.2/heat/tests/fakes.py0000666000175000017500000000720313343562340016272 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """A fake server that "responds" to API methods with pre-canned responses. All of these responses come from the spec, so if for some reason the spec's wrong the tests might raise AssertionError. I've indicated in comments the places where actual behavior differs from the spec. """ from keystoneauth1 import plugin import mock class FakeClient(object): def assert_called(self, method, url, body=None, pos=-1): """Assert that an API method was just called.""" expected = (method, url) called = self.client.callstack[pos][0:2] assert self.client.callstack, ("Expected %s %s " "but no calls were made." % expected) assert expected == called, 'Expected %s %s; got %s %s' % ( expected + called) if body is not None: assert self.client.callstack[pos][2] == body def assert_called_anytime(self, method, url, body=None): """Assert that an API method was called anytime in the test.""" expected = (method, url) assert self.client.callstack, ("Expected %s %s but no calls " "were made." 
% expected) found = False for entry in self.client.callstack: if expected == entry[0:2]: found = True break assert found, 'Expected %s %s; got %s' % (expected + (self.client.callstack,)) if body is not None: try: assert entry[2] == body except AssertionError: print(entry[2]) print("!=") print(body) raise self.client.callstack = [] def clear_callstack(self): self.client.callstack = [] def authenticate(self): pass class FakeAuth(plugin.BaseAuthPlugin): def __init__(self, auth_token='abcd1234', only_services=None): self.auth_token = auth_token self.only_services = only_services def get_token(self, session, **kwargs): return self.auth_token def get_endpoint(self, session, service_type=None, **kwargs): if (self.only_services is not None and service_type not in self.only_services): return None return 'http://example.com:1234/v1' def get_auth_ref(self, session): return mock.Mock() def get_access(self, session): return FakeAccessInfo([], None, None) class FakeAccessInfo(object): def __init__(self, roles, user_domain, project_domain): self.roles = roles self.user_domain = user_domain self.project_domain = project_domain @property def role_names(self): return self.roles @property def user_domain_id(self): return self.user_domain @property def project_domain_id(self): return self.project_domain class FakeEventSink(object): def __init__(self, evt): self.events = [] self.evt = evt def consume(self, stack, event): self.events.append(event) self.evt.send(None) heat-10.0.2/heat/tests/test_environment_format.py0000666000175000017500000000572213343562340022160 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import yaml from heat.common import environment_format from heat.tests import common class YamlEnvironmentTest(common.HeatTestCase): def test_minimal_yaml(self): yaml1 = '' yaml2 = ''' parameters: {} encrypted_param_names: [] parameter_defaults: {} event_sinks: [] resource_registry: {} ''' tpl1 = environment_format.parse(yaml1) environment_format.default_for_missing(tpl1) tpl2 = environment_format.parse(yaml2) self.assertEqual(tpl1, tpl2) def test_param_valid_strategy_section(self): yaml1 = '' yaml2 = ''' parameters: {} encrypted_param_names: [] parameter_defaults: {} parameter_merge_strategies: {} event_sinks: [] resource_registry: {} ''' tpl1 = environment_format.parse(yaml1) environment_format.default_for_missing(tpl1) tpl2 = environment_format.parse(yaml2) self.assertNotEqual(tpl1, tpl2) def test_wrong_sections(self): env = ''' parameters: {} resource_regis: {} ''' self.assertRaises(ValueError, environment_format.parse, env) def test_bad_yaml(self): env = ''' parameters: } ''' self.assertRaises(ValueError, environment_format.parse, env) def test_yaml_none(self): self.assertEqual({}, environment_format.parse(None)) def test_parse_string_environment(self): env = 'just string' expect = 'The environment is not a valid YAML mapping data type.' 
msg = self.assertRaises(ValueError, environment_format.parse, env) self.assertIn(expect, msg.args) def test_parse_document(self): env = '["foo" , "bar"]' expect = 'The environment is not a valid YAML mapping data type.' msg = self.assertRaises(ValueError, environment_format.parse, env) self.assertIn(expect, msg.args) class YamlParseExceptions(common.HeatTestCase): scenarios = [ ('scanner', dict(raised_exception=yaml.scanner.ScannerError())), ('parser', dict(raised_exception=yaml.parser.ParserError())), ('reader', dict(raised_exception=yaml.reader.ReaderError('', '', '', '', ''))), ] def test_parse_to_value_exception(self): text = 'not important' with mock.patch.object(yaml, 'load') as yaml_loader: yaml_loader.side_effect = self.raised_exception self.assertRaises(ValueError, environment_format.parse, text) heat-10.0.2/heat/tests/test_empty_stack.py0000666000175000017500000000704213343562340020564 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import template_format from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests import utils class StackTest(common.HeatTestCase): def setUp(self): super(StackTest, self).setUp() self.username = 'parser_stack_test_user' self.ctx = utils.dummy_context() def _assert_can_create(self, templ): stack = parser.Stack(self.ctx, utils.random_name(), template.Template(templ)) stack.store() stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), stack.state) return stack def test_heat_empty_json(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {}, 'Parameters': {}, 'Outputs': {}} self._assert_can_create(tmpl) def test_cfn_empty_json(self): tmpl = {'AWSTemplateFormatVersion': '2010-09-09', 'Resources': {}, 'Parameters': {}, 'Outputs': {}} self._assert_can_create(tmpl) def test_hot_empty_json(self): tmpl = {'heat_template_version': '2013-05-23', 'resources': {}, 'parameters': {}, 'outputs': {}} self._assert_can_create(tmpl) def test_heat_empty_yaml(self): t = template_format.parse(''' HeatTemplateFormatVersion: 2012-12-12 Parameters: Resources: Outputs: ''') self._assert_can_create(t) def test_cfn_empty_yaml(self): t = template_format.parse(''' AWSTemplateFormatVersion: 2010-09-09 Parameters: Resources: Outputs: ''') self._assert_can_create(t) def test_hot_empty_yaml(self): t = template_format.parse(''' heat_template_version: 2013-05-23 parameters: resources: outputs: ''') self._assert_can_create(t) def test_update_hot_empty_yaml(self): t = template_format.parse(''' heat_template_version: 2013-05-23 parameters: resources: outputs: ''') ut = template_format.parse(''' heat_template_version: 2013-05-23 parameters: resources: rand: type: OS::Heat::RandomString outputs: ''') stack = self._assert_can_create(t) updated = parser.Stack(self.ctx, utils.random_name(), template.Template(ut)) stack.update(updated) self.assertEqual((parser.Stack.UPDATE, parser.Stack.COMPLETE), stack.state) def test_update_cfn_empty_yaml(self): t = 
template_format.parse(''' AWSTemplateFormatVersion: 2010-09-09 Parameters: Resources: Outputs: ''') ut = template_format.parse(''' AWSTemplateFormatVersion: 2010-09-09 Parameters: Resources: rand: Type: OS::Heat::RandomString Outputs: ''') stack = self._assert_can_create(t) updated = parser.Stack(self.ctx, utils.random_name(), template.Template(ut)) stack.update(updated) self.assertEqual((parser.Stack.UPDATE, parser.Stack.COMPLETE), stack.state) heat-10.0.2/heat/tests/test_engine_api_utils.py0000666000175000017500000014757213343562340021574 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime as dt import json import uuid import mock from oslo_utils import timeutils import six from heat.common import exception from heat.common import template_format from heat.common import timeutils as heat_timeutils from heat.db.sqlalchemy import models from heat.engine import api from heat.engine.cfn import parameters as cfn_param from heat.engine import event from heat.engine import parent_rsrc from heat.engine import stack as parser from heat.engine import template from heat.objects import event as event_object from heat.rpc import api as rpc_api from heat.tests import common from heat.tests import utils datetime = dt.datetime class FormatTest(common.HeatTestCase): def setUp(self): super(FormatTest, self).setUp() tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'generic1': {'Type': 'GenericResourceType', 'Properties': {'k1': 'v1'}}, 'generic2': { 'Type': 'GenericResourceType', 'DependsOn': 'generic1'}, 'generic3': {'Type': 'ResWithShowAttrType'}, 'generic4': {'Type': 'StackResourceType'} } }) self.context = utils.dummy_context() self.stack = parser.Stack(self.context, 'test_stack', tmpl, stack_id=str(uuid.uuid4())) def _dummy_event(self, res_properties=None): resource = self.stack['generic1'] ev_uuid = 'abc123yc-9f88-404d-a85b-531529456xyz' ev = event.Event(self.context, self.stack, 'CREATE', 'COMPLETE', 'state changed', 'z3455xyc-9f88-404d-a85b-5315293e67de', resource._rsrc_prop_data_id, resource._stored_properties_data, resource.name, resource.type(), uuid=ev_uuid) ev.store() return event_object.Event.get_all_by_stack( self.context, self.stack.id, filters={'uuid': ev_uuid})[0] def test_format_stack_resource(self): self.stack.created_time = datetime(2015, 8, 3, 17, 5, 1) self.stack.updated_time = datetime(2015, 8, 3, 17, 6, 2) res = self.stack['generic1'] resource_keys = set(( rpc_api.RES_CREATION_TIME, rpc_api.RES_UPDATED_TIME, rpc_api.RES_NAME, rpc_api.RES_PHYSICAL_ID, rpc_api.RES_ACTION, rpc_api.RES_STATUS, rpc_api.RES_STATUS_DATA, rpc_api.RES_TYPE, rpc_api.RES_ID, rpc_api.RES_STACK_ID, rpc_api.RES_STACK_NAME, rpc_api.RES_REQUIRED_BY, )) resource_details_keys = resource_keys.union(set(( rpc_api.RES_DESCRIPTION, rpc_api.RES_METADATA, rpc_api.RES_ATTRIBUTES, ))) formatted = api.format_stack_resource(res, True) self.assertEqual(resource_details_keys, set(formatted.keys())) formatted = api.format_stack_resource(res, False) 
self.assertEqual(resource_keys, set(formatted.keys())) self.assertEqual(heat_timeutils.isotime(self.stack.created_time), formatted[rpc_api.RES_CREATION_TIME]) self.assertEqual(heat_timeutils.isotime(self.stack.updated_time), formatted[rpc_api.RES_UPDATED_TIME]) self.assertEqual(res.INIT, formatted[rpc_api.RES_ACTION]) def test_format_stack_resource_no_attrs(self): res = self.stack['generic1'] formatted = api.format_stack_resource(res, True, with_attr=False) self.assertNotIn(rpc_api.RES_ATTRIBUTES, formatted) self.assertIn(rpc_api.RES_METADATA, formatted) def test_format_stack_resource_has_been_deleted(self): # assume the stack and resource have been deleted, # to test the resource's action inherit from stack self.stack.state_set(self.stack.DELETE, self.stack.COMPLETE, 'test_delete') res = self.stack['generic1'] formatted = api.format_stack_resource(res, False) self.assertEqual(res.DELETE, formatted[rpc_api.RES_ACTION]) def test_format_stack_resource_has_been_rollback(self): # Rollback a stack, the resources perhaps have not been # created yet or have been deleted when rollback. # To test the resource's action inherit from stack self.stack.state_set(self.stack.ROLLBACK, self.stack.COMPLETE, 'test_rollback') res = self.stack['generic1'] formatted = api.format_stack_resource(res, False) self.assertEqual(res.ROLLBACK, formatted[rpc_api.RES_ACTION]) @mock.patch.object(api, 'format_resource_properties') def test_format_stack_resource_with_props(self, mock_format_props): mock_format_props.return_value = 'formatted_res_props' res = self.stack['generic1'] formatted = api.format_stack_resource(res, True, with_props=True) formatted_props = formatted[rpc_api.RES_PROPERTIES] self.assertEqual('formatted_res_props', formatted_props) @mock.patch.object(api, 'format_resource_attributes') def test_format_stack_resource_with_attributes(self, mock_format_attrs): mock_format_attrs.return_value = 'formatted_resource_attrs' res = self.stack['generic1'] formatted = api.format_stack_resource(res, True, with_attr=['a', 'b']) formatted_attrs = formatted[rpc_api.RES_ATTRIBUTES] self.assertEqual('formatted_resource_attrs', formatted_attrs) def test_format_resource_attributes(self): res = self.stack['generic1'] # the _resolve_attribute method of 'generic1' returns map with all # attributes except 'show' (because it's None in this test) formatted_attributes = api.format_resource_attributes(res) expected = {'foo': 'generic1', 'Foo': 'generic1'} self.assertEqual(expected, formatted_attributes) def test_format_resource_attributes_show_attribute(self): res = self.stack['generic3'] res.resource_id = 'generic3_id' formatted_attributes = api.format_resource_attributes(res) self.assertEqual(3, len(formatted_attributes)) self.assertIn('foo', formatted_attributes) self.assertIn('Foo', formatted_attributes) self.assertIn('Another', formatted_attributes) def test_format_resource_attributes_show_attribute_with_attr(self): res = self.stack['generic3'] res.resource_id = 'generic3_id' formatted_attributes = api.format_resource_attributes( res, with_attr=['c']) self.assertEqual(4, len(formatted_attributes)) self.assertIn('foo', formatted_attributes) self.assertIn('Foo', formatted_attributes) self.assertIn('Another', formatted_attributes) self.assertIn('c', formatted_attributes) def _get_formatted_resource_properties(self, res_name): tmpl = template.Template(template_format.parse(''' heat_template_version: 2013-05-23 resources: resource1: type: ResWithComplexPropsAndAttrs resource2: type: ResWithComplexPropsAndAttrs properties: a_string: 
foobar resource3: type: ResWithComplexPropsAndAttrs properties: a_string: { get_attr: [ resource2, string] } ''')) stack = parser.Stack(utils.dummy_context(), 'test_stack_for_preview', tmpl, stack_id=str(uuid.uuid4())) res = stack[res_name] return api.format_resource_properties(res) def test_format_resource_properties_empty(self): props = self._get_formatted_resource_properties('resource1') self.assertIsNone(props['a_string']) self.assertIsNone(props['a_list']) self.assertIsNone(props['a_map']) def test_format_resource_properties_direct_props(self): props = self._get_formatted_resource_properties('resource2') self.assertEqual('foobar', props['a_string']) def test_format_resource_properties_get_attr(self): props = self._get_formatted_resource_properties('resource3') self.assertEqual('', props['a_string']) def test_format_stack_resource_with_nested_stack(self): res = self.stack['generic4'] nested_id = {'foo': 'bar'} res.has_nested = mock.Mock() res.has_nested.return_value = True res.nested_identifier = mock.Mock() res.nested_identifier.return_value = nested_id formatted = api.format_stack_resource(res, False) self.assertEqual(nested_id, formatted[rpc_api.RES_NESTED_STACK_ID]) def test_format_stack_resource_with_nested_stack_none(self): res = self.stack['generic4'] resource_keys = set(( rpc_api.RES_CREATION_TIME, rpc_api.RES_UPDATED_TIME, rpc_api.RES_NAME, rpc_api.RES_PHYSICAL_ID, rpc_api.RES_ACTION, rpc_api.RES_STATUS, rpc_api.RES_STATUS_DATA, rpc_api.RES_TYPE, rpc_api.RES_ID, rpc_api.RES_STACK_ID, rpc_api.RES_STACK_NAME, rpc_api.RES_REQUIRED_BY)) formatted = api.format_stack_resource(res, False) self.assertEqual(resource_keys, set(formatted.keys())) def test_format_stack_resource_with_nested_stack_not_found(self): res = self.stack['generic4'] self.patchobject(parser.Stack, 'load', side_effect=exception.NotFound()) resource_keys = set(( rpc_api.RES_CREATION_TIME, rpc_api.RES_UPDATED_TIME, rpc_api.RES_NAME, rpc_api.RES_PHYSICAL_ID, rpc_api.RES_ACTION, rpc_api.RES_STATUS, rpc_api.RES_STATUS_DATA, rpc_api.RES_TYPE, rpc_api.RES_ID, rpc_api.RES_STACK_ID, rpc_api.RES_STACK_NAME, rpc_api.RES_REQUIRED_BY)) formatted = api.format_stack_resource(res, False) # 'nested_stack_id' is not in formatted self.assertEqual(resource_keys, set(formatted.keys())) def test_format_stack_resource_with_nested_stack_empty(self): res = self.stack['generic4'] nested_id = {'foo': 'bar'} res.has_nested = mock.Mock() res.has_nested.return_value = True res.nested_identifier = mock.Mock() res.nested_identifier.return_value = nested_id formatted = api.format_stack_resource(res, False) self.assertEqual(nested_id, formatted[rpc_api.RES_NESTED_STACK_ID]) def test_format_stack_resource_required_by(self): res1 = api.format_stack_resource(self.stack['generic1']) res2 = api.format_stack_resource(self.stack['generic2']) self.assertEqual(['generic2'], res1['required_by']) self.assertEqual([], res2['required_by']) def test_format_stack_resource_with_parent_stack(self): res = self.stack['generic1'] res.stack.defn._parent_info = parent_rsrc.ParentResourceProxy( self.stack.context, 'foobar', None) formatted = api.format_stack_resource(res, False) self.assertEqual('foobar', formatted[rpc_api.RES_PARENT_RESOURCE]) def test_format_event_identifier_uuid(self): event = self._dummy_event() event_keys = set(( rpc_api.EVENT_ID, rpc_api.EVENT_STACK_ID, rpc_api.EVENT_STACK_NAME, rpc_api.EVENT_TIMESTAMP, rpc_api.EVENT_RES_NAME, rpc_api.EVENT_RES_PHYSICAL_ID, rpc_api.EVENT_RES_ACTION, rpc_api.EVENT_RES_STATUS, rpc_api.EVENT_RES_STATUS_DATA, 
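# Editor's note: a hedged sketch (hypothetical helper, not Heat's code) of
# what the nested-stack tests above assert: the nested stack id is added to
# the formatted resource only when the resource really has a nested stack,
# so the key is absent both when has_nested() is false and when the nested
# stack cannot be found.
import mock

def add_nested_id_sketch(formatted, res):
    if res.has_nested() and res.nested_identifier():
        formatted['nested_stack_id'] = res.nested_identifier()
    return formatted

_res = mock.Mock()
_res.has_nested.return_value = False
assert 'nested_stack_id' not in add_nested_id_sketch({}, _res)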
rpc_api.EVENT_RES_TYPE, rpc_api.EVENT_RES_PROPERTIES)) formatted = api.format_event(event, self.stack.identifier()) self.assertEqual(event_keys, set(formatted.keys())) event_id_formatted = formatted[rpc_api.EVENT_ID] self.assertEqual({ 'path': '/resources/generic1/events/%s' % event.uuid, 'stack_id': self.stack.id, 'stack_name': 'test_stack', 'tenant': 'test_tenant_id' }, event_id_formatted) def test_format_event_prop_data(self): resource = self.stack['generic1'] resource._update_stored_properties() resource.store() event = self._dummy_event( res_properties=resource._stored_properties_data) formatted = api.format_event(event, self.stack.identifier(), include_rsrc_prop_data=True) self.assertEqual({'k1': 'v1'}, formatted[rpc_api.EVENT_RES_PROPERTIES]) def test_format_event_legacy_prop_data(self): event = self._dummy_event(res_properties=None) # legacy location db_obj = self.stack.context.session.query( models.Event).filter_by(id=event.id).first() db_obj.update({'resource_properties': {'legacy_k1': 'legacy_v1'}}) db_obj.save(self.stack.context.session) event_legacy = event_object.Event.get_all_by_stack(self.context, self.stack.id)[0] formatted = api.format_event(event_legacy, self.stack.identifier()) self.assertEqual({'legacy_k1': 'legacy_v1'}, formatted[rpc_api.EVENT_RES_PROPERTIES]) def test_format_event_empty_prop_data(self): event = self._dummy_event(res_properties=None) formatted = api.format_event(event, self.stack.identifier()) self.assertEqual({}, formatted[rpc_api.EVENT_RES_PROPERTIES]) @mock.patch.object(api, 'format_stack_resource') def test_format_stack_preview(self, mock_fmt_resource): def mock_format_resources(res, **kwargs): return 'fmt%s' % res mock_fmt_resource.side_effect = mock_format_resources resources = [1, [2, [3]]] self.stack.preview_resources = mock.Mock(return_value=resources) stack = api.format_stack_preview(self.stack) self.assertIsInstance(stack, dict) self.assertIsNone(stack.get('status')) self.assertIsNone(stack.get('action')) self.assertIsNone(stack.get('status_reason')) self.assertEqual('test_stack', stack['stack_name']) self.assertIn('resources', stack) resources = list(stack['resources']) self.assertEqual('fmt1', resources[0]) resources = list(resources[1]) self.assertEqual('fmt2', resources[0]) resources = list(resources[1]) self.assertEqual('fmt3', resources[0]) kwargs = mock_fmt_resource.call_args[1] self.assertTrue(kwargs['with_props']) def test_format_stack(self): self.stack.created_time = datetime(1970, 1, 1) info = api.format_stack(self.stack) aws_id = ('arn:openstack:heat::test_tenant_id:' 'stacks/test_stack/' + self.stack.id) expected_stack_info = { 'capabilities': [], 'creation_time': '1970-01-01T00:00:00Z', 'deletion_time': None, 'description': 'No description', 'disable_rollback': True, 'notification_topics': [], 'stack_action': 'CREATE', 'stack_name': 'test_stack', 'stack_owner': 'test_username', 'stack_status': 'IN_PROGRESS', 'stack_status_reason': '', 'stack_user_project_id': None, 'outputs': [], 'template_description': 'No description', 'timeout_mins': None, 'tags': None, 'parameters': { 'AWS::Region': 'ap-southeast-1', 'AWS::StackId': aws_id, 'AWS::StackName': 'test_stack'}, 'stack_identity': { 'path': '', 'stack_id': self.stack.id, 'stack_name': 'test_stack', 'tenant': 'test_tenant_id'}, 'updated_time': None, 'parent': None} self.assertEqual(expected_stack_info, info) def test_format_stack_created_time(self): self.stack.created_time = None info = api.format_stack(self.stack) self.assertIsNotNone(info['creation_time']) def 
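# Editor's note: the expected '1970-01-01T00:00:00Z' above is the ISO 8601
# UTC ("Zulu") form that heat_timeutils.isotime renders for naive datetimes,
# as the assertions in these tests pin down; a plain strftime equivalent of
# that format as a quick self-check:
from datetime import datetime

assert (datetime(1970, 1, 1).strftime('%Y-%m-%dT%H:%M:%SZ')
        == '1970-01-01T00:00:00Z')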
test_format_stack_updated_time(self): self.stack.updated_time = None info = api.format_stack(self.stack) self.assertIsNone(info['updated_time']) self.stack.updated_time = datetime(1970, 1, 1) info = api.format_stack(self.stack) self.assertEqual('1970-01-01T00:00:00Z', info['updated_time']) @mock.patch.object(api, 'format_stack_outputs') def test_format_stack_adds_outputs(self, mock_fmt_outputs): mock_fmt_outputs.return_value = 'foobar' self.stack.action = 'CREATE' self.stack.status = 'COMPLETE' info = api.format_stack(self.stack) self.assertEqual('foobar', info[rpc_api.STACK_OUTPUTS]) @mock.patch.object(api, 'format_stack_outputs') def test_format_stack_without_resolving_outputs(self, mock_fmt_outputs): mock_fmt_outputs.return_value = 'foobar' self.stack.action = 'CREATE' self.stack.status = 'COMPLETE' info = api.format_stack(self.stack, resolve_outputs=False) self.assertIsNone(info.get(rpc_api.STACK_OUTPUTS)) def test_format_stack_outputs(self): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'generic': {'Type': 'GenericResourceType'} }, 'Outputs': { 'correct_output': { 'Description': 'Good output', 'Value': {'Fn::GetAtt': ['generic', 'Foo']} }, 'incorrect_output': { 'Value': {'Fn::GetAtt': ['generic', 'Bar']} } } }) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl, stack_id=str(uuid.uuid4())) stack.action = 'CREATE' stack.status = 'COMPLETE' stack['generic'].action = 'CREATE' stack['generic'].status = 'COMPLETE' stack._update_all_resource_data(False, True) info = api.format_stack_outputs(stack.outputs, resolve_value=True) expected = [{'description': 'No description given', 'output_error': 'The Referenced Attribute (generic Bar) ' 'is incorrect.', 'output_key': 'incorrect_output', 'output_value': None}, {'description': 'Good output', 'output_key': 'correct_output', 'output_value': 'generic'}] self.assertEqual(expected, sorted(info, key=lambda k: k['output_key'], reverse=True)) def test_format_stack_outputs_unresolved(self): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'generic': {'Type': 'GenericResourceType'} }, 'Outputs': { 'correct_output': { 'Description': 'Good output', 'Value': {'Fn::GetAtt': ['generic', 'Foo']} }, 'incorrect_output': { 'Value': {'Fn::GetAtt': ['generic', 'Bar']} } } }) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl, stack_id=str(uuid.uuid4())) stack.action = 'CREATE' stack.status = 'COMPLETE' stack['generic'].action = 'CREATE' stack['generic'].status = 'COMPLETE' info = api.format_stack_outputs(stack.outputs) expected = [{'description': 'No description given', 'output_key': 'incorrect_output'}, {'description': 'Good output', 'output_key': 'correct_output'}] self.assertEqual(expected, sorted(info, key=lambda k: k['output_key'], reverse=True)) def test_format_stack_params_csv(self): tmpl = template.Template({ 'heat_template_version': '2013-05-23', 'parameters': { 'foo': { 'type': 'comma_delimited_list', 'default': ['bar', 'baz'] }, } }) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl, stack_id=str(uuid.uuid4())) info = api.format_stack(stack) # Should be 'bar,baz' NOT "[u'bar', u'baz']" self.assertEqual('bar,baz', info['parameters']['foo']) def test_format_stack_params_json(self): tmpl = template.Template({ 'heat_template_version': '2013-05-23', 'parameters': { 'foo': { 'type': 'json', 'default': {'bar': 'baz'} }, } }) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl, stack_id=str(uuid.uuid4())) info = api.format_stack(stack) # 
Should be '{"bar": "baz"}' NOT "{u'bar': u'baz'}" self.assertEqual('{"bar": "baz"}', info['parameters']['foo']) class FormatValidateParameterTest(common.HeatTestCase): base_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "test", "Parameters" : { %s } } ''' base_template_hot = ''' { "heat_template_version" : "2013-05-23", "description" : "test", "parameters" : { %s } } ''' scenarios = [ ('simple', dict(template=base_template, param_name='KeyName', param=''' "KeyName": { "Type": "String", "Description": "Name of SSH key pair" } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('default', dict(template=base_template, param_name='KeyName', param=''' "KeyName": { "Type": "String", "Description": "Name of SSH key pair", "Default": "dummy" } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'Default': 'dummy', 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('min_length_constraint', dict(template=base_template, param_name='KeyName', param=''' "KeyName": { "Type": "String", "Description": "Name of SSH key pair", "MinLength": 4 } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'MinLength': 4, 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('max_length_constraint', dict(template=base_template, param_name='KeyName', param=''' "KeyName": { "Type": "String", "Description": "Name of SSH key pair", "MaxLength": 10 } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'MaxLength': 10, 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('min_max_length_constraint', dict(template=base_template, param_name='KeyName', param=''' "KeyName": { "Type": "String", "Description": "Name of SSH key pair", "MinLength": 4, "MaxLength": 10 } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'MinLength': 4, 'MaxLength': 10, 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('min_value_constraint', dict(template=base_template, param_name='MyNumber', param=''' "MyNumber": { "Type": "Number", "Description": "A number", "MinValue": 4 } ''', expected={ 'Type': 'Number', 'Description': 'A number', 'MinValue': 4, 'NoEcho': 'false', 'Label': 'MyNumber' }) ), ('max_value_constraint', dict(template=base_template, param_name='MyNumber', param=''' "MyNumber": { "Type": "Number", "Description": "A number", "MaxValue": 10 } ''', expected={ 'Type': 'Number', 'Description': 'A number', 'MaxValue': 10, 'NoEcho': 'false', 'Label': 'MyNumber' }) ), ('min_max_value_constraint', dict(template=base_template, param_name='MyNumber', param=''' "MyNumber": { "Type": "Number", "Description": "A number", "MinValue": 4, "MaxValue": 10 } ''', expected={ 'Type': 'Number', 'Description': 'A number', 'MinValue': 4, 'MaxValue': 10, 'NoEcho': 'false', 'Label': 'MyNumber' }) ), ('allowed_values_constraint', dict(template=base_template, param_name='KeyName', param=''' "KeyName": { "Type": "String", "Description": "Name of SSH key pair", "AllowedValues": [ "foo", "bar", "blub" ] } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'AllowedValues': ['foo', 'bar', 'blub'], 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('allowed_pattern_constraint', dict(template=base_template, param_name='KeyName', param=''' "KeyName": { "Type": "String", "Description": "Name of SSH key pair", "AllowedPattern": "[a-zA-Z0-9]+" } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'AllowedPattern': "[a-zA-Z0-9]+", 'NoEcho': 'false', 'Label': 'KeyName' }) ), 
('multiple_constraints', dict(template=base_template, param_name='KeyName', param=''' "KeyName": { "Type": "String", "Description": "Name of SSH key pair", "MinLength": 4, "MaxLength": 10, "AllowedValues": [ "foo", "bar", "blub" ], "AllowedPattern": "[a-zA-Z0-9]+" } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'MinLength': 4, 'MaxLength': 10, 'AllowedValues': ['foo', 'bar', 'blub'], 'AllowedPattern': "[a-zA-Z0-9]+", 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('simple_hot', dict(template=base_template_hot, param_name='KeyName', param=''' "KeyName": { "type": "string", "description": "Name of SSH key pair" } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('default_hot', dict(template=base_template_hot, param_name='KeyName', param=''' "KeyName": { "type": "string", "description": "Name of SSH key pair", "default": "dummy" } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'Default': 'dummy', 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('min_length_constraint_hot', dict(template=base_template_hot, param_name='KeyName', param=''' "KeyName": { "type": "string", "description": "Name of SSH key pair", "constraints": [ { "length": { "min": 4} } ] } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'MinLength': 4, 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('max_length_constraint_hot', dict(template=base_template_hot, param_name='KeyName', param=''' "KeyName": { "type": "string", "description": "Name of SSH key pair", "constraints": [ { "length": { "max": 10} } ] } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'MaxLength': 10, 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('min_max_length_constraint_hot', dict(template=base_template_hot, param_name='KeyName', param=''' "KeyName": { "type": "string", "description": "Name of SSH key pair", "constraints": [ { "length": { "min":4, "max": 10} } ] } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'MinLength': 4, 'MaxLength': 10, 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('min_value_constraint_hot', dict(template=base_template_hot, param_name='MyNumber', param=''' "MyNumber": { "type": "number", "description": "A number", "constraints": [ { "range": { "min": 4} } ] } ''', expected={ 'Type': 'Number', 'Description': 'A number', 'MinValue': 4, 'NoEcho': 'false', 'Label': 'MyNumber' }) ), ('max_value_constraint_hot', dict(template=base_template_hot, param_name='MyNumber', param=''' "MyNumber": { "type": "number", "description": "A number", "constraints": [ { "range": { "max": 10} } ] } ''', expected={ 'Type': 'Number', 'Description': 'A number', 'MaxValue': 10, 'NoEcho': 'false', 'Label': 'MyNumber' }) ), ('min_max_value_constraint_hot', dict(template=base_template_hot, param_name='MyNumber', param=''' "MyNumber": { "type": "number", "description": "A number", "constraints": [ { "range": { "min": 4, "max": 10} } ] } ''', expected={ 'Type': 'Number', 'Description': 'A number', 'MinValue': 4, 'MaxValue': 10, 'NoEcho': 'false', 'Label': 'MyNumber' }) ), ('allowed_values_constraint_hot', dict(template=base_template_hot, param_name='KeyName', param=''' "KeyName": { "type": "string", "description": "Name of SSH key pair", "constraints": [ { "allowed_values": [ "foo", "bar", "blub" ] } ] } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'AllowedValues': ['foo', 'bar', 'blub'], 'NoEcho': 'false', 'Label': 'KeyName' }) ), 
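# Editor's note: the HOT scenarios above all translate native constraints
# into the CFN-style keys returned by the validate API. The helper below is
# a hedged sketch of that mapping (a hypothetical name, not Heat's
# implementation), including the joining of multiple constraint
# descriptions that the later scenarios rely on.
def translate_constraints_sketch(constraints):
    out = {}
    descriptions = []
    for c in constraints:
        if 'length' in c:
            if 'min' in c['length']:
                out['MinLength'] = c['length']['min']
            if 'max' in c['length']:
                out['MaxLength'] = c['length']['max']
        if 'range' in c:
            if 'min' in c['range']:
                out['MinValue'] = c['range']['min']
            if 'max' in c['range']:
                out['MaxValue'] = c['range']['max']
        if 'allowed_values' in c:
            out['AllowedValues'] = c['allowed_values']
        if 'allowed_pattern' in c:
            out['AllowedPattern'] = c['allowed_pattern']
        if 'description' in c:
            descriptions.append(c['description'])
    if descriptions:
        out['ConstraintDescription'] = ' '.join(descriptions)
    return out

assert translate_constraints_sketch(
    [{'length': {'min': 4}, 'description': 'Big enough.'}]
) == {'MinLength': 4, 'ConstraintDescription': 'Big enough.'}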
('allowed_pattern_constraint_hot', dict(template=base_template_hot, param_name='KeyName', param=''' "KeyName": { "type": "string", "description": "Name of SSH key pair", "constraints": [ { "allowed_pattern": "[a-zA-Z0-9]+" } ] } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'AllowedPattern': "[a-zA-Z0-9]+", 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('multiple_constraints_hot', dict(template=base_template_hot, param_name='KeyName', param=''' "KeyName": { "type": "string", "description": "Name of SSH key pair", "constraints": [ { "length": { "min": 4, "max": 10} }, { "allowed_values": [ "foo", "bar", "blub" ] }, { "allowed_pattern": "[a-zA-Z0-9]+" } ] } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'MinLength': 4, 'MaxLength': 10, 'AllowedValues': ['foo', 'bar', 'blub'], 'AllowedPattern': "[a-zA-Z0-9]+", 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('constraint_description_hot', dict(template=base_template_hot, param_name='KeyName', param=''' "KeyName": { "type": "string", "description": "Name of SSH key pair", "constraints": [ { "length": { "min": 4}, "description": "Big enough" } ] } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'MinLength': 4, 'ConstraintDescription': 'Big enough', 'NoEcho': 'false', 'Label': 'KeyName' }) ), ('constraint_multiple_descriptions_hot', dict(template=base_template_hot, param_name='KeyName', param=''' "KeyName": { "type": "string", "description": "Name of SSH key pair", "constraints": [ { "length": { "min": 4}, "description": "Big enough." }, { "allowed_pattern": "[a-zA-Z0-9]+", "description": "Only letters." } ] } ''', expected={ 'Type': 'String', 'Description': 'Name of SSH key pair', 'MinLength': 4, 'AllowedPattern': "[a-zA-Z0-9]+", 'ConstraintDescription': 'Big enough. 
Only letters.', 'NoEcho': 'false', 'Label': 'KeyName' }) ),
('constraint_custom_hot', dict(template=base_template_hot, param_name='KeyName', param=''' "KeyName": { "type": "string", "description": "Public Network", "constraints": [ { "custom_constraint": "neutron.network" } ] } ''', expected={ 'Type': 'String', 'Description': 'Public Network', 'NoEcho': 'false', 'Label': 'KeyName', 'CustomConstraint': 'neutron.network' }) ) ]
def test_format_validate_parameter(self): """Test the format of a parameter.""" t = template_format.parse(self.template % self.param) tmpl = template.Template(t) tmpl_params = cfn_param.CfnParameters(None, tmpl) tmpl_params.validate(validate_value=False) param = tmpl_params.params[self.param_name] param_formatted = api.format_validate_parameter(param) self.assertEqual(self.expected, param_formatted)
class FormatSoftwareConfigDeploymentTest(common.HeatTestCase):
def _dummy_software_config(self): config = mock.Mock() self.now = timeutils.utcnow() config.name = 'config_mysql' config.group = 'Heat::Shell' config.id = str(uuid.uuid4()) config.created_at = self.now config.config = { 'inputs': [{'name': 'bar'}], 'outputs': [{'name': 'result'}], 'options': {}, 'config': '#!/bin/bash\n' } config.tenant = str(uuid.uuid4()) return config
def _dummy_software_deployment(self): config = self._dummy_software_config() deployment = mock.Mock() deployment.config = config deployment.id = str(uuid.uuid4()) deployment.server_id = str(uuid.uuid4()) deployment.input_values = {'bar': 'baaaaa'} deployment.output_values = {'result': '0'} deployment.action = 'INIT' deployment.status = 'COMPLETE' deployment.status_reason = 'Because' deployment.created_at = config.created_at deployment.updated_at = config.created_at return deployment
def test_format_software_config(self): config = self._dummy_software_config() result = api.format_software_config(config) self.assertIsNotNone(result) self.assertEqual([{'name': 'bar'}], result['inputs']) self.assertEqual([{'name': 'result'}], result['outputs']) self.assertEqual({}, result['options']) self.assertEqual(heat_timeutils.isotime(self.now), result['creation_time']) self.assertNotIn('project', result) result = api.format_software_config(config, include_project=True) self.assertIsNotNone(result) self.assertEqual([{'name': 'bar'}], result['inputs']) self.assertEqual([{'name': 'result'}], result['outputs']) self.assertEqual({}, result['options']) self.assertEqual(heat_timeutils.isotime(self.now), result['creation_time']) self.assertIn('project', result)
def test_format_software_config_none(self): self.assertIsNone(api.format_software_config(None))
def test_format_software_deployment(self): deployment = self._dummy_software_deployment() result = api.format_software_deployment(deployment) self.assertIsNotNone(result) self.assertEqual(deployment.id, result['id']) self.assertEqual(deployment.config.id, result['config_id']) self.assertEqual(deployment.server_id, result['server_id']) self.assertEqual(deployment.input_values, result['input_values']) self.assertEqual(deployment.output_values, result['output_values']) self.assertEqual(deployment.action, result['action']) self.assertEqual(deployment.status, result['status']) self.assertEqual(deployment.status_reason, result['status_reason']) self.assertEqual(heat_timeutils.isotime(self.now), result['creation_time']) self.assertEqual(heat_timeutils.isotime(self.now), result['updated_time'])
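# Editor's note: an illustrative aside, not part of the original test file.
# The fixtures above fake config and deployment records with mock.Mock()
# plus plain attribute assignment, which is sufficient because the
# formatters only ever read attributes. The same pattern in miniature:
import uuid
import mock

_config = mock.Mock()
_config.id = str(uuid.uuid4())
_config.name = 'config_mysql'
assert _config.name == 'config_mysql'  # reads back like a real record

def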
test_format_software_deployment_none(self): self.assertIsNone(api.format_software_deployment(None)) class TestExtractArgs(common.HeatTestCase): def test_timeout_extract(self): p = {'timeout_mins': '5'} args = api.extract_args(p) self.assertEqual(5, args['timeout_mins']) def test_timeout_extract_zero(self): p = {'timeout_mins': '0'} args = api.extract_args(p) self.assertNotIn('timeout_mins', args) def test_timeout_extract_garbage(self): p = {'timeout_mins': 'wibble'} args = api.extract_args(p) self.assertNotIn('timeout_mins', args) def test_timeout_extract_none(self): p = {'timeout_mins': None} args = api.extract_args(p) self.assertNotIn('timeout_mins', args) def test_timeout_extract_negative(self): p = {'timeout_mins': '-100'} error = self.assertRaises(ValueError, api.extract_args, p) self.assertIn('Invalid timeout value', six.text_type(error)) def test_timeout_extract_not_present(self): args = api.extract_args({}) self.assertNotIn('timeout_mins', args) def test_adopt_stack_data_extract_present(self): p = {'adopt_stack_data': json.dumps({'Resources': {}})} args = api.extract_args(p) self.assertTrue(args.get('adopt_stack_data')) def test_invalid_adopt_stack_data(self): params = {'adopt_stack_data': json.dumps("foo")} exc = self.assertRaises(ValueError, api.extract_args, params) self.assertIn('Invalid adopt data', six.text_type(exc)) def test_adopt_stack_data_extract_not_present(self): args = api.extract_args({}) self.assertNotIn('adopt_stack_data', args) def test_disable_rollback_extract_true(self): args = api.extract_args({'disable_rollback': True}) self.assertIn('disable_rollback', args) self.assertTrue(args.get('disable_rollback')) args = api.extract_args({'disable_rollback': 'True'}) self.assertIn('disable_rollback', args) self.assertTrue(args.get('disable_rollback')) args = api.extract_args({'disable_rollback': 'true'}) self.assertIn('disable_rollback', args) self.assertTrue(args.get('disable_rollback')) def test_disable_rollback_extract_false(self): args = api.extract_args({'disable_rollback': False}) self.assertIn('disable_rollback', args) self.assertFalse(args.get('disable_rollback')) args = api.extract_args({'disable_rollback': 'False'}) self.assertIn('disable_rollback', args) self.assertFalse(args.get('disable_rollback')) args = api.extract_args({'disable_rollback': 'false'}) self.assertIn('disable_rollback', args) self.assertFalse(args.get('disable_rollback')) def test_disable_rollback_extract_bad(self): self.assertRaises(ValueError, api.extract_args, {'disable_rollback': 'bad'}) def test_tags_extract(self): p = {'tags': ["tag1", "tag2"]} args = api.extract_args(p) self.assertEqual(['tag1', 'tag2'], args['tags']) def test_tags_extract_not_present(self): args = api.extract_args({}) self.assertNotIn('tags', args) def test_tags_extract_not_map(self): p = {'tags': {"foo": "bar"}} exc = self.assertRaises(ValueError, api.extract_args, p) self.assertIn('Invalid tags, not a list: ', six.text_type(exc)) def test_tags_extract_not_string(self): p = {'tags': ["tag1", 2]} exc = self.assertRaises(ValueError, api.extract_args, p) self.assertIn('Invalid tag, "2" is not a string', six.text_type(exc)) def test_tags_extract_over_limit(self): p = {'tags': ["tag1", "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"]} exc = self.assertRaises(ValueError, api.extract_args, p) self.assertIn('Invalid tag, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" is longer ' 'than 80 characters', six.text_type(exc)) def 
test_tags_extract_comma(self): p = {'tags': ["tag1", 'tag2,']} exc = self.assertRaises(ValueError, api.extract_args, p) self.assertIn('Invalid tag, "tag2," contains a comma', six.text_type(exc)) class TranslateFilterTest(common.HeatTestCase): scenarios = [ ( 'single+single', dict(inputs={'stack_status': 'COMPLETE', 'status': 'FAILED'}, expected={'status': ['COMPLETE', 'FAILED']}) ), ( 'none+single', dict(inputs={'name': 'n1'}, expected={'name': 'n1'}) ), ( 'single+none', dict(inputs={'stack_name': 'n1'}, expected={'name': 'n1'}) ), ( 'none+list', dict(inputs={'action': ['a1', 'a2']}, expected={'action': ['a1', 'a2']}) ), ( 'list+none', dict(inputs={'stack_action': ['a1', 'a2']}, expected={'action': ['a1', 'a2']}) ), ( 'single+list', dict(inputs={'stack_owner': 'u1', 'username': ['u2', 'u3']}, expected={'username': ['u1', 'u2', 'u3']}) ), ( 'list+single', dict(inputs={'parent': ['s1', 's2'], 'owner_id': 's3'}, expected={'owner_id': ['s1', 's2', 's3']}) ), ( 'list+list', dict(inputs={'stack_name': ['n1', 'n2'], 'name': ['n3', 'n4']}, expected={'name': ['n1', 'n2', 'n3', 'n4']}) ), ( 'full_status_split', dict(inputs={'stack_status': 'CREATE_COMPLETE'}, expected={'action': 'CREATE', 'status': 'COMPLETE'}) ), ( 'full_status_split_merge', dict(inputs={'stack_status': 'CREATE_COMPLETE', 'status': 'CREATE_FAILED'}, expected={'action': 'CREATE', 'status': ['COMPLETE', 'FAILED']}) ), ( 'action_status_merge', dict(inputs={'action': ['UPDATE', 'CREATE'], 'status': 'CREATE_FAILED'}, expected={'action': ['CREATE', 'UPDATE'], 'status': 'FAILED'}) ) ] def test_stack_filter_translate(self): actual = api.translate_filters(self.inputs) self.assertEqual(self.expected, actual) class ParseStatusTest(common.HeatTestCase): scenarios = [ ( 'single_bogus', dict(inputs='bogus status', expected=(set(), set())) ), ( 'list_bogus', dict(inputs=['foo', 'bar'], expected=(set(), set())) ), ( 'single_partial', dict(inputs='COMPLETE', expected=(set(), set(['COMPLETE']))) ), ( 'multi_partial', dict(inputs=['FAILED', 'COMPLETE'], expected=(set(), set(['FAILED', 'COMPLETE']))) ), ( 'multi_partial_dup', dict(inputs=['FAILED', 'FAILED'], expected=(set(), set(['FAILED']))) ), ( 'single_full', dict(inputs=['DELETE_FAILED'], expected=(set(['DELETE']), set(['FAILED']))) ), ( 'multi_full', dict(inputs=['DELETE_FAILED', 'CREATE_COMPLETE'], expected=(set(['CREATE', 'DELETE']), set(['COMPLETE', 'FAILED']))) ), ( 'mix_bogus_partial', dict(inputs=['delete_failed', 'COMPLETE'], expected=(set(), set(['COMPLETE']))) ), ( 'mix_bogus_full', dict(inputs=['delete_failed', 'action_COMPLETE'], expected=(set(['action']), set(['COMPLETE']))) ), ( 'mix_bogus_full_incomplete', dict(inputs=['delete_failed', '_COMPLETE'], expected=(set(), set(['COMPLETE']))) ), ( 'mix_partial_full', dict(inputs=['FAILED', 'b_COMPLETE'], expected=(set(['b']), set(['COMPLETE', 'FAILED']))) ), ( 'mix_full_dup', dict(inputs=['a_FAILED', 'a_COMPLETE'], expected=(set(['a']), set(['COMPLETE', 'FAILED']))) ), ( 'mix_full_dup_2', dict(inputs=['a_FAILED', 'b_FAILED'], expected=(set(['a', 'b']), set(['FAILED']))) ) ] def test_stack_parse_status(self): actual = api._parse_object_status(self.inputs) self.assertEqual(self.expected, actual) heat-10.0.2/heat/tests/test_exception.py0000666000175000017500000001677313343562340020252 0ustar zuulzuul00000000000000# # Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
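# Editor's note: a hedged sketch of the split rule the TranslateFilter and
# ParseStatus scenarios above encode: a combined value such as
# 'CREATE_COMPLETE' splits on its last underscore into an action and a
# status (the real parser additionally rejects unknown tokens, which the
# 'bogus' scenarios cover; this sketch skips that validation).
def split_status_sketch(value):
    if '_' in value:
        action, status = value.rsplit('_', 1)
        return (action or None), status
    return None, value

assert split_status_sketch('CREATE_COMPLETE') == ('CREATE', 'COMPLETE')
assert split_status_sketch('COMPLETE') == (None, 'COMPLETE')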
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock import six from heat.common import exception from heat.common.i18n import _ from heat.tests import common class TestException(exception.HeatException): msg_fmt = _("Testing message %(text)s") class TestHeatException(common.HeatTestCase): def test_fatal_exception_error(self): self.useFixture(fixtures.MonkeyPatch( 'heat.common.exception._FATAL_EXCEPTION_FORMAT_ERRORS', True)) self.assertRaises(KeyError, TestException) def test_format_string_error_message(self): message = "This format %(message)s should work" err = exception.Error(message) self.assertEqual(message, six.text_type(err)) class TestStackValidationFailed(common.HeatTestCase): scenarios = [ ('test_error_as_exception', dict( kwargs=dict( error=exception.StackValidationFailed( error='Error', path=['some', 'path'], message='Some message')), expected='Error: some.path: Some message', called_error='Error', called_path=['some', 'path'], called_msg='Some message' )), ('test_full_exception', dict( kwargs=dict( error='Error', path=['some', 'path'], message='Some message'), expected='Error: some.path: Some message', called_error='Error', called_path=['some', 'path'], called_msg='Some message' )), ('test_no_error_exception', dict( kwargs=dict( path=['some', 'path'], message='Chain letter'), expected='some.path: Chain letter', called_error='', called_path=['some', 'path'], called_msg='Chain letter' )), ('test_no_path_exception', dict( kwargs=dict( error='Error', message='Just no.'), expected='Error: Just no.', called_error='Error', called_path=[], called_msg='Just no.' )), ('test_no_msg_exception', dict( kwargs=dict( error='Error', path=['we', 'lost', 'our', 'message']), expected='Error: we.lost.our.message: ', called_error='Error', called_path=['we', 'lost', 'our', 'message'], called_msg='' )), ('test_old_format_exception', dict( kwargs=dict( message='Wow. I think I am old error message format.' ), expected='Wow. I think I am old error message format.', called_error='', called_path=[], called_msg='Wow. I think I am old error message format.' 
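# Editor's note: the scenarios above pin down how a validation-error path is
# rendered: string segments join with '.', while integer (or digit-string)
# segments render as '[N]' indexes. A hedged sketch of that join rule
# (hypothetical helper, not Heat's implementation):
def format_path_sketch(path):
    out = ''
    for seg in path:
        if isinstance(seg, int) or (isinstance(seg, str) and seg.isdigit()):
            out += '[%s]' % seg
        else:
            out = '%s.%s' % (out, seg) if out else str(seg)
    return out

assert format_path_sketch(['some', 'path']) == 'some.path'
assert format_path_sketch(['null', 0]) == 'null[0]'
assert format_path_sketch(['null', '0']) == 'null[0]'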
)), ('test_int_path_item_exception', dict( kwargs=dict( path=['null', 0] ), expected='null[0]: ', called_error='', called_path=['null', 0], called_msg='' )), ('test_digit_path_item_exception', dict( kwargs=dict( path=['null', '0'] ), expected='null[0]: ', called_error='', called_path=['null', '0'], called_msg='' )), ('test_string_path_exception', dict( kwargs=dict( path='null[0].not_null' ), expected='null[0].not_null: ', called_error='', called_path=['null[0].not_null'], called_msg='' )) ] def test_exception(self): try: raise exception.StackValidationFailed(**self.kwargs) except exception.StackValidationFailed as ex: self.assertIn(self.expected, six.text_type(ex)) self.assertIn(self.called_error, ex.error) self.assertEqual(self.called_path, ex.path) self.assertEqual(self.called_msg, ex.error_message) class TestResourceFailure(common.HeatTestCase): def test_status_reason_resource(self): reason = ('Resource CREATE failed: ValueError: resources.oops: ' 'Test Resource failed oops') exc = exception.ResourceFailure(reason, None, action='CREATE') self.assertEqual('ValueError', exc.error) self.assertEqual(['resources', 'oops'], exc.path) self.assertEqual('Test Resource failed oops', exc.error_message) def test_status_reason_general(self): reason = ('something strange happened') exc = exception.ResourceFailure(reason, None, action='CREATE') self.assertEqual('', exc.error) self.assertEqual([], exc.path) self.assertEqual('something strange happened', exc.error_message) def test_status_reason_general_res(self): res = mock.Mock() res.name = 'fred' res.stack.t.get_section_name.return_value = 'Resources' reason = ('something strange happened') exc = exception.ResourceFailure(reason, res, action='CREATE') self.assertEqual('', exc.error) self.assertEqual(['Resources', 'fred'], exc.path) self.assertEqual('something strange happened', exc.error_message) def test_std_exception(self): base_exc = ValueError('sorry mom') exc = exception.ResourceFailure(base_exc, None, action='UPDATE') self.assertEqual('ValueError', exc.error) self.assertEqual([], exc.path) self.assertEqual('sorry mom', exc.error_message) def test_std_exception_with_resource(self): base_exc = ValueError('sorry mom') res = mock.Mock() res.name = 'fred' res.stack.t.get_section_name.return_value = 'Resources' exc = exception.ResourceFailure(base_exc, res, action='UPDATE') self.assertEqual('ValueError', exc.error) self.assertEqual(['Resources', 'fred'], exc.path) self.assertEqual('sorry mom', exc.error_message) def test_heat_exception(self): base_exc = ValueError('sorry mom') heat_exc = exception.ResourceFailure(base_exc, None, action='UPDATE') exc = exception.ResourceFailure(heat_exc, None, action='UPDATE') self.assertEqual('ValueError', exc.error) self.assertEqual([], exc.path) self.assertEqual('sorry mom', exc.error_message) def test_nested_exceptions(self): res = mock.Mock() res.name = 'frodo' res.stack.t.get_section_name.return_value = 'Resources' reason = ('Resource UPDATE failed: ValueError: resources.oops: ' 'Test Resource failed oops') base_exc = exception.ResourceFailure(reason, res, action='UPDATE') exc = exception.ResourceFailure(base_exc, res, action='UPDATE') self.assertEqual(['Resources', 'frodo', 'resources', 'oops'], exc.path) self.assertEqual('ValueError', exc.error) self.assertEqual('Test Resource failed oops', exc.error_message) heat-10.0.2/heat/tests/api/0000775000175000017500000000000013343562672015404 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/api/middleware/0000775000175000017500000000000013343562672017521 5ustar 
zuulzuul00000000000000heat-10.0.2/heat/tests/api/middleware/test_version_negotiation_middleware.py0000666000175000017500000001300013343562340027400 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob from heat.api.middleware import version_negotiation as vn from heat.tests import common class VersionController(object): pass class VersionNegotiationMiddlewareTest(common.HeatTestCase): def _version_controller_factory(self, conf): return VersionController() def test_match_version_string(self): version_negotiation = vn.VersionNegotiationFilter( self._version_controller_factory, None, None) request = webob.Request({}) major_version = 1 minor_version = 0 match = version_negotiation._match_version_string( 'v{0}.{1}'.format(major_version, minor_version), request) self.assertTrue(match) self.assertEqual(major_version, request.environ['api.major_version']) self.assertEqual(minor_version, request.environ['api.minor_version']) def test_not_match_version_string(self): version_negotiation = vn.VersionNegotiationFilter( self._version_controller_factory, None, None) request = webob.Request({}) match = version_negotiation._match_version_string("invalid", request) self.assertFalse(match) def test_return_version_controller_when_request_path_is_version(self): version_negotiation = vn.VersionNegotiationFilter( self._version_controller_factory, None, None) request = webob.Request({'PATH_INFO': 'versions'}) response = version_negotiation.process_request(request) self.assertIsInstance(response, VersionController) def test_return_version_controller_when_request_path_is_empty(self): version_negotiation = vn.VersionNegotiationFilter( self._version_controller_factory, None, None) request = webob.Request({'PATH_INFO': '/'}) response = version_negotiation.process_request(request) self.assertIsInstance(response, VersionController) def test_request_path_contains_valid_version(self): version_negotiation = vn.VersionNegotiationFilter( self._version_controller_factory, None, None) major_version = 1 minor_version = 0 request = webob.Request({'PATH_INFO': 'v{0}.{1}/resource'.format(major_version, minor_version)}) response = version_negotiation.process_request(request) self.assertIsNone(response) self.assertEqual(major_version, request.environ['api.major_version']) self.assertEqual(minor_version, request.environ['api.minor_version']) def test_removes_version_from_request_path(self): version_negotiation = vn.VersionNegotiationFilter( self._version_controller_factory, None, None) expected_path = 'resource' request = webob.Request({'PATH_INFO': 'v1.0/{0}'.format(expected_path) }) response = version_negotiation.process_request(request) self.assertIsNone(response) self.assertEqual(expected_path, request.path_info_peek()) def test_request_path_contains_unknown_version(self): version_negotiation = vn.VersionNegotiationFilter( self._version_controller_factory, None, None) request = webob.Request({'PATH_INFO': 'v2.0/resource'}) response = version_negotiation.process_request(request) self.assertIsInstance(response, 
VersionController) def test_accept_header_contains_valid_version(self): version_negotiation = vn.VersionNegotiationFilter( self._version_controller_factory, None, None) major_version = 1 minor_version = 0 request = webob.Request({'PATH_INFO': 'resource'}) request.headers['Accept'] = ( 'application/vnd.openstack.orchestration-v{0}.{1}'.format( major_version, minor_version)) response = version_negotiation.process_request(request) self.assertIsNone(response) self.assertEqual(major_version, request.environ['api.major_version']) self.assertEqual(minor_version, request.environ['api.minor_version']) def test_accept_header_contains_unknown_version(self): version_negotiation = vn.VersionNegotiationFilter( self._version_controller_factory, None, None) request = webob.Request({'PATH_INFO': 'resource'}) request.headers['Accept'] = ( 'application/vnd.openstack.orchestration-v2.0') response = version_negotiation.process_request(request) self.assertIsInstance(response, VersionController) def test_no_URI_version_accept_header_contains_invalid_MIME_type(self): version_negotiation = vn.VersionNegotiationFilter( self._version_controller_factory, None, None) request = webob.Request({'PATH_INFO': 'resource'}) request.headers['Accept'] = 'application/invalidMIMEType' response = version_negotiation.process_request(request) self.assertIsInstance(response, webob.exc.HTTPNotFound) heat-10.0.2/heat/tests/api/middleware/__init__.py0000666000175000017500000000000013343562340021612 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/api/cfn/0000775000175000017500000000000013343562672016152 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/api/cfn/__init__.py0000666000175000017500000000000013343562340020243 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/api/cfn/test_api_cfn_v1.py0000666000175000017500000021625113343562351021573 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import os import mock from oslo_config import fixture as config_fixture import six from heat.api.aws import exception import heat.api.cfn.v1.stacks as stacks from heat.common import exception as heat_exception from heat.common import identifier from heat.common import policy from heat.common import wsgi from heat.rpc import api as rpc_api from heat.rpc import client as rpc_client from heat.tests import common from heat.tests import utils policy_path = os.path.dirname(os.path.realpath(__file__)) + "/../../policy/" class CfnStackControllerTest(common.HeatTestCase): """Tests the API class CfnStackController. 
Tests the API class which acts as the WSGI controller, the endpoint processing API requests after they are routed """ def setUp(self): super(CfnStackControllerTest, self).setUp() self.fixture = self.useFixture(config_fixture.Config()) self.fixture.conf(args=['--config-dir', policy_path]) self.topic = rpc_api.ENGINE_TOPIC self.api_version = '1.0' self.template = {u'AWSTemplateFormatVersion': u'2010-09-09', u'Foo': u'bar'} # Create WSGI controller instance class DummyConfig(object): bind_port = 8000 cfgopts = DummyConfig() self.controller = stacks.StackController(options=cfgopts) self.controller.policy.enforcer.policy_path = (policy_path + 'deny_stack_user.json') self.addCleanup(self.m.VerifyAll) def test_default(self): self.assertRaises( exception.HeatInvalidActionError, self.controller.default, None) def _dummy_GET_request(self, params=None): # Mangle the params dict into a query string params = params or {} qs = "&".join(["=".join([k, str(params[k])]) for k in params]) environ = {'REQUEST_METHOD': 'GET', 'QUERY_STRING': qs} req = wsgi.Request(environ) req.context = utils.dummy_context() return req def _stub_enforce(self, req, action, allowed=True): mock_enforce = self.patchobject(policy.Enforcer, 'enforce') if allowed: mock_enforce.return_value = True else: mock_enforce.side_effect = heat_exception.Forbidden # The tests def test_stackid_addprefix(self): self.m.ReplayAll() response = self.controller._id_format({ 'StackName': 'Foo', 'StackId': { u'tenant': u't', u'stack_name': u'Foo', u'stack_id': u'123', u'path': u'' } }) expected = {'StackName': 'Foo', 'StackId': 'arn:openstack:heat::t:stacks/Foo/123'} self.assertEqual(expected, response) def test_enforce_ok(self): params = {'Action': 'ListStacks'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'ListStacks') response = self.controller._enforce(dummy_req, 'ListStacks') self.assertIsNone(response) def test_enforce_denied(self): self.m.ReplayAll() params = {'Action': 'ListStacks'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'ListStacks', False) self.assertRaises(exception.HeatAccessDeniedError, self.controller._enforce, dummy_req, 'ListStacks') def test_enforce_ise(self): params = {'Action': 'ListStacks'} dummy_req = self._dummy_GET_request(params) dummy_req.context.roles = ['heat_stack_user'] mock_enforce = self.patchobject(policy.Enforcer, 'enforce') mock_enforce.side_effect = AttributeError self.assertRaises(exception.HeatInternalFailureError, self.controller._enforce, dummy_req, 'ListStacks') @mock.patch.object(rpc_client.EngineClient, 'call') def test_list(self, mock_call): # Format a dummy GET request to pass into the WSGI handler params = {'Action': 'ListStacks'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'ListStacks') # Stub out the RPC call to the engine with a pre-canned response engine_resp = [{u'stack_identity': {u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'1', u'path': u''}, u'updated_time': u'2012-07-09T09:13:11Z', u'template_description': u'blah', u'stack_status_reason': u'Stack successfully created', u'creation_time': u'2012-07-09T09:12:45Z', u'stack_name': u'wordpress', u'stack_action': u'CREATE', u'stack_status': u'COMPLETE'}] mock_call.return_value = engine_resp # Call the list controller function and compare the response result = self.controller.list(dummy_req) expected = {'ListStacksResponse': {'ListStacksResult': {'StackSummaries': [{u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1', u'LastUpdatedTime': 
u'2012-07-09T09:13:11Z', u'TemplateDescription': u'blah', u'StackStatusReason': u'Stack successfully created', u'CreationTime': u'2012-07-09T09:12:45Z', u'StackName': u'wordpress', u'StackStatus': u'CREATE_COMPLETE'}]}}} self.assertEqual(expected, result) default_args = {'limit': None, 'sort_keys': None, 'marker': None, 'sort_dir': None, 'filters': None, 'show_deleted': False, 'show_nested': False, 'show_hidden': False, 'tags': None, 'tags_any': None, 'not_tags': None, 'not_tags_any': None} mock_call.assert_called_once_with( dummy_req.context, ('list_stacks', default_args), version='1.33') @mock.patch.object(rpc_client.EngineClient, 'call') def test_list_rmt_aterr(self, mock_call): params = {'Action': 'ListStacks'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'ListStacks') # Insert an engine RPC error and ensure we map correctly to the # heat exception type mock_call.side_effect = AttributeError # Call the list controller function and compare the response result = self.controller.list(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) mock_call.assert_called_once_with( dummy_req.context, ('list_stacks', mock.ANY), version='1.33') @mock.patch.object(rpc_client.EngineClient, 'call') def test_list_rmt_interr(self, mock_call): params = {'Action': 'ListStacks'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'ListStacks') # Insert an engine RPC error and ensure we map correctly to the # heat exception type mock_call.side_effect = Exception() # Call the list controller function and compare the response result = self.controller.list(dummy_req) self.assertIsInstance(result, exception.HeatInternalFailureError) mock_call.assert_called_once_with( dummy_req.context, ('list_stacks', mock.ANY), version='1.33') def test_describe_last_updated_time(self): params = {'Action': 'DescribeStacks'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStacks') engine_resp = [{u'updated_time': '1970-01-01', u'parameters': {}, u'stack_action': u'CREATE', u'stack_status': u'COMPLETE'}] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('show_stack', {'stack_identity': None, 'resolve_outputs': True}), version='1.20' ).AndReturn(engine_resp) self.m.ReplayAll() response = self.controller.describe(dummy_req) result = response['DescribeStacksResponse']['DescribeStacksResult'] stack = result['Stacks'][0] self.assertEqual('1970-01-01', stack['LastUpdatedTime']) def test_describe_no_last_updated_time(self): params = {'Action': 'DescribeStacks'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStacks') engine_resp = [{u'updated_time': None, u'parameters': {}, u'stack_action': u'CREATE', u'stack_status': u'COMPLETE'}] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('show_stack', {'stack_identity': None, 'resolve_outputs': True}), version='1.20' ).AndReturn(engine_resp) self.m.ReplayAll() response = self.controller.describe(dummy_req) result = response['DescribeStacksResponse']['DescribeStacksResult'] stack = result['Stacks'][0] self.assertNotIn('LastUpdatedTime', stack) def test_describe(self): # Format a dummy GET request to pass into the WSGI handler stack_name = u"wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '6')) params = {'Action': 'DescribeStacks', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) 
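# Editor's note: the expected ListStacks body above shows how the CFN API
# joins the engine's separate action and status fields into one AWS-style
# StackStatus value; the join itself is plain string formatting:
_engine_stack = {'stack_action': u'CREATE', 'stack_status': u'COMPLETE'}
assert ('%(stack_action)s_%(stack_status)s' % _engine_stack
        == 'CREATE_COMPLETE')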
self._stub_enforce(dummy_req, 'DescribeStacks') # Stub out the RPC call to the engine with a pre-canned response # Note the engine returns a load of keys we don't actually use # so this is a subset of the real response format engine_resp = [{u'stack_identity': {u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u''}, u'updated_time': u'2012-07-09T09:13:11Z', u'parameters': {u'DBUsername': u'admin', u'LinuxDistribution': u'F17', u'InstanceType': u'm1.large', u'DBRootPassword': u'admin', u'DBPassword': u'admin', u'DBName': u'wordpress'}, u'outputs': [{u'output_key': u'WebsiteURL', u'description': u'URL for Wordpress wiki', u'output_value': u'http://10.0.0.8/wordpress'}], u'stack_status_reason': u'Stack successfully created', u'creation_time': u'2012-07-09T09:12:45Z', u'stack_name': u'wordpress', u'notification_topics': [], u'stack_action': u'CREATE', u'stack_status': u'COMPLETE', u'description': u'blah', u'disable_rollback': 'true', u'timeout_mins':60, u'capabilities':[]}] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) rpc_client.EngineClient.call( dummy_req.context, ('show_stack', {'stack_identity': identity, 'resolve_outputs': True}), version='1.20' ).AndReturn(engine_resp) self.m.ReplayAll() # Call the list controller function and compare the response response = self.controller.describe(dummy_req) expected = {'DescribeStacksResponse': {'DescribeStacksResult': {'Stacks': [{'StackId': u'arn:openstack:heat::t:stacks/wordpress/6', 'StackStatusReason': u'Stack successfully created', 'Description': u'blah', 'Parameters': [{'ParameterValue': u'wordpress', 'ParameterKey': u'DBName'}, {'ParameterValue': u'admin', 'ParameterKey': u'DBPassword'}, {'ParameterValue': u'admin', 'ParameterKey': u'DBRootPassword'}, {'ParameterValue': u'admin', 'ParameterKey': u'DBUsername'}, {'ParameterValue': u'm1.large', 'ParameterKey': u'InstanceType'}, {'ParameterValue': u'F17', 'ParameterKey': u'LinuxDistribution'}], 'Outputs': [{'OutputKey': u'WebsiteURL', 'OutputValue': u'http://10.0.0.8/wordpress', 'Description': u'URL for Wordpress wiki'}], 'TimeoutInMinutes': 60, 'CreationTime': u'2012-07-09T09:12:45Z', 'Capabilities': [], 'StackName': u'wordpress', 'NotificationARNs': [], 'StackStatus': u'CREATE_COMPLETE', 'DisableRollback': 'true', 'LastUpdatedTime': u'2012-07-09T09:13:11Z'}]}}} stacks = (response['DescribeStacksResponse']['DescribeStacksResult'] ['Stacks']) stacks[0]['Parameters'] = sorted( stacks[0]['Parameters'], key=lambda k: k['ParameterKey']) response['DescribeStacksResponse']['DescribeStacksResult'] = ( {'Stacks': stacks}) self.assertEqual(expected, response) def test_describe_arn(self): # Format a dummy GET request to pass into the WSGI handler stack_name = u"wordpress" stack_identifier = identifier.HeatIdentifier('t', stack_name, '6') identity = dict(stack_identifier) params = {'Action': 'DescribeStacks', 'StackName': stack_identifier.arn()} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStacks') # Stub out the RPC call to the engine with a pre-canned response # Note the engine returns a load of keys we don't actually use # so this is a subset of the real response format engine_resp = [{u'stack_identity': {u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u''}, u'updated_time': u'2012-07-09T09:13:11Z', u'parameters': {u'DBUsername': u'admin', u'LinuxDistribution': u'F17', u'InstanceType': 
u'm1.large', u'DBRootPassword': u'admin', u'DBPassword': u'admin', u'DBName': u'wordpress'}, u'outputs': [{u'output_key': u'WebsiteURL', u'description': u'URL for Wordpress wiki', u'output_value': u'http://10.0.0.8/wordpress'}], u'stack_status_reason': u'Stack successfully created', u'creation_time': u'2012-07-09T09:12:45Z', u'stack_name': u'wordpress', u'notification_topics': [], u'stack_action': u'CREATE', u'stack_status': u'COMPLETE', u'description': u'blah', u'disable_rollback': 'true', u'timeout_mins':60, u'capabilities':[]}] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('show_stack', {'stack_identity': identity, 'resolve_outputs': True}), version='1.20' ).AndReturn(engine_resp) self.m.ReplayAll() # Call the list controller function and compare the response response = self.controller.describe(dummy_req) expected = {'DescribeStacksResponse': {'DescribeStacksResult': {'Stacks': [{'StackId': u'arn:openstack:heat::t:stacks/wordpress/6', 'StackStatusReason': u'Stack successfully created', 'Description': u'blah', 'Parameters': [{'ParameterValue': u'wordpress', 'ParameterKey': u'DBName'}, {'ParameterValue': u'admin', 'ParameterKey': u'DBPassword'}, {'ParameterValue': u'admin', 'ParameterKey': u'DBRootPassword'}, {'ParameterValue': u'admin', 'ParameterKey': u'DBUsername'}, {'ParameterValue': u'm1.large', 'ParameterKey': u'InstanceType'}, {'ParameterValue': u'F17', 'ParameterKey': u'LinuxDistribution'}], 'Outputs': [{'OutputKey': u'WebsiteURL', 'OutputValue': u'http://10.0.0.8/wordpress', 'Description': u'URL for Wordpress wiki'}], 'TimeoutInMinutes': 60, 'CreationTime': u'2012-07-09T09:12:45Z', 'Capabilities': [], 'StackName': u'wordpress', 'NotificationARNs': [], 'StackStatus': u'CREATE_COMPLETE', 'DisableRollback': 'true', 'LastUpdatedTime': u'2012-07-09T09:13:11Z'}]}}} stacks = (response['DescribeStacksResponse']['DescribeStacksResult'] ['Stacks']) stacks[0]['Parameters'] = sorted( stacks[0]['Parameters'], key=lambda k: k['ParameterKey']) response['DescribeStacksResponse']['DescribeStacksResult'] = ( {'Stacks': stacks}) self.assertEqual(expected, response) def test_describe_arn_invalidtenant(self): # Format a dummy GET request to pass into the WSGI handler stack_name = u"wordpress" stack_identifier = identifier.HeatIdentifier('wibble', stack_name, '6') identity = dict(stack_identifier) params = {'Action': 'DescribeStacks', 'StackName': stack_identifier.arn()} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStacks') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('show_stack', {'stack_identity': identity, 'resolve_outputs': True},), version='1.20' ).AndRaise(heat_exception.InvalidTenant(target='test', actual='test')) self.m.ReplayAll() result = self.controller.describe(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_describe_aterr(self): stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '6')) params = {'Action': 'DescribeStacks', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStacks') # Insert an engine RPC error and ensure we map correctly to the # heat exception type self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) rpc_client.EngineClient.call( dummy_req.context, ('show_stack', 
{'stack_identity': identity, 'resolve_outputs': True}), version='1.20' ).AndRaise(AttributeError()) self.m.ReplayAll() result = self.controller.describe(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_describe_bad_name(self): stack_name = "wibble" params = {'Action': 'DescribeStacks', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStacks') # Insert an engine RPC error and ensure we map correctly to the # heat exception type self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndRaise(heat_exception.EntityNotFound(entity='Stack', name='test')) self.m.ReplayAll() result = self.controller.describe(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_get_template_int_body(self): """Test the internal _get_template function.""" params = {'TemplateBody': "abcdef"} dummy_req = self._dummy_GET_request(params) result = self.controller._get_template(dummy_req) expected = "abcdef" self.assertEqual(expected, result) # TODO(shardy) : test the _get_template TemplateUrl case def _stub_rpc_create_stack_call_failure(self, req_context, stack_name, engine_parms, engine_args, failure, need_stub=True): if need_stub: mock_enforce = self.patchobject(policy.Enforcer, 'enforce') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') mock_enforce.return_value = True # Insert an engine RPC error and ensure we map correctly to the # heat exception type rpc_client.EngineClient.call( req_context, ('create_stack', {'stack_name': stack_name, 'template': self.template, 'params': engine_parms, 'files': {}, 'environment_files': None, 'args': engine_args, 'owner_id': None, 'nested_depth': 0, 'user_creds_id': None, 'parent_resource_name': None, 'stack_user_project_id': None, 'template_id': None}), version='1.29' ).AndRaise(failure) def _stub_rpc_create_stack_call_success(self, stack_name, engine_parms, engine_args, parameters): dummy_req = self._dummy_GET_request(parameters) self._stub_enforce(dummy_req, 'CreateStack') # Stub out the RPC call to the engine with a pre-canned response engine_resp = {u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'1', u'path': u''} self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('create_stack', {'stack_name': stack_name, 'template': self.template, 'params': engine_parms, 'files': {}, 'environment_files': None, 'args': engine_args, 'owner_id': None, 'nested_depth': 0, 'user_creds_id': None, 'parent_resource_name': None, 'stack_user_project_id': None, 'template_id': None}), version='1.29' ).AndReturn(engine_resp) self.m.ReplayAll() return dummy_req def test_create(self): # Format a dummy request stack_name = "wordpress" json_template = json.dumps(self.template) params = {'Action': 'CreateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template, 'TimeoutInMinutes': 30, 'DisableRollback': 'true', 'Parameters.member.1.ParameterKey': 'InstanceType', 'Parameters.member.1.ParameterValue': 'm1.xlarge'} engine_parms = {u'InstanceType': u'm1.xlarge'} engine_args = {'timeout_mins': u'30', 'disable_rollback': 'true'} dummy_req = self._stub_rpc_create_stack_call_success(stack_name, engine_parms, engine_args, params) response = self.controller.create(dummy_req) expected = { 'CreateStackResponse': { 'CreateStackResult': { u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1' } } } 
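# Aside (illustrative, not part of the original tests): the
# 'Parameters.member.N.ParameterKey/ParameterValue' pairs used in the
# create tests follow the AWS query convention. A minimal standalone
# sketch of collapsing them into the flat engine_parms dict; the name
# _sketch_collect_aws_params and its shape are hypothetical, not
# Heat's real helper:
import re

_AWS_PARAM = re.compile(
    r'^Parameters\.member\.(\d+)\.Parameter(Key|Value)$')

def _sketch_collect_aws_params(query):
    # Pair up ...ParameterKey and ...ParameterValue entries by index.
    keys, vals = {}, {}
    for name, value in query.items():
        m = _AWS_PARAM.match(name)
        if m:
            idx, kind = m.groups()
            (keys if kind == 'Key' else vals)[idx] = value
    return {keys[i]: vals.get(i) for i in keys}

# _sketch_collect_aws_params(
#     {'Parameters.member.1.ParameterKey': 'InstanceType',
#      'Parameters.member.1.ParameterValue': 'm1.xlarge'})
# => {'InstanceType': 'm1.xlarge'}, i.e. the engine_parms above.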
self.assertEqual(expected, response) def test_create_rollback(self): # Format a dummy request stack_name = "wordpress" json_template = json.dumps(self.template) params = {'Action': 'CreateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template, 'TimeoutInMinutes': 30, 'DisableRollback': 'false', 'Parameters.member.1.ParameterKey': 'InstanceType', 'Parameters.member.1.ParameterValue': 'm1.xlarge'} engine_parms = {u'InstanceType': u'm1.xlarge'} engine_args = {'timeout_mins': u'30', 'disable_rollback': 'false'} dummy_req = self._stub_rpc_create_stack_call_success(stack_name, engine_parms, engine_args, params) response = self.controller.create(dummy_req) expected = { 'CreateStackResponse': { 'CreateStackResult': { u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1' } } } self.assertEqual(expected, response) def test_create_onfailure_true(self): # Format a dummy request stack_name = "wordpress" json_template = json.dumps(self.template) params = {'Action': 'CreateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template, 'TimeoutInMinutes': 30, 'OnFailure': 'DO_NOTHING', 'Parameters.member.1.ParameterKey': 'InstanceType', 'Parameters.member.1.ParameterValue': 'm1.xlarge'} engine_parms = {u'InstanceType': u'm1.xlarge'} engine_args = {'timeout_mins': u'30', 'disable_rollback': 'true'} dummy_req = self._stub_rpc_create_stack_call_success(stack_name, engine_parms, engine_args, params) response = self.controller.create(dummy_req) expected = { 'CreateStackResponse': { 'CreateStackResult': { u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1' } } } self.assertEqual(expected, response) def test_create_onfailure_false_delete(self): # Format a dummy request stack_name = "wordpress" json_template = json.dumps(self.template) params = {'Action': 'CreateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template, 'TimeoutInMinutes': 30, 'OnFailure': 'DELETE', 'Parameters.member.1.ParameterKey': 'InstanceType', 'Parameters.member.1.ParameterValue': 'm1.xlarge'} engine_parms = {u'InstanceType': u'm1.xlarge'} engine_args = {'timeout_mins': u'30', 'disable_rollback': 'false'} dummy_req = self._stub_rpc_create_stack_call_success(stack_name, engine_parms, engine_args, params) response = self.controller.create(dummy_req) expected = { 'CreateStackResponse': { 'CreateStackResult': { u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1' } } } self.assertEqual(expected, response) def test_create_onfailure_false_rollback(self): # Format a dummy request stack_name = "wordpress" json_template = json.dumps(self.template) params = {'Action': 'CreateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template, 'TimeoutInMinutes': 30, 'OnFailure': 'ROLLBACK', 'Parameters.member.1.ParameterKey': 'InstanceType', 'Parameters.member.1.ParameterValue': 'm1.xlarge'} engine_parms = {u'InstanceType': u'm1.xlarge'} engine_args = {'timeout_mins': u'30', 'disable_rollback': 'false'} dummy_req = self._stub_rpc_create_stack_call_success(stack_name, engine_parms, engine_args, params) response = self.controller.create(dummy_req) expected = { 'CreateStackResponse': { 'CreateStackResult': { u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1' } } } self.assertEqual(expected, response) def test_create_onfailure_err(self): # Format a dummy request stack_name = "wordpress" json_template = json.dumps(self.template) params = {'Action': 'CreateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template, 'TimeoutInMinutes': 30, 'DisableRollback': 'true', 'OnFailure': 'DO_NOTHING', 
'Parameters.member.1.ParameterKey': 'InstanceType', 'Parameters.member.1.ParameterValue': 'm1.xlarge'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'CreateStack') self.assertRaises(exception.HeatInvalidParameterCombinationError, self.controller.create, dummy_req) def test_create_err_no_template(self): # Format a dummy request with a missing template field stack_name = "wordpress" params = {'Action': 'CreateStack', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'CreateStack') result = self.controller.create(dummy_req) self.assertIsInstance(result, exception.HeatMissingParameterError) def test_create_err_inval_template(self): # Format a dummy request with an invalid TemplateBody stack_name = "wordpress" json_template = "!$%**_+}@~?" params = {'Action': 'CreateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'CreateStack') result = self.controller.create(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_create_err_rpcerr(self): # Format a dummy request stack_name = "wordpress" json_template = json.dumps(self.template) params = {'Action': 'CreateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template, 'TimeoutInMinutes': 30, 'Parameters.member.1.ParameterKey': 'InstanceType', 'Parameters.member.1.ParameterValue': 'm1.xlarge'} engine_parms = {u'InstanceType': u'm1.xlarge'} engine_args = {'timeout_mins': u'30'} dummy_req = self._dummy_GET_request(params) self._stub_rpc_create_stack_call_failure(dummy_req.context, stack_name, engine_parms, engine_args, AttributeError()) failure = heat_exception.UnknownUserParameter(key='test') self._stub_rpc_create_stack_call_failure(dummy_req.context, stack_name, engine_parms, engine_args, failure, False) failure = heat_exception.UserParameterMissing(key='test') self._stub_rpc_create_stack_call_failure(dummy_req.context, stack_name, engine_parms, engine_args, failure, False) self.m.ReplayAll() result = self.controller.create(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) result = self.controller.create(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) result = self.controller.create(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_create_err_exists(self): # Format a dummy request stack_name = "wordpress" json_template = json.dumps(self.template) params = {'Action': 'CreateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template, 'TimeoutInMinutes': 30, 'Parameters.member.1.ParameterKey': 'InstanceType', 'Parameters.member.1.ParameterValue': 'm1.xlarge'} engine_parms = {u'InstanceType': u'm1.xlarge'} engine_args = {'timeout_mins': u'30'} failure = heat_exception.StackExists(stack_name='test') dummy_req = self._dummy_GET_request(params) self._stub_rpc_create_stack_call_failure(dummy_req.context, stack_name, engine_parms, engine_args, failure) self.m.ReplayAll() result = self.controller.create(dummy_req) self.assertIsInstance(result, exception.AlreadyExistsError) def test_create_err_engine(self): # Format a dummy request stack_name = "wordpress" json_template = json.dumps(self.template) params = {'Action': 'CreateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template, 'TimeoutInMinutes': 30, 'Parameters.member.1.ParameterKey': 'InstanceType', 'Parameters.member.1.ParameterValue': 'm1.xlarge'} 
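# Aside (illustrative only): the *_err tests in this group all pin the
# same rule -- remote engine failures surface as CFN fault objects
# rather than raw exceptions. The function below is a hypothetical
# summary of the mapping these tests collectively assert, not Heat's
# actual dispatch code:
def _sketch_map_engine_error(error):
    name = error.__class__.__name__
    if name == 'StackExists':
        return exception.AlreadyExistsError
    if name in ('AttributeError', 'EntityNotFound', 'InvalidTenant',
                'UnknownUserParameter', 'UserParameterMissing',
                'StackValidationFailed'):
        return exception.HeatInvalidParameterValueError
    # Anything unrecognised is reported as an internal failure.
    return exception.HeatInternalFailureError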
engine_parms = {u'InstanceType': u'm1.xlarge'} engine_args = {'timeout_mins': u'30'} failure = heat_exception.StackValidationFailed( message='Something went wrong') dummy_req = self._dummy_GET_request(params) self._stub_rpc_create_stack_call_failure(dummy_req.context, stack_name, engine_parms, engine_args, failure) self.m.ReplayAll() result = self.controller.create(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_update(self): # Format a dummy request stack_name = "wordpress" json_template = json.dumps(self.template) params = {'Action': 'UpdateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template, 'Parameters.member.1.ParameterKey': 'InstanceType', 'Parameters.member.1.ParameterValue': 'm1.xlarge'} engine_parms = {u'InstanceType': u'm1.xlarge'} engine_args = {} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'UpdateStack') # Stub out the RPC call to the engine with a pre-canned response identity = dict(identifier.HeatIdentifier('t', stack_name, '1')) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) rpc_client.EngineClient.call( dummy_req.context, ('update_stack', {'stack_identity': identity, 'template': self.template, 'params': engine_parms, 'files': {}, 'environment_files': None, 'args': engine_args, 'template_id': None}), version='1.29' ).AndReturn(identity) self.m.ReplayAll() response = self.controller.update(dummy_req) expected = { 'UpdateStackResponse': { 'UpdateStackResult': { u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1' } } } self.assertEqual(expected, response) def test_cancel_update(self): # Format a dummy request stack_name = "wordpress" params = {'Action': 'CancelUpdateStack', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'CancelUpdateStack') # Stub out the RPC call to the engine with a pre-canned response identity = dict(identifier.HeatIdentifier('t', stack_name, '1')) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) rpc_client.EngineClient.call( dummy_req.context, ('stack_cancel_update', {'stack_identity': identity, 'cancel_with_rollback': True}), version='1.14' ).AndReturn(identity) self.m.ReplayAll() response = self.controller.cancel_update(dummy_req) expected = { 'CancelUpdateStackResponse': { 'CancelUpdateStackResult': {} } } self.assertEqual(response, expected) def test_update_bad_name(self): stack_name = "wibble" json_template = json.dumps(self.template) params = {'Action': 'UpdateStack', 'StackName': stack_name, 'TemplateBody': '%s' % json_template, 'Parameters.member.1.ParameterKey': 'InstanceType', 'Parameters.member.1.ParameterValue': 'm1.xlarge'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'UpdateStack') # Insert an engine RPC error and ensure we map correctly to the # heat exception type self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndRaise(heat_exception.EntityNotFound(entity='Stack', name='test')) self.m.ReplayAll() result = self.controller.update(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_create_or_update_err(self): result = self.controller.create_or_update(req={}, action="dsdgfdf") 
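# Aside (illustrative): test_update and test_cancel_update above both
# stub the same two-step dispatch -- resolve the stack name to a full
# identity first, then act on that identity with a versioned RPC. A
# hedged client-side shim mirroring the stubbed call shapes
# ('engine' stands in for rpc_client.EngineClient):
def _sketch_cancel_update(engine, context, stack_name):
    identity = engine.call(context,
                           ('identify_stack',
                            {'stack_name': stack_name}))
    return engine.call(context,
                       ('stack_cancel_update',
                        {'stack_identity': identity,
                         'cancel_with_rollback': True}),
                       version='1.14')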
self.assertIsInstance(result, exception.HeatInternalFailureError) def test_get_template(self): # Format a dummy request stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '6')) params = {'Action': 'GetTemplate', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'GetTemplate') # Stub out the RPC call to the engine with a pre-canned response engine_resp = self.template self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) rpc_client.EngineClient.call( dummy_req.context, ('get_template', {'stack_identity': identity}) ).AndReturn(engine_resp) self.m.ReplayAll() response = self.controller.get_template(dummy_req) expected = {'GetTemplateResponse': {'GetTemplateResult': {'TemplateBody': self.template}}} self.assertEqual(expected, response) def test_get_template_err_rpcerr(self): stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '6')) params = {'Action': 'GetTemplate', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'GetTemplate') # Insert an engine RPC error and ensure we map correctly to the # heat exception type self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) rpc_client.EngineClient.call( dummy_req.context, ('get_template', {'stack_identity': identity}) ).AndRaise(AttributeError()) self.m.ReplayAll() result = self.controller.get_template(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_get_template_bad_name(self): stack_name = "wibble" params = {'Action': 'GetTemplate', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'GetTemplate') # Insert an engine RPC error and ensure we map correctly to the # heat exception type self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndRaise(heat_exception.EntityNotFound(entity='Stack', name='test')) self.m.ReplayAll() result = self.controller.get_template(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_validate_err_no_template(self): # Format a dummy request with a missing template field params = {'Action': 'ValidateTemplate'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'ValidateTemplate') result = self.controller.validate_template(dummy_req) self.assertIsInstance(result, exception.HeatMissingParameterError) def test_validate_err_inval_template(self): # Format a dummy request with an invalid TemplateBody json_template = "!$%**_+}@~?" params = {'Action': 'ValidateTemplate', 'TemplateBody': '%s' % json_template} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'ValidateTemplate') result = self.controller.validate_template(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_bad_resources_in_template(self): # Format a dummy request json_template = { 'AWSTemplateFormatVersion': '2010-09-09', 'Resources': { 'Type': 'AWS: : EC2: : Instance', }, } params = {'Action': 'ValidateTemplate', 'TemplateBody': '%s' % json.dumps(json_template)} response = {'Error': 'Resources must contain Resource. 
' 'Found a [string] instead'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'ValidateTemplate') # Stub out the RPC call to the engine with a pre-canned response self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('validate_template', {'template': json_template, 'params': None, 'files': None, 'environment_files': None, 'show_nested': False, 'ignorable_errors': None}), version='1.24' ).AndReturn(response) self.m.ReplayAll() response = self.controller.validate_template(dummy_req) expected = {'ValidateTemplateResponse': {'ValidateTemplateResult': 'Resources must contain Resource. ' 'Found a [string] instead'}} self.assertEqual(expected, response) def test_delete(self): # Format a dummy request stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '1')) params = {'Action': 'DeleteStack', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DeleteStack') # Stub out the RPC call to the engine with a pre-canned response self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) # Engine returns None when delete successful rpc_client.EngineClient.call( dummy_req.context, ('delete_stack', {'stack_identity': identity}) ).AndReturn(None) self.m.ReplayAll() response = self.controller.delete(dummy_req) expected = {'DeleteStackResponse': {'DeleteStackResult': ''}} self.assertEqual(expected, response) def test_delete_err_rpcerr(self): stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '1')) params = {'Action': 'DeleteStack', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DeleteStack') # Stub out the RPC call to the engine with a pre-canned response self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) # Insert an engine RPC error and ensure we map correctly to the # heat exception type rpc_client.EngineClient.call( dummy_req.context, ('delete_stack', {'stack_identity': identity}) ).AndRaise(AttributeError()) self.m.ReplayAll() result = self.controller.delete(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_delete_bad_name(self): stack_name = "wibble" params = {'Action': 'DeleteStack', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DeleteStack') # Insert an engine RPC error and ensure we map correctly to the # heat exception type self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndRaise(heat_exception.EntityNotFound(entity='Stack', name='test')) self.m.ReplayAll() result = self.controller.delete(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_events_list_event_id_integer(self): self._test_events_list('42') def test_events_list_event_id_uuid(self): self._test_events_list('a3455d8c-9f88-404d-a85b-5315293e67de') def _test_events_list(self, event_id): # Format a dummy request stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '6')) params = {'Action': 'DescribeStackEvents', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) 
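# Aside (illustrative): the event tests below check that the engine's
# split action/status pair is folded into a single CFN
# 'ResourceStatus' string. A hypothetical one-liner capturing the
# asserted format, not Heat's real formatter:
def _sketch_cfn_status(event):
    return '%s_%s' % (event['resource_action'],
                      event['resource_status'])

# _sketch_cfn_status({'resource_action': u'TEST',
#                     'resource_status': u'IN_PROGRESS'})
# => u'TEST_IN_PROGRESS', as expected in the response below.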
self._stub_enforce(dummy_req, 'DescribeStackEvents') # Stub out the RPC call to the engine with a pre-canned response engine_resp = [{u'stack_name': u'wordpress', u'event_time': u'2012-07-23T13:05:39Z', u'stack_identity': {u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u''}, u'resource_name': u'WikiDatabase', u'resource_status_reason': u'state changed', u'event_identity': {u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u'/resources/WikiDatabase/events/{0}'.format( event_id)}, u'resource_action': u'TEST', u'resource_status': u'IN_PROGRESS', u'physical_resource_id': None, u'resource_properties': {u'UserData': u'blah'}, u'resource_type': u'AWS::EC2::Instance'}] kwargs = {'stack_identity': identity, 'nested_depth': None, 'limit': None, 'sort_keys': None, 'marker': None, 'sort_dir': None, 'filters': None} self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) rpc_client.EngineClient.call( dummy_req.context, ('list_events', kwargs), version='1.31' ).AndReturn(engine_resp) self.m.ReplayAll() response = self.controller.events_list(dummy_req) expected = {'DescribeStackEventsResponse': {'DescribeStackEventsResult': {'StackEvents': [{'EventId': six.text_type(event_id), 'StackId': u'arn:openstack:heat::t:stacks/wordpress/6', 'ResourceStatus': u'TEST_IN_PROGRESS', 'ResourceType': u'AWS::EC2::Instance', 'Timestamp': u'2012-07-23T13:05:39Z', 'StackName': u'wordpress', 'ResourceProperties': json.dumps({u'UserData': u'blah'}), 'PhysicalResourceId': None, 'ResourceStatusReason': u'state changed', 'LogicalResourceId': u'WikiDatabase'}]}}} self.assertEqual(expected, response) def test_events_list_err_rpcerr(self): stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '6')) params = {'Action': 'DescribeStackEvents', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStackEvents') # Insert an engine RPC error and ensure we map correctly to the # heat exception type self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) rpc_client.EngineClient.call( dummy_req.context, ('list_events', {'stack_identity': identity}), version='1.31' ).AndRaise(Exception()) self.m.ReplayAll() result = self.controller.events_list(dummy_req) self.assertIsInstance(result, exception.HeatInternalFailureError) def test_events_list_bad_name(self): stack_name = "wibble" params = {'Action': 'DescribeStackEvents', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStackEvents') # Insert an engine RPC error and ensure we map correctly to the # heat exception type self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndRaise(heat_exception.EntityNotFound(entity='Stack', name='test')) self.m.ReplayAll() result = self.controller.events_list(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_describe_stack_resource(self): # Format a dummy request stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '6')) params = {'Action': 'DescribeStackResource', 'StackName': stack_name, 'LogicalResourceId': "WikiDatabase"} dummy_req = 
self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStackResource') # Stub out the RPC call to the engine with a pre-canned response engine_resp = {u'description': u'', u'resource_identity': { u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u'resources/WikiDatabase' }, u'stack_name': u'wordpress', u'resource_name': u'WikiDatabase', u'resource_status_reason': None, u'updated_time': u'2012-07-23T13:06:00Z', u'stack_identity': {u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u''}, u'resource_action': u'CREATE', u'resource_status': u'COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance', u'metadata': {u'wordpress': []}} self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) args = { 'stack_identity': identity, 'resource_name': dummy_req.params.get('LogicalResourceId'), 'with_attr': False, } rpc_client.EngineClient.call( dummy_req.context, ('describe_stack_resource', args), version='1.2' ).AndReturn(engine_resp) self.m.ReplayAll() response = self.controller.describe_stack_resource(dummy_req) expected = {'DescribeStackResourceResponse': {'DescribeStackResourceResult': {'StackResourceDetail': {'StackId': u'arn:openstack:heat::t:stacks/wordpress/6', 'ResourceStatus': u'CREATE_COMPLETE', 'Description': u'', 'ResourceType': u'AWS::EC2::Instance', 'ResourceStatusReason': None, 'LastUpdatedTimestamp': u'2012-07-23T13:06:00Z', 'StackName': u'wordpress', 'PhysicalResourceId': u'a3455d8c-9f88-404d-a85b-5315293e67de', 'Metadata': {u'wordpress': []}, 'LogicalResourceId': u'WikiDatabase'}}}} self.assertEqual(expected, response) def test_describe_stack_resource_nonexistent_stack(self): # Format a dummy request stack_name = "wibble" params = {'Action': 'DescribeStackResource', 'StackName': stack_name, 'LogicalResourceId': "WikiDatabase"} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStackResource') # Stub out the RPC call to the engine with a pre-canned response self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndRaise(heat_exception.EntityNotFound(entity='Stack', name='test')) self.m.ReplayAll() result = self.controller.describe_stack_resource(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_describe_stack_resource_nonexistent(self): # Format a dummy request stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '6')) params = {'Action': 'DescribeStackResource', 'StackName': stack_name, 'LogicalResourceId': "wibble"} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStackResource') # Stub out the RPC call to the engine with a pre-canned response self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) args = { 'stack_identity': identity, 'resource_name': dummy_req.params.get('LogicalResourceId'), 'with_attr': False, } rpc_client.EngineClient.call( dummy_req.context, ('describe_stack_resource', args), version='1.2' ).AndRaise(heat_exception.ResourceNotFound( resource_name='test', stack_name='test')) self.m.ReplayAll() result = self.controller.describe_stack_resource(dummy_req) 
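# Aside (illustrative): the describe_stack_resource stubs also pin the
# RPC version ('1.2') sent alongside the 'with_attr' argument. A
# hedged sketch of the call shape being asserted; the name and shim
# are hypothetical:
def _sketch_describe_resource(engine, context, identity, name):
    args = {'stack_identity': identity,
            'resource_name': name,
            'with_attr': False}
    return engine.call(context, ('describe_stack_resource', args),
                       version='1.2')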
self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_describe_stack_resources(self): # Format a dummy request stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '6')) params = {'Action': 'DescribeStackResources', 'StackName': stack_name, 'LogicalResourceId': "WikiDatabase"} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStackResources') # Stub out the RPC call to the engine with a pre-canned response engine_resp = [{u'description': u'', u'resource_identity': { u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u'resources/WikiDatabase' }, u'stack_name': u'wordpress', u'resource_name': u'WikiDatabase', u'resource_status_reason': None, u'updated_time': u'2012-07-23T13:06:00Z', u'stack_identity': {u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u''}, u'resource_action': u'CREATE', u'resource_status': u'COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance', u'metadata': {u'ensureRunning': u'true''true'}}] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) args = { 'stack_identity': identity, 'resource_name': dummy_req.params.get('LogicalResourceId'), } rpc_client.EngineClient.call( dummy_req.context, ('describe_stack_resources', args) ).AndReturn(engine_resp) self.m.ReplayAll() response = self.controller.describe_stack_resources(dummy_req) expected = {'DescribeStackResourcesResponse': {'DescribeStackResourcesResult': {'StackResources': [{'StackId': u'arn:openstack:heat::t:stacks/wordpress/6', 'ResourceStatus': u'CREATE_COMPLETE', 'Description': u'', 'ResourceType': u'AWS::EC2::Instance', 'Timestamp': u'2012-07-23T13:06:00Z', 'ResourceStatusReason': None, 'StackName': u'wordpress', 'PhysicalResourceId': u'a3455d8c-9f88-404d-a85b-5315293e67de', 'LogicalResourceId': u'WikiDatabase'}]}}} self.assertEqual(expected, response) def test_describe_stack_resources_bad_name(self): stack_name = "wibble" params = {'Action': 'DescribeStackResources', 'StackName': stack_name, 'LogicalResourceId': "WikiDatabase"} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStackResources') # Insert an engine RPC error and ensure we map correctly to the # heat exception type self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndRaise(heat_exception.EntityNotFound(entity='Stack', name='test')) self.m.ReplayAll() result = self.controller.describe_stack_resources(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) def test_describe_stack_resources_physical(self): # Format a dummy request stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '6')) params = {'Action': 'DescribeStackResources', 'LogicalResourceId': "WikiDatabase", 'PhysicalResourceId': 'a3455d8c-9f88-404d-a85b-5315293e67de'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStackResources') # Stub out the RPC call to the engine with a pre-canned response engine_resp = [{u'description': u'', u'resource_identity': { u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u'resources/WikiDatabase' }, u'stack_name': u'wordpress', u'resource_name': u'WikiDatabase', u'resource_status_reason': None, 
u'updated_time': u'2012-07-23T13:06:00Z', u'stack_identity': {u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u''}, u'resource_action': u'CREATE', u'resource_status': u'COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance', u'metadata': {u'ensureRunning': u'true''true'}}] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('find_physical_resource', {'physical_resource_id': 'a3455d8c-9f88-404d-a85b-5315293e67de'}) ).AndReturn(identity) args = { 'stack_identity': identity, 'resource_name': dummy_req.params.get('LogicalResourceId'), } rpc_client.EngineClient.call( dummy_req.context, ('describe_stack_resources', args) ).AndReturn(engine_resp) self.m.ReplayAll() response = self.controller.describe_stack_resources(dummy_req) expected = {'DescribeStackResourcesResponse': {'DescribeStackResourcesResult': {'StackResources': [{'StackId': u'arn:openstack:heat::t:stacks/wordpress/6', 'ResourceStatus': u'CREATE_COMPLETE', 'Description': u'', 'ResourceType': u'AWS::EC2::Instance', 'Timestamp': u'2012-07-23T13:06:00Z', 'ResourceStatusReason': None, 'StackName': u'wordpress', 'PhysicalResourceId': u'a3455d8c-9f88-404d-a85b-5315293e67de', 'LogicalResourceId': u'WikiDatabase'}]}}} self.assertEqual(expected, response) def test_describe_stack_resources_physical_not_found(self): # Format a dummy request params = {'Action': 'DescribeStackResources', 'LogicalResourceId': "WikiDatabase", 'PhysicalResourceId': 'aaaaaaaa-9f88-404d-cccc-ffffffffffff'} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStackResources') # Stub out the RPC call to the engine with a pre-canned response self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('find_physical_resource', {'physical_resource_id': 'aaaaaaaa-9f88-404d-cccc-ffffffffffff'}) ).AndRaise(heat_exception.EntityNotFound(entity='Resource', name='1')) self.m.ReplayAll() response = self.controller.describe_stack_resources(dummy_req) self.assertIsInstance(response, exception.HeatInvalidParameterValueError) def test_describe_stack_resources_err_inval(self): # Format a dummy request containing both StackName and # PhysicalResourceId, which is invalid and should throw a # HeatInvalidParameterCombinationError stack_name = "wordpress" params = {'Action': 'DescribeStackResources', 'StackName': stack_name, 'PhysicalResourceId': "123456"} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'DescribeStackResources') ret = self.controller.describe_stack_resources(dummy_req) self.assertIsInstance(ret, exception.HeatInvalidParameterCombinationError) def test_list_stack_resources(self): # Format a dummy request stack_name = "wordpress" identity = dict(identifier.HeatIdentifier('t', stack_name, '6')) params = {'Action': 'ListStackResources', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'ListStackResources') # Stub out the RPC call to the engine with a pre-canned response engine_resp = [{u'resource_identity': {u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u'/resources/WikiDatabase'}, u'stack_name': u'wordpress', u'resource_name': u'WikiDatabase', u'resource_status_reason': None, u'updated_time': u'2012-07-23T13:06:00Z', u'stack_identity': {u'tenant': u't', u'stack_name': u'wordpress', u'stack_id': u'6', u'path': u''}, u'resource_action': u'CREATE', 
u'resource_status': u'COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance'}] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndReturn(identity) rpc_client.EngineClient.call( dummy_req.context, ('list_stack_resources', {'stack_identity': identity, 'nested_depth': 0, 'with_detail': False, 'filters': None}), version='1.25' ).AndReturn(engine_resp) self.m.ReplayAll() response = self.controller.list_stack_resources(dummy_req) expected = {'ListStackResourcesResponse': {'ListStackResourcesResult': {'StackResourceSummaries': [{'ResourceStatus': u'CREATE_COMPLETE', 'ResourceType': u'AWS::EC2::Instance', 'ResourceStatusReason': None, 'LastUpdatedTimestamp': u'2012-07-23T13:06:00Z', 'PhysicalResourceId': u'a3455d8c-9f88-404d-a85b-5315293e67de', 'LogicalResourceId': u'WikiDatabase'}]}}} self.assertEqual(expected, response) def test_list_stack_resources_bad_name(self): stack_name = "wibble" params = {'Action': 'ListStackResources', 'StackName': stack_name} dummy_req = self._dummy_GET_request(params) self._stub_enforce(dummy_req, 'ListStackResources') # Insert an engine RPC error and ensure we map correctly to the # heat exception type self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( dummy_req.context, ('identify_stack', {'stack_name': stack_name}) ).AndRaise(heat_exception.EntityNotFound(entity='Stack', name='test')) self.m.ReplayAll() result = self.controller.list_stack_resources(dummy_req) self.assertIsInstance(result, exception.HeatInvalidParameterValueError) heat-10.0.2/heat/tests/api/openstack_v1/0000775000175000017500000000000013343562672020001 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/api/openstack_v1/test_routes.py0000666000175000017500000004036013343562340022730 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
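# Illustrative sketch (not part of the original module): the
# assertRoute() helper below drives a routes.Mapper match -- a path
# plus a minimal WSGI environ resolves to an action name and the
# extracted path variables. A standalone demonstration of the same
# mechanism, under the assumption of a single hand-built route:
#
#     import routes
#     mapper = routes.Mapper()
#     mapper.connect('/{tenant_id}/stacks', action='index',
#                    conditions={'method': ['GET']})
#     mapper.match('/aaaa/stacks', {'REQUEST_METHOD': 'GET'})
#     # => {'tenant_id': 'aaaa', 'action': 'index'}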
from oslo_utils import reflection import heat.api.openstack.v1 as api_v1 from heat.tests import common class RoutesTest(common.HeatTestCase): def assertRoute(self, mapper, path, method, action, controller, params=None): params = params or {} route = mapper.match(path, {'REQUEST_METHOD': method}) self.assertIsNotNone(route) self.assertEqual(action, route['action']) class_name = reflection.get_class_name(route['controller'].controller, fully_qualified=False) self.assertEqual(controller, class_name) del(route['action']) del(route['controller']) self.assertEqual(params, route) def setUp(self): super(RoutesTest, self).setUp() self.m = api_v1.API({}).map def test_template_handling(self): self.assertRoute( self.m, '/aaaa/resource_types', 'GET', 'list_resource_types', 'StackController', { 'tenant_id': 'aaaa', }) self.assertRoute( self.m, '/aaaa/resource_types/test_type', 'GET', 'resource_schema', 'StackController', { 'tenant_id': 'aaaa', 'type_name': 'test_type' }) self.assertRoute( self.m, '/aaaa/resource_types/test_type/template', 'GET', 'generate_template', 'StackController', { 'tenant_id': 'aaaa', 'type_name': 'test_type' }) self.assertRoute( self.m, '/aaaa/validate', 'POST', 'validate_template', 'StackController', { 'tenant_id': 'aaaa' }) def test_stack_collection(self): self.assertRoute( self.m, '/aaaa/stacks', 'GET', 'index', 'StackController', { 'tenant_id': 'aaaa' }) self.assertRoute( self.m, '/aaaa/stacks', 'POST', 'create', 'StackController', { 'tenant_id': 'aaaa' }) self.assertRoute( self.m, '/aaaa/stacks/preview', 'POST', 'preview', 'StackController', { 'tenant_id': 'aaaa' }) self.assertRoute( self.m, '/aaaa/stacks/detail', 'GET', 'detail', 'StackController', { 'tenant_id': 'aaaa' }) def test_stack_data(self): self.assertRoute( self.m, '/aaaa/stacks/teststack', 'GET', 'lookup', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack' }) self.assertRoute( self.m, '/aaaa/stacks/arn:openstack:heat::6548ab64fbda49deb188851a3b7d8c8b' ':stacks/stack-1411-06/1c5d9bb2-3464-45e2-a728-26dfa4e1d34a', 'GET', 'lookup', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'arn:openstack:heat:' ':6548ab64fbda49deb188851a3b7d8c8b:stacks/stack-1411-06/' '1c5d9bb2-3464-45e2-a728-26dfa4e1d34a' }) self.assertRoute( self.m, '/aaaa/stacks/teststack/resources', 'GET', 'lookup', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'path': 'resources' }) self.assertRoute( self.m, '/aaaa/stacks/teststack/events', 'GET', 'lookup', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'path': 'events' }) self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb', 'GET', 'show', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', }) def test_stack_snapshot(self): self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/snapshots', 'POST', 'snapshot', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', }) self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/snapshots/cccc', 'GET', 'show_snapshot', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', 'snapshot_id': 'cccc' }) self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/snapshots/cccc', 'DELETE', 'delete_snapshot', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', 'snapshot_id': 'cccc' }) self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/snapshots', 'GET', 'list_snapshots', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb' }) 
self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/snapshots/cccc/restore', 'POST', 'restore_snapshot', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', 'snapshot_id': 'cccc' }) def test_stack_outputs(self): self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/outputs', 'GET', 'list_outputs', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb' } ) self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/outputs/cccc', 'GET', 'show_output', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', 'output_key': 'cccc' } ) def test_stack_data_template(self): self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/template', 'GET', 'template', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', }) self.assertRoute( self.m, '/aaaa/stacks/teststack/template', 'GET', 'lookup', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'path': 'template' }) def test_stack_post_actions(self): self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/actions', 'POST', 'action', 'ActionController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', }) def test_stack_post_actions_lookup_redirect(self): self.assertRoute( self.m, '/aaaa/stacks/teststack/actions', 'POST', 'lookup', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'path': 'actions' }) def test_stack_update_delete(self): self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb', 'PUT', 'update', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', }) self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb', 'DELETE', 'delete', 'StackController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', }) def test_resources(self): self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/resources', 'GET', 'index', 'ResourceController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb' }) self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/resources/cccc', 'GET', 'show', 'ResourceController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', 'resource_name': 'cccc' }) self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/resources/cccc/metadata', 'GET', 'metadata', 'ResourceController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', 'resource_name': 'cccc' }) self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/resources/cccc/signal', 'POST', 'signal', 'ResourceController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', 'resource_name': 'cccc' }) def test_events(self): self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/events', 'GET', 'index', 'EventController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb' }) self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/resources/cccc/events', 'GET', 'index', 'EventController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', 'resource_name': 'cccc' }) self.assertRoute( self.m, '/aaaa/stacks/teststack/bbbb/resources/cccc/events/dddd', 'GET', 'show', 'EventController', { 'tenant_id': 'aaaa', 'stack_name': 'teststack', 'stack_id': 'bbbb', 'resource_name': 'cccc', 'event_id': 'dddd' }) def test_software_configs(self): self.assertRoute( self.m, '/aaaa/software_configs', 'GET', 'index', 'SoftwareConfigController', { 'tenant_id': 'aaaa' }) self.assertRoute( self.m, '/aaaa/software_configs', 'POST', 'create', 
'SoftwareConfigController', { 'tenant_id': 'aaaa' }) self.assertRoute( self.m, '/aaaa/software_configs/bbbb', 'GET', 'show', 'SoftwareConfigController', { 'tenant_id': 'aaaa', 'config_id': 'bbbb' }) self.assertRoute( self.m, '/aaaa/software_configs/bbbb', 'DELETE', 'delete', 'SoftwareConfigController', { 'tenant_id': 'aaaa', 'config_id': 'bbbb' }) def test_software_deployments(self): self.assertRoute( self.m, '/aaaa/software_deployments', 'GET', 'index', 'SoftwareDeploymentController', { 'tenant_id': 'aaaa' }) self.assertRoute( self.m, '/aaaa/software_deployments', 'POST', 'create', 'SoftwareDeploymentController', { 'tenant_id': 'aaaa' }) self.assertRoute( self.m, '/aaaa/software_deployments/bbbb', 'GET', 'show', 'SoftwareDeploymentController', { 'tenant_id': 'aaaa', 'deployment_id': 'bbbb' }) self.assertRoute( self.m, '/aaaa/software_deployments/bbbb', 'PUT', 'update', 'SoftwareDeploymentController', { 'tenant_id': 'aaaa', 'deployment_id': 'bbbb' }) self.assertRoute( self.m, '/aaaa/software_deployments/bbbb', 'DELETE', 'delete', 'SoftwareDeploymentController', { 'tenant_id': 'aaaa', 'deployment_id': 'bbbb' }) def test_build_info(self): self.assertRoute( self.m, '/fake_tenant/build_info', 'GET', 'build_info', 'BuildInfoController', {'tenant_id': 'fake_tenant'} ) def test_405(self): self.assertRoute( self.m, '/fake_tenant/validate', 'GET', 'reject', 'DefaultMethodController', {'tenant_id': 'fake_tenant', 'allowed_methods': 'POST'} ) self.assertRoute( self.m, '/fake_tenant/stacks', 'PUT', 'reject', 'DefaultMethodController', {'tenant_id': 'fake_tenant', 'allowed_methods': 'GET,POST'} ) self.assertRoute( self.m, '/fake_tenant/stacks/fake_stack/stack_id', 'POST', 'reject', 'DefaultMethodController', {'tenant_id': 'fake_tenant', 'stack_name': 'fake_stack', 'stack_id': 'stack_id', 'allowed_methods': 'GET,PUT,PATCH,DELETE'} ) def test_options(self): self.assertRoute( self.m, '/fake_tenant/validate', 'OPTIONS', 'options', 'DefaultMethodController', {'tenant_id': 'fake_tenant', 'allowed_methods': 'POST'} ) self.assertRoute( self.m, '/fake_tenant/stacks/fake_stack/stack_id', 'OPTIONS', 'options', 'DefaultMethodController', {'tenant_id': 'fake_tenant', 'stack_name': 'fake_stack', 'stack_id': 'stack_id', 'allowed_methods': 'GET,PUT,PATCH,DELETE'} ) def test_services(self): self.assertRoute( self.m, '/aaaa/services', 'GET', 'index', 'ServiceController', { 'tenant_id': 'aaaa' }) heat-10.0.2/heat/tests/api/openstack_v1/test_events.py0000666000175000017500000007035513343562351022724 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
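# Illustrative note (not part of the original module): the test class
# below is wrapped in @mock.patch.object(policy.Enforcer, 'enforce'),
# so every test_* method receives the patched enforce() mock as a
# trailing argument. A minimal standalone sketch of that pattern,
# with hypothetical names:
#
#     import mock
#
#     class Guard(object):
#         def check(self):
#             return 'real'
#
#     @mock.patch.object(Guard, 'check')
#     class GuardTests(object):
#         def test_patched(self, mock_check):
#             mock_check.return_value = True
#             assert Guard().check() is True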
import mock import six import webob.exc import heat.api.middleware.fault as fault import heat.api.openstack.v1.events as events from heat.common import exception as heat_exc from heat.common import identifier from heat.common import policy from heat.rpc import client as rpc_client from heat.tests.api.openstack_v1 import tools from heat.tests import common @mock.patch.object(policy.Enforcer, 'enforce') class EventControllerTest(tools.ControllerTest, common.HeatTestCase): """Tests the API class EventController. Tests the API class which acts as the WSGI controller, the endpoint processing API requests after they are routed """ def setUp(self): super(EventControllerTest, self).setUp() # Create WSGI controller instance class DummyConfig(object): bind_port = 8004 cfgopts = DummyConfig() self.controller = events.EventController(options=cfgopts) def test_resource_index_event_id_integer(self, mock_enforce): self._test_resource_index('42', mock_enforce) def test_resource_index_event_id_uuid(self, mock_enforce): self._test_resource_index('a3455d8c-9f88-404d-a85b-5315293e67de', mock_enforce) def test_resource_index_nested_depth(self, mock_enforce): self._test_resource_index('a3455d8c-9f88-404d-a85b-5315293e67de', mock_enforce, nested_depth=1) def _test_resource_index(self, event_id, mock_enforce, nested_depth=None): self._mock_enforce_setup(mock_enforce, 'index', True) res_name = 'WikiDatabase' params = {} if nested_depth: params['nested_depth'] = nested_depth stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) ev_identity = identifier.EventIdentifier(event_id=event_id, **res_identity) req = self._get(stack_identity._tenant_path() + '/resources/' + res_name + '/events', params=params) kwargs = {'stack_identity': stack_identity, 'nested_depth': nested_depth, 'limit': None, 'sort_keys': None, 'marker': None, 'sort_dir': None, 'filters': {'resource_name': res_name}} engine_resp = [ { u'stack_name': u'wordpress', u'event_time': u'2012-07-23T13:05:39Z', u'stack_identity': dict(stack_identity), u'resource_name': res_name, u'resource_status_reason': u'state changed', u'event_identity': dict(ev_identity), u'resource_action': u'CREATE', u'resource_status': u'IN_PROGRESS', u'physical_resource_id': None, u'resource_type': u'AWS::EC2::Instance', } ] if nested_depth: engine_resp[0]['root_stack_id'] = dict(stack_identity) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_events', kwargs), version='1.31' ).AndReturn(engine_resp) self.m.ReplayAll() result = self.controller.index(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) expected = { 'events': [ { 'id': event_id, 'links': [ {'href': self._url(ev_identity), 'rel': 'self'}, {'href': self._url(res_identity), 'rel': 'resource'}, {'href': self._url(stack_identity), 'rel': 'stack'}, ], u'resource_name': res_name, u'logical_resource_id': res_name, u'resource_status_reason': u'state changed', u'event_time': u'2012-07-23T13:05:39Z', u'resource_status': u'CREATE_IN_PROGRESS', u'physical_resource_id': None, } ] } if nested_depth: expected['events'][0]['links'].append( {'href': self._url(stack_identity), 'rel': 'root_stack'} ) self.assertEqual(expected, result) self.m.VerifyAll() @mock.patch.object(rpc_client.EngineClient, 'call') def test_index_multiple_resource_names(self, mock_call, mock_enforce): 
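# (Descriptive note: mock_call is injected by the method-level patch
# of rpc_client.EngineClient.call -- the innermost decorator -- while
# mock_enforce is appended by the class-level policy.Enforcer patch.)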
self._mock_enforce_setup(mock_enforce, 'index', True) res_name = 'resource3' event_id = '42' params = { 'resource_name': ['resource1', 'resource2'] } stack_identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) ev_identity = identifier.EventIdentifier(event_id=event_id, **res_identity) req = self._get(stack_identity._tenant_path() + '/events', params=params) mock_call.return_value = [ { u'stack_name': u'wordpress', u'event_time': u'2012-07-23T13:05:39Z', u'stack_identity': dict(stack_identity), u'resource_name': res_name, u'resource_status_reason': u'state changed', u'event_identity': dict(ev_identity), u'resource_action': u'CREATE', u'resource_status': u'IN_PROGRESS', u'physical_resource_id': None, u'resource_type': u'AWS::EC2::Instance', } ] self.controller.index(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) rpc_call_args, _ = mock_call.call_args engine_args = rpc_call_args[1][1] self.assertEqual(7, len(engine_args)) self.assertIn('filters', engine_args) self.assertIn('resource_name', engine_args['filters']) self.assertEqual(res_name, engine_args['filters']['resource_name']) self.assertNotIn('resource1', engine_args['filters']['resource_name']) self.assertNotIn('resource2', engine_args['filters']['resource_name']) @mock.patch.object(rpc_client.EngineClient, 'call') def test_index_multiple_resource_names_no_resource(self, mock_call, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) res_name = 'resource3' event_id = '42' params = { 'resource_name': ['resource1', 'resource2'] } stack_identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) ev_identity = identifier.EventIdentifier(event_id=event_id, **res_identity) req = self._get(stack_identity._tenant_path() + '/events', params=params) mock_call.return_value = [ { u'stack_name': u'wordpress', u'event_time': u'2012-07-23T13:05:39Z', u'stack_identity': dict(stack_identity), u'resource_name': res_name, u'resource_status_reason': u'state changed', u'event_identity': dict(ev_identity), u'resource_action': u'CREATE', u'resource_status': u'IN_PROGRESS', u'physical_resource_id': None, u'resource_type': u'AWS::EC2::Instance', } ] self.controller.index(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id) rpc_call_args, _ = mock_call.call_args engine_args = rpc_call_args[1][1] self.assertEqual(7, len(engine_args)) self.assertIn('filters', engine_args) self.assertIn('resource_name', engine_args['filters']) self.assertIn('resource1', engine_args['filters']['resource_name']) self.assertIn('resource2', engine_args['filters']['resource_name']) def test_stack_index_event_id_integer(self, mock_enforce): self._test_stack_index('42', mock_enforce) def test_stack_index_event_id_uuid(self, mock_enforce): self._test_stack_index('a3455d8c-9f88-404d-a85b-5315293e67de', mock_enforce) def _test_stack_index(self, event_id, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) ev_identity = identifier.EventIdentifier(event_id=event_id, **res_identity) req = self._get(stack_identity._tenant_path() + '/events') kwargs = {'stack_identity': 
stack_identity, 'nested_depth': None, 'limit': None, 'sort_keys': None, 'marker': None, 'sort_dir': None, 'filters': {'resource_name': res_name}} engine_resp = [ { u'stack_name': u'wordpress', u'event_time': u'2012-07-23T13:05:39Z', u'stack_identity': dict(stack_identity), u'resource_name': res_name, u'resource_status_reason': u'state changed', u'event_identity': dict(ev_identity), u'resource_action': u'CREATE', u'resource_status': u'IN_PROGRESS', u'physical_resource_id': None, u'resource_type': u'AWS::EC2::Instance', } ] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_events', kwargs), version='1.31' ).AndReturn(engine_resp) self.m.ReplayAll() result = self.controller.index(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) expected = { 'events': [ { 'id': event_id, 'links': [ {'href': self._url(ev_identity), 'rel': 'self'}, {'href': self._url(res_identity), 'rel': 'resource'}, {'href': self._url(stack_identity), 'rel': 'stack'}, ], u'resource_name': res_name, u'logical_resource_id': res_name, u'resource_status_reason': u'state changed', u'event_time': u'2012-07-23T13:05:39Z', u'resource_status': u'CREATE_IN_PROGRESS', u'physical_resource_id': None, } ] } self.assertEqual(expected, result) self.m.VerifyAll() def test_index_stack_nonexist(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6') req = self._get(stack_identity._tenant_path() + '/events') kwargs = {'stack_identity': stack_identity, 'nested_depth': None, 'limit': None, 'sort_keys': None, 'marker': None, 'sort_dir': None, 'filters': None} error = heat_exc.EntityNotFound(entity='Stack', name='a') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_events', kwargs), version='1.31' ).AndRaise(tools.to_remote_error(error)) self.m.ReplayAll() resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.index, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id) self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) self.m.VerifyAll() def test_index_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', False) stack_identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6') req = self._get(stack_identity._tenant_path() + '/events') resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.index, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id) self.assertEqual(403, resp.status_int) self.assertIn('403 Forbidden', six.text_type(resp)) def test_index_resource_nonexist(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') req = self._get(stack_identity._tenant_path() + '/resources/' + res_name + '/events') kwargs = {'stack_identity': stack_identity, 'nested_depth': None, 'limit': None, 'sort_keys': None, 'marker': None, 'sort_dir': None, 'filters': {'resource_name': res_name}} engine_resp = [] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_events', kwargs), version='1.31' ).AndReturn(engine_resp) self.m.ReplayAll() self.assertRaises(webob.exc.HTTPNotFound, self.controller.index, 
req, tenant_id=self.tenant,
                          stack_name=stack_identity.stack_name,
                          stack_id=stack_identity.stack_id,
                          resource_name=res_name)
        self.m.VerifyAll()

    @mock.patch.object(rpc_client.EngineClient, 'call')
    def test_index_whitelists_pagination_params(self, mock_call,
                                                mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'index', True)
        params = {
            'limit': 10,
            'sort_keys': 'fake sort keys',
            'marker': 'fake marker',
            'sort_dir': 'fake sort dir',
            'balrog': 'you shall not pass!'
        }
        stack_identity = identifier.HeatIdentifier(self.tenant,
                                                   'wibble', '6')
        req = self._get(stack_identity._tenant_path() + '/events',
                        params=params)
        mock_call.return_value = []

        self.controller.index(req, tenant_id=self.tenant,
                              stack_name=stack_identity.stack_name,
                              stack_id=stack_identity.stack_id)

        rpc_call_args, _ = mock_call.call_args
        engine_args = rpc_call_args[1][1]
        self.assertEqual(7, len(engine_args))
        self.assertIn('limit', engine_args)
        self.assertEqual(10, engine_args['limit'])
        self.assertIn('sort_keys', engine_args)
        self.assertEqual(['fake sort keys'], engine_args['sort_keys'])
        self.assertIn('marker', engine_args)
        self.assertEqual('fake marker', engine_args['marker'])
        self.assertIn('sort_dir', engine_args)
        self.assertEqual('fake sort dir', engine_args['sort_dir'])
        self.assertIn('filters', engine_args)
        self.assertIsNone(engine_args['filters'])
        self.assertNotIn('balrog', engine_args)

    @mock.patch.object(rpc_client.EngineClient, 'call')
    def test_index_limit_not_int(self, mock_call, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'index', True)
        sid = identifier.HeatIdentifier(self.tenant, 'wibble', '6')
        req = self._get(sid._tenant_path() + '/events',
                        params={'limit': 'not-an-int'})
        ex = self.assertRaises(webob.exc.HTTPBadRequest,
                               self.controller.index,
                               req, tenant_id=self.tenant,
                               stack_name=sid.stack_name,
                               stack_id=sid.stack_id)
        self.assertEqual("Only integer is acceptable by 'limit'.",
                         six.text_type(ex))
        self.assertFalse(mock_call.called)

    @mock.patch.object(rpc_client.EngineClient, 'call')
    def test_index_whitelist_filter_params(self, mock_call, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'index', True)
        params = {
            'resource_status': 'COMPLETE',
            'resource_action': 'CREATE',
            'resource_name': 'my_server',
            'resource_type': 'OS::Nova::Server',
            'balrog': 'you shall not pass!'
} stack_identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6') req = self._get(stack_identity._tenant_path() + '/events', params=params) mock_call.return_value = [] self.controller.index(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id) rpc_call_args, _ = mock_call.call_args engine_args = rpc_call_args[1][1] self.assertIn('filters', engine_args) filters = engine_args['filters'] self.assertEqual(4, len(filters)) self.assertIn('resource_status', filters) self.assertEqual('COMPLETE', filters['resource_status']) self.assertIn('resource_action', filters) self.assertEqual('CREATE', filters['resource_action']) self.assertIn('resource_name', filters) self.assertEqual('my_server', filters['resource_name']) self.assertIn('resource_type', filters) self.assertEqual('OS::Nova::Server', filters['resource_type']) self.assertNotIn('balrog', filters) def test_show_event_id_integer(self, mock_enforce): self._test_show('42', mock_enforce) def test_show_event_id_uuid(self, mock_enforce): self._test_show('a3455d8c-9f88-404d-a85b-5315293e67de', mock_enforce) def _test_show(self, event_id, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) ev_identity = identifier.EventIdentifier(event_id=event_id, **res_identity) req = self._get(stack_identity._tenant_path() + '/resources/' + res_name + '/events/' + event_id) kwargs = {'stack_identity': stack_identity, 'limit': None, 'sort_keys': None, 'marker': None, 'sort_dir': None, 'nested_depth': None, 'filters': {'resource_name': res_name, 'uuid': event_id}} engine_resp = [ { u'stack_name': u'wordpress', u'event_time': u'2012-07-23T13:06:00Z', u'stack_identity': dict(stack_identity), u'resource_name': res_name, u'resource_status_reason': u'state changed', u'event_identity': dict(ev_identity), u'resource_action': u'CREATE', u'resource_status': u'COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_properties': {u'UserData': u'blah'}, u'resource_type': u'AWS::EC2::Instance', } ] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_events', kwargs), version='1.31' ).AndReturn(engine_resp) self.m.ReplayAll() result = self.controller.show(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name, event_id=event_id) expected = { 'event': { 'id': event_id, 'links': [ {'href': self._url(ev_identity), 'rel': 'self'}, {'href': self._url(res_identity), 'rel': 'resource'}, {'href': self._url(stack_identity), 'rel': 'stack'}, ], u'resource_name': res_name, u'logical_resource_id': res_name, u'resource_status_reason': u'state changed', u'event_time': u'2012-07-23T13:06:00Z', u'resource_status': u'CREATE_COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance', u'resource_properties': {u'UserData': u'blah'}, } } self.assertEqual(expected, result) self.m.VerifyAll() def test_show_bad_resource(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) event_id = '42' res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') req = self._get(stack_identity._tenant_path() + '/resources/' + res_name + '/events/' + event_id) kwargs = {'stack_identity': stack_identity, 
'limit': None, 'sort_keys': None, 'marker': None, 'sort_dir': None, 'nested_depth': None, 'filters': {'resource_name': res_name, 'uuid': '42'}} engine_resp = [] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_events', kwargs), version='1.31' ).AndReturn(engine_resp) self.m.ReplayAll() self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name, event_id=event_id) self.m.VerifyAll() def test_show_stack_nonexist(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) event_id = '42' res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6') req = self._get(stack_identity._tenant_path() + '/resources/' + res_name + '/events/' + event_id) kwargs = {'stack_identity': stack_identity, 'limit': None, 'sort_keys': None, 'marker': None, 'sort_dir': None, 'nested_depth': None, 'filters': {'resource_name': res_name, 'uuid': '42'}} error = heat_exc.EntityNotFound(entity='Stack', name='a') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_events', kwargs), version='1.31' ).AndRaise(tools.to_remote_error(error)) self.m.ReplayAll() resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.show, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name, event_id=event_id) self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) self.m.VerifyAll() def test_show_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', False) event_id = '42' res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6') req = self._get(stack_identity._tenant_path() + '/resources/' + res_name + '/events/' + event_id) resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.show, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name, event_id=event_id) self.assertEqual(403, resp.status_int) self.assertIn('403 Forbidden', six.text_type(resp)) @mock.patch.object(rpc_client.EngineClient, 'call') def test_show_multiple_resource_names(self, mock_call, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) res_name = 'resource3' event_id = '42' stack_identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) ev_identity = identifier.EventIdentifier(event_id=event_id, **res_identity) req = self._get(stack_identity._tenant_path() + '/resources/' + res_name + '/events/' + event_id) mock_call.return_value = [ { u'stack_name': u'wordpress', u'event_time': u'2012-07-23T13:05:39Z', u'stack_identity': dict(stack_identity), u'resource_name': res_name, u'resource_status_reason': u'state changed', u'event_identity': dict(ev_identity), u'resource_action': u'CREATE', u'resource_status': u'IN_PROGRESS', u'physical_resource_id': None, u'resource_type': u'AWS::EC2::Instance', } ] self.controller.show(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name, event_id=event_id) rpc_call_args, _ = mock_call.call_args engine_args = rpc_call_args[1][1] self.assertEqual(7, len(engine_args)) self.assertIn('filters', 
engine_args) self.assertIn('resource_name', engine_args['filters']) self.assertIn(res_name, engine_args['filters']['resource_name']) heat-10.0.2/heat/tests/api/openstack_v1/test_resources.py0000666000175000017500000011140513343562351023422 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six import webob.exc import heat.api.middleware.fault as fault import heat.api.openstack.v1.resources as resources from heat.common import exception as heat_exc from heat.common import identifier from heat.common import policy from heat.rpc import api as rpc_api from heat.rpc import client as rpc_client from heat.tests.api.openstack_v1 import tools from heat.tests import common @mock.patch.object(policy.Enforcer, 'enforce') class ResourceControllerTest(tools.ControllerTest, common.HeatTestCase): """Tests the API class ResourceController. Tests the API class which acts as the WSGI controller, the endpoint processing API requests after they are routed """ def setUp(self): super(ResourceControllerTest, self).setUp() # Create WSGI controller instance class DummyConfig(object): bind_port = 8004 cfgopts = DummyConfig() self.controller = resources.ResourceController(options=cfgopts) def test_index(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(stack_identity._tenant_path() + '/resources') engine_resp = [ { u'resource_identity': dict(res_identity), u'stack_name': stack_identity.stack_name, u'resource_name': res_name, u'resource_status_reason': None, u'updated_time': u'2012-07-23T13:06:00Z', u'stack_identity': stack_identity, u'resource_action': u'CREATE', u'resource_status': u'COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance', } ] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_stack_resources', {'stack_identity': stack_identity, 'nested_depth': 0, 'with_detail': False, 'filters': {} }), version='1.25' ).AndReturn(engine_resp) self.m.ReplayAll() result = self.controller.index(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id) expected = { 'resources': [{'links': [{'href': self._url(res_identity), 'rel': 'self'}, {'href': self._url(stack_identity), 'rel': 'stack'}], u'resource_name': res_name, u'logical_resource_id': res_name, u'resource_status_reason': None, u'updated_time': u'2012-07-23T13:06:00Z', u'resource_status': u'CREATE_COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance'}]} self.assertEqual(expected, result) self.m.VerifyAll() def test_index_nonexist(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'rubbish', '1') req = 
self._get(stack_identity._tenant_path() + '/resources') error = heat_exc.EntityNotFound(entity='Stack', name='a') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_stack_resources', {'stack_identity': stack_identity, 'nested_depth': 0, 'with_detail': False, 'filters': {}}), version='1.25' ).AndRaise(tools.to_remote_error(error)) self.m.ReplayAll() resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.index, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id) self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) self.m.VerifyAll() def test_index_invalid_filters(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'rubbish', '1') req = self._get(stack_identity._tenant_path() + '/resources', {'invalid_key': 'junk'}) mock_call = self.patchobject(rpc_client.EngineClient, 'call') ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id) self.assertIn("Invalid filter parameters %s" % [six.text_type('invalid_key')], six.text_type(ex)) self.assertFalse(mock_call.called) def test_index_nested_depth(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'rubbish', '1') req = self._get(stack_identity._tenant_path() + '/resources', {'nested_depth': '99'}) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_stack_resources', {'stack_identity': stack_identity, 'nested_depth': 99, 'with_detail': False, 'filters': {}}), version='1.25' ).AndReturn([]) self.m.ReplayAll() result = self.controller.index(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id) self.assertEqual([], result['resources']) self.m.VerifyAll() def test_index_nested_depth_not_int(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'rubbish', '1') req = self._get(stack_identity._tenant_path() + '/resources', {'nested_depth': 'non-int'}) mock_call = self.patchobject(rpc_client.EngineClient, 'call') ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id) self.assertEqual("Only integer is acceptable by 'nested_depth'.", six.text_type(ex)) self.assertFalse(mock_call.called) def test_index_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', False) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(stack_identity._tenant_path() + '/resources') resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.index, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id) self.assertEqual(403, resp.status_int) self.assertIn('403 Forbidden', six.text_type(resp)) def test_index_detail(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') res_identity = 
identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(stack_identity._tenant_path() + '/resources', {'with_detail': 'true'}) resp_parameters = { "OS::project_id": "3ab5b02fa01f4f95afa1e254afc4a435", "network": "cf05086d-07c7-4ed6-95e5-e4af724677e6", "OS::stack_name": "s1", "admin_pass": "******", "key_name": "kk", "image": "fa5d387e-541f-4dfb-ae8a-83a614683f84", "db_port": "50000", "OS::stack_id": "723d7cee-46b3-4433-9c21-f3378eb0bfc4", "flavor": "1" }, engine_resp = [ { u'resource_identity': dict(res_identity), u'stack_name': stack_identity.stack_name, u'resource_name': res_name, u'resource_status_reason': None, u'updated_time': u'2012-07-23T13:06:00Z', u'stack_identity': stack_identity, u'resource_action': u'CREATE', u'resource_status': u'COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance', u'parameters': resp_parameters, u'description': u'Hello description', u'stack_user_project_id': u'6f38bcfebbc4400b82d50c1a2ea3057d', } ] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_stack_resources', {'stack_identity': stack_identity, 'nested_depth': 0, 'with_detail': True, 'filters': {}}), version='1.25' ).AndReturn(engine_resp) self.m.ReplayAll() result = self.controller.index(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id) expected = { 'resources': [{'links': [{'href': self._url(res_identity), 'rel': 'self'}, {'href': self._url(stack_identity), 'rel': 'stack'}], u'resource_name': res_name, u'logical_resource_id': res_name, u'resource_status_reason': None, u'updated_time': u'2012-07-23T13:06:00Z', u'resource_status': u'CREATE_COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance', u'parameters': resp_parameters, u'description': u'Hello description', u'stack_user_project_id': u'6f38bcfebbc4400b82d50c1a2ea3057d'}]} self.assertEqual(expected, result) self.m.VerifyAll() def test_show(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(stack_identity._tenant_path()) engine_resp = { u'description': u'', u'resource_identity': dict(res_identity), u'stack_name': stack_identity.stack_name, u'resource_name': res_name, u'resource_status_reason': None, u'updated_time': u'2012-07-23T13:06:00Z', u'stack_identity': dict(stack_identity), u'resource_action': u'CREATE', u'resource_status': u'COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance', u'attributes': {u'foo': 'bar'}, u'metadata': {u'ensureRunning': u'true'} } self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('describe_stack_resource', {'stack_identity': stack_identity, 'resource_name': res_name, 'with_attr': None}), version='1.2' ).AndReturn(engine_resp) self.m.ReplayAll() result = self.controller.show(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) expected = { 'resource': { 'links': [ {'href': self._url(res_identity), 'rel': 'self'}, {'href': self._url(stack_identity), 'rel': 'stack'}, ], u'description': u'', u'resource_name': res_name, u'logical_resource_id': res_name, 
u'resource_status_reason': None, u'updated_time': u'2012-07-23T13:06:00Z', u'resource_status': u'CREATE_COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance', u'attributes': {u'foo': 'bar'}, } } self.assertEqual(expected, result) self.m.VerifyAll() def test_show_with_nested_stack(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) nested_stack_identity = identifier.HeatIdentifier(self.tenant, 'nested', 'some_id') req = self._get(stack_identity._tenant_path()) engine_resp = { u'description': u'', u'resource_identity': dict(res_identity), u'stack_name': stack_identity.stack_name, u'resource_name': res_name, u'resource_status_reason': None, u'updated_time': u'2012-07-23T13:06:00Z', u'stack_identity': dict(stack_identity), u'resource_action': u'CREATE', u'resource_status': u'COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance', u'attributes': {u'foo': 'bar'}, u'metadata': {u'ensureRunning': u'true'}, u'nested_stack_id': dict(nested_stack_identity) } self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('describe_stack_resource', {'stack_identity': stack_identity, 'resource_name': res_name, 'with_attr': None}), version='1.2' ).AndReturn(engine_resp) self.m.ReplayAll() result = self.controller.show(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) expected = [{'href': self._url(res_identity), 'rel': 'self'}, {'href': self._url(stack_identity), 'rel': 'stack'}, {'href': self._url(nested_stack_identity), 'rel': 'nested'} ] self.assertEqual(expected, result['resource']['links']) self.assertIsNone(result.get(rpc_api.RES_NESTED_STACK_ID)) self.m.VerifyAll() def test_show_nonexist(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'rubbish', '1') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(res_identity._tenant_path()) error = heat_exc.EntityNotFound(entity='Stack', name='a') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('describe_stack_resource', {'stack_identity': stack_identity, 'resource_name': res_name, 'with_attr': None}), version='1.2' ).AndRaise(tools.to_remote_error(error)) self.m.ReplayAll() resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.show, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) self.m.VerifyAll() def test_show_with_single_attribute(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'foo', '1') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) mock_describe = mock.Mock(return_value={'foo': 'bar'}) self.controller.rpc_client.describe_stack_resource = mock_describe req = self._get(res_identity._tenant_path(), {'with_attr': 'baz'}) resp = self.controller.show(req, 
tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) self.assertEqual({'resource': {'foo': 'bar'}}, resp) args, kwargs = mock_describe.call_args self.assertIn('baz', kwargs['with_attr']) def test_show_with_multiple_attributes(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'foo', '1') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) mock_describe = mock.Mock(return_value={'foo': 'bar'}) self.controller.rpc_client.describe_stack_resource = mock_describe req = self._get(res_identity._tenant_path()) req.environ['QUERY_STRING'] = 'with_attr=a1&with_attr=a2&with_attr=a3' resp = self.controller.show(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) self.assertEqual({'resource': {'foo': 'bar'}}, resp) args, kwargs = mock_describe.call_args self.assertIn('a1', kwargs['with_attr']) self.assertIn('a2', kwargs['with_attr']) self.assertIn('a3', kwargs['with_attr']) def test_show_nonexist_resource(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) res_name = 'Wibble' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(res_identity._tenant_path()) error = heat_exc.ResourceNotFound(stack_name='a', resource_name='b') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('describe_stack_resource', {'stack_identity': stack_identity, 'resource_name': res_name, 'with_attr': None}), version='1.2' ).AndRaise(tools.to_remote_error(error)) self.m.ReplayAll() resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.show, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) self.assertEqual(404, resp.json['code']) self.assertEqual('ResourceNotFound', resp.json['error']['type']) self.m.VerifyAll() def test_show_uncreated_resource(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(res_identity._tenant_path()) error = heat_exc.ResourceNotAvailable(resource_name='') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('describe_stack_resource', {'stack_identity': stack_identity, 'resource_name': res_name, 'with_attr': None}), version='1.2' ).AndRaise(tools.to_remote_error(error)) self.m.ReplayAll() resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.show, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) self.assertEqual(404, resp.json['code']) self.assertEqual('ResourceNotAvailable', resp.json['error']['type']) self.m.VerifyAll() def test_show_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', False) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(res_identity._tenant_path()) resp = tools.request_with_middleware( 
fault.FaultWrapper, self.controller.show, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) self.assertEqual(403, resp.status_int) self.assertIn('403 Forbidden', six.text_type(resp)) def test_metadata_show(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'metadata', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(stack_identity._tenant_path()) engine_resp = { u'description': u'', u'resource_identity': dict(res_identity), u'stack_name': stack_identity.stack_name, u'resource_name': res_name, u'resource_status_reason': None, u'updated_time': u'2012-07-23T13:06:00Z', u'stack_identity': dict(stack_identity), u'resource_action': u'CREATE', u'resource_status': u'COMPLETE', u'physical_resource_id': u'a3455d8c-9f88-404d-a85b-5315293e67de', u'resource_type': u'AWS::EC2::Instance', u'metadata': {u'ensureRunning': u'true'} } self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('describe_stack_resource', {'stack_identity': stack_identity, 'resource_name': res_name, 'with_attr': False}), version='1.2' ).AndReturn(engine_resp) self.m.ReplayAll() result = self.controller.metadata(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) expected = {'metadata': {u'ensureRunning': u'true'}} self.assertEqual(expected, result) self.m.VerifyAll() def test_metadata_show_nonexist(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'metadata', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'rubbish', '1') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(res_identity._tenant_path() + '/metadata') error = heat_exc.EntityNotFound(entity='Stack', name='a') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('describe_stack_resource', {'stack_identity': stack_identity, 'resource_name': res_name, 'with_attr': False}), version='1.2' ).AndRaise(tools.to_remote_error(error)) self.m.ReplayAll() resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.metadata, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) self.m.VerifyAll() def test_metadata_show_nonexist_resource(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'metadata', True) res_name = 'wibble' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(res_identity._tenant_path() + '/metadata') error = heat_exc.ResourceNotFound(stack_name='a', resource_name='b') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('describe_stack_resource', {'stack_identity': stack_identity, 'resource_name': res_name, 'with_attr': False}), version='1.2' ).AndRaise(tools.to_remote_error(error)) self.m.ReplayAll() resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.metadata, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) 
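        # The FaultWrapper middleware unpacks the RemoteError raised by the
        # stubbed RPC call, so a remote ResourceNotFound is expected to
        # surface as an HTTP 404 whose JSON fault body preserves the
        # original exception type.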
self.assertEqual(404, resp.json['code']) self.assertEqual('ResourceNotFound', resp.json['error']['type']) self.m.VerifyAll() def test_metadata_show_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'metadata', False) res_name = 'wibble' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') res_identity = identifier.ResourceIdentifier(resource_name=res_name, **stack_identity) req = self._get(res_identity._tenant_path() + '/metadata') resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.metadata, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name) self.assertEqual(403, resp.status_int) self.assertIn('403 Forbidden', six.text_type(resp)) def test_signal(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'signal', True) res_name = 'WikiDatabase' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') req = self._get(stack_identity._tenant_path()) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('resource_signal', {'stack_identity': stack_identity, 'resource_name': res_name, 'details': 'Signal content', 'sync_call': False}), version='1.3') self.m.ReplayAll() result = self.controller.signal(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name, body="Signal content") self.assertIsNone(result) self.m.VerifyAll() def test_mark_unhealthy_valid_request(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'mark_unhealthy', True) res_name = 'WebServer' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') req = self._get(stack_identity._tenant_path()) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') body = {'mark_unhealthy': True, rpc_api.RES_STATUS_DATA: 'Anything'} params = {'stack_identity': stack_identity, 'resource_name': res_name} params.update(body) rpc_client.EngineClient.call( req.context, ('resource_mark_unhealthy', params), version='1.26') self.m.ReplayAll() result = self.controller.mark_unhealthy( req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name, body=body) self.assertIsNone(result) self.m.VerifyAll() def test_mark_unhealthy_without_reason(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'mark_unhealthy', True) res_name = 'WebServer' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') req = self._get(stack_identity._tenant_path()) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') body = {'mark_unhealthy': True, rpc_api.RES_STATUS_DATA: ''} params = {'stack_identity': stack_identity, 'resource_name': res_name} params.update(body) rpc_client.EngineClient.call( req.context, ('resource_mark_unhealthy', params), version='1.26') self.m.ReplayAll() del body[rpc_api.RES_STATUS_DATA] result = self.controller.mark_unhealthy( req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name, body=body) self.assertIsNone(result) self.m.VerifyAll() def test_mark_unhealthy_with_invalid_keys(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'mark_unhealthy', True) res_name = 'WebServer' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') req = self._get(stack_identity._tenant_path()) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') body = {'mark_unhealthy': True, 
rpc_api.RES_STATUS_DATA: 'Any', 'invalid_key1': 1, 'invalid_key2': 2} expected = "Invalid keys in resource mark unhealthy" actual = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.mark_unhealthy, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name, body=body) self.assertIn(expected, six.text_type(actual)) self.assertIn('invalid_key1', six.text_type(actual)) self.assertIn('invalid_key2', six.text_type(actual)) def test_mark_unhealthy_with_invalid_value(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'mark_unhealthy', True) res_name = 'WebServer' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') req = self._get(stack_identity._tenant_path()) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') body = {'mark_unhealthy': 'XYZ', rpc_api.RES_STATUS_DATA: 'Any'} expected = ('Unrecognized value "XYZ" for "mark_unhealthy", ' 'acceptable values are: true, false') actual = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.mark_unhealthy, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name, body=body) self.assertIn(expected, six.text_type(actual)) def test_mark_unhealthy_without_mark_unhealthy_key(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'mark_unhealthy', True) res_name = 'WebServer' stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') req = self._get(stack_identity._tenant_path()) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') body = {rpc_api.RES_STATUS_DATA: 'Any'} expected = ("Missing mandatory (%s) key from " "mark unhealthy request" % 'mark_unhealthy') actual = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.mark_unhealthy, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, resource_name=res_name, body=body) self.assertIn(expected, six.text_type(actual)) heat-10.0.2/heat/tests/api/openstack_v1/test_actions.py0000666000175000017500000002306413343562351023053 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import mock import six import webob.exc import heat.api.middleware.fault as fault import heat.api.openstack.v1.actions as actions from heat.common import identifier from heat.common import policy from heat.rpc import client as rpc_client from heat.tests.api.openstack_v1 import tools from heat.tests import common @mock.patch.object(policy.Enforcer, 'enforce') class ActionControllerTest(tools.ControllerTest, common.HeatTestCase): """Tests the API class ActionController. 
Tests the API class which acts as the WSGI controller, the endpoint processing API requests after they are routed """ def setUp(self): super(ActionControllerTest, self).setUp() # Create WSGI controller instance class DummyConfig(object): bind_port = 8004 cfgopts = DummyConfig() self.controller = actions.ActionController(options=cfgopts) def test_action_suspend(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'action', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') body = {'suspend': None} req = self._post(stack_identity._tenant_path() + '/actions', data=json.dumps(body)) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('stack_suspend', {'stack_identity': stack_identity}) ).AndReturn(None) self.m.ReplayAll() result = self.controller.action(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, body=body) self.assertIsNone(result) self.m.VerifyAll() def test_action_resume(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'action', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') body = {'resume': None} req = self._post(stack_identity._tenant_path() + '/actions', data=json.dumps(body)) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('stack_resume', {'stack_identity': stack_identity}) ).AndReturn(None) self.m.ReplayAll() result = self.controller.action(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, body=body) self.assertIsNone(result) self.m.VerifyAll() def _test_action_cancel_update(self, mock_enforce, with_rollback=True): self._mock_enforce_setup(mock_enforce, 'action', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') if with_rollback: body = {'cancel_update': None} else: body = {'cancel_without_rollback': None} req = self._post(stack_identity._tenant_path() + '/actions', data=json.dumps(body)) client = self.patchobject(rpc_client.EngineClient, 'call') result = self.controller.action(req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, body=body) self.assertIsNone(result) client.assert_called_with( req.context, ('stack_cancel_update', {'stack_identity': stack_identity, 'cancel_with_rollback': with_rollback}), version='1.14') def test_action_cancel_update(self, mock_enforce): self._test_action_cancel_update(mock_enforce) def test_action_cancel_without_rollback(self, mock_enforce): self._test_action_cancel_update(mock_enforce, False) def test_action_badaction(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'action', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') body = {'notallowed': None} req = self._post(stack_identity._tenant_path() + '/actions', data=json.dumps(body)) self.m.ReplayAll() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.action, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, body=body) self.m.VerifyAll() def test_action_badaction_empty(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'action', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') body = {} req = self._post(stack_identity._tenant_path() + '/actions', data=json.dumps(body)) self.m.ReplayAll() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.action, req, tenant_id=self.tenant, 
stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, body=body) self.m.VerifyAll() def test_action_badaction_multiple(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'action', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') body = {'one': None, 'two': None} req = self._post(stack_identity._tenant_path() + '/actions', data=json.dumps(body)) self.m.ReplayAll() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.action, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, body=body) self.m.VerifyAll() def test_action_rmt_aterr(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'action', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') body = {'suspend': None} req = self._post(stack_identity._tenant_path() + '/actions', data=json.dumps(body)) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('stack_suspend', {'stack_identity': stack_identity}) ).AndRaise(tools.to_remote_error(AttributeError())) self.m.ReplayAll() resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.action, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, body=body) self.assertEqual(400, resp.json['code']) self.assertEqual('AttributeError', resp.json['error']['type']) self.m.VerifyAll() def test_action_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'action', False) stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') body = {'suspend': None} req = self._post(stack_identity._tenant_path() + '/actions', data=json.dumps(body)) resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.action, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, body=body) self.assertEqual(403, resp.status_int) self.assertIn('403 Forbidden', six.text_type(resp)) def test_action_badaction_ise(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'action', True) stack_identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') body = {'oops': None} req = self._post(stack_identity._tenant_path() + '/actions', data=json.dumps(body)) self.m.ReplayAll() self.controller.ACTIONS = (SUSPEND, NEW) = ('suspend', 'oops') self.assertRaises(webob.exc.HTTPInternalServerError, self.controller.action, req, tenant_id=self.tenant, stack_name=stack_identity.stack_name, stack_id=stack_identity.stack_id, body=body) self.m.VerifyAll() heat-10.0.2/heat/tests/api/openstack_v1/__init__.py0000666000175000017500000000000013343562340022072 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/api/openstack_v1/test_stacks.py0000666000175000017500000035377713343562351022724 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
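# The InstantiationData tests below parse the JSON body of a stack
# create/update request. A rough orientation sketch (illustrative only,
# echoing the fixtures used in the tests rather than adding a new one):
#
#     body = {
#         'stack_name': 'wibble',
#         'template': {'foo': 'bar', 'blarg': 'wibble'},
#         'parameters': {'foo': 'bar'},
#         'timeout_mins': 60,
#     }
#     data = stacks.InstantiationData(body)
#     data.stack_name()   # -> 'wibble'
#     data.template()     # -> the inline template; when both 'template' and
#                         #    'template_url' are given, the inline one wins
#     data.args()         # -> {'timeout_mins': 60}, i.e. whatever remains
#                         #    after the well-known keys are stripped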
import json import mock from oslo_config import cfg import six import webob.exc import heat.api.middleware.fault as fault import heat.api.openstack.v1.stacks as stacks from heat.api.openstack.v1.views import stacks_view from heat.common import context from heat.common import exception as heat_exc from heat.common import identifier from heat.common import policy from heat.common import template_format from heat.common import urlfetch from heat.rpc import api as rpc_api from heat.rpc import client as rpc_client from heat.tests.api.openstack_v1 import tools from heat.tests import common class InstantiationDataTest(common.HeatTestCase): def test_parse_error_success(self): with stacks.InstantiationData.parse_error_check('Garbage'): pass def test_parse_error(self): def generate_error(): with stacks.InstantiationData.parse_error_check('Garbage'): raise ValueError self.assertRaises(webob.exc.HTTPBadRequest, generate_error) def test_parse_error_message(self): # make sure the parser error gets through to the caller. bad_temp = ''' heat_template_version: '2013-05-23' parameters: KeyName: type: string description: bla ''' def generate_error(): with stacks.InstantiationData.parse_error_check('foo'): template_format.parse(bad_temp) parse_ex = self.assertRaises(webob.exc.HTTPBadRequest, generate_error) self.assertIn('foo', six.text_type(parse_ex)) def test_stack_name(self): body = {'stack_name': 'wibble'} data = stacks.InstantiationData(body) self.assertEqual('wibble', data.stack_name()) def test_stack_name_missing(self): body = {'not the stack_name': 'wibble'} data = stacks.InstantiationData(body) self.assertRaises(webob.exc.HTTPBadRequest, data.stack_name) def test_template_inline(self): template = {'foo': 'bar', 'blarg': 'wibble'} body = {'template': template} data = stacks.InstantiationData(body) self.assertEqual(template, data.template()) def test_template_string_json(self): template = ('{"heat_template_version": "2013-05-23",' '"foo": "bar", "blarg": "wibble"}') body = {'template': template} data = stacks.InstantiationData(body) self.assertEqual(json.loads(template), data.template()) def test_template_string_yaml(self): template = '''HeatTemplateFormatVersion: 2012-12-12 foo: bar blarg: wibble ''' parsed = {u'HeatTemplateFormatVersion': u'2012-12-12', u'blarg': u'wibble', u'foo': u'bar'} body = {'template': template} data = stacks.InstantiationData(body) self.assertEqual(parsed, data.template()) def test_template_int(self): template = '42' body = {'template': template} data = stacks.InstantiationData(body) self.assertRaises(webob.exc.HTTPBadRequest, data.template) def test_template_url(self): template = {'heat_template_version': '2013-05-23', 'foo': 'bar', 'blarg': 'wibble'} url = 'http://example.com/template' body = {'template_url': url} data = stacks.InstantiationData(body) self.m.StubOutWithMock(urlfetch, 'get') urlfetch.get(url).AndReturn(json.dumps(template)) self.m.ReplayAll() self.assertEqual(template, data.template()) self.m.VerifyAll() def test_template_priority(self): template = {'foo': 'bar', 'blarg': 'wibble'} url = 'http://example.com/template' body = {'template': template, 'template_url': url} data = stacks.InstantiationData(body) self.m.StubOutWithMock(urlfetch, 'get') self.m.ReplayAll() self.assertEqual(template, data.template()) self.m.VerifyAll() def test_template_missing(self): template = {'foo': 'bar', 'blarg': 'wibble'} body = {'not the template': template} data = stacks.InstantiationData(body) self.assertRaises(webob.exc.HTTPBadRequest, data.template) def 
test_template_exceeds_max_template_size(self): cfg.CONF.set_override('max_template_size', 10) template = json.dumps(['a'] * cfg.CONF.max_template_size) body = {'template': template} data = stacks.InstantiationData(body) error = self.assertRaises(heat_exc.RequestLimitExceeded, data.template) msg = ('Request limit exceeded: Template size (%(actual_len)s ' 'bytes) exceeds maximum allowed size (%(limit)s bytes).') % { 'actual_len': len(str(template)), 'limit': cfg.CONF.max_template_size} self.assertEqual(msg, six.text_type(error)) def test_parameters(self): params = {'foo': 'bar', 'blarg': 'wibble'} body = {'parameters': params, 'encrypted_param_names': [], 'parameter_defaults': {}, 'event_sinks': [], 'resource_registry': {}} data = stacks.InstantiationData(body) self.assertEqual(body, data.environment()) def test_environment_only_params(self): env = {'parameters': {'foo': 'bar', 'blarg': 'wibble'}} body = {'environment': env} data = stacks.InstantiationData(body) self.assertEqual(env, data.environment()) def test_environment_with_env_files(self): env = {'parameters': {'foo': 'bar', 'blarg': 'wibble'}} body = {'environment': env, 'environment_files': ['env.yaml']} expect = {'parameters': {}, 'encrypted_param_names': [], 'parameter_defaults': {}, 'event_sinks': [], 'resource_registry': {}} data = stacks.InstantiationData(body) self.assertEqual(expect, data.environment()) def test_environment_and_parameters(self): body = {'parameters': {'foo': 'bar'}, 'environment': {'parameters': {'blarg': 'wibble'}}} expect = {'parameters': {'blarg': 'wibble', 'foo': 'bar'}, 'encrypted_param_names': [], 'parameter_defaults': {}, 'event_sinks': [], 'resource_registry': {}} data = stacks.InstantiationData(body) self.assertEqual(expect, data.environment()) def test_parameters_override_environment(self): # This tests that the cli parameters will override # any parameters in the environment. body = {'parameters': {'foo': 'bar', 'tester': 'Yes'}, 'environment': {'parameters': {'blarg': 'wibble', 'tester': 'fail'}}} expect = {'parameters': {'blarg': 'wibble', 'foo': 'bar', 'tester': 'Yes'}, 'encrypted_param_names': [], 'parameter_defaults': {}, 'event_sinks': [], 'resource_registry': {}} data = stacks.InstantiationData(body) self.assertEqual(expect, data.environment()) def test_environment_empty_params(self): env = {'parameters': None} body = {'environment': env} data = stacks.InstantiationData(body) self.assertRaises(webob.exc.HTTPBadRequest, data.environment) def test_environment_bad_format(self): env = {'somethingnotsupported': {'blarg': 'wibble'}} body = {'environment': json.dumps(env)} data = stacks.InstantiationData(body) self.assertRaises(webob.exc.HTTPBadRequest, data.environment) def test_environment_missing(self): env = {'foo': 'bar', 'blarg': 'wibble'} body = {'not the environment': env} data = stacks.InstantiationData(body) self.assertEqual({'parameters': {}, 'encrypted_param_names': [], 'parameter_defaults': {}, 'resource_registry': {}, 'event_sinks': []}, data.environment()) def test_args(self): body = { 'parameters': {}, 'environment': {}, 'stack_name': 'foo', 'template': {}, 'template_url': 'http://example.com/', 'timeout_mins': 60, } data = stacks.InstantiationData(body) self.assertEqual({'timeout_mins': 60}, data.args()) @mock.patch.object(policy.Enforcer, 'enforce') class StackControllerTest(tools.ControllerTest, common.HeatTestCase): """Tests the API class StackController. 
Tests the API class which acts as the WSGI controller, the endpoint processing API requests after they are routed """ def setUp(self): super(StackControllerTest, self).setUp() # Create WSGI controller instance class DummyConfig(object): bind_port = 8004 cfgopts = DummyConfig() self.controller = stacks.StackController(options=cfgopts) @mock.patch.object(rpc_client.EngineClient, 'call') def test_index(self, mock_call, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) req = self._get('/stacks') identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') engine_resp = [ { u'stack_identity': dict(identity), u'updated_time': u'2012-07-09T09:13:11Z', u'template_description': u'blah', u'description': u'blah', u'stack_status_reason': u'Stack successfully created', u'creation_time': u'2012-07-09T09:12:45Z', u'stack_name': identity.stack_name, u'stack_action': u'CREATE', u'stack_status': u'COMPLETE', u'parameters': {}, u'outputs': [], u'notification_topics': [], u'capabilities': [], u'disable_rollback': True, u'timeout_mins': 60, } ] mock_call.return_value = engine_resp result = self.controller.index(req, tenant_id=identity.tenant) expected = { 'stacks': [ { 'links': [{"href": self._url(identity), "rel": "self"}], 'id': '1', u'updated_time': u'2012-07-09T09:13:11Z', u'description': u'blah', u'stack_status_reason': u'Stack successfully created', u'creation_time': u'2012-07-09T09:12:45Z', u'stack_name': u'wordpress', u'stack_status': u'CREATE_COMPLETE' } ] } self.assertEqual(expected, result) default_args = {'limit': None, 'sort_keys': None, 'marker': None, 'sort_dir': None, 'filters': None, 'show_deleted': False, 'show_nested': False, 'show_hidden': False, 'tags': None, 'tags_any': None, 'not_tags': None, 'not_tags_any': None} mock_call.assert_called_once_with( req.context, ('list_stacks', default_args), version='1.33') @mock.patch.object(rpc_client.EngineClient, 'call') def test_index_whitelists_pagination_params(self, mock_call, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) params = { 'limit': 10, 'sort_keys': 'fake sort keys', 'marker': 'fake marker', 'sort_dir': 'fake sort dir', 'balrog': 'you shall not pass!' 
        }
        req = self._get('/stacks', params=params)
        mock_call.return_value = []

        self.controller.index(req, tenant_id=self.tenant)

        rpc_call_args, _ = mock_call.call_args
        engine_args = rpc_call_args[1][1]
        self.assertEqual(12, len(engine_args))
        self.assertIn('limit', engine_args)
        self.assertIn('sort_keys', engine_args)
        self.assertIn('marker', engine_args)
        self.assertIn('sort_dir', engine_args)
        self.assertIn('filters', engine_args)
        self.assertNotIn('balrog', engine_args)

    @mock.patch.object(rpc_client.EngineClient, 'call')
    def test_index_limit_not_int(self, mock_call, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'index', True)
        params = {'limit': 'not-an-int'}
        req = self._get('/stacks', params=params)
        ex = self.assertRaises(webob.exc.HTTPBadRequest,
                               self.controller.index,
                               req, tenant_id=self.tenant)
        self.assertEqual("Only integer is acceptable by 'limit'.",
                         six.text_type(ex))
        self.assertFalse(mock_call.called)

    @mock.patch.object(rpc_client.EngineClient, 'call')
    def test_index_whitelist_filter_params(self, mock_call, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'index', True)
        params = {
            'id': 'fake id',
            'status': 'fake status',
            'name': 'fake name',
            'action': 'fake action',
            'username': 'fake username',
            'tenant': 'fake tenant',
            'owner_id': 'fake owner-id',
            'stack_name': 'fake stack name',
            'stack_identity': 'fake identity',
            'creation_time': 'create timestamp',
            'updated_time': 'update timestamp',
            'deletion_time': 'deletion timestamp',
            'notification_topics': 'fake topic',
            'description': 'fake description',
            'template_description': 'fake description',
            'parameters': 'fake params',
            'outputs': 'fake outputs',
            'stack_action': 'fake action',
            'stack_status': 'fake status',
            'stack_status_reason': 'fake status reason',
            'capabilities': 'fake capabilities',
            'disable_rollback': 'fake value',
            'timeout_mins': 'fake timeout',
            'stack_owner': 'fake owner',
            'parent': 'fake parent',
            'stack_user_project_id': 'fake project id',
            'tags': 'fake tags',
            'barlog': 'you shall not pass!'
} req = self._get('/stacks', params=params) mock_call.return_value = [] self.controller.index(req, tenant_id=self.tenant) rpc_call_args, _ = mock_call.call_args engine_args = rpc_call_args[1][1] self.assertIn('filters', engine_args) filters = engine_args['filters'] self.assertEqual(16, len(filters)) for key in ('id', 'status', 'name', 'action', 'username', 'tenant', 'owner_id', 'stack_name', 'stack_action', 'stack_status', 'stack_status_reason', 'disable_rollback', 'timeout_mins', 'stack_owner', 'parent', 'stack_user_project_id'): self.assertIn(key, filters) for key in ('stack_identity', 'creation_time', 'updated_time', 'deletion_time', 'notification_topics', 'description', 'template_description', 'parameters', 'outputs', 'capabilities', 'tags', 'barlog'): self.assertNotIn(key, filters) def test_index_returns_stack_count_if_with_count_is_true( self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) params = {'with_count': 'True'} req = self._get('/stacks', params=params) engine = self.controller.rpc_client engine.list_stacks = mock.Mock(return_value=[]) engine.count_stacks = mock.Mock(return_value=0) result = self.controller.index(req, tenant_id=self.tenant) self.assertEqual(0, result['count']) def test_index_doesnt_return_stack_count_if_with_count_is_false( self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) params = {'with_count': 'false'} req = self._get('/stacks', params=params) engine = self.controller.rpc_client engine.list_stacks = mock.Mock(return_value=[]) engine.count_stacks = mock.Mock() result = self.controller.index(req, tenant_id=self.tenant) self.assertNotIn('count', result) self.assertFalse(engine.count_stacks.called) def test_index_with_count_is_invalid(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) params = {'with_count': 'invalid_value'} req = self._get('/stacks', params=params) exc = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req, tenant_id=self.tenant) excepted = ('Unrecognized value "invalid_value" for "with_count", ' 'acceptable values are: true, false') self.assertIn(excepted, six.text_type(exc)) @mock.patch.object(rpc_client.EngineClient, 'count_stacks') def test_index_doesnt_break_with_old_engine(self, mock_count_stacks, mock_enforce): self._mock_enforce_setup(mock_enforce, 'index', True) params = {'with_count': 'True'} req = self._get('/stacks', params=params) engine = self.controller.rpc_client engine.list_stacks = mock.Mock(return_value=[]) mock_count_stacks.side_effect = AttributeError("Should not exist") result = self.controller.index(req, tenant_id=self.tenant) self.assertNotIn('count', result) def test_index_enforces_global_index_if_global_tenant(self, mock_enforce): params = {'global_tenant': 'True'} req = self._get('/stacks', params=params) rpc_client = self.controller.rpc_client rpc_client.list_stacks = mock.Mock(return_value=[]) rpc_client.count_stacks = mock.Mock() self.controller.index(req, tenant_id=self.tenant) mock_enforce.assert_called_with(action='global_index', scope=self.controller.REQUEST_SCOPE, is_registered_policy=True, context=self.context) def test_global_index_uses_admin_context(self, mock_enforce): rpc_client = self.controller.rpc_client rpc_client.list_stacks = mock.Mock(return_value=[]) rpc_client.count_stacks = mock.Mock() mock_admin_ctxt = self.patchobject(context, 'get_admin_context') params = {'global_tenant': 'True'} req = self._get('/stacks', params=params) self.controller.index(req, tenant_id=self.tenant) 
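        # With global_tenant=True the controller is expected to swap in an
        # admin context before listing stacks; the assertions below check
        # the RPC call shape and that context.get_admin_context was
        # consulted exactly once.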

    def test_index_with_admin_context(self, mock_enforce):
        rpc_client = self.controller.rpc_client
        rpc_client.list_stacks = mock.Mock(return_value=[])
        rpc_client.count_stacks = mock.Mock()
        view_collection_mock = self.patchobject(stacks_view, 'collection')
        req = self._get('/stacks')
        req.context.is_admin = True

        self.controller.index(req, tenant_id=self.tenant)
        rpc_client.list_stacks.assert_called_once_with(mock.ANY,
                                                       filters=mock.ANY)
        view_collection_mock.assert_called_once_with(mock.ANY,
                                                     stacks=mock.ANY,
                                                     count=mock.ANY,
                                                     include_project=True)

    def test_global_index_show_deleted_false(self, mock_enforce):
        rpc_client = self.controller.rpc_client
        rpc_client.list_stacks = mock.Mock(return_value=[])
        rpc_client.count_stacks = mock.Mock()
        params = {'show_deleted': 'False'}
        req = self._get('/stacks', params=params)

        self.controller.index(req, tenant_id=self.tenant)
        rpc_client.list_stacks.assert_called_once_with(mock.ANY,
                                                       filters=mock.ANY,
                                                       show_deleted=False)

    def test_global_index_show_deleted_true(self, mock_enforce):
        rpc_client = self.controller.rpc_client
        rpc_client.list_stacks = mock.Mock(return_value=[])
        rpc_client.count_stacks = mock.Mock()
        params = {'show_deleted': 'True'}
        req = self._get('/stacks', params=params)

        self.controller.index(req, tenant_id=self.tenant)
        rpc_client.list_stacks.assert_called_once_with(mock.ANY,
                                                       filters=mock.ANY,
                                                       show_deleted=True)

    def test_global_index_show_nested_false(self, mock_enforce):
        rpc_client = self.controller.rpc_client
        rpc_client.list_stacks = mock.Mock(return_value=[])
        rpc_client.count_stacks = mock.Mock()
        params = {'show_nested': 'False'}
        req = self._get('/stacks', params=params)

        self.controller.index(req, tenant_id=self.tenant)
        rpc_client.list_stacks.assert_called_once_with(mock.ANY,
                                                       filters=mock.ANY,
                                                       show_nested=False)

    def test_global_index_show_nested_true(self, mock_enforce):
        rpc_client = self.controller.rpc_client
        rpc_client.list_stacks = mock.Mock(return_value=[])
        rpc_client.count_stacks = mock.Mock()
        params = {'show_nested': 'True'}
        req = self._get('/stacks', params=params)

        self.controller.index(req, tenant_id=self.tenant)
        rpc_client.list_stacks.assert_called_once_with(mock.ANY,
                                                       filters=mock.ANY,
                                                       show_nested=True)

    def test_global_index_show_hidden_true(self, mock_enforce):
        rpc_client = self.controller.rpc_client
        rpc_client.list_stacks = mock.Mock(return_value=[])
        rpc_client.count_stacks = mock.Mock()
        params = {'show_hidden': 'True'}
        req = self._get('/stacks', params=params)

        self.controller.index(req, tenant_id=self.tenant)
        rpc_client.list_stacks.assert_called_once_with(mock.ANY,
                                                       filters=mock.ANY,
                                                       show_hidden=True)

    def test_global_index_show_hidden_false(self, mock_enforce):
        rpc_client = self.controller.rpc_client
        rpc_client.list_stacks = mock.Mock(return_value=[])
        rpc_client.count_stacks = mock.Mock()
        params = {'show_hidden': 'false'}
        req = self._get('/stacks', params=params)

        self.controller.index(req, tenant_id=self.tenant)
        rpc_client.list_stacks.assert_called_once_with(mock.ANY,
                                                       filters=mock.ANY,
                                                       show_hidden=False)

    def test_index_show_deleted_True_with_count_false(self, mock_enforce):
        rpc_client = self.controller.rpc_client
        rpc_client.list_stacks = mock.Mock(return_value=[])
        rpc_client.count_stacks = mock.Mock()
        params = {'show_deleted': 'True',
                  'with_count': 'false'}
        req = self._get('/stacks', params=params)

        result = self.controller.index(req, tenant_id=self.tenant)
        self.assertNotIn('count', result)
        rpc_client.list_stacks.assert_called_once_with(mock.ANY,
                                                       filters=mock.ANY,
                                                       show_deleted=True)
        self.assertFalse(rpc_client.count_stacks.called)
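
    # When a count is requested together with other flags, count_stacks must
    # be called with the same filter arguments as list_stacks, so that the
    # returned count describes the same set of stacks as the listing.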

    def test_index_show_deleted_True_with_count_True(self, mock_enforce):
        rpc_client = self.controller.rpc_client
        rpc_client.list_stacks = mock.Mock(return_value=[])
        rpc_client.count_stacks = mock.Mock(return_value=0)
        params = {'show_deleted': 'True',
                  'with_count': 'True'}
        req = self._get('/stacks', params=params)

        result = self.controller.index(req, tenant_id=self.tenant)
        self.assertEqual(0, result['count'])
        rpc_client.list_stacks.assert_called_once_with(mock.ANY,
                                                       filters=mock.ANY,
                                                       show_deleted=True)
        rpc_client.count_stacks.assert_called_once_with(mock.ANY,
                                                        filters=mock.ANY,
                                                        show_deleted=True,
                                                        show_nested=False,
                                                        show_hidden=False,
                                                        tags=None,
                                                        tags_any=None,
                                                        not_tags=None,
                                                        not_tags_any=None)

    @mock.patch.object(rpc_client.EngineClient, 'call')
    def test_detail(self, mock_call, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'detail', True)
        req = self._get('/stacks/detail')

        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')

        engine_resp = [
            {
                u'stack_identity': dict(identity),
                u'updated_time': u'2012-07-09T09:13:11Z',
                u'template_description': u'blah',
                u'description': u'blah',
                u'stack_status_reason': u'Stack successfully created',
                u'creation_time': u'2012-07-09T09:12:45Z',
                u'stack_name': identity.stack_name,
                u'stack_action': u'CREATE',
                u'stack_status': u'COMPLETE',
                u'parameters': {'foo': 'bar'},
                u'outputs': ['key', 'value'],
                u'notification_topics': [],
                u'capabilities': [],
                u'disable_rollback': True,
                u'timeout_mins': 60,
            }
        ]
        mock_call.return_value = engine_resp

        result = self.controller.detail(req, tenant_id=identity.tenant)

        expected = {
            'stacks': [
                {
                    'links': [{"href": self._url(identity),
                               "rel": "self"}],
                    'id': '1',
                    u'updated_time': u'2012-07-09T09:13:11Z',
                    u'template_description': u'blah',
                    u'description': u'blah',
                    u'stack_status_reason': u'Stack successfully created',
                    u'creation_time': u'2012-07-09T09:12:45Z',
                    u'stack_name': identity.stack_name,
                    u'stack_status': u'CREATE_COMPLETE',
                    u'parameters': {'foo': 'bar'},
                    u'outputs': ['key', 'value'],
                    u'notification_topics': [],
                    u'capabilities': [],
                    u'disable_rollback': True,
                    u'timeout_mins': 60,
                }
            ]
        }

        self.assertEqual(expected, result)
        default_args = {'limit': None, 'sort_keys': None, 'marker': None,
                        'sort_dir': None, 'filters': None,
                        'show_deleted': False, 'show_nested': False,
                        'show_hidden': False, 'tags': None, 'tags_any': None,
                        'not_tags': None, 'not_tags_any': None}
        mock_call.assert_called_once_with(
            req.context, ('list_stacks', default_args), version='1.33')

    @mock.patch.object(rpc_client.EngineClient, 'call')
    def test_index_rmt_aterr(self, mock_call, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'index', True)
        req = self._get('/stacks')

        mock_call.side_effect = tools.to_remote_error(AttributeError())

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.index,
                                             req, tenant_id=self.tenant)

        self.assertEqual(400, resp.json['code'])
        self.assertEqual('AttributeError', resp.json['error']['type'])
        mock_call.assert_called_once_with(
            req.context, ('list_stacks', mock.ANY), version='1.33')

    def test_index_err_denied_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'index', False)
        req = self._get('/stacks')

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.index,
                                             req, tenant_id=self.tenant)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))

    @mock.patch.object(rpc_client.EngineClient, 'call')
    def test_index_rmt_interr(self, mock_call, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'index', True)
        req = self._get('/stacks')

        mock_call.side_effect = tools.to_remote_error(Exception())

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.index,
                                             req, tenant_id=self.tenant)

        self.assertEqual(500, resp.json['code'])
        self.assertEqual('Exception', resp.json['error']['type'])
        mock_call.assert_called_once_with(
            req.context, ('list_stacks', mock.ANY), version='1.33')
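
    # The create tests below use mox-style expectations: the exact
    # 'create_stack' payload (RPC version 1.29) is recorded with
    # StubOutWithMock/ReplayAll, so any drift in how the controller
    # translates the request body fails the test.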

    def test_create(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'stack_name': identity.stack_name,
                'parameters': parameters,
                'environment_files': ['foo.yaml'],
                'timeout_mins': 30}

        req = self._post('/stacks', json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('create_stack',
             {'stack_name': identity.stack_name,
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': ['foo.yaml'],
              'args': {'timeout_mins': 30},
              'owner_id': None,
              'nested_depth': 0,
              'user_creds_id': None,
              'parent_resource_name': None,
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
        ).AndReturn(dict(identity))
        self.m.ReplayAll()

        response = self.controller.create(req,
                                          tenant_id=identity.tenant,
                                          body=body)

        expected = {'stack':
                    {'id': '1',
                     'links': [{'href': self._url(identity), 'rel': 'self'}]}}
        self.assertEqual(expected, response)
        self.m.VerifyAll()

    def test_create_with_tags(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'stack_name': identity.stack_name,
                'parameters': parameters,
                'tags': 'tag1,tag2',
                'timeout_mins': 30}

        req = self._post('/stacks', json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('create_stack',
             {'stack_name': identity.stack_name,
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30, 'tags': ['tag1', 'tag2']},
              'owner_id': None,
              'nested_depth': 0,
              'user_creds_id': None,
              'parent_resource_name': None,
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
        ).AndReturn(dict(identity))
        self.m.ReplayAll()

        response = self.controller.create(req,
                                          tenant_id=identity.tenant,
                                          body=body)

        expected = {'stack':
                    {'id': '1',
                     'links': [{'href': self._url(identity), 'rel': 'self'}]}}
        self.assertEqual(expected, response)
        self.m.VerifyAll()
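
    # Adopting a stack goes through the create path: the request body has
    # template=None, the abandoned stack's data travels in
    # args['adopt_stack_data'], and the template sent to the engine is the
    # one embedded in that adopt data.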
"COMPLETE", "name": "database_password", "resource_id": "yBpuUROjfGQ2gKOD", "action": "CREATE", "type": "GenericResourceType", "metadata": {}}}} body = {'template': None, 'stack_name': identity.stack_name, 'parameters': parameters, 'timeout_mins': 30, 'adopt_stack_data': str(adopt_data)} req = self._post('/stacks', json.dumps(body)) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('create_stack', {'stack_name': identity.stack_name, 'template': template, 'params': {'parameters': parameters, 'encrypted_param_names': [], 'parameter_defaults': {}, 'event_sinks': [], 'resource_registry': {}}, 'files': {}, 'environment_files': None, 'args': {'timeout_mins': 30, 'adopt_stack_data': str(adopt_data)}, 'owner_id': None, 'nested_depth': 0, 'user_creds_id': None, 'parent_resource_name': None, 'stack_user_project_id': None, 'template_id': None}), version='1.29' ).AndReturn(dict(identity)) self.m.ReplayAll() response = self.controller.create(req, tenant_id=identity.tenant, body=body) expected = {'stack': {'id': '1', 'links': [{'href': self._url(identity), 'rel': 'self'}]}} self.assertEqual(expected, response) self.m.VerifyAll() def test_adopt_timeout_not_int(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'create', True) identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') body = {'template': None, 'stack_name': identity.stack_name, 'parameters': {}, 'timeout_mins': 'not-an-int', 'adopt_stack_data': 'does not matter'} req = self._post('/stacks', json.dumps(body)) mock_call = self.patchobject(rpc_client.EngineClient, 'call') ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, tenant_id=self.tenant, body=body) self.assertEqual("Only integer is acceptable by 'timeout_mins'.", six.text_type(ex)) self.assertFalse(mock_call.called) def test_adopt_error(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'create', True) identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') parameters = {"app_dbx": "test"} adopt_data = ["Test"] body = {'template': None, 'stack_name': identity.stack_name, 'parameters': parameters, 'timeout_mins': 30, 'adopt_stack_data': str(adopt_data)} req = self._post('/stacks', json.dumps(body)) self.m.ReplayAll() resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.create, req, tenant_id=self.tenant, body=body) self.assertEqual(400, resp.status_code) self.assertEqual('400 Bad Request', resp.status) self.assertIn('Invalid adopt data', resp.text) self.m.VerifyAll() def test_create_with_files(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'create', True) identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1') template = {u'Foo': u'bar'} parameters = {u'InstanceType': u'm1.xlarge'} body = {'template': template, 'stack_name': identity.stack_name, 'parameters': parameters, 'files': {'my.yaml': 'This is the file contents.'}, 'timeout_mins': 30} req = self._post('/stacks', json.dumps(body)) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('create_stack', {'stack_name': identity.stack_name, 'template': template, 'params': {'parameters': parameters, 'encrypted_param_names': [], 'parameter_defaults': {}, 'event_sinks': [], 'resource_registry': {}}, 'files': {'my.yaml': 'This is the file contents.'}, 'environment_files': None, 'args': {'timeout_mins': 30}, 'owner_id': None, 'nested_depth': 0, 'user_creds_id': None, 'parent_resource_name': None, 'stack_user_project_id': None, 

    def test_create_with_files(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'stack_name': identity.stack_name,
                'parameters': parameters,
                'files': {'my.yaml': 'This is the file contents.'},
                'timeout_mins': 30}

        req = self._post('/stacks', json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('create_stack',
             {'stack_name': identity.stack_name,
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {'my.yaml': 'This is the file contents.'},
              'environment_files': None,
              'args': {'timeout_mins': 30},
              'owner_id': None,
              'nested_depth': 0,
              'user_creds_id': None,
              'parent_resource_name': None,
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
        ).AndReturn(dict(identity))
        self.m.ReplayAll()

        result = self.controller.create(req,
                                        tenant_id=identity.tenant,
                                        body=body)
        expected = {'stack':
                    {'id': '1',
                     'links': [{'href': self._url(identity), 'rel': 'self'}]}}
        self.assertEqual(expected, result)

        self.m.VerifyAll()

    def test_create_err_rpcerr(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True, 3)
        stack_name = "wordpress"
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'stack_name': stack_name,
                'parameters': parameters,
                'timeout_mins': 30}

        req = self._post('/stacks', json.dumps(body))

        unknown_parameter = heat_exc.UnknownUserParameter(key='a')
        missing_parameter = heat_exc.UserParameterMissing(key='a')
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('create_stack',
             {'stack_name': stack_name,
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30},
              'owner_id': None,
              'nested_depth': 0,
              'user_creds_id': None,
              'parent_resource_name': None,
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
        ).AndRaise(tools.to_remote_error(AttributeError()))
        rpc_client.EngineClient.call(
            req.context,
            ('create_stack',
             {'stack_name': stack_name,
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30},
              'owner_id': None,
              'nested_depth': 0,
              'user_creds_id': None,
              'parent_resource_name': None,
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
        ).AndRaise(tools.to_remote_error(unknown_parameter))
        rpc_client.EngineClient.call(
            req.context,
            ('create_stack',
             {'stack_name': stack_name,
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30},
              'owner_id': None,
              'nested_depth': 0,
              'user_creds_id': None,
              'parent_resource_name': None,
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
        ).AndRaise(tools.to_remote_error(missing_parameter))
        self.m.ReplayAll()

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.create,
                                             req, tenant_id=self.tenant,
                                             body=body)

        self.assertEqual(400, resp.json['code'])
        self.assertEqual('AttributeError', resp.json['error']['type'])

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.create,
                                             req, tenant_id=self.tenant,
                                             body=body)

        self.assertEqual(400, resp.json['code'])
        self.assertEqual('UnknownUserParameter', resp.json['error']['type'])

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.create,
                                             req, tenant_id=self.tenant,
                                             body=body)

        self.assertEqual(400, resp.json['code'])
        self.assertEqual('UserParameterMissing', resp.json['error']['type'])
        self.m.VerifyAll()
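
    # Remote engine exceptions are mapped to HTTP status codes by the fault
    # middleware: StackExists becomes 409, while validation and parameter
    # errors become 400.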

    def test_create_err_existing(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
        stack_name = "wordpress"
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'stack_name': stack_name,
                'parameters': parameters,
                'timeout_mins': 30}

        req = self._post('/stacks', json.dumps(body))

        error = heat_exc.StackExists(stack_name='s')
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('create_stack',
             {'stack_name': stack_name,
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30},
              'owner_id': None,
              'nested_depth': 0,
              'user_creds_id': None,
              'parent_resource_name': None,
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
        ).AndRaise(tools.to_remote_error(error))
        self.m.ReplayAll()

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.create,
                                             req, tenant_id=self.tenant,
                                             body=body)

        self.assertEqual(409, resp.json['code'])
        self.assertEqual('StackExists', resp.json['error']['type'])
        self.m.VerifyAll()

    def test_create_timeout_not_int(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
        stack_name = "wordpress"
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'stack_name': stack_name,
                'parameters': parameters,
                'timeout_mins': 'not-an-int'}

        req = self._post('/stacks', json.dumps(body))

        mock_call = self.patchobject(rpc_client.EngineClient, 'call')

        ex = self.assertRaises(webob.exc.HTTPBadRequest,
                               self.controller.create,
                               req, tenant_id=self.tenant, body=body)

        self.assertEqual("Only integer is acceptable by 'timeout_mins'.",
                         six.text_type(ex))
        self.assertFalse(mock_call.called)

    def test_create_err_denied_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', False)
        stack_name = "wordpress"
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'stack_name': stack_name,
                'parameters': parameters,
                'timeout_mins': 30}

        req = self._post('/stacks', json.dumps(body))

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.create,
                                             req, tenant_id=self.tenant,
                                             body=body)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))

    def test_create_err_engine(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
        stack_name = "wordpress"
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'stack_name': stack_name,
                'parameters': parameters,
                'timeout_mins': 30}

        req = self._post('/stacks', json.dumps(body))

        error = heat_exc.StackValidationFailed(message='')
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('create_stack',
             {'stack_name': stack_name,
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30},
              'owner_id': None,
              'nested_depth': 0,
              'user_creds_id': None,
              'parent_resource_name': None,
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
        ).AndRaise(tools.to_remote_error(error))
        self.m.ReplayAll()

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.create,
                                             req, tenant_id=self.tenant,
                                             body=body)

        self.assertEqual(400, resp.json['code'])
        self.assertEqual('StackValidationFailed', resp.json['error']['type'])
        self.m.VerifyAll()
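
    # HTTPExceptionDisguise wraps a webob HTTP error raised inside the
    # controller; with debug enabled, the fault app converts it into a
    # regular JSON error response that includes a traceback.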

    def test_create_err_stack_bad_request(self, mock_enforce):
        cfg.CONF.set_override('debug', True)
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'parameters': parameters,
                'timeout_mins': 30}

        req = self._post('/stacks', json.dumps(body))

        error = heat_exc.HTTPExceptionDisguise(webob.exc.HTTPBadRequest())
        self.controller.create = mock.MagicMock(side_effect=error)

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.create, req,
                                             body)

        # When HTTP disguised exceptions reach the fault app, they are
        # converted into regular responses, just like non-HTTP exceptions
        self.assertEqual(400, resp.json['code'])
        self.assertEqual('HTTPBadRequest', resp.json['error']['type'])
        self.assertIsNotNone(resp.json['error']['traceback'])

    @mock.patch.object(rpc_client.EngineClient, 'call')
    @mock.patch.object(stacks.stacks_view, 'format_stack')
    def test_preview_stack(self, mock_format, mock_call, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'preview', True)
        body = {'stack_name': 'foo', 'template': {}, 'parameters': {}}
        req = self._post('/stacks/preview', json.dumps(body))
        mock_call.return_value = {}
        mock_format.return_value = 'formatted_stack'

        result = self.controller.preview(req, tenant_id=self.tenant,
                                         body=body)

        self.assertEqual({'stack': 'formatted_stack'}, result)

    @mock.patch.object(rpc_client.EngineClient, 'call')
    @mock.patch.object(stacks.stacks_view, 'format_stack')
    def test_preview_with_tags_timeout(self, mock_format, mock_call,
                                       mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'preview', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'stack_name': identity.stack_name,
                'parameters': parameters,
                'tags': 'tag1,tag2',
                'timeout_mins': 30}

        req = self._post('/stacks/preview', json.dumps(body))
        mock_call.return_value = {}
        mock_format.return_value = 'formatted_stack_preview'

        response = self.controller.preview(req,
                                           tenant_id=identity.tenant,
                                           body=body)

        rpc_client.EngineClient.call.assert_called_once_with(
            req.context,
            ('preview_stack',
             {'stack_name': identity.stack_name,
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30, 'tags': ['tag1', 'tag2']}}),
            version='1.23'
        )
        self.assertEqual({'stack': 'formatted_stack_preview'}, response)

    def test_preview_update_stack(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'preview_update', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'parameters': parameters,
                'files': {},
                'timeout_mins': 30}

        req = self._put('/stacks/%(stack_name)s/%(stack_id)s/preview' %
                        identity, json.dumps(body))
        resource_changes = {'updated': [],
                            'deleted': [],
                            'unchanged': [],
                            'added': [],
                            'replaced': []}

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('preview_update_stack',
             {'stack_identity': dict(identity),
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30}}),
            version='1.23'
        ).AndReturn(resource_changes)
        self.m.ReplayAll()

        result = self.controller.preview_update(req,
                                                tenant_id=identity.tenant,
                                                stack_name=identity.stack_name,
                                                stack_id=identity.stack_id,
                                                body=body)
        self.assertEqual({'resource_changes': resource_changes}, result)
        self.m.VerifyAll()
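
    # A PATCH preview updates against the existing stack: the controller
    # sets rpc_api.PARAM_EXISTING in args, and template may be None to mean
    # "keep the current template".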

    def test_preview_update_stack_patch(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'preview_update_patch', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': None,
                'parameters': parameters,
                'files': {},
                'timeout_mins': 30}

        req = self._patch('/stacks/%(stack_name)s/%(stack_id)s/preview' %
                          identity, json.dumps(body))
        resource_changes = {'updated': [],
                            'deleted': [],
                            'unchanged': [],
                            'added': [],
                            'replaced': []}

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('preview_update_stack',
             {'stack_identity': dict(identity),
              'template': None,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {rpc_api.PARAM_EXISTING: True,
                       'timeout_mins': 30}}),
            version='1.23'
        ).AndReturn(resource_changes)
        self.m.ReplayAll()

        result = self.controller.preview_update_patch(
            req, tenant_id=identity.tenant, stack_name=identity.stack_name,
            stack_id=identity.stack_id, body=body)
        self.assertEqual({'resource_changes': resource_changes}, result)
        self.m.VerifyAll()

    @mock.patch.object(rpc_client.EngineClient, 'call')
    def test_update_immutable_parameter(self, mock_call, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        template = {u'Foo': u'bar'}
        parameters = {u'param1': u'bar'}
        body = {'template': template,
                'parameters': parameters,
                'files': {},
                'timeout_mins': 30}

        req = self._put('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                        json.dumps(body))

        error = heat_exc.ImmutableParameterModified(keys='param1')
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('update_stack',
             {'stack_identity': dict(identity),
              'template': template,
              'params': {u'parameters': parameters,
                         u'encrypted_param_names': [],
                         u'parameter_defaults': {},
                         u'event_sinks': [],
                         u'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30},
              'template_id': None}),
            version='1.29'
        ).AndRaise(tools.to_remote_error(error))
        self.m.ReplayAll()

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.update,
                                             req, tenant_id=identity.tenant,
                                             stack_name=identity.stack_name,
                                             stack_id=identity.stack_id,
                                             body=body)

        self.assertEqual(400, resp.json['code'])
        self.assertEqual('ImmutableParameterModified',
                         resp.json['error']['type'])
        self.assertIn("The following parameters are immutable",
                      six.text_type(resp.json['error']['message']))
        self.m.VerifyAll()

    def test_lookup(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'lookup', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')

        req = self._get('/stacks/%(stack_name)s' % identity)

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('identify_stack', {'stack_name': identity.stack_name})
        ).AndReturn(identity)
        self.m.ReplayAll()

        found = self.assertRaises(
            webob.exc.HTTPFound, self.controller.lookup, req,
            tenant_id=identity.tenant, stack_name=identity.stack_name)
        self.assertEqual(self._url(identity), found.location)

        self.m.VerifyAll()
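
    # Lookup never returns a body: the stack name is resolved via
    # identify_stack and the client is redirected (HTTPFound) to the
    # canonical stack URL. An ARN already carries the full identity, so no
    # engine round trip is needed for it.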

    def test_lookup_arn(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'lookup', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')

        req = self._get('/stacks%s' % identity.arn_url_path())

        self.m.ReplayAll()

        found = self.assertRaises(
            webob.exc.HTTPFound, self.controller.lookup, req,
            tenant_id=identity.tenant, stack_name=identity.arn())
        self.assertEqual(self._url(identity), found.location)

        self.m.VerifyAll()

    def test_lookup_nonexistent(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'lookup', True)
        stack_name = 'wibble'

        req = self._get('/stacks/%(stack_name)s' % {
            'stack_name': stack_name})

        error = heat_exc.EntityNotFound(entity='Stack', name='a')
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('identify_stack', {'stack_name': stack_name})
        ).AndRaise(tools.to_remote_error(error))
        self.m.ReplayAll()

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.lookup,
                                             req, tenant_id=self.tenant,
                                             stack_name=stack_name)

        self.assertEqual(404, resp.json['code'])
        self.assertEqual('EntityNotFound', resp.json['error']['type'])
        self.m.VerifyAll()

    def test_lookup_err_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'lookup', False)
        stack_name = 'wibble'

        req = self._get('/stacks/%(stack_name)s' % {
            'stack_name': stack_name})

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.lookup,
                                             req, tenant_id=self.tenant,
                                             stack_name=stack_name)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))

    def test_lookup_resource(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'lookup', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')

        req = self._get('/stacks/%(stack_name)s/resources' % identity)

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('identify_stack', {'stack_name': identity.stack_name})
        ).AndReturn(identity)
        self.m.ReplayAll()

        found = self.assertRaises(
            webob.exc.HTTPFound, self.controller.lookup, req,
            tenant_id=identity.tenant, stack_name=identity.stack_name,
            path='resources')
        self.assertEqual(self._url(identity) + '/resources',
                         found.location)

        self.m.VerifyAll()

    def test_lookup_resource_nonexistent(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'lookup', True)
        stack_name = 'wibble'

        req = self._get('/stacks/%(stack_name)s/resources' % {
            'stack_name': stack_name})

        error = heat_exc.EntityNotFound(entity='Stack', name='a')
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('identify_stack', {'stack_name': stack_name})
        ).AndRaise(tools.to_remote_error(error))
        self.m.ReplayAll()

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.lookup,
                                             req, tenant_id=self.tenant,
                                             stack_name=stack_name,
                                             path='resources')

        self.assertEqual(404, resp.json['code'])
        self.assertEqual('EntityNotFound', resp.json['error']['type'])
        self.m.VerifyAll()

    def test_lookup_resource_err_denied_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'lookup', False)
        stack_name = 'wibble'

        req = self._get('/stacks/%(stack_name)s/resources' % {
            'stack_name': stack_name})

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.lookup,
                                             req, tenant_id=self.tenant,
                                             stack_name=stack_name,
                                             path='resources')

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))
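
    # show_stack is called at RPC version 1.20 with a resolve_outputs flag;
    # when outputs are not resolved, the engine response (and therefore the
    # API response) simply omits them.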

    def test_show(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'show', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')

        req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                        params={'resolve_outputs': True})

        parameters = {u'DBUsername': u'admin',
                      u'LinuxDistribution': u'F17',
                      u'InstanceType': u'm1.large',
                      u'DBRootPassword': u'admin',
                      u'DBPassword': u'admin',
                      u'DBName': u'wordpress'}
        outputs = [{u'output_key': u'WebsiteURL',
                    u'description': u'URL for Wordpress wiki',
                    u'output_value': u'http://10.0.0.8/wordpress'}]

        engine_resp = [
            {
                u'stack_identity': dict(identity),
                u'updated_time': u'2012-07-09T09:13:11Z',
                u'parameters': parameters,
                u'outputs': outputs,
                u'stack_status_reason': u'Stack successfully created',
                u'creation_time': u'2012-07-09T09:12:45Z',
                u'stack_name': identity.stack_name,
                u'notification_topics': [],
                u'stack_action': u'CREATE',
                u'stack_status': u'COMPLETE',
                u'description': u'blah',
                u'disable_rollback': True,
                u'timeout_mins': 60,
                u'capabilities': [],
            }
        ]
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('show_stack', {'stack_identity': dict(identity),
                            'resolve_outputs': True}),
            version='1.20'
        ).AndReturn(engine_resp)
        self.m.ReplayAll()

        response = self.controller.show(req,
                                        tenant_id=identity.tenant,
                                        stack_name=identity.stack_name,
                                        stack_id=identity.stack_id)

        expected = {
            'stack': {
                'links': [{"href": self._url(identity),
                           "rel": "self"}],
                'id': '6',
                u'updated_time': u'2012-07-09T09:13:11Z',
                u'parameters': parameters,
                u'outputs': outputs,
                u'description': u'blah',
                u'stack_status_reason': u'Stack successfully created',
                u'creation_time': u'2012-07-09T09:12:45Z',
                u'stack_name': identity.stack_name,
                u'stack_status': u'CREATE_COMPLETE',
                u'capabilities': [],
                u'notification_topics': [],
                u'disable_rollback': True,
                u'timeout_mins': 60,
            }
        }
        self.assertEqual(expected, response)
        self.m.VerifyAll()

    def test_show_without_resolve_outputs(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'show', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')

        req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                        params={'resolve_outputs': False})

        parameters = {u'DBUsername': u'admin',
                      u'LinuxDistribution': u'F17',
                      u'InstanceType': u'm1.large',
                      u'DBRootPassword': u'admin',
                      u'DBPassword': u'admin',
                      u'DBName': u'wordpress'}

        engine_resp = [
            {
                u'stack_identity': dict(identity),
                u'updated_time': u'2012-07-09T09:13:11Z',
                u'parameters': parameters,
                u'stack_status_reason': u'Stack successfully created',
                u'creation_time': u'2012-07-09T09:12:45Z',
                u'stack_name': identity.stack_name,
                u'notification_topics': [],
                u'stack_action': u'CREATE',
                u'stack_status': u'COMPLETE',
                u'description': u'blah',
                u'disable_rollback': True,
                u'timeout_mins': 60,
                u'capabilities': [],
            }
        ]
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('show_stack', {'stack_identity': dict(identity),
                            'resolve_outputs': False}),
            version='1.20'
        ).AndReturn(engine_resp)
        self.m.ReplayAll()

        response = self.controller.show(req,
                                        tenant_id=identity.tenant,
                                        stack_name=identity.stack_name,
                                        stack_id=identity.stack_id)

        expected = {
            'stack': {
                'links': [{"href": self._url(identity),
                           "rel": "self"}],
                'id': '6',
                u'updated_time': u'2012-07-09T09:13:11Z',
                u'parameters': parameters,
                u'description': u'blah',
                u'stack_status_reason': u'Stack successfully created',
                u'creation_time': u'2012-07-09T09:12:45Z',
                u'stack_name': identity.stack_name,
                u'stack_status': u'CREATE_COMPLETE',
                u'capabilities': [],
                u'notification_topics': [],
                u'disable_rollback': True,
                u'timeout_mins': 60,
            }
        }
        self.assertEqual(expected, response)
        self.m.VerifyAll()

    def test_show_notfound(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'show', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6')

        req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)

        error = heat_exc.EntityNotFound(entity='Stack', name='a')
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('show_stack', {'stack_identity': dict(identity),
                            'resolve_outputs': True}),
            version='1.20'
        ).AndRaise(tools.to_remote_error(error))
        self.m.ReplayAll()

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.show,
                                             req, tenant_id=identity.tenant,
                                             stack_name=identity.stack_name,
                                             stack_id=identity.stack_id)

        self.assertEqual(404, resp.json['code'])
        self.assertEqual('EntityNotFound', resp.json['error']['type'])
        self.m.VerifyAll()

    def test_show_invalidtenant(self, mock_enforce):
        identity = identifier.HeatIdentifier('wibble', 'wordpress', '6')

        req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)

        self.m.ReplayAll()

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.show,
                                             req, tenant_id=identity.tenant,
                                             stack_name=identity.stack_name,
                                             stack_id=identity.stack_id)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))
        self.m.VerifyAll()

    def test_show_err_denied_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'show', False)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')

        req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.show,
                                             req, tenant_id=identity.tenant,
                                             stack_name=identity.stack_name,
                                             stack_id=identity.stack_id)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))

    def test_get_template(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'template', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)
        template = {u'Foo': u'bar'}

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('get_template', {'stack_identity': dict(identity)})
        ).AndReturn(template)
        self.m.ReplayAll()

        response = self.controller.template(req, tenant_id=identity.tenant,
                                            stack_name=identity.stack_name,
                                            stack_id=identity.stack_id)

        self.assertEqual(template, response)
        self.m.VerifyAll()

    def test_get_environment(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'environment', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)
        env = {'parameters': {'Foo': 'bar'}}

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('get_environment', {'stack_identity': dict(identity)},),
            version='1.28',
        ).AndReturn(env)
        self.m.ReplayAll()

        response = self.controller.environment(req,
                                               tenant_id=identity.tenant,
                                               stack_name=identity.stack_name,
                                               stack_id=identity.stack_id)

        self.assertEqual(env, response)
        self.m.VerifyAll()

    def test_get_files(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'files', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)
        files = {'foo.yaml': 'i am yaml'}

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('get_files', {'stack_identity': dict(identity)},),
            version='1.32',
        ).AndReturn(files)
        self.m.ReplayAll()

        response = self.controller.files(req, tenant_id=identity.tenant,
                                         stack_name=identity.stack_name,
                                         stack_id=identity.stack_id)

        self.assertEqual(files, response)
        self.m.VerifyAll()

    def test_get_template_err_denied_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'template', False)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        req = self._get('/stacks/%(stack_name)s/%(stack_id)s/template'
                        % identity)

        self.m.ReplayAll()
        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.template,
                                             req, tenant_id=identity.tenant,
                                             stack_name=identity.stack_name,
                                             stack_id=identity.stack_id)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))
        self.m.VerifyAll()

    def test_get_template_err_notfound(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'template', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)

        error = heat_exc.EntityNotFound(entity='Stack', name='a')
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('get_template', {'stack_identity': dict(identity)})
        ).AndRaise(tools.to_remote_error(error))
        self.m.ReplayAll()

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.template,
                                             req, tenant_id=identity.tenant,
                                             stack_name=identity.stack_name,
                                             stack_id=identity.stack_id)

        self.assertEqual(404, resp.json['code'])
        self.assertEqual('EntityNotFound', resp.json['error']['type'])
        self.m.VerifyAll()

    def test_update(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'parameters': parameters,
                'files': {},
                'timeout_mins': 30}

        req = self._put('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                        json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('update_stack',
             {'stack_identity': dict(identity),
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30},
              'template_id': None}),
            version='1.29'
        ).AndReturn(dict(identity))
        self.m.ReplayAll()

        self.assertRaises(webob.exc.HTTPAccepted,
                          self.controller.update,
                          req, tenant_id=identity.tenant,
                          stack_name=identity.stack_name,
                          stack_id=identity.stack_id,
                          body=body)
        self.m.VerifyAll()

    def test_update_with_tags(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'parameters': parameters,
                'files': {},
                'tags': 'tag1,tag2',
                'timeout_mins': 30}

        req = self._put('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                        json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('update_stack',
             {'stack_identity': dict(identity),
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30, 'tags': ['tag1', 'tag2']},
              'template_id': None}),
            version='1.29'
        ).AndReturn(dict(identity))
        self.m.ReplayAll()

        self.assertRaises(webob.exc.HTTPAccepted,
                          self.controller.update,
                          req, tenant_id=identity.tenant,
                          stack_name=identity.stack_name,
                          stack_id=identity.stack_id,
                          body=body)
        self.m.VerifyAll()
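
    # Update error paths: an unknown stack maps to 404 EntityNotFound, a
    # non-integer timeout is rejected before any RPC is made, and policy
    # denial yields 403 Forbidden.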

    def test_update_bad_name(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'parameters': parameters,
                'files': {},
                'timeout_mins': 30}

        req = self._put('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                        json.dumps(body))

        error = heat_exc.EntityNotFound(entity='Stack', name='a')
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('update_stack',
             {'stack_identity': dict(identity),
              'template': template,
              'params': {u'parameters': parameters,
                         u'encrypted_param_names': [],
                         u'parameter_defaults': {},
                         u'event_sinks': [],
                         u'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {'timeout_mins': 30},
              'template_id': None}),
            version='1.29'
        ).AndRaise(tools.to_remote_error(error))
        self.m.ReplayAll()

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.update,
                                             req, tenant_id=identity.tenant,
                                             stack_name=identity.stack_name,
                                             stack_id=identity.stack_id,
                                             body=body)

        self.assertEqual(404, resp.json['code'])
        self.assertEqual('EntityNotFound', resp.json['error']['type'])
        self.m.VerifyAll()

    def test_update_timeout_not_int(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'parameters': parameters,
                'files': {},
                'timeout_mins': 'not-int'}

        req = self._put('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                        json.dumps(body))

        mock_call = self.patchobject(rpc_client.EngineClient, 'call')

        ex = self.assertRaises(webob.exc.HTTPBadRequest,
                               self.controller.update,
                               req, tenant_id=identity.tenant,
                               stack_name=identity.stack_name,
                               stack_id=identity.stack_id,
                               body=body)
        self.assertEqual("Only integer is acceptable by 'timeout_mins'.",
                         six.text_type(ex))
        self.assertFalse(mock_call.called)

    def test_update_err_denied_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update', False)
        identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'parameters': parameters,
                'files': {},
                'timeout_mins': 30}

        req = self._put('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                        json.dumps(body))

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.update,
                                             req, tenant_id=identity.tenant,
                                             stack_name=identity.stack_name,
                                             stack_id=identity.stack_id,
                                             body=body)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))

    def test_update_with_existing_template(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update_patch', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        body = {'template': None,
                'parameters': {},
                'files': {},
                'timeout_mins': 30}

        req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                          json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('update_stack',
             {'stack_identity': dict(identity),
              'template': None,
              'params': {'parameters': {},
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {rpc_api.PARAM_EXISTING: True,
                       'timeout_mins': 30},
              'template_id': None}),
            version='1.29'
        ).AndReturn(dict(identity))
        self.m.ReplayAll()

        self.assertRaises(webob.exc.HTTPAccepted,
                          self.controller.update_patch,
                          req, tenant_id=identity.tenant,
                          stack_name=identity.stack_name,
                          stack_id=identity.stack_id,
                          body=body)
        self.m.VerifyAll()

    def test_update_with_existing_parameters(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update_patch', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        template = {u'Foo': u'bar'}
        body = {'template': template,
                'parameters': {},
                'files': {},
                'timeout_mins': 30}

        req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                          json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('update_stack',
             {'stack_identity': dict(identity),
              'template': template,
              'params': {'parameters': {},
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {rpc_api.PARAM_EXISTING: True,
                       'timeout_mins': 30},
              'template_id': None}),
            version='1.29'
        ).AndReturn(dict(identity))
        self.m.ReplayAll()

        self.assertRaises(webob.exc.HTTPAccepted,
                          self.controller.update_patch,
                          req, tenant_id=identity.tenant,
                          stack_name=identity.stack_name,
                          stack_id=identity.stack_id,
                          body=body)
        self.m.VerifyAll()

    def test_update_with_existing_parameters_with_tags(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update_patch', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        template = {u'Foo': u'bar'}
        body = {'template': template,
                'parameters': {},
                'files': {},
                'tags': 'tag1,tag2',
                'timeout_mins': 30}

        req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                          json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('update_stack',
             {'stack_identity': dict(identity),
              'template': template,
              'params': {'parameters': {},
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {rpc_api.PARAM_EXISTING: True,
                       'timeout_mins': 30,
                       'tags': ['tag1', 'tag2']},
              'template_id': None}),
            version='1.29'
        ).AndReturn(dict(identity))
        self.m.ReplayAll()

        self.assertRaises(webob.exc.HTTPAccepted,
                          self.controller.update_patch,
                          req, tenant_id=identity.tenant,
                          stack_name=identity.stack_name,
                          stack_id=identity.stack_id,
                          body=body)
        self.m.VerifyAll()

    def test_update_with_patched_existing_parameters(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update_patch', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'parameters': parameters,
                'files': {},
                'timeout_mins': 30}

        req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                          json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('update_stack',
             {'stack_identity': dict(identity),
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {rpc_api.PARAM_EXISTING: True,
                       'timeout_mins': 30},
              'template_id': None}),
            version='1.29'
        ).AndReturn(dict(identity))
        self.m.ReplayAll()

        self.assertRaises(webob.exc.HTTPAccepted,
                          self.controller.update_patch,
                          req, tenant_id=identity.tenant,
                          stack_name=identity.stack_name,
                          stack_id=identity.stack_id,
                          body=body)
        self.m.VerifyAll()

    def test_update_with_patch_timeout_not_int(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update_patch', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        body = {'template': template,
                'parameters': parameters,
                'files': {},
                'timeout_mins': 'not-int'}

        req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                          json.dumps(body))

        mock_call = self.patchobject(rpc_client.EngineClient, 'call')

        ex = self.assertRaises(webob.exc.HTTPBadRequest,
                               self.controller.update_patch,
                               req, tenant_id=identity.tenant,
                               stack_name=identity.stack_name,
                               stack_id=identity.stack_id,
                               body=body)
        self.assertEqual("Only integer is acceptable by 'timeout_mins'.",
                         six.text_type(ex))
        self.assertFalse(mock_call.called)

    def test_update_with_existing_and_default_parameters(
            self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update_patch', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        template = {u'Foo': u'bar'}
        clear_params = [u'DBUsername', u'DBPassword', u'LinuxDistribution']
        body = {'template': template,
                'parameters': {},
                'clear_parameters': clear_params,
                'files': {},
                'timeout_mins': 30}

        req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                          json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('update_stack',
             {'stack_identity': dict(identity),
              'template': template,
              'params': {'parameters': {},
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {rpc_api.PARAM_EXISTING: True,
                       'clear_parameters': clear_params,
                       'timeout_mins': 30},
              'template_id': None}),
            version='1.29'
        ).AndReturn(dict(identity))
        self.m.ReplayAll()

        self.assertRaises(webob.exc.HTTPAccepted,
                          self.controller.update_patch,
                          req, tenant_id=identity.tenant,
                          stack_name=identity.stack_name,
                          stack_id=identity.stack_id,
                          body=body)
        self.m.VerifyAll()

    def test_update_with_patched_and_default_parameters(
            self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update_patch', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
        template = {u'Foo': u'bar'}
        parameters = {u'InstanceType': u'm1.xlarge'}
        clear_params = [u'DBUsername', u'DBPassword', u'LinuxDistribution']
        body = {'template': template,
                'parameters': parameters,
                'clear_parameters': clear_params,
                'files': {},
                'timeout_mins': 30}

        req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity,
                          json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('update_stack',
             {'stack_identity': dict(identity),
              'template': template,
              'params': {'parameters': parameters,
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'args': {rpc_api.PARAM_EXISTING: True,
                       'clear_parameters': clear_params,
                       'timeout_mins': 30},
              'template_id': None}),
            version='1.29'
        ).AndReturn(dict(identity))
        self.m.ReplayAll()

        self.assertRaises(webob.exc.HTTPAccepted,
                          self.controller.update_patch,
                          req, tenant_id=identity.tenant,
                          stack_name=identity.stack_name,
                          stack_id=identity.stack_id,
                          body=body)
        self.m.VerifyAll()
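
    # A successful delete_stack returns None from the engine, which the API
    # surfaces as 204 No Content.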

    def test_delete(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'delete', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')

        req = self._delete('/stacks/%(stack_name)s/%(stack_id)s' % identity)

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        # Engine returns None when delete successful
        rpc_client.EngineClient.call(
            req.context,
            ('delete_stack', {'stack_identity': dict(identity)})
        ).AndReturn(None)
        self.m.ReplayAll()

        self.assertRaises(webob.exc.HTTPNoContent,
                          self.controller.delete,
                          req, tenant_id=identity.tenant,
                          stack_name=identity.stack_name,
                          stack_id=identity.stack_id)
        self.m.VerifyAll()

    def test_delete_err_denied_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'delete', False)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')

        req = self._delete('/stacks/%(stack_name)s/%(stack_id)s' % identity)

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.delete,
                                             req, tenant_id=self.tenant,
                                             stack_name=identity.stack_name,
                                             stack_id=identity.stack_id)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))

    def test_export(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'export', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')

        req = self._get('/stacks/%(stack_name)s/%(stack_id)s/export' %
                        identity)

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        # Engine returns json data
        expected = {"name": "test", "id": "123"}
        rpc_client.EngineClient.call(
            req.context,
            ('export_stack', {'stack_identity': dict(identity)}),
            version='1.22'
        ).AndReturn(expected)
        self.m.ReplayAll()

        ret = self.controller.export(req,
                                     tenant_id=identity.tenant,
                                     stack_name=identity.stack_name,
                                     stack_id=identity.stack_id)
        self.assertEqual(expected, ret)
        self.m.VerifyAll()

    def test_abandon(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'abandon', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')

        req = self._abandon('/stacks/%(stack_name)s/%(stack_id)s' % identity)

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        # Engine returns json data on abandon completion
        expected = {"name": "test", "id": "123"}
        rpc_client.EngineClient.call(
            req.context,
            ('abandon_stack', {'stack_identity': dict(identity)})
        ).AndReturn(expected)
        self.m.ReplayAll()

        ret = self.controller.abandon(req,
                                      tenant_id=identity.tenant,
                                      stack_name=identity.stack_name,
                                      stack_id=identity.stack_id)
        self.assertEqual(expected, ret)
        self.m.VerifyAll()

    def test_abandon_err_denied_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'abandon', False)
        identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')

        req = self._abandon('/stacks/%(stack_name)s/%(stack_id)s' % identity)

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.abandon,
                                             req, tenant_id=self.tenant,
                                             stack_name=identity.stack_name,
                                             stack_id=identity.stack_id)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))

    def test_delete_bad_name(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'delete', True)
        identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6')

        req = self._delete('/stacks/%(stack_name)s/%(stack_id)s' % identity)

        error = heat_exc.EntityNotFound(entity='Stack', name='a')
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        # Engine returns None when delete successful
        rpc_client.EngineClient.call(
            req.context,
            ('delete_stack', {'stack_identity': dict(identity)})
        ).AndRaise(tools.to_remote_error(error))
        self.m.ReplayAll()

        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.delete,
                                             req, tenant_id=identity.tenant,
                                             stack_name=identity.stack_name,
                                             stack_id=identity.stack_id)

        self.assertEqual(404, resp.json['code'])
        self.assertEqual('EntityNotFound', resp.json['error']['type'])
        self.m.VerifyAll()
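
    # validate_template round-trips the parsed template to the engine (RPC
    # version 1.24); an {'Error': ...} response from the engine is turned
    # into HTTPBadRequest.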

    def test_validate_template(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'validate_template', True)
        template = {u'Foo': u'bar'}
        body = {'template': template}

        req = self._post('/validate', json.dumps(body))

        engine_response = {
            u'Description': u'blah',
            u'Parameters': [
                {
                    u'NoEcho': u'false',
                    u'ParameterKey': u'InstanceType',
                    u'Description': u'Instance type'
                }
            ]
        }

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('validate_template',
             {'template': template,
              'params': {'parameters': {},
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'show_nested': False,
              'ignorable_errors': None}),
            version='1.24'
        ).AndReturn(engine_response)
        self.m.ReplayAll()

        response = self.controller.validate_template(req,
                                                     tenant_id=self.tenant,
                                                     body=body)
        self.assertEqual(engine_response, response)
        self.m.VerifyAll()

    def test_validate_template_error(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'validate_template', True)
        template = {u'Foo': u'bar'}
        body = {'template': template}

        req = self._post('/validate', json.dumps(body))

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('validate_template',
             {'template': template,
              'params': {'parameters': {},
                         'encrypted_param_names': [],
                         'parameter_defaults': {},
                         'event_sinks': [],
                         'resource_registry': {}},
              'files': {},
              'environment_files': None,
              'show_nested': False,
              'ignorable_errors': None}),
            version='1.24'
        ).AndReturn({'Error': 'fubar'})
        self.m.ReplayAll()

        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.validate_template,
                          req, tenant_id=self.tenant, body=body)
        self.m.VerifyAll()

    def test_validate_err_denied_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'validate_template', False)
        template = {u'Foo': u'bar'}
        body = {'template': template}

        req = self._post('/validate', json.dumps(body))

        resp = tools.request_with_middleware(
            fault.FaultWrapper,
            self.controller.validate_template,
            req, tenant_id=self.tenant, body=body)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))

    def test_list_resource_types(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'list_resource_types', True)
        req = self._get('/resource_types')

        engine_response = ['AWS::EC2::Instance',
                           'AWS::EC2::EIP',
                           'AWS::EC2::EIPAssociation']

        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('list_resource_types',
             {
                 'support_status': None,
                 'type_name': None,
                 'heat_version': None,
                 'with_description': False
             }),
            version="1.30"
        ).AndReturn(engine_response)
        self.m.ReplayAll()

        response = self.controller.list_resource_types(req,
                                                       tenant_id=self.tenant)
        self.assertEqual({'resource_types': engine_response}, response)
        self.m.VerifyAll()

    def test_list_resource_types_error(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'list_resource_types', True)
        req = self._get('/resource_types')

        error = heat_exc.EntityNotFound(entity='Resource Type', name='')
        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
        rpc_client.EngineClient.call(
            req.context,
            ('list_resource_types',
             {
                 'support_status': None,
                 'type_name': None,
                 'heat_version': None,
                 'with_description': False
             }),
            version="1.30"
        ).AndRaise(tools.to_remote_error(error))
        self.m.ReplayAll()

        resp = tools.request_with_middleware(
            fault.FaultWrapper,
            self.controller.list_resource_types,
            req, tenant_id=self.tenant)
        self.assertEqual(404, resp.json['code'])
        self.assertEqual('EntityNotFound', resp.json['error']['type'])
        self.m.VerifyAll()

    def test_list_resource_types_err_denied_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'list_resource_types', False)
        req = self._get('/resource_types')

        resp = tools.request_with_middleware(
            fault.FaultWrapper,
            self.controller.list_resource_types,
            req, tenant_id=self.tenant)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))
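
    # Stack output listing and retrieval use RPC version 1.19 and wrap the
    # engine response under 'outputs' and 'output' keys respectively.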
self.controller.list_resource_types, req, tenant_id=self.tenant) self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) self.m.VerifyAll() def test_list_resource_types_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'list_resource_types', False) req = self._get('/resource_types') resp = tools.request_with_middleware( fault.FaultWrapper, self.controller.list_resource_types, req, tenant_id=self.tenant) self.assertEqual(403, resp.status_int) self.assertIn('403 Forbidden', six.text_type(resp)) def test_list_outputs(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'list_outputs', True) identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity) outputs = [ {'output_key': 'key1', 'description': 'description'}, {'output_key': 'key2', 'description': 'description1'} ] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_outputs', {'stack_identity': dict(identity)}), version='1.19' ).AndReturn(outputs) self.m.ReplayAll() response = self.controller.list_outputs(req, tenant_id=identity.tenant, stack_name=identity.stack_name, stack_id=identity.stack_id) self.assertEqual({'outputs': outputs}, response) self.m.VerifyAll() def test_show_output(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show_output', True) identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') req = self._get('/stacks/%(stack_name)s/%(stack_id)s/key' % identity) output = {'output_key': 'key', 'output_value': 'val', 'description': 'description'} self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('show_output', {'output_key': 'key', 'stack_identity': dict(identity)}), version='1.19' ).AndReturn(output) self.m.ReplayAll() response = self.controller.show_output(req, tenant_id=identity.tenant, stack_name=identity.stack_name, stack_id=identity.stack_id, output_key='key') self.assertEqual({'output': output}, response) self.m.VerifyAll() def test_list_template_versions(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'list_template_versions', True) req = self._get('/template_versions') engine_response = [ {'version': 'heat_template_version.2013-05-23', 'type': 'hot'}, {'version': 'AWSTemplateFormatVersion.2010-09-09', 'type': 'cfn'}] self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('list_template_versions', {}), version="1.11" ).AndReturn(engine_response) self.m.ReplayAll() response = self.controller.list_template_versions( req, tenant_id=self.tenant) self.assertEqual({'template_versions': engine_response}, response) self.m.VerifyAll() def _test_list_template_functions(self, mock_enforce, req, engine_response, with_condition=False): self._mock_enforce_setup(mock_enforce, 'list_template_functions', True) self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ( 'list_template_functions', {'template_version': 't1', 'with_condition': with_condition}), version="1.35" ).AndReturn(engine_response) self.m.ReplayAll() response = self.controller.list_template_functions( req, tenant_id=self.tenant, template_version='t1') self.assertEqual({'template_functions': engine_response}, response) self.m.VerifyAll() def test_list_template_functions(self, mock_enforce): req = self._get('/template_versions/t1/functions') engine_response = [ {'functions': 'func1', 
'description': 'desc1'}, ] self._test_list_template_functions(mock_enforce, req, engine_response) def test_list_template_funcs_includes_condition_funcs(self, mock_enforce): params = {'with_condition_func': 'true'} req = self._get('/template_versions/t1/functions', params=params) engine_response = [ {'functions': 'func1', 'description': 'desc1'}, {'functions': 'condition_func', 'description': 'desc2'} ] self._test_list_template_functions(mock_enforce, req, engine_response, with_condition=True) def test_resource_schema(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'resource_schema', True) req = self._get('/resource_types/ResourceWithProps') type_name = 'ResourceWithProps' engine_response = { 'resource_type': type_name, 'properties': { 'Foo': {'type': 'string', 'required': False}, }, 'attributes': { 'foo': {'description': 'A generic attribute'}, 'Foo': {'description': 'Another generic attribute'}, }, 'support_status': { 'status': 'SUPPORTED', 'version': None, 'message': None, }, } self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('resource_schema', {'type_name': type_name, 'with_description': False}), version='1.30' ).AndReturn(engine_response) self.m.ReplayAll() response = self.controller.resource_schema(req, tenant_id=self.tenant, type_name=type_name) self.assertEqual(engine_response, response) self.m.VerifyAll() def test_resource_schema_nonexist(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'resource_schema', True) req = self._get('/resource_types/BogusResourceType') type_name = 'BogusResourceType' error = heat_exc.EntityNotFound(entity='Resource Type', name='BogusResourceType') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('resource_schema', {'type_name': type_name, 'with_description': False}), version='1.30' ).AndRaise(tools.to_remote_error(error)) self.m.ReplayAll() resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.resource_schema, req, tenant_id=self.tenant, type_name=type_name) self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) self.m.VerifyAll() def test_resource_schema_faulty_template(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'resource_schema', True) req = self._get('/resource_types/FaultyTemplate') type_name = 'FaultyTemplate' error = heat_exc.InvalidGlobalResource(type_name='FaultyTemplate') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('resource_schema', {'type_name': type_name, 'with_description': False}), version='1.30' ).AndRaise(tools.to_remote_error(error)) self.m.ReplayAll() resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.resource_schema, req, tenant_id=self.tenant, type_name=type_name) self.assertEqual(500, resp.json['code']) self.assertEqual('InvalidGlobalResource', resp.json['error']['type']) self.m.VerifyAll() def test_resource_schema_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'resource_schema', False) req = self._get('/resource_types/BogusResourceType') type_name = 'BogusResourceType' resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.resource_schema, req, tenant_id=self.tenant, type_name=type_name) self.assertEqual(403, resp.status_int) self.assertIn('403 Forbidden', six.text_type(resp)) def test_generate_template(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'generate_template', True) req = 
self._get('/resource_types/TEST_TYPE/template') engine_response = {'Type': 'TEST_TYPE'} self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('generate_template', {'type_name': 'TEST_TYPE', 'template_type': 'cfn'}), version='1.9' ).AndReturn(engine_response) self.m.ReplayAll() self.controller.generate_template(req, tenant_id=self.tenant, type_name='TEST_TYPE') self.m.VerifyAll() def test_generate_template_invalid_template_type(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'generate_template', True) params = {'template_type': 'invalid'} mock_call = self.patchobject(rpc_client.EngineClient, 'call') req = self._get('/resource_types/TEST_TYPE/template', params=params) ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.generate_template, req, tenant_id=self.tenant, type_name='TEST_TYPE') self.assertIn('Template type is not supported: Invalid template ' 'type "invalid", valid types are: cfn, hot.', six.text_type(ex)) self.assertFalse(mock_call.called) def test_generate_template_not_found(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'generate_template', True) req = self._get('/resource_types/NOT_FOUND/template') error = heat_exc.EntityNotFound(entity='Resource Type', name='a') self.m.StubOutWithMock(rpc_client.EngineClient, 'call') rpc_client.EngineClient.call( req.context, ('generate_template', {'type_name': 'NOT_FOUND', 'template_type': 'cfn'}), version='1.9' ).AndRaise(tools.to_remote_error(error)) self.m.ReplayAll() resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.generate_template, req, tenant_id=self.tenant, type_name='NOT_FOUND') self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) self.m.VerifyAll() def test_generate_template_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'generate_template', False) req = self._get('/resource_types/NOT_FOUND/template') resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.generate_template, req, tenant_id=self.tenant, type_name='blah') self.assertEqual(403, resp.status_int) self.assertIn('403 Forbidden', six.text_type(resp)) class StackSerializerTest(common.HeatTestCase): def setUp(self): super(StackSerializerTest, self).setUp() self.serializer = stacks.StackSerializer() def test_serialize_create(self): result = {'stack': {'id': '1', 'links': [{'href': 'location', "rel": "self"}]}} response = webob.Response() response = self.serializer.create(response, result) self.assertEqual(201, response.status_int) self.assertEqual('location', response.headers['Location']) self.assertEqual('application/json', response.headers['Content-Type']) heat-10.0.2/heat/tests/api/openstack_v1/test_util.py0000666000175000017500000001137713343562340022372 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
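# ---------------------------------------------------------------------------
# Illustrative sketch (editor's addition, not part of the Heat source tree).
# StackSerializerTest above pins down the observable contract of
# StackSerializer.create(): a 201 status, a Location header copied from the
# stack's "self" link, and a JSON content type.  A minimal serializer
# satisfying that contract might look like the following; the name
# MinimalStackSerializer is hypothetical and the real implementation may
# differ in detail.
# ---------------------------------------------------------------------------
import json


class MinimalStackSerializer(object):

    def create(self, response, result):
        # The "self" link of the newly created stack becomes the Location
        # header of the 201 response.
        self_link = [link['href'] for link in result['stack']['links']
                     if link['rel'] == 'self'][0]
        response.status = 201
        response.headers['Location'] = self_link
        response.headers['Content-Type'] = 'application/json'
        response.body = json.dumps(result).encode('utf-8')
        return response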
heat-10.0.2/heat/tests/api/openstack_v1/test_util.py0000666000175000017500000001137713343562340022372 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from webob import exc

from heat.api.openstack.v1 import util
from heat.common import context
from heat.common import policy
from heat.common import wsgi
from heat.tests import common


class TestGetAllowedParams(common.HeatTestCase):
    def setUp(self):
        super(TestGetAllowedParams, self).setUp()
        req = wsgi.Request({})
        self.params = req.params.copy()
        self.params.add('foo', 'foo value')
        self.whitelist = {'foo': util.PARAM_TYPE_SINGLE}

    def test_returns_empty_dict(self):
        self.whitelist = {}
        result = util.get_allowed_params(self.params, self.whitelist)
        self.assertEqual({}, result)

    def test_only_adds_whitelisted_params_if_param_exists(self):
        self.whitelist = {'foo': util.PARAM_TYPE_SINGLE}
        self.params.clear()
        result = util.get_allowed_params(self.params, self.whitelist)
        self.assertNotIn('foo', result)

    def test_returns_only_whitelisted_params(self):
        self.params.add('bar', 'bar value')
        result = util.get_allowed_params(self.params, self.whitelist)
        self.assertIn('foo', result)
        self.assertNotIn('bar', result)

    def test_handles_single_value_params(self):
        result = util.get_allowed_params(self.params, self.whitelist)
        self.assertEqual('foo value', result['foo'])

    def test_handles_multiple_value_params(self):
        self.whitelist = {'foo': util.PARAM_TYPE_MULTI}
        self.params.add('foo', 'foo value 2')
        result = util.get_allowed_params(self.params, self.whitelist)
        self.assertEqual(2, len(result['foo']))
        self.assertIn('foo value', result['foo'])
        self.assertIn('foo value 2', result['foo'])

    def test_handles_mixed_value_param_with_multiple_entries(self):
        self.whitelist = {'foo': util.PARAM_TYPE_MIXED}
        self.params.add('foo', 'foo value 2')
        result = util.get_allowed_params(self.params, self.whitelist)
        self.assertEqual(2, len(result['foo']))
        self.assertIn('foo value', result['foo'])
        self.assertIn('foo value 2', result['foo'])

    def test_handles_mixed_value_param_with_single_entry(self):
        self.whitelist = {'foo': util.PARAM_TYPE_MIXED}
        result = util.get_allowed_params(self.params, self.whitelist)
        self.assertEqual('foo value', result['foo'])

    def test_bogus_whitelist_items(self):
        self.whitelist = {'foo': 'blah'}
        self.assertRaises(AssertionError, util.get_allowed_params,
                          self.params, self.whitelist)


class TestPolicyEnforce(common.HeatTestCase):
    def setUp(self):
        super(TestPolicyEnforce, self).setUp()
        self.req = wsgi.Request({})
        self.req.context = context.RequestContext(tenant='foo',
                                                  is_admin=False)

        class DummyController(object):
            REQUEST_SCOPE = 'test'

            @util.policy_enforce
            def an_action(self, req):
                return 'woot'

        self.controller = DummyController()

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_policy_enforce_tenant_mismatch(self, mock_enforce):
        mock_enforce.return_value = True

        self.assertEqual('woot',
                         self.controller.an_action(self.req, 'foo'))

        self.assertRaises(exc.HTTPForbidden,
                          self.controller.an_action,
                          self.req, tenant_id='bar')

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_policy_enforce_tenant_mismatch_is_admin(self, mock_enforce):
        self.req.context = context.RequestContext(tenant='foo',
                                                  is_admin=True)
        mock_enforce.return_value = True

        self.assertEqual('woot',
                         self.controller.an_action(self.req, 'foo'))

        self.assertEqual('woot',
                         self.controller.an_action(self.req, 'bar'))

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_policy_enforce_policy_deny(self, mock_enforce):
        mock_enforce.return_value = False

        self.assertRaises(exc.HTTPForbidden,
                          self.controller.an_action,
                          self.req, tenant_id='foo')
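# ---------------------------------------------------------------------------
# Illustrative sketch (editor's addition): TestPolicyEnforce above exercises
# three behaviours of the util.policy_enforce decorator -- a tenant mismatch
# is rejected for non-admins, admins may cross tenants, and a policy denial
# raises HTTPForbidden.  A simplified decorator with those semantics is
# sketched below; it assumes req.context exposes tenant_id, is_admin and an
# ``enforcer`` attribute (the latter is a stand-in for how the real code
# reaches oslo.policy), so treat it as a model, not Heat's implementation.
# ---------------------------------------------------------------------------
import functools

from webob import exc as sketch_exc


def sketch_policy_enforce(handler):
    @functools.wraps(handler)
    def wrapped(controller, req, tenant_id=None, **kwargs):
        ctx = req.context
        # Non-admin callers may only act within their own tenant.
        if (tenant_id is not None and tenant_id != ctx.tenant_id
                and not ctx.is_admin):
            raise sketch_exc.HTTPForbidden()
        # The action itself must also be allowed by policy.
        allowed = ctx.enforcer.enforce(  # ``enforcer`` is assumed here
            context=ctx, action=handler.__name__,
            scope=controller.REQUEST_SCOPE)
        if not allowed:
            raise sketch_exc.HTTPForbidden()
        return handler(controller, req, **kwargs)
    return wrapped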
heat-10.0.2/heat/tests/api/openstack_v1/test_build_info.py0000666000175000017500000000607213343562340023523 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import six

import heat.api.middleware.fault as fault
import heat.api.openstack.v1.build_info as build_info
from heat.common import policy
from heat.tests.api.openstack_v1 import tools
from heat.tests import common


@mock.patch.object(policy.Enforcer, 'enforce')
class BuildInfoControllerTest(tools.ControllerTest, common.HeatTestCase):
    def setUp(self):
        super(BuildInfoControllerTest, self).setUp()
        self.controller = build_info.BuildInfoController({})

    def test_theres_a_default_api_build_revision(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'build_info', True)
        req = self._get('/build_info')
        self.controller.rpc_client = mock.Mock()

        response = self.controller.build_info(req, tenant_id=self.tenant)
        self.assertIn('api', response)
        self.assertIn('revision', response['api'])
        self.assertEqual('unknown', response['api']['revision'])

    @mock.patch.object(build_info.cfg, 'CONF')
    def test_response_api_build_revision_from_config_file(
            self, mock_conf, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'build_info', True)
        req = self._get('/build_info')
        mock_engine = mock.Mock()
        mock_engine.get_revision.return_value = 'engine_revision'
        self.controller.rpc_client = mock_engine
        mock_conf.revision = {'heat_revision': 'test'}

        response = self.controller.build_info(req, tenant_id=self.tenant)
        self.assertEqual('test', response['api']['revision'])

    def test_retrieves_build_revision_from_the_engine(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'build_info', True)
        req = self._get('/build_info')
        mock_engine = mock.Mock()
        mock_engine.get_revision.return_value = 'engine_revision'
        self.controller.rpc_client = mock_engine

        response = self.controller.build_info(req, tenant_id=self.tenant)
        self.assertIn('engine', response)
        self.assertIn('revision', response['engine'])
        self.assertEqual('engine_revision', response['engine']['revision'])

    def test_build_info_err_denied_policy(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'build_info', False)
        req = self._get('/build_info')
        resp = tools.request_with_middleware(
            fault.FaultWrapper,
            self.controller.build_info,
            req, tenant_id=self.tenant)

        self.assertEqual(403, resp.status_int)
        self.assertIn('403 Forbidden', six.text_type(resp))
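# ---------------------------------------------------------------------------
# Illustrative sketch (editor's addition): the tests above fix the shape of
# the /build_info response -- an 'api' revision taken from configuration
# (defaulting to 'unknown') and an 'engine' revision fetched over RPC.  A
# minimal function with that behaviour might read as follows; the helper
# name sketch_build_info is hypothetical.
# ---------------------------------------------------------------------------
def sketch_build_info(rpc_client, conf):
    # conf.revision is expected to be a mapping like
    # {'heat_revision': '...'}; fall back to 'unknown' when unset.
    api_revision = (conf.revision or {}).get('heat_revision') or 'unknown'
    return {
        'api': {'revision': api_revision},
        'engine': {'revision': rpc_client.get_revision()},
    }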
heat-10.0.2/heat/tests/api/openstack_v1/test_views_stacks_view.py0000666000175000017500000001616213343562340025151 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from heat.api.openstack.v1.views import stacks_view
from heat.common import identifier
from heat.tests import common


class TestFormatStack(common.HeatTestCase):
    def setUp(self):
        super(TestFormatStack, self).setUp()
        self.request = mock.Mock()

    def test_doesnt_include_stack_action(self):
        stack = {'stack_action': 'CREATE'}

        result = stacks_view.format_stack(self.request, stack)
        self.assertEqual({}, result)

    def test_merges_stack_action_and_status(self):
        stack = {'stack_action': 'CREATE',
                 'stack_status': 'COMPLETE'}

        result = stacks_view.format_stack(self.request, stack)
        self.assertIn('stack_status', result)
        self.assertEqual('CREATE_COMPLETE', result['stack_status'])

    def test_include_stack_status_with_no_action(self):
        stack = {'stack_status': 'COMPLETE'}

        result = stacks_view.format_stack(self.request, stack)
        self.assertIn('stack_status', result)
        self.assertEqual('COMPLETE', result['stack_status'])

    @mock.patch.object(stacks_view, 'util')
    def test_replace_stack_identity_with_id_and_links(self, mock_util):
        mock_util.make_link.return_value = 'blah'
        stack = {'stack_identity': {'stack_id': 'foo'}}

        result = stacks_view.format_stack(self.request, stack)
        self.assertIn('id', result)
        self.assertNotIn('stack_identity', result)
        self.assertEqual('foo', result['id'])

        self.assertIn('links', result)
        self.assertEqual(['blah'], result['links'])

    @mock.patch.object(stacks_view, 'util', new=mock.Mock())
    def test_doesnt_add_project_by_default(self):
        stack = {'stack_identity': {'stack_id': 'foo',
                                    'tenant': 'bar'}}

        result = stacks_view.format_stack(self.request, stack, None)
        self.assertNotIn('project', result)

    @mock.patch.object(stacks_view, 'util', new=mock.Mock())
    def test_doesnt_add_project_if_not_include_project(self):
        stack = {'stack_identity': {'stack_id': 'foo',
                                    'tenant': 'bar'}}

        result = stacks_view.format_stack(self.request, stack, None,
                                          include_project=False)
        self.assertNotIn('project', result)

    @mock.patch.object(stacks_view, 'util', new=mock.Mock())
    def test_adds_project_if_include_project(self):
        stack = {'stack_identity': {'stack_id': 'foo',
                                    'tenant': 'bar'}}

        result = stacks_view.format_stack(self.request, stack, None,
                                          include_project=True)
        self.assertIn('project', result)
        self.assertEqual('bar', result['project'])

    def test_includes_all_other_keys(self):
        stack = {'foo': 'bar'}

        result = stacks_view.format_stack(self.request, stack)
        self.assertIn('foo', result)
        self.assertEqual('bar', result['foo'])

    def test_filter_out_all_but_given_keys(self):
        stack = {
            'foo1': 'bar1',
            'foo2': 'bar2',
            'foo3': 'bar3',
        }

        result = stacks_view.format_stack(self.request, stack, ['foo2'])
        self.assertIn('foo2', result)
        self.assertNotIn('foo1', result)
        self.assertNotIn('foo3', result)


class TestStacksViewBuilder(common.HeatTestCase):
    def setUp(self):
        super(TestStacksViewBuilder, self).setUp()
        self.request = mock.Mock()
        self.request.params = {}
        identity = identifier.HeatIdentifier('123456', 'wordpress', '1')
        self.stack1 = {
            u'stack_identity': dict(identity),
            u'updated_time': u'2012-07-09T09:13:11Z',
            u'template_description': u'blah',
            u'description': u'blah',
            u'stack_status_reason': u'Stack successfully created',
            u'creation_time': u'2012-07-09T09:12:45Z',
            u'stack_name': identity.stack_name,
            u'stack_action': u'CREATE',
            u'stack_status': u'COMPLETE',
            u'parameters': {'foo': 'bar'},
            u'outputs': ['key', 'value'],
            u'notification_topics': [],
            u'capabilities': [],
            u'disable_rollback': True,
            u'timeout_mins': 60,
        }

    def test_stack_index(self):
        stacks = [self.stack1]
        stack_view = stacks_view.collection(self.request, stacks)
        self.assertIn('stacks', stack_view)
        self.assertEqual(1, len(stack_view['stacks']))

    @mock.patch.object(stacks_view, 'format_stack')
    def test_stack_basic_details(self, mock_format_stack):
        stacks = [self.stack1]
        expected_keys = stacks_view.basic_keys

        stacks_view.collection(self.request, stacks)
        mock_format_stack.assert_called_once_with(self.request,
                                                  self.stack1,
                                                  expected_keys,
                                                  mock.ANY)

    @mock.patch.object(stacks_view.views_common, 'get_collection_links')
    def test_append_collection_links(self, mock_get_collection_links):
        # If the page is full, assume a next page exists
        stacks = [self.stack1]
        mock_get_collection_links.return_value = 'fake links'

        stack_view = stacks_view.collection(self.request, stacks)
        self.assertIn('links', stack_view)

    @mock.patch.object(stacks_view.views_common, 'get_collection_links')
    def test_doesnt_append_collection_links(self,
                                            mock_get_collection_links):
        stacks = [self.stack1]
        mock_get_collection_links.return_value = None

        stack_view = stacks_view.collection(self.request, stacks)
        self.assertNotIn('links', stack_view)

    @mock.patch.object(stacks_view.views_common, 'get_collection_links')
    def test_append_collection_count(self, mock_get_collection_links):
        stacks = [self.stack1]
        count = 1

        stack_view = stacks_view.collection(self.request, stacks, count)
        self.assertIn('count', stack_view)
        self.assertEqual(1, stack_view['count'])

    @mock.patch.object(stacks_view.views_common, 'get_collection_links')
    def test_doesnt_append_collection_count(self,
                                            mock_get_collection_links):
        stacks = [self.stack1]

        stack_view = stacks_view.collection(self.request, stacks)
        self.assertNotIn('count', stack_view)

    @mock.patch.object(stacks_view.views_common, 'get_collection_links')
    def test_appends_collection_count_of_zero(self,
                                              mock_get_collection_links):
        stacks = [self.stack1]
        count = 0

        stack_view = stacks_view.collection(self.request, stacks, count)
        self.assertIn('count', stack_view)
        self.assertEqual(0, stack_view['count'])
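# ---------------------------------------------------------------------------
# Illustrative sketch (editor's addition): TestFormatStack above documents
# the formatting rules -- 'stack_action' is folded into 'stack_status'
# (CREATE + COMPLETE -> CREATE_COMPLETE), 'stack_identity' is replaced by
# 'id' plus 'links', 'project' appears only when include_project is set, and
# everything else passes through, optionally filtered by a key list.  A
# simplified formatter implementing those rules; ``make_link`` stands in for
# util.make_link and is injected so the sketch stays self-contained.
# ---------------------------------------------------------------------------
def sketch_format_stack(request, stack, keys=None, include_project=False,
                        make_link=None):
    result = {}
    for key, value in stack.items():
        if keys and key not in keys:
            continue
        if key == 'stack_action':
            continue  # merged into stack_status below
        elif key == 'stack_status' and 'stack_action' in stack:
            result[key] = '_'.join([stack['stack_action'], value])
        elif key == 'stack_identity':
            result['id'] = value['stack_id']
            if make_link is not None:
                result['links'] = [make_link(request, value)]
            if include_project:
                result['project'] = value['tenant']
        else:
            result[key] = value
    return result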
heat-10.0.2/heat/tests/api/openstack_v1/test_views_common.py0000666000175000017500000000757313343562340024115 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from six.moves.urllib import parse as urlparse

from heat.api.openstack.v1.views import views_common
from heat.tests import common


class TestViewsCommon(common.HeatTestCase):
    def setUp(self):
        super(TestViewsCommon, self).setUp()
        self.request = mock.Mock()
        self.stack1 = {
            'id': 'id1',
        }
        self.stack2 = {
            'id': 'id2',
        }

    def setUpGetCollectionLinks(self):
        self.items = [self.stack1, self.stack2]
        self.request.params = {'limit': '2'}
        self.request.path_url = "http://example.com/fake/path"

    def test_get_collection_links_creates_next(self):
        self.setUpGetCollectionLinks()
        links = views_common.get_collection_links(self.request, self.items)

        expected_params = {'marker': ['id2'], 'limit': ['2']}
        next_link = list(filter(
            lambda link: link['rel'] == 'next', links)).pop()
        self.assertEqual('next', next_link['rel'])
        url_path, url_params = next_link['href'].split('?', 1)
        self.assertEqual(url_path, self.request.path_url)
        self.assertEqual(expected_params, urlparse.parse_qs(url_params))

    def test_get_collection_links_doesnt_create_next_if_no_limit(self):
        self.setUpGetCollectionLinks()
        del self.request.params['limit']
        links = views_common.get_collection_links(self.request, self.items)
        self.assertEqual([], links)

    def test_get_collection_links_doesnt_create_next_if_page_not_full(self):
        self.setUpGetCollectionLinks()
        self.request.params['limit'] = '10'
        links = views_common.get_collection_links(self.request, self.items)
        self.assertEqual([], links)

    def test_get_collection_links_overwrites_url_marker(self):
        self.setUpGetCollectionLinks()
        self.request.params = {'limit': '2', 'marker': 'some_marker'}
        links = views_common.get_collection_links(self.request, self.items)

        expected_params = {'marker': ['id2'], 'limit': ['2']}
        next_link = list(filter(
            lambda link: link['rel'] == 'next', links)).pop()
        self.assertEqual('next', next_link['rel'])
        url_path, url_params = next_link['href'].split('?', 1)
        self.assertEqual(url_path, self.request.path_url)
        self.assertEqual(expected_params, urlparse.parse_qs(url_params))

    def test_get_collection_links_does_not_overwrite_other_params(self):
        self.setUpGetCollectionLinks()
        self.request.params = {'limit': '2', 'foo': 'bar'}
        links = views_common.get_collection_links(self.request, self.items)

        next_link = list(
            filter(lambda link: link['rel'] == 'next', links)).pop()
        url = next_link['href']
        query_string = urlparse.urlparse(url).query
        params = {}
        params.update(urlparse.parse_qsl(query_string))
        self.assertEqual('2', params['limit'])
        self.assertEqual('bar', params['foo'])

    def test_get_collection_links_handles_invalid_limits(self):
        self.setUpGetCollectionLinks()
        self.request.params = {'limit': 'foo'}
        links = views_common.get_collection_links(self.request, self.items)
        self.assertEqual([], links)

        self.request.params = {'limit': None}
        links = views_common.get_collection_links(self.request, self.items)
        self.assertEqual([], links)
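# ---------------------------------------------------------------------------
# Illustrative sketch (editor's addition): the tests above define the
# pagination contract of views_common.get_collection_links -- a 'next' link
# is emitted only when a valid integer 'limit' is present and the page is
# full, the marker is (re)set to the id of the last item, other query
# parameters survive, and a bogus limit yields no links.  A condensed
# version of that logic, written against the same behaviour:
# ---------------------------------------------------------------------------
from six.moves.urllib import parse as sketch_urlparse


def sketch_get_collection_links(request, items):
    try:
        limit = int(request.params.get('limit'))
    except (TypeError, ValueError):
        return []  # missing or non-integer limit: no pagination links
    if limit != len(items):
        return []  # page not full: assume there is no next page
    params = dict(request.params)
    params['marker'] = items[-1]['id']  # overwrite any incoming marker
    href = '%s?%s' % (request.path_url, sketch_urlparse.urlencode(params))
    return [{'rel': 'next', 'href': href}]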
heat-10.0.2/heat/tests/api/openstack_v1/test_services.py0000666000175000017500000000376713343562340023244 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_messaging import exceptions
import webob.exc

import heat.api.openstack.v1.services as services
from heat.common import policy
from heat.tests.api.openstack_v1 import tools
from heat.tests import common


class ServiceControllerTest(tools.ControllerTest, common.HeatTestCase):

    def setUp(self):
        super(ServiceControllerTest, self).setUp()
        self.controller = services.ServiceController({})

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_index(self, mock_enforce):
        self._mock_enforce_setup(
            mock_enforce, 'index')
        req = self._get('/services')
        return_value = []
        with mock.patch.object(
                self.controller.rpc_client,
                'list_services',
                return_value=return_value):
            resp = self.controller.index(req, tenant_id=self.tenant)
            self.assertEqual(
                {'services': []}, resp)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_index_503(self, mock_enforce):
        self._mock_enforce_setup(
            mock_enforce, 'index')
        req = self._get('/services')
        with mock.patch.object(
                self.controller.rpc_client,
                'list_services',
                side_effect=exceptions.MessagingTimeout()):
            self.assertRaises(
                webob.exc.HTTPServiceUnavailable,
                self.controller.index, req, tenant_id=self.tenant)
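# ---------------------------------------------------------------------------
# Illustrative sketch (editor's addition): test_index_503 above shows the
# expected degradation path -- if the engine cannot be reached over RPC
# (oslo.messaging raises MessagingTimeout), the services controller answers
# 503 rather than letting the timeout escape.  The core of such a handler:
# ---------------------------------------------------------------------------
from oslo_messaging import exceptions as sketch_messaging_exc
from webob import exc as sketch_webob_exc


def sketch_list_services(rpc_client, context):
    try:
        svcs = rpc_client.list_services(context)
    except sketch_messaging_exc.MessagingTimeout:
        # The engine did not answer in time; report service unavailability.
        raise sketch_webob_exc.HTTPServiceUnavailable()
    return {'services': svcs}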
heat-10.0.2/heat/tests/api/openstack_v1/test_software_configs.py0000666000175000017500000001502013343562340024746 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json

import mock
import webob.exc

import heat.api.middleware.fault as fault
import heat.api.openstack.v1.software_configs as software_configs
from heat.common import exception as heat_exc
from heat.common import policy
from heat.tests.api.openstack_v1 import tools
from heat.tests import common


class SoftwareConfigControllerTest(tools.ControllerTest, common.HeatTestCase):

    def setUp(self):
        super(SoftwareConfigControllerTest, self).setUp()
        self.controller = software_configs.SoftwareConfigController({})

    def test_default(self):
        self.assertRaises(
            webob.exc.HTTPNotFound, self.controller.default, None)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_index(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'index')
        req = self._get('/software_configs')
        with mock.patch.object(
                self.controller.rpc_client,
                'list_software_configs',
                return_value=[]):
            resp = self.controller.index(req, tenant_id=self.tenant)
            self.assertEqual(
                {'software_configs': []}, resp)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_show(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'show')
        config_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        req = self._get('/software_configs/%s' % config_id)
        return_value = {
            'id': config_id,
            'name': 'config_mysql',
            'group': 'Heat::Shell',
            'config': '#!/bin/bash',
            'inputs': [],
            'outputs': [],
            'options': []}

        expected = {'software_config': return_value}
        with mock.patch.object(
                self.controller.rpc_client,
                'show_software_config',
                return_value=return_value):
            resp = self.controller.show(
                req, config_id=config_id, tenant_id=self.tenant)
            self.assertEqual(expected, resp)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_show_not_found(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'show')
        config_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        req = self._get('/software_configs/%s' % config_id)

        error = heat_exc.NotFound('Not found %s' % config_id)
        with mock.patch.object(
                self.controller.rpc_client,
                'show_software_config',
                side_effect=tools.to_remote_error(error)):
            resp = tools.request_with_middleware(fault.FaultWrapper,
                                                 self.controller.show,
                                                 req, config_id=config_id,
                                                 tenant_id=self.tenant)

            self.assertEqual(404, resp.json['code'])
            self.assertEqual('NotFound', resp.json['error']['type'])

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_create(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create')
        body = {
            'name': 'config_mysql',
            'group': 'Heat::Shell',
            'config': '#!/bin/bash',
            'inputs': [],
            'outputs': [],
            'options': []}
        return_value = body.copy()
        config_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        return_value['id'] = config_id
        req = self._post('/software_configs', json.dumps(body))

        expected = {'software_config': return_value}
        with mock.patch.object(
                self.controller.rpc_client,
                'create_software_config',
                return_value=return_value):
            resp = self.controller.create(
                req, body=body, tenant_id=self.tenant)
            self.assertEqual(expected, resp)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_delete(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'delete')
        config_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        req = self._delete('/software_configs/%s' % config_id)
        return_value = None
        with mock.patch.object(
                self.controller.rpc_client,
                'delete_software_config',
                return_value=return_value):
            self.assertRaises(
                webob.exc.HTTPNoContent, self.controller.delete,
                req, config_id=config_id, tenant_id=self.tenant)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_delete_error(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'delete')
        config_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        req = self._delete('/software_configs/%s' % config_id)
        error = Exception('something wrong')
        with mock.patch.object(
                self.controller.rpc_client,
                'delete_software_config',
                side_effect=tools.to_remote_error(error)):
            resp = tools.request_with_middleware(
                fault.FaultWrapper, self.controller.delete,
                req, config_id=config_id, tenant_id=self.tenant)

            self.assertEqual(500, resp.json['code'])
            self.assertEqual('Exception', resp.json['error']['type'])

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_delete_not_found(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'delete')
        config_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        req = self._delete('/software_configs/%s' % config_id)
        error = heat_exc.NotFound('Not found %s' % config_id)
        with mock.patch.object(
                self.controller.rpc_client,
                'delete_software_config',
                side_effect=tools.to_remote_error(error)):
            resp = tools.request_with_middleware(
                fault.FaultWrapper, self.controller.delete,
                req, config_id=config_id, tenant_id=self.tenant)

            self.assertEqual(404, resp.json['code'])
            self.assertEqual('NotFound', resp.json['error']['type'])
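# ---------------------------------------------------------------------------
# Illustrative example (editor's addition): the show/create tests above use
# the canonical software-config payload.  For reference, a request body as
# POSTed to /v1/{tenant_id}/software_configs looks like this (values taken
# directly from the tests; the id is assigned by the engine and appears only
# in the response):
# ---------------------------------------------------------------------------
SAMPLE_SOFTWARE_CONFIG_BODY = {
    'name': 'config_mysql',
    'group': 'Heat::Shell',
    'config': '#!/bin/bash',
    'inputs': [],
    'outputs': [],
    'options': [],
}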
heat-10.0.2/heat/tests/api/openstack_v1/tools.py0000666000175000017500000001102313343562340021502 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_config import cfg
from oslo_log import log
from oslo_messaging._drivers import common as rpc_common
import six
import webob.exc

from heat.common import wsgi
from heat.rpc import api as rpc_api
from heat.tests import utils


def request_with_middleware(middleware, func, req, *args, **kwargs):

    @webob.dec.wsgify
    def _app(req):
        return func(req, *args, **kwargs)

    resp = middleware(_app).process_request(req)
    return resp


def to_remote_error(error):
    """Converts the given exception to the one with the _Remote suffix."""
    exc_info = (type(error), error, None)
    serialized = rpc_common.serialize_remote_exception(exc_info)
    remote_error = rpc_common.deserialize_remote_exception(
        serialized, ["heat.common.exception"])
    return remote_error


class ControllerTest(object):
    """Common utilities for testing API Controllers."""

    def __init__(self, *args, **kwargs):
        super(ControllerTest, self).__init__(*args, **kwargs)

        cfg.CONF.set_default('host', 'server.test')
        self.topic = rpc_api.ENGINE_TOPIC
        self.api_version = '1.0'
        self.tenant = 't'
        self.mock_enforce = None
        log.register_options(cfg.CONF)

    def _environ(self, path):
        return {
            'SERVER_NAME': 'server.test',
            'SERVER_PORT': 8004,
            'SCRIPT_NAME': '/v1',
            'PATH_INFO': '/%s' % self.tenant + path,
            'wsgi.url_scheme': 'http',
        }

    def _simple_request(self, path, params=None, method='GET'):
        environ = self._environ(path)
        environ['REQUEST_METHOD'] = method

        if params:
            qs = "&".join(["=".join([k, str(params[k])]) for k in params])
            environ['QUERY_STRING'] = qs

        req = wsgi.Request(environ)
        req.context = utils.dummy_context('api_test_user', self.tenant)
        self.context = req.context
        return req

    def _get(self, path, params=None):
        return self._simple_request(path, params=params)

    def _delete(self, path):
        return self._simple_request(path, method='DELETE')

    def _abandon(self, path):
        return self._simple_request(path, method='DELETE')

    def _data_request(self, path, data, content_type='application/json',
                      method='POST'):
        environ = self._environ(path)
        environ['REQUEST_METHOD'] = method

        req = wsgi.Request(environ)
        req.context = utils.dummy_context('api_test_user', self.tenant)
        self.context = req.context
        req.body = six.b(data)
        return req

    def _post(self, path, data, content_type='application/json'):
        return self._data_request(path, data, content_type)

    def _put(self, path, data, content_type='application/json'):
        return self._data_request(path, data, content_type, method='PUT')

    def _patch(self, path, data, content_type='application/json'):
        return self._data_request(path, data, content_type, method='PATCH')

    def _url(self, id):
        host = 'server.test:8004'
        path = '/v1/%(tenant)s/stacks/%(stack_name)s/%(stack_id)s%(path)s' % id
        return 'http://%s%s' % (host, path)

    def tearDown(self):
        # Common tearDown to assert that policy enforcement happens for all
        # controller actions
        if self.mock_enforce:
            self.mock_enforce.assert_called_with(
                action=self.action,
                context=self.context,
                scope=self.controller.REQUEST_SCOPE,
                is_registered_policy=mock.ANY
            )
            self.assertEqual(self.expected_request_count,
                             len(self.mock_enforce.call_args_list))
        super(ControllerTest, self).tearDown()

    def _mock_enforce_setup(self, mocker, action, allowed=True,
                            expected_request_count=1):
        self.mock_enforce = mocker
        self.action = action
        self.mock_enforce.return_value = allowed
        self.expected_request_count = expected_request_count
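# ---------------------------------------------------------------------------
# Illustrative usage (editor's addition): to_remote_error() above emulates
# what oslo.messaging does when an exception crosses the RPC boundary -- the
# engine-side exception is serialized and rehydrated on the API side as a
# subclass whose type name carries a "_Remote" suffix, which the fault
# middleware then maps to an HTTP error code.  A sketch of that round trip
# (shown as comments so it stays illustrative):
#
#     from heat.common import exception as heat_exc
#
#     error = heat_exc.EntityNotFound(entity='Stack', name='missing')
#     remote = to_remote_error(error)
#     print(type(remote).__name__)   # -> 'EntityNotFound_Remote'
#     assert isinstance(remote, heat_exc.EntityNotFound)
#
# This is why the controller tests above can raise the converted error from
# a mocked EngineClient.call and still expect resp.json['error']['type'] to
# read 'EntityNotFound': the fault middleware strips the suffix again.
# ---------------------------------------------------------------------------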
heat-10.0.2/heat/tests/api/openstack_v1/test_software_deployments.py0000666000175000017500000002573513343562340025665 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json

import mock
import webob.exc

import heat.api.middleware.fault as fault
import heat.api.openstack.v1.software_deployments as software_deployments
from heat.common import exception as heat_exc
from heat.common import policy
from heat.tests.api.openstack_v1 import tools
from heat.tests import common


class SoftwareDeploymentControllerTest(tools.ControllerTest,
                                       common.HeatTestCase):

    def setUp(self):
        super(SoftwareDeploymentControllerTest, self).setUp()
        self.controller = software_deployments.SoftwareDeploymentController({})

    def test_default(self):
        self.assertRaises(
            webob.exc.HTTPNotFound, self.controller.default, None)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_index(self, mock_enforce):
        self._mock_enforce_setup(
            mock_enforce, 'index', expected_request_count=2)
        req = self._get('/software_deployments')
        return_value = []
        with mock.patch.object(
                self.controller.rpc_client,
                'list_software_deployments',
                return_value=return_value) as mock_call:
            resp = self.controller.index(req, tenant_id=self.tenant)
            self.assertEqual(
                {'software_deployments': []}, resp)
            whitelist = mock_call.call_args[1]
            self.assertEqual({}, whitelist)
        server_id = 'fb322564-7927-473d-8aad-68ae7fbf2abf'
        req = self._get('/software_deployments', {'server_id': server_id})
        with mock.patch.object(
                self.controller.rpc_client,
                'list_software_deployments',
                return_value=return_value) as mock_call:
            resp = self.controller.index(req, tenant_id=self.tenant)
            self.assertEqual(
                {'software_deployments': []}, resp)
            whitelist = mock_call.call_args[1]
            self.assertEqual({'server_id': server_id}, whitelist)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_show(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'show')
        deployment_id = '38eccf10-97e5-4ae8-9d37-b577c9801750'
        config_id = 'd00ba4aa-db33-42e1-92f4-2a6469260107'
        server_id = 'fb322564-7927-473d-8aad-68ae7fbf2abf'
        req = self._get('/software_deployments/%s' % deployment_id)
        return_value = {
            'id': deployment_id,
            'server_id': server_id,
            'input_values': {},
            'output_values': {},
            'action': 'INIT',
            'status': 'COMPLETE',
            'status_reason': None,
            'config_id': config_id,
            'config': '#!/bin/bash',
            'name': 'config_mysql',
            'group': 'Heat::Shell',
            'inputs': [],
            'outputs': [],
            'options': []}

        expected = {'software_deployment': return_value}
        with mock.patch.object(
                self.controller.rpc_client,
                'show_software_deployment',
                return_value=return_value):
            resp = self.controller.show(
                req, deployment_id=config_id, tenant_id=self.tenant)
            self.assertEqual(expected, resp)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_show_not_found(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'show')
        deployment_id = '38eccf10-97e5-4ae8-9d37-b577c9801750'
        req = self._get('/software_deployments/%s' % deployment_id)

        error = heat_exc.NotFound('Not found %s' % deployment_id)
        with mock.patch.object(
                self.controller.rpc_client,
                'show_software_deployment',
                side_effect=tools.to_remote_error(error)):
            resp = tools.request_with_middleware(
                fault.FaultWrapper, self.controller.show,
                req, deployment_id=deployment_id, tenant_id=self.tenant)

            self.assertEqual(404, resp.json['code'])
            self.assertEqual('NotFound', resp.json['error']['type'])

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_create(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create')
        config_id = 'd00ba4aa-db33-42e1-92f4-2a6469260107'
        server_id = 'fb322564-7927-473d-8aad-68ae7fbf2abf'
        body = {
            'server_id': server_id,
            'input_values': {},
            'action': 'INIT',
            'status': 'COMPLETE',
            'status_reason': None,
            'config_id': config_id}
        return_value = body.copy()
        deployment_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        return_value['id'] = deployment_id
        req = self._post('/software_deployments', json.dumps(body))

        expected = {'software_deployment': return_value}
        with mock.patch.object(
                self.controller.rpc_client,
                'create_software_deployment',
                return_value=return_value):
            resp = self.controller.create(
                req, body=body, tenant_id=self.tenant)
            self.assertEqual(expected, resp)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_update(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update')
        config_id = 'd00ba4aa-db33-42e1-92f4-2a6469260107'
        server_id = 'fb322564-7927-473d-8aad-68ae7fbf2abf'
        body = {
            'input_values': {},
            'action': 'INIT',
            'status': 'COMPLETE',
            'status_reason': None,
            'config_id': config_id}
        return_value = body.copy()
        deployment_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        return_value['id'] = deployment_id
        req = self._put('/software_deployments/%s' % deployment_id,
                        json.dumps(body))
        return_value['server_id'] = server_id
        expected = {'software_deployment': return_value}
        with mock.patch.object(
                self.controller.rpc_client,
                'update_software_deployment',
                return_value=return_value):
            resp = self.controller.update(
                req, deployment_id=deployment_id,
                body=body, tenant_id=self.tenant)
            self.assertEqual(expected, resp)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_update_no_input_values(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update')
        config_id = 'd00ba4aa-db33-42e1-92f4-2a6469260107'
        server_id = 'fb322564-7927-473d-8aad-68ae7fbf2abf'
        body = {
            'action': 'INIT',
            'status': 'COMPLETE',
            'status_reason': None,
            'config_id': config_id}
        return_value = body.copy()
        deployment_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        return_value['id'] = deployment_id
        req = self._put('/software_deployments/%s' % deployment_id,
                        json.dumps(body))
        return_value['server_id'] = server_id
        expected = {'software_deployment': return_value}
        with mock.patch.object(
                self.controller.rpc_client,
                'update_software_deployment',
                return_value=return_value):
            resp = self.controller.update(
                req, deployment_id=deployment_id,
                body=body, tenant_id=self.tenant)
            self.assertEqual(expected, resp)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_update_not_found(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'update')
        deployment_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        req = self._put('/software_deployments/%s' % deployment_id, '{}')
        error = heat_exc.NotFound('Not found %s' % deployment_id)
        with mock.patch.object(
                self.controller.rpc_client,
                'update_software_deployment',
                side_effect=tools.to_remote_error(error)):
            resp = tools.request_with_middleware(
                fault.FaultWrapper, self.controller.update,
                req, deployment_id=deployment_id,
                body={}, tenant_id=self.tenant)

            self.assertEqual(404, resp.json['code'])
            self.assertEqual('NotFound', resp.json['error']['type'])

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_delete(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'delete')
        deployment_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        req = self._delete('/software_deployments/%s' % deployment_id)
        return_value = None
        with mock.patch.object(
                self.controller.rpc_client,
                'delete_software_deployment',
                return_value=return_value):
            self.assertRaises(
                webob.exc.HTTPNoContent, self.controller.delete,
                req, deployment_id=deployment_id, tenant_id=self.tenant)

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_delete_error(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'delete')
        deployment_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        req = self._delete('/software_deployments/%s' % deployment_id)
        error = Exception('something wrong')
        with mock.patch.object(
                self.controller.rpc_client,
                'delete_software_deployment',
                side_effect=tools.to_remote_error(error)):
            resp = tools.request_with_middleware(
                fault.FaultWrapper, self.controller.delete,
                req, deployment_id=deployment_id, tenant_id=self.tenant)

            self.assertEqual(500, resp.json['code'])
            self.assertEqual('Exception', resp.json['error']['type'])

    @mock.patch.object(policy.Enforcer, 'enforce')
    def test_delete_not_found(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'delete')
        deployment_id = 'a45559cd-8736-4375-bc39-d6a7bb62ade2'
        req = self._delete('/software_deployments/%s' % deployment_id)
        error = heat_exc.NotFound('Not Found %s' % deployment_id)
        with mock.patch.object(
                self.controller.rpc_client,
                'delete_software_deployment',
                side_effect=tools.to_remote_error(error)):
            resp = tools.request_with_middleware(
                fault.FaultWrapper, self.controller.delete,
                req, deployment_id=deployment_id, tenant_id=self.tenant)

            self.assertEqual(404, resp.json['code'])
            self.assertEqual('NotFound', resp.json['error']['type'])
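# ---------------------------------------------------------------------------
# Illustrative example (editor's addition): the create/update tests above
# imply the deployment payload shape.  On create the caller binds a config
# to a server; on update the server_id is omitted from the request body and
# reappears, filled in by the engine, in the response (values taken from the
# tests):
# ---------------------------------------------------------------------------
SAMPLE_DEPLOYMENT_CREATE_BODY = {
    'server_id': 'fb322564-7927-473d-8aad-68ae7fbf2abf',
    'config_id': 'd00ba4aa-db33-42e1-92f4-2a6469260107',
    'input_values': {},
    'action': 'INIT',
    'status': 'COMPLETE',
    'status_reason': None,
}
SAMPLE_DEPLOYMENT_UPDATE_BODY = {
    k: v for k, v in SAMPLE_DEPLOYMENT_CREATE_BODY.items()
    if k != 'server_id'
}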
heat-10.0.2/heat/tests/api/test_wsgi.py0000666000175000017500000004553313343562340017772 0ustar zuulzuul00000000000000
#
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import fixtures
import json
import mock
import six
import socket
import webob

from oslo_config import cfg

from heat.api.aws import exception as aws_exception
from heat.common import exception
from heat.common import wsgi
from heat.tests import common


class RequestTest(common.HeatTestCase):

    def test_content_type_missing(self):
        request = wsgi.Request.blank('/tests/123')
        self.assertRaises(exception.InvalidContentType,
                          request.get_content_type, ('application/xml'))

    def test_content_type_unsupported(self):
        request = wsgi.Request.blank('/tests/123')
        request.headers["Content-Type"] = "text/html"
        self.assertRaises(exception.InvalidContentType,
                          request.get_content_type, ('application/xml'))

    def test_content_type_with_charset(self):
        request = wsgi.Request.blank('/tests/123')
        request.headers["Content-Type"] = "application/json; charset=UTF-8"
        result = request.get_content_type(('application/json'))
        self.assertEqual("application/json", result)

    def test_content_type_from_accept_xml(self):
        request = wsgi.Request.blank('/tests/123')
        request.headers["Accept"] = "application/xml"
        result = request.best_match_content_type()
        self.assertEqual("application/json", result)

    def test_content_type_from_accept_json(self):
        request = wsgi.Request.blank('/tests/123')
        request.headers["Accept"] = "application/json"
        result = request.best_match_content_type()
        self.assertEqual("application/json", result)

    def test_content_type_from_accept_xml_json(self):
        request = wsgi.Request.blank('/tests/123')
        request.headers["Accept"] = "application/xml, application/json"
        result = request.best_match_content_type()
        self.assertEqual("application/json", result)

    def test_content_type_from_accept_json_xml_quality(self):
        request = wsgi.Request.blank('/tests/123')
        request.headers["Accept"] = ("application/json; q=0.3, "
                                     "application/xml; q=0.9")
        result = request.best_match_content_type()
        self.assertEqual("application/json", result)

    def test_content_type_accept_default(self):
        request = wsgi.Request.blank('/tests/123.unsupported')
        request.headers["Accept"] = "application/unsupported1"
        result = request.best_match_content_type()
        self.assertEqual("application/json", result)

    def test_best_match_language(self):
        # Test that we are actually invoking language negotiation by webob
        request = wsgi.Request.blank('/')
        accepted = 'unknown-lang'
        request.headers = {'Accept-Language': accepted}

        def fake_best_match(self, offers, default_match=None):
            # Best match on an unknown locale returns None
            return None

        with mock.patch.object(request.accept_language,
                               'best_match') as mock_match:
            mock_match.side_effect = fake_best_match
            self.assertIsNone(request.best_match_language())

        # If Accept-Language is missing or empty, match should be None
        request.headers = {'Accept-Language': ''}
        self.assertIsNone(request.best_match_language())
        request.headers.pop('Accept-Language')
        self.assertIsNone(request.best_match_language())


class ResourceTest(common.HeatTestCase):

    def test_get_action_args(self):
        env = {
            'wsgiorg.routing_args': [
                None,
                {
                    'controller': None,
                    'format': None,
                    'action': 'update',
                    'id': 12,
                },
            ],
        }

        expected = {'action': 'update', 'id': 12}
        actual = wsgi.Resource(None, None, None).get_action_args(env)

        self.assertEqual(expected, actual)

    def test_get_action_args_invalid_index(self):
        env = {'wsgiorg.routing_args': []}
        expected = {}
        actual = wsgi.Resource(None, None, None).get_action_args(env)
        self.assertEqual(expected, actual)

    def test_get_action_args_del_controller_error(self):
        actions = {'format': None,
                   'action': 'update',
                   'id': 12}
        env = {'wsgiorg.routing_args': [None, actions]}
        expected = {'action': 'update', 'id': 12}
        actual = wsgi.Resource(None, None, None).get_action_args(env)
        self.assertEqual(expected, actual)

    def test_get_action_args_del_format_error(self):
        actions = {'action': 'update', 'id': 12}
        env = {'wsgiorg.routing_args': [None, actions]}
        expected = {'action': 'update', 'id': 12}
        actual = wsgi.Resource(None, None, None).get_action_args(env)
        self.assertEqual(expected, actual)

    def test_dispatch(self):
        class Controller(object):
            def index(self, shirt, pants=None):
                return (shirt, pants)

        resource = wsgi.Resource(None, None, None)
        actual = resource.dispatch(Controller(), 'index', 'on', pants='off')
        expected = ('on', 'off')
        self.assertEqual(expected, actual)

    def test_dispatch_default(self):
        class Controller(object):
            def default(self, shirt, pants=None):
                return (shirt, pants)

        resource = wsgi.Resource(None, None, None)
        actual = resource.dispatch(Controller(), 'index', 'on', pants='off')
        expected = ('on', 'off')
        self.assertEqual(expected, actual)

    def test_dispatch_no_default(self):
        class Controller(object):
            def show(self, shirt, pants=None):
                return (shirt, pants)

        resource = wsgi.Resource(None, None, None)
        self.assertRaises(AttributeError, resource.dispatch,
                          Controller(), 'index', 'on', pants='off')

    def test_resource_call_error_handle(self):
        class Controller(object):
            def delete(self, req, identity):
                return (req, identity)

        actions = {'action': 'delete', 'id': 12, 'body': 'data'}
        env = {'wsgiorg.routing_args': [None, actions]}
        request = wsgi.Request.blank('/tests/123', environ=env)
        request.body = b'{"foo" : "value"}'
        resource = wsgi.Resource(Controller(),
                                 wsgi.JSONRequestDeserializer(),
                                 None)
        # The Resource does not throw webob.HTTPExceptions, since they
        # would be considered responses by wsgi and the request flow would
        # end; instead they are wrapped so they can reach the fault
        # application, where they are converted to a nice JSON/XML response
        e = self.assertRaises(exception.HTTPExceptionDisguise,
                              resource, request)
        self.assertIsInstance(e.exc, webob.exc.HTTPBadRequest)

    @mock.patch.object(wsgi, 'translate_exception')
    def test_resource_call_error_handle_localized(self, mock_translate):
        class Controller(object):
            def delete(self, req, identity):
                return (req, identity)

        def fake_translate_exception(ex, locale):
            return translated_ex

        mock_translate.side_effect = fake_translate_exception

        actions = {'action': 'delete', 'id': 12, 'body': 'data'}
        env = {'wsgiorg.routing_args': [None, actions]}
        request = wsgi.Request.blank('/tests/123', environ=env)
        request.body = b'{"foo" : "value"}'
        message_es = "No Encontrado"
        translated_ex = webob.exc.HTTPBadRequest(message_es)

        resource = wsgi.Resource(Controller(),
                                 wsgi.JSONRequestDeserializer(),
                                 None)
        e = self.assertRaises(exception.HTTPExceptionDisguise,
                              resource, request)
        self.assertEqual(message_es, six.text_type(e.exc))


class ResourceExceptionHandlingTest(common.HeatTestCase):
    scenarios = [
        ('client_exceptions', dict(
            exception=exception.StackResourceLimitExceeded,
            exception_catch=exception.StackResourceLimitExceeded)),
        ('aws_exception', dict(
            exception=aws_exception.HeatAccessDeniedError,
            exception_catch=aws_exception.HeatAccessDeniedError)),
        ('webob_bad_request', dict(
            exception=webob.exc.HTTPBadRequest,
            exception_catch=exception.HTTPExceptionDisguise)),
        ('webob_not_found', dict(
            exception=webob.exc.HTTPNotFound,
            exception_catch=exception.HTTPExceptionDisguise)),
    ]

    def test_resource_client_exceptions_dont_log_error(self):
        class Controller(object):
            def __init__(self, exception_to_raise):
                self.exception_to_raise = exception_to_raise

            def raise_exception(self, req, body):
                raise self.exception_to_raise()

        actions = {'action': 'raise_exception', 'body': 'data'}
        env = {'wsgiorg.routing_args': [None, actions]}
        request = wsgi.Request.blank('/tests/123', environ=env)
        request.body = b'{"foo" : "value"}'
        resource = wsgi.Resource(Controller(self.exception),
                                 wsgi.JSONRequestDeserializer(),
                                 None)
        e = self.assertRaises(self.exception_catch, resource, request)
        e = e.exc if hasattr(e, 'exc') else e
        self.assertNotIn(six.text_type(e), self.LOG.output)


class JSONRequestDeserializerTest(common.HeatTestCase):

    def test_has_body_no_content_length(self):
        request = wsgi.Request.blank('/')
        request.method = 'POST'
        request.body = b'asdf'
        request.headers.pop('Content-Length')
        request.headers['Content-Type'] = 'application/json'
        self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request))

    def test_has_body_zero_content_length(self):
        request = wsgi.Request.blank('/')
        request.method = 'POST'
        request.body = b'asdf'
        request.headers['Content-Length'] = 0
        request.headers['Content-Type'] = 'application/json'
        self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request))

    def test_has_body_has_content_length_no_content_type(self):
        request = wsgi.Request.blank('/')
        request.method = 'POST'
        request.body = b'{"key": "value"}'
        self.assertIn('Content-Length', request.headers)
        self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))

    def test_has_body_has_content_length_plain_content_type(self):
        request = wsgi.Request.blank('/')
        request.method = 'POST'
        request.body = b'{"key": "value"}'
        self.assertIn('Content-Length', request.headers)
        request.headers['Content-Type'] = 'text/plain'
        self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))

    def test_has_body_has_content_type_malformed(self):
        request = wsgi.Request.blank('/')
        request.method = 'POST'
        request.body = b'asdf'
        self.assertIn('Content-Length', request.headers)
        request.headers['Content-Type'] = 'application/json'
        self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request))

    def test_has_body_has_content_type(self):
        request = wsgi.Request.blank('/')
        request.method = 'POST'
        request.body = b'{"key": "value"}'
        self.assertIn('Content-Length', request.headers)
        request.headers['Content-Type'] = 'application/json'
        self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))

    def test_has_body_has_wrong_content_type(self):
        request = wsgi.Request.blank('/')
        request.method = 'POST'
        request.body = b'{"key": "value"}'
        self.assertIn('Content-Length', request.headers)
        request.headers['Content-Type'] = 'application/xml'
        self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request))

    def test_has_body_has_aws_content_type_only(self):
        request = wsgi.Request.blank('/?ContentType=JSON')
        request.method = 'GET'
        request.body = b'{"key": "value"}'
        self.assertIn('Content-Length', request.headers)
        self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))

    def test_has_body_respect_aws_content_type(self):
        request = wsgi.Request.blank('/?ContentType=JSON')
        request.method = 'GET'
        request.body = b'{"key": "value"}'
        self.assertIn('Content-Length', request.headers)
        request.headers['Content-Type'] = 'application/xml'
        self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))

    def test_has_body_content_type_with_get(self):
        request = wsgi.Request.blank('/')
        request.method = 'GET'
        request.body = b'{"key": "value"}'
        self.assertIn('Content-Length', request.headers)
        self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))

    def test_no_body_no_content_length(self):
        request = wsgi.Request.blank('/')
        self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request))

    def test_from_json(self):
        fixture = '{"key": "value"}'
        expected = {"key": "value"}
        actual = wsgi.JSONRequestDeserializer().from_json(fixture)
        self.assertEqual(expected, actual)

    def test_from_json_malformed(self):
        fixture = 'kjasdklfjsklajf'
        self.assertRaises(webob.exc.HTTPBadRequest,
                          wsgi.JSONRequestDeserializer().from_json, fixture)

    def test_default_no_body(self):
        request = wsgi.Request.blank('/')
        actual = wsgi.JSONRequestDeserializer().default(request)
        expected = {}
        self.assertEqual(expected, actual)

    def test_default_with_body(self):
        request = wsgi.Request.blank('/')
        request.method = 'POST'
        request.body = b'{"key": "value"}'
        actual = wsgi.JSONRequestDeserializer().default(request)
        expected = {"body": {"key": "value"}}
        self.assertEqual(expected, actual)

    def test_default_with_get_with_body(self):
        request = wsgi.Request.blank('/')
        request.method = 'GET'
        request.body = b'{"key": "value"}'
        actual = wsgi.JSONRequestDeserializer().default(request)
        expected = {"body": {"key": "value"}}
        self.assertEqual(expected, actual)

    def test_default_with_get_with_body_with_aws(self):
        request = wsgi.Request.blank('/?ContentType=JSON')
        request.method = 'GET'
        request.body = b'{"key": "value"}'
        actual = wsgi.JSONRequestDeserializer().default(request)
        expected = {"body": {"key": "value"}}
        self.assertEqual(expected, actual)

    def test_from_json_exceeds_max_json_mb(self):
        cfg.CONF.set_override('max_json_body_size', 10)
        body = json.dumps(['a'] * cfg.CONF.max_json_body_size)
        self.assertGreater(len(body), cfg.CONF.max_json_body_size)
        error = self.assertRaises(exception.RequestLimitExceeded,
                                  wsgi.JSONRequestDeserializer().from_json,
                                  body)
        msg = ('Request limit exceeded: JSON body size '
               '(%s bytes) exceeds maximum allowed size (%s bytes).'
               % (len(body), cfg.CONF.max_json_body_size))
        self.assertEqual(msg, six.text_type(error))
% ( len(body), cfg.CONF.max_json_body_size)) self.assertEqual(msg, six.text_type(error)) class GetSocketTestCase(common.HeatTestCase): def setUp(self): super(GetSocketTestCase, self).setUp() self.useFixture(fixtures.MonkeyPatch( "heat.common.wsgi.get_bind_addr", lambda x, y: ('192.168.0.13', 1234))) addr_info_list = [(2, 1, 6, '', ('192.168.0.13', 80)), (2, 2, 17, '', ('192.168.0.13', 80)), (2, 3, 0, '', ('192.168.0.13', 80))] self.useFixture(fixtures.MonkeyPatch( "heat.common.wsgi.socket.getaddrinfo", lambda *x: addr_info_list)) self.useFixture(fixtures.MonkeyPatch( "heat.common.wsgi.time.time", mock.Mock(side_effect=[0, 1, 5, 10, 20, 35]))) wsgi.cfg.CONF.heat_api.cert_file = '/etc/ssl/cert' wsgi.cfg.CONF.heat_api.key_file = '/etc/ssl/key' wsgi.cfg.CONF.heat_api.ca_file = '/etc/ssl/ca_cert' wsgi.cfg.CONF.heat_api.tcp_keepidle = 600 def test_correct_configure_socket(self): mock_socket = mock.Mock() self.useFixture(fixtures.MonkeyPatch( 'heat.common.wsgi.ssl.wrap_socket', mock_socket)) self.useFixture(fixtures.MonkeyPatch( 'heat.common.wsgi.eventlet.listen', lambda *x, **y: mock_socket)) server = wsgi.Server(name='heat-api', conf=cfg.CONF.heat_api) server.default_port = 1234 server.configure_socket() self.assertIn(mock.call.setsockopt( socket.SOL_SOCKET, socket.SO_REUSEADDR, 1), mock_socket.mock_calls) self.assertIn(mock.call.setsockopt( socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1), mock_socket.mock_calls) if hasattr(socket, 'TCP_KEEPIDLE'): self.assertIn(mock.call().setsockopt( socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, wsgi.cfg.CONF.heat_api.tcp_keepidle), mock_socket.mock_calls) def test_get_socket_without_all_ssl_reqs(self): wsgi.cfg.CONF.heat_api.key_file = None self.assertRaises(RuntimeError, wsgi.get_socket, wsgi.cfg.CONF.heat_api, 1234) def test_get_socket_with_bind_problems(self): self.useFixture(fixtures.MonkeyPatch( 'heat.common.wsgi.eventlet.listen', mock.Mock(side_effect=( [wsgi.socket.error(socket.errno.EADDRINUSE)] * 3 + [None])))) self.useFixture(fixtures.MonkeyPatch( 'heat.common.wsgi.ssl.wrap_socket', lambda *x, **y: None)) self.assertRaises(RuntimeError, wsgi.get_socket, wsgi.cfg.CONF.heat_api, 1234) def test_get_socket_with_unexpected_socket_errno(self): self.useFixture(fixtures.MonkeyPatch( 'heat.common.wsgi.eventlet.listen', mock.Mock(side_effect=wsgi.socket.error(socket.errno.ENOMEM)))) self.useFixture(fixtures.MonkeyPatch( 'heat.common.wsgi.ssl.wrap_socket', lambda *x, **y: None)) self.assertRaises(wsgi.socket.error, wsgi.get_socket, wsgi.cfg.CONF.heat_api, 1234) heat-10.0.2/heat/tests/api/aws/0000775000175000017500000000000013343562672016176 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/api/aws/test_api_aws.py0000666000175000017500000002430513343562340021230 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
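# ---------------------------------------------------------------------
# Illustrative sketch (added commentary, not part of the original
# module): the AWS-compatible API flattens list/map parameters into
# dotted query keys such as 'Parameters.member.1.ParameterKey'. The
# tests below exercise the real helpers in heat.api.aws.utils; the
# function here is a minimal stdlib-only approximation of the pairing
# logic, and its name is hypothetical.
def _sketch_extract_param_pairs(params, prefix, keyname, valuename):
    """Pair '<prefix>.member.<N>.<keyname>' with the matching valuename."""
    keys, values = {}, {}
    for name, value in params.items():
        parts = name.split('.')
        if len(parts) != 4 or parts[:2] != [prefix, 'member']:
            continue  # garbage prefixes/suffixes and extra dots are ignored
        if parts[3] == keyname:
            keys[parts[2]] = value
        elif parts[3] == valuename:
            values[parts[2]] = value
    # only indices supplying both halves produce a pair
    return dict((keys[i], values[i]) for i in keys if i in values)
# e.g. _sketch_extract_param_pairs(
#          {'Parameters.member.1.ParameterKey': 'foo',
#           'Parameters.member.1.ParameterValue': 'bar'},
#          'Parameters', 'ParameterKey', 'ParameterValue') -> {'foo': 'bar'}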
from heat.api.aws import exception as aws_exception from heat.api.aws import utils as api_utils from heat.common import exception as common_exception from heat.tests import common class AWSCommonTest(common.HeatTestCase): """Tests the api/aws common components.""" # The tests def test_format_response(self): response = api_utils.format_response("Foo", "Bar") expected = {'FooResponse': {'FooResult': 'Bar'}} self.assertEqual(expected, response) def test_params_extract(self): p = {'Parameters.member.1.ParameterKey': 'foo', 'Parameters.member.1.ParameterValue': 'bar', 'Parameters.member.2.ParameterKey': 'blarg', 'Parameters.member.2.ParameterValue': 'wibble'} params = api_utils.extract_param_pairs(p, prefix='Parameters', keyname='ParameterKey', valuename='ParameterValue') self.assertEqual(2, len(params)) self.assertIn('foo', params) self.assertEqual('bar', params['foo']) self.assertIn('blarg', params) self.assertEqual('wibble', params['blarg']) def test_params_extract_dots(self): p = {'Parameters.member.1.1.ParameterKey': 'foo', 'Parameters.member.1.1.ParameterValue': 'bar', 'Parameters.member.2.1.ParameterKey': 'blarg', 'Parameters.member.2.1.ParameterValue': 'wibble'} params = api_utils.extract_param_pairs(p, prefix='Parameters', keyname='ParameterKey', valuename='ParameterValue') self.assertFalse(params) def test_params_extract_garbage(self): p = {'Parameters.member.1.ParameterKey': 'foo', 'Parameters.member.1.ParameterValue': 'bar', 'Foo.1.ParameterKey': 'blarg', 'Foo.1.ParameterValue': 'wibble'} params = api_utils.extract_param_pairs(p, prefix='Parameters', keyname='ParameterKey', valuename='ParameterValue') self.assertEqual(1, len(params)) self.assertIn('foo', params) self.assertEqual('bar', params['foo']) def test_params_extract_garbage_prefix(self): p = {'prefixParameters.member.Foo.Bar.ParameterKey': 'foo', 'Parameters.member.Foo.Bar.ParameterValue': 'bar'} params = api_utils.extract_param_pairs(p, prefix='Parameters', keyname='ParameterKey', valuename='ParameterValue') self.assertFalse(params) def test_params_extract_garbage_suffix(self): p = {'Parameters.member.1.ParameterKeysuffix': 'foo', 'Parameters.member.1.ParameterValue': 'bar'} params = api_utils.extract_param_pairs(p, prefix='Parameters', keyname='ParameterKey', valuename='ParameterValue') self.assertFalse(params) def test_extract_param_list(self): p = {'MetricData.member.1.MetricName': 'foo', 'MetricData.member.1.Unit': 'Bytes', 'MetricData.member.1.Value': 234333} params = api_utils.extract_param_list(p, prefix='MetricData') self.assertEqual(1, len(params)) self.assertIn('MetricName', params[0]) self.assertIn('Unit', params[0]) self.assertIn('Value', params[0]) self.assertEqual('foo', params[0]['MetricName']) self.assertEqual('Bytes', params[0]['Unit']) self.assertEqual(234333, params[0]['Value']) def test_extract_param_list_garbage_prefix(self): p = {'AMetricData.member.1.MetricName': 'foo', 'MetricData.member.1.Unit': 'Bytes', 'MetricData.member.1.Value': 234333} params = api_utils.extract_param_list(p, prefix='MetricData') self.assertEqual(1, len(params)) self.assertNotIn('MetricName', params[0]) self.assertIn('Unit', params[0]) self.assertIn('Value', params[0]) self.assertEqual('Bytes', params[0]['Unit']) self.assertEqual(234333, params[0]['Value']) def test_extract_param_list_garbage_prefix2(self): p = {'AMetricData.member.1.MetricName': 'foo', 'BMetricData.member.1.Unit': 'Bytes', 'CMetricData.member.1.Value': 234333} params = api_utils.extract_param_list(p, prefix='MetricData') self.assertEqual(0, len(params)) def 
test_extract_param_list_garbage_suffix(self): p = {'MetricData.member.1.AMetricName': 'foo', 'MetricData.member.1.Unit': 'Bytes', 'MetricData.member.1.Value': 234333} params = api_utils.extract_param_list(p, prefix='MetricData') self.assertEqual(1, len(params)) self.assertNotIn('MetricName', params[0]) self.assertIn('Unit', params[0]) self.assertIn('Value', params[0]) self.assertEqual('Bytes', params[0]['Unit']) self.assertEqual(234333, params[0]['Value']) def test_extract_param_list_multiple(self): p = {'MetricData.member.1.MetricName': 'foo', 'MetricData.member.1.Unit': 'Bytes', 'MetricData.member.1.Value': 234333, 'MetricData.member.2.MetricName': 'foo2', 'MetricData.member.2.Unit': 'Bytes', 'MetricData.member.2.Value': 12345} params = api_utils.extract_param_list(p, prefix='MetricData') self.assertEqual(2, len(params)) self.assertIn('MetricName', params[0]) self.assertIn('MetricName', params[1]) self.assertEqual('foo', params[0]['MetricName']) self.assertEqual('Bytes', params[0]['Unit']) self.assertEqual(234333, params[0]['Value']) self.assertEqual('foo2', params[1]['MetricName']) self.assertEqual('Bytes', params[1]['Unit']) self.assertEqual(12345, params[1]['Value']) def test_extract_param_list_multiple_missing(self): # Handle case where there is an empty list item p = {'MetricData.member.1.MetricName': 'foo', 'MetricData.member.1.Unit': 'Bytes', 'MetricData.member.1.Value': 234333, 'MetricData.member.3.MetricName': 'foo2', 'MetricData.member.3.Unit': 'Bytes', 'MetricData.member.3.Value': 12345} params = api_utils.extract_param_list(p, prefix='MetricData') self.assertEqual(2, len(params)) self.assertIn('MetricName', params[0]) self.assertIn('MetricName', params[1]) self.assertEqual('foo', params[0]['MetricName']) self.assertEqual('Bytes', params[0]['Unit']) self.assertEqual(234333, params[0]['Value']) self.assertEqual('foo2', params[1]['MetricName']) self.assertEqual('Bytes', params[1]['Unit']) self.assertEqual(12345, params[1]['Value']) def test_extract_param_list_badindex(self): p = {'MetricData.member.xyz.MetricName': 'foo', 'MetricData.member.$!&^.Unit': 'Bytes', 'MetricData.member.+.Value': 234333, 'MetricData.member.--.MetricName': 'foo2', 'MetricData.member._3.Unit': 'Bytes', 'MetricData.member.-1000.Value': 12345} params = api_utils.extract_param_list(p, prefix='MetricData') self.assertEqual(0, len(params)) def test_reformat_dict_keys(self): keymap = {"foo": "bar"} data = {"foo": 123} expected = {"bar": 123} result = api_utils.reformat_dict_keys(keymap, data) self.assertEqual(expected, result) def test_reformat_dict_keys_missing(self): keymap = {"foo": "bar", "foo2": "bar2"} data = {"foo": 123} expected = {"bar": 123} result = api_utils.reformat_dict_keys(keymap, data) self.assertEqual(expected, result) def test_get_param_value(self): params = {"foo": 123} self.assertEqual(123, api_utils.get_param_value(params, "foo")) def test_get_param_value_missing(self): params = {"foo": 123} self.assertRaises( aws_exception.HeatMissingParameterError, api_utils.get_param_value, params, "bar") def test_map_remote_error(self): ex = Exception() expected = aws_exception.HeatInternalFailureError self.assertIsInstance(aws_exception.map_remote_error(ex), expected) def test_map_remote_error_inval_param_error(self): ex = AttributeError() expected = aws_exception.HeatInvalidParameterValueError self.assertIsInstance(aws_exception.map_remote_error(ex), expected) def test_map_remote_error_denied_error(self): ex = common_exception.Forbidden() expected = aws_exception.HeatAccessDeniedError 
self.assertIsInstance(aws_exception.map_remote_error(ex), expected) def test_map_remote_error_already_exists_error(self): ex = common_exception.StackExists(stack_name="teststack") expected = aws_exception.AlreadyExistsError self.assertIsInstance(aws_exception.map_remote_error(ex), expected) def test_map_remote_error_invalid_action_error(self): ex = common_exception.ActionInProgress(stack_name="teststack", action="testing") expected = aws_exception.HeatActionInProgressError self.assertIsInstance(aws_exception.map_remote_error(ex), expected) def test_map_remote_error_request_limit_exceeded(self): ex = common_exception.RequestLimitExceeded(message="testing") expected = aws_exception.HeatRequestLimitExceeded self.assertIsInstance(aws_exception.map_remote_error(ex), expected) heat-10.0.2/heat/tests/api/aws/test_api_ec2token.py0000666000175000017500000005653713343562351022166 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import mock from oslo_config import cfg from oslo_utils import importutils import requests import six from heat.api.aws import ec2token from heat.api.aws import exception from heat.common import wsgi from heat.tests import common from heat.tests import utils class Ec2TokenTest(common.HeatTestCase): """Tests the Ec2Token middleware.""" def setUp(self): super(Ec2TokenTest, self).setUp() self.m.StubOutWithMock(requests, 'post') def _dummy_GET_request(self, params=None, environ=None): # Mangle the params dict into a query string params = params or {} environ = environ or {} qs = "&".join(["=".join([k, str(params[k])]) for k in params]) environ.update({'REQUEST_METHOD': 'GET', 'QUERY_STRING': qs}) req = wsgi.Request(environ) return req def test_conf_get_paste(self): dummy_conf = {'auth_uri': 'http://192.0.2.9/v2.0'} ec2 = ec2token.EC2Token(app=None, conf=dummy_conf) self.assertEqual('http://192.0.2.9/v2.0', ec2._conf_get('auth_uri')) self.assertEqual( 'http://192.0.2.9/v2.0/ec2tokens', ec2._conf_get_keystone_ec2_uri('http://192.0.2.9/v2.0')) def test_conf_get_opts(self): cfg.CONF.set_default('auth_uri', 'http://192.0.2.9/v2.0/', group='ec2authtoken') cfg.CONF.set_default('auth_uri', 'this-should-be-ignored', group='clients_keystone') ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('http://192.0.2.9/v2.0/', ec2._conf_get('auth_uri')) self.assertEqual( 'http://192.0.2.9/v2.0/ec2tokens', ec2._conf_get_keystone_ec2_uri('http://192.0.2.9/v2.0/')) def test_conf_get_clients_keystone_opts(self): cfg.CONF.set_default('auth_uri', None, group='ec2authtoken') cfg.CONF.set_default('auth_uri', 'http://192.0.2.9', group='clients_keystone') with mock.patch('keystoneauth1.discover.Discover') as discover: class MockDiscover(object): def url_for(self, endpoint): return 'http://192.0.2.9/v3/' discover.return_value = MockDiscover() ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual( 'http://192.0.2.9/v3/ec2tokens', ec2._conf_get_keystone_ec2_uri('http://192.0.2.9/v3/')) def test_conf_get_ssl_default_options(self): ec2 = ec2token.EC2Token(app=None, conf={}) 
self.assertTrue(ec2.ssl_options['verify'], "SSL verify should be True by default") self.assertIsNone(ec2.ssl_options['cert'], "SSL client cert should be None by default") def test_conf_ssl_insecure_option(self): ec2 = ec2token.EC2Token(app=None, conf={}) cfg.CONF.set_default('insecure', 'True', group='ec2authtoken') cfg.CONF.set_default('ca_file', None, group='ec2authtoken') self.assertFalse(ec2.ssl_options['verify']) def test_conf_get_ssl_opts(self): cfg.CONF.set_default('auth_uri', 'https://192.0.2.9/v2.0/', group='ec2authtoken') cfg.CONF.set_default('ca_file', '/home/user/cacert.pem', group='ec2authtoken') cfg.CONF.set_default('insecure', 'false', group='ec2authtoken') cfg.CONF.set_default('cert_file', '/home/user/mycert', group='ec2authtoken') cfg.CONF.set_default('key_file', '/home/user/mykey', group='ec2authtoken') ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('/home/user/cacert.pem', ec2.ssl_options['verify']) self.assertEqual(('/home/user/mycert', '/home/user/mykey'), ec2.ssl_options['cert']) def test_get_signature_param_old(self): params = {'Signature': 'foo'} dummy_req = self._dummy_GET_request(params) ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('foo', ec2._get_signature(dummy_req)) def test_get_signature_param_new(self): params = {'X-Amz-Signature': 'foo'} dummy_req = self._dummy_GET_request(params) ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('foo', ec2._get_signature(dummy_req)) def test_get_signature_header_space(self): req_env = {'HTTP_AUTHORIZATION': ('Authorization: foo Credential=foo/bar, ' 'SignedHeaders=content-type;host;x-amz-date, ' 'Signature=xyz')} dummy_req = self._dummy_GET_request(environ=req_env) ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('xyz', ec2._get_signature(dummy_req)) def test_get_signature_header_notlast(self): req_env = {'HTTP_AUTHORIZATION': ('Authorization: foo Credential=foo/bar, ' 'Signature=xyz,' 'SignedHeaders=content-type;host;x-amz-date ')} dummy_req = self._dummy_GET_request(environ=req_env) ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('xyz', ec2._get_signature(dummy_req)) def test_get_signature_header_nospace(self): req_env = {'HTTP_AUTHORIZATION': ('Authorization: foo Credential=foo/bar,' 'SignedHeaders=content-type;host;x-amz-date,' 'Signature=xyz')} dummy_req = self._dummy_GET_request(environ=req_env) ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('xyz', ec2._get_signature(dummy_req)) def test_get_access_param_old(self): params = {'AWSAccessKeyId': 'foo'} dummy_req = self._dummy_GET_request(params) ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('foo', ec2._get_access(dummy_req)) def test_get_access_param_new(self): params = {'X-Amz-Credential': 'foo/bar'} dummy_req = self._dummy_GET_request(params) ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('foo', ec2._get_access(dummy_req)) def test_get_access_header_space(self): req_env = {'HTTP_AUTHORIZATION': ('Authorization: foo Credential=foo/bar, ' 'SignedHeaders=content-type;host;x-amz-date, ' 'Signature=xyz')} dummy_req = self._dummy_GET_request(environ=req_env) ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('foo', ec2._get_access(dummy_req)) def test_get_access_header_nospace(self): req_env = {'HTTP_AUTHORIZATION': ('Authorization: foo Credential=foo/bar,' 'SignedHeaders=content-type;host;x-amz-date,' 'Signature=xyz')} dummy_req = self._dummy_GET_request(environ=req_env) ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('foo', ec2._get_access(dummy_req)) 
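    # Illustrative sketch (added commentary, not part of the original
    # class): the test_get_access_* / test_get_signature_* cases above
    # build AWS-SigV4-style Authorization headers such as
    #   'Authorization: foo Credential=foo/bar, SignedHeaders=..., Signature=xyz'
    # The real parsing lives in ec2token.EC2Token._get_access and
    # _get_signature; the helper below is an assumed simplification showing
    # how the access key (the first '/'-separated element of Credential)
    # and the Signature component can be pulled out regardless of field
    # ordering or spacing.
    @staticmethod
    def _sketch_parse_sigv4_header(header):
        import re
        access = re.search(r'Credential=([^/,\s]+)', header)
        signature = re.search(r'Signature=([^,\s]+)', header)
        return (access.group(1) if access else None,
                signature.group(1) if signature else None)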
def test_get_access_header_last(self): req_env = {'HTTP_AUTHORIZATION': ('Authorization: foo ' 'SignedHeaders=content-type;host;x-amz-date,' 'Signature=xyz,Credential=foo/bar')} dummy_req = self._dummy_GET_request(environ=req_env) ec2 = ec2token.EC2Token(app=None, conf={}) self.assertEqual('foo', ec2._get_access(dummy_req)) def test_call_x_auth_user(self): req_env = {'HTTP_X_AUTH_USER': 'foo'} dummy_req = self._dummy_GET_request(environ=req_env) ec2 = ec2token.EC2Token(app='xyz', conf={}) self.assertEqual('xyz', ec2.__call__(dummy_req)) def test_call_auth_nosig(self): req_env = {'HTTP_AUTHORIZATION': ('Authorization: foo Credential=foo/bar, ' 'SignedHeaders=content-type;host;x-amz-date')} dummy_req = self._dummy_GET_request(environ=req_env) ec2 = ec2token.EC2Token(app='xyz', conf={}) self.assertRaises(exception.HeatIncompleteSignatureError, ec2.__call__, dummy_req) def test_call_auth_nouser(self): req_env = {'HTTP_AUTHORIZATION': ('Authorization: foo ' 'SignedHeaders=content-type;host;x-amz-date,' 'Signature=xyz')} dummy_req = self._dummy_GET_request(environ=req_env) ec2 = ec2token.EC2Token(app='xyz', conf={}) self.assertRaises(exception.HeatMissingAuthenticationTokenError, ec2.__call__, dummy_req) def test_call_auth_noaccess(self): # If there's no accesskey in params or header, but there is a # Signature, we expect HeatMissingAuthenticationTokenError params = {'Signature': 'foo'} dummy_req = self._dummy_GET_request(params) ec2 = ec2token.EC2Token(app='xyz', conf={}) self.assertRaises(exception.HeatMissingAuthenticationTokenError, ec2.__call__, dummy_req) def test_call_x_auth_nouser_x_auth_user(self): req_env = {'HTTP_X_AUTH_USER': 'foo', 'HTTP_AUTHORIZATION': ('Authorization: foo ' 'SignedHeaders=content-type;host;x-amz-date,' 'Signature=xyz')} dummy_req = self._dummy_GET_request(environ=req_env) ec2 = ec2token.EC2Token(app='xyz', conf={}) self.assertEqual('xyz', ec2.__call__(dummy_req)) def _stub_http_connection(self, headers=None, params=None, response=None, req_url='http://123:5000/v3/ec2tokens', verify=True, cert=None): headers = headers or {} params = params or {} class DummyHTTPResponse(object): text = response headers = {'X-Subject-Token': 123} def json(self): return json.loads(self.text) body_hash = ('e3b0c44298fc1c149afbf4c8996fb9' '2427ae41e4649b934ca495991b7852b855') req_creds = json.dumps({"ec2Credentials": {"access": "foo", "headers": headers, "host": "heat:8000", "verb": "GET", "params": params, "signature": "xyz", "path": "/v1", "body_hash": body_hash}}) req_headers = {'Content-Type': 'application/json'} requests.post( req_url, data=utils.JsonEquals(req_creds), verify=verify, cert=cert, headers=req_headers).AndReturn(DummyHTTPResponse()) def test_call_ok(self): dummy_conf = {'auth_uri': 'http://123:5000/v2.0'} ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf) auth_str = ('Authorization: foo Credential=foo/bar, ' 'SignedHeaders=content-type;host;x-amz-date, ' 'Signature=xyz') req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1', 'HTTP_AUTHORIZATION': auth_str} dummy_req = self._dummy_GET_request(environ=req_env) ok_resp = json.dumps({'token': { 'project': {'name': 'tenant', 'id': 'abcd1234'}}}) self._stub_http_connection(headers={'Authorization': auth_str}, response=ok_resp) self.m.ReplayAll() self.assertEqual('woot', ec2.__call__(dummy_req)) self.assertEqual('tenant', dummy_req.headers['X-Tenant-Name']) self.assertEqual('abcd1234', dummy_req.headers['X-Tenant-Id']) self.m.VerifyAll() def test_call_ok_roles(self): dummy_conf = {'auth_uri': 
'http://123:5000/v2.0'} ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf) auth_str = ('Authorization: foo Credential=foo/bar, ' 'SignedHeaders=content-type;host;x-amz-date, ' 'Signature=xyz') req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1', 'HTTP_AUTHORIZATION': auth_str} dummy_req = self._dummy_GET_request(environ=req_env) ok_resp = json.dumps({ 'token': { 'id': 123, 'project': {'name': 'tenant', 'id': 'abcd1234'}, 'roles': [{'name': 'aa'}, {'name': 'bb'}, {'name': 'cc'}]} }) self._stub_http_connection(headers={'Authorization': auth_str}, response=ok_resp) self.m.ReplayAll() self.assertEqual('woot', ec2.__call__(dummy_req)) self.assertEqual('aa,bb,cc', dummy_req.headers['X-Roles']) self.m.VerifyAll() def test_call_err_tokenid(self): dummy_conf = {'auth_uri': 'http://123:5000/v2.0/'} ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf) auth_str = ('Authorization: foo Credential=foo/bar, ' 'SignedHeaders=content-type;host;x-amz-date, ' 'Signature=xyz') req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1', 'HTTP_AUTHORIZATION': auth_str} dummy_req = self._dummy_GET_request(environ=req_env) err_msg = "EC2 access key not found." err_resp = json.dumps({'error': {'message': err_msg}}) self._stub_http_connection(headers={'Authorization': auth_str}, response=err_resp) self.m.ReplayAll() self.assertRaises(exception.HeatInvalidClientTokenIdError, ec2.__call__, dummy_req) self.m.VerifyAll() def test_call_err_signature(self): dummy_conf = {'auth_uri': 'http://123:5000/v2.0'} ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf) auth_str = ('Authorization: foo Credential=foo/bar, ' 'SignedHeaders=content-type;host;x-amz-date, ' 'Signature=xyz') req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1', 'HTTP_AUTHORIZATION': auth_str} dummy_req = self._dummy_GET_request(environ=req_env) err_msg = "EC2 signature not supplied." 
err_resp = json.dumps({'error': {'message': err_msg}}) self._stub_http_connection(headers={'Authorization': auth_str}, response=err_resp) self.m.ReplayAll() self.assertRaises(exception.HeatSignatureError, ec2.__call__, dummy_req) self.m.VerifyAll() def test_call_err_denied(self): dummy_conf = {'auth_uri': 'http://123:5000/v2.0'} ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf) auth_str = ('Authorization: foo Credential=foo/bar, ' 'SignedHeaders=content-type;host;x-amz-date, ' 'Signature=xyz') req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1', 'HTTP_AUTHORIZATION': auth_str} dummy_req = self._dummy_GET_request(environ=req_env) err_resp = json.dumps({}) self._stub_http_connection(headers={'Authorization': auth_str}, response=err_resp) self.m.ReplayAll() self.assertRaises(exception.HeatAccessDeniedError, ec2.__call__, dummy_req) self.m.VerifyAll() def test_call_ok_v2(self): dummy_conf = {'auth_uri': 'http://123:5000/v2.0'} ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf) params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'} req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1'} dummy_req = self._dummy_GET_request(params, req_env) ok_resp = json.dumps({'token': { 'project': {'name': 'tenant', 'id': 'abcd1234'}}}) self._stub_http_connection(response=ok_resp, params={'AWSAccessKeyId': 'foo'}) self.m.ReplayAll() self.assertEqual('woot', ec2.__call__(dummy_req)) self.m.VerifyAll() def test_call_ok_multicloud(self): dummy_conf = { 'allowed_auth_uris': [ 'http://123:5000/v2.0', 'http://456:5000/v2.0'], 'multi_cloud': True } ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf) params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'} req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1'} dummy_req = self._dummy_GET_request(params, req_env) ok_resp = json.dumps({'token': { 'project': {'name': 'tenant', 'id': 'abcd1234'}}}) err_msg = "EC2 access key not found." err_resp = json.dumps({'error': {'message': err_msg}}) # first request fails self._stub_http_connection( req_url='http://123:5000/v2.0/ec2tokens', response=err_resp, params={'AWSAccessKeyId': 'foo'}) # second request passes self._stub_http_connection( req_url='http://456:5000/v2.0/ec2tokens', response=ok_resp, params={'AWSAccessKeyId': 'foo'}) self.m.ReplayAll() self.assertEqual('woot', ec2.__call__(dummy_req)) self.m.VerifyAll() def test_call_err_multicloud(self): dummy_conf = { 'allowed_auth_uris': [ 'http://123:5000/v2.0', 'http://456:5000/v2.0'], 'multi_cloud': True } ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf) params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'} req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1'} dummy_req = self._dummy_GET_request(params, req_env) err_resp1 = json.dumps({}) err_msg2 = "EC2 access key not found." 
err_resp2 = json.dumps({'error': {'message': err_msg2}}) # first request fails with HeatAccessDeniedError self._stub_http_connection( req_url='http://123:5000/v2.0/ec2tokens', response=err_resp1, params={'AWSAccessKeyId': 'foo'}) # second request fails with HeatInvalidClientTokenIdError self._stub_http_connection( req_url='http://456:5000/v2.0/ec2tokens', response=err_resp2, params={'AWSAccessKeyId': 'foo'}) self.m.ReplayAll() # raised error matches last failure self.assertRaises(exception.HeatInvalidClientTokenIdError, ec2.__call__, dummy_req) self.m.VerifyAll() def test_call_err_multicloud_none_allowed(self): dummy_conf = { 'allowed_auth_uris': [], 'multi_cloud': True } ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf) params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'} req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1'} dummy_req = self._dummy_GET_request(params, req_env) self.m.ReplayAll() self.assertRaises(exception.HeatAccessDeniedError, ec2.__call__, dummy_req) self.m.VerifyAll() def test_call_badconf_no_authuri(self): ec2 = ec2token.EC2Token(app='woot', conf={}) params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'} req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1'} dummy_req = self._dummy_GET_request(params, req_env) self.m.ReplayAll() ex = self.assertRaises(exception.HeatInternalFailureError, ec2.__call__, dummy_req) self.assertEqual('Service misconfigured', six.text_type(ex)) self.m.VerifyAll() def test_call_ok_auth_uri_ec2authtoken(self): dummy_url = 'http://123:5000/v2.0' cfg.CONF.set_default('auth_uri', dummy_url, group='ec2authtoken') ec2 = ec2token.EC2Token(app='woot', conf={}) params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'} req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1'} dummy_req = self._dummy_GET_request(params, req_env) ok_resp = json.dumps({'token': { 'project': {'name': 'tenant', 'id': 'abcd1234'}}}) self._stub_http_connection(response=ok_resp, params={'AWSAccessKeyId': 'foo'}) self.m.ReplayAll() self.assertEqual('woot', ec2.__call__(dummy_req)) self.m.VerifyAll() def test_call_ok_auth_uri_ec2authtoken_long(self): # Prove we tolerate a url which already includes the /ec2tokens path dummy_url = 'http://123:5000/v2.0/ec2tokens' cfg.CONF.set_default('auth_uri', dummy_url, group='ec2authtoken') ec2 = ec2token.EC2Token(app='woot', conf={}) params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'} req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1'} dummy_req = self._dummy_GET_request(params, req_env) ok_resp = json.dumps({'token': { 'project': {'name': 'tenant', 'id': 'abcd1234'}}}) self._stub_http_connection(response=ok_resp, params={'AWSAccessKeyId': 'foo'}) self.m.ReplayAll() self.assertEqual('woot', ec2.__call__(dummy_req)) self.m.VerifyAll() def test_call_ok_auth_uri_ks_authtoken(self): # Import auth_token to have keystone_authtoken settings setup. 
importutils.import_module('keystonemiddleware.auth_token') dummy_url = 'http://123:5000/v2.0' cfg.CONF.set_override('auth_uri', dummy_url, group='keystone_authtoken') ec2 = ec2token.EC2Token(app='woot', conf={}) params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'} req_env = {'SERVER_NAME': 'heat', 'SERVER_PORT': '8000', 'PATH_INFO': '/v1'} dummy_req = self._dummy_GET_request(params, req_env) ok_resp = json.dumps({'token': { 'project': {'name': 'tenant', 'id': 'abcd1234'}}}) self._stub_http_connection(response=ok_resp, params={'AWSAccessKeyId': 'foo'}) self.m.ReplayAll() self.assertEqual('woot', ec2.__call__(dummy_req)) self.m.VerifyAll() def test_filter_factory(self): ec2_filter = ec2token.EC2Token_filter_factory(global_conf={}) self.assertEqual('xyz', ec2_filter('xyz').application) def test_filter_factory_none_app(self): ec2_filter = ec2token.EC2Token_filter_factory(global_conf={}) self.assertIsNone(ec2_filter(None).application) heat-10.0.2/heat/tests/api/aws/__init__.py0000666000175000017500000000000013343562340020267 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/api/__init__.py0000666000175000017500000000000013343562340017475 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/test_common_exception.py0000666000175000017500000000232113343562340021602 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import exception from heat.tests import common class TestException(common.HeatTestCase): class SampleExceptionNoErrorCode(exception.HeatException): msg_fmt = 'Test exception' class SampleException(SampleExceptionNoErrorCode): error_code = 100 def test_heat_exception_no_error_code(self): ex = TestException.SampleExceptionNoErrorCode() self.assertEqual('Test exception', ex.message) def test_heat_exception_with_error_code(self): ex = TestException.SampleException() self.assertEqual('HEAT-E100 Test exception', ex.message) heat-10.0.2/heat/tests/test_server_tags.py0000666000175000017500000001306113343562352020566 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
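# ---------------------------------------------------------------------
# Illustrative sketch (added commentary, not part of the original
# module): AWS-style instance Tags arrive as a list of
# {'Key': ..., 'Value': ...} dicts, while the Nova server API takes a
# flat metadata mapping. The tests below build that mapping inline with
# a dict((tm['Key'], tm['Value']) ...) expression; the hypothetical
# helper here just names the same conversion.
def _sketch_tags_to_nova_metadata(tags):
    """Convert [{'Key': k, 'Value': v}, ...] into {k: v, ...}."""
    return dict((tag['Key'], tag['Value']) for tag in tags or [])
# e.g. _sketch_tags_to_nova_metadata([{'Key': 'Food', 'Value': 'yum'}])
#      -> {'Food': 'yum'}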
import uuid from heat.common import template_format from heat.engine.clients.os import glance from heat.engine.clients.os import nova from heat.engine import environment from heat.engine.resources.aws.ec2 import instance as instances from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import template as tmpl from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils instance_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "WordPress", "Parameters" : { "KeyName" : { "Description" : "KeyName", "Type" : "String", "Default" : "test" } }, "Resources" : { "WebServer": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId" : "CentOS 5.2", "InstanceType" : "256 MB Server", "KeyName" : "test", "UserData" : "wordpress" } } } } ''' class ServerTagsTest(common.HeatTestCase): def setUp(self): super(ServerTagsTest, self).setUp() self.fc = fakes_nova.FakeClient() def _mock_get_image_id_success(self, imageId_input, imageId): self.m.StubOutWithMock(glance.GlanceClientPlugin, 'find_image_by_name_or_id') glance.GlanceClientPlugin.find_image_by_name_or_id( imageId_input).MultipleTimes().AndReturn(imageId) def _setup_test_instance(self, intags=None, nova_tags=None): stack_name = 'tag_test' t = template_format.parse(instance_template) template = tmpl.Template(t, env=environment.Environment( {'KeyName': 'test'})) self.stack = parser.Stack(utils.dummy_context(), stack_name, template, stack_id=str(uuid.uuid4())) t['Resources']['WebServer']['Properties']['Tags'] = intags resource_defns = template.resource_definitions(self.stack) instance = instances.Instance('WebServer', resource_defns['WebServer'], self.stack) self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') nova.NovaClientPlugin._create().AndReturn(self.fc) self._mock_get_image_id_success('CentOS 5.2', 1) # need to resolve the template functions metadata = instance.metadata_get() server_userdata = instance.client_plugin().build_userdata( metadata, instance.properties['UserData'], 'ec2-user') self.m.StubOutWithMock(nova.NovaClientPlugin, 'build_userdata') nova.NovaClientPlugin.build_userdata( metadata, instance.properties['UserData'], 'ec2-user').AndReturn(server_userdata) self.m.StubOutWithMock(self.fc.servers, 'create') self.fc.servers.create( image=1, flavor=1, key_name='test', name=utils.PhysName(stack_name, instance.name), security_groups=None, userdata=server_userdata, scheduler_hints=None, meta=nova_tags, nics=None, availability_zone=None, block_device_mapping=None).AndReturn( self.fc.servers.list()[1]) return instance def test_instance_tags(self): tags = [{'Key': 'Food', 'Value': 'yum'}] metadata = dict((tm['Key'], tm['Value']) for tm in tags) instance = self._setup_test_instance(intags=tags, nova_tags=metadata) self.m.ReplayAll() scheduler.TaskRunner(instance.create)() # we are just using mock to verify that the tags get through to the # nova call. self.m.VerifyAll() def test_instance_tags_updated(self): tags = [{'Key': 'Food', 'Value': 'yum'}] metadata = dict((tm['Key'], tm['Value']) for tm in tags) instance = self._setup_test_instance(intags=tags, nova_tags=metadata) self.m.ReplayAll() scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) # we are just using mock to verify that the tags get through to the # nova call. 
self.m.VerifyAll() self.m.UnsetStubs() new_tags = [{'Key': 'Food', 'Value': 'yuk'}] new_metadata = dict((tm['Key'], tm['Value']) for tm in new_tags) self.m.StubOutWithMock(self.fc.servers, 'set_meta') self.fc.servers.set_meta(self.fc.servers.list()[1], new_metadata).AndReturn(None) self.stub_ImageConstraint_validate() self.stub_KeypairConstraint_validate() self.stub_FlavorConstraint_validate() self.m.ReplayAll() snippet = instance.stack.t.t['Resources'][instance.name] props = snippet['Properties'].copy() props['Tags'] = new_tags update_template = instance.t.freeze(properties=props) scheduler.TaskRunner(instance.update, update_template)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) self.m.VerifyAll() heat-10.0.2/heat/tests/test_engine_service.py0000666000175000017500000017453013343562352021240 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid import mock import mox from oslo_config import cfg from oslo_messaging.rpc import dispatcher from oslo_serialization import jsonutils as json import six from heat.common import context from heat.common import environment_util as env_util from heat.common import exception from heat.common import identifier from heat.common import policy from heat.common import template_format from heat.engine.cfn import template as cfntemplate from heat.engine import environment from heat.engine.hot import functions as hot_functions from heat.engine.hot import template as hottemplate from heat.engine import resource as res from heat.engine import service from heat.engine import stack as parser from heat.engine import template as templatem from heat.objects import stack as stack_object from heat.rpc import api as rpc_api from heat.tests import common from heat.tests.engine import tools from heat.tests import generic_resource as generic_rsrc from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils cfg.CONF.import_opt('engine_life_check_timeout', 'heat.common.config') cfg.CONF.import_opt('enable_stack_abandon', 'heat.common.config') wp_template_no_default = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "WordPress", "Parameters" : { "KeyName" : { "Description" : "KeyName", "Type" : "String" } }, "Resources" : { "WebServer": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId" : "F17-x86_64-gold", "InstanceType" : "m1.large", "KeyName" : "test", "UserData" : "wordpress" } } } } ''' user_policy_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Just a User", "Parameters" : {}, "Resources" : { "CfnUser" : { "Type" : "AWS::IAM::User", "Properties" : { "Policies" : [ { "Ref": "WebServerAccessPolicy"} ] } }, "WebServerAccessPolicy" : { "Type" : "OS::Heat::AccessPolicy", "Properties" : { "AllowedResources" : [ "WebServer" ] } }, "HostKeys" : { "Type" : "AWS::IAM::AccessKey", "Properties" : { "UserName" : {"Ref": "CfnUser"} } }, "WebServer": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId" : "F17-x86_64-gold", "InstanceType" : "m1.large", 
"KeyName" : "test", "UserData" : "wordpress" } } } } ''' server_config_template = ''' heat_template_version: 2013-05-23 resources: WebServer: type: OS::Nova::Server ''' class StackCreateTest(common.HeatTestCase): def test_wordpress_single_instance_stack_create(self): stack = tools.get_stack('test_stack', utils.dummy_context()) tools.setup_mocks(self.m, stack) self.m.ReplayAll() stack.store() stack.create() self.assertIsNotNone(stack['WebServer']) self.assertGreater(int(stack['WebServer'].resource_id), 0) self.assertNotEqual(stack['WebServer'].ipaddress, '0.0.0.0') def test_wordpress_single_instance_stack_adopt(self): t = template_format.parse(tools.wp_template) template = templatem.Template(t) ctx = utils.dummy_context() adopt_data = { 'resources': { 'WebServer': { 'resource_id': 'test-res-id' } } } stack = parser.Stack(ctx, 'test_stack', template, adopt_stack_data=adopt_data) tools.setup_mocks(self.m, stack) self.m.ReplayAll() stack.store() stack.adopt() self.assertIsNotNone(stack['WebServer']) self.assertEqual('test-res-id', stack['WebServer'].resource_id) self.assertEqual((stack.ADOPT, stack.COMPLETE), stack.state) def test_wordpress_single_instance_stack_adopt_fail(self): t = template_format.parse(tools.wp_template) template = templatem.Template(t) ctx = utils.dummy_context() adopt_data = { 'resources': { 'WebServer1': { 'resource_id': 'test-res-id' } } } stack = parser.Stack(ctx, 'test_stack', template, adopt_stack_data=adopt_data) tools.setup_mocks(self.m, stack) self.m.ReplayAll() stack.store() stack.adopt() self.assertIsNotNone(stack['WebServer']) expected = ('Resource ADOPT failed: Exception: resources.WebServer: ' 'Resource ID was not provided.') self.assertEqual(expected, stack.status_reason) self.assertEqual((stack.ADOPT, stack.FAILED), stack.state) def test_wordpress_single_instance_stack_delete(self): ctx = utils.dummy_context() stack = tools.get_stack('test_stack', ctx) fc = tools.setup_mocks(self.m, stack, mock_keystone=False) self.m.ReplayAll() stack_id = stack.store() stack.create() db_s = stack_object.Stack.get_by_id(ctx, stack_id) self.assertIsNotNone(db_s) self.assertIsNotNone(stack['WebServer']) self.assertGreater(int(stack['WebServer'].resource_id), 0) self.patchobject(fc.servers, 'delete', side_effect=fakes_nova.fake_exception()) stack.delete() rsrc = stack['WebServer'] self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.assertEqual((stack.DELETE, stack.COMPLETE), rsrc.state) self.assertIsNone(stack_object.Stack.get_by_id(ctx, stack_id)) db_s.refresh() self.assertEqual('DELETE', db_s.action) self.assertEqual('COMPLETE', db_s.status, ) class StackConvergenceServiceCreateUpdateTest(common.HeatTestCase): def setUp(self): super(StackConvergenceServiceCreateUpdateTest, self).setUp() cfg.CONF.set_override('convergence_engine', True) self.ctx = utils.dummy_context() self.man = service.EngineService('a-host', 'a-topic') self.man.thread_group_mgr = tools.DummyThreadGroupManager() def _stub_update_mocks(self, stack_to_load, stack_to_return): self.m.StubOutWithMock(parser, 'Stack') self.m.StubOutWithMock(parser.Stack, 'load') parser.Stack.load(self.ctx, stack=stack_to_load ).AndReturn(stack_to_return) self.m.StubOutWithMock(templatem, 'Template') self.m.StubOutWithMock(environment, 'Environment') def _test_stack_create_convergence(self, stack_name): params = {'foo': 'bar'} template = '{ "Template": "data" }' stack = tools.get_stack(stack_name, self.ctx, template=tools.string_template_five, convergence=True) stack.converge = None self.m.StubOutWithMock(templatem, 
'Template') self.m.StubOutWithMock(environment, 'Environment') self.m.StubOutWithMock(parser, 'Stack') templatem.Template(template, files=None).AndReturn(stack.t) environment.Environment(params).AndReturn(stack.env) parser.Stack(self.ctx, stack.name, stack.t, owner_id=None, parent_resource=None, nested_depth=0, user_creds_id=None, stack_user_project_id=None, timeout_mins=60, disable_rollback=False, convergence=True).AndReturn(stack) self.m.StubOutWithMock(stack, 'validate') stack.validate().AndReturn(None) self.m.ReplayAll() api_args = {'timeout_mins': 60, 'disable_rollback': False} result = self.man.create_stack(self.ctx, 'service_create_test_stack', template, params, None, api_args) db_stack = stack_object.Stack.get_by_id(self.ctx, result['stack_id']) self.assertTrue(db_stack.convergence) self.assertEqual(result['stack_id'], db_stack.id) self.m.VerifyAll() def test_stack_create_enabled_convergence_engine(self): stack_name = 'service_create_test_stack' self._test_stack_create_convergence(stack_name) def test_stack_update_enabled_convergence_engine(self): stack_name = 'service_update_test_stack' params = {'foo': 'bar'} template = '{ "Template": "data" }' old_stack = tools.get_stack(stack_name, self.ctx, template=tools.string_template_five, convergence=True) old_stack.timeout_mins = 1 sid = old_stack.store() s = stack_object.Stack.get_by_id(self.ctx, sid) stack = tools.get_stack(stack_name, self.ctx, template=tools.string_template_five_update, convergence=True) self._stub_update_mocks(s, old_stack) templatem.Template(template, files=None).AndReturn(stack.t) environment.Environment(params).AndReturn(stack.env) parser.Stack(self.ctx, stack.name, stack.t, owner_id=old_stack.owner_id, nested_depth=old_stack.nested_depth, user_creds_id=old_stack.user_creds_id, stack_user_project_id=old_stack.stack_user_project_id, timeout_mins=60, disable_rollback=False, parent_resource=None, strict_validate=True, tenant_id=old_stack.tenant_id, username=old_stack.username, convergence=old_stack.convergence, current_traversal=old_stack.current_traversal, prev_raw_template_id=old_stack.prev_raw_template_id, current_deps=old_stack.current_deps, converge=False).AndReturn(stack) self.m.StubOutWithMock(stack, 'validate') stack.validate().AndReturn(None) self.m.ReplayAll() api_args = {'timeout_mins': 60, 'disable_rollback': False, rpc_api.PARAM_CONVERGE: False} result = self.man.update_stack(self.ctx, old_stack.identifier(), template, params, None, api_args) self.assertTrue(old_stack.convergence) self.assertEqual(old_stack.identifier(), result) self.assertIsInstance(result, dict) self.assertTrue(result['stack_id']) self.m.VerifyAll() class StackServiceAuthorizeTest(common.HeatTestCase): def setUp(self): super(StackServiceAuthorizeTest, self).setUp() self.ctx = utils.dummy_context(tenant_id='stack_service_test_tenant') self.eng = service.EngineService('a-host', 'a-topic') self.eng.engine_id = 'engine-fake-uuid' @tools.stack_context('service_authorize_stack_user_nocreds_test_stack') def test_stack_authorize_stack_user_nocreds(self): self.assertFalse(self.eng._authorize_stack_user(self.ctx, self.stack, 'foo')) @tools.stack_context('service_authorize_user_attribute_error_test_stack') def test_stack_authorize_stack_user_attribute_error(self): self.m.StubOutWithMock(json, 'loads') json.loads(None).AndRaise(AttributeError) self.m.ReplayAll() self.assertFalse(self.eng._authorize_stack_user(self.ctx, self.stack, 'foo')) self.m.VerifyAll() @tools.stack_context('service_authorize_stack_user_type_error_test_stack') def 
test_stack_authorize_stack_user_type_error(self): self.m.StubOutWithMock(json, 'loads') json.loads(mox.IgnoreArg()).AndRaise(TypeError) self.m.ReplayAll() self.assertFalse(self.eng._authorize_stack_user(self.ctx, self.stack, 'foo')) self.m.VerifyAll() def test_stack_authorize_stack_user(self): self.ctx = utils.dummy_context() self.ctx.aws_creds = '{"ec2Credentials": {"access": "4567"}}' stack_name = 'stack_authorize_stack_user' stack = tools.get_stack(stack_name, self.ctx, user_policy_template) self.stack = stack fc = tools.setup_mocks(self.m, stack) self.patchobject(fc.servers, 'delete', side_effect=fakes_nova.fake_exception()) self.m.ReplayAll() stack.store() stack.create() self.assertTrue(self.eng._authorize_stack_user( self.ctx, self.stack, 'WebServer')) self.assertFalse(self.eng._authorize_stack_user( self.ctx, self.stack, 'CfnUser')) self.assertFalse(self.eng._authorize_stack_user( self.ctx, self.stack, 'NoSuchResource')) self.m.VerifyAll() def test_stack_authorize_stack_user_user_id(self): self.ctx = utils.dummy_context(user_id=str(uuid.uuid4())) stack_name = 'stack_authorize_stack_user_user_id' stack = tools.get_stack(stack_name, self.ctx, server_config_template) self.stack = stack def handler(resource_name): return resource_name == 'WebServer' self.stack.register_access_allowed_handler(self.ctx.user_id, handler) # matching credential_id and resource_name self.assertTrue(self.eng._authorize_stack_user( self.ctx, self.stack, 'WebServer')) # not matching resource_name self.assertFalse(self.eng._authorize_stack_user( self.ctx, self.stack, 'NoSuchResource')) # not matching credential_id self.ctx.user = str(uuid.uuid4()) self.assertFalse(self.eng._authorize_stack_user( self.ctx, self.stack, 'WebServer')) class StackServiceTest(common.HeatTestCase): def setUp(self): super(StackServiceTest, self).setUp() self.ctx = utils.dummy_context(tenant_id='stack_service_test_tenant') self.eng = service.EngineService('a-host', 'a-topic') self.eng.thread_group_mgr = tools.DummyThreadGroupManager() self.eng.engine_id = 'engine-fake-uuid' @tools.stack_context('service_identify_test_stack', False) def test_stack_identify(self): identity = self.eng.identify_stack(self.ctx, self.stack.name) self.assertEqual(self.stack.identifier(), identity) @tools.stack_context('ef0c41a4-644f-447c-ad80-7eecb0becf79', False) def test_stack_identify_by_name_in_uuid(self): identity = self.eng.identify_stack(self.ctx, self.stack.name) self.assertEqual(self.stack.identifier(), identity) @tools.stack_context('service_identify_uuid_test_stack', False) def test_stack_identify_uuid(self): identity = self.eng.identify_stack(self.ctx, self.stack.id) self.assertEqual(self.stack.identifier(), identity) def test_stack_identify_nonexist(self): ex = self.assertRaises(dispatcher.ExpectedException, self.eng.identify_stack, self.ctx, 'wibble') self.assertEqual(exception.EntityNotFound, ex.exc_info[0]) @tools.stack_context('service_create_existing_test_stack', False) def test_stack_create_existing(self): ex = self.assertRaises(dispatcher.ExpectedException, self.eng.create_stack, self.ctx, self.stack.name, self.stack.t.t, {}, None, {}) self.assertEqual(exception.StackExists, ex.exc_info[0]) @tools.stack_context('service_name_tenants_test_stack', False) def test_stack_by_name_tenants(self): self.assertEqual( self.stack.id, stack_object.Stack.get_by_name(self.ctx, self.stack.name).id ) ctx2 = utils.dummy_context(tenant_id='stack_service_test_tenant2') self.assertIsNone(stack_object.Stack.get_by_name( ctx2, self.stack.name)) 
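    # Illustrative sketch (added commentary, not part of the original
    # class): the test_stack_identify* cases above pin down the lookup
    # rule that a stack reference may be either a name or a UUID, and that
    # a stack whose name merely *looks* like a UUID must still resolve.
    # This is an assumed simplification of EngineService.identify_stack,
    # which additionally wraps the result in a HeatIdentifier.
    @staticmethod
    def _sketch_lookup_stack(ctx, stack_ref):
        try:
            uuid.UUID(str(stack_ref))
            looks_like_uuid = True
        except ValueError:
            looks_like_uuid = False
        s = None
        if looks_like_uuid:
            # try the id first; fall through for names shaped like UUIDs
            s = stack_object.Stack.get_by_id(ctx, stack_ref)
        if s is None:
            s = stack_object.Stack.get_by_name(ctx, stack_ref)
        return s  # the caller raises EntityNotFound when this is None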
@tools.stack_context('service_badname_test_stack', False) def test_stack_by_name_badname(self): # If a bad name type, such as a map, is passed, we should just return # None, as it's converted to a string, which won't match any name ctx = utils.dummy_context(tenant_id='stack_service_test_tenant') self.assertIsNone(stack_object.Stack.get_by_name( ctx, {'notallowed': self.stack.name})) self.assertIsNone(stack_object.Stack.get_by_name_and_owner_id( ctx, {'notallowed': self.stack.name}, 'owner')) @tools.stack_context('service_list_all_test_stack') def test_stack_list_all(self): sl = self.eng.list_stacks(self.ctx) self.assertEqual(1, len(sl)) for s in sl: self.assertIn('creation_time', s) self.assertIn('updated_time', s) self.assertIn('deletion_time', s) self.assertIsNone(s['deletion_time']) self.assertIn('stack_identity', s) self.assertIsNotNone(s['stack_identity']) self.assertIn('stack_name', s) self.assertEqual(self.stack.name, s['stack_name']) self.assertIn('stack_status', s) self.assertIn('stack_status_reason', s) self.assertIn('description', s) self.assertEqual('', s['description']) @mock.patch.object(stack_object.Stack, 'get_all') def test_stack_list_passes_marker_info(self, mock_stack_get_all): limit = object() marker = object() sort_keys = object() sort_dir = object() self.eng.list_stacks(self.ctx, limit=limit, marker=marker, sort_keys=sort_keys, sort_dir=sort_dir) mock_stack_get_all.assert_called_once_with(self.ctx, limit=limit, sort_keys=sort_keys, marker=marker, sort_dir=sort_dir, filters=mock.ANY, show_deleted=mock.ANY, show_nested=mock.ANY, show_hidden=mock.ANY, tags=mock.ANY, tags_any=mock.ANY, not_tags=mock.ANY, not_tags_any=mock.ANY) @mock.patch.object(stack_object.Stack, 'get_all') def test_stack_list_passes_filtering_info(self, mock_stack_get_all): filters = {'foo': 'bar'} self.eng.list_stacks(self.ctx, filters=filters) mock_stack_get_all.assert_called_once_with(self.ctx, limit=mock.ANY, sort_keys=mock.ANY, marker=mock.ANY, sort_dir=mock.ANY, filters=filters, show_deleted=mock.ANY, show_nested=mock.ANY, show_hidden=mock.ANY, tags=mock.ANY, tags_any=mock.ANY, not_tags=mock.ANY, not_tags_any=mock.ANY) @mock.patch.object(stack_object.Stack, 'get_all') def test_stack_list_passes_filter_translated(self, mock_stack_get_all): filters = {'stack_name': 'bar'} self.eng.list_stacks(self.ctx, filters=filters) translated = {'name': 'bar'} mock_stack_get_all.assert_called_once_with(self.ctx, limit=mock.ANY, sort_keys=mock.ANY, marker=mock.ANY, sort_dir=mock.ANY, filters=translated, show_deleted=mock.ANY, show_nested=mock.ANY, show_hidden=mock.ANY, tags=mock.ANY, tags_any=mock.ANY, not_tags=mock.ANY, not_tags_any=mock.ANY) @mock.patch.object(stack_object.Stack, 'get_all') def test_stack_list_show_nested(self, mock_stack_get_all): self.eng.list_stacks(self.ctx, show_nested=True) mock_stack_get_all.assert_called_once_with(self.ctx, limit=mock.ANY, sort_keys=mock.ANY, marker=mock.ANY, sort_dir=mock.ANY, filters=mock.ANY, show_deleted=mock.ANY, show_nested=True, show_hidden=mock.ANY, tags=mock.ANY, tags_any=mock.ANY, not_tags=mock.ANY, not_tags_any=mock.ANY) @mock.patch.object(stack_object.Stack, 'get_all') def test_stack_list_show_deleted(self, mock_stack_get_all): self.eng.list_stacks(self.ctx, show_deleted=True) mock_stack_get_all.assert_called_once_with(self.ctx, limit=mock.ANY, sort_keys=mock.ANY, marker=mock.ANY, sort_dir=mock.ANY, filters=mock.ANY, show_deleted=True, show_nested=mock.ANY, show_hidden=mock.ANY, tags=mock.ANY, tags_any=mock.ANY, not_tags=mock.ANY, not_tags_any=mock.ANY) 
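    # Illustrative sketch (added commentary, not part of the original
    # class): the test_stack_list_* cases above and below all follow one
    # pattern -- call list_stacks with a single keyword and assert that it
    # reaches stack_object.Stack.get_all unchanged, with mock.ANY for the
    # rest. A shared (hypothetical) helper expressing that pattern once:
    def _sketch_assert_list_passthrough(self, mock_get_all, kwarg, value):
        self.eng.list_stacks(self.ctx, **{kwarg: value})
        expected = dict.fromkeys(
            ('limit', 'sort_keys', 'marker', 'sort_dir', 'filters',
             'show_deleted', 'show_nested', 'show_hidden',
             'tags', 'tags_any', 'not_tags', 'not_tags_any'), mock.ANY)
        expected[kwarg] = value
        mock_get_all.assert_called_once_with(self.ctx, **expected)
    # e.g. self._sketch_assert_list_passthrough(m, 'show_hidden', True)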
@mock.patch.object(stack_object.Stack, 'get_all') def test_stack_list_show_hidden(self, mock_stack_get_all): self.eng.list_stacks(self.ctx, show_hidden=True) mock_stack_get_all.assert_called_once_with(self.ctx, limit=mock.ANY, sort_keys=mock.ANY, marker=mock.ANY, sort_dir=mock.ANY, filters=mock.ANY, show_deleted=mock.ANY, show_nested=mock.ANY, show_hidden=True, tags=mock.ANY, tags_any=mock.ANY, not_tags=mock.ANY, not_tags_any=mock.ANY) @mock.patch.object(stack_object.Stack, 'get_all') def test_stack_list_tags(self, mock_stack_get_all): self.eng.list_stacks(self.ctx, tags=['foo', 'bar']) mock_stack_get_all.assert_called_once_with(self.ctx, limit=mock.ANY, sort_keys=mock.ANY, marker=mock.ANY, sort_dir=mock.ANY, filters=mock.ANY, show_deleted=mock.ANY, show_nested=mock.ANY, show_hidden=mock.ANY, tags=['foo', 'bar'], tags_any=mock.ANY, not_tags=mock.ANY, not_tags_any=mock.ANY) @mock.patch.object(stack_object.Stack, 'get_all') def test_stack_list_tags_any(self, mock_stack_get_all): self.eng.list_stacks(self.ctx, tags_any=['foo', 'bar']) mock_stack_get_all.assert_called_once_with(self.ctx, limit=mock.ANY, sort_keys=mock.ANY, marker=mock.ANY, sort_dir=mock.ANY, filters=mock.ANY, show_deleted=mock.ANY, show_nested=mock.ANY, show_hidden=mock.ANY, tags=mock.ANY, tags_any=['foo', 'bar'], not_tags=mock.ANY, not_tags_any=mock.ANY) @mock.patch.object(stack_object.Stack, 'get_all') def test_stack_list_not_tags(self, mock_stack_get_all): self.eng.list_stacks(self.ctx, not_tags=['foo', 'bar']) mock_stack_get_all.assert_called_once_with(self.ctx, limit=mock.ANY, sort_keys=mock.ANY, marker=mock.ANY, sort_dir=mock.ANY, filters=mock.ANY, show_deleted=mock.ANY, show_nested=mock.ANY, show_hidden=mock.ANY, tags=mock.ANY, tags_any=mock.ANY, not_tags=['foo', 'bar'], not_tags_any=mock.ANY) @mock.patch.object(stack_object.Stack, 'get_all') def test_stack_list_not_tags_any(self, mock_stack_get_all): self.eng.list_stacks(self.ctx, not_tags_any=['foo', 'bar']) mock_stack_get_all.assert_called_once_with(self.ctx, limit=mock.ANY, sort_keys=mock.ANY, marker=mock.ANY, sort_dir=mock.ANY, filters=mock.ANY, show_deleted=mock.ANY, show_nested=mock.ANY, show_hidden=mock.ANY, tags=mock.ANY, tags_any=mock.ANY, not_tags=mock.ANY, not_tags_any=['foo', 'bar']) @mock.patch.object(stack_object.Stack, 'count_all') def test_count_stacks_passes_filter_info(self, mock_stack_count_all): self.eng.count_stacks(self.ctx, filters={'foo': 'bar'}) mock_stack_count_all.assert_called_once_with(mock.ANY, filters={'foo': 'bar'}, show_deleted=False, show_nested=False, show_hidden=False, tags=None, tags_any=None, not_tags=None, not_tags_any=None) @mock.patch.object(stack_object.Stack, 'count_all') def test_count_stacks_show_nested(self, mock_stack_count_all): self.eng.count_stacks(self.ctx, show_nested=True) mock_stack_count_all.assert_called_once_with(mock.ANY, filters=mock.ANY, show_deleted=False, show_nested=True, show_hidden=False, tags=None, tags_any=None, not_tags=None, not_tags_any=None) @mock.patch.object(stack_object.Stack, 'count_all') def test_count_stack_show_deleted(self, mock_stack_count_all): self.eng.count_stacks(self.ctx, show_deleted=True) mock_stack_count_all.assert_called_once_with(mock.ANY, filters=mock.ANY, show_deleted=True, show_nested=False, show_hidden=False, tags=None, tags_any=None, not_tags=None, not_tags_any=None) @mock.patch.object(stack_object.Stack, 'count_all') def test_count_stack_show_hidden(self, mock_stack_count_all): self.eng.count_stacks(self.ctx, show_hidden=True) 

    @mock.patch.object(stack_object.Stack, 'count_all')
    def test_count_stack_show_hidden(self, mock_stack_count_all):
        self.eng.count_stacks(self.ctx, show_hidden=True)
        mock_stack_count_all.assert_called_once_with(
            mock.ANY,
            filters=mock.ANY,
            show_deleted=False,
            show_nested=False,
            show_hidden=True,
            tags=None,
            tags_any=None,
            not_tags=None,
            not_tags_any=None)

    @tools.stack_context('service_export_stack')
    def test_export_stack(self):
        cfg.CONF.set_override('enable_stack_abandon', True)
        self.m.StubOutWithMock(parser.Stack, 'load')
        parser.Stack.load(self.ctx,
                          stack=mox.IgnoreArg()).AndReturn(self.stack)
        expected_res = {
            u'WebServer': {
                'action': 'CREATE',
                'metadata': {},
                'name': u'WebServer',
                'resource_data': {},
                'resource_id': '9999',
                'status': 'COMPLETE',
                'type': u'AWS::EC2::Instance'}}
        self.stack.tags = ['tag1', 'tag2']
        self.m.ReplayAll()
        ret = self.eng.export_stack(self.ctx, self.stack.identifier())
        self.assertEqual(11, len(ret))
        self.assertEqual('CREATE', ret['action'])
        self.assertEqual('COMPLETE', ret['status'])
        self.assertEqual('service_export_stack', ret['name'])
        self.assertEqual({}, ret['files'])
        self.assertIn('id', ret)
        self.assertEqual(expected_res, ret['resources'])
        self.assertEqual(self.stack.t.t, ret['template'])
        self.assertIn('project_id', ret)
        self.assertIn('stack_user_project_id', ret)
        self.assertIn('environment', ret)
        self.assertIn('files', ret)
        self.assertEqual(['tag1', 'tag2'], ret['tags'])
        self.m.VerifyAll()

    @tools.stack_context('service_abandon_stack')
    def test_abandon_stack(self):
        cfg.CONF.set_override('enable_stack_abandon', True)
        self.m.StubOutWithMock(parser.Stack, 'load')
        parser.Stack.load(self.ctx,
                          stack=mox.IgnoreArg()).AndReturn(self.stack)
        self.m.ReplayAll()
        self.eng.abandon_stack(self.ctx, self.stack.identifier())
        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.eng.show_stack,
                               self.ctx, self.stack.identifier(),
                               resolve_outputs=True)
        self.assertEqual(exception.EntityNotFound, ex.exc_info[0])
        self.m.VerifyAll()

    def test_stack_describe_nonexistent(self):
        non_exist_identifier = identifier.HeatIdentifier(
            self.ctx.tenant_id, 'wibble',
            '18d06e2e-44d3-4bef-9fbf-52480d604b02')

        stack_not_found_exc = exception.EntityNotFound(
            entity='Stack', name='test')
        self.m.StubOutWithMock(service.EngineService, '_get_stack')
        service.EngineService._get_stack(
            self.ctx, non_exist_identifier,
            show_deleted=True).AndRaise(stack_not_found_exc)
        self.m.ReplayAll()

        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.eng.show_stack,
                               self.ctx, non_exist_identifier,
                               resolve_outputs=True)
        self.assertEqual(exception.EntityNotFound, ex.exc_info[0])
        self.m.VerifyAll()

    def test_stack_describe_bad_tenant(self):
        non_exist_identifier = identifier.HeatIdentifier(
            'wibble', 'wibble',
            '18d06e2e-44d3-4bef-9fbf-52480d604b02')

        invalid_tenant_exc = exception.InvalidTenant(target='test',
                                                     actual='test')
        self.m.StubOutWithMock(service.EngineService, '_get_stack')
        service.EngineService._get_stack(
            self.ctx, non_exist_identifier,
            show_deleted=True).AndRaise(invalid_tenant_exc)
        self.m.ReplayAll()

        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.eng.show_stack,
                               self.ctx, non_exist_identifier,
                               resolve_outputs=True)
        self.assertEqual(exception.InvalidTenant, ex.exc_info[0])
        self.m.VerifyAll()

    @tools.stack_context('service_describe_test_stack', False)
    def test_stack_describe(self):
        self.m.StubOutWithMock(service.EngineService, '_get_stack')
        s = stack_object.Stack.get_by_id(self.ctx, self.stack.id)
        service.EngineService._get_stack(self.ctx,
                                         self.stack.identifier(),
                                         show_deleted=True).AndReturn(s)
        self.m.ReplayAll()

        sl = self.eng.show_stack(self.ctx, self.stack.identifier(),
                                 resolve_outputs=True)

        self.assertEqual(1, len(sl))

        s = sl[0]
        self.assertIn('creation_time', s)
        self.assertIn('updated_time', s)
        self.assertIn('deletion_time', s)
        self.assertIsNone(s['deletion_time'])
        self.assertIn('stack_identity', s)
        self.assertIsNotNone(s['stack_identity'])
        self.assertIn('stack_name', s)
        self.assertEqual(self.stack.name, s['stack_name'])
        self.assertIn('stack_status', s)
        self.assertIn('stack_status_reason', s)
        self.assertIn('description', s)
        self.assertIn('WordPress', s['description'])
        self.assertIn('parameters', s)

        self.m.VerifyAll()

    @tools.stack_context('service_describe_all_test_stack', False)
    def test_stack_describe_all(self):
        sl = self.eng.show_stack(self.ctx, None, resolve_outputs=True)

        self.assertEqual(1, len(sl))

        s = sl[0]
        self.assertIn('creation_time', s)
        self.assertIn('updated_time', s)
        self.assertIn('deletion_time', s)
        self.assertIsNone(s['deletion_time'])
        self.assertIn('stack_identity', s)
        self.assertIsNotNone(s['stack_identity'])
        self.assertIn('stack_name', s)
        self.assertEqual(self.stack.name, s['stack_name'])
        self.assertIn('stack_status', s)
        self.assertIn('stack_status_reason', s)
        self.assertIn('description', s)
        self.assertIn('WordPress', s['description'])
        self.assertIn('parameters', s)

    @mock.patch('heat.engine.template._get_template_extension_manager')
    def test_list_template_versions(self, templ_mock):

        class DummyMgr(object):
            def names(self):
                return ['a.2012-12-12', 'c.newton', 'c.2016-10-14',
                        'c.something']

            def __getitem__(self, item):
                m = mock.MagicMock()
                if item == 'a.2012-12-12':
                    m.plugin = cfntemplate.CfnTemplate
                    return m
                else:
                    m.plugin = hottemplate.HOTemplate20130523
                    return m

        templ_mock.return_value = DummyMgr()
        templates = self.eng.list_template_versions(self.ctx)
        expected = [{'version': 'a.2012-12-12', 'type': 'cfn',
                     'aliases': []},
                    {'version': 'c.2016-10-14',
                     'aliases': ['c.newton', 'c.something'],
                     'type': 'hot'}]
        self.assertEqual(expected, templates)

    @mock.patch('heat.engine.template._get_template_extension_manager')
    def test_list_template_versions_invalid_version(self, templ_mock):

        class DummyMgr(object):
            def names(self):
                return ['c.something']

            def __getitem__(self, item):
                m = mock.MagicMock()
                if item == 'c.something':
                    m.plugin = cfntemplate.CfnTemplate
                    return m

        templ_mock.return_value = DummyMgr()
        ret = self.assertRaises(exception.InvalidTemplateVersions,
                                self.eng.list_template_versions, self.ctx)
        self.assertIn('A template version alias c.something was added',
                      six.text_type(ret))
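
    # Illustrative note, not from the original suite: the expected grouping
    # above works because every alias name (e.g. 'c.newton', 'c.something')
    # resolves to the same plugin class as a canonical date-based version
    # ('c.2016-10-14'), so the listing reports one entry per class with the
    # remaining names attached as aliases; an alias with no date-based
    # sibling is rejected, as the previous test shows.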
""" plugin_mock = mock.Mock( functions={'dummy1': DummyFunc1, 'dummy2': DummyFunc2, 'removed': hot_functions.Removed}, condition_functions={'condition_dummy': DummyConditionFunc}) dummy_tmpl = mock.Mock(plugin=plugin_mock) class DummyMgr(object): def __getitem__(self, item): return dummy_tmpl templ_mock.return_value = DummyMgr() functions = self.eng.list_template_functions(self.ctx, 'dummytemplate') expected = [{'functions': 'dummy1', 'description': 'Dummy Func1.'}, {'functions': 'dummy2', 'description': 'Dummy Func2.'}] self.assertEqual(sorted(expected, key=lambda k: k['functions']), sorted(functions, key=lambda k: k['functions'])) # test with_condition functions = self.eng.list_template_functions(self.ctx, 'dummytemplate', with_condition=True) expected = [{'functions': 'dummy1', 'description': 'Dummy Func1.'}, {'functions': 'dummy2', 'description': 'Dummy Func2.'}, {'functions': 'condition_dummy', 'description': 'Dummy Condition Func.'}] self.assertEqual(sorted(expected, key=lambda k: k['functions']), sorted(functions, key=lambda k: k['functions'])) @mock.patch('heat.engine.template._get_template_extension_manager') def test_list_template_functions_version_not_found(self, templ_mock): class DummyMgr(object): def __getitem__(self, item): raise KeyError() templ_mock.return_value = DummyMgr() version = 'dummytemplate' ex = self.assertRaises(exception.NotFound, self.eng.list_template_functions, self.ctx, version) msg = "Template with version %s not found" % version self.assertEqual(msg, six.text_type(ex)) def test_stack_list_outputs(self): t = template_format.parse(tools.wp_template) t['outputs'] = { 'test': {'value': '{ get_attr: fir }', 'description': 'sec'}, 'test2': {'value': 'sec'}} tmpl = templatem.Template(t) stack = parser.Stack(self.ctx, 'service_list_outputs_stack', tmpl) self.patchobject(self.eng, '_get_stack') self.patchobject(parser.Stack, 'load', return_value=stack) outputs = self.eng.list_outputs(self.ctx, mock.ANY) self.assertIn({'output_key': 'test', 'description': 'sec'}, outputs) self.assertIn({'output_key': 'test2', 'description': 'No description given'}, outputs) def test_stack_empty_list_outputs(self): # Ensure that stack with no output returns empty list t = template_format.parse(tools.wp_template) t['outputs'] = {} tmpl = templatem.Template(t) stack = parser.Stack(self.ctx, 'service_list_outputs_stack', tmpl) self.patchobject(self.eng, '_get_stack') self.patchobject(parser.Stack, 'load', return_value=stack) outputs = self.eng.list_outputs(self.ctx, mock.ANY) self.assertEqual([], outputs) def test_stack_delete_complete_is_not_found(self): mock_get_stack = self.patchobject(self.eng, '_get_stack') mock_get_stack.return_value = mock.MagicMock() mock_get_stack.return_value.status = parser.Stack.COMPLETE mock_get_stack.return_value.action = parser.Stack.DELETE ex = self.assertRaises(dispatcher.ExpectedException, self.eng.delete_stack, 'irrelevant', 'irrelevant') self.assertEqual(exception.EntityNotFound, ex.exc_info[0]) def test_get_environment(self): # Setup t = template_format.parse(tools.wp_template) env = {'parameters': {'KeyName': 'EnvKey'}} tmpl = templatem.Template(t) stack = parser.Stack(self.ctx, 'get_env_stack', tmpl) mock_get_stack = self.patchobject(self.eng, '_get_stack') mock_get_stack.return_value = mock.MagicMock() mock_get_stack.return_value.raw_template.environment = env self.patchobject(parser.Stack, 'load', return_value=stack) # Test found = self.eng.get_environment(self.ctx, stack.identifier()) # Verify self.assertEqual(env, found) def 

    def test_get_environment_no_env(self):
        # Setup
        exc = exception.EntityNotFound(entity='stack', name='missing')
        self.patchobject(self.eng, '_get_stack', side_effect=exc)

        # Test
        self.assertRaises(dispatcher.ExpectedException,
                          self.eng.get_environment,
                          self.ctx, 'irrelevant')

    def test_get_files(self):
        # Setup
        t = template_format.parse(tools.wp_template)
        files = {'foo.yaml': 'i am a file'}
        tmpl = templatem.Template(t, files=files)
        stack = parser.Stack(self.ctx, 'get_env_stack', tmpl)
        stack.store()

        mock_get_stack = self.patchobject(self.eng, '_get_stack')
        mock_get_stack.return_value = mock.MagicMock()
        self.patchobject(templatem.Template, 'load', return_value=tmpl)

        # Test
        found = self.eng.get_files(self.ctx, stack.identifier())

        # Verify
        self.assertEqual(files, found)

    def test_stack_show_output(self):
        t = template_format.parse(tools.wp_template)
        t['outputs'] = {'test': {'value': 'first', 'description': 'sec'},
                        'test2': {'value': 'sec'}}
        tmpl = templatem.Template(t)
        stack = parser.Stack(self.ctx, 'service_list_outputs_stack', tmpl)

        self.patchobject(self.eng, '_get_stack')
        self.patchobject(parser.Stack, 'load', return_value=stack)

        output = self.eng.show_output(self.ctx, mock.ANY, 'test')
        self.assertEqual({'output_key': 'test', 'output_value': 'first',
                          'description': 'sec'}, output)

        # Ensure that stack raised NotFound error with incorrect key.
        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.eng.show_output,
                               self.ctx, mock.ANY, 'bunny')
        self.assertEqual(exception.NotFound, ex.exc_info[0])
        self.assertEqual('Specified output key bunny not found.',
                         six.text_type(ex.exc_info[1]))

    def test_stack_show_output_error(self):
        t = template_format.parse(tools.wp_template)
        t['outputs'] = {'test': {'value': 'first', 'description': 'sec'}}
        tmpl = templatem.Template(t)
        stack = parser.Stack(self.ctx, 'service_list_outputs_stack', tmpl)

        self.patchobject(self.eng, '_get_stack')
        self.patchobject(parser.Stack, 'load', return_value=stack)
        self.patchobject(
            stack.outputs['test'], 'get_value',
            side_effect=[exception.EntityNotFound(entity='one',
                                                  name='name')])

        output = self.eng.show_output(self.ctx, mock.ANY, 'test')
        self.assertEqual(
            {'output_key': 'test',
             'output_error': "The one (name) could not be found.",
             'description': 'sec',
             'output_value': None}, output)

    def test_stack_list_all_empty(self):
        sl = self.eng.list_stacks(self.ctx)

        self.assertEqual(0, len(sl))

    def test_stack_describe_all_empty(self):
        sl = self.eng.show_stack(self.ctx, None, resolve_outputs=True)

        self.assertEqual(0, len(sl))

    def test_get_template(self):
        # Setup
        t = template_format.parse(tools.wp_template)
        tmpl = templatem.Template(t)
        stack = parser.Stack(self.ctx, 'get_env_stack', tmpl)

        mock_get_stack = self.patchobject(self.eng, '_get_stack')
        mock_get_stack.return_value = mock.MagicMock()
        mock_get_stack.return_value.raw_template.template = t
        self.patchobject(parser.Stack, 'load', return_value=stack)

        # Test
        found = self.eng.get_template(self.ctx, stack.identifier())

        # Verify
        self.assertEqual(t, found)

    def test_get_template_no_template(self):
        # Setup
        exc = exception.EntityNotFound(entity='stack', name='missing')
        self.patchobject(self.eng, '_get_stack', side_effect=exc)

        # Test
        self.assertRaises(dispatcher.ExpectedException,
                          self.eng.get_template, self.ctx, 'missing')

    def _preview_stack(self, environment_files=None):
        res._register_class('GenericResource1',
                            generic_rsrc.GenericResource)
        res._register_class('GenericResource2',
                            generic_rsrc.GenericResource)

        args = {}
        params = {}
        files = None
        stack_name = 'SampleStack'
        tpl = {'HeatTemplateFormatVersion': '2012-12-12',
               'Description': 'Lorem ipsum.',
               'Resources': {
                   'SampleResource1': {'Type': 'GenericResource1'},
                   'SampleResource2': {'Type': 'GenericResource2'}}}

        return self.eng.preview_stack(self.ctx, stack_name, tpl,
                                      params, files, args,
                                      environment_files=environment_files)

    def test_preview_stack_returns_a_stack(self):
        stack = self._preview_stack()
        expected_identity = {'path': '',
                             'stack_id': 'None',
                             'stack_name': 'SampleStack',
                             'tenant': 'stack_service_test_tenant'}
        self.assertEqual(expected_identity, stack['stack_identity'])
        self.assertEqual('SampleStack', stack['stack_name'])
        self.assertEqual('Lorem ipsum.', stack['description'])

    def test_preview_stack_returns_list_of_resources_in_stack(self):
        stack = self._preview_stack()
        self.assertIsInstance(stack['resources'], list)
        self.assertEqual(2, len(stack['resources']))

        resource_types = set(r['resource_type'] for r in stack['resources'])
        self.assertIn('GenericResource1', resource_types)
        self.assertIn('GenericResource2', resource_types)

        resource_names = set(r['resource_name'] for r in stack['resources'])
        self.assertIn('SampleResource1', resource_names)
        self.assertIn('SampleResource2', resource_names)

    def test_preview_stack_validates_new_stack(self):
        exc = exception.StackExists(stack_name='Validation Failed')
        self.eng._validate_new_stack = mock.Mock(side_effect=exc)
        ex = self.assertRaises(dispatcher.ExpectedException,
                               self._preview_stack)
        self.assertEqual(exception.StackExists, ex.exc_info[0])

    @mock.patch.object(service.api, 'format_stack_preview', new=mock.Mock())
    @mock.patch.object(service.parser, 'Stack')
    def test_preview_stack_checks_stack_validity(self, mock_parser):
        self.patchobject(policy.ResourceEnforcer, 'enforce_stack')
        exc = exception.StackValidationFailed(message='Validation Failed')
        mock_parsed_stack = mock.Mock()
        mock_parsed_stack.validate.side_effect = exc
        mock_parser.return_value = mock_parsed_stack
        ex = self.assertRaises(dispatcher.ExpectedException,
                               self._preview_stack)
        self.assertEqual(exception.StackValidationFailed, ex.exc_info[0])

    @mock.patch.object(env_util, 'merge_environments')
    def test_preview_environment_files(self, mock_merge):
        # Setup
        environment_files = ['env_1']

        # Test
        self._preview_stack(environment_files=environment_files)

        # Verify
        mock_merge.assert_called_once_with(environment_files, None, {}, {})

    @mock.patch.object(stack_object.Stack, 'get_by_name')
    def test_validate_new_stack_checks_existing_stack(self, mock_stack_get):
        mock_stack_get.return_value = 'existing_db_stack'
        tmpl = templatem.Template(
            {'HeatTemplateFormatVersion': '2012-12-12'})
        self.assertRaises(exception.StackExists,
                          self.eng._validate_new_stack,
                          self.ctx, 'test_existing_stack', tmpl)

    @mock.patch.object(stack_object.Stack, 'count_all')
    def test_validate_new_stack_checks_stack_limit(self, mock_db_count):
        cfg.CONF.set_override('max_stacks_per_tenant', 99)
        mock_db_count.return_value = 99
        template = templatem.Template(
            {'HeatTemplateFormatVersion': '2012-12-12'})
        self.assertRaises(exception.RequestLimitExceeded,
                          self.eng._validate_new_stack,
                          self.ctx, 'test_existing_stack', template)

    def test_validate_new_stack_checks_incorrect_keywords_in_resource(self):
        template = {'heat_template_version': '2013-05-23',
                    'resources': {
                        'Res': {'Type': 'GenericResource1'}}}
        parsed_template = templatem.Template(template)
        ex = self.assertRaises(exception.StackValidationFailed,
                               self.eng._validate_new_stack,
                               self.ctx, 'test_existing_stack',
                               parsed_template)
        msg = (u'"Type" is not a valid keyword '
               'inside a resource definition')
        self.assertEqual(msg, six.text_type(ex))
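
    # Illustrative sketch, not from the original suite: the same resource
    # validates once the HOT spelling is used, e.g.
    #
    #   {'heat_template_version': '2013-05-23',
    #    'resources': {'Res': {'type': 'GenericResource1'}}}
    #
    # because HOT templates take a lower-case 'type' key, while the
    # capitalised 'Type' keyword belongs to the cfn-style
    # HeatTemplateFormatVersion format used by _preview_stack() above.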

    def test_validate_new_stack_checks_incorrect_sections(self):
        template = {'heat_template_version': '2013-05-23',
                    'unknown_section': {
                        'Res': {'Type': 'GenericResource1'}}}
        parsed_template = templatem.Template(template)
        ex = self.assertRaises(exception.StackValidationFailed,
                               self.eng._validate_new_stack,
                               self.ctx, 'test_existing_stack',
                               parsed_template)
        msg = u'The template section is invalid: unknown_section'
        self.assertEqual(msg, six.text_type(ex))

    def test_validate_new_stack_checks_resource_limit(self):
        cfg.CONF.set_override('max_resources_per_stack', 5)
        template = {'HeatTemplateFormatVersion': '2012-12-12',
                    'Resources': {
                        'Res1': {'Type': 'GenericResource1'},
                        'Res2': {'Type': 'GenericResource1'},
                        'Res3': {'Type': 'GenericResource1'},
                        'Res4': {'Type': 'GenericResource1'},
                        'Res5': {'Type': 'GenericResource1'},
                        'Res6': {'Type': 'GenericResource1'}}}
        parsed_template = templatem.Template(template)
        self.assertRaises(exception.RequestLimitExceeded,
                          self.eng._validate_new_stack,
                          self.ctx, 'test_existing_stack', parsed_template)

    def test_validate_new_stack_handle_assertion_error(self):
        tmpl = mock.MagicMock()
        expected_message = 'Expected assertion error'
        tmpl.validate.side_effect = AssertionError(expected_message)
        exc = self.assertRaises(AssertionError,
                                self.eng._validate_new_stack,
                                self.ctx, 'stack_name', tmpl)
        self.assertEqual(expected_message, six.text_type(exc))

    @mock.patch('heat.engine.service.ThreadGroupManager',
                return_value=mock.Mock())
    @mock.patch.object(stack_object.Stack, 'get_all')
    @mock.patch.object(stack_object.Stack, 'get_by_id')
    @mock.patch('heat.engine.stack_lock.StackLock',
                return_value=mock.Mock())
    @mock.patch.object(parser.Stack, 'load')
    @mock.patch.object(context, 'get_admin_context')
    def test_engine_reset_stack_status(
            self, mock_admin_context, mock_stack_load, mock_stacklock,
            mock_get_by_id, mock_get_all, mock_thread):
        mock_admin_context.return_value = self.ctx

        db_stack = mock.MagicMock()
        db_stack.id = 'foo'
        db_stack.status = 'IN_PROGRESS'
        db_stack.status_reason = None

        unlocked_stack = mock.MagicMock()
        unlocked_stack.id = 'bar'
        unlocked_stack.status = 'IN_PROGRESS'
        unlocked_stack.status_reason = None

        unlocked_stack_failed = mock.MagicMock()
        unlocked_stack_failed.id = 'bar'
        unlocked_stack_failed.status = 'FAILED'
        unlocked_stack_failed.status_reason = 'because'

        mock_get_all.return_value = [db_stack, unlocked_stack]
        mock_get_by_id.side_effect = [db_stack, unlocked_stack_failed]

        fake_stack = mock.MagicMock()
        fake_stack.action = 'CREATE'
        fake_stack.id = 'foo'
        fake_stack.status = 'IN_PROGRESS'

        mock_stack_load.return_value = fake_stack

        lock1 = mock.MagicMock()
        lock1.get_engine_id.return_value = 'old-engine'
        lock1.acquire.return_value = None
        lock2 = mock.MagicMock()
        lock2.acquire.return_value = None
        mock_stacklock.side_effect = [lock1, lock2]

        self.eng.thread_group_mgr = mock_thread

        self.eng.reset_stack_status()

        mock_admin_context.assert_called()
        filters = {
            'status': parser.Stack.IN_PROGRESS,
            'convergence': False
        }
        mock_get_all.assert_called_once_with(self.ctx,
                                             filters=filters,
                                             show_nested=True)
        mock_get_by_id.assert_has_calls([
            mock.call(self.ctx, 'foo'),
            mock.call(self.ctx, 'bar'),
        ])
        mock_stack_load.assert_called_once_with(self.ctx, stack=db_stack)
        self.assertTrue(lock2.release.called)
        reason = ('Engine went down during stack %s' % fake_stack.action)
        mock_thread.start_with_acquired_lock.assert_called_once_with(
            fake_stack, lock1,
            fake_stack.reset_stack_and_resources_in_progress, reason
        )
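
    # Illustrative note, not from the original suite: the test above drives
    # both branches of reset_stack_status() -- a stack whose lock can be
    # taken over from a dead engine is handed to the thread group manager to
    # be reset, while a stack that already reads as FAILED on re-fetch simply
    # has its freshly acquired lock released again.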

    def test_parse_adopt_stack_data_without_parameters(self):
        cfg.CONF.set_override('enable_stack_adopt', True)
        template = {"heat_template_version": "2015-04-30",
                    "resources": {
                        "myres": {
                            "type": "OS::Cinder::Volume",
                            "properties": {
                                "name": "volname",
                                "size": "1"
                            }
                        }
                    }}
        # Assert no KeyError exception raised like before, when trying to
        # get parameters from adopt stack data which doesn't have it.
        args = {"adopt_stack_data": '''{}'''}
        self.eng._parse_template_and_validate_stack(
            self.ctx, 'stack_name', template, {}, {}, None, args)

        args = {"adopt_stack_data": '''{
            "environment": {}
        }'''}
        self.eng._parse_template_and_validate_stack(
            self.ctx, 'stack_name', template, {}, {}, None, args)

    def test_parse_adopt_stack_data_with_parameters(self):
        cfg.CONF.set_override('enable_stack_adopt', True)
        template = {"heat_template_version": "2015-04-30",
                    "parameters": {
                        "volsize": {"type": "number"}
                    },
                    "resources": {
                        "myres": {
                            "type": "OS::Cinder::Volume",
                            "properties": {
                                "name": "volname",
                                "size": {"get_param": "volsize"}
                            }
                        }
                    }}
        args = {"adopt_stack_data": '''{
            "environment": {
                "parameters": {
                    "volsize": 1
                }
            }}'''}
        stack = self.eng._parse_template_and_validate_stack(
            self.ctx, 'stack_name', template, {}, {}, None, args)
        self.assertEqual(1, stack.parameters['volsize'])

    @mock.patch('heat.engine.service.ThreadGroupManager',
                return_value=mock.Mock())
    @mock.patch.object(stack_object.Stack, 'get_by_id')
    @mock.patch.object(parser.Stack, 'load')
    def test_stack_cancel_update_convergence_with_no_rollback(
            self, mock_load, mock_get_by_id, mock_tg):
        stk = mock.MagicMock()
        stk.id = 1
        stk.UPDATE = 'UPDATE'
        stk.IN_PROGRESS = 'IN_PROGRESS'
        stk.state = ('UPDATE', 'IN_PROGRESS')
        stk.status = stk.IN_PROGRESS
        stk.action = stk.UPDATE
        stk.convergence = True
        mock_load.return_value = stk
        self.patchobject(self.eng, '_get_stack')

        self.eng.thread_group_mgr.start = mock.MagicMock()
        with mock.patch.object(self.eng, 'worker_service') as mock_ws:
            mock_ws.stop_traversal = mock.Mock()
            # with rollback as false
            self.eng.stack_cancel_update(self.ctx, 1,
                                         cancel_with_rollback=False)
            self.assertTrue(self.eng.thread_group_mgr.start.called)
            call_args, _ = self.eng.thread_group_mgr.start.call_args
            # test ID of stack
            self.assertEqual(call_args[0], 1)
            # ensure stop_traversal is called with the stack
            self.assertEqual(call_args[1].func, mock_ws.stop_traversal)
            self.assertEqual(call_args[1].args[0], stk)

    @mock.patch('heat.engine.service.ThreadGroupManager',
                return_value=mock.Mock())
    @mock.patch.object(stack_object.Stack, 'get_by_id')
    @mock.patch.object(parser.Stack, 'load')
    def test_stack_cancel_update_convergence_with_rollback(
            self, mock_load, mock_get_by_id, mock_tg):
        stk = mock.MagicMock()
        stk.id = 1
        stk.UPDATE = 'UPDATE'
        stk.IN_PROGRESS = 'IN_PROGRESS'
        stk.state = ('UPDATE', 'IN_PROGRESS')
        stk.status = stk.IN_PROGRESS
        stk.action = stk.UPDATE
        stk.convergence = True
        stk.rollback = mock.MagicMock(return_value=None)
        mock_load.return_value = stk
        self.patchobject(self.eng, '_get_stack')

        self.eng.thread_group_mgr.start = mock.MagicMock()
        # with rollback as true
        self.eng.stack_cancel_update(self.ctx, 1,
                                     cancel_with_rollback=True)
        self.eng.thread_group_mgr.start.assert_called_once_with(
            1, stk.rollback)

heat-10.0.2/heat/tests/test_resource_properties_data.py

#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
#    implied. See the License for the specific language governing
#    permissions and limitations under the License.

from oslo_config import cfg

from heat.db.sqlalchemy import models
from heat.objects import resource_properties_data as rpd_object
from heat.tests import common
from heat.tests import utils


class ResourcePropertiesDataTest(common.HeatTestCase):
    def setUp(self):
        super(ResourcePropertiesDataTest, self).setUp()
        self.ctx = utils.dummy_context()

    # Class attribute (accessed via self.data below); placed here so the
    # fixture values are shared by every test in this case.
    data = {'prop1': 'string',
            'prop2': {'a': 'dict'},
            'prop3': 1,
            'prop4': ['a', 'list'],
            'prop5': True}

    def _get_rpd_and_db_obj(self):
        rpd_obj = rpd_object.ResourcePropertiesData().create_or_update(
            self.ctx, self.data)
        db_obj = self.ctx.session.query(
            models.ResourcePropertiesData).get(rpd_obj.id)
        self.assertEqual(len(self.data), len(db_obj['data']))
        return rpd_obj, db_obj

    def test_rsrc_prop_data_encrypt(self):
        cfg.CONF.set_override('encrypt_parameters_and_properties', True)
        rpd_obj, db_obj = self._get_rpd_and_db_obj()

        # verify data is encrypted in the db
        self.assertNotEqual(db_obj['data'], self.data)
        for key in self.data:
            self.assertEqual('cryptography_decrypt_v1',
                             db_obj['data'][key][0])

        # verify rpd_obj data is unencrypted
        self.assertEqual(self.data, rpd_obj['data'])

        # verify loading a fresh rpd_obj has decrypted data
        rpd_obj = rpd_object.ResourcePropertiesData._from_db_object(
            rpd_object.ResourcePropertiesData(self.ctx), self.ctx, db_obj)
        self.assertEqual(self.data, rpd_obj['data'])

    def test_rsrc_prop_data_no_encrypt(self):
        cfg.CONF.set_override('encrypt_parameters_and_properties', False)
        rpd_obj, db_obj = self._get_rpd_and_db_obj()

        # verify data is unencrypted in the db
        self.assertEqual(db_obj['data'], self.data)
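
# Illustrative note (an assumption about the storage format, not taken from
# this module): with encryption enabled each value is persisted as a
# (method, ciphertext) pair, which is why the test above inspects
# db_obj['data'][key][0], e.g.
#
#     {'prop1': ['cryptography_decrypt_v1', '<base64 ciphertext>'], ...}
#
# The method name tells the loader how to decrypt the value on read.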

heat-10.0.2/heat/tests/test_common_policy.py

#
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
#    implied. See the License for the specific language governing
#    permissions and limitations under the License.

import os.path

from oslo_config import fixture as config_fixture
from oslo_policy import policy as base_policy

from heat.common import exception
from heat.common import policy
from heat.tests import common
from heat.tests import utils

policy_path = os.path.dirname(os.path.realpath(__file__)) + "/policy/"


class TestPolicyEnforcer(common.HeatTestCase):
    cfn_actions = ("ListStacks", "CreateStack", "DescribeStacks",
                   "DeleteStack", "UpdateStack", "DescribeStackEvents",
                   "ValidateTemplate", "GetTemplate",
                   "EstimateTemplateCost", "DescribeStackResource",
                   "DescribeStackResources")

    cw_actions = ("DeleteAlarms", "DescribeAlarmHistory", "DescribeAlarms",
                  "DescribeAlarmsForMetric", "DisableAlarmActions",
                  "EnableAlarmActions", "GetMetricStatistics", "ListMetrics",
                  "PutMetricAlarm", "PutMetricData", "SetAlarmState")

    def setUp(self):
        super(TestPolicyEnforcer, self).setUp(mock_resource_policy=False)
        self.fixture = self.useFixture(config_fixture.Config())
        self.fixture.conf(args=['--config-dir', policy_path])
        self.addCleanup(self.m.VerifyAll)

    def get_policy_file(self, filename):
        return policy_path + filename

    def test_policy_cfn_default(self):
        enforcer = policy.Enforcer(scope='cloudformation')

        ctx = utils.dummy_context(roles=[])
        for action in self.cfn_actions:
            # Everything should be allowed
            enforcer.enforce(ctx, action, is_registered_policy=True)

    def test_policy_cfn_notallowed(self):
        enforcer = policy.Enforcer(
            scope='cloudformation',
            policy_file=self.get_policy_file('notallowed.json'))

        ctx = utils.dummy_context(roles=[])
        for action in self.cfn_actions:
            # Everything should raise the default exception.Forbidden
            self.assertRaises(exception.Forbidden, enforcer.enforce, ctx,
                              action, {}, is_registered_policy=True)

    def test_policy_cfn_deny_stack_user(self):
        enforcer = policy.Enforcer(scope='cloudformation')

        ctx = utils.dummy_context(roles=['heat_stack_user'])
        for action in self.cfn_actions:
            # Everything apart from DescribeStackResource should be Forbidden
            if action == "DescribeStackResource":
                enforcer.enforce(ctx, action, is_registered_policy=True)
            else:
                self.assertRaises(exception.Forbidden, enforcer.enforce,
                                  ctx, action, {},
                                  is_registered_policy=True)

    def test_policy_cfn_allow_non_stack_user(self):
        enforcer = policy.Enforcer(scope='cloudformation')

        ctx = utils.dummy_context(roles=['not_a_stack_user'])
        for action in self.cfn_actions:
            # Everything should be allowed
            enforcer.enforce(ctx, action, is_registered_policy=True)

    def test_set_rules_overwrite_true(self):
        enforcer = policy.Enforcer()
        enforcer.load_rules(True)
        enforcer.set_rules({'test_heat_rule': 1}, True)
        self.assertEqual({'test_heat_rule': 1}, enforcer.enforcer.rules)

    def test_set_rules_overwrite_false(self):
        enforcer = policy.Enforcer()
        enforcer.load_rules(True)
        enforcer.load_rules(True)
        enforcer.set_rules({'test_heat_rule': 1}, False)
        self.assertIn('test_heat_rule', enforcer.enforcer.rules)

    def test_load_rules_force_reload_true(self):
        enforcer = policy.Enforcer()
        enforcer.load_rules(True)
        enforcer.set_rules({'test_heat_rule': 'test'})
        enforcer.load_rules(True)
        self.assertNotIn({'test_heat_rule': 'test'},
                         enforcer.enforcer.rules)

    def test_load_rules_force_reload_false(self):
        enforcer = policy.Enforcer()
        enforcer.load_rules(True)
        enforcer.load_rules(True)
        enforcer.set_rules({'test_heat_rule': 'test'})
        enforcer.load_rules(False)
        self.assertIn('test_heat_rule', enforcer.enforcer.rules)

    def test_no_such_action(self):
        ctx = utils.dummy_context(roles=['not_a_stack_user'])
        enforcer = policy.Enforcer(scope='cloudformation')
        action = 'no_such_action'
        msg = 'cloudformation:no_such_action has not been registered'
        self.assertRaisesRegex(base_policy.PolicyNotRegistered,
                               msg,
                               enforcer.enforce,
                               ctx, action,
                               None, None, True)

    def test_check_admin(self):
        enforcer = policy.Enforcer()

        ctx = utils.dummy_context(roles=[])
        self.assertFalse(enforcer.check_is_admin(ctx))

        ctx = utils.dummy_context(roles=['not_admin'])
        self.assertFalse(enforcer.check_is_admin(ctx))

        ctx = utils.dummy_context(roles=['admin'])
        self.assertTrue(enforcer.check_is_admin(ctx))

    def test_enforce_creds(self):
        enforcer = policy.Enforcer()
        ctx = utils.dummy_context(roles=['admin'])
        self.assertTrue(enforcer.check_is_admin(ctx))

    def test_resource_default_rule(self):
        context = utils.dummy_context(roles=['non-admin'])
        enforcer = policy.ResourceEnforcer()
        res_type = "OS::Test::NotInPolicy"
        self.assertTrue(enforcer.enforce(context, res_type,
                                         is_registered_policy=True))

    def test_resource_enforce_success(self):
        context = utils.dummy_context(roles=['admin'])
        enforcer = policy.ResourceEnforcer()
        res_type = "OS::Keystone::User"
        self.assertTrue(enforcer.enforce(context, res_type,
                                         is_registered_policy=True))

    def test_resource_enforce_fail(self):
        context = utils.dummy_context(roles=['non-admin'])
        enforcer = policy.ResourceEnforcer()
        res_type = "OS::Nova::Quota"
        ex = self.assertRaises(exception.Forbidden,
                               enforcer.enforce,
                               context, res_type,
                               None, None, True)
        self.assertIn(res_type, ex.message)

    def test_resource_wildcard_enforce_fail(self):
        context = utils.dummy_context(roles=['non-admin'])
        enforcer = policy.ResourceEnforcer()
        res_type = "OS::Keystone::User"
        ex = self.assertRaises(exception.Forbidden,
                               enforcer.enforce,
                               context, res_type,
                               None, None, True)
        self.assertIn(res_type.split("::", 1)[0], ex.message)

    def test_resource_enforce_returns_false(self):
        context = utils.dummy_context(roles=['non-admin'])
        enforcer = policy.ResourceEnforcer(exc=None)
        res_type = "OS::Keystone::User"
        self.assertFalse(enforcer.enforce(context, res_type,
                                          is_registered_policy=True))
        self.assertIsNotNone(enforcer.enforce(context, res_type,
                                              is_registered_policy=True))

    def test_resource_enforce_exc_on_false(self):
        context = utils.dummy_context(roles=['non-admin'])
        enforcer = policy.ResourceEnforcer()
        res_type = "OS::Keystone::User"
        ex = self.assertRaises(exception.Forbidden,
                               enforcer.enforce,
                               context, res_type,
                               None, None, True)
        self.assertIn(res_type, ex.message)

    def test_resource_enforce_override_deny_admin(self):
        context = utils.dummy_context(roles=['admin'])
        enforcer = policy.ResourceEnforcer(
            policy_file=self.get_policy_file('resources.json'))
        res_type = "OS::Cinder::Quota"
        ex = self.assertRaises(exception.Forbidden,
                               enforcer.enforce,
                               context, res_type,
                               None, None, True)
        self.assertIn(res_type, ex.message)
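
# Illustrative sketch (an assumption about the fixture files, which live
# under heat/tests/policy/ and are not shown here): a deny-everything file
# such as notallowed.json would map each action to oslo.policy's
# never-matching rule, e.g.
#
#     {"cloudformation:ListStacks": "!",
#      "cloudformation:CreateStack": "!"}
#
# so every enforce() call above raises heat.common.exception.Forbidden.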

heat-10.0.2/heat/tests/db/
heat-10.0.2/heat/tests/db/test_migrations.py

#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
#    implied. See the License for the specific language governing
#    permissions and limitations under the License.

"""Tests for database migrations.

This test case reads the configuration file test_migrations.conf for
database connection settings to use in the tests. For each connection found
in the config file, the test case runs a series of test cases to ensure
that migrations work properly both upgrading and downgrading, and that no
data loss occurs if possible.
"""

import datetime
import fixtures
import os
import uuid

from migrate.versioning import repository
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import test_fixtures
from oslo_db.sqlalchemy import test_migrations
from oslo_db.sqlalchemy import utils
from oslo_serialization import jsonutils
from oslo_utils import timeutils
from oslotest import base as test_base
import six
import sqlalchemy
import testtools

from heat.db.sqlalchemy import migrate_repo
from heat.db.sqlalchemy import migration
from heat.db.sqlalchemy import models
from heat.tests import common


class DBNotAllowed(Exception):
    pass


class BannedDBSchemaOperations(fixtures.Fixture):
    """Ban some operations for migrations"""

    def __init__(self, banned_resources=None):
        super(BannedDBSchemaOperations, self).__init__()
        self._banned_resources = banned_resources or []

    @staticmethod
    def _explode(resource, op):
        print('%s.%s()' % (resource, op))
        raise DBNotAllowed(
            'Operation %s.%s() is not allowed in a database migration' % (
                resource, op))

    def setUp(self):
        super(BannedDBSchemaOperations, self).setUp()
        for thing in self._banned_resources:
            self.useFixture(fixtures.MonkeyPatch(
                'sqlalchemy.%s.drop' % thing,
                lambda *a, **k: self._explode(thing, 'drop')))
            self.useFixture(fixtures.MonkeyPatch(
                'sqlalchemy.%s.alter' % thing,
                lambda *a, **k: self._explode(thing, 'alter')))


class TestBannedDBSchemaOperations(testtools.TestCase):
    def test_column(self):
        column = sqlalchemy.Column()
        with BannedDBSchemaOperations(['Column']):
            self.assertRaises(DBNotAllowed, column.drop)
            self.assertRaises(DBNotAllowed, column.alter)

    def test_table(self):
        table = sqlalchemy.Table()
        with BannedDBSchemaOperations(['Table']):
            self.assertRaises(DBNotAllowed, table.drop)
            self.assertRaises(DBNotAllowed, table.alter)
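
# Illustrative usage sketch, not part of the original module: wrapping a
# migration step in the fixture turns any banned schema operation into a
# hard error, e.g.
#
#     with BannedDBSchemaOperations(['Column']):
#         some_column.drop()   # raises DBNotAllowed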

class HeatMigrationsCheckers(test_migrations.WalkVersionsMixin,
                             common.FakeLogMixin):
    """Test sqlalchemy-migrate migrations."""

    snake_walk = False
    downgrade = False

    @property
    def INIT_VERSION(self):
        return migration.INIT_VERSION

    @property
    def REPOSITORY(self):
        migrate_file = migrate_repo.__file__
        return repository.Repository(
            os.path.abspath(os.path.dirname(migrate_file))
        )

    @property
    def migration_api(self):
        temp = __import__('oslo_db.sqlalchemy.migration', globals(),
                          locals(), ['versioning_api'], 0)
        return temp.versioning_api

    @property
    def migrate_engine(self):
        return self.engine

    def migrate_up(self, version, with_data=False):
        """Check that migrations don't cause downtime.

        Schema migrations can be done online, allowing for rolling upgrades.
        """
        # NOTE(xek): This is a list of migrations where we allow dropping
        # things. The rules for adding exceptions are very very specific.
        # Chances are you don't meet the criteria.
        # Reviewers: DO NOT ALLOW THINGS TO BE ADDED HERE

        exceptions = [
            64,  # drop constraint
        ]

        # Reviewers: DO NOT ALLOW THINGS TO BE ADDED HERE
        # NOTE(xek): We start requiring things be additive in
        # liberty, so ignore all migrations before that point.
        LIBERTY_START = 63

        if version >= LIBERTY_START and version not in exceptions:
            banned = ['Table', 'Column']
        else:
            banned = None
        with BannedDBSchemaOperations(banned):
            super(HeatMigrationsCheckers, self).migrate_up(version,
                                                           with_data)

    def test_walk_versions(self):
        self.walk_versions(self.snake_walk, self.downgrade)

    def assertColumnExists(self, engine, table, column):
        t = utils.get_table(engine, table)
        self.assertIn(column, t.c)

    def assertColumnType(self, engine, table, column, sqltype):
        t = utils.get_table(engine, table)
        col = getattr(t.c, column)
        self.assertIsInstance(col.type, sqltype)

    def assertColumnNotExists(self, engine, table, column):
        t = utils.get_table(engine, table)
        self.assertNotIn(column, t.c)

    def assertColumnIsNullable(self, engine, table, column):
        t = utils.get_table(engine, table)
        col = getattr(t.c, column)
        self.assertTrue(col.nullable)

    def assertColumnIsNotNullable(self, engine, table, column_name):
        table = utils.get_table(engine, table)
        column = getattr(table.c, column_name)
        self.assertFalse(column.nullable)

    def assertIndexExists(self, engine, table, index):
        t = utils.get_table(engine, table)
        index_names = [idx.name for idx in t.indexes]
        self.assertIn(index, index_names)

    def assertIndexMembers(self, engine, table, index, members):
        self.assertIndexExists(engine, table, index)

        t = utils.get_table(engine, table)
        index_columns = []
        for idx in t.indexes:
            if idx.name == index:
                for ix in idx.columns:
                    index_columns.append(ix.name)
                break

        self.assertEqual(sorted(members), sorted(index_columns))

    def _pre_upgrade_031(self, engine):
        raw_template = utils.get_table(engine, 'raw_template')
        templ = []
        for i in range(300, 303, 1):
            t = dict(id=i, template='{}', files='{}')
            engine.execute(raw_template.insert(), [t])
            templ.append(t)

        user_creds = utils.get_table(engine, 'user_creds')
        user = [dict(id=4, username='angus', password='notthis',
                     tenant='mine', auth_url='bla',
                     tenant_id=str(uuid.uuid4()),
                     trust_id='',
                     trustor_user_id='')]
        engine.execute(user_creds.insert(), user)

        stack = utils.get_table(engine, 'stack')
        stack_ids = [('967aaefb-152e-405d-b13a-35d4c816390c', 0),
                     ('9e9deba9-a303-4f29-84d3-c8165647c47e', 1),
                     ('9a4bd1ec-8b21-46cd-964a-f66cb1cfa2f9', 2)]
        data = [dict(id=ll_id, name='fruity',
                     raw_template_id=templ[templ_id]['id'],
                     user_creds_id=user[0]['id'],
                     username='angus', disable_rollback=True)
                for ll_id, templ_id in stack_ids]

        engine.execute(stack.insert(), data)
        return data

    def _check_031(self, engine, data):
        self.assertColumnExists(engine, 'stack_lock', 'stack_id')
        self.assertColumnExists(engine, 'stack_lock', 'engine_id')
        self.assertColumnExists(engine, 'stack_lock', 'created_at')
        self.assertColumnExists(engine, 'stack_lock', 'updated_at')

    def _check_034(self, engine, data):
        self.assertColumnExists(engine, 'raw_template', 'files')
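
    # Note (an assumption about the oslo.db WalkVersionsMixin contract,
    # added for orientation): the _pre_upgrade_NNN/_check_NNN methods below
    # are discovered by name -- _pre_upgrade_NNN seeds fixture rows before
    # migration NNN is applied, and _check_NNN receives that data afterwards
    # to verify the migrated schema and contents.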

    def _pre_upgrade_035(self, engine):
        # The stack ids are from the version 33 migration
        event_table = utils.get_table(engine, 'event')
        data = [{
            'id': '22222222-152e-405d-b13a-35d4c816390c',
            'stack_id': '967aaefb-152e-405d-b13a-35d4c816390c',
            'resource_action': 'Test',
            'resource_status': 'TEST IN PROGRESS',
            'resource_name': 'Testing Resource',
            'physical_resource_id': '3465d1ec-8b21-46cd-9dgf-f66cttrh53f9',
            'resource_status_reason': '',
            'resource_type': '',
            'resource_properties': None,
            'created_at': timeutils.utcnow()},
            {'id': '11111111-152e-405d-b13a-35d4c816390c',
             'stack_id': '967aaefb-152e-405d-b13a-35d4c816390c',
             'resource_action': 'Test',
             'resource_status': 'TEST COMPLETE',
             'resource_name': 'Testing Resource',
             'physical_resource_id': '3465d1ec-8b21-46cd-9dgf-f66cttrh53f9',
             'resource_status_reason': '',
             'resource_type': '',
             'resource_properties': None,
             'created_at': timeutils.utcnow() +
                datetime.timedelta(days=5)}]
        engine.execute(event_table.insert(), data)
        return data

    def _check_035(self, engine, data):
        self.assertColumnExists(engine, 'event', 'id')
        self.assertColumnExists(engine, 'event', 'uuid')

        event_table = utils.get_table(engine, 'event')
        events_in_db = list(event_table.select().execute())
        last_id = 0
        for index, event in enumerate(data):
            last_id = index + 1
            self.assertEqual(last_id, events_in_db[index].id)
            self.assertEqual(event['id'], events_in_db[index].uuid)

        # Check that the autoincremental id is ok
        data = [{
            'uuid': '33333333-152e-405d-b13a-35d4c816390c',
            'stack_id': '967aaefb-152e-405d-b13a-35d4c816390c',
            'resource_action': 'Test',
            'resource_status': 'TEST COMPLETE AGAIN',
            'resource_name': 'Testing Resource',
            'physical_resource_id': '3465d1ec-8b21-46cd-9dgf-f66cttrh53f9',
            'resource_status_reason': '',
            'resource_type': '',
            'resource_properties': None,
            'created_at': timeutils.utcnow()}]
        result = engine.execute(event_table.insert(), data)
        self.assertEqual(last_id + 1, result.inserted_primary_key[0])

    def _check_036(self, engine, data):
        self.assertColumnExists(engine, 'stack', 'stack_user_project_id')

    def _pre_upgrade_037(self, engine):
        raw_template = utils.get_table(engine, 'raw_template')
        templ = '''{"heat_template_version": "2013-05-23",
        "parameters": {
           "key_name": {
              "Type": "string"
           }
        }
        }'''
        data = [dict(id=4, template=templ, files='{}')]
        engine.execute(raw_template.insert(), data)
        return data[0]

    def _check_037(self, engine, data):
        raw_template = utils.get_table(engine, 'raw_template')
        templs = list(raw_template.select().
                      where(raw_template.c.id == str(data['id'])).
                      execute())
        template = jsonutils.loads(templs[0].template)
        data_template = jsonutils.loads(data['template'])
        self.assertNotIn('Type', template['parameters']['key_name'])
        self.assertIn('type', template['parameters']['key_name'])
        self.assertEqual(template['parameters']['key_name']['type'],
                         data_template['parameters']['key_name']['Type'])

    def _check_038(self, engine, data):
        self.assertColumnNotExists(engine, 'software_config', 'io')

    def _check_039(self, engine, data):
        self.assertColumnIsNullable(engine, 'stack', 'user_creds_id')

    def _check_040(self, engine, data):
        self.assertColumnNotExists(engine, 'software_deployment',
                                   'signal_id')

    def _pre_upgrade_041(self, engine):
        raw_template = utils.get_table(engine, 'raw_template')
        templ = '''{"heat_template_version": "2013-05-23",
        "resources": {
            "my_instance": {
                "Type": "OS::Nova::Server"
            }
        },
        "outputs": {
            "instance_ip": {
                "Value": { "get_attr": "[my_instance, networks]" }
            }
        }
        }'''
        data = [dict(id=7, template=templ, files='{}')]
        engine.execute(raw_template.insert(), data)
        return data[0]

    def _check_041(self, engine, data):
        raw_template = utils.get_table(engine, 'raw_template')
        templs = list(raw_template.select().
                      where(raw_template.c.id == str(data['id'])).
                      execute())
        template = jsonutils.loads(templs[0].template)
        self.assertIn('type', template['resources']['my_instance'])
        self.assertNotIn('Type', template['resources']['my_instance'])
        self.assertIn('value', template['outputs']['instance_ip'])
        self.assertNotIn('Value', template['outputs']['instance_ip'])

    def _pre_upgrade_043(self, engine):
        raw_template = utils.get_table(engine, 'raw_template')
        templ = '''{"HeatTemplateFormatVersion" : "2012-12-11",
        "Parameters" : {
          "foo" : { "Type" : "String", "NoEcho": "True" },
          "bar" : { "Type" : "String", "NoEcho": "True", "Default": "abc" },
          "blarg" : { "Type" : "String", "Default": "quux" }
        }
        }'''
        data = [dict(id=8, template=templ, files='{}')]
        engine.execute(raw_template.insert(), data)
        return data[0]

    def _check_043(self, engine, data):
        raw_template = utils.get_table(engine, 'raw_template')
        templ = list(raw_template.select().
                     where(raw_template.c.id == data['id']).execute())
        template = jsonutils.loads(templ[0].template)
        self.assertEqual(template['HeatTemplateFormatVersion'],
                         '2012-12-12')

    def _pre_upgrade_045(self, engine):
        raw_template = utils.get_table(engine, 'raw_template')
        templ = []
        for i in range(200, 203, 1):
            t = dict(id=i, template='{}', files='{}')
            engine.execute(raw_template.insert(), [t])
            templ.append(t)

        user_creds = utils.get_table(engine, 'user_creds')
        user = [dict(id=6, username='steve', password='notthis',
                     tenant='mine', auth_url='bla',
                     tenant_id=str(uuid.uuid4()),
                     trust_id='',
                     trustor_user_id='')]
        engine.execute(user_creds.insert(), user)

        stack = utils.get_table(engine, 'stack')
        stack_ids = [('s1', '967aaefb-152e-505d-b13a-35d4c816390c', 0),
                     ('s2', '9e9deba9-a303-5f29-84d3-c8165647c47e', 1),
                     ('s1*', '9a4bd1ec-8b21-56cd-964a-f66cb1cfa2f9', 2)]
        data = [dict(id=ll_id, name=name,
                     raw_template_id=templ[templ_id]['id'],
                     user_creds_id=user[0]['id'],
                     username='steve', disable_rollback=True)
                for name, ll_id, templ_id in stack_ids]
        data[2]['owner_id'] = '967aaefb-152e-505d-b13a-35d4c816390c'

        engine.execute(stack.insert(), data)
        return data

    def _check_045(self, engine, data):
        self.assertColumnExists(engine, 'stack', 'backup')
        stack_table = utils.get_table(engine, 'stack')
        stacks_in_db = list(stack_table.select().execute())
        stack_names_in_db = [s.name for s in stacks_in_db]
        # Assert the expected stacks are still there
        for stack in data:
            self.assertIn(stack['name'], stack_names_in_db)
        # And that the backup flag is set as expected
        for stack in stacks_in_db:
            if stack.name.endswith('*'):
                self.assertTrue(stack.backup)
            else:
                self.assertFalse(stack.backup)

    def _check_046(self, engine, data):
        self.assertColumnExists(engine, 'resource', 'properties_data')
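
    # Illustrative note, not from the original suite: the endswith('*')
    # check above reflects the legacy convention that a backup stack reuses
    # its parent's name with a trailing '*', which the migration evidently
    # uses to populate the new `backup` column.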

    def _pre_upgrade_047(self, engine):
        raw_template = utils.get_table(engine, 'raw_template')
        templ = []
        for i in range(100, 105, 1):
            t = dict(id=i, template='{}', files='{}')
            engine.execute(raw_template.insert(), [t])
            templ.append(t)

        user_creds = utils.get_table(engine, 'user_creds')
        user = [dict(id=7, username='steve', password='notthis',
                     tenant='mine', auth_url='bla',
                     tenant_id=str(uuid.uuid4()),
                     trust_id='',
                     trustor_user_id='')]
        engine.execute(user_creds.insert(), user)

        stack = utils.get_table(engine, 'stack')
        stack_ids = [
            ('s9', '167aaefb-152e-505d-b13a-35d4c816390c', 0),
            ('n1', '1e9deba9-a303-5f29-84d3-c8165647c47e', 1),
            ('n2', '1e9deba9-a304-5f29-84d3-c8165647c47e', 2),
            ('n3', '1e9deba9-a305-5f29-84d3-c8165647c47e', 3),
            ('s9*', '1a4bd1ec-8b21-56cd-964a-f66cb1cfa2f9', 4)]
        data = [dict(id=ll_id, name=name,
                     raw_template_id=templ[tmpl_id]['id'],
                     user_creds_id=user[0]['id'],
                     owner_id=None,
                     backup=False,
                     username='steve', disable_rollback=True)
                for name, ll_id, tmpl_id in stack_ids]

        # Make a nested tree s1->s2->s3->s4 with a s1 backup
        data[1]['owner_id'] = '167aaefb-152e-505d-b13a-35d4c816390c'
        data[2]['owner_id'] = '1e9deba9-a303-5f29-84d3-c8165647c47e'
        data[3]['owner_id'] = '1e9deba9-a304-5f29-84d3-c8165647c47e'
        data[4]['owner_id'] = '167aaefb-152e-505d-b13a-35d4c816390c'
        data[4]['backup'] = True
        engine.execute(stack.insert(), data)
        return data

    def _check_047(self, engine, data):
        self.assertColumnExists(engine, 'stack', 'nested_depth')
        stack_table = utils.get_table(engine, 'stack')
        stacks_in_db = list(stack_table.select().execute())
        stack_ids_in_db = [s.id for s in stacks_in_db]

        # Assert the expected stacks are still there
        for stack in data:
            self.assertIn(stack['id'], stack_ids_in_db)

        # And that the depth is set as expected
        def n_depth(sid):
            s = [s for s in stacks_in_db if s.id == sid][0]
            return s.nested_depth

        self.assertEqual(0, n_depth('167aaefb-152e-505d-b13a-35d4c816390c'))
        self.assertEqual(1, n_depth('1e9deba9-a303-5f29-84d3-c8165647c47e'))
        self.assertEqual(2, n_depth('1e9deba9-a304-5f29-84d3-c8165647c47e'))
        self.assertEqual(3, n_depth('1e9deba9-a305-5f29-84d3-c8165647c47e'))
        self.assertEqual(0, n_depth('1a4bd1ec-8b21-56cd-964a-f66cb1cfa2f9'))

    def _check_049(self, engine, data):
        self.assertColumnExists(engine, 'user_creds', 'region_name')

    def _check_051(self, engine, data):
        column_list = [('id', False),
                       ('host', False),
                       ('topic', False),
                       ('binary', False),
                       ('hostname', False),
                       ('engine_id', False),
                       ('report_interval', False),
                       ('updated_at', True),
                       ('created_at', True),
                       ('deleted_at', True)]
        for column in column_list:
            self.assertColumnExists(engine, 'service', column[0])
            if not column[1]:
                self.assertColumnIsNotNullable(engine, 'service', column[0])
            else:
                self.assertColumnIsNullable(engine, 'service', column[0])

    def _check_052(self, engine, data):
        self.assertColumnExists(engine, 'stack', 'convergence')

    def _check_055(self, engine, data):
        self.assertColumnExists(engine, 'stack', 'prev_raw_template_id')
        self.assertColumnExists(engine, 'stack', 'current_traversal')
        self.assertColumnExists(engine, 'stack', 'current_deps')

    def _pre_upgrade_056(self, engine):
        raw_template = utils.get_table(engine, 'raw_template')
        templ = []
        for i in range(900, 903, 1):
            t = dict(id=i, template='{}', files='{}')
            engine.execute(raw_template.insert(), [t])
            templ.append(t)

        user_creds = utils.get_table(engine, 'user_creds')
        user = [dict(id=uid, username='test_user', password='password',
                     tenant='test_project', auth_url='bla',
                     tenant_id=str(uuid.uuid4()),
                     trust_id='',
                     trustor_user_id='') for uid in range(900, 903)]
        engine.execute(user_creds.insert(), user)

        stack = utils.get_table(engine, 'stack')
        stack_ids = [('967aaefa-152e-405d-b13a-35d4c816390c', 0),
                     ('9e9debab-a303-4f29-84d3-c8165647c47e', 1),
                     ('9a4bd1e9-8b21-46cd-964a-f66cb1cfa2f9', 2)]
        data = [dict(id=ll_id, name=ll_id,
                     raw_template_id=templ[templ_id]['id'],
                     user_creds_id=user[templ_id]['id'],
                     username='test_user',
                     disable_rollback=True,
                     parameters='test_params',
                     created_at=timeutils.utcnow(),
                     deleted_at=None)
                for ll_id, templ_id in stack_ids]
        data[-1]['deleted_at'] = timeutils.utcnow()

        engine.execute(stack.insert(), data)
        return data

    def _check_056(self, engine, data):
        self.assertColumnNotExists(engine, 'stack', 'parameters')
        self.assertColumnExists(engine, 'raw_template', 'environment')
        self.assertColumnExists(engine, 'raw_template', 'predecessor')

        # Get the parameters in stack table
        stack_parameters = {}
        for stack in data:
            templ_id = stack['raw_template_id']
            stack_parameters[templ_id] = (stack['parameters'],
                                          stack.get('deleted_at'))

        # validate that they have moved to raw_template
        raw_template_table = utils.get_table(engine, 'raw_template')
        raw_templates = raw_template_table.select().execute()

        for raw_template in raw_templates:
            if raw_template.id in stack_parameters:
                stack_param, deleted_at = stack_parameters[raw_template.id]
                tmpl_env = raw_template.environment
                if engine.name == 'sqlite' and deleted_at is None:
                    stack_param = '"%s"' % stack_param
                if deleted_at is None:
                    self.assertEqual(stack_param,
                                     tmpl_env,
                                     'parameters migration from stack to '
                                     'raw_template failed')
                else:
                    self.assertIsNone(tmpl_env,
                                      'parameters migration did not skip '
                                      'deleted stack')

    def _pre_upgrade_057(self, engine):
        # template
        raw_template = utils.get_table(engine, 'raw_template')
        templ = [dict(id=11, template='{}', files='{}')]
        engine.execute(raw_template.insert(), templ)

        # credentials
        user_creds = utils.get_table(engine, 'user_creds')
        user = [dict(id=11, username='steve', password='notthis',
                     tenant='mine', auth_url='bla',
                     tenant_id=str(uuid.uuid4()),
                     trust_id='',
                     trustor_user_id='')]
        engine.execute(user_creds.insert(), user)

        # stack
        stack = utils.get_table(engine, 'stack')
        stack_data = [dict(id='867aaefb-152e-505d-b13a-35d4c816390c',
                           name='s1',
                           raw_template_id=templ[0]['id'],
                           user_creds_id=user[0]['id'],
                           username='steve', disable_rollback=True)]
        engine.execute(stack.insert(), stack_data)

        # resource
        resource = utils.get_table(engine, 'resource')
        res_data = [dict(id='167aaefb-152e-505d-b13a-35d4c816390c',
                         name='res-4',
                         stack_id=stack_data[0]['id'],
                         user_creds_id=user[0]['id']),
                    dict(id='177aaefb-152e-505d-b13a-35d4c816390c',
                         name='res-5',
                         stack_id=stack_data[0]['id'],
                         user_creds_id=user[0]['id'])]
        engine.execute(resource.insert(), res_data)

        # resource_data
        resource_data = utils.get_table(engine, 'resource_data')
        rd_data = [dict(key='fruit',
                        value='blueberries',
                        reduct=False,
                        resource_id=res_data[0]['id']),
                   dict(key='fruit',
                        value='apples',
                        reduct=False,
                        resource_id=res_data[1]['id'])]
        engine.execute(resource_data.insert(), rd_data)

        return {'resource': res_data, 'resource_data': rd_data}

    def _check_057(self, engine, data):
        def uuid_in_res_data(res_uuid):
            for rd in data['resource']:
                if rd['id'] == res_uuid:
                    return True
            return False

        def rd_matches_old_data(key, value, res_uuid):
            for rd in data['resource_data']:
                if (rd['resource_id'] == res_uuid and rd['key'] == key
                        and rd['value'] == value):
                    return True
            return False

        self.assertColumnIsNotNullable(engine, 'resource', 'id')
        res_table = utils.get_table(engine, 'resource')
        res_in_db = list(res_table.select().execute())
        # confirm the resource.id is an int and the uuid field has been
        # copied from the old id.
        for r in res_in_db:
            self.assertIsInstance(r.id, six.integer_types)
            self.assertTrue(uuid_in_res_data(r.uuid))

        # confirm that the new resource_id points to the correct resource.
        rd_table = utils.get_table(engine, 'resource_data')
        rd_in_db = list(rd_table.select().execute())
        for rd in rd_in_db:
            for r in res_in_db:
                if rd.resource_id == r.id:
                    self.assertTrue(rd_matches_old_data(rd.key, rd.value,
                                                        r.uuid))

    def _check_058(self, engine, data):
        self.assertColumnExists(engine, 'resource', 'engine_id')
        self.assertColumnExists(engine, 'resource', 'atomic_key')

    def _check_059(self, engine, data):
        column_list = [('entity_id', False),
                       ('traversal_id', False),
                       ('is_update', False),
                       ('atomic_key', False),
                       ('stack_id', False),
                       ('input_data', True),
                       ('updated_at', True),
                       ('created_at', True)]
        for column in column_list:
            self.assertColumnExists(engine, 'sync_point', column[0])
            if not column[1]:
                self.assertColumnIsNotNullable(engine, 'sync_point',
                                               column[0])
            else:
                self.assertColumnIsNullable(engine, 'sync_point', column[0])

    def _check_060(self, engine, data):
        column_list = ['needed_by', 'requires', 'replaces', 'replaced_by',
                       'current_template_id']
        for column in column_list:
            self.assertColumnExists(engine, 'resource', column)

    def _check_061(self, engine, data):
        for tab_name in ['stack', 'resource', 'software_deployment']:
            self.assertColumnType(engine, tab_name, 'status_reason',
                                  sqlalchemy.Text)

    def _check_062(self, engine, data):
        self.assertColumnExists(engine, 'stack', 'parent_resource_name')

    def _check_063(self, engine, data):
        self.assertColumnExists(engine, 'resource',
                                'properties_data_encrypted')

    def _check_064(self, engine, data):
        self.assertColumnNotExists(engine, 'raw_template', 'predecessor')

    def _check_065(self, engine, data):
        self.assertColumnExists(engine, 'resource', 'root_stack_id')
        self.assertIndexExists(engine, 'resource',
                               'ix_resource_root_stack_id')

    def _check_071(self, engine, data):
        self.assertIndexExists(engine, 'stack', 'ix_stack_owner_id')
        self.assertIndexMembers(engine, 'stack', 'ix_stack_owner_id',
                                ['owner_id'])

    def _check_073(self, engine, data):
        # check if column still exists and is not nullable.
        self.assertColumnIsNotNullable(engine, 'resource_data',
                                       'resource_id')

        # Ensure that only one foreign key exists and is created as expected.
        inspector = sqlalchemy.engine.reflection.Inspector.from_engine(
            engine)
        resource_data_fkeys = inspector.get_foreign_keys('resource_data')
        self.assertEqual(1, len(resource_data_fkeys))
        fk = resource_data_fkeys[0]
        self.assertEqual('fk_resource_id', fk['name'])
        self.assertEqual(['resource_id'], fk['constrained_columns'])
        self.assertEqual('resource', fk['referred_table'])
        self.assertEqual(['id'], fk['referred_columns'])

    def _check_079(self, engine, data):
        self.assertColumnExists(engine, 'resource', 'rsrc_prop_data_id')
        self.assertColumnExists(engine, 'event', 'rsrc_prop_data_id')
        column_list = [('id', False),
                       ('data', True),
                       ('encrypted', True),
                       ('updated_at', True),
                       ('created_at', True)]
        for column in column_list:
            self.assertColumnExists(
                engine, 'resource_properties_data', column[0])
            if not column[1]:
                self.assertColumnIsNotNullable(
                    engine, 'resource_properties_data', column[0])
            else:
                self.assertColumnIsNullable(
                    engine, 'resource_properties_data', column[0])

    def _check_080(self, engine, data):
        self.assertColumnExists(engine, 'resource', 'attr_data_id')


class DbTestCase(test_fixtures.OpportunisticDBTestMixin,
                 test_base.BaseTestCase):
    def setUp(self):
        super(DbTestCase, self).setUp()

        self.engine = enginefacade.writer.get_engine()
        self.sessionmaker = enginefacade.writer.get_sessionmaker()


class TestHeatMigrationsMySQL(DbTestCase, HeatMigrationsCheckers):
    FIXTURE = test_fixtures.MySQLOpportunisticFixture


class TestHeatMigrationsPostgreSQL(DbTestCase, HeatMigrationsCheckers):
    FIXTURE = test_fixtures.PostgresqlOpportunisticFixture


class TestHeatMigrationsSQLite(DbTestCase, HeatMigrationsCheckers):
    pass


class ModelsMigrationSyncMixin(object):

    def get_metadata(self):
        return models.BASE.metadata

    def get_engine(self):
        return self.engine

    def db_sync(self, engine):
        migration.db_sync(engine=engine)

    def include_object(self, object_, name, type_, reflected, compare_to):
        if name in ['migrate_version'] and type_ == 'table':
            return False
        return True


class ModelsMigrationsSyncMysql(DbTestCase,
                                ModelsMigrationSyncMixin,
                                test_migrations.ModelsMigrationsSync):
    FIXTURE = test_fixtures.MySQLOpportunisticFixture


class ModelsMigrationsSyncPostgres(DbTestCase,
                                   ModelsMigrationSyncMixin,
                                   test_migrations.ModelsMigrationsSync):
    FIXTURE = test_fixtures.PostgresqlOpportunisticFixture


class ModelsMigrationsSyncSQLite(DbTestCase,
                                 ModelsMigrationSyncMixin,
                                 test_migrations.ModelsMigrationsSync):
    pass
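
# Note (an assumption about the oslo.db test fixtures, for orientation): the
# MySQL/PostgreSQL opportunistic fixtures only run when a matching server is
# reachable with the standard openstack_citest credentials and skip
# themselves otherwise, while the SQLite variants always run locally.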
heat-10.0.2/heat/tests/db/test_sqlalchemy_types.py0000666000175000017500000000671013343562340022215 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from sqlalchemy.dialects.mysql import base as mysql_base
from sqlalchemy.dialects.sqlite import base as sqlite_base
from sqlalchemy import types

from heat.db.sqlalchemy import types as db_types
from heat.tests import common


class LongTextTest(common.HeatTestCase):

    def setUp(self):
        super(LongTextTest, self).setUp()
        self.sqltype = db_types.LongText()

    def test_load_dialect_impl(self):
        dialect = mysql_base.MySQLDialect()
        impl = self.sqltype.load_dialect_impl(dialect)
        self.assertNotEqual(types.Text, type(impl))
        dialect = sqlite_base.SQLiteDialect()
        impl = self.sqltype.load_dialect_impl(dialect)
        self.assertEqual(types.Text, type(impl))


class JsonTest(common.HeatTestCase):

    def setUp(self):
        super(JsonTest, self).setUp()
        self.sqltype = db_types.Json()

    def test_process_bind_param(self):
        dialect = None
        value = {'foo': 'bar'}
        result = self.sqltype.process_bind_param(value, dialect)
        self.assertEqual('{"foo": "bar"}', result)

    def test_process_bind_param_null(self):
        dialect = None
        value = None
        result = self.sqltype.process_bind_param(value, dialect)
        self.assertEqual('null', result)

    def test_process_result_value(self):
        dialect = None
        value = '{"foo": "bar"}'
        result = self.sqltype.process_result_value(value, dialect)
        self.assertEqual({'foo': 'bar'}, result)

    def test_process_result_value_null(self):
        dialect = None
        value = None
        result = self.sqltype.process_result_value(value, dialect)
        self.assertIsNone(result)


class ListTest(common.HeatTestCase):

    def setUp(self):
        super(ListTest, self).setUp()
        self.sqltype = db_types.List()

    def test_load_dialect_impl(self):
        dialect = mysql_base.MySQLDialect()
        impl = self.sqltype.load_dialect_impl(dialect)
        self.assertNotEqual(types.Text, type(impl))
        dialect = sqlite_base.SQLiteDialect()
        impl = self.sqltype.load_dialect_impl(dialect)
        self.assertEqual(types.Text, type(impl))

    def test_process_bind_param(self):
        dialect = None
        value = ['foo', 'bar']
        result = self.sqltype.process_bind_param(value, dialect)
        self.assertEqual('["foo", "bar"]', result)

    def test_process_bind_param_null(self):
        dialect = None
        value = None
        result = self.sqltype.process_bind_param(value, dialect)
        self.assertEqual('null', result)

    def test_process_result_value(self):
        dialect = None
        value = '["foo", "bar"]'
        result = self.sqltype.process_result_value(value, dialect)
        self.assertEqual(['foo', 'bar'], result)

    def test_process_result_value_null(self):
        dialect = None
        value = None
        result = self.sqltype.process_result_value(value, dialect)
        self.assertIsNone(result)
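# The behaviour asserted above (None binds to the string 'null'; a NULL
# column reads back as None) is what a thin JSON wrapper over Text gives.
# A minimal sketch of an equivalent type in plain SQLAlchemy, for
# illustration only (not the actual heat.db.sqlalchemy.types code):
#
#     import json
#
#     from sqlalchemy import types
#
#     class JsonSketch(types.TypeDecorator):
#         impl = types.Text
#
#         def process_bind_param(self, value, dialect):
#             return json.dumps(value)    # json.dumps(None) == 'null'
#
#         def process_result_value(self, value, dialect):
#             return json.loads(value) if value is not None else None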
heat-10.0.2/heat/tests/db/test_sqlalchemy_api.py0000666000175000017500000051216413343562351021631 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import copy
import datetime
import fixtures
import json
import logging
import time
import uuid

import mock
import mox
from oslo_config import cfg
from oslo_db import exception as db_exception
from oslo_utils import timeutils
import six
from sqlalchemy.orm import exc
from sqlalchemy.orm import session

from heat.common import context
from heat.common import exception
from heat.common import template_format
from heat.db.sqlalchemy import api as db_api
from heat.db.sqlalchemy import models
from heat.engine.clients.os import glance
from heat.engine.clients.os import nova
from heat.engine import environment
from heat.engine import resource as rsrc
from heat.engine.resources.aws.ec2 import instance as instances
from heat.engine import stack as parser
from heat.engine import template as tmpl
from heat.engine import template_files
from heat.tests import common
from heat.tests.openstack.nova import fakes as fakes_nova
from heat.tests import utils

wp_template = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "WordPress",
  "Parameters" : {
    "KeyName" : {
      "Description" : "KeyName",
      "Type" : "String",
      "Default" : "test"
    }
  },
  "Resources" : {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId" : "F17-x86_64-gold",
        "InstanceType" : "m1.large",
        "KeyName" : "test",
        "UserData" : "wordpress"
      }
    }
  }
}
'''

UUIDs = (UUID1, UUID2, UUID3) = sorted([str(uuid.uuid4())
                                        for x in range(3)])


class SqlAlchemyTest(common.HeatTestCase):
    def setUp(self):
        super(SqlAlchemyTest, self).setUp()
        self.fc = fakes_nova.FakeClient()
        self.ctx = utils.dummy_context()

    def _mock_get_image_id_success(self, imageId_input, imageId):
        self.m.StubOutWithMock(glance.GlanceClientPlugin,
                               'find_image_by_name_or_id')
        glance.GlanceClientPlugin.find_image_by_name_or_id(
            imageId_input).MultipleTimes().AndReturn(imageId)

    def _setup_test_stack(self, stack_name, stack_id=None, owner_id=None,
                          stack_user_project_id=None, backup=False):
        t = template_format.parse(wp_template)
        template = tmpl.Template(
            t, env=environment.Environment({'KeyName': 'test'}))
        stack_id = stack_id or str(uuid.uuid4())
        stack = parser.Stack(self.ctx, stack_name, template,
                             owner_id=owner_id,
                             stack_user_project_id=stack_user_project_id)
        with utils.UUIDStub(stack_id):
            stack.store(backup=backup)
        return (template, stack)

    def _mock_create(self, mocks):
        fc = fakes_nova.FakeClient()
        mocks.StubOutWithMock(instances.Instance, 'client')
        instances.Instance.client().MultipleTimes().AndReturn(fc)
        self.m.StubOutWithMock(nova.NovaClientPlugin, '_create')
        nova.NovaClientPlugin._create().AndReturn(self.fc)
        self._mock_get_image_id_success('F17-x86_64-gold', 744)

        mocks.StubOutWithMock(fc.servers, 'create')
        fc.servers.create(image=744, flavor=3, key_name='test',
                          name=mox.IgnoreArg(),
                          security_groups=None,
                          userdata=mox.IgnoreArg(), scheduler_hints=None,
                          meta=None, nics=None,
                          availability_zone=None,
                          block_device_mapping=None
                          ).MultipleTimes().AndReturn(fc.servers.list()[4])
        return fc

    def _mock_delete(self, mocks):
        fc = fakes_nova.FakeClient()
        mocks.StubOutWithMock(instances.Instance, 'client')
        instances.Instance.client().MultipleTimes().AndReturn(fc)
        self.patchobject(fc.servers, 'delete',
                         side_effect=fakes_nova.fake_exception())

    @mock.patch.object(db_api, '_paginate_query')
    def test_filter_and_page_query_paginates_query(self,
                                                   mock_paginate_query):
        query = mock.Mock()
        db_api._filter_and_page_query(self.ctx, query)
        self.assertTrue(mock_paginate_query.called)
    @mock.patch.object(db_api, '_events_paginate_query')
    def test_events_filter_and_page_query(self,
                                          mock_events_paginate_query):
        query = mock.Mock()
        db_api._events_filter_and_page_query(self.ctx, query)
        self.assertTrue(mock_events_paginate_query.called)

    @mock.patch.object(db_api.utils, 'paginate_query')
    def test_events_filter_invalid_sort_key(self, mock_paginate_query):
        query = mock.Mock()

        class InvalidSortKey(db_api.utils.InvalidSortKey):
            @property
            def message(_):
                self.fail("_events_paginate_query() should not have tried "
                          "to access .message attribute - it's deprecated "
                          "in oslo.db and removed from base Exception in "
                          "Py3K.")

        mock_paginate_query.side_effect = InvalidSortKey()
        self.assertRaises(exception.Invalid,
                          db_api._events_filter_and_page_query,
                          self.ctx, query, sort_keys=['foo'])

    @mock.patch.object(db_api.db_filters, 'exact_filter')
    def test_filter_and_page_query_handles_no_filters(self, mock_db_filter):
        query = mock.Mock()
        db_api._filter_and_page_query(self.ctx, query)
        mock_db_filter.assert_called_once_with(mock.ANY, mock.ANY, {})

    @mock.patch.object(db_api.db_filters, 'exact_filter')
    def test_events_filter_and_page_query_handles_no_filters(
            self, mock_db_filter):
        query = mock.Mock()
        db_api._events_filter_and_page_query(self.ctx, query)
        mock_db_filter.assert_called_once_with(mock.ANY, mock.ANY, {})

    @mock.patch.object(db_api.db_filters, 'exact_filter')
    def test_filter_and_page_query_applies_filters(self, mock_db_filter):
        query = mock.Mock()
        filters = {'foo': 'bar'}
        db_api._filter_and_page_query(self.ctx, query, filters=filters)
        self.assertTrue(mock_db_filter.called)

    @mock.patch.object(db_api.db_filters, 'exact_filter')
    def test_events_filter_and_page_query_applies_filters(
            self, mock_db_filter):
        query = mock.Mock()
        filters = {'foo': 'bar'}
        db_api._events_filter_and_page_query(self.ctx, query,
                                             filters=filters)
        self.assertTrue(mock_db_filter.called)

    @mock.patch.object(db_api, '_paginate_query')
    def test_filter_and_page_query_whitelists_sort_keys(
            self, mock_paginate_query):
        query = mock.Mock()
        sort_keys = ['stack_name', 'foo']
        db_api._filter_and_page_query(self.ctx, query, sort_keys=sort_keys)
        args, _ = mock_paginate_query.call_args
        self.assertIn(['name'], args)

    @mock.patch.object(db_api, '_events_paginate_query')
    def test_events_filter_and_page_query_whitelists_sort_keys(
            self, mock_paginate_query):
        query = mock.Mock()
        sort_keys = ['event_time', 'foo']
        db_api._events_filter_and_page_query(self.ctx, query,
                                             sort_keys=sort_keys)
        args, _ = mock_paginate_query.call_args
        self.assertIn(['created_at'], args)

    @mock.patch.object(db_api.utils, 'paginate_query')
    def test_paginate_query_default_sorts_by_created_at_and_id(
            self, mock_paginate_query):
        query = mock.Mock()
        model = mock.Mock()
        db_api._paginate_query(self.ctx, query, model, sort_keys=None)
        args, _ = mock_paginate_query.call_args
        self.assertIn(['created_at', 'id'], args)

    @mock.patch.object(db_api.utils, 'paginate_query')
    def test_paginate_query_default_sorts_dir_by_desc(
            self, mock_paginate_query):
        query = mock.Mock()
        model = mock.Mock()
        db_api._paginate_query(self.ctx, query, model, sort_dir=None)
        args, _ = mock_paginate_query.call_args
        self.assertIn('desc', args)

    @mock.patch.object(db_api.utils, 'paginate_query')
    def test_paginate_query_uses_given_sort_plus_id(self,
                                                    mock_paginate_query):
        query = mock.Mock()
        model = mock.Mock()
        db_api._paginate_query(self.ctx, query, model, sort_keys=['name'])
        args, _ = mock_paginate_query.call_args
        self.assertIn(['name', 'id'], args)
    @mock.patch.object(db_api.utils, 'paginate_query')
    def test_paginate_query_gets_model_marker(self, mock_paginate_query):
        query = mock.Mock()
        model = mock.Mock()
        marker = mock.Mock()

        mock_query_object = mock.Mock()
        mock_query_object.get.return_value = 'real_marker'
        ctx = mock.MagicMock()
        ctx.session.query.return_value = mock_query_object

        db_api._paginate_query(ctx, query, model, marker=marker)
        mock_query_object.get.assert_called_once_with(marker)
        args, _ = mock_paginate_query.call_args
        self.assertIn('real_marker', args)

    @mock.patch.object(db_api.utils, 'paginate_query')
    def test_paginate_query_raises_invalid_sort_key(self,
                                                    mock_paginate_query):
        query = mock.Mock()
        model = mock.Mock()

        class InvalidSortKey(db_api.utils.InvalidSortKey):
            @property
            def message(_):
                self.fail("_paginate_query() should not have tried to "
                          "access .message attribute - it's deprecated in "
                          "oslo.db and removed from base Exception class "
                          "in Py3K.")

        mock_paginate_query.side_effect = InvalidSortKey()
        self.assertRaises(exception.Invalid, db_api._paginate_query,
                          self.ctx, query, model, sort_keys=['foo'])

    def test_get_sort_keys_returns_empty_list_if_no_keys(self):
        sort_keys = None
        mapping = {}

        filtered_keys = db_api._get_sort_keys(sort_keys, mapping)
        self.assertEqual([], filtered_keys)

    def test_get_sort_keys_whitelists_single_key(self):
        sort_key = 'foo'
        mapping = {'foo': 'Foo'}

        filtered_keys = db_api._get_sort_keys(sort_key, mapping)
        self.assertEqual(['Foo'], filtered_keys)

    def test_get_sort_keys_whitelists_multiple_keys(self):
        sort_keys = ['foo', 'bar', 'nope']
        mapping = {'foo': 'Foo', 'bar': 'Bar'}

        filtered_keys = db_api._get_sort_keys(sort_keys, mapping)
        self.assertIn('Foo', filtered_keys)
        self.assertIn('Bar', filtered_keys)
        self.assertEqual(2, len(filtered_keys))

    def test_encryption(self):
        stack_name = 'test_encryption'
        stack = self._setup_test_stack(stack_name)[1]
        self._mock_create(self.m)
        self.m.ReplayAll()
        stack.create()
        stack = parser.Stack.load(self.ctx, stack.id)
        cs = stack['WebServer']

        cs.data_set('my_secret', 'fake secret', True)
        rs = db_api.resource_get_by_name_and_stack(self.ctx,
                                                   'WebServer',
                                                   stack.id)
        encrypted_key = rs.data[0]['value']
        self.assertNotEqual(encrypted_key, "fake secret")

        # Test private_key property returns decrypted value
        self.assertEqual("fake secret", db_api.resource_data_get(
            self.ctx, cs.id, 'my_secret'))

        # do this twice to verify that the orm does not commit the
        # unencrypted value.
self.assertEqual("fake secret", db_api.resource_data_get( self.ctx, cs.id, 'my_secret')) def test_resource_data_delete(self): stack = self._setup_test_stack('stack', UUID1)[1] self._mock_create(self.m) self.m.ReplayAll() stack.create() stack = parser.Stack.load(self.ctx, stack.id) resource = stack['WebServer'] resource.data_set('test', 'test_data') self.assertEqual('test_data', db_api.resource_data_get( self.ctx, resource.id, 'test')) db_api.resource_data_delete(self.ctx, resource.id, 'test') self.assertRaises(exception.NotFound, db_api.resource_data_get, self.ctx, resource.id, 'test') def test_stack_get_by_name(self): stack = self._setup_test_stack('stack', UUID1, stack_user_project_id=UUID2)[1] st = db_api.stack_get_by_name(self.ctx, 'stack') self.assertEqual(UUID1, st.id) self.ctx.tenant = UUID3 st = db_api.stack_get_by_name(self.ctx, 'stack') self.assertIsNone(st) self.ctx.tenant = UUID2 st = db_api.stack_get_by_name(self.ctx, 'stack') self.assertEqual(UUID1, st.id) stack.delete() st = db_api.stack_get_by_name(self.ctx, 'stack') self.assertIsNone(st) def test_nested_stack_get_by_name(self): stack1 = self._setup_test_stack('stack1', UUID1)[1] stack2 = self._setup_test_stack('stack2', UUID2, owner_id=stack1.id)[1] result = db_api.stack_get_by_name(self.ctx, 'stack2') self.assertEqual(UUID2, result.id) stack2.delete() result = db_api.stack_get_by_name(self.ctx, 'stack2') self.assertIsNone(result) def test_stack_get_by_name_and_owner_id(self): stack1 = self._setup_test_stack('stack1', UUID1, stack_user_project_id=UUID3)[1] stack2 = self._setup_test_stack('stack2', UUID2, owner_id=stack1.id, stack_user_project_id=UUID3)[1] result = db_api.stack_get_by_name_and_owner_id(self.ctx, 'stack2', None) self.assertIsNone(result) result = db_api.stack_get_by_name_and_owner_id(self.ctx, 'stack2', stack1.id) self.assertEqual(UUID2, result.id) self.ctx.tenant = str(uuid.uuid4()) result = db_api.stack_get_by_name_and_owner_id(self.ctx, 'stack2', None) self.assertIsNone(result) self.ctx.tenant = UUID3 result = db_api.stack_get_by_name_and_owner_id(self.ctx, 'stack2', stack1.id) self.assertEqual(UUID2, result.id) stack2.delete() result = db_api.stack_get_by_name_and_owner_id(self.ctx, 'stack2', stack1.id) self.assertIsNone(result) def test_stack_get(self): stack = self._setup_test_stack('stack', UUID1)[1] st = db_api.stack_get(self.ctx, UUID1, show_deleted=False) self.assertEqual(UUID1, st.id) stack.delete() st = db_api.stack_get(self.ctx, UUID1, show_deleted=False) self.assertIsNone(st) st = db_api.stack_get(self.ctx, UUID1, show_deleted=True) self.assertEqual(UUID1, st.id) def test_stack_get_status(self): stack = self._setup_test_stack('stack', UUID1)[1] st = db_api.stack_get_status(self.ctx, UUID1) self.assertEqual(('CREATE', 'IN_PROGRESS', '', None), st) stack.delete() st = db_api.stack_get_status(self.ctx, UUID1) self.assertEqual( ('DELETE', 'COMPLETE', 'Stack DELETE completed successfully', None), st) self.assertRaises(exception.NotFound, db_api.stack_get_status, self.ctx, UUID2) def test_stack_get_show_deleted_context(self): stack = self._setup_test_stack('stack', UUID1)[1] self.assertFalse(self.ctx.show_deleted) st = db_api.stack_get(self.ctx, UUID1) self.assertEqual(UUID1, st.id) stack.delete() st = db_api.stack_get(self.ctx, UUID1) self.assertIsNone(st) self.ctx.show_deleted = True st = db_api.stack_get(self.ctx, UUID1) self.assertEqual(UUID1, st.id) def test_stack_get_all(self): stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs] st_db = db_api.stack_get_all(self.ctx) 
    def test_stack_get_all(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]

        st_db = db_api.stack_get_all(self.ctx)
        self.assertEqual(3, len(st_db))

        stacks[0].delete()
        st_db = db_api.stack_get_all(self.ctx)
        self.assertEqual(2, len(st_db))

        stacks[1].delete()
        st_db = db_api.stack_get_all(self.ctx)
        self.assertEqual(1, len(st_db))

    def test_stack_get_all_show_deleted(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]

        st_db = db_api.stack_get_all(self.ctx)
        self.assertEqual(3, len(st_db))

        stacks[0].delete()
        st_db = db_api.stack_get_all(self.ctx)
        self.assertEqual(2, len(st_db))

        st_db = db_api.stack_get_all(self.ctx, show_deleted=True)
        self.assertEqual(3, len(st_db))

    def test_stack_get_all_show_nested(self):
        stack1 = self._setup_test_stack('stack1', UUID1)[1]
        stack2 = self._setup_test_stack('stack2', UUID2,
                                        owner_id=stack1.id)[1]
        # Backup stack should not be returned
        stack3 = self._setup_test_stack('stack1*', UUID3,
                                        owner_id=stack1.id,
                                        backup=True)[1]

        st_db = db_api.stack_get_all(self.ctx)
        self.assertEqual(1, len(st_db))
        self.assertEqual(stack1.id, st_db[0].id)

        st_db = db_api.stack_get_all(self.ctx, show_nested=True)
        self.assertEqual(2, len(st_db))
        st_ids = [s.id for s in st_db]
        self.assertNotIn(stack3.id, st_ids)
        self.assertIn(stack1.id, st_ids)
        self.assertIn(stack2.id, st_ids)

    def test_stack_get_all_with_filters(self):
        self._setup_test_stack('foo', UUID1)
        self._setup_test_stack('bar', UUID2)

        filters = {'name': 'foo'}
        results = db_api.stack_get_all(self.ctx, filters=filters)

        self.assertEqual(1, len(results))
        self.assertEqual('foo', results[0]['name'])

    def test_stack_get_all_filter_matches_in_list(self):
        self._setup_test_stack('foo', UUID1)
        self._setup_test_stack('bar', UUID2)

        filters = {'name': ['bar', 'quux']}
        results = db_api.stack_get_all(self.ctx, filters=filters)

        self.assertEqual(1, len(results))
        self.assertEqual('bar', results[0]['name'])

    def test_stack_get_all_returns_all_if_no_filters(self):
        self._setup_test_stack('foo', UUID1)
        self._setup_test_stack('bar', UUID2)

        filters = None
        results = db_api.stack_get_all(self.ctx, filters=filters)

        self.assertEqual(2, len(results))

    def test_stack_get_all_default_sort_keys_and_dir(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]

        st_db = db_api.stack_get_all(self.ctx)
        self.assertEqual(3, len(st_db))
        self.assertEqual(stacks[2].id, st_db[0].id)
        self.assertEqual(stacks[1].id, st_db[1].id)
        self.assertEqual(stacks[0].id, st_db[2].id)

    def test_stack_get_all_default_sort_dir(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]

        st_db = db_api.stack_get_all(self.ctx, sort_dir='asc')
        self.assertEqual(3, len(st_db))
        self.assertEqual(stacks[0].id, st_db[0].id)
        self.assertEqual(stacks[1].id, st_db[1].id)
        self.assertEqual(stacks[2].id, st_db[2].id)

    def test_stack_get_all_str_sort_keys(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]

        st_db = db_api.stack_get_all(self.ctx, sort_keys='creation_time')
        self.assertEqual(3, len(st_db))
        self.assertEqual(stacks[0].id, st_db[0].id)
        self.assertEqual(stacks[1].id, st_db[1].id)
        self.assertEqual(stacks[2].id, st_db[2].id)

    @mock.patch.object(db_api.utils, 'paginate_query')
    def test_stack_get_all_filters_sort_keys(self, mock_paginate):
        sort_keys = ['stack_name', 'stack_status', 'creation_time',
                     'updated_time', 'stack_owner']
        db_api.stack_get_all(self.ctx, sort_keys=sort_keys)

        args = mock_paginate.call_args[0]
        used_sort_keys = set(args[3])
        expected_keys = set(['name', 'status', 'created_at',
                             'updated_at', 'id'])
        self.assertEqual(expected_keys, used_sort_keys)
    def test_stack_get_all_marker(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]

        st_db = db_api.stack_get_all(self.ctx, marker=stacks[1].id)
        self.assertEqual(1, len(st_db))
        self.assertEqual(stacks[0].id, st_db[0].id)

    def test_stack_get_all_non_existing_marker(self):
        [self._setup_test_stack('stack', x)[1] for x in UUIDs]

        uuid = 'this stack doesn\'t exist'
        st_db = db_api.stack_get_all(self.ctx, marker=uuid)
        self.assertEqual(3, len(st_db))

    def test_stack_get_all_doesnt_mutate_sort_keys(self):
        [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        sort_keys = ['id']

        db_api.stack_get_all(self.ctx, sort_keys=sort_keys)
        self.assertEqual(['id'], sort_keys)

    def test_stack_get_all_hidden_tags(self):
        cfg.CONF.set_override('hidden_stack_tags', ['hidden'])

        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        stacks[0].tags = ['hidden']
        stacks[0].store()
        stacks[1].tags = ['random']
        stacks[1].store()

        st_db = db_api.stack_get_all(self.ctx, show_hidden=True)
        self.assertEqual(3, len(st_db))

        st_db_visible = db_api.stack_get_all(self.ctx, show_hidden=False)
        self.assertEqual(2, len(st_db_visible))

        # Make sure the hidden stack isn't in the stacks returned by
        # stack_get_all_visible()
        for stack in st_db_visible:
            self.assertNotEqual(stacks[0].id, stack.id)

    def test_stack_get_all_by_tags(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        stacks[0].tags = ['tag1']
        stacks[0].store()
        stacks[1].tags = ['tag1', 'tag2']
        stacks[1].store()
        stacks[2].tags = ['tag1', 'tag2', 'tag3']
        stacks[2].store()

        st_db = db_api.stack_get_all(self.ctx, tags=['tag2'])
        self.assertEqual(2, len(st_db))

        st_db = db_api.stack_get_all(self.ctx, tags=['tag1', 'tag2'])
        self.assertEqual(2, len(st_db))

        st_db = db_api.stack_get_all(self.ctx, tags=['tag1', 'tag2',
                                                     'tag3'])
        self.assertEqual(1, len(st_db))

    def test_stack_get_all_by_tags_any(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        stacks[0].tags = ['tag2']
        stacks[0].store()
        stacks[1].tags = ['tag1', 'tag2']
        stacks[1].store()
        stacks[2].tags = ['tag1', 'tag3']
        stacks[2].store()

        st_db = db_api.stack_get_all(self.ctx, tags_any=['tag1'])
        self.assertEqual(2, len(st_db))

        st_db = db_api.stack_get_all(self.ctx, tags_any=['tag1', 'tag2',
                                                         'tag3'])
        self.assertEqual(3, len(st_db))

    def test_stack_get_all_by_not_tags(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        stacks[0].tags = ['tag1']
        stacks[0].store()
        stacks[1].tags = ['tag1', 'tag2']
        stacks[1].store()
        stacks[2].tags = ['tag1', 'tag2', 'tag3']
        stacks[2].store()

        st_db = db_api.stack_get_all(self.ctx, not_tags=['tag2'])
        self.assertEqual(1, len(st_db))

        st_db = db_api.stack_get_all(self.ctx, not_tags=['tag1', 'tag2'])
        self.assertEqual(1, len(st_db))

        st_db = db_api.stack_get_all(self.ctx, not_tags=['tag1', 'tag2',
                                                         'tag3'])
        self.assertEqual(2, len(st_db))

    def test_stack_get_all_by_not_tags_any(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        stacks[0].tags = ['tag2']
        stacks[0].store()
        stacks[1].tags = ['tag1', 'tag2']
        stacks[1].store()
        stacks[2].tags = ['tag1', 'tag3']
        stacks[2].store()

        st_db = db_api.stack_get_all(self.ctx, not_tags_any=['tag1'])
        self.assertEqual(1, len(st_db))

        st_db = db_api.stack_get_all(self.ctx, not_tags_any=['tag1',
                                                             'tag2',
                                                             'tag3'])
        self.assertEqual(0, len(st_db))

    def test_stack_get_all_by_tag_with_pagination(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        stacks[0].tags = ['tag1']
        stacks[0].store()
        stacks[1].tags = ['tag2']
        stacks[1].store()
        stacks[2].tags = ['tag1']
        stacks[2].store()

        st_db = db_api.stack_get_all(self.ctx, tags=['tag1'])
        self.assertEqual(2, len(st_db))
        st_db = db_api.stack_get_all(self.ctx, tags=['tag1'], limit=1)
        self.assertEqual(1, len(st_db))
        self.assertEqual(stacks[2].id, st_db[0].id)

        st_db = db_api.stack_get_all(self.ctx, tags=['tag1'], limit=1,
                                     marker=stacks[2].id)
        self.assertEqual(1, len(st_db))
        self.assertEqual(stacks[0].id, st_db[0].id)

    def test_stack_get_all_by_tag_with_show_hidden(self):
        cfg.CONF.set_override('hidden_stack_tags', ['hidden'])

        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        stacks[0].tags = ['tag1']
        stacks[0].store()
        stacks[1].tags = ['hidden', 'tag1']
        stacks[1].store()

        st_db = db_api.stack_get_all(self.ctx, tags=['tag1'],
                                     show_hidden=True)
        self.assertEqual(2, len(st_db))

        st_db = db_api.stack_get_all(self.ctx, tags=['tag1'],
                                     show_hidden=False)
        self.assertEqual(1, len(st_db))

    def test_stack_count_all(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]

        st_db = db_api.stack_count_all(self.ctx)
        self.assertEqual(3, st_db)

        stacks[0].delete()
        st_db = db_api.stack_count_all(self.ctx)
        self.assertEqual(2, st_db)
        # show deleted
        st_db = db_api.stack_count_all(self.ctx, show_deleted=True)
        self.assertEqual(3, st_db)

        stacks[1].delete()
        st_db = db_api.stack_count_all(self.ctx)
        self.assertEqual(1, st_db)
        # show deleted
        st_db = db_api.stack_count_all(self.ctx, show_deleted=True)
        self.assertEqual(3, st_db)

    def test_count_all_hidden_tags(self):
        cfg.CONF.set_override('hidden_stack_tags', ['hidden'])

        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        stacks[0].tags = ['hidden']
        stacks[0].store()
        stacks[1].tags = ['random']
        stacks[1].store()

        st_db = db_api.stack_count_all(self.ctx, show_hidden=True)
        self.assertEqual(3, st_db)

        st_db_visible = db_api.stack_count_all(self.ctx, show_hidden=False)
        self.assertEqual(2, st_db_visible)

    def test_count_all_by_tags(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        stacks[0].tags = ['tag1']
        stacks[0].store()
        stacks[1].tags = ['tag2']
        stacks[1].store()
        stacks[2].tags = ['tag2']
        stacks[2].store()

        st_db = db_api.stack_count_all(self.ctx, tags=['tag1'])
        self.assertEqual(1, st_db)

        st_db = db_api.stack_count_all(self.ctx, tags=['tag2'])
        self.assertEqual(2, st_db)

    def test_count_all_by_tag_with_show_hidden(self):
        cfg.CONF.set_override('hidden_stack_tags', ['hidden'])

        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        stacks[0].tags = ['tag1']
        stacks[0].store()
        stacks[1].tags = ['hidden', 'tag1']
        stacks[1].store()

        st_db = db_api.stack_count_all(self.ctx, tags=['tag1'],
                                       show_hidden=True)
        self.assertEqual(2, st_db)

        st_db = db_api.stack_count_all(self.ctx, tags=['tag1'],
                                       show_hidden=False)
        self.assertEqual(1, st_db)

    def test_stack_count_all_with_filters(self):
        self._setup_test_stack('foo', UUID1)
        self._setup_test_stack('bar', UUID2)
        self._setup_test_stack('bar', UUID3)
        filters = {'name': 'bar'}

        st_db = db_api.stack_count_all(self.ctx, filters=filters)
        self.assertEqual(2, st_db)

    def test_stack_count_all_show_nested(self):
        stack1 = self._setup_test_stack('stack1', UUID1)[1]
        self._setup_test_stack('stack2', UUID2,
                               owner_id=stack1.id)
        # Backup stack should not be counted
        self._setup_test_stack('stack1*', UUID3,
                               owner_id=stack1.id,
                               backup=True)

        st_db = db_api.stack_count_all(self.ctx)
        self.assertEqual(1, st_db)

        st_db = db_api.stack_count_all(self.ctx, show_nested=True)
        self.assertEqual(2, st_db)
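    # The tag filters exercised above combine as AND/OR: `tags` requires
    # every listed tag to be present, `tags_any` requires at least one, and
    # the `not_*` variants negate each. A hypothetical plain-Python
    # equivalent over a single stack's tag list (illustration only, not the
    # SQL the db api actually generates):
    #
    #     def matches(stack_tags, tags=None, tags_any=None):
    #         ok = all(t in stack_tags for t in tags or [])
    #         return ok and (not tags_any
    #                        or any(t in stack_tags for t in tags_any))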
    def test_event_get_all_by_stack(self):
        stack = self._setup_test_stack('stack', UUID1)[1]
        self._mock_create(self.m)
        self.m.ReplayAll()
        stack.create()
        stack._persist_state()
        self.m.UnsetStubs()

        events = db_api.event_get_all_by_stack(self.ctx, UUID1)
        self.assertEqual(4, len(events))

        # test filter by resource_status
        filters = {'resource_status': 'COMPLETE'}
        events = db_api.event_get_all_by_stack(self.ctx, UUID1,
                                               filters=filters)
        self.assertEqual(2, len(events))
        self.assertEqual('COMPLETE', events[0].resource_status)
        self.assertEqual('COMPLETE', events[1].resource_status)

        # test filter by resource_action
        filters = {'resource_action': 'CREATE'}
        events = db_api.event_get_all_by_stack(self.ctx, UUID1,
                                               filters=filters)
        self.assertEqual(4, len(events))
        self.assertEqual('CREATE', events[0].resource_action)
        self.assertEqual('CREATE', events[1].resource_action)
        self.assertEqual('CREATE', events[2].resource_action)
        self.assertEqual('CREATE', events[3].resource_action)

        # test filter by resource_type
        filters = {'resource_type': 'AWS::EC2::Instance'}
        events = db_api.event_get_all_by_stack(self.ctx, UUID1,
                                               filters=filters)
        self.assertEqual(2, len(events))
        self.assertEqual('AWS::EC2::Instance', events[0].resource_type)
        self.assertEqual('AWS::EC2::Instance', events[1].resource_type)

        filters = {'resource_type': 'OS::Nova::Server'}
        events = db_api.event_get_all_by_stack(self.ctx, UUID1,
                                               filters=filters)
        self.assertEqual(0, len(events))

        # test limit and marker
        events_all = db_api.event_get_all_by_stack(self.ctx, UUID1)
        marker = events_all[0].uuid
        expected = events_all[1].uuid
        events = db_api.event_get_all_by_stack(self.ctx, UUID1,
                                               limit=1, marker=marker)
        self.assertEqual(1, len(events))
        self.assertEqual(expected, events[0].uuid)

        self._mock_delete(self.m)
        self.m.ReplayAll()
        stack.delete()

        # test filter by resource_status
        filters = {'resource_status': 'COMPLETE'}
        events = db_api.event_get_all_by_stack(self.ctx, UUID1,
                                               filters=filters)
        self.assertEqual(4, len(events))
        self.assertEqual('COMPLETE', events[0].resource_status)
        self.assertEqual('COMPLETE', events[1].resource_status)
        self.assertEqual('COMPLETE', events[2].resource_status)
        self.assertEqual('COMPLETE', events[3].resource_status)

        # test filter by resource_action
        filters = {'resource_action': 'DELETE',
                   'resource_status': 'COMPLETE'}
        events = db_api.event_get_all_by_stack(self.ctx, UUID1,
                                               filters=filters)
        self.assertEqual(2, len(events))
        self.assertEqual('DELETE', events[0].resource_action)
        self.assertEqual('COMPLETE', events[0].resource_status)
        self.assertEqual('DELETE', events[1].resource_action)
        self.assertEqual('COMPLETE', events[1].resource_status)

        # test limit and marker
        events_all = db_api.event_get_all_by_stack(self.ctx, UUID1)
        self.assertEqual(8, len(events_all))

        marker = events_all[1].uuid
        events2_uuid = events_all[2].uuid
        events3_uuid = events_all[3].uuid
        events = db_api.event_get_all_by_stack(self.ctx, UUID1,
                                               limit=1, marker=marker)
        self.assertEqual(1, len(events))
        self.assertEqual(events2_uuid, events[0].uuid)

        events = db_api.event_get_all_by_stack(self.ctx, UUID1,
                                               limit=2, marker=marker)
        self.assertEqual(2, len(events))
        self.assertEqual(events2_uuid, events[0].uuid)
        self.assertEqual(events3_uuid, events[1].uuid)

        self.m.VerifyAll()

    def test_event_count_all_by_stack(self):
        stack = self._setup_test_stack('stack', UUID1)[1]
        self._mock_create(self.m)
        self.m.ReplayAll()
        stack.create()
        stack._persist_state()
        self.m.UnsetStubs()

        num_events = db_api.event_count_all_by_stack(self.ctx, UUID1)
        self.assertEqual(4, num_events)

        self._mock_delete(self.m)
        self.m.ReplayAll()
        stack.delete()

        num_events = db_api.event_count_all_by_stack(self.ctx, UUID1)
        self.assertEqual(8, num_events)

        self.m.VerifyAll()
    def test_event_get_all_by_tenant(self):
        stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
        self._mock_create(self.m)
        self.m.ReplayAll()
        [s.create() for s in stacks]
        [s._persist_state() for s in stacks]
        self.m.UnsetStubs()

        events = db_api.event_get_all_by_tenant(self.ctx)
        self.assertEqual(12, len(events))

        self._mock_delete(self.m)
        self.m.ReplayAll()
        [s.delete() for s in stacks]

        events = db_api.event_get_all_by_tenant(self.ctx)
        self.assertEqual(0, len(events))

        self.m.VerifyAll()

    def test_user_creds_password(self):
        self.ctx.password = 'password'
        self.ctx.trust_id = None
        self.ctx.region_name = 'RegionOne'
        db_creds = db_api.user_creds_create(self.ctx)
        load_creds = db_api.user_creds_get(self.ctx, db_creds['id'])

        self.assertEqual('test_username', load_creds.get('username'))
        self.assertEqual('password', load_creds.get('password'))
        self.assertEqual('test_tenant', load_creds.get('tenant'))
        self.assertEqual('test_tenant_id', load_creds.get('tenant_id'))
        self.assertEqual('RegionOne', load_creds.get('region_name'))
        self.assertIsNotNone(load_creds.get('created_at'))
        self.assertIsNone(load_creds.get('updated_at'))
        self.assertEqual('http://server.test:5000/v2.0',
                         load_creds.get('auth_url'))
        self.assertIsNone(load_creds.get('trust_id'))
        self.assertIsNone(load_creds.get('trustor_user_id'))

    def test_user_creds_password_too_long(self):
        self.ctx.trust_id = None
        self.ctx.password = 'O123456789O1234567' * 20
        error = self.assertRaises(exception.Error,
                                  db_api.user_creds_create,
                                  self.ctx)
        self.assertIn('Length of OS_PASSWORD after encryption exceeds '
                      'Heat limit (255 chars)', six.text_type(error))

    def test_user_creds_trust(self):
        self.ctx.username = None
        self.ctx.password = None
        self.ctx.trust_id = 'atrust123'
        self.ctx.trustor_user_id = 'atrustor123'
        self.ctx.tenant = 'atenant123'
        self.ctx.project_name = 'atenant'
        self.ctx.auth_url = 'anauthurl'
        self.ctx.region_name = 'aregion'
        db_creds = db_api.user_creds_create(self.ctx)
        load_creds = db_api.user_creds_get(self.ctx, db_creds['id'])

        self.assertIsNone(load_creds.get('username'))
        self.assertIsNone(load_creds.get('password'))
        self.assertIsNotNone(load_creds.get('created_at'))
        self.assertIsNone(load_creds.get('updated_at'))
        self.assertEqual('anauthurl', load_creds.get('auth_url'))
        self.assertEqual('aregion', load_creds.get('region_name'))
        self.assertEqual('atenant123', load_creds.get('tenant_id'))
        self.assertEqual('atenant', load_creds.get('tenant'))
        self.assertEqual('atrust123', load_creds.get('trust_id'))
        self.assertEqual('atrustor123', load_creds.get('trustor_user_id'))

    def test_user_creds_none(self):
        self.ctx.username = None
        self.ctx.password = None
        self.ctx.trust_id = None
        self.ctx.region_name = None
        db_creds = db_api.user_creds_create(self.ctx)
        load_creds = db_api.user_creds_get(self.ctx, db_creds['id'])

        self.assertIsNone(load_creds.get('username'))
        self.assertIsNone(load_creds.get('password'))
        self.assertIsNone(load_creds.get('trust_id'))
        self.assertIsNone(load_creds.get('region_name'))

    def test_software_config_create(self):
        tenant_id = self.ctx.tenant_id
        config = db_api.software_config_create(
            self.ctx, {'name': 'config_mysql',
                       'tenant': tenant_id})
        self.assertIsNotNone(config)
        self.assertEqual('config_mysql', config.name)
        self.assertEqual(tenant_id, config.tenant)
    def test_software_config_get(self):
        self.assertRaises(
            exception.NotFound,
            db_api.software_config_get,
            self.ctx,
            str(uuid.uuid4()))
        conf = ('#!/bin/bash\n'
                'echo "$bar and $foo"\n')
        config = {
            'inputs': [{'name': 'foo'}, {'name': 'bar'}],
            'outputs': [{'name': 'result'}],
            'config': conf,
            'options': {}
        }
        tenant_id = self.ctx.tenant_id
        values = {'name': 'config_mysql',
                  'tenant': tenant_id,
                  'group': 'Heat::Shell',
                  'config': config}
        config = db_api.software_config_create(
            self.ctx, values)
        config_id = config.id
        config = db_api.software_config_get(self.ctx, config_id)
        self.assertIsNotNone(config)
        self.assertEqual('config_mysql', config.name)
        self.assertEqual(tenant_id, config.tenant)
        self.assertEqual('Heat::Shell', config.group)
        self.assertEqual(conf, config.config['config'])
        self.ctx.tenant = None
        self.assertRaises(
            exception.NotFound,
            db_api.software_config_get,
            self.ctx,
            config_id)

    def test_software_config_get_all(self):
        self.assertEqual([], db_api.software_config_get_all(self.ctx))
        tenant_id = self.ctx.tenant_id
        software_config = db_api.software_config_create(
            self.ctx, {'name': 'config_mysql',
                       'tenant': tenant_id})
        self.assertIsNotNone(software_config)
        software_configs = db_api.software_config_get_all(self.ctx)
        self.assertEqual(1, len(software_configs))
        self.assertEqual(software_config.id, software_configs[0].id)

    def test_software_config_delete(self):
        tenant_id = self.ctx.tenant_id
        config = db_api.software_config_create(
            self.ctx, {'name': 'config_mysql',
                       'tenant': tenant_id})
        config_id = config.id
        db_api.software_config_delete(self.ctx, config_id)
        err = self.assertRaises(
            exception.NotFound,
            db_api.software_config_get,
            self.ctx,
            config_id)
        self.assertIn(config_id, six.text_type(err))

        err = self.assertRaises(
            exception.NotFound, db_api.software_config_delete,
            self.ctx, config_id)
        self.assertIn(config_id, six.text_type(err))

    def test_software_config_delete_not_allowed(self):
        tenant_id = self.ctx.tenant_id
        config = db_api.software_config_create(
            self.ctx, {'name': 'config_mysql',
                       'tenant': tenant_id})
        config_id = config.id
        values = {
            'tenant': tenant_id,
            'stack_user_project_id': str(uuid.uuid4()),
            'config_id': config_id,
            'server_id': str(uuid.uuid4()),
        }
        db_api.software_deployment_create(self.ctx, values)
        err = self.assertRaises(
            exception.InvalidRestrictedAction,
            db_api.software_config_delete,
            self.ctx, config_id)
        msg = ("Software config with id %s can not be deleted as it is "
               "referenced" % config_id)
        self.assertIn(msg, six.text_type(err))

    def _deployment_values(self):
        tenant_id = self.ctx.tenant_id
        stack_user_project_id = str(uuid.uuid4())
        config_id = db_api.software_config_create(
            self.ctx, {'name': 'config_mysql',
                       'tenant': tenant_id}).id
        server_id = str(uuid.uuid4())
        input_values = {'foo': 'fooooo', 'bar': 'baaaaa'}
        values = {
            'tenant': tenant_id,
            'stack_user_project_id': stack_user_project_id,
            'config_id': config_id,
            'server_id': server_id,
            'input_values': input_values
        }
        return values

    def test_software_deployment_create(self):
        values = self._deployment_values()
        deployment = db_api.software_deployment_create(self.ctx, values)
        self.assertIsNotNone(deployment)
        self.assertEqual(values['tenant'], deployment.tenant)
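    # For illustration, _deployment_values() above produces a dict shaped
    # like the following (the UUIDs are abbreviated and hypothetical):
    #
    #     {'tenant': '<tenant id>',
    #      'stack_user_project_id': '3f8a...',
    #      'config_id': 'e1c2...',    # id of a freshly created config row
    #      'server_id': '9ab4...',
    #      'input_values': {'foo': 'fooooo', 'bar': 'baaaaa'}}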
    def test_software_deployment_get(self):
        self.assertRaises(
            exception.NotFound,
            db_api.software_deployment_get,
            self.ctx,
            str(uuid.uuid4()))
        values = self._deployment_values()
        deployment = db_api.software_deployment_create(self.ctx, values)
        self.assertIsNotNone(deployment)
        deployment_id = deployment.id
        deployment = db_api.software_deployment_get(self.ctx,
                                                    deployment_id)
        self.assertIsNotNone(deployment)
        self.assertEqual(values['tenant'], deployment.tenant)
        self.assertEqual(values['config_id'], deployment.config_id)
        self.assertEqual(values['server_id'], deployment.server_id)
        self.assertEqual(values['input_values'], deployment.input_values)
        self.assertEqual(
            values['stack_user_project_id'],
            deployment.stack_user_project_id)

        # assert not found with invalid context tenant
        self.ctx.tenant = str(uuid.uuid4())
        self.assertRaises(
            exception.NotFound,
            db_api.software_deployment_get,
            self.ctx,
            deployment_id)

        # assert found with stack_user_project_id context tenant
        self.ctx.tenant = deployment.stack_user_project_id
        deployment = db_api.software_deployment_get(self.ctx,
                                                    deployment_id)
        self.assertIsNotNone(deployment)
        self.assertEqual(values['tenant'], deployment.tenant)

    def test_software_deployment_get_all(self):
        self.assertEqual([], db_api.software_deployment_get_all(self.ctx))
        values = self._deployment_values()
        deployment = db_api.software_deployment_create(self.ctx, values)
        self.assertIsNotNone(deployment)
        deployments = db_api.software_deployment_get_all(self.ctx)
        self.assertEqual(1, len(deployments))
        self.assertEqual(deployment.id, deployments[0].id)
        deployments = db_api.software_deployment_get_all(
            self.ctx, server_id=values['server_id'])
        self.assertEqual(1, len(deployments))
        self.assertEqual(deployment.id, deployments[0].id)
        deployments = db_api.software_deployment_get_all(
            self.ctx, server_id=str(uuid.uuid4()))
        self.assertEqual([], deployments)

    def test_software_deployment_update(self):
        deployment_id = str(uuid.uuid4())
        err = self.assertRaises(exception.NotFound,
                                db_api.software_deployment_update,
                                self.ctx, deployment_id, values={})
        self.assertIn(deployment_id, six.text_type(err))
        values = self._deployment_values()
        deployment = db_api.software_deployment_create(self.ctx, values)
        deployment_id = deployment.id
        values = {'status': 'COMPLETED'}
        deployment = db_api.software_deployment_update(
            self.ctx, deployment_id, values)
        self.assertIsNotNone(deployment)
        self.assertEqual(values['status'], deployment.status)

    def test_software_deployment_delete(self):
        deployment_id = str(uuid.uuid4())
        err = self.assertRaises(exception.NotFound,
                                db_api.software_deployment_delete,
                                self.ctx, deployment_id)
        self.assertIn(deployment_id, six.text_type(err))
        values = self._deployment_values()
        deployment = db_api.software_deployment_create(self.ctx, values)
        deployment_id = deployment.id
        deployment = db_api.software_deployment_get(self.ctx,
                                                    deployment_id)
        self.assertIsNotNone(deployment)
        db_api.software_deployment_delete(self.ctx, deployment_id)

        err = self.assertRaises(
            exception.NotFound,
            db_api.software_deployment_get,
            self.ctx,
            deployment_id)
        self.assertIn(deployment_id, six.text_type(err))

    def test_snapshot_create(self):
        template = create_raw_template(self.ctx)
        user_creds = create_user_creds(self.ctx)
        stack = create_stack(self.ctx, template, user_creds)
        values = {'tenant': self.ctx.tenant_id, 'status': 'IN_PROGRESS',
                  'stack_id': stack.id}
        snapshot = db_api.snapshot_create(self.ctx, values)
        self.assertIsNotNone(snapshot)
        self.assertEqual(values['tenant'], snapshot.tenant)

    def test_snapshot_create_with_name(self):
        template = create_raw_template(self.ctx)
        user_creds = create_user_creds(self.ctx)
        stack = create_stack(self.ctx, template, user_creds)
        values = {'tenant': self.ctx.tenant_id, 'status': 'IN_PROGRESS',
                  'stack_id': stack.id, 'name': 'snap1'}
        snapshot = db_api.snapshot_create(self.ctx, values)
        self.assertIsNotNone(snapshot)
        self.assertEqual(values['tenant'], snapshot.tenant)
        self.assertEqual('snap1', snapshot.name)

    def test_snapshot_get_not_found(self):
        self.assertRaises(
            exception.NotFound,
            db_api.snapshot_get,
            self.ctx,
            str(uuid.uuid4()))
    def test_snapshot_get(self):
        template = create_raw_template(self.ctx)
        user_creds = create_user_creds(self.ctx)
        stack = create_stack(self.ctx, template, user_creds)
        values = {'tenant': self.ctx.tenant_id, 'status': 'IN_PROGRESS',
                  'stack_id': stack.id}
        snapshot = db_api.snapshot_create(self.ctx, values)
        self.assertIsNotNone(snapshot)
        snapshot_id = snapshot.id
        snapshot = db_api.snapshot_get(self.ctx, snapshot_id)
        self.assertIsNotNone(snapshot)
        self.assertEqual(values['tenant'], snapshot.tenant)
        self.assertEqual(values['status'], snapshot.status)
        self.assertIsNotNone(snapshot.created_at)

    def test_snapshot_get_by_another_stack(self):
        template = create_raw_template(self.ctx)
        user_creds = create_user_creds(self.ctx)
        stack = create_stack(self.ctx, template, user_creds)
        stack1 = create_stack(self.ctx, template, user_creds)
        values = {'tenant': self.ctx.tenant_id, 'status': 'IN_PROGRESS',
                  'stack_id': stack.id}
        snapshot = db_api.snapshot_create(self.ctx, values)
        self.assertIsNotNone(snapshot)
        snapshot_id = snapshot.id
        self.assertRaises(exception.SnapshotNotFound,
                          db_api.snapshot_get_by_stack,
                          self.ctx, snapshot_id, stack1)

    def test_snapshot_get_not_found_invalid_tenant(self):
        template = create_raw_template(self.ctx)
        user_creds = create_user_creds(self.ctx)
        stack = create_stack(self.ctx, template, user_creds)
        values = {'tenant': self.ctx.tenant_id, 'status': 'IN_PROGRESS',
                  'stack_id': stack.id}
        snapshot = db_api.snapshot_create(self.ctx, values)
        self.ctx.tenant = str(uuid.uuid4())
        self.assertRaises(
            exception.NotFound,
            db_api.snapshot_get,
            self.ctx,
            snapshot.id)

    def test_snapshot_update_not_found(self):
        snapshot_id = str(uuid.uuid4())
        err = self.assertRaises(exception.NotFound,
                                db_api.snapshot_update,
                                self.ctx, snapshot_id, values={})
        self.assertIn(snapshot_id, six.text_type(err))

    def test_snapshot_update(self):
        template = create_raw_template(self.ctx)
        user_creds = create_user_creds(self.ctx)
        stack = create_stack(self.ctx, template, user_creds)
        values = {'tenant': self.ctx.tenant_id, 'status': 'IN_PROGRESS',
                  'stack_id': stack.id}
        snapshot = db_api.snapshot_create(self.ctx, values)
        snapshot_id = snapshot.id
        values = {'status': 'COMPLETED'}
        snapshot = db_api.snapshot_update(self.ctx, snapshot_id, values)
        self.assertIsNotNone(snapshot)
        self.assertEqual(values['status'], snapshot.status)

    def test_snapshot_delete_not_found(self):
        snapshot_id = str(uuid.uuid4())
        err = self.assertRaises(exception.NotFound,
                                db_api.snapshot_delete,
                                self.ctx, snapshot_id)
        self.assertIn(snapshot_id, six.text_type(err))

    def test_snapshot_delete(self):
        template = create_raw_template(self.ctx)
        user_creds = create_user_creds(self.ctx)
        stack = create_stack(self.ctx, template, user_creds)
        values = {'tenant': self.ctx.tenant_id, 'status': 'IN_PROGRESS',
                  'stack_id': stack.id}
        snapshot = db_api.snapshot_create(self.ctx, values)
        snapshot_id = snapshot.id
        snapshot = db_api.snapshot_get(self.ctx, snapshot_id)
        self.assertIsNotNone(snapshot)
        db_api.snapshot_delete(self.ctx, snapshot_id)

        err = self.assertRaises(
            exception.NotFound,
            db_api.snapshot_get,
            self.ctx,
            snapshot_id)
        self.assertIn(snapshot_id, six.text_type(err))

    def test_snapshot_get_all(self):
        template = create_raw_template(self.ctx)
        user_creds = create_user_creds(self.ctx)
        stack = create_stack(self.ctx, template, user_creds)
        values = {'tenant': self.ctx.tenant_id, 'status': 'IN_PROGRESS',
                  'stack_id': stack.id}
        snapshot = db_api.snapshot_create(self.ctx, values)
        self.assertIsNotNone(snapshot)
        [snapshot] = db_api.snapshot_get_all(self.ctx, stack.id)
        self.assertIsNotNone(snapshot)
        self.assertEqual(values['tenant'], snapshot.tenant)
        self.assertEqual(values['status'], snapshot.status)
        self.assertIsNotNone(snapshot.created_at)
def create_raw_template(context, **kwargs):
    t = template_format.parse(wp_template)
    template = {
        'template': t,
    }
    if 'files' not in kwargs and 'files_id' not in kwargs:
        # modern raw_templates have associated raw_template_files db obj
        tf = template_files.TemplateFiles({'foo': 'bar'})
        tf.store(context)
        kwargs['files_id'] = tf.files_id
    template.update(kwargs)
    return db_api.raw_template_create(context, template)


def create_user_creds(ctx, **kwargs):
    ctx_dict = ctx.to_dict()
    ctx_dict.update(kwargs)
    ctx = context.RequestContext.from_dict(ctx_dict)
    return db_api.user_creds_create(ctx)


def create_stack(ctx, template, user_creds, **kwargs):
    values = {
        'name': 'db_test_stack_name',
        'raw_template_id': template.id,
        'username': ctx.username,
        'tenant': ctx.tenant_id,
        'action': 'create',
        'status': 'complete',
        'status_reason': 'create_complete',
        'parameters': {},
        'user_creds_id': user_creds['id'],
        'owner_id': None,
        'backup': False,
        'timeout': '60',
        'disable_rollback': 0,
        'current_traversal': 'dummy-uuid',
        'prev_raw_template': None
    }
    values.update(kwargs)
    return db_api.stack_create(ctx, values)


def create_resource(ctx, stack, legacy_prop_data=False, **kwargs):
    phy_res_id = UUID1
    if 'phys_res_id' in kwargs:
        phy_res_id = kwargs.pop('phys_res_id')
    if not legacy_prop_data:
        rpd = db_api.resource_prop_data_create(ctx,
                                               {'data': {'foo1': 'bar1'},
                                                'encrypted': False})
    values = {
        'name': 'test_resource_name',
        'physical_resource_id': phy_res_id,
        'action': 'create',
        'status': 'complete',
        'status_reason': 'create_complete',
        'rsrc_metadata': json.loads('{"foo": "123"}'),
        'stack_id': stack.id,
        'atomic_key': 1,
    }
    if not legacy_prop_data:
        values['rsrc_prop_data'] = rpd
    else:
        values['properties_data'] = {'foo1': 'bar1'}
    values.update(kwargs)
    return db_api.resource_create(ctx, values)


def create_resource_data(ctx, resource, **kwargs):
    values = {
        'key': 'test_resource_key',
        'value': 'test_value',
        'redact': 0,
    }
    values.update(kwargs)
    return db_api.resource_data_set(ctx, resource.id, **values)


def create_resource_prop_data(ctx, **kwargs):
    values = {
        'data': {'foo1': 'bar1'},
        'encrypted': False
    }
    values.update(kwargs)
    return db_api.resource_prop_data_create(ctx, **values)


def create_event(ctx, legacy_prop_data=False, **kwargs):
    if not legacy_prop_data:
        rpd = db_api.resource_prop_data_create(ctx,
                                               {'data': {'foo2': 'ev_bar'},
                                                'encrypted': False})
    values = {
        'stack_id': 'test_stack_id',
        'resource_action': 'create',
        'resource_status': 'complete',
        'resource_name': 'res',
        'physical_resource_id': UUID1,
        'resource_status_reason': "create_complete",
    }
    if not legacy_prop_data:
        values['rsrc_prop_data'] = rpd
    else:
        values['resource_properties'] = {'foo2': 'ev_bar'}
    values.update(kwargs)
    return db_api.event_create(ctx, values)


def create_watch_rule(ctx, stack, **kwargs):
    values = {
        'name': 'test_rule',
        'rule': json.loads('{"foo": "123"}'),
        'state': 'normal',
        'last_evaluated': timeutils.utcnow(),
        'stack_id': stack.id,
    }
    values.update(kwargs)
    return db_api.watch_rule_create(ctx, values)


def create_watch_data(ctx, watch_rule, **kwargs):
    values = {
        'data': json.loads('{"foo": "bar"}'),
        'watch_rule_id': watch_rule.id
    }
    values.update(kwargs)
    return db_api.watch_data_create(ctx, values)


def create_service(ctx, **kwargs):
    values = {
        'id': '7079762f-c863-4954-ba61-9dccb68c57e2',
        'engine_id': 'f9aff81e-bc1f-4119-941d-ad1ea7f31d19',
        'host': 'engine-1',
        'hostname': 'host1.devstack.org',
        'binary': 'heat-engine',
        'topic': 'engine',
        'report_interval': 60}
    values.update(kwargs)
    return db_api.service_create(ctx, values)
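# The factory helpers above are designed to compose; a typical call chain
# in the test classes below looks like this (sketch, using the same names):
#
#     ctx = utils.dummy_context()
#     tmpl = create_raw_template(ctx)
#     creds = create_user_creds(ctx)
#     stack = create_stack(ctx, tmpl, creds)
#     res = create_resource(ctx, stack)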
def create_sync_point(ctx, **kwargs):
    values = {'entity_id': '0782c463-064a-468d-98fd-442efb638e3a',
              'is_update': True,
              'traversal_id': '899ff81e-fc1f-41f9-f41d-ad1ea7f31d19',
              'atomic_key': 0,
              'stack_id': 'f6359498-764b-49e7-a515-ad31cbef885b',
              'input_data': {}}
    values.update(kwargs)
    return db_api.sync_point_create(ctx, values)


class DBAPIRawTemplateTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIRawTemplateTest, self).setUp()
        self.ctx = utils.dummy_context()

    def test_raw_template_create(self):
        t = template_format.parse(wp_template)
        tp = create_raw_template(self.ctx, template=t)
        self.assertIsNotNone(tp.id)
        self.assertEqual(t, tp.template)

    def test_raw_template_get(self):
        t = template_format.parse(wp_template)
        tp = create_raw_template(self.ctx, template=t)
        template = db_api.raw_template_get(self.ctx, tp.id)
        self.assertEqual(tp.id, template.id)
        self.assertEqual(tp.template, template.template)

    def test_raw_template_update(self):
        another_wp_template = '''
        {
          "AWSTemplateFormatVersion" : "2010-09-09",
          "Description" : "WordPress",
          "Parameters" : {
            "KeyName" : {
              "Description" : "KeyName",
              "Type" : "String",
              "Default" : "test"
            }
          },
          "Resources" : {
            "WebServer": {
              "Type": "AWS::EC2::Instance",
              "Properties": {
                "ImageId" : "fedora-20.x86_64.qcow2",
                "InstanceType" : "m1.xlarge",
                "KeyName" : "test",
                "UserData" : "wordpress"
              }
            }
          }
        }
        '''
        new_t = template_format.parse(another_wp_template)
        new_files = {
            'foo': 'bar',
            'myfile': 'file:///home/somefile'
        }
        new_values = {
            'template': new_t,
            'files': new_files
        }
        orig_tp = create_raw_template(self.ctx)
        updated_tp = db_api.raw_template_update(self.ctx,
                                                orig_tp.id, new_values)

        self.assertEqual(orig_tp.id, updated_tp.id)
        self.assertEqual(new_t, updated_tp.template)
        self.assertEqual(new_files, updated_tp.files)

    def test_raw_template_delete(self):
        t = template_format.parse(wp_template)
        tp = create_raw_template(self.ctx, template=t)
        db_api.raw_template_delete(self.ctx, tp.id)
        self.assertRaises(exception.NotFound, db_api.raw_template_get,
                          self.ctx, tp.id)


class DBAPIUserCredsTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIUserCredsTest, self).setUp()
        self.ctx = utils.dummy_context()

    def test_user_creds_create_trust(self):
        user_creds = create_user_creds(self.ctx, trust_id='test_trust_id',
                                       trustor_user_id='trustor_id')
        self.assertIsNotNone(user_creds['id'])
        self.assertEqual('test_trust_id', user_creds['trust_id'])
        self.assertEqual('trustor_id', user_creds['trustor_user_id'])
        self.assertIsNone(user_creds['username'])
        self.assertIsNone(user_creds['password'])
        self.assertEqual(self.ctx.project_name, user_creds['tenant'])
        self.assertEqual(self.ctx.tenant_id, user_creds['tenant_id'])

    def test_user_creds_create_password(self):
        user_creds = create_user_creds(self.ctx)
        self.assertIsNotNone(user_creds['id'])
        self.assertEqual(self.ctx.password, user_creds['password'])

    def test_user_creds_get(self):
        user_creds = create_user_creds(self.ctx)
        ret_user_creds = db_api.user_creds_get(self.ctx,
                                               user_creds['id'])
        self.assertEqual(user_creds['password'],
                         ret_user_creds['password'])

    def test_user_creds_get_noexist(self):
        self.assertIsNone(db_api.user_creds_get(self.ctx, 123456))

    def test_user_creds_delete(self):
        user_creds = create_user_creds(self.ctx)
        self.assertIsNotNone(user_creds['id'])
        db_api.user_creds_delete(self.ctx, user_creds['id'])
        creds = db_api.user_creds_get(self.ctx, user_creds['id'])
        self.assertIsNone(creds)
        mock_delete = self.patchobject(session.Session, 'delete')
        err = self.assertRaises(
            exception.NotFound, db_api.user_creds_delete,
            self.ctx, user_creds['id'])
        exp_msg = ('Attempt to delete user creds with id '
                   '%s that does not exist' % user_creds['id'])
        self.assertIn(exp_msg, six.text_type(err))
        self.assertEqual(0, mock_delete.call_count)

    def test_user_creds_delete_retries(self):
        mock_delete = self.patchobject(session.Session, 'delete')
        # returns StaleDataErrors, so we try delete 3 times
        mock_delete.side_effect = [exc.StaleDataError,
                                   exc.StaleDataError,
                                   None]
        user_creds = create_user_creds(self.ctx)
        self.assertIsNotNone(user_creds['id'])
        self.assertIsNone(
            db_api.user_creds_delete(self.ctx, user_creds['id']))
        self.assertEqual(3, mock_delete.call_count)

        # returns other errors, so we try delete once
        mock_delete.side_effect = [exc.UnmappedError]
        self.assertRaises(exc.UnmappedError, db_api.user_creds_delete,
                          self.ctx, user_creds['id'])
        self.assertEqual(4, mock_delete.call_count)


class DBAPIStackTagTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIStackTagTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = create_raw_template(self.ctx)
        self.user_creds = create_user_creds(self.ctx)
        self.stack = create_stack(self.ctx, self.template, self.user_creds)

    def test_stack_tags_set(self):
        tags = db_api.stack_tags_set(self.ctx, self.stack.id,
                                     ['tag1', 'tag2'])
        self.assertEqual(self.stack.id, tags[0].stack_id)
        self.assertEqual('tag1', tags[0].tag)

        tags = db_api.stack_tags_set(self.ctx, self.stack.id, [])
        self.assertIsNone(tags)

    def test_stack_tags_get(self):
        db_api.stack_tags_set(self.ctx, self.stack.id, ['tag1', 'tag2'])
        tags = db_api.stack_tags_get(self.ctx, self.stack.id)
        self.assertEqual(self.stack.id, tags[0].stack_id)
        self.assertEqual('tag1', tags[0].tag)

        tags = db_api.stack_tags_get(self.ctx, UUID1)
        self.assertIsNone(tags)

    def test_stack_tags_delete(self):
        db_api.stack_tags_set(self.ctx, self.stack.id, ['tag1', 'tag2'])
        db_api.stack_tags_delete(self.ctx, self.stack.id)
        tags = db_api.stack_tags_get(self.ctx, self.stack.id)
        self.assertIsNone(tags)


class DBAPIStackTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIStackTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = create_raw_template(self.ctx)
        self.user_creds = create_user_creds(self.ctx)

    def test_stack_create(self):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        self.assertIsNotNone(stack.id)
        self.assertEqual('db_test_stack_name', stack.name)
        self.assertEqual(self.template.id, stack.raw_template_id)
        self.assertEqual(self.ctx.username, stack.username)
        self.assertEqual(self.ctx.tenant_id, stack.tenant)
        self.assertEqual('create', stack.action)
        self.assertEqual('complete', stack.status)
        self.assertEqual('create_complete', stack.status_reason)
        self.assertEqual({}, stack.parameters)
        self.assertEqual(self.user_creds['id'], stack.user_creds_id)
        self.assertIsNone(stack.owner_id)
        self.assertEqual('60', stack.timeout)
        self.assertFalse(stack.disable_rollback)

    def test_stack_delete(self):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        stack_id = stack.id
        resource = create_resource(self.ctx, stack)
        db_api.stack_delete(self.ctx, stack_id)
        self.assertIsNone(db_api.stack_get(self.ctx, stack_id,
                                           show_deleted=False))
        self.assertRaises(exception.NotFound, db_api.resource_get,
                          self.ctx, resource.id)

        self.assertRaises(exception.NotFound, db_api.stack_delete,
                          self.ctx, stack_id)

        # Testing soft delete
        ret_stack = db_api.stack_get(self.ctx, stack_id,
                                     show_deleted=True)
        self.assertIsNotNone(ret_stack)
        self.assertEqual(stack_id, ret_stack.id)
        self.assertEqual('db_test_stack_name', ret_stack.name)

        # Testing child resources deletion
        self.assertRaises(exception.NotFound, db_api.resource_get,
                          self.ctx, resource.id)
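    # The deadlock retry behaviour exercised by
    # test_stack_update_retries_on_deadlock below (four attempts in total)
    # is the pattern oslo.db provides via its retry decorator. An
    # illustrative sketch, not the exact decoration used in
    # heat.db.sqlalchemy.api:
    #
    #     from oslo_db import api as oslo_db_api
    #
    #     @oslo_db_api.wrap_db_retry(max_retries=3, retry_on_deadlock=True)
    #     def stack_update(context, stack_id, values, exp_trvsl=None):
    #         ...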
    def test_stack_update(self):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        values = {
            'name': 'db_test_stack_name2',
            'action': 'update',
            'status': 'failed',
            'status_reason': "update_failed",
            'timeout': '90',
            'current_traversal': 'another-dummy-uuid',
        }
        db_api.stack_update(self.ctx, stack.id, values)
        stack = db_api.stack_get(self.ctx, stack.id)
        self.assertEqual('db_test_stack_name2', stack.name)
        self.assertEqual('update', stack.action)
        self.assertEqual('failed', stack.status)
        self.assertEqual('update_failed', stack.status_reason)
        self.assertEqual(90, stack.timeout)
        self.assertEqual('another-dummy-uuid', stack.current_traversal)

        self.assertRaises(exception.NotFound, db_api.stack_update,
                          self.ctx, UUID2, values)

    def test_stack_update_matches_traversal_id(self):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        values = {
            'current_traversal': 'another-dummy-uuid',
        }
        updated = db_api.stack_update(self.ctx, stack.id, values,
                                      exp_trvsl='dummy-uuid')
        self.assertTrue(updated)
        stack = db_api.stack_get(self.ctx, stack.id)
        self.assertEqual('another-dummy-uuid', stack.current_traversal)

        # test update fails when expected traversal is not matched
        matching_uuid = 'another-dummy-uuid'
        updated = db_api.stack_update(self.ctx, stack.id, values,
                                      exp_trvsl=matching_uuid)
        self.assertTrue(updated)

        diff_uuid = 'some-other-dummy-uuid'
        updated = db_api.stack_update(self.ctx, stack.id, values,
                                      exp_trvsl=diff_uuid)
        self.assertFalse(updated)

    @mock.patch.object(time, 'sleep')
    def test_stack_update_retries_on_deadlock(self, sleep):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        with mock.patch('sqlalchemy.orm.query.Query.update',
                        side_effect=db_exception.DBDeadlock) as mock_update:
            self.assertRaises(db_exception.DBDeadlock,
                              db_api.stack_update,
                              self.ctx, stack.id, {})

            self.assertEqual(4, mock_update.call_count)

    def test_stack_set_status_release_lock(self):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        values = {
            'name': 'db_test_stack_name2',
            'action': 'update',
            'status': 'failed',
            'status_reason': "update_failed",
            'timeout': '90',
            'current_traversal': 'another-dummy-uuid',
        }
        db_api.stack_lock_create(self.ctx, stack.id, UUID1)
        observed = db_api.persist_state_and_release_lock(self.ctx,
                                                         stack.id,
                                                         UUID1, values)
        self.assertIsNone(observed)
        stack = db_api.stack_get(self.ctx, stack.id)
        self.assertEqual('db_test_stack_name2', stack.name)
        self.assertEqual('update', stack.action)
        self.assertEqual('failed', stack.status)
        self.assertEqual('update_failed', stack.status_reason)
        self.assertEqual(90, stack.timeout)
        self.assertEqual('another-dummy-uuid', stack.current_traversal)

    def test_stack_set_status_release_lock_failed(self):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        values = {
            'name': 'db_test_stack_name2',
            'action': 'update',
            'status': 'failed',
            'status_reason': "update_failed",
            'timeout': '90',
            'current_traversal': 'another-dummy-uuid',
        }
        db_api.stack_lock_create(self.ctx, stack.id, UUID2)
        observed = db_api.persist_state_and_release_lock(self.ctx,
                                                         stack.id,
                                                         UUID1, values)
        self.assertTrue(observed)

    def test_stack_set_status_failed_release_lock(self):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        values = {
            'name': 'db_test_stack_name2',
            'action': 'update',
            'status': 'failed',
            'status_reason': "update_failed",
            'timeout': '90',
            'current_traversal': 'another-dummy-uuid',
        }
        db_api.stack_lock_create(self.ctx, stack.id, UUID1)
        observed = db_api.persist_state_and_release_lock(self.ctx, UUID2,
                                                         UUID1, values)
        self.assertTrue(observed)

    def test_stack_get_returns_a_stack(self):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        ret_stack = db_api.stack_get(self.ctx, stack.id, show_deleted=False)
        self.assertIsNotNone(ret_stack)
        self.assertEqual(stack.id, ret_stack.id)
        self.assertEqual('db_test_stack_name', ret_stack.name)

    def test_stack_get_returns_none_if_stack_does_not_exist(self):
        stack = db_api.stack_get(self.ctx, UUID1, show_deleted=False)
        self.assertIsNone(stack)

    def test_stack_get_returns_none_if_tenant_id_does_not_match(self):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        self.ctx.tenant = 'abc'
        stack = db_api.stack_get(self.ctx, UUID1, show_deleted=False)
        self.assertIsNone(stack)

    def test_stack_get_tenant_is_stack_user_project_id(self):
        stack = create_stack(self.ctx, self.template, self.user_creds,
                             stack_user_project_id='astackuserproject')
        self.ctx.tenant = 'astackuserproject'
        ret_stack = db_api.stack_get(self.ctx, stack.id, show_deleted=False)
        self.assertIsNotNone(ret_stack)
        self.assertEqual(stack.id, ret_stack.id)
        self.assertEqual('db_test_stack_name', ret_stack.name)

    def test_stack_get_can_return_a_stack_from_different_tenant(self):
        # create a stack with the common tenant
        stack = create_stack(self.ctx, self.template, self.user_creds)
        # admin context can get the stack
        admin_ctx = utils.dummy_context(user='admin_username',
                                        tenant_id='admin_tenant',
                                        is_admin=True)
        ret_stack = db_api.stack_get(admin_ctx, stack.id,
                                     show_deleted=False)
        self.assertEqual(stack.id, ret_stack.id)
        self.assertEqual('db_test_stack_name', ret_stack.name)

    def test_stack_get_by_name(self):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        ret_stack = db_api.stack_get_by_name(self.ctx, stack.name)
        self.assertIsNotNone(ret_stack)
        self.assertEqual(stack.id, ret_stack.id)
        self.assertEqual('db_test_stack_name', ret_stack.name)

        self.assertIsNone(db_api.stack_get_by_name(self.ctx, 'abc'))

        self.ctx.tenant = 'abc'
        self.assertIsNone(db_api.stack_get_by_name(self.ctx, 'abc'))

    def test_stack_get_all(self):
        values = [
            {'name': 'stack1'},
            {'name': 'stack2'},
            {'name': 'stack3'},
            {'name': 'stack4'}
        ]
        [create_stack(self.ctx, self.template, self.user_creds, **val)
         for val in values]

        ret_stacks = db_api.stack_get_all(self.ctx)
        self.assertEqual(4, len(ret_stacks))
        names = [ret_stack.name for ret_stack in ret_stacks]
        [self.assertIn(val['name'], names) for val in values]

    def test_stack_get_all_by_owner_id(self):
        parent_stack1 = create_stack(self.ctx, self.template,
                                     self.user_creds)
        parent_stack2 = create_stack(self.ctx, self.template,
                                     self.user_creds)
        values = [
            {'owner_id': parent_stack1.id},
            {'owner_id': parent_stack1.id},
            {'owner_id': parent_stack2.id},
            {'owner_id': parent_stack2.id},
        ]
        [create_stack(self.ctx, self.template, self.user_creds, **val)
         for val in values]

        stack1_children = db_api.stack_get_all_by_owner_id(self.ctx,
                                                           parent_stack1.id)
        self.assertEqual(2, len(stack1_children))
        stack2_children = db_api.stack_get_all_by_owner_id(self.ctx,
                                                           parent_stack2.id)
        self.assertEqual(2, len(stack2_children))

    def test_stack_get_all_by_root_owner_id(self):
        parent_stack1 = create_stack(self.ctx, self.template,
                                     self.user_creds)
        parent_stack2 = create_stack(self.ctx, self.template,
                                     self.user_creds)
        for i in range(3):
            lvl1_st = create_stack(self.ctx, self.template, self.user_creds,
                                   owner_id=parent_stack1.id)
            for j in range(2):
                create_stack(self.ctx, self.template, self.user_creds,
                             owner_id=lvl1_st.id)
        for i in range(2):
            lvl1_st = create_stack(self.ctx, self.template, self.user_creds,
                                   owner_id=parent_stack2.id)
            for j in range(4):
                lvl2_st = create_stack(self.ctx, self.template,
                                       self.user_creds,
                                       owner_id=lvl1_st.id)
                for k in range(3):
                    create_stack(self.ctx, self.template,
                                 self.user_creds,
                                 owner_id=lvl2_st.id)

        stack1_children = db_api.stack_get_all_by_root_owner_id(
            self.ctx, parent_stack1.id)
        # 3 stacks on the first level + 6 stacks on the second
        self.assertEqual(9, len(list(stack1_children)))
        stack2_children = db_api.stack_get_all_by_root_owner_id(
            self.ctx, parent_stack2.id)
        # 2 + 8 + 24
        self.assertEqual(34, len(list(stack2_children)))

    def test_stack_get_all_with_regular_tenant(self):
        values = [
            {'tenant': UUID1},
            {'tenant': UUID1},
            {'tenant': UUID2},
            {'tenant': UUID2},
            {'tenant': UUID2},
        ]
        [create_stack(self.ctx, self.template, self.user_creds, **val)
         for val in values]

        self.ctx.tenant = UUID1
        stacks = db_api.stack_get_all(self.ctx)
        self.assertEqual(2, len(stacks))

        self.ctx.tenant = UUID2
        stacks = db_api.stack_get_all(self.ctx)
        self.assertEqual(3, len(stacks))

        self.ctx.tenant = UUID3
        self.assertEqual([], db_api.stack_get_all(self.ctx))

    def test_stack_get_all_with_admin_context(self):
        values = [
            {'tenant': UUID1},
            {'tenant': UUID1},
            {'tenant': UUID2},
            {'tenant': UUID2},
            {'tenant': UUID2},
        ]
        [create_stack(self.ctx, self.template, self.user_creds, **val)
         for val in values]

        admin_ctx = utils.dummy_context(user='admin_user',
                                        tenant_id='admin_tenant',
                                        is_admin=True)
        stacks = db_api.stack_get_all(admin_ctx)
        self.assertEqual(5, len(stacks))

    def test_stack_count_all_with_regular_tenant(self):
        values = [
            {'tenant': UUID1},
            {'tenant': UUID1},
            {'tenant': UUID2},
            {'tenant': UUID2},
            {'tenant': UUID2},
        ]
        [create_stack(self.ctx, self.template, self.user_creds, **val)
         for val in values]

        self.ctx.tenant = UUID1
        self.assertEqual(2, db_api.stack_count_all(self.ctx))

        self.ctx.tenant = UUID2
        self.assertEqual(3, db_api.stack_count_all(self.ctx))

    def test_stack_count_all_with_admin_context(self):
        values = [
            {'tenant': UUID1},
            {'tenant': UUID1},
            {'tenant': UUID2},
            {'tenant': UUID2},
            {'tenant': UUID2},
        ]
        [create_stack(self.ctx, self.template, self.user_creds, **val)
         for val in values]
        admin_ctx = utils.dummy_context(user='admin_user',
                                        tenant_id='admin_tenant',
                                        is_admin=True)
        self.assertEqual(5, db_api.stack_count_all(admin_ctx))

    def test_purge_deleted(self):
        now = timeutils.utcnow()
        delta = datetime.timedelta(seconds=3600 * 7)
        deleted = [now - delta * i for i in range(1, 6)]
        tmpl_files = [template_files.TemplateFiles(
            {'foo': 'file contents %d' % i}) for i in range(5)]
        [tmpl_file.store(self.ctx) for tmpl_file in tmpl_files]
        templates = [create_raw_template(self.ctx,
                                         files_id=tmpl_files[i].files_id
                                         ) for i in range(5)]
        creds = [create_user_creds(self.ctx) for i in range(5)]
        stacks = [create_stack(self.ctx, templates[i], creds[i],
                               deleted_at=deleted[i]) for i in range(5)]
        resources = [create_resource(self.ctx, stacks[i])
                     for i in range(5)]
        events = [create_event(self.ctx, stack_id=stacks[i].id)
                  for i in range(5)]

        db_api.purge_deleted(age=1, granularity='days')
        admin_ctx = utils.dummy_context(is_admin=True)
        self._deleted_stack_existence(admin_ctx, stacks, resources,
                                      events, tmpl_files, (0, 1, 2), (3, 4))

        db_api.purge_deleted(age=22, granularity='hours')
        self._deleted_stack_existence(admin_ctx, stacks, resources,
                                      events, tmpl_files, (0, 1, 2), (3, 4))

        db_api.purge_deleted(age=1100, granularity='minutes')
        self._deleted_stack_existence(admin_ctx, stacks, resources,
                                      events, tmpl_files, (0, 1), (2, 3, 4))

        db_api.purge_deleted(age=3600, granularity='seconds')
        self._deleted_stack_existence(admin_ctx, stacks, resources,
                                      events, tmpl_files, (),
                                      (0, 1, 2, 3, 4))

        # test wrong age
        self.assertRaises(exception.Error, db_api.purge_deleted, -1,
                          'seconds')

    def test_purge_project_deleted(self):
        now = timeutils.utcnow()
        delta = datetime.timedelta(seconds=3600 * 7)
        deleted = [now - delta * i for i in range(1, 6)]
        tmpl_files = [template_files.TemplateFiles(
            {'foo': 'file contents %d' % i}) for i in range(5)]
        [tmpl_file.store(self.ctx) for tmpl_file in tmpl_files]
        templates = [create_raw_template(self.ctx,
                                         files_id=tmpl_files[i].files_id
                                         ) for i in range(5)]
        values = [
            {'tenant': UUID1},
            {'tenant': UUID1},
            {'tenant': UUID1},
            {'tenant': UUID2},
            {'tenant': UUID2},
        ]
        creds = [create_user_creds(self.ctx) for i in range(5)]
        stacks = [create_stack(self.ctx, templates[i], creds[i],
                               deleted_at=deleted[i], **values[i]
                               ) for i in range(5)]
        resources = [create_resource(self.ctx, stacks[i])
                     for i in range(5)]
        events = [create_event(self.ctx, stack_id=stacks[i].id)
                  for i in range(5)]

        db_api.purge_deleted(age=1, granularity='days', project_id=UUID1)
        admin_ctx = utils.dummy_context(is_admin=True)
        self._deleted_stack_existence(admin_ctx, stacks, resources, events,
                                      tmpl_files, (0, 1, 2, 3, 4), ())

        db_api.purge_deleted(age=22, granularity='hours', project_id=UUID1)
        self._deleted_stack_existence(admin_ctx, stacks, resources, events,
                                      tmpl_files, (0, 1, 2, 3, 4), ())

        db_api.purge_deleted(age=1100, granularity='minutes',
                             project_id=UUID1)
        self._deleted_stack_existence(admin_ctx, stacks, resources, events,
                                      tmpl_files, (0, 1, 3, 4), (2,))

        db_api.purge_deleted(age=30, granularity='hours', project_id=UUID2)
        self._deleted_stack_existence(admin_ctx, stacks, resources, events,
                                      tmpl_files, (0, 1, 3), (2, 4))

        db_api.purge_deleted(age=3600, granularity='seconds',
                             project_id=UUID1)
        self._deleted_stack_existence(admin_ctx, stacks, resources, events,
                                      tmpl_files, (3,), (0, 1, 2, 4))

        db_api.purge_deleted(age=3600, granularity='seconds',
                             project_id=UUID2)
        self._deleted_stack_existence(admin_ctx, stacks, resources, events,
                                      tmpl_files, (), (0, 1, 2, 3, 4))

    def test_purge_deleted_prev_raw_template(self):
        now = timeutils.utcnow()
        templates = [create_raw_template(self.ctx) for i in range(2)]
        stacks = [create_stack(
            self.ctx, templates[0], create_user_creds(self.ctx),
            deleted_at=now - datetime.timedelta(seconds=10),
            prev_raw_template=templates[1])]

        db_api.purge_deleted(age=3600, granularity='seconds')
        ctx = utils.dummy_context(is_admin=True)
        self.assertIsNotNone(db_api.stack_get(ctx, stacks[0].id,
                                              show_deleted=True))
        self.assertIsNotNone(db_api.raw_template_get(ctx, templates[1].id))

        stacks = [create_stack(
            self.ctx, templates[0], create_user_creds(self.ctx),
            deleted_at=now - datetime.timedelta(seconds=10),
            prev_raw_template=templates[1], tenant=UUID1)]

        db_api.purge_deleted(age=3600, granularity='seconds',
                             project_id=UUID1)
        self.assertIsNotNone(db_api.stack_get(ctx, stacks[0].id,
                                              show_deleted=True))
        self.assertIsNotNone(db_api.raw_template_get(ctx, templates[1].id))

        db_api.purge_deleted(age=0, granularity='seconds', project_id=UUID2)
        self.assertIsNotNone(db_api.stack_get(ctx, stacks[0].id,
                                              show_deleted=True))
        self.assertIsNotNone(db_api.raw_template_get(ctx, templates[1].id))

    def test_dont_purge_shared_raw_template_files(self):
        now = timeutils.utcnow()
        delta = datetime.timedelta(seconds=3600 * 7)
        deleted = [now - delta * i for i in range(1, 6)]
        # the last two template_files are identical to first two
        # (so should not be purged)
        tmpl_files = [template_files.TemplateFiles(
            {'foo': 'more file contents'}) for i in range(3)]
        [tmpl_file.store(self.ctx) for tmpl_file in tmpl_files]
        templates = [create_raw_template(self.ctx,
                                         files_id=tmpl_files[i % 3].files_id
                                         ) for i in range(5)]
        creds = [create_user_creds(self.ctx) for i in range(5)]
        [create_stack(self.ctx, templates[i], creds[i],
                      deleted_at=deleted[i]) for i in range(5)]

        db_api.purge_deleted(age=15, granularity='hours')

        # The third raw_template_files object should be purged (along
        # with the last three stacks/templates). However, the other
        # two are shared with existing templates, so should not be
        # purged.
        self.assertIsNotNone(db_api.raw_template_files_get(
            self.ctx, tmpl_files[0].files_id))
        self.assertIsNotNone(db_api.raw_template_files_get(
            self.ctx, tmpl_files[1].files_id))
        self.assertRaises(exception.NotFound,
                          db_api.raw_template_files_get,
                          self.ctx, tmpl_files[2].files_id)

    def test_dont_purge_project_shared_raw_template_files(self):
        now = timeutils.utcnow()
        delta = datetime.timedelta(seconds=3600 * 7)
        deleted = [now - delta * i for i in range(1, 6)]
        # the last two template_files are identical to first two
        # (so should not be purged)
        tmpl_files = [template_files.TemplateFiles(
            {'foo': 'more file contents'}) for i in range(3)]
        [tmpl_file.store(self.ctx) for tmpl_file in tmpl_files]
        templates = [create_raw_template(self.ctx,
                                         files_id=tmpl_files[i % 3].files_id
                                         ) for i in range(5)]
        creds = [create_user_creds(self.ctx) for i in range(5)]
        [create_stack(self.ctx, templates[i], creds[i],
                      deleted_at=deleted[i], tenant=UUID1
                      ) for i in range(5)]

        db_api.purge_deleted(age=0, granularity='seconds', project_id=UUID3)
        self.assertIsNotNone(db_api.raw_template_files_get(
            self.ctx, tmpl_files[0].files_id))
        self.assertIsNotNone(db_api.raw_template_files_get(
            self.ctx, tmpl_files[1].files_id))
        self.assertIsNotNone(db_api.raw_template_files_get(
            self.ctx, tmpl_files[2].files_id))

        db_api.purge_deleted(age=15, granularity='hours', project_id=UUID1)
        self.assertIsNotNone(db_api.raw_template_files_get(
            self.ctx, tmpl_files[0].files_id))
        self.assertIsNotNone(db_api.raw_template_files_get(
            self.ctx, tmpl_files[1].files_id))
        self.assertRaises(exception.NotFound,
                          db_api.raw_template_files_get,
                          self.ctx, tmpl_files[2].files_id)
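
    # Helper for the purge tests above: ``existing`` and ``deleted`` are
    # tuples of indices into the fixture lists, asserting which stacks
    # (with their templates, template files, resources, events and
    # properties data) survived the purge and which were erased outright.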
    def _deleted_stack_existence(self, ctx, stacks, resources,
                                 events, tmpl_files, existing, deleted):
        for s in existing:
            self.assertIsNotNone(db_api.stack_get(ctx, stacks[s].id,
                                                  show_deleted=True))
            self.assertIsNotNone(db_api.raw_template_files_get(
                ctx, tmpl_files[s].files_id))
            self.assertIsNotNone(db_api.resource_get(
                ctx, resources[s].id))
            self.assertIsNotNone(ctx.session.query(
                models.Event).get(events[s].id))
            self.assertIsNotNone(ctx.session.query(
                models.ResourcePropertiesData).filter_by(
                    id=resources[s].rsrc_prop_data.id).first())
            self.assertIsNotNone(ctx.session.query(
                models.ResourcePropertiesData).filter_by(
                    id=events[s].rsrc_prop_data.id).first())
        for s in deleted:
            self.assertIsNone(db_api.stack_get(ctx, stacks[s].id,
                                               show_deleted=True))
            rt_id = stacks[s].raw_template_id
            self.assertRaises(exception.NotFound,
                              db_api.raw_template_get, ctx, rt_id)
            self.assertEqual({}, db_api.resource_get_all_by_stack(
                ctx, stacks[s].id))
            self.assertRaises(exception.NotFound,
                              db_api.raw_template_files_get,
                              ctx, tmpl_files[s].files_id)
            self.assertEqual([], db_api.event_get_all_by_stack(
                ctx, stacks[s].id))
            self.assertIsNone(ctx.session.query(
                models.Event).get(events[s].id))
            self.assertIsNone(ctx.session.query(
                models.ResourcePropertiesData).filter_by(
                    id=resources[s].rsrc_prop_data.id).first())
            self.assertIsNone(ctx.session.query(
                models.ResourcePropertiesData).filter_by(
                    id=events[s].rsrc_prop_data.id).first())
            self.assertEqual([], db_api.event_get_all_by_stack(
                ctx, stacks[s].id))
            self.assertIsNone(db_api.user_creds_get(
                self.ctx, stacks[s].user_creds_id))

    def test_purge_deleted_batch_arg(self):
        now = timeutils.utcnow()
        delta = datetime.timedelta(seconds=3600)
        deleted = now - delta
        for i in range(7):
            create_stack(self.ctx, self.template, self.user_creds,
                         deleted_at=deleted)

        with mock.patch('heat.db.sqlalchemy.api._purge_stacks') as mock_ps:
            db_api.purge_deleted(age=0, batch_size=2)
            self.assertEqual(4, mock_ps.call_count)

    def test_stack_get_root_id(self):
        root = create_stack(self.ctx, self.template, self.user_creds,
                            name='root stack')
        child_1 = create_stack(self.ctx, self.template, self.user_creds,
                               name='child 1 stack',
                               owner_id=root.id)
        child_2 = create_stack(self.ctx, self.template, self.user_creds,
                               name='child 2 stack',
                               owner_id=child_1.id)
        child_3 = create_stack(self.ctx, self.template, self.user_creds,
                               name='child 3 stack',
                               owner_id=child_2.id)

        self.assertEqual(root.id, db_api.stack_get_root_id(
            self.ctx, child_3.id))
        self.assertEqual(root.id, db_api.stack_get_root_id(
            self.ctx, child_2.id))
        self.assertEqual(root.id, db_api.stack_get_root_id(
            self.ctx, root.id))
        self.assertEqual(root.id, db_api.stack_get_root_id(
            self.ctx, child_1.id))
        self.assertIsNone(db_api.stack_get_root_id(
            self.ctx, 'non existent stack'))

    def test_stack_count_total_resources(self):

        def add_resources(stack, count, root_stack_id):
            for i in range(count):
                create_resource(
                    self.ctx, stack, False,
                    name='%s-%s' % (stack.name, i),
                    root_stack_id=root_stack_id
                )

        root = create_stack(self.ctx, self.template, self.user_creds,
                            name='root stack')

        # stack with 3 children
        s_1 = create_stack(self.ctx, self.template, self.user_creds,
                           name='s_1', owner_id=root.id)
        s_1_1 = create_stack(self.ctx, self.template, self.user_creds,
                             name='s_1_1', owner_id=s_1.id)
        s_1_2 = create_stack(self.ctx, self.template, self.user_creds,
                             name='s_1_2', owner_id=s_1.id)
        s_1_3 = create_stack(self.ctx, self.template, self.user_creds,
                             name='s_1_3', owner_id=s_1.id)

        # stacks 4 ancestors deep
        s_2 = create_stack(self.ctx, self.template, self.user_creds,
                           name='s_2', owner_id=root.id)
        s_2_1 = create_stack(self.ctx, self.template, self.user_creds,
                             name='s_2_1', owner_id=s_2.id)
        s_2_1_1 = create_stack(self.ctx, self.template, self.user_creds,
                               name='s_2_1_1', owner_id=s_2_1.id)
        s_2_1_1_1 = create_stack(self.ctx, self.template, self.user_creds,
                                 name='s_2_1_1_1', owner_id=s_2_1_1.id)

        s_3 = create_stack(self.ctx, self.template, self.user_creds,
                           name='s_3', owner_id=root.id)
        s_4 = create_stack(self.ctx, self.template, self.user_creds,
                           name='s_4', owner_id=root.id)

        add_resources(root, 3, root.id)
        add_resources(s_1, 2, root.id)
        add_resources(s_1_1, 4, root.id)
        add_resources(s_1_2, 5, root.id)
        add_resources(s_1_3, 6, root.id)

        add_resources(s_2, 1, root.id)
        add_resources(s_2_1_1_1, 1, root.id)
        add_resources(s_3, 4, root.id)

        self.assertEqual(26, db_api.stack_count_total_resources(
            self.ctx, root.id))

        self.assertEqual(0, db_api.stack_count_total_resources(
            self.ctx, s_4.id))
        self.assertEqual(0, db_api.stack_count_total_resources(
            self.ctx, 'asdf'))
        self.assertEqual(0, db_api.stack_count_total_resources(
            self.ctx, None))


class DBAPIResourceTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIResourceTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = create_raw_template(self.ctx)
        self.user_creds = create_user_creds(self.ctx)
        self.stack = create_stack(self.ctx, self.template, self.user_creds)

    def test_resource_create(self):
        res = create_resource(self.ctx, self.stack)
        ret_res = db_api.resource_get(self.ctx, res.id)
        self.assertIsNotNone(ret_res)
        self.assertEqual('test_resource_name', ret_res.name)
        self.assertEqual(UUID1, ret_res.physical_resource_id)
        self.assertEqual('create', ret_res.action)
        self.assertEqual('complete', ret_res.status)
        self.assertEqual('create_complete', ret_res.status_reason)
        self.assertEqual('{"foo": "123"}', json.dumps(ret_res.rsrc_metadata))
        self.assertEqual(self.stack.id, ret_res.stack_id)

    def test_resource_get(self):
        res = create_resource(self.ctx, self.stack)
        ret_res = db_api.resource_get(self.ctx, res.id)
        self.assertIsNotNone(ret_res)

        self.assertRaises(exception.NotFound, db_api.resource_get,
                          self.ctx, UUID2)

    def test_resource_get_by_name_and_stack(self):
        create_resource(self.ctx, self.stack)

        ret_res = db_api.resource_get_by_name_and_stack(
            self.ctx, 'test_resource_name', self.stack.id)
        self.assertIsNotNone(ret_res)
        self.assertEqual('test_resource_name', ret_res.name)
        self.assertEqual(self.stack.id, ret_res.stack_id)

        self.assertIsNone(db_api.resource_get_by_name_and_stack(
            self.ctx, 'abc', self.stack.id))

    def test_resource_get_by_physical_resource_id(self):
        create_resource(self.ctx, self.stack)

        ret_res = db_api.resource_get_by_physical_resource_id(self.ctx,
                                                              UUID1)
        self.assertIsNotNone(ret_res)
        self.assertEqual(UUID1, ret_res.physical_resource_id)

        self.assertIsNone(
            db_api.resource_get_by_physical_resource_id(self.ctx, UUID2))

    def test_resource_get_all_by_physical_resource_id(self):
        create_resource(self.ctx, self.stack)
        create_resource(self.ctx, self.stack)

        ret_res = db_api.resource_get_all_by_physical_resource_id(self.ctx,
                                                                  UUID1)
        ret_list = list(ret_res)
        self.assertEqual(2, len(ret_list))
        for res in ret_list:
            self.assertEqual(UUID1, res.physical_resource_id)

        mt = db_api.resource_get_all_by_physical_resource_id(self.ctx,
                                                             UUID2)
        self.assertFalse(list(mt))

    def test_resource_get_all_by_with_admin_context(self):
        admin_ctx = utils.dummy_context(is_admin=True,
                                        tenant_id='admin_tenant')
        create_resource(self.ctx, self.stack, phys_res_id=UUID1)
        create_resource(self.ctx, self.stack, phys_res_id=UUID2)

        ret_res = db_api.resource_get_all_by_physical_resource_id(admin_ctx,
                                                                  UUID1)
        ret_list = list(ret_res)
        self.assertEqual(1, len(ret_list))
        self.assertEqual(UUID1, ret_list[0].physical_resource_id)

        mt = db_api.resource_get_all_by_physical_resource_id(admin_ctx,
                                                             UUID2)
        ret_list = list(mt)
        self.assertEqual(1, len(ret_list))
        self.assertEqual(UUID2, ret_list[0].physical_resource_id)

    def test_resource_get_all(self):
        values = [
            {'name': 'res1'},
            {'name': 'res2'},
            {'name': 'res3'},
        ]
        [create_resource(self.ctx, self.stack, False, **val)
         for val in values]

        resources = db_api.resource_get_all(self.ctx)
        self.assertEqual(3, len(resources))

        names = [resource.name for resource in resources]
        [self.assertIn(val['name'], names) for val in values]

    def test_resource_get_all_by_stack(self):
        self.stack1 = create_stack(self.ctx, self.template, self.user_creds)
        self.stack2 = create_stack(self.ctx, self.template, self.user_creds)
        values = [
            {'name': 'res1', 'stack_id': self.stack.id},
            {'name': 'res2', 'stack_id': self.stack.id},
            {'name': 'res3', 'stack_id': self.stack.id},
            {'name': 'res4', 'stack_id': self.stack1.id},
        ]
        [create_resource(self.ctx, self.stack, False, **val)
         for val in values]

        # Test for all resources in a stack
        resources = db_api.resource_get_all_by_stack(self.ctx, self.stack.id)
        self.assertEqual(3, len(resources))
        self.assertEqual('res1', resources.get('res1').name)
        self.assertEqual('res2', resources.get('res2').name)
        self.assertEqual('res3', resources.get('res3').name)

        # Test for resources matching single entry
        resources = db_api.resource_get_all_by_stack(self.ctx,
                                                     self.stack.id,
                                                     filters=dict(
                                                         name='res1'))
        self.assertEqual(1, len(resources))
        self.assertEqual('res1', resources.get('res1').name)

        # Test for resources matching multi entry
        resources = db_api.resource_get_all_by_stack(self.ctx,
                                                     self.stack.id,
                                                     filters=dict(name=[
                                                         'res1',
                                                         'res2'
                                                     ]))
        self.assertEqual(2, len(resources))
        self.assertEqual('res1', resources.get('res1').name)
        self.assertEqual('res2', resources.get('res2').name)

        self.assertEqual({}, db_api.resource_get_all_by_stack(
            self.ctx, self.stack2.id))

    def test_resource_get_all_active_by_stack(self):
        values = [
            {'name': 'res1', 'action': rsrc.Resource.DELETE,
             'status': rsrc.Resource.COMPLETE},
            {'name': 'res2', 'action': rsrc.Resource.DELETE,
             'status': rsrc.Resource.IN_PROGRESS},
            {'name': 'res3', 'action': rsrc.Resource.UPDATE,
             'status': rsrc.Resource.IN_PROGRESS},
            {'name': 'res4', 'action': rsrc.Resource.UPDATE,
             'status': rsrc.Resource.COMPLETE},
            {'name': 'res5', 'action': rsrc.Resource.INIT,
             'status': rsrc.Resource.COMPLETE},
            {'name': 'res6'},
        ]
        [create_resource(self.ctx, self.stack, **val) for val in values]

        resources = db_api.resource_get_all_active_by_stack(self.ctx,
                                                            self.stack.id)
        self.assertEqual(5, len(resources))
        for rsrc_id, res in resources.items():
            self.assertIn(res.name, ['res2', 'res3', 'res4', 'res5', 'res6'])

    def test_resource_get_all_by_root_stack(self):
        self.stack1 = create_stack(self.ctx, self.template, self.user_creds)
        self.stack2 = create_stack(self.ctx, self.template, self.user_creds)

        create_resource(self.ctx, self.stack, name='res1',
                        root_stack_id=self.stack.id)
        create_resource(self.ctx, self.stack, name='res2',
                        root_stack_id=self.stack.id)
        create_resource(self.ctx, self.stack, name='res3',
                        root_stack_id=self.stack.id)
        create_resource(self.ctx, self.stack1, name='res4',
                        root_stack_id=self.stack.id)

        # Test for all resources in a stack
        resources = db_api.resource_get_all_by_root_stack(
            self.ctx, self.stack.id)
        self.assertEqual(4, len(resources))
        resource_names = [r.name for r in resources.values()]
        self.assertEqual(['res1', 'res2', 'res3', 'res4'],
                         sorted(resource_names))

        # Test for resources matching single entry
        resources = db_api.resource_get_all_by_root_stack(
            self.ctx, self.stack.id, filters=dict(name='res1'))
        self.assertEqual(1, len(resources))
        resource_names = [r.name for r in resources.values()]
        self.assertEqual(['res1'], resource_names)
        self.assertEqual(1, len(resources))

        # Test for resources matching multi entry
        resources = db_api.resource_get_all_by_root_stack(
            self.ctx, self.stack.id, filters=dict(name=[
                'res1',
                'res2'
            ])
        )
        self.assertEqual(2, len(resources))
        resource_names = [r.name for r in resources.values()]
        self.assertEqual(['res1', 'res2'], sorted(resource_names))

        self.assertEqual({}, db_api.resource_get_all_by_root_stack(
            self.ctx, self.stack2.id))

    def test_resource_purge_deleted_by_stack(self):
        val = {'name': 'res1', 'action': rsrc.Resource.DELETE,
               'status': rsrc.Resource.COMPLETE}
        resource = create_resource(self.ctx, self.stack, **val)

        db_api.resource_purge_deleted(self.ctx, self.stack.id)
        self.assertRaises(exception.NotFound, db_api.resource_get,
                          self.ctx, resource.id)
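
    # Only resources that are IN_PROGRESS and carry an engine_id are
    # treated as locked; the expected result below is the set of engine
    # ids currently holding locks under the root stack.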
    def test_engine_get_all_locked_by_stack(self):
        values = [
            {'name': 'res1', 'action': rsrc.Resource.DELETE,
             'root_stack_id': self.stack.id,
             'status': rsrc.Resource.COMPLETE},
            {'name': 'res2', 'action': rsrc.Resource.DELETE,
             'root_stack_id': self.stack.id,
             'status': rsrc.Resource.IN_PROGRESS,
             'engine_id': 'engine-001'},
            {'name': 'res3', 'action': rsrc.Resource.UPDATE,
             'root_stack_id': self.stack.id,
             'status': rsrc.Resource.IN_PROGRESS,
             'engine_id': 'engine-002'},
            {'name': 'res4', 'action': rsrc.Resource.CREATE,
             'root_stack_id': self.stack.id,
             'status': rsrc.Resource.COMPLETE},
            {'name': 'res5', 'action': rsrc.Resource.INIT,
             'root_stack_id': self.stack.id,
             'status': rsrc.Resource.COMPLETE},
            {'name': 'res6', 'action': rsrc.Resource.CREATE,
             'root_stack_id': self.stack.id,
             'status': rsrc.Resource.IN_PROGRESS,
             'engine_id': 'engine-001'},
            {'name': 'res6'},
        ]
        for val in values:
            create_resource(self.ctx, self.stack, **val)

        engines = db_api.engine_get_all_locked_by_stack(self.ctx,
                                                        self.stack.id)
        self.assertEqual({'engine-001', 'engine-002'}, engines)


class DBAPIResourceReplacementTest(common.HeatTestCase):
    def setUp(self):
        self.useFixture(utils.ForeignKeyConstraintFixture())
        super(DBAPIResourceReplacementTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = create_raw_template(self.ctx)
        self.user_creds = create_user_creds(self.ctx)
        self.stack = create_stack(self.ctx, self.template, self.user_creds)

    def test_resource_create_replacement(self):
        orig = create_resource(self.ctx, self.stack)

        tmpl_id = create_raw_template(self.ctx).id
        repl = db_api.resource_create_replacement(
            self.ctx,
            orig.id,
            {'status_reason': 'test replacement'},
            {'name': orig.name, 'replaces': orig.id,
             'stack_id': orig.stack_id, 'current_template_id': tmpl_id},
            1, None)

        self.assertIsNotNone(repl)
        self.assertEqual(orig.name, repl.name)
        self.assertNotEqual(orig.id, repl.id)
        self.assertEqual(orig.id, repl.replaces)

    def test_resource_create_replacement_template_gone(self):
        orig = create_resource(self.ctx, self.stack)

        other_ctx = utils.dummy_context()
        tmpl_id = create_raw_template(self.ctx).id
        db_api.raw_template_delete(other_ctx, tmpl_id)

        repl = db_api.resource_create_replacement(
            self.ctx,
            orig.id,
            {'status_reason': 'test replacement'},
            {'name': orig.name, 'replaces': orig.id,
             'stack_id': orig.stack_id, 'current_template_id': tmpl_id},
            1, None)

        self.assertIsNone(repl)

    def test_resource_create_replacement_updated(self):
        orig = create_resource(self.ctx, self.stack)

        other_ctx = utils.dummy_context()
        tmpl_id = create_raw_template(self.ctx).id
        db_api.resource_update_and_save(other_ctx, orig.id,
                                        {'atomic_key': 2})

        self.assertRaises(exception.UpdateInProgress,
                          db_api.resource_create_replacement,
                          self.ctx,
                          orig.id,
                          {'status_reason': 'test replacement'},
                          {'name': orig.name, 'replaces': orig.id,
                           'stack_id': orig.stack_id,
                           'current_template_id': tmpl_id},
                          1, None)

    def test_resource_create_replacement_updated_concurrent(self):
        orig = create_resource(self.ctx, self.stack)

        other_ctx = utils.dummy_context()
        tmpl_id = create_raw_template(self.ctx).id

        def update_atomic_key(*args, **kwargs):
            db_api.resource_update_and_save(other_ctx, orig.id,
                                            {'atomic_key': 2})

        self.patchobject(db_api, 'resource_update',
                         new=mock.Mock(wraps=db_api.resource_update,
                                       side_effect=update_atomic_key))

        self.assertRaises(exception.UpdateInProgress,
                          db_api.resource_create_replacement,
                          self.ctx,
                          orig.id,
                          {'status_reason': 'test replacement'},
                          {'name': orig.name, 'replaces': orig.id,
                           'stack_id': orig.stack_id,
                           'current_template_id': tmpl_id},
                          1, None)

    def test_resource_create_replacement_locked(self):
        orig = create_resource(self.ctx, self.stack)

        other_ctx = utils.dummy_context()
        tmpl_id = create_raw_template(self.ctx).id
        db_api.resource_update_and_save(other_ctx, orig.id,
                                        {'engine_id': 'a',
                                         'atomic_key': 2})

        self.assertRaises(exception.UpdateInProgress,
                          db_api.resource_create_replacement,
                          self.ctx,
                          orig.id,
                          {'status_reason': 'test replacement'},
                          {'name': orig.name, 'replaces': orig.id,
                           'stack_id': orig.stack_id,
                           'current_template_id': tmpl_id},
                          1, None)

    def test_resource_create_replacement_locked_concurrent(self):
        orig = create_resource(self.ctx, self.stack)

        other_ctx = utils.dummy_context()
        tmpl_id = create_raw_template(self.ctx).id

        def lock_resource(*args, **kwargs):
            db_api.resource_update_and_save(other_ctx, orig.id,
                                            {'engine_id': 'a',
                                             'atomic_key': 2})

        self.patchobject(db_api, 'resource_update',
                         new=mock.Mock(wraps=db_api.resource_update,
                                       side_effect=lock_resource))

        self.assertRaises(exception.UpdateInProgress,
                          db_api.resource_create_replacement,
                          self.ctx,
                          orig.id,
                          {'status_reason': 'test replacement'},
                          {'name': orig.name, 'replaces': orig.id,
                           'stack_id': orig.stack_id,
                           'current_template_id': tmpl_id},
                          1, None)


class DBAPIStackLockTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIStackLockTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = create_raw_template(self.ctx)
        self.user_creds = create_user_creds(self.ctx)
        self.stack = create_stack(self.ctx, self.template, self.user_creds)

    def test_stack_lock_create_success(self):
        observed = db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)
        self.assertIsNone(observed)

    def test_stack_lock_create_fail_double_same(self):
        db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)
        observed = db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)
        self.assertEqual(UUID1, observed)

    def test_stack_lock_create_fail_double_different(self):
        db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)
        observed = db_api.stack_lock_create(self.ctx, self.stack.id, UUID2)
        self.assertEqual(UUID1, observed)

    def test_stack_lock_get_id_success(self):
        db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)
        observed = db_api.stack_lock_get_engine_id(self.ctx, self.stack.id)
        self.assertEqual(UUID1, observed)

    def test_stack_lock_get_id_return_none(self):
        observed = db_api.stack_lock_get_engine_id(self.ctx, self.stack.id)
        self.assertIsNone(observed)

    def test_stack_lock_steal_success(self):
        db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)
        observed = db_api.stack_lock_steal(self.ctx, self.stack.id,
                                           UUID1, UUID2)
        self.assertIsNone(observed)

    def test_stack_lock_steal_fail_gone(self):
        db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)
        db_api.stack_lock_release(self.ctx, self.stack.id, UUID1)
        observed = db_api.stack_lock_steal(self.ctx, self.stack.id,
                                           UUID1, UUID2)
        self.assertTrue(observed)

    def test_stack_lock_steal_fail_stolen(self):
        db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)

        # Simulate stolen lock
        db_api.stack_lock_release(self.ctx, self.stack.id, UUID1)
        db_api.stack_lock_create(self.ctx, self.stack.id, UUID2)

        observed = db_api.stack_lock_steal(self.ctx, self.stack.id,
                                           UUID3, UUID2)
        self.assertEqual(UUID2, observed)

    def test_stack_lock_release_success(self):
        db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)
        observed = db_api.stack_lock_release(self.ctx, self.stack.id, UUID1)
        self.assertIsNone(observed)

    def test_stack_lock_release_fail_double(self):
        db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)
        db_api.stack_lock_release(self.ctx, self.stack.id, UUID1)
        observed = db_api.stack_lock_release(self.ctx, self.stack.id, UUID1)
        self.assertTrue(observed)

    def test_stack_lock_release_fail_wrong_engine_id(self):
        db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)
        observed = db_api.stack_lock_release(self.ctx, self.stack.id, UUID2)
        self.assertTrue(observed)
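
    # The DB API retries stack_lock_create() when the database reports a
    # deadlock; the call_count assertion below implies four attempts in
    # total before the DBDeadlock is finally propagated.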
    @mock.patch.object(time, 'sleep')
    def test_stack_lock_retry_on_deadlock(self, sleep):
        with mock.patch('sqlalchemy.orm.Session.add',
                        side_effect=db_exception.DBDeadlock) as mock_add:
            self.assertRaises(db_exception.DBDeadlock,
                              db_api.stack_lock_create,
                              self.ctx, self.stack.id, UUID1)
            self.assertEqual(4, mock_add.call_count)


class DBAPIResourceDataTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIResourceDataTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = create_raw_template(self.ctx)
        self.user_creds = create_user_creds(self.ctx)
        self.stack = create_stack(self.ctx, self.template, self.user_creds)
        self.resource = create_resource(self.ctx, self.stack)
        self.resource.context = self.ctx

    def test_resource_data_set_get(self):
        create_resource_data(self.ctx, self.resource)
        val = db_api.resource_data_get(
            self.ctx, self.resource.id, 'test_resource_key')
        self.assertEqual('test_value', val)

        # Updating existing resource data
        create_resource_data(self.ctx, self.resource, value='foo')
        val = db_api.resource_data_get(
            self.ctx, self.resource.id, 'test_resource_key')
        self.assertEqual('foo', val)

        # Testing with encrypted value
        create_resource_data(self.ctx, self.resource,
                             key='encrypted_resource_key', redact=True)
        val = db_api.resource_data_get(
            self.ctx, self.resource.id, 'encrypted_resource_key')
        self.assertEqual('test_value', val)

        # get all by querying for data
        vals = db_api.resource_data_get_all(self.resource.context,
                                            self.resource.id)
        self.assertEqual(2, len(vals))
        self.assertEqual('foo', vals.get('test_resource_key'))
        self.assertEqual('test_value', vals.get('encrypted_resource_key'))

        # get all by using associated resource data
        self.resource = db_api.resource_get(self.ctx, self.resource.id)
        vals = db_api.resource_data_get_all(self.ctx, None,
                                            self.resource.data)
        self.assertEqual(2, len(vals))
        self.assertEqual('foo', vals.get('test_resource_key'))
        self.assertEqual('test_value', vals.get('encrypted_resource_key'))

    def test_resource_data_delete(self):
        create_resource_data(self.ctx, self.resource)
        res_data = db_api.resource_data_get_by_key(self.ctx,
                                                   self.resource.id,
                                                   'test_resource_key')
        self.assertIsNotNone(res_data)
        self.assertEqual('test_value', res_data.value)

        db_api.resource_data_delete(self.ctx, self.resource.id,
                                    'test_resource_key')
        self.assertRaises(exception.NotFound,
                          db_api.resource_data_get_by_key,
                          self.ctx, self.resource.id, 'test_resource_key')
        self.assertIsNotNone(res_data)

        self.assertRaises(exception.NotFound,
                          db_api.resource_data_get_all,
                          self.resource.context, self.resource.id)


class DBAPIEventTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIEventTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = create_raw_template(self.ctx)
        self.user_creds = create_user_creds(self.ctx)

    def test_event_create(self):
        stack = create_stack(self.ctx, self.template, self.user_creds)
        event = create_event(self.ctx, stack_id=stack.id)
        ret_event = self.ctx.session.query(models.Event).get(event.id)
        self.assertIsNotNone(ret_event)
        self.assertEqual(stack.id, ret_event.stack_id)
        self.assertEqual('create', ret_event.resource_action)
        self.assertEqual('complete', ret_event.resource_status)
        self.assertEqual('res', ret_event.resource_name)
        self.assertEqual(UUID1, ret_event.physical_resource_id)
        self.assertEqual('create_complete',
                         ret_event.resource_status_reason)
        self.assertEqual({'foo2': 'ev_bar'}, ret_event.rsrc_prop_data.data)

    def test_event_get_all_by_tenant(self):
        self.stack1 = create_stack(self.ctx, self.template, self.user_creds,
                                   tenant='tenant1')
        self.stack2 = create_stack(self.ctx, self.template, self.user_creds,
                                   tenant='tenant2')
        values = [
            {'stack_id': self.stack1.id, 'resource_name': 'res1'},
            {'stack_id': self.stack1.id, 'resource_name': 'res2'},
            {'stack_id': self.stack2.id, 'resource_name': 'res3'},
        ]
        [create_event(self.ctx, **val) for val in values]

        self.ctx.tenant = 'tenant1'
        events = db_api.event_get_all_by_tenant(self.ctx)
        self.assertEqual(2, len(events))

        marker = events[0].uuid
        expected = events[1].uuid
        events = db_api.event_get_all_by_tenant(self.ctx, marker=marker)
        self.assertEqual(1, len(events))
        self.assertEqual(expected, events[0].uuid)

        events = db_api.event_get_all_by_tenant(self.ctx, limit=1)
        self.assertEqual(1, len(events))

        filters = {'resource_name': 'res2'}
        events = db_api.event_get_all_by_tenant(self.ctx, filters=filters)
        self.assertEqual(1, len(events))
        self.assertEqual('res2', events[0].resource_name)

        sort_keys = 'resource_type'
        events = db_api.event_get_all_by_tenant(self.ctx,
                                                sort_keys=sort_keys)
        self.assertEqual(2, len(events))

        self.ctx.tenant = 'tenant2'
        events = db_api.event_get_all_by_tenant(self.ctx)
        self.assertEqual(1, len(events))

    def test_event_get_all_by_stack(self):
        self.stack1 = create_stack(self.ctx, self.template, self.user_creds)
        self.stack2 = create_stack(self.ctx, self.template, self.user_creds)
        values = [
            {'stack_id': self.stack1.id, 'resource_name': 'res1'},
            {'stack_id': self.stack1.id, 'resource_name': 'res2'},
            {'stack_id': self.stack2.id, 'resource_name': 'res3'},
        ]
        [create_event(self.ctx, **val) for val in values]

        self.ctx.tenant = 'tenant1'
        events = db_api.event_get_all_by_stack(self.ctx, self.stack1.id)
        self.assertEqual(2, len(events))

        self.ctx.tenant = 'tenant2'
        events = db_api.event_get_all_by_stack(self.ctx, self.stack2.id)
        self.assertEqual(1, len(events))

    def test_event_count_all_by_stack(self):
        self.stack1 = create_stack(self.ctx, self.template, self.user_creds)
        self.stack2 = create_stack(self.ctx, self.template, self.user_creds)
        values = [
            {'stack_id': self.stack1.id, 'resource_name': 'res1'},
            {'stack_id': self.stack1.id, 'resource_name': 'res2'},
            {'stack_id': self.stack2.id, 'resource_name': 'res3'},
        ]
        [create_event(self.ctx, **val) for val in values]

        self.assertEqual(2, db_api.event_count_all_by_stack(self.ctx,
                                                            self.stack1.id))
        self.assertEqual(1, db_api.event_count_all_by_stack(self.ctx,
                                                            self.stack2.id))


class DBAPIWatchRuleTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIWatchRuleTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = create_raw_template(self.ctx)
        self.user_creds = create_user_creds(self.ctx)
        self.stack = create_stack(self.ctx, self.template, self.user_creds)

    def test_watch_rule_create_get(self):
        watch_rule = create_watch_rule(self.ctx, self.stack)
        ret_wr = db_api.watch_rule_get(self.ctx, watch_rule.id)
        self.assertIsNotNone(ret_wr)
        self.assertEqual('test_rule', ret_wr.name)
        self.assertEqual('{"foo": "123"}', json.dumps(ret_wr.rule))
        self.assertEqual('normal', ret_wr.state)
        self.assertEqual(self.stack.id, ret_wr.stack_id)

    def test_watch_rule_get_by_name(self):
        watch_rule = create_watch_rule(self.ctx, self.stack)
        ret_wr = db_api.watch_rule_get_by_name(self.ctx, watch_rule.name)
        self.assertIsNotNone(ret_wr)
        self.assertEqual('test_rule', ret_wr.name)

    def test_watch_rule_get_all(self):
        values = [
            {'name': 'rule1'},
            {'name': 'rule2'},
            {'name': 'rule3'},
        ]
        [create_watch_rule(self.ctx, self.stack, **val) for val in values]

        wrs = db_api.watch_rule_get_all(self.ctx)
        self.assertEqual(3, len(wrs))

        names = [wr.name for wr in wrs]
        [self.assertIn(val['name'], names) for val in values]

    def test_watch_rule_get_all_by_stack(self):
        self.stack1 = create_stack(self.ctx, self.template, self.user_creds)

        values = [
            {'name': 'rule1', 'stack_id': self.stack.id},
            {'name': 'rule2', 'stack_id': self.stack1.id},
            {'name': 'rule3', 'stack_id': self.stack1.id},
        ]
        [create_watch_rule(self.ctx, self.stack, **val) for val in values]

        wrs = db_api.watch_rule_get_all_by_stack(self.ctx, self.stack.id)
        self.assertEqual(1, len(wrs))
        wrs = db_api.watch_rule_get_all_by_stack(self.ctx, self.stack1.id)
        self.assertEqual(2, len(wrs))

    def test_watch_rule_update(self):
        watch_rule = create_watch_rule(self.ctx, self.stack)
        values = {
            'name': 'test_rule_1',
            'rule': json.loads('{"foo": "bar"}'),
            'state': 'nodata',
        }
        db_api.watch_rule_update(self.ctx, watch_rule.id, values)
        watch_rule = db_api.watch_rule_get(self.ctx, watch_rule.id)
        self.assertEqual('test_rule_1', watch_rule.name)
        self.assertEqual('{"foo": "bar"}', json.dumps(watch_rule.rule))
        self.assertEqual('nodata', watch_rule.state)

        self.assertRaises(exception.NotFound, db_api.watch_rule_update,
                          self.ctx, UUID2, values)

    def test_watch_rule_delete(self):
        watch_rule = create_watch_rule(self.ctx, self.stack)
        create_watch_data(self.ctx, watch_rule)
        db_api.watch_rule_delete(self.ctx, watch_rule.id)
        self.assertIsNone(db_api.watch_rule_get(self.ctx, watch_rule.id))
        self.assertRaises(exception.NotFound, db_api.watch_rule_delete,
                          self.ctx, UUID2)

        # Testing associated watch data deletion
        self.assertEqual([], db_api.watch_data_get_all(self.ctx))


class DBAPIWatchDataTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIWatchDataTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = create_raw_template(self.ctx)
        self.user_creds = create_user_creds(self.ctx)
        self.stack = create_stack(self.ctx, self.template, self.user_creds)
        self.watch_rule = create_watch_rule(self.ctx, self.stack)

    def test_watch_data_create(self):
        create_watch_data(self.ctx, self.watch_rule)
        ret_data = db_api.watch_data_get_all(self.ctx)
        self.assertEqual(1, len(ret_data))

        self.assertEqual('{"foo": "bar"}', json.dumps(ret_data[0].data))
        self.assertEqual(self.watch_rule.id, ret_data[0].watch_rule_id)

    def test_watch_data_get_all(self):
        values = [
            {'data': json.loads('{"foo": "d1"}')},
            {'data': json.loads('{"foo": "d2"}')},
            {'data': json.loads('{"foo": "d3"}')}
        ]
        [create_watch_data(self.ctx, self.watch_rule, **val)
         for val in values]

        watch_data = db_api.watch_data_get_all(self.ctx)
        self.assertEqual(3, len(watch_data))

        data = [wd.data for wd in watch_data]
        [self.assertIn(val['data'], data) for val in values]


class DBAPIServiceTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIServiceTest, self).setUp()
        self.ctx = utils.dummy_context()

    def test_service_create_get(self):
        service = create_service(self.ctx)
        ret_service = db_api.service_get(self.ctx, service.id)
        self.assertIsNotNone(ret_service)
        self.assertEqual(service.id, ret_service.id)
        self.assertEqual(service.hostname, ret_service.hostname)
        self.assertEqual(service.binary, ret_service.binary)
        self.assertEqual(service.host, ret_service.host)
        self.assertEqual(service.topic, ret_service.topic)
        self.assertEqual(service.engine_id, ret_service.engine_id)
        self.assertEqual(service.report_interval,
                         ret_service.report_interval)
        self.assertIsNotNone(service.created_at)
        self.assertIsNone(service.updated_at)
        self.assertIsNone(service.deleted_at)

    def test_service_get_all_by_args(self):
        # Host-1
        values = [{'id': str(uuid.uuid4()),
                   'hostname': 'host-1',
                   'host': 'engine-1'}]
        # Host-2
        for i in [0, 1, 2]:
            values.append({'id': str(uuid.uuid4()),
                           'hostname': 'host-2',
                           'host': 'engine-%s' % i})
        [create_service(self.ctx, **val) for val in values]

        services = db_api.service_get_all(self.ctx)
        self.assertEqual(4, len(services))

        services_by_args = db_api.service_get_all_by_args(
            self.ctx, hostname='host-2', binary='heat-engine',
            host='engine-0')
        self.assertEqual(1, len(services_by_args))
        self.assertEqual('host-2', services_by_args[0].hostname)
        self.assertEqual('heat-engine', services_by_args[0].binary)
        self.assertEqual('engine-0', services_by_args[0].host)

    def test_service_update(self):
        service = create_service(self.ctx)
        values = {'hostname': 'host-updated',
                  'host': 'engine-updated',
                  'retry_interval': 120}
        service = db_api.service_update(self.ctx, service.id, values)
        self.assertEqual('host-updated', service.hostname)
        self.assertEqual(120, service.retry_interval)
        self.assertEqual('engine-updated', service.host)

        # simple update; the updated_at timestamp is expected to change
        old_updated_date = service.updated_at
        service = db_api.service_update(self.ctx, service.id, dict())
        self.assertGreater(service.updated_at, old_updated_date)

    def test_service_delete_soft_delete(self):
        service = create_service(self.ctx)

        # Soft delete
        db_api.service_delete(self.ctx, service.id)
        ret_service = db_api.service_get(self.ctx, service.id)
        self.assertEqual(ret_service.id, service.id)

        # Delete
        db_api.service_delete(self.ctx, service.id, False)
        ex = self.assertRaises(exception.EntityNotFound,
                               db_api.service_get, self.ctx, service.id)
        self.assertEqual('Service', ex.kwargs.get('entity'))


class DBAPIResourceUpdateTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIResourceUpdateTest, self).setUp()
        self.ctx = utils.dummy_context()
        template = create_raw_template(self.ctx)
        user_creds = create_user_creds(self.ctx)
        stack = create_stack(self.ctx, template, user_creds)
        self.resource = create_resource(self.ctx, stack, False,
                                        atomic_key=0)

    def test_unlocked_resource_update(self):
        values = {'engine_id': 'engine-1',
                  'action': 'CREATE',
                  'status': 'IN_PROGRESS'}
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        ret = db_api.resource_update(self.ctx, self.resource.id,
                                     values, db_res.atomic_key, None)
        self.assertTrue(ret)
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        self.assertEqual('engine-1', db_res.engine_id)
        self.assertEqual('CREATE', db_res.action)
        self.assertEqual('IN_PROGRESS', db_res.status)
        self.assertEqual(1, db_res.atomic_key)

    def test_locked_resource_update_by_same_engine(self):
        values = {'engine_id': 'engine-1',
                  'action': 'CREATE',
                  'status': 'IN_PROGRESS'}
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        ret = db_api.resource_update(self.ctx, self.resource.id,
                                     values, db_res.atomic_key, None)
        self.assertTrue(ret)
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        self.assertEqual('engine-1', db_res.engine_id)
        self.assertEqual(1, db_res.atomic_key)
        values = {'engine_id': 'engine-1',
                  'action': 'CREATE',
                  'status': 'FAILED'}
        ret = db_api.resource_update(self.ctx, self.resource.id,
                                     values, db_res.atomic_key, 'engine-1')
        self.assertTrue(ret)
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        self.assertEqual('engine-1', db_res.engine_id)
        self.assertEqual('CREATE', db_res.action)
        self.assertEqual('FAILED', db_res.status)
        self.assertEqual(2, db_res.atomic_key)
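
    # The following tests exercise the optimistic-locking protocol of
    # resource_update(): an update only succeeds when the caller supplies
    # the current atomic_key and either the resource is unlocked or the
    # expected engine id matches, so a competing engine's update returns
    # False.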
    def test_locked_resource_update_by_other_engine(self):
        values = {'engine_id': 'engine-1',
                  'action': 'CREATE',
                  'status': 'IN_PROGRESS'}
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        ret = db_api.resource_update(self.ctx, self.resource.id,
                                     values, db_res.atomic_key, None)
        self.assertTrue(ret)
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        self.assertEqual('engine-1', db_res.engine_id)
        self.assertEqual(1, db_res.atomic_key)
        values = {'engine_id': 'engine-2',
                  'action': 'CREATE',
                  'status': 'FAILED'}
        ret = db_api.resource_update(self.ctx, self.resource.id,
                                     values, db_res.atomic_key, 'engine-2')
        self.assertFalse(ret)

    def test_release_resource_lock(self):
        values = {'engine_id': 'engine-1',
                  'action': 'CREATE',
                  'status': 'IN_PROGRESS'}
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        ret = db_api.resource_update(self.ctx, self.resource.id,
                                     values, db_res.atomic_key, None)
        self.assertTrue(ret)
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        self.assertEqual('engine-1', db_res.engine_id)
        self.assertEqual(1, db_res.atomic_key)
        # Set engine id as None to release the lock
        values = {'engine_id': None,
                  'action': 'CREATE',
                  'status': 'COMPLETE'}
        ret = db_api.resource_update(self.ctx, self.resource.id,
                                     values, db_res.atomic_key, 'engine-1')
        self.assertTrue(ret)
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        self.assertIsNone(db_res.engine_id)
        self.assertEqual('CREATE', db_res.action)
        self.assertEqual('COMPLETE', db_res.status)
        self.assertEqual(2, db_res.atomic_key)

    def test_steal_resource_lock(self):
        values = {'engine_id': 'engine-1',
                  'action': 'CREATE',
                  'status': 'IN_PROGRESS'}
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        ret = db_api.resource_update(self.ctx, self.resource.id,
                                     values, db_res.atomic_key, None)
        self.assertTrue(ret)
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        self.assertEqual('engine-1', db_res.engine_id)
        self.assertEqual(1, db_res.atomic_key)
        # Set the engine id to engine-2 and pass the expected engine id
        # (i.e. engine-1) as the old engine, so that the DB API steals
        # the lock
        values = {'engine_id': 'engine-2',
                  'action': 'DELETE',
                  'status': 'IN_PROGRESS'}
        ret = db_api.resource_update(self.ctx, self.resource.id,
                                     values, db_res.atomic_key, 'engine-1')
        self.assertTrue(ret)
        db_res = db_api.resource_get(self.ctx, self.resource.id)
        self.assertEqual('engine-2', db_res.engine_id)
        self.assertEqual('DELETE', db_res.action)
        self.assertEqual(2, db_res.atomic_key)


class DBAPISyncPointTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPISyncPointTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = create_raw_template(self.ctx)
        self.user_creds = create_user_creds(self.ctx)
        self.stack = create_stack(self.ctx, self.template, self.user_creds)
        self.resources = [create_resource(self.ctx, self.stack,
                                          name='res1'),
                          create_resource(self.ctx, self.stack,
                                          name='res2'),
                          create_resource(self.ctx, self.stack,
                                          name='res3')]

    def test_sync_point_create_get(self):
        for res in self.resources:
            # create sync_point for resources and verify
            sync_point_rsrc = create_sync_point(
                self.ctx, entity_id=str(res.id), stack_id=self.stack.id,
                traversal_id=self.stack.current_traversal
            )
            ret_sync_point_rsrc = db_api.sync_point_get(
                self.ctx, sync_point_rsrc.entity_id,
                sync_point_rsrc.traversal_id, sync_point_rsrc.is_update
            )
            self.assertIsNotNone(ret_sync_point_rsrc)
            self.assertEqual(sync_point_rsrc.entity_id,
                             ret_sync_point_rsrc.entity_id)
            self.assertEqual(sync_point_rsrc.traversal_id,
                             ret_sync_point_rsrc.traversal_id)
            self.assertEqual(sync_point_rsrc.is_update,
                             ret_sync_point_rsrc.is_update)
            self.assertEqual(sync_point_rsrc.atomic_key,
                             ret_sync_point_rsrc.atomic_key)
            self.assertEqual(sync_point_rsrc.stack_id,
                             ret_sync_point_rsrc.stack_id)
            self.assertEqual(sync_point_rsrc.input_data,
                             ret_sync_point_rsrc.input_data)

        # Finally create sync_point for stack and verify
        sync_point_stack = create_sync_point(
            self.ctx, entity_id=self.stack.id, stack_id=self.stack.id,
            traversal_id=self.stack.current_traversal
        )
        ret_sync_point_stack = db_api.sync_point_get(
            self.ctx, sync_point_stack.entity_id,
            sync_point_stack.traversal_id, sync_point_stack.is_update
        )
        self.assertIsNotNone(ret_sync_point_stack)
        self.assertEqual(sync_point_stack.entity_id,
                         ret_sync_point_stack.entity_id)
        self.assertEqual(sync_point_stack.traversal_id,
                         ret_sync_point_stack.traversal_id)
        self.assertEqual(sync_point_stack.is_update,
                         ret_sync_point_stack.is_update)
        self.assertEqual(sync_point_stack.atomic_key,
                         ret_sync_point_stack.atomic_key)
        self.assertEqual(sync_point_stack.stack_id,
                         ret_sync_point_stack.stack_id)
        self.assertEqual(sync_point_stack.input_data,
                         ret_sync_point_stack.input_data)

    def test_sync_point_update(self):
        sync_point = create_sync_point(
            self.ctx, entity_id=str(self.resources[0].id),
            stack_id=self.stack.id,
            traversal_id=self.stack.current_traversal
        )
        self.assertEqual({}, sync_point.input_data)
        self.assertEqual(0, sync_point.atomic_key)

        # first update
        rows_updated = db_api.sync_point_update_input_data(
            self.ctx, sync_point.entity_id, sync_point.traversal_id,
            sync_point.is_update, sync_point.atomic_key,
            {'input_data': '{key: value}'}
        )
        self.assertEqual(1, rows_updated)

        ret_sync_point = db_api.sync_point_get(self.ctx,
                                               sync_point.entity_id,
                                               sync_point.traversal_id,
                                               sync_point.is_update)
        self.assertIsNotNone(ret_sync_point)
        # check if atomic_key was incremented on write
        self.assertEqual(1, ret_sync_point.atomic_key)
        self.assertEqual({'input_data': '{key: value}'},
                         ret_sync_point.input_data)

        # second update
        rows_updated = db_api.sync_point_update_input_data(
            self.ctx, ret_sync_point.entity_id,
            ret_sync_point.traversal_id, ret_sync_point.is_update,
            ret_sync_point.atomic_key, {'input_data': '{key1: value1}'}
        )
        self.assertEqual(1, rows_updated)

        ret_sync_point = db_api.sync_point_get(self.ctx,
                                               sync_point.entity_id,
                                               sync_point.traversal_id,
                                               sync_point.is_update)
        self.assertIsNotNone(ret_sync_point)
        # check if atomic_key was incremented on write
        self.assertEqual(2, ret_sync_point.atomic_key)
        self.assertEqual({'input_data': '{key1: value1}'},
                         ret_sync_point.input_data)
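
    # The atomic_key works as a compare-and-swap guard: an update that
    # supplies a stale key matches zero rows, which the concurrency test
    # below relies on.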
    def test_sync_point_concurrent_update(self):
        sync_point = create_sync_point(
            self.ctx, entity_id=str(self.resources[0].id),
            stack_id=self.stack.id,
            traversal_id=self.stack.current_traversal
        )
        self.assertEqual({}, sync_point.input_data)
        self.assertEqual(0, sync_point.atomic_key)

        # update where atomic_key is 0 and succeeds.
        rows_updated = db_api.sync_point_update_input_data(
            self.ctx, sync_point.entity_id, sync_point.traversal_id,
            sync_point.is_update, 0, {'input_data': '{key: value}'}
        )
        self.assertEqual(1, rows_updated)

        # another update where atomic_key is 0 and does not update.
        rows_updated = db_api.sync_point_update_input_data(
            self.ctx, sync_point.entity_id, sync_point.traversal_id,
            sync_point.is_update, 0, {'input_data': '{key: value}'}
        )
        self.assertEqual(0, rows_updated)

    def test_sync_point_delete(self):
        for res in self.resources:
            sync_point_rsrc = create_sync_point(
                self.ctx, entity_id=str(res.id), stack_id=self.stack.id,
                traversal_id=self.stack.current_traversal
            )
            self.assertIsNotNone(sync_point_rsrc)

        sync_point_stack = create_sync_point(
            self.ctx, entity_id=self.stack.id,
            stack_id=self.stack.id,
            traversal_id=self.stack.current_traversal
        )
        self.assertIsNotNone(sync_point_stack)

        rows_deleted = db_api.sync_point_delete_all_by_stack_and_traversal(
            self.ctx, self.stack.id,
            self.stack.current_traversal
        )
        self.assertGreater(rows_deleted, 0)
        self.assertEqual(4, rows_deleted)

        # Additionally check if sync_point_get returns None.
        for res in self.resources:
            ret_sync_point_rsrc = db_api.sync_point_get(
                self.ctx, str(res.id), self.stack.current_traversal, True
            )
            self.assertIsNone(ret_sync_point_rsrc)

        ret_sync_point_stack = db_api.sync_point_get(
            self.ctx, self.stack.id, self.stack.current_traversal, True
        )
        self.assertIsNone(ret_sync_point_stack)

    @mock.patch.object(time, 'sleep')
    def test_syncpoint_create_deadlock(self, sleep):
        with mock.patch('sqlalchemy.orm.Session.add',
                        side_effect=db_exception.DBDeadlock) as add:
            for res in self.resources:
                self.assertRaises(db_exception.DBDeadlock,
                                  create_sync_point,
                                  self.ctx, entity_id=str(res.id),
                                  stack_id=self.stack.id,
                                  traversal_id=self.stack.current_traversal)
        self.assertEqual(len(self.resources) * 4, add.call_count)


class DBAPIMigratePropertiesDataTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPIMigratePropertiesDataTest, self).setUp()
        self.ctx = utils.dummy_context()
        templ = create_raw_template(self.ctx)
        user_creds = create_user_creds(self.ctx)
        stack = create_stack(self.ctx, templ, user_creds)
        stack2 = create_stack(self.ctx, templ, user_creds)

        create_resource(self.ctx, stack, True, name='res1')
        create_resource(self.ctx, stack2, True, name='res2')
        create_event(self.ctx, True)
        create_event(self.ctx, True)

    def _test_migrate_resource(self, batch_size=50):
        resources = self.ctx.session.query(models.Resource).all()
        self.assertEqual(2, len(resources))
        for resource in resources:
            self.assertEqual('bar1', resource.properties_data['foo1'])

        db_api.db_properties_data_migrate(self.ctx, batch_size=batch_size)
        for resource in resources:
            self.assertEqual('bar1', resource.rsrc_prop_data.data['foo1'])
            self.assertFalse(resource.rsrc_prop_data.encrypted)
            self.assertIsNone(resource.properties_data)
            self.assertIsNone(resource.properties_data_encrypted)

    def _test_migrate_event(self, batch_size=50):
        events = self.ctx.session.query(models.Event).all()
        self.assertEqual(2, len(events))
        for event in events:
            self.assertEqual('ev_bar', event.resource_properties['foo2'])

        db_api.db_properties_data_migrate(self.ctx, batch_size=batch_size)
        self.ctx.session.expire_all()
        events = self.ctx.session.query(models.Event).all()
        for event in events:
            self.assertEqual('ev_bar', event.rsrc_prop_data.data['foo2'])
            self.assertFalse(event.rsrc_prop_data.encrypted)
            self.assertIsNone(event.resource_properties)

    def test_migrate_event(self):
        self._test_migrate_event()

    def test_migrate_event_in_batches(self):
        self._test_migrate_event(batch_size=1)

    def test_migrate_resource(self):
        self._test_migrate_resource()

    def test_migrate_resource_in_batches(self):
        self._test_migrate_resource(batch_size=1)

    def test_migrate_encrypted_resource(self):
        resources = self.ctx.session.query(models.Resource).all()
        db_api.db_encrypt_parameters_and_properties(
            self.ctx, 'i have a key for you if you want')

        encrypted_data_pre_migration = resources[0].properties_data['foo1'][1]
        db_api.db_properties_data_migrate(self.ctx)
        resources = self.ctx.session.query(models.Resource).all()

        self.assertTrue(resources[0].rsrc_prop_data.encrypted)
        self.assertIsNone(resources[0].properties_data)
        self.assertIsNone(resources[0].properties_data_encrypted)
        self.assertEqual('cryptography_decrypt_v1',
                         resources[0].rsrc_prop_data.data['foo1'][0])
        self.assertEqual(encrypted_data_pre_migration,
                         resources[0].rsrc_prop_data.data['foo1'][1])

        db_api.db_decrypt_parameters_and_properties(
            self.ctx, 'i have a key for you if you want')
        self.ctx.session.expire_all()
        resources = self.ctx.session.query(models.Resource).all()

        self.assertEqual('bar1', resources[0].rsrc_prop_data.data['foo1'])
        self.assertFalse(resources[0].rsrc_prop_data.encrypted)
        self.assertIsNone(resources[0].properties_data)
        self.assertIsNone(resources[0].properties_data_encrypted)


class DBAPICryptParamsPropsTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPICryptParamsPropsTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = self._create_template()
        self.user_creds = create_user_creds(self.ctx)
        self.stack = create_stack(self.ctx, self.template, self.user_creds)
        self.resources = [create_resource(self.ctx, self.stack,
                                          name='res1')]

    hidden_params_dict = {
        'param2': 'bar',
        'param_number': '456',
        'param_boolean': '1',
        'param_map': '{\"test\":\"json\"}',
        'param_comma_list': '[\"Hola\", \"Senor\"]'}
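
    # hidden_params_dict maps each hidden template parameter to the
    # plaintext value supplied in the environment; the encrypt/decrypt
    # helpers below use it to check that every hidden parameter is
    # encrypted and then round-trips back to its original value.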

class DBAPICryptParamsPropsTest(common.HeatTestCase):
    def setUp(self):
        super(DBAPICryptParamsPropsTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = self._create_template()
        self.user_creds = create_user_creds(self.ctx)
        self.stack = create_stack(self.ctx, self.template, self.user_creds)
        self.resources = [create_resource(self.ctx, self.stack, name='res1')]

    hidden_params_dict = {
        'param2': 'bar',
        'param_number': '456',
        'param_boolean': '1',
        'param_map': '{\"test\":\"json\"}',
        'param_comma_list': '[\"Hola\", \"Senor\"]'}

    def _create_template(self):
        """Initialize sample template."""
        self.t = template_format.parse('''
heat_template_version: 2013-05-23
parameters:
    param1:
        type: string
        description: value1.
    param2:
        type: string
        description: value2.
        hidden: true
    param3:
        type: string
        description: value3
        hidden: true
        default: "don't encrypt me!
                 I'm not sensitive enough"
    param_string_default_int:
        type: string
        description: String parameter with integer default value
        default: 4353
        hidden: true
    param_number:
        type: number
        description: Number parameter
        default: 4353
        hidden: true
    param_boolean:
        type: boolean
        description: boolean parameter
        default: true
        hidden: true
    param_map:
        type: json
        description: json parameter
        default: {"fee": {"fi":"fo"}}
        hidden: true
    param_comma_list:
        type: comma_delimited_list
        description: cdl parameter
        default: ["hola", "senorita"]
        hidden: true
resources:
    a_resource:
        type: GenericResourceType
''')
        template = {
            'template': self.t,
            'files': {'foo': 'bar'},
            'environment': {
                'parameters': {
                    'param1': 'foo',
                    'param2': 'bar',
                    'param_number': '456',
                    'param_boolean': '1',
                    'param_map': '{\"test\":\"json\"}',
                    'param_comma_list': '[\"Hola\", \"Senor\"]'}}}
        return db_api.raw_template_create(self.ctx, template)

    def encrypt(self, enc_key=None, batch_size=50,
                legacy_prop_data=False):
        session = self.ctx.session
        if enc_key is None:
            enc_key = cfg.CONF.auth_encryption_key
        self.assertEqual([], db_api.db_encrypt_parameters_and_properties(
            self.ctx, enc_key, batch_size=batch_size))

        for enc_tmpl in session.query(models.RawTemplate).all():
            for param_name in self.hidden_params_dict.keys():
                self.assertEqual(
                    'cryptography_decrypt_v1',
                    enc_tmpl.environment['parameters'][param_name][0])
            self.assertEqual(
                'foo', enc_tmpl.environment['parameters']['param1'])

            # test that decryption does not store (or encrypt) default
            # values in template's environment['parameters']
            self.assertIsNone(
                enc_tmpl.environment['parameters'].get('param3'))

        enc_resources = session.query(models.Resource).all()
        self.assertNotEqual([], enc_resources)
        for enc_resource in enc_resources:
            if legacy_prop_data:
                self.assertEqual(
                    'cryptography_decrypt_v1',
                    enc_resource.properties_data['foo1'][0])
            else:
                self.assertEqual(
                    'cryptography_decrypt_v1',
                    enc_resource.rsrc_prop_data.data['foo1'][0])

        ev = enc_tmpl.environment['parameters']['param2'][1]
        return ev
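
    # [Illustrative sketch -- not part of the original test class.]
    # Encrypted values are persisted as a two-item sequence: a decryption
    # method marker ('cryptography_decrypt_v1') plus the ciphertext,
    # which is what the [0]/[1] indexing in encrypt() above inspects.
    # A rough stand-in for the idea using cryptography.fernet; Heat's
    # real implementation lives in heat.common.crypt and differs in
    # detail (key handling, serialization):

    @staticmethod
    def _example_encrypt_value(value, key=None):
        from cryptography import fernet
        key = key or fernet.Fernet.generate_key()  # Heat derives from conf
        token = fernet.Fernet(key).encrypt(value.encode('utf-8'))
        return 'cryptography_decrypt_v1', token.decode('utf-8')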
    def decrypt(self, encrypt_value, enc_key=None, batch_size=50,
                legacy_prop_data=False):
        session = self.ctx.session
        if enc_key is None:
            enc_key = cfg.CONF.auth_encryption_key
        self.assertEqual([], db_api.db_decrypt_parameters_and_properties(
            self.ctx, enc_key, batch_size=batch_size))

        for dec_tmpl in session.query(models.RawTemplate).all():
            self.assertNotEqual(
                encrypt_value,
                dec_tmpl.environment['parameters']['param2'][1])
            for param_name, param_value in self.hidden_params_dict.items():
                self.assertEqual(
                    param_value,
                    dec_tmpl.environment['parameters'][param_name])
            self.assertEqual(
                'foo', dec_tmpl.environment['parameters']['param1'])

            # test that decryption does not store default
            # values in template's environment['parameters']
            self.assertIsNone(
                dec_tmpl.environment['parameters'].get('param3'))

        decrypt_value = dec_tmpl.environment['parameters']['param2'][1]

        dec_resources = session.query(models.Resource).all()
        self.assertNotEqual([], dec_resources)
        for dec_resource in dec_resources:
            if legacy_prop_data:
                self.assertEqual(
                    'bar1', dec_resource.properties_data['foo1'])
            else:
                self.assertEqual(
                    'bar1', dec_resource.rsrc_prop_data.data['foo1'])

        return decrypt_value

    def _test_db_encrypt_decrypt(self, batch_size=50,
                                 legacy_prop_data=False):
        session = self.ctx.session
        raw_templates = session.query(models.RawTemplate).all()
        self.assertNotEqual([], raw_templates)
        for r_tmpl in raw_templates:
            for param_name, param_value in self.hidden_params_dict.items():
                self.assertEqual(param_value,
                                 r_tmpl.environment['parameters'][param_name])
            self.assertEqual('foo',
                             r_tmpl.environment['parameters']['param1'])
        resources = session.query(models.Resource).all()
        self.assertNotEqual([], resources)
        self.assertEqual(len(resources), len(raw_templates))
        for resource in resources:
            resource = db_api.resource_get(self.ctx, resource.id)
            if legacy_prop_data:
                self.assertEqual(
                    'bar1', resource.properties_data['foo1'])
            else:
                self.assertEqual(
                    'bar1', resource.rsrc_prop_data.data['foo1'])

        # Test encryption
        encrypt_value = self.encrypt(batch_size=batch_size,
                                     legacy_prop_data=legacy_prop_data)

        # Test that encryption is idempotent
        encrypt_value2 = self.encrypt(batch_size=batch_size,
                                      legacy_prop_data=legacy_prop_data)
        self.assertEqual(encrypt_value, encrypt_value2)

        # Test decryption
        decrypt_value = self.decrypt(encrypt_value, batch_size=batch_size,
                                     legacy_prop_data=legacy_prop_data)

        # Test that decryption is idempotent
        decrypt_value2 = self.decrypt(encrypt_value, batch_size=batch_size,
                                      legacy_prop_data=legacy_prop_data)
        self.assertEqual(decrypt_value, decrypt_value2)

        # Test using a different encryption key to encrypt & decrypt
        encrypt_value3 = self.encrypt(
            enc_key='774c15be099ea74123a9b9592ff12680',
            batch_size=batch_size,
            legacy_prop_data=legacy_prop_data)
        decrypt_value3 = self.decrypt(
            encrypt_value3, enc_key='774c15be099ea74123a9b9592ff12680',
            batch_size=batch_size,
            legacy_prop_data=legacy_prop_data)
        self.assertEqual(decrypt_value, decrypt_value3)

        self.assertNotEqual(encrypt_value, decrypt_value)
        self.assertNotEqual(encrypt_value3, decrypt_value3)
        self.assertNotEqual(encrypt_value, encrypt_value3)

    def test_db_encrypt_decrypt(self):
        """Test encryption and decryption for single template and resource."""
        self._test_db_encrypt_decrypt()

    def test_db_encrypt_decrypt_legacy_prop_data(self):
        """Test encryption and decryption for res with legacy prop data."""
        # delete what setUp created
        [self.ctx.session.delete(r) for r in
         self.ctx.session.query(models.Resource).all()]
        [self.ctx.session.delete(s) for s in
         self.ctx.session.query(models.Stack).all()]
        [self.ctx.session.delete(t) for t in
         self.ctx.session.query(models.RawTemplate).all()]

        tmpl = self._create_template()
        stack = create_stack(self.ctx, tmpl, self.user_creds)
        create_resource(self.ctx, stack, True, name='res1')
        self._test_db_encrypt_decrypt(legacy_prop_data=True)

    def test_db_encrypt_decrypt_in_batches(self):
        """Test encryption and decryption for several templates/resources.

        Test encryption and decryption with a set batch size of
        templates and resources.
        """
        tmpl1 = self._create_template()
        tmpl2 = self._create_template()
        stack = create_stack(self.ctx, tmpl1, self.user_creds)
        create_resource(self.ctx, stack, False, name='res1')
        stack2 = create_stack(self.ctx, tmpl2, self.user_creds)
        create_resource(self.ctx, stack2, False, name='res2')

        self._test_db_encrypt_decrypt(batch_size=1)
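
    # [Illustrative sketch -- not part of the original test class.]
    # The idempotency asserted above (encrypting or decrypting twice
    # yields identical values) comes from bookkeeping: the environment
    # carries an 'encrypted_param_names' list and parameters already on
    # it are skipped on later passes. A hypothetical minimal version,
    # with encrypt_fn standing in for the real crypt call:

    @staticmethod
    def _example_encrypt_hidden_params(env, hidden_names, encrypt_fn):
        done = env.setdefault('encrypted_param_names', [])
        params = env.get('parameters', {})
        for name in hidden_names:
            if name in done or name not in params:
                continue  # already encrypted, or nothing to encrypt
            params[name] = encrypt_fn(params[name])
            done.append(name)
        return env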
    def test_db_encrypt_decrypt_exception_continue(self):
        """Test that encryption and decryption proceed after an exception"""
        def create_malformed_template():
            """Initialize a malformed template which should fail encryption."""
            t = template_format.parse('''
heat_template_version: 2013-05-23
parameters:
    param1:
        type: string
        description: value1.
    param2:
        type: string
        description: value2.
        hidden: true
    param3:
        type: string
        description: value3
        hidden: true
        default: "don't encrypt me!
                 I'm not sensitive enough"
resources:
    a_resource:
        type: GenericResourceType
''')
            template = {
                'template': t,
                'files': {'foo': 'bar'},
                'environment': ''}  # <- environment should be a dict
            return db_api.raw_template_create(self.ctx, template)

        create_malformed_template()
        self._create_template()

        # Test encryption
        enc_result = db_api.db_encrypt_parameters_and_properties(
            self.ctx, cfg.CONF.auth_encryption_key, batch_size=50)
        self.assertEqual(1, len(enc_result))
        self.assertIs(AttributeError, type(enc_result[0]))
        enc_tmpls = self.ctx.session.query(models.RawTemplate).all()
        self.assertEqual('', enc_tmpls[1].environment)
        self.assertEqual('cryptography_decrypt_v1',
                         enc_tmpls[2].environment['parameters']['param2'][0])

        # Test decryption
        dec_result = db_api.db_decrypt_parameters_and_properties(
            self.ctx, cfg.CONF.auth_encryption_key, batch_size=50)
        self.assertEqual(1, len(dec_result))
        self.assertIs(AttributeError, type(dec_result[0]))
        dec_tmpls = self.ctx.session.query(models.RawTemplate).all()
        self.assertEqual('', dec_tmpls[1].environment)
        self.assertEqual('bar',
                         dec_tmpls[2].environment['parameters']['param2'])

    def test_db_encrypt_no_env(self):
        template = {
            'template': self.t,
            'files': {'foo': 'bar'},
            'environment': None}
        db_api.raw_template_create(self.ctx, template)
        self.assertEqual([], db_api.db_encrypt_parameters_and_properties(
            self.ctx, cfg.CONF.auth_encryption_key))

    def test_db_encrypt_no_env_parameters(self):
        template = {
            'template': self.t,
            'files': {'foo': 'bar'},
            'environment': {'encrypted_param_names': ['a']}}
        db_api.raw_template_create(self.ctx, template)
        self.assertEqual([], db_api.db_encrypt_parameters_and_properties(
            self.ctx, cfg.CONF.auth_encryption_key))

    def test_db_encrypt_no_properties_data(self):
        ctx = utils.dummy_context()
        template = self._create_template()
        user_creds = create_user_creds(ctx)
        stack = create_stack(ctx, template, user_creds)
        resources = [create_resource(ctx, stack, name='res1')]
        resources[0].properties_data = None
        self.assertEqual([], db_api.db_encrypt_parameters_and_properties(
            ctx, cfg.CONF.auth_encryption_key))
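
    # [Illustrative sketch -- not part of the original test class.]
    # As the tests above assert, db_encrypt_parameters_and_properties()
    # and its decrypt twin return a list of collected exceptions rather
    # than raising, so one malformed record (e.g. a non-dict
    # environment) cannot abort the run for every other row. The core
    # pattern, reduced to a generic helper:

    @staticmethod
    def _example_process_all(items, process_one):
        excs = []
        for item in items:
            try:
                process_one(item)
            except Exception as exc:  # record the failure, keep going
                excs.append(exc)
        return excs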
    def test_db_encrypt_decrypt_verbose_on(self):
        info_logger = self.useFixture(
            fixtures.FakeLogger(level=logging.INFO,
                                format="%(levelname)8s [%(name)s] "
                                       "%(message)s"))
        ctx = utils.dummy_context()
        template = self._create_template()
        user_creds = create_user_creds(ctx)
        stack = create_stack(ctx, template, user_creds)
        create_resource(ctx, stack, legacy_prop_data=True, name='res2')

        db_api.db_encrypt_parameters_and_properties(
            ctx, cfg.CONF.auth_encryption_key, verbose=True)
        self.assertIn("Processing raw_template 1", info_logger.output)
        self.assertIn("Finished encrypt processing of raw_template 1",
                      info_logger.output)
        self.assertIn("Processing resource_properties_data 1",
                      info_logger.output)
        self.assertIn("Finished processing resource_properties_data 1",
                      info_logger.output)
        # only the resource with legacy properties data is processed
        self.assertIn("Processing resource 2", info_logger.output)
        self.assertIn("Finished processing resource 2", info_logger.output)

        info_logger2 = self.useFixture(
            fixtures.FakeLogger(level=logging.INFO,
                                format="%(levelname)8s [%(name)s] "
                                       "%(message)s"))

        db_api.db_decrypt_parameters_and_properties(
            ctx, cfg.CONF.auth_encryption_key, verbose=True)
        self.assertIn("Processing raw_template 1", info_logger2.output)
        self.assertIn("Finished decrypt processing of raw_template 1",
                      info_logger2.output)
        self.assertIn("Processing resource_properties_data 1",
                      info_logger2.output)
        self.assertIn("Finished processing resource_properties_data 1",
                      info_logger2.output)
        # only the resource with legacy properties data is processed
        self.assertIn("Processing resource 2", info_logger2.output)
        self.assertIn("Finished processing resource 2", info_logger2.output)

    def test_db_encrypt_decrypt_verbose_off(self):
        info_logger = self.useFixture(
            fixtures.FakeLogger(level=logging.INFO,
                                format="%(levelname)8s [%(name)s] "
                                       "%(message)s"))
        ctx = utils.dummy_context()
        template = self._create_template()
        user_creds = create_user_creds(ctx)
        stack = create_stack(ctx, template, user_creds)
        create_resource(ctx, stack, name='res1')

        db_api.db_encrypt_parameters_and_properties(
            ctx, cfg.CONF.auth_encryption_key, verbose=False)
        self.assertNotIn("Processing raw_template 1", info_logger.output)
        self.assertNotIn("Processing resource 1", info_logger.output)
        self.assertNotIn("Successfully processed raw_template 1",
                         info_logger.output)
        self.assertNotIn("Successfully processed resource 1",
                         info_logger.output)

        info_logger2 = self.useFixture(
            fixtures.FakeLogger(level=logging.INFO,
                                format="%(levelname)8s [%(name)s] "
                                       "%(message)s"))

        db_api.db_decrypt_parameters_and_properties(
            ctx, cfg.CONF.auth_encryption_key, verbose=False)
        self.assertNotIn("Processing raw_template 1", info_logger2.output)
        self.assertNotIn("Processing resource 1", info_logger2.output)
        self.assertNotIn("Successfully processed raw_template 1",
                         info_logger2.output)
        self.assertNotIn("Successfully processed resource 1",
                         info_logger2.output)

    def test_db_encrypt_no_param_schema(self):
        t = copy.deepcopy(self.t)
        del t['parameters']['param2']
        template = {
            'template': t,
            'files': {'foo': 'bar'},
            'environment': {'encrypted_param_names': [],
                            'parameters': {'param2': 'foo'}}}
        db_api.raw_template_create(self.ctx, template)
        self.assertEqual([], db_api.db_encrypt_parameters_and_properties(
            self.ctx, cfg.CONF.auth_encryption_key))

    def test_db_encrypt_non_string_param_type(self):
        t = template_format.parse('''
heat_template_version: 2013-05-23
parameters:
    param1:
        type: string
        description: value1.
    param2:
        type: string
        description: value2.
        hidden: true
    param3:
        type: string
        description: value3
        hidden: true
        default: 1234
resources:
    a_resource:
        type: GenericResourceType
''')
        template = {
            'template': t,
            'files': {},
            'environment': {'parameters': {
                'param1': 'foo',
                'param2': 'bar',
                'param3': 12345}}}
        tmpl = db_api.raw_template_create(self.ctx, template)
        self.assertEqual([], db_api.db_encrypt_parameters_and_properties(
            self.ctx, cfg.CONF.auth_encryption_key))

        tmpl = db_api.raw_template_get(self.ctx, tmpl.id)
        enc_params = copy.copy(tmpl.environment['parameters'])

        self.assertEqual([], db_api.db_decrypt_parameters_and_properties(
            self.ctx, cfg.CONF.auth_encryption_key, batch_size=50))
        tmpl = db_api.raw_template_get(self.ctx, tmpl.id)
        dec_params = tmpl.environment['parameters']

        self.assertNotEqual(enc_params['param3'], dec_params['param3'])
        self.assertEqual('bar', dec_params['param2'])
        self.assertEqual('12345', dec_params['param3'])
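
# [Illustrative sketch -- not part of the original test suite.]
# test_db_encrypt_non_string_param_type documents a side effect worth
# knowing about: values are serialized to text before encryption, so a
# hidden parameter supplied as the integer 12345 round-trips as the
# string '12345'. In essence (encrypt_fn/decrypt_fn stand in for the
# real crypt calls):

def _example_roundtrip(value, encrypt_fn, decrypt_fn):
    """Non-string parameter values come back from decryption as text."""
    return decrypt_fn(encrypt_fn(str(value)))  # 12345 -> '12345'
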

class ResetStackStatusTests(common.HeatTestCase):

    def setUp(self):
        super(ResetStackStatusTests, self).setUp()
        self.ctx = utils.dummy_context()
        self.template = create_raw_template(self.ctx)
        self.user_creds = create_user_creds(self.ctx)
        self.stack = create_stack(self.ctx, self.template,
                                  self.user_creds)

    def test_status_reset(self):
        db_api.stack_update(self.ctx, self.stack.id,
                            {'status': 'IN_PROGRESS'})
        db_api.stack_lock_create(self.ctx, self.stack.id, UUID1)
        db_api.reset_stack_status(self.ctx, self.stack.id)

        stack = db_api.stack_get(self.ctx, self.stack.id)
        self.assertEqual('FAILED', stack.status)
        self.assertEqual('Stack status manually reset', stack.status_reason)
        self.assertEqual(True, db_api.stack_lock_release(self.ctx,
                                                         self.stack.id,
                                                         UUID1))

    def test_resource_reset(self):
        resource_progress = create_resource(self.ctx, self.stack,
                                            status='IN_PROGRESS',
                                            engine_id=UUID2)
        resource_complete = create_resource(self.ctx, self.stack)
        db_api.reset_stack_status(self.ctx, self.stack.id)

        resource_complete = db_api.resource_get(self.ctx,
                                                resource_complete.id)
        resource_progress = db_api.resource_get(self.ctx,
                                                resource_progress.id)
        self.assertEqual('complete', resource_complete.status)
        self.assertEqual('FAILED', resource_progress.status)
        self.assertIsNone(resource_progress.engine_id)

    def test_hook_reset(self):
        resource = create_resource(self.ctx, self.stack)
        resource.context = self.ctx
        create_resource_data(self.ctx, resource, key="pre-create")
        create_resource_data(self.ctx, resource)
        db_api.reset_stack_status(self.ctx, self.stack.id)

        vals = db_api.resource_data_get_all(self.ctx, resource.id)
        self.assertEqual({'test_resource_key': 'test_value'}, vals)

    def test_nested_stack(self):
        db_api.stack_update(self.ctx, self.stack.id,
                            {'status': 'IN_PROGRESS'})
        child = create_stack(self.ctx, self.template, self.user_creds,
                             owner_id=self.stack.id)
        grandchild = create_stack(self.ctx, self.template, self.user_creds,
                                  owner_id=child.id,
                                  status='IN_PROGRESS')
        resource = create_resource(self.ctx, grandchild,
                                   status='IN_PROGRESS',
                                   engine_id=UUID2)
        db_api.reset_stack_status(self.ctx, self.stack.id)

        grandchild = db_api.stack_get(self.ctx, grandchild.id)
        self.stack = db_api.stack_get(self.ctx, self.stack.id)
        resource = db_api.resource_get(self.ctx, resource.id)
        self.assertEqual('FAILED', grandchild.status)
        self.assertEqual('FAILED', resource.status)
        self.assertIsNone(resource.engine_id)
        self.assertEqual('FAILED', self.stack.status)
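
# [Illustrative sketch -- not part of the original test suite.]
# ResetStackStatusTests pins down the observable contract of
# reset_stack_status(): IN_PROGRESS stacks and resources (including
# nested ones) become FAILED, resource engine_id claims are cleared,
# held stack locks are released and pre-* hook data is removed. An
# outline with invented helper names; the real logic lives in
# heat.db.sqlalchemy.api:

def _example_reset_stack(stk, children_of, reset_resources, release_lock):
    for child in children_of(stk):
        _example_reset_stack(child, children_of, reset_resources,
                             release_lock)
    if stk.status == 'IN_PROGRESS':
        stk.status = 'FAILED'
        stk.status_reason = 'Stack status manually reset'
    reset_resources(stk)  # IN_PROGRESS -> FAILED, engine_id -> None
    release_lock(stk)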
heat-10.0.2/heat/tests/db/__init__.py0000666000175000017500000000000013343562340017311 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/db/test_sqlalchemy_filters.py0000666000175000017500000000314013343562340022513 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from heat.db.sqlalchemy import filters as db_filters
from heat.tests import common


class ExactFilterTest(common.HeatTestCase):
    def setUp(self):
        super(ExactFilterTest, self).setUp()
        self.query = mock.Mock()
        self.model = mock.Mock()

    def test_returns_same_query_for_empty_filters(self):
        filters = {}
        db_filters.exact_filter(self.query, self.model, filters)
        self.assertEqual(0, self.query.call_count)

    def test_add_exact_match_clause_for_single_values(self):
        filters = {'cat': 'foo'}
        db_filters.exact_filter(self.query, self.model, filters)
        self.query.filter_by.assert_called_once_with(cat='foo')

    def test_adds_an_in_clause_for_multiple_values(self):
        self.model.cat.in_.return_value = 'fake in clause'
        filters = {'cat': ['foo', 'quux']}
        db_filters.exact_filter(self.query, self.model, filters)
        self.query.filter.assert_called_once_with('fake in clause')
        self.model.cat.in_.assert_called_once_with(['foo', 'quux'])
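
# [Illustrative sketch -- not part of the original test module.]
# The three tests above fully specify exact_filter()'s contract: an
# empty filter dict leaves the query untouched, scalar values become
# filter_by() equality clauses and list values become IN clauses. A
# minimal implementation satisfying those expectations:

def _example_exact_filter(query, model, filters):
    equality = {}
    for key, value in (filters or {}).items():
        if isinstance(value, (list, tuple, set, frozenset)):
            query = query.filter(getattr(model, key).in_(value))
        else:
            equality[key] = value
    if equality:
        query = query.filter_by(**equality)
    return query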
heat-10.0.2/heat/tests/db/test_utils.py0000666000175000017500000001665513343562340020000 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from heat.db.sqlalchemy import utils as migrate_utils
from heat.tests import common
from heat.tests import utils

from sqlalchemy.schema import (Column, MetaData, Table)
from sqlalchemy.types import (Boolean, String, Integer)
from sqlalchemy import (CheckConstraint, UniqueConstraint,
                        ForeignKey, ForeignKeyConstraint)


def _has_constraint(cset, ctype, cname):
    # Note: the `else` belongs to the `for`, so False is returned only
    # after the whole constraint set has been scanned without a match.
    for c in cset:
        if (isinstance(c, ctype) and
                c.name == cname):
            return True
    else:
        return False


class DBMigrationUtilsTest(common.HeatTestCase):

    def setUp(self):
        super(DBMigrationUtilsTest, self).setUp()
        self.engine = utils.get_engine()

    def test_clone_table_adds_or_deletes_columns(self):
        meta = MetaData()
        meta.bind = self.engine

        table = Table('dummy', meta,
                      Column('id', String(36), primary_key=True,
                             nullable=False),
                      Column('A', Boolean, default=False))
        table.create()

        newcols = [
            Column('B', Boolean, default=False),
            Column('C', String(255), default='foobar')
        ]
        ignorecols = [
            table.c.A.name
        ]
        new_table = migrate_utils.clone_table('new_dummy', table, meta,
                                              newcols=newcols,
                                              ignorecols=ignorecols)

        col_names = [c.name for c in new_table.columns]
        self.assertEqual(3, len(col_names))
        self.assertIsNotNone(new_table.c.B)
        self.assertIsNotNone(new_table.c.C)
        self.assertNotIn('A', col_names)

    def test_clone_table_swaps_columns(self):
        meta = MetaData()
        meta.bind = self.engine

        table = Table("dummy1", meta,
                      Column('id', String(36), primary_key=True,
                             nullable=False),
                      Column('A', Boolean, default=False))
        table.create()

        swapcols = {
            'A': Column('A', Integer, default=1),
        }
        new_table = migrate_utils.clone_table('swap_dummy', table, meta,
                                              swapcols=swapcols)

        self.assertIsNotNone(new_table.c.A)
        self.assertEqual(Integer, type(new_table.c.A.type))

    def test_clone_table_retains_constraints(self):
        meta = MetaData()
        meta.bind = self.engine

        parent = Table('parent', meta,
                       Column('id', String(36), primary_key=True,
                              nullable=False),
                       Column('A', Integer),
                       Column('B', Integer),
                       Column('C', Integer,
                              CheckConstraint('C>100', name="above 100")),
                       Column('D', Integer, unique=True),
                       UniqueConstraint('A', 'B', name='uix_1'))
        parent.create()

        child = Table('child', meta,
                      Column('id', String(36),
                             ForeignKey('parent.id', name="parent_ref"),
                             primary_key=True, nullable=False),
                      Column('A', Boolean, default=False))
        child.create()

        ignorecols = [
            parent.c.D.name,
        ]
        new_parent = migrate_utils.clone_table('new_parent', parent, meta,
                                               ignorecols=ignorecols)
        new_child = migrate_utils.clone_table('new_child', child, meta)

        self.assertTrue(_has_constraint(new_parent.constraints,
                                        UniqueConstraint, 'uix_1'))
        self.assertTrue(_has_constraint(new_parent.c.C.constraints,
                                        CheckConstraint, 'above 100'))
        self.assertTrue(_has_constraint(new_child.constraints,
                                        ForeignKeyConstraint, 'parent_ref'))

    def test_clone_table_ignores_constraints(self):
        meta = MetaData()
        meta.bind = self.engine

        table = Table('constraints_check', meta,
                      Column('id', String(36), primary_key=True,
                             nullable=False),
                      Column('A', Integer),
                      Column('B', Integer),
                      Column('C', Integer,
                             CheckConstraint('C>100', name="above 100")),
                      UniqueConstraint('A', 'B', name='uix_1'))
        table.create()

        ignorecons = [
            'uix_1',
        ]
        new_table = migrate_utils.clone_table('constraints_check_tmp', table,
                                              meta, ignorecons=ignorecons)
        self.assertFalse(_has_constraint(new_table.constraints,
                                         UniqueConstraint, 'uix_1'))

    def test_migrate_data(self):
        meta = MetaData(bind=self.engine)

        # create TableA
        table_a = Table('TableA', meta,
                        Column('id', Integer, primary_key=True),
                        Column('first', String(8), nullable=False),
                        Column('second', Integer))
        table_a.create()

        # update it with sample data
        values = [
            {'id': 1, 'first': 'a'},
            {'id': 2, 'first': 'b'},
            {'id': 3, 'first': 'c'}
        ]
        for value in values:
            self.engine.execute(table_a.insert(values=value))

        # create TableB similar to TableA, except column 'second'
        table_b = Table('TableB', meta,
                        Column('id', Integer, primary_key=True),
                        Column('first', String(8), nullable=False))
        table_b.create()

        # migrate data
        migrate_utils.migrate_data(self.engine,
                                   table_a,
                                   table_b,
                                   ['second'])

        # validate the source table name still exists (migrate_data drops
        # TableA and renames the target table to the source's name)
        self.assertTrue(self.engine.dialect.has_table(
            self.engine.connect(), 'TableA'),
            'Data migration failed to drop source table')

        # validate table_b is updated with data from table_a
        table_b_rows = list(table_b.select().execute())
        self.assertEqual(3, len(table_b_rows), "Data migration is failed")
        table_b_values = []
        for row in table_b_rows:
            table_b_values.append({'id': row.id, 'first': row.first})

        self.assertEqual(values, table_b_values,
                         "Data migration failed with invalid data copy")
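
# [Illustrative sketch -- not part of the original test module.]
# What test_migrate_data relies on: migrate_data() copies every row
# (minus the skipped columns) into the target table, drops the source,
# then renames the target to the source's name -- which is why
# has_table('TableA') still succeeds afterwards. Roughly, assuming
# sqlalchemy-migrate's Table.rename() extension is available:

def _example_migrate_data(engine, src, dst, skip_columns=()):
    keep = [c.name for c in src.columns if c.name not in skip_columns]
    for row in engine.execute(src.select()):
        engine.execute(dst.insert().values(
            **{name: row[name] for name in keep}))
    src_name = src.name
    src.drop()
    dst.rename(src_name)  # target takes over the source table's name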
heat-10.0.2/heat/tests/test_version.py0000666000175000017500000000142613343562340017726 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import pbr

from heat.tests import common
from heat import version


class VersionTest(common.HeatTestCase):

    def test_version(self):
        self.assertIsInstance(version.version_info, pbr.version.VersionInfo)
heat-10.0.2/heat/tests/test_stack.py0000666000175000017500000037630613343562352017355 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections import copy import datetime import json import logging import time import eventlet import fixtures import mock import mox from oslo_config import cfg import six from heat.common import context from heat.common import exception from heat.common import template_format from heat.common import timeutils from heat.db.sqlalchemy import api as db_api from heat.engine.clients.os import keystone from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks from heat.engine.clients.os import nova from heat.engine import environment from heat.engine import function from heat.engine import node_data from heat.engine import resource from heat.engine import scheduler from heat.engine import service from heat.engine import stack from heat.engine import stk_defn from heat.engine import template from heat.engine import update from heat.objects import raw_template as raw_template_object from heat.objects import resource as resource_objects from heat.objects import stack as stack_object from heat.objects import stack_tag as stack_tag_object from heat.objects import user_creds as ucreds_object from heat.tests import common from heat.tests import fakes from heat.tests import generic_resource as generic_rsrc from heat.tests import utils empty_template = template_format.parse('''{ "HeatTemplateFormatVersion" : "2012-12-12", }''') class StackTest(common.HeatTestCase): def setUp(self): super(StackTest, self).setUp() self.tmpl = template.Template(copy.deepcopy(empty_template)) self.ctx = utils.dummy_context() self.stub_auth() def test_stack_reads_tenant(self): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl, tenant_id='bar') self.assertEqual('bar', self.stack.tenant_id) def test_stack_reads_tenant_from_context_if_empty(self): self.ctx.tenant = 'foo' self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl, tenant_id=None) self.assertEqual('foo', self.stack.tenant_id) def test_stack_reads_username(self): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl, username='bar') self.assertEqual('bar', self.stack.username) def test_stack_reads_username_from_context_if_empty(self): self.ctx.username = 'foo' self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl, username=None) self.assertEqual('foo', self.stack.username) def test_stack_string_repr(self): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl) expected = 'Stack "%s" [%s]' % (self.stack.name, self.stack.id) observed = str(self.stack) self.assertEqual(expected, observed) def test_state_defaults(self): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl) self.assertEqual(('CREATE', 'IN_PROGRESS'), self.stack.state) self.assertEqual('', self.stack.status_reason) def test_timeout_secs_default(self): cfg.CONF.set_override('stack_action_timeout', 1000) self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl) self.assertIsNone(self.stack.timeout_mins) self.assertEqual(1000, self.stack.timeout_secs()) def test_timeout_secs(self): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl, timeout_mins=10) self.assertEqual(600, self.stack.timeout_secs()) @mock.patch.object(stack, 'oslo_timeutils') def test_time_elapsed(self, mock_tu): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl) # dummy create time 10:00:00 self.stack.created_time = datetime.datetime(2015, 7, 27, 10, 0, 0) # mock utcnow set to 10:10:00 (600s offset) mock_tu.utcnow.return_value = datetime.datetime(2015, 7, 27, 10, 10, 0) self.assertEqual(600, self.stack.time_elapsed()) @mock.patch.object(stack, 'oslo_timeutils') def 
test_time_elapsed_negative(self, mock_tu): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl) # dummy create time 10:00:00 self.stack.created_time = datetime.datetime(2015, 7, 27, 10, 0, 0) # mock utcnow set to 09:59:50 (-10s offset) mock_tu.utcnow.return_value = datetime.datetime(2015, 7, 27, 9, 59, 50) self.assertEqual(-10, self.stack.time_elapsed()) @mock.patch.object(stack, 'oslo_timeutils') def test_time_elapsed_ms(self, mock_tu): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl) # dummy create time 10:00:00 self.stack.created_time = datetime.datetime(2015, 7, 27, 10, 5, 0) # mock utcnow set to microsecond offset mock_tu.utcnow.return_value = datetime.datetime(2015, 7, 27, 10, 4, 59, 750000) self.assertEqual(-0.25, self.stack.time_elapsed()) @mock.patch.object(stack, 'oslo_timeutils') def test_time_elapsed_with_updated_time(self, mock_tu): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl) # dummy create time 10:00:00 self.stack.created_time = datetime.datetime(2015, 7, 27, 10, 0, 0) # dummy updated time 11:00:00; should consider this not created_time self.stack.updated_time = datetime.datetime(2015, 7, 27, 11, 0, 0) # mock utcnow set to 11:10:00 (600s offset) mock_tu.utcnow.return_value = datetime.datetime(2015, 7, 27, 11, 10, 0) self.assertEqual(600, self.stack.time_elapsed()) @mock.patch.object(stack.Stack, 'time_elapsed') def test_time_remaining(self, mock_te): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl) # mock time elapsed; set to 600 seconds mock_te.return_value = 600 # default stack timeout is 3600 seconds; remaining time 3000 secs self.assertEqual(3000, self.stack.time_remaining()) @mock.patch.object(stack.Stack, 'time_elapsed') def test_has_timed_out(self, mock_te): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl) self.stack.status = self.stack.IN_PROGRESS # test with timed out stack mock_te.return_value = 3601 # default stack timeout is 3600 seconds; stack should time out self.assertTrue(self.stack.has_timed_out()) # mock time elapsed; set to 600 seconds mock_te.return_value = 600 # default stack timeout is 3600 seconds; remaining time 3000 secs self.assertFalse(self.stack.has_timed_out()) # has_timed_out has no meaning when stack completes/fails; # should return false self.stack.status = self.stack.COMPLETE self.assertFalse(self.stack.has_timed_out()) self.stack.status = self.stack.FAILED self.assertFalse(self.stack.has_timed_out()) def test_no_auth_token(self): ctx = utils.dummy_context() ctx.auth_token = None self.stack = stack.Stack(ctx, 'test_stack', self.tmpl) self.assertEqual('abcd1234', ctx.auth_plugin.auth_token) def test_state_deleted(self): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl, action=stack.Stack.CREATE, status=stack.Stack.IN_PROGRESS) self.stack.id = '1234' self.stack.delete() self.assertIsNone(self.stack.state_set(stack.Stack.CREATE, stack.Stack.COMPLETE, 'test')) def test_load_nonexistant_id(self): self.assertRaises(exception.NotFound, stack.Stack.load, self.ctx, -1) def test_total_resources_empty(self): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl, status_reason='flimflam') self.stack.store() self.assertEqual(0, self.stack.total_resources(self.stack.id)) self.assertEqual(0, self.stack.total_resources()) @mock.patch.object(db_api, 'stack_count_total_resources') def test_total_resources_not_stored(self, sctr): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl, status_reason='flimflam') self.assertEqual(0, self.stack.total_resources()) sctr.assert_not_called() def 
test_total_resources_not_found(self): self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl, status_reason='flimflam') self.assertEqual(0, self.stack.total_resources('1234')) @mock.patch.object(db_api, 'stack_count_total_resources') def test_total_resources_generic(self, sctr): tpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'A': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'test_stack', template.Template(tpl), status_reason='blarg') self.stack.store() sctr.return_value = 1 self.assertEqual(1, self.stack.total_resources(self.stack.id)) self.assertEqual(1, self.stack.total_resources()) def test_resource_get(self): tpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'A': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'test_stack', template.Template(tpl), status_reason='blarg') self.stack.store() self.assertEqual('A', self.stack.resource_get('A').name) self.assertEqual(self.stack['A'], self.stack.resource_get('A')) self.assertIsNone(self.stack.resource_get('B')) @mock.patch.object(resource_objects.Resource, 'get_all_by_stack') def test_resource_get_db_fallback(self, gabs): tpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'A': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'test_stack', template.Template(tpl), status_reason='blarg') self.stack.store() tpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'A': {'Type': 'GenericResourceType'}, 'B': {'Type': 'GenericResourceType'}}} t2 = template.Template(tpl2) t2.store(self.ctx) db_resources = { 'A': mock.MagicMock(), 'B': mock.MagicMock(current_template_id=t2.id), 'C': mock.MagicMock(current_template_id=t2.id) } db_resources['A'].name = 'A' db_resources['B'].name = 'B' db_resources['C'].name = 'C' gabs.return_value = db_resources self.assertEqual('A', self.stack.resource_get('A').name) self.assertEqual('B', self.stack.resource_get('B').name) # Ignore the resource if only in db self.assertIsNone(self.stack.resource_get('C')) self.assertIsNone(self.stack.resource_get('D')) @mock.patch.object(resource_objects.Resource, 'get_all_by_stack') def test_iter_resources(self, mock_db_call): tpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'A': {'Type': 'GenericResourceType'}, 'B': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'test_stack', template.Template(tpl), status_reason='blarg') self.stack.store() mock_rsc_a = mock.MagicMock(current_template_id=self.stack.t.id) mock_rsc_a.name = 'A' mock_rsc_b = mock.MagicMock(current_template_id=self.stack.t.id) mock_rsc_b.name = 'B' mock_db_call.return_value = { 'A': mock_rsc_a, 'B': mock_rsc_b } all_resources = list(self.stack.iter_resources()) # Verify, the db query is called with expected filter mock_db_call.assert_called_once_with(self.ctx, self.stack.id) # And returns the resources names = sorted([r.name for r in all_resources]) self.assertEqual(['A', 'B'], names) @mock.patch.object(resource_objects.Resource, 'get_all_by_stack') def test_iter_resources_with_nested(self, mock_db_call): tpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'A': {'Type': 'StackResourceType'}, 'B': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'test_stack', template.Template(tpl), status_reason='blarg') self.stack.store() mock_rsc_a = mock.MagicMock(current_template_id=self.stack.t.id) mock_rsc_a.name = 'A' mock_rsc_b = mock.MagicMock(current_template_id=self.stack.t.id) mock_rsc_b.name = 'B' mock_db_call.return_value = { 'A': 
mock_rsc_a, 'B': mock_rsc_b } def get_more(nested_depth=0, filters=None): yield 'X' yield 'Y' yield 'Z' mock_nested = self.patchobject(generic_rsrc.StackResourceType, 'nested') mock_nested.return_value.iter_resources = mock.MagicMock( side_effect=get_more) resource_generator = self.stack.iter_resources() self.assertIsNot(resource_generator, list) first_level_resources = list(resource_generator) self.assertEqual(2, len(first_level_resources)) all_resources = list(self.stack.iter_resources(1)) self.assertEqual(5, len(all_resources)) @mock.patch.object(resource_objects.Resource, 'get_all_by_stack') def test_iter_resources_with_filters(self, mock_db_call): tpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'A': {'Type': 'GenericResourceType'}, 'B': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'test_stack', template.Template(tpl), status_reason='blarg') self.stack.store() mock_rsc = mock.MagicMock() mock_rsc.name = 'A' mock_rsc.current_template_id = self.stack.t.id mock_db_call.return_value = {'A': mock_rsc} all_resources = list(self.stack.iter_resources( filters=dict(name=['A']) )) # Verify, the db query is called with expected filter mock_db_call.assert_has_calls([ mock.call(self.ctx, self.stack.id, dict(name=['A'])), mock.call(self.ctx, self.stack.id), ]) # Make sure it returns only one resource. self.assertEqual(1, len(all_resources)) # And returns the resource A self.assertEqual('A', all_resources[0].name) @mock.patch.object(resource_objects.Resource, 'get_all_by_stack') def test_iter_resources_with_nonexistent_template(self, mock_db_call): tpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'A': {'Type': 'GenericResourceType'}, 'B': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'test_stack', template.Template(tpl), status_reason='blarg') self.stack.store() mock_rsc_a = mock.MagicMock(current_template_id=self.stack.t.id) mock_rsc_a.name = 'A' mock_rsc_b = mock.MagicMock(current_template_id=self.stack.t.id + 1) mock_rsc_b.name = 'B' mock_db_call.return_value = { 'A': mock_rsc_a, 'B': mock_rsc_b } all_resources = list(self.stack.iter_resources()) self.assertEqual(1, len(all_resources)) @mock.patch.object(resource_objects.Resource, 'get_all_by_stack') def test_iter_resources_nested_with_filters(self, mock_db_call): tpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'A': {'Type': 'StackResourceType'}, 'B': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'test_stack', template.Template(tpl), status_reason='blarg') self.stack.store() mock_rsc_a = mock.MagicMock(current_template_id=self.stack.t.id) mock_rsc_a.name = 'A' mock_rsc_b = mock.MagicMock(current_template_id=self.stack.t.id) mock_rsc_b.name = 'B' mock_db_call.return_value = { 'A': mock_rsc_a, 'B': mock_rsc_b } def get_more(nested_depth=0, filters=None): if filters: yield 'X' mock_nested = self.patchobject(generic_rsrc.StackResourceType, 'nested') mock_nested.return_value.iter_resources = mock.MagicMock( side_effect=get_more) all_resources = list(self.stack.iter_resources( nested_depth=1, filters=dict(name=['A']) )) # Verify, the db query is called with expected filter mock_db_call.assert_has_calls([ mock.call(self.ctx, self.stack.id, dict(name=['A'])), mock.call(self.ctx, self.stack.id), ]) # Returns three resources (1 first level + 2 second level) self.assertEqual(3, len(all_resources)) def test_load_parent_resource(self): self.stack = stack.Stack(self.ctx, 'load_parent_resource', self.tmpl, parent_resource='parent') 
self.stack.store() stk = stack_object.Stack.get_by_id(self.ctx, self.stack.id) t = template.Template.load(self.ctx, stk.raw_template_id) self.m.StubOutWithMock(template.Template, 'load') template.Template.load( self.ctx, stk.raw_template_id, stk.raw_template ).AndReturn(t) self.m.StubOutWithMock(stack.Stack, '__init__') stack.Stack.__init__(self.ctx, stk.name, t, stack_id=stk.id, action=stk.action, status=stk.status, status_reason=stk.status_reason, timeout_mins=stk.timeout, disable_rollback=stk.disable_rollback, parent_resource='parent', owner_id=None, stack_user_project_id=None, created_time=mox.IgnoreArg(), updated_time=None, user_creds_id=stk.user_creds_id, tenant_id='test_tenant_id', use_stored_context=False, username=mox.IgnoreArg(), convergence=False, current_traversal=self.stack.current_traversal, prev_raw_template_id=None, current_deps=None, cache_data=None, nested_depth=0, deleted_time=None) self.m.ReplayAll() stack.Stack.load(self.ctx, stack_id=self.stack.id) self.m.VerifyAll() def test_identifier(self): self.stack = stack.Stack(self.ctx, 'identifier_test', self.tmpl) self.stack.store() identifier = self.stack.identifier() self.assertEqual(self.stack.tenant_id, identifier.tenant) self.assertEqual('identifier_test', identifier.stack_name) self.assertTrue(identifier.stack_id) self.assertFalse(identifier.path) def test_get_stack_abandon_data(self): tpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': {'param1': {'Type': 'String'}}, 'Resources': {'A': {'Type': 'GenericResourceType'}, 'B': {'Type': 'GenericResourceType'}}} resources = '''{"A": {"status": "COMPLETE", "name": "A", "resource_data": {}, "resource_id": null, "action": "INIT", "type": "GenericResourceType", "metadata": {}}, "B": {"status": "COMPLETE", "name": "B", "resource_data": {}, "resource_id": null, "action": "INIT", "type": "GenericResourceType", "metadata": {}}}''' env = environment.Environment({'parameters': {'param1': 'test'}}) self.stack = stack.Stack(self.ctx, 'stack_details_test', template.Template(tpl, env=env), tenant_id='123', stack_user_project_id='234', tags=['tag1', 'tag2']) self.stack.store() info = self.stack.prepare_abandon() self.assertEqual('CREATE', info['action']) self.assertIn('id', info) self.assertEqual('stack_details_test', info['name']) self.assertEqual(json.loads(resources), info['resources']) self.assertEqual('IN_PROGRESS', info['status']) self.assertEqual(tpl, info['template']) self.assertEqual('123', info['project_id']) self.assertEqual('234', info['stack_user_project_id']) self.assertEqual(env.params, info['environment']['parameters']) self.assertEqual(['tag1', 'tag2'], info['tags']) def test_set_param_id(self): self.stack = stack.Stack(self.ctx, 'param_arn_test', self.tmpl) exp_prefix = ('arn:openstack:heat::test_tenant_id' ':stacks/param_arn_test/') self.assertEqual(self.stack.parameters['AWS::StackId'], exp_prefix + 'None') self.stack.store() identifier = self.stack.identifier() self.assertEqual(exp_prefix + self.stack.id, self.stack.parameters['AWS::StackId']) self.assertEqual(self.stack.parameters['AWS::StackId'], identifier.arn()) self.m.VerifyAll() def test_set_param_id_update(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Metadata': {'Bar': {'Ref': 'AWS::StackId'}}, 'Properties': {'Foo': 'abc'}}}} self.stack = stack.Stack(self.ctx, 'update_stack_arn_test', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) 
stack_arn = self.stack.parameters['AWS::StackId'] tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Metadata': {'Bar': {'Ref': 'AWS::StackId'}}, 'Properties': {'Foo': 'xyz'}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual('xyz', self.stack['AResource'].properties['Foo']) self.assertEqual( stack_arn, self.stack['AResource'].metadata_get()['Bar']) def test_load_param_id(self): self.stack = stack.Stack(self.ctx, 'param_load_arn_test', self.tmpl) self.stack.store() identifier = self.stack.identifier() self.assertEqual(self.stack.parameters['AWS::StackId'], identifier.arn()) newstack = stack.Stack.load(self.ctx, stack_id=self.stack.id) self.assertEqual(identifier.arn(), newstack.parameters['AWS::StackId']) def test_load_reads_tenant_id(self): self.ctx.tenant = 'foobar' self.stack = stack.Stack(self.ctx, 'stack_name', self.tmpl) self.stack.store() stack_id = self.stack.id self.ctx.tenant = None self.stack = stack.Stack.load(self.ctx, stack_id=stack_id) self.assertEqual('foobar', self.stack.tenant_id) def test_load_reads_username_from_db(self): self.ctx.username = 'foobar' self.stack = stack.Stack(self.ctx, 'stack_name', self.tmpl) self.stack.store() stack_id = self.stack.id self.ctx.username = None stk = stack.Stack.load(self.ctx, stack_id=stack_id) self.assertEqual('foobar', stk.username) self.ctx.username = 'not foobar' stk = stack.Stack.load(self.ctx, stack_id=stack_id) self.assertEqual('foobar', stk.username) def test_load_all(self): stack1 = stack.Stack(self.ctx, 'stack1', self.tmpl) stack1.store() stack2 = stack.Stack(self.ctx, 'stack2', self.tmpl) stack2.store() stacks = list(stack.Stack.load_all(self.ctx)) self.assertEqual(2, len(stacks)) # Add another, nested, stack stack3 = stack.Stack(self.ctx, 'stack3', self.tmpl, owner_id=stack2.id) stack3.store() # Should still be 2 without show_nested stacks = list(stack.Stack.load_all(self.ctx)) self.assertEqual(2, len(stacks)) stacks = list(stack.Stack.load_all(self.ctx, show_nested=True)) self.assertEqual(3, len(stacks)) # A backup stack should not be returned stack1._backup_stack() stacks = list(stack.Stack.load_all(self.ctx)) self.assertEqual(2, len(stacks)) stacks = list(stack.Stack.load_all(self.ctx, show_nested=True)) self.assertEqual(3, len(stacks)) def test_load_all_not_found(self): stack1 = stack.Stack(self.ctx, 'stack1', self.tmpl) stack1.store() tmpl2 = template.Template(copy.deepcopy(empty_template)) stack2 = stack.Stack(self.ctx, 'stack2', tmpl2) stack2.store() def fake_load(ctx, template_id, tmpl): if template_id == stack2.t.id: raise exception.NotFound() else: return tmpl2 with mock.patch.object(template.Template, 'load') as tmpl_load: tmpl_load.side_effect = fake_load stacks = list(stack.Stack.load_all(self.ctx)) self.assertEqual(1, len(stacks)) def test_created_time(self): self.stack = stack.Stack(self.ctx, 'creation_time_test', self.tmpl) self.assertIsNone(self.stack.created_time) self.stack.store() self.assertIsNotNone(self.stack.created_time) def test_updated_time(self): self.stack = stack.Stack(self.ctx, 'updated_time_test', self.tmpl) self.assertIsNone(self.stack.updated_time) self.stack.store() self.stack.create() tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'R1': {'Type': 'GenericResourceType'}}} newstack = stack.Stack(self.ctx, 'updated_time_test', template.Template(tmpl)) 
self.stack.update(newstack) self.assertIsNotNone(self.stack.updated_time) def test_update_prev_raw_template(self): self.stack = stack.Stack(self.ctx, 'updated_time_test', self.tmpl) self.assertIsNone(self.stack.updated_time) self.stack.store() self.stack.create() self.assertIsNone(self.stack.prev_raw_template_id) tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'R1': {'Type': 'GenericResourceType'}}} newstack = stack.Stack(self.ctx, 'updated_time_test', template.Template(tmpl)) self.stack.update(newstack) self.assertIsNotNone(self.stack.prev_raw_template_id) prev_t = template.Template.load(self.ctx, self.stack.prev_raw_template_id) self.assertEqual(tmpl, prev_t.t) prev_id = self.stack.prev_raw_template_id tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'R2': {'Type': 'GenericResourceType'}}} newstack2 = stack.Stack(self.ctx, 'updated_time_test', template.Template(tmpl2)) self.stack.update(newstack2) self.assertIsNotNone(self.stack.prev_raw_template_id) self.assertNotEqual(prev_id, self.stack.prev_raw_template_id) prev_t2 = template.Template.load(self.ctx, self.stack.prev_raw_template_id) self.assertEqual(tmpl2, prev_t2.t) self.assertRaises(exception.NotFound, template.Template.load, self.ctx, prev_id) def test_access_policy_update(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'R1': {'Type': 'GenericResourceType'}, 'Policy': { 'Type': 'OS::Heat::AccessPolicy', 'Properties': { 'AllowedResources': ['R1'] }}}} self.stack = stack.Stack(self.ctx, 'update_stack_access_policy_test', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'R1': {'Type': 'GenericResourceType'}, 'R2': {'Type': 'GenericResourceType'}, 'Policy': { 'Type': 'OS::Heat::AccessPolicy', 'Properties': { 'AllowedResources': ['R1', 'R2'], }}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) def test_abandon_nodelete_project(self): self.stack = stack.Stack(self.ctx, 'delete_trust', self.tmpl) stack_id = self.stack.store() self.stack.set_stack_user_project_id(project_id='aproject456') db_s = stack_object.Stack.get_by_id(self.ctx, stack_id) self.assertIsNotNone(db_s) self.stack.delete(abandon=True) db_s = stack_object.Stack.get_by_id(self.ctx, stack_id) self.assertIsNone(db_s) self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) def test_suspend_resume(self): self.m.ReplayAll() tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'suspend_test', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) self.assertIsNone(self.stack.updated_time) self.stack.suspend() self.assertEqual((self.stack.SUSPEND, self.stack.COMPLETE), self.stack.state) stack_suspend_time = self.stack.updated_time self.assertIsNotNone(stack_suspend_time) self.stack.resume() self.assertEqual((self.stack.RESUME, self.stack.COMPLETE), self.stack.state) self.assertNotEqual(stack_suspend_time, self.stack.updated_time) self.m.VerifyAll() def test_suspend_stack_suspended_ok(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = 
stack.Stack(self.ctx, 'suspend_test', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) self.stack.suspend() self.assertEqual((self.stack.SUSPEND, self.stack.COMPLETE), self.stack.state) # unexpected to call Resource.suspend self.m.StubOutWithMock(generic_rsrc.GenericResource, 'suspend') self.m.ReplayAll() self.stack.suspend() self.assertEqual((self.stack.SUSPEND, self.stack.COMPLETE), self.stack.state) self.m.VerifyAll() def test_resume_stack_resumeed_ok(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'suspend_test', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) self.stack.suspend() self.assertEqual((self.stack.SUSPEND, self.stack.COMPLETE), self.stack.state) self.stack.resume() self.assertEqual((self.stack.RESUME, self.stack.COMPLETE), self.stack.state) # unexpected to call Resource.resume self.m.StubOutWithMock(generic_rsrc.GenericResource, 'resume') self.m.ReplayAll() self.stack.resume() self.assertEqual((self.stack.RESUME, self.stack.COMPLETE), self.stack.state) self.m.VerifyAll() def test_suspend_fail(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_suspend') exc = Exception('foo') generic_rsrc.GenericResource.handle_suspend().AndRaise(exc) self.m.ReplayAll() self.stack = stack.Stack(self.ctx, 'suspend_test_fail', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) self.stack.suspend() self.assertEqual((self.stack.SUSPEND, self.stack.FAILED), self.stack.state) self.assertEqual('Resource SUSPEND failed: Exception: ' 'resources.AResource: foo', self.stack.status_reason) self.m.VerifyAll() def test_resume_fail(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_resume') generic_rsrc.GenericResource.handle_resume().AndRaise(Exception('foo')) self.m.ReplayAll() self.stack = stack.Stack(self.ctx, 'resume_test_fail', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) self.stack.suspend() self.assertEqual((self.stack.SUSPEND, self.stack.COMPLETE), self.stack.state) self.stack.resume() self.assertEqual((self.stack.RESUME, self.stack.FAILED), self.stack.state) self.assertEqual('Resource RESUME failed: Exception: ' 'resources.AResource: foo', self.stack.status_reason) self.m.VerifyAll() def test_suspend_timeout(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_suspend') exc = scheduler.Timeout('foo', 0) generic_rsrc.GenericResource.handle_suspend().AndRaise(exc) self.m.ReplayAll() self.stack = stack.Stack(self.ctx, 'suspend_test_fail_timeout', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) self.stack.suspend() self.assertEqual((self.stack.SUSPEND, self.stack.FAILED), self.stack.state) self.assertEqual('Suspend timed out', self.stack.status_reason) 
self.m.VerifyAll() def test_resume_timeout(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_resume') exc = scheduler.Timeout('foo', 0) generic_rsrc.GenericResource.handle_resume().AndRaise(exc) self.m.ReplayAll() self.stack = stack.Stack(self.ctx, 'resume_test_fail_timeout', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) self.stack.suspend() self.assertEqual((self.stack.SUSPEND, self.stack.COMPLETE), self.stack.state) self.stack.resume() self.assertEqual((self.stack.RESUME, self.stack.FAILED), self.stack.state) self.assertEqual('Resume timed out', self.stack.status_reason) self.m.VerifyAll() def _get_stack_to_check(self, name): tpl = {"HeatTemplateFormatVersion": "2012-12-12", "Resources": { "A": {"Type": "GenericResourceType"}, "B": {"Type": "GenericResourceType"}}} self.stack = stack.Stack(self.ctx, name, template.Template(tpl), status_reason=name) self.stack.store() def _mock_check(res): res.handle_check = mock.Mock() [_mock_check(res) for res in six.itervalues(self.stack.resources)] return self.stack def test_check_supported(self): stack1 = self._get_stack_to_check('check-supported') stack1['A'].state_set(stack1['A'].CREATE, stack1['A'].COMPLETE) stack1['B'].state_set(stack1['B'].CREATE, stack1['B'].COMPLETE) stack1.check() self.assertEqual(stack1.COMPLETE, stack1.status) self.assertEqual(stack1.CHECK, stack1.action) [self.assertTrue(res.handle_check.called) for res in six.itervalues(stack1.resources)] self.assertNotIn('not fully supported', stack1.status_reason) def test_check_not_supported(self): stack1 = self._get_stack_to_check('check-not-supported') del stack1['B'].handle_check stack1['A'].state_set(stack1['A'].CREATE, stack1['A'].COMPLETE) stack1.check() self.assertEqual(stack1.COMPLETE, stack1.status) self.assertEqual(stack1.CHECK, stack1.action) self.assertTrue(stack1['A'].handle_check.called) self.assertIn('not fully supported', stack1.status_reason) def test_check_fail(self): stk = self._get_stack_to_check('check-fail') # if resource not created, check fail stk.check() self.assertEqual(stk.FAILED, stk.status) self.assertEqual(stk.CHECK, stk.action) self.assertFalse(stk['A'].handle_check.called) self.assertFalse(stk['B'].handle_check.called) self.assertIn('Resource A not created yet', stk.status_reason) self.assertIn('Resource B not created yet', stk.status_reason) # check if resource created stk['A'].handle_check.side_effect = Exception('fail-A') stk['B'].handle_check.side_effect = Exception('fail-B') stk['A'].state_set(stk['A'].CREATE, stk['A'].COMPLETE) stk['B'].state_set(stk['B'].CREATE, stk['B'].COMPLETE) stk.check() self.assertEqual(stk.FAILED, stk.status) self.assertEqual(stk.CHECK, stk.action) self.assertTrue(stk['A'].handle_check.called) self.assertTrue(stk['B'].handle_check.called) self.assertIn('fail-A', stk.status_reason) self.assertIn('fail-B', stk.status_reason) def test_adopt_stack(self): adopt_data = '''{ "action": "CREATE", "status": "COMPLETE", "name": "my-test-stack-name", "resources": { "AResource": { "status": "COMPLETE", "name": "AResource", "resource_data": {}, "metadata": {}, "resource_id": "test-res-id", "action": "CREATE", "type": "GenericResourceType" } } }''' tmpl = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}, 'Outputs': {'TestOutput': {'Value': { 'Fn::GetAtt': 
['AResource', 'Foo']}} } } self.stack = stack.Stack(utils.dummy_context(), 'test_stack', template.Template(tmpl), adopt_stack_data=json.loads(adopt_data)) self.stack.store() self.stack.adopt() res = self.stack['AResource'] self.assertEqual(u'test-res-id', res.resource_id) self.assertEqual('AResource', res.name) self.assertEqual('COMPLETE', res.status) self.assertEqual('ADOPT', res.action) self.assertEqual((self.stack.ADOPT, self.stack.COMPLETE), self.stack.state) loaded_stack = stack.Stack.load(self.ctx, self.stack.id) loaded_stack._update_all_resource_data(False, True) self.assertEqual('AResource', loaded_stack.outputs['TestOutput'].get_value()) self.assertIsNone(loaded_stack['AResource']._stored_properties_data) def test_adopt_stack_fails(self): adopt_data = '''{ "action": "CREATE", "status": "COMPLETE", "name": "my-test-stack-name", "resources": {} }''' tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'}, } }) self.stack = stack.Stack(utils.dummy_context(), 'test_stack', tmpl, adopt_stack_data=json.loads(adopt_data)) self.stack.store() self.stack.adopt() self.assertEqual((self.stack.ADOPT, self.stack.FAILED), self.stack.state) expected = ('Resource ADOPT failed: Exception: resources.foo: ' 'Resource ID was not provided.') self.assertEqual(expected, self.stack.status_reason) def test_adopt_stack_rollback(self): adopt_data = '''{ "name": "my-test-stack-name", "resources": {} }''' tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'}, } }) self.stack = stack.Stack(utils.dummy_context(), 'test_stack', tmpl, disable_rollback=False, adopt_stack_data=json.loads(adopt_data)) self.stack.store() with mock.patch.object(self.stack, 'delete', side_effect=self.stack.delete) as mock_delete: self.stack.adopt() self.assertEqual((self.stack.ROLLBACK, self.stack.COMPLETE), self.stack.state) mock_delete.assert_called_once_with(action=self.stack.ROLLBACK, abandon=True) def test_resource_by_refid(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'resource_by_refid_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertIn('AResource', self.stack) rsrc = self.stack['AResource'] rsrc.resource_id_set('aaaa') for action, status in ( (rsrc.INIT, rsrc.COMPLETE), (rsrc.CREATE, rsrc.IN_PROGRESS), (rsrc.CREATE, rsrc.COMPLETE), (rsrc.RESUME, rsrc.IN_PROGRESS), (rsrc.RESUME, rsrc.COMPLETE), (rsrc.UPDATE, rsrc.IN_PROGRESS), (rsrc.UPDATE, rsrc.COMPLETE), (rsrc.CHECK, rsrc.COMPLETE)): rsrc.state_set(action, status) stk_defn.update_resource_data(self.stack.defn, rsrc.name, rsrc.node_data()) self.assertEqual(rsrc, self.stack.resource_by_refid('aaaa')) rsrc.state_set(rsrc.DELETE, rsrc.IN_PROGRESS) stk_defn.update_resource_data(self.stack.defn, rsrc.name, rsrc.node_data()) try: self.assertIsNone(self.stack.resource_by_refid('aaaa')) self.assertIsNone(self.stack.resource_by_refid('bbbb')) finally: rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE) def test_resource_name_ref_by_depends_on(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'AResource'}, 'DependsOn': 'AResource'}}} self.stack = stack.Stack(self.ctx, 'resource_by_name_ref_stack', 
template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertIn('AResource', self.stack) self.assertIn('BResource', self.stack) rsrc = self.stack['AResource'] rsrc.resource_id_set('aaaa') b_rsrc = self.stack['BResource'] b_rsrc.resource_id_set('bbbb') b_foo_ref = b_rsrc.properties.get('Foo') for action, status in ( (rsrc.INIT, rsrc.COMPLETE), (rsrc.CREATE, rsrc.IN_PROGRESS), (rsrc.CREATE, rsrc.COMPLETE), (rsrc.RESUME, rsrc.IN_PROGRESS), (rsrc.RESUME, rsrc.COMPLETE), (rsrc.UPDATE, rsrc.IN_PROGRESS), (rsrc.UPDATE, rsrc.COMPLETE)): rsrc.state_set(action, status) ref_rsrc = self.stack.resource_by_refid(b_foo_ref) self.assertEqual(rsrc, ref_rsrc) self.assertIn(b_rsrc.name, ref_rsrc.required_by()) def test_create_failure_recovery(self): """Check that rollback still works with dynamic metadata. This test fails the second instance. """ tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'OverwrittenFnGetRefIdType', 'Properties': {'Foo': 'abc'}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'AResource'}}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=True) self.m.StubOutWithMock(generic_rsrc.ResourceWithFnGetRefIdType, 'handle_create') self.m.StubOutWithMock(generic_rsrc.ResourceWithFnGetRefIdType, 'handle_delete') # create generic_rsrc.ResourceWithFnGetRefIdType.handle_create().AndRaise( Exception) # update generic_rsrc.ResourceWithFnGetRefIdType.handle_delete() generic_rsrc.ResourceWithFnGetRefIdType.handle_create() self.m.ReplayAll() self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.FAILED), self.stack.state) self.assertEqual('abc', self.stack['AResource'].properties['Foo']) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl), disable_rollback=True) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual( 'ID-AResource', self.stack['BResource']._stored_properties_data['Foo']) self.m.VerifyAll() def test_create_bad_attribute(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Fn::GetAtt': ['AResource', 'Foo']}}}}} self.stack = stack.Stack(self.ctx, 'bad_attr_test_stack', template.Template(tmpl), disable_rollback=True) self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, '_update_stored_properties') generic_rsrc.ResourceWithProps._update_stored_properties().AndRaise( exception.InvalidTemplateAttribute(resource='a', key='foo')) self.m.ReplayAll() self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.FAILED), self.stack.state) self.assertEqual('Resource CREATE failed: The Referenced Attribute ' '(a foo) is incorrect.', self.stack.status_reason) self.m.VerifyAll() def test_stack_create_timeout(self): self.m.StubOutWithMock(scheduler.DependencyTaskGroup, '__call__') self.m.StubOutWithMock(timeutils, 'wallclock') stk = stack.Stack(self.ctx, 's', self.tmpl) def dummy_task(): while True: yield start_time = time.time() timeutils.wallclock().AndReturn(start_time) timeutils.wallclock().AndReturn(start_time + 1) scheduler.DependencyTaskGroup.__call__().AndReturn(dummy_task()) 
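        # A sketch of the timing being simulated here (assumption: the
        # scheduler samples timeutils.wallclock() before each step of the
        # running task): the first two mocked return values above cover
        # task start-up, and the final one below jumps past
        # stk.timeout_secs(), so create() sees an expired timeout on its
        # next progress check without any real sleeping.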
timeutils.wallclock().AndReturn(start_time + stk.timeout_secs() + 1) self.m.ReplayAll() stk.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.FAILED), stk.state) self.assertEqual('Create timed out', stk.status_reason) self.m.VerifyAll() def test_stack_name_valid(self): stk = stack.Stack(self.ctx, 's', self.tmpl) self.assertIsInstance(stk, stack.Stack) stk = stack.Stack(self.ctx, 'stack123', self.tmpl) self.assertIsInstance(stk, stack.Stack) stk = stack.Stack(self.ctx, 'test.stack', self.tmpl) self.assertIsInstance(stk, stack.Stack) stk = stack.Stack(self.ctx, 'test_stack', self.tmpl) self.assertIsInstance(stk, stack.Stack) stk = stack.Stack(self.ctx, 'TEST', self.tmpl) self.assertIsInstance(stk, stack.Stack) stk = stack.Stack(self.ctx, 'test-stack', self.tmpl) self.assertIsInstance(stk, stack.Stack) def test_stack_name_invalid(self): gt_255_chars = ('abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz' 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz' 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz' 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz' 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuv') stack_names = ['_foo', '1bad', '.kcats', 'test stack', ' teststack', '^-^', '"stack"', '1234', 'cat|dog', '$(foo)', 'test/stack', 'test\\stack', 'test::stack', 'test;stack', 'test~stack', '#test', gt_255_chars] for stack_name in stack_names: ex = self.assertRaises( exception.StackValidationFailed, stack.Stack, self.ctx, stack_name, self.tmpl) self.assertIn("Invalid stack name %s must contain" % stack_name, six.text_type(ex)) def test_stack_name_invalid_type(self): stack_names = [{"bad": 123}, ["no", "lists"]] for stack_name in stack_names: ex = self.assertRaises( exception.StackValidationFailed, stack.Stack, self.ctx, stack_name, self.tmpl) self.assertIn("Invalid stack name %s, must be a string" % stack_name, six.text_type(ex)) def test_resource_state_get_att(self): tmpl = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}, 'Outputs': {'TestOutput': {'Value': { 'Fn::GetAtt': ['AResource', 'Foo']}} } } self.stack = stack.Stack(self.ctx, 'resource_state_get_att', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertIn('AResource', self.stack) rsrc = self.stack['AResource'] rsrc.resource_id_set('aaaa') self.assertEqual('AResource', rsrc.FnGetAtt('Foo')) for action, status in ( (rsrc.CREATE, rsrc.IN_PROGRESS), (rsrc.CREATE, rsrc.COMPLETE), (rsrc.CREATE, rsrc.FAILED), (rsrc.SUSPEND, rsrc.IN_PROGRESS), (rsrc.SUSPEND, rsrc.COMPLETE), (rsrc.RESUME, rsrc.IN_PROGRESS), (rsrc.RESUME, rsrc.COMPLETE), (rsrc.UPDATE, rsrc.IN_PROGRESS), (rsrc.UPDATE, rsrc.FAILED), (rsrc.UPDATE, rsrc.COMPLETE), (rsrc.DELETE, rsrc.IN_PROGRESS), (rsrc.DELETE, rsrc.FAILED), (rsrc.DELETE, rsrc.COMPLETE)): rsrc.state_set(action, status) self.stack._update_all_resource_data(False, True) self.assertEqual('AResource', self.stack.outputs['TestOutput'].get_value()) def test_resource_required_by(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'GenericResourceType', 'DependsOn': 'AResource'}, 'CResource': {'Type': 'GenericResourceType', 'DependsOn': 'BResource'}, 'DResource': {'Type': 'GenericResourceType', 'DependsOn': 'BResource'}}} self.stack = stack.Stack(self.ctx, 'depends_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() 
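        # Dependency graph declared by the template above (an arrow points
        # from a resource to the one it depends on):
        #     BResource -> AResource
        #     CResource -> BResource
        #     DResource -> BResource
        # required_by() reports the direct reverse edges, which is what the
        # assertions below check: only BResource requires AResource,
        # nothing requires CResource, and both CResource and DResource
        # require BResource.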
self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(['BResource'], self.stack['AResource'].required_by()) self.assertEqual([], self.stack['CResource'].required_by()) required_by = self.stack['BResource'].required_by() self.assertEqual(2, len(required_by)) for r in ['CResource', 'DResource']: self.assertIn(r, required_by) def test_resource_multi_required_by(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'GenericResourceType'}, 'CResource': {'Type': 'GenericResourceType'}, 'DResource': {'Type': 'GenericResourceType', 'DependsOn': ['AResource', 'BResource', 'CResource']}}} self.stack = stack.Stack(self.ctx, 'depends_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) for r in ['AResource', 'BResource', 'CResource']: self.assertEqual(['DResource'], self.stack[r].required_by()) def test_store_saves_owner(self): """owner_id attribute of Store is saved to the database when stored.""" self.stack = stack.Stack(self.ctx, 'owner_stack', self.tmpl) stack_ownee = stack.Stack(self.ctx, 'ownee_stack', self.tmpl, owner_id=self.stack.id) stack_ownee.store() db_stack = stack_object.Stack.get_by_id(self.ctx, stack_ownee.id) self.assertEqual(self.stack.id, db_stack.owner_id) def test_init_user_creds_id(self): ctx_init = utils.dummy_context(user='my_user', password='my_pass') ctx_init.request_id = self.ctx.request_id creds = ucreds_object.UserCreds.create(ctx_init) self.stack = stack.Stack(self.ctx, 'creds_init', self.tmpl, user_creds_id=creds.id) self.stack.store() self.assertEqual(creds.id, self.stack.user_creds_id) ctx_expected = ctx_init.to_dict() ctx_expected['auth_token'] = None self.assertEqual(ctx_expected, self.stack.stored_context().to_dict()) def test_tags_property_get_set(self): self.stack = stack.Stack(self.ctx, 'stack_tags', self.tmpl) self.stack.store() stack_id = self.stack.id test_stack = stack.Stack.load(self.ctx, stack_id=stack_id) self.assertIsNone(test_stack.tags) self.stack = stack.Stack(self.ctx, 'stack_name', self.tmpl) self.stack.tags = ['tag1', 'tag2'] self.assertEqual(['tag1', 'tag2'], self.stack._tags) self.stack.store() stack_id = self.stack.id test_stack = stack.Stack.load(self.ctx, stack_id=stack_id) self.assertIsNone(test_stack._tags) self.assertEqual(['tag1', 'tag2'], test_stack.tags) self.assertEqual(['tag1', 'tag2'], test_stack._tags) def test_load_reads_tags(self): self.stack = stack.Stack(self.ctx, 'stack_tags', self.tmpl) self.stack.store() stack_id = self.stack.id test_stack = stack.Stack.load(self.ctx, stack_id=stack_id) self.assertIsNone(test_stack.tags) self.stack = stack.Stack(self.ctx, 'stack_name', self.tmpl, tags=['tag1', 'tag2']) self.stack.store() stack_id = self.stack.id test_stack = stack.Stack.load(self.ctx, stack_id=stack_id) self.assertEqual(['tag1', 'tag2'], test_stack.tags) def test_store_saves_tags(self): self.stack = stack.Stack(self.ctx, 'tags_stack', self.tmpl) self.stack.store() db_tags = stack_tag_object.StackTagList.get(self.stack.context, self.stack.id) self.assertIsNone(db_tags) self.stack = stack.Stack(self.ctx, 'tags_stack', self.tmpl, tags=['tag1', 'tag2']) self.stack.store() db_tags = stack_tag_object.StackTagList.get(self.stack.context, self.stack.id) self.assertEqual('tag1', db_tags[0].tag) self.assertEqual('tag2', db_tags[1].tag) def test_store_saves_creds(self): """A user_creds entry is created on first 
stack store.""" cfg.CONF.set_default('deferred_auth_method', 'password') self.stack = stack.Stack(self.ctx, 'creds_stack', self.tmpl) self.stack.store() # The store should've created a user_creds row and set user_creds_id db_stack = stack_object.Stack.get_by_id(self.ctx, self.stack.id) user_creds_id = db_stack.user_creds_id self.assertIsNotNone(user_creds_id) # should've stored the username/password in the context user_creds = ucreds_object.UserCreds.get_by_id(self.ctx, user_creds_id) self.assertEqual(self.ctx.username, user_creds.get('username')) self.assertEqual(self.ctx.password, user_creds.get('password')) self.assertIsNone(user_creds.get('trust_id')) self.assertIsNone(user_creds.get('trustor_user_id')) # Check the stored_context is as expected expected_context = context.RequestContext.from_dict(self.ctx.to_dict()) expected_context.auth_token = None stored_context = self.stack.stored_context().to_dict() self.assertEqual(expected_context.to_dict(), stored_context) # Store again, ID should not change self.stack.store() self.assertEqual(user_creds_id, db_stack.user_creds_id) def test_store_saves_creds_trust(self): """A user_creds entry is created on first stack store.""" cfg.CONF.set_override('deferred_auth_method', 'trusts') self.m.StubOutWithMock(keystone.KeystoneClientPlugin, '_create') keystone.KeystoneClientPlugin._create().AndReturn( fake_ks.FakeKeystoneClient(user_id='auser123')) keystone.KeystoneClientPlugin._create().AndReturn( fake_ks.FakeKeystoneClient(user_id='auser123')) self.m.ReplayAll() self.stack = stack.Stack(self.ctx, 'creds_stack', self.tmpl) self.stack.store() # The store should've created a user_creds row and set user_creds_id db_stack = stack_object.Stack.get_by_id(self.ctx, self.stack.id) user_creds_id = db_stack.user_creds_id self.assertIsNotNone(user_creds_id) # should've stored the trust_id and trustor_user_id returned from # FakeKeystoneClient.create_trust_context, username/password should # not have been stored user_creds = ucreds_object.UserCreds.get_by_id(self.ctx, user_creds_id) self.assertIsNone(user_creds.get('username')) self.assertIsNone(user_creds.get('password')) self.assertEqual('atrust', user_creds.get('trust_id')) self.assertEqual('auser123', user_creds.get('trustor_user_id')) auth = self.patchobject(context.RequestContext, 'trusts_auth_plugin') self.patchobject(auth, 'get_access', return_value=fakes.FakeAccessInfo([], None, None)) # Check the stored_context is as expected expected_context = context.RequestContext( trust_id='atrust', trustor_user_id='auser123', request_id=self.ctx.request_id, is_admin=False).to_dict() stored_context = self.stack.stored_context().to_dict() self.assertEqual(expected_context, stored_context) # Store again, ID should not change self.stack.store() self.assertEqual(user_creds_id, db_stack.user_creds_id) def test_backup_copies_user_creds_id(self): ctx_init = utils.dummy_context(user='my_user', password='my_pass') ctx_init.request_id = self.ctx.request_id creds = ucreds_object.UserCreds.create(ctx_init) self.stack = stack.Stack(self.ctx, 'creds_init', self.tmpl, user_creds_id=creds.id) self.stack.store() self.assertEqual(creds.id, self.stack.user_creds_id) backup = self.stack._backup_stack() self.assertEqual(creds.id, backup.user_creds_id) def test_stored_context_err(self): """Test stored_context error path.""" self.stack = stack.Stack(self.ctx, 'creds_stack', self.tmpl) ex = self.assertRaises(exception.Error, self.stack.stored_context) expected_err = 'Attempt to use stored_context with no user_creds' 
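        # The stack above was constructed but never store()d, so it has no
        # user_creds_id to look up; stored_context() is expected to fail
        # fast with the message checked on the next line.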
self.assertEqual(expected_err, six.text_type(ex)) def test_store_gets_username_from_stack(self): self.stack = stack.Stack(self.ctx, 'username_stack', self.tmpl, username='foobar') self.ctx.username = 'not foobar' self.stack.store() db_stack = stack_object.Stack.get_by_id(self.ctx, self.stack.id) self.assertEqual('foobar', db_stack.username) def test_store_backup_true(self): self.stack = stack.Stack(self.ctx, 'username_stack', self.tmpl, username='foobar') self.ctx.username = 'not foobar' self.stack.store(backup=True) db_stack = stack_object.Stack.get_by_id(self.ctx, self.stack.id) self.assertTrue(db_stack.backup) def test_store_backup_false(self): self.stack = stack.Stack(self.ctx, 'username_stack', self.tmpl, username='foobar') self.ctx.username = 'not foobar' self.stack.store(backup=False) db_stack = stack_object.Stack.get_by_id(self.ctx, self.stack.id) self.assertFalse(db_stack.backup) def test_init_stored_context_false(self): ctx_init = utils.dummy_context(user='mystored_user', password='mystored_pass') ctx_init.request_id = self.ctx.request_id creds = ucreds_object.UserCreds.create(ctx_init) self.stack = stack.Stack(self.ctx, 'creds_store1', self.tmpl, user_creds_id=creds.id, use_stored_context=False) ctx_expected = self.ctx.to_dict() self.assertEqual(ctx_expected, self.stack.context.to_dict()) self.stack.store() self.assertEqual(ctx_expected, self.stack.context.to_dict()) def test_init_stored_context_true(self): ctx_init = utils.dummy_context(user='mystored_user', password='mystored_pass') ctx_init.request_id = self.ctx.request_id creds = ucreds_object.UserCreds.create(ctx_init) self.stack = stack.Stack(self.ctx, 'creds_store2', self.tmpl, user_creds_id=creds.id, use_stored_context=True) ctx_expected = ctx_init.to_dict() ctx_expected['auth_token'] = None self.assertEqual(ctx_expected, self.stack.context.to_dict()) self.stack.store() self.assertEqual(ctx_expected, self.stack.context.to_dict()) def test_load_stored_context_false(self): ctx_init = utils.dummy_context(user='mystored_user', password='mystored_pass') ctx_init.request_id = self.ctx.request_id creds = ucreds_object.UserCreds.create(ctx_init) self.stack = stack.Stack(self.ctx, 'creds_store3', self.tmpl, user_creds_id=creds.id) self.stack.store() load_stack = stack.Stack.load(self.ctx, stack_id=self.stack.id, use_stored_context=False) self.assertEqual(self.ctx.to_dict(), load_stack.context.to_dict()) def test_load_stored_context_true(self): ctx_init = utils.dummy_context(user='mystored_user', password='mystored_pass') ctx_init.request_id = self.ctx.request_id creds = ucreds_object.UserCreds.create(ctx_init) self.stack = stack.Stack(self.ctx, 'creds_store4', self.tmpl, user_creds_id=creds.id) self.stack.store() ctx_expected = ctx_init.to_dict() ctx_expected['auth_token'] = None load_stack = stack.Stack.load(self.ctx, stack_id=self.stack.id, use_stored_context=True) self.assertEqual(ctx_expected, load_stack.context.to_dict()) def test_load_honors_owner(self): """Loading a stack from the database will set the owner_id. Loading a stack from the database will set the owner_id of the resultant stack appropriately. 
""" self.stack = stack.Stack(self.ctx, 'owner_stack', self.tmpl) stack_ownee = stack.Stack(self.ctx, 'ownee_stack', self.tmpl, owner_id=self.stack.id) stack_ownee.store() saved_stack = stack.Stack.load(self.ctx, stack_id=stack_ownee.id) self.assertEqual(self.stack.id, saved_stack.owner_id) def test_requires_deferred_auth(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'GenericResourceType'}, 'CResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=False) self.assertFalse(self.stack.requires_deferred_auth()) self.stack['CResource'].requires_deferred_auth = True self.assertTrue(self.stack.requires_deferred_auth()) def test_stack_user_project_id_default(self): self.stack = stack.Stack(self.ctx, 'user_project_none', self.tmpl) self.stack.store() self.assertIsNone(self.stack.stack_user_project_id) db_stack = stack_object.Stack.get_by_id(self.ctx, self.stack.id) self.assertIsNone(db_stack.stack_user_project_id) def test_stack_user_project_id_constructor(self): self.stub_keystoneclient() self.m.ReplayAll() self.stack = stack.Stack(self.ctx, 'user_project_init', self.tmpl, stack_user_project_id='aproject1234') self.stack.store() self.assertEqual('aproject1234', self.stack.stack_user_project_id) db_stack = stack_object.Stack.get_by_id(self.ctx, self.stack.id) self.assertEqual('aproject1234', db_stack.stack_user_project_id) self.stack.delete() self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) self.m.VerifyAll() def test_stack_user_project_id_setter(self): self.stub_keystoneclient() self.m.ReplayAll() self.stack = stack.Stack(self.ctx, 'user_project_init', self.tmpl) self.stack.store() self.assertIsNone(self.stack.stack_user_project_id) self.stack.set_stack_user_project_id(project_id='aproject456') self.assertEqual('aproject456', self.stack.stack_user_project_id) db_stack = stack_object.Stack.get_by_id(self.ctx, self.stack.id) self.assertEqual('aproject456', db_stack.stack_user_project_id) self.stack.delete() self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) self.m.VerifyAll() def test_stack_user_project_id_create(self): self.stub_keystoneclient() self.m.ReplayAll() self.stack = stack.Stack(self.ctx, 'user_project_init', self.tmpl) self.stack.store() self.assertIsNone(self.stack.stack_user_project_id) self.stack.create_stack_user_project_id() self.assertEqual('aprojectid', self.stack.stack_user_project_id) db_stack = stack_object.Stack.get_by_id(self.ctx, self.stack.id) self.assertEqual('aprojectid', db_stack.stack_user_project_id) self.stack.delete() self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) self.m.VerifyAll() def test_preview_resources_returns_list_of_resource_previews(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'preview_stack', template.Template(tmpl)) res = mock.Mock() res.preview.return_value = 'foo' self.stack._resources = {'r1': res} resources = self.stack.preview_resources() self.assertEqual(['foo'], resources) def test_correct_outputs(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'def'}}}, 'Outputs': { 'Resource_attr': { 'Value': { 
'Fn::GetAtt': ['AResource', 'Foo']}}}} self.stack = stack.Stack(self.ctx, 'stack_with_correct_outputs', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual('abc', self.stack['AResource'].properties['Foo']) # According _resolve_attribute method in GenericResource output # value will be equal with name AResource. self.stack._update_all_resource_data(False, True) self.assertEqual('AResource', self.stack.outputs['Resource_attr'].get_value()) self.stack.delete() self.assertEqual((self.stack.DELETE, self.stack.COMPLETE), self.stack.state) def test_incorrect_outputs(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}, 'Outputs': { 'Resource_attr': { 'Value': { 'Fn::GetAtt': ['AResource', 'Bar']}}}} self.stack = stack.Stack(self.ctx, 'stack_with_incorrect_outputs', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) ex = self.assertRaises(exception.InvalidTemplateAttribute, self.stack.outputs['Resource_attr'].get_value) self.assertIn('The Referenced Attribute (AResource Bar) is ' 'incorrect.', six.text_type(ex)) self.stack.delete() self.assertEqual((self.stack.DELETE, self.stack.COMPLETE), self.stack.state) def test_stack_load_no_param_value_validation(self): """Test stack loading with disabled parameter value validation.""" tmpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: flavor: type: string description: A flavor. constraints: - custom_constraint: nova.flavor resources: a_resource: type: GenericResourceType ''') # Mock objects so the query for flavors in server.FlavorConstraint # works for stack creation fc = fakes.FakeClient() self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') nova.NovaClientPlugin._create().AndReturn(fc) fc.flavors = self.m.CreateMockAnything() flavor = collections.namedtuple("Flavor", ["id", "name"]) flavor.id = "1234" flavor.name = "dummy" fc.flavors.get('1234').AndReturn(flavor) self.m.ReplayAll() test_env = environment.Environment({'flavor': '1234'}) self.stack = stack.Stack(self.ctx, 'stack_with_custom_constraint', template.Template(tmpl, env=test_env)) self.stack.validate() self.stack.store() self.stack.create() stack_id = self.stack.id self.m.VerifyAll() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) loaded_stack = stack.Stack.load(self.ctx, stack_id=self.stack.id) self.assertEqual(stack_id, loaded_stack.parameters['OS::stack_id']) # verify that fc.flavors.list() has not been called, i.e. 
verify that # parameter value validation did not happen and FlavorConstraint was # not invoked self.m.VerifyAll() def test_snapshot_delete(self): snapshots = [] class ResourceDeleteSnapshot(generic_rsrc.ResourceWithProps): def handle_delete_snapshot(self, data): snapshots.append(data) resource._register_class( 'ResourceDeleteSnapshot', ResourceDeleteSnapshot) tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceDeleteSnapshot'}}} self.stack = stack.Stack(self.ctx, 'snapshot_stack', template.Template(tmpl)) data = self.stack.prepare_abandon() fake_snapshot = collections.namedtuple('Snapshot', ('data',))(data) self.stack.delete_snapshot(fake_snapshot) self.assertEqual([data['resources']['AResource']], snapshots) def test_delete_snapshot_without_data(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'R1': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'snapshot_stack', template.Template(tmpl)) fake_snapshot = collections.namedtuple('Snapshot', ('data',))(None) self.assertIsNone(self.stack.delete_snapshot(fake_snapshot)) def test_incorrect_outputs_cfn_get_attr(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}, 'Outputs': { 'Resource_attr': { 'Value': { 'Fn::GetAtt': ['AResource', 'Bar']}}}} self.stack = stack.Stack(self.ctx, 'stack_with_correct_outputs', template.Template(tmpl)) self.assertRaisesRegex( exception.StackValidationFailed, ('Outputs.Resource_attr.Value.Fn::GetAtt: The Referenced ' r'Attribute \(AResource Bar\) is incorrect.'), self.stack.validate) def test_incorrect_outputs_cfn_incorrect_reference(self): tmpl = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Outputs: Output: Value: Fn::GetAtt: - Resource - Foo """) self.stack = stack.Stack(self.ctx, 'stack_with_incorrect_outputs', template.Template(tmpl)) ex = self.assertRaises(exception.StackValidationFailed, self.stack.validate) self.assertIn('The specified reference "Resource" ' '(in unknown) is incorrect.', six.text_type(ex)) def test_incorrect_outputs_incorrect_reference(self): tmpl = template_format.parse(""" heat_template_version: 2013-05-23 outputs: output: value: { get_attr: [resource, foo] } """) self.stack = stack.Stack(self.ctx, 'stack_with_incorrect_outputs', template.Template(tmpl)) ex = self.assertRaises(exception.StackValidationFailed, self.stack.validate) self.assertIn('The specified reference "resource" ' '(in unknown) is incorrect.', six.text_type(ex)) def test_incorrect_outputs_cfn_missing_value(self): tmpl = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Resources: AResource: Type: ResourceWithPropsType Properties: Foo: abc Outputs: Resource_attr: Description: the attr """) self.stack = stack.Stack(self.ctx, 'stack_with_correct_outputs', template.Template(tmpl)) ex = self.assertRaises(exception.StackValidationFailed, self.stack.validate) self.assertIn('Each output definition must contain a Value key.', six.text_type(ex)) self.assertIn('Outputs.Resource_attr', six.text_type(ex)) def test_incorrect_outputs_cfn_empty_value(self): tmpl = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Resources: AResource: Type: ResourceWithPropsType Properties: Foo: abc Outputs: Resource_attr: Value: '' """) self.stack = stack.Stack(self.ctx, 'stack_with_correct_outputs', template.Template(tmpl)) self.assertIsNone(self.stack.validate()) def test_incorrect_outputs_cfn_none_value(self): tmpl = 
template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Resources: AResource: Type: ResourceWithPropsType Properties: Foo: abc Outputs: Resource_attr: Value: """) self.stack = stack.Stack(self.ctx, 'stack_with_correct_outputs', template.Template(tmpl)) self.assertIsNone(self.stack.validate()) def test_incorrect_outputs_cfn_string_data(self): tmpl = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Resources: AResource: Type: ResourceWithPropsType Properties: Foo: abc Outputs: Resource_attr: This is wrong data """) self.stack = stack.Stack(self.ctx, 'stack_with_correct_outputs', template.Template(tmpl)) ex = self.assertRaises(exception.StackValidationFailed, self.stack.validate) self.assertIn('Found a %s instead' % six.text_type.__name__, six.text_type(ex)) self.assertIn('Outputs.Resource_attr', six.text_type(ex)) def test_prop_validate_value(self): tmpl = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Resources: AResource: Type: ResourceWithPropsType Properties: FooInt: notanint """) self.stack = stack.Stack(self.ctx, 'stack_with_bad_property', template.Template(tmpl)) ex = self.assertRaises(exception.StackValidationFailed, self.stack.validate) self.assertIn("'notanint' is not an integer", six.text_type(ex)) self.stack.strict_validate = False self.assertIsNone(self.stack.validate()) def test_disable_validate_required_param(self): tmpl = template_format.parse(""" heat_template_version: 2013-05-23 parameters: aparam: type: number resources: AResource: type: ResourceWithPropsRefPropOnValidate properties: FooInt: {get_param: aparam} """) self.stack = stack.Stack(self.ctx, 'stack_with_reqd_param', template.Template(tmpl)) ex = self.assertRaises(exception.UserParameterMissing, self.stack.validate) self.assertIn("The Parameter (aparam) was not provided", six.text_type(ex)) self.stack.strict_validate = False ex = self.assertRaises(exception.StackValidationFailed, self.stack.validate) self.assertIn("The Parameter (aparam) was not provided", six.text_type(ex)) self.assertIsNone(self.stack.validate(validate_res_tmpl_only=True)) def test_nodisable_validate_tmpl_err(self): tmpl = template_format.parse(""" heat_template_version: 2013-05-23 resources: AResource: type: ResourceWithPropsRefPropOnValidate depends_on: noexist properties: FooInt: 123 """) self.stack = stack.Stack(self.ctx, 'stack_with_tmpl_err', template.Template(tmpl)) ex = self.assertRaises(exception.InvalidTemplateReference, self.stack.validate) self.assertIn( "The specified reference \"noexist\" (in AResource) is incorrect", six.text_type(ex)) self.stack.strict_validate = False ex = self.assertRaises(exception.InvalidTemplateReference, self.stack.validate) self.assertIn( "The specified reference \"noexist\" (in AResource) is incorrect", six.text_type(ex)) ex = self.assertRaises(exception.InvalidTemplateReference, self.stack.validate, validate_res_tmpl_only=True) self.assertIn( "The specified reference \"noexist\" (in AResource) is incorrect", six.text_type(ex)) def test_validate_property_getatt(self): tmpl = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'R1': {'Type': 'ResourceWithPropsType'}, 'R2': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': {'Fn::GetAtt': ['R1', 'Foo']}}}} } self.stack = stack.Stack(self.ctx, 'test_stack', template.Template(tmpl)) self.assertIsNone(self.stack.validate()) def test_param_validate_value(self): tmpl = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Parameters: foo: Type: Number """) env1 = 
environment.Environment({'parameters': {'foo': 'abc'}}) self.stack = stack.Stack(self.ctx, 'stack_with_bad_param', template.Template(tmpl, env=env1)) ex = self.assertRaises(exception.StackValidationFailed, self.stack.validate) self.assertIn("Parameter 'foo' is invalid: could not convert " "string to float:", six.text_type(ex)) self.assertIn("abc", six.text_type(ex)) self.stack.strict_validate = False self.assertIsNone(self.stack.validate()) def test_incorrect_outputs_cfn_list_data(self): tmpl = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Resources: AResource: Type: ResourceWithPropsType Properties: Foo: abc Outputs: Resource_attr: - Data is not what it seems """) self.stack = stack.Stack(self.ctx, 'stack_with_correct_outputs', template.Template(tmpl)) ex = self.assertRaises(exception.StackValidationFailed, self.stack.validate) self.assertIn('Found a list', six.text_type(ex)) self.assertIn('Outputs.Resource_attr', six.text_type(ex)) def test_incorrect_deletion_policy(self): tmpl = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Parameters: Deletion_Policy: Type: String Default: [1, 2] Resources: AResource: Type: ResourceWithPropsType DeletionPolicy: {Ref: Deletion_Policy} Properties: Foo: abc """) self.stack = stack.Stack(self.ctx, 'stack_bad_delpol', template.Template(tmpl)) ex = self.assertRaises(exception.StackValidationFailed, self.stack.validate) self.assertIn('Invalid deletion policy "[1, 2]"', six.text_type(ex)) def test_deletion_policy_apply_ref(self): tmpl = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Parameters: Deletion_Policy: Type: String Default: Delete Resources: AResource: Type: ResourceWithPropsType DeletionPolicy: wibble Properties: Foo: abc DeletionPolicy: {Ref: Deletion_Policy} """) self.stack = stack.Stack(self.ctx, 'stack_delpol_get_param', template.Template(tmpl)) self.stack.validate() self.stack.store() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) def test_deletion_policy_apply_get_param(self): tmpl = template_format.parse(""" heat_template_version: 2016-04-08 parameters: deletion_policy: type: string default: Delete resources: AResource: type: ResourceWithPropsType deletion_policy: {get_param: deletion_policy} properties: Foo: abc """) self.stack = stack.Stack(self.ctx, 'stack_delpol_get_param', template.Template(tmpl)) self.stack.validate() self.stack.store() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) def test_incorrect_deletion_policy_hot(self): tmpl = template_format.parse(""" heat_template_version: 2013-05-23 parameters: deletion_policy: type: string default: [1, 2] resources: AResource: type: ResourceWithPropsType deletion_policy: {get_param: deletion_policy} properties: Foo: abc """) self.stack = stack.Stack(self.ctx, 'stack_bad_delpol', template.Template(tmpl)) ex = self.assertRaises(exception.StackValidationFailed, self.stack.validate) self.assertIn('Invalid deletion policy "[1, 2]', six.text_type(ex)) def test_incorrect_outputs_hot_get_attr(self): tmpl = {'heat_template_version': '2013-05-23', 'resources': { 'AResource': {'type': 'ResourceWithPropsType', 'properties': {'Foo': 'abc'}}}, 'outputs': { 'resource_attr': { 'value': { 'get_attr': ['AResource', 'Bar']}}}} self.stack = stack.Stack(self.ctx, 'stack_with_correct_outputs', template.Template(tmpl)) self.assertRaisesRegex( exception.StackValidationFailed, ('outputs.resource_attr.value.get_attr: The Referenced Attribute ' r'\(AResource 
Bar\) is incorrect.'), self.stack.validate) def test_snapshot_save_called_first(self): def snapshotting_called_first(stack, action, status, reason): self.assertEqual(stack.status, stack.IN_PROGRESS) self.assertEqual(stack.action, stack.SNAPSHOT) tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'A': {'Type': 'GenericResourceType'}, 'B': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'stack_details_test', template.Template(tmpl)) self.stack.store() self.stack.create() self.stack.snapshot(save_snapshot_func=snapshotting_called_first) def test_restore(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'A': {'Type': 'GenericResourceType'}, 'B': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'stack_details_test', template.Template(tmpl)) self.stack.store() self.stack.create() data = copy.deepcopy(self.stack.prepare_abandon()) fake_snapshot = collections.namedtuple( 'Snapshot', ('data', 'stack_id'))(data, self.stack.id) new_tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'A': {'Type': 'GenericResourceType'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(new_tmpl)) self.stack.update(updated_stack) self.assertEqual(1, len(self.stack.resources)) self.stack.restore(fake_snapshot) self.assertEqual((stack.Stack.RESTORE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(2, len(self.stack.resources)) def test_restore_with_original_env(self): tmpl = { 'heat_template_version': '2013-05-23', 'parameters': { 'foo': {'type': 'string'} }, 'resources': { 'A': { 'type': 'ResourceWithPropsType', 'properties': {'Foo': {'get_param': 'foo'}} } } } self.stack = stack.Stack(self.ctx, 'stack_restore_test', template.Template( tmpl, env=environment.Environment( {'foo': 'abc'}))) self.stack.store() self.stack.create() self.assertEqual('abc', self.stack.resources['A'].properties['Foo']) data = copy.deepcopy(self.stack.prepare_abandon()) fake_snapshot = collections.namedtuple( 'Snapshot', ('data', 'stack_id'))(data, self.stack.id) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template( tmpl, env=environment.Environment( {'foo': 'xyz'}))) self.stack.update(updated_stack) self.assertEqual('xyz', self.stack.resources['A'].properties['Foo']) self.stack.restore(fake_snapshot) self.assertEqual((stack.Stack.RESTORE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual('abc', self.stack.resources['A'].properties['Foo']) def test_hot_restore(self): tpl = {'heat_template_version': '2013-05-23', 'resources': {'A': {'type': 'ResourceWithRestoreType'}}} self.stack = stack.Stack(self.ctx, 'stack_details_test', template.Template(tpl)) self.stack.store() self.stack.create() data = self.stack.prepare_abandon() data['resources']['A']['resource_data']['a_string'] = 'foo' fake_snapshot = collections.namedtuple( 'Snapshot', ('data', 'stack_id'))(data, self.stack.id) self.stack.restore(fake_snapshot) self.assertEqual((stack.Stack.RESTORE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'foo', self.stack.resources['A'].properties['a_string']) @mock.patch.object(stack.Stack, 'db_resource_get') def test_lightweight_stack_getatt(self, mock_drg): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'}, 'bar': { 'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Fn::GetAtt': ['foo', 'bar']}, } } } }) rsrcs_data = {'foo': {'reference_id': 'foo-id', 'attrs': {'bar': 'baz'}, 'uuid': mock.ANY, 'id': mock.ANY, 
'action': 'CREATE', 'status': 'COMPLETE'}, 'bar': {'reference_id': 'bar-id', 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE'}} cache_data = {n: node_data.NodeData.from_dict(d) for n, d in rsrcs_data.items()} tmpl_stack = stack.Stack(self.ctx, 'test', tmpl) tmpl_stack.store() lightweight_stack = stack.Stack.load(self.ctx, stack_id=tmpl_stack.id, cache_data=cache_data) # Check if the property has the appropriate resolved value. bar = resource.Resource( 'bar', lightweight_stack.defn.resource_definition('bar'), lightweight_stack) self.assertEqual('baz', bar.properties['Foo']) # Make sure FnGetAtt returns the cached value. attr_value = lightweight_stack.defn['foo'].FnGetAtt('bar') self.assertEqual('baz', attr_value) # Make sure calls are not made to the database to retrieve the # resource state. self.assertFalse(mock_drg.called) @mock.patch.object(stack.Stack, 'db_resource_get') def test_lightweight_stack_getrefid(self, mock_drg): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'}, 'bar': { 'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'foo'}, } } } }) rsrcs_data = {'foo': {'reference_id': 'physical-resource-id', 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE'}, 'bar': {'reference_id': 'bar-id', 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE'}} cache_data = {n: node_data.NodeData.from_dict(d) for n, d in rsrcs_data.items()} tmpl_stack = stack.Stack(self.ctx, 'test', tmpl) tmpl_stack.store() lightweight_stack = stack.Stack.load(self.ctx, stack_id=tmpl_stack.id, cache_data=cache_data) # Check if the property has the appropriate resolved value. bar = resource.Resource( 'bar', lightweight_stack.defn.resource_definition('bar'), lightweight_stack) self.assertEqual('physical-resource-id', bar.properties['Foo']) # Make sure FnGetRefId returns the cached value. resource_id = lightweight_stack.defn['foo'].FnGetRefId() self.assertEqual('physical-resource-id', resource_id) # Make sure calls are not made to the database to retrieve the # resource state. self.assertFalse(mock_drg.called) def test_encrypt_parameters_false_parameters_stored_plaintext(self): """Test stack loading with disabled parameter value validation.""" tmpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string description: value1. param2: type: string description: value2. hidden: true resources: a_resource: type: GenericResourceType ''') env1 = environment.Environment({'param1': 'foo', 'param2': 'bar'}) self.stack = stack.Stack(self.ctx, 'test', template.Template(tmpl, env=env1)) cfg.CONF.set_override('encrypt_parameters_and_properties', False) # Verify that hidden parameters stored in plain text self.stack.store() db_stack = stack_object.Stack.get_by_id(self.ctx, self.stack.id) params = db_stack.raw_template.environment['parameters'] self.assertEqual('foo', params['param1']) self.assertEqual('bar', params['param2']) def test_parameters_stored_encrypted_decrypted_on_load(self): """Test stack loading with disabled parameter value validation.""" tmpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string description: value1. param2: type: string description: value2. 
hidden: true resources: a_resource: type: GenericResourceType ''') env1 = environment.Environment({'param1': 'foo', 'param2': 'bar'}) self.stack = stack.Stack(self.ctx, 'test', template.Template(tmpl, env=env1)) cfg.CONF.set_override('encrypt_parameters_and_properties', True) # Verify that hidden parameters are stored encrypted self.stack.store() db_tpl = db_api.raw_template_get(self.ctx, self.stack.t.id) db_params = db_tpl.environment['parameters'] self.assertEqual('foo', db_params['param1']) self.assertEqual('cryptography_decrypt_v1', db_params['param2'][0]) self.assertIsNotNone(db_params['param2'][1]) # Verify that loaded stack has decrypted paramters loaded_stack = stack.Stack.load(self.ctx, stack_id=self.stack.id) params = loaded_stack.t.env.params self.assertEqual('foo', params.get('param1')) self.assertEqual('bar', params.get('param2')) # test update the param2 loaded_stack.state_set(self.stack.CREATE, self.stack.COMPLETE, 'for_update') env2 = environment.Environment({'param1': 'foo', 'param2': 'new_bar'}) new_stack = stack.Stack(self.ctx, 'test_update', template.Template(tmpl, env=env2)) loaded_stack.update(new_stack) self.assertEqual((loaded_stack.UPDATE, loaded_stack.COMPLETE), loaded_stack.state) db_tpl = db_api.raw_template_get(self.ctx, loaded_stack.t.id) db_params = db_tpl.environment['parameters'] self.assertEqual('foo', db_params['param1']) self.assertEqual('cryptography_decrypt_v1', db_params['param2'][0]) self.assertIsNotNone(db_params['param2'][1]) loaded_stack1 = stack.Stack.load(self.ctx, stack_id=self.stack.id) params = loaded_stack1.t.env.params self.assertEqual('foo', params.get('param1')) self.assertEqual('new_bar', params.get('param2')) def test_parameters_created_encrypted_updated_decrypted(self): """Test stack loading with disabled parameter value validation.""" tmpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string description: value1. param2: type: string description: value2. hidden: true resources: a_resource: type: GenericResourceType ''') # Create the stack with encryption enabled cfg.CONF.set_override('encrypt_parameters_and_properties', True) env1 = environment.Environment({'param1': 'foo', 'param2': 'bar'}) self.stack = stack.Stack(self.ctx, 'test', template.Template(tmpl, env=env1)) self.stack.store() # Update the stack with encryption disabled cfg.CONF.set_override('encrypt_parameters_and_properties', False) loaded_stack = stack.Stack.load(self.ctx, stack_id=self.stack.id) loaded_stack.state_set(self.stack.CREATE, self.stack.COMPLETE, 'for_update') env2 = environment.Environment({'param1': 'foo', 'param2': 'new_bar'}) new_stack = stack.Stack(self.ctx, 'test_update', template.Template(tmpl, env=env2)) self.assertEqual(['param2'], loaded_stack.env.encrypted_param_names) # Without the fix for bug #1572294, loaded_stack.update() will # blow up with "ValueError: too many values to unpack" loaded_stack.update(new_stack) self.assertEqual([], loaded_stack.env.encrypted_param_names) def test_parameters_inconsistent_encrypted_param_names(self): tmpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string description: value1. param2: type: string description: value2. 
hidden: true resources: a_resource: type: GenericResourceType ''') warning_logger = self.useFixture( fixtures.FakeLogger(level=logging.WARNING, format="%(levelname)8s [%(name)s] " "%(message)s")) cfg.CONF.set_override('encrypt_parameters_and_properties', False) env1 = environment.Environment({'param1': 'foo', 'param2': 'bar'}) self.stack = stack.Stack(self.ctx, 'test', template.Template(tmpl, env=env1)) self.stack.store() loaded_stack = stack.Stack.load(self.ctx, stack_id=self.stack.id) loaded_stack.state_set(self.stack.CREATE, self.stack.COMPLETE, 'for_update') env2 = environment.Environment({'param1': 'foo', 'param2': 'new_bar'}) # Put inconsistent encrypted_param_names data in the environment env2.encrypted_param_names = ['param1'] new_stack = stack.Stack(self.ctx, 'test_update', template.Template(tmpl, env=env2)) self.assertIsNone(loaded_stack.update(new_stack)) self.assertIn('Encountered already-decrypted data', warning_logger.output) def test_parameters_stored_decrypted_successful_load(self): """Test stack loading with disabled parameter value validation.""" tmpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string description: value1. param2: type: string description: value2. hidden: true resources: a_resource: type: GenericResourceType ''') env1 = environment.Environment({'param1': 'foo', 'param2': 'bar'}) self.stack = stack.Stack(self.ctx, 'test', template.Template(tmpl, env=env1)) cfg.CONF.set_override('encrypt_parameters_and_properties', False) # Verify that hidden parameters are stored decrypted self.stack.store() db_tpl = db_api.raw_template_get(self.ctx, self.stack.t.id) db_params = db_tpl.environment['parameters'] self.assertEqual('foo', db_params['param1']) self.assertEqual('bar', db_params['param2']) # Verify that stack loads without error loaded_stack = stack.Stack.load(self.ctx, stack_id=self.stack.id) params = loaded_stack.t.env.params self.assertEqual('foo', params.get('param1')) self.assertEqual('bar', params.get('param2')) def test_event_dispatch(self): env = environment.Environment() evt = eventlet.event.Event() sink = fakes.FakeEventSink(evt) env.register_event_sink('dummy', lambda: sink) env.load({"event_sinks": [{"type": "dummy"}]}) stk = stack.Stack(self.ctx, 'test', template.Template(empty_template, env=env)) stk.thread_group_mgr = service.ThreadGroupManager() self.addCleanup(stk.thread_group_mgr.stop, stk.id) stk.store() stk._add_event('CREATE', 'IN_PROGRESS', '') evt.wait() expected = [{ 'id': mock.ANY, 'timestamp': mock.ANY, 'type': 'os.heat.event', 'version': '0.1', 'payload': { 'physical_resource_id': stk.id, 'resource_action': 'CREATE', 'resource_name': 'test', 'resource_properties': {}, 'resource_status': 'IN_PROGRESS', 'resource_status_reason': '', 'resource_type': 'OS::Heat::Stack', 'stack_id': stk.id, 'version': '0.1'}}] self.assertEqual(expected, sink.events) @mock.patch.object(stack_object.Stack, 'delete') @mock.patch.object(raw_template_object.RawTemplate, 'delete') def test_mark_complete_create(self, mock_tmpl_delete, mock_stack_delete): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'} } }) tmpl_stack = stack.Stack(self.ctx, 'test', tmpl, convergence=True) tmpl_stack.store() tmpl_stack.action = tmpl_stack.CREATE tmpl_stack.status = tmpl_stack.IN_PROGRESS tmpl_stack.current_traversal = 'some-traversal' tmpl_stack.mark_complete() self.assertEqual(tmpl_stack.prev_raw_template_id, None) self.assertFalse(mock_tmpl_delete.called) 
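        # On an initial CREATE there is no previous raw template to purge,
        # so mark_complete() is expected to leave both the raw_template and
        # stack rows alone (neither delete mock fires) and only advance the
        # status, as asserted on either side of this comment.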
self.assertFalse(mock_stack_delete.called) self.assertEqual(tmpl_stack.status, tmpl_stack.COMPLETE) @mock.patch.object(stack.Stack, 'purge_db') def test_mark_complete_update(self, mock_purge_db): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'} } }) cfg.CONF.set_default('convergence_engine', True) tmpl_stack = stack.Stack(self.ctx, 'test', tmpl, convergence=True) tmpl_stack.prev_raw_template_id = 1 tmpl_stack.action = tmpl_stack.UPDATE tmpl_stack.status = tmpl_stack.IN_PROGRESS tmpl_stack.current_traversal = 'some-traversal' tmpl_stack.store() tmpl_stack.mark_complete() self.assertTrue(mock_purge_db.called) @mock.patch.object(stack.Stack, 'purge_db') def test_mark_complete_update_delete(self, mock_purge_db): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Description': 'Empty Template' }) cfg.CONF.set_default('convergence_engine', True) tmpl_stack = stack.Stack(self.ctx, 'test', tmpl, convergence=True) tmpl_stack.prev_raw_template_id = 1 tmpl_stack.action = tmpl_stack.DELETE tmpl_stack.status = tmpl_stack.IN_PROGRESS tmpl_stack.current_traversal = 'some-traversal' tmpl_stack.store() tmpl_stack.mark_complete() self.assertTrue(mock_purge_db.called) @mock.patch.object(stack.Stack, 'purge_db') def test_mark_complete_stale_traversal(self, mock_purge_db): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'} } }) tmpl_stack = stack.Stack(self.ctx, 'test', tmpl) tmpl_stack.store() # emulate stale traversal tmpl_stack.current_traversal = 'old-traversal' tmpl_stack.mark_complete() self.assertFalse(mock_purge_db.called) @mock.patch.object(function, 'validate') def test_validate_assertion_exception_rethrow(self, func_val): expected_msg = 'Expected Assertion Error' with mock.patch('heat.engine.stack.dependencies', new_callable=mock.PropertyMock) as mock_dependencies: mock_dependency = mock.MagicMock() mock_dependency.name = 'res' mock_dependency.external_id = None mock_dependency.validate.side_effect = AssertionError(expected_msg) mock_dependencies.Dependencies.return_value = [mock_dependency] stc = stack.Stack(self.ctx, utils.random_name(), self.tmpl) mock_res = mock.Mock() mock_res.name = mock_dependency.name mock_res.t = mock.Mock() mock_res.t.name = mock_res.name stc._resources = {mock_res.name: mock_res} expected_exception = self.assertRaises(AssertionError, stc.validate) self.assertEqual(expected_msg, six.text_type(expected_exception)) mock_dependency.validate.assert_called_once_with() tmpl = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Outputs: foo: Value: bar """) stc = stack.Stack(self.ctx, utils.random_name(), template.Template(tmpl)) func_val.side_effect = AssertionError(expected_msg) expected_exception = self.assertRaises(AssertionError, stc.validate) self.assertEqual(expected_msg, six.text_type(expected_exception)) @mock.patch.object(update, 'StackUpdate') def test_update_task_exception(self, mock_stack_update): class RandomException(Exception): pass tmpl1 = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'} } }) self.stack = stack.Stack(utils.dummy_context(), 'test_stack', tmpl1) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'}, 'bar': {'Type': 
'GenericResourceType'} } }) updated_stack = stack.Stack(utils.dummy_context(), 'test_stack', tmpl2) mock_stack_update.side_effect = RandomException() self.assertRaises(RandomException, self.stack.update, updated_stack) def update_exception_handler(self, exc, action=stack.Stack.UPDATE, disable_rollback=False): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'} } }) self.stack = stack.Stack(utils.dummy_context(), 'test_stack', tmpl, disable_rollback=disable_rollback) self.stack.store() self.m.ReplayAll() rb = self.stack._update_exception_handler(exc=exc, action=action) self.m.VerifyAll() return rb def test_update_exception_handler_resource_failure_no_rollback(self): reason = 'something strange happened' exc = exception.ResourceFailure(reason, None, action='UPDATE') rb = self.update_exception_handler(exc, disable_rollback=True) self.assertFalse(rb) def test_update_exception_handler_resource_failure_rollback(self): reason = 'something strange happened' exc = exception.ResourceFailure(reason, None, action='UPDATE') rb = self.update_exception_handler(exc, disable_rollback=False) self.assertTrue(rb) def test_update_exception_handler_force_cancel_with_rollback(self): exc = stack.ForcedCancel(with_rollback=True) rb = self.update_exception_handler(exc, disable_rollback=False) self.assertTrue(rb) def test_update_exception_handler_force_cancel_with_rollback_off(self): # stack-cancel-update from user *always* rolls back exc = stack.ForcedCancel(with_rollback=True) rb = self.update_exception_handler(exc, disable_rollback=True) self.assertTrue(rb) def test_update_exception_handler_force_cancel_nested(self): exc = stack.ForcedCancel(with_rollback=False) rb = self.update_exception_handler(exc, disable_rollback=True) self.assertFalse(rb) def test_store_generates_new_traversal_id_for_new_stack(self): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'} } }) self.stack = stack.Stack(utils.dummy_context(), 'test_stack', tmpl, convergence=True) self.assertIsNone(self.stack.current_traversal) self.stack.store() self.assertIsNotNone(self.stack.current_traversal) @mock.patch.object(stack_object.Stack, 'select_and_update') def test_store_uses_traversal_id_for_updating_db(self, mock_sau): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'} } }) self.stack = stack.Stack(utils.dummy_context(), 'test_stack', tmpl, convergence=True) mock_sau.return_value = True self.stack.id = 1 self.stack.current_traversal = 1 stack_id = self.stack.store() mock_sau.assert_called_once_with(mock.ANY, 1, mock.ANY, exp_trvsl=1) self.assertEqual(1, stack_id) # ensure store uses given expected traversal ID stack_id = self.stack.store(exp_trvsl=2) self.assertEqual(1, stack_id) mock_sau.assert_called_with(mock.ANY, 1, mock.ANY, exp_trvsl=2) @mock.patch.object(stack_object.Stack, 'select_and_update') def test_store_db_update_failure(self, mock_sau): tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'} } }) self.stack = stack.Stack(utils.dummy_context(), 'test_stack', tmpl, convergence=True) mock_sau.return_value = False self.stack.id = 1 stack_id = self.stack.store() self.assertIsNone(stack_id) @mock.patch.object(stack_object.Stack, 'select_and_update') def test_state_set_uses_curr_traversal_for_updating_db(self, mock_sau): tmpl = template.Template({ 
'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'foo': {'Type': 'GenericResourceType'} } }) self.stack = stack.Stack(utils.dummy_context(), 'test_stack', tmpl, convergence=True) self.stack.id = 1 self.stack.current_traversal = 'curr-traversal' self.stack.store() self.stack.state_set(self.stack.UPDATE, self.stack.IN_PROGRESS, '') mock_sau.assert_called_once_with(mock.ANY, 1, mock.ANY, exp_trvsl='curr-traversal') class StackKwargsForCloningTest(common.HeatTestCase): scenarios = [ ('default', dict(keep_status=False, only_db=False, keep_tags=False, not_included=['action', 'status', 'status_reason', 'tags'])), ('only_db', dict(keep_status=False, only_db=True, keep_tags=False, not_included=['action', 'status', 'status_reason', 'strict_validate', 'tags'])), ('keep_status', dict(keep_status=True, only_db=False, keep_tags=False, not_included=['tags'])), ('status_db', dict(keep_status=True, only_db=True, keep_tags=False, not_included=['strict_validate', 'tags'])), ('keep_tags', dict(keep_status=False, only_db=False, keep_tags=True, not_included=['action', 'status', 'status_reason'])) ] def test_kwargs(self): tmpl = template.Template(copy.deepcopy(empty_template)) ctx = utils.dummy_context() test_data = dict(action='x', status='y', status_reason='z', timeout_mins=33, disable_rollback=True, parent_resource='fred', owner_id=32, stack_user_project_id=569, user_creds_id=123, tenant_id='some-uuid', username='jo', nested_depth=3, strict_validate=True, convergence=False, current_traversal=45, tags=['tag1', 'tag2']) db_map = {'parent_resource': 'parent_resource_name', 'tenant_id': 'tenant', 'timeout_mins': 'timeout'} test_db_data = {} for key in test_data: dbkey = db_map.get(key, key) test_db_data[dbkey] = test_data[key] self.stack = stack.Stack(ctx, utils.random_name(), tmpl, **test_data) res = self.stack.get_kwargs_for_cloning(keep_status=self.keep_status, only_db=self.only_db, keep_tags=self.keep_tags) for key in self.not_included: self.assertNotIn(key, res) for key in test_data: if key not in self.not_included: dbkey = db_map.get(key, key) if self.only_db: self.assertEqual(test_data[key], res[dbkey]) else: self.assertEqual(test_data[key], res[key]) if not self.only_db: # just make sure that the kwargs are valid # (no exception should be raised) stack.Stack(ctx, utils.random_name(), tmpl, **res) class ResetStateOnErrorTest(common.HeatTestCase): class DummyStack(object): (COMPLETE, IN_PROGRESS, FAILED) = range(3) action = 'something' status = COMPLETE def __init__(self): self.state_set = mock.MagicMock() @stack.reset_state_on_error def raise_exception(self): self.status = self.IN_PROGRESS raise ValueError('oops') @stack.reset_state_on_error def raise_exit_exception(self): self.status = self.IN_PROGRESS raise BaseException('bye') @stack.reset_state_on_error def succeed(self): return 'Hello world' @stack.reset_state_on_error def fail(self): self.status = self.FAILED return 'Hello world' def test_success(self): dummy = self.DummyStack() self.assertEqual('Hello world', dummy.succeed()) self.assertFalse(dummy.state_set.called) def test_failure(self): dummy = self.DummyStack() self.assertEqual('Hello world', dummy.fail()) self.assertFalse(dummy.state_set.called) def test_reset_state_exception(self): dummy = self.DummyStack() exc = self.assertRaises(ValueError, dummy.raise_exception) self.assertIn('oops', str(exc)) self.assertTrue(dummy.state_set.called) def test_reset_state_exit_exception(self): dummy = self.DummyStack() exc = self.assertRaises(BaseException, dummy.raise_exit_exception) 
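        # raise_exit_exception raises a bare BaseException, which is not an
        # Exception subclass (compare SystemExit or KeyboardInterrupt); the
        # decorator is still expected to reset the stack state before
        # letting it propagate, hence the state_set check below alongside
        # the message check.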
self.assertIn('bye', str(exc)) self.assertTrue(dummy.state_set.called) class StackStateSetTest(common.HeatTestCase): scenarios = [ ('in_progress', dict(action=stack.Stack.CREATE, status=stack.Stack.IN_PROGRESS, persist_count=1, error=False)), ('create_complete', dict(action=stack.Stack.CREATE, status=stack.Stack.COMPLETE, persist_count=0, error=False)), ('create_failed', dict(action=stack.Stack.CREATE, status=stack.Stack.FAILED, persist_count=0, error=False)), ('update_complete', dict(action=stack.Stack.UPDATE, status=stack.Stack.COMPLETE, persist_count=1, error=False)), ('update_failed', dict(action=stack.Stack.UPDATE, status=stack.Stack.FAILED, persist_count=1, error=False)), ('delete_complete', dict(action=stack.Stack.DELETE, status=stack.Stack.COMPLETE, persist_count=1, error=False)), ('delete_failed', dict(action=stack.Stack.DELETE, status=stack.Stack.FAILED, persist_count=1, error=False)), ('adopt_complete', dict(action=stack.Stack.ADOPT, status=stack.Stack.COMPLETE, persist_count=0, error=False)), ('adopt_failed', dict(action=stack.Stack.ADOPT, status=stack.Stack.FAILED, persist_count=0, error=False)), ('rollback_complete', dict(action=stack.Stack.ROLLBACK, status=stack.Stack.COMPLETE, persist_count=1, error=False)), ('rollback_failed', dict(action=stack.Stack.ROLLBACK, status=stack.Stack.FAILED, persist_count=1, error=False)), ('invalid_action', dict(action='action', status=stack.Stack.FAILED, persist_count=0, error=True)), ('invalid_status', dict(action=stack.Stack.CREATE, status='status', persist_count=0, error=True)), ] def test_state(self): self.tmpl = template.Template(copy.deepcopy(empty_template)) self.ctx = utils.dummy_context() self.stack = stack.Stack(self.ctx, 'test_stack', self.tmpl, action=stack.Stack.CREATE, status=stack.Stack.IN_PROGRESS) persist_state = self.patchobject(self.stack, '_persist_state') self.assertEqual((stack.Stack.CREATE, stack.Stack.IN_PROGRESS), self.stack.state) if self.error: self.assertRaises(ValueError, self.stack.state_set, self.action, self.status, 'test') else: self.assertIsNone(self.stack.state_set(self.action, self.status, 'test')) self.assertEqual((self.action, self.status), self.stack.state) self.assertEqual('test', self.stack.status_reason) self.assertEqual(self.persist_count, persist_state.call_count) heat-10.0.2/heat/tests/test_stack_delete.py0000666000175000017500000004705713343562352020705 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
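

# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the original test suite: the traversal
# tests above assert compare-and-swap behaviour -- Stack.store() only writes
# the row when the stored traversal ID still matches the expected one, and
# returns None when it lost the race.  A minimal in-memory analogue (all
# names below are assumptions for illustration) looks like this:

def select_and_update_sketch(rows, stack_id, values, exp_trvsl=None):
    """Update rows[stack_id] only if its current traversal matches.

    ``rows`` is a plain dict standing in for the stack table.  Returns True
    on success and False when another traversal updated the row first, the
    condition the tests above translate into a None stack_id.
    """
    row = rows.get(stack_id)
    if row is None:
        return False
    if exp_trvsl is not None and row.get('current_traversal') != exp_trvsl:
        return False
    row.update(values)
    return True

# Example: a stale writer with exp_trvsl='old' is rejected.
#   rows = {1: {'current_traversal': 'new'}}
#   select_and_update_sketch(rows, 1, {'status': 'FAILED'},
#                            exp_trvsl='old')  -> False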
heat-10.0.2/heat/tests/test_stack_delete.py0000666000175000017500000004705713343562352020675 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy
import time

import fixtures
from keystoneauth1 import exceptions as kc_exceptions
import mock
from oslo_log import log as logging

from heat.common import exception
from heat.common import template_format
from heat.common import timeutils
from heat.engine.clients.os import keystone
from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks
from heat.engine.clients.os.keystone import heat_keystoneclient as hkc
from heat.engine import scheduler
from heat.engine import stack
from heat.engine import template
from heat.objects import snapshot as snapshot_object
from heat.objects import stack as stack_object
from heat.objects import user_creds as ucreds_object
from heat.tests import common
from heat.tests import generic_resource as generic_rsrc
from heat.tests import utils

empty_template = template_format.parse('''{
  "HeatTemplateFormatVersion" : "2012-12-12",
}''')


class StackTest(common.HeatTestCase):
    def setUp(self):
        super(StackTest, self).setUp()

        self.tmpl = template.Template(copy.deepcopy(empty_template))
        self.ctx = utils.dummy_context()

    def test_delete(self):
        self.stack = stack.Stack(self.ctx, 'delete_test', self.tmpl)
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)

        self.stack.delete()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNone(db_s)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE),
                         self.stack.state)

    def test_delete_with_snapshot(self):
        self.stack = stack.Stack(self.ctx, 'delete_test', self.tmpl)
        stack_id = self.stack.store()
        snapshot_fake = {
            'tenant': self.ctx.tenant_id,
            'name': 'Snapshot',
            'stack_id': stack_id,
            'status': 'COMPLETE',
            'data': self.stack.prepare_abandon()
        }
        snapshot_object.Snapshot.create(self.ctx, snapshot_fake)

        self.assertIsNotNone(snapshot_object.Snapshot.get_all(
            self.ctx, stack_id))

        self.stack.delete()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNone(db_s)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE),
                         self.stack.state)
        self.assertEqual([], snapshot_object.Snapshot.get_all(
            self.ctx, stack_id))

    def test_delete_with_snapshot_after_stack_add_resource(self):
        tpl = {'heat_template_version': 'queens',
               'resources':
                   {'A': {'type': 'ResourceWithRestoreType'}}}
        self.stack = stack.Stack(self.ctx, 'stack_delete_with_snapshot',
                                 template.Template(tpl))
        stack_id = self.stack.store()
        self.stack.create()

        data = copy.deepcopy(self.stack.prepare_abandon())
        data['resources']['A']['resource_data']['a_string'] = 'foo'
        snapshot_fake = {
            'tenant': self.ctx.tenant_id,
            'name': 'Snapshot',
            'stack_id': stack_id,
            'status': 'COMPLETE',
            'data': data
        }
        snapshot_object.Snapshot.create(self.ctx, snapshot_fake)

        self.assertIsNotNone(snapshot_object.Snapshot.get_all(
            self.ctx, stack_id))

        new_tmpl = {'heat_template_version': 'queens',
                    'resources':
                        {'A': {'type': 'ResourceWithRestoreType'},
                         'B': {'type': 'ResourceWithRestoreType'}}}
        updated_stack = stack.Stack(self.ctx, 'update_stack_add_res',
                                    template.Template(new_tmpl))
        self.stack.update(updated_stack)
        self.assertEqual(2, len(self.stack.resources))

        self.stack.delete()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNone(db_s)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE),
                         self.stack.state)
        self.assertEqual([], snapshot_object.Snapshot.get_all(
            self.ctx, stack_id))

    def test_delete_user_creds(self):
        self.stack = stack.Stack(self.ctx, 'delete_test', self.tmpl)
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)
        self.assertIsNotNone(db_s.user_creds_id)
        user_creds_id = db_s.user_creds_id
        db_creds = ucreds_object.UserCreds.get_by_id(
            self.ctx, db_s.user_creds_id)
        self.assertIsNotNone(db_creds)

        self.stack.delete()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNone(db_s)
        db_creds = ucreds_object.UserCreds.get_by_id(
            self.ctx, user_creds_id)
        self.assertIsNone(db_creds)
        del_db_s = stack_object.Stack.get_by_id(self.ctx, stack_id,
                                                show_deleted=True)
        self.assertIsNone(del_db_s.user_creds_id)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE),
                         self.stack.state)

    def test_delete_user_creds_gone_missing(self):
        '''Do not block stack deletion if user_creds are missing.

        It may happen that user_creds were deleted when a delete operation
        was stopped. We should be resilient to this and still complete the
        delete operation.
        '''
        self.stack = stack.Stack(self.ctx, 'delete_test', self.tmpl)
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)
        self.assertIsNotNone(db_s.user_creds_id)
        user_creds_id = db_s.user_creds_id
        db_creds = ucreds_object.UserCreds.get_by_id(
            self.ctx, db_s.user_creds_id)
        self.assertIsNotNone(db_creds)

        ucreds_object.UserCreds.delete(self.ctx, user_creds_id)

        self.stack.delete()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNone(db_s)
        db_creds = ucreds_object.UserCreds.get_by_id(
            self.ctx, user_creds_id)
        self.assertIsNone(db_creds)
        del_db_s = stack_object.Stack.get_by_id(self.ctx, stack_id,
                                                show_deleted=True)
        self.assertIsNone(del_db_s.user_creds_id)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE),
                         self.stack.state)

    def test_delete_user_creds_fail(self):
        '''Do not stop stack deletion even if deleting user_creds fails.

        It may happen that user_creds were incorrectly saved (truncated)
        and thus cannot be correctly retrieved (and decrypted). In this
        case, stack deletion should not be blocked.
        '''
        self.stack = stack.Stack(self.ctx, 'delete_test', self.tmpl)
        stack_id = self.stack.store()
        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)
        self.assertIsNotNone(db_s.user_creds_id)
        exc = exception.Error('Cannot get user credentials')
        self.patchobject(ucreds_object.UserCreds,
                         'get_by_id').side_effect = exc

        self.stack.delete()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNone(db_s)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE),
                         self.stack.state)

    def test_delete_trust(self):
        self.stub_keystoneclient()
        self.stack = stack.Stack(self.ctx, 'delete_trust', self.tmpl)
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)

        self.stack.delete()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNone(db_s)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE),
                         self.stack.state)

    def test_delete_trust_trustor(self):
        self.stub_keystoneclient(user_id='thetrustor')

        trustor_ctx = utils.dummy_context(user_id='thetrustor')
        self.stack = stack.Stack(trustor_ctx, 'delete_trust_nt', self.tmpl)
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)

        user_creds_id = db_s.user_creds_id
        self.assertIsNotNone(user_creds_id)
        user_creds = ucreds_object.UserCreds.get_by_id(
            self.ctx, user_creds_id)
        self.assertEqual('thetrustor', user_creds.get('trustor_user_id'))

        self.stack.delete()

        db_s = stack_object.Stack.get_by_id(trustor_ctx, stack_id)
        self.assertIsNone(db_s)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE),
                         self.stack.state)

    def test_delete_trust_not_trustor(self):
        # Stack gets created with trustor_ctx, deleted with other_ctx
        # then the trust delete should be with stored_ctx
        trustor_ctx = utils.dummy_context(user_id='thetrustor')
        other_ctx = utils.dummy_context(user_id='nottrustor')
        stored_ctx = utils.dummy_context(trust_id='thetrust')

        mock_kc = self.patchobject(hkc, 'KeystoneClient')
        self.stub_keystoneclient(user_id='thetrustor')

        mock_sc = self.patchobject(stack.Stack, 'stored_context')
        mock_sc.return_value = stored_ctx
        self.stack = stack.Stack(trustor_ctx, 'delete_trust_nt', self.tmpl)
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)

        user_creds_id = db_s.user_creds_id
        self.assertIsNotNone(user_creds_id)
        user_creds = ucreds_object.UserCreds.get_by_id(
            self.ctx, user_creds_id)
        self.assertEqual('thetrustor', user_creds.get('trustor_user_id'))

        mock_kc.return_value = fake_ks.FakeKeystoneClient(
            user_id='nottrustor')

        loaded_stack = stack.Stack.load(other_ctx, self.stack.id)
        loaded_stack.delete()
        mock_sc.assert_called_with()

        db_s = stack_object.Stack.get_by_id(other_ctx, stack_id)
        self.assertIsNone(db_s)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE),
                         loaded_stack.state)

    def test_delete_trust_backup(self):
        class FakeKeystoneClientFail(fake_ks.FakeKeystoneClient):
            def delete_trust(self, trust_id):
                raise Exception("Shouldn't delete")

        mock_kcp = self.patchobject(keystone.KeystoneClientPlugin, '_create',
                                    return_value=FakeKeystoneClientFail())

        self.stack = stack.Stack(self.ctx, 'delete_trust', self.tmpl)
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)

        self.stack.delete(backup=True)

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNone(db_s)
        self.assertEqual(self.stack.state,
                         (stack.Stack.DELETE, stack.Stack.COMPLETE))
        mock_kcp.assert_called_once_with()

    def test_delete_trust_nested(self):
        class FakeKeystoneClientFail(fake_ks.FakeKeystoneClient):
            def delete_trust(self, trust_id):
                raise Exception("Shouldn't delete")

        self.stub_keystoneclient(fake_client=FakeKeystoneClientFail())

        self.stack = stack.Stack(self.ctx, 'delete_trust_nested', self.tmpl,
                                 owner_id='owner123')
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)
        user_creds_id = db_s.user_creds_id
        self.assertIsNotNone(user_creds_id)
        user_creds = ucreds_object.UserCreds.get_by_id(
            self.ctx, user_creds_id)
        self.assertIsNotNone(user_creds)

        self.stack.delete()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNone(db_s)
        user_creds = ucreds_object.UserCreds.get_by_id(
            self.ctx, user_creds_id)
        self.assertIsNotNone(user_creds)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE),
                         self.stack.state)

    def test_delete_trust_fail(self):
        class FakeKeystoneClientFail(fake_ks.FakeKeystoneClient):
            def delete_trust(self, trust_id):
                raise kc_exceptions.Forbidden("Denied!")

        mock_kcp = self.patchobject(keystone.KeystoneClientPlugin, '_create',
                                    return_value=FakeKeystoneClientFail())

        self.stack = stack.Stack(self.ctx, 'delete_trust', self.tmpl)
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)

        self.stack.delete()

        mock_kcp.assert_called_with()
        self.assertEqual(2, mock_kcp.call_count)

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.FAILED),
                         self.stack.state)
        self.assertIn('Error deleting trust', self.stack.status_reason)

    def test_delete_deletes_project(self):
        fkc = fake_ks.FakeKeystoneClient()
        fkc.delete_stack_domain_project = mock.Mock()

        mock_kcp = self.patchobject(keystone.KeystoneClientPlugin, '_create',
                                    return_value=fkc)

        self.stack = stack.Stack(self.ctx, 'delete_trust', self.tmpl)
        stack_id = self.stack.store()

        self.stack.set_stack_user_project_id(project_id='aproject456')

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)

        self.stack.delete()

        mock_kcp.assert_called_with()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNone(db_s)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE),
                         self.stack.state)
        fkc.delete_stack_domain_project.assert_called_once_with(
            project_id='aproject456')

    def test_delete_rollback(self):
        self.stack = stack.Stack(self.ctx, 'delete_rollback_test',
                                 self.tmpl, disable_rollback=False)
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)

        self.stack.delete(action=self.stack.ROLLBACK)

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNone(db_s)
        self.assertEqual((stack.Stack.ROLLBACK, stack.Stack.COMPLETE),
                         self.stack.state)

    def test_delete_badaction(self):
        self.stack = stack.Stack(self.ctx, 'delete_badaction_test',
                                 self.tmpl)
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)

        self.stack.delete(action="wibble")

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)
        self.assertEqual((stack.Stack.DELETE, stack.Stack.FAILED),
                         self.stack.state)

    def test_stack_delete_timeout(self):
        self.stack = stack.Stack(self.ctx, 'delete_test', self.tmpl)
        stack_id = self.stack.store()

        db_s = stack_object.Stack.get_by_id(self.ctx, stack_id)
        self.assertIsNotNone(db_s)

        def dummy_task():
            while True:
                yield

        start_time = time.time()
        mock_tg = self.patchobject(scheduler.DependencyTaskGroup, '__call__',
                                   return_value=dummy_task())
        mock_wallclock = self.patchobject(timeutils, 'wallclock')
        mock_wallclock.side_effect = [
            start_time,
            start_time + 1,
            start_time + self.stack.timeout_secs() + 1
        ]

        self.stack.delete()

        self.assertEqual((stack.Stack.DELETE, stack.Stack.FAILED),
                         self.stack.state)
        self.assertEqual('Delete timed out', self.stack.status_reason)

        mock_tg.assert_called_once_with()
        mock_wallclock.assert_called_with()
        self.assertEqual(3, mock_wallclock.call_count)

    def test_stack_delete_resourcefailure(self):
        tmpl = {'HeatTemplateFormatVersion': '2012-12-12',
                'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
        mock_rd = self.patchobject(generic_rsrc.GenericResource,
                                   'handle_delete',
                                   side_effect=Exception('foo'))
        self.stack = stack.Stack(self.ctx, 'delete_test_fail',
                                 template.Template(tmpl))
        self.stack.store()
        self.stack.create()

        self.assertEqual((self.stack.CREATE, self.stack.COMPLETE),
                         self.stack.state)

        self.stack.delete()

        mock_rd.assert_called_once_with()
        self.assertEqual((self.stack.DELETE, self.stack.FAILED),
                         self.stack.state)
        self.assertEqual('Resource DELETE failed: Exception: '
                         'resources.AResource: foo',
                         self.stack.status_reason)

    def test_delete_stack_with_resource_log_is_clear(self):
        debug_logger = self.useFixture(
            fixtures.FakeLogger(level=logging.DEBUG,
                                format="%(levelname)8s [%(name)s] "
                                       "%(message)s"))
        tmpl = {'HeatTemplateFormatVersion': '2012-12-12',
                'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
        self.stack = stack.Stack(self.ctx, 'delete_log_test',
                                 template.Template(tmpl))
        self.stack.store()
        self.stack.create()

        self.assertEqual((self.stack.CREATE, self.stack.COMPLETE),
                         self.stack.state)

        self.stack.delete()

        self.assertNotIn("destroy from None running", debug_logger.output)

    def test_stack_user_project_id_delete_fail(self):
        class FakeKeystoneClientFail(fake_ks.FakeKeystoneClient):
            def delete_stack_domain_project(self, project_id):
                raise kc_exceptions.Forbidden("Denied!")

        mock_kcp = self.patchobject(keystone.KeystoneClientPlugin, '_create',
                                    return_value=FakeKeystoneClientFail())

        self.stack = stack.Stack(self.ctx, 'user_project_init', self.tmpl,
                                 stack_user_project_id='aproject1234')
        self.stack.store()
        self.assertEqual('aproject1234', self.stack.stack_user_project_id)
        db_stack = stack_object.Stack.get_by_id(self.ctx, self.stack.id)
        self.assertEqual('aproject1234', db_stack.stack_user_project_id)

        self.stack.delete()

        mock_kcp.assert_called_with()
        self.assertEqual((stack.Stack.DELETE, stack.Stack.FAILED),
                         self.stack.state)
        self.assertIn('Error deleting project', self.stack.status_reason)
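

# ---------------------------------------------------------------------------
# Illustrative sketch, not Heat code: the delete tests above repeatedly stub
# an ``ignore_error_by_name`` filter so that a missing trust, project or
# user_creds record cannot abort stack deletion.  The underlying pattern is
# just a context manager that swallows one exception type (equivalent to
# contextlib.suppress in modern Python):

import contextlib


@contextlib.contextmanager
def ignore_missing_sketch(exc_type):
    """Suppress exc_type so cleanup keeps going when state is already gone."""
    try:
        yield
    except exc_type:
        pass

# Example:
#   creds = {}
#   with ignore_missing_sketch(KeyError):
#       del creds['already-gone']   # no exception escapes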
heat-10.0.2/heat/tests/test_grouputils.py0000666000175000017500000002211513343562352020457 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import six

from heat.common import grouputils
from heat.common import identifier
from heat.common import template_format
from heat.rpc import client as rpc_client
from heat.tests import common
from heat.tests import utils

nested_stack = '''
heat_template_version: 2013-05-23
resources:
  r0:
    type: OverwrittenFnGetRefIdType
  r1:
    type: OverwrittenFnGetRefIdType
'''


class GroupUtilsTest(common.HeatTestCase):

    def test_non_nested_resource(self):
        group = mock.Mock()
        group.nested_identifier.return_value = None
        group.nested.return_value = None

        self.assertEqual(0, grouputils.get_size(group))
        self.assertEqual([], grouputils.get_members(group))
        self.assertEqual([], grouputils.get_member_refids(group))
        self.assertEqual([], grouputils.get_member_names(group))

    def test_normal_group(self):
        group = mock.Mock()
        t = template_format.parse(nested_stack)
        stack = utils.parse_stack(t)
        group.nested.return_value = stack

        # member list (sorted)
        members = [r for r in six.itervalues(stack)]
        expected = sorted(members, key=lambda r: (r.created_time, r.name))
        actual = grouputils.get_members(group)
        self.assertEqual(expected, actual)

        # refids
        actual_ids = grouputils.get_member_refids(group)
        self.assertEqual(['ID-r0', 'ID-r1'], actual_ids)
        partial_ids = grouputils.get_member_refids(group, exclude=['ID-r1'])
        self.assertEqual(['ID-r0'], partial_ids)

    def test_group_with_failed_members(self):
        group = mock.Mock()
        t = template_format.parse(nested_stack)
        stack = utils.parse_stack(t)
        self.patchobject(group, 'nested', return_value=stack)

        # Just failed for whatever reason
        rsrc_err = stack.resources['r0']
        rsrc_err.status = rsrc_err.FAILED
        rsrc_ok = stack.resources['r1']

        self.assertEqual([rsrc_ok], grouputils.get_members(group))
        self.assertEqual(['ID-r1'], grouputils.get_member_refids(group))


class GroupInspectorTest(common.HeatTestCase):
    resources = [
        {
            'updated_time': '2018-01-01T12:00',
            'creation_time': '2018-01-01T02:00',
            'resource_name': 'A',
            'physical_resource_id': 'a',
            'resource_action': 'UPDATE',
            'resource_status': 'COMPLETE',
            'resource_status_reason': 'resource changed',
            'resource_type': 'OS::Heat::Test',
            'resource_id': 'aaaaaaaa',
            'stack_identity': 'bar',
            'stack_name': 'nested_test',
            'required_by': [],
            'parent_resource': 'stack_resource',
        },
        {
            'updated_time': '2018-01-01T10:00',
            'creation_time': '2018-01-01T03:00',
            'resource_name': 'E',
            'physical_resource_id': 'e',
            'resource_action': 'UPDATE',
            'resource_status': 'FAILED',
            'resource_status_reason': 'reasons',
            'resource_type': 'OS::Heat::Test',
            'resource_id': 'eeeeeeee',
            'stack_identity': 'bar',
            'stack_name': 'nested_test',
            'required_by': [],
            'parent_resource': 'stack_resource',
        },
        {
            'updated_time': '2018-01-01T11:00',
            'creation_time': '2018-01-01T03:00',
            'resource_name': 'B',
            'physical_resource_id': 'b',
            'resource_action': 'UPDATE',
            'resource_status': 'FAILED',
            'resource_status_reason': 'reasons',
            'resource_type': 'OS::Heat::Test',
            'resource_id': 'bbbbbbbb',
            'stack_identity': 'bar',
            'stack_name': 'nested_test',
            'required_by': [],
            'parent_resource': 'stack_resource',
        },
        {
            'updated_time': '2018-01-01T13:00',
            'creation_time': '2018-01-01T01:00',
            'resource_name': 'C',
            'physical_resource_id': 'c',
            'resource_action': 'UPDATE',
            'resource_status': 'COMPLETE',
            'resource_status_reason': 'resource changed',
            'resource_type': 'OS::Heat::Test',
            'resource_id': 'cccccccc',
            'stack_identity': 'bar',
            'stack_name': 'nested_test',
            'required_by': [],
            'parent_resource': 'stack_resource',
        },
        {
            'updated_time': '2018-01-01T04:00',
            'creation_time': '2018-01-01T04:00',
            'resource_name': 'F',
            'physical_resource_id': 'f',
            'resource_action': 'CREATE',
            'resource_status': 'COMPLETE',
            'resource_status_reason': 'resource changed',
            'resource_type': 'OS::Heat::Test',
            'resource_id': 'ffffffff',
            'stack_identity': 'bar',
            'stack_name': 'nested_test',
            'required_by': [],
            'parent_resource': 'stack_resource',
        },
        {
            'updated_time': '2018-01-01T04:00',
            'creation_time': '2018-01-01T04:00',
            'resource_name': 'D',
            'physical_resource_id': 'd',
            'resource_action': 'CREATE',
            'resource_status': 'COMPLETE',
            'resource_status_reason': 'resource changed',
            'resource_type': 'OS::Heat::Test',
            'resource_id': 'dddddddd',
            'stack_identity': 'bar',
            'stack_name': 'nested_test',
            'required_by': [],
            'parent_resource': 'stack_resource',
        },
    ]

    template = {
        'heat_template_version': 'newton',
        'resources': {
            'A': {
                'type': 'OS::Heat::TestResource',
            },
        },
    }

    def setUp(self):
        super(GroupInspectorTest, self).setUp()
        self.ctx = mock.Mock()
        self.rpc_client = mock.Mock(spec=rpc_client.EngineClient)
        self.identity = identifier.HeatIdentifier('foo', 'nested_test',
                                                  'bar')
        self.list_rsrcs = self.rpc_client.list_stack_resources
        self.get_tmpl = self.rpc_client.get_template

        self.insp = grouputils.GroupInspector(self.ctx, self.rpc_client,
                                              self.identity)

    def test_no_identity(self):
        self.insp = grouputils.GroupInspector(self.ctx, self.rpc_client,
                                              None)
        self.assertEqual(0, self.insp.size(include_failed=True))
        self.assertEqual([],
                         list(self.insp.member_names(include_failed=True)))
        self.assertIsNone(self.insp.template())

        self.list_rsrcs.assert_not_called()
        self.get_tmpl.assert_not_called()

    def test_size_include_failed(self):
        self.list_rsrcs.return_value = self.resources
        self.assertEqual(6, self.insp.size(include_failed=True))
        self.list_rsrcs.assert_called_once_with(self.ctx,
                                                dict(self.identity))

    def test_size_exclude_failed(self):
        self.list_rsrcs.return_value = self.resources
        self.assertEqual(4, self.insp.size(include_failed=False))
        self.list_rsrcs.assert_called_once_with(self.ctx,
                                                dict(self.identity))

    def test_member_names_include_failed(self):
        self.list_rsrcs.return_value = self.resources
        self.assertEqual(['B', 'E', 'C', 'A', 'D', 'F'],
                         list(self.insp.member_names(include_failed=True)))
        self.list_rsrcs.assert_called_once_with(self.ctx,
                                                dict(self.identity))

    def test_member_names_exclude_failed(self):
        self.list_rsrcs.return_value = self.resources
        self.assertEqual(['C', 'A', 'D', 'F'],
                         list(self.insp.member_names(include_failed=False)))
        self.list_rsrcs.assert_called_once_with(self.ctx,
                                                dict(self.identity))

    def test_list_rsrc_caching(self):
        self.list_rsrcs.return_value = self.resources
        self.insp.size(include_failed=False)
        list(self.insp.member_names(include_failed=True))
        self.insp.size(include_failed=True)
        list(self.insp.member_names(include_failed=False))
        self.list_rsrcs.assert_called_once_with(self.ctx,
                                                dict(self.identity))
        self.get_tmpl.assert_not_called()

    def test_get_template(self):
        self.get_tmpl.return_value = self.template
        tmpl = self.insp.template()
        self.assertEqual(self.template, tmpl.t)
        self.get_tmpl.assert_called_once_with(self.ctx, dict(self.identity))

    def test_get_tmpl_caching(self):
        self.get_tmpl.return_value = self.template
        self.insp.template()
        self.insp.template()
        self.get_tmpl.assert_called_once_with(self.ctx, dict(self.identity))
        self.list_rsrcs.assert_not_called()
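

# ---------------------------------------------------------------------------
# Illustrative sketch (assumed simplification, not the grouputils
# implementation): GroupUtilsTest asserts that members come back ordered by
# creation time with the resource name as a tie-breaker, and that FAILED
# members are filtered out.  That contract reduces to a status filter plus a
# single sort key:

def sorted_members_sketch(resources, include_failed=False):
    """Order resources the way the get_members() assertions above expect.

    ``resources`` is any iterable of objects with ``created_time``, ``name``
    and ``status`` attributes; FAILED members are dropped unless requested.
    """
    members = [r for r in resources
               if include_failed or r.status != 'FAILED']
    return sorted(members, key=lambda r: (r.created_time, r.name))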
heat-10.0.2/heat/tests/test_template_format.py0000666000175000017500000001750113343562340021425 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

import mock
import re
import six
import yaml

from heat.common import config
from heat.common import exception
from heat.common import template_format
from heat.engine.clients.os import neutron
from heat.tests import common
from heat.tests import utils


class JsonToYamlTest(common.HeatTestCase):

    def setUp(self):
        super(JsonToYamlTest, self).setUp()
        self.expected_test_count = 2
        self.longMessage = True
        self.maxDiff = None

    def test_convert_all_templates(self):
        path = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                            'templates')

        template_test_count = 0
        for (json_str, yml_str) in self.convert_all_json_to_yaml(path):
            self.compare_json_vs_yaml(json_str, yml_str)
            template_test_count += 1
            if template_test_count >= self.expected_test_count:
                break

        self.assertGreaterEqual(
            template_test_count, self.expected_test_count,
            'Expected at least %d templates to be tested, not %d' %
            (self.expected_test_count, template_test_count))

    def compare_json_vs_yaml(self, json_str, yml_str):
        yml = template_format.parse(yml_str)

        self.assertEqual(u'2012-12-12', yml[u'HeatTemplateFormatVersion'])
        self.assertNotIn(u'AWSTemplateFormatVersion', yml)
        del(yml[u'HeatTemplateFormatVersion'])

        jsn = template_format.parse(json_str)

        if u'AWSTemplateFormatVersion' in jsn:
            del(jsn[u'AWSTemplateFormatVersion'])

        self.assertEqual(yml, jsn)

    def convert_all_json_to_yaml(self, dirpath):
        for path in os.listdir(dirpath):
            if not path.endswith('.template') and not path.endswith('.json'):
                continue
            with open(os.path.join(dirpath, path), 'r') as f:
                json_str = f.read()

            yml_str = template_format.convert_json_to_yaml(json_str)
            yield (json_str, yml_str)

    def test_integer_only_keys_get_translated_correctly(self):
        path = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                            'templates/WordPress_Single_Instance.template')
        with open(path, 'r') as f:
            json_str = f.read()
            yml_str = template_format.convert_json_to_yaml(json_str)
            match = re.search(r'[\s,{]\d+\s*:', yml_str)
            # Check that there are no matches of integer-only keys
            # lacking explicit quotes
            self.assertIsNone(match)


class YamlMinimalTest(common.HeatTestCase):

    def _parse_template(self, tmpl_str, msg_str):
        parse_ex = self.assertRaises(ValueError,
                                     template_format.parse, tmpl_str)
        self.assertIn(msg_str, six.text_type(parse_ex))

    def test_long_yaml(self):
        template = {'HeatTemplateFormatVersion': '2012-12-12'}
        config.cfg.CONF.set_override('max_template_size', 10)
        template['Resources'] = ['a'] * int(
            config.cfg.CONF.max_template_size / 3)
        limit = config.cfg.CONF.max_template_size
        long_yaml = yaml.safe_dump(template)
        self.assertGreater(len(long_yaml), limit)
        ex = self.assertRaises(exception.RequestLimitExceeded,
                               template_format.parse, long_yaml)
        msg = ('Request limit exceeded: Template size (%(actual_len)s '
               'bytes) exceeds maximum allowed size (%(limit)s bytes).') % {
                   'actual_len': len(str(long_yaml)),
                   'limit': config.cfg.CONF.max_template_size}
        self.assertEqual(msg, six.text_type(ex))

    def test_parse_no_version_format(self):
        yaml = ''
        self._parse_template(yaml, 'Template format version not found')
        yaml2 = '''Parameters: {}
Mappings: {}
Resources: {}
Outputs: {}
'''
        self._parse_template(yaml2, 'Template format version not found')

    def test_parse_string_template(self):
        tmpl_str = 'just string'
        msg = 'The template is not a JSON object or YAML mapping.'
        self._parse_template(tmpl_str, msg)

    def test_parse_invalid_yaml_and_json_template(self):
        tmpl_str = '{test'
        msg = 'line 1, column 1'
        self._parse_template(tmpl_str, msg)

    def test_parse_json_document(self):
        tmpl_str = '["foo" , "bar"]'
        msg = 'The template is not a JSON object or YAML mapping.'
        self._parse_template(tmpl_str, msg)

    def test_parse_empty_json_template(self):
        tmpl_str = '{}'
        msg = 'Template format version not found'
        self._parse_template(tmpl_str, msg)

    def test_parse_yaml_template(self):
        tmpl_str = 'heat_template_version: 2013-05-23'
        expected = {'heat_template_version': '2013-05-23'}
        self.assertEqual(expected, template_format.parse(tmpl_str))


class YamlParseExceptions(common.HeatTestCase):

    scenarios = [
        ('scanner', dict(raised_exception=yaml.scanner.ScannerError())),
        ('parser', dict(raised_exception=yaml.parser.ParserError())),
        ('reader', dict(raised_exception=yaml.reader.ReaderError(
            '', 42, six.b('x'), '', ''))),
    ]

    def test_parse_to_value_exception(self):
        text = 'not important'

        with mock.patch.object(yaml, 'load') as yaml_loader:
            yaml_loader.side_effect = self.raised_exception

            err = self.assertRaises(ValueError,
                                    template_format.parse, text,
                                    'file://test.yaml')
            self.assertIn('Error parsing template file://test.yaml',
                          six.text_type(err))


class JsonYamlResolvedCompareTest(common.HeatTestCase):

    def setUp(self):
        super(JsonYamlResolvedCompareTest, self).setUp()
        self.longMessage = True
        self.maxDiff = None

    def load_template(self, file_name):
        filepath = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                                'templates', file_name)
        f = open(filepath)
        t = template_format.parse(f.read())
        f.close()
        return t

    def compare_stacks(self, json_file, yaml_file, parameters):
        t1 = self.load_template(json_file)
        t2 = self.load_template(yaml_file)
        del(t1[u'AWSTemplateFormatVersion'])
        t1[u'HeatTemplateFormatVersion'] = t2[u'HeatTemplateFormatVersion']
        stack1 = utils.parse_stack(t1, parameters)
        stack2 = utils.parse_stack(t2, parameters)

        # compare resources separately so that resolved static data
        # is compared
        t1nr = dict(stack1.t.t)
        del(t1nr['Resources'])

        t2nr = dict(stack2.t.t)
        del(t2nr['Resources'])
        self.assertEqual(t1nr, t2nr)

        self.assertEqual(set(stack1), set(stack2))
        for key in stack1:
            self.assertEqual(stack1[key].t, stack2[key].t)

    def test_neutron_resolved(self):
        self.patchobject(neutron.NeutronClientPlugin, 'has_extension',
                         return_value=True)
        self.compare_stacks('Neutron.template', 'Neutron.yaml', {})

    def test_wordpress_resolved(self):
        self.compare_stacks('WordPress_Single_Instance.template',
                            'WordPress_Single_Instance.yaml',
                            {'KeyName': 'test'})
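

# ---------------------------------------------------------------------------
# Illustrative sketch (assumption, not the convert_json_to_yaml
# implementation): test_integer_only_keys_get_translated_correctly above
# checks that digit-only JSON keys never become bare YAML integers.  Since
# JSON object keys are always strings, a safe_dump round-trip keeps them
# quoted:

import json

import yaml


def json_to_yaml_sketch(json_str):
    """Parse a JSON document and re-serialize it as YAML.

    json.loads yields string keys, so yaml.safe_dump emits '42': rather
    than the bare integer key 42: that the regex in the test guards
    against.
    """
    return yaml.safe_dump(json.loads(json_str), default_flow_style=False)

# Example: json_to_yaml_sketch('{"42": "value"}') == "'42': value\n"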
heat-10.0.2/heat/tests/test_stack_resource.py0000666000175000017500000013002013343562352021253 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import contextlib
import json
import uuid

import mock
from oslo_config import cfg
from oslo_messaging import exceptions as msg_exceptions
from oslo_serialization import jsonutils
import six

from heat.common import exception
from heat.common import identifier
from heat.common import template_format
from heat.engine import node_data
from heat.engine import resource
from heat.engine.resources import stack_resource
from heat.engine import stack as parser
from heat.engine import template as templatem
from heat.objects import raw_template
from heat.objects import stack as stack_object
from heat.objects import stack_lock
from heat.rpc import api as rpc_api
from heat.tests import common
from heat.tests import generic_resource as generic_rsrc
from heat.tests import utils

ws_res_snippet = {"Type": "StackResourceType",
                  "Metadata": {
                      "key": "value",
                      "some": "more stuff"}}

param_template = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Parameters" : {
    "KeyName" : {
      "Description" : "KeyName",
      "Type" : "String",
      "Default" : "test"
    }
  },
  "Resources" : {
    "WebServer": {
      "Type": "GenericResource",
      "Properties": {}
    }
  }
}
'''

simple_template = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Parameters" : {},
  "Resources" : {
    "WebServer": {
      "Type": "GenericResource",
      "Properties": {}
    }
  }
}
'''

main_template = '''
heat_template_version: 2013-05-23
resources:
  volume_server:
    type: file://tmp/nested.yaml
'''

my_wrong_nested_template = '''
heat_template_version: 2013-05-23
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: F17-x86_64-gold
      flavor: m1.small
  volume:
    type: OS::Cinder::Volume
    properties:
      size: 10
      description: Volume for stack
  volume_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      volume_id: { get_resource: volume }
      instance_uuid: { get_resource: instance }
'''

resource_group_template = '''
heat_template_version: 2013-05-23
resources:
  my_resource_group:
    type: OS::Heat::ResourceGroup
    properties:
      resource_def:
        type: idontexist
'''

heat_autoscaling_group_template = '''
heat_template_version: 2013-05-23
resources:
  my_autoscaling_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      resource:
        type: idontexist
      desired_capacity: 2
      max_size: 4
      min_size: 1
'''

nova_server_template = '''
heat_template_version: 2013-05-23
resources:
  group_server:
    type: idontexist
'''


class MyImplementedStackResource(generic_rsrc.StackResourceType):
    def child_template(self):
        return self.nested_template

    def child_params(self):
        return self.nested_params


class StackResourceBaseTest(common.HeatTestCase):
    def setUp(self):
        super(StackResourceBaseTest, self).setUp()
        self.ws_resname = "provider_resource"
        self.empty_temp = templatem.Template(
            {'HeatTemplateFormatVersion': '2012-12-12',
             'Resources': {self.ws_resname: ws_res_snippet}})
        self.ctx = utils.dummy_context()
        self.parent_stack = parser.Stack(self.ctx, 'test_stack',
                                         self.empty_temp,
                                         stack_id=str(uuid.uuid4()),
                                         user_creds_id='uc123',
                                         stack_user_project_id='aprojectid')
        resource_defns = self.empty_temp.resource_definitions(
            self.parent_stack)
        self.parent_resource = generic_rsrc.StackResourceType(
            'test', resource_defns[self.ws_resname], self.parent_stack)


class StackResourceTest(StackResourceBaseTest):

    def setUp(self):
        super(StackResourceTest, self).setUp()
        self.templ = template_format.parse(param_template)
        self.simple_template = template_format.parse(simple_template)

        # to get the same json string from a dict for comparison,
        # make sort_keys True
        orig_dumps = jsonutils.dumps

        def sorted_dumps(*args, **kwargs):
            kwargs.setdefault('sort_keys', True)
            return orig_dumps(*args, **kwargs)
        patched_dumps = mock.patch(
            'oslo_serialization.jsonutils.dumps', sorted_dumps)
        patched_dumps.start()
        self.addCleanup(lambda: patched_dumps.stop())

    def test_child_template_defaults_to_not_implemented(self):
        self.assertRaises(NotImplementedError,
                          self.parent_resource.child_template)

    def test_child_params_defaults_to_not_implemented(self):
        self.assertRaises(NotImplementedError,
                          self.parent_resource.child_params)

    def test_preview_defaults_to_stack_resource_itself(self):
        preview = self.parent_resource.preview()
        self.assertIsInstance(preview, stack_resource.StackResource)

    def test_nested_stack_abandon(self):
        nest = mock.MagicMock()
        self.parent_resource.nested = nest
        nest.return_value.prepare_abandon.return_value = {'X': 'Y'}
        ret = self.parent_resource.prepare_abandon()
        nest.return_value.prepare_abandon.assert_called_once_with()
        self.assertEqual({'X': 'Y'}, ret)

    def test_nested_abandon_stack_not_found(self):
        self.parent_resource.nested = mock.MagicMock(return_value=None)
        ret = self.parent_resource.prepare_abandon()
        self.assertEqual({}, ret)

    def test_abandon_nested_sends_rpc_abandon(self):
        rpcc = mock.MagicMock()

        @contextlib.contextmanager
        def exc_filter(*args):
            try:
                yield
            except exception.NotFound:
                pass

        rpcc.ignore_error_by_name.side_effect = exc_filter
        self.parent_resource.rpc_client = rpcc
        self.parent_resource.resource_id = 'fake_id'

        self.parent_resource.prepare_abandon()
        self.parent_resource.delete_nested()
        rpcc.return_value.abandon_stack.assert_called_once_with(
            self.parent_resource.context, mock.ANY)
        rpcc.return_value.delete_stack.assert_not_called()

    def test_propagated_files(self):
        """Test passing of the files map in the top level to the child.

        Makes sure that the files map in the top-level stack is passed on
        to the child stack.
        """
        self.parent_stack.t.files["foo"] = "bar"
        parsed_t = self.parent_resource._parse_child_template(self.templ,
                                                              None)
        self.assertEqual({"foo": "bar"}, parsed_t.files.files)

    @mock.patch('heat.engine.environment.get_child_environment')
    @mock.patch.object(stack_resource.parser, 'Stack')
    def test_preview_with_implemented_child_resource(self, mock_stack_class,
                                                     mock_env_class):
        nested_stack = mock.Mock()
        mock_stack_class.return_value = nested_stack
        nested_stack.preview_resources.return_value = 'preview_nested_stack'
        mock_env_class.return_value = 'environment'
        template = templatem.Template(template_format.parse(param_template))
        parent_t = self.parent_stack.t
        resource_defns = parent_t.resource_definitions(self.parent_stack)
        parent_resource = MyImplementedStackResource(
            'test',
            resource_defns[self.ws_resname],
            self.parent_stack)
        params = {'KeyName': 'test'}
        parent_resource.set_template(template, params)
        validation_mock = mock.Mock(return_value=None)
        parent_resource._validate_nested_resources = validation_mock

        result = parent_resource.preview()
        mock_env_class.assert_called_once_with(
            self.parent_stack.env,
            params,
            child_resource_name='test',
            item_to_remove=None)
        self.assertEqual('preview_nested_stack', result)
        mock_stack_class.assert_called_once_with(
            mock.ANY,
            'test_stack-test',
            mock.ANY,
            timeout_mins=None,
            disable_rollback=True,
            parent_resource=parent_resource.name,
            owner_id=self.parent_stack.id,
            user_creds_id=self.parent_stack.user_creds_id,
            stack_user_project_id=self.parent_stack.stack_user_project_id,
            adopt_stack_data=None,
            nested_depth=1
        )

    @mock.patch('heat.engine.environment.get_child_environment')
    @mock.patch.object(stack_resource.parser, 'Stack')
    def test_preview_with_implemented_dict_child_resource(self,
                                                          mock_stack_class,
                                                          mock_env_class):
        nested_stack = mock.Mock()
        mock_stack_class.return_value = nested_stack
        nested_stack.preview_resources.return_value = 'preview_nested_stack'
        mock_env_class.return_value = 'environment'
        template_dict = template_format.parse(param_template)
        parent_t = self.parent_stack.t
        resource_defns = parent_t.resource_definitions(self.parent_stack)
        parent_resource = MyImplementedStackResource(
            'test',
            resource_defns[self.ws_resname],
            self.parent_stack)
        params = {'KeyName': 'test'}
        parent_resource.set_template(template_dict, params)
        validation_mock = mock.Mock(return_value=None)
        parent_resource._validate_nested_resources = validation_mock

        result = parent_resource.preview()
        mock_env_class.assert_called_once_with(
            self.parent_stack.env,
            params,
            child_resource_name='test',
            item_to_remove=None)
        self.assertEqual('preview_nested_stack', result)
        mock_stack_class.assert_called_once_with(
            mock.ANY,
            'test_stack-test',
            mock.ANY,
            timeout_mins=None,
            disable_rollback=True,
            parent_resource=parent_resource.name,
            owner_id=self.parent_stack.id,
            user_creds_id=self.parent_stack.user_creds_id,
            stack_user_project_id=self.parent_stack.stack_user_project_id,
            adopt_stack_data=None,
            nested_depth=1
        )

    def test_preview_propagates_files(self):
        self.parent_stack.t.files["foo"] = "bar"
        tmpl = self.parent_stack.t.t
        self.parent_resource.child_template = mock.Mock(return_value=tmpl)
        self.parent_resource.child_params = mock.Mock(return_value={})
        self.parent_resource.preview()
        self.stack = self.parent_resource.nested()
        self.assertEqual({"foo": "bar"}, self.stack.t.files.files)

    def test_preview_validates_nested_resources(self):
        parent_t = self.parent_stack.t
        resource_defns = parent_t.resource_definitions(self.parent_stack)
        stk_resource = MyImplementedStackResource(
            'test',
            resource_defns[self.ws_resname],
            self.parent_stack)
        stk_resource.child_params = mock.Mock(return_value={})
        stk_resource.child_template = mock.Mock(
            return_value=templatem.Template(self.simple_template,
                                            stk_resource.child_params))
        exc = exception.RequestLimitExceeded(message='Validation Failed')
        validation_mock = mock.Mock(side_effect=exc)
        stk_resource._validate_nested_resources = validation_mock

        self.assertRaises(exception.RequestLimitExceeded,
                          stk_resource.preview)

    def test_parent_stack_existing_of_nested_stack(self):
        parent_t = self.parent_stack.t
        resource_defns = parent_t.resource_definitions(self.parent_stack)
        stk_resource = MyImplementedStackResource(
            'test',
            resource_defns[self.ws_resname],
            self.parent_stack)
        stk_resource.child_params = mock.Mock(return_value={})
        stk_resource.child_template = mock.Mock(
            return_value=templatem.Template(self.simple_template,
                                            stk_resource.child_params))
        stk_resource._validate_nested_resources = mock.Mock()
        nest_stack = stk_resource._parse_nested_stack(
            "test_nest_stack", stk_resource.child_template(),
            stk_resource.child_params())
        self.assertEqual(self.parent_stack,
                         nest_stack.parent_resource._stack())

    def test_preview_dict_validates_nested_resources(self):
        parent_t = self.parent_stack.t
        resource_defns = parent_t.resource_definitions(self.parent_stack)
        stk_resource = MyImplementedStackResource(
            'test',
            resource_defns[self.ws_resname],
            self.parent_stack)
        stk_resource.child_params = mock.Mock(return_value={})
        stk_resource.child_template = mock.Mock(
            return_value=self.simple_template)
        exc = exception.RequestLimitExceeded(message='Validation Failed')
        validation_mock = mock.Mock(side_effect=exc)
        stk_resource._validate_nested_resources = validation_mock

        self.assertRaises(exception.RequestLimitExceeded,
                          stk_resource.preview)

    @mock.patch.object(stack_resource.parser, 'Stack')
    def test_preview_doesnt_validate_nested_stack(self, mock_stack_class):
        nested_stack = mock.Mock()
        mock_stack_class.return_value = nested_stack

        tmpl = self.parent_stack.t.t
        self.parent_resource.child_template = mock.Mock(return_value=tmpl)
        self.parent_resource.child_params = mock.Mock(return_value={})
        self.parent_resource.preview()

        self.assertFalse(nested_stack.validate.called)
        self.assertTrue(nested_stack.preview_resources.called)

    def test_validate_error_reference(self):
        stack_name = 'validate_error_reference'
        tmpl = template_format.parse(main_template)
        files = {'file://tmp/nested.yaml': my_wrong_nested_template}
        stack = parser.Stack(utils.dummy_context(), stack_name,
                             templatem.Template(tmpl, files=files))
        rsrc = stack['volume_server']
        raise_exc_msg = ('InvalidTemplateReference: '
                         'resources.volume_server: '
                         'The specified reference "instance" (in '
                         'volume_attachment.Properties.instance_uuid) is '
                         'incorrect.')
        exc = self.assertRaises(exception.StackValidationFailed,
                                rsrc.validate)
        self.assertEqual(raise_exc_msg, six.text_type(exc))

    def _test_validate_unknown_resource_type(self, stack_name,
                                             tmpl, resource_name):
        raise_exc_msg = 'The Resource Type (idontexist) could not be found.'
        stack = parser.Stack(utils.dummy_context(), stack_name, tmpl)
        rsrc = stack[resource_name]

        exc = self.assertRaises(exception.StackValidationFailed,
                                rsrc.validate)
        self.assertIn(raise_exc_msg, six.text_type(exc))

    def test_validate_resource_group(self):
        # test validate without nested template
        stack_name = 'validate_resource_group_template'
        t = template_format.parse(resource_group_template)
        tmpl = templatem.Template(t)
        self._test_validate_unknown_resource_type(stack_name, tmpl,
                                                  'my_resource_group')

        # validate with nested template
        res_prop = t['resources']['my_resource_group']['properties']
        res_prop['resource_def']['type'] = 'nova_server.yaml'
        files = {'nova_server.yaml': nova_server_template}
        tmpl = templatem.Template(t, files=files)
        self._test_validate_unknown_resource_type(stack_name, tmpl,
                                                  'my_resource_group')

    def test_validate_heat_autoscaling_group(self):
        # test validate without nested template
        stack_name = 'validate_heat_autoscaling_group_template'
        t = template_format.parse(heat_autoscaling_group_template)
        tmpl = templatem.Template(t)
        self._test_validate_unknown_resource_type(stack_name, tmpl,
                                                  'my_autoscaling_group')

        # validate with nested template
        res_prop = t['resources']['my_autoscaling_group']['properties']
        res_prop['resource']['type'] = 'nova_server.yaml'
        files = {'nova_server.yaml': nova_server_template}
        tmpl = templatem.Template(t, files=files)
        self._test_validate_unknown_resource_type(stack_name, tmpl,
                                                  'my_autoscaling_group')

    def test_get_attribute_autoscaling(self):
        t = template_format.parse(heat_autoscaling_group_template)
        tmpl = templatem.Template(t)
        stack = parser.Stack(utils.dummy_context(), 'test_att', tmpl)
        rsrc = stack['my_autoscaling_group']
        self.assertEqual(0, rsrc.FnGetAtt(rsrc.CURRENT_SIZE))

    def test_get_attribute_autoscaling_convg(self):
        t = template_format.parse(heat_autoscaling_group_template)
        tmpl = templatem.Template(t)
        cache_data = {'my_autoscaling_group': node_data.NodeData.from_dict({
            'uuid': mock.ANY,
            'id': mock.ANY,
            'action': 'CREATE',
            'status': 'COMPLETE',
            'attrs': {'current_size': 4}
        })}
        stack = parser.Stack(utils.dummy_context(), 'test_att', tmpl,
                             cache_data=cache_data)
        rsrc = stack.defn['my_autoscaling_group']
        self.assertEqual(4, rsrc.FnGetAtt('current_size'))

    def test__validate_nested_resources_checks_num_of_resources(self):
        stack_resource.cfg.CONF.set_override('max_resources_per_stack', 2)
        tmpl = {'HeatTemplateFormatVersion': '2012-12-12',
                'Resources': {'r': {'Type': 'OS::Heat::None'}}}
        template = stack_resource.template.Template(tmpl)
        root_resources = mock.Mock(return_value=2)
        self.parent_resource.stack.total_resources = root_resources

        self.assertRaises(exception.RequestLimitExceeded,
                          self.parent_resource._validate_nested_resources,
                          template)

    def test_load_nested_ok(self):
        self.parent_resource._nested = None
        self.parent_resource.resource_id = 319
        self.m.StubOutWithMock(parser.Stack, 'load')
        parser.Stack.load(self.parent_resource.context,
                          self.parent_resource.resource_id).AndReturn('s')
        self.m.ReplayAll()
        self.parent_resource.nested()
        self.m.VerifyAll()

    def test_load_nested_non_exist(self):
        self.parent_resource._nested = None
        self.parent_resource.resource_id = '90-8'
        self.m.StubOutWithMock(parser.Stack, 'load')
        parser.Stack.load(self.parent_resource.context,
                          self.parent_resource.resource_id).AndRaise(
                              exception.NotFound)
        self.m.ReplayAll()
        self.assertIsNone(self.parent_resource.nested())
        self.m.VerifyAll()

    def test_load_nested_cached(self):
        self.parent_resource._nested = 'gotthis'
        self.assertEqual('gotthis', self.parent_resource.nested())

    def test_delete_nested_none_nested_stack(self):
        self.parent_resource._nested = None
        self.assertIsNone(self.parent_resource.delete_nested())

    def test_delete_nested_not_found_nested_stack(self):
        self.parent_resource.resource_id = 'fake_id'
        rpcc = mock.MagicMock()
        self.parent_resource.rpc_client = rpcc

        @contextlib.contextmanager
        def exc_filter(*args):
            try:
                yield
            except exception.NotFound:
                pass

        rpcc.return_value.ignore_error_by_name.side_effect = exc_filter
        rpcc.return_value.delete_stack = mock.Mock(
            side_effect=exception.NotFound())
        self.assertIsNone(self.parent_resource.delete_nested())
        rpcc.return_value.delete_stack.assert_called_once_with(
            self.parent_resource.context, mock.ANY, cast=False)

    def test_need_update_for_nested_resource(self):
        """Test that a resource with a nested stack needs an update.

        A resource that is in a CREATE or UPDATE state and has a nested
        stack should need an update.
        """
        self.parent_resource.action = self.parent_resource.CREATE
        self.parent_resource._rpc_client = mock.MagicMock()
        self.parent_resource._rpc_client.show_stack.return_value = [
            {'stack_action': self.parent_resource.CREATE,
             'stack_status': self.parent_resource.COMPLETE}]

        need_update = self.parent_resource._needs_update(
            self.parent_resource.t,
            self.parent_resource.t,
            self.parent_resource.properties,
            self.parent_resource.properties,
            self.parent_resource)
        self.assertTrue(need_update)

    def test_need_update_in_failed_state_for_nested_resource(self):
        """Test that a resource with no nested stack needs replacement.

        A resource that is in a FAILED state and has no nested stack should
        need an update with UpdateReplace.
        """
        self.parent_resource.state_set(self.parent_resource.INIT,
                                       self.parent_resource.FAILED)
        self.parent_resource._nested = None
        self.assertRaises(resource.UpdateReplace,
                          self.parent_resource._needs_update,
                          self.parent_resource.t,
                          self.parent_resource.t,
                          self.parent_resource.properties,
                          self.parent_resource.properties,
                          self.parent_resource)

    def test_need_update_in_init_complete_state_for_nested_resource(self):
        """Test that a resource with no nested stack needs replacement.

        A resource that is in an INIT_COMPLETE state and has no nested
        stack should need an update with UpdateReplace.
        """
        self.parent_resource.state_set(self.parent_resource.INIT,
                                       self.parent_resource.COMPLETE)
        self.parent_resource._nested = None
        self.assertRaises(resource.UpdateReplace,
                          self.parent_resource._needs_update,
                          self.parent_resource.t,
                          self.parent_resource.t,
                          self.parent_resource.properties,
                          self.parent_resource.properties,
                          self.parent_resource)

    def test_need_update_in_check_failed_state_after_stack_check(self):
        self.parent_resource.resource_id = 'fake_id'
        self.parent_resource.state_set(self.parent_resource.CHECK,
                                       self.parent_resource.FAILED)
        self.nested = mock.MagicMock()
        self.nested.name = 'nested-stack'
        self.parent_resource.nested = mock.MagicMock(
            return_value=self.nested)
        self.parent_resource._nested = self.nested
        self.parent_resource._rpc_client = mock.MagicMock()
        self.parent_resource._rpc_client.show_stack.return_value = [
            {'stack_action': self.parent_resource.CHECK,
             'stack_status': self.parent_resource.FAILED}]

        self.assertTrue(
            self.parent_resource._needs_update(
                self.parent_resource.t,
                self.parent_resource.t,
                self.parent_resource.properties,
                self.parent_resource.properties,
                self.parent_resource))

    def test_need_update_check_failed_state_after_mark_unhealthy(self):
        self.parent_resource.resource_id = 'fake_id'
        self.parent_resource.state_set(self.parent_resource.CHECK,
                                       self.parent_resource.FAILED)
        self.nested = mock.MagicMock()
        self.nested.name = 'nested-stack'
        self.parent_resource.nested = mock.MagicMock(
            return_value=self.nested)
        self.parent_resource._nested = self.nested
        self.parent_resource._rpc_client = mock.MagicMock()
        self.parent_resource._rpc_client.show_stack.return_value = [
            {'stack_action': self.parent_resource.CREATE,
             'stack_status': self.parent_resource.COMPLETE}]

        self.assertRaises(resource.UpdateReplace,
                          self.parent_resource._needs_update,
                          self.parent_resource.t,
                          self.parent_resource.t,
                          self.parent_resource.properties,
                          self.parent_resource.properties,
                          self.parent_resource)


class StackResourceLimitTest(StackResourceBaseTest):
    scenarios = [
        ('3_4_0', dict(root=3, templ=4, nested=0, max=10, error=False)),
        ('3_8_0', dict(root=3, templ=8, nested=0, max=10, error=True)),
        ('3_8_2', dict(root=3, templ=8, nested=2, max=10, error=True)),
        ('3_5_2', dict(root=3, templ=5, nested=2, max=10, error=False)),
        ('3_6_2', dict(root=3, templ=6, nested=2, max=10, error=True)),
        ('3_12_2', dict(root=3, templ=12, nested=2, max=10, error=True))]

    def setUp(self):
        super(StackResourceLimitTest, self).setUp()
        self.res = self.parent_resource

    def test_resource_limit(self):
        # mock root total_resources
        total_resources = self.root + self.nested
        parser.Stack.total_resources = mock.Mock(
            return_value=total_resources)

        # setup the config max
        cfg.CONF.set_default('max_resources_per_stack', self.max)

        # fake the template
        templ = mock.MagicMock()
        templ.__getitem__.return_value = range(self.templ)
        templ.RESOURCES = 'Resources'
        if self.error:
            self.assertRaises(exception.RequestLimitExceeded,
                              self.res._validate_nested_resources, templ)
        else:
            self.assertIsNone(self.res._validate_nested_resources(templ))


class StackResourceAttrTest(StackResourceBaseTest):
    def test_get_output_ok(self):
        self.parent_resource.nested_identifier = mock.Mock()
        self.parent_resource.nested_identifier.return_value = {'foo': 'bar'}

        self.parent_resource._rpc_client = mock.MagicMock()
        output = {'outputs': [{'output_key': 'key',
                               'output_value': 'value'}]}
        self.parent_resource._rpc_client.show_stack.return_value = [output]

        self.assertEqual("value", self.parent_resource.get_output("key"))

    def test_get_output_key_not_found(self):
        self.parent_resource.nested_identifier = mock.Mock()
        self.parent_resource.nested_identifier.return_value = {'foo': 'bar'}

        self.parent_resource._rpc_client = mock.MagicMock()
        output = {'outputs': []}
        self.parent_resource._rpc_client.show_stack.return_value = [output]

        self.assertRaises(exception.NotFound,
                          self.parent_resource.get_output,
                          "key")

    def test_get_output_key_no_outputs_from_rpc(self):
        self.parent_resource.nested_identifier = mock.Mock()
        self.parent_resource.nested_identifier.return_value = {'foo': 'bar'}

        self.parent_resource._rpc_client = mock.MagicMock()
        output = {}
        self.parent_resource._rpc_client.show_stack.return_value = [output]

        self.assertRaises(exception.NotFound,
                          self.parent_resource.get_output,
                          "key")

    def test_resolve_attribute_string(self):
        self.parent_resource.nested_identifier = mock.Mock()
        self.parent_resource.nested_identifier.return_value = {'foo': 'bar'}

        self.parent_resource._rpc_client = mock.MagicMock()
        output = {'outputs': [{'output_key': 'key',
                               'output_value': 'value'}]}
        self.parent_resource._rpc_client.show_stack.return_value = [output]

        self.assertEqual('value',
                         self.parent_resource._resolve_attribute("key"))

    def test_resolve_attribute_dict(self):
        self.parent_resource.nested_identifier = mock.Mock()
        self.parent_resource.nested_identifier.return_value = {'foo': 'bar'}

        self.parent_resource._rpc_client = mock.MagicMock()
        output = {'outputs': [{'output_key': 'key',
                               'output_value': {'a': 1, 'b': 2}}]}
        self.parent_resource._rpc_client.show_stack.return_value = [output]

        self.assertEqual({'a': 1, 'b': 2},
                         self.parent_resource._resolve_attribute("key"))

    def test_resolve_attribute_list(self):
        self.parent_resource.nested_identifier = mock.Mock()
        self.parent_resource.nested_identifier.return_value = {'foo': 'bar'}

        self.parent_resource._rpc_client = mock.MagicMock()
        output = {'outputs': [{'output_key': 'key',
                               'output_value': [1, 2, 3]}]}
        self.parent_resource._rpc_client.show_stack.return_value = [output]

        self.assertEqual([1, 2, 3],
                         self.parent_resource._resolve_attribute("key"))

    def test_validate_nested_stack(self):
        self.parent_resource.child_template = mock.Mock(return_value='foo')
        self.parent_resource.child_params = mock.Mock(return_value={})
        nested = self.m.CreateMockAnything()
        nested.validate().AndReturn(True)
        self.m.StubOutWithMock(stack_resource.StackResource,
                               '_parse_nested_stack')
        name = '%s-%s' % (self.parent_stack.name, self.parent_resource.name)
        stack_resource.StackResource._parse_nested_stack(
            name, 'foo', {}).AndReturn(nested)
        self.m.ReplayAll()

        self.parent_resource.validate_nested_stack()
        self.assertFalse(nested.strict_validate)
        self.m.VerifyAll()

    def test_validate_assertion_exception_rethrow(self):
        expected_message = 'Expected Assertion Error'
        self.parent_resource.child_template = mock.Mock(return_value='foo')
        self.parent_resource.child_params = mock.Mock(return_value={})
        self.m.StubOutWithMock(stack_resource.StackResource,
                               '_parse_nested_stack')
        name = '%s-%s' % (self.parent_stack.name, self.parent_resource.name)
        stack_resource.StackResource._parse_nested_stack(
            name, 'foo', {}).AndRaise(AssertionError(expected_message))
        self.m.ReplayAll()

        exc = self.assertRaises(AssertionError,
                                self.parent_resource.validate_nested_stack)
        self.assertEqual(expected_message, six.text_type(exc))
        self.m.VerifyAll()
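

# ---------------------------------------------------------------------------
# Illustrative sketch (assumed names, not the StackResource implementation):
# the check_*_complete scenarios below reduce to classifying the nested
# stack's (action, status) pair against the action being waited on:

def nested_state_sketch(expected_action, action, status):
    """Classify a nested stack state the way the scenarios below assert.

    Returns True when done, False to keep polling, and raises RuntimeError
    as a stand-in for Heat's ResourceFailure/ResourceUnknownStatus errors.
    """
    if action != expected_action:
        return False    # e.g. a stale COMPLETE left over from another action
    if status == 'IN_PROGRESS':
        return False
    if status == 'COMPLETE':
        return True
    raise RuntimeError('nested stack %s_%s' % (action, status))

# Example: nested_state_sketch('CREATE', 'CREATE', 'COMPLETE') -> True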
setUp(self):
        super(StackResourceCheckCompleteTest, self).setUp()
        self.status = [self.action.upper(), None, None, None]
        self.mock_status = self.patchobject(stack_object.Stack, 'get_status')
        self.mock_status.return_value = self.status

    def test_state_ok(self):
        """Test case when check_create_complete should return True.

        check_create_complete should return True when the create task is
        done and the nested stack is in the (,COMPLETE) state.
        """
        self.mock_lock = self.patchobject(stack_lock.StackLock,
                                          'get_engine_id')
        self.mock_lock.return_value = None
        self.status[1] = 'COMPLETE'
        complete = getattr(self.parent_resource,
                           'check_%s_complete' % self.action)
        self.assertIs(True, complete(None))
        self.mock_status.assert_called_once_with(
            self.parent_resource.context, self.parent_resource.resource_id)
        self.mock_lock.assert_called_once_with(
            self.parent_resource.context, self.parent_resource.resource_id)

    def test_state_err(self):
        """Test case when check_create_complete should raise an error.

        check_create_complete should raise an error when the create task is
        done but the nested stack is not in the (,COMPLETE) state.
        """
        self.status[1] = 'FAILED'
        reason = ('Resource %s failed: ValueError: '
                  'resources.%s: broken on purpose' % (
                      self.action.upper(), 'child_res'))
        exp_path = 'resources.test.resources.child_res'
        exp = 'ValueError: %s: broken on purpose' % exp_path
        self.status[2] = reason
        complete = getattr(self.parent_resource,
                           'check_%s_complete' % self.action)
        exc = self.assertRaises(exception.ResourceFailure, complete, None)
        self.assertEqual(exp, six.text_type(exc))
        self.mock_status.assert_called_once_with(
            self.parent_resource.context, self.parent_resource.resource_id)

    def test_state_unknown(self):
        """Test case when check_create_complete should raise an error.

        check_create_complete should raise an error when the create task is
        done but the nested stack is in an unrecognized (unknown) status.
        """
        self.status[1] = 'WTF'
        self.status[2] = 'broken on purpose'
        complete = getattr(self.parent_resource,
                           'check_%s_complete' % self.action)
        self.assertRaises(exception.ResourceUnknownStatus, complete, None)
        self.mock_status.assert_called_once_with(
            self.parent_resource.context, self.parent_resource.resource_id)

    def test_in_progress(self):
        self.status[1] = 'IN_PROGRESS'
        complete = getattr(self.parent_resource,
                           'check_%s_complete' % self.action)
        self.assertFalse(complete(None))
        self.mock_status.assert_called_once_with(
            self.parent_resource.context, self.parent_resource.resource_id)

    def test_update_not_started(self):
        if self.action != 'update':
            # only valid for updates at the moment.
return self.status[1] = 'COMPLETE' self.status[3] = 'test' cookie = {'previous': {'state': ('UPDATE', 'COMPLETE'), 'updated_at': 'test'}} complete = getattr(self.parent_resource, 'check_%s_complete' % self.action) self.assertFalse(complete(cookie=cookie)) self.mock_status.assert_called_once_with( self.parent_resource.context, self.parent_resource.resource_id) def test_wrong_action(self): self.status[0] = 'COMPLETE' complete = getattr(self.parent_resource, 'check_%s_complete' % self.action) self.assertFalse(complete(None)) self.mock_status.assert_called_once_with( self.parent_resource.context, self.parent_resource.resource_id) class WithTemplateTest(StackResourceBaseTest): scenarios = [ ('basic', dict(params={}, timeout_mins=None, adopt_data=None)), ('params', dict(params={'foo': 'fee'}, timeout_mins=None, adopt_data=None)), ('timeout', dict(params={}, timeout_mins=53, adopt_data=None)), ('adopt', dict(params={}, timeout_mins=None, adopt_data={'template': 'foo', 'environment': 'eee'})), ] class IntegerMatch(object): def __eq__(self, other): if getattr(self, 'match', None) is not None: return other == self.match if not isinstance(other, six.integer_types): return False self.match = other return True def __ne__(self, other): return not self.__eq__(other) def test_create_with_template(self): child_env = {'parameter_defaults': {}, 'event_sinks': [], 'parameters': self.params, 'resource_registry': {'resources': {}}} self.parent_resource.child_params = mock.Mock( return_value=self.params) res_name = self.parent_resource.physical_resource_name() rpcc = mock.Mock() self.parent_resource.rpc_client = rpcc rpcc.return_value._create_stack.return_value = {'stack_id': 'pancakes'} self.parent_resource.create_with_template( self.empty_temp, user_params=self.params, timeout_mins=self.timeout_mins, adopt_data=self.adopt_data) if self.adopt_data is not None: adopt_data_str = json.dumps(self.adopt_data) tmpl_args = { 'template': self.empty_temp.t, 'params': child_env, 'files': {}, } else: adopt_data_str = None tmpl_args = { 'template_id': self.IntegerMatch(), 'template': None, 'params': None, 'files': None, } rpcc.return_value._create_stack.assert_called_once_with( self.ctx, stack_name=res_name, args={rpc_api.PARAM_DISABLE_ROLLBACK: True, rpc_api.PARAM_ADOPT_STACK_DATA: adopt_data_str, rpc_api.PARAM_TIMEOUT: self.timeout_mins}, environment_files=None, stack_user_project_id='aprojectid', parent_resource_name='test', user_creds_id='uc123', owner_id=self.parent_stack.id, nested_depth=1, **tmpl_args) def test_create_with_template_failure(self): class StackValidationFailed_Remote(exception.StackValidationFailed): pass child_env = {'parameter_defaults': {}, 'event_sinks': [], 'parameters': self.params, 'resource_registry': {'resources': {}}} self.parent_resource.child_params = mock.Mock( return_value=self.params) res_name = self.parent_resource.physical_resource_name() rpcc = mock.Mock() self.parent_resource.rpc_client = rpcc remote_exc = StackValidationFailed_Remote(message='oops') rpcc.return_value._create_stack.side_effect = remote_exc self.assertRaises(exception.ResourceFailure, self.parent_resource.create_with_template, self.empty_temp, user_params=self.params, timeout_mins=self.timeout_mins, adopt_data=self.adopt_data) if self.adopt_data is not None: adopt_data_str = json.dumps(self.adopt_data) tmpl_args = { 'template': self.empty_temp.t, 'params': child_env, 'files': {}, } else: adopt_data_str = None tmpl_args = { 'template_id': self.IntegerMatch(), 'template': None, 'params': None, 'files': None, } 
rpcc.return_value._create_stack.assert_called_once_with( self.ctx, stack_name=res_name, args={rpc_api.PARAM_DISABLE_ROLLBACK: True, rpc_api.PARAM_ADOPT_STACK_DATA: adopt_data_str, rpc_api.PARAM_TIMEOUT: self.timeout_mins}, environment_files=None, stack_user_project_id='aprojectid', parent_resource_name='test', user_creds_id='uc123', owner_id=self.parent_stack.id, nested_depth=1, **tmpl_args) if self.adopt_data is None: stored_tmpl_id = tmpl_args['template_id'].match self.assertIsNotNone(stored_tmpl_id) self.assertRaises(exception.NotFound, raw_template.RawTemplate.get_by_id, self.ctx, stored_tmpl_id) def test_update_with_template(self): if self.adopt_data is not None: return ident = identifier.HeatIdentifier(self.ctx.tenant_id, 'fake_name', 'pancakes') self.parent_resource.resource_id = ident.stack_id self.parent_resource.nested_identifier = mock.Mock(return_value=ident) self.parent_resource.child_params = mock.Mock( return_value=self.params) rpcc = mock.Mock() self.parent_resource.rpc_client = rpcc rpcc.return_value._update_stack.return_value = dict(ident) status = ('CREATE', 'COMPLETE', '', 'now_time') with self.patchobject(stack_object.Stack, 'get_status', return_value=status): self.parent_resource.update_with_template( self.empty_temp, user_params=self.params, timeout_mins=self.timeout_mins) rpcc.return_value._update_stack.assert_called_once_with( self.ctx, stack_identity=dict(ident), template_id=self.IntegerMatch(), template=None, params=None, files=None, args={rpc_api.PARAM_TIMEOUT: self.timeout_mins, rpc_api.PARAM_CONVERGE: False}) def test_update_with_template_failure(self): class StackValidationFailed_Remote(exception.StackValidationFailed): pass if self.adopt_data is not None: return ident = identifier.HeatIdentifier(self.ctx.tenant_id, 'fake_name', 'pancakes') self.parent_resource.resource_id = ident.stack_id self.parent_resource.nested_identifier = mock.Mock(return_value=ident) self.parent_resource.child_params = mock.Mock( return_value=self.params) rpcc = mock.Mock() self.parent_resource.rpc_client = rpcc remote_exc = StackValidationFailed_Remote(message='oops') rpcc.return_value._update_stack.side_effect = remote_exc status = ('CREATE', 'COMPLETE', '', 'now_time') with self.patchobject(stack_object.Stack, 'get_status', return_value=status): self.assertRaises(exception.ResourceFailure, self.parent_resource.update_with_template, self.empty_temp, user_params=self.params, timeout_mins=self.timeout_mins) template_id = self.IntegerMatch() rpcc.return_value._update_stack.assert_called_once_with( self.ctx, stack_identity=dict(ident), template_id=template_id, template=None, params=None, files=None, args={rpc_api.PARAM_TIMEOUT: self.timeout_mins, rpc_api.PARAM_CONVERGE: False}) self.assertIsNotNone(template_id.match) self.assertRaises(exception.NotFound, raw_template.RawTemplate.get_by_id, self.ctx, template_id.match) class RaiseLocalException(StackResourceBaseTest): def test_heat_exception(self): local = exception.StackValidationFailed(message='test') self.assertRaises(exception.StackValidationFailed, self.parent_resource.translate_remote_exceptions, local) def test_messaging_timeout(self): local = msg_exceptions.MessagingTimeout('took too long') self.assertRaises(msg_exceptions.MessagingTimeout, self.parent_resource.translate_remote_exceptions, local) def test_remote_heat_ex(self): class StackValidationFailed_Remote(exception.StackValidationFailed): pass local = StackValidationFailed_Remote(message='test') self.assertRaises(exception.ResourceFailure, 
self.parent_resource.translate_remote_exceptions, local) heat-10.0.2/heat/tests/generic_resource.py0000666000175000017500000003207013343562351020526 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from oslo_log import log as logging from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine.resources import signal_responder from heat.engine.resources import stack_resource from heat.engine.resources import stack_user from heat.engine import support LOG = logging.getLogger(__name__) class GenericResource(resource.Resource): """Dummy resource for use in tests.""" properties_schema = {} attributes_schema = collections.OrderedDict([ ('foo', attributes.Schema('A generic attribute')), ('Foo', attributes.Schema('Another generic attribute'))]) @classmethod def is_service_available(cls, context): return (True, None) def handle_create(self): LOG.warning('Creating generic resource (Type "%s")', self.type()) def handle_update(self, json_snippet, tmpl_diff, prop_diff): LOG.warning('Updating generic resource (Type "%s")', self.type()) def handle_delete(self): LOG.warning('Deleting generic resource (Type "%s")', self.type()) def _resolve_attribute(self, name): return self.name def handle_suspend(self): LOG.warning('Suspending generic resource (Type "%s")', self.type()) def handle_resume(self): LOG.warning(('Resuming generic resource (Type "%s")'), self.type()) class CancellableResource(GenericResource): def check_create_complete(self, cookie): return True def handle_create_cancel(self, cookie): LOG.warning('Cancelling create generic resource (Type "%s")', self.type()) def check_update_complete(self, cookie): return True def handle_update_cancel(self, cookie): LOG.warning('Cancelling update generic resource (Type "%s")', self.type()) class MultiStepResource(GenericResource): properties_schema = { 'create_steps': properties.Schema(properties.Schema.INTEGER, default=2), 'update_steps': properties.Schema(properties.Schema.INTEGER, default=2, update_allowed=True), 'delete_steps': properties.Schema(properties.Schema.INTEGER, default=2, update_allowed=True), } def handle_create(self): super(MultiStepResource, self).handle_create() return [None] * self.properties['create_steps'] def check_create_complete(self, cookie): cookie.pop() return not cookie def handle_update(self, json_snippet, tmpl_diff, prop_diff): super(MultiStepResource, self).handle_update(json_snippet, tmpl_diff, prop_diff) return [None] * self.properties['update_steps'] def check_update_complete(self, cookie): cookie.pop() return not cookie def handle_delete(self): super(MultiStepResource, self).handle_delete() return [None] * self.properties['delete_steps'] def check_delete_complete(self, cookie): cookie.pop() return not cookie class ResWithShowAttr(GenericResource): def _show_resource(self): return {'foo': self.name, 'Foo': self.name, 'Another': self.name} class ResWithStringPropAndAttr(GenericResource): 
properties_schema = { 'a_string': properties.Schema(properties.Schema.STRING)} attributes_schema = {'string': attributes.Schema('A string')} def _resolve_attribute(self, name): try: return self.properties["a_%s" % name] except KeyError: return None class ResWithComplexPropsAndAttrs(ResWithStringPropAndAttr): properties_schema = { 'a_string': properties.Schema(properties.Schema.STRING), 'a_list': properties.Schema(properties.Schema.LIST), 'a_map': properties.Schema(properties.Schema.MAP), 'an_int': properties.Schema(properties.Schema.INTEGER)} attributes_schema = {'list': attributes.Schema('A list'), 'map': attributes.Schema('A map'), 'string': attributes.Schema('A string')} update_allowed_properties = ('an_int',) def _resolve_attribute(self, name): try: return self.properties["a_%s" % name] except KeyError: return None class ResourceWithProps(GenericResource): properties_schema = { 'Foo': properties.Schema(properties.Schema.STRING), 'FooInt': properties.Schema(properties.Schema.INTEGER)} atomic_key = None class ResourceWithPropsRefPropOnDelete(ResourceWithProps): def check_delete_complete(self, cookie): return self.properties['FooInt'] is not None class ResourceWithPropsRefPropOnValidate(ResourceWithProps): def validate(self): super(ResourceWithPropsRefPropOnValidate, self).validate() self.properties['FooInt'] is not None class ResourceWithPropsAndAttrs(ResourceWithProps): attributes_schema = {'Bar': attributes.Schema('Something.')} class ResourceWithResourceID(GenericResource): properties_schema = {'ID': properties.Schema(properties.Schema.STRING)} def handle_create(self): super(ResourceWithResourceID, self).handle_create() self.resource_id_set(self.properties.get('ID')) def handle_delete(self): self.mox_resource_id(self.resource_id) def mox_resource_id(self, resource_id): pass class ResourceWithComplexAttributes(GenericResource): attributes_schema = { 'list': attributes.Schema('A list'), 'flat_dict': attributes.Schema('A flat dictionary'), 'nested_dict': attributes.Schema('A nested dictionary'), 'none': attributes.Schema('A None') } list = ['foo', 'bar'] flat_dict = {'key1': 'val1', 'key2': 'val2', 'key3': 'val3'} nested_dict = {'list': [1, 2, 3], 'string': 'abc', 'dict': {'a': 1, 'b': 2, 'c': 3}} def _resolve_attribute(self, name): if name == 'list': return self.list if name == 'flat_dict': return self.flat_dict if name == 'nested_dict': return self.nested_dict if name == 'none': return None class ResourceWithRequiredProps(GenericResource): properties_schema = {'Foo': properties.Schema(properties.Schema.STRING, required=True)} class ResourceWithMultipleRequiredProps(GenericResource): properties_schema = {'Foo1': properties.Schema(properties.Schema.STRING, required=True), 'Foo2': properties.Schema(properties.Schema.STRING, required=True), 'Foo3': properties.Schema(properties.Schema.STRING, required=True)} class ResourceWithRequiredPropsAndEmptyAttrs(GenericResource): properties_schema = {'Foo': properties.Schema(properties.Schema.STRING, required=True)} attributes_schema = {} class SignalResource(signal_responder.SignalResponder): SIGNAL_TRANSPORTS = ( CFN_SIGNAL, TEMP_URL_SIGNAL, HEAT_SIGNAL, NO_SIGNAL, ZAQAR_SIGNAL ) = ( 'CFN_SIGNAL', 'TEMP_URL_SIGNAL', 'HEAT_SIGNAL', 'NO_SIGNAL', 'ZAQAR_SIGNAL' ) properties_schema = { 'signal_transport': properties.Schema(properties.Schema.STRING, default='CFN_SIGNAL')} attributes_schema = {'AlarmUrl': attributes.Schema('Get a signed webhook'), 'signal': attributes.Schema('Get a signal')} def handle_create(self): self.password = 'password' 
super(SignalResource, self).handle_create() self.resource_id_set(self._get_user_id()) def handle_signal(self, details=None): LOG.warning('Signaled resource (Type "%(type)s") %(details)s', {'type': self.type(), 'details': details}) def _resolve_attribute(self, name): if self.resource_id is not None: if name == 'AlarmUrl': return self._get_signal().get('alarm_url') elif name == 'signal': return self._get_signal() class StackUserResource(stack_user.StackUser): properties_schema = {} attributes_schema = {} def handle_create(self): super(StackUserResource, self).handle_create() self.resource_id_set(self._get_user_id()) class ResourceWithCustomConstraint(GenericResource): properties_schema = { 'Foo': properties.Schema( properties.Schema.STRING, constraints=[constraints.CustomConstraint('neutron.network')])} class ResourceWithAttributeType(GenericResource): attributes_schema = { 'attr1': attributes.Schema('A generic attribute', type=attributes.Schema.STRING), 'attr2': attributes.Schema('Another generic attribute', type=attributes.Schema.MAP) } def _resolve_attribute(self, name): if name == 'attr1': return "valid_sting" elif name == 'attr2': return "invalid_type" class ResourceWithDefaultClientName(resource.Resource): default_client_name = 'sample' properties_schema = {} class ResourceWithDefaultClientNameExt(resource.Resource): default_client_name = 'sample' required_service_extension = 'foo' properties_schema = {} class ResourceWithFnGetAttType(GenericResource): def get_attribute(self, name): pass class ResourceWithFnGetRefIdType(ResourceWithProps): def get_reference_id(self): return 'ID-%s' % self.name class ResourceWithListProp(ResourceWithFnGetRefIdType): properties_schema = {"listprop": properties.Schema(properties.Schema.LIST)} class ResourceWithHiddenPropertyAndAttribute(GenericResource): properties_schema = { "supported": properties.Schema(properties.Schema.LIST, "Supported property."), "hidden": properties.Schema( properties.Schema.LIST, "Hidden property.", support_status=support.SupportStatus(status=support.HIDDEN)) } attributes_schema = { 'supported': attributes.Schema(type=attributes.Schema.STRING, description='Supported attribute.'), 'hidden': attributes.Schema( type=attributes.Schema.STRING, description='Hidden attribute', support_status=support.SupportStatus(status=support.HIDDEN)) } class StackResourceType(stack_resource.StackResource, GenericResource): def physical_resource_name(self): return "cb2f2b28-a663-4683-802c-4b40c916e1ff" def set_template(self, nested_template, params): self.nested_template = nested_template self.nested_params = params def handle_create(self): return self.create_with_template(self.nested_template, self.nested_params) def handle_adopt(self, resource_data): return self.create_with_template(self.nested_template, self.nested_params, adopt_data=resource_data) def handle_delete(self): self.delete_nested() def has_nested(self): if self.nested() is not None: return True return False class ResourceWithRestoreType(ResWithComplexPropsAndAttrs): def handle_restore(self, defn, data): props = dict( (key, value) for (key, value) in self.properties.data.items() if value is not None) value = data['resource_data']['a_string'] props['a_string'] = value return defn.freeze(properties=props) def handle_delete_snapshot(self, snapshot): return snapshot['resource_data'].get('a_string') class ResourceTypeUnSupportedLiberty(GenericResource): support_status = support.SupportStatus( version='5.0.0', status=support.UNSUPPORTED) class ResourceTypeSupportedKilo(GenericResource): 
support_status = support.SupportStatus( version='2015.1') class ResourceTypeHidden(GenericResource): support_status = support.SupportStatus( version='7.0.0', status=support.HIDDEN) heat-10.0.2/heat/tests/test_rpc_client.py0000666000175000017500000004133613343562352020372 0ustar zuulzuul00000000000000# # Copyright 2012, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit Tests for heat.rpc.client """ import copy import mock from oslo_messaging._drivers import common as rpc_common from oslo_utils import reflection from heat.common import exception from heat.common import identifier from heat.rpc import client as rpc_client from heat.tests import common from heat.tests import utils class EngineRpcAPITestCase(common.HeatTestCase): def setUp(self): super(EngineRpcAPITestCase, self).setUp() self.identity = dict(identifier.HeatIdentifier('engine_test_tenant', '6', 'wordpress')) self.rpcapi = rpc_client.EngineClient() def _to_remote_error(self, error): """Converts the given exception to the one with the _Remote suffix.""" exc_info = (type(error), error, None) serialized = rpc_common.serialize_remote_exception(exc_info) remote_error = rpc_common.deserialize_remote_exception( serialized, ["heat.common.exception"]) return remote_error def test_local_error_name(self): ex = exception.NotFound() self.assertEqual('NotFound', self.rpcapi.local_error_name(ex)) exr = self._to_remote_error(ex) self.assertEqual('NotFound_Remote', reflection.get_class_name(exr, fully_qualified=False)) self.assertEqual('NotFound', self.rpcapi.local_error_name(exr)) def test_ignore_error_by_name(self): ex = exception.NotFound() exr = self._to_remote_error(ex) filter_exc = self.rpcapi.ignore_error_by_name('NotFound') with filter_exc: raise ex with filter_exc: raise exr def should_raise(exc): with self.rpcapi.ignore_error_by_name('NotSupported'): raise exc self.assertRaises(exception.NotFound, should_raise, ex) self.assertRaises(exception.NotFound, should_raise, exr) def test_ignore_error_named(self): ex = exception.NotFound() exr = self._to_remote_error(ex) self.rpcapi.ignore_error_named(ex, 'NotFound') self.rpcapi.ignore_error_named(exr, 'NotFound') self.assertRaises( exception.NotFound, self.rpcapi.ignore_error_named, ex, 'NotSupported') self.assertRaises( exception.NotFound, self.rpcapi.ignore_error_named, exr, 'NotSupported') def _test_engine_api(self, method, rpc_method, **kwargs): ctxt = utils.dummy_context() expected_retval = 'foo' if method == 'call' else None kwargs.pop('version', None) if 'expected_message' in kwargs: expected_message = kwargs['expected_message'] del kwargs['expected_message'] else: expected_message = self.rpcapi.make_msg(method, **kwargs) cast_and_call = ['delete_stack'] if method in cast_and_call: kwargs['cast'] = rpc_method != 'call' with mock.patch.object(self.rpcapi, rpc_method) as mock_rpc_method: mock_rpc_method.return_value = expected_retval retval = getattr(self.rpcapi, method)(ctxt, **kwargs) self.assertEqual(expected_retval, retval) expected_args = [ctxt, expected_message, mock.ANY] 
actual_args, _ = mock_rpc_method.call_args for expected_arg, actual_arg in zip(expected_args, actual_args): self.assertEqual(expected_arg, actual_arg) def test_authenticated_to_backend(self): self._test_engine_api('authenticated_to_backend', 'call') def test_list_stacks(self): default_args = { 'limit': mock.ANY, 'sort_keys': mock.ANY, 'marker': mock.ANY, 'sort_dir': mock.ANY, 'filters': mock.ANY, 'show_deleted': mock.ANY, 'show_nested': mock.ANY, 'show_hidden': mock.ANY, 'tags': mock.ANY, 'tags_any': mock.ANY, 'not_tags': mock.ANY, 'not_tags_any': mock.ANY, } self._test_engine_api('list_stacks', 'call', **default_args) def test_count_stacks(self): default_args = { 'filters': mock.ANY, 'show_deleted': mock.ANY, 'show_nested': mock.ANY, 'show_hidden': mock.ANY, 'tags': mock.ANY, 'tags_any': mock.ANY, 'not_tags': mock.ANY, 'not_tags_any': mock.ANY, } self._test_engine_api('count_stacks', 'call', **default_args) def test_identify_stack(self): self._test_engine_api('identify_stack', 'call', stack_name='wordpress') def test_show_stack(self): self._test_engine_api('show_stack', 'call', stack_identity='wordpress', resolve_outputs=True) def test_preview_stack(self): self._test_engine_api('preview_stack', 'call', stack_name='wordpress', template={u'Foo': u'bar'}, params={u'InstanceType': u'm1.xlarge'}, files={u'a_file': u'the contents'}, environment_files=['foo.yaml'], args={'timeout_mins': u'30'}) def test_create_stack(self): kwargs = dict(stack_name='wordpress', template={u'Foo': u'bar'}, params={u'InstanceType': u'm1.xlarge'}, files={u'a_file': u'the contents'}, environment_files=['foo.yaml'], args={'timeout_mins': u'30'}) call_kwargs = copy.deepcopy(kwargs) call_kwargs['owner_id'] = None call_kwargs['nested_depth'] = 0 call_kwargs['user_creds_id'] = None call_kwargs['stack_user_project_id'] = None call_kwargs['parent_resource_name'] = None call_kwargs['template_id'] = None expected_message = self.rpcapi.make_msg('create_stack', **call_kwargs) kwargs['expected_message'] = expected_message self._test_engine_api('create_stack', 'call', **kwargs) def test_update_stack(self): kwargs = dict(stack_identity=self.identity, template={u'Foo': u'bar'}, params={u'InstanceType': u'm1.xlarge'}, files={}, environment_files=['foo.yaml'], args=mock.ANY) call_kwargs = copy.deepcopy(kwargs) call_kwargs['template_id'] = None expected_message = self.rpcapi.make_msg('update_stack', **call_kwargs) self._test_engine_api('update_stack', 'call', expected_message=expected_message, **kwargs) def test_preview_update_stack(self): self._test_engine_api('preview_update_stack', 'call', stack_identity=self.identity, template={u'Foo': u'bar'}, params={u'InstanceType': u'm1.xlarge'}, files={}, environment_files=['foo.yaml'], args=mock.ANY) def test_get_template(self): self._test_engine_api('get_template', 'call', stack_identity=self.identity) def test_delete_stack_cast(self): self._test_engine_api('delete_stack', 'cast', stack_identity=self.identity) def test_delete_stack_call(self): self._test_engine_api('delete_stack', 'call', stack_identity=self.identity) def test_validate_template(self): self._test_engine_api('validate_template', 'call', template={u'Foo': u'bar'}, params={u'Egg': u'spam'}, files=None, environment_files=['foo.yaml'], ignorable_errors=None, show_nested=False, version='1.24') def test_list_resource_types(self): self._test_engine_api('list_resource_types', 'call', support_status=None, type_name=None, heat_version=None, with_description=False, version='1.30') def test_resource_schema(self): 
self._test_engine_api('resource_schema', 'call', type_name="TYPE", with_description=False, version='1.30') def test_generate_template(self): self._test_engine_api('generate_template', 'call', type_name="TYPE", template_type='cfn') def test_list_events(self): kwargs = {'stack_identity': self.identity, 'limit': None, 'marker': None, 'sort_keys': None, 'sort_dir': None, 'filters': None, 'nested_depth': None} self._test_engine_api('list_events', 'call', **kwargs) def test_describe_stack_resource(self): self._test_engine_api('describe_stack_resource', 'call', stack_identity=self.identity, resource_name='LogicalResourceId', with_attr=None) def test_find_physical_resource(self): self._test_engine_api('find_physical_resource', 'call', physical_resource_id=u'404d-a85b-5315293e67de') def test_describe_stack_resources(self): self._test_engine_api('describe_stack_resources', 'call', stack_identity=self.identity, resource_name=u'WikiDatabase') def test_list_stack_resources(self): self._test_engine_api('list_stack_resources', 'call', stack_identity=self.identity, nested_depth=0, with_detail=False, filters=None, version=1.25) def test_stack_suspend(self): self._test_engine_api('stack_suspend', 'call', stack_identity=self.identity) def test_stack_resume(self): self._test_engine_api('stack_resume', 'call', stack_identity=self.identity) def test_stack_cancel_update(self): self._test_engine_api('stack_cancel_update', 'call', stack_identity=self.identity, cancel_with_rollback=False, version='1.14') def test_resource_signal(self): self._test_engine_api('resource_signal', 'call', stack_identity=self.identity, resource_name='LogicalResourceId', details={u'wordpress': []}, sync_call=True) def test_list_software_configs(self): self._test_engine_api('list_software_configs', 'call', limit=mock.ANY, marker=mock.ANY) def test_show_software_config(self): self._test_engine_api('show_software_config', 'call', config_id='cda89008-6ea6-4057-b83d-ccde8f0b48c9') def test_create_software_config(self): self._test_engine_api('create_software_config', 'call', group='Heat::Shell', name='config_mysql', config='#!/bin/bash', inputs=[], outputs=[], options={}) def test_delete_software_config(self): self._test_engine_api('delete_software_config', 'call', config_id='cda89008-6ea6-4057-b83d-ccde8f0b48c9') def test_list_software_deployments(self): self._test_engine_api('list_software_deployments', 'call', server_id=None) self._test_engine_api('list_software_deployments', 'call', server_id='9dc13236-d342-451f-a885-1c82420ba5ed') def test_show_software_deployment(self): deployment_id = '86729f02-4648-44d8-af44-d0ec65b6abc9' self._test_engine_api('show_software_deployment', 'call', deployment_id=deployment_id) def test_check_software_deployment(self): deployment_id = '86729f02-4648-44d8-af44-d0ec65b6abc9' self._test_engine_api('check_software_deployment', 'call', deployment_id=deployment_id, timeout=100) def test_create_software_deployment(self): self._test_engine_api( 'create_software_deployment', 'call', server_id='9f1f0e00-05d2-4ca5-8602-95021f19c9d0', config_id='48e8ade1-9196-42d5-89a2-f709fde42632', deployment_id='86729f02-4648-44d8-af44-d0ec65b6abc9', stack_user_project_id='65728b74-cfe7-4f17-9c15-11d4f686e591', input_values={}, action='INIT', status='COMPLETE', status_reason=None) def test_update_software_deployment(self): deployment_id = '86729f02-4648-44d8-af44-d0ec65b6abc9' self._test_engine_api('update_software_deployment', 'call', deployment_id=deployment_id, config_id='48e8ade1-9196-42d5-89a2-f709fde42632', input_values={}, 
output_values={}, action='DEPLOYED', status='COMPLETE', status_reason=None, updated_at=None) def test_delete_software_deployment(self): deployment_id = '86729f02-4648-44d8-af44-d0ec65b6abc9' self._test_engine_api('delete_software_deployment', 'call', deployment_id=deployment_id) def test_show_snapshot(self): snapshot_id = '86729f02-4648-44d8-af44-d0ec65b6abc9' self._test_engine_api('show_snapshot', 'call', stack_identity=self.identity, snapshot_id=snapshot_id) def test_stack_snapshot(self): self._test_engine_api( 'stack_snapshot', 'call', stack_identity=self.identity, name='snap1') def test_delete_snapshot(self): snapshot_id = '86729f02-4648-44d8-af44-d0ec65b6abc9' self._test_engine_api('delete_snapshot', 'call', stack_identity=self.identity, snapshot_id=snapshot_id) def test_list_services(self): self._test_engine_api('list_services', 'call', version='1.4') def test_stack_list_outputs(self): self._test_engine_api( 'list_outputs', 'call', stack_identity=self.identity, version='1.19' ) def test_stack_show_output(self): self._test_engine_api( 'show_output', 'call', stack_identity=self.identity, output_key='test', version='1.19') def test_export_stack(self): self._test_engine_api('export_stack', 'call', stack_identity=self.identity, version='1.22') def test_resource_mark_unhealthy(self): self._test_engine_api('resource_mark_unhealthy', 'call', stack_identity=self.identity, resource_name='LogicalResourceId', mark_unhealthy=True, resource_status_reason="Any reason", version='1.26') def test_get_environment(self): self._test_engine_api( 'get_environment', 'call', stack_identity=self.identity, version='1.28') def test_get_files(self): self._test_engine_api( 'get_files', 'call', stack_identity=self.identity, version='1.32') heat-10.0.2/heat/tests/constraints/0000775000175000017500000000000013343562672017202 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/constraints/__init__.py0000666000175000017500000000000013343562340021273 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/constraints/test_common_constraints.py0000666000175000017500000002637413343562351024542 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
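# NOTE: editor's illustrative sketch, not part of the original module. Every
# constraint exercised below follows the same contract these tests rely on:
# validate(value, context) returns a bool, and on failure the human-readable
# reason is recorded on the instance as _error_message, e.g.:
#
#     constraint = cc.CIDRConstraint()
#     if not constraint.validate('300.0.0.0/24', None):
#         print(constraint._error_message)  # explains why the CIDR is invalid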
import six from heat.engine.constraint import common_constraints as cc from heat.tests import common from heat.tests import utils class TestIPConstraint(common.HeatTestCase): def setUp(self): super(TestIPConstraint, self).setUp() self.constraint = cc.IPConstraint() def test_validate_ipv4_format(self): validate_format = [ '1.1.1.1', '1.0.1.1', '255.255.255.255' ] for ip in validate_format: self.assertTrue(self.constraint.validate(ip, None)) def test_invalidate_ipv4_format(self): invalidate_format = [ '1.1.1.', '1.1.1.256', 'invalidate format', '1.a.1.1' ] for ip in invalidate_format: self.assertFalse(self.constraint.validate(ip, None)) def test_validate_ipv6_format(self): validate_format = [ '2002:2002::20c:29ff:fe7d:811a', '::1', '2002::', '2002::1', ] for ip in validate_format: self.assertTrue(self.constraint.validate(ip, None)) def test_invalidate_ipv6_format(self): invalidate_format = [ '2002::2001::1', '2002::g', 'invalidate format', '2001::0::', '20c:29ff:fe7d:811a' ] for ip in invalidate_format: self.assertFalse(self.constraint.validate(ip, None)) class TestMACConstraint(common.HeatTestCase): def setUp(self): super(TestMACConstraint, self).setUp() self.constraint = cc.MACConstraint() def test_valid_mac_format(self): validate_format = [ '01:23:45:67:89:ab', '01-23-45-67-89-ab', '0123.4567.89ab' ] for mac in validate_format: self.assertTrue(self.constraint.validate(mac, None)) def test_invalid_mac_format(self): invalidate_format = [ '8.8.8.8', '0a-1b-3c-4d-5e-6f-1f', '0a-1b-3c-4d-5e-xx' ] for mac in invalidate_format: self.assertFalse(self.constraint.validate(mac, None)) class TestCIDRConstraint(common.HeatTestCase): def setUp(self): super(TestCIDRConstraint, self).setUp() self.constraint = cc.CIDRConstraint() def test_valid_cidr_format(self): validate_format = [ '10.0.0.0/24', '6000::/64', '8.8.8.8' ] for cidr in validate_format: self.assertTrue(self.constraint.validate(cidr, None)) def test_invalid_cidr_format(self): invalidate_format = [ '::/129', 'Invalid cidr', '300.0.0.0/24', '10.0.0.0/33', '8.8.8.0/ 24' ] for cidr in invalidate_format: self.assertFalse(self.constraint.validate(cidr, None)) class TestISO8601Constraint(common.HeatTestCase): def setUp(self): super(TestISO8601Constraint, self).setUp() self.constraint = cc.ISO8601Constraint() def test_validate_date_format(self): date = '2050-01-01' self.assertTrue(self.constraint.validate(date, None)) def test_validate_datetime_format(self): self.assertTrue(self.constraint.validate('2050-01-01T23:59:59', None)) def test_validate_datetime_format_with_utc_offset(self): date = '2050-01-01T23:59:59+00:00' self.assertTrue(self.constraint.validate(date, None)) def test_validate_datetime_format_with_utc_offset_alternate(self): date = '2050-01-01T23:59:59+0000' self.assertTrue(self.constraint.validate(date, None)) def test_validate_refuses_other_formats(self): self.assertFalse(self.constraint.validate('Fri 13th, 2050', None)) class CRONExpressionConstraint(common.HeatTestCase): def setUp(self): super(CRONExpressionConstraint, self).setUp() self.ctx = utils.dummy_context() self.constraint = cc.CRONExpressionConstraint() def test_validation(self): self.assertTrue(self.constraint.validate("0 23 * * *", self.ctx)) def test_validation_none(self): self.assertTrue(self.constraint.validate(None, self.ctx)) def test_validation_out_of_range_error(self): cron_expression = "* * * * * 100" expect = ("Invalid CRON expression: [%s] " "is not acceptable, out of range") % cron_expression self.assertFalse(self.constraint.validate(cron_expression, self.ctx)) 
self.assertEqual(expect, six.text_type(self.constraint._error_message)) def test_validation_columns_length_error(self): cron_expression = "* *" expect = ("Invalid CRON expression: Exactly 5 " "or 6 columns has to be specified for " "iteratorexpression.") self.assertFalse(self.constraint.validate(cron_expression, self.ctx)) self.assertEqual(expect, six.text_type(self.constraint._error_message)) class TimezoneConstraintTest(common.HeatTestCase): def setUp(self): super(TimezoneConstraintTest, self).setUp() self.ctx = utils.dummy_context() self.constraint = cc.TimezoneConstraint() def test_validation(self): self.assertTrue(self.constraint.validate("Asia/Taipei", self.ctx)) def test_validation_error(self): timezone = "wrong_timezone" expected = "Invalid timezone: '%s'" % timezone self.assertFalse(self.constraint.validate(timezone, self.ctx)) self.assertEqual( expected, six.text_type(self.constraint._error_message) ) def test_validation_none(self): self.assertTrue(self.constraint.validate(None, self.ctx)) class DNSNameConstraintTest(common.HeatTestCase): def setUp(self): super(DNSNameConstraintTest, self).setUp() self.ctx = utils.dummy_context() self.constraint = cc.DNSNameConstraint() def test_validation(self): self.assertTrue(self.constraint.validate("openstack.org.", self.ctx)) def test_validation_error_hyphen(self): dns_name = "-openstack.org" expected = ("'%s' not in valid format. Reason: Name " "'%s' must not start or end with a " "hyphen.") % (dns_name, dns_name.split('.')[0]) self.assertFalse(self.constraint.validate(dns_name, self.ctx)) self.assertEqual( expected, six.text_type(self.constraint._error_message) ) def test_validation_error_empty_component(self): dns_name = ".openstack.org" expected = ("'%s' not in valid format. Reason: " "Encountered an empty component.") % dns_name self.assertFalse(self.constraint.validate(dns_name, self.ctx)) self.assertEqual( expected, six.text_type(self.constraint._error_message) ) def test_validation_error_special_char(self): dns_name = "$openstack.org" expected = ("'%s' not in valid format. Reason: Name " "'%s' must be 1-63 characters long, each " "of which can only be alphanumeric or a " "hyphen.") % (dns_name, dns_name.split('.')[0]) self.assertFalse(self.constraint.validate(dns_name, self.ctx)) self.assertEqual( expected, six.text_type(self.constraint._error_message) ) def test_validation_error_tld_allnumeric(self): dns_name = "openstack.123." expected = ("'%s' not in valid format. 
Reason: TLD " "'%s' must not be all numeric.") % (dns_name, dns_name.split('.')[1]) self.assertFalse(self.constraint.validate(dns_name, self.ctx)) self.assertEqual( expected, six.text_type(self.constraint._error_message) ) def test_validation_none(self): self.assertTrue(self.constraint.validate(None, self.ctx)) class DNSDomainConstraintTest(common.HeatTestCase): def setUp(self): super(DNSDomainConstraintTest, self).setUp() self.ctx = utils.dummy_context() self.constraint = cc.DNSDomainConstraint() def test_validation(self): self.assertTrue(self.constraint.validate("openstack.org.", self.ctx)) def test_validation_error_no_end_period(self): dns_domain = "openstack.org" expected = ("'%s' must end with '.'.") % dns_domain self.assertFalse(self.constraint.validate(dns_domain, self.ctx)) self.assertEqual( expected, six.text_type(self.constraint._error_message) ) def test_validation_none(self): self.assertTrue(self.constraint.validate(None, self.ctx)) class FIPDNSNameConstraintTest(common.HeatTestCase): def setUp(self): super(FIPDNSNameConstraintTest, self).setUp() self.ctx = utils.dummy_context() self.constraint = cc.RelativeDNSNameConstraint() def test_validation(self): self.assertTrue(self.constraint.validate("myvm.openstack", self.ctx)) def test_validation_error_end_period(self): dns_name = "myvm.openstack." expected = ("'%s' is a FQDN. It should be a relative " "domain name.") % dns_name self.assertFalse(self.constraint.validate(dns_name, self.ctx)) self.assertEqual( expected, six.text_type(self.constraint._error_message) ) def test_validation_none(self): self.assertTrue(self.constraint.validate(None, self.ctx)) class ExpirationConstraintTest(common.HeatTestCase): def setUp(self): super(ExpirationConstraintTest, self).setUp() self.ctx = utils.dummy_context() self.constraint = cc.ExpirationConstraint() def test_validate_date_format(self): date = '2050-01-01' self.assertTrue(self.constraint.validate(date, None)) def test_validation_error(self): expiration = "Fri 13th, 2050" expected = ("Expiration {0} is invalid: Unable to parse " "date string '{0}'".format(expiration)) self.assertFalse(self.constraint.validate(expiration, self.ctx)) self.assertEqual( expected, six.text_type(self.constraint._error_message) ) def test_validation_before_current_time(self): expiration = "1970-01-01" expected = ("Expiration %s is invalid: Expiration time " "is out of date." % expiration) self.assertFalse(self.constraint.validate(expiration, self.ctx)) self.assertEqual( expected, six.text_type(self.constraint._error_message) ) def test_validation_none(self): self.assertTrue(self.constraint.validate(None, self.ctx)) heat-10.0.2/heat/tests/test_auth_url.py0000666000175000017500000001172213343562340020064 0ustar zuulzuul00000000000000# # Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
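# NOTE: editor's illustrative sketch, not part of the original module; the
# URL below is a made-up example value. The filter under test follows the
# standard WSGI middleware pattern: it wraps an application, injects or
# overwrites the X-Auth-Url request header, and then delegates to the wrapped
# app. The tests below drive it roughly like:
#
#     middleware = auth_url.AuthUrlFilter(app, {'auth_uri': 'http://host/v3'})
#     req = webob.Request.blank('/tenant_id/')
#     middleware(req)
#     # req.headers['X-Auth-Url'] now reflects the configured auth_uri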
import mock import six import webob from webob import exc from heat.common import auth_url from heat.tests import common class FakeApp(object): """This represents a WSGI app protected by our auth middleware.""" def __call__(self, environ, start_response): """Assert that headers are correctly set up when finally called.""" resp = webob.Response() resp.body = six.b('SUCCESS') return resp(environ, start_response) class AuthUrlFilterTest(common.HeatTestCase): def setUp(self): super(AuthUrlFilterTest, self).setUp() self.app = FakeApp() self.config = {'auth_uri': 'foobar'} self.middleware = auth_url.AuthUrlFilter(self.app, self.config) @mock.patch.object(auth_url.cfg, 'CONF') def test_adds_default_auth_url_from_clients_keystone(self, mock_cfg): self.config = {} mock_cfg.clients_keystone.auth_uri = 'foobar' mock_cfg.keystone_authtoken.auth_uri = 'this-should-be-ignored' mock_cfg.auth_password.multi_cloud = False with mock.patch('keystoneauth1.discover.Discover') as discover: class MockDiscover(object): def url_for(self, endpoint): return 'foobar/v3' discover.return_value = MockDiscover() self.middleware = auth_url.AuthUrlFilter(self.app, self.config) req = webob.Request.blank('/tenant_id/') self.middleware(req) self.assertIn('X-Auth-Url', req.headers) self.assertEqual('foobar/v3', req.headers['X-Auth-Url']) @mock.patch.object(auth_url.cfg, 'CONF') def test_adds_default_auth_url_from_keystone_authtoken(self, mock_cfg): self.config = {} mock_cfg.clients_keystone.auth_uri = '' mock_cfg.keystone_authtoken.auth_uri = 'foobar' mock_cfg.auth_password.multi_cloud = False self.middleware = auth_url.AuthUrlFilter(self.app, self.config) req = webob.Request.blank('/tenant_id/') self.middleware(req) self.assertIn('X-Auth-Url', req.headers) self.assertEqual('foobar', req.headers['X-Auth-Url']) def test_overwrites_auth_url_from_headers_with_local_config(self): req = webob.Request.blank('/tenant_id/') req.headers['X-Auth-Url'] = 'should_be_overwritten' self.middleware(req) self.assertEqual('foobar', req.headers['X-Auth-Url']) def test_reads_auth_url_from_local_config(self): req = webob.Request.blank('/tenant_id/') self.middleware(req) self.assertIn('X-Auth-Url', req.headers) self.assertEqual('foobar', req.headers['X-Auth-Url']) @mock.patch.object(auth_url.AuthUrlFilter, '_validate_auth_url') @mock.patch.object(auth_url.cfg, 'CONF') def test_multicloud_reads_auth_url_from_headers(self, mock_cfg, mock_val): mock_cfg.auth_password.multi_cloud = True mock_val.return_value = True req = webob.Request.blank('/tenant_id/') req.headers['X-Auth-Url'] = 'overwrites config' self.middleware(req) self.assertIn('X-Auth-Url', req.headers) self.assertEqual('overwrites config', req.headers['X-Auth-Url']) @mock.patch.object(auth_url.AuthUrlFilter, '_validate_auth_url') @mock.patch.object(auth_url.cfg, 'CONF') def test_multicloud_validates_auth_url(self, mock_cfg, mock_validate): mock_cfg.auth_password.multi_cloud = True req = webob.Request.blank('/tenant_id/') self.middleware(req) self.assertTrue(mock_validate.called) def test_validate_auth_url_with_missing_url(self): self.assertRaises(exc.HTTPBadRequest, self.middleware._validate_auth_url, auth_url='') self.assertRaises(exc.HTTPBadRequest, self.middleware._validate_auth_url, auth_url=None) @mock.patch.object(auth_url.cfg, 'CONF') def test_validate_auth_url_with_url_not_allowed(self, mock_cfg): mock_cfg.auth_password.allowed_auth_uris = ['foobar'] self.assertRaises(exc.HTTPUnauthorized, self.middleware._validate_auth_url, auth_url='not foobar') @mock.patch.object(auth_url.cfg, 
'CONF') def test_validate_auth_url_with_valid_url(self, mock_cfg): mock_cfg.auth_password.allowed_auth_uris = ['foobar'] self.assertTrue(self.middleware._validate_auth_url('foobar')) heat-10.0.2/heat/tests/test_lifecycle_plugin_utils.py0000666000175000017500000002213513343562352023001 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from heat.common import lifecycle_plugin_utils from heat.engine import lifecycle_plugin from heat.engine import resources from heat.tests import common empty_template = ''' heat_template_version: '2013-05-23' description: Empty stack resources: ''' class LifecyclePluginUtilsTest(common.HeatTestCase): """Basic tests for :module:'heat.common.lifecycle_plugin_utils'. Basic tests for the helper methods in :module:'heat.common.lifecycle_plugin_utils'. """ def tearDown(self): super(LifecyclePluginUtilsTest, self).tearDown() lifecycle_plugin_utils.pp_class_instances = None def mock_lcp_class_map(self, lcp_mappings): self.m.UnsetStubs() self.m.StubOutWithMock(resources.global_env(), 'get_stack_lifecycle_plugins') resources.global_env().get_stack_lifecycle_plugins( ).MultipleTimes().AndReturn(lcp_mappings) self.m.ReplayAll() # reset cache lifecycle_plugin_utils.pp_class_instances = None def test_get_plug_point_class_instances(self): """Tests the get_plug_point_class_instances function.""" lcp_mappings = [('A::B::C1', TestLifecycleCallout1)] self.mock_lcp_class_map(lcp_mappings) pp_cinstances = lifecycle_plugin_utils.get_plug_point_class_instances() self.assertIsNotNone(pp_cinstances) self.assertTrue(self.is_iterable(pp_cinstances), "not iterable: %s" % pp_cinstances) self.assertEqual(1, len(pp_cinstances)) self.assertEqual(TestLifecycleCallout1, pp_cinstances[0].__class__) def test_do_pre_and_post_callouts(self): lcp_mappings = [('A::B::C1', TestLifecycleCallout1)] self.mock_lcp_class_map(lcp_mappings) mc = mock.Mock() mc.__setattr__("pre_counter_for_unit_test", 0) mc.__setattr__("post_counter_for_unit_test", 0) ms = mock.Mock() ms.__setattr__("action", 'A') lifecycle_plugin_utils.do_pre_ops(mc, ms, None, None) self.assertEqual(1, mc.pre_counter_for_unit_test) lifecycle_plugin_utils.do_post_ops(mc, ms, None, None) self.assertEqual(1, mc.post_counter_for_unit_test) return def test_class_instantiation_and_sorting(self): lcp_mappings = [] self.mock_lcp_class_map(lcp_mappings) pp_cis = lifecycle_plugin_utils.get_plug_point_class_instances() self.assertEqual(0, len(pp_cis)) # order should change with sort lcp_mappings = [('A::B::C2', TestLifecycleCallout2), ('A::B::C1', TestLifecycleCallout1)] self.mock_lcp_class_map(lcp_mappings) pp_cis = lifecycle_plugin_utils.get_plug_point_class_instances() self.assertEqual(2, len(pp_cis)) self.assertEqual(100, pp_cis[0].get_ordinal()) self.assertEqual(101, pp_cis[1].get_ordinal()) self.assertEqual(TestLifecycleCallout1, pp_cis[0].__class__) self.assertEqual(TestLifecycleCallout2, pp_cis[1].__class__) # order should NOT change with sort lcp_mappings = [('A::B::C1', TestLifecycleCallout1), ('A::B::C2', 
TestLifecycleCallout2)]
        self.mock_lcp_class_map(lcp_mappings)
        pp_cis = lifecycle_plugin_utils.get_plug_point_class_instances()
        self.assertEqual(2, len(pp_cis))
        self.assertEqual(100, pp_cis[0].get_ordinal())
        self.assertEqual(101, pp_cis[1].get_ordinal())
        self.assertEqual(TestLifecycleCallout1, pp_cis[0].__class__)
        self.assertEqual(TestLifecycleCallout2, pp_cis[1].__class__)
        # sort failure due to an exception thrown by get_ordinal
        lcp_mappings = [('A::B::C2', TestLifecycleCallout2),
                        ('A::B::C3', TestLifecycleCallout3),
                        ('A::B::C1', TestLifecycleCallout1)]
        self.mock_lcp_class_map(lcp_mappings)
        pp_cis = lifecycle_plugin_utils.get_plug_point_class_instances()
        self.assertEqual(3, len(pp_cis))
        self.assertEqual(100, pp_cis[2].get_ordinal())
        self.assertEqual(101, pp_cis[0].get_ordinal())
        # (can sort fail partially? If so then this test may break)
        self.assertEqual(TestLifecycleCallout2, pp_cis[0].__class__)
        self.assertEqual(TestLifecycleCallout3, pp_cis[1].__class__)
        self.assertEqual(TestLifecycleCallout1, pp_cis[2].__class__)
        return

    def test_do_pre_op_failure(self):
        lcp_mappings = [('A::B::C5', TestLifecycleCallout1),
                        ('A::B::C4', TestLifecycleCallout4)]
        self.mock_lcp_class_map(lcp_mappings)
        mc = mock.Mock()
        mc.__setattr__("pre_counter_for_unit_test", 0)
        mc.__setattr__("post_counter_for_unit_test", 0)
        ms = mock.Mock()
        ms.__setattr__("action", 'A')
        failed = False
        try:
            lifecycle_plugin_utils.do_pre_ops(mc, ms, None, None)
        except Exception:
            failed = True
        self.assertTrue(failed)
        self.assertEqual(1, mc.pre_counter_for_unit_test)
        self.assertEqual(1, mc.post_counter_for_unit_test)
        return

    def test_do_post_op_failure(self):
        lcp_mappings = [('A::B::C1', TestLifecycleCallout1),
                        ('A::B::C5', TestLifecycleCallout5)]
        self.mock_lcp_class_map(lcp_mappings)
        mc = mock.Mock()
        mc.__setattr__("pre_counter_for_unit_test", 0)
        mc.__setattr__("post_counter_for_unit_test", 0)
        ms = mock.Mock()
        ms.__setattr__("action", 'A')
        lifecycle_plugin_utils.do_post_ops(mc, ms, None, None)
        self.assertEqual(1, mc.post_counter_for_unit_test)
        return

    def test_exercise_base_lifecycle_plugin_class(self):
        lcp = lifecycle_plugin.LifecyclePlugin()
        ordinal = lcp.get_ordinal()
        lcp.do_pre_op(None, None, None)
        lcp.do_post_op(None, None, None)
        self.assertEqual(100, ordinal)
        return

    def is_iterable(self, obj):
        # a falsy object (None, empty container) is not treated as iterable
        if not obj:
            return False
        # special case string
        if isinstance(obj, str):
            return False
        # Test for iterability
        try:
            for m in obj:
                break
        except TypeError:
            return False
        return True


class TestLifecycleCallout1(lifecycle_plugin.LifecyclePlugin):
    """Sample test class for testing pre-op and post-op work on a stack."""

    def do_pre_op(self, cnxt, stack, current_stack=None, action=None):
        cnxt.pre_counter_for_unit_test += 1

    def do_post_op(self, cnxt, stack, current_stack=None, action=None,
                   is_stack_failure=False):
        cnxt.post_counter_for_unit_test += 1

    def get_ordinal(self):
        return 100


class TestLifecycleCallout2(lifecycle_plugin.LifecyclePlugin):
    """Sample test class for testing pre-op and post-op work on a stack.

    Different ordinal, and increments counters by 2.
    """

    def do_pre_op(self, cnxt, stack, current_stack=None, action=None):
        cnxt.pre_counter_for_unit_test += 2

    def do_post_op(self, cnxt, stack, current_stack=None, action=None,
                   is_stack_failure=False):
        cnxt.post_counter_for_unit_test += 2

    def get_ordinal(self):
        return 101


class TestLifecycleCallout3(lifecycle_plugin.LifecyclePlugin):
    """Sample test class for testing pre-op and post-op work on a stack.

    Methods raise exceptions.
""" def do_pre_op(self, cnxt, stack, current_stack=None, action=None): raise Exception() def do_post_op(self, cnxt, stack, current_stack=None, action=None, is_stack_failure=False): raise Exception() def get_ordinal(self): raise Exception() class TestLifecycleCallout4(lifecycle_plugin.LifecyclePlugin): """Sample test class for testing pre-op and post-op work on a stack. do_pre_op, do_post_op both throw exception. """ def do_pre_op(self, cnxt, stack, current_stack=None, action=None): raise Exception() def do_post_op(self, cnxt, stack, current_stack=None, action=None, is_stack_failure=False): raise Exception() def get_ordinal(self): return 103 class TestLifecycleCallout5(lifecycle_plugin.LifecyclePlugin): """Sample test class for testing pre-op and post-op work on a stack. do_post_op throws exception. """ def do_pre_op(self, cnxt, stack, current_stack=None, action=None): cnxt.pre_counter_for_unit_test += 1 def do_post_op(self, cnxt, stack, current_stack=None, action=None, is_stack_failure=False): raise Exception() def get_ordinal(self): return 100 heat-10.0.2/heat/tests/test_loguserdata.py0000666000175000017500000001324213343562340020552 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import errno import os import subprocess import mock from heat.cloudinit import loguserdata from heat.tests import common class FakeCiVersion(object): def __init__(self, version): self.version = version class LoguserdataTest(common.HeatTestCase): @mock.patch('pkg_resources.get_distribution') def test_ci_version_with_pkg_resources(self, mock_get): # Setup returned_versions = [ FakeCiVersion('0.5.0'), FakeCiVersion('0.5.9'), FakeCiVersion('0.6.0'), FakeCiVersion('0.7.0'), FakeCiVersion('1.0'), FakeCiVersion('2.0'), ] mock_get.side_effect = returned_versions # Test & Verify self.assertFalse(loguserdata.chk_ci_version()) self.assertFalse(loguserdata.chk_ci_version()) self.assertTrue(loguserdata.chk_ci_version()) self.assertTrue(loguserdata.chk_ci_version()) self.assertTrue(loguserdata.chk_ci_version()) self.assertTrue(loguserdata.chk_ci_version()) self.assertEqual(6, mock_get.call_count) @mock.patch('pkg_resources.get_distribution') @mock.patch('subprocess.Popen') def test_ci_version_with_subprocess(self, mock_popen, mock_get_distribution): # Setup mock_get_distribution.side_effect = Exception() popen_return = [ [None, 'cloud-init 0.0.5\n'], [None, 'cloud-init 0.7.5\n'], ] mock_popen.return_value = mock.MagicMock() mock_popen.return_value.communicate.side_effect = popen_return # Test & Verify self.assertFalse(loguserdata.chk_ci_version()) self.assertTrue(loguserdata.chk_ci_version()) self.assertEqual(2, mock_get_distribution.call_count) @mock.patch('pkg_resources.get_distribution') @mock.patch('subprocess.Popen') def test_ci_version_with_subprocess_exception(self, mock_popen, mock_get_distribution): # Setup mock_get_distribution.side_effect = Exception() mock_popen.return_value = mock.MagicMock() mock_popen.return_value.communicate.return_value = ['non-empty', 'irrelevant'] # Test 
self.assertRaises(Exception, loguserdata.chk_ci_version) # noqa self.assertEqual(1, mock_get_distribution.call_count) @mock.patch('subprocess.Popen') def test_call(self, mock_popen): # Setup mock_popen.return_value = mock.MagicMock() mock_popen.return_value.communicate.return_value = ['a', 'b'] mock_popen.return_value.returncode = 0 # Test return_code = loguserdata.call(['foo', 'bar']) # Verify mock_popen.assert_called_once_with(['foo', 'bar'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.assertEqual(0, return_code) @mock.patch('sys.exc_info') @mock.patch('subprocess.Popen') def test_call_oserror_enoexec(self, mock_popen, mock_exc_info): # Setup mock_popen.side_effect = OSError() no_exec = mock.MagicMock(errno=errno.ENOEXEC) mock_exc_info.return_value = None, no_exec, None # Test return_code = loguserdata.call(['foo', 'bar']) # Verify self.assertEqual(os.EX_OK, return_code) @mock.patch('sys.exc_info') @mock.patch('subprocess.Popen') def test_call_oserror_other(self, mock_popen, mock_exc_info): # Setup mock_popen.side_effect = OSError() no_exec = mock.MagicMock(errno='foo') mock_exc_info.return_value = None, no_exec, None # Test return_code = loguserdata.call(['foo', 'bar']) # Verify self.assertEqual(os.EX_OSERR, return_code) @mock.patch('sys.exc_info') @mock.patch('subprocess.Popen') def test_call_exception(self, mock_popen, mock_exc_info): # Setup mock_popen.side_effect = Exception() no_exec = mock.MagicMock(errno='irrelevant') mock_exc_info.return_value = None, no_exec, None # Test return_code = loguserdata.call(['foo', 'bar']) # Verify self.assertEqual(os.EX_SOFTWARE, return_code) @mock.patch('pkg_resources.get_distribution') @mock.patch('os.chmod') @mock.patch('heat.cloudinit.loguserdata.call') def test_main(self, mock_call, mock_chmod, mock_get): # Setup mock_get.return_value = FakeCiVersion('1.0') mock_call.return_value = 10 # Test return_code = loguserdata.main() # Verify expected_path = os.path.join(loguserdata.VAR_PATH, 'cfn-userdata') mock_chmod.assert_called_once_with(expected_path, int('700', 8)) self.assertEqual(10, return_code) @mock.patch('pkg_resources.get_distribution') def test_main_failed_ci_version(self, mock_get): # Setup mock_get.return_value = FakeCiVersion('0.0.0') # Test return_code = loguserdata.main() # Verify self.assertEqual(-1, return_code) heat-10.0.2/heat/tests/test_vpc.py0000666000175000017500000007164113343562352017042 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
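# The VPC/subnet tests below follow mox's record/replay/verify cycle via # self.m: neutronclient methods are stubbed, the expected calls are recorded, # ReplayAll() switches to replay mode, and VerifyAll() checks that every # recorded call actually happened. A minimal sketch of that pattern # (illustrative only; assumes just the mox library and a toy class): # # import mox # # class Greeter(object): # def hello(self, name): # return 'hello ' + name # # m = mox.Mox() # greeter = Greeter() # m.StubOutWithMock(greeter, 'hello') # greeter.hello('world').AndReturn('recorded') # record the expectation # m.ReplayAll() # switch to replay mode # assert greeter.hello('world') == 'recorded' # consumes the recording # m.VerifyAll() # all expectations met? # m.UnsetStubs() 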
import uuid from heat.common import exception from heat.common import template_format from heat.engine import resource from heat.engine.resources.aws.ec2 import subnet as sn from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests import utils try: from neutronclient.common import exceptions as neutron_exc from neutronclient.v2_0 import client as neutronclient except ImportError: neutronclient = None class VPCTestBase(common.HeatTestCase): def setUp(self): super(VPCTestBase, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'add_interface_router') self.m.StubOutWithMock(neutronclient.Client, 'add_gateway_router') self.m.StubOutWithMock(neutronclient.Client, 'create_network') self.m.StubOutWithMock(neutronclient.Client, 'create_port') self.m.StubOutWithMock(neutronclient.Client, 'create_router') self.m.StubOutWithMock(neutronclient.Client, 'create_subnet') self.m.StubOutWithMock(neutronclient.Client, 'delete_network') self.m.StubOutWithMock(neutronclient.Client, 'delete_port') self.m.StubOutWithMock(neutronclient.Client, 'delete_router') self.m.StubOutWithMock(neutronclient.Client, 'delete_subnet') self.m.StubOutWithMock(neutronclient.Client, 'list_networks') self.m.StubOutWithMock(neutronclient.Client, 'list_routers') self.m.StubOutWithMock(neutronclient.Client, 'remove_gateway_router') self.m.StubOutWithMock(neutronclient.Client, 'remove_interface_router') self.m.StubOutWithMock(neutronclient.Client, 'show_subnet') self.m.StubOutWithMock(neutronclient.Client, 'show_network') self.m.StubOutWithMock(neutronclient.Client, 'show_port') self.m.StubOutWithMock(neutronclient.Client, 'show_router') self.m.StubOutWithMock(neutronclient.Client, 'create_security_group') self.m.StubOutWithMock(neutronclient.Client, 'show_security_group') self.m.StubOutWithMock(neutronclient.Client, 'list_security_groups') self.m.StubOutWithMock(neutronclient.Client, 'delete_security_group') self.m.StubOutWithMock( neutronclient.Client, 'create_security_group_rule') self.m.StubOutWithMock( neutronclient.Client, 'delete_security_group_rule') def create_stack(self, templ): t = template_format.parse(templ) stack = self.parse_stack(t) self.assertIsNone(stack.validate()) self.assertIsNone(stack.create()) return stack def parse_stack(self, t): stack_name = 'test_stack' tmpl = template.Template(t) stack = parser.Stack(utils.dummy_context(), stack_name, tmpl) stack.store() return stack def mock_create_network(self): self.vpc_name = utils.PhysName('test_stack', 'the_vpc') neutronclient.Client.create_network( { 'network': {'name': self.vpc_name} }).AndReturn({'network': { 'status': 'BUILD', 'subnets': [], 'name': 'name', 'admin_state_up': True, 'shared': False, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'id': 'aaaa' }}) neutronclient.Client.show_network( 'aaaa' ).AndReturn({"network": { "status": "BUILD", "subnets": [], "name": self.vpc_name, "admin_state_up": False, "shared": False, "tenant_id": "c1210485b2424d48804aad5d39c61b8f", "id": "aaaa" }}) neutronclient.Client.show_network( 'aaaa' ).MultipleTimes().AndReturn({"network": { "status": "ACTIVE", "subnets": [], "name": self.vpc_name, "admin_state_up": False, "shared": False, "tenant_id": "c1210485b2424d48804aad5d39c61b8f", "id": "aaaa" }}) neutronclient.Client.create_router( {'router': {'name': self.vpc_name}}).AndReturn({ 'router': { 'status': 'BUILD', 'name': self.vpc_name, 'admin_state_up': True, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'id': 'bbbb' }}) 
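# The VPC resource resolves its implicitly-created router by name rather # than by id, so a matching list_routers(name=...) expectation is recorded # next (and recorded again by mock_router_for_vpc() for later lookups). 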
neutronclient.Client.list_routers(name=self.vpc_name).AndReturn({ "routers": [{ "status": "BUILD", "external_gateway_info": None, "name": self.vpc_name, "admin_state_up": True, "tenant_id": "3e21026f2dc94372b105808c0e721661", "routes": [], "id": "bbbb" }] }) self.mock_router_for_vpc() def mock_create_subnet(self): self.subnet_name = utils.PhysName('test_stack', 'the_subnet') neutronclient.Client.create_subnet( {'subnet': { 'network_id': u'aaaa', 'cidr': u'10.0.0.0/24', 'ip_version': 4, 'name': self.subnet_name}}).AndReturn({ 'subnet': { 'status': 'ACTIVE', 'name': self.subnet_name, 'admin_state_up': True, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'id': 'cccc'}}) self.mock_router_for_vpc() neutronclient.Client.add_interface_router( u'bbbb', {'subnet_id': 'cccc'}).AndReturn(None) def mock_show_subnet(self): neutronclient.Client.show_subnet('cccc').AndReturn({ 'subnet': { 'name': self.subnet_name, 'network_id': 'aaaa', 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'allocation_pools': [{'start': '10.0.0.2', 'end': '10.0.0.254'}], 'gateway_ip': '10.0.0.1', 'ip_version': 4, 'cidr': '10.0.0.0/24', 'id': 'cccc', 'enable_dhcp': False, }}) def mock_create_security_group(self): self.sg_name = utils.PhysName('test_stack', 'the_sg') neutronclient.Client.create_security_group({ 'security_group': { 'name': self.sg_name, 'description': 'SSH access' } }).AndReturn({ 'security_group': { 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'name': self.sg_name, 'description': 'SSH access', 'security_group_rules': [], 'id': '0389f747-7785-4757-b7bb-2ab07e4b09c3' } }) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'port_range_min': 22, 'ethertype': 'IPv4', 'port_range_max': 22, 'protocol': 'tcp', 'security_group_id': '0389f747-7785-4757-b7bb-2ab07e4b09c3' } }).AndReturn({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'port_range_min': 22, 'ethertype': 'IPv4', 'port_range_max': 22, 'protocol': 'tcp', 'security_group_id': '0389f747-7785-4757-b7bb-2ab07e4b09c3', 'id': 'bbbb' } }) def mock_show_security_group(self, group=None): sg_name = utils.PhysName('test_stack', 'the_sg') group = group or '0389f747-7785-4757-b7bb-2ab07e4b09c3' if group == '0389f747-7785-4757-b7bb-2ab07e4b09c3': neutronclient.Client.show_security_group(group).AndReturn({ 'security_group': { 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'name': sg_name, 'description': '', 'security_group_rules': [{ 'direction': 'ingress', 'protocol': 'tcp', 'port_range_max': 22, 'id': 'bbbb', 'ethertype': 'IPv4', 'security_group_id': ('0389f747-7785-4757-b7bb-' '2ab07e4b09c3'), 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'port_range_min': 22 }], 'id': '0389f747-7785-4757-b7bb-2ab07e4b09c3'}}) elif group == 'INVALID-NO-REF': neutronclient.Client.show_security_group(group).AndRaise( neutron_exc.NeutronClientException(status_code=404)) elif group == 'RaiseException': neutronclient.Client.show_security_group( '0389f747-7785-4757-b7bb-2ab07e4b09c3').AndRaise( neutron_exc.NeutronClientException(status_code=403)) def mock_delete_security_group(self): self.mock_show_security_group() neutronclient.Client.delete_security_group_rule( 'bbbb').AndReturn(None) neutronclient.Client.delete_security_group( '0389f747-7785-4757-b7bb-2ab07e4b09c3').AndReturn(None) def mock_router_for_vpc(self): 
neutronclient.Client.list_routers(name=self.vpc_name).AndReturn({ "routers": [{ "status": "ACTIVE", "external_gateway_info": { "network_id": "zzzz", "enable_snat": True}, "name": self.vpc_name, "admin_state_up": True, "tenant_id": "3e21026f2dc94372b105808c0e721661", "routes": [], "id": "bbbb" }] }) def mock_delete_network(self): self.mock_router_for_vpc() neutronclient.Client.delete_router('bbbb').AndReturn(None) neutronclient.Client.delete_network('aaaa').AndReturn(None) def mock_delete_subnet(self): self.mock_router_for_vpc() neutronclient.Client.remove_interface_router( u'bbbb', {'subnet_id': 'cccc'}).AndReturn(None) neutronclient.Client.delete_subnet('cccc').AndReturn(None) def mock_create_route_table(self): self.rt_name = utils.PhysName('test_stack', 'the_route_table') neutronclient.Client.create_router({ 'router': {'name': self.rt_name}}).AndReturn({ 'router': { 'status': 'BUILD', 'name': self.rt_name, 'admin_state_up': True, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'id': 'ffff' } }) neutronclient.Client.show_router('ffff').AndReturn({ 'router': { 'status': 'BUILD', 'name': self.rt_name, 'admin_state_up': True, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'id': 'ffff' } }) neutronclient.Client.show_router('ffff').AndReturn({ 'router': { 'status': 'ACTIVE', 'name': self.rt_name, 'admin_state_up': True, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'id': 'ffff' } }) self.mock_router_for_vpc() neutronclient.Client.add_gateway_router( 'ffff', {'network_id': 'zzzz'}).AndReturn(None) def mock_create_association(self): self.mock_show_subnet() self.mock_router_for_vpc() neutronclient.Client.remove_interface_router( 'bbbb', {'subnet_id': u'cccc'}).AndReturn(None) neutronclient.Client.add_interface_router( u'ffff', {'subnet_id': 'cccc'}).AndReturn(None) def mock_delete_association(self): self.mock_show_subnet() self.mock_router_for_vpc() neutronclient.Client.remove_interface_router( 'ffff', {'subnet_id': u'cccc'}).AndReturn(None) neutronclient.Client.add_interface_router( u'bbbb', {'subnet_id': 'cccc'}).AndReturn(None) def mock_delete_route_table(self): neutronclient.Client.delete_router('ffff').AndReturn(None) neutronclient.Client.remove_gateway_router('ffff').AndReturn(None) def assertResourceState(self, resource, ref_id): self.assertIsNone(resource.validate()) self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) self.assertEqual(ref_id, resource.FnGetRefId()) class VPCTest(VPCTestBase): test_template = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: the_vpc: Type: AWS::EC2::VPC Properties: {CidrBlock: '10.0.0.0/16'} ''' def mock_create_network_failed(self): self.vpc_name = utils.PhysName('test_stack', 'the_vpc') neutronclient.Client.create_network( { 'network': {'name': self.vpc_name} }).AndRaise(neutron_exc.NeutronClientException()) def test_vpc(self): self.mock_create_network() self.mock_delete_network() self.m.ReplayAll() stack = self.create_stack(self.test_template) vpc = stack['the_vpc'] self.assertResourceState(vpc, 'aaaa') scheduler.TaskRunner(vpc.delete)() self.m.VerifyAll() def test_vpc_delete_successful_if_created_failed(self): self.mock_create_network_failed() self.m.ReplayAll() t = template_format.parse(self.test_template) stack = self.parse_stack(t) scheduler.TaskRunner(stack.create)() self.assertEqual((stack.CREATE, stack.FAILED), stack.state) scheduler.TaskRunner(stack.delete)() self.m.VerifyAll() class SubnetTest(VPCTestBase): test_template = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: the_vpc: Type: AWS::EC2::VPC 
Properties: {CidrBlock: '10.0.0.0/16'} the_subnet: Type: AWS::EC2::Subnet Properties: CidrBlock: 10.0.0.0/24 VpcId: {Ref: the_vpc} AvailabilityZone: moon ''' def test_subnet(self): self.mock_create_network() self.mock_create_subnet() self.mock_delete_subnet() self.mock_delete_network() # mock delete subnet which is already deleted self.mock_router_for_vpc() neutronclient.Client.remove_interface_router( u'bbbb', {'subnet_id': 'cccc'}).AndRaise( neutron_exc.NeutronClientException(status_code=404)) neutronclient.Client.delete_subnet('cccc').AndRaise( neutron_exc.NeutronClientException(status_code=404)) self.m.ReplayAll() stack = self.create_stack(self.test_template) subnet = stack['the_subnet'] self.assertResourceState(subnet, 'cccc') self.assertRaises( exception.InvalidTemplateAttribute, subnet.FnGetAtt, 'Foo') self.assertEqual('moon', subnet.FnGetAtt('AvailabilityZone')) scheduler.TaskRunner(subnet.delete)() subnet.state_set(subnet.CREATE, subnet.COMPLETE, 'to delete again') scheduler.TaskRunner(subnet.delete)() scheduler.TaskRunner(stack['the_vpc'].delete)() self.m.VerifyAll() def _mock_create_subnet_failed(self, stack_name): self.subnet_name = utils.PhysName(stack_name, 'the_subnet') neutronclient.Client.create_subnet( {'subnet': { 'network_id': u'aaaa', 'cidr': u'10.0.0.0/24', 'ip_version': 4, 'name': self.subnet_name}}).AndReturn({ 'subnet': { 'status': 'ACTIVE', 'name': self.subnet_name, 'admin_state_up': True, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'id': 'cccc'}}) neutronclient.Client.show_network('aaaa').MultipleTimes().AndRaise( neutron_exc.NeutronClientException(status_code=404)) def test_create_failed_delete_success(self): stack_name = 'test_subnet_' self._mock_create_subnet_failed(stack_name) neutronclient.Client.delete_subnet('cccc').AndReturn(None) self.m.ReplayAll() t = template_format.parse(self.test_template) tmpl = template.Template(t) stack = parser.Stack(utils.dummy_context(), stack_name, tmpl, stack_id=str(uuid.uuid4())) tmpl.t['Resources']['the_subnet']['Properties']['VpcId'] = 'aaaa' resource_defns = tmpl.resource_definitions(stack) rsrc = sn.Subnet('the_subnet', resource_defns['the_subnet'], stack) rsrc.validate() self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) ref_id = rsrc.FnGetRefId() self.assertEqual(u'cccc', ref_id) self.assertIsNone(scheduler.TaskRunner(rsrc.delete)()) self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() class NetworkInterfaceTest(VPCTestBase): test_template = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: the_sg: Type: AWS::EC2::SecurityGroup Properties: VpcId: {Ref: the_vpc} GroupDescription: SSH access SecurityGroupIngress: - IpProtocol: tcp FromPort: "22" ToPort: "22" CidrIp: 0.0.0.0/0 the_vpc: Type: AWS::EC2::VPC Properties: {CidrBlock: '10.0.0.0/16'} the_subnet: Type: AWS::EC2::Subnet Properties: CidrBlock: 10.0.0.0/24 VpcId: {Ref: the_vpc} AvailabilityZone: moon the_nic: Type: AWS::EC2::NetworkInterface Properties: PrivateIpAddress: 10.0.0.100 SubnetId: {Ref: the_subnet} GroupSet: - Ref: the_sg ''' test_template_no_groupset = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: the_vpc: Type: AWS::EC2::VPC Properties: {CidrBlock: '10.0.0.0/16'} the_subnet: Type: AWS::EC2::Subnet Properties: CidrBlock: 10.0.0.0/24 VpcId: {Ref: the_vpc} AvailabilityZone: moon the_nic: Type: AWS::EC2::NetworkInterface Properties: PrivateIpAddress: 10.0.0.100 SubnetId: {Ref: the_subnet} ''' test_template_error = ''' 
HeatTemplateFormatVersion: '2012-12-12' Resources: the_sg: Type: AWS::EC2::SecurityGroup Properties: VpcId: {Ref: the_vpc} GroupDescription: SSH access SecurityGroupIngress: - IpProtocol: tcp FromPort: "22" ToPort: "22" CidrIp: 0.0.0.0/0 the_vpc: Type: AWS::EC2::VPC Properties: {CidrBlock: '10.0.0.0/16'} the_subnet: Type: AWS::EC2::Subnet Properties: CidrBlock: 10.0.0.0/24 VpcId: {Ref: the_vpc} AvailabilityZone: moon the_nic: Type: AWS::EC2::NetworkInterface Properties: PrivateIpAddress: 10.0.0.100 SubnetId: {Ref: the_subnet} GroupSet: - Ref: INVALID-REF-IN-TEMPLATE ''' test_template_error_no_ref = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: the_vpc: Type: AWS::EC2::VPC Properties: {CidrBlock: '10.0.0.0/16'} the_subnet: Type: AWS::EC2::Subnet Properties: CidrBlock: 10.0.0.0/24 VpcId: {Ref: the_vpc} AvailabilityZone: moon the_nic: Type: AWS::EC2::NetworkInterface Properties: PrivateIpAddress: 10.0.0.100 SubnetId: {Ref: the_subnet} GroupSet: - INVALID-NO-REF ''' def mock_create_network_interface( self, security_groups=['0389f747-7785-4757-b7bb-2ab07e4b09c3']): self.patchobject(resource.Resource, 'is_using_neutron', return_value=True) self.nic_name = utils.PhysName('test_stack', 'the_nic') port = {'network_id': 'aaaa', 'fixed_ips': [{ 'subnet_id': u'cccc', 'ip_address': u'10.0.0.100' }], 'name': self.nic_name, 'admin_state_up': True} if security_groups: port['security_groups'] = security_groups neutronclient.Client.create_port({'port': port}).AndReturn({ 'port': { 'admin_state_up': True, 'device_id': '', 'device_owner': '', 'fixed_ips': [ { 'ip_address': '10.0.0.100', 'subnet_id': 'cccc' } ], 'id': 'dddd', 'mac_address': 'fa:16:3e:25:32:5d', 'name': self.nic_name, 'network_id': 'aaaa', 'status': 'ACTIVE', 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f' } }) def mock_show_network_interface(self): self.nic_name = utils.PhysName('test_stack', 'the_nic') neutronclient.Client.show_port('dddd').AndReturn({ 'port': { 'admin_state_up': True, 'device_id': '', 'device_owner': '', 'fixed_ips': [ { 'ip_address': '10.0.0.100', 'subnet_id': 'cccc' } ], 'id': 'dddd', 'mac_address': 'fa:16:3e:25:32:5d', 'name': self.nic_name, 'network_id': 'aaaa', 'security_groups': ['0389f747-7785-4757-b7bb-2ab07e4b09c3'], 'status': 'ACTIVE', 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f' } }) def mock_delete_network_interface(self): neutronclient.Client.delete_port('dddd').AndReturn(None) def test_network_interface(self): self.mock_create_security_group() self.mock_create_network() self.mock_create_subnet() self.mock_show_subnet() self.stub_SubnetConstraint_validate() self.mock_create_network_interface() self.mock_show_network_interface() self.mock_delete_network_interface() self.mock_delete_subnet() self.mock_delete_network() self.mock_delete_security_group() self.m.ReplayAll() stack = self.create_stack(self.test_template) try: self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state) rsrc = stack['the_nic'] self.assertResourceState(rsrc, 'dddd') self.assertEqual('10.0.0.100', rsrc.FnGetAtt('PrivateIpAddress')) finally: scheduler.TaskRunner(stack.delete)() self.m.VerifyAll() def test_network_interface_existing_groupset(self): self.m.StubOutWithMock(parser.Stack, 'resource_by_refid') self.mock_create_security_group() self.mock_create_network() self.mock_create_subnet() self.mock_show_subnet() self.stub_SubnetConstraint_validate() self.mock_create_network_interface() self.mock_delete_network_interface() self.mock_delete_subnet() self.mock_delete_network() self.mock_delete_security_group() self.m.ReplayAll() 
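# ReplayAll() flips every stubbed neutronclient method from record mode into # replay mode; from this point on the stack operations must issue exactly the # recorded calls, in order, or VerifyAll() will fail. 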
stack = self.create_stack(self.test_template) try: self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state) rsrc = stack['the_nic'] self.assertResourceState(rsrc, 'dddd') finally: stack.delete() self.m.VerifyAll() def test_network_interface_no_groupset(self): self.mock_create_network() self.mock_create_subnet() self.mock_show_subnet() self.stub_SubnetConstraint_validate() self.mock_create_network_interface(security_groups=None) self.mock_delete_network_interface() self.mock_delete_subnet() self.mock_delete_network() self.m.ReplayAll() stack = self.create_stack(self.test_template_no_groupset) stack.delete() self.m.VerifyAll() def test_network_interface_error(self): self.assertRaises( exception.StackValidationFailed, self.create_stack, self.test_template_error) class InternetGatewayTest(VPCTestBase): test_template = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: the_gateway: Type: AWS::EC2::InternetGateway the_vpc: Type: AWS::EC2::VPC Properties: CidrBlock: '10.0.0.0/16' the_subnet: Type: AWS::EC2::Subnet Properties: CidrBlock: 10.0.0.0/24 VpcId: {Ref: the_vpc} AvailabilityZone: moon the_attachment: Type: AWS::EC2::VPCGatewayAttachment Properties: VpcId: {Ref: the_vpc} InternetGatewayId: {Ref: the_gateway} the_route_table: Type: AWS::EC2::RouteTable Properties: VpcId: {Ref: the_vpc} the_association: Type: AWS::EC2::SubnetRouteTableAssociation Properties: RouteTableId: {Ref: the_route_table} SubnetId: {Ref: the_subnet} ''' def mock_create_internet_gateway(self): neutronclient.Client.list_networks( **{'router:external': True}).AndReturn({'networks': [{ 'status': 'ACTIVE', 'subnets': [], 'name': 'nova', 'router:external': True, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'admin_state_up': True, 'shared': True, 'id': '0389f747-7785-4757-b7bb-2ab07e4b09c3' }]}) def mock_create_gateway_attachment(self): neutronclient.Client.add_gateway_router( 'ffff', {'network_id': '0389f747-7785-4757-b7bb-2ab07e4b09c3'} ).AndReturn(None) def mock_delete_gateway_attachment(self): neutronclient.Client.remove_gateway_router('ffff').AndReturn(None) def test_internet_gateway(self): self.mock_create_internet_gateway() self.mock_create_network() self.mock_create_subnet() self.mock_create_route_table() self.stub_SubnetConstraint_validate() self.mock_create_association() self.mock_create_gateway_attachment() self.mock_delete_gateway_attachment() self.mock_delete_association() self.mock_delete_route_table() self.mock_delete_subnet() self.mock_delete_network() self.m.ReplayAll() stack = self.create_stack(self.test_template) gateway = stack['the_gateway'] self.assertResourceState(gateway, gateway.physical_resource_name()) attachment = stack['the_attachment'] self.assertResourceState(attachment, 'the_attachment') route_table = stack['the_route_table'] self.assertEqual([route_table], list(attachment._vpc_route_tables())) stack.delete() self.m.VerifyAll() class RouteTableTest(VPCTestBase): test_template = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: the_vpc: Type: AWS::EC2::VPC Properties: CidrBlock: '10.0.0.0/16' the_subnet: Type: AWS::EC2::Subnet Properties: CidrBlock: 10.0.0.0/24 VpcId: {Ref: the_vpc} AvailabilityZone: moon the_route_table: Type: AWS::EC2::RouteTable Properties: VpcId: {Ref: the_vpc} the_association: Type: AWS::EC2::SubnetRouteTableAssociation Properties: RouteTableId: {Ref: the_route_table} SubnetId: {Ref: the_subnet} ''' def test_route_table(self): self.mock_create_network() self.mock_create_subnet() self.mock_create_route_table() self.stub_SubnetConstraint_validate() 
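# Associating the subnet with the route table moves the router interface: # mock_create_association() (consumed below) expects the interface to be # removed from the VPC's implicit router and added to the route table's # router, and mock_delete_association() expects the reverse. 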
self.mock_create_association() self.mock_delete_association() self.mock_delete_route_table() self.mock_delete_subnet() self.mock_delete_network() self.m.ReplayAll() stack = self.create_stack(self.test_template) route_table = stack['the_route_table'] self.assertResourceState(route_table, 'ffff') association = stack['the_association'] self.assertResourceState(association, 'the_association') scheduler.TaskRunner(association.delete)() scheduler.TaskRunner(route_table.delete)() stack.delete() self.m.VerifyAll() heat-10.0.2/heat/tests/utils.py0000666000175000017500000001401513343562352016343 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import random import string import uuid import fixtures import mox from oslo_config import cfg from oslo_db import options from oslo_serialization import jsonutils import sqlalchemy from heat.common import context from heat.db.sqlalchemy import api as db_api from heat.db.sqlalchemy import models from heat.engine import environment from heat.engine import node_data from heat.engine import resource from heat.engine import stack from heat.engine import template get_engine = db_api.get_engine class UUIDStub(object): def __init__(self, value): self.value = value def __enter__(self): self.uuid4 = uuid.uuid4 uuid.uuid4 = lambda: self.value def __exit__(self, *exc_info): uuid.uuid4 = self.uuid4 def random_name(): return ''.join(random.choice(string.ascii_uppercase) for x in range(10)) def setup_dummy_db(): options.cfg.set_defaults(options.database_opts, sqlite_synchronous=False) # Uncomment to log SQL # options.cfg.set_defaults(options.database_opts, connection_debug=100) options.set_defaults(cfg.CONF, connection="sqlite://") engine = get_engine() models.BASE.metadata.create_all(engine) engine.connect() def reset_dummy_db(): engine = get_engine() meta = sqlalchemy.MetaData() meta.reflect(bind=engine) for table in reversed(meta.sorted_tables): if table.name == 'migrate_version': continue engine.execute(table.delete()) def dummy_context(user='test_username', tenant_id='test_tenant_id', password='', roles=None, user_id=None, trust_id=None, region_name=None, is_admin=False): roles = roles or [] return context.RequestContext.from_dict({ 'tenant_id': tenant_id, 'tenant': 'test_tenant', 'username': user, 'user_id': user_id, 'password': password, 'roles': roles, 'is_admin': is_admin, 'auth_url': 'http://server.test:5000/v2.0', 'auth_token': 'abcd1234', 'trust_id': trust_id, 'region_name': region_name }) def parse_stack(t, params=None, files=None, stack_name=None, stack_id=None, timeout_mins=None, cache_data=None, tags=None): params = params or {} files = files or {} ctx = dummy_context() templ = template.Template(t, files=files, env=environment.Environment(params)) templ.store(ctx) if stack_name is None: stack_name = random_name() if cache_data is not None: cache_data = {n: node_data.NodeData.from_dict(d) for n, d in cache_data.items()} stk = stack.Stack(ctx, stack_name, templ, stack_id=stack_id, timeout_mins=timeout_mins, cache_data=cache_data, tags=tags) 
stk.store() return stk def update_stack(stk, new_t, params=None, files=None): ctx = dummy_context() templ = template.Template(new_t, files=files, env=environment.Environment(params)) updated_stack = stack.Stack(ctx, 'updated_stack', templ) stk.update(updated_stack) class PhysName(object): mock_short_id = 'x' * 12 def __init__(self, stack_name, resource_name, limit=255): name = '%s-%s-%s' % (stack_name, resource_name, self.mock_short_id) self._physname = resource.Resource.reduce_physical_resource_name( name, limit) self.stack, self.res, self.sid = self._physname.rsplit('-', 2) def __eq__(self, physical_name): try: stk, res, short_id = str(physical_name).rsplit('-', 2) except ValueError: return False if len(short_id) != len(self.mock_short_id): return False # stack name may have been truncated if (not isinstance(self.stack, PhysName) and 3 < len(stk) < len(self.stack)): our_stk = self.stack[:2] + '-' + self.stack[3 - len(stk):] else: our_stk = self.stack return (stk == our_stk) and (res == self.res) def __hash__(self): return hash(self.stack) ^ hash(self.res) def __ne__(self, physical_name): return not self.__eq__(physical_name) def __repr__(self): return self._physname def recursive_sort(obj): """Recursively sort lists in iterables for comparison.""" if isinstance(obj, dict): for v in obj.values(): recursive_sort(v) elif isinstance(obj, list): obj.sort() for i in obj: recursive_sort(i) return obj class JsonEquals(mox.Comparator): """Comparison class used to check if two json strings are equal. If a dict is dumped to json, the key order is undefined, so load the string back to an object for comparison. """ def __init__(self, other_json): self.other_json = other_json def equals(self, rhs): return jsonutils.loads(self.other_json) == jsonutils.loads(rhs) def __repr__(self): return "<JsonEquals %s>" % self.other_json class ForeignKeyConstraintFixture(fixtures.Fixture): def __init__(self, sqlite_fk=True): self.enable_fkc = sqlite_fk def _setUp(self): new_context = db_api.db_context.make_new_manager() new_context.configure(sqlite_fk=self.enable_fkc) self.useFixture(fixtures.MockPatchObject(db_api, '_facade', None)) self.addCleanup(db_api.db_context.patch_factory(new_context._factory)) heat-10.0.2/heat/tests/test_signal.py0000666000175000017500000005624413343562340017526 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
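# The PhysName helper defined in heat/tests/utils.py above matches generated # physical resource names while ignoring the random short-id suffix. A small # illustration (sketch only; the 12-character id below is made up): # # expected = utils.PhysName('test_stack', 'the_vpc') # assert expected == 'test_stack-the_vpc-abcdef123456' # any 12-char id # assert expected != 'test_stack-the_vpc-short' # wrong id length 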
import datetime from keystoneauth1 import exceptions as kc_exceptions import mock import six from six.moves.urllib import parse as urlparse from heat.common import exception from heat.common import template_format from heat.db.sqlalchemy import models from heat.engine.clients.os import heat_plugin from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks from heat.engine.clients.os import swift from heat.engine import scheduler from heat.engine import stack as stk from heat.engine import template from heat.objects import resource_data as resource_data_object from heat.tests import common from heat.tests import generic_resource from heat.tests import utils TEMPLATE_CFN_SIGNAL = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Just a test.", "Parameters" : {}, "Resources" : { "signal_handler" : {"Type" : "SignalResourceType", "Properties": {"signal_transport": "CFN_SIGNAL"}}, "resource_X" : {"Type" : "GenericResourceType"} } } ''' TEMPLATE_HEAT_TEMPLATE_SIGNAL = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Just a test.", "Parameters" : {}, "Resources" : { "signal_handler" : {"Type" : "SignalResourceType", "Properties": {"signal_transport": "HEAT_SIGNAL"}}, "resource_X" : {"Type" : "GenericResourceType"} } } ''' TEMPLATE_SWIFT_SIGNAL = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Just a test.", "Parameters" : {}, "Resources" : { "signal_handler" : {"Type" : "SignalResourceType", "Properties": {"signal_transport": "TEMP_URL_SIGNAL"}}, "resource_X" : {"Type" : "GenericResourceType"} } } ''' TEMPLATE_ZAQAR_SIGNAL = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Just a test.", "Parameters" : {}, "Resources" : { "signal_handler" : {"Type" : "SignalResourceType", "Properties": {"signal_transport": "ZAQAR_SIGNAL"}}, "resource_X" : {"Type" : "GenericResourceType"} } } ''' class SignalTest(common.HeatTestCase): @staticmethod def _create_stack(template_string, stack_name=None, stack_id=None): stack_name = stack_name or utils.random_name() stack_id = stack_id or utils.random_name() tpl = template.Template(template_format.parse(template_string)) ctx = utils.dummy_context() ctx.tenant = 'test_tenant' stack = stk.Stack(ctx, stack_name, tpl, disable_rollback=True) with utils.UUIDStub(stack_id): stack.store() stack.create() return stack def test_resource_data(self): # Setup self.stub_keystoneclient(access='anaccesskey', secret='verysecret', credential_id='mycredential') stack = self._create_stack(TEMPLATE_CFN_SIGNAL) rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc._create_keypair() # Test rs_data = resource_data_object.ResourceData.get_all(rsrc) # Verify self.assertEqual('mycredential', rs_data.get('credential_id')) self.assertEqual('anaccesskey', rs_data.get('access_key')) self.assertEqual('verysecret', rs_data.get('secret_key')) self.assertEqual('1234', rs_data.get('user_id')) self.assertEqual('password', rs_data.get('password')) self.assertEqual(rsrc.resource_id, rs_data.get('user_id')) self.assertEqual(5, len(rs_data)) def test_get_user_id(self): # Setup self.stub_keystoneclient(access='anaccesskey', secret='verysecret') stack = self._create_stack(TEMPLATE_CFN_SIGNAL) rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Test rs_data = resource_data_object.ResourceData.get_all(rsrc) # Verify self.assertEqual('1234', rs_data.get('user_id')) self.assertEqual('1234', rsrc.resource_id) self.assertEqual('1234', rsrc._get_user_id()) 
# Check user id can still be fetched from resource_id # if the resource data is not there. resource_data_object.ResourceData.delete(rsrc, 'user_id') self.assertRaises( exception.NotFound, resource_data_object.ResourceData.get_val, rsrc, 'user_id') self.assertEqual('1234', rsrc._get_user_id()) @mock.patch.object(heat_plugin.HeatClientPlugin, 'get_heat_cfn_url') def test_FnGetAtt_alarm_url(self, mock_get): # Setup stack_id = stack_name = 'FnGetAtt-alarm-url' stack = self._create_stack(TEMPLATE_CFN_SIGNAL, stack_name=stack_name, stack_id=stack_id) mock_get.return_value = 'http://server.test:8000/v1' rsrc = stack['signal_handler'] created_time = datetime.datetime(2012, 11, 29, 13, 49, 37) rsrc.created_time = created_time self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Test url = rsrc.FnGetAtt('AlarmUrl') # Verify # url parameters come in unexpected order, so the conversion has to be # done for comparison expected_url_path = "".join([ 'http://server.test:8000/v1/signal/', 'arn%3Aopenstack%3Aheat%3A%3Atest_tenant%3Astacks/', 'FnGetAtt-alarm-url/FnGetAtt-alarm-url/resources/', 'signal_handler']) expected_url_params = { 'Timestamp': ['2012-11-29T13:49:37Z'], 'SignatureMethod': ['HmacSHA256'], 'AWSAccessKeyId': ['4567'], 'SignatureVersion': ['2'], 'Signature': ['JWGilkQ4gHS+Y4+zhL41xSAC7+cUCwDsaIxq9xPYPKE=']} url_path, url_params = url.split('?', 1) url_params = urlparse.parse_qs(url_params) self.assertEqual(expected_url_path, url_path) self.assertEqual(expected_url_params, url_params) mock_get.assert_called_once_with() @mock.patch.object(heat_plugin.HeatClientPlugin, 'get_heat_cfn_url') def test_FnGetAtt_alarm_url_is_cached(self, mock_get): # Setup stack_id = stack_name = 'FnGetAtt-alarm-url' stack = self._create_stack(TEMPLATE_CFN_SIGNAL, stack_name=stack_name, stack_id=stack_id) mock_get.return_value = 'http://server.test:8000/v1' rsrc = stack['signal_handler'] created_time = datetime.datetime(2012, 11, 29, 13, 49, 37) rsrc.created_time = created_time self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Test first_url = rsrc.FnGetAtt('signal') second_url = rsrc.FnGetAtt('signal') # Verify self.assertEqual(first_url, second_url) mock_get.assert_called_once_with() @mock.patch.object(heat_plugin.HeatClientPlugin, 'get_heat_url') def test_FnGetAtt_heat_signal(self, mock_get): # Setup stack = self._create_stack(TEMPLATE_HEAT_TEMPLATE_SIGNAL) mock_get.return_value = 'http://server.test:8004/v1' rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Test signal = rsrc.FnGetAtt('signal') # Verify self.assertEqual('http://localhost:5000/v3', signal['auth_url']) self.assertEqual('aprojectid', signal['project_id']) self.assertEqual('1234', signal['user_id']) self.assertIn('username', signal) self.assertIn('password', signal) mock_get.assert_called_once_with() @mock.patch.object(heat_plugin.HeatClientPlugin, 'get_heat_url') def test_FnGetAtt_heat_signal_is_cached(self, mock_get): # Setup stack = self._create_stack(TEMPLATE_HEAT_TEMPLATE_SIGNAL) mock_get.return_value = 'http://server.test:8004/v1' rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Test first_url = rsrc.FnGetAtt('signal') second_url = rsrc.FnGetAtt('signal') # Verify self.assertEqual(first_url, second_url) mock_get.assert_called_once_with() @mock.patch('zaqarclient.queues.v2.queues.Queue.signed_url') def test_FnGetAtt_zaqar_signal(self, mock_signed_url): # Setup stack = self._create_stack(TEMPLATE_ZAQAR_SIGNAL) rsrc = stack['signal_handler'] 
self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Test signal = rsrc.FnGetAtt('signal') # Verify self.assertEqual('http://localhost:5000/v3', signal['auth_url']) self.assertEqual('aprojectid', signal['project_id']) self.assertEqual('1234', signal['user_id']) self.assertIn('username', signal) self.assertIn('password', signal) self.assertIn('queue_id', signal) mock_signed_url.assert_called_once_with( ['messages'], methods=['GET', 'DELETE']) @mock.patch('zaqarclient.queues.v2.queues.Queue.signed_url') def test_FnGetAtt_zaqar_signal_is_cached(self, mock_signed_url): # Setup stack = self._create_stack(TEMPLATE_ZAQAR_SIGNAL) rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Test first_url = rsrc.FnGetAtt('signal') second_url = rsrc.FnGetAtt('signal') # Verify self.assertEqual(first_url, second_url) mock_signed_url.assert_called_once_with( ['messages'], methods=['GET', 'DELETE']) @mock.patch('swiftclient.client.Connection.put_container') @mock.patch('swiftclient.client.Connection.put_object') @mock.patch.object(swift.SwiftClientPlugin, 'get_temp_url') def test_FnGetAtt_swift_signal(self, mock_get_url, mock_put_object, mock_put_container): # Setup mock_get_url.return_value = ( 'http://192.0.2.1/v1/AUTH_aprojectid/foo/bar') stack = self._create_stack(TEMPLATE_SWIFT_SIGNAL) rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Test found_url = rsrc.FnGetAtt('AlarmUrl') # Verify self.assertEqual('http://192.0.2.1/v1/AUTH_aprojectid/foo/bar', found_url) self.assertEqual(1, mock_put_container.call_count) self.assertEqual(1, mock_put_object.call_count) self.assertEqual(1, mock_get_url.call_count) @mock.patch('swiftclient.client.Connection.put_container') @mock.patch('swiftclient.client.Connection.put_object') @mock.patch.object(swift.SwiftClientPlugin, 'get_temp_url') def test_FnGetAtt_swift_signal_is_cached(self, mock_get_url, mock_put_object, mock_put_container): # Setup mock_get_url.return_value = ( 'http://192.0.2.1/v1/AUTH_aprojectid/foo/bar') stack = self._create_stack(TEMPLATE_SWIFT_SIGNAL) rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Test first_url = rsrc.FnGetAtt('AlarmUrl') second_url = rsrc.FnGetAtt('AlarmUrl') # Verify self.assertEqual(first_url, second_url) self.assertEqual(1, mock_put_container.call_count) self.assertEqual(1, mock_put_object.call_count) self.assertEqual(1, mock_get_url.call_count) @mock.patch.object(heat_plugin.HeatClientPlugin, 'get_heat_cfn_url') def test_FnGetAtt_delete(self, mock_get): # Setup mock_get.return_value = 'http://server.test:8000/v1' stack = self._create_stack(TEMPLATE_CFN_SIGNAL) rsrc = stack['signal_handler'] rsrc.resource_id_set('signal') self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertIn('http://server.test:8000/v1/signal', rsrc.FnGetAtt('AlarmUrl')) # Test scheduler.TaskRunner(rsrc.delete)() # Verify self.assertIn('http://server.test:8000/v1/signal', rsrc.FnGetAtt('AlarmUrl')) self.assertEqual(2, mock_get.call_count) @mock.patch.object(heat_plugin.HeatClientPlugin, 'get_heat_url') def test_FnGetAtt_heat_signal_delete(self, mock_get): # Setup mock_get.return_value = 'http://server.test:8004/v1' stack = self._create_stack(TEMPLATE_HEAT_TEMPLATE_SIGNAL) rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) def validate_signal(): signal = rsrc.FnGetAtt('signal') self.assertEqual('http://localhost:5000/v3', signal['auth_url']) self.assertEqual('aprojectid', 
signal['project_id']) self.assertEqual('1234', signal['user_id']) self.assertIn('username', signal) self.assertIn('password', signal) # Test validate_signal() scheduler.TaskRunner(rsrc.delete)() validate_signal() self.assertEqual(2, mock_get.call_count) @mock.patch('swiftclient.client.Connection.delete_container') @mock.patch('swiftclient.client.Connection.delete_object') @mock.patch('swiftclient.client.Connection.get_container') @mock.patch.object(swift.SwiftClientPlugin, 'get_temp_url') @mock.patch('swiftclient.client.Connection.head_container') @mock.patch('swiftclient.client.Connection.put_container') @mock.patch('swiftclient.client.Connection.put_object') def test_FnGetAtt_swift_signal_delete(self, mock_put_object, mock_put_container, mock_head, mock_get_temp, mock_get_container, mock_delete_object, mock_delete_container): # Setup stack = self._create_stack(TEMPLATE_SWIFT_SIGNAL) mock_get_temp.return_value = ( 'http://server.test/v1/AUTH_aprojectid/foo/bar') mock_get_container.return_value = ({}, [{'name': 'bar'}]) mock_head.return_value = {'x-container-object-count': 0} rsrc = stack['signal_handler'] mock_name = mock.MagicMock() mock_name.return_value = 'bar' rsrc.physical_resource_name = mock_name self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertEqual('http://server.test/v1/AUTH_aprojectid/foo/bar', rsrc.FnGetAtt('AlarmUrl')) # Test scheduler.TaskRunner(rsrc.delete)() # Verify self.assertEqual('http://server.test/v1/AUTH_aprojectid/foo/bar', rsrc.FnGetAtt('AlarmUrl')) self.assertEqual(2, mock_put_container.call_count) self.assertEqual(2, mock_get_temp.call_count) self.assertEqual(2, mock_put_object.call_count) self.assertEqual(2, mock_put_container.call_count) self.assertEqual(1, mock_get_container.call_count) self.assertEqual(1, mock_delete_object.call_count) self.assertEqual(1, mock_delete_container.call_count) self.assertEqual(1, mock_head.call_count) @mock.patch('zaqarclient.queues.v2.queues.Queue.signed_url') def test_FnGetAtt_zaqar_signal_delete(self, mock_signed_url): # Setup stack = self._create_stack(TEMPLATE_ZAQAR_SIGNAL) mock_delete = mock.MagicMock() rsrc = stack['signal_handler'] rsrc._delete_zaqar_signal_queue = mock_delete stack.create() # Test signal = rsrc.FnGetAtt('signal') # Verify self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertEqual('http://localhost:5000/v3', signal['auth_url']) self.assertEqual('aprojectid', signal['project_id']) self.assertEqual('1234', signal['user_id']) self.assertIn('username', signal) self.assertIn('password', signal) self.assertIn('queue_id', signal) scheduler.TaskRunner(rsrc.delete)() self.assertEqual('http://localhost:5000/v3', signal['auth_url']) self.assertEqual('aprojectid', signal['project_id']) self.assertEqual('1234', signal['user_id']) self.assertIn('username', signal) self.assertIn('password', signal) self.assertIn('queue_id', signal) mock_delete.assert_called_once_with() def test_delete_not_found(self): # Setup class FakeKeystoneClientFail(fake_ks.FakeKeystoneClient): def delete_stack_user(self, name): raise kc_exceptions.NotFound() self.stub_keystoneclient(fake_client=FakeKeystoneClientFail()) stack = self._create_stack(TEMPLATE_CFN_SIGNAL) rsrc = stack['signal_handler'] # Test self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) @mock.patch.object(generic_resource.SignalResource, 'handle_signal') def test_signal(self, mock_handle): # Setup test_d = {'Data': 'foo', 'Reason': 'bar', 
'Status': 'SUCCESS', 'UniqueId': '123'} stack = self._create_stack(TEMPLATE_CFN_SIGNAL) rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertTrue(rsrc.requires_deferred_auth) # Test result = rsrc.signal(details=test_d) mock_handle.assert_called_once_with(test_d) self.assertTrue(result) @mock.patch.object(generic_resource.SignalResource, 'handle_signal') def test_handle_signal_no_reraise_deleted(self, mock_handle): # Setup test_d = {'Data': 'foo', 'Reason': 'bar', 'Status': 'SUCCESS', 'UniqueId': '123'} stack = self._create_stack(TEMPLATE_CFN_SIGNAL) mock_handle.side_effect = exception.ResourceNotAvailable( resource_name='test') rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # In the midst of handling a signal, an update happens on the # db resource concurrently, deleting it # Test exception not re-raised in DELETE case res_obj = stack.context.session.query( models.Resource).get(rsrc.id) res_obj.update({'action': 'DELETE'}) rsrc._db_res_is_deleted = True rsrc._handle_signal(details=test_d) mock_handle.assert_called_once_with(test_d) @mock.patch.object(generic_resource.SignalResource, '_add_event') def test_signal_different_reason_types(self, mock_add): # Setup stack = self._create_stack(TEMPLATE_CFN_SIGNAL) rsrc = stack['signal_handler'] # Verify self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertTrue(rsrc.requires_deferred_auth) ceilo_details = {'current': 'foo', 'reason': 'apples', 'previous': 'SUCCESS'} ceilo_expected = 'alarm state changed from SUCCESS to foo (apples)' str_details = 'a string details' str_expected = str_details none_details = None none_expected = 'No signal details provided' # Test for test_d in (ceilo_details, str_details, none_details): rsrc.signal(details=test_d) # Verify mock_add.assert_any_call('SIGNAL', 'COMPLETE', ceilo_expected) mock_add.assert_any_call('SIGNAL', 'COMPLETE', str_expected) mock_add.assert_any_call('SIGNAL', 'COMPLETE', none_expected) @mock.patch.object(generic_resource.SignalResource, 'handle_signal') @mock.patch.object(generic_resource.SignalResource, '_add_event') def test_signal_plugin_reason(self, mock_add, mock_handle): # Setup stack = self._create_stack(TEMPLATE_CFN_SIGNAL) rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) signal_details = {'status': 'COMPLETE'} ret_expected = 'Received COMPLETE signal' mock_handle.return_value = ret_expected # Test rsrc.signal(details=signal_details) # Verify mock_handle.assert_called_once_with(signal_details) # Ensure if handle_signal returns data, we use it as the reason mock_add.assert_any_call('SIGNAL', 'COMPLETE', 'Signal: %s' % ret_expected) def test_signal_wrong_resource(self): # Setup stack = self._create_stack(TEMPLATE_CFN_SIGNAL) rsrc = stack['resource_X'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Test # assert that we get the correct exception when calling a # resource.signal() that does not have a handle_signal() err_metadata = {'Data': 'foo', 'Status': 'SUCCESS', 'UniqueId': '123'} self.assertRaises(exception.ResourceActionNotSupported, rsrc.signal, details=err_metadata) @mock.patch.object(generic_resource.SignalResource, 'handle_signal') def test_signal_reception_failed_call(self, mock_handle): # Setup stack = self._create_stack(TEMPLATE_CFN_SIGNAL) test_d = {'Data': 'foo', 'Reason': 'bar', 'Status': 'SUCCESS', 'UniqueId': '123'} mock_handle.side_effect = ValueError() rsrc = stack['signal_handler'] 
self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Test # assert that we get the correct exception from resource.signal() # when resource.handle_signal() raises an exception. self.assertRaises(exception.ResourceFailure, rsrc.signal, details=test_d) # Verify mock_handle.assert_called_once_with(test_d) def _run_test_signal_not_supported_action(self, action): # Setup stack = self._create_stack(TEMPLATE_CFN_SIGNAL) rsrc = stack['signal_handler'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.action = action # Test err_metadata = {'Data': 'foo', 'Status': 'SUCCESS', 'UniqueId': '123'} msg = 'Signal resource during %s is not supported.' % action exc = self.assertRaises(exception.NotSupported, rsrc.signal, details=err_metadata) self.assertEqual(msg, six.text_type(exc)) def test_signal_in_delete_state(self): # assert that we get the correct exception when calling a # resource.signal() that is in delete action. self._run_test_signal_not_supported_action('DELETE') def test_signal_in_suspend_state(self): # assert that we get the correct exception when calling a # resource.signal() that is in suspend action. self._run_test_signal_not_supported_action('SUSPEND') heat-10.0.2/heat/tests/test_metadata_refresh.py0000666000175000017500000003022213343562340021533 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
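# The templates in this module wire one resource's Metadata section to # another resource's attributes (Fn::GetAtt / get_attr), and the tests drive # metadata_update() to check that the dependent metadata is re-resolved when # the referenced attribute value changes. 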
import mock from oslo_serialization import jsonutils from heat.common import identifier from heat.common import template_format from heat.engine.clients.os import glance from heat.engine.clients.os import nova from heat.engine import environment from heat.engine.resources.aws.cfn.wait_condition_handle import ( WaitConditionHandle) from heat.engine.resources.aws.ec2 import instance from heat.engine.resources.openstack.nova import server from heat.engine import scheduler from heat.engine import service from heat.engine import stack as stk from heat.engine import stk_defn from heat.engine import template as tmpl from heat.tests import common from heat.tests import utils TEST_TEMPLATE_METADATA = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "", "Parameters" : { "KeyName" : {"Type" : "String", "Default": "mine" }, }, "Resources" : { "S1": { "Type": "AWS::EC2::Instance", "Metadata" : { "AWS::CloudFormation::Init" : { "config" : { "files" : { "/tmp/random_file" : { "content" : { "Fn::Join" : ["", [ "s2-ip=", {"Fn::GetAtt": ["S2", "PublicIp"]} ]]}, "mode" : "000400", "owner" : "root", "group" : "root" } } } } }, "Properties": { "ImageId" : "a", "InstanceType" : "m1.large", "KeyName" : { "Ref" : "KeyName" }, "UserData" : "#!/bin/bash -v\n" } }, "S2": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId" : "a", "InstanceType" : "m1.large", "KeyName" : { "Ref" : "KeyName" }, "UserData" : "#!/bin/bash -v\n" } } } } ''' TEST_TEMPLATE_WAIT_CONDITION = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Just a WaitCondition.", "Parameters" : { "KeyName" : {"Type" : "String", "Default": "mine" }, }, "Resources" : { "WH" : { "Type" : "AWS::CloudFormation::WaitConditionHandle" }, "S1": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId" : "a", "InstanceType" : "m1.large", "KeyName" : { "Ref" : "KeyName" }, "UserData" : { "Fn::Join" : [ "", [ "#!/bin/bash -v\n", "echo ", { "Ref" : "WH" }, "\n" ] ] } } }, "WC" : { "Type" : "AWS::CloudFormation::WaitCondition", "DependsOn": "S1", "Properties" : { "Handle" : {"Ref" : "WH"}, "Timeout" : "5" } }, "S2": { "Type": "AWS::EC2::Instance", "Metadata" : { "test" : {"Fn::GetAtt": ["WC", "Data"]} }, "Properties": { "ImageId" : "a", "InstanceType" : "m1.large", "KeyName" : { "Ref" : "KeyName" }, "UserData" : "#!/bin/bash -v\n" } } } } ''' TEST_TEMPLATE_SERVER = ''' heat_template_version: 2013-05-23 resources: instance1: type: OS::Nova::Server metadata: {"template_data": {get_attr: [instance2, networks]}} properties: image: cirros-0.3.2-x86_64-disk flavor: m1.small key_name: stack_key instance2: type: OS::Nova::Server metadata: {'apples': 'pears'} properties: image: cirros-0.3.2-x86_64-disk flavor: m1.small key_name: stack_key ''' class MetadataRefreshTest(common.HeatTestCase): @mock.patch.object(nova.NovaClientPlugin, 'find_flavor_by_name_or_id') @mock.patch.object(glance.GlanceClientPlugin, 'find_image_by_name_or_id') @mock.patch.object(instance.Instance, 'handle_create') @mock.patch.object(instance.Instance, 'check_create_complete') @mock.patch.object(instance.Instance, '_resolve_attribute') def test_FnGetAtt_metadata_updated(self, mock_get, mock_check, mock_handle, *args): """Tests that metadata gets updated when FnGetAtt return changes.""" # Setup temp = template_format.parse(TEST_TEMPLATE_METADATA) template = tmpl.Template(temp, env=environment.Environment({})) ctx = utils.dummy_context() stack = stk.Stack(ctx, 'test_stack', template, disable_rollback=True) stack.store() self.stub_KeypairConstraint_validate() # Configure 
FnGetAtt to return different values on subsequent calls mock_get.side_effect = [ '10.0.0.1', '10.0.0.2', ] # Initial resolution of the metadata stack.create() self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state) # Sanity check on S2 s2 = stack['S2'] self.assertEqual((s2.CREATE, s2.COMPLETE), s2.state) # Verify S1 is using the initial value from S2 s1 = stack['S1'] content = self._get_metadata_content(s1.metadata_get()) self.assertEqual('s2-ip=10.0.0.1', content) # This is not a terribly realistic test - the metadata updates below # happen in run_alarm_action() in service_stack_watch, and actually # operate on a freshly loaded stack so there's no cached attributes. # Clear the attributes cache here to keep it passing. s2.attributes.reset_resolved_values() # Run metadata update to pick up the new value from S2 # (simulating run_alarm_action() in service_stack_watch) s2.metadata_update() stk_defn.update_resource_data(stack.defn, s2.name, s2.node_data()) s1.metadata_update() stk_defn.update_resource_data(stack.defn, s1.name, s1.node_data()) # Verify the updated value is correct in S1 content = self._get_metadata_content(s1.metadata_get()) self.assertEqual('s2-ip=10.0.0.2', content) # Verify outgoing calls mock_get.assert_has_calls([ mock.call('PublicIp'), mock.call('PublicIp')]) self.assertEqual(2, mock_handle.call_count) self.assertEqual(2, mock_check.call_count) @staticmethod def _get_metadata_content(m): tmp = m['AWS::CloudFormation::Init']['config']['files'] return tmp['/tmp/random_file']['content'] class WaitConditionMetadataUpdateTest(common.HeatTestCase): def setUp(self): super(WaitConditionMetadataUpdateTest, self).setUp() self.man = service.EngineService('a-host', 'a-topic') @mock.patch.object(nova.NovaClientPlugin, 'find_flavor_by_name_or_id') @mock.patch.object(glance.GlanceClientPlugin, 'find_image_by_name_or_id') @mock.patch.object(instance.Instance, 'handle_create') @mock.patch.object(instance.Instance, 'check_create_complete') @mock.patch.object(scheduler.TaskRunner, '_sleep') @mock.patch.object(WaitConditionHandle, 'identifier') def test_wait_metadata(self, mock_identifier, mock_sleep, mock_check, mock_handle, *args): """Tests a wait condition metadata update after a signal call.""" # Setup Stack temp = template_format.parse(TEST_TEMPLATE_WAIT_CONDITION) template = tmpl.Template(temp) ctx = utils.dummy_context() stack = stk.Stack(ctx, 'test-stack', template, disable_rollback=True) stack.store() self.stub_KeypairConstraint_validate() res_id = identifier.ResourceIdentifier('test_tenant_id', stack.name, stack.id, '', 'WH') mock_identifier.return_value = res_id watch = stack['WC'] inst = stack['S2'] # Setup Sleep Behavior self.run_empty = True def check_empty(sleep_time): self.assertEqual('{}', watch.FnGetAtt('Data')) self.assertIsNone(inst.metadata_get()['test']) def update_metadata(unique_id, data, reason): self.man.resource_signal(ctx, dict(stack.identifier()), 'WH', {'Data': data, 'Reason': reason, 'Status': 'SUCCESS', 'UniqueId': unique_id}, sync_call=True) def post_success(sleep_time): update_metadata('123', 'foo', 'bar') def side_effect_popper(sleep_time): wh = stack['WH'] if wh.status == wh.IN_PROGRESS: return elif self.run_empty: self.run_empty = False check_empty(sleep_time) else: post_success(sleep_time) mock_sleep.side_effect = side_effect_popper # Test Initial Creation stack.create() self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state) self.assertEqual('{"123": "foo"}', watch.FnGetAtt('Data')) self.assertEqual('{"123": "foo"}', 
        self.assertEqual('{"123": "foo"}', inst.metadata_get()['test'])

        # Test Update
        update_metadata('456', 'blarg', 'wibble')
        self.assertEqual({'123': 'foo', '456': 'blarg'},
                         jsonutils.loads(watch.FnGetAtt('Data')))
        self.assertEqual('{"123": "foo"}', inst.metadata_get()['test'])
        self.assertEqual(
            {'123': 'foo', '456': 'blarg'},
            jsonutils.loads(inst.metadata_get(refresh=True)['test']))

        # Verify outgoing calls
        self.assertEqual(2, mock_handle.call_count)
        self.assertEqual(2, mock_check.call_count)


class MetadataRefreshServerTest(common.HeatTestCase):

    @mock.patch.object(nova.NovaClientPlugin, 'find_flavor_by_name_or_id',
                       return_value=1)
    @mock.patch.object(glance.GlanceClientPlugin, 'find_image_by_name_or_id',
                       return_value=1)
    @mock.patch.object(server.Server, 'handle_create')
    @mock.patch.object(server.Server, 'check_create_complete')
    @mock.patch.object(server.Server, 'get_attribute',
                       new_callable=mock.Mock)
    def test_FnGetAtt_metadata_update(self, mock_get, mock_check,
                                      mock_handle, *args):
        temp = template_format.parse(TEST_TEMPLATE_SERVER)
        template = tmpl.Template(temp,
                                 env=environment.Environment({}))
        ctx = utils.dummy_context()
        stack = stk.Stack(ctx, 'test-stack', template,
                          disable_rollback=True)
        stack.store()

        self.stub_KeypairConstraint_validate()

        # Note dummy addresses are from TEST-NET-1 ref rfc5737
        mock_get.side_effect = ['192.0.2.1', '192.0.2.2']

        # Test
        stack.create()
        self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state)
        s1 = stack['instance1']
        s2 = stack['instance2']
        md = s1.metadata_get()
        self.assertEqual({u'template_data': '192.0.2.1'}, md)

        # Now set some metadata via the resource, like is done by
        # _populate_deployments_metadata. This should be persisted over
        # calls to metadata_update()
        new_md = {u'template_data': '192.0.2.2', 'set_by_rsrc': 'orange'}
        s1.metadata_set(new_md)
        md = s1.metadata_get(refresh=True)
        self.assertEqual(new_md, md)

        s2.attributes.reset_resolved_values()
        stk_defn.update_resource_data(stack.defn, s2.name, s2.node_data())
        s1.metadata_update()
        md = s1.metadata_get(refresh=True)
        self.assertEqual(new_md, md)

        # Verify outgoing calls
        mock_get.assert_has_calls([
            mock.call('networks'),
            mock.call('networks')])
        self.assertEqual(2, mock_handle.call_count)
        self.assertEqual(2, mock_check.call_count)
heat-10.0.2/heat/tests/test_template_files.py0000666000175000017500000000350513343562340021236 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from heat.engine import template_files
from heat.tests import common
from heat.tests import utils

template_files_1 = {'template file 1': 'Contents of template 1',
                    'template file 2': 'More template contents'}


class TestTemplateFiles(common.HeatTestCase):

    def test_cache_miss(self):
        ctx = utils.dummy_context()
        tf1 = template_files.TemplateFiles(template_files_1)
        tf1.store(ctx)
        # As this is the only reference to the value in _d, deleting
        # tf1.files will cause the value to get removed from _d (due to
        # it being a WeakValueDictionary).
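        # (Illustrative sketch of the mechanism under test, using only
        # stdlib weakref semantics:
        #     d = weakref.WeakValueDictionary()
        #     d['k'] = v          # entry lives while v has strong refs
        #     del v               # entry silently disappears from d
        # Deleting tf1.files below drops the last strong reference, so the
        # files_id key should vanish from template_files._d.)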
        del tf1.files
        self.assertNotIn(tf1.files_id, template_files._d)
        # this will cause the cache refresh
        self.assertEqual(template_files_1['template file 1'],
                         tf1['template file 1'])
        self.assertEqual(template_files_1, template_files._d[tf1.files_id])

    def test_d_weakref_behaviour(self):
        ctx = utils.dummy_context()
        tf1 = template_files.TemplateFiles(template_files_1)
        tf1.store(ctx)
        tf2 = template_files.TemplateFiles(tf1)
        del tf1.files
        self.assertIn(tf2.files_id, template_files._d)
        del tf2.files
        self.assertNotIn(tf2.files_id, template_files._d)
heat-10.0.2/heat/tests/test_provider_template.py0000666000175000017500000013213413343562352021772 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import json
import os
import uuid

import mock
import six

from heat.common import exception
from heat.common.i18n import _
from heat.common import identifier
from heat.common import template_format
from heat.common import urlfetch
from heat.engine import attributes
from heat.engine import environment
from heat.engine import properties
from heat.engine import resource
from heat.engine import resources
from heat.engine.resources import template_resource
from heat.engine import rsrc_defn
from heat.engine import stack as parser
from heat.engine import support
from heat.engine import template
from heat.tests import common
from heat.tests import generic_resource as generic_rsrc
from heat.tests import utils


empty_template = {"HeatTemplateFormatVersion": "2012-12-12"}


class MyCloudResource(generic_rsrc.GenericResource):
    pass


class ProviderTemplateTest(common.HeatTestCase):

    def setUp(self):
        super(ProviderTemplateTest, self).setUp()
        resource._register_class('myCloud::ResourceType',
                                 MyCloudResource)

    def test_get_os_empty_registry(self):
        # assertion: with an empty environment we get the correct
        # default class.
        env_str = {'resource_registry': {}}
        env = environment.Environment(env_str)
        cls = env.get_class('GenericResourceType', 'fred')
        self.assertEqual(generic_rsrc.GenericResource, cls)

    def test_get_mine_global_map(self):
        # assertion: with a global wildcard rule we get the "mycloud" class.
        env_str = {'resource_registry': {"OS::*": "myCloud::*"}}
        env = environment.Environment(env_str)
        cls = env.get_class('OS::ResourceType', 'fred')
        self.assertEqual(MyCloudResource, cls)

    def test_get_mine_type_map(self):
        # assertion: with an exact type mapping we get the "mycloud" class.
        env_str = {'resource_registry': {
            "OS::ResourceType": "myCloud::ResourceType"}}
        env = environment.Environment(env_str)
        cls = env.get_class('OS::ResourceType', 'fred')
        self.assertEqual(MyCloudResource, cls)

    def test_get_mine_resource_map(self):
        # assertion: with a per-resource rule we get the "mycloud" class.
        env_str = {'resource_registry': {'resources': {'fred': {
            "OS::ResourceType": "myCloud::ResourceType"}}}}
        env = environment.Environment(env_str)
        cls = env.get_class('OS::ResourceType', 'fred')
        self.assertEqual(MyCloudResource, cls)
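
    # The three lookups above resolve with the most specific registry match
    # winning: a per-resource entry under 'resources' beats an exact type
    # mapping, which beats a wildcard rule, e.g. (illustrative only):
    #     {'resource_registry': {
    #         'OS::*': 'myCloud::*',                        # wildcard
    #         'OS::ResourceType': 'myCloud::ResourceType',  # type rule
    #         'resources': {'fred': {...}}}}                # per-resource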

    def test_get_os_no_match(self):
        # assertion: make sure 'fred' doesn't match 'jerry'.
        env_str = {'resource_registry': {'resources': {'jerry': {
            "OS::ResourceType": "myCloud::ResourceType"}}}}
        env = environment.Environment(env_str)
        cls = env.get_class('GenericResourceType', 'fred')
        self.assertEqual(generic_rsrc.GenericResource, cls)

    def test_to_parameters(self):
        """Tests property conversion to parameter values."""
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Parameters': {
                'Foo': {'Type': 'String'},
                'AList': {'Type': 'CommaDelimitedList'},
                'MemList': {'Type': 'CommaDelimitedList'},
                'ListEmpty': {'Type': 'CommaDelimitedList'},
                'ANum': {'Type': 'Number'},
                'AMap': {'Type': 'Json'},
            },
            'Outputs': {
                'Foo': {'Value': 'bar'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        class DummyResource(generic_rsrc.GenericResource):
            attributes_schema = {"Foo": attributes.Schema("A test attribute")}
            properties_schema = {
                "Foo": {"Type": "String"},
                "AList": {"Type": "List"},
                "MemList": {"Type": "List"},
                "ListEmpty": {"Type": "List"},
                "ANum": {"Type": "Number"},
                "AMap": {"Type": "Map"}
            }

        env = environment.Environment()
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template, files=files,
                                               env=env),
                             stack_id=str(uuid.uuid4()))

        map_prop_val = {
            "key1": "val1",
            "key2": ["lval1", "lval2", "lval3"],
            "key3": {
                "key4": 4,
                "key5": False
            }
        }
        prop_vals = {
            "Foo": "Bar",
            "AList": ["one", "two", "three"],
            "MemList": [collections.OrderedDict([
                ('key', 'name'),
                ('value', 'three'),
            ]), collections.OrderedDict([
                ('key', 'name'),
                ('value', 'four'),
            ])],
            "ListEmpty": [],
            "ANum": 5,
            "AMap": map_prop_val,
        }

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  'DummyResource',
                                                  prop_vals)
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        temp_res.validate()
        converted_params = temp_res.child_params()
        self.assertTrue(converted_params)
        for key in DummyResource.properties_schema:
            self.assertIn(key, converted_params)
        # verify String conversion
        self.assertEqual("Bar", converted_params.get("Foo"))
        # verify List conversion
        self.assertEqual("one,two,three", converted_params.get("AList"))
        # verify Member List conversion
        mem_exp = ('.member.0.key=name,'
                   '.member.0.value=three,'
                   '.member.1.key=name,'
                   '.member.1.value=four')
        self.assertEqual(sorted(mem_exp.split(',')),
                         sorted(converted_params.get("MemList").split(',')))
        # verify Number conversion
        self.assertEqual(5, converted_params.get("ANum"))
        # verify Map conversion
        self.assertEqual(map_prop_val, converted_params.get("AMap"))

        with mock.patch.object(properties.Properties,
                               'get_user_value') as m_get:
            m_get.side_effect = ValueError('boom')

            # If the property doesn't exist on INIT, return default value
            temp_res.action = temp_res.INIT
            converted_params = temp_res.child_params()
            for key in DummyResource.properties_schema:
                self.assertIn(key, converted_params)
            self.assertEqual({}, converted_params['AMap'])
            self.assertEqual(0, converted_params['ANum'])

            # If the property doesn't exist past INIT, then error out
            temp_res.action = temp_res.CREATE
            self.assertRaises(ValueError, temp_res.child_params)

    def test_attributes_extra(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Outputs': {
                'Foo': {'Value': 'bar'},
                'Blarg': {'Value': 'wibble'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        class DummyResource(generic_rsrc.GenericResource):
            attributes_schema = {"Foo": attributes.Schema("A test attribute")}

        env = environment.Environment()
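        # Mapping the facade class to a template below turns it into a
        # TemplateResource; extra template outputs beyond attributes_schema
        # (Blarg here) are expected to be harmless at validation time.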
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template, files=files,
                                               env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource")
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertIsNone(temp_res.validate())

    def test_attributes_missing_based_on_class(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Outputs': {
                'Blarg': {'Value': 'wibble'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        class DummyResource(generic_rsrc.GenericResource):
            attributes_schema = {"Foo": attributes.Schema("A test attribute")}

        env = environment.Environment()
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template, files=files,
                                               env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource")
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertRaises(exception.StackValidationFailed,
                          temp_res.validate)

    def test_attributes_missing_no_class(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Outputs': {
                'Blarg': {'Value': 'wibble'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        env = environment.Environment()
        env.load({'resource_registry':
                  {'DummyResource2': 'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template, files=files,
                                               env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource2")
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        temp_res.resource_id = 'dummy_id'
        temp_res.nested_identifier = mock.Mock()
        temp_res.nested_identifier.return_value = {'foo': 'bar'}

        temp_res._rpc_client = mock.MagicMock()
        output = {'outputs': [{'output_key': 'Blarg',
                               'output_value': 'fluffy'}]}
        temp_res._rpc_client.show_stack.return_value = [output]
        self.assertRaises(exception.InvalidTemplateAttribute,
                          temp_res.FnGetAtt, 'Foo')

    def test_attributes_not_parsable(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Outputs': {
                'Foo': {'Value': 'bar'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        class DummyResource(generic_rsrc.GenericResource):
            support_status = support.SupportStatus()
            properties_schema = {}
            attributes_schema = {"Foo": attributes.Schema(
                "A test attribute")}

        env = environment.Environment()
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template, files=files,
                                               env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource")
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        temp_res.resource_id = 'dummy_id'
        temp_res.nested_identifier = mock.Mock()
        temp_res.nested_identifier.return_value = {'foo': 'bar'}

        temp_res._rpc_client = mock.MagicMock()
        output = {'outputs': [{'output_key': 'Foo', 'output_value': None,
                               'output_error': 'it is all bad'}]}
        temp_res._rpc_client.show_stack.return_value = [output]
        temp_res._rpc_client.list_stack_resources.return_value = []
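        # The stubbed RPC payload above carries an output_error, which is
        # only surfaced when the attribute is actually read, so validate()
        # should still pass while FnGetAtt raises.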
        self.assertIsNone(temp_res.validate())
        self.assertRaises(exception.TemplateOutputError,
                          temp_res.FnGetAtt, 'Foo')

    def test_properties_normal(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Parameters': {
                'Foo': {'Type': 'String'},
                'Blarg': {'Type': 'String', 'Default': 'wibble'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        env = environment.Environment()
        env.load({'resource_registry':
                  {'ResourceWithRequiredPropsAndEmptyAttrs':
                   'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template, files=files,
                                               env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition(
            'test_t_res', "ResourceWithRequiredPropsAndEmptyAttrs",
            {"Foo": "bar"})
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertIsNone(temp_res.validate())

    def test_properties_missing(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Parameters': {
                'Blarg': {'Type': 'String', 'Default': 'wibble'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        env = environment.Environment()
        env.load({'resource_registry':
                  {'ResourceWithRequiredPropsAndEmptyAttrs':
                   'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template, files=files,
                                               env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition(
            'test_t_res', "ResourceWithRequiredPropsAndEmptyAttrs")
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertRaises(exception.StackValidationFailed,
                          temp_res.validate)

    def test_properties_extra_required(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Parameters': {
                'Blarg': {'Type': 'String'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        class DummyResource(object):
            support_status = support.SupportStatus()
            properties_schema = {}
            attributes_schema = {}

        env = environment.Environment()
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template, files=files,
                                               env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource",
                                                  {"Blarg": "wibble"})
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertRaises(exception.StackValidationFailed,
                          temp_res.validate)

    def test_properties_type_mismatch(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Parameters': {
                'Foo': {'Type': 'String'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        class DummyResource(generic_rsrc.GenericResource):
            support_status = support.SupportStatus()
            properties_schema = {"Foo":
                                 properties.Schema(properties.Schema.MAP)}
            attributes_schema = {}

        env = environment.Environment()
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(
                                 {'HeatTemplateFormatVersion': '2012-12-12'},
                                 files=files, env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource",
                                                  {"Foo": "bar"})
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        ex = self.assertRaises(exception.StackValidationFailed,
                               temp_res.validate)
        self.assertEqual("Property Foo type mismatch between facade "
                         "DummyResource (Map) and provider (String)",
                         six.text_type(ex))

    def test_properties_list_with_none(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Parameters': {
                'Foo': {'Type': "CommaDelimitedList"},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        class DummyResource(generic_rsrc.GenericResource):
            support_status = support.SupportStatus()
            properties_schema = {"Foo":
                                 properties.Schema(properties.Schema.LIST)}
            attributes_schema = {}

        env = environment.Environment()
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(
                                 {'HeatTemplateFormatVersion': '2012-12-12'},
                                 files=files, env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource",
                                                  {"Foo": [None,
                                                           'test', None]})
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertIsNone(temp_res.validate())

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource",
                                                  {"Foo": [None,
                                                           None, None]})
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertIsNone(temp_res.validate())

    def test_properties_type_match(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Parameters': {
                'Length': {'Type': 'Number'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        class DummyResource(generic_rsrc.GenericResource):
            support_status = support.SupportStatus()
            properties_schema = {"Length":
                                 properties.Schema(properties.Schema.INTEGER)}
            attributes_schema = {}

        env = environment.Environment()
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(
                                 {'HeatTemplateFormatVersion': '2012-12-12'},
                                 files=files, env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource",
                                                  {"Length": 10})
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertIsNone(temp_res.validate())

    def test_boolean_type_provider(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Parameters': {
                'Foo': {'Type': 'Boolean'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        class DummyResource(generic_rsrc.GenericResource):
            support_status = support.SupportStatus()
            properties_schema = {"Foo":
                                 properties.Schema(properties.Schema.BOOLEAN)}
            attributes_schema = {}

        env = environment.Environment()
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(
                                 {'HeatTemplateFormatVersion': '2012-12-12'},
                                 files=files, env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource",
                                                  {"Foo": "False"})
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertIsNone(temp_res.validate())

    def test_resource_info_general(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Parameters': {
                'Foo': {'Type': 'Boolean'},
            },
        }
        files = {'test_resource.template': json.dumps(provider),
                 'foo.template': json.dumps(provider)}

        class DummyResource(generic_rsrc.GenericResource):
            properties_schema = {"Foo":
                                 properties.Schema(properties.Schema.BOOLEAN)}
            attributes_schema = {}

        env = environment.Environment()
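        # This test and test_resource_info_special below exercise which
        # registry entry supplies the template: the plain type mapping here
        # (resource name 'test_t_res' has no per-resource entry), and the
        # per-resource 'resources' entry in the special case.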
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template',
                   'resources': {'foo': 'foo.template'}}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(
                                 {'HeatTemplateFormatVersion': '2012-12-12'},
                                 files=files, env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource",
                                                  {"Foo": "False"})
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertEqual('test_resource.template', temp_res.template_url)

    def test_resource_info_special(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Parameters': {
                'Foo': {'Type': 'Boolean'},
            },
        }
        files = {'test_resource.template': json.dumps(provider),
                 'foo.template': json.dumps(provider)}

        class DummyResource(object):
            support_status = support.SupportStatus()
            properties_schema = {"Foo":
                                 properties.Schema(properties.Schema.BOOLEAN)}
            attributes_schema = {}

        env = environment.Environment()
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template',
                   'resources': {'foo': {'DummyResource': 'foo.template'}}}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(
                                 {'HeatTemplateFormatVersion': '2012-12-12'},
                                 files=files, env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('foo', 'DummyResource',
                                                  {"Foo": "False"})
        temp_res = template_resource.TemplateResource('foo', definition,
                                                      stack)
        self.assertEqual('foo.template', temp_res.template_url)

    def test_get_error_for_invalid_template_name(self):
        # assertion: if the name matches {.yaml|.template} and is valid
        # we get the TemplateResource class, otherwise error will be raised.
        env_str = {'resource_registry': {'resources': {'fred': {
            "OS::ResourceType": "some_magic.yaml"}}}}
        env = environment.Environment(env_str)
        ex = self.assertRaises(exception.NotFound, env.get_class,
                               'OS::ResourceType', 'fred')
        self.assertIn('Could not fetch remote template "some_magic.yaml"',
                      six.text_type(ex))

    def test_metadata_update_called(self):
        provider = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Parameters': {
                'Foo': {'Type': 'Boolean'},
            },
        }
        files = {'test_resource.template': json.dumps(provider)}

        class DummyResource(object):
            support_status = support.SupportStatus()
            properties_schema = {"Foo":
                                 properties.Schema(properties.Schema.BOOLEAN)}
            attributes_schema = {}

        env = environment.Environment()
        resource._register_class('DummyResource', DummyResource)
        env.load({'resource_registry':
                  {'DummyResource': 'test_resource.template'}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(
                                 {'HeatTemplateFormatVersion': '2012-12-12'},
                                 files=files, env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  "DummyResource",
                                                  {"Foo": "False"})
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        temp_res.metadata_set = mock.Mock()
        temp_res.metadata_update()
        temp_res.metadata_set.assert_called_once_with({})

    def test_get_template_resource_class(self):
        test_templ_name = 'file:///etc/heatr/frodo.yaml'
        minimal_temp = json.dumps({'HeatTemplateFormatVersion': '2012-12-12',
                                   'Parameters': {},
                                   'Resources': {}})
        self.m.StubOutWithMock(urlfetch, "get")
        urlfetch.get(test_templ_name,
                     allowed_schemes=('file',)).AndReturn(minimal_temp)
        self.m.ReplayAll()

        env_str = {'resource_registry': {'resources': {'fred': {
            "OS::ResourceType": test_templ_name}}}}
        global_env = environment.Environment({}, user_env=False)
        global_env.load(env_str)
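
        # Patching the module-global environment below makes the fresh user
        # Environment fall back to the loaded global registry, so get_class
        # resolves 'OS::ResourceType' through the fetched template.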
        with mock.patch('heat.engine.resources._environment', global_env):
            env = environment.Environment({})
            cls = env.get_class('OS::ResourceType', 'fred')
        self.assertNotEqual(template_resource.TemplateResource, cls)
        self.assertTrue(issubclass(cls, template_resource.TemplateResource))
        self.assertTrue(hasattr(cls, "properties_schema"))
        self.assertTrue(hasattr(cls, "attributes_schema"))
        self.m.VerifyAll()

    def test_template_as_resource(self):
        """Test that resulting resource has the right prop and attrib schema.

        Note that this test requires the WordPress_Single_Instance.yaml
        template in the templates directory since we want to test using a
        non-trivial template.
        """
        test_templ_name = "WordPress_Single_Instance.yaml"
        path = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                            'templates', test_templ_name)
        # check if it's in the directory list vs. exists to work around
        # case-insensitive file systems
        self.assertIn(test_templ_name, os.listdir(os.path.dirname(path)))

        with open(path) as test_templ_file:
            test_templ = test_templ_file.read()
        self.assertTrue(test_templ, "Empty test template")
        self.m.StubOutWithMock(urlfetch, "get")
        urlfetch.get(test_templ_name,
                     allowed_schemes=('http', 'https')).AndReturn(test_templ)
        parsed_test_templ = template_format.parse(test_templ)
        self.m.ReplayAll()

        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template),
                             stack_id=str(uuid.uuid4()))

        properties = {
            "KeyName": "mykeyname",
            "DBName": "wordpress1",
            "DBUsername": "wpdbuser",
            "DBPassword": "wpdbpass",
            "DBRootPassword": "wpdbrootpass",
            "LinuxDistribution": "U10"
        }
        definition = rsrc_defn.ResourceDefinition("test_templ_resource",
                                                  test_templ_name,
                                                  properties)
        templ_resource = resource.Resource("test_templ_resource", definition,
                                           stack)
        self.m.VerifyAll()
        self.assertIsInstance(templ_resource,
                              template_resource.TemplateResource)
        for prop in parsed_test_templ.get("Parameters", {}):
            self.assertIn(prop, templ_resource.properties)
        for attrib in parsed_test_templ.get("Outputs", {}):
            self.assertIn(attrib, templ_resource.attributes)
        for k, v in properties.items():
            self.assertEqual(v, templ_resource.properties[k])
        self.assertEqual(
            {'WordPress_Single_Instance.yaml':
             'WordPress_Single_Instance.yaml', 'resources': {}},
            stack.env.user_env_as_dict()["resource_registry"])
        self.assertNotIn('WordPress_Single_Instance.yaml',
                         resources.global_env().registry._registry)

    def test_persisted_unregistered_provider_templates(self):
        """Test that templates are registered correctly.

        Test that templates persisted in the database prior to
        https://review.openstack.org/#/c/79953/1 are registered correctly.
        """
        env = {'resource_registry': {'http://example.com/test.template': None,
                                     'resources': {}}}
        # A KeyError will be thrown prior to this fix.
        environment.Environment(env=env)

    def test_system_template_retrieve_by_file(self):
        # make sure that a TemplateResource defined in the global environment
        # can be created and the template retrieved using the "file:"
        # scheme.
        g_env = resources.global_env()
        test_templ_name = 'file:///etc/heatr/frodo.yaml'
        g_env.load({'resource_registry':
                    {'Test::Frodo': test_templ_name}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template),
                             stack_id=str(uuid.uuid4()))

        minimal_temp = json.dumps({'HeatTemplateFormatVersion': '2012-12-12',
                                   'Parameters': {},
                                   'Resources': {}})
        self.m.StubOutWithMock(urlfetch, "get")
        urlfetch.get(test_templ_name,
                     allowed_schemes=('http', 'https',
                                      'file')).AndReturn(minimal_temp)
        self.m.ReplayAll()

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  'Test::Frodo')
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertIsNone(temp_res.validate())
        self.m.VerifyAll()

    def test_user_template_not_retrieved_by_file(self):
        # make sure that a TemplateResource defined in the user environment
        # can NOT be retrieved using the "file:" scheme; validation should
        # fail
        env = environment.Environment()
        test_templ_name = 'file:///etc/heatr/flippy.yaml'
        env.load({'resource_registry':
                  {'Test::Flippy': test_templ_name}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template, env=env),
                             stack_id=str(uuid.uuid4()))

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  'Test::Flippy')
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertRaises(exception.StackValidationFailed, temp_res.validate)

    def test_system_template_retrieve_fail(self):
        # make sure that a TemplateResource defined in the global environment
        # fails gracefully if the template file specified is inaccessible
        # we should be able to create the TemplateResource object, but
        # validation should fail, when the second attempt to access it is
        # made in validate()
        g_env = resources.global_env()
        test_templ_name = 'file:///etc/heatr/frodo.yaml'
        g_env.load({'resource_registry':
                    {'Test::Frodo': test_templ_name}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template),
                             stack_id=str(uuid.uuid4()))

        self.m.StubOutWithMock(urlfetch, "get")
        urlfetch.get(test_templ_name,
                     allowed_schemes=('http', 'https', 'file')
                     ).AndRaise(urlfetch.URLFetchError(
                         _('Failed to retrieve template')))
        self.m.ReplayAll()

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  'Test::Frodo')
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertRaises(exception.StackValidationFailed, temp_res.validate)
        self.m.VerifyAll()

    def test_user_template_retrieve_fail(self):
        # make sure that a TemplateResource defined in the user environment
        # fails gracefully if the template file specified is inaccessible
        # we should be able to create the TemplateResource object, but
        # validation should fail, when the second attempt to access it is
        # made in validate()
        env = environment.Environment()
        test_templ_name = 'http://heatr/noexist.yaml'
        env.load({'resource_registry':
                  {'Test::Flippy': test_templ_name}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template, env=env),
                             stack_id=str(uuid.uuid4()))

        self.m.StubOutWithMock(urlfetch, "get")
        urlfetch.get(test_templ_name,
                     allowed_schemes=('http', 'https')
                     ).AndRaise(urlfetch.URLFetchError(
                         _('Failed to retrieve template')))
        self.m.ReplayAll()

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  'Test::Flippy')
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertRaises(exception.StackValidationFailed, temp_res.validate)
        self.m.VerifyAll()
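
    # The retrieval-failure tests here mirror the scheme restrictions seen
    # above: user environments may only fetch http(s) URLs, while only the
    # global (operator-controlled) environment may also use file://.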

    def test_user_template_retrieve_fail_ext(self):
        # make sure that a TemplateResource defined in the user environment
        # fails gracefully if the template file is the wrong extension
        # we should be able to create the TemplateResource object, but
        # validation should fail, when the second attempt to access it is
        # made in validate()
        env = environment.Environment()
        test_templ_name = 'http://heatr/letter_to_granny.docx'
        env.load({'resource_registry':
                  {'Test::Flippy': test_templ_name}})
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(empty_template, env=env),
                             stack_id=str(uuid.uuid4()))

        self.m.ReplayAll()

        definition = rsrc_defn.ResourceDefinition('test_t_res',
                                                  'Test::Flippy')
        temp_res = template_resource.TemplateResource('test_t_res',
                                                      definition, stack)
        self.assertRaises(exception.StackValidationFailed, temp_res.validate)
        self.m.VerifyAll()

    def test_incorrect_template_provided_with_url(self):
        wrong_template = '''
         Hello world HOT template that just defines a single compute instance.
         Contains just base features to verify base HOT support.

         parameters:
           KeyName:
             type: string
             description: Name of an existing key pair to use for the instance
             label: Nova KeyPair Name

         resources:
           my_instance:
             type: AWS::EC2::Instance
             properties:
               KeyName: { get_param: KeyName }
               ImageId: { get_param: ImageId }
               InstanceType: { get_param: InstanceType }

         outputs:
           instance_ip:
             description: The IP address of the deployed instance
             value: { get_attr: [my_instance, PublicIp] }
        '''

test_template_duplicate_parameters = '''
# This is a hello world HOT template just defining a single compute instance
heat_template_version: 2013-05-23

parameter_groups:
  - label: Server Group
    description: A group of parameters for the server
    parameters:
    - InstanceType
    - KeyName
    - ImageId
  - label: Database Group
    description: A group of parameters for the database
    parameters:
    - db_password
    - db_port
    - InstanceType

parameters:
  KeyName:
    type: string
    description: Name of an existing key pair to use for the instance
  InstanceType:
    type: string
    description: Instance type for the instance to be created
    default: m1.small
    constraints:
      - allowed_values: [m1.tiny, m1.small, m1.large]
        description: Value must be one of 'm1.tiny', 'm1.small' or 'm1.large'
  ImageId:
    type: string
    description: ID of the image to use for the instance
  # parameters below are not used in template, but are for verifying parameter
  # validation support in HOT
  db_password:
    type: string
    description: Database password
    hidden: true
    constraints:
      - length: { min: 6, max: 8 }
        description: Password length must be between 6 and 8 characters
      - allowed_pattern: "[a-zA-Z0-9]+"
        description: Password must consist of characters and numbers only
      - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
        description: Password must start with an uppercase character
  db_port:
    type: number
    description: Database port number
    default: 50000
    constraints:
      - range: { min: 40000, max: 60000 }
        description: Port number must be between 40000 and 60000

resources:
  my_instance:
    # Use an AWS resource type since this exists; so why use other name here?
    type: AWS::EC2::Instance
    properties:
      KeyName: { get_param: KeyName }
      ImageId: { get_param: ImageId }
      InstanceType: { get_param: InstanceType }

outputs:
  instance_ip:
    description: The IP address of the deployed instance
    value: { get_attr: [my_instance, PublicIp] }
'''

test_template_invalid_parameter_name = '''
# This is a hello world HOT template just defining a single compute instance
heat_template_version: 2013-05-23

description: >
  Hello world HOT template that just defines a single compute instance.
  Contains just base features to verify base HOT support.

parameter_groups:
  - label: Server Group
    description: A group of parameters for the server
    parameters:
    - InstanceType
    - KeyName
    - ImageId
  - label: Database Group
    description: A group of parameters for the database
    parameters:
    - db_password
    - db_port
    - SomethingNotHere

parameters:
  KeyName:
    type: string
    description: Name of an existing key pair to use for the instance
  InstanceType:
    type: string
    description: Instance type for the instance to be created
    default: m1.small
    constraints:
      - allowed_values: [m1.tiny, m1.small, m1.large]
        description: Value must be one of 'm1.tiny', 'm1.small' or 'm1.large'
  ImageId:
    type: string
    description: ID of the image to use for the instance
  # parameters below are not used in template, but are for verifying parameter
  # validation support in HOT
  db_password:
    type: string
    description: Database password
    hidden: true
    constraints:
      - length: { min: 6, max: 8 }
        description: Password length must be between 6 and 8 characters
      - allowed_pattern: "[a-zA-Z0-9]+"
        description: Password must consist of characters and numbers only
      - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
        description: Password must start with an uppercase character
  db_port:
    type: number
    description: Database port number
    default: 50000
    constraints:
      - range: { min: 40000, max: 60000 }
        description: Port number must be between 40000 and 60000

resources:
  my_instance:
    # Use an AWS resource type since this exists; so why use other name here?
    type: AWS::EC2::Instance
    properties:
      KeyName: { get_param: KeyName }
      ImageId: { get_param: ImageId }
      InstanceType: { get_param: InstanceType }

outputs:
  instance_ip:
    description: The IP address of the deployed instance
    value: { get_attr: [my_instance, PublicIp] }
'''

test_template_hot_no_parameter_label = '''
heat_template_version: 2013-05-23

description: >
  Hello world HOT template that just defines a single compute instance.
  Contains just base features to verify base HOT support.

parameters:
  KeyName:
    type: string
    description: Name of an existing key pair to use for the instance

resources:
  my_instance:
    type: AWS::EC2::Instance
    properties:
      KeyName: { get_param: KeyName }
      ImageId: { get_param: ImageId }
      InstanceType: { get_param: InstanceType }
'''

test_template_no_parameters = '''
heat_template_version: 2013-05-23

description: >
  Hello world HOT template that just defines a single compute instance.
  Contains just base features to verify base HOT support.

parameter_groups:
  - label: Server Group
    description: A group of parameters for the server
  - label: Database Group
    description: A group of parameters for the database

resources:
  server:
    type: OS::Nova::Server
'''

test_template_parameter_groups_not_list = '''
heat_template_version: 2013-05-23

description: >
  Hello world HOT template that just defines a single compute instance.
  Contains just base features to verify base HOT support.
parameter_groups:
  label: Server Group
  description: A group of parameters for the server
  parameters:
    key_name: heat_key
  label: Database Group
  description: A group of parameters for the database
  parameters:
    public_net: public

resources:
  server:
    type: OS::Nova::Server
'''

test_template_parameters_not_list = '''
heat_template_version: 2013-05-23

description: >
  Hello world HOT template that just defines a single compute instance.
  Contains just base features to verify base HOT support.

parameter_groups:
  - label: Server Group
    description: A group of parameters for the server
    parameters:
      key_name: heat_key
      public_net: public

resources:
  server:
    type: OS::Nova::Server
'''

test_template_parameters_error_no_label = '''
heat_template_version: 2013-05-23

description: >
  Hello world HOT template that just defines a single compute instance.
  Contains just base features to verify base HOT support.

parameter_groups:
  - parameters:
      key_name: heat_key

resources:
  server:
    type: OS::Nova::Server
'''

test_template_parameters_duplicate_no_label = '''
heat_template_version: 2013-05-23

description: >
  Hello world HOT template that just defines a single compute instance.
  Contains just base features to verify base HOT support.

parameters:
  key_name:
    type: string
    description: Name of an existing key pair to use for the instance
    default: heat_key

parameter_groups:
  - parameters:
    - key_name
  - parameters:
    - key_name

resources:
  server:
    type: OS::Nova::Server
'''

test_template_invalid_parameter_no_label = '''
heat_template_version: 2013-05-23

description: >
  Hello world HOT template that just defines a single compute instance.
  Contains just base features to verify base HOT support.

parameter_groups:
  - parameters:
    - key_name

resources:
  server:
    type: OS::Nova::Server
'''

test_template_allowed_integers = '''
heat_template_version: 2013-05-23

parameters:
  size:
    type: number
    constraints:
      - allowed_values: [1, 4, 8]

resources:
  my_volume:
    type: OS::Cinder::Volume
    properties:
      size: { get_param: size }
'''

test_template_allowed_integers_str = '''
heat_template_version: 2013-05-23

parameters:
  size:
    type: number
    constraints:
      - allowed_values: ['1', '4', '8']

resources:
  my_volume:
    type: OS::Cinder::Volume
    properties:
      size: { get_param: size }
'''

test_template_default_override = '''
heat_template_version: 2013-05-23

description: create a network

parameters:
  net_name:
    type: string
    default: defaultnet
    description: Name of private network to be created

resources:
  private_net:
    type: OS::Neutron::Net
    properties:
      name: { get_param: net_name }
'''

test_template_no_default = '''
heat_template_version: 2013-05-23

description: create a network

parameters:
  net_name:
    type: string
    description: Name of private network to be created
  merged_param:
    type: comma_delimited_list
    description: A merged list of values

resources:
  private_net:
    type: OS::Neutron::Net
    properties:
      name: { get_param: net_name }
'''

test_template_invalid_outputs = '''
heat_template_version: 2013-05-23

resources:
  random_str:
    type: OS::Heat::RandomString

outputs:
  string:
    value: {get_attr: [[random_str, value]]}
'''

test_template_circular_reference = '''
heat_template_version: 2013-05-23

resources:
  res1:
    type: OS::Heat::None
    depends_on: res3
  res2:
    type: OS::Heat::None
    depends_on: res1
  res3:
    type: OS::Heat::None
    depends_on: res2
'''

test_template_external_rsrc = '''
heat_template_version: pike

resources:
  random_str:
    type: OS::Nova::Server
    external_id: foobar
'''

test_template_hot_parameter_tags_older = '''
heat_template_version: 2013-05-23

parameters:
  KeyName:
    type: string
    description: Name of an existing key pair to use for the instance
    label: Nova KeyPair Name
    tags:
    - feature1
    - feature2
'''

test_template_hot_parameter_tags_pass = '''
heat_template_version: 2018-03-02

parameters:
  KeyName:
    type: string
    description: Name of an existing key pair to use for the instance
    label: Nova KeyPair Name
    tags:
    - feature1
    - feature2
'''

test_template_hot_parameter_tags_fail = '''
heat_template_version: 2018-03-02

parameters:
  KeyName:
    type: string
    description: Name of an existing key pair to use for the instance
    label: Nova KeyPair Name
    tags: feature
'''


class ValidateTest(common.HeatTestCase):
    def setUp(self):
        super(ValidateTest, self).setUp()
        resources.initialise()
        self.fc = fakes_nova.FakeClient()
        self.gc = fakes_nova.FakeClient()
        resources.initialise()
        self.ctx = utils.dummy_context()

        self.mock_isa = mock.patch(
            'heat.engine.resource.Resource.is_service_available',
            return_value=(True, None))
        self.mock_is_service_available = self.mock_isa.start()
        self.addCleanup(self.mock_isa.stop)
        self.engine = service.EngineService('a', 't')
        self.empty_environment = {
            'event_sinks': [],
            'parameter_defaults': {},
            'parameters': {},
            'resource_registry': {'resources': {}}}

    def _mock_get_image_id_success(self, imageId):
        self.patchobject(glance.GlanceClientPlugin,
                         'find_image_by_name_or_id',
                         return_value=imageId)

    def _mock_get_image_id_fail(self, exp):
        self.patchobject(glance.GlanceClientPlugin,
                         'find_image_by_name_or_id',
                         side_effect=exp)

    def test_validate_volumeattach_valid(self):
        t = template_format.parse(test_template_volumeattach % 'vdq')
        stack = parser.Stack(self.ctx, 'test_stack', tmpl.Template(t))

        volumeattach = stack['MountPoint']
        self.assertIsNone(volumeattach.validate())

    def test_validate_volumeattach_invalid(self):
        t = template_format.parse(test_template_volumeattach % 'sda')
        stack = parser.Stack(self.ctx, 'test_stack', tmpl.Template(t))

        volumeattach = stack['MountPoint']
        self.assertRaises(exception.StackValidationFailed,
                          volumeattach.validate)

    def test_validate_ref_valid(self):
        t = template_format.parse(test_template_ref % 'WikiDatabase')
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertEqual('test.', res['Description'])

    def test_validate_with_environment(self):
        test_template = test_template_ref % 'WikiDatabase'
        test_template = test_template.replace('AWS::EC2::Instance',
                                              'My::Instance')
        t = template_format.parse(test_template)
        params = {'resource_registry': {'My::Instance': 'AWS::EC2::Instance'}}
        res = dict(self.engine.validate_template(self.ctx, t, params))
        self.assertEqual('test.', res['Description'])

    def test_validate_hot_valid(self):
        t = template_format.parse(
            """
            heat_template_version: 2013-05-23
            description: test.
            resources:
              my_instance:
                type: AWS::EC2::Instance
            """)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertEqual('test.', res['Description'])

    def test_validate_ref_invalid(self):
        t = template_format.parse(test_template_ref % 'WikiDatabasez')
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertNotEqual(res['Description'], 'Successfully validated')

    def test_validate_findinmap_valid(self):
        t = template_format.parse(test_template_findinmap_valid)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertEqual('test.', res['Description'])

    def test_validate_findinmap_invalid(self):
        t = template_format.parse(test_template_findinmap_invalid)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertNotEqual(res['Description'], 'Successfully validated')

    def test_validate_bad_yaql_metadata(self):
        t = template_format.parse(test_template_bad_yaql_metadata)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertIn('Error', res)
        self.assertIn('yaql', res['Error'])

    def test_validate_parameters(self):
        t = template_format.parse(test_template_ref % 'WikiDatabase')
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        # Note: the assertion below does not expect a CFN dict of the
        # parameter but a dict of the parameters.Schema object.
        # For API CFN backward compatibility, formatting to CFN is done in
        # the API layer in heat.engine.api.format_validate_parameter.
        expected = {'KeyName': {
            'Type': 'String',
            'Description': 'Name of an existing EC2KeyPair',
            'NoEcho': 'false',
            'Label': 'KeyName'}}
        self.assertEqual(expected, res['Parameters'])

    def test_validate_parameters_env_override(self):
        t = template_format.parse(test_template_default_override)
        env_params = {'net_name': 'betternetname'}
        res = dict(self.engine.validate_template(self.ctx, t, env_params))
        self.assertEqual('defaultnet',
                         res['Parameters']['net_name']['Default'])
        self.assertEqual('betternetname',
                         res['Parameters']['net_name']['Value'])

    def test_validate_parameters_env_provided(self):
        t = template_format.parse(test_template_no_default)
        env_params = {'net_name': 'betternetname'}
        res = dict(self.engine.validate_template(self.ctx, t, env_params))
        self.assertEqual('betternetname',
                         res['Parameters']['net_name']['Value'])
        self.assertNotIn('Default', res['Parameters']['net_name'])

    def test_validate_parameters_nested(self):
        t = template_format.parse(test_template_allowed_integers)

        other_template = test_template_no_default.replace(
            'net_name', 'net_name2')
        files = {'env1': 'parameter_defaults:'
                         '\n net_name: net1',
                 'env2': 'parameter_defaults:'
                         '\n net_name: net2'
                         '\n net_name2: net3',
                 'tmpl1.yaml': test_template_no_default,
                 'tmpl2.yaml': other_template}
        params = {'parameters': {}, 'parameter_defaults': {}}
        ret = self.engine.validate_template(
            self.ctx, t, params=params, files=files,
            environment_files=['env1', 'env2'])
        self.assertEqual('net2', params['parameter_defaults']['net_name'])
        self.assertEqual('net3', params['parameter_defaults']['net_name2'])
        expected = {
            'Description': 'No description',
            'Parameters': {
                'size': {'AllowedValues': [1, 4, 8],
                         'Description': '',
                         'Label': u'size',
                         'NoEcho': 'false',
                         'Type': 'Number'}},
            'Environment': {
                'event_sinks': [],
                'parameter_defaults': {
                    'net_name': u'net2',
                    'net_name2': u'net3'},
                'parameters': {},
                'resource_registry': {'resources': {}}}}
        self.assertEqual(expected, ret)
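
    # validate_template applies environment_files in order (later files win
    # for scalar parameter_defaults, hence net_name == 'net2' above), while
    # the merged_env test below opts a parameter into list concatenation via
    # parameter_merge_strategies.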

    def test_validate_parameters_merged_env(self):
        t = template_format.parse(test_template_allowed_integers)

        other_template = test_template_no_default.replace(
            'net_name', 'net_name2')
        files = {'env1': 'parameter_defaults:'
                         '\n net_name: net1'
                         '\n merged_param: [net1, net2]'
                         '\nparameter_merge_strategies:'
                         '\n merged_param: merge',
                 'env2': 'parameter_defaults:'
                         '\n net_name: net2'
                         '\n net_name2: net3'
                         '\n merged_param: [net3, net4]'
                         '\nparameter_merge_strategies:'
                         '\n merged_param: merge',
                 'tmpl1.yaml': test_template_no_default,
                 'tmpl2.yaml': other_template}
        params = {'parameters': {}, 'parameter_defaults': {}}
        expected = {
            'Description': 'No description',
            'Parameters': {
                'size': {'AllowedValues': [1, 4, 8],
                         'Description': '',
                         'Label': u'size',
                         'NoEcho': 'false',
                         'Type': 'Number'}},
            'Environment': {
                'event_sinks': [],
                'parameter_defaults': {
                    'net_name': u'net2',
                    'net_name2': u'net3',
                    'merged_param': ['net1', 'net2', 'net3', 'net4']},
                'parameters': {},
                'resource_registry': {'resources': {}}}}
        ret = self.engine.validate_template(
            self.ctx, t, params=params, files=files,
            environment_files=['env1', 'env2'])
        self.assertEqual(expected, ret)

    def test_validate_hot_empty_parameters_valid(self):
        t = template_format.parse(
            """
            heat_template_version: 2013-05-23
            description: test.
            parameters:
            resources:
              my_instance:
                type: AWS::EC2::Instance
            """)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertEqual({}, res['Parameters'])

    def test_validate_hot_parameter_label(self):
        t = template_format.parse(test_template_hot_parameter_label)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        parameters = res['Parameters']

        expected = {'KeyName': {
            'Type': 'String',
            'Description': 'Name of an existing key pair to use for the '
                           'instance',
            'NoEcho': 'false',
            'Label': 'Nova KeyPair Name'}}
        self.assertEqual(expected, parameters)

    def test_validate_hot_no_parameter_label(self):
        t = template_format.parse(test_template_hot_no_parameter_label)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        parameters = res['Parameters']

        expected = {'KeyName': {
            'Type': 'String',
            'Description': 'Name of an existing key pair to use for the '
                           'instance',
            'NoEcho': 'false',
            'Label': 'KeyName'}}
        self.assertEqual(expected, parameters)

    def test_validate_cfn_parameter_label(self):
        t = template_format.parse(test_template_cfn_parameter_label)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        parameters = res['Parameters']

        expected = {'KeyName': {
            'Type': 'String',
            'Description': 'Name of an existing EC2KeyPair',
            'NoEcho': 'false',
            'Label': 'Nova KeyPair Name'}}
        self.assertEqual(expected, parameters)

    def test_validate_hot_parameter_type(self):
        t = template_format.parse(
            """
            heat_template_version: 2013-05-23
            parameters:
              param1:
                type: string
              param2:
                type: number
              param3:
                type: json
              param4:
                type: comma_delimited_list
              param5:
                type: boolean
            """)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        parameters = res['Parameters']
        # make sure all the types are reported correctly
        self.assertEqual('String', parameters["param1"]["Type"])
        self.assertEqual('Number', parameters["param2"]["Type"])
        self.assertEqual('Json', parameters["param3"]["Type"])
        self.assertEqual('CommaDelimitedList', parameters["param4"]["Type"])
        self.assertEqual('Boolean', parameters["param5"]["Type"])

    def test_validate_hot_empty_resources_valid(self):
        t = template_format.parse(
            """
            heat_template_version: 2013-05-23
            description: test.
            resources:
            """)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        expected = {"Description": "test.", "Parameters": {},
                    "Environment": self.empty_environment}
        self.assertEqual(expected, res)

    def test_validate_hot_empty_outputs_valid(self):
        t = template_format.parse(
            """
            heat_template_version: 2013-05-23
            description: test.
            outputs:
            """)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        expected = {"Description": "test.", "Parameters": {},
                    "Environment": self.empty_environment}
        self.assertEqual(expected, res)

    def test_validate_properties(self):
        t = template_format.parse(test_template_invalid_property)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertEqual(
            {'Error': 'Property error: Resources.WikiDatabase.Properties: '
                      'Unknown Property UnknownProperty'}, res)

    def test_invalid_resources(self):
        t = template_format.parse(test_template_invalid_resources)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertEqual({'Error': 'Resources must contain Resource. '
                                   'Found a [%s] instead' % six.text_type},
                         res)

    def test_invalid_section_cfn(self):
        t = template_format.parse(
            """
            {
                'AWSTemplateFormatVersion': '2010-09-09',
                'Resources': {
                    'server': {
                        'Type': 'OS::Nova::Server'
                    }
                },
                'Output': {}
            }
            """)
        res = dict(self.engine.validate_template(self.ctx, t))
        self.assertEqual({'Error': 'The template section is invalid: Output'},
                         res)

    def test_invalid_section_hot(self):
        t = template_format.parse(
            """
            heat_template_version: 2013-05-23
            resources:
              server:
                type: OS::Nova::Server
            output:
            """)
        res = dict(self.engine.validate_template(self.ctx, t))
        self.assertEqual({'Error': 'The template section is invalid: output'},
                         res)

    def test_unimplemented_property(self):
        t = template_format.parse(test_template_unimplemented_property)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertEqual(
            {'Error': 'Property error: Resources.WikiDatabase.Properties: '
                      'Property SourceDestCheck not implemented yet'},
            res)

    def test_invalid_deletion_policy(self):
        t = template_format.parse(test_template_invalid_deletion_policy)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertEqual({'Error': 'Invalid deletion policy "Destroy"'}, res)

    def test_snapshot_deletion_policy(self):
        t = template_format.parse(test_template_snapshot_deletion_policy)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        self.assertEqual(
            {'Error': 'Resources.WikiDatabase.DeletionPolicy: '
                      '"Snapshot" deletion policy not supported'}, res)

    def test_volume_snapshot_deletion_policy(self):
        t = template_format.parse(test_template_volume_snapshot)
        res = dict(self.engine.validate_template(self.ctx, t, {}))
        expected = {'Description': u'test.', 'Parameters': {},
                    'Environment': self.empty_environment}
        self.assertEqual(expected, res)

    def test_validate_template_without_resources(self):
        hot_tpl = template_format.parse('''
        heat_template_version: 2013-05-23
        ''')
        res = dict(self.engine.validate_template(self.ctx, hot_tpl, {}))
        expected = {'Description': 'No description', 'Parameters': {},
                    'Environment': self.empty_environment}
        self.assertEqual(expected, res)

    def test_validate_template_with_invalid_resource_type(self):
        hot_tpl = template_format.parse('''
        heat_template_version: 2013-05-23
        resources:
          resource1:
            Type: AWS::EC2::Instance
            properties:
              property1: value1
            metadata:
              foo: bar
            depends_on: dummy
            deletion_policy: dummy
            update_policy:
              foo: bar
        ''')
        res = dict(self.engine.validate_template(self.ctx, hot_tpl, {}))
        self.assertEqual({'Error': '"Type" is not a valid keyword '
                                   'inside a resource definition'},
                         res)

    def test_validate_template_with_invalid_resource_properties(self):
        hot_tpl = template_format.parse('''
        heat_template_version: 2013-05-23
        resources:
          resource1:
            type: AWS::EC2::Instance
            Properties:
              property1: value1
            metadata:
              foo: bar
            depends_on: dummy
            deletion_policy: dummy
            update_policy:
              foo: bar
        ''')
        res = dict(self.engine.validate_template(self.ctx, hot_tpl, {}))
        self.assertEqual({'Error': '"Properties" is not a valid keyword '
                                   'inside a resource definition'},
                         res)

    def test_validate_template_with_invalid_resource_metadata(self):
        hot_tpl = template_format.parse('''
        heat_template_version: 2013-05-23
        resources:
          resource1:
            type: AWS::EC2::Instance
            properties:
              property1: value1
            Metadata:
              foo: bar
            depends_on: dummy
            deletion_policy: dummy
            update_policy:
              foo: bar
        ''')
        res = dict(self.engine.validate_template(self.ctx, hot_tpl, {}))
        self.assertEqual({'Error': '"Metadata" is not a valid keyword '
                                   'inside a resource definition'},
                         res)

    def test_validate_template_with_invalid_resource_depends_on(self):
        hot_tpl = template_format.parse('''
        heat_template_version: 2013-05-23
        resources:
          resource1:
            type: AWS::EC2::Instance
            properties:
              property1: value1
            metadata:
              foo: bar
            DependsOn: dummy
            deletion_policy: dummy
            update_policy:
              foo: bar
        ''')
        res = dict(self.engine.validate_template(self.ctx, hot_tpl, {}))
        self.assertEqual({'Error': '"DependsOn" is not a valid keyword '
                                   'inside a resource definition'},
                         res)

    def test_validate_template_with_invalid_resource_deletion_policy(self):
        hot_tpl = template_format.parse('''
        heat_template_version: 2013-05-23
        resources:
          resource1:
            type: AWS::EC2::Instance
            properties:
              property1: value1
            metadata:
              foo: bar
            depends_on: dummy
            DeletionPolicy: dummy
            update_policy:
              foo: bar
        ''')
        res = dict(self.engine.validate_template(self.ctx, hot_tpl, {}))
        self.assertEqual({'Error': '"DeletionPolicy" is not a valid '
                                   'keyword inside a resource definition'},
                         res)

    def test_validate_template_with_invalid_resource_update_policy(self):
        hot_tpl = template_format.parse('''
        heat_template_version: 2013-05-23
        resources:
          resource1:
            type: AWS::EC2::Instance
            properties:
              property1: value1
            metadata:
              foo: bar
            depends_on: dummy
            deletion_policy: dummy
            UpdatePolicy:
              foo: bar
        ''')
        res = dict(self.engine.validate_template(self.ctx, hot_tpl, {}))
        self.assertEqual({'Error': '"UpdatePolicy" is not a valid '
                                   'keyword inside a resource definition'},
                         res)

    def test_unregistered_key(self):
        t = template_format.parse(test_unregistered_key)
        params = {'KeyName': 'not_registered'}
        template = tmpl.Template(t, env=environment.Environment(params))
        stack = parser.Stack(self.ctx, 'test_stack', template)

        self.stub_FlavorConstraint_validate()
        self.stub_ImageConstraint_validate()
        resource = stack['Instance']
        self.assertRaises(exception.StackValidationFailed, resource.validate)

    def test_unregistered_image(self):
        t = template_format.parse(test_template_image)
        template = tmpl.Template(t,
                                 env=environment.Environment(
                                     {'KeyName': 'test'}))
        stack = parser.Stack(self.ctx, 'test_stack', template)

        self._mock_get_image_id_fail(exception.EntityNotFound(
            entity='Image', name='image_name'))
        self.stub_KeypairConstraint_validate()
        self.stub_FlavorConstraint_validate()
        resource = stack['Instance']
        self.assertRaises(exception.StackValidationFailed, resource.validate)

    def test_duplicated_image(self):
        t = template_format.parse(test_template_image)
        template = tmpl.Template(t,
                                 env=environment.Environment(
                                     {'KeyName': 'test'}))
        stack = parser.Stack(self.ctx, 'test_stack', template)
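
        # PhysicalResourceNameAmbiguity simulates several images matching the
        # same name; like the not-found case above, it should be reported as
        # a StackValidationFailed rather than leaking the client error.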
self._mock_get_image_id_fail(exception.PhysicalResourceNameAmbiguity( name='image_name')) self.stub_KeypairConstraint_validate() self.stub_FlavorConstraint_validate() resource = stack['Instance'] self.assertRaises(exception.StackValidationFailed, resource.validate) @mock.patch('heat.engine.clients.os.nova.NovaClientPlugin._create') def test_invalid_security_groups_with_nics(self, mock_create): t = template_format.parse(test_template_invalid_secgroups) template = tmpl.Template(t, env=environment.Environment( {'KeyName': 'test'})) stack = parser.Stack(self.ctx, 'test_stack', template) self._mock_get_image_id_success('image_id') mock_create.return_value = self.fc resource = stack['Instance'] self.assertRaises(exception.ResourcePropertyConflict, resource.validate) @mock.patch('heat.engine.clients.os.nova.NovaClientPlugin._create') def test_invalid_security_group_ids_with_nics(self, mock_create): t = template_format.parse(test_template_invalid_secgroupids) template = tmpl.Template( t, env=environment.Environment({'KeyName': 'test'})) stack = parser.Stack(self.ctx, 'test_stack', template) self._mock_get_image_id_success('image_id') mock_create.return_value = self.fc resource = stack['Instance'] self.assertRaises(exception.ResourcePropertyConflict, resource.validate) @mock.patch('heat.engine.clients.os.glance.GlanceClientPlugin.client') def test_client_exception_from_glance_client(self, mock_client): t = template_format.parse(test_template_glance_client_exception) template = tmpl.Template(t) stack = parser.Stack(self.ctx, 'test_stack', template) mock_client.return_value = self.gc self.stub_FlavorConstraint_validate() self.assertRaises(exception.StackValidationFailed, stack.validate) def test_validate_unique_logical_name(self): t = template_format.parse(test_template_unique_logical_name) template = tmpl.Template( t, env=environment.Environment( {'AName': 'test', 'KeyName': 'test'})) stack = parser.Stack(self.ctx, 'test_stack', template) self.assertRaises(exception.StackValidationFailed, stack.validate) def test_validate_duplicate_parameters_in_group(self): t = template_format.parse(test_template_duplicate_parameters) template = hot_tmpl.HOTemplate20130523( t, env=environment.Environment({ 'KeyName': 'test', 'ImageId': 'sometestid', 'db_password': 'Pass123' })) stack = parser.Stack(self.ctx, 'test_stack', template) exc = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertEqual(_('Parameter Groups error: ' 'parameter_groups.Database ' 'Group: The InstanceType parameter must be ' 'assigned to one parameter group only.'), six.text_type(exc)) def test_validate_duplicate_parameters_no_label(self): t = template_format.parse(test_template_parameters_duplicate_no_label) template = hot_tmpl.HOTemplate20130523(t) stack = parser.Stack(self.ctx, 'test_stack', template) exc = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertEqual(_('Parameter Groups error: ' 'parameter_groups.: ' 'The key_name parameter must be ' 'assigned to one parameter group only.'), six.text_type(exc)) def test_validate_invalid_parameter_in_group(self): t = template_format.parse(test_template_invalid_parameter_name) template = hot_tmpl.HOTemplate20130523(t, env=environment.Environment({ 'KeyName': 'test', 'ImageId': 'sometestid', 'db_password': 'Pass123'})) stack = parser.Stack(self.ctx, 'test_stack', template) exc = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertEqual(_('Parameter Groups error: ' 'parameter_groups.Database Group: The grouped ' 
'parameter SomethingNotHere does not ' 'reference a valid parameter.'), six.text_type(exc)) def test_validate_invalid_parameter_no_label(self): t = template_format.parse(test_template_invalid_parameter_no_label) template = hot_tmpl.HOTemplate20130523(t) stack = parser.Stack(self.ctx, 'test_stack', template) exc = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertEqual(_('Parameter Groups error: ' 'parameter_groups.: The grouped ' 'parameter key_name does not ' 'reference a valid parameter.'), six.text_type(exc)) def test_validate_no_parameters_in_group(self): t = template_format.parse(test_template_no_parameters) template = hot_tmpl.HOTemplate20130523(t) stack = parser.Stack(self.ctx, 'test_stack', template) exc = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertEqual(_('Parameter Groups error: parameter_groups.Server ' 'Group: The parameters must be provided for each ' 'parameter group.'), six.text_type(exc)) def test_validate_parameter_groups_not_list(self): t = template_format.parse(test_template_parameter_groups_not_list) template = hot_tmpl.HOTemplate20130523(t) stack = parser.Stack(self.ctx, 'test_stack', template) exc = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertEqual(_('Parameter Groups error: parameter_groups: ' 'The parameter_groups should be ' 'a list.'), six.text_type(exc)) def test_validate_parameters_not_list(self): t = template_format.parse(test_template_parameters_not_list) template = hot_tmpl.HOTemplate20130523(t) stack = parser.Stack(self.ctx, 'test_stack', template) exc = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertEqual(_('Parameter Groups error: ' 'parameter_groups.Server Group: ' 'The parameters of parameter group should be ' 'a list.'), six.text_type(exc)) def test_validate_parameters_error_no_label(self): t = template_format.parse(test_template_parameters_error_no_label) template = hot_tmpl.HOTemplate20130523(t) stack = parser.Stack(self.ctx, 'test_stack', template) exc = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertEqual(_('Parameter Groups error: parameter_groups.: ' 'The parameters of parameter group should be ' 'a list.'), six.text_type(exc)) def test_validate_allowed_values_integer(self): t = template_format.parse(test_template_allowed_integers) template = tmpl.Template(t, env=environment.Environment({'size': '4'})) # test with size parameter provided as string stack = parser.Stack(self.ctx, 'test_stack', template) self.assertIsNone(stack.validate()) # test with size parameter provided as number template.env = environment.Environment({'size': 4}) stack = parser.Stack(self.ctx, 'test_stack', template) self.assertIsNone(stack.validate()) def test_validate_allowed_values_integer_str(self): t = template_format.parse(test_template_allowed_integers_str) template = tmpl.Template(t, env=environment.Environment({'size': '4'})) # test with size parameter provided as string stack = parser.Stack(self.ctx, 'test_stack', template) self.assertIsNone(stack.validate()) # test with size parameter provided as number template.env = environment.Environment({'size': 4}) stack = parser.Stack(self.ctx, 'test_stack', template) self.assertIsNone(stack.validate()) def test_validate_not_allowed_values_integer(self): t = template_format.parse(test_template_allowed_integers) template = tmpl.Template(t, env=environment.Environment({'size': '3'})) # test with size parameter provided as string stack = parser.Stack(self.ctx, 'test_stack', 
template) err = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertIn('"3" is not an allowed value [1, 4, 8]', six.text_type(err)) # test with size parameter provided as number template.env = environment.Environment({'size': 3}) stack = parser.Stack(self.ctx, 'test_stack', template) err = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertIn('"3" is not an allowed value [1, 4, 8]', six.text_type(err)) def test_validate_not_allowed_values_integer_str(self): t = template_format.parse(test_template_allowed_integers_str) template = tmpl.Template(t, env=environment.Environment({'size': '3'})) # test with size parameter provided as string stack = parser.Stack(self.ctx, 'test_stack', template) err = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertIn('"3" is not an allowed value [1, 4, 8]', six.text_type(err)) # test with size parameter provided as number template.env = environment.Environment({'size': 3}) stack = parser.Stack(self.ctx, 'test_stack', template) err = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertIn('"3" is not an allowed value [1, 4, 8]', six.text_type(err)) def test_validate_invalid_outputs(self): t = template_format.parse(test_template_invalid_outputs) template = tmpl.Template(t) stack = parser.Stack(self.ctx, 'test_stack', template) err = self.assertRaises(exception.StackValidationFailed, stack.validate) error_message = ('outputs.string.value.get_attr: Arguments to ' '"get_attr" must be of the form ' '[resource_name, attribute, (path), ...]') self.assertEqual(error_message, six.text_type(err)) def test_validate_resource_attr_invalid_type(self): t = template_format.parse(""" heat_template_version: 2013-05-23 resources: resource: type: 123 """) template = tmpl.Template(t) stack = parser.Stack(self.ctx, 'test_stack', template) ex = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertEqual('Resource resource type type must be string', six.text_type(ex)) def test_validate_resource_attr_invalid_type_cfn(self): t = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Resources: Resource: Type: [Wrong, Type] """) stack = parser.Stack(self.ctx, 'test_stack', tmpl.Template(t)) ex = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertEqual('Resource Resource Type type must be string', six.text_type(ex)) def test_validate_resource_invalid_key(self): t = template_format.parse(""" heat_template_version: 2013-05-23 resources: resource: type: OS::Heat::TestResource wibble: bar """) template = tmpl.Template(t) stack = parser.Stack(self.ctx, 'test_stack', template) ex = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertIn('wibble', six.text_type(ex)) def test_validate_resource_invalid_cfn_key_in_hot(self): t = template_format.parse(""" heat_template_version: 2013-05-23 resources: resource: type: OS::Heat::TestResource Properties: {foo: bar} """) template = tmpl.Template(t) stack = parser.Stack(self.ctx, 'test_stack', template) ex = self.assertRaises(exception.StackValidationFailed, stack.validate) self.assertIn('Properties', six.text_type(ex)) def test_validate_resource_invalid_key_cfn(self): t = template_format.parse(""" HeatTemplateFormatVersion: '2012-12-12' Resources: Resource: Type: OS::Heat::TestResource Wibble: bar """) template = tmpl.Template(t) stack = parser.Stack(self.ctx, 'test_stack', template) # We have always allowed unknown keys in CFN-style templates, so we # more or less 
have to keep allowing it.
        self.assertIsNone(stack.validate())

    def test_validate_is_service_available(self):
        t = template_format.parse(
            """
            heat_template_version: 2015-10-15
            resources:
              my_instance:
                type: AWS::EC2::Instance
            """)
        self.mock_is_service_available.return_value = (
            False, 'Service endpoint not in service catalog.')
        ex = self.assertRaises(dispatcher.ExpectedException,
                               self.engine.validate_template,
                               self.ctx, t, {})
        self.assertEqual(exception.ResourceTypeUnavailable, ex.exc_info[0])

    def test_validate_with_ignorable_errors(self):
        t = template_format.parse(
            """
            heat_template_version: 2015-10-15
            resources:
              my_instance:
                type: AWS::EC2::Instance
            """)
        engine = service.EngineService('a', 't')
        self.mock_is_service_available.return_value = (
            False, 'Service endpoint not in service catalog.')
        res = dict(engine.validate_template(
            self.ctx, t, {},
            ignorable_errors=[exception.ResourceTypeUnavailable.error_code]))
        expected = {'Description': 'No description',
                    'Parameters': {},
                    'Environment': self.empty_environment}
        self.assertEqual(expected, res)

    def test_validate_with_ignorable_errors_invalid_error_code(self):
        engine = service.EngineService('a', 't')
        invalid_error_code = '123456'
        invalid_codes = ['99001', invalid_error_code]
        res = engine.validate_template(
            self.ctx, mock.MagicMock(), {},
            ignorable_errors=invalid_codes)
        msg = _("Invalid codes in ignore_errors : %s") % [invalid_error_code]
        ex = webob.exc.HTTPBadRequest(explanation=msg)
        self.assertIsInstance(res, webob.exc.HTTPBadRequest)
        self.assertEqual(ex.explanation, res.explanation)

    def test_validate_parameter_group_output(self):
        engine = service.EngineService('a', 't')
        params = {
            "resource_registry": {
                "OS::Test::TestResource":
                    "https://server.test/nested.template"
            }
        }
        root_template_str = '''
heat_template_version: 2015-10-15
parameters:
  test_root_param:
    type: string
parameter_groups:
- label: RootTest
  parameters:
  - test_root_param
resources:
  Nested:
    type: OS::Test::TestResource
'''
        nested_template_str = '''
heat_template_version: 2015-10-15
parameters:
  test_param:
    type: string
parameter_groups:
- label: Test
  parameters:
  - test_param
'''
        root_template = template_format.parse(root_template_str)
        self.patchobject(urlfetch, 'get')
        urlfetch.get.return_value = nested_template_str
        res = dict(engine.validate_template(self.ctx, root_template, params,
                                            show_nested=True))
        expected = {
            'Description': 'No description',
            'ParameterGroups': [{
                'label': 'RootTest',
                'parameters': ['test_root_param']}],
            'Parameters': {
                'test_root_param': {
                    'Description': '',
                    'Label': 'test_root_param',
                    'NoEcho': 'false',
                    'Type': 'String'}},
            'NestedParameters': {
                'Nested': {
                    'Description': 'No description',
                    'ParameterGroups': [{
                        'label': 'Test',
                        'parameters': ['test_param']}],
                    'Parameters': {
                        'test_param': {
                            'Description': '',
                            'Label': 'test_param',
                            'NoEcho': 'false',
                            'Type': 'String'}},
                    'Type': 'OS::Test::TestResource'}},
            'Environment': {
                'event_sinks': [],
                'parameter_defaults': {},
                'parameters': {},
                'resource_registry': {
                    'OS::Test::TestResource':
                        'https://server.test/nested.template',
                    'resources': {}}}}
        self.assertEqual(expected, res)

    def test_validate_allowed_external_rsrc(self):
        t = template_format.parse(test_template_external_rsrc)
        template = tmpl.Template(t)
        stack = parser.Stack(self.ctx, 'test_stack', template)
        self.assertIsNone(stack.validate(validate_res_tmpl_only=True))
        with mock.patch(
            'heat.engine.resources.server_base.BaseServer._show_resource',
            return_value={'id': 'foobar'}
        ):
            self.assertIsNone(stack.validate(validate_res_tmpl_only=False))

    def
test_validate_circular_reference(self): t = template_format.parse(test_template_circular_reference) exc = self.assertRaises(dispatcher.ExpectedException, self.engine.validate_template, self.ctx, t, {}) self.assertEqual(dependencies.CircularDependencyException, exc.exc_info[0]) def test_validate_hot_parameter_tags_older(self): t = template_format.parse(test_template_hot_parameter_tags_older) exc = self.assertRaises(dispatcher.ExpectedException, self.engine.validate_template, self.ctx, t, {}) self.assertEqual(exception.InvalidSchemaError, exc.exc_info[0]) def test_validate_hot_parameter_tags_pass(self): t = template_format.parse(test_template_hot_parameter_tags_pass) res = dict(self.engine.validate_template(self.ctx, t, {})) parameters = res['Parameters'] expected = {'KeyName': { 'Description': 'Name of an existing key pair to use for the ' 'instance', 'NoEcho': 'false', 'Label': 'Nova KeyPair Name', 'Type': 'String', 'Tags': [ 'feature1', 'feature2' ]}} self.assertEqual(expected, parameters) def test_validate_hot_parameter_tags_fail(self): t = template_format.parse(test_template_hot_parameter_tags_fail) exc = self.assertRaises(dispatcher.ExpectedException, self.engine.validate_template, self.ctx, t, {}) self.assertEqual(exception.InvalidSchemaError, exc.exc_info[0]) def test_validate_empty_resource_group(self): engine = service.EngineService('a', 't') params = { "resource_registry": { "OS::Test::TestResource": "https://server.test/nested.template" } } root_template_str = ''' heat_template_version: 2015-10-15 parameters: test_root_param: type: string resources: Group: type: OS::Heat::ResourceGroup properties: count: 0 resource_def: type: OS::Test::TestResource ''' nested_template_str = ''' heat_template_version: 2015-10-15 parameters: test_param: type: string ''' root_template = template_format.parse(root_template_str) self.patchobject(urlfetch, 'get') urlfetch.get.return_value = nested_template_str res = dict(engine.validate_template(self.ctx, root_template, params, show_nested=True)) expected = { 'Description': 'No description', 'Environment': { 'event_sinks': [], 'parameter_defaults': {}, 'parameters': {}, 'resource_registry': { 'OS::Test::TestResource': 'https://server.test/nested.template', 'resources': {}}}, 'NestedParameters': { 'Group': { 'Description': 'No description', 'Parameters': {}, 'Type': 'OS::Heat::ResourceGroup', 'NestedParameters': { '0': { 'Description': 'No description', 'Parameters': { 'test_param': { 'Description': '', 'Label': 'test_param', 'NoEcho': 'false', 'Type': 'String'}}, 'Type': 'OS::Test::TestResource'}}}}, 'Parameters': { 'test_root_param': { 'Description': '', 'Label': 'test_root_param', 'NoEcho': 'false', 'Type': 'String'}}} self.assertEqual(expected, res) heat-10.0.2/heat/tests/test_urlfetch.py0000666000175000017500000001046113343562352020057 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_config import cfg import requests from requests import exceptions import six from heat.common import urlfetch from heat.tests import common class Response(object): def __init__(self, buf=''): self.buf = buf def iter_content(self, chunk_size=1): while self.buf: yield self.buf[:chunk_size] self.buf = self.buf[chunk_size:] def raise_for_status(self): pass class UrlFetchTest(common.HeatTestCase): def setUp(self): super(UrlFetchTest, self).setUp() self.m.StubOutWithMock(requests, 'get') def test_file_scheme_default_behaviour(self): self.m.ReplayAll() self.assertRaises(urlfetch.URLFetchError, urlfetch.get, 'file:///etc/profile') self.m.VerifyAll() def test_file_scheme_supported(self): data = '{ "foo": "bar" }' url = 'file:///etc/profile' self.m.StubOutWithMock(six.moves.urllib.request, 'urlopen') six.moves.urllib.request.urlopen(url).AndReturn( six.moves.cStringIO(data)) self.m.ReplayAll() self.assertEqual(data, urlfetch.get(url, allowed_schemes=['file'])) self.m.VerifyAll() def test_file_scheme_failure(self): url = 'file:///etc/profile' self.m.StubOutWithMock(six.moves.urllib.request, 'urlopen') six.moves.urllib.request.urlopen(url).AndRaise( six.moves.urllib.error.URLError('oops')) self.m.ReplayAll() self.assertRaises(urlfetch.URLFetchError, urlfetch.get, url, allowed_schemes=['file']) self.m.VerifyAll() def test_http_scheme(self): url = 'http://example.com/template' data = b'{ "foo": "bar" }' response = Response(data) requests.get(url, stream=True).AndReturn(response) self.m.ReplayAll() self.assertEqual(data, urlfetch.get(url)) self.m.VerifyAll() def test_https_scheme(self): url = 'https://example.com/template' data = b'{ "foo": "bar" }' response = Response(data) requests.get(url, stream=True).AndReturn(response) self.m.ReplayAll() self.assertEqual(data, urlfetch.get(url)) self.m.VerifyAll() def test_http_error(self): url = 'http://example.com/template' requests.get(url, stream=True).AndRaise(exceptions.HTTPError()) self.m.ReplayAll() self.assertRaises(urlfetch.URLFetchError, urlfetch.get, url) self.m.VerifyAll() def test_non_exist_url(self): url = 'http://non-exist.com/template' requests.get(url, stream=True).AndRaise(exceptions.Timeout()) self.m.ReplayAll() self.assertRaises(urlfetch.URLFetchError, urlfetch.get, url) self.m.VerifyAll() def test_garbage(self): self.m.ReplayAll() self.assertRaises(urlfetch.URLFetchError, urlfetch.get, 'wibble') self.m.VerifyAll() def test_max_fetch_size_okay(self): url = 'http://example.com/template' data = b'{ "foo": "bar" }' response = Response(data) cfg.CONF.set_override('max_template_size', 500) requests.get(url, stream=True).AndReturn(response) self.m.ReplayAll() urlfetch.get(url) self.m.VerifyAll() def test_max_fetch_size_error(self): url = 'http://example.com/template' data = b'{ "foo": "bar" }' response = Response(data) cfg.CONF.set_override('max_template_size', 5) requests.get(url, stream=True).AndReturn(response) self.m.ReplayAll() exception = self.assertRaises(urlfetch.URLFetchError, urlfetch.get, url) self.assertIn("Template exceeds", six.text_type(exception)) self.m.VerifyAll() heat-10.0.2/heat/tests/test_resource.py0000666000175000017500000061421713343562352020103 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import datetime import eventlet import itertools import json import os import sys import uuid import mock from oslo_config import cfg import six from heat.common import exception from heat.common.i18n import _ from heat.common import short_id from heat.common import timeutils from heat.db.sqlalchemy import api as db_api from heat.db.sqlalchemy import models from heat.engine import attributes from heat.engine.cfn import functions as cfn_funcs from heat.engine import clients from heat.engine import constraints from heat.engine import dependencies from heat.engine import environment from heat.engine import node_data from heat.engine import plugin_manager from heat.engine import properties from heat.engine import resource from heat.engine import resources from heat.engine.resources.openstack.heat import none_resource from heat.engine.resources.openstack.heat import test_resource from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import support from heat.engine import template from heat.engine import translation from heat.objects import resource as resource_objects from heat.objects import resource_data as resource_data_object from heat.objects import resource_properties_data as rpd_object from heat.tests import common from heat.tests.engine import tools from heat.tests import generic_resource as generic_rsrc from heat.tests import utils import neutronclient.common.exceptions as neutron_exp empty_template = {"HeatTemplateFormatVersion": "2012-12-12"} class ResourceTest(common.HeatTestCase): def setUp(self): super(ResourceTest, self).setUp() self.env = environment.Environment() self.env.load({u'resource_registry': {u'OS::Test::GenericResource': u'GenericResourceType', u'OS::Test::ResourceWithCustomConstraint': u'ResourceWithCustomConstraint'}}) self.stack = parser.Stack(utils.dummy_context(), 'test_stack', template.Template(empty_template, env=self.env), stack_id=str(uuid.uuid4())) self.dummy_timeout = 10 self.dummy_event = eventlet.event.Event() def test_get_class_ok(self): cls = resources.global_env().get_class_to_instantiate( 'GenericResourceType') self.assertEqual(generic_rsrc.GenericResource, cls) def test_get_class_noexist(self): self.assertRaises(exception.StackValidationFailed, resources.global_env().get_class_to_instantiate, 'NoExistResourceType') def test_resource_new_ok(self): snippet = rsrc_defn.ResourceDefinition('aresource', 'GenericResourceType') res = resource.Resource('aresource', snippet, self.stack) self.assertIsInstance(res, generic_rsrc.GenericResource) self.assertEqual("INIT", res.action) def test_resource_load_with_state(self): self.stack = parser.Stack(utils.dummy_context(), 'test_stack', template.Template(empty_template)) self.stack.store() snippet = rsrc_defn.ResourceDefinition('aresource', 'GenericResourceType') # Store Resource res = resource.Resource('aresource', snippet, self.stack) res.current_template_id = self.stack.t.id res.state_set('CREATE', 'IN_PROGRESS') self.stack.add_resource(res) loaded_res, res_owning_stack, stack = resource.Resource.load( self.stack.context, 
res.id, self.stack.current_traversal, True, {}) self.assertEqual(loaded_res.id, res.id) self.assertEqual(self.stack.t, stack.t) def test_resource_load_with_state_cleanup(self): self.old_stack = parser.Stack( utils.dummy_context(), 'test_old_stack', template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'test_res': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}})) self.old_stack.store() self.new_stack = parser.Stack(utils.dummy_context(), 'test_new_stack', template.Template(empty_template)) self.new_stack.store() snippet = rsrc_defn.ResourceDefinition('aresource', 'GenericResourceType') # Store Resource res = resource.Resource('aresource', snippet, self.old_stack) res.current_template_id = self.old_stack.t.id res.state_set('CREATE', 'IN_PROGRESS') self.old_stack.add_resource(res) loaded_res, res_owning_stack, stack = resource.Resource.load( self.old_stack.context, res.id, self.stack.current_traversal, False, {}) self.assertEqual(loaded_res.id, res.id) self.assertEqual(self.old_stack.t, stack.t) self.assertNotEqual(self.new_stack.t, stack.t) def test_resource_load_with_no_resources(self): self.stack = parser.Stack( utils.dummy_context(), 'test_old_stack', template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'test_res': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}})) self.stack.store() snippet = rsrc_defn.ResourceDefinition('aresource', 'GenericResourceType') # Store Resource res = resource.Resource('aresource', snippet, self.stack) res.current_template_id = self.stack.t.id res.state_set('CREATE', 'IN_PROGRESS') self.stack.add_resource(res) origin_resources = self.stack.resources self.stack._resources = None loaded_res, res_owning_stack, stack = resource.Resource.load( self.stack.context, res.id, self.stack.current_traversal, False, {}) self.assertEqual(origin_resources, stack.resources) self.assertEqual(loaded_res.id, res.id) self.assertEqual(self.stack.t, stack.t) def test_resource_invalid_name(self): snippet = rsrc_defn.ResourceDefinition('wrong/name', 'GenericResourceType') ex = self.assertRaises(exception.StackValidationFailed, resource.Resource, 'wrong/name', snippet, self.stack) self.assertEqual('Resource name may not contain "/"', six.text_type(ex)) @mock.patch.object(translation, 'resolve_and_find') @mock.patch.object(parser.Stack, 'db_resource_get') @mock.patch.object(resource.Resource, '_load_data') @mock.patch.object(resource.Resource, 'translate_properties') def test_stack_resources(self, mock_translate, mock_load, mock_db_get, mock_resolve): tpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'A': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}} stack = parser.Stack(utils.dummy_context(), 'test_stack', template.Template(tpl)) stack.store() mock_db_get.return_value = None self.assertEqual(1, len(stack.resources)) self.assertEqual(1, mock_translate.call_count) self.assertEqual(0, mock_load.call_count) # set stack._resources = None to reload the resources stack._resources = None mock_db_get.return_value = mock.Mock() self.assertEqual(1, len(stack.resources)) self.assertEqual(2, mock_translate.call_count) self.assertEqual(1, mock_load.call_count) self.assertEqual(0, mock_resolve.call_count) def test_resource_new_stack_not_stored(self): snippet = rsrc_defn.ResourceDefinition('aresource', 'GenericResourceType') self.stack.id = None db_method = 'get_by_name_and_stack' with mock.patch.object(resource_objects.Resource, db_method) as resource_get: res = 
resource.Resource('aresource', snippet, self.stack) self.assertEqual("INIT", res.action) self.assertIs(False, resource_get.called) def test_resource_new_err(self): snippet = rsrc_defn.ResourceDefinition('aresource', 'NoExistResourceType') self.assertRaises(exception.StackValidationFailed, resource.Resource, 'aresource', snippet, self.stack) def test_resource_non_type(self): resource_name = 'aresource' snippet = rsrc_defn.ResourceDefinition(resource_name, '') ex = self.assertRaises(exception.StackValidationFailed, resource.Resource, resource_name, snippet, self.stack) self.assertIn(_('Resource "%s" has no type') % resource_name, six.text_type(ex)) def test_state_defaults(self): tmpl = rsrc_defn.ResourceDefinition('test_res_def', 'Foo') res = generic_rsrc.GenericResource('test_res_def', tmpl, self.stack) self.assertEqual((res.INIT, res.COMPLETE), res.state) self.assertEqual('', res.status_reason) def test_signal_wrong_action_state(self): snippet = rsrc_defn.ResourceDefinition('res', 'GenericResourceType') res = resource.Resource('res', snippet, self.stack) actions = [res.SUSPEND, res.DELETE] for action in actions: for status in res.STATUSES: res.state_set(action, status) ev = self.patchobject(res, '_add_event') ex = self.assertRaises(exception.NotSupported, res.signal) self.assertEqual('Signal resource during %s is not ' 'supported.' % action, six.text_type(ex)) ev.assert_called_with( action, status, 'Cannot signal resource during %s' % action) def test_resource_str_repr_stack_id_resource_id(self): tmpl = rsrc_defn.ResourceDefinition('test_res_str_repr', 'Foo') res = generic_rsrc.GenericResource('test_res_str_repr', tmpl, self.stack) res.stack.id = "123" res.resource_id = "456" expected = ('GenericResource "test_res_str_repr" [456] Stack ' '"test_stack" [123]') observed = str(res) self.assertEqual(expected, observed) def test_resource_str_repr_stack_id_no_resource_id(self): tmpl = rsrc_defn.ResourceDefinition('test_res_str_repr', 'Foo') res = generic_rsrc.GenericResource('test_res_str_repr', tmpl, self.stack) res.stack.id = "123" res.resource_id = None expected = ('GenericResource "test_res_str_repr" Stack "test_stack" ' '[123]') observed = str(res) self.assertEqual(expected, observed) def test_resource_str_repr_no_stack_id(self): tmpl = rsrc_defn.ResourceDefinition('test_res_str_repr', 'Foo') res = generic_rsrc.GenericResource('test_res_str_repr', tmpl, self.stack) res.stack.id = None expected = ('GenericResource "test_res_str_repr"') observed = str(res) self.assertEqual(expected, observed) def test_state_set(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) res.state_set(res.CREATE, res.IN_PROGRESS, 'test_state_set') self.assertIsNotNone(res.id) self.assertEqual(res.CREATE, res.action) self.assertEqual(res.IN_PROGRESS, res.status) self.assertEqual('test_state_set', res.status_reason) db_res = resource_objects.Resource.get_obj(res.context, res.id) self.assertEqual(res.CREATE, db_res.action) self.assertEqual(res.IN_PROGRESS, db_res.status) self.assertEqual('test_state_set', db_res.status_reason) res.state_set(res.CREATE, res.COMPLETE, 'test_update') self.assertEqual(res.CREATE, res.action) self.assertEqual(res.COMPLETE, res.status) self.assertEqual('test_update', res.status_reason) db_res.refresh() self.assertEqual(res.CREATE, db_res.action) self.assertEqual(res.COMPLETE, db_res.status) self.assertEqual('test_update', db_res.status_reason) def test_physical_resource_name_or_FnGetRefId(self): tmpl = 
rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) scheduler.TaskRunner(res.create)() self.assertEqual((res.CREATE, res.COMPLETE), res.state) # use physical_resource_name when res.id is not None self.assertIsNotNone(res.id) expected = '%s-%s-%s' % (self.stack.name, res.name, short_id.get_id(res.uuid)) self.assertEqual(expected, res.physical_resource_name_or_FnGetRefId()) # otherwise use parent method res.id = None self.assertIsNone(res.resource_id) self.assertEqual('test_resource', res.physical_resource_name_or_FnGetRefId()) def test_prepare_abandon(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) expected = { 'action': 'INIT', 'metadata': {}, 'name': 'test_resource', 'resource_data': {}, 'resource_id': None, 'status': 'COMPLETE', 'type': 'Foo' } actual = res.prepare_abandon() self.assertEqual(expected, actual) def test_abandon_with_resource_data(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) res._data = {"test-key": "test-value"} expected = { 'action': 'INIT', 'metadata': {}, 'name': 'test_resource', 'resource_data': {"test-key": "test-value"}, 'resource_id': None, 'status': 'COMPLETE', 'type': 'Foo' } actual = res.prepare_abandon() self.assertEqual(expected, actual) def test_create_from_external(self): tmpl = rsrc_defn.ResourceDefinition( 'test_resource', 'GenericResourceType', external_id='f00d') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) scheduler.TaskRunner(res.create)() self.assertEqual((res.CHECK, res.COMPLETE), res.state) self.assertEqual('f00d', res.resource_id) def test_create_from_external_not_found(self): external_id = 'f00d' tmpl = rsrc_defn.ResourceDefinition( 'test_resource', 'GenericResourceType', external_id=external_id) res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) res.client_plugin = mock.Mock() self.patchobject(res.client_plugin, 'is_not_found', return_value=True) self.patchobject(res, '_show_resource', side_effect=Exception()) e = self.assertRaises(exception.StackValidationFailed, res.validate_external) message = (("Invalid external resource: Resource %(external_id)s " "(%(type)s) can not be found.") % {'external_id': external_id, 'type': res.type()}) self.assertEqual(message, six.text_type(e)) def test_updated_from_external(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'GenericResourceType') utmpl = rsrc_defn.ResourceDefinition( 'test_resource', 'GenericResourceType', external_id='f00d') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) expected_err_msg = ('NotSupported: resources.test_resource: Update ' 'to property external_id of test_resource ' '(GenericResourceType) is not supported.') err = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.update, utmpl) ) self.assertEqual(expected_err_msg, six.text_type(err)) def test_state_set_invalid(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) self.assertRaises(ValueError, res.state_set, 'foo', 'bla') self.assertRaises(ValueError, res.state_set, 'foo', res.COMPLETE) self.assertRaises(ValueError, res.state_set, res.CREATE, 'bla') def test_state_del_stack(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') self.stack.action = self.stack.DELETE self.stack.status = 
self.stack.IN_PROGRESS res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) self.assertEqual(res.DELETE, res.action) self.assertEqual(res.COMPLETE, res.status) def test_type(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) self.assertEqual('Foo', res.type()) def test_has_interface_direct_match(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'GenericResourceType') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) self.assertTrue(res.has_interface('GenericResourceType')) def test_has_interface_no_match(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'GenericResourceType') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) self.assertFalse(res.has_interface('LookingForAnotherType')) def test_has_interface_mapping(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'OS::Test::GenericResource') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) self.assertTrue(res.has_interface('GenericResourceType')) def test_has_interface_mapping_no_match(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'OS::Test::GenoricResort') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) self.assertFalse(res.has_interface('GenericResourceType')) def test_created_time(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.GenericResource('test_res_new', tmpl, self.stack) self.assertIsNone(res.created_time) res.store() self.assertIsNotNone(res.created_time) def test_updated_time(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'GenericResourceType') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) res.store() stored_time = res.updated_time utmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res.state_set(res.CREATE, res.COMPLETE) scheduler.TaskRunner(res.update, utmpl)() self.assertIsNotNone(res.updated_time) self.assertNotEqual(res.updated_time, stored_time) def _setup_resource_for_update(self, res_name): class TestResource(resource.Resource): properties_schema = {'a_string': {'Type': 'String'}} update_allowed_properties = ('a_string',) resource._register_class('TestResource', TestResource) tmpl = rsrc_defn.ResourceDefinition(res_name, 'TestResource') res = TestResource('test_resource', tmpl, self.stack) utmpl = rsrc_defn.ResourceDefinition(res_name, 'TestResource', {'a_string': 'foo'}) return res, utmpl def test_update_replace(self): res, utmpl = self._setup_resource_for_update( res_name='test_update_replace') self.assertEqual((res.INIT, res.COMPLETE), res.state) res.prepare_for_replace = mock.Mock() # resource replaced if in INIT_COMPLETE self.assertRaises( resource.UpdateReplace, scheduler.TaskRunner(res.update, utmpl)) self.assertTrue(res.prepare_for_replace.called) def test_update_replace_in_check_failed(self): res, utmpl = self._setup_resource_for_update( res_name='test_update_replace') res.state_set(res.CHECK, res.FAILED) res.prepare_for_replace = mock.Mock() res.needs_replace_failed = mock.MagicMock(return_value=False) self.assertRaises( resource.UpdateReplace, scheduler.TaskRunner(res.update, utmpl)) self.assertFalse(res.needs_replace_failed.called) self.assertTrue(res.prepare_for_replace.called) def test_no_update_or_replace_in_failed(self): res, utmpl = self._setup_resource_for_update( res_name='test_failed_res_no_update_or_replace') res.state_set(res.CREATE, res.FAILED) res.prepare_for_replace = mock.Mock() 
res.needs_replace_failed = mock.MagicMock(return_value=False) scheduler.TaskRunner(res.update, res.t)() self.assertTrue(res.needs_replace_failed.called) self.assertFalse(res.prepare_for_replace.called) self.assertEqual((res.CREATE, res.COMPLETE), res.state) status_reason = _('Update status to COMPLETE for ' 'FAILED resource neither update ' 'nor replace.') self.assertEqual(status_reason, res.status_reason) def test_update_replace_prepare_replace_error(self): # test if any error happened when prepare_for_replace, # whether the resource will go to FAILED res, utmpl = self._setup_resource_for_update( res_name='test_update_replace_prepare_replace_error') res.prepare_for_replace = mock.Mock(side_effect=Exception) self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(res.update, utmpl)) self.assertTrue(res.prepare_for_replace.called) self.assertEqual((res.UPDATE, res.FAILED), res.state) def test_update_rsrc_in_progress_raises_exception(self): res, utmpl = self._setup_resource_for_update( res_name='test_update_rsrc_in_progress_raises_exception') cfg.CONF.set_override('convergence_engine', False) res.action = res.UPDATE res.status = res.IN_PROGRESS self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(res.update, utmpl)) def test_update_replace_rollback(self): cfg.CONF.set_override('convergence_engine', False) res, utmpl = self._setup_resource_for_update( res_name='test_update_replace_rollback') res.restore_prev_rsrc = mock.Mock() self.stack.state_set('ROLLBACK', 'IN_PROGRESS', 'Simulate rollback') self.assertRaises( resource.UpdateReplace, scheduler.TaskRunner(res.update, utmpl)) self.assertTrue(res.restore_prev_rsrc.called) def test_update_replace_rollback_restore_prev_rsrc_error(self): cfg.CONF.set_override('convergence_engine', False) res, utmpl = self._setup_resource_for_update( res_name='restore_prev_rsrc_error') res.restore_prev_rsrc = mock.Mock(side_effect=Exception) self.stack.state_set('ROLLBACK', 'IN_PROGRESS', 'Simulate rollback') self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(res.update, utmpl)) self.assertTrue(res.restore_prev_rsrc.called) self.assertEqual((res.UPDATE, res.FAILED), res.state) def test_update_replace_in_failed_without_nested(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'GenericResourceType', {'Foo': 'abc'}) res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) res.update_allowed_properties = ('Foo',) self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_create') generic_rsrc.ResourceWithProps.handle_create().AndRaise( exception.ResourceFailure) self.m.ReplayAll() self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.create)) self.assertEqual((res.CREATE, res.FAILED), res.state) utmpl = rsrc_defn.ResourceDefinition('test_resource', 'GenericResourceType', {'Foo': 'xyz'}) # resource in failed status and hasn't nested will enter # UpdateReplace flow self.assertRaises( resource.UpdateReplace, scheduler.TaskRunner(res.update, utmpl)) self.m.VerifyAll() def test_updated_time_changes_only_when_it_changed(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'GenericResourceType') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) res.store() self.assertIsNone(res.updated_time) res.updated_time = datetime.datetime.utcnow() res.store() self.assertIsNotNone(res.updated_time) def test_resource_object_get_obj_fields(self): snippet = rsrc_defn.ResourceDefinition('aresource', 'GenericResourceType') res = resource.Resource('aresource', snippet, self.stack) 
res.store() res_obj = resource_objects.Resource.get_obj( res.context, res.id, refresh=False, fields=('status', )) self.assertEqual(res_obj.status, res.COMPLETE) self.assertRaises(AttributeError, getattr, res_obj, 'action') def test_attributes_store(self): res_def = rsrc_defn.ResourceDefinition('test_resource', 'ResWithStringPropAndAttr') res = generic_rsrc.ResWithStringPropAndAttr( 'test_res_attr_store', res_def, self.stack) res.action = res.CREATE res.status = res.COMPLETE res.store() res.store_attributes() # attr was not resolved, cache was not warmed, nothing to store self.assertIsNone(res._attr_data_id) with mock.patch.object(res, '_resolve_attribute') as res_attr: attr_val = '0123 four' res_attr.return_value = attr_val res.attributes['string'] # attr cache is warmed, now store_attributes persists something res.store_attributes() self.assertIsNotNone(res._attr_data_id) # verify the attribute rpd obj that was stored matches self.assertEqual({'string': attr_val}, rpd_object.ResourcePropertiesData.get_by_id( res.context, res._attr_data_id).data) def test_attributes_load_stored(self): res_def = rsrc_defn.ResourceDefinition('test_resource', 'ResWithStringPropAndAttr') res = generic_rsrc.ResWithStringPropAndAttr( 'test_res_attr_store', res_def, self.stack) res.action = res.UPDATE res.status = res.COMPLETE res.store() attr_data = {'string': 'word'} resource_objects.Resource.store_attributes( res.context, res.id, res._atomic_key, attr_data, None) res._load_data(resource_objects.Resource.get_obj( res.context, res.id)) with mock.patch.object(res, '_resolve_attribute') as res_attr: self.assertEqual(attr_data, res.attributes._resolved_values) self.assertEqual('word', res.attributes['string']) self.assertEqual(0, res_attr.call_count) def test_store_attributes_fail(self): res_def = rsrc_defn.ResourceDefinition('test_resource', 'ResWithStringPropAndAttr') res = generic_rsrc.ResWithStringPropAndAttr( 'test_res_attr_store', res_def, self.stack) res.action = res.UPDATE res.status = res.COMPLETE res.store() attr_data = {'string': 'word'} # set the attr_data_id first resource_objects.Resource.update_by_id(res.context, res.id, {'attr_data_id': 99}) new_attr_data_id = resource_objects.Resource.store_attributes( res.context, res.id, res._atomic_key, attr_data, None) # fail to store new attr data self.assertIsNone(new_attr_data_id) res._load_data(resource_objects.Resource.get_obj( res.context, res.id)) self.assertEqual({}, res.attributes._resolved_values) def test_resource_object_resource_properties_data(self): cfg.CONF.set_override('encrypt_parameters_and_properties', True) data = {'p1': 'i see', 'p2': 'good times, good times'} rpd_obj = rpd_object.ResourcePropertiesData().create_or_update( self.stack.context, data) rpd_db_obj = self.stack.context.session.query( models.ResourcePropertiesData).get(rpd_obj.id) res_obj1 = resource_objects.Resource().create( self.stack.context, {'stack_id': self.stack.id, 'uuid': str(uuid.uuid4()), 'rsrc_prop_data': rpd_db_obj}) res_obj2 = resource_objects.Resource().create( self.stack.context, {'stack_id': self.stack.id, 'uuid': str(uuid.uuid4()), 'rsrc_prop_data_id': rpd_db_obj.id}) ctx2 = utils.dummy_context() res_obj1 = resource_objects.Resource().get_obj( ctx2, res_obj1.id) res_obj2 = resource_objects.Resource().get_obj( ctx2, res_obj2.id) # verify the resource_properties_data association # can be set by id self.assertEqual(rpd_db_obj.id, res_obj1.rsrc_prop_data_id) self.assertEqual(res_obj1.rsrc_prop_data_id, res_obj2.rsrc_prop_data_id) # properties data appears 
unencrypted to resource object self.assertEqual(data, res_obj1.properties_data) self.assertEqual(data, res_obj2.properties_data) def test_make_replacement(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.GenericResource('test_res_upd', tmpl, self.stack) res.store() new_tmpl_id = 2 self.assertIsNotNone(res.id) new_id = res.make_replacement(new_tmpl_id) new_res = resource_objects.Resource.get_obj(res.context, new_id) self.assertEqual(new_id, res.replaced_by) self.assertEqual(res.id, new_res.replaces) self.assertIsNone(new_res.physical_resource_id) self.assertEqual(new_tmpl_id, new_res.current_template_id) def test_metadata_default(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) self.assertEqual({}, res.metadata_get()) def test_metadata_set_when_deleted(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) res.store() md = {"don't": "care"} res.action = 'CREATE' res.metadata_set(md) self.assertFalse(hasattr(res, '_db_res_is_deleted')) res_obj = self.stack.context.session.query( models.Resource).get(res.id) res_obj.update({'action': 'DELETE'}) self.assertRaises(exception.ResourceNotAvailable, res.metadata_set, md) self.assertTrue(res._db_res_is_deleted) def test_equals_different_stacks(self): tmpl1 = rsrc_defn.ResourceDefinition('test_resource', 'Foo') tmpl2 = rsrc_defn.ResourceDefinition('test_resource', 'Foo') tmpl3 = rsrc_defn.ResourceDefinition('test_resource2', 'Bar') stack2 = parser.Stack(utils.dummy_context(), 'test_stack', template.Template(empty_template), stack_id=-1) res1 = generic_rsrc.GenericResource('test_resource', tmpl1, self.stack) res2 = generic_rsrc.GenericResource('test_resource', tmpl2, stack2) res3 = generic_rsrc.GenericResource('test_resource2', tmpl3, stack2) self.assertEqual(res1, res2) self.assertNotEqual(res1, res3) def test_equals_names(self): tmpl1 = rsrc_defn.ResourceDefinition('test_resource1', 'Foo') tmpl2 = rsrc_defn.ResourceDefinition('test_resource2', 'Foo') res1 = generic_rsrc.GenericResource('test_resource1', tmpl1, self.stack) res2 = generic_rsrc.GenericResource('test_resource2', tmpl2, self.stack) self.assertNotEqual(res1, res2) def test_update_template_diff_changed_modified(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo', metadata={'foo': 123}) update_snippet = rsrc_defn.ResourceDefinition('test_resource', 'Foo', metadata={'foo': 456}) res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) diff = res.update_template_diff(update_snippet, tmpl) self.assertFalse(diff.properties_changed()) self.assertTrue(diff.metadata_changed()) self.assertFalse(diff.update_policy_changed()) def test_update_template_diff_changed_add(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') update_snippet = rsrc_defn.ResourceDefinition('test_resource', 'Foo', metadata={'foo': 123}) res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) diff = res.update_template_diff(update_snippet, tmpl) self.assertFalse(diff.properties_changed()) self.assertTrue(diff.metadata_changed()) self.assertFalse(diff.update_policy_changed()) def test_update_template_diff_changed_remove(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo', metadata={'foo': 123}) update_snippet = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack) diff = 
res.update_template_diff(update_snippet, tmpl) self.assertFalse(diff.properties_changed()) self.assertTrue(diff.metadata_changed()) self.assertFalse(diff.update_policy_changed()) def test_update_template_diff_properties_none(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) before_props = tmpl.properties(res.properties_schema, self.stack.context) after_props = tmpl.properties(res.properties_schema, self.stack.context) diff = res.update_template_diff_properties(after_props, before_props) self.assertEqual({}, diff) def test_update_template_diff_properties_added(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') update_defn = rsrc_defn.ResourceDefinition('test_resource', 'Foo', properties={'Foo': 123}) res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) res.update_allowed_properties = ('Foo',) before_props = tmpl.properties(res.properties_schema, self.stack.context) after_props = update_defn.properties(res.properties_schema, self.stack.context) diff = res.update_template_diff_properties(after_props, before_props) self.assertEqual({'Foo': '123'}, diff) def test_update_template_diff_properties_removed_no_default_value(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo', {'Foo': '123'}) new_t = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) res.update_allowed_properties = ('Foo',) before_props = tmpl.properties(res.properties_schema, self.stack.context) after_props = new_t.properties(res.properties_schema, self.stack.context) diff = res.update_template_diff_properties(after_props, before_props) self.assertEqual({'Foo': None}, diff) def test_update_template_diff_properties_removed_with_default_value(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo', {'Foo': '123'}) schema = {'Foo': {'Type': 'String', 'Default': '567'}} self.patchobject(generic_rsrc.ResourceWithProps, 'properties_schema', new=schema) new_t = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) res.update_allowed_properties = ('Foo',) before_props = tmpl.properties(res.properties_schema, self.stack.context) after_props = new_t.properties(res.properties_schema, self.stack.context) diff = res.update_template_diff_properties(after_props, before_props) self.assertEqual({'Foo': '567'}, diff) def test_update_template_diff_properties_changed(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo', {'Foo': '123'}) new_t = rsrc_defn.ResourceDefinition('test_resource', 'Foo', {'Foo': '456'}) res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) res.update_allowed_properties = ('Foo',) before_props = tmpl.properties(res.properties_schema, self.stack.context) after_props = new_t.properties(res.properties_schema, self.stack.context) diff = res.update_template_diff_properties(after_props, before_props) self.assertEqual({'Foo': '456'}, diff) def test_update_template_diff_properties_notallowed(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo', {'Foo': '123'}) new_t = rsrc_defn.ResourceDefinition('test_resource', 'Foo', {'Bar': '456'}) res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) res.update_allowed_properties = ('Cat',) before_props = tmpl.properties(res.properties_schema, self.stack.context) after_props = new_t.properties(res.properties_schema, self.stack.context) 
        self.assertRaises(resource.UpdateReplace,
                          res.update_template_diff_properties,
                          after_props, before_props)

    def test_update_template_diff_properties_immutable_notsupported(self):
        before = {'Foo': 'bar', 'Parrot': 'dead', 'Spam': 'ham',
                  'Viking': 'axe'}
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo', before)
        schema = {'Foo': {'Type': 'String'},
                  'Viking': {'Type': 'String', 'Immutable': True},
                  'Spam': {'Type': 'String', 'Immutable': True},
                  'Parrot': {'Type': 'String', 'Immutable': True},
                  }
        after = {'Foo': 'baz', 'Parrot': 'dead', 'Spam': 'eggs',
                 'Viking': 'sword'}
        new_t = rsrc_defn.ResourceDefinition('test_resource', 'Foo', after)
        self.patchobject(generic_rsrc.ResourceWithProps,
                         'properties_schema', new=schema)
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        before_props = tmpl.properties(res.properties_schema,
                                       self.stack.context)
        after_props = new_t.properties(res.properties_schema,
                                       self.stack.context)
        ex = self.assertRaises(exception.NotSupported,
                               res.update_template_diff_properties,
                               after_props, before_props)
        self.assertIn("Update to properties Spam, Viking of",
                      six.text_type(ex))

    def test_resource(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

    def test_deprecated_prop_data_updated(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        scheduler.TaskRunner(res.create)()
        res_obj = db_api.resource_get(self.stack.context, res.id)
        self.assertIsNone(res_obj.properties_data)
        self.assertIsNone(res_obj.properties_data_encrypted)
        # Now that we've established these couple of deprecated fields
        # are not populated, let's populate them.
        db_api.resource_update_and_save(self.stack.context, res_obj.id,
                                        {'properties_data':
                                         {'Foo': 'lucky'},
                                         'properties_data_encrypted': False,
                                         'rsrc_prop_data': None})
        res._rsrc_prop_data = None
        res._load_data(res_obj)
        # Legacy properties_data slurped into res._stored_properties_data
        self.assertEqual(res._stored_properties_data, {'Foo': 'lucky'})
        res._rsrc_prop_data = None
        res.state_set(res.CREATE, res.IN_PROGRESS, 'test_rpd')
        # Modernity, the data is where it belongs
        rsrc_prop_data_db_obj = db_api.resource_prop_data_get(
            self.stack.context, res._rsrc_prop_data_id)
        self.assertEqual(rsrc_prop_data_db_obj['data'], {'Foo': 'lucky'})
        # legacy locations aren't being used anymore
        self.assertFalse(hasattr(res, 'properties_data'))
        self.assertFalse(hasattr(res, 'properties_data_encrypted'))

    def test_deprecated_encrypted_prop_data_updated(self):
        cfg.CONF.set_override('encrypt_parameters_and_properties', True)
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        scheduler.TaskRunner(res.create)()
        res_obj = db_api.resource_get(self.stack.context, res.id)
        self.assertIsNone(res_obj.properties_data)
        self.assertIsNone(res_obj.properties_data_encrypted)
        # Now that we've established these couple of deprecated fields
        # are not populated, let's populate them.
encrypted_data = \ rpd_object.ResourcePropertiesData.encrypt_properties_data( {'Foo': 'lucky'})[1] db_api.resource_update_and_save(self.stack.context, res_obj.id, {'properties_data': encrypted_data, 'properties_data_encrypted': True, 'rsrc_prop_data': None}) # This is where the decrypting of legacy data happens res_obj = resource_objects.Resource._from_db_object( resource_objects.Resource(), self.stack.context, res_obj) self.assertEqual('lucky', res_obj.properties_data['Foo']) res._rsrc_prop_data = None res._load_data(res_obj) # Legacy properties_data slurped into res._stored_properties_data self.assertEqual(res._stored_properties_data, {'Foo': 'lucky'}) res._rsrc_prop_data = None res.state_set(res.CREATE, res.IN_PROGRESS, 'test_store') # Modernity, the data is where it belongs # The db object data is encrypted rsrc_prop_data_db_obj = db_api.resource_prop_data_get( self.stack.context, res._rsrc_prop_data_id) self.assertNotEqual(rsrc_prop_data_db_obj['data'], {'Foo': 'lucky'}) # But the objects/ rsrc_prop_data.data is always unencrypted rsrc_prop_data_obj = rpd_object.ResourcePropertiesData._from_db_object( rpd_object.ResourcePropertiesData(), self.stack.context, rsrc_prop_data_db_obj) self.assertEqual(rsrc_prop_data_obj.data, {'Foo': 'lucky'}) # legacy locations aren't being used anymore self.assertFalse(hasattr(res, 'properties_data')) self.assertFalse(hasattr(res, 'properties_data_encrypted')) def test_create_fail_missing_req_prop(self): rname = 'test_resource' tmpl = rsrc_defn.ResourceDefinition(rname, 'Foo', {}) res = generic_rsrc.ResourceWithRequiredProps(rname, tmpl, self.stack) estr = ('Property error: test_resource.Properties: ' 'Property Foo not assigned') create = scheduler.TaskRunner(res.create) err = self.assertRaises(exception.ResourceFailure, create) self.assertIn(estr, six.text_type(err)) self.assertEqual((res.CREATE, res.FAILED), res.state) def test_create_fail_prop_typo(self): rname = 'test_resource' tmpl = rsrc_defn.ResourceDefinition(rname, 'GenericResourceType', {'Food': 'abc'}) res = generic_rsrc.ResourceWithProps(rname, tmpl, self.stack) estr = ('StackValidationFailed: resources.test_resource: ' 'Property error: test_resource.Properties: ' 'Unknown Property Food') create = scheduler.TaskRunner(res.create) err = self.assertRaises(exception.ResourceFailure, create) self.assertIn(estr, six.text_type(err)) self.assertEqual((res.CREATE, res.FAILED), res.state) def test_create_fail_metadata_parse_error(self): rname = 'test_resource' get_att = cfn_funcs.GetAtt(self.stack, 'Fn::GetAtt', ["ResourceA", "abc"]) tmpl = rsrc_defn.ResourceDefinition(rname, 'GenericResourceType', properties={}, metadata={'foo': get_att}) res = generic_rsrc.ResourceWithProps(rname, tmpl, self.stack) create = scheduler.TaskRunner(res.create) self.assertRaises(exception.ResourceFailure, create) self.assertEqual((res.CREATE, res.FAILED), res.state) def test_create_resource_after_destroy(self): rname = 'test_res_id_none' tmpl = rsrc_defn.ResourceDefinition(rname, 'GenericResourceType') res = generic_rsrc.ResourceWithProps(rname, tmpl, self.stack) res.id = 'test_res_id' (res.action, res.status) = (res.INIT, res.DELETE) create = scheduler.TaskRunner(res.create) self.assertRaises(exception.ResourceFailure, create) scheduler.TaskRunner(res.destroy)() res.state_reset() scheduler.TaskRunner(res.create)() self.assertEqual((res.CREATE, res.COMPLETE), res.state) @mock.patch.object(properties.Properties, 'validate') @mock.patch.object(timeutils, 'retry_backoff_delay') def test_create_validate(self, m_re, m_v): 
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        generic_rsrc.ResourceWithProps.handle_create = mock.Mock()
        generic_rsrc.ResourceWithProps.handle_delete = mock.Mock()

        m_v.side_effect = [True, exception.StackValidationFailed()]
        generic_rsrc.ResourceWithProps.handle_create.side_effect = [
            exception.ResourceInError(resource_name='test_resource',
                                      resource_status='ERROR',
                                      resource_type='GenericResourceType',
                                      resource_action='CREATE',
                                      status_reason='just because'),
            exception.ResourceInError(resource_name='test_resource',
                                      resource_status='ERROR',
                                      resource_type='GenericResourceType',
                                      resource_action='CREATE',
                                      status_reason='just because'),
            None
        ]
        generic_rsrc.ResourceWithProps.handle_delete.return_value = None
        m_re.return_value = 0.01

        scheduler.TaskRunner(res.create)()
        self.assertEqual(2, m_re.call_count)
        self.assertEqual(1, m_v.call_count)
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

    def test_create_fail_retry(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)

        self.m.StubOutWithMock(timeutils, 'retry_backoff_delay')
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_create')
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_delete')

        # first attempt to create fails
        generic_rsrc.ResourceWithProps.handle_create().AndRaise(
            exception.ResourceInError(resource_name='test_resource',
                                      resource_status='ERROR',
                                      resource_type='GenericResourceType',
                                      resource_action='CREATE',
                                      status_reason='just because'))
        # delete error resource from first attempt
        generic_rsrc.ResourceWithProps.handle_delete().AndReturn(None)

        # second attempt to create succeeds
        timeutils.retry_backoff_delay(1, jitter_max=2.0).AndReturn(0.01)
        generic_rsrc.ResourceWithProps.handle_create().AndReturn(None)
        self.m.ReplayAll()

        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)
        self.m.VerifyAll()

    def test_create_fail_retry_disabled(self):
        cfg.CONF.set_override('action_retry_limit', 0)
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)

        self.m.StubOutWithMock(timeutils, 'retry_backoff_delay')
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_create')
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_delete')

        # attempt to create fails
        generic_rsrc.ResourceWithProps.handle_create().AndRaise(
            exception.ResourceInError(resource_name='test_resource',
                                      resource_status='ERROR',
                                      resource_type='GenericResourceType',
                                      resource_action='CREATE',
                                      status_reason='just because'))
        self.m.ReplayAll()

        estr = ('ResourceInError: resources.test_resource: '
                'Went to status ERROR due to "just because"')
        create = scheduler.TaskRunner(res.create)
        err = self.assertRaises(exception.ResourceFailure, create)
        self.assertEqual(estr, six.text_type(err))
        self.assertEqual((res.CREATE, res.FAILED), res.state)
        self.m.VerifyAll()

    def test_create_deletes_fail_retry(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)

        self.m.StubOutWithMock(timeutils, 'retry_backoff_delay')
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_create')
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_delete')

        # first attempt to create fails
        generic_rsrc.ResourceWithProps.handle_create().AndRaise(
            exception.ResourceInError(resource_name='test_resource',
                                      resource_status='ERROR',
                                      resource_type='GenericResourceType',
                                      resource_action='CREATE',
                                      status_reason='just because'))
        # first attempt to delete fails
        generic_rsrc.ResourceWithProps.handle_delete().AndRaise(
            exception.ResourceInError(resource_name='test_resource',
                                      resource_status='ERROR',
                                      resource_type='GenericResourceType',
                                      resource_action='DELETE',
                                      status_reason='delete failed'))
        # second attempt to delete fails
        timeutils.retry_backoff_delay(1, jitter_max=2.0).AndReturn(0.01)
        generic_rsrc.ResourceWithProps.handle_delete().AndRaise(
            exception.ResourceInError(resource_name='test_resource',
                                      resource_status='ERROR',
                                      resource_type='GenericResourceType',
                                      resource_action='DELETE',
                                      status_reason='delete failed again'))
        # third attempt to delete succeeds
        timeutils.retry_backoff_delay(2, jitter_max=2.0).AndReturn(0.01)
        generic_rsrc.ResourceWithProps.handle_delete().AndReturn(None)

        # second attempt to create succeeds
        timeutils.retry_backoff_delay(1, jitter_max=2.0).AndReturn(0.01)
        generic_rsrc.ResourceWithProps.handle_create().AndReturn(None)
        self.m.ReplayAll()

        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)
        self.m.VerifyAll()

    def test_creates_fail_retry(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)

        self.m.StubOutWithMock(timeutils, 'retry_backoff_delay')
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_create')
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_delete')

        # first attempt to create fails
        generic_rsrc.ResourceWithProps.handle_create().AndRaise(
            exception.ResourceInError(resource_name='test_resource',
                                      resource_status='ERROR',
                                      resource_type='GenericResourceType',
                                      resource_action='CREATE',
                                      status_reason='just because'))
        # delete error resource from first attempt
        generic_rsrc.ResourceWithProps.handle_delete().AndReturn(None)

        # second attempt to create fails
        timeutils.retry_backoff_delay(1, jitter_max=2.0).AndReturn(0.01)
        generic_rsrc.ResourceWithProps.handle_create().AndRaise(
            exception.ResourceInError(resource_name='test_resource',
                                      resource_status='ERROR',
                                      resource_type='GenericResourceType',
                                      resource_action='CREATE',
                                      status_reason='just because'))
        # delete error resource from second attempt
        generic_rsrc.ResourceWithProps.handle_delete().AndReturn(None)

        # third attempt to create succeeds
        timeutils.retry_backoff_delay(2, jitter_max=2.0).AndReturn(0.01)
        generic_rsrc.ResourceWithProps.handle_create().AndReturn(None)
        self.m.ReplayAll()

        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)
        self.m.VerifyAll()

    def test_create_cancel(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo')
        res = generic_rsrc.CancellableResource('test_resource', tmpl,
                                               self.stack)

        self.m.StubOutWithMock(res, 'handle_create')
        self.m.StubOutWithMock(res, 'check_create_complete')
        self.m.StubOutWithMock(res, 'handle_create_cancel')

        cookie = object()

        res.handle_create().AndReturn(cookie)
        res.check_create_complete(cookie).AndReturn(False)
        res.handle_create_cancel(cookie).AndReturn(None)
        self.m.ReplayAll()

        runner = scheduler.TaskRunner(res.create)
        runner.start()
        runner.step()
        runner.cancel()
        self.m.VerifyAll()

    def test_create_cancel_exception(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo')
        res = generic_rsrc.CancellableResource('test_resource', tmpl,
                                               self.stack)
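        # (Added note) Same recorded sequence as test_create_cancel, except
        # the cancel handler itself raises; the exception is expected not to
        # propagate out of runner.cancel().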
        self.m.StubOutWithMock(res, 'handle_create')
        self.m.StubOutWithMock(res, 'check_create_complete')
        self.m.StubOutWithMock(res, 'handle_create_cancel')

        cookie = object()

        res.handle_create().AndReturn(cookie)
        res.check_create_complete(cookie).AndReturn(False)
        res.handle_create_cancel(cookie).AndRaise(Exception)
        self.m.ReplayAll()

        runner = scheduler.TaskRunner(res.create)
        runner.start()
        runner.step()
        runner.cancel()
        self.m.VerifyAll()

    def test_preview(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType')
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        self.assertEqual(res, res.preview())

    def test_update_ok(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        res.update_allowed_properties = ('Foo',)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

        utmpl = rsrc_defn.ResourceDefinition('test_resource',
                                             'GenericResourceType',
                                             {'Foo': 'xyz'})
        prop_diff = {'Foo': 'xyz'}
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_update')
        generic_rsrc.ResourceWithProps.handle_update(
            utmpl, mock.ANY, prop_diff).AndReturn(None)
        self.m.ReplayAll()

        scheduler.TaskRunner(res.update, utmpl)()
        self.assertEqual((res.UPDATE, res.COMPLETE), res.state)
        self.assertEqual({'Foo': 'xyz'}, res._stored_properties_data)
        self.m.VerifyAll()

    def test_update_replace_with_resource_name(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        res.update_allowed_properties = ('Foo',)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

        utmpl = rsrc_defn.ResourceDefinition('test_resource',
                                             'GenericResourceType',
                                             {'Foo': 'xyz'})
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_update')
        prop_diff = {'Foo': 'xyz'}
        generic_rsrc.ResourceWithProps.handle_update(
            utmpl, mock.ANY, prop_diff).AndRaise(resource.UpdateReplace(
                res.name))
        self.m.ReplayAll()
        # should be re-raised so parser.Stack can handle replacement
        updater = scheduler.TaskRunner(res.update, utmpl)
        ex = self.assertRaises(resource.UpdateReplace, updater)
        self.assertEqual('The Resource test_resource requires replacement.',
                         six.text_type(ex))
        self.m.VerifyAll()

    def test_update_replace_without_resource_name(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        res.update_allowed_properties = ('Foo',)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

        utmpl = rsrc_defn.ResourceDefinition('test_resource',
                                             'GenericResourceType',
                                             {'Foo': 'xyz'})
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_update')
        prop_diff = {'Foo': 'xyz'}
        generic_rsrc.ResourceWithProps.handle_update(
            utmpl, mock.ANY, prop_diff).AndRaise(resource.UpdateReplace())
        self.m.ReplayAll()
        # should be re-raised so parser.Stack can handle replacement
        updater = scheduler.TaskRunner(res.update, utmpl)
        ex = self.assertRaises(resource.UpdateReplace, updater)
        self.assertEqual('The Resource Unknown requires replacement.',
                         six.text_type(ex))
        self.m.VerifyAll()

    def test_need_update_in_init_complete_state_for_resource(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        res.update_allowed_properties = ('Foo',)
        self.assertEqual((res.INIT, res.COMPLETE), res.state)

        prop = {'Foo': 'abc'}
        self.assertRaises(resource.UpdateReplace,
                          res._needs_update, tmpl, tmpl, prop, prop, res)

    def test_need_update_in_create_failed_state_for_resource(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        res.update_allowed_properties = ('Foo',)
        res.state_set(res.CREATE, res.FAILED)

        prop = {'Foo': 'abc'}
        self.assertRaises(resource.UpdateReplace,
                          res._needs_update, tmpl, tmpl, prop, prop, res)

    def test_convg_need_update_in_delete_complete_state_for_resource(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        res.update_allowed_properties = ('Foo',)
        res.stack.convergence = True
        res.state_set(res.DELETE, res.COMPLETE)

        prop = {'Foo': 'abc'}
        self.assertRaises(resource.UpdateReplace,
                          res._needs_update, tmpl, tmpl, prop, prop, res)

    def test_update_fail_missing_req_prop(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithRequiredProps('test_resource', tmpl,
                                                     self.stack)
        res.update_allowed_properties = ('Foo',)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

        utmpl = rsrc_defn.ResourceDefinition('test_resource',
                                             'GenericResourceType', {})

        updater = scheduler.TaskRunner(res.update, utmpl)
        self.assertRaises(exception.ResourceFailure, updater)
        self.assertEqual((res.UPDATE, res.FAILED), res.state)

    def test_update_fail_prop_typo(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        res.update_allowed_properties = ('Foo',)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

        utmpl = rsrc_defn.ResourceDefinition('test_resource',
                                             'GenericResourceType',
                                             {'Food': 'xyz'})

        updater = scheduler.TaskRunner(res.update, utmpl)
        self.assertRaises(exception.ResourceFailure, updater)
        self.assertEqual((res.UPDATE, res.FAILED), res.state)

    def test_update_not_implemented(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        res.update_allowed_properties = ('Foo',)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

        utmpl = rsrc_defn.ResourceDefinition('test_resource',
                                             'GenericResourceType',
                                             {'Foo': 'xyz'})
        tmpl_diff = {'Properties': {'Foo': 'xyz'}}
        prop_diff = {'Foo': 'xyz'}
        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_update')
        generic_rsrc.ResourceWithProps.handle_update(
            utmpl, tmpl_diff, prop_diff).AndRaise(NotImplementedError)
        self.m.ReplayAll()

        updater = scheduler.TaskRunner(res.update, utmpl)
        self.assertRaises(exception.ResourceFailure, updater)
        self.assertEqual((res.UPDATE, res.FAILED), res.state)
        self.m.VerifyAll()

    def test_update_cancel(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo')
        res = generic_rsrc.CancellableResource('test_resource', tmpl,
                                               self.stack)

        self.m.StubOutWithMock(res, '_needs_update')
        self.m.StubOutWithMock(res, 'handle_update')
        self.m.StubOutWithMock(res, 'check_update_complete')
        self.m.StubOutWithMock(res, 'handle_update_cancel')
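        # (Added note) _needs_update is stubbed to return True so the update
        # path is taken; the cookie returned by handle_update is then passed
        # through to check_update_complete and handle_update_cancel.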
        res._needs_update(mock.ANY, mock.ANY, mock.ANY,
                          mock.ANY, None).AndReturn(True)

        cookie = object()

        res.handle_update(mock.ANY, mock.ANY, mock.ANY).AndReturn(cookie)
        res.check_update_complete(cookie).AndReturn(False)
        res.handle_update_cancel(cookie).AndReturn(None)
        self.m.ReplayAll()

        scheduler.TaskRunner(res.create)()

        runner = scheduler.TaskRunner(res.update, tmpl)
        runner.start()
        runner.step()
        runner.cancel()
        self.m.VerifyAll()

    def _mock_check_res(self, mock_check=True):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'GenericResourceType')
        res = generic_rsrc.ResourceWithProps('test_res', tmpl, self.stack)
        res.state_set(res.CREATE, res.COMPLETE)
        if mock_check:
            res.handle_check = mock.Mock()

        return res

    def test_check_supported(self):
        res = self._mock_check_res()
        scheduler.TaskRunner(res.check)()

        self.assertTrue(res.handle_check.called)
        self.assertEqual(res.CHECK, res.action)
        self.assertEqual(res.COMPLETE, res.status)
        self.assertNotIn('not supported', res.status_reason)

    def test_check_not_supported(self):
        res = self._mock_check_res(mock_check=False)
        scheduler.TaskRunner(res.check)()

        self.assertIn('not supported', res.status_reason)
        self.assertEqual(res.CHECK, res.action)
        self.assertEqual(res.COMPLETE, res.status)

    def test_check_failed(self):
        res = self._mock_check_res()
        res.handle_check.side_effect = Exception('boom')

        self.assertRaises(exception.ResourceFailure,
                          scheduler.TaskRunner(res.check))
        self.assertTrue(res.handle_check.called)
        self.assertEqual(res.CHECK, res.action)
        self.assertEqual(res.FAILED, res.status)
        self.assertIn('boom', res.status_reason)

    def test_verify_check_conditions(self):
        valid_foos = ['foo1', 'foo2']
        checks = [
            {'attr': 'foo1', 'expected': 'bar1', 'current': 'baz1'},
            {'attr': 'foo2', 'expected': valid_foos, 'current': 'foo2'},
            {'attr': 'foo3', 'expected': 'bar3', 'current': 'baz3'},
            {'attr': 'foo4', 'expected': 'foo4', 'current': 'foo4'},
            {'attr': 'foo5', 'expected': valid_foos, 'current': 'baz5'},
        ]
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'GenericResourceType')
        res = generic_rsrc.ResourceWithProps('test_res', tmpl, self.stack)

        exc = self.assertRaises(exception.Error,
                                res._verify_check_conditions, checks)
        exc_text = six.text_type(exc)
        self.assertNotIn("'foo2':", exc_text)
        self.assertNotIn("'foo4':", exc_text)
        self.assertIn("'foo1': expected 'bar1', got 'baz1'", exc_text)
        self.assertIn("'foo3': expected 'bar3', got 'baz3'", exc_text)
        self.assertIn("'foo5': expected '['foo1', 'foo2']', got 'baz5'",
                      exc_text)

    def test_suspend_resume_ok(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        res.update_allowed_properties = ('Foo',)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)
        scheduler.TaskRunner(res.suspend)()
        self.assertEqual((res.SUSPEND, res.COMPLETE), res.state)
        scheduler.TaskRunner(res.resume)()
        self.assertEqual((res.RESUME, res.COMPLETE), res.state)

    def test_suspend_fail_invalid_states(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

        invalid_actions = (a for a in res.ACTIONS if a != res.SUSPEND)
        invalid_status = (s for s in res.STATUSES if s != res.COMPLETE)
        invalid_states = [s for s in
                          itertools.product(invalid_actions, invalid_status)]
        for state in invalid_states:
            res.state_set(*state)
            suspend = scheduler.TaskRunner(res.suspend)
            expected = 'State %s invalid for suspend' % six.text_type(state)
            exc = self.assertRaises(exception.ResourceFailure, suspend)
            self.assertIn(expected, six.text_type(exc))

    def test_resume_fail_invalid_states(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

        invalid_states = [s for s in
                          itertools.product(res.ACTIONS, res.STATUSES)
                          if s not in ((res.SUSPEND, res.COMPLETE),
                                       (res.RESUME, res.FAILED),
                                       (res.RESUME, res.COMPLETE))]
        for state in invalid_states:
            res.state_set(*state)
            resume = scheduler.TaskRunner(res.resume)
            expected = 'State %s invalid for resume' % six.text_type(state)
            exc = self.assertRaises(exception.ResourceFailure, resume)
            self.assertIn(expected, six.text_type(exc))

    def test_suspend_fail_exception(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_suspend')
        generic_rsrc.ResourceWithProps.handle_suspend().AndRaise(Exception())
        self.m.ReplayAll()

        suspend = scheduler.TaskRunner(res.suspend)
        self.assertRaises(exception.ResourceFailure, suspend)
        self.assertEqual((res.SUSPEND, res.FAILED), res.state)

    def test_resume_fail_exception(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

        self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
                               'handle_resume')
        generic_rsrc.ResourceWithProps.handle_resume().AndRaise(Exception())
        self.m.ReplayAll()

        res.state_set(res.SUSPEND, res.COMPLETE)

        resume = scheduler.TaskRunner(res.resume)
        self.assertRaises(exception.ResourceFailure, resume)
        self.assertEqual((res.RESUME, res.FAILED), res.state)

    def test_resource_class_to_cfn_template(self):

        class TestResource(resource.Resource):
            list_schema = {'wont_show_up': {'Type': 'Number'}}
            map_schema = {'will_show_up': {'Type': 'Integer'}}

            properties_schema = {
                'name': {'Type': 'String'},
                'bool': {'Type': 'Boolean'},
                'implemented': {'Type': 'String',
                                'Implemented': True,
                                'AllowedPattern': '.*',
                                'MaxLength': 7,
                                'MinLength': 2,
                                'Required': True},
                'not_implemented': {'Type': 'String',
                                    'Implemented': False},
                'number': {'Type': 'Number',
                           'MaxValue': 77,
                           'MinValue': 41,
                           'Default': 42},
                'list': {'Type': 'List',
                         'Schema': {'Type': 'Map',
                                    'Schema': list_schema}},
                'map': {'Type': 'Map', 'Schema': map_schema},
                'hidden': properties.Schema(
                    properties.Schema.STRING,
                    support_status=support.SupportStatus(
                        status=support.HIDDEN))
            }

            attributes_schema = {
                'output1': attributes.Schema('output1_desc'),
                'output2': attributes.Schema('output2_desc')
            }

        expected_template = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Description': 'Initial template of TestResource',
            'Parameters': {
                'name': {'Type': 'String'},
                'bool': {'Type': 'Boolean',
                         'AllowedValues': ['True', 'true', 'False', 'false']},
                'implemented': {
                    'Type': 'String',
                    'AllowedPattern': '.*',
                    'MaxLength': 7,
                    'MinLength': 2
                },
                'number': {'Type': 'Number',
                           'MaxValue': 77,
                           'MinValue': 41,
                           'Default': 42},
                'list': {'Type': 'CommaDelimitedList'},
                'map': {'Type': 'Json'}
            },
            'Resources': {
                'TestResource': {
                    'Type': 'Test::Resource::resource',
                    'Properties': {
                        'name': {'Ref': 'name'},
                        'bool': {'Ref': 'bool'},
                        'implemented': {'Ref': 'implemented'},
                        'number': {'Ref': 'number'},
                        'list': {'Fn::Split': [",", {'Ref': 'list'}]},
                        'map': {'Ref': 'map'}
                    }
                }
            },
            'Outputs': {
                'output1': {
                    'Description': 'output1_desc',
                    'Value': {"Fn::GetAtt": ["TestResource", "output1"]}
                },
                'output2': {
                    'Description': 'output2_desc',
                    'Value': {"Fn::GetAtt": ["TestResource", "output2"]}
                },
                'show': {
                    'Description': u'Detailed information about resource.',
                    'Value': {"Fn::GetAtt": ["TestResource", "show"]}
                },
                'OS::stack_id': {
                    'Value': {"Ref": "TestResource"}
                }
            }
        }
        self.assertEqual(expected_template,
                         TestResource.resource_to_template(
                             'Test::Resource::resource'))

    def test_resource_class_to_hot_template(self):

        class TestResource(resource.Resource):
            list_schema = {'wont_show_up': {'Type': 'Number'}}
            map_schema = {'will_show_up': {'Type': 'Integer'}}

            properties_schema = {
                'name': {'Type': 'String'},
                'bool': {'Type': 'Boolean'},
                'implemented': {'Type': 'String',
                                'Implemented': True,
                                'AllowedPattern': '.*',
                                'MaxLength': 7,
                                'MinLength': 2,
                                'Required': True},
                'not_implemented': {'Type': 'String',
                                    'Implemented': False},
                'number': {'Type': 'Number',
                           'MaxValue': 77,
                           'MinValue': 41,
                           'Default': 42},
                'list': {'Type': 'List',
                         'Schema': {'Type': 'Map',
                                    'Schema': list_schema}},
                'map': {'Type': 'Map', 'Schema': map_schema},
                'hidden': properties.Schema(
                    properties.Schema.STRING,
                    support_status=support.SupportStatus(
                        status=support.HIDDEN))
            }

            attributes_schema = {
                'output1': attributes.Schema('output1_desc'),
                'output2': attributes.Schema('output2_desc')
            }

        expected_template = {
            'heat_template_version': '2016-10-14',
            'description': 'Initial template of TestResource',
            'parameters': {
                'name': {'type': 'string'},
                'bool': {'type': 'boolean'},
                'implemented': {
                    'type': 'string',
                    'constraints': [{'length': {'max': 7, 'min': 2}},
                                    {'allowed_pattern': '.*'}]
                },
                'number': {'type': 'number',
                           'constraints': [{'range': {'max': 77, 'min': 41}}],
                           'default': 42},
                'list': {'type': 'comma_delimited_list'},
                'map': {'type': 'json'}
            },
            'resources': {
                'TestResource': {
                    'type': 'Test::Resource::resource',
                    'properties': {
                        'name': {'get_param': 'name'},
                        'bool': {'get_param': 'bool'},
                        'implemented': {'get_param': 'implemented'},
                        'number': {'get_param': 'number'},
                        'list': {'get_param': 'list'},
                        'map': {'get_param': 'map'}
                    }
                }
            },
            'outputs': {
                'output1': {
                    'description': 'output1_desc',
                    'value': {"get_attr": ["TestResource", "output1"]}
                },
                'output2': {
                    'description': 'output2_desc',
                    'value': {"get_attr": ["TestResource", "output2"]}
                },
                'show': {
                    'description': u'Detailed information about resource.',
                    'value': {"get_attr": ["TestResource", "show"]}
                },
                'OS::stack_id': {
                    'value': {"get_resource": "TestResource"}
                }
            }
        }
        self.assertEqual(expected_template,
                         TestResource.resource_to_template(
                             'Test::Resource::resource',
                             template_type='hot'))

    def test_is_not_using_neutron_exception(self):
        snippet = rsrc_defn.ResourceDefinition('aresource',
                                               'GenericResourceType')
        res = resource.Resource('aresource', snippet, self.stack)
        mock_create = self.patch(
            'heat.engine.clients.os.neutron.NeutronClientPlugin._create')
        mock_create.side_effect = Exception()
        self.assertFalse(res.is_using_neutron())

    def test_is_using_neutron_endpoint_lookup(self):
        snippet = rsrc_defn.ResourceDefinition('aresource',
                                               'GenericResourceType')
        res = resource.Resource('aresource', snippet, self.stack)
        client = mock.Mock()
        self.patchobject(client.httpclient, 'get_endpoint',
                         return_value=None)
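        # (Added note) With no endpoint for the network service in the
        # catalog, is_using_neutron() is expected to report False; a real
        # endpoint (mocked below) flips it to True.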
        self.patch(
            'heat.engine.clients.os.neutron.NeutronClientPlugin._create',
            return_value=client)
        self.assertFalse(res.is_using_neutron())
        self.patchobject(client.httpclient, 'get_endpoint',
                         return_value=mock.Mock())
        self.assertTrue(res.is_using_neutron())

    def _test_skip_validation_if_custom_constraint(self, tmpl):
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        stack.store()
        path = ('heat.engine.clients.os.neutron.neutron_constraints.'
                'NetworkConstraint.validate_with_client')
        with mock.patch(path) as mock_validate:
            mock_validate.side_effect = neutron_exp.NeutronClientException
            rsrc2 = stack['bar']
            self.assertIsNone(rsrc2.validate())

    def test_ref_skip_validation_if_custom_constraint(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'OS::Test::GenericResource'},
                'bar': {
                    'Type': 'OS::Test::ResourceWithCustomConstraint',
                    'Properties': {
                        'Foo': {'Ref': 'foo'},
                    }
                }
            }
        }, env=self.env)
        self._test_skip_validation_if_custom_constraint(tmpl)

    def test_hot_ref_skip_validation_if_custom_constraint(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {
                    'type': 'ResourceWithCustomConstraint',
                    'properties': {
                        'Foo': {'get_resource': 'foo'},
                    }
                }
            }
        }, env=self.env)
        self._test_skip_validation_if_custom_constraint(tmpl)

    def test_no_resource_properties_required_default(self):
        """Test that there are no required properties with a default value.

        Check all resources for properties that have both the required flag
        and a default value, because that combination is ambiguous.
        """
        env = environment.Environment({}, user_env=False)
        resources._load_global_environment(env)

        # change loading mechanism for resources that require template files
        mod_dir = os.path.dirname(sys.modules[__name__].__file__)
        project_dir = os.path.abspath(os.path.join(mod_dir, '../../'))
        template_path = os.path.join(project_dir, 'etc', 'heat', 'templates')

        tri_db_instance = env.get_resource_info(
            'AWS::RDS::DBInstance',
            registry_type=environment.TemplateResourceInfo)
        tri_db_instance.template_name = tri_db_instance.template_name.replace(
            '/etc/heat/templates', template_path)
        tri_alarm = env.get_resource_info(
            'AWS::CloudWatch::Alarm',
            registry_type=environment.TemplateResourceInfo)
        tri_alarm.template_name = tri_alarm.template_name.replace(
            '/etc/heat/templates', template_path)

        def _validate_property_schema(prop_name, prop, res_name):
            if isinstance(prop, properties.Schema) and prop.implemented:
                ambiguous = (prop.default is not None) and prop.required
                self.assertFalse(ambiguous,
                                 "The definition of the property '{0}' "
                                 "in resource '{1}' is ambiguous: it "
                                 "has default value and required flag. "
                                 "Please delete one of these options."
                                 .format(prop_name, res_name))
            if prop.schema is not None:
                if isinstance(prop.schema, constraints.AnyIndexDict):
                    _validate_property_schema(
                        prop_name, prop.schema.value, res_name)
                else:
                    for nest_prop_name, nest_prop in six.iteritems(
                            prop.schema):
                        _validate_property_schema(nest_prop_name,
                                                  nest_prop,
                                                  res_name)

        resource_types = env.get_types()
        for res_type in resource_types:
            res_class = env.get_class(res_type)
            if hasattr(res_class, "properties_schema"):
                for property_schema_name, property_schema in six.iteritems(
                        res_class.properties_schema):
                    _validate_property_schema(property_schema_name,
                                              property_schema,
                                              res_class.__name__)

    def test_getatt_invalid_type(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'res': {
                    'type': 'ResourceWithAttributeType'
                }
            }
        })

        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        res = stack['res']
        self.assertEqual('valid_sting', res.FnGetAtt('attr1'))

        res.FnGetAtt('attr2')
        self.assertIn("Attribute attr2 is not of type Map", self.LOG.output)

    def test_getatt_with_path(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'res': {
                    'type': 'ResourceWithComplexAttributesType'
                }
            }
        })

        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        res = stack['res']
        self.assertEqual('abc', res.FnGetAtt('nested_dict', 'string'))

    def test_getatt_with_cache_data(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'res': {
                    'type': 'ResourceWithAttributeType'
                }
            }
        })

        stack = parser.Stack(utils.dummy_context(), 'test', tmpl,
                             cache_data={
                                 'res': node_data.NodeData.from_dict({
                                     'attrs': {'Foo': 'Res', 'foo': 'res'},
                                     'uuid': mock.ANY,
                                     'id': mock.ANY,
                                     'action': 'CREATE',
                                     'status': 'COMPLETE'})})

        res = stack.defn['res']
        self.assertEqual('Res', res.FnGetAtt('Foo'))

    def test_getatt_with_path_cache_data(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'res': {
                    'type': 'ResourceWithComplexAttributesType'
                }
            }
        })

        stack = parser.Stack(utils.dummy_context(), 'test', tmpl,
                             cache_data={
                                 'res': node_data.NodeData.from_dict({
                                     'attrs': {('nested', 'string'): 'abc'},
                                     'uuid': mock.ANY,
                                     'id': mock.ANY,
                                     'action': 'CREATE',
                                     'status': 'COMPLETE'})})

        res = stack.defn['res']
        self.assertEqual('abc', res.FnGetAtt('nested', 'string'))

    def test_properties_data_stored_encrypted_decrypted_on_load(self):
        cfg.CONF.set_override('encrypt_parameters_and_properties', True)

        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo')
        stored_properties_data = {'prop1': 'string',
                                  'prop2': {'a': 'dict'},
                                  'prop3': 1,
                                  'prop4': ['a', 'list'],
                                  'prop5': True}

        # The db data should be encrypted when _store() is called
        res = generic_rsrc.GenericResource('test_res_enc', tmpl, self.stack)
        res._stored_properties_data = stored_properties_data
        res._rsrc_prop_data = None
        res.store()
        db_res = db_api.resource_get(res.context, res.id)
        self.assertNotEqual('string', db_res.rsrc_prop_data.data['prop1'])

        # The db data should be encrypted when state_set is called
        res = generic_rsrc.GenericResource('test_res_enc', tmpl, self.stack)
        res._stored_properties_data = stored_properties_data
        res.state_set(res.CREATE, res.IN_PROGRESS, 'test_store')
        db_res = db_api.resource_get(res.context, res.id)
        self.assertNotEqual('string', db_res.rsrc_prop_data.data['prop1'])

        # The properties data should be decrypted when the object is
        # loaded using get_obj
        res_obj = resource_objects.Resource.get_obj(res.context, res.id)
        self.assertEqual('string', res_obj.properties_data['prop1'])
        # _stored_properties_data should be decrypted when the object is
        # loaded using get_all_by_stack
        res_objs = resource_objects.Resource.get_all_by_stack(res.context,
                                                              self.stack.id)
        res_obj = res_objs['test_res_enc']
        self.assertEqual('string', res_obj.properties_data['prop1'])

        # The properties data should be decrypted when the object is
        # refreshed
        res_obj = resource_objects.Resource.get_obj(res.context, res.id)
        res_obj.refresh()
        self.assertEqual('string', res_obj.properties_data['prop1'])

    def test_properties_data_no_encryption(self):
        cfg.CONF.set_override('encrypt_parameters_and_properties', False)

        tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo')
        stored_properties_data = {'prop1': 'string',
                                  'prop2': {'a': 'dict'},
                                  'prop3': 1,
                                  'prop4': ['a', 'list'],
                                  'prop5': True}

        # The db data should not be encrypted when state_set()
        # is called
        res = generic_rsrc.GenericResource('test_res_enc', tmpl, self.stack)
        res._stored_properties_data = stored_properties_data
        res._rsrc_prop_data = None
        res.state_set(res.CREATE, res.IN_PROGRESS, 'test_store')
        db_res = db_api.resource_get(res.context, res.id)
        self.assertEqual('string', db_res.rsrc_prop_data.data['prop1'])

        # The db data should not be encrypted when _store() is called
        res = generic_rsrc.GenericResource('test_res_enc', tmpl, self.stack)
        res._stored_properties_data = stored_properties_data
        db_res = db_api.resource_get(res.context, res.id)
        self.assertEqual('string', db_res.rsrc_prop_data.data['prop1'])

        # The properties data should not be modified when the object
        # is loaded using get_obj
        prev_rsrc_prop_data_id = db_res.rsrc_prop_data.id
        res_obj = resource_objects.Resource.get_obj(res.context, res.id)
        self.assertEqual('string', res_obj.properties_data['prop1'])
        self.assertEqual(prev_rsrc_prop_data_id, res_obj.rsrc_prop_data_id)

        # The properties data should not be modified when the object
        # is loaded using get_all_by_stack
        res_objs = resource_objects.Resource.get_all_by_stack(res.context,
                                                              self.stack.id)
        res_obj = res_objs['test_res_enc']
        self.assertEqual('string', res_obj.properties_data['prop1'])
        self.assertEqual(prev_rsrc_prop_data_id, res_obj.rsrc_prop_data_id)

    def _assert_resource_lock(self, res_id, engine_id, atomic_key):
        rs = resource_objects.Resource.get_obj(self.stack.context, res_id)
        self.assertEqual(engine_id, rs.engine_id)
        self.assertEqual(atomic_key, rs.atomic_key)

    def test_create_convergence(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.action = res.CREATE
        res.store()
        self._assert_resource_lock(res.id, None, None)
        res_data = {(1, True): {u'id': 1, u'name': 'A', 'attrs': {}},
                    (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}}
        res_data = node_data.load_resources_data(res_data)
        pcb = mock.Mock()
        with mock.patch.object(resource.Resource, 'create') as mock_create:
            res.create_convergence(self.stack.t.id, res_data, 'engine-007',
                                   -1, pcb)
            self.assertTrue(mock_create.called)
        self.assertItemsEqual([1, 3], res.requires)
        self._assert_resource_lock(res.id, None, None)

    def test_create_convergence_throws_timeout(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.action = res.CREATE
        res.store()

        res_data = {(1, True): {u'id': 1, u'name': 'A', 'attrs': {}},
                    (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}}
        res_data = node_data.load_resources_data(res_data)
        pcb = mock.Mock()
        self.assertRaises(scheduler.Timeout, res.create_convergence,
                          self.stack.t.id, res_data, 'engine-007', -1, pcb)
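    # (Added note) In the convergence tests below, res_data maps
    # (graph_key, is_update) tuples to node data; the u'id' values are the
    # database ids that end up in res.requires once the action completes.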
    def test_create_convergence_sets_requires_for_failure(self):
        """Ensure that requires are computed correctly.

        Ensure that requires are computed correctly even if resource
        create fails.
        """
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.store()
        dummy_ex = exception.ResourceNotAvailable(resource_name=res.name)
        res.create = mock.Mock(side_effect=dummy_ex)
        self._assert_resource_lock(res.id, None, None)
        res_data = {(1, True): {u'id': 5, u'name': 'A', 'attrs': {}},
                    (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}}
        res_data = node_data.load_resources_data(res_data)
        self.assertRaises(exception.ResourceNotAvailable,
                          res.create_convergence, self.stack.t.id, res_data,
                          'engine-007', self.dummy_timeout, self.dummy_event)
        self.assertItemsEqual([5, 3], res.requires)
        # The locking happens in create which we mocked out
        self._assert_resource_lock(res.id, None, None)

    @mock.patch.object(resource.Resource, 'adopt')
    def test_adopt_convergence_ok(self, mock_adopt):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.action = res.ADOPT
        res.store()
        self.stack.adopt_stack_data = {'resources': {'test_res': {
            'resource_id': 'fluffy'}}}
        self._assert_resource_lock(res.id, None, None)
        res_data = {(1, True): {u'id': 5, u'name': 'A', 'attrs': {}},
                    (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}}
        res_data = node_data.load_resources_data(res_data)
        tr = scheduler.TaskRunner(res.create_convergence, self.stack.t.id,
                                  res_data, 'engine-007', self.dummy_timeout,
                                  self.dummy_event)
        tr()
        mock_adopt.assert_called_once_with(
            resource_data={'resource_id': 'fluffy'})
        self.assertItemsEqual([5, 3], res.requires)
        self._assert_resource_lock(res.id, None, None)

    def test_adopt_convergence_bad_data(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.action = res.ADOPT
        res.store()
        self.stack.adopt_stack_data = {'resources': {}}
        self._assert_resource_lock(res.id, None, None)
        res_data = {(1, True): {u'id': 5, u'name': 'A', 'attrs': {}},
                    (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}}
        res_data = node_data.load_resources_data(res_data)
        tr = scheduler.TaskRunner(res.create_convergence, self.stack.t.id,
                                  res_data, 'engine-007', self.dummy_timeout,
                                  self.dummy_event)
        exc = self.assertRaises(exception.ResourceFailure, tr)
        self.assertIn('Resource ID was not provided', six.text_type(exc))

    @mock.patch.object(resource.Resource, 'update_template_diff_properties')
    @mock.patch.object(resource.Resource, '_needs_update')
    def test_update_convergence(self, mock_nu, mock_utd):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType'}
            }}, env=self.env)
        stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl)
        stack.thread_group_mgr = tools.DummyThreadGroupManager()
        stack.converge_stack(stack.t, action=stack.CREATE)
        res = stack.resources['test_res']
        res.requires = [2]
        res.action = res.CREATE
        res.store()
        self._assert_resource_lock(res.id, None, None)

        new_temp = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType',
                             'Properties': {'Foo': 'abc'}}
            }}, env=self.env)
        new_temp.store(stack.context)
        new_stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                 new_temp, stack_id=self.stack.id)
        res.stack.convergence = True

        res_data = {(1, True): {u'id': 4, u'name': 'A', 'attrs': {}},
                    (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}}
        res_data = node_data.load_resources_data(res_data)
        tr = scheduler.TaskRunner(res.update_convergence, new_temp.id,
                                  res_data, 'engine-007', 120, new_stack)
        tr()

        self.assertItemsEqual([3, 4], res.requires)
        self.assertEqual(res.action, resource.Resource.UPDATE)
        self.assertEqual(res.status, resource.Resource.COMPLETE)
        self._assert_resource_lock(res.id, None, 2)

    def test_update_convergence_throws_timeout(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType'}
            }}, env=self.env)
        stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl)
        stack.thread_group_mgr = tools.DummyThreadGroupManager()
        stack.converge_stack(stack.t, action=stack.CREATE)
        res = stack.resources['test_res']
        res.store()

        new_temp = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType',
                             'Properties': {'Foo': 'abc'}}
            }}, env=self.env)
        new_temp.store(stack.context)
        new_stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                 new_temp, stack_id=self.stack.id)

        res_data = {}
        tr = scheduler.TaskRunner(res.update_convergence, new_temp.id,
                                  res_data, 'engine-007', -1, new_stack,
                                  self.dummy_event)
        self.assertRaises(scheduler.Timeout, tr)

    def test_update_convergence_with_substitute_class(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'GenericResourceType')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.store()

        new_temp = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType',
                             'Properties': {'Foo': 'abc'}}
            }}, env=self.env)
        new_temp.store(self.stack.context)
        new_stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                 new_temp, stack_id=self.stack.id)

        res_data = {}
        self.assertRaises(resource.UpdateReplace, res.update_convergence,
                          new_temp.id, res_data, 'engine-007', -1, new_stack)

    def test_update_convergence_checks_resource_class(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'GenericResourceType')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.store()

        new_temp = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType',
                             'Properties': {'Foo': 'abc'}}
            }}, env=self.env)
        ctx = utils.dummy_context()
        new_temp.store(ctx)
        new_stack = parser.Stack(ctx, 'test_stack', new_temp,
                                 stack_id=self.stack.id)

        res_data = {}
        tr = scheduler.TaskRunner(res.update_convergence, new_temp.id,
                                  res_data, 'engine-007', -1, new_stack,
                                  self.dummy_event)
        self.assertRaises(resource.UpdateReplace, tr)

    @mock.patch.object(resource.Resource, '_needs_update')
    @mock.patch.object(resource.Resource, '_check_for_convergence_replace')
    def test_update_in_progress_convergence(self, mock_cfcr, mock_nu):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.requires = [1, 2]
        res.store()
        rs = resource_objects.Resource.get_obj(self.stack.context, res.id)
        rs.update_and_save({'engine_id': 'not-this'})
        self._assert_resource_lock(res.id, 'not-this', None)

        res.stack.convergence = True

        res_data = {(1, True): {u'id': 4, u'name': 'A', 'attrs': {}},
                    (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}}
        res_data = node_data.load_resources_data(res_data)
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType'}
            }}, env=self.env)
        new_stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                 tmpl, stack_id=self.stack.id)
        tr = scheduler.TaskRunner(res.update_convergence, 'template_key',
                                  res_data, 'engine-007',
                                  self.dummy_timeout, new_stack)
        ex = self.assertRaises(exception.UpdateInProgress, tr)
        msg = ("The resource %s is already being updated." % res.name)
        self.assertEqual(msg, six.text_type(ex))
        # ensure requirements are not updated for failed resource
        rs = resource_objects.Resource.get_obj(self.stack.context, res.id)
        self.assertEqual([1, 2], rs.requires)

    @mock.patch.object(resource.Resource, 'update_template_diff_properties')
    @mock.patch.object(resource.Resource, '_needs_update')
    def test_update_resource_convergence_failed(self,
                                                mock_needs_update,
                                                mock_update_template_diff):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType'}
            }}, env=self.env)
        stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl)
        stack.thread_group_mgr = tools.DummyThreadGroupManager()
        stack.converge_stack(stack.t, action=stack.CREATE)
        res = stack.resources['test_res']
        res.requires = [2]
        res.store()
        self._assert_resource_lock(res.id, None, None)

        new_temp = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType',
                             'Properties': {'Foo': 'abc'}}
            }}, env=self.env)
        new_temp.store(stack.context)

        res_data = {(1, True): {u'id': 4, u'name': 'A', 'attrs': {}},
                    (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}}
        res_data = node_data.load_resources_data(res_data)
        new_stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                 new_temp, stack_id=self.stack.id)

        res.stack.convergence = True
        res._calling_engine_id = 'engine-9'

        tr = scheduler.TaskRunner(res.update_convergence, new_temp.id,
                                  res_data, 'engine-007', 120, new_stack,
                                  self.dummy_event)
        self.assertRaises(exception.ResourceFailure, tr)
        self.assertEqual(new_temp.id, res.current_template_id)
        # check if requires was updated
        self.assertItemsEqual([3, 4], res.requires)
        self.assertEqual(res.action, resource.Resource.UPDATE)
        self.assertEqual(res.status, resource.Resource.FAILED)
        self._assert_resource_lock(res.id, None, 3)

    def test_update_resource_convergence_update_replace(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType'}
            }}, env=self.env)
        stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl)
        stack.thread_group_mgr = tools.DummyThreadGroupManager()
        stack.converge_stack(stack.t, action=stack.CREATE)
        res = stack.resources['test_res']
        res.requires = [2]
        res.store()
        self._assert_resource_lock(res.id, None, None)

        new_temp = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType',
                             'Properties': {'Foo': 'abc'}}
            }}, env=self.env)
        new_temp.store(stack.context)

        res.stack.convergence = True

        res_data = {(1, True): {u'id': 4, u'name': 'A', 'attrs': {}},
                    (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}}
        res_data = node_data.load_resources_data(res_data)
        new_stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                 new_temp, stack_id=self.stack.id)
        tr = scheduler.TaskRunner(res.update_convergence, new_temp.id,
                                  res_data, 'engine-007', 120, new_stack,
                                  self.dummy_event)
        self.assertRaises(resource.UpdateReplace, tr)

        # ensure that current_template_id was not updated
        self.assertEqual(stack.t.id, res.current_template_id)
        # ensure that requires was not updated
        self.assertItemsEqual([2], res.requires)
        self._assert_resource_lock(res.id, None, 2)
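    # (Added note) The two rollback tests below simulate a convergence
    # rollback of a replaced resource: with the stack in ROLLBACK
    # IN_PROGRESS and replaced_by already set, update_convergence is
    # expected to call restore_prev_rsrc() and then re-raise UpdateReplace
    # (or fail with ResourceFailure if the restore itself errors out).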
    def test_convergence_update_replace_rollback(self):
        rsrc_def = rsrc_defn.ResourceDefinition('test_res',
                                                'ResourceWithPropsType')
        res = generic_rsrc.ResourceWithProps('test_res', rsrc_def, self.stack)
        res.replaced_by = 'dummy'
        res.store()
        new_temp = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType',
                             'Properties': {'Foo': 'abc'}}
            }}, env=self.env)
        new_stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                 new_temp, stack_id=self.stack.id)
        self.stack.state_set(self.stack.ROLLBACK, self.stack.IN_PROGRESS,
                             'Simulate rollback')
        res.restore_prev_rsrc = mock.Mock()
        tr = scheduler.TaskRunner(res.update_convergence, 'new_tmpl_id', {},
                                  'engine-007', self.dummy_timeout,
                                  new_stack, self.dummy_event)
        self.assertRaises(resource.UpdateReplace, tr)
        self.assertTrue(res.restore_prev_rsrc.called)

    def test_convergence_update_replace_rollback_restore_prev_rsrc_error(
            self):
        rsrc_def = rsrc_defn.ResourceDefinition('test_res',
                                                'ResourceWithPropsType')
        res = generic_rsrc.ResourceWithProps('test_res', rsrc_def, self.stack)
        res.replaced_by = 'dummy'
        res.store()
        new_temp = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'test_res': {'Type': 'ResourceWithPropsType',
                             'Properties': {'Foo': 'abc'}}
            }}, env=self.env)
        new_stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                 new_temp, stack_id=self.stack.id)
        self.stack.state_set(self.stack.ROLLBACK, self.stack.IN_PROGRESS,
                             'Simulate rollback')
        res.restore_prev_rsrc = mock.Mock(side_effect=Exception)
        tr = scheduler.TaskRunner(res.update_convergence, 'new_tmpl_id', {},
                                  'engine-007', self.dummy_timeout,
                                  new_stack, self.dummy_event)
        self.assertRaises(exception.ResourceFailure, tr)
        self.assertTrue(res.restore_prev_rsrc.called)
        self.assertEqual((res.UPDATE, res.FAILED), res.state)

    def test_delete_convergence_ok(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.current_template_id = 1
        res.status = res.COMPLETE
        res.action = res.CREATE
        res.store()
        res.handle_delete = mock.Mock(return_value=None)
        res._update_replacement_data = mock.Mock()
        self._assert_resource_lock(res.id, None, None)
        pcb = mock.Mock()
        with mock.patch.object(resource.Resource, 'delete') as mock_delete:
            tr = scheduler.TaskRunner(res.delete_convergence, 2, {},
                                      'engine-007', 20, pcb)
            tr()
            self.assertTrue(mock_delete.called)
        self.assertTrue(res._update_replacement_data.called)
        self._assert_resource_lock(res.id, None, None)

    def test_delete_convergence_does_not_delete_same_template_resource(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.current_template_id = 'same-template'
        res.store()
        res.delete = mock.Mock()
        tr = scheduler.TaskRunner(res.delete_convergence, 'same-template',
                                  {}, 'engine-007', self.dummy_timeout,
                                  self.dummy_event)
        tr()
        self.assertFalse(res.delete.called)

    def test_delete_convergence_fail(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.current_template_id = 1
        res.status = res.COMPLETE
        res.action = res.CREATE
        res.store()
        res_id = res.id
        res.handle_delete = mock.Mock(side_effect=ValueError('test'))
        self._assert_resource_lock(res.id, None, None)

        res.stack.convergence = True
        tr = scheduler.TaskRunner(res.delete_convergence, 2, {},
                                  'engine-007', self.dummy_timeout,
                                  self.dummy_event)
        self.assertRaises(exception.ResourceFailure, tr)
        self.assertTrue(res.handle_delete.called)

        # confirm that the DB object still exists, and its lock is released.
        rs = resource_objects.Resource.get_obj(self.stack.context, res_id)
        self.assertEqual(rs.id, res_id)
        self.assertEqual(res.FAILED, rs.status)
        self._assert_resource_lock(res.id, None, 2)

    def test_delete_in_progress_convergence(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.current_template_id = 1
        res.status = res.COMPLETE
        res.action = res.CREATE
        res.store()
        self.stack.convergence = True
        res._calling_engine_id = 'engine-9'

        rs = resource_objects.Resource.get_obj(self.stack.context, res.id)
        rs.update_and_save({'engine_id': 'not-this'})
        self._assert_resource_lock(res.id, 'not-this', None)

        tr = scheduler.TaskRunner(res.delete)
        ex = self.assertRaises(exception.UpdateInProgress, tr)
        msg = ("The resource %s is already being updated." % res.name)
        self.assertEqual(msg, six.text_type(ex))

    def test_delete_convergence_updates_needed_by(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.current_template_id = 1
        res.status = res.COMPLETE
        res.action = res.CREATE
        res.store()
        res.destroy = mock.Mock()
        input_data = {(1, False): 4, (2, False): 5}  # needed_by resource ids
        self._assert_resource_lock(res.id, None, None)
        scheduler.TaskRunner(res.delete_convergence, 1, input_data,
                             'engine-007', self.dummy_timeout,
                             self.dummy_event)()

        self.assertItemsEqual([4, 5], res.needed_by)

    @mock.patch.object(resource_objects.Resource, 'get_obj')
    def test_update_replacement_data(self, mock_get_obj):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        r = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        r.replaced_by = 4
        r.needed_by = [4, 5]
        r.store()
        db_res = mock.MagicMock()
        db_res.current_template_id = 'same_tmpl'
        mock_get_obj.return_value = db_res
        r._update_replacement_data('same_tmpl')
        self.assertTrue(mock_get_obj.called)
        self.assertTrue(db_res.select_and_update.called)
        args, kwargs = db_res.select_and_update.call_args
        self.assertEqual({'replaces': None, 'needed_by': [4, 5]}, args[0])
        self.assertIsNone(kwargs['expected_engine_id'])

    @mock.patch.object(resource_objects.Resource, 'get_obj')
    def test_update_replacement_data_ignores_rsrc_from_different_tmpl(
            self, mock_get_obj):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        r = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        r.replaced_by = 4
        db_res = mock.MagicMock()
        db_res.current_template_id = 'tmpl'
        mock_get_obj.return_value = db_res
        # the db resource has template id 'tmpl', but 'diff_tmpl' is passed
        r._update_replacement_data('diff_tmpl')
        self.assertTrue(mock_get_obj.called)
        self.assertFalse(db_res.select_and_update.called)

    def create_resource_for_attributes_tests(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'res': {
                    'type': 'GenericResourceType'
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        return stack

    def test_resolve_attributes_stuff_base_attribute(self):
        # check path with resolving base attributes (via 'show' attribute)
        stack = self.create_resource_for_attributes_tests()
        res = stack['res']

        with mock.patch.object(res, '_show_resource') as show_attr:
            # return None, if resource_id is None
            self.assertIsNone(res.FnGetAtt('show'))

            # set resource_id and recheck with re-written _show_resource
            res.resource_id = mock.Mock()
            res.default_client_name = 'foo'

            show_attr.return_value = 'my attr'
            self.assertEqual('my attr', res.FnGetAtt('show'))
            self.assertEqual(1, show_attr.call_count)

            # clean resolved_values
            res.attributes.reset_resolved_values()
            with mock.patch.object(res, 'client_plugin') as client_plugin:
                # generate error during calling _show_resource
                show_attr.side_effect = [Exception]
                self.assertIsNone(res.FnGetAtt('show'))
                self.assertEqual(2, show_attr.call_count)
                self.assertEqual(1, client_plugin.call_count)

    def test_resolve_attributes_stuff_custom_attribute(self):
        # check path with resolve_attribute
        stack = self.create_resource_for_attributes_tests()
        res = stack['res']
        res.default_client_name = 'foo'

        with mock.patch.object(res, '_resolve_attribute') as res_attr:
            res_attr.side_effect = ['Works', Exception]
            self.assertEqual('Works', res.FnGetAtt('Foo'))
            res_attr.assert_called_once_with('Foo')

            # clean resolved_values
            res.attributes.reset_resolved_values()
            with mock.patch.object(res, 'client_plugin') as client_plugin:
                self.assertIsNone(res.FnGetAtt('Foo'))
                self.assertEqual(1, client_plugin.call_count)

    def test_resolve_attributes_no_default_client_name(self):
        class MyException(Exception):
            pass

        stack = self.create_resource_for_attributes_tests()
        res = stack['res']
        res.default_client_name = None

        with mock.patch.object(res, '_resolve_attribute') as res_attr:
            res_attr.side_effect = [MyException]
            # Make sure this isn't AssertionError
            self.assertRaises(MyException, res.FnGetAtt, 'Foo')

    def test_show_resource(self):
        # check default function _show_resource
        stack = self.create_resource_for_attributes_tests()
        res = stack['res']

        # check default value of entity
        self.assertIsNone(res.entity)
        self.assertIsNone(res.FnGetAtt('show'))

        # set entity and recheck
        res.resource_id = 'test_resource_id'
        res.entity = 'test'

        # mock getting resource info
        res.client = mock.Mock()
        test_obj = mock.Mock()
        test_resource = mock.Mock()
        test_resource.to_dict.return_value = {'test': 'info'}
        test_obj.get.return_value = test_resource
        res.client().test = test_obj

        self.assertEqual({'test': 'info'}, res._show_resource())

        # mock getting resource info as dict
        test_obj.get.return_value = {'test': 'info'}
        res.client().test = test_obj

        self.assertEqual({'test': 'info'}, res._show_resource())

        # check the case where resource entity isn't defined
        res.entity = None
        self.assertIsNone(res._show_resource())

        # check handling AttributeError exception
        res.entity = 'test'
        test_obj.get.side_effect = AttributeError
        self.assertIsNone(res._show_resource())

    def test_delete_convergence_deletes_resource_in_init_state(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        # action is INIT by default
        res.store()
        with mock.patch.object(resource_objects.Resource,
                               'delete') as resource_del:
            tr = scheduler.TaskRunner(res.delete_convergence, 1, {},
                                      'engine-007', 1, self.dummy_event)
            tr()
            resource_del.assert_called_once_with(res.context, res.id)

    def test_delete_convergence_throws_timeout(self):
        tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo')
        res = generic_rsrc.GenericResource('test_res', tmpl, self.stack)
        res.action = res.CREATE
        res.store()
        timeout = -1  # to emulate timeout
        tr = scheduler.TaskRunner(res.delete_convergence, 1, {},
                                  'engine-007', timeout, self.dummy_event)
        self.assertRaises(scheduler.Timeout, tr)

    @mock.patch.object(parser.Stack, 'load')
    @mock.patch.object(resource.Resource, '_load_data')
    @mock.patch.object(template.Template, 'load')
    def test_load_loads_stack_with_cached_data(self, mock_tmpl_load,
                                               mock_load_data,
                                               mock_stack_load):
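        # (Added note) Decorators apply bottom-up, so mock_tmpl_load patches
        # template.Template.load, mock_load_data patches
        # Resource._load_data, and mock_stack_load patches parser.Stack.load;
        # the test verifies the cache data dict is forwarded to Stack.load.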
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'res': {
                    'type': 'GenericResourceType'
                }
            }
        }, env=self.env)
        stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl)
        stack.store()
        mock_tmpl_load.return_value = tmpl
        res = stack['res']
        res.current_template_id = stack.t.id
        res.store()
        data = {'bar': {'atrr1': 'baz', 'attr2': 'baz2'}}
        mock_stack_load.return_value = stack
        resource.Resource.load(stack.context, res.id,
                               stack.current_traversal, True, data)
        self.assertTrue(mock_stack_load.called)
        mock_stack_load.assert_called_with(stack.context,
                                           stack_id=stack.id,
                                           cache_data=data)
        self.assertTrue(mock_load_data.called)


class ResourceDeleteRetryTest(common.HeatTestCase):

    def setUp(self):
        super(ResourceDeleteRetryTest, self).setUp()

        self.env = environment.Environment()
        self.env.load({u'resource_registry':
                      {u'OS::Test::GenericResource': u'GenericResourceType'}})
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  template.Template(empty_template,
                                                    env=self.env),
                                  stack_id=str(uuid.uuid4()))
        self.num_retries = 2
        cfg.CONF.set_override('action_retry_limit', self.num_retries)

    def test_delete_retry_conflict(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'GenericResourceType',
                                            {'Foo': 'xyz123'})
        res = generic_rsrc.ResourceWithProps(
            'test_resource', tmpl, self.stack)
        res.state_set(res.CREATE, res.COMPLETE, 'wobble')
        res.default_client_name = 'neutron'

        self.m.StubOutWithMock(timeutils, 'retry_backoff_delay')
        self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_delete')

        # could be any exception that is_conflict(), using the neutron
        # client one
        generic_rsrc.GenericResource.handle_delete().AndRaise(
            neutron_exp.Conflict(message='foo', request_ids=[1]))
        for i in range(self.num_retries):
            timeutils.retry_backoff_delay(i + 1, jitter_max=2.0).AndReturn(
                0.01)
            generic_rsrc.GenericResource.handle_delete().AndRaise(
                neutron_exp.Conflict(message='foo', request_ids=[1]))
        self.m.ReplayAll()

        exc = self.assertRaises(exception.ResourceFailure,
                                scheduler.TaskRunner(res.delete))
        exc_text = six.text_type(exc)
        self.assertIn('Conflict', exc_text)
        self.m.VerifyAll()

    def test_delete_retry_phys_resource_exists(self):
        tmpl = rsrc_defn.ResourceDefinition(
            'test_resource', 'Foo', {'Foo': 'abc'})
        res = generic_rsrc.ResourceWithPropsRefPropOnDelete(
            'test_resource', tmpl, self.stack)
        res.state_set(res.CREATE, res.COMPLETE, 'wobble')
        cfg.CONF.set_override('action_retry_limit', self.num_retries)

        self.m.StubOutWithMock(timeutils, 'retry_backoff_delay')
        self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_delete')
        self.m.StubOutWithMock(generic_rsrc.ResourceWithPropsRefPropOnDelete,
                               'check_delete_complete')

        generic_rsrc.GenericResource.handle_delete().AndReturn(None)
        generic_rsrc.ResourceWithPropsRefPropOnDelete.check_delete_complete(
            None).AndRaise(
                exception.PhysicalResourceExists(name="foo"))
        for i in range(self.num_retries):
            timeutils.retry_backoff_delay(i + 1, jitter_max=2.0).AndReturn(
                0.01)
            generic_rsrc.GenericResource.handle_delete().AndReturn(None)
            if i < self.num_retries - 1:
                generic_rsrc.ResourceWithPropsRefPropOnDelete.\
                    check_delete_complete(None).AndRaise(
                        exception.PhysicalResourceExists(name="foo"))
            else:
                generic_rsrc.ResourceWithPropsRefPropOnDelete.\
                    check_delete_complete(None).AndReturn(True)
        self.m.ReplayAll()

        scheduler.TaskRunner(res.delete)()
        self.m.VerifyAll()


class ResourceAdoptTest(common.HeatTestCase):

    def test_adopt_resource_success(self):
        adopt_data = '{}'
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
            }
        })
class ResourceAdoptTest(common.HeatTestCase):

    def test_adopt_resource_success(self):
        adopt_data = '{}'
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
            }
        })
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  tmpl,
                                  stack_id=str(uuid.uuid4()),
                                  adopt_stack_data=json.loads(adopt_data))
        res = self.stack['foo']
        res_data = {
            "status": "COMPLETE",
            "name": "foo",
            "resource_data": {},
            "metadata": {},
            "resource_id": "test-res-id",
            "action": "CREATE",
            "type": "GenericResourceType"
        }
        adopt = scheduler.TaskRunner(res.adopt, res_data)
        adopt()
        self.assertEqual({}, res.metadata_get())
        self.assertEqual((res.ADOPT, res.COMPLETE), res.state)

    def test_adopt_with_resource_data_and_metadata(self):
        adopt_data = '{}'
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
            }
        })
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  tmpl,
                                  stack_id=str(uuid.uuid4()),
                                  adopt_stack_data=json.loads(adopt_data))
        res = self.stack['foo']
        res_data = {
            "status": "COMPLETE",
            "name": "foo",
            "resource_data": {"test-key": "test-value"},
            "metadata": {"os_distro": "test-distro"},
            "resource_id": "test-res-id",
            "action": "CREATE",
            "type": "GenericResourceType"
        }
        adopt = scheduler.TaskRunner(res.adopt, res_data)
        adopt()

        self.assertEqual(
            "test-value",
            resource_data_object.ResourceData.get_val(res, "test-key"))
        self.assertEqual({"os_distro": "test-distro"}, res.metadata_get())
        self.assertEqual((res.ADOPT, res.COMPLETE), res.state)

    def test_adopt_resource_missing(self):
        adopt_data = '''{
            "action": "CREATE",
            "status": "COMPLETE",
            "name": "my-test-stack-name",
            "resources": {}
        }'''
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
            }
        })
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  tmpl,
                                  stack_id=str(uuid.uuid4()),
                                  adopt_stack_data=json.loads(adopt_data))
        res = self.stack['foo']
        adopt = scheduler.TaskRunner(res.adopt, None)
        self.assertRaises(exception.ResourceFailure, adopt)
        expected = 'Exception: resources.foo: Resource ID was not provided.'
        self.assertEqual(expected, res.status_reason)


class ResourceDependenciesTest(common.HeatTestCase):

    def setUp(self):
        super(ResourceDependenciesTest, self).setUp()

        self.deps = dependencies.Dependencies()

    def test_no_deps(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['foo']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)

    def test_hot_add_dep_error_create(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {'type': 'ResourceWithPropsType'}
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        res = stack['bar']

        class TestException(Exception):
            pass

        self.patchobject(res, 'add_dependencies',
                         side_effect=TestException)

        def get_dependencies():
            return stack.dependencies

        self.assertRaises(TestException, get_dependencies)

    def test_hot_add_dep_error_load(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {'type': 'ResourceWithPropsType'}
            }
        })
        stack = parser.Stack(utils.dummy_context(),
                             'test_hot_add_dep_err', tmpl)
        stack.store()
        res = stack['bar']
        self.patchobject(res, 'add_dependencies',
                         side_effect=ValueError)

        graph = stack.dependencies.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph)

    def test_ref(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
                'bar': {
                    'Type': 'ResourceWithPropsType',
                    'Properties': {
                        'Foo': {'Ref': 'foo'},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_hot_ref(self):
        '''Test that HOT get_resource creates dependencies.'''
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {
                    'type': 'ResourceWithPropsType',
                    'properties': {
                        'Foo': {'get_resource': 'foo'},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_ref_nested_dict(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
                'bar': {
                    'Type': 'ResourceWithPropsType',
                    'Properties': {
                        'Foo': {'Fn::Base64': {'Ref': 'foo'}},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_hot_ref_nested_dict(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {
                    'type': 'ResourceWithPropsType',
                    'properties': {
                        'Foo': {'Fn::Base64': {'get_resource': 'foo'}},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

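    # The nested-reference tests below check that dependencies are still
    # discovered when the reference is buried inside nested intrinsic
    # function arguments; e.g. a property value such as
    #     {'Fn::Join': [",", ["blarg", {'Ref': 'foo'}, "wibble"]]}
    # should still yield a 'bar' -> 'foo' edge in the dependency graph.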
    def test_ref_nested_deep(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
                'bar': {
                    'Type': 'ResourceWithPropsType',
                    'Properties': {
                        'Foo': {'Fn::Join': [",", ["blarg",
                                                   {'Ref': 'foo'},
                                                   "wibble"]]},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_hot_ref_nested_deep(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {
                    'type': 'ResourceWithPropsType',
                    'properties': {
                        'foo': {'Fn::Join': [",", ["blarg",
                                                   {'get_resource': 'foo'},
                                                   "wibble"]]},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_ref_fail(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
                'bar': {
                    'Type': 'ResourceWithPropsType',
                    'Properties': {
                        'Foo': {'Ref': 'baz'},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        self.assertRaises(exception.StackValidationFailed,
                          stack.validate)

    def test_hot_ref_fail(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {
                    'type': 'ResourceWithPropsType',
                    'properties': {
                        'Foo': {'get_resource': 'baz'},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        ex = self.assertRaises(exception.InvalidTemplateReference,
                               stack.validate)
        self.assertIn('"baz" (in bar.Properties.Foo)', six.text_type(ex))

    def test_validate_value_fail(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'bar': {
                    'type': 'ResourceWithPropsType',
                    'properties': {
                        'FooInt': 'notanint',
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        ex = self.assertRaises(exception.StackValidationFailed,
                               stack.validate)
        self.assertIn("Property error: resources.bar.properties.FooInt: "
                      "Value 'notanint' is not an integer",
                      six.text_type(ex))

        # You can turn off value validation via strict_validate
        stack_novalidate = parser.Stack(utils.dummy_context(), 'test', tmpl,
                                        strict_validate=False)
        self.assertIsNone(stack_novalidate.validate())

    def test_getatt(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
                'bar': {
                    'Type': 'ResourceWithPropsType',
                    'Properties': {
                        'Foo': {'Fn::GetAtt': ['foo', 'bar']},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_hot_getatt(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {
                    'type': 'ResourceWithPropsType',
                    'properties': {
                        'Foo': {'get_attr': ['foo', 'bar']},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_getatt_nested_dict(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
                'bar': {
                    'Type': 'ResourceWithPropsType',
                    'Properties': {
                        'Foo': {'Fn::Base64': {'Fn::GetAtt': ['foo',
                                                              'bar']}},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_hot_getatt_nested_dict(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {
                    'type': 'ResourceWithPropsType',
                    'properties': {
                        'Foo': {'Fn::Base64': {'get_attr': ['foo', 'bar']}},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_getatt_nested_deep(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
                'bar': {
                    'Type': 'ResourceWithPropsType',
                    'Properties': {
                        'Foo': {'Fn::Join': [",", ["blarg",
                                                   {'Fn::GetAtt': ['foo',
                                                                   'bar']},
                                                   "wibble"]]},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_hot_getatt_nested_deep(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {
                    'type': 'ResourceWithPropsType',
                    'properties': {
                        'Foo': {'Fn::Join': [",", ["blarg",
                                                   {'get_attr': ['foo',
                                                                 'bar']},
                                                   "wibble"]]},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_getatt_fail(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
                'bar': {
                    'Type': 'ResourceWithPropsType',
                    'Properties': {
                        'Foo': {'Fn::GetAtt': ['baz', 'bar']},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        ex = self.assertRaises(exception.InvalidTemplateReference,
                               getattr, stack, 'dependencies')
        self.assertIn('"baz" (in bar.Properties.Foo)', six.text_type(ex))

    def test_hot_getatt_fail(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {
                    'type': 'ResourceWithPropsType',
                    'properties': {
                        'Foo': {'get_attr': ['baz', 'bar']},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        ex = self.assertRaises(exception.InvalidTemplateReference,
                               getattr, stack, 'dependencies')
        self.assertIn('"baz" (in bar.Properties.Foo)', six.text_type(ex))

    def test_getatt_fail_nested_deep(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
                'bar': {
                    'Type': 'ResourceWithPropsType',
                    'Properties': {
                        'Foo': {'Fn::Join': [",", ["blarg",
                                                   {'Fn::GetAtt': ['foo',
                                                                   'bar']},
                                                   "wibble",
                                                   {'Fn::GetAtt': ['baz',
                                                                   'bar']}]]},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        ex = self.assertRaises(exception.InvalidTemplateReference,
                               getattr, stack, 'dependencies')
        self.assertIn('"baz" (in bar.Properties.Foo.Fn::Join[1][3])',
                      six.text_type(ex))

    def test_hot_getatt_fail_nested_deep(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {
                    'type': 'ResourceWithPropsType',
                    'properties': {
                        'Foo': {'Fn::Join': [",", ["blarg",
                                                   {'get_attr': ['foo',
                                                                 'bar']},
                                                   "wibble",
                                                   {'get_attr': ['baz',
                                                                 'bar']}]]},
                    }
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        ex = self.assertRaises(exception.InvalidTemplateReference,
                               getattr, stack, 'dependencies')
        self.assertIn('"baz" (in bar.Properties.Foo.Fn::Join[1][3])',
                      six.text_type(ex))

    def test_dependson(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {'Type': 'GenericResourceType'},
                'bar': {
                    'Type': 'GenericResourceType',
                    'DependsOn': 'foo',
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_dependson_hot(self):
        tmpl = template.Template({
            'heat_template_version': '2013-05-23',
            'resources': {
                'foo': {'type': 'GenericResourceType'},
                'bar': {
                    'type': 'GenericResourceType',
                    'depends_on': 'foo',
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)

        res = stack['bar']
        res.add_explicit_dependencies(self.deps)

        graph = self.deps.graph()
        self.assertIn(res, graph)
        self.assertIn(stack['foo'], graph[res])

    def test_dependson_fail(self):
        tmpl = template.Template({
            'HeatTemplateFormatVersion': '2012-12-12',
            'Resources': {
                'foo': {
                    'Type': 'GenericResourceType',
                    'DependsOn': 'wibble',
                }
            }
        })
        stack = parser.Stack(utils.dummy_context(), 'test', tmpl)
        ex = self.assertRaises(exception.InvalidTemplateReference,
                               getattr, stack, 'dependencies')
        self.assertIn('"wibble" (in foo)', six.text_type(ex))


class MetadataTest(common.HeatTestCase):

    def setUp(self):
        super(MetadataTest, self).setUp()
        self.stack = parser.Stack(utils.dummy_context(),
                                  'test_stack',
                                  template.Template(empty_template))
        self.stack.store()

        metadata = {'Test': 'Initial metadata'}
        tmpl = rsrc_defn.ResourceDefinition('metadata_resource', 'Foo',
                                            metadata=metadata)
        self.res = generic_rsrc.GenericResource('metadata_resource',
                                                tmpl, self.stack)

        scheduler.TaskRunner(self.res.create)()
        self.addCleanup(self.stack.delete)

    def test_read_initial(self):
        self.assertEqual({'Test': 'Initial metadata'},
                         self.res.metadata_get())

    def test_write(self):
        test_data = {'Test': 'Newly-written data'}
        self.res.metadata_set(test_data)
        self.assertEqual(test_data, self.res.metadata_get())


class ReducePhysicalResourceNameTest(common.HeatTestCase):
    scenarios = [
        ('one', dict(
            limit=10,
            original='one',
            reduced='one')),
        ('limit_plus_one', dict(
            will_reduce=True,
            limit=10,
            original='onetwothree',
            reduced='on-wothree')),
        ('limit_exact', dict(
            limit=11,
            original='onetwothree',
            reduced='onetwothree')),
        ('limit_minus_one', dict(
            limit=12,
            original='onetwothree',
            reduced='onetwothree')),
        ('limit_four', dict(
            will_reduce=True,
            limit=4,
            original='onetwothree',
            reduced='on-e')),
        ('limit_three', dict(
            will_raise=ValueError,
            limit=3,
            original='onetwothree')),
        ('three_nested_stacks', dict(
            will_reduce=True,
            limit=63,
            original=('ElasticSearch-MasterCluster-ccicxsm25ug6-MasterSvr1'
                      '-men65r4t53hh-MasterServer-gxpc3wqxy4el'),
            reduced=('El-icxsm25ug6-MasterSvr1-men65r4t53hh-'
                     'MasterServer-gxpc3wqxy4el'))),
        ('big_names', dict(
            will_reduce=True,
            limit=63,
            original=('MyReallyQuiteVeryLongStackName-'
                      'MyExtraordinarilyLongResourceName-ccicxsm25ug6'),
            reduced=('My-LongStackName-'
                     'MyExtraordinarilyLongResourceName-ccicxsm25ug6'))),
    ]

    will_raise = None

    will_reduce = False

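    # Expected behaviour, inferred from the scenarios above rather than from
    # any external spec: reduce_physical_resource_name() keeps the tail of
    # the name, marks the elided middle with '-', and returns a string of
    # exactly `limit` characters (e.g. 'onetwothree' with limit=10 becomes
    # 'on-wothree'); names already within the limit come back unchanged, and
    # a limit too small to be useful raises ValueError.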
    def test_reduce(self):
        if self.will_raise:
            self.assertRaises(
                self.will_raise,
                resource.Resource.reduce_physical_resource_name,
                self.original, self.limit)
        else:
            reduced = resource.Resource.reduce_physical_resource_name(
                self.original, self.limit)
            self.assertEqual(self.reduced, reduced)
            if self.will_reduce:
                # check it has been truncated to exactly the limit
                self.assertEqual(self.limit, len(reduced))
            else:
                # check that nothing has changed
                self.assertEqual(self.original, reduced)


class ResourceHookTest(common.HeatTestCase):

    def setUp(self):
        super(ResourceHookTest, self).setUp()

        self.env = environment.Environment()
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  template.Template(empty_template,
                                                    env=self.env),
                                  stack_id=str(uuid.uuid4()))

    def test_hook(self):
        snippet = rsrc_defn.ResourceDefinition('res',
                                               'GenericResourceType')
        res = resource.Resource('res', snippet, self.stack)

        res.data = mock.Mock(return_value={})
        self.assertFalse(res.has_hook('pre-create'))
        self.assertFalse(res.has_hook('pre-update'))

        res.data = mock.Mock(return_value={'pre-create': 'True'})
        self.assertTrue(res.has_hook('pre-create'))
        self.assertFalse(res.has_hook('pre-update'))

        res.data = mock.Mock(return_value={'pre-create': 'False'})
        self.assertFalse(res.has_hook('pre-create'))
        self.assertFalse(res.has_hook('pre-update'))

        res.data = mock.Mock(return_value={'pre-update': 'True'})
        self.assertFalse(res.has_hook('pre-create'))
        self.assertTrue(res.has_hook('pre-update'))

        res.data = mock.Mock(return_value={'pre-delete': 'True'})
        self.assertFalse(res.has_hook('pre-create'))
        self.assertFalse(res.has_hook('pre-update'))
        self.assertTrue(res.has_hook('pre-delete'))

        res.data = mock.Mock(return_value={'post-create': 'True'})
        self.assertFalse(res.has_hook('post-delete'))
        self.assertFalse(res.has_hook('post-update'))
        self.assertTrue(res.has_hook('post-create'))

        res.data = mock.Mock(return_value={'post-update': 'True'})
        self.assertFalse(res.has_hook('post-create'))
        self.assertFalse(res.has_hook('post-delete'))
        self.assertTrue(res.has_hook('post-update'))

        res.data = mock.Mock(return_value={'post-delete': 'True'})
        self.assertFalse(res.has_hook('post-create'))
        self.assertFalse(res.has_hook('post-update'))
        self.assertTrue(res.has_hook('post-delete'))

    def test_set_hook(self):
        snippet = rsrc_defn.ResourceDefinition('res',
                                               'GenericResourceType')
        res = resource.Resource('res', snippet, self.stack)

        res.data_set = mock.Mock()
        res.data_delete = mock.Mock()

        res.trigger_hook('pre-create')
        res.data_set.assert_called_with('pre-create', 'True')

        res.trigger_hook('pre-update')
        res.data_set.assert_called_with('pre-update', 'True')

        res.clear_hook('pre-create')
        res.data_delete.assert_called_with('pre-create')

    def test_signal_clear_hook(self):
        snippet = rsrc_defn.ResourceDefinition('res',
                                               'GenericResourceType')
        res = resource.Resource('res', snippet, self.stack)

        res.clear_hook = mock.Mock()
        res.has_hook = mock.Mock(return_value=True)
        self.assertRaises(exception.ResourceActionNotSupported,
                          res.signal, None)
        self.assertFalse(res.clear_hook.called)

        self.assertRaises(exception.ResourceActionNotSupported,
                          res.signal, {'other_hook': 'alarm'})
        self.assertFalse(res.clear_hook.called)

        self.assertRaises(exception.InvalidBreakPointHook,
                          res.signal, {'unset_hook': 'unknown_hook'})
        self.assertFalse(res.clear_hook.called)

        result = res.signal({'unset_hook': 'pre-create'})
        res.clear_hook.assert_called_with('pre-create')
        self.assertFalse(result)

        result = res.signal({'unset_hook': 'pre-update'})
        res.clear_hook.assert_called_with('pre-update')
        self.assertFalse(result)

        res.has_hook = mock.Mock(return_value=False)
        self.assertRaises(exception.InvalidBreakPointHook,
                          res.signal, {'unset_hook': 'pre-create'})

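    # The *_hook_call tests below all share one workflow: a 'pre-*' or
    # 'post-*' hook registered in the environment pauses the action task
    # mid-way, and the hook is cleared with a signal of the form
    #     res.signal(details={'unset_hook': 'pre-create'})
    # after which run_to_completion() drives the action to COMPLETE.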
    def test_pre_create_hook_call(self):
        self.stack.env.registry.load(
            {'resources': {'res': {'hooks': 'pre-create'}}})
        snippet = rsrc_defn.ResourceDefinition('res',
                                               'GenericResourceType')
        res = resource.Resource('res', snippet, self.stack)
        res.id = '1234'
        res.uuid = uuid.uuid4()
        res.store = mock.Mock()
        task = scheduler.TaskRunner(res.create)
        task.start()
        task.step()
        self.assertTrue(res.has_hook('pre-create'))
        res.signal(details={'unset_hook': 'pre-create'})
        task.run_to_completion()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

    def test_pre_delete_hook_call(self):
        self.stack.env.registry.load(
            {'resources': {'res': {'hooks': 'pre-delete'}}})
        snippet = rsrc_defn.ResourceDefinition('res',
                                               'GenericResourceType')
        res = resource.Resource('res', snippet, self.stack)
        res.id = '1234'
        res.action = 'CREATE'
        res.uuid = uuid.uuid4()
        res.store = mock.Mock()
        self.stack.action = 'DELETE'
        task = scheduler.TaskRunner(res.delete)
        task.start()
        task.step()
        self.assertTrue(res.has_hook('pre-delete'))
        res.signal(details={'unset_hook': 'pre-delete'})
        task.run_to_completion()
        self.assertEqual((res.DELETE, res.COMPLETE), res.state)

    def test_post_create_hook_call(self):
        self.stack.env.registry.load(
            {'resources': {'res': {'hooks': 'post-create'}}})
        snippet = rsrc_defn.ResourceDefinition('res',
                                               'GenericResourceType')
        res = resource.Resource('res', snippet, self.stack)
        res.id = '1234'
        res.uuid = uuid.uuid4()
        res.store = mock.Mock()
        task = scheduler.TaskRunner(res.create)
        task.start()
        task.step()
        self.assertTrue(res.has_hook('post-create'))
        res.signal(details={'unset_hook': 'post-create'})
        task.run_to_completion()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

    def test_post_delete_hook_call(self):
        self.stack.env.registry.load(
            {'resources': {'res': {'hooks': 'post-delete'}}})
        snippet = rsrc_defn.ResourceDefinition('res',
                                               'GenericResourceType')
        res = resource.Resource('res', snippet, self.stack)
        res.id = '1234'
        res.uuid = uuid.uuid4()
        res.action = 'CREATE'
        self.stack.action = 'DELETE'
        res.store = mock.Mock()
        task = scheduler.TaskRunner(res.delete)
        task.start()
        task.step()
        self.assertTrue(res.has_hook('post-delete'))
        res.signal(details={'unset_hook': 'post-delete'})
        task.run_to_completion()
        self.assertEqual((res.DELETE, res.COMPLETE), res.state)


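# As the assertions below imply, is_service_available() returns an
# (available, reason) tuple -- hence the trailing [0] indexing throughout.
# The availability check chains default_client_name -> the client plugin's
# service_types -> endpoint presence in the service catalog, plus a required
# extension for the *Ext resource variants.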
""" mock_service_types, mock_client_plugin = self._mock_client_plugin() mock_client_plugin_method.return_value = mock_client_plugin self.assertTrue( generic_rsrc.ResourceWithDefaultClientName.is_service_available( context=mock.Mock())[0]) mock_client_plugin_method.assert_called_once_with( generic_rsrc.ResourceWithDefaultClientName.default_client_name) mock_service_types.assert_called_once_with() @mock.patch.object(clients.OpenStackClients, 'client_plugin') def test_service_deployed( self, mock_client_plugin_method): """Test availability of resource when the service is deployed. When the service is deployed, resource is considered as available. """ mock_service_types, mock_client_plugin = self._mock_client_plugin( ['test_type'] ) mock_client_plugin_method.return_value = mock_client_plugin self.assertTrue( generic_rsrc.ResourceWithDefaultClientName.is_service_available( context=mock.Mock())[0]) mock_client_plugin_method.assert_called_once_with( generic_rsrc.ResourceWithDefaultClientName.default_client_name) mock_service_types.assert_called_once_with() mock_client_plugin.does_endpoint_exist.assert_called_once_with( service_type='test_type', service_name=(generic_rsrc.ResourceWithDefaultClientName .default_client_name) ) @mock.patch.object(clients.OpenStackClients, 'client_plugin') def test_service_not_deployed( self, mock_client_plugin_method): """Test availability of resource when the service is not deployed. When the service is not deployed, resource is considered as unavailable. """ mock_service_types, mock_client_plugin = self._mock_client_plugin( ['test_type_un_deployed'], False ) mock_client_plugin_method.return_value = mock_client_plugin self.assertFalse( generic_rsrc.ResourceWithDefaultClientName.is_service_available( context=mock.Mock())[0]) mock_client_plugin_method.assert_called_once_with( generic_rsrc.ResourceWithDefaultClientName.default_client_name) mock_service_types.assert_called_once_with() mock_client_plugin.does_endpoint_exist.assert_called_once_with( service_type='test_type_un_deployed', service_name=(generic_rsrc.ResourceWithDefaultClientName .default_client_name) ) @mock.patch.object(clients.OpenStackClients, 'client_plugin') def test_service_deployed_required_extension_true( self, mock_client_plugin_method): """Test availability of resource with a required extension. """ mock_service_types, mock_client_plugin = self._mock_client_plugin( ['test_type'] ) mock_client_plugin.has_extension = mock.Mock( return_value=True) mock_client_plugin_method.return_value = mock_client_plugin self.assertTrue( generic_rsrc.ResourceWithDefaultClientNameExt.is_service_available( context=mock.Mock())[0]) mock_client_plugin_method.assert_called_once_with( generic_rsrc.ResourceWithDefaultClientName.default_client_name) mock_service_types.assert_called_once_with() mock_client_plugin.does_endpoint_exist.assert_called_once_with( service_type='test_type', service_name=(generic_rsrc.ResourceWithDefaultClientName .default_client_name)) mock_client_plugin.has_extension.assert_called_once_with('foo') @mock.patch.object(clients.OpenStackClients, 'client_plugin') def test_service_deployed_required_extension_false( self, mock_client_plugin_method): """Test availability of resource with a required extension. 
""" mock_service_types, mock_client_plugin = self._mock_client_plugin( ['test_type'] ) mock_client_plugin.has_extension = mock.Mock( return_value=False) mock_client_plugin_method.return_value = mock_client_plugin self.assertFalse( generic_rsrc.ResourceWithDefaultClientNameExt.is_service_available( context=mock.Mock())[0]) mock_client_plugin_method.assert_called_once_with( generic_rsrc.ResourceWithDefaultClientName.default_client_name) mock_service_types.assert_called_once_with() mock_client_plugin.does_endpoint_exist.assert_called_once_with( service_type='test_type', service_name=(generic_rsrc.ResourceWithDefaultClientName .default_client_name)) mock_client_plugin.has_extension.assert_called_once_with('foo') @mock.patch.object(clients.OpenStackClients, 'client_plugin') def test_service_deployed_required_extension_exception( self, mock_client_plugin_method): """Test availability of resource with a required extension. """ mock_service_types, mock_client_plugin = self._mock_client_plugin( ['test_type'] ) mock_client_plugin.has_extension = mock.Mock( side_effect=exception.AuthorizationFailure()) mock_client_plugin_method.return_value = mock_client_plugin self.assertRaises( exception.AuthorizationFailure, generic_rsrc.ResourceWithDefaultClientNameExt.is_service_available, context=mock.Mock()) mock_client_plugin_method.assert_called_once_with( generic_rsrc.ResourceWithDefaultClientName.default_client_name) mock_service_types.assert_called_once_with() mock_client_plugin.does_endpoint_exist.assert_called_once_with( service_type='test_type', service_name=(generic_rsrc.ResourceWithDefaultClientName .default_client_name)) mock_client_plugin.has_extension.assert_called_once_with('foo') @mock.patch.object(clients.OpenStackClients, 'client_plugin') def test_service_not_deployed_required_extension( self, mock_client_plugin_method): """Test availability of resource when the service is not deployed. When the service is not deployed, resource is considered as unavailable. """ mock_service_types, mock_client_plugin = self._mock_client_plugin( ['test_type_un_deployed'], False ) mock_client_plugin_method.return_value = mock_client_plugin self.assertFalse( generic_rsrc.ResourceWithDefaultClientNameExt.is_service_available( context=mock.Mock())[0]) mock_client_plugin_method.assert_called_once_with( generic_rsrc.ResourceWithDefaultClientName.default_client_name) mock_service_types.assert_called_once_with() mock_client_plugin.does_endpoint_exist.assert_called_once_with( service_type='test_type_un_deployed', service_name=(generic_rsrc.ResourceWithDefaultClientName .default_client_name)) @mock.patch.object(clients.OpenStackClients, 'client_plugin') def test_service_not_installed_required_extension( self, mock_client_plugin_method): """Test availability of resource when the client is not installed. When the client is not installed, we can't create the resource properly so raise a proper exception for information. """ mock_client_plugin_method.return_value = None self.assertRaises( exception.ClientNotAvailable, generic_rsrc.ResourceWithDefaultClientNameExt.is_service_available, context=mock.Mock()) mock_client_plugin_method.assert_called_once_with( generic_rsrc.ResourceWithDefaultClientName.default_client_name) def test_service_not_available_returns_false(self): """Test when the service is not in service catalog. When the service is not deployed, make sure resource is throwing ResourceTypeUnavailable exception. 
""" with mock.patch.object( generic_rsrc.ResourceWithDefaultClientName, 'is_service_available') as mock_method: mock_method.return_value = ( False, 'Service endpoint not in service catalog.') definition = rsrc_defn.ResourceDefinition( name='Test Resource', resource_type='UnavailableResourceType') mock_stack = mock.MagicMock() mock_stack.in_convergence_check = False mock_stack.db_resource_get.return_value = None rsrc = generic_rsrc.ResourceWithDefaultClientName('test_res', definition, mock_stack) ex = self.assertRaises( exception.ResourceTypeUnavailable, rsrc.validate_template) msg = ('HEAT-E99001 Service sample is not available for resource ' 'type UnavailableResourceType, reason: ' 'Service endpoint not in service catalog.') self.assertEqual(msg, six.text_type(ex), 'invalid exception message') # Make sure is_service_available is called on the right class mock_method.assert_called_once_with(mock_stack.context) def test_service_not_available_throws_exception(self): """Test for other exceptions when checking for service availability Ex. when client throws an error, make sure resource is throwing ResourceTypeUnavailable that contains the original exception message. """ with mock.patch.object( generic_rsrc.ResourceWithDefaultClientName, 'is_service_available') as mock_method: mock_method.side_effect = exception.AuthorizationFailure() definition = rsrc_defn.ResourceDefinition( name='Test Resource', resource_type='UnavailableResourceType') mock_stack = mock.MagicMock() mock_stack.in_convergence_check = False mock_stack.db_resource_get.return_value = None rsrc = generic_rsrc.ResourceWithDefaultClientName('test_res', definition, mock_stack) ex = self.assertRaises( exception.ResourceTypeUnavailable, rsrc.validate_template) msg = ('HEAT-E99001 Service sample is not available for resource ' 'type UnavailableResourceType, reason: ' 'Authorization failed.') self.assertEqual(msg, six.text_type(ex), 'invalid exception message') # Make sure is_service_available is called on the right class mock_method.assert_called_once_with(mock_stack.context) def test_handle_delete_successful(self): self.stack = parser.Stack(utils.dummy_context(), 'test_stack', template.Template(empty_template)) self.stack.store() snippet = rsrc_defn.ResourceDefinition('aresource', 'OS::Heat::None') res = resource.Resource('aresource', snippet, self.stack) FakeClient = collections.namedtuple('Client', ['entity']) client = FakeClient(collections.namedtuple('entity', ['delete'])) self.patchobject(resource.Resource, 'client', return_value=client) delete = mock.Mock() res.client().entity.delete = delete res.entity = 'entity' res.resource_id = '12345' self.assertEqual('12345', res.handle_delete()) delete.assert_called_once_with('12345') def test_handle_delete_not_found(self): self.stack = parser.Stack(utils.dummy_context(), 'test_stack', template.Template(empty_template)) self.stack.store() snippet = rsrc_defn.ResourceDefinition('aresource', 'OS::Heat::None') res = resource.Resource('aresource', snippet, self.stack) FakeClient = collections.namedtuple('Client', ['entity']) client = FakeClient(collections.namedtuple('entity', ['delete'])) class FakeClientPlugin(object): def ignore_not_found(self, ex): if not isinstance(ex, exception.NotFound): raise ex self.patchobject(resource.Resource, 'client', return_value=client) self.patchobject(resource.Resource, 'client_plugin', return_value=FakeClientPlugin()) delete = mock.Mock() delete.side_effect = [exception.NotFound()] res.client().entity.delete = delete res.entity = 'entity' res.resource_id = 
        res.resource_id = '12345'
        res.default_client_name = 'foo'
        self.assertIsNone(res.handle_delete())
        delete.assert_called_once_with('12345')

    def test_handle_delete_raise_error(self):
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  template.Template(empty_template))
        self.stack.store()
        snippet = rsrc_defn.ResourceDefinition('aresource',
                                               'OS::Heat::None')
        res = resource.Resource('aresource', snippet, self.stack)

        FakeClient = collections.namedtuple('Client', ['entity'])
        client = FakeClient(collections.namedtuple('entity', ['delete']))

        class FakeClientPlugin(object):
            def ignore_not_found(self, ex):
                if not isinstance(ex, exception.NotFound):
                    raise ex

        self.patchobject(resource.Resource, 'client', return_value=client)
        self.patchobject(resource.Resource, 'client_plugin',
                         return_value=FakeClientPlugin())
        delete = mock.Mock()
        delete.side_effect = [exception.Error('boom!')]
        res.client().entity.delete = delete
        res.entity = 'entity'
        res.resource_id = '12345'
        ex = self.assertRaises(exception.Error, res.handle_delete)
        self.assertEqual('boom!', six.text_type(ex))
        delete.assert_called_once_with('12345')

    def test_handle_delete_no_entity(self):
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  template.Template(empty_template))
        self.stack.store()
        snippet = rsrc_defn.ResourceDefinition('aresource',
                                               'OS::Heat::None')
        res = resource.Resource('aresource', snippet, self.stack)

        FakeClient = collections.namedtuple('Client', ['entity'])
        client = FakeClient(collections.namedtuple('entity', ['delete']))
        self.patchobject(resource.Resource, 'client', return_value=client)
        delete = mock.Mock()
        res.client().entity.delete = delete
        res.resource_id = '12345'
        self.assertIsNone(res.handle_delete())
        self.assertEqual(0, delete.call_count)

    def test_handle_delete_no_resource_id(self):
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  template.Template(empty_template))
        self.stack.store()
        snippet = rsrc_defn.ResourceDefinition('aresource',
                                               'OS::Heat::None')
        res = resource.Resource('aresource', snippet, self.stack)

        FakeClient = collections.namedtuple('Client', ['entity'])
        client = FakeClient(collections.namedtuple('entity', ['delete']))
        self.patchobject(resource.Resource, 'client', return_value=client)
        delete = mock.Mock()
        res.client().entity.delete = delete
        res.entity = 'entity'
        res.resource_id = None
        self.assertIsNone(res.handle_delete())
        self.assertEqual(0, delete.call_count)

    def test_handle_delete_no_default_client_name(self):
        class MyException(Exception):
            pass

        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  template.Template(empty_template))
        self.stack.store()
        snippet = rsrc_defn.ResourceDefinition('aresource',
                                               'OS::Heat::None')
        res = resource.Resource('aresource', snippet, self.stack)

        FakeClient = collections.namedtuple('Client', ['entity'])
        client = FakeClient(collections.namedtuple('entity', ['delete']))
        self.patchobject(resource.Resource, 'client', return_value=client)
        delete = mock.Mock()
        delete.side_effect = [MyException]
        res.client().entity.delete = delete
        res.entity = 'entity'
        res.resource_id = '1234'
        res.default_client_name = None
        self.assertRaises(MyException, res.handle_delete)


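# TestLiveStateUpdate runs updates with observe_on_update enabled, so the
# engine first observes the "live" state of the physical resource via
# get_live_state() and folds it into the stored properties before applying
# the new definition; as the scenarios show, an EntityNotFound raised during
# observation is expected to escalate to UpdateReplace.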
class TestLiveStateUpdate(common.HeatTestCase):

    scenarios = [
        ('update_all_args', dict(
            live_state={'Foo': 'abb', 'FooInt': 2},
            updated_props={'Foo': 'bca', 'FooInt': 3},
            expected_error=False,
            resource_id='1234',
            expected={'Foo': 'bca', 'FooInt': 3}
        )),
        ('update_some_args', dict(
            live_state={'Foo': 'bca'},
            updated_props={'Foo': 'bca', 'FooInt': 3},
            expected_error=False,
            resource_id='1234',
            expected={'Foo': 'bca', 'FooInt': 3}
        )),
        ('live_state_some_error', dict(
            live_state={'Foo': 'bca'},
            updated_props={'Foo': 'bca', 'FooInt': 3},
            expected_error=False,
            resource_id='1234',
            expected={'Foo': 'bca', 'FooInt': 3}
        )),
        ('entity_not_found', dict(
            live_state=exception.EntityNotFound(entity='resource',
                                                name='test'),
            updated_props={'Foo': 'bca'},
            expected_error=True,
            resource_id='1234',
            expected=resource.UpdateReplace
        )),
        ('live_state_not_found_id', dict(
            live_state=exception.EntityNotFound(entity='resource',
                                                name='test'),
            updated_props={'Foo': 'bca'},
            expected_error=True,
            resource_id=None,
            expected=resource.UpdateReplace
        ))
    ]

    def setUp(self):
        super(TestLiveStateUpdate, self).setUp()

        self.env = environment.Environment()
        self.env.load({u'resource_registry':
                      {u'OS::Test::GenericResource': u'GenericResourceType',
                       u'OS::Test::ResourceWithCustomConstraint':
                           u'ResourceWithCustomConstraint'}})
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  template.Template(empty_template,
                                                    env=self.env),
                                  stack_id=str(uuid.uuid4()))

    def _prepare_resource_live_state(self):
        tmpl = rsrc_defn.ResourceDefinition('test_resource',
                                            'ResourceWithPropsType',
                                            {'Foo': 'abc', 'FooInt': 2})
        res = generic_rsrc.ResourceWithProps('test_resource', tmpl,
                                             self.stack)
        for prop in six.itervalues(res.properties.props):
            prop.schema.update_allowed = True
        res.update_allowed_properties = ('Foo', 'FooInt',)
        scheduler.TaskRunner(res.create)()
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)
        return res

    def _clean_tests_after_resource_live_state(self, res):
        """Revert changes so that other tests keep working correctly.

        The resource properties schema is modified by these tests, so the
        changes need to be reverted for the other tests to work correctly.
        """
        res.update_allowed_properties = []
        res.update_allowed_set = []
        for prop in six.itervalues(res.properties.props):
            prop.schema.update_allowed = False

    def test_update_resource_live_state(self):
        res = self._prepare_resource_live_state()
        res.resource_id = self.resource_id

        cfg.CONF.set_override('observe_on_update', True)

        utmpl = rsrc_defn.ResourceDefinition('test_resource',
                                             'ResourceWithPropsType',
                                             self.updated_props)

        if not self.expected_error:
            self.patchobject(res, 'get_live_state',
                             return_value=self.live_state)
            scheduler.TaskRunner(res.update, utmpl)()
            self.assertEqual((res.UPDATE, res.COMPLETE), res.state)
            self.assertEqual(self.expected, res.properties.data)
        else:
            self.patchobject(res, 'get_live_state',
                             side_effect=[self.live_state])
            self.assertRaises(self.expected,
                              scheduler.TaskRunner(res.update, utmpl))
        self._clean_tests_after_resource_live_state(res)

    def test_get_live_resource_data_success(self):
        res = self._prepare_resource_live_state()
        res.resource_id = self.resource_id
        res._show_resource = mock.MagicMock(return_value={'a': 'b'})
        self.assertEqual({'a': 'b'}, res.get_live_resource_data())
        self._clean_tests_after_resource_live_state(res)

    def test_get_live_resource_data_not_found(self):
        res = self._prepare_resource_live_state()
        res.default_client_name = 'foo'
        res.resource_id = self.resource_id
        res._show_resource = mock.MagicMock(
            side_effect=[exception.NotFound()])
        res.client_plugin = mock.MagicMock()
        res.client_plugin().is_not_found = mock.MagicMock(return_value=True)
        ex = self.assertRaises(exception.EntityNotFound,
                               res.get_live_resource_data)
        self.assertEqual('The Resource (test_resource) could not be found.',
                         six.text_type(ex))
        self._clean_tests_after_resource_live_state(res)

    def test_parse_live_resource_data(self):
        res = self._prepare_resource_live_state()
        res.update_allowed_props = mock.Mock(return_value=['Foo', 'Bar'])
        resource_data = {
            'Foo': 'brave new data',
            'Something not so good': 'for all of us'
        }
        res._update_allowed_properties = ['Foo', 'Bar']
        result = res.parse_live_resource_data(res.properties, resource_data)
        self.assertEqual({'Foo': 'brave new data'}, result)
        self._clean_tests_after_resource_live_state(res)

    def test_get_live_resource_data_not_found_no_default_client_name(self):
        class MyException(Exception):
            pass

        res = self._prepare_resource_live_state()
        res.default_client_name = None
        res.resource_id = self.resource_id
        res._show_resource = mock.MagicMock(
            side_effect=[MyException])
        res.client_plugin = mock.MagicMock()
        res.client_plugin().is_not_found = mock.MagicMock(return_value=True)
        self.assertRaises(MyException, res.get_live_resource_data)
        self._clean_tests_after_resource_live_state(res)

    def test_get_live_resource_data_other_error(self):
        res = self._prepare_resource_live_state()
        res.resource_id = self.resource_id
        res._show_resource = mock.MagicMock(
            side_effect=[exception.Forbidden()])
        res.client_plugin = mock.MagicMock()
        res.client_plugin().is_not_found = mock.MagicMock(return_value=False)
        self.assertRaises(exception.Forbidden, res.get_live_resource_data)
        self._clean_tests_after_resource_live_state(res)


class ResourceUpdateRestrictionTest(common.HeatTestCase):
    def setUp(self):
        super(ResourceUpdateRestrictionTest, self).setUp()
        resource._register_class('TestResourceType',
                                 test_resource.TestResource)
        resource._register_class('NoneResourceType',
                                 none_resource.NoneResource)
        self.tmpl = {
            'heat_template_version': '2013-05-23',
            'resources': {
                'bar': {
                    'type': 'TestResourceType',
                    'properties': {
                        'value': '1234',
                        'update_replace': False
                    }
                }
            }
        }
        self.dummy_timeout = 10
        self.dummy_event = eventlet.event.Event()

    def create_resource(self):
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  template.Template(self.tmpl,
                                                    env=self.env),
                                  stack_id=str(uuid.uuid4()))
        res = self.stack['bar']
        scheduler.TaskRunner(res.create)()
        return res

    def create_convergence_resource(self):
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  template.Template(self.tmpl,
                                                    env=self.env),
                                  stack_id=str(uuid.uuid4()))
        res_data = {}
        res = self.stack['bar']
        pcb = mock.Mock()
        self.patchobject(res, 'lock')
        res.create_convergence(self.stack.t.id, res_data, 'engine-007',
                               self.dummy_timeout, pcb)
        return res

    def test_update_restricted(self):
        self.env_snippet = {u'resource_registry': {
            u'resources': {
                'bar': {'restricted_actions': 'update'}
            }
        }
        }
        self.env = environment.Environment()
        self.env.load(self.env_snippet)
        res = self.create_resource()
        ev = self.patchobject(res, '_add_event')
        props = self.tmpl['resources']['bar']['properties']
        props['value'] = '4567'
        snippet = rsrc_defn.ResourceDefinition('bar', 'TestResourceType',
                                               props)
        error = self.assertRaises(exception.ResourceFailure,
                                  scheduler.TaskRunner(res.update, snippet))
        self.assertEqual('ResourceActionRestricted: resources.bar: '
                         'update is restricted for resource.',
                         six.text_type(error))
        self.assertEqual('UPDATE', error.action)
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)
        ev.assert_called_with(res.UPDATE, res.FAILED,
                              'update is restricted for resource.')

    def test_replace_restricted(self):
        self.env_snippet = {u'resource_registry': {
            u'resources': {
                'bar': {'restricted_actions': 'replace'}
            }
        }
        }
        self.env = environment.Environment()
        self.env.load(self.env_snippet)
        res = self.create_resource()
        ev = self.patchobject(res, '_add_event')
        props = self.tmpl['resources']['bar']['properties']
        props['update_replace'] = True
        snippet = rsrc_defn.ResourceDefinition('bar', 'TestResourceType',
                                               props)
        error = self.assertRaises(exception.ResourceFailure,
                                  scheduler.TaskRunner(res.update, snippet))
        self.assertEqual('ResourceActionRestricted: resources.bar: '
                         'replace is restricted for resource.',
                         six.text_type(error))
        self.assertEqual('UPDATE', error.action)
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)
        ev.assert_called_with(res.UPDATE, res.FAILED,
                              'replace is restricted for resource.')

    def test_update_with_replace_restricted(self):
        self.env_snippet = {u'resource_registry': {
            u'resources': {
                'bar': {'restricted_actions': 'replace'}
            }
        }
        }
        self.env = environment.Environment()
        self.env.load(self.env_snippet)
        res = self.create_resource()
        ev = self.patchobject(res, '_add_event')
        props = self.tmpl['resources']['bar']['properties']
        props['value'] = '4567'
        snippet = rsrc_defn.ResourceDefinition('bar', 'TestResourceType',
                                               props)
        self.assertIsNone(scheduler.TaskRunner(res.update, snippet)())
        self.assertEqual((res.UPDATE, res.COMPLETE), res.state)
        ev.assert_called_with(res.UPDATE, res.COMPLETE,
                              'state changed')

    def test_replace_with_update_restricted(self):
        self.env_snippet = {u'resource_registry': {
            u'resources': {
                'bar': {'restricted_actions': 'update'}
            }
        }
        }
        self.env = environment.Environment()
        self.env.load(self.env_snippet)
        res = self.create_resource()
        ev = self.patchobject(res, '_add_event')
        prep_replace = self.patchobject(res, 'prepare_for_replace')
        props = self.tmpl['resources']['bar']['properties']
        props['update_replace'] = True
        snippet = rsrc_defn.ResourceDefinition('bar', 'TestResourceType',
                                               props)
        error = self.assertRaises(resource.UpdateReplace,
                                  scheduler.TaskRunner(res.update, snippet))
        self.assertIn('requires replacement', six.text_type(error))
        self.assertEqual(1, prep_replace.call_count)
        ev.assert_not_called()

    def test_replace_restricted_type_change_with_convergence(self):
        self.env_snippet = {u'resource_registry': {
            u'resources': {
                'bar': {'restricted_actions': 'replace'}
            }
        }
        }
        self.env = environment.Environment()
        self.env.load(self.env_snippet)
        res = self.create_convergence_resource()
        ev = self.patchobject(res, '_add_event')
        bar = self.tmpl['resources']['bar']
        bar['type'] = 'OS::Heat::None'
        self.new_stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                      template.Template(self.tmpl,
                                                        env=self.env))
        error = self.assertRaises(
            exception.ResourceFailure,
            scheduler.TaskRunner(res.update_convergence,
                                 self.stack.t.id, {}, 'engine-007',
                                 self.dummy_timeout, self.new_stack,
                                 eventlet.event.Event()))
        self.assertEqual('ResourceActionRestricted: resources.bar: '
                         'replace is restricted for resource.',
                         six.text_type(error))
        self.assertEqual('UPDATE', error.action)
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)
        ev.assert_called_with(res.UPDATE, res.FAILED,
                              'replace is restricted for resource.')

    def test_update_restricted_type_change_with_convergence(self):
        self.env_snippet = {u'resource_registry': {
            u'resources': {
                'bar': {'restricted_actions': 'update'}
            }
        }
        }
        self.env = environment.Environment()
        self.env.load(self.env_snippet)
        res = self.create_convergence_resource()
        ev = self.patchobject(res, '_add_event')
        bar = self.tmpl['resources']['bar']
        bar['type'] = 'OS::Heat::None'
        self.new_stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                      template.Template(self.tmpl,
                                                        env=self.env))
        error = self.assertRaises(
            resource.UpdateReplace,
            scheduler.TaskRunner(res.update_convergence,
                                 self.stack.t.id, {}, 'engine-007',
                                 self.dummy_timeout, self.new_stack,
                                 eventlet.event.Event()))
        self.assertIn('requires replacement', six.text_type(error))
        ev.assert_not_called()


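# TestResourceMapping sanity-checks every resource_mapping() /
# available_resource_mapping() exported by the in-tree plugin modules: each
# type name must follow the
# 'Platform type::Service/Type::Optional Sub-sections::Name' convention with
# an 'AWS' or 'OS' platform prefix, and each mapped class must subclass
# Resource and be defined in the module that exports it.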
class TestResourceMapping(common.HeatTestCase):

    def _check_mapping_func(self, func, module):
        self.assertTrue(callable(func))
        res = func()
        self.assertIsInstance(res, collections.Mapping)
        for r_type, r_class in six.iteritems(res):
            self.assertIsInstance(r_type, six.string_types)
            type_elements = r_type.split('::')
            # type has fixed format
            # Platform type::Service/Type::Optional Sub-sections::Name
            self.assertGreaterEqual(len(type_elements), 3)
            # type should be OS or AWS
            self.assertIn(type_elements[0], ('AWS', 'OS'))
            # check that value is a class object
            self.assertIsInstance(r_class, six.class_types)
            # check that class is a subclass of the Resource base class
            self.assertTrue(issubclass(r_class, resource.Resource))
            # check that the mentioned class is present in the same module
            self.assertTrue(hasattr(module, six.text_type(r_class.__name__)))
        return len(res)

    def test_resource_mappings(self):
        # use the plugin manager for loading all resources
        # use the same approach as in heat.engine.resources.__init__
        manager = plugin_manager.PluginManager('heat.engine.resources')
        num_of_types = 0
        for module in manager.modules:
            # check for both potential mappings
            for name in ('resource_mapping', 'available_resource_mapping'):
                mapping_func = getattr(module, name, None)
                if mapping_func:
                    num_of_types += self._check_mapping_func(
                        mapping_func, module)

        # check the number of registered resource types to make sure
        # that there are no regressions
        # It's a soft check and should not be a cause of a merge conflict
        # Feel free to update it in some separate patch
        self.assertGreaterEqual(num_of_types, 137)


class TestResourcePropDataUpdate(common.HeatTestCase):

    scenarios = [
        ('s1', dict(
            old_rpd={1: 2}, new_rpd={3: 4}, replaced=True)),
        ('s2', dict(
            old_rpd={'1': '2'}, new_rpd={'3': '4'}, replaced=True)),
        ('s3', dict(
            old_rpd={'1': '2'}, new_rpd={'1': '4'}, replaced=True)),
        ('s4', dict(
            old_rpd={'1': '2'}, new_rpd={'1': '2'}, replaced=False)),
        ('s5', dict(
            old_rpd={'1': '2', 4: 3}, new_rpd={'1': '2'}, replaced=True)),
        ('s6', dict(
            old_rpd={'1': '2', 4: 3}, new_rpd={'1': '2', 4: 3},
            replaced=False)),
        ('s7', dict(
            old_rpd={'1': '2'}, new_rpd={'1': '2', 4: 3}, replaced=True)),
        ('s8', dict(
            old_rpd={'1': '2'}, new_rpd={}, replaced=True)),
        ('s9', dict(
            old_rpd={}, new_rpd={1: 2}, replaced=True)),
        ('s10', dict(
            old_rpd={}, new_rpd={}, replaced=False)),
        ('s11', dict(
            old_rpd=None, new_rpd={}, replaced=False)),
        ('s12', dict(
            old_rpd={}, new_rpd=None, replaced=False)),
        ('s13', dict(
            old_rpd={3: 4}, new_rpd=None, replaced=True)),
    ]

    def setUp(self):
        super(TestResourcePropDataUpdate, self).setUp()
        self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
                                  template.Template(empty_template))
        self.stack.store()
        snippet = rsrc_defn.ResourceDefinition('aresource',
                                               'GenericResourceType')
        self.res = resource.Resource('aresource', snippet, self.stack)

    def test_create_or_replace_rsrc_prop_data(self):
        res = self.res
        res._stored_properties_data = self.old_rpd
        res.store()
        old_rpd_id = res._rsrc_prop_data_id
        with mock.patch("heat.engine.function.resolve") as mock_fr:
            mock_fr.return_value = self.new_rpd
            res._update_stored_properties()
            res.store()
        new_rpd_id = res._rsrc_prop_data_id
        self.assertEqual(self.replaced, old_rpd_id != new_rpd_id)
heat-10.0.2/heat/tests/aws/0000775000175000017500000000000013343562672015425 5ustar zuulzuul00000000000000
heat-10.0.2/heat/tests/aws/test_instance.py0000666000175000017500000020613513343562351020645 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import copy
import uuid

import mock
import mox
from neutronclient.v2_0 import client as neutronclient
import six

from heat.common import exception
from heat.common import template_format
from heat.engine.clients.os import cinder
from heat.engine.clients.os import glance
from heat.engine.clients.os import neutron
from heat.engine.clients.os import nova
from heat.engine.clients import progress
from heat.engine import environment
from heat.engine import resource
from heat.engine.resources.aws.ec2 import instance as instances
from heat.engine.resources import scheduler_hints as sh
from heat.engine import scheduler
from heat.engine import stack as parser
from heat.engine import template
from heat.tests import common
from heat.tests.openstack.nova import fakes as fakes_nova
from heat.tests import utils


wp_template = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "WordPress",
  "Parameters" : {
    "KeyName" : {
      "Description" : "KeyName",
      "Type" : "String",
      "Default" : "test"
    }
  },
  "Resources" : {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId" : "F17-x86_64-gold",
        "InstanceType" : "m1.large",
        "KeyName" : "test",
        "NovaSchedulerHints" : [{"Key": "foo", "Value": "spam"},
                                {"Key": "bar", "Value": "eggs"},
                                {"Key": "foo", "Value": "ham"},
                                {"Key": "foo", "Value": "baz"}],
        "UserData" : "wordpress",
        "BlockDeviceMappings": [
            {
              "DeviceName": "vdb",
              "Ebs": {"SnapshotId": "9ef5496e-7426-446a-bbc8-01f84d9c9972",
                      "DeleteOnTermination": "True"}
            }],
        "Volumes" : [
            {
              "Device": "/dev/vdc",
              "VolumeId": "cccc"
            },
            {
              "Device": "/dev/vdd",
              "VolumeId": "dddd"
            }]
      }
    }
  }
}
'''


class InstancesTest(common.HeatTestCase):
    def setUp(self):
        super(InstancesTest, self).setUp()
        self.fc = fakes_nova.FakeClient()

    def _setup_test_stack(self, stack_name):
        t = template_format.parse(wp_template)
        tmpl = template.Template(
            t, env=environment.Environment({'KeyName': 'test'}))
        stack = parser.Stack(utils.dummy_context(), stack_name, tmpl,
                             stack_id=str(uuid.uuid4()))
        return (tmpl, stack)

    def _mock_get_image_id_success(self, imageId_input, imageId):
        self.m.StubOutWithMock(glance.GlanceClientPlugin,
                               'find_image_by_name_or_id')
        glance.GlanceClientPlugin.find_image_by_name_or_id(
            imageId_input).MultipleTimes().AndReturn(imageId)

    def _mock_get_image_id_fail(self, image_id, exp):
        self.m.StubOutWithMock(glance.GlanceClientPlugin,
                               'find_image_by_name_or_id')
        glance.GlanceClientPlugin.find_image_by_name_or_id(
            image_id).AndRaise(exp)

    def _get_test_template(self, stack_name, image_id=None, volumes=False):
        (tmpl, stack) = self._setup_test_stack(stack_name)

        tmpl.t['Resources']['WebServer']['Properties'][
            'ImageId'] = image_id or 'CentOS 5.2'
        tmpl.t['Resources']['WebServer']['Properties'][
            'InstanceType'] = '256 MB Server'
        if not volumes:
            tmpl.t['Resources']['WebServer']['Properties']['Volumes'] = []

        return tmpl, stack

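    # The `bdm` dict assembled below uses the nova-client block device
    # mapping shorthand
    # '<source_id>:<snap|vol>:<size>:<delete_on_termination>' (format
    # inferred from the values asserted in these tests, e.g. '1234:snap:1'
    # and '9ef5496e-...:snap::True'); empty fields are simply left blank
    # between the colons.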
    def _setup_test_instance(self, return_server, name, image_id=None,
                             stub_create=True, stub_complete=False,
                             volumes=False):
        stack_name = '%s_s' % name
        tmpl, self.stack = self._get_test_template(stack_name, image_id,
                                                   volumes=volumes)
        self.instance_props = tmpl.t['Resources']['WebServer']['Properties']
        resource_defns = tmpl.resource_definitions(self.stack)
        instance = instances.Instance(name,
                                      resource_defns['WebServer'],
                                      self.stack)
        bdm = {"vdb": "9ef5496e-7426-446a-bbc8-01f84d9c9972:snap::True"}
        self._mock_get_image_id_success(image_id or 'CentOS 5.2', 1)
        self.m.StubOutWithMock(nova.NovaClientPlugin, '_create')
        nova.NovaClientPlugin._create().AndReturn(self.fc)
        self.stub_SnapshotConstraint_validate()

        if stub_create:
            self.m.StubOutWithMock(self.fc.servers, 'create')
            self.fc.servers.create(
                image=1, flavor=1, key_name='test',
                name=utils.PhysName(
                    stack_name,
                    instance.name,
                    limit=instance.physical_resource_name_limit),
                security_groups=None,
                userdata=mox.IgnoreArg(),
                scheduler_hints={'foo': ['spam', 'ham', 'baz'],
                                 'bar': 'eggs'},
                meta=None, nics=None, availability_zone=None,
                block_device_mapping=bdm).AndReturn(
                    return_server)
        if stub_complete:
            self.m.StubOutWithMock(self.fc.servers, 'get')
            self.fc.servers.get(return_server.id
                                ).MultipleTimes().AndReturn(return_server)

        return instance

    def _create_test_instance(self, return_server, name, stub_create=True):
        instance = self._setup_test_instance(return_server, name,
                                             stub_create=stub_create,
                                             stub_complete=True)
        self.m.ReplayAll()
        scheduler.TaskRunner(instance.create)()
        self.m.UnsetStubs()
        return instance

    def _stub_glance_for_update(self, image_id=None):
        self._mock_get_image_id_success(image_id or 'CentOS 5.2', 1)

    def test_instance_create(self):
        return_server = self.fc.servers.list()[1]
        instance = self._create_test_instance(return_server,
                                              'in_create')
        # this makes sure the auto increment worked on instance creation
        self.assertGreater(instance.id, 0)

        expected_ip = return_server.networks['public'][0]
        expected_az = getattr(return_server, 'OS-EXT-AZ:availability_zone')
        self.m.StubOutWithMock(self.fc.servers, 'get')
        self.fc.servers.get(instance.resource_id).MultipleTimes(
        ).AndReturn(return_server)
        self.m.ReplayAll()

        self.assertEqual(expected_ip, instance.FnGetAtt('PublicIp'))
        self.assertEqual(expected_ip, instance.FnGetAtt('PrivateIp'))
        self.assertEqual(expected_ip, instance.FnGetAtt('PublicDnsName'))
        self.assertEqual(expected_ip, instance.FnGetAtt('PrivateDnsName'))
        self.assertEqual(expected_az, instance.FnGetAtt('AvailabilityZone'))

        self.m.VerifyAll()

    def test_instance_create_with_BlockDeviceMappings(self):
        return_server = self.fc.servers.list()[4]
        instance = self._create_test_instance(return_server,
                                              'create_with_bdm')
        # this makes sure the auto increment worked on instance creation
        self.assertGreater(instance.id, 0)

        expected_ip = return_server.networks['public'][0]
        expected_az = getattr(return_server, 'OS-EXT-AZ:availability_zone')
        self.assertEqual(expected_ip, instance.FnGetAtt('PublicIp'))
        self.assertEqual(expected_ip, instance.FnGetAtt('PrivateIp'))
        self.assertEqual(expected_ip, instance.FnGetAtt('PublicDnsName'))
        self.assertEqual(expected_ip, instance.FnGetAtt('PrivateDnsName'))
        self.assertEqual(expected_az, instance.FnGetAtt('AvailabilityZone'))

        self.m.VerifyAll()

    def test_build_block_device_mapping(self):
        return_server = self.fc.servers.list()[1]
        instance = self._create_test_instance(return_server,
                                              'test_build_bdm')

        self.assertIsNone(instance._build_block_device_mapping([]))
        self.assertIsNone(instance._build_block_device_mapping(None))

        self.assertEqual({
            'vdb': '1234:snap:',
            'vdc': '5678:snap::False',
        }, instance._build_block_device_mapping([
            {'DeviceName': 'vdb', 'Ebs': {'SnapshotId': '1234'}},
            {'DeviceName': 'vdc', 'Ebs': {'SnapshotId': '5678',
                                          'DeleteOnTermination': False}},
        ]))
self.assertEqual({ 'vdb': '1234:snap:1', 'vdc': '5678:snap:2:True', }, instance._build_block_device_mapping([ {'DeviceName': 'vdb', 'Ebs': {'SnapshotId': '1234', 'VolumeSize': '1'}}, {'DeviceName': 'vdc', 'Ebs': {'SnapshotId': '5678', 'VolumeSize': '2', 'DeleteOnTermination': True}}, ])) def test_validate_Volumes_property(self): stack_name = 'validate_volumes' tmpl, stack = self._setup_test_stack(stack_name) volumes = [{'Device': 'vdb', 'VolumeId': '1234'}] wsp = tmpl.t['Resources']['WebServer']['Properties'] wsp['Volumes'] = volumes resource_defns = tmpl.resource_definitions(stack) instance = instances.Instance('validate_volumes', resource_defns['WebServer'], stack) self.stub_ImageConstraint_validate() self.stub_SnapshotConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_KeypairConstraint_validate() self.m.StubOutWithMock(cinder.CinderClientPlugin, 'get_volume') ex = exception.EntityNotFound(entity='Volume', name='1234') cinder.CinderClientPlugin.get_volume('1234').AndRaise(ex) self.m.ReplayAll() exc = self.assertRaises(exception.StackValidationFailed, instance.validate) self.assertIn("WebServer.Properties.Volumes[0].VolumeId: " "Error validating value '1234': The Volume " "(1234) could not be found.", six.text_type(exc)) self.m.VerifyAll() def test_validate_BlockDeviceMappings_VolumeSize_valid_str(self): stack_name = 'val_VolumeSize_valid' tmpl, stack = self._setup_test_stack(stack_name) bdm = [{'DeviceName': 'vdb', 'Ebs': {'SnapshotId': '1234', 'VolumeSize': '1'}}] wsp = tmpl.t['Resources']['WebServer']['Properties'] wsp['BlockDeviceMappings'] = bdm resource_defns = tmpl.resource_definitions(stack) instance = instances.Instance('validate_volume_size', resource_defns['WebServer'], stack) self._mock_get_image_id_success('F17-x86_64-gold', 1) self.stub_SnapshotConstraint_validate() self.stub_VolumeConstraint_validate() self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') nova.NovaClientPlugin._create().MultipleTimes().AndReturn(self.fc) self.m.ReplayAll() self.assertIsNone(instance.validate()) self.m.VerifyAll() def test_validate_BlockDeviceMappings_without_Ebs_property(self): stack_name = 'without_Ebs' tmpl, stack = self._setup_test_stack(stack_name) bdm = [{'DeviceName': 'vdb'}] wsp = tmpl.t['Resources']['WebServer']['Properties'] wsp['BlockDeviceMappings'] = bdm wsp['Volumes'] = [] resource_defns = tmpl.resource_definitions(stack) instance = instances.Instance('validate_without_Ebs', resource_defns['WebServer'], stack) self._mock_get_image_id_success('F17-x86_64-gold', 1) self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') nova.NovaClientPlugin._create().MultipleTimes().AndReturn(self.fc) self.m.ReplayAll() exc = self.assertRaises(exception.StackValidationFailed, instance.validate) self.assertIn("Ebs is missing, this is required", six.text_type(exc)) self.m.VerifyAll() def test_validate_BlockDeviceMappings_without_SnapshotId_property(self): stack_name = 'without_SnapshotId' tmpl, stack = self._setup_test_stack(stack_name) bdm = [{'DeviceName': 'vdb', 'Ebs': {'VolumeSize': '1'}}] wsp = tmpl.t['Resources']['WebServer']['Properties'] wsp['BlockDeviceMappings'] = bdm wsp['Volumes'] = [] resource_defns = tmpl.resource_definitions(stack) instance = instances.Instance('validate_without_SnapshotId', resource_defns['WebServer'], stack) self._mock_get_image_id_success('F17-x86_64-gold', 1) self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') nova.NovaClientPlugin._create().MultipleTimes().AndReturn(self.fc) self.m.ReplayAll() exc = 
self.assertRaises(exception.StackValidationFailed, instance.validate) self.assertIn("SnapshotId is missing, this is required", six.text_type(exc)) self.m.VerifyAll() def test_validate_BlockDeviceMappings_without_DeviceName_property(self): stack_name = 'without_DeviceName' tmpl, stack = self._setup_test_stack(stack_name) bdm = [{'Ebs': {'SnapshotId': '1234', 'VolumeSize': '1'}}] wsp = tmpl.t['Resources']['WebServer']['Properties'] wsp['BlockDeviceMappings'] = bdm resource_defns = tmpl.resource_definitions(stack) instance = instances.Instance('validate_without_DeviceName', resource_defns['WebServer'], stack) self.stub_VolumeConstraint_validate() self.stub_SnapshotConstraint_validate() self.stub_ImageConstraint_validate() self.stub_KeypairConstraint_validate() self.stub_FlavorConstraint_validate() self.m.ReplayAll() exc = self.assertRaises(exception.StackValidationFailed, instance.validate) expected_error = ( 'Property error: ' 'Resources.WebServer.Properties.BlockDeviceMappings[0]: ' 'Property DeviceName not assigned') self.assertIn(expected_error, six.text_type(exc)) self.m.VerifyAll() def test_instance_create_with_image_id(self): return_server = self.fc.servers.list()[1] instance = self._setup_test_instance(return_server, 'in_create_imgid', image_id='1') self.m.ReplayAll() scheduler.TaskRunner(instance.create)() # this makes sure the auto increment worked on instance creation self.assertGreater(instance.id, 0) expected_ip = return_server.networks['public'][0] expected_az = getattr(return_server, 'OS-EXT-AZ:availability_zone') self.assertEqual(expected_ip, instance.FnGetAtt('PublicIp')) self.assertEqual(expected_ip, instance.FnGetAtt('PrivateIp')) self.assertEqual(expected_ip, instance.FnGetAtt('PublicDnsName')) self.assertEqual(expected_ip, instance.FnGetAtt('PrivateDnsName')) self.assertEqual(expected_az, instance.FnGetAtt('AvailabilityZone')) self.m.VerifyAll() def test_instance_create_resolve_az_attribute(self): return_server = self.fc.servers.list()[1] instance = self._setup_test_instance(return_server, 'create_resolve_az_attribute') self.m.ReplayAll() scheduler.TaskRunner(instance.create)() expected_az = getattr(return_server, 'OS-EXT-AZ:availability_zone') actual_az = instance._availability_zone() self.assertEqual(expected_az, actual_az) self.m.VerifyAll() def test_instance_create_resolve_az_attribute_nova_az_ext_disabled(self): return_server = self.fc.servers.list()[1] delattr(return_server, 'OS-EXT-AZ:availability_zone') instance = self._setup_test_instance(return_server, 'create_resolve_az_attribute') self.patchobject(self.fc.servers, 'get', return_value=return_server) self.m.ReplayAll() scheduler.TaskRunner(instance.create)() self.assertIsNone(instance._availability_zone()) self.m.VerifyAll() def test_instance_create_image_name_err(self): stack_name = 'test_instance_create_image_name_err_stack' (tmpl, stack) = self._setup_test_stack(stack_name) # create an instance with a nonexistent image name tmpl.t['Resources']['WebServer']['Properties']['ImageId'] = 'Slackware' resource_defns = tmpl.resource_definitions(stack) instance = instances.Instance('instance_create_image_err', resource_defns['WebServer'], stack) self._mock_get_image_id_fail( 'Slackware', glance.client_exception.EntityMatchNotFound( entity='image', args='Slackware')) self.stub_VolumeConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_KeypairConstraint_validate() self.stub_SnapshotConstraint_validate() self.m.ReplayAll() create = scheduler.TaskRunner(instance.create) error =
self.assertRaises(exception.ResourceFailure, create) self.assertEqual( "StackValidationFailed: resources.instance_create_image_err: " "Property error: WebServer.Properties.ImageId: " "Error validating value 'Slackware': No image matching Slackware.", six.text_type(error)) self.m.VerifyAll() def test_instance_create_duplicate_image_name_err(self): stack_name = 'test_instance_create_image_name_err_stack' (tmpl, stack) = self._setup_test_stack(stack_name) # create an instance with a non-unique image name wsp = tmpl.t['Resources']['WebServer']['Properties'] wsp['ImageId'] = 'CentOS 5.2' resource_defns = tmpl.resource_definitions(stack) instance = instances.Instance('instance_create_image_err', resource_defns['WebServer'], stack) self._mock_get_image_id_fail( 'CentOS 5.2', glance.client_exception.EntityUniqueMatchNotFound( entity='image', args='CentOS 5.2')) self.stub_KeypairConstraint_validate() self.stub_SnapshotConstraint_validate() self.stub_VolumeConstraint_validate() self.stub_FlavorConstraint_validate() self.m.ReplayAll() create = scheduler.TaskRunner(instance.create) error = self.assertRaises(exception.ResourceFailure, create) self.assertEqual( "StackValidationFailed: resources.instance_create_image_err: " "Property error: WebServer.Properties.ImageId: " "Error validating value 'CentOS 5.2': No image unique match " "found for CentOS 5.2.", six.text_type(error)) self.m.VerifyAll() def test_instance_create_image_id_err(self): stack_name = 'test_instance_create_image_id_err_stack' (tmpl, stack) = self._setup_test_stack(stack_name) # create an instance with a nonexistent image ID tmpl.t['Resources']['WebServer']['Properties']['ImageId'] = '1' resource_defns = tmpl.resource_definitions(stack) instance = instances.Instance('instance_create_image_err', resource_defns['WebServer'], stack) self._mock_get_image_id_fail( '1', glance.client_exception.EntityMatchNotFound(entity='image', args='1')) self.stub_VolumeConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_KeypairConstraint_validate() self.stub_SnapshotConstraint_validate() self.m.ReplayAll() create = scheduler.TaskRunner(instance.create) error = self.assertRaises(exception.ResourceFailure, create) self.assertEqual( "StackValidationFailed: resources.instance_create_image_err: " "Property error: WebServer.Properties.ImageId: " "Error validating value '1': No image matching 1.", six.text_type(error)) self.m.VerifyAll() def test_handle_check(self): (tmpl, stack) = self._setup_test_stack('test_instance_check_active') res_definitions = tmpl.resource_definitions(stack) instance = instances.Instance('instance_create_image', res_definitions['WebServer'], stack) instance.client = mock.Mock() self.patchobject(nova.NovaClientPlugin, '_check_active', return_value=True) self.assertIsNone(instance.handle_check()) def test_handle_check_raises_exception_if_instance_not_active(self): (tmpl, stack) = self._setup_test_stack('test_instance_check_inactive') res_definitions = tmpl.resource_definitions(stack) instance = instances.Instance('instance_create_image', res_definitions['WebServer'], stack) instance.client = mock.Mock() instance.client.return_value.servers.get.return_value.status = 'foo' self.patchobject(nova.NovaClientPlugin, '_check_active', return_value=False) exc = self.assertRaises(exception.Error, instance.handle_check) self.assertIn('foo', six.text_type(exc)) def test_instance_create_unexpected_status(self): # checking via check_create_complete only so as not to mock # all retry logic on resource creation return_server =
self.fc.servers.list()[1] instance = self._create_test_instance(return_server, 'test_instance_create') creator = progress.ServerCreateProgress(instance.resource_id) self.m.StubOutWithMock(self.fc.servers, 'get') return_server.status = 'BOGUS' self.fc.servers.get(instance.resource_id).AndReturn(return_server) self.m.ReplayAll() e = self.assertRaises(exception.ResourceUnknownStatus, instance.check_create_complete, (creator, None)) self.assertEqual('Instance is not active - Unknown status BOGUS ' 'due to "Unknown"', six.text_type(e)) self.m.VerifyAll() def test_instance_create_error_status(self): # checking via check_create_complete only so as not to mock # all retry logic on resource creation return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, 'test_instance_create') creator = progress.ServerCreateProgress(instance.resource_id) return_server.status = 'ERROR' return_server.fault = { 'message': 'NoValidHost', 'code': 500, 'created': '2013-08-14T03:12:10Z' } self.m.StubOutWithMock(self.fc.servers, 'get') self.fc.servers.get(instance.resource_id).AndReturn(return_server) self.m.ReplayAll() e = self.assertRaises(exception.ResourceInError, instance.check_create_complete, (creator, None)) self.assertEqual( 'Went to status ERROR due to "Message: NoValidHost, Code: 500"', six.text_type(e)) self.m.VerifyAll() def test_instance_create_error_no_fault(self): # checking via check_create_complete only so as not to mock # all retry logic on resource creation return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, 'in_create') creator = progress.ServerCreateProgress(instance.resource_id) return_server.status = 'ERROR' self.m.StubOutWithMock(self.fc.servers, 'get') self.fc.servers.get(instance.resource_id).AndReturn(return_server) self.m.ReplayAll() e = self.assertRaises( exception.ResourceInError, instance.check_create_complete, (creator, None)) self.assertEqual( 'Went to status ERROR due to "Message: Unknown, Code: Unknown"', six.text_type(e)) self.m.VerifyAll() def test_instance_create_with_stack_scheduler_hints(self): return_server = self.fc.servers.list()[1] sh.cfg.CONF.set_override('stack_scheduler_hints', True) # Unroll _create_test_instance to enable checking # for the addition of heat ids (stack id, resource name) stack_name = 'test_instance_create_with_stack_scheduler_hints' (t, stack) = self._get_test_template(stack_name) resource_defns = t.resource_definitions(stack) instance = instances.Instance('in_create_with_sched_hints', resource_defns['WebServer'], stack) bdm = {"vdb": "9ef5496e-7426-446a-bbc8-01f84d9c9972:snap::True"} self._mock_get_image_id_success('CentOS 5.2', 1) # instance.uuid is only available once the resource has been added.
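# With stack_scheduler_hints enabled above, SchedulerHintsMixin is
# expected to inject the heat identifiers (root stack id, stack id and
# name, path in stack, resource name and uuid) alongside the
# user-supplied NovaSchedulerHints; the fc.servers.create expectation
# below verifies exactly that merged scheduler_hints dict.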
stack.add_resource(instance) self.assertIsNotNone(instance.uuid) self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') nova.NovaClientPlugin._create().AndReturn(self.fc) self.stub_SnapshotConstraint_validate() self.m.StubOutWithMock(self.fc.servers, 'create') shm = sh.SchedulerHintsMixin self.fc.servers.create( image=1, flavor=1, key_name='test', name=utils.PhysName( stack_name, instance.name, limit=instance.physical_resource_name_limit), security_groups=None, userdata=mox.IgnoreArg(), scheduler_hints={shm.HEAT_ROOT_STACK_ID: stack.root_stack_id(), shm.HEAT_STACK_ID: stack.id, shm.HEAT_STACK_NAME: stack.name, shm.HEAT_PATH_IN_STACK: [stack.name], shm.HEAT_RESOURCE_NAME: instance.name, shm.HEAT_RESOURCE_UUID: instance.uuid, 'foo': ['spam', 'ham', 'baz'], 'bar': 'eggs'}, meta=None, nics=None, availability_zone=None, block_device_mapping=bdm).AndReturn( return_server) self.m.ReplayAll() scheduler.TaskRunner(instance.create)() self.assertGreater(instance.id, 0) self.m.VerifyAll() def test_instance_validate(self): stack_name = 'test_instance_validate_stack' (tmpl, stack) = self._setup_test_stack(stack_name) tmpl.t['Resources']['WebServer']['Properties']['ImageId'] = '1' resource_defns = tmpl.resource_definitions(stack) instance = instances.Instance('instance_create_image', resource_defns['WebServer'], stack) self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') nova.NovaClientPlugin._create().AndReturn(self.fc) self._mock_get_image_id_success('1', 1) self.stub_VolumeConstraint_validate() self.stub_SnapshotConstraint_validate() self.m.ReplayAll() self.assertIsNone(instance.validate()) self.m.VerifyAll() def _test_instance_create_delete(self, vm_status='ACTIVE', vm_delete_status='NotFound'): return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, 'in_cr_del') instance.resource_id = '1234' instance.status = vm_status # this makes sure the auto increment worked on instance creation self.assertGreater(instance.id, 0) d1 = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]} d1['server']['status'] = vm_status self.m.StubOutWithMock(self.fc.client, 'get_servers_1234') get = self.fc.client.get_servers_1234 get().AndReturn((200, d1)) d2 = copy.deepcopy(d1) if vm_delete_status == 'DELETED': d2['server']['status'] = vm_delete_status get().AndReturn((200, d2)) else: get().AndRaise(fakes_nova.fake_exception()) self.m.ReplayAll() scheduler.TaskRunner(instance.delete)() self.assertEqual((instance.DELETE, instance.COMPLETE), instance.state) self.m.VerifyAll() def test_instance_create_delete_notfound(self): self._test_instance_create_delete() def test_instance_create_delete(self): self._test_instance_create_delete(vm_delete_status='DELETED') def test_instance_create_notfound_on_delete(self): return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, 'in_cr_del') instance.resource_id = '1234' # this makes sure the auto increment worked on instance creation self.assertGreater(instance.id, 0) self.m.StubOutWithMock(self.fc.client, 'delete_servers_1234') self.fc.client.delete_servers_1234().AndRaise( fakes_nova.fake_exception()) self.m.ReplayAll() scheduler.TaskRunner(instance.delete)() self.assertEqual((instance.DELETE, instance.COMPLETE), instance.state) self.m.VerifyAll() def test_instance_update_metadata(self): return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, 'ud_md') self._stub_glance_for_update() ud_tmpl = self._get_test_template('update_stack')[0] 
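# Build an update template whose only change is the resource Metadata;
# handle_update should apply it in place (no replacement), which the
# metadata_get() assertion below confirms.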
ud_tmpl.t['Resources']['WebServer']['Metadata'] = {'test': 123} resource_defns = ud_tmpl.resource_definitions(instance.stack) scheduler.TaskRunner(instance.update, resource_defns['WebServer'])() self.assertEqual({'test': 123}, instance.metadata_get()) def test_instance_update_instance_type(self): """Test case for updating InstanceType. Instance.handle_update supports changing the InstanceType, and makes the change by making a resize API call against Nova. """ return_server = self.fc.servers.list()[1] return_server.id = '1234' instance = self._create_test_instance(return_server, 'ud_type') update_props = self.instance_props.copy() update_props['InstanceType'] = 'm1.small' update_template = instance.t.freeze(properties=update_props) def side_effect(*args): return 2 if args[0] == 'm1.small' else 1 self.patchobject(nova.NovaClientPlugin, 'find_flavor_by_name_or_id', side_effect=side_effect) self.patchobject(glance.GlanceClientPlugin, 'find_image_by_name_or_id', return_value=1) self.m.StubOutWithMock(self.fc.servers, 'get') def status_resize(*args): return_server.status = 'RESIZE' def status_verify_resize(*args): return_server.status = 'VERIFY_RESIZE' def status_active(*args): return_server.status = 'ACTIVE' self.fc.servers.get('1234').WithSideEffects( status_active).AndReturn(return_server) self.fc.servers.get('1234').WithSideEffects( status_resize).AndReturn(return_server) self.fc.servers.get('1234').WithSideEffects( status_verify_resize).AndReturn(return_server) self.fc.servers.get('1234').WithSideEffects( status_verify_resize).AndReturn(return_server) self.fc.servers.get('1234').WithSideEffects( status_active).AndReturn(return_server) self.m.StubOutWithMock(self.fc.client, 'post_servers_1234_action') self.fc.client.post_servers_1234_action( body={'resize': {'flavorRef': 2}}).AndReturn((202, None)) self.fc.client.post_servers_1234_action( body={'confirmResize': None}).AndReturn((202, None)) self.m.ReplayAll() scheduler.TaskRunner(instance.update, update_template)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) self.m.VerifyAll() def test_instance_update_instance_type_failed(self): """Test case for raising an exception when the resize call fails. If the status after a resize is not VERIFY_RESIZE, it means the resize call failed, so we raise an explicit error.
""" return_server = self.fc.servers.list()[1] return_server.id = '1234' instance = self._create_test_instance(return_server, 'ud_type_f') def side_effect(*args): return 2 if args[0] == 'm1.small' else 1 self.patchobject(nova.NovaClientPlugin, 'find_flavor_by_name_or_id', side_effect=side_effect) self.patchobject(glance.GlanceClientPlugin, 'find_image_by_name_or_id', return_value=1) update_props = self.instance_props.copy() update_props['InstanceType'] = 'm1.small' update_template = instance.t.freeze(properties=update_props) self.m.StubOutWithMock(self.fc.servers, 'get') def status_resize(*args): return_server.status = 'RESIZE' def status_error(*args): return_server.status = 'ERROR' self.fc.servers.get('1234').AndReturn(return_server) self.fc.servers.get('1234').WithSideEffects( status_resize).AndReturn(return_server) self.fc.servers.get('1234').WithSideEffects( status_error).AndReturn(return_server) self.m.StubOutWithMock(self.fc.client, 'post_servers_1234_action') self.fc.client.post_servers_1234_action( body={'resize': {'flavorRef': 2}}).AndReturn((202, None)) self.m.ReplayAll() updater = scheduler.TaskRunner(instance.update, update_template) error = self.assertRaises(exception.ResourceFailure, updater) self.assertEqual( "Error: resources.ud_type_f: " "Resizing to '2' failed, status 'ERROR'", six.text_type(error)) self.assertEqual((instance.UPDATE, instance.FAILED), instance.state) self.m.VerifyAll() def create_fake_iface(self, port, net, ip): class fake_interface(object): def __init__(self, port_id, net_id, fixed_ip): self.port_id = port_id self.net_id = net_id self.fixed_ips = [{'ip_address': fixed_ip}] return fake_interface(port, net, ip) def test_instance_update_network_interfaces(self): """Test case for updating NetworkInterfaces. Instance.handle_update supports changing the NetworkInterfaces, and makes the change making a resize API call against Nova. 
""" return_server = self.fc.servers.list()[1] return_server.id = '1234' instance = self._create_test_instance(return_server, 'ud_network_interfaces') self._stub_glance_for_update() # if new overlaps with old, detach the different ones in old, and # attach the different ones in new old_interfaces = [ {'NetworkInterfaceId': 'ea29f957-cd35-4364-98fb-57ce9732c10d', 'DeviceIndex': '2'}, {'NetworkInterfaceId': 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', 'DeviceIndex': '1'}] new_interfaces = [ {'NetworkInterfaceId': 'ea29f957-cd35-4364-98fb-57ce9732c10d', 'DeviceIndex': '2'}, {'NetworkInterfaceId': '34b752ec-14de-416a-8722-9531015e04a5', 'DeviceIndex': '3'}] before_props = self.instance_props.copy() before_props['NetworkInterfaces'] = old_interfaces update_props = self.instance_props.copy() update_props['NetworkInterfaces'] = new_interfaces after = instance.t.freeze(properties=update_props) before = instance.t.freeze(properties=before_props) self.m.StubOutWithMock(self.fc.servers, 'get') self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) self.m.StubOutWithMock(return_server, 'interface_detach') return_server.interface_detach( 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46').AndReturn(None) self.m.StubOutWithMock(return_server, 'interface_attach') return_server.interface_attach('34b752ec-14de-416a-8722-9531015e04a5', None, None).AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(instance.update, after, before)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) self.m.VerifyAll() def test_instance_update_network_interfaces_old_include_new(self): """Test case for updating NetworkInterfaces when old prop includes new. Instance.handle_update supports changing the NetworkInterfaces, and makes the change making a resize API call against Nova. """ return_server = self.fc.servers.list()[1] return_server.id = '1234' instance = self._create_test_instance(return_server, 'ud_network_interfaces') self._stub_glance_for_update() # if old include new, just detach the different ones in old old_interfaces = [ {'NetworkInterfaceId': 'ea29f957-cd35-4364-98fb-57ce9732c10d', 'DeviceIndex': '2'}, {'NetworkInterfaceId': 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', 'DeviceIndex': '1'}] new_interfaces = [ {'NetworkInterfaceId': 'ea29f957-cd35-4364-98fb-57ce9732c10d', 'DeviceIndex': '2'}] before_props = self.instance_props.copy() before_props['NetworkInterfaces'] = old_interfaces update_props = self.instance_props.copy() update_props['NetworkInterfaces'] = new_interfaces after = instance.t.freeze(properties=update_props) before = instance.t.freeze(properties=before_props) self.m.StubOutWithMock(self.fc.servers, 'get') self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) self.m.StubOutWithMock(return_server, 'interface_detach') return_server.interface_detach( 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46').AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(instance.update, after, before)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) def test_instance_update_network_interfaces_new_include_old(self): """Test case for updating NetworkInterfaces when new prop includes old. Instance.handle_update supports changing the NetworkInterfaces, and makes the change making a resize API call against Nova. 
""" return_server = self.fc.servers.list()[1] return_server.id = '1234' instance = self._create_test_instance(return_server, 'ud_network_interfaces') self._stub_glance_for_update() # if new include old, just attach the different ones in new old_interfaces = [ {'NetworkInterfaceId': 'ea29f957-cd35-4364-98fb-57ce9732c10d', 'DeviceIndex': '2'}] new_interfaces = [ {'NetworkInterfaceId': 'ea29f957-cd35-4364-98fb-57ce9732c10d', 'DeviceIndex': '2'}, {'NetworkInterfaceId': 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', 'DeviceIndex': '1'}] before_props = self.instance_props.copy() before_props['NetworkInterfaces'] = old_interfaces update_props = self.instance_props.copy() update_props['NetworkInterfaces'] = new_interfaces after = instance.t.freeze(properties=update_props) before = instance.t.freeze(properties=before_props) self.m.StubOutWithMock(self.fc.servers, 'get') self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) self.m.StubOutWithMock(return_server, 'interface_attach') return_server.interface_attach('d1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', None, None).AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(instance.update, after, before)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) def test_instance_update_network_interfaces_new_old_all_different(self): """Tests updating NetworkInterfaces when new and old are different. Instance.handle_update supports changing the NetworkInterfaces, and makes the change making a resize API call against Nova. """ return_server = self.fc.servers.list()[1] return_server.id = '1234' instance = self._create_test_instance(return_server, 'ud_network_interfaces') self._stub_glance_for_update() # if different, detach the old ones and attach the new ones old_interfaces = [ {'NetworkInterfaceId': 'ea29f957-cd35-4364-98fb-57ce9732c10d', 'DeviceIndex': '2'}] new_interfaces = [ {'NetworkInterfaceId': '34b752ec-14de-416a-8722-9531015e04a5', 'DeviceIndex': '3'}, {'NetworkInterfaceId': 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', 'DeviceIndex': '1'}] before_props = self.instance_props.copy() before_props['NetworkInterfaces'] = old_interfaces update_props = self.instance_props.copy() update_props['NetworkInterfaces'] = new_interfaces after = instance.t.freeze(properties=update_props) before = instance.t.freeze(properties=before_props) self.m.StubOutWithMock(self.fc.servers, 'get') self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) self.m.StubOutWithMock(return_server, 'interface_detach') return_server.interface_detach( 'ea29f957-cd35-4364-98fb-57ce9732c10d').AndReturn(None) self.m.StubOutWithMock(return_server, 'interface_attach') return_server.interface_attach('d1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', None, None).InAnyOrder().AndReturn(None) return_server.interface_attach('34b752ec-14de-416a-8722-9531015e04a5', None, None).InAnyOrder().AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(instance.update, after, before)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) def test_instance_update_network_interfaces_no_old(self): """Test case for updating NetworkInterfaces when there's no old prop. Instance.handle_update supports changing the NetworkInterfaces, and makes the change making a resize API call against Nova. 
""" return_server = self.fc.servers.list()[1] return_server.id = '1234' instance = self._create_test_instance(return_server, 'ud_network_interfaces') self._stub_glance_for_update() new_interfaces = [ {'NetworkInterfaceId': 'ea29f957-cd35-4364-98fb-57ce9732c10d', 'DeviceIndex': '2'}, {'NetworkInterfaceId': '34b752ec-14de-416a-8722-9531015e04a5', 'DeviceIndex': '3'}] iface = self.create_fake_iface('d1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', 'c4485ba1-283a-4f5f-8868-0cd46cdda52f', '10.0.0.4') update_props = self.instance_props.copy() update_props['NetworkInterfaces'] = new_interfaces update_template = instance.t.freeze(properties=update_props) self.m.StubOutWithMock(self.fc.servers, 'get') self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) self.m.StubOutWithMock(return_server, 'interface_list') return_server.interface_list().AndReturn([iface]) self.m.StubOutWithMock(return_server, 'interface_detach') return_server.interface_detach( 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46').AndReturn(None) self.m.StubOutWithMock(return_server, 'interface_attach') return_server.interface_attach('ea29f957-cd35-4364-98fb-57ce9732c10d', None, None).AndReturn(None) return_server.interface_attach('34b752ec-14de-416a-8722-9531015e04a5', None, None).AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(instance.update, update_template)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) self.m.VerifyAll() def test_instance_update_network_interfaces_no_old_empty_new(self): """Test case for updating NetworkInterfaces when no old, no new prop. Instance.handle_update supports changing the NetworkInterfaces. """ return_server = self.fc.servers.list()[1] return_server.id = '1234' instance = self._create_test_instance(return_server, 'ud_network_interfaces') self._stub_glance_for_update() iface = self.create_fake_iface('d1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', 'c4485ba1-283a-4f5f-8868-0cd46cdda52f', '10.0.0.4') update_props = self.instance_props.copy() update_props['NetworkInterfaces'] = [] update_template = instance.t.freeze(properties=update_props) self.m.StubOutWithMock(self.fc.servers, 'get') self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) self.m.StubOutWithMock(return_server, 'interface_list') return_server.interface_list().AndReturn([iface]) self.m.StubOutWithMock(return_server, 'interface_detach') return_server.interface_detach( 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46').AndReturn(None) self.m.StubOutWithMock(return_server, 'interface_attach') return_server.interface_attach(None, None, None).AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(instance.update, update_template)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) self.m.VerifyAll() def _test_instance_update_with_subnet(self, stack_name, new_interfaces=None, old_interfaces=None, need_update=True, multiple_get=True): return_server = self.fc.servers.list()[1] return_server.id = '1234' instance = self._create_test_instance(return_server, stack_name) self._stub_glance_for_update() iface = self.create_fake_iface('d1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', 'c4485ba1-283a-4f5f-8868-0cd46cdda52f', '10.0.0.4') subnet_id = '8c1aaddf-e49e-4f28-93ea-ca9f0b3c6240' nics = [{'port-id': 'ea29f957-cd35-4364-98fb-57ce9732c10d'}] before_props = self.instance_props.copy() if old_interfaces is not None: before_props['NetworkInterfaces'] = old_interfaces update_props = copy.deepcopy(before_props) if new_interfaces is not None: update_props['NetworkInterfaces'] = new_interfaces update_props['SubnetId'] = 
subnet_id after = instance.t.freeze(properties=update_props) before = instance.t.freeze(properties=before_props) instance.reparse() self.m.StubOutWithMock(self.fc.servers, 'get') if need_update: if multiple_get: self.fc.servers.get('1234').MultipleTimes().AndReturn( return_server) else: self.fc.servers.get('1234').AndReturn(return_server) self.m.StubOutWithMock(return_server, 'interface_list') return_server.interface_list().AndReturn([iface]) self.m.StubOutWithMock(return_server, 'interface_detach') return_server.interface_detach( 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46').AndReturn(None) self.m.StubOutWithMock(instance, '_build_nics') instance._build_nics(new_interfaces, security_groups=None, subnet_id=subnet_id).AndReturn(nics) self.m.StubOutWithMock(return_server, 'interface_attach') return_server.interface_attach( 'ea29f957-cd35-4364-98fb-57ce9732c10d', None, None).AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(instance.update, after, before)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) self.m.VerifyAll() def test_instance_update_network_interfaces_empty_new_with_subnet(self): """Test update NetworkInterfaces to empty, and update with subnet.""" stack_name = 'ud_network_interfaces_empty_with_subnet' self._test_instance_update_with_subnet( stack_name, new_interfaces=[]) def test_instance_update_no_old_no_new_with_subnet(self): stack_name = 'ud_only_with_subnet' self._test_instance_update_with_subnet(stack_name) def test_instance_update_old_no_change_with_subnet(self): # Test that if there is an old network interface and it is # unchanged, the update does nothing. old_interfaces = [ {'NetworkInterfaceId': 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', 'DeviceIndex': '2'}] stack_name = 'ud_old_no_change_only_with_subnet' self._test_instance_update_with_subnet(stack_name, old_interfaces=old_interfaces, need_update=False, multiple_get=False) def test_instance_update_properties(self): return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, 'in_update2') self.stub_ImageConstraint_validate() self.m.ReplayAll() update_props = self.instance_props.copy() update_props['ImageId'] = 'mustreplace' update_template = instance.t.freeze(properties=update_props) updater = scheduler.TaskRunner(instance.update, update_template) self.assertRaises(resource.UpdateReplace, updater) self.m.VerifyAll() def test_instance_status_build(self): return_server = self.fc.servers.list()[0] instance = self._setup_test_instance(return_server, 'in_sts_build') instance.resource_id = '1234' self.m.StubOutWithMock(self.fc.servers, 'get') # Bind a fake get method which Instance.check_create_complete will call def status_active(*args): return_server.status = 'ACTIVE' self.fc.servers.get(instance.resource_id).WithSideEffects( status_active).MultipleTimes().AndReturn(return_server) self.m.ReplayAll() scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) self.m.VerifyAll() def _test_instance_status_suspend(self, name, state=('CREATE', 'COMPLETE')): return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, name) instance.resource_id = '1234' instance.state_set(state[0], state[1]) d1 = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]} d2 = copy.deepcopy(d1) d1['server']['status'] = 'ACTIVE' d2['server']['status'] = 'SUSPENDED' self.m.StubOutWithMock(self.fc.client, 'get_servers_1234') get = self.fc.client.get_servers_1234 get().AndReturn((200, d1))
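# The suspend task polls the server status until it reaches SUSPENDED,
# so queue the ACTIVE detail twice and then the SUSPENDED detail (three
# get_servers_1234 calls in total) before replaying.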
get().AndReturn((200, d1)) get().AndReturn((200, d2)) self.m.ReplayAll() scheduler.TaskRunner(instance.suspend)() self.assertEqual((instance.SUSPEND, instance.COMPLETE), instance.state) self.m.VerifyAll() def test_instance_suspend_in_create_complete(self): self._test_instance_status_suspend( name='test_suspend_in_create_complete') def test_instance_suspend_in_suspend_failed(self): self._test_instance_status_suspend( name='test_suspend_in_suspend_failed', state=('SUSPEND', 'FAILED')) def test_instance_suspend_in_suspend_complete(self): self._test_instance_status_suspend( name='test_suspend_in_suspend_complete', state=('SUSPEND', 'COMPLETE')) def _test_instance_status_resume(self, name, state=('SUSPEND', 'COMPLETE')): return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, name) instance.resource_id = '1234' instance.state_set(state[0], state[1]) d1 = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]} d2 = copy.deepcopy(d1) d1['server']['status'] = 'SUSPENDED' d2['server']['status'] = 'ACTIVE' self.m.StubOutWithMock(self.fc.client, 'get_servers_1234') get = self.fc.client.get_servers_1234 get().AndReturn((200, d1)) get().AndReturn((200, d1)) get().AndReturn((200, d2)) self.m.ReplayAll() instance.state_set(instance.SUSPEND, instance.COMPLETE) scheduler.TaskRunner(instance.resume)() self.assertEqual((instance.RESUME, instance.COMPLETE), instance.state) self.m.VerifyAll() def test_instance_resume_in_suspend_complete(self): self._test_instance_status_resume( name='test_resume_in_suspend_complete') def test_instance_resume_in_resume_failed(self): self._test_instance_status_resume( name='test_resume_in_resume_failed', state=('RESUME', 'FAILED')) def test_instance_resume_in_resume_complete(self): self._test_instance_status_resume( name='test_resume_in_resume_complete', state=('RESUME', 'COMPLETE')) def test_instance_resume_other_exception(self): return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, 'in_resume_wait') instance.resource_id = '1234' self.m.ReplayAll() self.m.StubOutWithMock(self.fc.client, 'get_servers_1234') get = self.fc.client.get_servers_1234 get().AndRaise(fakes_nova.fake_exception(status_code=500, message='VIKINGS!')) self.m.ReplayAll() instance.state_set(instance.SUSPEND, instance.COMPLETE) resumer = scheduler.TaskRunner(instance.resume) ex = self.assertRaises(exception.ResourceFailure, resumer) self.assertIn('VIKINGS!', ex.message) self.m.VerifyAll() def test_instance_status_build_spawning(self): self._test_instance_status_not_build_active('BUILD(SPAWNING)') def test_instance_status_hard_reboot(self): self._test_instance_status_not_build_active('HARD_REBOOT') def test_instance_status_password(self): self._test_instance_status_not_build_active('PASSWORD') def test_instance_status_reboot(self): self._test_instance_status_not_build_active('REBOOT') def test_instance_status_rescue(self): self._test_instance_status_not_build_active('RESCUE') def test_instance_status_resize(self): self._test_instance_status_not_build_active('RESIZE') def test_instance_status_revert_resize(self): self._test_instance_status_not_build_active('REVERT_RESIZE') def test_instance_status_shutoff(self): self._test_instance_status_not_build_active('SHUTOFF') def test_instance_status_suspended(self): self._test_instance_status_not_build_active('SUSPENDED') def test_instance_status_verify_resize(self): self._test_instance_status_not_build_active('VERIFY_RESIZE') def _test_instance_status_not_build_active(self, 
uncommon_status): return_server = self.fc.servers.list()[0] instance = self._setup_test_instance(return_server, 'in_sts_bld') instance.resource_id = '1234' self.m.StubOutWithMock(self.fc.servers, 'get') # Bind a fake get method which Instance.check_create_complete will call def status_not_build(*args): return_server.status = uncommon_status def status_active(*args): return_server.status = 'ACTIVE' self.fc.servers.get(instance.resource_id).WithSideEffects( status_not_build).AndReturn(return_server) self.fc.servers.get(instance.resource_id).WithSideEffects( status_active).MultipleTimes().AndReturn(return_server) self.m.ReplayAll() scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) self.m.VerifyAll() def test_build_nics(self): return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, 'build_nics') self.assertIsNone(instance._build_nics([])) self.assertIsNone(instance._build_nics(None)) self.assertEqual([ {'port-id': 'id3'}, {'port-id': 'id1'}, {'port-id': 'id2'}], instance._build_nics([ 'id3', 'id1', 'id2'])) self.assertEqual( [{'port-id': 'id1'}, {'port-id': 'id2'}, {'port-id': 'id3'}], instance._build_nics([ {'NetworkInterfaceId': 'id3', 'DeviceIndex': '3'}, {'NetworkInterfaceId': 'id1', 'DeviceIndex': '1'}, {'NetworkInterfaceId': 'id2', 'DeviceIndex': 2}, ])) self.assertEqual( [{'port-id': 'id1'}, {'port-id': 'id2'}, {'port-id': 'id3'}, {'port-id': 'id4'}, {'port-id': 'id5'}], instance._build_nics([ {'NetworkInterfaceId': 'id3', 'DeviceIndex': '3'}, {'NetworkInterfaceId': 'id1', 'DeviceIndex': '1'}, {'NetworkInterfaceId': 'id2', 'DeviceIndex': 2}, 'id4', 'id5'] )) def test_build_nics_with_security_groups(self): """Test that security groups can be associated with a newly created port. Security groups defined in the heat template should be associated with the newly created port.
""" return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, 'build_nics2') security_groups = ['security_group_1'] self._test_security_groups(instance, security_groups) security_groups = ['0389f747-7785-4757-b7bb-2ab07e4b09c3'] self._test_security_groups(instance, security_groups, all_uuids=True) security_groups = ['0389f747-7785-4757-b7bb-2ab07e4b09c3', '384ccd91-447c-4d83-832c-06974a7d3d05'] self._test_security_groups(instance, security_groups, sg='two', all_uuids=True) security_groups = ['security_group_1', '384ccd91-447c-4d83-832c-06974a7d3d05'] self._test_security_groups(instance, security_groups, sg='two') security_groups = ['wrong_group_name'] self._test_security_groups( instance, security_groups, sg='zero', get_secgroup_raises=exception.EntityNotFound) security_groups = ['wrong_group_name', '0389f747-7785-4757-b7bb-2ab07e4b09c3'] self._test_security_groups( instance, security_groups, get_secgroup_raises=exception.EntityNotFound) security_groups = ['wrong_group_name', 'security_group_1'] self._test_security_groups( instance, security_groups, get_secgroup_raises=exception.EntityNotFound) security_groups = ['duplicate_group_name', 'security_group_1'] self._test_security_groups( instance, security_groups, get_secgroup_raises=exception.PhysicalResourceNameAmbiguity) def _test_security_groups(self, instance, security_groups, sg='one', all_uuids=False, get_secgroup_raises=None): fake_groups_list, props = self._get_fake_properties(sg) nclient = neutronclient.Client() self.m.StubOutWithMock(instance, 'neutron') instance.neutron().MultipleTimes().AndReturn(nclient) if not all_uuids: # list_security_groups only gets called when none of the requested # groups look like UUIDs. self.m.StubOutWithMock( neutronclient.Client, 'list_security_groups') neutronclient.Client.list_security_groups().AndReturn( fake_groups_list) self.m.StubOutWithMock(neutron.NeutronClientPlugin, 'network_id_from_subnet_id') neutron.NeutronClientPlugin.network_id_from_subnet_id( 'fake_subnet_id').MultipleTimes().AndReturn('fake_network_id') if not get_secgroup_raises: self.m.StubOutWithMock(neutronclient.Client, 'create_port') neutronclient.Client.create_port( {'port': props}).MultipleTimes().AndReturn( {'port': {'id': 'fake_port_id'}}) self.stub_keystoneclient() self.m.ReplayAll() if get_secgroup_raises: self.assertRaises(get_secgroup_raises, instance._build_nics, None, security_groups=security_groups, subnet_id='fake_subnet_id') else: self.assertEqual( [{'port-id': 'fake_port_id'}], instance._build_nics(None, security_groups=security_groups, subnet_id='fake_subnet_id')) self.m.VerifyAll() self.m.UnsetStubs() def _get_fake_properties(self, sg='one'): fake_groups_list = { 'security_groups': [ { 'tenant_id': 'test_tenant_id', 'id': '0389f747-7785-4757-b7bb-2ab07e4b09c3', 'name': 'security_group_1', 'security_group_rules': [], 'description': 'no protocol' }, { 'tenant_id': 'test_tenant_id', 'id': '384ccd91-447c-4d83-832c-06974a7d3d05', 'name': 'security_group_2', 'security_group_rules': [], 'description': 'no protocol' }, { 'tenant_id': 'test_tenant_id', 'id': 'e91a0007-06a6-4a4a-8edb-1d91315eb0ef', 'name': 'duplicate_group_name', 'security_group_rules': [], 'description': 'no protocol' }, { 'tenant_id': 'test_tenant_id', 'id': '8be37f3c-176d-4826-aa17-77d1d9df7b2e', 'name': 'duplicate_group_name', 'security_group_rules': [], 'description': 'no protocol' } ] } fixed_ip = {'subnet_id': 'fake_subnet_id'} props = { 'admin_state_up': True, 'network_id': 'fake_network_id', 'fixed_ips': 
[fixed_ip], 'security_groups': ['0389f747-7785-4757-b7bb-2ab07e4b09c3'] } if sg == 'zero': props['security_groups'] = [] elif sg == 'one': props['security_groups'] = ['0389f747-7785-4757-b7bb-2ab07e4b09c3'] elif sg == 'two': props['security_groups'] = ['0389f747-7785-4757-b7bb-2ab07e4b09c3', '384ccd91-447c-4d83-832c-06974a7d3d05'] return fake_groups_list, props def test_instance_without_ip_address(self): return_server = self.fc.servers.list()[3] instance = self._create_test_instance(return_server, 'wo_ipaddr') self.assertEqual('0.0.0.0', instance.FnGetAtt('PrivateIp')) def test_default_instance_user(self): """CFN instances automatically create the `ec2-user` admin user.""" return_server = self.fc.servers.list()[1] instance = self._setup_test_instance(return_server, 'default_user') metadata = instance.metadata_get() self.m.StubOutWithMock(nova.NovaClientPlugin, 'build_userdata') nova.NovaClientPlugin.build_userdata( metadata, 'wordpress', 'ec2-user') self.m.ReplayAll() scheduler.TaskRunner(instance.create)() self.m.VerifyAll() def test_instance_create_with_volumes(self): return_server = self.fc.servers.list()[1] self.stub_VolumeConstraint_validate() instance = self._setup_test_instance(return_server, 'with_volumes', stub_complete=True, volumes=True) attach_mock = self.patchobject(nova.NovaClientPlugin, 'attach_volume', side_effect=['cccc', 'dddd']) check_attach_mock = self.patchobject(cinder.CinderClientPlugin, 'check_attach_volume_complete', side_effect=[False, True, False, True]) self.m.ReplayAll() scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) self.assertEqual(2, attach_mock.call_count) attach_mock.assert_has_calls([mock.call(instance.resource_id, 'cccc', '/dev/vdc'), mock.call(instance.resource_id, 'dddd', '/dev/vdd')]) self.assertEqual(4, check_attach_mock.call_count) check_attach_mock.assert_has_calls([mock.call('cccc'), mock.call('cccc'), mock.call('dddd'), mock.call('dddd')]) self.m.VerifyAll() heat-10.0.2/heat/tests/aws/test_instance_network.py0000666000175000017500000003027613343562351022417 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
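# The tests below exercise AWS::EC2::Instance networking behaviour:
# creating a port from a SubnetId at boot time and wiring up a
# pre-created AWS::EC2::NetworkInterface, with neutron calls served by
# the FakeNeutron stub defined in this module.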
import copy import uuid from heat.common import template_format from heat.engine.clients.os import glance from heat.engine.clients.os import neutron from heat.engine.clients.os import nova from heat.engine import environment from heat.engine.resources.aws.ec2 import instance as instances from heat.engine.resources.aws.ec2 import network_interface as net_interfaces from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils wp_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "WordPress", "Parameters" : { "KeyName" : { "Description" : "KeyName", "Type" : "String", "Default" : "test" }, "InstanceType": { "Type": "String", "Description": "EC2 instance type", "Default": "m1.small", "AllowedValues": [ "m1.small", "m1.large" ] }, "SubnetId": { "Type" : "String", "Description" : "SubnetId of an existing subnet in your VPC" }, }, "Resources" : { "WebServer": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId" : "F17-x86_64-gold", "InstanceType" : { "Ref" : "InstanceType" }, "SubnetId" : { "Ref" : "SubnetId" }, "KeyName" : { "Ref" : "KeyName" }, "UserData" : "wordpress" } } } } ''' wp_template_with_nic = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "WordPress", "Parameters" : { "KeyName" : { "Description" : "KeyName", "Type" : "String", "Default" : "test" }, "InstanceType": { "Type": "String", "Description": "EC2 instance type", "Default": "m1.small", "AllowedValues": [ "m1.small", "m1.large" ] }, "SubnetId": { "Type" : "String", "Description" : "SubnetId of an existing subnet in your VPC" }, }, "Resources" : { "nic1": { "Type": "AWS::EC2::NetworkInterface", "Properties": { "SubnetId": { "Ref": "SubnetId" } } }, "WebServer": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId" : "F17-x86_64-gold", "InstanceType" : { "Ref" : "InstanceType" }, "NetworkInterfaces": [ { "NetworkInterfaceId" : {"Ref": "nic1"}, "DeviceIndex" : "0" } ], "KeyName" : { "Ref" : "KeyName" }, "UserData" : "wordpress" } } } } ''' class FakeNeutron(object): def show_subnet(self, subnet, **_params): return { 'subnet': { 'name': 'name', 'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'allocation_pools': [{'start': '10.10.0.2', 'end': '10.10.0.254'}], 'gateway_ip': '10.10.0.1', 'ip_version': 4, 'cidr': '10.10.0.0/24', 'id': '4156c7a5-e8c4-4aff-a6e1-8f3c7bc83861', 'enable_dhcp': False, }} def create_port(self, body=None): return { 'port': { 'admin_state_up': True, 'device_id': '', 'device_owner': '', 'fixed_ips': [{ 'ip_address': '10.0.3.3', 'subnet_id': '4156c7a5-e8c4-4aff-a6e1-8f3c7bc83861'}], 'id': '64d913c1-bcb1-42d2-8f0a-9593dbcaf251', 'mac_address': 'fa:16:3e:25:32:5d', 'name': '', 'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'status': 'ACTIVE', 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f' }} def delete_port(self, port_id): return None class instancesTest(common.HeatTestCase): def setUp(self): super(instancesTest, self).setUp() self.fc = fakes_nova.FakeClient() def _mock_get_image_id_success(self, imageId_input, imageId): self.m.StubOutWithMock(glance.GlanceClientPlugin, 'find_image_by_name_or_id') glance.GlanceClientPlugin.find_image_by_name_or_id( imageId_input).MultipleTimes().AndReturn(imageId) def _test_instance_create_delete(self, vm_status='ACTIVE', vm_delete_status='NotFound'): return_server = self.fc.servers.list()[1] instance = 
self._create_test_instance(return_server, 'in_create') instance.resource_id = '1234' instance.status = vm_status # this makes sure the auto increment worked on instance creation self.assertGreater(instance.id, 0) expected_ip = return_server.networks['public'][0] self.assertEqual(expected_ip, instance.FnGetAtt('PublicIp')) self.assertEqual(expected_ip, instance.FnGetAtt('PrivateIp')) self.assertEqual(expected_ip, instance.FnGetAtt('PrivateDnsName')) self.assertEqual(expected_ip, instance.FnGetAtt('PublicDnsName')) d1 = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]} d1['server']['status'] = vm_status self.m.StubOutWithMock(self.fc.client, 'get_servers_1234') get = self.fc.client.get_servers_1234 get().AndReturn((200, d1)) d2 = copy.deepcopy(d1) if vm_delete_status == 'DELETED': d2['server']['status'] = vm_delete_status get().AndReturn((200, d2)) else: get().AndRaise(fakes_nova.fake_exception()) self.m.ReplayAll() scheduler.TaskRunner(instance.delete)() self.assertEqual((instance.DELETE, instance.COMPLETE), instance.state) self.m.VerifyAll() def _create_test_instance(self, return_server, name): stack_name = '%s_s' % name t = template_format.parse(wp_template) kwargs = {'KeyName': 'test', 'InstanceType': 'm1.large', 'SubnetId': '4156c7a5-e8c4-4aff-a6e1-8f3c7bc83861'} tmpl = template.Template(t, env=environment.Environment(kwargs)) self.stack = parser.Stack(utils.dummy_context(), stack_name, tmpl, stack_id=str(uuid.uuid4())) image_id = 'CentOS 5.2' t['Resources']['WebServer']['Properties']['ImageId'] = image_id resource_defns = self.stack.t.resource_definitions(self.stack) instance = instances.Instance('%s_name' % name, resource_defns['WebServer'], self.stack) metadata = instance.metadata_get() self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') nova.NovaClientPlugin._create().AndReturn(self.fc) self._mock_get_image_id_success(image_id, 1) self.stub_SubnetConstraint_validate() self.m.StubOutWithMock(instance, 'neutron') instance.neutron().MultipleTimes().AndReturn(FakeNeutron()) self.m.StubOutWithMock(neutron.NeutronClientPlugin, '_create') neutron.NeutronClientPlugin._create().MultipleTimes().AndReturn( FakeNeutron()) # need to resolve the template functions server_userdata = instance.client_plugin().build_userdata( metadata, instance.properties['UserData'], 'ec2-user') self.m.StubOutWithMock(nova.NovaClientPlugin, 'build_userdata') nova.NovaClientPlugin.build_userdata( metadata, instance.properties['UserData'], 'ec2-user').AndReturn(server_userdata) self.m.StubOutWithMock(self.fc.servers, 'create') self.fc.servers.create( image=1, flavor=3, key_name='test', name=utils.PhysName(stack_name, instance.name), security_groups=None, userdata=server_userdata, scheduler_hints=None, meta=None, nics=[{'port-id': '64d913c1-bcb1-42d2-8f0a-9593dbcaf251'}], availability_zone=None, block_device_mapping=None).AndReturn( return_server) self.m.ReplayAll() scheduler.TaskRunner(instance.create)() return instance def _create_test_instance_with_nic(self, return_server, name): stack_name = '%s_s' % name t = template_format.parse(wp_template_with_nic) kwargs = {'KeyName': 'test', 'InstanceType': 'm1.large', 'SubnetId': '4156c7a5-e8c4-4aff-a6e1-8f3c7bc83861'} tmpl = template.Template(t, env=environment.Environment(kwargs)) self.stack = parser.Stack(utils.dummy_context(), stack_name, tmpl, stack_id=str(uuid.uuid4())) image_id = 'CentOS 5.2' t['Resources']['WebServer']['Properties']['ImageId'] = image_id resource_defns = self.stack.t.resource_definitions(self.stack) nic = 
net_interfaces.NetworkInterface('%s_nic' % name, resource_defns['nic1'], self.stack) instance = instances.Instance('%s_name' % name, resource_defns['WebServer'], self.stack) metadata = instance.metadata_get() self._mock_get_image_id_success(image_id, 1) self.stub_SubnetConstraint_validate() self.m.StubOutWithMock(nic, 'client') nic.client().AndReturn(FakeNeutron()) self.m.StubOutWithMock(neutron.NeutronClientPlugin, '_create') neutron.NeutronClientPlugin._create().MultipleTimes().AndReturn( FakeNeutron()) self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') nova.NovaClientPlugin._create().AndReturn(self.fc) # need to resolve the template functions server_userdata = instance.client_plugin().build_userdata( metadata, instance.properties['UserData'], 'ec2-user') self.m.StubOutWithMock(nova.NovaClientPlugin, 'build_userdata') nova.NovaClientPlugin.build_userdata( metadata, instance.properties['UserData'], 'ec2-user').AndReturn(server_userdata) self.m.StubOutWithMock(self.fc.servers, 'create') self.fc.servers.create( image=1, flavor=3, key_name='test', name=utils.PhysName(stack_name, instance.name), security_groups=None, userdata=server_userdata, scheduler_hints=None, meta=None, nics=[{'port-id': '64d913c1-bcb1-42d2-8f0a-9593dbcaf251'}], availability_zone=None, block_device_mapping=None).AndReturn( return_server) self.m.ReplayAll() # create network interface scheduler.TaskRunner(nic.create)() self.stack.resources["nic1"] = nic scheduler.TaskRunner(instance.create)() return instance def test_instance_create_delete_with_SubnetId(self): self._test_instance_create_delete(vm_delete_status='DELETED') def test_instance_create_with_nic(self): return_server = self.fc.servers.list()[1] instance = self._create_test_instance_with_nic( return_server, 'in_create_wnic') # this makes sure the auto increment worked on instance creation self.assertGreater(instance.id, 0) expected_ip = return_server.networks['public'][0] self.assertEqual(expected_ip, instance.FnGetAtt('PublicIp')) self.assertEqual(expected_ip, instance.FnGetAtt('PrivateIp')) self.assertEqual(expected_ip, instance.FnGetAtt('PrivateDnsName')) self.assertEqual(expected_ip, instance.FnGetAtt('PublicDnsName')) self.m.VerifyAll() heat-10.0.2/heat/tests/aws/test_loadbalancer.py0000666000175000017500000003516413343562351021452 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
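# The tests below cover the AWS::ElasticLoadBalancing::LoadBalancer
# resource: HealthCheck validation, the parameters and template passed
# to its nested stack, and the haproxy.cfg fragments it generates.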
import copy import mock from oslo_config import cfg from heat.common import exception from heat.common import template_format from heat.engine.clients.os import nova from heat.engine import node_data from heat.engine.resources.aws.lb import loadbalancer as lb from heat.engine import rsrc_defn from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils lb_template = ''' { "AWSTemplateFormatVersion": "2010-09-09", "Description": "LB Template", "Parameters" : { "KeyName" : { "Description" : "KeyName", "Type" : "String", "Default" : "test" }, "LbFlavor" : { "Description" : "Flavor to use for LoadBalancer instance", "Type": "String", "Default": "m1.heat" }, "LbImageId" : { "Description" : "Image to use", "Type" : "String", "Default" : "image123" } }, "Resources": { "WikiServerOne": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId": "F17-x86_64-gold", "InstanceType" : "m1.large", "KeyName" : "test", "UserData" : "some data" } }, "LoadBalancer" : { "Type" : "AWS::ElasticLoadBalancing::LoadBalancer", "Properties" : { "AvailabilityZones" : ["nova"], "SecurityGroups": ["sg_1"], "Instances" : [{"Ref": "WikiServerOne"}], "Listeners" : [ { "LoadBalancerPort" : "80", "InstancePort" : "80", "Protocol" : "HTTP" }] } } } } ''' class LoadBalancerTest(common.HeatTestCase): def setUp(self): super(LoadBalancerTest, self).setUp() self.fc = fakes_nova.FakeClient() def test_loadbalancer(self): t = template_format.parse(lb_template) s = utils.parse_stack(t) s.store() resource_name = 'LoadBalancer' lb_defn = s.t.resource_definitions(s)[resource_name] rsrc = lb.LoadBalancer(resource_name, lb_defn, s) self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) initial_md = {'AWS::CloudFormation::Init': {'config': {'files': {'/etc/haproxy/haproxy.cfg': {'content': 'initial'}}}}} ha_cfg = '\n'.join(['\nglobal', ' daemon', ' maxconn 256', ' stats socket /tmp/.haproxy-stats', '\ndefaults', ' mode http\n timeout connect 5000ms', ' timeout client 50000ms', ' timeout server 50000ms\n\nfrontend http', ' bind *:80\n default_backend servers', '\nbackend servers\n balance roundrobin', ' option http-server-close', ' option forwardfor\n option httpchk', '\n server server1 1.2.3.4:80', ' server server2 0.0.0.0:80\n']) expected_md = {'AWS::CloudFormation::Init': {'config': {'files': {'/etc/haproxy/haproxy.cfg': { 'content': ha_cfg}}}}} md = mock.Mock() md.metadata_get.return_value = copy.deepcopy(initial_md) rsrc.nested = mock.Mock(return_value={'LB_instance': md}) prop_diff = {'Instances': ['WikiServerOne1', 'WikiServerOne2']} props = copy.copy(rsrc.properties.data) props.update(prop_diff) update_defn = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) rsrc.handle_update(update_defn, {}, prop_diff) self.assertIsNone(rsrc.handle_update(rsrc.t, {}, {})) md.metadata_get.assert_called_once_with() md.metadata_set.assert_called_once_with(expected_md) def test_loadbalancer_validate_hchk_good(self): hc = { 'Target': 'HTTP:80/', 'HealthyThreshold': '3', 'UnhealthyThreshold': '5', 'Interval': '30', 'Timeout': '5'} rsrc = self.setup_loadbalancer(hc=hc) rsrc._parse_nested_stack = mock.Mock() self.assertIsNone(rsrc.validate()) def test_loadbalancer_validate_hchk_int_gt_tmo(self): hc = { 'Target': 'HTTP:80/', 'HealthyThreshold': '3', 'UnhealthyThreshold': '5', 'Interval': '30', 'Timeout': '35'} rsrc = self.setup_loadbalancer(hc=hc) rsrc._parse_nested_stack = mock.Mock() self.assertEqual( {'Error': 'Interval must be larger than Timeout'}, rsrc.validate()) def 
    def test_loadbalancer_validate_badtemplate(self):
        cfg.CONF.set_override('loadbalancer_template', '/a/noexist/x.y')

        rsrc = self.setup_loadbalancer()
        self.assertRaises(exception.StackValidationFailed, rsrc.validate)

    def setup_loadbalancer(self, include_magic=True, cache_data=None,
                           hc=None):
        template = template_format.parse(lb_template)
        if not include_magic:
            del template['Parameters']['KeyName']
            del template['Parameters']['LbFlavor']
            del template['Parameters']['LbImageId']
        if hc is not None:
            props = template['Resources']['LoadBalancer']['Properties']
            props['HealthCheck'] = hc
        self.stack = utils.parse_stack(template, cache_data=cache_data)

        resource_name = 'LoadBalancer'
        lb_defn = self.stack.defn.resource_definition(resource_name)
        return lb.LoadBalancer(resource_name, lb_defn, self.stack)

    def test_loadbalancer_refid(self):
        rsrc = self.setup_loadbalancer()
        self.assertEqual('LoadBalancer', rsrc.FnGetRefId())

    def test_loadbalancer_refid_convergence_cache_data(self):
        cache_data = {'LoadBalancer': node_data.NodeData.from_dict({
            'uuid': mock.ANY,
            'id': mock.ANY,
            'action': 'CREATE',
            'status': 'COMPLETE',
            'reference_id': 'LoadBalancer_convg_mock'
        })}
        rsrc = self.setup_loadbalancer(cache_data=cache_data)
        self.assertEqual('LoadBalancer_convg_mock',
                         self.stack.defn[rsrc.name].FnGetRefId())

    def test_loadbalancer_attr_dnsname(self):
        rsrc = self.setup_loadbalancer()
        rsrc.get_output = mock.Mock(return_value='1.3.5.7')
        self.assertEqual('1.3.5.7', rsrc.FnGetAtt('DNSName'))
        rsrc.get_output.assert_called_once_with('PublicIp')

    def test_loadbalancer_attr_not_supported(self):
        rsrc = self.setup_loadbalancer()
        for attr in ['CanonicalHostedZoneName',
                     'CanonicalHostedZoneNameID',
                     'SourceSecurityGroup.GroupName',
                     'SourceSecurityGroup.OwnerAlias']:
            self.assertEqual('', rsrc.FnGetAtt(attr))

    def test_loadbalancer_attr_invalid(self):
        rsrc = self.setup_loadbalancer()
        self.assertRaises(exception.InvalidTemplateAttribute,
                          rsrc.FnGetAtt, 'Foo')

    def test_child_params_without_key_name(self):
        rsrc = self.setup_loadbalancer(False)
        self.assertNotIn('KeyName', rsrc.child_params())

    def test_child_params_with_key_name(self):
        rsrc = self.setup_loadbalancer()
        params = rsrc.child_params()
        self.assertEqual('test', params['KeyName'])

    def test_child_template_without_key_name(self):
        rsrc = self.setup_loadbalancer(False)
        parsed_template = {
            'Resources': {'LB_instance': {'Properties': {'KeyName': 'foo'}}},
            'Parameters': {'KeyName': 'foo'}
        }
        rsrc.get_parsed_template = mock.Mock(return_value=parsed_template)

        tmpl = rsrc.child_template()
        self.assertNotIn('KeyName', tmpl['Parameters'])
        self.assertNotIn('KeyName',
                         tmpl['Resources']['LB_instance']['Properties'])

    def test_child_template_with_key_name(self):
        rsrc = self.setup_loadbalancer()
        rsrc.get_parsed_template = mock.Mock(return_value='foo')
        self.assertEqual('foo', rsrc.child_template())

    def test_child_params_with_flavor(self):
        rsrc = self.setup_loadbalancer()
        params = rsrc.child_params()
        self.assertEqual('m1.heat', params['LbFlavor'])

    def test_child_params_without_flavor(self):
        rsrc = self.setup_loadbalancer(False)
        params = rsrc.child_params()
        self.assertNotIn('LbFlavor', params)

    def test_child_params_with_image_id(self):
        rsrc = self.setup_loadbalancer()
        params = rsrc.child_params()
        self.assertEqual('image123', params['LbImageId'])

    def test_child_params_without_image_id(self):
        rsrc = self.setup_loadbalancer(False)
        params = rsrc.child_params()
        self.assertNotIn('LbImageId', params)

    def test_child_params_with_sec_gr(self):
        rsrc = self.setup_loadbalancer(False)
        params = rsrc.child_params()
        expected = {'SecurityGroups': ['sg_1']}
        self.assertEqual(expected, params)

    def test_child_params_default_sec_gr(self):
        template = template_format.parse(lb_template)
        del template['Parameters']['KeyName']
        del template['Parameters']['LbFlavor']
        del template['Resources']['LoadBalancer']['Properties'][
            'SecurityGroups']
        del template['Parameters']['LbImageId']
        stack = utils.parse_stack(template)
        resource_name = 'LoadBalancer'
        lb_defn = stack.t.resource_definitions(stack)[resource_name]
        rsrc = lb.LoadBalancer(resource_name, lb_defn, stack)
        params = rsrc.child_params()
        # A None value means the parameter's default ([]) will be used
        expected = {'SecurityGroups': None}
        self.assertEqual(expected, params)


class HaProxyConfigTest(common.HeatTestCase):

    def setUp(self):
        super(HaProxyConfigTest, self).setUp()
        self.stack = utils.parse_stack(template_format.parse(lb_template))
        resource_name = 'LoadBalancer'
        lb_defn = self.stack.t.resource_definitions(self.stack)[resource_name]
        self.lb = lb.LoadBalancer(resource_name, lb_defn, self.stack)
        self.lb.client_plugin = mock.Mock()

    def _mock_props(self, props):
        def get_props(name):
            return props[name]

        self.lb.properties = mock.MagicMock()
        self.lb.properties.__getitem__.side_effect = get_props

    def test_combined(self):
        self.lb._haproxy_config_global = mock.Mock(return_value='one,')
        self.lb._haproxy_config_frontend = mock.Mock(return_value='two,')
        self.lb._haproxy_config_backend = mock.Mock(return_value='three,')
        self.lb._haproxy_config_servers = mock.Mock(return_value='four')
        actual = self.lb._haproxy_config([3, 5])
        self.assertEqual('one,two,three,four\n', actual)

        self.lb._haproxy_config_global.assert_called_once_with()
        self.lb._haproxy_config_frontend.assert_called_once_with()
        self.lb._haproxy_config_backend.assert_called_once_with()
        self.lb._haproxy_config_servers.assert_called_once_with([3, 5])

    def test_global(self):
        exp = '''
global
    daemon
    maxconn 256
    stats socket /tmp/.haproxy-stats

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
'''
        actual = self.lb._haproxy_config_global()
        self.assertEqual(exp, actual)

    def test_frontend(self):
        props = {'HealthCheck': {},
                 'Listeners': [{'LoadBalancerPort': 4014}]}
        self._mock_props(props)

        exp = '''
frontend http
    bind *:4014
    default_backend servers
'''
        actual = self.lb._haproxy_config_frontend()
        self.assertEqual(exp, actual)

    def test_backend_with_timeout(self):
        props = {'HealthCheck': {'Timeout': 43}}
        self._mock_props(props)
        actual = self.lb._haproxy_config_backend()
        exp = '''
backend servers
    balance roundrobin
    option http-server-close
    option forwardfor
    option httpchk
    timeout check 43s
'''
        self.assertEqual(exp, actual)

    def test_backend_no_timeout(self):
        self._mock_props({'HealthCheck': None})
        be = self.lb._haproxy_config_backend()
        exp = '''
backend servers
    balance roundrobin
    option http-server-close
    option forwardfor
    option httpchk
'''
        self.assertEqual(exp, be)

    def test_servers_none(self):
        props = {'HealthCheck': {},
                 'Listeners': [{'InstancePort': 1234}]}
        self._mock_props(props)
        actual = self.lb._haproxy_config_servers([])
        exp = ''
        self.assertEqual(exp, actual)

    def test_servers_no_check(self):
        props = {'HealthCheck': {},
                 'Listeners': [{'InstancePort': 4511}]}
        self._mock_props(props)

        def fake_to_ipaddr(inst):
            return '192.168.1.%s' % inst

        to_ip = self.lb.client_plugin.return_value.server_to_ipaddress
        to_ip.side_effect = fake_to_ipaddr
        actual = self.lb._haproxy_config_servers(range(1, 3))
        exp = '''
    server server1 192.168.1.1:4511
    server server2 192.168.1.2:4511'''
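        # The triple-quoted literal opens with a newline that the rendered
        # server stanza does not include, hence replace('\n', '', 1) below.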
        self.assertEqual(exp.replace('\n', '', 1), actual)

    def test_servers_servers_and_check(self):
        props = {'HealthCheck': {'HealthyThreshold': 1,
                                 'Interval': 2,
                                 'Target': 'HTTP:80/',
                                 'Timeout': 45,
                                 'UnhealthyThreshold': 5
                                 },
                 'Listeners': [{'InstancePort': 1234}]}
        self._mock_props(props)

        def fake_to_ipaddr(inst):
            return '192.168.1.%s' % inst

        to_ip = self.lb.client_plugin.return_value.server_to_ipaddress
        to_ip.side_effect = fake_to_ipaddr
        actual = self.lb._haproxy_config_servers(range(1, 3))
        exp = '''
    server server1 192.168.1.1:1234 check inter 2s fall 5 rise 1
    server server2 192.168.1.2:1234 check inter 2s fall 5 rise 1'''
        self.assertEqual(exp.replace('\n', '', 1), actual)
heat-10.0.2/heat/tests/aws/test_user.py0000666000175000017500000004161013343562351020012 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
from oslo_config import cfg

from heat.common import exception
from heat.common import short_id
from heat.common import template_format
from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks
from heat.engine import node_data
from heat.engine.resources.aws.iam import user
from heat.engine.resources.openstack.heat import access_policy as ap
from heat.engine import scheduler
from heat.engine import stk_defn
from heat.objects import resource_data as resource_data_object
from heat.tests import common
from heat.tests import utils


user_template = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Just a User",
  "Parameters" : {},
  "Resources" : {
    "CfnUser" : {
      "Type" : "AWS::IAM::User"
    }
  }
}
'''

user_template_password = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Just a User",
  "Parameters" : {},
  "Resources" : {
    "CfnUser" : {
      "Type" : "AWS::IAM::User",
      "Properties": {
        "LoginProfile": { "Password": "myP@ssW0rd" }
      }
    }
  }
}
'''

user_accesskey_template = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Just a User",
  "Parameters" : {},
  "Resources" : {
    "CfnUser" : {
      "Type" : "AWS::IAM::User"
    },
    "HostKeys" : {
      "Type" : "AWS::IAM::AccessKey",
      "Properties" : {
        "UserName" : {"Ref": "CfnUser"}
      }
    }
  }
}
'''

user_policy_template = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Just a User",
  "Parameters" : {},
  "Resources" : {
    "CfnUser" : {
      "Type" : "AWS::IAM::User",
      "Properties" : {
        "Policies" : [ { "Ref": "WebServerAccessPolicy"} ]
      }
    },
    "WebServerAccessPolicy" : {
      "Type" : "OS::Heat::AccessPolicy",
      "Properties" : {
        "AllowedResources" : [ "WikiDatabase" ]
      }
    },
    "WikiDatabase" : {
      "Type" : "AWS::EC2::Instance",
    }
  }
}
'''


class UserTest(common.HeatTestCase):

    def setUp(self):
        super(UserTest, self).setUp()
        self.stack_name = 'test_user_stack_%s' % utils.random_name()
        self.username = '%s-CfnUser-aabbcc' % self.stack_name
        self.fc = fake_ks.FakeKeystoneClient(username=self.username)
        cfg.CONF.set_default('heat_stack_user_role', 'stack_user_role')

    def create_user(self, t, stack, resource_name,
                    project_id='stackproject', user_id='dummy_user',
                    password=None):
        self.m.StubOutWithMock(user.User, 'keystone')
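        # Every keystone() lookup made by the resource is answered by the
        # fake client set up above; the expectations recorded below describe
        # the exact sequence of keystone calls a user creation should make.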
        user.User.keystone().MultipleTimes().AndReturn(self.fc)

        self.m.StubOutWithMock(fake_ks.FakeKeystoneClient,
                               'create_stack_domain_project')
        fake_ks.FakeKeystoneClient.create_stack_domain_project(
            stack.id).AndReturn(project_id)

        resource_defns = stack.t.resource_definitions(stack)
        rsrc = user.User(resource_name,
                         resource_defns[resource_name],
                         stack)
        rsrc.store()

        self.m.StubOutWithMock(short_id, 'get_id')
        short_id.get_id(rsrc.uuid).MultipleTimes().AndReturn('aabbcc')

        self.m.StubOutWithMock(fake_ks.FakeKeystoneClient,
                               'create_stack_domain_user')
        fake_ks.FakeKeystoneClient.create_stack_domain_user(
            username=self.username, password=password,
            project_id=project_id).AndReturn(user_id)
        self.m.ReplayAll()

        self.assertIsNone(rsrc.validate())
        scheduler.TaskRunner(rsrc.create)()
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        return rsrc

    def test_user(self):
        t = template_format.parse(user_template)
        stack = utils.parse_stack(t, stack_name=self.stack_name)

        rsrc = self.create_user(t, stack, 'CfnUser')
        self.assertEqual('dummy_user', rsrc.resource_id)
        self.assertEqual(self.username, rsrc.FnGetRefId())

        self.assertRaises(exception.InvalidTemplateAttribute,
                          rsrc.FnGetAtt, 'Foo')

        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        self.assertIsNone(rsrc.handle_suspend())
        self.assertIsNone(rsrc.handle_resume())

        rsrc.resource_id = None
        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)

        rsrc.resource_id = self.fc.access
        rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE)
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)

        rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE)
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_user_password(self):
        t = template_format.parse(user_template_password)
        stack = utils.parse_stack(t, stack_name=self.stack_name)

        rsrc = self.create_user(t, stack, 'CfnUser', password=u'myP@ssW0rd')
        self.assertEqual('dummy_user', rsrc.resource_id)
        self.assertEqual(self.username, rsrc.FnGetRefId())
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_user_validate_policies(self):
        t = template_format.parse(user_policy_template)
        stack = utils.parse_stack(t, stack_name=self.stack_name)

        rsrc = self.create_user(t, stack, 'CfnUser')
        self.assertEqual('dummy_user', rsrc.resource_id)
        self.assertEqual(self.username, rsrc.FnGetRefId())
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)

        self.assertEqual([u'WebServerAccessPolicy'],
                         rsrc.properties['Policies'])

        # OK
        self.assertTrue(rsrc._validate_policies([u'WebServerAccessPolicy']))

        # Resource name doesn't exist in the stack
        self.assertFalse(rsrc._validate_policies([u'NoExistAccessPolicy']))

        # Resource name is wrong Resource type
        self.assertFalse(rsrc._validate_policies([u'NoExistAccessPolicy',
                                                  u'WikiDatabase']))

        # Wrong type (AWS embedded policy format, not yet supported)
        dict_policy = {"PolicyName": "AccessForCFNInit",
                       "PolicyDocument":
                       {"Statement": [{"Effect": "Allow",
                                       "Action":
                                       "cloudformation:DescribeStackResource",
                                       "Resource": "*"}]}}

        # However we should just ignore it to avoid breaking existing
        # templates
        self.assertTrue(rsrc._validate_policies([dict_policy]))

        self.m.VerifyAll()

    def test_user_create_bad_policies(self):
        t = template_format.parse(user_policy_template)
        t['Resources']['CfnUser']['Properties']['Policies'] = ['NoExistBad']
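        # A bare string that names no resource in the stack should make
        # handle_create() fail before any user is created.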
        stack = utils.parse_stack(t, stack_name=self.stack_name)
        resource_name = 'CfnUser'
        resource_defns = stack.t.resource_definitions(stack)
        rsrc = user.User(resource_name,
                         resource_defns[resource_name],
                         stack)
        self.assertRaises(exception.InvalidTemplateAttribute,
                          rsrc.handle_create)

    def test_user_access_allowed(self):
        self.m.StubOutWithMock(ap.AccessPolicy, 'access_allowed')
        ap.AccessPolicy.access_allowed('a_resource').AndReturn(True)
        ap.AccessPolicy.access_allowed('b_resource').AndReturn(False)
        self.m.ReplayAll()

        t = template_format.parse(user_policy_template)
        stack = utils.parse_stack(t, stack_name=self.stack_name)

        rsrc = self.create_user(t, stack, 'CfnUser')
        self.assertEqual('dummy_user', rsrc.resource_id)
        self.assertEqual(self.username, rsrc.FnGetRefId())
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)

        self.assertTrue(rsrc.access_allowed('a_resource'))
        self.assertFalse(rsrc.access_allowed('b_resource'))
        self.m.VerifyAll()

    def test_user_access_allowed_ignorepolicy(self):
        self.m.StubOutWithMock(ap.AccessPolicy, 'access_allowed')
        ap.AccessPolicy.access_allowed('a_resource').AndReturn(True)
        ap.AccessPolicy.access_allowed('b_resource').AndReturn(False)
        self.m.ReplayAll()

        t = template_format.parse(user_policy_template)
        t['Resources']['CfnUser']['Properties']['Policies'] = [
            'WebServerAccessPolicy', {'an_ignored': 'policy'}]
        stack = utils.parse_stack(t, stack_name=self.stack_name)

        rsrc = self.create_user(t, stack, 'CfnUser')
        self.assertEqual('dummy_user', rsrc.resource_id)
        self.assertEqual(self.username, rsrc.FnGetRefId())
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)

        self.assertTrue(rsrc.access_allowed('a_resource'))
        self.assertFalse(rsrc.access_allowed('b_resource'))
        self.m.VerifyAll()

    def test_user_refid_rsrc_id(self):
        t = template_format.parse(user_template)
        stack = utils.parse_stack(t)
        rsrc = stack['CfnUser']
        rsrc.resource_id = 'phy-rsrc-id'
        self.assertEqual('phy-rsrc-id', rsrc.FnGetRefId())

    def test_user_refid_convg_cache_data(self):
        t = template_format.parse(user_template)
        cache_data = {'CfnUser': node_data.NodeData.from_dict({
            'uuid': mock.ANY,
            'id': mock.ANY,
            'action': 'CREATE',
            'status': 'COMPLETE',
            'reference_id': 'convg_xyz'
        })}
        stack = utils.parse_stack(t, cache_data=cache_data)
        rsrc = stack.defn['CfnUser']
        self.assertEqual('convg_xyz', rsrc.FnGetRefId())


class AccessKeyTest(common.HeatTestCase):

    def setUp(self):
        super(AccessKeyTest, self).setUp()
        self.username = utils.PhysName('test_stack', 'CfnUser')
        self.credential_id = 'acredential123'
        self.fc = fake_ks.FakeKeystoneClient(username=self.username,
                                             user_id='dummy_user',
                                             credential_id=self.credential_id)
        cfg.CONF.set_default('heat_stack_user_role', 'stack_user_role')

    def create_user(self, t, stack, resource_name,
                    project_id='stackproject', user_id='dummy_user',
                    password=None):
        self.m.StubOutWithMock(user.User, 'keystone')
        user.User.keystone().MultipleTimes().AndReturn(self.fc)
        self.m.ReplayAll()

        rsrc = stack[resource_name]
        self.assertIsNone(rsrc.validate())
        scheduler.TaskRunner(rsrc.create)()
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        stk_defn.update_resource_data(stack.defn, resource_name,
                                      rsrc.node_data())
        return rsrc

    def create_access_key(self, t, stack, resource_name):
        rsrc = stack[resource_name]
        self.assertIsNone(rsrc.validate())
        scheduler.TaskRunner(rsrc.create)()
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE),
                         rsrc.state)
        return rsrc

    def test_access_key(self):
        t = template_format.parse(user_accesskey_template)
        stack = utils.parse_stack(t)
        self.create_user(t, stack, 'CfnUser')
        rsrc = self.create_access_key(t, stack, 'HostKeys')

        self.m.VerifyAll()
        self.assertEqual(self.fc.access, rsrc.resource_id)
        self.assertEqual(self.fc.secret, rsrc._secret)

        # Ensure the resource data has been stored correctly
        rs_data = resource_data_object.ResourceData.get_all(rsrc)
        self.assertEqual(self.fc.secret, rs_data.get('secret_key'))
        self.assertEqual(self.fc.credential_id, rs_data.get('credential_id'))
        self.assertEqual(2, len(rs_data.keys()))

        self.assertEqual(utils.PhysName(stack.name, 'CfnUser'),
                         rsrc.FnGetAtt('UserName'))
        rsrc._secret = None
        self.assertEqual(self.fc.secret,
                         rsrc.FnGetAtt('SecretAccessKey'))

        self.assertRaises(exception.InvalidTemplateAttribute,
                          rsrc.FnGetAtt, 'Foo')

        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_access_key_get_from_keystone(self):
        self.m.StubOutWithMock(user.AccessKey, 'keystone')
        user.AccessKey.keystone().MultipleTimes().AndReturn(self.fc)
        self.m.ReplayAll()

        t = template_format.parse(user_accesskey_template)
        stack = utils.parse_stack(t)
        self.create_user(t, stack, 'CfnUser')
        rsrc = self.create_access_key(t, stack, 'HostKeys')

        # Delete the resource data for secret_key, to test that existing
        # stacks which don't have the resource_data stored will continue
        # working via retrieving the keypair from keystone
        resource_data_object.ResourceData.delete(rsrc, 'credential_id')
        resource_data_object.ResourceData.delete(rsrc, 'secret_key')
        self.assertRaises(exception.NotFound,
                          resource_data_object.ResourceData.get_all, rsrc)

        rsrc._secret = None
        rsrc._data = None
        self.assertEqual(self.fc.secret,
                         rsrc.FnGetAtt('SecretAccessKey'))

        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_access_key_no_user(self):
        self.m.ReplayAll()

        t = template_format.parse(user_accesskey_template)
        # Set the resource properties UserName to an unknown user
        t['Resources']['HostKeys']['Properties']['UserName'] = 'NonExistent'
        stack = utils.parse_stack(t)
        stack['CfnUser'].resource_id = self.fc.user_id

        resource_defns = stack.t.resource_definitions(stack)
        rsrc = user.AccessKey('HostKeys',
                              resource_defns['HostKeys'],
                              stack)
        create = scheduler.TaskRunner(rsrc.create)
        self.assertRaises(exception.ResourceFailure, create)
        self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)

        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()


class AccessPolicyTest(common.HeatTestCase):

    def test_accesspolicy_create_ok(self):
        t = template_format.parse(user_policy_template)
        stack = utils.parse_stack(t)

        resource_name = 'WebServerAccessPolicy'
        resource_defns = stack.t.resource_definitions(stack)
        rsrc = ap.AccessPolicy(resource_name,
                               resource_defns[resource_name],
                               stack)
        scheduler.TaskRunner(rsrc.create)()
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)

    def test_accesspolicy_create_ok_empty(self):
        t = template_format.parse(user_policy_template)
        resource_name = 'WebServerAccessPolicy'
        t['Resources'][resource_name]['Properties']['AllowedResources'] = []
        stack = utils.parse_stack(t)

        resource_defns = stack.t.resource_definitions(stack)
        rsrc = ap.AccessPolicy(resource_name,
                               resource_defns[resource_name],
                               stack)
        scheduler.TaskRunner(rsrc.create)()
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)

    def test_accesspolicy_create_err_notfound(self):
        t = template_format.parse(user_policy_template)
        resource_name = 'WebServerAccessPolicy'
        t['Resources'][resource_name]['Properties']['AllowedResources'] = [
            'NoExistResource']
        stack = utils.parse_stack(t)

        self.assertRaises(exception.StackValidationFailed, stack.validate)

    def test_accesspolicy_access_allowed(self):
        t = template_format.parse(user_policy_template)
        resource_name = 'WebServerAccessPolicy'
        stack = utils.parse_stack(t)

        resource_defns = stack.t.resource_definitions(stack)
        rsrc = ap.AccessPolicy(resource_name,
                               resource_defns[resource_name],
                               stack)
        self.assertTrue(rsrc.access_allowed('WikiDatabase'))
        self.assertFalse(rsrc.access_allowed('NotWikiDatabase'))
        self.assertFalse(rsrc.access_allowed(None))
heat-10.0.2/heat/tests/aws/test_security_group.py0000666000175000017500000006226113343562351022124 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import collections
import copy

from neutronclient.common import exceptions as neutron_exc
from neutronclient.v2_0 import client as neutronclient

from heat.common import template_format
from heat.engine import resource
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.engine import stack as parser
from heat.engine import template
from heat.tests import common
from heat.tests import utils

NovaSG = collections.namedtuple('NovaSG',
                                ' '.join([
                                    'name',
                                    'id',
                                    'rules',
                                    'description',
                                ]))


class SecurityGroupTest(common.HeatTestCase):

    test_template_neutron = '''
HeatTemplateFormatVersion: '2012-12-12'
Resources:
  the_sg:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: HTTP and SSH access
      VpcId: aaaa
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: "22"
          ToPort: "22"
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort : "80"
          ToPort : "80"
          CidrIp : 0.0.0.0/0
        - IpProtocol: tcp
          SourceSecurityGroupId: wwww
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: "22"
          ToPort: "22"
          CidrIp: 10.0.1.0/24
        - SourceSecurityGroupName: xxxx
'''

    def setUp(self):
        super(SecurityGroupTest, self).setUp()
        self.m.StubOutWithMock(neutronclient.Client, 'create_security_group')
        self.m.StubOutWithMock(
            neutronclient.Client, 'create_security_group_rule')
        self.m.StubOutWithMock(neutronclient.Client, 'show_security_group')
        self.m.StubOutWithMock(
            neutronclient.Client, 'delete_security_group_rule')
        self.m.StubOutWithMock(neutronclient.Client, 'delete_security_group')
        self.m.StubOutWithMock(neutronclient.Client, 'update_security_group')

        self.patchobject(resource.Resource, 'is_using_neutron',
                         return_value=True)

    def mock_no_neutron(self):
        self.patchobject(resource.Resource, 'is_using_neutron',
                         return_value=False)

    def create_stack(self, templ):
        self.stack = self.parse_stack(template_format.parse(templ))
        self.assertIsNone(self.stack.create())
        return self.stack

    def parse_stack(self, t):
        stack_name = 'test_stack'
        tmpl = template.Template(t)
        stack = parser.Stack(utils.dummy_context(), stack_name, tmpl)
        stack.store()
        return stack

    def assertResourceState(self, rsrc, ref_id, metadata=None):
        metadata = metadata or {}
        self.assertIsNone(rsrc.validate())
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        self.assertEqual(ref_id, rsrc.FnGetRefId())
        self.assertEqual(metadata, dict(rsrc.metadata_get()))
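    # The stubout_* helpers below record the exact neutron API conversation
    # with mox: each expected call is registered via AndReturn()/AndRaise(),
    # self.m.ReplayAll() then switches to replay mode, and self.m.VerifyAll()
    # asserts that every recorded call actually happened.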
    def stubout_neutron_create_security_group(self):
        sg_name = utils.PhysName('test_stack', 'the_sg')
        neutronclient.Client.create_security_group({
            'security_group': {
                'name': sg_name,
                'description': 'HTTP and SSH access'
            }
        }).AndReturn({
            'security_group': {
                'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                'name': sg_name,
                'description': 'HTTP and SSH access',
                'security_group_rules': [{
                    "direction": "egress",
                    "ethertype": "IPv4",
                    "id": "aaaa-1",
                    "port_range_max": None,
                    "port_range_min": None,
                    "protocol": None,
                    "remote_group_id": None,
                    "remote_ip_prefix": None,
                    "security_group_id": "aaaa",
                    "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88"
                }, {
                    "direction": "egress",
                    "ethertype": "IPv6",
                    "id": "aaaa-2",
                    "port_range_max": None,
                    "port_range_min": None,
                    "protocol": None,
                    "remote_group_id": None,
                    "remote_ip_prefix": None,
                    "security_group_id": "aaaa",
                    "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88"
                }],
                'id': 'aaaa'
            }
        })

        neutronclient.Client.delete_security_group_rule('aaaa-1').AndReturn(
            None)
        neutronclient.Client.delete_security_group_rule('aaaa-2').AndReturn(
            None)

        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': None,
                'remote_ip_prefix': '0.0.0.0/0',
                'port_range_min': 22,
                'ethertype': 'IPv4',
                'port_range_max': 22,
                'protocol': 'tcp',
                'security_group_id': 'aaaa'
            }
        }).AndReturn({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': None,
                'remote_ip_prefix': '0.0.0.0/0',
                'port_range_min': 22,
                'ethertype': 'IPv4',
                'port_range_max': 22,
                'protocol': 'tcp',
                'security_group_id': 'aaaa',
                'id': 'bbbb'
            }
        })
        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': None,
                'remote_ip_prefix': '0.0.0.0/0',
                'port_range_min': 80,
                'ethertype': 'IPv4',
                'port_range_max': 80,
                'protocol': 'tcp',
                'security_group_id': 'aaaa'
            }
        }).AndReturn({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': None,
                'remote_ip_prefix': '0.0.0.0/0',
                'port_range_min': 80,
                'ethertype': 'IPv4',
                'port_range_max': 80,
                'protocol': 'tcp',
                'security_group_id': 'aaaa',
                'id': 'cccc'
            }
        })
        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': 'wwww',
                'remote_ip_prefix': None,
                'port_range_min': None,
                'ethertype': 'IPv4',
                'port_range_max': None,
                'protocol': 'tcp',
                'security_group_id': 'aaaa'
            }
        }).AndReturn({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': 'wwww',
                'remote_ip_prefix': None,
                'port_range_min': None,
                'ethertype': 'IPv4',
                'port_range_max': None,
                'protocol': 'tcp',
                'security_group_id': 'aaaa',
                'id': 'dddd'
            }
        })
        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'egress',
                'remote_group_id': None,
                'remote_ip_prefix': '10.0.1.0/24',
                'port_range_min': 22,
                'ethertype': 'IPv4',
                'port_range_max': 22,
                'protocol': 'tcp',
                'security_group_id': 'aaaa'
            }
        }).AndReturn({
            'security_group_rule': {
                'direction': 'egress',
                'remote_group_id': None,
                'remote_ip_prefix': '10.0.1.0/24',
                'port_range_min': 22,
                'ethertype': 'IPv4',
                'port_range_max': 22,
                'protocol': 'tcp',
                'security_group_id': 'aaaa',
                'id': 'eeee'
            }
        })
        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'egress',
                'remote_group_id': 'xxxx',
                'remote_ip_prefix': None,
                'port_range_min': None,
                'ethertype': 'IPv4',
                'port_range_max': None,
                'protocol': None,
                'security_group_id': 'aaaa'
            }
        }).AndReturn({
            'security_group_rule': {
                'direction': 'egress',
                'remote_group_id': 'xxxx',
                'remote_ip_prefix': None,
                'port_range_min': None,
                'ethertype': 'IPv4',
                'port_range_max': None,
                'protocol': None,
                'security_group_id': 'aaaa',
                'id': 'ffff'
            }
        })

    def stubout_neutron_get_security_group(self):
        neutronclient.Client.show_security_group('aaaa').AndReturn({
            'security_group': {
                'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                'name': 'sc1',
                'description': '',
                'security_group_rules': [{
                    'direction': 'ingress',
                    'protocol': 'tcp',
                    'port_range_max': 22,
                    'id': 'bbbb',
                    'ethertype': 'IPv4',
                    'security_group_id': 'aaaa',
                    'remote_group_id': None,
                    'remote_ip_prefix': '0.0.0.0/0',
                    'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                    'port_range_min': 22
                }, {
                    'direction': 'ingress',
                    'protocol': 'tcp',
                    'port_range_max': 80,
                    'id': 'cccc',
                    'ethertype': 'IPv4',
                    'security_group_id': 'aaaa',
                    'remote_group_id': None,
                    'remote_ip_prefix': '0.0.0.0/0',
                    'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                    'port_range_min': 80
                }, {
                    'direction': 'ingress',
                    'protocol': 'tcp',
                    'port_range_max': None,
                    'id': 'dddd',
                    'ethertype': 'IPv4',
                    'security_group_id': 'aaaa',
                    'remote_group_id': 'wwww',
                    'remote_ip_prefix': None,
                    'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                    'port_range_min': None
                }, {
                    'direction': 'egress',
                    'protocol': 'tcp',
                    'port_range_max': 22,
                    'id': 'eeee',
                    'ethertype': 'IPv4',
                    'security_group_id': 'aaaa',
                    'remote_group_id': None,
                    'remote_ip_prefix': '10.0.1.0/24',
                    'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                    'port_range_min': 22
                }, {
                    'direction': 'egress',
                    'protocol': None,
                    'port_range_max': None,
                    'id': 'ffff',
                    'ethertype': 'IPv4',
                    'security_group_id': 'aaaa',
                    'remote_group_id': 'xxxx',
                    'remote_ip_prefix': None,
                    'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                    'port_range_min': None
                }],
                'id': 'aaaa'}})

    def stubout_neutron_delete_security_group_rules(self):
        self.stubout_neutron_get_security_group()
        neutronclient.Client.delete_security_group_rule(
            'bbbb').AndReturn(None)
        neutronclient.Client.delete_security_group_rule(
            'cccc').AndReturn(None)
        neutronclient.Client.delete_security_group_rule(
            'dddd').AndReturn(None)
        neutronclient.Client.delete_security_group_rule(
            'eeee').AndReturn(None)
        neutronclient.Client.delete_security_group_rule(
            'ffff').AndReturn(None)

    def test_security_group_neutron(self):
        # create script
        self.stubout_neutron_create_security_group()

        # delete script
        self.stubout_neutron_delete_security_group_rules()
        neutronclient.Client.delete_security_group('aaaa').AndReturn(None)
        self.m.ReplayAll()

        stack = self.create_stack(self.test_template_neutron)

        sg = stack['the_sg']
        self.assertResourceState(sg, 'aaaa')

        stack.delete()
        self.m.VerifyAll()

    def test_security_group_neutron_exception(self):
        # create script
        sg_name = utils.PhysName('test_stack', 'the_sg')
        neutronclient.Client.create_security_group({
            'security_group': {
                'name': sg_name,
                'description': 'HTTP and SSH access'
            }
        }).AndReturn({
            'security_group': {
                'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                'name': sg_name,
                'description': 'HTTP and SSH access',
                'security_group_rules': [],
                'id': 'aaaa'
            }
        })

        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': None,
                'remote_ip_prefix': '0.0.0.0/0',
                'port_range_min': 22,
                'ethertype': 'IPv4',
                'port_range_max': 22,
                'protocol': 'tcp',
                'security_group_id': 'aaaa'
            }
        }).AndRaise(
            neutron_exc.Conflict())
        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': None,
                'remote_ip_prefix': '0.0.0.0/0',
                'port_range_min': 80,
                'ethertype': 'IPv4',
                'port_range_max': 80,
                'protocol': 'tcp',
                'security_group_id': 'aaaa'
            }
        }).AndRaise(
            neutron_exc.Conflict())
        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': 'wwww',
                'remote_ip_prefix': None,
                'port_range_min': None,
                'ethertype': 'IPv4',
                'port_range_max': None,
                'protocol': 'tcp',
                'security_group_id': 'aaaa'
            }
        }).AndRaise(
            neutron_exc.Conflict())
        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'egress',
                'remote_group_id': None,
                'remote_ip_prefix': '10.0.1.0/24',
                'port_range_min': 22,
                'ethertype': 'IPv4',
                'port_range_max': 22,
                'protocol': 'tcp',
                'security_group_id': 'aaaa'
            }
        }).AndRaise(
            neutron_exc.Conflict())
        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'egress',
                'remote_group_id': 'xxxx',
                'remote_ip_prefix': None,
                'port_range_min': None,
                'ethertype': 'IPv4',
                'port_range_max': None,
                'protocol': None,
                'security_group_id': 'aaaa'
            }
        }).AndRaise(
            neutron_exc.Conflict())

        # delete script
        neutronclient.Client.show_security_group('aaaa').AndReturn({
            'security_group': {
                'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                'name': 'sc1',
                'description': '',
                'security_group_rules': [{
                    'direction': 'ingress',
                    'protocol': 'tcp',
                    'port_range_max': 22,
                    'id': 'bbbb',
                    'ethertype': 'IPv4',
                    'security_group_id': 'aaaa',
                    'remote_group_id': None,
                    'remote_ip_prefix': '0.0.0.0/0',
                    'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                    'port_range_min': 22
                }, {
                    'direction': 'ingress',
                    'protocol': 'tcp',
                    'port_range_max': 80,
                    'id': 'cccc',
                    'ethertype': 'IPv4',
                    'security_group_id': 'aaaa',
                    'remote_group_id': None,
                    'remote_ip_prefix': '0.0.0.0/0',
                    'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                    'port_range_min': 80
                }, {
                    'direction': 'ingress',
                    'protocol': 'tcp',
                    'port_range_max': None,
                    'id': 'dddd',
                    'ethertype': 'IPv4',
                    'security_group_id': 'aaaa',
                    'remote_group_id': 'wwww',
                    'remote_ip_prefix': None,
                    'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                    'port_range_min': None
                }, {
                    'direction': 'egress',
                    'protocol': 'tcp',
                    'port_range_max': 22,
                    'id': 'eeee',
                    'ethertype': 'IPv4',
                    'security_group_id': 'aaaa',
                    'remote_group_id': None,
                    'remote_ip_prefix': '10.0.1.0/24',
                    'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                    'port_range_min': 22
                }, {
                    'direction': 'egress',
                    'protocol': None,
                    'port_range_max': None,
                    'id': 'ffff',
                    'ethertype': 'IPv4',
                    'security_group_id': 'aaaa',
                    'remote_group_id': None,
                    'remote_ip_prefix': None,
                    'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                    'port_range_min': None
                }],
                'id': 'aaaa'}})
        neutronclient.Client.delete_security_group_rule('bbbb').AndRaise(
            neutron_exc.NeutronClientException(status_code=404))
        neutronclient.Client.delete_security_group_rule('cccc').AndRaise(
            neutron_exc.NeutronClientException(status_code=404))
        neutronclient.Client.delete_security_group_rule('dddd').AndRaise(
            neutron_exc.NeutronClientException(status_code=404))
        neutronclient.Client.delete_security_group_rule('eeee').AndRaise(
            neutron_exc.NeutronClientException(status_code=404))
        neutronclient.Client.delete_security_group_rule('ffff').AndRaise(
            neutron_exc.NeutronClientException(status_code=404))
        neutronclient.Client.delete_security_group('aaaa').AndRaise(
            neutron_exc.NeutronClientException(status_code=404))
        neutronclient.Client.show_security_group('aaaa').AndRaise(
            neutron_exc.NeutronClientException(status_code=404))
        self.m.ReplayAll()

        stack = self.create_stack(self.test_template_neutron)

        sg = stack['the_sg']
        self.assertResourceState(sg, 'aaaa')

        scheduler.TaskRunner(sg.delete)()

        sg.state_set(sg.CREATE, sg.COMPLETE, 'to delete again')
        sg.resource_id = 'aaaa'
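        # Resetting the state and deleting again exercises the 404-tolerant
        # delete path recorded above (every AndRaise(status_code=404)).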
        stack.delete()

        self.m.VerifyAll()

    def test_security_group_neutron_update(self):
        # create script
        self.stubout_neutron_create_security_group()

        # update script
        # delete old not needed rules
        self.stubout_neutron_get_security_group()
        neutronclient.Client.delete_security_group_rule(
            'bbbb').InAnyOrder().AndReturn(None)
        neutronclient.Client.delete_security_group_rule(
            'dddd').InAnyOrder().AndReturn(None)
        neutronclient.Client.delete_security_group_rule(
            'eeee').InAnyOrder().AndReturn(None)

        # create missing rules
        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': None,
                'remote_ip_prefix': '0.0.0.0/0',
                'port_range_min': 443,
                'ethertype': 'IPv4',
                'port_range_max': 443,
                'protocol': 'tcp',
                'security_group_id': 'aaaa'
            }
        }).InAnyOrder().AndReturn({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': None,
                'remote_ip_prefix': '0.0.0.0/0',
                'port_range_min': 443,
                'ethertype': 'IPv4',
                'port_range_max': 443,
                'protocol': 'tcp',
                'security_group_id': 'aaaa',
                'id': 'bbbb'
            }
        })
        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': 'zzzz',
                'remote_ip_prefix': None,
                'port_range_min': None,
                'ethertype': 'IPv4',
                'port_range_max': None,
                'protocol': 'tcp',
                'security_group_id': 'aaaa'
            }
        }).InAnyOrder().AndReturn({
            'security_group_rule': {
                'direction': 'ingress',
                'remote_group_id': 'zzzz',
                'remote_ip_prefix': None,
                'port_range_min': None,
                'ethertype': 'IPv4',
                'port_range_max': None,
                'protocol': 'tcp',
                'security_group_id': 'aaaa',
                'id': 'dddd'
            }
        })
        neutronclient.Client.create_security_group_rule({
            'security_group_rule': {
                'direction': 'egress',
                'remote_group_id': None,
                'remote_ip_prefix': '0.0.0.0/0',
                'port_range_min': 22,
                'ethertype': 'IPv4',
                'port_range_max': 22,
                'protocol': 'tcp',
                'security_group_id': 'aaaa'
            }
        }).InAnyOrder().AndReturn({
            'security_group_rule': {
                'direction': 'egress',
                'remote_group_id': None,
                'remote_ip_prefix': '0.0.0.0/0',
                'port_range_min': 22,
                'ethertype': 'IPv4',
                'port_range_max': 22,
                'protocol': 'tcp',
                'security_group_id': 'aaaa',
                'id': 'eeee'
            }
        })
        self.m.ReplayAll()

        stack = self.create_stack(self.test_template_neutron)

        sg = stack['the_sg']
        self.assertResourceState(sg, 'aaaa')

        # make updated template
        props = copy.deepcopy(sg.properties.data)
        props['SecurityGroupIngress'] = [
            {'IpProtocol': 'tcp',
             'FromPort': '80',
             'ToPort': '80',
             'CidrIp': '0.0.0.0/0'},
            {'IpProtocol': 'tcp',
             'FromPort': '443',
             'ToPort': '443',
             'CidrIp': '0.0.0.0/0'},
            {'IpProtocol': 'tcp',
             'SourceSecurityGroupId': 'zzzz'},
        ]
        props['SecurityGroupEgress'] = [
            {'IpProtocol': 'tcp',
             'FromPort': '22',
             'ToPort': '22',
             'CidrIp': '0.0.0.0/0'},
            {'SourceSecurityGroupName': 'xxxx'},
        ]
        after = rsrc_defn.ResourceDefinition(sg.name, sg.type(), props)
        scheduler.TaskRunner(sg.update, after)()

        self.assertEqual((sg.UPDATE, sg.COMPLETE), sg.state)
        self.m.VerifyAll()

    def test_security_group_neutron_update_with_empty_rules(self):
        # create script
        self.stubout_neutron_create_security_group()

        # update script
        # delete old not needed rules
        self.stubout_neutron_get_security_group()
        neutronclient.Client.delete_security_group_rule(
            'eeee').InAnyOrder().AndReturn(None)
        neutronclient.Client.delete_security_group_rule(
            'ffff').InAnyOrder().AndReturn(None)

        self.m.ReplayAll()

        stack = self.create_stack(self.test_template_neutron)

        sg = stack['the_sg']
        self.assertResourceState(sg, 'aaaa')

        # make updated template
        props = copy.deepcopy(sg.properties.data)
        del props['SecurityGroupEgress']
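        # Dropping SecurityGroupEgress entirely should delete only the two
        # egress rules recorded above ('eeee' and 'ffff'); the ingress rules
        # are left untouched.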
        after = rsrc_defn.ResourceDefinition(sg.name, sg.type(), props)
        scheduler.TaskRunner(sg.update, after)()

        self.assertEqual((sg.UPDATE, sg.COMPLETE), sg.state)
        self.m.VerifyAll()
heat-10.0.2/heat/tests/aws/__init__.py0000666000175000017500000000000013343562340017516 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/aws/test_volume.py0000666000175000017500000007227213343562351020353 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import copy

from cinderclient import exceptions as cinder_exp
import mock
import mox
from oslo_config import cfg
import six

from heat.common import exception
from heat.common import template_format
from heat.engine.clients.os import cinder
from heat.engine.clients.os import nova
from heat.engine.resources.aws.ec2 import instance
from heat.engine.resources.aws.ec2 import volume as aws_vol
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.tests.openstack.cinder import test_volume_utils as vt_base
from heat.tests.openstack.nova import fakes as fakes_nova
from heat.tests import utils


volume_template = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Volume Test",
  "Parameters" : {},
  "Resources" : {
    "WikiDatabase": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId" : "foo",
        "InstanceType" : "m1.large",
        "KeyName" : "test",
        "UserData" : "some data"
      }
    },
    "DataVolume" : {
      "Type" : "AWS::EC2::Volume",
      "Properties" : {
        "Size" : "1",
        "AvailabilityZone" : {"Fn::GetAtt": ["WikiDatabase",
                                             "AvailabilityZone"]},
        "Tags" : [{ "Key" : "Usage", "Value" : "Wiki Data Volume" }]
      }
    },
    "MountPoint" : {
      "Type" : "AWS::EC2::VolumeAttachment",
      "Properties" : {
        "InstanceId" : { "Ref" : "WikiDatabase" },
        "VolumeId" : { "Ref" : "DataVolume" },
        "Device" : "/dev/vdc"
      }
    }
  }
}
'''


class VolumeTest(vt_base.BaseVolumeTest):

    def setUp(self):
        super(VolumeTest, self).setUp()
        self.t = template_format.parse(volume_template)
        self.use_cinder = False

    def _mock_create_volume(self, fv, stack_name, final_status='available'):
        cinder.CinderClientPlugin._create().MultipleTimes().AndReturn(
            self.cinder_fc)
        vol_name = utils.PhysName(stack_name, 'DataVolume')
        self.cinder_fc.volumes.create(
            size=1, availability_zone='nova',
            description=vol_name,
            name=vol_name,
            metadata={u'Usage': u'Wiki Data Volume'}).AndReturn(
                vt_base.FakeVolume(fv))
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        fv_ready = vt_base.FakeVolume(final_status, id=fv.id)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv_ready)
        return fv_ready

    def test_volume(self):
        stack_name = 'test_volume_create_stack'

        # create script
        fv = self._mock_create_volume(vt_base.FakeVolume('creating'),
                                      stack_name)
        # failed delete due to in-use script
        self.cinder_fc.volumes.get(fv.id).AndReturn(
            vt_base.FakeVolume('in-use'))
        # delete script
        self._mock_delete_volume(fv)
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)

        rsrc = self.create_volume(self.t, stack, 'DataVolume')

        ex = self.assertRaises(exception.ResourceFailure,
                               scheduler.TaskRunner(rsrc.destroy))
        self.assertIn("Volume in use", six.text_type(ex))

        scheduler.TaskRunner(rsrc.destroy)()
        self.m.VerifyAll()

    def test_volume_default_az(self):
        fv = vt_base.FakeVolume('creating')
        stack_name = 'test_volume_defaultaz_stack'

        # create script
        nova.NovaClientPlugin._create().AndReturn(self.fc)
        self.m.StubOutWithMock(instance.Instance, 'handle_create')
        self.m.StubOutWithMock(instance.Instance, 'check_create_complete')
        self.m.StubOutWithMock(instance.Instance, '_resolve_attribute')
        self.m.StubOutWithMock(aws_vol.VolumeAttachment, 'handle_create')
        self.m.StubOutWithMock(aws_vol.VolumeAttachment,
                               'check_create_complete')
        instance.Instance.handle_create().AndReturn(None)
        instance.Instance.check_create_complete(None).AndReturn(True)
        instance.Instance._resolve_attribute(
            'AvailabilityZone').MultipleTimes().AndReturn(None)
        cinder.CinderClientPlugin._create().AndReturn(
            self.cinder_fc)
        self.stub_ImageConstraint_validate()
        self.stub_ServerConstraint_validate()
        self.stub_VolumeConstraint_validate()
        vol_name = utils.PhysName(stack_name, 'DataVolume')
        self.cinder_fc.volumes.create(
            size=1, availability_zone=None,
            description=vol_name,
            name=vol_name,
            metadata={u'Usage': u'Wiki Data Volume'}).AndReturn(fv)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        fv_ready = vt_base.FakeVolume('available', id=fv.id)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv_ready)
        aws_vol.VolumeAttachment.handle_create().AndReturn(None)
        aws_vol.VolumeAttachment.check_create_complete(
            None).AndReturn(True)

        # delete script
        self.m.StubOutWithMock(instance.Instance, 'handle_delete')
        self.m.StubOutWithMock(aws_vol.VolumeAttachment, 'handle_delete')
        self.m.StubOutWithMock(aws_vol.VolumeAttachment,
                               'check_delete_complete')
        instance.Instance.handle_delete().AndReturn(None)
        self.cinder_fc.volumes.get('vol-123').AndRaise(
            cinder_exp.NotFound('Not found'))
        cookie = object()
        aws_vol.VolumeAttachment.handle_delete().AndReturn(cookie)
        aws_vol.VolumeAttachment.check_delete_complete(cookie).AndReturn(True)

        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)
        stack._update_all_resource_data(True, False)

        rsrc = stack['DataVolume']
        self.assertIsNone(rsrc.validate())
        scheduler.TaskRunner(stack.create)()
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)

        scheduler.TaskRunner(stack.delete)()

        self.m.VerifyAll()

    def test_volume_create_error(self):
        fv = vt_base.FakeVolume('creating')
        stack_name = 'test_volume_create_error_stack'
        cfg.CONF.set_override('action_retry_limit', 0)
        self._mock_create_volume(fv, stack_name, final_status='error')

        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)
        ex = self.assertRaises(exception.ResourceFailure,
                               self.create_volume,
                               self.t, stack, 'DataVolume')
        self.assertIn('Went to status error due to "Unknown"',
                      six.text_type(ex))

        self.m.VerifyAll()

    def test_volume_bad_tags(self):
        stack_name = 'test_volume_bad_tags_stack'
        self.t['Resources']['DataVolume']['Properties'][
            'Tags'] = [{'Foo': 'bar'}]
        stack = utils.parse_stack(self.t, stack_name=stack_name)
        ex = self.assertRaises(exception.StackValidationFailed,
                               self.create_volume,
                               self.t, stack, 'DataVolume')
        self.assertEqual("Property error: "
                         "Resources.DataVolume.Properties.Tags[0]: "
                         "Unknown Property Foo", six.text_type(ex))

        self.m.VerifyAll()

    def test_volume_attachment_error(self):
        stack_name = 'test_volume_attach_error_stack'
        self._mock_create_volume(vt_base.FakeVolume('creating'),
                                 stack_name)
        self._mock_create_server_volume_script(
            vt_base.FakeVolume('attaching'),
            final_status='error')
        self.stub_VolumeConstraint_validate()
        self.m.ReplayAll()
        stack = utils.parse_stack(self.t, stack_name=stack_name)

        self.create_volume(self.t, stack, 'DataVolume')
        ex = self.assertRaises(exception.ResourceFailure,
                               self.create_attachment,
                               self.t, stack, 'MountPoint')
        self.assertIn("Volume attachment failed - Unknown status error",
                      six.text_type(ex))

        self.m.VerifyAll()

    def test_volume_attachment(self):
        stack_name = 'test_volume_attach_stack'
        self._mock_create_volume(vt_base.FakeVolume('creating'),
                                 stack_name)
        fva = self._mock_create_server_volume_script(
            vt_base.FakeVolume('attaching'))
        self.stub_VolumeConstraint_validate()

        # delete script
        fva = vt_base.FakeVolume('in-use')
        self.fc.volumes.get_server_volume(u'WikiDatabase',
                                          'vol-123').AndReturn(fva)
        self.cinder_fc.volumes.get(fva.id).AndReturn(fva)
        self.fc.volumes.delete_server_volume(
            'WikiDatabase', 'vol-123').MultipleTimes().AndReturn(None)
        self.cinder_fc.volumes.get(fva.id).AndReturn(
            vt_base.FakeVolume('detaching', id=fva.id))
        self.cinder_fc.volumes.get(fva.id).AndReturn(
            vt_base.FakeVolume('available', id=fva.id))
        self.fc.volumes.get_server_volume(u'WikiDatabase',
                                          'vol-123').AndReturn(fva)
        self.fc.volumes.get_server_volume(
            u'WikiDatabase', 'vol-123').AndRaise(fakes_nova.fake_exception())
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)

        self.create_volume(self.t, stack, 'DataVolume')
        rsrc = self.create_attachment(self.t, stack, 'MountPoint')

        scheduler.TaskRunner(rsrc.delete)()

        self.m.VerifyAll()

    def test_volume_detachment_err(self):
        stack_name = 'test_volume_detach_err_stack'
        self._mock_create_volume(vt_base.FakeVolume('creating'),
                                 stack_name)
        fva = self._mock_create_server_volume_script(
            vt_base.FakeVolume('attaching'))
        self.stub_VolumeConstraint_validate()

        # delete script
        fva = vt_base.FakeVolume('in-use')
        self.fc.volumes.get_server_volume(u'WikiDatabase',
                                          'vol-123').AndReturn(fva)
        self.cinder_fc.volumes.get(fva.id).AndReturn(fva)
        self.fc.volumes.delete_server_volume(
            'WikiDatabase', 'vol-123').AndRaise(fakes_nova.fake_exception(400))
        self.cinder_fc.volumes.get(fva.id).AndReturn(
            vt_base.FakeVolume('available', id=fva.id))
        self.fc.volumes.get_server_volume(u'WikiDatabase',
                                          'vol-123').AndReturn(fva)
        self.fc.volumes.get_server_volume(
            u'WikiDatabase', 'vol-123').AndRaise(fakes_nova.fake_exception())
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)

        self.create_volume(self.t, stack, 'DataVolume')
        rsrc = self.create_attachment(self.t, stack, 'MountPoint')

        scheduler.TaskRunner(rsrc.delete)()

        self.m.VerifyAll()

    def test_volume_detach_non_exist(self):
        fv = vt_base.FakeVolume('creating')
        fva = vt_base.FakeVolume('in-use')
        stack_name = 'test_volume_detach_nonexist_stack'
        self._mock_create_volume(fv, stack_name)
        self._mock_create_server_volume_script(fva)
        self.stub_VolumeConstraint_validate()

        # delete script
        self.fc.volumes.delete_server_volume(u'WikiDatabase',
                                             'vol-123').AndReturn(None)
        self.cinder_fc.volumes.get(fva.id).AndRaise(
            cinder_exp.NotFound('Not found'))
        self.fc.volumes.get_server_volume(u'WikiDatabase', 'vol-123'
                                          ).AndRaise(
                                              fakes_nova.fake_exception())
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)

        self.create_volume(self.t, stack, 'DataVolume')
        rsrc = self.create_attachment(self.t, stack, 'MountPoint')

        scheduler.TaskRunner(rsrc.delete)()

        self.m.VerifyAll()

    def test_volume_detach_deleting_volume(self):
        fv = vt_base.FakeVolume('creating')
        fva = vt_base.FakeVolume('deleting')
        stack_name = 'test_volume_detach_deleting_volume_stack'
        self._mock_create_volume(fv, stack_name)
        self._mock_create_server_volume_script(fva)
        self.stub_VolumeConstraint_validate()

        # delete script
        self.fc.volumes.delete_server_volume(u'WikiDatabase',
                                             'vol-123').AndReturn(None)
        self.cinder_fc.volumes.get(fva.id).AndReturn(fva)
        self.fc.volumes.get_server_volume(u'WikiDatabase', 'vol-123'
                                          ).AndRaise(
                                              fakes_nova.fake_exception())
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)

        self.create_volume(self.t, stack, 'DataVolume')
        rsrc = self.create_attachment(self.t, stack, 'MountPoint')

        scheduler.TaskRunner(rsrc.delete)()

        self.m.VerifyAll()

    def test_volume_detach_with_latency(self):
        stack_name = 'test_volume_detach_latency_stack'
        self._mock_create_volume(vt_base.FakeVolume('creating'),
                                 stack_name)
        fva = self._mock_create_server_volume_script(
            vt_base.FakeVolume('attaching'))
        self.stub_VolumeConstraint_validate()

        # delete script
        self.fc.volumes.get_server_volume(u'WikiDatabase',
                                          'vol-123').AndReturn(fva)
        self.cinder_fc.volumes.get(fva.id).AndReturn(fva)
        self.fc.volumes.delete_server_volume(
            'WikiDatabase', 'vol-123').MultipleTimes().AndReturn(None)
        self.cinder_fc.volumes.get(fva.id).AndReturn(
            vt_base.FakeVolume('in-use', id=fva.id))
        self.cinder_fc.volumes.get(fva.id).AndReturn(
            vt_base.FakeVolume('detaching', id=fva.id))
        self.cinder_fc.volumes.get(fva.id).AndReturn(
            vt_base.FakeVolume('available', id=fva.id))
        self.fc.volumes.get_server_volume(u'WikiDatabase',
                                          'vol-123').AndReturn(fva)
        self.fc.volumes.get_server_volume(
            u'WikiDatabase', 'vol-123').AndRaise(fakes_nova.fake_exception())
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)

        self.create_volume(self.t, stack, 'DataVolume')
        rsrc = self.create_attachment(self.t, stack, 'MountPoint')

        scheduler.TaskRunner(rsrc.delete)()

        self.m.VerifyAll()

    def test_volume_detach_with_error(self):
        stack_name = 'test_volume_detach_werr_stack'
        self._mock_create_volume(vt_base.FakeVolume('creating'),
                                 stack_name)
        fva = self._mock_create_server_volume_script(
            vt_base.FakeVolume('attaching'))
        self.stub_VolumeConstraint_validate()

        # delete script
        fva = vt_base.FakeVolume('in-use')
        self.fc.volumes.delete_server_volume(
            'WikiDatabase', 'vol-123').AndReturn(None)
        self.cinder_fc.volumes.get(fva.id).AndReturn(
            vt_base.FakeVolume('error', id=fva.id))
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)

        self.create_volume(self.t, stack, 'DataVolume')
        rsrc = self.create_attachment(self.t, stack, 'MountPoint')
        detach_task = scheduler.TaskRunner(rsrc.delete)

        ex = self.assertRaises(exception.ResourceFailure, detach_task)
        self.assertIn('Volume detachment failed - Unknown status error',
                      six.text_type(ex))

        self.m.VerifyAll()

    def test_volume_delete(self):
        stack_name = 'test_volume_delete_stack'
        fv = vt_base.FakeVolume('creating')
        self._mock_create_volume(fv, stack_name)
        self.m.ReplayAll()

        self.t['Resources']['DataVolume']['DeletionPolicy'] = 'Delete'
        stack = utils.parse_stack(self.t, stack_name=stack_name)

        rsrc = self.create_volume(self.t, stack, 'DataVolume')

        self.m.StubOutWithMock(rsrc, "handle_delete")
        rsrc.handle_delete().AndReturn(None)
        self.m.StubOutWithMock(rsrc, "check_delete_complete")
        rsrc.check_delete_complete(None).AndReturn(True)
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.destroy)()
        self.m.VerifyAll()

    def test_volume_deleting_delete(self):
        fv = vt_base.FakeVolume('creating')
        stack_name = 'test_volume_deleting_stack'
        fv = self._mock_create_volume(vt_base.FakeVolume('creating'),
                                      stack_name)

        self.cinder_fc.volumes.get(fv.id).AndReturn(
            vt_base.FakeVolume('deleting'))
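        # A second 'deleting' poll followed by NotFound emulates a volume
        # that is already being removed out of band; destroy() should treat
        # that as success.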
        self.cinder_fc.volumes.get(fv.id).AndReturn(
            vt_base.FakeVolume('deleting'))
        self.cinder_fc.volumes.get(fv.id).AndRaise(
            cinder_exp.NotFound('NotFound'))
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)

        rsrc = self.create_volume(self.t, stack, 'DataVolume')

        scheduler.TaskRunner(rsrc.destroy)()
        self.m.VerifyAll()

    def test_volume_delete_error(self):
        fv = vt_base.FakeVolume('creating')
        stack_name = 'test_volume_deleting_stack'
        fv = self._mock_create_volume(vt_base.FakeVolume('creating'),
                                      stack_name)

        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        self.cinder_fc.volumes.delete(fv.id).AndReturn(True)
        self.cinder_fc.volumes.get(fv.id).AndReturn(
            vt_base.FakeVolume('deleting'))
        self.cinder_fc.volumes.get(fv.id).AndReturn(
            vt_base.FakeVolume('error_deleting'))
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)

        rsrc = self.create_volume(self.t, stack, 'DataVolume')

        deleter = scheduler.TaskRunner(rsrc.destroy)
        self.assertRaisesRegex(exception.ResourceFailure,
                               ".*ResourceInError.*error_deleting.*delete",
                               deleter)
        self.m.VerifyAll()

    def test_volume_update_not_supported(self):
        stack_name = 'test_volume_updnotsup_stack'
        fv = vt_base.FakeVolume('creating')
        self._mock_create_volume(fv, stack_name)
        self.m.ReplayAll()

        t = template_format.parse(volume_template)
        stack = utils.parse_stack(t, stack_name=stack_name)

        rsrc = self.create_volume(t, stack, 'DataVolume')

        props = copy.deepcopy(rsrc.properties.data)
        props['Size'] = 2
        props['Tags'] = None
        props['AvailabilityZone'] = 'other'
        after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)

        updater = scheduler.TaskRunner(rsrc.update, after)
        ex = self.assertRaises(exception.ResourceFailure, updater)
        self.assertIn("NotSupported: resources.DataVolume: "
                      "Update to properties "
                      "AvailabilityZone, Size, Tags of DataVolume "
                      "(AWS::EC2::Volume) is not supported",
                      six.text_type(ex))
        self.assertEqual((rsrc.UPDATE, rsrc.FAILED), rsrc.state)

    def test_volume_check(self):
        stack = utils.parse_stack(self.t, stack_name='volume_check')
        res = stack['DataVolume']
        res.state_set(res.CREATE, res.COMPLETE)
        fake_volume = vt_base.FakeVolume('available')
        cinder = mock.Mock()
        cinder.volumes.get.return_value = fake_volume
        self.patchobject(res, 'client', return_value=cinder)

        scheduler.TaskRunner(res.check)()
        self.assertEqual((res.CHECK, res.COMPLETE), res.state)

        fake_volume = vt_base.FakeVolume('in-use')
        res.client().volumes.get.return_value = fake_volume
        scheduler.TaskRunner(res.check)()
        self.assertEqual((res.CHECK, res.COMPLETE), res.state)

    def test_volume_check_not_available(self):
        stack = utils.parse_stack(self.t, stack_name='volume_check_na')
        res = stack['DataVolume']
        res.state_set(res.CREATE, res.COMPLETE)
        cinder = mock.Mock()
        fake_volume = vt_base.FakeVolume('foobar')
        cinder.volumes.get.return_value = fake_volume
        self.patchobject(res, 'client', return_value=cinder)

        self.assertRaises(exception.ResourceFailure,
                          scheduler.TaskRunner(res.check))
        self.assertEqual((res.CHECK, res.FAILED), res.state)
        self.assertIn('foobar', res.status_reason)

    def test_volume_check_fail(self):
        stack = utils.parse_stack(self.t, stack_name='volume_check_fail')
        res = stack['DataVolume']
        res.state_set(res.CREATE, res.COMPLETE)
        cinder = mock.Mock()
        cinder.volumes.get.side_effect = Exception('boom')
        self.patchobject(res, 'client', return_value=cinder)

        self.assertRaises(exception.ResourceFailure,
                          scheduler.TaskRunner(res.check))
        self.assertEqual((res.CHECK, res.FAILED), res.state)
        self.assertIn('boom', res.status_reason)

    def test_snapshot(self):
        stack_name = 'test_volume_snapshot_stack'
        fv = self._mock_create_volume(vt_base.FakeVolume('creating'),
                                      stack_name)

        # snapshot script
        self.m.StubOutWithMock(self.cinder_fc.backups, 'create')
        self.m.StubOutWithMock(self.cinder_fc.backups, 'get')
        fb = vt_base.FakeBackup('available')
        self.cinder_fc.backups.create(fv.id).AndReturn(fb)
        self.cinder_fc.backups.get(fb.id).AndReturn(fb)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        self._mock_delete_volume(fv)
        self.m.ReplayAll()

        self.t['Resources']['DataVolume']['DeletionPolicy'] = 'Snapshot'
        stack = utils.parse_stack(self.t, stack_name=stack_name)

        rsrc = self.create_volume(self.t, stack, 'DataVolume')

        scheduler.TaskRunner(rsrc.destroy)()

        self.m.VerifyAll()

    def test_snapshot_error(self):
        stack_name = 'test_volume_snapshot_err_stack'
        fv = self._mock_create_volume(vt_base.FakeVolume('creating'),
                                      stack_name)

        # snapshot script
        self.m.StubOutWithMock(self.cinder_fc.backups, 'create')
        self.m.StubOutWithMock(self.cinder_fc.backups, 'get')
        fb = vt_base.FakeBackup('error')
        self.cinder_fc.backups.create(fv.id).AndReturn(fb)
        self.cinder_fc.backups.get(fb.id).AndReturn(fb)
        self.m.ReplayAll()

        self.t['Resources']['DataVolume']['DeletionPolicy'] = 'Snapshot'
        stack = utils.parse_stack(self.t, stack_name=stack_name)

        rsrc = self.create_volume(self.t, stack, 'DataVolume')

        ex = self.assertRaises(exception.ResourceFailure,
                               scheduler.TaskRunner(rsrc.destroy))
        self.assertIn('Unknown status error', six.text_type(ex))

        self.m.VerifyAll()

    def test_snapshot_no_volume(self):
        """Test that backup does not start for failed resource."""
        stack_name = 'test_volume_snapshot_novol_stack'
        cfg.CONF.set_override('action_retry_limit', 0)
        fv = self._mock_create_volume(vt_base.FakeVolume('creating'),
                                      stack_name,
                                      final_status='error')

        self._mock_delete_volume(fv)
        self.m.ReplayAll()

        self.t['Resources']['DataVolume']['DeletionPolicy'] = 'Snapshot'
        self.t['Resources']['DataVolume']['Properties'][
            'AvailabilityZone'] = 'nova'
        stack = utils.parse_stack(self.t, stack_name=stack_name)
        resource_defns = stack.t.resource_definitions(stack)
        rsrc = aws_vol.Volume('DataVolume',
                              resource_defns['DataVolume'],
                              stack)

        create = scheduler.TaskRunner(rsrc.create)
        ex = self.assertRaises(exception.ResourceFailure, create)
        self.assertIn('Went to status error due to "Unknown"',
                      six.text_type(ex))

        scheduler.TaskRunner(rsrc.destroy)()

        self.m.VerifyAll()

    def test_create_from_snapshot(self):
        stack_name = 'test_volume_create_from_snapshot_stack'
        fv = vt_base.FakeVolume('restoring-backup')
        fvbr = vt_base.FakeBackupRestore('vol-123')

        # create script
        cinder.CinderClientPlugin._create().AndReturn(
            self.cinder_fc)
        self.patchobject(self.cinder_fc.backups, 'get')
        self.m.StubOutWithMock(self.cinder_fc.restores, 'restore')
        self.cinder_fc.restores.restore('backup-123').AndReturn(fvbr)
        self.cinder_fc.volumes.get('vol-123').AndReturn(fv)
        vol_name = utils.PhysName(stack_name, 'DataVolume')
        self.cinder_fc.volumes.update('vol-123',
                                      description=vol_name, name=vol_name)
        fv.status = 'available'
        self.cinder_fc.volumes.get('vol-123').AndReturn(fv)
        self.m.ReplayAll()

        self.t['Resources']['DataVolume']['Properties'][
            'SnapshotId'] = 'backup-123'
        stack = utils.parse_stack(self.t, stack_name=stack_name)

        self.create_volume(self.t, stack, 'DataVolume')

        self.m.VerifyAll()

    def test_create_from_snapshot_error(self):
        stack_name = 'test_volume_create_from_snap_err_stack'
        cfg.CONF.set_override('action_retry_limit', 0)
        fv = vt_base.FakeVolume('restoring-backup')
        fvbr = vt_base.FakeBackupRestore('vol-123')

        # create script
        cinder.CinderClientPlugin._create().AndReturn(
            self.cinder_fc)
self.patchobject(self.cinder_fc.backups, 'get') self.m.StubOutWithMock(self.cinder_fc.restores, 'restore') self.cinder_fc.restores.restore('backup-123').AndReturn(fvbr) self.cinder_fc.volumes.get('vol-123').AndReturn(fv) vol_name = utils.PhysName(stack_name, 'DataVolume') self.cinder_fc.volumes.update(fv.id, description=vol_name, name=vol_name) fv.status = 'error' self.cinder_fc.volumes.get('vol-123').AndReturn(fv) self.m.ReplayAll() self.t['Resources']['DataVolume']['Properties'][ 'SnapshotId'] = 'backup-123' stack = utils.parse_stack(self.t, stack_name=stack_name) ex = self.assertRaises(exception.ResourceFailure, self.create_volume, self.t, stack, 'DataVolume') self.assertIn('Went to status error due to "Unknown"', six.text_type(ex)) self.m.VerifyAll() def test_volume_size_constraint(self): self.t['Resources']['DataVolume']['Properties']['Size'] = '0' stack = utils.parse_stack(self.t) error = self.assertRaises(exception.StackValidationFailed, self.create_volume, self.t, stack, 'DataVolume') self.assertEqual( "Property error: Resources.DataVolume.Properties.Size: " "0 is out of range (min: 1, max: None)", six.text_type(error)) def test_volume_attachment_updates_not_supported(self): self.m.StubOutWithMock(nova.NovaClientPlugin, 'get_server') nova.NovaClientPlugin.get_server(mox.IgnoreArg()).AndReturn( mox.MockAnything()) fv = vt_base.FakeVolume('creating') fva = vt_base.FakeVolume('attaching') stack_name = 'test_volume_attach_updnotsup_stack' self._mock_create_volume(fv, stack_name) self._mock_create_server_volume_script(fva) self.stub_VolumeConstraint_validate() self.m.ReplayAll() stack = utils.parse_stack(self.t, stack_name=stack_name) self.create_volume(self.t, stack, 'DataVolume') rsrc = self.create_attachment(self.t, stack, 'MountPoint') props = copy.deepcopy(rsrc.properties.data) props['InstanceId'] = 'some_other_instance_id' props['VolumeId'] = 'some_other_volume_id' props['Device'] = '/dev/vdz' after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) update_task = scheduler.TaskRunner(rsrc.update, after) ex = self.assertRaises(exception.ResourceFailure, update_task) self.assertIn('NotSupported: resources.MountPoint: ' 'Update to properties Device, InstanceId, ' 'VolumeId of MountPoint (AWS::EC2::VolumeAttachment)', six.text_type(ex)) self.assertEqual((rsrc.UPDATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_validate_deletion_policy(self): cfg.CONF.set_override('backups_enabled', False, group='volumes') stack_name = 'test_volume_validate_deletion_policy' self.t['Resources']['DataVolume']['DeletionPolicy'] = 'Snapshot' stack = utils.parse_stack(self.t, stack_name=stack_name) rsrc = self.get_volume(self.t, stack, 'DataVolume') self.assertRaisesRegex( exception.StackValidationFailed, 'volume backup service is not enabled', rsrc.validate) heat-10.0.2/heat/tests/aws/test_network_interface.py0000666000175000017500000001233113343562351022543 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
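# The tests below exercise the AWS::EC2::NetworkInterface resource against a
# stubbed Neutron v2 client: show_subnet, create_port, update_port and
# delete_port are replaced with mox expectations in setUp(), so no real
# Neutron endpoint is involved.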
import copy from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests import utils try: from neutronclient.v2_0 import client as neutronclient except ImportError: neutronclient = None test_template = { 'heat_template_version': '2013-05-23', 'resources': { 'my_nic': { 'type': 'AWS::EC2::NetworkInterface', 'properties': { 'SubnetId': 'ssss' } } } } class NetworkInterfaceTest(common.HeatTestCase): def setUp(self): super(NetworkInterfaceTest, self).setUp() self.ctx = utils.dummy_context() self.m.StubOutWithMock(neutronclient.Client, 'show_subnet') self.m.StubOutWithMock(neutronclient.Client, 'create_port') self.m.StubOutWithMock(neutronclient.Client, 'delete_port') self.m.StubOutWithMock(neutronclient.Client, 'update_port') def mock_show_subnet(self): neutronclient.Client.show_subnet('ssss').AndReturn({ 'subnet': { 'name': 'my_subnet', 'network_id': 'nnnn', 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'allocation_pools': [{'start': '10.0.0.2', 'end': '10.0.0.254'}], 'gateway_ip': '10.0.0.1', 'ip_version': 4, 'cidr': '10.0.0.0/24', 'id': 'ssss', 'enable_dhcp': False, }}) def mock_create_network_interface(self, stack_name='my_stack', resource_name='my_nic', security_groups=None): self.nic_name = utils.PhysName(stack_name, resource_name) port = {'network_id': 'nnnn', 'fixed_ips': [{ 'subnet_id': u'ssss' }], 'name': self.nic_name, 'admin_state_up': True} port_info = { 'port': { 'admin_state_up': True, 'device_id': '', 'device_owner': '', 'fixed_ips': [ { 'ip_address': '10.0.0.100', 'subnet_id': 'ssss' } ], 'id': 'pppp', 'mac_address': 'fa:16:3e:25:32:5d', 'name': self.nic_name, 'network_id': 'nnnn', 'status': 'ACTIVE', 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f' } } if security_groups is not None: port['security_groups'] = security_groups port_info['security_groups'] = security_groups else: port_info['security_groups'] = ['default'] neutronclient.Client.create_port({'port': port}).AndReturn(port_info) def mock_update_network_interface(self, update_props, port_id='pppp'): neutronclient.Client.update_port( port_id, {'port': update_props}).AndReturn(None) def mock_delete_network_interface(self, port_id='pppp'): neutronclient.Client.delete_port(port_id).AndReturn(None) def test_network_interface_create_update_delete(self): my_stack = utils.parse_stack(test_template, stack_name='test_nif_cud_stack') nic_rsrc = my_stack['my_nic'] self.mock_show_subnet() self.stub_SubnetConstraint_validate() self.mock_create_network_interface(my_stack.name) update_props = {} update_sg_ids = ['0389f747-7785-4757-b7bb-2ab07e4b09c3'] update_props['security_groups'] = update_sg_ids self.mock_update_network_interface(update_props) self.mock_delete_network_interface() self.m.ReplayAll() # create the nic without GroupSet self.assertIsNone(nic_rsrc.validate()) scheduler.TaskRunner(nic_rsrc.create)() self.assertEqual((nic_rsrc.CREATE, my_stack.COMPLETE), nic_rsrc.state) # update the nic with GroupSet props = copy.deepcopy(nic_rsrc.properties.data) props['GroupSet'] = update_sg_ids update_snippet = rsrc_defn.ResourceDefinition(nic_rsrc.name, nic_rsrc.type(), props) scheduler.TaskRunner(nic_rsrc.update, update_snippet)() self.assertEqual((nic_rsrc.UPDATE, nic_rsrc.COMPLETE), nic_rsrc.state) # delete the nic scheduler.TaskRunner(nic_rsrc.delete)() self.assertEqual((nic_rsrc.DELETE, nic_rsrc.COMPLETE), nic_rsrc.state) self.m.VerifyAll() heat-10.0.2/heat/tests/aws/test_eip.py0000666000175000017500000012151713343562351017616 0ustar zuulzuul00000000000000# # Licensed under 
the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from neutronclient.common import exceptions as q_exceptions from neutronclient.v2_0 import client as neutronclient import six from heat.common import exception from heat.common import short_id from heat.common import template_format from heat.engine.clients.os import nova from heat.engine import node_data from heat.engine import resource from heat.engine.resources.aws.ec2 import eip from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import stk_defn from heat.engine import template as tmpl from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils eip_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "EIP Test", "Parameters" : {}, "Resources" : { "IPAddress" : { "Type" : "AWS::EC2::EIP", "Properties" : { "InstanceId" : { "Ref" : "WebServer" } } }, "WebServer": { "Type": "AWS::EC2::Instance", } } } ''' eip_template_ipassoc = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "EIP Test", "Parameters" : {}, "Resources" : { "IPAddress" : { "Type" : "AWS::EC2::EIP" }, "IPAssoc" : { "Type" : "AWS::EC2::EIPAssociation", "Properties" : { "InstanceId" : { "Ref" : "WebServer" }, "EIP" : { "Ref" : "IPAddress" } } }, "WebServer": { "Type": "AWS::EC2::Instance", } } } ''' eip_template_ipassoc2 = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "EIP Test", "Parameters" : {}, "Resources" : { "the_eip" : { "Type" : "AWS::EC2::EIP", "Properties" : { "Domain": "vpc" } }, "IPAssoc" : { "Type" : "AWS::EC2::EIPAssociation", "Properties" : { "AllocationId" : 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', "NetworkInterfaceId" : { "Ref" : "the_nic" } } }, "the_vpc" : { "Type" : "AWS::EC2::VPC", "Properties" : { "CidrBlock" : "10.0.0.0/16" } }, "the_subnet" : { "Type" : "AWS::EC2::Subnet", "Properties" : { "CidrBlock" : "10.0.0.0/24", "VpcId" : { "Ref" : "the_vpc" } } }, "the_nic" : { "Type" : "AWS::EC2::NetworkInterface", "Properties" : { "PrivateIpAddress": "10.0.0.100", "SubnetId": { "Ref": "the_subnet" } } }, } } ''' eip_template_ipassoc3 = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "EIP Test", "Parameters" : {}, "Resources" : { "the_eip" : { "Type" : "AWS::EC2::EIP", "Properties" : { "Domain": "vpc" } }, "IPAssoc" : { "Type" : "AWS::EC2::EIPAssociation", "Properties" : { "AllocationId" : 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', "InstanceId" : '1fafbe59-2332-4f5f-bfa4-517b4d6c1b65' } } } } ''' ipassoc_template_validate = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "EIP Test", "Parameters" : {}, "Resources" : { "eip" : { "Type" : "AWS::EC2::EIP", "Properties" : { "Domain": "vpc" } }, "IPAssoc" : { "Type" : "AWS::EC2::EIPAssociation", "Properties" : { "EIP" : {'Ref': 'eip'}, "InstanceId" : '1fafbe59-2332-4f5f-bfa4-517b4d6c1b65' } } } } ''' class EIPTest(common.HeatTestCase): def setUp(self): # force Nova, will test Neutron below super(EIPTest, self).setUp() 
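        # AWS::EC2::EIP is backed by Neutron floating IPs, with server
        # lookups going through Nova, so both clients are stubbed here.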
self.fc = fakes_nova.FakeClient() self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') self.m.StubOutWithMock(neutronclient.Client, 'list_networks') self.m.StubOutWithMock(self.fc.servers, 'get') self.m.StubOutWithMock(neutronclient.Client, 'list_floatingips') self.m.StubOutWithMock(neutronclient.Client, 'create_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'show_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'update_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'delete_floatingip') def mock_interface(self, port, ip): class MockIface(object): def __init__(self, port_id, fixed_ip): self.port_id = port_id self.fixed_ips = [{'ip_address': fixed_ip}] return MockIface(port, ip) def mock_list_floatingips(self): neutronclient.Client.list_floatingips( floating_ip_address='11.0.0.1').AndReturn({ 'floatingips': [{'id': "fc68ea2c-b60b-4b4f-bd82-94ec81110766"}]}) def mock_create_floatingip(self): nova.NovaClientPlugin._create().AndReturn(self.fc) neutronclient.Client.list_networks( **{'router:external': True}).AndReturn({'networks': [{ 'status': 'ACTIVE', 'subnets': [], 'name': 'nova', 'router:external': True, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'admin_state_up': True, 'shared': True, 'id': 'eeee' }]}) neutronclient.Client.create_floatingip({ 'floatingip': {'floating_network_id': u'eeee'} }).AndReturn({'floatingip': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "floating_ip_address": "11.0.0.1" }}) def mock_show_floatingip(self, refid): neutronclient.Client.show_floatingip( refid, ).AndReturn({'floatingip': { 'router_id': None, 'tenant_id': 'e936e6cd3e0b48dcb9ff853a8f253257', 'floating_network_id': 'eeee', 'fixed_ip_address': None, 'floating_ip_address': '11.0.0.1', 'port_id': None, 'id': 'ffff' }}) def mock_update_floatingip(self, fip='fc68ea2c-b60b-4b4f-bd82-94ec81110766', delete_assc=False): if delete_assc: request_body = { 'floatingip': { 'port_id': None, 'fixed_ip_address': None}} else: request_body = { 'floatingip': { 'port_id': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', 'fixed_ip_address': '1.2.3.4'}} neutronclient.Client.update_floatingip( fip, request_body).AndReturn(None) def mock_delete_floatingip(self): id = 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' neutronclient.Client.delete_floatingip(id).AndReturn(None) def create_eip(self, t, stack, resource_name): resource_defns = stack.t.resource_definitions(stack) rsrc = eip.ElasticIp(resource_name, resource_defns[resource_name], stack) self.assertIsNone(rsrc.validate()) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) stk_defn.update_resource_data(stack.defn, resource_name, rsrc.node_data()) return rsrc def create_association(self, t, stack, resource_name): resource_defns = stack.t.resource_definitions(stack) rsrc = eip.ElasticIpAssociation(resource_name, resource_defns[resource_name], stack) self.assertIsNone(rsrc.validate()) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) stk_defn.update_resource_data(stack.defn, resource_name, rsrc.node_data()) return rsrc def test_eip(self): mock_server = self.fc.servers.list()[0] self.patchobject(self.fc.servers, 'get', return_value=mock_server) self.mock_create_floatingip() self.mock_update_floatingip() self.mock_update_floatingip(delete_assc=True) self.mock_delete_floatingip() self.m.ReplayAll() iface = self.mock_interface('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', '1.2.3.4') self.patchobject(mock_server, 'interface_list', return_value=[iface]) t = 
template_format.parse(eip_template) stack = utils.parse_stack(t) rsrc = self.create_eip(t, stack, 'IPAddress') try: self.assertEqual('11.0.0.1', rsrc.FnGetRefId()) rsrc.refid = None self.assertEqual('11.0.0.1', rsrc.FnGetRefId()) self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', rsrc.FnGetAtt('AllocationId')) self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'Foo') finally: scheduler.TaskRunner(rsrc.destroy)() self.m.VerifyAll() def test_eip_update(self): mock_server = self.fc.servers.list()[0] self.patchobject(self.fc.servers, 'get', return_value=mock_server) self.mock_create_floatingip() self.mock_update_floatingip() self.mock_update_floatingip() self.mock_update_floatingip(delete_assc=True) iface = self.mock_interface('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', '1.2.3.4') self.patchobject(mock_server, 'interface_list', return_value=[iface]) self.m.ReplayAll() t = template_format.parse(eip_template) stack = utils.parse_stack(t) rsrc = self.create_eip(t, stack, 'IPAddress') self.assertEqual('11.0.0.1', rsrc.FnGetRefId()) # update with the new InstanceId server_update = self.fc.servers.list()[1] self.patchobject(self.fc.servers, 'get', return_value=server_update) iface = self.mock_interface('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', '1.2.3.4') self.patchobject(server_update, 'interface_list', return_value=[iface]) props = copy.deepcopy(rsrc.properties.data) update_server_id = '5678' props['InstanceId'] = update_server_id update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) scheduler.TaskRunner(rsrc.update, update_snippet)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.assertEqual('11.0.0.1', rsrc.FnGetRefId()) # update without InstanceId props = copy.deepcopy(rsrc.properties.data) props.pop('InstanceId') update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) scheduler.TaskRunner(rsrc.update, update_snippet)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_association_eip(self): mock_server = self.fc.servers.list()[0] self.patchobject(self.fc.servers, 'get', return_value=mock_server) self.mock_create_floatingip() self.mock_show_floatingip('fc68ea2c-b60b-4b4f-bd82-94ec81110766') self.mock_update_floatingip() self.mock_list_floatingips() self.mock_list_floatingips() self.mock_update_floatingip(delete_assc=True) self.mock_delete_floatingip() iface = self.mock_interface('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', '1.2.3.4') self.patchobject(mock_server, 'interface_list', return_value=[iface]) self.m.ReplayAll() t = template_format.parse(eip_template_ipassoc) stack = utils.parse_stack(t) rsrc = self.create_eip(t, stack, 'IPAddress') association = self.create_association(t, stack, 'IPAssoc') try: self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertEqual((association.CREATE, association.COMPLETE), association.state) self.assertEqual(utils.PhysName(stack.name, association.name), association.FnGetRefId()) self.assertEqual('11.0.0.1', association.properties['EIP']) finally: scheduler.TaskRunner(association.delete)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.assertEqual((association.DELETE, association.COMPLETE), association.state) self.m.VerifyAll() def test_eip_with_exception(self): neutronclient.Client.list_networks( **{'router:external': True}).AndReturn({'networks': [{ 'status': 'ACTIVE', 'subnets': [], 'name': 'nova', 'router:external': True, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 
'admin_state_up': True, 'shared': True, 'id': 'eeee' }]}) self.patchobject(neutronclient.Client, 'create_floatingip', side_effect=neutronclient.exceptions.NotFound) self.m.ReplayAll() t = template_format.parse(eip_template) stack = utils.parse_stack(t) resource_name = 'IPAddress' resource_defns = stack.t.resource_definitions(stack) rsrc = eip.ElasticIp(resource_name, resource_defns[resource_name], stack) self.assertRaises(neutronclient.exceptions.NotFound, rsrc.handle_create) self.m.VerifyAll() @mock.patch.object(eip.ElasticIp, '_ipaddress') def test_FnGetRefId_resource_name(self, mock_ipaddr): t = template_format.parse(ipassoc_template_validate) stack = utils.parse_stack(t) rsrc = stack['eip'] mock_ipaddr.return_value = None self.assertEqual('eip', rsrc.FnGetRefId()) @mock.patch.object(eip.ElasticIp, '_ipaddress') def test_FnGetRefId_resource_ip(self, mock_ipaddr): t = template_format.parse(ipassoc_template_validate) stack = utils.parse_stack(t) rsrc = stack['eip'] mock_ipaddr.return_value = 'x.x.x.x' self.assertEqual('x.x.x.x', rsrc.FnGetRefId()) def test_FnGetRefId_convergence_cache_data(self): t = template_format.parse(ipassoc_template_validate) template = tmpl.Template(t) stack = parser.Stack(utils.dummy_context(), 'test', template, cache_data={ 'eip': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': '1.1.1.1'})}) rsrc = stack.defn['eip'] self.assertEqual('1.1.1.1', rsrc.FnGetRefId()) class AllocTest(common.HeatTestCase): def setUp(self): super(AllocTest, self).setUp() self.fc = fakes_nova.FakeClient() self.m.StubOutWithMock(self.fc.servers, 'get') self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') self.m.StubOutWithMock(parser.Stack, 'resource_by_refid') self.m.StubOutWithMock(neutronclient.Client, 'create_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'show_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'update_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'list_floatingips') self.m.StubOutWithMock(neutronclient.Client, 'delete_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'add_gateway_router') self.m.StubOutWithMock(neutronclient.Client, 'list_networks') self.m.StubOutWithMock(neutronclient.Client, 'list_ports') self.m.StubOutWithMock(neutronclient.Client, 'show_network') self.m.StubOutWithMock(neutronclient.Client, 'list_routers') self.m.StubOutWithMock(neutronclient.Client, 'remove_gateway_router') def mock_interface(self, port, ip): class MockIface(object): def __init__(self, port_id, fixed_ip): self.port_id = port_id self.fixed_ips = [{'ip_address': fixed_ip}] return MockIface(port, ip) def _setup_test_stack_validate(self, stack_name): t = template_format.parse(ipassoc_template_validate) template = tmpl.Template(t) stack = parser.Stack(utils.dummy_context(), stack_name, template, stack_id='12233', stack_user_project_id='8888') stack.validate() return template, stack def _validate_properties(self, stack, template, expected): resource_defns = template.resource_definitions(stack) rsrc = eip.ElasticIpAssociation('validate_eip_ass', resource_defns['IPAssoc'], stack) exc = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertIn(expected, six.text_type(exc)) def mock_show_network(self): vpc_name = utils.PhysName('test_stack', 'the_vpc') neutronclient.Client.show_network( '22c26451-cf27-4d48-9031-51f5e397b84e' ).AndReturn({"network": { "status": "BUILD", "subnets": [], "name": vpc_name, "admin_state_up": False, "shared": False, 
"tenant_id": "c1210485b2424d48804aad5d39c61b8f", "id": "22c26451-cf27-4d48-9031-51f5e397b84e" }}) def _mock_server(self, mock_interface=False, mock_server=None): self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) if not mock_server: mock_server = self.fc.servers.list()[0] self.patchobject(self.fc.servers, 'get', return_value=mock_server) if mock_interface: iface = self.mock_interface('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', '1.2.3.4') self.patchobject(mock_server, 'interface_list', return_value=[iface]) def create_eip(self, t, stack, resource_name): rsrc = eip.ElasticIp(resource_name, stack.defn.resource_definition(resource_name), stack) self.assertIsNone(rsrc.validate()) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) stk_defn.update_resource_data(stack.defn, resource_name, rsrc.node_data()) return rsrc def create_association(self, t, stack, resource_name): resource_defn = stack.defn.resource_definition(resource_name) rsrc = eip.ElasticIpAssociation(resource_name, resource_defn, stack) self.assertIsNone(rsrc.validate()) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) stk_defn.update_resource_data(stack.defn, resource_name, rsrc.node_data()) return rsrc def mock_create_gateway_attachment(self): neutronclient.Client.add_gateway_router( 'bbbb', {'network_id': 'eeee'}).AndReturn(None) def mock_create_floatingip(self): neutronclient.Client.list_networks( **{'router:external': True}).AndReturn({'networks': [{ 'status': 'ACTIVE', 'subnets': [], 'name': 'nova', 'router:external': True, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'admin_state_up': True, 'shared': True, 'id': 'eeee' }]}) neutronclient.Client.create_floatingip({ 'floatingip': {'floating_network_id': u'eeee'} }).AndReturn({'floatingip': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "floating_ip_address": "11.0.0.1" }}) def mock_update_floatingip(self, fip='fc68ea2c-b60b-4b4f-bd82-94ec81110766', port_id='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', ex=None, with_address=True, delete_assc=False): if delete_assc: request_body = { 'floatingip': {'port_id': None}} if with_address: request_body['floatingip']['fixed_ip_address'] = None else: request_body = { 'floatingip': {'port_id': port_id}} if with_address: request_body['floatingip']['fixed_ip_address'] = '1.2.3.4' if ex: neutronclient.Client.update_floatingip( fip, request_body).AndRaise(ex) else: neutronclient.Client.update_floatingip( fip, request_body).AndReturn(None) def mock_show_floatingip(self, refid): neutronclient.Client.show_floatingip( refid, ).AndReturn({'floatingip': { 'router_id': None, 'tenant_id': 'e936e6cd3e0b48dcb9ff853a8f253257', 'floating_network_id': 'eeee', 'fixed_ip_address': None, 'floating_ip_address': '11.0.0.1', 'port_id': None, 'id': 'ffff' }}) def mock_list_floatingips(self, ip_addr='11.0.0.1'): neutronclient.Client.list_floatingips( floating_ip_address=ip_addr).AndReturn({ 'floatingips': [{'id': "fc68ea2c-b60b-4b4f-bd82-94ec81110766"}]}) def mock_delete_floatingip(self): id = 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' neutronclient.Client.delete_floatingip(id).AndReturn(None) def mock_list_ports(self, id='the_nic'): neutronclient.Client.list_ports(id=id).AndReturn( {"ports": [{ "status": "DOWN", "binding:host_id": "null", "name": "wp-NIC-yu7fc7l4g5p6", "admin_state_up": True, "network_id": "22c26451-cf27-4d48-9031-51f5e397b84e", "tenant_id": "ecf538ec1729478fa1f97f1bf4fdcf7b", "binding:vif_type": "ovs", "device_owner": "", 
"binding:capabilities": {"port_filter": True}, "mac_address": "fa:16:3e:62:2d:4f", "fixed_ips": [{"subnet_id": "mysubnetid-70ec", "ip_address": "192.168.9.2"}], "id": "a000228d-b40b-4124-8394-a4082ae1b76b", "security_groups": ["5c6f529d-3186-4c36-84c0-af28b8daac7b"], "device_id": "" }]}) def mock_list_instance_ports(self, refid): neutronclient.Client.list_ports(device_id=refid).AndReturn( {"ports": [{ "status": "DOWN", "binding:host_id": "null", "name": "wp-NIC-yu7fc7l4g5p6", "admin_state_up": True, "network_id": "22c26451-cf27-4d48-9031-51f5e397b84e", "tenant_id": "ecf538ec1729478fa1f97f1bf4fdcf7b", "binding:vif_type": "ovs", "device_owner": "", "binding:capabilities": {"port_filter": True}, "mac_address": "fa:16:3e:62:2d:4f", "fixed_ips": [{"subnet_id": "mysubnetid-70ec", "ip_address": "192.168.9.2"}], "id": "a000228d-b40b-4124-8394-a4082ae1b76c", "security_groups": ["5c6f529d-3186-4c36-84c0-af28b8daac7b"], "device_id": refid }]}) def mock_router_for_vpc(self): vpc_name = utils.PhysName('test_stack', 'the_vpc') neutronclient.Client.list_routers(name=vpc_name).AndReturn({ "routers": [{ "status": "ACTIVE", "external_gateway_info": { "network_id": "zzzz", "enable_snat": True}, "name": vpc_name, "admin_state_up": True, "tenant_id": "3e21026f2dc94372b105808c0e721661", "routes": [], "id": "bbbb" }] }) def mock_no_router_for_vpc(self): vpc_name = utils.PhysName('test_stack', 'the_vpc') neutronclient.Client.list_routers(name=vpc_name).AndReturn({ "routers": [] }) def test_association_allocationid(self): self.mock_create_gateway_attachment() self.mock_show_network() self.mock_router_for_vpc() self.mock_create_floatingip() self.mock_list_ports() self.mock_show_floatingip('fc68ea2c-b60b-4b4f-bd82-94ec81110766') self.mock_update_floatingip(port_id='the_nic', with_address=False) self.mock_update_floatingip(delete_assc=True, with_address=False) self.mock_delete_floatingip() self.m.ReplayAll() t = template_format.parse(eip_template_ipassoc2) stack = utils.parse_stack(t) rsrc = self.create_eip(t, stack, 'the_eip') association = self.create_association(t, stack, 'IPAssoc') scheduler.TaskRunner(association.delete)() scheduler.TaskRunner(rsrc.delete)() self.m.VerifyAll() def test_association_allocationid_with_instance(self): self._mock_server() self.mock_show_network() self.mock_create_floatingip() self.mock_list_instance_ports('1fafbe59-2332-4f5f-bfa4-517b4d6c1b65') self.mock_no_router_for_vpc() self.mock_update_floatingip( port_id='a000228d-b40b-4124-8394-a4082ae1b76c', with_address=False) self.mock_update_floatingip(delete_assc=True, with_address=False) self.mock_delete_floatingip() self.m.ReplayAll() t = template_format.parse(eip_template_ipassoc3) stack = utils.parse_stack(t) rsrc = self.create_eip(t, stack, 'the_eip') association = self.create_association(t, stack, 'IPAssoc') scheduler.TaskRunner(association.delete)() scheduler.TaskRunner(rsrc.delete)() self.m.VerifyAll() def test_validate_properties_EIP_and_AllocationId(self): self._mock_server() self.m.ReplayAll() template, stack = self._setup_test_stack_validate( stack_name='validate_EIP_AllocationId') properties = template.t['Resources']['IPAssoc']['Properties'] # test with EIP and AllocationId properties['AllocationId'] = 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' expected = ("Either 'EIP' or 'AllocationId' must be provided.") self._validate_properties(stack, template, expected) # test without EIP and AllocationId properties.pop('AllocationId') properties.pop('EIP') self._validate_properties(stack, template, expected) self.m.VerifyAll() def 
test_validate_EIP_and_InstanceId(self): self._mock_server() self.m.ReplayAll() template, stack = self._setup_test_stack_validate( stack_name='validate_EIP_InstanceId') properties = template.t['Resources']['IPAssoc']['Properties'] # test with EIP and no InstanceId properties.pop('InstanceId') expected = ("Must specify 'InstanceId' if you specify 'EIP'.") self._validate_properties(stack, template, expected) self.m.VerifyAll() def test_validate_without_NetworkInterfaceId_and_InstanceId(self): self._mock_server() self.m.ReplayAll() template, stack = self._setup_test_stack_validate( stack_name='validate_EIP_InstanceId') properties = template.t['Resources']['IPAssoc']['Properties'] # test without NetworkInterfaceId and InstanceId properties.pop('InstanceId') properties.pop('EIP') allocation_id = '1fafbe59-2332-4f5f-bfa4-517b4d6c1b65' properties['AllocationId'] = allocation_id resource_defns = template.resource_definitions(stack) rsrc = eip.ElasticIpAssociation('validate_eip_ass', resource_defns['IPAssoc'], stack) exc = self.assertRaises(exception.PropertyUnspecifiedError, rsrc.validate) self.assertIn('At least one of the following properties ' 'must be specified: InstanceId, NetworkInterfaceId', six.text_type(exc)) self.m.VerifyAll() def test_delete_association_successful_if_create_failed(self): self._mock_server(mock_interface=True) self.mock_create_floatingip() self.mock_show_floatingip('fc68ea2c-b60b-4b4f-bd82-94ec81110766') self.mock_list_floatingips() self.mock_update_floatingip(ex=q_exceptions.NotFound('Not Found')) self.m.ReplayAll() t = template_format.parse(eip_template_ipassoc) stack = utils.parse_stack(t) self.create_eip(t, stack, 'IPAddress') resource_defns = stack.t.resource_definitions(stack) rsrc = eip.ElasticIpAssociation('IPAssoc', resource_defns['IPAssoc'], stack) self.assertIsNone(rsrc.validate()) self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_association_with_InstanceId(self): self._mock_server(mock_interface=True) self.mock_create_floatingip() self.mock_list_floatingips() self.mock_update_floatingip() self.mock_list_floatingips() self.mock_update_floatingip() self.m.ReplayAll() t = template_format.parse(eip_template_ipassoc) stack = utils.parse_stack(t) self.create_eip(t, stack, 'IPAddress') ass = self.create_association(t, stack, 'IPAssoc') self.assertEqual('11.0.0.1', ass.properties['EIP']) server_update = self.fc.servers.list()[1] self._mock_server(mock_interface=True, mock_server=server_update) # update with the new InstanceId props = copy.deepcopy(ass.properties.data) update_server_id = '5678' props['InstanceId'] = update_server_id update_snippet = rsrc_defn.ResourceDefinition(ass.name, ass.type(), stack.t.parse(stack.defn, props)) scheduler.TaskRunner(ass.update, update_snippet)() self.assertEqual((ass.UPDATE, ass.COMPLETE), ass.state) self.m.VerifyAll() def test_update_association_with_EIP(self): self._mock_server(mock_interface=True) self.mock_create_floatingip() self.mock_list_floatingips() self.mock_update_floatingip() self.mock_list_floatingips() self.mock_update_floatingip(delete_assc=True) self.mock_list_floatingips(ip_addr='11.0.0.2') self.mock_update_floatingip() self.m.ReplayAll() t = template_format.parse(eip_template_ipassoc) stack = utils.parse_stack(t) self.create_eip(t, stack, 'IPAddress') ass = self.create_association(t, stack, 
'IPAssoc')

        # update with the new EIP
        props = copy.deepcopy(ass.properties.data)
        update_eip = '11.0.0.2'
        props['EIP'] = update_eip
        update_snippet = rsrc_defn.ResourceDefinition(ass.name, ass.type(),
                                                      stack.t.parse(
                                                          stack.defn, props))
        scheduler.TaskRunner(ass.update, update_snippet)()
        self.assertEqual((ass.UPDATE, ass.COMPLETE), ass.state)

        self.m.VerifyAll()

    def test_update_association_with_AllocationId_or_EIP(self):
        self._mock_server(mock_interface=True)
        self.mock_create_floatingip()
        self.mock_list_instance_ports('WebServer')
        self.mock_show_network()
        self.mock_no_router_for_vpc()
        self.mock_list_floatingips()
        self.mock_update_floatingip()
        self.mock_list_floatingips()
        self.mock_update_floatingip(delete_assc=True)
        self.mock_update_floatingip(
            port_id='a000228d-b40b-4124-8394-a4082ae1b76c',
            with_address=False)
        self.mock_list_floatingips(ip_addr='11.0.0.2')
        self.mock_update_floatingip(delete_assc=True, with_address=False)
        self.mock_update_floatingip()
        self.m.ReplayAll()

        t = template_format.parse(eip_template_ipassoc)
        stack = utils.parse_stack(t)
        self.create_eip(t, stack, 'IPAddress')
        ass = self.create_association(t, stack, 'IPAssoc')
        self.assertEqual('11.0.0.1', ass.properties['EIP'])

        # change EIP to AllocationId
        props = copy.deepcopy(ass.properties.data)
        update_allocationId = 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
        props['AllocationId'] = update_allocationId
        props.pop('EIP')
        update_snippet = rsrc_defn.ResourceDefinition(ass.name, ass.type(),
                                                      stack.t.parse(
                                                          stack.defn, props))
        scheduler.TaskRunner(ass.update, update_snippet)()
        self.assertEqual((ass.UPDATE, ass.COMPLETE), ass.state)
        stk_defn.update_resource_data(stack.defn, ass.name, ass.node_data())

        # change AllocationId to EIP
        props = copy.deepcopy(ass.properties.data)
        update_eip = '11.0.0.2'
        props['EIP'] = update_eip
        props.pop('AllocationId')
        update_snippet = rsrc_defn.ResourceDefinition(ass.name, ass.type(),
                                                      stack.t.parse(
                                                          stack.defn, props))
        scheduler.TaskRunner(ass.update, update_snippet)()
        self.assertEqual((ass.UPDATE, ass.COMPLETE), ass.state)

        self.m.VerifyAll()

    def test_update_association_needs_update_InstanceId(self):
        self._mock_server(mock_interface=True)
        self.mock_create_floatingip()
        self.mock_list_floatingips()
        self.mock_show_floatingip('fc68ea2c-b60b-4b4f-bd82-94ec81110766')
        self.mock_update_floatingip()
        self.m.ReplayAll()

        t = template_format.parse(eip_template_ipassoc)
        stack = utils.parse_stack(t)
        self.create_eip(t, stack, 'IPAddress')
        before_props = {'InstanceId': {'Ref': 'WebServer'},
                        'EIP': '11.0.0.1'}
        after_props = {'InstanceId': {'Ref': 'WebServer2'},
                       'EIP': '11.0.0.1'}
        before = self.create_association(t, stack, 'IPAssoc')
        update_server = self.fc.servers.list()[1]
        self._mock_server(mock_interface=False, mock_server=update_server)
        after = rsrc_defn.ResourceDefinition(before.name, before.type(),
                                             after_props)
        # An InstanceId-only change is handled in place, so _needs_update
        # should report True rather than raising UpdateReplace.
        self.assertTrue(before._needs_update(after, before, after_props,
                                             before_props, None))

    def test_update_association_needs_update_InstanceId_EIP(self):
        self._mock_server(mock_interface=True)
        self.mock_create_floatingip()
        self.mock_list_floatingips()
        self.mock_show_floatingip('fc68ea2c-b60b-4b4f-bd82-94ec81110766')
        self.mock_update_floatingip()
        self.m.ReplayAll()

        t = template_format.parse(eip_template_ipassoc)
        stack = utils.parse_stack(t)
        self.create_eip(t, stack, 'IPAddress')
        after_props = {'InstanceId': '5678',
                       'EIP': '11.0.0.2'}
        before = self.create_association(t, stack, 'IPAssoc')
        update_server = self.fc.servers.list()[1]
        self._mock_server(mock_interface=False, mock_server=update_server)
        after = 
rsrc_defn.ResourceDefinition(before.name, before.type(), after_props) updater = scheduler.TaskRunner(before.update, after) self.assertRaises(resource.UpdateReplace, updater) def test_update_association_with_NetworkInterfaceId_or_InstanceId(self): self.mock_create_floatingip() self.mock_list_ports() self.mock_show_network() self.mock_no_router_for_vpc() self.mock_update_floatingip(port_id='the_nic', with_address=False) self.mock_list_ports(id='a000228d-b40b-4124-8394-a4082ae1b76b') self.mock_show_network() self.mock_no_router_for_vpc() self.mock_update_floatingip( port_id='a000228d-b40b-4124-8394-a4082ae1b76b', with_address=False) self.mock_list_instance_ports('5678') self.mock_show_network() self.mock_no_router_for_vpc() self.mock_update_floatingip( port_id='a000228d-b40b-4124-8394-a4082ae1b76c', with_address=False) self.m.ReplayAll() t = template_format.parse(eip_template_ipassoc2) stack = utils.parse_stack(t) self.create_eip(t, stack, 'the_eip') ass = self.create_association(t, stack, 'IPAssoc') # update with the new NetworkInterfaceId props = copy.deepcopy(ass.properties.data) update_networkInterfaceId = 'a000228d-b40b-4124-8394-a4082ae1b76b' props['NetworkInterfaceId'] = update_networkInterfaceId update_snippet = rsrc_defn.ResourceDefinition(ass.name, ass.type(), stack.t.parse(stack.defn, props)) scheduler.TaskRunner(ass.update, update_snippet)() self.assertEqual((ass.UPDATE, ass.COMPLETE), ass.state) # update with the InstanceId update_server = self.fc.servers.list()[1] self._mock_server(mock_server=update_server) props = copy.deepcopy(ass.properties.data) instance_id = '5678' props.pop('NetworkInterfaceId') props['InstanceId'] = instance_id update_snippet = rsrc_defn.ResourceDefinition(ass.name, ass.type(), stack.t.parse(stack.defn, props)) scheduler.TaskRunner(ass.update, update_snippet)() self.assertEqual((ass.UPDATE, ass.COMPLETE), ass.state) self.m.VerifyAll() def test_eip_allocation_refid_resource_name(self): t = template_format.parse(eip_template_ipassoc) stack = utils.parse_stack(t) rsrc = stack['IPAssoc'] rsrc.id = '123' rsrc.uuid = '9bfb9456-3fe8-41f4-b318-9dba18eeef74' rsrc.action = 'CREATE' expected = '%s-%s-%s' % (rsrc.stack.name, rsrc.name, short_id.get_id(rsrc.uuid)) self.assertEqual(expected, rsrc.FnGetRefId()) def test_eip_allocation_refid_resource_id(self): t = template_format.parse(eip_template_ipassoc) stack = utils.parse_stack(t) rsrc = stack['IPAssoc'] rsrc.resource_id = 'phy-rsrc-id' self.assertEqual('phy-rsrc-id', rsrc.FnGetRefId()) def test_eip_allocation_refid_convergence_cache_data(self): t = template_format.parse(eip_template_ipassoc) cache_data = {'IPAssoc': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'convg_xyz' })} stack = utils.parse_stack(t, cache_data=cache_data) rsrc = stack.defn['IPAssoc'] self.assertEqual('convg_xyz', rsrc.FnGetRefId()) heat-10.0.2/heat/tests/aws/test_s3.py0000666000175000017500000002661313343562351017367 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg import six import swiftclient.client as sc from heat.common import exception from heat.common import template_format from heat.engine.resources.aws.s3 import s3 from heat.engine import scheduler from heat.tests import common from heat.tests import utils swift_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Template to test S3 Bucket resources", "Resources" : { "S3BucketWebsite" : { "Type" : "AWS::S3::Bucket", "DeletionPolicy" : "Delete", "Properties" : { "AccessControl" : "PublicRead", "WebsiteConfiguration" : { "IndexDocument" : "index.html", "ErrorDocument" : "error.html" } } }, "SwiftContainer": { "Type": "OS::Swift::Container", "Properties": { "S3Bucket": {"Ref" : "S3Bucket"}, } }, "S3Bucket" : { "Type" : "AWS::S3::Bucket", "Properties" : { "AccessControl" : "Private" } }, "S3Bucket_with_tags" : { "Type" : "AWS::S3::Bucket", "Properties" : { "Tags" : [{"Key": "greeting", "Value": "hello"}, {"Key": "location", "Value": "here"}] } } } } ''' class s3Test(common.HeatTestCase): def setUp(self): super(s3Test, self).setUp() self.m.CreateMock(sc.Connection) self.m.StubOutWithMock(sc.Connection, 'put_container') self.m.StubOutWithMock(sc.Connection, 'get_container') self.m.StubOutWithMock(sc.Connection, 'delete_container') self.m.StubOutWithMock(sc.Connection, 'get_auth') def create_resource(self, t, stack, resource_name): resource_defns = stack.t.resource_definitions(stack) rsrc = s3.S3Bucket('test_resource', resource_defns[resource_name], stack) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) return rsrc def test_attributes(self): t = template_format.parse(swift_template) stack = utils.parse_stack(t) container_name = utils.PhysName(stack.name, 'test_resource') sc.Connection.put_container( container_name, {'X-Container-Write': 'test_tenant:test_username', 'X-Container-Read': 'test_tenant:test_username'} ).AndReturn(None) sc.Connection.get_auth().MultipleTimes().AndReturn( ('http://server.test:8080/v_2', None)) sc.Connection.delete_container(container_name).AndReturn(None) self.m.ReplayAll() rsrc = self.create_resource(t, stack, 'S3Bucket') ref_id = rsrc.FnGetRefId() self.assertEqual(container_name, ref_id) self.assertEqual('server.test', rsrc.FnGetAtt('DomainName')) url = 'http://server.test:8080/v_2/%s' % ref_id self.assertEqual(url, rsrc.FnGetAtt('WebsiteURL')) self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'Foo') scheduler.TaskRunner(rsrc.delete)() self.m.VerifyAll() def test_public_read(self): t = template_format.parse(swift_template) properties = t['Resources']['S3Bucket']['Properties'] properties['AccessControl'] = 'PublicRead' stack = utils.parse_stack(t) container_name = utils.PhysName(stack.name, 'test_resource') sc.Connection.put_container( utils.PhysName(stack.name, 'test_resource'), {'X-Container-Write': 'test_tenant:test_username', 'X-Container-Read': '.r:*'}).AndReturn(None) sc.Connection.delete_container( container_name).AndReturn(None) self.m.ReplayAll() rsrc = self.create_resource(t, stack, 'S3Bucket') scheduler.TaskRunner(rsrc.delete)() self.m.VerifyAll() def test_tags(self): t = template_format.parse(swift_template) stack = utils.parse_stack(t) container_name = utils.PhysName(stack.name, 'test_resource') sc.Connection.put_container( utils.PhysName(stack.name, 'test_resource'), {'X-Container-Write': 'test_tenant:test_username', 'X-Container-Read': 
'test_tenant:test_username', 'X-Container-Meta-S3-Tag-greeting': 'hello', 'X-Container-Meta-S3-Tag-location': 'here'}).AndReturn(None) sc.Connection.delete_container( container_name).AndReturn(None) self.m.ReplayAll() rsrc = self.create_resource(t, stack, 'S3Bucket_with_tags') scheduler.TaskRunner(rsrc.delete)() self.m.VerifyAll() def test_public_read_write(self): t = template_format.parse(swift_template) properties = t['Resources']['S3Bucket']['Properties'] properties['AccessControl'] = 'PublicReadWrite' stack = utils.parse_stack(t) container_name = utils.PhysName(stack.name, 'test_resource') sc.Connection.put_container( container_name, {'X-Container-Write': '.r:*', 'X-Container-Read': '.r:*'}).AndReturn(None) sc.Connection.delete_container( container_name).AndReturn(None) self.m.ReplayAll() rsrc = self.create_resource(t, stack, 'S3Bucket') scheduler.TaskRunner(rsrc.delete)() self.m.VerifyAll() def test_authenticated_read(self): t = template_format.parse(swift_template) properties = t['Resources']['S3Bucket']['Properties'] properties['AccessControl'] = 'AuthenticatedRead' stack = utils.parse_stack(t) container_name = utils.PhysName(stack.name, 'test_resource') sc.Connection.put_container( container_name, {'X-Container-Write': 'test_tenant:test_username', 'X-Container-Read': 'test_tenant'}).AndReturn(None) sc.Connection.delete_container(container_name).AndReturn(None) self.m.ReplayAll() rsrc = self.create_resource(t, stack, 'S3Bucket') scheduler.TaskRunner(rsrc.delete)() self.m.VerifyAll() def test_website(self): t = template_format.parse(swift_template) stack = utils.parse_stack(t) container_name = utils.PhysName(stack.name, 'test_resource') sc.Connection.put_container( container_name, {'X-Container-Meta-Web-Error': 'error.html', 'X-Container-Meta-Web-Index': 'index.html', 'X-Container-Write': 'test_tenant:test_username', 'X-Container-Read': '.r:*'}).AndReturn(None) sc.Connection.delete_container(container_name).AndReturn(None) self.m.ReplayAll() rsrc = self.create_resource(t, stack, 'S3BucketWebsite') scheduler.TaskRunner(rsrc.delete)() self.m.VerifyAll() def test_delete_exception(self): t = template_format.parse(swift_template) stack = utils.parse_stack(t) container_name = utils.PhysName(stack.name, 'test_resource') sc.Connection.put_container( container_name, {'X-Container-Write': 'test_tenant:test_username', 'X-Container-Read': 'test_tenant:test_username'}).AndReturn(None) sc.Connection.delete_container(container_name).AndRaise( sc.ClientException('Test delete failure')) self.m.ReplayAll() rsrc = self.create_resource(t, stack, 'S3Bucket') self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.m.VerifyAll() def test_delete_not_found(self): t = template_format.parse(swift_template) stack = utils.parse_stack(t) container_name = utils.PhysName(stack.name, 'test_resource') sc.Connection.put_container( container_name, {'X-Container-Write': 'test_tenant:test_username', 'X-Container-Read': 'test_tenant:test_username'}).AndReturn(None) sc.Connection.delete_container(container_name).AndRaise( sc.ClientException('Its gone', http_status=404)) self.m.ReplayAll() rsrc = self.create_resource(t, stack, 'S3Bucket') scheduler.TaskRunner(rsrc.delete)() self.m.VerifyAll() def test_delete_conflict_not_empty(self): t = template_format.parse(swift_template) stack = utils.parse_stack(t) container_name = utils.PhysName(stack.name, 'test_resource') sc.Connection.put_container( container_name, {'X-Container-Write': 'test_tenant:test_username', 'X-Container-Read': 
'test_tenant:test_username'}).AndReturn(None) sc.Connection.delete_container(container_name).AndRaise( sc.ClientException('Not empty', http_status=409)) sc.Connection.get_container(container_name).AndReturn( ({'name': container_name}, [{'name': 'test_object'}])) self.m.ReplayAll() rsrc = self.create_resource(t, stack, 'S3Bucket') deleter = scheduler.TaskRunner(rsrc.delete) ex = self.assertRaises(exception.ResourceFailure, deleter) self.assertIn("ResourceActionNotSupported: resources.test_resource: " "The bucket you tried to delete is not empty", six.text_type(ex)) self.m.VerifyAll() def test_delete_conflict_empty(self): cfg.CONF.set_override('action_retry_limit', 0) t = template_format.parse(swift_template) stack = utils.parse_stack(t) container_name = utils.PhysName(stack.name, 'test_resource') sc.Connection.put_container( container_name, {'X-Container-Write': 'test_tenant:test_username', 'X-Container-Read': 'test_tenant:test_username'}).AndReturn(None) sc.Connection.delete_container(container_name).AndRaise( sc.ClientException('Conflict', http_status=409)) sc.Connection.get_container(container_name).AndReturn( ({'name': container_name}, [])) self.m.ReplayAll() rsrc = self.create_resource(t, stack, 'S3Bucket') deleter = scheduler.TaskRunner(rsrc.delete) ex = self.assertRaises(exception.ResourceFailure, deleter) self.assertIn("Conflict", six.text_type(ex)) self.m.VerifyAll() def test_delete_retain(self): t = template_format.parse(swift_template) bucket = t['Resources']['S3Bucket'] bucket['DeletionPolicy'] = 'Retain' stack = utils.parse_stack(t) # first run, with retain policy sc.Connection.put_container( utils.PhysName(stack.name, 'test_resource'), {'X-Container-Write': 'test_tenant:test_username', 'X-Container-Read': 'test_tenant:test_username'}).AndReturn(None) self.m.ReplayAll() rsrc = self.create_resource(t, stack, 'S3Bucket') scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() heat-10.0.2/heat/tests/aws/test_waitcondition.py0000666000175000017500000006573313343562351021723 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
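# A WaitConditionHandle signal is a metadata dict of the form
#
#     {'Data': ..., 'Reason': ..., 'Status': 'SUCCESS'|'FAILURE',
#      'UniqueId': ...}
#
# handle_signal() rejects payloads that omit any of these keys or carry any
# other Status value; the tests below drive stack.create() with get_status()
# stubbed to simulate zero or more such signals arriving.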
import copy import datetime import json import uuid import mock from oslo_utils import timeutils import six from six.moves.urllib import parse from heat.common import exception from heat.common import identifier from heat.common import template_format from heat.engine import environment from heat.engine import node_data from heat.engine.resources.aws.cfn import wait_condition_handle as aws_wch from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import stk_defn from heat.engine import template as tmpl from heat.objects import resource as resource_objects from heat.tests import common from heat.tests import utils test_template_waitcondition = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Just a WaitCondition.", "Parameters" : {}, "Resources" : { "WaitHandle" : { "Type" : "AWS::CloudFormation::WaitConditionHandle" }, "WaitForTheHandle" : { "Type" : "AWS::CloudFormation::WaitCondition", "Properties" : { "Handle" : {"Ref" : "WaitHandle"}, "Timeout" : "5" } } } } ''' test_template_wc_count = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Just a WaitCondition.", "Parameters" : {}, "Resources" : { "WaitHandle" : { "Type" : "AWS::CloudFormation::WaitConditionHandle" }, "WaitForTheHandle" : { "Type" : "AWS::CloudFormation::WaitCondition", "Properties" : { "Handle" : {"Ref" : "WaitHandle"}, "Timeout" : "5", "Count" : "3" } } } } ''' class WaitConditionTest(common.HeatTestCase): def create_stack(self, stack_id=None, template=test_template_waitcondition, params=None, stub=True, stub_status=True): params = params or {} temp = template_format.parse(template) template = tmpl.Template(temp, env=environment.Environment(params)) ctx = utils.dummy_context(tenant_id='test_tenant') stack = parser.Stack(ctx, 'test_stack', template, disable_rollback=True) # Stub out the stack ID so we have a known value if stack_id is None: stack_id = str(uuid.uuid4()) self.stack_id = stack_id with utils.UUIDStub(self.stack_id): stack.store() if stub: id = identifier.ResourceIdentifier('test_tenant', stack.name, stack.id, '', 'WaitHandle') self.m.StubOutWithMock(aws_wch.WaitConditionHandle, 'identifier') aws_wch.WaitConditionHandle.identifier( ).MultipleTimes().AndReturn(id) if stub_status: self.m.StubOutWithMock(aws_wch.WaitConditionHandle, 'get_status') return stack def test_post_success_to_handle(self): self.stack = self.create_stack() aws_wch.WaitConditionHandle.get_status().AndReturn([]) aws_wch.WaitConditionHandle.get_status().AndReturn([]) aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS']) self.m.ReplayAll() self.stack.create() rsrc = self.stack['WaitForTheHandle'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) r = resource_objects.Resource.get_by_name_and_stack( self.stack.context, 'WaitHandle', self.stack.id) self.assertEqual('WaitHandle', r.name) self.m.VerifyAll() def test_post_failure_to_handle(self): self.stack = self.create_stack() aws_wch.WaitConditionHandle.get_status().AndReturn([]) aws_wch.WaitConditionHandle.get_status().AndReturn([]) aws_wch.WaitConditionHandle.get_status().AndReturn(['FAILURE']) self.m.ReplayAll() self.stack.create() rsrc = self.stack['WaitForTheHandle'] self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) reason = rsrc.status_reason self.assertTrue(reason.startswith('WaitConditionFailure:')) r = resource_objects.Resource.get_by_name_and_stack( self.stack.context, 'WaitHandle', self.stack.id) self.assertEqual('WaitHandle', r.name) self.m.VerifyAll() def 
test_post_success_to_handle_count(self): self.stack = self.create_stack(template=test_template_wc_count) aws_wch.WaitConditionHandle.get_status().AndReturn([]) aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS']) aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS', 'SUCCESS']) aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS', 'SUCCESS', 'SUCCESS']) self.m.ReplayAll() self.stack.create() rsrc = self.stack['WaitForTheHandle'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) r = resource_objects.Resource.get_by_name_and_stack( self.stack.context, 'WaitHandle', self.stack.id) self.assertEqual('WaitHandle', r.name) self.m.VerifyAll() def test_post_failure_to_handle_count(self): self.stack = self.create_stack(template=test_template_wc_count) aws_wch.WaitConditionHandle.get_status().AndReturn([]) aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS']) aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS', 'FAILURE']) self.m.ReplayAll() self.stack.create() rsrc = self.stack['WaitForTheHandle'] self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) reason = rsrc.status_reason self.assertTrue(reason.startswith('WaitConditionFailure:')) r = resource_objects.Resource.get_by_name_and_stack( self.stack.context, 'WaitHandle', self.stack.id) self.assertEqual('WaitHandle', r.name) self.m.VerifyAll() def test_timeout(self): self.stack = self.create_stack() # Avoid the stack create exercising the timeout code at the same time self.m.StubOutWithMock(self.stack, 'timeout_secs') self.stack.timeout_secs().MultipleTimes().AndReturn(None) now = timeutils.utcnow() periods = [0, 0.001, 0.1, 4.1, 5.1] periods.extend(range(10, 100, 5)) fake_clock = [now + datetime.timedelta(0, t) for t in periods] timeutils.set_time_override(fake_clock) self.addCleanup(timeutils.clear_time_override) aws_wch.WaitConditionHandle.get_status( ).MultipleTimes().AndReturn([]) self.m.ReplayAll() self.stack.create() rsrc = self.stack['WaitForTheHandle'] self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) reason = rsrc.status_reason self.assertTrue(reason.startswith('WaitConditionTimeout:')) self.m.VerifyAll() def test_FnGetAtt(self): self.stack = self.create_stack() aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS']) self.m.ReplayAll() self.stack.create() rsrc = self.stack['WaitForTheHandle'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) wc_att = rsrc.FnGetAtt('Data') self.assertEqual(six.text_type({}), wc_att) handle = self.stack['WaitHandle'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), handle.state) test_metadata = {'Data': 'foo', 'Reason': 'bar', 'Status': 'SUCCESS', 'UniqueId': '123'} ret = handle.handle_signal(test_metadata) wc_att = rsrc.FnGetAtt('Data') self.assertEqual('{"123": "foo"}', wc_att) self.assertEqual('status:SUCCESS reason:bar', ret) test_metadata = {'Data': 'dog', 'Reason': 'cat', 'Status': 'SUCCESS', 'UniqueId': '456'} ret = handle.handle_signal(test_metadata) wc_att = rsrc.FnGetAtt('Data') self.assertIsInstance(wc_att, six.string_types) self.assertEqual({"123": "foo", "456": "dog"}, json.loads(wc_att)) self.assertEqual('status:SUCCESS reason:cat', ret) self.m.VerifyAll() def test_FnGetRefId_resource_name(self): self.stack = self.create_stack() rsrc = self.stack['WaitHandle'] self.assertEqual('WaitHandle', rsrc.FnGetRefId()) @mock.patch.object(aws_wch.WaitConditionHandle, '_get_ec2_signed_url') def test_FnGetRefId_signed_url(self, mock_get_signed_url): self.stack = self.create_stack() rsrc = self.stack['WaitHandle'] 
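        # Giving the handle a resource_id makes it look like a created
        # resource, so FnGetRefId() returns the (mocked) EC2-signed URL
        # rather than falling back to the resource name.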
rsrc.resource_id = '123' mock_get_signed_url.return_value = 'http://signed_url' self.assertEqual('http://signed_url', rsrc.FnGetRefId()) def test_FnGetRefId_convergence_cache_data(self): t = template_format.parse(test_template_waitcondition) template = tmpl.Template(t) stack = parser.Stack(utils.dummy_context(), 'test', template, cache_data={ 'WaitHandle': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'http://convg_signed_url' })}) rsrc = stack.defn['WaitHandle'] self.assertEqual('http://convg_signed_url', rsrc.FnGetRefId()) def test_validate_handle_url_bad_stackid(self): self.m.ReplayAll() stack_id = 'STACK_HUBSID_1234' t = json.loads(test_template_waitcondition) badhandle = ("http://server.test:8000/v1/waitcondition/" + "arn%3Aopenstack%3Aheat%3A%3Atest_tenant" + "%3Astacks%2Ftest_stack%2F" + "bad1" + "%2Fresources%2FWaitHandle") t['Resources']['WaitForTheHandle']['Properties']['Handle'] = badhandle self.stack = self.create_stack(template=json.dumps(t), stub=False, stack_id=stack_id) self.m.ReplayAll() rsrc = self.stack['WaitForTheHandle'] self.assertRaises(ValueError, rsrc.handle_create) self.m.VerifyAll() def test_validate_handle_url_bad_stackname(self): self.m.ReplayAll() stack_id = 'STACKABCD1234' t = json.loads(test_template_waitcondition) badhandle = ("http://server.test:8000/v1/waitcondition/" + "arn%3Aopenstack%3Aheat%3A%3Atest_tenant" + "%3Astacks%2FBAD_stack%2F" + stack_id + "%2Fresources%2FWaitHandle") t['Resources']['WaitForTheHandle']['Properties']['Handle'] = badhandle self.stack = self.create_stack(template=json.dumps(t), stub=False, stack_id=stack_id) rsrc = self.stack['WaitForTheHandle'] self.assertRaises(ValueError, rsrc.handle_create) self.m.VerifyAll() def test_validate_handle_url_bad_tenant(self): self.m.ReplayAll() stack_id = 'STACKABCD1234' t = json.loads(test_template_waitcondition) badhandle = ("http://server.test:8000/v1/waitcondition/" + "arn%3Aopenstack%3Aheat%3A%3ABAD_tenant" + "%3Astacks%2Ftest_stack%2F" + stack_id + "%2Fresources%2FWaitHandle") t['Resources']['WaitForTheHandle']['Properties']['Handle'] = badhandle self.stack = self.create_stack(stack_id=stack_id, template=json.dumps(t), stub=False) rsrc = self.stack['WaitForTheHandle'] self.assertRaises(ValueError, rsrc.handle_create) self.m.VerifyAll() def test_validate_handle_url_bad_resource(self): self.m.ReplayAll() stack_id = 'STACK_HUBR_1234' t = json.loads(test_template_waitcondition) badhandle = ("http://server.test:8000/v1/waitcondition/" + "arn%3Aopenstack%3Aheat%3A%3Atest_tenant" + "%3Astacks%2Ftest_stack%2F" + stack_id + "%2Fresources%2FBADHandle") t['Resources']['WaitForTheHandle']['Properties']['Handle'] = badhandle self.stack = self.create_stack(stack_id=stack_id, template=json.dumps(t), stub=False) rsrc = self.stack['WaitForTheHandle'] self.assertRaises(ValueError, rsrc.handle_create) self.m.VerifyAll() def test_validate_handle_url_bad_resource_type(self): self.m.ReplayAll() stack_id = 'STACKABCD1234' t = json.loads(test_template_waitcondition) badhandle = ("http://server.test:8000/v1/waitcondition/" + "arn%3Aopenstack%3Aheat%3A%3Atest_tenant" + "%3Astacks%2Ftest_stack%2F" + stack_id + "%2Fresources%2FWaitForTheHandle") t['Resources']['WaitForTheHandle']['Properties']['Handle'] = badhandle self.stack = self.create_stack(stack_id=stack_id, template=json.dumps(t), stub=False) rsrc = self.stack['WaitForTheHandle'] self.assertRaises(ValueError, rsrc.handle_create) self.m.VerifyAll() class 
    def create_stack(self, stack_name=None, stack_id=None):
        temp = template_format.parse(test_template_waitcondition)
        template = tmpl.Template(temp)
        ctx = utils.dummy_context(tenant_id='test_tenant')
        if stack_name is None:
            stack_name = utils.random_name()
        stack = parser.Stack(ctx, stack_name, template,
                             disable_rollback=True)

        # Stub out the UUID for this test, so we can get an expected signature
        if stack_id is not None:
            with utils.UUIDStub(stack_id):
                stack.store()
        else:
            stack.store()
        self.stack_id = stack.id

        # Stub waitcondition status so all goes CREATE_COMPLETE
        self.m.StubOutWithMock(aws_wch.WaitConditionHandle, 'get_status')
        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])

        id = identifier.ResourceIdentifier('test_tenant', stack.name,
                                           stack.id, '', 'WaitHandle')
        self.m.StubOutWithMock(aws_wch.WaitConditionHandle, 'identifier')
        aws_wch.WaitConditionHandle.identifier().MultipleTimes().AndReturn(id)

        self.m.ReplayAll()
        stack.create()

        return stack

    def test_handle(self):
        stack_id = 'STACKABCD1234'
        stack_name = 'test_stack2'
        created_time = datetime.datetime(2012, 11, 29, 13, 49, 37)
        self.stack = self.create_stack(stack_id=stack_id,
                                       stack_name=stack_name)
        self.m.StubOutWithMock(self.stack.clients.client_plugin('heat'),
                               'get_heat_cfn_url')
        self.stack.clients.client_plugin('heat').get_heat_cfn_url().AndReturn(
            'http://server.test:8000/v1')
        self.m.ReplayAll()

        rsrc = self.stack['WaitHandle']
        self.assertEqual(rsrc.resource_id, rsrc.data().get('user_id'))

        # clear the url
        rsrc.data_set('ec2_signed_url', None, False)

        rsrc.created_time = created_time
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)

        connection_url = "".join([
            'http://server.test:8000/v1/waitcondition/',
            'arn%3Aopenstack%3Aheat%3A%3Atest_tenant%3Astacks%2F',
            'test_stack2%2F', stack_id, '%2Fresources%2F',
            'WaitHandle?'])
        expected_url = "".join([
            connection_url,
            'Timestamp=2012-11-29T13%3A49%3A37Z&',
            'SignatureMethod=HmacSHA256&',
            'AWSAccessKeyId=4567&',
            'SignatureVersion=2&',
            'Signature=',
            'fHyt3XFnHq8%2FSwYaVcHdJka1hz6jdK5mHtgbo8OOKbQ%3D'])

        actual_url = rsrc.FnGetRefId()
        expected_params = parse.parse_qs(expected_url.split("?", 1)[1])
        actual_params = parse.parse_qs(actual_url.split("?", 1)[1])
        self.assertEqual(expected_params, actual_params)
        self.assertTrue(actual_url.startswith(connection_url))
        self.m.VerifyAll()

    def test_handle_signal(self):
        self.stack = self.create_stack()
        rsrc = self.stack['WaitHandle']
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        self.assertEqual(rsrc.resource_id, rsrc.data().get('user_id'))
        test_metadata = {'Data': 'foo', 'Reason': 'bar',
                         'Status': 'SUCCESS', 'UniqueId': '123'}
        rsrc.handle_signal(test_metadata)
        handle_metadata = {u'123': {u'Data': u'foo',
                                    u'Reason': u'bar',
                                    u'Status': u'SUCCESS'}}
        self.assertEqual(handle_metadata, rsrc.metadata_get())
        self.m.VerifyAll()

    def test_handle_signal_invalid(self):
        self.stack = self.create_stack()
        rsrc = self.stack['WaitHandle']
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        self.assertEqual(rsrc.resource_id, rsrc.data().get('user_id'))

        # handle_signal should raise a ValueError if the metadata
        # is missing any of the expected keys
        err_metadata = {'Data': 'foo', 'Status': 'SUCCESS',
                        'UniqueId': '123'}
        self.assertRaises(ValueError, rsrc.handle_signal, err_metadata)

        err_metadata = {'Data': 'foo', 'Reason': 'bar',
                        'UniqueId': '1234'}
        self.assertRaises(ValueError, rsrc.handle_signal, err_metadata)

        err_metadata = {'Data': 'foo', 'Reason': 'bar',
                        'UniqueId': '1234'}
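        # Missing 'Status' again; the repeated malformed payload below must
        # also be rejected.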
self.assertRaises(ValueError, rsrc.handle_signal, err_metadata) err_metadata = {'data': 'foo', 'reason': 'bar', 'status': 'SUCCESS', 'uniqueid': '1234'} self.assertRaises(ValueError, rsrc.handle_signal, err_metadata) # Also any Status other than SUCCESS or FAILURE should be rejected err_metadata = {'Data': 'foo', 'Reason': 'bar', 'Status': 'UCCESS', 'UniqueId': '123'} self.assertRaises(ValueError, rsrc.handle_signal, err_metadata) err_metadata = {'Data': 'foo', 'Reason': 'bar', 'Status': 'wibble', 'UniqueId': '123'} self.assertRaises(ValueError, rsrc.handle_signal, err_metadata) err_metadata = {'Data': 'foo', 'Reason': 'bar', 'Status': 'success', 'UniqueId': '123'} self.assertRaises(ValueError, rsrc.handle_signal, err_metadata) err_metadata = {'Data': 'foo', 'Reason': 'bar', 'Status': 'FAIL', 'UniqueId': '123'} self.assertRaises(ValueError, rsrc.handle_signal, err_metadata) self.m.VerifyAll() def test_get_status(self): self.stack = self.create_stack() rsrc = self.stack['WaitHandle'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertEqual(rsrc.resource_id, rsrc.data().get('user_id')) # UnsetStubs, don't want get_status stubbed anymore.. self.m.VerifyAll() self.m.UnsetStubs() self.assertEqual([], rsrc.get_status()) test_metadata = {'Data': 'foo', 'Reason': 'bar', 'Status': 'SUCCESS', 'UniqueId': '123'} ret = rsrc.handle_signal(test_metadata) self.assertEqual(['SUCCESS'], rsrc.get_status()) self.assertEqual('status:SUCCESS reason:bar', ret) test_metadata = {'Data': 'foo', 'Reason': 'bar2', 'Status': 'SUCCESS', 'UniqueId': '456'} ret = rsrc.handle_signal(test_metadata) self.assertEqual(['SUCCESS', 'SUCCESS'], rsrc.get_status()) self.assertEqual('status:SUCCESS reason:bar2', ret) def test_get_status_reason(self): self.stack = self.create_stack() rsrc = self.stack['WaitHandle'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) test_metadata = {'Data': 'foo', 'Reason': 'bar', 'Status': 'SUCCESS', 'UniqueId': '123'} ret = rsrc.handle_signal(test_metadata) self.assertEqual(['bar'], rsrc.get_status_reason('SUCCESS')) self.assertEqual('status:SUCCESS reason:bar', ret) test_metadata = {'Data': 'dog', 'Reason': 'cat', 'Status': 'SUCCESS', 'UniqueId': '456'} ret = rsrc.handle_signal(test_metadata) self.assertEqual( ['bar', 'cat'], sorted(rsrc.get_status_reason('SUCCESS'))) self.assertEqual('status:SUCCESS reason:cat', ret) test_metadata = {'Data': 'boo', 'Reason': 'hoo', 'Status': 'FAILURE', 'UniqueId': '789'} ret = rsrc.handle_signal(test_metadata) self.assertEqual(['hoo'], rsrc.get_status_reason('FAILURE')) self.assertEqual('status:FAILURE reason:hoo', ret) self.m.VerifyAll() class WaitConditionUpdateTest(common.HeatTestCase): def create_stack(self, temp=None): if temp is None: temp = test_template_wc_count temp_fmt = template_format.parse(temp) template = tmpl.Template(temp_fmt) ctx = utils.dummy_context(tenant_id='test_tenant') stack = parser.Stack(ctx, 'test_stack', template, disable_rollback=True) stack_id = str(uuid.uuid4()) self.stack_id = stack_id with utils.UUIDStub(self.stack_id): stack.store() self.m.StubOutWithMock(aws_wch.WaitConditionHandle, 'get_status') aws_wch.WaitConditionHandle.get_status().AndReturn([]) aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS']) aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS', 'SUCCESS']) aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS', 'SUCCESS', 'SUCCESS']) return stack def get_stack(self, stack_id): ctx = utils.dummy_context(tenant_id='test_tenant') stack = parser.Stack.load(ctx, 
stack_id) self.stack_id = stack_id return stack def test_update(self): self.stack = self.create_stack() self.m.ReplayAll() self.stack.create() rsrc = self.stack['WaitForTheHandle'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() self.m.UnsetStubs() wait_condition_handle = self.stack['WaitHandle'] test_metadata = {'Data': 'foo', 'Reason': 'bar', 'Status': 'SUCCESS', 'UniqueId': '1'} self._handle_signal(wait_condition_handle, test_metadata, 5) uprops = copy.copy(rsrc.properties.data) uprops['Count'] = '5' update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), uprops) updater = scheduler.TaskRunner(rsrc.update, update_snippet) updater() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) def test_update_restored_from_db(self): self.stack = self.create_stack() self.m.ReplayAll() self.stack.create() rsrc = self.stack['WaitForTheHandle'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() self.m.UnsetStubs() handle_stack = self.stack wait_condition_handle = handle_stack['WaitHandle'] test_metadata = {'Data': 'foo', 'Reason': 'bar', 'Status': 'SUCCESS', 'UniqueId': '1'} self._handle_signal(wait_condition_handle, test_metadata, 2) self.stack.store() self.stack = self.get_stack(self.stack_id) rsrc = self.stack['WaitForTheHandle'] self._handle_signal(wait_condition_handle, test_metadata, 3) uprops = copy.copy(rsrc.properties.data) uprops['Count'] = '5' update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), uprops) stk_defn.update_resource_data(self.stack.defn, 'WaitHandle', self.stack['WaitHandle'].node_data()) updater = scheduler.TaskRunner(rsrc.update, update_snippet) updater() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) def _handle_signal(self, rsrc, metadata, times=1): for t in range(times): metadata['UniqueId'] = metadata['UniqueId'] * 2 ret = rsrc.handle_signal(metadata) self.assertEqual("status:%s reason:%s" % (metadata[rsrc.STATUS], metadata[rsrc.REASON]), ret) def test_update_timeout(self): self.stack = self.create_stack() self.m.ReplayAll() self.stack.create() rsrc = self.stack['WaitForTheHandle'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() self.m.UnsetStubs() now = timeutils.utcnow() fake_clock = [now + datetime.timedelta(0, t) for t in (0, 0.001, 0.1, 4.1, 5.1)] timeutils.set_time_override(fake_clock) self.addCleanup(timeutils.clear_time_override) self.m.StubOutWithMock(aws_wch.WaitConditionHandle, 'get_status') aws_wch.WaitConditionHandle.get_status().MultipleTimes().AndReturn([]) self.m.ReplayAll() uprops = copy.copy(rsrc.properties.data) uprops['Count'] = '5' update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), uprops) updater = scheduler.TaskRunner(rsrc.update, update_snippet) ex = self.assertRaises(exception.ResourceFailure, updater) self.assertEqual("WaitConditionTimeout: resources.WaitForTheHandle: " "0 of 5 received", six.text_type(ex)) self.assertEqual(5, rsrc.properties['Count']) self.m.VerifyAll() heat-10.0.2/heat/tests/openstack/0000775000175000017500000000000013343562672016622 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/heat/0000775000175000017500000000000013343562672017543 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/heat/test_value.py0000666000175000017500000002120113343562351022260 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import json from heat.common import exception from heat.common import template_format from heat.engine import environment from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests import utils class TestValue(common.HeatTestCase): simple_template = ''' heat_template_version: '2016-10-14' parameters: param1: type: resources: my_value: type: OS::Heat::Value properties: value: {get_param: param1} my_value2: type: OS::Heat::Value properties: value: {get_attr: [my_value, value]} outputs: myout: value: {get_attr: [my_value2, value]} ''' def get_strict_and_loose_templates(self, param_type): template_loose = template_format.parse(self.simple_template) template_loose['parameters']['param1']['type'] = param_type template_strict = copy.deepcopy(template_loose) template_strict['resources']['my_value']['properties']['type'] \ = param_type template_strict['resources']['my_value2']['properties']['type'] \ = param_type return (template_strict, template_loose) def parse_stack(self, templ_obj): stack_name = 'test_value_stack' stack = parser.Stack(utils.dummy_context(), stack_name, templ_obj) stack.validate() stack.store() return stack def create_stack(self, templ, env=None): if isinstance(templ, str): return self.create_stack(template_format.parse(templ), env=env) if isinstance(templ, dict): tmpl_obj = template.Template(templ, env=env) return self.create_stack(tmpl_obj) assert isinstance(templ, template.Template) stack = self.parse_stack(templ) self.assertIsNone(stack.create()) return stack class TestValueSimple(TestValue): scenarios = [ ('boolean', dict( param1=True, param_type="boolean")), ('list', dict( param1=['a', 'b', 'Z'], param_type="comma_delimited_list")), ('map', dict( param1={'a': 'Z', 'B': 'y'}, param_type="json")), ('number-int', dict( param1=-11, param_type="number")), ('number-float', dict( param1=100.999, param_type="number")), ('string', dict( param1='Perchance to dream', param_type="string")), ] def test_value(self): ts, tl = self.get_strict_and_loose_templates(self.param_type) env = environment.Environment({ 'parameters': {'param1': self.param1}}) for templ_dict in [ts, tl]: stack = self.create_stack(templ_dict, env) self.assertEqual(self.param1, stack['my_value'].FnGetAtt('value')) self.assertEqual(self.param1, stack['my_value2'].FnGetAtt('value')) stack._update_all_resource_data(False, True) self.assertEqual(self.param1, stack.outputs['myout'].get_value()) class TestValueLessSimple(TestValue): template_bad = ''' heat_template_version: '2016-10-14' parameters: param1: type: json resources: my_value: type: OS::Heat::Value properties: value: {get_param: param1} type: number ''' template_map = ''' heat_template_version: '2016-10-14' parameters: param1: type: json param2: type: json resources: my_value: type: OS::Heat::Value properties: value: {get_param: param1} type: json my_value2: type: OS::Heat::Value properties: value: {map_merge: [{get_attr: [my_value, value]}, {get_param: param2}]} type: json ''' template_yaql = ''' heat_template_version: '2016-10-14' parameters: param1: type: number param2: type: 
comma_delimited_list resources: my_value: type: OS::Heat::Value properties: value: {get_param: param1} type: number my_value2: type: OS::Heat::Value properties: value: yaql: expression: $.data.param2.select(int($)).min() data: param2: {get_param: param2} type: number my_value3: type: OS::Heat::Value properties: value: yaql: expression: min($.data.v1,$.data.v2) data: v1: {get_attr: [my_value, value]} v2: {get_attr: [my_value2, value]} ''' def test_validation_fail(self): param1 = {"one": "croissant"} env = environment.Environment({ 'parameters': {'param1': json.dumps(param1)}}) self.assertRaises(exception.StackValidationFailed, self.create_stack, self.template_bad, env) def test_map(self): param1 = {"one": "skipper", "two": "antennae"} param2 = {"one": "monarch", "three": "sky"} env = environment.Environment({ 'parameters': {'param1': json.dumps(param1), 'param2': json.dumps(param2)}}) stack = self.create_stack(self.template_map, env) my_value = stack['my_value'] self.assertEqual(param1, my_value.FnGetAtt('value')) my_value2 = stack['my_value2'] self.assertEqual({"one": "monarch", "two": "antennae", "three": "sky"}, my_value2.FnGetAtt('value')) def test_yaql(self): param1 = -800 param2 = [-8, 0, 4, -11, 2] env = environment.Environment({ 'parameters': {'param1': param1, 'param2': param2}}) stack = self.create_stack(self.template_yaql, env) my_value = stack['my_value'] self.assertEqual(param1, my_value.FnGetAtt('value')) my_value2 = stack['my_value2'] self.assertEqual(min(param2), my_value2.FnGetAtt('value')) my_value3 = stack['my_value3'] self.assertEqual(param1, my_value3.FnGetAtt('value')) class TestValueUpdate(TestValue): scenarios = [ ('boolean-to-number', dict( param1=True, param_type1="boolean", param2=-100.999, param_type2="number")), ('number-to-string', dict( param1=-77, param_type1="number", param2='mellors', param_type2="string")), ('string-to-map', dict( param1='mellors', param_type1="string", param2={'3': 'turbo'}, param_type2="json")), ('map-to-boolean', dict( param1={'hey': 'there'}, param_type1="json", param2=False, param_type2="boolean")), ('list-to-boolean', dict( param1=['hey', '!'], param_type1="comma_delimited_list", param2=True, param_type2="boolean")), ] def test_value(self): ts1, tl1 = self.get_strict_and_loose_templates(self.param_type1) ts2, tl2 = self.get_strict_and_loose_templates(self.param_type2) env1 = environment.Environment({ 'parameters': {'param1': self.param1}}) env2 = environment.Environment({ 'parameters': {'param1': self.param2}}) updates = [(ts1, ts2), (ts1, tl2), (tl1, ts2), (tl1, tl2)] updates_other_way = [(b, a) for a, b in updates] updates.extend(updates_other_way) for t_initial, t_updated in updates: if t_initial == ts1 or t_initial == tl1: p1, p2, e1, e2 = self.param1, self.param2, env1, env2 else: # starting with param2, updating to param1 p2, p1, e2, e1 = self.param1, self.param2, env1, env2 stack = self.create_stack(t_initial, env=e1) self.assertEqual(p1, stack['my_value2'].FnGetAtt('value')) res1_id = stack['my_value'].id res2_id = stack['my_value2'].id res2_uuid = stack['my_value2'].uuid updated_stack = parser.Stack( stack.context, 'updated_stack', template.Template(t_updated, env=e2)) updated_stack.validate() stack.update(updated_stack) self.assertEqual(p2, stack['my_value2'].FnGetAtt('value')) # Make sure resources not replaced after update self.assertEqual(res1_id, stack['my_value'].id) self.assertEqual(res2_id, stack['my_value2'].id) self.assertEqual(res2_uuid, stack['my_value2'].uuid) 
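# A standalone sketch (not part of the upstream suite) of the transformation
# that get_strict_and_loose_templates() above performs: the "loose" variant
# only types the parameter, while the "strict" variant additionally pins each
# OS::Heat::Value resource to the same type. The literal dict below is a
# hand-rolled stand-in for template_format.parse(simple_template).
if __name__ == '__main__':
    import copy as _copy
    import pprint as _pprint

    loose = {
        'heat_template_version': '2016-10-14',
        'parameters': {'param1': {'type': 'number'}},
        'resources': {
            'my_value': {
                'type': 'OS::Heat::Value',
                'properties': {'value': {'get_param': 'param1'}}},
            'my_value2': {
                'type': 'OS::Heat::Value',
                'properties': {'value': {'get_attr': ['my_value', 'value']}}},
        },
    }
    strict = _copy.deepcopy(loose)
    for name in ('my_value', 'my_value2'):
        # Pin the resource's 'type' property, as the strict templates do.
        strict['resources'][name]['properties']['type'] = 'number'
    _pprint.pprint(strict)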
heat-10.0.2/heat/tests/openstack/heat/test_instance_group_update_policy.py0000666000175000017500000002444013343562351027115 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import mock from heat.common import exception from heat.common import template_format from heat.engine.resources.openstack.heat import instance_group as instgrp from heat.engine import rsrc_defn from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils ig_tmpl_without_updt_policy = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Template to create multiple instances.", "Parameters" : {}, "Resources" : { "JobServerGroup" : { "Type" : "OS::Heat::InstanceGroup", "Properties" : { "LaunchConfigurationName" : { "Ref" : "JobServerConfig" }, "Size" : "10", "AvailabilityZones" : ["nova"] } }, "JobServerConfig" : { "Type" : "AWS::AutoScaling::LaunchConfiguration", "Properties": { "ImageId" : "foo", "InstanceType" : "m1.medium", "KeyName" : "test", "SecurityGroups" : [ "sg-1" ], "UserData" : "jsconfig data" } } } } ''' ig_tmpl_with_bad_updt_policy = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Template to create multiple instances.", "Parameters" : {}, "Resources" : { "JobServerGroup" : { "UpdatePolicy" : { "RollingUpdate": "foo" }, "Type" : "OS::Heat::InstanceGroup", "Properties" : { "LaunchConfigurationName" : { "Ref" : "JobServerConfig" }, "Size" : "10", "AvailabilityZones" : ["nova"] } }, "JobServerConfig" : { "Type" : "AWS::AutoScaling::LaunchConfiguration", "Properties": { "ImageId" : "foo", "InstanceType" : "m1.medium", "KeyName" : "test", "SecurityGroups" : [ "sg-1" ], "UserData" : "jsconfig data" } } } } ''' ig_tmpl_with_default_updt_policy = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Template to create multiple instances.", "Parameters" : {}, "Resources" : { "JobServerGroup" : { "UpdatePolicy" : { "RollingUpdate" : { } }, "Type" : "OS::Heat::InstanceGroup", "Properties" : { "LaunchConfigurationName" : { "Ref" : "JobServerConfig" }, "Size" : "10", "AvailabilityZones" : ["nova"] } }, "JobServerConfig" : { "Type" : "AWS::AutoScaling::LaunchConfiguration", "Properties": { "ImageId" : "foo", "InstanceType" : "m1.medium", "KeyName" : "test", "SecurityGroups" : [ "sg-1" ], "UserData" : "jsconfig data" } } } } ''' ig_tmpl_with_updt_policy = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Template to create multiple instances.", "Parameters" : {}, "Resources" : { "JobServerGroup" : { "UpdatePolicy" : { "RollingUpdate" : { "MinInstancesInService" : "1", "MaxBatchSize" : "2", "PauseTime" : "PT1S" } }, "Type" : "OS::Heat::InstanceGroup", "Properties" : { "LaunchConfigurationName" : { "Ref" : "JobServerConfig" }, "Size" : "10", "AvailabilityZones" : ["nova"] } }, "JobServerConfig" : { "Type" : "AWS::AutoScaling::LaunchConfiguration", "Properties": { "ImageId" : "foo", "InstanceType" : "m1.medium", "KeyName" : "test", "SecurityGroups" : [ "sg-1" ], "UserData" : "jsconfig 
data" } } } } ''' class InstanceGroupTest(common.HeatTestCase): def setUp(self): super(InstanceGroupTest, self).setUp() self.fc = fakes_nova.FakeClient() self.stub_ImageConstraint_validate() self.stub_KeypairConstraint_validate() self.stub_FlavorConstraint_validate() def get_launch_conf_name(self, stack, ig_name): return stack[ig_name].properties['LaunchConfigurationName'] def test_parse_without_update_policy(self): tmpl = template_format.parse(ig_tmpl_without_updt_policy) stack = utils.parse_stack(tmpl) stack.validate() grp = stack['JobServerGroup'] self.assertFalse(grp.update_policy['RollingUpdate']) def test_parse_with_update_policy(self): tmpl = template_format.parse(ig_tmpl_with_updt_policy) stack = utils.parse_stack(tmpl) stack.validate() grp = stack['JobServerGroup'] self.assertTrue(grp.update_policy) self.assertEqual(1, len(grp.update_policy)) self.assertIn('RollingUpdate', grp.update_policy) policy = grp.update_policy['RollingUpdate'] self.assertIsNotNone(policy) self.assertGreater(len(policy), 0) self.assertEqual(1, int(policy['MinInstancesInService'])) self.assertEqual(2, int(policy['MaxBatchSize'])) self.assertEqual('PT1S', policy['PauseTime']) def test_parse_with_default_update_policy(self): tmpl = template_format.parse(ig_tmpl_with_default_updt_policy) stack = utils.parse_stack(tmpl) stack.validate() grp = stack['JobServerGroup'] self.assertTrue(grp.update_policy) self.assertEqual(1, len(grp.update_policy)) self.assertIn('RollingUpdate', grp.update_policy) policy = grp.update_policy['RollingUpdate'] self.assertIsNotNone(policy) self.assertGreater(len(policy), 0) self.assertEqual(0, int(policy['MinInstancesInService'])) self.assertEqual(1, int(policy['MaxBatchSize'])) self.assertEqual('PT0S', policy['PauseTime']) def test_parse_with_bad_update_policy(self): tmpl = template_format.parse(ig_tmpl_with_bad_updt_policy) stack = utils.parse_stack(tmpl) self.assertRaises(exception.StackValidationFailed, stack.validate) def test_parse_with_bad_pausetime_in_update_policy(self): tmpl = template_format.parse(ig_tmpl_with_updt_policy) group = tmpl['Resources']['JobServerGroup'] policy = group['UpdatePolicy']['RollingUpdate'] # test against some random string policy['PauseTime'] = 'ABCD1234' stack = utils.parse_stack(tmpl) self.assertRaises(exception.StackValidationFailed, stack.validate) # test unsupported designator policy['PauseTime'] = 'P1YT1H' stack = utils.parse_stack(tmpl) self.assertRaises(exception.StackValidationFailed, stack.validate) def validate_update_policy_diff(self, current, updated): # load current stack current_tmpl = template_format.parse(current) current_stack = utils.parse_stack(current_tmpl) # get the json snippet for the current InstanceGroup resource current_grp = current_stack['JobServerGroup'] current_snippets = dict((n, r.frozen_definition()) for n, r in current_stack.items()) current_grp_json = current_snippets[current_grp.name] # load the updated stack updated_tmpl = template_format.parse(updated) updated_stack = utils.parse_stack(updated_tmpl) # get the updated json snippet for the InstanceGroup resource in the # context of the current stack updated_grp = updated_stack['JobServerGroup'] updated_grp_json = updated_grp.t.freeze() # identify the template difference tmpl_diff = updated_grp.update_template_diff( updated_grp_json, current_grp_json) self.assertTrue(tmpl_diff.update_policy_changed()) # test application of the new update policy in handle_update current_grp._try_rolling_update = mock.MagicMock() current_grp.resize = mock.MagicMock() 
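# With _try_rolling_update and resize both mocked out, the handle_update call
# below exercises only the update-policy bookkeeping; no instances are
# actually replaced or resized.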
current_grp.handle_update(updated_grp_json, tmpl_diff, None) self.assertEqual(updated_grp_json._update_policy or {}, current_grp.update_policy.data) def test_update_policy_added(self): self.validate_update_policy_diff(ig_tmpl_without_updt_policy, ig_tmpl_with_updt_policy) def test_update_policy_updated(self): updt_template = json.loads(ig_tmpl_with_updt_policy) grp = updt_template['Resources']['JobServerGroup'] policy = grp['UpdatePolicy']['RollingUpdate'] policy['MinInstancesInService'] = '2' policy['MaxBatchSize'] = '4' policy['PauseTime'] = 'PT1M30S' self.validate_update_policy_diff(ig_tmpl_with_updt_policy, json.dumps(updt_template)) def test_update_policy_removed(self): self.validate_update_policy_diff(ig_tmpl_with_updt_policy, ig_tmpl_without_updt_policy) class InstanceGroupReplaceTest(common.HeatTestCase): def test_timeout_exception(self): self.stub_ImageConstraint_validate() self.stub_KeypairConstraint_validate() self.stub_FlavorConstraint_validate() t = template_format.parse(ig_tmpl_with_updt_policy) stack = utils.parse_stack(t) defn = rsrc_defn.ResourceDefinition( 'asg', 'OS::Heat::InstanceGroup', {'Size': 2, 'AvailabilityZones': ['zoneb'], "LaunchConfigurationName": "LaunchConfig", "LoadBalancerNames": ["ElasticLoadBalancer"]}) # the following test, effective_capacity is 12 # batch_count = (effective_capacity + batch_size -1)//batch_size # = (12 + 2 - 1)//2 = 6 # if (batch_count - 1)* pause_time > stack.time_out, to raise error # (6 - 1)*14*60 > 3600, so to raise error group = instgrp.InstanceGroup('asg', defn, stack) group.nested = mock.MagicMock(return_value=range(12)) self.assertRaises(ValueError, group._replace, 10, 1, 14 * 60) heat-10.0.2/heat/tests/openstack/heat/test_deployed_server.py0000666000175000017500000005544313343562340024354 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
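# The DeployedServer tests below exercise each software_config_transport
# choice (POLL_TEMP_URL, POLL_SERVER_CFN, POLL_SERVER_HEAT, ZAQAR_MESSAGE).
# For POLL_TEMP_URL the resource publishes its metadata to a Swift object and
# hands the server a TempURL to poll, which is why the Swift client plugin is
# mocked throughout these tests.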
import mock from oslo_serialization import jsonutils from oslo_utils import uuidutils from six.moves.urllib import parse as urlparse from heat.common import exception from heat.common import template_format from heat.engine.clients.os import heat_plugin from heat.engine.clients.os import swift from heat.engine.clients.os import zaqar from heat.engine import environment from heat.engine.resources.openstack.heat import deployed_server from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests import utils ds_tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Heat::DeployedServer properties: software_config_transport: POLL_TEMP_URL """ server_sc_tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Heat::DeployedServer properties: software_config_transport: POLL_SERVER_CFN """ server_heat_tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Heat::DeployedServer properties: software_config_transport: POLL_SERVER_HEAT """ server_zaqar_tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Heat::DeployedServer properties: software_config_transport: ZAQAR_MESSAGE """ ds_deployment_data_tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Heat::DeployedServer properties: software_config_transport: POLL_TEMP_URL deployment_swift_data: container: my-custom-container object: my-custom-object """ ds_deployment_data_bad_container_tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Heat::DeployedServer properties: software_config_transport: POLL_TEMP_URL deployment_swift_data: container: '' object: 'my-custom-object' """ ds_deployment_data_bad_object_tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Heat::DeployedServer properties: software_config_transport: POLL_TEMP_URL deployment_swift_data: container: 'my-custom-container' object: '' """ ds_deployment_data_none_container_tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Heat::DeployedServer properties: software_config_transport: POLL_TEMP_URL deployment_swift_data: container: 0 object: 'my-custom-object' """ ds_deployment_data_none_object_tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Heat::DeployedServer properties: software_config_transport: POLL_TEMP_URL deployment_swift_data: container: 'my-custom-container' object: 0 """ class DeployedServersTest(common.HeatTestCase): def _create_test_server(self, name, override_name=False): server = self._setup_test_server(name, override_name) scheduler.TaskRunner(server.create)() return server def _setup_test_stack(self, stack_name, test_templ=ds_tmpl): t = template_format.parse(test_templ) tmpl = template.Template(t, env=environment.Environment()) stack = parser.Stack(utils.dummy_context(region_name="RegionOne"), stack_name, tmpl, stack_id=uuidutils.generate_uuid(), stack_user_project_id='8888') return (tmpl, stack) def _server_create_software_config_poll_temp_url(self, server_name='server'): stack_name = '%s_s' % server_name (tmpl, stack) = self._setup_test_stack(stack_name) props = tmpl.t['resources']['server']['properties'] props['software_config_transport'] = 'POLL_TEMP_URL' self.server_props = props resource_defns = tmpl.resource_definitions(stack) server = deployed_server.DeployedServer( server_name, resource_defns[server_name], stack) sc = mock.Mock() sc.head_account.return_value = { 
'x-account-meta-temp-url-key': 'secrit' } sc.url = 'http://192.0.2.2' self.patchobject(swift.SwiftClientPlugin, '_create', return_value=sc) scheduler.TaskRunner(server.create)() # self._create_test_server(server_name) metadata_put_url = server.data().get('metadata_put_url') md = server.metadata_get() metadata_url = md['os-collect-config']['request']['metadata_url'] self.assertNotEqual(metadata_url, metadata_put_url) container_name = server.physical_resource_name() object_name = server.data().get('metadata_object_name') self.assertTrue(uuidutils.is_uuid_like(object_name)) test_path = '/v1/AUTH_test_tenant_id/%s/%s' % ( server.physical_resource_name(), object_name) self.assertEqual(test_path, urlparse.urlparse(metadata_put_url).path) self.assertEqual(test_path, urlparse.urlparse(metadata_url).path) sc.put_object.assert_called_once_with( container_name, object_name, jsonutils.dumps(md)) sc.head_container.return_value = {'x-container-object-count': '0'} server._delete_temp_url() sc.delete_object.assert_called_once_with(container_name, object_name) sc.head_container.assert_called_once_with(container_name) sc.delete_container.assert_called_once_with(container_name) return metadata_url, server def test_server_create_deployment_swift_data(self): server_name = 'server' stack_name = '%s_s' % server_name (tmpl, stack) = self._setup_test_stack( stack_name, ds_deployment_data_tmpl) props = tmpl.t['resources']['server']['properties'] props['software_config_transport'] = 'POLL_TEMP_URL' self.server_props = props resource_defns = tmpl.resource_definitions(stack) server = deployed_server.DeployedServer( server_name, resource_defns[server_name], stack) sc = mock.Mock() sc.head_account.return_value = { 'x-account-meta-temp-url-key': 'secrit' } sc.url = 'http://192.0.2.2' self.patchobject(swift.SwiftClientPlugin, '_create', return_value=sc) scheduler.TaskRunner(server.create)() # self._create_test_server(server_name) metadata_put_url = server.data().get('metadata_put_url') md = server.metadata_get() metadata_url = md['os-collect-config']['request']['metadata_url'] self.assertNotEqual(metadata_url, metadata_put_url) container_name = 'my-custom-container' object_name = 'my-custom-object' test_path = '/v1/AUTH_test_tenant_id/%s/%s' % ( container_name, object_name) self.assertEqual(test_path, urlparse.urlparse(metadata_put_url).path) self.assertEqual(test_path, urlparse.urlparse(metadata_url).path) sc.put_object.assert_called_once_with( container_name, object_name, jsonutils.dumps(md)) sc.head_container.return_value = {'x-container-object-count': '0'} server._delete_temp_url() sc.delete_object.assert_called_once_with(container_name, object_name) sc.head_container.assert_called_once_with(container_name) sc.delete_container.assert_called_once_with(container_name) return metadata_url, server def test_server_create_deployment_swift_data_bad_container(self): server_name = 'server' stack_name = '%s_s' % server_name (tmpl, stack) = self._setup_test_stack( stack_name, ds_deployment_data_bad_container_tmpl) props = tmpl.t['resources']['server']['properties'] props['software_config_transport'] = 'POLL_TEMP_URL' self.server_props = props resource_defns = tmpl.resource_definitions(stack) server = deployed_server.DeployedServer( server_name, resource_defns[server_name], stack) self.assertRaises(exception.StackValidationFailed, server.validate) def test_server_create_deployment_swift_data_bad_object(self): server_name = 'server' stack_name = '%s_s' % server_name (tmpl, stack) = self._setup_test_stack( stack_name, 
ds_deployment_data_bad_object_tmpl) props = tmpl.t['resources']['server']['properties'] props['software_config_transport'] = 'POLL_TEMP_URL' self.server_props = props resource_defns = tmpl.resource_definitions(stack) server = deployed_server.DeployedServer( server_name, resource_defns[server_name], stack) self.assertRaises(exception.StackValidationFailed, server.validate) def test_server_create_deployment_swift_data_none_container(self): server_name = 'server' stack_name = '%s_s' % server_name (tmpl, stack) = self._setup_test_stack( stack_name, ds_deployment_data_none_container_tmpl) props = tmpl.t['resources']['server']['properties'] props['software_config_transport'] = 'POLL_TEMP_URL' self.server_props = props resource_defns = tmpl.resource_definitions(stack) server = deployed_server.DeployedServer( server_name, resource_defns[server_name], stack) sc = mock.Mock() sc.head_account.return_value = { 'x-account-meta-temp-url-key': 'secrit' } sc.url = 'http://192.0.2.2' self.patchobject(swift.SwiftClientPlugin, '_create', return_value=sc) scheduler.TaskRunner(server.create)() # self._create_test_server(server_name) metadata_put_url = server.data().get('metadata_put_url') md = server.metadata_get() metadata_url = md['os-collect-config']['request']['metadata_url'] self.assertNotEqual(metadata_url, metadata_put_url) container_name = '0' object_name = 'my-custom-object' test_path = '/v1/AUTH_test_tenant_id/%s/%s' % ( container_name, object_name) self.assertEqual(test_path, urlparse.urlparse(metadata_put_url).path) self.assertEqual(test_path, urlparse.urlparse(metadata_url).path) sc.put_object.assert_called_once_with( container_name, object_name, jsonutils.dumps(md)) sc.head_container.return_value = {'x-container-object-count': '0'} server._delete_temp_url() sc.delete_object.assert_called_once_with(container_name, object_name) sc.head_container.assert_called_once_with(container_name) sc.delete_container.assert_called_once_with(container_name) return metadata_url, server def test_server_create_deployment_swift_data_none_object(self): server_name = 'server' stack_name = '%s_s' % server_name (tmpl, stack) = self._setup_test_stack( stack_name, ds_deployment_data_none_object_tmpl) props = tmpl.t['resources']['server']['properties'] props['software_config_transport'] = 'POLL_TEMP_URL' self.server_props = props resource_defns = tmpl.resource_definitions(stack) server = deployed_server.DeployedServer( server_name, resource_defns[server_name], stack) sc = mock.Mock() sc.head_account.return_value = { 'x-account-meta-temp-url-key': 'secrit' } sc.url = 'http://192.0.2.2' self.patchobject(swift.SwiftClientPlugin, '_create', return_value=sc) scheduler.TaskRunner(server.create)() # self._create_test_server(server_name) metadata_put_url = server.data().get('metadata_put_url') md = server.metadata_get() metadata_url = md['os-collect-config']['request']['metadata_url'] self.assertNotEqual(metadata_url, metadata_put_url) container_name = 'my-custom-container' object_name = '0' test_path = '/v1/AUTH_test_tenant_id/%s/%s' % ( container_name, object_name) self.assertEqual(test_path, urlparse.urlparse(metadata_put_url).path) self.assertEqual(test_path, urlparse.urlparse(metadata_url).path) sc.put_object.assert_called_once_with( container_name, object_name, jsonutils.dumps(md)) sc.head_container.return_value = {'x-container-object-count': '0'} server._delete_temp_url() sc.delete_object.assert_called_once_with(container_name, object_name) sc.head_container.assert_called_once_with(container_name) 
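# head_container was stubbed to report zero remaining objects, so
# _delete_temp_url is also expected to have removed the now-empty container: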
sc.delete_container.assert_called_once_with(container_name) return metadata_url, server def test_server_create_software_config_poll_temp_url(self): metadata_url, server = ( self._server_create_software_config_poll_temp_url()) self.assertEqual({ 'os-collect-config': { 'request': { 'metadata_url': metadata_url }, 'collectors': ['request', 'local'] }, 'deployments': [] }, server.metadata_get()) def _server_create_software_config(self, server_name='server_sc', md=None, ret_tmpl=False): stack_name = '%s_s' % server_name (tmpl, stack) = self._setup_test_stack(stack_name, server_sc_tmpl) self.stack = stack self.server_props = tmpl.t['resources']['server']['properties'] if md is not None: tmpl.t['resources']['server']['metadata'] = md stack.stack_user_project_id = '8888' resource_defns = tmpl.resource_definitions(stack) server = deployed_server.DeployedServer( 'server', resource_defns['server'], stack) self.patchobject(server, 'heat') scheduler.TaskRunner(server.create)() self.assertEqual('4567', server.access_key) self.assertEqual('8901', server.secret_key) self.assertEqual('1234', server._get_user_id()) self.assertEqual('POLL_SERVER_CFN', server.properties.get('software_config_transport')) self.assertTrue(stack.access_allowed('4567', 'server')) self.assertFalse(stack.access_allowed('45678', 'server')) self.assertFalse(stack.access_allowed('4567', 'wserver')) if ret_tmpl: return server, tmpl else: return server @mock.patch.object(heat_plugin.HeatClientPlugin, 'url_for') def test_server_create_software_config(self, fake_url): fake_url.return_value = 'the-cfn-url' server = self._server_create_software_config() self.assertEqual({ 'os-collect-config': { 'cfn': { 'access_key_id': '4567', 'metadata_url': 'the-cfn-url/v1/', 'path': 'server.Metadata', 'secret_access_key': '8901', 'stack_name': 'server_sc_s' }, 'collectors': ['cfn', 'local'] }, 'deployments': [] }, server.metadata_get()) @mock.patch.object(heat_plugin.HeatClientPlugin, 'url_for') def test_server_create_software_config_metadata(self, fake_url): md = {'os-collect-config': {'polling_interval': 10}} fake_url.return_value = 'the-cfn-url' server = self._server_create_software_config(md=md) self.assertEqual({ 'os-collect-config': { 'cfn': { 'access_key_id': '4567', 'metadata_url': 'the-cfn-url/v1/', 'path': 'server.Metadata', 'secret_access_key': '8901', 'stack_name': 'server_sc_s' }, 'collectors': ['cfn', 'local'], 'polling_interval': 10 }, 'deployments': [] }, server.metadata_get()) def _server_create_software_config_poll_heat(self, server_name='server_heat', md=None): stack_name = '%s_s' % server_name (tmpl, stack) = self._setup_test_stack(stack_name, server_heat_tmpl) self.stack = stack props = tmpl.t['resources']['server']['properties'] props['software_config_transport'] = 'POLL_SERVER_HEAT' if md is not None: tmpl.t['resources']['server']['metadata'] = md self.server_props = props resource_defns = tmpl.resource_definitions(stack) server = deployed_server.DeployedServer( 'server', resource_defns['server'], stack) scheduler.TaskRunner(server.create)() self.assertEqual('1234', server._get_user_id()) self.assertTrue(stack.access_allowed('1234', 'server')) self.assertFalse(stack.access_allowed('45678', 'server')) self.assertFalse(stack.access_allowed('4567', 'wserver')) return stack, server def test_server_software_config_poll_heat(self): stack, server = self._server_create_software_config_poll_heat() md = { 'os-collect-config': { 'heat': { 'auth_url': 'http://server.test:5000/v2.0', 'password': server.password, 'project_id': '8888', 
'region_name': 'RegionOne', 'resource_name': 'server', 'stack_id': 'server_heat_s/%s' % stack.id, 'user_id': '1234' }, 'collectors': ['heat', 'local'] }, 'deployments': [] } self.assertEqual(md, server.metadata_get()) # update resource.metadata md1 = {'os-collect-config': {'polling_interval': 10}} server.stack.t.t['resources']['server']['metadata'] = md1 resource_defns = server.stack.t.resource_definitions(server.stack) scheduler.TaskRunner(server.update, resource_defns['server'])() occ = md['os-collect-config'] occ.update(md1['os-collect-config']) # os-collect-config merged self.assertEqual(md, server.metadata_get()) def test_server_create_software_config_poll_heat_metadata(self): md = {'os-collect-config': {'polling_interval': 10}} stack, server = self._server_create_software_config_poll_heat(md=md) self.assertEqual({ 'os-collect-config': { 'heat': { 'auth_url': 'http://server.test:5000/v2.0', 'password': server.password, 'project_id': '8888', 'region_name': 'RegionOne', 'resource_name': 'server', 'stack_id': 'server_heat_s/%s' % stack.id, 'user_id': '1234' }, 'collectors': ['heat', 'local'], 'polling_interval': 10 }, 'deployments': [] }, server.metadata_get()) def _server_create_software_config_zaqar(self, server_name='server_zaqar', md=None): stack_name = '%s_s' % server_name (tmpl, stack) = self._setup_test_stack(stack_name, server_zaqar_tmpl) self.stack = stack props = tmpl.t['resources']['server']['properties'] props['software_config_transport'] = 'ZAQAR_MESSAGE' if md is not None: tmpl.t['resources']['server']['metadata'] = md self.server_props = props resource_defns = tmpl.resource_definitions(stack) server = deployed_server.DeployedServer( 'server', resource_defns['server'], stack) zcc = self.patchobject(zaqar.ZaqarClientPlugin, 'create_for_tenant') zc = mock.Mock() zcc.return_value = zc queue = mock.Mock() zc.queue.return_value = queue scheduler.TaskRunner(server.create)() metadata_queue_id = server.data().get('metadata_queue_id') md = server.metadata_get() queue_id = md['os-collect-config']['zaqar']['queue_id'] self.assertEqual(queue_id, metadata_queue_id) zc.queue.assert_called_once_with(queue_id) queue.post.assert_called_once_with( {'body': server.metadata_get(), 'ttl': 3600}) zc.queue.reset_mock() server._delete_queue() zc.queue.assert_called_once_with(queue_id) zc.queue(queue_id).delete.assert_called_once_with() return queue_id, server def test_server_create_software_config_zaqar(self): queue_id, server = self._server_create_software_config_zaqar() self.assertEqual({ 'os-collect-config': { 'zaqar': { 'user_id': '1234', 'password': server.password, 'auth_url': 'http://server.test:5000/v2.0', 'project_id': '8888', 'region_name': 'RegionOne', 'queue_id': queue_id }, 'collectors': ['zaqar', 'local'] }, 'deployments': [] }, server.metadata_get()) def test_server_create_software_config_zaqar_metadata(self): md = {'os-collect-config': {'polling_interval': 10}} queue_id, server = self._server_create_software_config_zaqar(md=md) self.assertEqual({ 'os-collect-config': { 'zaqar': { 'user_id': '1234', 'password': server.password, 'auth_url': 'http://server.test:5000/v2.0', 'project_id': '8888', 'region_name': 'RegionOne', 'queue_id': queue_id }, 'collectors': ['zaqar', 'local'], 'polling_interval': 10 }, 'deployments': [] }, server.metadata_get()) def test_resolve_attribute_os_collect_config(self): metadata_url, server = ( self._server_create_software_config_poll_temp_url()) # FnGetAtt usage belows requires the resource to have a stack set (tmpl, stack) = 
self._setup_test_stack('stack_name') server.stack = stack self.assertEqual({ 'request': { 'metadata_url': metadata_url }, 'collectors': ['request', 'local'] }, server.FnGetAtt('os_collect_config')) heat-10.0.2/heat/tests/openstack/heat/test_structured_config.py0000666000175000017500000003440213343562340024702 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from heat.common import exception from heat.engine.resources.openstack.heat import structured_config as sc from heat.engine import rsrc_defn from heat.engine import software_config_io as swc_io from heat.engine import stack as parser from heat.engine import template from heat.rpc import api as rpc_api from heat.tests import common from heat.tests import utils SCENARIOS = [ ( 'no_functions', dict(input_key='get_input', inputs={}, config={'foo': 'bar'}, result={'foo': 'bar'}), ), ( 'none_inputs', dict(input_key='get_input', inputs=None, config={'foo': 'bar'}, result={'foo': 'bar'}), ), ( 'none_config', dict(input_key='get_input', inputs=None, config=None, result=None), ), ( 'empty_config', dict(input_key='get_input', inputs=None, config='', result=''), ), ( 'simple', dict(input_key='get_input', inputs={'bar': 'baa'}, config={'foo': {'get_input': 'bar'}}, result={'foo': 'baa'}), ), ( 'multi_key', dict(input_key='get_input', inputs={'bar': 'baa'}, config={'foo': [{'get_input': 'bar'}, 'other']}, result={'foo': ['baa', 'other']}), ), ( 'list_arg', dict(input_key='get_input', inputs={'bar': 'baa'}, config={'foo': {'get_input': ['bar', 'baz']}}, result={'foo': {'get_input': ['bar', 'baz']}}), ), ( 'missing_input', dict(input_key='get_input', inputs={'bar': 'baa'}, config={'foo': {'get_input': 'barr'}}, result={'foo': None}), ), ( 'deep', dict(input_key='get_input', inputs={'bar': 'baa'}, config={'foo': {'foo': {'get_input': 'bar'}}}, result={'foo': {'foo': 'baa'}}), ), ( 'shallow', dict(input_key='get_input', inputs={'bar': 'baa'}, config={'get_input': 'bar'}, result='baa'), ), ( 'list', dict(input_key='get_input', inputs={'bar': 'baa', 'bar2': 'baz', 'bar3': 'bink'}, config={'foo': [ {'get_input': 'bar'}, {'get_input': 'bar2'}, {'get_input': 'bar3'}]}, result={'foo': ['baa', 'baz', 'bink']}), ) ] class StructuredConfigTestJSON(common.HeatTestCase): template = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'config_mysql': { 'Type': 'OS::Heat::StructuredConfig', 'Properties': {'config': {'foo': 'bar'}} } } } stored_config = {'foo': 'bar'} def setUp(self): super(StructuredConfigTestJSON, self).setUp() self.ctx = utils.dummy_context() self.properties = { 'config': {'foo': 'bar'} } self.stack = parser.Stack( self.ctx, 'software_config_test_stack', template.Template(self.template)) self.config = self.stack['config_mysql'] self.rpc_client = mock.MagicMock() self.config._rpc_client = self.rpc_client def test_handle_create(self): config_id = 'c8a19429-7fde-47ea-a42f-40045488226c' value = {'id': config_id} self.rpc_client.create_software_config.return_value = value self.config.handle_create() 
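# handle_create should record the engine-assigned config id as the resource
# id and pass the structured config through to the RPC layer unchanged: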
self.assertEqual(config_id, self.config.resource_id) kwargs = self.rpc_client.create_software_config.call_args[1] self.assertEqual(self.stored_config, kwargs['config']) class StructuredDeploymentDerivedTest(common.HeatTestCase): template = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'deploy_mysql': { 'Type': 'OS::Heat::StructuredDeployment' } } } def setUp(self): super(StructuredDeploymentDerivedTest, self).setUp() self.ctx = utils.dummy_context() props = { 'server': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0', 'input_values': {'bar': 'baz'}, } self.template['Resources']['deploy_mysql']['Properties'] = props self.stack = parser.Stack( self.ctx, 'software_deploly_test_stack', template.Template(self.template)) self.deployment = self.stack['deploy_mysql'] def test_build_derived_config(self): source = { 'config': {"foo": {"get_input": "bar"}} } inputs = [swc_io.InputConfig(name='bar', value='baz')] result = self.deployment._build_derived_config( 'CREATE', source, inputs, {}) self.assertEqual({"foo": "baz"}, result) def test_build_derived_config_params_with_empty_config(self): source = {} source[rpc_api.SOFTWARE_CONFIG_INPUTS] = [] source[rpc_api.SOFTWARE_CONFIG_OUTPUTS] = [] result = self.deployment._build_derived_config_params( 'CREATE', source) self.assertEqual('Heat::Ungrouped', result['group']) self.assertEqual({}, result['config']) self.assertEqual(self.deployment.physical_resource_name(), result['name']) self.assertIn({'name': 'bar', 'type': 'String', 'value': 'baz'}, result['inputs']) self.assertIsNone(result['options']) self.assertEqual([], result['outputs']) class StructuredDeploymentWithStrictInputTest(common.HeatTestCase): template = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'deploy_mysql': { 'Type': 'OS::Heat::StructuredDeployment', 'Properties': {} } } } def setUp(self): super(StructuredDeploymentWithStrictInputTest, self).setUp() self.source = {'config': {'foo': [{"get_input": "bar"}, {"get_input": "barz"}]}} self.inputs = [swc_io.InputConfig(name='bar', value='baz'), swc_io.InputConfig(name='barz', value='baz2')] def _stack_with_template(self, template_def): self.ctx = utils.dummy_context() self.stack = parser.Stack( self.ctx, 'software_deploly_test_stack', template.Template(template_def)) self.deployment = self.stack['deploy_mysql'] def test_build_derived_config_failure(self): props = {'input_values_validate': 'STRICT'} self.template['Resources']['deploy_mysql']['Properties'] = props self._stack_with_template(self.template) self.assertRaises(exception.UserParameterMissing, self.deployment._build_derived_config, 'CREATE', self.source, self.inputs[:1], {}) def test_build_derived_config_success(self): props = {'input_values_validate': 'STRICT'} self.template['Resources']['deploy_mysql']['Properties'] = props self._stack_with_template(self.template) expected = {'foo': ['baz', 'baz2']} result = self.deployment._build_derived_config( 'CREATE', self.source, self.inputs, {}) self.assertEqual(expected, result) class StructuredDeploymentParseTest(common.HeatTestCase): scenarios = SCENARIOS def test_parse(self): parse = sc.StructuredDeployment.parse self.assertEqual( self.result, parse(self.inputs, self.input_key, self.config)) class StructuredDeploymentGroupTest(common.HeatTestCase): template = { 'heat_template_version': '2013-05-23', 'resources': { 'deploy_mysql': { 'type': 'OS::Heat::StructuredDeploymentGroup', 'properties': { 'config': 'config_uuid', 'servers': {'server1': 'uuid1', 'server2': 'uuid2'}, } } } } def test_build_resource_definition(self): 
stack = utils.parse_stack(self.template) snip = stack.t.resource_definitions(stack)['deploy_mysql'] resg = sc.StructuredDeploymentGroup('test', snip, stack) expect = rsrc_defn.ResourceDefinition( None, 'OS::Heat::StructuredDeployment', {'actions': ['CREATE', 'UPDATE'], 'config': 'config_uuid', 'input_values': None, 'name': None, 'server': 'uuid1', 'input_key': 'get_input', 'signal_transport': 'CFN_SIGNAL', 'input_values_validate': 'LAX'}) rdef = resg.get_resource_def() self.assertEqual( expect, resg.build_resource_definition('server1', rdef)) rdef = resg.get_resource_def(include_all=True) self.assertEqual( expect, resg.build_resource_definition('server1', rdef)) def test_resource_names(self): stack = utils.parse_stack(self.template) snip = stack.t.resource_definitions(stack)['deploy_mysql'] resg = sc.StructuredDeploymentGroup('test', snip, stack) self.assertEqual( set(('server1', 'server2')), set(resg._resource_names()) ) resg.properties = {'servers': {'s1': 'u1', 's2': 'u2', 's3': 'u3'}} self.assertEqual( set(('s1', 's2', 's3')), set(resg._resource_names())) def test_assemble_nested(self): """Tests nested stack implements group creation based on properties. Tests that the nested stack that implements the group is created appropriately based on properties. """ stack = utils.parse_stack(self.template) snip = stack.t.resource_definitions(stack)['deploy_mysql'] resg = sc.StructuredDeploymentGroup('test', snip, stack) templ = { "heat_template_version": "2015-04-30", "resources": { "server1": { 'type': 'OS::Heat::StructuredDeployment', 'properties': { 'server': 'uuid1', 'actions': ['CREATE', 'UPDATE'], 'config': 'config_uuid', 'input_key': 'get_input', 'input_values': None, 'name': None, 'signal_transport': 'CFN_SIGNAL', 'input_values_validate': 'LAX' } }, "server2": { 'type': 'OS::Heat::StructuredDeployment', 'properties': { 'server': 'uuid2', 'actions': ['CREATE', 'UPDATE'], 'config': 'config_uuid', 'input_key': 'get_input', 'input_values': None, 'name': None, 'signal_transport': 'CFN_SIGNAL', 'input_values_validate': 'LAX' } } } } self.assertEqual(templ, resg._assemble_nested(['server1', 'server2']).t) class StructuredDeploymentWithStrictInputParseTest(common.HeatTestCase): scenarios = SCENARIOS def test_parse(self): self.parse = sc.StructuredDeployment.parse if 'missing_input' not in self.shortDescription(): self.assertEqual( self.result, self.parse( self.inputs, self.input_key, self.config, check_input_val='STRICT') ) else: self.assertRaises(exception.UserParameterMissing, self.parse, self.inputs, self.input_key, self.config, check_input_val='STRICT') class StructuredDeploymentParseMethodsTest(common.HeatTestCase): def test_get_key_args(self): snippet = {'get_input': 'bar'} input_key = 'get_input' expected = 'bar' result = sc.StructuredDeployment.get_input_key_arg(snippet, input_key) self.assertEqual(expected, result) def test_get_key_args_long_snippet(self): snippet = {'get_input': 'bar', 'second': 'foo'} input_key = 'get_input' result = sc.StructuredDeployment.get_input_key_arg(snippet, input_key) self.assertFalse(result) def test_get_key_args_unknown_input_key(self): snippet = {'get_input': 'bar'} input_key = 'input' result = sc.StructuredDeployment.get_input_key_arg(snippet, input_key) self.assertFalse(result) def test_get_key_args_wrong_args(self): snippet = {'get_input': None} input_key = 'get_input' result = sc.StructuredDeployment.get_input_key_arg(snippet, input_key) self.assertFalse(result) def test_get_input_key_value(self): inputs = {'bar': 'baz', 'foo': 'foo2'} res = 
sc.StructuredDeployment.get_input_key_value('bar', inputs, False) expected = 'baz' self.assertEqual(expected, res) def test_get_input_key_value_raise_exception(self): inputs = {'bar': 'baz', 'foo': 'foo2'} self.assertRaises(exception.UserParameterMissing, sc.StructuredDeployment.get_input_key_value, 'barz', inputs, 'STRICT') def test_get_input_key_value_get_none(self): inputs = {'bar': 'baz', 'foo': 'foo2'} res = sc.StructuredDeployment.get_input_key_value('brz', inputs, False) self.assertIsNone(res) heat-10.0.2/heat/tests/openstack/heat/test_software_deployment.py0000666000175000017500000020122513343562340025242 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import copy import re import uuid import mock import six from oslo_serialization import jsonutils from heat.common import exception as exc from heat.common.i18n import _ from heat.common import template_format from heat.engine.clients.os import nova from heat.engine.clients.os import swift from heat.engine.clients.os import zaqar from heat.engine import node_data from heat.engine import resource from heat.engine.resources.openstack.heat import software_deployment as sd from heat.engine import rsrc_defn from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests import utils class SoftwareDeploymentTest(common.HeatTestCase): template = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'deployment_mysql': { 'Type': 'OS::Heat::SoftwareDeployment', 'Properties': { 'server': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0', 'config': '48e8ade1-9196-42d5-89a2-f709fde42632', 'input_values': {'foo': 'bar'}, } } } } template_with_server = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'deployment_mysql': { 'Type': 'OS::Heat::SoftwareDeployment', 'Properties': { 'server': 'server', 'config': '48e8ade1-9196-42d5-89a2-f709fde42632', 'input_values': {'foo': 'bar'}, } }, 'server': { 'Type': 'OS::Nova::Server', 'Properties': { 'image': 'fedora-amd64', 'flavor': 'm1.small', 'key_name': 'heat_key' } } } } template_no_signal = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'deployment_mysql': { 'Type': 'OS::Heat::SoftwareDeployment', 'Properties': { 'server': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0', 'config': '48e8ade1-9196-42d5-89a2-f709fde42632', 'input_values': {'foo': 'bar', 'bink': 'bonk'}, 'signal_transport': 'NO_SIGNAL', 'name': '00_run_me_first' } } } } template_temp_url_signal = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'deployment_mysql': { 'Type': 'OS::Heat::SoftwareDeployment', 'Properties': { 'server': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0', 'config': '48e8ade1-9196-42d5-89a2-f709fde42632', 'input_values': {'foo': 'bar', 'bink': 'bonk'}, 'signal_transport': 'TEMP_URL_SIGNAL', 'name': '00_run_me_first' } } } } template_zaqar_signal = { 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'deployment_mysql': { 'Type': 'OS::Heat::SoftwareDeployment', 'Properties': { 'server': 
    template_zaqar_signal = {
        'HeatTemplateFormatVersion': '2012-12-12',
        'Resources': {
            'deployment_mysql': {
                'Type': 'OS::Heat::SoftwareDeployment',
                'Properties': {
                    'server': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0',
                    'config': '48e8ade1-9196-42d5-89a2-f709fde42632',
                    'input_values': {'foo': 'bar', 'bink': 'bonk'},
                    'signal_transport': 'ZAQAR_SIGNAL',
                    'name': '00_run_me_first'
                }
            }
        }
    }

    template_delete_suspend_resume = {
        'HeatTemplateFormatVersion': '2012-12-12',
        'Resources': {
            'deployment_mysql': {
                'Type': 'OS::Heat::SoftwareDeployment',
                'Properties': {
                    'server': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0',
                    'config': '48e8ade1-9196-42d5-89a2-f709fde42632',
                    'input_values': {'foo': 'bar'},
                    'actions': ['DELETE', 'SUSPEND', 'RESUME'],
                }
            }
        }
    }

    template_update_only = {
        'HeatTemplateFormatVersion': '2012-12-12',
        'Resources': {
            'deployment_mysql': {
                'Type': 'OS::Heat::SoftwareDeployment',
                'Properties': {
                    'server': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0',
                    'config': '48e8ade1-9196-42d5-89a2-f709fde42632',
                    'input_values': {'foo': 'bar'},
                    'actions': ['UPDATE'],
                }
            }
        }
    }

    template_no_config = {
        'HeatTemplateFormatVersion': '2012-12-12',
        'Resources': {
            'deployment_mysql': {
                'Type': 'OS::Heat::SoftwareDeployment',
                'Properties': {
                    'server': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0',
                    'input_values': {'foo': 'bar', 'bink': 'bonk'},
                    'signal_transport': 'NO_SIGNAL',
                }
            }
        }
    }

    template_no_server = {
        'HeatTemplateFormatVersion': '2012-12-12',
        'Resources': {
            'deployment_mysql': {
                'Type': 'OS::Heat::SoftwareDeployment',
                'Properties': {}
            }
        }
    }

    def setUp(self):
        super(SoftwareDeploymentTest, self).setUp()
        self.ctx = utils.dummy_context()

    def _create_stack(self, tmpl, cache_data=None):
        self.stack = parser.Stack(
            self.ctx, 'software_deployment_test_stack',
            template.Template(tmpl),
            stack_id='42f6f66b-631a-44e7-8d01-e22fb54574a9',
            stack_user_project_id='65728b74-cfe7-4f17-9c15-11d4f686e591',
            cache_data=cache_data
        )

        self.patchobject(nova.NovaClientPlugin, 'get_server',
                         return_value=mock.MagicMock())
        self.patchobject(sd.SoftwareDeployment, '_create_user')
        self.patchobject(sd.SoftwareDeployment, '_create_keypair')
        self.patchobject(sd.SoftwareDeployment, '_delete_user')
        self.patchobject(sd.SoftwareDeployment, '_delete_ec2_signed_url')
        get_ec2_signed_url = self.patchobject(
            sd.SoftwareDeployment, '_get_ec2_signed_url')
        get_ec2_signed_url.return_value = 'http://192.0.2.2/signed_url'

        self.deployment = self.stack['deployment_mysql']

        self.rpc_client = mock.MagicMock()
        self.deployment._rpc_client = self.rpc_client

        @contextlib.contextmanager
        def exc_filter(*args):
            try:
                yield
            except exc.NotFound:
                pass
        self.rpc_client.ignore_error_by_name.side_effect = exc_filter

    def test_validate(self):
        template = dict(self.template_with_server)
        props = template['Resources']['server']['Properties']
        props['user_data_format'] = 'SOFTWARE_CONFIG'
        self._create_stack(self.template_with_server)
        mock_sd = self.deployment
        self.assertEqual('CFN_SIGNAL',
                         mock_sd.properties.get('signal_transport'))
        mock_sd.validate()

    def test_validate_without_server(self):
        stack = utils.parse_stack(self.template_no_server)
        snip = stack.t.resource_definitions(stack)['deployment_mysql']
        deployment = sd.SoftwareDeployment('deployment_mysql', snip, stack)
        err = self.assertRaises(exc.StackValidationFailed,
                                deployment.validate)
        self.assertEqual("Property error: "
                         "Resources.deployment_mysql.Properties: "
                         "Property server not assigned",
                         six.text_type(err))
property " "user_data_format should be set to " "SOFTWARE_CONFIG since there are " "software deployments on it.", six.text_type(err)) def mock_software_config(self): config = { 'group': 'Test::Group', 'name': 'myconfig', 'config': 'the config', 'options': {}, 'inputs': [{ 'name': 'foo', 'type': 'String', 'default': 'baa', }, { 'name': 'bar', 'type': 'String', 'default': 'baz', }, { 'name': 'trigger_replace', 'type': 'String', 'default': 'default_value', 'replace_on_change': True, }], 'outputs': [], } derived_config = copy.deepcopy(config) values = {'foo': 'bar'} inputs = derived_config['inputs'] for i in inputs: i['value'] = values.get(i['name'], i['default']) inputs.append({'name': 'deploy_signal_transport', 'type': 'String', 'value': 'NO_SIGNAL'}) configs = { '0ff2e903-78d7-4cca-829e-233af3dae705': config, '48e8ade1-9196-42d5-89a2-f709fde42632': config, '9966c8e7-bc9c-42de-aa7d-f2447a952cb2': derived_config, } def copy_config(context, config_id): config = configs[config_id].copy() config['id'] = config_id return config self.rpc_client.show_software_config.side_effect = copy_config return config def mock_software_component(self): config = { 'id': '48e8ade1-9196-42d5-89a2-f709fde42632', 'group': 'component', 'name': 'myconfig', 'config': { 'configs': [ { 'actions': ['CREATE'], 'config': 'the config', 'tool': 'a_tool' }, { 'actions': ['DELETE'], 'config': 'the config', 'tool': 'a_tool' }, { 'actions': ['UPDATE'], 'config': 'the config', 'tool': 'a_tool' }, { 'actions': ['SUSPEND'], 'config': 'the config', 'tool': 'a_tool' }, { 'actions': ['RESUME'], 'config': 'the config', 'tool': 'a_tool' } ] }, 'options': {}, 'inputs': [{ 'name': 'foo', 'type': 'String', 'default': 'baa', }, { 'name': 'bar', 'type': 'String', 'default': 'baz', }], 'outputs': [], } def copy_config(*args, **kwargs): return config.copy() self.rpc_client.show_software_config.side_effect = copy_config return config def mock_derived_software_config(self): sc = {'id': '9966c8e7-bc9c-42de-aa7d-f2447a952cb2'} self.rpc_client.create_software_config.return_value = sc return sc def mock_deployment(self): mock_sd = { 'config_id': '9966c8e7-bc9c-42de-aa7d-f2447a952cb2' } self.rpc_client.create_software_deployment.return_value = mock_sd return mock_sd def test_handle_create(self): self._create_stack(self.template_no_signal) self.mock_software_config() derived_sc = self.mock_derived_software_config() self.mock_deployment() self.deployment.handle_create() self.assertEqual({ 'config': 'the config', 'group': 'Test::Group', 'name': '00_run_me_first', 'inputs': [{ 'default': 'baa', 'name': 'foo', 'type': 'String', 'value': 'bar' }, { 'default': 'baz', 'name': 'bar', 'type': 'String', 'value': 'baz' }, { 'default': 'default_value', 'name': 'trigger_replace', 'replace_on_change': True, 'type': 'String', 'value': 'default_value' }, { 'name': 'bink', 'type': 'String', 'value': 'bonk' }, { 'description': 'ID of the server being deployed to', 'name': 'deploy_server_id', 'type': 'String', 'value': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0' }, { 'description': 'Name of the current action being deployed', 'name': 'deploy_action', 'type': 'String', 'value': 'CREATE' }, { 'description': 'ID of the stack this deployment belongs to', 'name': 'deploy_stack_id', 'type': 'String', 'value': ('software_deployment_test_stack' '/42f6f66b-631a-44e7-8d01-e22fb54574a9') }, { 'description': 'Name of this deployment resource in the stack', 'name': 'deploy_resource_name', 'type': 'String', 'value': 'deployment_mysql' }, { 'description': ('How the server should signal to 
    def test_handle_create(self):
        self._create_stack(self.template_no_signal)

        self.mock_software_config()
        derived_sc = self.mock_derived_software_config()
        self.mock_deployment()

        self.deployment.handle_create()

        self.assertEqual({
            'config': 'the config',
            'group': 'Test::Group',
            'name': '00_run_me_first',
            'inputs': [{
                'default': 'baa',
                'name': 'foo',
                'type': 'String',
                'value': 'bar'
            }, {
                'default': 'baz',
                'name': 'bar',
                'type': 'String',
                'value': 'baz'
            }, {
                'default': 'default_value',
                'name': 'trigger_replace',
                'replace_on_change': True,
                'type': 'String',
                'value': 'default_value'
            }, {
                'name': 'bink',
                'type': 'String',
                'value': 'bonk'
            }, {
                'description': 'ID of the server being deployed to',
                'name': 'deploy_server_id',
                'type': 'String',
                'value': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0'
            }, {
                'description': 'Name of the current action being deployed',
                'name': 'deploy_action',
                'type': 'String',
                'value': 'CREATE'
            }, {
                'description': 'ID of the stack this deployment belongs to',
                'name': 'deploy_stack_id',
                'type': 'String',
                'value': ('software_deployment_test_stack'
                          '/42f6f66b-631a-44e7-8d01-e22fb54574a9')
            }, {
                'description': 'Name of this deployment resource in the '
                               'stack',
                'name': 'deploy_resource_name',
                'type': 'String',
                'value': 'deployment_mysql'
            }, {
                'description': ('How the server should signal to heat with '
                                'the deployment output values.'),
                'name': 'deploy_signal_transport',
                'type': 'String',
                'value': 'NO_SIGNAL'
            }],
            'options': {},
            'outputs': []
        }, self.rpc_client.create_software_config.call_args[1])

        self.assertEqual(
            {'action': 'CREATE',
             'config_id': derived_sc['id'],
             'deployment_id': self.deployment.resource_id,
             'server_id': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0',
             'input_values': {'bink': 'bonk', 'foo': 'bar'},
             'stack_user_project_id': '65728b74-cfe7-4f17-9c15-11d4f686e591',
             'status': 'COMPLETE',
             'status_reason': 'Not waiting for outputs signal'},
            self.rpc_client.create_software_deployment.call_args[1])

    def test_handle_create_without_config(self):
        self._create_stack(self.template_no_config)
        self.mock_deployment()
        derived_sc = self.mock_derived_software_config()

        self.deployment.handle_create()

        call_arg = self.rpc_client.create_software_config.call_args[1]
        call_arg['inputs'] = sorted(
            call_arg['inputs'], key=lambda k: k['name'])
        self.assertEqual({
            'config': '',
            'group': 'Heat::Ungrouped',
            'name': self.deployment.physical_resource_name(),
            'inputs': [{
                'name': 'bink',
                'type': 'String',
                'value': 'bonk'
            }, {
                'description': 'Name of the current action being deployed',
                'name': 'deploy_action',
                'type': 'String',
                'value': 'CREATE'
            }, {
                'description': 'Name of this deployment resource in the '
                               'stack',
                'name': 'deploy_resource_name',
                'type': 'String',
                'value': 'deployment_mysql'
            }, {
                'description': 'ID of the server being deployed to',
                'name': 'deploy_server_id',
                'type': 'String',
                'value': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0'
            }, {
                'description': ('How the server should signal to heat with '
                                'the deployment output values.'),
                'name': 'deploy_signal_transport',
                'type': 'String',
                'value': 'NO_SIGNAL'
            }, {
                'description': 'ID of the stack this deployment belongs to',
                'name': 'deploy_stack_id',
                'type': 'String',
                'value': ('software_deployment_test_stack'
                          '/42f6f66b-631a-44e7-8d01-e22fb54574a9')
            }, {
                'name': 'foo',
                'type': 'String',
                'value': 'bar'
            }],
            'options': None,
            'outputs': [],
        }, call_arg)

        self.assertEqual(
            {'action': 'CREATE',
             'config_id': derived_sc['id'],
             'deployment_id': self.deployment.resource_id,
             'input_values': {'bink': 'bonk', 'foo': 'bar'},
             'server_id': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0',
             'stack_user_project_id': '65728b74-cfe7-4f17-9c15-11d4f686e591',
             'status': 'COMPLETE',
             'status_reason': 'Not waiting for outputs signal'},
            self.rpc_client.create_software_deployment.call_args[1])
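    # Note: as the two tests above illustrate, the derived config created by
    # the deployment carries the user-declared inputs plus a set of injected
    # deploy_* metadata inputs (deploy_server_id, deploy_action,
    # deploy_stack_id, deploy_resource_name, deploy_signal_transport). A
    # minimal sketch of the derived-config inputs list, using hypothetical
    # values, would look like:
    #
    #   [{'name': 'foo', 'type': 'String', 'value': 'bar'},
    #    {'name': 'deploy_action', 'type': 'String', 'value': 'CREATE'},
    #    ...]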
    def test_handle_create_for_component(self):
        self._create_stack(self.template_no_signal)

        self.mock_software_component()
        derived_sc = self.mock_derived_software_config()
        self.mock_deployment()

        self.deployment.handle_create()

        self.assertEqual({
            'config': {
                'configs': [
                    {
                        'actions': ['CREATE'],
                        'config': 'the config',
                        'tool': 'a_tool'
                    },
                    {
                        'actions': ['DELETE'],
                        'config': 'the config',
                        'tool': 'a_tool'
                    },
                    {
                        'actions': ['UPDATE'],
                        'config': 'the config',
                        'tool': 'a_tool'
                    },
                    {
                        'actions': ['SUSPEND'],
                        'config': 'the config',
                        'tool': 'a_tool'
                    },
                    {
                        'actions': ['RESUME'],
                        'config': 'the config',
                        'tool': 'a_tool'
                    }
                ]
            },
            'group': 'component',
            'name': '00_run_me_first',
            'inputs': [{
                'default': 'baa',
                'name': 'foo',
                'type': 'String',
                'value': 'bar'
            }, {
                'default': 'baz',
                'name': 'bar',
                'type': 'String',
                'value': 'baz'
            }, {
                'name': 'bink',
                'type': 'String',
                'value': 'bonk'
            }, {
                'description': 'ID of the server being deployed to',
                'name': 'deploy_server_id',
                'type': 'String',
                'value': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0'
            }, {
                'description': 'Name of the current action being deployed',
                'name': 'deploy_action',
                'type': 'String',
                'value': 'CREATE'
            }, {
                'description': 'ID of the stack this deployment belongs to',
                'name': 'deploy_stack_id',
                'type': 'String',
                'value': ('software_deployment_test_stack'
                          '/42f6f66b-631a-44e7-8d01-e22fb54574a9')
            }, {
                'description': 'Name of this deployment resource in the '
                               'stack',
                'name': 'deploy_resource_name',
                'type': 'String',
                'value': 'deployment_mysql'
            }, {
                'description': ('How the server should signal to heat with '
                                'the deployment output values.'),
                'name': 'deploy_signal_transport',
                'type': 'String',
                'value': 'NO_SIGNAL'
            }],
            'options': {},
            'outputs': []
        }, self.rpc_client.create_software_config.call_args[1])

        self.assertEqual(
            {'action': 'CREATE',
             'config_id': derived_sc['id'],
             'deployment_id': self.deployment.resource_id,
             'input_values': {'bink': 'bonk', 'foo': 'bar'},
             'server_id': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0',
             'stack_user_project_id': '65728b74-cfe7-4f17-9c15-11d4f686e591',
             'status': 'COMPLETE',
             'status_reason': 'Not waiting for outputs signal'},
            self.rpc_client.create_software_deployment.call_args[1])

    def test_handle_create_do_not_wait(self):
        self._create_stack(self.template)

        self.mock_software_config()
        derived_sc = self.mock_derived_software_config()
        self.mock_deployment()

        self.deployment.handle_create()

        self.assertEqual(
            {'action': 'CREATE',
             'config_id': derived_sc['id'],
             'deployment_id': self.deployment.resource_id,
             'input_values': {'foo': 'bar'},
             'server_id': '9f1f0e00-05d2-4ca5-8602-95021f19c9d0',
             'stack_user_project_id': '65728b74-cfe7-4f17-9c15-11d4f686e591',
             'status': 'IN_PROGRESS',
             'status_reason': 'Deploy data available'},
            self.rpc_client.create_software_deployment.call_args[1])

    def test_check_create_complete(self):
        self._create_stack(self.template)
        mock_sd = self.mock_deployment()
        self.rpc_client.show_software_deployment.return_value = mock_sd

        mock_sd['status'] = self.deployment.COMPLETE
        self.assertTrue(self.deployment.check_create_complete(mock_sd))

        mock_sd['status'] = self.deployment.IN_PROGRESS
        self.assertFalse(self.deployment.check_create_complete(mock_sd))

    def test_check_create_complete_none(self):
        self._create_stack(self.template)
        self.assertTrue(self.deployment.check_create_complete(sd=None))

    def test_check_update_complete(self):
        self._create_stack(self.template)
        mock_sd = self.mock_deployment()
        self.rpc_client.show_software_deployment.return_value = mock_sd

        mock_sd['status'] = self.deployment.COMPLETE
        self.assertTrue(self.deployment.check_update_complete(mock_sd))

        mock_sd['status'] = self.deployment.IN_PROGRESS
        self.assertFalse(self.deployment.check_update_complete(mock_sd))

    def test_check_update_complete_none(self):
        self._create_stack(self.template)
        self.assertTrue(self.deployment.check_update_complete(sd=None))

    def test_check_suspend_complete(self):
        self._create_stack(self.template)
        mock_sd = self.mock_deployment()
        self.rpc_client.show_software_deployment.return_value = mock_sd

        mock_sd['status'] = self.deployment.COMPLETE
        self.assertTrue(self.deployment.check_suspend_complete(mock_sd))

        mock_sd['status'] = self.deployment.IN_PROGRESS
        self.assertFalse(self.deployment.check_suspend_complete(mock_sd))

    def test_check_suspend_complete_none(self):
        self._create_stack(self.template)
        self.assertTrue(self.deployment.check_suspend_complete(sd=None))

    def test_check_resume_complete(self):
        self._create_stack(self.template)
        mock_sd = self.mock_deployment()
        self.rpc_client.show_software_deployment.return_value = mock_sd

        mock_sd['status'] = self.deployment.COMPLETE
        self.assertTrue(self.deployment.check_resume_complete(mock_sd))

        mock_sd['status'] = self.deployment.IN_PROGRESS
        self.assertFalse(self.deployment.check_resume_complete(mock_sd))
    def test_check_resume_complete_none(self):
        self._create_stack(self.template)
        self.assertTrue(self.deployment.check_resume_complete(sd=None))

    def test_check_create_complete_error(self):
        self._create_stack(self.template)
        mock_sd = {
            'status': self.deployment.FAILED,
            'status_reason': 'something wrong'
        }
        self.rpc_client.show_software_deployment.return_value = mock_sd
        err = self.assertRaises(
            exc.Error, self.deployment.check_create_complete, mock_sd)
        self.assertEqual(
            'Deployment to server failed: something wrong',
            six.text_type(err))

    def test_handle_create_cancel(self):
        self._create_stack(self.template)
        mock_sd = self.mock_deployment()
        self.rpc_client.show_software_deployment.return_value = mock_sd
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'

        # status in_progress
        mock_sd['status'] = self.deployment.IN_PROGRESS
        self.deployment.handle_create_cancel(None)
        self.assertEqual(
            'FAILED',
            self.rpc_client.update_software_deployment.call_args[1]['status'])

        # status failed
        mock_sd['status'] = self.deployment.FAILED
        self.deployment.handle_create_cancel(None)

        # deployment not created
        mock_sd = None
        self.deployment.handle_create_cancel(None)

        self.assertEqual(
            1, self.rpc_client.update_software_deployment.call_count)

    def test_handle_delete(self):
        self._create_stack(self.template)
        mock_sd = self.mock_deployment()
        self.rpc_client.show_software_deployment.return_value = mock_sd
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        self.deployment.handle_delete()
        self.deployment.check_delete_complete()
        self.assertEqual(
            (self.ctx, self.deployment.resource_id),
            self.rpc_client.delete_software_deployment.call_args[0])

    def test_handle_delete_resource_id_is_None(self):
        self._create_stack(self.template_delete_suspend_resume)
        self.mock_software_config()
        mock_sd = self.mock_deployment()
        self.assertEqual(mock_sd, self.deployment.handle_delete())

    def test_delete_complete(self):
        self._create_stack(self.template_delete_suspend_resume)

        self.mock_software_config()
        derived_sc = self.mock_derived_software_config()
        mock_sd = self.mock_deployment()
        mock_sd['server_id'] = 'b509edfb-1448-4b57-8cb1-2e31acccbb8a'

        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'

        self.rpc_client.show_software_deployment.return_value = mock_sd
        self.rpc_client.update_software_deployment.return_value = mock_sd

        self.assertEqual(mock_sd, self.deployment.handle_delete())
        self.assertEqual({
            'deployment_id': 'c8a19429-7fde-47ea-a42f-40045488226c',
            'action': 'DELETE',
            'config_id': derived_sc['id'],
            'input_values': {'foo': 'bar'},
            'status': 'IN_PROGRESS',
            'status_reason': 'Deploy data available'},
            self.rpc_client.update_software_deployment.call_args[1])

        mock_sd['status'] = self.deployment.IN_PROGRESS
        self.assertFalse(self.deployment.check_delete_complete(mock_sd))

        mock_sd['status'] = self.deployment.COMPLETE
        self.assertTrue(self.deployment.check_delete_complete(mock_sd))
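    # Note: as asserted above, deleting a deployment whose config declares a
    # DELETE action appears to be modelled as an update_software_deployment
    # call with action=DELETE and status=IN_PROGRESS; check_delete_complete
    # then polls the deployment until its status reaches COMPLETE.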
    def test_delete_complete_missing_server(self):
        """Tests deleting a deployment when the server disappears."""
        self._create_stack(self.template_delete_suspend_resume)
        self.mock_software_config()
        mock_sd = self.mock_deployment()
        mock_sd['server_id'] = 'b509edfb-1448-4b57-8cb1-2e31acccbb8a'

        # Simulate Nova not knowing about the server
        mock_get_server = self.patchobject(
            nova.NovaClientPlugin, 'get_server',
            side_effect=exc.EntityNotFound)

        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        self.rpc_client.show_software_deployment.return_value = mock_sd
        self.rpc_client.update_software_deployment.return_value = mock_sd

        mock_sd['status'] = self.deployment.COMPLETE
        self.assertTrue(self.deployment.check_delete_complete(mock_sd))
        mock_get_server.assert_called_once_with(mock_sd['server_id'])

    def test_handle_delete_notfound(self):
        self._create_stack(self.template)
        deployment_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        self.deployment.resource_id = deployment_id

        self.mock_software_config()
        derived_sc = self.mock_derived_software_config()
        mock_sd = self.mock_deployment()
        mock_sd['config_id'] = derived_sc['id']
        self.rpc_client.show_software_deployment.return_value = mock_sd

        nf = exc.NotFound
        self.rpc_client.delete_software_deployment.side_effect = nf
        self.rpc_client.delete_software_config.side_effect = nf
        self.assertIsNone(self.deployment.handle_delete())
        self.assertTrue(self.deployment.check_delete_complete())
        self.assertEqual(
            (self.ctx, derived_sc['id']),
            self.rpc_client.delete_software_config.call_args[0])

    def test_handle_delete_none(self):
        self._create_stack(self.template)
        deployment_id = None
        self.deployment.resource_id = deployment_id
        self.assertIsNone(self.deployment.handle_delete())

    def test_check_delete_complete_none(self):
        self._create_stack(self.template)
        self.assertTrue(self.deployment.check_delete_complete())

    def test_check_delete_complete_delete_sd(self):
        # handle_delete will return None for a NO_SIGNAL deployment; in that
        # case _delete_resource() still needs to be called, otherwise the
        # software deployment data would be left behind in the database.
        self._create_stack(self.template)
        mock_sd = self.mock_deployment()
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        self.rpc_client.show_software_deployment.return_value = mock_sd

        self.assertTrue(self.deployment.check_delete_complete())
        self.assertEqual(
            (self.ctx, self.deployment.resource_id),
            self.rpc_client.delete_software_deployment.call_args[0])

    def test_handle_update(self):
        self._create_stack(self.template)

        self.mock_derived_software_config()
        mock_sd = self.mock_deployment()
        rsrc = self.stack['deployment_mysql']

        self.rpc_client.show_software_deployment.return_value = mock_sd
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        config_id = '0ff2e903-78d7-4cca-829e-233af3dae705'
        prop_diff = {
            'config': config_id,
            'name': 'new_name'
        }
        props = copy.copy(rsrc.properties.data)
        props.update(prop_diff)
        snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)

        self.deployment.handle_update(
            json_snippet=snippet, tmpl_diff=None, prop_diff=prop_diff)

        self.assertEqual(
            (self.ctx, config_id),
            self.rpc_client.show_software_config.call_args[0])

        self.assertEqual(
            (self.ctx, self.deployment.resource_id),
            self.rpc_client.show_software_deployment.call_args[0])

        self.assertEqual(
            'new_name',
            self.rpc_client.create_software_config.call_args[1]['name'])

        self.assertEqual({
            'deployment_id': 'c8a19429-7fde-47ea-a42f-40045488226c',
            'action': 'UPDATE',
            'config_id': '9966c8e7-bc9c-42de-aa7d-f2447a952cb2',
            'input_values': {'foo': 'bar'},
            'status': 'IN_PROGRESS',
            'status_reason': u'Deploy data available'},
            self.rpc_client.update_software_deployment.call_args[1])
    def test_handle_update_no_replace_on_change(self):
        self._create_stack(self.template)

        self.mock_software_config()
        self.mock_derived_software_config()
        mock_sd = self.mock_deployment()
        rsrc = self.stack['deployment_mysql']

        self.rpc_client.show_software_deployment.return_value = mock_sd
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        prop_diff = {
            'input_values': {'trigger_replace': 'default_value'},
        }
        props = copy.copy(rsrc.properties.data)
        props.update(prop_diff)
        snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)

        self.deployment.handle_update(snippet, None, prop_diff)

        self.assertEqual({
            'deployment_id': 'c8a19429-7fde-47ea-a42f-40045488226c',
            'action': 'UPDATE',
            'config_id': '9966c8e7-bc9c-42de-aa7d-f2447a952cb2',
            'input_values': {'trigger_replace': 'default_value'},
            'status': 'IN_PROGRESS',
            'status_reason': u'Deploy data available'},
            self.rpc_client.update_software_deployment.call_args[1])

        self.assertEqual([
            {
                'default': 'baa',
                'name': 'foo',
                'type': 'String',
                'value': 'baa'
            }, {
                'default': 'baz',
                'name': 'bar',
                'type': 'String',
                'value': 'baz'
            }, {
                'default': 'default_value',
                'name': 'trigger_replace',
                'replace_on_change': True,
                'type': 'String',
                'value': 'default_value'
            }],
            self.rpc_client.create_software_config.call_args[1]['inputs'][:3])

    def test_handle_update_replace_on_change(self):
        self._create_stack(self.template)

        self.mock_software_config()
        self.mock_derived_software_config()
        mock_sd = self.mock_deployment()
        rsrc = self.stack['deployment_mysql']

        self.rpc_client.show_software_deployment.return_value = mock_sd
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        prop_diff = {
            'input_values': {'trigger_replace': 'new_value'},
        }
        props = copy.copy(rsrc.properties.data)
        props.update(prop_diff)
        snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)

        self.assertRaises(resource.UpdateReplace,
                          self.deployment.handle_update,
                          snippet, None, prop_diff)

    def test_handle_update_with_update_only(self):
        self._create_stack(self.template_update_only)
        rsrc = self.stack['deployment_mysql']
        prop_diff = {
            'input_values': {'foo': 'different'}
        }
        props = copy.copy(rsrc.properties.data)
        props.update(prop_diff)
        snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)

        self.deployment.handle_update(
            json_snippet=snippet, tmpl_diff=None, prop_diff=prop_diff)

        self.rpc_client.show_software_deployment.assert_not_called()

    def test_handle_suspend_resume(self):
        self._create_stack(self.template_delete_suspend_resume)

        self.mock_software_config()
        derived_sc = self.mock_derived_software_config()
        mock_sd = self.mock_deployment()

        self.rpc_client.show_software_deployment.return_value = mock_sd
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'

        # first, handle the suspend
        self.deployment.handle_suspend()

        self.assertEqual({
            'deployment_id': 'c8a19429-7fde-47ea-a42f-40045488226c',
            'action': 'SUSPEND',
            'config_id': derived_sc['id'],
            'input_values': {'foo': 'bar'},
            'status': 'IN_PROGRESS',
            'status_reason': 'Deploy data available'},
            self.rpc_client.update_software_deployment.call_args[1])

        mock_sd['status'] = 'IN_PROGRESS'
        self.assertFalse(self.deployment.check_suspend_complete(mock_sd))

        mock_sd['status'] = 'COMPLETE'
        self.assertTrue(self.deployment.check_suspend_complete(mock_sd))

        # now, handle the resume
        self.deployment.handle_resume()

        self.assertEqual({
            'deployment_id': 'c8a19429-7fde-47ea-a42f-40045488226c',
            'action': 'RESUME',
            'config_id': derived_sc['id'],
            'input_values': {'foo': 'bar'},
            'status': 'IN_PROGRESS',
            'status_reason': 'Deploy data available'},
            self.rpc_client.update_software_deployment.call_args[1])

        mock_sd['status'] = 'IN_PROGRESS'
        self.assertFalse(self.deployment.check_resume_complete(mock_sd))

        mock_sd['status'] = 'COMPLETE'
        self.assertTrue(self.deployment.check_resume_complete(mock_sd))
    def test_handle_signal_ok_zero(self):
        self._create_stack(self.template)
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        rpcc = self.rpc_client
        rpcc.signal_software_deployment.return_value = 'deployment succeeded'
        details = {
            'foo': 'bar',
            'deploy_status_code': 0
        }
        ret = self.deployment.handle_signal(details)
        self.assertEqual('deployment succeeded', ret)
        ca = rpcc.signal_software_deployment.call_args[0]
        self.assertEqual(self.ctx, ca[0])
        self.assertEqual('c8a19429-7fde-47ea-a42f-40045488226c', ca[1])
        self.assertEqual({'foo': 'bar', 'deploy_status_code': 0}, ca[2])
        self.assertIsNotNone(ca[3])

    def test_no_signal_action(self):
        self._create_stack(self.template)
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        rpcc = self.rpc_client
        rpcc.signal_software_deployment.return_value = 'deployment succeeded'
        details = {
            'foo': 'bar',
            'deploy_status_code': 0
        }
        actions = [self.deployment.SUSPEND, self.deployment.DELETE]
        ev = self.patchobject(self.deployment, 'handle_signal')
        for action in actions:
            for status in self.deployment.STATUSES:
                self.deployment.state_set(action, status)
                self.deployment.signal(details)
                ev.assert_called_with(details)

    def test_handle_signal_ok_str_zero(self):
        self._create_stack(self.template)
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        rpcc = self.rpc_client
        rpcc.signal_software_deployment.return_value = 'deployment succeeded'
        details = {
            'foo': 'bar',
            'deploy_status_code': '0'
        }
        ret = self.deployment.handle_signal(details)
        self.assertEqual('deployment succeeded', ret)
        ca = rpcc.signal_software_deployment.call_args[0]
        self.assertEqual(self.ctx, ca[0])
        self.assertEqual('c8a19429-7fde-47ea-a42f-40045488226c', ca[1])
        self.assertEqual({'foo': 'bar', 'deploy_status_code': '0'}, ca[2])
        self.assertIsNotNone(ca[3])

    def test_handle_signal_failed(self):
        self._create_stack(self.template)
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        rpcc = self.rpc_client
        rpcc.signal_software_deployment.return_value = 'deployment failed'

        details = {'failed': 'no enough memory found.'}
        ret = self.deployment.handle_signal(details)
        self.assertEqual('deployment failed', ret)
        ca = rpcc.signal_software_deployment.call_args[0]
        self.assertEqual(self.ctx, ca[0])
        self.assertEqual('c8a19429-7fde-47ea-a42f-40045488226c', ca[1])
        self.assertEqual(details, ca[2])
        self.assertIsNotNone(ca[3])

        # Test bug 1332355, where details contains a translatable message
        details = {'failed': _('need more memory.')}
        ret = self.deployment.handle_signal(details)
        self.assertEqual('deployment failed', ret)
        ca = rpcc.signal_software_deployment.call_args[0]
        self.assertEqual(self.ctx, ca[0])
        self.assertEqual('c8a19429-7fde-47ea-a42f-40045488226c', ca[1])
        self.assertEqual(details, ca[2])
        self.assertIsNotNone(ca[3])

    def test_handle_status_code_failed(self):
        self._create_stack(self.template)
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        rpcc = self.rpc_client
        rpcc.signal_software_deployment.return_value = 'deployment failed'

        details = {
            'deploy_stdout': 'A thing happened',
            'deploy_stderr': 'Then it broke',
            'deploy_status_code': -1
        }
        self.deployment.handle_signal(details)
        ca = rpcc.signal_software_deployment.call_args[0]
        self.assertEqual(self.ctx, ca[0])
        self.assertEqual('c8a19429-7fde-47ea-a42f-40045488226c', ca[1])
        self.assertEqual(details, ca[2])
        self.assertIsNotNone(ca[3])
    def test_handle_signal_not_waiting(self):
        self._create_stack(self.template)
        rpcc = self.rpc_client
        rpcc.signal_software_deployment.return_value = None
        details = None
        self.assertIsNone(self.deployment.handle_signal(details))
        ca = rpcc.signal_software_deployment.call_args[0]
        self.assertEqual(self.ctx, ca[0])
        self.assertIsNone(ca[1])
        self.assertIsNone(ca[2])
        self.assertIsNotNone(ca[3])

    def test_fn_get_att(self):
        self._create_stack(self.template)
        mock_sd = {
            'outputs': [
                {'name': 'failed', 'error_output': True},
                {'name': 'foo'}
            ],
            'output_values': {
                'foo': 'bar',
                'deploy_stdout': 'A thing happened',
                'deploy_stderr': 'Extraneous logging',
                'deploy_status_code': 0
            },
            'status': self.deployment.COMPLETE
        }
        self.rpc_client.show_software_deployment.return_value = mock_sd
        self.assertEqual('bar', self.deployment.FnGetAtt('foo'))
        self.assertEqual('A thing happened',
                         self.deployment.FnGetAtt('deploy_stdout'))
        self.assertEqual('Extraneous logging',
                         self.deployment.FnGetAtt('deploy_stderr'))
        self.assertEqual(0, self.deployment.FnGetAtt('deploy_status_code'))

    def test_fn_get_att_convg(self):
        cache_data = {'deployment_mysql': node_data.NodeData.from_dict({
            'uuid': mock.ANY,
            'id': mock.ANY,
            'action': 'CREATE',
            'status': 'COMPLETE',
            'attrs': {'foo': 'bar'}
        })}
        self._create_stack(self.template, cache_data=cache_data)
        self.assertEqual(
            'bar', self.stack.defn[self.deployment.name].FnGetAtt('foo'))

    def test_fn_get_att_error(self):
        self._create_stack(self.template)

        mock_sd = {
            'outputs': [],
            'output_values': {'foo': 'bar'},
        }
        self.rpc_client.show_software_deployment.return_value = mock_sd

        err = self.assertRaises(
            exc.InvalidTemplateAttribute,
            self.deployment.FnGetAtt, 'foo2')
        self.assertEqual(
            'The Referenced Attribute (deployment_mysql foo2) is incorrect.',
            six.text_type(err))
    def test_handle_action(self):
        self._create_stack(self.template)

        self.mock_software_config()
        mock_sd = self.mock_deployment()
        rsrc = self.stack['deployment_mysql']

        self.rpc_client.show_software_deployment.return_value = mock_sd
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        config_id = '0ff2e903-78d7-4cca-829e-233af3dae705'
        prop_diff = {'config': config_id}
        props = copy.copy(rsrc.properties.data)
        props.update(prop_diff)
        snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)

        # by default (no 'actions' property) SoftwareDeployment must only
        # trigger for CREATE and UPDATE
        self.assertIsNotNone(self.deployment.handle_create())
        self.assertIsNotNone(self.deployment.handle_update(
            json_snippet=snippet, tmpl_diff=None, prop_diff=prop_diff))
        # ... but it must not trigger for SUSPEND, RESUME and DELETE
        self.assertIsNone(self.deployment.handle_suspend())
        self.assertIsNone(self.deployment.handle_resume())
        self.assertIsNone(self.deployment.handle_delete())

    def test_handle_action_for_component(self):
        self._create_stack(self.template)

        self.mock_software_component()
        mock_sd = self.mock_deployment()
        rsrc = self.stack['deployment_mysql']

        self.rpc_client.show_software_deployment.return_value = mock_sd
        self.deployment.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        config_id = '0ff2e903-78d7-4cca-829e-233af3dae705'
        prop_diff = {'config': config_id}
        props = copy.copy(rsrc.properties.data)
        props.update(prop_diff)
        snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)

        # for a SoftwareComponent, SoftwareDeployment must always trigger
        self.assertIsNotNone(self.deployment.handle_create())
        self.assertIsNotNone(self.deployment.handle_update(
            json_snippet=snippet, tmpl_diff=None, prop_diff=prop_diff))
        self.assertIsNotNone(self.deployment.handle_suspend())
        self.assertIsNotNone(self.deployment.handle_resume())
        self.assertIsNotNone(self.deployment.handle_delete())

    def test_handle_unused_action_for_component(self):
        self._create_stack(self.template)

        config = {
            'id': '48e8ade1-9196-42d5-89a2-f709fde42632',
            'group': 'component',
            'name': 'myconfig',
            'config': {
                'configs': [
                    {
                        'actions': ['CREATE'],
                        'config': 'the config',
                        'tool': 'a_tool'
                    }
                ]
            },
            'options': {},
            'inputs': [{
                'name': 'foo',
                'type': 'String',
                'default': 'baa',
            }, {
                'name': 'bar',
                'type': 'String',
                'default': 'baz',
            }],
            'outputs': [],
        }

        def show_sw_config(*args):
            return config.copy()

        self.rpc_client.show_software_config.side_effect = show_sw_config
        mock_sd = self.mock_deployment()
        self.rpc_client.show_software_deployment.return_value = mock_sd

        self.assertIsNotNone(self.deployment.handle_create())
        self.assertIsNone(self.deployment.handle_delete())

    def test_get_temp_url(self):
        dep_data = {}

        sc = mock.MagicMock()
        scc = self.patch(
            'heat.engine.clients.os.swift.SwiftClientPlugin._create')
        scc.return_value = sc
        sc.head_account.return_value = {
            'x-account-meta-temp-url-key': 'secrit'
        }
        sc.url = 'http://192.0.2.1/v1/AUTH_test_tenant_id'

        self._create_stack(self.template_temp_url_signal)

        def data_set(key, value, redact=False):
            dep_data[key] = value

        self.deployment.data_set = data_set
        self.deployment.data = mock.Mock(
            return_value=dep_data)

        self.deployment.id = 23
        self.deployment.uuid = str(uuid.uuid4())
        self.deployment.action = self.deployment.CREATE

        object_name = self.deployment.physical_resource_name()

        temp_url = self.deployment._get_swift_signal_url()
        temp_url_pattern = re.compile(
            '^http://192.0.2.1/v1/AUTH_test_tenant_id/'
            '(.*)/(software_deployment_test_stack-deployment_mysql-.*)'
            '\\?temp_url_sig=.*&temp_url_expires=\\d*$')
        self.assertRegex(temp_url, temp_url_pattern)
        m = temp_url_pattern.search(temp_url)
        container = m.group(1)
        self.assertEqual(object_name, m.group(2))
        self.assertEqual(dep_data['swift_signal_object_name'], object_name)

        self.assertEqual(dep_data['swift_signal_url'], temp_url)

        self.assertEqual(temp_url, self.deployment._get_swift_signal_url())

        sc.put_container.assert_called_once_with(container)
        sc.put_object.assert_called_once_with(container, object_name, '')
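    # Note: the regex asserted in test_get_temp_url above encodes the general
    # shape of a Swift TempURL (illustrative values only):
    #
    #   http://<swift-host>/v1/AUTH_<tenant>/<container>/<object>
    #       ?temp_url_sig=<signature>&temp_url_expires=<unix-timestamp>
    #
    # and the put_container/put_object assertions show the container and an
    # empty object being created up front so the server has somewhere to PUT
    # its signal data.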
    def test_delete_temp_url(self):
        object_name = str(uuid.uuid4())
        dep_data = {
            'swift_signal_object_name': object_name
        }
        self._create_stack(self.template_temp_url_signal)

        self.deployment.data_delete = mock.MagicMock()
        self.deployment.data = mock.Mock(
            return_value=dep_data)

        sc = mock.MagicMock()
        sc.get_container.return_value = ({}, [{'name': object_name}])
        sc.head_container.return_value = {
            'x-container-object-count': 0
        }
        scc = self.patch(
            'heat.engine.clients.os.swift.SwiftClientPlugin._create')
        scc.return_value = sc

        self.deployment.id = 23
        self.deployment.uuid = str(uuid.uuid4())
        container = self.stack.id

        self.deployment._delete_swift_signal_url()
        sc.delete_object.assert_called_once_with(container, object_name)
        self.assertEqual(
            [mock.call('swift_signal_object_name'),
             mock.call('swift_signal_url')],
            self.deployment.data_delete.mock_calls)

        swift_exc = swift.SwiftClientPlugin.exceptions_module
        sc.delete_object.side_effect = swift_exc.ClientException(
            'Not found', http_status=404)
        self.deployment._delete_swift_signal_url()
        self.assertEqual(
            [mock.call('swift_signal_object_name'),
             mock.call('swift_signal_url'),
             mock.call('swift_signal_object_name'),
             mock.call('swift_signal_url')],
            self.deployment.data_delete.mock_calls)

        del dep_data['swift_signal_object_name']
        self.deployment.physical_resource_name = mock.Mock()
        self.deployment._delete_swift_signal_url()
        self.assertFalse(self.deployment.physical_resource_name.called)

    def test_handle_action_temp_url(self):
        self._create_stack(self.template_temp_url_signal)
        dep_data = {
            'swift_signal_url': (
                'http://192.0.2.1/v1/AUTH_a/b/c'
                '?temp_url_sig=ctemp_url_expires=1234')
        }
        self.deployment.data = mock.Mock(
            return_value=dep_data)

        self.mock_software_config()

        for action in ('DELETE', 'SUSPEND', 'RESUME'):
            self.assertIsNone(self.deployment._handle_action(action))
        for action in ('CREATE', 'UPDATE'):
            self.assertIsNotNone(self.deployment._handle_action(action))

    def test_get_zaqar_queue(self):
        dep_data = {}

        zc = mock.MagicMock()
        zcc = self.patch(
            'heat.engine.clients.os.zaqar.ZaqarClientPlugin.create_for_tenant')
        zcc.return_value = zc
        mock_queue = mock.MagicMock()
        zc.queue.return_value = mock_queue
        signed_data = {"signature": "hi", "expires": "later"}
        mock_queue.signed_url.return_value = signed_data

        self._create_stack(self.template_zaqar_signal)

        def data_set(key, value, redact=False):
            dep_data[key] = value

        self.deployment.data_set = data_set
        self.deployment.data = mock.Mock(return_value=dep_data)

        self.deployment.id = 23
        self.deployment.uuid = str(uuid.uuid4())
        self.deployment.action = self.deployment.CREATE

        queue_id = self.deployment._get_zaqar_signal_queue_id()
        self.assertEqual(queue_id, dep_data['zaqar_signal_queue_id'])
        self.assertEqual(jsonutils.dumps(signed_data),
                         dep_data['zaqar_queue_signed_url_data'])
        self.assertEqual(queue_id,
                         self.deployment._get_zaqar_signal_queue_id())
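    # Note: as the assertions above suggest, the signed Zaqar URL data is
    # JSON-serialized into the resource data (zaqar_queue_signed_url_data)
    # alongside the queue id, presumably so that a repeated call can reuse
    # the same queue rather than signing a new one.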
    @mock.patch.object(zaqar.ZaqarClientPlugin, 'create_for_tenant')
    def test_delete_zaqar_queue(self, zcc):
        queue_id = str(uuid.uuid4())
        dep_data = {
            'password': 'password',
            'zaqar_signal_queue_id': queue_id
        }
        self._create_stack(self.template_zaqar_signal)

        self.deployment.data_delete = mock.MagicMock()
        self.deployment.data = mock.Mock(return_value=dep_data)

        zc = mock.MagicMock()
        zcc.return_value = zc

        self.deployment.id = 23
        self.deployment.uuid = str(uuid.uuid4())
        self.deployment._delete_zaqar_signal_queue()
        zc.queue.assert_called_once_with(queue_id)
        self.assertTrue(zc.queue(self.deployment.uuid).delete.called)
        self.assertEqual(
            [mock.call('zaqar_signal_queue_id')],
            self.deployment.data_delete.mock_calls)

        zaqar_exc = zaqar.ZaqarClientPlugin.exceptions_module
        zc.queue.delete.side_effect = zaqar_exc.ResourceNotFound()
        self.deployment._delete_zaqar_signal_queue()
        self.assertEqual(
            [mock.call('zaqar_signal_queue_id'),
             mock.call('zaqar_signal_queue_id')],
            self.deployment.data_delete.mock_calls)

        dep_data.pop('zaqar_signal_queue_id')
        self.deployment.physical_resource_name = mock.Mock()
        self.deployment._delete_zaqar_signal_queue()
        self.assertEqual(2, len(self.deployment.data_delete.mock_calls))

    def test_server_exists(self):
        # Setup
        self._create_stack(self.template_delete_suspend_resume)
        mock_sd = {'server_id': 'b509edfb-1448-4b57-8cb1-2e31acccbb8a'}

        # For a success case, this doesn't raise an exception
        self.patchobject(nova.NovaClientPlugin, 'get_server')

        # Test
        result = self.deployment._server_exists(mock_sd)
        self.assertTrue(result)

    def test_server_exists_no_server(self):
        # Setup
        self._create_stack(self.template_delete_suspend_resume)
        mock_sd = {'server_id': 'b509edfb-1448-4b57-8cb1-2e31acccbb8a'}

        # Simulate the server being gone: get_server raises EntityNotFound
        self.patchobject(nova.NovaClientPlugin, 'get_server',
                         side_effect=exc.EntityNotFound)

        # Test
        result = self.deployment._server_exists(mock_sd)
        self.assertFalse(result)


class SoftwareDeploymentGroupTest(common.HeatTestCase):

    template = {
        'heat_template_version': '2013-05-23',
        'resources': {
            'deploy_mysql': {
                'type': 'OS::Heat::SoftwareDeploymentGroup',
                'properties': {
                    'config': 'config_uuid',
                    'servers': {'server1': 'uuid1', 'server2': 'uuid2'},
                    'input_values': {'foo': 'bar'},
                    'name': '10_config'
                }
            }
        }
    }

    def setUp(self):
        common.HeatTestCase.setUp(self)
        self.rpc_client = mock.MagicMock()

    def test_build_resource_definition(self):
        stack = utils.parse_stack(self.template)
        snip = stack.t.resource_definitions(stack)['deploy_mysql']
        resg = sd.SoftwareDeploymentGroup('test', snip, stack)
        expect = rsrc_defn.ResourceDefinition(
            None,
            "OS::Heat::SoftwareDeployment",
            {'actions': ['CREATE', 'UPDATE'],
             'config': 'config_uuid',
             'input_values': {'foo': 'bar'},
             'name': '10_config',
             'server': 'uuid1',
             'signal_transport': 'CFN_SIGNAL'})

        rdef = resg.get_resource_def()
        self.assertEqual(
            expect, resg.build_resource_definition('server1', rdef))

        rdef = resg.get_resource_def(include_all=True)
        self.assertEqual(
            expect, resg.build_resource_definition('server1', rdef))

    def test_resource_names(self):
        stack = utils.parse_stack(self.template)
        snip = stack.t.resource_definitions(stack)['deploy_mysql']
        resg = sd.SoftwareDeploymentGroup('test', snip, stack)
        self.assertEqual(
            set(('server1', 'server2')),
            set(resg._resource_names())
        )

        resg.properties = {'servers': {'s1': 'u1', 's2': 'u2', 's3': 'u3'}}
        self.assertEqual(
            set(('s1', 's2', 's3')),
            set(resg._resource_names()))
""" stack = utils.parse_stack(self.template) snip = stack.t.resource_definitions(stack)['deploy_mysql'] resg = sd.SoftwareDeploymentGroup('test', snip, stack) templ = { "heat_template_version": "2015-04-30", "resources": { "server1": { 'type': 'OS::Heat::SoftwareDeployment', 'properties': { 'server': 'uuid1', 'actions': ['CREATE', 'UPDATE'], 'config': 'config_uuid', 'input_values': {'foo': 'bar'}, 'name': '10_config', 'signal_transport': 'CFN_SIGNAL' } }, "server2": { 'type': 'OS::Heat::SoftwareDeployment', 'properties': { 'server': 'uuid2', 'actions': ['CREATE', 'UPDATE'], 'config': 'config_uuid', 'input_values': {'foo': 'bar'}, 'name': '10_config', 'signal_transport': 'CFN_SIGNAL' } } } } self.assertEqual(templ, resg._assemble_nested(['server1', 'server2']).t) def test_validate(self): stack = utils.parse_stack(self.template) snip = stack.t.resource_definitions(stack)['deploy_mysql'] resg = sd.SoftwareDeploymentGroup('deploy_mysql', snip, stack) self.assertIsNone(resg.validate()) class SoftwareDeploymentGroupAttrTest(common.HeatTestCase): scenarios = [ ('stdouts', dict(group_attr='deploy_stdouts', nested_attr='deploy_stdout', values=['Thing happened on server1', 'ouch'])), ('stderrs', dict(group_attr='deploy_stderrs', nested_attr='deploy_stderr', values=['', "It's gone Pete Tong"])), ('status_codes', dict(group_attr='deploy_status_codes', nested_attr='deploy_status_code', values=[0, 1])), ('passthrough', dict(group_attr='some_attr', nested_attr='some_attr', values=['attr1', 'attr2'])), ] template = { 'heat_template_version': '2013-05-23', 'resources': { 'deploy_mysql': { 'type': 'OS::Heat::SoftwareDeploymentGroup', 'properties': { 'config': 'config_uuid', 'servers': {'server1': 'uuid1', 'server2': 'uuid2'}, 'input_values': {'foo': 'bar'}, 'name': '10_config' } } } } def setUp(self): super(SoftwareDeploymentGroupAttrTest, self).setUp() self.server_names = ['server1', 'server2'] self.servers = [mock.MagicMock() for s in self.server_names] self.stack = utils.parse_stack(self.template) def test_attributes(self): resg = self.create_dummy_stack() self.assertEqual(dict(zip(self.server_names, self.values)), resg.FnGetAtt(self.group_attr)) self.check_calls() def test_attributes_path(self): resg = self.create_dummy_stack() for i, r in enumerate(self.server_names): self.assertEqual(self.values[i], resg.FnGetAtt(self.group_attr, r)) self.check_calls(len(self.server_names)) def create_dummy_stack(self): snip = self.stack.t.resource_definitions(self.stack)['deploy_mysql'] resg = sd.SoftwareDeploymentGroup('test', snip, self.stack) resg.resource_id = 'test-test' nested = self.patchobject(resg, 'nested') nested.return_value = dict(zip(self.server_names, self.servers)) self._stub_get_attr(resg) return resg def _stub_get_attr(self, resg): def ref_id_fn(args): self.fail('Getting member reference ID for some reason') def attr_fn(args): res_name = args[0] return self.values[self.server_names.index(res_name)] def get_output(output_name): outputs = resg._nested_output_defns(resg._resource_names(), attr_fn, ref_id_fn) op_defns = {od.name: od for od in outputs} self.assertIn(output_name, op_defns) return op_defns[output_name].get_value() orig_get_attr = resg.FnGetAtt def get_attr(attr_name, *path): if not path: attr = attr_name else: attr = (attr_name,) + path # Mock referenced_attrs() so that _nested_output_definitions() # will include the output required for this attribute resg.referenced_attrs = mock.Mock(return_value=[attr]) # Pass through to actual function under test return orig_get_attr(attr_name, *path) 
        resg.FnGetAtt = mock.Mock(side_effect=get_attr)
        resg.get_output = mock.Mock(side_effect=get_output)

    def check_calls(self, count=1):
        pass


class SoftwareDeploymentGroupAttrFallbackTest(SoftwareDeploymentGroupAttrTest):
    def _stub_get_attr(self, resg):
        # Raise NotFound when getting output, to force fallback to old-school
        # grouputils functions
        resg.get_output = mock.Mock(side_effect=exc.NotFound)

        for server, value in zip(self.servers, self.values):
            server.FnGetAtt.return_value = value

    def check_calls(self, count=1):
        calls = [mock.call(c) for c in [self.nested_attr] * count]
        for server in self.servers:
            server.FnGetAtt.assert_has_calls(calls)


class SDGReplaceTest(common.HeatTestCase):
    template = {
        'heat_template_version': '2013-05-23',
        'resources': {
            'deploy_mysql': {
                'type': 'OS::Heat::SoftwareDeploymentGroup',
                'properties': {
                    'config': 'config_uuid',
                    'servers': {'server1': 'uuid1',
                                'server2': 'uuid2'},
                    'input_values': {'foo': 'bar'},
                    'name': '10_config'
                }
            }
        }
    }

    # 1. existing > batch_size
    # 2. existing < batch_size
    # 3. count > existing
    # 4. count < existing
    # 5. with pause_sec
    scenarios = [
        ('1', dict(count=2,
                   existing=['0', '1'], batch_size=1,
                   pause_sec=0, tasks=2)),
        ('2', dict(count=4,
                   existing=['0', '1'], batch_size=3,
                   pause_sec=0, tasks=2)),
        ('3', dict(count=3,
                   existing=['0', '1'], batch_size=2,
                   pause_sec=0, tasks=2)),
        ('4', dict(count=2,
                   existing=['0', '1', '2'], batch_size=2,
                   pause_sec=0, tasks=1)),
        ('5', dict(count=2,
                   existing=['0', '1'], batch_size=1,
                   pause_sec=1, tasks=3))]
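    # A rough sanity check of the expected task counts, assuming one task per
    # update batch plus one pause task between consecutive batches when
    # pause_sec > 0: scenario 1 replaces count=2 members in batches of 1,
    # giving 2 tasks; scenario 4 fits all 2 members into a single batch of 2,
    # giving 1 task; scenario 5 matches scenario 1 but adds a pause between
    # the two batches, giving 3 tasks.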
    def get_fake_nested_stack(self, names):
        nested_t = '''
heat_template_version: 2015-04-30
description: Resource Group
resources:
'''
        resource_snip = '''
  '%s':
    type: SoftwareDeployment
    properties:
      foo: bar
'''
        resources = [nested_t]
        for res_name in names:
            resources.extend([resource_snip % res_name])

        nested_t = ''.join(resources)
        return utils.parse_stack(template_format.parse(nested_t))

    def setUp(self):
        super(SDGReplaceTest, self).setUp()
        self.stack = utils.parse_stack(self.template)
        snip = self.stack.t.resource_definitions(self.stack)['deploy_mysql']
        self.group = sd.SoftwareDeploymentGroup('deploy_mysql',
                                                snip, self.stack)
        self.group.update_with_template = mock.Mock()
        self.group.check_update_complete = mock.Mock()

    def test_rolling_updates(self):
        self.group._nested = self.get_fake_nested_stack(self.existing)
        self.group.get_size = mock.Mock(return_value=self.count)
        tasks = self.group._replace(0, self.batch_size, self.pause_sec)
        self.assertEqual(self.tasks, len(tasks))
heat-10.0.2/heat/tests/openstack/heat/test_software_config.py0000666000175000017500000000631513343562340024332 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import contextlib

import mock

from heat.common import exception as exc
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import utils


class SoftwareConfigTest(common.HeatTestCase):

    def setUp(self):
        super(SoftwareConfigTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.properties = {
            'group': 'Heat::Shell',
            'inputs': [],
            'outputs': [],
            'options': {},
            'config': '#!/bin/bash'
        }
        self.stack = stack.Stack(
            self.ctx, 'software_config_test_stack',
            template.Template({
                'HeatTemplateFormatVersion': '2012-12-12',
                'Resources': {
                    'config_mysql': {
                        'Type': 'OS::Heat::SoftwareConfig',
                        'Properties': self.properties
                    }}}))
        self.config = self.stack['config_mysql']
        self.rpc_client = mock.MagicMock()
        self.config._rpc_client = self.rpc_client

        @contextlib.contextmanager
        def exc_filter(*args):
            try:
                yield
            except exc.NotFound:
                pass
        self.rpc_client.ignore_error_by_name.side_effect = exc_filter

    def test_handle_create(self):
        config_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        value = {'id': config_id}
        self.rpc_client.create_software_config.return_value = value
        self.config.handle_create()
        self.assertEqual(config_id, self.config.resource_id)

    def test_handle_delete(self):
        self.resource_id = None
        self.assertIsNone(self.config.handle_delete())
        config_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        self.config.resource_id = config_id
        self.rpc_client.delete_software_config.return_value = None
        self.assertIsNone(self.config.handle_delete())
        self.rpc_client.delete_software_config.side_effect = exc.NotFound
        self.assertIsNone(self.config.handle_delete())

    def test_resolve_attribute(self):
        self.assertIsNone(self.config._resolve_attribute('others'))
        self.config.resource_id = None
        self.assertIsNone(self.config._resolve_attribute('config'))
        self.config.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c'
        value = {'config': '#!/bin/bash'}
        self.rpc_client.show_software_config.return_value = value
        self.assertEqual(
            '#!/bin/bash', self.config._resolve_attribute('config'))
        self.rpc_client.show_software_config.side_effect = exc.NotFound
        self.assertIsNone(self.config._resolve_attribute('config'))
heat-10.0.2/heat/tests/openstack/heat/test_remote_stack.py0000666000175000017500000006571013343562340023637 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
import collections

from heatclient import exc
from heatclient.v1 import stacks
import mock
from oslo_config import cfg
import six

from heat.common import exception
from heat.common.i18n import _
from heat.common import template_format
from heat.engine.clients.os import heat_plugin
from heat.engine import environment
from heat.engine import node_data
from heat.engine import resource
from heat.engine.resources.openstack.heat import remote_stack
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.engine import stack
from heat.engine import template
from heat.tests import common as tests_common
from heat.tests import utils

cfg.CONF.import_opt('action_retry_limit', 'heat.common.config')

parent_stack_template = '''
heat_template_version: 2013-05-23
resources:
  remote_stack:
    type: OS::Heat::Stack
    properties:
      context:
        region_name: RegionOne
      template: { get_file: remote_template.yaml }
      timeout: 60
      parameters:
        name: foo
'''

remote_template = '''
heat_template_version: 2013-05-23
parameters:
  name:
    type: string
resources:
  resource1:
    type: GenericResourceType
outputs:
  foo:
    value: bar
'''

bad_template = '''
heat_template_version: 2013-05-26
parameters:
  name:
    type: string
resources:
  resource1:
    type: UnknownResourceType
outputs:
  foo:
    value: bar
'''


def get_stack(stack_id='c8a19429-7fde-47ea-a42f-40045488226c',
              stack_name='teststack', description='No description',
              creation_time='2013-08-04T20:57:55Z',
              updated_time='2013-08-04T20:57:55Z',
              stack_status='CREATE_COMPLETE',
              stack_status_reason='',
              outputs=None):
    action = stack_status[:stack_status.index('_')]
    status = stack_status[stack_status.index('_') + 1:]
    data = {
        'id': stack_id,
        'stack_name': stack_name,
        'description': description,
        'creation_time': creation_time,
        'updated_time': updated_time,
        'stack_status': stack_status,
        'stack_status_reason': stack_status_reason,
        'action': action,
        'status': status,
        'outputs': outputs or None,
    }
    return stacks.Stack(mock.MagicMock(), data)


class FakeClients(object):
    def __init__(self, context, region_name=None):
        self.ctx = context
        self.region_name = region_name or 'RegionOne'
        self.hc = None
        self.plugin = None

    def client(self, name):
        if self.region_name in ['RegionOne', 'RegionTwo']:
            if self.hc is None:
                self.hc = mock.MagicMock()
            return self.hc
        else:
            raise Exception('Failed connecting to Heat')

    def client_plugin(self, name):
        if self.plugin is None:
            self.plugin = heat_plugin.HeatClientPlugin(self.ctx)
        return self.plugin


class RemoteStackTest(tests_common.HeatTestCase):
    def setUp(self):
        super(RemoteStackTest, self).setUp()
        self.this_region = 'RegionOne'
        self.that_region = 'RegionTwo'
        self.bad_region = 'RegionNone'

        cfg.CONF.set_override('action_retry_limit', 0)
        self.parent = None
        self.heat = None
        self.client_plugin = None
        self.this_context = None
        self.old_clients = None

        def unset_clients_property():
            if self.this_context is not None:
                type(self.this_context).clients = self.old_clients

        self.addCleanup(unset_clients_property)

    def initialize(self):
        parent, rsrc = self.create_parent_stack(remote_region='RegionTwo')
        self.parent = parent
        self.heat = rsrc._context().clients.client("heat")
        self.client_plugin = rsrc._context().clients.client_plugin('heat')

    def create_parent_stack(self, remote_region=None, custom_template=None):
        snippet = template_format.parse(parent_stack_template)
        self.files = {
            'remote_template.yaml': custom_template or remote_template
        }
        region_name = remote_region or self.this_region
        props = snippet['resources']['remote_stack']['properties']
        # context property is not required; default to the current region
        if remote_region is None:
            del props['context']
        else:
            props['context']['region_name'] = region_name

        if self.this_context is None:
            self.this_context = utils.dummy_context(
                region_name=self.this_region)

        tmpl = template.Template(snippet, files=self.files)
        parent = stack.Stack(self.this_context, 'parent_stack', tmpl)

        # parent context checking
        ctx = parent.context.to_dict()
        self.assertEqual(self.this_region, ctx['region_name'])
        self.assertEqual(self.this_context.to_dict(), ctx)

        parent.store()

        resource_defns = parent.t.resource_definitions(parent)
        rsrc = remote_stack.RemoteStack(
            'remote_stack_res', resource_defns['remote_stack'], parent)

        # remote stack resource checking
        self.assertEqual(60, rsrc.properties.get('timeout'))

        remote_context = rsrc._context()
        hc = FakeClients(self.this_context, rsrc._region_name)
        if self.old_clients is None:
            self.old_clients = type(remote_context).clients
            type(remote_context).clients = mock.PropertyMock(return_value=hc)

        return parent, rsrc

    def create_remote_stack(self):
        # By default this method creates a stack in RegionTwo
        # (self.that_region)
        defaults = [get_stack(stack_status='CREATE_IN_PROGRESS'),
                    get_stack(stack_status='CREATE_COMPLETE')]

        if self.parent is None:
            self.initialize()

        # prepare clients to return status
        self.heat.stacks.create.return_value = {
            'stack': get_stack().to_dict()
        }
        self.heat.stacks.get = mock.MagicMock(side_effect=defaults)
        rsrc = self.parent['remote_stack']
        scheduler.TaskRunner(rsrc.create)()

        return rsrc

    def test_create_remote_stack_default_region(self):
        parent, rsrc = self.create_parent_stack()

        self.assertEqual((rsrc.INIT, rsrc.COMPLETE), rsrc.state)
        self.assertEqual(self.this_region, rsrc._region_name)
        ctx = rsrc.properties.get('context')
        self.assertIsNone(ctx)
        self.assertIsNone(rsrc.validate())

    def test_create_remote_stack_this_region(self):
        parent, rsrc = self.create_parent_stack(
            remote_region=self.this_region)

        self.assertEqual((rsrc.INIT, rsrc.COMPLETE), rsrc.state)
        self.assertEqual(self.this_region, rsrc._region_name)
        ctx = rsrc.properties.get('context')
        self.assertEqual(self.this_region, ctx['region_name'])
        self.assertIsNone(rsrc.validate())

    def test_create_remote_stack_that_region(self):
        parent, rsrc = self.create_parent_stack(
            remote_region=self.that_region)

        self.assertEqual((rsrc.INIT, rsrc.COMPLETE), rsrc.state)
        self.assertEqual(self.that_region, rsrc._region_name)
        ctx = rsrc.properties.get('context')
        self.assertEqual(self.that_region, ctx['region_name'])
        self.assertIsNone(rsrc.validate())

    def test_create_remote_stack_bad_region(self):
        parent, rsrc = self.create_parent_stack(remote_region=self.bad_region)

        self.assertEqual((rsrc.INIT, rsrc.COMPLETE), rsrc.state)
        self.assertEqual(self.bad_region, rsrc._region_name)
        ctx = rsrc.properties.get('context')
        self.assertEqual(self.bad_region, ctx['region_name'])
        ex = self.assertRaises(exception.StackValidationFailed,
                               rsrc.validate)
        msg = ('Cannot establish connection to Heat endpoint '
               'at region "%s"' % self.bad_region)
        self.assertIn(msg, six.text_type(ex))

    def test_remote_validation_failed(self):
        parent, rsrc = self.create_parent_stack(
            remote_region=self.that_region,
            custom_template=bad_template)

        self.assertEqual((rsrc.INIT, rsrc.COMPLETE), rsrc.state)
        self.assertEqual(self.that_region, rsrc._region_name)
        ctx = rsrc.properties.get('context')
        self.assertEqual(self.that_region, ctx['region_name'])

        # not setting or using self.heat because this test case is a special
        # one with the RemoteStack resource initialized but not created.
    def test_remote_validation_failed(self):
        parent, rsrc = self.create_parent_stack(
            remote_region=self.that_region,
            custom_template=bad_template)
        self.assertEqual((rsrc.INIT, rsrc.COMPLETE), rsrc.state)
        self.assertEqual(self.that_region, rsrc._region_name)
        ctx = rsrc.properties.get('context')
        self.assertEqual(self.that_region, ctx['region_name'])
        # not setting or using self.heat because this test case is a special
        # one with the RemoteStack resource initialized but not created.
        heat = rsrc._context().clients.client("heat")

        # heatclient.exc.BadRequest is the exception returned by a failed
        # validation
        heat.stacks.validate = mock.MagicMock(side_effect=exc.HTTPBadRequest)
        ex = self.assertRaises(exception.StackValidationFailed, rsrc.validate)
        msg = ('Failed validating stack template using Heat endpoint at '
               'region "%s"') % self.that_region
        self.assertIn(msg, six.text_type(ex))

    def test_create(self):
        rsrc = self.create_remote_stack()

        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        self.assertEqual('c8a19429-7fde-47ea-a42f-40045488226c',
                         rsrc.resource_id)
        env = environment.get_child_environment(rsrc.stack.env,
                                                {'name': 'foo'})
        args = {
            'stack_name': rsrc.physical_resource_name(),
            'template': template_format.parse(remote_template),
            'timeout_mins': 60,
            'disable_rollback': True,
            'parameters': {'name': 'foo'},
            'files': self.files,
            'environment': env.user_env_as_dict(),
        }
        self.heat.stacks.create.assert_called_with(**args)
        self.assertEqual(2, len(self.heat.stacks.get.call_args_list))

    def test_create_failed(self):
        returns = [get_stack(stack_status='CREATE_IN_PROGRESS'),
                   get_stack(stack_status='CREATE_FAILED',
                             stack_status_reason='Remote stack creation '
                                                 'failed')]
        # Note: only this test case does an out-of-band initialization; most
        # of the other test cases will have self.parent initialized.
        if self.parent is None:
            self.initialize()

        self.heat.stacks.create.return_value = {
            'stack': get_stack().to_dict()}
        self.heat.stacks.get = mock.MagicMock(side_effect=returns)
        rsrc = self.parent['remote_stack']
        error = self.assertRaises(exception.ResourceFailure,
                                  scheduler.TaskRunner(rsrc.create))
        error_msg = ('ResourceInError: resources.remote_stack: '
                     'Went to status CREATE_FAILED due to '
                     '"Remote stack creation failed"')
        self.assertEqual(error_msg, six.text_type(error))
        self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)

    def test_delete(self):
        returns = [get_stack(stack_status='DELETE_IN_PROGRESS'),
                   get_stack(stack_status='DELETE_COMPLETE')]

        rsrc = self.create_remote_stack()

        self.heat.stacks.get = mock.MagicMock(side_effect=returns)
        self.heat.stacks.delete = mock.MagicMock()
        remote_stack_id = rsrc.resource_id
        scheduler.TaskRunner(rsrc.delete)()

        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.heat.stacks.delete.assert_called_with(stack_id=remote_stack_id)

    def test_delete_already_gone(self):
        rsrc = self.create_remote_stack()

        self.heat.stacks.delete = mock.MagicMock(
            side_effect=exc.HTTPNotFound())
        self.heat.stacks.get = mock.MagicMock(side_effect=exc.HTTPNotFound())

        remote_stack_id = rsrc.resource_id
        scheduler.TaskRunner(rsrc.delete)()

        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.heat.stacks.delete.assert_called_with(stack_id=remote_stack_id)

    def test_delete_failed(self):
        returns = [get_stack(stack_status='DELETE_IN_PROGRESS'),
                   get_stack(stack_status='DELETE_FAILED',
                             stack_status_reason='Remote stack deletion '
                                                 'failed')]

        rsrc = self.create_remote_stack()

        self.heat.stacks.get = mock.MagicMock(side_effect=returns)
        self.heat.stacks.delete = mock.MagicMock()
        remote_stack_id = rsrc.resource_id
        error = self.assertRaises(exception.ResourceFailure,
                                  scheduler.TaskRunner(rsrc.delete))
        error_msg = ('ResourceInError: resources.remote_stack: '
                     'Went to status DELETE_FAILED due to '
                     '"Remote stack deletion failed"')
        self.assertIn(error_msg, six.text_type(error))
        self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)

        self.heat.stacks.delete.assert_called_with(stack_id=remote_stack_id)
        self.assertEqual(rsrc.resource_id, remote_stack_id)

    def
test_attribute(self): rsrc = self.create_remote_stack() outputs = [ { 'output_key': 'foo', 'output_value': 'bar' } ] created_stack = get_stack(stack_name='stack1', outputs=outputs) self.heat.stacks.get = mock.MagicMock(return_value=created_stack) self.assertEqual('stack1', rsrc.FnGetAtt('stack_name')) self.assertEqual('bar', rsrc.FnGetAtt('outputs')['foo']) self.heat.stacks.get.assert_called_with( stack_id='c8a19429-7fde-47ea-a42f-40045488226c') def test_attribute_failed(self): rsrc = self.create_remote_stack() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'non-existent_property') self.assertEqual( 'The Referenced Attribute (remote_stack non-existent_property) is ' 'incorrect.', six.text_type(error)) def test_snapshot(self): stacks = [get_stack(stack_status='SNAPSHOT_IN_PROGRESS'), get_stack(stack_status='SNAPSHOT_COMPLETE')] snapshot = { 'id': 'a29bc9e25aa44f99a9a3d59cd5b0e263', 'status': 'IN_PROGRESS' } rsrc = self.create_remote_stack() self.heat.stacks.get = mock.MagicMock(side_effect=stacks) self.heat.stacks.snapshot = mock.MagicMock(return_value=snapshot) scheduler.TaskRunner(rsrc.snapshot)() self.assertEqual((rsrc.SNAPSHOT, rsrc.COMPLETE), rsrc.state) self.assertEqual('a29bc9e25aa44f99a9a3d59cd5b0e263', rsrc.data().get('snapshot_id')) self.heat.stacks.snapshot.assert_called_with( stack_id=rsrc.resource_id) def test_restore(self): snapshot = { 'id': 'a29bc9e25aa44f99a9a3d59cd5b0e263', 'status': 'IN_PROGRESS' } remote_stack = mock.MagicMock() remote_stack.action = 'SNAPSHOT' remote_stack.status = 'COMPLETE' parent, rsrc = self.create_parent_stack() rsrc.action = rsrc.SNAPSHOT heat = rsrc._context().clients.client("heat") heat.stacks.snapshot = mock.MagicMock(return_value=snapshot) heat.stacks.get = mock.MagicMock(return_value=remote_stack) scheduler.TaskRunner(parent.snapshot, None)() self.assertEqual((parent.SNAPSHOT, parent.COMPLETE), parent.state) data = parent.prepare_abandon() remote_stack_snapshot = { 'snapshot': { 'id': 'a29bc9e25aa44f99a9a3d59cd5b0e263', 'status': 'COMPLETE', 'data': { 'files': data['files'], 'environment': data['environment'], 'template': template_format.parse( data['files']['remote_template.yaml']) } } } fake_snapshot = collections.namedtuple( 'Snapshot', ('data', 'stack_id'))(data, parent.id) heat.stacks.snapshot_show = mock.MagicMock( return_value=remote_stack_snapshot) self.patchobject(rsrc, 'update').return_value = None rsrc.action = rsrc.UPDATE rsrc.status = rsrc.COMPLETE remote_stack.action = 'UPDATE' parent.restore(fake_snapshot) self.assertEqual((parent.RESTORE, parent.COMPLETE), parent.state) def test_check(self): stacks = [get_stack(stack_status='CHECK_IN_PROGRESS'), get_stack(stack_status='CHECK_COMPLETE')] rsrc = self.create_remote_stack() self.heat.stacks.get = mock.MagicMock(side_effect=stacks) self.heat.actions.check = mock.MagicMock() scheduler.TaskRunner(rsrc.check)() self.assertEqual((rsrc.CHECK, rsrc.COMPLETE), rsrc.state) self.heat.actions.check.assert_called_with(stack_id=rsrc.resource_id) def test_check_failed(self): returns = [get_stack(stack_status='CHECK_IN_PROGRESS'), get_stack(stack_status='CHECK_FAILED', stack_status_reason='Remote stack check failed')] rsrc = self.create_remote_stack() self.heat.stacks.get = mock.MagicMock(side_effect=returns) self.heat.actions.resume = mock.MagicMock() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.check)) error_msg = ('ResourceInError: resources.remote_stack: ' 'Went to status CHECK_FAILED due to ' '"Remote stack check failed"') 
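# Illustrative sketch (not from the original suite): the status polling faked
# throughout these tests relies on mock's side_effect accepting an iterable,
# so each successive call returns the next element. A standalone version of
# the IN_PROGRESS -> COMPLETE sequence used above:
import mock

poll = mock.MagicMock(side_effect=['CHECK_IN_PROGRESS', 'CHECK_COMPLETE'])
assert poll() == 'CHECK_IN_PROGRESS'  # first poll: action still running
assert poll() == 'CHECK_COMPLETE'     # second poll: action finished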
self.assertEqual(error_msg, six.text_type(error)) self.assertEqual((rsrc.CHECK, rsrc.FAILED), rsrc.state) self.heat.actions.check.assert_called_with(stack_id=rsrc.resource_id) def test_resume(self): stacks = [get_stack(stack_status='RESUME_IN_PROGRESS'), get_stack(stack_status='RESUME_COMPLETE')] rsrc = self.create_remote_stack() rsrc.action = rsrc.SUSPEND self.heat.stacks.get = mock.MagicMock(side_effect=stacks) self.heat.actions.resume = mock.MagicMock() scheduler.TaskRunner(rsrc.resume)() self.assertEqual((rsrc.RESUME, rsrc.COMPLETE), rsrc.state) self.heat.actions.resume.assert_called_with(stack_id=rsrc.resource_id) def test_resume_failed(self): returns = [get_stack(stack_status='RESUME_IN_PROGRESS'), get_stack(stack_status='RESUME_FAILED', stack_status_reason='Remote stack resume failed')] rsrc = self.create_remote_stack() rsrc.action = rsrc.SUSPEND self.heat.stacks.get = mock.MagicMock(side_effect=returns) self.heat.actions.resume = mock.MagicMock() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.resume)) error_msg = ('ResourceInError: resources.remote_stack: ' 'Went to status RESUME_FAILED due to ' '"Remote stack resume failed"') self.assertEqual(error_msg, six.text_type(error)) self.assertEqual((rsrc.RESUME, rsrc.FAILED), rsrc.state) self.heat.actions.resume.assert_called_with(stack_id=rsrc.resource_id) def test_resume_failed_not_created(self): self.initialize() rsrc = self.parent['remote_stack'] rsrc.action = rsrc.SUSPEND error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.resume)) error_msg = ('Error: resources.remote_stack: ' 'Cannot resume remote_stack, resource not found') self.assertEqual(error_msg, six.text_type(error)) self.assertEqual((rsrc.RESUME, rsrc.FAILED), rsrc.state) def test_suspend(self): stacks = [get_stack(stack_status='SUSPEND_IN_PROGRESS'), get_stack(stack_status='SUSPEND_COMPLETE')] rsrc = self.create_remote_stack() self.heat.stacks.get = mock.MagicMock(side_effect=stacks) self.heat.actions.suspend = mock.MagicMock() scheduler.TaskRunner(rsrc.suspend)() self.assertEqual((rsrc.SUSPEND, rsrc.COMPLETE), rsrc.state) self.heat.actions.suspend.assert_called_with(stack_id=rsrc.resource_id) def test_suspend_failed(self): stacks = [get_stack(stack_status='SUSPEND_IN_PROGRESS'), get_stack(stack_status='SUSPEND_FAILED', stack_status_reason='Remote stack suspend failed')] rsrc = self.create_remote_stack() self.heat.stacks.get = mock.MagicMock(side_effect=stacks) self.heat.actions.suspend = mock.MagicMock() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.suspend)) error_msg = ('ResourceInError: resources.remote_stack: ' 'Went to status SUSPEND_FAILED due to ' '"Remote stack suspend failed"') self.assertEqual(error_msg, six.text_type(error)) self.assertEqual((rsrc.SUSPEND, rsrc.FAILED), rsrc.state) # assert suspend was not called self.heat.actions.suspend.assert_has_calls([]) def test_suspend_failed_not_created(self): self.initialize() rsrc = self.parent['remote_stack'] # Note: the resource is not created so far self.heat.actions.suspend = mock.MagicMock() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.suspend)) error_msg = ('Error: resources.remote_stack: ' 'Cannot suspend remote_stack, resource not found') self.assertEqual(error_msg, six.text_type(error)) self.assertEqual((rsrc.SUSPEND, rsrc.FAILED), rsrc.state) # assert suspend was not called self.heat.actions.suspend.assert_has_calls([]) def test_update(self): stacks = 
[get_stack(stack_status='UPDATE_IN_PROGRESS'), get_stack(stack_status='UPDATE_COMPLETE')] rsrc = self.create_remote_stack() props = dict(rsrc.properties) props['parameters']['name'] = 'bar' update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) self.heat.stacks.get = mock.MagicMock(side_effect=stacks) scheduler.TaskRunner(rsrc.update, update_snippet)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.assertEqual('bar', rsrc.properties.get('parameters')['name']) env = environment.get_child_environment(rsrc.stack.env, {'name': 'bar'}) fields = { 'stack_id': rsrc.resource_id, 'template': template_format.parse(remote_template), 'timeout_mins': 60, 'disable_rollback': True, 'parameters': {'name': 'bar'}, 'files': self.files, 'environment': env.user_env_as_dict(), } self.heat.stacks.update.assert_called_with(**fields) self.assertEqual(2, len(self.heat.stacks.get.call_args_list)) def test_update_with_replace(self): rsrc = self.create_remote_stack() props = dict(rsrc.properties) props['context']['region_name'] = 'RegionOne' update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) self.assertRaises(resource.UpdateReplace, scheduler.TaskRunner(rsrc.update, update_snippet)) def test_update_failed(self): stacks = [get_stack(stack_status='UPDATE_IN_PROGRESS'), get_stack(stack_status='UPDATE_FAILED', stack_status_reason='Remote stack update failed')] rsrc = self.create_remote_stack() props = dict(rsrc.properties) props['parameters']['name'] = 'bar' update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) self.heat.stacks.get = mock.MagicMock(side_effect=stacks) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.update, update_snippet)) error_msg = _('ResourceInError: resources.remote_stack: ' 'Went to status UPDATE_FAILED due to ' '"Remote stack update failed"') self.assertEqual(error_msg, six.text_type(error)) self.assertEqual((rsrc.UPDATE, rsrc.FAILED), rsrc.state) self.assertEqual(2, len(self.heat.stacks.get.call_args_list)) def test_update_no_change(self): stacks = [get_stack(stack_status='UPDATE_IN_PROGRESS'), get_stack(stack_status='UPDATE_COMPLETE')] rsrc = self.create_remote_stack() props = dict(rsrc.properties) update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) self.heat.stacks.get = mock.MagicMock(side_effect=stacks) scheduler.TaskRunner(rsrc.update, update_snippet)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) def test_remote_stack_refid(self): t = template_format.parse(parent_stack_template) stack = utils.parse_stack(t) rsrc = stack['remote_stack'] rsrc.resource_id = 'xyz' self.assertEqual('xyz', rsrc.FnGetRefId()) def test_remote_stack_refid_convergence_cache_data(self): t = template_format.parse(parent_stack_template) cache_data = {'remote_stack': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'convg_xyz' })} stack = utils.parse_stack(t, cache_data=cache_data) rsrc = stack.defn['remote_stack'] self.assertEqual('convg_xyz', rsrc.FnGetRefId()) def test_update_in_check_failed_state(self): rsrc = self.create_remote_stack() rsrc.state_set(rsrc.CHECK, rsrc.FAILED) props = dict(rsrc.properties) update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) self.assertRaises(resource.UpdateReplace, scheduler.TaskRunner(rsrc.update, update_snippet)) heat-10.0.2/heat/tests/openstack/heat/test_random_string.py0000666000175000017500000003033113343562340024014 0ustar 
zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re import mock import six from testtools import matchers from heat.common import exception from heat.common import template_format from heat.engine import node_data from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests import utils class TestRandomString(common.HeatTestCase): template_random_string = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: secret1: Type: OS::Heat::RandomString secret2: Type: OS::Heat::RandomString Properties: length: 10 secret3: Type: OS::Heat::RandomString Properties: length: 32 character_classes: - class: digits min: 1 - class: uppercase min: 1 - class: lowercase min: 20 character_sequences: - sequence: (),[]{} min: 1 - sequence: $_ min: 2 - sequence: '@' min: 5 secret4: Type: OS::Heat::RandomString Properties: length: 25 character_classes: - class: digits min: 1 - class: uppercase min: 1 - class: lowercase min: 20 secret5: Type: OS::Heat::RandomString Properties: length: 10 character_sequences: - sequence: (),[]{} min: 1 - sequence: $_ min: 2 - sequence: '@' min: 5 ''' def create_stack(self, templ): self.stack = self.parse_stack(template_format.parse(templ)) self.assertIsNone(self.stack.create()) return self.stack def parse_stack(self, t): stack_name = 'test_stack' tmpl = template.Template(t) stack = parser.Stack(utils.dummy_context(), stack_name, tmpl) stack.validate() stack.store() return stack def assert_min(self, pattern, string, minimum): self.assertGreaterEqual(len(re.findall(pattern, string)), minimum) def test_random_string(self): stack = self.create_stack(self.template_random_string) secret1 = stack['secret1'] random_string = secret1.FnGetAtt('value') self.assert_min('[a-zA-Z0-9]', random_string, 32) self.assertRaises(exception.InvalidTemplateAttribute, secret1.FnGetAtt, 'foo') self.assertEqual(secret1.FnGetRefId(), random_string) secret2 = stack['secret2'] random_string = secret2.FnGetAtt('value') self.assert_min('[a-zA-Z0-9]', random_string, 10) self.assertEqual(secret2.FnGetRefId(), random_string) secret3 = stack['secret3'] random_string = secret3.FnGetAtt('value') self.assertEqual(32, len(random_string)) self.assert_min('[0-9]', random_string, 1) self.assert_min('[A-Z]', random_string, 1) self.assert_min('[a-z]', random_string, 20) self.assert_min(r'[(),\[\]{}]', random_string, 1) self.assert_min('[$_]', random_string, 2) self.assert_min('@', random_string, 5) self.assertEqual(secret3.FnGetRefId(), random_string) secret4 = stack['secret4'] random_string = secret4.FnGetAtt('value') self.assertEqual(25, len(random_string)) self.assert_min('[0-9]', random_string, 1) self.assert_min('[A-Z]', random_string, 1) self.assert_min('[a-z]', random_string, 20) self.assertEqual(secret4.FnGetRefId(), random_string) secret5 = stack['secret5'] random_string = secret5.FnGetAtt('value') self.assertEqual(10, len(random_string)) self.assert_min(r'[(),\[\]{}]', random_string, 1) self.assert_min('[$_]', random_string, 2) 
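# Illustrative sketch (not from the original suite): assert_min() above
# counts pattern occurrences with re.findall, so a requirement like "at least
# two characters from the sequence '$_'" reduces to a pattern count:
import re

sample = 'a$b_c$'
assert len(re.findall('[$_]', sample)) >= 2  # two or more from the sequence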
self.assert_min('@', random_string, 5) self.assertEqual(secret5.FnGetRefId(), random_string) # Prove the name is returned before create sets the ID secret5.resource_id = None self.assertEqual('secret5', secret5.FnGetRefId()) def test_hidden_sequence_property(self): hidden_prop_templ = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: secret: Type: OS::Heat::RandomString Properties: length: 100 sequence: octdigits ''' stack = self.create_stack(hidden_prop_templ) secret = stack['secret'] random_string = secret.FnGetAtt('value') self.assert_min('[0-7]', random_string, 100) self.assertEqual(secret.FnGetRefId(), random_string) # check, that property was translated according to the TranslationRule self.assertIsNone(secret.properties['sequence']) expected = [{'class': u'octdigits', 'min': 1}] self.assertEqual(expected, secret.properties['character_classes']) def test_random_string_refid_convergence_cache_data(self): t = template_format.parse(self.template_random_string) cache_data = {'secret1': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'xyz' })} stack = utils.parse_stack(t, cache_data=cache_data) rsrc = stack.defn['secret1'] self.assertEqual('xyz', rsrc.FnGetRefId()) def test_invalid_length(self): template_random_string = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: secret: Type: OS::Heat::RandomString Properties: length: 5 character_classes: - class: digits min: 5 character_sequences: - sequence: (),[]{} min: 1 ''' exc = self.assertRaises(exception.StackValidationFailed, self.create_stack, template_random_string) self.assertEqual("Length property cannot be smaller than combined " "character class and character sequence minimums", six.text_type(exc)) def test_max_length(self): template_random_string = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: secret: Type: OS::Heat::RandomString Properties: length: 512 ''' stack = self.create_stack(template_random_string) secret = stack['secret'] random_string = secret.FnGetAtt('value') self.assertEqual(512, len(random_string)) self.assertEqual(secret.FnGetRefId(), random_string) def test_exceeds_max_length(self): template_random_string = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: secret: Type: OS::Heat::RandomString Properties: length: 513 ''' exc = self.assertRaises(exception.StackValidationFailed, self.create_stack, template_random_string) self.assertIn('513 is out of range (min: 1, max: 512)', six.text_type(exc)) class TestGenerateRandomString(common.HeatTestCase): scenarios = [ ('lettersdigits', dict( length=1, seq='lettersdigits', pattern='[a-zA-Z0-9]')), ('letters', dict( length=10, seq='letters', pattern='[a-zA-Z]')), ('lowercase', dict( length=100, seq='lowercase', pattern='[a-z]')), ('uppercase', dict( length=50, seq='uppercase', pattern='[A-Z]')), ('digits', dict( length=512, seq='digits', pattern='[0-9]')), ('hexdigits', dict( length=16, seq='hexdigits', pattern='[A-F0-9]')), ('octdigits', dict( length=32, seq='octdigits', pattern='[0-7]')) ] template_rs = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: secret: Type: OS::Heat::RandomString ''' def parse_stack(self, t): stack_name = 'test_stack' tmpl = template.Template(t) stack = parser.Stack(utils.dummy_context(), stack_name, tmpl) stack.validate() stack.store() return stack # test was saved to test backward compatibility with old behavior def test_generate_random_string_backward_compatible(self): stack = self.parse_stack(template_format.parse(self.template_rs)) secret = 
stack['secret'] char_classes = secret.properties['character_classes'] for char_cl in char_classes: char_cl['class'] = self.seq # run each test multiple times to confirm random generator # doesn't generate a matching pattern by chance for i in range(1, 32): r = secret._generate_random_string([], char_classes, self.length) self.assertThat(r, matchers.HasLength(self.length)) regex = '%s{%s}' % (self.pattern, self.length) self.assertThat(r, matchers.MatchesRegex(regex)) class TestGenerateRandomStringDistribution(common.HeatTestCase): def run_test(self, tmpl, iterations=5): stack = utils.parse_stack(template_format.parse(tmpl)) secret = stack['secret'] secret.data_set = mock.Mock() for i in range(iterations): secret.handle_create() return [call[1][1] for call in secret.data_set.mock_calls] def char_counts(self, random_strings, char): return [rs.count(char) for rs in random_strings] def check_stats(self, char_counts, expected_mean, allowed_variance, expected_minimum=0): mean = float(sum(char_counts)) / len(char_counts) self.assertLess(mean, expected_mean + allowed_variance) self.assertGreater(mean, max(0, expected_mean - allowed_variance)) if expected_minimum: self.assertGreaterEqual(min(char_counts), expected_minimum) def test_class_uniformity(self): template_rs = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: secret: Type: OS::Heat::RandomString Properties: length: 66 character_classes: - class: lettersdigits character_sequences: - sequence: "*$" ''' results = self.run_test(template_rs, 10) for char in '$*': self.check_stats(self.char_counts(results, char), 1.5, 2) def test_repeated_sequence(self): template_rs = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: secret: Type: OS::Heat::RandomString Properties: length: 40 character_classes: [] character_sequences: - sequence: "**********$*****************************" ''' results = self.run_test(template_rs) for char in '$*': self.check_stats(self.char_counts(results, char), 20, 6) def test_overlapping_classes(self): template_rs = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: secret: Type: OS::Heat::RandomString Properties: length: 624 character_classes: - class: lettersdigits - class: digits - class: octdigits - class: hexdigits ''' results = self.run_test(template_rs, 20) self.check_stats(self.char_counts(results, '0'), 10.3, 2.5) def test_overlapping_sequences(self): template_rs = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: secret: Type: OS::Heat::RandomString Properties: length: 60 character_classes: [] character_sequences: - sequence: "01" - sequence: "02" - sequence: "03" - sequence: "04" - sequence: "05" - sequence: "06" - sequence: "07" - sequence: "08" - sequence: "09" ''' results = self.run_test(template_rs) self.check_stats(self.char_counts(results, '0'), 10, 5) def test_overlapping_class_sequence(self): template_rs = ''' HeatTemplateFormatVersion: '2012-12-12' Resources: secret: Type: OS::Heat::RandomString Properties: length: 402 character_classes: - class: octdigits character_sequences: - sequence: "0" ''' results = self.run_test(template_rs, 10) self.check_stats(self.char_counts(results, '0'), 51.125, 8, 1) heat-10.0.2/heat/tests/openstack/heat/test_instance_group.py0000666000175000017500000004702513343562351024200 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock import six from heat.common import exception from heat.common import grouputils from heat.common import short_id from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine import resource from heat.engine.resources.openstack.heat import instance_group as instgrp from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import stk_defn from heat.tests.autoscaling import inline_templates from heat.tests import common from heat.tests import utils class TestInstanceGroup(common.HeatTestCase): def setUp(self): super(TestInstanceGroup, self).setUp() t = template_format.parse(inline_templates.as_template) self.stack = utils.parse_stack(t, params=inline_templates.as_params) self.defn = rsrc_defn.ResourceDefinition( 'asg', 'OS::Heat::InstanceGroup', {'Size': 2, 'AvailabilityZones': ['zoneb'], 'LaunchConfigurationName': 'config'}) self.instance_group = instgrp.InstanceGroup('asg', self.defn, self.stack) def test_update_timeout(self): self.stack.timeout_secs = mock.Mock(return_value=100) # there are 3 batches, so we need 2 pauses by 20 sec # result timeout should be 100 - 2 * 20 = 60 self.assertEqual(60, self.instance_group._update_timeout( batch_cnt=3, pause_sec=20)) def test_child_template(self): self.instance_group._create_template = mock.Mock(return_value='tpl') self.assertEqual('tpl', self.instance_group.child_template()) self.instance_group._create_template.assert_called_once_with(2) def test_child_params(self): expected = {'parameters': {}, 'resource_registry': { 'OS::Heat::ScaledResource': 'AWS::EC2::Instance'}} self.assertEqual(expected, self.instance_group.child_params()) def test_tags_default(self): expected = [{'Value': u'asg', 'Key': 'metering.groupname'}] self.assertEqual(expected, self.instance_group._tags()) def test_tags_with_extra(self): self.instance_group.properties.data['Tags'] = [ {'Key': 'fee', 'Value': 'foo'}] expected = [{'Key': 'fee', 'Value': 'foo'}, {'Value': u'asg', 'Key': 'metering.groupname'}] self.assertEqual(expected, self.instance_group._tags()) def test_tags_with_metering(self): self.instance_group.properties.data['Tags'] = [ {'Key': 'metering.fee', 'Value': 'foo'}] expected = [{'Key': 'metering.fee', 'Value': 'foo'}] self.assertEqual(expected, self.instance_group._tags()) def test_validate_launch_conf_ref(self): # test the launch conf ref can't be found props = self.instance_group.properties.data props['LaunchConfigurationName'] = 'JobServerConfig' error = self.assertRaises(ValueError, self.instance_group.validate) self.assertIn('(JobServerConfig) reference can not be found', six.text_type(error)) # test resource name of instance group not WebServerGroup, so no ref props['LaunchConfigurationName'] = 'LaunchConfig' error = self.assertRaises(ValueError, self.instance_group.validate) self.assertIn('LaunchConfigurationName (LaunchConfig) requires a ' 'reference to the configuration not just the ' 'name of the resource.', six.text_type(error)) # test validate ok if change instance_group name to 'WebServerGroup' self.instance_group.name = 'WebServerGroup' 
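# Illustrative sketch (not from the original suite): a toy restatement of the
# first validation path exercised in test_validate_launch_conf_ref, namely
# that the property value must name an existing resource at all; the function
# and argument names here are hypothetical:
def check_launch_config_name(value, stack_resource_names):
    if value not in stack_resource_names:
        raise ValueError('(%s) reference can not be found' % value)


check_launch_config_name('LaunchConfig', {'LaunchConfig', 'WebServerGroup'})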
self.instance_group.validate() def test_handle_create(self): self.instance_group.create_with_template = mock.Mock(return_value=None) self.instance_group._create_template = mock.Mock(return_value='{}') self.instance_group.handle_create() self.instance_group._create_template.assert_called_once_with(2) self.instance_group.create_with_template.assert_called_once_with('{}') def test_update_in_failed(self): self.instance_group.state_set('CREATE', 'FAILED') # to update the failed instance_group self.instance_group.resize = mock.Mock(return_value=None) self.instance_group.handle_update(self.defn, None, None) self.instance_group.resize.assert_called_once_with(2) def test_handle_delete(self): self.instance_group.delete_nested = mock.Mock(return_value=None) self.instance_group.handle_delete() self.instance_group.delete_nested.assert_called_once_with() def test_handle_update_size(self): self.instance_group._try_rolling_update = mock.Mock(return_value=None) self.instance_group.resize = mock.Mock(return_value=None) props = {'Size': 5} defn = rsrc_defn.ResourceDefinition( 'nopayload', 'AWS::AutoScaling::AutoScalingGroup', props) self.instance_group.handle_update(defn, None, props) self.instance_group.resize.assert_called_once_with(5) def test_attributes(self): mock_members = self.patchobject(grouputils, 'get_members') instances = [] for ip_ex in six.moves.range(1, 4): inst = mock.Mock() inst.FnGetAtt.return_value = '2.1.3.%d' % ip_ex instances.append(inst) mock_members.return_value = instances res = self.instance_group._resolve_attribute('InstanceList') self.assertEqual('2.1.3.1,2.1.3.2,2.1.3.3', res) def test_instance_group_refid_rsrc_name(self): self.instance_group.id = '123' self.instance_group.uuid = '9bfb9456-3fe8-41f4-b318-9dba18eeef74' self.instance_group.action = 'CREATE' expected = '%s-%s-%s' % (self.instance_group.stack.name, self.instance_group.name, short_id.get_id(self.instance_group.uuid)) self.assertEqual(expected, self.instance_group.FnGetRefId()) def test_instance_group_refid_rsrc_id(self): self.instance_group.resource_id = 'phy-rsrc-id' self.assertEqual('phy-rsrc-id', self.instance_group.FnGetRefId()) class TestLaunchConfig(common.HeatTestCase): def create_resource(self, t, stack, resource_name): # subsequent resources may need to reference previous created resources # use the stack's resource objects instead of instantiating new ones rsrc = stack[resource_name] self.assertIsNone(rsrc.validate()) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) return rsrc def test_update_metadata_replace(self): """Updating the config's metadata causes a config replacement.""" lc_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Resources": { "JobServerConfig" : { "Type" : "AWS::AutoScaling::LaunchConfiguration", "Metadata": {"foo": "bar"}, "Properties": { "ImageId" : "foo", "InstanceType" : "m1.large", "KeyName" : "test", } } } } ''' self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_KeypairConstraint_validate() t = template_format.parse(lc_template) stack = utils.parse_stack(t) rsrc = self.create_resource(t, stack, 'JobServerConfig') props = copy.copy(rsrc.properties.data) metadata = copy.copy(rsrc.metadata_get()) update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props, metadata) # Change nothing in the first update scheduler.TaskRunner(rsrc.update, update_snippet)() self.assertEqual('bar', metadata['foo']) metadata['foo'] = 'wibble' update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, 
rsrc.type(), props, metadata) # Changing metadata in the second update triggers UpdateReplace updater = scheduler.TaskRunner(rsrc.update, update_snippet) self.assertRaises(resource.UpdateReplace, updater) class LoadbalancerReloadTest(common.HeatTestCase): def test_Instances(self): t = template_format.parse(inline_templates.as_template) stack = utils.parse_stack(t) lb = stack['ElasticLoadBalancer'] lb.update = mock.Mock(return_value=None) defn = rsrc_defn.ResourceDefinition( 'asg', 'OS::Heat::InstanceGroup', {'Size': 2, 'AvailabilityZones': ['zoneb'], "LaunchConfigurationName": "LaunchConfig", "LoadBalancerNames": ["ElasticLoadBalancer"]}) group = instgrp.InstanceGroup('asg', defn, stack) mock_members = self.patchobject(grouputils, 'get_member_refids') mock_members.return_value = ['aaaa', 'bbb'] expected = rsrc_defn.ResourceDefinition( 'ElasticLoadBalancer', 'AWS::ElasticLoadBalancing::LoadBalancer', {'Instances': ['aaaa', 'bbb'], 'Listeners': [{'InstancePort': u'80', 'LoadBalancerPort': u'80', 'Protocol': 'HTTP'}], 'AvailabilityZones': ['nova']}, metadata={}, deletion_policy='Delete' ) group._lb_reload() mock_members.assert_called_once_with(group, exclude=[]) lb.update.assert_called_once_with(expected) def test_members(self): self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) t = template_format.parse(inline_templates.as_template) t['Resources']['ElasticLoadBalancer'] = { 'Type': 'OS::Neutron::LoadBalancer', 'Properties': { 'protocol_port': 8080, } } stack = utils.parse_stack(t) lb = stack['ElasticLoadBalancer'] lb.update = mock.Mock(return_value=None) defn = rsrc_defn.ResourceDefinition( 'asg', 'OS::Heat::InstanceGroup', {'Size': 2, 'AvailabilityZones': ['zoneb'], "LaunchConfigurationName": "LaunchConfig", "LoadBalancerNames": ["ElasticLoadBalancer"]}) group = instgrp.InstanceGroup('asg', defn, stack) mock_members = self.patchobject(grouputils, 'get_member_refids') mock_members.return_value = ['aaaa', 'bbb'] expected = rsrc_defn.ResourceDefinition( 'ElasticLoadBalancer', 'OS::Neutron::LoadBalancer', {'protocol_port': 8080, 'members': ['aaaa', 'bbb']}, metadata={}, deletion_policy='Delete') group._lb_reload() mock_members.assert_called_once_with(group, exclude=[]) lb.update.assert_called_once_with(expected) def test_lb_reload_invalid_resource(self): t = template_format.parse(inline_templates.as_template) t['Resources']['ElasticLoadBalancer'] = { 'Type': 'AWS::EC2::Volume', 'Properties': { 'AvailabilityZone': 'nova' } } stack = utils.parse_stack(t) lb = stack['ElasticLoadBalancer'] lb.update = mock.Mock(return_value=None) defn = rsrc_defn.ResourceDefinition( 'asg', 'OS::Heat::InstanceGroup', {'Size': 2, 'AvailabilityZones': ['zoneb'], "LaunchConfigurationName": "LaunchConfig", "LoadBalancerNames": ["ElasticLoadBalancer"]}) group = instgrp.InstanceGroup('asg', defn, stack) mock_members = self.patchobject(grouputils, 'get_member_refids') mock_members.return_value = ['aaaa', 'bbb'] error = self.assertRaises(exception.Error, group._lb_reload) self.assertEqual( "Unsupported resource 'ElasticLoadBalancer' in " "LoadBalancerNames", six.text_type(error)) def test_lb_reload_static_resolve(self): t = template_format.parse(inline_templates.as_template) properties = t['Resources']['ElasticLoadBalancer']['Properties'] properties['AvailabilityZones'] = {'Fn::GetAZs': ''} self.patchobject(stk_defn.StackDefinition, 'get_availability_zones', return_value=['abc', 'xyz']) mock_members = self.patchobject(grouputils, 'get_member_refids') mock_members.return_value = 
['aaaabbbbcccc'] stack = utils.parse_stack(t, params=inline_templates.as_params) lb = stack['ElasticLoadBalancer'] lb.state_set(lb.CREATE, lb.COMPLETE) lb.handle_update = mock.Mock(return_value=None) group = stack['WebServerGroup'] group._lb_reload() lb.handle_update.assert_called_once_with( mock.ANY, mock.ANY, {'Instances': ['aaaabbbbcccc']}) class InstanceGroupWithNestedStack(common.HeatTestCase): def setUp(self): super(InstanceGroupWithNestedStack, self).setUp() t = template_format.parse(inline_templates.as_template) self.stack = utils.parse_stack(t, params=inline_templates.as_params) self.create_launch_config(t, self.stack) wsg_props = self.stack['WebServerGroup'].t._properties self.defn = rsrc_defn.ResourceDefinition( 'asg', 'OS::Heat::InstanceGroup', {'Size': 2, 'AvailabilityZones': ['zoneb'], 'LaunchConfigurationName': wsg_props['LaunchConfigurationName']}) self.group = instgrp.InstanceGroup('asg', self.defn, self.stack) self.group._lb_reload = mock.Mock() self.group.update_with_template = mock.Mock() self.group.check_update_complete = mock.Mock() def create_launch_config(self, t, stack): self.stub_ImageConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_SnapshotConstraint_validate() rsrc = stack['LaunchConfig'] self.assertIsNone(rsrc.validate()) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) return rsrc def get_fake_nested_stack(self, size=1): tmpl = ''' heat_template_version: 2013-05-23 description: AutoScaling Test resources: ''' resource = ''' r%(i)d: type: ResourceWithPropsAndAttrs properties: Foo: bar%(i)d ''' resources = '\n'.join([resource % {'i': i + 1} for i in range(size)]) nested_t = tmpl + resources return utils.parse_stack(template_format.parse(nested_t)) class ReplaceTest(InstanceGroupWithNestedStack): scenarios = [ ('1', dict(min_in_service=0, batch_size=1, updates=2)), ('2', dict(min_in_service=0, batch_size=2, updates=1)), ('3', dict(min_in_service=3, batch_size=1, updates=3)), ('4', dict(min_in_service=3, batch_size=2, updates=2))] def setUp(self): super(ReplaceTest, self).setUp() self.group._nested = self.get_fake_nested_stack(2) def test_rolling_updates(self): self.group._replace(self.min_in_service, self.batch_size, 0) self.assertEqual(self.updates, len(self.group.update_with_template.call_args_list)) self.assertEqual(self.updates + 1, len(self.group._lb_reload.call_args_list)) class ResizeWithFailedInstancesTest(InstanceGroupWithNestedStack): scenarios = [ ('1', dict(size=3, failed=['r1'], content={'r2', 'r3', 'r4'})), ('2', dict(size=3, failed=['r4'], content={'r1', 'r2', 'r3'})), ('3', dict(size=2, failed=['r1', 'r2'], content={'r3', 'r4'})), ('4', dict(size=2, failed=['r3', 'r4'], content={'r1', 'r2'})), ('5', dict(size=2, failed=['r2', 'r3'], content={'r1', 'r4'})), ('6', dict(size=3, failed=['r2', 'r3'], content={'r1', 'r3', 'r4'}))] def setUp(self): super(ResizeWithFailedInstancesTest, self).setUp() nested = self.get_fake_nested_stack(4) inspector = mock.Mock(spec=grouputils.GroupInspector) self.patchobject(grouputils.GroupInspector, 'from_parent_resource', return_value=inspector) inspector.member_names.return_value = (self.failed + sorted(self.content - set(self.failed))) inspector.template.return_value = nested.defn._template def test_resize(self): self.group.resize(self.size) tmpl = self.group.update_with_template.call_args[0][0] resources = tmpl.resource_definitions(None) self.assertEqual(self.content, set(resources.keys())) class TestGetBatches(common.HeatTestCase): scenarios = [ 
('4_1_0', dict(curr_cap=4, bat_size=1, min_serv=0, batches=[(4, 1)] * 4)), ('4_1_4', dict(curr_cap=4, bat_size=1, min_serv=4, batches=([(5, 1)] * 4) + [(4, 0)])), ('4_1_5', dict(curr_cap=4, bat_size=1, min_serv=5, batches=([(5, 1)] * 4) + [(4, 0)])), ('4_2_0', dict(curr_cap=4, bat_size=2, min_serv=0, batches=[(4, 2)] * 2)), ('4_2_4', dict(curr_cap=4, bat_size=2, min_serv=4, batches=([(6, 2)] * 2) + [(4, 0)])), ('5_2_0', dict(curr_cap=5, bat_size=2, min_serv=0, batches=([(5, 2)] * 2) + [(5, 1)])), ('5_2_4', dict(curr_cap=5, bat_size=2, min_serv=4, batches=([(6, 2)] * 2) + [(5, 1)])), ('3_2_0', dict(curr_cap=3, bat_size=2, min_serv=0, batches=[(3, 2), (3, 1)])), ('3_2_4', dict(curr_cap=3, bat_size=2, min_serv=4, batches=[(5, 2), (4, 1), (3, 0)])), ('4_4_0', dict(curr_cap=4, bat_size=4, min_serv=0, batches=[(4, 4)])), ('4_5_0', dict(curr_cap=4, bat_size=5, min_serv=0, batches=[(4, 4)])), ('4_4_1', dict(curr_cap=4, bat_size=4, min_serv=1, batches=[(5, 4), (4, 0)])), ('4_6_1', dict(curr_cap=4, bat_size=6, min_serv=1, batches=[(5, 4), (4, 0)])), ('4_4_2', dict(curr_cap=4, bat_size=4, min_serv=2, batches=[(6, 4), (4, 0)])), ('4_4_4', dict(curr_cap=4, bat_size=4, min_serv=4, batches=[(8, 4), (4, 0)])), ('4_5_6', dict(curr_cap=4, bat_size=5, min_serv=6, batches=[(8, 4), (4, 0)])), ] def test_get_batches(self): batches = list(instgrp.InstanceGroup._get_batches(self.curr_cap, self.bat_size, self.min_serv)) self.assertEqual(self.batches, batches) heat-10.0.2/heat/tests/openstack/heat/__init__.py0000666000175000017500000000000013343562340021634 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/heat/test_cloud_config.py0000666000175000017500000000410013343562340023574 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
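# Illustrative sketch (not from the original suite, and placed here only
# because the surrounding dump is flattened): a toy reimplementation of the
# rolling-update batch sizing that reproduces the TestGetBatches scenario
# table above. Each yielded tuple is (group capacity during the batch,
# instances replaced in that batch); the production
# InstanceGroup._get_batches may be implemented differently.
def get_batches(capacity, batch_size, min_in_service):
    updated = 0
    total = capacity
    while updated < capacity:
        batch = min(capacity - updated, batch_size)
        # grow the group when min_in_service demands spare capacity,
        # but never beyond capacity + batch
        total = min(capacity + batch, max(capacity, min_in_service + batch))
        yield total, batch
        updated += batch
    if total > capacity:
        yield capacity, 0  # final settle step back to the target size


assert list(get_batches(4, 2, 4)) == [(6, 2), (6, 2), (4, 0)]
assert list(get_batches(3, 2, 0)) == [(3, 2), (3, 1)]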
import uuid import mock from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils class CloudConfigTest(common.HeatTestCase): def setUp(self): super(CloudConfigTest, self).setUp() self.ctx = utils.dummy_context() self.properties = { 'cloud_config': {'foo': 'bar'} } self.stack = stack.Stack( self.ctx, 'software_config_test_stack', template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'config_mysql': { 'Type': 'OS::Heat::CloudConfig', 'Properties': self.properties }}})) self.config = self.stack['config_mysql'] self.rpc_client = mock.MagicMock() self.config._rpc_client = self.rpc_client def test_handle_create(self): config_id = 'c8a19429-7fde-47ea-a42f-40045488226c' value = {'id': config_id} self.rpc_client.create_software_config.return_value = value self.config.id = 5 self.config.uuid = uuid.uuid4().hex self.config.handle_create() self.assertEqual(config_id, self.config.resource_id) kwargs = self.rpc_client.create_software_config.call_args[1] self.assertEqual({ 'name': self.config.physical_resource_name(), 'config': '#cloud-config\n{foo: bar}\n', 'group': 'Heat::Ungrouped' }, kwargs) heat-10.0.2/heat/tests/openstack/heat/test_swiftsignal.py0000666000175000017500000011660113343562351023507 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
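# Illustrative sketch (not from the original suite; it belongs with the
# CloudConfigTest above): the config string asserted in test_handle_create is
# consistent with PyYAML flow-style output plus the cloud-init header:
import yaml

rendered = '#cloud-config\n' + yaml.safe_dump({'foo': 'bar'},
                                              default_flow_style=True)
assert rendered == '#cloud-config\n{foo: bar}\n'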
import datetime import json import uuid import mock from oslo_utils import timeutils import six from swiftclient import client as swiftclient_client from swiftclient import exceptions as swiftclient_exceptions from testtools import matchers from heat.common import exception from heat.common import template_format from heat.engine.clients.os import swift from heat.engine import node_data from heat.engine import resource from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import stack from heat.engine import template as templatem from heat.tests import common from heat.tests import utils swiftsignal_template = ''' heat_template_version: 2013-05-23 resources: test_wait_condition: type: "OS::Heat::SwiftSignal" properties: handle: { get_resource: test_wait_condition_handle } timeout: 1 count: 2 test_wait_condition_handle: type: "OS::Heat::SwiftSignalHandle" ''' swiftsignalhandle_template = ''' heat_template_version: 2013-05-23 resources: test_wait_condition_handle: type: "OS::Heat::SwiftSignalHandle" ''' container_header = { 'content-length': '2', 'x-container-object-count': '0', 'accept-ranges': 'bytes', 'date': 'Fri, 25 Jul 2014 16:02:03 GMT', 'x-timestamp': '1405019787.66969', 'x-trans-id': 'tx6651b005324341f685e71-0053d27f7bdfw1', 'x-container-bytes-used': '0', 'content-type': 'application/json; charset=utf-8', 'x-versions-location': 'test' } obj_header = { 'content-length': '5', 'accept-ranges': 'bytes', 'last-modified': 'Fri, 25 Jul 2014 16:05:26 GMT', 'etag': '5a105e8b9d40e1329780d62ea2265d8a', 'x-timestamp': '1406304325.40094', 'x-trans-id': 'tx2f40ff2b4daa4015917fc-0053d28045dfw1', 'date': 'Fri, 25 Jul 2014 16:05:25 GMT', 'content-type': 'application/octet-stream' } def create_stack(template, stack_id=None, cache_data=None): tmpl = template_format.parse(template) template = templatem.Template(tmpl) ctx = utils.dummy_context(tenant_id='test_tenant') st = stack.Stack(ctx, 'test_st', template, disable_rollback=True, cache_data=cache_data) # Stub out the stack ID so we have a known value if stack_id is None: stack_id = str(uuid.uuid4()) with utils.UUIDStub(stack_id): st.store() st.id = stack_id return st def cont_index(obj_name, num_version_hist): objects = [{'bytes': 11, 'last_modified': '2014-07-03T19:42:03.281640', 'hash': '9214b4e4460fcdb9f3a369941400e71e', 'name': "02b" + obj_name + '/1404416326.51383', 'content_type': 'application/octet-stream'}] * num_version_hist objects.append({'bytes': 8, 'last_modified': '2014-07-03T19:42:03.849870', 'hash': '9ab7c0738852d7dd6a2dc0b261edc300', 'name': obj_name, 'content_type': 'application/x-www-form-urlencoded'}) return (container_header, objects) class SwiftSignalHandleTest(common.HeatTestCase): @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_create(self, mock_name, mock_swift): st = create_stack(swiftsignalhandle_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': "1234" } mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 2) mock_swift_object.get_object.return_value = (obj_header, '{"id": "1"}') st.create() handle = st.resources['test_wait_condition_handle'] obj_name = "%s-%s-abcdefghijkl" % (st.name, 
handle.name) regexp = ("http://fake-host.com:8080/v1/AUTH_test_tenant/%s/test_st-" "test_wait_condition_handle-abcdefghijkl" r"\?temp_url_sig=[0-9a-f]{40}&temp_url_expires=[0-9]{10}" % st.id) res_id = st.resources['test_wait_condition_handle'].resource_id self.assertEqual(res_id, handle.physical_resource_name()) self.assertThat(handle.FnGetRefId(), matchers.MatchesRegex(regexp)) # Since the account key is mocked out above self.assertFalse(mock_swift_object.post_account.called) header = {'x-versions-location': st.id} self.assertEqual({'headers': header}, mock_swift_object.put_container.call_args[1]) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_delete_empty_container(self, mock_name, mock_swift): st = create_stack(swiftsignalhandle_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': "1234" } mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name st.create() exc = swiftclient_exceptions.ClientException("Object DELETE failed", http_status=404) mock_swift_object.delete_object.side_effect = (None, None, None, exc) exc = swiftclient_exceptions.ClientException("Container DELETE failed", http_status=404) mock_swift_object.delete_container.side_effect = exc rsrc = st.resources['test_wait_condition_handle'] scheduler.TaskRunner(rsrc.delete)() self.assertEqual(('DELETE', 'COMPLETE'), rsrc.state) self.assertEqual(4, mock_swift_object.delete_object.call_count) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_delete_object_error(self, mock_name, mock_swift): st = create_stack(swiftsignalhandle_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': "1234" } mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name st.create() exc = swiftclient_exceptions.ClientException("Overlimit", http_status=413) mock_swift_object.delete_object.side_effect = (None, None, None, exc) rsrc = st.resources['test_wait_condition_handle'] exc = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual('ClientException: ' 'resources.test_wait_condition_handle: ' 'Overlimit: 413', six.text_type(exc)) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_delete_container_error(self, mock_name, mock_swift): st = create_stack(swiftsignalhandle_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': "1234" } mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name st.create() exc = swiftclient_exceptions.ClientException("Object DELETE failed", http_status=404) mock_swift_object.delete_object.side_effect = (None, None, None, exc) exc = swiftclient_exceptions.ClientException("Overlimit", http_status=413) 
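# Illustrative sketch (not from the original suite): the regexp asserted in
# SwiftSignalHandleTest.test_create above keys on the Swift TempURL query
# parameters; a standalone check against a made-up (hypothetical) signed URL:
import re

pattern = r'\?temp_url_sig=[0-9a-f]{40}&temp_url_expires=[0-9]{10}'
url = ('http://fake-host.com:8080/v1/AUTH_test_tenant/c/o'
       '?temp_url_sig=0123456789abcdef0123456789abcdef01234567'
       '&temp_url_expires=1404762741')
assert re.search(pattern, url) is not None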
mock_swift_object.delete_container.side_effect = (exc,) rsrc = st.resources['test_wait_condition_handle'] exc = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual('ClientException: ' 'resources.test_wait_condition_handle: ' 'Overlimit: 413', six.text_type(exc)) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_delete_non_empty_container(self, mock_name, mock_swift): st = create_stack(swiftsignalhandle_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': "1234" } mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name st.create() exc = swiftclient_exceptions.ClientException("Object DELETE failed", http_status=404) mock_swift_object.delete_object.side_effect = (None, None, None, exc) exc = swiftclient_exceptions.ClientException("Container DELETE failed", http_status=409) mock_swift_object.delete_container.side_effect = exc rsrc = st.resources['test_wait_condition_handle'] scheduler.TaskRunner(rsrc.delete)() self.assertEqual(('DELETE', 'COMPLETE'), rsrc.state) self.assertEqual(4, mock_swift_object.delete_object.call_count) @mock.patch.object(swift.SwiftClientPlugin, '_create') def test_handle_update(self, mock_swift): st = create_stack(swiftsignalhandle_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': "1234" } mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" st.create() rsrc = st.resources['test_wait_condition_handle'] old_url = rsrc.FnGetRefId() update_snippet = rsrc_defn.ResourceDefinition(handle.name, handle.type(), handle.properties.data) scheduler.TaskRunner(handle.update, update_snippet)() self.assertEqual(old_url, rsrc.FnGetRefId()) def test_swift_handle_refid_convergence_cache_data(self): cache_data = { 'test_wait_condition_handle': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'convg_xyz' }) } st = create_stack(swiftsignalhandle_template, cache_data=cache_data) rsrc = st.defn['test_wait_condition_handle'] self.assertEqual('convg_xyz', rsrc.FnGetRefId()) class SwiftSignalTest(common.HeatTestCase): @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_create(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 2) mock_swift_object.get_object.return_value = (obj_header, '') st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) @mock.patch.object(swift.SwiftClientPlugin, 'get_signal_url') def test_validate_handle_url_bad_tempurl(self, mock_handle_url): mock_handle_url.return_value = ( "http://fake-host.com:8080/v1/my-container/" 
"test_st-test_wait_condition_handle?temp_url_sig=" "12d8f9f2c923fbeb555041d4ed63d83de6768e95&" "temp_url_expires=1404762741") st = create_stack(swiftsignal_template) st.create() self.assertIn('not a valid SwiftSignalHandle. The Swift TempURL path', six.text_type(st.status_reason)) @mock.patch.object(swift.SwiftClientPlugin, 'get_signal_url') def test_validate_handle_url_bad_container_name(self, mock_handle_url): mock_handle_url.return_value = ( "http://fake-host.com:8080/v1/AUTH_test_tenant/my-container/" "test_st-test_wait_condition_handle?temp_url_sig=" "12d8f9f2c923fbeb555041d4ed63d83de6768e95&" "temp_url_expires=1404762741") st = create_stack(swiftsignal_template) st.create() self.assertIn('not a valid SwiftSignalHandle. The container name', six.text_type(st.status_reason)) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_multiple_signals_same_id_complete(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 2) mock_swift_object.get_object.side_effect = ( (obj_header, json.dumps({'id': 1})), (obj_header, json.dumps({'id': 1})), (obj_header, json.dumps({'id': 1})), (obj_header, json.dumps({'id': 1})), (obj_header, json.dumps({'id': 2})), (obj_header, json.dumps({'id': 3})), ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_multiple_signals_same_id_timeout(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 2) mock_swift_object.get_object.return_value = (obj_header, json.dumps({'id': 1})) time_now = timeutils.utcnow() time_series = [datetime.timedelta(0, t) + time_now for t in six.moves.xrange(1, 100)] timeutils.set_time_override(time_series) self.addCleanup(timeutils.clear_time_override) st.create() self.assertIn("SwiftSignalTimeout: resources.test_wait_condition: " "1 of 2 received - Signal 1 received", st.status_reason) wc = st['test_wait_condition'] self.assertEqual("SwiftSignalTimeout: resources.test_wait_condition: " "1 of 2 received - Signal 1 received", wc.status_reason) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_post_complete_to_handle(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = 
"%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 2) mock_swift_object.get_object.side_effect = ( (obj_header, json.dumps({'id': 1, 'status': "SUCCESS"})), (obj_header, json.dumps({'id': 1, 'status': "SUCCESS"})), (obj_header, json.dumps({'id': 2, 'status': "SUCCESS"})), ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_post_failed_to_handle(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) mock_swift_object.get_object.side_effect = ( # Create (obj_header, json.dumps({'id': 1, 'status': "FAILURE", 'reason': "foo"})), (obj_header, json.dumps({'id': 2, 'status': "FAILURE", 'reason': "bar"})), # SwiftSignalFailure (obj_header, json.dumps({'id': 1, 'status': "FAILURE", 'reason': "foo"})), (obj_header, json.dumps({'id': 2, 'status': "FAILURE", 'reason': "bar"})), ) st.create() self.assertEqual(('CREATE', 'FAILED'), st.state) wc = st['test_wait_condition'] self.assertEqual("SwiftSignalFailure: resources.test_wait_condition: " "foo;bar", wc.status_reason) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_data(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 2) mock_swift_object.get_object.side_effect = ( # st create (obj_header, json.dumps({'id': 1, 'data': "foo"})), (obj_header, json.dumps({'id': 2, 'data': "bar"})), (obj_header, json.dumps({'id': 3, 'data': "baz"})), # FnGetAtt call (obj_header, json.dumps({'id': 1, 'data': "foo"})), (obj_header, json.dumps({'id': 2, 'data': "bar"})), (obj_header, json.dumps({'id': 3, 'data': "baz"})), ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) wc = st['test_wait_condition'] self.assertEqual(json.dumps({1: 'foo', 2: 'bar', 3: 'baz'}), wc.FnGetAtt('data')) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_data_noid(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) mock_swift_object.get_object.side_effect = ( # st 
create (obj_header, json.dumps({'data': "foo", 'reason': "bar", 'status': "SUCCESS"})), (obj_header, json.dumps({'data': "dog", 'reason': "cat", 'status': "SUCCESS"})), # FnGetAtt call (obj_header, json.dumps({'data': "foo", 'reason': "bar", 'status': "SUCCESS"})), (obj_header, json.dumps({'data': "dog", 'reason': "cat", 'status': "SUCCESS"})), ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) wc = st['test_wait_condition'] self.assertEqual(json.dumps({1: 'foo', 2: 'dog'}), wc.FnGetAtt('data')) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_data_nodata(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) mock_swift_object.get_object.side_effect = ( # st create (obj_header, ''), (obj_header, ''), # FnGetAtt call (obj_header, ''), (obj_header, ''), ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) wc = st['test_wait_condition'] self.assertEqual(json.dumps({1: None, 2: None}), wc.FnGetAtt('data')) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_data_partial_complete(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] wc = st['test_wait_condition'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) mock_swift_object.get_object.return_value = ( obj_header, json.dumps({'status': 'SUCCESS'})) st.create() self.assertEqual(['SUCCESS', 'SUCCESS'], wc.get_status()) expected = [{'status': 'SUCCESS', 'reason': 'Signal 1 received', 'data': None, 'id': 1}, {'status': 'SUCCESS', 'reason': 'Signal 2 received', 'data': None, 'id': 2}] self.assertEqual(expected, wc.get_signals()) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_get_status_none_complete(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] wc = st['test_wait_condition'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) mock_swift_object.get_object.return_value = (obj_header, '') st.create() self.assertEqual(['SUCCESS', 'SUCCESS'], wc.get_status()) expected = [{'status': 'SUCCESS', 'reason': 'Signal 1 received', 'data': None, 'id': 1}, {'status': 'SUCCESS', 'reason': 'Signal 2 received', 'data': None, 'id': 2}] self.assertEqual(expected, 
wc.get_signals()) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_get_status_partial_complete(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] wc = st['test_wait_condition'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) mock_swift_object.get_object.return_value = ( obj_header, json.dumps({'id': 1, 'status': "SUCCESS"})) st.create() self.assertEqual(['SUCCESS'], wc.get_status()) expected = [{'status': 'SUCCESS', 'reason': 'Signal 1 received', 'data': None, 'id': 1}] self.assertEqual(expected, wc.get_signals()) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_get_status_failure(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] wc = st['test_wait_condition'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) mock_swift_object.get_object.return_value = ( obj_header, json.dumps({'id': 1, 'status': "FAILURE"})) st.create() self.assertEqual(('CREATE', 'FAILED'), st.state) self.assertEqual(['FAILURE'], wc.get_status()) expected = [{'status': 'FAILURE', 'reason': 'Signal 1 received', 'data': None, 'id': 1}] self.assertEqual(expected, wc.get_signals()) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_getatt_token(self, mock_name, mock_swift): st = create_stack(swiftsignalhandle_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) mock_swift_object.get_object.side_effect = ( # st create (obj_header, ''), (obj_header, ''), ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) self.assertEqual('', handle.FnGetAtt('token')) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_getatt_endpoint(self, mock_name, mock_swift): st = create_stack(swiftsignalhandle_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) 
mock_swift_object.get_object.side_effect = ( # st create (obj_header, ''), (obj_header, ''), ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) expected = ('http://fake-host.com:8080/v1/AUTH_test_tenant/%s/' r'test_st-test_wait_condition_handle-abcdefghijkl\?temp_' 'url_sig=[0-9a-f]{40}&temp_url_expires=[0-9]{10}') % st.id self.assertThat(handle.FnGetAtt('endpoint'), matchers.MatchesRegex(expected)) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_getatt_curl_cli(self, mock_name, mock_swift): st = create_stack(swiftsignalhandle_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) mock_swift_object.get_object.side_effect = ( # st create (obj_header, ''), (obj_header, ''), ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) expected = ("curl -i -X PUT 'http://fake-host.com:8080/v1/" "AUTH_test_tenant/%s/test_st-test_wait_condition_" r"handle-abcdefghijkl\?temp_url_sig=[0-9a-f]{40}&" "temp_url_expires=[0-9]{10}'") % st.id self.assertThat(handle.FnGetAtt('curl_cli'), matchers.MatchesRegex(expected)) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_invalid_json_data(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) mock_swift_object.get_object.side_effect = ( # st create (obj_header, '{"status": "SUCCESS"'), (obj_header, '{"status": "FAI'), ) st.create() self.assertEqual(('CREATE', 'FAILED'), st.state) wc = st['test_wait_condition'] self.assertEqual('Error: resources.test_wait_condition: ' 'Failed to parse JSON data: {"status": ' '"SUCCESS"', wc.status_reason) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_unknown_status(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 1) mock_swift_object.get_object.return_value = ( obj_header, '{"status": "BOO"}') st.create() self.assertEqual(('CREATE', 'FAILED'), st.state) wc = st['test_wait_condition'] self.assertEqual('Error: resources.test_wait_condition: ' 'Unknown status: BOO', wc.status_reason) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def 
test_swift_objects_deleted(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.side_effect = ( cont_index(obj_name, 2), # Objects are there during create (container_header, []), # The user deleted the objects ) mock_swift_object.get_object.side_effect = ( (obj_header, json.dumps({'id': 1})), # Objects there during create (obj_header, json.dumps({'id': 2})), (obj_header, json.dumps({'id': 3})), ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) wc = st['test_wait_condition'] self.assertEqual("null", wc.FnGetAtt('data')) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_swift_objects_invisible(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.side_effect = ( (container_header, []), # Just-created objects aren't visible yet (container_header, []), (container_header, []), (container_header, []), cont_index(obj_name, 1), ) mock_swift_object.get_object.side_effect = ( (obj_header, json.dumps({'id': 1})), (obj_header, json.dumps({'id': 2})), ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_swift_container_deleted(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = "%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.side_effect = [ cont_index(obj_name, 2), # Objects are there during create swiftclient_client.ClientException("Container GET failed", http_status=404) # User deleted ] mock_swift_object.get_object.side_effect = ( (obj_header, json.dumps({'id': 1})), # Objects there during create (obj_header, json.dumps({'id': 2})), (obj_header, json.dumps({'id': 3})), ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) wc = st['test_wait_condition'] self.assertEqual("null", wc.FnGetAtt('data')) @mock.patch.object(swift.SwiftClientPlugin, '_create') @mock.patch.object(resource.Resource, 'physical_resource_name') def test_swift_get_object_404(self, mock_name, mock_swift): st = create_stack(swiftsignal_template) handle = st['test_wait_condition_handle'] mock_swift_object = mock.Mock() mock_swift.return_value = mock_swift_object mock_swift_object.url = "http://fake-host.com:8080/v1/AUTH_1234" mock_swift_object.head_account.return_value = { 'x-account-meta-temp-url-key': '123456' } obj_name = 
"%s-%s-abcdefghijkl" % (st.name, handle.name) mock_name.return_value = obj_name mock_swift_object.get_container.return_value = cont_index(obj_name, 2) mock_swift_object.get_object.side_effect = ( (obj_header, ''), swiftclient_client.ClientException( "Object %s not found" % obj_name, http_status=404) ) st.create() self.assertEqual(('CREATE', 'COMPLETE'), st.state) heat-10.0.2/heat/tests/openstack/heat/test_multi_part.py0000666000175000017500000002011213343562340023322 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import email import uuid import mock from heat.common import exception as exc from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests import utils class MultipartMimeTest(common.HeatTestCase): def setUp(self): super(MultipartMimeTest, self).setUp() self.ctx = utils.dummy_context() self.init_config() def init_config(self, parts=None): parts = parts or [] stack = parser.Stack( self.ctx, 'software_config_test_stack', template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'config_mysql': { 'Type': 'OS::Heat::MultipartMime', 'Properties': { 'parts': parts }}}})) self.config = stack['config_mysql'] self.rpc_client = mock.MagicMock() self.config._rpc_client = self.rpc_client def test_handle_create(self): config_id = 'c8a19429-7fde-47ea-a42f-40045488226c' sc = {'id': config_id} self.rpc_client.create_software_config.return_value = sc self.config.id = 55 self.config.uuid = uuid.uuid4().hex self.config.handle_create() self.assertEqual(config_id, self.config.resource_id) kwargs = self.rpc_client.create_software_config.call_args[1] self.assertEqual({ 'name': self.config.physical_resource_name(), 'config': self.config.message, 'group': 'Heat::Ungrouped' }, kwargs) def test_get_message_not_none(self): self.config.message = 'Not none' result = self.config.get_message() self.assertEqual('Not none', result) def test_get_message_empty_list(self): parts = [] self.init_config(parts=parts) result = self.config.get_message() message = email.message_from_string(result) self.assertTrue(message.is_multipart()) def test_get_message_text(self): parts = [{ 'config': '1e0e5a60-2843-4cfd-9137-d90bdf18eef5', 'type': 'text' }] self.init_config(parts=parts) self.rpc_client.show_software_config.return_value = { 'config': '#!/bin/bash' } result = self.config.get_message() self.assertEqual( '1e0e5a60-2843-4cfd-9137-d90bdf18eef5', self.rpc_client.show_software_config.call_args[0][1]) message = email.message_from_string(result) self.assertTrue(message.is_multipart()) subs = message.get_payload() self.assertEqual(1, len(subs)) self.assertEqual('#!/bin/bash', subs[0].get_payload()) def test_get_message_fail_back(self): parts = [{ 'config': '2e0e5a60-2843-4cfd-9137-d90bdf18eef5', 'type': 'text' }] self.init_config(parts=parts) @contextlib.contextmanager def exc_filter(): try: yield except exc.NotFound: pass self.rpc_client.ignore_error_by_name.return_value = exc_filter() 
    def test_get_message_fail_back(self):
        parts = [{
            'config': '2e0e5a60-2843-4cfd-9137-d90bdf18eef5',
            'type': 'text'
        }]
        self.init_config(parts=parts)

        @contextlib.contextmanager
        def exc_filter():
            try:
                yield
            except exc.NotFound:
                pass

        self.rpc_client.ignore_error_by_name.return_value = exc_filter()
        self.rpc_client.show_software_config.side_effect = exc.NotFound()
        result = self.config.get_message()
        self.assertEqual(
            '2e0e5a60-2843-4cfd-9137-d90bdf18eef5',
            self.rpc_client.show_software_config.call_args[0][1])
        message = email.message_from_string(result)
        self.assertTrue(message.is_multipart())
        subs = message.get_payload()
        self.assertEqual(1, len(subs))
        self.assertEqual('2e0e5a60-2843-4cfd-9137-d90bdf18eef5',
                         subs[0].get_payload())

    def test_get_message_non_uuid(self):
        parts = [{
            'config': 'http://192.168.122.36:8000/v1/waitcondition/'
        }]
        self.init_config(parts=parts)
        result = self.config.get_message()
        message = email.message_from_string(result)
        self.assertTrue(message.is_multipart())
        subs = message.get_payload()
        self.assertEqual(1, len(subs))
        self.assertEqual('http://192.168.122.36:8000/v1/waitcondition/',
                         subs[0].get_payload())

    def test_get_message_text_with_filename(self):
        parts = [{
            'config': '1e0e5a60-2843-4cfd-9137-d90bdf18eef5',
            'type': 'text',
            'filename': '/opt/stack/configure.d/55-heat-config'
        }]
        self.init_config(parts=parts)
        self.rpc_client.show_software_config.return_value = {
            'config': '#!/bin/bash'
        }
        result = self.config.get_message()
        self.assertEqual(
            '1e0e5a60-2843-4cfd-9137-d90bdf18eef5',
            self.rpc_client.show_software_config.call_args[0][1])
        message = email.message_from_string(result)
        self.assertTrue(message.is_multipart())
        subs = message.get_payload()
        self.assertEqual(1, len(subs))
        self.assertEqual('#!/bin/bash', subs[0].get_payload())
        self.assertEqual(parts[0]['filename'], subs[0].get_filename())

    def test_get_message_multi_part(self):
        multipart = ('Content-Type: multipart/mixed; '
                     'boundary="===============2579792489038011818=="\n'
                     'MIME-Version: 1.0\n'
                     '\n--===============2579792489038011818=='
                     '\nContent-Type: text; '
                     'charset="us-ascii"\n'
                     'MIME-Version: 1.0\n'
                     'Content-Transfer-Encoding: 7bit\n'
                     'Content-Disposition: attachment;\n'
                     ' filename="/opt/stack/configure.d/55-heat-config"\n'
                     '#!/bin/bash\n'
                     '--===============2579792489038011818==--\n')
        parts = [{
            'config': '1e0e5a60-2843-4cfd-9137-d90bdf18eef5',
            'type': 'multipart'
        }]
        self.init_config(parts=parts)
        self.rpc_client.show_software_config.return_value = {
            'config': multipart
        }
        result = self.config.get_message()
        self.assertEqual(
            '1e0e5a60-2843-4cfd-9137-d90bdf18eef5',
            self.rpc_client.show_software_config.call_args[0][1])
        message = email.message_from_string(result)
        self.assertTrue(message.is_multipart())
        subs = message.get_payload()
        self.assertEqual(1, len(subs))
        self.assertEqual('#!/bin/bash', subs[0].get_payload())
        self.assertEqual('/opt/stack/configure.d/55-heat-config',
                         subs[0].get_filename())

    def test_get_message_multi_part_bad_format(self):
        parts = [
            {'config': '1e0e5a60-2843-4cfd-9137-d90bdf18eef5',
             'type': 'multipart'},
            {'config': '9cab10ef-16ce-4be9-8b25-a67b7313eddb',
             'type': 'text'}]
        self.init_config(parts=parts)
        self.rpc_client.show_software_config.return_value = {
            'config': '#!/bin/bash'
        }
        result = self.config.get_message()
        self.assertEqual(
            '1e0e5a60-2843-4cfd-9137-d90bdf18eef5',
            self.rpc_client.show_software_config.call_args_list[0][0][1])
        self.assertEqual(
            '9cab10ef-16ce-4be9-8b25-a67b7313eddb',
            self.rpc_client.show_software_config.call_args_list[1][0][1])
        message = email.message_from_string(result)
        self.assertTrue(message.is_multipart())
        subs = message.get_payload()
        self.assertEqual(1, len(subs))
        self.assertEqual('#!/bin/bash', subs[0].get_payload())
heat-10.0.2/heat/tests/openstack/heat/test_resource_chain.py0000666000175000017500000003370613343562340024150 0ustar zuulzuul00000000000000#
#
Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock import six from heat.common import exception from heat.engine import node_data from heat.engine.resources.openstack.heat import resource_chain from heat.engine import rsrc_defn from heat.tests import common from heat.tests import utils RESOURCE_PROPERTIES = { 'group': 'test-group', } TEMPLATE = { 'heat_template_version': '2016-04-08', 'resources': { 'test-chain': { 'type': 'OS::Heat::ResourceChain', 'properties': { 'resources': ['OS::Heat::SoftwareConfig', 'OS::Heat::StructuredConfig'], 'concurrent': False, 'resource_properties': RESOURCE_PROPERTIES, } } } } class ResourceChainTest(common.HeatTestCase): def setUp(self): super(ResourceChainTest, self).setUp() self.stack = None # hold on to stack to prevent weakref cleanup def test_child_template_without_concurrency(self): # Test chain = self._create_chain(TEMPLATE) child_template = chain.child_template() # Verify tmpl = child_template.t self.assertEqual('2015-04-30', tmpl['heat_template_version']) self.assertEqual(2, len(child_template.t['resources'])) resource = tmpl['resources']['0'] self.assertEqual('OS::Heat::SoftwareConfig', resource['type']) self.assertEqual(RESOURCE_PROPERTIES, resource['properties']) self.assertNotIn('depends_on', resource) resource = tmpl['resources']['1'] self.assertEqual('OS::Heat::StructuredConfig', resource['type']) self.assertEqual(RESOURCE_PROPERTIES, resource['properties']) self.assertEqual(['0'], resource['depends_on']) def test_child_template_with_concurrent(self): # Setup tmpl_def = copy.deepcopy(TEMPLATE) tmpl_def['resources']['test-chain']['properties']['concurrent'] = True chain = self._create_chain(tmpl_def) # Test child_template = chain.child_template() # Verify # Trimmed down version of above that just checks the depends_on # isn't present tmpl = child_template.t resource = tmpl['resources']['0'] self.assertNotIn('depends_on', resource) resource = tmpl['resources']['1'] self.assertNotIn('depends_on', resource) def test_child_template_default_concurrent(self): # Setup tmpl_def = copy.deepcopy(TEMPLATE) tmpl_def['resources']['test-chain']['properties'].pop('concurrent') chain = self._create_chain(tmpl_def) # Test child_template = chain.child_template() # Verify # Trimmed down version of above that just checks the depends_on # isn't present tmpl = child_template.t resource = tmpl['resources']['0'] self.assertNotIn('depends_on', resource) resource = tmpl['resources']['1'] self.assertEqual(['0'], resource['depends_on']) def test_child_template_empty_resource_list(self): # Setup tmpl_def = copy.deepcopy(TEMPLATE) tmpl_def['resources']['test-chain']['properties']['resources'] = [] chain = self._create_chain(tmpl_def) # Test child_template = chain.child_template() # Verify tmpl = child_template.t # No error, but no resources to create self.assertNotIn('resources', tmpl) # Sanity check that it's actually a template self.assertIn('heat_template_version', tmpl) def test_validate_nested_stack(self): # Test - should not raise exception chain = 
self._create_chain(TEMPLATE) chain.validate_nested_stack() def test_validate_reference_attr_with_none_ref(self): chain = self._create_chain(TEMPLATE) self.patchobject(chain, 'referenced_attrs', return_value=set([('config', None)])) self.assertIsNone(chain.validate()) def test_validate_incompatible_properties(self): # Tests a resource in the chain that does not support the properties # specified to each resource. # Setup tmpl_def = copy.deepcopy(TEMPLATE) tmpl_res_prop = tmpl_def['resources']['test-chain']['properties'] res_list = tmpl_res_prop['resources'] res_list.append('OS::Heat::RandomString') # Test chain = self._create_chain(tmpl_def) try: chain.validate_nested_stack() self.fail('Exception expected') except exception.StackValidationFailed as e: self.assertEqual('property error: ' 'resources.test.resources[2].' 'properties: unknown property group', e.message.lower()) def test_validate_fake_resource_type(self): # Setup tmpl_def = copy.deepcopy(TEMPLATE) tmpl_res_prop = tmpl_def['resources']['test-chain']['properties'] res_list = tmpl_res_prop['resources'] res_list.append('foo') # Test chain = self._create_chain(tmpl_def) try: chain.validate_nested_stack() self.fail('Exception expected') except exception.StackValidationFailed as e: self.assertIn('could not be found', e.message.lower()) self.assertIn('foo', e.message) @mock.patch.object(resource_chain.ResourceChain, 'create_with_template') def test_handle_create(self, mock_create): # Tests the handle create is propagated upwards with the # child template. # Setup chain = self._create_chain(TEMPLATE) # Test chain.handle_create() # Verify expected_tmpl = chain.child_template() mock_create.assert_called_once_with(expected_tmpl) @mock.patch.object(resource_chain.ResourceChain, 'update_with_template') def test_handle_update(self, mock_update): # Test the handle update is propagated upwards with the child # template. # Setup chain = self._create_chain(TEMPLATE) # Test json_snippet = rsrc_defn.ResourceDefinition( 'test-chain', 'OS::Heat::ResourceChain', TEMPLATE['resources']['test-chain']['properties']) chain.handle_update(json_snippet, None, None) # Verify expected_tmpl = chain.child_template() mock_update.assert_called_once_with(expected_tmpl) def test_child_params(self): chain = self._create_chain(TEMPLATE) self.assertEqual({}, chain.child_params()) def _create_chain(self, t): self.stack = utils.parse_stack(t) snip = self.stack.t.resource_definitions(self.stack)['test-chain'] chain = resource_chain.ResourceChain('test', snip, self.stack) return chain def test_get_attribute_convg(self): cache_data = {'test-chain': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'attrs': {'refs': ['rsrc1', 'rsrc2']} })} stack = utils.parse_stack(TEMPLATE, cache_data=cache_data) rsrc = stack.defn['test-chain'] self.assertEqual(['rsrc1', 'rsrc2'], rsrc.FnGetAtt('refs')) class ResourceChainAttrTest(common.HeatTestCase): def test_aggregate_attribs(self): """Test attribute aggregation. Test attribute aggregation and that we mimic the nested resource's attributes. """ chain = self._create_dummy_stack() expected = ['0', '1'] self.assertEqual(expected, chain.FnGetAtt('foo')) self.assertEqual(expected, chain.FnGetAtt('Foo')) def test_index_dotted_attribs(self): """Test attribute aggregation. Test attribute aggregation and that we mimic the nested resource's attributes. 
""" chain = self._create_dummy_stack() self.assertEqual('0', chain.FnGetAtt('resource.0.Foo')) self.assertEqual('1', chain.FnGetAtt('resource.1.Foo')) def test_index_path_attribs(self): """Test attribute aggregation. Test attribute aggregation and that we mimic the nested resource's attributes. """ chain = self._create_dummy_stack() self.assertEqual('0', chain.FnGetAtt('resource.0', 'Foo')) self.assertEqual('1', chain.FnGetAtt('resource.1', 'Foo')) def test_index_deep_path_attribs(self): """Test attribute aggregation. Test attribute aggregation and that we mimic the nested resource's attributes. """ chain = self._create_dummy_stack(expect_attrs={'0': 2, '1': 3}) self.assertEqual(2, chain.FnGetAtt('resource.0', 'nested_dict', 'dict', 'b')) self.assertEqual(3, chain.FnGetAtt('resource.1', 'nested_dict', 'dict', 'b')) def test_aggregate_deep_path_attribs(self): """Test attribute aggregation. Test attribute aggregation and that we mimic the nested resource's attributes. """ chain = self._create_dummy_stack(expect_attrs={'0': 3, '1': 3}) expected = [3, 3] self.assertEqual(expected, chain.FnGetAtt('nested_dict', 'list', 2)) def test_aggregate_refs(self): """Test resource id aggregation.""" chain = self._create_dummy_stack() expected = ['ID-0', 'ID-1'] self.assertEqual(expected, chain.FnGetAtt("refs")) def test_aggregate_refs_with_index(self): """Test resource id aggregation with index.""" chain = self._create_dummy_stack() expected = ['ID-0', 'ID-1'] self.assertEqual(expected[0], chain.FnGetAtt("refs", 0)) self.assertEqual(expected[1], chain.FnGetAtt("refs", 1)) self.assertIsNone(chain.FnGetAtt("refs", 2)) def test_aggregate_outputs(self): """Test outputs aggregation.""" expected = {'0': ['foo', 'bar'], '1': ['foo', 'bar']} chain = self._create_dummy_stack(expect_attrs=expected) self.assertEqual(expected, chain.FnGetAtt('attributes', 'list')) def test_aggregate_outputs_no_path(self): """Test outputs aggregation with missing path.""" chain = self._create_dummy_stack() self.assertRaises(exception.InvalidTemplateAttribute, chain.FnGetAtt, 'attributes') def test_index_refs(self): """Tests getting ids of individual resources.""" chain = self._create_dummy_stack() self.assertEqual("ID-0", chain.FnGetAtt('resource.0')) self.assertEqual("ID-1", chain.FnGetAtt('resource.1')) ex = self.assertRaises(exception.NotFound, chain.FnGetAtt, 'resource.2') self.assertIn("Member '2' not found in group resource 'test'", six.text_type(ex)) def _create_dummy_stack(self, expect_count=2, expect_attrs=None): self.stack = utils.parse_stack(TEMPLATE) snip = self.stack.t.resource_definitions(self.stack)['test-chain'] chain = resource_chain.ResourceChain('test', snip, self.stack) attrs = {} refids = {} if expect_attrs is None: expect_attrs = {} for index in range(expect_count): res = str(index) attrs[index] = expect_attrs.get(res, res) refids[index] = 'ID-%s' % res names = [str(name) for name in range(expect_count)] chain._resource_names = mock.Mock(return_value=names) self._stub_get_attr(chain, refids, attrs) return chain def _stub_get_attr(self, chain, refids, attrs): def ref_id_fn(res_name): return refids[int(res_name)] def attr_fn(args): res_name = args[0] return attrs[int(res_name)] def get_output(output_name): outputs = chain._nested_output_defns(chain._resource_names(), attr_fn, ref_id_fn) op_defns = {od.name: od for od in outputs} if output_name not in op_defns: raise exception.NotFound('Specified output key %s not found.' 
% output_name) return op_defns[output_name].get_value() orig_get_attr = chain.FnGetAtt def get_attr(attr_name, *path): if not path: attr = attr_name else: attr = (attr_name,) + path # Mock referenced_attrs() so that _nested_output_definitions() # will include the output required for this attribute chain.referenced_attrs = mock.Mock(return_value=[attr]) # Pass through to actual function under test return orig_get_attr(attr_name, *path) chain.FnGetAtt = mock.Mock(side_effect=get_attr) chain.get_output = mock.Mock(side_effect=get_output) class ResourceChainAttrFallbackTest(ResourceChainAttrTest): def _stub_get_attr(self, chain, refids, attrs): # Raise NotFound when getting output, to force fallback to old-school # grouputils functions chain.get_output = mock.Mock(side_effect=exception.NotFound) def make_fake_res(idx): fr = mock.Mock() fr.stack = chain.stack fr.FnGetRefId.return_value = refids[idx] fr.FnGetAtt.return_value = attrs[idx] return fr fake_res = {str(i): make_fake_res(i) for i in refids} chain.nested = mock.Mock(return_value=fake_res) heat-10.0.2/heat/tests/openstack/heat/test_none_resource.py0000666000175000017500000001117413343562340024020 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from heat.common import template_format from heat.engine import resource from heat.engine.resources.openstack.heat import none_resource as none from heat.engine import scheduler from heat.tests import common from heat.tests import utils class NoneResourceTest(common.HeatTestCase): tmpl = ''' heat_template_version: 2015-10-15 resources: none: type: OS::Heat::None properties: ignored: foo outputs: anything: value: {get_attr: [none, anything]} ''' def _create_none_stack(self): self.t = template_format.parse(self.tmpl) self.stack = utils.parse_stack(self.t) self.rsrc = self.stack['none'] self.assertIsNone(self.rsrc.validate()) self.stack.create() self.assertEqual(self.rsrc.CREATE, self.rsrc.action) self.assertEqual(self.rsrc.COMPLETE, self.rsrc.status) self.assertEqual(self.stack.CREATE, self.stack.action) self.assertEqual(self.stack.COMPLETE, self.stack.status) self.stack._update_all_resource_data(False, True) self.assertIsNone(self.stack.outputs['anything'].get_value()) def test_none_stack_create(self): self._create_none_stack() def test_none_stack_update_nochange(self): self._create_none_stack() before_refid = self.rsrc.FnGetRefId() self.assertIsNotNone(before_refid) utils.update_stack(self.stack, self.t) self.assertEqual((self.stack.UPDATE, self.stack.COMPLETE), self.stack.state) self.assertEqual(before_refid, self.stack['none'].FnGetRefId()) def test_none_stack_update_add_prop(self): self._create_none_stack() before_refid = self.rsrc.FnGetRefId() self.assertIsNotNone(before_refid) new_t = self.t.copy() new_t['resources']['none']['properties']['another'] = 123 utils.update_stack(self.stack, new_t) self.assertEqual((self.stack.UPDATE, self.stack.COMPLETE), self.stack.state) self.assertEqual(before_refid, self.stack['none'].FnGetRefId()) def 
test_none_stack_update_del_prop(self):
        self._create_none_stack()
        before_refid = self.rsrc.FnGetRefId()
        self.assertIsNotNone(before_refid)
        new_t = self.t.copy()
        del(new_t['resources']['none']['properties']['ignored'])
        utils.update_stack(self.stack, new_t)
        self.assertEqual((self.stack.UPDATE, self.stack.COMPLETE),
                         self.stack.state)
        self.assertEqual(before_refid, self.stack['none'].FnGetRefId())


class PlaceholderResourceTest(common.HeatTestCase):

    tmpl = '''
heat_template_version: 2015-10-15
resources:
  none:
    type: OS::BAR::FOO
    properties:
      ignored: foo
'''

    class FooResource(none.NoneResource):
        default_client_name = 'heat'
        entity = 'foo'

    FOO_RESOURCE_TYPE = 'OS::BAR::FOO'

    def setUp(self):
        super(PlaceholderResourceTest, self).setUp()
        resource._register_class(self.FOO_RESOURCE_TYPE, self.FooResource)
        self.t = template_format.parse(self.tmpl)
        self.stack = utils.parse_stack(self.t)
        self.rsrc = self.stack['none']
        self.client = mock.MagicMock()
        self.patchobject(self.FooResource, 'client',
                         return_value=self.client)
        scheduler.TaskRunner(self.rsrc.create)()

    def _test_delete(self, is_placeholder=True):
        if not is_placeholder:
            delete_call_count = 1
            self.rsrc.data = mock.Mock(
                return_value={})
        else:
            delete_call_count = 0
            self.rsrc.data = mock.Mock(
                return_value={'is_placeholder': 'True'})
        scheduler.TaskRunner(self.rsrc.delete)()
        self.assertEqual((self.rsrc.DELETE, self.rsrc.COMPLETE),
                         self.rsrc.state)
        self.assertEqual(delete_call_count,
                         self.client.foo.delete.call_count)
        self.assertEqual('foo', self.rsrc.entity)

    def test_not_placeholder_resource_delete(self):
        self._test_delete(is_placeholder=False)

    def test_placeholder_resource_delete(self):
        self._test_delete()
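# Note on the two delete paths exercised above: a resource created as a
# placeholder stores {'is_placeholder': 'True'} in its resource data, and
# its delete is expected to skip the client call entirely; only the
# non-placeholder case should hit the (mocked) client's delete exactly once.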
heat-10.0.2/heat/tests/openstack/heat/test_waitcondition.py0000666000175000017500000004761713343562351024036 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import uuid

import mock
import mox
from oslo_serialization import jsonutils as json
from oslo_utils import timeutils
import six

from heat.common import identifier
from heat.common import template_format
from heat.engine.clients.os import heat_plugin
from heat.engine.clients.os import swift as swift_plugin
from heat.engine import environment
from heat.engine import resource
from heat.engine.resources.openstack.heat import wait_condition_handle as h_wch
from heat.engine import stack as parser
from heat.engine import template as tmpl
from heat.objects import resource as resource_objects
from heat.tests import common
from heat.tests import utils

test_template_heat_waitcondition = '''
heat_template_version: 2013-05-23
resources:
    wait_condition:
        type: OS::Heat::WaitCondition
        properties:
            handle: {get_resource: wait_handle}
            timeout: 5
    wait_handle:
        type: OS::Heat::WaitConditionHandle
'''

test_template_heat_waitcondition_count = '''
heat_template_version: 2013-05-23
resources:
    wait_condition:
        type: OS::Heat::WaitCondition
        properties:
            handle: {get_resource: wait_handle}
            count: 3
            timeout: 5
    wait_handle:
        type: OS::Heat::WaitConditionHandle
'''

test_template_heat_waithandle_token = '''
heat_template_version: 2013-05-23
resources:
    wait_handle:
        type: OS::Heat::WaitConditionHandle
'''

test_template_heat_waithandle_heat = '''
heat_template_version: 2013-05-23
resources:
    wait_handle:
        type: OS::Heat::WaitConditionHandle
        properties:
            signal_transport: HEAT_SIGNAL
'''

test_template_heat_waithandle_swift = '''
heat_template_version: 2013-05-23
resources:
    wait_handle:
        type: OS::Heat::WaitConditionHandle
        properties:
            signal_transport: TEMP_URL_SIGNAL
'''

test_template_heat_waithandle_zaqar = '''
heat_template_version: 2013-05-23
resources:
    wait_handle:
        type: OS::Heat::WaitConditionHandle
        properties:
            signal_transport: ZAQAR_SIGNAL
'''

test_template_heat_waithandle_none = '''
heat_template_version: 2013-05-23
resources:
    wait_handle:
        type: OS::Heat::WaitConditionHandle
        properties:
            signal_transport: NO_SIGNAL
'''

test_template_update_waithandle = '''
heat_template_version: 2013-05-23
resources:
    update_wait_handle:
        type: OS::Heat::UpdateWaitConditionHandle
'''

test_template_waithandle_bad_type = '''
heat_template_version: 2013-05-23
resources:
    wait_condition:
        type: OS::Heat::WaitCondition
        properties:
            handle: {get_resource: wait_handle}
            timeout: 5
    wait_handle:
        type: OS::Heat::RandomString
'''

test_template_waithandle_bad_reference = '''
heat_template_version: pike
resources:
    wait_condition:
        type: OS::Heat::WaitCondition
        properties:
            handle: wait_handel
            timeout: 5
    wait_handle:
        type: OS::Heat::WaitConditionHandle
        properties:
            signal_transport: NO_SIGNAL
'''


class HeatWaitConditionTest(common.HeatTestCase):

    def setUp(self):
        super(HeatWaitConditionTest, self).setUp()
        self.tenant_id = 'test_tenant'

    def create_stack(self, stack_id=None,
                     template=test_template_heat_waitcondition_count,
                     params={}, stub=True, stub_status=True):
        temp = template_format.parse(template)
        template = tmpl.Template(temp,
                                 env=environment.Environment(params))
        ctx = utils.dummy_context(tenant_id=self.tenant_id)
        stack = parser.Stack(ctx, 'test_stack', template,
                             disable_rollback=True)

        # Stub out the stack ID so we have a known value
        if stack_id is None:
            stack_id = str(uuid.uuid4())

        self.stack_id = stack_id
        with utils.UUIDStub(self.stack_id):
            stack.store()

        if stub:
            id = identifier.ResourceIdentifier('test_tenant', stack.name,
                                               stack.id, '', 'wait_handle')
            self.m.StubOutWithMock(h_wch.HeatWaitConditionHandle,
                                   'identifier')
h_wch.HeatWaitConditionHandle.identifier( ).MultipleTimes().AndReturn(id) if stub_status: self.m.StubOutWithMock(h_wch.HeatWaitConditionHandle, 'get_status') return stack def test_post_complete_to_handle(self): self.stack = self.create_stack() h_wch.HeatWaitConditionHandle.get_status().AndReturn(['SUCCESS']) h_wch.HeatWaitConditionHandle.get_status().AndReturn(['SUCCESS', 'SUCCESS']) h_wch.HeatWaitConditionHandle.get_status().AndReturn(['SUCCESS', 'SUCCESS', 'SUCCESS']) self.m.ReplayAll() self.stack.create() rsrc = self.stack['wait_condition'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) r = resource_objects.Resource.get_by_name_and_stack( self.stack.context, 'wait_handle', self.stack.id) self.assertEqual('wait_handle', r.name) self.m.VerifyAll() def test_post_failed_to_handle(self): self.stack = self.create_stack() h_wch.HeatWaitConditionHandle.get_status().AndReturn(['SUCCESS']) h_wch.HeatWaitConditionHandle.get_status().AndReturn(['SUCCESS', 'SUCCESS']) h_wch.HeatWaitConditionHandle.get_status().AndReturn(['SUCCESS', 'SUCCESS', 'FAILURE']) self.m.ReplayAll() self.stack.create() rsrc = self.stack['wait_condition'] self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) reason = rsrc.status_reason self.assertTrue(reason.startswith('WaitConditionFailure:')) r = resource_objects.Resource.get_by_name_and_stack( self.stack.context, 'wait_handle', self.stack.id) self.assertEqual('wait_handle', r.name) self.m.VerifyAll() def _test_wait_handle_invalid(self, tmpl, handle_name): self.stack = self.create_stack(template=tmpl) self.m.ReplayAll() self.stack.create() rsrc = self.stack['wait_condition'] self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) reason = rsrc.status_reason error_msg = ('ValueError: resources.wait_condition: ' '%s is not a valid wait condition handle.') % handle_name self.assertEqual(reason, error_msg) def test_wait_handle_bad_type(self): self._test_wait_handle_invalid(test_template_waithandle_bad_type, 'wait_handle') def test_wait_handle_bad_reference(self): self._test_wait_handle_invalid( test_template_waithandle_bad_reference, 'wait_handel') def test_timeout(self): self.stack = self.create_stack() # Avoid the stack create exercising the timeout code at the same time self.m.StubOutWithMock(self.stack, 'timeout_secs') self.stack.timeout_secs().MultipleTimes().AndReturn(None) now = timeutils.utcnow() periods = [0, 0.001, 0.1, 4.1, 5.1] periods.extend(range(10, 100, 5)) fake_clock = [now + datetime.timedelta(0, t) for t in periods] timeutils.set_time_override(fake_clock) self.addCleanup(timeutils.clear_time_override) h_wch.HeatWaitConditionHandle.get_status( ).MultipleTimes().AndReturn([]) self.m.ReplayAll() self.stack.create() rsrc = self.stack['wait_condition'] self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) reason = rsrc.status_reason self.assertTrue(reason.startswith('WaitConditionTimeout:')) self.m.VerifyAll() def _create_heat_wc_and_handle(self): self.stack = self.create_stack( template=test_template_heat_waitcondition) h_wch.HeatWaitConditionHandle.get_status().AndReturn(['SUCCESS']) self.m.ReplayAll() self.stack.create() rsrc = self.stack['wait_condition'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) wc_att = rsrc.FnGetAtt('data') self.assertEqual(six.text_type({}), wc_att) handle = self.stack['wait_handle'] self.assertEqual((handle.CREATE, handle.COMPLETE), handle.state) return (rsrc, handle) def test_data(self): rsrc, handle = self._create_heat_wc_and_handle() test_metadata = {'data': 'foo', 'reason': 'bar', 'status': 
'SUCCESS', 'id': '123'} ret = handle.handle_signal(details=test_metadata) wc_att = rsrc.FnGetAtt('data') self.assertEqual('{"123": "foo"}', wc_att) self.assertEqual('status:SUCCESS reason:bar', ret) test_metadata = {'data': 'dog', 'reason': 'cat', 'status': 'SUCCESS', 'id': '456'} ret = handle.handle_signal(details=test_metadata) wc_att = rsrc.FnGetAtt('data') self.assertEqual(json.loads(u'{"123": "foo", "456": "dog"}'), json.loads(wc_att)) self.assertEqual('status:SUCCESS reason:cat', ret) self.m.VerifyAll() def test_data_noid(self): rsrc, handle = self._create_heat_wc_and_handle() test_metadata = {'data': 'foo', 'reason': 'bar', 'status': 'SUCCESS'} ret = handle.handle_signal(details=test_metadata) wc_att = rsrc.FnGetAtt('data') self.assertEqual('{"1": "foo"}', wc_att) self.assertEqual('status:SUCCESS reason:bar', ret) test_metadata = {'data': 'dog', 'reason': 'cat', 'status': 'SUCCESS'} ret = handle.handle_signal(details=test_metadata) wc_att = rsrc.FnGetAtt('data') self.assertEqual(json.loads(u'{"1": "foo", "2": "dog"}'), json.loads(wc_att)) self.assertEqual('status:SUCCESS reason:cat', ret) self.m.VerifyAll() def test_data_nodata(self): rsrc, handle = self._create_heat_wc_and_handle() ret = handle.handle_signal() expected = 'status:SUCCESS reason:Signal 1 received' self.assertEqual(expected, ret) wc_att = rsrc.FnGetAtt('data') self.assertEqual('{"1": null}', wc_att) handle.handle_signal() wc_att = rsrc.FnGetAtt('data') self.assertEqual(json.loads(u'{"1": null, "2": null}'), json.loads(wc_att)) self.m.VerifyAll() def test_data_partial_complete(self): rsrc, handle = self._create_heat_wc_and_handle() test_metadata = {'status': 'SUCCESS'} ret = handle.handle_signal(details=test_metadata) expected = 'status:SUCCESS reason:Signal 1 received' self.assertEqual(expected, ret) wc_att = rsrc.FnGetAtt('data') self.assertEqual('{"1": null}', wc_att) test_metadata = {'status': 'SUCCESS'} ret = handle.handle_signal(details=test_metadata) expected = 'status:SUCCESS reason:Signal 2 received' self.assertEqual(expected, ret) wc_att = rsrc.FnGetAtt('data') self.assertEqual(json.loads(u'{"1": null, "2": null}'), json.loads(wc_att)) self.m.VerifyAll() def _create_heat_handle(self, template=test_template_heat_waithandle_token): self.stack = self.create_stack(template=template, stub_status=False) self.m.ReplayAll() self.stack.create() handle = self.stack['wait_handle'] self.assertEqual((handle.CREATE, handle.COMPLETE), handle.state) self.assertIsNotNone(handle.password) self.assertEqual(handle.resource_id, handle.data().get('user_id')) return handle def test_get_status_none_complete(self): handle = self._create_heat_handle() ret = handle.handle_signal() expected = 'status:SUCCESS reason:Signal 1 received' self.assertEqual(expected, ret) self.assertEqual(['SUCCESS'], handle.get_status()) md_expected = {'1': {'data': None, 'reason': 'Signal 1 received', 'status': 'SUCCESS'}} self.assertEqual(md_expected, handle.metadata_get()) self.m.VerifyAll() def test_get_status_partial_complete(self): handle = self._create_heat_handle() test_metadata = {'status': 'SUCCESS'} ret = handle.handle_signal(details=test_metadata) expected = 'status:SUCCESS reason:Signal 1 received' self.assertEqual(expected, ret) self.assertEqual(['SUCCESS'], handle.get_status()) md_expected = {'1': {'data': None, 'reason': 'Signal 1 received', 'status': 'SUCCESS'}} self.assertEqual(md_expected, handle.metadata_get()) self.m.VerifyAll() def test_get_status_failure(self): handle = self._create_heat_handle() test_metadata = {'status': 'FAILURE'} 
ret = handle.handle_signal(details=test_metadata) expected = 'status:FAILURE reason:Signal 1 received' self.assertEqual(expected, ret) self.assertEqual(['FAILURE'], handle.get_status()) md_expected = {'1': {'data': None, 'reason': 'Signal 1 received', 'status': 'FAILURE'}} self.assertEqual(md_expected, handle.metadata_get()) self.m.VerifyAll() def test_getatt_token(self): handle = self._create_heat_handle() self.assertEqual('adomainusertoken', handle.FnGetAtt('token')) self.m.VerifyAll() def test_getatt_endpoint(self): self.m.StubOutWithMock(heat_plugin.HeatClientPlugin, 'get_heat_url') heat_plugin.HeatClientPlugin.get_heat_url().AndReturn( 'foo/%s' % self.tenant_id) self.m.ReplayAll() handle = self._create_heat_handle() expected = ('foo/aprojectid/stacks/test_stack/%s/resources/' 'wait_handle/signal' % self.stack_id) self.assertEqual(expected, handle.FnGetAtt('endpoint')) self.m.VerifyAll() def test_getatt_curl_cli(self): self.m.StubOutWithMock(heat_plugin.HeatClientPlugin, 'get_heat_url') heat_plugin.HeatClientPlugin.get_heat_url().AndReturn( 'foo/%s' % self.tenant_id) self.m.StubOutWithMock( heat_plugin.HeatClientPlugin, 'get_insecure_option') heat_plugin.HeatClientPlugin.get_insecure_option().AndReturn(False) self.m.ReplayAll() handle = self._create_heat_handle() expected = ("curl -i -X POST -H 'X-Auth-Token: adomainusertoken' " "-H 'Content-Type: application/json' " "-H 'Accept: application/json' " "foo/aprojectid/stacks/test_stack/%s/resources/wait_handle" "/signal" % self.stack_id) self.assertEqual(expected, handle.FnGetAtt('curl_cli')) self.m.VerifyAll() def test_getatt_curl_cli_insecure_true(self): self.m.StubOutWithMock(heat_plugin.HeatClientPlugin, 'get_heat_url') heat_plugin.HeatClientPlugin.get_heat_url().AndReturn( 'foo/%s' % self.tenant_id) self.m.StubOutWithMock( heat_plugin.HeatClientPlugin, 'get_insecure_option') heat_plugin.HeatClientPlugin.get_insecure_option().AndReturn(True) self.m.ReplayAll() handle = self._create_heat_handle() expected = ( "curl --insecure -i -X POST -H 'X-Auth-Token: adomainusertoken' " "-H 'Content-Type: application/json' " "-H 'Accept: application/json' " "foo/aprojectid/stacks/test_stack/%s/resources/wait_handle" "/signal" % self.stack_id) self.assertEqual(expected, handle.FnGetAtt('curl_cli')) self.m.VerifyAll() def test_getatt_signal_heat(self): handle = self._create_heat_handle( template=test_template_heat_waithandle_heat) self.assertIsNone(handle.FnGetAtt('token')) self.assertIsNone(handle.FnGetAtt('endpoint')) self.assertIsNone(handle.FnGetAtt('curl_cli')) signal = json.loads(handle.FnGetAtt('signal')) self.assertIn('alarm_url', signal) self.assertIn('username', signal) self.assertIn('password', signal) self.assertIn('auth_url', signal) self.assertIn('project_id', signal) self.assertIn('domain_id', signal) def test_getatt_signal_swift(self): self.m.StubOutWithMock(swift_plugin.SwiftClientPlugin, 'get_temp_url') self.m.StubOutWithMock(swift_plugin.SwiftClientPlugin, 'client') class mock_swift(object): @staticmethod def put_container(container, **kwargs): pass @staticmethod def put_object(container, object, contents, **kwargs): pass swift_plugin.SwiftClientPlugin.client().AndReturn(mock_swift) swift_plugin.SwiftClientPlugin.client().AndReturn(mock_swift) swift_plugin.SwiftClientPlugin.client().AndReturn(mock_swift) swift_plugin.SwiftClientPlugin.get_temp_url(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg() ).AndReturn('foo') self.m.ReplayAll() handle = self._create_heat_handle( template=test_template_heat_waithandle_swift) 
self.assertIsNone(handle.FnGetAtt('token')) self.assertIsNone(handle.FnGetAtt('endpoint')) self.assertIsNone(handle.FnGetAtt('curl_cli')) signal = json.loads(handle.FnGetAtt('signal')) self.assertIn('alarm_url', signal) @mock.patch('zaqarclient.queues.v2.queues.Queue.signed_url') def test_getatt_signal_zaqar(self, mock_signed_url): handle = self._create_heat_handle( template=test_template_heat_waithandle_zaqar) self.assertIsNone(handle.FnGetAtt('token')) self.assertIsNone(handle.FnGetAtt('endpoint')) self.assertIsNone(handle.FnGetAtt('curl_cli')) signal = json.loads(handle.FnGetAtt('signal')) self.assertIn('queue_id', signal) self.assertIn('username', signal) self.assertIn('password', signal) self.assertIn('auth_url', signal) self.assertIn('project_id', signal) self.assertIn('domain_id', signal) def test_getatt_signal_none(self): handle = self._create_heat_handle( template=test_template_heat_waithandle_none) self.assertIsNone(handle.FnGetAtt('token')) self.assertIsNone(handle.FnGetAtt('endpoint')) self.assertIsNone(handle.FnGetAtt('curl_cli')) self.assertEqual('{}', handle.FnGetAtt('signal')) def test_create_update_updatehandle(self): self.stack = self.create_stack( template=test_template_update_waithandle, stub_status=False) self.m.ReplayAll() self.stack.create() handle = self.stack['update_wait_handle'] self.assertEqual((handle.CREATE, handle.COMPLETE), handle.state) self.assertRaises( resource.UpdateReplace, handle.update, None, None) heat-10.0.2/heat/tests/openstack/heat/test_resource_group.py0000666000175000017500000021425213343562340024217 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import mock import six from heat.common import exception from heat.common import grouputils from heat.common import template_format from heat.engine.clients.os import glance from heat.engine.clients.os import nova from heat.engine import node_data from heat.engine.resources.openstack.heat import resource_group from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests import utils template = { "heat_template_version": "2013-05-23", "resources": { "group1": { "type": "OS::Heat::ResourceGroup", "properties": { "count": 2, "resource_def": { "type": "OverwrittenFnGetRefIdType", "properties": { "Foo": "Bar" } } } } } } template2 = { "heat_template_version": "2013-05-23", "resources": { "dummy": { "type": "OverwrittenFnGetRefIdType", "properties": { "Foo": "baz" } }, "group1": { "type": "OS::Heat::ResourceGroup", "properties": { "count": 2, "resource_def": { "type": "OverwrittenFnGetRefIdType", "properties": { "Foo": {"get_attr": ["dummy", "Foo"]} } } } } } } template_repl = { "heat_template_version": "2013-05-23", "resources": { "group1": { "type": "OS::Heat::ResourceGroup", "properties": { "count": 2, "resource_def": { "type": "ResourceWithListProp%index%", "properties": { "Foo": "Bar_%index%", "listprop": [ "%index%_0", "%index%_1", "%index%_2" ] } } } } } } template_attr = { "heat_template_version": "2014-10-16", "resources": { "group1": { "type": "OS::Heat::ResourceGroup", "properties": { "count": 2, "resource_def": { "type": "ResourceWithComplexAttributesType", "properties": { } } } } }, "outputs": { "nested_strings": { "value": {"get_attr": ["group1", "nested_dict", "string"]} } } } template_server = { "heat_template_version": "2013-05-23", "resources": { "group1": { "type": "OS::Heat::ResourceGroup", "properties": { "count": 2, "resource_def": { "type": "OS::Nova::Server", "properties": { "image": "image%index%", "flavor": "flavor%index%" } } } } } } class ResourceGroupTest(common.HeatTestCase): def setUp(self): super(ResourceGroupTest, self).setUp() self.inspector = mock.Mock(spec=grouputils.GroupInspector) self.patchobject(grouputils.GroupInspector, 'from_parent_resource', return_value=self.inspector) def test_assemble_nested(self): """Tests nested stack creation based on props. Tests that the nested stack that implements the group is created appropriately based on properties. """ stack = utils.parse_stack(template) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) templ = { "heat_template_version": "2015-04-30", "resources": { "0": { "type": "OverwrittenFnGetRefIdType", "properties": { "Foo": "Bar" } }, "1": { "type": "OverwrittenFnGetRefIdType", "properties": { "Foo": "Bar" } }, "2": { "type": "OverwrittenFnGetRefIdType", "properties": { "Foo": "Bar" } } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, "1": {"get_resource": "1"}, "2": {"get_resource": "2"}, } } } } self.assertEqual(templ, resg._assemble_nested(['0', '1', '2']).t) def test_assemble_nested_outputs(self): """Tests nested stack creation based on props. Tests that the nested stack that implements the group is created appropriately based on properties. 
""" stack = utils.parse_stack(template) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) templ = { "heat_template_version": "2015-04-30", "resources": { "0": { "type": "OverwrittenFnGetRefIdType", "properties": { "Foo": "Bar" } }, "1": { "type": "OverwrittenFnGetRefIdType", "properties": { "Foo": "Bar" } }, "2": { "type": "OverwrittenFnGetRefIdType", "properties": { "Foo": "Bar" } } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, "1": {"get_resource": "1"}, "2": {"get_resource": "2"}, } }, "foo": { "value": [ {"get_attr": ["0", "foo"]}, {"get_attr": ["1", "foo"]}, {"get_attr": ["2", "foo"]}, ] } } } resg.referenced_attrs = mock.Mock(return_value=["foo"]) self.assertEqual(templ, resg._assemble_nested(['0', '1', '2']).t) def test_assemble_nested_include(self): templ = copy.deepcopy(template) res_def = templ["resources"]["group1"]["properties"]['resource_def'] res_def['properties']['Foo'] = None stack = utils.parse_stack(templ) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) expect = { "heat_template_version": "2015-04-30", "resources": { "0": { "type": "OverwrittenFnGetRefIdType", "properties": {} } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, } } } } self.assertEqual(expect, resg._assemble_nested(['0']).t) expect['resources']["0"]['properties'] = {"Foo": None} self.assertEqual( expect, resg._assemble_nested(['0'], include_all=True).t) def test_assemble_nested_include_zero(self): templ = copy.deepcopy(template) templ['resources']['group1']['properties']['count'] = 0 stack = utils.parse_stack(templ) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) expect = { "heat_template_version": "2015-04-30", "outputs": {"refs_map": {"value": {}}}, } self.assertEqual(expect, resg._assemble_nested([]).t) def test_assemble_nested_with_metadata(self): templ = copy.deepcopy(template) res_def = templ["resources"]["group1"]["properties"]['resource_def'] res_def['properties']['Foo'] = None res_def['metadata'] = { 'priority': 'low', 'role': 'webserver' } stack = utils.parse_stack(templ) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) expect = { "heat_template_version": "2015-04-30", "resources": { "0": { "type": "OverwrittenFnGetRefIdType", "properties": {}, "metadata": { 'priority': 'low', 'role': 'webserver' } } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, } } } } self.assertEqual(expect, resg._assemble_nested(['0']).t) def test_assemble_nested_rolling_update(self): expect = { "heat_template_version": "2015-04-30", "resources": { "0": { "type": "OverwrittenFnGetRefIdType", "properties": { "foo": "bar" } }, "1": { "type": "OverwrittenFnGetRefIdType", "properties": { "foo": "baz" } } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, "1": {"get_resource": "1"}, } } } } resource_def = rsrc_defn.ResourceDefinition( None, "OverwrittenFnGetRefIdType", {"foo": "baz"}) stack = utils.parse_stack(template) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) nested = get_fake_nested_stack(['0', '1']) self.inspector.template.return_value = nested.defn._template self.inspector.member_names.return_value = ['0', '1'] resg.build_resource_definition = mock.Mock(return_value=resource_def) self.assertEqual(expect, 
resg._assemble_for_rolling_update(2, 1).t) def test_assemble_nested_rolling_update_outputs(self): expect = { "heat_template_version": "2015-04-30", "resources": { "0": { "type": "OverwrittenFnGetRefIdType", "properties": { "foo": "bar" } }, "1": { "type": "OverwrittenFnGetRefIdType", "properties": { "foo": "baz" } } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, "1": {"get_resource": "1"}, } }, "bar": { "value": [ {"get_attr": ["0", "bar"]}, {"get_attr": ["1", "bar"]}, ] } } } resource_def = rsrc_defn.ResourceDefinition( None, "OverwrittenFnGetRefIdType", {"foo": "baz"}) stack = utils.parse_stack(template) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) nested = get_fake_nested_stack(['0', '1']) self.inspector.template.return_value = nested.defn._template self.inspector.member_names.return_value = ['0', '1'] resg.build_resource_definition = mock.Mock(return_value=resource_def) resg.referenced_attrs = mock.Mock(return_value=["bar"]) self.assertEqual(expect, resg._assemble_for_rolling_update(2, 1).t) def test_assemble_nested_rolling_update_none(self): expect = { "heat_template_version": "2015-04-30", "resources": { "0": { "type": "OverwrittenFnGetRefIdType", "properties": { "foo": "bar" } }, "1": { "type": "OverwrittenFnGetRefIdType", "properties": { "foo": "bar" } } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, "1": {"get_resource": "1"}, } } } } resource_def = rsrc_defn.ResourceDefinition( None, "OverwrittenFnGetRefIdType", {"foo": "baz"}) stack = utils.parse_stack(template) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) nested = get_fake_nested_stack(['0', '1']) self.inspector.template.return_value = nested.defn._template self.inspector.member_names.return_value = ['0', '1'] resg.build_resource_definition = mock.Mock(return_value=resource_def) self.assertEqual(expect, resg._assemble_for_rolling_update(2, 0).t) def test_assemble_nested_rolling_update_failed_resource(self): expect = { "heat_template_version": "2015-04-30", "resources": { "0": { "type": "OverwrittenFnGetRefIdType", "properties": { "foo": "baz" } }, "1": { "type": "OverwrittenFnGetRefIdType", "properties": { "foo": "bar" } } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, "1": {"get_resource": "1"}, } } } } resource_def = rsrc_defn.ResourceDefinition( None, "OverwrittenFnGetRefIdType", {"foo": "baz"}) stack = utils.parse_stack(template) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) nested = get_fake_nested_stack(['0', '1']) self.inspector.template.return_value = nested.defn._template self.inspector.member_names.return_value = ['1'] resg.build_resource_definition = mock.Mock(return_value=resource_def) self.assertEqual(expect, resg._assemble_for_rolling_update(2, 1).t) def test_assemble_nested_missing_param(self): # Setup # Change the standard testing template to use a get_param lookup # within the resource definition templ = copy.deepcopy(template) res_def = templ['resources']['group1']['properties']['resource_def'] res_def['properties']['Foo'] = {'get_param': 'bar'} stack = utils.parse_stack(templ) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) # Test - This should not raise a ValueError about "bar" not being # provided nested_tmpl = resg._assemble_nested(['0', '1']) # Verify expected = { "heat_template_version": 
"2015-04-30", "resources": { "0": { "type": "OverwrittenFnGetRefIdType", "properties": {} }, "1": { "type": "OverwrittenFnGetRefIdType", "properties": {} } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, "1": {"get_resource": "1"}, } } } } self.assertEqual(expected, nested_tmpl.t) def test_index_var(self): stack = utils.parse_stack(template_repl) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) expect = { "heat_template_version": "2015-04-30", "resources": { "0": { "type": "ResourceWithListProp%index%", "properties": { "Foo": "Bar_0", "listprop": [ "0_0", "0_1", "0_2" ] } }, "1": { "type": "ResourceWithListProp%index%", "properties": { "Foo": "Bar_1", "listprop": [ "1_0", "1_1", "1_2" ] } }, "2": { "type": "ResourceWithListProp%index%", "properties": { "Foo": "Bar_2", "listprop": [ "2_0", "2_1", "2_2" ] } } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, "1": {"get_resource": "1"}, "2": {"get_resource": "2"}, } } } } nested = resg._assemble_nested(['0', '1', '2']).t for res in nested['resources']: res_prop = nested['resources'][res]['properties'] res_prop['listprop'] = list(res_prop['listprop']) self.assertEqual(expect, nested) def test_custom_index_var(self): templ = copy.deepcopy(template_repl) templ['resources']['group1']['properties']['index_var'] = "__foo__" stack = utils.parse_stack(templ) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) expect = { "heat_template_version": "2015-04-30", "resources": { "0": { "type": "ResourceWithListProp%index%", "properties": { "Foo": "Bar_%index%", "listprop": [ "%index%_0", "%index%_1", "%index%_2" ] } } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, } } } } nested = resg._assemble_nested(['0']).t res_prop = nested['resources']['0']['properties'] res_prop['listprop'] = list(res_prop['listprop']) self.assertEqual(expect, nested) props = copy.deepcopy(templ['resources']['group1']['properties']) res_def = props['resource_def'] res_def['properties']['Foo'] = "Bar___foo__" res_def['properties']['listprop'] = ["__foo___0", "__foo___1", "__foo___2"] res_def['type'] = "ResourceWithListProp__foo__" snip = snip.freeze(properties=props) resg = resource_group.ResourceGroup('test', snip, stack) expect = { "heat_template_version": "2015-04-30", "resources": { "0": { "type": "ResourceWithListProp__foo__", "properties": { "Foo": "Bar_0", "listprop": [ "0_0", "0_1", "0_2" ] } } }, "outputs": { "refs_map": { "value": { "0": {"get_resource": "0"}, } } } } nested = resg._assemble_nested(['0']).t res_prop = nested['resources']['0']['properties'] res_prop['listprop'] = list(res_prop['listprop']) self.assertEqual(expect, nested) def test_assemble_no_properties(self): templ = copy.deepcopy(template) res_def = templ["resources"]["group1"]["properties"]['resource_def'] del res_def['properties'] stack = utils.parse_stack(templ) resg = stack.resources['group1'] self.assertIsNone(resg.validate()) def test_validate_with_blacklist(self): templ = copy.deepcopy(template_server) self.mock_flavor = mock.Mock(ram=4, disk=4) self.mock_active_image = mock.Mock(min_ram=1, min_disk=1, status='active') self.mock_inactive_image = mock.Mock(min_ram=1, min_disk=1, status='inactive') def get_image(image_identifier): if image_identifier == 'image0': return self.mock_inactive_image else: return self.mock_active_image self.patchobject(glance.GlanceClientPlugin, 'get_image', side_effect=get_image) 
self.patchobject(nova.NovaClientPlugin, 'get_flavor', return_value=self.mock_flavor) props = templ["resources"]["group1"]["properties"] props["removal_policies"] = [{"resource_list": ["0"]}] stack = utils.parse_stack(templ) resg = stack.resources['group1'] self.assertIsNone(resg.validate()) def test_invalid_res_type(self): """Test that error raised for unknown resource type.""" tmp = copy.deepcopy(template) grp_props = tmp['resources']['group1']['properties'] grp_props['resource_def']['type'] = "idontexist" stack = utils.parse_stack(tmp) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) exc = self.assertRaises(exception.StackValidationFailed, resg.validate) exp_msg = 'The Resource Type (idontexist) could not be found.' self.assertIn(exp_msg, six.text_type(exc)) def test_reference_attr(self): stack = utils.parse_stack(template2) snip = stack.t.resource_definitions(stack)['group1'] resgrp = resource_group.ResourceGroup('test', snip, stack) self.assertIsNone(resgrp.validate()) def test_validate_reference_attr_with_none_ref(self): stack = utils.parse_stack(template_attr) snip = stack.t.resource_definitions(stack)['group1'] resgrp = resource_group.ResourceGroup('test', snip, stack) self.patchobject(resgrp, 'referenced_attrs', return_value=set([('nested_dict', None)])) self.assertIsNone(resgrp.validate()) def test_invalid_removal_policies_nolist(self): """Test that error raised for malformed removal_policies.""" tmp = copy.deepcopy(template) grp_props = tmp['resources']['group1']['properties'] grp_props['removal_policies'] = 'notallowed' stack = utils.parse_stack(tmp) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) exc = self.assertRaises(exception.StackValidationFailed, resg.validate) errstr = "removal_policies: \"'notallowed'\" is not a list" self.assertIn(errstr, six.text_type(exc)) def test_invalid_removal_policies_nomap(self): """Test that error raised for malformed removal_policies.""" tmp = copy.deepcopy(template) grp_props = tmp['resources']['group1']['properties'] grp_props['removal_policies'] = ['notallowed'] stack = utils.parse_stack(tmp) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) exc = self.assertRaises(exception.StackValidationFailed, resg.validate) errstr = '"notallowed" is not a map' self.assertIn(errstr, six.text_type(exc)) def test_child_template(self): stack = utils.parse_stack(template2) snip = stack.t.resource_definitions(stack)['group1'] def check_res_names(names): self.assertEqual(list(names), ['0', '1']) return 'tmpl' resgrp = resource_group.ResourceGroup('test', snip, stack) resgrp._assemble_nested = mock.Mock() resgrp._assemble_nested.side_effect = check_res_names resgrp.properties.data[resgrp.COUNT] = 2 self.assertEqual('tmpl', resgrp.child_template()) self.assertEqual(1, resgrp._assemble_nested.call_count) def test_child_params(self): stack = utils.parse_stack(template2) snip = stack.t.resource_definitions(stack)['group1'] resgrp = resource_group.ResourceGroup('test', snip, stack) self.assertEqual({}, resgrp.child_params()) def test_handle_create(self): stack = utils.parse_stack(template2) snip = stack.t.resource_definitions(stack)['group1'] resgrp = resource_group.ResourceGroup('test', snip, stack) resgrp.create_with_template = mock.Mock(return_value=None) self.assertIsNone(resgrp.handle_create()) self.assertEqual(1, resgrp.create_with_template.call_count) def 
test_handle_create_with_batching(self): self.inspector.member_names.return_value = [] self.inspector.size.return_value = 0 stack = utils.parse_stack(tmpl_with_default_updt_policy()) defn = stack.t.resource_definitions(stack)['group1'] props = stack.t.t['resources']['group1']['properties'].copy() props['count'] = 10 update_policy = {'batch_create': {'max_batch_size': 3}} snip = defn.freeze(properties=props, update_policy=update_policy) resgrp = resource_group.ResourceGroup('test', snip, stack) self.patchobject(scheduler.TaskRunner, 'start') checkers = resgrp.handle_create() self.assertEqual(4, len(checkers)) def test_handle_create_with_batching_zero_count(self): self.inspector.member_names.return_value = [] self.inspector.size.return_value = 0 stack = utils.parse_stack(tmpl_with_default_updt_policy()) defn = stack.t.resource_definitions(stack)['group1'] props = stack.t.t['resources']['group1']['properties'].copy() props['count'] = 0 update_policy = {'batch_create': {'max_batch_size': 1}} snip = defn.freeze(properties=props, update_policy=update_policy) resgrp = resource_group.ResourceGroup('test', snip, stack) resgrp.create_with_template = mock.Mock(return_value=None) self.assertIsNone(resgrp.handle_create()) self.assertEqual(1, resgrp.create_with_template.call_count) def test_run_to_completion(self): stack = utils.parse_stack(template2) snip = stack.t.resource_definitions(stack)['group1'] resgrp = resource_group.ResourceGroup('test', snip, stack) resgrp._check_status_complete = mock.Mock(side_effect=[False, True]) resgrp.update_with_template = mock.Mock(return_value=None) next(resgrp._run_to_completion(snip, 200)) self.assertEqual(1, resgrp.update_with_template.call_count) def test_update_in_failed(self): stack = utils.parse_stack(template2) snip = stack.t.resource_definitions(stack)['group1'] resgrp = resource_group.ResourceGroup('test', snip, stack) resgrp.state_set('CREATE', 'FAILED') resgrp._assemble_nested = mock.Mock(return_value='tmpl') resgrp.properties.data[resgrp.COUNT] = 2 self.patchobject(scheduler.TaskRunner, 'start') resgrp.handle_update(snip, mock.Mock(), {}) self.assertTrue(resgrp._assemble_nested.called) def test_handle_delete(self): stack = utils.parse_stack(template2) snip = stack.t.resource_definitions(stack)['group1'] resgrp = resource_group.ResourceGroup('test', snip, stack) resgrp.delete_nested = mock.Mock(return_value=None) resgrp.handle_delete() resgrp.delete_nested.assert_called_once_with() def test_handle_update_size(self): stack = utils.parse_stack(template2) snip = stack.t.resource_definitions(stack)['group1'] resgrp = resource_group.ResourceGroup('test', snip, stack) resgrp._assemble_nested = mock.Mock(return_value=None) resgrp.properties.data[resgrp.COUNT] = 5 self.patchobject(scheduler.TaskRunner, 'start') resgrp.handle_update(snip, mock.Mock(), {}) self.assertTrue(resgrp._assemble_nested.called) class ResourceGroupBlackList(common.HeatTestCase): """This class tests ResourceGroup._name_blacklist().""" # 1) no resource_list, empty blacklist # 2) no resource_list, existing blacklist # 3) resource_list not in nested() # 4) resource_list (refid) not in nested() # 5) resource_list in nested() -> saved # 6) resource_list (refid) in nested() -> saved # 7) resource_list (refid) in nested(), update -> saved # 8) resource_list, update -> saved # 9) resource_list (refid) in nested(), grouputils fallback -> saved # A) resource_list (refid) in nested(), update, grouputils -> saved scenarios = [ ('1', dict(data_in=None, rm_list=[], nested_rsrcs=[], expected=[], 
saved=False, fallback=False, rm_mode='append')), ('2', dict(data_in='0,1,2', rm_list=[], nested_rsrcs=[], expected=['0', '1', '2'], saved=False, fallback=False, rm_mode='append')), ('3', dict(data_in='1,3', rm_list=['6'], nested_rsrcs=['0', '1', '3'], expected=['1', '3'], saved=False, fallback=False, rm_mode='append')), ('4', dict(data_in='0,1', rm_list=['id-7'], nested_rsrcs=['0', '1', '3'], expected=['0', '1'], saved=False, fallback=False, rm_mode='append')), ('5', dict(data_in='0,1', rm_list=['3'], nested_rsrcs=['0', '1', '3'], expected=['0', '1', '3'], saved=True, fallback=False, rm_mode='append')), ('6', dict(data_in='0,1', rm_list=['id-3'], nested_rsrcs=['0', '1', '3'], expected=['0', '1', '3'], saved=True, fallback=False, rm_mode='append')), ('7', dict(data_in='0,1', rm_list=['id-3'], nested_rsrcs=['0', '1', '3'], expected=['3'], saved=True, fallback=False, rm_mode='update')), ('8', dict(data_in='1', rm_list=[], nested_rsrcs=['0', '1', '2'], expected=[], saved=True, fallback=False, rm_mode='update')), ('9', dict(data_in='0,1', rm_list=['id-3'], nested_rsrcs=['0', '1', '3'], expected=['0', '1', '3'], saved=True, fallback=True, rm_mode='append')), ('A', dict(data_in='0,1', rm_list=['id-3'], nested_rsrcs=['0', '1', '3'], expected=['3'], saved=True, fallback=True, rm_mode='update')), ] def test_blacklist(self): stack = utils.parse_stack(template) resg = stack['group1'] if self.data_in is not None: resg.resource_id = 'foo' # mock properties properties = mock.MagicMock() p_data = {'removal_policies': [{'resource_list': self.rm_list}], 'removal_policies_mode': self.rm_mode} properties.get.side_effect = p_data.get # mock data get/set resg.data = mock.Mock() resg.data.return_value.get.return_value = self.data_in resg.data_set = mock.Mock() # mock nested access mock_inspect = mock.Mock() self.patchobject(grouputils.GroupInspector, 'from_parent_resource', return_value=mock_inspect) mock_inspect.member_names.return_value = self.nested_rsrcs if not self.fallback: refs_map = {n: 'id-%s' % n for n in self.nested_rsrcs} resg.get_output = mock.Mock(return_value=refs_map) else: resg.get_output = mock.Mock(side_effect=exception.NotFound) def stack_contains(name): return name in self.nested_rsrcs def by_refid(name): rid = name.replace('id-', '') if rid not in self.nested_rsrcs: return None res = mock.Mock() res.name = rid return res nested = mock.MagicMock() nested.__contains__.side_effect = stack_contains nested.__iter__.side_effect = iter(self.nested_rsrcs) nested.resource_by_refid.side_effect = by_refid resg.nested = mock.Mock(return_value=nested) resg._update_name_blacklist(properties) if self.saved: resg.data_set.assert_called_once_with('name_blacklist', ','.join(self.expected)) else: resg.data_set.assert_not_called() self.assertEqual(set(self.expected), resg._name_blacklist()) class ResourceGroupEmptyParams(common.HeatTestCase): """This class tests ResourceGroup.build_resource_definition().""" scenarios = [ ('non_empty', dict(value='Bar', expected={'Foo': 'Bar'}, expected_include={'Foo': 'Bar'})), ('empty_None', dict(value=None, expected={}, expected_include={'Foo': None})), ('empty_boolean', dict(value=False, expected={'Foo': False}, expected_include={'Foo': False})), ('empty_string', dict(value='', expected={'Foo': ''}, expected_include={'Foo': ''})), ('empty_number', dict(value=0, expected={'Foo': 0}, expected_include={'Foo': 0})), ('empty_json', dict(value={}, expected={'Foo': {}}, expected_include={'Foo': {}})), ('empty_list', dict(value=[], expected={'Foo': []}, expected_include={'Foo': 
[]})) ] def test_definition(self): templ = copy.deepcopy(template) res_def = templ["resources"]["group1"]["properties"]['resource_def'] res_def['properties']['Foo'] = self.value stack = utils.parse_stack(templ) snip = stack.t.resource_definitions(stack)['group1'] resg = resource_group.ResourceGroup('test', snip, stack) exp1 = rsrc_defn.ResourceDefinition( None, "OverwrittenFnGetRefIdType", self.expected) exp2 = rsrc_defn.ResourceDefinition( None, "OverwrittenFnGetRefIdType", self.expected_include) rdef = resg.get_resource_def() self.assertEqual(exp1, resg.build_resource_definition('0', rdef)) rdef = resg.get_resource_def(include_all=True) self.assertEqual( exp2, resg.build_resource_definition('0', rdef)) class ResourceGroupNameListTest(common.HeatTestCase): """This class tests ResourceGroup._resource_names().""" # 1) no blacklist, 0 count # 2) no blacklist, x count # 3) blacklist (not affecting) # 4) blacklist with pruning scenarios = [ ('1', dict(blacklist=[], count=0, expected=[])), ('2', dict(blacklist=[], count=4, expected=['0', '1', '2', '3'])), ('3', dict(blacklist=['5', '6'], count=3, expected=['0', '1', '2'])), ('4', dict(blacklist=['2', '4'], count=4, expected=['0', '1', '3', '5'])), ] def test_names(self): stack = utils.parse_stack(template) resg = stack['group1'] resg.properties = mock.MagicMock() resg.properties.get.return_value = self.count resg._name_blacklist = mock.MagicMock(return_value=self.blacklist) self.assertEqual(self.expected, list(resg._resource_names())) class ResourceGroupAttrTest(common.HeatTestCase): def test_aggregate_attribs(self): """Test attribute aggregation. Test attribute aggregation and that we mimic the nested resource's attributes. """ resg = self._create_dummy_stack() expected = ['0', '1'] self.assertEqual(expected, resg.FnGetAtt('foo')) self.assertEqual(expected, resg.FnGetAtt('Foo')) def test_index_dotted_attribs(self): """Test attribute aggregation. Test attribute aggregation and that we mimic the nested resource's attributes. """ resg = self._create_dummy_stack() self.assertEqual('0', resg.FnGetAtt('resource.0.Foo')) self.assertEqual('1', resg.FnGetAtt('resource.1.Foo')) def test_index_path_attribs(self): """Test attribute aggregation. Test attribute aggregation and that we mimic the nested resource's attributes. """ resg = self._create_dummy_stack() self.assertEqual('0', resg.FnGetAtt('resource.0', 'Foo')) self.assertEqual('1', resg.FnGetAtt('resource.1', 'Foo')) def test_index_deep_path_attribs(self): """Test attribute aggregation. Test attribute aggregation and that we mimic the nested resource's attributes. """ resg = self._create_dummy_stack(template_attr, expect_attrs={'0': 2, '1': 2}) self.assertEqual(2, resg.FnGetAtt('resource.0', 'nested_dict', 'dict', 'b')) self.assertEqual(2, resg.FnGetAtt('resource.1', 'nested_dict', 'dict', 'b')) def test_aggregate_deep_path_attribs(self): """Test attribute aggregation. Test attribute aggregation and that we mimic the nested resource's attributes.
""" resg = self._create_dummy_stack(template_attr, expect_attrs={'0': 3, '1': 3}) expected = [3, 3] self.assertEqual(expected, resg.FnGetAtt('nested_dict', 'list', 2)) def test_aggregate_refs(self): """Test resource id aggregation.""" resg = self._create_dummy_stack() expected = ['ID-0', 'ID-1'] self.assertEqual(expected, resg.FnGetAtt("refs")) def test_aggregate_refs_with_index(self): """Test resource id aggregation with index.""" resg = self._create_dummy_stack() expected = ['ID-0', 'ID-1'] self.assertEqual(expected[0], resg.FnGetAtt("refs", 0)) self.assertEqual(expected[1], resg.FnGetAtt("refs", 1)) self.assertIsNone(resg.FnGetAtt("refs", 2)) def test_aggregate_refs_map(self): resg = self._create_dummy_stack() found = resg.FnGetAtt("refs_map") expected = {'0': 'ID-0', '1': 'ID-1'} self.assertEqual(expected, found) def test_aggregate_outputs(self): """Test outputs aggregation.""" expected = {'0': ['foo', 'bar'], '1': ['foo', 'bar']} resg = self._create_dummy_stack(template_attr, expect_attrs=expected) self.assertEqual(expected, resg.FnGetAtt('attributes', 'list')) def test_aggregate_outputs_no_path(self): """Test outputs aggregation with missing path.""" resg = self._create_dummy_stack(template_attr) self.assertRaises(exception.InvalidTemplateAttribute, resg.FnGetAtt, 'attributes') def test_index_refs(self): """Tests getting ids of individual resources.""" resg = self._create_dummy_stack() self.assertEqual("ID-0", resg.FnGetAtt('resource.0')) self.assertEqual("ID-1", resg.FnGetAtt('resource.1')) ex = self.assertRaises(exception.NotFound, resg.FnGetAtt, 'resource.2') self.assertIn("Member '2' not found in group resource 'group1'.", six.text_type(ex)) def test_get_attribute_convg(self): cache_data = {'group1': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'attrs': {'refs': ['rsrc1', 'rsrc2']} })} stack = utils.parse_stack(template, cache_data=cache_data) rsrc = stack.defn['group1'] self.assertEqual(['rsrc1', 'rsrc2'], rsrc.FnGetAtt('refs')) def test_get_attribute_blacklist(self): resg = self._create_dummy_stack() resg.data = mock.Mock(return_value={'name_blacklist': '3,5'}) expected = ['3', '5'] self.assertEqual(expected, resg.FnGetAtt(resg.REMOVED_RSRC_LIST)) def _create_dummy_stack(self, template_data=template, expect_count=2, expect_attrs=None): stack = utils.parse_stack(template_data) resg = stack['group1'] resg.resource_id = 'test-test' attrs = {} refids = {} if expect_attrs is None: expect_attrs = {} for index in range(expect_count): res = str(index) attrs[index] = expect_attrs.get(res, res) refids[index] = 'ID-%s' % res names = [str(name) for name in range(expect_count)] resg._resource_names = mock.Mock(return_value=names) self._stub_get_attr(resg, refids, attrs) return resg def _stub_get_attr(self, resg, refids, attrs): def ref_id_fn(res_name): return refids[int(res_name)] def attr_fn(args): res_name = args[0] return attrs[int(res_name)] def get_output(output_name): outputs = resg._nested_output_defns(resg._resource_names(), attr_fn, ref_id_fn) op_defns = {od.name: od for od in outputs} self.assertIn(output_name, op_defns) return op_defns[output_name].get_value() orig_get_attr = resg.FnGetAtt def get_attr(attr_name, *path): if not path: attr = attr_name else: attr = (attr_name,) + path # Mock referenced_attrs() so that _nested_output_definitions() # will include the output required for this attribute resg.referenced_attrs = mock.Mock(return_value=[attr]) # Pass through to actual function under test return 
orig_get_attr(attr_name, *path) resg.FnGetAtt = mock.Mock(side_effect=get_attr) resg.get_output = mock.Mock(side_effect=get_output) class ResourceGroupAttrFallbackTest(ResourceGroupAttrTest): def _stub_get_attr(self, resg, refids, attrs): # Raise NotFound when getting output, to force fallback to old-school # grouputils functions resg.get_output = mock.Mock(side_effect=exception.NotFound) def make_fake_res(idx): fr = mock.Mock() fr.stack = resg.stack fr.FnGetRefId.return_value = refids[idx] fr.FnGetAtt.return_value = attrs[idx] return fr fake_res = {str(i): make_fake_res(i) for i in refids} resg.nested = mock.Mock(return_value=fake_res) @mock.patch.object(grouputils, 'get_rsrc_id') def test_get_attribute(self, mock_get_rsrc_id): stack = utils.parse_stack(template) mock_get_rsrc_id.side_effect = ['0', '1'] rsrc = stack['group1'] rsrc.get_output = mock.Mock(side_effect=exception.NotFound) self.assertEqual(['0', '1'], rsrc.FnGetAtt(rsrc.REFS)) class ReplaceTest(common.HeatTestCase): # 1. no min_in_service # 2. min_in_service > count and existing with no blacklist # 3. min_in_service > count and existing with blacklist # 4. existing > count and min_in_service with blacklist # 5. existing > count and min_in_service with no blacklist # 6. all existing blacklisted # 7. count > existing and min_in_service with no blacklist # 8. count > existing and min_in_service with blacklist # 9. count < existing - blacklisted # 10. pause_sec > 0 scenarios = [ ('1', dict(min_in_service=0, count=2, existing=['0', '1'], black_listed=['0'], batch_size=1, pause_sec=0, tasks=2)), ('2', dict(min_in_service=3, count=2, existing=['0', '1'], black_listed=[], batch_size=2, pause_sec=0, tasks=3)), ('3', dict(min_in_service=3, count=2, existing=['0', '1'], black_listed=['0'], batch_size=2, pause_sec=0, tasks=3)), ('4', dict(min_in_service=3, count=2, existing=['0', '1', '2', '3'], black_listed=['2', '3'], batch_size=1, pause_sec=0, tasks=4)), ('5', dict(min_in_service=2, count=2, existing=['0', '1', '2', '3'], black_listed=[], batch_size=2, pause_sec=0, tasks=2)), ('6', dict(min_in_service=2, count=3, existing=['0', '1'], black_listed=['0', '1'], batch_size=2, pause_sec=0, tasks=2)), ('7', dict(min_in_service=0, count=5, existing=['0', '1'], black_listed=[], batch_size=1, pause_sec=0, tasks=5)), ('8', dict(min_in_service=0, count=5, existing=['0', '1'], black_listed=['0'], batch_size=1, pause_sec=0, tasks=5)), ('9', dict(min_in_service=0, count=3, existing=['0', '1', '2', '3', '4', '5'], black_listed=['0'], batch_size=2, pause_sec=0, tasks=2)), ('10', dict(min_in_service=0, count=3, existing=['0', '1', '2', '3', '4', '5'], black_listed=['0'], batch_size=2, pause_sec=10, tasks=3))] def setUp(self): super(ReplaceTest, self).setUp() templ = copy.deepcopy(template) self.stack = utils.parse_stack(templ) snip = self.stack.t.resource_definitions(self.stack)['group1'] self.group = resource_group.ResourceGroup('test', snip, self.stack) self.group.update_with_template = mock.Mock() self.group.check_update_complete = mock.Mock() inspector = mock.Mock(spec=grouputils.GroupInspector) self.patchobject(grouputils.GroupInspector, 'from_parent_resource', return_value=inspector) inspector.member_names.return_value = self.existing inspector.size.return_value = len(self.existing) def test_rolling_updates(self): self.group._nested = get_fake_nested_stack(self.existing) self.group.get_size = mock.Mock(return_value=self.count) self.group._name_blacklist = mock.Mock( return_value=set(self.black_listed)) tasks = 
self.group._replace(self.min_in_service, self.batch_size, self.pause_sec) self.assertEqual(self.tasks, len(tasks)) def tmpl_with_bad_updt_policy(): t = copy.deepcopy(template) rg = t['resources']['group1'] rg["update_policy"] = {"foo": {}} return t def tmpl_with_default_updt_policy(): t = copy.deepcopy(template) rg = t['resources']['group1'] rg["update_policy"] = {"rolling_update": {}} return t def tmpl_with_updt_policy(): t = copy.deepcopy(template) rg = t['resources']['group1'] rg["update_policy"] = {"rolling_update": { "min_in_service": "1", "max_batch_size": "2", "pause_time": "1" }} return t def get_fake_nested_stack(names): nested_t = ''' heat_template_version: 2015-04-30 description: Resource Group resources: ''' resource_snip = ''' '%s': type: OverwrittenFnGetRefIdType properties: foo: bar ''' resources = [nested_t] for res_name in names: resources.extend([resource_snip % res_name]) nested_t = ''.join(resources) return utils.parse_stack(template_format.parse(nested_t)) class RollingUpdatePolicyTest(common.HeatTestCase): def test_parse_without_update_policy(self): stack = utils.parse_stack(template) stack.validate() grp = stack['group1'] self.assertFalse(grp.update_policy['rolling_update']) def test_parse_with_update_policy(self): tmpl = tmpl_with_updt_policy() stack = utils.parse_stack(tmpl) stack.validate() tmpl_grp = tmpl['resources']['group1'] tmpl_policy = tmpl_grp['update_policy']['rolling_update'] tmpl_batch_sz = int(tmpl_policy['max_batch_size']) grp = stack['group1'] self.assertTrue(grp.update_policy) self.assertEqual(2, len(grp.update_policy)) self.assertIn('rolling_update', grp.update_policy) policy = grp.update_policy['rolling_update'] self.assertIsNotNone(policy) self.assertGreater(len(policy), 0) self.assertEqual(1, int(policy['min_in_service'])) self.assertEqual(tmpl_batch_sz, int(policy['max_batch_size'])) self.assertEqual(1, policy['pause_time']) def test_parse_with_default_update_policy(self): tmpl = tmpl_with_default_updt_policy() stack = utils.parse_stack(tmpl) stack.validate() grp = stack['group1'] self.assertTrue(grp.update_policy) self.assertEqual(2, len(grp.update_policy)) self.assertIn('rolling_update', grp.update_policy) policy = grp.update_policy['rolling_update'] self.assertIsNotNone(policy) self.assertGreater(len(policy), 0) self.assertEqual(0, int(policy['min_in_service'])) self.assertEqual(1, int(policy['max_batch_size'])) self.assertEqual(0, policy['pause_time']) def test_parse_with_bad_update_policy(self): tmpl = tmpl_with_bad_updt_policy() stack = utils.parse_stack(tmpl) error = self.assertRaises( exception.StackValidationFailed, stack.validate) self.assertIn("foo", six.text_type(error)) class RollingUpdatePolicyDiffTest(common.HeatTestCase): def validate_update_policy_diff(self, current, updated): # load current stack current_stack = utils.parse_stack(current) current_grp = current_stack['group1'] current_grp_json = current_grp.frozen_definition() updated_stack = utils.parse_stack(updated) updated_grp = updated_stack['group1'] updated_grp_json = updated_grp.t.freeze() # identify the template difference tmpl_diff = updated_grp.update_template_diff( updated_grp_json, current_grp_json) self.assertTrue(tmpl_diff.update_policy_changed()) prop_diff = current_grp.update_template_diff_properties( updated_grp.properties, current_grp.properties) # test application of the new update policy in handle_update current_grp._try_rolling_update = mock.Mock() current_grp._assemble_nested_for_size = mock.Mock() self.patchobject(scheduler.TaskRunner, 'start') 
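        # TaskRunner.start is stubbed so handle_update records the new
        # update_policy without actually launching the nested-stack update.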
current_grp.handle_update(updated_grp_json, tmpl_diff, prop_diff) self.assertEqual(updated_grp_json._update_policy or {}, current_grp.update_policy.data) def test_update_policy_added(self): self.validate_update_policy_diff(template, tmpl_with_updt_policy()) def test_update_policy_updated(self): updt_template = tmpl_with_updt_policy() grp = updt_template['resources']['group1'] policy = grp['update_policy']['rolling_update'] policy['min_in_service'] = '2' policy['max_batch_size'] = '4' policy['pause_time'] = '90' self.validate_update_policy_diff(tmpl_with_updt_policy(), updt_template) def test_update_policy_removed(self): self.validate_update_policy_diff(tmpl_with_updt_policy(), template) class RollingUpdateTest(common.HeatTestCase): def check_with_update(self, with_policy=False, with_diff=False): current = copy.deepcopy(template) self.current_stack = utils.parse_stack(current) self.current_grp = self.current_stack['group1'] current_grp_json = self.current_grp.frozen_definition() prop_diff, tmpl_diff = None, None updated = tmpl_with_updt_policy() if ( with_policy) else copy.deepcopy(template) if with_diff: res_def = updated['resources']['group1'][ 'properties']['resource_def'] res_def['properties']['Foo'] = 'baz' prop_diff = dict( {'count': 2, 'resource_def': {'properties': {'Foo': 'baz'}, 'type': 'OverwrittenFnGetRefIdType'}}) updated_stack = utils.parse_stack(updated) updated_grp = updated_stack['group1'] updated_grp_json = updated_grp.t.freeze() tmpl_diff = updated_grp.update_template_diff( updated_grp_json, current_grp_json) self.current_grp._replace = mock.Mock(return_value=[]) self.current_grp._assemble_nested = mock.Mock() self.patchobject(scheduler.TaskRunner, 'start') self.current_grp.handle_update(updated_grp_json, tmpl_diff, prop_diff) def test_update_without_policy_prop_diff(self): self.check_with_update(with_diff=True) self.assertTrue(self.current_grp._assemble_nested.called) def test_update_with_policy_prop_diff(self): self.check_with_update(with_policy=True, with_diff=True) self.current_grp._replace.assert_called_once_with(1, 2, 1) self.assertTrue(self.current_grp._assemble_nested.called) def test_update_time_not_sufficient(self): current = copy.deepcopy(template) self.stack = utils.parse_stack(current) self.current_grp = self.stack['group1'] self.stack.timeout_secs = mock.Mock(return_value=200) err = self.assertRaises(ValueError, self.current_grp._update_timeout, 3, 100) self.assertIn('The current update policy will result in stack update ' 'timeout.', six.text_type(err)) def test_update_time_sufficient(self): current = copy.deepcopy(template) self.stack = utils.parse_stack(current) self.current_grp = self.stack['group1'] self.stack.timeout_secs = mock.Mock(return_value=400) self.assertEqual(200, self.current_grp._update_timeout(3, 100)) class TestUtils(common.HeatTestCase): # 1. No existing no blacklist # 2. Existing with no blacklist # 3. 
Existing with blacklist scenarios = [ ('1', dict(existing=[], black_listed=[], count=0)), ('2', dict(existing=['0', '1'], black_listed=[], count=0)), ('3', dict(existing=['0', '1'], black_listed=['0'], count=1)), ('4', dict(existing=['0', '1'], black_listed=['1', '2'], count=1)) ] def test_count_black_listed(self): inspector = mock.Mock(spec=grouputils.GroupInspector) self.patchobject(grouputils.GroupInspector, 'from_parent_resource', return_value=inspector) inspector.member_names.return_value = self.existing stack = utils.parse_stack(template2) snip = stack.t.resource_definitions(stack)['group1'] resgrp = resource_group.ResourceGroup('test', snip, stack) resgrp._name_blacklist = mock.Mock(return_value=set(self.black_listed)) rcount = resgrp._count_black_listed(self.existing) self.assertEqual(self.count, rcount) class TestGetBatches(common.HeatTestCase): scenarios = [ ('4_4_1_0', dict(targ_cap=4, init_cap=4, bat_size=1, min_serv=0, batches=[ (4, 1, ['4']), (4, 1, ['3']), (4, 1, ['2']), (4, 1, ['1']), ])), ('4_4_1_4', dict(targ_cap=4, init_cap=4, bat_size=1, min_serv=4, batches=[ (5, 1, ['5']), (5, 1, ['4']), (5, 1, ['3']), (5, 1, ['2']), (5, 1, ['1']), (4, 0, []), ])), ('4_4_1_5', dict(targ_cap=4, init_cap=4, bat_size=1, min_serv=5, batches=[ (5, 1, ['5']), (5, 1, ['4']), (5, 1, ['3']), (5, 1, ['2']), (5, 1, ['1']), (4, 0, []), ])), ('4_4_2_0', dict(targ_cap=4, init_cap=4, bat_size=2, min_serv=0, batches=[ (4, 2, ['4', '3']), (4, 2, ['2', '1']), ])), ('4_4_2_4', dict(targ_cap=4, init_cap=4, bat_size=2, min_serv=4, batches=[ (6, 2, ['6', '5']), (6, 2, ['4', '3']), (6, 2, ['2', '1']), (4, 0, []), ])), ('5_5_2_0', dict(targ_cap=5, init_cap=5, bat_size=2, min_serv=0, batches=[ (5, 2, ['5', '4']), (5, 2, ['3', '2']), (5, 1, ['1']), ])), ('5_5_2_4', dict(targ_cap=5, init_cap=5, bat_size=2, min_serv=4, batches=[ (6, 2, ['6', '5']), (6, 2, ['4', '3']), (6, 2, ['2', '1']), (5, 0, []), ])), ('3_3_2_0', dict(targ_cap=3, init_cap=3, bat_size=2, min_serv=0, batches=[ (3, 2, ['3', '2']), (3, 1, ['1']), ])), ('3_3_2_4', dict(targ_cap=3, init_cap=3, bat_size=2, min_serv=4, batches=[ (5, 2, ['5', '4']), (5, 2, ['3', '2']), (4, 1, ['1']), (3, 0, []), ])), ('4_4_4_0', dict(targ_cap=4, init_cap=4, bat_size=4, min_serv=0, batches=[ (4, 4, ['4', '3', '2', '1']), ])), ('4_4_5_0', dict(targ_cap=4, init_cap=4, bat_size=5, min_serv=0, batches=[ (4, 4, ['4', '3', '2', '1']), ])), ('4_4_4_1', dict(targ_cap=4, init_cap=4, bat_size=4, min_serv=1, batches=[ (5, 4, ['5', '4', '3', '2']), (4, 1, ['1']), ])), ('4_4_6_1', dict(targ_cap=4, init_cap=4, bat_size=6, min_serv=1, batches=[ (5, 4, ['5', '4', '3', '2']), (4, 1, ['1']), ])), ('4_4_4_2', dict(targ_cap=4, init_cap=4, bat_size=4, min_serv=2, batches=[ (6, 4, ['6', '5', '4', '3']), (4, 2, ['2', '1']), ])), ('4_4_4_4', dict(targ_cap=4, init_cap=4, bat_size=4, min_serv=4, batches=[ (8, 4, ['8', '7', '6', '5']), (8, 4, ['4', '3', '2', '1']), (4, 0, []), ])), ('4_4_5_6', dict(targ_cap=4, init_cap=4, bat_size=5, min_serv=6, batches=[ (8, 4, ['8', '7', '6', '5']), (8, 4, ['4', '3', '2', '1']), (4, 0, []), ])), ('4_7_1_0', dict(targ_cap=4, init_cap=7, bat_size=1, min_serv=0, batches=[ (4, 1, ['4']), (4, 1, ['3']), (4, 1, ['2']), (4, 1, ['1']), ])), ('4_7_1_4', dict(targ_cap=4, init_cap=7, bat_size=1, min_serv=4, batches=[ (5, 1, ['4']), (5, 1, ['3']), (5, 1, ['2']), (5, 1, ['1']), (4, 0, []), ])), ('4_7_1_5', dict(targ_cap=4, init_cap=7, bat_size=1, min_serv=5, batches=[ (5, 1, ['4']), (5, 1, ['3']), (5, 1, ['2']), (5, 1, ['1']), (4, 0, []), ])), ('4_7_2_0', dict(targ_cap=4, 
init_cap=7, bat_size=2, min_serv=0, batches=[ (4, 2, ['4', '3']), (4, 2, ['2', '1']), ])), ('4_7_2_4', dict(targ_cap=4, init_cap=7, bat_size=2, min_serv=4, batches=[ (6, 2, ['4', '3']), (6, 2, ['2', '1']), (4, 0, []), ])), ('5_7_2_0', dict(targ_cap=5, init_cap=7, bat_size=2, min_serv=0, batches=[ (5, 2, ['5', '4']), (5, 2, ['3', '2']), (5, 1, ['1']), ])), ('5_7_2_4', dict(targ_cap=5, init_cap=7, bat_size=2, min_serv=4, batches=[ (6, 2, ['5', '4']), (6, 2, ['3', '2']), (5, 1, ['1']), ])), ('4_7_4_4', dict(targ_cap=4, init_cap=7, bat_size=4, min_serv=4, batches=[ (8, 4, ['8', '4', '3', '2']), (5, 1, ['1']), (4, 0, []), ])), ('4_7_5_6', dict(targ_cap=4, init_cap=7, bat_size=5, min_serv=6, batches=[ (8, 4, ['8', '4', '3', '2']), (5, 1, ['1']), (4, 0, []), ])), ('6_4_1_0', dict(targ_cap=6, init_cap=4, bat_size=1, min_serv=0, batches=[ (5, 1, ['5']), (6, 1, ['6']), (6, 1, ['4']), (6, 1, ['3']), (6, 1, ['2']), (6, 1, ['1']), ])), ('6_4_1_4', dict(targ_cap=6, init_cap=4, bat_size=1, min_serv=4, batches=[ (5, 1, ['5']), (6, 1, ['6']), (6, 1, ['4']), (6, 1, ['3']), (6, 1, ['2']), (6, 1, ['1']), ])), ('6_4_1_5', dict(targ_cap=6, init_cap=4, bat_size=1, min_serv=5, batches=[ (5, 1, ['5']), (6, 1, ['6']), (6, 1, ['4']), (6, 1, ['3']), (6, 1, ['2']), (6, 1, ['1']), ])), ('6_4_2_0', dict(targ_cap=6, init_cap=4, bat_size=2, min_serv=0, batches=[ (6, 2, ['5', '6']), (6, 2, ['4', '3']), (6, 2, ['2', '1']), ])), ('6_4_2_4', dict(targ_cap=6, init_cap=4, bat_size=2, min_serv=4, batches=[ (6, 2, ['5', '6']), (6, 2, ['4', '3']), (6, 2, ['2', '1']), ])), ('6_5_2_0', dict(targ_cap=6, init_cap=5, bat_size=2, min_serv=0, batches=[ (6, 2, ['6', '5']), (6, 2, ['4', '3']), (6, 2, ['2', '1']), ])), ('6_5_2_4', dict(targ_cap=6, init_cap=5, bat_size=2, min_serv=4, batches=[ (6, 2, ['6', '5']), (6, 2, ['4', '3']), (6, 2, ['2', '1']), ])), ('6_3_2_0', dict(targ_cap=6, init_cap=3, bat_size=2, min_serv=0, batches=[ (5, 2, ['4', '5']), (6, 2, ['6', '3']), (6, 2, ['2', '1']), ])), ('6_3_2_4', dict(targ_cap=6, init_cap=3, bat_size=2, min_serv=4, batches=[ (5, 2, ['4', '5']), (6, 2, ['6', '3']), (6, 2, ['2', '1']), ])), ('6_4_4_0', dict(targ_cap=6, init_cap=4, bat_size=4, min_serv=0, batches=[ (6, 4, ['5', '6', '4', '3']), (6, 2, ['2', '1']), ])), ('6_4_5_0', dict(targ_cap=6, init_cap=4, bat_size=5, min_serv=0, batches=[ (6, 5, ['5', '6', '4', '3', '2']), (6, 1, ['1']), ])), ('6_4_4_1', dict(targ_cap=6, init_cap=4, bat_size=4, min_serv=1, batches=[ (6, 4, ['5', '6', '4', '3']), (6, 2, ['2', '1']), ])), ('6_4_6_1', dict(targ_cap=6, init_cap=4, bat_size=6, min_serv=1, batches=[ (7, 6, ['5', '6', '7', '4', '3', '2']), (6, 1, ['1']), ])), ('6_4_4_2', dict(targ_cap=6, init_cap=4, bat_size=4, min_serv=2, batches=[ (6, 4, ['5', '6', '4', '3']), (6, 2, ['2', '1']), ])), ('6_4_4_4', dict(targ_cap=6, init_cap=4, bat_size=4, min_serv=4, batches=[ (8, 4, ['8', '7', '6', '5']), (8, 4, ['4', '3', '2', '1']), (6, 0, []), ])), ('6_4_5_6', dict(targ_cap=6, init_cap=4, bat_size=5, min_serv=6, batches=[ (9, 5, ['9', '8', '7', '6', '5']), (10, 4, ['10', '4', '3', '2']), (7, 1, ['1']), (6, 0, []), ])), ] def setUp(self): super(TestGetBatches, self).setUp() self.stack = utils.parse_stack(template) self.grp = self.stack['group1'] self.grp._name_blacklist = mock.Mock(return_value={'0'}) def test_get_batches(self): batches = list(self.grp._get_batches(self.targ_cap, self.init_cap, self.bat_size, self.min_serv)) self.assertEqual([(s, u) for s, u, n in self.batches], batches) def test_assemble(self): old_def = rsrc_defn.ResourceDefinition( None, 
"OverwrittenFnGetRefIdType", {"foo": "baz"}) new_def = rsrc_defn.ResourceDefinition( None, "OverwrittenFnGetRefIdType", {"foo": "bar"}) resources = [(str(i), old_def) for i in range(self.init_cap + 1)] self.grp.get_size = mock.Mock(return_value=self.targ_cap) self.patchobject(grouputils, 'get_member_definitions', return_value=resources) self.grp.build_resource_definition = mock.Mock(return_value=new_def) all_updated_names = set() for size, max_upd, names in self.batches: template = self.grp._assemble_for_rolling_update(size, max_upd, names) res_dict = template.resource_definitions(self.stack) expected_names = set(map(str, range(1, size + 1))) self.assertEqual(expected_names, set(res_dict)) all_updated_names &= expected_names all_updated_names |= set(names) updated = set(n for n, v in res_dict.items() if v != old_def) self.assertEqual(all_updated_names, updated) resources[:] = sorted(res_dict.items(), key=lambda i: int(i[0])) heat-10.0.2/heat/tests/openstack/heat/test_software_component.py0000666000175000017500000002403213343562340025063 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import mock import six from heat.common import exception as exc from heat.common import template_format from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils class SoftwareComponentTest(common.HeatTestCase): def setUp(self): super(SoftwareComponentTest, self).setUp() self.ctx = utils.dummy_context() tpl = ''' heat_template_version: 2013-05-23 resources: mysql_component: type: OS::Heat::SoftwareComponent properties: configs: - actions: [CREATE] config: | #!/bin/bash echo "Create MySQL" tool: script - actions: [UPDATE] config: | #!/bin/bash echo "Update MySQL" tool: script inputs: - name: mysql_port outputs: - name: root_password ''' self.template = template_format.parse(tpl) self.stack = stack.Stack( self.ctx, 'software_component_test_stack', template.Template(self.template)) self.component = self.stack['mysql_component'] self.rpc_client = mock.MagicMock() self.component._rpc_client = self.rpc_client @contextlib.contextmanager def exc_filter(*args): try: yield except exc.NotFound: pass self.rpc_client.ignore_error_by_name.side_effect = exc_filter def test_handle_create(self): config_id = 'c8a19429-7fde-47ea-a42f-40045488226c' value = {'id': config_id} self.rpc_client.create_software_config.return_value = value props = dict(self.component.properties) self.component.handle_create() self.rpc_client.create_software_config.assert_called_with( self.ctx, group='component', name=None, inputs=props['inputs'], outputs=props['outputs'], config={'configs': props['configs']}, options=None) self.assertEqual(config_id, self.component.resource_id) def test_handle_delete(self): self.resource_id = None self.assertIsNone(self.component.handle_delete()) config_id = 'c8a19429-7fde-47ea-a42f-40045488226c' self.component.resource_id = config_id self.rpc_client.delete_software_config.return_value = None 
self.assertIsNone(self.component.handle_delete()) self.rpc_client.delete_software_config.side_effect = exc.NotFound self.assertIsNone(self.component.handle_delete()) def test_resolve_attribute(self): self.assertIsNone(self.component._resolve_attribute('others')) self.component.resource_id = None self.assertIsNone(self.component._resolve_attribute('configs')) self.component.resource_id = 'c8a19429-7fde-47ea-a42f-40045488226c' configs = self.template['resources']['mysql_component' ]['properties']['configs'] # configs list is stored in 'config' property of SoftwareConfig value = {'config': {'configs': configs}} self.rpc_client.show_software_config.return_value = value self.assertEqual(configs, self.component._resolve_attribute('configs')) self.rpc_client.show_software_config.side_effect = exc.NotFound self.assertIsNone(self.component._resolve_attribute('configs')) class SoftwareComponentValidationTest(common.HeatTestCase): scenarios = [ ( 'component_full', dict(snippet=''' component: type: OS::Heat::SoftwareComponent properties: configs: - actions: [CREATE] config: | #!/bin/bash echo CREATE $foo tool: script inputs: - name: foo outputs: - name: bar options: opt1: blah ''', err=None, err_msg=None) ), ( 'no_input_output_options', dict(snippet=''' component: type: OS::Heat::SoftwareComponent properties: configs: - actions: [CREATE] config: | #!/bin/bash echo CREATE $foo tool: script ''', err=None, err_msg=None) ), ( 'wrong_property_config', dict(snippet=''' component: type: OS::Heat::SoftwareComponent properties: config: #!/bin/bash configs: - actions: [CREATE] config: | #!/bin/bash echo CREATE $foo tool: script ''', err=exc.StackValidationFailed, err_msg='Unknown Property config') ), ( 'missing_configs', dict(snippet=''' component: type: OS::Heat::SoftwareComponent properties: inputs: - name: foo ''', err=exc.StackValidationFailed, err_msg='Property configs not assigned') ), ( 'empty_configs', dict(snippet=''' component: type: OS::Heat::SoftwareComponent properties: configs: ''', err=exc.StackValidationFailed, err_msg='resources.component.properties.configs: ' 'length (0) is out of range (min: 1, max: None)') ), ( 'invalid_configs', dict(snippet=''' component: type: OS::Heat::SoftwareComponent properties: configs: actions: [CREATE] config: #!/bin/bash tool: script ''', err=exc.StackValidationFailed, err_msg='is not a list') ), ( 'config_empty_actions', dict(snippet=''' component: type: OS::Heat::SoftwareComponent properties: configs: - actions: [] config: #!/bin/bash tool: script ''', err=exc.StackValidationFailed, err_msg='component.properties.configs[0].actions: ' 'length (0) is out of range (min: 1, max: None)') ), ( 'multiple_configs_per_action_single', dict(snippet=''' component: type: OS::Heat::SoftwareComponent properties: configs: - actions: [CREATE] config: #!/bin/bash tool: script - actions: [CREATE] config: #!/bin/bash tool: script ''', err=exc.StackValidationFailed, err_msg='Defining more than one configuration for the same ' 'action in SoftwareComponent "component" is not ' 'allowed.') ), ( 'multiple_configs_per_action_overlapping_list', dict(snippet=''' component: type: OS::Heat::SoftwareComponent properties: configs: - actions: [CREATE, UPDATE, RESUME] config: #!/bin/bash tool: script - actions: [UPDATE] config: #!/bin/bash tool: script ''', err=exc.StackValidationFailed, err_msg='Defining more than one configuration for the same ' 'action in SoftwareComponent "component" is not ' 'allowed.') ), ] def setUp(self): super(SoftwareComponentValidationTest, self).setUp() self.ctx = 
    def setUp(self):
        super(SoftwareComponentValidationTest, self).setUp()
        self.ctx = utils.dummy_context()

        tpl = '''
        heat_template_version: 2013-05-23
        resources:
          %s
        ''' % self.snippet

        self.template = template_format.parse(tpl)
        self.stack = stack.Stack(
            self.ctx, 'software_component_test_stack',
            template.Template(self.template))
        self.component = self.stack['component']
        self.component._rpc_client = mock.MagicMock()

    def test_properties_schema(self):
        if self.err:
            err = self.assertRaises(self.err, self.stack.validate)
            if self.err_msg:
                self.assertIn(self.err_msg, six.text_type(err))
        else:
            self.assertIsNone(self.stack.validate())
heat-10.0.2/heat/tests/openstack/manila/0000775000175000017500000000000013343562672020063 5ustar zuulzuul00000000000000
heat-10.0.2/heat/tests/openstack/manila/test_share_type.py0000666000175000017500000001050413343562340023631 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import copy

import mock

from heat.common import template_format
from heat.engine.resources.openstack.manila import share_type as mshare_type
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils


manila_template = """
heat_template_version: 2013-05-23
resources:
  test_share_type:
    type: OS::Manila::ShareType
    properties:
      name: test_share_type
      driver_handles_share_servers: True
      extra_specs: {"test":"test"}
      is_public: False
      snapshot_support: True
"""


class DummyShare(object):
    def __init__(self):
        self.to_dict = lambda: {'attr': 'val'}


class ManilaShareTypeTest(common.HeatTestCase):

    def _init_share(self, stack_name, share_type_name="test_share_type"):
        # parse stack
        tmp = template_format.parse(manila_template)
        self.stack = utils.parse_stack(tmp, stack_name=stack_name)
        defns = self.stack.t.resource_definitions(self.stack)
        res_def = defns["test_share_type"]
        share_type = mshare_type.ManilaShareType(
            share_type_name, res_def, self.stack)
        # mock clients and plugins
        mock_client = mock.MagicMock()
        client = mock.MagicMock(return_value=mock_client)
        share_type.client = client
        return share_type

    def test_share_type_create(self):
        share_type = self._init_share("stack_share_type_create")
        fake_share_type = mock.MagicMock(id="type_id")
        share_type.client().share_types.create.return_value = fake_share_type
        scheduler.TaskRunner(share_type.create)()
        self.assertEqual("type_id", share_type.resource_id)
        share_type.client().share_types.create.assert_called_once_with(
            name="test_share_type",
            spec_driver_handles_share_servers=True,
            is_public=False,
            spec_snapshot_support=True)
        fake_share_type.set_keys.assert_called_once_with({"test": "test"})
        self.assertEqual('share_types', share_type.entity)

    def test_share_type_update(self):
        share_type = self._init_share("stack_share_type_update")
        share_type.client().share_types.create.return_value = mock.MagicMock(
            id="type_id")
        fake_share_type = mock.MagicMock()
        share_type.client().share_types.get.return_value = fake_share_type
        scheduler.TaskRunner(share_type.create)()
        updated_props = copy.deepcopy(share_type.properties.data)
        updated_props[mshare_type.ManilaShareType.EXTRA_SPECS] = {
            "fake_key": "fake_value"}
        after = rsrc_defn.ResourceDefinition(share_type.name,
                                             share_type.type(),
                                             updated_props)
        scheduler.TaskRunner(share_type.update, after)()
        fake_share_type.unset_keys.assert_called_once_with({"test": "test"})
        fake_share_type.set_keys.assert_called_with(
            updated_props[mshare_type.ManilaShareType.EXTRA_SPECS])

    def test_get_live_state(self):
        share_type = self._init_share("stack_share_type_update")
        value = mock.MagicMock()
        value.to_dict.return_value = {
            'os-share-type-access:is_public': True,
            'required_extra_specs': {},
            'extra_specs': {'test': 'test',
                            'snapshot_support': 'True',
                            'driver_handles_share_servers': 'True'},
            'id': 'cc76cb22-75fe-4e6e-b618-7c345b2444e3',
            'name': 'test'}
        share_type.client().share_types.get.return_value = value

        reality = share_type.get_live_state(share_type.properties)
        expected = {
            'extra_specs': {'test': 'test'}
        }

        self.assertEqual(set(expected.keys()), set(reality.keys()))
        for key in expected:
            self.assertEqual(expected[key], reality[key])
heat-10.0.2/heat/tests/openstack/manila/test_share_network.py0000666000175000017500000003207113343562351024346 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock

from heat.common import exception
from heat.common import template_format
from heat.engine.resources.openstack.manila import share_network
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils


stack_template = """
heat_template_version: 2015-04-30
resources:
  share_network:
    type: OS::Manila::ShareNetwork
    properties:
      name: 1
      description: 2
      neutron_network: 3
      neutron_subnet: 4
      security_services: [6, 7]
"""


class DummyShareNetwork(object):
    def __init__(self):
        self.id = '42'
        self.segmentation_id = '2'
        self.cidr = '3'
        self.ip_version = '5'
        self.network_type = '6'


class ShareNetworkWithNova(share_network.ManilaShareNetwork):
    def is_using_neutron(self):
        return False


class ManilaShareNetworkTest(common.HeatTestCase):

    def setUp(self):
        super(ManilaShareNetworkTest, self).setUp()
        self.tmpl = template_format.parse(stack_template)
        self.stack = utils.parse_stack(self.tmpl)
        resource_defns = self.stack.t.resource_definitions(self.stack)
        self.rsrc_defn = resource_defns['share_network']

        self.client = mock.Mock()
        self.patchobject(share_network.ManilaShareNetwork, 'client',
                         return_value=self.client)
        self.client_plugin = mock.Mock()

        def resolve_neutron(resource_type, name):
            return name

        self.client_plugin.find_resourceid_by_name_or_id.side_effect = (
            resolve_neutron
        )
        self.patchobject(share_network.ManilaShareNetwork, 'client_plugin',
                         return_value=self.client_plugin)

        def return_network(name):
            return '3'

        self.client_plugin.network_id_from_subnet_id.side_effect = (
            return_network
        )

        self.stub_NetworkConstraint_validate()
        self.stub_SubnetConstraint_validate()
    def _create_network(self, name, snippet, stack, use_neutron=True):
        if not use_neutron:
            net = ShareNetworkWithNova(name, snippet, stack)
        else:
            net = share_network.ManilaShareNetwork(name, snippet, stack)

        self.client.share_networks.create.return_value = DummyShareNetwork()
        self.client.share_networks.get.return_value = DummyShareNetwork()

        def get_security_service(id):
            return mock.Mock(id=id)

        self.client_plugin.get_security_service.side_effect = (
            get_security_service)

        scheduler.TaskRunner(net.create)()
        return net

    def test_create(self, rsrc_defn=None, stack=None):
        if rsrc_defn is None:
            rsrc_defn = self.rsrc_defn
        if stack is None:
            stack = self.stack
        net = self._create_network('share_network', rsrc_defn, stack)
        self.assertEqual((net.CREATE, net.COMPLETE), net.state)
        self.assertEqual('42', net.resource_id)

        net.client().share_networks.create.assert_called_with(
            name='1', description='2',
            neutron_net_id='3', neutron_subnet_id='4',
            nova_net_id=None)
        calls = [mock.call('42', '6'), mock.call('42', '7')]
        net.client().share_networks.add_security_service.assert_has_calls(
            calls, any_order=True)
        self.assertEqual('share_networks', net.entity)

    def test_create_with_nova(self):
        t = template_format.parse(stack_template)
        t['resources']['share_network']['properties']['nova_network'] = 'n'
        del t['resources']['share_network']['properties']['neutron_network']
        del t['resources']['share_network']['properties']['neutron_subnet']
        stack = utils.parse_stack(t)
        rsrc_defn = stack.t.resource_definitions(stack)['share_network']
        net = self._create_network('share_network', rsrc_defn, stack,
                                   use_neutron=False)
        self.assertEqual((net.CREATE, net.COMPLETE), net.state)
        self.assertEqual('42', net.resource_id)

        net.client().share_networks.create.assert_called_with(
            name='1', description='2',
            neutron_net_id=None, neutron_subnet_id=None,
            nova_net_id='n')
        calls = [mock.call('42', '6'), mock.call('42', '7')]
        net.client().share_networks.add_security_service.assert_has_calls(
            calls, any_order=True)
        self.assertEqual('share_networks', net.entity)

    def test_create_without_network(self):
        t = template_format.parse(stack_template)
        del t['resources']['share_network']['properties']['neutron_network']
        stack = utils.parse_stack(t)
        rsrc_defn = stack.t.resource_definitions(stack)['share_network']
        net = self._create_network('share_network', rsrc_defn, stack)
        self.assertEqual((net.CREATE, net.COMPLETE), net.state)
        self.assertEqual('42', net.resource_id)

        net.client().share_networks.create.assert_called_with(
            name='1', description='2',
            neutron_net_id='3', neutron_subnet_id='4',
            nova_net_id=None)
        calls = [mock.call('42', '6'), mock.call('42', '7')]
        net.client().share_networks.add_security_service.assert_has_calls(
            calls, any_order=True)
        self.assertEqual('share_networks', net.entity)

    def test_create_fail(self):
        self.client.share_networks.add_security_service.side_effect = (
            Exception())
        self.assertRaises(
            exception.ResourceFailure,
            self._create_network, 'share_network',
            self.rsrc_defn, self.stack)

    def test_validate_conflicting_net_subnet(self):
        t = template_format.parse(stack_template)
        t['resources']['share_network']['properties']['neutron_network'] = '5'
        stack = utils.parse_stack(t)
        rsrc_defn = stack.t.resource_definitions(stack)['share_network']
        net = self._create_network('share_network', rsrc_defn, stack)
        net.is_using_neutron = mock.Mock(return_value=True)
        msg = ('Provided neutron_subnet does not belong '
               'to provided neutron_network.')
        self.assertRaisesRegex(exception.StackValidationFailed, msg,
                               net.validate)
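    # NOTE (added comment): the update tests below depend on the initial
    # mocked security services ['6', '7']; moving to ['7', '8'] must result
    # in exactly one add_security_service('42', '8') and one
    # remove_security_service('42', '6') call.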
    def test_update(self):
        net = self._create_network('share_network', self.rsrc_defn,
                                   self.stack)
        props = self.tmpl['resources']['share_network']['properties'].copy()
        props['name'] = 'a'
        props['description'] = 'b'
        props['neutron_network'] = 'c'
        props['neutron_subnet'] = 'd'
        props['security_services'] = ['7', '8']
        update_template = net.t.freeze(properties=props)
        scheduler.TaskRunner(net.update, update_template)()
        self.assertEqual((net.UPDATE, net.COMPLETE), net.state)

        exp_args = {
            'name': 'a',
            'description': 'b',
            'neutron_net_id': 'c',
            'neutron_subnet_id': 'd',
            'nova_net_id': None
        }
        net.client().share_networks.update.assert_called_with('42',
                                                              **exp_args)
        net.client().share_networks.add_security_service.assert_called_with(
            '42', '8')
        net.client().share_networks.remove_security_service.assert_called_with(
            '42', '6')

    def test_update_security_services(self):
        net = self._create_network('share_network', self.rsrc_defn,
                                   self.stack)
        props = self.tmpl['resources']['share_network']['properties'].copy()
        props['security_services'] = ['7', '8']
        update_template = net.t.freeze(properties=props)
        scheduler.TaskRunner(net.update, update_template)()
        self.assertEqual((net.UPDATE, net.COMPLETE), net.state)

        called = net.client().share_networks.update.called
        self.assertFalse(called)
        net.client().share_networks.add_security_service.assert_called_with(
            '42', '8')
        net.client().share_networks.remove_security_service.assert_called_with(
            '42', '6')

    def test_update_fail(self):
        net = self._create_network('share_network', self.rsrc_defn,
                                   self.stack)
        self.client.share_networks.remove_security_service.side_effect = (
            Exception())
        props = self.tmpl['resources']['share_network']['properties'].copy()
        props['security_services'] = []
        update_template = net.t.freeze(properties=props)
        run = scheduler.TaskRunner(net.update, update_template)
        self.assertRaises(exception.ResourceFailure, run)

    def test_nova_net_neutron_net_conflict(self):
        t = template_format.parse(stack_template)
        t['resources']['share_network']['properties']['nova_network'] = 1
        stack = utils.parse_stack(t)
        rsrc_defn = stack.t.resource_definitions(stack)['share_network']
        net = self._create_network('share_network', rsrc_defn, stack)
        msg = ('Cannot define the following properties at the same time: '
               'neutron_network, nova_network.')
        self.assertRaisesRegex(exception.ResourcePropertyConflict, msg,
                               net.validate)

    def test_nova_net_neutron_subnet_conflict(self):
        t = template_format.parse(stack_template)
        t['resources']['share_network']['properties']['nova_network'] = 1
        del t['resources']['share_network']['properties']['neutron_network']
        stack = utils.parse_stack(t)
        rsrc_defn = stack.t.resource_definitions(stack)['share_network']
        net = self._create_network('share_network', rsrc_defn, stack)
        msg = ('Cannot define the following properties at the same time: '
               'neutron_subnet, nova_network.')
        self.assertRaisesRegex(exception.ResourcePropertyConflict, msg,
                               net.validate)

    def test_nova_net_while_using_neutron(self):
        t = template_format.parse(stack_template)
        t['resources']['share_network']['properties']['nova_network'] = 'n'
        del t['resources']['share_network']['properties']['neutron_network']
        del t['resources']['share_network']['properties']['neutron_subnet']
        stack = utils.parse_stack(t)
        rsrc_defn = stack.t.resource_definitions(stack)['share_network']
        net = self._create_network('share_network', rsrc_defn, stack)
        net.is_using_neutron = mock.Mock(return_value=True)
        msg = ('With Neutron enabled you need to pass Neutron network '
               'and Neutron subnet instead of Nova network')
        self.assertRaisesRegex(exception.StackValidationFailed, msg,
                               net.validate)
    def test_neutron_net_without_neutron_subnet(self):
        t = template_format.parse(stack_template)
        del t['resources']['share_network']['properties']['neutron_subnet']
        stack = utils.parse_stack(t)
        rsrc_defn = stack.t.resource_definitions(stack)['share_network']
        net = self._create_network('share_network', rsrc_defn, stack)
        msg = ('neutron_network cannot be specified without neutron_subnet.')
        self.assertRaisesRegex(exception.ResourcePropertyDependency,
                               msg, net.validate)

    def test_attributes(self):
        net = self._create_network('share_network', self.rsrc_defn,
                                   self.stack)
        self.assertEqual('2', net.FnGetAtt('segmentation_id'))
        self.assertEqual('3', net.FnGetAtt('cidr'))
        self.assertEqual('5', net.FnGetAtt('ip_version'))
        self.assertEqual('6', net.FnGetAtt('network_type'))

    def test_get_live_state(self):
        net = self._create_network('share_network', self.rsrc_defn,
                                   self.stack)
        value = mock.MagicMock()
        value.to_dict.return_value = {
            'name': 'test',
            'segmentation_id': '123',
            'created_at': '2016-02-02T18:40:24.000000',
            'neutron_subnet_id': None,
            'updated_at': None,
            'network_type': None,
            'neutron_net_id': '4321',
            'ip_version': None,
            'nova_net_id': None,
            'cidr': None,
            'project_id': '221b4f51e9bd4f659845f657a3051a46',
            'id': '4000d1c7-1017-4ea2-a4a1-951d8b63857a',
            'description': None}
        self.client.share_networks.get.return_value = value
        self.client.security_services.list.return_value = [mock.Mock(id='6'),
                                                           mock.Mock(id='7')]

        reality = net.get_live_state(net.properties)
        expected = {
            'name': 'test',
            'neutron_subnet': None,
            'neutron_network': '4321',
            'nova_network': None,
            'description': None,
            'security_services': ['6', '7']
        }

        self.assertEqual(expected, reality)
heat-10.0.2/heat/tests/openstack/manila/test_security_service.py0000666000175000017500000001207013343562340025055 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
import mock
import six

from heat.common import exception
from heat.common import template_format
from heat.engine import resource
from heat.engine.resources.openstack.manila import security_service
from heat.engine import scheduler
from heat.engine import template
from heat.tests import common
from heat.tests import utils


stack_template = '''
heat_template_version: 2013-05-23
resources:
  security_service:
    type: OS::Manila::SecurityService
    properties:
      name: my_security_service
      domain: test-domain
      dns_ip: 1.1.1.1
      type: ldap
      server: test-server
      user: test-user
      password: test-password
'''

stack_template_update = '''
heat_template_version: 2013-05-23
resources:
  security_service:
    type: OS::Manila::SecurityService
    properties:
      name: fake_security_service
      domain: fake-domain
      dns_ip: 1.1.1.1
      type: ldap
      server: fake-server
'''

stack_template_update_replace = '''
heat_template_version: 2013-05-23
resources:
  security_service:
    type: OS::Manila::SecurityService
    properties:
      name: my_security_service
      domain: test-domain
      dns_ip: 1.1.1.1
      type: kerberos
      server: test-server
      user: test-user
      password: test-password
'''


class ManilaSecurityServiceTest(common.HeatTestCase):

    def setUp(self):
        super(ManilaSecurityServiceTest, self).setUp()
        t = template_format.parse(stack_template)
        self.stack = utils.parse_stack(t)
        resource_defns = self.stack.t.resource_definitions(self.stack)
        self.rsrc_defn = resource_defns['security_service']

        self.client = mock.Mock()
        self.patchobject(security_service.SecurityService, 'client',
                         return_value=self.client)

    def _create_resource(self, name, snippet, stack):
        ss = security_service.SecurityService(name, snippet, stack)
        value = mock.MagicMock(id='12345')
        self.client.security_services.create.return_value = value
        self.client.security_services.get.return_value = value
        scheduler.TaskRunner(ss.create)()
        args = self.client.security_services.create.call_args[1]
        self.assertEqual(self.rsrc_defn._properties, args)
        self.assertEqual('12345', ss.resource_id)
        return ss

    def test_create(self):
        ct = self._create_resource('security_service', self.rsrc_defn,
                                   self.stack)
        expected_state = (ct.CREATE, ct.COMPLETE)
        self.assertEqual(expected_state, ct.state)
        self.assertEqual('security_services', ct.entity)

    def test_create_failed(self):
        ss = security_service.SecurityService('security_service',
                                              self.rsrc_defn, self.stack)
        self.client.security_services.create.side_effect = Exception('error')

        exc = self.assertRaises(exception.ResourceFailure,
                                scheduler.TaskRunner(ss.create))
        expected_state = (ss.CREATE, ss.FAILED)
        self.assertEqual(expected_state, ss.state)
        self.assertIn('Exception: resources.security_service: error',
                      six.text_type(exc))

    def test_update(self):
        ss = self._create_resource('security_service', self.rsrc_defn,
                                   self.stack)
        t = template_format.parse(stack_template_update)
        rsrc_defns = template.Template(t).resource_definitions(self.stack)
        new_ss = rsrc_defns['security_service']
        scheduler.TaskRunner(ss.update, new_ss)()
        args = {
            'domain': 'fake-domain',
            'password': None,
            'user': None,
            'server': 'fake-server',
            'name': 'fake_security_service'
        }
        self.client.security_services.update.assert_called_once_with(
            '12345', **args)
        self.assertEqual((ss.UPDATE, ss.COMPLETE), ss.state)
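    # NOTE (added comment): stack_template_update_replace changes the
    # 'type' property (ldap -> kerberos), which is not updatable, so the
    # test below expects resource.UpdateReplace instead of a call to the
    # update API.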
    def test_update_replace(self):
        ss = self._create_resource('security_service', self.rsrc_defn,
                                   self.stack)
        t = template_format.parse(stack_template_update_replace)
        rsrc_defns = template.Template(t).resource_definitions(self.stack)
        new_ss = rsrc_defns['security_service']
        self.assertEqual(0, self.client.security_services.update.call_count)
        err = self.assertRaises(resource.UpdateReplace,
                                scheduler.TaskRunner(ss.update, new_ss))
        msg = 'The Resource security_service requires replacement.'
        self.assertEqual(msg, six.text_type(err))
heat-10.0.2/heat/tests/openstack/manila/__init__.py0000666000175000017500000000000013343562340022154 0ustar zuulzuul00000000000000
heat-10.0.2/heat/tests/openstack/manila/test_share.py0000666000175000017500000002727213343562340022602 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import collections
import copy

import mock
import six

from heat.common import exception
from heat.common import template_format
from heat.engine.resources.openstack.manila import share as mshare
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils


manila_template = """
heat_template_version: 2015-04-30
resources:
  test_share:
    type: OS::Manila::Share
    properties:
      share_protocol: NFS
      size: 1
      access_rules:
        - access_to: 127.0.0.1
          access_type: ip
          access_level: ro
      name: basic_test_share
      description: basic test share
      is_public: True
      metadata: {"key": "value"}
"""


class DummyShare(object):
    def __init__(self):
        self.availability_zone = 'az'
        self.host = 'host'
        self.export_locations = 'el'
        self.share_server_id = 'id'
        self.created_at = 'ca'
        self.status = 's'
        self.project_id = 'p_id'


class ManilaShareTest(common.HeatTestCase):

    def setUp(self):
        super(ManilaShareTest, self).setUp()

        self.fake_share = mock.MagicMock(id="test_share_id")
        self.available_share = mock.MagicMock(
            id="test_share_id",
            status=mshare.ManilaShare.STATUS_AVAILABLE)
        self.failed_share = mock.MagicMock(
            id="test_share_id",
            status=mshare.ManilaShare.STATUS_ERROR)
        self.deleting_share = mock.MagicMock(
            id="test_share_id",
            status=mshare.ManilaShare.STATUS_DELETING)

    def _init_share(self, stack_name):
        tmp = template_format.parse(manila_template)
        self.stack = utils.parse_stack(tmp, stack_name=stack_name)
        res_def = self.stack.t.resource_definitions(self.stack)["test_share"]
        share = mshare.ManilaShare("test_share", res_def, self.stack)
        self.patchobject(share, 'data_set')

        # replace clients and plugins with mocks
        mock_client = mock.MagicMock()
        client = mock.MagicMock(return_value=mock_client)
        share.client = client
        return share

    def _create_share(self, stack_name):
        share = self._init_share(stack_name)
        share.client().shares.create.return_value = self.fake_share
        share.client().shares.get.return_value = self.available_share
        scheduler.TaskRunner(share.create)()
        return share
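    # NOTE (added comment): _create_share stubs shares.get to return a share
    # that is already 'available', so create completes on the first poll;
    # the failure tests below feed check_create_complete shares stuck in
    # 'error' or 'deleting' instead.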
    def test_share_create(self):
        share = self._create_share("stack_share_create")

        expected_state = (share.CREATE, share.COMPLETE)
        self.assertEqual(expected_state, share.state,
                         "Share is not in expected state")
        self.assertEqual(self.fake_share.id, share.resource_id,
                         "Expected share ID was not propagated to share")

        share.client().shares.allow.assert_called_once_with(
            access="127.0.0.1", access_level="ro",
            share=share.resource_id, access_type="ip")
        args, kwargs = share.client().shares.create.call_args
        message_end = " parameter was not passed to manila client"
        self.assertEqual(u"NFS", kwargs["share_proto"],
                         "Share protocol" + message_end)
        self.assertEqual(1, kwargs["size"], "Share size" + message_end)
        self.assertEqual("basic_test_share", kwargs["name"],
                         "Share name" + message_end)
        self.assertEqual("basic test share", kwargs["description"],
                         "Share description" + message_end)
        self.assertEqual({u"key": u"value"}, kwargs["metadata"],
                         "Metadata" + message_end)
        self.assertTrue(kwargs["is_public"])
        share.client().shares.get.assert_called_once_with(self.fake_share.id)
        self.assertEqual('shares', share.entity)

    def test_share_create_fail(self):
        share = self._init_share("stack_share_create_fail")
        share.client().shares.get.return_value = self.failed_share
        exc = self.assertRaises(exception.ResourceInError,
                                share.check_create_complete,
                                self.failed_share)
        self.assertIn("Error during creation", six.text_type(exc))

    def test_share_create_unknown_status(self):
        share = self._init_share("stack_share_create_unknown")
        share.client().shares.get.return_value = self.deleting_share
        exc = self.assertRaises(exception.ResourceUnknownStatus,
                                share.check_create_complete,
                                self.deleting_share)
        self.assertIn("Unknown status", six.text_type(exc))

    def test_share_check(self):
        share = self._create_share("stack_share_check")
        scheduler.TaskRunner(share.check)()
        expected_state = (share.CHECK, share.COMPLETE)
        self.assertEqual(expected_state, share.state,
                         "Share is not in expected state")

    def test_share_check_fail(self):
        share = self._create_share("stack_share_check_fail")
        share.client().shares.get.return_value = self.failed_share
        exc = self.assertRaises(exception.ResourceFailure,
                                scheduler.TaskRunner(share.check))
        self.assertIn("Error: resources.test_share: 'status': expected "
                      "'['available']'", six.text_type(exc))

    def test_share_update(self):
        share = self._create_share("stack_share_update")
        updated_share_props = copy.deepcopy(share.properties.data)
        updated_share_props[mshare.ManilaShare.DESCRIPTION] = "desc"
        updated_share_props[mshare.ManilaShare.NAME] = "name"
        updated_share_props[mshare.ManilaShare.IS_PUBLIC] = True
        share.client().shares.update.return_value = None

        after = rsrc_defn.ResourceDefinition(share.name, share.type(),
                                             updated_share_props)
        scheduler.TaskRunner(share.update, after)()
        kwargs = {
            "display_name": "name",
            "display_description": "desc",
        }
        share.client().shares.update.assert_called_once_with(
            share.resource_id, **kwargs)

    def test_share_update_access_rules(self):
        share = self._create_share("stack_share_update_access_rules")
        updated_share_props = copy.deepcopy(share.properties.data)
        updated_share_props[mshare.ManilaShare.ACCESS_RULES] = [
            {mshare.ManilaShare.ACCESS_TO: "127.0.0.2",
             mshare.ManilaShare.ACCESS_TYPE: "ip",
             mshare.ManilaShare.ACCESS_LEVEL: "ro"}]
        share.client().shares.deny.return_value = None
        current_rule = {
            mshare.ManilaShare.ACCESS_TO: "127.0.0.1",
            mshare.ManilaShare.ACCESS_TYPE: "ip",
            mshare.ManilaShare.ACCESS_LEVEL: "ro",
            "id": "test_access_rule"
        }
        rule_tuple = collections.namedtuple("DummyRule",
                                            list(current_rule.keys()))
        share.client().shares.access_list.return_value = [
            rule_tuple(**current_rule)]

        after = rsrc_defn.ResourceDefinition(share.name, share.type(),
                                             updated_share_props)
        scheduler.TaskRunner(share.update, after)()

        share.client().shares.access_list.assert_called_once_with(
            share.resource_id)
        share.client().shares.allow.assert_called_with(
            share=share.resource_id, access_type="ip",
            access="127.0.0.2", access_level="ro")
        share.client().shares.deny.assert_called_once_with(
            share=share.resource_id, id="test_access_rule")
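    # NOTE (added comment): unlike access rules, which are diffed rule by
    # rule above, metadata updates go through update_all_metadata and
    # replace the whole mapping in a single call.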
    def test_share_update_metadata(self):
        share = self._create_share("stack_share_update_metadata")
        updated_share_props = copy.deepcopy(share.properties.data)
        updated_share_props[mshare.ManilaShare.METADATA] = {
            "fake_key": "fake_value"}
        share.client().shares.update_all_metadata.return_value = None

        after = rsrc_defn.ResourceDefinition(share.name, share.type(),
                                             updated_share_props)
        scheduler.TaskRunner(share.update, after)()
        share.client().shares.update_all_metadata.assert_called_once_with(
            share.resource_id,
            updated_share_props[mshare.ManilaShare.METADATA])

    def test_attributes(self):
        share = self._create_share("share")
        share.client().shares.get.return_value = DummyShare()
        self.assertEqual('az', share.FnGetAtt('availability_zone'))
        self.assertEqual('host', share.FnGetAtt('host'))
        self.assertEqual('el', share.FnGetAtt('export_locations'))
        self.assertEqual('id', share.FnGetAtt('share_server_id'))
        self.assertEqual('ca', share.FnGetAtt('created_at'))
        self.assertEqual('s', share.FnGetAtt('status'))
        self.assertEqual('p_id', share.FnGetAtt('project_id'))

    def test_allowed_access_type(self):
        tmp = template_format.parse(manila_template)
        properties = tmp['resources']['test_share']['properties']
        properties['access_rules'][0]['access_type'] = 'domain'
        stack = utils.parse_stack(tmp, stack_name='access_type')
        self.assertRaisesRegex(
            exception.StackValidationFailed,
            r'.* "domain" is not an allowed value \[ip, user, cert, cephx\]',
            stack.validate)

    def test_get_live_state(self):
        share = self._create_share("share")
        value = mock.MagicMock()
        value.to_dict.return_value = {
            'status': 'available',
            'size': 1,
            'description': None,
            'share_proto': 'NFS',
            'name': 'testshare',
            'share_type': 'default',
            'availability_zone': 'nova',
            'created_at': '2016-02-04T10:20:52.000000',
            'export_location': 'dummy',
            'share_network_id': '5f0a3c90-36ef-4e92-8142-06afd6be2881',
            'export_locations': ['dummy'],
            'share_server_id': 'fcb9d90d-76e6-466f-a0cb-23e254ccc16c',
            'host': 'ubuntu@generic1#GENERIC1',
            'volume_type': 'default',
            'snapshot_id': None,
            'is_public': False,
            'project_id': '221b4f51e9bd4f659845f657a3051a46',
            'id': '3a68e59d-11c1-4da4-a102-03fc9448613e',
            'metadata': {}}
        share.client().shares.get.return_value = value
        share.client().shares.access_list.return_value = [
            {'access_to': '0.0.0.0',
             'access_type': 'ip',
             'access_level': 'r'}]
        share.data = mock.MagicMock(return_value={'share_type': 'default'})

        reality = share.get_live_state(share.properties)
        expected = {
            'description': None,
            'name': 'testshare',
            'is_public': False,
            'metadata': {},
            'access_rules': [{'access_to': '0.0.0.0',
                              'access_type': 'ip',
                              'access_level': 'r'}]}

        self.assertEqual(set(expected.keys()), set(reality.keys()))
        exp_rules = expected.pop('access_rules')
        real_rules = reality.pop('access_rules')
        self.assertEqual([set(rule.items()) for rule in exp_rules],
                         real_rules)
        for key in expected:
            self.assertEqual(expected[key], reality[key])
heat-10.0.2/heat/tests/openstack/octavia/0000775000175000017500000000000013343562672020250 5ustar zuulzuul00000000000000
heat-10.0.2/heat/tests/openstack/octavia/test_l7policy.py0000666000175000017500000002602213343562340023417 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
import yaml

from osc_lib import exceptions

from heat.common import exception
from heat.common.i18n import _
from heat.common import template_format
from heat.engine.resources.openstack.octavia import l7policy
from heat.tests import common
from heat.tests.openstack.octavia import inline_templates
from heat.tests import utils


class L7PolicyTest(common.HeatTestCase):

    def test_resource_mapping(self):
        mapping = l7policy.resource_mapping()
        self.assertEqual(mapping['OS::Octavia::L7Policy'],
                         l7policy.L7Policy)

    def _create_stack(self, tmpl=inline_templates.L7POLICY_TEMPLATE):
        self.t = template_format.parse(tmpl)
        self.stack = utils.parse_stack(self.t)
        self.l7policy = self.stack['l7policy']

        self.octavia_client = mock.MagicMock()
        self.l7policy.client = mock.MagicMock(
            return_value=self.octavia_client)
        self.l7policy.client_plugin().client = mock.MagicMock(
            return_value=self.octavia_client)

    def test_validate_reject_action_with_conflicting_props(self):
        tmpl = yaml.safe_load(inline_templates.L7POLICY_TEMPLATE)
        props = tmpl['resources']['l7policy']['properties']
        props['action'] = 'REJECT'
        self._create_stack(tmpl=yaml.safe_dump(tmpl))

        msg = _('Properties redirect_pool and redirect_url are not '
                'required when action type is set to REJECT.')
        with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.'
                        'has_extension', return_value=True):
            self.assertRaisesRegex(exception.StackValidationFailed,
                                   msg, self.l7policy.validate)

    def test_validate_redirect_pool_action_with_url(self):
        tmpl = yaml.safe_load(inline_templates.L7POLICY_TEMPLATE)
        props = tmpl['resources']['l7policy']['properties']
        props['action'] = 'REDIRECT_TO_POOL'
        props['redirect_pool'] = '123'
        self._create_stack(tmpl=yaml.safe_dump(tmpl))

        msg = _('redirect_url property should only be specified '
                'for action with value REDIRECT_TO_URL.')
        with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.'
                        'has_extension', return_value=True):
            self.assertRaisesRegex(exception.ResourcePropertyValueDependency,
                                   msg, self.l7policy.validate)

    def test_validate_redirect_pool_action_without_pool(self):
        tmpl = yaml.safe_load(inline_templates.L7POLICY_TEMPLATE)
        props = tmpl['resources']['l7policy']['properties']
        props['action'] = 'REDIRECT_TO_POOL'
        del props['redirect_url']
        self._create_stack(tmpl=yaml.safe_dump(tmpl))

        msg = _('Property redirect_pool is required when action type '
                'is set to REDIRECT_TO_POOL.')
        with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.'
                        'has_extension', return_value=True):
            self.assertRaisesRegex(exception.StackValidationFailed,
                                   msg, self.l7policy.validate)

    def test_validate_redirect_url_action_with_pool(self):
        tmpl = yaml.safe_load(inline_templates.L7POLICY_TEMPLATE)
        props = tmpl['resources']['l7policy']['properties']
        props['redirect_pool'] = '123'
        self._create_stack(tmpl=yaml.safe_dump(tmpl))

        msg = _('redirect_pool property should only be specified '
                'for action with value REDIRECT_TO_POOL.')
        with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.'
                        'has_extension', return_value=True):
            self.assertRaisesRegex(exception.ResourcePropertyValueDependency,
                                   msg, self.l7policy.validate)

    def test_validate_redirect_url_action_without_url(self):
        tmpl = yaml.safe_load(inline_templates.L7POLICY_TEMPLATE)
        props = tmpl['resources']['l7policy']['properties']
        del props['redirect_url']
        self._create_stack(tmpl=yaml.safe_dump(tmpl))

        msg = _('Property redirect_url is required when action type '
                'is set to REDIRECT_TO_URL.')
        with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.'
                        'has_extension', return_value=True):
            self.assertRaisesRegex(exception.StackValidationFailed,
                                   msg, self.l7policy.validate)
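    # NOTE (added comment): the lifecycle tests below all follow the same
    # octavia pattern: the first client call raises a 409 Conflict (the
    # load balancer is still immutable), the resource retries the call, and
    # the check_*_complete methods poll provisioning_status until it leaves
    # the PENDING_* state.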
    def test_create(self):
        self._create_stack()
        self.octavia_client.l7policy_show.side_effect = [
            {'provisioning_status': 'PENDING_CREATE'},
            {'provisioning_status': 'PENDING_CREATE'},
            {'provisioning_status': 'ACTIVE'},
        ]
        self.octavia_client.l7policy_create.side_effect = [
            exceptions.Conflict(409),
            {'l7policy': {'id': '1234'}}
        ]
        expected = {
            'l7policy': {
                'name': u'test_l7policy',
                'description': u'test l7policy resource',
                'action': u'REDIRECT_TO_URL',
                'listener_id': u'123',
                'redirect_url': u'http://www.mirantis.com',
                'position': 1,
                'admin_state_up': True
            }
        }

        props = self.l7policy.handle_create()

        self.assertFalse(self.l7policy.check_create_complete(props))
        self.octavia_client.l7policy_create.assert_called_with(json=expected)
        self.assertFalse(self.l7policy.check_create_complete(props))
        self.octavia_client.l7policy_create.assert_called_with(json=expected)
        self.assertFalse(self.l7policy.check_create_complete(props))
        self.assertTrue(self.l7policy.check_create_complete(props))

    def test_create_missing_properties(self):
        for prop in ('action', 'listener'):
            tmpl = yaml.safe_load(inline_templates.L7POLICY_TEMPLATE)
            del tmpl['resources']['l7policy']['properties'][prop]
            self._create_stack(tmpl=yaml.safe_dump(tmpl))

            self.assertRaises(exception.StackValidationFailed,
                              self.l7policy.validate)

    def test_show_resource(self):
        self._create_stack()
        self.l7policy.resource_id_set('1234')
        self.octavia_client.l7policy_show.return_value = {'id': '1234'}

        self.assertEqual({'id': '1234'}, self.l7policy._show_resource())

        self.octavia_client.l7policy_show.assert_called_with('1234')

    def test_update(self):
        self._create_stack()
        self.l7policy.resource_id_set('1234')
        self.octavia_client.l7policy_show.side_effect = [
            {'provisioning_status': 'PENDING_UPDATE'},
            {'provisioning_status': 'PENDING_UPDATE'},
            {'provisioning_status': 'ACTIVE'},
        ]
        self.octavia_client.l7policy_set.side_effect = [
            exceptions.Conflict(409), None]
        prop_diff = {
            'admin_state_up': False,
            'name': 'your_l7policy',
            'redirect_url': 'http://www.google.com'
        }

        prop_diff = self.l7policy.handle_update(None, None, prop_diff)

        self.assertFalse(self.l7policy.check_update_complete(prop_diff))
        self.assertFalse(self.l7policy._update_called)
        self.octavia_client.l7policy_set.assert_called_with(
            '1234', json={'l7policy': prop_diff})
        self.assertFalse(self.l7policy.check_update_complete(prop_diff))
        self.assertTrue(self.l7policy._update_called)
        self.octavia_client.l7policy_set.assert_called_with(
            '1234', json={'l7policy': prop_diff})
        self.assertFalse(self.l7policy.check_update_complete(prop_diff))
        self.assertTrue(self.l7policy.check_update_complete(prop_diff))
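    # NOTE (added comment): the test below checks that the user-facing
    # 'redirect_pool' property in the diff is translated to the
    # 'redirect_pool_id' key expected by the octavia API before
    # l7policy_set is called.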
    def test_update_redirect_pool_prop_name(self):
        self._create_stack()
        self.l7policy.resource_id_set('1234')
        self.octavia_client.l7policy_show.side_effect = [
            {'provisioning_status': 'PENDING_UPDATE'},
            {'provisioning_status': 'PENDING_UPDATE'},
            {'provisioning_status': 'ACTIVE'},
        ]
        self.octavia_client.l7policy_set.side_effect = [
            exceptions.Conflict(409), None]
        unresolved_diff = {
            'redirect_url': None,
            'action': 'REDIRECT_TO_POOL',
            'redirect_pool': 'UNRESOLVED_POOL'
        }
        resolved_diff = {
            'redirect_url': None,
            'action': 'REDIRECT_TO_POOL',
            'redirect_pool_id': '123'
        }

        self.l7policy.handle_update(None, None, unresolved_diff)

        self.assertFalse(self.l7policy.check_update_complete(resolved_diff))
        self.assertFalse(self.l7policy._update_called)
        self.octavia_client.l7policy_set.assert_called_with(
            '1234', json={'l7policy': resolved_diff})
        self.assertFalse(self.l7policy.check_update_complete(resolved_diff))
        self.assertTrue(self.l7policy._update_called)
        self.octavia_client.l7policy_set.assert_called_with(
            '1234', json={'l7policy': resolved_diff})
        self.assertFalse(self.l7policy.check_update_complete(resolved_diff))
        self.assertTrue(self.l7policy.check_update_complete(resolved_diff))

    def test_delete(self):
        self._create_stack()
        self.l7policy.resource_id_set('1234')
        self.octavia_client.l7policy_show.side_effect = [
            {'provisioning_status': 'PENDING_DELETE'},
            {'provisioning_status': 'PENDING_DELETE'},
            {'provisioning_status': 'DELETED'},
        ]
        self.octavia_client.l7policy_delete.side_effect = [
            exceptions.Conflict(409), None]

        self.l7policy.handle_delete()

        self.assertFalse(self.l7policy.check_delete_complete(None))
        self.assertFalse(self.l7policy._delete_called)
        self.octavia_client.l7policy_delete.assert_called_with(
            '1234')
        self.assertFalse(self.l7policy.check_delete_complete(None))
        self.assertTrue(self.l7policy._delete_called)
        self.octavia_client.l7policy_delete.assert_called_with(
            '1234')
        self.assertTrue(self.l7policy.check_delete_complete(None))

    def test_delete_failed(self):
        self._create_stack()
        self.l7policy.resource_id_set('1234')
        self.octavia_client.l7policy_delete.side_effect = (
            exceptions.Unauthorized(401))

        self.l7policy.handle_delete()
        self.assertRaises(exceptions.Unauthorized,
                          self.l7policy.check_delete_complete, None)

        self.octavia_client.l7policy_delete.assert_called_with(
            '1234')
heat-10.0.2/heat/tests/openstack/octavia/test_loadbalancer.py0000666000175000017500000001373013343562340024266 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
import mock
from neutronclient.neutron import v2_0 as neutronV20
from osc_lib import exceptions

from heat.common import exception
from heat.common import template_format
from heat.engine.resources.openstack.octavia import loadbalancer
from heat.tests import common
from heat.tests.openstack.octavia import inline_templates
from heat.tests import utils


class LoadBalancerTest(common.HeatTestCase):

    def test_resource_mapping(self):
        mapping = loadbalancer.resource_mapping()
        self.assertEqual(loadbalancer.LoadBalancer,
                         mapping['OS::Octavia::LoadBalancer'])

    def _create_stack(self, tmpl=inline_templates.LB_TEMPLATE):
        self.t = template_format.parse(tmpl)
        self.stack = utils.parse_stack(self.t)
        self.lb = self.stack['lb']

        self.octavia_client = mock.MagicMock()
        self.lb.client = mock.MagicMock()
        self.lb.client.return_value = self.octavia_client

        self.patchobject(neutronV20, 'find_resourceid_by_name_or_id',
                         return_value='123')
        self.lb.client_plugin().client = mock.MagicMock(
            return_value=self.octavia_client)
        self.lb.translate_properties(self.lb.properties)
        self.lb.resource_id_set('1234')

    def test_create(self):
        self._create_stack()
        expected = {
            'loadbalancer': {
                'name': 'my_lb',
                'description': 'my loadbalancer',
                'vip_address': '10.0.0.4',
                'vip_subnet_id': '123',
                'provider': 'octavia',
                'tenant_id': '1234',
                'admin_state_up': True,
            }
        }

        self.lb.handle_create()

        self.octavia_client.load_balancer_create.assert_called_with(
            json=expected)

    def test_check_create_complete(self):
        self._create_stack()
        self.octavia_client.load_balancer_show.side_effect = [
            {'provisioning_status': 'ACTIVE'},
            {'provisioning_status': 'PENDING_CREATE'},
            {'provisioning_status': 'ERROR'},
        ]

        self.assertTrue(self.lb.check_create_complete(None))
        self.assertFalse(self.lb.check_create_complete(None))
        self.assertRaises(exception.ResourceInError,
                          self.lb.check_create_complete, None)

    def test_show_resource(self):
        self._create_stack()
        self.octavia_client.load_balancer_show.return_value = {'id': '1234'}

        self.assertEqual({'id': '1234'}, self.lb._show_resource())

        self.octavia_client.load_balancer_show.assert_called_with('1234')

    def test_update(self):
        self._create_stack()
        prop_diff = {
            'name': 'lb',
            'description': 'a loadbalancer',
            'admin_state_up': False,
        }

        prop_diff = self.lb.handle_update(None, None, prop_diff)

        self.octavia_client.load_balancer_set.assert_called_once_with(
            '1234', json={'loadbalancer': prop_diff})

    def test_update_complete(self):
        self._create_stack()
        prop_diff = {
            'name': 'lb',
            'description': 'a loadbalancer',
            'admin_state_up': False,
        }
        self.octavia_client.load_balancer_show.side_effect = [
            {'provisioning_status': 'ACTIVE'},
            {'provisioning_status': 'PENDING_UPDATE'},
        ]

        self.lb.handle_update(None, None, prop_diff)

        self.assertTrue(self.lb.check_update_complete(prop_diff))
        self.assertFalse(self.lb.check_update_complete(prop_diff))
        self.assertTrue(self.lb.check_update_complete({}))
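    # NOTE (added comment): deletion is asynchronous as well - the delete
    # call is retried after a 409 Conflict while check_delete_complete
    # polls the status, and a 404 from the API is treated as already
    # deleted.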
    def test_delete(self):
        self._create_stack()
        self.octavia_client.load_balancer_show.side_effect = [
            {'provisioning_status': 'DELETE_PENDING'},
            {'provisioning_status': 'DELETE_PENDING'},
            {'provisioning_status': 'DELETED'},
        ]
        self.octavia_client.load_balancer_delete.side_effect = [
            exceptions.Conflict(409),
            None
        ]

        self.lb.handle_delete()

        self.assertFalse(self.lb.check_delete_complete(None))
        self.assertFalse(self.lb._delete_called)
        self.assertFalse(self.lb.check_delete_complete(None))
        self.assertTrue(self.lb._delete_called)
        self.assertTrue(self.lb.check_delete_complete(None))

        self.octavia_client.load_balancer_delete.assert_called_with('1234')
        self.assertEqual(
            2, self.octavia_client.load_balancer_delete.call_count)

    def test_delete_error(self):
        self._create_stack()
        self.octavia_client.load_balancer_show.side_effect = [
            {'provisioning_status': 'DELETE_PENDING'},
        ]
        self.octavia_client.load_balancer_delete.side_effect = [
            exceptions.Conflict(409),
            exceptions.NotFound(404)
        ]

        self.lb.handle_delete()

        self.assertFalse(self.lb.check_delete_complete(None))
        self.assertTrue(self.lb.check_delete_complete(None))

        self.octavia_client.load_balancer_delete.assert_called_with('1234')
        self.assertEqual(
            2, self.octavia_client.load_balancer_delete.call_count)

    def test_delete_failed(self):
        self._create_stack()
        self.octavia_client.load_balancer_delete.side_effect = (
            exceptions.Unauthorized(403))

        self.lb.handle_delete()
        self.assertRaises(exceptions.Unauthorized,
                          self.lb.check_delete_complete, None)
heat-10.0.2/heat/tests/openstack/octavia/test_l7rule.py0000666000175000017500000001574513343562340023071 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
import yaml

from osc_lib import exceptions

from heat.common import exception
from heat.common.i18n import _
from heat.common import template_format
from heat.engine.resources.openstack.octavia import l7rule
from heat.tests import common
from heat.tests.openstack.octavia import inline_templates
from heat.tests import utils


class L7RuleTest(common.HeatTestCase):

    def test_resource_mapping(self):
        mapping = l7rule.resource_mapping()
        self.assertEqual(mapping['OS::Octavia::L7Rule'],
                         l7rule.L7Rule)

    def _create_stack(self, tmpl=inline_templates.L7RULE_TEMPLATE):
        self.t = template_format.parse(tmpl)
        self.stack = utils.parse_stack(self.t)
        self.l7rule = self.stack['l7rule']

        self.octavia_client = mock.MagicMock()
        self.l7rule.client = mock.MagicMock(
            return_value=self.octavia_client)
        self.l7rule.client_plugin().client = mock.MagicMock(
            return_value=self.octavia_client)
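    # NOTE (added comment): 'key' is only meaningful for HEADER and COOKIE
    # rule types; the fixture uses type: HEADER, so deleting the property
    # below must fail validation.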
    def test_validate_when_key_required(self):
        tmpl = yaml.safe_load(inline_templates.L7RULE_TEMPLATE)
        props = tmpl['resources']['l7rule']['properties']
        del props['key']
        self._create_stack(tmpl=yaml.safe_dump(tmpl))

        msg = _('Property key is missing. This property should be '
                'specified for rules of HEADER and COOKIE types.')
        with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.'
                        'has_extension', return_value=True):
            self.assertRaisesRegex(exception.StackValidationFailed,
                                   msg, self.l7rule.validate)

    def test_create(self):
        self._create_stack()
        self.octavia_client.l7rule_show.side_effect = [
            {'provisioning_status': 'PENDING_CREATE'},
            {'provisioning_status': 'PENDING_CREATE'},
            {'provisioning_status': 'ACTIVE'},
        ]
        self.octavia_client.l7rule_create.side_effect = [
            exceptions.Conflict(409),
            {'rule': {'id': '1234'}}
        ]
        expected = {
            'rule': {
                'admin_state_up': True,
                'invert': False,
                'type': u'HEADER',
                'compare_type': u'ENDS_WITH',
                'key': u'test_key',
                'value': u'test_value'
            }
        }

        props = self.l7rule.handle_create()

        self.assertFalse(self.l7rule.check_create_complete(props))
        self.octavia_client.l7rule_create.assert_called_with('123',
                                                             json=expected)
        self.assertFalse(self.l7rule.check_create_complete(props))
        self.octavia_client.l7rule_create.assert_called_with('123',
                                                             json=expected)
        self.assertFalse(self.l7rule.check_create_complete(props))
        self.assertTrue(self.l7rule.check_create_complete(props))

    def test_create_missing_properties(self):
        for prop in ('l7policy', 'type', 'compare_type', 'value'):
            tmpl = yaml.safe_load(inline_templates.L7RULE_TEMPLATE)
            del tmpl['resources']['l7rule']['properties'][prop]
            self._create_stack(tmpl=yaml.safe_dump(tmpl))

            self.assertRaises(exception.StackValidationFailed,
                              self.l7rule.validate)

    def test_show_resource(self):
        self._create_stack()
        self.l7rule.resource_id_set('1234')
        self.octavia_client.l7rule_show.return_value = {'id': '1234'}

        self.assertEqual({'id': '1234'}, self.l7rule._show_resource())

        self.octavia_client.l7rule_show.assert_called_with('1234', '123')

    def test_update(self):
        self._create_stack()
        self.l7rule.resource_id_set('1234')
        self.octavia_client.l7rule_show.side_effect = [
            {'provisioning_status': 'PENDING_UPDATE'},
            {'provisioning_status': 'PENDING_UPDATE'},
            {'provisioning_status': 'ACTIVE'},
        ]
        self.octavia_client.l7rule_set.side_effect = [
            exceptions.Conflict(409), None]
        prop_diff = {
            'admin_state_up': False,
            'name': 'your_l7policy',
            'redirect_url': 'http://www.google.com'
        }

        prop_diff = self.l7rule.handle_update(None, None, prop_diff)

        self.assertFalse(self.l7rule.check_update_complete(prop_diff))
        self.assertFalse(self.l7rule._update_called)
        self.octavia_client.l7rule_set.assert_called_with(
            '1234', '123', json={'rule': prop_diff})
        self.assertFalse(self.l7rule.check_update_complete(prop_diff))
        self.assertTrue(self.l7rule._update_called)
        self.octavia_client.l7rule_set.assert_called_with(
            '1234', '123', json={'rule': prop_diff})
        self.assertFalse(self.l7rule.check_update_complete(prop_diff))
        self.assertTrue(self.l7rule.check_update_complete(prop_diff))

    def test_delete(self):
        self._create_stack()
        self.l7rule.resource_id_set('1234')
        self.octavia_client.l7rule_show.side_effect = [
            {'provisioning_status': 'PENDING_DELETE'},
            {'provisioning_status': 'PENDING_DELETE'},
            {'provisioning_status': 'DELETED'},
        ]
        self.octavia_client.l7rule_delete.side_effect = [
            exceptions.Conflict(409),
            None]

        self.l7rule.handle_delete()

        self.assertFalse(self.l7rule.check_delete_complete(None))
        self.assertFalse(self.l7rule._delete_called)
        self.assertFalse(self.l7rule.check_delete_complete(None))
        self.assertTrue(self.l7rule._delete_called)
        self.octavia_client.l7rule_delete.assert_called_with(
            '1234', '123')
        self.assertTrue(self.l7rule.check_delete_complete(None))
    def test_delete_already_gone(self):
        self._create_stack()
        self.l7rule.resource_id_set('1234')
        self.octavia_client.l7rule_delete.side_effect = (
            exceptions.NotFound(404))

        self.l7rule.handle_delete()
        self.assertTrue(self.l7rule.check_delete_complete(None))

    def test_delete_failed(self):
        self._create_stack()
        self.l7rule.resource_id_set('1234')
        self.octavia_client.l7rule_delete.side_effect = (
            exceptions.Unauthorized(401))

        self.l7rule.handle_delete()
        self.assertRaises(exceptions.Unauthorized,
                          self.l7rule.check_delete_complete, None)
heat-10.0.2/heat/tests/openstack/octavia/test_pool.py0000666000175000017500000001741413343562340022633 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
import yaml

from osc_lib import exceptions

from heat.common import exception
from heat.common.i18n import _
from heat.common import template_format
from heat.engine.resources.openstack.octavia import pool
from heat.tests import common
from heat.tests.openstack.octavia import inline_templates
from heat.tests import utils


class PoolTest(common.HeatTestCase):

    def test_resource_mapping(self):
        mapping = pool.resource_mapping()
        self.assertEqual(pool.Pool, mapping['OS::Octavia::Pool'])

    def _create_stack(self, tmpl=inline_templates.POOL_TEMPLATE):
        self.t = template_format.parse(tmpl)
        self.stack = utils.parse_stack(self.t)
        self.pool = self.stack['pool']

        self.octavia_client = mock.MagicMock()
        self.pool.client = mock.MagicMock(return_value=self.octavia_client)
        self.pool.client_plugin().client = mock.MagicMock(
            return_value=self.octavia_client)

    def test_validate_no_cookie_name(self):
        tmpl = yaml.safe_load(inline_templates.POOL_TEMPLATE)
        sp = tmpl['resources']['pool']['properties']['session_persistence']
        sp['type'] = 'APP_COOKIE'
        self._create_stack(tmpl=yaml.safe_dump(tmpl))

        msg = _('Property cookie_name is required when '
                'session_persistence type is set to APP_COOKIE.')
        self.assertRaisesRegex(exception.StackValidationFailed,
                               msg, self.pool.validate)

    def test_validate_source_ip_cookie_name(self):
        tmpl = yaml.safe_load(inline_templates.POOL_TEMPLATE)
        sp = tmpl['resources']['pool']['properties']['session_persistence']
        sp['type'] = 'SOURCE_IP'
        sp['cookie_name'] = 'cookie'
        self._create_stack(tmpl=yaml.safe_dump(tmpl))

        msg = _('Property cookie_name must NOT be specified when '
                'session_persistence type is set to SOURCE_IP.')
        self.assertRaisesRegex(exception.StackValidationFailed,
                               msg, self.pool.validate)
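    # NOTE (added comment): the two tests above pin down the
    # session_persistence contract: APP_COOKIE requires cookie_name, while
    # SOURCE_IP forbids it. The remaining tests exercise the usual
    # create/poll/update/delete cycle against the mocked octavia client.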
    def test_create(self):
        self._create_stack()
        self.octavia_client.pool_show.side_effect = [
            {'provisioning_status': 'PENDING_CREATE'},
            {'provisioning_status': 'PENDING_CREATE'},
            {'provisioning_status': 'ACTIVE'},
        ]
        self.octavia_client.pool_create.side_effect = [
            exceptions.Conflict(409),
            {'pool': {'id': '1234'}}
        ]
        expected = {
            'pool': {
                'name': 'my_pool',
                'description': 'my pool',
                'session_persistence': {
                    'type': 'HTTP_COOKIE'
                },
                'lb_algorithm': 'ROUND_ROBIN',
                'listener_id': '123',
                'loadbalancer_id': 'my_lb',
                'protocol': 'HTTP',
                'admin_state_up': True
            }
        }

        props = self.pool.handle_create()

        self.assertFalse(self.pool.check_create_complete(props))
        self.octavia_client.pool_create.assert_called_with(json=expected)
        self.assertFalse(self.pool.check_create_complete(props))
        self.octavia_client.pool_create.assert_called_with(json=expected)
        self.assertFalse(self.pool.check_create_complete(props))
        self.assertTrue(self.pool.check_create_complete(props))

    def test_create_missing_properties(self):
        for prop in ('lb_algorithm', 'listener', 'protocol'):
            tmpl = yaml.safe_load(inline_templates.POOL_TEMPLATE)
            del tmpl['resources']['pool']['properties']['loadbalancer']
            del tmpl['resources']['pool']['properties'][prop]
            self._create_stack(tmpl=yaml.safe_dump(tmpl))
            if prop == 'listener':
                self.assertRaises(exception.PropertyUnspecifiedError,
                                  self.pool.validate)
            else:
                self.assertRaises(exception.StackValidationFailed,
                                  self.pool.validate)

    def test_show_resource(self):
        self._create_stack()
        self.pool.resource_id_set('1234')
        self.octavia_client.pool_show.return_value = {'id': '1234'}

        self.assertEqual(self.pool._show_resource(), {'id': '1234'})

        self.octavia_client.pool_show.assert_called_with('1234')

    def test_update(self):
        self._create_stack()
        self.pool.resource_id_set('1234')
        self.octavia_client.pool_show.side_effect = [
            {'provisioning_status': 'PENDING_UPDATE'},
            {'provisioning_status': 'PENDING_UPDATE'},
            {'provisioning_status': 'ACTIVE'},
        ]
        self.octavia_client.pool_set.side_effect = [
            exceptions.Conflict(409), None]
        prop_diff = {
            'admin_state_up': False,
            'name': 'your_pool',
            'lb_algorithm': 'SOURCE_IP'
        }

        prop_diff = self.pool.handle_update(None, None, prop_diff)

        self.assertFalse(self.pool.check_update_complete(prop_diff))
        self.assertFalse(self.pool._update_called)
        self.octavia_client.pool_set.assert_called_with(
            '1234', json={'pool': prop_diff})
        self.assertFalse(self.pool.check_update_complete(prop_diff))
        self.assertTrue(self.pool._update_called)
        self.octavia_client.pool_set.assert_called_with(
            '1234', json={'pool': prop_diff})
        self.assertFalse(self.pool.check_update_complete(prop_diff))
        self.assertTrue(self.pool.check_update_complete(prop_diff))

    def test_delete(self):
        self._create_stack()
        self.pool.resource_id_set('1234')
        self.octavia_client.pool_show.side_effect = [
            {'provisioning_status': 'PENDING_DELETE'},
            {'provisioning_status': 'PENDING_DELETE'},
            {'provisioning_status': 'DELETED'},
        ]
        self.octavia_client.pool_delete.side_effect = [
            exceptions.Conflict(409),
            None]

        self.pool.handle_delete()

        self.assertFalse(self.pool.check_delete_complete(None))
        self.assertFalse(self.pool._delete_called)
        self.assertFalse(self.pool.check_delete_complete(None))
        self.assertTrue(self.pool._delete_called)
        self.octavia_client.pool_delete.assert_called_with('1234')
        self.assertTrue(self.pool.check_delete_complete(None))

    def test_delete_not_found(self):
        self._create_stack()
        self.pool.resource_id_set('1234')
        self.octavia_client.pool_show.side_effect = [
            {'provisioning_status': 'PENDING_DELETE'},
        ]
        self.octavia_client.pool_delete.side_effect = [
            exceptions.Conflict(409),
            exceptions.NotFound(404)]

        self.pool.handle_delete()

        self.assertFalse(self.pool.check_delete_complete(None))
        self.assertFalse(self.pool._delete_called)
        self.octavia_client.pool_delete.assert_called_with('1234')
        self.assertTrue(self.pool.check_delete_complete(None))

    def test_delete_failed(self):
        self._create_stack()
        self.pool.resource_id_set('1234')
        self.octavia_client.pool_delete.side_effect = (
            exceptions.Unauthorized(401))

        self.pool.handle_delete()
        self.assertRaises(exceptions.Unauthorized,
                          self.pool.check_delete_complete, None)
heat-10.0.2/heat/tests/openstack/octavia/__init__.py0000666000175000017500000000000013343562340022341 0ustar zuulzuul00000000000000
heat-10.0.2/heat/tests/openstack/octavia/test_pool_member.py0000666000175000017500000001546313343562352024167 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
from neutronclient.neutron import v2_0 as neutronV20
from osc_lib import exceptions

from heat.common import template_format
from heat.engine.resources.openstack.octavia import pool_member
from heat.tests import common
from heat.tests.openstack.octavia import inline_templates
from heat.tests import utils


class PoolMemberTest(common.HeatTestCase):

    def test_resource_mapping(self):
        mapping = pool_member.resource_mapping()
        self.assertEqual(pool_member.PoolMember,
                         mapping['OS::Octavia::PoolMember'])

    def _create_stack(self, tmpl=inline_templates.MEMBER_TEMPLATE):
        self.t = template_format.parse(tmpl)
        self.stack = utils.parse_stack(self.t)
        self.member = self.stack['member']

        self.patchobject(neutronV20, 'find_resourceid_by_name_or_id',
                         return_value='123')
        self.octavia_client = mock.MagicMock()
        self.member.client = mock.MagicMock(return_value=self.octavia_client)

        self.member.client_plugin().get_pool = (
            mock.MagicMock(return_value='123'))
        self.member.client_plugin().client = mock.MagicMock(
            return_value=self.octavia_client)
        self.member.translate_properties(self.member.properties)

    def test_create(self):
        self._create_stack()
        self.octavia_client.member_show.side_effect = [
            {'provisioning_status': 'PENDING_CREATE'},
            {'provisioning_status': 'PENDING_CREATE'},
            {'provisioning_status': 'ACTIVE'},
        ]
        self.octavia_client.member_create.side_effect = [
            exceptions.Conflict(409), {'member': {'id': '1234'}}]
        expected = {
            'member': {
                'address': '1.2.3.4',
                'protocol_port': 80,
                'weight': 1,
                'subnet_id': '123',
                'admin_state_up': True,
            }
        }

        props = self.member.handle_create()

        self.assertFalse(self.member.check_create_complete(props))
        self.octavia_client.member_create.assert_called_with('123',
                                                             json=expected)
        self.assertFalse(self.member.check_create_complete(props))
        self.octavia_client.member_create.assert_called_with('123',
                                                             json=expected)
        self.assertFalse(self.member.check_create_complete(props))
        self.assertTrue(self.member.check_create_complete(props))

    def test_show_resource(self):
        self._create_stack()
        self.member.resource_id_set('1234')
        self.octavia_client.member_show.return_value = {'id': '1234'}

        self.assertEqual(self.member._show_resource(), {'id': '1234'})

        self.octavia_client.member_show.assert_called_with('123', '1234')
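    # NOTE (added comment): every pool-member call is scoped to its parent
    # pool, so the assertions below always pass both IDs, e.g.
    # member_set('123', '1234', ...) and member_delete('123', '1234').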
    def test_update(self):
        self._create_stack()
        self.member.resource_id_set('1234')
        self.octavia_client.member_show.side_effect = [
            {'provisioning_status': 'PENDING_UPDATE'},
            {'provisioning_status': 'PENDING_UPDATE'},
            {'provisioning_status': 'ACTIVE'},
        ]
        self.octavia_client.member_set.side_effect = [
            exceptions.Conflict(409), None]
        prop_diff = {
            'admin_state_up': False,
            'weight': 2,
        }

        prop_diff = self.member.handle_update(None, None, prop_diff)

        self.assertFalse(self.member.check_update_complete(prop_diff))
        self.assertFalse(self.member._update_called)
        self.octavia_client.member_set.assert_called_with(
            '123', '1234', json={'member': prop_diff})
        self.assertFalse(self.member.check_update_complete(prop_diff))
        self.assertTrue(self.member._update_called)
        self.octavia_client.member_set.assert_called_with(
            '123', '1234', json={'member': prop_diff})
        self.assertFalse(self.member.check_update_complete(prop_diff))
        self.assertTrue(self.member.check_update_complete(prop_diff))

    def test_delete(self):
        self._create_stack()
        self.member.resource_id_set('1234')
        self.octavia_client.member_show.side_effect = [
            {'provisioning_status': 'PENDING_DELETE'},
            {'provisioning_status': 'PENDING_DELETE'},
            {'provisioning_status': 'DELETED'},
        ]
        self.octavia_client.member_delete.side_effect = [
            exceptions.Conflict(409), None]

        self.member.handle_delete()

        self.assertFalse(self.member.check_delete_complete(None))
        self.assertFalse(self.member._delete_called)
        self.octavia_client.member_delete.assert_called_with('123', '1234')
        self.assertFalse(self.member.check_delete_complete(None))
        self.octavia_client.member_delete.assert_called_with('123', '1234')
        self.assertTrue(self.member._delete_called)
        self.assertTrue(self.member.check_delete_complete(None))

    def test_delete_not_found(self):
        self._create_stack()
        self.member.resource_id_set('1234')
        self.octavia_client.member_show.side_effect = [
            {'provisioning_status': 'PENDING_DELETE'},
        ]
        self.octavia_client.member_delete.side_effect = [
            exceptions.Conflict(409),
            exceptions.NotFound(404)]

        self.member.handle_delete()

        self.assertFalse(self.member.check_delete_complete(None))
        self.assertFalse(self.member._delete_called)
        self.octavia_client.member_delete.assert_called_with('123', '1234')
        self.assertTrue(self.member.check_delete_complete(None))
        self.octavia_client.member_delete.assert_called_with('123', '1234')
        self.assertFalse(self.member._delete_called)

    def test_delete_failed(self):
        self._create_stack()
        self.member.resource_id_set('1234')
        self.octavia_client.member_delete.side_effect = (
            exceptions.Unauthorized(401))

        self.member.handle_delete()
        self.assertRaises(exceptions.Unauthorized,
                          self.member.check_delete_complete, None)
heat-10.0.2/heat/tests/openstack/octavia/inline_templates.py0000666000175000017500000000614413343562340024155 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
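# NOTE (added comment): the HOT snippets below are shared fixtures; each
# octavia resource test parses one of them with template_format.parse() and
# builds a stack around it via utils.parse_stack().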
LB_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Create a loadbalancer resources: lb: type: OS::Octavia::LoadBalancer properties: name: my_lb description: my loadbalancer vip_address: 10.0.0.4 vip_subnet: sub123 provider: octavia tenant_id: 1234 admin_state_up: True ''' LISTENER_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Create a listener resources: listener: type: OS::Octavia::Listener properties: protocol_port: 80 protocol: TCP loadbalancer: 123 default_pool: my_pool name: my_listener description: my listener admin_state_up: True default_tls_container_ref: ref sni_container_refs: - ref1 - ref2 connection_limit: -1 tenant_id: 1234 ''' POOL_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Create a pool resources: pool: type: OS::Octavia::Pool properties: name: my_pool description: my pool session_persistence: type: HTTP_COOKIE lb_algorithm: ROUND_ROBIN loadbalancer: my_lb listener: 123 protocol: HTTP admin_state_up: True ''' MEMBER_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Create a pool member resources: member: type: OS::Octavia::PoolMember properties: pool: 123 address: 1.2.3.4 protocol_port: 80 weight: 1 subnet: sub123 admin_state_up: True ''' MONITOR_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Create a health monitor resources: monitor: type: OS::Octavia::HealthMonitor properties: admin_state_up: True delay: 3 expected_codes: 200-202 http_method: HEAD max_retries: 5 pool: 123 timeout: 10 type: HTTP url_path: /health ''' L7POLICY_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Template to test L7Policy Neutron resource resources: l7policy: type: OS::Octavia::L7Policy properties: admin_state_up: True name: test_l7policy description: test l7policy resource action: REDIRECT_TO_URL redirect_url: http://www.mirantis.com listener: 123 position: 1 ''' L7RULE_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Template to test L7Rule Neutron resource resources: l7rule: type: OS::Octavia::L7Rule properties: admin_state_up: True l7policy: 123 type: HEADER compare_type: ENDS_WITH key: test_key value: test_value invert: False ''' heat-10.0.2/heat/tests/openstack/octavia/test_health_monitor.py0000666000175000017500000001354513343562340024677 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
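# Tests for OS::Octavia::HealthMonitor. Like the other Octavia resources,
# the handlers retry requests that fail with 409 Conflict (the load
# balancer is immutable while another operation is in flight), and the
# check_*_complete methods poll provisioning_status until it leaves the
# PENDING_* states; the side_effect lists below simulate both behaviours.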
import mock from osc_lib import exceptions from heat.common import template_format from heat.engine.resources.openstack.octavia import health_monitor from heat.tests import common from heat.tests.openstack.octavia import inline_templates from heat.tests import utils class HealthMonitorTest(common.HeatTestCase): def _create_stack(self, tmpl=inline_templates.MONITOR_TEMPLATE): self.t = template_format.parse(tmpl) self.stack = utils.parse_stack(self.t) self.healthmonitor = self.stack['monitor'] self.octavia_client = mock.MagicMock() self.healthmonitor.client = mock.MagicMock( return_value=self.octavia_client) self.healthmonitor.client_plugin().client = mock.MagicMock( return_value=self.octavia_client) def test_resource_mapping(self): mapping = health_monitor.resource_mapping() self.assertEqual(health_monitor.HealthMonitor, mapping['OS::Octavia::HealthMonitor']) def test_create(self): self._create_stack() self.octavia_client.health_monitor_show.side_effect = [ {'provisioning_status': 'PENDING_CREATE'}, {'provisioning_status': 'PENDING_CREATE'}, {'provisioning_status': 'ACTIVE'}, ] self.octavia_client.health_monitor_create.side_effect = [ exceptions.Conflict(409), {'healthmonitor': {'id': '1234'}} ] expected = { 'healthmonitor': { 'admin_state_up': True, 'delay': 3, 'expected_codes': '200-202', 'http_method': 'HEAD', 'max_retries': 5, 'pool_id': '123', 'timeout': 10, 'type': 'HTTP', 'url_path': '/health' } } props = self.healthmonitor.handle_create() self.assertFalse(self.healthmonitor.check_create_complete(props)) self.octavia_client.health_monitor_create.assert_called_with( json=expected) self.assertFalse(self.healthmonitor.check_create_complete(props)) self.octavia_client.health_monitor_create.assert_called_with( json=expected) self.assertFalse(self.healthmonitor.check_create_complete(props)) self.assertTrue(self.healthmonitor.check_create_complete(props)) def test_show_resource(self): self._create_stack() self.healthmonitor.resource_id_set('1234') self.assertTrue(self.healthmonitor._show_resource()) self.octavia_client.health_monitor_show.assert_called_with( '1234') def test_update(self): self._create_stack() self.healthmonitor.resource_id_set('1234') self.octavia_client.health_monitor_show.side_effect = [ {'provisioning_status': 'PENDING_UPDATE'}, {'provisioning_status': 'PENDING_UPDATE'}, {'provisioning_status': 'ACTIVE'}, ] self.octavia_client.health_monitor_set.side_effect = [ exceptions.Conflict(409), None] prop_diff = { 'admin_state_up': False, } prop_diff = self.healthmonitor.handle_update(None, None, prop_diff) self.assertFalse(self.healthmonitor.check_update_complete(prop_diff)) self.assertFalse(self.healthmonitor._update_called) self.octavia_client.health_monitor_set.assert_called_with( '1234', json={'healthmonitor': prop_diff}) self.assertFalse(self.healthmonitor.check_update_complete(prop_diff)) self.assertTrue(self.healthmonitor._update_called) self.octavia_client.health_monitor_set.assert_called_with( '1234', json={'healthmonitor': prop_diff}) self.assertFalse(self.healthmonitor.check_update_complete(prop_diff)) self.assertTrue(self.healthmonitor.check_update_complete(prop_diff)) def test_delete(self): self._create_stack() self.healthmonitor.resource_id_set('1234') self.octavia_client.health_monitor_show.side_effect = [ {'provisioning_status': 'PENDING_DELETE'}, {'provisioning_status': 'PENDING_DELETE'}, {'provisioning_status': 'DELETED'}, ] self.octavia_client.health_monitor_delete.side_effect = [ exceptions.Conflict(409), None] self.healthmonitor.handle_delete() 
self.assertFalse(self.healthmonitor.check_delete_complete(None)) self.assertFalse(self.healthmonitor._delete_called) self.octavia_client.health_monitor_delete.assert_called_with( '1234') self.assertFalse(self.healthmonitor.check_delete_complete(None)) self.assertTrue(self.healthmonitor._delete_called) self.octavia_client.health_monitor_delete.assert_called_with( '1234') self.assertTrue(self.healthmonitor.check_delete_complete(None)) def test_delete_failed(self): self._create_stack() self.healthmonitor.resource_id_set('1234') self.octavia_client.health_monitor_delete.side_effect = ( exceptions.Unauthorized(401)) self.healthmonitor.handle_delete() self.assertRaises(exceptions.Unauthorized, self.healthmonitor.check_delete_complete, None) self.octavia_client.health_monitor_delete.assert_called_with( '1234') heat-10.0.2/heat/tests/openstack/octavia/test_listener.py0000666000175000017500000001714613343562340023511 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import yaml from osc_lib import exceptions from heat.common import exception from heat.common import template_format from heat.engine.resources.openstack.octavia import listener from heat.tests import common from heat.tests.openstack.octavia import inline_templates from heat.tests import utils class ListenerTest(common.HeatTestCase): def test_resource_mapping(self): mapping = listener.resource_mapping() self.assertEqual(listener.Listener, mapping['OS::Octavia::Listener']) def _create_stack(self, tmpl=inline_templates.LISTENER_TEMPLATE): self.t = template_format.parse(tmpl) self.stack = utils.parse_stack(self.t) self.listener = self.stack['listener'] self.octavia_client = mock.MagicMock() self.listener.client = mock.MagicMock(return_value=self.octavia_client) self.listener.client_plugin().client = mock.MagicMock( return_value=self.octavia_client) def test_validate_terminated_https(self): tmpl = yaml.safe_load(inline_templates.LISTENER_TEMPLATE) props = tmpl['resources']['listener']['properties'] props['protocol'] = 'TERMINATED_HTTPS' del props['default_tls_container_ref'] self._create_stack(tmpl=yaml.safe_dump(tmpl)) self.assertRaises(exception.StackValidationFailed, self.listener.validate) def test_create(self): self._create_stack() self.octavia_client.listener_show.side_effect = [ {'provisioning_status': 'PENDING_CREATE'}, {'provisioning_status': 'PENDING_CREATE'}, {'provisioning_status': 'ACTIVE'}, ] self.octavia_client.listener_create.side_effect = [ exceptions.Conflict(409), {'listener': {'id': '1234'}} ] expected = { 'listener': { 'protocol_port': 80, 'protocol': 'TCP', 'loadbalancer_id': '123', 'default_pool_id': 'my_pool', 'name': 'my_listener', 'description': 'my listener', 'admin_state_up': True, 'default_tls_container_ref': 'ref', 'sni_container_refs': ['ref1', 'ref2'], 'connection_limit': -1, 'tenant_id': '1234', } } props = self.listener.handle_create() self.assertFalse(self.listener.check_create_complete(props)) self.octavia_client.listener_create.assert_called_with(json=expected) 
self.assertFalse(self.listener.check_create_complete(props)) self.octavia_client.listener_create.assert_called_with(json=expected) self.assertFalse(self.listener.check_create_complete(props)) self.assertTrue(self.listener.check_create_complete(props)) def test_create_missing_properties(self): for prop in ('protocol', 'protocol_port', 'loadbalancer'): tmpl = yaml.safe_load(inline_templates.LISTENER_TEMPLATE) del tmpl['resources']['listener']['properties'][prop] del tmpl['resources']['listener']['properties']['default_pool'] self._create_stack(tmpl=yaml.safe_dump(tmpl)) if prop == 'loadbalancer': self.assertRaises(exception.PropertyUnspecifiedError, self.listener.validate) else: self.assertRaises(exception.StackValidationFailed, self.listener.validate) def test_show_resource(self): self._create_stack() self.listener.resource_id_set('1234') self.octavia_client.listener_show.return_value = {'id': '1234'} self.assertEqual({'id': '1234'}, self.listener._show_resource()) self.octavia_client.listener_show.assert_called_with('1234') def test_update(self): self._create_stack() self.listener.resource_id_set('1234') self.octavia_client.listener_show.side_effect = [ {'provisioning_status': 'PENDING_UPDATE'}, {'provisioning_status': 'PENDING_UPDATE'}, {'provisioning_status': 'ACTIVE'}, ] self.octavia_client.listener_set.side_effect = [ exceptions.Conflict(409), None] prop_diff = { 'admin_state_up': False, 'name': 'your_listener', } prop_diff = self.listener.handle_update(self.listener.t, None, prop_diff) self.assertFalse(self.listener.check_update_complete(prop_diff)) self.assertFalse(self.listener._update_called) self.octavia_client.listener_set.assert_called_with( '1234', json={'listener': prop_diff}) self.assertFalse(self.listener.check_update_complete(prop_diff)) self.assertTrue(self.listener._update_called) self.octavia_client.listener_set.assert_called_with( '1234', json={'listener': prop_diff}) self.assertFalse(self.listener.check_update_complete(prop_diff)) self.assertTrue(self.listener.check_update_complete(prop_diff)) def test_delete(self): self._create_stack() self.listener.resource_id_set('1234') self.octavia_client.listener_show.side_effect = [ {'provisioning_status': 'PENDING_DELETE'}, {'provisioning_status': 'PENDING_DELETE'}, {'provisioning_status': 'DELETED'}, ] self.octavia_client.listener_delete.side_effect = [ exceptions.Conflict(409), None] self.listener.handle_delete() self.assertFalse(self.listener.check_delete_complete(None)) self.assertFalse(self.listener._delete_called) self.octavia_client.listener_delete.assert_called_with('1234') self.assertFalse(self.listener.check_delete_complete(None)) self.assertTrue(self.listener._delete_called) self.octavia_client.listener_delete.assert_called_with('1234') self.assertTrue(self.listener.check_delete_complete(None)) def test_delete_not_found(self): self._create_stack() self.listener.resource_id_set('1234') self.octavia_client.listener_show.side_effect = [ {'provisioning_status': 'PENDING_DELETE'}, ] self.octavia_client.listener_delete.side_effect = [ exceptions.Conflict(409), exceptions.NotFound(404)] self.listener.handle_delete() self.assertFalse(self.listener.check_delete_complete(None)) self.assertFalse(self.listener._delete_called) self.octavia_client.listener_delete.assert_called_with('1234') self.assertTrue(self.listener.check_delete_complete(None)) self.octavia_client.listener_delete.assert_called_with('1234') def test_delete_failed(self): self._create_stack() self.listener.resource_id_set('1234') 
self.octavia_client.listener_delete.side_effect = ( exceptions.Unauthorized(401)) self.listener.handle_delete() self.assertRaises(exceptions.Unauthorized, self.listener.check_delete_complete, None) heat-10.0.2/heat/tests/openstack/senlin/0000775000175000017500000000000013343562672020112 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/senlin/test_receiver.py0000666000175000017500000001045613343562340023327 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import mock from openstack import exceptions from heat.common import template_format from heat.engine.clients.os import senlin from heat.engine.resources.openstack.senlin import receiver as sr from heat.engine import scheduler from heat.tests import common from heat.tests import utils receiver_stack_template = """ heat_template_version: 2016-04-08 description: Senlin Receiver Template resources: senlin-receiver: type: OS::Senlin::Receiver properties: name: SenlinReceiver cluster: fake_cluster action: CLUSTER_SCALE_OUT type: webhook params: foo: bar """ class FakeReceiver(object): def __init__(self, id='some_id'): self.id = id self.name = "SenlinReceiver" self.cluster_id = "fake_cluster" self.action = "CLUSTER_SCALE_OUT" self.channel = {'alarm_url': "http://foo.bar/webhooks/fake_url"} def to_dict(self): return { 'id': self.id, 'name': self.name, 'cluster_id': self.cluster_id, 'action': self.action, 'channel': self.channel, 'actor': {'trust_id': ['fake_trust_id']} } class SenlinReceiverTest(common.HeatTestCase): def setUp(self): super(SenlinReceiverTest, self).setUp() self.senlin_mock = mock.MagicMock() self.patchobject(sr.Receiver, 'client', return_value=self.senlin_mock) self.patchobject(senlin.ClusterConstraint, 'validate', return_value=True) self.fake_r = FakeReceiver() self.t = template_format.parse(receiver_stack_template) def _init_recv(self, template): self.stack = utils.parse_stack(template) recv = self.stack['senlin-receiver'] return recv def _create_recv(self, template): recv = self._init_recv(template) self.senlin_mock.create_receiver.return_value = self.fake_r self.senlin_mock.get_receiver.return_value = self.fake_r scheduler.TaskRunner(recv.create)() self.assertEqual((recv.CREATE, recv.COMPLETE), recv.state) self.assertEqual(self.fake_r.id, recv.resource_id) return recv def test_recv_create_success(self): self._create_recv(self.t) expect_kwargs = { 'name': 'SenlinReceiver', 'cluster_id': 'fake_cluster', 'action': 'CLUSTER_SCALE_OUT', 'type': 'webhook', 'params': {'foo': 'bar'}, } self.senlin_mock.create_receiver.assert_called_once_with( **expect_kwargs) def test_recv_delete_success(self): self.senlin_mock.delete_receiver.return_value = None recv = self._create_recv(self.t) scheduler.TaskRunner(recv.delete)() self.senlin_mock.delete_receiver.assert_called_once_with( recv.resource_id) def test_recv_delete_not_found(self): self.senlin_mock.delete_receiver.side_effect = [ exceptions.ResourceNotFound(http_status=404) ] recv = self._create_recv(self.t) scheduler.TaskRunner(recv.delete)() 
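# The receiver has already vanished server-side; the ResourceNotFound
# raised by delete_receiver is treated as success, so the delete still
# completes and issues exactly one client call (asserted below).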
self.senlin_mock.delete_receiver.assert_called_once_with( recv.resource_id) def test_cluster_resolve_attribute(self): expected_show = { 'id': 'some_id', 'name': 'SenlinReceiver', 'cluster_id': 'fake_cluster', 'action': 'CLUSTER_SCALE_OUT', 'channel': {'alarm_url': "http://foo.bar/webhooks/fake_url"}, 'actor': {'trust_id': ['fake_trust_id']} } recv = self._create_recv(self.t) self.assertEqual(self.fake_r.channel, recv._resolve_attribute('channel')) self.assertEqual(expected_show, recv._show_resource()) heat-10.0.2/heat/tests/openstack/senlin/test_node.py0000666000175000017500000002251513343562340022447 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import mock from oslo_config import cfg import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import senlin from heat.engine.resources.openstack.senlin import node as sn from heat.engine import scheduler from heat.engine import template from heat.tests import common from heat.tests import utils from openstack import exceptions node_stack_template = """ heat_template_version: 2016-04-08 description: Senlin Node Template resources: senlin-node: type: OS::Senlin::Node properties: name: SenlinNode profile: fake_profile cluster: fake_cluster metadata: foo: bar """ class FakeNode(object): def __init__(self, id='some_id', status='ACTIVE'): self.status = status self.status_reason = 'Unknown' self.id = id self.name = "SenlinNode" self.metadata = {'foo': 'bar'} self.profile_id = "fake_profile_id" self.cluster_id = "fake_cluster_id" self.details = {'id': 'physical_object_id'} self.location = "actions/fake_action" def to_dict(self): return { 'id': self.id, 'status': self.status, 'status_reason': self.status_reason, 'name': self.name, 'metadata': self.metadata, 'profile_id': self.profile_id, 'cluster_id': self.cluster_id, } class SenlinNodeTest(common.HeatTestCase): def setUp(self): super(SenlinNodeTest, self).setUp() self.senlin_mock = mock.MagicMock() self.senlin_mock.get_profile.return_value = mock.Mock( id='fake_profile_id' ) self.senlin_mock.get_cluster.return_value = mock.Mock( id='fake_cluster_id' ) self.patchobject(sn.Node, 'client', return_value=self.senlin_mock) self.patchobject(senlin.SenlinClientPlugin, 'client', return_value=self.senlin_mock) self.patchobject(senlin.ProfileConstraint, 'validate', return_value=True) self.patchobject(senlin.ClusterConstraint, 'validate', return_value=True) self.fake_node = FakeNode() self.t = template_format.parse(node_stack_template) self.stack = utils.parse_stack(self.t) self.node = self.stack['senlin-node'] def _create_node(self): self.senlin_mock.create_node.return_value = self.fake_node self.senlin_mock.get_node.return_value = self.fake_node self.senlin_mock.get_action.return_value = mock.Mock( status='SUCCEEDED') scheduler.TaskRunner(self.node.create)() self.assertEqual((self.node.CREATE, self.node.COMPLETE), self.node.state) self.assertEqual(self.fake_node.id, self.node.resource_id) return self.node def 
test_node_create_success(self): self._create_node() expect_kwargs = { 'name': 'SenlinNode', 'profile_id': 'fake_profile_id', 'metadata': {'foo': 'bar'}, 'cluster_id': 'fake_cluster_id', } self.senlin_mock.create_node.assert_called_once_with( **expect_kwargs) def test_node_create_error(self): cfg.CONF.set_override('action_retry_limit', 0) self.senlin_mock.create_node.return_value = self.fake_node mock_action = mock.MagicMock() mock_action.status = 'FAILED' mock_action.status_reason = 'oops' self.senlin_mock.get_action.return_value = mock_action create_task = scheduler.TaskRunner(self.node.create) ex = self.assertRaises(exception.ResourceFailure, create_task) expected = ('ResourceInError: resources.senlin-node: ' 'Went to status FAILED due to "oops"') self.assertEqual(expected, six.text_type(ex)) def test_node_delete_success(self): node = self._create_node() self.senlin_mock.get_node.side_effect = [ exceptions.ResourceNotFound('SenlinNode'), ] scheduler.TaskRunner(node.delete)() self.senlin_mock.delete_node.assert_called_once_with( node.resource_id) def test_cluster_delete_error(self): node = self._create_node() self.senlin_mock.get_node.side_effect = exception.Error('oops') delete_task = scheduler.TaskRunner(node.delete) ex = self.assertRaises(exception.ResourceFailure, delete_task) expected = 'Error: resources.senlin-node: oops' self.assertEqual(expected, six.text_type(ex)) def test_node_update_profile(self): node = self._create_node() # Mock translate rules self.senlin_mock.get_profile.side_effect = [ mock.Mock(id='new_profile_id'), mock.Mock(id='fake_profile_id'), mock.Mock(id='new_profile_id'), ] new_t = copy.deepcopy(self.t) props = new_t['resources']['senlin-node']['properties'] props['profile'] = 'new_profile' props['name'] = 'new_name' rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_node = rsrc_defns['senlin-node'] self.senlin_mock.update_node.return_value = mock.Mock( location='/actions/fake-action') scheduler.TaskRunner(node.update, new_node)() self.assertEqual((node.UPDATE, node.COMPLETE), node.state) node_update_kwargs = { 'profile_id': 'new_profile_id', 'name': 'new_name' } self.senlin_mock.update_node.assert_called_once_with( node=self.fake_node, **node_update_kwargs) self.assertEqual(2, self.senlin_mock.get_action.call_count) def test_node_update_cluster(self): node = self._create_node() # Mock translate rules self.senlin_mock.get_cluster.side_effect = [ mock.Mock(id='new_cluster_id'), mock.Mock(id='fake_cluster_id'), mock.Mock(id='new_cluster_id'), ] new_t = copy.deepcopy(self.t) props = new_t['resources']['senlin-node']['properties'] props['cluster'] = 'new_cluster' rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_node = rsrc_defns['senlin-node'] self.senlin_mock.cluster_del_nodes.return_value = { 'action': 'remove_node_from_cluster' } self.senlin_mock.cluster_add_nodes.return_value = { 'action': 'add_node_to_cluster' } scheduler.TaskRunner(node.update, new_node)() self.assertEqual((node.UPDATE, node.COMPLETE), node.state) self.senlin_mock.cluster_del_nodes.assert_called_once_with( cluster='fake_cluster_id', nodes=[node.resource_id]) self.senlin_mock.cluster_add_nodes.assert_called_once_with( cluster='new_cluster_id', nodes=[node.resource_id]) def test_node_update_failed(self): node = self._create_node() new_t = copy.deepcopy(self.t) props = new_t['resources']['senlin-node']['properties'] props['name'] = 'new_name' rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_node = rsrc_defns['senlin-node'] 
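# A FAILED senlin action during the update should surface as a
# ResourceFailure and leave the node in the UPDATE/FAILED state, which is
# what the assertions below verify.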
self.senlin_mock.update_node.return_value = mock.Mock( location='/actions/fake-action') self.senlin_mock.get_action.return_value = mock.Mock( status='FAILED', status_reason='oops') update_task = scheduler.TaskRunner(node.update, new_node) ex = self.assertRaises(exception.ResourceFailure, update_task) expected = ('ResourceInError: resources.senlin-node: Went to ' 'status FAILED due to "oops"') self.assertEqual(expected, six.text_type(ex)) self.assertEqual((node.UPDATE, node.FAILED), node.state) self.assertEqual(2, self.senlin_mock.get_action.call_count) def test_cluster_resolve_attribute(self): expected_show = { 'id': 'some_id', 'status': 'ACTIVE', 'status_reason': 'Unknown', 'name': 'SenlinNode', 'metadata': {'foo': 'bar'}, 'profile_id': 'fake_profile_id', 'cluster_id': 'fake_cluster_id' } node = self._create_node() self.assertEqual(expected_show, node._show_resource()) self.assertEqual(self.fake_node.details, node._resolve_attribute('details')) self.senlin_mock.get_node.assert_called_with( node.resource_id, details=True) def test_node_get_live_state(self): expected_reality = { 'name': 'SenlinNode', 'metadata': {'foo': 'bar'}, 'profile': 'fake_profile_id', 'cluster': 'fake_cluster_id' } node = self._create_node() reality = node.get_live_state(node.properties) self.assertEqual(expected_reality, reality) heat-10.0.2/heat/tests/openstack/senlin/test_policy.py0000666000175000017500000001632013343562340023016 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License.
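# Tests for OS::Senlin::Policy. Beyond CRUD on the policy object itself,
# these cases cover attaching and detaching the policy to clusters via the
# 'bindings' property, including the not-found and not-attached edge cases
# on delete.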
import copy import mock from openstack import exceptions from oslo_config import cfg from heat.common import exception from heat.common import template_format from heat.engine.clients.os import senlin from heat.engine.resources.openstack.senlin import policy from heat.engine import scheduler from heat.engine import template from heat.tests import common from heat.tests import utils policy_stack_template = """ heat_template_version: 2016-04-08 description: Senlin Policy Template resources: senlin-policy: type: OS::Senlin::Policy properties: name: SenlinPolicy type: senlin.policy.deletion-1.0 properties: criteria: OLDEST_FIRST bindings: - cluster: c1 """ policy_spec = { 'type': 'senlin.policy.deletion', 'version': '1.0', 'properties': { 'criteria': 'OLDEST_FIRST' } } class FakePolicy(object): def __init__(self, id='some_id', spec=None): self.id = id self.name = "SenlinPolicy" def to_dict(self): return { 'id': self.id, 'name': self.name, } class SenlinPolicyTest(common.HeatTestCase): def setUp(self): super(SenlinPolicyTest, self).setUp() self.patchobject(senlin.ClusterConstraint, 'validate', return_value=True) self.patchobject(senlin.PolicyTypeConstraint, 'validate', return_value=True) self.senlin_mock = mock.MagicMock() self.senlin_mock.get_cluster.return_value = mock.Mock( id='c1_id') self.patchobject(policy.Policy, 'client', return_value=self.senlin_mock) self.patchobject(senlin.SenlinClientPlugin, 'client', return_value=self.senlin_mock) self.fake_p = FakePolicy() self.t = template_format.parse(policy_stack_template) def _init_policy(self, template): self.stack = utils.parse_stack(template) policy = self.stack['senlin-policy'] return policy def _create_policy(self, template): policy = self._init_policy(template) self.senlin_mock.create_policy.return_value = self.fake_p self.senlin_mock.cluster_attach_policy.return_value = { 'action': 'fake_action'} self.senlin_mock.get_action.return_value = mock.Mock( status='SUCCEEDED') scheduler.TaskRunner(policy.create)() self.assertEqual((policy.CREATE, policy.COMPLETE), policy.state) self.assertEqual(self.fake_p.id, policy.resource_id) self.senlin_mock.cluster_attach_policy.assert_called_once_with( 'c1_id', policy.resource_id, enabled=True) self.senlin_mock.get_action.assert_called_once_with('fake_action') return policy def test_policy_create(self): self._create_policy(self.t) expect_kwargs = { 'name': 'SenlinPolicy', 'spec': policy_spec } self.senlin_mock.create_policy.assert_called_once_with( **expect_kwargs) def test_policy_create_fail(self): cfg.CONF.set_override('action_retry_limit', 0) policy = self._init_policy(self.t) self.senlin_mock.create_policy.return_value = self.fake_p self.senlin_mock.cluster_attach_policy.return_value = { 'action': 'fake_action'} self.senlin_mock.get_action.return_value = mock.Mock( status='FAILED', status_reason='oops', action='CLUSTER_ATTACH_POLICY') create_task = scheduler.TaskRunner(policy.create) self.assertRaises(exception.ResourceFailure, create_task) self.assertEqual((policy.CREATE, policy.FAILED), policy.state) err_msg = ('ResourceInError: resources.senlin-policy: Went to status ' 'FAILED due to "Failed to execute CLUSTER_ATTACH_POLICY ' 'for c1_id: oops"') self.assertEqual(err_msg, policy.status_reason) def test_policy_delete_not_found(self): self.senlin_mock.cluster_detach_policy.return_value = { 'action': 'fake_action'} policy = self._create_policy(self.t) self.senlin_mock.get_policy.side_effect = [ exceptions.ResourceNotFound('SenlinPolicy'), ] scheduler.TaskRunner(policy.delete)() 
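# get_policy raising ResourceNotFound means the policy is already gone;
# delete is treated as idempotent, but the detach and delete client calls
# below must still each have been issued exactly once.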
self.senlin_mock.cluster_detach_policy.assert_called_once_with( 'c1_id', policy.resource_id) self.senlin_mock.delete_policy.assert_called_once_with( policy.resource_id) def test_policy_delete_not_attached(self): policy = self._create_policy(self.t) self.senlin_mock.get_policy.side_effect = [ exceptions.ResourceNotFound('SenlinPolicy'), ] self.senlin_mock.cluster_detach_policy.side_effect = [ exceptions.HttpException(http_status=400), ] scheduler.TaskRunner(policy.delete)() self.senlin_mock.cluster_detach_policy.assert_called_once_with( 'c1_id', policy.resource_id) self.senlin_mock.delete_policy.assert_called_once_with( policy.resource_id) def test_policy_update(self): policy = self._create_policy(self.t) # Mock translate rules self.senlin_mock.get_cluster.side_effect = [ mock.Mock(id='c2_id'), mock.Mock(id='c1_id'), mock.Mock(id='c2_id'), ] new_t = copy.deepcopy(self.t) props = new_t['resources']['senlin-policy']['properties'] props['bindings'] = [{'cluster': 'c2'}] props['name'] = 'new_name' rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_cluster = rsrc_defns['senlin-policy'] self.senlin_mock.cluster_attach_policy.return_value = { 'action': 'fake_action1'} self.senlin_mock.cluster_detach_policy.return_value = { 'action': 'fake_action2'} self.senlin_mock.get_policy.return_value = self.fake_p scheduler.TaskRunner(policy.update, new_cluster)() self.assertEqual((policy.UPDATE, policy.COMPLETE), policy.state) self.senlin_mock.update_policy.assert_called_once_with( self.fake_p, name='new_name') self.senlin_mock.cluster_detach_policy.assert_called_once_with( 'c1_id', policy.resource_id) self.senlin_mock.cluster_attach_policy.assert_called_with( 'c2_id', policy.resource_id, enabled=True) def test_policy_resolve_attribute(self): expected_show = { 'id': 'some_id', 'name': 'SenlinPolicy', } policy = self._create_policy(self.t) self.senlin_mock.get_policy.return_value = FakePolicy() self.assertEqual(expected_show, policy._show_resource()) heat-10.0.2/heat/tests/openstack/senlin/__init__.py0000666000175000017500000000000013343562340022203 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/senlin/test_profile.py0000666000175000017500000000731013343562340023156 0ustar zuulzuul00000000000000# Copyright 2015 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License.
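# Tests for OS::Senlin::Profile, built around an os.heat.stack profile
# spec whose embedded template is passed through verbatim to senlin's
# create_profile call.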
import mock from heat.common import template_format from heat.engine.clients.os import senlin from heat.engine.resources.openstack.senlin import profile as sp from heat.engine import scheduler from heat.tests import common from heat.tests import utils profile_stack_template = """ heat_template_version: 2016-04-08 description: Senlin Profile Template resources: senlin-profile: type: OS::Senlin::Profile properties: name: SenlinProfile type: os.heat.stack-1.0 properties: template: heat_template_version: 2014-10-16 resources: random: type: OS::Heat::RandomString """ profile_spec = { 'type': 'os.heat.stack', 'version': '1.0', 'properties': { 'template': { 'heat_template_version': '2014-10-16', 'resources': { 'random': { 'type': 'OS::Heat::RandomString' } } } } } class FakeProfile(object): def __init__(self, id='some_id', spec=None): self.id = id self.name = "SenlinProfile" self.metadata = {} self.spec = spec or profile_spec class SenlinProfileTest(common.HeatTestCase): def setUp(self): super(SenlinProfileTest, self).setUp() self.senlin_mock = mock.MagicMock() self.patchobject(sp.Profile, 'client', return_value=self.senlin_mock) self.patchobject(senlin.ProfileTypeConstraint, 'validate', return_value=True) self.fake_p = FakeProfile() self.t = template_format.parse(profile_stack_template) def _init_profile(self, template): self.stack = utils.parse_stack(template) profile = self.stack['senlin-profile'] return profile def _create_profile(self, template): profile = self._init_profile(template) self.senlin_mock.create_profile.return_value = self.fake_p scheduler.TaskRunner(profile.create)() self.assertEqual((profile.CREATE, profile.COMPLETE), profile.state) self.assertEqual(self.fake_p.id, profile.resource_id) return profile def test_profile_create(self): self._create_profile(self.t) expect_kwargs = { 'name': 'SenlinProfile', 'metadata': None, 'spec': profile_spec } self.senlin_mock.create_profile.assert_called_once_with( **expect_kwargs) def test_profile_delete(self): self.senlin_mock.delete_profile.return_value = None profile = self._create_profile(self.t) scheduler.TaskRunner(profile.delete)() self.senlin_mock.delete_profile.assert_called_once_with( profile.resource_id) def test_profile_update(self): profile = self._create_profile(self.t) prop_diff = {'metadata': {'foo': 'bar'}} self.senlin_mock.get_profile.return_value = self.fake_p profile.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) self.senlin_mock.update_profile.assert_called_once_with( self.fake_p, **prop_diff) heat-10.0.2/heat/tests/openstack/senlin/test_cluster.py0000666000175000017500000003702113343562340023201 0ustar zuulzuul00000000000000# Copyright 2015 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
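# Tests for OS::Senlin::Cluster. The update cases are the interesting
# part: a profile change goes through update_cluster, a desired_capacity
# change through cluster_resize, and changes to the policies list are
# diffed into cluster_attach_policy / cluster_detach_policy /
# cluster_update_policy calls, each completing only once the matching
# senlin action reports SUCCEEDED.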
import copy import mock from oslo_config import cfg import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import senlin from heat.engine.resources.openstack.senlin import cluster as sc from heat.engine import scheduler from heat.engine import template from heat.tests import common from heat.tests import utils from openstack import exceptions cluster_stack_template = """ heat_template_version: 2016-04-08 description: Senlin Cluster Template resources: senlin-cluster: type: OS::Senlin::Cluster properties: name: SenlinCluster profile: fake_profile policies: - policy: fake_policy enabled: true min_size: 0 max_size: -1 desired_capacity: 1 timeout: 3600 metadata: foo: bar """ class FakeCluster(object): def __init__(self, id='some_id', status='ACTIVE'): self.status = status self.status_reason = 'Unknown' self.id = id self.name = "SenlinCluster" self.metadata = {} self.nodes = ['node1'] self.desired_capacity = 1 self.metadata = {'foo': 'bar'} self.timeout = 3600 self.max_size = -1 self.min_size = 0 self.location = 'actions/fake-action' self.profile_name = 'fake_profile' self.profile_id = 'fake_profile_id' def to_dict(self): return { 'id': self.id, 'status': self.status, 'status_reason': self.status_reason, 'name': self.name, 'metadata': self.metadata, 'timeout': self.timeout, 'desired_capacity': self.desired_capacity, 'max_size': self.max_size, 'min_size': self.min_size, 'nodes': self.nodes, 'profile_name': self.profile_name, 'profile_id': self.profile_id } class SenlinClusterTest(common.HeatTestCase): def setUp(self): super(SenlinClusterTest, self).setUp() self.senlin_mock = mock.MagicMock() self.senlin_mock.get_profile.return_value = mock.Mock( id='fake_profile_id' ) self.patchobject(sc.Cluster, 'client', return_value=self.senlin_mock) self.patchobject(senlin.SenlinClientPlugin, 'client', return_value=self.senlin_mock) self.patchobject(senlin.ProfileConstraint, 'validate', return_value=True) self.patchobject(senlin.PolicyConstraint, 'validate', return_value=True) self.fake_cl = FakeCluster() self.t = template_format.parse(cluster_stack_template) def _init_cluster(self, template): self.stack = utils.parse_stack(template) cluster = self.stack['senlin-cluster'] return cluster def _create_cluster(self, template): cluster = self._init_cluster(template) self.senlin_mock.create_cluster.return_value = self.fake_cl self.senlin_mock.get_cluster.return_value = self.fake_cl self.senlin_mock.get_action.return_value = mock.Mock( status='SUCCEEDED') self.senlin_mock.get_policy.return_value = mock.Mock( id='fake_policy_id' ) self.senlin_mock.cluster_policies.return_value = [ {'policy_id': 'fake_policy_id', 'enabled': True} ] scheduler.TaskRunner(cluster.create)() self.assertEqual((cluster.CREATE, cluster.COMPLETE), cluster.state) self.assertEqual(self.fake_cl.id, cluster.resource_id) return cluster def test_cluster_create_success(self): self._create_cluster(self.t) create_cluster_kwargs = { 'name': 'SenlinCluster', 'profile_id': 'fake_profile_id', 'desired_capacity': 1, 'min_size': 0, 'max_size': -1, 'metadata': {'foo': 'bar'}, 'timeout': 3600, } attach_policy_kwargs = { 'cluster': self.fake_cl.id, 'policy': 'fake_policy_id', 'enabled': True } self.senlin_mock.create_cluster.assert_called_once_with( **create_cluster_kwargs) self.senlin_mock.cluster_attach_policy.assert_called_once_with( **attach_policy_kwargs) def test_cluster_create_error(self): cfg.CONF.set_override('action_retry_limit', 0) cluster = self._init_cluster(self.t) 
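# With action retries disabled, the FAILED action below must fail the
# create immediately instead of being re-attempted.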
self.senlin_mock.create_cluster.return_value = self.fake_cl mock_action = mock.MagicMock() mock_action.status = 'FAILED' mock_action.status_reason = 'oops' self.senlin_mock.get_policy.return_value = mock.Mock( id='fake_policy_id' ) self.senlin_mock.get_action.return_value = mock_action create_task = scheduler.TaskRunner(cluster.create) ex = self.assertRaises(exception.ResourceFailure, create_task) expected = ('ResourceInError: resources.senlin-cluster: ' 'Went to status FAILED due to "oops"') self.assertEqual(expected, six.text_type(ex)) def test_cluster_delete_success(self): cluster = self._create_cluster(self.t) self.senlin_mock.get_cluster.side_effect = [ exceptions.ResourceNotFound('SenlinCluster'), ] scheduler.TaskRunner(cluster.delete)() self.senlin_mock.delete_cluster.assert_called_once_with( cluster.resource_id) def test_cluster_delete_error(self): cluster = self._create_cluster(self.t) self.senlin_mock.get_cluster.side_effect = exception.Error('oops') delete_task = scheduler.TaskRunner(cluster.delete) ex = self.assertRaises(exception.ResourceFailure, delete_task) expected = 'Error: resources.senlin-cluster: oops' self.assertEqual(expected, six.text_type(ex)) def test_cluster_update_profile(self): cluster = self._create_cluster(self.t) # Mock translate rules self.senlin_mock.get_profile.side_effect = [ mock.Mock(id='new_profile_id'), mock.Mock(id='fake_profile_id'), mock.Mock(id='new_profile_id'), ] new_t = copy.deepcopy(self.t) props = new_t['resources']['senlin-cluster']['properties'] props['profile'] = 'new_profile' props['name'] = 'new_name' rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_cluster = rsrc_defns['senlin-cluster'] self.senlin_mock.update_cluster.return_value = mock.Mock( location='/actions/fake-action') self.senlin_mock.get_action.return_value = mock.Mock( status='SUCCEEDED') scheduler.TaskRunner(cluster.update, new_cluster)() self.assertEqual((cluster.UPDATE, cluster.COMPLETE), cluster.state) cluster_update_kwargs = { 'profile_id': 'new_profile_id', 'name': 'new_name' } self.senlin_mock.update_cluster.assert_called_once_with( cluster=self.fake_cl, **cluster_update_kwargs) self.assertEqual(3, self.senlin_mock.get_action.call_count) def test_cluster_update_desire_capacity(self): cluster = self._create_cluster(self.t) new_t = copy.deepcopy(self.t) props = new_t['resources']['senlin-cluster']['properties'] props['desired_capacity'] = 10 rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_cluster = rsrc_defns['senlin-cluster'] self.senlin_mock.cluster_resize.return_value = { 'action': 'fake-action'} self.senlin_mock.get_action.return_value = mock.Mock( status='SUCCEEDED') scheduler.TaskRunner(cluster.update, new_cluster)() self.assertEqual((cluster.UPDATE, cluster.COMPLETE), cluster.state) cluster_resize_kwargs = { 'adjustment_type': 'EXACT_CAPACITY', 'number': 10 } self.senlin_mock.cluster_resize.assert_called_once_with( cluster=cluster.resource_id, **cluster_resize_kwargs) self.assertEqual(3, self.senlin_mock.get_action.call_count) def test_cluster_update_policy_add_remove(self): cluster = self._create_cluster(self.t) # Mock translate rules self.senlin_mock.get_policy.side_effect = [ mock.Mock(id='new_policy_id'), mock.Mock(id='fake_policy_id'), mock.Mock(id='new_policy_id'), ] new_t = copy.deepcopy(self.t) props = new_t['resources']['senlin-cluster']['properties'] props['policies'] = [{'policy': 'new_policy'}] rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_cluster = 
rsrc_defns['senlin-cluster'] self.senlin_mock.cluster_detach_policy.return_value = { 'action': 'fake-action'} self.senlin_mock.cluster_attach_policy.return_value = { 'action': 'fake-action'} self.senlin_mock.get_action.return_value = mock.Mock( status='SUCCEEDED') scheduler.TaskRunner(cluster.update, new_cluster)() self.assertEqual((cluster.UPDATE, cluster.COMPLETE), cluster.state) detach_policy_kwargs = { 'policy': 'fake_policy_id', 'cluster': cluster.resource_id, 'enabled': True, } self.assertEqual(2, self.senlin_mock.cluster_attach_policy.call_count) self.senlin_mock.cluster_detach_policy.assert_called_once_with( **detach_policy_kwargs) self.assertEqual(0, self.senlin_mock.cluster_update_policy.call_count) self.assertEqual(4, self.senlin_mock.get_action.call_count) def test_cluster_update_policy_exists(self): cluster = self._create_cluster(self.t) new_t = copy.deepcopy(self.t) props = new_t['resources']['senlin-cluster']['properties'] props['policies'] = [{'policy': 'fake_policy', 'enabled': False}] rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_cluster = rsrc_defns['senlin-cluster'] self.senlin_mock.cluster_update_policy.return_value = { 'action': 'fake-action'} self.senlin_mock.get_action.return_value = mock.Mock( status='SUCCEEDED') scheduler.TaskRunner(cluster.update, new_cluster)() self.assertEqual((cluster.UPDATE, cluster.COMPLETE), cluster.state) update_policy_kwargs = { 'policy': 'fake_policy_id', 'cluster': cluster.resource_id, 'enabled': False, } self.senlin_mock.cluster_update_policy.assert_called_once_with( **update_policy_kwargs) self.assertEqual(1, self.senlin_mock.cluster_attach_policy.call_count) self.assertEqual(0, self.senlin_mock.cluster_detach_policy.call_count) def test_cluster_update_failed(self): cluster = self._create_cluster(self.t) new_t = copy.deepcopy(self.t) props = new_t['resources']['senlin-cluster']['properties'] props['desired_capacity'] = 3 rsrc_defns = template.Template(new_t).resource_definitions(self.stack) update_snippet = rsrc_defns['senlin-cluster'] self.senlin_mock.cluster_resize.return_value = { 'action': 'fake-action'} self.senlin_mock.get_action.return_value = mock.Mock( status='FAILED', status_reason='Unknown') exc = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(cluster.update, update_snippet)) self.assertEqual('ResourceInError: resources.senlin-cluster: ' 'Went to status FAILED due to "Unknown"', six.text_type(exc)) def test_cluster_get_attr_collect(self): cluster = self._create_cluster(self.t) self.senlin_mock.collect_cluster_attrs.return_value = [ mock.Mock(attr_value='ip1')] attr_path1 = ['details.addresses.private[0].addr'] self.assertEqual( ['ip1'], cluster.get_attribute(cluster.ATTR_COLLECT, *attr_path1)) attr_path2 = ['details.addresses.private[0].addr', 0] self.assertEqual( 'ip1', cluster.get_attribute(cluster.ATTR_COLLECT, *attr_path2)) self.senlin_mock.collect_cluster_attrs.assert_called_with( cluster.resource_id, attr_path2[0]) def test_cluster_resolve_attribute(self): expected_show = { 'id': 'some_id', 'status': 'ACTIVE', 'status_reason': 'Unknown', 'name': 'SenlinCluster', 'metadata': {'foo': 'bar'}, 'timeout': 3600, 'desired_capacity': 1, 'max_size': -1, 'min_size': 0, 'nodes': ['node1'], 'profile_name': 'fake_profile', 'profile_id': 'fake_profile_id', 'policies': [{'policy_id': 'fake_policy_id', 'enabled': True}] } cluster = self._create_cluster(self.t) self.assertEqual(self.fake_cl.desired_capacity, cluster._resolve_attribute('desired_capacity')) self.assertEqual(['node1'], 
cluster._resolve_attribute('nodes')) self.assertEqual(expected_show, cluster._show_resource()) def test_cluster_get_live_state(self): expected_reality = { 'name': 'SenlinCluster', 'metadata': {'foo': 'bar'}, 'timeout': 3600, 'desired_capacity': 1, 'max_size': -1, 'min_size': 0, 'profile': 'fake_profile_id', 'policies': [{'policy': 'fake_policy_id', 'enabled': True}] } cluster = self._create_cluster(self.t) self.senlin_mock.get_cluster.return_value = self.fake_cl reality = cluster.get_live_state(cluster.properties) self.assertEqual(expected_reality, reality) class TestSenlinClusterValidation(common.HeatTestCase): def setUp(self): super(TestSenlinClusterValidation, self).setUp() self.t = template_format.parse(cluster_stack_template) def test_invalid_min_max_size(self): self.t['resources']['senlin-cluster']['properties']['min_size'] = 2 self.t['resources']['senlin-cluster']['properties']['max_size'] = 1 stack = utils.parse_stack(self.t) ex = self.assertRaises(exception.StackValidationFailed, stack['senlin-cluster'].validate) self.assertEqual('min_size can not be greater than max_size', six.text_type(ex)) def test_invalid_desired_capacity(self): self.t['resources']['senlin-cluster']['properties']['min_size'] = 1 self.t['resources']['senlin-cluster']['properties']['max_size'] = 2 self.t['resources']['senlin-cluster']['properties'][ 'desired_capacity'] = 3 stack = utils.parse_stack(self.t) ex = self.assertRaises(exception.StackValidationFailed, stack['senlin-cluster'].validate) self.assertEqual( 'desired_capacity must be between min_size and max_size', six.text_type(ex) ) heat-10.0.2/heat/tests/openstack/zaqar/0000775000175000017500000000000013343562672017740 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/zaqar/test_subscription.py0000666000175000017500000004667313343562352024102 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
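# Tests for OS::Zaqar::Subscription and OS::Zaqar::MistralTrigger. Unlike
# the octavia and senlin tests above, these still use mox-style mocking
# (self.m.StubOutWithMock / ReplayAll / VerifyAll) rather than
# mock.MagicMock throughout.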
import mock import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import mistral as mistral_client_plugin from heat.engine import resource from heat.engine import scheduler from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils from oslo_serialization import jsonutils try: from zaqarclient.transport.errors import ResourceNotFound # noqa except ImportError: class ResourceNotFound(Exception): pass subscr_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Resources" : { "MyQueue2" : { "Type" : "OS::Zaqar::Queue", "Properties" : { "name" : "myqueue", "metadata" : { "key1" : { "key2" : "value", "key3" : [1, 2] } } } }, "MySubscription" : { "Type" : "OS::Zaqar::Subscription", "Properties" : { "queue_name" : "myqueue", "subscriber" : "mailto:name@domain.com", "ttl" : "3600", "options" : { "key1" : "value1" } } } } } ''' mistral_template = ''' { "heat_template_version" : "2015-10-15", "resources" : { "subscription" : { "type" : "OS::Zaqar::MistralTrigger", "properties" : { "queue_name" : "myqueue", "workflow_id": "abcd", "input" : { "key1" : "value1" } } } } } ''' class FakeSubscription(object): def __init__(self, queue_name, id=None, ttl=None, subscriber=None, options=None, auto_create=True): self.id = id self.queue_name = queue_name self.ttl = ttl self.subscriber = subscriber self.options = options def update(self, prop_diff): allowed_keys = {'subscriber', 'ttl', 'options'} for key in six.iterkeys(prop_diff): if key not in allowed_keys: raise KeyError(key) def delete(self): pass @mock.patch.object(resource.Resource, "client") class ZaqarSubscriptionTest(common.HeatTestCase): def setUp(self): super(ZaqarSubscriptionTest, self).setUp() self.fc = self.m.CreateMockAnything() self.ctx = utils.dummy_context() def parse_stack(self, t): stack_name = 'test_stack' tmpl = template.Template(t) self.stack = stack.Stack(self.ctx, stack_name, tmpl) self.stack.validate() self.stack.store() def test_validate_subscriber_type(self, mock_client): t = template_format.parse(subscr_template) t['Resources']['MySubscription']['Properties']['subscriber'] = "foo:ba" stack_name = 'test_stack' tmpl = template.Template(t) self.stack = stack.Stack(self.ctx, stack_name, tmpl) exc = self.assertRaises(exception.StackValidationFailed, self.stack.validate) self.assertEqual('The subscriber type of must be one of: http, https, ' 'mailto, trust+http, trust+https.', six.text_type(exc)) def test_create(self, mock_client): t = template_format.parse(subscr_template) self.parse_stack(t) subscr = self.stack['MySubscription'] subscr_id = "58138648c1e2eb7355d62137" self.m.StubOutWithMock(subscr, 'client') subscr.client().MultipleTimes().AndReturn(self.fc) fake_subscr = FakeSubscription(subscr.properties['queue_name'], subscr_id) self.m.StubOutWithMock(self.fc, 'subscription') self.fc.subscription(subscr.properties['queue_name'], options={'key1': 'value1'}, subscriber=u'mailto:name@domain.com', ttl=3600).AndReturn(fake_subscr) self.m.ReplayAll() scheduler.TaskRunner(subscr.create)() self.assertEqual(subscr_id, subscr.FnGetRefId()) self.m.VerifyAll() def test_delete(self, mock_client): t = template_format.parse(subscr_template) self.parse_stack(t) subscr = self.stack['MySubscription'] subscr_id = "58138648c1e2eb7355d62137" self.m.StubOutWithMock(subscr, 'client') subscr.client().MultipleTimes().AndReturn(self.fc) fake_subscr = FakeSubscription(subscr.properties['queue_name'], subscr_id) 
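# mox expectation order matters here: the first subscription() call is the
# create, the second (with auto_create=False) is the lookup performed by
# the delete handler before delete() is invoked on the subscription.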
        self.m.StubOutWithMock(self.fc, 'subscription')
        self.fc.subscription(
            subscr.properties['queue_name'], options={'key1': 'value1'},
            subscriber=u'mailto:name@domain.com',
            ttl=3600).AndReturn(fake_subscr)
        self.fc.subscription(
            subscr.properties['queue_name'], id=subscr_id,
            auto_create=False).AndReturn(fake_subscr)
        self.m.StubOutWithMock(fake_subscr, 'delete')
        fake_subscr.delete()
        self.m.ReplayAll()

        scheduler.TaskRunner(subscr.create)()
        scheduler.TaskRunner(subscr.delete)()
        self.m.VerifyAll()

    def test_delete_not_found(self, mock_client):
        t = template_format.parse(subscr_template)
        self.parse_stack(t)

        subscr = self.stack['MySubscription']
        subscr_id = "58138648c1e2eb7355d62137"

        self.m.StubOutWithMock(subscr, 'client')
        subscr.client().MultipleTimes().AndReturn(self.fc)

        fake_subscr = FakeSubscription(subscr.properties['queue_name'],
                                       subscr_id)
        self.m.StubOutWithMock(self.fc, 'subscription')
        self.fc.subscription(
            subscr.properties['queue_name'], options={'key1': 'value1'},
            subscriber=u'mailto:name@domain.com',
            ttl=3600).AndReturn(fake_subscr)
        self.fc.subscription(
            subscr.properties['queue_name'], id=subscr_id,
            auto_create=False).AndRaise(ResourceNotFound())
        self.m.ReplayAll()

        scheduler.TaskRunner(subscr.create)()
        scheduler.TaskRunner(subscr.delete)()
        self.m.VerifyAll()

    def test_update_in_place(self, mock_client):
        t = template_format.parse(subscr_template)
        self.parse_stack(t)

        subscr = self.stack['MySubscription']
        subscr_id = "58138648c1e2eb7355d62137"

        self.m.StubOutWithMock(subscr, 'client')
        subscr.client().MultipleTimes().AndReturn(self.fc)

        fake_subscr = FakeSubscription(subscr.properties['queue_name'],
                                       subscr_id)
        self.m.StubOutWithMock(self.fc, 'subscription')
        self.fc.subscription(
            subscr.properties['queue_name'], options={'key1': 'value1'},
            subscriber=u'mailto:name@domain.com',
            ttl=3600).AndReturn(fake_subscr)
        self.fc.subscription(
            subscr.properties['queue_name'], id=subscr_id,
            auto_create=False).AndReturn(fake_subscr)
        self.m.StubOutWithMock(fake_subscr, 'update')
        fake_subscr.update({'ttl': 3601, 'options': {'key1': 'value1'},
                            'subscriber': 'mailto:name@domain.com'})
        self.m.ReplayAll()

        t = template_format.parse(subscr_template)
        new_subscr = t['Resources']['MySubscription']
        new_subscr['Properties']['ttl'] = "3601"
        resource_defns = template.Template(t).resource_definitions(self.stack)

        scheduler.TaskRunner(subscr.create)()
        scheduler.TaskRunner(subscr.update,
                             resource_defns['MySubscription'])()
        self.m.VerifyAll()

    def test_update_replace(self, mock_client):
        t = template_format.parse(subscr_template)
        self.parse_stack(t)

        subscr = self.stack['MySubscription']
        subscr_id = "58138648c1e2eb7355d62137"

        self.m.StubOutWithMock(subscr, 'client')
        subscr.client().MultipleTimes().AndReturn(self.fc)

        fake_subscr = FakeSubscription(subscr.properties['queue_name'],
                                       subscr_id)
        self.m.StubOutWithMock(self.fc, 'subscription')
        self.fc.subscription(
            subscr.properties['queue_name'], options={'key1': 'value1'},
            subscriber=u'mailto:name@domain.com',
            ttl=3600).AndReturn(fake_subscr)
        self.m.ReplayAll()

        t = template_format.parse(subscr_template)
        t['Resources']['MySubscription']['Properties']['queue_name'] = 'foo'
        resource_defns = template.Template(t).resource_definitions(self.stack)
        new_subscr = resource_defns['MySubscription']

        scheduler.TaskRunner(subscr.create)()
        err = self.assertRaises(resource.UpdateReplace,
                                scheduler.TaskRunner(subscr.update,
                                                     new_subscr))
        msg = 'The Resource MySubscription requires replacement.'
        self.assertEqual(msg, six.text_type(err))
        self.m.VerifyAll()

    def test_show_resource(self, mock_client):
        t = template_format.parse(subscr_template)
        self.parse_stack(t)

        subscr = self.stack['MySubscription']
        subscr_id = "58138648c1e2eb7355d62137"

        self.m.StubOutWithMock(subscr, 'client')
        subscr.client().MultipleTimes().AndReturn(self.fc)

        fake_subscr = FakeSubscription(subscr.properties['queue_name'],
                                       subscr_id)
        props = t['Resources']['MySubscription']['Properties']
        fake_subscr.ttl = props['ttl']
        fake_subscr.subscriber = props['subscriber']
        fake_subscr.options = props['options']

        self.m.StubOutWithMock(self.fc, 'subscription')
        self.fc.subscription(
            subscr.properties['queue_name'], options={'key1': 'value1'},
            subscriber=u'mailto:name@domain.com',
            ttl=3600).AndReturn(fake_subscr)
        self.fc.subscription(
            subscr.properties['queue_name'], id=subscr_id,
            auto_create=False).MultipleTimes().AndReturn(fake_subscr)
        self.m.ReplayAll()

        rsrc_data = props.copy()
        rsrc_data['id'] = subscr_id

        scheduler.TaskRunner(subscr.create)()
        self.assertEqual(rsrc_data, subscr._show_resource())
        self.assertEqual(
            {'queue_name': props['queue_name'],
             'subscriber': props['subscriber'],
             'ttl': props['ttl'],
             'options': props['options']},
            subscr.parse_live_resource_data(subscr.properties,
                                            subscr._show_resource()))
        self.m.VerifyAll()


class JsonString(object):
    def __init__(self, data):
        self._data = data

    def __eq__(self, other):
        return self._data == jsonutils.loads(other)

    def __str__(self):
        return jsonutils.dumps(self._data)

    def __repr__(self):
        return str(self)


@mock.patch.object(resource.Resource, "client")
class ZaqarMistralTriggerTest(common.HeatTestCase):
    def setUp(self):
        super(ZaqarMistralTriggerTest, self).setUp()
        self.fc = self.m.CreateMockAnything()
        self.ctx = utils.dummy_context()
        self.patchobject(mistral_client_plugin.WorkflowConstraint,
                         'validate', return_value=True)

        stack_name = 'test_stack'
        t = template_format.parse(mistral_template)
        tmpl = template.Template(t)
        self.stack = stack.Stack(self.ctx, stack_name, tmpl)
        self.stack.validate()
        self.stack.store()

        def client(name='zaqar'):
            if name == 'mistral':
                client = mock.Mock()
                http_client = mock.Mock()
                client.executions = mock.Mock(spec=['http_client'])
                client.executions.http_client = http_client
                http_client.base_url = 'http://mistral.example.net:8989'
                return client
            elif name == 'zaqar':
                return self.fc

        self.subscr = self.stack['subscription']
        self.subscr.client = mock.Mock(side_effect=client)

        self.subscriber = 'trust+http://mistral.example.net:8989/executions'
        self.options = {
            'post_data': JsonString({
                'workflow_id': 'abcd',
                'input': {"key1": "value1"},
                'params': {'env': {'notification': '$zaqar_message$'}},
            })
        }

    def test_create(self, mock_client):
        subscr = self.subscr
        subscr_id = "58138648c1e2eb7355d62137"

        fake_subscr = FakeSubscription(subscr.properties['queue_name'],
                                       subscr_id)
        self.m.StubOutWithMock(self.fc, 'subscription')
        self.fc.subscription(
            subscr.properties['queue_name'], options=self.options,
            subscriber=self.subscriber,
            ttl=220367260800).AndReturn(fake_subscr)
        self.m.ReplayAll()

        scheduler.TaskRunner(subscr.create)()
        self.assertEqual(subscr_id, subscr.FnGetRefId())
        self.m.VerifyAll()

    def test_delete(self, mock_client):
        subscr = self.subscr
        subscr_id = "58138648c1e2eb7355d62137"

        fake_subscr = FakeSubscription(subscr.properties['queue_name'],
                                       subscr_id)
        self.m.StubOutWithMock(self.fc, 'subscription')
        self.fc.subscription(
            subscr.properties['queue_name'], options=self.options,
            subscriber=self.subscriber,
            ttl=220367260800).AndReturn(fake_subscr)
        self.fc.subscription(
            subscr.properties['queue_name'], id=subscr_id,
            auto_create=False).AndReturn(fake_subscr)
        self.m.StubOutWithMock(fake_subscr, 'delete')
        fake_subscr.delete()
        self.m.ReplayAll()

        scheduler.TaskRunner(subscr.create)()
        scheduler.TaskRunner(subscr.delete)()
        self.m.VerifyAll()

    def test_delete_not_found(self, mock_client):
        subscr = self.subscr
        subscr_id = "58138648c1e2eb7355d62137"

        fake_subscr = FakeSubscription(subscr.properties['queue_name'],
                                       subscr_id)
        self.m.StubOutWithMock(self.fc, 'subscription')
        self.fc.subscription(
            subscr.properties['queue_name'], options=self.options,
            subscriber=self.subscriber,
            ttl=220367260800).AndReturn(fake_subscr)
        self.fc.subscription(
            subscr.properties['queue_name'], id=subscr_id,
            auto_create=False).AndRaise(ResourceNotFound())
        self.m.ReplayAll()

        scheduler.TaskRunner(subscr.create)()
        scheduler.TaskRunner(subscr.delete)()
        self.m.VerifyAll()

    def test_update_in_place(self, mock_client):
        subscr = self.subscr
        subscr_id = "58138648c1e2eb7355d62137"

        fake_subscr = FakeSubscription(subscr.properties['queue_name'],
                                       subscr_id)
        self.m.StubOutWithMock(self.fc, 'subscription')
        self.fc.subscription(
            subscr.properties['queue_name'], options=self.options,
            subscriber=self.subscriber,
            ttl=220367260800).AndReturn(fake_subscr)
        self.fc.subscription(
            subscr.properties['queue_name'], id=subscr_id,
            auto_create=False).AndReturn(fake_subscr)
        self.m.StubOutWithMock(fake_subscr, 'update')
        fake_subscr.update({'ttl': 3601, 'subscriber': self.subscriber,
                            'options': self.options})
        self.m.ReplayAll()

        t = template_format.parse(mistral_template)
        new_subscr = t['resources']['subscription']
        new_subscr['properties']['ttl'] = "3601"
        resource_defns = template.Template(t).resource_definitions(self.stack)

        scheduler.TaskRunner(subscr.create)()
        scheduler.TaskRunner(subscr.update,
                             resource_defns['subscription'])()
        self.m.VerifyAll()

    def test_update_replace(self, mock_client):
        subscr = self.subscr
        subscr_id = "58138648c1e2eb7355d62137"

        fake_subscr = FakeSubscription(subscr.properties['queue_name'],
                                       subscr_id)
        self.m.StubOutWithMock(self.fc, 'subscription')
        self.fc.subscription(
            subscr.properties['queue_name'], options=self.options,
            subscriber=self.subscriber,
            ttl=220367260800).AndReturn(fake_subscr)
        self.m.ReplayAll()

        t = template_format.parse(mistral_template)
        t['resources']['subscription']['properties']['queue_name'] = 'foo'
        resource_defns = template.Template(t).resource_definitions(self.stack)
        new_subscr = resource_defns['subscription']

        scheduler.TaskRunner(subscr.create)()
        err = self.assertRaises(resource.UpdateReplace,
                                scheduler.TaskRunner(subscr.update,
                                                     new_subscr))
        msg = 'The Resource subscription requires replacement.'
        self.assertEqual(msg, six.text_type(err))
        self.m.VerifyAll()

    def test_show_resource(self, mock_client):
        subscr = self.subscr
        subscr_id = "58138648c1e2eb7355d62137"

        fake_subscr = FakeSubscription(subscr.properties['queue_name'],
                                       subscr_id)
        fake_subscr.ttl = 220367260800
        fake_subscr.subscriber = self.subscriber
        fake_subscr.options = {'post_data': str(self.options['post_data'])}

        self.m.StubOutWithMock(self.fc, 'subscription')
        self.fc.subscription(
            subscr.properties['queue_name'], options=self.options,
            subscriber=self.subscriber,
            ttl=220367260800).AndReturn(fake_subscr)
        self.fc.subscription(
            subscr.properties['queue_name'], id=subscr_id,
            auto_create=False).MultipleTimes().AndReturn(fake_subscr)
        self.m.ReplayAll()

        props = self.stack.t.t['resources']['subscription']['properties']
        scheduler.TaskRunner(subscr.create)()
        self.assertEqual(
            {'queue_name': props['queue_name'],
             'id': subscr_id,
             'subscriber': self.subscriber,
             'options': self.options,
             'ttl': 220367260800},
            subscr._show_resource())
        self.assertEqual(
            {'queue_name': props['queue_name'],
             'workflow_id': props['workflow_id'],
             'input': props['input'],
             'params': {},
             'ttl': 220367260800},
            subscr.parse_live_resource_data(subscr.properties,
                                            subscr._show_resource()))
heat-10.0.2/heat/tests/openstack/zaqar/test_queue.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import six
from six.moves.urllib import parse as urlparse

from heat.common import template_format
from heat.engine.clients import client_plugin
from heat.engine import resource
from heat.engine.resources.openstack.zaqar import queue
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import utils

try:
    from zaqarclient.transport.errors import ResourceNotFound  # noqa
except ImportError:
    ResourceNotFound = Exception

wp_template = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "openstack Zaqar queue service as a resource",
  "Resources" : {
    "MyQueue2" : {
      "Type" : "OS::Zaqar::Queue",
      "Properties" : {
        "name": "myqueue",
        "metadata": { "key1": { "key2": "value", "key3": [1, 2] } }
      }
    }
  },
  "Outputs" : {
    "queue_id": {
      "Value": { "Ref" : "MyQueue2" },
      "Description": "queue name"
    },
    "queue_href": {
      "Value": { "Fn::GetAtt" : [ "MyQueue2", "href" ]},
      "Description": "queue href"
    }
  }
}
'''


class FakeQueue(object):
    def __init__(self, queue_name, auto_create=True):
        self._id = queue_name
        self._auto_create = auto_create
        self._exists = False

    def metadata(self, new_meta=None):
        pass

    def delete(self):
        pass


class ZaqarMessageQueueTest(common.HeatTestCase):
    def setUp(self):
        super(ZaqarMessageQueueTest, self).setUp()
        self.fc = self.m.CreateMockAnything()
        self.ctx = utils.dummy_context()

    def parse_stack(self, t):
        stack_name = 'test_stack'
        tmpl = template.Template(t)
        self.stack = stack.Stack(self.ctx, stack_name, tmpl)
        self.stack.validate()
        self.stack.store()

    def test_create(self):
        t = template_format.parse(wp_template)
        self.parse_stack(t)

        queue = self.stack['MyQueue2']
        self.m.StubOutWithMock(queue, 'client')
        queue.client().MultipleTimes().AndReturn(self.fc)

        fake_q = FakeQueue(queue.physical_resource_name(), auto_create=False)
        self.m.StubOutWithMock(self.fc, 'queue')
        self.fc.queue(queue.physical_resource_name(),
                      auto_create=False).AndReturn(fake_q)
        self.m.StubOutWithMock(fake_q, 'metadata')
        fake_q.metadata(new_meta=queue.properties.get('metadata'))
        self.m.ReplayAll()

        scheduler.TaskRunner(queue.create)()
        self.fc.api_url = 'http://127.0.0.1:8888/'
        self.fc.api_version = 1.1
        self.assertEqual('http://127.0.0.1:8888/v1.1/queues/myqueue',
                         queue.FnGetAtt('href'))
        self.m.VerifyAll()

    def test_create_default_name(self):
        t = template_format.parse(wp_template)
        del t['Resources']['MyQueue2']['Properties']['name']
        self.parse_stack(t)

        queue = self.stack['MyQueue2']
        self.m.StubOutWithMock(queue, 'client')
        queue.client().MultipleTimes().AndReturn(self.fc)

        name_match = utils.PhysName(self.stack.name, 'MyQueue2')
        self.m.StubOutWithMock(self.fc, 'queue')
        self.fc.queue(name_match, auto_create=False).WithSideEffects(FakeQueue)
        self.m.ReplayAll()

        scheduler.TaskRunner(queue.create)()
        queue_name = queue.physical_resource_name()
        self.assertEqual(name_match, queue_name)

        self.fc.api_url = 'http://127.0.0.1:8888'
        self.fc.api_version = 2
        self.assertEqual('http://127.0.0.1:8888/v2/queues/' + queue_name,
                         queue.FnGetAtt('href'))
        self.m.VerifyAll()

    def test_delete(self):
        t = template_format.parse(wp_template)
        self.parse_stack(t)

        queue = self.stack['MyQueue2']
        queue.resource_id_set(queue.properties.get('name'))
        self.m.StubOutWithMock(queue, 'client')
        queue.client().MultipleTimes().AndReturn(self.fc)

        fake_q = FakeQueue("myqueue", auto_create=False)
        self.m.StubOutWithMock(self.fc, 'queue')
        self.fc.queue("myqueue",
                      auto_create=False).MultipleTimes().AndReturn(fake_q)
        self.m.StubOutWithMock(fake_q, 'delete')
        fake_q.delete()
        self.m.ReplayAll()

        scheduler.TaskRunner(queue.create)()
        scheduler.TaskRunner(queue.delete)()
        self.m.VerifyAll()

    @mock.patch.object(queue.ZaqarQueue, "client")
    def test_delete_not_found(self, mockclient):
        class ZaqarClientPlugin(client_plugin.ClientPlugin):
            def _create(self):
                return mockclient()

        mock_def = mock.Mock(spec=rsrc_defn.ResourceDefinition)
        mock_def.resource_type = 'OS::Zaqar::Queue'
        props = mock.Mock()
        props.props = {}
        mock_def.properties.return_value = props
        stack = utils.parse_stack(template_format.parse(wp_template))
        self.patchobject(stack, 'db_resource_get', return_value=None)
        mockplugin = ZaqarClientPlugin(self.ctx)
        clients = self.patchobject(stack, 'clients')
        clients.client_plugin.return_value = mockplugin

        mockplugin.is_not_found = mock.Mock()
        mockplugin.is_not_found.return_value = True
        zaqar_q = mock.Mock()
        zaqar_q.delete.side_effect = ResourceNotFound()
        mockclient.return_value.queue.return_value = zaqar_q

        zplugin = queue.ZaqarQueue("test_delete_not_found", mock_def, stack)
        zplugin.resource_id = "test_delete_not_found"
        zplugin.handle_delete()

        clients.client_plugin.assert_called_once_with('zaqar')
        mockplugin.is_not_found.assert_called_once_with(
            zaqar_q.delete.side_effect)
        mockclient.return_value.queue.assert_called_once_with(
            "test_delete_not_found", auto_create=False)

    def test_update_in_place(self):
        t = template_format.parse(wp_template)
        self.parse_stack(t)

        queue = self.stack['MyQueue2']
        queue.resource_id_set(queue.properties.get('name'))
        self.m.StubOutWithMock(queue, 'client')
        queue.client().MultipleTimes().AndReturn(self.fc)

        fake_q = FakeQueue('myqueue', auto_create=False)
        self.m.StubOutWithMock(self.fc, 'queue')
        self.fc.queue('myqueue',
                      auto_create=False).MultipleTimes().AndReturn(fake_q)
        self.m.StubOutWithMock(fake_q, 'metadata')
        fake_q.metadata(new_meta={"key1": {"key2": "value", "key3": [1, 2]}})
        # Expected to be called during update
        fake_q.metadata(new_meta={'key1': 'value'})
        self.m.ReplayAll()

        t = template_format.parse(wp_template)
        new_queue = t['Resources']['MyQueue2']
        new_queue['Properties']['metadata'] = {'key1': 'value'}
        resource_defns = template.Template(t).resource_definitions(self.stack)

        scheduler.TaskRunner(queue.create)()
        scheduler.TaskRunner(queue.update, resource_defns['MyQueue2'])()
        self.m.VerifyAll()

    def test_update_replace(self):
        t = template_format.parse(wp_template)
        self.parse_stack(t)

        queue = self.stack['MyQueue2']
        queue.resource_id_set(queue.properties.get('name'))
        self.m.StubOutWithMock(queue, 'client')
        queue.client().MultipleTimes().AndReturn(self.fc)

        fake_q = FakeQueue('myqueue', auto_create=False)
        self.m.StubOutWithMock(self.fc, 'queue')
        self.fc.queue('myqueue',
                      auto_create=False).MultipleTimes().AndReturn(fake_q)
        self.m.ReplayAll()

        t = template_format.parse(wp_template)
        t['Resources']['MyQueue2']['Properties']['name'] = 'new_queue'
        resource_defns = template.Template(t).resource_definitions(self.stack)
        new_queue = resource_defns['MyQueue2']

        scheduler.TaskRunner(queue.create)()
        err = self.assertRaises(resource.UpdateReplace,
                                scheduler.TaskRunner(queue.update,
                                                     new_queue))
        msg = 'The Resource MyQueue2 requires replacement.'
        self.assertEqual(msg, six.text_type(err))
        self.m.VerifyAll()

    def test_show_resource(self):
        t = template_format.parse(wp_template)
        self.parse_stack(t)

        queue = self.stack['MyQueue2']
        self.m.StubOutWithMock(queue, 'client')
        queue.client().MultipleTimes().AndReturn(self.fc)

        fake_q = FakeQueue(queue.physical_resource_name(), auto_create=False)
        self.m.StubOutWithMock(self.fc, 'queue')
        self.fc.queue(queue.physical_resource_name(),
                      auto_create=False).AndReturn(fake_q)
        self.m.StubOutWithMock(fake_q, 'metadata')
        fake_q.metadata(new_meta=queue.properties.get('metadata'))
        self.fc.queue(queue.physical_resource_name(),
                      auto_create=False).AndReturn(fake_q)
        fake_q.metadata().AndReturn(
            {"key1": {"key2": "value", "key3": [1, 2]}})
        self.m.ReplayAll()

        scheduler.TaskRunner(queue.create)()
        self.assertEqual(
            {'metadata': {"key1": {"key2": "value", "key3": [1, 2]}}},
            queue._show_resource())
        self.m.VerifyAll()

    def test_parse_live_resource_data(self):
        t = template_format.parse(wp_template)
        self.parse_stack(t)

        queue = self.stack['MyQueue2']
        self.m.StubOutWithMock(queue, 'client')
        queue.client().MultipleTimes().AndReturn(self.fc)

        fake_q = FakeQueue(queue.physical_resource_name(), auto_create=False)
        self.m.StubOutWithMock(self.fc, 'queue')
        self.fc.queue(queue.physical_resource_name(),
                      auto_create=False).AndReturn(fake_q)
        self.m.StubOutWithMock(fake_q, 'metadata')
        fake_q.metadata(new_meta=queue.properties.get('metadata'))
        self.fc.queue(queue.physical_resource_name(),
                      auto_create=False).AndReturn(fake_q)
        fake_q.metadata().AndReturn(
            {"key1": {"key2": "value", "key3": [1, 2]}})
        self.m.ReplayAll()

        scheduler.TaskRunner(queue.create)()
        self.assertEqual(
            {'metadata': {"key1": {"key2": "value", "key3": [1, 2]}},
             'name': queue.resource_id},
            queue.parse_live_resource_data(queue.properties,
                                           queue._show_resource()))
        self.m.VerifyAll()


class ZaqarSignedQueueURLTest(common.HeatTestCase):
    tmpl = '''
heat_template_version: 2015-10-15
resources:
  signed_url:
    type: OS::Zaqar::SignedQueueURL
    properties:
      queue: foo
      ttl: 60
      paths:
        - messages
        - subscription
      methods:
        - POST
        - DELETE
'''

    @mock.patch('zaqarclient.queues.v2.queues.Queue.signed_url')
    def test_create(self, mock_signed_url):
        mock_signed_url.return_value = {
            'expires': '2020-01-01',
            'signature': 'secret',
            'project': 'project_id',
            'paths': ['/v2/foo/messages', '/v2/foo/sub'],
            'methods': ['DELETE', 'POST']}

        self.t = template_format.parse(self.tmpl)
        self.stack = utils.parse_stack(self.t)
        self.rsrc = self.stack['signed_url']
        self.assertIsNone(self.rsrc.validate())

        self.stack.create()
        self.assertEqual(self.rsrc.CREATE, self.rsrc.action)
        self.assertEqual(self.rsrc.COMPLETE, self.rsrc.status)
        self.assertEqual(self.stack.CREATE, self.stack.action)
        self.assertEqual(self.stack.COMPLETE, self.stack.status)

        mock_signed_url.assert_called_once_with(
            paths=['messages', 'subscription'],
            methods=['POST', 'DELETE'],
            ttl_seconds=60)

        self.assertEqual('secret', self.rsrc.FnGetAtt('signature'))
        self.assertEqual('2020-01-01', self.rsrc.FnGetAtt('expires'))
        self.assertEqual('project_id', self.rsrc.FnGetAtt('project'))
        self.assertEqual(['/v2/foo/messages', '/v2/foo/sub'],
                         self.rsrc.FnGetAtt('paths'))
        self.assertEqual(['DELETE', 'POST'],
                         self.rsrc.FnGetAtt('methods'))

        expected_query = {
            'queue_name': ['foo'],
            'expires': ['2020-01-01'],
            'signature': ['secret'],
            'project_id': ['project_id'],
            'paths': ['/v2/foo/messages,/v2/foo/sub'],
            'methods': ['DELETE,POST']
        }
        query_str_attr = self.rsrc.get_attribute('query_str')
        self.assertEqual(expected_query,
                         urlparse.parse_qs(query_str_attr,
                                           strict_parsing=True))
heat-10.0.2/heat/tests/openstack/zaqar/__init__.py
heat-10.0.2/heat/tests/openstack/sahara/
heat-10.0.2/heat/tests/openstack/sahara/test_data_source.py
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
import six

from heat.common import exception
from heat.common import template_format
from heat.engine.resources.openstack.sahara import data_source
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils

data_source_template = """
heat_template_version: 2015-10-15
resources:
  data-source:
    type: OS::Sahara::DataSource
    properties:
      name: my-ds
      type: swift
      url: swift://container.sahara/text
      credentials:
        user: admin
        password: swordfish
"""


class SaharaDataSourceTest(common.HeatTestCase):
    def setUp(self):
        super(SaharaDataSourceTest, self).setUp()
        t = template_format.parse(data_source_template)
        self.stack = utils.parse_stack(t)
        resource_defns = self.stack.t.resource_definitions(self.stack)
        self.rsrc_defn = resource_defns['data-source']
        self.client = mock.Mock()
        self.patchobject(data_source.DataSource, 'client',
                         return_value=self.client)

    def _create_resource(self, name, snippet, stack):
        ds = data_source.DataSource(name, snippet, stack)
        value = mock.MagicMock(id='12345')
        self.client.data_sources.create.return_value = value
        scheduler.TaskRunner(ds.create)()
        return ds

    def test_create(self):
        ds = self._create_resource('data-source', self.rsrc_defn, self.stack)
        args = self.client.data_sources.create.call_args[1]
        expected_args = {
            'name': 'my-ds',
            'description': '',
            'data_source_type': 'swift',
            'url': 'swift://container.sahara/text',
            'credential_user': 'admin',
            'credential_pass': 'swordfish'
        }
        self.assertEqual(expected_args, args)
        self.assertEqual('12345', ds.resource_id)
        expected_state = (ds.CREATE, ds.COMPLETE)
        self.assertEqual(expected_state, ds.state)

    def test_update(self):
        ds = self._create_resource('data-source', self.rsrc_defn, self.stack)
        props = self.stack.t.t['resources']['data-source'][
            'properties'].copy()
        props['type'] = 'hdfs'
        props['url'] = 'my/path'
        self.rsrc_defn = self.rsrc_defn.freeze(properties=props)
        scheduler.TaskRunner(ds.update, self.rsrc_defn)()
        data = {
            'name': 'my-ds',
            'description': '',
            'type': 'hdfs',
            'url': 'my/path',
            'credentials': {
                'user': 'admin',
                'password': 'swordfish'
            }
        }
        self.client.data_sources.update.assert_called_once_with(
            '12345', data)
        self.assertEqual((ds.UPDATE, ds.COMPLETE), ds.state)

    def test_show_attribute(self):
        ds = self._create_resource('data-source', self.rsrc_defn, self.stack)
        value = mock.MagicMock()
        value.to_dict.return_value = {'ds': 'info'}
        self.client.data_sources.get.return_value = value
        self.assertEqual({'ds': 'info'}, ds.FnGetAtt('show'))

    def test_validate_password_without_user(self):
        props = self.stack.t.t['resources']['data-source'][
            'properties'].copy()
        del props['credentials']['user']
        self.rsrc_defn = self.rsrc_defn.freeze(properties=props)
        ds = data_source.DataSource('data-source', self.rsrc_defn, self.stack)
        ex = self.assertRaises(exception.StackValidationFailed, ds.validate)
        error_msg = ('Property error: resources.data-source.properties.'
                     'credentials: Property user not assigned')
        self.assertEqual(error_msg, six.text_type(ex))
heat-10.0.2/heat/tests/openstack/sahara/test_job.py
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock

from heat.common import template_format
from heat.engine.clients.os import sahara
from heat.engine.resources.openstack.sahara import job
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils

job_template = """
heat_template_version: newton
resources:
  job:
    type: OS::Sahara::Job
    properties:
      name: test_name_job
      type: MapReduce
      libs: [ fake-lib-id ]
      description: test_description
      is_public: True
      default_execution_data:
        cluster: fake-cluster-id
        input: fake-input-id
        output: fake-output-id
        is_public: True
        configs:
          mapred.map.class: org.apache.oozie.example.SampleMapper
          mapred.reduce.class: org.apache.oozie.example.SampleReducer
          mapreduce.framework.name: yarn
"""


class SaharaJobTest(common.HeatTestCase):
    def setUp(self):
        super(SaharaJobTest, self).setUp()
        t = template_format.parse(job_template)
        self.stack = utils.parse_stack(t)
        resource_defns = self.stack.t.resource_definitions(self.stack)
        self.rsrc_defn = resource_defns['job']
        self.client = mock.Mock()
        self.patchobject(job.SaharaJob, 'client', return_value=self.client)
        fake_execution = mock.Mock()
        fake_execution.job_id = 'fake-resource-id'
        fake_execution.id = 'fake-execution-id'
        fake_execution.to_dict.return_value = {'job_id': 'fake-resource-id',
                                               'id': 'fake-execution-id'}
        self.client.job_executions.find.return_value = [fake_execution]

    def _create_resource(self, name, snippet, stack, without_name=False):
        jb = job.SaharaJob(name, snippet, stack)
        if without_name:
            self.client.jobs.create = mock.Mock(return_value='fake_rsrc_id')
            jb.physical_resource_name = mock.Mock(
                return_value='fake_phys_name')
        value = mock.MagicMock(id='fake-resource-id')
        self.client.jobs.create.return_value = value
        mock_get_res = mock.Mock(return_value='some res id')
        jb.client_plugin().find_resource_by_name_or_id = mock_get_res
        scheduler.TaskRunner(jb.create)()
        return jb

    def test_create(self):
        jb = self._create_resource('job', self.rsrc_defn, self.stack)
        args = self.client.jobs.create.call_args[1]
        expected_args = {
            'name': 'test_name_job',
            'type': 'MapReduce',
            'libs': ['some res id'],
            'description': 'test_description',
            'is_public': True,
            'is_protected': False,
            'mains': []
        }
        self.assertEqual(expected_args, args)
        self.assertEqual('fake-resource-id', jb.resource_id)
        expected_state = (jb.CREATE, jb.COMPLETE)
        self.assertEqual(expected_state, jb.state)

    def test_create_without_name_passed(self):
        props = self.stack.t.t['resources']['job']['properties']
        del props['name']
        self.rsrc_defn = self.rsrc_defn.freeze(properties=props)
        jb = self._create_resource('job', self.rsrc_defn, self.stack, True)
        args = self.client.jobs.create.call_args[1]
        expected_args = {
            'name': 'fake_phys_name',
            'type': 'MapReduce',
            'libs': ['some res id'],
            'description': 'test_description',
            'is_public': True,
            'is_protected': False,
            'mains': []
        }
        self.assertEqual(expected_args, args)
        self.assertEqual('fake-resource-id', jb.resource_id)
        expected_state = (jb.CREATE, jb.COMPLETE)
        self.assertEqual(expected_state, jb.state)

    def test_delete(self):
        jb = self._create_resource('job', self.rsrc_defn, self.stack)
        scheduler.TaskRunner(jb.delete)()
        self.assertEqual((jb.DELETE, jb.COMPLETE), jb.state)
        self.client.jobs.delete.assert_called_once_with(jb.resource_id)
        self.client.job_executions.delete.assert_called_once_with(
            'fake-execution-id')

    def test_delete_not_found(self):
        jb = self._create_resource('job', self.rsrc_defn, self.stack)
        self.client.jobs.delete.side_effect = (
            sahara.sahara_base.APIException(error_code=404))
        scheduler.TaskRunner(jb.delete)()
        self.assertEqual((jb.DELETE, jb.COMPLETE), jb.state)
        self.client.jobs.delete.assert_called_once_with(jb.resource_id)
        self.client.job_executions.delete.assert_called_once_with(
            'fake-execution-id')

    def test_delete_job_executions_raises_error(self):
        jb = self._create_resource('job', self.rsrc_defn, self.stack)
        self.client.job_executions.find.side_effect = [
            sahara.sahara_base.APIException(400)]
        self.assertRaises(sahara.sahara_base.APIException, jb.handle_delete)

    def test_update(self):
        jb = self._create_resource('job', self.rsrc_defn, self.stack)
        props = self.stack.t.t['resources']['job']['properties'].copy()
        props['name'] = 'test_name_job_new'
        props['description'] = 'test_description_new'
        props['is_public'] = False
        self.rsrc_defn = self.rsrc_defn.freeze(properties=props)
        scheduler.TaskRunner(jb.update, self.rsrc_defn)()
        self.client.jobs.update.assert_called_once_with(
            'fake-resource-id', name='test_name_job_new',
            description='test_description_new', is_public=False)
        self.assertEqual((jb.UPDATE, jb.COMPLETE), jb.state)

    def test_handle_signal(self):
        jb = self._create_resource('job', self.rsrc_defn, self.stack)
        scheduler.TaskRunner(jb.handle_signal, None)()
        expected_args = {
            'job_id': 'fake-resource-id',
            'cluster_id': 'some res id',
            'input_id': 'some res id',
            'output_id': 'some res id',
            'is_public': True,
            'is_protected': False,
            'interface': {},
            'configs': {
                'configs': {
                    'mapred.reduce.class':
                        'org.apache.oozie.example.SampleReducer',
                    'mapred.map.class':
                        'org.apache.oozie.example.SampleMapper',
                    'mapreduce.framework.name': 'yarn'},
                'args': [],
                'params': {}
            }
        }
        self.client.job_executions.create.assert_called_once_with(
            **expected_args)

    def test_attributes(self):
        jb = self._create_resource('job', self.rsrc_defn, self.stack)
        jb._get_ec2_signed_url = mock.Mock(return_value='fake-url')
        self.assertEqual('fake-execution-id',
                         jb.FnGetAtt('executions')[0]['id'])
        self.assertEqual('fake-url', jb.FnGetAtt('default_execution_url'))
heat-10.0.2/heat/tests/openstack/sahara/test_templates.py
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
import six

from heat.common import exception
from heat.common import template_format
from heat.engine.clients.os import neutron
from heat.engine.clients.os import nova
from heat.engine.clients.os import sahara
from heat.engine.resources.openstack.sahara import templates as st
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils

node_group_template = """
heat_template_version: 2013-05-23
description: Sahara Node Group Template
resources:
  node-group:
    type: OS::Sahara::NodeGroupTemplate
    properties:
      name: node-group-template
      plugin_name: vanilla
      hadoop_version: 2.3.0
      flavor: m1.large
      volume_type: lvm
      floating_ip_pool: some_pool_name
      node_processes:
        - namenode
        - jobtracker
      is_proxy_gateway: True
      shares:
        - id: e45eaabf-9300-42e2-b6eb-9ebc92081f46
          access_level: ro
"""

cluster_template = """
heat_template_version: 2013-05-23
description: Sahara Cluster Template
resources:
  cluster-template:
    type: OS::Sahara::ClusterTemplate
    properties:
      name: test-cluster-template
      plugin_name: vanilla
      hadoop_version: 2.3.0
      neutron_management_network: some_network
      shares:
        - id: e45eaabf-9300-42e2-b6eb-9ebc92081f46
          access_level: ro
"""

cluster_template_without_name = """
heat_template_version: 2013-05-23
resources:
  cluster_template!:
    type: OS::Sahara::ClusterTemplate
    properties:
      plugin_name: vanilla
      hadoop_version: 2.3.0
      neutron_management_network: some_network
"""

node_group_template_without_name = """
heat_template_version: 2013-05-23
resources:
  node_group!:
    type: OS::Sahara::NodeGroupTemplate
    properties:
      plugin_name: vanilla
      hadoop_version: 2.3.0
      flavor: m1.large
      floating_ip_pool: some_pool_name
      node_processes:
        - namenode
        - jobtracker
"""


class FakeNodeGroupTemplate(object):
    def __init__(self):
        self.id = "some_ng_id"
        self.name = "test-cluster-template"
        self.to_dict = lambda: {"ng-template": "info"}


class FakeClusterTemplate(object):
    def __init__(self):
        self.id = "some_ct_id"
        self.name = "node-group-template"
        self.to_dict = lambda: {"cluster-template": "info"}


class SaharaNodeGroupTemplateTest(common.HeatTestCase):
    def setUp(self):
        super(SaharaNodeGroupTemplateTest, self).setUp()
        self.stub_FlavorConstraint_validate()
        self.stub_SaharaPluginConstraint()
        self.stub_VolumeTypeConstraint_validate()
        self.patchobject(nova.NovaClientPlugin,
                         'find_flavor_by_name_or_id'
                         ).return_value = 'someflavorid'
        self.patchobject(neutron.NeutronClientPlugin, '_create')
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id',
                         return_value='some_pool_id')
        sahara_mock = mock.MagicMock()
        self.ngt_mgr = sahara_mock.node_group_templates
        self.plugin_mgr = sahara_mock.plugins
        self.patchobject(sahara.SaharaClientPlugin,
                         '_create').return_value = sahara_mock
        self.patchobject(sahara.SaharaClientPlugin,
                         'validate_hadoop_version').return_value = None
        self.fake_ngt = FakeNodeGroupTemplate()

        self.t = template_format.parse(node_group_template)
        self.ngt_props = self.t['resources']['node-group']['properties']

    def _init_ngt(self, template):
        self.stack = utils.parse_stack(template)
        return self.stack['node-group']

    def _create_ngt(self, template):
        ngt = self._init_ngt(template)
        self.ngt_mgr.create.return_value = self.fake_ngt
        scheduler.TaskRunner(ngt.create)()
        self.assertEqual((ngt.CREATE, ngt.COMPLETE), ngt.state)
        self.assertEqual(self.fake_ngt.id, ngt.resource_id)
        return ngt

    def test_ngt_create(self):
        self._create_ngt(self.t)
        args = {
            'name': 'node-group-template',
            'plugin_name': 'vanilla',
            'hadoop_version': '2.3.0',
            'flavor_id': 'someflavorid',
            'description': "",
            'volumes_per_node': 0,
            'volumes_size': None,
            'volume_type': 'lvm',
            'security_groups': None,
            'auto_security_group': None,
            'availability_zone': None,
            'volumes_availability_zone': None,
            'node_processes': ['namenode', 'jobtracker'],
            'floating_ip_pool': 'some_pool_id',
            'node_configs': None,
            'image_id': None,
            'is_proxy_gateway': True,
            'volume_local_to_instance': None,
            'use_autoconfig': None,
            'shares': [{'id': 'e45eaabf-9300-42e2-b6eb-9ebc92081f46',
                        'access_level': 'ro',
                        'path': None}]
        }
        self.ngt_mgr.create.assert_called_once_with(**args)

    def test_validate_floatingippool_on_neutron_fails(self):
        ngt = self._init_ngt(self.t)
        self.patchobject(
            neutron.NeutronClientPlugin,
            'find_resourceid_by_name_or_id'
        ).side_effect = [
            neutron.exceptions.NeutronClientNoUniqueMatch(message='Too many'),
            neutron.exceptions.NeutronClientException(message='Not found',
                                                      status_code=404)
        ]
        ex = self.assertRaises(exception.StackValidationFailed, ngt.validate)
        self.assertEqual('Too many', six.text_type(ex))
        ex = self.assertRaises(exception.StackValidationFailed, ngt.validate)
        self.assertEqual('Not found', six.text_type(ex))

    def test_validate_flavor_constraint_return_false(self):
        self.t['resources']['node-group']['properties'].pop(
            'floating_ip_pool')
        self.t['resources']['node-group']['properties'].pop('volume_type')
        ngt = self._init_ngt(self.t)
        self.patchobject(nova.FlavorConstraint, 'validate'
                         ).return_value = False
        ex = self.assertRaises(exception.StackValidationFailed, ngt.validate)
        self.assertEqual(u"Property error: "
                         u"resources.node-group.properties.flavor: "
                         u"Error validating value 'm1.large'",
                         six.text_type(ex))

    def test_template_invalid_name(self):
        tmpl = template_format.parse(node_group_template_without_name)
        stack = utils.parse_stack(tmpl)
        ngt = stack['node_group!']
        self.ngt_mgr.create.return_value = self.fake_ngt
        scheduler.TaskRunner(ngt.create)()
        self.assertEqual((ngt.CREATE, ngt.COMPLETE), ngt.state)
        self.assertEqual(self.fake_ngt.id, ngt.resource_id)
        name = self.ngt_mgr.create.call_args[1]['name']
        self.assertIn('-nodegroup-', name)

    def test_ngt_show_resource(self):
        ngt = self._create_ngt(self.t)
        self.ngt_mgr.get.return_value = self.fake_ngt
        self.assertEqual({"ng-template": "info"}, ngt.FnGetAtt('show'))
        self.ngt_mgr.get.assert_called_once_with('some_ng_id')

    def test_validate_node_processes_fails(self):
        ngt = self._init_ngt(self.t)
        plugin_mock = mock.MagicMock()
        plugin_mock.node_processes = {
            "HDFS": ["namenode", "datanode", "secondarynamenode"],
            "JobFlow": ["oozie"]
        }
        self.plugin_mgr.get_version_details.return_value = plugin_mock
        ex = self.assertRaises(exception.StackValidationFailed, ngt.validate)
        self.assertIn("resources.node-group.properties: Plugin vanilla "
                      "doesn't support the following node processes: "
                      "jobtracker. Allowed processes are: ",
                      six.text_type(ex))
        self.assertIn("namenode", six.text_type(ex))
        self.assertIn("datanode", six.text_type(ex))
        self.assertIn("secondarynamenode", six.text_type(ex))
        self.assertIn("oozie", six.text_type(ex))

    def test_update(self):
        ngt = self._create_ngt(self.t)
        props = self.ngt_props.copy()
        props['node_processes'] = ['tasktracker', 'datanode']
        props['name'] = 'new-ng-template'
        rsrc_defn = ngt.t.freeze(properties=props)
        scheduler.TaskRunner(ngt.update, rsrc_defn)()
        args = {'node_processes': ['tasktracker', 'datanode'],
                'name': 'new-ng-template'}
        self.ngt_mgr.update.assert_called_once_with('some_ng_id', **args)
        self.assertEqual((ngt.UPDATE, ngt.COMPLETE), ngt.state)

    def test_get_live_state(self):
        ngt = self._create_ngt(self.t)
        resp = mock.MagicMock()
        resp.to_dict.return_value = {
            'volume_local_to_instance': False,
            'availability_zone': None,
            'updated_at': None,
            'use_autoconfig': True,
            'volumes_per_node': 0,
            'id': '6157755e-dfd3-45b4-a445-36588e5f75ad',
            'security_groups': None,
            'shares': None,
            'node_configs': {},
            'auto_security_group': False,
            'volumes_availability_zone': None,
            'description': '',
            'volume_mount_prefix': '/volumes/disk',
            'plugin_name': 'vanilla',
            'floating_ip_pool': None,
            'is_default': False,
            'image_id': None,
            'volumes_size': 0,
            'is_proxy_gateway': False,
            'is_public': False,
            'hadoop_version': '2.7.1',
            'name': 'cluster-nodetemplate-jlgzovdaivn',
            'tenant_id': '221b4f51e9bd4f659845f657a3051a46',
            'created_at': '2016-01-29T11:08:46',
            'volume_type': None,
            'is_protected': False,
            'node_processes': ['namenode'],
            'flavor_id': '2'}
        self.ngt_mgr.get.return_value = resp

        # Simulate replace translation rule execution.
        ngt.properties.data['flavor'] = '1'

        reality = ngt.get_live_state(ngt.properties)
        expected = {
            'volume_local_to_instance': False,
            'availability_zone': None,
            'use_autoconfig': True,
            'volumes_per_node': 0,
            'security_groups': None,
            'shares': None,
            'node_configs': {},
            'auto_security_group': False,
            'volumes_availability_zone': None,
            'description': '',
            'plugin_name': 'vanilla',
            'floating_ip_pool': None,
            'image_id': None,
            'volumes_size': 0,
            'is_proxy_gateway': False,
            'hadoop_version': '2.7.1',
            'name': 'cluster-nodetemplate-jlgzovdaivn',
            'volume_type': None,
            'node_processes': ['namenode'],
            'flavor': '2'
        }

        self.assertEqual(expected, reality)

        # Make sure that old flavor will return when ids are equal - simulate
        # replace translation rule execution.
        ngt.properties.data['flavor'] = '2'
        reality = ngt.get_live_state(ngt.properties)
        self.assertEqual('2', reality.get('flavor'))


class SaharaClusterTemplateTest(common.HeatTestCase):
    def setUp(self):
        super(SaharaClusterTemplateTest, self).setUp()
        self.patchobject(st.constraints.CustomConstraint,
                         '_is_valid').return_value = True
        self.patchobject(neutron.NeutronClientPlugin, '_create')
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id',
                         return_value='some_network_id')
        sahara_mock = mock.MagicMock()
        self.ct_mgr = sahara_mock.cluster_templates
        self.patchobject(sahara.SaharaClientPlugin,
                         '_create').return_value = sahara_mock
        self.patchobject(sahara.SaharaClientPlugin,
                         'validate_hadoop_version').return_value = None
        self.fake_ct = FakeClusterTemplate()

        self.t = template_format.parse(cluster_template)

    def _init_ct(self, template):
        self.stack = utils.parse_stack(template)
        return self.stack['cluster-template']

    def _create_ct(self, template):
        ct = self._init_ct(template)
        self.ct_mgr.create.return_value = self.fake_ct
        scheduler.TaskRunner(ct.create)()
        self.assertEqual((ct.CREATE, ct.COMPLETE), ct.state)
        self.assertEqual(self.fake_ct.id, ct.resource_id)
        return ct

    def test_ct_create(self):
        self._create_ct(self.t)
        args = {
            'name': 'test-cluster-template',
            'plugin_name': 'vanilla',
            'hadoop_version': '2.3.0',
            'description': '',
            'default_image_id': None,
            'net_id': 'some_network_id',
            'anti_affinity': None,
            'node_groups': None,
            'cluster_configs': None,
            'use_autoconfig': None,
            'shares': [{'id': 'e45eaabf-9300-42e2-b6eb-9ebc92081f46',
                        'access_level': 'ro',
                        'path': None}]
        }
        self.ct_mgr.create.assert_called_once_with(**args)

    def test_ct_validate_no_network_on_neutron_fails(self):
        self.t['resources']['cluster-template']['properties'].pop(
            'neutron_management_network')
        ct = self._init_ct(self.t)
        self.patchobject(ct, 'is_using_neutron', return_value=True)
        ex = self.assertRaises(exception.StackValidationFailed, ct.validate)
        self.assertEqual("neutron_management_network must be provided",
                         six.text_type(ex))

    def test_template_invalid_name(self):
        tmpl = template_format.parse(cluster_template_without_name)
        stack = utils.parse_stack(tmpl)
        ct = stack['cluster_template!']
        self.ct_mgr.create.return_value = self.fake_ct
        scheduler.TaskRunner(ct.create)()
        self.assertEqual((ct.CREATE, ct.COMPLETE), ct.state)
        self.assertEqual(self.fake_ct.id, ct.resource_id)
        name = self.ct_mgr.create.call_args[1]['name']
        self.assertIn('-clustertemplate-', name)

    def test_ct_show_resource(self):
        ct = self._create_ct(self.t)
        self.ct_mgr.get.return_value = self.fake_ct
        self.assertEqual({"cluster-template": "info"}, ct.FnGetAtt('show'))
        self.ct_mgr.get.assert_called_once_with('some_ct_id')

    def test_update(self):
        ct = self._create_ct(self.t)
        rsrc_defn = self.stack.t.resource_definitions(self.stack)[
            'cluster-template']
        props = self.t['resources']['cluster-template']['properties'].copy()
        props['plugin_name'] = 'hdp'
        props['hadoop_version'] = '1.3.2'
        props['name'] = 'new-cluster-template'
        rsrc_defn = rsrc_defn.freeze(properties=props)
        scheduler.TaskRunner(ct.update, rsrc_defn)()
        args = {
            'plugin_name': 'hdp',
            'hadoop_version': '1.3.2',
            'name': 'new-cluster-template'
        }
        self.ct_mgr.update.assert_called_once_with('some_ct_id', **args)
        self.assertEqual((ct.UPDATE, ct.COMPLETE), ct.state)

    def test_ct_get_live_state(self):
        ct = self._create_ct(self.t)
        resp = mock.MagicMock()
        resp.to_dict.return_value = {
            'neutron_management_network': 'public',
            'description': '',
            'cluster_configs': {},
            'created_at': '2016-01-29T11:45:47',
            'default_image_id': None,
            'updated_at': None,
            'plugin_name': 'vanilla',
            'shares': None,
            'is_default': False,
            'is_protected': False,
            'use_autoconfig': True,
            'anti_affinity': [],
            'tenant_id': '221b4f51e9bd4f659845f657a3051a46',
            'node_groups': [{'volume_local_to_instance': False,
                             'availability_zone': None,
                             'updated_at': None,
                             'node_group_template_id': '1234',
                             'volumes_per_node': 0,
                             'id': '48c356f6-bbe1-4b26-a90a-f3d543c2ea4c',
                             'security_groups': None,
                             'shares': None,
                             'node_configs': {},
                             'auto_security_group': False,
                             'volumes_availability_zone': None,
                             'volume_mount_prefix': '/volumes/disk',
                             'floating_ip_pool': None,
                             'image_id': None,
                             'volumes_size': 0,
                             'is_proxy_gateway': False,
                             'count': 1,
                             'name': 'test',
                             'created_at': '2016-01-29T11:45:47',
                             'volume_type': None,
                             'node_processes': ['namenode'],
                             'flavor_id': '2',
                             'use_autoconfig': True}],
            'is_public': False,
            'hadoop_version': '2.7.1',
            'id': 'c07b8c63-b944-47f9-8588-085547a45c1b',
            'name': 'cluster-template-ykokor6auha4'}
        self.ct_mgr.get.return_value = resp

        reality = ct.get_live_state(ct.properties)
        expected = {
            'neutron_management_network': 'public',
            'description': '',
            'cluster_configs': {},
            'default_image_id': None,
            'plugin_name': 'vanilla',
            'shares': None,
            'anti_affinity': [],
            'node_groups': [{'node_group_template_id': '1234',
                             'count': 1,
                             'name': 'test'}],
            'hadoop_version': '2.7.1',
            'name': 'cluster-template-ykokor6auha4'
        }

        self.assertEqual(set(expected.keys()), set(reality.keys()))
        expected_node_group = sorted(expected.pop('node_groups'))
        reality_node_group = sorted(reality.pop('node_groups'))
        for i in range(len(expected_node_group)):
            self.assertEqual(expected_node_group[i], reality_node_group[i])
        self.assertEqual(expected, reality)
heat-10.0.2/heat/tests/openstack/sahara/__init__.py
heat-10.0.2/heat/tests/openstack/sahara/test_image.py
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock

from heat.common import template_format
from heat.engine.clients.os import glance
from heat.engine.clients.os import sahara
from heat.engine.resources.openstack.sahara import image
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils

sahara_image_template = """
heat_template_version: 2015-10-15
resources:
  sahara-image:
    type: OS::Sahara::ImageRegistry
    properties:
      image: sahara-icehouse-vanilla-1.2.1-ubuntu-13.10
      username: ubuntu
      tags:
        - vanilla
        - 1.2.1
"""


class SaharaImageTest(common.HeatTestCase):
    def setUp(self):
        super(SaharaImageTest, self).setUp()
        self.tmpl = template_format.parse(sahara_image_template)
        self.stack = utils.parse_stack(self.tmpl)
        resource_defns = self.stack.t.resource_definitions(self.stack)
        self.rsrc_defn = resource_defns['sahara-image']
        self.client = mock.Mock()
        self.patchobject(image.SaharaImageRegistry, 'client',
                         return_value=self.client)
        self.patchobject(glance.GlanceClientPlugin,
                         'find_image_by_name_or_id',
                         return_value='12345')

    def _create_resource(self, name, snippet, stack):
        img = image.SaharaImageRegistry(name, snippet, stack)
        scheduler.TaskRunner(img.create)()
        return img

    def test_create(self):
        img = self._create_resource('sahara-image', self.rsrc_defn,
                                    self.stack)
        args = ('12345', 'ubuntu', '')
        self.client.images.update_image.assert_called_once_with(*args)
        self.client.images.update_tags.assert_called_once_with(
            '12345', ['vanilla', '1.2.1'])
        self.assertEqual('12345', img.resource_id)
        expected_state = (img.CREATE, img.COMPLETE)
        self.assertEqual(expected_state, img.state)

    def test_update(self):
        img = self._create_resource('sahara-image', self.rsrc_defn,
                                    self.stack)
        props = self.tmpl['resources']['sahara-image']['properties'].copy()
        props['tags'] = []
        props['description'] = 'test image'
        self.rsrc_defn = self.rsrc_defn.freeze(properties=props)
        scheduler.TaskRunner(img.update, self.rsrc_defn)()
        tags_update_calls = [
            mock.call('12345', ['vanilla', '1.2.1']),
            mock.call('12345', [])
        ]
        image_update_calls = [
            mock.call('12345', 'ubuntu', ''),
            mock.call('12345', 'ubuntu', 'test image')
        ]
        self.client.images.update_image.assert_has_calls(image_update_calls)
        self.client.images.update_tags.assert_has_calls(tags_update_calls)
        self.assertEqual((img.UPDATE, img.COMPLETE), img.state)

    def test_delete(self):
        img = self._create_resource('sahara-image', self.rsrc_defn,
                                    self.stack)
        scheduler.TaskRunner(img.delete)()
        self.assertEqual((img.DELETE, img.COMPLETE), img.state)
        self.client.images.unregister_image.assert_called_once_with(
            img.resource_id)

    def test_delete_not_found(self):
        img = self._create_resource('sahara-image', self.rsrc_defn,
                                    self.stack)
        self.client.images.unregister_image.side_effect = (
            sahara.sahara_base.APIException(error_code=404))
        scheduler.TaskRunner(img.delete)()
        self.assertEqual((img.DELETE, img.COMPLETE), img.state)
        self.client.images.unregister_image.assert_called_once_with(
            img.resource_id)

    def test_show_attribute(self):
        img = self._create_resource('sahara-image', self.rsrc_defn,
                                    self.stack)
        value = mock.MagicMock()
        value.to_dict.return_value = {'img': 'info'}
        self.client.images.get.return_value = value
        self.assertEqual({'img': 'info'}, img.FnGetAtt('show'))
heat-10.0.2/heat/tests/openstack/sahara/test_cluster.py
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
from oslo_config import cfg
import six

from heat.common import exception
from heat.common import template_format
from heat.engine.clients.os import glance
from heat.engine.clients.os import neutron
from heat.engine.clients.os import sahara
from heat.engine.resources.openstack.sahara import cluster as sc
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils

cluster_stack_template = """
heat_template_version: 2013-05-23
description: Hadoop Cluster by Sahara
resources:
  super-cluster:
    type: OS::Sahara::Cluster
    properties:
      name: super-cluster
      plugin_name: vanilla
      hadoop_version: 2.3.0
      cluster_template_id: some_cluster_template_id
      default_image_id: some_image
      key_name: admin
      neutron_management_network: some_network
      shares:
        - id: some_share_id
          access_level: ro
"""

# NOTE(jfreud): the resource name contains an invalid character
cluster_stack_template_without_name = """
heat_template_version: 2013-05-23
description: Hadoop Cluster by Sahara
resources:
  lots_of_underscore_name:
    type: OS::Sahara::Cluster
    properties:
      plugin_name: vanilla
      hadoop_version: 2.3.0
      cluster_template_id: some_cluster_template_id
      default_image_id: some_image
      key_name: admin
      neutron_management_network: some_network
      shares:
        - id: some_share_id
          access_level: ro
"""


class FakeCluster(object):
    def __init__(self, status='Active'):
        self.status = status
        self.id = "some_id"
        self.name = "super-cluster"
        self.info = {"HDFS": {"NameNode": "hdfs://hostname:port",
                              "Web UI": "http://host_ip:port"}}
        self.to_dict = lambda: {"cluster": "info"}


class SaharaClusterTest(common.HeatTestCase):
    def setUp(self):
        super(SaharaClusterTest, self).setUp()
        self.patchobject(sc.constraints.CustomConstraint,
                         '_is_valid').return_value = True
        self.patchobject(glance.GlanceClientPlugin,
                         'find_image_by_name_or_id'
                         ).return_value = 'some_image_id'
        self.patchobject(neutron.NeutronClientPlugin, '_create')
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id',
                         return_value='some_network_id')
        self.sahara_mock = mock.MagicMock()
        self.patchobject(sahara.SaharaClientPlugin,
                         '_create').return_value = self.sahara_mock
        self.patchobject(sahara.SaharaClientPlugin,
                         'validate_hadoop_version').return_value = None
        self.cl_mgr = self.sahara_mock.clusters
        self.fake_cl = FakeCluster()

        self.t = template_format.parse(cluster_stack_template)
        self.t2 = template_format.parse(cluster_stack_template_without_name)

    def _init_cluster(self, template, name='super-cluster'):
        self.stack = utils.parse_stack(template)
        cluster = self.stack[name]
        return cluster

    def _create_cluster(self, template):
        cluster = self._init_cluster(template)
        self.cl_mgr.create.return_value = self.fake_cl
        self.cl_mgr.get.return_value = self.fake_cl
        scheduler.TaskRunner(cluster.create)()
        self.assertEqual((cluster.CREATE, cluster.COMPLETE),
                         cluster.state)
        self.assertEqual(self.fake_cl.id, cluster.resource_id)
        return cluster

    def test_cluster_create(self):
        self._create_cluster(self.t)
        expected_args = ('super-cluster', 'vanilla', '2.3.0')
        expected_kwargs = {'cluster_template_id': 'some_cluster_template_id',
                           'user_keypair_id': 'admin',
                           'default_image_id': 'some_image_id',
                           'net_id': 'some_network_id',
                           'use_autoconfig': None,
                           'shares': [{'id': 'some_share_id',
                                       'access_level': 'ro',
                                       'path': None}]}
        self.cl_mgr.create.assert_called_once_with(*expected_args,
                                                   **expected_kwargs)
        self.cl_mgr.get.assert_called_once_with(self.fake_cl.id)

    def test_cluster_create_invalid_name(self):
        cluster = self._init_cluster(self.t2, 'lots_of_underscore_name')
        self.cl_mgr.create.return_value = self.fake_cl
        self.cl_mgr.get.return_value = self.fake_cl
        scheduler.TaskRunner(cluster.create)()
        name = self.cl_mgr.create.call_args[0][0]
        self.assertIn('lotsofunderscorename', name)

    def test_cluster_create_fails(self):
        cfg.CONF.set_override('action_retry_limit', 0)
        cluster = self._init_cluster(self.t)
        self.cl_mgr.create.return_value = self.fake_cl
        self.cl_mgr.get.return_value = FakeCluster(status='Error')
        create_task = scheduler.TaskRunner(cluster.create)
        ex = self.assertRaises(exception.ResourceFailure, create_task)
        expected = ('ResourceInError: resources.super-cluster: '
                    'Went to status Error due to "Unknown"')
        self.assertEqual(expected, six.text_type(ex))

    def test_cluster_check_delete_complete_error(self):
        cluster = self._create_cluster(self.t)
        self.cl_mgr.get.side_effect = [
            self.fake_cl, sahara.sahara_base.APIException()]
        self.cl_mgr.get.reset_mock()
        delete_task = scheduler.TaskRunner(cluster.delete)
        ex = self.assertRaises(exception.ResourceFailure, delete_task)
        expected = "APIException: resources.super-cluster: None"
        self.assertEqual(expected, six.text_type(ex))
        self.cl_mgr.delete.assert_called_once_with(self.fake_cl.id)
        self.assertEqual(2, self.cl_mgr.get.call_count)

    def test_cluster_delete_cluster_in_error(self):
        cluster = self._create_cluster(self.t)
        self.cl_mgr.get.side_effect = [
            self.fake_cl, FakeCluster(status='Error')]
        self.cl_mgr.get.reset_mock()
        delete_task = scheduler.TaskRunner(cluster.delete)
        ex = self.assertRaises(exception.ResourceFailure, delete_task)
        expected = ('ResourceInError: resources.super-cluster: '
                    'Went to status Error due to "Unknown"')
        self.assertEqual(expected, six.text_type(ex))
        self.cl_mgr.delete.assert_called_once_with(self.fake_cl.id)
        self.assertEqual(2, self.cl_mgr.get.call_count)

    def test_cluster_resolve_attribute(self):
        cluster = self._create_cluster(self.t)
        self.cl_mgr.get.reset_mock()
        self.assertEqual(self.fake_cl.info,
                         cluster._resolve_attribute('info'))
        self.assertEqual(self.fake_cl.status,
                         cluster._resolve_attribute('status'))
        self.assertEqual({"cluster": "info"}, cluster.FnGetAtt('show'))
        self.assertEqual(3, self.cl_mgr.get.call_count)

    def test_cluster_create_no_image_anywhere_fails(self):
        self.t['resources']['super-cluster']['properties'].pop(
            'default_image_id')
        self.sahara_mock.cluster_templates.get.return_value = mock.Mock(
            default_image_id=None)
        cluster = self._init_cluster(self.t)
        ex = self.assertRaises(exception.ResourceFailure,
                               scheduler.TaskRunner(cluster.create))
        self.assertIsInstance(ex.exc, exception.StackValidationFailed)
        self.assertIn("default_image_id must be provided: "
                      "Referenced cluster template some_cluster_template_id "
                      "has no default_image_id defined.",
                      six.text_type(ex.message))

    def test_cluster_validate_no_network_on_neutron_fails(self):
        self.t['resources']['super-cluster']['properties'].pop(
            'neutron_management_network')
        cluster = self._init_cluster(self.t)
        ex = self.assertRaises(exception.StackValidationFailed,
                               cluster.validate)
        error_msg = ('Property error: resources.super-cluster.properties: '
                     'Property neutron_management_network not assigned')
        self.assertEqual(error_msg, six.text_type(ex))
    def test_deprecated_properties_correctly_translates(self):
        tmpl = '''
heat_template_version: 2013-05-23
description: Hadoop Cluster by Sahara
resources:
  super-cluster:
    type: OS::Sahara::Cluster
    properties:
      name: super-cluster
      plugin_name: vanilla
      hadoop_version: 2.3.0
      cluster_template_id: some_cluster_template_id
      image: some_image
      key_name: admin
      neutron_management_network: some_network
'''
        ct = self._create_cluster(template_format.parse(tmpl))
        self.assertEqual('some_image_id',
                         ct.properties.get('default_image_id'))
        self.assertIsNone(ct.properties.get('image_id'))
heat-10.0.2/heat/tests/openstack/sahara/test_job_binary.py
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
import six

from heat.common import exception
from heat.common import template_format
from heat.engine.clients.os import sahara
from heat.engine.resources.openstack.sahara import job_binary
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils

job_binary_template = """
heat_template_version: 2015-10-15
resources:
  job-binary:
    type: OS::Sahara::JobBinary
    properties:
      name: my-jb
      url: swift://container/jar-example.jar
      credentials: {'user': 'admin','password': 'swordfish'}
"""


class SaharaJobBinaryTest(common.HeatTestCase):
    def setUp(self):
        super(SaharaJobBinaryTest, self).setUp()
        t = template_format.parse(job_binary_template)
        self.stack = utils.parse_stack(t)
        resource_defns = self.stack.t.resource_definitions(self.stack)
        self.rsrc_defn = resource_defns['job-binary']
        self.client = mock.Mock()
        self.patchobject(job_binary.JobBinary, 'client',
                         return_value=self.client)

    def _create_resource(self, name, snippet, stack):
        jb = job_binary.JobBinary(name, snippet, stack)
        value = mock.MagicMock(id='12345')
        self.client.job_binaries.create.return_value = value
        scheduler.TaskRunner(jb.create)()
        return jb

    def test_create(self):
        jb = self._create_resource('job-binary', self.rsrc_defn, self.stack)
        args = self.client.job_binaries.create.call_args[1]
        expected_args = {
            'name': 'my-jb',
            'description': '',
            'url': 'swift://container/jar-example.jar',
            'extra': {
                'user': 'admin',
                'password': 'swordfish'
            }
        }
        self.assertEqual(expected_args, args)
        self.assertEqual('12345', jb.resource_id)
        expected_state = (jb.CREATE, jb.COMPLETE)
        self.assertEqual(expected_state, jb.state)

    def test_update(self):
        jb = self._create_resource('job-binary', self.rsrc_defn, self.stack)
        props = self.stack.t.t['resources']['job-binary']['properties'].copy()
        props['url'] = 'internal-db://94b8821d-1ce7-4131-8364-a6c6d85ad57b'
        self.rsrc_defn = self.rsrc_defn.freeze(properties=props)
        scheduler.TaskRunner(jb.update, self.rsrc_defn)()
        data = {
            'name': 'my-jb',
            'description': '',
            'url': 'internal-db://94b8821d-1ce7-4131-8364-a6c6d85ad57b',
            'extra': {
                'user': 'admin',
                'password': 'swordfish'
            }
        }
        self.client.job_binaries.update.assert_called_once_with(
            '12345', data)
        self.assertEqual((jb.UPDATE, jb.COMPLETE), jb.state)

    def test_delete(self):
        jb = self._create_resource('job-binary', self.rsrc_defn, self.stack)
        scheduler.TaskRunner(jb.delete)()
        self.assertEqual((jb.DELETE, jb.COMPLETE), jb.state)
        self.client.job_binaries.delete.assert_called_once_with(
            jb.resource_id)

    def test_delete_not_found(self):
        jb = self._create_resource('job-binary', self.rsrc_defn, self.stack)
        self.client.job_binaries.delete.side_effect = (
            sahara.sahara_base.APIException(error_code=404))
        scheduler.TaskRunner(jb.delete)()
        self.assertEqual((jb.DELETE, jb.COMPLETE), jb.state)
        self.client.job_binaries.delete.assert_called_once_with(
            jb.resource_id)

    def test_show_attribute(self):
        jb = self._create_resource('job-binary', self.rsrc_defn, self.stack)
        value = mock.MagicMock()
        value.to_dict.return_value = {'jb': 'info'}
        self.client.job_binaries.get.return_value = value
        self.assertEqual({'jb': 'info'}, jb.FnGetAtt('show'))

    def test_validate_invalid_url(self):
        props = self.stack.t.t['resources']['job-binary']['properties'].copy()
        props['url'] = 'internal-db://38273f82'
        self.rsrc_defn = self.rsrc_defn.freeze(properties=props)
        jb = job_binary.JobBinary('job-binary', self.rsrc_defn, self.stack)
        ex = self.assertRaises(exception.StackValidationFailed, jb.validate)
        error_msg = ('resources.job-binary.properties: internal-db://38273f82 '
                     'is not a valid job location.')
        self.assertEqual(error_msg, six.text_type(ex))

    def test_validate_password_without_user(self):
        props = self.stack.t.t['resources']['job-binary']['properties'].copy()
        props['credentials'].pop('user')
        self.rsrc_defn = self.rsrc_defn.freeze(properties=props)
        jb = job_binary.JobBinary('job-binary', self.rsrc_defn, self.stack)
        ex = self.assertRaises(exception.StackValidationFailed, jb.validate)
        error_msg = ('Property error: resources.job-binary.properties.'
                     'credentials: Property user not assigned')
        self.assertEqual(error_msg, six.text_type(ex))
heat-10.0.2/heat/tests/openstack/mistral/
heat-10.0.2/heat/tests/openstack/mistral/test_external_resource.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock from heat.common import exception from heat.common import template_format from heat.engine.clients.os import mistral as client from heat.engine import resource from heat.engine.resources.openstack.mistral import external_resource from heat.engine import scheduler from heat.engine import template from heat.tests import common from heat.tests import utils external_resource_template = """ heat_template_version: ocata resources: custom: type: OS::Mistral::ExternalResource properties: actions: CREATE: workflow: some_workflow params: target: create_my_custom_thing UPDATE: workflow: another_workflow DELETE: workflow: yet_another_workflow input: foo1: 123 foo2: 456 replace_on_change_inputs: - foo2 """ class FakeExecution(object): def __init__(self, id='1234', output='{}', state='IDLE'): self.id = id self.output = output self.state = state class TestMistralExternalResource(common.HeatTestCase): def setUp(self): super(TestMistralExternalResource, self).setUp() self.ctx = utils.dummy_context() tmpl = template_format.parse(external_resource_template) self.stack = utils.parse_stack(tmpl, stack_name='test_stack') resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns['custom'] self.mistral = mock.Mock() self.patchobject(external_resource.MistralExternalResource, 'client', return_value=self.mistral) self.patchobject(client, 'mistral_base') self.patchobject(client.MistralClientPlugin, '_create') self.client = client.MistralClientPlugin(self.ctx) def _create_resource(self, name, snippet, stack, output='{}', get_state='SUCCESS'): execution = external_resource.MistralExternalResource(name, snippet, stack) self.mistral.executions.get.return_value = ( FakeExecution('test_stack-execution-b5fiekfci3yc', output, get_state)) self.mistral.executions.create.return_value = ( FakeExecution('test_stack-execution-b5fiekfci3yc')) return execution def test_create(self): execution = self._create_resource('execution', self.rsrc_defn, self.stack) scheduler.TaskRunner(execution.create)() expected_state = (execution.CREATE, execution.COMPLETE) self.assertEqual(expected_state, execution.state) self.assertEqual('test_stack-execution-b5fiekfci3yc', execution.resource_id) def test_create_with_resource_id_output(self): output = '{"resource_id": "my-fake-resource-id"}' execution = self._create_resource('execution', self.rsrc_defn, self.stack, output) scheduler.TaskRunner(execution.create)() expected_state = (execution.CREATE, execution.COMPLETE) self.assertEqual(expected_state, execution.state) self.assertEqual('my-fake-resource-id', execution.resource_id) def test_replace_on_change(self): execution = self._create_resource('execution', self.rsrc_defn, self.stack) scheduler.TaskRunner(execution.create)() expected_state = (execution.CREATE, execution.COMPLETE) self.assertEqual(expected_state, execution.state) tmpl = template_format.parse(external_resource_template) tmpl['resources']['custom']['properties']['input']['foo2'] = '4567' res_defns = template.Template(tmpl).resource_definitions(self.stack) new_custom_defn = res_defns['custom'] self.assertRaises(resource.UpdateReplace, scheduler.TaskRunner(execution.update, new_custom_defn)) def test_create_failed(self): execution = self._create_resource('execution', self.rsrc_defn, self.stack, get_state='ERROR') self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(execution.create)) expected_state = (execution.CREATE, execution.FAILED) self.assertEqual(expected_state, execution.state) 
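# A rough sketch of how a delete test for this resource could look,
# assuming the DELETE action simply runs the 'yet_another_workflow'
# named in the template above and reuses the mocked client from setUp
# (an illustrative sketch, not an actual test in this module):
#
#     def test_delete(self):
#         execution = self._create_resource('execution', self.rsrc_defn,
#                                           self.stack)
#         scheduler.TaskRunner(execution.create)()
#         scheduler.TaskRunner(execution.delete)()
#         self.assertEqual((execution.DELETE, execution.COMPLETE),
#                          execution.state)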
heat-10.0.2/heat/tests/openstack/mistral/test_workflow.py0000666000175000017500000010171213343562351023556 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six import yaml from mistralclient.api.v2 import executions from oslo_serialization import jsonutils from heat.common import exception from heat.common import template_format from heat.engine.clients.os import mistral as client from heat.engine import node_data from heat.engine import resource from heat.engine.resources.openstack.mistral import workflow from heat.engine.resources import signal_responder from heat.engine.resources import stack_user from heat.engine import scheduler from heat.engine import template from heat.tests import common from heat.tests import utils workflow_template = """ heat_template_version: 2013-05-23 resources: workflow: type: OS::Mistral::Workflow properties: type: direct tasks: - name: hello action: std.echo output='Good morning!' publish: result: <% $.hello %> """ workflow_template_with_tags = """ heat_template_version: queens resources: workflow: type: OS::Mistral::Workflow properties: type: direct tags: - tagged tasks: - name: hello action: std.echo output='Good morning!' publish: result: <% $.hello %> """ workflow_template_with_params = """ heat_template_version: 2013-05-23 resources: workflow: type: OS::Mistral::Workflow properties: params: {'test':'param_value'} type: direct tasks: - name: hello action: std.echo output='Good morning!' publish: result: <% $.hello %> """ workflow_template_with_params_override = """ heat_template_version: 2013-05-23 resources: workflow: type: OS::Mistral::Workflow properties: params: {'test':'param_value_override','test1':'param_value_override_1'} type: direct tasks: - name: hello action: std.echo output='Good morning!' 
publish: result: <% $.hello %> """ workflow_template_full = """ heat_template_version: 2013-05-23 parameters: use_request_body_as_input: type : boolean default : false resources: create_vm: type: OS::Mistral::Workflow properties: name: create_vm use_request_body_as_input: { get_param: use_request_body_as_input } type: direct input: name: create_test_server image: 31d8eeaf-686e-4e95-bb27-765014b9f20b flavor: 2 output: vm_id: <% $.vm_id %> task_defaults: on_error: - on_error tasks: - name: create_server action: | nova.servers_create name=<% $.name %> image=<% $.image %> flavor=<% $.flavor %> publish: vm_id: <% $.create_server.id %> on_success: - check_server_exists - name: check_server_exists action: nova.servers_get server=<% $.vm_id %> publish: server_exists: True on_success: - list_machines - name: wait_instance action: nova.servers_find id=<% $.vm_id_new %> status='ACTIVE' retry: delay: 5 count: 15 wait_before: 7 wait_after: 8 pause_before: true timeout: 11 keep_result: false target: test with_items: vm_id_new in <% $.list_servers %> - name: list_machines action: nova.servers_list publish: -list_servers: <% $.list_machines %> on_success: - wait_instance - name: on_error action: std.echo output="output" - name: external_workflow workflow: external_workflow_name """ workflow_updating_request_body_property = """ heat_template_version: 2013-05-23 resources: create_vm: type: OS::Mistral::Workflow properties: name: create_vm use_request_body_as_input: false type: direct input: name: create_test_server image: 31d8eeaf-686e-4e95-bb27-765014b9f20b flavor: 2 output: vm_id: <% $.vm_id %> task_defaults: on_error: - on_error tasks: - name: create_server action: | nova.servers_create name=<% $.name %> image=<% $.image %> flavor=<% $.flavor %> publish: vm_id: <% $.create_server.id %> on_success: - check_server_exists - name: check_server_exists action: nova.servers_get server=<% $.vm_id %> publish: server_exists: True on_success: - list_machines - name: wait_instance action: nova.servers_find id=<% $.vm_id_new %> status='ACTIVE' retry: delay: 5 count: 15 wait_before: 7 wait_after: 8 pause_before: true timeout: 11 keep_result: false target: test with_items: vm_id_new in <% $.list_servers %> join: all - name: list_machines action: nova.servers_list publish: -list_servers: <% $.list_machines %> on_success: - wait_instance - name: on_error action: std.echo output="output" """ workflow_template_backward_support = """ heat_template_version: 2013-05-23 resources: create_vm: type: OS::Mistral::Workflow properties: name: create_vm type: direct input: name: create_test_server image: 31d8eeaf-686e-4e95-bb27-765014b9f20b flavor: 2 output: vm_id: <% $.vm_id %> tasks: - name: create_server action: | nova.servers_create name=<% $.name %> image=<% $.image %> flavor=<% $.flavor %> publish: vm_id: <% $.create_server.id %> on_success: - check_server_exists - name: check_server_exists action: nova.servers_get server=<% $.vm_id %> publish: server_exists: True on_success: - wait_instance - name: wait_instance action: nova.servers_find id=<% $.vm_id %> status='ACTIVE' policies: retry: delay: 5 count: 15 """ workflow_template_bad = """ heat_template_version: 2013-05-23 resources: workflow: type: OS::Mistral::Workflow properties: type: direct tasks: - name: second_task action: std.noop requires: [first_task] - name: first_task action: std.noop """ workflow_template_bad_reverse = """ heat_template_version: 2013-05-23 resources: workflow: type: OS::Mistral::Workflow properties: type: reverse tasks: - name: second_task action: 
std.noop requires: [first_task] - name: first_task action: std.noop """ workflow_template_concurrency_no_with_items = """ heat_template_version: 2013-05-23 resources: workflow: type: OS::Mistral::Workflow properties: params: {'test':'param_value'} type: direct tasks: - name: hello action: std.echo output='Good morning!' concurrency: 9001 """ workflow_template_update_replace = """ heat_template_version: 2013-05-23 resources: workflow: type: OS::Mistral::Workflow properties: name: hello_action type: direct tasks: - name: hello action: std.echo output='Good evening!' publish: result: <% $.hello %> """ workflow_template_update = """ heat_template_version: 2013-05-23 resources: workflow: type: OS::Mistral::Workflow properties: type: direct description: just testing workflow resource tasks: - name: hello action: std.echo output='Good evening!' publish: result: <% $.hello %> """ workflow_template_duplicate_polices = """ heat_template_version: 2013-05-23 resources: workflow: type: OS::Mistral::Workflow properties: name: list type: direct tasks: - name: list action: nova.servers_list policies: retry: delay: 5 count: 15 retry: delay: 6 count: 16 """ workflow_template_policies_translation = """ heat_template_version: 2016-10-14 resources: workflow: type: OS::Mistral::Workflow properties: name: translation_done type: direct tasks: - name: check_dat_thing action: nova.servers_list policies: retry: delay: 5 count: 15 wait_before: 5 wait_after: 5 pause_before: true timeout: 42 concurrency: 5 """ class FakeWorkflow(object): def __init__(self, name): self.name = name self._data = {'workflow': 'info'} def to_dict(self): return self._data class TestMistralWorkflow(common.HeatTestCase): def setUp(self): super(TestMistralWorkflow, self).setUp() self.ctx = utils.dummy_context() tmpl = template_format.parse(workflow_template) self.stack = utils.parse_stack(tmpl, stack_name='test_stack') resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns['workflow'] self.mistral = mock.Mock() self.patchobject(workflow.Workflow, 'client', return_value=self.mistral) self.patches = [] self.patches.append(mock.patch.object(stack_user.StackUser, '_create_user')) self.patches.append(mock.patch.object(signal_responder.SignalResponder, '_create_keypair')) self.patches.append(mock.patch.object(client, 'mistral_base')) self.patches.append(mock.patch.object(client.MistralClientPlugin, '_create')) for patch in self.patches: patch.start() self.client = client.MistralClientPlugin(self.ctx) def tearDown(self): super(TestMistralWorkflow, self).tearDown() for patch in self.patches: patch.stop() def _create_resource(self, name, snippet, stack): wf = workflow.Workflow(name, snippet, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('test_stack-workflow-b5fiekfci3yc')] scheduler.TaskRunner(wf.create)() return wf def test_create(self): wf = self._create_resource('workflow', self.rsrc_defn, self.stack) expected_state = (wf.CREATE, wf.COMPLETE) self.assertEqual(expected_state, wf.state) self.assertEqual('test_stack-workflow-b5fiekfci3yc', wf.resource_id) def test_create_with_name(self): tmpl = template_format.parse(workflow_template_full) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['create_vm'] wf = workflow.Workflow('create_vm', rsrc_defns, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('create_vm')] scheduler.TaskRunner(wf.create)() expected_state = (wf.CREATE, wf.COMPLETE) self.assertEqual(expected_state, wf.state) 
self.assertEqual('create_vm', wf.resource_id) def test_create_with_task_parms(self): tmpl = template_format.parse(workflow_template_full) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['create_vm'] wf = workflow.Workflow('create_vm', rsrc_defns, stack) self.mistral.workflows.create.side_effect = (lambda args: self.verify_create_params( args)) scheduler.TaskRunner(wf.create)() def test_backward_support(self): tmpl = template_format.parse(workflow_template_backward_support) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['create_vm'] wf = workflow.Workflow('create_vm', rsrc_defns, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('create_vm')] scheduler.TaskRunner(wf.create)() expected_state = (wf.CREATE, wf.COMPLETE) self.assertEqual(expected_state, wf.state) self.assertEqual('create_vm', wf.resource_id) for task in wf.properties['tasks']: if task['name'] == 'wait_instance': self.assertEqual(5, task['retry']['delay']) self.assertEqual(15, task['retry']['count']) break def test_attributes(self): wf = self._create_resource('workflow', self.rsrc_defn, self.stack) self.mistral.workflows.get.return_value = ( FakeWorkflow('test_stack-workflow-b5fiekfci3yc')) self.assertEqual({'name': 'test_stack-workflow-b5fiekfci3yc', 'input': None}, wf.FnGetAtt('data')) self.assertEqual([], wf.FnGetAtt('executions')) self.assertEqual({'workflow': 'info'}, wf.FnGetAtt('show')) def test_direct_workflow_validation_error(self): error_msg = ("Mistral resource validation error: " "workflow.properties.tasks.second_task.requires: " "task second_task contains property 'requires' " "in case of direct workflow. Only reverse workflows " "can contain property 'requires'.") self._test_validation_failed(workflow_template_bad, error_msg) def test_wrong_params_using(self): error_msg = ("Mistral resource validation error: " "workflow.properties.params: 'task_name' is not assigned " "in 'params' in case of reverse type workflow.") self._test_validation_failed(workflow_template_bad_reverse, error_msg) def test_with_items_concurrency_failed_validate(self): error_msg = "concurrency cannot be specified without with_items."
self._test_validation_failed( workflow_template_concurrency_no_with_items, error_msg, error_cls=exception.ResourcePropertyDependency) def _test_validation_failed(self, templatem, error_msg, error_cls=None): tmpl = template_format.parse(templatem) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['workflow'] wf = workflow.Workflow('workflow', rsrc_defns, stack) if error_cls is None: error_cls = exception.StackValidationFailed exc = self.assertRaises(error_cls, wf.validate) self.assertEqual(error_msg, six.text_type(exc)) def test_create_wrong_definition(self): tmpl = template_format.parse(workflow_template) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['workflow'] wf = workflow.Workflow('workflow', rsrc_defns, stack) self.mistral.workflows.create.side_effect = Exception('boom!') exc = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(wf.create)) expected_state = (wf.CREATE, wf.FAILED) self.assertEqual(expected_state, wf.state) self.assertIn('Exception: resources.workflow: boom!', six.text_type(exc)) def test_update_replace(self): wf = self._create_resource('workflow', self.rsrc_defn, self.stack) t = template_format.parse(workflow_template_update_replace) rsrc_defns = template.Template(t).resource_definitions(self.stack) new_workflow = rsrc_defns['workflow'] new_workflows = [FakeWorkflow('hello_action')] self.mistral.workflows.update.return_value = new_workflows self.mistral.workflows.delete.return_value = None err = self.assertRaises(resource.UpdateReplace, scheduler.TaskRunner(wf.update, new_workflow)) msg = 'The Resource workflow requires replacement.' self.assertEqual(msg, six.text_type(err)) def test_update(self): wf = self._create_resource('workflow', self.rsrc_defn, self.stack) t = template_format.parse(workflow_template_update) rsrc_defns = template.Template(t).resource_definitions(self.stack) new_wf = rsrc_defns['workflow'] self.mistral.workflows.update.return_value = [ FakeWorkflow('test_stack-workflow-b5fiekfci3yc')] scheduler.TaskRunner(wf.update, new_wf)() self.assertTrue(self.mistral.workflows.update.called) self.assertEqual((wf.UPDATE, wf.COMPLETE), wf.state) def test_update_input(self): wf = self._create_resource('workflow', self.rsrc_defn, self.stack) t = template_format.parse(workflow_template) t['resources']['workflow']['properties']['input'] = {'foo': 'bar'} rsrc_defns = template.Template(t).resource_definitions(self.stack) new_wf = rsrc_defns['workflow'] self.mistral.workflows.update.return_value = [ FakeWorkflow('test_stack-workflow-b5fiekfci3yc')] scheduler.TaskRunner(wf.update, new_wf)() self.assertTrue(self.mistral.workflows.update.called) self.assertEqual((wf.UPDATE, wf.COMPLETE), wf.state) def test_update_failed(self): wf = self._create_resource('workflow', self.rsrc_defn, self.stack) t = template_format.parse(workflow_template_update) rsrc_defns = template.Template(t).resource_definitions(self.stack) new_wf = rsrc_defns['workflow'] self.mistral.workflows.update.side_effect = Exception('boom!') self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(wf.update, new_wf)) self.assertEqual((wf.UPDATE, wf.FAILED), wf.state) def test_delete_super_call_successful(self): wf = self._create_resource('workflow', self.rsrc_defn, self.stack) scheduler.TaskRunner(wf.delete)() self.assertEqual((wf.DELETE, wf.COMPLETE), wf.state) self.assertEqual(1, self.mistral.workflows.delete.call_count) def test_delete_executions_successful(self): wf = self._create_resource('workflow', self.rsrc_defn, 
self.stack) self.mistral.executions.delete.return_value = None wf._data = {'executions': '1234,5678'} data_delete = self.patchobject(resource.Resource, 'data_delete') wf._delete_executions() self.assertEqual(2, self.mistral.executions.delete.call_count) data_delete.assert_called_once_with('executions') def test_delete_executions_not_found(self): wf = self._create_resource('workflow', self.rsrc_defn, self.stack) self.mistral.executions.delete.side_effect = [ self.mistral.mistral_base.APIException(error_code=404), None ] wf._data = {'executions': '1234,5678'} data_delete = self.patchobject(resource.Resource, 'data_delete') wf._delete_executions() self.assertEqual(2, self.mistral.executions.delete.call_count) data_delete.assert_called_once_with('executions') def test_signal_failed(self): tmpl = template_format.parse(workflow_template_full) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['create_vm'] wf = workflow.Workflow('create_vm', rsrc_defns, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('create_vm')] scheduler.TaskRunner(wf.create)() details = {'input': {'flavor': '3'}} self.mistral.executions.create.side_effect = Exception('boom!') err = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(wf.signal, details)) self.assertEqual('Exception: resources.create_vm: boom!', six.text_type(err)) def test_signal_wrong_input_and_params_type(self): tmpl = template_format.parse(workflow_template_full) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['create_vm'] wf = workflow.Workflow('create_vm', rsrc_defns, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('create_vm')] scheduler.TaskRunner(wf.create)() details = {'input': '3'} err = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(wf.signal, details)) if six.PY3: entity = 'class' else: entity = 'type' error_message = ("StackValidationFailed: resources.create_vm: " "Signal data error: Input in" " signal data must be a map, find a <%s 'str'>" % entity) self.assertEqual(error_message, six.text_type(err)) details = {'params': '3'} err = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(wf.signal, details)) error_message = ("StackValidationFailed: resources.create_vm: " "Signal data error: Params " "must be a map, find a <%s 'str'>" % entity) self.assertEqual(error_message, six.text_type(err)) def test_signal_wrong_input_key(self): tmpl = template_format.parse(workflow_template_full) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['create_vm'] wf = workflow.Workflow('create_vm', rsrc_defns, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('create_vm')] scheduler.TaskRunner(wf.create)() details = {'input': {'1': '3'}} err = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(wf.signal, details)) error_message = ("StackValidationFailed: resources.create_vm: " "Signal data error: Unknown input 1") self.assertEqual(error_message, six.text_type(err)) def test_signal_with_body_as_input_and_delete_with_executions(self): tmpl = template_format.parse(workflow_template_full) stack = utils.parse_stack(tmpl, params={ 'parameters': {'use_request_body_as_input': 'true'} }) rsrc_defns = stack.t.resource_definitions(stack)['create_vm'] wf = workflow.Workflow('create_vm', rsrc_defns, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('create_vm')] scheduler.TaskRunner(wf.create)() details = {'flavor': '3'} execution = mock.Mock()
execution.id = '12345' exec_manager = executions.ExecutionManager(wf.client('mistral')) self.mistral.executions.create.side_effect = ( lambda *args, **kw: exec_manager.create(*args, **kw)) self.patchobject(exec_manager, '_create', return_value=execution) scheduler.TaskRunner(wf.signal, details)() call_args = self.mistral.executions.create.call_args args, kwargs = call_args expected_args = ( '{"image": "31d8eeaf-686e-4e95-bb27-765014b9f20b", ' '"name": "create_test_server", "flavor": "3"}') self.validate_json_inputs(kwargs['workflow_input'], expected_args) self.assertEqual({'executions': '12345'}, wf.data()) # Updating the workflow changing "use_request_body_as_input" to # false and signaling again with the expected request body format. t = template_format.parse(workflow_updating_request_body_property) new_stack = utils.parse_stack(t) rsrc_defns = new_stack.t.resource_definitions(new_stack) self.mistral.workflows.update.return_value = [ FakeWorkflow('test_stack-workflow-b5fiekdsa355')] scheduler.TaskRunner(wf.update, rsrc_defns['create_vm'])() self.assertTrue(self.mistral.workflows.update.called) self.assertEqual((wf.UPDATE, wf.COMPLETE), wf.state) details = {'input': {'flavor': '4'}} execution = mock.Mock() execution.id = '54321' exec_manager = executions.ExecutionManager(wf.client('mistral')) self.mistral.executions.create.side_effect = ( lambda *args, **kw: exec_manager.create(*args, **kw)) self.patchobject(exec_manager, '_create', return_value=execution) scheduler.TaskRunner(wf.signal, details)() call_args = self.mistral.executions.create.call_args args, kwargs = call_args expected_args = ( '{"image": "31d8eeaf-686e-4e95-bb27-765014b9f20b", ' '"name": "create_test_server", "flavor": "4"}') self.validate_json_inputs(kwargs['workflow_input'], expected_args) self.assertEqual({'executions': '54321,12345', 'name': 'test_stack-workflow-b5fiekdsa355'}, wf.data()) scheduler.TaskRunner(wf.delete)() self.assertEqual(2, self.mistral.executions.delete.call_count) self.assertEqual((wf.DELETE, wf.COMPLETE), wf.state) def test_signal_and_delete_with_executions(self): tmpl = template_format.parse(workflow_template_full) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['create_vm'] wf = workflow.Workflow('create_vm', rsrc_defns, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('create_vm')] scheduler.TaskRunner(wf.create)() details = {'input': {'flavor': '3'}} execution = mock.Mock() execution.id = '12345' # Invoke the real create method (bug 1453539) exec_manager = executions.ExecutionManager(wf.client('mistral')) self.mistral.executions.create.side_effect = ( lambda *args, **kw: exec_manager.create(*args, **kw)) self.patchobject(exec_manager, '_create', return_value=execution) scheduler.TaskRunner(wf.signal, details)() self.assertEqual({'executions': '12345'}, wf.data()) scheduler.TaskRunner(wf.delete)() self.assertEqual(1, self.mistral.executions.delete.call_count) self.assertEqual((wf.DELETE, wf.COMPLETE), wf.state) def test_workflow_params(self): tmpl = template_format.parse(workflow_template_full) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['create_vm'] wf = workflow.Workflow('create_vm', rsrc_defns, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('create_vm')] scheduler.TaskRunner(wf.create)() details = {'input': {'flavor': '3'}, 'params': {'test': 'param_value', 'test1': 'param_value_1'}} execution = mock.Mock() execution.id = '12345' self.mistral.executions.create.side_effect = ( lambda 
*args, **kw: self.verify_params(*args, **kw)) scheduler.TaskRunner(wf.signal, details)() def test_workflow_tags(self): tmpl = template_format.parse(workflow_template_with_tags) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['workflow'] wf = workflow.Workflow('workflow', rsrc_defns, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('workflow')] scheduler.TaskRunner(wf.create)() details = {'tags': ['mytag'], 'params': {'test': 'param_value', 'test1': 'param_value_1'}} execution = mock.Mock() execution.id = '12345' self.mistral.executions.create.side_effect = ( lambda *args, **kw: self.verify_params(*args, **kw)) scheduler.TaskRunner(wf.signal, details)() def test_workflow_params_merge(self): tmpl = template_format.parse(workflow_template_with_params) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['workflow'] wf = workflow.Workflow('workflow', rsrc_defns, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('workflow')] scheduler.TaskRunner(wf.create)() details = {'params': {'test1': 'param_value_1'}} execution = mock.Mock() execution.id = '12345' self.mistral.executions.create.side_effect = ( lambda *args, **kw: self.verify_params(*args, **kw)) scheduler.TaskRunner(wf.signal, details)() def test_workflow_params_override(self): tmpl = template_format.parse(workflow_template_with_params_override) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['workflow'] wf = workflow.Workflow('workflow', rsrc_defns, stack) self.mistral.workflows.create.return_value = [ FakeWorkflow('workflow')] scheduler.TaskRunner(wf.create)() details = {'params': {'test': 'param_value', 'test1': 'param_value_1'}} execution = mock.Mock() execution.id = '12345' self.mistral.executions.create.side_effect = ( lambda *args, **kw: self.verify_params(*args, **kw)) scheduler.TaskRunner(wf.signal, details)() def test_duplicate_attribute_translation_error(self): tmpl = template_format.parse(workflow_template_duplicate_polices) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['workflow'] workflow_rsrc = workflow.Workflow('workflow', rsrc_defns, stack) ex = self.assertRaises(exception.StackValidationFailed, workflow_rsrc.validate) error_msg = ("Cannot define the following properties at " "the same time: tasks.retry, tasks.policies.retry") self.assertIn(error_msg, six.text_type(ex)) def validate_json_inputs(self, actual_input, expected_input): actual_json_input = jsonutils.loads(actual_input) expected_json_input = jsonutils.loads(expected_input) self.assertEqual(expected_json_input, actual_json_input) def verify_params(self, workflow_name, workflow_input=None, **params): self.assertEqual({'test': 'param_value', 'test1': 'param_value_1'}, params) execution = mock.Mock() execution.id = '12345' return execution def verify_create_params(self, wf_yaml): wf = yaml.safe_load(wf_yaml)["create_vm"] self.assertEqual(['on_error'], wf["task-defaults"]["on-error"]) tasks = wf['tasks'] task = tasks['wait_instance'] self.assertEqual('vm_id_new in <% $.list_servers %>', task['with-items']) self.assertEqual(5, task['retry']['delay']) self.assertEqual(15, task['retry']['count']) self.assertEqual(8, task['wait-after']) self.assertTrue(task['pause-before']) self.assertEqual(11, task['timeout']) self.assertEqual('test', task['target']) self.assertEqual(7, task['wait-before']) self.assertFalse(task['keep-result']) return [FakeWorkflow('create_vm')] def test_mistral_workflow_refid(self): tmpl = 
template_format.parse(workflow_template) stack = utils.parse_stack(tmpl, stack_name='test') rsrc = stack['workflow'] rsrc.uuid = '4c885bde-957e-4758-907b-c188a487e908' rsrc.id = 'mockid' rsrc.action = 'CREATE' self.assertEqual('test-workflow-owevpzgiqw66', rsrc.FnGetRefId()) def test_mistral_workflow_refid_convergence_cache_data(self): tmpl = template_format.parse(workflow_template) cache_data = {'workflow': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'convg_xyz' })} stack = utils.parse_stack(tmpl, stack_name='test', cache_data=cache_data) rsrc = stack.defn['workflow'] self.assertEqual('convg_xyz', rsrc.FnGetRefId()) def test_policies_translation_successful(self): tmpl = template_format.parse(workflow_template_policies_translation) stack = utils.parse_stack(tmpl) rsrc_defns = stack.t.resource_definitions(stack)['workflow'] wf = workflow.Workflow('workflow', rsrc_defns, stack) result = {k: v for k, v in wf.properties['tasks'][0].items() if v} self.assertEqual({'name': 'check_dat_thing', 'action': 'nova.servers_list', 'retry': {'delay': 5, 'count': 15}, 'wait_before': 5, 'wait_after': 5, 'pause_before': True, 'timeout': 42, 'concurrency': 5}, result) heat-10.0.2/heat/tests/openstack/mistral/__init__.py0000666000175000017500000000000013343562340022366 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/mistral/test_cron_trigger.py0000666000175000017500000001146313343562340024371 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
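# The cron-trigger tests below exercise OS::Mistral::CronTrigger against
# a mocked mistralclient. Based on the assertions in test_create, the
# create call carries the workflow identifier positionally and the
# remaining template properties as keyword arguments -- roughly as
# sketched here (the first positional, the trigger name, is inferred
# rather than asserted below):
#
#     client.cron_triggers.create(
#         'my_cron_trigger',             # trigger name (inferred)
#         'get_first_glance_image',      # workflow identifier (args[1])
#         pattern='* * 0 * *',
#         workflow_input={},
#         first_time='2015-04-08 06:20',
#         count=3)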
import mock from heat.common import exception from heat.common import template_format from heat.engine.resources.openstack.mistral import cron_trigger from heat.engine import scheduler from heat.tests import common from heat.tests import utils stack_template = ''' heat_template_version: 2013-05-23 resources: cron_trigger: type: OS::Mistral::CronTrigger properties: name: my_cron_trigger pattern: "* * 0 * *" workflow: {'name': 'get_first_glance_image', 'input': {} } count: 3 first_time: "2015-04-08 06:20" ''' class FakeCronTrigger(object): def __init__(self, name): self.name = name self.next_execution_time = '2015-03-01 00:00:00' self.remaining_executions = 3 self._data = {'trigger': 'info'} def to_dict(self): return self._data class MistralCronTriggerTest(common.HeatTestCase): def setUp(self): super(MistralCronTriggerTest, self).setUp() t = template_format.parse(stack_template) self.stack = utils.parse_stack(t) resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns['cron_trigger'] self.client = mock.Mock() self.patchobject(cron_trigger.CronTrigger, 'client', return_value=self.client) def _create_resource(self, name, snippet, stack): ct = cron_trigger.CronTrigger(name, snippet, stack) mock_get_workflow = mock.Mock(return_value='get_first_glance_image') ct.client_plugin().get_workflow_by_identifier = mock_get_workflow self.client.cron_triggers.create.return_value = FakeCronTrigger( 'my_cron_trigger') self.client.cron_triggers.get.return_value = FakeCronTrigger( 'my_cron_trigger') scheduler.TaskRunner(ct.create)() return ct def test_create(self): ct = self._create_resource('trigger', self.rsrc_defn, self.stack) expected_state = (ct.CREATE, ct.COMPLETE) self.assertEqual(expected_state, ct.state) args, kwargs = self.client.cron_triggers.create.call_args self.assertEqual('* * 0 * *', kwargs['pattern']) self.assertEqual('get_first_glance_image', args[1]) self.assertEqual({}, kwargs['workflow_input']) self.assertEqual('2015-04-08 06:20', kwargs['first_time']) self.assertEqual(3, kwargs['count']) self.assertEqual('my_cron_trigger', ct.resource_id) def test_attributes(self): ct = self._create_resource('trigger', self.rsrc_defn, self.stack) self.assertEqual('2015-03-01 00:00:00', ct.FnGetAtt('next_execution_time')) self.assertEqual(3, ct.FnGetAtt('remaining_executions')) self.assertEqual({'trigger': 'info'}, ct.FnGetAtt('show')) def test_validate_fail(self): t = template_format.parse(stack_template) del t['resources']['cron_trigger']['properties']['first_time'] del t['resources']['cron_trigger']['properties']['pattern'] stack = utils.parse_stack(t) resource_defns = stack.t.resource_definitions(stack) self.rsrc_defn = resource_defns['cron_trigger'] ct = self._create_resource('trigger', self.rsrc_defn, self.stack) msg = ("At least one of the following properties must be specified: " "pattern, first_time") self.assertRaisesRegex(exception.PropertyUnspecifiedError, msg, ct.validate) def test_validate_ok_without_first_time(self): t = template_format.parse(stack_template) del t['resources']['cron_trigger']['properties']['first_time'] stack = utils.parse_stack(t) resource_defns = stack.t.resource_definitions(stack) self.rsrc_defn = resource_defns['cron_trigger'] ct = self._create_resource('trigger', self.rsrc_defn, self.stack) ct.validate() def test_validate_ok_without_pattern(self): t = template_format.parse(stack_template) del t['resources']['cron_trigger']['properties']['pattern'] stack = utils.parse_stack(t) resource_defns = stack.t.resource_definitions(stack) 
self.rsrc_defn = resource_defns['cron_trigger'] ct = self._create_resource('trigger', self.rsrc_defn, self.stack) ct.validate() heat-10.0.2/heat/tests/openstack/nova/0000775000175000017500000000000013343562672017565 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/nova/test_keypair.py0000666000175000017500000003133213343562352022641 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock import six from heat.common import exception from heat.engine.clients.os import keystone from heat.engine.clients.os import nova from heat.engine import resource from heat.engine.resources.openstack.nova import keypair from heat.engine import scheduler from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils class NovaKeyPairTest(common.HeatTestCase): kp_template = { "heat_template_version": "2013-05-23", "resources": { "kp": { "type": "OS::Nova::KeyPair", "properties": { "name": "key_pair" } } } } def setUp(self): super(NovaKeyPairTest, self).setUp() self.fake_nova = mock.MagicMock() self.fake_keypairs = mock.MagicMock() self.fake_nova.keypairs = self.fake_keypairs self.patchobject(nova.NovaClientPlugin, 'has_extension', return_value=True) self.cp_mock = self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fake_nova) def _mock_key(self, name, pub=None, priv=None): mkey = mock.MagicMock() mkey.id = name mkey.name = name if pub: mkey.public_key = pub if priv: mkey.private_key = priv return mkey def _get_test_resource(self, template): self.stack = utils.parse_stack(template) definition = self.stack.t.resource_definitions(self.stack)['kp'] kp_res = keypair.KeyPair('kp', definition, self.stack) return kp_res def _get_mock_kp_for_create(self, key_name, public_key=None, priv_saved=False, key_type=None, user=None): template = copy.deepcopy(self.kp_template) template['resources']['kp']['properties']['name'] = key_name props = template['resources']['kp']['properties'] if public_key: props['public_key'] = public_key gen_pk = public_key or "generated test public key" nova_key = self._mock_key(key_name, gen_pk) if priv_saved: nova_key.private_key = "private key for %s" % key_name props['save_private_key'] = True if key_type: props['type'] = key_type if user: props['user'] = user kp_res = self._get_test_resource(template) self.patchobject(self.fake_keypairs, 'create', return_value=nova_key) return kp_res, nova_key def test_create_key(self): """Test basic create.""" key_name = "generate_no_save" tp_test, created_key = self._get_mock_kp_for_create(key_name) self.patchobject(self.fake_keypairs, 'get', return_value=created_key) key_info = {'key_pair': 'info'} self.patchobject(created_key, 'to_dict', return_value=key_info) scheduler.TaskRunner(tp_test.create)() self.fake_keypairs.create.assert_called_once_with( name=key_name, public_key=None) self.assertEqual("", tp_test.FnGetAtt('private_key')) self.assertEqual("generated test public key", tp_test.FnGetAtt('public_key')) self.assertEqual(key_info, 
tp_test.FnGetAtt('show')) self.assertEqual((tp_test.CREATE, tp_test.COMPLETE), tp_test.state) self.assertEqual(tp_test.resource_id, created_key.name) def test_create_key_with_type(self): """Test basic create.""" key_name = "with_type" tp_test, created_key = self._get_mock_kp_for_create(key_name, key_type='ssh') scheduler.TaskRunner(tp_test.create)() self.assertEqual((tp_test.CREATE, tp_test.COMPLETE), tp_test.state) self.assertEqual(tp_test.resource_id, created_key.name) self.fake_keypairs.create.assert_called_once_with( name=key_name, public_key=None, type='ssh') self.cp_mock.assert_called_once_with(version='2.2') def test_create_key_with_user_id(self): key_name = "create_with_user_id" tp_test, created_key = self._get_mock_kp_for_create(key_name, user='userA') self.patchobject(keystone.KeystoneClientPlugin, 'get_user_id', return_value='userA_ID') scheduler.TaskRunner(tp_test.create)() self.assertEqual((tp_test.CREATE, tp_test.COMPLETE), tp_test.state) self.assertEqual(tp_test.resource_id, created_key.name) self.fake_keypairs.create.assert_called_once_with( name=key_name, public_key=None, user_id='userA_ID') self.cp_mock.assert_called_once_with(version='2.10') def test_create_key_with_user_and_type(self): key_name = "create_with_user_id_and_type" tp_test, created_key = self._get_mock_kp_for_create(key_name, user='userA', key_type='x509') self.patchobject(keystone.KeystoneClientPlugin, 'get_user_id', return_value='userA_ID') scheduler.TaskRunner(tp_test.create)() self.assertEqual((tp_test.CREATE, tp_test.COMPLETE), tp_test.state) self.assertEqual(tp_test.resource_id, created_key.name) self.fake_keypairs.create.assert_called_once_with( name=key_name, public_key=None, user_id='userA_ID', type='x509') self.cp_mock.assert_called_once_with(version='2.10') def test_create_key_empty_name(self): """Test creation of a keypair whose name is of length zero.""" key_name = "" template = copy.deepcopy(self.kp_template) template['resources']['kp']['properties']['name'] = key_name stack = utils.parse_stack(template) definition = stack.t.resource_definitions(stack)['kp'] kp_res = keypair.KeyPair('kp', definition, stack) error = self.assertRaises(exception.StackValidationFailed, kp_res.validate) self.assertIn("Property error", six.text_type(error)) self.assertIn("kp.properties.name: length (0) is out of " "range (min: 1, max: 255)", six.text_type(error)) def test_create_key_excess_name_length(self): """Test creation of a keypair whose name is of excess length.""" key_name = 'k' * 256 template = copy.deepcopy(self.kp_template) template['resources']['kp']['properties']['name'] = key_name stack = utils.parse_stack(template) definition = stack.t.resource_definitions(stack)['kp'] kp_res = keypair.KeyPair('kp', definition, stack) error = self.assertRaises(exception.StackValidationFailed, kp_res.validate) self.assertIn("Property error", six.text_type(error)) self.assertIn("kp.properties.name: length (256) is out of " "range (min: 1, max: 255)", six.text_type(error)) def _test_validate(self, key_type=None, user=None, nc_version=None): template = copy.deepcopy(self.kp_template) validate_props = [] if key_type: template['resources']['kp']['properties']['type'] = key_type validate_props.append('type') if user: template['resources']['kp']['properties']['user'] = user validate_props.append('user') stack = utils.parse_stack(template) definition = stack.t.resource_definitions(stack)['kp'] kp_res = keypair.KeyPair('kp', definition, stack) self.patchobject(nova.NovaClientPlugin, '_create', 
side_effect=exception.InvalidServiceVersion( service='compute', version=nc_version )) error = self.assertRaises(exception.StackValidationFailed, kp_res.validate) msg = (('Cannot use "%(prop)s" properties - nova does not support: ' 'Invalid service compute version %(ver)s') % {'prop': validate_props, 'ver': nc_version}) self.assertIn(msg, six.text_type(error)) def test_validate_key_type(self): self._test_validate(key_type='x509', nc_version='2.2') def test_validate_user(self): self.patchobject(keystone.KeystoneClientPlugin, 'get_user_id', return_value='user_A') self._test_validate(user='user_A', nc_version='2.10') def test_check_key(self): res = self._get_test_resource(self.kp_template) res.state_set(res.CREATE, res.COMPLETE, 'for test') res.client = mock.Mock() scheduler.TaskRunner(res.check)() self.assertEqual((res.CHECK, res.COMPLETE), res.state) def test_check_key_fail(self): res = self._get_test_resource(self.kp_template) res.state_set(res.CREATE, res.COMPLETE, 'for test') res.client = mock.Mock() res.client().keypairs.get.side_effect = Exception("boom") exc = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.check)) self.assertIn("boom", six.text_type(exc)) self.assertEqual((res.CHECK, res.FAILED), res.state) def test_update_replace(self): res = self._get_test_resource(self.kp_template) res.state_set(res.CHECK, res.FAILED, 'for test') res.resource_id = 'my_key' # to delete the keypair preparing for replace self.fake_keypairs.delete('my_key') updater = scheduler.TaskRunner(res.update, res.t) self.assertRaises(resource.UpdateReplace, updater) def test_delete_key_not_found(self): """Test delete non-existent key.""" test_res = self._get_test_resource(self.kp_template) test_res.resource_id = "key_name" test_res.state_set(test_res.CREATE, test_res.COMPLETE) self.patchobject(self.fake_keypairs, 'delete', side_effect=fakes_nova.fake_exception()) scheduler.TaskRunner(test_res.delete)() self.assertEqual((test_res.DELETE, test_res.COMPLETE), test_res.state) def test_create_pub(self): """Test create using existing pub key.""" key_name = "existing_key" pk = "test_create_pub" tp_test, created_key = self._get_mock_kp_for_create(key_name, public_key=pk) scheduler.TaskRunner(tp_test.create)() self.assertEqual("", tp_test.FnGetAtt('private_key')) self.assertEqual("test_create_pub", tp_test.FnGetAtt('public_key')) self.assertEqual((tp_test.CREATE, tp_test.COMPLETE), tp_test.state) self.assertEqual(tp_test.resource_id, created_key.name) def test_save_priv_key(self): """Test a saved private key.""" key_name = "save_private" tp_test, created_key = self._get_mock_kp_for_create(key_name, priv_saved=True) self.patchobject(self.fake_keypairs, 'get', return_value=created_key) scheduler.TaskRunner(tp_test.create)() self.assertEqual("private key for save_private", tp_test.FnGetAtt('private_key')) self.assertEqual("generated test public key", tp_test.FnGetAtt('public_key')) self.assertEqual((tp_test.CREATE, tp_test.COMPLETE), tp_test.state) self.assertEqual(tp_test.resource_id, created_key.name) def test_nova_keypair_refid(self): stack = utils.parse_stack(self.kp_template) rsrc = stack['kp'] rsrc.resource_id = 'xyz' self.assertEqual('xyz', rsrc.FnGetRefId()) def test_nova_keypair_refid_convergence_cache_data(self): cache_data = {'kp': { 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'convg_xyz' }} stack = utils.parse_stack(self.kp_template, cache_data=cache_data) rsrc = stack.defn['kp'] self.assertEqual('convg_xyz', rsrc.FnGetRefId()) 
heat-10.0.2/heat/tests/openstack/nova/fakes.py0000666000175000017500000004473613343562352021243 0ustar zuulzuul00000000000000# # Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock from novaclient import client as base_client from novaclient import exceptions as nova_exceptions import requests from six.moves.urllib import parse as urlparse from heat.tests import fakes NOVA_API_VERSION = "2.1" Client = base_client.Client(NOVA_API_VERSION).__class__ def fake_exception(status_code=404, message=None, details=None): resp = mock.Mock() resp.status_code = status_code resp.headers = None body = {'error': {'message': message, 'details': details}} return nova_exceptions.from_response(resp, body, None) class FakeClient(fakes.FakeClient, Client): def __init__(self, *args, **kwargs): super(FakeClient, self).__init__(direct_use=False) self.client = FakeSessionClient(session=mock.Mock(), **kwargs) class FakeSessionClient(base_client.SessionClient): def __init__(self, *args, **kwargs): super(FakeSessionClient, self).__init__(*args, **kwargs) self.callstack = [] def request(self, url, method, **kwargs): # Check that certain things are called correctly if method in ['GET', 'DELETE']: assert 'body' not in kwargs elif method == 'PUT': assert 'body' in kwargs # Call the method args = urlparse.parse_qsl(urlparse.urlparse(url)[4]) kwargs.update(args) munged_url = url.rsplit('?', 1)[0] munged_url = munged_url.strip('/').replace('/', '_').replace( '.', '_').replace(' ', '_') munged_url = munged_url.replace('-', '_') callback = "%s_%s" % (method.lower(), munged_url) if not hasattr(self, callback): raise AssertionError('Called unknown API method: %s %s, ' 'expected fakes method name: %s' % (method, url, callback)) # Note the call self.callstack.append((method, url, kwargs.get('body'))) status, body = getattr(self, callback)(**kwargs) response = requests.models.Response() if isinstance(status, dict): response.status_code = status.pop("status") response.headers = status else: response.status_code = status return response, body # # Servers # def get_servers_detail(self, **kw): return ( 200, {"servers": [{"id": "1234", "name": "sample-server", "OS-EXT-SRV-ATTR:instance_name": "sample-server", "image": {"id": 2, "name": "sample image"}, "flavor": {"id": 1, "name": "256 MB Server"}, "hostId": "e4d909c290d0fb1ca068ffaddf22cbd0", "status": "BUILD", "progress": 60, "addresses": {"public": [{"version": 4, "addr": "1.2.3.4"}, {"version": 4, "addr": "5.6.7.8"}], "private": [{"version": 4, "addr": "10.11.12.13"}]}, "accessIPv4": "", "accessIPv6": "", "metadata": {"Server Label": "Web Head 1", "Image Version": "2.1"}}, {"id": "5678", "name": "sample-server2", "OS-EXT-AZ:availability_zone": "nova2", "OS-EXT-SRV-ATTR:instance_name": "sample-server2", "image": {"id": 2, "name": "sample image"}, "flavor": {"id": 1, "name": "256 MB Server"}, "hostId": "9e107d9d372bb6826bd81d3542a419d6", "status": "ACTIVE", "accessIPv4": "192.0.2.0", "accessIPv6": 
"::babe:4317:0A83", "addresses": {"public": [{"version": 4, "addr": "4.5.6.7", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8c:22:aa"}, {"version": 4, "addr": "5.6.9.8", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8c:33:bb"}], "private": [{"version": 4, "addr": "10.13.12.13", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8c:44:cc"}]}, "metadata": {}}, {"id": "9101", "name": "hard-reboot", "OS-EXT-SRV-ATTR:instance_name": "hard-reboot", "image": {"id": 2, "name": "sample image"}, "flavor": {"id": 1, "name": "256 MB Server"}, "hostId": "9e44d8d435c43dd8d96bb63ed995605f", "status": "HARD_REBOOT", "accessIPv4": "", "accessIPv6": "", "addresses": {"public": [{"version": 4, "addr": "172.17.1.2"}, {"version": 4, "addr": "10.20.30.40"}], "private": [{"version": 4, "addr": "10.13.12.13"}]}, "metadata": {"Server Label": "DB 1"}}, {"id": "9102", "name": "server-with-no-ip", "OS-EXT-SRV-ATTR:instance_name": "server-with-no-ip", "image": {"id": 2, "name": "sample image"}, "flavor": {"id": 1, "name": "256 MB Server"}, "hostId": "c1365ba78c624df9b2ff446515a682f5", "status": "ACTIVE", "accessIPv4": "", "accessIPv6": "", "addresses": {"empty_net": []}, "metadata": {"Server Label": "DB 1"}}, {"id": "9999", "name": "sample-server3", "OS-EXT-SRV-ATTR:instance_name": "sample-server3", "OS-EXT-AZ:availability_zone": "nova3", "image": {"id": 3, "name": "sample image"}, "flavor": {"id": 3, "name": "m1.large"}, "hostId": "9e107d9d372bb6826bd81d3542a419d6", "status": "ACTIVE", "accessIPv4": "", "accessIPv6": "", "addresses": { "public": [{"version": 4, "addr": "4.5.6.7"}, {"version": 4, "addr": "5.6.9.8"}], "private": [{"version": 4, "addr": "10.13.12.13"}]}, "metadata": {"Server Label": "DB 1"}, "os-extended-volumes:volumes_attached": [{"id": "66359157-dace-43ab-a7ed-a7e7cd7be59d"}]}, {"id": 56789, "name": "server-with-metadata", "OS-EXT-SRV-ATTR:instance_name": "sample-server2", "image": {"id": 2, "name": "sample image"}, "flavor": {"id": 1, "name": "256 MB Server"}, "hostId": "9e107d9d372bb6826bd81d3542a419d6", "status": "ACTIVE", "accessIPv4": "192.0.2.0", "accessIPv6": "::babe:4317:0A83", "addresses": {"public": [{"version": 4, "addr": "4.5.6.7"}, {"version": 4, "addr": "5.6.9.8"}], "private": [{"version": 4, "addr": "10.13.12.13"}]}, "metadata": {'test': '123', 'this': 'that'}}]}) def get_servers_1234(self, **kw): r = {'server': self.get_servers_detail()[1]['servers'][0]} return (200, r) def get_servers_56789(self, **kw): r = {'server': self.get_servers_detail()[1]['servers'][5]} return (200, r) def get_servers_WikiServerOne(self, **kw): r = {'server': self.get_servers_detail()[1]['servers'][0]} return (200, r) def get_servers_WikiServerOne1(self, **kw): r = {'server': self.get_servers_detail()[1]['servers'][0]} return (200, r) def get_servers_WikiServerOne2(self, **kw): r = {'server': self.get_servers_detail()[1]['servers'][3]} return (200, r) def get_servers_5678(self, **kw): r = {'server': self.get_servers_detail()[1]['servers'][1]} return (200, r) def delete_servers_1234(self, **kw): return (202, None) def get_servers_9999(self, **kw): r = {'server': self.get_servers_detail()[1]['servers'][4]} return (200, r) def get_servers_9102(self, **kw): r = {'server': self.get_servers_detail()[1]['servers'][3]} return (200, r) # # Server actions # def post_servers_1234_action(self, body, **kw): _body = None resp = 202 assert len(body.keys()) == 1 action = next(iter(body)) if action == 'reboot': assert list(body[action].keys()) == ['type'] assert body[action]['type'] in ['HARD', 'SOFT'] elif action == 'rebuild': keys = 
list(body[action].keys()) if 'adminPass' in keys: keys.remove('adminPass') assert keys == ['imageRef'] _body = self.get_servers_1234()[1] elif action == 'resize': assert list(body[action].keys()) == ['flavorRef'] elif action == 'confirmResize': assert body[action] is None # This one method returns a different response code return (204, None) elif action in ['revertResize', 'migrate', 'rescue', 'unrescue', 'suspend', 'resume', 'lock', 'unlock', ]: assert body[action] is None elif action == 'addFixedIp': assert list(body[action].keys()) == ['networkId'] elif action in ['removeFixedIp', 'addFloatingIp', 'removeFloatingIp', ]: assert list(body[action].keys()) == ['address'] elif action == 'createImage': assert set(body[action].keys()) == set(['name', 'metadata']) resp = {"status": 202, "location": "http://blah/images/456"} elif action == 'changePassword': assert list(body[action].keys()) == ['adminPass'] elif action == 'os-getConsoleOutput': assert list(body[action].keys()) == ['length'] return (202, {'output': 'foo'}) elif action == 'os-getVNCConsole': assert list(body[action].keys()) == ['type'] elif action == 'os-migrateLive': assert set(body[action].keys()) == set(['host', 'block_migration', 'disk_over_commit']) elif action == 'forceDelete': assert body is not None else: raise AssertionError("Unexpected server action: %s" % action) return (resp, _body) def post_servers_5678_action(self, body, **kw): _body = None resp = 202 assert len(body.keys()) == 1 action = next(iter(body)) if action in ['addFloatingIp', 'removeFloatingIp', ]: assert list(body[action].keys()) == ['address'] return (resp, _body) # # Flavors # def get_flavors(self, **kw): return (200, {'flavors': [ {'id': 1, 'name': '256 MB Server', 'ram': 256, 'disk': 10, 'OS-FLV-EXT-DATA:ephemeral': 10}, {'id': 2, 'name': 'm1.small', 'ram': 512, 'disk': 20, 'OS-FLV-EXT-DATA:ephemeral': 20}, {'id': 3, 'name': 'm1.large', 'ram': 512, 'disk': 20, 'OS-FLV-EXT-DATA:ephemeral': 30} ]}) def get_flavors_256_MB_Server(self, **kw): raise fake_exception() def get_flavors_m1_small(self, **kw): raise fake_exception() def get_flavors_m1_large(self, **kw): raise fake_exception() def get_flavors_1(self, **kw): return (200, {'flavor': { 'id': 1, 'name': '256 MB Server', 'ram': 256, 'disk': 10, 'OS-FLV-EXT-DATA:ephemeral': 10}}) def get_flavors_2(self, **kw): return (200, {'flavor': { 'id': 2, 'name': 'm1.small', 'ram': 512, 'disk': 20, 'OS-FLV-EXT-DATA:ephemeral': 20}}) def get_flavors_3(self, **kw): return (200, {'flavor': { 'id': 3, 'name': 'm1.large', 'ram': 512, 'disk': 20, 'OS-FLV-EXT-DATA:ephemeral': 30}}) # # Floating ips # def get_os_floating_ips_1(self, **kw): return (200, {'floating_ip': {'id': 1, 'fixed_ip': '10.0.0.1', 'ip': '11.0.0.1'}}) def post_os_floating_ips(self, body, **kw): return (202, self.get_os_floating_ips_1()[1]) def delete_os_floating_ips_1(self, **kw): return (204, None) # # Images # def get_images_detail(self, **kw): return (200, {'images': [{'id': 1, 'name': 'CentOS 5.2', "updated": "2010-10-10T12:00:00Z", "created": "2010-08-10T12:00:00Z", "status": "ACTIVE", "metadata": {"test_key": "test_value"}, "links": {}}, {"id": 743, "name": "My Server Backup", "serverId": 1234, "updated": "2010-10-10T12:00:00Z", "created": "2010-08-10T12:00:00Z", "status": "SAVING", "progress": 80, "links": {}}, {"id": 744, "name": "F17-x86_64-gold", "serverId": 9999, "updated": "2010-10-10T12:00:00Z", "created": "2010-08-10T12:00:00Z", "status": "SAVING", "progress": 80, "links": {}}, {"id": 745, "name": "F17-x86_64-cfntools", "serverId": 9998, 
"updated": "2010-10-10T12:00:00Z", "created": "2010-08-10T12:00:00Z", "status": "SAVING", "progress": 80, "links": {}}, {"id": 746, "name": "F20-x86_64-cfntools", "serverId": 9998, "updated": "2010-10-10T12:00:00Z", "created": "2010-08-10T12:00:00Z", "status": "SAVING", "progress": 80, "links": {}}]}) def get_images_1(self, **kw): return (200, {'image': self.get_images_detail()[1]['images'][0]}) get_images_456 = get_images_1 get_images_image_name = get_images_1 # # Keypairs # def get_os_keypairs(self, *kw): return (200, {"keypairs": [{'fingerprint': 'FAKE_KEYPAIR', 'name': 'test', 'public_key': 'foo'}]}) def get_os_keypairs_test(self, *kw): return (200, {"keypair": {'fingerprint': 'FAKE_KEYPAIR', 'name': 'test', 'public_key': 'foo'}}) def get_os_keypairs_test2(self, *kw): raise fake_exception() def get_os_availability_zone(self, *kw): return (200, {"availabilityZoneInfo": [{'zoneName': 'nova1'}]}) def get_os_networks(self, **kw): return (200, {'networks': [{'label': 'public', 'id': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'}, {'label': 'foo', 'id': '42'}, {'label': 'foo', 'id': '42'}]}) # # Limits # def get_limits(self, *kw): return (200, {'limits': {'absolute': {'maxServerMeta': 3, 'maxPersonalitySize': 10240, 'maxPersonality': 5}}}) heat-10.0.2/heat/tests/openstack/nova/test_host_aggregate.py0000666000175000017500000001346013343562340024157 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from heat.engine.clients.os import nova from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils AGGREGATE_TEMPLATE = { 'heat_template_version': '2013-05-23', 'description': 'Heat Aggregate creation example', 'resources': { 'my_aggregate': { 'type': 'OS::Nova::HostAggregate', 'properties': { 'name': 'host_aggregate', 'availability_zone': 'nova', 'hosts': ['host_1', 'host_2'], 'metadata': {"availability_zone": "nova"} } } } } class NovaHostAggregateTest(common.HeatTestCase): def setUp(self): super(NovaHostAggregateTest, self).setUp() self.patchobject(nova.NovaClientPlugin, 'has_extension', return_value=True) self.ctx = utils.dummy_context() self.stack = stack.Stack( self.ctx, 'nova_aggregate_test_stack', template.Template(AGGREGATE_TEMPLATE) ) self.my_aggregate = self.stack['my_aggregate'] nova_client = mock.MagicMock() self.novaclient = mock.MagicMock() self.my_aggregate.client = nova_client nova_client.return_value = self.novaclient self.aggregates = self.novaclient.aggregates def test_aggregate_handle_create(self): value = mock.MagicMock() aggregate_id = '927202df-1afb-497f-8368-9c2d2f26e5db' value.id = aggregate_id self.aggregates.create.return_value = value self.my_aggregate.handle_create() value.set_metadata.assert_called_once_with( {"availability_zone": "nova"}) self.assertEqual(2, value.add_host.call_count) self.assertEqual(aggregate_id, self.my_aggregate.resource_id) def test_aggregate_handle_update_name(self): value = mock.MagicMock() self.aggregates.get.return_value = value prop_diff = {'name': 'new_host_aggregate', 'metadata': {'availability_zone': 'new_nova'}, 'availability_zone': 'new_nova'} expected = {'name': 'new_host_aggregate', 'availability_zone': 'new_nova'} self.my_aggregate.handle_update( json_snippet=None, tmpl_diff=None, prop_diff=prop_diff ) value.update.assert_called_once_with(expected) def test_aggregate_handle_update_hosts(self): ag = mock.MagicMock() ag.hosts = ['host_1', 'host_2'] self.aggregates.get.return_value = ag prop_diff = {'hosts': ['host_1', 'host_3']} add_host_expected = 'host_3' remove_host_expected = 'host_2' self.my_aggregate.handle_update( json_snippet=None, tmpl_diff=None, prop_diff=prop_diff ) self.assertEqual(0, ag.update.call_count) self.assertEqual(0, ag.set_metadata.call_count) ag.add_host.assert_called_once_with(add_host_expected) ag.remove_host.assert_called_once_with(remove_host_expected) def test_aggregate_handle_update_metadata(self): ag = mock.MagicMock() self.aggregates.get.return_value = ag prop_diff = {'metadata': {'availability_zone': 'nova3'}} set_metadata_expected = {'availability_zone': 'nova3'} self.my_aggregate.handle_update( json_snippet=None, tmpl_diff=None, prop_diff=prop_diff ) self.assertEqual(0, ag.update.call_count) self.assertEqual(0, ag.add_host.call_count) self.assertEqual(0, ag.remove_host.call_count) ag.set_metadata.assert_called_once_with(set_metadata_expected) def test_aggregate_handle_delete_not_found(self): ag = mock.MagicMock() ag.id = '927202df-1afb-497f-8368-9c2d2f26e5db' ag.hosts = ['host_1'] self.aggregates.get.side_effect = [nova.exceptions.NotFound(404)] self.my_aggregate.handle_delete() def test_aggregate_handle_delete(self): ag = mock.MagicMock() ag.id = '927202df-1afb-497f-8368-9c2d2f26e5db' ag.hosts = ['host_1'] self.aggregates.get.return_value = ag self.aggregates.hosts = ag.hosts self.my_aggregate.resource_id = ag.id self.my_aggregate.handle_delete() self.assertEqual(1, self.aggregates.get.call_count) 
self.assertEqual(ag.hosts, self.aggregates.hosts) ag.remove_host.assert_called_once_with(ag.hosts[0]) self.aggregates.delete.assert_called_once_with(ag.id) def test_aggregate_get_live_state(self): value = mock.MagicMock() value.to_dict.return_value = { 'availability_zone': 'nova', 'name': 'test', 'hosts': ['ubuntu'], 'metadata': {'test': 'test', 'availability_zone': 'nova'} } self.aggregates.get.return_value = value reality = self.my_aggregate.get_live_state( self.my_aggregate.properties) expected = { 'availability_zone': 'nova', 'name': 'test', 'hosts': ['ubuntu'], 'metadata': {'test': 'test', 'availability_zone': 'nova'} } self.assertEqual(set(expected.keys()), set(reality.keys())) for key in reality: self.assertEqual(expected[key], reality[key]) heat-10.0.2/heat/tests/openstack/nova/test_server_group.py0000666000175000017500000000675213343562340023724 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import mock from heat.common import template_format from heat.engine.clients.os import nova from heat.engine import scheduler from heat.tests import common from heat.tests import utils sg_template = { "heat_template_version": "2013-05-23", "resources": { "ServerGroup": { "type": "OS::Nova::ServerGroup", "properties": { "name": "test", "policies": ["anti-affinity"] } } } } class FakeGroup(object): def __init__(self, name): self.id = name self.name = name class NovaServerGroupTest(common.HeatTestCase): def setUp(self): super(NovaServerGroupTest, self).setUp() self.patchobject(nova.NovaClientPlugin, 'has_extension', return_value=True) def _init_template(self, sg_template): template = template_format.parse(json.dumps(sg_template)) self.stack = utils.parse_stack(template) self.sg = self.stack['ServerGroup'] # create mock clients and objects nova = mock.MagicMock() self.sg.client = mock.MagicMock(return_value=nova) self.sg_mgr = nova.server_groups def _create_sg(self, name): if name: sg = sg_template['resources']['ServerGroup'] sg['properties']['name'] = name self._init_template(sg_template) self.sg_mgr.create.return_value = FakeGroup(name) else: try: sg = sg_template['resources']['ServerGroup'] del sg['properties']['name'] except Exception: pass self._init_template(sg_template) name = 'test' n = name def fake_create(name, policies): self.assertGreater(len(name), 1) return FakeGroup(n) self.sg_mgr.create = fake_create scheduler.TaskRunner(self.sg.create)() self.assertEqual((self.sg.CREATE, self.sg.COMPLETE), self.sg.state) self.assertEqual(name, self.sg.resource_id) def test_sg_create(self): self._create_sg('test') expected_args = () expected_kwargs = {'name': 'test', 'policies': ["anti-affinity"], } self.sg_mgr.create.assert_called_once_with(*expected_args, **expected_kwargs) def test_sg_create_no_name(self): self._create_sg(None) def test_sg_show_resource(self): self._create_sg('test') self.sg.client = mock.MagicMock() s_groups = mock.MagicMock() sg = mock.MagicMock() sg.to_dict.return_value = {'server_gr': 'info'} s_groups.get.return_value = sg 
self.sg.client().server_groups = s_groups self.assertEqual({'server_gr': 'info'}, self.sg.FnGetAtt('show')) s_groups.get.assert_called_once_with('test') heat-10.0.2/heat/tests/openstack/nova/test_quota.py0000666000175000017500000001375013343562340022327 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import keystone as k_plugin from heat.engine.clients.os import nova as n_plugin from heat.engine import rsrc_defn from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests import utils quota_template = ''' heat_template_version: newton description: Sample nova quota heat template resources: my_quota: type: OS::Nova::Quota properties: project: demo cores: 5 fixed_ips: 5 floating_ips: 5 instances: 5 injected_files: 5 injected_file_content_bytes: 5 injected_file_path_bytes: 5 key_pairs: 5 metadata_items: 5 ram: 5 security_groups: 5 security_group_rules: 5 server_groups: 5 server_group_members: 5 ''' valid_properties = [ 'cores', 'fixed_ips', 'floating_ips', 'instances', 'injected_files', 'injected_file_content_bytes', 'injected_file_path_bytes', 'key_pairs', 'metadata_items', 'ram', 'security_groups', 'security_group_rules', 'server_groups', 'server_group_members' ] class NovaQuotaTest(common.HeatTestCase): def setUp(self): super(NovaQuotaTest, self).setUp() self.ctx = utils.dummy_context() self.patchobject(n_plugin.NovaClientPlugin, 'has_extension', return_value=True) self.patchobject(k_plugin.KeystoneClientPlugin, 'get_project_id', return_value='some_project_id') tpl = template_format.parse(quota_template) self.stack = parser.Stack( self.ctx, 'nova_quota_test_stack', template.Template(tpl) ) self.my_quota = self.stack['my_quota'] nova = mock.MagicMock() self.novaclient = mock.MagicMock() self.my_quota.client = nova nova.return_value = self.novaclient self.quotas = self.novaclient.quotas self.quota_set = mock.MagicMock() self.quotas.update.return_value = self.quota_set self.quotas.delete.return_value = self.quota_set def _test_validate(self, resource, error_msg): exc = self.assertRaises(exception.StackValidationFailed, resource.validate) self.assertIn(error_msg, six.text_type(exc)) def _test_invalid_property(self, prop_name): my_quota = self.stack['my_quota'] props = self.stack.t.t['resources']['my_quota']['properties'].copy() props[prop_name] = -2 my_quota.t = my_quota.t.freeze(properties=props) my_quota.reparse() error_msg = ('Property error: resources.my_quota.properties.%s:' ' -2 is out of range (min: -1, max: None)' % prop_name) self._test_validate(my_quota, error_msg) def test_invalid_properties(self): for prop in valid_properties: self._test_invalid_property(prop) def test_miss_all_quotas(self): my_quota = self.stack['my_quota'] props = self.stack.t.t['resources']['my_quota']['properties'].copy() for key in valid_properties: if key in props: del props[key] my_quota.t = 
my_quota.t.freeze(properties=props) my_quota.reparse() msg = ('At least one of the following properties must be specified: ' 'cores, fixed_ips, floating_ips, injected_file_content_bytes, ' 'injected_file_path_bytes, injected_files, instances, ' 'key_pairs, metadata_items, ram, security_group_rules, ' 'security_groups, server_group_members, server_groups.') self.assertRaisesRegex(exception.PropertyUnspecifiedError, msg, my_quota.validate) def test_quota_handle_create(self): self.my_quota.physical_resource_name = mock.MagicMock( return_value='some_resource_id') self.my_quota.reparse() self.my_quota.handle_create() self.quotas.update.assert_called_once_with( 'some_project_id', cores=5, fixed_ips=5, floating_ips=5, instances=5, injected_files=5, injected_file_content_bytes=5, injected_file_path_bytes=5, key_pairs=5, metadata_items=5, ram=5, security_groups=5, security_group_rules=5, server_groups=5, server_group_members=5 ) self.assertEqual('some_resource_id', self.my_quota.resource_id) def test_quota_handle_update(self): tmpl_diff = mock.MagicMock() prop_diff = mock.MagicMock() props = {'project': 'some_project_id', 'cores': 1, 'fixed_ips': 2, 'instances': 3, 'injected_file_content_bytes': 4, 'ram': 200} json_snippet = rsrc_defn.ResourceDefinition( self.my_quota.name, 'OS::Nova::Quota', properties=props) self.my_quota.reparse() self.my_quota.handle_update(json_snippet, tmpl_diff, prop_diff) self.quotas.update.assert_called_once_with( 'some_project_id', cores=1, fixed_ips=2, instances=3, injected_file_content_bytes=4, ram=200 ) def test_quota_handle_delete(self): self.my_quota.reparse() self.my_quota.handle_delete() self.quotas.delete.assert_called_once_with('some_project_id') heat-10.0.2/heat/tests/openstack/nova/test_floatingip.py0000666000175000017500000003304313343562352023332 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import mock from neutronclient.v2_0 import client as neutronclient from heat.common import exception as heat_ex from heat.common import short_id from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine.clients.os import nova from heat.engine import node_data from heat.engine.resources.openstack.nova import floatingip from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils floating_ip_template = ''' { "heat_template_version": "2013-05-23", "resources": { "MyFloatingIP": { "type": "OS::Nova::FloatingIP", "properties": { "pool": "public" } } } } ''' floating_ip_template_with_assoc = ''' { "heat_template_version": "2013-05-23", "resources": { "MyFloatingIPAssociation": { "type": "OS::Nova::FloatingIPAssociation", "properties": { "server_id": "67dc62f9-efde-4c8b-94af-013e00f5dc57", "floating_ip": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" } } } } ''' class NovaFloatingIPTest(common.HeatTestCase): def setUp(self): super(NovaFloatingIPTest, self).setUp() self.novaclient = fakes_nova.FakeClient() self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.novaclient) self.m.StubOutWithMock(neutronclient.Client, 'create_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'update_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'delete_floatingip') self.patchobject(neutron.NeutronClientPlugin, 'find_resourceid_by_name_or_id', return_value='eeee') def mock_interface(self, port, ip): class MockIface(object): def __init__(self, port_id, fixed_ip): self.port_id = port_id self.fixed_ips = [{'ip_address': fixed_ip}] return MockIface(port, ip) def mock_create_floatingip(self): neutronclient.Client.create_floatingip({ 'floatingip': {'floating_network_id': u'eeee'} }).AndReturn({'floatingip': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", 'floating_network_id': 'eeee', "floating_ip_address": "11.0.0.1" }}) def mock_update_floatingip(self, fip='fc68ea2c-b60b-4b4f-bd82-94ec81110766', ex=None, fip_request=None, delete_assc=False): if fip_request: request_body = fip_request elif delete_assc: request_body = { 'floatingip': { 'port_id': None, 'fixed_ip_address': None}} else: request_body = { 'floatingip': { 'port_id': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', 'fixed_ip_address': '1.2.3.4'}} if ex: neutronclient.Client.update_floatingip( fip, request_body).AndRaise(ex) else: neutronclient.Client.update_floatingip( fip, request_body).AndReturn(None) def mock_delete_floatingip(self): id = 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' neutronclient.Client.delete_floatingip(id).AndReturn(None) def prepare_floating_ip(self): self.mock_create_floatingip() template = template_format.parse(floating_ip_template) self.stack = utils.parse_stack(template) defns = self.stack.t.resource_definitions(self.stack) return floatingip.NovaFloatingIp('MyFloatingIP', defns['MyFloatingIP'], self.stack) def prepare_floating_ip_assoc(self): return_server = self.novaclient.servers.list()[1] self.patchobject(self.novaclient.servers, 'get', return_value=return_server) iface = self.mock_interface('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', '1.2.3.4') self.patchobject(return_server, 'interface_list', return_value=[iface]) template = template_format.parse(floating_ip_template_with_assoc) self.stack = utils.parse_stack(template) resource_defns = self.stack.t.resource_definitions(self.stack) floating_ip_assoc = 
resource_defns['MyFloatingIPAssociation'] return floatingip.NovaFloatingIpAssociation( 'MyFloatingIPAssociation', floating_ip_assoc, self.stack) def test_floating_ip_create(self): rsrc = self.prepare_floating_ip() self.m.ReplayAll() rsrc.validate() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', rsrc.FnGetRefId()) self.assertEqual('11.0.0.1', rsrc.FnGetAtt('ip')) self.assertEqual('eeee', rsrc.FnGetAtt('pool')) self.m.VerifyAll() def test_floating_ip_delete(self): rsrc = self.prepare_floating_ip() self.mock_delete_floatingip() self.m.ReplayAll() rsrc.validate() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_floating_ip_assoc_successful_if_create_failed(self): rsrc = self.prepare_floating_ip_assoc() self.mock_update_floatingip(fakes_nova.fake_exception(400)) self.m.ReplayAll() rsrc.validate() self.assertRaises(heat_ex.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_floating_ip_assoc_create(self): rsrc = self.prepare_floating_ip_assoc() self.mock_update_floatingip() self.m.ReplayAll() rsrc.validate() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertIsNotNone(rsrc.id) self.assertEqual(rsrc.id, rsrc.resource_id) self.m.VerifyAll() def test_floating_ip_assoc_delete(self): rsrc = self.prepare_floating_ip_assoc() self.mock_update_floatingip() self.mock_update_floatingip(delete_assc=True) self.m.ReplayAll() rsrc.validate() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_floating_ip_assoc_delete_not_found(self): rsrc = self.prepare_floating_ip_assoc() self.mock_update_floatingip() self.mock_update_floatingip(ex=fakes_nova.fake_exception(404), delete_assc=True) self.m.ReplayAll() rsrc.validate() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_floating_ip_assoc_update_server_id(self): rsrc = self.prepare_floating_ip_assoc() self.mock_update_floatingip() fip_request = {'floatingip': { 'fixed_ip_address': '4.5.6.7', 'port_id': 'bbbbb-bbbb-bbbb-bbbbbbbbb'} } self.mock_update_floatingip(fip_request=fip_request) self.m.ReplayAll() rsrc.validate() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # for update return_server = self.novaclient.servers.list()[2] self.patchobject(self.novaclient.servers, 'get', return_value=return_server) iface = self.mock_interface('bbbbb-bbbb-bbbb-bbbbbbbbb', '4.5.6.7') self.patchobject(return_server, 'interface_list', return_value=[iface]) # update with the new server_id props = copy.deepcopy(rsrc.properties.data) update_server_id = '2146dfbf-ba77-4083-8e86-d052f671ece5' props['server_id'] = update_server_id update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) scheduler.TaskRunner(rsrc.update, update_snippet)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), 
rsrc.state) self.m.VerifyAll() def test_floating_ip_assoc_update_fl_ip(self): rsrc = self.prepare_floating_ip_assoc() # for create self.mock_update_floatingip() # mock for delete the old association self.mock_update_floatingip(delete_assc=True) # mock for new association self.mock_update_floatingip(fip='fc68ea2c-dddd-4b4f-bd82-94ec81110766') self.m.ReplayAll() rsrc.validate() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # update with the new floatingip props = copy.deepcopy(rsrc.properties.data) props['floating_ip'] = 'fc68ea2c-dddd-4b4f-bd82-94ec81110766' update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) scheduler.TaskRunner(rsrc.update, update_snippet)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_floating_ip_assoc_update_both(self): rsrc = self.prepare_floating_ip_assoc() # for create self.mock_update_floatingip() # mock for delete the old association self.mock_update_floatingip(delete_assc=True) # mock for new association fip_request = {'floatingip': { 'fixed_ip_address': '4.5.6.7', 'port_id': 'bbbbb-bbbb-bbbb-bbbbbbbbb'} } self.mock_update_floatingip(fip='fc68ea2c-dddd-4b4f-bd82-94ec81110766', fip_request=fip_request) self.m.ReplayAll() rsrc.validate() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # update with the new floatingip and server return_server = self.novaclient.servers.list()[2] self.patchobject(self.novaclient.servers, 'get', return_value=return_server) iface = self.mock_interface('bbbbb-bbbb-bbbb-bbbbbbbbb', '4.5.6.7') self.patchobject(return_server, 'interface_list', return_value=[iface]) props = copy.deepcopy(rsrc.properties.data) update_server_id = '2146dfbf-ba77-4083-8e86-d052f671ece5' props['server_id'] = update_server_id props['floating_ip'] = 'fc68ea2c-dddd-4b4f-bd82-94ec81110766' update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) scheduler.TaskRunner(rsrc.update, update_snippet)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_floating_ip_assoc_refid_rsrc_name(self): t = template_format.parse(floating_ip_template_with_assoc) stack = utils.parse_stack(t) rsrc = stack['MyFloatingIPAssociation'] rsrc.id = '123' rsrc.uuid = '9bfb9456-3fe8-41f4-b318-9dba18eeef74' rsrc.action = 'CREATE' expected = '%s-%s-%s' % (rsrc.stack.name, rsrc.name, short_id.get_id(rsrc.uuid)) self.assertEqual(expected, rsrc.FnGetRefId()) def test_floating_ip_assoc_refid_rsrc_id(self): t = template_format.parse(floating_ip_template_with_assoc) stack = utils.parse_stack(t) rsrc = stack['MyFloatingIPAssociation'] rsrc.resource_id = 'phy-rsrc-id' self.assertEqual('phy-rsrc-id', rsrc.FnGetRefId()) def test_floating_ip_assoc_refid_convg_cache_data(self): t = template_format.parse(floating_ip_template_with_assoc) cache_data = {'MyFloatingIPAssociation': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'convg_xyz' })} stack = utils.parse_stack(t, cache_data=cache_data) rsrc = stack.defn['MyFloatingIPAssociation'] self.assertEqual('convg_xyz', rsrc.FnGetRefId()) heat-10.0.2/heat/tests/openstack/nova/__init__.py0000666000175000017500000000000013343562340021656 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/nova/test_server.py0000666000175000017500000071473613343562352022523 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use 
this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import contextlib import copy import mock from keystoneauth1 import exceptions as ks_exceptions from neutronclient.v2_0 import client as neutronclient from novaclient import exceptions as nova_exceptions from novaclient.v2 import client as novaclient from oslo_serialization import jsonutils from oslo_utils import uuidutils import requests import six from six.moves.urllib import parse as urlparse from heat.common import exception from heat.common.i18n import _ from heat.common import template_format from heat.engine.clients.os import glance from heat.engine.clients.os import heat_plugin from heat.engine.clients.os import neutron from heat.engine.clients.os import nova from heat.engine.clients.os import swift from heat.engine.clients.os import zaqar from heat.engine import environment from heat.engine import resource from heat.engine.resources.openstack.nova import server as servers from heat.engine.resources.openstack.nova import server_network_mixin from heat.engine.resources import scheduler_hints as sh from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import template from heat.objects import resource_data as resource_data_object from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils wp_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "WordPress", "Parameters" : { "key_name" : { "Description" : "key_name", "Type" : "String", "Default" : "test" } }, "Resources" : { "WebServer": { "Type": "OS::Nova::Server", "Properties": { "image" : "F18-x86_64-gold", "flavor" : "m1.large", "key_name" : "test", "user_data" : "wordpress" } } } } ''' ns_template = ''' heat_template_version: 2015-04-30 resources: server: type: OS::Nova::Server properties: image: F17-x86_64-gold flavor: m1.large user_data: {get_file: a_file} networks: [{'network': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'}] ''' with_port_template = ''' heat_template_version: 2015-04-30 resources: port: type: OS::Neutron::Port properties: network: 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' server: type: OS::Nova::Server properties: image: F17-x86_64-gold flavor: m1.small networks: - port: {get_resource: port} fixed_ip: 10.0.0.99 ''' bdm_v2_template = ''' heat_template_version: 2015-04-30 resources: server: type: OS::Nova::Server properties: flavor: m1.tiny block_device_mapping_v2: - device_name: vda delete_on_termination: true image_id: F17-x86_64-gold ''' subnet_template = ''' heat_template_version: 2013-05-23 resources: server: type: OS::Nova::Server properties: image: F17-x86_64-gold flavor: m1.large networks: - { network: 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' } subnet: type: OS::Neutron::Subnet properties: network: 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' subnet_unreferenced: type: OS::Neutron::Subnet properties: network: 'bbccbbcc-bbcc-bbcc-bbcc-bbccbbccbbcc' ''' multi_subnet_template = ''' heat_template_version: 2013-05-23 resources: server: type: OS::Nova::Server properties: image: F17-x86_64-gold flavor: m1.large networks: - network: {get_resource: 
network} network: type: OS::Neutron::Net properties: name: NewNetwork subnet1: type: OS::Neutron::Subnet properties: network: {get_resource: network} name: NewSubnet1 subnet2: type: OS::Neutron::Subnet properties: network: {get_resource: network} name: NewSubnet2 ''' no_subnet_template = ''' heat_template_version: 2013-05-23 resources: server: type: OS::Nova::Server properties: image: F17-x86_64-gold flavor: m1.large subnet: type: OS::Neutron::Subnet properties: network: 12345 ''' tmpl_server_with_network_id = """ heat_template_version: 2015-10-15 resources: server: type: OS::Nova::Server properties: flavor: m1.small image: F17-x86_64-gold networks: - network: 4321 """ tmpl_server_with_sub_secu_group = """ heat_template_version: 2015-10-15 resources: server: type: OS::Nova::Server properties: flavor: m1.small image: F17-x86_64-gold networks: - subnet: 2a60cbaa-3d33-4af6-a9ce-83594ac546fc security_groups: - my_seg """ server_with_sw_config_personality = """ heat_template_version: 2014-10-16 resources: swconfig: type: OS::Heat::SoftwareConfig properties: config: | #!/bin/bash echo -e "test" server: type: OS::Nova::Server properties: image: F17-x86_64-gold flavor: m1.large personality: { /tmp/test: { get_attr: [swconfig, config]}} """ def create_fake_iface(port=None, net=None, mac=None, ip=None, subnet=None): class fake_interface(object): def __init__(self, port_id, net_id, mac_addr, fixed_ip, subnet_id): self.port_id = port_id self.net_id = net_id self.mac_addr = mac_addr self.fixed_ips = [{'ip_address': fixed_ip, 'subnet_id': subnet_id}] return fake_interface(port, net, mac, ip, subnet) class ServersTest(common.HeatTestCase): def setUp(self): super(ServersTest, self).setUp() self.fc = fakes_nova.FakeClient() self.limits = self.m.CreateMockAnything() self.limits.absolute = self._limits_absolute() self.mock_flavor = mock.Mock(ram=4, disk=4) self.mock_image = mock.Mock(min_ram=1, min_disk=1, status='ACTIVE') def flavor_side_effect(*args): return 2 if args[0] == 'm1.small' else 1 def image_side_effect(*args): return 2 if args[0] == 'F17-x86_64-gold' else 1 self.patchobject(nova.NovaClientPlugin, 'find_flavor_by_name_or_id', side_effect=flavor_side_effect) self.patchobject(glance.GlanceClientPlugin, 'find_image_by_name_or_id', side_effect=image_side_effect) def _limits_absolute(self): max_personality = self.m.CreateMockAnything() max_personality.name = 'maxPersonality' max_personality.value = 5 max_personality_size = self.m.CreateMockAnything() max_personality_size.name = 'maxPersonalitySize' max_personality_size.value = 10240 max_server_meta = self.m.CreateMockAnything() max_server_meta.name = 'maxServerMeta' max_server_meta.value = 3 yield max_personality yield max_personality_size yield max_server_meta def _setup_test_stack(self, stack_name, test_templ=wp_template): t = template_format.parse(test_templ) files = {} if test_templ == ns_template: files = {'a_file': 'stub'} templ = template.Template(t, env=environment.Environment( {'key_name': 'test'}), files=files) stack = parser.Stack(utils.dummy_context(region_name="RegionOne"), stack_name, templ, stack_id=uuidutils.generate_uuid(), stack_user_project_id='8888') return templ, stack def _prepare_server_check(self, status='ACTIVE'): templ, self.stack = self._setup_test_stack('server_check') server = self.fc.servers.list()[1] server.status = status res = self.stack['WebServer'] res.state_set(res.CREATE, res.COMPLETE) res.client = mock.Mock() res.client().servers.get.return_value = server return res def test_check(self): res = 
self._prepare_server_check() scheduler.TaskRunner(res.check)() self.assertEqual((res.CHECK, res.COMPLETE), res.state) def test_check_fail(self): res = self._prepare_server_check() res.client().servers.get.side_effect = Exception('boom') exc = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.check)) self.assertIn('boom', six.text_type(exc)) self.assertEqual((res.CHECK, res.FAILED), res.state) def test_check_not_active(self): res = self._prepare_server_check(status='FOO') exc = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.check)) self.assertIn('FOO', six.text_type(exc)) def _get_test_template(self, stack_name, server_name=None, image_id=None): tmpl, stack = self._setup_test_stack(stack_name) tmpl.t['Resources']['WebServer']['Properties'][ 'image'] = image_id or 'CentOS 5.2' tmpl.t['Resources']['WebServer']['Properties'][ 'flavor'] = '256 MB Server' if server_name is not None: tmpl.t['Resources']['WebServer']['Properties'][ 'name'] = server_name return tmpl, stack def _setup_test_server(self, return_server, name, image_id=None, override_name=False, stub_create=True, networks=None): stack_name = '%s_s' % name def _mock_find_id(resource, name_or_id, cmd_resource=None): return name_or_id mock_find = self.patchobject(neutron.NeutronClientPlugin, 'find_resourceid_by_name_or_id') mock_find.side_effect = _mock_find_id server_name = str(name) if override_name else None tmpl, self.stack = self._get_test_template(stack_name, server_name, image_id) props = tmpl.t['Resources']['WebServer']['Properties'] # set old_networks for server if networks is not None: props['networks'] = networks self.server_props = props resource_defns = tmpl.resource_definitions(self.stack) server = servers.Server(str(name), resource_defns['WebServer'], self.stack) self.patchobject(server, 'store_external_ports') self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) self.patchobject(glance.GlanceClientPlugin, 'get_image', return_value=self.mock_image) if stub_create: self.patchobject(self.fc.servers, 'create', return_value=return_server) # mock check_create_complete innards self.patchobject(self.fc.servers, 'get', return_value=return_server) return server def _create_test_server(self, return_server, name, override_name=False, stub_create=True, networks=None): server = self._setup_test_server(return_server, name, stub_create=stub_create, networks=networks) scheduler.TaskRunner(server.create)() return server def test_subnet_dependency_by_network_id(self): templ, stack = self._setup_test_stack('subnet-test', subnet_template) server_rsrc = stack['server'] subnet_rsrc = stack['subnet'] deps = [] server_rsrc.add_explicit_dependencies(deps) server_rsrc.add_dependencies(deps) self.assertEqual(4, len(deps)) self.assertEqual(subnet_rsrc, deps[3]) self.assertNotIn(stack['subnet_unreferenced'], deps) def test_subnet_dependency_unknown_network_id(self): # The use case here is creating a network + subnets + server # from within one stack templ, stack = self._setup_test_stack('subnet-test', multi_subnet_template) server_rsrc = stack['server'] subnet1_rsrc = stack['subnet1'] subnet2_rsrc = stack['subnet2'] deps = [] server_rsrc.add_explicit_dependencies(deps) server_rsrc.add_dependencies(deps) self.assertEqual(8, len(deps)) self.assertIn(subnet1_rsrc, deps) self.assertIn(subnet2_rsrc, deps) def test_subnet_nodeps(self): templ, stack = self._setup_test_stack('subnet-test', no_subnet_template) server_rsrc = stack['server'] subnet_rsrc = stack['subnet'] deps = [] 
server_rsrc.add_explicit_dependencies(deps) server_rsrc.add_dependencies(deps) self.assertEqual(2, len(deps)) self.assertNotIn(subnet_rsrc, deps) def test_server_create(self): return_server = self.fc.servers.list()[1] return_server.id = '5678' return_server._info['os_collect_config'] = {} server_name = 'test_server_create' stack_name = '%s_s' % server_name server = self._create_test_server(return_server, server_name) # this makes sure the auto increment worked on server creation self.assertGreater(server.id, 0) interfaces = [create_fake_iface(port='1234', mac='fa:16:3e:8c:22:aa', ip='4.5.6.7'), create_fake_iface(port='5678', mac='fa:16:3e:8c:33:bb', ip='5.6.9.8'), create_fake_iface(port='1013', mac='fa:16:3e:8c:44:cc', ip='10.13.12.13')] self.patchobject(self.fc.servers, 'get', return_value=return_server) self.patchobject(return_server, 'interface_list', return_value=interfaces) self.patchobject(self.fc.servers, 'tag_list', return_value=['test']) public_ip = return_server.networks['public'][0] self.assertEqual('1234', server.FnGetAtt('addresses')['public'][0]['port']) self.assertEqual('5678', server.FnGetAtt('addresses')['public'][1]['port']) self.assertEqual(public_ip, server.FnGetAtt('addresses')['public'][0]['addr']) self.assertEqual(public_ip, server.FnGetAtt('networks')['public'][0]) private_ip = return_server.networks['private'][0] self.assertEqual('1013', server.FnGetAtt('addresses')['private'][0]['port']) self.assertEqual(private_ip, server.FnGetAtt('addresses')['private'][0]['addr']) self.assertEqual(private_ip, server.FnGetAtt('networks')['private'][0]) self.assertEqual(return_server._info, server.FnGetAtt('show')) self.assertEqual('sample-server2', server.FnGetAtt('instance_name')) self.assertEqual('192.0.2.0', server.FnGetAtt('accessIPv4')) self.assertEqual('::babe:4317:0A83', server.FnGetAtt('accessIPv6')) expected_name = utils.PhysName(stack_name, server.name) self.assertEqual(expected_name, server.FnGetAtt('name')) self.assertEqual(['test'], server.FnGetAtt('tags')) # test with unsupported version server.client = mock.Mock(side_effect=[ self.fc, exception.InvalidServiceVersion(service='a', version='0')]) if server.attributes._resolved_values.get('tags'): del server.attributes._resolved_values['tags'] self.assertIsNone(server.FnGetAtt('tags')) self.assertEqual({}, server.FnGetAtt('os_collect_config')) def test_server_create_metadata(self): stack_name = 'create_metadata_test_stack' self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) return_server = self.fc.servers.list()[1] (tmpl, stack) = self._setup_test_stack(stack_name) tmpl['Resources']['WebServer']['Properties'][ 'metadata'] = {'a': 1} resource_defns = tmpl.resource_definitions(stack) server = servers.Server('create_metadata_test_server', resource_defns['WebServer'], stack) self.patchobject(server, 'store_external_ports') mock_create = self.patchobject(self.fc.servers, 'create', return_value=return_server) scheduler.TaskRunner(server.create)() args, kwargs = mock_create.call_args self.assertEqual({'a': "1"}, kwargs['meta']) def test_server_create_with_subnet_security_group(self): stack_name = 'server_with_subnet_security_group' self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) return_server = self.fc.servers.list()[1] (tmpl, stack) = self._setup_test_stack( stack_name, test_templ=tmpl_server_with_sub_secu_group) resource_defns = tmpl.resource_definitions(stack) server = servers.Server('server_with_sub_secu', resource_defns['server'], stack) mock_find = self.patchobject( 
neutron.NeutronClientPlugin, 'find_resourceid_by_name_or_id', return_value='2a60cbaa-3d33-4af6-a9ce-83594ac546fc') sec_uuids = ['86c0f8ae-23a8-464f-8603-c54113ef5467'] self.patchobject(neutron.NeutronClientPlugin, 'get_secgroup_uuids', return_value=sec_uuids) self.patchobject(server, 'store_external_ports') self.patchobject(neutron.NeutronClientPlugin, 'network_id_from_subnet_id', return_value='05d8e681-4b37-4570-bc8d-810089f706b2') mock_create_port = self.patchobject( neutronclient.Client, 'create_port') mock_create = self.patchobject( self.fc.servers, 'create', return_value=return_server) scheduler.TaskRunner(server.create)() kwargs = {'network_id': '05d8e681-4b37-4570-bc8d-810089f706b2', 'fixed_ips': [ {'subnet_id': '2a60cbaa-3d33-4af6-a9ce-83594ac546fc'}], 'security_groups': sec_uuids, 'name': 'server_with_sub_secu-port-0', } mock_create_port.assert_called_with({'port': kwargs}) self.assertEqual(1, mock_find.call_count) args, kwargs = mock_create.call_args self.assertEqual({}, kwargs['meta']) def test_server_create_with_str_network(self): stack_name = 'server_with_str_network' return_server = self.fc.servers.list()[1] (tmpl, stack) = self._setup_test_stack(stack_name) mock_nc = self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) self.patchobject(glance.GlanceClientPlugin, 'get_image', return_value=self.mock_image) self.patchobject(nova.NovaClientPlugin, 'get_flavor', return_value=self.mock_flavor) self.patchobject(neutron.NeutronClientPlugin, 'find_resourceid_by_name_or_id') props = tmpl['Resources']['WebServer']['Properties'] props['networks'] = [{'allocate_network': 'none'}] resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) self.patchobject(server, 'store_external_ports') create_mock = self.patchobject(self.fc.servers, 'create', return_value=return_server) scheduler.TaskRunner(server.create)() mock_nc.assert_called_with(version='2.37') self.assertEqual('none', create_mock.call_args[1]['nics']) def test_server_create_with_image_id(self): return_server = self.fc.servers.list()[1] return_server.id = '5678' server_name = 'test_server_create_image_id' server = self._setup_test_server(return_server, server_name, override_name=True) server.resource_id = '1234' interfaces = [create_fake_iface(port='1234', mac='fa:16:3e:8c:22:aa', ip='4.5.6.7'), create_fake_iface(port='5678', mac='fa:16:3e:8c:33:bb', ip='5.6.9.8'), create_fake_iface(port='1013', mac='fa:16:3e:8c:44:cc', ip='10.13.12.13')] self.patchobject(self.fc.servers, 'get', return_value=return_server) self.patchobject(return_server, 'interface_list', return_value=interfaces) self.patchobject(return_server, 'interface_detach') self.patchobject(return_server, 'interface_attach') public_ip = return_server.networks['public'][0] self.assertEqual('1234', server.FnGetAtt('addresses')['public'][0]['port']) self.assertEqual('5678', server.FnGetAtt('addresses')['public'][1]['port']) self.assertEqual(public_ip, server.FnGetAtt('addresses')['public'][0]['addr']) self.assertEqual(public_ip, server.FnGetAtt('networks')['public'][0]) private_ip = return_server.networks['private'][0] self.assertEqual('1013', server.FnGetAtt('addresses')['private'][0]['port']) self.assertEqual(private_ip, server.FnGetAtt('addresses')['private'][0]['addr']) self.assertEqual(private_ip, server.FnGetAtt('networks')['private'][0]) self.assertEqual(server_name, server.FnGetAtt('name')) def test_server_image_name_err(self): stack_name = 'img_name_err' (tmpl, stack) = 
self._setup_test_stack(stack_name) mock_image = self.patchobject(glance.GlanceClientPlugin, 'find_image_by_name_or_id') self.stub_KeypairConstraint_validate() mock_image.side_effect = ( glance.client_exception.EntityMatchNotFound( entity='image', args={'name': 'Slackware'})) # Init a server with non exist image name tmpl['Resources']['WebServer']['Properties']['image'] = 'Slackware' resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(server.create)) self.assertIn("No image matching {'name': 'Slackware'}.", six.text_type(error)) def test_server_duplicate_image_name_err(self): stack_name = 'img_dup_err' (tmpl, stack) = self._setup_test_stack(stack_name) mock_image = self.patchobject(glance.GlanceClientPlugin, 'find_image_by_name_or_id') self.stub_KeypairConstraint_validate() mock_image.side_effect = ( glance.client_exception.EntityUniqueMatchNotFound( entity='image', args='CentOS 5.2')) tmpl['Resources']['WebServer']['Properties']['image'] = 'CentOS 5.2' resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(server.create)) self.assertIn('No image unique match found for CentOS 5.2.', six.text_type(error)) def test_server_create_unexpected_status(self): # NOTE(pshchelo) checking is done only on check_create_complete # level so not to mock out all delete/retry logic that kicks in # on resource create failure return_server = self.fc.servers.list()[1] server = self._create_test_server(return_server, 'cr_unexp_sts') return_server.status = 'BOGUS' self.patchobject(self.fc.servers, 'get', return_value=return_server) e = self.assertRaises(exception.ResourceUnknownStatus, server.check_create_complete, server.resource_id) self.assertEqual('Server is not active - Unknown status BOGUS due to ' '"Unknown"', six.text_type(e)) def test_server_create_error_status(self): # NOTE(pshchelo) checking is done only on check_create_complete # level so not to mock out all delete/retry logic that kicks in # on resource create failure return_server = self.fc.servers.list()[1] server = self._create_test_server(return_server, 'cr_err_sts') return_server.status = 'ERROR' return_server.fault = { 'message': 'NoValidHost', 'code': 500, 'created': '2013-08-14T03:12:10Z' } self.patchobject(self.fc.servers, 'get', return_value=return_server) e = self.assertRaises(exception.ResourceInError, server.check_create_complete, server.resource_id) self.assertEqual( 'Went to status ERROR due to "Message: NoValidHost, Code: 500"', six.text_type(e)) def test_server_create_raw_userdata(self): self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) return_server = self.fc.servers.list()[1] stack_name = 'raw_userdata_s' (tmpl, stack) = self._setup_test_stack(stack_name) tmpl['Resources']['WebServer']['Properties'][ 'user_data_format'] = 'RAW' resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) self.patchobject(server, 'store_external_ports') mock_create = self.patchobject(self.fc.servers, 'create', return_value=return_server) scheduler.TaskRunner(server.create)() args, kwargs = mock_create.call_args self.assertEqual('wordpress', kwargs['userdata']) self.assertEqual({}, kwargs['meta']) def test_server_create_raw_config_userdata(self): self.patchobject(nova.NovaClientPlugin, 
'_create', return_value=self.fc) return_server = self.fc.servers.list()[1] stack_name = 'raw_userdata_s' (tmpl, stack) = self._setup_test_stack(stack_name) tmpl['Resources']['WebServer']['Properties'][ 'user_data_format'] = 'RAW' tmpl['Resources']['WebServer']['Properties'][ 'user_data'] = '8c813873-f6ee-4809-8eec-959ef39acb55' resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) self.patchobject(server, 'store_external_ports') self.rpc_client = mock.MagicMock() server._rpc_client = self.rpc_client sc = {'config': 'wordpress from config'} self.rpc_client.show_software_config.return_value = sc mock_create = self.patchobject(self.fc.servers, 'create', return_value=return_server) scheduler.TaskRunner(server.create)() args, kwargs = mock_create.call_args self.assertEqual('wordpress from config', kwargs['userdata']) self.assertEqual({}, kwargs['meta']) def test_server_create_raw_config_userdata_None(self): self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) return_server = self.fc.servers.list()[1] stack_name = 'raw_userdata_s' (tmpl, stack) = self._setup_test_stack(stack_name) sc_id = '8c813873-f6ee-4809-8eec-959ef39acb55' tmpl['Resources']['WebServer']['Properties'][ 'user_data_format'] = 'RAW' tmpl['Resources']['WebServer']['Properties']['user_data'] = sc_id resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) self.patchobject(server, 'store_external_ports') self.rpc_client = mock.MagicMock() server._rpc_client = self.rpc_client @contextlib.contextmanager def exc_filter(*args): try: yield except exception.NotFound: pass self.rpc_client.ignore_error_by_name.side_effect = exc_filter self.rpc_client.show_software_config.side_effect = exception.NotFound mock_create = self.patchobject(self.fc.servers, 'create', return_value=return_server) scheduler.TaskRunner(server.create)() args, kwargs = mock_create.call_args self.assertEqual(sc_id, kwargs['userdata']) self.assertEqual({}, kwargs['meta']) def _server_create_software_config(self, md=None, stack_name='software_config_s', ret_tmpl=False): self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) return_server = self.fc.servers.list()[1] (tmpl, stack) = self._setup_test_stack(stack_name) self.stack = stack self.server_props = tmpl['Resources']['WebServer']['Properties'] self.server_props['user_data_format'] = 'SOFTWARE_CONFIG' if md is not None: tmpl['Resources']['WebServer']['Metadata'] = md stack.stack_user_project_id = '8888' resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) self.patchobject(server, 'store_external_ports') self.patchobject(server, 'heat') self.patchobject(self.fc.servers, 'create', return_value=return_server) scheduler.TaskRunner(server.create)() self.assertEqual('4567', server.access_key) self.assertEqual('8901', server.secret_key) self.assertEqual('1234', server._get_user_id()) self.assertEqual('POLL_SERVER_CFN', server.properties.get('software_config_transport')) self.assertTrue(stack.access_allowed('4567', 'WebServer')) self.assertFalse(stack.access_allowed('45678', 'WebServer')) self.assertFalse(stack.access_allowed('4567', 'wWebServer')) if ret_tmpl: return server, tmpl else: return server @mock.patch.object(heat_plugin.HeatClientPlugin, 'url_for') def test_server_create_software_config(self, fake_url): fake_url.return_value = 'http://ip:8000/v1' server = 
self._server_create_software_config() self.assertEqual({ 'os-collect-config': { 'cfn': { 'access_key_id': '4567', 'metadata_url': 'http://ip:8000/v1/', 'path': 'WebServer.Metadata', 'secret_access_key': '8901', 'stack_name': 'software_config_s' }, 'collectors': ['ec2', 'cfn', 'local'] }, 'deployments': [] }, server.metadata_get()) @mock.patch.object(heat_plugin.HeatClientPlugin, 'url_for') def test_resolve_attribute_os_collect_config(self, fake_url): fake_url.return_value = 'http://ip/heat-api-cfn/v1' server = self._server_create_software_config() self.assertEqual({ 'cfn': { 'access_key_id': '4567', 'metadata_url': 'http://ip/heat-api-cfn/v1/', 'path': 'WebServer.Metadata', 'secret_access_key': '8901', 'stack_name': 'software_config_s' }, 'collectors': ['ec2', 'cfn', 'local'] }, server.FnGetAtt('os_collect_config')) @mock.patch.object(heat_plugin.HeatClientPlugin, 'url_for') def test_server_create_software_config_metadata(self, fake_url): md = {'os-collect-config': {'polling_interval': 10}} fake_url.return_value = 'http://ip/heat-api-cfn/v1' server = self._server_create_software_config(md=md) self.assertEqual({ 'os-collect-config': { 'cfn': { 'access_key_id': '4567', 'metadata_url': 'http://ip/heat-api-cfn/v1/', 'path': 'WebServer.Metadata', 'secret_access_key': '8901', 'stack_name': 'software_config_s' }, 'collectors': ['ec2', 'cfn', 'local'], 'polling_interval': 10 }, 'deployments': [] }, server.metadata_get()) def _server_create_software_config_poll_heat(self, md=None): self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) return_server = self.fc.servers.list()[1] stack_name = 'software_config_s' (tmpl, stack) = self._setup_test_stack(stack_name) props = tmpl.t['Resources']['WebServer']['Properties'] props['user_data_format'] = 'SOFTWARE_CONFIG' props['software_config_transport'] = 'POLL_SERVER_HEAT' if md is not None: tmpl.t['Resources']['WebServer']['Metadata'] = md self.server_props = props resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) self.patchobject(server, 'store_external_ports') self.patchobject(self.fc.servers, 'create', return_value=return_server) scheduler.TaskRunner(server.create)() self.assertEqual('1234', server._get_user_id()) self.assertTrue(stack.access_allowed('1234', 'WebServer')) self.assertFalse(stack.access_allowed('45678', 'WebServer')) self.assertFalse(stack.access_allowed('4567', 'wWebServer')) return stack, server def test_server_create_software_config_poll_heat(self): stack, server = self._server_create_software_config_poll_heat() self.assertEqual({ 'os-collect-config': { 'heat': { 'auth_url': 'http://server.test:5000/v2.0', 'password': server.password, 'project_id': '8888', 'region_name': 'RegionOne', 'resource_name': 'WebServer', 'stack_id': 'software_config_s/%s' % stack.id, 'user_id': '1234' }, 'collectors': ['ec2', 'heat', 'local'] }, 'deployments': [] }, server.metadata_get()) def test_server_create_software_config_poll_heat_metadata(self): md = {'os-collect-config': {'polling_interval': 10}} stack, server = self._server_create_software_config_poll_heat(md=md) self.assertEqual({ 'os-collect-config': { 'heat': { 'auth_url': 'http://server.test:5000/v2.0', 'password': server.password, 'project_id': '8888', 'region_name': 'RegionOne', 'resource_name': 'WebServer', 'stack_id': 'software_config_s/%s' % stack.id, 'user_id': '1234' }, 'collectors': ['ec2', 'heat', 'local'], 'polling_interval': 10 }, 'deployments': [] }, server.metadata_get()) def 
_server_create_software_config_poll_temp_url(self, md=None): self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) return_server = self.fc.servers.list()[1] stack_name = 'software_config_s' (tmpl, stack) = self._setup_test_stack(stack_name) props = tmpl.t['Resources']['WebServer']['Properties'] props['user_data_format'] = 'SOFTWARE_CONFIG' props['software_config_transport'] = 'POLL_TEMP_URL' if md is not None: tmpl.t['Resources']['WebServer']['Metadata'] = md self.server_props = props resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) self.patchobject(server, 'store_external_ports') sc = mock.Mock() sc.head_account.return_value = { 'x-account-meta-temp-url-key': 'secrit' } sc.url = 'http://192.0.2.2' self.patchobject(swift.SwiftClientPlugin, '_create', return_value=sc) self.patchobject(self.fc.servers, 'create', return_value=return_server) scheduler.TaskRunner(server.create)() metadata_put_url = server.data().get('metadata_put_url') md = server.metadata_get() metadata_url = md['os-collect-config']['request']['metadata_url'] self.assertNotEqual(metadata_url, metadata_put_url) container_name = server.physical_resource_name() object_name = server.data().get('metadata_object_name') self.assertTrue(uuidutils.is_uuid_like(object_name)) test_path = '/v1/AUTH_test_tenant_id/%s/%s' % ( server.physical_resource_name(), object_name) self.assertEqual(test_path, urlparse.urlparse(metadata_put_url).path) self.assertEqual(test_path, urlparse.urlparse(metadata_url).path) sc.put_object.assert_called_once_with( container_name, object_name, jsonutils.dumps(md)) sc.head_container.return_value = {'x-container-object-count': '0'} server._delete_temp_url() sc.delete_object.assert_called_once_with(container_name, object_name) sc.head_container.assert_called_once_with(container_name) sc.delete_container.assert_called_once_with(container_name) return metadata_url, server def test_server_create_software_config_poll_temp_url(self): metadata_url, server = ( self._server_create_software_config_poll_temp_url()) self.assertEqual({ 'os-collect-config': { 'request': { 'metadata_url': metadata_url }, 'collectors': ['ec2', 'request', 'local'] }, 'deployments': [] }, server.metadata_get()) def test_server_create_software_config_poll_temp_url_metadata(self): md = {'os-collect-config': {'polling_interval': 10}} metadata_url, server = ( self._server_create_software_config_poll_temp_url(md=md)) self.assertEqual({ 'os-collect-config': { 'request': { 'metadata_url': metadata_url }, 'collectors': ['ec2', 'request', 'local'], 'polling_interval': 10 }, 'deployments': [] }, server.metadata_get()) def _prepare_for_server_create(self, md=None): self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) return_server = self.fc.servers.list()[1] stack_name = 'software_config_s' (tmpl, stack) = self._setup_test_stack(stack_name) props = tmpl.t['Resources']['WebServer']['Properties'] props['user_data_format'] = 'SOFTWARE_CONFIG' props['software_config_transport'] = 'ZAQAR_MESSAGE' if md is not None: tmpl.t['Resources']['WebServer']['Metadata'] = md self.server_props = props resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) self.patchobject(server, 'store_external_ports') self.patchobject(self.fc.servers, 'create', return_value=return_server) return server, stack def _server_create_software_config_zaqar(self, md=None): server, stack = 
self._prepare_for_server_create(md) zcc = self.patchobject(zaqar.ZaqarClientPlugin, 'create_for_tenant') zc = mock.Mock() zcc.return_value = zc queue = mock.Mock() zc.queue.return_value = queue scheduler.TaskRunner(server.create)() metadata_queue_id = server.data().get('metadata_queue_id') md = server.metadata_get() queue_id = md['os-collect-config']['zaqar']['queue_id'] self.assertEqual(queue_id, metadata_queue_id) zc.queue.assert_called_once_with(queue_id) queue.post.assert_called_once_with( {'body': server.metadata_get(), 'ttl': 3600}) zc.queue.reset_mock() server._delete_queue() zc.queue.assert_called_once_with(queue_id) zc.queue(queue_id).delete.assert_called_once_with() return queue_id, server def test_server_create_software_config_zaqar(self): queue_id, server = self._server_create_software_config_zaqar() self.assertEqual({ 'os-collect-config': { 'zaqar': { 'user_id': '1234', 'password': server.password, 'auth_url': 'http://server.test:5000/v2.0', 'project_id': '8888', 'queue_id': queue_id, 'region_name': 'RegionOne', }, 'collectors': ['ec2', 'zaqar', 'local'] }, 'deployments': [] }, server.metadata_get()) def test_create_delete_no_zaqar_service(self): zcc = self.patchobject(zaqar.ZaqarClientPlugin, 'create_for_tenant') zcc.side_effect = ks_exceptions.EndpointNotFound server, stack = self._prepare_for_server_create() creator = scheduler.TaskRunner(server.create) self.assertRaises(exception.ResourceFailure, creator) self.assertEqual((server.CREATE, server.FAILED), server.state) self.assertEqual({ 'os-collect-config': { 'zaqar': { 'user_id': '1234', 'password': server.password, 'auth_url': 'http://server.test:5000/v2.0', 'project_id': '8888', 'queue_id': mock.ANY, 'region_name': 'RegionOne', }, 'collectors': ['ec2', 'zaqar', 'local'] }, 'deployments': [] }, server.metadata_get()) scheduler.TaskRunner(server.delete)() self.assertEqual((server.DELETE, server.COMPLETE), server.state) def test_server_create_software_config_zaqar_metadata(self): md = {'os-collect-config': {'polling_interval': 10}} queue_id, server = self._server_create_software_config_zaqar(md=md) self.assertEqual({ 'os-collect-config': { 'zaqar': { 'user_id': '1234', 'password': server.password, 'auth_url': 'http://server.test:5000/v2.0', 'project_id': '8888', 'queue_id': queue_id, 'region_name': 'RegionOne', }, 'collectors': ['ec2', 'zaqar', 'local'], 'polling_interval': 10 }, 'deployments': [] }, server.metadata_get()) def test_server_create_default_admin_pass(self): return_server = self.fc.servers.list()[1] return_server.adminPass = 'autogenerated' stack_name = 'admin_pass_s' (tmpl, stack) = self._setup_test_stack(stack_name) resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) self.patchobject(server, 'store_external_ports') self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) mock_create = self.patchobject(self.fc.servers, 'create', return_value=return_server) scheduler.TaskRunner(server.create)() _, kwargs = mock_create.call_args self.assertIsNone(kwargs['admin_pass']) self.assertEqual({}, kwargs['meta']) def test_server_create_custom_admin_pass(self): return_server = self.fc.servers.list()[1] return_server.adminPass = 'foo' stack_name = 'admin_pass_s' (tmpl, stack) = self._setup_test_stack(stack_name) tmpl.t['Resources']['WebServer']['Properties']['admin_pass'] = 'foo' resource_defns = tmpl.resource_definitions(stack) server = servers.Server('WebServer', resource_defns['WebServer'], stack) self.patchobject(server, 
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        mock_create = self.patchobject(self.fc.servers, 'create',
                                       return_value=return_server)
        scheduler.TaskRunner(server.create)()
        _, kwargs = mock_create.call_args
        self.assertEqual('foo', kwargs['admin_pass'])
        self.assertEqual({}, kwargs['meta'])

    def test_server_create_with_stack_scheduler_hints(self):
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        return_server = self.fc.servers.list()[1]
        return_server.id = '5678'
        sh.cfg.CONF.set_override('stack_scheduler_hints', True)
        # Unroll _create_test_server, to enable check
        # for addition of heat ids (stack id, resource name)
        stack_name = 'test_server_w_stack_sched_hints_s'
        server_name = 'server_w_stack_sched_hints'
        (t, stack) = self._get_test_template(stack_name, server_name)
        self.patchobject(stack, 'path_in_stack',
                         return_value=[('parent', stack.name)])
        resource_defns = t.resource_definitions(stack)
        server = servers.Server(server_name,
                                resource_defns['WebServer'], stack)
        self.patchobject(server, 'store_external_ports')
        # server.uuid is only available once the resource has been added.
        stack.add_resource(server)
        self.assertIsNotNone(server.uuid)
        mock_create = self.patchobject(self.fc.servers, 'create',
                                       return_value=return_server)
        shm = sh.SchedulerHintsMixin
        scheduler_hints = {shm.HEAT_ROOT_STACK_ID: stack.root_stack_id(),
                           shm.HEAT_STACK_ID: stack.id,
                           shm.HEAT_STACK_NAME: stack.name,
                           shm.HEAT_PATH_IN_STACK: [','.join(
                               ['parent', stack.name])],
                           shm.HEAT_RESOURCE_NAME: server.name,
                           shm.HEAT_RESOURCE_UUID: server.uuid}
        scheduler.TaskRunner(server.create)()
        _, kwargs = mock_create.call_args
        self.assertEqual(scheduler_hints, kwargs['scheduler_hints'])
        self.assertEqual({}, kwargs['meta'])
        # this makes sure the auto increment worked on server creation
        self.assertGreater(server.id, 0)

    def test_check_maximum(self):
        msg = 'test_check_maximum'
        self.assertIsNone(servers.Server._check_maximum(1, 1, msg))
        self.assertIsNone(servers.Server._check_maximum(1000, -1, msg))
        error = self.assertRaises(exception.StackValidationFailed,
                                  servers.Server._check_maximum,
                                  2, 1, msg)
        self.assertEqual(msg, six.text_type(error))

    def test_server_validate(self):
        stack_name = 'srv_val'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.assertIsNone(server.validate())

    def test_server_validate_with_bootable_vol(self):
        stack_name = 'srv_val_bootvol'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        self.stub_VolumeConstraint_validate()
        # create a server with bootable volume
        web_server = tmpl.t['Resources']['WebServer']
        del web_server['Properties']['image']

        def create_server(device_name):
            web_server['Properties']['block_device_mapping'] = [{
                "device_name": device_name,
                "volume_id": "5d7e27da-6703-4f7e-9f94-1f67abef734c",
                "delete_on_termination": False
            }]
            resource_defns = tmpl.resource_definitions(stack)
            server = servers.Server('server_with_bootable_volume',
                                    resource_defns['WebServer'], stack)
            return server

        server = create_server(u'vda')
        self.assertIsNone(server.validate())
        server = create_server('vda')
        self.assertIsNone(server.validate())
        server = create_server('vdb')
        ex = self.assertRaises(exception.StackValidationFailed,
                               server.validate)
        self.assertEqual('Neither image nor bootable volume is specified for '
                         'instance server_with_bootable_volume',
                         six.text_type(ex))
        web_server['Properties']['image'] = ''
        server = create_server('vdb')
        self.assertIsNone(server.validate())

    def test_server_validate_with_nova_keypair_resource(self):
        stack_name = 'srv_val_test'
        nova_keypair_template = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "WordPress",
  "Resources" : {
    "WebServer": {
      "Type": "OS::Nova::Server",
      "Properties": {
        "image" : "F17-x86_64-gold",
        "flavor" : "m1.large",
        "key_name" : { "Ref": "SSHKey" },
        "user_data" : "wordpress"
      }
    },
    "SSHKey": {
      "Type": "OS::Nova::KeyPair",
      "Properties": {
        "name": "my_key"
      }
    }
  }
}
'''
        self.patchobject(nova.NovaClientPlugin, 'has_extension',
                         return_value=True)
        t = template_format.parse(nova_keypair_template)
        templ = template.Template(t)
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        stack = parser.Stack(utils.dummy_context(), stack_name, templ,
                             stack_id=uuidutils.generate_uuid())
        resource_defns = templ.resource_definitions(stack)
        server = servers.Server('server_validate_test',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.assertIsNone(server.validate())

    def test_server_validate_with_invalid_ssh_key(self):
        stack_name = 'srv_val_test'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        web_server = tmpl['Resources']['WebServer']
        # Make the ssh key have an invalid name
        web_server['Properties']['key_name'] = 'test2'
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('WebServer',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        error = self.assertRaises(exception.StackValidationFailed,
                                  server.validate)
        self.assertEqual(
            "Property error: Resources.WebServer.Properties.key_name: "
            "Error validating value 'test2': The Key (test2) could not "
            "be found.", six.text_type(error))

    def test_server_validate_software_config_invalid_meta(self):
        stack_name = 'srv_val_test'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        web_server = tmpl['Resources']['WebServer']
        web_server['Properties']['user_data_format'] = 'SOFTWARE_CONFIG'
        web_server['Metadata'] = {'deployments': 'notallowed'}
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('WebServer',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        error = self.assertRaises(exception.StackValidationFailed,
                                  server.validate)
        self.assertEqual(
            "deployments key not allowed in resource metadata "
            "with user_data_format of SOFTWARE_CONFIG",
            six.text_type(error))

    def test_server_validate_with_networks(self):
        stack_name = 'srv_net'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        self.stub_KeypairConstraint_validate()
        network_name = 'public'
        # create a server with 'uuid' and 'network' properties
        tmpl['Resources']['WebServer']['Properties']['networks'] = (
            [{'uuid': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
              'network': network_name}])
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_validate_with_networks',
                                resource_defns['WebServer'], stack)
        self.stub_NetworkConstraint_validate()
        ex = self.assertRaises(exception.StackValidationFailed,
                               server.validate)
        self.assertIn("Cannot define the following properties at "
                      "the same time: networks.network, networks.uuid",
                      six.text_type(ex))

    def test_server_validate_with_network_empty_ref(self):
        stack_name = 'srv_net'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        tmpl['Resources']['WebServer']['Properties']['networks'] = (
            [{'network': ''}])
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_validate_with_networks',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id')
        self.assertIsNone(server.validate())

    def test_server_validate_with_only_fixed_ip(self):
        stack_name = 'srv_net'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        # create a server with only a 'fixed_ip' in its network
        tmpl['Resources']['WebServer']['Properties']['networks'] = (
            [{'fixed_ip': '10.0.0.99'}])
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_validate_with_networks',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id')
        ex = self.assertRaises(exception.StackValidationFailed,
                               server.validate)
        self.assertIn(_('One of the properties "network", "port", '
                        '"allocate_network" or "subnet" should be set '
                        'for the specified network of '
                        'server "%s".') % server.name,
                      six.text_type(ex))

    def test_server_validate_with_network_floating_ip(self):
        stack_name = 'srv_net_floating_ip'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        # create a server with 'floating_ip' and 'network' properties
        tmpl['Resources']['WebServer']['Properties']['networks'] = (
            [{'floating_ip': '172.24.4.14',
              'network': '6b1688bb-18a0-4754-ab05-19daaedc5871'}])
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_validate_net_floating_ip',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id')
        ex = self.assertRaises(exception.StackValidationFailed,
                               server.validate)
        self.assertIn(_('Property "floating_ip" is not supported if '
                        'only "network" is specified, because the '
                        'corresponding port can not be retrieved.'),
                      six.text_type(ex))

    def test_server_validate_with_networks_str_net(self):
        stack_name = 'srv_networks_str_nets'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        # create a server with 'network' and 'allocate_network' properties
        tmpl['Resources']['WebServer']['Properties']['networks'] = (
            [{'network': '6b1688bb-18a0-4754-ab05-19daaedc5871',
              'allocate_network': 'auto'}])
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_validate_net_list_str',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id')
        ex = self.assertRaises(exception.StackValidationFailed,
                               server.validate)
        self.assertIn(_('Can not specify "allocate_network" with '
                        'other keys of networks at the same time.'),
                      six.text_type(ex))

    def test_server_validate_port_fixed_ip(self):
        stack_name = 'port_with_fixed_ip'
        (tmpl, stack) = self._setup_test_stack(stack_name,
                                               test_templ=with_port_template)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('validate_port_reference_fixed_ip',
                                resource_defns['server'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        error = self.assertRaises(exception.ResourcePropertyConflict,
                                  server.validate)
        self.assertEqual("Cannot define the following properties at the same "
                         "time: networks/fixed_ip, networks/port.",
                         six.text_type(error))
        # test when the 'port' doesn't reference a non-created resource
        tmpl['Resources']['server']['Properties']['networks'] = (
            [{'port': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
              'fixed_ip': '10.0.0.99'}])
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('with_port_fixed_ip',
                                resource_defns['server'], stack)
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id')
        error = self.assertRaises(exception.ResourcePropertyConflict,
                                  server.validate)
        self.assertEqual("Cannot define the following properties at the same "
                         "time: networks/fixed_ip, networks/port.",
                         six.text_type(error))

    def test_server_validate_with_uuid_fixed_ip(self):
        stack_name = 'srv_net'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        tmpl['Resources']['WebServer']['Properties']['networks'] = (
            [{'uuid': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
              'fixed_ip': '10.0.0.99'}])
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_validate_with_networks',
                                resource_defns['WebServer'], stack)
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id')
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.assertIsNone(server.validate())

    def test_server_validate_with_network_fixed_ip(self):
        stack_name = 'srv_net'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        tmpl['Resources']['WebServer']['Properties']['networks'] = (
            [{'network': 'public',
              'fixed_ip': '10.0.0.99'}])
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_validate_with_networks',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id')
        self.assertIsNone(server.validate())

    def test_server_validate_net_security_groups(self):
        # Test that security_groups are not allowed if network 'ports'
        # are assigned, because they would be ignored
        stack_name = 'srv_net_secgroups'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        tmpl['Resources']['WebServer']['Properties']['networks'] = [
            {'port': ''}]
        tmpl['Resources']['WebServer']['Properties'][
            'security_groups'] = ['my_security_group']
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_validate_net_security_groups',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id')
        error = self.assertRaises(exception.ResourcePropertyConflict,
                                  server.validate)
        self.assertEqual("Cannot define the following properties at the same "
                         "time: security_groups, networks/port.",
                         six.text_type(error))

    def test_server_delete(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'create_delete')
        server.resource_id = '1234'
        # this makes sure the auto increment worked on server creation
        self.assertGreater(server.id, 0)
        side_effect = [server, fakes_nova.fake_exception()]
        self.patchobject(self.fc.servers, 'get', side_effect=side_effect)
        scheduler.TaskRunner(server.delete)()
        self.assertEqual((server.DELETE, server.COMPLETE), server.state)

    def test_server_delete_notfound(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'create_delete2')
        server.resource_id = '1234'
        # this makes sure the auto increment worked on server creation
        self.assertGreater(server.id, 0)
        self.patchobject(self.fc.client, 'delete_servers_1234',
                         side_effect=fakes_nova.fake_exception())
        scheduler.TaskRunner(server.delete)()
        self.assertEqual((server.DELETE, server.COMPLETE), server.state)

    def test_server_delete_error(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'create_delete')
        server.resource_id = '1234'
        # this makes sure the auto increment worked on server creation
        self.assertGreater(server.id, 0)

        def make_error(*args):
            return_server.status = "ERROR"
            return return_server

        self.patchobject(self.fc.servers, 'get',
                         side_effect=[return_server, return_server,
                                      make_error()])
        resf = self.assertRaises(exception.ResourceFailure,
                                 scheduler.TaskRunner(server.delete))
        self.assertIn("Server %s delete failed" % return_server.name,
                      six.text_type(resf))

    def test_server_delete_error_task_in_progress(self):
        # test server in 'ERROR', but task state in nova is 'deleting'
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'create_delete')
        server.resource_id = '1234'

        def make_error(*args):
            return_server.status = "ERROR"
            setattr(return_server, 'OS-EXT-STS:task_state', 'deleting')
            return return_server

        def make_error_done(*args):
            return_server.status = "ERROR"
            setattr(return_server, 'OS-EXT-STS:task_state', None)
            return return_server

        self.patchobject(self.fc.servers, 'get',
                         side_effect=[make_error(), make_error_done()])
        resf = self.assertRaises(exception.ResourceFailure,
                                 scheduler.TaskRunner(server.delete))
        self.assertIn("Server %s delete failed" % return_server.name,
                      six.text_type(resf))
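
    # NOTE: 'OS-EXT-STS:task_state' above is the Nova extended-status
    # attribute; because its name contains colons it can only be set and
    # read via setattr()/getattr(). A hedged sketch of the kind of check
    # the test relies on (the real logic lives in the resource's delete
    # path, not here):
    #
    #   task_state = getattr(nova_server, 'OS-EXT-STS:task_state', None)
    #   if nova_server.status == 'ERROR' and task_state != 'deleting':
    #       # only then is the ERROR status treated as a delete failure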

    def test_server_soft_delete(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'create_delete')
        server.resource_id = '1234'
        # this makes sure the auto increment worked on server creation
        self.assertGreater(server.id, 0)

        def make_soft_delete(*args):
            return_server.status = "SOFT_DELETED"
            return return_server

        self.patchobject(self.fc.servers, 'get',
                         side_effect=[return_server, return_server,
                                      make_soft_delete()])
        scheduler.TaskRunner(server.delete)()
        self.assertEqual((server.DELETE, server.COMPLETE), server.state)

    def test_server_update_metadata(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'md_update')
        ud_tmpl = self._get_test_template('update_stack')[0]
        ud_tmpl.t['Resources']['WebServer']['Metadata'] = {'test': 123}
        resource_defns = ud_tmpl.resource_definitions(server.stack)
        scheduler.TaskRunner(server.update, resource_defns['WebServer'])()
        self.assertEqual({'test': 123}, server.metadata_get())
        ud_tmpl.t['Resources']['WebServer']['Metadata'] = {'test': 456}
        server.t = ud_tmpl.resource_definitions(server.stack)['WebServer']
        self.assertEqual({'test': 123}, server.metadata_get())
        server.metadata_update()
        self.assertEqual({'test': 456}, server.metadata_get())

    @mock.patch.object(heat_plugin.HeatClientPlugin, 'url_for')
    def test_server_update_metadata_software_config(self, fake_url):
        fake_url.return_value = 'http://ip:8000/v1'
        server, ud_tmpl = self._server_create_software_config(
            stack_name='update_meta_sc', ret_tmpl=True)
        expected_md = {
            'os-collect-config': {
                'cfn': {
                    'access_key_id': '4567',
                    'metadata_url': 'http://ip:8000/v1/',
                    'path': 'WebServer.Metadata',
                    'secret_access_key': '8901',
                    'stack_name': 'update_meta_sc'
                },
                'collectors': ['ec2', 'cfn', 'local']
            },
            'deployments': []}
        self.assertEqual(expected_md, server.metadata_get())
        ud_tmpl.t['Resources']['WebServer']['Metadata'] = {'test': 123}
        resource_defns = ud_tmpl.resource_definitions(server.stack)
        scheduler.TaskRunner(server.update, resource_defns['WebServer'])()
        expected_md.update({'test': 123})
        self.assertEqual(expected_md, server.metadata_get())
        server.metadata_update()
        self.assertEqual(expected_md, server.metadata_get())

    @mock.patch.object(heat_plugin.HeatClientPlugin, 'url_for')
    def test_server_update_metadata_software_config_merge(self, fake_url):
        md = {'os-collect-config': {'polling_interval': 10}}
        fake_url.return_value = 'http://ip/heat-api-cfn/v1'
        server, ud_tmpl = self._server_create_software_config(
            stack_name='update_meta_sc', ret_tmpl=True, md=md)
        expected_md = {
            'os-collect-config': {
                'cfn': {
                    'access_key_id': '4567',
                    'metadata_url': 'http://ip/heat-api-cfn/v1/',
                    'path': 'WebServer.Metadata',
                    'secret_access_key': '8901',
                    'stack_name': 'update_meta_sc'
                },
                'collectors': ['ec2', 'cfn', 'local'],
                'polling_interval': 10
            },
            'deployments': []}
        self.assertEqual(expected_md, server.metadata_get())
        ud_tmpl.t['Resources']['WebServer']['Metadata'] = {'test': 123}
        resource_defns = ud_tmpl.resource_definitions(server.stack)
        scheduler.TaskRunner(server.update, resource_defns['WebServer'])()
        expected_md.update({'test': 123})
        self.assertEqual(expected_md, server.metadata_get())
        server.metadata_update()
        self.assertEqual(expected_md, server.metadata_get())

    @mock.patch.object(heat_plugin.HeatClientPlugin, 'url_for')
    def test_server_update_software_config_transport(self, fake_url):
        md = {'os-collect-config': {'polling_interval': 10}}
        fake_url.return_value = 'http://ip/heat-api-cfn/v1'
        server = self._server_create_software_config(
            stack_name='update_meta_sc', md=md)
        expected_md = {
            'os-collect-config': {
                'cfn': {
                    'access_key_id': '4567',
                    'metadata_url': 'http://ip/heat-api-cfn/v1/',
                    'path': 'WebServer.Metadata',
                    'secret_access_key': '8901',
                    'stack_name': 'update_meta_sc'
                },
                'collectors': ['ec2', 'cfn', 'local'],
                'polling_interval': 10
            },
            'deployments': []}
        self.assertEqual(expected_md, server.metadata_get())
        sc = mock.Mock()
        sc.head_account.return_value = {
            'x-account-meta-temp-url-key': 'secrit'
        }
        sc.url = 'http://192.0.2.2'
        self.patchobject(swift.SwiftClientPlugin, '_create',
                         return_value=sc)
        update_props = self.server_props.copy()
        update_props['software_config_transport'] = 'POLL_TEMP_URL'
        update_template = server.t.freeze(properties=update_props)
        self.rpc_client = mock.MagicMock()
        server._rpc_client = self.rpc_client
        self.rpc_client.create_software_config.return_value = None
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        md = server.metadata_get()
        metadata_url = md['os-collect-config']['request']['metadata_url']
        self.assertTrue(metadata_url.startswith(
            'http://192.0.2.2/v1/AUTH_test_tenant_id/'))
        expected_md = {
            'os-collect-config': {
                'cfn': {
                    'access_key_id': None,
                    'metadata_url': None,
                    'path': None,
                    'secret_access_key': None,
                    'stack_name': None,
                },
                'request': {
                    'metadata_url': 'the_url',
                },
                'collectors': ['ec2', 'request', 'local'],
                'polling_interval': 10
            },
            'deployments': []}
        md['os-collect-config']['request']['metadata_url'] = 'the_url'
        self.assertEqual(expected_md, server.metadata_get())

    def test_update_transport_heat_to_zaqar(self):
        stack, server = self._server_create_software_config_poll_heat()
        password = server.password
        self.assertEqual({
            'os-collect-config': {
                'heat': {
                    'auth_url': 'http://server.test:5000/v2.0',
                    'password': password,
                    'project_id': '8888',
                    'region_name': 'RegionOne',
                    'resource_name': 'WebServer',
                    'stack_id': 'software_config_s/%s' % stack.id,
                    'user_id': '1234'
                },
                'collectors': ['ec2', 'heat', 'local'],
            },
            'deployments': []
        }, server.metadata_get())
        update_props = self.server_props.copy()
        update_props['software_config_transport'] = 'ZAQAR_MESSAGE'
        update_template = server.t.freeze(properties=update_props)
        zcc = self.patchobject(zaqar.ZaqarClientPlugin, 'create_for_tenant')
        zc = mock.Mock()
        zcc.return_value = zc
        queue = mock.Mock()
        zc.queue.return_value = queue
        self.rpc_client = mock.MagicMock()
        server._rpc_client = self.rpc_client
        self.rpc_client.create_software_config.return_value = None
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        password_1 = server.password
        self.assertEqual(password, password_1)
        self.assertEqual({
            'os-collect-config': {
                'zaqar': {
                    'user_id': '1234',
                    'password': password_1,
                    'auth_url': 'http://server.test:5000/v2.0',
                    'project_id': '8888',
                    'queue_id': server.data().get('metadata_queue_id'),
                    'region_name': 'RegionOne',
                },
                'heat': {
                    'auth_url': None,
                    'password': None,
                    'project_id': None,
                    'region_name': None,
                    'resource_name': None,
                    'stack_id': None,
                    'user_id': None
                },
                'collectors': ['ec2', 'zaqar', 'local']
            },
            'deployments': []
        }, server.metadata_get())
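
    # NOTE: hedged sketch, not the canonical implementation. The
    # meta_serialize() helper used in the nova metadata tests below has
    # to flatten values because the Nova API only accepts string
    # metadata values; roughly:
    #
    #   def meta_serialize(metadata):
    #       return dict((key, (value if isinstance(value, str)
    #                          else jsonutils.dumps(value)))
    #                   for (key, value) in metadata.items())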

    def test_server_update_nova_metadata(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'md_update')
        new_meta = {'test': 123}
        self.patchobject(self.fc.servers, 'get',
                         return_value=return_server)
        set_meta_mock = self.patchobject(self.fc.servers, 'set_meta')
        update_props = self.server_props.copy()
        update_props['metadata'] = new_meta
        update_template = server.t.freeze(properties=update_props)
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        set_meta_mock.assert_called_with(
            return_server,
            server.client_plugin().meta_serialize(new_meta))

    def test_server_update_nova_metadata_complex(self):
        """Test that complex metadata values are correctly serialized.

        Complex metadata values should be serialized to JSON when they
        are sent to Nova.
        """
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'md_update')
        self.patchobject(self.fc.servers, 'get',
                         return_value=return_server)
        new_meta = {'test': {'testkey': 'testvalue'}}
        set_meta_mock = self.patchobject(self.fc.servers, 'set_meta')
        # If we're going to call set_meta() directly we
        # need to handle the serialization ourselves.
        update_props = self.server_props.copy()
        update_props['metadata'] = new_meta
        update_template = server.t.freeze(properties=update_props)
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        set_meta_mock.assert_called_with(
            return_server,
            server.client_plugin().meta_serialize(new_meta))

    def test_server_update_nova_metadata_with_delete(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'md_update')
        # part one, add some metadata
        new_meta = {'test': '123', 'this': 'that'}
        self.patchobject(self.fc.servers, 'get',
                         return_value=return_server)
        set_meta_mock = self.patchobject(self.fc.servers, 'set_meta')
        update_props = self.server_props.copy()
        update_props['metadata'] = new_meta
        update_template = server.t.freeze(properties=update_props)
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        set_meta_mock.assert_called_with(
            return_server,
            server.client_plugin().meta_serialize(new_meta))
        # part two, change the metadata (test removing the old keys)
        new_meta = {'new_key': 'yeah'}
        # new fake with the correct metadata
        server.resource_id = '56789'
        new_return_server = self.fc.servers.list()[5]
        self.patchobject(self.fc.servers, 'get',
                         return_value=new_return_server)
        del_meta_mock = self.patchobject(self.fc.servers, 'delete_meta')
        update_props = self.server_props.copy()
        update_props['metadata'] = new_meta
        update_template = server.t.freeze(properties=update_props)
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        del_meta_mock.assert_called_with(new_return_server,
                                         ['test', 'this'])
        set_meta_mock.assert_called_with(
            new_return_server,
            server.client_plugin().meta_serialize(new_meta))

    def test_server_update_server_name(self):
        """Server.handle_update supports changing the name."""
        return_server = self.fc.servers.list()[1]
        return_server.id = '5678'
        server = self._create_test_server(return_server, 'srv_update')
        new_name = 'new_name'
        update_props = self.server_props.copy()
        update_props['name'] = new_name
        update_template = server.t.freeze(properties=update_props)
        self.patchobject(self.fc.servers, 'get',
                         return_value=return_server)
        mock_update = self.patchobject(return_server, 'update')
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        mock_update.assert_called_once_with(new_name)

    def test_server_update_server_admin_password(self):
        """Server.handle_update supports changing the admin password."""
        return_server = self.fc.servers.list()[1]
        return_server.id = '5678'
        server = self._create_test_server(return_server, 'change_password')
        new_password = 'new_password'
        update_props = self.server_props.copy()
        update_props['admin_pass'] = new_password
        update_template = server.t.freeze(properties=update_props)
        self.patchobject(self.fc.servers, 'get',
                         return_value=return_server)
        self.patchobject(return_server, 'change_password')
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        return_server.change_password.assert_called_once_with(new_password)
        self.assertEqual(1, return_server.change_password.call_count)

    def test_server_get_live_state(self):
        return_server = self.fc.servers.list()[1]
        return_server.id = '5678'
        server = self._create_test_server(return_server,
                                          'get_live_state_stack')
        server.properties.data['networks'] = [{'network': 'public_id',
                                               'fixed_ip': '5.6.9.8'}]
        iface0 = create_fake_iface(port='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                                   net='public', ip='5.6.9.8',
                                   mac='fa:16:3e:8c:33:aa')
        iface1 = create_fake_iface(port='bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb',
                                   net='public', ip='4.5.6.7',
                                   mac='fa:16:3e:8c:22:aa')
        iface2 = create_fake_iface(port='cccccccc-cccc-cccc-cccc-cccccccccccc',
                                   net='private', ip='10.13.12.13',
                                   mac='fa:16:3e:8c:44:cc')
        self.patchobject(return_server, 'interface_list',
                         return_value=[iface0, iface1, iface2])
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id',
                         side_effect=['public_id', 'private_id'])
        reality = server.get_live_state(server.properties.data)
        expected = {'flavor': '1',
                    'image': '2',
                    'name': 'sample-server2',
                    'networks': [
                        {'fixed_ip': '4.5.6.7',
                         'network': 'public',
                         'port': 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb'},
                        {'fixed_ip': '5.6.9.8',
                         'network': 'public',
                         'port': None},
                        {'fixed_ip': '10.13.12.13',
                         'network': 'private',
                         'port': 'cccccccc-cccc-cccc-cccc-cccccccccccc'}],
                    'metadata': {}}
        self.assertEqual(set(expected.keys()), set(reality.keys()))
        expected_nets = expected.pop('networks')
        reality_nets = reality.pop('networks')
        for net in reality_nets:
            for exp_net in expected_nets:
                if net == exp_net:
                    for key in net:
                        self.assertEqual(exp_net[key], net[key])
                    break
        for key in six.iterkeys(reality):
            self.assertEqual(expected[key], reality[key])

    def test_server_update_server_flavor(self):
        """Tests update server changing the flavor.

        Server.handle_update supports changing the flavor, and makes
        the change by making a resize API call against Nova.
        """
        return_server = self.fc.servers.list()[1]
        return_server.id = '1234'
        server = self._create_test_server(return_server, 'srv_update')
        update_props = self.server_props.copy()
        update_props['flavor'] = 'm1.small'
        update_template = server.t.freeze(properties=update_props)

        def set_status(status):
            return_server.status = status
            return return_server

        self.patchobject(self.fc.servers, 'get',
                         side_effect=[set_status('ACTIVE'),
                                      set_status('RESIZE'),
                                      set_status('VERIFY_RESIZE'),
                                      set_status('VERIFY_RESIZE'),
                                      set_status('ACTIVE')])
        mock_post = self.patchobject(self.fc.client,
                                     'post_servers_1234_action',
                                     return_value=(202, None))
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        mock_post.assert_any_call(body={'resize': {'flavorRef': 2}})
        mock_post.assert_any_call(body={'confirmResize': None})
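
    # NOTE: for context only -- the resize flow driven above maps to two
    # Nova actions; with a plain novaclient handle it would look roughly
    # like this (hedged sketch, not code these tests execute):
    #
    #   nova_server.resize(flavor)        # POSTs {'resize': ...}
    #   # poll until status == 'VERIFY_RESIZE'
    #   nova_server.confirm_resize()      # POSTs {'confirmResize': None}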
""" return_server = self.fc.servers.list()[1] return_server.id = '1234' server = self._create_test_server(return_server, 'srv_update2') update_props = self.server_props.copy() update_props['flavor'] = 'm1.small' update_template = server.t.freeze(properties=update_props) def set_status(status): return_server.status = status return return_server self.patchobject(self.fc.servers, 'get', side_effect=[set_status('RESIZE'), set_status('ERROR')]) mock_post = self.patchobject(self.fc.client, 'post_servers_1234_action', return_value=(202, None)) updater = scheduler.TaskRunner(server.update, update_template) error = self.assertRaises(exception.ResourceFailure, updater) self.assertEqual( "Error: resources.srv_update2: Resizing to '2' failed, " "status 'ERROR'", six.text_type(error)) self.assertEqual((server.UPDATE, server.FAILED), server.state) mock_post.called_once_with(body={'resize': {'flavorRef': 2}}) def test_server_update_flavor_resize_has_not_started(self): """Test update of server flavor if server resize has not started. Server resize is asynchronous operation in nova. So when heat is requesting resize and polling the server then the server may still be in ACTIVE state. So we need to wait some amount of time till the server status becomes RESIZE. """ # create the server for resizing server = self.fc.servers.list()[1] server.id = '1234' server_resource = self._create_test_server(server, 'resize_server') # prepare template with resized server update_props = self.server_props.copy() update_props['flavor'] = 'm1.small' update_template = server_resource.t.freeze(properties=update_props) # define status transition when server resize # ACTIVE(initial) -> ACTIVE -> RESIZE -> VERIFY_RESIZE def set_status(status): server.status = status return server self.patchobject(self.fc.servers, 'get', side_effect=[set_status('ACTIVE'), set_status('ACTIVE'), set_status('RESIZE'), set_status('VERIFY_RESIZE'), set_status('VERIFY_RESIZE'), set_status('ACTIVE')]) mock_post = self.patchobject(self.fc.client, 'post_servers_1234_action', return_value=(202, None)) # check that server resize has finished correctly scheduler.TaskRunner(server_resource.update, update_template)() self.assertEqual((server_resource.UPDATE, server_resource.COMPLETE), server_resource.state) mock_post.called_once_with(body={'resize': {'flavorRef': 2}}) mock_post.called_once_with(body={'confirmResize': None}) @mock.patch.object(servers.Server, 'prepare_for_replace') def test_server_update_server_flavor_replace(self, mock_replace): stack_name = 'update_flvrep' (tmpl, stack) = self._setup_test_stack(stack_name) server_props = tmpl['Resources']['WebServer']['Properties'] server_props['flavor_update_policy'] = 'REPLACE' resource_defns = tmpl.resource_definitions(stack) server = servers.Server('server_server_update_flavor_replace', resource_defns['WebServer'], stack) update_props = server_props.copy() update_props['flavor'] = 'm1.small' update_template = server.t.freeze(properties=update_props) updater = scheduler.TaskRunner(server.update, update_template) self.assertRaises(resource.UpdateReplace, updater) @mock.patch.object(servers.Server, 'prepare_for_replace') def test_server_update_server_flavor_policy_update(self, mock_replace): stack_name = 'update_flvpol' (tmpl, stack) = self._setup_test_stack(stack_name) resource_defns = tmpl.resource_definitions(stack) server = servers.Server('server_server_update_flavor_replace', resource_defns['WebServer'], stack) update_props = tmpl.t['Resources']['WebServer']['Properties'].copy() # confirm that when 
        # the update then the updated policy is followed for a flavor
        # update
        update_props['flavor_update_policy'] = 'REPLACE'
        update_props['flavor'] = 'm1.small'
        update_template = server.t.freeze(properties=update_props)
        updater = scheduler.TaskRunner(server.update, update_template)
        self.assertRaises(resource.UpdateReplace, updater)

    @mock.patch.object(servers.Server, 'prepare_for_replace')
    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_server_update_server_userdata_replace(self, mock_create,
                                                   mock_replace):
        stack_name = 'update_udatrep'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_update_userdata_replace',
                                resource_defns['WebServer'], stack)
        update_props = tmpl.t['Resources']['WebServer']['Properties'].copy()
        update_props['user_data'] = 'changed'
        update_template = server.t.freeze(properties=update_props)
        server.action = server.CREATE
        updater = scheduler.TaskRunner(server.update, update_template)
        self.assertRaises(resource.UpdateReplace, updater)

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_update_failed_server_not_replace(self, mock_create):
        stack_name = 'update_failed_server_not_replace'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('failed_not_replace',
                                resource_defns['WebServer'], stack)
        update_props = tmpl.t['Resources']['WebServer']['Properties'].copy()
        update_props['name'] = 'my_server'
        update_template = server.t.freeze(properties=update_props)
        server.action = server.CREATE
        server.status = server.FAILED
        server.resource_id = '6a953104-b874-44d2-a29a-26e7c367dc5c'
        nova_server = self.fc.servers.list()[1]
        nova_server.status = 'ACTIVE'
        server.client = mock.Mock()
        server.client().servers.get.return_value = nova_server
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)

    @mock.patch.object(servers.Server, 'prepare_for_replace')
    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_server_update_server_userdata_ignore(self, mock_create,
                                                  mock_replace):
        stack_name = 'update_udatignore'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        self.patchobject(servers.Server, 'check_update_complete',
                         return_value=True)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_update_userdata_ignore',
                                resource_defns['WebServer'], stack)
        update_props = tmpl.t['Resources']['WebServer']['Properties'].copy()
        update_props['user_data'] = 'changed'
        update_props['user_data_update_policy'] = 'IGNORE'
        update_template = server.t.freeze(properties=update_props)
        server.action = server.CREATE
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)

    @mock.patch.object(servers.Server, 'prepare_for_replace')
    def test_server_update_image_replace(self, mock_replace):
        stack_name = 'update_imgrep'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        tmpl.t['Resources']['WebServer']['Properties'][
            'image_update_policy'] = 'REPLACE'
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_update_image_replace',
                                resource_defns['WebServer'], stack)
        image_id = self.getUniqueString()
        update_props = tmpl.t['Resources']['WebServer']['Properties'].copy()
        update_props['image'] = image_id
        update_template = server.t.freeze(properties=update_props)
        updater = scheduler.TaskRunner(server.update, update_template)
        self.assertRaises(resource.UpdateReplace, updater)

    def _test_server_update_image_rebuild(self, status, policy='REBUILD',
                                          password=None):
        # Server.handle_update supports changing the image, and makes
        # the change by making a rebuild API call against Nova.
        return_server = self.fc.servers.list()[1]
        return_server.id = '1234'
        server = self._create_test_server(return_server, 'srv_updimgrbld')
        new_image = 'F17-x86_64-gold'
        # current test demonstrates updating when image_update_policy was
        # not changed, so image_update_policy will be used from
        # self.properties
        before_props = self.server_props.copy()
        before_props['image_update_policy'] = policy
        server.t = server.t.freeze(properties=before_props)
        server.reparse()
        update_props = before_props.copy()
        update_props['image'] = new_image
        if password:
            update_props['admin_pass'] = password
        update_template = server.t.freeze(properties=update_props)
        mock_rebuild = self.patchobject(self.fc.servers, 'rebuild')

        def get_sideeff(stat):
            def sideeff(*args):
                return_server.status = stat
                return return_server
            return sideeff

        for stat in status:
            self.patchobject(self.fc.servers, 'get',
                             side_effect=get_sideeff(stat))
        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        if 'REBUILD' == policy:
            mock_rebuild.assert_called_once_with(
                return_server, '2', password=password,
                preserve_ephemeral=False, meta={}, files={})
        else:
            mock_rebuild.assert_called_once_with(
                return_server, '2', password=password,
                preserve_ephemeral=True, meta={}, files={})

    def test_server_update_image_rebuild_status_rebuild(self):
        # Normally we will see 'REBUILD' first and then 'ACTIVE'.
        self._test_server_update_image_rebuild(status=('REBUILD', 'ACTIVE'))

    def test_server_update_image_rebuild_status_active(self):
        # It is possible for us to miss the REBUILD status.
        self._test_server_update_image_rebuild(status=('ACTIVE',))

    def test_server_update_image_rebuild_status_rebuild_keep_ephemeral(self):
        # Normally we will see 'REBUILD' first and then 'ACTIVE'.
        self._test_server_update_image_rebuild(
            policy='REBUILD_PRESERVE_EPHEMERAL',
            status=('REBUILD', 'ACTIVE'))

    def test_server_update_image_rebuild_status_active_keep_ephemeral(self):
        # It is possible for us to miss the REBUILD status.
        self._test_server_update_image_rebuild(
            policy='REBUILD_PRESERVE_EPHEMERAL', status=('ACTIVE',))

    def test_server_update_image_rebuild_with_new_password(self):
        # Normally we will see 'REBUILD' first and then 'ACTIVE'.
        self._test_server_update_image_rebuild(password='new_admin_password',
                                               status=('REBUILD', 'ACTIVE'))
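
    # NOTE: context for the rebuild assertions above (hedged sketch, not
    # executed here) -- against a live Nova the equivalent client call is
    #
    #   client.servers.rebuild(server, image_id,
    #                          password=admin_pass,
    #                          preserve_ephemeral=False,
    #                          meta={}, files={})
    #
    # which is the call signature mock_rebuild verifies.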

    def test_server_update_image_rebuild_failed(self):
        # If the status after a rebuild is not REBUILD or ACTIVE, it
        # means the rebuild call failed, so we raise an explicit error.
        return_server = self.fc.servers.list()[1]
        return_server.id = '1234'
        server = self._create_test_server(return_server, 'srv_updrbldfail')
        new_image = 'F17-x86_64-gold'
        # current test demonstrates updating when image_update_policy was
        # not changed, so image_update_policy will be used from
        # self.properties
        before_props = self.server_props.copy()
        before_props['image_update_policy'] = 'REBUILD'
        update_props = before_props.copy()
        update_props['image'] = new_image
        update_template = server.t.freeze(properties=update_props)
        server.t = server.t.freeze(properties=before_props)
        server.reparse()
        mock_rebuild = self.patchobject(self.fc.servers, 'rebuild')

        def set_status(status):
            return_server.status = status
            return return_server

        self.patchobject(self.fc.servers, 'get',
                         side_effect=[set_status('REBUILD'),
                                      set_status('ERROR')])
        updater = scheduler.TaskRunner(server.update, update_template)
        error = self.assertRaises(exception.ResourceFailure, updater)
        self.assertEqual(
            "Error: resources.srv_updrbldfail: "
            "Rebuilding server failed, status 'ERROR'",
            six.text_type(error))
        self.assertEqual((server.UPDATE, server.FAILED), server.state)
        mock_rebuild.assert_called_once_with(
            return_server, '2', password=None,
            preserve_ephemeral=False, meta={}, files={})

    def test_server_update_properties(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'update_prop')
        update_props = self.server_props.copy()
        update_props['image'] = 'F17-x86_64-gold'
        update_props['image_update_policy'] = 'REPLACE'
        update_template = server.t.freeze(properties=update_props)
        updater = scheduler.TaskRunner(server.update, update_template)
        self.assertRaises(resource.UpdateReplace, updater)

    def test_server_status_build(self):
        return_server = self.fc.servers.list()[0]
        server = self._setup_test_server(return_server, 'sts_build')
        server.resource_id = '1234'

        def status_active(*args):
            return_server.status = 'ACTIVE'
            return return_server

        self.patchobject(self.fc.servers, 'get',
                         return_value=status_active())
        scheduler.TaskRunner(server.create)()
        self.assertEqual((server.CREATE, server.COMPLETE), server.state)

    def test_server_status_suspend_no_resource_id(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'srv_sus1')
        server.resource_id = None
        ex = self.assertRaises(exception.ResourceFailure,
                               scheduler.TaskRunner(server.suspend))
        self.assertEqual('Error: resources.srv_sus1: '
                         'Cannot suspend srv_sus1, '
                         'resource_id not set',
                         six.text_type(ex))
        self.assertEqual((server.SUSPEND, server.FAILED), server.state)

    def test_server_status_suspend_not_found(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'srv_sus2')
        server.resource_id = '1234'
        self.patchobject(self.fc.servers, 'get',
                         side_effect=fakes_nova.fake_exception())
        ex = self.assertRaises(exception.ResourceFailure,
                               scheduler.TaskRunner(server.suspend))
        self.assertEqual('NotFound: resources.srv_sus2: '
                         'Failed to find server 1234',
                         six.text_type(ex))
        self.assertEqual((server.SUSPEND, server.FAILED), server.state)

    def _test_server_status_suspend(self, name,
                                    state=('CREATE', 'COMPLETE')):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, name)
        server.resource_id = '1234'
        server.state_set(state[0], state[1])

        def set_status(status):
            return_server.status = status
            return return_server

        self.patchobject(return_server, 'suspend')
        self.patchobject(self.fc.servers, 'get',
                         side_effect=[set_status('ACTIVE'),
                                      set_status('ACTIVE'),
                                      set_status('SUSPENDED')])
        scheduler.TaskRunner(server.suspend)()
        self.assertEqual((server.SUSPEND, server.COMPLETE), server.state)

    def test_server_suspend_in_create_complete(self):
        self._test_server_status_suspend('test_suspend_in_create_complete')

    def test_server_suspend_in_suspend_failed(self):
        self._test_server_status_suspend(
            name='test_suspend_in_suspend_failed',
            state=('SUSPEND', 'FAILED'))

    def test_server_suspend_in_suspend_complete(self):
        self._test_server_status_suspend(
            name='test_suspend_in_suspend_complete',
            state=('SUSPEND', 'COMPLETE'))

    def test_server_status_suspend_unknown_status(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'srv_susp_uk')
        server.resource_id = '1234'

        def set_status(status):
            return_server.status = status
            return return_server

        self.patchobject(return_server, 'suspend')
        self.patchobject(self.fc.servers, 'get',
                         side_effect=[set_status('ACTIVE'),
                                      set_status('ACTIVE'),
                                      set_status('TRANSMOGRIFIED')])
        ex = self.assertRaises(exception.ResourceFailure,
                               scheduler.TaskRunner(server.suspend))
        self.assertIsInstance(ex.exc, exception.ResourceUnknownStatus)
        self.assertEqual('Suspend of server %s failed - '
                         'Unknown status TRANSMOGRIFIED '
                         'due to "Unknown"' % return_server.name,
                         six.text_type(ex.exc.message))
        self.assertEqual((server.SUSPEND, server.FAILED), server.state)

    def _test_server_status_resume(self, name,
                                   state=('SUSPEND', 'COMPLETE')):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, name)
        server.resource_id = '1234'
        server.state_set(state[0], state[1])

        def set_status(status):
            return_server.status = status
            return return_server

        self.patchobject(return_server, 'resume')
        self.patchobject(self.fc.servers, 'get',
                         side_effect=[set_status('SUSPENDED'),
                                      set_status('SUSPENDED'),
                                      set_status('ACTIVE')])
        scheduler.TaskRunner(server.resume)()
        self.assertEqual((server.RESUME, server.COMPLETE), server.state)

    def test_server_resume_in_suspend_complete(self):
        self._test_server_status_resume(
            name='test_resume_in_suspend_complete')

    def test_server_resume_in_resume_failed(self):
        self._test_server_status_resume(
            name='test_resume_in_resume_failed',
            state=('RESUME', 'FAILED'))

    def test_server_resume_in_resume_complete(self):
        self._test_server_status_resume(
            name='test_resume_in_resume_complete',
            state=('RESUME', 'COMPLETE'))

    def test_server_status_resume_no_resource_id(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'srv_susp_norid')
        server.resource_id = None
        server.state_set(server.SUSPEND, server.COMPLETE)
        ex = self.assertRaises(exception.ResourceFailure,
                               scheduler.TaskRunner(server.resume))
        self.assertEqual('Error: resources.srv_susp_norid: '
                         'Cannot resume srv_susp_norid, '
                         'resource_id not set',
                         six.text_type(ex))
        self.assertEqual((server.RESUME, server.FAILED), server.state)

    def test_server_status_resume_not_found(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server, 'srv_res_nf')
        server.resource_id = '1234'
        self.patchobject(self.fc.servers, 'get',
                         side_effect=fakes_nova.fake_exception())
        server.state_set(server.SUSPEND, server.COMPLETE)
        ex = self.assertRaises(exception.ResourceFailure,
                               scheduler.TaskRunner(server.resume))
        self.assertEqual('NotFound: resources.srv_res_nf: '
                         'Failed to find server 1234',
                         six.text_type(ex))
        self.assertEqual((server.RESUME, server.FAILED), server.state)

    def test_server_status_build_spawning(self):
        self._test_server_status_not_build_active('BUILD(SPAWNING)')

    def test_server_status_hard_reboot(self):
        self._test_server_status_not_build_active('HARD_REBOOT')

    def test_server_status_password(self):
        self._test_server_status_not_build_active('PASSWORD')

    def test_server_status_reboot(self):
        self._test_server_status_not_build_active('REBOOT')

    def test_server_status_rescue(self):
        self._test_server_status_not_build_active('RESCUE')

    def test_server_status_resize(self):
        self._test_server_status_not_build_active('RESIZE')

    def test_server_status_revert_resize(self):
        self._test_server_status_not_build_active('REVERT_RESIZE')

    def test_server_status_shutoff(self):
        self._test_server_status_not_build_active('SHUTOFF')

    def test_server_status_suspended(self):
        self._test_server_status_not_build_active('SUSPENDED')

    def test_server_status_verify_resize(self):
        self._test_server_status_not_build_active('VERIFY_RESIZE')

    def _test_server_status_not_build_active(self, uncommon_status):
        return_server = self.fc.servers.list()[0]
        server = self._setup_test_server(return_server, 'srv_sts_bld')
        server.resource_id = '1234'

        def set_status(status):
            return_server.status = status
            return return_server

        self.patchobject(self.fc.servers, 'get',
                         side_effect=[set_status(uncommon_status),
                                      set_status('ACTIVE')])
        scheduler.TaskRunner(server.create)()
        self.assertEqual((server.CREATE, server.COMPLETE), server.state)

    def test_build_nics(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server,
                                          'test_server_create')
        self.patchobject(neutronclient.Client, 'create_port',
                         return_value={'port': {'id': '4815162342'}})
        self.assertIsNone(server._build_nics([]))
        self.assertIsNone(server._build_nics(None))
        self.assertEqual([
            {'port-id': 'aaaabbbb', 'net-id': None, 'tag': 'nic1'},
            {'v4-fixed-ip': '192.0.2.0', 'net-id': None}],
            server._build_nics([
                {'port': 'aaaabbbb', 'tag': 'nic1'},
                {'fixed_ip': '192.0.2.0'}]))
        self.assertEqual([{'port-id': 'aaaabbbb', 'net-id': None},
                          {'port-id': 'aaaabbbb', 'net-id': None}],
                         server._build_nics([{'port': 'aaaabbbb',
                                              'fixed_ip': '192.0.2.0'},
                                             {'port': 'aaaabbbb',
                                              'fixed_ip': '2002::2'}]))
        self.assertEqual([{'port-id': 'aaaabbbb', 'net-id': None},
                          {'v6-fixed-ip': '2002::2', 'net-id': None}],
                         server._build_nics([{'port': 'aaaabbbb'},
                                             {'fixed_ip': '2002::2'}]))
        self.assertEqual([{'net-id': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'}],
                         server._build_nics(
                             [{'network':
                               'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'}]))

    def test_server_network_errors(self):
        stack_name = 'net_err'
        (tmpl, stack) = self._setup_test_stack(stack_name,
                                               test_templ=ns_template)
        resolver = self.patchobject(neutron.NeutronClientPlugin,
                                    'find_resourceid_by_name_or_id')
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server', resource_defns['server'], stack)
        resolver.side_effect = neutron.exceptions.NotFound()
        server.reparse()
        self.assertRaises(ValueError, server.properties.get, 'networks')
        resolver.side_effect = (
            neutron.exceptions.NeutronClientNoUniqueMatch())
        ex = self.assertRaises(exception.ResourceFailure,
                               scheduler.TaskRunner(server.create))
        self.assertIn('use an ID to be more specific.',
                      six.text_type(ex))
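
    # NOTE: for readers of test_build_nics above -- the dicts returned
    # by _build_nics use the novaclient NIC spec keys ('net-id',
    # 'port-id', 'v4-fixed-ip'/'v6-fixed-ip'), i.e. what novaclient
    # expects in the nics= argument of servers.create(); _build_nics
    # only maps the heat property keys ('network', 'port', 'fixed_ip')
    # onto them.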

    def test_server_without_ip_address(self):
        return_server = self.fc.servers.list()[3]
        return_server.id = '9102'
        server = self._create_test_server(return_server, 'wo_ipaddr')
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id',
                         return_value=None)
        self.patchobject(self.fc.servers, 'get',
                         return_value=return_server)
        self.patchobject(return_server, 'interface_list', return_value=[])
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')
        self.assertEqual({'empty_net': []}, server.FnGetAtt('addresses'))
        self.assertEqual({'empty_net': []}, server.FnGetAtt('networks'))
        self.assertEqual(0, mock_detach.call_count)
        self.assertEqual(0, mock_attach.call_count)

    def test_build_block_device_mapping(self):
        self.assertIsNone(servers.Server._build_block_device_mapping([]))
        self.assertIsNone(servers.Server._build_block_device_mapping(None))
        self.assertEqual({
            'vda': '1234::',
            'vdb': '1234:snap:',
        }, servers.Server._build_block_device_mapping([
            {'device_name': 'vda', 'volume_id': '1234'},
            {'device_name': 'vdb', 'snapshot_id': '1234'},
        ]))
        self.assertEqual({
            'vdc': '1234::10',
            'vdd': '1234:snap::True'
        }, servers.Server._build_block_device_mapping([
            {
                'device_name': 'vdc',
                'volume_id': '1234',
                'volume_size': 10
            },
            {
                'device_name': 'vdd',
                'snapshot_id': '1234',
                'delete_on_termination': True
            }
        ]))

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_block_device_mapping_volume_size_valid_int(
            self, mock_create):
        stack_name = 'val_vsize_valid'
        tmpl, stack = self._setup_test_stack(stack_name)
        bdm = [{'device_name': 'vda', 'volume_id': '1234',
                'volume_size': 10}]
        wsp = tmpl.t['Resources']['WebServer']['Properties']
        wsp['block_device_mapping'] = bdm
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.stub_VolumeConstraint_validate()
        self.assertIsNone(server.validate())

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_block_device_mapping_volume_size_valid_str(
            self, mock_create):
        stack_name = 'val_vsize_valid'
        tmpl, stack = self._setup_test_stack(stack_name)
        bdm = [{'device_name': 'vda', 'volume_id': '1234',
                'volume_size': '10'}]
        wsp = tmpl.t['Resources']['WebServer']['Properties']
        wsp['block_device_mapping'] = bdm
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.stub_VolumeConstraint_validate()
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.assertIsNone(server.validate())

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_bd_mapping_volume_size_invalid_str(self, mock_create):
        stack_name = 'val_vsize_invalid'
        tmpl, stack = self._setup_test_stack(stack_name)
        bdm = [{'device_name': 'vda', 'volume_id': '1234',
                'volume_size': '10a'}]
        wsp = tmpl.t['Resources']['WebServer']['Properties']
        wsp['block_device_mapping'] = bdm
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.stub_VolumeConstraint_validate()
        exc = self.assertRaises(exception.StackValidationFailed,
                                server.validate)
        self.assertIn("Value '10a' is not an integer", six.text_type(exc))
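
    # NOTE: the v1 mapping strings asserted in
    # test_build_block_device_mapping above follow the legacy
    # colon-delimited format
    # '<id>:<snap|"">:<volume_size>:<delete_on_termination>', with empty
    # trailing fields left blank -- e.g. '1234:snap::True' is snapshot
    # 1234, no explicit size, delete on termination.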

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_conflict_block_device_mapping_props(self, mock_create):
        stack_name = 'val_blkdev1'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        bdm = [{'device_name': 'vdb', 'snapshot_id': '1234',
                'volume_id': '1234'}]
        wsp = tmpl.t['Resources']['WebServer']['Properties']
        wsp['block_device_mapping'] = bdm
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.stub_VolumeConstraint_validate()
        self.stub_SnapshotConstraint_validate()
        self.assertRaises(exception.ResourcePropertyConflict,
                          server.validate)

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_insufficient_block_device_mapping_props(
            self, mock_create):
        stack_name = 'val_blkdev2'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        bdm = [{'device_name': 'vdb', 'volume_size': 1,
                'delete_on_termination': True}]
        wsp = tmpl.t['Resources']['WebServer']['Properties']
        wsp['block_device_mapping'] = bdm
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        ex = self.assertRaises(exception.StackValidationFailed,
                               server.validate)
        msg = ("Either volume_id or snapshot_id must be specified "
               "for device mapping vdb")
        self.assertEqual(msg, six.text_type(ex))

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_block_device_mapping_with_empty_ref(self, mock_create):
        stack_name = 'val_blkdev2'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        bdm = [{'device_name': 'vda', 'volume_id': '',
                'volume_size': '10'}]
        wsp = tmpl.t['Resources']['WebServer']['Properties']
        wsp['block_device_mapping'] = bdm
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.stub_VolumeConstraint_validate()
        self.assertIsNone(server.validate())

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_without_image_or_bootable_volume(self, mock_create):
        stack_name = 'val_imgvol'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        del tmpl['Resources']['WebServer']['Properties']['image']
        bdm = [{'device_name': 'vdb', 'volume_id': '1234'}]
        wsp = tmpl.t['Resources']['WebServer']['Properties']
        wsp['block_device_mapping'] = bdm
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.stub_VolumeConstraint_validate()
        ex = self.assertRaises(exception.StackValidationFailed,
                               server.validate)
        msg = ('Neither image nor bootable volume is specified '
               'for instance %s' % server.name)
        self.assertEqual(msg, six.text_type(ex))

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_invalid_image_status(self, mock_create):
        stack_name = 'test_stack'
        tmpl, stack = self._setup_test_stack(stack_name)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_inactive_image',
                                resource_defns['WebServer'], stack)
        mock_image = mock.Mock(min_ram=2, status='sdfsdf')
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=mock_image)
        error = self.assertRaises(exception.StackValidationFailed,
                                  server.validate)
        self.assertEqual(
            'Image status is required to be active not sdfsdf.',
            six.text_type(error))

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_insufficient_ram_flavor(self, mock_create):
        stack_name = 'test_stack'
        tmpl, stack = self._setup_test_stack(stack_name)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_insufficient_ram_flavor',
                                resource_defns['WebServer'], stack)
        mock_image = mock.Mock(min_ram=100, status='active')
    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_insufficient_ram_flavor(self, mock_create):
        stack_name = 'test_stack'
        tmpl, stack = self._setup_test_stack(stack_name)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_insufficient_ram_flavor',
                                resource_defns['WebServer'], stack)
        mock_image = mock.Mock(min_ram=100, status='active')
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)

        error = self.assertRaises(exception.StackValidationFailed,
                                  server.validate)
        self.assertEqual(
            'Image F18-x86_64-gold requires 100 minimum ram. Flavor m1.large '
            'has only 4.', six.text_type(error))

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_image_flavor_not_found(self, mock_create):
        stack_name = 'test_stack'
        tmpl, stack = self._setup_test_stack(stack_name)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('image_not_found',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         side_effect=[
                             glance.client_exception.EntityMatchNotFound,
                             self.mock_image])
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         side_effect=nova.exceptions.NotFound(''))

        self.assertIsNone(server.validate())
        self.assertIsNone(server.validate())

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_insufficient_disk_flavor(self, mock_create):
        stack_name = 'test_stack'
        tmpl, stack = self._setup_test_stack(stack_name)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_insufficient_disk_flavor',
                                resource_defns['WebServer'], stack)
        mock_image = mock.Mock(min_ram=1, status='active', min_disk=100)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)

        error = self.assertRaises(exception.StackValidationFailed,
                                  server.validate)
        self.assertEqual(
            'Image F18-x86_64-gold requires 100 GB minimum disk space. '
            'Flavor m1.large has only 4 GB.', six.text_type(error))

    def test_build_block_device_mapping_v2(self):
        self.assertIsNone(servers.Server._build_block_device_mapping_v2([]))
        self.assertIsNone(servers.Server._build_block_device_mapping_v2(None))

        self.assertEqual([{
            'uuid': '1', 'source_type': 'volume',
            'destination_type': 'volume', 'boot_index': 0,
            'delete_on_termination': False}
        ], servers.Server._build_block_device_mapping_v2([
            {'volume_id': '1'}
        ]))

        self.assertEqual([{
            'uuid': '1', 'source_type': 'snapshot',
            'destination_type': 'volume', 'boot_index': 0,
            'delete_on_termination': False}
        ], servers.Server._build_block_device_mapping_v2([
            {'snapshot_id': '1'}
        ]))

        self.assertEqual([{
            'uuid': '1', 'source_type': 'image',
            'destination_type': 'volume', 'boot_index': 0,
            'delete_on_termination': False}
        ], servers.Server._build_block_device_mapping_v2([
            {'image': '1'}
        ]))

        self.assertEqual([{
            'source_type': 'blank', 'destination_type': 'local',
            'boot_index': -1, 'delete_on_termination': True,
            'guest_format': 'swap', 'volume_size': 1}
        ], servers.Server._build_block_device_mapping_v2([
            {'swap_size': 1}
        ]))

        self.assertEqual([], servers.Server._build_block_device_mapping_v2([
            {'device_name': ''}
        ]))

        self.assertEqual([
            {'source_type': 'blank', 'destination_type': 'local',
             'boot_index': -1, 'delete_on_termination': True,
             'volume_size': 1, 'guest_format': 'ext4'}
        ], servers.Server._build_block_device_mapping_v2([
            {'ephemeral_size': 1, 'ephemeral_format': 'ext4'}
        ]))

    def test_block_device_mapping_v2_image_resolve(self):
        (tmpl, stack) = self._setup_test_stack('mapping',
                                               test_templ=bdm_v2_template)
        resource_defns = tmpl.resource_definitions(stack)
        self.server = servers.Server('server', resource_defns['server'],
                                     stack)
        self.server.translate_properties(self.server.properties)
        self.assertEqual(
            '2',
            self.server.properties['block_device_mapping_v2'][0]['image'])
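
    # A sketch of the v2 mapping built above, inferred from the expected
    # dicts in test_build_block_device_mapping_v2: volume_id, snapshot_id
    # and image entries become source_type volume/snapshot/image with a
    # volume destination and boot_index 0, while swap_size/ephemeral_size
    # entries become blank local devices with boot_index -1 and
    # delete_on_termination True.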
    def test_block_device_mapping_v2_image_prop_conflict(self):
        test_templ = bdm_v2_template + "\n image: F17-x86_64-gold"
        (tmpl, stack) = self._setup_test_stack('mapping',
                                               test_templ=test_templ)
        resource_defns = tmpl.resource_definitions(stack)
        msg = ("Cannot define the following "
               "properties at the same time: block_device_mapping_v2.image, "
               "block_device_mapping_v2.image_id")
        server = servers.Server('server', resource_defns['server'], stack)

        exc = self.assertRaises(exception.StackValidationFailed,
                                server.validate)
        self.assertIn(msg, six.text_type(exc))

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_with_both_blk_dev_map_and_blk_dev_map_v2(
            self, mock_create):
        stack_name = 'invalid_stack'
        tmpl, stack = self._setup_test_stack(stack_name)
        bdm = [{'device_name': 'vda', 'volume_id': '1234',
                'volume_size': '10'}]
        bdm_v2 = [{'volume_id': '1'}]
        wsp = tmpl.t['Resources']['WebServer']['Properties']
        wsp['block_device_mapping'] = bdm
        wsp['block_device_mapping_v2'] = bdm_v2
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.stub_VolumeConstraint_validate()

        exc = self.assertRaises(exception.ResourcePropertyConflict,
                                server.validate)
        msg = ('Cannot define the following properties at the same time: '
               'block_device_mapping, block_device_mapping_v2.')
        self.assertEqual(msg, six.text_type(exc))

    def _test_validate_bdm_v2(self, stack_name, bdm_v2, with_image=True,
                              error_msg=None, raise_exc=None):
        tmpl, stack = self._setup_test_stack(stack_name)
        if not with_image:
            del tmpl['Resources']['WebServer']['Properties']['image']
        wsp = tmpl.t['Resources']['WebServer']['Properties']
        wsp['block_device_mapping_v2'] = bdm_v2
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.stub_VolumeConstraint_validate()

        if raise_exc:
            ex = self.assertRaises(raise_exc, server.validate)
            self.assertIn(error_msg, six.text_type(ex))
        else:
            self.assertIsNone(server.validate())

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_conflict_block_device_mapping_v2_props(
            self, mock_create):
        stack_name = 'val_blkdev2'
        bdm_v2 = [{'volume_id': '1', 'snapshot_id': 2}]
        error_msg = ('Cannot define the following properties at '
                     'the same time: volume_id, snapshot_id')
        self.stub_SnapshotConstraint_validate()
        self._test_validate_bdm_v2(
            stack_name, bdm_v2,
            raise_exc=exception.ResourcePropertyConflict,
            error_msg=error_msg)

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_bdm_v2_with_empty_mapping(self, mock_create):
        stack_name = 'val_blkdev2'
        bdm_v2 = [{}]
        msg = ('Either volume_id, snapshot_id, image_id, swap_size, '
               'ephemeral_size or ephemeral_format must be specified.')
        self._test_validate_bdm_v2(stack_name, bdm_v2,
                                   raise_exc=exception.StackValidationFailed,
                                   error_msg=msg)

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_bdm_v2_properties_success(self, mock_create):
        stack_name = 'bdm_v2_success'
        bdm_v2 = [{'volume_id': '1', 'boot_index': -1}]
        self._test_validate_bdm_v2(stack_name, bdm_v2)
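
    # The remaining bdm_v2 validation tests drive _test_validate_bdm_v2
    # above: pass raise_exc/error_msg for the failure cases, or omit them
    # (and optionally pass with_image=False) for the cases that are
    # expected to validate cleanly.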
    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_bdm_v2_with_unresolved_volume(self, mock_create):
        stack_name = 'bdm_v2_with_unresolved_vol'
        # an empty string indicates that the volume is unresolved
        bdm_v2 = [{'volume_id': ''}]
        self._test_validate_bdm_v2(stack_name, bdm_v2, with_image=False)

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_bdm_v2_multiple_bootable_source(self, mock_create):
        stack_name = 'v2_multiple_bootable'
        # with two bootable sources: volume_id and image
        bdm_v2 = [{'volume_id': '1', 'boot_index': 0}]
        msg = ('Multiple bootable sources for instance')
        self._test_validate_bdm_v2(stack_name, bdm_v2,
                                   raise_exc=exception.StackValidationFailed,
                                   error_msg=msg)

    @mock.patch.object(nova.NovaClientPlugin, '_create')
    def test_validate_bdm_v2_properties_no_bootable_vol(self, mock_create):
        stack_name = 'bdm_v2_no_bootable'
        bdm_v2 = [{'swap_size': 10}]
        msg = ('Neither image nor bootable volume is specified for instance '
               'server_create_image_err')
        self._test_validate_bdm_v2(stack_name, bdm_v2,
                                   raise_exc=exception.StackValidationFailed,
                                   error_msg=msg, with_image=False)

    def test_validate_metadata_too_many(self):
        stack_name = 'srv_val_metadata'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        tmpl.t['Resources']['WebServer']['Properties']['metadata'] = {
            'a': 1, 'b': 2, 'c': 3, 'd': 4}
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)

        ex = self.assertRaises(exception.StackValidationFailed,
                               server.validate)
        self.assertIn('Instance metadata must not contain greater than 3 '
                      'entries', six.text_type(ex))

    def test_validate_metadata_okay(self):
        stack_name = 'srv_val_metadata'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        tmpl.t['Resources']['WebServer']['Properties']['metadata'] = {
            'a': 1, 'b': 2, 'c': 3}
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)

        self.assertIsNone(server.validate())

    def test_server_unsupported_microversion_tags(self):
        stack_name = 'srv_val_tags'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        props = tmpl.t['Resources']['WebServer']['Properties']
        props['tags'] = ['a']
        # no need to test with key_name
        props.pop('key_name')
        self.patchobject(nova.NovaClientPlugin, '_create', side_effect=[
            exception.InvalidServiceVersion(service='a', version='2.26')])
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)

        exc = self.assertRaises(exception.StackValidationFailed,
                                server.validate)
        self.assertEqual('Cannot use "tags" property - nova does not '
                         'support it: Invalid service a version 2.26',
                         six.text_type(exc))
"/fake/path2": "fake_contents2", "/fake/path3": "fake_contents3", "/fake/path4": "fake_contents4", "/fake/path5": "fake_contents5", "/fake/path6": "fake_contents6"} resource_defns = tmpl.resource_definitions(stack) server = servers.Server('server_create_image_err', resource_defns['WebServer'], stack) self.patchobject(self.fc.limits, 'get', return_value=self.limits) self.patchobject(glance.GlanceClientPlugin, 'get_image', return_value=self.mock_image) exc = self.assertRaises(exception.StackValidationFailed, server.validate) self.assertEqual("The personality property may not contain " "greater than 5 entries.", six.text_type(exc)) def test_server_validate_personality_okay(self): stack_name = 'srv_val' (tmpl, stack) = self._setup_test_stack(stack_name) self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) tmpl.t['Resources']['WebServer']['Properties'][ 'personality'] = {"/fake/path1": "fake contents1", "/fake/path2": "fake_contents2", "/fake/path3": "fake_contents3", "/fake/path4": "fake_contents4", "/fake/path5": "fake_contents5"} resource_defns = tmpl.resource_definitions(stack) server = servers.Server('server_create_image_err', resource_defns['WebServer'], stack) self.patchobject(self.fc.limits, 'get', return_value=self.limits) self.patchobject(glance.GlanceClientPlugin, 'get_image', return_value=self.mock_image) self.assertIsNone(server.validate()) def test_server_validate_personality_file_size_okay(self): stack_name = 'srv_val' (tmpl, stack) = self._setup_test_stack(stack_name) self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) tmpl.t['Resources']['WebServer']['Properties'][ 'personality'] = {"/fake/path1": "a" * 10240} resource_defns = tmpl.resource_definitions(stack) server = servers.Server('server_create_image_err', resource_defns['WebServer'], stack) self.patchobject(self.fc.limits, 'get', return_value=self.limits) self.patchobject(glance.GlanceClientPlugin, 'get_image', return_value=self.mock_image) self.assertIsNone(server.validate()) def test_server_validate_personality_file_size_too_big(self): stack_name = 'srv_val' (tmpl, stack) = self._setup_test_stack(stack_name) self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) tmpl.t['Resources']['WebServer']['Properties'][ 'personality'] = {"/fake/path1": "a" * 10241} resource_defns = tmpl.resource_definitions(stack) server = servers.Server('server_create_image_err', resource_defns['WebServer'], stack) self.patchobject(self.fc.limits, 'get', return_value=self.limits) self.patchobject(glance.GlanceClientPlugin, 'get_image', return_value=self.mock_image) exc = self.assertRaises(exception.StackValidationFailed, server.validate) self.assertEqual('The contents of personality file "/fake/path1" ' 'is larger than the maximum allowed personality ' 'file size (10240 bytes).', six.text_type(exc)) def test_server_validate_personality_get_attr_return_none(self): stack_name = 'srv_val' (tmpl, stack) = self._setup_test_stack( stack_name, server_with_sw_config_personality) self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) resource_defns = tmpl.resource_definitions(stack) server = servers.Server('server_create_image_err', resource_defns['server'], stack) self.patchobject(self.fc.limits, 'get', return_value=self.limits) self.patchobject(glance.GlanceClientPlugin, 'get_image', return_value=self.mock_image) self.assertIsNone(server.validate()) def test_resolve_attribute_server_not_found(self): return_server = self.fc.servers.list()[1] server = 
    def test_resolve_attribute_server_not_found(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server,
                                          'srv_resolve_attr')
        server.resource_id = '1234'
        self.patchobject(self.fc.servers, 'get',
                         side_effect=fakes_nova.fake_exception())

        self.assertEqual('', server._resolve_any_attribute("accessIPv4"))

    def test_resolve_attribute_console_url(self):
        server = self.fc.servers.list()[0]
        tmpl, stack = self._setup_test_stack('console_url_stack')
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        ws = servers.Server(
            'WebServer', tmpl.resource_definitions(stack)['WebServer'],
            stack)
        ws.resource_id = server.id
        self.patchobject(self.fc.servers, 'get', return_value=server)

        console_urls = ws._resolve_any_attribute('console_urls')
        self.assertIsInstance(console_urls, collections.Mapping)
        supported_consoles = ('novnc', 'xvpvnc', 'spice-html5', 'rdp-html5',
                              'serial', 'webmks')
        self.assertEqual(set(supported_consoles),
                         set(console_urls))

    def test_resolve_attribute_networks(self):
        return_server = self.fc.servers.list()[1]
        server = self._create_test_server(return_server,
                                          'srv_resolve_attr')
        server.resource_id = '1234'
        server.networks = {"fake_net": ["10.0.0.3"]}
        self.patchobject(self.fc.servers, 'get', return_value=server)
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id',
                         return_value='fake_uuid')

        expect_networks = {"fake_uuid": ["10.0.0.3"],
                           "fake_net": ["10.0.0.3"]}
        self.assertEqual(expect_networks,
                         server._resolve_any_attribute("networks"))

    def test_empty_instance_user(self):
        """Test Nova server doesn't set instance_user in build_userdata.

        Launching the instance should not pass any user name to
        build_userdata. The default cloud-init user set up for the image
        will be used instead.
        """
        return_server = self.fc.servers.list()[1]
        server = self._setup_test_server(return_server, 'without_user')
        metadata = server.metadata_get()
        build_data = self.patchobject(nova.NovaClientPlugin,
                                      'build_userdata')
        scheduler.TaskRunner(server.create)()
        build_data.assert_called_with(metadata, 'wordpress',
                                      instance_user=None,
                                      user_data_format='HEAT_CFNTOOLS')

    def create_old_net(self, port=None, net=None, ip=None, uuid=None,
                       subnet=None, port_extra_properties=None,
                       floating_ip=None, str_network=None, tag=None):
        return {'port': port,
                'network': net,
                'fixed_ip': ip,
                'uuid': uuid,
                'subnet': subnet,
                'floating_ip': floating_ip,
                'port_extra_properties': port_extra_properties,
                'allocate_network': str_network,
                'tag': tag}

    def test_get_network_id_neutron(self):
        return_server = self.fc.servers.list()[3]
        server = self._create_test_server(return_server, 'networks_update')

        net = {'port': '2a60cbaa-3d33-4af6-a9ce-83594ac546fc'}
        net_id = server._get_network_id(net)
        self.assertIsNone(net_id)

        net = {'network': 'f3ef5d2f-d7ba-4b27-af66-58ca0b81e032',
               'fixed_ip': '1.2.3.4'}
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id',
                         return_value='f3ef5d2f-d7ba-4b27-af66-58ca0b81e032')
        net_id = server._get_network_id(net)
        self.assertEqual('f3ef5d2f-d7ba-4b27-af66-58ca0b81e032', net_id)

        net = {'network': '', 'fixed_ip': '1.2.3.4'}
        net_id = server._get_network_id(net)
        self.assertIsNone(net_id)
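
    # For reference, create_old_net() above normalizes a network entry to
    # the full property schema, e.g. (hypothetical values):
    #   self.create_old_net(port='p1') == {
    #       'port': 'p1', 'network': None, 'fixed_ip': None, 'uuid': None,
    #       'subnet': None, 'floating_ip': None,
    #       'port_extra_properties': None, 'allocate_network': None,
    #       'tag': None}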
    def test_exclude_not_updated_networks_no_matching(self):
        return_server = self.fc.servers.list()[3]
        server = self._create_test_server(return_server, 'networks_update')
        for new_nets in (
                [],
                [{'port': '952fd4ae-53b9-4b39-9e5f-8929c553b5ae',
                  'network': '450abbc9-9b6d-4d6f-8c3a-c47ac34100dd'}]):
            old_nets = [
                self.create_old_net(
                    port='2a60cbaa-3d33-4af6-a9ce-83594ac546fc'),
                self.create_old_net(
                    net='f3ef5d2f-d7ba-4b27-af66-58ca0b81e032',
                    ip='1.2.3.4'),
                self.create_old_net(
                    net='0da8adbf-a7e2-4c59-a511-96b03d2da0d7')]
            interfaces = [
                create_fake_iface(
                    port='2a60cbaa-3d33-4af6-a9ce-83594ac546fc',
                    net='450abbc9-9b6d-4d6f-8c3a-c47ac34100aa',
                    ip='4.3.2.1',
                    subnet='subnetsu-bnet-subn-etsu-bnetsubnetsu'),
                create_fake_iface(
                    port='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                    net='f3ef5d2f-d7ba-4b27-af66-58ca0b81e032',
                    ip='1.2.3.4',
                    subnet='subnetsu-bnet-subn-etsu-bnetsubnetsu'),
                create_fake_iface(
                    port='bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb',
                    net='0da8adbf-a7e2-4c59-a511-96b03d2da0d7',
                    ip='4.2.3.1',
                    subnet='subnetsu-bnet-subn-etsu-bnetsubnetsu')]
            new_nets_cpy = copy.deepcopy(new_nets)
            old_nets_cpy = copy.deepcopy(old_nets)
            # Add the values to old_nets_cpy that are populated in old_nets
            # when calling update_networks_matching_iface_port() in
            # _exclude_not_updated_networks()
            old_nets_cpy[0]['fixed_ip'] = '4.3.2.1'
            old_nets_cpy[0]['network'] = (
                '450abbc9-9b6d-4d6f-8c3a-c47ac34100aa')
            old_nets_cpy[0]['subnet'] = 'subnetsu-bnet-subn-etsu-bnetsubnetsu'
            old_nets_cpy[1]['fixed_ip'] = '1.2.3.4'
            old_nets_cpy[1]['port'] = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
            old_nets_cpy[1]['subnet'] = 'subnetsu-bnet-subn-etsu-bnetsubnetsu'
            old_nets_cpy[2]['fixed_ip'] = '4.2.3.1'
            old_nets_cpy[2]['port'] = 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb'
            old_nets_cpy[2]['subnet'] = 'subnetsu-bnet-subn-etsu-bnetsubnetsu'
            for net in new_nets_cpy:
                for key in ('port', 'network', 'fixed_ip', 'uuid', 'subnet',
                            'port_extra_properties', 'floating_ip',
                            'allocate_network', 'tag'):
                    net.setdefault(key)

            server._exclude_not_updated_networks(old_nets, new_nets,
                                                 interfaces)
            self.assertEqual(old_nets_cpy, old_nets)
            self.assertEqual(new_nets_cpy, new_nets)

    def test_exclude_not_updated_networks_success(self):
        return_server = self.fc.servers.list()[3]
        server = self._create_test_server(return_server, 'networks_update')
        old_nets = [
            self.create_old_net(
                port='2a60cbaa-3d33-4af6-a9ce-83594ac546fc'),
            self.create_old_net(
                net='f3ef5d2f-d7ba-4b27-af66-58ca0b81e032',
                ip='1.2.3.4'),
            self.create_old_net(
                net='0da8adbf-a7e2-4c59-a511-96b03d2da0d7')]
        new_nets = [
            {'port': '2a60cbaa-3d33-4af6-a9ce-83594ac546fc'},
            {'network': 'f3ef5d2f-d7ba-4b27-af66-58ca0b81e032',
             'fixed_ip': '1.2.3.4'},
            {'port': '952fd4ae-53b9-4b39-9e5f-8929c553b5ae'}]
        interfaces = [
            create_fake_iface(port='2a60cbaa-3d33-4af6-a9ce-83594ac546fc',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='3.4.5.6'),
            create_fake_iface(port='bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb',
                              net='f3ef5d2f-d7ba-4b27-af66-58ca0b81e032',
                              ip='1.2.3.4'),
            create_fake_iface(port='cccccccc-cccc-cccc-cccc-cccccccccccc',
                              net='0da8adbf-a7e2-4c59-a511-96b03d2da0d7',
                              ip='2.3.4.5')]
        new_nets_copy = copy.deepcopy(new_nets)
        old_nets_copy = copy.deepcopy(old_nets)
        # Add the values to old_nets_copy that are populated in old_nets
        # when calling update_networks_matching_iface_port() in
        # _exclude_not_updated_networks()
        old_nets_copy[2]['fixed_ip'] = '2.3.4.5'
        old_nets_copy[2]['port'] = 'cccccccc-cccc-cccc-cccc-cccccccccccc'
        for net in new_nets_copy:
            for key in ('port', 'network', 'fixed_ip', 'uuid', 'subnet',
                        'port_extra_properties', 'floating_ip',
                        'allocate_network', 'tag'):
                net.setdefault(key)

        server._exclude_not_updated_networks(old_nets, new_nets, interfaces)
        self.assertEqual([old_nets_copy[2]], old_nets)
        self.assertEqual([new_nets_copy[2]], new_nets)
    def test_exclude_not_updated_networks_nothing_for_update(self):
        return_server = self.fc.servers.list()[3]
        server = self._create_test_server(return_server, 'networks_update')
        old_nets = [
            self.create_old_net(
                net='f3ef5d2f-d7ba-4b27-af66-58ca0b81e032',
                ip='',
                port='')]
        new_nets = [
            {'network': 'f3ef5d2f-d7ba-4b27-af66-58ca0b81e032',
             'fixed_ip': None,
             'port': None,
             'subnet': None,
             'uuid': None,
             'port_extra_properties': None,
             'floating_ip': None,
             'allocate_network': None,
             'tag': None}]
        interfaces = [
            create_fake_iface(port='',
                              net='f3ef5d2f-d7ba-4b27-af66-58ca0b81e032',
                              ip='')]

        server._exclude_not_updated_networks(old_nets, new_nets, interfaces)
        self.assertEqual([], old_nets)
        self.assertEqual([], new_nets)
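
    # update_networks_matching_iface_port() appears to match each old
    # network entry to a live interface by port id, or by network and
    # fixed_ip, and then fills in the missing port/network/fixed_ip/subnet
    # values from that interface, so every entry ends up carrying a port id
    # (see the `expected` list in the test below).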
    def test_update_networks_matching_iface_port(self):
        return_server = self.fc.servers.list()[3]
        server = self._create_test_server(return_server, 'networks_update')

        # old order 0 1 2 3 4 5 6
        nets = [
            self.create_old_net(port='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'),
            self.create_old_net(net='gggggggg-1111-1111-1111-gggggggggggg',
                                ip='1.2.3.4'),
            self.create_old_net(net='gggggggg-1111-1111-1111-gggggggggggg'),
            self.create_old_net(port='dddddddd-dddd-dddd-dddd-dddddddddddd'),
            self.create_old_net(net='gggggggg-1111-1111-1111-gggggggggggg',
                                ip='5.6.7.8'),
            self.create_old_net(
                net='gggggggg-1111-1111-1111-gggggggggggg',
                subnet='hhhhhhhh-1111-1111-1111-hhhhhhhhhhhh'),
            self.create_old_net(
                subnet='iiiiiiii-1111-1111-1111-iiiiiiiiiiii')]
        # new order 2 3 0 1 4 6 5
        interfaces = [
            create_fake_iface(port='cccccccc-cccc-cccc-cccc-cccccccccccc',
                              net=nets[2]['network'], ip='10.0.0.11'),
            create_fake_iface(port=nets[3]['port'],
                              net='gggggggg-1111-1111-1111-gggggggggggg',
                              ip='10.0.0.12'),
            create_fake_iface(port=nets[0]['port'],
                              net='gggggggg-1111-1111-1111-gggggggggggg',
                              ip='10.0.0.13'),
            create_fake_iface(port='bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb',
                              net=nets[1]['network'],
                              ip=nets[1]['fixed_ip']),
            create_fake_iface(port='eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee',
                              net=nets[4]['network'],
                              ip=nets[4]['fixed_ip']),
            create_fake_iface(port='gggggggg-gggg-gggg-gggg-gggggggggggg',
                              net='gggggggg-1111-1111-1111-gggggggggggg',
                              ip='10.0.0.14', subnet=nets[6]['subnet']),
            create_fake_iface(port='ffffffff-ffff-ffff-ffff-ffffffffffff',
                              net=nets[5]['network'], ip='10.0.0.15',
                              subnet=nets[5]['subnet'])]
        # all networks should get port id
        expected = [
            {'port': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
             'network': 'gggggggg-1111-1111-1111-gggggggggggg',
             'fixed_ip': '10.0.0.13',
             'subnet': None,
             'floating_ip': None,
             'port_extra_properties': None,
             'uuid': None,
             'allocate_network': None,
             'tag': None},
            {'port': 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb',
             'network': 'gggggggg-1111-1111-1111-gggggggggggg',
             'fixed_ip': '1.2.3.4',
             'subnet': None,
             'port_extra_properties': None,
             'floating_ip': None,
             'uuid': None,
             'allocate_network': None,
             'tag': None},
            {'port': 'cccccccc-cccc-cccc-cccc-cccccccccccc',
             'network': 'gggggggg-1111-1111-1111-gggggggggggg',
             'fixed_ip': '10.0.0.11',
             'subnet': None,
             'port_extra_properties': None,
             'floating_ip': None,
             'uuid': None,
             'allocate_network': None,
             'tag': None},
            {'port': 'dddddddd-dddd-dddd-dddd-dddddddddddd',
             'network': 'gggggggg-1111-1111-1111-gggggggggggg',
             'fixed_ip': '10.0.0.12',
             'subnet': None,
             'port_extra_properties': None,
             'floating_ip': None,
             'uuid': None,
             'allocate_network': None,
             'tag': None},
            {'port': 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee',
             'uuid': None,
             'fixed_ip': '5.6.7.8',
             'subnet': None,
             'port_extra_properties': None,
             'floating_ip': None,
             'network': 'gggggggg-1111-1111-1111-gggggggggggg',
             'allocate_network': None,
             'tag': None},
            {'port': 'ffffffff-ffff-ffff-ffff-ffffffffffff',
             'uuid': None,
             'fixed_ip': '10.0.0.15',
             'subnet': 'hhhhhhhh-1111-1111-1111-hhhhhhhhhhhh',
             'port_extra_properties': None,
             'floating_ip': None,
             'network': 'gggggggg-1111-1111-1111-gggggggggggg',
             'allocate_network': None,
             'tag': None},
            {'port': 'gggggggg-gggg-gggg-gggg-gggggggggggg',
             'uuid': None,
             'fixed_ip': '10.0.0.14',
             'subnet': 'iiiiiiii-1111-1111-1111-iiiiiiiiiiii',
             'port_extra_properties': None,
             'floating_ip': None,
             'network': 'gggggggg-1111-1111-1111-gggggggggggg',
             'allocate_network': None,
             'tag': None}]

        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id',
                         return_value='gggggggg-1111-1111-1111-gggggggggggg')
        self.patchobject(neutron.NeutronClientPlugin,
                         'network_id_from_subnet_id',
                         return_value='gggggggg-1111-1111-1111-gggggggggggg')

        server.update_networks_matching_iface_port(nets, interfaces)
        self.assertEqual(expected, nets)

    def test_server_update_None_networks_with_port(self):
        return_server = self.fc.servers.list()[3]
        return_server.id = '9102'
        server = self._create_test_server(return_server, 'networks_update')

        new_networks = [{'port': '2a60cbaa-3d33-4af6-a9ce-83594ac546fc'}]
        update_props = self.server_props.copy()
        # old_networks is None, and we update to new_networks with a port
        update_props['networks'] = new_networks
        update_template = server.t.freeze(properties=update_props)

        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        iface = create_fake_iface(
            port='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
            net='450abbc9-9b6d-4d6f-8c3a-c47ac34100ef',
            ip='1.2.3.4')
        self.patchobject(return_server, 'interface_list',
                         return_value=[iface])
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')
        mock_detach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_detach',
                                             return_value=True)
        mock_attach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_attach',
                                             return_value=True)

        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        self.assertEqual(1, mock_detach.call_count)
        self.assertEqual(1, mock_attach.call_count)
        self.assertEqual(1, mock_detach_check.call_count)
        self.assertEqual(1, mock_attach_check.call_count)
    def test_server_update_None_networks_with_network_id(self):
        return_server = self.fc.servers.list()[3]
        return_server.id = '9102'
        self.patchobject(neutronclient.Client, 'create_port',
                         return_value={'port': {'id': 'abcd1234'}})
        server = self._create_test_server(return_server, 'networks_update')
        # old_networks is None, and we update to new_networks with a network
        new_networks = [{'network': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                         'fixed_ip': '1.2.3.4'}]
        update_props = self.server_props.copy()
        update_props['networks'] = new_networks
        update_template = server.t.freeze(properties=update_props)

        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        iface = create_fake_iface(
            port='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
            net='450abbc9-9b6d-4d6f-8c3a-c47ac34100ef',
            ip='1.2.3.4')
        self.patchobject(return_server, 'interface_list',
                         return_value=[iface])
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')
        mock_detach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_detach',
                                             return_value=True)
        mock_attach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_attach',
                                             return_value=True)

        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        self.assertEqual(1, mock_detach.call_count)
        self.assertEqual(1, mock_attach.call_count)
        self.assertEqual(1, mock_detach_check.call_count)
        self.assertEqual(1, mock_attach_check.call_count)

    def test_server_update_subnet_with_security_group(self):
        return_server = self.fc.servers.list()[3]
        return_server.id = '9102'
        server = self._create_test_server(return_server, 'update_subnet')
        # set old properties for 'networks' and 'security_groups'
        before_props = self.server_props.copy()
        before_props['networks'] = [
            {'subnet': 'aaa09d50-8c23-4498-a542-aa0deb24f73e'}
        ]
        before_props['security_groups'] = ['the_sg']
        # set new property 'networks'
        new_networks = [{'subnet': '2a60cbaa-3d33-4af6-a9ce-83594ac546fc'}]
        update_props = self.server_props.copy()
        update_props['networks'] = new_networks
        update_props['security_groups'] = ['the_sg']
        update_template = server.t.freeze(properties=update_props)
        server.t = server.t.freeze(properties=before_props)

        sec_uuids = ['86c0f8ae-23a8-464f-8603-c54113ef5467']
        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        self.patchobject(neutron.NeutronClientPlugin, 'get_secgroup_uuids',
                         return_value=sec_uuids)
        self.patchobject(neutron.NeutronClientPlugin,
                         'network_id_from_subnet_id',
                         return_value='05d8e681-4b37-4570-bc8d-810089f706b2')
        mock_create_port = self.patchobject(neutronclient.Client,
                                            'create_port')
        iface = create_fake_iface(
            port='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
            net='05d8e681-4b37-4570-bc8d-810089f706b2',
            subnet='aaa09d50-8c23-4498-a542-aa0deb24f73e',
            ip='1.2.3.4')
        self.patchobject(return_server, 'interface_list',
                         return_value=[iface])
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')
        mock_detach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_detach',
                                             return_value=True)
        mock_attach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_attach',
                                             return_value=True)

        scheduler.TaskRunner(server.update, update_template,
                             before=server.t)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        self.assertEqual(1, mock_detach.call_count)
        self.assertEqual(1, mock_attach.call_count)
        self.assertEqual(1, mock_detach_check.call_count)
        self.assertEqual(1, mock_attach_check.call_count)
        kwargs = {'network_id': '05d8e681-4b37-4570-bc8d-810089f706b2',
                  'fixed_ips': [
                      {'subnet_id': '2a60cbaa-3d33-4af6-a9ce-83594ac546fc'}],
                  'security_groups': sec_uuids,
                  'name': 'update_subnet-port-0',
                  }
        mock_create_port.assert_called_with({'port': kwargs})
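
    # When a subnet entry is replaced by a plain network entry, no new port
    # is pre-created; the test below instead checks that the configured
    # security groups are applied with update_port() to the port that Nova
    # reports back from interface_attach().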
    def test_server_update_subnet_to_network_with_security_group(self):
        return_server = self.fc.servers.list()[3]
        return_server.id = '9102'
        server = self._create_test_server(return_server, 'update_subnet')
        # set old properties for 'networks' and 'security_groups'
        before_props = self.server_props.copy()
        before_props['networks'] = [
            {'subnet': 'aaa09d50-8c23-4498-a542-aa0deb24f73e'}
        ]
        before_props['security_groups'] = ['the_sg']
        # set new property 'networks'
        new_networks = [{'network': '2a60cbaa-3d33-4af6-a9ce-83594ac546fc'}]
        update_props = self.server_props.copy()
        update_props['networks'] = new_networks
        update_props['security_groups'] = ['the_sg']
        update_template = server.t.freeze(properties=update_props)
        server.t = server.t.freeze(properties=before_props)

        sec_uuids = ['86c0f8ae-23a8-464f-8603-c54113ef5467']
        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        self.patchobject(neutron.NeutronClientPlugin, 'get_secgroup_uuids',
                         return_value=sec_uuids)
        self.patchobject(neutron.NeutronClientPlugin,
                         'network_id_from_subnet_id',
                         return_value='05d8e681-4b37-4570-bc8d-810089f706b2')
        iface = create_fake_iface(
            port='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
            net='05d8e681-4b37-4570-bc8d-810089f706b2',
            subnet='aaa09d50-8c23-4498-a542-aa0deb24f73e',
            ip='1.2.3.4')
        self.patchobject(return_server, 'interface_list',
                         return_value=[iface])
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')

        def interface_attach_mock(port, net):
            class attachment(object):
                def __init__(self, port_id, net_id):
                    self.port_id = port_id
                    self.net_id = net_id
            return attachment(port, net)

        mock_attach.return_value = interface_attach_mock(
            'ad4a231b-67f7-45fe-aee9-461176b48203',
            '2a60cbaa-3d33-4af6-a9ce-83594ac546fc')
        mock_detach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_detach',
                                             return_value=True)
        mock_attach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_attach',
                                             return_value=True)
        mock_update_port = self.patchobject(neutronclient.Client,
                                            'update_port')

        scheduler.TaskRunner(server.update, update_template,
                             before=server.t)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        self.assertEqual(1, mock_detach.call_count)
        self.assertEqual(1, mock_attach.call_count)
        self.assertEqual(1, mock_detach_check.call_count)
        self.assertEqual(1, mock_attach_check.call_count)
        kwargs = {'security_groups': sec_uuids}
        mock_update_port.assert_called_with(
            'ad4a231b-67f7-45fe-aee9-461176b48203', {'port': kwargs})

    def test_server_update_empty_networks_with_complex_parameters(self):
        return_server = self.fc.servers.list()[3]
        return_server.id = '9102'
        server = self._create_test_server(return_server, 'networks_update')

        new_networks = [{'network': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                         'fixed_ip': '1.2.3.4',
                         'port': '2a60cbaa-3d33-4af6-a9ce-83594ac546fc'}]
        update_props = self.server_props.copy()
        update_props['networks'] = new_networks
        update_template = server.t.freeze(properties=update_props)

        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        iface = create_fake_iface(
            port='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
            net='450abbc9-9b6d-4d6f-8c3a-c47ac34100ef',
            ip='1.2.3.4')
        self.patchobject(return_server, 'interface_list',
                         return_value=[iface])
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')
        mock_detach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_detach',
                                             return_value=True)
        mock_attach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_attach',
                                             return_value=True)

        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        self.assertEqual(1, mock_detach.call_count)
        self.assertEqual(1, mock_attach.call_count)
        self.assertEqual(1, mock_detach_check.call_count)
        self.assertEqual(1, mock_attach_check.call_count)
    def test_server_update_empty_networks_to_None(self):
        return_server = self.fc.servers.list()[3]
        return_server.id = '9102'
        server = self._create_test_server(return_server, 'networks_update',
                                          networks=[])
        update_props = copy.deepcopy(self.server_props)
        update_props.pop('networks')
        update_template = server.t.freeze(properties=update_props)

        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        iface = create_fake_iface(
            port='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
            net='450abbc9-9b6d-4d6f-8c3a-c47ac34100ef',
            ip='1.2.3.4')
        self.patchobject(return_server, 'interface_list',
                         return_value=[iface])
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')
        mock_detach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_detach',
                                             return_value=True)
        mock_attach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_attach',
                                             return_value=True)

        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        # nothing should be detached or attached for this update
        self.assertEqual(0, mock_detach.call_count)
        self.assertEqual(0, mock_attach.call_count)
        self.assertEqual(0, mock_detach_check.call_count)
        self.assertEqual(0, mock_attach_check.call_count)

    def _test_server_update_to_auto(self, available_multi_nets=None):
        multi_nets = available_multi_nets or []
        return_server = self.fc.servers.list()[1]
        return_server.id = '5678'
        old_networks = [
            {'port': '95e25541-d26a-478d-8f36-ae1c8f6b74dc'}]
        server = self._create_test_server(return_server, 'networks_update',
                                          networks=old_networks)
        update_props = self.server_props.copy()
        update_props['networks'] = [{'allocate_network': 'auto'}]
        update_template = server.t.freeze(properties=update_props)

        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        poor_interfaces = [
            create_fake_iface(port='95e25541-d26a-478d-8f36-ae1c8f6b74dc',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='11.12.13.14')]
        self.patchobject(return_server, 'interface_list',
                         return_value=poor_interfaces)
        self.patchobject(server, '_get_available_networks',
                         return_value=multi_nets)
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')
        updater = scheduler.TaskRunner(server.update, update_template)

        if not multi_nets:
            self.patchobject(nova.NovaClientPlugin,
                             'check_interface_detach',
                             return_value=True)
            self.patchobject(nova.NovaClientPlugin,
                             'check_interface_attach',
                             return_value=True)
            auto_allocate_net = '9cfe6c74-c105-4906-9a1f-81d9064e9bca'
            self.patchobject(server, '_auto_allocate_network',
                             return_value=[auto_allocate_net])
            updater()
            self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
            self.assertEqual(1, mock_detach.call_count)
            self.assertEqual(1, mock_attach.call_count)
            mock_attach.called_once_with(
                {'port_id': None,
                 'net_id': auto_allocate_net,
                 'fip': None})
        else:
            self.assertRaises(exception.ResourceFailure, updater)
            self.assertEqual(0, mock_detach.call_count)
            self.assertEqual(0, mock_attach.call_count)

    def test_server_update_str_networks_auto(self):
        self._test_server_update_to_auto()

    def test_server_update_str_networks_auto_multi_nets(self):
        available_nets = ['net_1', 'net_2']
        self._test_server_update_to_auto(available_nets)
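
    # 'allocate_network: none' should leave the server with no interfaces:
    # the test below expects every existing port to be detached and nothing
    # to be attached in its place.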
    def test_server_update_str_networks_none(self):
        return_server = self.fc.servers.list()[1]
        return_server.id = '5678'
        old_networks = [
            {'port': '95e25541-d26a-478d-8f36-ae1c8f6b74dc'},
            {'port': '4121f61a-1b2e-4ab0-901e-eade9b1cb09d'},
            {'network': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
             'fixed_ip': '31.32.33.34'}]
        server = self._create_test_server(return_server, 'networks_update',
                                          networks=old_networks)
        update_props = self.server_props.copy()
        update_props['networks'] = [{'allocate_network': 'none'}]
        update_template = server.t.freeze(properties=update_props)

        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        port_interfaces = [
            create_fake_iface(port='95e25541-d26a-478d-8f36-ae1c8f6b74dc',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='11.12.13.14'),
            create_fake_iface(port='4121f61a-1b2e-4ab0-901e-eade9b1cb09d',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='21.22.23.24'),
            create_fake_iface(port='0907fa82-a024-43c2-9fc5-efa1bccaa74a',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='31.32.33.34')]
        self.patchobject(return_server, 'interface_list',
                         return_value=port_interfaces)
        mock_detach = self.patchobject(return_server, 'interface_detach')
        self.patchobject(nova.NovaClientPlugin, 'check_interface_detach',
                         return_value=True)
        mock_attach = self.patchobject(return_server, 'interface_attach')

        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        self.assertEqual(3, mock_detach.call_count)
        self.assertEqual(0, mock_attach.call_count)

    def test_server_update_networks_with_complex_parameters(self):
        return_server = self.fc.servers.list()[1]
        return_server.id = '5678'
        old_networks = [
            {'port': '95e25541-d26a-478d-8f36-ae1c8f6b74dc'},
            {'network': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
             'fixed_ip': '1.2.3.4'},
            {'port': '4121f61a-1b2e-4ab0-901e-eade9b1cb09d'},
            {'network': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
             'fixed_ip': '31.32.33.34'}]
        server = self._create_test_server(return_server, 'networks_update',
                                          networks=old_networks)

        new_networks = [
            {'network': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
             'fixed_ip': '1.2.3.4'},
            {'port': '2a60cbaa-3d33-4af6-a9ce-83594ac546fc'}]
        update_props = copy.deepcopy(self.server_props)
        update_props['networks'] = new_networks
        update_template = server.t.freeze(properties=update_props)

        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        poor_interfaces = [
            create_fake_iface(port='95e25541-d26a-478d-8f36-ae1c8f6b74dc',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='11.12.13.14'),
            create_fake_iface(port='450abbc9-9b6d-4d6f-8c3a-c47ac34100ef',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='1.2.3.4'),
            create_fake_iface(port='4121f61a-1b2e-4ab0-901e-eade9b1cb09d',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='21.22.23.24'),
            create_fake_iface(port='0907fa82-a024-43c2-9fc5-efa1bccaa74a',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='31.32.33.34')]
        self.patchobject(return_server, 'interface_list',
                         return_value=poor_interfaces)
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')
        mock_detach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_detach',
                                             return_value=True)
        mock_attach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_attach',
                                             return_value=True)

        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        # we only detach the three old networks, and attach a new one
        self.assertEqual(3, mock_detach.call_count)
        self.assertEqual(1, mock_attach.call_count)
        self.assertEqual(3, mock_detach_check.call_count)
        self.assertEqual(1, mock_attach_check.call_count)
    def test_server_update_networks_with_None(self):
        return_server = self.fc.servers.list()[1]
        return_server.id = '5678'
        old_networks = [
            {'port': '95e25541-d26a-478d-8f36-ae1c8f6b74dc'},
            {'port': '4121f61a-1b2e-4ab0-901e-eade9b1cb09d'},
            {'network': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
             'fixed_ip': '31.32.33.34'}]
        server = self._create_test_server(return_server, 'networks_update',
                                          networks=old_networks)
        update_props = self.server_props.copy()
        update_props['networks'] = None
        update_template = server.t.freeze(properties=update_props)

        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        poor_interfaces = [
            create_fake_iface(port='95e25541-d26a-478d-8f36-ae1c8f6b74dc',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='11.12.13.14'),
            create_fake_iface(port='4121f61a-1b2e-4ab0-901e-eade9b1cb09d',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='21.22.23.24'),
            create_fake_iface(port='0907fa82-a024-43c2-9fc5-efa1bccaa74a',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='31.32.33.34')]
        self.patchobject(return_server, 'interface_list',
                         return_value=poor_interfaces)
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')
        mock_detach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_detach',
                                             return_value=True)
        mock_attach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_attach',
                                             return_value=True)

        scheduler.TaskRunner(server.update, update_template,
                             before=server.t)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        self.assertEqual(3, mock_detach.call_count)
        self.assertEqual(1, mock_attach.call_count)
        self.assertEqual(3, mock_detach_check.call_count)
        self.assertEqual(1, mock_attach_check.call_count)

    def test_server_update_old_networks_to_empty_list(self):
        return_server = self.fc.servers.list()[1]
        return_server.id = '5678'
        old_networks = [
            {'port': '95e25541-d26a-478d-8f36-ae1c8f6b74dc'},
            {'port': '4121f61a-1b2e-4ab0-901e-eade9b1cb09d'},
            {'network': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
             'fixed_ip': '31.32.33.34'}]
        server = self._create_test_server(return_server, 'networks_update',
                                          networks=old_networks)
        update_props = self.server_props.copy()
        update_props['networks'] = []
        update_template = server.t.freeze(properties=update_props)

        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        poor_interfaces = [
            create_fake_iface(port='95e25541-d26a-478d-8f36-ae1c8f6b74dc',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='11.12.13.14'),
            create_fake_iface(port='4121f61a-1b2e-4ab0-901e-eade9b1cb09d',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='21.22.23.24'),
            create_fake_iface(port='0907fa82-a024-43c2-9fc5-efa1bccaa74a',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='31.32.33.34')]
        self.patchobject(return_server, 'interface_list',
                         return_value=poor_interfaces)
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')
        mock_detach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_detach',
                                             return_value=True)
        mock_attach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_attach',
                                             return_value=True)

        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        self.assertEqual(3, mock_detach.call_count)
        self.assertEqual(1, mock_attach.call_count)
        self.assertEqual(3, mock_detach_check.call_count)
        self.assertEqual(1, mock_attach_check.call_count)
    def test_server_update_remove_network_non_empty(self):
        return_server = self.fc.servers.list()[1]
        return_server.id = '5678'
        old_networks = [
            {'port': '95e25541-d26a-478d-8f36-ae1c8f6b74dc'},
            {'port': '4121f61a-1b2e-4ab0-901e-eade9b1cb09d'}]
        new_networks = [
            {'port': '95e25541-d26a-478d-8f36-ae1c8f6b74dc'}]
        server = self._create_test_server(return_server, 'networks_update',
                                          networks=old_networks)
        update_props = self.server_props.copy()
        update_props['networks'] = new_networks
        update_template = server.t.freeze(properties=update_props)

        self.patchobject(self.fc.servers, 'get', return_value=return_server)
        poor_interfaces = [
            create_fake_iface(port='95e25541-d26a-478d-8f36-ae1c8f6b74dc',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='11.12.13.14'),
            create_fake_iface(port='4121f61a-1b2e-4ab0-901e-eade9b1cb09d',
                              net='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
                              ip='21.22.23.24')]
        self.patchobject(return_server, 'interface_list',
                         return_value=poor_interfaces)
        mock_detach = self.patchobject(return_server, 'interface_detach')
        mock_attach = self.patchobject(return_server, 'interface_attach')
        mock_detach_check = self.patchobject(nova.NovaClientPlugin,
                                             'check_interface_detach',
                                             return_value=True)

        scheduler.TaskRunner(server.update, update_template)()
        self.assertEqual((server.UPDATE, server.COMPLETE), server.state)
        self.assertEqual(1, mock_detach.call_count)
        self.assertEqual(1, mock_detach_check.call_count)
        self.assertEqual(0, mock_attach.call_count)

    def test_server_properties_validation_create_and_update(self):
        return_server = self.fc.servers.list()[1]
        # create
        # validation calls are already mocked there
        server = self._create_test_server(return_server, 'my_server')

        update_props = self.server_props.copy()
        update_props['image'] = 'F17-x86_64-gold'
        update_props['image_update_policy'] = 'REPLACE'
        update_template = server.t.freeze(properties=update_props)
        updater = scheduler.TaskRunner(server.update, update_template)
        self.assertRaises(resource.UpdateReplace, updater)

    def test_server_properties_validation_create_and_update_fail(self):
        return_server = self.fc.servers.list()[1]
        # create
        # validation calls are already mocked there
        server = self._create_test_server(return_server, 'my_server')
        ex = glance.client_exception.EntityMatchNotFound(entity='image',
                                                         args='Update Image')
        self.patchobject(glance.GlanceClientPlugin,
                         'find_image_by_name_or_id',
                         side_effect=[1, ex])
        update_props = self.server_props.copy()
        update_props['image'] = 'Update Image'
        update_template = server.t.freeze(properties=update_props)
        # update
        updater = scheduler.TaskRunner(server.update, update_template)
        err = self.assertRaises(exception.ResourceFailure, updater)
        self.assertEqual("StackValidationFailed: resources.my_server: "
                         "Property error: Properties.image: Error validating "
                         "value '1': No image matching Update Image.",
                         six.text_type(err))

    def test_server_snapshot(self):
        return_server = self.fc.servers.list()[1]
        return_server.id = '1234'
        server = self._create_test_server(return_server,
                                          'test_server_snapshot')
        scheduler.TaskRunner(server.snapshot)()

        self.assertEqual((server.SNAPSHOT, server.COMPLETE), server.state)
        self.assertEqual({'snapshot_image_id': '456'},
                         resource_data_object.ResourceData.get_all(server))

    def test_server_check_snapshot_complete_image_in_deleted(self):
        self._test_server_check_snapshot_complete(image_status='DELETED')

    def test_server_check_snapshot_complete_image_in_error(self):
        self._test_server_check_snapshot_complete()

    def test_server_check_snapshot_complete_fail(self):
        self._test_server_check_snapshot_complete()

    def _test_server_check_snapshot_complete(self, image_status='ERROR'):
        return_server = self.fc.servers.list()[1]
        return_server.id = '1234'
        server = self._create_test_server(return_server,
                                          'test_server_snapshot')
        image_in_error = mock.Mock()
        image_in_error.status = image_status
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=image_in_error)
        self.assertRaises(exception.ResourceFailure,
                          scheduler.TaskRunner(server.snapshot))
        self.assertEqual((server.SNAPSHOT, server.FAILED), server.state)
        # test snapshot_image_id already set to resource data
        self.assertEqual({'snapshot_image_id': '456'},
                         resource_data_object.ResourceData.get_all(server))
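
    # validate() is only expected to fetch the Nova absolute limits when a
    # personality property is present; the test below asserts that the
    # limits call is skipped entirely for a server without personality.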
    def test_server_dont_validate_personality_if_personality_isnt_set(self):
        stack_name = 'srv_val'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.patchobject(nova.NovaClientPlugin, 'get_flavor',
                         return_value=self.mock_flavor)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        mock_limits = self.patchobject(nova.NovaClientPlugin,
                                       'absolute_limits')
        self.patchobject(nova.NovaClientPlugin, '_create')

        # The assert here checks that the server resource validates, but it
        # is really the Act stage of this test: we call server.validate()
        # to verify that no excessive calls to Nova are made during
        # validation.
        self.assertIsNone(server.validate())
        # Check that nova.NovaClientPlugin.absolute_limits is not called
        # during the call to server.validate()
        self.assertFalse(mock_limits.called)

    def test_server_validate_connection_error_retry_successful(self):
        stack_name = 'srv_val'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        tmpl.t['Resources']['WebServer']['Properties'][
            'personality'] = {"/fake/path1": "a" * 10}
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.patchobject(self.fc.limits, 'get',
                         side_effect=[requests.ConnectionError(),
                                      self.limits])
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)

        self.assertIsNone(server.validate())

    def test_server_validate_connection_error_retry_failure(self):
        stack_name = 'srv_val'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        tmpl.t['Resources']['WebServer']['Properties'][
            'personality'] = {"/fake/path1": "a" * 10}
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        resource_defns = tmpl.resource_definitions(stack)
        server = servers.Server('server_create_image_err',
                                resource_defns['WebServer'], stack)
        self.patchobject(self.fc.limits, 'get',
                         side_effect=[requests.ConnectionError(),
                                      requests.ConnectionError(),
                                      requests.ConnectionError()])
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)

        self.assertRaises(requests.ConnectionError, server.validate)

    def test_server_restore(self):
        t = template_format.parse(ns_template)
        tmpl = template.Template(t, files={'a_file': 'the content'})
        stack = parser.Stack(utils.dummy_context(), "server_restore", tmpl)
        stack.store()

        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=self.fc)
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)
        self.patchobject(stack['server'], 'store_external_ports')

        return_server = self.fc.servers.list()[1]
        return_server.id = '1234'
        mock_create = self.patchobject(self.fc.servers, 'create',
                                       return_value=return_server)
        self.patchobject(self.fc.servers, 'get',
                         side_effect=[return_server, None])
        self.patchobject(neutron.NeutronClientPlugin,
                         'find_resourceid_by_name_or_id',
                         return_value='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa')

        scheduler.TaskRunner(stack.create)()
        self.assertEqual(1, mock_create.call_count)
        self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state)

        scheduler.TaskRunner(stack.snapshot, None)()
        self.assertEqual((stack.SNAPSHOT, stack.COMPLETE), stack.state)

        data = stack.prepare_abandon()
        resource_data = data['resources']['server']['resource_data']
        resource_data['snapshot_image_id'] = 'CentOS 5.2'
        fake_snapshot = collections.namedtuple(
            'Snapshot', ('data', 'stack_id'))(data, stack.id)

        stack.restore(fake_snapshot)
        self.assertEqual((stack.RESTORE, stack.COMPLETE), stack.state)
    def test_snapshot_policy(self):
        t = template_format.parse(wp_template)
        t['Resources']['WebServer']['DeletionPolicy'] = 'Snapshot'
        tmpl = template.Template(t)
        stack = parser.Stack(utils.dummy_context(), 'snapshot_policy', tmpl)
        stack.store()
        self.patchobject(stack['WebServer'], 'store_external_ports')

        mock_plugin = self.patchobject(nova.NovaClientPlugin, '_create')
        mock_plugin.return_value = self.fc

        return_server = self.fc.servers.list()[1]
        return_server.id = '1234'
        mock_create = self.patchobject(self.fc.servers, 'create')
        mock_create.return_value = return_server
        mock_get = self.patchobject(self.fc.servers, 'get')
        mock_get.return_value = return_server

        image = self.fc.servers.create_image('1234', 'name')
        create_image = self.patchobject(self.fc.servers, 'create_image')
        create_image.return_value = image
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)

        delete_server = self.patchobject(self.fc.servers, 'delete')
        delete_server.side_effect = nova_exceptions.NotFound(404)

        scheduler.TaskRunner(stack.create)()
        self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state)
        scheduler.TaskRunner(stack.delete)()
        self.assertEqual((stack.DELETE, stack.COMPLETE), stack.state)
        create_image.assert_called_once_with(
            '1234', utils.PhysName('snapshot_policy', 'WebServer'))
        delete_server.assert_called_once_with('1234')

    def test_snapshot_policy_image_failed(self):
        t = template_format.parse(wp_template)
        t['Resources']['WebServer']['DeletionPolicy'] = 'Snapshot'
        tmpl = template.Template(t)
        stack = parser.Stack(utils.dummy_context(), 'snapshot_policy', tmpl)
        stack.store()
        self.patchobject(stack['WebServer'], 'store_external_ports')

        mock_plugin = self.patchobject(nova.NovaClientPlugin, '_create')
        mock_plugin.return_value = self.fc
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=self.mock_image)

        return_server = self.fc.servers.list()[1]
        return_server.id = '1234'
        mock_create = self.patchobject(self.fc.servers, 'create')
        mock_create.return_value = return_server
        mock_get = self.patchobject(self.fc.servers, 'get')
        mock_get.return_value = return_server

        image = self.fc.servers.create_image('1234', 'name')
        create_image = self.patchobject(self.fc.servers, 'create_image')
        create_image.return_value = image
        delete_server = self.patchobject(self.fc.servers, 'delete')
        delete_server.side_effect = nova_exceptions.NotFound(404)

        scheduler.TaskRunner(stack.create)()
        self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state)

        failed_image = mock.Mock(**{
            'id': 456,
            'name': 'CentOS 5.2',
            'updated': '2010-10-10T12:00:00Z',
            'created': '2010-08-10T12:00:00Z',
            'status': 'ERROR'})
        self.patchobject(glance.GlanceClientPlugin, 'get_image',
                         return_value=failed_image)
        return_server = self.fc.servers.list()[1]

        scheduler.TaskRunner(stack.delete)()
        self.assertEqual((stack.DELETE, stack.FAILED), stack.state)
        self.assertEqual(
            'Resource DELETE failed: Error: resources.WebServer: ERROR',
            stack.status_reason)
        create_image.assert_called_once_with(
            '1234', utils.PhysName('snapshot_policy', 'WebServer'))
        delete_server.assert_not_called()
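
    # handle_snapshot_delete() is exercised directly below: with no
    # resource_id only the auxiliary resources (internal ports, queue, user,
    # temp URL) are cleaned up, and with a resource_id in a CREATE_FAILED
    # state the server is deleted without taking a snapshot image first.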
    def test_handle_snapshot_delete(self):
        t = template_format.parse(wp_template)
        t['Resources']['WebServer']['DeletionPolicy'] = 'Snapshot'
        tmpl = template.Template(t)
        stack = parser.Stack(utils.dummy_context(), 'snapshot_policy', tmpl)
        stack.store()
        rsrc = stack['WebServer']

        mock_plugin = self.patchobject(nova.NovaClientPlugin, '_create')
        mock_plugin.return_value = self.fc
        delete_server = self.patchobject(self.fc.servers, 'delete')
        delete_server.side_effect = nova_exceptions.NotFound(404)
        create_image = self.patchobject(self.fc.servers, 'create_image')

        # test resource_id is None
        self.patchobject(servers.Server, 'user_data_software_config',
                         return_value=True)
        delete_internal_ports = self.patchobject(servers.Server,
                                                 '_delete_internal_ports')
        delete_queue = self.patchobject(servers.Server, '_delete_queue')
        delete_user = self.patchobject(servers.Server, '_delete_user')
        delete_swift_object = self.patchobject(servers.Server,
                                               '_delete_temp_url')
        rsrc.handle_snapshot_delete((rsrc.CREATE, rsrc.FAILED))
        delete_server.assert_not_called()
        create_image.assert_not_called()
        # attempt to delete queue/user/swift_object/internal_ports
        # if no resource_id
        delete_internal_ports.assert_called_once_with()
        delete_queue.assert_called_once_with()
        delete_user.assert_called_once_with()
        delete_swift_object.assert_called_once_with()

        # test has resource_id but state is CREATE_FAILED
        rsrc.resource_id = '4567'
        rsrc.handle_snapshot_delete((rsrc.CREATE, rsrc.FAILED))
        delete_server.assert_called_once_with('4567')
        create_image.assert_not_called()
        # attempt to delete internal_ports if has resource_id
        self.assertEqual(2, delete_internal_ports.call_count)

    def test_handle_delete_without_resource_id(self):
        t = template_format.parse(wp_template)
        tmpl = template.Template(t)
        stack = parser.Stack(utils.dummy_context(), 'without_resource_id',
                             tmpl)
        rsrc = stack['WebServer']
        delete_server = self.patchobject(self.fc.servers, 'delete')

        # test resource_id is None
        self.patchobject(servers.Server, 'user_data_software_config',
                         return_value=True)
        delete_internal_ports = self.patchobject(servers.Server,
                                                 '_delete_internal_ports')
        delete_queue = self.patchobject(servers.Server, '_delete_queue')
        delete_user = self.patchobject(servers.Server, '_delete_user')
        delete_swift_object = self.patchobject(servers.Server,
                                               '_delete_temp_url')
        rsrc.handle_delete()
        delete_server.assert_not_called()
        # attempt to delete queue/user/swift_object/internal_ports
        # if no resource_id
        delete_internal_ports.assert_called_once_with()
        delete_queue.assert_called_once_with()
        delete_user.assert_called_once_with()
        delete_swift_object.assert_called_once_with()


class ServerInternalPortTest(ServersTest):
    def setUp(self):
        super(ServerInternalPortTest, self).setUp()
        self.resolve = self.patchobject(neutron.NeutronClientPlugin,
                                        'find_resourceid_by_name_or_id')
        self.port_create = self.patchobject(neutronclient.Client,
                                            'create_port')
        self.port_delete = self.patchobject(neutronclient.Client,
                                            'delete_port')
        self.port_show = self.patchobject(neutronclient.Client,
                                          'show_port')
        self.server_get = self.patchobject(novaclient.servers.ServerManager,
                                           'get')
        self.server_get.return_value = self.fc.servers.list()[1]

        def neutron_side_effect(*args):
            if args[0] == 'subnet':
                return '1234'
            if args[0] == 'network':
                return '4321'
            if args[0] == 'port':
                return '12345'

        self.resolve.side_effect = neutron_side_effect

    def _return_template_stack_and_rsrc_defn(self, stack_name, temp):
        templ = template.Template(template_format.parse(temp),
                                  env=environment.Environment(
                                      {'key_name': 'test'}))
        stack = parser.Stack(utils.dummy_context(), stack_name, templ,
                             stack_id=uuidutils.generate_uuid(),
                             stack_user_project_id='8888')
        resource_defns = templ.resource_definitions(stack)
        server = servers.Server('server', resource_defns['server'],
                                stack)
        return templ, stack, server
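
    # In these tests the name/id resolver stubbed in setUp() resolves any
    # 'subnet' to '1234', any 'network' to '4321' and any 'port' to '12345',
    # which is why those literal ids appear throughout the templates below.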
12345 network: 4321 """ t, stack, server = self._return_template_stack_and_rsrc_defn('test', tmpl) create_internal_port = self.patchobject(server, '_create_internal_port', return_value='12345') networks = [{'port': '12345', 'network': '4321'}] nics = server._build_nics(networks) self.assertEqual([{'port-id': '12345', 'net-id': '4321'}], nics) self.assertEqual(0, create_internal_port.call_count) def test_validate_internal_port_subnet_not_this_network(self): tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Nova::Server properties: flavor: m1.small image: F17-x86_64-gold networks: - network: 4321 subnet: 1234 """ t, stack, server = self._return_template_stack_and_rsrc_defn('test', tmpl) networks = server.properties['networks'] for network in networks: # validation passes at validate time server._validate_network(network) self.patchobject(neutron.NeutronClientPlugin, 'network_id_from_subnet_id', return_value='not_this_network') ex = self.assertRaises(exception.StackValidationFailed, server._build_nics, networks) self.assertEqual('Specified subnet 1234 does not belongs to ' 'network 4321.', six.text_type(ex)) def test_build_nics_create_internal_port_all_props_without_extras(self): tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Nova::Server properties: flavor: m1.small image: F17-x86_64-gold security_groups: - test_sec networks: - network: 4321 subnet: 1234 fixed_ip: 127.0.0.1 """ t, stack, server = self._return_template_stack_and_rsrc_defn('test', tmpl) self.patchobject(server, '_validate_belonging_subnet_to_net') self.patchobject(neutron.NeutronClientPlugin, 'get_secgroup_uuids', return_value=['5566']) self.port_create.return_value = {'port': {'id': '111222'}} data_set = self.patchobject(resource.Resource, 'data_set') network = [{'network': '4321', 'subnet': '1234', 'fixed_ip': '127.0.0.1'}] security_groups = ['test_sec'] server._build_nics(network, security_groups) self.port_create.assert_called_once_with( {'port': {'name': 'server-port-0', 'network_id': '4321', 'fixed_ips': [{ 'ip_address': '127.0.0.1', 'subnet_id': '1234' }], 'security_groups': ['5566']}}) data_set.assert_called_once_with('internal_ports', '[{"id": "111222"}]') def test_build_nics_do_not_create_internal_port(self): t, stack, server = self._return_template_stack_and_rsrc_defn( 'test', tmpl_server_with_network_id) self.port_create.return_value = {'port': {'id': '111222'}} data_set = self.patchobject(resource.Resource, 'data_set') network = [{'network': '4321'}] server._build_nics(network) self.assertFalse(self.port_create.called) self.assertFalse(data_set.called) def test_prepare_port_kwargs_with_extras(self): tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Nova::Server properties: flavor: m1.small image: F17-x86_64-gold networks: - network: 4321 subnet: 1234 fixed_ip: 127.0.0.1 port_extra_properties: mac_address: 00:00:00:00:00:00 allowed_address_pairs: - ip_address: 127.0.0.1 mac_address: None - mac_address: 00:00:00:00:00:00 """ t, stack, server = self._return_template_stack_and_rsrc_defn('test', tmpl) network = {'network': '4321', 'subnet': '1234', 'fixed_ip': '127.0.0.1', 'port_extra_properties': { 'value_specs': {}, 'mac_address': '00:00:00:00:00:00', 'allowed_address_pairs': [ {'ip_address': '127.0.0.1', 'mac_address': None}, {'mac_address': '00:00:00:00:00:00'} ] }} sec_uuids = ['8d94c72093284da88caaef5e985d96f7'] self.patchobject(neutron.NeutronClientPlugin, 'get_secgroup_uuids', return_value=sec_uuids) kwargs = 
server._prepare_internal_port_kwargs(
            network, security_groups=['test_sec'])
        self.assertEqual({'network_id': '4321',
                          'security_groups': sec_uuids,
                          'fixed_ips': [
                              {'ip_address': '127.0.0.1',
                               'subnet_id': '1234'}
                          ],
                          'mac_address': '00:00:00:00:00:00',
                          'allowed_address_pairs': [
                              {'ip_address': '127.0.0.1'},
                              {'mac_address': '00:00:00:00:00:00'}]},
                         kwargs)

    def test_build_nics_create_internal_port_without_net(self):
        tmpl = """
        heat_template_version: 2015-10-15
        resources:
          server:
            type: OS::Nova::Server
            properties:
              flavor: m1.small
              image: F17-x86_64-gold
              networks:
                - subnet: 1234
        """
        t, stack, server = self._return_template_stack_and_rsrc_defn('test',
                                                                     tmpl)
        self.patchobject(neutron.NeutronClientPlugin,
                         'network_id_from_subnet_id',
                         return_value='4321')
        net = {'subnet': '1234'}
        net_id = server._get_network_id(net)
        self.assertEqual('4321', net_id)
        self.assertEqual({'subnet': '1234'}, net)

        self.port_create.return_value = {'port': {'id': '111222'}}
        data_set = self.patchobject(resource.Resource, 'data_set')

        network = [{'subnet': '1234'}]
        server._build_nics(network)

        self.port_create.assert_called_once_with(
            {'port': {'name': 'server-port-0',
                      'network_id': '4321',
                      'fixed_ips': [{
                          'subnet_id': '1234'
                      }]}})
        data_set.assert_called_once_with('internal_ports',
                                         '[{"id": "111222"}]')

    def test_calculate_networks_internal_ports(self):
        tmpl = """
        heat_template_version: 2015-10-15
        resources:
          server:
            type: OS::Nova::Server
            properties:
              flavor: m1.small
              image: F17-x86_64-gold
              networks:
                - network: 4321
                  subnet: 1234
                  fixed_ip: 127.0.0.1
                - port: 3344
        """
        t, stack, server = self._return_template_stack_and_rsrc_defn('test',
                                                                     tmpl)
        data_mock = self.patchobject(server, '_data_get_ports')
        data_mock.side_effect = [[{"id": "1122"}], [{"id": "1122"}], []]
        self.port_create.return_value = {'port': {'id': '7788'}}
        data_set = self.patchobject(resource.Resource, 'data_set')

        old_net = [self.create_old_net(net='4321', subnet='1234',
                                       ip='127.0.0.1'),
                   self.create_old_net(port='3344')]
        new_net = [{'port': '3344'},
                   {'port': '5566'},
                   {'network': '4321',
                    'subnet': '5678',
                    'fixed_ip': '10.0.0.1'}
                   ]
        interfaces = [create_fake_iface(port='1122', net='4321',
                                        ip='127.0.0.1', subnet='1234'),
                      create_fake_iface(port='3344', net='4321',
                                        ip='10.0.0.2', subnet='subnet')]

        server.calculate_networks(old_net, new_net, interfaces)

        # only port 1122 can be deleted; port 3344 is an external port,
        # so it can't be deleted
        self.port_delete.assert_called_once_with('1122')
        self.port_create.assert_called_once_with(
            {'port': {'name': 'server-port-1', 'network_id': '4321',
                      'fixed_ips': [{'subnet_id': '5678',
                                     'ip_address': '10.0.0.1'}]}})
        self.assertEqual(2, data_set.call_count)
        data_set.assert_has_calls((
            mock.call('internal_ports', '[]'),
            mock.call('internal_ports', '[{"id": "7788"}]')))

    def test_calculate_networks_internal_ports_with_fipa(self):
        tmpl = """
        heat_template_version: 2015-10-15
        resources:
          server:
            type: OS::Nova::Server
            properties:
              flavor: m1.small
              image: F17-x86_64-gold
              networks:
                - network: 4321
                  subnet: 1234
                  fixed_ip: 127.0.0.1
                  floating_ip: 1199
                - network: 8765
                  subnet: 5678
                  fixed_ip: 127.0.0.2
                  floating_ip: 9911
        """
        t, stack, server = self._return_template_stack_and_rsrc_defn('test',
                                                                     tmpl)
        # NOTE(prazumovsky): this method updates old_net and new_net with
        # the interfaces' ports. There is no value in re-checking that
        # here, so we can pass the ports directly as part of the
        # calculate_networks args.
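        # Editorial summary of what this test drives: during the update
        # both existing floating IPs are first disassociated (an update
        # with {'floatingip': {'port_id': None}}), and a NotFound raised
        # by the first disassociate is tolerated. Each floating IP is
        # then associated with its new port and fixed IP, which is the
        # call order asserted on `fipa` below.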
self.patchobject(server, 'update_networks_matching_iface_port') server._data = {'internal_ports': '[{"id": "1122"}]'} self.port_create.return_value = {'port': {'id': '5566'}} self.patchobject(resource.Resource, 'data_set') self.resolve.side_effect = ['0912', '9021'] fipa = self.patchobject(neutronclient.Client, 'update_floatingip', side_effect=[neutronclient.exceptions.NotFound, '9911', '11910', '1199']) old_net = [ self.create_old_net(net='4321', subnet='1234', ip='127.0.0.1', port='1122', floating_ip='1199'), self.create_old_net(net='8765', subnet='5678', ip='127.0.0.2', port='3344', floating_ip='9911') ] interfaces = [create_fake_iface(port='1122', net='4321', ip='127.0.0.1', subnet='1234'), create_fake_iface(port='3344', net='8765', ip='127.0.0.2', subnet='5678')] new_net = [{'network': '8765', 'subnet': '5678', 'fixed_ip': '127.0.0.2', 'port': '3344', 'floating_ip': '11910'}, {'network': '0912', 'subnet': '9021', 'fixed_ip': '127.0.0.1', 'floating_ip': '1199', 'port': '1122'}] server.calculate_networks(old_net, new_net, interfaces) fipa.assert_has_calls(( mock.call('1199', {'floatingip': {'port_id': None}}), mock.call('9911', {'floatingip': {'port_id': None}}), mock.call('11910', {'floatingip': {'port_id': '3344', 'fixed_ip_address': '127.0.0.2'}}), mock.call('1199', {'floatingip': {'port_id': '1122', 'fixed_ip_address': '127.0.0.1'}}) )) def test_delete_fipa_with_exception_not_found_neutron(self): tmpl = """ heat_template_version: 2015-10-15 resources: server: type: OS::Nova::Server properties: flavor: m1.small image: F17-x86_64-gold networks: - network: 4321 subnet: 1234 fixed_ip: 127.0.0.1 floating_ip: 1199 - network: 8765 subnet: 5678 fixed_ip: 127.0.0.2 floating_ip: 9911 """ t, stack, server = self._return_template_stack_and_rsrc_defn('test', tmpl) delete_flip = mock.MagicMock( side_effect=[neutron.exceptions.NotFound(404)]) server.client('neutron').update_floatingip = delete_flip self.assertIsNone(server._floating_ip_disassociate('flip123')) self.assertEqual(1, delete_flip.call_count) def test_delete_internal_ports(self): t, stack, server = self._return_template_stack_and_rsrc_defn( 'test', tmpl_server_with_network_id) get_data = [{'internal_ports': '[{"id": "1122"}, {"id": "3344"}, ' '{"id": "5566"}]'}, {'internal_ports': '[{"id": "1122"}, {"id": "3344"}, ' '{"id": "5566"}]'}, {'internal_ports': '[{"id": "3344"}, ' '{"id": "5566"}]'}, {'internal_ports': '[{"id": "5566"}]'}] self.patchobject(server, 'data', side_effect=get_data) data_set = self.patchobject(server, 'data_set') data_delete = self.patchobject(server, 'data_delete') server._delete_internal_ports() self.assertEqual(3, self.port_delete.call_count) self.assertEqual(('1122',), self.port_delete.call_args_list[0][0]) self.assertEqual(('3344',), self.port_delete.call_args_list[1][0]) self.assertEqual(('5566',), self.port_delete.call_args_list[2][0]) self.assertEqual(3, data_set.call_count) data_set.assert_has_calls(( mock.call('internal_ports', '[{"id": "3344"}, {"id": "5566"}]'), mock.call('internal_ports', '[{"id": "5566"}]'), mock.call('internal_ports', '[]'))) data_delete.assert_called_once_with('internal_ports') def test_get_data_internal_ports(self): t, stack, server = self._return_template_stack_and_rsrc_defn( 'test', tmpl_server_with_network_id) server._data = {"internal_ports": '[{"id": "1122"}]'} data = server._data_get_ports() self.assertEqual([{"id": "1122"}], data) server._data = {"internal_ports": ''} data = server._data_get_ports() self.assertEqual([], data) def test_store_external_ports(self): t, stack, 
server = self._return_template_stack_and_rsrc_defn(
            'test', tmpl_server_with_network_id)

        class Fake(object):
            def interface_list(self):
                return [iface('1122'),
                        iface('1122'),
                        iface('2233'),
                        iface('3344')]

        server.client = mock.Mock()
        server.client().servers.get.return_value = Fake()
        server.client_plugin = mock.Mock()
        server.client_plugin().has_extension.return_value = True
        server._data = {"internal_ports": '[{"id": "1122"}]',
                        "external_ports": '[{"id": "3344"},{"id": "5566"}]'}
        iface = collections.namedtuple('iface', ['port_id'])
        update_data = self.patchobject(server, '_data_update_ports')

        server.store_external_ports()

        self.assertEqual(2, update_data.call_count)
        self.assertEqual(('5566', 'delete',),
                         update_data.call_args_list[0][0])
        self.assertEqual({'port_type': 'external_ports'},
                         update_data.call_args_list[0][1])
        self.assertEqual(('2233', 'add',),
                         update_data.call_args_list[1][0])
        self.assertEqual({'port_type': 'external_ports'},
                         update_data.call_args_list[1][1])

    def test_prepare_ports_for_replace_detach_failed(self):
        t, stack, server = self._return_template_stack_and_rsrc_defn(
            'test', tmpl_server_with_network_id)

        class Fake(object):
            def interface_list(self):
                return [iface(1122)]

        iface = collections.namedtuple('iface', ['port_id'])
        server.resource_id = 'ser-11'
        port_ids = [{'id': 1122}]
        server._data = {"internal_ports": jsonutils.dumps(port_ids)}
        self.patchobject(nova.NovaClientPlugin, 'interface_detach')
        self.patchobject(nova.NovaClientPlugin, 'fetch_server')
        self.patchobject(nova.NovaClientPlugin.check_interface_detach.retry,
                         'sleep')
        nova.NovaClientPlugin.fetch_server.side_effect = [Fake()] * 10
        exc = self.assertRaises(exception.InterfaceDetachFailed,
                                server.prepare_for_replace)
        self.assertIn('Failed to detach interface (1122) from server '
                      '(ser-11)', six.text_type(exc))

    def test_prepare_ports_for_replace(self):
        t, stack, server = self._return_template_stack_and_rsrc_defn(
            'test', tmpl_server_with_network_id)
        server.resource_id = 'test_server'
        port_ids = [{'id': '1122'}, {'id': '3344'}]
        external_port_ids = [{'id': '5566'}]
        server._data = {"internal_ports": jsonutils.dumps(port_ids),
                        "external_ports": jsonutils.dumps(external_port_ids)}
        nova_server = self.fc.servers.list()[1]
        server.client = mock.Mock()
        server.client().servers.get.return_value = nova_server
        self.patchobject(nova.NovaClientPlugin, 'interface_detach',
                         return_value=True)
        self.patchobject(nova.NovaClientPlugin, 'check_interface_detach',
                         return_value=True)

        server.prepare_for_replace()

        # check that the ports were detached from the server
        nova.NovaClientPlugin.interface_detach.assert_has_calls([
            mock.call('test_server', '1122'),
            mock.call('test_server', '3344'),
            mock.call('test_server', '5566')])

    def test_prepare_ports_for_replace_not_found(self):
        t, stack, server = self._return_template_stack_and_rsrc_defn(
            'test', tmpl_server_with_network_id)
        server.resource_id = 'test_server'
        port_ids = [{'id': '1122'}, {'id': '3344'}]
        external_port_ids = [{'id': '5566'}]
        server._data = {"internal_ports": jsonutils.dumps(port_ids),
                        "external_ports": jsonutils.dumps(external_port_ids)}
        self.patchobject(nova.NovaClientPlugin, 'fetch_server',
                         side_effect=nova_exceptions.NotFound(404))
        check_detach = self.patchobject(nova.NovaClientPlugin,
                                        'check_interface_detach')
        nova_server = self.fc.servers.list()[1]
        nova_server.status = 'DELETED'
        self.server_get.return_value = nova_server

        server.prepare_for_replace()

        check_detach.assert_not_called()
        self.assertEqual(0, self.port_delete.call_count)

    def test_prepare_ports_for_replace_error_state(self):
        t, stack, server = self._return_template_stack_and_rsrc_defn(
            'test', tmpl_server_with_network_id)
        server.resource_id = 'test_server'
        port_ids = [{'id': '1122'}, {'id': '3344'}]
        external_port_ids = [{'id': '5566'}]
        server._data = {"internal_ports": jsonutils.dumps(port_ids),
                        "external_ports": jsonutils.dumps(external_port_ids)}
        nova_server = self.fc.servers.list()[1]
        nova_server.status = 'ERROR'
        self.server_get.return_value = nova_server
        self.patchobject(nova.NovaClientPlugin, 'interface_detach',
                         return_value=True)
        self.patchobject(nova.NovaClientPlugin, 'check_interface_detach',
                         return_value=True)
        data_set = self.patchobject(server, 'data_set')
        data_delete = self.patchobject(server, 'data_delete')

        server.prepare_for_replace()

        # check that the internal ports were deleted
        self.assertEqual(2, self.port_delete.call_count)
        self.assertEqual(('1122',), self.port_delete.call_args_list[0][0])
        self.assertEqual(('3344',), self.port_delete.call_args_list[1][0])
        data_set.assert_has_calls((
            mock.call('internal_ports', '[{"id": "3344"}]'),
            mock.call('internal_ports', '[{"id": "1122"}]')))
        data_delete.assert_called_once_with('internal_ports')

    def test_prepare_ports_for_replace_not_created(self):
        t, stack, server = self._return_template_stack_and_rsrc_defn(
            'test', tmpl_server_with_network_id)
        prepare_mock = self.patchobject(server, 'prepare_ports_for_replace')
        server.prepare_for_replace()
        self.assertIsNone(server.resource_id)
        self.assertEqual(0, prepare_mock.call_count)

    @mock.patch.object(server_network_mixin.ServerNetworkMixin,
                       'store_external_ports')
    def test_restore_ports_after_rollback(self, store_ports):
        t, stack, server = self._return_template_stack_and_rsrc_defn(
            'test', tmpl_server_with_network_id)
        server.resource_id = 'existing_server'
        port_ids = [{'id': 1122}, {'id': 3344}]
        external_port_ids = [{'id': 5566}]
        server._data = {"internal_ports": jsonutils.dumps(port_ids),
                        "external_ports": jsonutils.dumps(external_port_ids)}
        self.patchobject(nova.NovaClientPlugin, '_check_active')
        nova.NovaClientPlugin._check_active.side_effect = [False, True]

        # add data for the old server in the backup stack
        old_server = mock.Mock()
        old_server.resource_id = 'old_server'
        stack._backup_stack = mock.Mock()
        stack._backup_stack().resources.get.return_value = old_server
        old_server._data_get_ports.side_effect = [port_ids,
                                                  external_port_ids]
        self.patchobject(nova.NovaClientPlugin, 'interface_detach',
                         return_value=True)
        self.patchobject(nova.NovaClientPlugin, 'check_interface_detach',
                         return_value=True)
        self.patchobject(nova.NovaClientPlugin, 'interface_attach')
        self.patchobject(nova.NovaClientPlugin, 'check_interface_attach',
                         return_value=True)

        server.restore_prev_rsrc()

        self.assertEqual(2, nova.NovaClientPlugin._check_active.call_count)
        # check that the ports were detached from the new server
        nova.NovaClientPlugin.interface_detach.assert_has_calls([
            mock.call('existing_server', 1122),
            mock.call('existing_server', 3344),
            mock.call('existing_server', 5566)])
        # check that the ports were attached to the old server
        nova.NovaClientPlugin.interface_attach.assert_has_calls([
            mock.call('old_server', 1122),
            mock.call('old_server', 3344),
            mock.call('old_server', 5566)])

    @mock.patch.object(server_network_mixin.ServerNetworkMixin,
                       'store_external_ports')
    def test_restore_ports_after_rollback_attach_failed(self, store_ports):
        t, stack, server = self._return_template_stack_and_rsrc_defn(
            'test', tmpl_server_with_network_id)
        server.resource_id = 'existing_server'
        port_ids = [{'id': 1122}, {'id': 3344}]
        server._data = {"internal_ports": jsonutils.dumps(port_ids)}
        self.patchobject(nova.NovaClientPlugin, '_check_active')
        nova.NovaClientPlugin._check_active.return_value = True

        # add data for the old server in the backup stack
        old_server = mock.Mock()
        old_server.resource_id = 'old_server'
        stack._backup_stack = mock.Mock()
        stack._backup_stack().resources.get.return_value = old_server
        old_server._data_get_ports.side_effect = [port_ids, []]

        class Fake(object):
            def interface_list(self):
                return [iface(1122)]

        iface = collections.namedtuple('iface', ['port_id'])
        self.patchobject(nova.NovaClientPlugin, 'interface_detach')
        self.patchobject(nova.NovaClientPlugin, 'check_interface_detach',
                         return_value=True)
        self.patchobject(nova.NovaClientPlugin, 'interface_attach')
        self.patchobject(nova.NovaClientPlugin, 'fetch_server')
        self.patchobject(nova.NovaClientPlugin.check_interface_attach.retry,
                         'sleep')
        # fetch_server needs to be mocked 11 times: once for port 1122
        # and ten times for port 3344
        nova.NovaClientPlugin.fetch_server.side_effect = [Fake()] * 11
        exc = self.assertRaises(exception.InterfaceAttachFailed,
                                server.restore_prev_rsrc)
        self.assertIn('Failed to attach interface (3344) to server '
                      '(old_server)', six.text_type(exc))

    @mock.patch.object(server_network_mixin.ServerNetworkMixin,
                       'store_external_ports')
    def test_restore_ports_after_rollback_convergence(self, store_ports):
        t = template_format.parse(tmpl_server_with_network_id)
        stack = utils.parse_stack(t)
        stack.store()

        self.patchobject(nova.NovaClientPlugin, '_check_active')
        nova.NovaClientPlugin._check_active.return_value = True

        # mock the resource from the previous template and store it in
        # the db
        prev_rsrc = stack['server']
        prev_rsrc.state_set(prev_rsrc.UPDATE, prev_rsrc.COMPLETE)
        prev_rsrc.resource_id = 'prev_rsrc'

        # mock the resource from the existing template, store it in the
        # db, and set its _data
        resource_defns = stack.t.resource_definitions(stack)
        existing_rsrc = servers.Server('server', resource_defns['server'],
                                       stack)
        existing_rsrc.stack = stack
        existing_rsrc.current_template_id = stack.t.id
        existing_rsrc.resource_id = 'existing_rsrc'
        existing_rsrc.state_set(existing_rsrc.UPDATE,
                                existing_rsrc.COMPLETE)
        port_ids = [{'id': 1122}, {'id': 3344}]
        external_port_ids = [{'id': 5566}]
        existing_rsrc.data_set("internal_ports", jsonutils.dumps(port_ids))
        existing_rsrc.data_set("external_ports",
                               jsonutils.dumps(external_port_ids))

        # the previous resource was replaced by the existing resource
        prev_rsrc.replaced_by = existing_rsrc.id

        self.patchobject(nova.NovaClientPlugin, 'interface_detach',
                         return_value=True)
        self.patchobject(nova.NovaClientPlugin, 'check_interface_detach',
                         return_value=True)
        self.patchobject(nova.NovaClientPlugin, 'interface_attach')
        self.patchobject(nova.NovaClientPlugin, 'check_interface_attach',
                         return_value=True)

        prev_rsrc.restore_prev_rsrc(convergence=True)

        # check that the ports were detached from the existing server
        nova.NovaClientPlugin.interface_detach.assert_has_calls([
            mock.call('existing_rsrc', 1122),
            mock.call('existing_rsrc', 3344),
            mock.call('existing_rsrc', 5566)])
        # check that the ports were attached to the old server
        nova.NovaClientPlugin.interface_attach.assert_has_calls([
            mock.call('prev_rsrc', 1122),
            mock.call('prev_rsrc', 3344),
            mock.call('prev_rsrc', 5566)])

    def test_store_external_ports_os_interface_not_installed(self):
        t, stack, server = self._return_template_stack_and_rsrc_defn(
            'test', tmpl_server_with_network_id)

        class Fake(object):
            def interface_list(self):
                return [iface('1122'),
                        iface('1122'),
                        iface('2233'),
                        iface('3344')]

        server.client = mock.Mock()
        server.client().servers.get.return_value = Fake()
        server.client_plugin =
mock.Mock() server.client_plugin().has_extension.return_value = False server._data = {"internal_ports": '[{"id": "1122"}]', "external_ports": '[{"id": "3344"},{"id": "5566"}]'} iface = collections.namedtuple('iface', ['port_id']) update_data = self.patchobject(server, '_data_update_ports') server.store_external_ports() self.assertEqual(0, update_data.call_count) heat-10.0.2/heat/tests/openstack/nova/test_flavor.py0000666000175000017500000002065013343562340022464 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from heat.engine.clients.os import nova as novac from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils flavor_template = { 'heat_template_version': '2013-05-23', 'resources': { 'my_flavor': { 'type': 'OS::Nova::Flavor', 'properties': { 'ram': 1024, 'vcpus': 2, 'disk': 20, 'swap': 2, 'rxtx_factor': 1.0, 'ephemeral': 0, 'extra_specs': {"foo": "bar"}, 'tenants': [] } } } } class NovaFlavorTest(common.HeatTestCase): def setUp(self): super(NovaFlavorTest, self).setUp() self.patchobject(novac.NovaClientPlugin, 'has_extension', return_value=True) self.ctx = utils.dummy_context() def create_flavor(self, with_name_id=False, is_public=True): if with_name_id: props = flavor_template['resources']['my_flavor']['properties'] props['name'] = 'test_flavor' props['flavorid'] = '1234' if not is_public: props = flavor_template['resources']['my_flavor']['properties'] props['is_public'] = False props['tenants'] = ["foo", "bar"] self.stack = stack.Stack( self.ctx, 'nova_flavor_test_stack', template.Template(flavor_template) ) self.my_flavor = self.stack['my_flavor'] nova = mock.MagicMock() self.novaclient = mock.MagicMock() self.my_flavor.client = nova nova.return_value = self.novaclient self.flavors = self.novaclient.flavors def test_flavor_handle_create_no_id_name(self): self.create_flavor() kwargs = { 'vcpus': 2, 'disk': 20, 'swap': 2, 'flavorid': 'auto', 'is_public': True, 'rxtx_factor': 1.0, 'ram': 1024, 'ephemeral': 0, 'name': 'm1.xxx' } self.patchobject(self.my_flavor, 'physical_resource_name', return_value='m1.xxx') value = mock.MagicMock() flavor_id = '927202df-1afb-497f-8368-9c2d2f26e5db' value.id = flavor_id value.is_public = True value.get_keys.return_value = {'k': 'v'} self.flavors.create.return_value = value self.flavors.get.return_value = value self.my_flavor.handle_create() self.flavors.create.assert_called_once_with(**kwargs) value.set_keys.assert_called_once_with({"foo": "bar"}) self.assertEqual(flavor_id, self.my_flavor.resource_id) self.assertTrue(self.my_flavor.FnGetAtt('is_public')) self.assertEqual({'k': 'v'}, self.my_flavor.FnGetAtt('extra_specs')) def test_flavor_handle_create_with_id_name(self): self.create_flavor(with_name_id=True) kwargs = { 'vcpus': 2, 'disk': 20, 'swap': 2, 'flavorid': '1234', 'is_public': True, 'rxtx_factor': 1.0, 'ram': 1024, 'ephemeral': 0, 'name': 'test_flavor' } self.patchobject(self.my_flavor, 'physical_resource_name', return_value='m1.xxx') value = 
mock.MagicMock() flavor_id = '927202df-1afb-497f-8368-9c2d2f26e5db' value.id = flavor_id value.is_public = True self.flavors.create.return_value = value self.flavors.get.return_value = value self.my_flavor.handle_create() self.flavors.create.assert_called_once_with(**kwargs) value.set_keys.assert_called_once_with({"foo": "bar"}) self.assertEqual(flavor_id, self.my_flavor.resource_id) self.assertTrue(self.my_flavor.FnGetAtt('is_public')) def test_private_flavor_handle_create(self): self.create_flavor(is_public=False) value = mock.MagicMock() flavor_id = '927202df-1afb-497f-8368-9c2d2f26e5db' value.id = flavor_id value.is_public = False self.flavors.create.return_value = value self.flavors.get.return_value = value self.my_flavor.handle_create() value.set_keys.assert_called_once_with({"foo": "bar"}) self.assertEqual(flavor_id, self.my_flavor.resource_id) self.assertFalse(self.my_flavor.FnGetAtt('is_public')) client_test = self.my_flavor.client().flavor_access.add_tenant_access test_tenants = [mock.call(value, 'foo'), mock.call(value, 'bar')] self.assertEqual(test_tenants, client_test.call_args_list) def test_flavor_handle_update_keys(self): self.create_flavor() value = mock.MagicMock() self.flavors.get.return_value = value value.get_keys.return_value = {} new_keys = {"new_foo": "new_bar"} prop_diff = {'extra_specs': new_keys} self.my_flavor.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) value.unset_keys.assert_called_once_with({}) value.set_keys.assert_called_once_with(new_keys) def test_flavor_handle_update_add_tenants(self): self.create_flavor(is_public=False) value = mock.MagicMock() new_tenants = ["new_foo", "new_bar"] prop_diff = {'tenants': new_tenants} self.flavors.get.return_value = value self.my_flavor.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) test_tenants_add = [mock.call(value, 'new_foo'), mock.call(value, 'new_bar')] test_add = self.my_flavor.client().flavor_access.add_tenant_access self.assertItemsEqual(test_tenants_add, test_add.call_args_list) def test_flavor_handle_update_remove_tenants(self): self.create_flavor(is_public=False) value = mock.MagicMock() new_tenants = [] prop_diff = {'tenants': new_tenants} self.flavors.get.return_value = value itemFoo = mock.MagicMock() itemFoo.tenant_id = 'foo' itemBar = mock.MagicMock() itemBar.tenant_id = 'bar' self.my_flavor.client().flavor_access.list.return_value = [itemFoo, itemBar] self.my_flavor.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) test_tenants_remove = [mock.call(value, 'foo'), mock.call(value, 'bar')] test_rem = self.my_flavor.client().flavor_access.remove_tenant_access self.assertItemsEqual(test_tenants_remove, test_rem.call_args_list) def test_flavor_show_resource(self): self.create_flavor() self.my_flavor.resource_id = 'flavor_test_id' self.my_flavor.client = mock.MagicMock() flavors = mock.MagicMock() flavor = mock.MagicMock() flavor.to_dict.return_value = {'flavor': 'info'} flavors.get.return_value = flavor self.my_flavor.client().flavors = flavors self.assertEqual({'flavor': 'info'}, self.my_flavor.FnGetAtt('show')) flavors.get.assert_called_once_with('flavor_test_id') def test_flavor_get_live_state(self): self.create_flavor() value = mock.MagicMock() value.get_keys.return_value = {'key': 'value'} value.to_dict.return_value = {'ram': 1024, 'disk': 0, 'vcpus': 1, 'rxtx_factor': 1.0, 'OS-FLV-EXT-DATA:ephemeral': 0, 'os-flavor-access:is_public': True} self.flavors.get.return_value = value self.my_flavor.resource_id = '1234' reality = 
self.my_flavor.get_live_state(self.my_flavor.properties) self.assertEqual({'extra_specs': {'key': 'value'}}, reality) heat-10.0.2/heat/tests/openstack/magnum/0000775000175000017500000000000013343562672020106 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/magnum/__init__.py0000666000175000017500000000000013343562340022177 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/magnum/test_cluster_template.py0000666000175000017500000001531113343562340025066 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from neutronclient.neutron import v2_0 as neutronV20 import six from heat.common import exception from heat.common import template_format from heat.engine import resource from heat.engine.resources.openstack.magnum import cluster_template from heat.engine import scheduler from heat.engine import template from heat.tests import common from heat.tests import utils RESOURCE_TYPE = 'OS::Magnum::ClusterTemplate' class TestMagnumClusterTemplate(common.HeatTestCase): magnum_template = ''' heat_template_version: ocata resources: test_cluster_template: type: OS::Magnum::ClusterTemplate properties: name: test_cluster_template image: fedora-21-atomic-2 flavor: m1.small master_flavor: m1.medium keypair: heat_key external_network: 0244b54d-ae1f-44f0-a24a-442760f1d681 fixed_network: 0f59a3dd-fac1-4d03-b41a-d4115fbffa89 fixed_subnet: 27a8c89c-0d28-4946-8c78-82cfec1d670a dns_nameserver: 8.8.8.8 docker_volume_size: 5 docker_storage_driver: devicemapper coe: 'mesos' network_driver: 'flannel' http_proxy: 'http://proxy.com:123' https_proxy: 'https://proxy.com:123' no_proxy: '192.168.0.1' labels: {'flannel_cidr': ['10.101.0.0/16', '10.102.0.0/16']} tls_disabled: True public: True registry_enabled: True volume_driver: rexray server_type: vm master_lb_enabled: True floating_ip_enabled: True ''' expected = { 'name': 'test_cluster_template', 'image_id': 'fedora-21-atomic-2', 'flavor_id': 'm1.small', 'master_flavor_id': 'm1.medium', 'keypair_id': 'heat_key', 'external_network_id': 'id_for_net_or_sub', 'fixed_network': 'id_for_net_or_sub', 'fixed_subnet': 'id_for_net_or_sub', 'dns_nameserver': '8.8.8.8', 'docker_volume_size': 5, 'docker_storage_driver': 'devicemapper', 'coe': 'mesos', 'network_driver': 'flannel', 'http_proxy': 'http://proxy.com:123', 'https_proxy': 'https://proxy.com:123', 'no_proxy': '192.168.0.1', 'labels': {'flannel_cidr': ['10.101.0.0/16', '10.102.0.0/16']}, 'tls_disabled': True, 'public': True, 'registry_enabled': True, 'volume_driver': 'rexray', 'server_type': 'vm', 'master_lb_enabled': True, 'floating_ip_enabled': True } def setUp(self): super(TestMagnumClusterTemplate, self).setUp() resource._register_class(RESOURCE_TYPE, cluster_template.ClusterTemplate) self.t = template_format.parse(self.magnum_template) self.stack = utils.parse_stack(self.t) resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns['test_cluster_template'] self.client = mock.Mock() 
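        # The mocked client stands in for the python-magnumclient handle,
        # so cluster_templates.create()/get() can be asserted on without
        # a real Magnum endpoint; the patched `client` accessor below
        # wires it into the resource plugin.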
self.patchobject(cluster_template.ClusterTemplate, 'client', return_value=self.client) self.find_mock = self.patchobject(neutronV20, 'find_resourceid_by_name_or_id') self.find_mock.return_value = 'id_for_net_or_sub' self.stub_FlavorConstraint_validate() self.stub_KeypairConstraint_validate() self.stub_ImageConstraint_validate() self.stub_NetworkConstraint_validate() self.stub_SubnetConstraint_validate() def _create_resource(self, name, snippet, stack): self.resource_id = '12345' self.test_cluster_template = self.stack['test_cluster_template'] value = mock.MagicMock(uuid=self.resource_id) self.client.cluster_templates.create.return_value = value bm = cluster_template.ClusterTemplate(name, snippet, stack) scheduler.TaskRunner(bm.create)() return bm def test_cluster_template_create(self): bm = self._create_resource('bm', self.rsrc_defn, self.stack) self.assertEqual(self.resource_id, bm.resource_id) self.assertEqual((bm.CREATE, bm.COMPLETE), bm.state) self.client.cluster_templates.create.assert_called_once_with( **self.expected) def test_validate_invalid_volume_driver(self): props = self.t['resources']['test_cluster_template']['properties'] props['volume_driver'] = 'cinder' stack = utils.parse_stack(self.t) msg = ("Volume driver type cinder is not supported by COE:mesos, " "expecting a ['rexray'] volume driver.") ex = self.assertRaises(exception.StackValidationFailed, stack['test_cluster_template'].validate) self.assertEqual(msg, six.text_type(ex)) def _cluster_template_update(self, update_status='UPDATE_COMPLETE', exc_msg=None): ct = self._create_resource('ct', self.rsrc_defn, self.stack) status = mock.MagicMock(status=update_status) self.client.cluster_templates.get.return_value = status t = template_format.parse(self.magnum_template) new_t = copy.deepcopy(t) new_t['resources'][self.expected['name']]['properties'][ cluster_template.ClusterTemplate.PUBLIC] = False rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_ct = rsrc_defns[self.expected['name']] if update_status == 'UPDATE_COMPLETE': scheduler.TaskRunner(ct.update, new_ct)() self.assertEqual((ct.UPDATE, ct.COMPLETE), ct.state) else: exc = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(ct.update, new_ct)) self.assertIn(exc_msg, six.text_type(exc)) def test_cluster_update(self): self._cluster_template_update() def test_cluster_update_failed(self): self._cluster_template_update('UPDATE_FAILED', 'Failed to update Cluster') def test_cluster_update_unknown_status(self): self._cluster_template_update('UPDATE_BAR', 'Unknown status updating Cluster') heat-10.0.2/heat/tests/openstack/magnum/test_bay.py0000666000175000017500000001444113343562340022270 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
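# NOTE (editorial): the Bay tests below drive Heat's polling contract for
# asynchronous resources by varying the status returned from the mocked
# ``bays.get()``. A minimal sketch of that contract, using hypothetical
# names rather than the actual plugin code:


def _check_create_complete_sketch(client, bay_id):
    """Illustrative only: how a CREATE poll maps status to an outcome."""
    status = client.bays.get(bay_id).status
    if status == 'CREATE_IN_PROGRESS':
        return False  # not finished; Heat schedules another poll
    if status == 'CREATE_COMPLETE':
        return True  # the resource reaches (CREATE, COMPLETE)
    if status == 'CREATE_FAILED':
        raise RuntimeError('Failed to create Bay')  # -> ResourceFailure
    raise RuntimeError('Unknown status creating Bay: %s' % status)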
import copy import mock from oslo_config import cfg import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import magnum as mc from heat.engine import resource from heat.engine.resources.openstack.magnum import bay from heat.engine import scheduler from heat.engine import template from heat.tests import common from heat.tests import utils magnum_template = ''' heat_template_version: 2015-04-30 resources: test_bay: type: OS::Magnum::Bay properties: name: test_bay baymodel: 123456 node_count: 5 master_count: 1 discovery_url: https://discovery.etcd.io bay_create_timeout: 15 ''' RESOURCE_TYPE = 'OS::Magnum::Bay' class TestMagnumBay(common.HeatTestCase): def setUp(self): super(TestMagnumBay, self).setUp() resource._register_class(RESOURCE_TYPE, bay.Bay) t = template_format.parse(magnum_template) self.stack = utils.parse_stack(t) resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns['test_bay'] self.client = mock.Mock() self.patchobject(bay.Bay, 'client', return_value=self.client) self.patchobject(mc.MagnumClientPlugin, 'get_baymodel') def _create_resource(self, name, snippet, stack, stat='CREATE_COMPLETE'): self.resource_id = '12345' value = mock.MagicMock(uuid=self.resource_id) self.client.bays.create.return_value = value get_rv = mock.MagicMock(status=stat) self.client.bays.get.return_value = get_rv b = bay.Bay(name, snippet, stack) return b def test_bay_create(self): b = self._create_resource('bay', self.rsrc_defn, self.stack) scheduler.TaskRunner(b.create)() self.assertEqual(self.resource_id, b.resource_id) self.assertEqual((b.CREATE, b.COMPLETE), b.state) def test_bay_create_failed(self): cfg.CONF.set_override('action_retry_limit', 0) b = self._create_resource('bay', self.rsrc_defn, self.stack, stat='CREATE_FAILED') exc = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(b.create)) self.assertIn("Failed to create Bay", six.text_type(exc)) def test_bay_create_unknown_status(self): b = self._create_resource('bay', self.rsrc_defn, self.stack, stat='CREATE_FOO') exc = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(b.create)) self.assertIn("Unknown status creating Bay", six.text_type(exc)) def test_bay_update(self): b = self._create_resource('bay', self.rsrc_defn, self.stack) scheduler.TaskRunner(b.create)() status = mock.MagicMock(status='UPDATE_COMPLETE') self.client.bays.get.return_value = status t = template_format.parse(magnum_template) new_t = copy.deepcopy(t) new_t['resources']['test_bay']['properties']['node_count'] = 10 rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_bm = rsrc_defns['test_bay'] scheduler.TaskRunner(b.update, new_bm)() self.assertEqual((b.UPDATE, b.COMPLETE), b.state) def test_bay_update_failed(self): b = self._create_resource('bay', self.rsrc_defn, self.stack) scheduler.TaskRunner(b.create)() status = mock.MagicMock(status='UPDATE_FAILED') self.client.bays.get.return_value = status t = template_format.parse(magnum_template) new_t = copy.deepcopy(t) new_t['resources']['test_bay']['properties']['node_count'] = 10 rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_bm = rsrc_defns['test_bay'] exc = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(b.update, new_bm)) self.assertIn("Failed to update Bay", six.text_type(exc)) def test_bay_update_unknown_status(self): b = self._create_resource('bay', self.rsrc_defn, self.stack) scheduler.TaskRunner(b.create)() status = 
mock.MagicMock(status='UPDATE_BAR') self.client.bays.get.return_value = status t = template_format.parse(magnum_template) new_t = copy.deepcopy(t) new_t['resources']['test_bay']['properties']['node_count'] = 10 rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_bm = rsrc_defns['test_bay'] exc = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(b.update, new_bm)) self.assertIn("Unknown status updating Bay", six.text_type(exc)) def test_bay_delete(self): b = self._create_resource('bay', self.rsrc_defn, self.stack) scheduler.TaskRunner(b.create)() b.client_plugin = mock.MagicMock() self.client.bays.get.side_effect = Exception('Not Found') self.client.get.reset_mock() scheduler.TaskRunner(b.delete)() self.assertEqual((b.DELETE, b.COMPLETE), b.state) self.assertEqual(2, self.client.bays.get.call_count) def test_bay_get_live_state(self): b = self._create_resource('bay', self.rsrc_defn, self.stack) scheduler.TaskRunner(b.create)() value = mock.MagicMock() value.to_dict.return_value = { 'name': 'test_bay', 'baymodel': 123456, 'node_count': 5, 'master_count': 1, 'discovery_url': 'https://discovery.etcd.io', 'bay_create_timeout': 15} self.client.bays.get.return_value = value reality = b.get_live_state(b.properties) self.assertEqual({'node_count': 5, 'master_count': 1}, reality) heat-10.0.2/heat/tests/openstack/magnum/test_cluster.py0000666000175000017500000003114013343562340023171 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
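# NOTE (editorial): test_cluster_get_live_state below expects only the
# updatable subset of the cluster record back. A sketch of that
# projection, assuming the client returns the full record as a dict
# (hypothetical helper, not the plugin code):


def _live_state_sketch(cluster_record):
    """Illustrative only: keep just the properties Heat can update."""
    updatable = ('create_timeout', 'discovery_url',
                 'master_count', 'node_count')
    return {key: cluster_record[key]
            for key in updatable if key in cluster_record}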
import copy import mock from oslo_config import cfg import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import magnum as mc from heat.engine.clients.os import nova from heat.engine import resource from heat.engine.resources.openstack.magnum import cluster from heat.engine import scheduler from heat.engine import template from heat.tests import common from heat.tests import utils magnum_template = ''' heat_template_version: ocata resources: test_cluster: type: OS::Magnum::Cluster properties: name: test_cluster keypair: key cluster_template: 123456 node_count: 5 master_count: 1 discovery_url: https://discovery.etcd.io create_timeout: 15 test_cluster_min: type: OS::Magnum::Cluster properties: cluster_template: 123456 ''' RESOURCE_TYPE = 'OS::Magnum::Cluster' class TestMagnumCluster(common.HeatTestCase): def setUp(self): super(TestMagnumCluster, self).setUp() self.resource_id = '12345' self.fake_name = u'test_cluster' self.fake_keypair = u'key' self.fake_cluster_template = '123456' self.fake_node_count = 5 self.fake_master_count = 1 self.fake_discovery_url = u'https://discovery.etcd.io' self.fake_create_timeout = 15 self.fake_api_address = 'https://192.168.0.249:6443' self.fake_coe_version = 'v1.5.2' self.fake_master_addresses = ['192.168.0.2'] self.fake_status = 'bar' self.fake_node_addresses = ['192.168.0.3', '192.168.0.4', '192.168.0.5', '192.168.0.6', '192.168.0.7'] self.fake_status_reason = 'foobar' self.fake_stack_id = '22767a68-a7f2-45fe-bc08-335a83e2b919' self.fake_container_version = '1.12.6' resource._register_class(RESOURCE_TYPE, cluster.Cluster) t = template_format.parse(magnum_template) self.stack = utils.parse_stack(t) resource_defns = self.stack.t.resource_definitions(self.stack) self.min_rsrc_defn = resource_defns['test_cluster_min'] resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns[self.fake_name] self.client = mock.Mock() self.patchobject(cluster.Cluster, 'client', return_value=self.client) self.m_fct = self.patchobject(mc.MagnumClientPlugin, 'get_cluster_template') self.m_fnk = self.patchobject(nova.NovaClientPlugin, 'get_keypair', return_value=self.fake_keypair) def _mock_get_client(self): value = mock.MagicMock() value.name = self.fake_name value.cluster_template_id = self.fake_cluster_template value.uuid = self.resource_id value.coe_version = self.fake_coe_version value.node_count = self.fake_node_count value.master_count = self.fake_master_count value.discovery_url = self.fake_discovery_url value.create_timeout = self.fake_create_timeout value.api_address = self.fake_api_address value.master_addresses = self.fake_master_addresses value.status = self.fake_status value.node_addresses = self.fake_node_addresses value.status_reason = self.fake_status_reason value.stack_id = self.fake_stack_id value.container_version = self.fake_container_version value.keypair = self.fake_keypair value.to_dict.return_value = value.__dict__ self.client.clusters.get.return_value = value def _create_resource(self, name, snippet, stack, stat='CREATE_COMPLETE'): self.m_fct.return_value = self.fake_cluster_template value = mock.MagicMock(uuid=self.resource_id) self.client.clusters.create.return_value = value get_rv = mock.MagicMock(status=stat) self.client.clusters.get.return_value = get_rv b = cluster.Cluster(name, snippet, stack) return b def test_cluster_create(self): b = self._create_resource('cluster', self.rsrc_defn, self.stack) # validate the properties self.assertEqual( self.fake_name, 
b.properties.get(cluster.Cluster.NAME)) self.assertEqual( self.fake_cluster_template, b.properties.get(cluster.Cluster.CLUSTER_TEMPLATE)) self.assertEqual( self.fake_keypair, b.properties.get(cluster.Cluster.KEYPAIR)) self.assertEqual( self.fake_node_count, b.properties.get(cluster.Cluster.NODE_COUNT)) self.assertEqual( self.fake_master_count, b.properties.get(cluster.Cluster.MASTER_COUNT)) self.assertEqual( self.fake_discovery_url, b.properties.get(cluster.Cluster.DISCOVERY_URL)) self.assertEqual( self.fake_create_timeout, b.properties.get(cluster.Cluster.CREATE_TIMEOUT)) scheduler.TaskRunner(b.create)() self.assertEqual(self.resource_id, b.resource_id) self.assertEqual((b.CREATE, b.COMPLETE), b.state) self.client.clusters.create.assert_called_once_with( name=self.fake_name, keypair=self.fake_keypair, cluster_template_id=self.fake_cluster_template, node_count=self.fake_node_count, master_count=self.fake_master_count, discovery_url=self.fake_discovery_url, create_timeout=self.fake_create_timeout ) def test_cluster_create_with_default_value(self): b = self._create_resource('cluster', self.min_rsrc_defn, self.stack) # validate the properties self.assertEqual( None, b.properties.get(cluster.Cluster.NAME)) self.assertEqual( self.fake_cluster_template, b.properties.get(cluster.Cluster.CLUSTER_TEMPLATE)) self.assertEqual( None, b.properties.get(cluster.Cluster.KEYPAIR)) self.assertEqual( 1, b.properties.get(cluster.Cluster.NODE_COUNT)) self.assertEqual( 1, b.properties.get(cluster.Cluster.MASTER_COUNT)) self.assertEqual( None, b.properties.get(cluster.Cluster.DISCOVERY_URL)) self.assertEqual( 60, b.properties.get(cluster.Cluster.CREATE_TIMEOUT)) scheduler.TaskRunner(b.create)() self.assertEqual(self.resource_id, b.resource_id) self.assertEqual((b.CREATE, b.COMPLETE), b.state) self.client.clusters.create.assert_called_once_with( name=None, keypair=None, cluster_template_id=self.fake_cluster_template, node_count=1, master_count=1, discovery_url=None, create_timeout=60) def test_cluster_create_failed(self): cfg.CONF.set_override('action_retry_limit', 0) b = self._create_resource('cluster', self.rsrc_defn, self.stack, stat='CREATE_FAILED') exc = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(b.create)) self.assertIn("Failed to create Cluster", six.text_type(exc)) def test_cluster_create_unknown_status(self): b = self._create_resource('cluster', self.rsrc_defn, self.stack, stat='CREATE_FOO') exc = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(b.create)) self.assertIn("Unknown status creating Cluster", six.text_type(exc)) def _cluster_update(self, update_status='UPDATE_COMPLETE', exc_msg=None): b = self._create_resource('cluster', self.rsrc_defn, self.stack) scheduler.TaskRunner(b.create)() status = mock.MagicMock(status=update_status) self.client.clusters.get.return_value = status t = template_format.parse(magnum_template) new_t = copy.deepcopy(t) new_t['resources'][self.fake_name]['properties']['node_count'] = 10 rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_bm = rsrc_defns[self.fake_name] if update_status == 'UPDATE_COMPLETE': scheduler.TaskRunner(b.update, new_bm)() self.assertEqual((b.UPDATE, b.COMPLETE), b.state) else: exc = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(b.update, new_bm)) self.assertIn(exc_msg, six.text_type(exc)) def test_cluster_update(self): self._cluster_update() def test_cluster_update_failed(self): self._cluster_update('UPDATE_FAILED', 'Failed to update Cluster') def 
test_cluster_update_unknown_status(self): self._cluster_update('UPDATE_BAR', 'Unknown status updating Cluster') def test_cluster_delete(self): b = self._create_resource('cluster', self.rsrc_defn, self.stack) scheduler.TaskRunner(b.create)() b.client_plugin = mock.MagicMock() self.client.clusters.get.side_effect = Exception('Not Found') self.client.get.reset_mock() scheduler.TaskRunner(b.delete)() self.assertEqual((b.DELETE, b.COMPLETE), b.state) self.assertEqual(2, self.client.clusters.get.call_count) def test_cluster_get_live_state(self): b = self._create_resource('cluster', self.rsrc_defn, self.stack) scheduler.TaskRunner(b.create)() self._mock_get_client() reality = b.get_live_state(b.properties) self.assertEqual( { cluster.Cluster.CREATE_TIMEOUT: self.fake_create_timeout, cluster.Cluster.DISCOVERY_URL: self.fake_discovery_url, cluster.Cluster.MASTER_COUNT: self.fake_master_count, cluster.Cluster.NODE_COUNT: self.fake_node_count }, reality) def test_resolve_attributes(self): b = self._create_resource('cluster', self.rsrc_defn, self.stack) scheduler.TaskRunner(b.create)() self._mock_get_client() self.assertEqual( self.fake_name, b._resolve_attribute(cluster.Cluster.NAME_ATTR)) self.assertEqual( self.fake_coe_version, b._resolve_attribute(cluster.Cluster.COE_VERSION_ATTR)) self.assertEqual( self.fake_stack_id, b._resolve_attribute(cluster.Cluster.STACK_ID_ATTR)) self.assertEqual( self.fake_api_address, b._resolve_attribute(cluster.Cluster.API_ADDRESS_ATTR)) self.assertEqual( self.fake_master_count, b._resolve_attribute(cluster.Cluster.MASTER_COUNT_ATTR)) self.assertEqual( self.fake_status, b._resolve_attribute(cluster.Cluster.STATUS_ATTR)) self.assertEqual( self.fake_master_addresses, b._resolve_attribute(cluster.Cluster.MASTER_ADDRESSES_ATTR)) self.assertEqual( self.fake_node_addresses, b._resolve_attribute(cluster.Cluster.NODE_ADDRESSES_ATTR)) self.assertEqual( self.fake_status_reason, b._resolve_attribute(cluster.Cluster.STATUS_REASON_ATTR)) self.assertEqual( self.fake_node_count, b._resolve_attribute(cluster.Cluster.NODE_COUNT_ATTR)) self.assertEqual( self.fake_container_version, b._resolve_attribute(cluster.Cluster.CONTAINER_VERSION_ATTR)) self.assertEqual( self.fake_discovery_url, b._resolve_attribute(cluster.Cluster.DISCOVERY_URL_ATTR)) self.assertEqual( self.fake_cluster_template, b._resolve_attribute(cluster.Cluster.CLUSTER_TEMPLATE_ID_ATTR)) self.assertEqual( self.fake_keypair, b._resolve_attribute(cluster.Cluster.KEYPAIR_ATTR)) self.assertEqual( self.fake_create_timeout, b._resolve_attribute(cluster.Cluster.CREATE_TIMEOUT_ATTR)) heat-10.0.2/heat/tests/openstack/swift/0000775000175000017500000000000013343562672017756 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/swift/__init__.py0000666000175000017500000000000013343562340022047 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/swift/test_container.py0000666000175000017500000004654513343562340023361 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
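# NOTE (editorial): test_build_meta_headers below asserts on the
# X-<Scope>-Meta-* header expansion. A standalone sketch of the same
# transformation (hypothetical helper; the real implementation lives in
# heat/engine/resources/openstack/swift/container.py):


def _build_meta_headers_sketch(scope, meta):
    """Illustrative only: map {'Web-Index': 'index.html'} to
    {'X-Container-Meta-Web-Index': 'index.html'} for scope 'container'."""
    if not meta:
        return {}
    return {'X-%s-Meta-%s' % (scope.title(), key): value
            for key, value in meta.items()}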
import mock import six import swiftclient.client as sc from heat.common import exception from heat.common import template_format from heat.engine import node_data from heat.engine.resources.openstack.swift import container as swift_c from heat.engine import scheduler from heat.tests import common from heat.tests import utils SWIFT_TEMPLATE = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Template to test OS::Swift::Container resources", "Resources" : { "SwiftContainerWebsite" : { "Type" : "OS::Swift::Container", "DeletionPolicy" : "Delete", "Properties" : { "X-Container-Read" : ".r:*", "X-Container-Meta" : { "Web-Index" : "index.html", "Web-Error" : "error.html" } } }, "SwiftAccountMetadata" : { "Type" : "OS::Swift::Container", "DeletionPolicy" : "Delete", "Properties" : { "X-Account-Meta" : { "Temp-Url-Key" : "secret" } } }, "S3Bucket" : { "Type" : "AWS::S3::Bucket", "Properties" : { "SwiftContainer" : {"Ref" : "SwiftContainer"} } }, "SwiftContainer" : { "Type" : "OS::Swift::Container", "Properties" : { } } } } ''' class SwiftTest(common.HeatTestCase): def setUp(self): super(SwiftTest, self).setUp() self.t = template_format.parse(SWIFT_TEMPLATE) def _create_container(self, stack, definition_name='SwiftContainer'): resource_defns = stack.t.resource_definitions(stack) container = swift_c.SwiftContainer('test_resource', resource_defns[definition_name], stack) runner = scheduler.TaskRunner(container.create) runner() self.assertEqual((container.CREATE, container.COMPLETE), container.state) return container @mock.patch('swiftclient.client.Connection.put_container') def test_create_container_name(self, mock_put): # Setup res_prop = self.t['Resources']['SwiftContainer']['Properties'] res_prop['name'] = 'the_name' stack = utils.parse_stack(self.t) # Test container = self._create_container(stack) container_name = container.physical_resource_name() # Verify self.assertEqual('the_name', container_name) mock_put.assert_called_once_with('the_name', {}) def test_build_meta_headers(self): # Setup headers = {'Web-Index': 'index.html', 'Web-Error': 'error.html'} # Test self.assertEqual({}, swift_c.SwiftContainer._build_meta_headers( 'container', {})) self.assertEqual({}, swift_c.SwiftContainer._build_meta_headers( 'container', None)) built = swift_c.SwiftContainer._build_meta_headers( 'container', headers) # Verify expected = { 'X-Container-Meta-Web-Index': 'index.html', 'X-Container-Meta-Web-Error': 'error.html' } self.assertEqual(expected, built) @mock.patch('swiftclient.client.Connection.head_container') @mock.patch('swiftclient.client.Connection.put_container') def test_attributes(self, mock_put, mock_head): # Setup headers = {'content-length': '0', 'x-container-object-count': '82', 'accept-ranges': 'bytes', 'x-trans-id': 'tx08ea48ef2fa24e6da3d2f5c188fd938b', 'date': 'Wed, 23 Jan 2013 22:48:05 GMT', 'x-timestamp': '1358980499.84298', 'x-container-read': '.r:*', 'x-container-bytes-used': '17680980', 'content-type': 'text/plain; charset=utf-8'} mock_head.return_value = headers stack = utils.parse_stack(self.t) container_name = utils.PhysName(stack.name, 'test_resource') # Test container = self._create_container(stack) # call this to populate the url of swiftclient. This is actually # set in head_container/put_container, but we're patching them in # this test. 
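        # get_auth() populates the connection's storage URL, which the
        # DomainName and WebsiteURL attribute lookups below depend on.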
container.client().get_auth() # Verify Attributes self.assertEqual(container_name, container.FnGetRefId()) self.assertEqual('82', container.FnGetAtt('ObjectCount')) self.assertEqual('17680980', container.FnGetAtt('BytesUsed')) self.assertEqual('server.test', container.FnGetAtt('DomainName')) self.assertEqual(headers, container.FnGetAtt('HeadContainer')) self.assertEqual(headers, container.FnGetAtt('show')) expected_url = 'http://server.test:5000/v3/%s' % container.FnGetRefId() self.assertEqual(expected_url, container.FnGetAtt('WebsiteURL')) self.assertRaises(exception.InvalidTemplateAttribute, container.FnGetAtt, 'Foo') # Verify Expected Calls mock_put.assert_called_once_with(container_name, {}) self.assertGreater(mock_head.call_count, 0) @mock.patch('swiftclient.client.Connection.put_container') def test_public_read(self, mock_put): # Setup properties = self.t['Resources']['SwiftContainer']['Properties'] properties['X-Container-Read'] = '.r:*' stack = utils.parse_stack(self.t) container_name = utils.PhysName(stack.name, 'test_resource') # Test self._create_container(stack) # Verify expected = {'X-Container-Read': '.r:*'} mock_put.assert_called_once_with(container_name, expected) @mock.patch('swiftclient.client.Connection.put_container') def test_public_read_write(self, mock_put): # Setup properties = self.t['Resources']['SwiftContainer']['Properties'] properties['X-Container-Read'] = '.r:*' properties['X-Container-Write'] = '.r:*' stack = utils.parse_stack(self.t) container_name = utils.PhysName(stack.name, 'test_resource') # Test self._create_container(stack) # Verify expected = {'X-Container-Write': '.r:*', 'X-Container-Read': '.r:*'} mock_put.assert_called_once_with(container_name, expected) @mock.patch('swiftclient.client.Connection.put_container') def test_container_headers(self, mock_put): # Setup stack = utils.parse_stack(self.t) container_name = utils.PhysName(stack.name, 'test_resource') # Test self._create_container(stack, definition_name='SwiftContainerWebsite') # Verify expected = {'X-Container-Meta-Web-Error': 'error.html', 'X-Container-Meta-Web-Index': 'index.html', 'X-Container-Read': '.r:*'} mock_put.assert_called_once_with(container_name, expected) @mock.patch('swiftclient.client.Connection.post_account') @mock.patch('swiftclient.client.Connection.put_container') def test_account_headers(self, mock_put, mock_post): # Setup stack = utils.parse_stack(self.t) container_name = utils.PhysName(stack.name, 'test_resource') # Test self._create_container(stack, definition_name='SwiftAccountMetadata') # Verify mock_put.assert_called_once_with(container_name, {}) expected = {'X-Account-Meta-Temp-Url-Key': 'secret'} mock_post.assert_called_once_with(expected) @mock.patch('swiftclient.client.Connection.put_container') def test_default_headers_not_none_empty_string(self, mock_put): # Setup stack = utils.parse_stack(self.t) container_name = utils.PhysName(stack.name, 'test_resource') # Test container = self._create_container(stack) # Verify mock_put.assert_called_once_with(container_name, {}) self.assertEqual({}, container.metadata_get()) @mock.patch('swiftclient.client.Connection.delete_container') @mock.patch('swiftclient.client.Connection.get_container') @mock.patch('swiftclient.client.Connection.put_container') def test_delete_exception(self, mock_put, mock_get, mock_delete): # Setup stack = utils.parse_stack(self.t) container_name = utils.PhysName(stack.name, 'test_resource') mock_delete.side_effect = sc.ClientException('test-delete-failure') mock_get.return_value = ({'name': 
    @mock.patch('swiftclient.client.Connection.delete_container')
    @mock.patch('swiftclient.client.Connection.get_container')
    @mock.patch('swiftclient.client.Connection.put_container')
    def test_delete_exception(self, mock_put, mock_get, mock_delete):
        # Setup
        stack = utils.parse_stack(self.t)
        container_name = utils.PhysName(stack.name, 'test_resource')
        mock_delete.side_effect = sc.ClientException('test-delete-failure')
        mock_get.return_value = ({'name': container_name}, [])

        # Test
        container = self._create_container(stack)
        runner = scheduler.TaskRunner(container.delete)
        self.assertRaises(exception.ResourceFailure, runner)

        # Verify
        self.assertEqual((container.DELETE, container.FAILED),
                         container.state)
        mock_put.assert_called_once_with(container_name, {})
        mock_get.assert_called_once_with(container_name)
        mock_delete.assert_called_once_with(container_name)

    @mock.patch('swiftclient.client.Connection.delete_container')
    @mock.patch('swiftclient.client.Connection.get_container')
    @mock.patch('swiftclient.client.Connection.put_container')
    def test_delete_not_found(self, mock_put, mock_get, mock_delete):
        # Setup
        stack = utils.parse_stack(self.t)
        container_name = utils.PhysName(stack.name, 'test_resource')
        mock_delete.side_effect = sc.ClientException('missing',
                                                     http_status=404)
        mock_get.return_value = ({'name': container_name}, [])

        # Test
        container = self._create_container(stack)
        runner = scheduler.TaskRunner(container.delete)
        runner()

        # Verify
        self.assertEqual((container.DELETE, container.COMPLETE),
                         container.state)
        mock_put.assert_called_once_with(container_name, {})
        mock_get.assert_called_once_with(container_name)
        mock_delete.assert_called_once_with(container_name)

    @mock.patch('swiftclient.client.Connection.get_container')
    @mock.patch('swiftclient.client.Connection.put_container')
    def test_delete_non_empty_not_allowed(self, mock_put, mock_get):
        # Setup
        stack = utils.parse_stack(self.t)
        container_name = utils.PhysName(stack.name, 'test_resource')
        mock_get.return_value = ({'name': container_name},
                                 [{'name': 'test_object'}])

        # Test
        container = self._create_container(stack)
        runner = scheduler.TaskRunner(container.delete)
        ex = self.assertRaises(exception.ResourceFailure, runner)

        # Verify
        self.assertEqual((container.DELETE, container.FAILED),
                         container.state)
        self.assertIn('ResourceActionNotSupported: resources.test_resource: '
                      'Deleting non-empty container',
                      six.text_type(ex))
        mock_put.assert_called_once_with(container_name, {})
        mock_get.assert_called_once_with(container_name)

    @mock.patch('swiftclient.client.Connection.delete_container')
    @mock.patch('swiftclient.client.Connection.delete_object')
    @mock.patch('swiftclient.client.Connection.get_container')
    @mock.patch('swiftclient.client.Connection.put_container')
    def test_delete_non_empty_allowed(self, mock_put, mock_get,
                                      mock_delete_object,
                                      mock_delete_container):
        # Setup
        res_prop = self.t['Resources']['SwiftContainer']['Properties']
        res_prop['PurgeOnDelete'] = True
        stack = utils.parse_stack(self.t)
        container_name = utils.PhysName(stack.name, 'test_resource')

        get_return_values = [
            ({'name': container_name},
             [{'name': 'test_object1'},
              {'name': 'test_object2'}]),
            ({'name': container_name},
             [{'name': 'test_object1'}]),
        ]
        mock_get.side_effect = get_return_values

        # Test
        container = self._create_container(stack)
        runner = scheduler.TaskRunner(container.delete)
        runner()

        # Verify
        self.assertEqual((container.DELETE, container.COMPLETE),
                         container.state)
        mock_put.assert_called_once_with(container_name, {})
        mock_delete_container.assert_called_once_with(container_name)
        self.assertEqual(2, mock_get.call_count)
        self.assertEqual(2, mock_delete_object.call_count)

    @mock.patch('swiftclient.client.Connection.delete_container')
    @mock.patch('swiftclient.client.Connection.delete_object')
    @mock.patch('swiftclient.client.Connection.get_container')
    @mock.patch('swiftclient.client.Connection.put_container')
    def test_delete_non_empty_allowed_not_found(self, mock_put, mock_get,
                                                mock_delete_object,
                                                mock_delete_container):
        # Setup
        res_prop = self.t['Resources']['SwiftContainer']['Properties']
        res_prop['PurgeOnDelete'] = True
        stack = utils.parse_stack(self.t)
        container_name = utils.PhysName(stack.name, 'test_resource')
        mock_get.return_value = ({'name': container_name},
                                 [{'name': 'test_object'}])
        mock_delete_object.side_effect = sc.ClientException('object-is-gone',
                                                            http_status=404)
        mock_delete_container.side_effect = sc.ClientException(
            'container-is-gone', http_status=404)

        # Test
        container = self._create_container(stack)
        runner = scheduler.TaskRunner(container.delete)
        runner()

        # Verify
        self.assertEqual((container.DELETE, container.COMPLETE),
                         container.state)
        mock_put.assert_called_once_with(container_name, {})
        mock_get.assert_called_once_with(container_name)
        mock_delete_object.assert_called_once_with(container_name,
                                                   'test_object')
        mock_delete_container.assert_called_once_with(container_name)

    @mock.patch('swiftclient.client.Connection.delete_object')
    @mock.patch('swiftclient.client.Connection.get_container')
    @mock.patch('swiftclient.client.Connection.put_container')
    def test_delete_non_empty_fails_delete_object(self, mock_put, mock_get,
                                                  mock_delete_object):
        # Setup
        res_prop = self.t['Resources']['SwiftContainer']['Properties']
        res_prop['PurgeOnDelete'] = True
        stack = utils.parse_stack(self.t)
        container_name = utils.PhysName(stack.name, 'test_resource')
        mock_get.return_value = ({'name': container_name},
                                 [{'name': 'test_object'}])
        mock_delete_object.side_effect = (
            sc.ClientException('object-delete-failure'))

        # Test
        container = self._create_container(stack)
        runner = scheduler.TaskRunner(container.delete)
        self.assertRaises(exception.ResourceFailure, runner)

        # Verify
        self.assertEqual((container.DELETE, container.FAILED),
                         container.state)
        mock_put.assert_called_once_with(container_name, {})
        mock_get.assert_called_once_with(container_name)
        mock_delete_object.assert_called_once_with(container_name,
                                                   'test_object')

    @mock.patch('swiftclient.client.Connection.put_container')
    def test_delete_retain(self, mock_put):
        # Setup
        self.t['Resources']['SwiftContainer']['DeletionPolicy'] = 'Retain'
        stack = utils.parse_stack(self.t)
        container_name = utils.PhysName(stack.name, 'test_resource')

        # Test
        container = self._create_container(stack)
        runner = scheduler.TaskRunner(container.delete)
        runner()

        # Verify
        self.assertEqual((container.DELETE, container.COMPLETE),
                         container.state)
        mock_put.assert_called_once_with(container_name, {})

    @mock.patch('swiftclient.client.Connection.get_container')
    @mock.patch('swiftclient.client.Connection.put_container')
    def test_check(self, mock_put, mock_get):
        # Setup
        res_prop = self.t['Resources']['SwiftContainer']['Properties']
        res_prop['PurgeOnDelete'] = True
        stack = utils.parse_stack(self.t)

        # Test
        container = self._create_container(stack)
        runner = scheduler.TaskRunner(container.check)
        runner()

        self.assertEqual((container.CHECK, container.COMPLETE),
                         container.state)

    @mock.patch('swiftclient.client.Connection.get_container')
    @mock.patch('swiftclient.client.Connection.put_container')
    def test_check_fail(self, mock_put, mock_get):
        # Setup
        res_prop = self.t['Resources']['SwiftContainer']['Properties']
        res_prop['PurgeOnDelete'] = True
        stack = utils.parse_stack(self.t)
        mock_get.side_effect = Exception('boom')

        # Test
        container = self._create_container(stack)
        runner = scheduler.TaskRunner(container.check)
        ex = self.assertRaises(exception.ResourceFailure, runner)

        # Verify
        self.assertIn('boom', six.text_type(ex))
        self.assertEqual((container.CHECK, container.FAILED),
                         container.state)

    def test_refid(self):
        stack = utils.parse_stack(self.t)
        rsrc = stack['SwiftContainer']
        rsrc.resource_id = 'xyz'
        self.assertEqual('xyz', rsrc.FnGetRefId())

    def test_refid_convergence_cache_data(self):
        cache_data = {'SwiftContainer': node_data.NodeData.from_dict({
            'uuid': mock.ANY,
            'id': mock.ANY,
            'action': 'CREATE',
            'status': 'COMPLETE',
            'reference_id': 'xyz_convg'
        })}
        stack = utils.parse_stack(self.t, cache_data=cache_data)
        rsrc = stack.defn['SwiftContainer']
        self.assertEqual('xyz_convg', rsrc.FnGetRefId())

    @mock.patch('swiftclient.client.Connection.head_account')
    @mock.patch('swiftclient.client.Connection.head_container')
    @mock.patch('swiftclient.client.Connection.put_container')
    def test_parse_live_resource_data(self, mock_put, mock_container,
                                      mock_account):
        stack = utils.parse_stack(self.t)
        container = self._create_container(
            stack, definition_name="SwiftContainerWebsite")
        mock_container.return_value = {
            'x-container-read': '.r:*',
            'x-container-meta-web-index': 'index.html',
            'x-container-meta-web-error': 'error.html',
            'x-container-meta-login': 'login.html'
        }
        mock_account.return_value = {}
        live_state = container.parse_live_resource_data(
            container.properties, container.get_live_resource_data())
        # live state properties values should be equal to current resource
        # properties values
        self.assertEqual(dict(container.properties.items()), live_state)
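
# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the original module: test_build_meta_headers
# above pins down the header convention for container metadata. A standalone
# function with the same observable behaviour (inferred from the assertions,
# not taken from the resource's actual implementation) would look like:

def _example_build_meta_headers(scope, meta):
    """Map {'Web-Index': 'index.html'} to {'X-Container-Meta-Web-Index': ...}
    for scope='container'; empty or None input yields {}.
    """
    if not meta:
        return {}
    return {'X-%s-Meta-%s' % (scope.title(), key): value
            for key, value in meta.items()}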
heat-10.0.2/heat/tests/openstack/keystone/0000775000175000017500000000000013343562672020463 5ustar zuulzuul00000000000000
heat-10.0.2/heat/tests/openstack/keystone/test_endpoint.py0000666000175000017500000003672213343562340023720 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

import mock

from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks
from heat.engine import constraints
from heat.engine import properties
from heat.engine import resource
from heat.engine.resources.openstack.keystone import endpoint
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import utils

keystone_endpoint_template = {
    'heat_template_version': '2015-04-30',
    'resources': {
        'test_endpoint': {
            'type': 'OS::Keystone::Endpoint',
            'properties': {
                'service': 'heat',
                'region': 'RegionOne',
                'interface': 'public',
                'url': 'http://127.0.0.1:8004/v1/tenant-id',
                'name': 'endpoint_foo',
                'enabled': False
            }
        }
    }
}


class KeystoneEndpointTest(common.HeatTestCase):
    def setUp(self):
        super(KeystoneEndpointTest, self).setUp()

        self.ctx = utils.dummy_context()

        # Mock client
        self.keystoneclient = mock.Mock()
        self.patchobject(resource.Resource, 'client',
                         return_value=fake_ks.FakeKeystoneClient(
                             client=self.keystoneclient))
        self.endpoints = self.keystoneclient.endpoints

        # Mock client plugin
        self.keystone_client_plugin = mock.MagicMock()

    def _get_mock_endpoint(self):
        value = mock.MagicMock()
        value.id = '477e8273-60a7-4c41-b683-fdb0bc7cd152'
        return value

    def _setup_endpoint_resource(self, stack_name, use_default=False):
        tmpl_data = copy.deepcopy(keystone_endpoint_template)
        if use_default:
            props = tmpl_data['resources']['test_endpoint']['properties']
            del props['name']
            del props['enabled']

        test_stack = stack.Stack(
            self.ctx,
            stack_name,
            template.Template(tmpl_data)
        )

        r_endpoint = test_stack['test_endpoint']
        r_endpoint.client = mock.MagicMock()
        r_endpoint.client.return_value = self.keystoneclient
        r_endpoint.client_plugin = mock.MagicMock()
        r_endpoint.client_plugin.return_value = self.keystone_client_plugin

        return r_endpoint

    def test_endpoint_handle_create(self):
        rsrc = self._setup_endpoint_resource('test_endpoint_create')
        mock_endpoint = self._get_mock_endpoint()
        self.endpoints.create.return_value = mock_endpoint

        # validate the properties
        self.assertEqual(
            'heat',
            rsrc.properties.get(endpoint.KeystoneEndpoint.SERVICE))
        self.assertEqual(
            'public',
            rsrc.properties.get(endpoint.KeystoneEndpoint.INTERFACE))
        self.assertEqual(
            'RegionOne',
            rsrc.properties.get(endpoint.KeystoneEndpoint.REGION))
        self.assertEqual(
            'http://127.0.0.1:8004/v1/tenant-id',
            rsrc.properties.get(endpoint.KeystoneEndpoint.SERVICE_URL))
        self.assertEqual(
            'endpoint_foo',
            rsrc.properties.get(endpoint.KeystoneEndpoint.NAME))
        self.assertFalse(rsrc.properties.get(
            endpoint.KeystoneEndpoint.ENABLED))

        rsrc.handle_create()

        # validate endpoint creation
        self.endpoints.create.assert_called_once_with(
            service='heat',
            url='http://127.0.0.1:8004/v1/tenant-id',
            interface='public',
            region='RegionOne',
            name='endpoint_foo',
            enabled=False)

        # validate physical resource id
        self.assertEqual(mock_endpoint.id, rsrc.resource_id)

    def test_endpoint_handle_create_default(self):
        rsrc = self._setup_endpoint_resource('test_create_with_defaults',
                                             use_default=True)
        mock_endpoint = self._get_mock_endpoint()
        self.endpoints.create.return_value = mock_endpoint

        rsrc.physical_resource_name = mock.MagicMock()
        rsrc.physical_resource_name.return_value = 'stack_endpoint_foo'

        # validate the properties
        self.assertIsNone(
            rsrc.properties.get(endpoint.KeystoneEndpoint.NAME))
        self.assertTrue(rsrc.properties.get(
            endpoint.KeystoneEndpoint.ENABLED))

        rsrc.handle_create()

        # validate endpoints creation with physical resource name
        # and with enabled(default is True)
        self.endpoints.create.assert_called_once_with(
            service='heat',
            url='http://127.0.0.1:8004/v1/tenant-id',
            interface='public',
            region='RegionOne',
            name='stack_endpoint_foo',
            enabled=True)

    def test_endpoint_handle_update(self):
        rsrc = self._setup_endpoint_resource('test_endpoint_update')
        rsrc.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        prop_diff = {endpoint.KeystoneEndpoint.REGION: 'RegionTwo',
                     endpoint.KeystoneEndpoint.INTERFACE: 'internal',
                     endpoint.KeystoneEndpoint.SERVICE: 'updated_id',
                     endpoint.KeystoneEndpoint.SERVICE_URL:
                         'http://127.0.0.1:8004/v2/tenant-id',
                     endpoint.KeystoneEndpoint.NAME:
                         'endpoint_foo_updated',
                     endpoint.KeystoneEndpoint.ENABLED: True}
        rsrc.handle_update(json_snippet=None,
                           tmpl_diff=None,
                           prop_diff=prop_diff)

        self.endpoints.update.assert_called_once_with(
            endpoint=rsrc.resource_id,
            region=prop_diff[endpoint.KeystoneEndpoint.REGION],
            interface=prop_diff[endpoint.KeystoneEndpoint.INTERFACE],
            service=prop_diff[endpoint.KeystoneEndpoint.SERVICE],
            url=prop_diff[endpoint.KeystoneEndpoint.SERVICE_URL],
            name=prop_diff[endpoint.KeystoneEndpoint.NAME],
            enabled=prop_diff[endpoint.KeystoneEndpoint.ENABLED]
        )

    def test_endpoint_handle_update_default(self):
        rsrc = self._setup_endpoint_resource('test_endpoint_update_default')
        rsrc.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'
        rsrc.physical_resource_name = mock.MagicMock()
        rsrc.physical_resource_name.return_value = 'stack_endpoint_foo'

        # Name is reset to None, so default to physical resource name
        prop_diff = {endpoint.KeystoneEndpoint.NAME: None}
        rsrc.handle_update(json_snippet=None,
                           tmpl_diff=None,
                           prop_diff=prop_diff)

        # validate default name to physical resource name
        self.endpoints.update.assert_called_once_with(
            endpoint=rsrc.resource_id,
            region=None,
            interface=None,
            service=None,
            url=None,
            name='stack_endpoint_foo',
            enabled=None
        )

    def test_endpoint_handle_update_only_enabled(self):
        rsrc = self._setup_endpoint_resource('test_endpoint_update_enabled')
        rsrc.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        prop_diff = {endpoint.KeystoneEndpoint.ENABLED: True}
        rsrc.handle_update(json_snippet=None,
                           tmpl_diff=None,
                           prop_diff=prop_diff)

        self.endpoints.update.assert_called_once_with(
            endpoint=rsrc.resource_id,
            region=None,
            interface=None,
            service=None,
            url=None,
            name=None,
            enabled=prop_diff[endpoint.KeystoneEndpoint.ENABLED]
        )
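    # NOTE: in the update tests above, properties missing from prop_diff are
    # forwarded to endpoints.update() as explicit None values; presumably the
    # underlying client treats None as "leave unchanged", which is why the
    # assertions spell out region=None, interface=None, and so on.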
    def test_properties_title(self):
        property_title_map = {
            endpoint.KeystoneEndpoint.SERVICE: 'service',
            endpoint.KeystoneEndpoint.REGION: 'region',
            endpoint.KeystoneEndpoint.INTERFACE: 'interface',
            endpoint.KeystoneEndpoint.SERVICE_URL: 'url',
            endpoint.KeystoneEndpoint.NAME: 'name',
            endpoint.KeystoneEndpoint.ENABLED: 'enabled'
        }

        for actual_title, expected_title in property_title_map.items():
            self.assertEqual(
                expected_title,
                actual_title,
                'KeystoneEndpoint PROPERTIES(%s) title modified.'
                % actual_title)

    def test_property_service_validate_schema(self):
        schema = (endpoint.KeystoneEndpoint.properties_schema[
            endpoint.KeystoneEndpoint.SERVICE])
        self.assertTrue(
            schema.update_allowed,
            'update_allowed for property %s is modified'
            % endpoint.KeystoneEndpoint.SERVICE)

        self.assertTrue(
            schema.required,
            'required for property %s is modified'
            % endpoint.KeystoneEndpoint.SERVICE)

        self.assertEqual(properties.Schema.STRING,
                         schema.type,
                         'type for property %s is modified'
                         % endpoint.KeystoneEndpoint.SERVICE)

        self.assertEqual('Name or Id of keystone service.',
                         schema.description,
                         'description for property %s is modified'
                         % endpoint.KeystoneEndpoint.SERVICE)

        # Make sure, SERVICE is of keystone.service custom constrain type
        self.assertEqual(1, len(schema.constraints))
        keystone_service_constrain = schema.constraints[0]
        self.assertIsInstance(keystone_service_constrain,
                              constraints.CustomConstraint)
        self.assertEqual('keystone.service',
                         keystone_service_constrain.name)

    def test_property_region_validate_schema(self):
        schema = (endpoint.KeystoneEndpoint.properties_schema[
            endpoint.KeystoneEndpoint.REGION])
        self.assertTrue(
            schema.update_allowed,
            'update_allowed for property %s is modified'
            % endpoint.KeystoneEndpoint.REGION)

        self.assertEqual(properties.Schema.STRING,
                         schema.type,
                         'type for property %s is modified'
                         % endpoint.KeystoneEndpoint.REGION)

        self.assertEqual('Name or Id of keystone region.',
                         schema.description,
                         'description for property %s is modified'
                         % endpoint.KeystoneEndpoint.REGION)

        # Make sure, REGION is of keystone.region custom constraint type
        self.assertEqual(1, len(schema.constraints))
        keystone_region_constraint = schema.constraints[0]
        self.assertIsInstance(keystone_region_constraint,
                              constraints.CustomConstraint)
        self.assertEqual('keystone.region',
                         keystone_region_constraint.name)

    def test_property_interface_validate_schema(self):
        schema = (endpoint.KeystoneEndpoint.properties_schema[
            endpoint.KeystoneEndpoint.INTERFACE])
        self.assertTrue(
            schema.update_allowed,
            'update_allowed for property %s is modified'
            % endpoint.KeystoneEndpoint.INTERFACE)

        self.assertTrue(
            schema.required,
            'required for property %s is modified'
            % endpoint.KeystoneEndpoint.INTERFACE)

        self.assertEqual(properties.Schema.STRING,
                         schema.type,
                         'type for property %s is modified'
                         % endpoint.KeystoneEndpoint.INTERFACE)

        self.assertEqual('Interface type of keystone service endpoint.',
                         schema.description,
                         'description for property %s is modified'
                         % endpoint.KeystoneEndpoint.INTERFACE)

        # Make sure INTERFACE valid constrains
        self.assertEqual(1, len(schema.constraints))
        allowed_constrain = schema.constraints[0]
        self.assertIsInstance(allowed_constrain,
                              constraints.AllowedValues)
        self.assertEqual(('public', 'internal', 'admin'),
                         allowed_constrain.allowed)

    def test_property_service_url_validate_schema(self):
        schema = (endpoint.KeystoneEndpoint.properties_schema[
            endpoint.KeystoneEndpoint.SERVICE_URL])
        self.assertTrue(
            schema.update_allowed,
            'update_allowed for property %s is modified'
            % endpoint.KeystoneEndpoint.SERVICE_URL)

        self.assertTrue(
            schema.required,
            'required for property %s is modified'
            % endpoint.KeystoneEndpoint.SERVICE_URL)

        self.assertEqual(properties.Schema.STRING,
                         schema.type,
                         'type for property %s is modified'
                         % endpoint.KeystoneEndpoint.SERVICE_URL)

        self.assertEqual('URL of keystone service endpoint.',
                         schema.description,
                         'description for property %s is modified'
                         % endpoint.KeystoneEndpoint.SERVICE_URL)

    def test_property_name_validate_schema(self):
        schema = (endpoint.KeystoneEndpoint.properties_schema[
            endpoint.KeystoneEndpoint.NAME])
        self.assertTrue(
            schema.update_allowed,
            'update_allowed for property %s is modified'
            % endpoint.KeystoneEndpoint.NAME)

        self.assertEqual(properties.Schema.STRING,
                         schema.type,
                         'type for property %s is modified'
                         % endpoint.KeystoneEndpoint.NAME)

        self.assertEqual('Name of keystone endpoint.',
                         schema.description,
                         'description for property %s is modified'
                         % endpoint.KeystoneEndpoint.NAME)

    def test_show_resource(self):
        rsrc = self._setup_endpoint_resource('test_show_resource')
        mock_endpoint = mock.Mock()
        mock_endpoint.to_dict.return_value = {'attr': 'val'}
        self.endpoints.get.return_value = mock_endpoint
        attrs = rsrc._show_resource()
        self.assertEqual({'attr': 'val'}, attrs)

    def test_get_live_state(self):
        rsrc = self._setup_endpoint_resource('test_get_live_state')
        mock_endpoint = mock.Mock()
        mock_endpoint.to_dict.return_value = {
            'region_id': 'RegionOne',
            'links': {'self': 'some_link'},
            'url': 'http://127.0.0.1:8004/v1/1234',
            'region': 'RegionOne',
            'enabled': True,
            'interface': 'admin',
            'service_id': '934f10ea63c24d82a8d9370cc0a1cb3b',
            'id': '7f1944ae8c524e2799119b5f2dcf9781',
            'name': 'fake'}
        self.endpoints.get.return_value = mock_endpoint

        reality = rsrc.get_live_state(rsrc.properties)
        expected = {
            'region': 'RegionOne',
            'enabled': True,
            'interface': 'admin',
            'service': '934f10ea63c24d82a8d9370cc0a1cb3b',
            'name': 'fake',
            'url': 'http://127.0.0.1:8004/v1/1234'
        }
        self.assertEqual(expected, reality)
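
# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the original module: the per-property
# schema tests above repeat the same few checks. Assuming only the schema
# attributes they already read (update_allowed, required, type, description),
# the common part could be factored into one table-driven helper:

def _example_check_schema(testcase, schema, expected_type, description,
                          update_allowed=True, required=False):
    """Assert the invariants each test_property_*_validate_schema verifies."""
    testcase.assertEqual(update_allowed, schema.update_allowed)
    testcase.assertEqual(required, schema.required)
    testcase.assertEqual(expected_type, schema.type)
    testcase.assertEqual(description, schema.description)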
heat-10.0.2/heat/tests/openstack/keystone/test_role_assignments.py0000666000175000017500000005363513343562340025456 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

import mock

from heat.common import exception
from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks
from heat.engine import properties
from heat.engine import resource
from heat.engine.resources.openstack.keystone import role_assignments
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import generic_resource
from heat.tests import utils

RESOURCE_TYPE = 'OS::Keystone::DummyRoleAssignment'

keystone_role_assignment_template = {
    'heat_template_version': '2015-10-15',
    'resources': {
        'test_role_assignment': {
            'type': RESOURCE_TYPE,
            'properties': {
                'roles': [
                    {
                        'role': 'role_1',
                        'project': 'project_1',
                    },
                    {
                        'role': 'role_1',
                        'domain': 'domain_1'
                    }
                ]
            }
        }
    }
}

MixinClass = role_assignments.KeystoneRoleAssignmentMixin


class DummyRoleAssignment(generic_resource.GenericResource, MixinClass):
    properties_schema = {}
    properties_schema.update(MixinClass.mixin_properties_schema)

    def validate(self):
        super(DummyRoleAssignment, self).validate()
        self.validate_assignment_properties()


class KeystoneRoleAssignmentMixinTest(common.HeatTestCase):
    def setUp(self):
        super(KeystoneRoleAssignmentMixinTest, self).setUp()

        self.ctx = utils.dummy_context()

        # For unit testing purpose. Register resource provider explicitly.
        resource._register_class(RESOURCE_TYPE, DummyRoleAssignment)

        self.stack = stack.Stack(
            self.ctx, 'test_stack_keystone',
            template.Template(keystone_role_assignment_template)
        )
        self.test_role_assignment = self.stack['test_role_assignment']

        # Mock client
        self.keystoneclient = mock.MagicMock()
        self.test_role_assignment.client = mock.MagicMock()
        self.test_role_assignment.client.return_value = self.keystoneclient
        self.roles = self.keystoneclient.roles

        # Mock client plugin
        def _side_effect(value):
            return value

        self.keystone_client_plugin = mock.MagicMock()
        (self.keystone_client_plugin.get_domain_id.
         side_effect) = _side_effect
        (self.keystone_client_plugin.get_role_id.
         side_effect) = _side_effect
        (self.keystone_client_plugin.get_project_id.
         side_effect) = _side_effect
        self.test_role_assignment.client_plugin = mock.MagicMock()
        (self.test_role_assignment.client_plugin.
         return_value) = self.keystone_client_plugin

        self.parse_assgmnts = self.test_role_assignment.parse_list_assignments
        self.test_role_assignment.parse_list_assignments = mock.MagicMock()
        self.test_role_assignment.parse_list_assignments.return_value = [
            {'role': 'role_1', 'domain': 'domain_1', 'project': None},
            {'role': 'role_1', 'project': 'project_1', 'domain': None}]

    def test_properties_title(self):
        property_title_map = {MixinClass.ROLES: 'roles'}

        for actual_title, expected_title in property_title_map.items():
            self.assertEqual(
                expected_title,
                actual_title,
                'KeystoneRoleAssignmentMixin PROPERTIES(%s) title modified.'
                % actual_title)

    def test_property_roles_validate_schema(self):
        schema = MixinClass.mixin_properties_schema[MixinClass.ROLES]
        self.assertEqual(
            True,
            schema.update_allowed,
            'update_allowed for property %s is modified' % MixinClass.ROLES)

        self.assertEqual(properties.Schema.LIST,
                         schema.type,
                         'type for property %s is modified'
                         % MixinClass.ROLES)

        self.assertEqual('List of role assignments.',
                         schema.description,
                         'description for property %s is modified'
                         % MixinClass.ROLES)

    def test_role_assignment_create_user(self):
        expected = [
            {
                'role': 'role_1',
                'project': 'project_1',
                'domain': None
            },
            {
                'role': 'role_1',
                'project': None,
                'domain': 'domain_1'
            }
        ]
        # validate the properties
        self.assertEqual(
            expected,
            self.test_role_assignment.properties.get(MixinClass.ROLES))

        self.test_role_assignment.create_assignment(user_id='user_1')

        # validate role assignment creation
        # role-user-domain
        self.roles.grant.assert_any_call(
            role='role_1',
            user='user_1',
            domain='domain_1')

        # role-user-project
        self.roles.grant.assert_any_call(
            role='role_1',
            user='user_1',
            project='project_1')

    def test_role_assignment_create_group(self):
        expected = [
            {
                'role': 'role_1',
                'project': 'project_1',
                'domain': None
            },
            {
                'role': 'role_1',
                'project': None,
                'domain': 'domain_1'
            }
        ]
        # validate the properties
        self.assertEqual(
            expected,
            self.test_role_assignment.properties.get(MixinClass.ROLES))

        self.test_role_assignment.create_assignment(group_id='group_1')

        # validate role assignment creation
        # role-group-domain
        self.roles.grant.assert_any_call(
            role='role_1',
            group='group_1',
            domain='domain_1')

        # role-group-project
        self.roles.grant.assert_any_call(
            role='role_1',
            group='group_1',
            project='project_1')

    def test_role_assignment_update_user(self):
        prop_diff = {
            MixinClass.ROLES: [
                {
                    'role': 'role_2',
                    'project': 'project_1'
                },
                {
                    'role': 'role_2',
                    'domain': 'domain_1'
                }
            ]
        }

        self.test_role_assignment.update_assignment(
            user_id='user_1',
            prop_diff=prop_diff)

        # Add role2-project1-domain1
        # role-user-domain
        self.roles.grant.assert_any_call(
            role='role_2',
            user='user_1',
            domain='domain_1')

        # role-user-project
        self.roles.grant.assert_any_call(
            role='role_2',
            user='user_1',
            project='project_1')

        # Remove role1-project1-domain1
        # role-user-domain
        self.roles.revoke.assert_any_call(
            role='role_1',
            user='user_1',
            domain='domain_1')

        # role-user-project
        self.roles.revoke.assert_any_call(
            role='role_1',
            user='user_1',
            project='project_1')

    def test_role_assignment_update_group(self):
        prop_diff = {
            MixinClass.ROLES: [
                {
                    'role': 'role_2',
                    'project': 'project_1'
                },
                {
                    'role': 'role_2',
                    'domain': 'domain_1'
                }
            ]
        }

        self.test_role_assignment.update_assignment(
            group_id='group_1',
            prop_diff=prop_diff)

        # Add role2-project1-domain1
        # role-group-domain
        self.roles.grant.assert_any_call(
            role='role_2',
            group='group_1',
            domain='domain_1')

        # role-group-project
        self.roles.grant.assert_any_call(
            role='role_2',
            group='group_1',
            project='project_1')

        # Remove role1-project1-domain1
        # role-group-domain
        self.roles.revoke.assert_any_call(
            role='role_1',
            group='group_1',
            domain='domain_1')

        # role-group-project
        self.roles.revoke.assert_any_call(
            role='role_1',
            group='group_1',
            project='project_1')

    def test_role_assignment_update_roles_no_change(self):
        prop_diff = {}

        self.test_role_assignment.update_assignment(
            group_id='group_1',
            prop_diff=prop_diff)
        self.assertEqual(0, self.roles.grant.call_count)
        self.assertEqual(0, self.roles.revoke.call_count)

        self.test_role_assignment.update_assignment(
            user_id='user_1',
            prop_diff=prop_diff)
        self.assertEqual(0, self.roles.grant.call_count)
        self.assertEqual(0, self.roles.revoke.call_count)

    def test_role_assignment_delete_user(self):
        self.assertIsNone(self.test_role_assignment.delete_assignment(
            user_id='user_1'))

        # Remove role1-project1-domain1
        # role-user-domain
        self.roles.revoke.assert_any_call(
            role='role_1',
            user='user_1',
            domain='domain_1')

        # role-user-project
        self.roles.revoke.assert_any_call(
            role='role_1',
            user='user_1',
            project='project_1')

    def test_role_assignment_delete_group(self):
        self.assertIsNone(self.test_role_assignment.delete_assignment(
            group_id='group_1'
        ))

        # Remove role1-project1-domain1
        # role-group-domain
        self.roles.revoke.assert_any_call(
            role='role_1',
            group='group_1',
            domain='domain_1')

        # role-group-project
        self.roles.revoke.assert_any_call(
            role='role_1',
            group='group_1',
            project='project_1')

    def test_role_assignment_delete_removed(self):
        self.test_role_assignment.parse_list_assignments.return_value = [
            {'role': 'role_1', 'domain': 'domain_1', 'project': None}]

        self.assertIsNone(self.test_role_assignment.delete_assignment(
            user_id='user_1'))

        expected = [
            ({'role': 'role_1', 'user': 'user_1', 'domain': 'domain_1'},)
        ]
        self.assertItemsEqual(expected, self.roles.revoke.call_args_list)

    def test_validate_1(self):
        self.test_role_assignment.properties = mock.MagicMock()

        # both project and domain are none
        self.test_role_assignment.properties.get.return_value = [
            dict(role='role1')]
        self.assertRaises(exception.StackValidationFailed,
                          self.test_role_assignment.validate)

    def test_validate_2(self):
        self.test_role_assignment.properties = mock.MagicMock()

        # both project and domain are not none
        self.test_role_assignment.properties.get.return_value = [
            dict(role='role1',
                 project='project1',
                 domain='domain1')
        ]
        self.assertRaises(exception.ResourcePropertyConflict,
                          self.test_role_assignment.validate)

    def test_empty_parse_list_assignments(self):
        self.test_role_assignment.parse_list_assignments = self.parse_assgmnts
        self.assertEqual([],
                         self.test_role_assignment.parse_list_assignments())
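    # NOTE: the parse_list_assignments tests below first restore the real
    # method (setUp replaced it with a MagicMock, keeping the original in
    # self.parse_assgmnts) and then feed it role_assignments.list() payloads,
    # checking that the nested 'scope' dict is flattened into the
    # role/project/domain form used by the rest of the mixin.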
    def test_user_parse_list_assignments(self):
        self._test_parse_list_assignments('user')

    def test_group_parse_list_assignments(self):
        self._test_parse_list_assignments('group')

    def _test_parse_list_assignments(self, entity=None):
        self.test_role_assignment.parse_list_assignments = self.parse_assgmnts
        dict_obj = mock.MagicMock()
        dict_obj.to_dict.side_effect = [{'scope': {
            'project': {'id': 'fc0fe982401643368ff2eb11d9ca70f1'}},
            'role': {'id': '3b8b253648f44256a457a5073b78021d'},
            entity: {'id': '4147558a763046cfb68fb870d58ef4cf'}},
            {'role': {'id': '3b8b253648f44258021d6a457a5073b7'},
             entity: {'id': '4147558a763046cfb68fb870d58ef4cf'}}]
        self.keystoneclient.role_assignments.list.return_value = [dict_obj,
                                                                  dict_obj]

        kwargs = {'%s_id' % entity: '4147558a763046cfb68fb870d58ef4cf'}
        list_assignments = self.test_role_assignment.parse_list_assignments(
            **kwargs)
        expected = [
            {'role': '3b8b253648f44256a457a5073b78021d',
             'project': 'fc0fe982401643368ff2eb11d9ca70f1',
             'domain': None},
            {'role': '3b8b253648f44258021d6a457a5073b7',
             'project': None,
             'domain': None},
        ]
        self.assertEqual(expected, list_assignments)


class KeystoneUserRoleAssignmentTest(common.HeatTestCase):

    role_assignment_template = copy.deepcopy(
        keystone_role_assignment_template)
    role = role_assignment_template['resources']['test_role_assignment']
    role['properties']['user'] = 'user_1'
    role['type'] = 'OS::Keystone::UserRoleAssignment'

    def setUp(self):
        super(KeystoneUserRoleAssignmentTest, self).setUp()

        self.ctx = utils.dummy_context()

        self.stack = stack.Stack(
            self.ctx, 'test_stack_keystone_user_role_add',
            template.Template(self.role_assignment_template)
        )
        self.test_role_assignment = self.stack['test_role_assignment']

        # Mock client
        self.keystoneclient = mock.Mock()
        self.patchobject(resource.Resource, 'client',
                         return_value=fake_ks.FakeKeystoneClient(
                             client=self.keystoneclient))
        self.roles = self.keystoneclient.roles

        # Mock client plugin
        def _side_effect(value):
            return value

        self.keystone_client_plugin = mock.MagicMock()
        self.keystone_client_plugin.get_user_id.side_effect = _side_effect
        self.keystone_client_plugin.get_domain_id.side_effect = _side_effect
        self.keystone_client_plugin.get_role_id.side_effect = _side_effect
        self.keystone_client_plugin.get_project_id.side_effect = _side_effect
        self.test_role_assignment.client_plugin = mock.MagicMock()
        (self.test_role_assignment.client_plugin.
         return_value) = self.keystone_client_plugin

        self.test_role_assignment.parse_list_assignments = mock.MagicMock()
        self.test_role_assignment.parse_list_assignments.return_value = [
            {'role': 'role_1', 'domain': 'domain_1', 'project': None},
            {'role': 'role_1', 'project': 'project_1', 'domain': None}]

    def test_user_role_assignment_handle_create(self):
        self.test_role_assignment.handle_create()

        # role-user-domain created
        self.roles.grant.assert_any_call(
            role='role_1',
            user='user_1',
            domain='domain_1')

        # role-user-project created
        self.roles.grant.assert_any_call(
            role='role_1',
            user='user_1',
            project='project_1')

    def test_user_role_assignment_handle_update(self):
        prop_diff = {
            MixinClass.ROLES: [
                {
                    'role': 'role_2',
                    'project': 'project_1'
                },
                {
                    'role': 'role_2',
                    'domain': 'domain_1'
                }
            ]
        }

        self.test_role_assignment.handle_update(json_snippet=None,
                                                tmpl_diff=None,
                                                prop_diff=prop_diff)

        # Add role2-project1-domain1
        # role-user-domain
        self.roles.grant.assert_any_call(
            role='role_2',
            user='user_1',
            domain='domain_1')

        # role-user-project
        self.roles.grant.assert_any_call(
            role='role_2',
            user='user_1',
            project='project_1')

        # Remove role1-project1-domain1
        # role-user-domain
        self.roles.revoke.assert_any_call(
            role='role_1',
            user='user_1',
            domain='domain_1')

        # role-user-project
        self.roles.revoke.assert_any_call(
            role='role_1',
            user='user_1',
            project='project_1')

    def test_user_role_assignment_handle_delete(self):
        self.assertIsNone(self.test_role_assignment.handle_delete())

        # Remove role1-project1-domain1
        # role-user-domain
        self.roles.revoke.assert_any_call(
            role='role_1',
            user='user_1',
            domain='domain_1')

        # role-user-project
        self.roles.revoke.assert_any_call(
            role='role_1',
            user='user_1',
            project='project_1')

    def test_user_role_assignment_delete_user_not_found(self):
        self.keystone_client_plugin.get_user_id.side_effect = [
            exception.EntityNotFound]
        self.assertIsNone(self.test_role_assignment.handle_delete())
        self.roles.revoke.assert_not_called()
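# NOTE: KeystoneGroupRoleAssignmentTest below mirrors the user-based suite
# above one-for-one, swapping user_1/user for group_1/group in the template
# and in every grant/revoke expectation; keeping the two suites symmetric
# makes the expected keystone calls easy to diff.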
class KeystoneGroupRoleAssignmentTest(common.HeatTestCase):

    role_assignment_template = copy.deepcopy(
        keystone_role_assignment_template)
    role = role_assignment_template['resources']['test_role_assignment']
    role['properties']['group'] = 'group_1'
    role['type'] = 'OS::Keystone::GroupRoleAssignment'

    def setUp(self):
        super(KeystoneGroupRoleAssignmentTest, self).setUp()

        self.ctx = utils.dummy_context()

        self.stack = stack.Stack(
            self.ctx, 'test_stack_keystone_group_role_add',
            template.Template(self.role_assignment_template)
        )
        self.test_role_assignment = self.stack['test_role_assignment']

        # Mock client
        self.keystoneclient = mock.Mock()
        self.patchobject(resource.Resource, 'client',
                         return_value=fake_ks.FakeKeystoneClient(
                             client=self.keystoneclient))
        self.roles = self.keystoneclient.roles

        # Mock client plugin
        def _side_effect(value):
            return value

        self.keystone_client_plugin = mock.MagicMock()
        self.keystone_client_plugin.get_group_id.side_effect = _side_effect
        self.keystone_client_plugin.get_domain_id.side_effect = _side_effect
        self.keystone_client_plugin.get_role_id.side_effect = _side_effect
        self.keystone_client_plugin.get_project_id.side_effect = _side_effect
        self.test_role_assignment.client_plugin = mock.MagicMock()
        (self.test_role_assignment.client_plugin.
         return_value) = self.keystone_client_plugin

        self.test_role_assignment.parse_list_assignments = mock.MagicMock()
        self.test_role_assignment.parse_list_assignments.return_value = [
            {'role': 'role_1', 'domain': 'domain_1', 'project': None},
            {'role': 'role_1', 'project': 'project_1', 'domain': None}]

    def test_group_role_assignment_handle_create(self):
        self.test_role_assignment.handle_create()

        # role-group-domain created
        self.roles.grant.assert_any_call(
            role='role_1',
            group='group_1',
            domain='domain_1')

        # role-group-project created
        self.roles.grant.assert_any_call(
            role='role_1',
            group='group_1',
            project='project_1')

    def test_group_role_assignment_handle_update(self):
        prop_diff = {
            MixinClass.ROLES: [
                {
                    'role': 'role_2',
                    'project': 'project_1'
                },
                {
                    'role': 'role_2',
                    'domain': 'domain_1'
                }
            ]
        }

        self.test_role_assignment.handle_update(json_snippet=None,
                                                tmpl_diff=None,
                                                prop_diff=prop_diff)

        # Add role2-project1-domain1
        # role-group-domain
        self.roles.grant.assert_any_call(
            role='role_2',
            group='group_1',
            domain='domain_1')

        # role-group-project
        self.roles.grant.assert_any_call(
            role='role_2',
            group='group_1',
            project='project_1')

        # Remove role1-project1-domain1
        # role-group-domain
        self.roles.revoke.assert_any_call(
            role='role_1',
            group='group_1',
            domain='domain_1')

        # role-group-project
        self.roles.revoke.assert_any_call(
            role='role_1',
            group='group_1',
            project='project_1')

    def test_group_role_assignment_handle_delete(self):
        self.assertIsNone(self.test_role_assignment.handle_delete())

        # Remove role1-project1-domain1
        # role-group-domain
        self.roles.revoke.assert_any_call(
            role='role_1',
            group='group_1',
            domain='domain_1')

        # role-group-project
        self.roles.revoke.assert_any_call(
            role='role_1',
            group='group_1',
            project='project_1')

    def test_group_role_assignment_delete_group_not_found(self):
        self.keystone_client_plugin.get_group_id.side_effect = [
            exception.EntityNotFound]
        self.assertIsNone(self.test_role_assignment.handle_delete())
        self.roles.revoke.assert_not_called()
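
# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the original module: the update tests above
# imply a set-difference reconciliation between current and desired role
# assignments. A minimal standalone version of that idea (an assumption about
# the shape of the computation, not the mixin's actual code) might be:

def _example_diff_assignments(current, desired):
    """Return (to_grant, to_revoke) lists for an assignment update."""
    def _key(a):
        return (a.get('role'), a.get('project'), a.get('domain'))
    cur = {_key(a): a for a in current}
    new = {_key(a): a for a in desired}
    to_grant = [new[k] for k in new if k not in cur]
    to_revoke = [cur[k] for k in cur if k not in new]
    return to_grant, to_revoke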
heat-10.0.2/heat/tests/openstack/keystone/test_group.py0000666000175000017500000003020413343562340023221 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks
from heat.engine import constraints
from heat.engine import properties
from heat.engine import resource
from heat.engine.resources.openstack.keystone import group
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import utils

keystone_group_template = {
    'heat_template_version': '2013-05-23',
    'resources': {
        'test_group': {
            'type': 'OS::Keystone::Group',
            'properties': {
                'name': 'test_group_1',
                'description': 'Test group',
                'domain': 'default'
            }
        }
    }
}


class KeystoneGroupTest(common.HeatTestCase):
    def setUp(self):
        super(KeystoneGroupTest, self).setUp()

        self.ctx = utils.dummy_context()

        self.stack = stack.Stack(
            self.ctx, 'test_stack_keystone',
            template.Template(keystone_group_template)
        )
        self.test_group = self.stack['test_group']

        # Mock client
        self.keystoneclient = mock.Mock()
        self.patchobject(resource.Resource, 'client',
                         return_value=fake_ks.FakeKeystoneClient(
                             client=self.keystoneclient))
        self.groups = self.keystoneclient.groups
        self.role_assignments = self.keystoneclient.role_assignments

        # Mock client plugin
        def _side_effect(value):
            return value

        self.keystone_client_plugin = mock.MagicMock()
        (self.keystone_client_plugin.get_domain_id.
         side_effect) = _side_effect
        self.test_group.client_plugin = mock.MagicMock()
        (self.test_group.client_plugin.
         return_value) = self.keystone_client_plugin

    def _get_mock_group(self):
        value = mock.MagicMock()
        group_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'
        value.id = group_id
        return value

    def test_properties_title(self):
        property_title_map = {
            group.KeystoneGroup.NAME: 'name',
            group.KeystoneGroup.DESCRIPTION: 'description',
            group.KeystoneGroup.DOMAIN: 'domain'
        }

        for actual_title, expected_title in property_title_map.items():
            self.assertEqual(
                expected_title,
                actual_title,
                'KeystoneGroup PROPERTIES(%s) title modified.'
                % actual_title)

    def test_property_name_validate_schema(self):
        schema = group.KeystoneGroup.properties_schema[
            group.KeystoneGroup.NAME]
        self.assertEqual(
            True,
            schema.update_allowed,
            'update_allowed for property %s is modified'
            % group.KeystoneGroup.NAME)

        self.assertEqual(properties.Schema.STRING,
                         schema.type,
                         'type for property %s is modified'
                         % group.KeystoneGroup.NAME)

        self.assertEqual('Name of keystone group.',
                         schema.description,
                         'description for property %s is modified'
                         % group.KeystoneGroup.NAME)

    def test_property_description_validate_schema(self):
        schema = group.KeystoneGroup.properties_schema[
            group.KeystoneGroup.DESCRIPTION]
        self.assertEqual(
            True,
            schema.update_allowed,
            'update_allowed for property %s is modified'
            % group.KeystoneGroup.DESCRIPTION)

        self.assertEqual(properties.Schema.STRING,
                         schema.type,
                         'type for property %s is modified'
                         % group.KeystoneGroup.DESCRIPTION)

        self.assertEqual('Description of keystone group.',
                         schema.description,
                         'description for property %s is modified'
                         % group.KeystoneGroup.DESCRIPTION)

        self.assertEqual(
            '',
            schema.default,
            'default for property %s is modified'
            % group.KeystoneGroup.DESCRIPTION)

    def test_property_domain_validate_schema(self):
        schema = group.KeystoneGroup.properties_schema[
            group.KeystoneGroup.DOMAIN]
        self.assertEqual(
            True,
            schema.update_allowed,
            'update_allowed for property %s is modified'
            % group.KeystoneGroup.DOMAIN)

        self.assertEqual(properties.Schema.STRING,
                         schema.type,
                         'type for property %s is modified'
                         % group.KeystoneGroup.DOMAIN)

        self.assertEqual('Name or id of keystone domain.',
                         schema.description,
                         'description for property %s is modified'
                         % group.KeystoneGroup.DOMAIN)

        self.assertEqual([constraints.CustomConstraint('keystone.domain')],
                         schema.constraints,
                         'constrains for property %s is modified'
                         % group.KeystoneGroup.DOMAIN)

        self.assertEqual(
            'default',
            schema.default,
            'default for property %s is modified'
            % group.KeystoneGroup.DOMAIN)

    def _get_property_schema_value_default(self, name):
        schema = group.KeystoneGroup.properties_schema[name]
        return schema.default

    def test_group_handle_create(self):
        mock_group = self._get_mock_group()
        self.groups.create.return_value = mock_group

        # validate the properties
        self.assertEqual(
            'test_group_1',
            self.test_group.properties.get(group.KeystoneGroup.NAME))
        self.assertEqual(
            'Test group',
            self.test_group.properties.get(group.KeystoneGroup.DESCRIPTION))
        self.assertEqual(
            'default',
            self.test_group.properties.get(group.KeystoneGroup.DOMAIN))

        self.test_group.handle_create()

        # validate group creation
        self.groups.create.assert_called_once_with(
            name='test_group_1',
            description='Test group',
            domain='default')

        # validate physical resource id
        self.assertEqual(mock_group.id, self.test_group.resource_id)
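    # NOTE: test_group_handle_create_default below stubs the resource's
    # properties object with a MagicMock wired to the schema defaults, so
    # description='' and domain='default', plus the physical_resource_name()
    # fallback for name, can be asserted without building a second template.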
    def test_group_handle_create_default(self):
        values = {
            group.KeystoneGroup.NAME: None,
            group.KeystoneGroup.DESCRIPTION:
                (self._get_property_schema_value_default(
                    group.KeystoneGroup.DESCRIPTION)),
            group.KeystoneGroup.DOMAIN:
                (self._get_property_schema_value_default(
                    group.KeystoneGroup.DOMAIN)),
            group.KeystoneGroup.ROLES: None
        }

        def _side_effect(key):
            return values[key]

        mock_group = self._get_mock_group()
        self.groups.create.return_value = mock_group
        self.test_group.properties = mock.MagicMock()
        self.test_group.properties.get.side_effect = _side_effect
        self.test_group.properties.__getitem__.side_effect = _side_effect

        self.test_group.physical_resource_name = mock.MagicMock()
        self.test_group.physical_resource_name.return_value = 'foo'

        # validate the properties
        self.assertEqual(
            None,
            self.test_group.properties.get(group.KeystoneGroup.NAME))
        self.assertEqual(
            '',
            self.test_group.properties.get(group.KeystoneGroup.DESCRIPTION))
        self.assertEqual(
            'default',
            self.test_group.properties.get(group.KeystoneGroup.DOMAIN))

        self.test_group.handle_create()

        # validate group creation
        self.groups.create.assert_called_once_with(
            name='foo',
            description='',
            domain='default')

    def test_group_handle_update(self):
        self.test_group.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        prop_diff = {group.KeystoneGroup.NAME: 'test_group_1_updated',
                     group.KeystoneGroup.DESCRIPTION: 'Test Group updated',
                     group.KeystoneGroup.DOMAIN: 'test_domain'}

        self.test_group.handle_update(json_snippet=None,
                                      tmpl_diff=None,
                                      prop_diff=prop_diff)

        self.groups.update.assert_called_once_with(
            group=self.test_group.resource_id,
            name=prop_diff[group.KeystoneGroup.NAME],
            description=prop_diff[group.KeystoneGroup.DESCRIPTION],
            domain_id='test_domain'
        )

        # validate the role assignment isn't updated
        self.roles = self.keystoneclient.roles
        self.assertEqual(0, self.roles.revoke.call_count)
        self.assertEqual(0, self.roles.grant.call_count)

    def test_group_handle_update_default(self):
        self.test_group.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        prop_diff = {group.KeystoneGroup.DESCRIPTION: 'Test Project updated'}

        self.test_group.handle_update(json_snippet=None,
                                      tmpl_diff=None,
                                      prop_diff=prop_diff)

        # validate default name to physical resource name and
        # domain is set from stored properties used during creation.
        self.groups.update.assert_called_once_with(
            group=self.test_group.resource_id,
            name=None,
            description=prop_diff[group.KeystoneGroup.DESCRIPTION],
            domain_id='default'
        )

    def test_group_handle_delete(self):
        self.test_group.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'
        self.groups.delete.return_value = None

        self.test_group.handle_delete()
        self.groups.delete.assert_called_once_with(
            self.test_group.resource_id
        )

    def test_group_handle_delete_resource_id_is_none(self):
        self.resource_id = None
        self.assertIsNone(self.test_group.handle_delete())

    def test_group_handle_delete_not_found(self):
        exc = self.keystoneclient.NotFound
        self.groups.delete.side_effect = exc

        self.assertIsNone(self.test_group.handle_delete())

    def test_show_resource(self):
        group = mock.Mock()
        group.to_dict.return_value = {'attr': 'val'}
        self.groups.get.return_value = group
        res = self.test_group._show_resource()
        self.assertEqual({'attr': 'val'}, res)

    def test_get_live_state(self):
        group = mock.Mock()
        group.to_dict.return_value = {
            'id': '48ee1f94b77047e592de55a4934c198c',
            'domain_id': 'default',
            'name': 'fake',
            'links': {'self': 'some_link'},
            'description': ''}
        roles = mock.MagicMock()
        roles.to_dict.return_value = {
            'scope': {
                'project': {'id': 'fc0fe982401643368ff2eb11d9ca70f1'}},
            'role': {'id': '3b8b253648f44256a457a5073b78021d'},
            'group': {'id': '4147558a763046cfb68fb870d58ef4cf'}}
        self.role_assignments.list.return_value = [roles]
        self.groups.get.return_value = group
        self.test_group.resource_id = '1234'

        reality = self.test_group.get_live_state(self.test_group.properties)
        expected = {
            'domain': 'default',
            'name': 'fake',
            'description': '',
            'roles': [{
                'role': '3b8b253648f44256a457a5073b78021d',
                'project': 'fc0fe982401643368ff2eb11d9ca70f1',
                'domain': None
            }]
        }

        self.assertEqual(set(expected.keys()), set(reality.keys()))
        for key in expected:
            self.assertEqual(expected[key], reality[key])
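
# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the original module: test_get_live_state
# above shows the translation get_live_state is expected to perform on a
# keystone group payload: rename domain_id to domain, keep name/description,
# drop id and links, and fold the role assignments in separately. For the
# simple fields only (roles omitted), the mapping it asserts is just:

def _example_group_live_state(payload):
    """Map a keystone group dict onto the property names Heat exposes."""
    return {
        'domain': payload.get('domain_id'),
        'name': payload.get('name'),
        'description': payload.get('description'),
    }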
heat-10.0.2/heat/tests/openstack/keystone/test_user.py0000666000175000017500000003104313343562340023045 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks
from heat.engine import resource
from heat.engine.resources.openstack.keystone import user
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import utils

keystone_user_template = {
    'heat_template_version': '2013-05-23',
    'resources': {
        'test_user': {
            'type': 'OS::Keystone::User',
            'properties': {
                'name': 'test_user_1',
                'description': 'Test user',
                'domain': 'default',
                'email': 'abc@xyz.com',
                'password': 'password',
                'default_project': 'project_1',
                'groups': ['group1', 'group2'],
                'enabled': True
            }
        }
    }
}


class KeystoneUserTest(common.HeatTestCase):
    def setUp(self):
        super(KeystoneUserTest, self).setUp()

        self.ctx = utils.dummy_context()

        self.stack = stack.Stack(
            self.ctx, 'test_stack_keystone',
            template.Template(keystone_user_template)
        )
        self.test_user = self.stack['test_user']

        # Mock client
        self.keystoneclient = mock.Mock()
        self.patchobject(resource.Resource, 'client',
                         return_value=fake_ks.FakeKeystoneClient(
                             client=self.keystoneclient))
        self.users = self.keystoneclient.users

        # Mock client plugin
        def _side_effect(value):
            return value

        self.keystone_client_plugin = mock.MagicMock()
        (self.keystone_client_plugin.get_domain_id.
         side_effect) = _side_effect
        (self.keystone_client_plugin.get_group_id.
         side_effect) = _side_effect
        (self.keystone_client_plugin.get_project_id.
         side_effect) = _side_effect
        self.test_user.client_plugin = mock.MagicMock()
        (self.test_user.client_plugin.
         return_value) = self.keystone_client_plugin

    def _get_mock_user(self):
        value = mock.MagicMock()
        user_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'
        value.id = user_id
        value.name = 'test_user_1'
        value.default_project_id = 'project_1'
        value.domain_id = 'default'
        value.enabled = True
        value.password_expires_at = '2016-12-10T17:28:49.000000'
        return value

    def test_user_handle_create(self):
        mock_user = self._get_mock_user()
        self.users.create.return_value = mock_user
        self.users.get.return_value = mock_user
        self.users.add_to_group = mock.MagicMock()

        # validate the properties
        self.assertEqual(
            'test_user_1',
            self.test_user.properties.get(user.KeystoneUser.NAME))
        self.assertEqual(
            'Test user',
            self.test_user.properties.get(user.KeystoneUser.DESCRIPTION))
        self.assertEqual(
            'default',
            self.test_user.properties.get(user.KeystoneUser.DOMAIN))
        self.assertEqual(
            True,
            self.test_user.properties.get(user.KeystoneUser.ENABLED))
        self.assertEqual(
            'abc@xyz.com',
            self.test_user.properties.get(user.KeystoneUser.EMAIL))
        self.assertEqual(
            'password',
            self.test_user.properties.get(user.KeystoneUser.PASSWORD))
        self.assertEqual(
            'project_1',
            self.test_user.properties.get(user.KeystoneUser.DEFAULT_PROJECT))
        self.assertEqual(
            ['group1', 'group2'],
            self.test_user.properties.get(user.KeystoneUser.GROUPS))

        self.test_user.handle_create()

        # validate user creation
        self.users.create.assert_called_once_with(
            name='test_user_1',
            description='Test user',
            domain='default',
            enabled=True,
            email='abc@xyz.com',
            password='password',
            default_project='project_1')

        # validate physical resource id
        self.assertEqual(mock_user.id, self.test_user.resource_id)

        # validate groups
        for group in ['group1', 'group2']:
            self.users.add_to_group.assert_any_call(
                self.test_user.resource_id,
                group)

    def _get_property_schema_value_default(self, name):
        schema = user.KeystoneUser.properties_schema[name]
        return schema.default

    def test_user_handle_create_default(self):
        values = {
            user.KeystoneUser.NAME: None,
            user.KeystoneUser.DESCRIPTION:
                (self._get_property_schema_value_default(
                    user.KeystoneUser.DESCRIPTION)),
            user.KeystoneUser.DOMAIN:
                (self._get_property_schema_value_default(
                    user.KeystoneUser.DOMAIN)),
            user.KeystoneUser.ENABLED:
                (self._get_property_schema_value_default(
                    user.KeystoneUser.ENABLED)),
            user.KeystoneUser.ROLES: None,
            user.KeystoneUser.GROUPS: None,
            user.KeystoneUser.PASSWORD: 'password',
            user.KeystoneUser.EMAIL: 'abc@xyz.com',
            user.KeystoneUser.DEFAULT_PROJECT: 'default_project'
        }

        def _side_effect(key):
            return values[key]

        mock_user = self._get_mock_user()
        self.users.create.return_value = mock_user
        self.test_user.properties = mock.MagicMock()
        self.test_user.properties.get.side_effect = _side_effect
        self.test_user.properties.__getitem__.side_effect = _side_effect

        self.test_user.physical_resource_name = mock.MagicMock()
        self.test_user.physical_resource_name.return_value = 'foo'

        self.test_user.handle_create()

        # validate user creation
        self.users.create.assert_called_once_with(
            name='foo',
            description='',
            domain='default',
            enabled=True,
            email='abc@xyz.com',
            password='password',
            default_project='default_project')
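    # NOTE: group membership is not part of users.update(); it is reconciled
    # with separate add_to_group / remove_from_group calls, so the update
    # tests below assert on the update() arguments and on the membership
    # delta (group3 added, group2 removed) independently.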
    def test_user_handle_update(self):
        self.test_user.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        # add new group group3 and remove group group2
        prop_diff = {user.KeystoneUser.NAME: 'test_user_1_updated',
                     user.KeystoneUser.DESCRIPTION: 'Test User updated',
                     user.KeystoneUser.ENABLED: False,
                     user.KeystoneUser.EMAIL: 'xyz@abc.com',
                     user.KeystoneUser.PASSWORD: 'passWORD',
                     user.KeystoneUser.DEFAULT_PROJECT: 'project_2',
                     user.KeystoneUser.GROUPS: ['group1', 'group3']}

        self.test_user.handle_update(json_snippet=None,
                                     tmpl_diff=None,
                                     prop_diff=prop_diff)

        # validate user update
        self.users.update.assert_called_once_with(
            user=self.test_user.resource_id,
            domain=self.test_user.properties[user.KeystoneUser.DOMAIN],
            name=prop_diff[user.KeystoneUser.NAME],
            description=prop_diff[user.KeystoneUser.DESCRIPTION],
            email=prop_diff[user.KeystoneUser.EMAIL],
            password=prop_diff[user.KeystoneUser.PASSWORD],
            default_project=prop_diff[user.KeystoneUser.DEFAULT_PROJECT],
            enabled=prop_diff[user.KeystoneUser.ENABLED]
        )

        # validate the new groups added
        for group in ['group3']:
            self.users.add_to_group.assert_any_call(
                self.test_user.resource_id,
                group)

        # validate the removed groups are deleted
        for group in ['group2']:
            self.users.remove_from_group.assert_any_call(
                self.test_user.resource_id,
                group)

        # validate the role assignment isn't updated
        self.roles = self.keystoneclient.roles
        self.roles.revoke.assert_not_called()
        self.roles.grant.assert_not_called()

    def test_user_handle_update_password_only(self):
        self.test_user.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        # Update the password only
        prop_diff = {user.KeystoneUser.PASSWORD: 'passWORD'}

        self.test_user.handle_update(json_snippet=None,
                                     tmpl_diff=None,
                                     prop_diff=prop_diff)

        # Validate user update
        self.users.update.assert_called_once_with(
            user=self.test_user.resource_id,
            domain=self.test_user.properties[user.KeystoneUser.DOMAIN],
            password=prop_diff[user.KeystoneUser.PASSWORD]
        )

        # Validate that there is no change in groups
        self.users.add_to_group.assert_not_called()
        self.users.remove_from_group.assert_not_called()

    def test_user_handle_delete(self):
        self.test_user.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'
        self.users.delete.return_value = None

        self.assertIsNone(self.test_user.handle_delete())
        self.users.delete.assert_called_once_with(
            self.test_user.resource_id
        )

        # validate the groups are deleted
        for group in ['group1', 'group2']:
            self.users.remove_from_group.assert_any_call(
                self.test_user.resource_id,
                group)

    def test_user_handle_delete_resource_id_is_none(self):
        self.resource_id = None
        self.assertIsNone(self.test_user.handle_delete())

    def test_user_handle_delete_not_found(self):
        exc = self.keystoneclient.NotFound
        self.users.delete.side_effect = exc

        self.assertIsNone(self.test_user.handle_delete())

    def test_show_resource(self):
        user = mock.Mock()
        user.to_dict.return_value = {'attr': 'val'}
        self.users.get.return_value = user
        res = self.test_user._show_resource()
        self.assertEqual({'attr': 'val'}, res)

    def test_get_live_state(self):
        user = mock.MagicMock()
        user.to_dict.return_value = {
            'description': '',
            'enabled': True,
            'domain_id': 'default',
            'email': 'fake@312.com',
            'default_project_id': '859aee961e30408e813853e1cffad089',
            'id': '4060e773e26842a88b7490528d78de4f',
            'name': 'user1-user-275g3vdmwuo5'}
        self.users.get.return_value = user

        role = mock.MagicMock()
        role.to_dict.return_value = {
            'scope': {'domain': {'id': '1234'}},
            'role': {'id': '4321'}
        }
        self.keystoneclient.role_assignments.list.return_value = [role]

        group = mock.MagicMock()
        group.id = '39393'
        self.keystoneclient.groups.list.return_value = [group]

        self.test_user.resource_id = '1111'

        reality = self.test_user.get_live_state(self.test_user.properties)
        expected = {
            'description': '',
            'enabled': True,
            'domain': 'default',
            'email': 'fake@312.com',
            'default_project': '859aee961e30408e813853e1cffad089',
            'name': 'user1-user-275g3vdmwuo5',
            'groups': ['39393'],
            'roles': [{'role': '4321', 'domain': '1234', 'project': None}]
        }

        self.assertEqual(set(expected.keys()), set(reality.keys()))
        for key in expected:
            self.assertEqual(expected[key], reality[key])

    def test_resolve_attributes(self):
        mock_user = self._get_mock_user()
        self.test_user.resource_id = mock_user['id']
        self.users.get.return_value = mock_user

        self.assertEqual(
            mock_user.name,
            self.test_user._resolve_attribute('name'))
        self.assertEqual(
            mock_user.default_project_id,
            self.test_user._resolve_attribute('default_project_id'))
        self.assertEqual(
            mock_user.domain_id,
            self.test_user._resolve_attribute('domain_id'))
        self.assertEqual(
            mock_user.enabled,
            self.test_user._resolve_attribute('enabled'))
        self.assertEqual(
            mock_user.password_expires_at,
            self.test_user._resolve_attribute('password_expires_at'))
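
# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the original module: test_resolve_attributes
# above implies attribute resolution is a straight proxy onto the user object
# returned by users.get(). Under that assumption (the resource's real helper
# may differ), the core of it reduces to:

def _example_resolve_user_attribute(client, user_id, name):
    """Fetch the user once and read the requested attribute off it."""
    user = client.users.get(user_id)
    return getattr(user, name, None)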
enabled=True) # validate physical resource id self.assertEqual(mock_region.id, self.test_region.resource_id) def test_region_handle_create_minimal(self): values = { 'description': 'sample region', 'enabled': True, 'parent_region': None, 'id': None } def _side_effect(key): return values[key] mock_region = self._get_mock_region() self.regions.create.return_value = mock_region self.test_region.properties = mock.MagicMock() self.test_region.properties.__getitem__.side_effect = _side_effect self.test_region.handle_create() self.regions.create.assert_called_once_with( id=None, description='sample region', parent_region=None, enabled=True) def test_region_handle_update(self): self.test_region.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = {region.KeystoneRegion.DESCRIPTION: 'Test Region updated', region.KeystoneRegion.ENABLED: False, region.KeystoneRegion.PARENT_REGION: 'test_parent_region'} self.test_region.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) self.regions.update.assert_called_once_with( region=self.test_region.resource_id, description=prop_diff[region.KeystoneRegion.DESCRIPTION], enabled=prop_diff[region.KeystoneRegion.ENABLED], parent_region='test_parent_region' ) def test_region_get_live_state(self): self.test_region.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_dict = mock.MagicMock() mock_dict.to_dict.return_value = { "parent_region_id": None, "enabled": True, "id": "79e4d02f8b454a7885c413d5d4297813", "links": {"self": "link"}, "description": "" } self.regions.get.return_value = mock_dict reality = self.test_region.get_live_state(self.test_region.properties) expected = { "parent_region": None, "enabled": True, "description": "" } self.assertEqual(expected, reality) heat-10.0.2/heat/tests/openstack/keystone/__init__.py0000666000175000017500000000000013343562340022554 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/keystone/test_project.py0000666000175000017500000003702113343562340023537 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
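# A minimal, self-contained sketch (illustrative names only, not Heat APIs)
# of the client-mocking idiom the keystone resource tests above rely on:
# the resource's client is swapped for a MagicMock, so every handler call
# is recorded and can be verified with assert_called_once_with. Uses only
# the stdlib unittest.mock.
import unittest.mock as mock

keystoneclient = mock.MagicMock()
regions = keystoneclient.regions
regions.create.return_value = mock.MagicMock(id='477e8273')

# A handle_create-style handler would issue a call like this via the client:
created = regions.create(id='test_region_1', description='Test region',
                         parent_region='default_region', enabled=True)

# The test then asserts on the recorded call and the returned id:
regions.create.assert_called_once_with(id='test_region_1',
                                       description='Test region',
                                       parent_region='default_region',
                                       enabled=True)
assert created.id == '477e8273'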
import mock from heat.engine import constraints from heat.engine import properties from heat.engine.resources.openstack.keystone import project from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils keystone_project_template = { 'heat_template_version': '2013-05-23', 'resources': { 'test_project': { 'type': 'OS::Keystone::Project', 'properties': { 'name': 'test_project_1', 'description': 'Test project', 'domain': 'default', 'enabled': 'True', 'parent': 'my_father', 'tags': ['label', 'insignia'] } } } } class KeystoneProjectTest(common.HeatTestCase): def setUp(self): super(KeystoneProjectTest, self).setUp() self.ctx = utils.dummy_context() self.stack = stack.Stack( self.ctx, 'test_stack_keystone', template.Template(keystone_project_template) ) self.test_project = self.stack['test_project'] # Mock client self.keystoneclient = mock.MagicMock() self.test_project.client = mock.MagicMock() self.test_project.client.return_value = self.keystoneclient self.projects = self.keystoneclient.projects # Mock client plugin def _id_side_effect(value): return value keystone_client_plugin = mock.MagicMock() keystone_client_plugin.get_domain_id.side_effect = _id_side_effect keystone_client_plugin.get_project_id.side_effect = _id_side_effect self.test_project.client_plugin = mock.MagicMock() self.test_project.client_plugin.return_value = keystone_client_plugin def _get_mock_project(self): value = mock.MagicMock() project_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' value.id = project_id value.name = 'test_project_1' value.domain_id = 'default' value.enabled = True value.parent_id = 'my_father' value.is_domain = False return value def test_project_handle_create(self): mock_project = self._get_mock_project() self.projects.create.return_value = mock_project # validate the properties self.assertEqual( 'test_project_1', self.test_project.properties.get(project.KeystoneProject.NAME)) self.assertEqual( 'Test project', self.test_project.properties.get( project.KeystoneProject.DESCRIPTION)) self.assertEqual( 'default', self.test_project.properties.get(project.KeystoneProject.DOMAIN)) self.assertEqual( True, self.test_project.properties.get(project.KeystoneProject.ENABLED)) self.assertEqual( 'my_father', self.test_project.properties.get(project.KeystoneProject.PARENT)) self.assertEqual( ['label', 'insignia'], self.test_project.properties.get(project.KeystoneProject.TAGS)) self.test_project.handle_create() # validate project creation self.projects.create.assert_called_once_with( name='test_project_1', description='Test project', domain='default', enabled=True, parent='my_father', tags=['label', 'insignia']) # validate physical resource id self.assertEqual(mock_project.id, self.test_project.resource_id) def test_properties_title(self): property_title_map = { project.KeystoneProject.NAME: 'name', project.KeystoneProject.DESCRIPTION: 'description', project.KeystoneProject.DOMAIN: 'domain', project.KeystoneProject.ENABLED: 'enabled', project.KeystoneProject.PARENT: 'parent' } for actual_title, expected_title in property_title_map.items(): self.assertEqual( expected_title, actual_title, 'KeystoneProject PROPERTIES(%s) title modified.' 
% actual_title) def test_property_name_validate_schema(self): schema = project.KeystoneProject.properties_schema[ project.KeystoneProject.NAME] self.assertEqual( True, schema.update_allowed, 'update_allowed for property %s is modified' % project.KeystoneProject.NAME) self.assertEqual(properties.Schema.STRING, schema.type, 'type for property %s is modified' % project.KeystoneProject.NAME) self.assertEqual('Name of keystone project.', schema.description, 'description for property %s is modified' % project.KeystoneProject.NAME) def test_property_description_validate_schema(self): schema = project.KeystoneProject.properties_schema[ project.KeystoneProject.DESCRIPTION] self.assertEqual( True, schema.update_allowed, 'update_allowed for property %s is modified' % project.KeystoneProject.DESCRIPTION) self.assertEqual(properties.Schema.STRING, schema.type, 'type for property %s is modified' % project.KeystoneProject.DESCRIPTION) self.assertEqual('Description of keystone project.', schema.description, 'description for property %s is modified' % project.KeystoneProject.DESCRIPTION) self.assertEqual( '', schema.default, 'default for property %s is modified' % project.KeystoneProject.DESCRIPTION) def test_property_domain_validate_schema(self): schema = project.KeystoneProject.properties_schema[ project.KeystoneProject.DOMAIN] self.assertEqual( True, schema.update_allowed, 'update_allowed for property %s is modified' % project.KeystoneProject.DOMAIN) self.assertEqual(properties.Schema.STRING, schema.type, 'type for property %s is modified' % project.KeystoneProject.DOMAIN) self.assertEqual('Name or id of keystone domain.', schema.description, 'description for property %s is modified' % project.KeystoneProject.DOMAIN) self.assertEqual( [constraints.CustomConstraint('keystone.domain')], schema.constraints, 'constrains for property %s is modified' % project.KeystoneProject.DOMAIN) self.assertEqual( 'default', schema.default, 'default for property %s is modified' % project.KeystoneProject.DOMAIN) def test_property_enabled_validate_schema(self): schema = project.KeystoneProject.properties_schema[ project.KeystoneProject.ENABLED] self.assertEqual( True, schema.update_allowed, 'update_allowed for property %s is modified' % project.KeystoneProject.DOMAIN) self.assertEqual(properties.Schema.BOOLEAN, schema.type, 'type for property %s is modified' % project.KeystoneProject.ENABLED) self.assertEqual('This project is enabled or disabled.', schema.description, 'description for property %s is modified' % project.KeystoneProject.ENABLED) self.assertEqual( True, schema.default, 'default for property %s is modified' % project.KeystoneProject.ENABLED) def _get_property_schema_value_default(self, name): schema = project.KeystoneProject.properties_schema[name] return schema.default def test_project_handle_create_default(self): values = { project.KeystoneProject.NAME: None, project.KeystoneProject.DESCRIPTION: (self._get_property_schema_value_default( project.KeystoneProject.DESCRIPTION)), project.KeystoneProject.DOMAIN: (self._get_property_schema_value_default( project.KeystoneProject.DOMAIN)), project.KeystoneProject.ENABLED: (self._get_property_schema_value_default( project.KeystoneProject.ENABLED)), project.KeystoneProject.PARENT: (self._get_property_schema_value_default( project.KeystoneProject.PARENT)), project.KeystoneProject.TAGS: (self._get_property_schema_value_default( project.KeystoneProject.TAGS)) } def _side_effect(key): return values[key] mock_project = self._get_mock_project() 
self.projects.create.return_value = mock_project self.test_project.properties = mock.MagicMock() self.test_project.properties.get.side_effect = _side_effect self.test_project.properties.__getitem__.side_effect = _side_effect self.test_project.physical_resource_name = mock.MagicMock() self.test_project.physical_resource_name.return_value = 'foo' # validate the properties self.assertEqual( None, self.test_project.properties.get(project.KeystoneProject.NAME)) self.assertEqual( '', self.test_project.properties.get( project.KeystoneProject.DESCRIPTION)) self.assertEqual( 'default', self.test_project.properties.get(project.KeystoneProject.DOMAIN)) self.assertEqual( True, self.test_project.properties.get(project.KeystoneProject.ENABLED)) self.assertIsNone( self.test_project.properties.get(project.KeystoneProject.PARENT)) self.test_project.handle_create() # validate project creation self.projects.create.assert_called_once_with( name='foo', description='', domain='default', enabled=True, parent=None, tags=[]) def test_project_handle_update(self): self.test_project.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = {project.KeystoneProject.NAME: 'test_project_1_updated', project.KeystoneProject.DESCRIPTION: 'Test Project updated', project.KeystoneProject.ENABLED: False, project.KeystoneProject.DOMAIN: 'test_domain', project.KeystoneProject.TAGS: ['tag1', 'tag2']} self.test_project.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) self.projects.update.assert_called_once_with( project=self.test_project.resource_id, name=prop_diff[project.KeystoneProject.NAME], description=prop_diff[project.KeystoneProject.DESCRIPTION], enabled=prop_diff[project.KeystoneProject.ENABLED], domain='test_domain', tags=prop_diff[project.KeystoneProject.TAGS] ) def test_project_handle_update_default(self): self.test_project.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = {project.KeystoneProject.DESCRIPTION: 'Test Project updated', project.KeystoneProject.ENABLED: False, project.KeystoneProject.TAGS: ['one', 'two']} self.test_project.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) # validate default name to physical resource name and # domain is set from stored properties used during creation. 
self.projects.update.assert_called_once_with( project=self.test_project.resource_id, name=None, description=prop_diff[project.KeystoneProject.DESCRIPTION], enabled=prop_diff[project.KeystoneProject.ENABLED], domain='default', tags=prop_diff[project.KeystoneProject.TAGS] ) def test_project_handle_update_only_enabled(self): self.test_project.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = {project.KeystoneProject.ENABLED: False} self.test_project.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) self.projects.update.assert_called_once_with( project=self.test_project.resource_id, name=None, description=None, enabled=prop_diff[project.KeystoneProject.ENABLED], domain='default', tags=['label', 'insignia'] ) def test_show_resource(self): project = mock.Mock() project.to_dict.return_value = {'attr': 'val'} self.projects.get.return_value = project res = self.test_project._show_resource() self.assertEqual({'attr': 'val'}, res) def test_get_live_state(self): project = mock.Mock() project.to_dict.return_value = { "is_domain": False, "description": "", "links": {"self": "link"}, "enabled": True, "id": "8cbb746917ee42f08a787e721552e738", "parent_id": "default", "domain_id": "default", "name": "fake" } self.projects.get.return_value = project reality = self.test_project.get_live_state( self.test_project.properties) expected = { "description": "", "enabled": True, "domain": "default", "name": "fake" } self.assertEqual(set(expected.keys()), set(reality.keys())) for key in expected: self.assertEqual(expected[key], reality[key]) def test_resolve_attributes(self): mock_project = self._get_mock_project() self.test_project.resource_id = mock_project['id'] self.projects.get.return_value = mock_project self.assertEqual( 'test_project_1', self.test_project._resolve_attribute( project.KeystoneProject.NAME_ATTR)) self.assertEqual( 'my_father', self.test_project._resolve_attribute( project.KeystoneProject.PARENT_ATTR)) self.assertEqual( 'default', self.test_project._resolve_attribute( project.KeystoneProject.DOMAIN_ATTR)) self.assertTrue(self.test_project._resolve_attribute( project.KeystoneProject.ENABLED_ATTR)) self.assertFalse(self.test_project._resolve_attribute( project.KeystoneProject.IS_DOMAIN_ATTR)) heat-10.0.2/heat/tests/openstack/keystone/test_role.py0000666000175000017500000001102013343562340023021 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
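# A small sketch of the property-stubbing idiom used above (for example in
# test_project_handle_create_default): a MagicMock stands in for the
# resource's Properties object, with get/__getitem__ routed to a plain dict
# so each schema key resolves to a test-controlled value. The keys and
# defaults below are illustrative assumptions, not the real schema.
import unittest.mock as mock

values = {'name': None, 'description': '', 'domain': 'default',
          'enabled': True, 'parent': None, 'tags': []}

props = mock.MagicMock()
props.get.side_effect = values.__getitem__
props.__getitem__.side_effect = values.__getitem__

assert props['domain'] == 'default'
assert props.get('enabled') is True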
import copy import mock from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks from heat.engine import resource from heat.engine.resources.openstack.keystone import role from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils keystone_role_template = { 'heat_template_version': '2013-05-23', 'resources': { 'test_role': { 'type': 'OS::Keystone::Role', 'properties': { 'name': 'test_role_1' } } } } class KeystoneRoleTest(common.HeatTestCase): def setUp(self): super(KeystoneRoleTest, self).setUp() self.ctx = utils.dummy_context() # Mock client self.keystoneclient = mock.Mock() self.patchobject(resource.Resource, 'client', return_value=fake_ks.FakeKeystoneClient( client=self.keystoneclient)) self.roles = self.keystoneclient.roles def _get_rsrc(self, domain='default', without_name=False): t = template.Template(keystone_role_template) tmpl = copy.deepcopy(t) tmpl['resources']['test_role']['Properties']['domain'] = domain if without_name: tmpl['resources']['test_role']['Properties'].pop('name') test_stack = stack.Stack(self.ctx, 'test_keystone_role', tmpl) test_role = test_stack['test_role'] return test_role def _get_mock_role(self, domain='default'): value = mock.MagicMock() role_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' domain_id = domain value.id = role_id value.domain_id = domain_id return value def _test_handle_create(self, domain='default'): test_role = self._get_rsrc(domain) mock_role = self._get_mock_role(domain) self.roles.create.return_value = mock_role # validate the properties self.assertEqual('test_role_1', test_role.properties.get(role.KeystoneRole.NAME)) self.assertEqual(domain, test_role.properties.get(role.KeystoneRole.DOMAIN)) test_role.handle_create() # validate role creation with given name self.roles.create.assert_called_once_with(name='test_role_1', domain=domain) # validate physical resource id self.assertEqual(mock_role.id, test_role.resource_id) def test_role_handle_create(self): self._test_handle_create() def test_role_handle_create_with_domain(self): self._test_handle_create(domain='d_test') def test_role_handle_create_default_name(self): # reset the NAME value to None, to make sure role is # created with physical_resource_name test_role = self._get_rsrc(without_name=True) test_role.physical_resource_name = mock.Mock( return_value='phy_role_name') test_role.handle_create() # validate role creation with default name self.roles.create.assert_called_once_with(name='phy_role_name', domain='default') def test_role_handle_update(self): test_role = self._get_rsrc() test_role.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' # update the name property prop_diff = {role.KeystoneRole.NAME: 'test_role_1_updated'} test_role.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) self.roles.update.assert_called_once_with( role=test_role.resource_id, name=prop_diff[role.KeystoneRole.NAME] ) def test_show_resource(self): role = mock.Mock() role.to_dict.return_value = {'attr': 'val'} self.roles.get.return_value = role test_role = self._get_rsrc() res = test_role._show_resource() self.assertEqual({'attr': 'val'}, res) heat-10.0.2/heat/tests/openstack/keystone/test_service.py0000666000175000017500000002470613343562340023537 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks from heat.engine import properties from heat.engine import resource from heat.engine.resources.openstack.keystone import service from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils keystone_service_template = { 'heat_template_version': '2015-04-30', 'resources': { 'test_service': { 'type': 'OS::Keystone::Service', 'properties': { 'name': 'test_service_1', 'description': 'Test service', 'type': 'orchestration', 'enabled': False } } } } class KeystoneServiceTest(common.HeatTestCase): def setUp(self): super(KeystoneServiceTest, self).setUp() self.ctx = utils.dummy_context() # Mock client self.keystoneclient = mock.Mock() self.patchobject(resource.Resource, 'client', return_value=fake_ks.FakeKeystoneClient( client=self.keystoneclient)) self.services = self.keystoneclient.services # Mock client plugin self.keystone_client_plugin = mock.MagicMock() def _setup_service_resource(self, stack_name, use_default=False): tmpl_data = copy.deepcopy(keystone_service_template) if use_default: props = tmpl_data['resources']['test_service']['properties'] del props['name'] del props['enabled'] del props['description'] test_stack = stack.Stack( self.ctx, stack_name, template.Template(tmpl_data) ) r_service = test_stack['test_service'] r_service.client = mock.MagicMock() r_service.client.return_value = self.keystoneclient r_service.client_plugin = mock.MagicMock() r_service.client_plugin.return_value = self.keystone_client_plugin return r_service def _get_mock_service(self): value = mock.MagicMock() value.id = '477e8273-60a7-4c41-b683-fdb0bc7cd152' return value def test_service_handle_create(self): rsrc = self._setup_service_resource('test_service_create') mock_service = self._get_mock_service() self.services.create.return_value = mock_service # validate the properties self.assertEqual( 'test_service_1', rsrc.properties.get(service.KeystoneService.NAME)) self.assertEqual( 'Test service', rsrc.properties.get( service.KeystoneService.DESCRIPTION)) self.assertEqual( 'orchestration', rsrc.properties.get(service.KeystoneService.TYPE)) self.assertFalse(rsrc.properties.get( service.KeystoneService.ENABLED)) rsrc.handle_create() # validate service creation self.services.create.assert_called_once_with( name='test_service_1', description='Test service', type='orchestration', enabled=False) # validate physical resource id self.assertEqual(mock_service.id, rsrc.resource_id) def test_service_handle_create_default(self): rsrc = self._setup_service_resource('test_create_with_defaults', use_default=True) mock_service = self._get_mock_service() self.services.create.return_value = mock_service rsrc.physical_resource_name = mock.MagicMock() rsrc.physical_resource_name.return_value = 'foo' # validate the properties self.assertIsNone( rsrc.properties.get(service.KeystoneService.NAME)) self.assertIsNone(rsrc.properties.get( service.KeystoneService.DESCRIPTION)) self.assertEqual( 'orchestration', rsrc.properties.get(service.KeystoneService.TYPE)) 
self.assertTrue(rsrc.properties.get(service.KeystoneService.ENABLED)) rsrc.handle_create() # validate service creation with physical resource name self.services.create.assert_called_once_with( name='foo', description=None, type='orchestration', enabled=True) def test_service_handle_update(self): rsrc = self._setup_service_resource('test_update') rsrc.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = {service.KeystoneService.NAME: 'test_service_1_updated', service.KeystoneService.DESCRIPTION: 'Test Service updated', service.KeystoneService.TYPE: 'heat_updated', service.KeystoneService.ENABLED: False} rsrc.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) self.services.update.assert_called_once_with( service=rsrc.resource_id, name=prop_diff[service.KeystoneService.NAME], description=prop_diff[service.KeystoneService.DESCRIPTION], type=prop_diff[service.KeystoneService.TYPE], enabled=prop_diff[service.KeystoneService.ENABLED] ) def test_service_handle_update_default_name(self): rsrc = self._setup_service_resource('test_update_default_name') rsrc.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' rsrc.physical_resource_name = mock.MagicMock() rsrc.physical_resource_name.return_value = 'foo' # Name is reset to None, so default to physical resource name prop_diff = {service.KeystoneService.NAME: None} rsrc.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) # validate default name to physical resource name self.services.update.assert_called_once_with( service=rsrc.resource_id, name='foo', type=None, description=None, enabled=None ) def test_service_handle_update_only_enabled(self): rsrc = self._setup_service_resource('test_update_enabled_only') rsrc.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = {service.KeystoneService.ENABLED: False} rsrc.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) self.services.update.assert_called_once_with( service=rsrc.resource_id, name=None, description=None, type=None, enabled=prop_diff[service.KeystoneService.ENABLED] ) def test_properties_title(self): property_title_map = { service.KeystoneService.NAME: 'name', service.KeystoneService.DESCRIPTION: 'description', service.KeystoneService.TYPE: 'type', service.KeystoneService.ENABLED: 'enabled' } for actual_title, expected_title in property_title_map.items(): self.assertEqual( expected_title, actual_title, 'KeystoneService PROPERTIES(%s) title modified.' 
% actual_title) def test_property_name_validate_schema(self): schema = service.KeystoneService.properties_schema[ service.KeystoneService.NAME] self.assertTrue( schema.update_allowed, 'update_allowed for property %s is modified' % service.KeystoneService.NAME) self.assertEqual(properties.Schema.STRING, schema.type, 'type for property %s is modified' % service.KeystoneService.NAME) self.assertEqual('Name of keystone service.', schema.description, 'description for property %s is modified' % service.KeystoneService.NAME) def test_property_description_validate_schema(self): schema = service.KeystoneService.properties_schema[ service.KeystoneService.DESCRIPTION] self.assertTrue( schema.update_allowed, 'update_allowed for property %s is modified' % service.KeystoneService.DESCRIPTION) self.assertEqual(properties.Schema.STRING, schema.type, 'type for property %s is modified' % service.KeystoneService.DESCRIPTION) self.assertEqual('Description of keystone service.', schema.description, 'description for property %s is modified' % service.KeystoneService.DESCRIPTION) def test_property_type_validate_schema(self): schema = service.KeystoneService.properties_schema[ service.KeystoneService.TYPE] self.assertTrue( schema.update_allowed, 'update_allowed for property %s is modified' % service.KeystoneService.TYPE) self.assertTrue( schema.required, 'required for property %s is modified' % service.KeystoneService.TYPE) self.assertEqual(properties.Schema.STRING, schema.type, 'type for property %s is modified' % service.KeystoneService.TYPE) self.assertEqual('Type of keystone Service.', schema.description, 'description for property %s is modified' % service.KeystoneService.TYPE) def test_show_resource(self): rsrc = self._setup_service_resource('test_show_resource') moc_service = mock.Mock() moc_service.to_dict.return_value = {'attr': 'val'} self.services.get.return_value = moc_service attributes = rsrc._show_resource() self.assertEqual({'attr': 'val'}, attributes) heat-10.0.2/heat/tests/openstack/keystone/test_domain.py0000666000175000017500000001051013343562340023332 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
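# A brief sketch of the _show_resource / get_live_state pattern exercised in
# these tests: the mocked client returns an object whose to_dict() supplies
# the live values, and the test compares only the keys that map back to
# template properties. The client handle and keys below are illustrative
# stand-ins, not real Heat objects.
import unittest.mock as mock

domains = mock.MagicMock()
domains.get.return_value.to_dict.return_value = {
    'id': 'not-compared', 'name': 'test',
    'enabled': True, 'description': 'test domain',
}

live = domains.get('477e8273').to_dict()
expected = {'name': 'test', 'enabled': True, 'description': 'test domain'}
assert all(live[key] == value for key, value in expected.items())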
import mock from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks from heat.engine import resource from heat.engine.resources.openstack.keystone import domain from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils KEYSTONE_REGION_TEMPLATE = { 'heat_template_version': '2017-02-24', 'resources': { 'test_domain': { 'type': 'OS::Keystone::Domain', 'properties': { 'name': 'test_domain_1', 'description': 'Test domain', 'enabled': 'True' } } } } class KeystoneDomainTest(common.HeatTestCase): def setUp(self): super(KeystoneDomainTest, self).setUp() self.ctx = utils.dummy_context() self.stack = stack.Stack( self.ctx, 'test_stack_keystone', template.Template(KEYSTONE_REGION_TEMPLATE) ) self.test_domain = self.stack['test_domain'] # Mock client self.keystoneclient = mock.Mock() self.patchobject(resource.Resource, 'client', return_value=fake_ks.FakeKeystoneClient( client=self.keystoneclient)) self.domains = self.keystoneclient.domains keystone_client_plugin = mock.MagicMock() self.test_domain.client_plugin = mock.MagicMock() self.test_domain.client_plugin.return_value = keystone_client_plugin def _get_mock_domain(self): value = mock.MagicMock() domain_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' value.id = domain_id return value def test_domain_handle_create(self): mock_domain = self._get_mock_domain() self.domains.create.return_value = mock_domain # validate the properties self.assertEqual( 'test_domain_1', self.test_domain.properties.get(domain.KeystoneDomain.NAME)) self.assertEqual( 'Test domain', self.test_domain.properties.get( domain.KeystoneDomain.DESCRIPTION)) self.assertEqual( True, self.test_domain.properties.get(domain.KeystoneDomain.ENABLED)) self.test_domain.handle_create() # validate domain creation self.domains.create.assert_called_once_with( name='test_domain_1', description='Test domain', enabled=True) # validate physical resource id self.assertEqual(mock_domain.id, self.test_domain.resource_id) def test_domain_handle_update(self): self.test_domain.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = {domain.KeystoneDomain.DESCRIPTION: 'Test Domain updated', domain.KeystoneDomain.ENABLED: False, domain.KeystoneDomain.NAME: 'test_domain_2'} self.test_domain.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) self.domains.update.assert_called_once_with( domain=self.test_domain.resource_id, description=prop_diff[domain.KeystoneDomain.DESCRIPTION], enabled=prop_diff[domain.KeystoneDomain.ENABLED], name='test_domain_2' ) def test_get_live_state(self): sample_domain = { domain.KeystoneDomain.NAME: 'test', domain.KeystoneDomain.ENABLED: True, domain.KeystoneDomain.DESCRIPTION: 'test domain' } d = mock.Mock() d.to_dict.return_value = sample_domain self.domains.get.return_value = d reality = self.test_domain.get_live_state(self.test_domain.properties) self.assertEqual(sample_domain, reality) heat-10.0.2/heat/tests/openstack/zun/0000775000175000017500000000000013343562672017436 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/zun/__init__.py0000666000175000017500000000000013343562340021527 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/zun/test_container.py0000666000175000017500000003334713343562352023040 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock import six from oslo_config import cfg from zunclient import exceptions as zc_exc from heat.common import exception from heat.common import template_format from heat.engine.resources.openstack.zun import container from heat.engine import scheduler from heat.engine import template from heat.tests import common from heat.tests import utils zun_template = ''' heat_template_version: 2017-09-01 resources: test_container: type: OS::Zun::Container properties: name: test_container image: "cirros:latest" command: sleep 10000 cpu: 0.1 memory: 100 environment: myenv: foo workdir: /testdir labels: mylabel: bar image_pull_policy: always restart_policy: on-failure:2 interactive: false image_driver: docker hints: hintkey: hintval hostname: myhost security_groups: - my_seg mounts: - volume_size: 1 mount_path: /data - volume_id: 6ec29ba3-bf2c-4276-a88e-3670ea5abc80 mount_path: /data2 ''' class ZunContainerTest(common.HeatTestCase): def setUp(self): super(ZunContainerTest, self).setUp() self.resource_id = '12345' self.fake_name = 'test_container' self.fake_image = 'cirros:latest' self.fake_command = 'sleep 10000' self.fake_cpu = 0.1 self.fake_memory = 100 self.fake_env = {'myenv': 'foo'} self.fake_workdir = '/testdir' self.fake_labels = {'mylabel': 'bar'} self.fake_image_policy = 'always' self.fake_restart_policy = {'MaximumRetryCount': '2', 'Name': 'on-failure'} self.fake_interactive = False self.fake_image_driver = 'docker' self.fake_hints = {'hintkey': 'hintval'} self.fake_hostname = 'myhost' self.fake_security_groups = ['my_seg'] self.fake_mounts = [ {'volume_id': None, 'volume_size': 1, 'mount_path': '/data'}, {'volume_id': '6ec29ba3-bf2c-4276-a88e-3670ea5abc80', 'volume_size': None, 'mount_path': '/data2'}] self.fake_mounts_args = [ {'size': 1, 'destination': '/data'}, {'source': '6ec29ba3-bf2c-4276-a88e-3670ea5abc80', 'destination': '/data2'}] self.fake_network_id = '9c11d847-99ce-4a83-82da-9827362a68e8' self.fake_network_name = 'private' self.fake_networks = { 'networks': [ { 'id': self.fake_network_id, 'name': self.fake_network_name, } ] } self.fake_address = { 'version': 4, 'addr': '10.0.0.12', 'port': 'ab5c12d8-f414-48a3-b765-8ce34a6714d2' } self.fake_addresses = { self.fake_network_id: [self.fake_address] } self.fake_extended_addresses = { self.fake_network_id: [self.fake_address], self.fake_network_name: [self.fake_address], } t = template_format.parse(zun_template) self.stack = utils.parse_stack(t) resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns[self.fake_name] self.client = mock.Mock() self.patchobject(container.Container, 'client', return_value=self.client) self.neutron_client = mock.Mock() self.patchobject(container.Container, 'neutron', return_value=self.neutron_client) self.stub_VolumeConstraint_validate() def _mock_get_client(self): value = mock.MagicMock() value.name = self.fake_name value.image = self.fake_image value.command = self.fake_command value.cpu = self.fake_cpu value.memory = self.fake_memory value.environment = self.fake_env value.workdir = self.fake_workdir value.labels = self.fake_labels 
value.image_pull_policy = self.fake_image_policy value.restart_policy = self.fake_restart_policy value.interactive = self.fake_interactive value.image_driver = self.fake_image_driver value.hints = self.fake_hints value.hostname = self.fake_hostname value.security_groups = self.fake_security_groups value.addresses = self.fake_addresses value.to_dict.return_value = value.__dict__ self.client.containers.get.return_value = value def _create_resource(self, name, snippet, stack, status='Running'): value = mock.MagicMock(uuid=self.resource_id) self.client.containers.run.return_value = value get_rv = mock.MagicMock(status=status) self.client.containers.get.return_value = get_rv c = container.Container(name, snippet, stack) return c def test_create(self): c = self._create_resource('container', self.rsrc_defn, self.stack) # validate the properties self.assertEqual( self.fake_name, c.properties.get(container.Container.NAME)) self.assertEqual( self.fake_image, c.properties.get(container.Container.IMAGE)) self.assertEqual( self.fake_command, c.properties.get(container.Container.COMMAND)) self.assertEqual( self.fake_cpu, c.properties.get(container.Container.CPU)) self.assertEqual( self.fake_memory, c.properties.get(container.Container.MEMORY)) self.assertEqual( self.fake_env, c.properties.get(container.Container.ENVIRONMENT)) self.assertEqual( self.fake_workdir, c.properties.get(container.Container.WORKDIR)) self.assertEqual( self.fake_labels, c.properties.get(container.Container.LABELS)) self.assertEqual( self.fake_image_policy, c.properties.get(container.Container.IMAGE_PULL_POLICY)) self.assertEqual( 'on-failure:2', c.properties.get(container.Container.RESTART_POLICY)) self.assertEqual( self.fake_interactive, c.properties.get(container.Container.INTERACTIVE)) self.assertEqual( self.fake_image_driver, c.properties.get(container.Container.IMAGE_DRIVER)) self.assertEqual( self.fake_hints, c.properties.get(container.Container.HINTS)) self.assertEqual( self.fake_hostname, c.properties.get(container.Container.HOSTNAME)) self.assertEqual( self.fake_security_groups, c.properties.get(container.Container.SECURITY_GROUPS)) self.assertEqual( self.fake_mounts, c.properties.get(container.Container.MOUNTS)) scheduler.TaskRunner(c.create)() self.assertEqual(self.resource_id, c.resource_id) self.assertEqual((c.CREATE, c.COMPLETE), c.state) self.assertEqual('containers', c.entity) self.client.containers.run.assert_called_once_with( name=self.fake_name, image=self.fake_image, command=self.fake_command, cpu=self.fake_cpu, memory=self.fake_memory, environment=self.fake_env, workdir=self.fake_workdir, labels=self.fake_labels, image_pull_policy=self.fake_image_policy, restart_policy=self.fake_restart_policy, interactive=self.fake_interactive, image_driver=self.fake_image_driver, hints=self.fake_hints, hostname=self.fake_hostname, security_groups=self.fake_security_groups, mounts=self.fake_mounts_args, ) def test_container_create_failed(self): cfg.CONF.set_override('action_retry_limit', 0) c = self._create_resource('container', self.rsrc_defn, self.stack, status='Error') exc = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(c.create)) self.assertEqual((c.CREATE, c.FAILED), c.state) self.assertIn("Error in creating container ", six.text_type(exc)) def test_container_create_unknown_status(self): c = self._create_resource('container', self.rsrc_defn, self.stack, status='FOO') exc = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(c.create)) self.assertEqual((c.CREATE, c.FAILED), c.state) 
self.assertIn("Unknown status Container", six.text_type(exc)) def test_container_update(self): c = self._create_resource('container', self.rsrc_defn, self.stack) scheduler.TaskRunner(c.create)() t = template_format.parse(zun_template) new_t = copy.deepcopy(t) new_t['resources'][self.fake_name]['properties']['name'] = \ 'fake-container' new_t['resources'][self.fake_name]['properties']['cpu'] = 10 new_t['resources'][self.fake_name]['properties']['memory'] = 10 rsrc_defns = template.Template(new_t).resource_definitions(self.stack) new_c = rsrc_defns[self.fake_name] scheduler.TaskRunner(c.update, new_c)() self.client.containers.update.assert_called_once_with( self.resource_id, cpu=10, memory=10) self.client.containers.rename.assert_called_once_with( self.resource_id, name='fake-container') self.assertEqual((c.UPDATE, c.COMPLETE), c.state) def test_container_delete(self): c = self._create_resource('container', self.rsrc_defn, self.stack) scheduler.TaskRunner(c.create)() self.patchobject(self.client.containers, 'get', side_effect=[c, zc_exc.NotFound('Not Found')]) scheduler.TaskRunner(c.delete)() self.assertEqual((c.DELETE, c.COMPLETE), c.state) self.client.containers.delete.assert_called_once_with( c.resource_id, stop=True) def test_container_delete_not_found(self): c = self._create_resource('container', self.rsrc_defn, self.stack) scheduler.TaskRunner(c.create)() c.client_plugin = mock.MagicMock() self.client.containers.delete.side_effect = Exception('Not Found') scheduler.TaskRunner(c.delete)() self.assertEqual((c.DELETE, c.COMPLETE), c.state) self.client.containers.delete.assert_called_once_with( c.resource_id, stop=True) mock_ignore_not_found = c.client_plugin.return_value.ignore_not_found self.assertEqual(1, mock_ignore_not_found.call_count) def test_container_get_live_state(self): c = self._create_resource('container', self.rsrc_defn, self.stack) scheduler.TaskRunner(c.create)() self._mock_get_client() reality = c.get_live_state(c.properties) self.assertEqual( { container.Container.NAME: self.fake_name, container.Container.CPU: self.fake_cpu, container.Container.MEMORY: self.fake_memory, }, reality) def test_resolve_attributes(self): self.neutron_client.list_networks.return_value = self.fake_networks c = self._create_resource('container', self.rsrc_defn, self.stack) scheduler.TaskRunner(c.create)() self._mock_get_client() self.assertEqual( self.fake_name, c._resolve_attribute(container.Container.NAME)) self.assertEqual( self.fake_extended_addresses, c._resolve_attribute(container.Container.ADDRESSES)) def test_resolve_attributes_duplicate_net_name(self): self.neutron_client.list_networks.return_value = { 'networks': [ {'id': 'fake_net_id', 'name': 'test'}, {'id': 'fake_net_id2', 'name': 'test'}, ] } self.fake_addresses = { 'fake_net_id': [{'addr': '10.0.0.12'}], 'fake_net_id2': [{'addr': '10.100.0.12'}], } self.fake_extended_addresses = { 'fake_net_id': [{'addr': '10.0.0.12'}], 'fake_net_id2': [{'addr': '10.100.0.12'}], 'test': [{'addr': '10.0.0.12'}, {'addr': '10.100.0.12'}], } c = self._create_resource('container', self.rsrc_defn, self.stack) scheduler.TaskRunner(c.create)() self._mock_get_client() self._assert_addresses( self.fake_extended_addresses, c._resolve_attribute(container.Container.ADDRESSES)) def _assert_addresses(self, expected, actual): matched = True if len(expected) != len(actual): matched = False for key in expected: if key not in actual: matched = False break list1 = expected[key] list1 = sorted(list1, key=lambda x: sorted(x.values())) list2 = actual[key] list2 = 
sorted(list2, key=lambda x: sorted(x.values())) if list1 != list2: matched = False break if not matched: raise AssertionError( 'Addresses is unmatched:\n reference = ' + str(expected) + '\nactual = ' + str(actual)) heat-10.0.2/heat/tests/openstack/designate/0000775000175000017500000000000013343562672020565 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/designate/test_recordset.py0000666000175000017500000001717113343562340024171 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from designateclient import exceptions as designate_exception import mock from heat.common import exception from heat.engine.resources.openstack.designate import recordset from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils sample_template = { 'heat_template_version': '2015-04-30', 'resources': { 'test_resource': { 'type': 'OS::Designate::RecordSet', 'properties': { 'name': 'test-record.com', 'description': 'Test record', 'ttl': 3600, 'type': 'A', 'records': ['1.1.1.1'], 'zone': '1234567' } } } } class DesignateRecordSetTest(common.HeatTestCase): def setUp(self): super(DesignateRecordSetTest, self).setUp() self.ctx = utils.dummy_context() self.stack = stack.Stack( self.ctx, 'test_stack', template.Template(sample_template) ) self.test_resource = self.stack['test_resource'] # Mock client plugin self.test_client_plugin = mock.MagicMock() self.test_resource.client_plugin = mock.MagicMock( return_value=self.test_client_plugin) # Mock client self.test_client = mock.MagicMock() self.test_resource.client = mock.MagicMock( return_value=self.test_client) def _get_mock_resource(self): value = {} value['id'] = '477e8273-60a7-4c41-b683-fdb0bc7cd152' return value def test_resource_validate_properties(self): mock_record_create = self.test_client_plugin.record_create mock_resource = self._get_mock_resource() mock_record_create.return_value = mock_resource # validate the properties self.assertEqual( 'test-record.com', self.test_resource.properties.get( recordset.DesignateRecordSet.NAME)) self.assertEqual( 'Test record', self.test_resource.properties.get( recordset.DesignateRecordSet.DESCRIPTION)) self.assertEqual( 3600, self.test_resource.properties.get( recordset.DesignateRecordSet.TTL)) self.assertEqual( 'A', self.test_resource.properties.get( recordset.DesignateRecordSet.TYPE)) self.assertEqual( ['1.1.1.1'], self.test_resource.properties.get( recordset.DesignateRecordSet.RECORDS)) self.assertEqual( '1234567', self.test_resource.properties.get( recordset.DesignateRecordSet.ZONE)) def test_resource_handle_create(self): mock_record_create = self.test_client.recordsets.create mock_resource = self._get_mock_resource() mock_record_create.return_value = mock_resource self.test_resource.properties = args = dict( name='test-record.com', description='Test record', ttl=3600, type='A', records=['1.1.1.1'], zone='1234567' ) self.test_resource.handle_create() args['type_'] = args.pop('type') mock_record_create.assert_called_with( **args ) # validate 
physical resource id self.assertEqual(mock_resource['id'], self.test_resource.resource_id) def _mock_check_status_active(self): self.test_client.recordsets.get.side_effect = [ {'status': 'PENDING'}, {'status': 'ACTIVE'}, {'status': 'ERROR'} ] def test_check_create_complete(self): self._mock_check_status_active() self.assertFalse(self.test_resource.check_create_complete()) self.assertTrue(self.test_resource.check_create_complete()) ex = self.assertRaises(exception.ResourceInError, self.test_resource.check_create_complete) self.assertIn('Error in RecordSet', ex.message) def test_resource_handle_update(self): mock_record_update = self.test_client.recordsets.update self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = args = { recordset.DesignateRecordSet.DESCRIPTION: 'updated description', recordset.DesignateRecordSet.TTL: 4200, recordset.DesignateRecordSet.TYPE: 'B', recordset.DesignateRecordSet.RECORDS: ['2.2.2.2'] } self.test_resource.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) args['type_'] = args.pop('type') mock_record_update.assert_called_with( zone='1234567', recordset='477e8273-60a7-4c41-b683-fdb0bc7cd151', values=args) def test_check_update_complete(self): self._mock_check_status_active() self.assertFalse(self.test_resource.check_update_complete()) self.assertTrue(self.test_resource.check_update_complete()) ex = self.assertRaises(exception.ResourceInError, self.test_resource.check_create_complete) self.assertIn('Error in RecordSet', ex.message) def test_resource_handle_delete(self): mock_record_delete = self.test_client.recordsets.delete self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_record_delete.return_value = None self.assertIsNone(self.test_resource.handle_delete()) mock_record_delete.assert_called_once_with( zone='1234567', recordset=self.test_resource.resource_id ) def test_resource_handle_delete_resource_id_is_none(self): self.test_resource.resource_id = None self.assertIsNone(self.test_resource.handle_delete()) def test_resource_handle_delete_not_found(self): mock_record_delete = self.test_client_plugin.record_delete mock_record_delete.side_effect = designate_exception.NotFound self.assertIsNone(self.test_resource.handle_delete()) def test_check_delete_complete(self): self.test_resource.resource_id = self._get_mock_resource()['id'] self._mock_check_status_active() self.assertFalse(self.test_resource.check_delete_complete()) self.assertTrue(self.test_resource.check_delete_complete()) ex = self.assertRaises(exception.ResourceInError, self.test_resource.check_create_complete) self.assertIn('Error in RecordSet', ex.message) def test_resource_show_resource(self): args = dict( name='test-record.com', description='Test record', ttl=3600, type='A', records=['1.1.1.1'] ) mock_get = self.test_client.recordsets.get mock_get.return_value = args self.assertEqual(args, self.test_resource._show_resource(), 'Failed to show resource') heat-10.0.2/heat/tests/openstack/designate/test_zone.py0000666000175000017500000001675413343562340023160 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import mock from heat.common import exception from heat.engine.resources.openstack.designate import zone from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils sample_template = { 'heat_template_version': '2015-04-30', 'resources': { 'test_resource': { 'type': 'OS::Designate::Zone', 'properties': { 'name': 'test-zone.com', 'description': 'Test zone', 'ttl': 3600, 'email': 'abc@test-zone.com', 'type': 'PRIMARY', 'masters': [] } } } } class DesignateZoneTest(common.HeatTestCase): def setUp(self): super(DesignateZoneTest, self).setUp() self.ctx = utils.dummy_context() self.stack = stack.Stack( self.ctx, 'test_stack', template.Template(sample_template) ) self.test_resource = self.stack['test_resource'] # Mock client plugin self.test_client_plugin = mock.MagicMock() self.test_resource.client_plugin = mock.MagicMock( return_value=self.test_client_plugin) # Mock client self.test_client = mock.MagicMock() self.test_resource.client = mock.MagicMock( return_value=self.test_client) def _get_mock_resource(self): value = {} value['id'] = '477e8273-60a7-4c41-b683-fdb0bc7cd152' value['serial'] = '1434596972' return value def test_resource_handle_create(self): mock_zone_create = self.test_client.zones.create mock_resource = self._get_mock_resource() mock_zone_create.return_value = mock_resource # validate the properties self.assertEqual( 'test-zone.com', self.test_resource.properties.get(zone.DesignateZone.NAME)) self.assertEqual( 'Test zone', self.test_resource.properties.get( zone.DesignateZone.DESCRIPTION)) self.assertEqual( 3600, self.test_resource.properties.get(zone.DesignateZone.TTL)) self.assertEqual( 'abc@test-zone.com', self.test_resource.properties.get(zone.DesignateZone.EMAIL)) self.assertEqual( 'PRIMARY', self.test_resource.properties.get(zone.DesignateZone.TYPE)) self.assertEqual( [], self.test_resource.properties.get(zone.DesignateZone.MASTERS)) self.test_resource.data_set = mock.Mock() self.test_resource.handle_create() args = dict( name='test-zone.com', description='Test zone', ttl=3600, email='abc@test-zone.com', type_='PRIMARY' ) mock_zone_create.assert_called_once_with(**args) # validate physical resource id self.assertEqual(mock_resource['id'], self.test_resource.resource_id) def _mock_check_status_active(self): self.test_client.zones.get.side_effect = [ {'status': 'PENDING'}, {'status': 'ACTIVE'}, {'status': 'ERROR'} ] def test_check_create_complete(self): self._mock_check_status_active() self.assertFalse(self.test_resource.check_create_complete()) self.assertTrue(self.test_resource.check_create_complete()) ex = self.assertRaises(exception.ResourceInError, self.test_resource.check_create_complete) self.assertIn('Error in zone', ex.message) def _test_resource_validate(self, type_, prp): def _side_effect(key): if key == prp: return None if key == zone.DesignateZone.TYPE: return type_ else: return sample_template['resources'][ 'test_resource']['properties'][key] self.test_resource.properties = mock.MagicMock() self.test_resource.properties.get.side_effect = _side_effect self.test_resource.properties.__getitem__.side_effect = _side_effect ex = self.assertRaises(exception.StackValidationFailed, self.test_resource.validate) self.assertEqual('Property %s is required for zone type %s' % (prp, type_), ex.message) def test_resource_validate_primary(self): self._test_resource_validate(zone.DesignateZone.PRIMARY, 
zone.DesignateZone.EMAIL) def test_resource_validate_secondary(self): self._test_resource_validate(zone.DesignateZone.SECONDARY, zone.DesignateZone.MASTERS) def test_resource_handle_update(self): mock_zone_update = self.test_client.zones.update self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = {zone.DesignateZone.EMAIL: 'xyz@test-zone.com', zone.DesignateZone.DESCRIPTION: 'updated description', zone.DesignateZone.TTL: 4200} self.test_resource.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) args = dict( description='updated description', ttl=4200, email='xyz@test-zone.com' ) mock_zone_update.assert_called_once_with( self.test_resource.resource_id, args) def test_check_update_complete(self): self._mock_check_status_active() self.assertFalse(self.test_resource.check_update_complete()) self.assertTrue(self.test_resource.check_update_complete()) ex = self.assertRaises(exception.ResourceInError, self.test_resource.check_update_complete) self.assertIn('Error in zone', ex.message) def test_check_delete_complete(self): self._mock_check_status_active() self.assertFalse(self.test_resource.check_delete_complete( self._get_mock_resource()['id'] )) self.assertTrue(self.test_resource.check_delete_complete( self._get_mock_resource()['id'] )) ex = self.assertRaises(exception.ResourceInError, self.test_resource.check_delete_complete, self._get_mock_resource()['id']) self.assertIn('Error in zone', ex.message) def test_resolve_attributes(self): mock_zone = self._get_mock_resource() self.test_resource.resource_id = mock_zone['id'] self.test_client.zones.get.return_value = mock_zone self.assertEqual( mock_zone['serial'], self.test_resource._resolve_attribute(zone.DesignateZone.SERIAL)) self.test_client.zones.get.assert_called_once_with( self.test_resource.resource_id ) heat-10.0.2/heat/tests/openstack/designate/test_record.py0000666000175000017500000002432713343562340023456 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
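# A compact sketch of the status-polling idiom behind
# _mock_check_status_active above: giving a mock a list side_effect makes
# successive client calls return successive statuses, so a single fixture
# drives the "still pending", "done", and "error" branches in order. The
# names and statuses are illustrative, not the designate client API.
import unittest.mock as mock

zones = mock.MagicMock()
zones.get.side_effect = [{'status': 'PENDING'}, {'status': 'ACTIVE'}]

def create_complete(zone_id):
    # Mirrors a check_create_complete-style poll: done once ACTIVE.
    return zones.get(zone_id)['status'] == 'ACTIVE'

assert create_complete('z-1') is False  # first poll sees PENDING
assert create_complete('z-1') is True   # second poll sees ACTIVE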
from designateclient import exceptions as designate_exception
from designateclient.v1 import records
import mock

from heat.engine.resources.openstack.designate import record
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import utils


sample_template = {
    'heat_template_version': '2015-04-30',
    'resources': {
        'test_resource': {
            'type': 'OS::Designate::Record',
            'properties': {
                'name': 'test-record.com',
                'description': 'Test record',
                'ttl': 3600,
                'type': 'MX',
                'priority': 1,
                'data': '1.1.1.1',
                'domain': '1234567'
            }
        }
    }
}


class DesignateRecordTest(common.HeatTestCase):

    def setUp(self):
        super(DesignateRecordTest, self).setUp()

        self.ctx = utils.dummy_context()

        self.stack = stack.Stack(
            self.ctx,
            'test_stack',
            template.Template(sample_template)
        )

        self.test_resource = self.stack['test_resource']

        # Mock client plugin
        self.test_client_plugin = mock.MagicMock()
        self.test_resource.client_plugin = mock.MagicMock(
            return_value=self.test_client_plugin)

        # Mock client
        self.test_client = mock.MagicMock()
        self.test_resource.client = mock.MagicMock(
            return_value=self.test_client)

    def _get_mock_resource(self):
        value = mock.MagicMock()
        value.id = '477e8273-60a7-4c41-b683-fdb0bc7cd152'

        return value

    def test_resource_validate_properties(self):
        mock_record_create = self.test_client_plugin.record_create
        mock_resource = self._get_mock_resource()
        mock_record_create.return_value = mock_resource

        # validate the properties
        self.assertEqual(
            'test-record.com',
            self.test_resource.properties.get(record.DesignateRecord.NAME))
        self.assertEqual(
            'Test record',
            self.test_resource.properties.get(
                record.DesignateRecord.DESCRIPTION))
        self.assertEqual(
            3600,
            self.test_resource.properties.get(record.DesignateRecord.TTL))
        self.assertEqual(
            'MX',
            self.test_resource.properties.get(record.DesignateRecord.TYPE))
        self.assertEqual(
            1,
            self.test_resource.properties.get(
                record.DesignateRecord.PRIORITY))
        self.assertEqual(
            '1.1.1.1',
            self.test_resource.properties.get(record.DesignateRecord.DATA))
        self.assertEqual(
            '1234567',
            self.test_resource.properties.get(
                record.DesignateRecord.DOMAIN))

    def test_resource_handle_create_non_mx_or_srv(self):
        mock_record_create = self.test_client_plugin.record_create
        mock_resource = self._get_mock_resource()
        mock_record_create.return_value = mock_resource

        for type in (set(self.test_resource._ALLOWED_TYPES) -
                     set([self.test_resource.MX, self.test_resource.SRV])):
            self.test_resource.properties = args = dict(
                name='test-record.com',
                description='Test record',
                ttl=3600,
                type=type,
                priority=1,
                data='1.1.1.1',
                domain='1234567'
            )
            self.test_resource.handle_create()
            # Make sure priority is set to None for non mx or srv records
            args['priority'] = None
            mock_record_create.assert_called_with(
                **args
            )

            # validate physical resource id
            self.assertEqual(mock_resource.id, self.test_resource.resource_id)

    def test_resource_handle_create_mx_or_srv(self):
        mock_record_create = self.test_client_plugin.record_create
        mock_resource = self._get_mock_resource()
        mock_record_create.return_value = mock_resource

        for type in [self.test_resource.MX, self.test_resource.SRV]:
            self.test_resource.properties = args = dict(
                name='test-record.com',
                description='Test record',
                ttl=3600,
                type=type,
                priority=1,
                data='1.1.1.1',
                domain='1234567'
            )
            self.test_resource.handle_create()
            mock_record_create.assert_called_with(
                **args
            )

            # validate physical resource id
            self.assertEqual(mock_resource.id, self.test_resource.resource_id)

    def test_resource_handle_update_non_mx_or_srv(self):
        mock_record_update = self.test_client_plugin.record_update
        self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        for type in (set(self.test_resource._ALLOWED_TYPES) -
                     set([self.test_resource.MX, self.test_resource.SRV])):
            prop_diff = args = {
                record.DesignateRecord.DESCRIPTION: 'updated description',
                record.DesignateRecord.TTL: 4200,
                record.DesignateRecord.TYPE: type,
                record.DesignateRecord.DATA: '2.2.2.2',
                record.DesignateRecord.PRIORITY: 1}

            self.test_resource.handle_update(json_snippet=None,
                                             tmpl_diff=None,
                                             prop_diff=prop_diff)

            # priority is not considered for records other than mx or srv
            args.update(dict(
                id=self.test_resource.resource_id,
                priority=None,
                domain='1234567',
            ))
            mock_record_update.assert_called_with(**args)

    def test_resource_handle_update_mx_or_srv(self):
        mock_record_update = self.test_client_plugin.record_update
        self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        for type in [self.test_resource.MX, self.test_resource.SRV]:
            prop_diff = args = {
                record.DesignateRecord.DESCRIPTION: 'updated description',
                record.DesignateRecord.TTL: 4200,
                record.DesignateRecord.TYPE: type,
                record.DesignateRecord.DATA: '2.2.2.2',
                record.DesignateRecord.PRIORITY: 1}

            self.test_resource.handle_update(json_snippet=None,
                                             tmpl_diff=None,
                                             prop_diff=prop_diff)

            args.update(dict(
                id=self.test_resource.resource_id,
                domain='1234567',
            ))
            mock_record_update.assert_called_with(**args)

    def test_resource_handle_delete(self):
        mock_record_delete = self.test_client_plugin.record_delete
        self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'
        mock_record_delete.return_value = None

        self.assertIsNone(self.test_resource.handle_delete())
        mock_record_delete.assert_called_once_with(
            domain='1234567',
            id=self.test_resource.resource_id
        )

    def test_resource_handle_delete_resource_id_is_none(self):
        self.test_resource.resource_id = None
        self.assertIsNone(self.test_resource.handle_delete())

    def test_resource_handle_delete_not_found(self):
        mock_record_delete = self.test_client_plugin.record_delete
        mock_record_delete.side_effect = designate_exception.NotFound

        self.assertIsNone(self.test_resource.handle_delete())

    def test_resource_show_resource(self):
        args = dict(
            name='test-record.com',
            description='Test record',
            ttl=3600,
            type='A',
            priority=1,
            data='1.1.1.1'
        )
        rsc = records.Record(args)

        mock_notification_get = self.test_client_plugin.record_show
        mock_notification_get.return_value = rsc

        self.assertEqual(args,
                         self.test_resource._show_resource(),
                         'Failed to show resource')

    def test_resource_get_live_state(self):
        tmpl = {
            'heat_template_version': '2015-04-30',
            'resources': {
                'test_resource': {
                    'type': 'OS::Designate::Record',
                    'properties': {
                        'name': 'test-record.com',
                        'description': 'Test record',
                        'ttl': 3600,
                        'type': 'MX',
                        'priority': 1,
                        'data': '1.1.1.1',
                        'domain': 'example.com.'
                    }
                }
            }
        }
        s = stack.Stack(
            self.ctx,
            'test_stack',
            template.Template(tmpl)
        )
        test_resource = s['test_resource']
        test_resource.resource_id = '1234'
        test_resource.client_plugin().get_domain_id = mock.MagicMock()
        test_resource.client_plugin().get_domain_id.return_value = '1234567'
        test_resource.client().records = mock.MagicMock()
        test_resource.client().records.get.return_value = {
            'type': 'MX',
            'data': '1.1.1.1',
            'ttl': 3600,
            'description': 'test',
            'domain_id': '1234567',
            'name': 'www.example.com.',
            'priority': 0
        }

        reality = test_resource.get_live_state(test_resource.properties)
        expected = {
            'type': 'MX',
            'data': '1.1.1.1',
            'ttl': 3600,
            'description': 'test',
            'priority': 0
        }
        self.assertEqual(expected, reality)
heat-10.0.2/heat/tests/openstack/designate/__init__.py
heat-10.0.2/heat/tests/openstack/designate/test_domain.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from designateclient import exceptions as designate_exception
from designateclient.v1 import domains

from heat.engine.resources.openstack.designate import domain
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import utils


sample_template = {
    'heat_template_version': '2015-04-30',
    'resources': {
        'test_resource': {
            'type': 'OS::Designate::Domain',
            'properties': {
                'name': 'test-domain.com',
                'description': 'Test domain',
                'ttl': 3600,
                'email': 'abc@test-domain.com'
            }
        }
    }
}


class DesignateDomainTest(common.HeatTestCase):

    def setUp(self):
        super(DesignateDomainTest, self).setUp()

        self.ctx = utils.dummy_context()

        self.stack = stack.Stack(
            self.ctx,
            'test_stack',
            template.Template(sample_template)
        )

        self.test_resource = self.stack['test_resource']

        # Mock client plugin
        self.test_client_plugin = mock.MagicMock()
        self.test_resource.client_plugin = mock.MagicMock(
            return_value=self.test_client_plugin)

        # Mock client
        self.test_client = mock.MagicMock()
        self.test_resource.client = mock.MagicMock(
            return_value=self.test_client)

    def _get_mock_resource(self):
        value = mock.MagicMock()
        value.id = '477e8273-60a7-4c41-b683-fdb0bc7cd152'
        value.serial = '1434596972'

        return value

    def test_resource_handle_create(self):
        mock_domain_create = self.test_client_plugin.domain_create
        mock_resource = self._get_mock_resource()
        mock_domain_create.return_value = mock_resource

        # validate the properties
        self.assertEqual(
            'test-domain.com',
            self.test_resource.properties.get(domain.DesignateDomain.NAME))
        self.assertEqual(
            'Test domain',
            self.test_resource.properties.get(
                domain.DesignateDomain.DESCRIPTION))
        self.assertEqual(
            3600,
            self.test_resource.properties.get(domain.DesignateDomain.TTL))
        self.assertEqual(
            'abc@test-domain.com',
            self.test_resource.properties.get(domain.DesignateDomain.EMAIL))

        self.test_resource.data_set = mock.Mock()
        self.test_resource.handle_create()

        args = dict(
            name='test-domain.com',
            description='Test domain',
            ttl=3600,
            email='abc@test-domain.com'
        )

        mock_domain_create.assert_called_once_with(**args)

        # validate physical resource id
        self.assertEqual(mock_resource.id, self.test_resource.resource_id)

    def test_resource_handle_update(self):
        mock_domain_update = self.test_client_plugin.domain_update
        self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        prop_diff = {domain.DesignateDomain.EMAIL: 'xyz@test-domain.com',
                     domain.DesignateDomain.DESCRIPTION: 'updated description',
                     domain.DesignateDomain.TTL: 4200}
        self.test_resource.handle_update(json_snippet=None,
                                         tmpl_diff=None,
                                         prop_diff=prop_diff)

        args = dict(
            id=self.test_resource.resource_id,
            description='updated description',
            ttl=4200,
            email='xyz@test-domain.com'
        )
        mock_domain_update.assert_called_once_with(**args)

    def test_resource_handle_delete(self):
        mock_domain_delete = self.test_client.domains.delete
        self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'
        mock_domain_delete.return_value = None

        self.assertEqual('477e8273-60a7-4c41-b683-fdb0bc7cd151',
                         self.test_resource.handle_delete())
        mock_domain_delete.assert_called_once_with(
            self.test_resource.resource_id
        )

    def test_resource_handle_delete_resource_id_is_none(self):
        self.test_resource.resource_id = None
        self.assertIsNone(self.test_resource.handle_delete())

    def test_resource_handle_delete_not_found(self):
        mock_domain_delete = self.test_client.domains.delete
        mock_domain_delete.side_effect = designate_exception.NotFound

        self.assertIsNone(self.test_resource.handle_delete())

    def test_resolve_attributes(self):
        mock_domain = self._get_mock_resource()
        self.test_resource.resource_id = mock_domain.id
        self.test_client.domains.get.return_value = mock_domain

        self.assertEqual(mock_domain.serial,
                         self.test_resource._resolve_attribute(
                             domain.DesignateDomain.SERIAL
                         ))
        self.test_client.domains.get.assert_called_once_with(
            self.test_resource.resource_id
        )

    def test_resource_show_resource(self):
        args = dict(
            name='test',
            description='updated description',
            ttl=4200,
            email='xyz@test-domain.com'
        )
        rsc = domains.Domain(args)
        mock_notification_get = self.test_client.domains.get
        mock_notification_get.return_value = rsc

        self.assertEqual(args,
                         self.test_resource._show_resource(),
                         'Failed to show resource')

    def test_no_ttl(self):
        mock_domain_create = self.test_client_plugin.domain_create
        mock_resource = self._get_mock_resource()
        mock_domain_create.return_value = mock_resource

        self.test_resource.properties.data['ttl'] = None
        self.test_resource.handle_create()
        mock_domain_create.assert_called_once_with(
            name='test-domain.com',
            description='Test domain',
            email='abc@test-domain.com')

    def test_domain_get_live_state(self):
        return_domain = {
            'name': 'test-domain.com',
            'description': 'Test domain',
            'ttl': 3600,
            'email': 'abc@test-domain.com'
        }
        self.test_client.domains.get.return_value = return_domain
        self.test_resource.resource_id = '1234'

        reality = self.test_resource.get_live_state(
            self.test_resource.properties)

        self.assertEqual(return_domain, reality)

    def test_domain_get_live_state_ttl_equals_zero(self):
        return_domain = {
            'name': 'test-domain.com',
            'description': 'Test domain',
            'ttl': 0,
            'email': 'abc@test-domain.com'
        }
        self.test_client.domains.get.return_value = return_domain
        self.test_resource.resource_id = '1234'

        reality = self.test_resource.get_live_state(
            self.test_resource.properties)

        self.assertEqual(return_domain, reality)
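# A minimal, self-contained sketch of the stubbing pattern shared by the two
# designate test cases above: the resource's client_plugin() accessor is
# replaced by a MagicMock so handler methods run without a real Designate
# endpoint, and the test asserts on the exact call the handler made.
# `FakeDomainResource` and its property values are illustrative only, not
# part of the tree.
import mock


class FakeDomainResource(object):
    def __init__(self):
        # client_plugin() returns the same child mock on every call.
        self.client_plugin = mock.MagicMock()

    def handle_create(self):
        # The real resource builds these kwargs from validated properties.
        self.client_plugin().domain_create(name='test-domain.com', ttl=3600)


res = FakeDomainResource()
res.handle_create()
res.client_plugin().domain_create.assert_called_once_with(
    name='test-domain.com', ttl=3600)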
heat-10.0.2/heat/tests/openstack/cinder/
heat-10.0.2/heat/tests/openstack/cinder/test_qos_specs.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from heat.engine.clients.os import cinder as c_plugin
from heat.engine.resources.openstack.cinder import qos_specs
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import utils

QOS_SPECS_TEMPLATE = {
    'heat_template_version': '2015-10-15',
    'description': 'Cinder QoS specs creation example',
    'resources': {
        'my_qos_specs': {
            'type': 'OS::Cinder::QoSSpecs',
            'properties': {
                'name': 'foobar',
                'specs': {"foo": "bar", "foo1": "bar1"}
            }
        }
    }
}

QOS_ASSOCIATE_TEMPLATE = {
    'heat_template_version': '2015-10-15',
    'description': 'Cinder QoS specs association example',
    'resources': {
        'my_qos_associate': {
            'type': 'OS::Cinder::QoSAssociation',
            'properties': {
                'volume_types': ['ceph', 'lvm'],
                'qos_specs': 'foobar'
            }
        }
    }
}


class QoSSpecsTest(common.HeatTestCase):

    def setUp(self):
        super(QoSSpecsTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.patchobject(c_plugin.CinderClientPlugin, 'has_extension',
                         return_value=True)
        self.stack = stack.Stack(
            self.ctx, 'cinder_qos_spec_test_stack',
            template.Template(QOS_SPECS_TEMPLATE)
        )
        self.my_qos_specs = self.stack['my_qos_specs']

        cinder_client = mock.MagicMock()
        self.cinderclient = mock.MagicMock()
        self.my_qos_specs.client = cinder_client
        cinder_client.return_value = self.cinderclient
        self.qos_specs = self.cinderclient.qos_specs
        self.value = mock.MagicMock()
        self.value.id = '927202df-1afb-497f-8368-9c2d2f26e5db'
        self.value.name = 'foobar'
        self.value.specs = {'foo': 'bar', 'foo1': 'bar1'}
        self.qos_specs.create.return_value = self.value

    def test_resource_mapping(self):
        mapping = qos_specs.resource_mapping()
        self.assertEqual(2, len(mapping))
        self.assertEqual(qos_specs.QoSSpecs,
                         mapping['OS::Cinder::QoSSpecs'])
        self.assertIsInstance(self.my_qos_specs, qos_specs.QoSSpecs)

    def _set_up_qos_specs_environment(self):
        self.qos_specs.create.return_value = self.value
        self.my_qos_specs.handle_create()

    def test_qos_specs_handle_create_specs(self):
        self._set_up_qos_specs_environment()
        self.assertEqual(1, self.qos_specs.create.call_count)
        self.assertEqual(self.value.id, self.my_qos_specs.resource_id)

    def test_qos_specs_handle_update_specs(self):
        self._set_up_qos_specs_environment()
        resource_id = self.my_qos_specs.resource_id
        prop_diff = {'specs': {'foo': 'bar', 'bar': 'bar'}}
        set_expected = {'bar': 'bar'}
        unset_expected = ['foo1']

        self.my_qos_specs.handle_update(
            json_snippet=None, tmpl_diff=None, prop_diff=prop_diff
        )
        self.qos_specs.set_keys.assert_called_once_with(
            resource_id, set_expected
        )
        self.qos_specs.unset_keys.assert_called_once_with(
            resource_id, unset_expected
        )

    def test_qos_specs_handle_delete_specs(self):
        self._set_up_qos_specs_environment()
        resource_id = self.my_qos_specs.resource_id
        self.my_qos_specs.handle_delete()
        self.qos_specs.disassociate_all.assert_called_once_with(resource_id)


class QoSAssociationTest(common.HeatTestCase):

    def setUp(self):
        super(QoSAssociationTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.qos_specs_id = 'foobar'
        self.patchobject(c_plugin.CinderClientPlugin, 'has_extension',
                         return_value=True)
        self.patchobject(c_plugin.CinderClientPlugin, 'get_qos_specs',
                         return_value=self.qos_specs_id)
        self.stack = stack.Stack(
            self.ctx, 'cinder_qos_associate_test_stack',
            template.Template(QOS_ASSOCIATE_TEMPLATE)
        )
        self.my_qos_associate = self.stack['my_qos_associate']

        cinder_client = mock.MagicMock()
        self.cinderclient = mock.MagicMock()
        self.my_qos_associate.client = cinder_client
        cinder_client.return_value = self.cinderclient
        self.qos_specs = self.cinderclient.qos_specs
        self.stub_QoSSpecsConstraint_validate()
        self.stub_VolumeTypeConstraint_validate()
        self.vt_ceph = 'ceph'
        self.vt_lvm = 'lvm'
        self.vt_new_ceph = 'new_ceph'

    def test_resource_mapping(self):
        mapping = qos_specs.resource_mapping()
        self.assertEqual(2, len(mapping))
        self.assertEqual(qos_specs.QoSAssociation,
                         mapping['OS::Cinder::QoSAssociation'])
        self.assertIsInstance(self.my_qos_associate,
                              qos_specs.QoSAssociation)

    def _set_up_qos_associate_environment(self):
        self.my_qos_associate.handle_create()

    def test_qos_associate_handle_create(self):
        self.patchobject(c_plugin.CinderClientPlugin, 'get_volume_type',
                         side_effect=[self.vt_ceph, self.vt_lvm])
        self._set_up_qos_associate_environment()
        self.cinderclient.qos_specs.associate.assert_any_call(
            self.qos_specs_id, self.vt_ceph
        )
        self.qos_specs.associate.assert_any_call(
            self.qos_specs_id, self.vt_lvm
        )

    def test_qos_associate_handle_update(self):
        self.patchobject(c_plugin.CinderClientPlugin, 'get_volume_type',
                         side_effect=[self.vt_lvm, self.vt_ceph,
                                      self.vt_new_ceph, self.vt_ceph])
        self._set_up_qos_associate_environment()
        prop_diff = {'volume_types': [self.vt_lvm, self.vt_new_ceph]}
        self.my_qos_associate.handle_update(
            json_snippet=None, tmpl_diff=None, prop_diff=prop_diff
        )
        self.qos_specs.associate.assert_any_call(
            self.qos_specs_id, self.vt_new_ceph
        )
        self.qos_specs.disassociate.assert_any_call(
            self.qos_specs_id, self.vt_ceph
        )

    def test_qos_associate_handle_delete_specs(self):
        self.patchobject(c_plugin.CinderClientPlugin, 'get_volume_type',
                         side_effect=[self.vt_ceph, self.vt_lvm,
                                      self.vt_ceph, self.vt_lvm])
        self._set_up_qos_associate_environment()
        self.my_qos_associate.handle_delete()
        self.qos_specs.disassociate.assert_any_call(
            self.qos_specs_id, self.vt_ceph
        )
        self.qos_specs.disassociate.assert_any_call(
            self.qos_specs_id, self.vt_lvm
        )
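# A hedged sketch of the set/unset diff that test_qos_specs_handle_update_specs
# above asserts on: keys that are new or changed go to set_keys(), keys that
# disappeared from the specs go to unset_keys(). The function name
# `specs_diff` is illustrative only, not part of the tree.
def specs_diff(old_specs, new_specs):
    set_keys = {k: v for k, v in new_specs.items() if old_specs.get(k) != v}
    unset_keys = [k for k in old_specs if k not in new_specs]
    return set_keys, unset_keys


# Mirrors the expectations in the test: foo unchanged, bar added, foo1 gone.
assert specs_diff({'foo': 'bar', 'foo1': 'bar1'},
                  {'foo': 'bar', 'bar': 'bar'}) == ({'bar': 'bar'}, ['foo1'])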
heat-10.0.2/heat/tests/openstack/cinder/test_volume_utils.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from cinderclient import exceptions as cinder_exp
from cinderclient.v2 import client as cinderclient
import six

from heat.engine.clients.os import cinder
from heat.engine.clients.os import nova
from heat.engine.resources.aws.ec2 import volume as aws_vol
from heat.engine.resources.openstack.cinder import volume as os_vol
from heat.engine import scheduler
from heat.engine import stk_defn
from heat.tests import common
from heat.tests.openstack.nova import fakes as fakes_nova


class BaseVolumeTest(common.HeatTestCase):
    def setUp(self):
        super(BaseVolumeTest, self).setUp()
        self.fc = fakes_nova.FakeClient()
        self.cinder_fc = cinderclient.Client('username', 'password')
        self.cinder_fc.volume_api_version = 2
        self.m.StubOutWithMock(cinder.CinderClientPlugin, '_create')
        self.m.StubOutWithMock(nova.NovaClientPlugin, '_create')
        self.m.StubOutWithMock(self.cinder_fc.volumes, 'create')
        self.m.StubOutWithMock(self.cinder_fc.volumes, 'get')
        self.m.StubOutWithMock(self.cinder_fc.volumes, 'delete')
        self.m.StubOutWithMock(self.cinder_fc.volumes, 'extend')
        self.m.StubOutWithMock(self.cinder_fc.volumes, 'update')
        self.m.StubOutWithMock(self.cinder_fc.volumes, 'update_all_metadata')
        self.m.StubOutWithMock(self.fc.volumes, 'create_server_volume')
        self.m.StubOutWithMock(self.fc.volumes, 'delete_server_volume')
        self.m.StubOutWithMock(self.fc.volumes, 'get_server_volume')
        self.use_cinder = False

    def _mock_delete_volume(self, fv):
        self.cinder_fc.volumes.get(fv.id).AndReturn(
            FakeVolume('available'))
        self.cinder_fc.volumes.delete(fv.id).AndReturn(True)
        self.cinder_fc.volumes.get(fv.id).AndRaise(
            cinder_exp.NotFound('Not found'))

    def _mock_create_server_volume_script(self, fva,
                                          server=u'WikiDatabase',
                                          volume='vol-123',
                                          device=u'/dev/vdc',
                                          final_status='in-use',
                                          update=False):
        if not update:
            nova.NovaClientPlugin._create().MultipleTimes().AndReturn(self.fc)
        self.fc.volumes.create_server_volume(
            device=device, server_id=server, volume_id=volume).AndReturn(fva)
        fv_ready = FakeVolume(final_status, id=fva.id)
        self.cinder_fc.volumes.get(fva.id).AndReturn(fv_ready)
        return fv_ready

    def get_volume(self, t, stack, resource_name):
        if self.use_cinder:
            Volume = os_vol.CinderVolume
        else:
            data = t['Resources'][resource_name]
            data['Properties']['AvailabilityZone'] = 'nova'
            Volume = aws_vol.Volume
        vol = Volume(resource_name,
                     stack.defn.resource_definition(resource_name),
                     stack)
        return vol

    def create_volume(self, t, stack, resource_name):
        rsrc = self.get_volume(t, stack, resource_name)
        if isinstance(rsrc, os_vol.CinderVolume):
            self.patchobject(rsrc, '_store_config_default_properties')
        self.assertIsNone(rsrc.validate())
        scheduler.TaskRunner(rsrc.create)()
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        stk_defn.update_resource_data(stack.defn, resource_name,
                                      rsrc.node_data())
        return rsrc

    def create_attachment(self, t, stack, resource_name):
        if self.use_cinder:
            Attachment = os_vol.CinderVolumeAttachment
        else:
            Attachment = aws_vol.VolumeAttachment
        rsrc = Attachment(resource_name,
                          stack.defn.resource_definition(resource_name),
                          stack)
        self.assertIsNone(rsrc.validate())
        scheduler.TaskRunner(rsrc.create)()
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        stk_defn.update_resource_data(stack.defn, resource_name,
                                      rsrc.node_data())
        return rsrc


class FakeVolume(object):
    _ID = 'vol-123'

    def __init__(self, status, **attrs):
        self.status = status

        for key, value in six.iteritems(attrs):
            setattr(self, key, value)

        if 'id' not in attrs:
            self.id = self._ID


class FakeBackup(FakeVolume):
    _ID = 'backup-123'


class FakeBackupRestore(object):
    def __init__(self, volume_id='vol-123'):
        self.volume_id = volume_id
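# A hedged sketch of what the FakeVolume helper above gives the volume tests:
# a positional status, arbitrary keyword attributes, and a default id when
# none is supplied. The values below are illustrative.
fv = FakeVolume('available', size=2)
assert fv.status == 'available'
assert fv.size == 2
assert fv.id == 'vol-123'       # falls back to FakeVolume._ID

fb = FakeBackup('creating', id='backup-456')
assert fb.status == 'creating'
assert fb.id == 'backup-456'    # an explicit id wins over the class default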
heat-10.0.2/heat/tests/openstack/cinder/test_volume_type_encryption.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from heat.common import exception
from heat.engine.clients.os import cinder as c_plugin
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import utils

cinder_volume_type_encryption = {
    'heat_template_version': '2015-04-30',
    'resources': {
        'my_encrypted_vol_type': {
            'type': 'OS::Cinder::EncryptedVolumeType',
            'properties': {
                'provider': 'nova.volume.encryptors.luks.LuksEncryptor',
                'control_location': 'front-end',
                'cipher': 'aes-xts-plain64',
                'key_size': '512',
                'volume_type': '01bd581d-33fe-4d6d-bd7b-70ae076d39fb'
            }
        }
    }
}


class CinderEncryptedVolumeTypeTest(common.HeatTestCase):
    def setUp(self):
        super(CinderEncryptedVolumeTypeTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.patchobject(c_plugin.CinderClientPlugin, 'has_extension',
                         return_value=True)
        self.stack = stack.Stack(
            self.ctx, 'cinder_vol_type_encryption_test_stack',
            template.Template(cinder_volume_type_encryption)
        )
        self.my_encrypted_vol_type = self.stack['my_encrypted_vol_type']
        cinder = mock.MagicMock()
        self.cinderclient = mock.MagicMock()
        self.my_encrypted_vol_type.client = cinder
        cinder.return_value = self.cinderclient
        self.volume_encryption_types = (
            self.cinderclient.volume_encryption_types)

    def test_handle_create(self):
        value = mock.MagicMock()
        volume_type_id = '01bd581d-33fe-4d6d-bd7b-70ae076d39fb'
        value.volume_type_id = volume_type_id
        self.volume_encryption_types.create.return_value = value

        with mock.patch.object(self.my_encrypted_vol_type.client_plugin(),
                               'get_volume_type') as mock_get_volume_type:
            mock_get_volume_type.return_value = volume_type_id
            self.my_encrypted_vol_type.handle_create()
            mock_get_volume_type.assert_called_once_with(volume_type_id)

        specs = {
            'control_location': 'front-end',
            'cipher': 'aes-xts-plain64',
            'key_size': 512,
            'provider': 'nova.volume.encryptors.luks.LuksEncryptor'
        }
        self.volume_encryption_types.create.assert_called_once_with(
            volume_type=volume_type_id, specs=specs)
        self.assertEqual(volume_type_id,
                         self.my_encrypted_vol_type.resource_id)

    def test_handle_update(self):
        update_args = {
            'control_location': 'back-end',
            'key_size': 256,
            'cipher': 'aes-cbc-essiv',
            'provider':
                'nova.volume.encryptors.cryptsetup.CryptsetupEncryptor'
        }
        volume_type_id = '01bd581d-33fe-4d6d-bd7b-70ae076d39fb'
        self.my_encrypted_vol_type.resource_id = volume_type_id
        self.my_encrypted_vol_type.handle_update(json_snippet=None,
                                                 tmpl_diff=None,
                                                 prop_diff=update_args)

        self.volume_encryption_types.update.assert_called_once_with(
            volume_type=volume_type_id, specs=update_args)

    def test_get_live_state(self):
        self.my_encrypted_vol_type.resource_id = '1234'
        value = mock.MagicMock()
        value.to_dict.return_value = {
            'volume_type_id': '1122',
            'provider': 'nova.Test',
            'cipher': 'aes-xts-plain64',
            'control_location': 'front-end',
            'key_size': 256
        }
        self.volume_encryption_types.get.return_value = value

        reality = self.my_encrypted_vol_type.get_live_state(
            self.my_encrypted_vol_type.properties)
        expected = {
            'provider': 'nova.Test',
            'cipher': 'aes-xts-plain64',
            'control_location': 'front-end',
            'key_size': 256
        }
        self.assertEqual(expected, reality)

    def test_get_live_state_found_but_deleted(self):
        self.my_encrypted_vol_type.resource_id = '1234'
        value = mock.MagicMock(spec=[])
        self.volume_encryption_types.get.return_value = value

        self.assertRaises(exception.EntityNotFound,
                          self.my_encrypted_vol_type.get_live_state,
                          self.my_encrypted_vol_type.properties)
heat-10.0.2/heat/tests/openstack/cinder/test_quota.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import six

from heat.common import exception
from heat.common import template_format
from heat.engine.clients.os import cinder as c_plugin
from heat.engine.clients.os import keystone as k_plugin
from heat.engine import rsrc_defn
from heat.engine import stack as parser
from heat.engine import template
from heat.tests import common
from heat.tests import utils

quota_template = '''
heat_template_version: newton
description: Sample cinder quota heat template
resources:
  my_quota:
    type: OS::Cinder::Quota
    properties:
      project: demo
      gigabytes: 5
      snapshots: 2
      volumes: 3
'''


class CinderQuotaTest(common.HeatTestCase):
    def setUp(self):
        super(CinderQuotaTest, self).setUp()

        self.ctx = utils.dummy_context()
        self.patchobject(c_plugin.CinderClientPlugin, 'has_extension',
                         return_value=True)
        self.patchobject(k_plugin.KeystoneClientPlugin, 'get_project_id',
                         return_value='some_project_id')
        tpl = template_format.parse(quota_template)
        self.stack = parser.Stack(
            self.ctx, 'cinder_quota_test_stack',
            template.Template(tpl)
        )

        self.my_quota = self.stack['my_quota']
        cinder = mock.MagicMock()
        self.cinderclient = mock.MagicMock()
        self.my_quota.client = cinder
        cinder.return_value = self.cinderclient
        self.quotas = self.cinderclient.quotas
        self.quota_set = mock.MagicMock()
        self.quotas.update.return_value = self.quota_set
        self.quotas.delete.return_value = self.quota_set

        class FakeVolumeOrSnapshot(object):
            def __init__(self, size=1):
                self.size = size
        self.fv = FakeVolumeOrSnapshot
        f_v = self.fv()

        self.volume_snapshots = self.cinderclient.volume_snapshots
        self.volume_snapshots.list.return_value = [f_v, f_v]
        self.volumes = self.cinderclient.volumes
        self.volumes.list.return_value = [f_v, f_v, f_v]

        self.err_msg = ("Invalid quota %(property)s value(s): %(value)s. "
                        "Can not be less than the current usage value(s): "
                        "%(total)s.")

    def _test_validate(self, resource, error_msg):
        exc = self.assertRaises(exception.StackValidationFailed,
                                resource.validate)
        self.assertIn(error_msg, six.text_type(exc))

    def _test_invalid_property(self, prop_name):
        my_quota = self.stack['my_quota']
        props = self.stack.t.t['resources']['my_quota']['properties'].copy()
        props[prop_name] = -2
        my_quota.t = my_quota.t.freeze(properties=props)
        my_quota.reparse()
        error_msg = ('Property error: resources.my_quota.properties.%s:'
                     ' -2 is out of range (min: -1, max: None)' % prop_name)
        self._test_validate(my_quota, error_msg)

    def test_invalid_gigabytes(self):
        self._test_invalid_property('gigabytes')

    def test_invalid_snapshots(self):
        self._test_invalid_property('snapshots')

    def test_invalid_volumes(self):
        self._test_invalid_property('volumes')

    def test_miss_all_quotas(self):
        my_quota = self.stack['my_quota']
        props = self.stack.t.t['resources']['my_quota']['properties'].copy()
        del props['gigabytes'], props['snapshots'], props['volumes']
        my_quota.t = my_quota.t.freeze(properties=props)
        my_quota.reparse()
        msg = ('At least one of the following properties must be specified: '
               'gigabytes, snapshots, volumes.')
        self.assertRaisesRegex(exception.PropertyUnspecifiedError, msg,
                               my_quota.validate)

    def test_quota_handle_create(self):
        self.my_quota.physical_resource_name = mock.MagicMock(
            return_value='some_resource_id')
        self.my_quota.reparse()
        self.my_quota.handle_create()
        self.quotas.update.assert_called_once_with(
            'some_project_id',
            gigabytes=5,
            snapshots=2,
            volumes=3
        )
        self.assertEqual('some_resource_id', self.my_quota.resource_id)

    def test_quota_handle_update(self):
        tmpl_diff = mock.MagicMock()
        prop_diff = mock.MagicMock()
        props = {'project': 'some_project_id', 'gigabytes': 6, 'volumes': 4}
        json_snippet = rsrc_defn.ResourceDefinition(
            self.my_quota.name,
            'OS::Cinder::Quota',
            properties=props)
        self.my_quota.reparse()
        self.my_quota.handle_update(json_snippet, tmpl_diff, prop_diff)
        self.quotas.update.assert_called_once_with(
            'some_project_id',
            gigabytes=6,
            volumes=4
        )

    def test_quota_handle_delete(self):
        self.my_quota.reparse()
        self.my_quota.handle_delete()
        self.quotas.delete.assert_called_once_with('some_project_id')

    def test_quota_with_invalid_gigabytes(self):
        fake_v = self.fv(2)
        self.volumes.list.return_value = [fake_v, fake_v]
        self.my_quota.physical_resource_name = mock.MagicMock(
            return_value='some_resource_id')
        self.my_quota.reparse()
        err = self.assertRaises(ValueError, self.my_quota.handle_create)
        self.assertEqual(
            self.err_msg % {'property': 'gigabytes',
                            'value': 5, 'total': 6},
            six.text_type(err))

    def test_quota_with_invalid_volumes(self):
        fake_v = self.fv(0)
        self.volumes.list.return_value = [fake_v, fake_v, fake_v, fake_v]
        self.my_quota.physical_resource_name = mock.MagicMock(
            return_value='some_resource_id')
        self.my_quota.reparse()
        err = self.assertRaises(ValueError, self.my_quota.handle_create)
        self.assertEqual(
            self.err_msg % {'property': 'volumes',
                            'value': 3, 'total': 4},
            six.text_type(err))

    def test_quota_with_invalid_snapshots(self):
        fake_v = self.fv(0)
        self.volume_snapshots.list.return_value = [fake_v, fake_v, fake_v,
                                                   fake_v]
        self.my_quota.physical_resource_name = mock.MagicMock(
            return_value='some_resource_id')
        self.my_quota.reparse()
        err = self.assertRaises(ValueError, self.my_quota.handle_create)
        self.assertEqual(
            self.err_msg % {'property': 'snapshots',
                            'value': 2, 'total': 4},
            six.text_type(err))
heat-10.0.2/heat/tests/openstack/cinder/test_volume_type.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import mock
import six

from heat.common import exception
from heat.engine.clients.os import cinder as c_plugin
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import utils

volume_type_template = {
    'heat_template_version': '2013-05-23',
    'resources': {
        'my_volume_type': {
            'type': 'OS::Cinder::VolumeType',
            'properties': {
                'name': 'volumeBackend',
                'metadata': {'volume_backend_name': 'lvmdriver'}
            }
        }
    }
}


class CinderVolumeTypeTest(common.HeatTestCase):
    def setUp(self):
        super(CinderVolumeTypeTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.patchobject(c_plugin.CinderClientPlugin, 'has_extension',
                         return_value=True)
        self.stack = stack.Stack(
            self.ctx, 'cinder_volume_type_test_stack',
            template.Template(volume_type_template)
        )
        self.my_volume_type = self.stack['my_volume_type']
        cinder = mock.MagicMock()
        self.cinderclient = mock.MagicMock()
        self.my_volume_type.client = cinder
        cinder.return_value = self.cinderclient
        self.volume_types = self.cinderclient.volume_types
        self.volume_type_access = self.cinderclient.volume_type_access
        keystoneclient = self.stack.clients.client_plugin('keystone').client()
        keystoneclient.client = mock.MagicMock()
        keystoneclient.client.projects = mock.MagicMock()
        self.project_list = mock.MagicMock()
        keystoneclient.client.projects.get = self.project_list

    def _test_handle_create(self, is_public=True, projects=None):
        value = mock.MagicMock()
        volume_type_id = '927202df-1afb-497f-8368-9c2d2f26e5db'
        value.id = volume_type_id
        self.volume_types.create.return_value = value
        tmpl = self.stack.t.t
        props = tmpl['resources']['my_volume_type']['properties'].copy()
        props['is_public'] = is_public
        if projects:
            props['projects'] = projects
            project = collections.namedtuple('Project', ['id'])
            stub_projects = [project(p) for p in projects]
            self.project_list.side_effect = [p for p in stub_projects]
        self.my_volume_type.t = self.my_volume_type.t.freeze(properties=props)
        self.my_volume_type.reparse()
        self.my_volume_type.handle_create()
        self.volume_types.create.assert_called_once_with(
            name='volumeBackend', is_public=is_public, description=None)
        value.set_keys.assert_called_once_with(
            {'volume_backend_name': 'lvmdriver'})
        self.assertEqual(volume_type_id, self.my_volume_type.resource_id)
        if projects:
            calls = []
            for p in projects:
                calls.append(mock.call(volume_type_id, p))
            self.volume_type_access.add_project_access.assert_has_calls(calls)

    def test_volume_type_handle_create_public(self):
        self._test_handle_create()

    def test_volume_type_handle_create_not_public(self):
        self._test_handle_create(is_public=False)

    def test_volume_type_with_projects(self):
        self.cinderclient.volume_api_version = 2
        self._test_handle_create(projects=['id1', 'id2'])

    def _test_update(self, update_args, is_update_metadata=False):
        if is_update_metadata:
            value = mock.MagicMock()
            self.volume_types.get.return_value = value
            value.get_keys.return_value = {'volume_backend_name': 'lvmdriver'}
        else:
            volume_type_id = '927202df-1afb-497f-8368-9c2d2f26e5db'
            self.my_volume_type.resource_id = volume_type_id

        self.my_volume_type.handle_update(json_snippet=None,
                                          tmpl_diff=None,
                                          prop_diff=update_args)
        if is_update_metadata:
            value.unset_keys.assert_called_once_with(
                {'volume_backend_name': 'lvmdriver'})
            value.set_keys.assert_called_once_with(
                update_args['metadata'])
        else:
            self.volume_types.update.assert_called_once_with(
                volume_type_id, **update_args)

    def test_volume_type_handle_update_description(self):
        update_args = {'description': 'update'}
        self._test_update(update_args)

    def test_volume_type_handle_update_name(self):
        update_args = {'name': 'update'}
        self._test_update(update_args)

    def test_volume_type_handle_update_is_public(self):
        prop_diff = {'is_public': True, "projects": []}
        self.patchobject(self.volume_type_access, 'list')
        volume_type_id = '927202df-1afb-497f-8368-9c2d2f26e5db'
        self.my_volume_type.resource_id = volume_type_id
        self.my_volume_type.handle_update(json_snippet=None,
                                          tmpl_diff=None,
                                          prop_diff=prop_diff)
        self.volume_types.update.assert_called_once_with(
            volume_type_id, is_public=True)
        self.volume_type_access.list.assert_not_called()

    def test_volume_type_handle_update_metadata(self):
        new_keys = {'volume_backend_name': 'lvmdriver',
                    'capabilities:replication': 'True'}
        prop_diff = {'metadata': new_keys}
        self._test_update(prop_diff, is_update_metadata=True)

    def test_volume_type_update_projects(self):
        self.my_volume_type.resource_id = '8aeaa446459a4d3196bc573fc252800b'
        prop_diff = {'projects': ['id2', 'id3'], 'is_public': False}

        class Access(object):
            def __init__(self, idx, project):
                self.volume_type_id = idx
                self.project_id = project
                info = {'volume_type_id': idx, 'project_id': project}
                self.to_dict = mock.Mock(return_value=info)

        old_access = [Access(self.my_volume_type.resource_id, 'id1'),
                      Access(self.my_volume_type.resource_id, 'id2')]
        self.patchobject(self.volume_type_access, 'list',
                         return_value=old_access)
        self.patchobject(self.volume_type_access, 'remove_project_access')
        project = collections.namedtuple('Project', ['id'])
        self.project_list.return_value = project('id3')

        self.my_volume_type.handle_update(json_snippet=None,
                                          tmpl_diff=None,
                                          prop_diff=prop_diff)

        self.volume_type_access.remove_project_access.assert_called_once_with(
            self.my_volume_type.resource_id, 'id1')
        self.project_list.assert_called_once_with('id3')
        self.volume_type_access.add_project_access.assert_called_once_with(
            self.my_volume_type.resource_id, 'id3')

    def test_validate_projects_when_public(self):
        tmpl = self.stack.t.t
        props = tmpl['resources']['my_volume_type']['properties'].copy()
        props['is_public'] = True
        props['projects'] = ['id1']
        self.my_volume_type.t = self.my_volume_type.t.freeze(properties=props)
        self.my_volume_type.reparse()
        self.cinderclient.volume_api_version = 2
        self.stub_KeystoneProjectConstraint()
        ex = self.assertRaises(exception.StackValidationFailed,
                               self.my_volume_type.validate)
        expected = ('Can not specify property "projects" '
                    'if the volume type is public.')
        self.assertEqual(expected, six.text_type(ex))

    def test_validate_projects_when_private(self):
        tmpl = self.stack.t.t
        props = tmpl['resources']['my_volume_type']['properties'].copy()
        props['is_public'] = False
        props['projects'] = ['id1']
        self.my_volume_type.t = self.my_volume_type.t.freeze(properties=props)
        self.my_volume_type.reparse()
        self.cinderclient.volume_api_version = 2
        self.stub_KeystoneProjectConstraint()
        self.assertIsNone(self.my_volume_type.validate())

    def test_volume_type_get_live_state_public(self):
        self.my_volume_type.resource_id = '1234'
        volume_type = mock.Mock()
        volume_type.to_dict.return_value = {'name': 'test',
                                            'is_public': True,
                                            'description': 'test1',
                                            'metadata': {'one': 'two'}}
        self.volume_types.get.return_value = volume_type
        volume_type.get_keys.return_value = {'one': 'two'}
        volume_type_access = mock.MagicMock()
        self.cinderclient.volume_type_access = volume_type_access

        reality = self.my_volume_type.get_live_state(
            self.my_volume_type.properties)
        expected = {
            'name': 'test',
            'is_public': True,
            'description': 'test1',
            'projects': [],
            'metadata': {'one': 'two'}
        }
        self.assertEqual(set(expected.keys()), set(reality.keys()))
        for key in reality:
            self.assertEqual(expected[key], reality[key])
        self.assertEqual(0, volume_type_access.list.call_count)

    def test_volume_type_get_live_state_not_public(self):
        self.my_volume_type.resource_id = '1234'
        volume_type = mock.Mock()
        volume_type.to_dict.return_value = {'name': 'test',
                                            'is_public': False,
                                            'description': 'test1',
                                            'metadata': {'one': 'two'}}
        self.volume_types.get.return_value = volume_type
        volume_type.get_keys.return_value = {'one': 'two'}
        volume_type_access = mock.MagicMock()

        class Access(object):
            def __init__(self, idx, project, info):
                self.volumetype_id = idx
                self.project_id = project
                self.to_dict = mock.Mock(return_value=info)

        volume_type_access.list.return_value = [
            Access('1234', '1', {'volumetype_id': '1234', 'project_id': '1'}),
            Access('1234', '2', {'volumetype_id': '1234', 'project_id': '2'})]
        self.cinderclient.volume_type_access = volume_type_access

        reality = self.my_volume_type.get_live_state(
            self.my_volume_type.properties)
        expected = {
            'name': 'test',
            'is_public': False,
            'description': 'test1',
            'metadata': {'one': 'two'},
            'projects': ['1', '2']
        }
        self.assertEqual(expected, reality)
        volume_type_access.list.assert_called_once_with('1234')
heat-10.0.2/heat/tests/openstack/cinder/__init__.py
heat-10.0.2/heat/tests/openstack/cinder/test_volume.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
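# The volume tests below use the mox-style recorder exposed as self.m by the
# test base class rather than mock.patch: expectations are recorded first
# (StubOutWithMock, then calls ending in AndReturn/AndRaise, optionally
# MultipleTimes), self.m.ReplayAll() switches to replay mode before the
# resource is exercised, and self.m.VerifyAll() asserts that every recorded
# call was consumed. Newer tests in the same file mix in self.patchobject,
# which needs no replay/verify step.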
import collections
import copy
import json

from cinderclient import exceptions as cinder_exp
import mock
from oslo_config import cfg
import six

from heat.common import exception
from heat.common import template_format
from heat.engine.clients.os import cinder
from heat.engine.clients.os import glance
from heat.engine.clients.os import nova
from heat.engine.resources.openstack.cinder import volume as c_vol
from heat.engine.resources import scheduler_hints as sh
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.objects import resource_data as resource_data_object
from heat.tests.openstack.cinder import test_volume_utils as vt_base
from heat.tests.openstack.nova import fakes as fakes_nova
from heat.tests import utils

cinder_volume_template = '''
heat_template_version: 2013-05-23
description: Cinder volumes and attachments.
resources:
  volume:
    type: OS::Cinder::Volume
    properties:
      availability_zone: nova
      size: 1
      name: test_name
      description: test_description
      metadata:
        key: value
  volume2:
    type: OS::Cinder::Volume
    properties:
      availability_zone: nova
      size: 2
  volume3:
    type: OS::Cinder::Volume
    properties:
      availability_zone: nova
      size: 1
      name: test_name
      scheduler_hints: {"hint1": "good_advice"}
  volume4:
    type: OS::Cinder::Volume
    properties:
      availability_zone: nova
      size: 1
      name: test_name
      multiattach: True
  attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: WikiDatabase
      volume_id: { get_resource: volume }
      mountpoint: /dev/vdc
'''

single_cinder_volume_template = '''
heat_template_version: 2013-05-23
description: Cinder volume
resources:
  volume:
    type: OS::Cinder::Volume
    properties:
      size: 1
      name: test_name
      description: test_description
'''


class CinderVolumeTest(vt_base.BaseVolumeTest):

    def setUp(self):
        super(CinderVolumeTest, self).setUp()
        self.t = template_format.parse(cinder_volume_template)
        self.use_cinder = True

    def _mock_create_volume(self, fv, stack_name, size=1,
                            final_status='available'):
        cinder.CinderClientPlugin._create().MultipleTimes().AndReturn(
            self.cinder_fc)
        self.cinder_fc.volumes.create(
            size=size, availability_zone='nova',
            description='test_description',
            name='test_name',
            metadata={'key': 'value'},
            multiattach=False
        ).AndReturn(fv)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        fv_ready = vt_base.FakeVolume(final_status, id=fv.id)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv_ready)
        return fv_ready

    def test_cinder_volume_size_constraint(self):
        self.t['resources']['volume']['properties']['size'] = 0
        stack = utils.parse_stack(self.t)
        error = self.assertRaises(exception.StackValidationFailed,
                                  self.create_volume,
                                  self.t, stack, 'volume')
        self.assertEqual(
            "Property error: resources.volume.properties.size: "
            "0 is out of range (min: 1, max: None)", six.text_type(error))

    def test_cinder_create(self):
        fv = vt_base.FakeVolume('creating')
        stack_name = 'test_cvolume_stack'

        self.stub_SnapshotConstraint_validate()
        self.stub_VolumeConstraint_validate()
        self.stub_VolumeTypeConstraint_validate()
        cinder.CinderClientPlugin._create().AndReturn(
            self.cinder_fc)
        self.cinder_fc.volumes.create(
            size=1, availability_zone='nova',
            description='test_description',
            name='test_name',
            metadata={'key': 'value'},
            volume_type='lvm',
            multiattach=False).AndReturn(fv)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        fv_ready = vt_base.FakeVolume('available', id=fv.id)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv_ready)
        self.m.ReplayAll()

        self.t['resources']['volume']['properties'].update({
            'volume_type': 'lvm',
        })
        stack = utils.parse_stack(self.t, stack_name=stack_name)
        self.create_volume(self.t, stack, 'volume')

        self.m.VerifyAll()

    def test_cinder_create_from_image(self):
        fv = vt_base.FakeVolume('downloading')
        stack_name = 'test_cvolume_create_from_img_stack'
        image_id = '46988116-6703-4623-9dbc-2bc6d284021b'
        cinder.CinderClientPlugin._create().AndReturn(
            self.cinder_fc)
        self.m.StubOutWithMock(glance.GlanceClientPlugin,
                               'find_image_by_name_or_id')
        glance.GlanceClientPlugin.find_image_by_name_or_id(
            image_id).MultipleTimes().AndReturn(image_id)

        self.cinder_fc.volumes.create(
            size=1, availability_zone='nova',
            description='ImageVolumeDescription',
            name='ImageVolume',
            imageRef=image_id,
            multiattach=False,
            metadata={}).AndReturn(fv)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        fv_ready = vt_base.FakeVolume('available', id=fv.id)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv_ready)
        self.m.ReplayAll()

        self.t['resources']['volume']['properties'] = {
            'size': '1',
            'name': 'ImageVolume',
            'description': 'ImageVolumeDescription',
            'availability_zone': 'nova',
            'image': image_id,
        }
        stack = utils.parse_stack(self.t, stack_name=stack_name)
        self.create_volume(self.t, stack, 'volume')

        self.m.VerifyAll()

    def test_cinder_create_with_read_only(self):
        fv = vt_base.FakeVolume('with_read_only_access_mode')
        stack_name = 'test_create_with_read_only'

        cinder.CinderClientPlugin._create().AndReturn(
            self.cinder_fc)
        self.cinder_fc.volumes.create(
            size=1, availability_zone='nova',
            description='ImageVolumeDescription',
            name='ImageVolume',
            multiattach=False,
            metadata={}).AndReturn(fv)
        update_readonly_mock = self.patchobject(self.cinder_fc.volumes,
                                                'update_readonly_flag')
        update_readonly_mock.return_value = None
        fv_ready = vt_base.FakeVolume('available', id=fv.id)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv_ready)
        self.m.ReplayAll()

        self.t['resources']['volume']['properties'] = {
            'size': '1',
            'name': 'ImageVolume',
            'description': 'ImageVolumeDescription',
            'availability_zone': 'nova',
            'read_only': False,
        }
        stack = utils.parse_stack(self.t, stack_name=stack_name)
        self.create_volume(self.t, stack, 'volume')
        update_readonly_mock.assert_called_once_with(fv.id, False)

        self.m.VerifyAll()

    def test_cinder_default(self):
        fv = vt_base.FakeVolume('creating')
        stack_name = 'test_cvolume_default_stack'

        cinder.CinderClientPlugin._create().AndReturn(
            self.cinder_fc)
        vol_name = utils.PhysName(stack_name, 'volume')
        self.cinder_fc.volumes.create(
            size=1, availability_zone='nova',
            description=None,
            name=vol_name,
            multiattach=False,
            metadata={}
        ).AndReturn(fv)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        fv_ready = vt_base.FakeVolume('available', id=fv.id)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv_ready)
        self.m.ReplayAll()

        self.t['resources']['volume']['properties'] = {
            'size': '1',
            'availability_zone': 'nova',
        }
        stack = utils.parse_stack(self.t, stack_name=stack_name)
        self.create_volume(self.t, stack, 'volume')

        self.m.VerifyAll()

    def test_cinder_fn_getatt(self):
        stack_name = 'test_cvolume_fngetatt_stack'

        self._mock_create_volume(vt_base.FakeVolume('creating'), stack_name)
        fv = vt_base.FakeVolume(
            'available', availability_zone='zone1',
            size=1, snapshot_id='snap-123', name='name',
            description='desc', volume_type='lvm',
            metadata={'key': 'value'}, source_volid=None,
            bootable=False, created_at='2013-02-25T02:40:21.000000',
            encrypted=False, attachments=[], multiattach=False)
        self.cinder_fc.volumes.get('vol-123').MultipleTimes().AndReturn(fv)
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)
        rsrc = self.create_volume(self.t, stack, 'volume')

        self.assertEqual(u'zone1',
                         rsrc.FnGetAtt('availability_zone'))
        self.assertEqual(u'1', rsrc.FnGetAtt('size'))
        self.assertEqual(u'snap-123', rsrc.FnGetAtt('snapshot_id'))
        self.assertEqual(u'name', rsrc.FnGetAtt('display_name'))
        self.assertEqual(u'desc', rsrc.FnGetAtt('display_description'))
        self.assertEqual(u'lvm', rsrc.FnGetAtt('volume_type'))
        self.assertEqual(json.dumps({'key': 'value'}),
                         rsrc.FnGetAtt('metadata'))
        self.assertEqual({'key': 'value'},
                         rsrc.FnGetAtt('metadata_values'))
        self.assertEqual(u'None', rsrc.FnGetAtt('source_volid'))
        self.assertEqual(u'available', rsrc.FnGetAtt('status'))
        self.assertEqual(u'2013-02-25T02:40:21.000000',
                         rsrc.FnGetAtt('created_at'))
        self.assertEqual(u'False', rsrc.FnGetAtt('bootable'))
        self.assertEqual(u'False', rsrc.FnGetAtt('encrypted'))
        self.assertEqual(u'[]', rsrc.FnGetAtt('attachments'))
        self.assertEqual([], rsrc.FnGetAtt('attachments_list'))
        self.assertEqual('False', rsrc.FnGetAtt('multiattach'))
        error = self.assertRaises(exception.InvalidTemplateAttribute,
                                  rsrc.FnGetAtt, 'unknown')
        self.assertEqual(
            'The Referenced Attribute (volume unknown) is incorrect.',
            six.text_type(error))

        self.m.VerifyAll()

    def test_cinder_attachment(self):
        stack_name = 'test_cvolume_attach_stack'

        self._mock_create_volume(vt_base.FakeVolume('creating'), stack_name)
        self._mock_create_server_volume_script(vt_base.FakeVolume('attaching'))
        self.stub_VolumeConstraint_validate()
        # delete script
        fva = vt_base.FakeVolume('in-use')
        self.fc.volumes.get_server_volume(u'WikiDatabase',
                                          'vol-123').AndReturn(fva)
        self.cinder_fc.volumes.get(fva.id).AndReturn(fva)
        self.fc.volumes.delete_server_volume(
            'WikiDatabase', 'vol-123').MultipleTimes().AndReturn(None)
        self.cinder_fc.volumes.get(fva.id).AndReturn(
            vt_base.FakeVolume('available'))
        self.fc.volumes.get_server_volume(u'WikiDatabase',
                                          'vol-123').AndReturn(fva)
        self.fc.volumes.get_server_volume(
            u'WikiDatabase', 'vol-123').AndRaise(fakes_nova.fake_exception())
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)
        self.create_volume(self.t, stack, 'volume')
        rsrc = self.create_attachment(self.t, stack, 'attachment')
        scheduler.TaskRunner(rsrc.delete)()

        self.m.VerifyAll()

    def test_cinder_attachment_no_mountpoint(self):
        stack_name = 'test_cvolume_attach_stack'

        self._mock_create_volume(vt_base.FakeVolume('creating'), stack_name)
        self._mock_create_server_volume_script(vt_base.FakeVolume('attaching'),
                                               device=None)
        self.stub_VolumeConstraint_validate()
        # delete script
        fva = vt_base.FakeVolume('in-use')
        self.fc.volumes.get_server_volume(u'WikiDatabase',
                                          'vol-123').AndReturn(fva)
        self.cinder_fc.volumes.get(fva.id).AndReturn(fva)
        self.fc.volumes.delete_server_volume(
            'WikiDatabase', 'vol-123').MultipleTimes().AndReturn(None)
        self.cinder_fc.volumes.get(fva.id).AndReturn(
            vt_base.FakeVolume('available'))
        self.fc.volumes.get_server_volume(u'WikiDatabase',
                                          'vol-123').AndReturn(fva)
        self.fc.volumes.get_server_volume(
            u'WikiDatabase', 'vol-123').AndRaise(fakes_nova.fake_exception())
        self.m.ReplayAll()

        self.t['resources']['attachment']['properties']['mountpoint'] = ''
        stack = utils.parse_stack(self.t, stack_name=stack_name)
        self.create_volume(self.t, stack, 'volume')
        rsrc = self.create_attachment(self.t, stack, 'attachment')
        scheduler.TaskRunner(rsrc.delete)()

        self.m.VerifyAll()

    def test_cinder_volume_shrink_fails(self):
        stack_name = 'test_cvolume_shrink_fail_stack'

        # create script
        self._mock_create_volume(vt_base.FakeVolume('creating'),
                                 stack_name, size=2)
        # update script
        fv = vt_base.FakeVolume('available', size=2)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        self.m.ReplayAll()

        self.t['resources']['volume']['properties']['size'] = 2
        stack = utils.parse_stack(self.t, stack_name=stack_name)
        rsrc = self.create_volume(self.t, stack, 'volume')

        props = copy.deepcopy(rsrc.properties.data)
        props['size'] = 1
        after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)

        update_task = scheduler.TaskRunner(rsrc.update, after)
        ex = self.assertRaises(exception.ResourceFailure, update_task)
        self.assertEqual('NotSupported: resources.volume: '
                         'Shrinking volume is not supported.',
                         six.text_type(ex))

        self.assertEqual((rsrc.UPDATE, rsrc.FAILED), rsrc.state)
        self.m.VerifyAll()

    def test_cinder_volume_extend_detached(self):
        stack_name = 'test_cvolume_extend_det_stack'

        # create script
        self._mock_create_volume(vt_base.FakeVolume('creating'), stack_name)
        # update script
        fv = vt_base.FakeVolume('available',
                                size=1, attachments=[])
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        self.cinder_fc.volumes.extend(fv.id, 2)
        self.cinder_fc.volumes.get(fv.id).AndReturn(
            vt_base.FakeVolume('extending'))
        self.cinder_fc.volumes.get(fv.id).AndReturn(
            vt_base.FakeVolume('extending'))
        self.cinder_fc.volumes.get(fv.id).AndReturn(
            vt_base.FakeVolume('available'))
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)
        rsrc = self.create_volume(self.t, stack, 'volume')

        props = copy.deepcopy(rsrc.properties.data)
        props['size'] = 2
        after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)

        update_task = scheduler.TaskRunner(rsrc.update, after)
        self.assertIsNone(update_task())

        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_cinder_volume_extend_fails_to_start(self):
        stack_name = 'test_cvolume_extend_fail_start_stack'

        # create script
        self._mock_create_volume(vt_base.FakeVolume('creating'), stack_name)
        # update script
        fv = vt_base.FakeVolume('available',
                                size=1, attachments=[])
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        self.cinder_fc.volumes.extend(fv.id, 2).AndRaise(
            cinder_exp.OverLimit(413))
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)
        rsrc = self.create_volume(self.t, stack, 'volume')

        props = copy.deepcopy(rsrc.properties.data)
        props['size'] = 2
        after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)

        update_task = scheduler.TaskRunner(rsrc.update, after)
        ex = self.assertRaises(exception.ResourceFailure, update_task)
        self.assertIn('Over limit', six.text_type(ex))

        self.assertEqual((rsrc.UPDATE, rsrc.FAILED), rsrc.state)
        self.m.VerifyAll()

    def test_cinder_volume_extend_fails_to_complete(self):
        stack_name = 'test_cvolume_extend_fail_compl_stack'

        # create script
        self._mock_create_volume(vt_base.FakeVolume('creating'), stack_name)
        # update script
        fv = vt_base.FakeVolume('available',
                                size=1, attachments=[])
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        self.cinder_fc.volumes.extend(fv.id, 2)
        self.cinder_fc.volumes.get(fv.id).AndReturn(
            vt_base.FakeVolume('extending'))
        self.cinder_fc.volumes.get(fv.id).AndReturn(
            vt_base.FakeVolume('extending'))
        self.cinder_fc.volumes.get(fv.id).AndReturn(
            vt_base.FakeVolume('error_extending'))
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)
        rsrc = self.create_volume(self.t, stack, 'volume')

        props = copy.deepcopy(rsrc.properties.data)
        props['size'] = 2
        after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)

        update_task = scheduler.TaskRunner(rsrc.update, after)
        ex = self.assertRaises(exception.ResourceFailure, update_task)
        self.assertIn("Volume resize failed - Unknown status error_extending",
                      six.text_type(ex))

        self.assertEqual((rsrc.UPDATE, rsrc.FAILED), rsrc.state)
        self.m.VerifyAll()

    def test_cinder_volume_extend_attached(self):
        stack_name = 'test_cvolume_extend_att_stack'
        self._update_if_attached(stack_name)

    def test_cinder_volume_extend_created_from_backup_with_same_size(self):
        stack_name = 'test_cvolume_extend_snapsht_stack'

        # create script
        self.stub_VolumeBackupConstraint_validate()
        fvbr = vt_base.FakeBackupRestore('vol-123')
        cinder.CinderClientPlugin._create().MultipleTimes().AndReturn(
            self.cinder_fc)
        self.m.StubOutWithMock(self.cinder_fc.restores, 'restore')
        self.cinder_fc.restores.restore('backup-123').AndReturn(fvbr)
        self.cinder_fc.volumes.get('vol-123').AndReturn(
            vt_base.FakeVolume('restoring-backup'))
        vol_name = utils.PhysName(stack_name, 'volume')
        self.cinder_fc.volumes.update('vol-123',
                                      description=None,
                                      name=vol_name).AndReturn(None)
        self.cinder_fc.volumes.get('vol-123').AndReturn(
            vt_base.FakeVolume('available'))
        # update script
        fv = vt_base.FakeVolume('available', size=2)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        self.m.ReplayAll()

        self.t['resources']['volume']['properties'] = {
            'availability_zone': 'nova',
            'backup_id': 'backup-123'
        }
        stack = utils.parse_stack(self.t, stack_name=stack_name)
        rsrc = self.create_volume(self.t, stack, 'volume')
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        self.assertEqual('available', fv.status)

        props = copy.deepcopy(rsrc.properties.data)
        props['size'] = 2
        after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)
        update_task = scheduler.TaskRunner(rsrc.update, after)
        self.assertIsNone(update_task())

        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_cinder_volume_retype(self):
        fv = vt_base.FakeVolume('available',
                                size=1, name='my_vol',
                                description='test')
        stack_name = 'test_cvolume_retype'
        new_vol_type = 'new_type'
        self.patchobject(cinder.CinderClientPlugin, '_create',
                         return_value=self.cinder_fc)
        self.patchobject(self.cinder_fc.volumes, 'create', return_value=fv)
        self.patchobject(self.cinder_fc.volumes, 'get', return_value=fv)
        stack = utils.parse_stack(self.t, stack_name=stack_name)
        rsrc = self.create_volume(self.t, stack, 'volume2')

        props = copy.deepcopy(rsrc.properties.data)
        props['volume_type'] = new_vol_type
        after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props)
        self.patchobject(cinder.CinderClientPlugin, 'get_volume_type',
                         return_value=new_vol_type)
        self.patchobject(self.cinder_fc.volumes, 'retype')
        scheduler.TaskRunner(rsrc.update, after)()

        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.assertEqual(1, self.cinder_fc.volumes.retype.call_count)

    def test_cinder_volume_update_name_and_metadata(self):
        # update the name, description and metadata
        fv = vt_base.FakeVolume('creating',
                                size=1, name='my_vol',
                                description='test')
        stack_name = 'test_cvolume_updname_stack'
        update_name = 'update_name'
        meta = {'Key': 'New Value'}
        update_description = 'update_description'
        kwargs = {
            'name': update_name,
            'description': update_description
        }

        fv = self._mock_create_volume(fv, stack_name)
        self.cinder_fc.volumes.get(fv.id).AndReturn(fv)
        self.cinder_fc.volumes.update(fv, **kwargs).AndReturn(None)
        self.cinder_fc.volumes.update_all_metadata(fv, meta).AndReturn(None)
        self.m.ReplayAll()

        stack = utils.parse_stack(self.t, stack_name=stack_name)
        rsrc = self.create_volume(self.t, stack, 'volume')

        props = copy.deepcopy(rsrc.properties.data)
        props['name'] = update_name
        props['description'] = update_description
props['metadata'] = meta after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) scheduler.TaskRunner(rsrc.update, after)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) def test_cinder_volume_update_read_only(self): # update read only access mode fv = vt_base.FakeVolume('update_read_only_access_mode') stack_name = 'test_update_read_only' cinder.CinderClientPlugin._create().AndReturn( self.cinder_fc) self.cinder_fc.volumes.create( size=1, availability_zone='nova', description='test_description', name='test_name', multiattach=False, metadata={u'key': u'value'}).AndReturn(fv) update_readonly_mock = self.patchobject(self.cinder_fc.volumes, 'update_readonly_flag') update_readonly_mock.return_value = None fv_ready = vt_base.FakeVolume('available', id=fv.id, attachments=[]) self.cinder_fc.volumes.get(fv.id).MultipleTimes().AndReturn(fv_ready) self.m.ReplayAll() stack = utils.parse_stack(self.t, stack_name=stack_name) rsrc = self.create_volume(self.t, stack, 'volume') props = copy.deepcopy(rsrc.properties.data) props['read_only'] = True after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) scheduler.TaskRunner(rsrc.update, after)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) update_readonly_mock.assert_called_once_with(fv.id, True) def test_cinder_volume_update_no_need_replace(self): # update the volume size in place; shrinking fails but growing succeeds without replacement fv = vt_base.FakeVolume('creating') stack_name = 'test_update_no_need_replace' cinder.CinderClientPlugin._create().AndReturn( self.cinder_fc) vol_name = utils.PhysName(stack_name, 'volume2') self.cinder_fc.volumes.create( size=2, availability_zone='nova', description=None, name=vol_name, multiattach=False, metadata={} ).AndReturn(fv) fv_ready = vt_base.FakeVolume('available', id=fv.id, size=2, attachments=[]) self.cinder_fc.volumes.get(fv.id).MultipleTimes().AndReturn(fv_ready) self.cinder_fc.volumes.extend(fv.id, 3) self.m.ReplayAll() stack = utils.parse_stack(self.t, stack_name=stack_name) rsrc = self.create_volume(self.t, stack, 'volume2') props = copy.deepcopy(rsrc.properties.data) props['size'] = 1 after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) update_task = scheduler.TaskRunner(rsrc.update, after) ex = self.assertRaises(exception.ResourceFailure, update_task) self.assertEqual((rsrc.UPDATE, rsrc.FAILED), rsrc.state) self.assertIn("NotSupported: resources.volume2: Shrinking volume is " "not supported", six.text_type(ex)) props = copy.deepcopy(rsrc.properties.data) props['size'] = 3 after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) scheduler.TaskRunner(rsrc.update, after)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def _update_if_attached(self, stack_name, update_type='resize'): # create script self.stub_VolumeConstraint_validate() self._mock_create_volume(vt_base.FakeVolume('creating'), stack_name) self._mock_create_server_volume_script(vt_base.FakeVolume('attaching')) # update script attachments = [{'id': 'vol-123', 'device': '/dev/vdc', 'server_id': u'WikiDatabase'}] fv2 = vt_base.FakeVolume('in-use', attachments=attachments, size=1) self.cinder_fc.volumes.get(fv2.id).AndReturn(fv2) # detach script fvd = vt_base.FakeVolume('in-use') self.fc.volumes.get_server_volume(u'WikiDatabase', 'vol-123').AndReturn(fvd) self.cinder_fc.volumes.get(fvd.id).AndReturn(fvd) self.fc.volumes.delete_server_volume('WikiDatabase', 'vol-123') self.cinder_fc.volumes.get(fvd.id).AndReturn( vt_base.FakeVolume('available')) self.fc.volumes.get_server_volume(u'WikiDatabase',
'vol-123').AndReturn(fvd) self.fc.volumes.get_server_volume( u'WikiDatabase', 'vol-123').AndRaise(fakes_nova.fake_exception()) if update_type == 'access_mode': # update access mode script update_readonly_mock = self.patchobject(self.cinder_fc.volumes, 'update_readonly_flag') update_readonly_mock.return_value = None if update_type == 'resize': # resize script self.cinder_fc.volumes.extend(fvd.id, 2) self.cinder_fc.volumes.get(fvd.id).AndReturn( vt_base.FakeVolume('extending')) self.cinder_fc.volumes.get(fvd.id).AndReturn( vt_base.FakeVolume('extending')) self.cinder_fc.volumes.get(fvd.id).AndReturn( vt_base.FakeVolume('available')) # attach script self._mock_create_server_volume_script(vt_base.FakeVolume('attaching'), update=True) self.m.ReplayAll() stack = utils.parse_stack(self.t, stack_name=stack_name) rsrc = self.create_volume(self.t, stack, 'volume') self.create_attachment(self.t, stack, 'attachment') props = copy.deepcopy(rsrc.properties.data) if update_type == 'access_mode': props['read_only'] = True if update_type == 'resize': props['size'] = 2 after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) update_task = scheduler.TaskRunner(rsrc.update, after) self.assertIsNone(update_task()) self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) if update_type == 'access_mode': update_readonly_mock.assert_called_once_with(fvd.id, True) self.m.VerifyAll() def test_cinder_volume_update_read_only_attached(self): stack_name = 'test_cvolume_update_read_only_att_stack' self._update_if_attached(stack_name, update_type='access_mode') def test_cinder_snapshot(self): stack_name = 'test_cvolume_snpsht_stack' cinder.CinderClientPlugin._create().MultipleTimes().AndReturn( self.cinder_fc) self.cinder_fc.volumes.create( size=1, availability_zone=None, description='test_description', name='test_name', multiattach=False, metadata={} ).AndReturn(vt_base.FakeVolume('creating')) fv = vt_base.FakeVolume('available') self.cinder_fc.volumes.get(fv.id).AndReturn(fv) fb = vt_base.FakeBackup('creating') self.m.StubOutWithMock(self.cinder_fc.backups, 'create') self.cinder_fc.backups.create(fv.id, force=True).AndReturn(fb) self.m.StubOutWithMock(self.cinder_fc.backups, 'get') self.cinder_fc.backups.get(fb.id).AndReturn( vt_base.FakeBackup('available')) self.m.ReplayAll() t = template_format.parse(single_cinder_volume_template) stack = utils.parse_stack(t, stack_name=stack_name) rsrc = stack['volume'] self.patchobject(rsrc, '_store_config_default_properties') scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.snapshot)() self.assertEqual((rsrc.SNAPSHOT, rsrc.COMPLETE), rsrc.state) self.assertEqual({'backup_id': 'backup-123'}, resource_data_object.ResourceData.get_all(rsrc)) self.m.VerifyAll() def test_cinder_snapshot_error(self): stack_name = 'test_cvolume_snpsht_err_stack' cinder.CinderClientPlugin._create().MultipleTimes().AndReturn( self.cinder_fc) self.cinder_fc.volumes.create( size=1, availability_zone=None, description='test_description', name='test_name', multiattach=False, metadata={} ).AndReturn(vt_base.FakeVolume('creating')) fv = vt_base.FakeVolume('available') self.cinder_fc.volumes.get(fv.id).AndReturn(fv) fb = vt_base.FakeBackup('creating') self.m.StubOutWithMock(self.cinder_fc.backups, 'create') self.cinder_fc.backups.create(fv.id, force=True).AndReturn(fb) self.m.StubOutWithMock(self.cinder_fc.backups, 'get') fail_reason = 'Could not determine which Swift endpoint to use' self.cinder_fc.backups.get(fb.id).AndReturn( vt_base.FakeBackup('error', fail_reason=fail_reason))
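# The backup mocked above is left in 'error' with a fail_reason, so the snapshot task below should raise ResourceFailure yet still record the backup_id in resource data.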
self.m.ReplayAll() t = template_format.parse(single_cinder_volume_template) stack = utils.parse_stack(t, stack_name=stack_name) rsrc = stack['volume'] self.patchobject(rsrc, '_store_config_default_properties') scheduler.TaskRunner(rsrc.create)() self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.snapshot)) self.assertEqual((rsrc.SNAPSHOT, rsrc.FAILED), rsrc.state) self.assertIn(fail_reason, rsrc.status_reason) self.assertEqual({u'backup_id': u'backup-123'}, resource_data_object.ResourceData.get_all(rsrc)) self.m.VerifyAll() def test_cinder_volume_attachment_update_device(self): stack_name = 'test_cvolume_attach_udev_stack' self._mock_create_volume(vt_base.FakeVolume('creating'), stack_name) self._mock_create_server_volume_script( vt_base.FakeVolume('attaching'), device=u'/dev/vdc') self.stub_VolumeConstraint_validate() # delete script fva = vt_base.FakeVolume('in-use') self.fc.volumes.get_server_volume(u'WikiDatabase', 'vol-123').AndReturn(fva) self.cinder_fc.volumes.get(fva.id).AndReturn(fva) self.fc.volumes.delete_server_volume( 'WikiDatabase', 'vol-123').MultipleTimes().AndReturn(None) self.cinder_fc.volumes.get(fva.id).AndReturn( vt_base.FakeVolume('available')) self.fc.volumes.get_server_volume(u'WikiDatabase', 'vol-123').AndReturn(fva) self.fc.volumes.get_server_volume( u'WikiDatabase', 'vol-123').AndRaise(fakes_nova.fake_exception()) # attach script self._mock_create_server_volume_script(vt_base.FakeVolume('attaching'), device=None, update=True) self.m.ReplayAll() stack = utils.parse_stack(self.t, stack_name=stack_name) self.create_volume(self.t, stack, 'volume') rsrc = self.create_attachment(self.t, stack, 'attachment') props = copy.deepcopy(rsrc.properties.data) props['mountpoint'] = '' props['volume_id'] = 'vol-123' after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) scheduler.TaskRunner(rsrc.update, after)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_cinder_volume_attachment_update_volume(self): stack_name = 'test_cvolume_attach_uvol_stack' self.stub_VolumeConstraint_validate() self._mock_create_volume(vt_base.FakeVolume('creating'), stack_name) fv2 = vt_base.FakeVolume('creating', id='vol-456') vol2_name = utils.PhysName(stack_name, 'volume2') self.cinder_fc.volumes.create( size=2, availability_zone='nova', description=None, name=vol2_name, multiattach=False, metadata={} ).AndReturn(fv2) self.cinder_fc.volumes.get(fv2.id).AndReturn(fv2) fv2 = vt_base.FakeVolume('available', id=fv2.id) self.cinder_fc.volumes.get(fv2.id).AndReturn(fv2) self._mock_create_server_volume_script(vt_base.FakeVolume('attaching')) # delete script fva = vt_base.FakeVolume('in-use') self.fc.volumes.get_server_volume(u'WikiDatabase', 'vol-123').AndReturn(fva) self.cinder_fc.volumes.get(fva.id).AndReturn(fva) self.fc.volumes.delete_server_volume( 'WikiDatabase', 'vol-123').MultipleTimes().AndReturn(None) self.cinder_fc.volumes.get(fva.id).AndReturn( vt_base.FakeVolume('available')) self.fc.volumes.get_server_volume(u'WikiDatabase', 'vol-123').AndReturn(fva) self.fc.volumes.get_server_volume( u'WikiDatabase', 'vol-123').AndRaise(fakes_nova.fake_exception()) # attach script fv2a = vt_base.FakeVolume('attaching', id='vol-456') self._mock_create_server_volume_script(fv2a, volume='vol-456', update=True) self.m.ReplayAll() stack = utils.parse_stack(self.t, stack_name=stack_name) self.create_volume(self.t, stack, 'volume') self.create_volume(self.t, stack, 'volume2') rsrc = self.create_attachment(self.t, stack, 'attachment') 
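# The update below swaps the attachment's volume_id from vol-123 to vol-456; per the mocks above, the old volume is detached before the new one is attached, and resource_id should end up tracking the new volume.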
self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) props = copy.deepcopy(rsrc.properties.data) props['volume_id'] = 'vol-456' after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) scheduler.TaskRunner(rsrc.update, after)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.assertEqual(fv2a.id, rsrc.resource_id) self.m.VerifyAll() def test_cinder_volume_attachment_update_server(self): stack_name = 'test_cvolume_attach_usrv_stack' self._mock_create_volume(vt_base.FakeVolume('creating'), stack_name) self._mock_create_server_volume_script( vt_base.FakeVolume('attaching')) self.stub_VolumeConstraint_validate() # delete script fva = vt_base.FakeVolume('in-use') self.fc.volumes.get_server_volume(u'WikiDatabase', 'vol-123').AndReturn(fva) self.cinder_fc.volumes.get(fva.id).AndReturn(fva) self.fc.volumes.delete_server_volume( 'WikiDatabase', 'vol-123').MultipleTimes().AndReturn(None) self.cinder_fc.volumes.get(fva.id).AndReturn( vt_base.FakeVolume('available')) self.fc.volumes.get_server_volume(u'WikiDatabase', 'vol-123').AndReturn(fva) self.fc.volumes.get_server_volume( u'WikiDatabase', 'vol-123').AndRaise(fakes_nova.fake_exception()) # attach script self._mock_create_server_volume_script(vt_base.FakeVolume('attaching'), server=u'AnotherServer', update=True) self.m.ReplayAll() stack = utils.parse_stack(self.t, stack_name=stack_name) self.create_volume(self.t, stack, 'volume') rsrc = self.create_attachment(self.t, stack, 'attachment') self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) props = copy.deepcopy(rsrc.properties.data) props['instance_uuid'] = 'AnotherServer' props['volume_id'] = 'vol-123' after = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), props) scheduler.TaskRunner(rsrc.update, after)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_attachment_has_not_been_created(self): stack_name = 'test_delete_attachment_has_not_been_created' stack = utils.parse_stack(self.t, stack_name=stack_name) resource_defn = stack.t.resource_definitions(stack) att_rsrc = c_vol.CinderVolumeAttachment( 'test_attachment', resource_defn['attachment'], stack) att_rsrc.state_set(att_rsrc.UPDATE, att_rsrc.COMPLETE) self.assertIsNone(att_rsrc.resource_id) # assert that the nova client is never even instantiated nc = self.patchobject(nova.NovaClientPlugin, '_create') scheduler.TaskRunner(att_rsrc.delete)() self.assertEqual(0, nc.call_count) self.assertEqual((att_rsrc.DELETE, att_rsrc.COMPLETE), att_rsrc.state) def test_cinder_create_with_scheduler_hints(self): fv = vt_base.FakeVolume('creating') cinder.CinderClientPlugin._create().AndReturn(self.cinder_fc) self.cinder_fc.volumes.create( size=1, name='test_name', description=None, availability_zone='nova', scheduler_hints={'hint1': 'good_advice'}, multiattach=False, metadata={} ).AndReturn(fv) self.cinder_fc.volumes.get(fv.id).AndReturn(fv) fv_ready = vt_base.FakeVolume('available', id=fv.id) self.cinder_fc.volumes.get(fv.id).AndReturn(fv_ready) self.m.ReplayAll() stack_name = 'test_cvolume_scheduler_hints_stack' stack = utils.parse_stack(self.t, stack_name=stack_name) self.patchobject(stack['volume3'], '_store_config_default_properties') self.create_volume(self.t, stack, 'volume3') self.m.VerifyAll() def test_cinder_create_with_multiattach(self): fv = vt_base.FakeVolume('creating') cinder.CinderClientPlugin._create().AndReturn(self.cinder_fc) self.cinder_fc.volumes.create( size=1, name='test_name', description=None, availability_zone='nova', multiattach=True,
metadata={}).AndReturn(fv) self.cinder_fc.volumes.get(fv.id).AndReturn(fv) fv_ready = vt_base.FakeVolume('available', id=fv.id) self.cinder_fc.volumes.get(fv.id).AndReturn(fv_ready) self.m.ReplayAll() stack_name = 'test_cvolume_multiattach_stack' stack = utils.parse_stack(self.t, stack_name=stack_name) self.create_volume(self.t, stack, 'volume4') self.m.VerifyAll() def test_cinder_create_with_stack_scheduler_hints(self): fv = vt_base.FakeVolume('creating') sh.cfg.CONF.set_override('stack_scheduler_hints', True) stack_name = 'test_cvolume_stack_scheduler_hints_stack' t = template_format.parse(single_cinder_volume_template) stack = utils.parse_stack(t, stack_name=stack_name) rsrc = stack['volume'] # rsrc.uuid is only available once the resource has been added. stack.add_resource(rsrc) self.assertIsNotNone(rsrc.uuid) cinder.CinderClientPlugin._create().AndReturn(self.cinder_fc) shm = sh.SchedulerHintsMixin self.cinder_fc.volumes.create( size=1, name='test_name', description='test_description', availability_zone=None, metadata={}, multiattach=False, scheduler_hints={shm.HEAT_ROOT_STACK_ID: stack.root_stack_id(), shm.HEAT_STACK_ID: stack.id, shm.HEAT_STACK_NAME: stack.name, shm.HEAT_PATH_IN_STACK: [stack.name], shm.HEAT_RESOURCE_NAME: rsrc.name, shm.HEAT_RESOURCE_UUID: rsrc.uuid}).AndReturn(fv) self.cinder_fc.volumes.get(fv.id).AndReturn(fv) fv_ready = vt_base.FakeVolume('available', id=fv.id) self.cinder_fc.volumes.get(fv.id).AndReturn(fv_ready) self.m.ReplayAll() self.patchobject(rsrc, '_store_config_default_properties') scheduler.TaskRunner(rsrc.create)() # this makes sure the auto increment worked on volume creation self.assertGreater(rsrc.id, 0) self.m.VerifyAll() def _test_cinder_create_invalid_property_combinations( self, stack_name, combinations, err_msg, exc): stack = utils.parse_stack(self.t, stack_name=stack_name) vp = stack.t['Resources']['volume2']['Properties'] vp.pop('size') vp.update(combinations) rsrc = stack['volume2'] ex = self.assertRaises(exc, rsrc.validate) self.assertEqual(err_msg, six.text_type(ex)) def test_cinder_create_with_image_and_imageRef(self): stack_name = 'test_create_with_image_and_imageRef' combinations = {'imageRef': 'image-456', 'image': 'image-123'} err_msg = ("Cannot define the following properties at the same time: " "image, imageRef") self.stub_ImageConstraint_validate() stack = utils.parse_stack(self.t, stack_name=stack_name) vp = stack.t['Resources']['volume2']['Properties'] vp.pop('size') vp.update(combinations) rsrc = stack.get('volume2') ex = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertIn(err_msg, six.text_type(ex)) def test_cinder_create_with_image_and_size(self): stack_name = 'test_create_with_image_and_size' combinations = {'image': 'image-123'} err_msg = ('If neither "backup_id" nor "size" is provided, one and ' 'only one of "source_volid", "snapshot_id" must be ' 'specified, but currently ' 'specified options: [\'image\'].') self.stub_ImageConstraint_validate() self._test_cinder_create_invalid_property_combinations( stack_name, combinations, err_msg, exception.StackValidationFailed) def test_cinder_create_with_size_snapshot_and_image(self): stack_name = 'test_create_with_size_snapshot_and_image' combinations = { 'size': 1, 'image': 'image-123', 'snapshot_id': 'snapshot-123'} self.stub_ImageConstraint_validate() self.stub_SnapshotConstraint_validate() err_msg = ('If "size" is provided, only one of "image", "imageRef", ' '"source_volid", "snapshot_id" can be specified, but ' 'currently specified options: ' 
'[\'snapshot_id\', \'image\'].') self._test_cinder_create_invalid_property_combinations( stack_name, combinations, err_msg, exception.StackValidationFailed) def test_cinder_create_with_size_snapshot_and_imageRef(self): stack_name = 'test_create_with_size_snapshot_and_imageRef' combinations = { 'size': 1, 'imageRef': 'image-123', 'snapshot_id': 'snapshot-123'} self.stub_ImageConstraint_validate() self.stub_SnapshotConstraint_validate() # image appears there because of translation rule err_msg = ('If "size" is provided, only one of "image", "imageRef", ' '"source_volid", "snapshot_id" can be specified, but ' 'currently specified options: ' '[\'snapshot_id\', \'image\'].') self._test_cinder_create_invalid_property_combinations( stack_name, combinations, err_msg, exception.StackValidationFailed) def test_cinder_create_with_size_snapshot_and_sourcevol(self): stack_name = 'test_create_with_size_snapshot_and_sourcevol' combinations = { 'size': 1, 'source_volid': 'volume-123', 'snapshot_id': 'snapshot-123'} self.stub_VolumeConstraint_validate() self.stub_SnapshotConstraint_validate() err_msg = ('If "size" is provided, only one of "image", "imageRef", ' '"source_volid", "snapshot_id" can be specified, but ' 'currently specified options: ' '[\'snapshot_id\', \'source_volid\'].') self._test_cinder_create_invalid_property_combinations( stack_name, combinations, err_msg, exception.StackValidationFailed) def test_cinder_create_with_snapshot_and_source_volume(self): stack_name = 'test_create_with_snapshot_and_source_volume' combinations = { 'source_volid': 'source_volume-123', 'snapshot_id': 'snapshot-123'} err_msg = ('If neither "backup_id" nor "size" is provided, one and ' 'only one of "source_volid", "snapshot_id" must be ' 'specified, but currently ' 'specified options: [\'snapshot_id\', \'source_volid\'].') self.stub_VolumeConstraint_validate() self.stub_SnapshotConstraint_validate() self._test_cinder_create_invalid_property_combinations( stack_name, combinations, err_msg, exception.StackValidationFailed) def test_cinder_create_with_image_and_source_volume(self): stack_name = 'test_create_with_image_and_source_volume' combinations = { 'source_volid': 'source_volume-123', 'image': 'image-123'} err_msg = ('If neither "backup_id" nor "size" is provided, one and ' 'only one of "source_volid", "snapshot_id" must be ' 'specified, but currently ' 'specified options: [\'source_volid\', \'image\'].') self.stub_VolumeConstraint_validate() self.stub_ImageConstraint_validate() self._test_cinder_create_invalid_property_combinations( stack_name, combinations, err_msg, exception.StackValidationFailed) def test_cinder_create_no_size_no_combinations(self): stack_name = 'test_create_no_size_no_options' combinations = {} err_msg = ('If neither "backup_id" nor "size" is provided, one and ' 'only one of "source_volid", "snapshot_id" must be ' 'specified, but currently specified options: [].') self._test_cinder_create_invalid_property_combinations( stack_name, combinations, err_msg, exception.StackValidationFailed) def _test_volume_restore(self, stack_name, final_status='available', stack_final_status=('RESTORE', 'COMPLETE')): # create script cinder.CinderClientPlugin._create().MultipleTimes().AndReturn( self.cinder_fc) self.cinder_fc.volumes.create( size=1, availability_zone=None, description='test_description', name='test_name', multiattach=False, metadata={} ).AndReturn(vt_base.FakeVolume('creating')) fv = vt_base.FakeVolume('available') self.cinder_fc.volumes.get(fv.id).AndReturn(fv) 
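# Full snapshot/restore round trip: the stack snapshot backs the volume up, then restore() replays 'backup-123' into the volume; final_status controls whether the restore finishes COMPLETE or FAILED.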
self.stub_VolumeBackupConstraint_validate() # snapshot script fb = vt_base.FakeBackup('creating') self.m.StubOutWithMock(self.cinder_fc.backups, 'create') self.cinder_fc.backups.create(fv.id, force=True).AndReturn(fb) self.m.StubOutWithMock(self.cinder_fc.backups, 'get') self.cinder_fc.backups.get(fb.id).AndReturn( vt_base.FakeBackup('available')) # restore script fvbr = vt_base.FakeBackupRestore('vol-123') self.m.StubOutWithMock(self.cinder_fc.restores, 'restore') self.cinder_fc.restores.restore('backup-123', 'vol-123').AndReturn(fvbr) fv_restoring = vt_base.FakeVolume( 'restoring-backup', id=fv.id, attachments=[]) self.cinder_fc.volumes.get('vol-123').AndReturn(fv_restoring) fv_final = vt_base.FakeVolume(final_status, id=fv.id) self.cinder_fc.volumes.get('vol-123').AndReturn(fv_final) self.m.ReplayAll() t = template_format.parse(single_cinder_volume_template) stack = utils.parse_stack(t, stack_name=stack_name) self.patchobject(stack['volume'], '_store_config_default_properties') scheduler.TaskRunner(stack.create)() self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state) scheduler.TaskRunner(stack.snapshot, None)() self.assertEqual((stack.SNAPSHOT, stack.COMPLETE), stack.state) data = stack.prepare_abandon() fake_snapshot = collections.namedtuple( 'Snapshot', ('data', 'stack_id'))(data, stack.id) stack.restore(fake_snapshot) self.assertEqual(stack_final_status, stack.state) self.m.VerifyAll() def test_volume_restore_success(self): self._test_volume_restore(stack_name='test_volume_restore_success') def test_volume_restore_failed(self): self._test_volume_restore(stack_name='test_volume_restore_failed', final_status='error', stack_final_status=('RESTORE', 'FAILED')) def test_handle_delete_snapshot_no_backup(self): stack_name = 'test_handle_delete_snapshot_no_backup' mock_vs = { 'resource_data': {} } t = template_format.parse(single_cinder_volume_template) stack = utils.parse_stack(t, stack_name=stack_name) rsrc = c_vol.CinderVolume( 'volume', stack.t.resource_definitions(stack)['volume'], stack) self.assertIsNone(rsrc.handle_delete_snapshot(mock_vs)) def test_validate_deletion_policy(self): cfg.CONF.set_override('backups_enabled', False, group='volumes') stack_name = 'test_volume_validate_deletion_policy' self.t['resources']['volume']['deletion_policy'] = 'Snapshot' stack = utils.parse_stack(self.t, stack_name=stack_name) rsrc = self.get_volume(self.t, stack, 'volume') self.assertRaisesRegex( exception.StackValidationFailed, 'volume backup service is not enabled', rsrc.validate) def test_volume_get_live_state(self): tmpl = """ heat_template_version: 2013-05-23 description: Cinder volume resources: volume: type: OS::Cinder::Volume properties: size: 1 name: test_name description: test_description image: 1234 scheduler_hints: 'consistencygroup_id': 4444 """ t = template_format.parse(tmpl) stack = utils.parse_stack(t, stack_name='get_live_state') rsrc = stack['volume'] rsrc._availability_zone = 'nova' rsrc.resource_id = '1234' vol_resp = { 'attachments': [], 'availability_zone': 'nova', 'snapshot_id': None, 'size': 1, 'metadata': {'test': 'test_value', 'readonly': False}, 'consistencygroup_id': '4444', 'volume_image_metadata': {'image_id': '1234', 'image_name': 'test'}, 'description': None, 'multiattach': False, 'source_volid': None, 'name': 'test-volume-jbdbgdsy3vyg', 'volume_type': 'lvmdriver-1' } vol = mock.MagicMock() vol.to_dict.return_value = vol_resp rsrc.client().volumes = mock.MagicMock() rsrc.client().volumes.get = mock.MagicMock(return_value=vol) rsrc.client().volume_api_version
= 2 rsrc.data = mock.MagicMock(return_value={'volume_type': 'lvmdriver-1'}) reality = rsrc.get_live_state(rsrc.properties) expected = { 'size': 1, 'metadata': {'test': 'test_value'}, 'description': None, 'name': 'test-volume-jbdbgdsy3vyg', 'backup_id': None, 'read_only': False, } self.assertEqual(expected, reality) heat-10.0.2/heat/tests/openstack/barbican/0000775000175000017500000000000013343562672020363 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/barbican/__init__.py0000666000175000017500000000000013343562340022454 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/barbican/test_secret.py0000666000175000017500000001727613343562340023270 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from heat.common import exception from heat.common import template_format from heat.engine.resources.openstack.barbican import secret from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests import utils stack_template = ''' heat_template_version: 2013-05-23 description: Test template resources: secret: type: OS::Barbican::Secret properties: name: foobar-secret ''' class FakeSecret(object): def __init__(self, name): self.name = name def store(self): return self.name class TestSecret(common.HeatTestCase): def setUp(self): super(TestSecret, self).setUp() self.patcher_client = mock.patch.object(secret.Secret, 'client') mock_client = self.patcher_client.start() self.barbican = mock_client.return_value self.stack = utils.parse_stack(template_format.parse(stack_template)) self.stack.validate() resource_defns = self.stack.t.resource_definitions(self.stack) self.res_template = resource_defns['secret'] self.res = self._create_resource('foo', self.res_template, self.stack) def tearDown(self): super(TestSecret, self).tearDown() self.patcher_client.stop() def _create_resource(self, name, snippet, stack): res = secret.Secret(name, snippet, stack) self.barbican.secrets.create.return_value = FakeSecret(name + '_id') scheduler.TaskRunner(res.create)() return res def test_create_secret(self): expected_state = (self.res.CREATE, self.res.COMPLETE) self.assertEqual(expected_state, self.res.state) args = self.barbican.secrets.create.call_args[1] self.assertEqual('foobar-secret', args['name']) self.assertEqual('opaque', args['secret_type']) def test_attributes(self): mock_secret = mock.Mock() mock_secret.status = 'test-status' self.barbican.secrets.get.return_value = mock_secret mock_secret.payload = 'foo' self.assertEqual('test-status', self.res.FnGetAtt('status')) self.assertEqual('foo', self.res.FnGetAtt('decrypted_payload')) def test_attributes_handles_exceptions(self): self.barbican.barbican_client.HTTPClientError = Exception self.barbican.secrets.get.side_effect = Exception('boom') self.assertRaises(self.barbican.barbican_client.HTTPClientError, self.res.FnGetAtt, 'order_ref') def test_create_secret_sets_resource_id(self): self.assertEqual('foo_id', self.res.resource_id) def 
test_create_secret_with_plain_text(self): content_type = 'text/plain' props = { 'name': 'secret', 'payload': 'foobar', 'payload_content_type': content_type, } defn = rsrc_defn.ResourceDefinition('secret', 'OS::Barbican::Secret', props) res = self._create_resource(defn.name, defn, self.stack) args = self.barbican.secrets.create.call_args[1] self.assertEqual('foobar', args[res.PAYLOAD]) self.assertEqual(content_type, args[res.PAYLOAD_CONTENT_TYPE]) def test_create_secret_with_octet_stream(self): content_type = 'application/octet-stream' props = { 'name': 'secret', 'payload': 'foobar', 'payload_content_type': content_type, } defn = rsrc_defn.ResourceDefinition('secret', 'OS::Barbican::Secret', props) res = self._create_resource(defn.name, defn, self.stack) args = self.barbican.secrets.create.call_args[1] self.assertEqual('foobar', args[res.PAYLOAD]) self.assertEqual(content_type, args[res.PAYLOAD_CONTENT_TYPE]) def test_create_secret_other_content_types_not_allowed(self): props = { 'name': 'secret', 'payload_content_type': 'not/allowed', } defn = rsrc_defn.ResourceDefinition('secret', 'OS::Barbican::Secret', props) self.assertRaises(exception.ResourceFailure, self._create_resource, defn.name, defn, self.stack) def test_validate_content_type_without_payload(self): props = { 'name': 'secret', 'payload_content_type': 'text/plain', } defn = rsrc_defn.ResourceDefinition('secret', 'OS::Barbican::Secret', props) res = self._create_resource(defn.name, defn, self.stack) msg = "payload_content_type cannot be specified without payload." self.assertRaisesRegex(exception.ResourcePropertyDependency, msg, res.validate) def test_validate_octet_stream_without_encoding(self): props = { 'name': 'secret', 'payload': 'foobar', 'payload_content_type': 'application/octet-stream', } defn = rsrc_defn.ResourceDefinition('secret', 'OS::Barbican::Secret', props) res = self._create_resource(defn.name, defn, self.stack) msg = ("Property unspecified. For 'application/octet-stream' value of " "'payload_content_type' property, 'payload_content_encoding' " "property must be specified.") self.assertRaisesRegex(exception.StackValidationFailed, msg, res.validate) def test_validate_base64(self): props = { 'name': 'secret', 'payload': 'foobar', 'payload_content_type': 'application/octet-stream', 'payload_content_encoding': 'base64' } defn = rsrc_defn.ResourceDefinition('secret', 'OS::Barbican::Secret', props) res = self._create_resource(defn.name, defn, self.stack) msg = ("Invalid payload for specified 'base64' value of " "'payload_content_encoding' property.") self.assertRaisesRegex(exception.StackValidationFailed, msg, res.validate) def test_validate_encoding_dependency(self): props = { 'name': 'secret', 'payload': 'foobar', 'payload_content_type': 'text/plain', 'payload_content_encoding': 'base64' } defn = rsrc_defn.ResourceDefinition('secret', 'OS::Barbican::Secret', props) res = self._create_resource(defn.name, defn, self.stack) msg = ("payload_content_encoding property should only be specified " "for payload_content_type with value " "application/octet-stream.") self.assertRaisesRegex(exception.ResourcePropertyValueDependency, msg, res.validate) heat-10.0.2/heat/tests/openstack/barbican/test_container.py0000666000175000017500000002022513343562340023751 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six from heat.common import exception from heat.common import template_format from heat.engine.resources.openstack.barbican import container from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests import utils stack_template_generic = ''' heat_template_version: 2015-10-15 description: Test template resources: container: type: OS::Barbican::GenericContainer properties: name: mynewcontainer secrets: - name: secret1 ref: ref1 - name: secret2 ref: ref2 ''' stack_template_certificate = ''' heat_template_version: 2015-10-15 description: Test template resources: container: type: OS::Barbican::CertificateContainer properties: name: mynewcontainer certificate_ref: cref private_key_ref: pkref private_key_passphrase_ref: pkpref intermediates_ref: iref ''' stack_template_rsa = ''' heat_template_version: 2015-10-15 description: Test template resources: container: type: OS::Barbican::RSAContainer properties: name: mynewcontainer private_key_ref: pkref private_key_passphrase_ref: pkpref public_key_ref: pubref ''' def template_by_name(name='OS::Barbican::GenericContainer'): mapping = {'OS::Barbican::GenericContainer': stack_template_generic, 'OS::Barbican::CertificateContainer': stack_template_certificate, 'OS::Barbican::RSAContainer': stack_template_rsa} return mapping[name] class FakeContainer(object): def __init__(self, name): self.name = name def store(self): return self.name class TestContainer(common.HeatTestCase): def setUp(self): super(TestContainer, self).setUp() self.patcher_client = mock.patch.object( container.GenericContainer, 'client') self.patcher_plugin = mock.patch.object( container.GenericContainer, 'client_plugin') mock_client = self.patcher_client.start() self.client = mock_client.return_value mock_plugin = self.patcher_plugin.start() self.client_plugin = mock_plugin.return_value self.stub_SecretConstraint_validate() def tearDown(self): super(TestContainer, self).tearDown() self.patcher_client.stop() self.patcher_plugin.stop() def _create_resource(self, name, snippet=None, stack=None, tmpl_name='OS::Barbican::GenericContainer'): tmpl = template_format.parse(template_by_name(tmpl_name)) if stack is None: self.stack = utils.parse_stack(tmpl) else: self.stack = stack resource_defns = self.stack.t.resource_definitions(self.stack) if snippet is None: snippet = resource_defns['container'] res_class = container.resource_mapping()[tmpl_name] res = res_class(name, snippet, self.stack) res.check_create_complete = mock.Mock(return_value=True) create_generic_container = self.client_plugin.create_generic_container create_generic_container.return_value = FakeContainer('generic') self.client_plugin.create_certificate.return_value = FakeContainer( 'certificate' ) self.client_plugin.create_rsa.return_value = FakeContainer('rsa') scheduler.TaskRunner(res.create)() return res def test_create_generic(self): res = self._create_resource('foo') expected_state = (res.CREATE, res.COMPLETE) self.assertEqual(expected_state, res.state) args = self.client_plugin.create_generic_container.call_args[1] self.assertEqual('mynewcontainer', 
args['name']) self.assertEqual({'secret1': 'ref1', 'secret2': 'ref2'}, args['secret_refs']) self.assertEqual(sorted(['ref1', 'ref2']), sorted(res.get_refs())) def test_create_certificate(self): res = self._create_resource( 'foo', tmpl_name='OS::Barbican::CertificateContainer') expected_state = (res.CREATE, res.COMPLETE) self.assertEqual(expected_state, res.state) args = self.client_plugin.create_certificate.call_args[1] self.assertEqual('mynewcontainer', args['name']) self.assertEqual('cref', args['certificate_ref']) self.assertEqual('pkref', args['private_key_ref']) self.assertEqual('pkpref', args['private_key_passphrase_ref']) self.assertEqual('iref', args['intermediates_ref']) self.assertEqual(sorted(['pkref', 'pkpref', 'iref', 'cref']), sorted(res.get_refs())) def test_create_rsa(self): res = self._create_resource( 'foo', tmpl_name='OS::Barbican::RSAContainer') expected_state = (res.CREATE, res.COMPLETE) self.assertEqual(expected_state, res.state) args = self.client_plugin.create_rsa.call_args[1] self.assertEqual('mynewcontainer', args['name']) self.assertEqual('pkref', args['private_key_ref']) self.assertEqual('pubref', args['public_key_ref']) self.assertEqual('pkpref', args['private_key_passphrase_ref']) self.assertEqual(sorted(['pkref', 'pubref', 'pkpref']), sorted(res.get_refs())) def test_create_failed_on_validation(self): tmpl = template_format.parse(template_by_name()) stack = utils.parse_stack(tmpl) props = tmpl['resources']['container']['properties'] props['secrets'].append({'name': 'secret3', 'ref': 'ref1'}) defn = rsrc_defn.ResourceDefinition( 'failed_container', 'OS::Barbican::GenericContainer', props) res = container.GenericContainer('foo', defn, stack) self.assertRaisesRegex(exception.StackValidationFailed, 'Duplicate refs are not allowed', res.validate) def test_attributes(self): mock_container = mock.Mock() mock_container.status = 'test-status' mock_container.container_ref = 'test-container-ref' mock_container.secret_refs = {'name': 'ref'} mock_container.consumers = [{'name': 'name1', 'ref': 'ref1'}] res = self._create_resource('foo') self.client.containers.get.return_value = mock_container self.assertEqual('test-status', res.FnGetAtt('status')) self.assertEqual('test-container-ref', res.FnGetAtt('container_ref')) self.assertEqual({'name': 'ref'}, res.FnGetAtt('secret_refs')) self.assertEqual([{'name': 'name1', 'ref': 'ref1'}], res.FnGetAtt('consumers')) def test_check_create_complete(self): tmpl = template_format.parse(template_by_name()) stack = utils.parse_stack(tmpl) resource_defns = stack.t.resource_definitions(stack) res_template = resource_defns['container'] res = container.GenericContainer('foo', res_template, stack) mock_active = mock.Mock(status='ACTIVE') self.client.containers.get.return_value = mock_active self.assertTrue(res.check_create_complete('foo')) mock_not_active = mock.Mock(status='PENDING') self.client.containers.get.return_value = mock_not_active self.assertFalse(res.check_create_complete('foo')) mock_not_active = mock.Mock(status='ERROR', error_reason='foo', error_status_code=500) self.client.containers.get.return_value = mock_not_active exc = self.assertRaises(exception.ResourceInError, res.check_create_complete, 'foo') self.assertIn('foo', six.text_type(exc)) self.assertIn('500', six.text_type(exc)) heat-10.0.2/heat/tests/openstack/barbican/test_order.py0000666000175000017500000002264713343562340023114 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in 
compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six from heat.common import exception from heat.common import template_format from heat.engine.resources.openstack.barbican import order from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests import utils stack_template = ''' heat_template_version: 2013-05-23 description: Test template resources: order: type: OS::Barbican::Order properties: name: foobar-order algorithm: aes bit_length: 256 mode: cbc type: key ''' class FakeOrder(object): def __init__(self, name): self.name = name def submit(self): return self.name class TestOrder(common.HeatTestCase): def setUp(self): super(TestOrder, self).setUp() tmpl = template_format.parse(stack_template) self.stack = utils.parse_stack(tmpl) resource_defns = self.stack.t.resource_definitions(self.stack) self.res_template = resource_defns['order'] self.props = tmpl['resources']['order']['properties'] self.patcher_client = mock.patch.object(order.Order, 'client') mock_client = self.patcher_client.start() self.barbican = mock_client.return_value def tearDown(self): super(TestOrder, self).tearDown() self.patcher_client.stop() def _create_resource(self, name, snippet, stack): res = order.Order(name, snippet, stack) res.check_create_complete = mock.Mock(return_value=True) self.barbican.orders.create.return_value = FakeOrder(name) scheduler.TaskRunner(res.create)() return res def test_create_order(self): res = self._create_resource('foo', self.res_template, self.stack) expected_state = (res.CREATE, res.COMPLETE) self.assertEqual(expected_state, res.state) args = self.barbican.orders.create.call_args[1] self.assertEqual('foobar-order', args['name']) self.assertEqual('aes', args['algorithm']) self.assertEqual('cbc', args['mode']) self.assertEqual(256, args['bit_length']) def test_create_order_without_type_fail(self): props = self.props.copy() del props['type'] snippet = self.res_template.freeze(properties=props) self.assertRaisesRegex(exception.ResourceFailure, 'Property type not assigned', self._create_resource, 'foo', snippet, self.stack) def test_validate_non_certificate_order(self): props = self.props.copy() del props['bit_length'] del props['algorithm'] snippet = self.res_template.freeze(properties=props) res = self._create_resource('test', snippet, self.stack) msg = ("Properties algorithm and bit_length are required for " "key type of order.") self.assertRaisesRegex(exception.StackValidationFailed, msg, res.validate) def test_validate_certificate_with_profile_without_ca_id(self): props = self.props.copy() props['profile'] = 'cert' props['type'] = 'certificate' snippet = self.res_template.freeze(properties=props) res = self._create_resource('test', snippet, self.stack) msg = ("profile cannot be specified without ca_id.") self.assertRaisesRegex(exception.ResourcePropertyDependency, msg, res.validate) def test_key_order_validation_fail(self): props = self.props.copy() props['pass_phrase'] = "something" snippet = self.res_template.freeze(properties=props) res = self._create_resource('test', snippet, self.stack) msg = ("Unexpected properties: pass_phrase. 
Only these properties " "are allowed for key type of order: algorithm, " "bit_length, expiration, mode, name, payload_content_type.") self.assertRaisesRegex(exception.StackValidationFailed, msg, res.validate) def test_certificate_validation_fail(self): props = self.props.copy() props['type'] = 'certificate' snippet = self.res_template.freeze(properties=props) res = self._create_resource('test', snippet, self.stack) msg = ("Unexpected properties: algorithm, bit_length, mode. Only " "these properties are allowed for certificate type of order: " "ca_id, name, profile, request_data, request_type, " "source_container_ref, subject_dn.") self.assertRaisesRegex(exception.StackValidationFailed, msg, res.validate) def test_asymmetric_order_validation_fail(self): props = self.props.copy() props['type'] = 'asymmetric' props['subject_dn'] = 'asymmetric' snippet = self.res_template.freeze(properties=props) res = self._create_resource('test', snippet, self.stack) msg = ("Unexpected properties: subject_dn. Only these properties are " "allowed for asymmetric type of order: algorithm, bit_length, " "expiration, mode, name, pass_phrase, payload_content_type") self.assertRaisesRegex(exception.StackValidationFailed, msg, res.validate) def test_attributes(self): mock_order = mock.Mock() mock_order.status = 'test-status' mock_order.order_ref = 'test-order-ref' mock_order.secret_ref = 'test-secret-ref' res = self._create_resource('foo', self.res_template, self.stack) self.barbican.orders.get.return_value = mock_order self.assertEqual('test-order-ref', res.FnGetAtt('order_ref')) self.assertEqual('test-secret-ref', res.FnGetAtt('secret_ref')) def test_attributes_handle_exceptions(self): mock_order = mock.Mock() res = self._create_resource('foo', self.res_template, self.stack) self.barbican.orders.get.return_value = mock_order self.barbican.barbican_client.HTTPClientError = Exception self.barbican.orders.get.side_effect = Exception('boom') self.assertRaises(self.barbican.barbican_client.HTTPClientError, res.FnGetAtt, 'order_ref') def test_container_attributes(self): mock_order = mock.Mock() mock_order.container_ref = 'test-container-ref' mock_container = mock.Mock() mock_container.public_key = mock.Mock(payload='public-key') mock_container.private_key = mock.Mock(payload='private-key') mock_container.certificate = mock.Mock(payload='cert') mock_container.intermediates = mock.Mock(payload='interm') res = self._create_resource('foo', self.res_template, self.stack) self.barbican.orders.get.return_value = mock_order self.barbican.containers.get.return_value = mock_container self.assertEqual('public-key', res.FnGetAtt('public_key')) self.barbican.containers.get.assert_called_once_with( 'test-container-ref') self.assertEqual('private-key', res.FnGetAtt('private_key')) self.assertEqual('cert', res.FnGetAtt('certificate')) self.assertEqual('interm', res.FnGetAtt('intermediates')) def test_create_order_sets_resource_id(self): self.barbican.orders.create.return_value = FakeOrder('foo') res = self._create_resource('foo', self.res_template, self.stack) self.assertEqual('foo', res.resource_id) def test_create_order_with_octet_stream(self): content_type = 'application/octet-stream' self.props['payload_content_type'] = content_type defn = rsrc_defn.ResourceDefinition('foo', 'OS::Barbican::Order', self.props) res = self._create_resource(defn.name, defn, self.stack) args = self.barbican.orders.create.call_args[1] self.assertEqual(content_type, args[res.PAYLOAD_CONTENT_TYPE]) def test_check_create_complete(self): res = 
order.Order('foo', self.res_template, self.stack) mock_active = mock.Mock(status='ACTIVE') self.barbican.orders.get.return_value = mock_active self.assertTrue(res.check_create_complete('foo')) mock_not_active = mock.Mock(status='PENDING') self.barbican.orders.get.return_value = mock_not_active self.assertFalse(res.check_create_complete('foo')) mock_not_active = mock.Mock(status='ERROR', error_reason='foo', error_status_code=500) self.barbican.orders.get.return_value = mock_not_active exc = self.assertRaises(exception.Error, res.check_create_complete, 'foo') self.assertIn('foo', six.text_type(exc)) self.assertIn('500', six.text_type(exc)) heat-10.0.2/heat/tests/openstack/__init__.py0000666000175000017500000000000013343562340020713 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/monasca/0000775000175000017500000000000013343562672020243 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/monasca/test_alarm_definition.py0000666000175000017500000001704313343562340025157 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from heat.engine.clients.os import monasca as client_plugin from heat.engine.resources.openstack.monasca import alarm_definition from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils sample_template = { 'heat_template_version': '2015-10-15', 'resources': { 'test_resource': { 'type': 'OS::Monasca::AlarmDefinition', 'properties': { 'name': 'sample_alarm_id', 'description': 'sample alarm def', 'expression': 'sample expression', 'match_by': ['match_by'], 'severity': 'low', 'ok_actions': ['sample_notification'], 'alarm_actions': ['sample_notification'], 'undetermined_actions': ['sample_notification'], 'actions_enabled': False } } } } class MonascaAlarmDefinitionTest(common.HeatTestCase): def setUp(self): super(MonascaAlarmDefinitionTest, self).setUp() self.ctx = utils.dummy_context() self.stack = stack.Stack( self.ctx, 'test_stack', template.Template(sample_template) ) self.test_resource = self.stack['test_resource'] # Mock client self.test_client = mock.MagicMock() self.test_resource.client = mock.MagicMock( return_value=self.test_client) # Mock client plugin self.test_client_plugin = client_plugin.MonascaClientPlugin(self.ctx) self.test_client_plugin._create = mock.MagicMock( return_value=self.test_client) self.test_resource.client_plugin = mock.MagicMock( return_value=self.test_client_plugin) self.test_client_plugin.get_notification = mock.MagicMock( return_value='sample_notification') def _get_mock_resource(self): value = dict(id='477e8273-60a7-4c41-b683-fdb0bc7cd152') return value def test_resource_handle_create(self): mock_alarm_create = self.test_client.alarm_definitions.create mock_alarm_patch = self.test_client.alarm_definitions.patch mock_resource = self._get_mock_resource() mock_alarm_create.return_value = mock_resource # validate the properties self.assertEqual( 'sample_alarm_id', self.test_resource.properties.get( 
alarm_definition.MonascaAlarmDefinition.NAME)) self.assertEqual( 'sample alarm def', self.test_resource.properties.get( alarm_definition.MonascaAlarmDefinition.DESCRIPTION)) self.assertEqual( 'sample expression', self.test_resource.properties.get( alarm_definition.MonascaAlarmDefinition.EXPRESSION)) self.assertEqual( ['match_by'], self.test_resource.properties.get( alarm_definition.MonascaAlarmDefinition.MATCH_BY)) self.assertEqual( 'low', self.test_resource.properties.get( alarm_definition.MonascaAlarmDefinition.SEVERITY)) self.assertEqual( ['sample_notification'], self.test_resource.properties.get( alarm_definition.MonascaAlarmDefinition.OK_ACTIONS)) self.assertEqual( ['sample_notification'], self.test_resource.properties.get( alarm_definition.MonascaAlarmDefinition.ALARM_ACTIONS)) self.assertEqual( ['sample_notification'], self.test_resource.properties.get( alarm_definition.MonascaAlarmDefinition.UNDETERMINED_ACTIONS)) self.assertEqual( False, self.test_resource.properties.get( alarm_definition.MonascaAlarmDefinition.ACTIONS_ENABLED)) self.test_resource.data_set = mock.Mock() self.test_resource.handle_create() # validate physical resource id self.assertEqual(mock_resource['id'], self.test_resource.resource_id) args = dict( name='sample_alarm_id', description='sample alarm def', expression='sample expression', match_by=['match_by'], severity='low', ok_actions=['sample_notification'], alarm_actions=['sample_notification'], undetermined_actions=['sample_notification'] ) mock_alarm_create.assert_called_once_with(**args) mock_alarm_patch.assert_called_once_with( alarm_id=self.test_resource.resource_id, actions_enabled=False) def test_resource_handle_update(self): mock_alarm_patch = self.test_client.alarm_definitions.patch self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = { alarm_definition.MonascaAlarmDefinition.NAME: 'name-updated', alarm_definition.MonascaAlarmDefinition.DESCRIPTION: 'description-updated', alarm_definition.MonascaAlarmDefinition.ACTIONS_ENABLED: True, alarm_definition.MonascaAlarmDefinition.SEVERITY: 'medium', alarm_definition.MonascaAlarmDefinition.OK_ACTIONS: ['sample_notification'], alarm_definition.MonascaAlarmDefinition.ALARM_ACTIONS: ['sample_notification'], alarm_definition.MonascaAlarmDefinition.UNDETERMINED_ACTIONS: ['sample_notification']} self.test_resource.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) args = dict( alarm_id=self.test_resource.resource_id, name='name-updated', description='description-updated', actions_enabled=True, severity='medium', ok_actions=['sample_notification'], alarm_actions=['sample_notification'], undetermined_actions=['sample_notification'] ) mock_alarm_patch.assert_called_once_with(**args) def test_resource_handle_delete(self): mock_alarm_delete = self.test_client.alarm_definitions.delete self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_alarm_delete.return_value = None self.assertIsNone(self.test_resource.handle_delete()) mock_alarm_delete.assert_called_once_with( alarm_id=self.test_resource.resource_id ) def test_resource_handle_delete_resource_id_is_none(self): self.test_resource.resource_id = None self.assertIsNone(self.test_resource.handle_delete()) def test_resource_handle_delete_not_found(self): self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_alarm_delete = self.test_client.alarm_definitions.delete mock_alarm_delete.side_effect = client_plugin.monasca_exc.NotFound 
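# A NotFound from the monasca client during delete means the alarm definition is already gone, so handle_delete should swallow it and return None.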
self.assertIsNone(self.test_resource.handle_delete()) heat-10.0.2/heat/tests/openstack/monasca/__init__.py0000666000175000017500000000000013343562340022334 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/monasca/test_notification.py0000666000175000017500000002714713343562340024347 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six from heat.common import exception from heat.engine.cfn import functions as cfn_funcs from heat.engine.clients.os import monasca as client_plugin from heat.engine.resources.openstack.monasca import notification from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils sample_template = { 'heat_template_version': '2015-10-15', 'resources': { 'test_resource': { 'type': 'OS::Monasca::Notification', 'properties': { 'name': 'test-notification', 'type': 'webhook', 'address': 'http://localhost:80/', 'period': 60 } } } } class MonascaNotificationTest(common.HeatTestCase): def setUp(self): super(MonascaNotificationTest, self).setUp() self.ctx = utils.dummy_context() self.stack = stack.Stack( self.ctx, 'test_stack', template.Template(sample_template) ) self.test_resource = self.stack['test_resource'] # Mock client self.test_client = mock.MagicMock() self.test_resource.client = mock.MagicMock( return_value=self.test_client) # Mock client plugin self.test_client_plugin = client_plugin.MonascaClientPlugin(self.ctx) self.test_client_plugin._create = mock.MagicMock( return_value=self.test_client) self.test_resource.client_plugin = mock.MagicMock( return_value=self.test_client_plugin) def _get_mock_resource(self): value = dict(id='477e8273-60a7-4c41-b683-fdb0bc7cd152') return value def test_validate_success_no_period(self): self.test_resource.properties.data.pop('period') self.test_resource.validate() def test_validate_invalid_type_with_period(self): self.test_resource.properties.data['type'] = self.test_resource.EMAIL self.assertRaises(exception.StackValidationFailed, self.test_resource.validate) def test_validate_no_scheme_address_for_get_attr(self): self.test_resource.properties.data['type'] = self.test_resource.WEBHOOK self.patchobject(cfn_funcs, 'GetAtt', return_value=None) get_att = cfn_funcs.GetAtt(self.stack, 'Fn::GetAtt', ["ResourceA", "abc"]) self.test_resource.properties.data['address'] = get_att self.assertIsNone(self.test_resource.validate()) def test_validate_no_scheme_address_for_webhook(self): self.test_resource.properties.data['type'] = self.test_resource.WEBHOOK self.test_resource.properties.data['address'] = 'abc@def.com' ex = self.assertRaises(exception.StackValidationFailed, self.test_resource.validate) self.assertEqual('Address "abc@def.com" doesn\'t have ' 'required URL scheme', six.text_type(ex)) def test_validate_no_netloc_address_for_webhook(self): self.test_resource.properties.data['type'] = self.test_resource.WEBHOOK self.test_resource.properties.data['address'] = 'https://' ex = self.assertRaises(exception.StackValidationFailed, 
self.test_resource.validate) self.assertEqual('Address "https://" doesn\'t have ' 'required network location', six.text_type(ex)) def test_validate_prohibited_address_for_webhook(self): self.test_resource.properties.data['type'] = self.test_resource.WEBHOOK self.test_resource.properties.data['address'] = 'ftp://127.0.0.1' ex = self.assertRaises(exception.StackValidationFailed, self.test_resource.validate) self.assertEqual('Address "ftp://127.0.0.1" doesn\'t satisfies ' 'allowed schemes: http, https', six.text_type(ex)) def test_validate_incorrect_address_for_email(self): self.test_resource.properties.data['type'] = self.test_resource.EMAIL self.test_resource.properties.data['address'] = 'abc#def.com' self.test_resource.properties.data.pop('period') ex = self.assertRaises(exception.StackValidationFailed, self.test_resource.validate) self.assertEqual('Address "abc#def.com" doesn\'t satisfies allowed ' 'format for "email" type of "type" property', six.text_type(ex)) def test_validate_invalid_address_parsing(self): self.test_resource.properties.data['type'] = self.test_resource.WEBHOOK self.test_resource.properties.data['address'] = "https://example.com]" ex = self.assertRaises(exception.StackValidationFailed, self.test_resource.validate) self.assertEqual('Address "https://example.com]" should have correct ' 'format required by "webhook" type of "type" ' 'property', six.text_type(ex)) def test_resource_handle_create(self): mock_notification_create = self.test_client.notifications.create mock_resource = self._get_mock_resource() mock_notification_create.return_value = mock_resource # validate the properties self.assertEqual( 'test-notification', self.test_resource.properties.get( notification.MonascaNotification.NAME)) self.assertEqual( 'webhook', self.test_resource.properties.get( notification.MonascaNotification.TYPE)) self.assertEqual( 'http://localhost:80/', self.test_resource.properties.get( notification.MonascaNotification.ADDRESS)) self.assertEqual( 60, self.test_resource.properties.get( notification.MonascaNotification.PERIOD)) self.test_resource.data_set = mock.Mock() self.test_resource.handle_create() args = dict( name='test-notification', type='webhook', address='http://localhost:80/', period=60 ) mock_notification_create.assert_called_once_with(**args) # validate physical resource id self.assertEqual(mock_resource['id'], self.test_resource.resource_id) def test_resource_handle_create_default_period(self): self.test_resource.properties.data.pop('period') mock_notification_create = self.test_client.notifications.create self.test_resource.handle_create() args = dict( name='test-notification', type='webhook', address='http://localhost:80/', period=60 ) mock_notification_create.assert_called_once_with(**args) def test_resource_handle_create_no_period(self): self.test_resource.properties.data.pop('period') self.test_resource.properties.data['type'] = 'email' self.test_resource.properties.data['address'] = 'abc@def.com' mock_notification_create = self.test_client.notifications.create self.test_resource.handle_create() args = dict( name='test-notification', type='email', address='abc@def.com' ) mock_notification_create.assert_called_once_with(**args) def test_resource_handle_update(self): mock_notification_update = self.test_client.notifications.update self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = {notification.MonascaNotification.ADDRESS: 'http://localhost:1234/', notification.MonascaNotification.NAME: 'name-updated', 
notification.MonascaNotification.TYPE: 'webhook', notification.MonascaNotification.PERIOD: 0} self.test_resource.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) args = dict( notification_id=self.test_resource.resource_id, name='name-updated', type='webhook', address='http://localhost:1234/', period=0 ) mock_notification_update.assert_called_once_with(**args) def test_resource_handle_update_default_period(self): mock_notification_update = self.test_client.notifications.update self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' self.test_resource.properties.data.pop('period') prop_diff = {notification.MonascaNotification.ADDRESS: 'http://localhost:1234/', notification.MonascaNotification.NAME: 'name-updated', notification.MonascaNotification.TYPE: 'webhook'} self.test_resource.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) args = dict( notification_id=self.test_resource.resource_id, name='name-updated', type='webhook', address='http://localhost:1234/', period=60 ) mock_notification_update.assert_called_once_with(**args) def test_resource_handle_update_no_period(self): mock_notification_update = self.test_client.notifications.update self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' self.test_resource.properties.data.pop('period') prop_diff = {notification.MonascaNotification.ADDRESS: 'abc@def.com', notification.MonascaNotification.NAME: 'name-updated', notification.MonascaNotification.TYPE: 'email'} self.test_resource.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) args = dict( notification_id=self.test_resource.resource_id, name='name-updated', type='email', address='abc@def.com' ) mock_notification_update.assert_called_once_with(**args) def test_resource_handle_delete(self): mock_notification_delete = self.test_client.notifications.delete self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_notification_delete.return_value = None self.assertIsNone(self.test_resource.handle_delete()) mock_notification_delete.assert_called_once_with( notification_id=self.test_resource.resource_id ) def test_resource_handle_delete_resource_id_is_none(self): self.test_resource.resource_id = None self.assertIsNone(self.test_resource.handle_delete()) def test_resource_handle_delete_not_found(self): self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_notification_delete = self.test_client.notifications.delete mock_notification_delete.side_effect = ( client_plugin.monasca_exc.NotFound) self.assertIsNone(self.test_resource.handle_delete()) heat-10.0.2/heat/tests/openstack/glance/0000775000175000017500000000000013343562672020053 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/glance/__init__.py0000666000175000017500000000000013343562340022144 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/glance/test_image.py0000666000175000017500000004036113343562340022544 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
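# A minimal, self-contained sketch of the client-stubbing pattern used
# throughout this module: replace the Glance client with a MagicMock,
# exercise a handler, then assert on the recorded calls.  The names
# `_fake_client` and `_create_image` are illustrative assumptions, not part
# of Heat or glanceclient; only the `mock` library already used by these
# tests is required.
import mock as _sketch_mock

_fake_client = _sketch_mock.MagicMock()
_fake_client.images.create.return_value = _sketch_mock.Mock(id='image-1234')


def _create_image(client, **props):
    # Stand-in for a resource handler: create the image and return its id.
    return client.images.create(**props).id


assert _create_image(_fake_client, name='cirros') == 'image-1234'
_fake_client.images.create.assert_called_once_with(name='cirros')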
import mock
import six

from glanceclient import exc
from heat.common import exception
from heat.common import template_format
from heat.engine import stack as parser
from heat.engine import template
from heat.tests import common
from heat.tests import utils

image_template = '''
heat_template_version: 2013-05-23
description: This template defines a glance image.
resources:
  my_image:
    type: OS::Glance::Image
    properties:
      name: cirros_image
      id: 41f0e60c-ebb4-4375-a2b4-845ae8b9c995
      disk_format: qcow2
      container_format: bare
      is_public: True
      min_disk: 10
      min_ram: 512
      protected: False
      location: https://launchpad.net/cirros/cirros-0.3.0-x86_64-disk.img
      architecture: test_architecture
      kernel_id: 12345678-1234-1234-1234-123456789012
      os_distro: test_distro
      owner: test_owner
      ramdisk_id: 12345678-1234-1234-1234-123456789012
'''

image_template_validate = '''
heat_template_version: 2013-05-23
description: This template defines a glance image.
resources:
  image:
    type: OS::Glance::Image
    properties:
      name: image_validate
      disk_format: qcow2
      container_format: bare
      location: https://launchpad.net/cirros/cirros-0.3.0-x86_64-disk.img
'''


class GlanceImageTest(common.HeatTestCase):
    def setUp(self):
        super(GlanceImageTest, self).setUp()

        self.ctx = utils.dummy_context()
        tpl = template_format.parse(image_template)
        self.stack = parser.Stack(
            self.ctx, 'glance_image_test_stack',
            template.Template(tpl)
        )

        self.my_image = self.stack['my_image']
        glance = mock.MagicMock()
        self.glanceclient = mock.MagicMock()
        self.my_image.client = glance
        glance.return_value = self.glanceclient
        self.images = self.glanceclient.images
        self.image_tags = self.glanceclient.image_tags

    def _test_validate(self, resource, error_msg):
        # Local name `ex` avoids shadowing the glanceclient `exc` import.
        ex = self.assertRaises(exception.StackValidationFailed,
                               resource.validate)
        self.assertIn(error_msg, six.text_type(ex))

    def test_invalid_min_disk(self):
        # invalid 'min_disk'
        tpl = template_format.parse(image_template_validate)
        stack = parser.Stack(
            self.ctx, 'glance_image_stack_validate',
            template.Template(tpl)
        )
        image = stack['image']
        # Copy the raw properties, inject the bad value, then freeze() and
        # reparse() so that validate() sees the overridden definition.
        props = stack.t.t['resources']['image']['properties'].copy()
        props['min_disk'] = -1
        image.t = image.t.freeze(properties=props)
        image.reparse()
        error_msg = ('Property error: resources.image.properties.min_disk: '
                     '-1 is out of range (min: 0, max: None)')
        self._test_validate(image, error_msg)

    def test_invalid_min_ram(self):
        # invalid 'min_ram'
        tpl = template_format.parse(image_template_validate)
        stack = parser.Stack(
            self.ctx, 'glance_image_stack_validate',
            template.Template(tpl)
        )
        image = stack['image']
        props = stack.t.t['resources']['image']['properties'].copy()
        props['min_ram'] = -1
        image.t = image.t.freeze(properties=props)
        image.reparse()
        error_msg = ('Property error: resources.image.properties.min_ram: '
                     '-1 is out of range (min: 0, max: None)')
        self._test_validate(image, error_msg)

    def test_miss_disk_format(self):
        # missing 'disk_format'
        tpl = template_format.parse(image_template_validate)
        stack = parser.Stack(
            self.ctx, 'glance_image_stack_validate',
            template.Template(tpl)
        )
        image = stack['image']
        props = stack.t.t['resources']['image']['properties'].copy()
        del props['disk_format']
        image.t = image.t.freeze(properties=props)
        image.reparse()
        error_msg = 'Property disk_format not assigned'
        self._test_validate(image, error_msg)

    def test_invalid_disk_format(self):
        # invalid 'disk_format'
        tpl = template_format.parse(image_template_validate)
        stack = parser.Stack(
            self.ctx, 'glance_image_stack_validate',
            template.Template(tpl)
        )
        image = stack['image']
        props = stack.t.t['resources']['image']['properties'].copy()
        props['disk_format'] = 'incorrect_format'
        image.t = image.t.freeze(properties=props)
        image.reparse()
        error_msg = ('Property error: '
                     'resources.image.properties.disk_format: '
                     '"incorrect_format" is not an allowed value '
                     '[ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, iso]')
        self._test_validate(image, error_msg)

    def test_miss_container_format(self):
        # missing 'container_format'
        tpl = template_format.parse(image_template_validate)
        stack = parser.Stack(
            self.ctx, 'glance_image_stack_validate',
            template.Template(tpl)
        )
        image = stack['image']
        props = stack.t.t['resources']['image']['properties'].copy()
        del props['container_format']
        image.t = image.t.freeze(properties=props)
        image.reparse()
        error_msg = 'Property container_format not assigned'
        self._test_validate(image, error_msg)

    def test_invalid_container_format(self):
        # invalid 'container_format'
        tpl = template_format.parse(image_template_validate)
        stack = parser.Stack(
            self.ctx, 'glance_image_stack_validate',
            template.Template(tpl)
        )
        image = stack['image']
        props = stack.t.t['resources']['image']['properties'].copy()
        props['container_format'] = 'incorrect_format'
        image.t = image.t.freeze(properties=props)
        image.reparse()
        error_msg = ('Property error: '
                     'resources.image.properties.container_format: '
                     '"incorrect_format" is not an allowed value '
                     '[ami, ari, aki, bare, ova, ovf]')
        self._test_validate(image, error_msg)

    def test_miss_location(self):
        # missing 'location'
        tpl = template_format.parse(image_template_validate)
        stack = parser.Stack(
            self.ctx, 'glance_image_stack_validate',
            template.Template(tpl)
        )
        image = stack['image']
        props = stack.t.t['resources']['image']['properties'].copy()
        del props['location']
        image.t = image.t.freeze(properties=props)
        image.reparse()
        error_msg = 'Property location not assigned'
        self._test_validate(image, error_msg)

    def test_invalid_disk_container_mix(self):
        tpl = template_format.parse(image_template_validate)
        stack = parser.Stack(
            self.ctx, 'glance_image_stack_validate',
            template.Template(tpl)
        )
        image = stack['image']
        props = stack.t.t['resources']['image']['properties'].copy()
        props['disk_format'] = 'raw'
        props['container_format'] = 'ari'
        image.t = image.t.freeze(properties=props)
        image.reparse()
        error_msg = ("Invalid mix of disk and container formats. When "
                     "setting a disk or container format to one of 'aki', "
                     "'ari', or 'ami', the container and disk formats must "
                     "match.")
        self._test_validate(image, error_msg)

    def test_image_handle_create(self):
        value = mock.MagicMock()
        image_id = '41f0e60c-ebb4-4375-a2b4-845ae8b9c995'
        value.id = image_id
        self.images.create.return_value = value
        self.image_tags.update.return_value = None
        props = self.stack.t.t['resources']['my_image']['properties'].copy()
        props['tags'] = ['tag1']
        self.my_image.t = self.my_image.t.freeze(properties=props)
        self.my_image.reparse()
        self.my_image.handle_create()
        self.assertEqual(image_id, self.my_image.resource_id)
        # Tags are not passed to the create() call itself; they are applied
        # afterwards via image_tags.update().
        self.images.create.assert_called_once_with(
            container_format=u'bare',
            disk_format=u'qcow2',
            id=u'41f0e60c-ebb4-4375-a2b4-845ae8b9c995',
            is_public=True,
            location=u'https://launchpad.net/cirros/'
                     u'cirros-0.3.0-x86_64-disk.img',
            min_disk=10,
            min_ram=512,
            name=u'cirros_image',
            protected=False,
            owner=u'test_owner',
            properties={}
        )
        self.image_tags.update.assert_called_once_with(
            self.my_image.resource_id,
            'tag1')
        calls = [mock.call(self.my_image.resource_id,
                           architecture='test_architecture'),
                 mock.call(self.my_image.resource_id,
                           kernel_id='12345678-1234-1234-1234-123456789012'),
                 mock.call(self.my_image.resource_id,
                           os_distro='test_distro'),
                 mock.call(self.my_image.resource_id,
                           ramdisk_id='12345678-1234-1234-1234-123456789012')]
        self.images.update.assert_has_calls(calls)

    def _handle_update_tags(self, prop_diff):
        self.my_image.handle_update(json_snippet=None,
                                    tmpl_diff=None,
                                    prop_diff=prop_diff)

        self.image_tags.update.assert_called_once_with(
            self.my_image.resource_id,
            'tag2'
        )
        self.image_tags.delete.assert_called_once_with(
            self.my_image.resource_id,
            'tag1'
        )

    def test_image_handle_update(self):
        self.my_image.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        prop_diff = {
            'architecture': 'test_architecture',
            'kernel_id': '12345678-1234-1234-1234-123456789012',
            'os_distro': 'test_distro',
            'owner': 'test_owner',
            'ramdisk_id': '12345678-1234-1234-1234-123456789012',
            'extra_properties': {'key1': 'value1', 'key2': 'value2'}}

        self.my_image.handle_update(json_snippet=None,
                                    tmpl_diff=None,
                                    prop_diff=prop_diff)

        self.images.update.assert_called_once_with(
            self.my_image.resource_id,
            [],
            architecture='test_architecture',
            kernel_id='12345678-1234-1234-1234-123456789012',
            os_distro='test_distro',
            owner='test_owner',
            ramdisk_id='12345678-1234-1234-1234-123456789012',
            key1='value1',
            key2='value2'
        )

    def test_image_handle_update_tags(self):
        self.my_image.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        props = self.stack.t.t['resources']['my_image']['properties'].copy()
        props['tags'] = ['tag1']
        self.my_image.t = self.my_image.t.freeze(properties=props)
        self.my_image.reparse()

        prop_diff = {'tags': ['tag2']}
        self._handle_update_tags(prop_diff)

    def test_image_handle_update_remove_tags(self):
        self.my_image.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        props = self.stack.t.t['resources']['my_image']['properties'].copy()
        props['tags'] = ['tag1']
        self.my_image.t = self.my_image.t.freeze(properties=props)
        self.my_image.reparse()

        prop_diff = {'tags': None}
        self.my_image.handle_update(json_snippet=None,
                                    tmpl_diff=None,
                                    prop_diff=prop_diff)

        self.image_tags.delete.assert_called_once_with(
            self.my_image.resource_id,
            'tag1'
        )

    def test_image_handle_update_tags_delete_not_found(self):
        self.my_image.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'

        props = self.stack.t.t['resources']['my_image']['properties'].copy()
        props['tags'] = ['tag1']
        self.my_image.t = self.my_image.t.freeze(properties=props)
        self.my_image.reparse()

        prop_diff = {'tags': ['tag2']}
        self.image_tags.delete.side_effect = exc.HTTPNotFound()
        self._handle_update_tags(prop_diff)

    def test_image_show_resource_v1(self):
        self.glanceclient.version = 1.0
        self.my_image.resource_id = 'test_image_id'
        image = mock.MagicMock()
        images = mock.MagicMock()
        image.to_dict.return_value = {'image': 'info'}
        images.get.return_value = image
        self.my_image.client().images = images
        self.assertEqual({'image': 'info'}, self.my_image.FnGetAtt('show'))
        images.get.assert_called_once_with('test_image_id')

    def test_image_show_resource_v2(self):
        self.my_image.resource_id = 'test_image_id'
        # A glance v2 image is a warlock.model object that can be handled
        # via dict(); the test uses the simplest analog -- a plain dict.
        image = {"key1": "val1", "key2": "val2"}
        self.images.get.return_value = image
        self.glanceclient.version = 2.0
        self.assertEqual({"key1": "val1", "key2": "val2"},
                         self.my_image.FnGetAtt('show'))
        self.images.get.assert_called_once_with('test_image_id')

    def test_image_get_live_state_v1(self):
        self._test_image_get_live_state(1.0)

    def test_image_get_live_state_v2(self):
        self._test_image_get_live_state(2.0)

    def _test_image_get_live_state(self, version):
        self.glanceclient.version = version
        self.my_image.resource_id = '1234'
        images = mock.MagicMock()
        show_value = {
            'name': 'test',
            'disk_format': 'qcow2',
            'container_format': 'bare',
            'protected': False,
            'is_public': False,
            'min_disk': 0,
            'min_ram': 0,
            'id': '41f0e60c-ebb4-4375-a2b4-845ae8b9c995',
            'tags': [],
            'extra_properties': {},
            'architecture': 'test_architecture',
            'kernel_id': '12345678-1234-1234-1234-123456789012',
            'os_distro': 'test_distro',
            'owner': 'test_owner',
            'ramdisk_id': '12345678-1234-1234-1234-123456789012'
        }
        if version == 1.0:
            image = mock.MagicMock()
            image.to_dict.return_value = show_value
        else:
            image = show_value
        images.get.return_value = image
        self.my_image.client().images = images
        if version == 1.0:
            self.my_image.properties.data.update(
                {self.my_image.LOCATION: 'stub'})

        reality = self.my_image.get_live_state(self.my_image.properties)
        expected = {
            'name': 'test',
            'disk_format': 'qcow2',
            'container_format': 'bare',
            'protected': False,
            'is_public': False,
            'min_disk': 0,
            'min_ram': 0,
            'id': '41f0e60c-ebb4-4375-a2b4-845ae8b9c995',
            'tags': [],
            'extra_properties': {},
            'architecture': 'test_architecture',
            'kernel_id': '12345678-1234-1234-1234-123456789012',
            'os_distro': 'test_distro',
            'owner': 'test_owner',
            'ramdisk_id': '12345678-1234-1234-1234-123456789012'
        }
        if version == 1.0:
            expected.update({'location': 'stub'})

        self.assertEqual(set(expected.keys()), set(reality.keys()))
        for key in expected:
            self.assertEqual(expected[key], reality[key])

    def test_get_live_state_resource_is_deleted(self):
        self.my_image.resource_id = '1234'
        self.my_image.client().images.get.return_value = {'status': 'deleted'}
        self.assertRaises(exception.EntityNotFound,
                          self.my_image.get_live_state,
                          self.my_image.properties)
heat-10.0.2/heat/tests/openstack/aodh/0000775000175000017500000000000013343562672017535 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/aodh/test_alarm.py0000666000175000017500000006506613343562340022251 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import json import mock import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import aodh from heat.engine import resource from heat.engine.resources.openstack.aodh import alarm from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import template as tmpl from heat.tests import common from heat.tests import utils alarm_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Alarm Test", "Parameters" : {}, "Resources" : { "MEMAlarmHigh": { "Type": "OS::Aodh::Alarm", "Properties": { "description": "Scale-up if MEM > 50% for 1 minute", "meter_name": "MemoryUtilization", "statistic": "avg", "period": "60", "evaluation_periods": "1", "threshold": "50", "alarm_actions": [], "matching_metadata": {}, "comparison_operator": "gt", } }, "signal_handler" : { "Type" : "SignalResourceType" } } } ''' alarm_template_with_time_constraints = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Alarm Test", "Parameters" : {}, "Resources" : { "MEMAlarmHigh": { "Type": "OS::Aodh::Alarm", "Properties": { "description": "Scale-up if MEM > 50% for 1 minute", "meter_name": "MemoryUtilization", "statistic": "avg", "period": "60", "evaluation_periods": "1", "threshold": "50", "alarm_actions": [], "matching_metadata": {}, "comparison_operator": "gt", "time_constraints": [{"name": "tc1", "start": "0 23 * * *", "timezone": "Asia/Taipei", "duration": 10800, "description": "a description" }] } }, "signal_handler" : { "Type" : "SignalResourceType" } } } ''' not_string_alarm_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Alarm Test", "Parameters" : {}, "Resources" : { "MEMAlarmHigh": { "Type": "OS::Aodh::Alarm", "Properties": { "description": "Scale-up if MEM > 50% for 1 minute", "meter_name": "MemoryUtilization", "statistic": "avg", "period": 60, "evaluation_periods": 1, "threshold": 50, "alarm_actions": [], "matching_metadata": {}, "comparison_operator": "gt", } }, "signal_handler" : { "Type" : "SignalResourceType" } } } ''' event_alarm_template = ''' { "heat_template_version" : "newton", "description" : "Event alarm test", "parameters" : {}, "resources" : { "test_event_alarm": { "type": "OS::Aodh::EventAlarm", "properties": { "description": "Alarm event when an image is updated", "event_type": "image.update", "query": [{ "field": 'traits.resource_id', "op": "eq", "value": "9a8fec25-1ba6-4170-aa44-5d72f17c07f6"}] } }, "signal_handler" : { "type" : "SignalResourceType" } } } ''' FakeAodhAlarm = {'other_attrs': 'val', 'alarm_id': 'foo'} class AodhAlarmTest(common.HeatTestCase): def setUp(self): super(AodhAlarmTest, self).setUp() self.fa = mock.Mock() def create_stack(self, template=None, time_constraints=None): if template is None: template = alarm_template temp = template_format.parse(template) template = tmpl.Template(temp) ctx = utils.dummy_context() ctx.tenant = 'test_tenant' stack = parser.Stack(ctx, utils.random_name(), template, disable_rollback=True) stack.store() self.patchobject(aodh.AodhClientPlugin, 
'_create').return_value = self.fa al = copy.deepcopy(temp['Resources']['MEMAlarmHigh']['Properties']) al['time_constraints'] = time_constraints if time_constraints else [] self.patchobject(self.fa.alarm, 'create').return_value = FakeAodhAlarm return stack def test_mem_alarm_high_update_no_replace(self): """Tests update updatable properties without replacing the Alarm.""" # short circuit the alarm's references t = template_format.parse(alarm_template) properties = t['Resources']['MEMAlarmHigh']['Properties'] properties['alarm_actions'] = ['signal_handler'] properties['matching_metadata'] = {'a': 'v'} properties['query'] = [dict(field='b', op='eq', value='w')] test_stack = self.create_stack(template=json.dumps(t)) update_mock = self.patchobject(self.fa.alarm, 'update') test_stack.create() rsrc = test_stack['MEMAlarmHigh'] update_props = copy.deepcopy(rsrc.properties.data) update_props.update({ 'comparison_operator': 'lt', 'description': 'fruity', 'evaluation_periods': '2', 'period': '90', 'enabled': True, 'repeat_actions': True, 'statistic': 'max', 'threshold': '39', 'insufficient_data_actions': [], 'alarm_actions': [], 'ok_actions': ['signal_handler'], 'matching_metadata': {'x': 'y'}, 'query': [dict(field='c', op='ne', value='z')] }) snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), update_props) scheduler.TaskRunner(rsrc.update, snippet)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.assertEqual(1, update_mock.call_count) def test_mem_alarm_high_update_replace(self): """Tests resource replacing when changing non-updatable properties.""" t = template_format.parse(alarm_template) properties = t['Resources']['MEMAlarmHigh']['Properties'] properties['alarm_actions'] = ['signal_handler'] properties['matching_metadata'] = {'a': 'v'} test_stack = self.create_stack(template=json.dumps(t)) test_stack.create() rsrc = test_stack['MEMAlarmHigh'] properties = copy.copy(rsrc.properties.data) properties['meter_name'] = 'temp' snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), properties) updater = scheduler.TaskRunner(rsrc.update, snippet) self.assertRaises(resource.UpdateReplace, updater) def test_mem_alarm_suspend_resume(self): """Tests suspending and resuming of the alarm. Make sure that the Alarm resource gets disabled on suspend and re-enabled on resume. 
""" test_stack = self.create_stack() update_mock = self.patchobject(self.fa.alarm, 'update') al_suspend = {'enabled': False} al_resume = {'enabled': True} test_stack.create() rsrc = test_stack['MEMAlarmHigh'] scheduler.TaskRunner(rsrc.suspend)() self.assertEqual((rsrc.SUSPEND, rsrc.COMPLETE), rsrc.state) scheduler.TaskRunner(rsrc.resume)() self.assertEqual((rsrc.RESUME, rsrc.COMPLETE), rsrc.state) update_mock.assert_has_calls(( mock.call('foo', al_suspend), mock.call('foo', al_resume))) def test_mem_alarm_high_correct_int_parameters(self): test_stack = self.create_stack(not_string_alarm_template) test_stack.create() rsrc = test_stack['MEMAlarmHigh'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertIsNone(rsrc.validate()) self.assertIsInstance(rsrc.properties['evaluation_periods'], int) self.assertIsInstance(rsrc.properties['period'], int) self.assertIsInstance(rsrc.properties['threshold'], int) def test_alarm_metadata_prefix(self): t = template_format.parse(alarm_template) properties = t['Resources']['MEMAlarmHigh']['Properties'] # Test for bug/1383521, where meter_name is in NOVA_METERS properties[alarm.AodhAlarm.METER_NAME] = 'memory.usage' properties['matching_metadata'] = {'metadata.user_metadata.groupname': 'foo'} test_stack = self.create_stack(template=json.dumps(t)) rsrc = test_stack['MEMAlarmHigh'] rsrc.properties.data = rsrc.get_alarm_props(properties) self.assertIsNone(rsrc.properties.data.get('matching_metadata')) query = rsrc.properties.data['threshold_rule']['query'] expected_query = [{'field': u'metadata.user_metadata.groupname', 'value': u'foo', 'op': 'eq'}] self.assertEqual(expected_query, query) def test_alarm_metadata_correct_query_key(self): t = template_format.parse(alarm_template) properties = t['Resources']['MEMAlarmHigh']['Properties'] # Test that meter_name is not in NOVA_METERS properties[alarm.AodhAlarm.METER_NAME] = 'memory_util' properties['matching_metadata'] = {'metadata.user_metadata.groupname': 'foo'} self.stack = self.create_stack(template=json.dumps(t)) rsrc = self.stack['MEMAlarmHigh'] rsrc.properties.data = rsrc.get_alarm_props(properties) self.assertIsNone(rsrc.properties.data.get('matching_metadata')) query = rsrc.properties.data['threshold_rule']['query'] expected_query = [{'field': u'metadata.metering.groupname', 'value': u'foo', 'op': 'eq'}] self.assertEqual(expected_query, query) def test_mem_alarm_high_correct_matching_metadata(self): t = template_format.parse(alarm_template) properties = t['Resources']['MEMAlarmHigh']['Properties'] properties['matching_metadata'] = {'fro': 'bar', 'bro': True, 'dro': 1234, 'pro': '{"Mem": {"Ala": {"Hig"}}}', 'tro': [1, 2, 3, 4]} test_stack = self.create_stack(template=json.dumps(t)) test_stack.create() rsrc = test_stack['MEMAlarmHigh'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.properties.data = rsrc.get_alarm_props(properties) self.assertIsNone(rsrc.properties.data.get('matching_metadata')) for key in rsrc.properties.data['threshold_rule']['query']: self.assertIsInstance(key['value'], six.text_type) def test_no_matching_metadata(self): """Make sure that we can pass in an empty matching_metadata.""" t = template_format.parse(alarm_template) properties = t['Resources']['MEMAlarmHigh']['Properties'] properties['alarm_actions'] = ['signal_handler'] del properties['matching_metadata'] test_stack = self.create_stack(template=json.dumps(t)) test_stack.create() rsrc = test_stack['MEMAlarmHigh'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) 
self.assertIsNone(rsrc.validate()) def test_mem_alarm_high_not_correct_string_parameters(self): orig_snippet = template_format.parse(not_string_alarm_template) for p in ('period', 'evaluation_periods'): snippet = copy.deepcopy(orig_snippet) snippet['Resources']['MEMAlarmHigh']['Properties'][p] = '60a' stack = utils.parse_stack(snippet) resource_defns = stack.t.resource_definitions(stack) rsrc = alarm.AodhAlarm( 'MEMAlarmHigh', resource_defns['MEMAlarmHigh'], stack) error = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertEqual( "Property error: Resources.MEMAlarmHigh.Properties.%s: " "Value '60a' is not an integer" % p, six.text_type(error)) def test_mem_alarm_high_not_integer_parameters(self): orig_snippet = template_format.parse(not_string_alarm_template) for p in ('period', 'evaluation_periods'): snippet = copy.deepcopy(orig_snippet) snippet['Resources']['MEMAlarmHigh']['Properties'][p] = [60] stack = utils.parse_stack(snippet) resource_defns = stack.t.resource_definitions(stack) rsrc = alarm.AodhAlarm( 'MEMAlarmHigh', resource_defns['MEMAlarmHigh'], stack) # python 3.4.3 returns another error message # so try to handle this by regexp msg = ("Property error: Resources.MEMAlarmHigh.Properties.%s: " r"int\(\) argument must be a string" "(, a bytes-like object)?" " or a number, not 'list'" % p) self.assertRaisesRegex(exception.StackValidationFailed, msg, rsrc.validate) def test_mem_alarm_high_check_not_required_parameters(self): snippet = template_format.parse(not_string_alarm_template) snippet['Resources']['MEMAlarmHigh']['Properties'].pop('meter_name') stack = utils.parse_stack(snippet) resource_defns = stack.t.resource_definitions(stack) rsrc = alarm.AodhAlarm( 'MEMAlarmHigh', resource_defns['MEMAlarmHigh'], stack) error = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertEqual( "Property error: Resources.MEMAlarmHigh.Properties: " "Property meter_name not assigned", six.text_type(error)) for p in ('period', 'evaluation_periods', 'statistic', 'comparison_operator'): snippet = template_format.parse(not_string_alarm_template) snippet['Resources']['MEMAlarmHigh']['Properties'].pop(p) stack = utils.parse_stack(snippet) resource_defns = stack.t.resource_definitions(stack) rsrc = alarm.AodhAlarm( 'MEMAlarmHigh', resource_defns['MEMAlarmHigh'], stack) self.assertIsNone(rsrc.validate()) def _prepare_resource(self, for_check=True): snippet = template_format.parse(not_string_alarm_template) self.stack = utils.parse_stack(snippet) res = self.stack['MEMAlarmHigh'] if for_check: res.state_set(res.CREATE, res.COMPLETE) res.client = mock.Mock() mock_alarm = mock.Mock(enabled=True, state='ok') res.client().alarm.get.return_value = mock_alarm return res def test_check(self): res = self._prepare_resource() scheduler.TaskRunner(res.check)() self.assertEqual((res.CHECK, res.COMPLETE), res.state) def test_check_alarm_failure(self): res = self._prepare_resource() res.client().alarm.get.side_effect = Exception('Boom') self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.check)) self.assertEqual((res.CHECK, res.FAILED), res.state) self.assertIn('Boom', res.status_reason) def test_show_resource(self): res = self._prepare_resource(for_check=False) res.client().alarm.create.return_value = FakeAodhAlarm res.client().alarm.get.return_value = FakeAodhAlarm scheduler.TaskRunner(res.create)() self.assertEqual(FakeAodhAlarm, res.FnGetAtt('show')) def test_alarm_with_wrong_start_time(self): t = 
template_format.parse(alarm_template_with_time_constraints) time_constraints = [{"name": "tc1", "start": "0 23 * * *", "timezone": "Asia/Taipei", "duration": 10800, "description": "a description" }] test_stack = self.create_stack(template=json.dumps(t), time_constraints=time_constraints) test_stack.create() self.assertEqual((test_stack.CREATE, test_stack.COMPLETE), test_stack.state) rsrc = test_stack['MEMAlarmHigh'] properties = copy.copy(rsrc.properties.data) start_time = '* * * * * 100' properties.update({ 'comparison_operator': 'lt', 'description': 'fruity', 'evaluation_periods': '2', 'period': '90', 'enabled': True, 'repeat_actions': True, 'statistic': 'max', 'threshold': '39', 'insufficient_data_actions': [], 'alarm_actions': [], 'ok_actions': ['signal_handler'], 'matching_metadata': {'x': 'y'}, 'query': [dict(field='c', op='ne', value='z')], 'time_constraints': [{"name": "tc1", "start": start_time, "timezone": "Asia/Taipei", "duration": 10800, "description": "a description" }] }) snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), properties) error = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(rsrc.update, snippet) ) self.assertEqual( "StackValidationFailed: resources.MEMAlarmHigh: Property error: " "Properties.time_constraints[0].start: Error " "validating value '%s': Invalid CRON expression: " "[%s] is not acceptable, out of range" % (start_time, start_time), error.message) def test_alarm_with_wrong_timezone(self): t = template_format.parse(alarm_template_with_time_constraints) time_constraints = [{"name": "tc1", "start": "0 23 * * *", "timezone": "Asia/Taipei", "duration": 10800, "description": "a description" }] test_stack = self.create_stack(template=json.dumps(t), time_constraints=time_constraints) test_stack.create() self.assertEqual((test_stack.CREATE, test_stack.COMPLETE), test_stack.state) rsrc = test_stack['MEMAlarmHigh'] properties = copy.copy(rsrc.properties.data) timezone = 'wrongtimezone' properties.update({ 'comparison_operator': 'lt', 'description': 'fruity', 'evaluation_periods': '2', 'period': '90', 'enabled': True, 'repeat_actions': True, 'statistic': 'max', 'threshold': '39', 'insufficient_data_actions': [], 'alarm_actions': [], 'ok_actions': ['signal_handler'], 'matching_metadata': {'x': 'y'}, 'query': [dict(field='c', op='ne', value='z')], 'time_constraints': [{"name": "tc1", "start": "0 23 * * *", "timezone": timezone, "duration": 10800, "description": "a description" }] }) snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), properties) error = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(rsrc.update, snippet) ) self.assertEqual( "StackValidationFailed: resources.MEMAlarmHigh: Property error: " "Properties.time_constraints[0].timezone: Error " "validating value '%s': Invalid timezone: '%s'" % (timezone, timezone), error.message) def test_alarm_live_state(self): snippet = template_format.parse(alarm_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns['MEMAlarmHigh'] self.client = mock.Mock() self.patchobject(alarm.AodhAlarm, 'client', return_value=self.client) alarm_res = alarm.AodhAlarm('alarm', self.rsrc_defn, self.stack) alarm_res.create() value = { 'description': 'Scale-up if MEM > 50% for 1 minute', 'alarm_actions': [], 'time_constraints': [], 'threshold_rule': { 'meter_name': 'MemoryUtilization', 'statistic': 'avg', 'period': '60', 'evaluation_periods': '1', 'threshold': '50', 'matching_metadata': 
{}, 'comparison_operator': 'gt', 'query': [{'field': 'c', 'op': 'ne', 'value': 'z'}] } } self.client.alarm.get.return_value = value expected_data = { 'description': 'Scale-up if MEM > 50% for 1 minute', 'alarm_actions': [], 'statistic': 'avg', 'period': '60', 'evaluation_periods': '1', 'threshold': '50', 'matching_metadata': {}, 'comparison_operator': 'gt', 'query': [{'field': 'c', 'op': 'ne', 'value': 'z'}], 'repeat_actions': None, 'ok_actions': None, 'insufficient_data_actions': None, 'severity': None, 'enabled': None } reality = alarm_res.get_live_state(alarm_res.properties) self.assertEqual(expected_data, reality) def test_queue_actions(self): stack = self.create_stack() alarm = stack['MEMAlarmHigh'] props = { 'alarm_actions': ['http://example.com/test'], 'alarm_queues': ['alarm_queue'], 'ok_actions': [], 'ok_queues': ['ok_queue_1', 'ok_queue_2'], 'insufficient_data_actions': ['http://example.com/test2', 'http://example.com/test3'], 'insufficient_data_queues': ['nodata_queue'], } expected = { 'alarm_actions': ['http://example.com/test', 'trust+zaqar://?queue_name=alarm_queue'], 'ok_actions': ['trust+zaqar://?queue_name=ok_queue_1', 'trust+zaqar://?queue_name=ok_queue_2'], 'insufficient_data_actions': [ 'http://example.com/test2', 'http://example.com/test3', 'trust+zaqar://?queue_name=nodata_queue' ], } self.assertEqual(expected, alarm.actions_to_urls(props)) class EventAlarmTest(common.HeatTestCase): def setUp(self): super(EventAlarmTest, self).setUp() self.fa = mock.Mock() def create_stack(self, template=None): if template is None: template = event_alarm_template temp = template_format.parse(template) template = tmpl.Template(temp) ctx = utils.dummy_context() ctx.tenant = 'test_tenant' stack = parser.Stack(ctx, utils.random_name(), template, disable_rollback=True) stack.store() self.patchobject(aodh.AodhClientPlugin, '_create').return_value = self.fa self.patchobject(self.fa.alarm, 'create').return_value = FakeAodhAlarm return stack def test_update(self): test_stack = self.create_stack() update_mock = self.patchobject(self.fa.alarm, 'update') test_stack.create() rsrc = test_stack['test_event_alarm'] update_props = copy.deepcopy(rsrc.properties.data) update_props.update({ 'enabled': True, 'insufficient_data_actions': [], 'alarm_actions': [], 'ok_actions': ['signal_handler'], 'query': [dict( field='traits.resource_id', op='eq', value='c7405b0f-139f-4fbd-9348-f32dfc5674ac')] }) snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), update_props) scheduler.TaskRunner(rsrc.update, snippet)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.assertEqual(1, update_mock.call_count) def test_delete(self): test_stack = self.create_stack() rsrc = test_stack['test_event_alarm'] self.patchobject(aodh.AodhClientPlugin, 'client', return_value=self.fa) self.patchobject(self.fa.alarm, 'delete') rsrc.resource_id = '12345' self.assertEqual('12345', rsrc.handle_delete()) self.assertEqual(1, self.fa.alarm.delete.call_count) def _prepare_resource(self, for_check=True): snippet = template_format.parse(event_alarm_template) self.stack = utils.parse_stack(snippet) res = self.stack['test_event_alarm'] if for_check: res.state_set(res.CREATE, res.COMPLETE) res.client = mock.Mock() mock_alarm = mock.Mock(enabled=True, state='ok') res.client().alarm.get.return_value = mock_alarm return res def test_check(self): res = self._prepare_resource() scheduler.TaskRunner(res.check)() self.assertEqual((res.CHECK, res.COMPLETE), res.state) def test_check_alarm_failure(self): res = 
self._prepare_resource() res.client().alarm.get.side_effect = Exception('Boom') self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.check)) self.assertEqual((res.CHECK, res.FAILED), res.state) self.assertIn('Boom', res.status_reason) def test_show_resource(self): res = self._prepare_resource(for_check=False) res.client().alarm.create.return_value = FakeAodhAlarm res.client().alarm.get.return_value = FakeAodhAlarm scheduler.TaskRunner(res.create)() self.assertEqual(FakeAodhAlarm, res.FnGetAtt('show')) heat-10.0.2/heat/tests/openstack/aodh/test_composite_alarm.py0000666000175000017500000001145513343562340024324 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import aodh from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import template as tmpl from heat.tests import common from heat.tests import utils alarm_template = ''' heat_template_version: 2016-10-14 resources: cps_alarm: type: OS::Aodh::CompositeAlarm properties: description: test the composite alarm alarm_actions: [] severity: moderate composite_rule: operator: or rules: - type: threshold meter_name: cpu_util evaluation_periods: 1 period: 60 statistic: avg threshold: 0.8 comparison_operator: ge exclude_outliers: false - and: - type: threshold meter_name: disk.usage evaluation_periods: 1 period: 60 statistic: avg threshold: 0.8 comparison_operator: ge exclude_outliers: false - type: threshold meter_name: mem_util evaluation_periods: 1 period: 60 statistic: avg threshold: 0.8 comparison_operator: ge exclude_outliers: false ''' FakeCompositeAlarm = {'other_attrs': 'val', 'alarm_id': 'foo'} class CompositeAlarmTest(common.HeatTestCase): def setUp(self): super(CompositeAlarmTest, self).setUp() self.fa = mock.Mock() def create_stack(self, template=None): temp = template_format.parse(template) template = tmpl.Template(temp) ctx = utils.dummy_context() ctx.tenant = 'test_tenant' stack = parser.Stack(ctx, utils.random_name(), template, disable_rollback=True) stack.store() self.patchobject(aodh.AodhClientPlugin, '_create').return_value = self.fa self.patchobject(self.fa.alarm, 'create').return_value = FakeCompositeAlarm return stack def test_handle_create(self): """Test create the composite alarm.""" test_stack = self.create_stack(template=alarm_template) test_stack.create() rsrc = test_stack['cps_alarm'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) def test_handle_update(self): """Test update the composite alarm.""" test_stack = self.create_stack(template=alarm_template) update_mock = self.patchobject(self.fa.alarm, 'update') test_stack.create() rsrc = test_stack['cps_alarm'] self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) after_props = copy.deepcopy(rsrc.properties.data) update_props = { 'enabled': False, 'repeat_actions': False, 'insufficient_data_actions': [], 'ok_actions': 
['signal_handler'] } after_props.update(update_props) snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), after_props) scheduler.TaskRunner(rsrc.update, snippet)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.assertEqual(1, update_mock.call_count) def test_validate(self): test_stack = self.create_stack(template=alarm_template) props = test_stack.t['resources']['cps_alarm']['Properties'] props['composite_rule']['operator'] = 'invalid' res = test_stack['cps_alarm'] error_msg = '"invalid" is not an allowed value [or, and]' exc = self.assertRaises(exception.StackValidationFailed, res.validate) self.assertIn(error_msg, six.text_type(exc)) def test_show_resource(self): test_stack = self.create_stack(template=alarm_template) res = test_stack['cps_alarm'] res.client().alarm.create.return_value = FakeCompositeAlarm res.client().alarm.get.return_value = FakeCompositeAlarm scheduler.TaskRunner(res.create)() self.assertEqual(FakeCompositeAlarm, res.FnGetAtt('show')) heat-10.0.2/heat/tests/openstack/aodh/test_gnocchi_alarm.py0000666000175000017500000005314513343562351023740 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import mox from heat.common import exception from heat.common import template_format from heat.engine.clients.os import aodh from heat.engine.resources.openstack.aodh.gnocchi import ( alarm as gnocchi) from heat.engine import scheduler from heat.tests import common from heat.tests import utils gnocchi_resources_alarm_template = ''' heat_template_version: 2013-05-23 description: Gnocchi Resources Alarm Test resources: GnoResAlarm: type: OS::Aodh::GnocchiResourcesAlarm properties: description: Do stuff with gnocchi metric: cpu_util aggregation_method: mean granularity: 60 evaluation_periods: 1 threshold: 50 alarm_actions: [] resource_type: instance resource_id: 5a517ceb-b068-4aca-9eb9-3e4eb9b90d9a comparison_operator: gt ''' gnocchi_aggregation_by_metrics_alarm_template = ''' heat_template_version: 2013-05-23 description: Gnocchi Aggregation by Metrics Alarm Test resources: GnoAggregationByMetricsAlarm: type: OS::Aodh::GnocchiAggregationByMetricsAlarm properties: description: Do stuff with gnocchi metrics metrics: ["911fce07-e0d7-4210-8c8c-4a9d811fcabc", "2543d435-fe93-4443-9351-fb0156930f94"] aggregation_method: mean granularity: 60 evaluation_periods: 1 threshold: 50 alarm_actions: [] comparison_operator: gt ''' gnocchi_aggregation_by_resources_alarm_template = ''' heat_template_version: 2013-05-23 description: Gnocchi Aggregation by Resources Alarm Test resources: GnoAggregationByResourcesAlarm: type: OS::Aodh::GnocchiAggregationByResourcesAlarm properties: description: Do stuff with gnocchi aggregation by resource aggregation_method: mean granularity: 60 evaluation_periods: 1 threshold: 50 metric: cpu_util alarm_actions: [] resource_type: instance query: '{"=": {"server_group": "my_autoscaling_group"}}' comparison_operator: gt ''' FakeAodhAlarm = {'other_attrs': 'val', 'alarm_id': 'foo'} class 
GnocchiResourcesAlarmTest(common.HeatTestCase): def setUp(self): super(GnocchiResourcesAlarmTest, self).setUp() self.fc = mock.Mock() def create_alarm(self): self.patchobject(aodh.AodhClientPlugin, '_create').return_value = self.fc self.m.StubOutWithMock(self.fc.alarm, 'create') self.fc.alarm.create( { 'alarm_actions': [], 'description': u'Do stuff with gnocchi', 'enabled': True, 'insufficient_data_actions': [], 'ok_actions': [], 'name': mox.IgnoreArg(), 'type': 'gnocchi_resources_threshold', 'repeat_actions': True, 'gnocchi_resources_threshold_rule': { "metric": "cpu_util", "aggregation_method": "mean", "granularity": 60, "evaluation_periods": 1, "threshold": 50, "resource_type": "instance", "resource_id": "5a517ceb-b068-4aca-9eb9-3e4eb9b90d9a", "comparison_operator": "gt", }, 'time_constraints': [], 'severity': 'low' }).AndReturn(FakeAodhAlarm) self.tmpl = template_format.parse(gnocchi_resources_alarm_template) self.stack = utils.parse_stack(self.tmpl) resource_defns = self.stack.t.resource_definitions(self.stack) return gnocchi.AodhGnocchiResourcesAlarm( 'GnoResAlarm', resource_defns['GnoResAlarm'], self.stack) def test_update(self): rsrc = self.create_alarm() self.m.StubOutWithMock(self.fc.alarm, 'update') self.fc.alarm.update( 'foo', { 'alarm_actions': [], 'description': u'Do stuff with gnocchi', 'enabled': True, 'insufficient_data_actions': [], 'ok_actions': [], 'repeat_actions': True, 'gnocchi_resources_threshold_rule': { "metric": "cpu_util", "aggregation_method": "mean", "granularity": 60, "evaluation_periods": 1, "threshold": 50, "resource_type": "instance", "resource_id": "d3d6c642-921e-4fc2-9c5f-15d9a5afb598", "comparison_operator": "gt", }, 'time_constraints': [], 'severity': 'low' } ) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = self.tmpl['resources']['GnoResAlarm']['properties'] props['resource_id'] = 'd3d6c642-921e-4fc2-9c5f-15d9a5afb598' update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def _prepare_resource(self, for_check=True): snippet = template_format.parse(gnocchi_resources_alarm_template) self.stack = utils.parse_stack(snippet) res = self.stack['GnoResAlarm'] if for_check: res.state_set(res.CREATE, res.COMPLETE) res.client = mock.Mock() mock_alarm = mock.Mock(enabled=True, state='ok') res.client().alarm.get.return_value = mock_alarm return res def test_create(self): rsrc = self.create_alarm() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertEqual('foo', rsrc.resource_id) self.m.VerifyAll() def test_suspend(self): rsrc = self.create_alarm() self.m.StubOutWithMock(self.fc.alarm, 'update') self.fc.alarm.update('foo', {'enabled': False}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.suspend)() self.assertEqual((rsrc.SUSPEND, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_resume(self): rsrc = self.create_alarm() self.m.StubOutWithMock(self.fc.alarm, 'update') self.fc.alarm.update('foo', {'enabled': True}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() rsrc.state_set(rsrc.SUSPEND, rsrc.COMPLETE) scheduler.TaskRunner(rsrc.resume)() self.assertEqual((rsrc.RESUME, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_check(self): res = self._prepare_resource() scheduler.TaskRunner(res.check)() self.assertEqual((res.CHECK, res.COMPLETE), res.state) def test_check_failure(self): res = 
self._prepare_resource() res.client().alarm.get.side_effect = Exception('Boom') self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.check)) self.assertEqual((res.CHECK, res.FAILED), res.state) self.assertIn('Boom', res.status_reason) def test_show_resource(self): res = self._prepare_resource(for_check=False) res.client().alarm.create.return_value = FakeAodhAlarm res.client().alarm.get.return_value = FakeAodhAlarm scheduler.TaskRunner(res.create)() self.assertEqual(FakeAodhAlarm, res.FnGetAtt('show')) def test_gnocchi_alarm_live_state(self): snippet = template_format.parse(gnocchi_resources_alarm_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns['GnoResAlarm'] self.client = mock.Mock() self.patchobject(gnocchi.AodhGnocchiResourcesAlarm, 'client', return_value=self.client) alarm_res = gnocchi.AodhGnocchiResourcesAlarm( 'alarm', self.rsrc_defn, self.stack) alarm_res.create() value = { 'description': 'Do stuff with gnocchi', 'alarm_actions': [], 'time_constraints': [], 'gnocchi_resources_threshold_rule': { 'resource_id': '5a517ceb-b068-4aca-9eb9-3e4eb9b90d9a', 'metric': 'cpu_util', 'evaluation_periods': 1, 'aggregation_method': 'mean', 'granularity': 60, 'threshold': 50, 'comparison_operator': 'gt', 'resource_type': 'instance' } } self.client.alarm.get.return_value = value expected_data = { 'description': 'Do stuff with gnocchi', 'alarm_actions': [], 'resource_id': '5a517ceb-b068-4aca-9eb9-3e4eb9b90d9a', 'metric': 'cpu_util', 'evaluation_periods': 1, 'aggregation_method': 'mean', 'granularity': 60, 'threshold': 50, 'comparison_operator': 'gt', 'resource_type': 'instance', 'insufficient_data_actions': None, 'enabled': None, 'ok_actions': None, 'repeat_actions': None, 'severity': None } reality = alarm_res.get_live_state(alarm_res.properties) self.assertEqual(expected_data, reality) class GnocchiAggregationByMetricsAlarmTest(GnocchiResourcesAlarmTest): def create_alarm(self): self.patchobject(aodh.AodhClientPlugin, '_create').return_value = self.fc self.m.StubOutWithMock(self.fc.alarm, 'create') self.fc.alarm.create( { 'alarm_actions': [], 'description': u'Do stuff with gnocchi metrics', 'enabled': True, 'insufficient_data_actions': [], 'ok_actions': [], 'name': mox.IgnoreArg(), 'type': 'gnocchi_aggregation_by_metrics_threshold', 'repeat_actions': True, 'gnocchi_aggregation_by_metrics_threshold_rule': { "aggregation_method": "mean", "granularity": 60, "evaluation_periods": 1, "threshold": 50, "comparison_operator": "gt", "metrics": ["911fce07-e0d7-4210-8c8c-4a9d811fcabc", "2543d435-fe93-4443-9351-fb0156930f94"], }, 'time_constraints': [], 'severity': 'low'} ).AndReturn(FakeAodhAlarm) self.tmpl = template_format.parse( gnocchi_aggregation_by_metrics_alarm_template) self.stack = utils.parse_stack(self.tmpl) resource_defns = self.stack.t.resource_definitions(self.stack) return gnocchi.AodhGnocchiAggregationByMetricsAlarm( 'GnoAggregationByMetricsAlarm', resource_defns['GnoAggregationByMetricsAlarm'], self.stack) def test_update(self): rsrc = self.create_alarm() self.m.StubOutWithMock(self.fc.alarm, 'update') self.fc.alarm.update( 'foo', { 'alarm_actions': [], 'description': u'Do stuff with gnocchi metrics', 'enabled': True, 'insufficient_data_actions': [], 'ok_actions': [], 'repeat_actions': True, 'gnocchi_aggregation_by_metrics_threshold_rule': { "aggregation_method": "mean", "granularity": 60, "evaluation_periods": 1, "threshold": 50, "comparison_operator": "gt", 'metrics': 
['d3d6c642-921e-4fc2-9c5f-15d9a5afb598', 'bc60f822-18a0-4a0c-94e7-94c554b00901'] }, 'time_constraints': [], 'severity': 'low' } ) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() snippet = self.tmpl['resources']['GnoAggregationByMetricsAlarm'] props = snippet['properties'].copy() props['metrics'] = ['d3d6c642-921e-4fc2-9c5f-15d9a5afb598', 'bc60f822-18a0-4a0c-94e7-94c554b00901'] update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def _prepare_resource(self, for_check=True): snippet = template_format.parse( gnocchi_aggregation_by_metrics_alarm_template) self.stack = utils.parse_stack(snippet) res = self.stack['GnoAggregationByMetricsAlarm'] if for_check: res.state_set(res.CREATE, res.COMPLETE) res.client = mock.Mock() mock_alarm = mock.Mock(enabled=True, state='ok') res.client().alarm.get.return_value = mock_alarm return res def test_show_resource(self): res = self._prepare_resource(for_check=False) res.client().alarm.create.return_value = FakeAodhAlarm res.client().alarm.get.return_value = FakeAodhAlarm scheduler.TaskRunner(res.create)() self.assertEqual(FakeAodhAlarm, res.FnGetAtt('show')) def test_gnocchi_alarm_aggr_by_metrics_live_state(self): snippet = template_format.parse( gnocchi_aggregation_by_metrics_alarm_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns['GnoAggregationByMetricsAlarm'] self.client = mock.Mock() self.patchobject(gnocchi.AodhGnocchiAggregationByMetricsAlarm, 'client', return_value=self.client) alarm_res = gnocchi.AodhGnocchiAggregationByMetricsAlarm( 'alarm', self.rsrc_defn, self.stack) alarm_res.create() value = { 'description': 'Do stuff with gnocchi metrics', 'alarm_actions': [], 'time_constraints': [], 'gnocchi_aggregation_by_metrics_threshold_rule': { 'metrics': ['911fce07-e0d7-4210-8c8c-4a9d811fcabc', '2543d435-fe93-4443-9351-fb0156930f94'], 'evaluation_periods': 1, 'aggregation_method': 'mean', 'granularity': 60, 'threshold': 50, 'comparison_operator': 'gt' } } self.client.alarm.get.return_value = value expected_data = { 'description': 'Do stuff with gnocchi metrics', 'alarm_actions': [], 'metrics': ['911fce07-e0d7-4210-8c8c-4a9d811fcabc', '2543d435-fe93-4443-9351-fb0156930f94'], 'evaluation_periods': 1, 'aggregation_method': 'mean', 'granularity': 60, 'threshold': 50, 'comparison_operator': 'gt', 'insufficient_data_actions': None, 'enabled': None, 'ok_actions': None, 'repeat_actions': None, 'severity': None } reality = alarm_res.get_live_state(alarm_res.properties) self.assertEqual(expected_data, reality) class GnocchiAggregationByResourcesAlarmTest(GnocchiResourcesAlarmTest): def create_alarm(self): self.patchobject(aodh.AodhClientPlugin, '_create').return_value = self.fc self.m.StubOutWithMock(self.fc.alarm, 'create') self.fc.alarm.create( { 'alarm_actions': [], 'description': 'Do stuff with gnocchi aggregation by resource', 'enabled': True, 'insufficient_data_actions': [], 'ok_actions': [], 'name': mox.IgnoreArg(), 'type': 'gnocchi_aggregation_by_resources_threshold', 'repeat_actions': True, 'gnocchi_aggregation_by_resources_threshold_rule': { "aggregation_method": "mean", "granularity": 60, "evaluation_periods": 1, "threshold": 50, "comparison_operator": "gt", "metric": "cpu_util", "resource_type": "instance", "query": '{"=": {"server_group": "my_autoscaling_group"}}', }, 'time_constraints': [], 'severity': 'low'} 
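# mox replays this recorded create() call during the test and returns
# FakeAodhAlarm, whose alarm_id ('foo') becomes the resource_id checked by
# the inherited test_create.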
).AndReturn(FakeAodhAlarm) self.tmpl = template_format.parse( gnocchi_aggregation_by_resources_alarm_template) self.stack = utils.parse_stack(self.tmpl) resource_defns = self.stack.t.resource_definitions(self.stack) return gnocchi.AodhGnocchiAggregationByResourcesAlarm( 'GnoAggregationByResourcesAlarm', resource_defns['GnoAggregationByResourcesAlarm'], self.stack) def test_update(self): rsrc = self.create_alarm() self.m.StubOutWithMock(self.fc.alarm, 'update') self.fc.alarm.update( 'foo', { 'alarm_actions': [], 'description': 'Do stuff with gnocchi aggregation by resource', 'enabled': True, 'insufficient_data_actions': [], 'ok_actions': [], 'repeat_actions': True, 'gnocchi_aggregation_by_resources_threshold_rule': { "aggregation_method": "mean", "granularity": 60, "evaluation_periods": 1, "threshold": 50, "comparison_operator": "gt", "metric": "cpu_util", "resource_type": "instance", "query": '{"=": {"server_group": "my_new_group"}}', }, 'time_constraints': [], 'severity': 'low' } ) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() snippet = self.tmpl['resources']['GnoAggregationByResourcesAlarm'] props = snippet['properties'].copy() props['query'] = '{"=": {"server_group": "my_new_group"}}' update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def _prepare_resource(self, for_check=True): snippet = template_format.parse( gnocchi_aggregation_by_resources_alarm_template) self.stack = utils.parse_stack(snippet) res = self.stack['GnoAggregationByResourcesAlarm'] if for_check: res.state_set(res.CREATE, res.COMPLETE) res.client = mock.Mock() mock_alarm = mock.Mock(enabled=True, state='ok') res.client().alarm.get.return_value = mock_alarm return res def test_show_resource(self): res = self._prepare_resource(for_check=False) res.client().alarm.create.return_value = FakeAodhAlarm res.client().alarm.get.return_value = FakeAodhAlarm scheduler.TaskRunner(res.create)() self.assertEqual(FakeAodhAlarm, res.FnGetAtt('show')) def test_gnocchi_alarm_aggr_by_resources_live_state(self): snippet = template_format.parse( gnocchi_aggregation_by_resources_alarm_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns['GnoAggregationByResourcesAlarm'] self.client = mock.Mock() self.patchobject(gnocchi.AodhGnocchiAggregationByResourcesAlarm, 'client', return_value=self.client) alarm_res = gnocchi.AodhGnocchiAggregationByResourcesAlarm( 'alarm', self.rsrc_defn, self.stack) alarm_res.create() value = { 'description': 'Do stuff with gnocchi aggregation by resource', 'alarm_actions': [], 'time_constraints': [], 'gnocchi_aggregation_by_resources_threshold_rule': { 'metric': 'cpu_util', 'resource_type': 'instance', 'query': "{'=': {'server_group': 'my_autoscaling_group'}}", 'evaluation_periods': 1, 'aggregation_method': 'mean', 'granularity': 60, 'threshold': 50, 'comparison_operator': 'gt' } } self.client.alarm.get.return_value = value expected_data = { 'description': 'Do stuff with gnocchi aggregation by resource', 'alarm_actions': [], 'metric': 'cpu_util', 'resource_type': 'instance', 'query': "{'=': {'server_group': 'my_autoscaling_group'}}", 'evaluation_periods': 1, 'aggregation_method': 'mean', 'granularity': 60, 'threshold': 50, 'comparison_operator': 'gt', 'insufficient_data_actions': None, 'enabled': None, 'ok_actions': None, 'repeat_actions': None, 'severity': None } reality = 
alarm_res.get_live_state(alarm_res.properties) self.assertEqual(expected_data, reality) heat-10.0.2/heat/tests/openstack/aodh/__init__.py0000666000175000017500000000000013343562340021626 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/trove/0000775000175000017500000000000013343562672017761 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/trove/test_instance.py0000666000175000017500000010567413343562340023205 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid import mock from oslo_config import cfg import six from troveclient import exceptions as troveexc from troveclient.v1 import users from heat.common import exception from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine.clients.os import trove from heat.engine import resource from heat.engine.resources.openstack.trove import instance as dbinstance from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import template as tmpl from heat.tests import common from heat.tests import utils db_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "MySQL instance running on openstack DBaaS cloud", "Resources" : { "MySqlCloudDB": { "Type": "OS::Trove::Instance", "Properties" : { "name" : "test", "flavor" : "1GB", "size" : 30, "users" : [{"name": "testuser", "password": "pass", "databases": ["validdb"]}], "databases" : [{"name": "validdb"}], "datastore_type": "SomeDStype", "datastore_version": "MariaDB-5.5" } } } } ''' db_template_with_nics = ''' heat_template_version: 2013-05-23 description: MySQL instance running on openstack DBaaS cloud resources: MySqlCloudDB: type: OS::Trove::Instance properties: name: test flavor: 1GB size: 30 networks: - port: someportname fixed_ip: 1.2.3.4 ''' db_template_with_replication = ''' heat_template_version: 2013-05-23 description: MySQL instance running on openstack DBaaS cloud resources: MySqlCloudDB: type: OS::Trove::Instance properties: name: test flavor: 1GB size: 30 replica_of: 0e642916-dd64-43b3-933f-ff34fff69a7f replica_count: 2 ''' class FakeDBInstance(object): def __init__(self): self.id = 12345 self.hostname = "testhost" self.links = [ {"href": "https://adga23dd432a.rackspacecloud.com/132345245", "rel": "self"}] self.resource_id = 12345 self.status = 'ACTIVE' def delete(self): pass def to_dict(self): pass class FakeFlavor(object): def __init__(self, id, name): self.id = id self.name = name class FakeVersion(object): def __init__(self, name="MariaDB-5.5"): self.name = name class InstanceTest(common.HeatTestCase): def setUp(self): super(InstanceTest, self).setUp() self.fc = mock.MagicMock() self.nova = mock.Mock() self.client = mock.Mock() self.patchobject(trove.TroveClientPlugin, '_create', return_value=self.client) self.stub_TroveFlavorConstraint_validate() self.patchobject(resource.Resource, 'is_using_neutron', return_value=True) self.flavor_resolve = self.patchobject(trove.TroveClientPlugin, 
'find_flavor_by_name_or_id', return_value='1') self.fake_instance = FakeDBInstance() self.client.instances.create.return_value = self.fake_instance self.client.instances.get.return_value = self.fake_instance def _setup_test_instance(self, name, t, rsrc_name='MySqlCloudDB'): stack_name = '%s_stack' % name template = tmpl.Template(t) self.stack = parser.Stack(utils.dummy_context(), stack_name, template, stack_id=str(uuid.uuid4())) rsrc = self.stack[rsrc_name] rsrc.resource_id = '12345' return rsrc def _stubout_validate(self, instance, neutron=None, mock_net_constraint=False, with_port=True): if mock_net_constraint: self.stub_NetworkConstraint_validate() self.client.datastore_versions.list.return_value = [FakeVersion()] if neutron is not None: instance.is_using_neutron = mock.Mock(return_value=bool(neutron)) if with_port: self.stub_PortConstraint_validate() def test_instance_create(self): t = template_format.parse(db_template) instance = self._setup_test_instance('dbinstance_create', t) scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) self.assertEqual('instances', instance.entity) def test_create_failed(self): t = template_format.parse(db_template) osdb_res = self._setup_test_instance('dbinstance_create', t) trove_mock = mock.Mock() self.patchobject(osdb_res, 'client', return_value=trove_mock) # test for bad statuses mock_input = mock.Mock() mock_input.status = 'ERROR' trove_mock.instances.get.return_value = mock_input error_string = ('Went to status ERROR due to "The last operation for ' 'the database instance failed due to an error."') exc = self.assertRaises(exception.ResourceInError, osdb_res.check_create_complete, mock_input) self.assertIn(error_string, six.text_type(exc)) mock_input = mock.Mock() mock_input.status = 'FAILED' trove_mock.instances.get.return_value = mock_input error_string = ('Went to status FAILED due to "The database instance ' 'was created, but heat failed to set up the ' 'datastore. If a database instance is in the FAILED ' 'state, it should be deleted and a new one should ' 'be created."') exc = self.assertRaises(exception.ResourceInError, osdb_res.check_create_complete, mock_input) self.assertIn(error_string, six.text_type(exc)) # test if error string is not defined osdb_res.TROVE_STATUS_REASON = {} mock_input = mock.Mock() mock_input.status = 'ERROR' error_string = ('Went to status ERROR due to "Unknown"') trove_mock.instances.get.return_value = mock_input exc = self.assertRaises(exception.ResourceInError, osdb_res.check_create_complete, mock_input) self.assertIn(error_string, six.text_type(exc)) def _create_failed_bad_status(self, status, error_message): t = template_format.parse(db_template) bad_instance = mock.Mock() bad_instance.status = status self.client.instances.get.return_value = bad_instance instance = self._setup_test_instance('test_bad_statuses', t) ex = self.assertRaises(exception.ResourceInError, instance.check_create_complete, self.fake_instance.id) self.assertIn(error_message, six.text_type(ex)) def test_create_failed_status_error(self): self._create_failed_bad_status( 'ERROR', 'Went to status ERROR due to "The last operation for ' 'the database instance failed due to an error."') def test_create_failed_status_failed(self): self._create_failed_bad_status( 'FAILED', 'Went to status FAILED due to "The database instance ' 'was created, but heat failed to set up the datastore. 
' 'If a database instance is in the FAILED state, it ' 'should be deleted and a new one should be created."') def test_instance_restore_point(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties']['restore_point'] = "1234" instance = self._setup_test_instance('dbinstance_create', t) self.client.flavors.get.side_effect = [troveexc.NotFound()] self.client.flavors.find.return_value = FakeFlavor(1, '1GB') scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) users = [{"name": "testuser", "password": "pass", "host": "%", "databases": [{"name": "validdb"}]}] databases = [{"collate": "utf8_general_ci", "character_set": "utf8", "name": "validdb"}] self.client.instances.create.assert_called_once_with( 'test', '1', volume={'size': 30}, databases=databases, users=users, restorePoint={"backupRef": "1234"}, availability_zone=None, datastore="SomeDStype", datastore_version="MariaDB-5.5", nics=[], replica_of=None, replica_count=None) def test_instance_create_overlimit(self): t = template_format.parse(db_template) instance = self._setup_test_instance('dbinstance_create', t) # Simulate an OverLimit exception self.client.instances.get.side_effect = [ troveexc.RequestEntityTooLarge(), self.fake_instance] scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) def test_instance_create_fails(self): cfg.CONF.set_override('action_retry_limit', 0) t = template_format.parse(db_template) instance = self._setup_test_instance('dbinstance_create', t) self.fake_instance.status = 'ERROR' self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(instance.create)) # return previous status self.fake_instance.status = 'ACTIVE' def _get_db_instance(self): t = template_format.parse(db_template) res = self._setup_test_instance('trove_check', t) res.state_set(res.CREATE, res.COMPLETE) res.flavor = 'Foo Flavor' res.volume = 'Foo Volume' res.datastore_type = 'Foo Type' res.datastore_version = 'Foo Version' return res def test_instance_check(self): res = self._get_db_instance() scheduler.TaskRunner(res.check)() self.assertEqual((res.CHECK, res.COMPLETE), res.state) def test_instance_check_not_active(self): res = self._get_db_instance() self.fake_instance.status = 'FOOBAR' exc = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.check)) self.assertIn('FOOBAR', six.text_type(exc)) self.assertEqual((res.CHECK, res.FAILED), res.state) # return previous status self.fake_instance.status = 'ACTIVE' def test_instance_delete(self): t = template_format.parse(db_template) instance = self._setup_test_instance('dbinstance_del', t) self.client.instances.get.side_effect = [self.fake_instance, troveexc.NotFound(404)] scheduler.TaskRunner(instance.create)() scheduler.TaskRunner(instance.delete)() def test_instance_delete_overlimit(self): t = template_format.parse(db_template) instance = self._setup_test_instance('dbinstance_del', t) # Simulate an OverLimit exception self.client.instances.get.side_effect = [ troveexc.RequestEntityTooLarge(), self.fake_instance, troveexc.NotFound(404)] scheduler.TaskRunner(instance.create)() scheduler.TaskRunner(instance.delete)() def test_instance_delete_resource_none(self): t = template_format.parse(db_template) instance = self._setup_test_instance('dbinstance_del', t) scheduler.TaskRunner(instance.create)() instance.resource_id = None scheduler.TaskRunner(instance.delete)() self.assertIsNone(instance.resource_id) def 
test_instance_resource_not_found(self): t = template_format.parse(db_template) instance = self._setup_test_instance('dbinstance_del', t) self.client.instances.get.side_effect = [self.fake_instance, troveexc.NotFound(404)] scheduler.TaskRunner(instance.create)() scheduler.TaskRunner(instance.delete)() def test_instance_attributes(self): fake_instance = FakeDBInstance() self.client.instances.create.return_value = fake_instance self.client.instances.get.return_value = fake_instance t = template_format.parse(db_template) instance = self._setup_test_instance('attr_test', t) self.assertEqual("testhost", instance.FnGetAtt('hostname')) self.assertEqual("https://adga23dd432a.rackspacecloud.com/132345245", instance.FnGetAtt('href')) def test_instance_validation_success(self): t = template_format.parse(db_template) instance = self._setup_test_instance('dbinstance_test', t) self._stubout_validate(instance) self.assertIsNone(instance.validate()) def test_instance_validation_invalid_db(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties']['databases'] = [ {"name": "onedb"}] t['Resources']['MySqlCloudDB']['Properties']['users'] = [ {"name": "testuser", "password": "pass", "databases": ["invaliddb"]}] instance = self._setup_test_instance('dbinstance_test', t) self._stubout_validate(instance) ex = self.assertRaises(exception.StackValidationFailed, instance.validate) self.assertEqual("Database ['invaliddb'] specified for user does not " "exist in databases for resource MySqlCloudDB.", six.text_type(ex)) def test_instance_validation_db_name_hyphens(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties']['databases'] = [ {"name": "-foo-bar-"}] t['Resources']['MySqlCloudDB']['Properties']['users'] = [ {"name": "testuser", "password": "pass", "databases": ["-foo-bar-"]}] instance = self._setup_test_instance('dbinstance_test', t) self._stubout_validate(instance) self.assertIsNone(instance.validate()) def test_instance_validation_users_none(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties']['users'] = [] instance = self._setup_test_instance('dbinstance_test', t) self._stubout_validate(instance) self.assertIsNone(instance.validate()) def test_instance_validation_databases_none(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties']['databases'] = [] t['Resources']['MySqlCloudDB']['Properties']['users'] = [ {"name": "testuser", "password": "pass", "databases": ["invaliddb"]}] instance = self._setup_test_instance('dbinstance_test', t) self._stubout_validate(instance) ex = self.assertRaises(exception.StackValidationFailed, instance.validate) self.assertEqual('Databases property is required if users property ' 'is provided for resource MySqlCloudDB.', six.text_type(ex)) def test_instance_validation_user_no_db(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties']['databases'] = [ {"name": "validdb"}] t['Resources']['MySqlCloudDB']['Properties']['users'] = [ {"name": "testuser", "password": "pass", "databases": []}] instance = self._setup_test_instance('dbinstance_test', t) ex = self.assertRaises(exception.StackValidationFailed, instance.validate) self.assertEqual('Property error: ' 'Resources.MySqlCloudDB.Properties.' 
'users[0].databases: length (0) is out of range ' '(min: 1, max: None)', six.text_type(ex)) def test_instance_validation_no_datastore_yes_version(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties'].pop('datastore_type') instance = self._setup_test_instance('dbinstance_test', t) ex = self.assertRaises(exception.StackValidationFailed, instance.validate) exp_msg = "Not allowed - datastore_version without datastore_type." self.assertEqual(exp_msg, six.text_type(ex)) def test_instance_validation_no_ds_version(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties'][ 'datastore_type'] = 'mysql' t['Resources']['MySqlCloudDB']['Properties'].pop('datastore_version') instance = self._setup_test_instance('dbinstance_test', t) self._stubout_validate(instance) self.assertIsNone(instance.validate()) def test_instance_validation_wrong_dsversion(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties'][ 'datastore_type'] = 'mysql' t['Resources']['MySqlCloudDB']['Properties'][ 'datastore_version'] = 'SomeVersion' instance = self._setup_test_instance('dbinstance_test', t) self._stubout_validate(instance) ex = self.assertRaises(exception.StackValidationFailed, instance.validate) expected_msg = ("Datastore version SomeVersion for datastore type " "mysql is not valid. " "Allowed versions are MariaDB-5.5.") self.assertEqual(expected_msg, six.text_type(ex)) def test_instance_validation_implicit_version(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties'][ 'datastore_type'] = 'mysql' t['Resources']['MySqlCloudDB']['Properties'].pop('datastore_version') instance = self._setup_test_instance('dbinstance_test', t) self.client.datastore_versions.list.return_value = [ FakeVersion(), FakeVersion('MariaDB-5.0')] self.assertIsNone(instance.validate()) def test_instance_validation_net_with_port_fail(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties']['networks'] = [ { "port": "someportuuid", "network": "somenetuuid" }] instance = self._setup_test_instance('dbinstance_test', t) self._stubout_validate(instance, neutron=True, mock_net_constraint=True) ex = self.assertRaises( exception.StackValidationFailed, instance.validate) self.assertEqual('Either network or port must be provided.', six.text_type(ex)) def test_instance_validation_no_net_no_port_fail(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties']['networks'] = [ { "fixed_ip": "1.2.3.4" }] instance = self._setup_test_instance('dbinstance_test', t) self._stubout_validate(instance, neutron=True, with_port=False) ex = self.assertRaises( exception.StackValidationFailed, instance.validate) self.assertEqual('Either network or port must be provided.', six.text_type(ex)) def test_instance_validation_nic_port_on_novanet_fails(self): t = template_format.parse(db_template) t['Resources']['MySqlCloudDB']['Properties']['networks'] = [ { "port": "someportuuid", }] instance = self._setup_test_instance('dbinstance_test', t) self._stubout_validate(instance, neutron=False) ex = self.assertRaises( exception.StackValidationFailed, instance.validate) self.assertEqual('Can not use port property on Nova-network.', six.text_type(ex)) def test_instance_create_with_port(self): t = template_format.parse(db_template_with_nics) instance = self._setup_test_instance('dbinstance_test', t) self.patchobject(neutron.NeutronClientPlugin, 'find_resourceid_by_name_or_id', 
return_value='someportid') self.stub_PortConstraint_validate() scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) self.client.instances.create.assert_called_once_with( 'test', '1', volume={'size': 30}, databases=[], users=[], restorePoint=None, availability_zone=None, datastore=None, datastore_version=None, nics=[{'port-id': 'someportid', 'v4-fixed-ip': '1.2.3.4'}], replica_of=None, replica_count=None) def test_instance_create_with_net_id(self): net_id = '034aa4d5-0f36-4127-8481-5caa5bfc9403' t = template_format.parse(db_template_with_nics) t['resources']['MySqlCloudDB']['properties']['networks'] = [ {'network': net_id}] instance = self._setup_test_instance('dbinstance_test', t) self.stub_NetworkConstraint_validate() self.patchobject(neutron.NeutronClientPlugin, 'find_resourceid_by_name_or_id', return_value=net_id) scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) self.client.instances.create.assert_called_once_with( 'test', '1', volume={'size': 30}, databases=[], users=[], restorePoint=None, availability_zone=None, datastore=None, datastore_version=None, nics=[{'net-id': net_id}], replica_of=None, replica_count=None) def test_instance_create_with_replication(self): t = template_format.parse(db_template_with_replication) instance = self._setup_test_instance('dbinstance_test', t) scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) self.client.instances.create.assert_called_once_with( 'test', '1', volume={'size': 30}, databases=[], users=[], restorePoint=None, availability_zone=None, datastore=None, datastore_version=None, nics=[], replica_of="0e642916-dd64-43b3-933f-ff34fff69a7f", replica_count=2) def test_instance_get_live_state(self): self.fake_instance.to_dict = mock.Mock(return_value={ 'name': 'test_instance', 'flavor': {'id': '1'}, 'volume': {'size': 30} }) fake_db1 = mock.Mock() fake_db1.name = 'validdb' fake_db2 = mock.Mock() fake_db2.name = 'secondvaliddb' self.client.databases.list.return_value = [fake_db1, fake_db2] expected = { 'flavor': '1', 'name': 'test_instance', 'size': 30, 'databases': [{'name': 'validdb', 'character_set': 'utf8', 'collate': 'utf8_general_ci'}, {'name': 'secondvaliddb'}] } t = template_format.parse(db_template) instance = self._setup_test_instance('get_live_state_test', t) reality = instance.get_live_state(instance.properties) self.assertEqual(expected, reality) @mock.patch.object(resource.Resource, "client_plugin") @mock.patch.object(resource.Resource, "client") class InstanceUpdateTests(common.HeatTestCase): def setUp(self): super(InstanceUpdateTests, self).setUp() self._stack = utils.parse_stack(template_format.parse(db_template)) testprops = { "name": "testinstance", "flavor": "foo", "datastore_type": "database", "datastore_version": "1", "size": 10, "databases": [ {"name": "bar"}, {"name": "biff"} ], "users": [ { "name": "baz", "password": "password", "databases": ["bar"] }, { "name": "deleted", "password": "password", "databases": ["biff"] } ] } self._rdef = rsrc_defn.ResourceDefinition('test', dbinstance.Instance, properties=testprops) def test_handle_no_update(self, mock_client, mock_plugin): trove = dbinstance.Instance('test', self._rdef, self._stack) self.assertEqual({}, trove.handle_update(None, None, {})) def test_handle_update_name(self, mock_client, mock_plugin): prop_diff = { "name": "changed" } trove = dbinstance.Instance('test', self._rdef, self._stack) 
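# Instance.handle_update() is expected to hand the property diff straight
# back rather than calling the Trove API here: the actual rename, resize,
# and database/user changes are deferred to check_update_complete(), which
# the later tests in this class exercise against a mocked client.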
self.assertEqual(prop_diff, trove.handle_update(None, None, prop_diff)) def test_handle_update_databases(self, mock_client, mock_plugin): prop_diff = { "databases": [ {"name": "bar", "character_set": "ascii"}, {'name': "baz"} ] } mget = mock_client().databases.list mbar = mock.Mock(name='bar') mbar.name = 'bar' mbiff = mock.Mock(name='biff') mbiff.name = 'biff' mget.return_value = [mbar, mbiff] trove = dbinstance.Instance('test', self._rdef, self._stack) expected = { 'databases': [ {'character_set': 'ascii', 'name': 'bar'}, {'ACTION': 'CREATE', 'name': 'baz'}, {'ACTION': 'DELETE', 'name': 'biff'} ]} self.assertEqual(expected, trove.handle_update(None, None, prop_diff)) def test_handle_update_users(self, mock_client, mock_plugin): prop_diff = { "users": [ {"name": "baz", "password": "changed", "databases": ["bar", "biff"]}, {'name': "user2", "password": "password", "databases": ["biff", "bar"]} ] } uget = mock_client().users mbaz = mock.Mock(name='baz') mbaz.name = 'baz' mdel = mock.Mock(name='deleted') mdel.name = 'deleted' uget.list.return_value = [mbaz, mdel] trove = dbinstance.Instance('test', self._rdef, self._stack) expected = { 'users': [{ 'databases': ['bar', 'biff'], 'name': 'baz', 'password': 'changed' }, { 'ACTION': 'CREATE', 'databases': ['biff', 'bar'], 'name': 'user2', 'password': 'password' }, { 'ACTION': 'DELETE', 'name': 'deleted' }]} self.assertEqual(expected, trove.handle_update(None, None, prop_diff)) def test_handle_update_flavor(self, mock_client, mock_plugin): # Translation mechanism already resolved flavor name to id. prop_diff = { "flavor": 1234 } trove = dbinstance.Instance('test', self._rdef, self._stack) expected = { "flavor": 1234 } self.assertEqual(expected, trove.handle_update(None, None, prop_diff)) def test_handle_update_size(self, mock_client, mock_plugin): prop_diff = { "size": 42 } trove = dbinstance.Instance('test', self._rdef, self._stack) expected = { "size": 42 } self.assertEqual(expected, trove.handle_update(None, None, prop_diff)) def test_check_complete_none(self, mock_client, mock_plugin): trove = dbinstance.Instance('test', self._rdef, self._stack) self.assertTrue(trove.check_update_complete({})) def test_check_complete_error(self, mock_client, mock_plugin): mock_instance = mock.Mock(status="ERROR") mock_client().instances.get.return_value = mock_instance trove = dbinstance.Instance('test', self._rdef, self._stack) exc = self.assertRaises(exception.ResourceInError, trove.check_update_complete, {"foo": "bar"}) msg = "The last operation for the database instance failed" self.assertIn(msg, six.text_type(exc)) def test_check_client_exceptions(self, mock_client, mock_plugin): mock_instance = mock.Mock(status="ACTIVE") mock_client().instances.get.return_value = mock_instance mock_plugin().is_client_exception.return_value = True mock_plugin().is_over_limit.side_effect = [True, False] trove = dbinstance.Instance('test', self._rdef, self._stack) with mock.patch.object(trove, "_update_flavor") as mupdate: mupdate.side_effect = [Exception("test"), Exception("No change was requested " "because I'm testing")] self.assertFalse(trove.check_update_complete({"foo": "bar"})) self.assertFalse(trove.check_update_complete({"foo": "bar"})) self.assertEqual(2, mupdate.call_count) self.assertEqual(2, mock_plugin().is_client_exception.call_count) def test_check_complete_status(self, mock_client, mock_plugin): mock_instance = mock.Mock(status="RESIZING") mock_client().instances.get.return_value = mock_instance updates = {"foo": "bar"} trove = dbinstance.Instance('test', 
self._rdef, self._stack) self.assertFalse(trove.check_update_complete(updates)) def test_check_complete_name(self, mock_client, mock_plugin): mock_instance = mock.Mock(status="ACTIVE", name="mock_instance") mock_client().instances.get.return_value = mock_instance updates = {"name": "changed"} trove = dbinstance.Instance('test', self._rdef, self._stack) self.assertFalse(trove.check_update_complete(updates)) mock_instance.name = "changed" self.assertTrue(trove.check_update_complete(updates)) mock_client().instances.edit.assert_called_once_with(mock_instance, name="changed") def test_check_complete_databases(self, mock_client, mock_plugin): mock_instance = mock.Mock(status="ACTIVE", name="mock_instance") mock_client().instances.get.return_value = mock_instance updates = { 'databases': [ {'name': 'bar', "character_set": "ascii"}, {'ACTION': 'CREATE', 'name': 'baz'}, {'ACTION': 'DELETE', 'name': 'biff'} ]} trove = dbinstance.Instance('test', self._rdef, self._stack) self.assertTrue(trove.check_update_complete(updates)) mcreate = mock_client().databases.create mdelete = mock_client().databases.delete mcreate.assert_called_once_with(mock_instance, [{'name': 'baz'}]) mdelete.assert_called_once_with(mock_instance, 'biff') def test_check_complete_users(self, mock_client, mock_plugin): mock_instance = mock.Mock(status="ACTIVE", name="mock_instance") mock_client().instances.get.return_value = mock_instance mock_plugin().is_client_exception.return_value = False mock_client().users.get.return_value = users.User(None, { "databases": [{ "name": "bar" }, { "name": "buzz" }], "name": "baz" }, loaded=True) updates = { 'users': [{ 'databases': ['bar', 'biff'], 'name': 'baz', 'password': 'changed' }, { 'ACTION': 'CREATE', 'databases': ['biff', 'bar'], 'name': 'user2', 'password': 'password' }, { 'ACTION': 'DELETE', 'name': 'deleted' }]} trove = dbinstance.Instance('test', self._rdef, self._stack) self.assertTrue(trove.check_update_complete(updates)) create_calls = [ mock.call(mock_instance, [{'password': 'password', 'databases': [{'name': 'biff'}, {'name': 'bar'}], 'name': 'user2'}]) ] delete_calls = [ mock.call(mock_instance, 'deleted') ] mock_client().users.create.assert_has_calls(create_calls) mock_client().users.delete.assert_has_calls(delete_calls) self.assertEqual(1, mock_client().users.create.call_count) self.assertEqual(1, mock_client().users.delete.call_count) updateattr = mock_client().users.update_attributes updateattr.assert_called_once_with( mock_instance, 'baz', newuserattr={'password': 'changed'}, hostname=mock.ANY) mock_client().users.grant.assert_called_once_with( mock_instance, 'baz', ['biff']) mock_client().users.revoke.assert_called_once_with( mock_instance, 'baz', ['buzz']) def test_check_complete_flavor(self, mock_client, mock_plugin): mock_instance = mock.Mock(status="ACTIVE", flavor={'id': 4567}, name="mock_instance") mock_client().instances.get.return_value = mock_instance updates = { "flavor": 1234 } trove = dbinstance.Instance('test', self._rdef, self._stack) self.assertFalse(trove.check_update_complete(updates)) mock_instance.status = "RESIZING" self.assertFalse(trove.check_update_complete(updates)) mock_instance.status = "ACTIVE" mock_instance.flavor = {'id': 1234} self.assertTrue(trove.check_update_complete(updates)) def test_check_complete_size(self, mock_client, mock_plugin): mock_instance = mock.Mock(status="ACTIVE", volume={'size': 24}, name="mock_instance") mock_client().instances.get.return_value = mock_instance updates = { "size": 42 } trove = dbinstance.Instance('test', 
self._rdef, self._stack) self.assertFalse(trove.check_update_complete(updates)) mock_instance.status = "RESIZING" self.assertFalse(trove.check_update_complete(updates)) mock_instance.status = "ACTIVE" mock_instance.volume = {'size': 42} self.assertTrue(trove.check_update_complete(updates)) heat-10.0.2/heat/tests/openstack/trove/__init__.py0000666000175000017500000000000013343562340022052 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/trove/test_cluster.py0000666000175000017500000002275013343562340023053 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock import six from troveclient import exceptions as troveexc from heat.common import exception from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine.clients.os import trove from heat.engine.resources.openstack.trove import cluster from heat.engine import scheduler from heat.tests import common from heat.tests import utils stack_template = ''' heat_template_version: 2013-05-23 resources: cluster: type: OS::Trove::Cluster properties: datastore_type: mongodb datastore_version: 2.6.1 instances: - flavor: m1.heat volume_size: 1 networks: - port: port1 - flavor: m1.heat volume_size: 1 networks: - port: port2 - flavor: m1.heat volume_size: 1 networks: - port: port3 ''' class FakeTroveCluster(object): def __init__(self, status='ACTIVE'): self.name = 'cluster' self.id = '1189aa64-a471-4aa3-876a-9eb7d84089da' self.ip = ['10.0.0.1'] self.instances = [ {'id': '416b0b16-ba55-4302-bbd3-ff566032e1c1', 'status': status}, {'id': '965ef811-7c1d-47fc-89f2-a89dfdd23ef2', 'status': status}, {'id': '3642f41c-e8ad-4164-a089-3891bf7f2d2b', 'status': status}] def delete(self): pass class FakeFlavor(object): def __init__(self, id, name): self.id = id self.name = name class FakeVersion(object): def __init__(self, name="2.6.1"): self.name = name class TroveClusterTest(common.HeatTestCase): def setUp(self): super(TroveClusterTest, self).setUp() self.tmpl = template_format.parse(stack_template) self.stack = utils.parse_stack(self.tmpl) resource_defns = self.stack.t.resource_definitions(self.stack) self.rsrc_defn = resource_defns['cluster'] self.patcher_client = mock.patch.object(cluster.TroveCluster, 'client') mock_client = self.patcher_client.start() self.client = mock_client.return_value self.troveclient = mock.Mock() self.troveclient.flavors.get.return_value = FakeFlavor(1, 'm1.heat') self.patchobject(neutron.NeutronClientPlugin, 'find_resourceid_by_name_or_id', return_value='someportid') self.troveclient.datastore_versions.list.return_value = [FakeVersion()] self.patchobject(trove.TroveClientPlugin, 'client', return_value=self.troveclient) def tearDown(self): super(TroveClusterTest, self).tearDown() self.patcher_client.stop() def _create_resource(self, name, snippet, stack): tc = cluster.TroveCluster(name, snippet, stack) self.client.clusters.create.return_value = FakeTroveCluster() self.client.clusters.get.return_value = FakeTroveCluster() scheduler.TaskRunner(tc.create)() 
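# TaskRunner drives the resource's create task to completion inline; with
# clusters.create/get mocked to return an ACTIVE FakeTroveCluster,
# check_create_complete() passes on the first poll.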
return tc def test_create(self): tc = self._create_resource('cluster', self.rsrc_defn, self.stack) expected_state = (tc.CREATE, tc.COMPLETE) self.assertEqual(expected_state, tc.state) args = self.client.clusters.create.call_args[1] self.assertEqual([{'flavorRef': '1', 'volume': {'size': 1}, 'nics': [{'port-id': 'someportid'}]}, {'flavorRef': '1', 'volume': {'size': 1}, 'nics': [{'port-id': 'someportid'}]}, {'flavorRef': '1', 'volume': {'size': 1}, 'nics': [{'port-id': 'someportid'}]}], args['instances']) self.assertEqual('mongodb', args['datastore']) self.assertEqual('2.6.1', args['datastore_version']) self.assertEqual('1189aa64-a471-4aa3-876a-9eb7d84089da', tc.resource_id) self.assertEqual('clusters', tc.entity) def test_attributes(self): tc = self._create_resource('cluster', self.rsrc_defn, self.stack) self.assertEqual(['10.0.0.1'], tc.FnGetAtt('ip')) self.assertEqual(['416b0b16-ba55-4302-bbd3-ff566032e1c1', '965ef811-7c1d-47fc-89f2-a89dfdd23ef2', '3642f41c-e8ad-4164-a089-3891bf7f2d2b'], tc.FnGetAtt('instances')) def test_delete(self): tc = self._create_resource('cluster', self.rsrc_defn, self.stack) fake_cluster = FakeTroveCluster() fake_cluster.task = {'name': 'NONE'} self.client.clusters.get.side_effect = [fake_cluster, fake_cluster, troveexc.NotFound()] scheduler.TaskRunner(tc.delete)() self.assertEqual((tc.DELETE, tc.COMPLETE), tc.state) def test_delete_not_found(self): tc = self._create_resource('cluster', self.rsrc_defn, self.stack) self.client.clusters.get.side_effect = troveexc.NotFound() scheduler.TaskRunner(tc.delete)() self.assertEqual((tc.DELETE, tc.COMPLETE), tc.state) self.client.clusters.get.assert_called_with(tc.resource_id) self.assertEqual(2, self.client.clusters.get.call_count) def test_delete_incorrect_status(self): tc = self._create_resource('cluster', self.rsrc_defn, self.stack) fake_cluster_bad = FakeTroveCluster() fake_cluster_bad.task = {'name': 'BUILDING'} fake_cluster_bad.delete = mock.Mock() fake_cluster_ok = FakeTroveCluster() fake_cluster_ok.task = {'name': 'NONE'} fake_cluster_ok.delete = mock.Mock() self.client.clusters.get.side_effect = [fake_cluster_bad, # two for cluster_delete method fake_cluster_bad, fake_cluster_ok, # for _refresh_cluster method troveexc.NotFound()] scheduler.TaskRunner(tc.delete)() self.assertEqual((tc.DELETE, tc.COMPLETE), tc.state) fake_cluster_bad.delete.assert_not_called() fake_cluster_ok.delete.assert_called_once_with() def test_delete_not_found_during_delete(self): tc = self._create_resource('cluster', self.rsrc_defn, self.stack) fake_cluster = FakeTroveCluster() fake_cluster.task = {'name': 'NONE'} fake_cluster.delete = mock.Mock(side_effect=[troveexc.NotFound()]) self.client.clusters.get.side_effect = [fake_cluster, fake_cluster, troveexc.NotFound()] scheduler.TaskRunner(tc.delete)() self.assertEqual((tc.DELETE, tc.COMPLETE), tc.state) self.assertEqual(1, fake_cluster.delete.call_count) def test_delete_already_deleting(self): tc = self._create_resource('cluster', self.rsrc_defn, self.stack) fake_cluster = FakeTroveCluster() fake_cluster.task = {'name': 'DELETING'} fake_cluster.delete = mock.Mock() self.client.clusters.get.side_effect = [fake_cluster, fake_cluster, troveexc.NotFound()] scheduler.TaskRunner(tc.delete)() self.assertEqual((tc.DELETE, tc.COMPLETE), tc.state) self.assertEqual(0, fake_cluster.delete.call_count) def test_validate_ok(self): tc = cluster.TroveCluster('cluster', self.rsrc_defn, self.stack) self.assertIsNone(tc.validate()) def test_validate_invalid_dsversion(self): props = 
self.tmpl['resources']['cluster']['properties'].copy() props['datastore_version'] = '2.6.2' self.rsrc_defn = self.rsrc_defn.freeze(properties=props) tc = cluster.TroveCluster('cluster', self.rsrc_defn, self.stack) ex = self.assertRaises(exception.StackValidationFailed, tc.validate) error_msg = ('Datastore version 2.6.2 for datastore type mongodb is ' 'not valid. Allowed versions are 2.6.1.') self.assertEqual(error_msg, six.text_type(ex)) def test_validate_invalid_flavor(self): self.troveclient.flavors.get.side_effect = troveexc.NotFound() self.troveclient.flavors.find.side_effect = troveexc.NotFound() props = copy.deepcopy(self.tmpl['resources']['cluster']['properties']) props['instances'][0]['flavor'] = 'm1.small' self.rsrc_defn = self.rsrc_defn.freeze(properties=props) tc = cluster.TroveCluster('cluster', self.rsrc_defn, self.stack) ex = self.assertRaises(exception.StackValidationFailed, tc.validate) error_msg = ("Property error: " "resources.cluster.properties.instances[0].flavor: " "Error validating value 'm1.small': Not Found (HTTP 404)") self.assertEqual(error_msg, six.text_type(ex)) heat-10.0.2/heat/tests/openstack/neutron/0000775000175000017500000000000013343562672020314 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/neutron/test_neutron_trunk.py0000666000175000017500000004113213343562340024635 0ustar zuulzuul00000000000000# Copyright 2017 Ericsson # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
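# The tests below run OS::Neutron::Trunk against a mocked neutronclient and
# assert on the exact request bodies. A sketch of the create_trunk() payload
# shape the resource builds from its HOT properties (shape only, as asserted
# in the individual tests):
#
#     {'trunk': {'name': <name>,
#                'description': <description>,
#                'port_id': <resolved parent port id>,
#                'sub_ports': [{'port_id': <resolved subport id>,
#                               'segmentation_type': 'vlan',
#                               'segmentation_id': <vlan id>}, ...]}}
#
# On update, subport changes go through trunk_add_subports() and
# trunk_remove_subports() rather than update_trunk().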
import copy import six from oslo_log import log as logging from heat.common import exception from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine.resources.openstack.neutron import trunk from heat.engine import scheduler from heat.engine import stk_defn from heat.tests import common from heat.tests import utils from neutronclient.common import exceptions as ncex from neutronclient.neutron import v2_0 as neutronV20 from neutronclient.v2_0 import client as neutronclient LOG = logging.getLogger(__name__) create_template = ''' heat_template_version: 2017-09-01 description: Template to test Neutron Trunk resource resources: parent_port: type: OS::Neutron::Port properties: network: parent_port_net subport_1: type: OS::Neutron::Port properties: network: subport_1_net subport_2: type: OS::Neutron::Port properties: network: subport_2_net trunk: type: OS::Neutron::Trunk properties: name: trunk name description: trunk description port: { get_resource: parent_port } sub_ports: - { port: { get_resource: subport_1 }, segmentation_type: vlan, segmentation_id: 101 } - { port: { get_resource: subport_2 }, segmentation_type: vlan, segmentation_id: 102 } ''' update_template = ''' heat_template_version: 2017-09-01 description: Template to test Neutron Trunk resource resources: trunk: type: OS::Neutron::Trunk properties: name: trunk name description: trunk description port: parent_port_id sub_ports: - { port: subport_1_id, segmentation_type: vlan, segmentation_id: 101 } - { port: subport_2_id, segmentation_type: vlan, segmentation_id: 102 } ''' class NeutronTrunkTest(common.HeatTestCase): def setUp(self): super(NeutronTrunkTest, self).setUp() self.patchobject( neutron.NeutronClientPlugin, 'has_extension', return_value=True) self.create_trunk_mock = self.patchobject( neutronclient.Client, 'create_trunk') self.delete_trunk_mock = self.patchobject( neutronclient.Client, 'delete_trunk') self.show_trunk_mock = self.patchobject( neutronclient.Client, 'show_trunk') self.update_trunk_mock = self.patchobject( neutronclient.Client, 'update_trunk') self.trunk_remove_subports_mock = self.patchobject( neutronclient.Client, 'trunk_remove_subports') self.trunk_add_subports_mock = self.patchobject( neutronclient.Client, 'trunk_add_subports') self.find_resource_mock = self.patchobject( neutronV20, 'find_resourceid_by_name_or_id') rv = { 'trunk': { 'id': 'trunk id', 'status': 'DOWN', } } self.create_trunk_mock.return_value = rv self.show_trunk_mock.return_value = rv def find_resourceid_by_name_or_id( _client, _resource, name_or_id, **_kwargs): return name_or_id self.find_resource_mock.side_effect = find_resourceid_by_name_or_id def _create_trunk(self, stack): trunk = stack['trunk'] scheduler.TaskRunner(trunk.create)() stk_defn.update_resource_data(stack.defn, trunk.name, trunk.node_data()) self.assertEqual((trunk.CREATE, trunk.COMPLETE), trunk.state) def _delete_trunk(self, stack): trunk = stack['trunk'] scheduler.TaskRunner(trunk.delete)() self.assertEqual((trunk.DELETE, trunk.COMPLETE), trunk.state) def test_create_missing_port_property(self): t = template_format.parse(create_template) del t['resources']['trunk']['properties']['port'] stack = utils.parse_stack(t) self.assertRaises( exception.StackValidationFailed, stack.validate) def test_create_no_subport(self): t = template_format.parse(create_template) del t['resources']['trunk']['properties']['sub_ports'] del t['resources']['subport_1'] del t['resources']['subport_2'] stack = utils.parse_stack(t) parent_port = 
stack['parent_port'] self.patchobject(parent_port, 'get_reference_id', return_value='parent port id') self.find_resource_mock.return_value = 'parent port id' stk_defn.update_resource_data(stack.defn, parent_port.name, parent_port.node_data()) self._create_trunk(stack) self.create_trunk_mock.assert_called_once_with({ 'trunk': { 'description': 'trunk description', 'name': 'trunk name', 'port_id': 'parent port id', }} ) def test_create_one_subport(self): t = template_format.parse(create_template) del t['resources']['trunk']['properties']['sub_ports'][1:] del t['resources']['subport_2'] stack = utils.parse_stack(t) parent_port = stack['parent_port'] self.patchobject(parent_port, 'get_reference_id', return_value='parent port id') stk_defn.update_resource_data(stack.defn, parent_port.name, parent_port.node_data()) subport_1 = stack['subport_1'] self.patchobject(subport_1, 'get_reference_id', return_value='subport id') stk_defn.update_resource_data(stack.defn, subport_1.name, subport_1.node_data()) self._create_trunk(stack) self.create_trunk_mock.assert_called_once_with({ 'trunk': { 'description': 'trunk description', 'name': 'trunk name', 'port_id': 'parent port id', 'sub_ports': [ {'port_id': 'subport id', 'segmentation_type': 'vlan', 'segmentation_id': 101}, ], }} ) def test_create_two_subports(self): t = template_format.parse(create_template) del t['resources']['trunk']['properties']['sub_ports'][2:] stack = utils.parse_stack(t) parent_port = stack['parent_port'] self.patchobject(parent_port, 'get_reference_id', return_value='parent_port_id') stk_defn.update_resource_data(stack.defn, parent_port.name, parent_port.node_data()) subport_1 = stack['subport_1'] self.patchobject(subport_1, 'get_reference_id', return_value='subport_1_id') stk_defn.update_resource_data(stack.defn, subport_1.name, subport_1.node_data()) subport_2 = stack['subport_2'] self.patchobject(subport_2, 'get_reference_id', return_value='subport_2_id') stk_defn.update_resource_data(stack.defn, subport_2.name, subport_2.node_data()) self._create_trunk(stack) self.create_trunk_mock.assert_called_once_with({ 'trunk': { 'description': 'trunk description', 'name': 'trunk name', 'port_id': 'parent_port_id', 'sub_ports': [ {'port_id': 'subport_1_id', 'segmentation_type': 'vlan', 'segmentation_id': 101}, {'port_id': 'subport_2_id', 'segmentation_type': 'vlan', 'segmentation_id': 102}, ], }} ) def test_create_degraded(self): t = template_format.parse(create_template) stack = utils.parse_stack(t) rv = { 'trunk': { 'id': 'trunk id', 'status': 'DEGRADED', } } self.create_trunk_mock.return_value = rv self.show_trunk_mock.return_value = rv trunk = stack['trunk'] e = self.assertRaises( exception.ResourceInError, trunk.check_create_complete, trunk.resource_id) self.assertIn( 'Went to status DEGRADED due to', six.text_type(e)) def test_create_parent_port_by_name(self): t = template_format.parse(create_template) t['resources']['parent_port'][ 'properties']['name'] = 'parent port name' t['resources']['trunk'][ 'properties']['port'] = 'parent port name' del t['resources']['trunk']['properties']['sub_ports'] stack = utils.parse_stack(t) parent_port = stack['parent_port'] self.patchobject(parent_port, 'get_reference_id', return_value='parent port id') stk_defn.update_resource_data(stack.defn, parent_port.name, parent_port.node_data()) def find_resourceid_by_name_or_id( _client, _resource, name_or_id, **_kwargs): name_to_id = { 'parent port name': 'parent port id', 'parent port id': 'parent port id', } return name_to_id[name_or_id] 
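# The stub resolves both the human-readable name and the id to the same
# canonical id, mirroring how neutronV20.find_resourceid_by_name_or_id
# accepts either form; the assertion below verifies that create_trunk()
# received the resolved id, not the name.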
self.find_resource_mock.side_effect = find_resourceid_by_name_or_id self._create_trunk(stack) self.create_trunk_mock.assert_called_once_with({ 'trunk': { 'description': 'trunk description', 'name': 'trunk name', 'port_id': 'parent port id', }} ) def test_create_subport_by_name(self): t = template_format.parse(create_template) del t['resources']['trunk']['properties']['sub_ports'][1:] del t['resources']['subport_2'] t['resources']['subport_1'][ 'properties']['name'] = 'subport name' t['resources']['trunk'][ 'properties']['sub_ports'][0]['port'] = 'subport name' stack = utils.parse_stack(t) parent_port = stack['parent_port'] self.patchobject(parent_port, 'get_reference_id', return_value='parent port id') stk_defn.update_resource_data(stack.defn, parent_port.name, parent_port.node_data()) subport_1 = stack['subport_1'] self.patchobject(subport_1, 'get_reference_id', return_value='subport id') stk_defn.update_resource_data(stack.defn, subport_1.name, subport_1.node_data()) def find_resourceid_by_name_or_id( _client, _resource, name_or_id, **_kwargs): name_to_id = { 'subport name': 'subport id', 'subport id': 'subport id', 'parent port name': 'parent port id', 'parent port id': 'parent port id', } return name_to_id[name_or_id] self.find_resource_mock.side_effect = find_resourceid_by_name_or_id self._create_trunk(stack) self.create_trunk_mock.assert_called_once_with({ 'trunk': { 'description': 'trunk description', 'name': 'trunk name', 'port_id': 'parent port id', 'sub_ports': [ {'port_id': 'subport id', 'segmentation_type': 'vlan', 'segmentation_id': 101}, ], }} ) def test_delete_proper(self): t = template_format.parse(create_template) stack = utils.parse_stack(t) self._create_trunk(stack) self._delete_trunk(stack) self.delete_trunk_mock.assert_called_once_with('trunk id') def test_delete_already_gone(self): t = template_format.parse(create_template) stack = utils.parse_stack(t) self._create_trunk(stack) self.delete_trunk_mock.side_effect = ncex.NeutronClientException( status_code=404) self._delete_trunk(stack) self.delete_trunk_mock.assert_called_once_with('trunk id') def test_update_basic_properties(self): t = template_format.parse(update_template) stack = utils.parse_stack(t) rsrc_defn = stack.defn.resource_definition('trunk') rsrc = trunk.Trunk('trunk', rsrc_defn, stack) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) props = copy.deepcopy(t['resources']['trunk']['properties']) props['name'] = 'new trunk name' rsrc_defn = rsrc_defn.freeze(properties=props) scheduler.TaskRunner(rsrc.update, rsrc_defn)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.update_trunk_mock.assert_called_once_with( 'trunk id', {'trunk': {'name': 'new trunk name'}} ) self.trunk_remove_subports_mock.assert_not_called() self.trunk_add_subports_mock.assert_not_called() def test_update_subport_delete(self): t = template_format.parse(update_template) stack = utils.parse_stack(t) rsrc_defn = stack.defn.resource_definition('trunk') rsrc = trunk.Trunk('trunk', rsrc_defn, stack) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) props = copy.deepcopy(t['resources']['trunk']['properties']) del props['sub_ports'][1] rsrc_defn = rsrc_defn.freeze(properties=props) scheduler.TaskRunner(rsrc.update, rsrc_defn)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.update_trunk_mock.assert_not_called() self.trunk_remove_subports_mock.assert_called_once_with( 'trunk id', {'sub_ports': [{'port_id': u'subport_2_id'}]} 
) self.trunk_add_subports_mock.assert_not_called() def test_update_subport_add(self): t = template_format.parse(update_template) stack = utils.parse_stack(t) rsrc_defn = stack.defn.resource_definition('trunk') rsrc = trunk.Trunk('trunk', rsrc_defn, stack) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) props = copy.deepcopy(t['resources']['trunk']['properties']) props['sub_ports'].append( {'port': 'subport_3_id', 'segmentation_type': 'vlan', 'segmentation_id': 103}) rsrc_defn = rsrc_defn.freeze(properties=props) scheduler.TaskRunner(rsrc.update, rsrc_defn)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.update_trunk_mock.assert_not_called() self.trunk_remove_subports_mock.assert_not_called() self.trunk_add_subports_mock.assert_called_once_with( 'trunk id', {'sub_ports': [ {'port_id': 'subport_3_id', 'segmentation_id': 103, 'segmentation_type': 'vlan'} ]} ) def test_update_subport_change(self): t = template_format.parse(update_template) stack = utils.parse_stack(t) rsrc_defn = stack.defn.resource_definition('trunk') rsrc = trunk.Trunk('trunk', rsrc_defn, stack) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) props = copy.deepcopy(t['resources']['trunk']['properties']) props['sub_ports'][1]['segmentation_id'] = 103 rsrc_defn = rsrc_defn.freeze(properties=props) scheduler.TaskRunner(rsrc.update, rsrc_defn)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.update_trunk_mock.assert_not_called() self.trunk_remove_subports_mock.assert_called_once_with( 'trunk id', {'sub_ports': [{'port_id': u'subport_2_id'}]} ) self.trunk_add_subports_mock.assert_called_once_with( 'trunk id', {'sub_ports': [ {'port_id': 'subport_2_id', 'segmentation_id': 103, 'segmentation_type': 'vlan'} ]} ) heat-10.0.2/heat/tests/openstack/neutron/test_neutron_router.py0000666000175000017500000010403213343562351025013 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
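# As with the trunk tests above, every neutronclient call used by the
# router resources (create_router, add_gateway_router,
# add_router_to_l3_agent, and so on) is patched out, and the tests assert
# on the request bodies, e.g. {'router': {'name': <physical name>,
# 'admin_state_up': True}} for create_router and {'router_id': <id>} for
# L3-agent scheduling.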
import copy import mock from neutronclient.common import exceptions as qe from neutronclient.v2_0 import client as neutronclient import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine.resources.openstack.neutron import router from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests import utils neutron_template = ''' heat_template_version: 2015-04-30 description: Template to test router related Neutron resources resources: router: type: OS::Neutron::Router properties: l3_agent_ids: - 792ff887-6c85-4a56-b518-23f24fa65581 router_interface: type: OS::Neutron::RouterInterface properties: router_id: { get_resource: router } subnet: sub1234 gateway: type: OS::Neutron::RouterGateway properties: router_id: { get_resource: router } network: net1234 ''' hidden_property_router_template = ''' heat_template_version: 2015-04-30 description: Template to test router related Neutron resources resources: router: type: OS::Neutron::Router properties: l3_agent_id: 792ff887-6c85-4a56-b518-23f24fa65581 ''' neutron_external_gateway_template = ''' heat_template_version: 2016-04-08 description: Template to test gateway Neutron resource resources: router: type: OS::Neutron::Router properties: name: Test Router external_gateway_info: network: public enable_snat: true external_fixed_ips: - ip_address: 192.168.10.99 subnet: sub1234 ''' neutron_subnet_and_external_gateway_template = ''' heat_template_version: 2015-04-30 description: Template to test gateway Neutron resource resources: net_external: type: OS::Neutron::Net properties: name: net_external admin_state_up: true value_specs: provider:network_type: flat provider:physical_network: default router:external: true subnet_external: type: OS::Neutron::Subnet properties: name: subnet_external network_id: { get_resource: net_external} ip_version: 4 cidr: 192.168.10.0/24 gateway_ip: 192.168.10.11 enable_dhcp: false floating_ip: type: OS::Neutron::FloatingIP properties: floating_network: { get_resource: net_external} router: type: OS::Neutron::Router properties: name: router_heat external_gateway_info: network: { get_resource: net_external} ''' class NeutronRouterTest(common.HeatTestCase): def setUp(self): super(NeutronRouterTest, self).setUp() self.create_mock = self.patchobject(neutronclient.Client, 'create_router') self.delete_mock = self.patchobject(neutronclient.Client, 'delete_router') self.show_mock = self.patchobject(neutronclient.Client, 'show_router') self.update_mock = self.patchobject(neutronclient.Client, 'update_router') self.add_if_mock = self.patchobject(neutronclient.Client, 'add_interface_router') self.remove_if_mock = self.patchobject(neutronclient.Client, 'remove_interface_router') self.add_gw_mock = self.patchobject(neutronclient.Client, 'add_gateway_router') self.remove_gw_mock = self.patchobject(neutronclient.Client, 'remove_gateway_router') self.add_router_mock = self.patchobject( neutronclient.Client, 'add_router_to_l3_agent') self.remove_router_mock = self.patchobject( neutronclient.Client, 'remove_router_from_l3_agent') self.list_l3_hr_mock = self.patchobject( neutronclient.Client, 'list_l3_agent_hosting_routers') self.find_rsrc_mock = self.patchobject( neutron.NeutronClientPlugin, 'find_resourceid_by_name_or_id') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_router(self, t, stack, resource_name): resource_defns = stack.t.resource_definitions(stack) rsrc 
= router.Router('router', resource_defns[resource_name], stack) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) return rsrc def create_router_interface(self, t, stack, resource_name, properties=None): properties = properties or {} t['resources'][resource_name]['properties'] = properties resource_defns = stack.t.resource_definitions(stack) rsrc = router.RouterInterface( 'router_interface', resource_defns[resource_name], stack) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) return rsrc def create_gateway_router(self, t, stack, resource_name, properties=None): properties = properties or {} t['resources'][resource_name]['properties'] = properties resource_defns = stack.t.resource_definitions(stack) rsrc = router.RouterGateway( 'gateway', resource_defns[resource_name], stack) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) return rsrc def test_router_hidden_property_translation(self): t = template_format.parse(hidden_property_router_template) stack = utils.parse_stack(t) rsrc = stack['router'] self.assertIsNone(rsrc.properties['l3_agent_id']) self.assertEqual([u'792ff887-6c85-4a56-b518-23f24fa65581'], rsrc.properties['l3_agent_ids']) def test_router_validate_distribute_l3_agents(self): t = template_format.parse(neutron_template) props = t['resources']['router']['properties'] # test distributed can not specify l3_agent_id props['distributed'] = True stack = utils.parse_stack(t) rsrc = stack['router'] exc = self.assertRaises(exception.ResourcePropertyConflict, rsrc.validate) self.assertIn('distributed, l3_agent_id/l3_agent_ids', six.text_type(exc)) # test distributed can not specify l3_agent_ids props['l3_agent_ids'] = ['id1', 'id2'] stack = utils.parse_stack(t) rsrc = stack['router'] exc = self.assertRaises(exception.ResourcePropertyConflict, rsrc.validate) self.assertIn('distributed, l3_agent_id/l3_agent_ids', six.text_type(exc)) def test_router_validate_l3_agents(self): t = template_format.parse(neutron_template) props = t['resources']['router']['properties'] # test l3_agent_id and l3_agent_ids can not specify at the same time props['l3_agent_ids'] = ['id1', 'id2'] stack = utils.parse_stack(t) rsrc = stack['router'] exc = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertIn('Non HA routers can only have one L3 agent', six.text_type(exc)) self.assertIsNone(rsrc.properties.get(rsrc.L3_AGENT_ID)) def test_router_validate_ha_distribute(self): t = template_format.parse(neutron_template) props = t['resources']['router']['properties'] # test distributed and ha can not specify at the same time props['ha'] = True props['distributed'] = True stack = utils.parse_stack(t) rsrc = stack['router'] update_props = props.copy() del update_props['l3_agent_ids'] rsrc.t = rsrc.t.freeze(properties=update_props) rsrc.reparse() exc = self.assertRaises(exception.ResourcePropertyConflict, rsrc.validate) self.assertIn('distributed, ha', six.text_type(exc)) def test_router_validate_ha_l3_agents(self): t = template_format.parse(neutron_template) props = t['resources']['router']['properties'] # test non ha can not specify more than one l3 agent id props['ha'] = False props['l3_agent_ids'] = ['id1', 'id2'] stack = utils.parse_stack(t) rsrc = stack['router'] exc = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertIn('Non HA routers can only have one L3 agent.', six.text_type(exc)) def test_router(self): t = 
    def test_router(self):
        t = template_format.parse(neutron_template)
        tags = ['for_test']
        t['resources']['router']['properties']['tags'] = tags
        stack = utils.parse_stack(t)
        create_body = {
            'router': {
                'name': utils.PhysName(stack.name, 'router'),
                'admin_state_up': True}}
        router_base_info = {
            'router': {
                "status": "BUILD",
                "external_gateway_info": None,
                "name": utils.PhysName(stack.name, 'router'),
                "admin_state_up": True,
                "tenant_id": "3e21026f2dc94372b105808c0e721661",
                "id": "3e46229d-8fce-4733-819a-b5fe630550f8"}}
        router_active_info = copy.deepcopy(router_base_info)
        router_active_info['router']['status'] = 'ACTIVE'
        self.create_mock.return_value = router_base_info
        self.show_mock.side_effect = [
            # create complete check
            router_base_info,
            router_active_info,
            # first get_attr tenant
            qe.NeutronClientException(status_code=404),
            # second get_attr tenant
            router_active_info,
            # delete complete check
            qe.NeutronClientException(status_code=404)]
        agents_info = {
            "agents": [{
                "admin_state_up": True,
                "agent_type": "L3 agent",
                "alive": True,
                "binary": "neutron-l3-agent",
                "configurations": {
                    "ex_gw_ports": 1,
                    "floating_ips": 0,
                    "gateway_external_network_id": "",
                    "handle_internal_only_routers": True,
                    "interface_driver": "DummyDriver",
                    "interfaces": 1,
                    "router_id": "",
                    "routers": 1,
                    "use_namespaces": True},
                "created_at": "2014-03-11 05:00:05",
                "description": None,
                "heartbeat_timestamp": "2014-03-11 05:01:49",
                "host": "l3_agent_host",
                "id": "792ff887-6c85-4a56-b518-23f24fa65581",
                "started_at": "2014-03-11 05:00:05",
                "topic": "l3_agent"
            }]
        }
        agents_info1 = copy.deepcopy(agents_info)
        agent = agents_info1['agents'][0]
        agent['id'] = '63b3fd83-2c5f-4dad-b3ae-e0f83a40f216'
        self.list_l3_hr_mock.side_effect = [
            {"agents": []},
            agents_info,
            agents_info1
        ]
        self.delete_mock.side_effect = [
            None,
            qe.NeutronClientException(status_code=404)]
        set_tag_mock = self.patchobject(neutronclient.Client, 'replace_tag')
        rsrc = self.create_router(t, stack, 'router')
        self.create_mock.assert_called_with(create_body)
        set_tag_mock.assert_called_with(
            'routers',
            rsrc.resource_id,
            {'tags': tags}
        )
        rsrc.validate()
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('3e46229d-8fce-4733-819a-b5fe630550f8', ref_id)
        self.assertIsNone(rsrc.FnGetAtt('tenant_id'))
        self.assertEqual('3e21026f2dc94372b105808c0e721661',
                         rsrc.FnGetAtt('tenant_id'))
        prop_diff = {
            "admin_state_up": False,
            "name": "myrouter",
            "l3_agent_ids": ["63b3fd83-2c5f-4dad-b3ae-e0f83a40f216"],
            'tags': ['new_tag']
        }
        props = copy.copy(rsrc.properties.data)
        props.update(prop_diff)
        update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(),
                                                      props)
        rsrc.handle_update(update_snippet, {}, prop_diff)
        set_tag_mock.assert_called_with(
            'routers',
            rsrc.resource_id,
            {'tags': ['new_tag']}
        )
        self.update_mock.assert_called_with(
            '3e46229d-8fce-4733-819a-b5fe630550f8',
            {'router': {
                'name': 'myrouter',
                'admin_state_up': False
            }}
        )
        prop_diff = {
            "l3_agent_ids": ["4c692423-2c5f-4dad-b3ae-e2339f58539f",
                             "8363b3fd-2c5f-4dad-b3ae-0f216e0f83a4"]
        }
        props = copy.copy(rsrc.properties.data)
        props.update(prop_diff)
        update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(),
                                                      props)
        rsrc.handle_update(update_snippet, {}, prop_diff)
        add_router_calls = [
            # create
            mock.call(
                u'792ff887-6c85-4a56-b518-23f24fa65581',
                {'router_id': u'3e46229d-8fce-4733-819a-b5fe630550f8'}),
            # first update
            mock.call(
                u'63b3fd83-2c5f-4dad-b3ae-e0f83a40f216',
                {'router_id': u'3e46229d-8fce-4733-819a-b5fe630550f8'}),
            # second update
            mock.call(
                u'4c692423-2c5f-4dad-b3ae-e2339f58539f',
                {'router_id': u'3e46229d-8fce-4733-819a-b5fe630550f8'}),
            mock.call(
                u'8363b3fd-2c5f-4dad-b3ae-0f216e0f83a4',
                {'router_id': u'3e46229d-8fce-4733-819a-b5fe630550f8'})
        ]
        remove_router_calls = [
            # first update
            mock.call(
                u'792ff887-6c85-4a56-b518-23f24fa65581',
                u'3e46229d-8fce-4733-819a-b5fe630550f8'),
            # second update
            mock.call(
                u'63b3fd83-2c5f-4dad-b3ae-e0f83a40f216',
                u'3e46229d-8fce-4733-819a-b5fe630550f8')
        ]
        self.add_router_mock.assert_has_calls(add_router_calls)
        self.remove_router_mock.assert_has_calls(remove_router_calls)
        self.assertIsNone(scheduler.TaskRunner(rsrc.delete)())
        rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again')
        self.assertIsNone(scheduler.TaskRunner(rsrc.delete)())

    def test_router_dependence(self):
        # assert the implicit dependency between the router and the subnet
        t = template_format.parse(
            neutron_subnet_and_external_gateway_template)
        stack = utils.parse_stack(t)
        deps = stack.dependencies[stack['subnet_external']]
        self.assertIn(stack['router'], deps)
        required_by = set(stack.dependencies.required_by(stack['router']))
        self.assertIn(stack['floating_ip'], required_by)

    def test_router_interface(self):
        self._test_router_interface()

    def test_router_interface_depr_router(self):
        self._test_router_interface(resolve_router=False)

    def _test_router_interface(self, resolve_router=True):
        self.remove_if_mock.side_effect = [
            None,
            qe.NeutronClientException(status_code=404)
        ]
        t = template_format.parse(neutron_template)
        stack = utils.parse_stack(t)
        self.stub_SubnetConstraint_validate()
        self.stub_RouterConstraint_validate()

        def find_rsrc(resource, name_or_id, cmd_resource=None):
            id_mapping = {
                'subnet': '91e47a57-7508-46fe-afc9-fc454e8580e1',
                'router': '3e46229d-8fce-4733-819a-b5fe630550f8'}
            return id_mapping.get(resource)

        self.find_rsrc_mock.side_effect = find_rsrc
        router_key = 'router'
        subnet_key = 'subnet'
        rsrc = self.create_router_interface(
            t, stack, 'router_interface', properties={
                router_key: '3e46229d-8fce-4733-819a-b5fe630550f8',
                subnet_key: '91e47a57-7508-46fe-afc9-fc454e8580e1'
            })
        self.add_if_mock.assert_called_with(
            '3e46229d-8fce-4733-819a-b5fe630550f8',
            {'subnet_id': '91e47a57-7508-46fe-afc9-fc454e8580e1'})
        # Ensure that the deprecated property translates correctly
        if not resolve_router:
            self.assertEqual('3e46229d-8fce-4733-819a-b5fe630550f8',
                             rsrc.properties.get(rsrc.ROUTER))
            self.assertIsNone(rsrc.properties.get(rsrc.ROUTER_ID))
        scheduler.TaskRunner(rsrc.delete)()
        rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again')
        scheduler.TaskRunner(rsrc.delete)()

    def test_router_interface_with_old_data(self):
        self.stub_SubnetConstraint_validate()
        self.stub_RouterConstraint_validate()

        def find_rsrc(resource, name_or_id, cmd_resource=None):
            id_mapping = {
                'subnet': '91e47a57-7508-46fe-afc9-fc454e8580e1',
                'router': '3e46229d-8fce-4733-819a-b5fe630550f8'}
            return id_mapping.get(resource)

        self.find_rsrc_mock.side_effect = find_rsrc
        self.remove_if_mock.side_effect = [
            None,
            qe.NeutronClientException(status_code=404)
        ]
        t = template_format.parse(neutron_template)
        stack = utils.parse_stack(t)
        rsrc = self.create_router_interface(
            t, stack, 'router_interface', properties={
                'router': '3e46229d-8fce-4733-819a-b5fe630550f8',
                'subnet': '91e47a57-7508-46fe-afc9-fc454e8580e1'
            })
        self.add_if_mock.assert_called_with(
            '3e46229d-8fce-4733-819a-b5fe630550f8',
            {'subnet_id': '91e47a57-7508-46fe-afc9-fc454e8580e1'}
        )
        self.assertEqual('3e46229d-8fce-4733-819a-b5fe630550f8'
                         ':subnet_id=91e47a57-7508-46fe-afc9-fc454e8580e1',
                         rsrc.resource_id)
        rsrc.resource_id = ('3e46229d-8fce-4733-819a-b5fe630550f8:'
                            '91e47a57-7508-46fe-afc9-fc454e8580e1')
        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual('3e46229d-8fce-4733-819a-b5fe630550f8'
                         ':91e47a57-7508-46fe-afc9-fc454e8580e1',
                         rsrc.resource_id)
        rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again')
        scheduler.TaskRunner(rsrc.delete)()

    def test_router_interface_with_port(self):
        self._test_router_interface_with_port()

    def test_router_interface_with_deprecated_port(self):
        self._test_router_interface_with_port(resolve_port=False)

    def _test_router_interface_with_port(self, resolve_port=True):
        def find_rsrc(resource, name_or_id, cmd_resource=None):
            id_mapping = {
                'router': 'ae478782-53c0-4434-ab16-49900c88016c',
                'port': '9577cafd-8e98-4059-a2e6-8a771b4d318e'}
            return id_mapping.get(resource)

        self.find_rsrc_mock.side_effect = find_rsrc
        self.remove_if_mock.side_effect = [
            None,
            qe.NeutronClientException(status_code=404)]
        self.stub_PortConstraint_validate()
        self.stub_RouterConstraint_validate()
        t = template_format.parse(neutron_template)
        stack = utils.parse_stack(t)
        rsrc = self.create_router_interface(
            t, stack, 'router_interface', properties={
                'router': 'ae478782-53c0-4434-ab16-49900c88016c',
                'port': '9577cafd-8e98-4059-a2e6-8a771b4d318e'
            })
        # Ensure that the deprecated property translates correctly
        if not resolve_port:
            self.assertEqual('9577cafd-8e98-4059-a2e6-8a771b4d318e',
                             rsrc.properties.get(rsrc.PORT))
            self.assertIsNone(rsrc.properties.get(rsrc.PORT_ID))
        scheduler.TaskRunner(rsrc.delete)()
        rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again')
        scheduler.TaskRunner(rsrc.delete)()

    def test_router_interface_validate(self):
        def find_rsrc(resource, name_or_id, cmd_resource=None):
            id_mapping = {
                'router': 'ae478782-53c0-4434-ab16-49900c88016c',
                'subnet': '8577cafd-8e98-4059-a2e6-8a771b4d318e',
                'port': '9577cafd-8e98-4059-a2e6-8a771b4d318e'}
            return id_mapping.get(resource)

        self.find_rsrc_mock.side_effect = find_rsrc
        t = template_format.parse(neutron_template)
        json = t['resources']['router_interface']
        json['properties'] = {
            'router_id': 'ae478782-53c0-4434-ab16-49900c88016c',
            'subnet_id': '8577cafd-8e98-4059-a2e6-8a771b4d318e',
            'port_id': '9577cafd-8e98-4059-a2e6-8a771b4d318e'}
        stack = utils.parse_stack(t)
        resource_defns = stack.t.resource_definitions(stack)
        res = router.RouterInterface('router_interface',
                                     resource_defns['router_interface'],
                                     stack)
        self.assertRaises(exception.ResourcePropertyConflict, res.validate)
        json['properties'] = {
            'router_id': 'ae478782-53c0-4434-ab16-49900c88016c',
            'port_id': '9577cafd-8e98-4059-a2e6-8a771b4d318e'}
        stack = utils.parse_stack(t)
        resource_defns = stack.t.resource_definitions(stack)
        res = router.RouterInterface('router_interface',
                                     resource_defns['router_interface'],
                                     stack)
        self.assertIsNone(res.validate())
        json['properties'] = {
            'router_id': 'ae478782-53c0-4434-ab16-49900c88016c',
            'subnet_id': '8577cafd-8e98-4059-a2e6-8a771b4d318e'}
        stack = utils.parse_stack(t)
        resource_defns = stack.t.resource_definitions(stack)
        res = router.RouterInterface('router_interface',
                                     resource_defns['router_interface'],
                                     stack)
        self.assertIsNone(res.validate())
        json['properties'] = {
            'router_id': 'ae478782-53c0-4434-ab16-49900c88016c'}
        stack = utils.parse_stack(t)
        resource_defns = stack.t.resource_definitions(stack)
        res = router.RouterInterface('router_interface',
                                     resource_defns['router_interface'],
                                     stack)
        ex = self.assertRaises(exception.PropertyUnspecifiedError,
                               res.validate)
        self.assertEqual("At least one of the following properties "
                         "must be specified: subnet, port.",
                         six.text_type(ex))

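    # NOTE: the remaining tests in this class cover the standalone
    # OS::Neutron::RouterGateway resource and the router's
    # external_gateway_info property, including the get_live_state()
    # translation of the live router back into template properties.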
    def test_gateway_router(self):
        def find_rsrc(resource, name_or_id, cmd_resource=None):
            id_mapping = {
                'router_id': '3e46229d-8fce-4733-819a-b5fe630550f8',
                'network': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'}
            return id_mapping.get(resource)

        self.find_rsrc_mock.side_effect = find_rsrc
        self.remove_gw_mock.side_effect = [
            None,
            qe.NeutronClientException(status_code=404)]
        self.stub_RouterConstraint_validate()
        t = template_format.parse(neutron_template)
        stack = utils.parse_stack(t)
        rsrc = self.create_gateway_router(
            t, stack, 'gateway', properties={
                'router_id': '3e46229d-8fce-4733-819a-b5fe630550f8',
                'network': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
            })
        self.add_gw_mock.assert_called_with(
            '3e46229d-8fce-4733-819a-b5fe630550f8',
            {'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'}
        )
        scheduler.TaskRunner(rsrc.delete)()
        rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again')
        scheduler.TaskRunner(rsrc.delete)()

    def _test_router_with_gateway(self, for_delete=False, for_update=False):
        t = template_format.parse(neutron_external_gateway_template)
        stack = utils.parse_stack(t)

        def find_rsrc(resource, name_or_id, cmd_resource=None):
            id_mapping = {
                'subnet': 'sub1234',
                'network': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'}
            return id_mapping.get(resource)

        self.find_rsrc_mock.side_effect = find_rsrc
        base_info = {
            "router": {
                "status": "BUILD",
                "external_gateway_info": None,
                "name": "Test Router",
                "admin_state_up": True,
                "tenant_id": "3e21026f2dc94372b105808c0e721661",
                "id": "3e46229d-8fce-4733-819a-b5fe630550f8",
            }
        }
        external_gw_info = {
            "network_id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766",
            "enable_snat": True,
            'external_fixed_ips': [{
                'ip_address': '192.168.10.99',
                'subnet_id': 'sub1234'
            }]}
        active_info = copy.deepcopy(base_info)
        active_info['router']['status'] = 'ACTIVE'
        active_info['router']['external_gateway_info'] = external_gw_info
        ex_gw_info1 = copy.deepcopy(external_gw_info)
        ex_gw_info1['network_id'] = '91e47a57-7508-46fe-afc9-fc454e8580e1'
        ex_gw_info1['enable_snat'] = False
        active_info1 = copy.deepcopy(active_info)
        active_info1['router']['external_gateway_info'] = ex_gw_info1
        self.create_mock.return_value = base_info
        if for_delete:
            self.show_mock.side_effect = [
                # create complete check
                active_info,
                # delete complete check
                qe.NeutronClientException(status_code=404)]
        elif for_update:
            self.show_mock.side_effect = [
                # create complete check
                active_info,
                # get attr after create
                active_info,
                # get attr after update
                active_info1]
        else:
            self.show_mock.side_effect = [
                # create complete check
                active_info,
                # get attr after create
                active_info]
        return t, stack

    def test_create_router_gateway_as_property(self):
        t, stack = self._test_router_with_gateway()
        rsrc = self.create_router(t, stack, 'router')
        self._assert_mock_call_create_with_router_gw()
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('3e46229d-8fce-4733-819a-b5fe630550f8', ref_id)
        gateway_info = rsrc.FnGetAtt('external_gateway_info')
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766',
                         gateway_info.get('network_id'))
        self.assertTrue(gateway_info.get('enable_snat'))
        self.assertEqual([{'subnet_id': 'sub1234',
                           'ip_address': '192.168.10.99'}],
                         gateway_info.get('external_fixed_ips'))

    def test_create_router_gateway_enable_snat(self):
        self.find_rsrc_mock.side_effect = [
            'fc68ea2c-b60b-4b4f-bd82-94ec81110766']
        router_info = {
            "router": {
                "name": "Test Router",
                "external_gateway_info": {
                    'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
                },
                "admin_state_up": True,
                'status': 'BUILD',
                'id': '3e46229d-8fce-4733-819a-b5fe630550f8'
            }
        }
        active_info = copy.deepcopy(router_info)
        active_info['router']['status'] = 'ACTIVE'
        self.create_mock.return_value = router_info
        self.show_mock.side_effect = [
            # create complete check
            active_info,
            # get attr
            active_info
        ]
        t = template_format.parse(neutron_external_gateway_template)
        t["resources"]["router"]["properties"]["external_gateway_info"].pop(
            "enable_snat")
        t["resources"]["router"]["properties"]["external_gateway_info"].pop(
            "external_fixed_ips")
        stack = utils.parse_stack(t)
        rsrc = self.create_router(t, stack, 'router')
        self.create_mock.assert_called_with(
            {
                "router": {
                    "name": "Test Router",
                    "external_gateway_info": {
                        'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
                    },
                    "admin_state_up": True,
                }
            }
        )
        rsrc.validate()
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('3e46229d-8fce-4733-819a-b5fe630550f8', ref_id)
        gateway_info = rsrc.FnGetAtt('external_gateway_info')
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766',
                         gateway_info.get('network_id'))

    def test_update_router_gateway_as_property(self):
        t, stack = self._test_router_with_gateway(for_update=True)
        rsrc = self.create_router(t, stack, 'router')
        self._assert_mock_call_create_with_router_gw()
        gateway_info = rsrc.FnGetAtt('external_gateway_info')
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766',
                         gateway_info.get('network_id'))
        self.assertTrue(gateway_info.get('enable_snat'))
        props = t['resources']['router']['properties'].copy()
        props['external_gateway_info'] = {
            "network": "other_public",
            "enable_snat": False
        }
        update_template = rsrc.t.freeze(properties=props)

        def find_rsrc_for_update(resource, name_or_id, cmd_resource=None):
            id_mapping = {
                'subnet': 'sub1234',
                'network': '91e47a57-7508-46fe-afc9-fc454e8580e1'}
            return id_mapping.get(resource)

        self.find_rsrc_mock.side_effect = find_rsrc_for_update
        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.update_mock.assert_called_with(
            '3e46229d-8fce-4733-819a-b5fe630550f8',
            {'router': {
                "external_gateway_info": {
                    'network_id': '91e47a57-7508-46fe-afc9-fc454e8580e1',
                    'enable_snat': False
                },
            }}
        )
        gateway_info = rsrc.FnGetAtt('external_gateway_info')
        self.assertEqual('91e47a57-7508-46fe-afc9-fc454e8580e1',
                         gateway_info.get('network_id'))
        self.assertFalse(gateway_info.get('enable_snat'))

    def _assert_mock_call_create_with_router_gw(self):
        self.create_mock.assert_called_with({
            "router": {
                "name": "Test Router",
                "external_gateway_info": {
                    'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
                    'enable_snat': True,
                    'external_fixed_ips': [{
                        'ip_address': '192.168.10.99',
                        'subnet_id': 'sub1234'
                    }]
                },
                "admin_state_up": True,
            }
        })

    def test_delete_router_gateway_as_property(self):
        t, stack = self._test_router_with_gateway(for_delete=True)
        rsrc = self.create_router(t, stack, 'router')
        self._assert_mock_call_create_with_router_gw()
        self.assertIsNone(scheduler.TaskRunner(rsrc.delete)())

    def test_router_get_live_state(self):
        tmpl = """
        heat_template_version: 2015-10-15
        resources:
          router:
            type: OS::Neutron::Router
            properties:
              external_gateway_info:
                network: public
                enable_snat: true
              value_specs:
                test_value_spec: spec_value
        """
        t = template_format.parse(tmpl)
        stack = utils.parse_stack(t)
        rsrc = stack['router']

        router_resp = {
            'status': 'ACTIVE',
            'external_gateway_info': {
                'network_id': '1ede231a-0b46-40fc-ab3b-8029446d0d1b',
                'enable_snat': True,
                'external_fixed_ips': [
                    {'subnet_id': '8eea1723-6de7-4255-9f8a-a0ce0db8b995',
                     'ip_address': '10.0.3.3'}]
            },
            'name': 'er-router-naqzmqnzk4ej',
            'admin_state_up': True,
            'tenant_id': '30f466e3d14b4251853899f9c26e2b66',
            'distributed': False,
            'routes': [],
            'ha': False,
            'id': 'b047ff06-487d-48d7-a735-a54e2fd836c2',
            'test_value_spec': 'spec_value'
        }
        rsrc.client().show_router = mock.MagicMock(
            return_value={'router': router_resp})
        rsrc.client().list_l3_agent_hosting_routers = mock.MagicMock(
            return_value={'agents': [{'id': '1234'}, {'id': '5678'}]})

        reality = rsrc.get_live_state(rsrc.properties)
        expected = {
            'external_gateway_info': {
                'network': '1ede231a-0b46-40fc-ab3b-8029446d0d1b',
                'enable_snat': True
            },
            'admin_state_up': True,
            'value_specs': {
                'test_value_spec': 'spec_value'
            },
            'l3_agent_ids': ['1234', '5678']
        }
        self.assertEqual(set(expected.keys()), set(reality.keys()))
        for key in expected:
            if key == 'external_gateway_info':
                for info in expected[key]:
                    self.assertEqual(expected[key][info], reality[key][info])
            self.assertEqual(expected[key], reality[key])
heat-10.0.2/heat/tests/openstack/neutron/test_neutron_network_gateway.py0000666000175000017500000005140013343562351026705 0ustar zuulzuul00000000000000#
#    Copyright 2013 NTT Corp.
#    All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

from neutronclient.common import exceptions as qe
from neutronclient.neutron import v2_0 as neutronV20
from neutronclient.v2_0 import client as neutronclient
import six

from heat.common import exception
from heat.common import template_format
from heat.common import timeutils
from heat.engine.resources.openstack.neutron import network_gateway
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils

gw_template_deprecated = '''
heat_template_version: 2015-04-30
description: Template to test Network Gateway resource
resources:
  NetworkGateway:
    type: OS::Neutron::NetworkGateway
    properties:
      name: NetworkGateway
      devices:
        - id: e52148ca-7db9-4ec3-abe6-2c7c0ff316eb
          interface_name: breth1
      connections:
        - network_id: 6af055d3-26f6-48dd-a597-7611d7e58d35
          segmentation_type: vlan
          segmentation_id: 10
'''

gw_template = '''
heat_template_version: 2015-04-30
description: Template to test Network Gateway resource
resources:
  NetworkGateway:
    type: OS::Neutron::NetworkGateway
    properties:
      name: NetworkGateway
      devices:
        - id: e52148ca-7db9-4ec3-abe6-2c7c0ff316eb
          interface_name: breth1
      connections:
        - network: 6af055d3-26f6-48dd-a597-7611d7e58d35
          segmentation_type: vlan
          segmentation_id: 10
'''

sng = {
    'network_gateway': {
        'id': 'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37',
        'name': 'NetworkGateway',
        'default': False,
        'tenant_id': '96ba52dc-c5c5-44c6-9a9d-d3ba1a03f77f',
        'devices': [{
            'id': 'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
            'interface_name': 'breth1'}],
        'ports': [{
            'segmentation_type': 'vlan',
            'port_id': '32acc49c-899e-44ea-8177-6f4157e12eb4',
            'segmentation_id': 10}]
    }
}


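# NOTE: unlike the router tests above, which patch individual client methods
# with mock, this suite still uses the mox-style record/replay pattern: every
# neutronclient call is recorded first, then self.m.ReplayAll() arms the
# expectations and self.m.VerifyAll() checks them at the end of each test.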
class NeutronNetworkGatewayTest(common.HeatTestCase):
    def setUp(self):
        super(NeutronNetworkGatewayTest, self).setUp()
        self.m.StubOutWithMock(neutronclient.Client,
                               'create_network_gateway')
        self.m.StubOutWithMock(neutronclient.Client,
                               'show_network_gateway')
        self.m.StubOutWithMock(neutronclient.Client,
                               'delete_network_gateway')
        self.m.StubOutWithMock(neutronclient.Client,
                               'connect_network_gateway')
        self.m.StubOutWithMock(neutronclient.Client,
                               'update_network_gateway')
        self.m.StubOutWithMock(neutronclient.Client,
                               'disconnect_network_gateway')
        self.m.StubOutWithMock(neutronclient.Client, 'list_networks')
        self.m.StubOutWithMock(timeutils, 'retry_backoff_delay')
        self.patchobject(neutronV20, 'find_resourceid_by_name_or_id',
                         return_value='6af055d3-26f6-48dd-a597-7611d7e58d35')

    def mock_create_fail_network_not_found_delete_success(self):
        neutronclient.Client.create_network_gateway({
            'network_gateway': {
                'name': u'NetworkGateway',
                'devices': [{'id': u'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
                             'interface_name': u'breth1'}]
            }
        }).AndReturn({
            'network_gateway': {
                'id': 'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37',
                'name': 'NetworkGateway',
                'default': False,
                'tenant_id': '96ba52dc-c5c5-44c6-9a9d-d3ba1a03f77f',
                'devices': [{
                    'id': 'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
                    'interface_name': 'breth1'}]
            }
        })
        neutronclient.Client.disconnect_network_gateway(
            'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'segmentation_id': 10,
                'segmentation_type': u'vlan'
            }
        ).AndReturn(None)
        # mock a successful delete of the network_gateway
        neutronclient.Client.delete_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37'
        ).AndReturn(None)
        neutronclient.Client.show_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37'
        ).AndRaise(qe.NeutronClientException(status_code=404))

        t = template_format.parse(gw_template)
        self.stack = utils.parse_stack(t)
        resource_defns = self.stack.t.resource_definitions(self.stack)
        rsrc = network_gateway.NetworkGateway(
            'test_network_gateway',
            resource_defns['NetworkGateway'], self.stack)
        return rsrc

    def prepare_create_network_gateway(self, resolve_neutron=True):
        neutronclient.Client.create_network_gateway({
            'network_gateway': {
                'name': u'NetworkGateway',
                'devices': [{'id': u'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
                             'interface_name': u'breth1'}]
            }
        }).AndReturn({
            'network_gateway': {
                'id': 'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37',
                'name': 'NetworkGateway',
                'default': False,
                'tenant_id': '96ba52dc-c5c5-44c6-9a9d-d3ba1a03f77f',
                'devices': [{
                    'id': 'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
                    'interface_name': 'breth1'}]
            }
        })
        neutronclient.Client.connect_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'segmentation_id': 10,
                'segmentation_type': u'vlan'
            }
        ).AndReturn({
            'connection_info': {
                'network_gateway_id': u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37',
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'port_id': u'32acc49c-899e-44ea-8177-6f4157e12eb4'
            }
        })
        self.stub_NetworkConstraint_validate()
        if resolve_neutron:
            t = template_format.parse(gw_template)
        else:
            t = template_format.parse(gw_template_deprecated)
        self.stack = utils.parse_stack(t)
        resource_defns = self.stack.t.resource_definitions(self.stack)
        rsrc = network_gateway.NetworkGateway(
            'test_network_gateway',
            resource_defns['NetworkGateway'], self.stack)
        return rsrc
    def _test_network_gateway_create(self, resolve_neutron=True):
        rsrc = self.prepare_create_network_gateway(resolve_neutron)
        neutronclient.Client.disconnect_network_gateway(
            'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'segmentation_id': 10,
                'segmentation_type': u'vlan'
            }
        ).AndReturn(None)
        neutronclient.Client.disconnect_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'segmentation_id': 10,
                'segmentation_type': u'vlan'
            }
        ).AndReturn(qe.NeutronClientException(status_code=404))
        neutronclient.Client.delete_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37'
        ).AndReturn(None)
        neutronclient.Client.show_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37'
        ).AndReturn(sng)
        timeutils.retry_backoff_delay(1, jitter_max=2.0).AndReturn(0.01)
        neutronclient.Client.delete_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37'
        ).AndReturn(None)
        neutronclient.Client.show_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37'
        ).AndRaise(qe.NeutronClientException(status_code=404))
        neutronclient.Client.disconnect_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'segmentation_id': 10,
                'segmentation_type': u'vlan'
            }
        ).AndReturn(qe.NeutronClientException(status_code=404))
        neutronclient.Client.delete_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37'
        ).AndRaise(qe.NeutronClientException(status_code=404))
        self.m.ReplayAll()

        rsrc.validate()
        scheduler.TaskRunner(rsrc.create)()
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)

        ref_id = rsrc.FnGetRefId()
        self.assertEqual(u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', ref_id)
        self.assertRaises(
            exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'Foo')

        self.assertIsNone(scheduler.TaskRunner(rsrc.delete)())
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again')
        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_network_gateway_create_deprecated(self):
        self._test_network_gateway_create(resolve_neutron=False)

    def test_network_gateway_create(self):
        self._test_network_gateway_create()

    def test_network_gateway_create_fail_delete_success(self):
        # If the network gateway was created successfully but never connected
        # to a network, it can still be deleted cleanly without leaving a
        # stale network_gateway behind.
        rsrc = self.mock_create_fail_network_not_found_delete_success()
        self.stub_NetworkConstraint_validate()
        self.m.ReplayAll()

        rsrc.validate()
        self.assertRaises(exception.ResourceFailure,
                          scheduler.TaskRunner(rsrc.create))
        self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
        ref_id = rsrc.FnGetRefId()
        self.assertEqual(u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', ref_id)
        self.assertIsNone(scheduler.TaskRunner(rsrc.delete)())
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()
    def test_network_gateway_update(self):
        rsrc = self.prepare_create_network_gateway()
        neutronclient.Client.update_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_gateway': {
                    'name': u'NetworkGatewayUpdate'
                }
            }
        ).AndReturn(None)
        neutronclient.Client.disconnect_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'segmentation_id': 10,
                'segmentation_type': u'vlan'
            }
        ).AndReturn(None)
        neutronclient.Client.connect_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'segmentation_id': 0,
                'segmentation_type': u'flat'
            }
        ).AndReturn({
            'connection_info': {
                'network_gateway_id': u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37',
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'port_id': u'aa800972-f6be-4c65-8453-9ab31834bf80'
            }
        })
        neutronclient.Client.disconnect_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'segmentation_id': 0,
                'segmentation_type': u'flat'
            }
        ).AndRaise(qe.NeutronClientException(status_code=404))
        neutronclient.Client.connect_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'segmentation_id': 1,
                'segmentation_type': u'flat'
            }
        ).AndReturn({
            'connection_info': {
                'network_gateway_id': u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37',
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'port_id': u'aa800972-f6be-4c65-8453-9ab31834bf80'
            }
        })
        neutronclient.Client.disconnect_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'segmentation_id': 1,
                'segmentation_type': u'flat'
            }
        ).AndReturn(None)
        neutronclient.Client.delete_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37'
        ).AndReturn(None)
        neutronclient.Client.create_network_gateway({
            'network_gateway': {
                'name': u'NetworkGatewayUpdate',
                'devices': [{'id': u'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
                             'interface_name': u'breth2'}]
            }
        }).AndReturn({
            'network_gateway': {
                'id': 'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37',
                'name': 'NetworkGateway',
                'default': False,
                'tenant_id': '96ba52dc-c5c5-44c6-9a9d-d3ba1a03f77f',
                'devices': [{
                    'id': 'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
                    'interface_name': 'breth2'}]
            }
        })
        neutronclient.Client.connect_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37', {
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'segmentation_id': 1,
                'segmentation_type': u'flat'
            }
        ).AndReturn({
            'connection_info': {
                'network_gateway_id': u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37',
                'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                'port_id': u'aa800972-f6be-4c65-8453-9ab31834bf80'
            }
        })
        self.m.ReplayAll()

        rsrc.validate()
        scheduler.TaskRunner(rsrc.create)()
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)

        # update name
        snippet_for_update1 = rsrc_defn.ResourceDefinition(
            rsrc.name, rsrc.type(), {
                'name': u'NetworkGatewayUpdate',
                'devices': [{
                    'id': u'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
                    'interface_name': u'breth1'}],
                'connections': [{
                    'network': '6af055d3-26f6-48dd-a597-7611d7e58d35',
                    'segmentation_type': 'vlan',
                    'segmentation_id': 10}]
            })
        scheduler.TaskRunner(rsrc.update, snippet_for_update1)()

        # update connections
        snippet_for_update2 = rsrc_defn.ResourceDefinition(
            rsrc.name, rsrc.type(), {
                'name': u'NetworkGatewayUpdate',
                'devices': [{
                    'id': u'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
                    'interface_name': u'breth1'}],
                'connections': [{
                    'network': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                    'segmentation_type': u'flat',
                    'segmentation_id': 0}]
            })
        scheduler.TaskRunner(rsrc.update, snippet_for_update2,
                             snippet_for_update1)()

        # update connections once more
        snippet_for_update3 = rsrc_defn.ResourceDefinition(
            rsrc.name, rsrc.type(), {
                'name': u'NetworkGatewayUpdate',
                'devices': [{
                    'id': u'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
                    'interface_name': u'breth1'}],
                'connections': [{
                    'network': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                    'segmentation_type': u'flat',
                    'segmentation_id': 1}]
            })
        scheduler.TaskRunner(rsrc.update, snippet_for_update3,
                             snippet_for_update2)()

        # update devices
        snippet_for_update4 = rsrc_defn.ResourceDefinition(
            rsrc.name, rsrc.type(), {
                'name': u'NetworkGatewayUpdate',
                'devices': [{
                    'id': u'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
                    'interface_name': u'breth2'}],
                'connections': [{
                    'network_id': u'6af055d3-26f6-48dd-a597-7611d7e58d35',
                    'segmentation_type': u'vlan',
                    'segmentation_id': 10}]
            })
        scheduler.TaskRunner(rsrc.update, snippet_for_update4,
                             snippet_for_update3)()
        self.m.VerifyAll()
    def test_network_gatway_create_failed(self):
        neutronclient.Client.create_network_gateway({
            'network_gateway': {
                'name': u'NetworkGateway',
                'devices': [{
                    'id': u'e52148ca-7db9-4ec3-abe6-2c7c0ff316eb',
                    'interface_name': u'breth1'}]
            }
        }).AndRaise(qe.NeutronClientException)
        self.stub_NetworkConstraint_validate()
        self.m.ReplayAll()

        t = template_format.parse(gw_template)
        stack = utils.parse_stack(t)
        resource_defns = stack.t.resource_definitions(stack)
        rsrc = network_gateway.NetworkGateway(
            'network_gateway', resource_defns['NetworkGateway'], stack)
        error = self.assertRaises(exception.ResourceFailure,
                                  scheduler.TaskRunner(rsrc.create))
        self.assertEqual(
            'NeutronClientException: resources.network_gateway: '
            'An unknown exception occurred.',
            six.text_type(error))
        self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
        self.assertIsNone(scheduler.TaskRunner(rsrc.delete)())
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_gateway_validate_failed_with_vlan(self):
        t = template_format.parse(gw_template)
        del t['resources']['NetworkGateway']['properties'][
            'connections'][0]['segmentation_id']
        stack = utils.parse_stack(t)
        resource_defns = stack.t.resource_definitions(stack)
        rsrc = network_gateway.NetworkGateway(
            'test_network_gateway',
            resource_defns['NetworkGateway'], stack)
        self.stub_NetworkConstraint_validate()
        self.m.ReplayAll()

        error = self.assertRaises(exception.StackValidationFailed,
                                  scheduler.TaskRunner(rsrc.validate))
        self.assertEqual(
            'segmentation_id must be specified for using vlan',
            six.text_type(error))
        self.m.VerifyAll()

    def test_gateway_validate_failed_with_flat(self):
        t = template_format.parse(gw_template)
        t['resources']['NetworkGateway']['properties'][
            'connections'][0]['segmentation_type'] = 'flat'
        stack = utils.parse_stack(t)
        resource_defns = stack.t.resource_definitions(stack)
        rsrc = network_gateway.NetworkGateway(
            'test_network_gateway',
            resource_defns['NetworkGateway'], stack)
        self.stub_NetworkConstraint_validate()
        self.m.ReplayAll()

        error = self.assertRaises(exception.StackValidationFailed,
                                  scheduler.TaskRunner(rsrc.validate))
        self.assertEqual(
            'segmentation_id cannot be specified except 0 for using flat',
            six.text_type(error))
        self.m.VerifyAll()

    def test_network_gateway_attribute(self):
        neutronclient.Client.show_network_gateway(
            u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37'
        ).MultipleTimes().AndReturn(sng)
        rsrc = self.prepare_create_network_gateway()
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.create)()
        self.assertEqual(u'ed4c03b9-8251-4c09-acc4-e59ee9e6aa37',
                         rsrc.FnGetRefId())
        self.assertFalse(rsrc.FnGetAtt('default'))
        error = self.assertRaises(exception.InvalidTemplateAttribute,
                                  rsrc.FnGetAtt, 'hoge')
        self.assertEqual(
            'The Referenced Attribute (test_network_gateway hoge) is '
            'incorrect.', six.text_type(error))
        self.m.VerifyAll()
heat-10.0.2/heat/tests/openstack/neutron/test_neutron_security_group_rule.py0000666000175000017500000000767113343562352027615 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from heat.common import exception
from heat.common import template_format
from heat.engine.resources.openstack.neutron import security_group_rule
from heat.tests import common
from heat.tests.openstack.neutron import inline_templates
from heat.tests import utils


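# NOTE: this suite replaces the resource's client with a plain
# mock.MagicMock instead of recording mox expectations, so each test invokes
# the handler directly and asserts afterwards with assert_called_with().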
class SecurityGroupRuleTest(common.HeatTestCase):

    def test_resource_mapping(self):
        mapping = security_group_rule.resource_mapping()
        self.assertEqual(mapping['OS::Neutron::SecurityGroupRule'],
                         security_group_rule.SecurityGroupRule)

    @mock.patch('heat.engine.clients.os.neutron.'
                'NeutronClientPlugin.has_extension', return_value=True)
    def _create_stack(self, ext_func,
                      tmpl=inline_templates.SECURITY_GROUP_RULE_TEMPLATE):
        self.t = template_format.parse(tmpl)
        self.stack = utils.parse_stack(self.t)
        self.sg_rule = self.stack['security_group_rule']
        self.neutron_client = mock.MagicMock()
        self.sg_rule.client = mock.MagicMock(
            return_value=self.neutron_client)
        self.sg_rule.client_plugin().find_resourceid_by_name_or_id = (
            mock.MagicMock(return_value='123'))

    def test_create(self):
        self._create_stack()
        self.neutron_client.create_security_group_rule.return_value = {
            'security_group_rule': {'id': '1234'}}
        expected = {
            'security_group_rule': {
                'security_group_id': u'123',
                'description': u'test description',
                'remote_group_id': u'123',
                'protocol': u'tcp',
                'port_range_min': '100',
                'direction': 'ingress',
                'ethertype': 'IPv4'
            }
        }

        self.sg_rule.handle_create()

        self.neutron_client.create_security_group_rule.assert_called_with(
            expected)

    def test_validate_conflict_props(self):
        self.patchobject(security_group_rule.SecurityGroupRule,
                         'is_service_available',
                         return_value=(True, None))
        tmpl = inline_templates.SECURITY_GROUP_RULE_TEMPLATE
        tmpl += ' remote_ip_prefix: "123"'
        self._create_stack(tmpl=tmpl)

        self.assertRaises(exception.ResourcePropertyConflict,
                          self.sg_rule.validate)

    def test_validate_max_port_less_than_min_port(self):
        self.patchobject(security_group_rule.SecurityGroupRule,
                         'is_service_available',
                         return_value=(True, None))
        tmpl = inline_templates.SECURITY_GROUP_RULE_TEMPLATE
        tmpl += ' port_range_max: 50'
        self._create_stack(tmpl=tmpl)

        self.assertRaises(exception.StackValidationFailed,
                          self.sg_rule.validate)

    def test_show_resource(self):
        self._create_stack()
        self.sg_rule.resource_id_set('1234')
        self.neutron_client.show_security_group_rule.return_value = {
            'security_group_rule': {'id': '1234'}
        }

        self.assertEqual({'id': '1234'}, self.sg_rule._show_resource())
        self.neutron_client.show_security_group_rule.assert_called_with(
            '1234')

    def test_delete(self):
        self._create_stack()
        self.sg_rule.resource_id_set('1234')

        self.sg_rule.handle_delete()

        self.neutron_client.delete_security_group_rule.assert_called_with(
            '1234')
heat-10.0.2/heat/tests/openstack/neutron/test_neutron_subnetpool.py0000666000175000017500000003230113343562340025662 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from neutronclient.common import exceptions as qe
from neutronclient.neutron import v2_0 as neutronV20
from neutronclient.v2_0 import client as neutronclient
import six

from heat.common import exception
from heat.common import template_format
from heat.engine.clients.os import neutron
from heat.engine.resources.openstack.neutron import subnetpool
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.tests import common
from heat.tests.openstack.neutron import inline_templates
from heat.tests import utils


class NeutronSubnetPoolTest(common.HeatTestCase):
    def setUp(self):
        super(NeutronSubnetPoolTest, self).setUp()
        self.patchobject(neutron.NeutronClientPlugin, 'has_extension',
                         return_value=True)
        self.find_resource = self.patchobject(
            neutronV20, 'find_resourceid_by_name_or_id',
            return_value='new_test')

    def create_subnetpool(self, status='COMPLETE', tags=None):
        self.t = template_format.parse(inline_templates.SPOOL_TEMPLATE)
        if tags:
            self.t['resources']['sub_pool']['properties']['tags'] = tags
        self.stack = utils.parse_stack(self.t)
        resource_defns = self.stack.t.resource_definitions(self.stack)
        rsrc = subnetpool.SubnetPool('sub_pool',
                                     resource_defns['sub_pool'],
                                     self.stack)
        if status == 'FAILED':
            self.patchobject(neutronclient.Client, 'create_subnetpool',
                             side_effect=qe.NeutronClientException(
                                 status_code=500))
            error = self.assertRaises(exception.ResourceFailure,
                                      scheduler.TaskRunner(rsrc.create))
            self.assertEqual(
                'NeutronClientException: resources.sub_pool: '
                'An unknown exception occurred.',
                six.text_type(error))
        else:
            self.patchobject(neutronclient.Client, 'create_subnetpool',
                             return_value={'subnetpool': {
                                 'id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
                             }})
            scheduler.TaskRunner(rsrc.create)()
        self.assertEqual((rsrc.CREATE, status), rsrc.state)
        if tags:
            self.set_tag_mock.assert_called_once_with('subnetpools',
                                                      rsrc.resource_id,
                                                      {'tags': tags})
        return rsrc

    def test_validate_prefixlen_min_gt_max(self):
        self.t = template_format.parse(inline_templates.SPOOL_TEMPLATE)
        props = self.t['resources']['sub_pool']['properties']
        props['min_prefixlen'] = 28
        props['max_prefixlen'] = 24
        self.stack = utils.parse_stack(self.t)
        rsrc = self.stack['sub_pool']
        errMessage = ('Illegal prefix bounds: max_prefixlen=24, '
                      'min_prefixlen=28.')
        error = self.assertRaises(exception.StackValidationFailed,
                                  rsrc.validate)
        self.assertEqual(errMessage, six.text_type(error))

    def test_validate_prefixlen_default_gt_max(self):
        self.t = template_format.parse(inline_templates.SPOOL_TEMPLATE)
        props = self.t['resources']['sub_pool']['properties']
        props['default_prefixlen'] = 28
        props['max_prefixlen'] = 24
        self.stack = utils.parse_stack(self.t)
        rsrc = self.stack['sub_pool']
        errMessage = ('Illegal prefix bounds: max_prefixlen=24, '
                      'default_prefixlen=28.')
        error = self.assertRaises(exception.StackValidationFailed,
                                  rsrc.validate)
        self.assertEqual(errMessage, six.text_type(error))

    def test_validate_prefixlen_min_gt_default(self):
        self.t = template_format.parse(inline_templates.SPOOL_TEMPLATE)
        props = self.t['resources']['sub_pool']['properties']
        props['min_prefixlen'] = 28
        props['default_prefixlen'] = 24
        self.stack = utils.parse_stack(self.t)
        rsrc = self.stack['sub_pool']
        errMessage = ('Illegal prefix bounds: min_prefixlen=28, '
                      'default_prefixlen=24.')
        error = self.assertRaises(exception.StackValidationFailed,
                                  rsrc.validate)
        self.assertEqual(errMessage, six.text_type(error))

    def test_validate_minimal(self):
        self.t = template_format.parse(
            inline_templates.SPOOL_MINIMAL_TEMPLATE)
        self.stack = utils.parse_stack(self.t)
        rsrc = self.stack['sub_pool']
        self.assertIsNone(rsrc.validate())

    def test_create_subnetpool(self):
        rsrc = self.create_subnetpool()
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id)

    def test_create_subnetpool_with_tags(self):
        tags = ['for_test']
        self.set_tag_mock = self.patchobject(neutronclient.Client,
                                             'replace_tag')
        rsrc = self.create_subnetpool(tags=tags)
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id)

    def test_create_subnetpool_failed(self):
        self.create_subnetpool('FAILED')

    def test_delete_subnetpool(self):
        self.patchobject(neutronclient.Client, 'delete_subnetpool')
        rsrc = self.create_subnetpool()
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id)
        self.assertIsNone(scheduler.TaskRunner(rsrc.delete)())
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)

    def test_delete_subnetpool_not_found(self):
        self.patchobject(neutronclient.Client, 'delete_subnetpool',
                         side_effect=qe.NotFound(status_code=404))
        rsrc = self.create_subnetpool()
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id)
        self.assertIsNone(scheduler.TaskRunner(rsrc.delete)())
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)

    def test_delete_subnetpool_resource_id_none(self):
        delete_pool = self.patchobject(neutronclient.Client,
                                       'delete_subnetpool')
        rsrc = self.create_subnetpool()
        rsrc.resource_id = None
        self.assertIsNone(scheduler.TaskRunner(rsrc.delete)())
        delete_pool.assert_not_called()

    def test_update_subnetpool(self):
        update_subnetpool = self.patchobject(neutronclient.Client,
                                             'update_subnetpool')
        self.set_tag_mock = self.patchobject(neutronclient.Client,
                                             'replace_tag')
        old_tags = ['old_tag']
        rsrc = self.create_subnetpool(tags=old_tags)
        self.patchobject(rsrc, 'physical_resource_name',
                         return_value='the_new_sp')
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id)
        new_tags = ['new_tag']
        props = {
            'name': 'the_new_sp',
            'prefixes': [
                '10.1.0.0/16',
                '10.2.0.0/16'],
            'address_scope': 'new_test',
            'default_quota': '16',
            'default_prefixlen': '24',
            'min_prefixlen': '24',
            'max_prefixlen': '28',
            'is_default': False,
            'tags': new_tags
        }
        update_dict = props.copy()
        update_dict['name'] = 'the_new_sp'
        update_dict['address_scope_id'] = update_dict.pop('address_scope')
        update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(),
                                                      props)
        # with name
        self.assertIsNone(rsrc.handle_update(update_snippet, {}, props))
        self.set_tag_mock.assert_called_with('subnetpools',
                                             rsrc.resource_id,
                                             {'tags': new_tags})
        # without name
        props['name'] = None
        self.assertIsNone(rsrc.handle_update(update_snippet, {}, props))
        self.assertEqual(2, update_subnetpool.call_count)
        update_dict.pop('tags')
        update_subnetpool.assert_called_with(
            'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
            {'subnetpool': update_dict})

    def test_update_subnetpool_no_prop_diff(self):
        update_subnetpool = self.patchobject(neutronclient.Client,
                                             'update_subnetpool')
        rsrc = self.create_subnetpool()
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id)
        update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(),
                                                      rsrc.t._properties)
        self.assertIsNone(rsrc.handle_update(update_snippet, {}, {}))
        update_subnetpool.assert_not_called()

    def test_update_subnetpool_validate_prefixes(self):
        update_subnetpool = self.patchobject(neutronclient.Client,
                                             'update_subnetpool')
        rsrc = self.create_subnetpool()
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id)
        prefix_old = rsrc.properties['prefixes']
        props = {
            'name': 'the_new_sp',
            'prefixes': ['10.5.0.0/16']
        }
        prefix_new = props['prefixes']
        update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(),
                                                      props)
        errMessage = ('Property prefixes updated value %(value1)s '
                      'should be superset of existing value %(value2)s.'
                      % dict(value1=sorted(prefix_new),
                             value2=sorted(prefix_old)))
        error = self.assertRaises(exception.StackValidationFailed,
                                  rsrc.handle_update,
                                  update_snippet, {}, props)
        self.assertEqual(errMessage, six.text_type(error))
        update_subnetpool.assert_not_called()
        props = {
            'name': 'the_new_sp',
            'prefixes': ['10.0.0.0/8', '10.6.0.0/16'],
        }
        update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(),
                                                      props)
        self.assertIsNone(rsrc.handle_update(update_snippet, {}, props))
        update_subnetpool.assert_called_once_with(
            'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
            {'subnetpool': props})

    def test_update_subnetpool_update_address_scope(self):
        update_subnetpool = self.patchobject(neutronclient.Client,
                                             'update_subnetpool')
        rsrc = self.create_subnetpool()
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id)
        props = {
            'name': 'the_new_sp',
            'address_scope': 'new_test',
            'prefixes': ['10.0.0.0/8', '10.6.0.0/16'],
        }
        update_dict = {
            'name': 'the_new_sp',
            'address_scope_id': 'new_test',
            'prefixes': ['10.0.0.0/8', '10.6.0.0/16'],
        }
        update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(),
                                                      props)
        self.assertIsNone(rsrc.handle_update(update_snippet, {}, props))
        self.assertEqual(3, self.find_resource.call_count)
        update_subnetpool.assert_called_once_with(
            'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
            {'subnetpool': update_dict})

    def test_update_subnetpool_remove_address_scope(self):
        update_subnetpool = self.patchobject(neutronclient.Client,
                                             'update_subnetpool')
        rsrc = self.create_subnetpool()
        ref_id = rsrc.FnGetRefId()
        self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id)
        props = {
            'name': 'the_new_sp',
            'prefixes': ['10.0.0.0/8', '10.6.0.0/16'],
        }
        props_diff = {'address_scope': None}
        update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(),
                                                      props)
        self.assertIsNone(rsrc.handle_update(update_snippet, {}, props_diff))
        self.assertEqual(2, self.find_resource.call_count)
        update_subnetpool.assert_called_once_with(
            'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
            {'subnetpool': props_diff})
heat-10.0.2/heat/tests/openstack/neutron/test_address_scope.py0000666000175000017500000001312713343562340024541 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from heat.common import template_format
from heat.engine.clients.os import neutron
from heat.engine import rsrc_defn
from heat.engine import stack
from heat.engine import template
from heat.tests import common
from heat.tests import utils

address_scope_template = '''
heat_template_version: 2016-04-08
description: This template to define a neutron address scope.
resources:
  my_address_scope:
    type: OS::Neutron::AddressScope
    properties:
      shared: False
      tenant_id: d66c74c01d6c41b9846088c1ad9634d0
'''


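# NOTE: these tests exercise the resource handlers (handle_create,
# handle_update, handle_delete) directly against a MagicMock client rather
# than running them through scheduler.TaskRunner.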
class NeutronAddressScopeTest(common.HeatTestCase):
    def setUp(self):
        super(NeutronAddressScopeTest, self).setUp()

        self.ctx = utils.dummy_context()
        tpl = template_format.parse(address_scope_template)
        self.stack = stack.Stack(
            self.ctx,
            'neutron_address_scope_test',
            template.Template(tpl)
        )
        self.neutronclient = mock.MagicMock()
        self.patchobject(neutron.NeutronClientPlugin, 'has_extension',
                         return_value=True)
        self.my_address_scope = self.stack['my_address_scope']
        self.my_address_scope.client = mock.MagicMock(
            return_value=self.neutronclient)
        self.patchobject(self.my_address_scope, 'physical_resource_name',
                         return_value='test_address_scope')

    def test_address_scope_handle_create(self):
        addrs = {
            'address_scope': {
                'id': '9c1eb3fe-7bba-479d-bd43-1d497e53c384',
                'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0',
                'shared': False,
                'ip_version': 4
            }
        }
        create_props = {'name': 'test_address_scope',
                        'shared': False,
                        'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0',
                        'ip_version': 4}
        self.neutronclient.create_address_scope.return_value = addrs
        self.my_address_scope.handle_create()
        self.assertEqual('9c1eb3fe-7bba-479d-bd43-1d497e53c384',
                         self.my_address_scope.resource_id)
        self.neutronclient.create_address_scope.assert_called_once_with(
            {'address_scope': create_props}
        )

    def test_address_scope_handle_delete(self):
        addrs_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'
        self.my_address_scope.resource_id = addrs_id
        self.neutronclient.delete_address_scope.return_value = None
        self.assertIsNone(self.my_address_scope.handle_delete())
        self.neutronclient.delete_address_scope.assert_called_once_with(
            self.my_address_scope.resource_id)

    def test_address_scope_handle_delete_not_found(self):
        addrs_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'
        self.my_address_scope.resource_id = addrs_id
        not_found = self.neutronclient.NotFound
        self.neutronclient.delete_address_scope.side_effect = not_found
        self.assertIsNone(self.my_address_scope.handle_delete())
        self.neutronclient.delete_address_scope.assert_called_once_with(
            self.my_address_scope.resource_id)

    def test_address_scope_handle_delete_resource_id_is_none(self):
        self.my_address_scope.resource_id = None
        self.assertIsNone(self.my_address_scope.handle_delete())
        self.assertEqual(0,
                         self.neutronclient.delete_address_scope.call_count)

    def test_address_scope_handle_update(self):
        addrs_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151'
        self.my_address_scope.resource_id = addrs_id
        props = {
            'name': 'test_address_scope',
            'shared': True
        }
        update_dict = props.copy()
        update_snippet = rsrc_defn.ResourceDefinition(
            self.my_address_scope.name,
            self.my_address_scope.type(),
            props)
        # with name
        self.my_address_scope.handle_update(
            json_snippet=update_snippet,
            tmpl_diff={},
            prop_diff=props)
        # without name
        props['name'] = None
        self.my_address_scope.handle_update(
            json_snippet=update_snippet,
            tmpl_diff={},
            prop_diff=props)
        self.assertEqual(2,
                         self.neutronclient.update_address_scope.call_count)
        self.neutronclient.update_address_scope.assert_called_with(
            addrs_id, {'address_scope': update_dict})

    def test_address_scope_get_attr(self):
        self.my_address_scope.resource_id = 'addrs_id'
        addrs = {
            'address_scope': {
                'name': 'test_addrs',
                'id': '9c1eb3fe-7bba-479d-bd43-1d497e53c384',
                'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0',
                'shared': True,
                'ip_version': 4
            }
        }
        self.neutronclient.show_address_scope.return_value = addrs
        self.assertEqual(addrs['address_scope'],
                         self.my_address_scope.FnGetAtt('show'))
        self.neutronclient.show_address_scope.assert_called_once_with(
            self.my_address_scope.resource_id)
heat-10.0.2/heat/tests/openstack/neutron/test_neutron_firewall.py0000666000175000017500000005753413343562351025310 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from neutronclient.common import exceptions
from neutronclient.v2_0 import client as neutronclient
from oslo_config import cfg
import six

from heat.common import exception
from heat.common import template_format
from heat.engine.clients.os import neutron
from heat.engine.resources.openstack.neutron import firewall
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils

firewall_template = '''
heat_template_version: 2015-04-30
description: Template to test neutron firewall resource
resources:
  firewall:
    type: OS::Neutron::Firewall
    properties:
      name: test-firewall
      firewall_policy_id: policy-id
      admin_state_up: True
      shared: True
      value_specs:
        router_ids:
          - router_1
          - router_2
'''

firewall_policy_template = '''
heat_template_version: 2015-04-30
description: Template to test neutron firewall policy resource
resources:
  firewall_policy:
    type: OS::Neutron::FirewallPolicy
    properties:
      name: test-firewall-policy
      shared: True
      audited: True
      firewall_rules:
        - rule-id-1
        - rule-id-2
'''

firewall_rule_template = '''
heat_template_version: 2015-04-30
description: Template to test neutron firewall rule resource
resources:
  firewall_rule:
    type: OS::Neutron::FirewallRule
    properties:
      name: test-firewall-rule
      shared: True
      protocol: tcp
      action: allow
      enabled: True
      ip_version: 4
'''


class FirewallTest(common.HeatTestCase):

    def setUp(self):
        super(FirewallTest, self).setUp()
        self.m.StubOutWithMock(neutronclient.Client, 'create_firewall')
        self.m.StubOutWithMock(neutronclient.Client, 'delete_firewall')
        self.m.StubOutWithMock(neutronclient.Client, 'show_firewall')
        self.m.StubOutWithMock(neutronclient.Client, 'update_firewall')
        self.patchobject(neutron.NeutronClientPlugin, 'has_extension',
                         return_value=True)

    def create_firewall(self, value_specs=True):
        snippet = template_format.parse(firewall_template)
        if not value_specs:
            del snippet['resources']['firewall']['properties']['value_specs']
            neutronclient.Client.create_firewall({
                'firewall': {
                    'name': 'test-firewall', 'admin_state_up': True,
                    'firewall_policy_id': 'policy-id', 'shared': True}}
            ).AndReturn({'firewall': {'id': '5678'}})
        else:
            neutronclient.Client.create_firewall({
                'firewall': {
                    'name': 'test-firewall', 'admin_state_up': True,
                    'router_ids': ['router_1', 'router_2'],
                    'firewall_policy_id': 'policy-id', 'shared': True}}
            ).AndReturn({'firewall': {'id': '5678'}})
        self.stack = utils.parse_stack(snippet)
        resource_defns = self.stack.t.resource_definitions(self.stack)
        self.fw_props = snippet['resources']['firewall']['properties']
        return firewall.Firewall(
            'firewall', resource_defns['firewall'], self.stack)

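    # NOTE: create_firewall() above records only the create_firewall
    # expectation; each test below records its remaining show/update/delete
    # expectations on top of it before calling self.m.ReplayAll().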
    def test_create(self):
        rsrc = self.create_firewall()
        neutronclient.Client.show_firewall('5678').AndReturn(
            {'firewall': {'status': 'ACTIVE'}})
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.create)()
        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_create_failed_error_status(self):
        cfg.CONF.set_override('action_retry_limit', 0)
        rsrc = self.create_firewall()
        neutronclient.Client.show_firewall('5678').AndReturn(
            {'firewall': {'status': 'PENDING_CREATE'}})
        neutronclient.Client.show_firewall('5678').AndReturn(
            {'firewall': {'status': 'ERROR'}})
        self.m.ReplayAll()

        error = self.assertRaises(exception.ResourceFailure,
                                  scheduler.TaskRunner(rsrc.create))
        self.assertEqual(
            'ResourceInError: resources.firewall: '
            'Went to status ERROR due to "Error in Firewall"',
            six.text_type(error))
        self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
        self.m.VerifyAll()

    def test_create_failed(self):
        neutronclient.Client.create_firewall({
            'firewall': {
                'name': 'test-firewall', 'admin_state_up': True,
                'router_ids': ['router_1', 'router_2'],
                'firewall_policy_id': 'policy-id', 'shared': True}}
        ).AndRaise(exceptions.NeutronClientException())
        self.m.ReplayAll()

        snippet = template_format.parse(firewall_template)
        stack = utils.parse_stack(snippet)
        resource_defns = stack.t.resource_definitions(stack)
        rsrc = firewall.Firewall(
            'firewall', resource_defns['firewall'], stack)
        error = self.assertRaises(exception.ResourceFailure,
                                  scheduler.TaskRunner(rsrc.create))
        self.assertEqual(
            'NeutronClientException: resources.firewall: '
            'An unknown exception occurred.',
            six.text_type(error))
        self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
        self.m.VerifyAll()

    def test_delete(self):
        rsrc = self.create_firewall()
        neutronclient.Client.show_firewall('5678').AndReturn(
            {'firewall': {'status': 'ACTIVE'}})
        neutronclient.Client.delete_firewall('5678')
        neutronclient.Client.show_firewall('5678').AndRaise(
            exceptions.NeutronClientException(status_code=404))
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.create)()
        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_delete_already_gone(self):
        neutronclient.Client.delete_firewall('5678').AndRaise(
            exceptions.NeutronClientException(status_code=404))
        rsrc = self.create_firewall()
        neutronclient.Client.show_firewall('5678').AndReturn(
            {'firewall': {'status': 'ACTIVE'}})
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.create)()
        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_delete_failed(self):
        neutronclient.Client.delete_firewall('5678').AndRaise(
            exceptions.NeutronClientException(status_code=400))
        rsrc = self.create_firewall()
        neutronclient.Client.show_firewall('5678').AndReturn(
            {'firewall': {'status': 'ACTIVE'}})
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.create)()
        error = self.assertRaises(exception.ResourceFailure,
                                  scheduler.TaskRunner(rsrc.delete))
        self.assertEqual(
            'NeutronClientException: resources.firewall: '
            'An unknown exception occurred.',
            six.text_type(error))
        self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)
        self.m.VerifyAll()

    def test_attribute(self):
        rsrc = self.create_firewall()
        neutronclient.Client.show_firewall('5678').AndReturn(
            {'firewall': {'status': 'ACTIVE'}})
        neutronclient.Client.show_firewall('5678').MultipleTimes(
        ).AndReturn(
            {'firewall': {'admin_state_up': True,
                          'firewall_policy_id': 'policy-id',
                          'shared': True}})
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.create)()
        self.assertIs(True, rsrc.FnGetAtt('admin_state_up'))
self.assertEqual('This attribute is currently unsupported in neutron ' 'firewall resource.', rsrc.FnGetAtt('shared')) self.assertEqual('policy-id', rsrc.FnGetAtt('firewall_policy_id')) self.m.VerifyAll() def test_attribute_failed(self): rsrc = self.create_firewall() neutronclient.Client.show_firewall('5678').AndReturn( {'firewall': {'status': 'ACTIVE'}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'subnet_id') self.assertEqual( 'The Referenced Attribute (firewall subnet_id) is ' 'incorrect.', six.text_type(error)) self.m.VerifyAll() def test_update(self): rsrc = self.create_firewall() neutronclient.Client.show_firewall('5678').AndReturn( {'firewall': {'status': 'ACTIVE'}}) neutronclient.Client.update_firewall( '5678', {'firewall': {'admin_state_up': False}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = self.fw_props.copy() props['admin_state_up'] = False update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() def test_update_with_value_specs(self): rsrc = self.create_firewall(value_specs=False) neutronclient.Client.show_firewall('5678').AndReturn( {'firewall': {'status': 'ACTIVE'}}) neutronclient.Client.update_firewall( '5678', {'firewall': {'router_ids': ['router_1', 'router_2']}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() prop_diff = { 'value_specs': { 'router_ids': ['router_1', 'router_2'] } } update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), prop_diff) rsrc.handle_update(update_snippet, {}, prop_diff) self.m.VerifyAll() def test_get_live_state(self): rsrc = self.create_firewall(value_specs=True) rsrc.client().show_firewall = mock.Mock(return_value={ 'firewall': { 'status': 'ACTIVE', 'router_ids': ['router_1', 'router_2'], 'name': 'firewall-firewall-pwakkqdrcl7z', 'admin_state_up': True, 'tenant_id': 'df49ea64e87c43a792a510698364f03e', 'firewall_policy_id': '680eb26d-3eea-40be-b484-1476e4c7c1b3', 'id': '11425cd4-41b6-4fd4-97aa-17629c63de61', 'description': '' } }) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() reality = rsrc.get_live_state(rsrc.properties) expected = { 'value_specs': { 'router_ids': ['router_1', 'router_2'] }, 'name': 'firewall-firewall-pwakkqdrcl7z', 'admin_state_up': True, 'firewall_policy_id': '680eb26d-3eea-40be-b484-1476e4c7c1b3', 'description': '' } self.assertEqual(expected, reality) self.m.VerifyAll() class FirewallPolicyTest(common.HeatTestCase): def setUp(self): super(FirewallPolicyTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_firewall_policy') self.m.StubOutWithMock(neutronclient.Client, 'delete_firewall_policy') self.m.StubOutWithMock(neutronclient.Client, 'show_firewall_policy') self.m.StubOutWithMock(neutronclient.Client, 'update_firewall_policy') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_firewall_policy(self): neutronclient.Client.create_firewall_policy({ 'firewall_policy': { 'name': 'test-firewall-policy', 'shared': True, 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2']}} ).AndReturn({'firewall_policy': {'id': '5678'}}) snippet = template_format.parse(firewall_policy_template) self.stack = utils.parse_stack(snippet) self.tmpl = snippet resource_defns = self.stack.t.resource_definitions(self.stack) return firewall.FirewallPolicy( 'firewall_policy', resource_defns['firewall_policy'], self.stack) def test_create(self): rsrc = self.create_firewall_policy() 
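# Unlike OS::Neutron::Firewall, the policy resource has no status field to
# poll, so test_create needs no show_firewall_policy expectation before
# ReplayAll() -- create_firewall_policy() recorded the only expected call.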
self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_failed(self): neutronclient.Client.create_firewall_policy({ 'firewall_policy': { 'name': 'test-firewall-policy', 'shared': True, 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2']}} ).AndRaise(exceptions.NeutronClientException()) self.m.ReplayAll() snippet = template_format.parse(firewall_policy_template) stack = utils.parse_stack(snippet) resource_defns = stack.t.resource_definitions(stack) rsrc = firewall.FirewallPolicy( 'firewall_policy', resource_defns['firewall_policy'], stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual( 'NeutronClientException: resources.firewall_policy: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_delete(self): neutronclient.Client.delete_firewall_policy('5678') neutronclient.Client.show_firewall_policy('5678').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_firewall_policy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_already_gone(self): neutronclient.Client.delete_firewall_policy('5678').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_firewall_policy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_failed(self): neutronclient.Client.delete_firewall_policy('5678').AndRaise( exceptions.NeutronClientException(status_code=400)) rsrc = self.create_firewall_policy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual( 'NeutronClientException: resources.firewall_policy: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_attribute(self): rsrc = self.create_firewall_policy() neutronclient.Client.show_firewall_policy('5678').MultipleTimes( ).AndReturn( {'firewall_policy': {'audited': True, 'shared': True}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertIs(True, rsrc.FnGetAtt('audited')) self.assertIs(True, rsrc.FnGetAtt('shared')) self.m.VerifyAll() def test_attribute_failed(self): rsrc = self.create_firewall_policy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'subnet_id') self.assertEqual( 'The Referenced Attribute (firewall_policy subnet_id) is ' 'incorrect.', six.text_type(error)) self.m.VerifyAll() def test_update(self): rsrc = self.create_firewall_policy() neutronclient.Client.update_firewall_policy( '5678', {'firewall_policy': {'firewall_rules': ['3', '4']}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = self.tmpl['resources']['firewall_policy']['properties'].copy() props['firewall_rules'] = ['3', '4'] update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() class FirewallRuleTest(common.HeatTestCase): def setUp(self): super(FirewallRuleTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_firewall_rule') 
self.m.StubOutWithMock(neutronclient.Client, 'delete_firewall_rule') self.m.StubOutWithMock(neutronclient.Client, 'show_firewall_rule') self.m.StubOutWithMock(neutronclient.Client, 'update_firewall_rule') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_firewall_rule(self): neutronclient.Client.create_firewall_rule({ 'firewall_rule': { 'name': 'test-firewall-rule', 'shared': True, 'action': 'allow', 'protocol': 'tcp', 'enabled': True, 'ip_version': "4"}} ).AndReturn({'firewall_rule': {'id': '5678'}}) snippet = template_format.parse(firewall_rule_template) self.stack = utils.parse_stack(snippet) self.tmpl = snippet resource_defns = self.stack.t.resource_definitions(self.stack) return firewall.FirewallRule( 'firewall_rule', resource_defns['firewall_rule'], self.stack) def test_create(self): rsrc = self.create_firewall_rule() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_validate_failed_with_string_None_protocol(self): snippet = template_format.parse(firewall_rule_template) stack = utils.parse_stack(snippet) rsrc = stack['firewall_rule'] props = dict(rsrc.properties) props['protocol'] = 'None' rsrc.t = rsrc.t.freeze(properties=props) rsrc.reparse() self.assertRaises(exception.StackValidationFailed, rsrc.validate) def test_create_with_protocol_any(self): neutronclient.Client.create_firewall_rule({ 'firewall_rule': { 'name': 'test-firewall-rule', 'shared': True, 'action': 'allow', 'protocol': None, 'enabled': True, 'ip_version': "4"}} ).AndReturn({'firewall_rule': {'id': '5678'}}) self.m.ReplayAll() snippet = template_format.parse(firewall_rule_template) snippet['resources']['firewall_rule']['properties']['protocol'] = 'any' stack = utils.parse_stack(snippet) rsrc = stack['firewall_rule'] scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_failed(self): neutronclient.Client.create_firewall_rule({ 'firewall_rule': { 'name': 'test-firewall-rule', 'shared': True, 'action': 'allow', 'protocol': 'tcp', 'enabled': True, 'ip_version': "4"}} ).AndRaise(exceptions.NeutronClientException()) self.m.ReplayAll() snippet = template_format.parse(firewall_rule_template) stack = utils.parse_stack(snippet) resource_defns = stack.t.resource_definitions(stack) rsrc = firewall.FirewallRule( 'firewall_rule', resource_defns['firewall_rule'], stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual( 'NeutronClientException: resources.firewall_rule: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_delete(self): neutronclient.Client.delete_firewall_rule('5678') neutronclient.Client.show_firewall_rule('5678').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_firewall_rule() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_already_gone(self): neutronclient.Client.delete_firewall_rule('5678').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_firewall_rule() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_failed(self): 
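# A 404 from Neutron (previous tests) is treated as already-deleted and the
# resource still reaches DELETE/COMPLETE; the 400 recorded below is a real
# error, so delete ends in DELETE/FAILED with a ResourceFailure.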
neutronclient.Client.delete_firewall_rule('5678').AndRaise( exceptions.NeutronClientException(status_code=400)) rsrc = self.create_firewall_rule() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual( 'NeutronClientException: resources.firewall_rule: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_attribute(self): rsrc = self.create_firewall_rule() neutronclient.Client.show_firewall_rule('5678').MultipleTimes( ).AndReturn( {'firewall_rule': {'protocol': 'tcp', 'shared': True}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual('tcp', rsrc.FnGetAtt('protocol')) self.assertIs(True, rsrc.FnGetAtt('shared')) self.m.VerifyAll() def test_attribute_failed(self): rsrc = self.create_firewall_rule() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'subnet_id') self.assertEqual( 'The Referenced Attribute (firewall_rule subnet_id) is ' 'incorrect.', six.text_type(error)) self.m.VerifyAll() def test_update(self): rsrc = self.create_firewall_rule() neutronclient.Client.update_firewall_rule( '5678', {'firewall_rule': {'protocol': 'icmp'}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = self.tmpl['resources']['firewall_rule']['properties'].copy() props['protocol'] = 'icmp' update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() def test_update_protocol_to_any(self): rsrc = self.create_firewall_rule() neutronclient.Client.update_firewall_rule( '5678', {'firewall_rule': {'protocol': None}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() # update to 'any' protocol props = self.tmpl['resources']['firewall_rule']['properties'].copy() props['protocol'] = 'any' update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() heat-10.0.2/heat/tests/openstack/neutron/test_qos.py0000666000175000017500000003525213343562340022530 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine import rsrc_defn from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils qos_policy_template = ''' heat_template_version: 2016-04-08 description: This template to define a neutron qos policy. resources: my_qos_policy: type: OS::Neutron::QoSPolicy properties: description: a policy for test shared: true tenant_id: d66c74c01d6c41b9846088c1ad9634d0 ''' bandwidth_limit_rule_template = ''' heat_template_version: 2016-04-08 description: This template to define a neutron bandwidth limit rule. 
resources: my_bandwidth_limit_rule: type: OS::Neutron::QoSBandwidthLimitRule properties: policy: 477e8273-60a7-4c41-b683-fdb0bc7cd151 max_kbps: 1000 max_burst_kbps: 1000 tenant_id: d66c74c01d6c41b9846088c1ad9634d0 ''' dscp_marking_rule_template = ''' heat_template_version: 2016-04-08 description: This template to define a neutron DSCP marking rule. resources: my_dscp_marking_rule: type: OS::Neutron::QoSDscpMarkingRule properties: policy: 477e8273-60a7-4c41-b683-fdb0bc7cd151 dscp_mark: 16 tenant_id: d66c74c01d6c41b9846088c1ad9634d0 ''' class NeutronQoSPolicyTest(common.HeatTestCase): def setUp(self): super(NeutronQoSPolicyTest, self).setUp() self.ctx = utils.dummy_context() tpl = template_format.parse(qos_policy_template) self.stack = stack.Stack( self.ctx, 'neutron_qos_policy_test', template.Template(tpl) ) self.neutronclient = mock.MagicMock() self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) self.my_qos_policy = self.stack['my_qos_policy'] self.my_qos_policy.client = mock.MagicMock( return_value=self.neutronclient) self.patchobject(self.my_qos_policy, 'physical_resource_name', return_value='test_policy') def test_qos_policy_handle_create(self): policy = { 'policy': { 'description': 'a policy for test', 'id': '9c1eb3fe-7bba-479d-bd43-1d497e53c384', 'rules': [], 'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0', 'shared': True } } create_props = {'name': 'test_policy', 'description': 'a policy for test', 'shared': True, 'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0'} self.neutronclient.create_qos_policy.return_value = policy self.my_qos_policy.handle_create() self.assertEqual('9c1eb3fe-7bba-479d-bd43-1d497e53c384', self.my_qos_policy.resource_id) self.neutronclient.create_qos_policy.assert_called_once_with( {'policy': create_props} ) def test_qos_policy_handle_delete(self): policy_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' self.my_qos_policy.resource_id = policy_id self.neutronclient.delete_qos_policy.return_value = None self.assertIsNone(self.my_qos_policy.handle_delete()) self.neutronclient.delete_qos_policy.assert_called_once_with( self.my_qos_policy.resource_id) def test_qos_policy_handle_delete_not_found(self): policy_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' self.my_qos_policy.resource_id = policy_id not_found = self.neutronclient.NotFound self.neutronclient.delete_qos_policy.side_effect = not_found self.assertIsNone(self.my_qos_policy.handle_delete()) self.neutronclient.delete_qos_policy.assert_called_once_with( self.my_qos_policy.resource_id) def test_qos_policy_handle_delete_resource_id_is_none(self): self.my_qos_policy.resource_id = None self.assertIsNone(self.my_qos_policy.handle_delete()) self.assertEqual(0, self.neutronclient.delete_qos_policy.call_count) def test_qos_policy_handle_update(self): policy_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' self.my_qos_policy.resource_id = policy_id props = { 'name': 'test_policy', 'description': 'test', 'shared': False } prop_dict = props.copy() update_snippet = rsrc_defn.ResourceDefinition( self.my_qos_policy.name, self.my_qos_policy.type(), props) # with name self.my_qos_policy.handle_update(json_snippet=update_snippet, tmpl_diff={}, prop_diff=props) # without name props['name'] = None self.my_qos_policy.handle_update(json_snippet=update_snippet, tmpl_diff={}, prop_diff=props) self.assertEqual(2, self.neutronclient.update_qos_policy.call_count) self.neutronclient.update_qos_policy.assert_called_with( policy_id, {'policy': prop_dict}) def test_qos_policy_get_attr(self): self.my_qos_policy.resource_id = 
'test policy' policy = { 'policy': { 'name': 'test_policy', 'description': 'a policy for test', 'id': '9c1eb3fe-7bba-479d-bd43-1d497e53c384', 'rules': [], 'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0', 'shared': True } } self.neutronclient.show_qos_policy.return_value = policy self.assertEqual([], self.my_qos_policy.FnGetAtt('rules')) self.assertEqual(policy['policy'], self.my_qos_policy.FnGetAtt('show')) self.neutronclient.show_qos_policy.assert_has_calls( [mock.call(self.my_qos_policy.resource_id)] * 2) class NeutronQoSBandwidthLimitRuleTest(common.HeatTestCase): def setUp(self): super(NeutronQoSBandwidthLimitRuleTest, self).setUp() self.ctx = utils.dummy_context() tpl = template_format.parse(bandwidth_limit_rule_template) self.stack = stack.Stack( self.ctx, 'neutron_bandwidth_limit_rule_test', template.Template(tpl) ) self.neutronclient = mock.MagicMock() self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) self.bandwidth_limit_rule = self.stack['my_bandwidth_limit_rule'] self.bandwidth_limit_rule.client = mock.MagicMock( return_value=self.neutronclient) self.find_mock = self.patchobject( neutron.neutronV20, 'find_resourceid_by_name_or_id') self.policy_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' self.find_mock.return_value = self.policy_id def test_rule_handle_create(self): rule = { 'bandwidth_limit_rule': { 'id': 'cf0eab12-ef8b-4a62-98d0-70576583c17a', 'max_kbps': 1000, 'max_burst_kbps': 1000, 'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0' } } create_props = {'max_kbps': 1000, 'max_burst_kbps': 1000, 'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0'} self.neutronclient.create_bandwidth_limit_rule.return_value = rule self.bandwidth_limit_rule.handle_create() self.assertEqual('cf0eab12-ef8b-4a62-98d0-70576583c17a', self.bandwidth_limit_rule.resource_id) self.neutronclient.create_bandwidth_limit_rule.assert_called_once_with( self.policy_id, {'bandwidth_limit_rule': create_props}) def test_rule_handle_delete(self): rule_id = 'cf0eab12-ef8b-4a62-98d0-70576583c17a' self.bandwidth_limit_rule.resource_id = rule_id self.neutronclient.delete_bandwidth_limit_rule.return_value = None self.assertIsNone(self.bandwidth_limit_rule.handle_delete()) self.neutronclient.delete_bandwidth_limit_rule.assert_called_once_with( rule_id, self.policy_id) def test_rule_handle_delete_not_found(self): rule_id = 'cf0eab12-ef8b-4a62-98d0-70576583c17a' self.bandwidth_limit_rule.resource_id = rule_id not_found = self.neutronclient.NotFound self.neutronclient.delete_bandwidth_limit_rule.side_effect = not_found self.assertIsNone(self.bandwidth_limit_rule.handle_delete()) self.neutronclient.delete_bandwidth_limit_rule.assert_called_once_with( rule_id, self.policy_id) def test_rule_handle_delete_resource_id_is_none(self): self.bandwidth_limit_rule.resource_id = None self.assertIsNone(self.bandwidth_limit_rule.handle_delete()) self.assertEqual(0, self.neutronclient.bandwidth_limit_rule.call_count) def test_rule_handle_update(self): rule_id = 'cf0eab12-ef8b-4a62-98d0-70576583c17a' self.bandwidth_limit_rule.resource_id = rule_id prop_diff = { 'max_kbps': 500, 'max_burst_kbps': 400 } self.bandwidth_limit_rule.handle_update( json_snippet={}, tmpl_diff={}, prop_diff=prop_diff) self.neutronclient.update_bandwidth_limit_rule.assert_called_once_with( rule_id, self.policy_id, {'bandwidth_limit_rule': prop_diff}) def test_rule_get_attr(self): self.bandwidth_limit_rule.resource_id = 'test rule' rule = { 'bandwidth_limit_rule': { 'id': 'cf0eab12-ef8b-4a62-98d0-70576583c17a', 'max_kbps': 1000, 
'max_burst_kbps': 1000, 'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0' } } self.neutronclient.show_bandwidth_limit_rule.return_value = rule self.assertEqual(rule['bandwidth_limit_rule'], self.bandwidth_limit_rule.FnGetAtt('show')) self.neutronclient.show_bandwidth_limit_rule.assert_called_once_with( self.bandwidth_limit_rule.resource_id, self.policy_id) class NeutronQoSDscpMarkingRuleTest(common.HeatTestCase): def setUp(self): super(NeutronQoSDscpMarkingRuleTest, self).setUp() self.ctx = utils.dummy_context() tpl = template_format.parse(dscp_marking_rule_template) self.stack = stack.Stack( self.ctx, 'neutron_dscp_marking_rule_test', template.Template(tpl) ) self.neutronclient = mock.MagicMock() self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) self.dscp_marking_rule = self.stack['my_dscp_marking_rule'] self.dscp_marking_rule.client = mock.MagicMock( return_value=self.neutronclient) self.find_mock = self.patchobject( neutron.neutronV20, 'find_resourceid_by_name_or_id') self.policy_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' self.find_mock.return_value = self.policy_id def test_rule_handle_create(self): rule = { 'dscp_marking_rule': { 'id': 'cf0eab12-ef8b-4a62-98d0-70576583c17a', 'dscp_mark': 16, 'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0' } } create_props = {'dscp_mark': 16, 'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0'} self.neutronclient.create_dscp_marking_rule.return_value = rule self.dscp_marking_rule.handle_create() self.assertEqual('cf0eab12-ef8b-4a62-98d0-70576583c17a', self.dscp_marking_rule.resource_id) self.neutronclient.create_dscp_marking_rule.assert_called_once_with( self.policy_id, {'dscp_marking_rule': create_props}) def test_rule_handle_delete(self): rule_id = 'cf0eab12-ef8b-4a62-98d0-70576583c17a' self.dscp_marking_rule.resource_id = rule_id self.neutronclient.delete_dscp_marking_rule.return_value = None self.assertIsNone(self.dscp_marking_rule.handle_delete()) self.neutronclient.delete_dscp_marking_rule.assert_called_once_with( rule_id, self.policy_id) def test_rule_handle_delete_not_found(self): rule_id = 'cf0eab12-ef8b-4a62-98d0-70576583c17a' self.dscp_marking_rule.resource_id = rule_id not_found = self.neutronclient.NotFound self.neutronclient.delete_dscp_marking_rule.side_effect = not_found self.assertIsNone(self.dscp_marking_rule.handle_delete()) self.neutronclient.delete_dscp_marking_rule.assert_called_once_with( rule_id, self.policy_id) def test_rule_handle_delete_resource_id_is_none(self): self.dscp_marking_rule.resource_id = None self.assertIsNone(self.dscp_marking_rule.handle_delete()) self.assertEqual(0, self.neutronclient.dscp_marking_rule.call_count) def test_rule_handle_update(self): rule_id = 'cf0eab12-ef8b-4a62-98d0-70576583c17a' self.dscp_marking_rule.resource_id = rule_id prop_diff = { 'dscp_mark': 8 } self.dscp_marking_rule.handle_update( json_snippet={}, tmpl_diff={}, prop_diff=prop_diff) self.neutronclient.update_dscp_marking_rule.assert_called_once_with( rule_id, self.policy_id, {'dscp_marking_rule': prop_diff}) def test_rule_get_attr(self): self.dscp_marking_rule.resource_id = 'test rule' rule = { 'dscp_marking_rule': { 'id': 'cf0eab12-ef8b-4a62-98d0-70576583c17a', 'dscp_mark': 8, 'tenant_id': 'd66c74c01d6c41b9846088c1ad9634d0' } } self.neutronclient.show_dscp_marking_rule.return_value = rule self.assertEqual(rule['dscp_marking_rule'], self.dscp_marking_rule.FnGetAtt('show')) self.neutronclient.show_dscp_marking_rule.assert_called_once_with( self.dscp_marking_rule.resource_id, self.policy_id) 
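# End of the mock-based QoS tests. They follow a different pattern from the
# mox tests above and below: the resource's client() accessor is replaced
# by a MagicMock, handle_create()/handle_delete()/handle_update() are
# invoked directly, and the neutronclient calls are asserted afterwards.
# A minimal self-contained sketch of that pattern; FakeQoSPolicy is a
# hypothetical stand-in for illustration, not a real Heat resource class:
import mock


class FakeQoSPolicy(object):
    """Mimics just enough of a Heat resource to show the test pattern."""

    def __init__(self, neutron_client):
        # Heat resources reach neutronclient through a client() accessor,
        # which is why the tests above set rsrc.client to a MagicMock.
        self.client = mock.MagicMock(return_value=neutron_client)
        self.resource_id = None

    def handle_create(self):
        props = {'name': 'test_policy', 'shared': True}
        policy = self.client().create_qos_policy({'policy': props})
        self.resource_id = policy['policy']['id']


neutron_client = mock.MagicMock()
neutron_client.create_qos_policy.return_value = {
    'policy': {'id': '9c1eb3fe-7bba-479d-bd43-1d497e53c384'}}
rsrc = FakeQoSPolicy(neutron_client)
rsrc.handle_create()
assert rsrc.resource_id == '9c1eb3fe-7bba-479d-bd43-1d497e53c384'
neutron_client.create_qos_policy.assert_called_once_with(
    {'policy': {'name': 'test_policy', 'shared': True}})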
heat-10.0.2/heat/tests/openstack/neutron/test_neutron_loadbalancer.py0000666000175000017500000013275613343562351026120 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mox from neutronclient.common import exceptions from neutronclient.neutron import v2_0 as neutronV20 from neutronclient.v2_0 import client as neutronclient from oslo_config import cfg import six from heat.common import exception from heat.common.i18n import _ from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine.clients.os import nova from heat.engine.resources.openstack.neutron import loadbalancer from heat.engine import scheduler from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils health_monitor_template = ''' heat_template_version: 2015-04-30 description: Template to test load balancer resources resources: monitor: type: OS::Neutron::HealthMonitor properties: type: HTTP delay: 3 max_retries: 5 timeout: 10 ''' pool_template_with_vip_subnet = ''' heat_template_version: 2015-04-30 description: Template to test load balancer resources resources: pool: type: OS::Neutron::Pool properties: protocol: HTTP subnet: sub123 lb_method: ROUND_ROBIN vip: protocol_port: 80 subnet: sub9999 ''' pool_template_with_provider = ''' heat_template_version: 2015-04-30 description: Template to test load balancer resources resources: pool: type: OS::Neutron::Pool properties: protocol: HTTP subnet: sub123 lb_method: ROUND_ROBIN provider: test_prov vip: protocol_port: 80 ''' pool_template = ''' heat_template_version: 2015-04-30 description: Template to test load balancer resources resources: pool: type: OS::Neutron::Pool properties: protocol: HTTP subnet: sub123 lb_method: ROUND_ROBIN vip: protocol_port: 80 ''' pool_template_deprecated = pool_template.replace('subnet', 'subnet_id') member_template = ''' heat_template_version: 2015-04-30 description: Template to test load balancer member resources: member: type: OS::Neutron::PoolMember properties: protocol_port: 8080 pool_id: pool123 address: 1.2.3.4 ''' lb_template = ''' heat_template_version: 2015-04-30 description: Template to test load balancer resources resources: lb: type: OS::Neutron::LoadBalancer properties: protocol_port: 8080 pool_id: pool123 members: [1234] ''' pool_with_session_persistence_template = ''' heat_template_version: 2015-04-30 description: Template to test load balancer resources resources: pool: type: OS::Neutron::Pool properties: protocol: HTTP subnet: sub123 lb_method: ROUND_ROBIN vip: protocol_port: 80 session_persistence: type: APP_COOKIE cookie_name: cookie ''' pool_with_health_monitors_template = ''' heat_template_version: 2015-04-30 description: Template to test load balancer resources resources: monitor1: type: OS::Neutron::HealthMonitor properties: type: HTTP delay: 3 max_retries: 5 timeout: 10 monitor2: type: OS::Neutron::HealthMonitor properties: type: HTTP delay: 3 max_retries: 5 timeout: 10 pool: type: OS::Neutron::Pool 
properties: protocol: HTTP subnet_id: sub123 lb_method: ROUND_ROBIN vip: protocol_port: 80 monitors: - {get_resource: monitor1} - {get_resource: monitor2} ''' class HealthMonitorTest(common.HeatTestCase): def setUp(self): super(HealthMonitorTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_health_monitor') self.m.StubOutWithMock(neutronclient.Client, 'delete_health_monitor') self.m.StubOutWithMock(neutronclient.Client, 'show_health_monitor') self.m.StubOutWithMock(neutronclient.Client, 'update_health_monitor') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_health_monitor(self): neutronclient.Client.create_health_monitor({ 'health_monitor': { 'delay': 3, 'max_retries': 5, 'type': u'HTTP', 'timeout': 10, 'admin_state_up': True}} ).AndReturn({'health_monitor': {'id': '5678'}}) snippet = template_format.parse(health_monitor_template) self.stack = utils.parse_stack(snippet) self.tmpl = snippet resource_defns = self.stack.t.resource_definitions(self.stack) return loadbalancer.HealthMonitor( 'monitor', resource_defns['monitor'], self.stack) def test_create(self): rsrc = self.create_health_monitor() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_failed(self): neutronclient.Client.create_health_monitor({ 'health_monitor': { 'delay': 3, 'max_retries': 5, 'type': u'HTTP', 'timeout': 10, 'admin_state_up': True}} ).AndRaise(exceptions.NeutronClientException()) self.m.ReplayAll() snippet = template_format.parse(health_monitor_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = loadbalancer.HealthMonitor( 'monitor', resource_defns['monitor'], self.stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual( 'NeutronClientException: resources.monitor: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_delete(self): neutronclient.Client.delete_health_monitor('5678') neutronclient.Client.show_health_monitor('5678').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_health_monitor() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_already_gone(self): neutronclient.Client.delete_health_monitor('5678').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_health_monitor() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_failed(self): neutronclient.Client.delete_health_monitor('5678').AndRaise( exceptions.NeutronClientException(status_code=400)) rsrc = self.create_health_monitor() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual( 'NeutronClientException: resources.monitor: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_attribute(self): rsrc = self.create_health_monitor() neutronclient.Client.show_health_monitor('5678').MultipleTimes( ).AndReturn( {'health_monitor': {'admin_state_up': True, 'delay': 3}}) 
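# MultipleTimes() lets a single recorded show_health_monitor expectation
# satisfy both FnGetAtt lookups below without recording it twice.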
self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertIs(True, rsrc.FnGetAtt('admin_state_up')) self.assertEqual(3, rsrc.FnGetAtt('delay')) self.m.VerifyAll() def test_attribute_failed(self): rsrc = self.create_health_monitor() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'subnet_id') self.assertEqual( 'The Referenced Attribute (monitor subnet_id) is incorrect.', six.text_type(error)) self.m.VerifyAll() def test_update(self): rsrc = self.create_health_monitor() neutronclient.Client.update_health_monitor( '5678', {'health_monitor': {'delay': 10}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = self.tmpl['resources']['monitor']['properties'].copy() props['delay'] = 10 update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() class PoolTest(common.HeatTestCase): def setUp(self): super(PoolTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_pool') self.m.StubOutWithMock(neutronclient.Client, 'delete_pool') self.m.StubOutWithMock(neutronclient.Client, 'show_pool') self.m.StubOutWithMock(neutronclient.Client, 'update_pool') self.m.StubOutWithMock(neutronclient.Client, 'associate_health_monitor') self.m.StubOutWithMock(neutronclient.Client, 'disassociate_health_monitor') self.m.StubOutWithMock(neutronclient.Client, 'create_vip') self.m.StubOutWithMock(neutronV20, 'find_resourceid_by_name_or_id') self.m.StubOutWithMock(neutronclient.Client, 'delete_vip') self.m.StubOutWithMock(neutronclient.Client, 'show_vip') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_pool(self, resolve_neutron=True, with_vip_subnet=False): if resolve_neutron: if with_vip_subnet: snippet = template_format.parse(pool_template_with_vip_subnet) else: snippet = template_format.parse(pool_template) else: snippet = template_format.parse(pool_template_deprecated) self.stack = utils.parse_stack(snippet) self.tmpl = snippet neutronclient.Client.create_pool({ 'pool': { 'subnet_id': 'sub123', 'protocol': u'HTTP', 'name': utils.PhysName(self.stack.name, 'pool'), 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}} ).AndReturn({'pool': {'id': '5678'}}) neutronclient.Client.show_pool('5678').AndReturn( {'pool': {'status': 'ACTIVE'}}) neutronclient.Client.show_vip('xyz').AndReturn( {'vip': {'status': 'ACTIVE'}}) stvipvsn = { 'vip': { 'protocol': u'HTTP', 'name': 'pool.vip', 'admin_state_up': True, 'subnet_id': u'sub9999', 'pool_id': '5678', 'protocol_port': 80} } stvippsn = copy.deepcopy(stvipvsn) stvippsn['vip']['subnet_id'] = 'sub123' self.stub_SubnetConstraint_validate() neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') if resolve_neutron and with_vip_subnet: neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub9999', cmd_resource=None, ).MultipleTimes().AndReturn('sub9999') neutronclient.Client.create_vip(stvipvsn ).AndReturn({'vip': {'id': 'xyz'}}) else: neutronclient.Client.create_vip(stvippsn ).AndReturn({'vip': {'id': 'xyz'}}) resource_defns = self.stack.t.resource_definitions(self.stack) return loadbalancer.Pool( 'pool', resource_defns['pool'], self.stack) def test_create(self): self._test_create() def test_create_deprecated(self): self._test_create(resolve_neutron=False, with_vip_subnet=False) def _test_create(self, resolve_neutron=True, 
with_vip_subnet=False): rsrc = self.create_pool(resolve_neutron, with_vip_subnet) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_with_vip_subnet(self): self._test_create(with_vip_subnet=True) def test_create_pending(self): snippet = template_format.parse(pool_template) self.stack = utils.parse_stack(snippet) neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') neutronclient.Client.create_pool({ 'pool': { 'subnet_id': 'sub123', 'protocol': u'HTTP', 'name': utils.PhysName(self.stack.name, 'pool'), 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}} ).AndReturn({'pool': {'id': '5678'}}) neutronclient.Client.create_vip({ 'vip': { 'protocol': u'HTTP', 'name': 'pool.vip', 'admin_state_up': True, 'subnet_id': u'sub123', 'pool_id': '5678', 'protocol_port': 80}} ).AndReturn({'vip': {'id': 'xyz'}}) neutronclient.Client.show_pool('5678').AndReturn( {'pool': {'status': 'PENDING_CREATE'}}) neutronclient.Client.show_pool('5678').MultipleTimes().AndReturn( {'pool': {'status': 'ACTIVE'}}) neutronclient.Client.show_vip('xyz').AndReturn( {'vip': {'status': 'PENDING_CREATE'}}) neutronclient.Client.show_vip('xyz').AndReturn( {'vip': {'status': 'ACTIVE'}}) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = loadbalancer.Pool( 'pool', resource_defns['pool'], self.stack) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_failed_error_status(self): cfg.CONF.set_override('action_retry_limit', 0) snippet = template_format.parse(pool_template) self.stack = utils.parse_stack(snippet) neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') neutronclient.Client.create_pool({ 'pool': { 'subnet_id': 'sub123', 'protocol': u'HTTP', 'name': utils.PhysName(self.stack.name, 'pool'), 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}} ).AndReturn({'pool': {'id': '5678'}}) neutronclient.Client.create_vip({ 'vip': { 'protocol': u'HTTP', 'name': 'pool.vip', 'admin_state_up': True, 'subnet_id': u'sub123', 'pool_id': '5678', 'protocol_port': 80}} ).AndReturn({'vip': {'id': 'xyz'}}) neutronclient.Client.show_pool('5678').AndReturn( {'pool': {'status': 'ERROR', 'name': '5678'}}) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = loadbalancer.Pool( 'pool', resource_defns['pool'], self.stack) self.m.ReplayAll() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual( 'ResourceInError: resources.pool: ' 'Went to status ERROR due to "error in pool"', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_create_failed_unexpected_vip_status(self): snippet = template_format.parse(pool_template) self.stack = utils.parse_stack(snippet) neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') neutronclient.Client.create_pool({ 'pool': { 'subnet_id': 'sub123', 'protocol': u'HTTP', 'name': utils.PhysName(self.stack.name, 'pool'), 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}} ).AndReturn({'pool': {'id': '5678'}}) neutronclient.Client.create_vip({ 'vip': { 'protocol': u'HTTP', 'name': 'pool.vip', 'admin_state_up': True, 
'subnet_id': u'sub123', 'pool_id': '5678', 'protocol_port': 80}} ).AndReturn({'vip': {'id': 'xyz'}}) neutronclient.Client.show_pool('5678').MultipleTimes().AndReturn( {'pool': {'status': 'ACTIVE'}}) neutronclient.Client.show_vip('xyz').AndReturn( {'vip': {'status': 'SOMETHING', 'name': 'xyz'}}) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = loadbalancer.Pool( 'pool', resource_defns['pool'], self.stack) self.m.ReplayAll() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual('ResourceUnknownStatus: resources.pool: ' 'Pool creation failed due to ' 'vip - Unknown status SOMETHING due to "Unknown"', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_create_failed(self): snippet = template_format.parse(pool_template) self.stack = utils.parse_stack(snippet) neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') neutronclient.Client.create_pool({ 'pool': { 'subnet_id': 'sub123', 'protocol': u'HTTP', 'name': utils.PhysName(self.stack.name, 'pool'), 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}} ).AndRaise(exceptions.NeutronClientException()) self.m.ReplayAll() resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = loadbalancer.Pool( 'pool', resource_defns['pool'], self.stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual( 'NeutronClientException: resources.pool: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_create_with_session_persistence(self): snippet = template_format.parse(pool_with_session_persistence_template) self.stack = utils.parse_stack(snippet) neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') neutronclient.Client.create_pool({ 'pool': { 'subnet_id': 'sub123', 'protocol': u'HTTP', 'name': utils.PhysName(self.stack.name, 'pool'), 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}} ).AndReturn({'pool': {'id': '5678'}}) neutronclient.Client.create_vip({ 'vip': { 'protocol': u'HTTP', 'name': 'pool.vip', 'admin_state_up': True, 'subnet_id': u'sub123', 'pool_id': '5678', 'protocol_port': 80, 'session_persistence': { 'type': 'APP_COOKIE', 'cookie_name': 'cookie'}}} ).AndReturn({'vip': {'id': 'xyz'}}) neutronclient.Client.show_pool('5678').AndReturn( {'pool': {'status': 'ACTIVE'}}) neutronclient.Client.show_vip('xyz').AndReturn( {'vip': {'status': 'ACTIVE'}}) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = loadbalancer.Pool( 'pool', resource_defns['pool'], self.stack) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_pool_with_provider(self): snippet = template_format.parse(pool_template_with_provider) self.stub_ProviderConstraint_validate() self.stack = utils.parse_stack(snippet) neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') neutronclient.Client.create_pool({ 'pool': { 'subnet_id': 'sub123', 'protocol': u'HTTP', 'name': utils.PhysName(self.stack.name, 'pool'), 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True, 'provider': 'test_prov'}} ).AndReturn({'pool': {'id': '5678'}}) 
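# The provider property is passed straight through in the create_pool body
# above; the show_pool stub later echoes it back so the test can assert
# that FnGetAtt('provider') resolves to 'test_prov'.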
neutronclient.Client.create_vip({ 'vip': { 'protocol': u'HTTP', 'name': 'pool.vip', 'admin_state_up': True, 'subnet_id': u'sub123', 'pool_id': '5678', 'protocol_port': 80}} ).AndReturn({'vip': {'id': 'xyz'}}) neutronclient.Client.show_pool('5678').MultipleTimes().AndReturn( {'pool': {'status': 'ACTIVE', 'provider': 'test_prov'}}) neutronclient.Client.show_vip('xyz').AndReturn( {'vip': {'status': 'ACTIVE'}}) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = loadbalancer.Pool( 'pool', resource_defns['pool'], self.stack) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertEqual("test_prov", rsrc.FnGetAtt("provider")) self.m.VerifyAll() def test_failing_validation_with_session_persistence(self): msg = _('Property cookie_name is required, when ' 'session_persistence type is set to APP_COOKIE.') snippet = template_format.parse(pool_with_session_persistence_template) pool = snippet['resources']['pool'] persistence = pool['properties']['vip']['session_persistence'] # When persistence type is set to APP_COOKIE, cookie_name is required persistence['type'] = 'APP_COOKIE' persistence['cookie_name'] = None self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) resource = loadbalancer.Pool('pool', resource_defns['pool'], self.stack) error = self.assertRaises(exception.StackValidationFailed, resource.validate) self.assertEqual(msg, six.text_type(error)) def test_validation_not_failing_without_session_persistence(self): snippet = template_format.parse(pool_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) resource = loadbalancer.Pool('pool', resource_defns['pool'], self.stack) self.stub_SubnetConstraint_validate() self.m.ReplayAll() self.assertIsNone(resource.validate()) self.m.VerifyAll() def test_properties_are_prepared_for_session_persistence(self): snippet = template_format.parse(pool_with_session_persistence_template) pool = snippet['resources']['pool'] persistence = pool['properties']['vip']['session_persistence'] # change persistence type to HTTP_COOKIE that not require cookie_name persistence['type'] = 'HTTP_COOKIE' del persistence['cookie_name'] self.stack = utils.parse_stack(snippet) neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') neutronclient.Client.create_pool({ 'pool': { 'subnet_id': 'sub123', 'protocol': u'HTTP', 'name': utils.PhysName(self.stack.name, 'pool'), 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}} ).AndReturn({'pool': {'id': '5678'}}) neutronclient.Client.create_vip({ 'vip': { 'protocol': u'HTTP', 'name': 'pool.vip', 'admin_state_up': True, 'subnet_id': u'sub123', 'pool_id': '5678', 'protocol_port': 80, 'session_persistence': {'type': 'HTTP_COOKIE'}}} ).AndReturn({'vip': {'id': 'xyz'}}) neutronclient.Client.show_pool('5678').AndReturn( {'pool': {'status': 'ACTIVE'}}) neutronclient.Client.show_vip('xyz').AndReturn( {'vip': {'status': 'ACTIVE'}}) resource_defns = self.stack.t.resource_definitions(self.stack) resource = loadbalancer.Pool('pool', resource_defns['pool'], self.stack) # assert that properties contain cookie_name property with None value persistence = resource.properties['vip']['session_persistence'] self.assertIn('cookie_name', persistence) self.assertIsNone(persistence['cookie_name']) self.m.ReplayAll() scheduler.TaskRunner(resource.create)() 
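# HTTP_COOKIE persistence needs no cookie_name, so the properties schema
# fills it in as None; the create_vip expectation above shows the None
# entry is stripped before the request body is sent to Neutron.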
self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) self.m.VerifyAll() def test_delete(self): rsrc = self.create_pool() neutronclient.Client.delete_vip('xyz') neutronclient.Client.show_vip('xyz').AndRaise( exceptions.NeutronClientException(status_code=404)) neutronclient.Client.delete_pool('5678') neutronclient.Client.show_pool('5678').AndRaise( exceptions.NeutronClientException(status_code=404)) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_already_gone(self): neutronclient.Client.delete_vip('xyz').AndRaise( exceptions.NeutronClientException(status_code=404)) neutronclient.Client.delete_pool('5678').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_pool() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_vip_failed(self): neutronclient.Client.delete_vip('xyz').AndRaise( exceptions.NeutronClientException(status_code=400)) rsrc = self.create_pool() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual( 'NeutronClientException: resources.pool: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_delete_failed(self): neutronclient.Client.delete_vip('xyz').AndRaise( exceptions.NeutronClientException(status_code=404)) neutronclient.Client.delete_pool('5678').AndRaise( exceptions.NeutronClientException(status_code=400)) rsrc = self.create_pool() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual( 'NeutronClientException: resources.pool: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_attribute(self): rsrc = self.create_pool() neutronclient.Client.show_pool('5678').MultipleTimes( ).AndReturn( {'pool': {'admin_state_up': True, 'lb_method': 'ROUND_ROBIN'}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertIs(True, rsrc.FnGetAtt('admin_state_up')) self.assertEqual('ROUND_ROBIN', rsrc.FnGetAtt('lb_method')) self.m.VerifyAll() def test_vip_attribute(self): rsrc = self.create_pool() neutronclient.Client.show_vip('xyz').AndReturn( {'vip': {'address': '10.0.0.3', 'name': 'xyz'}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual({'address': '10.0.0.3', 'name': 'xyz'}, rsrc.FnGetAtt('vip')) self.m.VerifyAll() def test_attribute_failed(self): rsrc = self.create_pool() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'net_id') self.assertEqual( 'The Referenced Attribute (pool net_id) is incorrect.', six.text_type(error)) self.m.VerifyAll() def test_update(self): rsrc = self.create_pool() neutronclient.Client.update_pool( '5678', {'pool': {'admin_state_up': False}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = self.tmpl['resources']['pool']['properties'].copy() props['admin_state_up'] = False update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() def test_update_monitors(self): snippet = 
template_format.parse(pool_template) self.stack = utils.parse_stack(snippet) neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') neutronclient.Client.create_pool({ 'pool': { 'subnet_id': 'sub123', 'protocol': u'HTTP', 'name': utils.PhysName(self.stack.name, 'pool'), 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}} ).AndReturn({'pool': {'id': '5678'}}) neutronclient.Client.associate_health_monitor( '5678', {'health_monitor': {'id': 'mon123'}}) neutronclient.Client.associate_health_monitor( '5678', {'health_monitor': {'id': 'mon456'}}) neutronclient.Client.create_vip({ 'vip': { 'protocol': u'HTTP', 'name': 'pool.vip', 'admin_state_up': True, 'subnet_id': u'sub123', 'pool_id': '5678', 'protocol_port': 80}} ).AndReturn({'vip': {'id': 'xyz'}}) neutronclient.Client.show_pool('5678').AndReturn( {'pool': {'status': 'ACTIVE'}}) neutronclient.Client.show_vip('xyz').AndReturn( {'vip': {'status': 'ACTIVE'}}) neutronclient.Client.disassociate_health_monitor( '5678', 'mon456') neutronclient.Client.associate_health_monitor( '5678', {'health_monitor': {'id': 'mon789'}}) snippet['resources']['pool']['properties']['monitors'] = [ 'mon123', 'mon456'] resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = loadbalancer.Pool('pool', resource_defns['pool'], self.stack) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = snippet['resources']['pool']['properties'].copy() props['monitors'] = ['mon123', 'mon789'] update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() class PoolMemberTest(common.HeatTestCase): def setUp(self): super(PoolMemberTest, self).setUp() self.fc = fakes_nova.FakeClient() self.m.StubOutWithMock(neutronclient.Client, 'create_member') self.m.StubOutWithMock(neutronclient.Client, 'delete_member') self.m.StubOutWithMock(neutronclient.Client, 'update_member') self.m.StubOutWithMock(neutronclient.Client, 'show_member') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_member(self): neutronclient.Client.create_member({ 'member': { 'pool_id': 'pool123', 'protocol_port': 8080, 'address': '1.2.3.4', 'admin_state_up': True}} ).AndReturn({'member': {'id': 'member5678'}}) snippet = template_format.parse(member_template) self.stack = utils.parse_stack(snippet) self.tmpl = snippet resource_defns = self.stack.t.resource_definitions(self.stack) return loadbalancer.PoolMember( 'member', resource_defns['member'], self.stack) def test_create(self): rsrc = self.create_member() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertEqual('member5678', rsrc.resource_id) self.m.VerifyAll() def test_create_optional_parameters(self): neutronclient.Client.create_member({ 'member': { 'pool_id': 'pool123', 'protocol_port': 8080, 'weight': 100, 'admin_state_up': False, 'address': '1.2.3.4'}} ).AndReturn({'member': {'id': 'member5678'}}) snippet = template_format.parse(member_template) snippet['resources']['member']['properties']['admin_state_up'] = False snippet['resources']['member']['properties']['weight'] = 100 self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = loadbalancer.PoolMember( 'member', resource_defns['member'], self.stack) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) 
self.assertEqual('member5678', rsrc.resource_id) self.m.VerifyAll() def test_attribute(self): rsrc = self.create_member() neutronclient.Client.show_member('member5678').MultipleTimes( ).AndReturn( {'member': {'admin_state_up': True, 'weight': 5}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertIs(True, rsrc.FnGetAtt('admin_state_up')) self.assertEqual(5, rsrc.FnGetAtt('weight')) self.m.VerifyAll() def test_update(self): rsrc = self.create_member() neutronclient.Client.update_member( 'member5678', {'member': {'pool_id': 'pool456'}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = self.tmpl['resources']['member']['properties'].copy() props['pool_id'] = 'pool456' update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() def test_delete(self): rsrc = self.create_member() neutronclient.Client.delete_member(u'member5678') neutronclient.Client.show_member(u'member5678').AndRaise( exceptions.NeutronClientException(status_code=404)) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_missing_member(self): rsrc = self.create_member() neutronclient.Client.delete_member(u'member5678').AndRaise( exceptions.NeutronClientException(status_code=404)) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() class LoadBalancerTest(common.HeatTestCase): def setUp(self): super(LoadBalancerTest, self).setUp() self.fc = fakes_nova.FakeClient() self.m.StubOutWithMock(neutronclient.Client, 'create_member') self.m.StubOutWithMock(neutronclient.Client, 'delete_member') self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_load_balancer(self): nova.NovaClientPlugin._create().AndReturn(self.fc) neutronclient.Client.create_member({ 'member': { 'pool_id': 'pool123', 'protocol_port': 8080, 'address': '1.2.3.4'}} ).AndReturn({'member': {'id': 'member5678'}}) snippet = template_format.parse(lb_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) return loadbalancer.LoadBalancer( 'lb', resource_defns['lb'], self.stack) def test_create(self): rsrc = self.create_load_balancer() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update(self): rsrc = self.create_load_balancer() neutronclient.Client.delete_member(u'member5678') neutronclient.Client.create_member({ 'member': { 'pool_id': 'pool123', 'protocol_port': 8080, 'address': '4.5.6.7'}} ).AndReturn({'member': {'id': 'memberxyz'}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = dict(rsrc.properties) props['members'] = ['5678'] update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() def test_update_missing_member(self): rsrc = self.create_load_balancer() neutronclient.Client.delete_member(u'member5678').AndRaise( exceptions.NeutronClientException(status_code=404)) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = dict(rsrc.properties) props['members'] = [] update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), 
rsrc.state) self.m.VerifyAll() def test_delete(self): rsrc = self.create_load_balancer() neutronclient.Client.delete_member(u'member5678') self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_missing_member(self): rsrc = self.create_load_balancer() neutronclient.Client.delete_member(u'member5678').AndRaise( exceptions.NeutronClientException(status_code=404)) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() class PoolUpdateHealthMonitorsTest(common.HeatTestCase): def setUp(self): super(PoolUpdateHealthMonitorsTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_pool') self.m.StubOutWithMock(neutronclient.Client, 'delete_pool') self.m.StubOutWithMock(neutronclient.Client, 'show_pool') self.m.StubOutWithMock(neutronclient.Client, 'update_pool') self.m.StubOutWithMock(neutronclient.Client, 'associate_health_monitor') self.m.StubOutWithMock(neutronclient.Client, 'disassociate_health_monitor') self.m.StubOutWithMock(neutronclient.Client, 'create_health_monitor') self.m.StubOutWithMock(neutronclient.Client, 'delete_health_monitor') self.m.StubOutWithMock(neutronclient.Client, 'show_health_monitor') self.m.StubOutWithMock(neutronclient.Client, 'update_health_monitor') self.m.StubOutWithMock(neutronclient.Client, 'create_vip') self.m.StubOutWithMock(neutronclient.Client, 'delete_vip') self.m.StubOutWithMock(neutronclient.Client, 'show_vip') self.m.StubOutWithMock(neutronV20, 'find_resourceid_by_name_or_id') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def _create_pool_with_health_monitors(self, stack_name): neutronclient.Client.create_health_monitor({ 'health_monitor': { 'delay': 3, 'max_retries': 5, 'type': u'HTTP', 'timeout': 10, 'admin_state_up': True}} ).AndReturn({'health_monitor': {'id': '5555'}}) neutronclient.Client.create_health_monitor({ 'health_monitor': { 'delay': 3, 'max_retries': 5, 'type': u'HTTP', 'timeout': 10, 'admin_state_up': True}} ).AndReturn({'health_monitor': {'id': '6666'}}) self.stub_SubnetConstraint_validate() neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') neutronclient.Client.create_pool({ 'pool': { 'subnet_id': 'sub123', 'protocol': u'HTTP', 'name': utils.PhysName(stack_name, 'pool'), 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}} ).AndReturn({'pool': {'id': '5678'}}) neutronclient.Client.associate_health_monitor( '5678', {'health_monitor': {'id': '5555'}}).InAnyOrder() neutronclient.Client.associate_health_monitor( '5678', {'health_monitor': {'id': '6666'}}).InAnyOrder() neutronclient.Client.create_vip({ 'vip': { 'protocol': u'HTTP', 'name': 'pool.vip', 'admin_state_up': True, 'subnet_id': u'sub123', 'pool_id': '5678', 'protocol_port': 80}} ).AndReturn({'vip': {'id': 'xyz'}}) neutronclient.Client.show_pool('5678').AndReturn( {'pool': {'status': 'ACTIVE'}}) neutronclient.Client.show_vip('xyz').AndReturn( {'vip': {'status': 'ACTIVE'}}) def test_update_pool_with_references_to_health_monitors(self): snippet = template_format.parse(pool_with_health_monitors_template) self.stack = utils.parse_stack(snippet) self._create_pool_with_health_monitors(self.stack.name) neutronclient.Client.disassociate_health_monitor( '5678', mox.IsA(six.string_types)) self.m.ReplayAll() 
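# These stack-level tests drive a full create/update cycle: dropping one
# monitor from the template must translate into exactly one
# disassociate_health_monitor call against the existing pool.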
self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) snippet['resources']['pool']['properties']['monitors'] = [ {u'get_resource': u'monitor1'}] updated_stack = utils.parse_stack(snippet) self.stack.update(updated_stack) self.assertEqual((self.stack.UPDATE, self.stack.COMPLETE), self.stack.state) self.m.VerifyAll() def test_update_pool_with_empty_list_of_health_monitors(self): snippet = template_format.parse(pool_with_health_monitors_template) self.stack = utils.parse_stack(snippet) self._create_pool_with_health_monitors(self.stack.name) neutronclient.Client.disassociate_health_monitor( '5678', '5555').InAnyOrder() neutronclient.Client.disassociate_health_monitor( '5678', '6666').InAnyOrder() self.m.ReplayAll() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) snippet['resources']['pool']['properties']['monitors'] = [] updated_stack = utils.parse_stack(snippet) self.stack.update(updated_stack) self.assertEqual((self.stack.UPDATE, self.stack.COMPLETE), self.stack.state) self.m.VerifyAll() def test_update_pool_without_health_monitors(self): snippet = template_format.parse(pool_with_health_monitors_template) self.stack = utils.parse_stack(snippet) self._create_pool_with_health_monitors(self.stack.name) neutronclient.Client.disassociate_health_monitor( '5678', '5555').InAnyOrder() neutronclient.Client.disassociate_health_monitor( '5678', '6666').InAnyOrder() self.m.ReplayAll() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) snippet['resources']['pool']['properties'].pop('monitors') updated_stack = utils.parse_stack(snippet) self.stack.update(updated_stack) self.assertEqual((self.stack.UPDATE, self.stack.COMPLETE), self.stack.state) self.m.VerifyAll() heat-10.0.2/heat/tests/openstack/neutron/test_quota.py0000666000175000017500000001172413343562340023055 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
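# NOTE(editor): unlike the mox-based suites in this package, this module
# stubs the neutron client purely with mock.MagicMock; the resource under
# test (OS::Neutron::Quota) maps its template properties onto a single
# update_quota(project_id, {'quota': {...}}) call, which the assertions
# below verify.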
import mock import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import keystone as k_plugin from heat.engine.clients.os import neutron as n_plugin from heat.engine import rsrc_defn from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests import utils quota_template = ''' heat_template_version: newton description: Sample neutron quota heat template resources: my_quota: type: OS::Neutron::Quota properties: project: demo subnet: 5 network: 5 floatingip: 5 security_group_rule: 5 security_group: 5 router: 5 port: 5 ''' valid_properties = [ 'subnet', 'network', 'floatingip', 'security_group_rule', 'security_group', 'router', 'port' ] class NeutronQuotaTest(common.HeatTestCase): def setUp(self): super(NeutronQuotaTest, self).setUp() self.ctx = utils.dummy_context() self.patchobject(n_plugin.NeutronClientPlugin, 'has_extension', return_value=True) self.patchobject(n_plugin.NeutronClientPlugin, 'ignore_not_found', return_value=None) self.patchobject(k_plugin.KeystoneClientPlugin, 'get_project_id', return_value='some_project_id') tpl = template_format.parse(quota_template) self.stack = parser.Stack( self.ctx, 'neutron_quota_test_stack', template.Template(tpl) ) self.my_quota = self.stack['my_quota'] neutron = mock.MagicMock() self.neutronclient = mock.MagicMock() self.my_quota.client = neutron neutron.return_value = self.neutronclient self.update_quota = self.neutronclient.update_quota self.delete_quota = self.neutronclient.delete_quota self.update_quota.return_value = mock.MagicMock() self.delete_quota.return_value = mock.MagicMock() def _test_validate(self, resource, error_msg): exc = self.assertRaises(exception.StackValidationFailed, resource.validate) self.assertIn(error_msg, six.text_type(exc)) def test_miss_all_quotas(self): my_quota = self.stack['my_quota'] props = self.stack.t.t['resources']['my_quota']['properties'].copy() for key in valid_properties: if key in props: del props[key] my_quota.t = my_quota.t.freeze(properties=props) my_quota.reparse() msg = ('At least one of the following properties must be specified: ' 'floatingip, network, port, router, ' 'security_group, security_group_rule, subnet.') self.assertRaisesRegex(exception.PropertyUnspecifiedError, msg, my_quota.validate) def test_quota_handle_create(self): self.my_quota.physical_resource_name = mock.MagicMock( return_value='some_resource_id') self.my_quota.reparse() self.my_quota.handle_create() body = { "quota": { 'subnet': 5, 'network': 5, 'floatingip': 5, 'security_group_rule': 5, 'security_group': 5, 'router': 5, 'port': 5 } } self.update_quota.assert_called_once_with( 'some_project_id', body ) self.assertEqual('some_resource_id', self.my_quota.resource_id) def test_quota_handle_update(self): tmpl_diff = mock.MagicMock() prop_diff = mock.MagicMock() props = {'project': 'some_project_id', 'floatingip': 1, 'security_group': 4} json_snippet = rsrc_defn.ResourceDefinition( self.my_quota.name, 'OS::Neutron::Quota', properties=props) self.my_quota.reparse() self.my_quota.handle_update(json_snippet, tmpl_diff, prop_diff) body = { "quota": { 'floatingip': 1, 'security_group': 4 } } self.update_quota.assert_called_once_with( 'some_project_id', body ) def test_quota_handle_delete(self): self.my_quota.reparse() self.my_quota.resource_id_set('some_project_id') self.my_quota.handle_delete() self.delete_quota.assert_called_once_with('some_project_id') 
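# NOTE(editor): a minimal sketch (not part of the module above) of the
# request-body shape these tests exercise; 'props', 'project_id' and
# 'neutron' are illustrative names only:
#
#     body = {'quota': {k: v for k, v in props.items() if k != 'project'}}
#     neutron.update_quota(project_id, body)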
heat-10.0.2/heat/tests/openstack/neutron/test_extraroute.py0000666000175000017500000001223413343562351024125 0ustar zuulzuul00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from neutronclient.v2_0 import client as neutronclient from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine.resources.openstack.neutron import extraroute from heat.engine import scheduler from heat.tests import common from heat.tests import utils neutron_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Template to test OS::Neutron::ExtraRoute resources", "Parameters" : {}, "Resources" : { "router": { "Type": "OS::Neutron::Router" }, "extraroute1": { "Type": "OS::Neutron::ExtraRoute", "Properties": { "router_id": { "Ref" : "router" }, "destination" : "192.168.0.0/24", "nexthop": "1.1.1.1" } }, "extraroute2": { "Type": "OS::Neutron::ExtraRoute", "Properties": { "router_id": { "Ref" : "router" }, "destination" : "192.168.255.0/24", "nexthop": "1.1.1.1" } } } } ''' class NeutronExtraRouteTest(common.HeatTestCase): def setUp(self): super(NeutronExtraRouteTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'show_router') self.m.StubOutWithMock(neutronclient.Client, 'update_router') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_extraroute(self, t, stack, resource_name, properties=None): properties = properties or {} t['Resources'][resource_name]['Properties'] = properties rsrc = extraroute.ExtraRoute( resource_name, stack.t.resource_definitions(stack)[resource_name], stack) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) return rsrc def test_extraroute(self): self.stub_RouterConstraint_validate() # add first route neutronclient.Client.show_router( '3e46229d-8fce-4733-819a-b5fe630550f8' ).AndReturn({'router': {'routes': []}}) neutronclient.Client.update_router( '3e46229d-8fce-4733-819a-b5fe630550f8', {"router": { "routes": [ {"destination": "192.168.0.0/24", "nexthop": "1.1.1.1"}, ] }}).AndReturn(None) # add second route neutronclient.Client.show_router( '3e46229d-8fce-4733-819a-b5fe630550f8' ).AndReturn({'router': {'routes': [{"destination": "192.168.0.0/24", "nexthop": "1.1.1.1"}]}}) neutronclient.Client.update_router( '3e46229d-8fce-4733-819a-b5fe630550f8', {"router": { "routes": [ {"destination": "192.168.0.0/24", "nexthop": "1.1.1.1"}, {"destination": "192.168.255.0/24", "nexthop": "1.1.1.1"} ] }}).AndReturn(None) # first delete neutronclient.Client.show_router( '3e46229d-8fce-4733-819a-b5fe630550f8' ).AndReturn({'router': {'routes': [{"destination": "192.168.0.0/24", "nexthop": "1.1.1.1"}, {"destination": "192.168.255.0/24", "nexthop": "1.1.1.1"}]}}) neutronclient.Client.update_router( '3e46229d-8fce-4733-819a-b5fe630550f8', {"router": { "routes": [ {"destination": "192.168.255.0/24", "nexthop": "1.1.1.1"} ] }}).AndReturn(None) # second delete neutronclient.Client.show_router( '3e46229d-8fce-4733-819a-b5fe630550f8' ).AndReturn({'router': 
{'routes': [{"destination": "192.168.255.0/24", "nexthop": "1.1.1.1"}]}}) self.m.ReplayAll() t = template_format.parse(neutron_template) stack = utils.parse_stack(t) rsrc1 = self.create_extraroute( t, stack, 'extraroute1', properties={ 'router_id': '3e46229d-8fce-4733-819a-b5fe630550f8', 'destination': '192.168.0.0/24', 'nexthop': '1.1.1.1'}) self.create_extraroute( t, stack, 'extraroute2', properties={ 'router_id': '3e46229d-8fce-4733-819a-b5fe630550f8', 'destination': '192.168.255.0/24', 'nexthop': '1.1.1.1'}) scheduler.TaskRunner(rsrc1.delete)() rsrc1.state_set(rsrc1.CREATE, rsrc1.COMPLETE, 'to delete again') scheduler.TaskRunner(rsrc1.delete)() self.m.VerifyAll() heat-10.0.2/heat/tests/openstack/neutron/test_neutron_floating_ip.py0000666000175000017500000007350413343562351025777 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock import mox from neutronclient.common import exceptions as qe from neutronclient.neutron import v2_0 as neutronV20 from neutronclient.v2_0 import client as neutronclient from heat.common import exception from heat.common import template_format from heat.common import timeutils from heat.engine.clients.os import neutron from heat.engine.hot import functions as hot_funcs from heat.engine import node_data from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import stk_defn from heat.engine import template as tmpl from heat.tests import common from heat.tests import utils neutron_floating_template = ''' heat_template_version: 2015-04-30 description: Template to test floatingip Neutron resource resources: port_floating: type: OS::Neutron::Port properties: network: abcd1234 fixed_ips: - subnet: sub1234 ip_address: 10.0.0.10 floating_ip: type: OS::Neutron::FloatingIP properties: floating_network: abcd1234 floating_ip_assoc: type: OS::Neutron::FloatingIPAssociation properties: floatingip_id: { get_resource: floating_ip } port_id: { get_resource: port_floating } router: type: OS::Neutron::Router router_interface: type: OS::Neutron::RouterInterface properties: router_id: { get_resource: router } subnet: sub1234 gateway: type: OS::Neutron::RouterGateway properties: router_id: { get_resource: router } network: abcd1234 ''' neutron_floating_no_assoc_template = ''' heat_template_version: 2015-04-30 description: Template to test floatingip Neutron resource resources: network: type: OS::Neutron::Net subnet: type: OS::Neutron::Subnet properties: network: { get_resource: network } cidr: 10.0.3.0/24, port_floating: type: OS::Neutron::Port properties: network: { get_resource: network } fixed_ips: - subnet: { get_resource: subnet } ip_address: 10.0.0.10 floating_ip: type: OS::Neutron::FloatingIP properties: floating_network: abcd1234 port_id: { get_resource: port_floating } router: type: OS::Neutron::Router router_interface: type: OS::Neutron::RouterInterface properties: router_id: { get_resource: router } subnet: { get_resource: subnet } gateway: type: 
OS::Neutron::RouterGateway properties: router_id: { get_resource: router } network: abcd1234 ''' neutron_floating_template_deprecated = neutron_floating_template.replace( 'network', 'network_id').replace('subnet', 'subnet_id') class NeutronFloatingIPTest(common.HeatTestCase): def setUp(self): super(NeutronFloatingIPTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'delete_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'show_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'update_floatingip') self.m.StubOutWithMock(neutronclient.Client, 'create_port') self.m.StubOutWithMock(neutronclient.Client, 'delete_port') self.m.StubOutWithMock(neutronclient.Client, 'update_port') self.m.StubOutWithMock(neutronclient.Client, 'show_port') self.m.StubOutWithMock(neutronV20, 'find_resourceid_by_name_or_id') self.m.StubOutWithMock(timeutils, 'retry_backoff_delay') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def test_floating_ip_validate(self): neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'network', 'abcd1234', cmd_resource=None, ).MultipleTimes().AndReturn('abcd1234') self.m.ReplayAll() t = template_format.parse(neutron_floating_no_assoc_template) stack = utils.parse_stack(t) fip = stack['floating_ip'] self.assertIsNone(fip.validate()) del t['resources']['floating_ip']['properties']['port_id'] t['resources']['floating_ip']['properties'][ 'fixed_ip_address'] = '10.0.0.12' stack = utils.parse_stack(t) fip = stack['floating_ip'] self.assertRaises(exception.ResourcePropertyDependency, fip.validate) self.m.VerifyAll() def test_floating_ip_router_interface(self): t = template_format.parse(neutron_floating_template) del t['resources']['gateway'] self._test_floating_ip(t) def test_floating_ip_router_gateway(self): t = template_format.parse(neutron_floating_template) del t['resources']['router_interface'] self._test_floating_ip(t, r_iface=False) def test_floating_ip_deprecated_router_interface(self): t = template_format.parse(neutron_floating_template_deprecated) del t['resources']['gateway'] self._test_floating_ip(t) def test_floating_ip_deprecated_router_gateway(self): t = template_format.parse(neutron_floating_template_deprecated) del t['resources']['router_interface'] self._test_floating_ip(t, r_iface=False) def _test_floating_ip(self, tmpl, r_iface=True): neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'network', 'abcd1234', cmd_resource=None, ).MultipleTimes().AndReturn('abcd1234') neutronclient.Client.create_floatingip({ 'floatingip': {'floating_network_id': u'abcd1234'} }).AndReturn({'floatingip': { 'id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'floating_network_id': u'abcd1234' }}) neutronclient.Client.show_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndRaise(qe.NeutronClientException(status_code=404)) neutronclient.Client.show_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn({'floatingip': { 'id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'floating_network_id': u'abcd1234' }}) neutronclient.Client.show_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn({'floatingip': { 'id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'floating_network_id': u'abcd1234' }}) timeutils.retry_backoff_delay(1, jitter_max=2.0).AndReturn(0.01) neutronclient.Client.delete_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766').AndReturn(None) neutronclient.Client.show_floatingip( 
'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn({'floatingip': { 'id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'floating_network_id': u'abcd1234' }}) neutronclient.Client.delete_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766').AndReturn(None) neutronclient.Client.show_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766').AndRaise( qe.NeutronClientException(status_code=404)) neutronclient.Client.delete_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766').AndRaise( qe.NeutronClientException(status_code=404)) self.stub_NetworkConstraint_validate() stack = utils.parse_stack(tmpl) # assert the implicit dependency between the floating_ip # and the gateway self.m.ReplayAll() if r_iface: required_by = set(stack.dependencies.required_by( stack['router_interface'])) self.assertIn(stack['floating_ip_assoc'], required_by) else: deps = stack.dependencies[stack['gateway']] self.assertIn(stack['floating_ip'], deps) fip = stack['floating_ip'] scheduler.TaskRunner(fip.create)() self.assertEqual((fip.CREATE, fip.COMPLETE), fip.state) fip.validate() fip_id = fip.FnGetRefId() self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', fip_id) self.assertIsNone(fip.FnGetAtt('show')) self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', fip.FnGetAtt('show')['id']) self.assertRaises(exception.InvalidTemplateAttribute, fip.FnGetAtt, 'Foo') self.assertEqual(u'abcd1234', fip.FnGetAtt('floating_network_id')) scheduler.TaskRunner(fip.delete)() fip.state_set(fip.CREATE, fip.COMPLETE, 'to delete again') scheduler.TaskRunner(fip.delete)() self.m.VerifyAll() def test_FnGetRefId(self): self.m.ReplayAll() t = template_format.parse(neutron_floating_template) stack = utils.parse_stack(t) rsrc = stack['floating_ip'] rsrc.resource_id = 'xyz' self.assertEqual('xyz', rsrc.FnGetRefId()) self.m.VerifyAll() def test_FnGetRefId_convergence_cache_data(self): neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'network', 'abcd1234', cmd_resource=None, ).MultipleTimes().AndReturn('abcd1234') neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub1234', cmd_resource=None, ).MultipleTimes().AndReturn('sub1234') self.m.ReplayAll() t = template_format.parse(neutron_floating_template) template = tmpl.Template(t) stack = parser.Stack(utils.dummy_context(), 'test', template, cache_data={ 'floating_ip': node_data.NodeData.from_dict({ 'uuid': mock.ANY, 'id': mock.ANY, 'action': 'CREATE', 'status': 'COMPLETE', 'reference_id': 'abc'})}) rsrc = stack.defn['floating_ip'] self.assertEqual('abc', rsrc.FnGetRefId()) def test_floatip_association_port(self): t = template_format.parse(neutron_floating_template) stack = utils.parse_stack(t) neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'network', 'abcd1234', cmd_resource=None, ).MultipleTimes().AndReturn('abcd1234') neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub1234', cmd_resource=None, ).MultipleTimes().AndReturn('sub1234') neutronclient.Client.create_floatingip({ 'floatingip': {'floating_network_id': u'abcd1234'} }).AndReturn({'floatingip': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }}) neutronclient.Client.create_port({'port': { 'network_id': u'abcd1234', 'fixed_ips': [ {'subnet_id': u'sub1234', 'ip_address': u'10.0.0.10'} ], 'name': utils.PhysName(stack.name, 'port_floating'), 'admin_state_up': True, 'device_owner': '', 'device_id': '', 'binding:vnic_type': 'normal' }} ).AndReturn({'port': { "status": "BUILD", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" 
}}) neutronclient.Client.show_port( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn({'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }}) # create as neutronclient.Client.update_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', { 'floatingip': { 'port_id': u'fc68ea2c-b60b-4b4f-bd82-94ec81110766'}} ).AndReturn({'floatingip': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }}) # update as with port_id neutronclient.Client.update_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', { 'floatingip': { 'port_id': u'2146dfbf-ba77-4083-8e86-d052f671ece5', 'fixed_ip_address': None}} ).AndReturn({'floatingip': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }}) # update as with floatingip_id neutronclient.Client.update_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', {'floatingip': { 'port_id': None }}).AndReturn(None) neutronclient.Client.update_floatingip( '2146dfbf-ba77-4083-8e86-d052f671ece5', { 'floatingip': { 'port_id': u'2146dfbf-ba77-4083-8e86-d052f671ece5', 'fixed_ip_address': None}} ).AndReturn({'floatingip': { "status": "ACTIVE", "id": "2146dfbf-ba77-4083-8e86-d052f671ece5" }}) # update as with both neutronclient.Client.update_floatingip( '2146dfbf-ba77-4083-8e86-d052f671ece5', {'floatingip': { 'port_id': None }}).AndReturn(None) neutronclient.Client.update_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', { 'floatingip': { 'port_id': u'ade6fcac-7d47-416e-a3d7-ad12efe445c1', 'fixed_ip_address': None}} ).AndReturn({'floatingip': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }}) # delete as neutronclient.Client.update_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', {'floatingip': { 'port_id': None }}).AndReturn(None) neutronclient.Client.delete_port( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn(None) neutronclient.Client.show_port( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndRaise(qe.PortNotFoundClient(status_code=404)) neutronclient.Client.delete_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn(None) neutronclient.Client.show_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766').AndRaise( qe.NeutronClientException(status_code=404)) neutronclient.Client.delete_port( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndRaise(qe.PortNotFoundClient(status_code=404)) neutronclient.Client.delete_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndRaise(qe.NeutronClientException(status_code=404)) self.stub_PortConstraint_validate() self.m.ReplayAll() fip = stack['floating_ip'] scheduler.TaskRunner(fip.create)() self.assertEqual((fip.CREATE, fip.COMPLETE), fip.state) stk_defn.update_resource_data(stack.defn, fip.name, fip.node_data()) p = stack['port_floating'] scheduler.TaskRunner(p.create)() self.assertEqual((p.CREATE, p.COMPLETE), p.state) stk_defn.update_resource_data(stack.defn, p.name, p.node_data()) fipa = stack['floating_ip_assoc'] scheduler.TaskRunner(fipa.create)() self.assertEqual((fipa.CREATE, fipa.COMPLETE), fipa.state) stk_defn.update_resource_data(stack.defn, fipa.name, fipa.node_data()) self.assertIsNotNone(fipa.id) self.assertEqual(fipa.id, fipa.resource_id) fipa.validate() # test update FloatingIpAssociation with port_id props = copy.deepcopy(fipa.properties.data) update_port_id = '2146dfbf-ba77-4083-8e86-d052f671ece5' props['port_id'] = update_port_id update_snippet = rsrc_defn.ResourceDefinition(fipa.name, fipa.type(), stack.t.parse(stack.defn, props)) scheduler.TaskRunner(fipa.update, update_snippet)() self.assertEqual((fipa.UPDATE, fipa.COMPLETE), fipa.state) # test 
update FloatingIpAssociation with floatingip_id props = copy.deepcopy(fipa.properties.data) update_flip_id = '2146dfbf-ba77-4083-8e86-d052f671ece5' props['floatingip_id'] = update_flip_id update_snippet = rsrc_defn.ResourceDefinition(fipa.name, fipa.type(), props) scheduler.TaskRunner(fipa.update, update_snippet)() self.assertEqual((fipa.UPDATE, fipa.COMPLETE), fipa.state) # test update FloatingIpAssociation with port_id and floatingip_id props = copy.deepcopy(fipa.properties.data) update_flip_id = 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' update_port_id = 'ade6fcac-7d47-416e-a3d7-ad12efe445c1' props['floatingip_id'] = update_flip_id props['port_id'] = update_port_id update_snippet = rsrc_defn.ResourceDefinition(fipa.name, fipa.type(), props) scheduler.TaskRunner(fipa.update, update_snippet)() self.assertEqual((fipa.UPDATE, fipa.COMPLETE), fipa.state) scheduler.TaskRunner(fipa.delete)() scheduler.TaskRunner(p.delete)() scheduler.TaskRunner(fip.delete)() fip.state_set(fip.CREATE, fip.COMPLETE, 'to delete again') p.state_set(p.CREATE, p.COMPLETE, 'to delete again') self.assertIsNone(scheduler.TaskRunner(p.delete)()) scheduler.TaskRunner(fip.delete)() self.m.VerifyAll() def test_floatip_port_dependency_subnet(self): t = template_format.parse(neutron_floating_no_assoc_template) stack = utils.parse_stack(t) p_result = self.patchobject(hot_funcs.GetResource, 'result') p_result.return_value = 'subnet_uuid' # check dependencies for fip resource required_by = set(stack.dependencies.required_by( stack['router_interface'])) self.assertIn(stack['floating_ip'], required_by) self.m.VerifyAll() def test_floatip_port_dependency_network(self): t = template_format.parse(neutron_floating_no_assoc_template) del t['resources']['port_floating']['properties']['fixed_ips'] stack = utils.parse_stack(t) p_show = self.patchobject(neutronclient.Client, 'show_network') p_show.return_value = {'network': {'subnets': ['subnet_uuid']}} p_result = self.patchobject(hot_funcs.GetResource, 'result', autospec=True) def return_uuid(self): if self.args == 'network': return 'net_uuid' return 'subnet_uuid' p_result.side_effect = return_uuid # check dependencies for fip resource required_by = set(stack.dependencies.required_by( stack['router_interface'])) self.assertIn(stack['floating_ip'], required_by) p_show.assert_called_once_with('net_uuid') self.m.VerifyAll() def test_floatingip_create_specify_ip_address(self): neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'network', 'abcd1234', cmd_resource=None, ).MultipleTimes().AndReturn('abcd1234') self.stub_NetworkConstraint_validate() neutronclient.Client.create_floatingip({ 'floatingip': {'floating_network_id': u'abcd1234', 'floating_ip_address': '172.24.4.98'} }).AndReturn({'floatingip': { 'status': 'ACTIVE', 'id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'floating_ip_address': '172.24.4.98' }}) neutronclient.Client.show_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn({'floatingip': { 'status': 'ACTIVE', 'id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'floating_ip_address': '172.24.4.98' }}) self.m.ReplayAll() t = template_format.parse(neutron_floating_template) props = t['resources']['floating_ip']['properties'] props['floating_ip_address'] = '172.24.4.98' stack = utils.parse_stack(t) fip = stack['floating_ip'] scheduler.TaskRunner(fip.create)() self.assertEqual((fip.CREATE, fip.COMPLETE), fip.state) self.assertEqual('172.24.4.98', fip.FnGetAtt('floating_ip_address')) self.m.VerifyAll() def test_floatingip_create_specify_dns(self): 
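        # NOTE(editor): verifies that the dns_name/dns_domain template
        # properties are passed straight through in the create_floatingip
        # request body; no show_floatingip expectation is recorded because
        # this test never resolves the resource's attributes after create.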
neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'network', 'abcd1234', cmd_resource=None, ).MultipleTimes().AndReturn('abcd1234') self.stub_NetworkConstraint_validate() neutronclient.Client.create_floatingip({ 'floatingip': {'floating_network_id': u'abcd1234', 'dns_name': 'myvm', 'dns_domain': 'openstack.org.'} }).AndReturn({'floatingip': { 'status': 'ACTIVE', 'id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'floating_ip_address': '172.24.4.98' }}) self.m.ReplayAll() t = template_format.parse(neutron_floating_template) props = t['resources']['floating_ip']['properties'] props['dns_name'] = 'myvm' props['dns_domain'] = 'openstack.org.' stack = utils.parse_stack(t) fip = stack['floating_ip'] scheduler.TaskRunner(fip.create)() self.assertEqual((fip.CREATE, fip.COMPLETE), fip.state) self.m.VerifyAll() def test_floatingip_create_specify_subnet(self): neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'network', 'abcd1234', cmd_resource=None, ).MultipleTimes().AndReturn('abcd1234') neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub1234', cmd_resource=None, ).MultipleTimes().AndReturn('sub1234') self.stub_NetworkConstraint_validate() neutronclient.Client.create_floatingip({ 'floatingip': {'floating_network_id': u'abcd1234', 'subnet_id': u'sub1234'} }).AndReturn({'floatingip': { 'status': 'ACTIVE', 'id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'floating_ip_address': '172.24.4.98' }}) self.m.ReplayAll() t = template_format.parse(neutron_floating_template) props = t['resources']['floating_ip']['properties'] props['floating_subnet'] = 'sub1234' stack = utils.parse_stack(t) fip = stack['floating_ip'] scheduler.TaskRunner(fip.create)() self.assertEqual((fip.CREATE, fip.COMPLETE), fip.state) self.m.VerifyAll() def test_floatip_port(self): t = template_format.parse(neutron_floating_no_assoc_template) t['resources']['port_floating']['properties']['network'] = "xyz1234" t['resources']['port_floating']['properties'][ 'fixed_ips'][0]['subnet'] = "sub1234" t['resources']['router_interface']['properties']['subnet'] = "sub1234" stack = utils.parse_stack(t) neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'network', 'xyz1234', cmd_resource=None, ).MultipleTimes().AndReturn('xyz1234') neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub1234', cmd_resource=None, ).MultipleTimes().AndReturn('sub1234') neutronclient.Client.create_port({'port': { 'network_id': u'xyz1234', 'fixed_ips': [ {'subnet_id': u'sub1234', 'ip_address': u'10.0.0.10'} ], 'name': utils.PhysName(stack.name, 'port_floating'), 'admin_state_up': True, 'binding:vnic_type': 'normal', 'device_owner': '', 'device_id': ''}} ).AndReturn({'port': { "status": "BUILD", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }}) neutronclient.Client.show_port( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn({'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }}) neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'network', 'abcd1234', cmd_resource=None, ).MultipleTimes().AndReturn('abcd1234') neutronclient.Client.create_floatingip({ 'floatingip': { 'floating_network_id': u'abcd1234', 'port_id': u'fc68ea2c-b60b-4b4f-bd82-94ec81110766' } }).AndReturn({'floatingip': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }}) # update with new port_id neutronclient.Client.update_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', { 'floatingip': { 'port_id': 
u'2146dfbf-ba77-4083-8e86-d052f671ece5', 'fixed_ip_address': None}} ).AndReturn({'floatingip': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }}) # update with None port_id neutronclient.Client.update_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', { 'floatingip': { 'port_id': None, 'fixed_ip_address': None}} ).AndReturn({'floatingip': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }}) neutronclient.Client.delete_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn(None) neutronclient.Client.show_floatingip( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766').AndRaise( qe.NeutronClientException(status_code=404)) neutronclient.Client.delete_port( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn(None) neutronclient.Client.show_port( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndRaise(qe.PortNotFoundClient(status_code=404)) self.stub_PortConstraint_validate() self.m.ReplayAll() # check dependencies for fip resource required_by = set(stack.dependencies.required_by( stack['router_interface'])) self.assertIn(stack['floating_ip'], required_by) p = stack['port_floating'] scheduler.TaskRunner(p.create)() self.assertEqual((p.CREATE, p.COMPLETE), p.state) stk_defn.update_resource_data(stack.defn, p.name, p.node_data()) fip = stack['floating_ip'] scheduler.TaskRunner(fip.create)() self.assertEqual((fip.CREATE, fip.COMPLETE), fip.state) stk_defn.update_resource_data(stack.defn, fip.name, fip.node_data()) # test update FloatingIp with port_id props = copy.deepcopy(fip.properties.data) update_port_id = '2146dfbf-ba77-4083-8e86-d052f671ece5' props['port_id'] = update_port_id update_snippet = rsrc_defn.ResourceDefinition(fip.name, fip.type(), stack.t.parse(stack.defn, props)) scheduler.TaskRunner(fip.update, update_snippet)() self.assertEqual((fip.UPDATE, fip.COMPLETE), fip.state) stk_defn.update_resource_data(stack.defn, fip.name, fip.node_data()) # test update FloatingIp with None port_id props = copy.deepcopy(fip.properties.data) del(props['port_id']) update_snippet = rsrc_defn.ResourceDefinition(fip.name, fip.type(), stack.t.parse(stack.defn, props)) scheduler.TaskRunner(fip.update, update_snippet)() self.assertEqual((fip.UPDATE, fip.COMPLETE), fip.state) scheduler.TaskRunner(fip.delete)() scheduler.TaskRunner(p.delete)() self.m.VerifyAll() def test_add_dependencies(self): t = template_format.parse(neutron_floating_template) stack = utils.parse_stack(t) fipa = stack['floating_ip_assoc'] port = stack['port_floating'] r_int = stack['router_interface'] deps = mock.MagicMock() dep_list = [] def iadd(obj): dep_list.append(obj[1]) deps.__iadd__.side_effect = iadd deps.graph.return_value = {fipa: [port]} fipa.add_dependencies(deps) self.assertEqual([r_int], dep_list) def test_add_dependencies_without_fixed_ips_in_port(self): t = template_format.parse(neutron_floating_template) del t['resources']['port_floating']['properties']['fixed_ips'] stack = utils.parse_stack(t) fipa = stack['floating_ip_assoc'] port = stack['port_floating'] deps = mock.MagicMock() dep_list = [] def iadd(obj): dep_list.append(obj[1]) deps.__iadd__.side_effect = iadd deps.graph.return_value = {fipa: [port]} fipa.add_dependencies(deps) self.assertEqual([], dep_list) heat-10.0.2/heat/tests/openstack/neutron/test_neutron_rbac_policy.py0000666000175000017500000001441513343562340025764 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import yaml from neutronclient.common import exceptions from heat.common import exception from heat.common import template_format from heat.tests import common from heat.tests.openstack.neutron import inline_templates from heat.tests import utils class RBACPolicyTest(common.HeatTestCase): @mock.patch('heat.engine.clients.os.neutron.' 'NeutronClientPlugin.has_extension', return_value=True) def _create_stack(self, ext_func, tmpl=inline_templates.RBAC_TEMPLATE): self.t = template_format.parse(tmpl) self.stack = utils.parse_stack(self.t) self.rbac = self.stack['rbac'] self.neutron_client = mock.MagicMock() self.rbac.client = mock.MagicMock() self.rbac.client.return_value = self.neutron_client def _test_create(self, obj_type='network'): tpl = yaml.safe_load(inline_templates.RBAC_TEMPLATE) tpl['resources']['rbac']['properties']['object_type'] = obj_type self._create_stack(tmpl=yaml.safe_dump(tpl)) expected = { "rbac_policy": { "action": "access_as_shared", "object_type": obj_type, "object_id": "9ba4c03a-dbd5-4836-b651-defa595796ba", "target_tenant": "d1dbbed707e5469da9cd4fdd618e9706" } } self.rbac.handle_create() self.neutron_client.create_rbac_policy.assert_called_with(expected) def test_create_network_rbac(self): self._test_create() def test_create_qos_policy_rbac(self): self._test_create(obj_type='qos_policy') def _test_validate_invalid_action(self, msg, invalid_action='invalid', obj_type='network'): tpl = yaml.safe_load(inline_templates.RBAC_TEMPLATE) tpl['resources']['rbac']['properties']['action'] = invalid_action tpl['resources']['rbac']['properties']['object_type'] = obj_type self._create_stack(tmpl=yaml.safe_dump(tpl)) self.patchobject(type(self.rbac), 'is_service_available', return_value=(True, None)) self.assertRaisesRegex(exception.StackValidationFailed, msg, self.rbac.validate) def test_validate_action_for_network(self): msg = ('Property error: resources.rbac.properties.action: ' '"invalid" is not an allowed value ' r'\[access_as_shared, access_as_external\]') self._test_validate_invalid_action(msg) def test_validate_action_for_qos_policy(self): msg = ('Property error: resources.rbac.properties.action: ' '"invalid" is not an allowed value ' r'\[access_as_shared, access_as_external\]') self._test_validate_invalid_action(msg, obj_type='qos_policy') # we don't support access_as_external for qos_policy msg = ('Property error: resources.rbac.properties.action: ' 'Invalid action "access_as_external" for object type ' 'qos_policy. 
Valid actions: access_as_shared') self._test_validate_invalid_action(msg, obj_type='qos_policy', invalid_action='access_as_external') def test_validate_invalid_type(self): tpl = yaml.safe_load(inline_templates.RBAC_TEMPLATE) tpl['resources']['rbac']['properties']['object_type'] = 'networks' self._create_stack(tmpl=yaml.safe_dump(tpl)) msg = '"networks" is not an allowed value' self.patchobject(type(self.rbac), 'is_service_available', return_value=(True, None)) self.assertRaisesRegex(exception.StackValidationFailed, msg, self.rbac.validate) def test_validate_object_id_reference(self): self._create_stack(tmpl=inline_templates.RBAC_REFERENCE_TEMPLATE) self.patchobject(type(self.rbac), 'is_service_available', return_value=(True, None)) # won't check the object_id, so validate() succeeds self.rbac.validate() def test_update(self): self._create_stack() self.rbac.resource_id_set('bca25c0e-5937-4341-a911-53e202629269') prop_diff = { 'target_tenant': '77485d3b002b4e0c9e8b37fac7261842' } self.rbac.handle_update(None, None, prop_diff) self.neutron_client.update_rbac_policy.assert_called_with( 'bca25c0e-5937-4341-a911-53e202629269', {'rbac_policy': prop_diff}) def test_delete(self): self._create_stack() self.rbac.resource_id_set('bca25c0e-5937-4341-a911-53e202629269') self.rbac.handle_delete() self.neutron_client.delete_rbac_policy.assert_called_with( 'bca25c0e-5937-4341-a911-53e202629269') def test_delete_failed(self): self._create_stack() self.rbac.resource_id_set('bca25c0e-5937-4341-a911-53e202629269') self.neutron_client.delete_rbac_policy.side_effect = ( exceptions.Unauthorized) self.assertRaises(exceptions.Unauthorized, self.rbac.handle_delete) def test_delete_not_found(self): self._create_stack() self.rbac.resource_id_set('bca25c0e-5937-4341-a911-53e202629269') self.neutron_client.delete_rbac_policy.side_effect = ( exceptions.NotFound()) self.assertIsNone(self.rbac.handle_delete()) def test_show_resource(self): self._create_stack() self.rbac.resource_id_set('1234') self.neutron_client.show_rbac_policy.return_value = { 'rbac_policy': {'id': '123'} } self.assertEqual(self.rbac._show_resource(), {'id': '123'}) self.neutron_client.show_rbac_policy.assert_called_with('1234') heat-10.0.2/heat/tests/openstack/neutron/__init__.py0000666000175000017500000000000013343562340022405 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/neutron/inline_templates.py0000666000175000017500000001214513343562340024217 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
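# NOTE(editor): this module is only a collection of HOT template literals
# shared by the neutron test cases in this package; each constant is parsed
# by its consumer (via template_format.parse() or yaml.safe_load()) rather
# than being used directly.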
SPOOL_TEMPLATE = ''' heat_template_version: 2015-04-30 description: Template to test subnetpool Neutron resource resources: sub_pool: type: OS::Neutron::SubnetPool properties: name: the_sp prefixes: - 10.1.0.0/16 address_scope: test default_quota: 2 default_prefixlen: 28 min_prefixlen: 8 max_prefixlen: 32 is_default: False tenant_id: c1210485b2424d48804aad5d39c61b8f shared: False ''' SPOOL_MINIMAL_TEMPLATE = ''' heat_template_version: 2015-04-30 description: Template to test subnetpool Neutron resource resources: sub_pool: type: OS::Neutron::SubnetPool properties: prefixes: - 10.0.0.0/16 - 10.1.0.0/16 ''' RBAC_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Template to test rbac-policy Neutron resource resources: rbac: type: OS::Neutron::RBACPolicy properties: object_type: network target_tenant: d1dbbed707e5469da9cd4fdd618e9706 action: access_as_shared object_id: 9ba4c03a-dbd5-4836-b651-defa595796ba ''' RBAC_REFERENCE_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Template to test rbac-policy Neutron resource resources: rbac: type: OS::Neutron::RBACPolicy properties: object_type: network target_tenant: d1dbbed707e5469da9cd4fdd618e9706 action: access_as_shared object_id: {get_resource: my_net} my_net: type: OS::Neutron::Net ''' LB_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Create a loadbalancer resources: lb: type: OS::Neutron::LBaaS::LoadBalancer properties: name: my_lb description: my loadbalancer vip_address: 10.0.0.4 vip_subnet: sub123 provider: octavia tenant_id: 1234 admin_state_up: True ''' LISTENER_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Create a listener resources: listener: type: OS::Neutron::LBaaS::Listener properties: protocol_port: 80 protocol: TCP loadbalancer: 123 default_pool: my_pool name: my_listener description: my listener admin_state_up: True default_tls_container_ref: ref sni_container_refs: - ref1 - ref2 connection_limit: -1 tenant_id: 1234 ''' POOL_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Create a pool resources: pool: type: OS::Neutron::LBaaS::Pool properties: name: my_pool description: my pool session_persistence: type: HTTP_COOKIE lb_algorithm: ROUND_ROBIN loadbalancer: my_lb listener: 123 protocol: HTTP admin_state_up: True ''' MEMBER_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Create a pool member resources: member: type: OS::Neutron::LBaaS::PoolMember properties: pool: 123 address: 1.2.3.4 protocol_port: 80 weight: 1 subnet: sub123 admin_state_up: True ''' MONITOR_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Create a health monitor resources: monitor: type: OS::Neutron::LBaaS::HealthMonitor properties: admin_state_up: True delay: 3 expected_codes: 200-202 http_method: HEAD max_retries: 5 pool: 123 timeout: 10 type: HTTP url_path: /health ''' SECURITY_GROUP_RULE_TEMPLATE = ''' heat_template_version: 2016-10-14 resources: security_group_rule: type: OS::Neutron::SecurityGroupRule properties: security_group: 123 description: test description remote_group: 123 protocol: tcp port_range_min: 100 ''' L7POLICY_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Template to test L7Policy Neutron resource resources: l7policy: type: OS::Neutron::LBaaS::L7Policy properties: admin_state_up: True name: test_l7policy description: test l7policy resource action: REDIRECT_TO_URL redirect_url: http://www.mirantis.com listener: 123 position: 1 ''' L7RULE_TEMPLATE = ''' heat_template_version: 2016-04-08 description: Template to test L7Rule Neutron 
resource resources: l7rule: type: OS::Neutron::LBaaS::L7Rule properties: admin_state_up: True l7policy: 123 type: HEADER compare_type: ENDS_WITH key: test_key value: test_value invert: False ''' SEGMENT_TEMPLATE = ''' heat_template_version: pike description: Template to test Segment resources: segment: type: OS::Neutron::Segment properties: network: private network_type: vxlan segmentation_id: 101 ''' heat-10.0.2/heat/tests/openstack/neutron/test_neutron_segment.py0000666000175000017500000002135613343562340025142 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from neutronclient.neutron import v2_0 as neutronV20 from openstack import exceptions from oslo_utils import excutils import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine import rsrc_defn from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests.openstack.neutron import inline_templates from heat.tests import utils class NeutronSegmentTest(common.HeatTestCase): def setUp(self): super(NeutronSegmentTest, self).setUp() self.ctx = utils.dummy_context() tpl = template_format.parse(inline_templates.SEGMENT_TEMPLATE) self.stack = stack.Stack( self.ctx, 'segment_test', template.Template(tpl) ) class FakeOpenStackPlugin(object): @excutils.exception_filter def ignore_not_found(self, ex): if not isinstance(ex, exceptions.ResourceNotFound): raise ex self.sdkclient = mock.Mock() self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) self.patchobject(excutils.exception_filter, '__exit__') self.segment = self.stack['segment'] self.segment.client = mock.Mock(return_value=self.sdkclient) self.segment.client_plugin = mock.Mock( return_value=FakeOpenStackPlugin()) self.patchobject(self.segment, 'physical_resource_name', return_value='test_segment') self.patchobject(neutronV20, 'find_resourceid_by_name_or_id', return_value='private') def test_segment_handle_create(self): seg = mock.Mock(id='9c1eb3fe-7bba-479d-bd43-1d497e53c384') create_props = {'name': 'test_segment', 'network_id': 'private', 'network_type': 'vxlan', 'segmentation_id': 101} mock_create = self.patchobject(self.sdkclient.network, 'create_segment', return_value=seg) self.segment.handle_create() self.assertEqual('9c1eb3fe-7bba-479d-bd43-1d497e53c384', self.segment.resource_id) mock_create.assert_called_once_with(**create_props) def test_segment_handle_delete(self): segment_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' self.segment.resource_id = segment_id mock_delete = self.patchobject(self.sdkclient.network, 'delete_segment', return_value=None) self.assertIsNone(self.segment.handle_delete()) mock_delete.assert_called_once_with(self.segment.resource_id) def test_segment_handle_delete_not_found(self): segment_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' self.segment.resource_id = segment_id mock_delete = self.patchobject( self.sdkclient.network, 'delete_segment', 
side_effect=exceptions.ResourceNotFound) self.assertIsNone(self.segment.handle_delete()) mock_delete.assert_called_once_with(self.segment.resource_id) def test_segment_delete_resource_id_is_none(self): self.segment.resource_id = None mock_delete = self.patchobject(self.sdkclient.network, 'delete_segment') self.assertIsNone(self.segment.handle_delete()) self.assertEqual(0, mock_delete.call_count) def test_segment_handle_update(self): segment_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' self.segment.resource_id = segment_id props = { 'name': 'test_segment', 'description': 'updated' } mock_update = self.patchobject(self.sdkclient.network, 'update_segment') update_dict = props.copy() update_snippet = rsrc_defn.ResourceDefinition( self.segment.name, self.segment.type(), props) # with name self.segment.handle_update( json_snippet=update_snippet, tmpl_diff={}, prop_diff=props) # without name props['name'] = None self.segment.handle_update( json_snippet=update_snippet, tmpl_diff={}, prop_diff=props) self.assertEqual(2, mock_update.call_count) mock_update.assert_called_with(segment_id, **update_dict) def test_validate_vlan_type(self): self.t = template_format.parse(inline_templates.SEGMENT_TEMPLATE) props = self.t['resources']['segment']['properties'] props['network_type'] = 'vlan' self.stack = utils.parse_stack(self.t) rsrc = self.stack['segment'] errMsg = 'physical_network is required for vlan provider network.' error = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertEqual(errMsg, six.text_type(error)) props['physical_network'] = 'physnet' props['segmentation_id'] = '4095' self.stack = utils.parse_stack(self.t) rsrc = self.stack['segment'] errMsg = ('Up to 4094 VLAN network segments can exist ' 'on each physical_network.') error = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertEqual(errMsg, six.text_type(error)) def test_validate_flat_type(self): self.t = template_format.parse(inline_templates.SEGMENT_TEMPLATE) props = self.t['resources']['segment']['properties'] props['network_type'] = 'flat' props['physical_network'] = 'physnet' self.stack = utils.parse_stack(self.t) rsrc = self.stack['segment'] errMsg = ('segmentation_id is prohibited for flat provider network.') error = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertEqual(errMsg, six.text_type(error)) def test_validate_tunnel_type(self): self.t = template_format.parse(inline_templates.SEGMENT_TEMPLATE) props = self.t['resources']['segment']['properties'] props['network_type'] = 'vxlan' props['physical_network'] = 'physnet' self.stack = utils.parse_stack(self.t) rsrc = self.stack['segment'] errMsg = ('physical_network is prohibited for vxlan provider network.') error = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertEqual(errMsg, six.text_type(error)) def test_segment_get_attr(self): segment_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' self.segment.resource_id = segment_id seg = {'name': 'test_segment', 'id': '477e8273-60a7-4c41-b683-fdb0bc7cd151', 'network_type': 'vxlan', 'network_id': 'private', 'segmentation_id': 101} class FakeSegment(object): def to_dict(self): return seg get_mock = self.patchobject(self.sdkclient.network, 'get_segment', return_value=FakeSegment()) self.assertEqual(seg, self.segment.FnGetAtt('show')) get_mock.assert_called_once_with(self.segment.resource_id) def test_needs_replace_failed(self): self.stack.store() self.segment.state_set(self.segment.CREATE, self.segment.FAILED) side_effect = 
[exceptions.ResourceNotFound, 'attr'] mock_show_resource = self.patchobject(self.segment, '_show_resource', side_effect=side_effect) self.segment.resource_id = None self.assertTrue(self.segment.needs_replace_failed()) self.assertEqual(0, mock_show_resource.call_count) self.segment.resource_id = 'seg_id' self.assertTrue(self.segment.needs_replace_failed()) self.assertEqual(1, mock_show_resource.call_count) self.assertFalse(self.segment.needs_replace_failed()) self.assertEqual(2, mock_show_resource.call_count) heat-10.0.2/heat/tests/openstack/neutron/test_neutron_metering.py0000666000175000017500000002563313343562351025316 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from neutronclient.common import exceptions from neutronclient.v2_0 import client as neutronclient import six from heat.common import exception from heat.common import template_format from heat.engine.resources.openstack.neutron import metering from heat.engine import scheduler from heat.tests import common from heat.tests import utils metering_template = ''' heat_template_version: 2015-04-30 description: Template to test metering resources resources: label: type: OS::Neutron::MeteringLabel properties: name: TestLabel description: Description of TestLabel shared: True rule: type: OS::Neutron::MeteringRule properties: metering_label_id: { get_resource: label } remote_ip_prefix: 10.0.3.0/24 direction: ingress excluded: false ''' class MeteringLabelTest(common.HeatTestCase): def setUp(self): super(MeteringLabelTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_metering_label') self.m.StubOutWithMock(neutronclient.Client, 'delete_metering_label') self.m.StubOutWithMock(neutronclient.Client, 'show_metering_label') self.m.StubOutWithMock(neutronclient.Client, 'create_metering_label_rule') self.m.StubOutWithMock(neutronclient.Client, 'delete_metering_label_rule') self.m.StubOutWithMock(neutronclient.Client, 'show_metering_label_rule') def create_metering_label(self): neutronclient.Client.create_metering_label({ 'metering_label': { 'name': 'TestLabel', 'description': 'Description of TestLabel', 'shared': True} }).AndReturn({'metering_label': {'id': '1234'}}) snippet = template_format.parse(metering_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) return metering.MeteringLabel( 'label', resource_defns['label'], self.stack) def test_create(self): rsrc = self.create_metering_label() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_failed(self): neutronclient.Client.create_metering_label({ 'metering_label': { 'name': 'TestLabel', 'description': 'Description of TestLabel', 'shared': True} }).AndRaise(exceptions.NeutronClientException()) self.m.ReplayAll() snippet = template_format.parse(metering_template) stack = utils.parse_stack(snippet) resource_defns = stack.t.resource_definitions(stack) rsrc = metering.MeteringLabel( 'label', 
resource_defns['label'], stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual( 'NeutronClientException: resources.label: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_delete(self): neutronclient.Client.delete_metering_label('1234') neutronclient.Client.show_metering_label('1234').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_metering_label() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_already_gone(self): neutronclient.Client.delete_metering_label('1234').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_metering_label() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_failed(self): neutronclient.Client.delete_metering_label('1234').AndRaise( exceptions.NeutronClientException(status_code=400)) rsrc = self.create_metering_label() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual( 'NeutronClientException: resources.label: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_attribute(self): rsrc = self.create_metering_label() neutronclient.Client.show_metering_label('1234').MultipleTimes( ).AndReturn( {'metering_label': {'name': 'TestLabel', 'description': 'Description of TestLabel', 'shared': True}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual('TestLabel', rsrc.FnGetAtt('name')) self.assertEqual('Description of TestLabel', rsrc.FnGetAtt('description')) self.assertTrue(rsrc.FnGetAtt('shared')) self.m.VerifyAll() class MeteringRuleTest(common.HeatTestCase): def setUp(self): super(MeteringRuleTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_metering_label') self.m.StubOutWithMock(neutronclient.Client, 'delete_metering_label') self.m.StubOutWithMock(neutronclient.Client, 'show_metering_label') self.m.StubOutWithMock(neutronclient.Client, 'create_metering_label_rule') self.m.StubOutWithMock(neutronclient.Client, 'delete_metering_label_rule') self.m.StubOutWithMock(neutronclient.Client, 'show_metering_label_rule') def create_metering_label_rule(self): neutronclient.Client.create_metering_label_rule({ 'metering_label_rule': { 'metering_label_id': '1234', 'remote_ip_prefix': '10.0.3.0/24', 'direction': 'ingress', 'excluded': False} }).AndReturn({'metering_label_rule': {'id': '5678'}}) snippet = template_format.parse(metering_template) self.stack = utils.parse_stack(snippet) self.patchobject(self.stack['label'], 'FnGetRefId', return_value='1234') resource_defns = self.stack.t.resource_definitions(self.stack) return metering.MeteringRule( 'rule', resource_defns['rule'], self.stack) def test_create(self): rsrc = self.create_metering_label_rule() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_failed(self): neutronclient.Client.create_metering_label_rule({ 'metering_label_rule': { 'metering_label_id': '1234', 'remote_ip_prefix': '10.0.3.0/24', 'direction': 'ingress', 
'excluded': False} }).AndRaise(exceptions.NeutronClientException()) self.m.ReplayAll() snippet = template_format.parse(metering_template) stack = utils.parse_stack(snippet) self.patchobject(stack['label'], 'FnGetRefId', return_value='1234') resource_defns = stack.t.resource_definitions(stack) rsrc = metering.MeteringRule( 'rule', resource_defns['rule'], stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual( 'NeutronClientException: resources.rule: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_delete(self): neutronclient.Client.delete_metering_label_rule('5678') neutronclient.Client.show_metering_label_rule('5678').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_metering_label_rule() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_already_gone(self): neutronclient.Client.delete_metering_label_rule('5678').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_metering_label_rule() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_failed(self): neutronclient.Client.delete_metering_label_rule('5678').AndRaise( exceptions.NeutronClientException(status_code=400)) rsrc = self.create_metering_label_rule() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual( 'NeutronClientException: resources.rule: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_attribute(self): rsrc = self.create_metering_label_rule() neutronclient.Client.show_metering_label_rule('5678').MultipleTimes( ).AndReturn( {'metering_label_rule': {'metering_label_id': '1234', 'remote_ip_prefix': '10.0.3.0/24', 'direction': 'ingress', 'excluded': False}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual('10.0.3.0/24', rsrc.FnGetAtt('remote_ip_prefix')) self.assertEqual('ingress', rsrc.FnGetAtt('direction')) self.assertIs(False, rsrc.FnGetAtt('excluded')) self.m.VerifyAll() heat-10.0.2/heat/tests/openstack/neutron/test_neutron_vpnservice.py0000666000175000017500000007304713343562352025673 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
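# The VPN service tests below drive mox's record/replay/verify cycle against
# neutronclient: stub a method on the class, record the expected calls, flip
# to replay, run the code under test, then verify. A minimal self-contained
# sketch of that cycle (Greeter and its method are illustrative stand-ins,
# not part of heat or neutronclient):
import mox


class Greeter(object):
    def greet(self, name):
        return 'hello ' + name


m = mox.Mox()
m.StubOutWithMock(Greeter, 'greet')
Greeter.greet('world').AndReturn('hi world')   # record the expected call
m.ReplayAll()                                  # switch from record to replay
assert Greeter().greet('world') == 'hi world'  # code under test would go here
m.VerifyAll()                                  # every recorded call happened
m.UnsetStubs()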
import copy import mox from neutronclient.common import exceptions from neutronclient.neutron import v2_0 as neutronV20 from neutronclient.v2_0 import client as neutronclient import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine.resources.openstack.neutron import vpnservice from heat.engine import scheduler from heat.tests import common from heat.tests import utils vpnservice_template = ''' heat_template_version: 2015-04-30 description: Template to test vpnservice Neutron resource resources: VPNService: type: OS::Neutron::VPNService properties: name: VPNService description: My new VPN service admin_state_up: true router_id: rou123 subnet: sub123 ''' vpnservice_template_deprecated = vpnservice_template.replace( 'subnet', 'subnet_id') ipsec_site_connection_template = ''' heat_template_version: 2015-04-30 description: Template to test IPsec site connection resource resources: IPsecSiteConnection: type: OS::Neutron::IPsecSiteConnection properties: name: IPsecSiteConnection description: My new VPN connection peer_address: 172.24.4.233 peer_id: 172.24.4.233 peer_cidrs: [ 10.2.0.0/24 ] mtu: 1500 dpd: actions: hold interval: 30 timeout: 120 psk: secret initiator: bi-directional admin_state_up: true ikepolicy_id: ike123 ipsecpolicy_id: ips123 vpnservice_id: vpn123 ''' ikepolicy_template = ''' heat_template_version: 2015-04-30 description: Template to test IKE policy resource resources: IKEPolicy: type: OS::Neutron::IKEPolicy properties: name: IKEPolicy description: My new IKE policy auth_algorithm: sha1 encryption_algorithm: 3des phase1_negotiation_mode: main lifetime: units: seconds value: 3600 pfs: group5 ike_version: v1 ''' ipsecpolicy_template = ''' heat_template_version: 2015-04-30 description: Template to test IPsec policy resource resources: IPsecPolicy: type: OS::Neutron::IPsecPolicy properties: name: IPsecPolicy description: My new IPsec policy transform_protocol: esp encapsulation_mode: tunnel auth_algorithm: sha1 encryption_algorithm: 3des lifetime: units: seconds value: 3600 pfs: group5 ''' class VPNServiceTest(common.HeatTestCase): VPN_SERVICE_CONF = { 'vpnservice': { 'name': 'VPNService', 'description': 'My new VPN service', 'admin_state_up': True, 'router_id': 'rou123', 'subnet_id': 'sub123' } } def setUp(self): super(VPNServiceTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_vpnservice') self.m.StubOutWithMock(neutronclient.Client, 'delete_vpnservice') self.m.StubOutWithMock(neutronclient.Client, 'show_vpnservice') self.m.StubOutWithMock(neutronclient.Client, 'update_vpnservice') self.m.StubOutWithMock(neutronV20, 'find_resourceid_by_name_or_id') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_vpnservice(self, resolve_neutron=True, resolve_router=True): self.stub_SubnetConstraint_validate() self.stub_RouterConstraint_validate() neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'router', 'rou123', cmd_resource=None, ).MultipleTimes().AndReturn('rou123') neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') if resolve_neutron: snippet = template_format.parse(vpnservice_template) else: snippet = template_format.parse(vpnservice_template_deprecated) if resolve_router: props = snippet['resources']['VPNService']['properties'] props['router'] = 'rou123' del props['router_id'] neutronclient.Client.create_vpnservice(
self.VPN_SERVICE_CONF).AndReturn({'vpnservice': {'id': 'vpn123'}}) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) return vpnservice.VPNService('vpnservice', resource_defns['VPNService'], self.stack) def test_create_deprecated(self): self._test_create(resolve_neutron=False) def test_create(self): self._test_create() def test_create_router_id(self): self._test_create(resolve_router=False) def _test_create(self, resolve_neutron=True, resolve_router=True): rsrc = self.create_vpnservice(resolve_neutron, resolve_router) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) # Ensure that property translates if not resolve_router: self.assertEqual('rou123', rsrc.properties.get(rsrc.ROUTER)) self.assertIsNone(rsrc.properties.get(rsrc.ROUTER_ID)) self.m.VerifyAll() def test_create_failed(self): neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'subnet', 'sub123', cmd_resource=None, ).MultipleTimes().AndReturn('sub123') neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'router', 'rou123', cmd_resource=None, ).MultipleTimes().AndReturn('rou123') self.stub_RouterConstraint_validate() neutronclient.Client.create_vpnservice(self.VPN_SERVICE_CONF).AndRaise( exceptions.NeutronClientException()) self.m.ReplayAll() snippet = template_format.parse(vpnservice_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = vpnservice.VPNService('vpnservice', resource_defns['VPNService'], self.stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual( 'NeutronClientException: resources.vpnservice: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_delete(self): neutronclient.Client.delete_vpnservice('vpn123') neutronclient.Client.show_vpnservice('vpn123').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_vpnservice() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_already_gone(self): neutronclient.Client.delete_vpnservice('vpn123').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_vpnservice() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_failed(self): neutronclient.Client.delete_vpnservice('vpn123').AndRaise( exceptions.NeutronClientException(status_code=400)) rsrc = self.create_vpnservice() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual( 'NeutronClientException: resources.vpnservice: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_attribute(self): rsrc = self.create_vpnservice() neutronclient.Client.show_vpnservice('vpn123').MultipleTimes( ).AndReturn(self.VPN_SERVICE_CONF) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual('VPNService', rsrc.FnGetAtt('name')) self.assertEqual('My new VPN service', rsrc.FnGetAtt('description')) self.assertIs(True, rsrc.FnGetAtt('admin_state_up')) 
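        # The attribute reads below resolve through the stubbed
        # show_vpnservice(), which returns VPN_SERVICE_CONF verbatim.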
self.assertEqual('rou123', rsrc.FnGetAtt('router_id')) self.assertEqual('sub123', rsrc.FnGetAtt('subnet_id')) self.m.VerifyAll() def test_attribute_failed(self): rsrc = self.create_vpnservice() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'non-existent_property') self.assertEqual( 'The Referenced Attribute (vpnservice non-existent_property) is ' 'incorrect.', six.text_type(error)) self.m.VerifyAll() def test_update(self): rsrc = self.create_vpnservice() self.patchobject(rsrc, 'physical_resource_name', return_value='VPNService') upd_dict = {'vpnservice': {'name': 'VPNService', 'admin_state_up': False}} neutronclient.Client.update_vpnservice( 'vpn123', upd_dict).MultipleTimes().AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() # with name prop_diff = {'name': 'VPNService', 'admin_state_up': False} self.assertIsNone(rsrc.handle_update({}, {}, prop_diff)) # without name prop_diff = {'name': None, 'admin_state_up': False} self.assertIsNone(rsrc.handle_update({}, {}, prop_diff)) self.m.VerifyAll() class IPsecSiteConnectionTest(common.HeatTestCase): IPSEC_SITE_CONNECTION_CONF = { 'ipsec_site_connection': { 'name': 'IPsecSiteConnection', 'description': 'My new VPN connection', 'peer_address': '172.24.4.233', 'peer_id': '172.24.4.233', 'peer_cidrs': ['10.2.0.0/24'], 'mtu': 1500, 'dpd': { 'actions': 'hold', 'interval': 30, 'timeout': 120 }, 'psk': 'secret', 'initiator': 'bi-directional', 'admin_state_up': True, 'ikepolicy_id': 'ike123', 'ipsecpolicy_id': 'ips123', 'vpnservice_id': 'vpn123' } } def setUp(self): super(IPsecSiteConnectionTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_ipsec_site_connection') self.m.StubOutWithMock(neutronclient.Client, 'delete_ipsec_site_connection') self.m.StubOutWithMock(neutronclient.Client, 'show_ipsec_site_connection') self.m.StubOutWithMock(neutronclient.Client, 'update_ipsec_site_connection') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_ipsec_site_connection(self): neutronclient.Client.create_ipsec_site_connection( self.IPSEC_SITE_CONNECTION_CONF).AndReturn( {'ipsec_site_connection': {'id': 'con123'}}) snippet = template_format.parse(ipsec_site_connection_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) return vpnservice.IPsecSiteConnection( 'ipsec_site_connection', resource_defns['IPsecSiteConnection'], self.stack) def test_create(self): rsrc = self.create_ipsec_site_connection() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_failed(self): neutronclient.Client.create_ipsec_site_connection( self.IPSEC_SITE_CONNECTION_CONF).AndRaise( exceptions.NeutronClientException()) self.m.ReplayAll() snippet = template_format.parse(ipsec_site_connection_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = vpnservice.IPsecSiteConnection( 'ipsec_site_connection', resource_defns['IPsecSiteConnection'], self.stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual( 'NeutronClientException: resources.ipsec_site_connection: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_delete(self): 
neutronclient.Client.delete_ipsec_site_connection('con123') neutronclient.Client.show_ipsec_site_connection('con123').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_ipsec_site_connection() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_already_gone(self): neutronclient.Client.delete_ipsec_site_connection('con123').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_ipsec_site_connection() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_failed(self): neutronclient.Client.delete_ipsec_site_connection('con123').AndRaise( exceptions.NeutronClientException(status_code=400)) rsrc = self.create_ipsec_site_connection() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual( 'NeutronClientException: resources.ipsec_site_connection: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_attribute(self): rsrc = self.create_ipsec_site_connection() neutronclient.Client.show_ipsec_site_connection( 'con123').MultipleTimes().AndReturn( self.IPSEC_SITE_CONNECTION_CONF) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual('IPsecSiteConnection', rsrc.FnGetAtt('name')) self.assertEqual('My new VPN connection', rsrc.FnGetAtt('description')) self.assertEqual('172.24.4.233', rsrc.FnGetAtt('peer_address')) self.assertEqual('172.24.4.233', rsrc.FnGetAtt('peer_id')) self.assertEqual(['10.2.0.0/24'], rsrc.FnGetAtt('peer_cidrs')) self.assertEqual('hold', rsrc.FnGetAtt('dpd')['actions']) self.assertEqual(30, rsrc.FnGetAtt('dpd')['interval']) self.assertEqual(120, rsrc.FnGetAtt('dpd')['timeout']) self.assertEqual('secret', rsrc.FnGetAtt('psk')) self.assertEqual('bi-directional', rsrc.FnGetAtt('initiator')) self.assertIs(True, rsrc.FnGetAtt('admin_state_up')) self.assertEqual('ike123', rsrc.FnGetAtt('ikepolicy_id')) self.assertEqual('ips123', rsrc.FnGetAtt('ipsecpolicy_id')) self.assertEqual('vpn123', rsrc.FnGetAtt('vpnservice_id')) self.m.VerifyAll() def test_attribute_failed(self): rsrc = self.create_ipsec_site_connection() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'non-existent_property') self.assertEqual( 'The Referenced Attribute (ipsec_site_connection ' 'non-existent_property) is incorrect.', six.text_type(error)) self.m.VerifyAll() def test_update(self): rsrc = self.create_ipsec_site_connection() neutronclient.Client.update_ipsec_site_connection( 'con123', {'ipsec_site_connection': {'admin_state_up': False}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = dict(rsrc.properties) props['admin_state_up'] = False update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() class IKEPolicyTest(common.HeatTestCase): IKE_POLICY_CONF = { 'ikepolicy': { 'name': 'IKEPolicy', 'description': 'My new IKE policy', 'auth_algorithm': 'sha1', 'encryption_algorithm': '3des', 'phase1_negotiation_mode': 'main', 'lifetime': { 'units': 'seconds', 'value': 3600 }, 'pfs': 'group5', 'ike_version': 'v1' } } def setUp(self): 
super(IKEPolicyTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_ikepolicy') self.m.StubOutWithMock(neutronclient.Client, 'delete_ikepolicy') self.m.StubOutWithMock(neutronclient.Client, 'show_ikepolicy') self.m.StubOutWithMock(neutronclient.Client, 'update_ikepolicy') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_ikepolicy(self): neutronclient.Client.create_ikepolicy( self.IKE_POLICY_CONF).AndReturn( {'ikepolicy': {'id': 'ike123'}}) snippet = template_format.parse(ikepolicy_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) return vpnservice.IKEPolicy('ikepolicy', resource_defns['IKEPolicy'], self.stack) def test_create(self): rsrc = self.create_ikepolicy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_failed(self): neutronclient.Client.create_ikepolicy( self.IKE_POLICY_CONF).AndRaise( exceptions.NeutronClientException()) self.m.ReplayAll() snippet = template_format.parse(ikepolicy_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = vpnservice.IKEPolicy( 'ikepolicy', resource_defns['IKEPolicy'], self.stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual( 'NeutronClientException: resources.ikepolicy: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_delete(self): neutronclient.Client.delete_ikepolicy('ike123') neutronclient.Client.show_ikepolicy('ike123').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_ikepolicy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_already_gone(self): neutronclient.Client.delete_ikepolicy('ike123').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_ikepolicy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_failed(self): neutronclient.Client.delete_ikepolicy('ike123').AndRaise( exceptions.NeutronClientException(status_code=400)) rsrc = self.create_ikepolicy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual( 'NeutronClientException: resources.ikepolicy: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_attribute(self): rsrc = self.create_ikepolicy() neutronclient.Client.show_ikepolicy( 'ike123').MultipleTimes().AndReturn(self.IKE_POLICY_CONF) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual('IKEPolicy', rsrc.FnGetAtt('name')) self.assertEqual('My new IKE policy', rsrc.FnGetAtt('description')) self.assertEqual('sha1', rsrc.FnGetAtt('auth_algorithm')) self.assertEqual('3des', rsrc.FnGetAtt('encryption_algorithm')) self.assertEqual('main', rsrc.FnGetAtt('phase1_negotiation_mode')) self.assertEqual('seconds', rsrc.FnGetAtt('lifetime')['units']) self.assertEqual(3600, rsrc.FnGetAtt('lifetime')['value']) self.assertEqual('group5', rsrc.FnGetAtt('pfs')) 
self.assertEqual('v1', rsrc.FnGetAtt('ike_version')) self.m.VerifyAll() def test_attribute_failed(self): rsrc = self.create_ikepolicy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'non-existent_property') self.assertEqual( 'The Referenced Attribute (ikepolicy non-existent_property) is ' 'incorrect.', six.text_type(error)) self.m.VerifyAll() def test_update(self): rsrc = self.create_ikepolicy() update_body = { 'ikepolicy': { 'name': 'New IKEPolicy', 'auth_algorithm': 'sha512' } } neutronclient.Client.update_ikepolicy('ike123', update_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = dict(rsrc.properties) props['name'] = 'New IKEPolicy' props['auth_algorithm'] = 'sha512' update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() class IPsecPolicyTest(common.HeatTestCase): IPSEC_POLICY_CONF = { 'ipsecpolicy': { 'name': 'IPsecPolicy', 'description': 'My new IPsec policy', 'transform_protocol': 'esp', 'encapsulation_mode': 'tunnel', 'auth_algorithm': 'sha1', 'encryption_algorithm': '3des', 'lifetime': { 'units': 'seconds', 'value': 3600 }, 'pfs': 'group5' } } def setUp(self): super(IPsecPolicyTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_ipsecpolicy') self.m.StubOutWithMock(neutronclient.Client, 'delete_ipsecpolicy') self.m.StubOutWithMock(neutronclient.Client, 'show_ipsecpolicy') self.m.StubOutWithMock(neutronclient.Client, 'update_ipsecpolicy') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_ipsecpolicy(self): neutronclient.Client.create_ipsecpolicy( self.IPSEC_POLICY_CONF).AndReturn( {'ipsecpolicy': {'id': 'ips123'}}) snippet = template_format.parse(ipsecpolicy_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) return vpnservice.IPsecPolicy('ipsecpolicy', resource_defns['IPsecPolicy'], self.stack) def test_create(self): rsrc = self.create_ipsecpolicy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_failed(self): neutronclient.Client.create_ipsecpolicy( self.IPSEC_POLICY_CONF).AndRaise( exceptions.NeutronClientException()) self.m.ReplayAll() snippet = template_format.parse(ipsecpolicy_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = vpnservice.IPsecPolicy( 'ipsecpolicy', resource_defns['IPsecPolicy'], self.stack) error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) self.assertEqual( 'NeutronClientException: resources.ipsecpolicy: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_delete(self): neutronclient.Client.delete_ipsecpolicy('ips123') neutronclient.Client.show_ipsecpolicy('ips123').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_ipsecpolicy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_already_gone(self): neutronclient.Client.delete_ipsecpolicy('ips123').AndRaise( exceptions.NeutronClientException(status_code=404)) rsrc = self.create_ipsecpolicy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() 
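        # Create succeeds first; the stubbed 404 on delete then exercises
        # the already-gone path, which must still reach DELETE/COMPLETE.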
scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_delete_failed(self): neutronclient.Client.delete_ipsecpolicy('ips123').AndRaise( exceptions.NeutronClientException(status_code=400)) rsrc = self.create_ipsecpolicy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) self.assertEqual( 'NeutronClientException: resources.ipsecpolicy: ' 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) self.m.VerifyAll() def test_attribute(self): rsrc = self.create_ipsecpolicy() neutronclient.Client.show_ipsecpolicy( 'ips123').MultipleTimes().AndReturn(self.IPSEC_POLICY_CONF) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual('IPsecPolicy', rsrc.FnGetAtt('name')) self.assertEqual('My new IPsec policy', rsrc.FnGetAtt('description')) self.assertEqual('esp', rsrc.FnGetAtt('transform_protocol')) self.assertEqual('tunnel', rsrc.FnGetAtt('encapsulation_mode')) self.assertEqual('sha1', rsrc.FnGetAtt('auth_algorithm')) self.assertEqual('3des', rsrc.FnGetAtt('encryption_algorithm')) self.assertEqual('seconds', rsrc.FnGetAtt('lifetime')['units']) self.assertEqual(3600, rsrc.FnGetAtt('lifetime')['value']) self.assertEqual('group5', rsrc.FnGetAtt('pfs')) self.m.VerifyAll() def test_attribute_failed(self): rsrc = self.create_ipsecpolicy() self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'non-existent_property') self.assertEqual( 'The Referenced Attribute (ipsecpolicy non-existent_property) is ' 'incorrect.', six.text_type(error)) self.m.VerifyAll() def test_update(self): rsrc = self.create_ipsecpolicy() neutronclient.Client.update_ipsecpolicy( 'ips123', {'ipsecpolicy': {'name': 'New IPsecPolicy'}}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() update_template = copy.deepcopy(rsrc.t) props = dict(rsrc.properties) props['name'] = 'New IPsecPolicy' update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() self.m.VerifyAll() heat-10.0.2/heat/tests/openstack/neutron/test_sfc/0000775000175000017500000000000013343562672022126 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/neutron/test_sfc/test_port_chain.py0000666000175000017500000001463313343562340025666 0ustar zuulzuul00000000000000# # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
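# The SFC tests that follow replace heat's neutron client plugin with a
# MagicMock and assert on the generic create/update/delete_sfc_resource
# calls. A minimal sketch of that style (the plugin object here is a plain
# mock, not heat's real NeutronClientPlugin):
import mock

plugin = mock.MagicMock()
plugin.create_sfc_resource.return_value = mock.MagicMock(id='uuid-1')
body = {'name': 'pc1', 'port_pair_groups': ['ppg1']}
created = plugin.create_sfc_resource('port_chain', body)
assert created.id == 'uuid-1'
plugin.create_sfc_resource.assert_called_once_with('port_chain', body)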
import mock from heat.engine.clients.os import neutron from heat.engine.resources.openstack.neutron.sfc import port_chain from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils port_chain_template = { 'heat_template_version': '2015-04-30', 'resources': { 'test_resource': { 'type': 'OS::Neutron::PortChain', 'properties': { 'name': 'test_port_chain', 'description': 'port_chain_desc', 'port_pair_groups': ['port_pair_group_1'], 'flow_classifiers': ['flow_classifier1'], 'chain_parameters': {"correlation": 'mpls'} } } } } class PortChainTest(common.HeatTestCase): def setUp(self): super(PortChainTest, self).setUp() self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) self.ctx = utils.dummy_context() self.stack = stack.Stack( self.ctx, 'test_stack', template.Template(port_chain_template) ) self.test_resource = self.stack['test_resource'] self.test_client_plugin = mock.MagicMock() self.test_resource.client_plugin = mock.MagicMock( return_value=self.test_client_plugin) self.test_client = mock.MagicMock() self.test_resource.client = mock.MagicMock( return_value=self.test_client) self.test_client_plugin.get_notification = mock.MagicMock( return_value='sample_notification') self.patchobject(self.test_client_plugin, 'resolve_ext_resource' ).return_value = ('port_pair_group_1') self.patchobject(self.test_client_plugin, 'resolve_ext_resource' ).return_value = ('flow_classifier1') def test_resource_mapping(self): mapping = port_chain.resource_mapping() self.assertEqual(port_chain.PortChain, mapping['OS::Neutron::PortChain']) def _get_mock_resource(self): value = mock.MagicMock() value.id = '477e8273-60a7-4c41-b683-fdb0bc7cd152' return value def test_resource_handle_create(self): mock_pc_create = self.test_client_plugin.create_sfc_resource mock_resource = self._get_mock_resource() mock_pc_create.return_value = mock_resource # validate the properties self.assertEqual( 'test_port_chain', self.test_resource.properties.get( port_chain.PortChain.NAME)) self.assertEqual( 'port_chain_desc', self.test_resource.properties.get( port_chain.PortChain.DESCRIPTION)) self.assertEqual( ['port_pair_group_1'], self.test_resource.properties.get( port_chain.PortChain.PORT_PAIR_GROUPS)) self.assertEqual( ['flow_classifier1'], self.test_resource.properties.get( port_chain.PortChain.FLOW_CLASSIFIERS)) self.assertEqual( {"correlation": 'mpls'}, self.test_resource.properties.get( port_chain.PortChain.CHAIN_PARAMETERS)) self.test_resource.data_set = mock.Mock() self.test_resource.handle_create() mock_pc_create.assert_called_once_with( 'port_chain', { 'name': 'test_port_chain', 'description': 'port_chain_desc', 'port_pair_groups': ['port_pair_group_1'], 'flow_classifiers': ['flow_classifier1'], 'chain_parameters': {"correlation": 'mpls'}} ) def delete_portchain(self): mock_pc_delete = self.test_client_plugin.delete_sfc_resource self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_pc_delete.return_value = None self.assertIsNone(self.test_resource.handle_delete()) mock_pc_delete.assert_called_once_with( 'port_chain', self.test_resource.resource_id) def delete_portchain_resource_id_is_none(self): self.test_resource.resource_id = None self.assertIsNone(self.test_resource.handle_delete()) self.assertEqual(0, self.test_client_plugin. 
delete_sfc_resource.call_count) def test_resource_handle_delete_not_found(self): self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_pc_delete = self.test_client_plugin.delete_sfc_resource mock_pc_delete.side_effect = self.test_client_plugin.NotFound self.assertIsNone(self.test_resource.handle_delete()) def test_resource_show_resource(self): mock_pc_get = self.test_client_plugin.show_sfc_resource mock_pc_get.return_value = None self.assertIsNone(self.test_resource._show_resource(), 'Failed to show resource') def test_resource_handle_update(self): mock_ppg_patch = self.test_client_plugin.update_sfc_resource self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = { 'name': 'name-updated', 'description': 'description-updated', 'port_pair_groups': ['port_pair_group_2'], 'flow_classifiers': ['flow_classifier2'], } self.test_resource.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) mock_ppg_patch.assert_called_once_with( 'port_chain', { 'name': 'name-updated', 'description': 'description-updated', 'port_pair_groups': ['port_pair_group_2'], 'flow_classifiers': ['flow_classifier2'], }, self.test_resource.resource_id) heat-10.0.2/heat/tests/openstack/neutron/test_sfc/__init__.py0000666000175000017500000000000013343562340024217 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/neutron/test_sfc/test_port_pair.py0000666000175000017500000001364213343562340025536 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
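# test_port_pair.py builds its stack from an in-memory HOT dict rather than
# a YAML string. A minimal sketch of how such a dict is shaped and how a
# resource's properties are picked out of it (plain dict access here; heat's
# template.Template adds parsing and validation on top):
hot = {
    'heat_template_version': '2016-04-08',
    'resources': {
        'test_resource': {
            'type': 'OS::Neutron::PortPair',
            'properties': {'name': 'pp1', 'ingress': 'a', 'egress': 'b'},
        },
    },
}
props = hot['resources']['test_resource']['properties']
assert props['ingress'] == 'a' and props['egress'] == 'b'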
import mock from heat.engine.resources.openstack.neutron.sfc import port_pair from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils sample_template = { 'heat_template_version': '2016-04-08', 'resources': { 'test_resource': { 'type': 'OS::Neutron::PortPair', 'properties': { 'name': 'test_port_pair', 'description': 'desc', 'ingress': '6af055d3-26f6-48dd-a597-7611d7e58d35', 'egress': '6af055d3-26f6-48dd-a597-7611d7e58d35', 'service_function_parameters': {'correlation': None} } } } } class PortPairTest(common.HeatTestCase): def setUp(self): super(PortPairTest, self).setUp() self.ctx = utils.dummy_context() self.stack = stack.Stack( self.ctx, 'test_stack', template.Template(sample_template) ) self.test_resource = self.stack['test_resource'] self.test_client_plugin = mock.MagicMock() self.test_resource.client_plugin = mock.MagicMock( return_value=self.test_client_plugin) self.test_client = mock.MagicMock() self.test_resource.client = mock.MagicMock( return_value=self.test_client) self.test_client_plugin.get_notification = mock.MagicMock( return_value='sample_notification') def test_resource_mapping(self): mapping = port_pair.resource_mapping() self.assertEqual(port_pair.PortPair, mapping['OS::Neutron::PortPair']) def _get_mock_resource(self): value = mock.MagicMock() value.id = '477e8273-60a7-4c41-b683-fdb0bc7cd152' return value def test_resource_handle_create(self): mock_port_pair_create = self.test_client_plugin.create_sfc_resource mock_resource = self._get_mock_resource() mock_port_pair_create.return_value = mock_resource # validate the properties self.assertEqual( 'test_port_pair', self.test_resource.properties.get( port_pair.PortPair.NAME)) self.assertEqual( 'desc', self.test_resource.properties.get( port_pair.PortPair.DESCRIPTION)) self.assertEqual( '6af055d3-26f6-48dd-a597-7611d7e58d35', self.test_resource.properties.get( port_pair.PortPair.INGRESS)) self.assertEqual( '6af055d3-26f6-48dd-a597-7611d7e58d35', self.test_resource.properties.get( port_pair.PortPair.EGRESS)) self.assertEqual( {'correlation': None}, self.test_resource.properties.get( port_pair.PortPair.SERVICE_FUNCTION_PARAMETERS)) self.test_resource.data_set = mock.Mock() self.test_resource.handle_create() mock_port_pair_create.assert_called_once_with( 'port_pair', { 'name': 'test_port_pair', 'description': 'desc', 'ingress': '6af055d3-26f6-48dd-a597-7611d7e58d35', 'egress': '6af055d3-26f6-48dd-a597-7611d7e58d35', 'service_function_parameters': {'correlation': None}, } ) def test_resource_handle_delete(self): mock_port_pair_delete = self.test_client_plugin.delete_sfc_resource self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_port_pair_delete.return_value = None self.assertIsNone(self.test_resource.handle_delete()) mock_port_pair_delete.assert_called_once_with( 'port_pair', self.test_resource.resource_id) def test_resource_handle_delete_resource_id_is_none(self): self.test_resource.resource_id = None self.assertIsNone(self.test_resource.handle_delete()) self.assertEqual(0, self.test_client_plugin. 
delete_sfc_resource.call_count) def test_resource_handle_delete_not_found(self): self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_port_pair_delete = self.test_client_plugin.delete_sfc_resource mock_port_pair_delete.side_effect = self.test_client_plugin.NotFound self.assertIsNone(self.test_resource.handle_delete()) def test_resource_show_resource(self): mock_port_pair_get = self.test_client_plugin.show_sfc_resource mock_port_pair_get.return_value = {} self.assertEqual({}, self.test_resource._show_resource(), 'Failed to show resource') def test_resource_handle_update(self): mock_port_pair_patch = self.test_client_plugin.update_sfc_resource self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = { port_pair.PortPair.NAME: 'name-updated', port_pair.PortPair.DESCRIPTION: 'description-updated', } self.test_resource.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) mock_port_pair_patch.assert_called_once_with( 'port_pair', { 'name': 'name-updated', 'description': 'description-updated', }, self.test_resource.resource_id) heat-10.0.2/heat/tests/openstack/neutron/test_sfc/test_port_pair_group.py0000666000175000017500000001525113343562340026750 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
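# handle_update() in these tests receives only the properties that changed
# (prop_diff). A minimal sketch of how such a diff relates to the old and
# new property sets (pure-dict illustration; heat computes this internally):
old = {'name': 'ppg', 'description': 'd', 'port_pairs': ['port1']}
new = {'name': 'ppg', 'description': 'd', 'port_pairs': ['port2']}
prop_diff = {k: v for k, v in new.items() if old.get(k) != v}
assert prop_diff == {'port_pairs': ['port2']}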
import mock from heat.engine.clients.os import neutron from heat.engine.resources.openstack.neutron.sfc import port_pair_group from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils sample_template = { 'heat_template_version': '2016-04-08', 'resources': { 'test_resource': { 'type': 'OS::Neutron::PortPairGroup', 'properties': { 'name': 'test_port_pair_group', 'description': 'desc', 'port_pairs': ['port1'] } } } } class PortPairGroupTest(common.HeatTestCase): def setUp(self): super(PortPairGroupTest, self).setUp() self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) self.ctx = utils.dummy_context() self.stack = stack.Stack( self.ctx, 'test_stack', template.Template(sample_template) ) self.test_resource = self.stack['test_resource'] self.test_client_plugin = mock.MagicMock() self.test_resource.client_plugin = mock.MagicMock( return_value=self.test_client_plugin) self.test_client = mock.MagicMock() self.test_resource.client = mock.MagicMock( return_value=self.test_client) self.test_client_plugin.get_notification = mock.MagicMock( return_value='sample_notification') self.patchobject(self.test_client_plugin, 'resolve_ext_resource').return_value = ('port1') def test_resource_mapping(self): mapping = port_pair_group.resource_mapping() self.assertEqual(port_pair_group.PortPairGroup, mapping['OS::Neutron::PortPairGroup']) def _get_mock_resource(self): value = mock.MagicMock() value.id = '477e8273-60a7-4c41-b683-fdb0bc7cd152' return value def _resolve_sfc_resource(self): value = mock.MagicMock() value.id = '[port1]' return value.id def test_resource_handle_create(self): mock_ppg_create = self.test_client_plugin.create_sfc_resource mock_resource = self._get_mock_resource() mock_ppg_create.return_value = mock_resource # validate the properties self.assertEqual( 'test_port_pair_group', self.test_resource.properties.get( port_pair_group.PortPairGroup.NAME)) self.assertEqual( 'desc', self.test_resource.properties.get( port_pair_group.PortPairGroup.DESCRIPTION)) self.assertEqual( ['port1'], self.test_resource.properties.get( port_pair_group.PortPairGroup.PORT_PAIRS)) self.test_resource.data_set = mock.Mock() self.test_resource.handle_create() mock_ppg_create.assert_called_once_with( 'port_pair_group', { 'name': 'test_port_pair_group', 'description': 'desc', 'port_pairs': ['port1'], } ) def test_resource_handle_delete(self): mock_ppg_delete = self.test_client_plugin.delete_sfc_resource self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_ppg_delete.return_value = None self.assertIsNone(self.test_resource.handle_delete()) mock_ppg_delete.assert_called_once_with( 'port_pair_group', self.test_resource.resource_id) def test_resource_handle_delete_resource_id_is_none(self): self.test_resource.resource_id = None self.assertIsNone(self.test_resource.handle_delete()) self.assertEqual(0, self.test_client_plugin. 
delete_sfc_resource.call_count) def test_resource_handle_delete_not_found(self): self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' mock_ppg_delete = self.test_client_plugin.delete_sfc_resource mock_ppg_delete.side_effect = self.test_client_plugin.NotFound self.assertIsNone(self.test_resource.handle_delete()) def test_resource_show_resource(self): mock_ppg_get = self.test_client_plugin.show_sfc_resource mock_ppg_get.return_value = {} self.assertEqual({}, self.test_resource._show_resource(), 'Failed to show resource') def test_resource_handle_update(self): mock_ppg_patch = self.test_client_plugin.update_sfc_resource self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = { 'name': 'name-updated', 'description': 'description-updated', } self.test_resource.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) mock_ppg_patch.assert_called_once_with( 'port_pair_group', { 'name': 'name-updated', 'description': 'description-updated', }, self.test_resource.resource_id) def test_resource_handle_update_port_pairs(self): self.patchobject(self.test_client_plugin, 'resolve_ext_resource').return_value = ('port2') mock_ppg_patch = self.test_client_plugin.update_sfc_resource self.test_resource.resource_id = '477e8273-60a7-4c41-b683-fdb0bc7cd151' prop_diff = { port_pair_group.PortPairGroup.NAME: 'name', port_pair_group.PortPairGroup.DESCRIPTION: 'description', port_pair_group.PortPairGroup.PORT_PAIRS: ['port2'], } self.test_resource.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) mock_ppg_patch.assert_called_once_with( 'port_pair_group', { 'name': 'name', 'description': 'description', 'port_pairs': ['port2'], }, self.test_resource.resource_id) heat-10.0.2/heat/tests/openstack/neutron/test_sfc/test_flow_classifier.py0000666000175000017500000001767613343562340026725 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
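# Several tests below assert that handle_delete() tolerates a NotFound from
# the client plugin. A minimal sketch of that tolerant-delete shape
# (NotFound and the plugin are stand-ins for heat's real exception/plugin):
import mock


class NotFound(Exception):
    pass


plugin = mock.MagicMock()
plugin.NotFound = NotFound
plugin.delete_sfc_resource.side_effect = NotFound


def tolerant_delete(resource_id):
    if resource_id is None:      # nothing was ever created
        return None
    try:
        plugin.delete_sfc_resource('flow_classifier', resource_id)
    except plugin.NotFound:      # already gone counts as deleted
        pass


assert tolerant_delete('uuid-x') is None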
import mock from heat.engine.resources.openstack.neutron.sfc import flow_classifier from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils sample_template = { 'heat_template_version': '2016-04-08', 'resources': { 'test_resource': { 'type': 'OS::Neutron::FlowClassifier', 'properties': { 'name': 'test_flow_classifier', 'description': 'flow_classifier_desc', 'protocol': 'tcp', 'ethertype': 'IPv4', 'source_ip_prefix': '10.0.3.21', 'destination_ip_prefix': '10.0.3.22', 'source_port_range_min': 1, 'source_port_range_max': 10, 'destination_port_range_min': 80, 'destination_port_range_max': 100, 'logical_source_port': 'port-id1', 'logical_destination_port': 'port-id2', 'l7_parameters': {"url": 'http://local'} } } } } class FlowClassifierTest(common.HeatTestCase): def setUp(self): super(FlowClassifierTest, self).setUp() self.ctx = utils.dummy_context() self.stack = stack.Stack( self.ctx, 'test_stack', template.Template(sample_template) ) self.test_resource = self.stack['test_resource'] self.test_client_plugin = mock.MagicMock() self.test_resource.client_plugin = mock.MagicMock( return_value=self.test_client_plugin) self.test_client = mock.MagicMock() self.test_resource.client = mock.MagicMock( return_value=self.test_client) self.test_client_plugin.get_notification = mock.MagicMock( return_value='sample_notification') def test_resource_mapping(self): mapping = flow_classifier.resource_mapping() self.assertEqual(flow_classifier.FlowClassifier, mapping['OS::Neutron::FlowClassifier']) def _get_mock_resource(self): value = mock.MagicMock() value.id = '2a046ff4-cd7b-4500-b8f0-b60d96ce3e0c' return value def test_resource_handle_create(self): mock_fc_create = self.test_client_plugin.create_sfc_resource mock_resource = self._get_mock_resource() mock_fc_create.return_value = mock_resource # validate the properties self.assertEqual( 'test_flow_classifier', self.test_resource.properties.get( flow_classifier.FlowClassifier.NAME)) self.assertEqual( 'flow_classifier_desc', self.test_resource.properties.get( flow_classifier.FlowClassifier.DESCRIPTION)) self.assertEqual( 'tcp', self.test_resource.properties.get( flow_classifier.FlowClassifier.PROTOCOL)) self.assertEqual( 'IPv4', self.test_resource.properties.get( flow_classifier.FlowClassifier.ETHERTYPE)) self.assertEqual( '10.0.3.21', self.test_resource.properties.get( flow_classifier.FlowClassifier.SOURCE_IP_PREFIX)) self.assertEqual( '10.0.3.22', self.test_resource.properties.get( flow_classifier.FlowClassifier.DESTINATION_IP_PREFIX)) self.assertEqual( 1, self.test_resource.properties.get( flow_classifier.FlowClassifier.SOURCE_PORT_RANGE_MIN)) self.assertEqual( 10, self.test_resource.properties.get( flow_classifier.FlowClassifier.SOURCE_PORT_RANGE_MAX)) self.assertEqual( 80, self.test_resource.properties.get( flow_classifier.FlowClassifier.DESTINATION_PORT_RANGE_MIN)) self.assertEqual( 100, self.test_resource.properties.get( flow_classifier.FlowClassifier.DESTINATION_PORT_RANGE_MAX)) self.assertEqual( 'port-id1', self.test_resource.properties.get( flow_classifier.FlowClassifier.LOGICAL_SOURCE_PORT)) self.assertEqual( 'port-id2', self.test_resource.properties.get( flow_classifier.FlowClassifier.LOGICAL_DESTINATION_PORT)) self.assertEqual( {"url": 'http://local'}, self.test_resource.properties.get( flow_classifier.FlowClassifier.L7_PARAMETERS)) self.test_resource.data_set = mock.Mock() self.test_resource.handle_create() mock_fc_create.assert_called_once_with( 'flow_classifier', { 'name': 
'test_flow_classifier', 'description': 'flow_classifier_desc', 'protocol': 'tcp', 'ethertype': 'IPv4', 'source_ip_prefix': '10.0.3.21', 'destination_ip_prefix': '10.0.3.22', 'source_port_range_min': 1, 'source_port_range_max': 10, 'destination_port_range_min': 80, 'destination_port_range_max': 100, 'logical_source_port': 'port-id1', 'logical_destination_port': 'port-id2', 'l7_parameters': {"url": 'http://local'} } ) def test_resource_handle_delete(self): mock_fc_delete = self.test_client_plugin.delete_sfc_resource self.test_resource.resource_id = '2a046ff4-cd7b-4500-b8f0-b60d96ce3e0c' mock_fc_delete.return_value = None self.assertIsNone(self.test_resource.handle_delete()) mock_fc_delete.assert_called_once_with( 'flow_classifier', self.test_resource.resource_id) def test_resource_handle_delete_resource_id_is_none(self): self.test_resource.resource_id = None self.assertIsNone(self.test_resource.handle_delete()) self.assertEqual(0, self.test_client_plugin. delete_sfc_resource.call_count) def test_resource_handle_delete_not_found(self): self.test_resource.resource_id = '2a046ff4-cd7b-4500-b8f0-b60d96ce3e0c' mock_fc_delete = self.test_client_plugin.delete_sfc_resource mock_fc_delete.side_effect = self.test_client_plugin.NotFound self.assertIsNone(self.test_resource.handle_delete()) def test_resource_show_resource(self): mock_fc_get = self.test_client_plugin.show_sfc_resource mock_fc_get.return_value = {} self.assertEqual({}, self.test_resource._show_resource(), 'Failed to show resource') def test_resource_handle_update(self): mock_fc_patch = self.test_client_plugin.update_sfc_resource self.test_resource.resource_id = '2a046ff4-cd7b-4500-b8f0-b60d96ce3e0c' prop_diff = { flow_classifier.FlowClassifier.NAME: 'name-updated', flow_classifier.FlowClassifier.DESCRIPTION: 'description-updated', } self.test_resource.handle_update(json_snippet=None, tmpl_diff=None, prop_diff=prop_diff) mock_fc_patch.assert_called_once_with( 'flow_classifier', { 'name': 'name-updated', 'description': 'description-updated', }, self.test_resource.resource_id) heat-10.0.2/heat/tests/openstack/neutron/test_neutron_provider_net.py0000666000175000017500000001751213343562351026201 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
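# The provider-net tests derive a BUILD-state fixture from an ACTIVE one via
# copy.deepcopy (stpna/stpnb below). A minimal sketch of why deepcopy
# matters for nested fixtures (a shallow copy would mutate the original):
import copy

active = {'network': {'status': 'ACTIVE', 'id': 'net-1'}}
building = copy.deepcopy(active)
building['network']['status'] = 'BUILD'
assert active['network']['status'] == 'ACTIVE'  # original left intact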
import copy import mock from neutronclient.common import exceptions as qe from neutronclient.v2_0 import client as neutronclient from heat.common import exception from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine.resources.openstack.neutron import provider_net from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests import utils provider_network_template = ''' heat_template_version: 2015-04-30 description: Template to test provider_net Neutron resources resources: provider_network_vlan: type: OS::Neutron::ProviderNet properties: name: the_provider_network network_type: vlan physical_network: physnet_1 segmentation_id: 101 router_external: False shared: true ''' stpna = { "network": { "status": "ACTIVE", "subnets": [], "name": "the_provider_network", "admin_state_up": True, "shared": True, "provider:network_type": "vlan", "provider:physical_network": "physnet_1", "provider:segmentation_id": "101", "router:external": False, "tenant_id": "c1210485b2424d48804aad5d39c61b8f", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" } } stpnb = copy.deepcopy(stpna) stpnb['network']['status'] = "BUILD" class NeutronProviderNetTest(common.HeatTestCase): def setUp(self): super(NeutronProviderNetTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_network') self.m.StubOutWithMock(neutronclient.Client, 'show_network') self.m.StubOutWithMock(neutronclient.Client, 'delete_network') self.m.StubOutWithMock(neutronclient.Client, 'update_network') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_provider_net(self): # Create script neutronclient.Client.create_network({ 'network': { 'name': u'the_provider_network', 'admin_state_up': True, 'provider:network_type': 'vlan', 'provider:physical_network': 'physnet_1', 'provider:segmentation_id': '101', 'router:external': False, 'shared': True} }).AndReturn(stpnb) neutronclient.Client.show_network( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn(stpnb) neutronclient.Client.show_network( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn(stpna) t = template_format.parse(provider_network_template) self.stack = utils.parse_stack(t) resource_defns = self.stack.t.resource_definitions(self.stack) rsrc = provider_net.ProviderNet( 'provider_net', resource_defns['provider_network_vlan'], self.stack) return rsrc def test_create_provider_net(self): rsrc = self.create_provider_net() neutronclient.Client.show_network( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndRaise(qe.NetworkNotFoundClient(status_code=404)) # Delete script neutronclient.Client.delete_network( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn(None) neutronclient.Client.show_network( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndReturn(stpna) neutronclient.Client.show_network( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndRaise(qe.NetworkNotFoundClient(status_code=404)) neutronclient.Client.delete_network( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ).AndRaise(qe.NetworkNotFoundClient(status_code=404)) self.m.ReplayAll() rsrc.validate() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) ref_id = rsrc.FnGetRefId() self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id) self.assertIsNone(rsrc.FnGetAtt('status')) self.assertEqual('ACTIVE', rsrc.FnGetAtt('status')) self.assertRaises( exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'Foo') self.assertIsNone(scheduler.TaskRunner(rsrc.delete)()) 
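        # The recorded script raises NetworkNotFoundClient on the re-checks,
        # so both this delete and the repeat below must converge to
        # (DELETE, COMPLETE).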
self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again') scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_provider_net(self): rsrc = self.create_provider_net() neutronclient.Client.update_network( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', {'network': { 'provider:network_type': 'vlan', 'provider:physical_network': 'physnet_1', 'provider:segmentation_id': '102', 'port_security_enabled': False, 'router:external': True }}).AndReturn(None) neutronclient.Client.update_network( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', {'network': { 'name': utils.PhysName(rsrc.stack.name, 'provider_net') }}).AndReturn(None) self.m.ReplayAll() rsrc.validate() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) prop_diff = { 'network_type': 'vlan', 'physical_network': 'physnet_1', 'segmentation_id': '102', 'port_security_enabled': False, 'router_external': True } update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), prop_diff) self.assertIsNone(rsrc.handle_update(update_snippet, {}, prop_diff)) # name=None self.assertIsNone(rsrc.handle_update(update_snippet, {}, {'name': None})) # no prop_diff self.assertIsNone(rsrc.handle_update(update_snippet, {}, {})) self.m.VerifyAll() def test_get_live_state(self): rsrc = self.create_provider_net() rsrc.client().show_network = mock.Mock(return_value={ 'network': { 'status': 'ACTIVE', 'subnets': [], 'availability_zone_hints': [], 'availability_zones': [], 'name': 'prov-provider-nhalkd5xftp3', 'provider:physical_network': 'public', 'admin_state_up': True, 'tenant_id': 'df49ea64e87c43a792a510698364f03e', 'mtu': 0, 'router:external': False, 'port_security_enabled': True, 'shared': True, 'provider:network_type': 'flat', 'id': 'af216806-4462-4c68-bfa4-9580857e71c3', 'provider:segmentation_id': None}}) reality = rsrc.get_live_state(rsrc.properties) expected = { 'name': 'prov-provider-nhalkd5xftp3', 'physical_network': 'public', 'admin_state_up': True, 'network_type': 'flat', 'port_security_enabled': True, 'segmentation_id': None, 'router_external': False } self.assertEqual(expected, reality) heat-10.0.2/heat/tests/openstack/neutron/test_neutron_port.py0000666000175000017500000011755713343562351024477 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
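# NeutronPortTest patches individual neutronclient.Client methods and pins
# their return values (self.patchobject is heat's test helper over
# mock.patch.object). A minimal stdlib-style sketch of the same idea
# (Client here is a stand-in class, not the real neutronclient):
import mock


class Client(object):
    def show_port(self, port_id):
        raise RuntimeError('tests must never hit the real client')


with mock.patch.object(Client, 'show_port') as show:
    show.return_value = {'port': {'id': 'p1', 'status': 'ACTIVE'}}
    assert Client().show_port('p1')['port']['status'] == 'ACTIVE'
    show.assert_called_once_with('p1')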
import mock from neutronclient.common import exceptions as qe from neutronclient.neutron import v2_0 as neutronV20 from neutronclient.v2_0 import client as neutronclient from oslo_serialization import jsonutils import six from heat.common import exception from heat.common import template_format from heat.engine import resource from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests import utils neutron_port_template = ''' heat_template_version: 2015-04-30 description: Template to test port Neutron resource resources: port: type: OS::Neutron::Port properties: network: net1234 fixed_ips: - subnet: sub1234 ip_address: 10.0.3.21 device_owner: network:dhcp ''' neutron_port_with_address_pair_template = ''' heat_template_version: 2015-04-30 description: Template to test port Neutron resource resources: port: type: OS::Neutron::Port properties: network: abcd1234 allowed_address_pairs: - ip_address: 10.0.3.21 mac_address: 00-B0-D0-86-BB-F7 ''' neutron_port_security_template = ''' heat_template_version: 2015-04-30 description: Template to test port Neutron resource resources: port: type: OS::Neutron::Port properties: network: abcd1234 port_security_enabled: False ''' class NeutronPortTest(common.HeatTestCase): def setUp(self): super(NeutronPortTest, self).setUp() self.create_mock = self.patchobject( neutronclient.Client, 'create_port') self.port_show_mock = self.patchobject( neutronclient.Client, 'show_port') self.update_mock = self.patchobject( neutronclient.Client, 'update_port') self.subnet_show_mock = self.patchobject( neutronclient.Client, 'show_subnet') self.find_mock = self.patchobject( neutronV20, 'find_resourceid_by_name_or_id') def test_missing_network(self): t = template_format.parse(neutron_port_template) t['resources']['port']['properties'] = {} stack = utils.parse_stack(t) port = stack['port'] self.assertRaises(exception.StackValidationFailed, port.validate) def test_missing_subnet_id(self): t = template_format.parse(neutron_port_template) t['resources']['port']['properties']['fixed_ips'][0].pop('subnet') stack = utils.parse_stack(t) self.find_mock.return_value = 'net1234' self.create_mock.return_value = { 'port': { "status": "BUILD", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"}} self.port_show_mock.return_value = { 'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"}} port = stack['port'] scheduler.TaskRunner(port.create)() self.assertEqual((port.CREATE, port.COMPLETE), port.state) self.create_mock.assert_called_once_with({'port': { 'network_id': u'net1234', 'fixed_ips': [ {'ip_address': u'10.0.3.21'} ], 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'device_owner': u'network:dhcp', 'binding:vnic_type': 'normal', 'device_id': '' }}) self.port_show_mock.assert_called_once_with( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766') def test_missing_ip_address(self): t = template_format.parse(neutron_port_template) t['resources']['port']['properties']['fixed_ips'][0].pop('ip_address') stack = utils.parse_stack(t) self.find_mock.return_value = 'net_or_sub' self.create_mock.return_value = {'port': { "status": "BUILD", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"}} self.port_show_mock.return_value = {'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"}} port = stack['port'] scheduler.TaskRunner(port.create)() self.assertEqual((port.CREATE, port.COMPLETE), port.state) self.create_mock.assert_called_once_with({'port': { 'network_id': u'net_or_sub', 'fixed_ips': [ {'subnet_id': 
u'net_or_sub'} ], 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'device_owner': u'network:dhcp', 'binding:vnic_type': 'normal', 'device_id': '' }}) self.port_show_mock.assert_called_once_with( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766') def test_missing_fixed_ips(self): t = template_format.parse(neutron_port_template) t['resources']['port']['properties'].pop('fixed_ips') stack = utils.parse_stack(t) self.find_mock.return_value = 'net1234' self.create_mock.return_value = {'port': { "status": "BUILD", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"}} self.port_show_mock.return_value = {'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "fixed_ips": { "subnet_id": "d0e971a6-a6b4-4f4c-8c88-b75e9c120b7e", "ip_address": "10.0.0.2" } }} port = stack['port'] scheduler.TaskRunner(port.create)() self.create_mock.assert_called_once_with({'port': { 'network_id': u'net1234', 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'device_owner': u'network:dhcp', 'binding:vnic_type': 'normal', 'device_id': '' }}) def test_allowed_address_pair(self): t = template_format.parse(neutron_port_with_address_pair_template) stack = utils.parse_stack(t) self.find_mock.return_value = 'abcd1234' self.create_mock.return_value = {'port': { "status": "BUILD", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }} self.port_show_mock.return_value = {'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }} port = stack['port'] scheduler.TaskRunner(port.create)() self.create_mock.assert_called_once_with({'port': { 'network_id': u'abcd1234', 'allowed_address_pairs': [{ 'ip_address': u'10.0.3.21', 'mac_address': u'00-B0-D0-86-BB-F7' }], 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'binding:vnic_type': 'normal', 'device_id': '', 'device_owner': '' }}) def test_port_security_enabled(self): t = template_format.parse(neutron_port_security_template) stack = utils.parse_stack(t) self.find_mock.return_value = 'abcd1234' self.create_mock.return_value = {'port': { "status": "BUILD", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }} self.port_show_mock.return_value = {'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", }} port = stack['port'] scheduler.TaskRunner(port.create)() self.create_mock.assert_called_once_with({'port': { 'network_id': u'abcd1234', 'port_security_enabled': False, 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'binding:vnic_type': 'normal', 'device_id': '', 'device_owner': '' }}) def test_missing_mac_address(self): t = template_format.parse(neutron_port_with_address_pair_template) t['resources']['port']['properties']['allowed_address_pairs'][0].pop( 'mac_address' ) stack = utils.parse_stack(t) self.find_mock.return_value = 'abcd1234' self.create_mock.return_value = {'port': { "status": "BUILD", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }} self.port_show_mock.return_value = {'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }} port = stack['port'] scheduler.TaskRunner(port.create)() self.create_mock.assert_called_once_with({'port': { 'network_id': u'abcd1234', 'allowed_address_pairs': [{ 'ip_address': u'10.0.3.21', }], 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'binding:vnic_type': 'normal', 'device_owner': '', 'device_id': ''}}) def test_ip_address_is_cidr(self): t = template_format.parse(neutron_port_with_address_pair_template) t['resources']['port']['properties'][ 'allowed_address_pairs'][0]['ip_address'] = '10.0.3.0/24' stack 
= utils.parse_stack(t) self.find_mock.return_value = 'abcd1234' self.create_mock.return_value = {'port': { "status": "BUILD", "id": "2e00180a-ff9d-42c4-b701-a0606b243447" }} self.port_show_mock.return_value = {'port': { "status": "ACTIVE", "id": "2e00180a-ff9d-42c4-b701-a0606b243447" }} port = stack['port'] scheduler.TaskRunner(port.create)() self.create_mock.assert_called_once_with({'port': { 'network_id': u'abcd1234', 'allowed_address_pairs': [{ 'ip_address': u'10.0.3.0/24', 'mac_address': u'00-B0-D0-86-BB-F7' }], 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'binding:vnic_type': 'normal', 'device_owner': '', 'device_id': '' }}) def _mock_create_with_props(self): self.find_mock.return_value = 'net_or_sub' self.create_mock.return_value = {'port': { "status": "BUILD", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"}} self.port_show_mock.return_value = {'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "dns_assignment": { "hostname": "my-vm", "ip_address": "10.0.0.15", "fqdn": "my-vm.openstack.org."} }} def test_create_with_tags(self): t = template_format.parse(neutron_port_template) t['resources']['port']['properties']['tags'] = ['tag1', 'tag2'] stack = utils.parse_stack(t) port_prop = { 'network_id': u'net_or_sub', 'fixed_ips': [ {'subnet_id': u'net_or_sub', 'ip_address': u'10.0.3.21'} ], 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'device_owner': u'network:dhcp', 'binding:vnic_type': 'normal', 'device_id': '' } set_tag_mock = self.patchobject(neutronclient.Client, 'replace_tag') self._mock_create_with_props() port = stack['port'] scheduler.TaskRunner(port.create)() self.assertEqual((port.CREATE, port.COMPLETE), port.state) self.create_mock.assert_called_once_with({'port': port_prop}) set_tag_mock.assert_called_with('ports', port.resource_id, {'tags': ['tag1', 'tag2']}) def test_security_groups(self): t = template_format.parse(neutron_port_template) t['resources']['port']['properties']['security_groups'] = [ '8a2f582a-e1cd-480f-b85d-b02631c10656', '024613dc-b489-4478-b46f-ada462738740'] stack = utils.parse_stack(t) port_prop = { 'network_id': u'net_or_sub', 'security_groups': ['8a2f582a-e1cd-480f-b85d-b02631c10656', '024613dc-b489-4478-b46f-ada462738740'], 'fixed_ips': [ {'subnet_id': u'net_or_sub', 'ip_address': u'10.0.3.21'} ], 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'device_owner': u'network:dhcp', 'binding:vnic_type': 'normal', 'device_id': '' } self._mock_create_with_props() port = stack['port'] scheduler.TaskRunner(port.create)() self.assertEqual((port.CREATE, port.COMPLETE), port.state) self.create_mock.assert_called_once_with({'port': port_prop}) def test_port_with_dns_name(self): t = template_format.parse(neutron_port_template) t['resources']['port']['properties']['dns_name'] = 'myvm' stack = utils.parse_stack(t) port_prop = { 'network_id': u'net_or_sub', 'dns_name': 'myvm', 'fixed_ips': [ {'subnet_id': u'net_or_sub', 'ip_address': u'10.0.3.21'} ], 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'device_owner': u'network:dhcp', 'binding:vnic_type': 'normal', 'device_id': '' } self._mock_create_with_props() port = stack['port'] scheduler.TaskRunner(port.create)() self.assertEqual('my-vm.openstack.org.', port.FnGetAtt('dns_assignment')['fqdn']) self.assertEqual((port.CREATE, port.COMPLETE), port.state) self.create_mock.assert_called_once_with({'port': port_prop}) def test_security_groups_empty_list(self): t = template_format.parse(neutron_port_template) 
t['resources']['port']['properties']['security_groups'] = [] stack = utils.parse_stack(t) port_prop = { 'network_id': u'net_or_sub', 'security_groups': [], 'fixed_ips': [ {'subnet_id': u'net_or_sub', 'ip_address': u'10.0.3.21'} ], 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'device_owner': u'network:dhcp', 'binding:vnic_type': 'normal', 'device_id': '' } self._mock_create_with_props() port = stack['port'] scheduler.TaskRunner(port.create)() self.assertEqual((port.CREATE, port.COMPLETE), port.state) self.create_mock.assert_called_once_with({'port': port_prop}) def test_update_failed_port_no_replace(self): t = template_format.parse(neutron_port_template) stack = utils.parse_stack(t) port = stack['port'] port.resource_id = 'r_id' port.state_set(port.CREATE, port.FAILED) new_props = port.properties.data.copy() new_props['name'] = 'new_one' self.find_mock.return_value = 'net_or_sub' self.port_show_mock.return_value = {'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "fixed_ips": { "subnet_id": "d0e971a6-a6b4-4f4c-8c88-b75e9c120b7e", "ip_address": "10.0.3.21"}}} update_snippet = rsrc_defn.ResourceDefinition(port.name, port.type(), new_props) scheduler.TaskRunner(port.update, update_snippet)() self.assertEqual((port.UPDATE, port.COMPLETE), port.state) self.assertEqual(1, self.update_mock.call_count) def test_port_needs_update(self): t = template_format.parse(neutron_port_template) t['resources']['port']['properties'].pop('fixed_ips') stack = utils.parse_stack(t) props = {'network_id': u'net1234', 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'device_owner': u'network:dhcp', 'device_id': '', 'binding:vnic_type': 'normal'} self.find_mock.return_value = 'net1234' self.create_mock.return_value = {'port': { "status": "BUILD", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }} self.port_show_mock.return_value = {'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "fixed_ips": { "subnet_id": "d0e971a6-a6b4-4f4c-8c88-b75e9c120b7e", "ip_address": "10.0.0.2" } }} # create port port = stack['port'] scheduler.TaskRunner(port.create)() self.create_mock.assert_called_once_with({'port': props}) new_props = props.copy() # test always replace new_props['replacement_policy'] = 'REPLACE_ALWAYS' new_props['network'] = new_props.pop('network_id') update_snippet = rsrc_defn.ResourceDefinition(port.name, port.type(), new_props) self.assertRaises(resource.UpdateReplace, port._needs_update, update_snippet, port.frozen_definition(), new_props, port.properties, None) # test deferring to Resource._needs_update new_props['replacement_policy'] = 'AUTO' update_snippet = rsrc_defn.ResourceDefinition(port.name, port.type(), new_props) self.assertTrue(port._needs_update(update_snippet, port.frozen_definition(), new_props, port.properties, None)) def test_port_needs_update_network(self): net1 = '9cfe6c74-c105-4906-9a1f-81d9064e9bca' net2 = '0064eec9-5681-4ba7-a745-6f8e32db9503' props = {'network_id': net1, 'name': 'test_port', 'device_owner': u'network:dhcp', 'binding:vnic_type': 'normal', 'device_id': '' } create_kwargs = props.copy() create_kwargs['admin_state_up'] = True self.find_mock.side_effect = [net1] * 8 + [net2] * 2 + [net1] self.create_mock.return_value = {'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766" }} self.port_show_mock.return_value = {'port': { "status": "ACTIVE", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "fixed_ips": { "subnet_id": "d0e971a6-a6b4-4f4c-8c88-b75e9c120b7e", "ip_address": 
"10.0.0.2" } }} # create port with network_id tmpl = neutron_port_template.replace( 'network: net1234', 'network_id: 9cfe6c74-c105-4906-9a1f-81d9064e9bca') t = template_format.parse(tmpl) t['resources']['port']['properties'].pop('fixed_ips') t['resources']['port']['properties']['name'] = 'test_port' stack = utils.parse_stack(t) port = stack['port'] scheduler.TaskRunner(port.create)() self.assertEqual((port.CREATE, port.COMPLETE), port.state) self.create_mock.assert_called_once_with({'port': create_kwargs}) # Switch from network_id=ID to network=ID (no replace) new_props = props.copy() new_props['network'] = new_props.pop('network_id') update_snippet = rsrc_defn.ResourceDefinition(port.name, port.type(), new_props) scheduler.TaskRunner(port.update, update_snippet)() self.assertEqual((port.UPDATE, port.COMPLETE), port.state) self.assertEqual(0, self.update_mock.call_count) # Switch from network=ID to network=NAME (no replace) new_props['network'] = 'net1234' update_snippet = rsrc_defn.ResourceDefinition(port.name, port.type(), new_props) scheduler.TaskRunner(port.update, update_snippet)() self.assertEqual((port.UPDATE, port.COMPLETE), port.state) self.assertEqual(0, self.update_mock.call_count) # Switch to a different network (replace) new_props['network'] = 'net5678' update_snippet = rsrc_defn.ResourceDefinition(port.name, port.type(), new_props) updater = scheduler.TaskRunner(port.update, update_snippet) self.assertRaises(resource.UpdateReplace, updater) self.assertEqual(11, self.find_mock.call_count) def test_get_port_attributes(self): t = template_format.parse(neutron_port_template) t['resources']['port']['properties'].pop('fixed_ips') stack = utils.parse_stack(t) subnet_dict = {'name': 'test-subnet', 'enable_dhcp': True, 'network_id': 'net1234', 'dns_nameservers': [], 'tenant_id': '58a61fc3992944ce971404a2ece6ff98', 'ipv6_ra_mode': None, 'cidr': '10.0.0.0/24', 'allocation_pools': [{'start': '10.0.0.2', 'end': u'10.0.0.254'}], 'gateway_ip': '10.0.0.1', 'ipv6_address_mode': None, 'ip_version': 4, 'host_routes': [], 'id': '6dd609ad-d52a-4587-b1a0-b335f76062a5'} self.find_mock.return_value = 'net1234' self.create_mock.return_value = {'port': { 'status': 'BUILD', 'id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' }} self.subnet_show_mock.return_value = {'subnet': subnet_dict} self.port_show_mock.return_value = {'port': { 'status': 'DOWN', 'name': utils.PhysName(stack.name, 'port'), 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'net1234', 'device_id': 'dc68eg2c-b60g-4b3f-bd82-67ec87650532', 'mac_address': 'fa:16:3e:75:67:60', 'tenant_id': '58a61fc3992944ce971404a2ece6ff98', 'security_groups': ['5b15d80c-6b70-4a1c-89c9-253538c5ade6'], 'fixed_ips': [{'subnet_id': 'd0e971a6-a6b4-4f4c-8c88-b75e9c120b7e', 'ip_address': '10.0.0.2'}] }} port = stack['port'] scheduler.TaskRunner(port.create)() self.create_mock.assert_called_once_with({'port': { 'network_id': u'net1234', 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'device_owner': u'network:dhcp', 'binding:vnic_type': 'normal', 'device_id': '' }}) self.assertEqual('DOWN', port.FnGetAtt('status')) self.assertEqual([], port.FnGetAtt('allowed_address_pairs')) self.assertTrue(port.FnGetAtt('admin_state_up')) self.assertEqual('net1234', port.FnGetAtt('network_id')) self.assertEqual('fa:16:3e:75:67:60', port.FnGetAtt('mac_address')) self.assertEqual(utils.PhysName(stack.name, 'port'), port.FnGetAtt('name')) self.assertEqual('dc68eg2c-b60g-4b3f-bd82-67ec87650532', port.FnGetAtt('device_id')) 
self.assertEqual('58a61fc3992944ce971404a2ece6ff98', port.FnGetAtt('tenant_id')) self.assertEqual(['5b15d80c-6b70-4a1c-89c9-253538c5ade6'], port.FnGetAtt('security_groups')) self.assertEqual([{'subnet_id': 'd0e971a6-a6b4-4f4c-8c88-b75e9c120b7e', 'ip_address': '10.0.0.2'}], port.FnGetAtt('fixed_ips')) self.assertEqual([subnet_dict], port.FnGetAtt('subnets')) self.assertRaises(exception.InvalidTemplateAttribute, port.FnGetAtt, 'Foo') def test_subnet_attribute_exception(self): t = template_format.parse(neutron_port_template) t['resources']['port']['properties'].pop('fixed_ips') stack = utils.parse_stack(t) self.find_mock.return_value = 'net1234' self.create_mock.return_value = {'port': { 'status': 'BUILD', 'id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' }} self.port_show_mock.return_value = {'port': { 'status': 'DOWN', 'name': utils.PhysName(stack.name, 'port'), 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'net1234', 'device_id': 'dc68eg2c-b60g-4b3f-bd82-67ec87650532', 'mac_address': 'fa:16:3e:75:67:60', 'tenant_id': '58a61fc3992944ce971404a2ece6ff98', 'security_groups': ['5b15d80c-6b70-4a1c-89c9-253538c5ade6'], 'fixed_ips': [{'subnet_id': 'd0e971a6-a6b4-4f4c-8c88-b75e9c120b7e', 'ip_address': '10.0.0.2'}] }} self.subnet_show_mock.side_effect = (qe.NeutronClientException( 'ConnectionFailed: Connection to neutron failed: Maximum ' 'attempts reached')) port = stack['port'] scheduler.TaskRunner(port.create)() self.assertIsNone(port.FnGetAtt('subnets')) log_msg = ('Failed to fetch resource attributes: ConnectionFailed: ' 'Connection to neutron failed: Maximum attempts reached') self.assertIn(log_msg, self.LOG.output) self.create_mock.assert_called_once_with({'port': { 'network_id': u'net1234', 'name': utils.PhysName(stack.name, 'port'), 'admin_state_up': True, 'device_owner': u'network:dhcp', 'binding:vnic_type': 'normal', 'device_id': ''}} ) def test_prepare_for_replace_port_not_created(self): t = template_format.parse(neutron_port_template) stack = utils.parse_stack(t) port = stack['port'] port._show_resource = mock.Mock() port.data_set = mock.Mock() n_client = mock.Mock() port.client = mock.Mock(return_value=n_client) self.assertIsNone(port.resource_id) # execute prepare_for_replace port.prepare_for_replace() # check, if the port is not created, do nothing in # prepare_for_replace() self.assertFalse(port._show_resource.called) self.assertFalse(port.data_set.called) self.assertFalse(n_client.update_port.called) def test_prepare_for_replace_port_not_found(self): t = template_format.parse(neutron_port_template) stack = utils.parse_stack(t) port = stack['port'] port.resource_id = 'test_res_id' port._show_resource = mock.Mock(side_effect=qe.NotFound) port.data_set = mock.Mock() n_client = mock.Mock() port.client = mock.Mock(return_value=n_client) # execute prepare_for_replace port.prepare_for_replace() # check, if the port is not found, do nothing in # prepare_for_replace() self.assertTrue(port._show_resource.called) self.assertFalse(port.data_set.called) self.assertFalse(n_client.update_port.called) def test_prepare_for_replace_port(self): t = template_format.parse(neutron_port_template) stack = utils.parse_stack(t) port = stack['port'] port.resource_id = 'test_res_id' _value = { 'fixed_ips': { 'subnet_id': 'test_subnet', 'ip_address': '42.42.42.42' } } port._show_resource = mock.Mock(return_value=_value) port.data_set = mock.Mock() n_client = mock.Mock() port.client = mock.Mock(return_value=n_client) # execute prepare_for_replace port.prepare_for_replace() # check, that data was 
stored port.data_set.assert_called_once_with( 'port_fip', jsonutils.dumps(_value.get('fixed_ips'))) # check that the port was updated and its fixed IP was removed expected_props = {'port': {'fixed_ips': []}} n_client.update_port.assert_called_once_with('test_res_id', expected_props) def test_restore_prev_rsrc(self): t = template_format.parse(neutron_port_template) stack = utils.parse_stack(t) new_port = stack['port'] new_port.resource_id = 'new_res_id' # mock backup stack to return only one mocked old_port old_port = mock.Mock() new_port.stack._backup_stack = mock.Mock() new_port.stack._backup_stack().resources.get.return_value = old_port old_port.resource_id = 'old_res_id' _value = { 'subnet_id': 'test_subnet', 'ip_address': '42.42.42.42' } old_port.data = mock.Mock( return_value={'port_fip': jsonutils.dumps(_value)}) n_client = mock.Mock() new_port.client = mock.Mock(return_value=n_client) # execute restore_prev_rsrc new_port.restore_prev_rsrc() # check that both ports were updated: the old port got its IP back # and the same IP was removed from the new port expected_new_props = {'port': {'fixed_ips': []}} expected_old_props = {'port': {'fixed_ips': _value}} n_client.update_port.assert_has_calls([ mock.call('new_res_id', expected_new_props), mock.call('old_res_id', expected_old_props)]) def test_restore_prev_rsrc_convergence(self): t = template_format.parse(neutron_port_template) stack = utils.parse_stack(t) stack.store() # mock resource from previous template prev_rsrc = stack['port'] prev_rsrc.resource_id = 'prev-rsrc' # store in db prev_rsrc.state_set(prev_rsrc.UPDATE, prev_rsrc.COMPLETE) # mock resource from existing template and store in db existing_rsrc = stack['port'] existing_rsrc.current_template_id = stack.t.id existing_rsrc.resource_id = 'existing-rsrc' existing_rsrc.state_set(existing_rsrc.UPDATE, existing_rsrc.COMPLETE) # mark the previous resource as replaced by the existing resource prev_rsrc.replaced_by = existing_rsrc.id _value = { 'subnet_id': 'test_subnet', 'ip_address': '42.42.42.42' } prev_rsrc._data = {'port_fip': jsonutils.dumps(_value)} n_client = mock.Mock() prev_rsrc.client = mock.Mock(return_value=n_client) # execute restore_prev_rsrc prev_rsrc.restore_prev_rsrc(convergence=True) expected_existing_props = {'port': {'fixed_ips': []}} expected_prev_props = {'port': {'fixed_ips': _value}} n_client.update_port.assert_has_calls([ mock.call(existing_rsrc.resource_id, expected_existing_props), mock.call(prev_rsrc.resource_id, expected_prev_props)]) def test_port_get_live_state(self): t = template_format.parse(neutron_port_template) t['resources']['port']['properties']['value_specs'] = { 'binding:vif_type': 'test'} stack = utils.parse_stack(t) port = stack['port'] resp = {'port': { 'status': 'DOWN', 'binding:host_id': '', 'name': 'flip-port-xjbal77qope3', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'd6859535-efef-4184-b236-e5fcae856e0f', 'dns_name': '', 'extra_dhcp_opts': [], 'mac_address': 'fa:16:3e:fe:64:79', 'qos_policy_id': 'some', 'dns_assignment': [], 'binding:vif_details': {}, 'binding:vif_type': 'unbound', 'device_owner': '', 'tenant_id': '30f466e3d14b4251853899f9c26e2b66', 'binding:profile': {}, 'port_security_enabled': True, 'binding:vnic_type': 'normal', 'fixed_ips': [ {'subnet_id': '02d9608f-8f30-4611-ad02-69855c82457f', 'ip_address': '10.0.3.4'}], 'id': '829bf5c1-b59c-40ad-80e3-ea15a93879f3', 'security_groups': ['c276247f-50fd-4289-862a-80fb81a55de1'], 'device_id': ''} } port.client().show_port = mock.MagicMock(return_value=resp) port.resource_id = '1234' port._data = {}
port.data_set = mock.Mock() reality = port.get_live_state(port.properties) expected = { 'allowed_address_pairs': [], 'admin_state_up': True, 'device_owner': '', 'port_security_enabled': True, 'binding:vnic_type': 'normal', 'fixed_ips': [ {'subnet': '02d9608f-8f30-4611-ad02-69855c82457f', 'ip_address': '10.0.3.4'}], 'security_groups': ['c276247f-50fd-4289-862a-80fb81a55de1'], 'device_id': '', 'dns_name': '', 'qos_policy': 'some', 'value_specs': {'binding:vif_type': 'unbound'} } self.assertEqual(set(expected.keys()), set(reality.keys())) for key in expected: self.assertEqual(expected[key], reality[key]) class UpdatePortTest(common.HeatTestCase): scenarios = [ ('with_secgrp', dict(secgrp=['8a2f582a-e1cd-480f-b85d-b02631c10656'], name='test', value_specs={}, fixed_ips=None, addr_pair=None, vnic_type=None)), ('with_no_name', dict(secgrp=['8a2f582a-e1cd-480f-b85d-b02631c10656'], name=None, value_specs={}, fixed_ips=None, addr_pair=None, vnic_type=None)), ('with_empty_values', dict(secgrp=[], name='test', value_specs={}, fixed_ips=[], addr_pair=[], vnic_type=None)), ('with_fixed_ips', dict(secgrp=None, value_specs={}, fixed_ips=[ {"subnet_id": "d0e971a6-a6b4-4f4c", "ip_address": "10.0.0.2"}], addr_pair=None, vnic_type=None)), ('with_addr_pair', dict(secgrp=None, value_specs={}, fixed_ips=None, addr_pair=[{'ip_address': '10.0.3.21', 'mac_address': '00-B0-D0-86'}], vnic_type=None)), ('with_value_specs', dict(secgrp=None, value_specs={'binding:vnic_type': 'direct'}, fixed_ips=None, addr_pair=None, vnic_type=None)), ('normal_vnic', dict(secgrp=None, value_specs={}, fixed_ips=None, addr_pair=None, vnic_type='normal')), ('direct_vnic', dict(secgrp=None, value_specs={}, fixed_ips=None, addr_pair=None, vnic_type='direct')), ('physical_direct_vnic', dict(secgrp=None, value_specs={}, fixed_ips=None, addr_pair=None, vnic_type='direct-physical')), ('baremetal_vnic', dict(secgrp=None, value_specs={}, fixed_ips=None, addr_pair=None, vnic_type='baremetal')), ('with_all', dict(secgrp=['8a2f582a-e1cd-480f-b85d-b02631c10656'], value_specs={}, fixed_ips=[ {"subnet_id": "d0e971a6-a6b4-4f4c", "ip_address": "10.0.0.2"}], addr_pair=[{'ip_address': '10.0.3.21', 'mac_address': '00-B0-D0-86-BB-F7'}], vnic_type='normal')), ] def test_update_port(self): t = template_format.parse(neutron_port_template) stack = utils.parse_stack(t) self.patchobject(neutronV20, 'find_resourceid_by_name_or_id', return_value='net1234') create_port = self.patchobject(neutronclient.Client, 'create_port') update_port = self.patchobject(neutronclient.Client, 'update_port') fake_groups_list = { 'security_groups': [ { 'tenant_id': 'dc4b074874244f7693dd65583733a758', 'id': '0389f747-7785-4757-b7bb-2ab07e4b09c3', 'name': 'default', 'security_group_rules': [], 'description': 'no protocol' } ] } self.patchobject(neutronclient.Client, 'list_security_groups', return_value=fake_groups_list) set_tag_mock = self.patchobject(neutronclient.Client, 'replace_tag') props = {'network_id': u'net1234', 'name': str(utils.PhysName(stack.name, 'port')), 'admin_state_up': True, 'device_owner': u'network:dhcp'} update_props = props.copy() update_props['security_groups'] = self.secgrp update_props['value_specs'] = self.value_specs update_props['tags'] = ['test_tag'] if self.fixed_ips: update_props['fixed_ips'] = self.fixed_ips update_props['allowed_address_pairs'] = self.addr_pair update_props['binding:vnic_type'] = self.vnic_type update_dict = update_props.copy() if update_props['security_groups'] is None: update_dict['security_groups'] = ['default'] if 
update_props['name'] is None: update_dict['name'] = utils.PhysName(stack.name, 'port') value_specs = update_dict.pop('value_specs') if value_specs: for key, value in six.iteritems(value_specs): update_dict[key] = value tags = update_dict.pop('tags') # create port port = stack['port'] self.assertIsNone(scheduler.TaskRunner(port.handle_create)()) self.assertEqual(1, create_port.call_count) # update port update_snippet = rsrc_defn.ResourceDefinition(port.name, port.type(), update_props) self.assertIsNone(scheduler.TaskRunner(port.handle_update, update_snippet, {}, update_props)()) self.assertEqual(1, update_port.call_count) set_tag_mock.assert_called_with('ports', port.resource_id, {'tags': tags}) # check that the update does not trigger an UpdateReplace create_snippet = rsrc_defn.ResourceDefinition(port.name, port.type(), props) after_props, before_props = port._prepare_update_props(update_snippet, create_snippet) self.assertIsNotNone( port.update_template_diff_properties(after_props, before_props)) # update with fixed_ips removed scheduler.TaskRunner(port.handle_update, update_snippet, {}, {'fixed_ips': None})() # update with empty prop_diff scheduler.TaskRunner(port.handle_update, update_snippet, {}, {})() self.assertEqual(1, update_port.call_count) heat-10.0.2/heat/tests/openstack/neutron/test_neutron_security_group.py0000666000175000017500000007143413343562351026567 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
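# Unlike the port tests above, this module still uses the legacy mox
# record/replay/verify style: expected calls are recorded against stubbed
# client methods, ReplayAll() arms the expectations, and VerifyAll() fails
# the test if any recorded call never happened. The following is a small,
# self-contained sketch of that lifecycle (the payload and IDs are invented
# for illustration; the helper is never called by the suite):


def _mox_lifecycle_sketch(m):
    """Illustrative only: ``m`` is assumed to be a mox.Mox() instance,
    such as the ``self.m`` helper provided by HeatTestCase."""
    from neutronclient.v2_0 import client as neutronclient

    m.StubOutWithMock(neutronclient.Client, 'show_security_group')
    # record phase: declare the expected call and its canned response
    neutronclient.Client.show_security_group('aaaa').AndReturn(
        {'security_group': {'id': 'aaaa', 'security_group_rules': []}})
    m.ReplayAll()  # arm the recorded expectations
    # replay phase: the code under test makes the same call and gets
    # the canned response back
    resp = neutronclient.Client.show_security_group('aaaa')
    assert resp['security_group']['id'] == 'aaaa'
    m.VerifyAll()  # fails unless every recorded call actually occurred
    m.UnsetStubs()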
import mox from neutronclient.common import exceptions as neutron_exc from neutronclient.neutron import v2_0 as neutronV20 from neutronclient.v2_0 import client as neutronclient from heat.common import exception from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests import utils class SecurityGroupTest(common.HeatTestCase): test_template = ''' heat_template_version: 2015-04-30 resources: the_sg: type: OS::Neutron::SecurityGroup properties: description: HTTP and SSH access rules: - port_range_min: 22 port_range_max: 22 remote_ip_prefix: 0.0.0.0/0 protocol: tcp - port_range_min: 80 port_range_max: 80 protocol: tcp remote_ip_prefix: 0.0.0.0/0 - remote_mode: remote_group_id remote_group_id: wwww protocol: tcp - direction: egress port_range_min: 22 port_range_max: 22 protocol: tcp remote_ip_prefix: 10.0.1.0/24 - direction: egress remote_mode: remote_group_id remote_group_id: xxxx - direction: egress remote_mode: remote_group_id ''' test_template_update = ''' heat_template_version: 2015-04-30 resources: the_sg: type: OS::Neutron::SecurityGroup properties: description: SSH access for private network name: myrules rules: - port_range_min: 22 port_range_max: 22 remote_ip_prefix: 10.0.0.10/24 protocol: tcp ''' test_template_validate = ''' heat_template_version: 2015-04-30 resources: the_sg: type: OS::Neutron::SecurityGroup properties: name: default ''' def setUp(self): super(SecurityGroupTest, self).setUp() self.m.StubOutWithMock(neutronclient.Client, 'create_security_group') self.m.StubOutWithMock( neutronclient.Client, 'create_security_group_rule') self.m.StubOutWithMock(neutronclient.Client, 'show_security_group') self.m.StubOutWithMock( neutronclient.Client, 'delete_security_group_rule') self.m.StubOutWithMock(neutronclient.Client, 'delete_security_group') self.m.StubOutWithMock(neutronclient.Client, 'update_security_group') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) self.m.StubOutWithMock(neutronV20, 'find_resourceid_by_name_or_id') def create_stack(self, templ): t = template_format.parse(templ) self.stack = self.parse_stack(t) self.assertIsNone(self.stack.create()) return self.stack def parse_stack(self, t): stack_name = 'test_stack' tmpl = template.Template(t) stack = parser.Stack(utils.dummy_context(), stack_name, tmpl) stack.store() return stack def assertResourceState(self, rsrc, ref_id, metadata=None): metadata = metadata or {} self.assertIsNone(rsrc.validate()) self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.assertEqual(ref_id, rsrc.FnGetRefId()) self.assertEqual(metadata, dict(rsrc.metadata_get())) def test_security_group(self): show_created = {'security_group': { 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'name': 'sc1', 'description': '', 'security_group_rules': [{ 'direction': 'ingress', 'protocol': 'tcp', 'port_range_max': '22', 'id': 'bbbb', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': '22' }, { 'direction': 'ingress', 'protocol': 'tcp', 'port_range_max': '80', 'id': 'cccc', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': '80' }, { 'direction': 'ingress', 'protocol': 'tcp', 'port_range_max': 
None, 'id': 'dddd', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': 'wwww', 'remote_ip_prefix': None, 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': None }, { 'direction': 'egress', 'protocol': 'tcp', 'port_range_max': '22', 'id': 'eeee', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': None, 'remote_ip_prefix': '10.0.1.0/24', 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': '22' }, { 'direction': 'egress', 'protocol': None, 'port_range_max': None, 'id': 'ffff', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': 'xxxx', 'remote_ip_prefix': None, 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': None }, { 'direction': 'egress', 'protocol': None, 'port_range_max': None, 'id': 'gggg', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': 'aaaa', 'remote_ip_prefix': None, 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': None }], 'id': 'aaaa'} } # create script sg_name = utils.PhysName('test_stack', 'the_sg') neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'security_group', 'wwww', cmd_resource=None, ).MultipleTimes().AndReturn('wwww') neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'security_group', 'xxxx', cmd_resource=None, ).MultipleTimes().AndReturn('xxxx') neutronclient.Client.create_security_group({ 'security_group': { 'name': sg_name, 'description': 'HTTP and SSH access' } }).AndReturn({ 'security_group': { 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'name': sg_name, 'description': 'HTTP and SSH access', 'security_group_rules': [{ "direction": "egress", "ethertype": "IPv4", "id": "aaaa-1", "port_range_max": None, "port_range_min": None, "protocol": None, "remote_group_id": None, "remote_ip_prefix": None, "security_group_id": "aaaa", "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88" }, { "direction": "egress", "ethertype": "IPv6", "id": "aaaa-2", "port_range_max": None, "port_range_min": None, "protocol": None, "remote_group_id": None, "remote_ip_prefix": None, "security_group_id": "aaaa", "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88" }], 'id': 'aaaa' } }) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'port_range_min': '22', 'ethertype': 'IPv4', 'port_range_max': '22', 'protocol': 'tcp', 'security_group_id': 'aaaa' } }).AndReturn({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'port_range_min': '22', 'ethertype': 'IPv4', 'port_range_max': '22', 'protocol': 'tcp', 'security_group_id': 'aaaa', 'id': 'bbbb' } }) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'port_range_min': '80', 'ethertype': 'IPv4', 'port_range_max': '80', 'protocol': 'tcp', 'security_group_id': 'aaaa' } }).AndReturn({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'port_range_min': '80', 'ethertype': 'IPv4', 'port_range_max': '80', 'protocol': 'tcp', 'security_group_id': 'aaaa', 'id': 'cccc' } }) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': 'wwww', 'remote_ip_prefix': None, 'port_range_min': None, 'ethertype': 'IPv4', 'port_range_max': None, 'protocol': 'tcp', 'security_group_id': 'aaaa' } }).AndReturn({ 
'security_group_rule': { 'direction': 'ingress', 'remote_group_id': 'wwww', 'remote_ip_prefix': None, 'port_range_min': None, 'ethertype': 'IPv4', 'port_range_max': None, 'protocol': 'tcp', 'security_group_id': 'aaaa', 'id': 'dddd' } }) neutronclient.Client.show_security_group('aaaa').AndReturn({ 'security_group': { 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'name': sg_name, 'description': 'HTTP and SSH access', 'security_group_rules': [{ "direction": "egress", "ethertype": "IPv4", "id": "aaaa-1", "port_range_max": None, "port_range_min": None, "protocol": None, "remote_group_id": None, "remote_ip_prefix": None, "security_group_id": "aaaa", "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88" }, { "direction": "egress", "ethertype": "IPv6", "id": "aaaa-2", "port_range_max": None, "port_range_min": None, "protocol": None, "remote_group_id": None, "remote_ip_prefix": None, "security_group_id": "aaaa", "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88" }], 'id': 'aaaa' } }) neutronclient.Client.delete_security_group_rule('aaaa-1').AndReturn( None) neutronclient.Client.delete_security_group_rule('aaaa-2').AndReturn( None) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'egress', 'remote_group_id': None, 'remote_ip_prefix': '10.0.1.0/24', 'port_range_min': '22', 'ethertype': 'IPv4', 'port_range_max': '22', 'protocol': 'tcp', 'security_group_id': 'aaaa' } }).AndReturn({ 'security_group_rule': { 'direction': 'egress', 'remote_group_id': None, 'remote_ip_prefix': '10.0.1.0/24', 'port_range_min': '22', 'ethertype': 'IPv4', 'port_range_max': '22', 'protocol': 'tcp', 'security_group_id': 'aaaa', 'id': 'eeee' } }) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'egress', 'remote_group_id': 'xxxx', 'remote_ip_prefix': None, 'port_range_min': None, 'ethertype': 'IPv4', 'port_range_max': None, 'protocol': None, 'security_group_id': 'aaaa' } }).AndReturn({ 'security_group_rule': { 'direction': 'egress', 'remote_group_id': 'xxxx', 'remote_ip_prefix': None, 'port_range_min': None, 'ethertype': 'IPv4', 'port_range_max': None, 'protocol': None, 'security_group_id': 'aaaa', 'id': 'ffff' } }) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'egress', 'remote_group_id': 'aaaa', 'remote_ip_prefix': None, 'port_range_min': None, 'ethertype': 'IPv4', 'port_range_max': None, 'protocol': None, 'security_group_id': 'aaaa' } }).AndReturn({ 'security_group_rule': { 'direction': 'egress', 'remote_group_id': 'aaaa', 'remote_ip_prefix': None, 'port_range_min': None, 'ethertype': 'IPv4', 'port_range_max': None, 'protocol': None, 'security_group_id': 'aaaa', 'id': 'gggg' } }) # update script neutronclient.Client.update_security_group( 'aaaa', {'security_group': { 'description': 'SSH access for private network', 'name': 'myrules'}} ).AndReturn({ 'security_group': { 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'name': 'myrules', 'description': 'SSH access for private network', 'security_group_rules': [], 'id': 'aaaa' } }) neutronclient.Client.show_security_group('aaaa').AndReturn( show_created) neutronclient.Client.delete_security_group_rule('bbbb').AndReturn(None) neutronclient.Client.delete_security_group_rule('cccc').AndReturn(None) neutronclient.Client.delete_security_group_rule('dddd').AndReturn(None) neutronclient.Client.delete_security_group_rule('eeee').AndReturn(None) neutronclient.Client.delete_security_group_rule('ffff').AndReturn(None) 
neutronclient.Client.delete_security_group_rule('gggg').AndReturn(None) neutronclient.Client.show_security_group('aaaa').AndReturn({ 'security_group': { 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'name': 'sc1', 'description': '', 'security_group_rules': [], 'id': 'aaaa' } }) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'egress', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', } }).AndReturn({ 'security_group_rule': { 'direction': 'egress', 'remote_group_id': None, 'remote_ip_prefix': None, 'port_range_min': None, 'ethertype': 'IPv4', 'port_range_max': None, 'protocol': None, 'security_group_id': 'aaaa', 'id': 'hhhh' } }) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'egress', 'ethertype': 'IPv6', 'security_group_id': 'aaaa', } }).AndReturn({ 'security_group_rule': { 'direction': 'egress', 'remote_group_id': None, 'remote_ip_prefix': None, 'port_range_min': None, 'ethertype': 'IPv6', 'port_range_max': None, 'protocol': None, 'security_group_id': 'aaaa', 'id': 'iiii' } }) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': None, 'remote_ip_prefix': '10.0.0.10/24', 'port_range_min': '22', 'ethertype': 'IPv4', 'port_range_max': '22', 'protocol': 'tcp', 'security_group_id': 'aaaa' } }).AndReturn({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': None, 'remote_ip_prefix': '10.0.0.10/24', 'port_range_min': '22', 'ethertype': 'IPv4', 'port_range_max': '22', 'protocol': 'tcp', 'security_group_id': 'aaaa', 'id': 'jjjj' } }) # delete script neutronclient.Client.show_security_group('aaaa').AndReturn( show_created) neutronclient.Client.delete_security_group_rule('bbbb').AndReturn(None) neutronclient.Client.delete_security_group_rule('cccc').AndReturn(None) neutronclient.Client.delete_security_group_rule('dddd').AndReturn(None) neutronclient.Client.delete_security_group_rule('eeee').AndReturn(None) neutronclient.Client.delete_security_group_rule('ffff').AndReturn(None) neutronclient.Client.delete_security_group_rule('gggg').AndReturn(None) neutronclient.Client.delete_security_group('aaaa').AndReturn(None) self.m.ReplayAll() stack = self.create_stack(self.test_template) sg = stack['the_sg'] self.assertResourceState(sg, 'aaaa') updated_tmpl = template_format.parse(self.test_template_update) updated_stack = utils.parse_stack(updated_tmpl) stack.update(updated_stack) stack.delete() self.m.VerifyAll() def test_security_group_exception(self): # create script sg_name = utils.PhysName('test_stack', 'the_sg') neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'security_group', 'wwww', cmd_resource=None, ).MultipleTimes().AndReturn('wwww') neutronV20.find_resourceid_by_name_or_id( mox.IsA(neutronclient.Client), 'security_group', 'xxxx', cmd_resource=None, ).MultipleTimes().AndReturn('xxxx') neutronclient.Client.create_security_group({ 'security_group': { 'name': sg_name, 'description': 'HTTP and SSH access' } }).AndReturn({ 'security_group': { 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'name': sg_name, 'description': 'HTTP and SSH access', 'security_group_rules': [], 'id': 'aaaa' } }) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'port_range_min': '22', 'ethertype': 'IPv4', 'port_range_max': '22', 'protocol': 'tcp', 'security_group_id': 'aaaa' } }).AndRaise( neutron_exc.Conflict()) 
neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'port_range_min': '80', 'ethertype': 'IPv4', 'port_range_max': '80', 'protocol': 'tcp', 'security_group_id': 'aaaa' } }).AndRaise( neutron_exc.Conflict()) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'ingress', 'remote_group_id': 'wwww', 'remote_ip_prefix': None, 'port_range_min': None, 'ethertype': 'IPv4', 'port_range_max': None, 'protocol': 'tcp', 'security_group_id': 'aaaa' } }).AndRaise( neutron_exc.Conflict()) neutronclient.Client.show_security_group('aaaa').AndReturn({ 'security_group': { 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'name': sg_name, 'description': 'HTTP and SSH access', 'security_group_rules': [], 'id': 'aaaa' } }) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'egress', 'remote_group_id': None, 'remote_ip_prefix': '10.0.1.0/24', 'port_range_min': '22', 'ethertype': 'IPv4', 'port_range_max': '22', 'protocol': 'tcp', 'security_group_id': 'aaaa' } }).AndRaise( neutron_exc.Conflict()) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'egress', 'remote_group_id': 'xxxx', 'remote_ip_prefix': None, 'port_range_min': None, 'ethertype': 'IPv4', 'port_range_max': None, 'protocol': None, 'security_group_id': 'aaaa' } }).AndRaise( neutron_exc.Conflict()) neutronclient.Client.create_security_group_rule({ 'security_group_rule': { 'direction': 'egress', 'remote_group_id': 'aaaa', 'remote_ip_prefix': None, 'port_range_min': None, 'ethertype': 'IPv4', 'port_range_max': None, 'protocol': None, 'security_group_id': 'aaaa' } }).AndRaise( neutron_exc.Conflict()) # delete script neutronclient.Client.show_security_group('aaaa').AndReturn({ 'security_group': { 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'name': 'sc1', 'description': '', 'security_group_rules': [{ 'direction': 'ingress', 'protocol': 'tcp', 'port_range_max': '22', 'id': 'bbbb', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': '22' }, { 'direction': 'ingress', 'protocol': 'tcp', 'port_range_max': '80', 'id': 'cccc', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': '80' }, { 'direction': 'ingress', 'protocol': 'tcp', 'port_range_max': None, 'id': 'dddd', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': 'wwww', 'remote_ip_prefix': None, 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': None }, { 'direction': 'egress', 'protocol': 'tcp', 'port_range_max': '22', 'id': 'eeee', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': None, 'remote_ip_prefix': '10.0.1.0/24', 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': '22' }, { 'direction': 'egress', 'protocol': None, 'port_range_max': None, 'id': 'ffff', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': None, 'remote_ip_prefix': 'xxxx', 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': None }, { 'direction': 'egress', 'protocol': None, 'port_range_max': None, 'id': 'gggg', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', 'remote_group_id': None, 'remote_ip_prefix': 'aaaa', 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'port_range_min': None 
}], 'id': 'aaaa'}}) neutronclient.Client.delete_security_group_rule('bbbb').AndRaise( neutron_exc.NeutronClientException(status_code=404)) neutronclient.Client.delete_security_group_rule('cccc').AndRaise( neutron_exc.NeutronClientException(status_code=404)) neutronclient.Client.delete_security_group_rule('dddd').AndRaise( neutron_exc.NeutronClientException(status_code=404)) neutronclient.Client.delete_security_group_rule('eeee').AndRaise( neutron_exc.NeutronClientException(status_code=404)) neutronclient.Client.delete_security_group_rule('ffff').AndRaise( neutron_exc.NeutronClientException(status_code=404)) neutronclient.Client.delete_security_group_rule('gggg').AndRaise( neutron_exc.NeutronClientException(status_code=404)) neutronclient.Client.delete_security_group('aaaa').AndRaise( neutron_exc.NeutronClientException(status_code=404)) neutronclient.Client.show_security_group('aaaa').AndRaise( neutron_exc.NeutronClientException(status_code=404)) self.m.ReplayAll() stack = self.create_stack(self.test_template) sg = stack['the_sg'] self.assertResourceState(sg, 'aaaa') scheduler.TaskRunner(sg.delete)() sg.state_set(sg.CREATE, sg.COMPLETE, 'to delete again') sg.resource_id = 'aaaa' stack.delete() self.m.VerifyAll() def test_security_group_validate(self): stack = self.create_stack(self.test_template_validate) sg = stack['the_sg'] ex = self.assertRaises(exception.StackValidationFailed, sg.validate) self.assertEqual( 'Security groups cannot be assigned the name "default".', ex.message) heat-10.0.2/heat/tests/openstack/neutron/test_neutron.py0000666000175000017500000001507613343562340023422 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
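# Much of what this module checks is plain dict reshaping: the
# NeutronResource helpers turn a Heat property dict into the request body
# neutron expects, dropping None values and folding ``value_specs`` into
# the top level. Below is a standalone sketch of that folding, under the
# assumption that it mirrors prepare_properties() (the helper name and
# example values are invented; the real logic lives in
# heat/engine/resources/openstack/neutron/neutron.py):


def _fold_value_specs_sketch(props, name):
    """Return a neutron-style request body from Heat-style properties."""
    body = {k: v for k, v in props.items()
            if k != 'value_specs' and v is not None}
    body.update(props.get('value_specs') or {})
    body.setdefault('name', name)
    return body

# e.g. _fold_value_specs_sketch(
#          {'admin_state_up': False,
#           'value_specs': {'router:external': True}}, 'resource_name')
# -> {'admin_state_up': False, 'router:external': True,
#     'name': 'resource_name'}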
from neutronclient.common import exceptions as qe import six from heat.common import exception from heat.engine import attributes from heat.engine import properties from heat.engine.resources.openstack.neutron import net from heat.engine.resources.openstack.neutron import neutron as nr from heat.engine import rsrc_defn from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils class NeutronTest(common.HeatTestCase): def test_validate_properties(self): vs = {'router:external': True} data = {'admin_state_up': False, 'value_specs': vs} p = properties.Properties(net.Net.properties_schema, data) self.assertIsNone(nr.NeutronResource.validate_properties(p)) vs['foo'] = '1234' self.assertIsNone(nr.NeutronResource.validate_properties(p)) vs.pop('foo') banned_keys = {'shared': True, 'name': 'foo', 'tenant_id': '1234'} for key, val in six.iteritems(banned_keys): vs.update({key: val}) msg = '%s not allowed in value_specs' % key self.assertEqual(msg, nr.NeutronResource.validate_properties(p)) vs.pop(key) def test_prepare_properties(self): data = {'admin_state_up': False, 'value_specs': {'router:external': True}} p = properties.Properties(net.Net.properties_schema, data) props = nr.NeutronResource.prepare_properties(p, 'resource_name') self.assertEqual({'name': 'resource_name', 'router:external': True, 'admin_state_up': False, 'shared': False}, props) def test_is_built(self): self.assertTrue(nr.NeutronResource.is_built({'status': 'ACTIVE'})) self.assertTrue(nr.NeutronResource.is_built({'status': 'DOWN'})) self.assertFalse(nr.NeutronResource.is_built({'status': 'BUILD'})) e = self.assertRaises( exception.ResourceInError, nr.NeutronResource.is_built, {'status': 'ERROR'}) self.assertEqual( 'Went to status ERROR due to "Unknown"', six.text_type(e)) e = self.assertRaises( exception.ResourceUnknownStatus, nr.NeutronResource.is_built, {'status': 'FROBULATING'}) self.assertEqual('Resource is not built - Unknown status ' 'FROBULATING due to "Unknown"', six.text_type(e)) def _get_some_neutron_resource(self): class SomeNeutronResource(nr.NeutronResource): properties_schema = {} @classmethod def is_service_available(cls, context): return (True, None) empty_tmpl = {'heat_template_version': 'ocata'} tmpl = template.Template(empty_tmpl) stack_name = 'dummystack' self.dummy_stack = stack.Stack(utils.dummy_context(), stack_name, tmpl) self.dummy_stack.store() tmpl = rsrc_defn.ResourceDefinition('test_res', 'Foo') return SomeNeutronResource('aresource', tmpl, self.dummy_stack) def test_resolve_attribute(self): res = self._get_some_neutron_resource() res.attributes_schema.update( {'attr2': attributes.Schema(type=attributes.Schema.STRING)}) res.attributes = attributes.Attributes(res.name, res.attributes_schema, res._resolve_any_attribute) side_effect = [{'attr1': 'val1', 'attr2': 'val2'}, {'attr1': 'val1', 'attr2': 'val2'}, {'attr1': 'val1', 'attr2': 'val2'}, qe.NotFound] self.patchobject(res, '_show_resource', side_effect=side_effect) res.resource_id = 'resource_id' self.assertEqual({'attr1': 'val1', 'attr2': 'val2'}, res.FnGetAtt('show')) self.assertEqual('val2', res.attributes['attr2']) self.assertRaises(KeyError, res._resolve_any_attribute, 'attr3') self.assertIsNone(res._resolve_any_attribute('attr1')) res.resource_id = None # use local cached object for non-show attribute self.assertEqual('val2', res.FnGetAtt('attr2')) # but the 'show' attribute is never cached self.assertIsNone(res.FnGetAtt('show')) # remove 'attr2' from res.attributes cache 
res.attributes.reset_resolved_values() # _resolve_attribute (in NeutronResource class) returns None # due to no resource_id self.assertIsNone(res.FnGetAtt('attr2')) def test_needs_replace_failed(self): res = self._get_some_neutron_resource() res.state_set(res.CREATE, res.FAILED) side_effect = [ {'attr1': 'val1', 'status': 'ACTIVE'}, {'attr1': 'val1', 'status': 'ERROR'}, {'attr1': 'val1', 'attr2': 'val2'}, qe.NotFound] mock_show_resource = self.patchobject(res, '_show_resource', side_effect=side_effect) # needs replace because res not created yet res.resource_id = None self.assertTrue(res.needs_replace_failed()) self.assertEqual(0, mock_show_resource.call_count) # no need to replace because res is ACTIVE underlying res.resource_id = 'I am a resource' self.assertFalse(res.needs_replace_failed()) self.assertEqual(1, mock_show_resource.call_count) # needs replace because res is ERROR underlying self.assertTrue(res.needs_replace_failed()) self.assertEqual(2, mock_show_resource.call_count) # no need to replace because res exists and no status # to check self.assertFalse(res.needs_replace_failed()) self.assertEqual(3, mock_show_resource.call_count) # needs replace because res can not be found self.assertTrue(res.needs_replace_failed()) self.assertEqual(4, mock_show_resource.call_count) heat-10.0.2/heat/tests/openstack/neutron/test_neutron_subnet.py0000666000175000017500000007507313343562340025005 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
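# The subnet tests below drive the polling and delete logic by giving the
# mocked show_subnet a side_effect *list*: mock consumes one entry per
# call, so a sequence like "404, payload, payload, 404" models a subnet
# that is not visible yet, then exists, then is gone after deletion. A
# tiny standalone illustration of that mock feature (the ID is invented
# and the helper is never called by the suite):


def _side_effect_sequence_sketch():
    import mock
    from neutronclient.common import exceptions as qe

    show = mock.Mock(side_effect=[
        qe.NeutronClientException(status_code=404),  # first call raises
        {'subnet': {'id': '91e47a57', 'enable_dhcp': True}},  # then returns
    ])
    try:
        show('91e47a57')
    except qe.NeutronClientException:
        pass  # simulated "not created yet"
    assert show('91e47a57')['subnet']['enable_dhcp']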
import copy import mock from neutronclient.common import exceptions as qe from neutronclient.neutron import v2_0 as neutronV20 from neutronclient.v2_0 import client as neutronclient import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine.clients.os import openstacksdk from heat.engine.hot import functions as hot_funcs from heat.engine import node_data from heat.engine import resource from heat.engine.resources.openstack.neutron import subnet from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import stk_defn from heat.tests import common from heat.tests import utils neutron_template = ''' heat_template_version: 2015-04-30 description: Template to test subnet Neutron resource resources: net: type: OS::Neutron::Net properties: name: the_net tenant_id: c1210485b2424d48804aad5d39c61b8f shared: true dhcp_agent_ids: - 28c25a04-3f73-45a7-a2b4-59e183943ddc sub_net: type: OS::Neutron::Subnet properties: network: { get_resource : net} tenant_id: c1210485b2424d48804aad5d39c61b8f ip_version: 4 cidr: 10.0.3.0/24 allocation_pools: - start: 10.0.3.20 end: 10.0.3.150 host_routes: - destination: 10.0.4.0/24 nexthop: 10.0.3.20 dns_nameservers: - 8.8.8.8 port: type: OS::Neutron::Port properties: device_id: d6b4d3a5-c700-476f-b609-1493dd9dadc0 name: port1 network: { get_resource : net} fixed_ips: - subnet: { get_resource : sub_net } ip_address: 10.0.3.21 port2: type: OS::Neutron::Port properties: name: port2 network: { get_resource : net} router: type: OS::Neutron::Router properties: l3_agent_ids: - 792ff887-6c85-4a56-b518-23f24fa65581 router_interface: type: OS::Neutron::RouterInterface properties: router_id: { get_resource : router } subnet: { get_resource : sub_net } gateway: type: OS::Neutron::RouterGateway properties: router_id: { get_resource : router } network: { get_resource : net} ''' neutron_template_deprecated = neutron_template.replace( 'network', 'network_id').replace('subnet', 'subnet_id') class NeutronSubnetTest(common.HeatTestCase): def setUp(self): super(NeutronSubnetTest, self).setUp() self.create_mock = self.patchobject(neutronclient.Client, 'create_subnet') self.delete_mock = self.patchobject(neutronclient.Client, 'delete_subnet') self.show_mock = self.patchobject(neutronclient.Client, 'show_subnet') self.update_mock = self.patchobject(neutronclient.Client, 'update_subnet') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) self.patchobject(openstacksdk.OpenStackSDKPlugin, 'find_network_segment', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') self.patchobject(neutronV20, 'find_resourceid_by_name_or_id', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') def create_subnet(self, t, stack, resource_name): resource_defns = stack.t.resource_definitions(stack) rsrc = subnet.Subnet('test_subnet', resource_defns[resource_name], stack) return rsrc def _setup_mock(self, stack_name=None, use_deprecated_templ=False, tags=None): if use_deprecated_templ: t = template_format.parse(neutron_template_deprecated) else: t = template_format.parse(neutron_template) if tags: t['resources']['sub_net']['properties']['tags'] = tags stack = utils.parse_stack(t, stack_name=stack_name) sn = { "subnet": { "name": "name", "network_id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "tenant_id": "c1210485b2424d48804aad5d39c61b8f", "allocation_pools": [ {"start": "10.0.3.20", "end": "10.0.3.150"}], "gateway_ip": "10.0.3.1", 'host_routes': [ {'destination':
u'10.0.4.0/24', 'nexthop': u'10.0.3.20'}], "ip_version": 4, "cidr": "10.0.3.0/24", "dns_nameservers": ["8.8.8.8"], "id": "91e47a57-7508-46fe-afc9-fc454e8580e1", "enable_dhcp": True, } } self.create_mock.return_value = sn self.show_mock.side_effect = [ qe.NeutronClientException(status_code=404), sn, sn, qe.NeutronClientException(status_code=404) ] self.delete_mock.side_effect = [ None, qe.NeutronClientException(status_code=404) ] return t, stack def test_subnet(self): update_props = {'subnet': { 'dns_nameservers': ['8.8.8.8', '192.168.1.254'], 'name': 'mysubnet', 'enable_dhcp': True, 'host_routes': [{'destination': '192.168.1.0/24', 'nexthop': '194.168.1.2'}], 'gateway_ip': '10.0.3.105', 'tags': ['tag2', 'tag3'], 'allocation_pools': [ {'start': '10.0.3.20', 'end': '10.0.3.100'}, {'start': '10.0.3.110', 'end': '10.0.3.200'}]}} t, stack = self._setup_mock(tags=['tag1', 'tag2']) create_props = {'subnet': { 'name': utils.PhysName(stack.name, 'test_subnet'), 'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'dns_nameservers': [u'8.8.8.8'], 'allocation_pools': [ {'start': u'10.0.3.20', 'end': u'10.0.3.150'}], 'host_routes': [ {'destination': u'10.0.4.0/24', 'nexthop': u'10.0.3.20'}], 'ip_version': 4, 'cidr': u'10.0.3.0/24', 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'enable_dhcp': True}} self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') set_tag_mock = self.patchobject(neutronclient.Client, 'replace_tag') rsrc = self.create_subnet(t, stack, 'sub_net') scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.create_mock.assert_called_once_with(create_props) set_tag_mock.assert_called_once_with( 'subnets', rsrc.resource_id, {'tags': ['tag1', 'tag2']} ) rsrc.validate() ref_id = rsrc.FnGetRefId() self.assertEqual('91e47a57-7508-46fe-afc9-fc454e8580e1', ref_id) self.assertIsNone(rsrc.FnGetAtt('network_id')) self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', rsrc.FnGetAtt('network_id')) self.assertEqual('8.8.8.8', rsrc.FnGetAtt('dns_nameservers')[0]) # assert the dependency (implicit or explicit) between the ports # and the subnet self.assertIn(stack['port'], stack.dependencies[stack['sub_net']]) self.assertIn(stack['port2'], stack.dependencies[stack['sub_net']]) update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), update_props['subnet']) rsrc.handle_update(update_snippet, {}, update_props['subnet']) self.update_mock.assert_called_once_with( '91e47a57-7508-46fe-afc9-fc454e8580e1', update_props) set_tag_mock.assert_called_with( 'subnets', rsrc.resource_id, {'tags': ['tag2', 'tag3']} ) # with name None del update_props['subnet']['name'] rsrc.handle_update(update_snippet, {}, update_props['subnet']) self.update_mock.assert_called_with( '91e47a57-7508-46fe-afc9-fc454e8580e1', update_props) # with no prop_diff rsrc.handle_update(update_snippet, {}, {}) self.assertIsNone(scheduler.TaskRunner(rsrc.delete)()) rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again') self.assertIsNone(scheduler.TaskRunner(rsrc.delete)()) def test_update_subnet_with_value_specs(self): update_props = {'subnet': { 'name': 'mysubnet', 'value_specs': { 'enable_dhcp': True, } }} update_props_merged = copy.deepcopy(update_props) update_props_merged['subnet']['enable_dhcp'] = True del update_props_merged['subnet']['value_specs'] t, stack = self._setup_mock() self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = self.create_subnet(t, stack, 'sub_net') 
scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.validate() ref_id = rsrc.FnGetRefId() self.assertEqual('91e47a57-7508-46fe-afc9-fc454e8580e1', ref_id) self.assertIsNone(rsrc.FnGetAtt('network_id')) self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', rsrc.FnGetAtt('network_id')) self.assertEqual('8.8.8.8', rsrc.FnGetAtt('dns_nameservers')[0]) update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), update_props['subnet']) rsrc.handle_update(update_snippet, {}, update_props['subnet']) self.update_mock.assert_called_once_with( '91e47a57-7508-46fe-afc9-fc454e8580e1', update_props ) self.assertIsNone(scheduler.TaskRunner(rsrc.delete)()) rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again') self.assertIsNone(scheduler.TaskRunner(rsrc.delete)()) def test_update_subnet_with_no_name(self): stack_name = utils.random_name() update_props = {'subnet': { 'name': None, }} update_props_name = {'subnet': { 'name': utils.PhysName(stack_name, 'test_subnet'), }} t, stack = self._setup_mock(stack_name) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = self.create_subnet(t, stack, 'sub_net') scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.validate() ref_id = rsrc.FnGetRefId() self.assertEqual('91e47a57-7508-46fe-afc9-fc454e8580e1', ref_id) self.assertIsNone(rsrc.FnGetAtt('network_id')) self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', rsrc.FnGetAtt('network_id')) self.assertEqual('8.8.8.8', rsrc.FnGetAtt('dns_nameservers')[0]) update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), update_props['subnet']) rsrc.handle_update(update_snippet, {}, update_props['subnet']) self.update_mock.assert_called_once_with( '91e47a57-7508-46fe-afc9-fc454e8580e1', update_props_name ) self.assertIsNone(scheduler.TaskRunner(rsrc.delete)()) rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again') self.assertIsNone(scheduler.TaskRunner(rsrc.delete)()) def test_subnet_with_subnetpool(self): subnet_dict = { "subnet": { "allocation_pools": [ {"start": "10.0.3.20", "end": "10.0.3.150"}], "host_routes": [ {"destination": "10.0.4.0/24", "nexthop": "10.0.3.20"}], "subnetpool_id": 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', "prefixlen": 24, "dns_nameservers": ["8.8.8.8"], "enable_dhcp": True, "gateway_ip": "10.0.3.1", "id": "91e47a57-7508-46fe-afc9-fc454e8580e1", "ip_version": 4, "name": "name", "network_id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "tenant_id": "c1210485b2424d48804aad5d39c61b8f" } } self.create_mock.return_value = subnet_dict self.show_mock.side_effect = [ qe.NeutronClientException(status_code=404)] t = template_format.parse(neutron_template) del t['resources']['sub_net']['properties']['cidr'] t['resources']['sub_net']['properties'][ 'subnetpool'] = 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' t['resources']['sub_net']['properties'][ 'prefixlen'] = 24 t['resources']['sub_net']['properties'][ 'name'] = 'mysubnet' stack = utils.parse_stack(t) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = self.create_subnet(t, stack, 'sub_net') scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) ref_id = rsrc.FnGetRefId() self.assertEqual('91e47a57-7508-46fe-afc9-fc454e8580e1', ref_id) scheduler.TaskRunner(rsrc.delete)() def test_subnet_with_segment(self): subnet_dict = { "subnet": { "allocation_pools": [ {"start": "10.0.3.20", "end": 
"10.0.3.150"}], "host_routes": [ {"destination": "10.0.4.0/24", "nexthop": "10.0.3.20"}], "segment_id": 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', "prefixlen": 24, "dns_nameservers": ["8.8.8.8"], "enable_dhcp": True, "gateway_ip": "10.0.3.1", "id": "91e47a57-7508-46fe-afc9-fc454e8580e1", "ip_version": 4, "name": "name", "network_id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "tenant_id": "c1210485b2424d48804aad5d39c61b8f" } } self.create_mock.return_value = subnet_dict self.show_mock.side_effect = [ qe.NeutronClientException(status_code=404)] t = template_format.parse(neutron_template) del t['resources']['sub_net']['properties']['cidr'] t['resources']['sub_net']['properties'][ 'segment'] = 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' t['resources']['sub_net']['properties'][ 'prefixlen'] = 24 t['resources']['sub_net']['properties'][ 'name'] = 'mysubnet' stack = utils.parse_stack(t) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = self.create_subnet(t, stack, 'sub_net') scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) ref_id = rsrc.FnGetRefId() self.assertEqual('91e47a57-7508-46fe-afc9-fc454e8580e1', ref_id) scheduler.TaskRunner(rsrc.delete)() def test_subnet_deprecated(self): t, stack = self._setup_mock(use_deprecated_templ=True) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = self.create_subnet(t, stack, 'sub_net') scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.validate() ref_id = rsrc.FnGetRefId() self.assertEqual('91e47a57-7508-46fe-afc9-fc454e8580e1', ref_id) self.assertIsNone(rsrc.FnGetAtt('network_id')) self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', rsrc.FnGetAtt('network_id')) self.assertEqual('8.8.8.8', rsrc.FnGetAtt('dns_nameservers')[0]) # assert the dependency (implicit or explicit) between the ports # and the subnet self.assertIn(stack['port'], stack.dependencies[stack['sub_net']]) self.assertIn(stack['port2'], stack.dependencies[stack['sub_net']]) self.assertIsNone(scheduler.TaskRunner(rsrc.delete)()) rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again') self.assertIsNone(scheduler.TaskRunner(rsrc.delete)()) def test_subnet_disable_dhcp(self): t = template_format.parse(neutron_template) t['resources']['sub_net']['properties']['enable_dhcp'] = 'False' stack = utils.parse_stack(t) subnet_info = { "subnet": { "allocation_pools": [ {"start": "10.0.3.20", "end": "10.0.3.150"}], "host_routes": [ {"destination": "10.0.4.0/24", "nexthop": "10.0.3.20"}], "cidr": "10.0.3.0/24", "dns_nameservers": ["8.8.8.8"], "enable_dhcp": False, "gateway_ip": "10.0.3.1", "id": "91e47a57-7508-46fe-afc9-fc454e8580e1", "ip_version": 4, "name": "name", "network_id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "tenant_id": "c1210485b2424d48804aad5d39c61b8f" } } self.create_mock.return_value = subnet_info self.show_mock.side_effect = [ subnet_info, qe.NeutronClientException(status_code=404) ] self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = self.create_subnet(t, stack, 'sub_net') scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.validate() ref_id = rsrc.FnGetRefId() self.assertEqual('91e47a57-7508-46fe-afc9-fc454e8580e1', ref_id) self.assertIs(False, rsrc.FnGetAtt('enable_dhcp')) scheduler.TaskRunner(rsrc.delete)() def test_null_gateway_ip(self): p = {} subnet.Subnet._null_gateway_ip(p) 
self.assertEqual({}, p) p = {'foo': 'bar'} subnet.Subnet._null_gateway_ip(p) self.assertEqual({'foo': 'bar'}, p) p = { 'foo': 'bar', 'gateway_ip': '198.51.100.0' } subnet.Subnet._null_gateway_ip(p) self.assertEqual({ 'foo': 'bar', 'gateway_ip': '198.51.100.0' }, p) p = { 'foo': 'bar', 'gateway_ip': '' } subnet.Subnet._null_gateway_ip(p) self.assertEqual({ 'foo': 'bar', 'gateway_ip': None }, p) # This should not happen as prepare_properties # strips out None values, but testing anyway p = { 'foo': 'bar', 'gateway_ip': None } subnet.Subnet._null_gateway_ip(p) self.assertEqual({ 'foo': 'bar', 'gateway_ip': None }, p) def test_ipv6_subnet(self): t = template_format.parse(neutron_template) props = t['resources']['sub_net']['properties'] props.pop('allocation_pools') props.pop('host_routes') props['ip_version'] = 6 props['ipv6_address_mode'] = 'slaac' props['ipv6_ra_mode'] = 'slaac' props['cidr'] = 'fdfa:6a50:d22b::/64' props['dns_nameservers'] = ['2001:4860:4860::8844'] stack = utils.parse_stack(t) create_info = { 'subnet': { 'name': utils.PhysName(stack.name, 'test_subnet'), 'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', 'dns_nameservers': [u'2001:4860:4860::8844'], 'ip_version': 6, 'enable_dhcp': True, 'cidr': u'fdfa:6a50:d22b::/64', 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'ipv6_address_mode': 'slaac', 'ipv6_ra_mode': 'slaac' } } subnet_info = copy.deepcopy(create_info) subnet_info['subnet']['id'] = "91e47a57-7508-46fe-afc9-fc454e8580e1" self.create_mock.return_value = subnet_info self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = self.create_subnet(t, stack, 'sub_net') scheduler.TaskRunner(rsrc.create)() self.create_mock.assert_called_once_with(create_info) self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) rsrc.validate() def test_host_routes_validate_destination(self): t = template_format.parse(neutron_template) props = t['resources']['sub_net']['properties'] props['host_routes'] = [{'destination': 'invalid_cidr', 'nexthop': '10.0.3.20'}] stack = utils.parse_stack(t) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = stack['sub_net'] ex = self.assertRaises(exception.StackValidationFailed, rsrc.validate) msg = ("Property error: " "resources.sub_net.properties.host_routes[0].destination: " "Error validating value 'invalid_cidr': Invalid net cidr " "invalid IPNetwork invalid_cidr ") self.assertEqual(msg, six.text_type(ex)) def test_ipv6_validate_ra_mode(self): t = template_format.parse(neutron_template) props = t['resources']['sub_net']['properties'] props['ipv6_address_mode'] = 'dhcpv6-stateful' props['ipv6_ra_mode'] = 'slaac' props['ip_version'] = 6 stack = utils.parse_stack(t) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = stack['sub_net'] ex = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertEqual("When both ipv6_ra_mode and ipv6_address_mode are " "set, they must be equal.", six.text_type(ex)) def test_ipv6_validate_ip_version(self): t = template_format.parse(neutron_template) props = t['resources']['sub_net']['properties'] props['ipv6_address_mode'] = 'slaac' props['ipv6_ra_mode'] = 'slaac' props['ip_version'] = 4 stack = utils.parse_stack(t) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = stack['sub_net'] ex = self.assertRaises(exception.StackValidationFailed, rsrc.validate) self.assertEqual("ipv6_ra_mode and 
ipv6_address_mode are not " "supported for ipv4.", six.text_type(ex)) def test_validate_both_subnetpool_cidr(self): self.patchobject(neutronV20, 'find_resourceid_by_name_or_id', return_value='new_pool') t = template_format.parse(neutron_template) props = t['resources']['sub_net']['properties'] props['subnetpool'] = 'new_pool' stack = utils.parse_stack(t) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = stack['sub_net'] ex = self.assertRaises(exception.ResourcePropertyConflict, rsrc.validate) msg = ("Cannot define the following properties at the same time: " "subnetpool, cidr.") self.assertEqual(msg, six.text_type(ex)) def test_validate_none_subnetpool_cidr(self): t = template_format.parse(neutron_template) props = t['resources']['sub_net']['properties'] del props['cidr'] stack = utils.parse_stack(t) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = stack['sub_net'] ex = self.assertRaises(exception.PropertyUnspecifiedError, rsrc.validate) msg = ("At least one of the following properties must be specified: " "subnetpool, cidr.") self.assertEqual(msg, six.text_type(ex)) def test_validate_subnetpool_ref_with_cidr(self): t = template_format.parse(neutron_template) props = t['resources']['sub_net']['properties'] props['subnetpool'] = {'get_resource': 'subnetpool'} props = t['resources']['sub_net']['properties'] stack = utils.parse_stack(t) snippet = rsrc_defn.ResourceDefinition('subnetpool', 'OS::Neutron::SubnetPool') res = resource.Resource('subnetpool', snippet, stack) stack.add_resource(res) self.patchobject(stack['subnetpool'], 'FnGetRefId', return_value=None) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = stack['sub_net'] ex = self.assertRaises(exception.ResourcePropertyConflict, rsrc.validate) msg = ("Cannot define the following properties at the same time: " "subnetpool, cidr.") self.assertEqual(msg, six.text_type(ex)) def test_validate_subnetpool_ref_no_cidr(self): t = template_format.parse(neutron_template) props = t['resources']['sub_net']['properties'] del props['cidr'] props['subnetpool'] = {'get_resource': 'subnetpool'} props = t['resources']['sub_net']['properties'] stack = utils.parse_stack(t) snippet = rsrc_defn.ResourceDefinition('subnetpool', 'OS::Neutron::SubnetPool') res = resource.Resource('subnetpool', snippet, stack) stack.add_resource(res) self.patchobject(stack['subnetpool'], 'FnGetRefId', return_value=None) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = stack['sub_net'] self.assertIsNone(rsrc.validate()) def test_validate_both_prefixlen_cidr(self): t = template_format.parse(neutron_template) props = t['resources']['sub_net']['properties'] props['prefixlen'] = '24' stack = utils.parse_stack(t) self.patchobject(stack['net'], 'FnGetRefId', return_value='fc68ea2c-b60b-4b4f-bd82-94ec81110766') rsrc = stack['sub_net'] ex = self.assertRaises(exception.ResourcePropertyConflict, rsrc.validate) msg = ("Cannot define the following properties at the same time: " "prefixlen, cidr.") self.assertEqual(msg, six.text_type(ex)) def test_deprecated_network_id(self): template = """ heat_template_version: 2015-04-30 resources: net: type: OS::Neutron::Net properties: name: test subnet: type: OS::Neutron::Subnet properties: network_id: { get_resource: net } cidr: 10.0.0.0/24 """ t = template_format.parse(template) stack = utils.parse_stack(t) rsrc = stack['subnet'] nd = 
{'reference_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'} stk_defn.update_resource_data(stack.defn, 'net', node_data.NodeData.from_dict(nd)) self.create_mock.return_value = { "subnet": { "id": "91e47a57-7508-46fe-afc9-fc454e8580e1", "ip_version": 4, "name": "name", "network_id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "tenant_id": "c1210485b2424d48804aad5d39c61b8f" } } stack.create() self.assertEqual(hot_funcs.GetResource(stack.defn, 'get_resource', 'net'), rsrc.properties.get('network')) self.assertIsNone(rsrc.properties.get('network_id')) def test_subnet_get_live_state(self): template = """ heat_template_version: 2015-04-30 resources: net: type: OS::Neutron::Net properties: name: test subnet: type: OS::Neutron::Subnet properties: network_id: { get_resource: net } cidr: 10.0.0.0/25 value_specs: test_value_spec: value_spec_value """ t = template_format.parse(template) stack = utils.parse_stack(t) rsrc = stack['subnet'] stack.create() subnet_resp = {'subnet': { 'name': 'subnet-subnet-la5usdgifhrd', 'enable_dhcp': True, 'network_id': 'dffd43b3-6206-4402-87e6-8a16ddf3bd68', 'tenant_id': '30f466e3d14b4251853899f9c26e2b66', 'dns_nameservers': [], 'ipv6_ra_mode': None, 'allocation_pools': [{'start': '10.0.0.2', 'end': '10.0.0.126'}], 'gateway_ip': '10.0.0.1', 'ipv6_address_mode': None, 'ip_version': 4, 'host_routes': [], 'prefixlen': None, 'cidr': '10.0.0.0/25', 'id': 'b255342b-31b7-4674-8ea4-a144bca658b0', 'subnetpool_id': None, 'test_value_spec': 'value_spec_value'} } rsrc.client().show_subnet = mock.MagicMock(return_value=subnet_resp) rsrc.resource_id = '1234' reality = rsrc.get_live_state(rsrc.properties) expected = { 'enable_dhcp': True, 'dns_nameservers': [], 'allocation_pools': [{'start': '10.0.0.2', 'end': '10.0.0.126'}], 'gateway_ip': '10.0.0.1', 'host_routes': [], 'value_specs': {'test_value_spec': 'value_spec_value'} } self.assertEqual(set(expected.keys()), set(reality.keys())) for key in expected: self.assertEqual(expected[key], reality[key]) heat-10.0.2/heat/tests/openstack/neutron/test_neutron_net.py0000666000175000017500000003106613343562351024267 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from neutronclient.common import exceptions as qe from neutronclient.v2_0 import client as neutronclient from heat.common import exception from heat.common import template_format from heat.engine.clients.os import neutron from heat.engine.resources.openstack.neutron import net from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests import utils neutron_template = ''' heat_template_version: 2015-04-30 description: Template to test network Neutron resource resources: network: type: OS::Neutron::Net properties: name: the_network tenant_id: c1210485b2424d48804aad5d39c61b8f shared: true dhcp_agent_ids: - 28c25a04-3f73-45a7-a2b4-59e183943ddc port_security_enabled: False dns_domain: openstack.org. 
tags: - tag1 - tag2 subnet: type: OS::Neutron::Subnet properties: network: { get_resource: network } tenant_id: c1210485b2424d48804aad5d39c61b8f ip_version: 4 cidr: 10.0.3.0/24 allocation_pools: - start: 10.0.3.20 end: 10.0.3.150 host_routes: - destination: 10.0.4.0/24 nexthop: 10.0.3.20 dns_nameservers: - 8.8.8.8 router: type: OS::Neutron::Router properties: l3_agent_ids: - 792ff887-6c85-4a56-b518-23f24fa65581 router_interface: type: OS::Neutron::RouterInterface properties: router_id: {get_resource: router} subnet: {get_resource : subnet} gateway: type: OS::Neutron::RouterGateway properties: router_id: {get_resource: router} network: {get_resource : network} ''' class NeutronNetTest(common.HeatTestCase): def setUp(self): super(NeutronNetTest, self).setUp() self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_net(self, t, stack, resource_name): resource_defns = stack.t.resource_definitions(stack) rsrc = net.Net('test_net', resource_defns[resource_name], stack) scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) return rsrc def test_net(self): t = template_format.parse(neutron_template) stack = utils.parse_stack(t) resource_type = 'networks' net_info_build = {"network": { "status": "BUILD", "subnets": [], "name": "name", "admin_state_up": True, "shared": True, "tenant_id": "c1210485b2424d48804aad5d39c61b8f", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "mtu": 0 }} net_info_active = copy.deepcopy(net_info_build) net_info_active['network'].update({'status': 'ACTIVE'}) agent_info = { "agents": [{ "admin_state_up": True, "agent_type": "DHCP agent", "alive": True, "binary": "neutron-dhcp-agent", "configurations": { "dhcp_driver": "DummyDriver", "dhcp_lease_duration": 86400, "networks": 0, "ports": 0, "subnets": 0, "use_namespaces": True}, "created_at": "2014-03-20 05:12:34", "description": None, "heartbeat_timestamp": "2014-03-20 05:12:34", "host": "hostname", "id": "28c25a04-3f73-45a7-a2b4-59e183943ddc", "started_at": "2014-03-20 05:12:34", "topic": "dhcp_agent" }] } create_mock = self.patchobject(neutronclient.Client, 'create_network') create_mock.return_value = net_info_build list_dhcp_agent_mock = self.patchobject( neutronclient.Client, 'list_dhcp_agent_hosting_networks') list_dhcp_agent_mock.side_effect = [{"agents": []}, agent_info] add_dhcp_agent_mock = self.patchobject( neutronclient.Client, 'add_network_to_dhcp_agent') remove_dhcp_agent_mock = self.patchobject( neutronclient.Client, 'remove_network_from_dhcp_agent') replace_tag_mock = self.patchobject( neutronclient.Client, 'replace_tag') show_network_mock = self.patchobject( neutronclient.Client, 'show_network') show_network_mock.side_effect = [ net_info_build, net_info_active, qe.NetworkNotFoundClient(status_code=404), net_info_active, net_info_active, qe.NetworkNotFoundClient(status_code=404)] update_net_mock = self.patchobject( neutronclient.Client, 'update_network') del_net_mock = self.patchobject( neutronclient.Client, 'delete_network') del_net_mock.side_effect = [None, qe.NetworkNotFoundClient(status_code=404)] self.patchobject(neutron.NeutronClientPlugin, 'get_qos_policy_id', return_value='0389f747-7785-4757-b7bb-2ab07e4b09c3') self.patchobject(stack['router'], 'FnGetRefId', return_value='792ff887-6c85-4a56-b518-23f24fa65581') # network create rsrc = self.create_net(t, stack, 'network') create_mock.assert_called_with( {'network': {'name': u'the_network', 'admin_state_up': True, 'tenant_id': u'c1210485b2424d48804aad5d39c61b8f', 'dns_domain': 
u'openstack.org.', 'shared': True, 'port_security_enabled': False} } ) add_dhcp_agent_mock.assert_called_with( '28c25a04-3f73-45a7-a2b4-59e183943ddc', {'network_id': u'fc68ea2c-b60b-4b4f-bd82-94ec81110766'}) replace_tag_mock.assert_called_with( resource_type, 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', {'tags': ['tag1', 'tag2']} ) # assert the implicit dependency between the gateway and the interface deps = stack.dependencies[stack['router_interface']] self.assertIn(stack['gateway'], deps) # assert the implicit dependency between the gateway and the subnet deps = stack.dependencies[stack['subnet']] self.assertIn(stack['gateway'], deps) rsrc.validate() ref_id = rsrc.FnGetRefId() self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id) self.assertIsNone(rsrc.FnGetAtt('status')) self.assertEqual('ACTIVE', rsrc.FnGetAtt('status')) self.assertEqual(0, rsrc.FnGetAtt('mtu')) self.assertRaises( exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'Foo') # update tests prop_diff = { "name": "mynet", "dhcp_agent_ids": [ "bb09cfcd-5277-473d-8336-d4ed8628ae68" ], 'qos_policy': '0389f747-7785-4757-b7bb-2ab07e4b09c3' } update_snippet = rsrc_defn.ResourceDefinition(rsrc.name, rsrc.type(), prop_diff) rsrc.handle_update(update_snippet, {}, prop_diff) update_net_mock.assert_called_with( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', {'network': { 'name': 'mynet', 'qos_policy_id': '0389f747-7785-4757-b7bb-2ab07e4b09c3' }}) add_dhcp_agent_mock.assert_called_with( 'bb09cfcd-5277-473d-8336-d4ed8628ae68', {'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'}) remove_dhcp_agent_mock.assert_called_with( '28c25a04-3f73-45a7-a2b4-59e183943ddc', 'fc68ea2c-b60b-4b4f-bd82-94ec81110766' ) # Update with None qos_policy prop_diff['qos_policy'] = None rsrc.handle_update(update_snippet, {}, prop_diff) update_net_mock.assert_called_with( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', {'network': { 'name': 'mynet', 'qos_policy_id': None }} ) # Update with value_specs prop_diff['value_specs'] = {"port_security_enabled": True} rsrc.handle_update(update_snippet, {}, prop_diff) update_net_mock.assert_called_with( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', {'network': { 'name': 'mynet', 'port_security_enabled': True, 'qos_policy_id': None }} ) # Update with name = None rsrc.handle_update(update_snippet, {}, {'name': None}) update_net_mock.assert_called_with( 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', {'network': { 'name': utils.PhysName(stack.name, 'test_net'), }} ) # Update with tags=[] rsrc.handle_update(update_snippet, {}, {'tags': []}) replace_tag_mock.assert_called_with( resource_type, 'fc68ea2c-b60b-4b4f-bd82-94ec81110766', {'tags': []} ) # Update with empty prop_diff rsrc.handle_update(update_snippet, {}, {}) # update delete scheduler.TaskRunner(rsrc.delete)() del_net_mock.assert_called_with('fc68ea2c-b60b-4b4f-bd82-94ec81110766') # delete raise not found rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again') scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) def test_net_get_live_state(self): tmpl = """ heat_template_version: 2015-10-15 resources: net: type: OS::Neutron::Net properties: value_specs: 'test:property': test_value """ t = template_format.parse(tmpl) stack = utils.parse_stack(t) show_net = self.patchobject(neutronclient.Client, 'show_network') show_net.return_value = {'network': {'status': 'ACTIVE'}} self.patchobject(neutronclient.Client, 'list_dhcp_agent_hosting_networks', return_value={'agents': [{'id': '1111'}]}) self.patchobject(neutronclient.Client, 'create_network', 
return_value={"network": { "status": "BUILD", "subnets": [], "qos_policy_id": "some", "name": "name", "admin_state_up": True, "shared": True, "tenant_id": "c1210485b2424d48804aad5d39c61b8f", "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766", "mtu": 0 }}) rsrc = self.create_net(t, stack, 'net') network_resp = { 'name': 'net1-net-wkkl2vwupdee', 'admin_state_up': True, 'tenant_id': '30f466e3d14b4251853899f9c26e2b66', 'mtu': 0, 'router:external': False, 'port_security_enabled': True, 'shared': False, 'qos_policy_id': 'some', 'id': u'5a4bb8a0-5077-4f8a-8140-5430370020e6', 'test:property': 'test_value_resp' } show_net.return_value = {'network': network_resp} reality = rsrc.get_live_state(rsrc.properties) expected = { 'admin_state_up': True, 'qos_policy': "some", 'value_specs': { 'test:property': 'test_value_resp' }, 'port_security_enabled': True, 'dhcp_agent_ids': ['1111'] } self.assertEqual(set(expected.keys()), set(reality.keys())) for key in expected: if key == 'dhcp_agent_ids': self.assertEqual(set(expected[key]), set(reality[key])) continue self.assertEqual(expected[key], reality[key]) heat-10.0.2/heat/tests/openstack/neutron/lbaas/0000775000175000017500000000000013343562672021376 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/neutron/lbaas/test_l7policy.py0000666000175000017500000002606513343562340024554 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import yaml from neutronclient.common import exceptions from heat.common import exception from heat.common.i18n import _ from heat.common import template_format from heat.engine.resources.openstack.neutron.lbaas import l7policy from heat.tests import common from heat.tests.openstack.neutron import inline_templates from heat.tests import utils class L7PolicyTest(common.HeatTestCase): def test_resource_mapping(self): mapping = l7policy.resource_mapping() self.assertEqual(mapping['OS::Neutron::LBaaS::L7Policy'], l7policy.L7Policy) @mock.patch('heat.engine.clients.os.neutron.' 
'NeutronClientPlugin.has_extension', return_value=True) def _create_stack(self, ext_func, tmpl=inline_templates.L7POLICY_TEMPLATE): self.t = template_format.parse(tmpl) self.stack = utils.parse_stack(self.t) self.l7policy = self.stack['l7policy'] self.neutron_client = mock.MagicMock() self.l7policy.client = mock.MagicMock(return_value=self.neutron_client) self.l7policy.client_plugin().find_resourceid_by_name_or_id = ( mock.MagicMock(return_value='123')) self.l7policy.client_plugin().client = mock.MagicMock( return_value=self.neutron_client) self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, ] def test_validate_reject_action_with_conflicting_props(self): tmpl = yaml.load(inline_templates.L7POLICY_TEMPLATE) props = tmpl['resources']['l7policy']['properties'] props['action'] = 'REJECT' self._create_stack(tmpl=yaml.dump(tmpl)) msg = _('Properties redirect_pool and redirect_url are not ' 'required when action type is set to REJECT.') with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.' 'has_extension', return_value=True): self.assertRaisesRegex(exception.StackValidationFailed, msg, self.l7policy.validate) def test_validate_redirect_pool_action_with_url(self): tmpl = yaml.load(inline_templates.L7POLICY_TEMPLATE) props = tmpl['resources']['l7policy']['properties'] props['action'] = 'REDIRECT_TO_POOL' props['redirect_pool'] = '123' self._create_stack(tmpl=yaml.dump(tmpl)) msg = _('redirect_url property should only be specified ' 'for action with value REDIRECT_TO_URL.') with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.' 'has_extension', return_value=True): self.assertRaisesRegex(exception.ResourcePropertyValueDependency, msg, self.l7policy.validate) def test_validate_redirect_pool_action_without_pool(self): tmpl = yaml.load(inline_templates.L7POLICY_TEMPLATE) props = tmpl['resources']['l7policy']['properties'] props['action'] = 'REDIRECT_TO_POOL' del props['redirect_url'] self._create_stack(tmpl=yaml.dump(tmpl)) msg = _('Property redirect_pool is required when action type ' 'is set to REDIRECT_TO_POOL.') with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.' 'has_extension', return_value=True): self.assertRaisesRegex(exception.StackValidationFailed, msg, self.l7policy.validate) def test_validate_redirect_url_action_with_pool(self): tmpl = yaml.load(inline_templates.L7POLICY_TEMPLATE) props = tmpl['resources']['l7policy']['properties'] props['redirect_pool'] = '123' self._create_stack(tmpl=yaml.dump(tmpl)) msg = _('redirect_pool property should only be specified ' 'for action with value REDIRECT_TO_POOL.') with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.' 'has_extension', return_value=True): self.assertRaisesRegex(exception.ResourcePropertyValueDependency, msg, self.l7policy.validate) def test_validate_redirect_url_action_without_url(self): tmpl = yaml.load(inline_templates.L7POLICY_TEMPLATE) props = tmpl['resources']['l7policy']['properties'] del props['redirect_url'] self._create_stack(tmpl=yaml.dump(tmpl)) msg = _('Property redirect_url is required when action type ' 'is set to REDIRECT_TO_URL.') with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.' 
'has_extension', return_value=True): self.assertRaisesRegex(exception.StackValidationFailed, msg, self.l7policy.validate) def test_create(self): self._create_stack() self.neutron_client.create_lbaas_l7policy.side_effect = [ exceptions.StateInvalidClient, {'l7policy': {'id': '1234'}} ] expected = { 'l7policy': { 'name': u'test_l7policy', 'description': u'test l7policy resource', 'action': u'REDIRECT_TO_URL', 'listener_id': u'123', 'redirect_url': u'http://www.mirantis.com', 'position': 1, 'admin_state_up': True } } props = self.l7policy.handle_create() self.assertFalse(self.l7policy.check_create_complete(props)) self.neutron_client.create_lbaas_l7policy.assert_called_with(expected) self.assertFalse(self.l7policy.check_create_complete(props)) self.neutron_client.create_lbaas_l7policy.assert_called_with(expected) self.assertFalse(self.l7policy.check_create_complete(props)) self.assertTrue(self.l7policy.check_create_complete(props)) def test_create_missing_properties(self): self.patchobject(l7policy.L7Policy, 'is_service_available', return_value=(True, None)) for prop in ('action', 'listener'): tmpl = yaml.load(inline_templates.L7POLICY_TEMPLATE) del tmpl['resources']['l7policy']['properties'][prop] self._create_stack(tmpl=yaml.dump(tmpl)) self.assertRaises(exception.StackValidationFailed, self.l7policy.validate) def test_show_resource(self): self._create_stack() self.l7policy.resource_id_set('1234') self.neutron_client.show_lbaas_l7policy.return_value = { 'l7policy': {'id': '1234'} } self.assertEqual({'id': '1234'}, self.l7policy._show_resource()) self.neutron_client.show_lbaas_l7policy.assert_called_with('1234') def test_update(self): self._create_stack() self.l7policy.resource_id_set('1234') self.neutron_client.update_lbaas_l7policy.side_effect = [ exceptions.StateInvalidClient, None] prop_diff = { 'admin_state_up': False, 'name': 'your_l7policy', 'redirect_url': 'http://www.google.com' } prop_diff = self.l7policy.handle_update(None, None, prop_diff) self.assertFalse(self.l7policy.check_update_complete(prop_diff)) self.assertFalse(self.l7policy._update_called) self.neutron_client.update_lbaas_l7policy.assert_called_with( '1234', {'l7policy': prop_diff}) self.assertFalse(self.l7policy.check_update_complete(prop_diff)) self.assertTrue(self.l7policy._update_called) self.neutron_client.update_lbaas_l7policy.assert_called_with( '1234', {'l7policy': prop_diff}) self.assertFalse(self.l7policy.check_update_complete(prop_diff)) self.assertTrue(self.l7policy.check_update_complete(prop_diff)) def test_update_redirect_pool_prop_name(self): self._create_stack() self.l7policy.resource_id_set('1234') self.neutron_client.update_lbaas_l7policy.side_effect = [ exceptions.StateInvalidClient, None] unresolved_diff = { 'redirect_url': None, 'action': 'REDIRECT_TO_POOL', 'redirect_pool': 'UNRESOLVED_POOL' } resolved_diff = { 'redirect_url': None, 'action': 'REDIRECT_TO_POOL', 'redirect_pool_id': '123' } self.l7policy.handle_update(None, None, unresolved_diff) self.assertFalse(self.l7policy.check_update_complete(resolved_diff)) self.assertFalse(self.l7policy._update_called) self.neutron_client.update_lbaas_l7policy.assert_called_with( '1234', {'l7policy': resolved_diff}) self.assertFalse(self.l7policy.check_update_complete(resolved_diff)) self.assertTrue(self.l7policy._update_called) self.neutron_client.update_lbaas_l7policy.assert_called_with( '1234', {'l7policy': resolved_diff}) self.assertFalse(self.l7policy.check_update_complete(resolved_diff)) 
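# --- Illustrative aside, not part of the original test suite ---
# The assertFalse/assertTrue ladders above walk check_*_complete through the
# mocked load balancer status sequence (PENDING_UPDATE, PENDING_UPDATE,
# ACTIVE) and through one StateInvalidClient retry of the API call: each
# call to the checker consumes one queued show_loadbalancer response, and
# the checker only returns True once the status reaches ACTIVE. A
# self-contained sketch of that polling contract with hypothetical names:
import mock


def _demo_check_complete_polling():
    client = mock.Mock()
    client.show_loadbalancer.side_effect = [
        {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
        {'loadbalancer': {'provisioning_status': 'ACTIVE'}},
    ]

    def check_update_complete():
        lb = client.show_loadbalancer('1234')['loadbalancer']
        return lb['provisioning_status'] == 'ACTIVE'

    assert not check_update_complete()    # first poll: still pending
    assert check_update_complete()        # second poll: converged
# --- end aside ---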
self.assertTrue(self.l7policy.check_update_complete(resolved_diff)) def test_delete(self): self._create_stack() self.l7policy.resource_id_set('1234') self.neutron_client.delete_lbaas_l7policy.side_effect = [ exceptions.StateInvalidClient, None] self.l7policy.handle_delete() self.assertFalse(self.l7policy.check_delete_complete(None)) self.assertFalse(self.l7policy._delete_called) self.assertFalse(self.l7policy.check_delete_complete(None)) self.assertTrue(self.l7policy._delete_called) self.neutron_client.delete_lbaas_l7policy.assert_called_with('1234') self.assertFalse(self.l7policy.check_delete_complete(None)) self.assertTrue(self.l7policy.check_delete_complete(None)) def test_delete_already_gone(self): self._create_stack() self.l7policy.resource_id_set('1234') self.neutron_client.delete_lbaas_l7policy.side_effect = ( exceptions.NotFound) self.l7policy.handle_delete() self.assertTrue(self.l7policy.check_delete_complete(None)) def test_delete_failed(self): self._create_stack() self.l7policy.resource_id_set('1234') self.neutron_client.delete_lbaas_l7policy.side_effect = ( exceptions.Unauthorized) self.l7policy.handle_delete() self.assertRaises(exceptions.Unauthorized, self.l7policy.check_delete_complete, None) heat-10.0.2/heat/tests/openstack/neutron/lbaas/test_loadbalancer.py0000666000175000017500000001554613343562340025423 0ustar zuulzuul00000000000000# # Copyright 2015 IBM Corp. # # All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from neutronclient.common import exceptions from heat.common import exception from heat.common import template_format from heat.engine.resources.openstack.neutron.lbaas import loadbalancer from heat.tests import common from heat.tests.openstack.neutron import inline_templates from heat.tests import utils class LoadBalancerTest(common.HeatTestCase): def test_resource_mapping(self): mapping = loadbalancer.resource_mapping() self.assertEqual(loadbalancer.LoadBalancer, mapping['OS::Neutron::LBaaS::LoadBalancer']) @mock.patch('heat.engine.clients.os.neutron.' 
'NeutronClientPlugin.has_extension', return_value=True) def _create_stack(self, ext_func, tmpl=inline_templates.LB_TEMPLATE): self.t = template_format.parse(tmpl) self.stack = utils.parse_stack(self.t) self.lb = self.stack['lb'] self.neutron_client = mock.MagicMock() self.lb.client = mock.MagicMock() self.lb.client.return_value = self.neutron_client self.lb.client_plugin().find_resourceid_by_name_or_id = mock.MagicMock( return_value='123') self.lb.client_plugin().client = mock.MagicMock( return_value=self.neutron_client) self.lb.translate_properties(self.lb.properties) self.lb.resource_id_set('1234') def test_create(self): self._create_stack() expected = { 'loadbalancer': { 'name': 'my_lb', 'description': 'my loadbalancer', 'vip_address': '10.0.0.4', 'vip_subnet_id': '123', 'provider': 'octavia', 'tenant_id': '1234', 'admin_state_up': True, } } self.lb.handle_create() self.neutron_client.create_loadbalancer.assert_called_with(expected) def test_check_create_complete(self): self._create_stack() self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, {'loadbalancer': {'provisioning_status': 'PENDING_CREATE'}}, {'loadbalancer': {'provisioning_status': 'ERROR'}}, ] self.assertTrue(self.lb.check_create_complete(None)) self.assertFalse(self.lb.check_create_complete(None)) self.assertRaises(exception.ResourceInError, self.lb.check_create_complete, None) def test_show_resource(self): self._create_stack() self.neutron_client.show_loadbalancer.return_value = { 'loadbalancer': {'id': '1234'} } self.assertEqual({'id': '1234'}, self.lb._show_resource()) self.neutron_client.show_loadbalancer.assert_called_with('1234') def test_update(self): self._create_stack() prop_diff = { 'name': 'lb', 'description': 'a loadbalancer', 'admin_state_up': False, } prop_diff = self.lb.handle_update(None, None, prop_diff) self.neutron_client.update_loadbalancer.assert_called_once_with( '1234', {'loadbalancer': prop_diff}) def test_update_complete(self): self._create_stack() prop_diff = { 'name': 'lb', 'description': 'a loadbalancer', 'admin_state_up': False, } self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, ] self.lb.handle_update(None, None, prop_diff) self.assertTrue(self.lb.check_update_complete(prop_diff)) self.assertFalse(self.lb.check_update_complete(prop_diff)) self.assertTrue(self.lb.check_update_complete({})) def test_delete_active(self): self._create_stack() self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, exceptions.NotFound ] self.lb.handle_delete() self.assertFalse(self.lb.check_delete_complete(None)) self.assertTrue(self.lb.check_delete_complete(None)) self.neutron_client.delete_loadbalancer.assert_called_with('1234') self.assertEqual(2, self.neutron_client.show_loadbalancer.call_count) def test_delete_pending(self): self._create_stack() self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, exceptions.NotFound ] self.lb.handle_delete() self.assertFalse(self.lb.check_delete_complete(None)) self.assertFalse(self.lb.check_delete_complete(None)) self.assertTrue(self.lb.check_delete_complete(None)) self.neutron_client.delete_loadbalancer.assert_called_with('1234') self.assertEqual(3, self.neutron_client.show_loadbalancer.call_count) def test_delete_error(self): 
self._create_stack() self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'ERROR'}}, exceptions.NotFound ] self.lb.handle_delete() self.assertFalse(self.lb.check_delete_complete(None)) self.assertTrue(self.lb.check_delete_complete(None)) self.neutron_client.delete_loadbalancer.assert_called_with('1234') self.assertEqual(2, self.neutron_client.show_loadbalancer.call_count) def test_delete_already_gone(self): self._create_stack() self.neutron_client.show_loadbalancer.side_effect = ( exceptions.NotFound) self.lb.handle_delete() self.assertTrue(self.lb.check_delete_complete(None)) self.assertEqual(1, self.neutron_client.show_loadbalancer.call_count) def test_delete_failed(self): self._create_stack() self.neutron_client.show_loadbalancer.return_value = { 'loadbalancer': {'provisioning_status': 'ACTIVE'}} self.neutron_client.delete_loadbalancer.side_effect = ( exceptions.Unauthorized) self.lb.handle_delete() self.assertRaises(exceptions.Unauthorized, self.lb.check_delete_complete, None) heat-10.0.2/heat/tests/openstack/neutron/lbaas/test_l7rule.py0000666000175000017500000001606113343562340024217 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import yaml from neutronclient.common import exceptions from heat.common import exception from heat.common.i18n import _ from heat.common import template_format from heat.engine.resources.openstack.neutron.lbaas import l7rule from heat.tests import common from heat.tests.openstack.neutron import inline_templates from heat.tests import utils class L7RuleTest(common.HeatTestCase): def test_resource_mapping(self): mapping = l7rule.resource_mapping() self.assertEqual(mapping['OS::Neutron::LBaaS::L7Rule'], l7rule.L7Rule) @mock.patch('heat.engine.clients.os.neutron.' 'NeutronClientPlugin.has_extension', return_value=True) def _create_stack(self, ext_func, tmpl=inline_templates.L7RULE_TEMPLATE): self.t = template_format.parse(tmpl) self.stack = utils.parse_stack(self.t) self.l7rule = self.stack['l7rule'] self.neutron_client = mock.MagicMock() self.l7rule.client = mock.MagicMock(return_value=self.neutron_client) self.l7rule.client_plugin().find_resourceid_by_name_or_id = ( mock.MagicMock(return_value='123')) self.l7rule.client_plugin().client = mock.MagicMock( return_value=self.neutron_client) self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, ] def test_validate_when_key_required(self): tmpl = yaml.load(inline_templates.L7RULE_TEMPLATE) props = tmpl['resources']['l7rule']['properties'] del props['key'] self._create_stack(tmpl=yaml.dump(tmpl)) msg = _('Property key is missing. This property should be ' 'specified for rules of HEADER and COOKIE types.') with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.' 
'has_extension', return_value=True): self.assertRaisesRegex(exception.StackValidationFailed, msg, self.l7rule.validate) def test_create(self): self._create_stack() self.neutron_client.create_lbaas_l7rule.side_effect = [ exceptions.StateInvalidClient, {'rule': {'id': '1234'}} ] expected = ( '123', { 'rule': { 'admin_state_up': True, 'invert': False, 'type': u'HEADER', 'compare_type': u'ENDS_WITH', 'key': u'test_key', 'value': u'test_value', 'invert': False } } ) props = self.l7rule.handle_create() self.assertFalse(self.l7rule.check_create_complete(props)) self.neutron_client.create_lbaas_l7rule.assert_called_with(*expected) self.assertFalse(self.l7rule.check_create_complete(props)) self.neutron_client.create_lbaas_l7rule.assert_called_with(*expected) self.assertFalse(self.l7rule.check_create_complete(props)) self.assertTrue(self.l7rule.check_create_complete(props)) def test_create_missing_properties(self): self.patchobject(l7rule.L7Rule, 'is_service_available', return_value=(True, None)) for prop in ('l7policy', 'type', 'compare_type', 'value'): tmpl = yaml.load(inline_templates.L7RULE_TEMPLATE) del tmpl['resources']['l7rule']['properties'][prop] self._create_stack(tmpl=yaml.dump(tmpl)) self.assertRaises(exception.StackValidationFailed, self.l7rule.validate) def test_show_resource(self): self._create_stack() self.l7rule.resource_id_set('1234') self.neutron_client.show_lbaas_l7rule.return_value = { 'rule': {'id': '1234'} } self.assertEqual({'id': '1234'}, self.l7rule._show_resource()) self.neutron_client.show_lbaas_l7rule.assert_called_with('1234', '123') def test_update(self): self._create_stack() self.l7rule.resource_id_set('1234') self.neutron_client.update_lbaas_l7rule.side_effect = [ exceptions.StateInvalidClient, None] prop_diff = { 'admin_state_up': False, 'name': 'your_l7policy', 'redirect_url': 'http://www.google.com' } prop_diff = self.l7rule.handle_update(None, None, prop_diff) self.assertFalse(self.l7rule.check_update_complete(prop_diff)) self.assertFalse(self.l7rule._update_called) self.neutron_client.update_lbaas_l7rule.assert_called_with( '1234', '123', {'rule': prop_diff}) self.assertFalse(self.l7rule.check_update_complete(prop_diff)) self.assertTrue(self.l7rule._update_called) self.neutron_client.update_lbaas_l7rule.assert_called_with( '1234', '123', {'rule': prop_diff}) self.assertFalse(self.l7rule.check_update_complete(prop_diff)) self.assertTrue(self.l7rule.check_update_complete(prop_diff)) def test_delete(self): self._create_stack() self.l7rule.resource_id_set('1234') self.neutron_client.delete_lbaas_l7rule.side_effect = [ exceptions.StateInvalidClient, None] self.l7rule.handle_delete() self.assertFalse(self.l7rule.check_delete_complete(None)) self.assertFalse(self.l7rule._delete_called) self.assertFalse(self.l7rule.check_delete_complete(None)) self.assertTrue(self.l7rule._delete_called) self.neutron_client.delete_lbaas_l7rule.assert_called_with( '1234', '123') self.assertFalse(self.l7rule.check_delete_complete(None)) self.assertTrue(self.l7rule.check_delete_complete(None)) def test_delete_already_gone(self): self._create_stack() self.l7rule.resource_id_set('1234') self.neutron_client.delete_lbaas_l7rule.side_effect = ( exceptions.NotFound) self.l7rule.handle_delete() self.assertTrue(self.l7rule.check_delete_complete(None)) def test_delete_failed(self): self._create_stack() self.l7rule.resource_id_set('1234') self.neutron_client.delete_lbaas_l7rule.side_effect = ( exceptions.Unauthorized) self.l7rule.handle_delete() self.assertRaises(exceptions.Unauthorized, 
self.l7rule.check_delete_complete, None) heat-10.0.2/heat/tests/openstack/neutron/lbaas/test_pool.py0000666000175000017500000002073313343562340023757 0ustar zuulzuul00000000000000# # Copyright 2015 IBM Corp. # # All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import yaml from neutronclient.common import exceptions from heat.common import exception from heat.common.i18n import _ from heat.common import template_format from heat.engine.resources.openstack.neutron.lbaas import pool from heat.tests import common from heat.tests.openstack.neutron import inline_templates from heat.tests import utils class PoolTest(common.HeatTestCase): def test_resource_mapping(self): mapping = pool.resource_mapping() self.assertEqual(pool.Pool, mapping['OS::Neutron::LBaaS::Pool']) @mock.patch('heat.engine.clients.os.neutron.' 'NeutronClientPlugin.has_extension', return_value=True) def _create_stack(self, ext_func, tmpl=inline_templates.POOL_TEMPLATE): self.t = template_format.parse(tmpl) self.stack = utils.parse_stack(self.t) self.pool = self.stack['pool'] self.neutron_client = mock.MagicMock() self.pool.client = mock.MagicMock(return_value=self.neutron_client) self.pool.client_plugin().find_resourceid_by_name_or_id = ( mock.MagicMock(return_value='123')) self.pool.client_plugin().client = mock.MagicMock( return_value=self.neutron_client) def test_validate_no_cookie_name(self): tmpl = yaml.load(inline_templates.POOL_TEMPLATE) sp = tmpl['resources']['pool']['properties']['session_persistence'] sp['type'] = 'APP_COOKIE' self._create_stack(tmpl=yaml.dump(tmpl)) msg = _('Property cookie_name is required when ' 'session_persistence type is set to APP_COOKIE.') with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.' 'has_extension', return_value=True): self.assertRaisesRegex(exception.StackValidationFailed, msg, self.pool.validate) def test_validate_source_ip_cookie_name(self): tmpl = yaml.load(inline_templates.POOL_TEMPLATE) sp = tmpl['resources']['pool']['properties']['session_persistence'] sp['type'] = 'SOURCE_IP' sp['cookie_name'] = 'cookie' self._create_stack(tmpl=yaml.dump(tmpl)) msg = _('Property cookie_name must NOT be specified when ' 'session_persistence type is set to SOURCE_IP.') with mock.patch('heat.engine.clients.os.neutron.NeutronClientPlugin.' 
'has_extension', return_value=True): self.assertRaisesRegex(exception.StackValidationFailed, msg, self.pool.validate) def test_create(self): self._create_stack() self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, ] self.neutron_client.create_lbaas_pool.side_effect = [ exceptions.StateInvalidClient, {'pool': {'id': '1234'}} ] expected = { 'pool': { 'name': 'my_pool', 'description': 'my pool', 'session_persistence': { 'type': 'HTTP_COOKIE' }, 'lb_algorithm': 'ROUND_ROBIN', 'listener_id': '123', 'loadbalancer_id': 'my_lb', 'protocol': 'HTTP', 'admin_state_up': True } } props = self.pool.handle_create() self.assertFalse(self.pool.check_create_complete(props)) self.neutron_client.create_lbaas_pool.assert_called_with(expected) self.assertFalse(self.pool.check_create_complete(props)) self.neutron_client.create_lbaas_pool.assert_called_with(expected) self.assertFalse(self.pool.check_create_complete(props)) self.assertTrue(self.pool.check_create_complete(props)) def test_create_missing_properties(self): self.patchobject(pool.Pool, 'is_service_available', return_value=(True, None)) for prop in ('lb_algorithm', 'listener', 'protocol'): tmpl = yaml.load(inline_templates.POOL_TEMPLATE) del tmpl['resources']['pool']['properties']['loadbalancer'] del tmpl['resources']['pool']['properties'][prop] self._create_stack(tmpl=yaml.dump(tmpl)) if prop == 'listener': self.assertRaises(exception.PropertyUnspecifiedError, self.pool.validate) else: self.assertRaises(exception.StackValidationFailed, self.pool.validate) def test_show_resource(self): self._create_stack() self.pool.resource_id_set('1234') self.neutron_client.show_lbaas_pool.return_value = { 'pool': {'id': '1234'} } self.assertEqual(self.pool._show_resource(), {'id': '1234'}) self.neutron_client.show_lbaas_pool.assert_called_with('1234') def test_update(self): self._create_stack() self.pool.resource_id_set('1234') self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, ] self.neutron_client.update_lbaas_pool.side_effect = [ exceptions.StateInvalidClient, None] prop_diff = { 'admin_state_up': False, 'name': 'your_pool', 'lb_algorithm': 'SOURCE_IP' } prop_diff = self.pool.handle_update(None, None, prop_diff) self.assertFalse(self.pool.check_update_complete(prop_diff)) self.assertFalse(self.pool._update_called) self.neutron_client.update_lbaas_pool.assert_called_with( '1234', {'pool': prop_diff}) self.assertFalse(self.pool.check_update_complete(prop_diff)) self.assertTrue(self.pool._update_called) self.neutron_client.update_lbaas_pool.assert_called_with( '1234', {'pool': prop_diff}) self.assertFalse(self.pool.check_update_complete(prop_diff)) self.assertTrue(self.pool.check_update_complete(prop_diff)) def test_delete(self): self._create_stack() self.pool.resource_id_set('1234') self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, ] self.neutron_client.delete_lbaas_pool.side_effect = [ exceptions.StateInvalidClient, None] self.pool.handle_delete() self.assertFalse(self.pool.check_delete_complete(None)) 
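# --- Illustrative aside, not part of the original test suite ---
# The session_persistence tests above pin the exact user-facing error text
# with assertRaisesRegex rather than only the exception type, so a reworded
# validation message fails the test. A minimal sketch of that style against
# a hypothetical stand-in validator:
import unittest


class _DemoValidationMessage(unittest.TestCase):

    @staticmethod
    def _validate(session_persistence):
        if (session_persistence.get('type') == 'APP_COOKIE'
                and not session_persistence.get('cookie_name')):
            raise ValueError('Property cookie_name is required when '
                             'session_persistence type is set to APP_COOKIE.')

    def test_message_is_pinned(self):
        self.assertRaisesRegex(ValueError, 'cookie_name is required',
                               self._validate, {'type': 'APP_COOKIE'})
# --- end aside ---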
self.assertFalse(self.pool._delete_called) self.assertFalse(self.pool.check_delete_complete(None)) self.assertTrue(self.pool._delete_called) self.neutron_client.delete_lbaas_pool.assert_called_with('1234') self.assertFalse(self.pool.check_delete_complete(None)) self.assertTrue(self.pool.check_delete_complete(None)) def test_delete_already_gone(self): self._create_stack() self.pool.resource_id_set('1234') self.neutron_client.delete_lbaas_pool.side_effect = ( exceptions.NotFound) self.pool.handle_delete() self.assertTrue(self.pool.check_delete_complete(None)) def test_delete_failed(self): self._create_stack() self.pool.resource_id_set('1234') self.neutron_client.delete_lbaas_pool.side_effect = ( exceptions.Unauthorized) self.pool.handle_delete() self.assertRaises(exceptions.Unauthorized, self.pool.check_delete_complete, None) heat-10.0.2/heat/tests/openstack/neutron/lbaas/__init__.py0000666000175000017500000000000013343562340023467 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/openstack/neutron/lbaas/test_pool_member.py0000666000175000017500000001524113343562340025304 0ustar zuulzuul00000000000000# # Copyright 2015 IBM Corp. # # All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from neutronclient.common import exceptions from heat.common import template_format from heat.engine.resources.openstack.neutron.lbaas import pool_member from heat.tests import common from heat.tests.openstack.neutron import inline_templates from heat.tests import utils class PoolMemberTest(common.HeatTestCase): def test_resource_mapping(self): mapping = pool_member.resource_mapping() self.assertEqual(pool_member.PoolMember, mapping['OS::Neutron::LBaaS::PoolMember']) @mock.patch('heat.engine.clients.os.neutron.' 
'NeutronClientPlugin.has_extension', return_value=True) def _create_stack(self, ext_func, tmpl=inline_templates.MEMBER_TEMPLATE): self.t = template_format.parse(tmpl) self.stack = utils.parse_stack(self.t) self.member = self.stack['member'] self.neutron_client = mock.MagicMock() self.member.client = mock.MagicMock(return_value=self.neutron_client) self.member.client_plugin().find_resourceid_by_name_or_id = ( mock.MagicMock(return_value='123')) self.member.client_plugin().client = mock.MagicMock( return_value=self.neutron_client) self.member.translate_properties(self.member.properties) def test_create(self): self._create_stack() self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, ] self.neutron_client.create_lbaas_member.side_effect = [ exceptions.StateInvalidClient, {'member': {'id': '1234'}} ] expected = { 'member': { 'address': '1.2.3.4', 'protocol_port': 80, 'weight': 1, 'subnet_id': '123', 'admin_state_up': True, } } props = self.member.handle_create() self.assertFalse(self.member.check_create_complete(props)) self.neutron_client.create_lbaas_member.assert_called_with('123', expected) self.assertFalse(self.member.check_create_complete(props)) self.neutron_client.create_lbaas_member.assert_called_with('123', expected) self.assertFalse(self.member.check_create_complete(props)) self.assertTrue(self.member.check_create_complete(props)) def test_show_resource(self): self._create_stack() self.member.resource_id_set('1234') self.neutron_client.show_lbaas_member.return_value = { 'member': {'id': '1234'} } self.assertEqual(self.member._show_resource(), {'id': '1234'}) self.neutron_client.show_lbaas_member.assert_called_with('1234', '123') def test_update(self): self._create_stack() self.member.resource_id_set('1234') self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, ] self.neutron_client.update_lbaas_member.side_effect = [ exceptions.StateInvalidClient, None] prop_diff = { 'admin_state_up': False, 'weight': 2, } prop_diff = self.member.handle_update(None, None, prop_diff) self.assertFalse(self.member.check_update_complete(prop_diff)) self.assertFalse(self.member._update_called) self.neutron_client.update_lbaas_member.assert_called_with( '1234', '123', {'member': prop_diff}) self.assertFalse(self.member.check_update_complete(prop_diff)) self.assertTrue(self.member._update_called) self.neutron_client.update_lbaas_member.assert_called_with( '1234', '123', {'member': prop_diff}) self.assertFalse(self.member.check_update_complete(prop_diff)) self.assertTrue(self.member.check_update_complete(prop_diff)) def test_delete(self): self._create_stack() self.member.resource_id_set('1234') self.neutron_client.show_loadbalancer.side_effect = [ {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}}, {'loadbalancer': {'provisioning_status': 'ACTIVE'}}, ] self.neutron_client.delete_lbaas_member.side_effect = [ exceptions.StateInvalidClient, None] self.member.handle_delete() self.assertFalse(self.member.check_delete_complete(None)) self.assertFalse(self.member._delete_called) self.neutron_client.delete_lbaas_member.assert_called_with('1234', '123') 
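# --- Illustrative aside, not part of the original test suite ---
# The *_already_gone tests above and below encode the idempotent-delete
# contract: a NotFound from the client means the resource is already gone,
# so deletion is reported complete instead of failing. A sketch of that
# contract with hypothetical names:
import mock


class FakeNotFound(Exception):
    """Stands in for neutronclient's exceptions.NotFound."""


def _demo_idempotent_delete():
    client = mock.Mock()
    client.delete_lbaas_member.side_effect = FakeNotFound()

    def check_delete_complete():
        try:
            client.delete_lbaas_member('1234', '123')  # member id, pool id
        except FakeNotFound:
            return True           # already gone: treat as success
        return False              # delete issued: keep polling

    assert check_delete_complete()
# --- end aside ---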
        self.assertFalse(self.member.check_delete_complete(None))
        self.assertTrue(self.member._delete_called)
        self.neutron_client.delete_lbaas_member.assert_called_with('1234',
                                                                   '123')
        self.assertFalse(self.member.check_delete_complete(None))
        self.assertTrue(self.member.check_delete_complete(None))

    def test_delete_already_gone(self):
        self._create_stack()
        self.member.resource_id_set('1234')
        self.neutron_client.delete_lbaas_member.side_effect = (
            exceptions.NotFound)

        self.member.handle_delete()

        self.assertTrue(self.member.check_delete_complete(None))

    def test_delete_failed(self):
        self._create_stack()
        self.member.resource_id_set('1234')
        self.neutron_client.delete_lbaas_member.side_effect = (
            exceptions.Unauthorized)

        self.member.handle_delete()

        self.assertRaises(exceptions.Unauthorized,
                          self.member.check_delete_complete, None)
heat-10.0.2/heat/tests/openstack/neutron/lbaas/test_health_monitor.py0000666000175000017500000001556513343562340026021 0ustar zuulzuul00000000000000#
#    Copyright 2015 IBM Corp.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
from neutronclient.common import exceptions

from heat.common import template_format
from heat.engine.resources.openstack.neutron.lbaas import health_monitor
from heat.tests import common
from heat.tests.openstack.neutron import inline_templates
from heat.tests import utils


class HealthMonitorTest(common.HeatTestCase):

    @mock.patch('heat.engine.clients.os.neutron.'
                'NeutronClientPlugin.has_extension', return_value=True)
    def _create_stack(self, ext_func,
                      tmpl=inline_templates.MONITOR_TEMPLATE):
        self.t = template_format.parse(tmpl)
        self.stack = utils.parse_stack(self.t)
        self.healthmonitor = self.stack['monitor']

        self.neutron_client = mock.MagicMock()
        self.healthmonitor.client = mock.MagicMock(
            return_value=self.neutron_client)
        self.healthmonitor.client_plugin().find_resourceid_by_name_or_id = (
            mock.MagicMock(return_value='123'))
        self.healthmonitor.client_plugin().client = mock.MagicMock(
            return_value=self.neutron_client)

    def test_resource_mapping(self):
        mapping = health_monitor.resource_mapping()
        self.assertEqual(health_monitor.HealthMonitor,
                         mapping['OS::Neutron::LBaaS::HealthMonitor'])

    def test_create(self):
        self._create_stack()
        self.neutron_client.show_loadbalancer.side_effect = [
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'ACTIVE'}},
        ]
        self.neutron_client.create_lbaas_healthmonitor.side_effect = [
            exceptions.StateInvalidClient,
            {'healthmonitor': {'id': '1234'}}
        ]
        expected = {
            'healthmonitor': {
                'admin_state_up': True,
                'delay': 3,
                'expected_codes': '200-202',
                'http_method': 'HEAD',
                'max_retries': 5,
                'pool_id': '123',
                'timeout': 10,
                'type': 'HTTP',
                'url_path': '/health'
            }
        }

        props = self.healthmonitor.handle_create()

        self.assertFalse(self.healthmonitor.check_create_complete(props))
        self.neutron_client.create_lbaas_healthmonitor.assert_called_with(
            expected)
        self.assertFalse(self.healthmonitor.check_create_complete(props))
        self.neutron_client.create_lbaas_healthmonitor.assert_called_with(
            expected)
        self.assertFalse(self.healthmonitor.check_create_complete(props))
        self.assertTrue(self.healthmonitor.check_create_complete(props))

    def test_show_resource(self):
        self._create_stack()
        self.healthmonitor.resource_id_set('1234')

        self.assertTrue(self.healthmonitor._show_resource())
        self.neutron_client.show_lbaas_healthmonitor.assert_called_with(
            '1234')

    def test_update(self):
        self._create_stack()
        self.healthmonitor.resource_id_set('1234')
        self.neutron_client.show_loadbalancer.side_effect = [
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'ACTIVE'}},
        ]
        self.neutron_client.update_lbaas_healthmonitor.side_effect = [
            exceptions.StateInvalidClient, None]
        prop_diff = {
            'admin_state_up': False,
        }

        prop_diff = self.healthmonitor.handle_update(None, None, prop_diff)

        self.assertFalse(self.healthmonitor.check_update_complete(prop_diff))
        self.assertFalse(self.healthmonitor._update_called)
        self.neutron_client.update_lbaas_healthmonitor.assert_called_with(
            '1234', {'healthmonitor': prop_diff})
        self.assertFalse(self.healthmonitor.check_update_complete(prop_diff))
        self.assertTrue(self.healthmonitor._update_called)
        self.neutron_client.update_lbaas_healthmonitor.assert_called_with(
            '1234', {'healthmonitor': prop_diff})
        self.assertFalse(self.healthmonitor.check_update_complete(prop_diff))
        self.assertTrue(self.healthmonitor.check_update_complete(prop_diff))

    def test_delete(self):
        self._create_stack()
        self.healthmonitor.resource_id_set('1234')
        self.neutron_client.show_loadbalancer.side_effect = [
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'ACTIVE'}},
        ]
        self.neutron_client.delete_lbaas_healthmonitor.side_effect = [
            exceptions.StateInvalidClient, None]

        self.healthmonitor.handle_delete()

        self.assertFalse(self.healthmonitor.check_delete_complete(None))
        self.assertFalse(self.healthmonitor._delete_called)
        self.neutron_client.delete_lbaas_healthmonitor.assert_called_with(
            '1234')
        self.assertFalse(self.healthmonitor.check_delete_complete(None))
        self.assertTrue(self.healthmonitor._delete_called)
        self.neutron_client.delete_lbaas_healthmonitor.assert_called_with(
            '1234')
        self.assertFalse(self.healthmonitor.check_delete_complete(None))
        self.assertTrue(self.healthmonitor.check_delete_complete(None))

    def test_delete_already_gone(self):
        self._create_stack()
        self.healthmonitor.resource_id_set('1234')
        self.neutron_client.delete_lbaas_healthmonitor.side_effect = (
            exceptions.NotFound)

        self.healthmonitor.handle_delete()

        self.assertTrue(self.healthmonitor.check_delete_complete(None))
        self.neutron_client.delete_lbaas_healthmonitor.assert_called_with(
            '1234')

    def test_delete_failed(self):
        self._create_stack()
        self.healthmonitor.resource_id_set('1234')
        self.neutron_client.delete_lbaas_healthmonitor.side_effect = (
            exceptions.Unauthorized)

        self.healthmonitor.handle_delete()

        self.assertRaises(exceptions.Unauthorized,
                          self.healthmonitor.check_delete_complete, None)
        self.neutron_client.delete_lbaas_healthmonitor.assert_called_with(
            '1234')
heat-10.0.2/heat/tests/openstack/neutron/lbaas/test_listener.py0000666000175000017500000002002313343562340024623 0ustar zuulzuul00000000000000#
#    Copyright 2015 IBM Corp.
#
#    All Rights Reserved.
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
import yaml

from neutronclient.common import exceptions

from heat.common import exception
from heat.common import template_format
from heat.engine.resources.openstack.neutron.lbaas import listener
from heat.tests import common
from heat.tests.openstack.neutron import inline_templates
from heat.tests import utils


class ListenerTest(common.HeatTestCase):

    def test_resource_mapping(self):
        mapping = listener.resource_mapping()
        self.assertEqual(listener.Listener,
                         mapping['OS::Neutron::LBaaS::Listener'])

    @mock.patch('heat.engine.clients.os.neutron.'
                'NeutronClientPlugin.has_extension', return_value=True)
    def _create_stack(self, ext_func,
                      tmpl=inline_templates.LISTENER_TEMPLATE):
        self.t = template_format.parse(tmpl)
        self.stack = utils.parse_stack(self.t)
        self.listener = self.stack['listener']

        self.neutron_client = mock.MagicMock()
        self.listener.client = mock.MagicMock(
            return_value=self.neutron_client)
        self.listener.client_plugin().find_resourceid_by_name_or_id = (
            mock.MagicMock(return_value='123'))
        self.listener.client_plugin().client = mock.MagicMock(
            return_value=self.neutron_client)

    def test_validate_terminated_https(self):
        self.patchobject(listener.Listener, 'is_service_available',
                         return_value=(True, None))
        tmpl = yaml.load(inline_templates.LISTENER_TEMPLATE)
        props = tmpl['resources']['listener']['properties']
        props['protocol'] = 'TERMINATED_HTTPS'
        del props['default_tls_container_ref']
        self._create_stack(tmpl=yaml.dump(tmpl))

        self.assertRaises(exception.StackValidationFailed,
                          self.listener.validate)

    def test_create(self):
        self._create_stack()
        self.neutron_client.show_loadbalancer.side_effect = [
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'ACTIVE'}},
        ]
        self.neutron_client.create_listener.side_effect = [
            exceptions.StateInvalidClient,
            {'listener': {'id': '1234'}}
        ]
        expected = {
            'listener': {
                'protocol_port': 80,
                'protocol': 'TCP',
                'loadbalancer_id': '123',
                'default_pool_id': 'my_pool',
                'name': 'my_listener',
                'description': 'my listener',
                'admin_state_up': True,
                'default_tls_container_ref': 'ref',
                'sni_container_refs': ['ref1', 'ref2'],
                'connection_limit': -1,
                'tenant_id': '1234',
            }
        }

        props = self.listener.handle_create()

        self.assertFalse(self.listener.check_create_complete(props))
        self.neutron_client.create_listener.assert_called_with(expected)
        self.assertFalse(self.listener.check_create_complete(props))
        self.neutron_client.create_listener.assert_called_with(expected)
        self.assertFalse(self.listener.check_create_complete(props))
        self.assertTrue(self.listener.check_create_complete(props))

    @mock.patch('heat.engine.clients.os.neutron.'
                'NeutronClientPlugin.has_extension', return_value=True)
    def test_create_missing_properties(self, ext_func):
        for prop in ('protocol', 'protocol_port', 'loadbalancer'):
            tmpl = yaml.load(inline_templates.LISTENER_TEMPLATE)
            del tmpl['resources']['listener']['properties'][prop]
            del tmpl['resources']['listener']['properties']['default_pool']
            self._create_stack(tmpl=yaml.dump(tmpl))

            if prop == 'loadbalancer':
                self.assertRaises(exception.PropertyUnspecifiedError,
                                  self.listener.validate)
            else:
                self.assertRaises(exception.StackValidationFailed,
                                  self.listener.validate)

    def test_show_resource(self):
        self._create_stack()
        self.listener.resource_id_set('1234')
        self.neutron_client.show_listener.return_value = {
            'listener': {'id': '1234'}
        }

        self.assertEqual({'id': '1234'}, self.listener._show_resource())
        self.neutron_client.show_listener.assert_called_with('1234')

    def test_update(self):
        self._create_stack()
        self.listener.resource_id_set('1234')
        self.neutron_client.show_loadbalancer.side_effect = [
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'ACTIVE'}},
        ]
        self.neutron_client.update_listener.side_effect = [
            exceptions.StateInvalidClient, None]
        prop_diff = {
            'admin_state_up': False,
            'name': 'your_listener',
        }

        prop_diff = self.listener.handle_update(self.listener.t,
                                                None, prop_diff)

        self.assertFalse(self.listener.check_update_complete(prop_diff))
        self.assertFalse(self.listener._update_called)
        self.neutron_client.update_listener.assert_called_with(
            '1234', {'listener': prop_diff})
        self.assertFalse(self.listener.check_update_complete(prop_diff))
        self.assertTrue(self.listener._update_called)
        self.neutron_client.update_listener.assert_called_with(
            '1234', {'listener': prop_diff})
        self.assertFalse(self.listener.check_update_complete(prop_diff))
        self.assertTrue(self.listener.check_update_complete(prop_diff))

    def test_delete(self):
        self._create_stack()
        self.listener.resource_id_set('1234')
        self.neutron_client.show_loadbalancer.side_effect = [
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'PENDING_UPDATE'}},
            {'loadbalancer': {'provisioning_status': 'ACTIVE'}},
        ]
        self.neutron_client.delete_listener.side_effect = [
            exceptions.StateInvalidClient, None]

        self.listener.handle_delete()

        self.assertFalse(self.listener.check_delete_complete(None))
        self.assertFalse(self.listener._delete_called)
        self.neutron_client.delete_listener.assert_called_with('1234')
        self.assertFalse(self.listener.check_delete_complete(None))
        self.assertTrue(self.listener._delete_called)
        self.neutron_client.delete_listener.assert_called_with('1234')
        self.assertFalse(self.listener.check_delete_complete(None))
        self.assertTrue(self.listener.check_delete_complete(None))

    def test_delete_already_gone(self):
        self._create_stack()
        self.listener.resource_id_set('1234')
        self.neutron_client.delete_listener.side_effect = (
            exceptions.NotFound)

        self.listener.handle_delete()

        self.assertTrue(self.listener.check_delete_complete(None))

    def test_delete_failed(self):
        self._create_stack()
        self.listener.resource_id_set('1234')
        self.neutron_client.delete_listener.side_effect = (
            exceptions.Unauthorized)

        self.listener.handle_delete()

        self.assertRaises(exceptions.Unauthorized,
                          self.listener.check_delete_complete, None)
heat-10.0.2/heat/tests/test_event.py0000666000175000017500000003204613343562352017367 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
from oslo_config import cfg
import uuid

from heat.db.sqlalchemy import models
from heat.engine import event
from heat.engine import stack
from heat.engine import template
from heat.objects import event as event_object
from heat.objects import resource_properties_data as rpd_object
from heat.objects import stack as stack_object
from heat.tests import common
from heat.tests import utils

cfg.CONF.import_opt('event_purge_batch_size', 'heat.common.config')
cfg.CONF.import_opt('max_events_per_stack', 'heat.common.config')

tmpl = {
    'HeatTemplateFormatVersion': '2012-12-12',
    'Resources': {
        'EventTestResource': {
            'Type': 'ResourceWithRequiredProps',
            'Properties': {'Foo': 'goo'}
        }
    }
}

tmpl_multiple = {
    'HeatTemplateFormatVersion': '2012-12-12',
    'Resources': {
        'EventTestResource': {
            'Type': 'ResourceWithMultipleRequiredProps',
            'Properties': {'Foo1': 'zoo',
                           'Foo2': 'A0000000000',
                           'Foo3': '99999'}
        }
    }
}


class EventCommon(common.HeatTestCase):

    def _setup_stack(self, the_tmpl, encrypted=False):
        if encrypted:
            cfg.CONF.set_override('encrypt_parameters_and_properties', True)

        self.username = 'event_test_user'
        self.ctx = utils.dummy_context()
        self.m.ReplayAll()

        self.stack = stack.Stack(self.ctx, 'event_load_test_stack',
                                 template.Template(the_tmpl))
        self.stack.store()
        self.resource = self.stack['EventTestResource']
        self.resource._update_stored_properties()
        self.resource.store()
        self.addCleanup(stack_object.Stack.delete, self.ctx, self.stack.id)


class EventTest(EventCommon):

    def setUp(self):
        super(EventTest, self).setUp()
        self._setup_stack(tmpl)

    def test_store_caps_events(self):
        cfg.CONF.set_override('event_purge_batch_size', 1)
        cfg.CONF.set_override('max_events_per_stack', 1)
        self.resource.resource_id_set('resource_physical_id')

        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'alabama',
                        self.resource._rsrc_prop_data_id,
                        self.resource._stored_properties_data,
                        self.resource.name, self.resource.type())
        e.store()
        self.assertEqual(1, len(event_object.Event.get_all_by_stack(
            self.ctx, self.stack.id)))

        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'arizona',
                        self.resource._rsrc_prop_data_id,
                        self.resource._stored_properties_data,
                        self.resource.name, self.resource.type())
        e.store()
        events = event_object.Event.get_all_by_stack(self.ctx, self.stack.id)
        self.assertEqual(1, len(events))
        self.assertEqual('arizona', events[0].physical_resource_id)

    def test_store_caps_events_random_purge(self):
        cfg.CONF.set_override('event_purge_batch_size', 100)
        cfg.CONF.set_override('max_events_per_stack', 1)
        self.resource.resource_id_set('resource_physical_id')

        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'arkansas', None, None,
                        self.resource.name, self.resource.type())
        e.store()

        # purge happens
        with mock.patch("random.uniform") as mock_random_uniform:
            mock_random_uniform.return_value = 2.0 / 100 - .0001
            e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                            'Testing', 'alaska', None, None,
                            self.resource.name, self.resource.type())
            e.store()
        events = event_object.Event.get_all_by_stack(self.ctx, self.stack.id)
        self.assertEqual(1, len(events))
        self.assertEqual('alaska', events[0].physical_resource_id)

        # no purge happens
        with mock.patch("random.uniform") as mock_random_uniform:
            mock_random_uniform.return_value = 2.0 / 100 + .0001
            e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                            'Testing', 'aardvark', None, None,
                            self.resource.name, self.resource.type())
            e.store()
        events = event_object.Event.get_all_by_stack(self.ctx, self.stack.id)
        self.assertEqual(2, len(events))

    def test_store_caps_resource_props_data(self):
        cfg.CONF.set_override('event_purge_batch_size', 2)
        cfg.CONF.set_override('max_events_per_stack', 3)
        self.resource.resource_id_set('resource_physical_id')

        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'alabama',
                        self.resource._rsrc_prop_data_id,
                        self.resource._stored_properties_data,
                        self.resource.name, self.resource.type())
        e.store()
        rpd1_id = self.resource._rsrc_prop_data_id

        rpd2 = rpd_object.ResourcePropertiesData.create(
            self.ctx, {'encrypted': False, 'data': {'foo': 'bar'}})
        rpd2_id = rpd2.id
        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'arizona', rpd2_id, rpd2.data,
                        self.resource.name, self.resource.type())
        e.store()

        rpd3 = rpd_object.ResourcePropertiesData.create(
            self.ctx, {'encrypted': False, 'data': {'foo': 'bar'}})
        rpd3_id = rpd3.id
        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'arkansas', rpd3_id, rpd3.data,
                        self.resource.name, self.resource.type())
        e.store()

        rpd4 = rpd_object.ResourcePropertiesData.create(
            self.ctx, {'encrypted': False, 'data': {'foo': 'bar'}})
        rpd4_id = rpd4.id
        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'arkansas', rpd4_id, rpd4.data,
                        self.resource.name, self.resource.type())
        e.store()

        events = event_object.Event.get_all_by_stack(self.ctx, self.stack.id)
        self.assertEqual(2, len(events))
        self.assertEqual('arkansas', events[0].physical_resource_id)

        # rpd1 should still exist since that is still referred to by
        # the resource. rpd2 should have been deleted along with the
        # 2nd event.
        self.assertIsNotNone(self.ctx.session.query(
            models.ResourcePropertiesData).get(rpd1_id))
        self.assertIsNone(self.ctx.session.query(
            models.ResourcePropertiesData).get(rpd2_id))

        # We didn't purge the last two events, so we ought to have
        # kept rsrc_prop_data for both.
        self.assertIsNotNone(self.ctx.session.query(
            models.ResourcePropertiesData).get(rpd3_id))
        self.assertIsNotNone(self.ctx.session.query(
            models.ResourcePropertiesData).get(rpd4_id))

    def test_identifier(self):
        event_uuid = 'abc123yc-9f88-404d-a85b-531529456xyz'
        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'wibble',
                        self.resource._rsrc_prop_data_id,
                        self.resource._stored_properties_data,
                        self.resource.name, self.resource.type(),
                        uuid=event_uuid)
        e.store()
        expected_identifier = {
            'stack_name': self.stack.name,
            'stack_id': self.stack.id,
            'tenant': self.ctx.tenant_id,
            'path': '/resources/EventTestResource/events/%s' % str(event_uuid)
        }
        self.assertEqual(expected_identifier, e.identifier())

    def test_identifier_is_none(self):
        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'wibble', None, None,
                        self.resource.name, self.resource.type())
        self.assertIsNone(e.identifier())
        e.store()
        self.assertIsNotNone(e.identifier())

    def test_as_dict(self):
        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'wibble',
                        self.resource._rsrc_prop_data_id,
                        self.resource._stored_properties_data,
                        self.resource.name, self.resource.type())
        e.store()
        expected = {
            'id': e.uuid,
            'timestamp': e.timestamp.isoformat(),
            'type': 'os.heat.event',
            'version': '0.1',
            'payload': {'physical_resource_id': 'wibble',
                        'resource_action': 'TEST',
                        'resource_name': 'EventTestResource',
                        'resource_properties': {'Foo': 'goo'},
                        'resource_status': 'IN_PROGRESS',
                        'resource_status_reason': 'Testing',
                        'resource_type': 'ResourceWithRequiredProps',
                        'stack_id': self.stack.id,
                        'version': '0.1'}}
        self.assertEqual(expected, e.as_dict())

    def test_load_deprecated_prop_data(self):
        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'wibble',
                        self.resource._rsrc_prop_data_id,
                        self.resource._stored_properties_data,
                        self.resource.name, self.resource.type())
        e.store()

        # for test purposes, dress up the event to have the deprecated
        # properties_data field populated
        e_obj = self.ctx.session.query(models.Event).get(e.id)
        with self.ctx.session.begin():
            e_obj['resource_properties'] = {'Time': 'not enough'}
            e_obj['rsrc_prop_data'] = None

        # verify the deprecated data gets loaded
        ev = event_object.Event.get_all_by_stack(self.ctx, self.stack.id)[0]
        self.assertEqual({'Time': 'not enough'}, ev.resource_properties)

    def test_event_object_resource_properties_data(self):
        cfg.CONF.set_override('encrypt_parameters_and_properties', True)
        data = {'p1': 'hello', 'p2': 'too soon?'}
        rpd_obj = rpd_object.ResourcePropertiesData().create_or_update(
            self.ctx, data)
        e_obj = event_object.Event().create(
            self.ctx,
            {'stack_id': self.stack.id,
             'uuid': str(uuid.uuid4()),
             'rsrc_prop_data_id': rpd_obj.id})
        e_obj = event_object.Event.get_all_by_stack(utils.dummy_context(),
                                                    self.stack.id)[0]

        # properties data appears unencrypted to event object
        self.assertEqual(data, e_obj.resource_properties)


class EventEncryptedTest(EventCommon):

    def setUp(self):
        super(EventEncryptedTest, self).setUp()
        self._setup_stack(tmpl, encrypted=True)

    def test_props_encrypted(self):
        e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS',
                        'Testing', 'wibble',
                        self.resource._rsrc_prop_data_id,
                        self.resource._stored_properties_data,
                        self.resource.name, self.resource.type())
        e.store()

        # verify the resource_properties_data db data is encrypted
        e_obj = event_object.Event.get_all_by_stack(self.resource.context,
                                                    self.stack.id)[0]
        rpd_id = e_obj['rsrc_prop_data_id']
        results = self.resource.context.session.query(
            models.ResourcePropertiesData).filter_by(
                id=rpd_id)
        self.assertNotEqual('goo', results[0]['data']['Foo'])
        self.assertTrue(results[0]['encrypted'])

        ev = event_object.Event.get_all_by_stack(self.ctx, self.stack.id)[0]
        # verify not eager loaded
        self.assertIsNone(ev._resource_properties)
        # verify encrypted data is decrypted when retrieved through
        # heat object layer (normally it would be eager loaded)
        self.assertEqual({'Foo': 'goo'}, ev.resource_properties)

        # verify eager load case (uuid is specified)
        filters = {'uuid': ev.uuid}
        ev = event_object.Event.get_all_by_stack(self.ctx, self.stack.id,
                                                 filters=filters)[0]
        # verify eager loaded
        self.assertIsNotNone(ev._resource_properties)
        self.assertEqual({'Foo': 'goo'}, ev.resource_properties)
heat-10.0.2/heat/tests/test_stack_lock.py0000666000175000017500000003026313343562340020357 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock

from heat.common import exception
from heat.common import service_utils
from heat.engine import stack_lock
from heat.objects import stack as stack_object
from heat.objects import stack_lock as stack_lock_object
from heat.tests import common
from heat.tests import utils


class StackLockTest(common.HeatTestCase):

    def setUp(self):
        super(StackLockTest, self).setUp()
        self.context = utils.dummy_context()
        self.stack_id = "aae01f2d-52ae-47ac-8a0d-3fde3d220fea"
        self.engine_id = service_utils.generate_engine_id()
        stack = mock.MagicMock()
        stack.id = self.stack_id
        stack.name = "test_stack"
        stack.action = "CREATE"
        self.mock_get_by_id = self.patchobject(
            stack_object.Stack, 'get_by_id', return_value=stack)

    class TestThreadLockException(Exception):
        pass

    def test_successful_acquire_new_lock(self):
        mock_create = self.patchobject(stack_lock_object.StackLock,
                                       'create', return_value=None)

        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)
        slock.acquire()

        mock_create.assert_called_once_with(
            self.context, self.stack_id, self.engine_id)

    def test_failed_acquire_existing_lock_current_engine(self):
        mock_create = self.patchobject(stack_lock_object.StackLock,
                                       'create',
                                       return_value=self.engine_id)

        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)
        self.assertRaises(exception.ActionInProgress, slock.acquire)

        self.mock_get_by_id.assert_called_once_with(
            self.context, self.stack_id,
            show_deleted=True, eager_load=False)
        mock_create.assert_called_once_with(
            self.context, self.stack_id, self.engine_id)

    def test_successful_acquire_existing_lock_engine_dead(self):
        mock_create = self.patchobject(stack_lock_object.StackLock,
                                       'create',
                                       return_value='fake-engine-id')
        mock_steal = self.patchobject(stack_lock_object.StackLock,
                                      'steal', return_value=None)

        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)
        self.patchobject(service_utils, 'engine_alive', return_value=False)
        slock.acquire()

        mock_create.assert_called_once_with(
            self.context, self.stack_id, self.engine_id)
        mock_steal.assert_called_once_with(
            self.context, self.stack_id, 'fake-engine-id', self.engine_id)

    def test_failed_acquire_existing_lock_engine_alive(self):
        mock_create = self.patchobject(stack_lock_object.StackLock,
                                       'create',
                                       return_value='fake-engine-id')

        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)
        self.patchobject(service_utils, 'engine_alive', return_value=True)
        self.assertRaises(exception.ActionInProgress, slock.acquire)

        self.mock_get_by_id.assert_called_once_with(
            self.context, self.stack_id,
            show_deleted=True, eager_load=False)
        mock_create.assert_called_once_with(
            self.context, self.stack_id, self.engine_id)

    def test_failed_acquire_existing_lock_engine_dead(self):
        mock_create = self.patchobject(stack_lock_object.StackLock,
                                       'create',
                                       return_value='fake-engine-id')
        mock_steal = self.patchobject(stack_lock_object.StackLock, 'steal',
                                      return_value='fake-engine-id2')

        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)
        self.patchobject(service_utils, 'engine_alive', return_value=False)
        self.assertRaises(exception.ActionInProgress, slock.acquire)

        self.mock_get_by_id.assert_called_once_with(
            self.context, self.stack_id,
            show_deleted=True, eager_load=False)
        mock_create.assert_called_once_with(
            self.context, self.stack_id, self.engine_id)
        mock_steal.assert_called_once_with(
            self.context, self.stack_id, 'fake-engine-id', self.engine_id)

    def test_successful_acquire_with_retry(self):
        mock_create = self.patchobject(stack_lock_object.StackLock,
                                       'create',
                                       return_value='fake-engine-id')
        mock_steal = self.patchobject(stack_lock_object.StackLock,
                                      'steal', side_effect=[True, None])

        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)
        self.patchobject(service_utils, 'engine_alive', return_value=False)
        slock.acquire()

        mock_create.assert_has_calls(
            [mock.call(self.context, self.stack_id, self.engine_id)] * 2)
        mock_steal.assert_has_calls(
            [mock.call(self.context, self.stack_id,
                       'fake-engine-id', self.engine_id)] * 2)

    def test_failed_acquire_one_retry_only(self):
        mock_create = self.patchobject(stack_lock_object.StackLock,
                                       'create',
                                       return_value='fake-engine-id')
        mock_steal = self.patchobject(stack_lock_object.StackLock,
                                      'steal', return_value=True)

        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)
        self.patchobject(service_utils, 'engine_alive', return_value=False)
        self.assertRaises(exception.ActionInProgress, slock.acquire)

        self.mock_get_by_id.assert_called_with(
            self.context, self.stack_id,
            show_deleted=True, eager_load=False)
        mock_create.assert_has_calls(
            [mock.call(self.context, self.stack_id, self.engine_id)] * 2)
        mock_steal.assert_has_calls(
            [mock.call(self.context, self.stack_id,
                       'fake-engine-id', self.engine_id)] * 2)

    def test_context_mgr_exception(self):
        stack_lock_object.StackLock.create = mock.Mock(return_value=None)
        stack_lock_object.StackLock.release = mock.Mock(return_value=None)
        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)

        def check_lock():
            with slock:
                self.assertEqual(
                    1, stack_lock_object.StackLock.create.call_count)
                raise self.TestThreadLockException

        self.assertRaises(self.TestThreadLockException, check_lock)
        self.assertEqual(1, stack_lock_object.StackLock.release.call_count)

    def test_context_mgr_noexception(self):
        stack_lock_object.StackLock.create = mock.Mock(return_value=None)
        stack_lock_object.StackLock.release = mock.Mock(return_value=None)
        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)

        with slock:
            self.assertEqual(1, stack_lock_object.StackLock.create.call_count)

        self.assertEqual(1, stack_lock_object.StackLock.release.call_count)
    def test_thread_lock_context_mgr_exception_acquire_success(self):
        stack_lock_object.StackLock.create = mock.Mock(return_value=None)
        stack_lock_object.StackLock.release = mock.Mock(return_value=None)
        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)

        def check_thread_lock():
            with slock.thread_lock():
                self.assertEqual(
                    1, stack_lock_object.StackLock.create.call_count)
                raise self.TestThreadLockException

        self.assertRaises(self.TestThreadLockException, check_thread_lock)
        self.assertEqual(1, stack_lock_object.StackLock.release.call_count)

    def test_thread_lock_context_mgr_exception_acquire_fail(self):
        stack_lock_object.StackLock.create = mock.Mock(
            return_value=self.engine_id)
        stack_lock_object.StackLock.release = mock.Mock()
        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)

        def check_thread_lock():
            with slock.thread_lock():
                self.assertEqual(
                    1, stack_lock_object.StackLock.create.call_count)
                raise exception.ActionInProgress

        self.assertRaises(exception.ActionInProgress, check_thread_lock)
        self.assertFalse(stack_lock_object.StackLock.release.called)

    def test_thread_lock_context_mgr_no_exception(self):
        stack_lock_object.StackLock.create = mock.Mock(return_value=None)
        stack_lock_object.StackLock.release = mock.Mock(return_value=None)
        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)

        with slock.thread_lock():
            self.assertEqual(1, stack_lock_object.StackLock.create.call_count)

        self.assertFalse(stack_lock_object.StackLock.release.called)

    def test_try_thread_lock_context_mgr_exception(self):
        stack_lock_object.StackLock.create = mock.Mock(return_value=None)
        stack_lock_object.StackLock.release = mock.Mock(return_value=None)
        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)

        def check_thread_lock():
            with slock.try_thread_lock():
                self.assertEqual(
                    1, stack_lock_object.StackLock.create.call_count)
                raise self.TestThreadLockException

        self.assertRaises(self.TestThreadLockException, check_thread_lock)
        self.assertEqual(1, stack_lock_object.StackLock.release.call_count)

    def test_try_thread_lock_context_mgr_no_exception(self):
        stack_lock_object.StackLock.create = mock.Mock(return_value=None)
        stack_lock_object.StackLock.release = mock.Mock(return_value=None)
        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)

        with slock.try_thread_lock():
            self.assertEqual(1, stack_lock_object.StackLock.create.call_count)

        self.assertFalse(stack_lock_object.StackLock.release.called)

    def test_try_thread_lock_context_mgr_existing_lock(self):
        stack_lock_object.StackLock.create = mock.Mock(return_value=1234)
        stack_lock_object.StackLock.release = mock.Mock(return_value=None)
        slock = stack_lock.StackLock(self.context, self.stack_id,
                                     self.engine_id)

        def check_thread_lock():
            with slock.try_thread_lock():
                self.assertEqual(
                    1, stack_lock_object.StackLock.create.call_count)
                raise self.TestThreadLockException

        self.assertRaises(self.TestThreadLockException, check_thread_lock)
        self.assertFalse(stack_lock_object.StackLock.release.called)
heat-10.0.2/heat/tests/test_properties.py0000666000175000017500000022447313343562352020441 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
from oslo_serialization import jsonutils
import six

from heat.common import exception
from heat.engine import constraints
from heat.engine.hot import functions as hot_funcs
from heat.engine.hot import parameters as hot_param
from heat.engine import parameters
from heat.engine import plugin_manager
from heat.engine import properties
from heat.engine import resources
from heat.engine import support
from heat.engine import translation
from heat.tests import common


class PropertySchemaTest(common.HeatTestCase):

    def test_schema_all(self):
        d = {
            'type': 'string',
            'description': 'A string',
            'default': 'wibble',
            'required': False,
            'update_allowed': False,
            'immutable': False,
            'constraints': [
                {'length': {'min': 4, 'max': 8}},
            ]
        }
        s = properties.Schema(properties.Schema.STRING, 'A string',
                              default='wibble',
                              constraints=[constraints.Length(4, 8)])
        self.assertEqual(d, dict(s))

    def test_schema_list_schema(self):
        d = {
            'type': 'list',
            'description': 'A list',
            'schema': {
                '*': {
                    'type': 'string',
                    'description': 'A string',
                    'default': 'wibble',
                    'required': False,
                    'update_allowed': False,
                    'immutable': False,
                    'constraints': [
                        {'length': {'min': 4, 'max': 8}},
                    ]
                }
            },
            'required': False,
            'update_allowed': False,
            'immutable': False,
        }
        s = properties.Schema(properties.Schema.STRING, 'A string',
                              default='wibble',
                              constraints=[constraints.Length(4, 8)])
        l = properties.Schema(properties.Schema.LIST, 'A list', schema=s)
        self.assertEqual(d, dict(l))

    def test_schema_map_schema(self):
        d = {
            'type': 'map',
            'description': 'A map',
            'schema': {
                'Foo': {
                    'type': 'string',
                    'description': 'A string',
                    'default': 'wibble',
                    'required': False,
                    'update_allowed': False,
                    'immutable': False,
                    'constraints': [
                        {'length': {'min': 4, 'max': 8}},
                    ]
                }
            },
            'required': False,
            'update_allowed': False,
            'immutable': False,
        }
        s = properties.Schema(properties.Schema.STRING, 'A string',
                              default='wibble',
                              constraints=[constraints.Length(4, 8)])
        m = properties.Schema(properties.Schema.MAP, 'A map',
                              schema={'Foo': s})
        self.assertEqual(d, dict(m))

    def test_schema_nested_schema(self):
        d = {
            'type': 'list',
            'description': 'A list',
            'schema': {
                '*': {
                    'type': 'map',
                    'description': 'A map',
                    'schema': {
                        'Foo': {
                            'type': 'string',
                            'description': 'A string',
                            'default': 'wibble',
                            'required': False,
                            'update_allowed': False,
                            'immutable': False,
                            'constraints': [
                                {'length': {'min': 4, 'max': 8}},
                            ]
                        }
                    },
                    'required': False,
                    'update_allowed': False,
                    'immutable': False,
                }
            },
            'required': False,
            'update_allowed': False,
            'immutable': False,
        }
        s = properties.Schema(properties.Schema.STRING, 'A string',
                              default='wibble',
                              constraints=[constraints.Length(4, 8)])
        m = properties.Schema(properties.Schema.MAP, 'A map',
                              schema={'Foo': s})
        l = properties.Schema(properties.Schema.LIST, 'A list', schema=m)
        self.assertEqual(d, dict(l))

    def test_all_resource_schemata(self):
        for resource_type in resources.global_env().get_types():
            for schema in six.itervalues(getattr(resource_type,
                                                 'properties_schema',
                                                 {})):
                properties.Schema.from_legacy(schema)

    def test_from_legacy_idempotency(self):
        s = properties.Schema(properties.Schema.STRING)
        self.assertTrue(properties.Schema.from_legacy(s) is s)

    def test_from_legacy_minimal_string(self):
        s = properties.Schema.from_legacy({
            'Type': 'String',
        })
        self.assertEqual(properties.Schema.STRING, s.type)
        self.assertIsNone(s.description)
        self.assertIsNone(s.default)
        self.assertFalse(s.required)
        self.assertEqual(0, len(s.constraints))

    def test_from_legacy_string(self):
        s = properties.Schema.from_legacy({
            'Type': 'String',
            'Description': 'a string',
            'Default': 'wibble',
            'Implemented': False,
            'MinLength': 4,
            'MaxLength': 8,
            'AllowedValues': ['blarg', 'wibble'],
            'AllowedPattern': '[a-z]*',
        })
        self.assertEqual(properties.Schema.STRING, s.type)
        self.assertEqual('a string', s.description)
        self.assertEqual('wibble', s.default)
        self.assertFalse(s.required)
        self.assertEqual(3, len(s.constraints))
        self.assertFalse(s.immutable)

    def test_from_legacy_min_length(self):
        s = properties.Schema.from_legacy({
            'Type': 'String',
            'MinLength': 4,
        })
        self.assertEqual(1, len(s.constraints))
        c = s.constraints[0]
        self.assertIsInstance(c, constraints.Length)
        self.assertEqual(4, c.min)
        self.assertIsNone(c.max)

    def test_from_legacy_max_length(self):
        s = properties.Schema.from_legacy({
            'Type': 'String',
            'MaxLength': 8,
        })
        self.assertEqual(1, len(s.constraints))
        c = s.constraints[0]
        self.assertIsInstance(c, constraints.Length)
        self.assertIsNone(c.min)
        self.assertEqual(8, c.max)

    def test_from_legacy_minmax_length(self):
        s = properties.Schema.from_legacy({
            'Type': 'String',
            'MinLength': 4,
            'MaxLength': 8,
        })
        self.assertEqual(1, len(s.constraints))
        c = s.constraints[0]
        self.assertIsInstance(c, constraints.Length)
        self.assertEqual(4, c.min)
        self.assertEqual(8, c.max)

    def test_from_legacy_minmax_string_length(self):
        s = properties.Schema.from_legacy({
            'Type': 'String',
            'MinLength': '4',
            'MaxLength': '8',
        })
        self.assertEqual(1, len(s.constraints))
        c = s.constraints[0]
        self.assertIsInstance(c, constraints.Length)
        self.assertEqual(4, c.min)
        self.assertEqual(8, c.max)

    def test_from_legacy_min_value(self):
        s = properties.Schema.from_legacy({
            'Type': 'Integer',
            'MinValue': 4,
        })
        self.assertEqual(1, len(s.constraints))
        c = s.constraints[0]
        self.assertIsInstance(c, constraints.Range)
        self.assertEqual(4, c.min)
        self.assertIsNone(c.max)

    def test_from_legacy_max_value(self):
        s = properties.Schema.from_legacy({
            'Type': 'Integer',
            'MaxValue': 8,
        })
        self.assertEqual(1, len(s.constraints))
        c = s.constraints[0]
        self.assertIsInstance(c, constraints.Range)
        self.assertIsNone(c.min)
        self.assertEqual(8, c.max)

    def test_from_legacy_minmax_value(self):
        s = properties.Schema.from_legacy({
            'Type': 'Integer',
            'MinValue': 4,
            'MaxValue': 8,
        })
        self.assertEqual(1, len(s.constraints))
        c = s.constraints[0]
        self.assertIsInstance(c, constraints.Range)
        self.assertEqual(4, c.min)
        self.assertEqual(8, c.max)

    def test_from_legacy_minmax_string_value(self):
        s = properties.Schema.from_legacy({
            'Type': 'Integer',
            'MinValue': '4',
            'MaxValue': '8',
        })
        self.assertEqual(1, len(s.constraints))
        c = s.constraints[0]
        self.assertIsInstance(c, constraints.Range)
        self.assertEqual(4, c.min)
        self.assertEqual(8, c.max)

    def test_from_legacy_allowed_values(self):
        s = properties.Schema.from_legacy({
            'Type': 'String',
            'AllowedValues': ['blarg', 'wibble'],
        })
        self.assertEqual(1, len(s.constraints))
        c = s.constraints[0]
        self.assertIsInstance(c, constraints.AllowedValues)
        self.assertEqual(('blarg', 'wibble'), c.allowed)

    def test_from_legacy_allowed_pattern(self):
        s = properties.Schema.from_legacy({
            'Type': 'String',
            'AllowedPattern': '[a-z]*',
        })
        self.assertEqual(1, len(s.constraints))
        c = s.constraints[0]
        self.assertIsInstance(c, constraints.AllowedPattern)
        self.assertEqual('[a-z]*', c.pattern)

    def test_from_legacy_list(self):
        l = properties.Schema.from_legacy({
            'Type': 'List',
            'Default': ['wibble'],
            'Schema': {
                'Type': 'String',
                'Default': 'wibble',
                'MaxLength': 8,
            }
        })
        self.assertEqual(properties.Schema.LIST, l.type)
        self.assertEqual(['wibble'], l.default)

        ss = l.schema[0]
        self.assertEqual(properties.Schema.STRING, ss.type)
        self.assertEqual('wibble', ss.default)

    def test_from_legacy_map(self):
        l = properties.Schema.from_legacy({
            'Type': 'Map',
            'Schema': {
                'foo': {
                    'Type': 'String',
                    'Default': 'wibble',
                }
            }
        })
        self.assertEqual(properties.Schema.MAP, l.type)

        ss = l.schema['foo']
        self.assertEqual(properties.Schema.STRING, ss.type)
        self.assertEqual('wibble', ss.default)

    def test_from_legacy_invalid_key(self):
        self.assertRaises(exception.InvalidSchemaError,
                          properties.Schema.from_legacy,
                          {'Type': 'String', 'Foo': 'Bar'})

    def test_from_string_param(self):
        description = "WebServer EC2 instance type"
        allowed_values = ["t1.micro", "m1.small", "m1.large", "m1.xlarge",
                          "m2.xlarge", "m2.2xlarge", "m2.4xlarge",
                          "c1.medium", "c1.xlarge", "cc1.4xlarge"]
        constraint_desc = "Must be a valid EC2 instance type."
        param = parameters.Schema.from_dict('name', {
            "Type": "String",
            "Description": description,
            "Default": "m1.large",
            "AllowedValues": allowed_values,
            "ConstraintDescription": constraint_desc,
        })
        schema = properties.Schema.from_parameter(param)

        self.assertEqual(properties.Schema.STRING, schema.type)
        self.assertEqual(description, schema.description)
        self.assertIsNone(schema.default)
        self.assertFalse(schema.required)
        self.assertEqual(1, len(schema.constraints))

        allowed_constraint = schema.constraints[0]
        self.assertEqual(tuple(allowed_values), allowed_constraint.allowed)
        self.assertEqual(constraint_desc, allowed_constraint.description)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_from_string_allowed_pattern(self):
        description = "WebServer EC2 instance type"
        allowed_pattern = "[A-Za-z0-9.]*"
        constraint_desc = "Must contain only alphanumeric characters."
        param = parameters.Schema.from_dict('name', {
            "Type": "String",
            "Description": description,
            "Default": "m1.large",
            "AllowedPattern": allowed_pattern,
            "ConstraintDescription": constraint_desc,
        })
        schema = properties.Schema.from_parameter(param)

        self.assertEqual(properties.Schema.STRING, schema.type)
        self.assertEqual(description, schema.description)
        self.assertIsNone(schema.default)
        self.assertFalse(schema.required)
        self.assertEqual(1, len(schema.constraints))

        allowed_constraint = schema.constraints[0]
        self.assertEqual(allowed_pattern, allowed_constraint.pattern)
        self.assertEqual(constraint_desc, allowed_constraint.description)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_from_string_multi_constraints(self):
        description = "WebServer EC2 instance type"
        allowed_pattern = "[A-Za-z0-9.]*"
        constraint_desc = "Must contain only alphanumeric characters."
        param = parameters.Schema.from_dict('name', {
            "Type": "String",
            "Description": description,
            "Default": "m1.large",
            "MinLength": "7",
            "AllowedPattern": allowed_pattern,
            "ConstraintDescription": constraint_desc,
        })
        schema = properties.Schema.from_parameter(param)

        self.assertEqual(properties.Schema.STRING, schema.type)
        self.assertEqual(description, schema.description)
        self.assertIsNone(schema.default)
        self.assertFalse(schema.required)
        self.assertEqual(2, len(schema.constraints))

        len_constraint = schema.constraints[0]
        allowed_constraint = schema.constraints[1]
        self.assertEqual(7, len_constraint.min)
        self.assertIsNone(len_constraint.max)
        self.assertEqual(allowed_pattern, allowed_constraint.pattern)
        self.assertEqual(constraint_desc, allowed_constraint.description)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_from_param_string_min_len(self):
        param = parameters.Schema.from_dict('name', {
            "Description": "WebServer EC2 instance type",
            "Type": "String",
            "Default": "m1.large",
            "MinLength": "7",
        })
        schema = properties.Schema.from_parameter(param)

        self.assertFalse(schema.required)
        self.assertIsNone(schema.default)
        self.assertEqual(1, len(schema.constraints))

        len_constraint = schema.constraints[0]
        self.assertEqual(7, len_constraint.min)
        self.assertIsNone(len_constraint.max)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_from_param_string_max_len(self):
        param = parameters.Schema.from_dict('name', {
            "Description": "WebServer EC2 instance type",
            "Type": "String",
            "Default": "m1.large",
            "MaxLength": "11",
        })
        schema = properties.Schema.from_parameter(param)

        self.assertFalse(schema.required)
        self.assertIsNone(schema.default)
        self.assertEqual(1, len(schema.constraints))

        len_constraint = schema.constraints[0]
        self.assertIsNone(len_constraint.min)
        self.assertEqual(11, len_constraint.max)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_from_param_string_min_max_len(self):
        param = parameters.Schema.from_dict('name', {
            "Description": "WebServer EC2 instance type",
            "Type": "String",
            "Default": "m1.large",
            "MinLength": "7",
            "MaxLength": "11",
        })
        schema = properties.Schema.from_parameter(param)

        self.assertFalse(schema.required)
        self.assertIsNone(schema.default)
        self.assertEqual(1, len(schema.constraints))

        len_constraint = schema.constraints[0]
        self.assertEqual(7, len_constraint.min)
        self.assertEqual(11, len_constraint.max)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_from_param_no_default(self):
        param = parameters.Schema.from_dict('name', {
            "Description": "WebServer EC2 instance type",
            "Type": "String",
        })
        schema = properties.Schema.from_parameter(param)

        self.assertTrue(schema.required)
        self.assertIsNone(schema.default)
        self.assertEqual(0, len(schema.constraints))
        self.assertFalse(schema.allow_conversion)

        props = properties.Properties({'name': schema}, {'name': 'm1.large'})
        props.validate()

    def test_from_number_param_min(self):
        param = parameters.Schema.from_dict('name', {
            "Type": "Number",
            "Default": "42",
            "MinValue": "10",
        })
        schema = properties.Schema.from_parameter(param)

        self.assertEqual(properties.Schema.NUMBER, schema.type)
        self.assertIsNone(schema.default)
        self.assertFalse(schema.required)
        self.assertEqual(1, len(schema.constraints))

        value_constraint = schema.constraints[0]
        self.assertEqual(10, value_constraint.min)
        self.assertIsNone(value_constraint.max)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_from_number_param_max(self):
        param = parameters.Schema.from_dict('name', {
            "Type": "Number",
            "Default": "42",
            "MaxValue": "100",
        })
        schema = properties.Schema.from_parameter(param)

        self.assertEqual(properties.Schema.NUMBER, schema.type)
        self.assertIsNone(schema.default)
        self.assertFalse(schema.required)
        self.assertEqual(1, len(schema.constraints))

        value_constraint = schema.constraints[0]
        self.assertIsNone(value_constraint.min)
        self.assertEqual(100, value_constraint.max)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_from_number_param_min_max(self):
        param = parameters.Schema.from_dict('name', {
            "Type": "Number",
            "Default": "42",
            "MinValue": "10",
            "MaxValue": "100",
        })
        schema = properties.Schema.from_parameter(param)

        self.assertEqual(properties.Schema.NUMBER, schema.type)
        self.assertIsNone(schema.default)
        self.assertFalse(schema.required)
        self.assertEqual(1, len(schema.constraints))

        value_constraint = schema.constraints[0]
        self.assertEqual(10, value_constraint.min)
        self.assertEqual(100, value_constraint.max)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_from_number_param_allowed_vals(self):
        constraint_desc = "The quick brown fox jumps over the lazy dog."
        param = parameters.Schema.from_dict('name', {
            "Type": "Number",
            "Default": "42",
            "AllowedValues": ["10", "42", "100"],
            "ConstraintDescription": constraint_desc,
        })
        schema = properties.Schema.from_parameter(param)

        self.assertEqual(properties.Schema.NUMBER, schema.type)
        self.assertIsNone(schema.default)
        self.assertFalse(schema.required)
        self.assertEqual(1, len(schema.constraints))
        self.assertFalse(schema.allow_conversion)

        allowed_constraint = schema.constraints[0]
        self.assertEqual(('10', '42', '100'), allowed_constraint.allowed)
        self.assertEqual(constraint_desc, allowed_constraint.description)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_from_list_param(self):
        param = parameters.Schema.from_dict('name', {
            "Type": "CommaDelimitedList",
            "Default": "foo,bar,baz"
        })
        schema = properties.Schema.from_parameter(param)

        self.assertEqual(properties.Schema.LIST, schema.type)
        self.assertIsNone(schema.default)
        self.assertFalse(schema.required)
        self.assertTrue(schema.allow_conversion)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_from_json_param(self):
        param = parameters.Schema.from_dict('name', {
            "Type": "Json",
            "Default": {"foo": "bar", "blarg": "wibble"}
        })
        schema = properties.Schema.from_parameter(param)

        self.assertEqual(properties.Schema.MAP, schema.type)
        self.assertIsNone(schema.default)
        self.assertFalse(schema.required)
        self.assertTrue(schema.allow_conversion)

        props = properties.Properties({'test': schema}, {})
        props.validate()

    def test_no_mismatch_in_update_policy(self):
        manager = plugin_manager.PluginManager('heat.engine.resources')
        resource_mapping = plugin_manager.PluginMapping('resource')
        res_plugin_mappings = resource_mapping.load_all(manager)
        all_resources = {}
        for mapping in res_plugin_mappings:
            name, cls = mapping
            all_resources[name] = cls

        def check_update_policy(resource_type, prop_key, prop, update=False):
            if prop.update_allowed:
                update = True
            sub_schema = prop.schema
            if sub_schema:
                for sub_prop_key, sub_prop in six.iteritems(sub_schema):
                    if not update:
                        self.assertEqual(update, sub_prop.update_allowed,
                                         "Mismatch in update policies: "
                                         "resource %(res)s, properties "
                                         "'%(prop)s' and '%(nested_prop)s'."
                                         % {'res': resource_type,
                                            'prop': prop_key,
                                            'nested_prop': sub_prop_key})
                    if sub_prop_key == '*':
                        check_update_policy(resource_type, prop_key,
                                            sub_prop, update)
                    else:
                        check_update_policy(resource_type, sub_prop_key,
                                            sub_prop, update)

        for resource_type, resource_class in six.iteritems(all_resources):
            props_schemata = properties.schemata(
                resource_class.properties_schema)
            for prop_key, prop in six.iteritems(props_schemata):
                check_update_policy(resource_type, prop_key, prop)


class PropertyTest(common.HeatTestCase):

    def test_required_default(self):
        p = properties.Property({'Type': 'String'})
        self.assertFalse(p.required())

    def test_required_false(self):
        p = properties.Property({'Type': 'String', 'Required': False})
        self.assertFalse(p.required())

    def test_required_true(self):
        p = properties.Property({'Type': 'String', 'Required': True})
        self.assertTrue(p.required())

    def test_implemented_default(self):
        p = properties.Property({'Type': 'String'})
        self.assertTrue(p.implemented())

    def test_implemented_false(self):
        p = properties.Property({'Type': 'String', 'Implemented': False})
        self.assertFalse(p.implemented())

    def test_implemented_true(self):
        p = properties.Property({'Type': 'String', 'Implemented': True})
        self.assertTrue(p.implemented())

    def test_no_default(self):
        p = properties.Property({'Type': 'String'})
        self.assertFalse(p.has_default())

    def test_default(self):
        p = properties.Property({'Type': 'String', 'Default': 'wibble'})
        self.assertEqual('wibble', p.default())

    def test_type(self):
        p = properties.Property({'Type': 'String'})
        self.assertEqual('String', p.type())

    def test_bad_type(self):
        self.assertRaises(exception.InvalidSchemaError,
                          properties.Property, {'Type': 'Fish'})

    def test_bad_key(self):
        self.assertRaises(exception.InvalidSchemaError,
                          properties.Property,
                          {'Type': 'String', 'Foo': 'Bar'})

    def test_string_pattern_good(self):
        schema = {'Type': 'String', 'AllowedPattern': '[a-z]*'}
        p = properties.Property(schema)
        self.assertEqual('foo', p.get_value('foo', True))

    def test_string_pattern_bad_prefix(self):
        schema = {'Type': 'String', 'AllowedPattern': '[a-z]*'}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, '1foo', True)

    def test_string_pattern_bad_suffix(self):
        schema = {'Type': 'String', 'AllowedPattern': '[a-z]*'}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, 'foo1', True)

    def test_string_value_list_good(self):
        schema = {'Type': 'String', 'AllowedValues': ['foo', 'bar', 'baz']}
        p = properties.Property(schema)
        self.assertEqual('bar', p.get_value('bar', True))

    def test_string_value_list_bad(self):
        schema = {'Type': 'String', 'AllowedValues': ['foo', 'bar', 'baz']}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, 'blarg', True)

    def test_string_maxlength_good(self):
        schema = {'Type': 'String', 'MaxLength': '5'}
        p = properties.Property(schema)
        self.assertEqual('abcd', p.get_value('abcd', True))

    def test_string_exceeded_maxlength(self):
        schema = {'Type': 'String', 'MaxLength': '5'}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, 'abcdef', True)

    def test_string_length_in_range(self):
        schema = {'Type': 'String', 'MinLength': '5', 'MaxLength': '10'}
        p = properties.Property(schema)
        self.assertEqual('abcdef', p.get_value('abcdef', True))

    def test_string_minlength_good(self):
        schema = {'Type': 'String', 'MinLength': '5'}
        p = properties.Property(schema)
        self.assertEqual('abcde', p.get_value('abcde', True))

    def test_string_smaller_than_minlength(self):
        schema = {'Type': 'String', 'MinLength': '5'}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, 'abcd', True)

    def test_int_good(self):
        schema = {'Type': 'Integer', 'MinValue': 3, 'MaxValue': 3}
        p = properties.Property(schema)
        self.assertEqual(3, p.get_value(3, True))

    def test_int_bad(self):
        schema = {'Type': 'Integer'}
        p = properties.Property(schema)
        # python 3.4.3 returns another error message
        # try to handle this by regexp
        self.assertRaisesRegex(
            TypeError, r"int\(\) argument must be a string"
                       "(, a bytes-like object)?"
                       " or a number, not 'list'",
            p.get_value, [1])

    def test_str_from_int(self):
        schema = {'Type': 'String'}
        p = properties.Property(schema)
        self.assertEqual('3', p.get_value(3))

    def test_str_from_bool(self):
        schema = {'Type': 'String'}
        p = properties.Property(schema)
        self.assertEqual('True', p.get_value(True))

    def test_int_from_str_good(self):
        schema = {'Type': 'Integer'}
        p = properties.Property(schema)
        self.assertEqual(3, p.get_value('3'))

    def test_int_from_str_bad(self):
        schema = {'Type': 'Integer'}
        p = properties.Property(schema)
        ex = self.assertRaises(TypeError, p.get_value, '3a')
        self.assertEqual("Value '3a' is not an integer", six.text_type(ex))

    def test_integer_low(self):
        schema = {'Type': 'Integer', 'MinValue': 4}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, 3, True)

    def test_integer_high(self):
        schema = {'Type': 'Integer', 'MaxValue': 2}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, 3, True)

    def test_integer_value_list_good(self):
        schema = {'Type': 'Integer', 'AllowedValues': [1, 3, 5]}
        p = properties.Property(schema)
        self.assertEqual(5, p.get_value(5, True))

    def test_integer_value_list_bad(self):
        schema = {'Type': 'Integer', 'AllowedValues': [1, 3, 5]}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, 2, True)

    def test_number_good(self):
        schema = {'Type': 'Number', 'MinValue': '3', 'MaxValue': '3'}
        p = properties.Property(schema)
        self.assertEqual(3, p.get_value(3, True))

    def test_numbers_from_strings(self):
        """Numbers can be converted from strings."""
        schema = {'Type': 'Number', 'MinValue': '3', 'MaxValue': '3'}
        p = properties.Property(schema)
        self.assertEqual(3, p.get_value('3'))

    def test_number_value_list_good(self):
        schema = {'Type': 'Number', 'AllowedValues': [1, 3, 5]}
        p = properties.Property(schema)
        self.assertEqual(5, p.get_value('5', True))

    def test_number_value_list_bad(self):
        schema = {'Type': 'Number', 'AllowedValues': ['1', '3', '5']}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, '2', True)

    def test_number_low(self):
        schema = {'Type': 'Number', 'MinValue': '4'}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, '3', True)

    def test_number_high(self):
        schema = {'Type': 'Number', 'MaxValue': '2'}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, '3', True)

    def test_boolean_true(self):
        p = properties.Property({'Type': 'Boolean'})
        self.assertIs(True, p.get_value('True'))
        self.assertIs(True, p.get_value('true'))
        self.assertIs(True, p.get_value(True))

    def test_boolean_false(self):
        p = properties.Property({'Type': 'Boolean'})
        self.assertIs(False, p.get_value('False'))
        self.assertIs(False, p.get_value('false'))
        self.assertIs(False, p.get_value(False))

    def test_boolean_invalid_string(self):
        p = properties.Property({'Type': 'Boolean'})
        self.assertRaises(ValueError, p.get_value, 'fish')

    def test_boolean_invalid_int(self):
        p = properties.Property({'Type': 'Boolean'})
        self.assertRaises(TypeError, p.get_value, 5)

    def test_list_string(self):
        p = properties.Property({'Type': 'List'})
        self.assertRaises(TypeError, p.get_value, 'foo')

    def test_list_good(self):
        p = properties.Property({'Type': 'List'})
        self.assertEqual(['foo', 'bar'], p.get_value(['foo', 'bar']))

    def test_list_dict(self):
        p = properties.Property({'Type': 'List'})
        self.assertRaises(TypeError, p.get_value, {'foo': 'bar'})

    def test_list_is_delimited(self):
        p = properties.Property({'Type': 'List'})
        self.assertRaises(TypeError, p.get_value, 'foo,bar')
        p.schema.allow_conversion = True
        self.assertEqual(['foo', 'bar'], p.get_value('foo,bar'))
        self.assertEqual(['foo'], p.get_value('foo'))

    def test_map_path(self):
        p = properties.Property({'Type': 'Map'}, name='test', path='parent')
        self.assertEqual('parent.test', p.path)

    def test_list_path(self):
        p = properties.Property({'Type': 'List'}, name='0', path='parent')
        self.assertEqual('parent.0', p.path)

    def test_list_maxlength_good(self):
        schema = {'Type': 'List', 'MaxLength': '3'}
        p = properties.Property(schema)
        self.assertEqual(['1', '2'], p.get_value(['1', '2'], True))

    def test_list_exceeded_maxlength(self):
        schema = {'Type': 'List', 'MaxLength': '2'}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, ['1', '2', '3'], True)

    def test_list_length_in_range(self):
        schema = {'Type': 'List', 'MinLength': '2', 'MaxLength': '4'}
        p = properties.Property(schema)
        self.assertEqual(['1', '2', '3'], p.get_value(['1', '2', '3'], True))

    def test_list_minlength_good(self):
        schema = {'Type': 'List', 'MinLength': '3'}
        p = properties.Property(schema)
        self.assertEqual(['1', '2', '3'], p.get_value(['1', '2', '3'], True))

    def test_list_smaller_than_minlength(self):
        schema = {'Type': 'List', 'MinLength': '4'}
        p = properties.Property(schema)
        self.assertRaises(exception.StackValidationFailed,
                          p.get_value, ['1', '2', '3'], True)

    def test_map_list_default(self):
        schema = {'Type': 'Map', 'Default': ['foo', 'bar']}
        p = properties.Property(schema)
        p.schema.allow_conversion = True
        self.assertEqual(jsonutils.dumps(['foo', 'bar']), p.get_value(None))

    def test_map_list_default_empty(self):
        schema = {'Type': 'Map', 'Default': []}
        p = properties.Property(schema)
        p.schema.allow_conversion = True
        self.assertEqual(jsonutils.dumps([]), p.get_value(None))

    def test_map_list_no_default(self):
        schema = {'Type': 'Map'}
        p = properties.Property(schema)
        p.schema.allow_conversion = True
        self.assertEqual({}, p.get_value(None))

    def test_map_string(self):
        p = properties.Property({'Type': 'Map'})
        self.assertRaises(TypeError, p.get_value, 'foo')

    def test_map_list(self):
        p = properties.Property({'Type': 'Map'})
        self.assertRaises(TypeError, p.get_value, ['foo'])

    def test_map_allow_conversion(self):
        p = properties.Property({'Type': 'Map'})
        p.schema.allow_conversion = True
        self.assertEqual('foo', p.get_value('foo'))
        self.assertEqual(jsonutils.dumps(['foo']), p.get_value(['foo']))

    def test_map_schema_good(self):
        map_schema = {'valid': {'Type': 'Boolean'}}
        p = properties.Property({'Type': 'Map', 'Schema': map_schema})
        self.assertEqual({'valid': True}, p.get_value({'valid': 'TRUE'}))

    def test_map_schema_bad_data(self):
        map_schema = {'valid': {'Type': 'Boolean'}}
        p = properties.Property({'Type': 'Map', 'Schema': map_schema})
        ex = self.assertRaises(exception.StackValidationFailed,
                               p.get_value, {'valid':
'fish'}, True) self.assertEqual('Property error: valid: "fish" is not a ' 'valid boolean', six.text_type(ex)) def test_map_schema_missing_data(self): map_schema = {'valid': {'Type': 'Boolean'}} p = properties.Property({'Type': 'Map', 'Schema': map_schema}) self.assertEqual({'valid': None}, p.get_value({})) def test_map_schema_missing_required_data(self): map_schema = {'valid': {'Type': 'Boolean', 'Required': True}} p = properties.Property({'Type': 'Map', 'Schema': map_schema}) ex = self.assertRaises(exception.StackValidationFailed, p.get_value, {}, True) self.assertEqual('Property error: Property valid not assigned', six.text_type(ex)) def test_list_schema_good(self): map_schema = {'valid': {'Type': 'Boolean'}} list_schema = {'Type': 'Map', 'Schema': map_schema} p = properties.Property({'Type': 'List', 'Schema': list_schema}) self.assertEqual([{'valid': True}, {'valid': False}], p.get_value([{'valid': 'TRUE'}, {'valid': 'False'}])) def test_list_schema_bad_data(self): map_schema = {'valid': {'Type': 'Boolean'}} list_schema = {'Type': 'Map', 'Schema': map_schema} p = properties.Property({'Type': 'List', 'Schema': list_schema}) ex = self.assertRaises(exception.StackValidationFailed, p.get_value, [{'valid': 'True'}, {'valid': 'fish'}], True) self.assertEqual('Property error: [1].valid: "fish" is not ' 'a valid boolean', six.text_type(ex)) def test_list_schema_int_good(self): list_schema = {'Type': 'Integer'} p = properties.Property({'Type': 'List', 'Schema': list_schema}) self.assertEqual([1, 2, 3], p.get_value([1, 2, 3])) def test_list_schema_int_bad_data(self): list_schema = {'Type': 'Integer'} p = properties.Property({'Type': 'List', 'Schema': list_schema}) ex = self.assertRaises(exception.StackValidationFailed, p.get_value, [42, 'fish'], True) self.assertEqual("Property error: [1]: Value 'fish' is not " "an integer", six.text_type(ex)) class PropertiesTest(common.HeatTestCase): def setUp(self): super(PropertiesTest, self).setUp() schema = { 'int': {'Type': 'Integer'}, 'string': {'Type': 'String'}, 'required_int': {'Type': 'Integer', 'Required': True}, 'bad_int': {'Type': 'Integer'}, 'missing': {'Type': 'Integer'}, 'defaulted': {'Type': 'Integer', 'Default': 1}, 'default_override': {'Type': 'Integer', 'Default': 1}, 'default_bool': {'Type': 'Boolean', 'Default': 'false'}, } data = { 'int': 21, 'string': 'foo', 'bad_int': 'foo', 'default_override': 21, } def double(d): return d * 2 self.props = properties.Properties(schema, data, double, 'wibble') def test_integer_good(self): self.assertEqual(42, self.props['int']) def test_string_good(self): self.assertEqual('foofoo', self.props['string']) def test_bool_not_str(self): self.assertFalse(self.props['default_bool']) def test_missing_required(self): self.assertRaises(ValueError, self.props.get, 'required_int') @mock.patch.object(translation.Translation, 'has_translation') @mock.patch.object(translation.Translation, 'translate') def test_required_with_translate_no_value(self, m_t, m_ht): m_t.return_value = None m_ht.return_value = True self.assertRaises(ValueError, self.props.get, 'required_int') def test_integer_bad(self): self.assertRaises(ValueError, self.props.get, 'bad_int') def test_missing(self): self.assertIsNone(self.props['missing']) def test_default(self): self.assertEqual(1, self.props['defaulted']) def test_default_override(self): self.assertEqual(42, self.props['default_override']) def test_get_user_value(self): self.assertIsNone(self.props.get_user_value('defaulted')) self.assertEqual(42, 
self.props.get_user_value('default_override')) def test_get_user_value_key_error(self): ex = self.assertRaises(KeyError, self.props.get_user_value, 'foo') # Note we have to use args here: https://bugs.python.org/issue2651 self.assertEqual('Invalid Property foo', six.text_type(ex.args[0])) def test_bad_key(self): self.assertEqual('wibble', self.props.get('foo', 'wibble')) def test_key_error(self): ex = self.assertRaises(KeyError, self.props.__getitem__, 'foo') # Note we have to use args here: https://bugs.python.org/issue2651 self.assertEqual('Invalid Property foo', six.text_type(ex.args[0])) def test_none_string(self): schema = {'foo': {'Type': 'String'}} props = properties.Properties(schema, {'foo': None}) self.assertEqual('', props['foo']) def test_none_integer(self): schema = {'foo': {'Type': 'Integer'}} props = properties.Properties(schema, {'foo': None}) self.assertEqual(0, props['foo']) def test_none_number(self): schema = {'foo': {'Type': 'Number'}} props = properties.Properties(schema, {'foo': None}) self.assertEqual(0, props['foo']) def test_none_boolean(self): schema = {'foo': {'Type': 'Boolean'}} props = properties.Properties(schema, {'foo': None}) self.assertIs(False, props['foo']) def test_none_map(self): schema = {'foo': {'Type': 'Map'}} props = properties.Properties(schema, {'foo': None}) self.assertEqual({}, props['foo']) def test_none_list(self): schema = {'foo': {'Type': 'List'}} props = properties.Properties(schema, {'foo': None}) self.assertEqual([], props['foo']) def test_none_default_string(self): schema = {'foo': {'Type': 'String', 'Default': 'bar'}} props = properties.Properties(schema, {'foo': None}) self.assertEqual('bar', props['foo']) def test_none_default_integer(self): schema = {'foo': {'Type': 'Integer', 'Default': 42}} props = properties.Properties(schema, {'foo': None}) self.assertEqual(42, props['foo']) schema = {'foo': {'Type': 'Integer', 'Default': 0}} props = properties.Properties(schema, {'foo': None}) self.assertEqual(0, props['foo']) schema = {'foo': {'Type': 'Integer', 'Default': -273}} props = properties.Properties(schema, {'foo': None}) self.assertEqual(-273, props['foo']) def test_none_default_number(self): schema = {'foo': {'Type': 'Number', 'Default': 42.0}} props = properties.Properties(schema, {'foo': None}) self.assertEqual(42.0, props['foo']) schema = {'foo': {'Type': 'Number', 'Default': 0.0}} props = properties.Properties(schema, {'foo': None}) self.assertEqual(0.0, props['foo']) schema = {'foo': {'Type': 'Number', 'Default': -273.15}} props = properties.Properties(schema, {'foo': None}) self.assertEqual(-273.15, props['foo']) def test_none_default_boolean(self): schema = {'foo': {'Type': 'Boolean', 'Default': True}} props = properties.Properties(schema, {'foo': None}) self.assertIs(True, props['foo']) def test_none_default_map(self): schema = {'foo': {'Type': 'Map', 'Default': {'bar': 'baz'}}} props = properties.Properties(schema, {'foo': None}) self.assertEqual({'bar': 'baz'}, props['foo']) def test_none_default_list(self): schema = {'foo': {'Type': 'List', 'Default': ['one', 'two']}} props = properties.Properties(schema, {'foo': None}) self.assertEqual(['one', 'two'], props['foo']) def test_resolve_returns_none(self): schema = {'foo': {'Type': 'String', "MinLength": "5"}} def test_resolver(prop): return None self.patchobject(properties.Properties, '_find_deps_any_in_init').return_value = True props = properties.Properties(schema, {'foo': 'get_attr: [db, value]'}, test_resolver) try: self.assertIsNone(props.validate()) except 
exception.StackValidationFailed: self.fail("Constraints should not have been evaluated.") def test_resolve_ref_with_constraints(self): # create test custom constraint class IncorrectConstraint(constraints.BaseCustomConstraint): expected_exceptions = (Exception,) def validate_with_client(self, client, value): raise Exception("Test exception") class TestCustomConstraint(constraints.CustomConstraint): @property def custom_constraint(self): return IncorrectConstraint() # create schema with test constraint schema = { 'foo': properties.Schema( properties.Schema.STRING, constraints=[TestCustomConstraint('test_constraint')] ) } # define parameters for function def test_resolver(prop): return 'None' class rsrc(object): action = INIT = "INIT" class DummyStack(dict): pass stack = DummyStack(another_res=rsrc()) # define properties with function and constraint props = properties.Properties( schema, {'foo': hot_funcs.GetResource( stack, 'get_resource', 'another_res')}, test_resolver) try: self.assertIsNone(props.validate()) except exception.StackValidationFailed: self.fail("Constraints should not have been evaluated.") def test_schema_from_params(self): params_snippet = { "DBUsername": { "Type": "String", "Description": "The WordPress database admin account username", "Default": "admin", "MinLength": "1", "AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*", "NoEcho": "true", "MaxLength": "16", "ConstraintDescription": ("must begin with a letter and " "contain only alphanumeric " "characters.") }, "KeyName": { "Type": "String", "Description": ("Name of an existing EC2 KeyPair to enable " "SSH access to the instances") }, "LinuxDistribution": { "Default": "F17", "Type": "String", "Description": "Distribution of choice", "AllowedValues": [ "F18", "F17", "U10", "RHEL-6.1", "RHEL-6.2", "RHEL-6.3" ] }, "DBPassword": { "Type": "String", "Description": "The WordPress database admin account password", "Default": "admin", "MinLength": "1", "AllowedPattern": "[a-zA-Z0-9]*", "NoEcho": "true", "MaxLength": "41", "ConstraintDescription": ("must contain only alphanumeric " "characters.") }, "DBName": { "AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*", "Type": "String", "Description": "The WordPress database name", "MaxLength": "64", "Default": "wordpress", "MinLength": "1", "ConstraintDescription": ("must begin with a letter and " "contain only alphanumeric " "characters.") }, "InstanceType": { "Default": "m1.large", "Type": "String", "ConstraintDescription": "must be a valid EC2 instance type.", "Description": "WebServer EC2 instance type", "AllowedValues": [ "t1.micro", "m1.small", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "c1.medium", "c1.xlarge", "cc1.4xlarge" ] }, "DBRootPassword": { "Type": "String", "Description": "Root password for MySQL", "Default": "admin", "MinLength": "1", "AllowedPattern": "[a-zA-Z0-9]*", "NoEcho": "true", "MaxLength": "41", "ConstraintDescription": ("must contain only alphanumeric " "characters.") } } expected = { "DBUsername": { "type": "string", "description": "The WordPress database admin account username", "required": False, 'update_allowed': True, 'immutable': False, "constraints": [ {"length": {"min": 1, "max": 16}, "description": "must begin with a letter and contain " "only alphanumeric characters."}, {"allowed_pattern": "[a-zA-Z][a-zA-Z0-9]*", "description": "must begin with a letter and contain " "only alphanumeric characters."}, ] }, "LinuxDistribution": { "type": "string", "description": "Distribution of choice", "required": False, 'update_allowed': True, 'immutable': 
False, "constraints": [ {"allowed_values": ["F18", "F17", "U10", "RHEL-6.1", "RHEL-6.2", "RHEL-6.3"]} ] }, "InstanceType": { "type": "string", "description": "WebServer EC2 instance type", "required": False, 'update_allowed': True, 'immutable': False, "constraints": [ {"allowed_values": ["t1.micro", "m1.small", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "c1.medium", "c1.xlarge", "cc1.4xlarge"], "description": "must be a valid EC2 instance type."}, ] }, "DBRootPassword": { "type": "string", "description": "Root password for MySQL", "required": False, 'update_allowed': True, 'immutable': False, "constraints": [ {"length": {"min": 1, "max": 41}, "description": "must contain only alphanumeric " "characters."}, {"allowed_pattern": "[a-zA-Z0-9]*", "description": "must contain only alphanumeric " "characters."}, ] }, "KeyName": { "type": "string", "description": ("Name of an existing EC2 KeyPair to enable " "SSH access to the instances"), "required": True, 'update_allowed': True, 'immutable': False, }, "DBPassword": { "type": "string", "description": "The WordPress database admin account password", "required": False, 'update_allowed': True, 'immutable': False, "constraints": [ {"length": {"min": 1, "max": 41}, "description": "must contain only alphanumeric " "characters."}, {"allowed_pattern": "[a-zA-Z0-9]*", "description": "must contain only alphanumeric " "characters."}, ] }, "DBName": { "type": "string", "description": "The WordPress database name", "required": False, 'update_allowed': True, 'immutable': False, "constraints": [ {"length": {"min": 1, "max": 64}, "description": "must begin with a letter and contain " "only alphanumeric characters."}, {"allowed_pattern": "[a-zA-Z][a-zA-Z0-9]*", "description": "must begin with a letter and contain " "only alphanumeric characters."}, ] }, } params = dict((n, parameters.Schema.from_dict(n, s)) for n, s in params_snippet.items()) props_schemata = properties.Properties.schema_from_params(params) self.assertEqual(expected, dict((n, dict(s)) for n, s in props_schemata.items())) def test_schema_from_hot_params(self): params_snippet = { "KeyName": { "type": "string", "description": ("Name of an existing EC2 KeyPair to enable " "SSH access to the instances") }, "InstanceType": { "default": "m1.large", "type": "string", "description": "WebServer EC2 instance type", "constraints": [ {"allowed_values": ["t1.micro", "m1.small", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "c1.medium", "c1.xlarge", "cc1.4xlarge"], "description": "Must be a valid EC2 instance type."} ] }, "LinuxDistribution": { "default": "F17", "type": "string", "description": "Distribution of choice", "constraints": [ {"allowed_values": ["F18", "F17", "U10", "RHEL-6.1", "RHEL-6.2", "RHEL-6.3"], "description": "Must be a valid Linux distribution"} ] }, "DBName": { "type": "string", "description": "The WordPress database name", "default": "wordpress", "constraints": [ {"length": {"min": 1, "max": 64}, "description": "Length must be between 1 and 64"}, {"allowed_pattern": "[a-zA-Z][a-zA-Z0-9]*", "description": ("Must begin with a letter and contain " "only alphanumeric characters.")} ] }, "DBUsername": { "type": "string", "description": "The WordPress database admin account username", "default": "admin", "hidden": "true", "constraints": [ {"length": {"min": 1, "max": 16}, "description": "Length must be between 1 and 16"}, {"allowed_pattern": "[a-zA-Z][a-zA-Z0-9]*", "description": ("Must begin with a letter and only " "contain alphanumeric characters")} 
] }, "DBPassword": { "type": "string", "description": "The WordPress database admin account password", "default": "admin", "hidden": "true", "constraints": [ {"length": {"min": 1, "max": 41}, "description": "Length must be between 1 and 41"}, {"allowed_pattern": "[a-zA-Z0-9]*", "description": ("Must contain only alphanumeric " "characters")} ] }, "DBRootPassword": { "type": "string", "description": "Root password for MySQL", "default": "admin", "hidden": "true", "constraints": [ {"length": {"min": 1, "max": 41}, "description": "Length must be between 1 and 41"}, {"allowed_pattern": "[a-zA-Z0-9]*", "description": ("Must contain only alphanumeric " "characters")} ] } } expected = { "KeyName": { "type": "string", "description": ("Name of an existing EC2 KeyPair to enable " "SSH access to the instances"), "required": True, 'update_allowed': True, 'immutable': False, }, "InstanceType": { "type": "string", "description": "WebServer EC2 instance type", "required": False, 'update_allowed': True, 'immutable': False, "constraints": [ {"allowed_values": ["t1.micro", "m1.small", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "c1.medium", "c1.xlarge", "cc1.4xlarge"], "description": "Must be a valid EC2 instance type."}, ] }, "LinuxDistribution": { "type": "string", "description": "Distribution of choice", "required": False, 'update_allowed': True, 'immutable': False, "constraints": [ {"allowed_values": ["F18", "F17", "U10", "RHEL-6.1", "RHEL-6.2", "RHEL-6.3"], "description": "Must be a valid Linux distribution"} ] }, "DBName": { "type": "string", "description": "The WordPress database name", "required": False, 'update_allowed': True, 'immutable': False, "constraints": [ {"length": {"min": 1, "max": 64}, "description": "Length must be between 1 and 64"}, {"allowed_pattern": "[a-zA-Z][a-zA-Z0-9]*", "description": ("Must begin with a letter and contain " "only alphanumeric characters.")}, ] }, "DBUsername": { "type": "string", "description": "The WordPress database admin account username", "required": False, 'update_allowed': True, 'immutable': False, "constraints": [ {"length": {"min": 1, "max": 16}, "description": "Length must be between 1 and 16"}, {"allowed_pattern": "[a-zA-Z][a-zA-Z0-9]*", "description": ("Must begin with a letter and only " "contain alphanumeric characters")}, ] }, "DBPassword": { "type": "string", "description": "The WordPress database admin account password", "required": False, 'update_allowed': True, 'immutable': False, "constraints": [ {"length": {"min": 1, "max": 41}, "description": "Length must be between 1 and 41"}, {"allowed_pattern": "[a-zA-Z0-9]*", "description": ("Must contain only alphanumeric " "characters")}, ] }, "DBRootPassword": { "type": "string", "description": "Root password for MySQL", "required": False, 'update_allowed': True, 'immutable': False, "constraints": [ {"length": {"min": 1, "max": 41}, "description": "Length must be between 1 and 41"}, {"allowed_pattern": "[a-zA-Z0-9]*", "description": ("Must contain only alphanumeric " "characters")}, ] } } params = dict((n, hot_param.HOTParamSchema.from_dict(n, s)) for n, s in params_snippet.items()) props_schemata = properties.Properties.schema_from_params(params) self.assertEqual(expected, dict((n, dict(s)) for n, s in props_schemata.items())) def test_compare_same(self): schema = {'foo': {'Type': 'Integer'}} props_a = properties.Properties(schema, {'foo': 1}) props_b = properties.Properties(schema, {'foo': 1}) self.assertFalse(props_a != props_b) def test_compare_different(self): schema = 
{'foo': {'Type': 'Integer'}} props_a = properties.Properties(schema, {'foo': 0}) props_b = properties.Properties(schema, {'foo': 1}) self.assertTrue(props_a != props_b) class PropertiesValidationTest(common.HeatTestCase): def test_required(self): schema = {'foo': {'Type': 'String', 'Required': True}} props = properties.Properties(schema, {'foo': 'bar'}) self.assertIsNone(props.validate()) def test_missing_required(self): schema = {'foo': {'Type': 'String', 'Required': True}} props = properties.Properties(schema, {}) self.assertRaises(exception.StackValidationFailed, props.validate) def test_missing_unimplemented(self): schema = {'foo': {'Type': 'String', 'Implemented': False}} props = properties.Properties(schema, {}) self.assertIsNone(props.validate()) def test_present_unimplemented(self): schema = {'foo': {'Type': 'String', 'Implemented': False}} props = properties.Properties(schema, {'foo': 'bar'}) self.assertRaises(exception.StackValidationFailed, props.validate) def test_missing(self): schema = {'foo': {'Type': 'String'}} props = properties.Properties(schema, {}) self.assertIsNone(props.validate()) def test_unknown_typo(self): schema = {'foo': {'Type': 'String'}} props = properties.Properties(schema, {'food': 42}) self.assertRaises(exception.StackValidationFailed, props.validate) def test_list_instead_string(self): schema = {'foo': {'Type': 'String'}} props = properties.Properties(schema, {'foo': ['foo', 'bar']}) ex = self.assertRaises(exception.StackValidationFailed, props.validate) self.assertIn('Property error: foo: Value must be a string', six.text_type(ex)) def test_dict_instead_string(self): schema = {'foo': {'Type': 'String'}} props = properties.Properties(schema, {'foo': {'foo': 'bar'}}) ex = self.assertRaises(exception.StackValidationFailed, props.validate) self.assertIn('Property error: foo: Value must be a string', six.text_type(ex)) def test_none_string(self): schema = {'foo': {'Type': 'String'}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_none_integer(self): schema = {'foo': {'Type': 'Integer'}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_none_number(self): schema = {'foo': {'Type': 'Number'}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_none_boolean(self): schema = {'foo': {'Type': 'Boolean'}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_none_map(self): schema = {'foo': {'Type': 'Map'}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_none_list(self): schema = {'foo': {'Type': 'List'}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_none_default_string(self): schema = {'foo': {'Type': 'String', 'Default': 'bar'}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_none_default_integer(self): schema = {'foo': {'Type': 'Integer', 'Default': 42}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_none_default_number(self): schema = {'foo': {'Type': 'Number', 'Default': 42.0}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_none_default_boolean(self): schema = {'foo': {'Type': 'Boolean', 'Default': True}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_none_default_map(self): 
schema = {'foo': {'Type': 'Map', 'Default': {'bar': 'baz'}}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_none_default_list(self): schema = {'foo': {'Type': 'List', 'Default': ['one', 'two']}} props = properties.Properties(schema, {'foo': None}) self.assertIsNone(props.validate()) def test_schema_to_template_nested_map_map_schema(self): nested_schema = {'Key': {'Type': 'String', 'Required': True}, 'Value': {'Type': 'String', 'Default': 'fewaf'}} schema = {'foo': {'Type': 'Map', 'Schema': nested_schema}} prop_expected = {'foo': {'Ref': 'foo'}} param_expected = {'foo': {'Type': 'Json'}} (parameters, props) = properties.Properties.schema_to_parameters_and_properties( schema) self.assertEqual(param_expected, parameters) self.assertEqual(prop_expected, props) def test_schema_to_template_nested_map_list_map_schema(self): key_schema = {'bar': {'Type': 'Number'}} nested_schema = {'Key': {'Type': 'Map', 'Schema': key_schema}, 'Value': {'Type': 'String', 'Required': True}} schema = {'foo': {'Type': 'List', 'Schema': {'Type': 'Map', 'Schema': nested_schema}}} prop_expected = {'foo': {'Fn::Split': [",", {'Ref': 'foo'}]}} param_expected = {'foo': {'Type': 'CommaDelimitedList'}} (parameters, props) = properties.Properties.schema_to_parameters_and_properties( schema) self.assertEqual(param_expected, parameters) self.assertEqual(prop_expected, props) def test_schema_object_to_template_nested_map_list_map_schema(self): key_schema = {'bar': properties.Schema(properties.Schema.NUMBER)} nested_schema = { 'Key': properties.Schema(properties.Schema.MAP, schema=key_schema), 'Value': properties.Schema(properties.Schema.STRING, required=True) } schema = { 'foo': properties.Schema(properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, schema=nested_schema)) } prop_expected = {'foo': {'Fn::Split': [",", {'Ref': 'foo'}]}} param_expected = {'foo': {'Type': 'CommaDelimitedList'}} (parameters, props) = properties.Properties.schema_to_parameters_and_properties( schema) self.assertEqual(param_expected, parameters) self.assertEqual(prop_expected, props) def test_schema_invalid_parameters_stripped(self): schema = {'foo': {'Type': 'String', 'Required': True, 'Implemented': True}} prop_expected = {'foo': {'Ref': 'foo'}} param_expected = {'foo': {'Type': 'String'}} (parameters, props) = properties.Properties.schema_to_parameters_and_properties( schema) self.assertEqual(param_expected, parameters) self.assertEqual(prop_expected, props) def test_schema_support_status(self): schema = { 'foo_sup': properties.Schema( properties.Schema.STRING, default='foo' ), 'bar_dep': properties.Schema( properties.Schema.STRING, default='bar', support_status=support.SupportStatus( support.DEPRECATED, 'Do not use this ever') ) } props = properties.Properties(schema, {}) self.assertEqual(support.SUPPORTED, props.props['foo_sup'].support_status().status) self.assertEqual(support.DEPRECATED, props.props['bar_dep'].support_status().status) self.assertEqual('Do not use this ever', props.props['bar_dep'].support_status().message) def test_nested_properties_schema_invalid_property_in_list(self): child_schema = {'Key': {'Type': 'String', 'Required': True}, 'Value': {'Type': 'Boolean', 'Default': True}} list_schema = {'Type': 'Map', 'Schema': child_schema} schema = {'foo': {'Type': 'List', 'Schema': list_schema}} valid_data = {'foo': [{'Key': 'Test'}]} props = properties.Properties(schema, valid_data) self.assertIsNone(props.validate()) invalid_data = {'foo': [{'Key': 'Test', 'bar': 
'baz'}]} props = properties.Properties(schema, invalid_data) ex = self.assertRaises(exception.StackValidationFailed, props.validate) self.assertEqual('Property error: foo[0]: Unknown Property bar', six.text_type(ex)) def test_nested_properties_schema_invalid_property_in_map(self): child_schema = {'Key': {'Type': 'String', 'Required': True}, 'Value': {'Type': 'Boolean', 'Default': True}} map_schema = {'boo': {'Type': 'Map', 'Schema': child_schema}} schema = {'foo': {'Type': 'Map', 'Schema': map_schema}} valid_data = {'foo': {'boo': {'Key': 'Test'}}} props = properties.Properties(schema, valid_data) self.assertIsNone(props.validate()) invalid_data = {'foo': {'boo': {'Key': 'Test', 'bar': 'baz'}}} props = properties.Properties(schema, invalid_data) ex = self.assertRaises(exception.StackValidationFailed, props.validate) self.assertEqual('Property error: foo.boo: Unknown Property bar', six.text_type(ex)) def test_more_nested_properties_schema_invalid_property_in_list(self): nested_child_schema = {'Key': {'Type': 'String', 'Required': True}} child_schema = {'doo': {'Type': 'Map', 'Schema': nested_child_schema}} list_schema = {'Type': 'Map', 'Schema': child_schema} schema = {'foo': {'Type': 'List', 'Schema': list_schema}} valid_data = {'foo': [{'doo': {'Key': 'Test'}}]} props = properties.Properties(schema, valid_data) self.assertIsNone(props.validate()) invalid_data = {'foo': [{'doo': {'Key': 'Test', 'bar': 'baz'}}]} props = properties.Properties(schema, invalid_data) ex = self.assertRaises(exception.StackValidationFailed, props.validate) self.assertEqual('Property error: foo[0].doo: Unknown Property bar', six.text_type(ex)) def test_more_nested_properties_schema_invalid_property_in_map(self): nested_child_schema = {'Key': {'Type': 'String', 'Required': True}} child_schema = {'doo': {'Type': 'Map', 'Schema': nested_child_schema}} map_schema = {'boo': {'Type': 'Map', 'Schema': child_schema}} schema = {'foo': {'Type': 'Map', 'Schema': map_schema}} valid_data = {'foo': {'boo': {'doo': {'Key': 'Test'}}}} props = properties.Properties(schema, valid_data) self.assertIsNone(props.validate()) invalid_data = {'foo': {'boo': {'doo': {'Key': 'Test', 'bar': 'baz'}}}} props = properties.Properties(schema, invalid_data) ex = self.assertRaises(exception.StackValidationFailed, props.validate) self.assertEqual('Property error: foo.boo.doo: Unknown Property bar', six.text_type(ex)) def test_schema_to_template_empty_schema(self): schema = {} (parameters, props) = properties.Properties.schema_to_parameters_and_properties( schema) self.assertEqual({}, parameters) self.assertEqual({}, props) def test_update_allowed_and_immutable_contradict(self): self.assertRaises(exception.InvalidSchemaError, properties.Schema, properties.Schema.STRING, update_allowed=True, immutable=True) heat-10.0.2/heat/tests/test_plugin_loader.py0000666000175000017500000000720713343562340021070 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
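# The tests below exercise heat.common.plugin_loader, which creates a
# synthetic subpackage for an extra plugin directory and then imports
# every module found on that path. A minimal usage sketch, assuming a
# purely hypothetical plugin directory /opt/heat/plugins:
#
#     from heat.common import plugin_loader
#     import heat.engine
#
#     # Mount the directory as the package heat.engine.plugins.
#     pkg = plugin_loader.create_subpackage('/opt/heat/plugins',
#                                           'heat.engine', 'plugins')
#     # Import everything in it, skipping modules that fail to import.
#     for module in plugin_loader.load_modules(pkg, ignore_error=True):
#         pass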
import pkgutil import sys import mock from heat.common import plugin_loader import heat.engine from heat.tests import common class PluginLoaderTest(common.HeatTestCase): def test_module_name(self): self.assertEqual('foo.bar.blarg.wibble', plugin_loader._module_name('foo.bar', 'blarg.wibble')) def test_create_subpackage_single_path(self): pkg_name = 'heat.engine.test_single_path' self.assertNotIn(pkg_name, sys.modules) pkg = plugin_loader.create_subpackage('/tmp', 'heat.engine', 'test_single_path') self.assertIn(pkg_name, sys.modules) self.assertEqual(sys.modules[pkg_name], pkg) self.assertEqual(['/tmp'], pkg.__path__) self.assertEqual(pkg_name, pkg.__name__) def test_create_subpackage_path_list(self): path_list = ['/tmp'] pkg_name = 'heat.engine.test_path_list' self.assertNotIn(pkg_name, sys.modules) pkg = plugin_loader.create_subpackage('/tmp', 'heat.engine', 'test_path_list') self.assertIn(pkg_name, sys.modules) self.assertEqual(sys.modules[pkg_name], pkg) self.assertEqual(path_list, pkg.__path__) self.assertNotIn(pkg.__path__, path_list) self.assertEqual(pkg_name, pkg.__name__) def test_import_module_existing(self): import heat.engine.service existing = heat.engine.service importer = pkgutil.ImpImporter(heat.engine.__path__[0]) loaded = plugin_loader._import_module(importer, 'heat.engine.service', heat.engine) self.assertIs(existing, loaded) def test_import_module_garbage(self): importer = pkgutil.ImpImporter(heat.engine.__path__[0]) self.assertIsNone(plugin_loader._import_module(importer, 'wibble', heat.engine)) @mock.patch.object(plugin_loader, "_import_module", mock.MagicMock()) @mock.patch('pkgutil.walk_packages') def test_load_modules_skip_test(self, mp): importer = pkgutil.ImpImporter(heat.engine.__path__[0]) mp.return_value = ((importer, "hola.foo", None), (importer, "hola.tests.test_foo", None)) loaded = plugin_loader.load_modules( heat.engine, ignore_error=True) self.assertEqual(1, len(list(loaded))) @mock.patch.object(plugin_loader, "_import_module", mock.MagicMock()) @mock.patch('pkgutil.walk_packages') def test_load_modules_skip_setup(self, mp): importer = pkgutil.ImpImporter(heat.engine.__path__[0]) mp.return_value = ((importer, "hola.foo", None), (importer, "hola.setup", None)) loaded = plugin_loader.load_modules( heat.engine, ignore_error=True) self.assertEqual(1, len(list(loaded))) heat-10.0.2/heat/tests/test_stack_update.py0000666000175000017500000027103313343562352020716 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
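# The update tests below all follow the same create-then-update pattern;
# a condensed sketch (tmpl and tmpl2 stand for any two parsed templates,
# and ctx for a dummy request context, as in the tests themselves):
#
#     stk = stack.Stack(ctx, 'update_test_stack', template.Template(tmpl))
#     stk.store()
#     stk.create()         # expect (Stack.CREATE, Stack.COMPLETE)
#     updated = stack.Stack(ctx, 'updated_stack', template.Template(tmpl2))
#     stk.update(updated)  # expect (Stack.UPDATE, Stack.COMPLETE), or
#                          # (Stack.UPDATE, Stack.FAILED) when a handler raises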
import copy import mock from heat.common import exception from heat.common import template_format from heat.db.sqlalchemy import api as db_api from heat.engine import environment from heat.engine import resource from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import service from heat.engine import stack from heat.engine import support from heat.engine import template from heat.objects import stack as stack_object from heat.rpc import api as rpc_api from heat.tests import common from heat.tests import generic_resource as generic_rsrc from heat.tests import utils empty_template = template_format.parse('''{ "HeatTemplateFormatVersion" : "2012-12-12", }''') class StackUpdateTest(common.HeatTestCase): def setUp(self): super(StackUpdateTest, self).setUp() self.tmpl = template.Template(copy.deepcopy(empty_template)) self.ctx = utils.dummy_context() def test_update_add(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() raw_template_id = self.stack.t.id self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'GenericResourceType'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertIn('BResource', self.stack) self.assertNotEqual(raw_template_id, self.stack.t.id) self.assertNotEqual(raw_template_id, self.stack.prev_raw_template_id) self.assertRaises(exception.NotFound, db_api.raw_template_get, self.ctx, raw_template_id) def test_update_remove(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertNotIn('BResource', self.stack) def test_update_different_type(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual('GenericResourceType', self.stack['AResource'].type()) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual('ResourceWithPropsType', self.stack['AResource'].type()) def test_update_description(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Description': 'ATemplate', 
'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Description': 'BTemplate', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual('BTemplate', self.stack.t[self.stack.t.DESCRIPTION]) def test_update_timeout(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Description': 'ATemplate', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), timeout_mins=60) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Description': 'ATemplate', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), timeout_mins=30) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(30, self.stack.timeout_mins) def test_update_disable_rollback(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Description': 'ATemplate', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=False) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Description': 'ATemplate', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=True) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertTrue(self.stack.disable_rollback) def test_update_tags(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Description': 'ATemplate', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), tags=['tag1', 'tag2']) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(['tag1', 'tag2'], self.stack.tags) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl), tags=['tag3', 'tag4']) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(['tag3', 'tag4'], self.stack.tags) def test_update_tags_remove_all(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Description': 'ATemplate', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), tags=['tag1', 'tag2']) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(['tag1', 'tag2'], self.stack.tags) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl)) self.stack.update(updated_stack) 
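# An update from a stack built with no tags should clear the stored tags.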
self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertIsNone(self.stack.tags) def test_update_modify_ok_replace(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'xyz'}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'xyz', self.stack['AResource']._stored_properties_data['Foo']) loaded_stack = stack.Stack.load(self.ctx, self.stack.id) stored_props = loaded_stack['AResource']._stored_properties_data self.assertEqual({'Foo': 'xyz'}, stored_props) def test_update_replace_restricted(self): env = environment.Environment() env_snippet = {u'resource_registry': { u'resources': { 'AResource': {'restricted_actions': 'update'} } } } env.load(env_snippet) tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl1 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'xyz'}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl1, env=env)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} env_snippet['resource_registry']['resources'][ 'AResource']['restricted_actions'] = 'replace' env.load(env_snippet) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2, env=env)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) def test_update_modify_ok_replace_int(self): # create # ======== tmpl = {'heat_template_version': '2013-05-23', 'resources': {'AResource': { 'type': 'ResWithComplexPropsAndAttrs', 'properties': {'an_int': 1}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() stack_id = self.stack.id self.stack.create() self.stack._persist_state() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) value1 = 2 prop_diff1 = {'an_int': value1} value2 = 1 prop_diff2 = {'an_int': value2} mock_upd = self.patchobject(generic_rsrc.ResWithComplexPropsAndAttrs, 'handle_update') # update 1 # ========== self.stack = stack.Stack.load(self.ctx, stack_id=stack_id) tmpl2 = {'heat_template_version': '2013-05-23', 'resources': {'AResource': { 'type': 'ResWithComplexPropsAndAttrs', 'properties': {'an_int': value1}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) self.stack.update(updated_stack) self.stack._persist_state() self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) mock_upd.assert_called_once_with(mock.ANY, mock.ANY, 
prop_diff1) # update 2 # ========== # reload the previous stack self.stack = stack.Stack.load(self.ctx, stack_id=stack_id) tmpl3 = {'heat_template_version': '2013-05-23', 'resources': {'AResource': { 'type': 'ResWithComplexPropsAndAttrs', 'properties': {'an_int': value2}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl3)) self.stack.update(updated_stack) self.stack._persist_state() self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) mock_upd.assert_called_with(mock.ANY, mock.ANY, prop_diff2) def test_update_modify_param_ok_replace(self): tmpl = { 'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': { 'foo': {'Type': 'String'} }, 'Resources': { 'AResource': { 'Type': 'ResourceWithPropsType', 'Properties': {'Foo': {'Ref': 'foo'}} } } } self.stack = stack.Stack( self.ctx, 'update_test_stack', template.Template( tmpl, env=environment.Environment({'foo': 'abc'}))) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) env2 = environment.Environment({'foo': 'xyz'}) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl, env=env2)) def check_and_raise(*args): self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) raise resource.UpdateReplace mock_upd = self.patchobject(generic_rsrc.ResourceWithProps, 'update_template_diff', side_effect=check_and_raise) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'xyz', self.stack['AResource']._stored_properties_data['Foo']) after = rsrc_defn.ResourceDefinition('AResource', 'ResourceWithPropsType', properties={'Foo': 'xyz'}) before = rsrc_defn.ResourceDefinition('AResource', 'ResourceWithPropsType', properties={'Foo': 'abc'}) mock_upd.assert_called_once_with(after, before) def test_update_replace_create_hook(self): tmpl = { 'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': { 'foo': {'Type': 'String'} }, 'Resources': { 'AResource': { 'Type': 'ResourceWithPropsType', 'Properties': {'Foo': {'Ref': 'foo'}} } } } self.stack = stack.Stack( self.ctx, 'update_test_stack', template.Template( tmpl, env=environment.Environment({'foo': 'abc'}))) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) env2 = environment.Environment({'foo': 'xyz'}) # Add a create hook on the resource env2.registry.load( {'resources': {'AResource': {'hooks': 'pre-create'}}}) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl, env=env2)) self.stack.update(updated_stack) # The hook is not called, and update succeeds properly self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'xyz', self.stack['AResource']._stored_properties_data['Foo']) def test_update_replace_delete_hook(self): tmpl = { 'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': { 'foo': {'Type': 'String'} }, 'Resources': { 'AResource': { 'Type': 'ResourceWithPropsType', 'Properties': {'Foo': {'Ref': 'foo'}} } } } env = environment.Environment({'foo': 'abc'}) env.registry.load( {'resources': {'AResource': {'hooks': 'pre-delete'}}}) self.stack = stack.Stack( self.ctx, 'update_test_stack', template.Template(tmpl, env=env)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) env2 = environment.Environment({'foo': 'xyz'}) env2.registry.load( {'resources': {'AResource': 
{'hooks': 'pre-delete'}}}) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl, env=env2)) self.stack.update(updated_stack) # The hook is not called, and update succeeds properly self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'xyz', self.stack['AResource']._stored_properties_data['Foo']) def test_update_replace_post_hook(self): tmpl = { 'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': { 'foo': {'Type': 'String'} }, 'Resources': { 'AResource': { 'Type': 'ResWithComplexPropsAndAttrs', 'Properties': {'an_int': {'Ref': 'foo'}} } } } self.stack = stack.Stack( self.ctx, 'update_test_stack', template.Template(tmpl, env=environment.Environment({'foo': 1}))) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) env2 = environment.Environment({'foo': 2}) env2.registry.load( {'resources': {'AResource': {'hooks': 'post-update'}}}) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl, env=env2)) mock_hook = self.patchobject(self.stack['AResource'], 'trigger_hook') self.stack.update(updated_stack) mock_hook.assert_called_once_with('post-update') self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( '2', self.stack['AResource']._stored_properties_data['an_int']) def test_update_modify_update_failed(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) res = self.stack['AResource'] res.update_allowed_properties = ('Foo',) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'xyz'}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) mock_upd = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_update', side_effect=Exception("Foo")) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) mock_upd.assert_called_once_with( rsrc_defn.ResourceDefinition('AResource', 'ResourceWithPropsType', properties={'Foo': 'xyz'}), mock.ANY, {'Foo': 'xyz'}) def test_update_modify_replace_failed_delete(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'xyz'}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) # make the update fail deleting the existing resource mock_del = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_delete', side_effect=Exception) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) mock_del.assert_called_once_with() # Unset here so destroy() is not stubbed for stack.delete cleanup self.m.UnsetStubs() def test_update_modify_replace_failed_create(self): tmpl = 
{'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'xyz'}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) # patch in a dummy handle_create making the replace fail creating mock_create = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', side_effect=Exception) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) mock_create.assert_called_once_with() def test_update_modify_replace_failed_create_and_delete_1(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'a_res'}}, 'BResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'b_res'}, 'DependsOn': 'AResource'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'xyz'}}, 'BResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'b_res'}, 'DependsOn': 'AResource'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) # patch in a dummy handle_create making the replace fail creating mock_create = self.patchobject(generic_rsrc.ResourceWithResourceID, 'handle_create', side_effect=Exception) mock_id = self.patchobject(generic_rsrc.ResourceWithResourceID, 'mox_resource_id', return_value=None) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) self.stack.delete() self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) mock_create.assert_called_once_with() # Three calls in list: first is an attempt to delete backup_stack # where create(xyz) has failed, so no resource_id passed; the 2nd # and the 3rd calls are invoked by resource BResource deletion # followed by AResource deletion. 
mock_id.assert_has_calls( [mock.call(None), mock.call('b_res'), mock.call('a_res')]) def test_update_modify_replace_failed_create_and_delete_2(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'a_res'}}, 'BResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'b_res'}, 'DependsOn': 'AResource'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'c_res'}}, 'BResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'xyz'}, 'DependsOn': 'AResource'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) # patch in a dummy handle_create making the replace fail creating mock_create = self.patchobject(generic_rsrc.ResourceWithResourceID, 'handle_create', side_effect=[None, Exception]) mock_id = self.patchobject(generic_rsrc.ResourceWithResourceID, 'mox_resource_id', return_value=None) self.stack.update(updated_stack) # set resource_id for AResource because handle_create() is overwritten # by the mock. self.stack.resources['AResource'].resource_id_set('c_res') self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) self.stack.delete() self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(2, mock_create.call_count) # Four calls in chain: 1st is an attempt to delete backup_stack where # the create(xyz) failed with no resource_id, the other three are # derived from resource dependencies. 
mock_id.assert_has_calls( [mock.call(None), mock.call('c_res'), mock.call('b_res'), mock.call('a_res')]) def test_update_modify_replace_create_in_progress_and_delete_1(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'a_res'}}, 'BResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'b_res'}, 'DependsOn': 'AResource'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'xyz'}}, 'BResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'b_res'}, 'DependsOn': 'AResource'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) # patch in a dummy handle_create making the replace fail creating mock_create = self.patchobject(generic_rsrc.ResourceWithResourceID, 'handle_create', side_effect=Exception) mock_id = self.patchobject(generic_rsrc.ResourceWithResourceID, 'mox_resource_id', return_value=None) self.stack.update(updated_stack) # Override stack status and resources status for emulating # IN_PROGRESS situation self.stack.state_set( stack.Stack.UPDATE, stack.Stack.IN_PROGRESS, None) self.stack.resources['AResource'].state_set( resource.Resource.CREATE, resource.Resource.IN_PROGRESS, None) self.stack.delete() self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) mock_create.assert_called_once_with() # Three calls in chain: 1st is an attempt to delete backup_stack where # the create(xyz) failed with no resource_id, the other two ordered by # resource dependencies. 
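        # Deleting a stack whose update was interrupted (e.g. because the
        # engine died mid-update, which the forced IN_PROGRESS states above
        # emulate) must still converge to DELETE_COMPLETE.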
mock_id.assert_has_calls( [mock.call(None), mock.call('b_res'), mock.call('a_res')]) def test_update_modify_replace_create_in_progress_and_delete_2(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'a_res'}}, 'BResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'b_res'}, 'DependsOn': 'AResource'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'c_res'}}, 'BResource': {'Type': 'ResourceWithResourceIDType', 'Properties': {'ID': 'xyz'}, 'DependsOn': 'AResource'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) # patch in a dummy handle_create making the replace fail creating mock_create = self.patchobject(generic_rsrc.ResourceWithResourceID, 'handle_create', side_effect=[None, Exception]) mock_id = self.patchobject(generic_rsrc.ResourceWithResourceID, 'mox_resource_id', return_value=None) self.stack.update(updated_stack) # set resource_id for AResource because handle_create() is mocked self.stack.resources['AResource'].resource_id_set('c_res') # Override stack status and resources status for emulating # IN_PROGRESS situation self.stack.state_set( stack.Stack.UPDATE, stack.Stack.IN_PROGRESS, None) self.stack.resources['BResource'].state_set( resource.Resource.CREATE, resource.Resource.IN_PROGRESS, None) self.stack.delete() self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(2, mock_create.call_count) # Four calls in chain: 1st is an attempt to delete backup_stack where # the create(xyz) failed with no resource_id, the other three are # derived from resource dependencies. 
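        # Compared to the previous test, AResource's replacement was
        # created successfully (c_res) before the interruption, which adds
        # a fourth deletion to the expected sequence below.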
mock_id.assert_has_calls( [mock.call(None), mock.call('c_res'), mock.call('b_res'), mock.call('a_res')]) def _update_force_cancel(self, state, disable_rollback=False, cancel_message=rpc_api.THREAD_CANCEL): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} old_tags = ['tag1', 'tag2'] self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), tags=old_tags) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(old_tags, self.stack.tags) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'MultiStepResourceType'}}} new_tags = ['tag3', 'tag4'] updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=disable_rollback, tags=new_tags) msgq_mock = mock.MagicMock() msgq_mock.get_nowait.return_value = cancel_message self.stack.update(updated_stack, msg_queue=msgq_mock) self.assertEqual(state, self.stack.state) if disable_rollback: self.assertEqual(new_tags, self.stack.tags) else: self.assertEqual(old_tags, self.stack.tags) msgq_mock.get_nowait.assert_called_once_with() def test_update_force_cancel_no_rollback(self): self._update_force_cancel( state=(stack.Stack.UPDATE, stack.Stack.FAILED), disable_rollback=True, cancel_message=rpc_api.THREAD_CANCEL) def test_update_force_cancel_rollback(self): self._update_force_cancel( state=(stack.Stack.ROLLBACK, stack.Stack.COMPLETE), disable_rollback=False, cancel_message=rpc_api.THREAD_CANCEL) def test_update_force_cancel_force_rollback(self): self._update_force_cancel( state=(stack.Stack.ROLLBACK, stack.Stack.COMPLETE), disable_rollback=False, cancel_message=rpc_api.THREAD_CANCEL_WITH_ROLLBACK) def test_update_add_signal(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'GenericResourceType'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) updater = scheduler.TaskRunner(self.stack.update_task, updated_stack) updater.start() while 'BResource' not in self.stack: self.assertFalse(updater.step()) self.assertEqual((stack.Stack.UPDATE, stack.Stack.IN_PROGRESS), self.stack.state) # Reload the stack from the DB and prove that it contains the new # resource already re_stack = stack.Stack.load(utils.dummy_context(), self.stack.id) self.assertIn('BResource', re_stack) updater.run_to_completion() self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertIn('BResource', self.stack) def test_update_add_failed_create(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'GenericResourceType'}}} updated_stack = stack.Stack(self.ctx, 
'updated_stack', template.Template(tmpl2)) # patch in a dummy handle_create making BResource fail creating mock_create = self.patchobject(generic_rsrc.GenericResource, 'handle_create', side_effect=Exception) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) self.assertIn('BResource', self.stack) # Reload the stack from the DB and prove that it contains the failed # resource (to ensure it will be deleted on stack delete) re_stack = stack.Stack.load(utils.dummy_context(), self.stack.id) self.assertIn('BResource', re_stack) mock_create.assert_called_once_with() def test_update_add_failed_create_rollback_failed(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'GenericResourceType'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=False) # patch handle_create/delete making BResource fail creation/deletion mock_create = self.patchobject(generic_rsrc.GenericResource, 'handle_create', side_effect=Exception) mock_delete = self.patchobject(generic_rsrc.GenericResource, 'handle_delete', side_effect=Exception) self.stack.update(updated_stack) self.assertEqual((stack.Stack.ROLLBACK, stack.Stack.FAILED), self.stack.state) self.assertIn('BResource', self.stack) # Reload the stack from the DB and prove that it contains the failed # resource (to ensure it will be deleted on stack delete) re_stack = stack.Stack.load(utils.dummy_context(), self.stack.id) self.assertIn('BResource', re_stack) mock_create.assert_called_once_with() mock_delete.assert_called_once_with() def test_update_rollback(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=False) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.stack._persist_state() tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'xyz'}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=False) # patch in a dummy handle_create making the replace fail when creating # the replacement rsrc mock_create = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', side_effect=Exception) with mock.patch.object( stack_object.Stack, 'update_by_id', wraps=stack_object.Stack.update_by_id) as mock_db_update: self.stack.update(updated_stack) self.assertEqual((stack.Stack.ROLLBACK, stack.Stack.COMPLETE), self.stack.state) self.eng = service.EngineService('a-host', 'a-topic') events = self.eng.list_events(self.ctx, self.stack.identifier()) self.assertEqual(11, len(events)) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual(5, mock_db_update.call_count) self.assertEqual('UPDATE', mock_db_update.call_args_list[0][0][2]['action']) self.assertEqual('IN_PROGRESS', mock_db_update.call_args_list[0][0][2]['status']) 
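        # call_args_list[n][0] holds the positional arguments of the n-th
        # update_by_id() call; element [2] is the values dict persisted to
        # the DB, so these assertions pin the exact action/status
        # transition sequence (UPDATE IN_PROGRESS, then
        # ROLLBACK IN_PROGRESS).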
self.assertEqual('ROLLBACK', mock_db_update.call_args_list[1][0][2]['action']) self.assertEqual('IN_PROGRESS', mock_db_update.call_args_list[1][0][2]['status']) mock_create.assert_called_once_with() def test_update_rollback_on_cancel_event(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=False) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'xyz'}}, }} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=False) msgq_mock = mock.MagicMock() msgq_mock.get_nowait.return_value = 'cancel_with_rollback' self.stack.update(updated_stack, msg_queue=msgq_mock) self.assertEqual((stack.Stack.ROLLBACK, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) msgq_mock.get_nowait.assert_called_once_with() def test_update_rollback_fail(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': {'AParam': {'Type': 'String'}}, 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}} env1 = environment.Environment({'parameters': {'AParam': 'abc'}}) self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl, env=env1), disable_rollback=False) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': {'BParam': {'Type': 'String'}}, 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'xyz'}}}} env2 = environment.Environment({'parameters': {'BParam': 'smelly'}}) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2, env=env2), disable_rollback=False) # patch in a dummy handle_create making the replace fail when creating # the replacement rsrc, and again on the second call (rollback) mock_create = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', side_effect=Exception) mock_delete = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_delete', side_effect=Exception) self.stack.update(updated_stack) self.assertEqual((stack.Stack.ROLLBACK, stack.Stack.FAILED), self.stack.state) mock_create.assert_called_once_with() mock_delete.assert_called_once_with() def test_update_rollback_add(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=False) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'GenericResourceType'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=False) # patch in a dummy handle_create making the replace fail when creating # the replacement rsrc, and succeed on the second call (rollback) mock_create = self.patchobject(generic_rsrc.GenericResource, 'handle_create', side_effect=Exception) self.stack.update(updated_stack) 
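        # The failed create of the newly added BResource should trigger an
        # automatic rollback to the original single-resource template.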
self.assertEqual((stack.Stack.ROLLBACK, stack.Stack.COMPLETE), self.stack.state) self.assertNotIn('BResource', self.stack) mock_create.assert_called_once_with() def test_update_rollback_remove(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'GenericResourceType'}, 'BResource': {'Type': 'ResourceWithPropsType'}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=False) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=False) # patch in a dummy delete making the destroy fail mock_create = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create') mock_delete = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_delete', side_effect=[Exception, None]) self.stack.update(updated_stack) self.assertEqual((stack.Stack.ROLLBACK, stack.Stack.COMPLETE), self.stack.state) self.assertIn('BResource', self.stack) mock_create.assert_called_once_with() self.assertEqual(2, mock_delete.call_count) # Unset here so delete() is not stubbed for stack.delete cleanup self.m.UnsetStubs() def test_update_rollback_replace(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'foo'}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=False) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'bar'}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=False) # patch in a dummy delete making the destroy fail mock_delete = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_delete', side_effect=[Exception, None, None]) self.stack.update(updated_stack) self.assertEqual((stack.Stack.ROLLBACK, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(3, mock_delete.call_count) # Unset here so delete() is not stubbed for stack.delete cleanup self.m.UnsetStubs() def test_update_replace_by_reference(self): """Test case for changes in dynamic attributes. Changes in dynamic attributes, due to other resources being updated, are not ignored and can cause dependent resources to be updated.
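        Here BResource's Foo property is a Ref to AResource, so replacing
        AResource (whose mocked reference ID becomes 'inst-007') must also
        cause BResource to be updated.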
""" tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'AResource'}}}}} tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'smelly'}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'AResource'}}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual( 'AResource', self.stack['BResource']._stored_properties_data['Foo']) mock_id = self.patchobject(generic_rsrc.ResourceWithProps, 'get_reference_id', return_value='inst-007') updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'smelly', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual( 'inst-007', self.stack['BResource']._stored_properties_data['Foo']) mock_id.assert_called_with() def test_update_with_new_resources_with_reference(self): """Check correct resolving of references in new resources. Check, that during update with new resources which one has reference on second, reference will be correct resolved. """ tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'CResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}}} tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'CResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}, 'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'smelly'}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'AResource'}}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['CResource']._stored_properties_data['Foo']) self.assertEqual(1, len(self.stack.resources)) mock_create = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', return_value=None) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'smelly', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual( 'AResource', self.stack['BResource']._stored_properties_data['Foo']) self.assertEqual(3, len(self.stack.resources)) mock_create.assert_called_with() def test_update_by_reference_and_rollback_1(self): """Check that rollback still works with dynamic metadata. This test fails the first instance. 
""" tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'abc'}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'AResource'}}}}} tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'smelly'}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'AResource'}}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=False) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual( 'AResource', self.stack['BResource']._stored_properties_data['Foo']) mock_id = self.patchobject(generic_rsrc.ResourceWithProps, 'get_reference_id', return_value='AResource') # mock to make the replace fail when creating the replacement resource mock_create = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', side_effect=Exception) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=False) self.stack.update(updated_stack) self.assertEqual((stack.Stack.ROLLBACK, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) mock_id.assert_called_with() mock_create.assert_called_once_with() def test_update_by_reference_and_rollback_2(self): """Check that rollback still works with dynamic metadata. This test fails the second instance. """ class ResourceTypeA(generic_rsrc.ResourceWithProps): count = 0 def handle_create(self): ResourceTypeA.count += 1 self.resource_id_set('%s%d' % (self.name, self.count)) resource._register_class('ResourceTypeA', ResourceTypeA) tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceTypeA', 'Properties': {'Foo': 'abc'}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'AResource'}}}}} tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceTypeA', 'Properties': {'Foo': 'smelly'}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'AResource'}}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=False) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual( 'AResource1', self.stack['BResource']._stored_properties_data['Foo']) # mock to make the replace fail when creating the second # replacement resource mock_create = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', side_effect=Exception) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=False) self.stack.update(updated_stack) self.assertEqual((stack.Stack.ROLLBACK, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual( 'AResource1', self.stack['BResource']._stored_properties_data['Foo']) mock_create.assert_called_once_with() def test_update_failure_recovery(self): """Check that rollback still works with dynamic metadata. This test fails the second instance. 
""" class ResourceTypeA(generic_rsrc.ResourceWithProps): count = 0 def handle_create(self): ResourceTypeA.count += 1 self.resource_id_set('%s%d' % (self.name, self.count)) def handle_delete(self): return super(ResourceTypeA, self).handle_delete() resource._register_class('ResourceTypeA', ResourceTypeA) tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceTypeA', 'Properties': {'Foo': 'abc'}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'AResource'}}}}} tmpl2 = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceTypeA', 'Properties': {'Foo': 'smelly'}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': { 'Foo': {'Ref': 'AResource'}}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual( 'AResource1', self.stack['BResource']._stored_properties_data['Foo']) # mock to make the replace fail when creating the second # replacement resource mock_create = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', side_effect=[Exception, None]) mock_delete = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_delete') mock_delete_A = self.patchobject(ResourceTypeA, 'handle_delete') updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=True) self.stack.update(updated_stack) mock_create.assert_called_once_with() self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) self.assertEqual( 'smelly', self.stack['AResource']._stored_properties_data['Foo']) self.stack = stack.Stack.load(self.ctx, self.stack.id) updated_stack2 = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2), disable_rollback=True) self.stack.update(updated_stack2) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'smelly', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual( 'AResource2', self.stack['BResource']._stored_properties_data['Foo']) self.assertEqual(2, mock_create.call_count) self.assertEqual(2, mock_delete.call_count) mock_delete_A.assert_called_once_with() def test_update_failure_recovery_new_param(self): """Check that rollback still works with dynamic metadata. This test fails the second instance. 
""" class ResourceTypeA(generic_rsrc.ResourceWithProps): count = 0 def handle_create(self): ResourceTypeA.count += 1 self.resource_id_set('%s%d' % (self.name, self.count)) def handle_delete(self): return super(ResourceTypeA, self).handle_delete() resource._register_class('ResourceTypeA', ResourceTypeA) tmpl = { 'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': { 'abc-param': {'Type': 'String'} }, 'Resources': { 'AResource': {'Type': 'ResourceTypeA', 'Properties': {'Foo': {'Ref': 'abc-param'}}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': {'Ref': 'AResource'}}} } } env1 = environment.Environment({'abc-param': 'abc'}) tmpl2 = { 'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': { 'smelly-param': {'Type': 'String'} }, 'Resources': { 'AResource': {'Type': 'ResourceTypeA', 'Properties': {'Foo': {'Ref': 'smelly-param'}}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': {'Ref': 'AResource'}}} } } env2 = environment.Environment({'smelly-param': 'smelly'}) self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl, env=env1), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual( 'AResource1', self.stack['BResource']._stored_properties_data['Foo']) mock_create = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', side_effect=[Exception, None]) mock_delete = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_delete') mock_delete_A = self.patchobject(ResourceTypeA, 'handle_delete') updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2, env=env2), disable_rollback=True) self.stack.update(updated_stack) # creation was a failure mock_create.assert_called_once_with() self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) self.assertEqual( 'smelly', self.stack['AResource']._stored_properties_data['Foo']) self.stack = stack.Stack.load(self.ctx, self.stack.id) updated_stack2 = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2, env=env2), disable_rollback=True) self.stack.update(updated_stack2) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.stack = stack.Stack.load(self.ctx, self.stack.id) a_props = self.stack['AResource']._stored_properties_data['Foo'] self.assertEqual('smelly', a_props) b_props = self.stack['BResource']._stored_properties_data['Foo'] self.assertEqual('AResource2', b_props) self.assertEqual(2, mock_delete.call_count) mock_delete_A.assert_called_once_with() self.assertEqual(2, mock_create.call_count) def test_update_failure_recovery_new_param_stack_list(self): """Check that stack-list is not broken if update fails in between. Also ensure that next update passes. 
""" class ResourceTypeA(generic_rsrc.ResourceWithProps): count = 0 def handle_create(self): ResourceTypeA.count += 1 self.resource_id_set('%s%d' % (self.name, self.count)) def handle_delete(self): return super(ResourceTypeA, self).handle_delete() resource._register_class('ResourceTypeA', ResourceTypeA) tmpl = { 'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': { 'abc-param': {'Type': 'String'} }, 'Resources': { 'AResource': {'Type': 'ResourceTypeA', 'Properties': {'Foo': {'Ref': 'abc-param'}}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': {'Ref': 'AResource'}}} } } env1 = environment.Environment({'abc-param': 'abc'}) tmpl2 = { 'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': { 'smelly-param': {'Type': 'String'} }, 'Resources': { 'AResource': {'Type': 'ResourceTypeA', 'Properties': {'Foo': {'Ref': 'smelly-param'}}}, 'BResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': {'Ref': 'AResource'}}} } } env2 = environment.Environment({'smelly-param': 'smelly'}) self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl, env=env1), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) self.assertEqual( 'AResource1', self.stack['BResource']._stored_properties_data['Foo']) mock_create = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', side_effect=[Exception, None]) mock_delete = self.patchobject(generic_rsrc.ResourceWithProps, 'handle_delete') mock_delete_A = self.patchobject(ResourceTypeA, 'handle_delete') updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2, env=env2), disable_rollback=True) self.stack.update(updated_stack) # Ensure UPDATE FAILED mock_create.assert_called_once_with() self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) self.assertEqual( 'smelly', self.stack['AResource']._stored_properties_data['Foo']) # check if heat stack-list works, wherein it tries to fetch template # parameters value from env self.eng = service.EngineService('a-host', 'a-topic') self.eng.list_stacks(self.ctx) # Check if next update works fine self.stack = stack.Stack.load(self.ctx, self.stack.id) updated_stack2 = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2, env=env2), disable_rollback=True) self.stack.update(updated_stack2) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.stack = stack.Stack.load(self.ctx, self.stack.id) a_props = self.stack['AResource']._stored_properties_data['Foo'] self.assertEqual('smelly', a_props) b_props = self.stack['BResource']._stored_properties_data['Foo'] self.assertEqual('AResource2', b_props) self.assertEqual(2, mock_delete.call_count) mock_delete_A.assert_called_once_with() self.assertEqual(2, mock_create.call_count) def test_update_replace_parameters(self): """Check that changes in static environment parameters are not ignored. Changes in static environment parameters are not ignored and can cause dependent resources to be updated. 
""" tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': {'AParam': {'Type': 'String'}}, 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': {'Ref': 'AParam'}}}}} env1 = environment.Environment({'parameters': {'AParam': 'abc'}}) env2 = environment.Environment({'parameters': {'AParam': 'smelly'}}) self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl, env=env1)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl, env=env2)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'smelly', self.stack['AResource']._stored_properties_data['Foo']) def test_update_deletion_policy(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'Properties': {'Foo': 'Bar'}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) resource_id = self.stack['AResource'].id new_tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithPropsType', 'DeletionPolicy': 'Retain', 'Properties': {'Foo': 'Bar'}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(new_tmpl)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(resource_id, self.stack['AResource'].id) def test_update_deletion_policy_no_handle_update(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithRequiredProps', 'Properties': {'Foo': 'Bar'}}}} self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) resource_id = self.stack['AResource'].id new_tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { 'AResource': {'Type': 'ResourceWithRequiredProps', 'DeletionPolicy': 'Retain', 'Properties': {'Foo': 'Bar'}}}} updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(new_tmpl)) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual(resource_id, self.stack['AResource'].id) def test_update_template_format_version(self): tmpl = { 'HeatTemplateFormatVersion': '2012-12-12', 'Parameters': { 'AParam': {'Type': 'String', 'Default': 'abc'}}, 'Resources': { 'AResource': { 'Type': 'ResourceWithPropsType', 'Properties': {'Foo': {'Ref': 'AParam'}} }, } } self.stack = stack.Stack(self.ctx, 'update_test_stack', template.Template(tmpl)) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'abc', self.stack['AResource']._stored_properties_data['Foo']) tmpl2 = { 'heat_template_version': '2013-05-23', 'parameters': { 'AParam': {'type': 'string', 'default': 'foo'}}, 'resources': { 'AResource': { 'type': 'ResourceWithPropsType', 'properties': {'Foo': {'get_param': 'AParam'}} } } } updated_stack = stack.Stack(self.ctx, 'updated_stack', template.Template(tmpl2)) self.stack.update(updated_stack) 
self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertEqual( 'foo', self.stack['AResource']._stored_properties_data['Foo']) def test_delete_stack_when_update_failed_twice(self): """Test deleting the stack after its update failed twice. Test checks the following scenario: 1. Create stack 2. Update stack (failed) 3. Update stack (failed) 4. Delete stack The test checks the behavior of the backup stack when an update fails. If some resources were not backed up correctly, the test will fail. """ tmpl_create = { 'heat_template_version': '2013-05-23', 'resources': { 'Ares': {'type': 'GenericResourceType'} } } # create a stack self.stack = stack.Stack(self.ctx, 'update_fail_test_stack', template.Template(tmpl_create), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) tmpl_update = { 'heat_template_version': '2013-05-23', 'parameters': {'aparam': {'type': 'number', 'default': 1}}, 'resources': { 'Ares': {'type': 'GenericResourceType'}, 'Bres': {'type': 'GenericResourceType'}, 'Cres': { 'type': 'ResourceWithPropsRefPropOnDelete', 'properties': { 'Foo': {'get_resource': 'Bres'}, 'FooInt': {'get_param': 'aparam'}, } } } } mock_create = self.patchobject( generic_rsrc.ResourceWithProps, 'handle_create', side_effect=[Exception, Exception]) updated_stack_first = stack.Stack(self.ctx, 'update_fail_test_stack', template.Template(tmpl_update)) self.stack.update(updated_stack_first) self.stack.resources['Cres'].resource_id_set('c_res') self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) # try to update the stack again updated_stack_second = stack.Stack(self.ctx, 'update_fail_test_stack', template.Template(tmpl_update)) self.stack.update(updated_stack_second) self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) self.assertEqual(mock_create.call_count, 2) # delete the failed stack self.stack.delete() self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) def test_backup_stack_synchronized_after_update(self): """Test that the backup stack is updated correctly during a stack update. Test checks the following scenario: 1. Create stack 2. Update stack (failed - so the backup should not be deleted) 3. Update stack (failed - so the backup from step 2 should be updated) The test checks that the backup stack is synchronized with the main stack.
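        In particular, the in-place property change to Bres in the second
        update must be reflected in the backup stack's copy of Bres, while
        resource data stays only on the main stack's copy.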
""" # create a stack tmpl_create = { 'heat_template_version': '2013-05-23', 'resources': { 'Ares': {'type': 'GenericResourceType'} } } self.stack = stack.Stack(self.ctx, 'test_update_stack_backup', template.Template(tmpl_create), disable_rollback=True) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) # try to update a stack with a new resource that should be backed up tmpl_update = { 'heat_template_version': '2013-05-23', 'resources': { 'Ares': {'type': 'GenericResourceType'}, 'Bres': { 'type': 'ResWithComplexPropsAndAttrs', 'properties': { 'an_int': 0, } }, 'Cres': { 'type': 'ResourceWithPropsType', 'properties': { 'Foo': {'get_resource': 'Bres'}, } } } } self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', side_effect=[Exception, Exception]) stack_with_new_resource = stack.Stack( self.ctx, 'test_update_stack_backup', template.Template(tmpl_update)) self.stack.update(stack_with_new_resource) self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) # assert that backup stack has been updated correctly self.assertIn('Bres', self.stack._backup_stack()) # set data for Bres in main stack self.stack['Bres'].data_set('test', '42') # update the stack with resource that updated in-place tmpl_update['resources']['Bres']['properties']['an_int'] = 1 updated_stack_second = stack.Stack(self.ctx, 'test_update_stack_backup', template.Template(tmpl_update)) self.stack.update(updated_stack_second) self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), self.stack.state) # assert that resource in backup stack also has been updated backup = self.stack._backup_stack() self.assertEqual(1, backup['Bres'].properties['an_int']) # check, that updated Bres in new stack has copied data. # Bres in backup stack should have empty data. self.assertEqual({}, backup['Bres'].data()) self.assertEqual({'test': '42'}, self.stack['Bres'].data()) def test_update_failed_with_new_condition(self): create_a_res = {'equals': [{'get_param': 'create_a_res'}, 'yes']} create_b_res = {'equals': [{'get_param': 'create_b_res'}, 'yes']} tmpl = { 'heat_template_version': '2016-10-14', 'parameters': { 'create_a_res': { 'type': 'string', 'default': 'yes' } }, 'conditions': { 'create_a': create_a_res }, 'resources': { 'Ares': {'type': 'GenericResourceType', 'condition': 'create_a'} } } test_stack = stack.Stack(self.ctx, 'test_update_failed_with_new_condition', template.Template(tmpl)) test_stack.store() test_stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), test_stack.state) self.assertIn('Ares', test_stack) update_tmpl = copy.deepcopy(tmpl) update_tmpl['conditions']['create_b'] = create_b_res update_tmpl['parameters']['create_b_res'] = { 'type': 'string', 'default': 'yes' } update_tmpl['resources']['Bres'] = { 'type': 'OS::Heat::TestResource', 'properties': { 'value': 'res_b', 'fail': True } } stack_with_new_resource = stack.Stack( self.ctx, 'test_update_with_new_res_cd', template.Template(update_tmpl)) test_stack.update(stack_with_new_resource) self.assertEqual((stack.Stack.UPDATE, stack.Stack.FAILED), test_stack.state) self.assertIn('Bres', test_stack) self.assertEqual((resource.Resource.CREATE, resource.Resource.FAILED), test_stack['Bres'].state) self.assertIn('create_b', test_stack.t.t['conditions']) self.assertIn('create_b_res', test_stack.t.t['parameters']) def test_stack_update_with_deprecated_resource(self): """Test with update deprecated resource to substitute. Test checks the following scenario: 1. 
Create stack with deprecated resource. 2. Update stack with substitute resource. The test checks that a deprecated resource can be updated to its substitute resource during a stack update. """ class ResourceTypeB(generic_rsrc.GenericResource): count_b = 0 def update(self, after, before=None, prev_resource=None): ResourceTypeB.count_b += 1 resource._register_class('ResourceTypeB', ResourceTypeB) class ResourceTypeA(ResourceTypeB): support_status = support.SupportStatus( status=support.DEPRECATED, message='deprecation_msg', version='2014.2', substitute_class=ResourceTypeB) count_a = 0 def update(self, after, before=None, prev_resource=None): ResourceTypeA.count_a += 1 resource._register_class('ResourceTypeA', ResourceTypeA) TMPL_WITH_DEPRECATED_RES = """ heat_template_version: 2015-10-15 resources: AResource: type: ResourceTypeA """ TMPL_WITH_REPLACE_RES = """ heat_template_version: 2015-10-15 resources: AResource: type: ResourceTypeB """ t = template_format.parse(TMPL_WITH_DEPRECATED_RES) templ = template.Template(t) self.stack = stack.Stack(self.ctx, 'update_test_stack', templ) self.stack.store() self.stack.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) t = template_format.parse(TMPL_WITH_REPLACE_RES) tmpl2 = template.Template(t) updated_stack = stack.Stack(self.ctx, 'updated_stack', tmpl2) self.stack.update(updated_stack) self.assertEqual((stack.Stack.UPDATE, stack.Stack.COMPLETE), self.stack.state) self.assertIn('AResource', self.stack) self.assertEqual(1, ResourceTypeB.count_b) self.assertEqual(0, ResourceTypeA.count_a) heat-10.0.2/heat/tests/test_common_serializers.py0000666000175000017500000001446513343562340022154 0ustar zuulzuul00000000000000# # Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
import collections import datetime from lxml import etree from oslo_serialization import jsonutils as json import six import webob from heat.common import serializers from heat.tests import common class JSONResponseSerializerTest(common.HeatTestCase): def test_to_json(self): fixture = {"key": "value"} expected = '{"key": "value"}' actual = serializers.JSONResponseSerializer().to_json(fixture) self.assertEqual(expected, actual) def test_to_json_with_date_format_value(self): fixture = {"date": datetime.datetime(1, 3, 8, 2)} expected = '{"date": "0001-03-08T02:00:00"}' actual = serializers.JSONResponseSerializer().to_json(fixture) self.assertEqual(expected, actual) def test_to_json_with_more_deep_format(self): fixture = collections.OrderedDict([ ('is_public', True), ('name', [collections.OrderedDict([ ('name1', 'test'), ])]) ]) expected = '{"is_public": true, "name": [{"name1": "test"}]}' actual = serializers.JSONResponseSerializer().to_json(fixture) self.assertEqual(json.loads(expected), json.loads(actual)) def test_to_json_with_objects(self): fixture = collections.OrderedDict([ ('is_public', True), ('value', complex(1, 2)), ]) expected = '{"is_public": true, "value": "(1+2j)"}' actual = serializers.JSONResponseSerializer().to_json(fixture) self.assertEqual(json.loads(expected), json.loads(actual)) def test_default(self): fixture = {"key": "value"} response = webob.Response() serializers.JSONResponseSerializer().default(response, fixture) self.assertEqual(200, response.status_int) content_types = list(filter(lambda h: h[0] == 'Content-Type', response.headerlist)) self.assertEqual(1, len(content_types)) self.assertEqual('application/json', response.content_type) self.assertEqual(b'{"key": "value"}', response.body) class XMLResponseSerializerTest(common.HeatTestCase): def _recursive_dict(self, element): return element.tag, dict( map(self._recursive_dict, element)) or element.text def test_to_xml(self): fixture = {"key": "value"} expected = b'<key>value</key>' actual = serializers.XMLResponseSerializer().to_xml(fixture) self.assertEqual(expected, actual) def test_to_xml_with_date_format_value(self): fixture = {"date": datetime.datetime(1, 3, 8, 2)} expected = b'<date>0001-03-08 02:00:00</date>' actual = serializers.XMLResponseSerializer().to_xml(fixture) self.assertEqual(expected, actual) def test_to_xml_with_list(self): fixture = {"name": ["1", "2"]} expected = b'<name><member>1</member><member>2</member></name>' actual = serializers.XMLResponseSerializer().to_xml(fixture) actual_xml_tree = etree.XML(actual) actual_xml_dict = self._recursive_dict(actual_xml_tree) expected_xml_tree = etree.XML(expected) expected_xml_dict = self._recursive_dict(expected_xml_tree) self.assertEqual(expected_xml_dict, actual_xml_dict) def test_to_xml_with_more_deep_format(self): # Note we expect tree traversal from one root key, which is compatible # with the AWS format responses we need to serialize fixture = collections.OrderedDict([ ('aresponse', collections.OrderedDict([ ('is_public', True), ('name', [collections.OrderedDict([ ('name1', 'test'), ])]) ])) ]) expected = six.b('<aresponse><is_public>True</is_public>' '<name><member><name1>test</name1></member></name>' '</aresponse>') actual = serializers.XMLResponseSerializer().to_xml(fixture) actual_xml_tree = etree.XML(actual) actual_xml_dict = self._recursive_dict(actual_xml_tree) expected_xml_tree = etree.XML(expected) expected_xml_dict = self._recursive_dict(expected_xml_tree) self.assertEqual(expected_xml_dict, actual_xml_dict) def test_to_xml_with_json_only_keys(self): # Certain keys are excluded from serialization because CFN # format demands a json blob in the XML body fixture = collections.OrderedDict([
('aresponse', collections.OrderedDict([ ('is_public', True), ('TemplateBody', {"name1": "test"}), ('Metadata', {"name2": "test2"}), ])) ]) expected = six.b('<aresponse><is_public>True</is_public>' '<TemplateBody>{"name1": "test"}</TemplateBody>' '<Metadata>{"name2": "test2"}</Metadata>' '</aresponse>') actual = serializers.XMLResponseSerializer().to_xml(fixture) actual_xml_tree = etree.XML(actual) actual_xml_dict = self._recursive_dict(actual_xml_tree) expected_xml_tree = etree.XML(expected) expected_xml_dict = self._recursive_dict(expected_xml_tree) self.assertEqual(expected_xml_dict, actual_xml_dict) def test_default(self): fixture = {"key": "value"} response = webob.Response() serializers.XMLResponseSerializer().default(response, fixture) self.assertEqual(200, response.status_int) content_types = list(filter(lambda h: h[0] == 'Content-Type', response.headerlist)) self.assertEqual(1, len(content_types)) self.assertEqual('application/xml', response.content_type) self.assertEqual(b'<key>value</key>', response.body) heat-10.0.2/heat/tests/test_convg_stack.py0000666000175000017500000012235213343562352020547 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_config import cfg from heat.common import template_format from heat.engine import environment from heat.engine import resource as res from heat.engine import stack as parser from heat.engine import template as templatem from heat.objects import raw_template as raw_template_object from heat.objects import resource as resource_objects from heat.objects import snapshot as snapshot_objects from heat.objects import stack as stack_object from heat.objects import sync_point as sync_point_object from heat.rpc import worker_client from heat.tests import common from heat.tests.engine import tools from heat.tests import utils @mock.patch.object(worker_client.WorkerClient, 'check_resource') class StackConvergenceCreateUpdateDeleteTest(common.HeatTestCase): def setUp(self): super(StackConvergenceCreateUpdateDeleteTest, self).setUp() cfg.CONF.set_override('convergence_engine', True) self.stack = None @mock.patch.object(parser.Stack, 'mark_complete') def test_converge_empty_template(self, mock_mc, mock_cr): empty_tmpl = templatem.Template.create_empty_template() stack = parser.Stack(utils.dummy_context(), 'empty_tmpl_stack', empty_tmpl, convergence=True) stack.store() stack.thread_group_mgr = tools.DummyThreadGroupManager() stack.converge_stack(template=stack.t, action=stack.CREATE) self.assertFalse(mock_cr.called) mock_mc.assert_called_once_with() def test_conv_wordpress_single_instance_stack_create(self, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), convergence=True) stack.store() # usually, stack is stored before converge is called stack.converge_stack(template=stack.t, action=stack.CREATE) self.assertIsNone(stack.ext_rsrcs_db) self.assertEqual([((1, True), None)], list(stack.convergence_dependencies._graph.edges())) stack_db = stack_object.Stack.get_by_id(stack.context, stack.id) self.assertIsNotNone(stack_db.current_traversal) self.assertIsNotNone(stack_db.raw_template_id)
self.assertIsNone(stack_db.prev_raw_template_id) self.assertTrue(stack_db.convergence) self.assertEqual({'edges': [[[1, True], None]]}, stack_db.current_deps) leaves = set(stack.convergence_dependencies.leaves()) expected_calls = [] for rsrc_id, is_update in sorted(leaves, key=lambda n: n.is_update): expected_calls.append( mock.call.worker_client.WorkerClient.check_resource( stack.context, rsrc_id, stack.current_traversal, {'input_data': {}}, is_update, None, False)) self.assertEqual(expected_calls, mock_cr.mock_calls) def test_conv_string_five_instance_stack_create(self, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) stack.store() stack.converge_stack(template=stack.t, action=stack.CREATE) self.assertIsNone(stack.ext_rsrcs_db) self.assertEqual([((1, True), (3, True)), ((2, True), (3, True)), ((3, True), (4, True)), ((3, True), (5, True))], sorted(stack.convergence_dependencies._graph.edges())) stack_db = stack_object.Stack.get_by_id(stack.context, stack.id) self.assertIsNotNone(stack_db.current_traversal) self.assertIsNotNone(stack_db.raw_template_id) self.assertIsNone(stack_db.prev_raw_template_id) self.assertTrue(stack_db.convergence) self.assertEqual(sorted([[[3, True], [5, True]], # C, A [[3, True], [4, True]], # C, B [[1, True], [3, True]], # E, C [[2, True], [3, True]]]), # D, C sorted(stack_db.current_deps['edges'])) # check if needed_by is stored properly expected_needed_by = {'A': [3], 'B': [3], 'C': [1, 2], 'D': [], 'E': []} rsrcs_db = resource_objects.Resource.get_all_by_stack( stack_db._context, stack_db.id ) self.assertEqual(5, len(rsrcs_db)) for rsrc_name, rsrc_obj in rsrcs_db.items(): self.assertEqual(sorted(expected_needed_by[rsrc_name]), sorted(rsrc_obj.needed_by)) self.assertEqual(stack_db.raw_template_id, rsrc_obj.current_template_id) # check if sync_points were stored for entity_id in [5, 4, 3, 2, 1, stack_db.id]: sync_point = sync_point_object.SyncPoint.get_by_key( stack_db._context, entity_id, stack_db.current_traversal, True ) self.assertIsNotNone(sync_point) self.assertEqual(stack_db.id, sync_point.stack_id) leaves = set(stack.convergence_dependencies.leaves()) expected_calls = [] for rsrc_id, is_update in sorted(leaves, key=lambda n: n.is_update): expected_calls.append( mock.call.worker_client.WorkerClient.check_resource( stack.context, rsrc_id, stack.current_traversal, {'input_data': {}}, is_update, None, False)) self.assertEqual(expected_calls, mock_cr.mock_calls) def _mock_convg_db_update_requires(self): """Updates requires column of resources. Required for testing the generation of convergence dependency graph on an update. 
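        # On an initial create there is no previous template to roll back
        # to, so raw_template_id is set but prev_raw_template_id stays
        # None.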
""" requires = dict() for rsrc_id, is_update in self.stack.convergence_dependencies: if is_update: reqs = self.stack.convergence_dependencies.requires(( rsrc_id, is_update)) requires[rsrc_id] = list({id for id, is_update in reqs}) rsrcs_db = resource_objects.Resource.get_all_active_by_stack( self.stack.context, self.stack.id) for rsrc_id, rsrc in rsrcs_db.items(): if rsrc.id in requires: rsrcs_db[rsrc_id].requires = requires[rsrc.id] return rsrcs_db def test_conv_string_five_instance_stack_update(self, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) stack.store() # create stack stack.converge_stack(template=stack.t, action=stack.CREATE) curr_stack_db = stack_object.Stack.get_by_id(stack.context, stack.id) curr_stack = parser.Stack.load(curr_stack_db._context, stack=curr_stack_db) # update stack with new template t2 = template_format.parse(tools.string_template_five_update) template2 = templatem.Template( t2, env=environment.Environment({'KeyName2': 'test2'})) # on our previous create_complete, worker would have updated the # rsrc.requires. Mock the same behavior here. self.stack = stack with mock.patch.object( parser.Stack, 'db_active_resources_get', side_effect=self._mock_convg_db_update_requires): curr_stack.thread_group_mgr = tools.DummyThreadGroupManager() curr_stack.converge_stack(template=template2, action=stack.UPDATE) self.assertIsNotNone(curr_stack.ext_rsrcs_db) deps = curr_stack.convergence_dependencies self.assertEqual([((3, False), (1, False)), ((3, False), (2, False)), ((4, False), (3, False)), ((4, False), (4, True)), ((5, False), (3, False)), ((5, False), (5, True)), ((6, True), (8, True)), ((7, True), (8, True)), ((8, True), (4, True)), ((8, True), (5, True))], sorted(deps._graph.edges())) stack_db = stack_object.Stack.get_by_id(curr_stack.context, curr_stack.id) self.assertIsNotNone(stack_db.raw_template_id) self.assertIsNotNone(stack_db.current_traversal) self.assertIsNotNone(stack_db.prev_raw_template_id) self.assertTrue(stack_db.convergence) self.assertEqual(sorted([[[7, True], [8, True]], [[8, True], [5, True]], [[8, True], [4, True]], [[6, True], [8, True]], [[3, False], [2, False]], [[3, False], [1, False]], [[5, False], [3, False]], [[5, False], [5, True]], [[4, False], [3, False]], [[4, False], [4, True]]]), sorted(stack_db.current_deps['edges'])) ''' To visualize: G(7, True) H(6, True) \ / \ / B(4, False) A(5, False) \ / / \ / / \ / / / F(8, True) / / \ / / \ / / C(3, False) / \ / / \ / / \ / / / \ / / \ B(4, True) A(5, True) D(2, False) E(1, False) Leaves are at the bottom ''' # check if needed_by are stored properly # For A & B: # needed_by=C, F expected_needed_by = {'A': [3, 8], 'B': [3, 8], 'C': [1, 2], 'D': [], 'E': [], 'F': [6, 7], 'G': [], 'H': []} rsrcs_db = resource_objects.Resource.get_all_by_stack( stack_db._context, stack_db.id ) self.assertEqual(8, len(rsrcs_db)) for rsrc_name, rsrc_obj in rsrcs_db.items(): self.assertEqual(sorted(expected_needed_by[rsrc_name]), sorted(rsrc_obj.needed_by)) # check if sync_points are created for forward traversal # [F, H, G, A, B, Stack] for entity_id in [8, 7, 6, 5, 4, stack_db.id]: sync_point = sync_point_object.SyncPoint.get_by_key( stack_db._context, entity_id, stack_db.current_traversal, True ) self.assertIsNotNone(sync_point) self.assertEqual(stack_db.id, sync_point.stack_id) # check if sync_points are created for cleanup traversal # [A, B, C, D, E] for entity_id in [5, 4, 3, 2, 1]: sync_point = sync_point_object.SyncPoint.get_by_key( 
stack_db._context, entity_id, stack_db.current_traversal, False ) self.assertIsNotNone(sync_point) self.assertEqual(stack_db.id, sync_point.stack_id) leaves = set(stack.convergence_dependencies.leaves()) expected_calls = [] for rsrc_id, is_update in sorted(leaves, key=lambda n: n.is_update): expected_calls.append( mock.call.worker_client.WorkerClient.check_resource( stack.context, rsrc_id, stack.current_traversal, {'input_data': {}}, is_update, None, False)) leaves = set(curr_stack.convergence_dependencies.leaves()) for rsrc_id, is_update in sorted(leaves, key=lambda n: n.is_update): expected_calls.append( mock.call.worker_client.WorkerClient.check_resource( curr_stack.context, rsrc_id, curr_stack.current_traversal, {'input_data': {}}, is_update, None, False)) self.assertEqual(expected_calls, mock_cr.mock_calls) def test_conv_empty_template_stack_update_delete(self, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) stack.store() # create stack stack.converge_stack(template=stack.t, action=stack.CREATE) # update stack with new template template2 = templatem.Template.create_empty_template( version=stack.t.version) curr_stack_db = stack_object.Stack.get_by_id(stack.context, stack.id) curr_stack = parser.Stack.load(curr_stack_db._context, stack=curr_stack_db) # on our previous create_complete, worker would have updated the # rsrc.requires. Mock the same behavior here. self.stack = stack with mock.patch.object( parser.Stack, 'db_active_resources_get', side_effect=self._mock_convg_db_update_requires): curr_stack.thread_group_mgr = tools.DummyThreadGroupManager() curr_stack.converge_stack(template=template2, action=stack.DELETE) self.assertIsNotNone(curr_stack.ext_rsrcs_db) deps = curr_stack.convergence_dependencies self.assertEqual([((3, False), (1, False)), ((3, False), (2, False)), ((4, False), (3, False)), ((5, False), (3, False))], sorted(deps._graph.edges())) stack_db = stack_object.Stack.get_by_id(curr_stack.context, curr_stack.id) self.assertIsNotNone(stack_db.current_traversal) self.assertIsNotNone(stack_db.prev_raw_template_id) self.assertEqual(sorted([[[3, False], [2, False]], [[3, False], [1, False]], [[5, False], [3, False]], [[4, False], [3, False]]]), sorted(stack_db.current_deps['edges'])) expected_needed_by = {'A': [3], 'B': [3], 'C': [1, 2], 'D': [], 'E': []} rsrcs_db = resource_objects.Resource.get_all_by_stack( stack_db._context, stack_db.id ) self.assertEqual(5, len(rsrcs_db)) for rsrc_name, rsrc_obj in rsrcs_db.items(): self.assertEqual(sorted(expected_needed_by[rsrc_name]), sorted(rsrc_obj.needed_by)) # check if sync_points are created for cleanup traversal # [A, B, C, D, E, Stack] for entity_id in [5, 4, 3, 2, 1, stack_db.id]: is_update = False if entity_id == stack_db.id: is_update = True sync_point = sync_point_object.SyncPoint.get_by_key( stack_db._context, entity_id, stack_db.current_traversal, is_update) self.assertIsNotNone(sync_point, 'entity %s' % entity_id) self.assertEqual(stack_db.id, sync_point.stack_id) leaves = set(stack.convergence_dependencies.leaves()) expected_calls = [] for rsrc_id, is_update in sorted(leaves, key=lambda n: n.is_update): expected_calls.append( mock.call.worker_client.WorkerClient.check_resource( stack.context, rsrc_id, stack.current_traversal, {'input_data': {}}, is_update, None, False)) leaves = set(curr_stack.convergence_dependencies.leaves()) for rsrc_id, is_update in sorted(leaves, key=lambda n: n.is_update): expected_calls.append( 
mock.call.worker_client.WorkerClient.check_resource( curr_stack.context, rsrc_id, curr_stack.current_traversal, {'input_data': {}}, is_update, None, False)) self.assertEqual(expected_calls, mock_cr.mock_calls) def test_mark_complete_purges_db(self, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) stack.store() stack.purge_db = mock.Mock() stack.mark_complete() self.assertTrue(stack.purge_db.called) def test_state_set_sets_empty_curr_trvsl_for_failed_stack( self, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) stack.status = stack.FAILED stack.store() stack.purge_db() self.assertEqual('', stack.current_traversal) @mock.patch.object(raw_template_object.RawTemplate, 'delete') def test_purge_db_deletes_previous_template(self, mock_tmpl_delete, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) stack.prev_raw_template_id = 10 stack.purge_db() self.assertTrue(mock_tmpl_delete.called) @mock.patch.object(parser.Stack, '_delete_credentials') @mock.patch.object(stack_object.Stack, 'delete') def test_purge_db_deletes_creds(self, mock_delete_stack, mock_creds_delete, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) reason = 'stack delete complete' mock_creds_delete.return_value = (stack.COMPLETE, reason) stack.state_set(stack.DELETE, stack.COMPLETE, reason) stack.purge_db() self.assertTrue(mock_creds_delete.called) self.assertTrue(mock_delete_stack.called) @mock.patch.object(parser.Stack, '_delete_credentials') @mock.patch.object(stack_object.Stack, 'delete') def test_purge_db_deletes_creds_failed(self, mock_delete_stack, mock_creds_delete, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) reason = 'stack delete complete' failed_reason = 'Error deleting trust' mock_creds_delete.return_value = (stack.FAILED, failed_reason) stack.state_set(stack.DELETE, stack.COMPLETE, reason) stack.purge_db() self.assertTrue(mock_creds_delete.called) self.assertFalse(mock_delete_stack.called) self.assertEqual((stack.DELETE, stack.FAILED), stack.state) @mock.patch.object(raw_template_object.RawTemplate, 'delete') def test_purge_db_does_not_delete_previous_template_when_stack_fails( self, mock_tmpl_delete, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) stack.status = stack.FAILED stack.purge_db() self.assertFalse(mock_tmpl_delete.called) def test_purge_db_deletes_sync_points(self, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) stack.store() stack.purge_db() rows = sync_point_object.SyncPoint.delete_all_by_stack_and_traversal( stack.context, stack.id, stack.current_traversal) self.assertEqual(0, rows) @mock.patch.object(stack_object.Stack, 'delete') def test_purge_db_deletes_stack_for_deleted_stack(self, mock_stack_delete, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) stack.store() stack.state_set(stack.DELETE, stack.COMPLETE, 'test reason') stack.purge_db() self.assertTrue(mock_stack_delete.called) @mock.patch.object(resource_objects.Resource, 'purge_deleted') def test_purge_db_calls_rsrc_purge_deleted(self, 
mock_rsrc_purge_delete, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) stack.store() stack.purge_db() self.assertTrue(mock_rsrc_purge_delete.called) def test_get_best_existing_db_resource(self, mock_cr): stack = tools.get_stack('test_stack', utils.dummy_context(), template=tools.string_template_five, convergence=True) stack.store() stack.prev_raw_template_id = 2 stack.t.id = 3 dummy_res = stack.resources['A'] a_res_2 = res.Resource('A', dummy_res.t, stack) a_res_2.current_template_id = 2 a_res_2.id = 2 a_res_3 = res.Resource('A', dummy_res.t, stack) a_res_3.current_template_id = 3 a_res_3.id = 3 a_res_1 = res.Resource('A', dummy_res.t, stack) a_res_1.current_template_id = 1 a_res_1.id = 1 existing_res = {2: a_res_2, 3: a_res_3, 1: a_res_1} stack.ext_rsrcs_db = existing_res best_res = stack._get_best_existing_rsrc_db('A') # should return resource with template id 3 which is current template self.assertEqual(a_res_3.id, best_res.id) # no resource with current template id as 3 existing_res = {1: a_res_1, 2: a_res_2} stack.ext_rsrcs_db = existing_res best_res = stack._get_best_existing_rsrc_db('A') # should return resource with template id 2 which is prev template self.assertEqual(a_res_2.id, best_res.id) # no resource with current template id as 3 or 2 existing_res = {1: a_res_1} stack.ext_rsrcs_db = existing_res best_res = stack._get_best_existing_rsrc_db('A') # should return resource with template id 1 existing in DB self.assertEqual(a_res_1.id, best_res.id) @mock.patch.object(parser.Stack, '_converge_create_or_update') def test_updated_time_stack_create(self, mock_ccu, mock_cr): stack = parser.Stack(utils.dummy_context(), 'convg_updated_time_test', templatem.Template.create_empty_template(), convergence=True) stack.thread_group_mgr = tools.DummyThreadGroupManager() stack.converge_stack(template=stack.t, action=stack.CREATE) self.assertIsNone(stack.updated_time) self.assertTrue(mock_ccu.called) @mock.patch.object(parser.Stack, '_converge_create_or_update') def test_updated_time_stack_update(self, mock_ccu, mock_cr): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'R1': {'Type': 'GenericResourceType'}}} stack = parser.Stack(utils.dummy_context(), 'updated_time_test', templatem.Template(tmpl), convergence=True) stack.thread_group_mgr = tools.DummyThreadGroupManager() stack.converge_stack(template=stack.t, action=stack.UPDATE) self.assertIsNotNone(stack.updated_time) self.assertTrue(mock_ccu.called) @mock.patch.object(parser.Stack, '_converge_create_or_update') @mock.patch.object(sync_point_object.SyncPoint, 'delete_all_by_stack_and_traversal') def test_sync_point_delete_stack_create(self, mock_syncpoint_del, mock_ccu, mock_cr): stack = parser.Stack(utils.dummy_context(), 'convg_updated_time_test', templatem.Template.create_empty_template()) stack.thread_group_mgr = tools.DummyThreadGroupManager() stack.converge_stack(template=stack.t, action=stack.CREATE) self.assertFalse(mock_syncpoint_del.called) self.assertTrue(mock_ccu.called) @mock.patch.object(parser.Stack, '_converge_create_or_update') @mock.patch.object(sync_point_object.SyncPoint, 'delete_all_by_stack_and_traversal') def test_sync_point_delete_stack_update(self, mock_syncpoint_del, mock_ccu, mock_cr): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'R1': {'Type': 'GenericResourceType'}}} stack = parser.Stack(utils.dummy_context(), 'updated_time_test', templatem.Template(tmpl)) stack.thread_group_mgr = 
tools.DummyThreadGroupManager() stack.current_traversal = 'prev_traversal' stack.converge_stack(template=stack.t, action=stack.UPDATE) self.assertTrue(mock_syncpoint_del.called) self.assertTrue(mock_ccu.called) def test_snapshot_delete(self, mock_cr): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'R1': {'Type': 'GenericResourceType'}}} stack = parser.Stack(utils.dummy_context(), 'updated_time_test', templatem.Template(tmpl)) stack.current_traversal = 'prev_traversal' stack.action, stack.status = stack.CREATE, stack.COMPLETE stack.store() stack.thread_group_mgr = tools.DummyThreadGroupManager() snapshot_values = { 'stack_id': stack.id, 'name': 'fake_snapshot', 'tenant': stack.context.tenant_id, 'status': 'COMPLETE', 'data': None } snapshot_objects.Snapshot.create(stack.context, snapshot_values) # Ensure that snapshot is not deleted on stack update stack.converge_stack(template=stack.t, action=stack.UPDATE) db_snapshot_obj = snapshot_objects.Snapshot.get_all( stack.context, stack.id) self.assertEqual('fake_snapshot', db_snapshot_obj[0].name) self.assertEqual(stack.id, db_snapshot_obj[0].stack_id) # Ensure that snapshot is deleted on stack delete stack.converge_stack(template=stack.t, action=stack.DELETE) self.assertEqual([], snapshot_objects.Snapshot.get_all( stack.context, stack.id)) self.assertTrue(mock_cr.called) @mock.patch.object(parser.Stack, '_persist_state') class TestConvgStackStateSet(common.HeatTestCase): def setUp(self): super(TestConvgStackStateSet, self).setUp() cfg.CONF.set_override('convergence_engine', True) self.stack = tools.get_stack( 'test_stack', utils.dummy_context(), template=tools.wp_template, convergence=True) def test_state_set_create_adopt_update_delete_rollback_complete(self, mock_ps): mock_ps.return_value = 'updated' ret_val = self.stack.state_set(self.stack.CREATE, self.stack.COMPLETE, 'Create complete') self.assertTrue(mock_ps.called) # Ensure that state_set returns with value for convergence self.assertEqual('updated', ret_val) mock_ps.reset_mock() ret_val = self.stack.state_set(self.stack.UPDATE, self.stack.COMPLETE, 'Update complete') self.assertTrue(mock_ps.called) self.assertEqual('updated', ret_val) mock_ps.reset_mock() ret_val = self.stack.state_set( self.stack.ROLLBACK, self.stack.COMPLETE, 'Rollback complete') self.assertTrue(mock_ps.called) self.assertEqual('updated', ret_val) mock_ps.reset_mock() ret_val = self.stack.state_set(self.stack.DELETE, self.stack.COMPLETE, 'Delete complete') self.assertTrue(mock_ps.called) self.assertEqual('updated', ret_val) mock_ps.reset_mock() ret_val = self.stack.state_set(self.stack.ADOPT, self.stack.COMPLETE, 'Adopt complete') self.assertTrue(mock_ps.called) self.assertEqual('updated', ret_val) def test_state_set_stack_suspend(self, mock_ps): mock_ps.return_value = 'updated' ret_val = self.stack.state_set( self.stack.SUSPEND, self.stack.IN_PROGRESS, 'Suspend started') self.assertTrue(mock_ps.called) # Ensure that state_set returns None for other actions in convergence self.assertIsNone(ret_val) mock_ps.reset_mock() ret_val = self.stack.state_set( self.stack.SUSPEND, self.stack.COMPLETE, 'Suspend complete') self.assertFalse(mock_ps.called) self.assertIsNone(ret_val) def test_state_set_stack_resume(self, mock_ps): ret_val = self.stack.state_set( self.stack.RESUME, self.stack.IN_PROGRESS, 'Resume started') self.assertTrue(mock_ps.called) self.assertIsNone(ret_val) mock_ps.reset_mock() ret_val = self.stack.state_set(self.stack.RESUME, self.stack.COMPLETE, 'Resume complete') 
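        # The COMPLETE transition for RESUME is not persisted through
        # _persist_state and state_set returns None, mirroring the
        # SUSPEND case above.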
        self.assertFalse(mock_ps.called)
        self.assertIsNone(ret_val)

    def test_state_set_stack_snapshot(self, mock_ps):
        ret_val = self.stack.state_set(
            self.stack.SNAPSHOT, self.stack.IN_PROGRESS, 'Snapshot started')
        self.assertTrue(mock_ps.called)
        self.assertIsNone(ret_val)

        mock_ps.reset_mock()
        ret_val = self.stack.state_set(
            self.stack.SNAPSHOT, self.stack.COMPLETE, 'Snapshot complete')
        self.assertFalse(mock_ps.called)
        self.assertIsNone(ret_val)

    def test_state_set_stack_restore(self, mock_ps):
        mock_ps.return_value = 'updated'
        ret_val = self.stack.state_set(
            self.stack.RESTORE, self.stack.IN_PROGRESS, 'Restore started')
        self.assertTrue(mock_ps.called)
        self.assertEqual('updated', ret_val)

        mock_ps.reset_mock()
        ret_val = self.stack.state_set(
            self.stack.RESTORE, self.stack.COMPLETE, 'Restore complete')
        self.assertTrue(mock_ps.called)
        self.assertEqual('updated', ret_val)


class TestConvgStackRollback(common.HeatTestCase):
    def setUp(self):
        super(TestConvgStackRollback, self).setUp()
        self.ctx = utils.dummy_context()
        self.stack = tools.get_stack('test_stack_rollback', self.ctx,
                                     template=tools.string_template_five,
                                     convergence=True)

    def test_trigger_rollback_uses_old_template_if_available(self):
        # create a template and assign to stack as previous template
        t = template_format.parse(tools.wp_template)
        prev_tmpl = templatem.Template(t)
        prev_tmpl.store(context=self.ctx)
        self.stack.prev_raw_template_id = prev_tmpl.id

        # mock failure
        self.stack.action = self.stack.UPDATE
        self.stack.status = self.stack.FAILED
        self.stack.store()

        # mock converge_stack()
        self.stack.converge_stack = mock.Mock()

        # call trigger_rollback
        self.stack.rollback()

        # Make sure stack converge is called with previous template
        self.assertTrue(self.stack.converge_stack.called)
        self.assertIsNone(self.stack.prev_raw_template_id)
        call_args, call_kwargs = self.stack.converge_stack.call_args
        template_used_for_rollback = call_args[0]
        self.assertEqual(prev_tmpl.id, template_used_for_rollback.id)

    def test_trigger_rollback_uses_empty_template_if_prev_tmpl_not_available(
            self):
        # mock create failure with no previous template
        self.stack.prev_raw_template_id = None
        self.stack.action = self.stack.CREATE
        self.stack.status = self.stack.FAILED
        self.stack.store()

        # mock converge_stack()
        self.stack.converge_stack = mock.Mock()

        # call trigger_rollback
        self.stack.rollback()

        # Make sure stack converge is called with empty template
        self.assertTrue(self.stack.converge_stack.called)
        call_args, call_kwargs = self.stack.converge_stack.call_args
        template_used_for_rollback = call_args[0]
        self.assertEqual({}, template_used_for_rollback['resources'])


class TestConvgComputeDependencies(common.HeatTestCase):
    def setUp(self):
        super(TestConvgComputeDependencies, self).setUp()
        self.ctx = utils.dummy_context()
        self.stack = tools.get_stack('test_stack_convg', self.ctx,
                                     template=tools.string_template_five,
                                     convergence=True)

    def _fake_db_resources(self, stack):
        db_resources = {}
        i = 0
        for rsrc_name in ['E', 'D', 'C', 'B', 'A']:
            i += 1
            rsrc = mock.MagicMock()
            rsrc.id = i
            rsrc.name = rsrc_name
            rsrc.current_template_id = stack.prev_raw_template_id
            db_resources[i] = rsrc

        db_resources[3].requires = [4, 5]
        db_resources[1].requires = [3]
        db_resources[2].requires = [3]
        return db_resources

    def test_dependencies_create_stack_without_mock(self):
        self.stack.store()
        self.current_resources = self.stack._update_or_store_resources()
        self.stack._compute_convg_dependencies(self.stack.ext_rsrcs_db,
                                               self.stack.dependencies,
                                               self.current_resources)
        self.assertEqual([((1, True), (3, True)), ((2, True), (3,
True)), ((3, True), (4, True)), ((3, True), (5, True))], sorted(self.stack._convg_deps._graph.edges())) def test_dependencies_update_same_template(self): t = template_format.parse(tools.string_template_five) tmpl = templatem.Template(t) self.stack.t = tmpl self.stack.t.id = 2 self.stack.prev_raw_template_id = 1 db_resources = self._fake_db_resources(self.stack) curr_resources = {res.name: res for id, res in db_resources.items()} self.stack._compute_convg_dependencies(db_resources, self.stack.dependencies, curr_resources) self.assertEqual([((1, False), (1, True)), ((1, True), (3, True)), ((2, False), (2, True)), ((2, True), (3, True)), ((3, False), (1, False)), ((3, False), (2, False)), ((3, False), (3, True)), ((3, True), (4, True)), ((3, True), (5, True)), ((4, False), (3, False)), ((4, False), (4, True)), ((5, False), (3, False)), ((5, False), (5, True))], sorted(self.stack._convg_deps._graph.edges())) def test_dependencies_update_new_template(self): t = template_format.parse(tools.string_template_five_update) tmpl = templatem.Template(t) self.stack.t = tmpl self.stack.t.id = 2 self.stack.prev_raw_template_id = 1 db_resources = self._fake_db_resources(self.stack) curr_resources = {res.name: res for id, res in db_resources.items()} # 'H', 'G', 'F' are part of new template i = len(db_resources) for new_rsrc in ['H', 'G', 'F']: i += 1 rsrc = mock.MagicMock() rsrc.name = new_rsrc rsrc.id = i curr_resources[new_rsrc] = rsrc self.stack._compute_convg_dependencies(db_resources, self.stack.dependencies, curr_resources) self.assertEqual([((3, False), (1, False)), ((3, False), (2, False)), ((4, False), (3, False)), ((4, False), (4, True)), ((5, False), (3, False)), ((5, False), (5, True)), ((6, True), (8, True)), ((7, True), (8, True)), ((8, True), (4, True)), ((8, True), (5, True))], sorted(self.stack._convg_deps._graph.edges())) def test_dependencies_update_replace_rollback(self): t = template_format.parse(tools.string_template_five) tmpl = templatem.Template(t) self.stack.t = tmpl self.stack.t.id = 1 self.stack.prev_raw_template_id = 2 db_resources = self._fake_db_resources(self.stack) # previous resource E still exists in db. 
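        # Rollback scenario: the stack is converging back to template id 1
        # while prev_raw_template_id points at the aborted update (id 2).
        # Resource id 1 is the original E on the target template; the
        # replacement created by the failed update is added below as id 6.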
db_resources[1].current_template_id = 1 # resource that replaced E res = mock.MagicMock() res.id = 6 res.name = 'E' res.requires = [3] res.replaces = 1 res.current_template_id = 2 db_resources[6] = res curr_resources = {res.name: res for id, res in db_resources.items()} # best existing resource curr_resources['E'] = db_resources[1] self.stack._compute_convg_dependencies(db_resources, self.stack.dependencies, curr_resources) self.assertEqual([((1, False), (1, True)), ((1, False), (6, False)), ((1, True), (3, True)), ((2, False), (2, True)), ((2, True), (3, True)), ((3, False), (1, False)), ((3, False), (2, False)), ((3, False), (3, True)), ((3, False), (6, False)), ((3, True), (4, True)), ((3, True), (5, True)), ((4, False), (3, False)), ((4, False), (4, True)), ((5, False), (3, False)), ((5, False), (5, True))], sorted(self.stack._convg_deps._graph.edges())) def test_dependencies_update_delete(self): tmpl = templatem.Template.create_empty_template( version=self.stack.t.version) self.stack.t = tmpl self.stack.t.id = 2 self.stack.prev_raw_template_id = 1 db_resources = self._fake_db_resources(self.stack) curr_resources = {res.name: res for id, res in db_resources.items()} self.stack._compute_convg_dependencies(db_resources, self.stack.dependencies, curr_resources) self.assertEqual([((3, False), (1, False)), ((3, False), (2, False)), ((4, False), (3, False)), ((5, False), (3, False))], sorted(self.stack._convg_deps._graph.edges())) class TestConvergenceMigration(common.HeatTestCase): def test_migration_to_convergence_engine(self): self.ctx = utils.dummy_context() self.stack = tools.get_stack('test_stack_convg', self.ctx, template=tools.string_template_five) self.stack.store() for r in self.stack.resources.values(): r.store() self.stack.migrate_to_convergence() self.stack = self.stack.load(self.ctx, self.stack.id) self.assertTrue(self.stack.convergence) self.assertIsNone(self.stack.prev_raw_template_id) exp_required_by = {'A': ['C'], 'B': ['C'], 'C': ['D', 'E'], 'D': [], 'E': []} exp_requires = {'A': [], 'B': [], 'C': ['A', 'B'], 'D': ['C'], 'E': ['C']} exp_tmpl_id = self.stack.t.id def id_to_name(ids): names = [] for r in self.stack.resources.values(): if r.id in ids: names.append(r.name) return names for r in self.stack.resources.values(): self.assertEqual(sorted(exp_required_by[r.name]), sorted(r.required_by())) self.assertEqual(sorted(exp_requires[r.name]), sorted(id_to_name(r.requires))) self.assertEqual(exp_tmpl_id, r.current_template_id) heat-10.0.2/heat/tests/__init__.py0000666000175000017500000000174713343562351016751 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys from mox3 import mox import oslo_i18n as i18n sys.modules['mox'] = mox def fake_translate_msgid(msgid, domain, desired_locale=None): return msgid i18n.enable_lazy() # To ensure messages don't really get translated while running tests. # As there are lots of places where matching is expected when comparing # exception message(translated) with raw message. 
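# Overriding oslo_i18n's internal _translate_msgid hook with an identity
# function keeps exception messages byte-for-byte equal to their raw msgids,
# so string comparisons in the tests see untranslated text.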
i18n._translate_msgid = fake_translate_msgid
heat-10.0.2/heat/tests/policy/0000775000175000017500000000000013343562672016132 5ustar zuulzuul00000000000000
heat-10.0.2/heat/tests/policy/resources.json0000666000175000017500000000023013343562340021024 0ustar zuulzuul00000000000000
{
    "context_is_admin": "role:admin",
    "resource_types:OS::Cinder::Quota": "!",
    "resource_types:OS::Keystone::*": "rule:context_is_admin"
}
heat-10.0.2/heat/tests/policy/check_admin.json0000666000175000017500000000005213343562340021241 0ustar zuulzuul00000000000000
{
    "context_is_admin": "role:admin"
}
heat-10.0.2/heat/tests/policy/deny_stack_user.json0000666000175000017500000000141713343562340022204 0ustar zuulzuul00000000000000
{
    "deny_stack_user": "not role:heat_stack_user",
    "cloudformation:ListStacks": "rule:deny_stack_user",
    "cloudformation:CreateStack": "rule:deny_stack_user",
    "cloudformation:DescribeStacks": "rule:deny_stack_user",
    "cloudformation:DeleteStack": "rule:deny_stack_user",
    "cloudformation:UpdateStack": "rule:deny_stack_user",
    "cloudformation:DescribeStackEvents": "rule:deny_stack_user",
    "cloudformation:ValidateTemplate": "rule:deny_stack_user",
    "cloudformation:GetTemplate": "rule:deny_stack_user",
    "cloudformation:EstimateTemplateCost": "rule:deny_stack_user",
    "cloudformation:DescribeStackResource": "",
    "cloudformation:DescribeStackResources": "rule:deny_stack_user",
    "cloudformation:ListStackResources": "rule:deny_stack_user"
}
heat-10.0.2/heat/tests/policy/notallowed.json0000666000175000017500000000101313343562340021162 0ustar zuulzuul00000000000000
{
    "cloudformation:ListStacks": "!",
    "cloudformation:CreateStack": "!",
    "cloudformation:DescribeStacks": "!",
    "cloudformation:DeleteStack": "!",
    "cloudformation:UpdateStack": "!",
    "cloudformation:DescribeStackEvents": "!",
    "cloudformation:ValidateTemplate": "!",
    "cloudformation:GetTemplate": "!",
    "cloudformation:EstimateTemplateCost": "!",
    "cloudformation:DescribeStackResource": "!",
    "cloudformation:DescribeStackResources": "!",
    "cloudformation:ListStackResources": "!"
}
heat-10.0.2/heat/tests/clients/0000775000175000017500000000000013343562672016274 5ustar zuulzuul00000000000000
heat-10.0.2/heat/tests/clients/test_ceilometer_client.py0000666000175000017500000000174213343562351023373 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from ceilometerclient import client as cc

from heat.tests import common
from heat.tests import utils


class CeilometerClientPluginTest(common.HeatTestCase):
    def test_create(self):
        self.patchobject(cc.SessionClient, 'request')
        context = utils.dummy_context()
        plugin = context.clients.client_plugin('ceilometer')
        client = plugin.client()
        self.assertIsNotNone(client.alarms)
heat-10.0.2/heat/tests/clients/test_sahara_client.py0000666000175000017500000002114613343562351022502 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid import mock from saharaclient.api import base as sahara_base import six from heat.common import exception from heat.engine.clients.os import sahara from heat.tests import common from heat.tests import utils class SaharaUtilsTest(common.HeatTestCase): """Basic tests :module:'heat.engine.resources.clients.os.sahara'.""" def setUp(self): super(SaharaUtilsTest, self).setUp() self.sahara_client = mock.MagicMock() con = utils.dummy_context() c = con.clients self.sahara_plugin = c.client_plugin('sahara') self.sahara_plugin.client = lambda: self.sahara_client self.my_image = mock.MagicMock() self.my_plugin = mock.MagicMock() def test_get_image_id(self): """Tests the get_image_id function.""" img_id = str(uuid.uuid4()) img_name = 'myfakeimage' self.my_image.id = img_id self.my_image.name = img_name self.sahara_client.images.get.side_effect = [ self.my_image, sahara_base.APIException(404), sahara_base.APIException(404) ] self.sahara_client.images.find.side_effect = [[self.my_image], []] self.assertEqual(img_id, self.sahara_plugin.get_image_id(img_id)) self.assertEqual(img_id, self.sahara_plugin.get_image_id(img_name)) self.assertRaises(exception.EntityNotFound, self.sahara_plugin.get_image_id, 'noimage') calls = [mock.call(name=img_name), mock.call(name='noimage')] self.sahara_client.images.find.assert_has_calls(calls) def test_get_image_id_by_name_in_uuid(self): """Tests the get_image_id function by name in uuid.""" img_id = str(uuid.uuid4()) img_name = str(uuid.uuid4()) self.my_image.id = img_id self.my_image.name = img_name self.sahara_client.images.get.side_effect = [ sahara_base.APIException(error_code=400, error_name='IMAGE_NOT_REGISTERED')] self.sahara_client.images.find.return_value = [self.my_image] self.assertEqual(img_id, self.sahara_plugin.get_image_id(img_name)) self.sahara_client.images.get.assert_called_once_with(img_name) self.sahara_client.images.find.assert_called_once_with(name=img_name) def test_get_image_id_sahara_exception(self): """Test get_image_id when sahara raises an exception.""" # Simulate HTTP exception img_name = str(uuid.uuid4()) self.sahara_client.images.find.side_effect = [ sahara_base.APIException(error_message="Error", error_code=404)] expected_error = "Error retrieving images list from sahara: Error" e = self.assertRaises(exception.Error, self.sahara_plugin.find_resource_by_name, 'images', img_name) self.assertEqual(expected_error, six.text_type(e)) self.sahara_client.images.find.assert_called_once_with(name=img_name) def test_get_image_id_not_found(self): """Tests the get_image_id function while image is not found.""" img_name = str(uuid.uuid4()) self.my_image.name = img_name self.sahara_client.images.get.side_effect = [ sahara_base.APIException(error_code=400, error_name='IMAGE_NOT_REGISTERED')] self.sahara_client.images.find.return_value = [] self.assertRaises(exception.EntityNotFound, self.sahara_plugin.get_image_id, img_name) self.sahara_client.images.get.assert_called_once_with(img_name) self.sahara_client.images.find.assert_called_once_with(name=img_name) def test_get_image_id_name_ambiguity(self): """Tests the get_image_id function while name 
is ambiguous."""
        img_name = 'ambiguity_name'
        self.my_image.name = img_name
        self.sahara_client.images.get.side_effect = sahara_base.APIException()
        self.sahara_client.images.find.return_value = [self.my_image,
                                                       self.my_image]
        self.assertRaises(exception.PhysicalResourceNameAmbiguity,
                          self.sahara_plugin.get_image_id, img_name)
        self.sahara_client.images.find.assert_called_once_with(name=img_name)

    def test_get_plugin_id(self):
        """Tests the get_plugin_id function."""
        plugin_name = 'myfakeplugin'
        self.my_plugin.name = plugin_name

        def side_effect(name):
            if name == plugin_name:
                return self.my_plugin
            else:
                raise sahara_base.APIException(error_code=404,
                                               error_name='NOT_FOUND')

        self.sahara_client.plugins.get.side_effect = side_effect
        self.assertIsNone(self.sahara_plugin.get_plugin_id(plugin_name))
        self.assertRaises(exception.EntityNotFound,
                          self.sahara_plugin.get_plugin_id, 'noplugin')
        calls = [mock.call(plugin_name), mock.call('noplugin')]
        self.sahara_client.plugins.get.assert_has_calls(calls)

    def test_validate_hadoop_version(self):
        """Tests the validate_hadoop_version function."""
        versions = ['1.2.1', '2.6.0', '2.7.1']
        plugin_name = 'vanilla'
        self.my_plugin.name = plugin_name
        self.my_plugin.versions = versions
        self.sahara_client.plugins.get.return_value = self.my_plugin
        self.assertIsNone(self.sahara_plugin.validate_hadoop_version(
            plugin_name, '2.6.0'))
        ex = self.assertRaises(exception.StackValidationFailed,
                               self.sahara_plugin.validate_hadoop_version,
                               plugin_name, '1.2.3')
        self.assertEqual("Requested plugin 'vanilla' doesn't support version "
                         "'1.2.3'. Allowed versions are 1.2.1, 2.6.0, 2.7.1",
                         six.text_type(ex))
        calls = [mock.call(plugin_name), mock.call(plugin_name)]
        self.sahara_client.plugins.get.assert_has_calls(calls)


class SaharaConstraintsTest(common.HeatTestCase):
    scenarios = [
        ('JobType', dict(
            constraint=sahara.JobTypeConstraint(),
            resource_name='job_types'
        )),
        ('ClusterTemplate', dict(
            constraint=sahara.ClusterTemplateConstraint(),
            resource_name='cluster_templates'
        )),
        ('DataSource', dict(
            constraint=sahara.DataSourceConstraint(),
            resource_name='data_sources'
        )),
        ('Cluster', dict(
            constraint=sahara.ClusterConstraint(),
            resource_name='clusters'
        )),
        ('JobBinary', dict(
            constraint=sahara.JobBinaryConstraint(),
            resource_name='job_binaries'
        )),
        ('Plugin', dict(
            constraint=sahara.PluginConstraint(),
            resource_name=None
        )),
        ('Image', dict(
            constraint=sahara.ImageConstraint(),
            resource_name='images'
        )),
    ]

    def setUp(self):
        super(SaharaConstraintsTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.mock_get = mock.Mock()
        cl_plgn = self.ctx.clients.client_plugin('sahara')
        cl_plgn.find_resource_by_name_or_id = self.mock_get
        cl_plgn.get_image_id = self.mock_get
        cl_plgn.get_plugin_id = self.mock_get

    def test_validation(self):
        self.mock_get.return_value = "fake_val"
        self.assertTrue(self.constraint.validate("foo", self.ctx))
        if self.resource_name is None:
            self.mock_get.assert_called_once_with("foo")
        else:
            self.mock_get.assert_called_once_with(self.resource_name, "foo")

    def test_validation_error(self):
        self.mock_get.side_effect = exception.EntityNotFound(
            entity='Fake entity', name='bar')
        self.assertFalse(self.constraint.validate("bar", self.ctx))
        if self.resource_name is None:
            self.mock_get.assert_called_once_with("bar")
        else:
            self.mock_get.assert_called_once_with(self.resource_name, "bar")
heat-10.0.2/heat/tests/clients/test_senlin_client.py0000666000175000017500000001772013343562340022534 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use
this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from openstack import exceptions from heat.engine.clients.os import senlin as senlin_plugin from heat.tests import common from heat.tests import utils class SenlinClientPluginTest(common.HeatTestCase): @mock.patch('openstack.connection.Connection') def setUp(self, mock_connection): super(SenlinClientPluginTest, self).setUp() context = utils.dummy_context() self.plugin = context.clients.client_plugin('senlin') self.client = self.plugin.client() def test_cluster_get(self): self.assertIsNotNone(self.client.clusters) def test_is_bad_request(self): self.assertTrue(self.plugin.is_bad_request( exceptions.HttpException(http_status=400))) self.assertFalse(self.plugin.is_bad_request(Exception)) self.assertFalse(self.plugin.is_bad_request( exceptions.HttpException(http_status=404))) def test_check_action_success(self): mock_action = mock.MagicMock() mock_action.status = 'SUCCEEDED' mock_get = self.patchobject(self.client, 'get_action') mock_get.return_value = mock_action self.assertTrue(self.plugin.check_action_status('fake_id')) mock_get.assert_called_once_with('fake_id') def test_get_profile_id(self): mock_profile = mock.Mock(id='fake_profile_id') mock_get = self.patchobject(self.client, 'get_profile', return_value=mock_profile) ret = self.plugin.get_profile_id('fake_profile') self.assertEqual('fake_profile_id', ret) mock_get.assert_called_once_with('fake_profile') def test_get_cluster_id(self): mock_cluster = mock.Mock(id='fake_cluster_id') mock_get = self.patchobject(self.client, 'get_cluster', return_value=mock_cluster) ret = self.plugin.get_cluster_id('fake_cluster') self.assertEqual('fake_cluster_id', ret) mock_get.assert_called_once_with('fake_cluster') def test_get_policy_id(self): mock_policy = mock.Mock(id='fake_policy_id') mock_get = self.patchobject(self.client, 'get_policy', return_value=mock_policy) ret = self.plugin.get_policy_id('fake_policy') self.assertEqual('fake_policy_id', ret) mock_get.assert_called_once_with('fake_policy') class ProfileConstraintTest(common.HeatTestCase): @mock.patch('openstack.connection.Connection') def setUp(self, mock_connection): super(ProfileConstraintTest, self).setUp() self.senlin_client = mock.MagicMock() self.ctx = utils.dummy_context() self.mock_get_profile = mock.Mock() self.ctx.clients.client( 'senlin').get_profile = self.mock_get_profile self.constraint = senlin_plugin.ProfileConstraint() def test_validate_true(self): self.mock_get_profile.return_value = None self.assertTrue(self.constraint.validate("PROFILE_ID", self.ctx)) def test_validate_false(self): self.mock_get_profile.side_effect = exceptions.ResourceNotFound( 'PROFILE_ID') self.assertFalse(self.constraint.validate("PROFILE_ID", self.ctx)) self.mock_get_profile.side_effect = exceptions.HttpException( 'PROFILE_ID') self.assertFalse(self.constraint.validate("PROFILE_ID", self.ctx)) class ClusterConstraintTest(common.HeatTestCase): @mock.patch('openstack.connection.Connection') def setUp(self, mock_connection): super(ClusterConstraintTest, self).setUp() self.senlin_client = mock.MagicMock() self.ctx = utils.dummy_context() 
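        # Stub out the senlin client's get_cluster call so that constraint
        # validation never reaches a real API; each test below installs a
        # return value or side effect to drive the pass/fail outcome.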
self.mock_get_cluster = mock.Mock() self.ctx.clients.client( 'senlin').get_cluster = self.mock_get_cluster self.constraint = senlin_plugin.ClusterConstraint() def test_validate_true(self): self.mock_get_cluster.return_value = None self.assertTrue(self.constraint.validate("CLUSTER_ID", self.ctx)) def test_validate_false(self): self.mock_get_cluster.side_effect = exceptions.ResourceNotFound( 'CLUSTER_ID') self.assertFalse(self.constraint.validate("CLUSTER_ID", self.ctx)) self.mock_get_cluster.side_effect = exceptions.HttpException( 'CLUSTER_ID') self.assertFalse(self.constraint.validate("CLUSTER_ID", self.ctx)) class PolicyConstraintTest(common.HeatTestCase): @mock.patch('openstack.connection.Connection') def setUp(self, mock_connection): super(PolicyConstraintTest, self).setUp() self.senlin_client = mock.MagicMock() self.ctx = utils.dummy_context() self.mock_get_policy = mock.Mock() self.ctx.clients.client( 'senlin').get_policy = self.mock_get_policy self.constraint = senlin_plugin.PolicyConstraint() def test_validate_true(self): self.mock_get_policy.return_value = None self.assertTrue(self.constraint.validate("POLICY_ID", self.ctx)) def test_validate_false(self): self.mock_get_policy.side_effect = exceptions.ResourceNotFound( 'POLICY_ID') self.assertFalse(self.constraint.validate("POLICY_ID", self.ctx)) self.mock_get_policy.side_effect = exceptions.HttpException( 'POLICY_ID') self.assertFalse(self.constraint.validate("POLICY_ID", self.ctx)) class ProfileTypeConstraintTest(common.HeatTestCase): @mock.patch('openstack.connection.Connection') def setUp(self, mock_connection): super(ProfileTypeConstraintTest, self).setUp() self.senlin_client = mock.MagicMock() self.ctx = utils.dummy_context() heat_profile_type = mock.MagicMock() heat_profile_type.name = 'os.heat.stack-1.0' nova_profile_type = mock.MagicMock() nova_profile_type.name = 'os.nova.server-1.0' self.mock_profile_types = mock.Mock( return_value=[heat_profile_type, nova_profile_type]) self.ctx.clients.client( 'senlin').profile_types = self.mock_profile_types self.constraint = senlin_plugin.ProfileTypeConstraint() def test_validate_true(self): self.assertTrue(self.constraint.validate("os.heat.stack-1.0", self.ctx)) def test_validate_false(self): self.assertFalse(self.constraint.validate("Invalid_type", self.ctx)) class PolicyTypeConstraintTest(common.HeatTestCase): @mock.patch('openstack.connection.Connection') def setUp(self, mock_connection): super(PolicyTypeConstraintTest, self).setUp() self.senlin_client = mock.MagicMock() self.ctx = utils.dummy_context() deletion_policy_type = mock.MagicMock() deletion_policy_type.name = 'senlin.policy.deletion-1.0' lb_policy_type = mock.MagicMock() lb_policy_type.name = 'senlin.policy.loadbalance-1.0' self.mock_policy_types = mock.Mock( return_value=[deletion_policy_type, lb_policy_type]) self.ctx.clients.client( 'senlin').policy_types = self.mock_policy_types self.constraint = senlin_plugin.PolicyTypeConstraint() def test_validate_true(self): self.assertTrue(self.constraint.validate( "senlin.policy.deletion-1.0", self.ctx)) def test_validate_false(self): self.assertFalse(self.constraint.validate("Invalid_type", self.ctx)) heat-10.0.2/heat/tests/clients/test_zun_client.py0000666000175000017500000000203513343562351022053 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.tests import common from heat.tests import utils class ZunClientPluginTest(common.HeatTestCase): def test_create(self): context = utils.dummy_context() plugin = context.clients.client_plugin('zun') client = plugin.client() self.assertEqual('http://server.test:5000/v3', client.containers.api.session.auth.endpoint) self.assertEqual('1.12', client.api_version.get_string()) heat-10.0.2/heat/tests/clients/test_swift_client.py0000666000175000017500000001213013343562340022366 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock import pytz from testtools import matchers from heat.engine.clients.os import swift from heat.tests import common from heat.tests import utils class SwiftClientPluginTestCase(common.HeatTestCase): def setUp(self): super(SwiftClientPluginTestCase, self).setUp() self.swift_client = mock.Mock() self.context = utils.dummy_context() self.context.tenant = "demo" c = self.context.clients self.swift_plugin = c.client_plugin('swift') self.swift_plugin.client = lambda: self.swift_client class SwiftUtilsTest(SwiftClientPluginTestCase): def test_is_valid_temp_url_path(self): valids = [ "/v1/AUTH_demo/c/o", "/v1/AUTH_demo/c/o/", "/v1/TEST_demo/c/o", "/v1/AUTH_demo/c/pseudo_folder/o", ] for url in valids: self.assertTrue(self.swift_plugin.is_valid_temp_url_path(url)) invalids = [ "/v2/AUTH_demo/c/o", "/v1/AUTH_demo/c//", "/v1/AUTH_demo/c/", "/AUTH_demo/c//", "//AUTH_demo/c/o", "//v1/AUTH_demo/c/o", "/v1/AUTH_demo/o", "/v1/AUTH_demo//o", "/v1/AUTH_d3mo//o", "/v1//c/o", "/v1/c/o", ] for url in invalids: self.assertFalse(self.swift_plugin.is_valid_temp_url_path(url)) def test_get_temp_url(self): self.swift_client.url = ("http://fake-host.com:8080/v1/" "AUTH_demo") self.swift_client.head_account = mock.Mock(return_value={ 'x-account-meta-temp-url-key': '123456'}) self.swift_client.post_account = mock.Mock() container_name = '1234' # from stack.id stack_name = 'test' handle_name = 'foo' obj_name = '%s-%s' % (stack_name, handle_name) url = self.swift_plugin.get_temp_url(container_name, obj_name) self.assertFalse(self.swift_client.post_account.called) regexp = ("http://fake-host.com:8080/v1/AUTH_demo/%s" r"/%s\?temp_url_sig=[0-9a-f]{40}&" "temp_url_expires=[0-9]{10}" % (container_name, obj_name)) self.assertThat(url, matchers.MatchesRegex(regexp)) timeout = int(url.split('=')[-1]) self.assertLess(timeout, swift.MAX_EPOCH) def test_get_temp_url_no_account_key(self): self.swift_client.url = ("http://fake-host.com:8080/v1/" "AUTH_demo") head_account = {} def post_account(data): head_account.update(data) self.swift_client.head_account = 
mock.Mock(return_value=head_account) self.swift_client.post_account = post_account container_name = '1234' # from stack.id stack_name = 'test' handle_name = 'foo' obj_name = '%s-%s' % (stack_name, handle_name) self.assertNotIn('x-account-meta-temp-url-key', head_account) self.swift_plugin.get_temp_url(container_name, obj_name) self.assertIn('x-account-meta-temp-url-key', head_account) def test_get_signal_url(self): self.swift_client.url = ("http://fake-host.com:8080/v1/" "AUTH_demo") self.swift_client.head_account = mock.Mock(return_value={ 'x-account-meta-temp-url-key': '123456'}) self.swift_client.post_account = mock.Mock() container_name = '1234' # from stack.id stack_name = 'test' handle_name = 'foo' obj_name = '%s-%s' % (stack_name, handle_name) url = self.swift_plugin.get_signal_url(container_name, obj_name) self.assertTrue(self.swift_client.put_container.called) self.assertTrue(self.swift_client.put_object.called) regexp = ("http://fake-host.com:8080/v1/AUTH_demo/%s" r"/%s\?temp_url_sig=[0-9a-f]{40}&" "temp_url_expires=[0-9]{10}" % (container_name, obj_name)) self.assertThat(url, matchers.MatchesRegex(regexp)) def test_parse_last_modified(self): self.assertIsNone(self.swift_plugin.parse_last_modified(None)) now = datetime.datetime( 2015, 2, 5, 1, 4, 40, 0, pytz.timezone('GMT')) now_naive = datetime.datetime( 2015, 2, 5, 1, 4, 40, 0) last_modified = now.strftime('%a, %d %b %Y %H:%M:%S %Z') self.assertEqual('Thu, 05 Feb 2015 01:04:40 GMT', last_modified) self.assertEqual( now_naive, self.swift_plugin.parse_last_modified(last_modified)) heat-10.0.2/heat/tests/clients/test_clients.py0000666000175000017500000010037313343562351021346 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
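# The tests below exercise the client plugin framework itself: heat endpoint
# resolution and caching in ClientsTest, the ClientPlugin option/token/URL
# machinery in ClientPluginTest, plugin discovery in
# TestClientPluginsInitialise, and per-plugin exception classification
# (is_not_found / is_over_limit / is_conflict) in the TestIsNotFound
# scenarios.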
from aodhclient import exceptions as aodh_exc from ceilometerclient import exc as ceil_exc from cinderclient import exceptions as cinder_exc from glanceclient import exc as glance_exc from heatclient import client as heatclient from heatclient import exc as heat_exc from keystoneauth1 import exceptions as keystone_exc from keystoneauth1.identity import generic from manilaclient import exceptions as manila_exc from mistralclient.api import base as mistral_base import mock from neutronclient.common import exceptions as neutron_exc from openstack import exceptions from oslo_config import cfg from saharaclient.api import base as sahara_base import six from swiftclient import exceptions as swift_exc from testtools import testcase from troveclient import client as troveclient from zaqarclient.transport import errors as zaqar_exc from heat.common import exception from heat.engine import clients from heat.engine.clients import client_exception from heat.engine.clients import client_plugin from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks from heat.tests import common from heat.tests import fakes from heat.tests.openstack.nova import fakes as fakes_nova class ClientsTest(common.HeatTestCase): def test_bad_cloud_backend(self): con = mock.Mock() cfg.CONF.set_override('cloud_backend', 'some.weird.object') exc = self.assertRaises(exception.Invalid, clients.Clients, con) self.assertIn('Invalid cloud_backend setting in heat.conf detected', six.text_type(exc)) cfg.CONF.set_override('cloud_backend', 'heat.engine.clients.Clients') exc = self.assertRaises(exception.Invalid, clients.Clients, con) self.assertIn('Invalid cloud_backend setting in heat.conf detected', six.text_type(exc)) def test_clients_get_heat_url(self): con = mock.Mock() con.tenant_id = "b363706f891f48019483f8bd6503c54b" c = clients.Clients(con) con.clients = c obj = c.client_plugin('heat') obj._get_client_option = mock.Mock() obj._get_client_option.return_value = None obj.url_for = mock.Mock(name="url_for") obj.url_for.return_value = "url_from_keystone" self.assertEqual("url_from_keystone", obj.get_heat_url()) heat_url = "http://0.0.0.0:8004/v1/%(tenant_id)s" obj._get_client_option.return_value = heat_url tenant_id = "b363706f891f48019483f8bd6503c54b" result = heat_url % {"tenant_id": tenant_id} self.assertEqual(result, obj.get_heat_url()) obj._get_client_option.return_value = result self.assertEqual(result, obj.get_heat_url()) def _client_cfn_url(self, use_uwsgi=False, use_ipv6=False): con = mock.Mock() c = clients.Clients(con) con.clients = c obj = c.client_plugin('heat') obj._get_client_option = mock.Mock() obj._get_client_option.return_value = None obj.url_for = mock.Mock(name="url_for") if use_ipv6: if use_uwsgi: obj.url_for.return_value = "http://[::1]/heat-api-cfn/v1/" else: obj.url_for.return_value = "http://[::1]:8000/v1/" else: if use_uwsgi: obj.url_for.return_value = "http://0.0.0.0/heat-api-cfn/v1/" else: obj.url_for.return_value = "http://0.0.0.0:8000/v1/" return obj def test_clients_get_heat_cfn_url(self): obj = self._client_cfn_url() self.assertEqual("http://0.0.0.0:8000/v1/", obj.get_heat_cfn_url()) def test_clients_get_heat_cfn_metadata_url(self): obj = self._client_cfn_url() self.assertEqual("http://0.0.0.0:8000/v1/", obj.get_cfn_metadata_server_url()) def test_clients_get_heat_cfn_metadata_url_conf(self): cfg.CONF.set_override('heat_metadata_server_url', 'http://server.test:123') obj = self._client_cfn_url() self.assertEqual("http://server.test:123/v1/", obj.get_cfn_metadata_server_url()) 
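    # The next tests cover the token-less path (the context's auth_plugin
    # supplies a fresh token) and client caching (repeated client() calls on
    # the same plugin return the same instance).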
    @mock.patch.object(heatclient, 'Client')
    def test_clients_heat(self, mock_call):
        self.stub_keystoneclient()
        con = mock.Mock()
        con.auth_url = "http://auth.example.com:5000/v2.0"
        con.tenant_id = "b363706f891f48019483f8bd6503c54b"
        con.auth_token = "3bcc3d3a03f44e3d8377f9247b0ad155"
        c = clients.Clients(con)
        con.clients = c

        obj = c.client_plugin('heat')
        obj.url_for = mock.Mock(name="url_for")
        obj.url_for.return_value = "url_from_keystone"
        obj.client()
        self.assertEqual('url_from_keystone', obj.get_heat_url())

    @mock.patch.object(heatclient, 'Client')
    def test_clients_heat_no_auth_token(self, mock_call):
        con = mock.Mock()
        con.auth_url = "http://auth.example.com:5000/v2.0"
        con.tenant_id = "b363706f891f48019483f8bd6503c54b"
        con.auth_token = None
        con.auth_plugin = fakes.FakeAuth(auth_token='anewtoken')
        c = clients.Clients(con)
        con.clients = c

        obj = c.client_plugin('heat')
        obj.url_for = mock.Mock(name="url_for")
        obj.url_for.return_value = "url_from_keystone"
        self.assertEqual('url_from_keystone', obj.get_heat_url())

    @mock.patch.object(heatclient, 'Client')
    def test_clients_heat_cached(self, mock_call):
        self.stub_auth()

        con = mock.Mock()
        con.auth_url = "http://auth.example.com:5000/v2.0"
        con.tenant_id = "b363706f891f48019483f8bd6503c54b"
        con.auth_token = "3bcc3d3a03f44e3d8377f9247b0ad155"
        con.trust_id = None
        c = clients.Clients(con)
        con.clients = c

        obj = c.client_plugin('heat')
        obj.get_heat_url = mock.Mock(name="get_heat_url")
        obj.get_heat_url.return_value = None
        obj.url_for = mock.Mock(name="url_for")
        obj.url_for.return_value = "url_from_keystone"
        heat = obj.client()
        heat_cached = obj.client()
        self.assertEqual(heat, heat_cached)


class FooClientsPlugin(client_plugin.ClientPlugin):

    def _create(self):
        pass

    @property
    def auth_token(self):
        return '5678'


class ClientPluginTest(common.HeatTestCase):

    def test_get_client_option(self):
        con = mock.Mock()
        con.auth_url = "http://auth.example.com:5000/v2.0"
        con.tenant_id = "b363706f891f48019483f8bd6503c54b"
        con.auth_token = "3bcc3d3a03f44e3d8377f9247b0ad155"
        c = clients.Clients(con)
        con.clients = c

        plugin = FooClientsPlugin(con)

        cfg.CONF.set_override('ca_file', '/tmp/bar',
                              group='clients_heat')
        cfg.CONF.set_override('ca_file', '/tmp/foo',
                              group='clients')
        cfg.CONF.set_override('endpoint_type', 'internalURL',
                              group='clients')

        # check heat group
        self.assertEqual('/tmp/bar',
                         plugin._get_client_option('heat', 'ca_file'))

        # check fallback clients group for known client
        self.assertEqual('internalURL',
                         plugin._get_client_option('glance', 'endpoint_type'))

        # check fallback clients group for unknown client foo
        self.assertEqual('/tmp/foo',
                         plugin._get_client_option('foo', 'ca_file'))

    def test_auth_token(self):
        con = mock.Mock()
        con.auth_token = "1234"
        con.trust_id = None

        c = clients.Clients(con)
        con.clients = c

        plugin = FooClientsPlugin(con)

        # assert token is from plugin rather than context
        # even though both are set
        self.assertEqual('5678', plugin.auth_token)

    def test_url_for(self):
        con = mock.Mock()
        con.auth_token = "1234"
        con.trust_id = None

        c = clients.Clients(con)
        con.clients = c

        con.keystone_session = mock.Mock(name="keystone_Session")
        con.keystone_session.get_endpoint = mock.Mock(name="get_endpoint")
        con.keystone_session.get_endpoint.return_value = 'http://192.0.2.1/foo'
        plugin = FooClientsPlugin(con)

        self.assertEqual('http://192.0.2.1/foo',
                         plugin.url_for(service_type='foo'))
        self.assertTrue(con.keystone_session.get_endpoint.called)

    @mock.patch.object(generic, "Token", name="v3_token")
    def test_get_missing_service_catalog(self, mock_v3):
        class FakeKeystone(fake_ks.FakeKeystoneClient):
            def __init__(self):
                super(FakeKeystone, self).__init__()
                self.client = self
                self.version = 'v3'

        self.stub_keystoneclient(fake_client=FakeKeystone())
        con = mock.MagicMock(auth_token="1234", trust_id=None)
        c = clients.Clients(con)
        con.clients = c

        con.keystone_session = mock.Mock(name="keystone_session")
        get_endpoint_side_effects = [
            keystone_exc.EmptyCatalog(), None, 'http://192.0.2.1/bar']
        con.keystone_session.get_endpoint = mock.Mock(
            name="get_endpoint", side_effect=get_endpoint_side_effects)

        mock_token_obj = mock.Mock()
        mock_token_obj.get_auth_ref.return_value = {'catalog': 'foo'}
        mock_v3.return_value = mock_token_obj

        plugin = FooClientsPlugin(con)

        self.assertEqual('http://192.0.2.1/bar',
                         plugin.url_for(service_type='bar'))

    @mock.patch.object(generic, "Token", name="v3_token")
    def test_endpoint_not_found(self, mock_v3):
        class FakeKeystone(fake_ks.FakeKeystoneClient):
            def __init__(self):
                super(FakeKeystone, self).__init__()
                self.client = self
                self.version = 'v3'

        self.stub_keystoneclient(fake_client=FakeKeystone())
        con = mock.MagicMock(auth_token="1234", trust_id=None)
        c = clients.Clients(con)
        con.clients = c

        con.keystone_session = mock.Mock(name="keystone_session")
        get_endpoint_side_effects = [keystone_exc.EmptyCatalog(), None]
        con.keystone_session.get_endpoint = mock.Mock(
            name="get_endpoint", side_effect=get_endpoint_side_effects)

        mock_token_obj = mock.Mock()
        mock_v3.return_value = mock_token_obj
        mock_access = mock.Mock()
        self.patchobject(mock_token_obj, 'get_access',
                         return_value=mock_access)
        self.patchobject(mock_access, 'has_service_catalog',
                         return_value=False)

        plugin = FooClientsPlugin(con)

        self.assertRaises(keystone_exc.EndpointNotFound,
                          plugin.url_for, service_type='nonexistent')

    def test_abstract_create(self):
        con = mock.Mock()
        c = clients.Clients(con)
        con.clients = c

        self.assertRaises(TypeError, client_plugin.ClientPlugin, c)


class TestClientPluginsInitialise(common.HeatTestCase):

    @testcase.skip('skipped until keystone can read context auth_ref')
    def test_create_all_clients(self):
        con = mock.Mock()
        con.auth_url = "http://auth.example.com:5000/v2.0"
        con.tenant_id = "b363706f891f48019483f8bd6503c54b"
        con.auth_token = "3bcc3d3a03f44e3d8377f9247b0ad155"
        c = clients.Clients(con)
        con.clients = c

        for plugin_name in clients._mgr.names():
            self.assertTrue(clients.has_client(plugin_name))
            c.client(plugin_name)

    def test_create_all_client_plugins(self):
        plugin_types = clients._mgr.names()
        self.assertIsNotNone(plugin_types)

        con = mock.Mock()
        c = clients.Clients(con)
        con.clients = c

        for plugin_name in plugin_types:
            plugin = c.client_plugin(plugin_name)
            self.assertIsNotNone(plugin)
            self.assertEqual(c, plugin.clients)
            self.assertEqual(con, plugin.context)
            self.assertEqual({}, plugin._client_instances)
            self.assertTrue(clients.has_client(plugin_name))
            self.assertIsInstance(plugin.service_types, list)
            self.assertGreaterEqual(len(plugin.service_types), 1,
                                    'service_types is not defined for plugin')


class TestIsNotFound(common.HeatTestCase):

    scenarios = [
        ('ceilometer_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='ceilometer',
            exception=lambda: ceil_exc.HTTPNotFound(details='gone'),
        )),
        ('ceilometer_exception', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='ceilometer',
            exception=lambda: Exception()
        )),
        ('ceilometer_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            plugin='ceilometer',
            exception=lambda: ceil_exc.HTTPOverLimit(details='over'),
        )),
        ('ceilometer_conflict', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            plugin='ceilometer',
            exception=lambda: ceil_exc.HTTPConflict(),
        )),
        ('aodh_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='aodh',
            exception=lambda: aodh_exc.NotFound('not found'),
        )),
        ('aodh_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            plugin='aodh',
            exception=lambda: aodh_exc.OverLimit('over'),
        )),
        ('aodh_conflict', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            plugin='aodh',
            exception=lambda: aodh_exc.Conflict('conflict'),
        )),
        ('cinder_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='cinder',
            exception=lambda: cinder_exc.NotFound(code=404),
        )),
        ('cinder_exception', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='cinder',
            exception=lambda: Exception()
        )),
        ('cinder_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            plugin='cinder',
            exception=lambda: cinder_exc.OverLimit(code=413),
        )),
        ('cinder_conflict', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            plugin='cinder',
            exception=lambda: cinder_exc.ClientException(
                code=409, message='conflict'),
        )),
        ('glance_not_found_1', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='glance',
            exception=lambda: client_exception.EntityMatchNotFound(),
        )),
        ('glance_not_found_2', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='glance',
            exception=lambda: glance_exc.HTTPNotFound(),
        )),
        ('glance_exception', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='glance',
            exception=lambda: Exception()
        )),
        ('glance_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            plugin='glance',
            exception=lambda: glance_exc.HTTPOverLimit(details='over'),
        )),
        ('glance_conflict_1', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            plugin='glance',
            exception=lambda: glance_exc.Conflict(),
        )),
        ('heat_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='heat',
            exception=lambda: heat_exc.HTTPNotFound(message='gone'),
        )),
        ('heat_exception', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='heat',
            exception=lambda: Exception()
        )),
        ('heat_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            plugin='heat',
            exception=lambda: heat_exc.HTTPOverLimit(message='over'),
        )),
        ('heat_conflict', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            plugin='heat',
            exception=lambda: heat_exc.HTTPConflict(),
        )),
        ('keystone_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='keystone',
            exception=lambda: keystone_exc.NotFound(details='gone'),
        )),
        ('keystone_entity_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='keystone',
            exception=lambda: exception.EntityNotFound(),
        )),
        ('keystone_exception', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='keystone',
            exception=lambda: Exception()
        )),
        ('keystone_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            plugin='keystone',
            exception=lambda: keystone_exc.RequestEntityTooLarge(
                details='over'),
        )),
        ('keystone_conflict', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            plugin='keystone',
            exception=lambda: keystone_exc.Conflict(
                message='Conflict'),
        )),
        ('neutron_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='neutron',
            exception=lambda: neutron_exc.NotFound,
        )),
        ('neutron_network_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='neutron',
            exception=lambda: neutron_exc.NetworkNotFoundClient(),
        )),
        ('neutron_port_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='neutron',
            exception=lambda: neutron_exc.PortNotFoundClient(),
        )),
        ('neutron_status_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='neutron',
            exception=lambda: neutron_exc.NeutronClientException(
                status_code=404),
        )),
        ('neutron_exception', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='neutron',
            exception=lambda: Exception()
        )),
        ('neutron_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            plugin='neutron',
            exception=lambda: neutron_exc.NeutronClientException(
                status_code=413),
        )),
        ('neutron_conflict', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            plugin='neutron',
            exception=lambda: neutron_exc.Conflict(),
        )),
        ('nova_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            is_unprocessable_entity=False,
            plugin='nova',
            exception=lambda: fakes_nova.fake_exception(),
        )),
        ('nova_exception', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            is_unprocessable_entity=False,
            plugin='nova',
            exception=lambda: Exception()
        )),
        ('nova_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            is_unprocessable_entity=False,
            plugin='nova',
            exception=lambda: fakes_nova.fake_exception(413),
        )),
        ('nova_unprocessable_entity', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            is_unprocessable_entity=True,
            plugin='nova',
            exception=lambda: fakes_nova.fake_exception(422),
        )),
        ('nova_conflict', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            is_unprocessable_entity=False,
            plugin='nova',
            exception=lambda: fakes_nova.fake_exception(409),
        )),
        ('openstack_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            is_unprocessable_entity=False,
            plugin='openstack',
            exception=lambda: exceptions.ResourceNotFound,
        )),
        ('swift_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='swift',
            exception=lambda: swift_exc.ClientException(
                msg='gone', http_status=404),
        )),
        ('swift_exception', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='swift',
            exception=lambda: Exception()
        )),
        ('swift_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            plugin='swift',
            exception=lambda: swift_exc.ClientException(
                msg='ouch', http_status=413),
        )),
        ('swift_conflict', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            plugin='swift',
            exception=lambda: swift_exc.ClientException(
                msg='conflict', http_status=409),
        )),
        ('trove_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='trove',
            exception=lambda: troveclient.exceptions.NotFound(message='gone'),
        )),
        ('trove_exception', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='trove',
            exception=lambda: Exception()
        )),
        ('trove_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            plugin='trove',
            exception=lambda: troveclient.exceptions.RequestEntityTooLarge(
                message='over'),
        )),
        ('trove_conflict', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            plugin='trove',
            exception=lambda: troveclient.exceptions.Conflict(
                message='Conflict'),
        )),
        ('sahara_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='sahara',
            exception=lambda: sahara_base.APIException(
                error_message='gone1', error_code=404),
        )),
        ('sahara_exception', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='sahara',
            exception=lambda: Exception()
        )),
        ('sahara_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            plugin='sahara',
            exception=lambda: sahara_base.APIException(
                error_message='over1', error_code=413),
        )),
        ('sahara_conflict', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            plugin='sahara',
            exception=lambda: sahara_base.APIException(
                error_message='conflict1', error_code=409),
        )),
        ('zaqar_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='zaqar',
            exception=lambda: zaqar_exc.ResourceNotFound(),
        )),
        ('manila_not_found', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=False,
            plugin='manila',
            exception=lambda: manila_exc.NotFound(),
        )),
        ('manila_exception', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='manila',
            exception=lambda: Exception()
        )),
        ('manila_overlimit', dict(
            is_not_found=False,
            is_over_limit=True,
            is_client_exception=True,
            is_conflict=False,
            plugin='manila',
            exception=lambda: manila_exc.RequestEntityTooLarge(),
        )),
        ('manila_conflict', dict(
            is_not_found=False,
            is_over_limit=False,
            is_client_exception=True,
            is_conflict=True,
            plugin='manila',
            exception=lambda: manila_exc.Conflict(),
        )),
        ('mistral_not_found1', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='mistral',
            exception=lambda: mistral_base.APIException(404),
        )),
        ('mistral_not_found2', dict(
            is_not_found=True,
            is_over_limit=False,
            is_client_exception=False,
            is_conflict=False,
            plugin='mistral',
            exception=lambda: keystone_exc.NotFound(),
        )),
    ]

    def test_is_not_found(self):
        con = mock.Mock()
        c = clients.Clients(con)
        client_plugin = c.client_plugin(self.plugin)
        try:
            raise self.exception()
        except Exception as e:
            if self.is_not_found != client_plugin.is_not_found(e):
                raise

    def test_ignore_not_found(self):
        con = mock.Mock()
        c = clients.Clients(con)
        client_plugin = c.client_plugin(self.plugin)
        try:
            exp = self.exception()
            exp_class = exp.__class__
            raise exp
        except Exception as e:
            if self.is_not_found:
                client_plugin.ignore_not_found(e)
            else:
                self.assertRaises(exp_class,
                                  client_plugin.ignore_not_found,
                                  e)

    def test_ignore_not_found_context_manager(self):
        con = mock.Mock()
        c = clients.Clients(con)
        client_plugin = c.client_plugin(self.plugin)

        exp = self.exception()
        exp_class = exp.__class__

        def try_raise():
            with client_plugin.ignore_not_found:
                raise exp

        if self.is_not_found:
            try_raise()
        else:
            self.assertRaises(exp_class, try_raise)

    def test_ignore_conflict_and_not_found(self):
        con = mock.Mock()
        c = clients.Clients(con)
        client_plugin = c.client_plugin(self.plugin)
        try:
            exp = self.exception()
            exp_class = exp.__class__
            raise exp
        except Exception as e:
            if self.is_conflict or self.is_not_found:
                client_plugin.ignore_conflict_and_not_found(e)
            else:
                self.assertRaises(exp_class,
                                  client_plugin.ignore_conflict_and_not_found,
                                  e)

    def test_ignore_conflict_and_not_found_context_manager(self):
        con = mock.Mock()
        c = clients.Clients(con)
        client_plugin = c.client_plugin(self.plugin)

        exp = self.exception()
        exp_class = exp.__class__

        def try_raise():
            with client_plugin.ignore_conflict_and_not_found:
                raise exp

        if self.is_conflict or self.is_not_found:
            try_raise()
        else:
            self.assertRaises(exp_class, try_raise)

    def test_is_over_limit(self):
        con = mock.Mock()
        c = clients.Clients(con)
        client_plugin = c.client_plugin(self.plugin)
        try:
            raise self.exception()
        except Exception as e:
            if self.is_over_limit != client_plugin.is_over_limit(e):
                raise

    def test_is_client_exception(self):
        con = mock.Mock()
        c = clients.Clients(con)
        client_plugin = c.client_plugin(self.plugin)
        try:
            raise self.exception()
        except Exception as e:
            ice = self.is_client_exception
            actual = client_plugin.is_client_exception(e)
            if ice != actual:
                raise

    def test_is_conflict(self):
        con = mock.Mock()
        c = clients.Clients(con)
        client_plugin = c.client_plugin(self.plugin)
        try:
            raise self.exception()
        except Exception as e:
            if self.is_conflict != client_plugin.is_conflict(e):
                raise

    def test_is_unprocessable_entity(self):
        con = mock.Mock()
        c = clients.Clients(con)
        # only the 'nova' client plugin needs to check this exception
        if self.plugin == 'nova':
            client_plugin = c.client_plugin(self.plugin)
            try:
                raise self.exception()
            except Exception as e:
                iue = self.is_unprocessable_entity
                if iue != client_plugin.is_unprocessable_entity(e):
                    raise
heat-10.0.2/heat/tests/clients/test_mistral_client.py0000666000175000017500000000402413343562340022710 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
from mistralclient.auth import keystone
import mock

from heat.common import exception
from heat.engine.clients.os import mistral
from heat.tests import common
from heat.tests import utils


class MistralClientPluginTest(common.HeatTestCase):

    def test_create(self):
        self.patchobject(keystone.KeystoneAuthHandler, 'authenticate')
        context = utils.dummy_context()
        plugin = context.clients.client_plugin('mistral')
        client = plugin.client()
        self.assertIsNotNone(client.workflows)


class WorkflowConstraintTest(common.HeatTestCase):

    def setUp(self):
        super(WorkflowConstraintTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.mock_get_workflow_by_identifier = mock.Mock()
        self.ctx.clients.client_plugin(
            'mistral'
        ).get_workflow_by_identifier = self.mock_get_workflow_by_identifier
        self.constraint = mistral.WorkflowConstraint()

    def test_validation(self):
        self.mock_get_workflow_by_identifier.return_value = {}
        self.assertTrue(self.constraint.validate("foo", self.ctx))
        self.mock_get_workflow_by_identifier.assert_called_once_with("foo")

    def test_validation_error(self):
        exc = exception.EntityNotFound(entity='Workflow', name='bar')
        self.mock_get_workflow_by_identifier.side_effect = exc
        self.assertFalse(self.constraint.validate("bar", self.ctx))
        self.mock_get_workflow_by_identifier.assert_called_once_with("bar")
heat-10.0.2/heat/tests/clients/test_manila_client.py0000666000175000017500000000566613343562340022503 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
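
# The manila plugin's get_<entity> helpers resolve a name or ID by listing
# the matching manager and filtering client-side; the scenarios below check
# all three outcomes. A minimal sketch (assuming `plugin` is the manila
# client plugin):
#
#     plugin.get_share_type("unique_id")        # -> the matching object
#     plugin.get_share_type("non_exist")        # raises EntityNotFound
#     plugin.get_share_type("duplicated_name")  # raises NoUniqueMatch
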
import collections

from manilaclient import exceptions
import mock

from heat.common import exception as heat_exception
from heat.tests import common
from heat.tests import utils


class ManilaClientPluginTest(common.HeatTestCase):

    scenarios = [
        ('share_type', dict(manager_name="share_types",
                            method_name="get_share_type")),
        ('share_network', dict(manager_name="share_networks",
                               method_name="get_share_network")),
        ('share_snapshot', dict(manager_name="share_snapshots",
                                method_name="get_share_snapshot")),
        ('security_service', dict(manager_name="security_services",
                                  method_name="get_security_service")),
    ]

    def setUp(self):
        super(ManilaClientPluginTest, self).setUp()
        # mock client and plugin
        self.manila_client = mock.MagicMock()
        con = utils.dummy_context()
        c = con.clients
        self.manila_plugin = c.client_plugin('manila')
        self.manila_plugin.client = lambda: self.manila_client
        # prepare list of items to test search
        Item = collections.namedtuple('Item', ['id', 'name'])
        self.item_list = [
            Item(name="unique_name", id="unique_id"),
            Item(name="unique_id", id="i_am_checking_that_id_prior"),
            Item(name="duplicated_name", id="duplicate_test_one"),
            Item(name="duplicated_name", id="duplicate_test_second")]

    def test_create(self):
        context = utils.dummy_context()
        plugin = context.clients.client_plugin('manila')
        client = plugin.client()
        self.assertIsNotNone(client.security_services)
        self.assertEqual('http://server.test:5000/v3',
                         client.client.endpoint_url)

    def test_manila_get_method(self):
        # set item list as client output
        manager = getattr(self.manila_client, self.manager_name)
        manager.list.return_value = self.item_list
        # test that the get_<entity> method is searching correctly
        get_method = getattr(self.manila_plugin, self.method_name)
        self.assertEqual(get_method("unique_id").name, "unique_name")
        self.assertRaises(heat_exception.EntityNotFound,
                          get_method, "non_exist")
        self.assertRaises(exceptions.NoUniqueMatch,
                          get_method, "duplicated_name")
heat-10.0.2/heat/tests/clients/test_octavia_client.py0000666000175000017500000000157513343562340022673 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from heat.tests import common
from heat.tests import utils


class OctaviaClientPluginTest(common.HeatTestCase):

    def test_create(self):
        context = utils.dummy_context()
        plugin = context.clients.client_plugin('octavia')
        client = plugin.client()
        self.assertIsNotNone(client.endpoint)
heat-10.0.2/heat/tests/clients/test_zaqar_client.py0000666000175000017500000000340113343562340022351 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
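
# ZaqarEventSink.consume() wraps the payload in a message body with a fixed
# TTL, as asserted below. A minimal sketch (assuming `context` carries a
# working zaqar client):
#
#     sink = zaqar.ZaqarEventSink('myqueue')
#     sink.consume(context, {'hello': 'world'})
#     # posts {'body': {'hello': 'world'}, 'ttl': 3600} to 'myqueue'
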
import mock

from heat.engine.clients.os import zaqar
from heat.tests import common
from heat.tests import utils


class ZaqarClientPluginTest(common.HeatTestCase):

    def test_create(self):
        context = utils.dummy_context()
        plugin = context.clients.client_plugin('zaqar')
        client = plugin.client()
        self.assertIsNotNone(client.queue)

    def test_create_for_tenant(self):
        context = utils.dummy_context()
        plugin = context.clients.client_plugin('zaqar')
        client = plugin.create_for_tenant('other_tenant', 'token')
        self.assertEqual('other_tenant',
                         client.conf['auth_opts']['options']['os_project_id'])
        self.assertEqual('token',
                         client.conf['auth_opts']['options']['os_auth_token'])

    def test_event_sink(self):
        context = utils.dummy_context()
        client = context.clients.client('zaqar')
        fake_queue = mock.MagicMock()
        client.queue = lambda x, auto_create: fake_queue
        sink = zaqar.ZaqarEventSink('myqueue')
        sink.consume(context, {'hello': 'world'})
        fake_queue.post.assert_called_once_with(
            {'body': {'hello': 'world'}, 'ttl': 3600})
heat-10.0.2/heat/tests/clients/test_designate_client.py0000666000175000017500000002727313343562351023215 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from designateclient import exceptions as designate_exceptions
from designateclient import v1 as designate_client
import mock
import six

from heat.common import exception as heat_exception
from heat.engine.clients.os import designate as client
from heat.tests import common


class DesignateDomainConstraintTest(common.HeatTestCase):

    def test_expected_exceptions(self):
        self.assertEqual((heat_exception.EntityNotFound,),
                         client.DesignateDomainConstraint.expected_exceptions,
                         "DesignateDomainConstraint expected exceptions error")

    def test_constrain(self):
        constrain = client.DesignateDomainConstraint()
        client_mock = mock.MagicMock()
        client_plugin_mock = mock.MagicMock()
        client_plugin_mock.get_domain_id.return_value = None
        client_mock.client_plugin.return_value = client_plugin_mock

        self.assertIsNone(constrain.validate_with_client(client_mock,
                                                         'domain_1'))

        client_plugin_mock.get_domain_id.assert_called_once_with('domain_1')


class DesignateClientPluginTest(common.HeatTestCase):

    @mock.patch.object(designate_client, 'Client')
    def test_client(self, client_designate):
        context = mock.Mock()
        session = mock.Mock()
        context.keystone_session = session
        client_plugin = client.DesignateClientPlugin(context)
        self.patchobject(client_plugin, '_get_region_name',
                         return_value='region1')
        client_plugin.client()

        # Make sure proper client is created with expected args
        client_designate.assert_called_once_with(
            endpoint_type='publicURL', service_type='dns',
            session=session, region_name='region1'
        )


class DesignateClientPluginDomainTest(common.HeatTestCase):

    sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd152'
    sample_name = 'test-domain.com'

    def _get_mock_domain(self):
        domain = mock.MagicMock()
        domain.id = self.sample_uuid
        domain.name = self.sample_name
        return domain

    def setUp(self):
        super(DesignateClientPluginDomainTest, self).setUp()
        self._client = mock.MagicMock()
        self.client_plugin = client.DesignateClientPlugin(
            context=mock.MagicMock()
        )

    @mock.patch.object(client.DesignateClientPlugin, 'client')
    def test_get_domain_id(self, client_designate):
        self._client.domains.get.return_value = self._get_mock_domain()
        client_designate.return_value = self._client

        self.assertEqual(self.sample_uuid,
                         self.client_plugin.get_domain_id(self.sample_uuid))
        self._client.domains.get.assert_called_once_with(
            self.sample_uuid)

    @mock.patch.object(client.DesignateClientPlugin, 'client')
    def test_get_domain_id_not_found(self, client_designate):
        self._client.domains.get.side_effect = (designate_exceptions
                                                .NotFound)
        client_designate.return_value = self._client

        ex = self.assertRaises(heat_exception.EntityNotFound,
                               self.client_plugin.get_domain_id,
                               self.sample_uuid)
        msg = ("The Designate Domain (%(name)s) could not be found."
               % {'name': self.sample_uuid})
        self.assertEqual(msg, six.text_type(ex))
        self._client.domains.get.assert_called_once_with(
            self.sample_uuid)

    @mock.patch.object(client.DesignateClientPlugin, 'client')
    def test_get_domain_id_by_name(self, client_designate):
        self._client.domains.get.side_effect = (designate_exceptions
                                                .NotFound)
        self._client.domains.list.return_value = [self._get_mock_domain()]
        client_designate.return_value = self._client

        self.assertEqual(self.sample_uuid,
                         self.client_plugin.get_domain_id(self.sample_name))

        self._client.domains.get.assert_called_once_with(
            self.sample_name)
        self._client.domains.list.assert_called_once_with()

    @mock.patch.object(client.DesignateClientPlugin, 'client')
    def test_get_domain_id_by_name_not_found(self, client_designate):
        self._client.domains.get.side_effect = (designate_exceptions
                                                .NotFound)
        self._client.domains.list.return_value = []
        client_designate.return_value = self._client

        ex = self.assertRaises(heat_exception.EntityNotFound,
                               self.client_plugin.get_domain_id,
                               self.sample_name)
        msg = ("The Designate Domain (%(name)s) could not be found."
               % {'name': self.sample_name})
        self.assertEqual(msg, six.text_type(ex))

        self._client.domains.get.assert_called_once_with(
            self.sample_name)
        self._client.domains.list.assert_called_once_with()

    @mock.patch.object(client.DesignateClientPlugin, 'client')
    @mock.patch('designateclient.v1.domains.Domain')
    def test_domain_create(self, mock_domain, client_designate):
        self._client.domains.create.return_value = None
        client_designate.return_value = self._client

        domain = dict(
            name='test-domain.com',
            description='updated description',
            ttl=4200,
            email='xyz@test-domain.com'
        )

        mock_sample_domain = mock.Mock()
        mock_domain.return_value = mock_sample_domain

        self.client_plugin.domain_create(**domain)

        # Make sure domain entity is created with right arguments
        mock_domain.assert_called_once_with(**domain)
        self._client.domains.create.assert_called_once_with(
            mock_sample_domain)

    @mock.patch.object(client.DesignateClientPlugin, 'client')
    def test_domain_update(self, client_designate):
        self._client.domains.update.return_value = None
        mock_domain = self._get_mock_domain()
        self._client.domains.get.return_value = mock_domain
        client_designate.return_value = self._client

        domain = dict(
            id='sample-id',
            description='updated description',
            ttl=4200,
            email='xyz@test-domain.com'
        )

        self.client_plugin.domain_update(**domain)

        self._client.domains.get.assert_called_once_with(
            mock_domain.id)

        for key in domain.keys():
            setattr(mock_domain, key, domain[key])

        self._client.domains.update.assert_called_once_with(
            mock_domain)


class DesignateClientPluginRecordTest(common.HeatTestCase):

    sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd152'
    sample_domain_id = '477e8273-60a7-4c41-b683-fdb0bc7cd153'

    def _get_mock_record(self):
        record = mock.MagicMock()
        record.id = self.sample_uuid
        record.domain_id = self.sample_domain_id
        return record

    def setUp(self):
        super(DesignateClientPluginRecordTest, self).setUp()
        self._client = mock.MagicMock()
        self.client_plugin = client.DesignateClientPlugin(
            context=mock.MagicMock()
        )
        self.client_plugin.get_domain_id = mock.Mock(
            return_value=self.sample_domain_id)

    @mock.patch.object(client.DesignateClientPlugin, 'client')
    @mock.patch('designateclient.v1.records.Record')
    def test_record_create(self, mock_record, client_designate):
        self._client.records.create.return_value = None
        client_designate.return_value = self._client

        record = dict(
            name='test-record.com',
            description='updated description',
            ttl=4200,
            type='',
            priority=1,
            data='1.1.1.1',
            domain=self.sample_domain_id
        )

        mock_sample_record = mock.Mock()
        mock_record.return_value = mock_sample_record

        self.client_plugin.record_create(**record)

        # Make sure record entity is created with right arguments
        domain_id = record.pop('domain')
        mock_record.assert_called_once_with(**record)
        self._client.records.create.assert_called_once_with(
            domain_id, mock_sample_record)

    @mock.patch.object(client.DesignateClientPlugin, 'client')
    @mock.patch('designateclient.v1.records.Record')
    def test_record_update(self, mock_record, client_designate):
        self._client.records.update.return_value = None
        mock_record = self._get_mock_record()
        self._client.records.get.return_value = mock_record
        client_designate.return_value = self._client

        record = dict(
            id=self.sample_uuid,
            name='test-record.com',
            description='updated description',
            ttl=4200,
            type='',
            priority=1,
            data='1.1.1.1',
            domain=self.sample_domain_id
        )

        self.client_plugin.record_update(**record)

        self._client.records.get.assert_called_once_with(
            self.sample_domain_id, self.sample_uuid)

        for key in record.keys():
            setattr(mock_record, key, record[key])

        self._client.records.update.assert_called_once_with(
            self.sample_domain_id, mock_record)

    @mock.patch.object(client.DesignateClientPlugin, 'client')
    @mock.patch('designateclient.v1.records.Record')
    def test_record_delete(self, mock_record, client_designate):
        self._client.records.delete.return_value = None
        client_designate.return_value = self._client

        record = dict(
            id=self.sample_uuid,
            domain=self.sample_domain_id
        )

        self.client_plugin.record_delete(**record)

        self._client.records.delete.assert_called_once_with(
            self.sample_domain_id, self.sample_uuid)

    @mock.patch.object(client.DesignateClientPlugin, 'client')
    @mock.patch('designateclient.v1.records.Record')
    def test_record_show(self, mock_record, client_designate):
        self._client.records.get.return_value = None
        client_designate.return_value = self._client

        record = dict(
            id=self.sample_uuid,
            domain=self.sample_domain_id
        )

        self.client_plugin.record_show(**record)

        self._client.records.get.assert_called_once_with(
            self.sample_domain_id, self.sample_uuid)


class DesignateZoneConstraintTest(common.HeatTestCase):

    def test_expected_exceptions(self):
        self.assertEqual((heat_exception.EntityNotFound,),
                         client.DesignateZoneConstraint.expected_exceptions,
                         "DesignateZoneConstraint expected exceptions error")

    def test_constrain(self):
        constrain = client.DesignateZoneConstraint()
        client_mock = mock.MagicMock()
        client_plugin_mock = mock.MagicMock()
        client_plugin_mock.get_zone_id.return_value = None
        client_mock.client_plugin.return_value = client_plugin_mock

        self.assertIsNone(constrain.validate_with_client(client_mock,
                                                         'zone_1'))

        client_plugin_mock.get_zone_id.assert_called_once_with('zone_1')
heat-10.0.2/heat/tests/clients/test_aodh_client.py0000666000175000017500000000156413343562340022156 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from heat.tests import common
from heat.tests import utils


class AodhClientPluginTest(common.HeatTestCase):

    def test_create(self):
        context = utils.dummy_context()
        plugin = context.clients.client_plugin('aodh')
        client = plugin.client()
        self.assertIsNotNone(client.alarm)
heat-10.0.2/heat/tests/clients/test_progress.py0000666000175000017500000000371213343562340021546 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
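
# ServerUpdateProgress derives its defaults from the constructor arguments:
# the checker name is 'check_<handler>' and both handler and checker receive
# (server_id,) unless *_extra overrides are given. A minimal sketch
# (handler name 'resize' is illustrative only):
#
#     prg = progress.ServerUpdateProgress('1234', 'resize')
#     prg.checker       # -> 'check_resize'
#     prg.handler_args  # -> ('1234',)
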
from heat.engine.clients import progress
from heat.tests import common


class ServerUpdateProgressObjectTest(common.HeatTestCase):

    def setUp(self):
        super(ServerUpdateProgressObjectTest, self).setUp()
        self.server_id = '1234'
        self.handler = 'test'

    def _assert_common(self, prg):
        self.assertEqual(self.server_id, prg.server_id)
        self.assertEqual(self.handler, prg.handler)
        self.assertEqual('check_%s' % self.handler, prg.checker)
        self.assertFalse(prg.called)
        self.assertFalse(prg.complete)

    def test_extra_all_defaults(self):
        prg = progress.ServerUpdateProgress(self.server_id, self.handler)
        self._assert_common(prg)
        self.assertEqual((self.server_id,), prg.handler_args)
        self.assertEqual((self.server_id,), prg.checker_args)
        self.assertEqual({}, prg.handler_kwargs)
        self.assertEqual({}, prg.checker_kwargs)

    def test_handler_extra_kwargs_missing(self):
        handler_extra = {'args': ()}
        prg = progress.ServerUpdateProgress(self.server_id, self.handler,
                                            handler_extra=handler_extra)
        self._assert_common(prg)
        self.assertEqual((self.server_id,), prg.handler_args)
        self.assertEqual((self.server_id,), prg.checker_args)
        self.assertEqual({}, prg.handler_kwargs)
        self.assertEqual({}, prg.checker_kwargs)
heat-10.0.2/heat/tests/clients/__init__.py0000666000175000017500000000000013343562340020365 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/clients/test_cinder_client.py0000666000175000017500000001735313343562340022510 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
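
# The cinder plugin negotiates the API version by probing the keystone
# session for an endpoint: v3 is preferred, v2 is used if the first probe
# raises EndpointNotFound, and exception.Error is raised when neither is
# available (the exact service type names are not pinned here; see
# CinderClientAPIVersionTest below for the observable behaviour).
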
"""Tests for :module:'heat.engine.clients.os.cinder'.""" import uuid from cinderclient import exceptions as cinder_exc from keystoneauth1 import exceptions as ks_exceptions import mock from heat.common import exception from heat.engine.clients.os import cinder from heat.tests import common from heat.tests import utils class CinderClientPluginTest(common.HeatTestCase): """Basic tests for :module:'heat.engine.clients.os.cinder'.""" def setUp(self): super(CinderClientPluginTest, self).setUp() self.cinder_client = mock.MagicMock() con = utils.dummy_context() c = con.clients self.cinder_plugin = c.client_plugin('cinder') self.cinder_plugin.client = lambda: self.cinder_client def test_get_volume(self): """Tests the get_volume function.""" volume_id = str(uuid.uuid4()) my_volume = mock.MagicMock() self.cinder_client.volumes.get.return_value = my_volume self.assertEqual(my_volume, self.cinder_plugin.get_volume(volume_id)) self.cinder_client.volumes.get.assert_called_once_with(volume_id) def test_get_snapshot(self): """Tests the get_volume_snapshot function.""" snapshot_id = str(uuid.uuid4()) my_snapshot = mock.MagicMock() self.cinder_client.volume_snapshots.get.return_value = my_snapshot self.assertEqual(my_snapshot, self.cinder_plugin.get_volume_snapshot(snapshot_id)) self.cinder_client.volume_snapshots.get.assert_called_once_with( snapshot_id) class VolumeConstraintTest(common.HeatTestCase): def setUp(self): super(VolumeConstraintTest, self).setUp() self.ctx = utils.dummy_context() self.mock_get_volume = mock.Mock() self.ctx.clients.client_plugin( 'cinder').get_volume = self.mock_get_volume self.constraint = cinder.VolumeConstraint() def test_validation(self): self.mock_get_volume.return_value = None self.assertTrue(self.constraint.validate("foo", self.ctx)) def test_validation_error(self): self.mock_get_volume.side_effect = exception.EntityNotFound( entity='Volume', name='bar') self.assertFalse(self.constraint.validate("bar", self.ctx)) class VolumeSnapshotConstraintTest(common.HeatTestCase): def setUp(self): super(VolumeSnapshotConstraintTest, self).setUp() self.ctx = utils.dummy_context() self.mock_get_snapshot = mock.Mock() self.ctx.clients.client_plugin( 'cinder').get_volume_snapshot = self.mock_get_snapshot self.constraint = cinder.VolumeSnapshotConstraint() def test_validation(self): self.mock_get_snapshot.return_value = 'snapshot' self.assertTrue(self.constraint.validate("foo", self.ctx)) def test_validation_error(self): self.mock_get_snapshot.side_effect = exception.EntityNotFound( entity='VolumeSnapshot', name='bar') self.assertFalse(self.constraint.validate("bar", self.ctx)) class VolumeTypeConstraintTest(common.HeatTestCase): def setUp(self): super(VolumeTypeConstraintTest, self).setUp() self.ctx = utils.dummy_context() self.mock_get_volume_type = mock.Mock() self.ctx.clients.client_plugin( 'cinder').get_volume_type = self.mock_get_volume_type self.constraint = cinder.VolumeTypeConstraint() def test_validation(self): self.mock_get_volume_type.return_value = 'volume_type' self.assertTrue(self.constraint.validate("foo", self.ctx)) def test_validation_error(self): self.mock_get_volume_type.side_effect = exception.EntityNotFound( entity='VolumeType', name='bar') self.assertFalse(self.constraint.validate("bar", self.ctx)) class VolumeBackupConstraintTest(common.HeatTestCase): def setUp(self): super(VolumeBackupConstraintTest, self).setUp() self.ctx = utils.dummy_context() self.mock_get_volume_backup = mock.Mock() self.ctx.clients.client_plugin( 'cinder').get_volume_backup = 
self.mock_get_volume_backup self.constraint = cinder.VolumeBackupConstraint() def test_validation(self): self.mock_get_volume_backup.return_value = 'volume_backup' self.assertTrue(self.constraint.validate("foo", self.ctx)) def test_validation_error(self): ex = exception.EntityNotFound(entity='Volume backup', name='bar') self.mock_get_volume_backup.side_effect = ex self.assertFalse(self.constraint.validate("bar", self.ctx)) class QoSSpecsConstraintTest(common.HeatTestCase): def setUp(self): super(QoSSpecsConstraintTest, self).setUp() self.ctx = utils.dummy_context() self.mock_get_qos_specs = mock.Mock() self.ctx.clients.client_plugin( 'cinder').get_qos_specs = self.mock_get_qos_specs self.constraint = cinder.QoSSpecsConstraint() def test_validation(self): self.mock_get_qos_specs.return_value = None self.assertTrue(self.constraint.validate("foo", self.ctx)) def test_validation_error(self): self.mock_get_qos_specs.side_effect = cinder_exc.NotFound(404) self.assertFalse(self.constraint.validate("bar", self.ctx)) class CinderClientAPIVersionTest(common.HeatTestCase): def test_cinder_api_v3(self): ctx = utils.dummy_context() self.patchobject(ctx.keystone_session, 'get_endpoint') client = ctx.clients.client('cinder') self.assertEqual('3.0', client.version) def test_cinder_api_v2(self): ctx = utils.dummy_context() self.patchobject(ctx.keystone_session, 'get_endpoint', side_effect=[ks_exceptions.EndpointNotFound, None]) client = ctx.clients.client('cinder') self.assertEqual('2.0', client.version) def test_cinder_api_not_supported(self): ctx = utils.dummy_context() self.patchobject(ctx.keystone_session, 'get_endpoint', side_effect=[ks_exceptions.EndpointNotFound, ks_exceptions.EndpointNotFound]) self.assertRaises(exception.Error, ctx.clients.client, 'cinder') class CinderClientPluginExtensionsTest(CinderClientPluginTest): """Tests for extensions in cinderclient.""" def test_has_no_extensions(self): self.cinder_client.list_extensions.show_all.return_value = [] self.assertFalse(self.cinder_plugin.has_extension( "encryption")) def test_has_no_interface_extensions(self): mock_extension = mock.Mock() p = mock.PropertyMock(return_value='os-xxxx') type(mock_extension).alias = p self.cinder_client.list_extensions.show_all.return_value = [ mock_extension] self.assertFalse(self.cinder_plugin.has_extension( "encryption")) def test_has_os_interface_extension(self): mock_extension = mock.Mock() p = mock.PropertyMock(return_value='encryption') type(mock_extension).alias = p self.cinder_client.list_extensions.show_all.return_value = [ mock_extension] self.assertTrue(self.cinder_plugin.has_extension( "encryption")) heat-10.0.2/heat/tests/clients/test_barbican_client.py0000666000175000017500000000664613343562340023012 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
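
# get_secret_by_ref() translates a barbican 404 (HTTPClientError with
# status_code=404) into heat's EntityNotFound, so resource code can rely on
# the generic ignore_not_found machinery; the tests below pin both the hit
# and the miss paths.
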
import collections

from barbicanclient import exceptions
import mock

from heat.common import exception
from heat.engine.clients.os import barbican
from heat.tests import common
from heat.tests import utils


class BarbicanClientPluginTest(common.HeatTestCase):

    def setUp(self):
        super(BarbicanClientPluginTest, self).setUp()
        self.barbican_client = mock.MagicMock()
        con = utils.dummy_context()
        c = con.clients
        self.barbican_plugin = c.client_plugin('barbican')
        self.barbican_plugin.client = lambda: self.barbican_client

    def test_create(self):
        context = utils.dummy_context()
        plugin = context.clients.client_plugin('barbican')
        client = plugin.client()
        self.assertIsNotNone(client.orders)

    def test_get_secret_by_ref(self):
        secret = collections.namedtuple('Secret', ['name'])('foo')
        self.barbican_client.secrets.get.return_value = secret
        self.assertEqual(secret,
                         self.barbican_plugin.get_secret_by_ref("secret"))

    def test_get_secret_by_ref_not_found(self):
        exc = exceptions.HTTPClientError(message="Not Found", status_code=404)
        self.barbican_client.secrets.get.side_effect = exc
        self.assertRaises(
            exception.EntityNotFound,
            self.barbican_plugin.get_secret_by_ref, "secret")


class SecretConstraintTest(common.HeatTestCase):

    def setUp(self):
        super(SecretConstraintTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.mock_get_secret_by_ref = mock.Mock()
        self.ctx.clients.client_plugin(
            'barbican').get_secret_by_ref = self.mock_get_secret_by_ref
        self.constraint = barbican.SecretConstraint()

    def test_validation(self):
        self.mock_get_secret_by_ref.return_value = {}
        self.assertTrue(self.constraint.validate("foo", self.ctx))

    def test_validation_error(self):
        self.mock_get_secret_by_ref.side_effect = exception.EntityNotFound(
            entity='Secret', name='bar')
        self.assertFalse(self.constraint.validate("bar", self.ctx))


class ContainerConstraintTest(common.HeatTestCase):

    def setUp(self):
        super(ContainerConstraintTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.mock_get_container_by_ref = mock.Mock()
        self.ctx.clients.client_plugin(
            'barbican').get_container_by_ref = self.mock_get_container_by_ref
        self.constraint = barbican.ContainerConstraint()

    def test_validation(self):
        self.mock_get_container_by_ref.return_value = {}
        self.assertTrue(self.constraint.validate("foo", self.ctx))

    def test_validation_error(self):
        self.mock_get_container_by_ref.side_effect = exception.EntityNotFound(
            entity='Container', name='bar')
        self.assertFalse(self.constraint.validate("bar", self.ctx))
heat-10.0.2/heat/tests/clients/test_nova_client.py0000666000175000017500000006160113343562351022206 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
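
# Two plugin behaviours worth noting before reading these tests:
# find_flavor_by_name_or_id() tries flavors.get() first and falls back to
# flavors.find(), and refresh_server()/fetch_server() tolerate OverLimit,
# 500 and 503 errors while re-raising anything else (e.g. a 501).
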
"""Tests for :module:'heat.engine.clients.os.nova'.""" import collections import uuid import mock from novaclient import client as nc from novaclient import exceptions as nova_exceptions from oslo_config import cfg from oslo_serialization import jsonutils as json import requests import six from heat.common import exception from heat.engine.clients.os import nova from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils class NovaClientPluginTestCase(common.HeatTestCase): def setUp(self): super(NovaClientPluginTestCase, self).setUp() self.nova_client = mock.MagicMock() con = utils.dummy_context() c = con.clients self.nova_plugin = c.client_plugin('nova') self.nova_plugin.client = lambda: self.nova_client class NovaClientPluginTest(NovaClientPluginTestCase): """Basic tests for :module:'heat.engine.clients.os.nova'.""" def test_create(self): context = utils.dummy_context() ext_mock = self.patchobject(nc, 'discover_extensions') plugin = context.clients.client_plugin('nova') client = plugin.client() ext_mock.assert_called_once_with('2.1') self.assertIsNotNone(client.servers) def test_v2_26_create(self): ctxt = utils.dummy_context() ext_mock = self.patchobject(nc, 'discover_extensions') self.patchobject(nc, 'Client', return_value=mock.Mock()) plugin = ctxt.clients.client_plugin('nova') plugin.client(version=plugin.V2_26) ext_mock.assert_called_once_with(plugin.V2_26) def test_v2_26_create_failed(self): ctxt = utils.dummy_context() self.patchobject(nc, 'discover_extensions') plugin = ctxt.clients.client_plugin('nova') client_stub = mock.Mock() client_stub.versions.get_current.side_effect = [ nova_exceptions.NotAcceptable(406)] self.patchobject(nc, 'Client', return_value=client_stub) self.assertRaises(exception.InvalidServiceVersion, plugin.client, plugin.V2_26) def test_get_ip(self): my_image = mock.MagicMock() my_image.addresses = { 'public': [{'version': 4, 'addr': '4.5.6.7'}, {'version': 6, 'addr': '2401:1801:7800:0101:c058:dd33:ff18:04e6'}], 'private': [{'version': 4, 'addr': '10.13.12.13'}]} expected = '4.5.6.7' observed = self.nova_plugin.get_ip(my_image, 'public', 4) self.assertEqual(expected, observed) expected = '10.13.12.13' observed = self.nova_plugin.get_ip(my_image, 'private', 4) self.assertEqual(expected, observed) expected = '2401:1801:7800:0101:c058:dd33:ff18:04e6' observed = self.nova_plugin.get_ip(my_image, 'public', 6) self.assertEqual(expected, observed) def test_find_flavor_by_name_or_id(self): """Tests the find_flavor_by_name_or_id function.""" flav_id = str(uuid.uuid4()) flav_name = 'X-Large' my_flavor = mock.MagicMock() my_flavor.name = flav_name my_flavor.id = flav_id self.nova_client.flavors.get.side_effect = [ my_flavor, nova_exceptions.NotFound(''), nova_exceptions.NotFound(''), ] self.nova_client.flavors.find.side_effect = [ my_flavor, nova_exceptions.NotFound(''), ] self.assertEqual(flav_id, self.nova_plugin.find_flavor_by_name_or_id(flav_id)) self.assertEqual(flav_id, self.nova_plugin.find_flavor_by_name_or_id(flav_name)) self.assertRaises(nova_exceptions.ClientException, self.nova_plugin.find_flavor_by_name_or_id, 'noflavor') self.assertEqual(3, self.nova_client.flavors.get.call_count) self.assertEqual(2, self.nova_client.flavors.find.call_count) def test_get_host(self): """Tests the get_host function.""" my_host_name = 'myhost' my_host = mock.MagicMock() my_host.host_name = my_host_name my_host.service = 'compute' wrong_host = mock.MagicMock() wrong_host.host_name = 'wrong_host' wrong_host.service = 
'compute' self.nova_client.hosts.list.side_effect = [ [my_host], [wrong_host], exception.EntityNotFound(entity='Host', name='nohost') ] self.assertEqual(my_host, self.nova_plugin.get_host(my_host_name)) self.assertRaises(exception.EntityNotFound, self.nova_plugin.get_host, my_host_name) self.assertRaises(exception.EntityNotFound, self.nova_plugin.get_host, 'nohost') self.assertEqual(3, self.nova_client.hosts.list.call_count) calls = [mock.call(), mock.call(), mock.call()] self.assertEqual(calls, self.nova_client.hosts.list.call_args_list) def test_get_keypair(self): """Tests the get_keypair function.""" my_pub_key = 'a cool public key string' my_key_name = 'mykey' my_key = mock.MagicMock() my_key.public_key = my_pub_key my_key.name = my_key_name self.nova_client.keypairs.get.side_effect = [ my_key, nova_exceptions.NotFound(404)] self.assertEqual(my_key, self.nova_plugin.get_keypair(my_key_name)) self.assertRaises(exception.EntityNotFound, self.nova_plugin.get_keypair, 'notakey') calls = [mock.call(my_key_name), mock.call('notakey')] self.nova_client.keypairs.get.assert_has_calls(calls) def test_get_server(self): """Tests the get_server function.""" my_server = mock.MagicMock() self.nova_client.servers.get.side_effect = [ my_server, nova_exceptions.NotFound(404)] self.assertEqual(my_server, self.nova_plugin.get_server('my_server')) self.assertRaises(exception.EntityNotFound, self.nova_plugin.get_server, 'idontexist') calls = [mock.call('my_server'), mock.call('idontexist')] self.nova_client.servers.get.assert_has_calls(calls) def test_get_status(self): server = self.m.CreateMockAnything() server.status = 'ACTIVE' observed = self.nova_plugin.get_status(server) self.assertEqual('ACTIVE', observed) server.status = 'ACTIVE(STATUS)' observed = self.nova_plugin.get_status(server) self.assertEqual('ACTIVE', observed) def _absolute_limits(self): max_personality = self.m.CreateMockAnything() max_personality.name = 'maxPersonality' max_personality.value = 5 max_personality_size = self.m.CreateMockAnything() max_personality_size.name = 'maxPersonalitySize' max_personality_size.value = 10240 max_server_meta = self.m.CreateMockAnything() max_server_meta.name = 'maxServerMeta' max_server_meta.value = 3 yield max_personality yield max_personality_size yield max_server_meta def test_absolute_limits_success(self): limits = mock.Mock() limits.absolute = self._absolute_limits() self.nova_client.limits.get.return_value = limits self.nova_plugin.absolute_limits() def test_absolute_limits_retry(self): limits = mock.Mock() limits.absolute = self._absolute_limits() self.nova_client.limits.get.side_effect = [ requests.ConnectionError, requests.ConnectionError, limits] self.nova_plugin.absolute_limits() self.assertEqual(3, self.nova_client.limits.get.call_count) def test_absolute_limits_failure(self): limits = mock.Mock() limits.absolute = self._absolute_limits() self.nova_client.limits.get.side_effect = [ requests.ConnectionError, requests.ConnectionError, requests.ConnectionError] self.assertRaises(requests.ConnectionError, self.nova_plugin.absolute_limits) class NovaClientPluginRefreshServerTest(NovaClientPluginTestCase): msg = ("ClientException: The server has either erred or is " "incapable of performing the requested operation.") scenarios = [ ('successful_refresh', dict( value=None, e_raise=False)), ('overlimit_error', dict( value=nova_exceptions.OverLimit(413, "limit reached"), e_raise=False)), ('500_error', dict( value=nova_exceptions.ClientException(500, msg), e_raise=False)), ('503_error', dict( 
value=nova_exceptions.ClientException(503, msg),
            e_raise=False)),
        ('unhandled_exception', dict(
            value=nova_exceptions.ClientException(501, msg),
            e_raise=True)),
    ]

    def test_refresh(self):
        server = mock.MagicMock()
        server.get.side_effect = [self.value]
        if self.e_raise:
            self.assertRaises(nova_exceptions.ClientException,
                              self.nova_plugin.refresh_server, server)
        else:
            self.assertIsNone(self.nova_plugin.refresh_server(server))
        server.get.assert_called_once_with()


class NovaClientPluginFetchServerTest(NovaClientPluginTestCase):

    server = mock.Mock()
    # set explicitly, as id and name have internal meaning in mock.Mock
    server.id = '1234'
    server.name = 'test_fetch_server'
    msg = ("ClientException: The server has either erred or is "
           "incapable of performing the requested operation.")

    scenarios = [
        ('successful_refresh', dict(
            value=server,
            e_raise=False)),
        ('overlimit_error', dict(
            value=nova_exceptions.OverLimit(413, "limit reached"),
            e_raise=False)),
        ('500_error', dict(
            value=nova_exceptions.ClientException(500, msg),
            e_raise=False)),
        ('503_error', dict(
            value=nova_exceptions.ClientException(503, msg),
            e_raise=False)),
        ('unhandled_exception', dict(
            value=nova_exceptions.ClientException(501, msg),
            e_raise=True)),
    ]

    def test_fetch_server(self):
        self.nova_client.servers.get.side_effect = [self.value]
        if self.e_raise:
            self.assertRaises(nova_exceptions.ClientException,
                              self.nova_plugin.fetch_server, self.server.id)
        elif isinstance(self.value, mock.Mock):
            self.assertEqual(self.value,
                             self.nova_plugin.fetch_server(self.server.id))
        else:
            self.assertIsNone(self.nova_plugin.fetch_server(self.server.id))
        self.nova_client.servers.get.assert_called_once_with(self.server.id)


class NovaClientPluginCheckActiveTest(NovaClientPluginTestCase):

    scenarios = [
        ('active', dict(
            status='ACTIVE',
            e_raise=False)),
        ('deferred', dict(
            status='BUILD',
            e_raise=False)),
        ('error', dict(
            status='ERROR',
            e_raise=exception.ResourceInError)),
        ('unknown', dict(
            status='VIKINGS!',
            e_raise=exception.ResourceUnknownStatus))
    ]

    def setUp(self):
        super(NovaClientPluginCheckActiveTest, self).setUp()
        self.server = mock.Mock()
        self.server.id = '1234'
        self.server.status = self.status
        self.r_mock = self.patchobject(self.nova_plugin, 'refresh_server',
                                       return_value=None)
        self.f_mock = self.patchobject(self.nova_plugin, 'fetch_server',
                                       return_value=self.server)

    def test_check_active_with_object(self):
        if self.e_raise:
            self.assertRaises(self.e_raise,
                              self.nova_plugin._check_active, self.server)
            self.r_mock.assert_called_once_with(self.server)
        elif self.status in self.nova_plugin.deferred_server_statuses:
            self.assertFalse(self.nova_plugin._check_active(self.server))
            self.r_mock.assert_called_once_with(self.server)
        else:
            self.assertTrue(self.nova_plugin._check_active(self.server))
            self.assertEqual(0, self.r_mock.call_count)
        self.assertEqual(0, self.f_mock.call_count)

    def test_check_active_with_string(self):
        if self.e_raise:
            self.assertRaises(self.e_raise,
                              self.nova_plugin._check_active, self.server.id)
        elif self.status in self.nova_plugin.deferred_server_statuses:
            self.assertFalse(self.nova_plugin._check_active(self.server.id))
        else:
            self.assertTrue(self.nova_plugin._check_active(self.server.id))
        self.f_mock.assert_called_once_with(self.server.id)
        self.assertEqual(0, self.r_mock.call_count)

    def test_check_active_with_string_unavailable(self):
        self.f_mock.return_value = None
        self.assertFalse(self.nova_plugin._check_active(self.server.id))
        self.f_mock.assert_called_once_with(self.server.id)
        self.assertEqual(0, self.r_mock.call_count)


class NovaClientPluginUserdataTest(NovaClientPluginTestCase):

    def test_build_userdata(self):
        """Tests the build_userdata function."""
        cfg.CONF.set_override('heat_metadata_server_url',
                              'http://server.test:123')
        cfg.CONF.set_override('instance_connection_is_secure', False)
        cfg.CONF.set_override(
            'instance_connection_https_validate_certificates', False)
        data = self.nova_plugin.build_userdata({})
        self.assertIn("Content-Type: text/cloud-config;", data)
        self.assertIn("Content-Type: text/cloud-boothook;", data)
        self.assertIn("Content-Type: text/part-handler;", data)
        self.assertIn("Content-Type: text/x-cfninitdata;", data)
        self.assertIn("Content-Type: text/x-shellscript;", data)
        self.assertIn("http://server.test:123", data)
        self.assertIn("[Boto]", data)

    def test_build_userdata_without_instance_user(self):
        """Don't add a custom instance user when not requested."""
        cfg.CONF.set_override('heat_metadata_server_url',
                              'http://server.test:123')
        data = self.nova_plugin.build_userdata({}, instance_user=None)
        self.assertNotIn('user: ', data)
        self.assertNotIn('useradd', data)
        self.assertNotIn('ec2-user', data)

    def test_build_userdata_with_instance_user(self):
        """Add a custom instance user."""
        cfg.CONF.set_override('heat_metadata_server_url',
                              'http://server.test:123')
        data = self.nova_plugin.build_userdata({}, instance_user='ec2-user')
        self.assertIn('user: ', data)
        self.assertIn('useradd', data)
        self.assertIn('ec2-user', data)


class NovaClientPluginMetadataTest(NovaClientPluginTestCase):

    def test_serialize_string(self):
        original = {'test_key': 'simple string value'}
        self.assertEqual(original, self.nova_plugin.meta_serialize(original))

    def test_serialize_int(self):
        original = {'test_key': 123}
        expected = {'test_key': '123'}
        self.assertEqual(expected, self.nova_plugin.meta_serialize(original))

    def test_serialize_list(self):
        original = {'test_key': [1, 2, 3]}
        expected = {'test_key': '[1, 2, 3]'}
        self.assertEqual(expected, self.nova_plugin.meta_serialize(original))

    def test_serialize_dict(self):
        original = collections.OrderedDict([
            ('test_key', collections.OrderedDict([
                ('a', 'b'),
                ('c', 'd'),
            ]))
        ])
        expected = {'test_key': '{"a": "b", "c": "d"}'}
        actual = self.nova_plugin.meta_serialize(original)
        self.assertEqual(json.loads(expected['test_key']),
                         json.loads(actual['test_key']))

    def test_serialize_none(self):
        original = {'test_key': None}
        expected = {'test_key': 'null'}
        self.assertEqual(expected, self.nova_plugin.meta_serialize(original))

    def test_serialize_no_value(self):
        """Prove that the user can only pass in a dict to nova metadata."""
        excp = self.assertRaises(exception.StackValidationFailed,
                                 self.nova_plugin.meta_serialize, "foo")
        self.assertIn('metadata needs to be a Map', six.text_type(excp))

    def test_serialize_combined(self):
        original = {
            'test_key_1': 123,
            'test_key_2': 'a string',
            'test_key_3': {'a': 'b'},
            'test_key_4': None,
        }
        expected = {
            'test_key_1': '123',
            'test_key_2': 'a string',
            'test_key_3': '{"a": "b"}',
            'test_key_4': 'null',
        }
        self.assertEqual(expected, self.nova_plugin.meta_serialize(original))


class ServerConstraintTest(common.HeatTestCase):

    def setUp(self):
        super(ServerConstraintTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.mock_get_server = mock.Mock()
        self.ctx.clients.client_plugin(
            'nova').get_server = self.mock_get_server
        self.constraint = nova.ServerConstraint()

    def test_validation(self):
        self.mock_get_server.return_value = mock.MagicMock()
        self.assertTrue(self.constraint.validate("foo", self.ctx))

    def test_validation_error(self):
        self.mock_get_server.side_effect = exception.EntityNotFound(
            entity='Server', name='bar')
        self.assertFalse(self.constraint.validate("bar", self.ctx))


class FlavorConstraintTest(common.HeatTestCase):

    def test_validate(self):
        client = fakes_nova.FakeClient()
        self.stub_keystoneclient()
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=client)
        client.flavors = mock.MagicMock()

        flavor = collections.namedtuple("Flavor", ["id", "name"])
        flavor.id = "1234"
        flavor.name = "foo"
        client.flavors.get.side_effect = [flavor,
                                          nova_exceptions.NotFound(''),
                                          nova_exceptions.NotFound('')]
        client.flavors.find.side_effect = [flavor,
                                           nova_exceptions.NotFound('')]
        constraint = nova.FlavorConstraint()
        ctx = utils.dummy_context()
        self.assertTrue(constraint.validate("1234", ctx))
        self.assertTrue(constraint.validate("foo", ctx))
        self.assertFalse(constraint.validate("bar", ctx))
        self.assertEqual(1, nova.NovaClientPlugin._create.call_count)
        self.assertEqual(3, client.flavors.get.call_count)
        self.assertEqual(2, client.flavors.find.call_count)


class HostConstraintTest(common.HeatTestCase):

    def setUp(self):
        super(HostConstraintTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.mock_get_host = mock.Mock()
        self.ctx.clients.client_plugin(
            'nova').get_host = self.mock_get_host
        self.constraint = nova.HostConstraint()

    def test_validation(self):
        self.mock_get_host.return_value = mock.MagicMock()
        self.assertTrue(self.constraint.validate("foo", self.ctx))

    def test_validation_error(self):
        self.mock_get_host.side_effect = exception.EntityNotFound(
            entity='Host', name='bar')
        self.assertFalse(self.constraint.validate("bar", self.ctx))


class KeypairConstraintTest(common.HeatTestCase):

    def test_validation(self):
        client = fakes_nova.FakeClient()
        self.patchobject(nova.NovaClientPlugin, '_create',
                         return_value=client)
        client.keypairs = mock.MagicMock()

        key = collections.namedtuple("Key", ["name"])
        key.name = "foo"
        client.keypairs.get.side_effect = [
            fakes_nova.fake_exception(), key]

        constraint = nova.KeypairConstraint()
        ctx = utils.dummy_context()
        self.assertFalse(constraint.validate("bar", ctx))
        self.assertTrue(constraint.validate("foo", ctx))
        self.assertTrue(constraint.validate("", ctx))
        nova.NovaClientPlugin._create.assert_called_once_with()
        calls = [mock.call('bar'),
                 mock.call(key.name)]
        client.keypairs.get.assert_has_calls(calls)


class ConsoleUrlsTest(common.HeatTestCase):

    scenarios = [
        ('novnc', dict(console_type='novnc', res_obj=True)),
        ('xvpvnc', dict(console_type='xvpvnc', res_obj=True)),
        ('spice', dict(console_type='spice-html5', res_obj=True)),
        ('rdp', dict(console_type='rdp-html5', res_obj=True)),
        ('serial', dict(console_type='serial', res_obj=True)),
        ('mks', dict(console_type='webmks', res_obj=False)),
    ]

    def setUp(self):
        super(ConsoleUrlsTest, self).setUp()
        self.nova_client = mock.Mock()
        con = utils.dummy_context()
        c = con.clients
        self.nova_plugin = c.client_plugin('nova')
        self.patchobject(self.nova_plugin, 'client',
                         return_value=self.nova_client)
        self.server = mock.Mock()
        if self.res_obj:
            self.console_method = getattr(self.server, 'get_console_url')
        else:
            self.console_method = getattr(self.nova_client.servers,
                                          'get_console_url')

    def test_get_console_url(self):
        console = {
            'console': {
                'type': self.console_type,
                'url': '%s_console_url' % self.console_type
            }
        }
        self.console_method.return_value = console

        console_url = self.nova_plugin.get_console_urls(self.server)[
            self.console_type]

        self.assertEqual(console['console']['url'], console_url)
        self._assert_console_method_called()

    def _assert_console_method_called(self):
        if self.console_type == 'webmks':
            self.console_method.assert_called_once_with(self.server,
                                                        self.console_type)
        else:
            self.console_method.assert_called_once_with(self.console_type)

    def _test_get_console_url_tolerate_exception(self, msg):
        console_url = self.nova_plugin.get_console_urls(self.server)[
            self.console_type]

        self._assert_console_method_called()
        self.assertIn(msg, console_url)

    def test_get_console_url_tolerate_unavailable(self):
        msg = 'Unavailable console type %s.' % self.console_type
        self.console_method.side_effect = nova_exceptions.BadRequest(
            400, message=msg)
        self._test_get_console_url_tolerate_exception(msg)

    def test_get_console_url_tolerate_unsupport(self):
        msg = 'Unsupported console_type "%s"' % self.console_type
        self.console_method.side_effect = (
            nova_exceptions.UnsupportedConsoleType(
                console_type=self.console_type))
        self._test_get_console_url_tolerate_exception(msg)

    def test_get_console_urls_tolerate_other_400(self):
        exc = nova_exceptions.BadRequest
        self.console_method.side_effect = exc(400, message="spam")
        self._test_get_console_url_tolerate_exception('spam')

    def test_get_console_urls_reraises_other(self):
        exc = Exception
        self.console_method.side_effect = exc("spam")
        self._test_get_console_url_tolerate_exception('spam')


class NovaClientPluginExtensionsTest(NovaClientPluginTestCase):
    """Tests for extensions in novaclient."""

    def test_has_no_extensions(self):
        self.nova_client.list_extensions.show_all.return_value = []
        self.assertFalse(self.nova_plugin.has_extension(
            "os-virtual-interfaces"))

    def test_has_no_interface_extensions(self):
        mock_extension = mock.Mock()
        p = mock.PropertyMock(return_value='os-xxxx')
        type(mock_extension).alias = p
        self.nova_client.list_extensions.show_all.return_value = [
            mock_extension]
        self.assertFalse(self.nova_plugin.has_extension(
            "os-virtual-interfaces"))

    def test_has_os_interface_extension(self):
        mock_extension = mock.Mock()
        p = mock.PropertyMock(return_value='os-virtual-interfaces')
        type(mock_extension).alias = p
        self.nova_client.list_extensions.show_all.return_value = [
            mock_extension]
        self.assertTrue(self.nova_plugin.has_extension(
            "os-virtual-interfaces"))
heat-10.0.2/heat/tests/clients/test_magnum_client.py0000666000175000017500000000543013343562340022523 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
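# Editor's note: the constraint tests in this file all follow the same
# shape: a client-side "get" is stubbed out, and validate() is expected to
# return True when the lookup succeeds and False when the client raises
# NotFound. The sketch below illustrates that contract only; it is not
# Heat's BaseCustomConstraint implementation, and _SimpleGetConstraint and
# _lookups are invented names for illustration.


class _SimpleGetConstraint(object):
    # Exceptions that mean "the named entity does not exist".
    expected_exceptions = (KeyError,)

    def validate_with_client(self, client_get, value):
        # Delegate to the client; a miss raises one of expected_exceptions.
        client_get(value)

    def validate(self, value, client_get):
        try:
            self.validate_with_client(client_get, value)
        except self.expected_exceptions:
            return False
        return True


_lookups = {'my_cluster_template': object()}
assert _SimpleGetConstraint().validate('my_cluster_template',
                                       _lookups.__getitem__)
assert not _SimpleGetConstraint().validate('missing', _lookups.__getitem__)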
from magnumclient import exceptions as mc_exc
import mock

from heat.engine.clients.os import magnum as mc
from heat.tests import common
from heat.tests import utils


class MagnumClientPluginTest(common.HeatTestCase):

    def test_create(self):
        context = utils.dummy_context()
        plugin = context.clients.client_plugin('magnum')
        client = plugin.client()
        self.assertEqual('http://server.test:5000/v3',
                         client.cluster_templates.api.session.auth.endpoint)


class fake_cluster_template(object):
    def __init__(self, id=None, name=None):
        self.uuid = id
        self.name = name


class ClusterTemplateConstraintTest(common.HeatTestCase):

    def setUp(self):
        super(ClusterTemplateConstraintTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.mock_cluster_template_get = mock.Mock()
        self.ctx.clients.client_plugin(
            'magnum').client().cluster_templates.get = \
            self.mock_cluster_template_get
        self.constraint = mc.ClusterTemplateConstraint()

    def test_validate(self):
        self.mock_cluster_template_get.return_value = fake_cluster_template(
            id='my_cluster_template')
        self.assertTrue(self.constraint.validate(
            'my_cluster_template', self.ctx))

    def test_validate_fail(self):
        self.mock_cluster_template_get.side_effect = mc_exc.NotFound()
        self.assertFalse(self.constraint.validate(
            "bad_cluster_template", self.ctx))


class BaymodelConstraintTest(common.HeatTestCase):

    def setUp(self):
        super(BaymodelConstraintTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.mock_baymodel_get = mock.Mock()
        self.ctx.clients.client_plugin(
            'magnum').client().baymodels.get = self.mock_baymodel_get
        self.constraint = mc.BaymodelConstraint()

    def test_validate(self):
        self.mock_baymodel_get.return_value = fake_cluster_template(
            id='badbaymodel')
        self.assertTrue(self.constraint.validate("mybaymodel", self.ctx))

    def test_validate_fail(self):
        self.mock_baymodel_get.side_effect = mc_exc.NotFound()
        self.assertFalse(self.constraint.validate("badbaymodel", self.ctx))
heat-10.0.2/heat/tests/clients/test_sdk_client.py0000666000175000017500000000343713343562351022027 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
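# Editor's note: SegmentConstraint below treats both "no match" and
# "multiple matches" as validation failures, which is why the stubbed
# find_network_segment yields ResourceNotFound and DuplicateResource in
# turn. A minimal, self-contained sketch of that unique-lookup semantics;
# _find_unique, _NotFound and _Duplicate are invented stand-ins for the
# SDK's behaviour.


class _NotFound(Exception):
    pass


class _Duplicate(Exception):
    pass


def _find_unique(candidates, name_or_id):
    # Return the single candidate matching by id or name.
    matches = [c for c in candidates
               if name_or_id in (c.get('id'), c.get('name'))]
    if not matches:
        raise _NotFound(name_or_id)
    if len(matches) > 1:
        raise _Duplicate(name_or_id)
    return matches[0]


_segs = [{'id': 'seg1', 'name': 'public'},
         {'id': 'seg2', 'name': 'public'}]
assert _find_unique(_segs, 'seg1')['id'] == 'seg1'   # unique hit
# 'public' would raise _Duplicate; 'absent' would raise _NotFound.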
import mock
from openstack import exceptions

from heat.engine.clients.os import openstacksdk
from heat.tests import common
from heat.tests import utils


class OpenStackSDKPluginTest(common.HeatTestCase):

    @mock.patch('openstack.connection.Connection')
    def test_create(self, mock_connection):
        context = utils.dummy_context()
        plugin = context.clients.client_plugin('openstack')
        client = plugin.client()
        self.assertIsNotNone(client.network.segments)


class SegmentConstraintTest(common.HeatTestCase):

    def setUp(self):
        super(SegmentConstraintTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.mock_find_segment = mock.Mock()
        self.ctx.clients.client_plugin(
            'openstack').find_network_segment = self.mock_find_segment
        self.constraint = openstacksdk.SegmentConstraint()

    def test_validation(self):
        self.mock_find_segment.side_effect = [
            "seg1", exceptions.ResourceNotFound(),
            exceptions.DuplicateResource()]
        self.assertTrue(self.constraint.validate("foo", self.ctx))
        self.assertFalse(self.constraint.validate("bar", self.ctx))
        self.assertFalse(self.constraint.validate("baz", self.ctx))
heat-10.0.2/heat/tests/clients/test_keystone_client.py0000666000175000017500000007015513343562340023106 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from keystoneauth1 import exceptions as keystone_exceptions
import mock
import six

from heat.common import exception
from heat.engine.clients.os import keystone
from heat.engine.clients.os.keystone import keystone_constraints as ks_constr
from heat.tests import common


class KeystoneRoleConstraintTest(common.HeatTestCase):

    def test_expected_exceptions(self):
        self.assertEqual(
            (exception.EntityNotFound,),
            ks_constr.KeystoneRoleConstraint.expected_exceptions,
            "KeystoneRoleConstraint expected exceptions error")

    def test_constraint(self):
        constraint = ks_constr.KeystoneRoleConstraint()
        client_mock = mock.MagicMock()
        client_plugin_mock = mock.MagicMock()
        client_plugin_mock.get_role_id.return_value = None
        client_mock.client_plugin.return_value = client_plugin_mock

        self.assertIsNone(constraint.validate_with_client(client_mock,
                                                          'role_1'))
        self.assertRaises(exception.EntityNotFound,
                          constraint.validate_with_client, client_mock, '')
        client_plugin_mock.get_role_id.assert_called_once_with('role_1')


class KeystoneProjectConstraintTest(common.HeatTestCase):

    def test_expected_exceptions(self):
        self.assertEqual(
            (exception.EntityNotFound,),
            ks_constr.KeystoneProjectConstraint.expected_exceptions,
            "KeystoneProjectConstraint expected exceptions error")

    def test_constraint(self):
        constraint = ks_constr.KeystoneProjectConstraint()
        client_mock = mock.MagicMock()
        client_plugin_mock = mock.MagicMock()
        client_plugin_mock.get_project_id.return_value = None
        client_mock.client_plugin.return_value = client_plugin_mock

        self.assertIsNone(constraint.validate_with_client(client_mock,
                                                          'project_1'))
        self.assertRaises(exception.EntityNotFound,
                          constraint.validate_with_client, client_mock, '')
        client_plugin_mock.get_project_id.assert_called_once_with('project_1')


class
KeystoneGroupConstraintTest(common.HeatTestCase): def test_expected_exceptions(self): self.assertEqual( (exception.EntityNotFound,), ks_constr.KeystoneGroupConstraint.expected_exceptions, "KeystoneGroupConstraint expected exceptions error") def test_constraint(self): constraint = ks_constr.KeystoneGroupConstraint() client_mock = mock.MagicMock() client_plugin_mock = mock.MagicMock() client_plugin_mock.get_group_id.return_value = None client_mock.client_plugin.return_value = client_plugin_mock self.assertIsNone(constraint.validate_with_client(client_mock, 'group_1')) self.assertRaises(exception.EntityNotFound, constraint.validate_with_client, client_mock, '') client_plugin_mock.get_group_id.assert_called_once_with('group_1') class KeystoneDomainConstraintTest(common.HeatTestCase): def test_expected_exceptions(self): self.assertEqual( (exception.EntityNotFound,), ks_constr.KeystoneDomainConstraint.expected_exceptions, "KeystoneDomainConstraint expected exceptions error") def test_constraint(self): constraint = ks_constr.KeystoneDomainConstraint() client_mock = mock.MagicMock() client_plugin_mock = mock.MagicMock() client_plugin_mock.get_domain_id.return_value = None client_mock.client_plugin.return_value = client_plugin_mock self.assertIsNone(constraint.validate_with_client(client_mock, 'domain_1')) self.assertRaises(exception.EntityNotFound, constraint.validate_with_client, client_mock, '') client_plugin_mock.get_domain_id.assert_called_once_with('domain_1') class KeystoneServiceConstraintTest(common.HeatTestCase): sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd151' def test_expected_exceptions(self): self.assertEqual( (exception.EntityNotFound, exception.KeystoneServiceNameConflict,), ks_constr.KeystoneServiceConstraint.expected_exceptions, "KeystoneServiceConstraint expected exceptions error") def test_constraint(self): constraint = ks_constr.KeystoneServiceConstraint() client_mock = mock.MagicMock() client_plugin_mock = mock.MagicMock() client_plugin_mock.get_service_id.return_value = self.sample_uuid client_mock.client_plugin.return_value = client_plugin_mock self.assertIsNone(constraint.validate_with_client(client_mock, self.sample_uuid)) self.assertRaises(exception.EntityNotFound, constraint.validate_with_client, client_mock, '') client_plugin_mock.get_service_id.assert_called_once_with( self.sample_uuid) class KeystoneUserConstraintTest(common.HeatTestCase): def test_expected_exceptions(self): self.assertEqual( (exception.EntityNotFound,), ks_constr.KeystoneUserConstraint.expected_exceptions, "KeystoneUserConstraint expected exceptions error") def test_constraint(self): constraint = ks_constr.KeystoneUserConstraint() client_mock = mock.MagicMock() client_plugin_mock = mock.MagicMock() client_plugin_mock.get_user_id.return_value = None client_mock.client_plugin.return_value = client_plugin_mock self.assertIsNone(constraint.validate_with_client(client_mock, 'admin')) self.assertRaises(exception.EntityNotFound, constraint.validate_with_client, client_mock, '') client_plugin_mock.get_user_id.assert_called_once_with('admin') class KeystoneRegionConstraintTest(common.HeatTestCase): sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd151' def test_expected_exceptions(self): self.assertEqual( (exception.EntityNotFound,), ks_constr.KeystoneRegionConstraint.expected_exceptions, "KeystoneRegionConstraint expected exceptions error") def test_constraint(self): constraint = ks_constr.KeystoneRegionConstraint() client_mock = mock.MagicMock() client_plugin_mock = mock.MagicMock() 
client_plugin_mock.get_region_id.return_value = self.sample_uuid client_mock.client_plugin.return_value = client_plugin_mock self.assertIsNone(constraint.validate_with_client(client_mock, self.sample_uuid)) self.assertRaises(exception.EntityNotFound, constraint.validate_with_client, client_mock, '') client_plugin_mock.get_region_id.assert_called_once_with( self.sample_uuid) class KeystoneClientPluginServiceTest(common.HeatTestCase): sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd152' sample_name = 'sample_service' def _get_mock_service(self): srv = mock.MagicMock() srv.id = self.sample_uuid srv.name = self.sample_name return srv def setUp(self): super(KeystoneClientPluginServiceTest, self).setUp() self._client = mock.MagicMock() @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_service_id(self, client_keystone): self._client.client.services.get.return_value = (self ._get_mock_service()) client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_service_id(self.sample_uuid)) self._client.client.services.get.assert_called_once_with( self.sample_uuid) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_service_id_with_name(self, client_keystone): self._client.client.services.get.side_effect = (keystone_exceptions .NotFound) self._client.client.services.list.return_value = [ self._get_mock_service() ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_service_id(self.sample_name)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.services.get, self.sample_name) self._client.client.services.list.assert_called_once_with( name=self.sample_name) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_service_id_with_name_conflict(self, client_keystone): self._client.client.services.get.side_effect = (keystone_exceptions .NotFound) self._client.client.services.list.return_value = [ self._get_mock_service(), self._get_mock_service() ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) ex = self.assertRaises(exception.KeystoneServiceNameConflict, client_plugin.get_service_id, self.sample_name) msg = ("Keystone has more than one service with same name " "%s. Please use service id instead of name" % self.sample_name) self.assertEqual(msg, six.text_type(ex)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.services.get, self.sample_name) self._client.client.services.list.assert_called_once_with( name=self.sample_name) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_service_id_not_found(self, client_keystone): self._client.client.services.get.side_effect = (keystone_exceptions .NotFound) self._client.client.services.list.return_value = [ ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) ex = self.assertRaises(exception.EntityNotFound, client_plugin.get_service_id, self.sample_name) msg = ("The KeystoneService (%(name)s) could not be found." 
% {'name': self.sample_name}) self.assertEqual(msg, six.text_type(ex)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.services.get, self.sample_name) self._client.client.services.list.assert_called_once_with( name=self.sample_name) class KeystoneClientPluginRoleTest(common.HeatTestCase): sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd152' sample_name = 'sample_role' def _get_mock_role(self): role = mock.MagicMock() role.id = self.sample_uuid role.name = self.sample_name return role def setUp(self): super(KeystoneClientPluginRoleTest, self).setUp() self._client = mock.MagicMock() @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_role_id(self, client_keystone): self._client.client.roles.get.return_value = (self ._get_mock_role()) client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_role_id(self.sample_uuid)) self._client.client.roles.get.assert_called_once_with( self.sample_uuid) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_role_id_with_name(self, client_keystone): self._client.client.roles.get.side_effect = (keystone_exceptions .NotFound) self._client.client.roles.list.return_value = [ self._get_mock_role() ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_role_id(self.sample_name)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.roles.get, self.sample_name) self._client.client.roles.list.assert_called_once_with( name=self.sample_name) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_role_id_not_found(self, client_keystone): self._client.client.roles.get.side_effect = (keystone_exceptions .NotFound) self._client.client.roles.list.return_value = [ ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) ex = self.assertRaises(exception.EntityNotFound, client_plugin.get_role_id, self.sample_name) msg = ("The KeystoneRole (%(name)s) could not be found." 
% {'name': self.sample_name}) self.assertEqual(msg, six.text_type(ex)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.roles.get, self.sample_name) self._client.client.roles.list.assert_called_once_with( name=self.sample_name) class KeystoneClientPluginProjectTest(common.HeatTestCase): sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd152' sample_name = 'sample_project' def _get_mock_project(self): project = mock.MagicMock() project.id = self.sample_uuid project.name = self.sample_name return project def setUp(self): super(KeystoneClientPluginProjectTest, self).setUp() self._client = mock.MagicMock() @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_project_id(self, client_keystone): self._client.client.projects.get.return_value = (self ._get_mock_project()) client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_project_id(self.sample_uuid)) self._client.client.projects.get.assert_called_once_with( self.sample_uuid) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_project_id_with_name(self, client_keystone): self._client.client.projects.get.side_effect = (keystone_exceptions .NotFound) self._client.client.projects.list.return_value = [ self._get_mock_project() ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_project_id(self.sample_name)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.projects.get, self.sample_name) self._client.client.projects.list.assert_called_once_with( name=self.sample_name) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_project_id_not_found(self, client_keystone): self._client.client.projects.get.side_effect = (keystone_exceptions .NotFound) self._client.client.projects.list.return_value = [ ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) ex = self.assertRaises(exception.EntityNotFound, client_plugin.get_project_id, self.sample_name) msg = ("The KeystoneProject (%(name)s) could not be found." 
% {'name': self.sample_name}) self.assertEqual(msg, six.text_type(ex)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.projects.get, self.sample_name) self._client.client.projects.list.assert_called_once_with( name=self.sample_name) class KeystoneClientPluginDomainTest(common.HeatTestCase): sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd152' sample_name = 'sample_domain' def _get_mock_domain(self): domain = mock.MagicMock() domain.id = self.sample_uuid domain.name = self.sample_name return domain def setUp(self): super(KeystoneClientPluginDomainTest, self).setUp() self._client = mock.MagicMock() @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_domain_id(self, client_keystone): self._client.client.domains.get.return_value = (self ._get_mock_domain()) client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_domain_id(self.sample_uuid)) self._client.client.domains.get.assert_called_once_with( self.sample_uuid) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_domain_id_with_name(self, client_keystone): self._client.client.domains.get.side_effect = (keystone_exceptions .NotFound) self._client.client.domains.list.return_value = [ self._get_mock_domain() ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_domain_id(self.sample_name)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.domains.get, self.sample_name) self._client.client.domains.list.assert_called_once_with( name=self.sample_name) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_domain_id_not_found(self, client_keystone): self._client.client.domains.get.side_effect = (keystone_exceptions .NotFound) self._client.client.domains.list.return_value = [ ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) ex = self.assertRaises(exception.EntityNotFound, client_plugin.get_domain_id, self.sample_name) msg = ("The KeystoneDomain (%(name)s) could not be found." 
% {'name': self.sample_name}) self.assertEqual(msg, six.text_type(ex)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.domains.get, self.sample_name) self._client.client.domains.list.assert_called_once_with( name=self.sample_name) class KeystoneClientPluginGroupTest(common.HeatTestCase): sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd152' sample_name = 'sample_group' def _get_mock_group(self): group = mock.MagicMock() group.id = self.sample_uuid group.name = self.sample_name return group def setUp(self): super(KeystoneClientPluginGroupTest, self).setUp() self._client = mock.MagicMock() @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_group_id(self, client_keystone): self._client.client.groups.get.return_value = (self ._get_mock_group()) client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_group_id(self.sample_uuid)) self._client.client.groups.get.assert_called_once_with( self.sample_uuid) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_group_id_with_name(self, client_keystone): self._client.client.groups.get.side_effect = (keystone_exceptions .NotFound) self._client.client.groups.list.return_value = [ self._get_mock_group() ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_group_id(self.sample_name)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.groups.get, self.sample_name) self._client.client.groups.list.assert_called_once_with( name=self.sample_name) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_group_id_not_found(self, client_keystone): self._client.client.groups.get.side_effect = (keystone_exceptions .NotFound) self._client.client.groups.list.return_value = [ ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) ex = self.assertRaises(exception.EntityNotFound, client_plugin.get_group_id, self.sample_name) msg = ("The KeystoneGroup (%(name)s) could not be found." 
% {'name': self.sample_name}) self.assertEqual(msg, six.text_type(ex)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.groups.get, self.sample_name) self._client.client.groups.list.assert_called_once_with( name=self.sample_name) class KeystoneClientPluginUserTest(common.HeatTestCase): sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd152' sample_name = 'sample_user' def _get_mock_user(self): user = mock.MagicMock() user.id = self.sample_uuid user.name = self.sample_name return user def setUp(self): super(KeystoneClientPluginUserTest, self).setUp() self._client = mock.MagicMock() @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_user_id(self, client_keystone): self._client.client.users.get.return_value = self._get_mock_user() client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_user_id(self.sample_uuid)) self._client.client.users.get.assert_called_once_with( self.sample_uuid) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_user_id_with_name(self, client_keystone): self._client.client.users.get.side_effect = (keystone_exceptions .NotFound) self._client.client.users.list.return_value = [ self._get_mock_user() ] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_user_id(self.sample_name)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.users.get, self.sample_name) self._client.client.users.list.assert_called_once_with( name=self.sample_name) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_user_id_not_found(self, client_keystone): self._client.client.users.get.side_effect = (keystone_exceptions .NotFound) self._client.client.users.list.return_value = [] client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) ex = self.assertRaises(exception.EntityNotFound, client_plugin.get_user_id, self.sample_name) msg = ('The KeystoneUser (%(name)s) could not be found.' 
% {'name': self.sample_name}) self.assertEqual(msg, six.text_type(ex)) self.assertRaises(keystone_exceptions.NotFound, self._client.client.users.get, self.sample_name) self._client.client.users.list.assert_called_once_with( name=self.sample_name) class KeystoneClientPluginRegionTest(common.HeatTestCase): sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd152' sample_name = 'sample_region' def _get_mock_region(self): region = mock.MagicMock() region.id = self.sample_uuid region.name = self.sample_name return region def setUp(self): super(KeystoneClientPluginRegionTest, self).setUp() self._client = mock.MagicMock() @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_region_id(self, client_keystone): self._client.client.regions.get.return_value = self._get_mock_region() client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) self.assertEqual(self.sample_uuid, client_plugin.get_region_id(self.sample_uuid)) self._client.client.regions.get.assert_called_once_with( self.sample_uuid) @mock.patch.object(keystone.KeystoneClientPlugin, 'client') def test_get_region_id_not_found(self, client_keystone): self._client.client.regions.get.side_effect = (keystone_exceptions .NotFound) client_keystone.return_value = self._client client_plugin = keystone.KeystoneClientPlugin( context=mock.MagicMock() ) ex = self.assertRaises(exception.EntityNotFound, client_plugin.get_region_id, self.sample_name) msg = ('The KeystoneRegion (%(name)s) could not be found.' % {'name': self.sample_name}) self.assertEqual(msg, six.text_type(ex)) heat-10.0.2/heat/tests/clients/test_glance_client.py0000666000175000017500000000606713343562340022477 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
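# Editor's note: test_find_image_by_name_or_id below exercises a
# "get by ID first, then fall back to a name search" lookup, where zero
# name matches and multiple name matches are distinct failures. A runnable
# sketch of that control flow under those assumptions; _Image, _images and
# _find_by_name_or_id are invented for illustration.
import collections as _collections

_Image = _collections.namedtuple('_Image', ['id', 'name'])
_images = {'42': _Image('42', 'cirros')}


def _find_by_name_or_id(name_or_id):
    try:
        return _images[name_or_id].id        # direct lookup by ID
    except KeyError:                         # stand-in for HTTPNotFound
        matches = [i for i in _images.values() if i.name == name_or_id]
        if len(matches) != 1:                # 0 -> no match, >1 -> ambiguous
            raise LookupError(name_or_id)
        return matches[0].id


assert _find_by_name_or_id('42') == '42'
assert _find_by_name_or_id('cirros') == '42'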
import uuid

from glanceclient import exc
import mock

from heat.engine.clients import client_exception as exception
from heat.engine.clients.os import glance
from heat.tests import common
from heat.tests import utils


class GlanceUtilsTest(common.HeatTestCase):
    """Basic tests for :module:'heat.engine.clients.os.glance'."""

    def setUp(self):
        super(GlanceUtilsTest, self).setUp()
        self.glance_client = mock.MagicMock()
        con = utils.dummy_context()
        c = con.clients
        self.glance_plugin = c.client_plugin('glance')
        self.glance_plugin.client = lambda: self.glance_client
        self.my_image = mock.MagicMock()

    def test_find_image_by_name_or_id(self):
        """Tests the find_image_by_name_or_id function."""
        img_id = str(uuid.uuid4())
        img_name = 'myfakeimage'
        self.my_image.id = img_id
        self.my_image.name = img_name
        self.glance_client.images.get.side_effect = [
            self.my_image,
            exc.HTTPNotFound(),
            exc.HTTPNotFound(),
            exc.HTTPNotFound()]
        self.glance_client.images.list.side_effect = [
            [self.my_image],
            [],
            [self.my_image, self.my_image]]

        self.assertEqual(img_id,
                         self.glance_plugin.find_image_by_name_or_id(img_id))
        self.assertEqual(
            img_id, self.glance_plugin.find_image_by_name_or_id(img_name))
        self.assertRaises(exception.EntityMatchNotFound,
                          self.glance_plugin.find_image_by_name_or_id,
                          'noimage')
        self.assertRaises(exception.EntityUniqueMatchNotFound,
                          self.glance_plugin.find_image_by_name_or_id,
                          'myfakeimage')


class ImageConstraintTest(common.HeatTestCase):

    def setUp(self):
        super(ImageConstraintTest, self).setUp()
        self.ctx = utils.dummy_context()
        self.mock_find_image = mock.Mock()
        self.ctx.clients.client_plugin(
            'glance').find_image_by_name_or_id = self.mock_find_image
        self.constraint = glance.ImageConstraint()

    def test_validation(self):
        self.mock_find_image.side_effect = [
            "id1", exception.EntityMatchNotFound(),
            exception.EntityUniqueMatchNotFound()]
        self.assertTrue(self.constraint.validate("foo", self.ctx))
        self.assertFalse(self.constraint.validate("bar", self.ctx))
        self.assertFalse(self.constraint.validate("baz", self.ctx))
heat-10.0.2/heat/tests/clients/test_neutron_client.py0000666000175000017500000002345513343562340022740 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
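# Editor's note: the `scenarios` class attributes used throughout this file
# (and in the nova and keystone test modules above) are consumed by the
# `testscenarios` library, which heat.tests.common.HeatTestCase appears to
# mix in: each (name, dict) pair clones every test method with the dict's
# keys bound as instance attributes. A standalone sketch, assuming only the
# stock testscenarios and testtools packages (run it with
# `python -m testtools.run <module>`):
import testscenarios
import testtools


class _ScenarioDemo(testscenarios.WithScenarios, testtools.TestCase):

    scenarios = [
        ('lower', dict(value='abc', expected='ABC')),
        ('mixed', dict(value='aBc', expected='ABC')),
    ]

    def test_upper(self):
        # self.value and self.expected come from the active scenario, so
        # this one method runs once per scenario.
        self.assertEqual(self.expected, self.value.upper())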
import mock
from neutronclient.common import exceptions as qe
import six

from heat.common import exception
from heat.engine.clients.os import neutron
from heat.engine.clients.os.neutron import lbaas_constraints as lc
from heat.engine.clients.os.neutron import neutron_constraints as nc
from heat.tests import common
from heat.tests import utils


class NeutronClientPluginTestCase(common.HeatTestCase):

    def setUp(self):
        super(NeutronClientPluginTestCase, self).setUp()
        self.neutron_client = mock.MagicMock()
        con = utils.dummy_context()
        c = con.clients
        self.neutron_plugin = c.client_plugin('neutron')
        self.neutron_plugin.client = lambda: self.neutron_client


class NeutronClientPluginTest(NeutronClientPluginTestCase):

    def setUp(self):
        super(NeutronClientPluginTest, self).setUp()
        self.mock_find = self.patchobject(neutron.neutronV20,
                                          'find_resourceid_by_name_or_id')
        self.mock_find.return_value = 42

    def test_get_secgroup_uuids(self):
        # test get from uuids
        sgs_uuid = ['b62c3079-6946-44f5-a67b-6b9091884d4f',
                    '9887157c-d092-40f5-b547-6361915fce7d']

        sgs_list = self.neutron_plugin.get_secgroup_uuids(sgs_uuid)
        self.assertEqual(sgs_uuid, sgs_list)
        # test get from name, return only one
        sgs_non_uuid = ['security_group_1']
        expected_groups = ['0389f747-7785-4757-b7bb-2ab07e4b09c3']
        fake_list = {
            'security_groups': [
                {
                    'tenant_id': 'test_tenant_id',
                    'id': '0389f747-7785-4757-b7bb-2ab07e4b09c3',
                    'name': 'security_group_1',
                    'security_group_rules': [],
                    'description': 'no protocol'
                }
            ]
        }
        self.neutron_client.list_security_groups.return_value = fake_list
        self.assertEqual(expected_groups,
                         self.neutron_plugin.get_secgroup_uuids(sgs_non_uuid))
        # test when only one of the matches belongs to the tenant
        fake_list = {
            'security_groups': [
                {
                    'tenant_id': 'test_tenant_id',
                    'id': '0389f747-7785-4757-b7bb-2ab07e4b09c3',
                    'name': 'security_group_1',
                    'security_group_rules': [],
                    'description': 'no protocol'
                },
                {
                    'tenant_id': 'not_test_tenant_id',
                    'id': '384ccd91-447c-4d83-832c-06974a7d3d05',
                    'name': 'security_group_1',
                    'security_group_rules': [],
                    'description': 'no protocol'
                }
            ]
        }
        self.neutron_client.list_security_groups.return_value = fake_list
        self.assertEqual(expected_groups,
                         self.neutron_plugin.get_secgroup_uuids(sgs_non_uuid))
        # test two security groups with the same name that both belong
        # to the tenant
        fake_list = {
            'security_groups': [
                {
                    'tenant_id': 'test_tenant_id',
                    'id': '0389f747-7785-4757-b7bb-2ab07e4b09c3',
                    'name': 'security_group_1',
                    'security_group_rules': [],
                    'description': 'no protocol'
                },
                {
                    'tenant_id': 'test_tenant_id',
                    'id': '384ccd91-447c-4d83-832c-06974a7d3d05',
                    'name': 'security_group_1',
                    'security_group_rules': [],
                    'description': 'no protocol'
                }
            ]
        }
        self.neutron_client.list_security_groups.return_value = fake_list
        self.assertRaises(exception.PhysicalResourceNameAmbiguity,
                          self.neutron_plugin.get_secgroup_uuids,
                          sgs_non_uuid)

    def test_check_lb_status(self):
        self.neutron_client.show_loadbalancer.side_effect = [
            {'loadbalancer': {'provisioning_status': 'ACTIVE'}},
            {'loadbalancer': {'provisioning_status': 'PENDING_CREATE'}},
            {'loadbalancer': {'provisioning_status': 'ERROR'}}
        ]

        self.assertTrue(self.neutron_plugin.check_lb_status('1234'))
        self.assertFalse(self.neutron_plugin.check_lb_status('1234'))
        self.assertRaises(exception.ResourceInError,
                          self.neutron_plugin.check_lb_status,
                          '1234')


class NeutronConstraintsValidate(common.HeatTestCase):
    scenarios = [
        ('validate_network',
            dict(constraint_class=nc.NetworkConstraint,
                 resource_type='network')),
        ('validate_port',
            dict(constraint_class=nc.PortConstraint,
                 resource_type='port')),
        ('validate_router',
            dict(constraint_class=nc.RouterConstraint,
                 resource_type='router')),
        ('validate_subnet',
            dict(constraint_class=nc.SubnetConstraint,
                 resource_type='subnet')),
        ('validate_subnetpool',
            dict(constraint_class=nc.SubnetPoolConstraint,
                 resource_type='subnetpool')),
        ('validate_address_scope',
            dict(constraint_class=nc.AddressScopeConstraint,
                 resource_type='address_scope')),
        ('validate_loadbalancer',
            dict(constraint_class=lc.LoadbalancerConstraint,
                 resource_type='loadbalancer')),
        ('validate_listener',
            dict(constraint_class=lc.ListenerConstraint,
                 resource_type='listener')),
        ('validate_pool',
            dict(constraint_class=lc.PoolConstraint,
                 resource_type='pool')),
        ('validate_qos_policy',
            dict(constraint_class=nc.QoSPolicyConstraint,
                 resource_type='policy')),
        ('validate_security_group',
            dict(constraint_class=nc.SecurityGroupConstraint,
                 resource_type='security_group'))
    ]

    def test_validate(self):
        mock_extension = self.patchobject(
            neutron.NeutronClientPlugin, 'has_extension', return_value=True)
        fake_client = mock.Mock()
        mock_create = self.patchobject(neutron.NeutronClientPlugin, '_create')
        mock_create.return_value = fake_client
        mock_find = self.patchobject(neutron.NeutronClientPlugin,
                                     'find_resourceid_by_name_or_id')
        mock_find.side_effect = [
            'foo',
            qe.NeutronClientException(status_code=404)
        ]

        constraint = self.constraint_class()
        ctx = utils.dummy_context()
        if hasattr(constraint, 'extension') and constraint.extension:
            mock_extension.side_effect = [
                False,
                True,
                True,
            ]
            ex = self.assertRaises(
                exception.EntityNotFound,
                constraint.validate_with_client, ctx.clients, "foo"
            )
            expected = ("The neutron extension (%s) could not be found."
                        % constraint.extension)
            self.assertEqual(expected, six.text_type(ex))
        self.assertTrue(constraint.validate("foo", ctx))
        self.assertFalse(constraint.validate("bar", ctx))
        mock_find.assert_has_calls(
            [mock.call(self.resource_type, 'foo'),
             mock.call(self.resource_type, 'bar')])


class NeutronProviderConstraintsValidate(common.HeatTestCase):
    scenarios = [
        ('validate_lbaasv1',
            dict(constraint_class=nc.LBaasV1ProviderConstraint,
                 service_type='LOADBALANCER')),
        ('validate_lbaasv2',
            dict(constraint_class=lc.LBaasV2ProviderConstraint,
                 service_type='LOADBALANCERV2'))
    ]

    def test_provider_validate(self):
        fake_client = mock.Mock()
        mock_create = self.patchobject(neutron.NeutronClientPlugin, '_create')
        mock_create.return_value = fake_client
        providers = {
            'service_providers': [
                {'service_type': 'LOADBALANCERV2', 'name': 'haproxy'},
                {'service_type': 'LOADBALANCER', 'name': 'haproxy'}
            ]
        }
        fake_client.list_service_providers.return_value = providers
        constraint = self.constraint_class()
        ctx = utils.dummy_context()
        self.assertTrue(constraint.validate('haproxy', ctx))
        self.assertFalse(constraint.validate("bar", ctx))


class NeutronClientPluginExtensionsTest(NeutronClientPluginTestCase):
    """Tests for extensions in neutronclient."""

    def test_has_no_extension(self):
        mock_extensions = {'extensions': []}
        self.neutron_client.list_extensions.return_value = mock_extensions
        self.assertFalse(self.neutron_plugin.has_extension('lbaas'))

    def test_without_service_extension(self):
        mock_extensions = {'extensions': [{'alias': 'router'}]}
        self.neutron_client.list_extensions.return_value = mock_extensions
        self.assertFalse(self.neutron_plugin.has_extension('lbaas'))

    def test_has_service_extension(self):
        mock_extensions = {'extensions': [{'alias': 'router'}]}
        self.neutron_client.list_extensions.return_value = mock_extensions
        self.assertTrue(self.neutron_plugin.has_extension('router'))
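# Editor's note: get_secgroup_uuids, exercised above, passes UUIDs through
# untouched, resolves names via list_security_groups, and treats several
# same-tenant groups sharing one name as ambiguous. A minimal sketch of that
# resolution logic; _resolve_secgroups and its arguments are invented names.
import uuid as _uuid


def _resolve_secgroups(names_or_ids, groups, tenant_id):
    resolved = []
    for value in names_or_ids:
        try:
            _uuid.UUID(value)                 # already a UUID: keep as-is
            resolved.append(value)
            continue
        except ValueError:
            pass
        matches = [g['id'] for g in groups
                   if g['name'] == value and g['tenant_id'] == tenant_id]
        if not matches:
            raise LookupError('no security group named %s' % value)
        if len(matches) > 1:
            raise ValueError('ambiguous security group name: %s' % value)
        resolved.append(matches[0])
    return resolved


_groups = [{'id': '0389f747-7785-4757-b7bb-2ab07e4b09c3',
            'name': 'security_group_1', 'tenant_id': 't1'}]
assert _resolve_secgroups(['security_group_1'], _groups, 't1') == [
    '0389f747-7785-4757-b7bb-2ab07e4b09c3']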
heat-10.0.2/heat/tests/clients/test_monasca_client.py0000666000175000017500000001002413343562340022653 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock
import six

import monascaclient

from heat.common import exception as heat_exception
from heat.engine.clients.os import monasca as client_plugin
from heat.tests import common
from heat.tests import utils


class MonascaNotificationConstraintTest(common.HeatTestCase):

    def test_expected_exceptions(self):
        self.assertEqual(
            (heat_exception.EntityNotFound,),
            client_plugin.MonascaNotificationConstraint.expected_exceptions,
            "MonascaNotificationConstraint expected exceptions error")

    def test_constraint(self):
        constraint = client_plugin.MonascaNotificationConstraint()
        client_mock = mock.MagicMock()
        client_plugin_mock = mock.MagicMock()
        client_plugin_mock.get_notification.return_value = None
        client_mock.client_plugin.return_value = client_plugin_mock

        self.assertIsNone(constraint.validate_with_client(client_mock,
                                                          'notification_1'))
        client_plugin_mock.get_notification.assert_called_once_with(
            'notification_1')


class MonascaClientPluginTest(common.HeatTestCase):

    def test_client(self):
        context = utils.dummy_context()
        plugin = context.clients.client_plugin('monasca')
        client = plugin.client()
        self.assertIsNotNone(client.metrics)

    @mock.patch.object(monascaclient.client, '_session')
    def test_client_uses_session(self, mock_session):
        context = mock.MagicMock()
        monasca_client = client_plugin.MonascaClientPlugin(context=context)
        self.assertIsNotNone(monasca_client._create())


class MonascaClientPluginNotificationTest(common.HeatTestCase):

    sample_uuid = '477e8273-60a7-4c41-b683-fdb0bc7cd152'
    sample_name = 'test-notification'

    def _get_mock_notification(self):
        notification = dict()
        notification['id'] = self.sample_uuid
        notification['name'] = self.sample_name
        return notification

    def setUp(self):
        super(MonascaClientPluginNotificationTest, self).setUp()
        self._client = mock.MagicMock()
        self.client_plugin = client_plugin.MonascaClientPlugin(
            context=mock.MagicMock()
        )

    @mock.patch.object(client_plugin.MonascaClientPlugin, 'client')
    def test_get_notification(self, client_monasca):
        mock_notification = self._get_mock_notification()
        self._client.notifications.get.return_value = mock_notification
        client_monasca.return_value = self._client

        self.assertEqual(self.sample_uuid,
                         self.client_plugin.get_notification(
                             self.sample_uuid))
        self._client.notifications.get.assert_called_once_with(
            notification_id=self.sample_uuid)

    @mock.patch.object(client_plugin.MonascaClientPlugin, 'client')
    def test_get_notification_not_found(self, client_monasca):
        self._client.notifications.get.side_effect = (
            client_plugin.monasca_exc.NotFound)
        client_monasca.return_value = self._client

        ex = self.assertRaises(heat_exception.EntityNotFound,
                               self.client_plugin.get_notification,
                               self.sample_uuid)
        msg = ("The Monasca Notification (%(name)s) could not be found."
% {'name': self.sample_uuid}) self.assertEqual(msg, six.text_type(ex)) self._client.notifications.get.assert_called_once_with( notification_id=self.sample_uuid) heat-10.0.2/heat/tests/clients/test_heat_client.py0000666000175000017500000016610113343562351022165 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import mock import uuid from keystoneauth1 import access as ks_access from keystoneauth1 import exceptions as kc_exception from keystoneauth1.identity import access as ks_auth_access from keystoneauth1.identity import generic as ks_auth from keystoneauth1 import loading as ks_loading from keystoneauth1 import session as ks_session from keystoneauth1 import token_endpoint as ks_token_endpoint from keystoneclient.v3 import client as kc_v3 from keystoneclient.v3 import domains as kc_v3_domains import mox from oslo_config import cfg import six from heat.common import config from heat.common import exception from heat.common import password_gen from heat.engine.clients.os.keystone import heat_keystoneclient from heat.tests import common from heat.tests import utils cfg.CONF.import_opt('region_name_for_services', 'heat.common.config') cfg.CONF.import_group('keystone_authtoken', 'keystonemiddleware.auth_token') class KeystoneClientTest(common.HeatTestCase): """Test cases for heat.common.heat_keystoneclient.""" def setUp(self): super(KeystoneClientTest, self).setUp() self.mock_admin_client = self.m.CreateMock(kc_v3.Client) self.mock_ks_v3_client = self.m.CreateMock(kc_v3.Client) self.mock_ks_v3_client_domain_mngr = self.m.CreateMock( kc_v3_domains.DomainManager) self.m.StubOutWithMock(kc_v3, "Client") self.m.StubOutWithMock(ks_auth, 'Password') self.m.StubOutWithMock(ks_token_endpoint, 'Token') self.m.StubOutWithMock(ks_auth_access, 'AccessInfoPlugin') self.m.StubOutWithMock(ks_loading, 'load_auth_from_conf_options') cfg.CONF.set_override('auth_uri', 'http://server.test:5000/v2.0', group='keystone_authtoken') cfg.CONF.set_override('stack_user_domain_id', 'adomain123') cfg.CONF.set_override('stack_domain_admin', 'adminuser123') cfg.CONF.set_override('stack_domain_admin_password', 'adminsecret') self.addCleanup(self.m.VerifyAll) def _clear_domain_override(self): cfg.CONF.clear_override('stack_user_domain_id') def _stub_admin_auth(self, auth_ok=True): mock_ks_auth = self.m.CreateMockAnything() a = mock_ks_auth.get_user_id(mox.IsA(ks_session.Session)) if auth_ok: a.AndReturn('1234') else: a.AndRaise(kc_exception.Unauthorized) m = ks_loading.load_auth_from_conf_options( cfg.CONF, 'trustee', trust_id=None) m.AndReturn(mock_ks_auth) def _stub_domain_admin_client(self, domain_id=None): mock_ks_auth = self.m.CreateMockAnything() mock_ks_auth.get_token(mox.IsA(ks_session.Session)).AndReturn('tok') m = ks_auth.Password(auth_url='http://server.test:5000/v3', password='adminsecret', domain_id='adomain123', domain_name=None, user_domain_id='adomain123', user_domain_name=None, username='adminuser123') m.AndReturn(mock_ks_auth) n = kc_v3.Client(session=mox.IsA(ks_session.Session), 
auth=mock_ks_auth, region_name=None) n.AndReturn(self.mock_admin_client) self.mock_admin_client.domains = self.mock_ks_v3_client_domain_mngr def _stubs_auth(self, method='token', trust_scoped=True, user_id=None, auth_ref=None, client=True, project_id=None, stub_trust_context=False, version=3): mock_auth_ref = self.m.CreateMockAnything() mock_ks_auth = self.m.CreateMockAnything() if method == 'token': p = ks_token_endpoint.Token(token='abcd1234', endpoint='http://server.test:5000/v3') elif method == 'auth_ref' and version == 3: p = ks_auth_access.AccessInfoPlugin( auth_ref=mox.IsA(ks_access.AccessInfoV3), auth_url='http://server.test:5000/v3') elif method == 'auth_ref' and version == 2: p = ks_auth_access.AccessInfoPlugin( auth_ref=mox.IsA(ks_access.AccessInfoV2), auth_url='http://server.test:5000/v3') elif method == 'password': p = ks_auth.Password(auth_url='http://server.test:5000/v3', username='test_username', password='password', project_id=project_id or 'test_tenant_id', user_domain_id='adomain123') elif method == 'trust': p = ks_loading.load_auth_from_conf_options(cfg.CONF, 'trustee', trust_id='atrust123') mock_auth_ref.user_id = user_id or 'trustor_user_id' mock_auth_ref.project_id = project_id or 'test_tenant_id' mock_auth_ref.trust_scoped = trust_scoped mock_auth_ref.auth_token = 'atrusttoken' p.AndReturn(mock_ks_auth) if client: c = kc_v3.Client(session=mox.IsA(ks_session.Session), region_name=None) c.AndReturn(self.mock_ks_v3_client) if stub_trust_context: m = mock_ks_auth.get_user_id(mox.IsA(ks_session.Session)) m.AndReturn(user_id) m = mock_ks_auth.get_project_id(mox.IsA(ks_session.Session)) m.AndReturn(project_id) m = mock_ks_auth.get_access(mox.IsA(ks_session.Session)) m.AndReturn(mock_auth_ref) return mock_ks_auth, mock_auth_ref def test_username_length(self): """Test that user names >255 characters are properly truncated.""" self._stubs_auth() ctx = utils.dummy_context() ctx.trust_id = None # a >255 character user name and the expected version long_user_name = 'U' * 255 + 'S' good_user_name = 'U' * 254 + 'S' # mock keystone client user functions self.mock_ks_v3_client.users = self.m.CreateMockAnything() mock_user = self.m.CreateMockAnything() mock_user.id = 'auser123' # when keystone is called, the name should have been truncated # to the last 255 characters of the long name self.mock_ks_v3_client.users.create(name=good_user_name, password='password', default_project=ctx.tenant_id ).AndReturn(mock_user) self.mock_ks_v3_client.roles = self.m.CreateMockAnything() self.mock_ks_v3_client.roles.list( name='heat_stack_user').AndReturn(self._mock_roles_list()) self.mock_ks_v3_client.roles.grant(project=ctx.tenant_id, role='4546', user='auser123').AndReturn(None) self.m.ReplayAll() # call create_stack_user with a long user name. 
# the cleanup VerifyAll should verify that though we passed # long_user_name, keystone was actually called with a truncated # user name heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.create_stack_user(long_user_name, password='password') def test_create_stack_user_error_norole(self): """Test error path when no role is found.""" self._stubs_auth() ctx = utils.dummy_context() ctx.trust_id = None self.mock_ks_v3_client.roles = self.m.CreateMockAnything() self.mock_ks_v3_client.roles.list( name='heat_stack_user').AndReturn([]) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) err = self.assertRaises(exception.Error, heat_ks_client.create_stack_user, 'auser', password='password') self.assertIn("Can't find role heat_stack_user", six.text_type(err)) def _mock_roles_list(self, heat_stack_user='heat_stack_user'): mock_roles_list = [] mock_role = self.m.CreateMockAnything() mock_role.id = '4546' mock_role.name = heat_stack_user mock_roles_list.append(mock_role) return mock_roles_list def test_create_stack_domain_user(self): """Test creating a stack domain user.""" ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None # mock keystone client functions self._stub_domain_admin_client() self.mock_admin_client.users = self.m.CreateMockAnything() mock_user = self.m.CreateMockAnything() mock_user.id = 'duser123' self.mock_admin_client.users.create(name='duser', password=None, default_project='aproject', domain='adomain123' ).AndReturn(mock_user) self.mock_admin_client.roles = self.m.CreateMockAnything() self.mock_admin_client.roles.list( name='heat_stack_user').AndReturn(self._mock_roles_list()) self.mock_admin_client.roles.grant(project='aproject', role='4546', user='duser123').AndReturn(None) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.create_stack_domain_user(username='duser', project_id='aproject') def test_create_stack_domain_user_legacy_fallback(self): """Test creating a stack domain user, fallback path.""" self._clear_domain_override() ctx = utils.dummy_context() ctx.trust_id = None # mock keystone client functions self._stubs_auth() self.mock_ks_v3_client.users = self.m.CreateMockAnything() mock_user = self.m.CreateMockAnything() mock_user.id = 'auser123' self.mock_ks_v3_client.users.create(name='auser', password='password', default_project=ctx.tenant_id ).AndReturn(mock_user) self.mock_ks_v3_client.roles = self.m.CreateMockAnything() self.mock_ks_v3_client.roles.list( name='heat_stack_user').AndReturn(self._mock_roles_list()) self.mock_ks_v3_client.roles.grant(project=ctx.tenant_id, role='4546', user='auser123').AndReturn(None) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.create_stack_domain_user(username='auser', project_id='aproject', password='password') def test_create_stack_domain_user_error_norole(self): """Test creating a stack domain user, no role error path.""" ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None self._stub_domain_admin_client(domain_id=None) # mock keystone client functions self.mock_admin_client.roles = self.m.CreateMockAnything() self.mock_admin_client.roles.list(name='heat_stack_user').AndReturn([]) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) err = self.assertRaises(exception.Error, heat_ks_client.create_stack_domain_user, username='duser', project_id='aproject') self.assertIn("Can't find role heat_stack_user", 
six.text_type(err)) def test_delete_stack_domain_user(self): """Test deleting a stack domain user.""" ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None # mock keystone client functions self._stub_domain_admin_client() self.mock_admin_client.users = self.m.CreateMockAnything() mock_user = self.m.CreateMockAnything() mock_user.id = 'duser123' mock_user.domain_id = 'adomain123' mock_user.default_project_id = 'aproject' self.mock_admin_client.users.get('duser123').AndReturn(mock_user) self.mock_admin_client.users.delete('duser123').AndReturn(None) self.mock_admin_client.users.get('duser123').AndRaise( kc_exception.NotFound) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.delete_stack_domain_user(user_id='duser123', project_id='aproject') # Second delete will raise ignored NotFound heat_ks_client.delete_stack_domain_user(user_id='duser123', project_id='aproject') def test_delete_stack_domain_user_legacy_fallback(self): """Test deleting a stack domain user, fallback path.""" self._clear_domain_override() ctx = utils.dummy_context() ctx.trust_id = None # mock keystone client functions self._stubs_auth() self.mock_ks_v3_client.users = self.m.CreateMockAnything() self.mock_ks_v3_client.users.delete(user='user123').AndReturn(None) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.delete_stack_domain_user(user_id='user123', project_id='aproject') def test_delete_stack_domain_user_error_domain(self): """Test deleting a stack domain user, wrong domain.""" ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None # mock keystone client functions self._stub_domain_admin_client() self.mock_admin_client.users = self.m.CreateMockAnything() mock_user = self.m.CreateMockAnything() mock_user.id = 'duser123' mock_user.domain_id = 'notadomain123' mock_user.default_project_id = 'aproject' self.mock_admin_client.users.get('duser123').AndReturn(mock_user) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) err = self.assertRaises(ValueError, heat_ks_client.delete_stack_domain_user, user_id='duser123', project_id='aproject') self.assertIn('User delete in invalid domain', err.args) def test_delete_stack_domain_user_error_project(self): """Test deleting a stack domain user, wrong project.""" ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None # mock keystone client functions self._stub_domain_admin_client() self.mock_admin_client.users = self.m.CreateMockAnything() mock_user = self.m.CreateMockAnything() mock_user.id = 'duser123' mock_user.domain_id = 'adomain123' mock_user.default_project_id = 'notaproject' self.mock_admin_client.users.get('duser123').AndReturn(mock_user) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) err = self.assertRaises(ValueError, heat_ks_client.delete_stack_domain_user, user_id='duser123', project_id='aproject') self.assertIn('User delete in invalid project', err.args) def test_delete_stack_user(self): """Test deleting a stack user.""" self._stubs_auth() ctx = utils.dummy_context() ctx.trust_id = None # mock keystone client delete function self.mock_ks_v3_client.users = self.m.CreateMockAnything() self.mock_ks_v3_client.users.delete(user='atestuser').AndReturn(None) self.mock_ks_v3_client.users.delete(user='atestuser').AndRaise( kc_exception.NotFound) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) 
heat_ks_client.delete_stack_user('atestuser') # Second delete will raise ignored NotFound heat_ks_client.delete_stack_user('atestuser') def test_init_v3_token(self): """Test creating the client, token auth.""" self._stubs_auth() self.m.ReplayAll() ctx = utils.dummy_context() ctx.username = None ctx.password = None ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.client self.assertIsNotNone(heat_ks_client._client) def test_init_v3_token_auth_ref_v2(self): """Test creating the client, token v2 auth_ref.""" expected_auth_ref = {'token': {'id': 'ctx_token', 'expires': '123'}, 'version': 'v2.0'} self._stubs_auth(method='auth_ref', auth_ref=expected_auth_ref, version=2) self.m.ReplayAll() ctx = utils.dummy_context() ctx.username = None ctx.password = None ctx.trust_id = None ctx.auth_token = 'ctx_token' ctx.auth_token_info = {'access': { 'token': {'id': 'abcd1234', 'expires': '123'}}} heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.client self.assertIsNotNone(heat_ks_client._client) def test_init_v3_token_auth_ref_v3(self): """Test creating the client, token v3 auth_ref.""" expected_auth_ref = {'auth_token': 'ctx_token', 'expires': '456', 'version': 'v3', 'methods': []} self._stubs_auth(method='auth_ref', auth_ref=expected_auth_ref) self.m.ReplayAll() ctx = utils.dummy_context() ctx.username = None ctx.password = None ctx.trust_id = None ctx.auth_token = 'ctx_token' ctx.auth_token_info = {'token': {'expires': '456', 'methods': []}} heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.client self.assertIsNotNone(heat_ks_client._client) def test_init_v3_password(self): """Test creating the client, password auth.""" self._stubs_auth(method='password') self.m.ReplayAll() ctx = utils.dummy_context() ctx.auth_token = None ctx.password = 'password' ctx.trust_id = None ctx.user_domain = 'adomain123' heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) client = heat_ks_client.client self.assertIsNotNone(client) self.assertIsNone(ctx.trust_id) def test_init_v3_bad_nocreds(self): """Test creating the client, no credentials.""" ctx = utils.dummy_context() ctx.auth_token = None ctx.trust_id = None ctx.username = None ctx.password = None self.assertRaises(exception.AuthorizationFailure, heat_keystoneclient.KeystoneClient, ctx) def test_create_trust_context_trust_id(self): """Test create_trust_context with existing trust_id.""" self._stubs_auth(method='trust') cfg.CONF.set_override('deferred_auth_method', 'trusts') self.m.ReplayAll() ctx = utils.dummy_context() ctx.trust_id = 'atrust123' ctx.trustor_user_id = 'trustor_user_id' heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) trust_context = heat_ks_client.create_trust_context() self.assertEqual(ctx.to_dict(), trust_context.to_dict()) def test_create_trust_context_trust_create_deletegate_subset_roles(self): delegate_roles = ['heat_stack_owner'] self._test_create_trust_context_trust_create(delegate_roles) def test_create_trust_context_trust_create_deletegate_all_roles(self): self._test_create_trust_context_trust_create() def _test_create_trust_context_trust_create(self, delegate_roles=None): """Test create_trust_context when creating a trust.""" class MockTrust(object): id = 'atrust123' self._stub_admin_auth() mock_ks_auth, mock_auth_ref = self._stubs_auth(user_id='5678', project_id='42', stub_trust_context=True) cfg.CONF.set_override('deferred_auth_method', 'trusts') if delegate_roles: cfg.CONF.set_override('trusts_delegated_roles', delegate_roles) trustor_roles = 
['heat_stack_owner', 'admin', '__member__'] trustee_roles = delegate_roles or trustor_roles self.mock_ks_v3_client.trusts = self.m.CreateMockAnything() mock_auth_ref.user_id = '5678' mock_auth_ref.project_id = '42' self.mock_ks_v3_client.trusts = self.m.CreateMockAnything() self.mock_ks_v3_client.trusts.create( trustor_user='5678', trustee_user='1234', project='42', impersonation=True, role_names=trustee_roles).AndReturn(MockTrust()) self.m.ReplayAll() ctx = utils.dummy_context(roles=trustor_roles) ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) trust_context = heat_ks_client.create_trust_context() self.assertEqual('atrust123', trust_context.trust_id) self.assertEqual('5678', trust_context.trustor_user_id) def test_create_trust_context_trust_create_norole(self): """Test create_trust_context when creating a trust.""" self._stub_admin_auth() mock_auth, mock_auth_ref = self._stubs_auth(user_id='5678', project_id='42', stub_trust_context=True) cfg.CONF.set_override('deferred_auth_method', 'trusts') cfg.CONF.set_override('trusts_delegated_roles', ['heat_stack_owner']) self.mock_ks_v3_client.trusts = self.m.CreateMockAnything() self.mock_ks_v3_client.trusts.create( trustor_user='5678', trustee_user='1234', project='42', impersonation=True, role_names=['heat_stack_owner']).AndRaise(kc_exception.NotFound()) self.m.ReplayAll() ctx = utils.dummy_context() ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) exc = self.assertRaises(exception.MissingCredentialError, heat_ks_client.create_trust_context) expected = "Missing required credential: roles ['heat_stack_owner']" self.assertIn(expected, six.text_type(exc)) def test_init_domain_cfg_not_set_fallback(self): """Test error path when config lacks domain config.""" self._clear_domain_override() cfg.CONF.clear_override('stack_domain_admin') cfg.CONF.clear_override('stack_domain_admin_password') ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.username = None ctx.password = None ctx.trust_id = None self.assertIsNotNone(heat_keystoneclient.KeystoneClient(ctx)) def test_init_domain_cfg_not_set_error(self): """Test error path when config lacks domain config.""" cfg.CONF.clear_override('stack_domain_admin') cfg.CONF.clear_override('stack_domain_admin_password') err = self.assertRaises(exception.Error, config.startup_sanity_check) exp_msg = ('heat.conf misconfigured, cannot specify ' '"stack_user_domain_id" or "stack_user_domain_name" ' 'without "stack_domain_admin" and ' '"stack_domain_admin_password"') self.assertIn(exp_msg, six.text_type(err)) def test_trust_init(self): """Test consuming a trust when initializing.""" self._stubs_auth(method='trust') cfg.CONF.set_override('deferred_auth_method', 'trusts') self.m.ReplayAll() ctx = utils.dummy_context() ctx.username = None ctx.password = None ctx.auth_token = None ctx.trust_id = 'atrust123' ctx.trustor_user_id = 'trustor_user_id' heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) self.assertIsNotNone(heat_ks_client.client) self.assertIsNone(ctx.auth_token) def test_trust_init_fail(self): """Test consuming a trust when initializing, error scoping.""" self._stubs_auth(method='trust', trust_scoped=False) cfg.CONF.set_override('deferred_auth_method', 'trusts') self.m.ReplayAll() ctx = utils.dummy_context() ctx.username = None ctx.password = None ctx.auth_token = None ctx.trust_id = 'atrust123' ctx.trustor_user_id = 'trustor_user_id' self.assertRaises(exception.AuthorizationFailure, heat_keystoneclient.KeystoneClient, ctx) def 
test_trust_init_fail_impersonation(self):
        """Test consuming a trust when initializing, impersonation error."""
        self._stubs_auth(method='trust', user_id='wrong_user_id')
        cfg.CONF.set_override('deferred_auth_method', 'trusts')
        self.m.ReplayAll()

        ctx = utils.dummy_context()
        ctx.username = 'heat'
        ctx.password = None
        ctx.auth_token = None
        ctx.trust_id = 'atrust123'
        ctx.trustor_user_id = 'trustor_user_id'
        self.assertRaises(exception.AuthorizationFailure,
                          heat_keystoneclient.KeystoneClient, ctx)

    def test_trust_init_pw(self):
        """Test trust_id takes precedence when username/password specified."""
        self._stubs_auth(method='trust')
        self.m.ReplayAll()

        ctx = utils.dummy_context()
        ctx.auth_token = None
        ctx.trust_id = 'atrust123'
        ctx.trustor_user_id = 'trustor_user_id'
        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        self.assertIsNotNone(heat_ks_client._client)

    def test_trust_init_token(self):
        """Test trust_id takes precedence when token specified."""
        self._stubs_auth(method='trust')
        self.m.ReplayAll()

        ctx = utils.dummy_context()
        ctx.username = None
        ctx.password = None
        ctx.trust_id = 'atrust123'
        ctx.trustor_user_id = 'trustor_user_id'
        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        self.assertIsNotNone(heat_ks_client._client)

    def _test_delete_trust(self, raise_ext=None):
        self._stubs_auth()
        cfg.CONF.set_override('deferred_auth_method', 'trusts')
        self.mock_ks_v3_client.trusts = self.m.CreateMockAnything()
        if raise_ext is None:
            self.mock_ks_v3_client.trusts.delete('atrust123').AndReturn(None)
        else:
            self.mock_ks_v3_client.trusts.delete('atrust123').AndRaise(
                raise_ext)
        self.m.ReplayAll()

        ctx = utils.dummy_context()
        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        self.assertIsNone(heat_ks_client.delete_trust(trust_id='atrust123'))

    def test_delete_trust(self):
        """Test delete_trust when deleting trust."""
        self._test_delete_trust()

    def test_delete_trust_not_found(self):
        """Test delete_trust when trust already deleted."""
        self._test_delete_trust(raise_ext=kc_exception.NotFound)

    def test_delete_trust_unauthorized(self):
        """Test delete_trust when trustor is deleted or trust is expired."""
        self._test_delete_trust(raise_ext=kc_exception.Unauthorized)

    def test_disable_stack_user(self):
        """Test disabling a stack user."""
        self._stubs_auth()
        ctx = utils.dummy_context()
        ctx.trust_id = None

        # mock keystone client update function
        self.mock_ks_v3_client.users = self.m.CreateMockAnything()
        self.mock_ks_v3_client.users.update(user='atestuser',
                                            enabled=False).AndReturn(None)
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        heat_ks_client.disable_stack_user('atestuser')

    def test_enable_stack_user(self):
        """Test enabling a stack user."""
        self._stubs_auth()
        ctx = utils.dummy_context()
        ctx.trust_id = None

        # mock keystone client update function
        self.mock_ks_v3_client.users = self.m.CreateMockAnything()
        self.mock_ks_v3_client.users.update(user='atestuser',
                                            enabled=True).AndReturn(None)
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        heat_ks_client.enable_stack_user('atestuser')

    def _stub_admin_user_get(self, user_id, domain_id, project_id):
        self.mock_admin_client.users = self.m.CreateMockAnything()
        mock_user = self.m.CreateMockAnything()
        mock_user.id = user_id
        mock_user.domain_id = domain_id
        mock_user.default_project_id = project_id
        self.mock_admin_client.users.get(user_id).AndReturn(mock_user)
        return mock_user

    def test_enable_stack_domain_user(self):
        """Test enabling a stack domain user."""
        ctx = utils.dummy_context()
        self.patchobject(ctx, '_create_auth_plugin')
        ctx.trust_id = None

        # mock keystone client functions
        self._stub_domain_admin_client()
        self._stub_admin_user_get('duser123', 'adomain123', 'aproject')
        self.mock_admin_client.users.update(user='duser123',
                                            enabled=True).AndReturn(None)
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        heat_ks_client.enable_stack_domain_user(user_id='duser123',
                                                project_id='aproject')

    def test_enable_stack_domain_user_legacy_fallback(self):
        """Test enabling a stack domain user, fallback path."""
        self._clear_domain_override()
        ctx = utils.dummy_context()
        ctx.trust_id = None

        # mock keystone client functions
        self._stubs_auth()
        self.mock_ks_v3_client.users = self.m.CreateMockAnything()
        self.mock_ks_v3_client.users.update(user='user123',
                                            enabled=True).AndReturn(None)
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        heat_ks_client.enable_stack_domain_user(user_id='user123',
                                                project_id='aproject')

    def test_enable_stack_domain_user_error_project(self):
        """Test enabling a stack domain user, wrong project."""
        ctx = utils.dummy_context()
        self.patchobject(ctx, '_create_auth_plugin')
        ctx.trust_id = None

        # mock keystone client functions
        self._stub_domain_admin_client()
        self._stub_admin_user_get('duser123', 'adomain123', 'notaproject')
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        self.assertRaises(ValueError,
                          heat_ks_client.enable_stack_domain_user,
                          user_id='duser123', project_id='aproject')

    def test_enable_stack_domain_user_error_domain(self):
        """Test enabling a stack domain user, wrong domain."""
        ctx = utils.dummy_context()
        self.patchobject(ctx, '_create_auth_plugin')
        ctx.trust_id = None

        # mock keystone client functions
        self._stub_domain_admin_client()
        self._stub_admin_user_get('duser123', 'notadomain123', 'aproject')
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        self.assertRaises(ValueError,
                          heat_ks_client.enable_stack_domain_user,
                          user_id='duser123', project_id='aproject')

    def test_disable_stack_domain_user(self):
        """Test disabling a stack domain user."""
        ctx = utils.dummy_context()
        self.patchobject(ctx, '_create_auth_plugin')
        ctx.trust_id = None

        # mock keystone client functions
        self._stub_domain_admin_client()
        self._stub_admin_user_get('duser123', 'adomain123', 'aproject')
        self.mock_admin_client.users.update(user='duser123',
                                            enabled=False).AndReturn(None)
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        heat_ks_client.disable_stack_domain_user(user_id='duser123',
                                                 project_id='aproject')

    def test_disable_stack_domain_user_legacy_fallback(self):
        """Test disabling a stack domain user, fallback path."""
        self._clear_domain_override()
        ctx = utils.dummy_context()
        ctx.trust_id = None

        # mock keystone client functions
        self._stubs_auth()
        self.mock_ks_v3_client.users = self.m.CreateMockAnything()
        self.mock_ks_v3_client.users.update(user='user123',
                                            enabled=False).AndReturn(None)
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        heat_ks_client.disable_stack_domain_user(user_id='user123',
                                                 project_id='aproject')

    def test_disable_stack_domain_user_error_project(self):
        """Test disabling a stack domain user, wrong project."""
        ctx = utils.dummy_context()
        self.patchobject(ctx, '_create_auth_plugin')
        ctx.trust_id = None

        # mock keystone client functions
        self._stub_domain_admin_client()
        self._stub_admin_user_get('duser123', 'adomain123', 'notaproject')
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        self.assertRaises(ValueError,
heat_ks_client.disable_stack_domain_user, user_id='duser123', project_id='aproject') def test_disable_stack_domain_user_error_domain(self): """Test disabling a stack domain user, wrong domain.""" ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None # mock keystone client functions self._stub_domain_admin_client() self._stub_admin_user_get('duser123', 'notadomain123', 'aproject') self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) self.assertRaises(ValueError, heat_ks_client.disable_stack_domain_user, user_id='duser123', project_id='aproject') def test_delete_stack_domain_user_keypair(self): ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None # mock keystone client functions self._stub_domain_admin_client() user = self._stub_admin_user_get('duser123', 'adomain123', 'aproject') self.mock_admin_client.credentials = self.m.CreateMockAnything() self.mock_admin_client.credentials.delete( 'acredentialid').AndReturn(None) self.mock_admin_client.users.get('duser123').AndReturn(user) self.mock_admin_client.credentials.delete( 'acredentialid').AndRaise(kc_exception.NotFound) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.delete_stack_domain_user_keypair( user_id='duser123', project_id='aproject', credential_id='acredentialid') # Second delete will raise ignored NotFound heat_ks_client.delete_stack_domain_user_keypair( user_id='duser123', project_id='aproject', credential_id='acredentialid') def test_delete_stack_domain_user_keypair_legacy_fallback(self): self._clear_domain_override() ctx = utils.dummy_context() ctx.trust_id = None # mock keystone client functions self._stubs_auth() self.mock_ks_v3_client.credentials = self.m.CreateMockAnything() self.mock_ks_v3_client.credentials.delete( 'acredentialid').AndReturn(None) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.delete_stack_domain_user_keypair( user_id='user123', project_id='aproject', credential_id='acredentialid') def test_delete_stack_domain_user_keypair_error_project(self): ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None # mock keystone client functions self._stub_domain_admin_client() self._stub_admin_user_get('duser123', 'adomain123', 'notaproject') self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) self.assertRaises(ValueError, heat_ks_client.delete_stack_domain_user_keypair, user_id='duser123', project_id='aproject', credential_id='acredentialid') def test_delete_stack_domain_user_keypair_error_domain(self): ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None # mock keystone client functions self._stub_domain_admin_client() self._stub_admin_user_get('duser123', 'notadomain123', 'aproject') self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) self.assertRaises(ValueError, heat_ks_client.delete_stack_domain_user_keypair, user_id='duser123', project_id='aproject', credential_id='acredentialid') def _stub_gen_creds(self, access, secret): # stub UUID.hex to return the values specified mock_access_uuid = mock.Mock() mock_access_uuid.hex = access self.patchobject(uuid, 'uuid4', return_value=mock_access_uuid) self.patchobject(password_gen, 'generate_openstack_password', return_value=secret) def test_create_ec2_keypair(self): """Test creating ec2 credentials.""" self._stubs_auth() ctx = utils.dummy_context() ctx.trust_id = 
None ex_data = {'access': 'dummy_access', 'secret': 'dummy_secret'} ex_data_json = json.dumps(ex_data) # stub UUID.hex to match ex_data self._stub_gen_creds('dummy_access', 'dummy_secret') # mock keystone client credentials functions self.mock_ks_v3_client.credentials = self.m.CreateMockAnything() mock_credential = self.m.CreateMockAnything() mock_credential.id = '123456' mock_credential.user_id = 'atestuser' mock_credential.blob = ex_data_json mock_credential.type = 'ec2' # mock keystone client create function self.mock_ks_v3_client.credentials.create( user='atestuser', type='ec2', blob=ex_data_json, project=ctx.tenant_id).AndReturn(mock_credential) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) ec2_cred = heat_ks_client.create_ec2_keypair(user_id='atestuser') self.assertEqual('123456', ec2_cred.id) self.assertEqual('dummy_access', ec2_cred.access) self.assertEqual('dummy_secret', ec2_cred.secret) def test_create_stack_domain_user_keypair(self): """Test creating ec2 credentials for domain user.""" self._stub_domain_admin_client(domain_id=None) ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None ex_data = {'access': 'dummy_access2', 'secret': 'dummy_secret2'} ex_data_json = json.dumps(ex_data) # stub UUID.hex to match ex_data self._stub_gen_creds('dummy_access2', 'dummy_secret2') # mock keystone client credentials functions self.mock_admin_client.credentials = self.m.CreateMockAnything() mock_credential = self.m.CreateMockAnything() mock_credential.id = '1234567' mock_credential.user_id = 'atestuser2' mock_credential.blob = ex_data_json mock_credential.type = 'ec2' # mock keystone client create function self.mock_admin_client.credentials.create( user='atestuser2', type='ec2', blob=ex_data_json, project='aproject').AndReturn(mock_credential) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) ec2_cred = heat_ks_client.create_stack_domain_user_keypair( user_id='atestuser2', project_id='aproject') self.assertEqual('1234567', ec2_cred.id) self.assertEqual('dummy_access2', ec2_cred.access) self.assertEqual('dummy_secret2', ec2_cred.secret) def test_create_stack_domain_user_keypair_legacy_fallback(self): """Test creating ec2 credentials for domain user, fallback path.""" self._clear_domain_override() self._stubs_auth() ctx = utils.dummy_context() ctx.trust_id = None ex_data = {'access': 'dummy_access2', 'secret': 'dummy_secret2'} ex_data_json = json.dumps(ex_data) # stub UUID.hex to match ex_data self._stub_gen_creds('dummy_access2', 'dummy_secret2') # mock keystone client credentials functions self.mock_ks_v3_client.credentials = self.m.CreateMockAnything() mock_credential = self.m.CreateMockAnything() mock_credential.id = '1234567' mock_credential.user_id = 'atestuser2' mock_credential.blob = ex_data_json mock_credential.type = 'ec2' # mock keystone client create function self.mock_ks_v3_client.credentials.create( user='atestuser2', type='ec2', blob=ex_data_json, project=ctx.tenant_id).AndReturn(mock_credential) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) ec2_cred = heat_ks_client.create_stack_domain_user_keypair( user_id='atestuser2', project_id='aproject') self.assertEqual('1234567', ec2_cred.id) self.assertEqual('dummy_access2', ec2_cred.access) self.assertEqual('dummy_secret2', ec2_cred.secret) def test_get_ec2_keypair_id(self): """Test getting ec2 credential by id.""" user_id = 'atestuser' self._stubs_auth(user_id=user_id) ctx = utils.dummy_context() ctx.trust_id = 
None
        ex_data = {'access': 'access123',
                   'secret': 'secret456'}
        ex_data_json = json.dumps(ex_data)

        # Create a mock credential response
        credential_id = 'acredential123'
        mock_credential = self.m.CreateMockAnything()
        mock_credential.id = credential_id
        mock_credential.user_id = user_id
        mock_credential.blob = ex_data_json
        mock_credential.type = 'ec2'

        # mock keystone client get function
        self.mock_ks_v3_client.credentials = self.m.CreateMockAnything()
        self.mock_ks_v3_client.credentials.get(
            credential_id).AndReturn(mock_credential)
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        ec2_cred = heat_ks_client.get_ec2_keypair(credential_id=credential_id)
        self.assertEqual(credential_id, ec2_cred.id)
        self.assertEqual('access123', ec2_cred.access)
        self.assertEqual('secret456', ec2_cred.secret)

    def _mock_credential_list(self, user_id):
        """Create a mock credential list response."""
        mock_credential_list = []
        for x in (1, 2, 3):
            mock_credential = self.m.CreateMockAnything()
            mock_credential.id = 'credential_id%s' % x
            mock_credential.user_id = user_id
            mock_credential.blob = json.dumps({'access': 'access%s' % x,
                                               'secret': 'secret%s' % x})
            mock_credential.type = 'ec2'
            mock_credential_list.append(mock_credential)

        # mock keystone client list function
        self.mock_ks_v3_client.credentials = self.m.CreateMockAnything()
        self.mock_ks_v3_client.credentials.list().AndReturn(
            mock_credential_list)

    def test_get_ec2_keypair_access(self):
        """Test getting ec2 credential by access."""
        user_id = 'atestuser'
        self._stubs_auth(user_id=user_id)
        ctx = utils.dummy_context()
        ctx.trust_id = None
        self._mock_credential_list(user_id=user_id)
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        ec2_cred = heat_ks_client.get_ec2_keypair(access='access2')
        self.assertEqual('credential_id2', ec2_cred.id)
        self.assertEqual('access2', ec2_cred.access)
        self.assertEqual('secret2', ec2_cred.secret)

    def test_get_ec2_keypair_error(self):
        """Test getting ec2 credential error path."""
        ctx = utils.dummy_context()
        self.patchobject(ctx, '_create_auth_plugin')
        ctx.trust_id = None
        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        self.assertRaises(ValueError, heat_ks_client.get_ec2_keypair)

    def test_delete_ec2_keypair_id(self):
        """Test deleting ec2 credential by id."""
        user_id = 'atestuser'
        self._stubs_auth(user_id=user_id)
        ctx = utils.dummy_context()
        ctx.trust_id = None

        # mock keystone client credentials delete function
        credential_id = 'acredential123'
        self.mock_ks_v3_client.credentials = self.m.CreateMockAnything()
        self.mock_ks_v3_client.credentials.delete(credential_id)
        self.mock_ks_v3_client.credentials.delete(credential_id).AndRaise(
            kc_exception.NotFound)
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
        self.assertIsNone(heat_ks_client.delete_ec2_keypair(
            credential_id=credential_id))
        # Second delete will raise ignored NotFound
        self.assertIsNone(heat_ks_client.delete_ec2_keypair(
            credential_id=credential_id))

    def test_delete_ec2_keypair_access(self):
        """Test deleting ec2 credential by access."""
        user_id = 'atestuser'
        self._stubs_auth(user_id=user_id)
        ctx = utils.dummy_context()
        ctx.trust_id = None
        self._mock_credential_list(user_id=user_id)

        # mock keystone client delete function
        self.mock_ks_v3_client.credentials.delete(
            'credential_id2').AndReturn(None)
        self.m.ReplayAll()

        heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
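        # The call below exercises delete-by-access: with no credential_id,
        # the client is expected to list() all credentials, match the access
        # key stored in each JSON blob, and delete the matching id. A hedged
        # sketch of that lookup (illustrative helper, not the code under
        # test):
        #
        #     def find_by_access(credentials, access):
        #         for cred in credentials:
        #             if json.loads(cred.blob).get('access') == access:
        #                 return cred.id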
self.assertIsNone(heat_ks_client.delete_ec2_keypair(access='access2')) def test_deleting_ec2_keypair_error(self): """Test deleting ec2 credential error path.""" ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) self.assertRaises(ValueError, heat_ks_client.delete_ec2_keypair) def test_create_stack_domain_project(self): """Test the create_stack_domain_project function.""" ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None expected_name = '%s-astack' % ctx.tenant_id self._stub_domain_admin_client() self.mock_admin_client.projects = self.m.CreateMockAnything() dummy = self.m.CreateMockAnything() dummy.id = 'aproject123' self.mock_admin_client.projects.create( name=expected_name, domain='adomain123', description='Heat stack user project').AndReturn(dummy) self.m.ReplayAll() heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) self.assertEqual('aproject123', heat_ks_client.create_stack_domain_project('astack')) def test_create_stack_domain_project_legacy_fallback(self): """Test the create_stack_domain_project function, fallback path.""" self._clear_domain_override() ctx = utils.dummy_context() ctx.trust_id = None self.patchobject(ctx, '_create_auth_plugin') heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) self.assertEqual(ctx.tenant_id, heat_ks_client.create_stack_domain_project('astack')) def test_delete_stack_domain_project(self): """Test the delete_stack_domain_project function.""" self._stub_domain_admin_client() self.mock_admin_client.projects = self.m.CreateMockAnything() dummy = self.m.CreateMockAnything() dummy.id = 'aproject123' dummy.domain_id = 'adomain123' dummy.delete().AndReturn(None) self.mock_admin_client.projects.get(project='aprojectid').AndReturn( dummy) self.m.ReplayAll() ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.delete_stack_domain_project(project_id='aprojectid') def test_delete_stack_domain_project_notfound(self): """Test the delete_stack_domain_project function.""" self._stub_domain_admin_client(domain_id=None) self.mock_admin_client.projects = self.m.CreateMockAnything() self.mock_admin_client.projects.get(project='aprojectid').AndRaise( kc_exception.NotFound) self.m.ReplayAll() ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.delete_stack_domain_project(project_id='aprojectid') def test_delete_stack_domain_project_forbidden(self): """Test the delete_stack_domain_project function.""" self._stub_domain_admin_client(domain_id=None) self.mock_admin_client.projects = self.m.CreateMockAnything() self.mock_admin_client.projects.get(project='aprojectid').AndRaise( kc_exception.Forbidden) self.m.ReplayAll() ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.delete_stack_domain_project(project_id='aprojectid') def test_delete_stack_domain_project_wrongdomain(self): """Test the delete_stack_domain_project function.""" self._stub_domain_admin_client() self.mock_admin_client.projects = self.m.CreateMockAnything() dummy = self.m.CreateMockAnything() dummy.id = 'aproject123' dummy.domain_id = 'default' 
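        # 'default' deliberately differs from the configured stack user
        # domain ('adomain123'), so the delete should be refused rather than
        # remove a project outside the stack domain. A sketch of the guard
        # this test presumably exercises (assumed shape, not the real code):
        #
        #     if project.domain_id != stack_domain_id:
        #         return  # never delete projects outside the stack domain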
self.mock_admin_client.projects.get(project='aprojectid').AndReturn( dummy) self.m.ReplayAll() ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.delete_stack_domain_project(project_id='aprojectid') def test_delete_stack_domain_project_nodomain(self): """Test the delete_stack_domain_project function.""" self._clear_domain_override() ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) heat_ks_client.delete_stack_domain_project(project_id='aprojectid') def _stub_domain_user_pw_auth(self): ks_auth.Password(auth_url='http://server.test:5000/v3', user_id='duser', password='apassw', project_id='aproject', user_domain_id='adomain123').AndReturn('dummyauth') def test_stack_domain_user_token(self): """Test stack_domain_user_token function.""" dum_tok = 'dummytoken' ctx = utils.dummy_context() mock_ks_auth = self.m.CreateMockAnything() mock_ks_auth.get_token(mox.IsA(ks_session.Session)).AndReturn(dum_tok) self.patchobject(ctx, '_create_auth_plugin') m = ks_auth.Password(auth_url='http://server.test:5000/v3', password='apassw', project_id='aproject', user_id='duser') m.AndReturn(mock_ks_auth) self.m.ReplayAll() ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) token = heat_ks_client.stack_domain_user_token(user_id='duser', project_id='aproject', password='apassw') self.assertEqual(dum_tok, token) def test_stack_domain_user_token_err_nodomain(self): """Test stack_domain_user_token error path.""" self._clear_domain_override() ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) self.assertRaises(exception.Error, heat_ks_client.stack_domain_user_token, user_id='user', project_id='aproject', password='password') def test_delete_stack_domain_project_legacy_fallback(self): """Test the delete_stack_domain_project function, fallback path.""" self._clear_domain_override() ctx = utils.dummy_context() self.patchobject(ctx, '_create_auth_plugin') ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClient(ctx) self.assertIsNone(heat_ks_client.delete_stack_domain_project( project_id='aprojectid')) class KeystoneClientTestDomainName(KeystoneClientTest): def setUp(self): cfg.CONF.set_override('stack_user_domain_name', 'fake_domain_name') super(KeystoneClientTestDomainName, self).setUp() cfg.CONF.clear_override('stack_user_domain_id') def _clear_domain_override(self): cfg.CONF.clear_override('stack_user_domain_name') def _stub_domain_admin_client_domain_get(self): dummy_domain = self.m.CreateMockAnything() dummy_domain.id = 'adomain123' self.mock_ks_v3_client_domain_mngr.list( name='fake_domain_name').AndReturn([dummy_domain]) def _stub_domain_admin_client(self, domain_id='adomain123'): mock_ks_auth = self.m.CreateMockAnything() mock_ks_auth.get_token(mox.IsA(ks_session.Session)).AndReturn('tok') if domain_id: a = self.m.CreateMockAnything() a.domain_id = domain_id mock_ks_auth.get_access( mox.IsA(ks_session.Session)).AndReturn(a) m = ks_auth.Password(auth_url='http://server.test:5000/v3', password='adminsecret', domain_id=None, domain_name='fake_domain_name', user_domain_id=None, user_domain_name='fake_domain_name', username='adminuser123') m.AndReturn(mock_ks_auth) n = kc_v3.Client(session=mox.IsA(ks_session.Session), auth=mock_ks_auth, region_name=None) 
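        # The stub above mirrors how a real domain-admin client is wired up
        # with keystoneauth1: a Password plugin resolved through a Session
        # and handed to the v3 client. Roughly (illustrative only):
        #
        #     auth = ks_auth.Password(auth_url=auth_url,
        #                             username='adminuser123',
        #                             password='adminsecret',
        #                             user_domain_name='fake_domain_name')
        #     admin_client = kc_v3.Client(
        #         session=ks_session.Session(auth=auth), auth=auth)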
n.AndReturn(self.mock_admin_client) self.mock_admin_client.domains = self.mock_ks_v3_client_domain_mngr def _stub_domain_user_pw_auth(self): ks_auth.Password(auth_url='http://server.test:5000/v3', user_id='duser', password='apassw', project_id='aproject', user_domain_name='fake_domain_name' ).AndReturn('dummyauth') def test_enable_stack_domain_user_error_project(self): p = super(KeystoneClientTestDomainName, self) p.test_enable_stack_domain_user_error_project() def test_delete_stack_domain_user_keypair(self): p = super(KeystoneClientTestDomainName, self) p.test_delete_stack_domain_user_keypair() def test_delete_stack_domain_user_error_project(self): p = super(KeystoneClientTestDomainName, self) p.test_delete_stack_domain_user_error_project() def test_delete_stack_domain_user_keypair_error_project(self): p = super(KeystoneClientTestDomainName, self) p.test_delete_stack_domain_user_keypair_error_project() def test_delete_stack_domain_user(self): p = super(KeystoneClientTestDomainName, self) p.test_delete_stack_domain_user() def test_enable_stack_domain_user(self): p = super(KeystoneClientTestDomainName, self) p.test_enable_stack_domain_user() def test_delete_stack_domain_user_error_domain(self): p = super(KeystoneClientTestDomainName, self) p.test_delete_stack_domain_user_error_domain() def test_disable_stack_domain_user_error_project(self): p = super(KeystoneClientTestDomainName, self) p.test_disable_stack_domain_user_error_project() def test_enable_stack_domain_user_error_domain(self): p = super(KeystoneClientTestDomainName, self) p.test_enable_stack_domain_user_error_domain() def test_delete_stack_domain_user_keypair_error_domain(self): p = super(KeystoneClientTestDomainName, self) p.test_delete_stack_domain_user_keypair_error_domain() def test_disable_stack_domain_user(self): p = super(KeystoneClientTestDomainName, self) p.test_disable_stack_domain_user() def test_disable_stack_domain_user_error_domain(self): p = super(KeystoneClientTestDomainName, self) p.test_disable_stack_domain_user_error_domain() def test_delete_stack_domain_project(self): p = super(KeystoneClientTestDomainName, self) p.test_delete_stack_domain_project() def test_delete_stack_domain_project_notfound(self): p = super(KeystoneClientTestDomainName, self) p.test_delete_stack_domain_project_notfound() def test_delete_stack_domain_project_wrongdomain(self): p = super(KeystoneClientTestDomainName, self) p.test_delete_stack_domain_project_wrongdomain() def test_create_stack_domain_project(self): p = super(KeystoneClientTestDomainName, self) p.test_create_stack_domain_project() def test_create_stack_domain_user(self): p = super(KeystoneClientTestDomainName, self) p.test_create_stack_domain_user() heat-10.0.2/heat/tests/test_constraints.py0000666000175000017500000005752213343562340020620 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
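# The tests below exercise heat.engine.constraints. For orientation, a
# Schema couples a type with optional constraints and serializes to a plain
# dict, e.g. (illustrative, mirroring the expected dicts asserted below):
#
#     s = constraints.Schema(constraints.Schema.STRING, 'A string',
#                            default='wibble',
#                            constraints=[constraints.Length(4, 8)])
#     dict(s)  # {'type': 'string', 'description': 'A string',
#              #  'default': 'wibble', 'required': False,
#              #  'constraints': [{'length': {'min': 4, 'max': 8}}]}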
import six from heat.common import exception from heat.engine import constraints from heat.engine import environment from heat.tests import common class SchemaTest(common.HeatTestCase): def test_range_schema(self): d = {'range': {'min': 5, 'max': 10}, 'description': 'a range'} r = constraints.Range(5, 10, description='a range') self.assertEqual(d, dict(r)) def test_range_min_schema(self): d = {'range': {'min': 5}, 'description': 'a range'} r = constraints.Range(min=5, description='a range') self.assertEqual(d, dict(r)) def test_range_max_schema(self): d = {'range': {'max': 10}, 'description': 'a range'} r = constraints.Range(max=10, description='a range') self.assertEqual(d, dict(r)) def test_length_schema(self): d = {'length': {'min': 5, 'max': 10}, 'description': 'a length range'} r = constraints.Length(5, 10, description='a length range') self.assertEqual(d, dict(r)) def test_length_min_schema(self): d = {'length': {'min': 5}, 'description': 'a length range'} r = constraints.Length(min=5, description='a length range') self.assertEqual(d, dict(r)) def test_length_max_schema(self): d = {'length': {'max': 10}, 'description': 'a length range'} r = constraints.Length(max=10, description='a length range') self.assertEqual(d, dict(r)) def test_modulo_schema(self): d = {'modulo': {'step': 2, 'offset': 1}, 'description': 'a modulo'} r = constraints.Modulo(2, 1, description='a modulo') self.assertEqual(d, dict(r)) def test_allowed_values_schema(self): d = {'allowed_values': ['foo', 'bar'], 'description': 'allowed values'} r = constraints.AllowedValues(['foo', 'bar'], description='allowed values') self.assertEqual(d, dict(r)) def test_allowed_pattern_schema(self): d = {'allowed_pattern': '[A-Za-z0-9]', 'description': 'alphanumeric'} r = constraints.AllowedPattern('[A-Za-z0-9]', description='alphanumeric') self.assertEqual(d, dict(r)) def test_range_validate(self): r = constraints.Range(min=5, max=5, description='a range') r.validate(5) def test_range_min_fail(self): r = constraints.Range(min=5, description='a range') self.assertRaises(ValueError, r.validate, 4) def test_range_max_fail(self): r = constraints.Range(max=5, description='a range') self.assertRaises(ValueError, r.validate, 6) def test_length_validate(self): l = constraints.Length(min=5, max=5, description='a range') l.validate('abcde') def test_length_min_fail(self): l = constraints.Length(min=5, description='a range') self.assertRaises(ValueError, l.validate, 'abcd') def test_length_max_fail(self): l = constraints.Length(max=5, description='a range') self.assertRaises(ValueError, l.validate, 'abcdef') def test_modulo_validate(self): r = constraints.Modulo(step=2, offset=1, description='a modulo') r.validate(1) r.validate(3) r.validate(5) r.validate(777777) r = constraints.Modulo(step=111, offset=0, description='a modulo') r.validate(111) r.validate(222) r.validate(444) r.validate(1110) r = constraints.Modulo(step=111, offset=11, description='a modulo') r.validate(122) r.validate(233) r.validate(1121) r = constraints.Modulo(step=-2, offset=-1, description='a modulo') r.validate(-1) r.validate(-3) r.validate(-5) r.validate(-777777) r = constraints.Modulo(step=-2, offset=0, description='a modulo') r.validate(-2) r.validate(-4) r.validate(-8888888) def test_modulo_validate_fail(self): r = constraints.Modulo(step=2, offset=1) err = self.assertRaises(ValueError, r.validate, 4) self.assertIn('4 is not a multiple of 2 with an offset of 1', six.text_type(err)) self.assertRaises(ValueError, r.validate, 0) self.assertRaises(ValueError, 
r.validate, 2) self.assertRaises(ValueError, r.validate, 888888) r = constraints.Modulo(step=2, offset=0) self.assertRaises(ValueError, r.validate, 1) self.assertRaises(ValueError, r.validate, 3) self.assertRaises(ValueError, r.validate, 5) self.assertRaises(ValueError, r.validate, 777777) err = self.assertRaises(exception.InvalidSchemaError, constraints.Modulo, step=111, offset=111) self.assertIn('offset must be smaller (by absolute value) than step', six.text_type(err)) err = self.assertRaises(exception.InvalidSchemaError, constraints.Modulo, step=111, offset=112) self.assertIn('offset must be smaller (by absolute value) than step', six.text_type(err)) err = self.assertRaises(exception.InvalidSchemaError, constraints.Modulo, step=0, offset=1) self.assertIn('step cannot be 0', six.text_type(err)) err = self.assertRaises(exception.InvalidSchemaError, constraints.Modulo, step=-2, offset=1) self.assertIn('step and offset must be both positive or both negative', six.text_type(err)) err = self.assertRaises(exception.InvalidSchemaError, constraints.Modulo, step=2, offset=-1) self.assertIn('step and offset must be both positive or both negative', six.text_type(err)) def test_schema_all(self): d = { 'type': 'string', 'description': 'A string', 'default': 'wibble', 'required': False, 'constraints': [ {'length': {'min': 4, 'max': 8}}, ] } s = constraints.Schema(constraints.Schema.STRING, 'A string', default='wibble', constraints=[constraints.Length(4, 8)]) self.assertEqual(d, dict(s)) def test_schema_list_schema(self): d = { 'type': 'list', 'description': 'A list', 'schema': { '*': { 'type': 'string', 'description': 'A string', 'default': 'wibble', 'required': False, 'constraints': [ {'length': {'min': 4, 'max': 8}}, ] } }, 'required': False, } s = constraints.Schema(constraints.Schema.STRING, 'A string', default='wibble', constraints=[constraints.Length(4, 8)]) l = constraints.Schema(constraints.Schema.LIST, 'A list', schema=s) self.assertEqual(d, dict(l)) def test_schema_map_schema(self): d = { 'type': 'map', 'description': 'A map', 'schema': { 'Foo': { 'type': 'string', 'description': 'A string', 'default': 'wibble', 'required': False, 'constraints': [ {'length': {'min': 4, 'max': 8}}, ] } }, 'required': False, } s = constraints.Schema(constraints.Schema.STRING, 'A string', default='wibble', constraints=[constraints.Length(4, 8)]) m = constraints.Schema(constraints.Schema.MAP, 'A map', schema={'Foo': s}) self.assertEqual(d, dict(m)) def test_schema_nested_schema(self): d = { 'type': 'list', 'description': 'A list', 'schema': { '*': { 'type': 'map', 'description': 'A map', 'schema': { 'Foo': { 'type': 'string', 'description': 'A string', 'default': 'wibble', 'required': False, 'constraints': [ {'length': {'min': 4, 'max': 8}}, ] } }, 'required': False, } }, 'required': False, } s = constraints.Schema(constraints.Schema.STRING, 'A string', default='wibble', constraints=[constraints.Length(4, 8)]) m = constraints.Schema(constraints.Schema.MAP, 'A map', schema={'Foo': s}) l = constraints.Schema(constraints.Schema.LIST, 'A list', schema=m) self.assertEqual(d, dict(l)) def test_invalid_type(self): self.assertRaises(exception.InvalidSchemaError, constraints.Schema, 'Fish') def test_schema_invalid_type(self): self.assertRaises(exception.InvalidSchemaError, constraints.Schema, 'String', schema=constraints.Schema('String')) def test_range_invalid_type(self): schema = constraints.Schema('String', constraints=[constraints.Range(1, 10)]) err = self.assertRaises(exception.InvalidSchemaError, schema.validate) 
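        # Note: schema.validate() (no value) checks that each constraint is
        # compatible with the schema's type, so a numeric Range attached to
        # a String schema is itself an InvalidSchemaError. Validating a
        # concrete value against a well-formed schema is a separate step,
        # e.g. constraints.Range(min=5).validate(4) raises ValueError.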
self.assertIn('Range constraint invalid for String', six.text_type(err)) def test_length_invalid_type(self): schema = constraints.Schema('Integer', constraints=[constraints.Length(1, 10)]) err = self.assertRaises(exception.InvalidSchemaError, schema.validate) self.assertIn('Length constraint invalid for Integer', six.text_type(err)) def test_modulo_invalid_type(self): schema = constraints.Schema('String', constraints=[constraints.Modulo(2, 1)]) err = self.assertRaises(exception.InvalidSchemaError, schema.validate) self.assertIn('Modulo constraint invalid for String', six.text_type(err)) def test_allowed_pattern_invalid_type(self): schema = constraints.Schema( 'Integer', constraints=[constraints.AllowedPattern('[0-9]*')] ) err = self.assertRaises(exception.InvalidSchemaError, schema.validate) self.assertIn('AllowedPattern constraint invalid for Integer', six.text_type(err)) def test_range_vals_invalid_type(self): self.assertRaises(exception.InvalidSchemaError, constraints.Range, '1', 10) self.assertRaises(exception.InvalidSchemaError, constraints.Range, 1, '10') def test_length_vals_invalid_type(self): self.assertRaises(exception.InvalidSchemaError, constraints.Length, '1', 10) self.assertRaises(exception.InvalidSchemaError, constraints.Length, 1, '10') def test_modulo_vals_invalid_type(self): self.assertRaises(exception.InvalidSchemaError, constraints.Modulo, '2', 1) self.assertRaises(exception.InvalidSchemaError, constraints.Modulo, 2, '1') def test_schema_validate_good(self): s = constraints.Schema(constraints.Schema.STRING, 'A string', default='wibble', constraints=[constraints.Length(4, 8)]) self.assertIsNone(s.validate()) def test_schema_validate_fail(self): s = constraints.Schema(constraints.Schema.STRING, 'A string', default='wibble', constraints=[constraints.Range(max=4)]) err = self.assertRaises(exception.InvalidSchemaError, s.validate) self.assertIn('Range constraint invalid for String', six.text_type(err)) def test_schema_nested_validate_good(self): nested = constraints.Schema(constraints.Schema.STRING, 'A string', default='wibble', constraints=[constraints.Length(4, 8)]) s = constraints.Schema(constraints.Schema.MAP, 'A map', schema={'Foo': nested}) self.assertIsNone(s.validate()) def test_schema_nested_validate_fail(self): nested = constraints.Schema(constraints.Schema.STRING, 'A string', default='wibble', constraints=[constraints.Range(max=4)]) s = constraints.Schema(constraints.Schema.MAP, 'A map', schema={'Foo': nested}) err = self.assertRaises(exception.InvalidSchemaError, s.validate) self.assertIn('Range constraint invalid for String', six.text_type(err)) def test_allowed_values_numeric_int(self): """Test AllowedValues constraint for numeric integer values. Test if the AllowedValues constraint works for numeric values in any combination of numeric strings or numbers in the constraint and numeric strings or numbers as value. """ # Allowed values defined as integer numbers schema = constraints.Schema( 'Integer', constraints=[constraints.AllowedValues([1, 2, 4])] ) # ... 
and value as number or string
        self.assertIsNone(schema.validate_constraints(1))
        err = self.assertRaises(exception.StackValidationFailed,
                                schema.validate_constraints, 3)
        self.assertEqual('"3" is not an allowed value [1, 2, 4]',
                         six.text_type(err))
        self.assertIsNone(schema.validate_constraints('1'))
        err = self.assertRaises(exception.StackValidationFailed,
                                schema.validate_constraints, '3')
        self.assertEqual('"3" is not an allowed value [1, 2, 4]',
                         six.text_type(err))

        # Allowed values defined as integer strings
        schema = constraints.Schema(
            'Integer',
            constraints=[constraints.AllowedValues(['1', '2', '4'])]
        )
        # ... and value as number or string
        self.assertIsNone(schema.validate_constraints(1))
        err = self.assertRaises(exception.StackValidationFailed,
                                schema.validate_constraints, 3)
        self.assertEqual('"3" is not an allowed value [1, 2, 4]',
                         six.text_type(err))
        self.assertIsNone(schema.validate_constraints('1'))
        err = self.assertRaises(exception.StackValidationFailed,
                                schema.validate_constraints, '3')
        self.assertEqual('"3" is not an allowed value [1, 2, 4]',
                         six.text_type(err))

    def test_allowed_values_numeric_float(self):
        """Test AllowedValues constraint for numeric floating point values.

        Test if the AllowedValues constraint works for numeric values in any
        combination of numeric strings or numbers in the constraint and
        numeric strings or numbers as value.
        """
        # Allowed values defined as numbers
        schema = constraints.Schema(
            'Number',
            constraints=[constraints.AllowedValues([1.1, 2.2, 4.4])]
        )
        # ... and value as number or string
        self.assertIsNone(schema.validate_constraints(1.1))
        err = self.assertRaises(exception.StackValidationFailed,
                                schema.validate_constraints, 3.3)
        self.assertEqual('"3.3" is not an allowed value [1.1, 2.2, 4.4]',
                         six.text_type(err))
        self.assertIsNone(schema.validate_constraints('1.1'))
        err = self.assertRaises(exception.StackValidationFailed,
                                schema.validate_constraints, '3.3')
        self.assertEqual('"3.3" is not an allowed value [1.1, 2.2, 4.4]',
                         six.text_type(err))

        # Allowed values defined as strings
        schema = constraints.Schema(
            'Number',
            constraints=[constraints.AllowedValues(['1.1', '2.2', '4.4'])]
        )
        # ... and value as number or string
        self.assertIsNone(schema.validate_constraints(1.1))
        err = self.assertRaises(exception.StackValidationFailed,
                                schema.validate_constraints, 3.3)
        self.assertEqual('"3.3" is not an allowed value [1.1, 2.2, 4.4]',
                         six.text_type(err))
        self.assertIsNone(schema.validate_constraints('1.1'))
        err = self.assertRaises(exception.StackValidationFailed,
                                schema.validate_constraints, '3.3')
        self.assertEqual('"3.3" is not an allowed value [1.1, 2.2, 4.4]',
                         six.text_type(err))

    def test_to_schema_type_int(self):
        """Test Schema.to_schema_type method for type Integer."""
        schema = constraints.Schema('Integer')

        # test valid values, i.e. integers as string or number
        res = schema.to_schema_type(1)
        self.assertIsInstance(res, int)
        res = schema.to_schema_type('1')
        self.assertIsInstance(res, int)

        # test invalid numeric values, i.e.
floating point numbers err = self.assertRaises(ValueError, schema.to_schema_type, 1.5) self.assertEqual('Value "1.5" is invalid for data type "Integer".', six.text_type(err)) err = self.assertRaises(ValueError, schema.to_schema_type, '1.5') self.assertEqual('Value "1.5" is invalid for data type "Integer".', six.text_type(err)) # test invalid string values err = self.assertRaises(ValueError, schema.to_schema_type, 'foo') self.assertEqual('Value "foo" is invalid for data type "Integer".', six.text_type(err)) def test_to_schema_type_num(self): """Test Schema.to_schema_type method for type Number.""" schema = constraints.Schema('Number') res = schema.to_schema_type(1) self.assertIsInstance(res, int) res = schema.to_schema_type('1') self.assertIsInstance(res, int) res = schema.to_schema_type(1.5) self.assertIsInstance(res, float) res = schema.to_schema_type('1.5') self.assertIsInstance(res, float) self.assertEqual(1.5, res) err = self.assertRaises(ValueError, schema.to_schema_type, 'foo') self.assertEqual('Value "foo" is invalid for data type "Number".', six.text_type(err)) def test_to_schema_type_string(self): """Test Schema.to_schema_type method for type String.""" schema = constraints.Schema('String') res = schema.to_schema_type('one') self.assertIsInstance(res, six.string_types) res = schema.to_schema_type('1') self.assertIsInstance(res, six.string_types) res = schema.to_schema_type(1) self.assertIsInstance(res, six.string_types) res = schema.to_schema_type(True) self.assertIsInstance(res, six.string_types) res = schema.to_schema_type(None) self.assertIsInstance(res, six.string_types) def test_to_schema_type_boolean(self): """Test Schema.to_schema_type method for type Boolean.""" schema = constraints.Schema('Boolean') true_values = [1, '1', True, 'true', 'True', 'yes', 'Yes'] for v in true_values: res = schema.to_schema_type(v) self.assertIsInstance(res, bool) self.assertTrue(res) false_values = [0, '0', False, 'false', 'False', 'No', 'no'] for v in false_values: res = schema.to_schema_type(v) self.assertIsInstance(res, bool) self.assertFalse(res) err = self.assertRaises(ValueError, schema.to_schema_type, 'foo') self.assertEqual('Value "foo" is invalid for data type "Boolean".', six.text_type(err)) def test_to_schema_type_map(self): """Test Schema.to_schema_type method for type Map.""" schema = constraints.Schema('Map') res = schema.to_schema_type({'a': 'aa', 'b': 'bb'}) self.assertIsInstance(res, dict) self.assertEqual({'a': 'aa', 'b': 'bb'}, res) def test_to_schema_type_list(self): """Test Schema.to_schema_type method for type List.""" schema = constraints.Schema('List') res = schema.to_schema_type(['a', 'b']) self.assertIsInstance(res, list) self.assertEqual(['a', 'b'], res) class CustomConstraintTest(common.HeatTestCase): def setUp(self): super(CustomConstraintTest, self).setUp() self.env = environment.Environment({}) def test_validation(self): class ZeroConstraint(object): def validate(self, value, context): return value == 0 self.env.register_constraint("zero", ZeroConstraint) constraint = constraints.CustomConstraint("zero", environment=self.env) self.assertEqual("Value must be of type zero", six.text_type(constraint)) self.assertIsNone(constraint.validate(0)) error = self.assertRaises(ValueError, constraint.validate, 1) self.assertEqual('"1" does not validate zero', six.text_type(error)) def test_custom_error(self): class ZeroConstraint(object): def error(self, value): return "%s is not 0" % value def validate(self, value, context): return value == 0 
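        # Overriding error() lets a custom constraint control the failure
        # message shown to the user; without it, the generic
        # '"<value>" does not validate <name>' wording seen in
        # test_validation above is used, while a 'message' attribute (see
        # test_custom_message below) only changes str(constraint).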
self.env.register_constraint("zero", ZeroConstraint) constraint = constraints.CustomConstraint("zero", environment=self.env) error = self.assertRaises(ValueError, constraint.validate, 1) self.assertEqual("1 is not 0", six.text_type(error)) def test_custom_message(self): class ZeroConstraint(object): message = "Only zero!" def validate(self, value, context): return value == 0 self.env.register_constraint("zero", ZeroConstraint) constraint = constraints.CustomConstraint("zero", environment=self.env) self.assertEqual("Only zero!", six.text_type(constraint)) def test_unknown_constraint(self): constraint = constraints.CustomConstraint("zero", environment=self.env) error = self.assertRaises(ValueError, constraint.validate, 1) self.assertEqual('"1" does not validate zero (constraint not found)', six.text_type(error)) def test_constraints(self): class ZeroConstraint(object): def validate(self, value, context): return value == 0 self.env.register_constraint("zero", ZeroConstraint) constraint = constraints.CustomConstraint("zero", environment=self.env) self.assertEqual("zero", constraint["custom_constraint"]) heat-10.0.2/heat/tests/test_auth_password.py0000666000175000017500000001362413343562352021132 0ustar zuulzuul00000000000000# # Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
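# The middleware under test here authenticates X_AUTH_USER / X_AUTH_KEY
# headers against keystone and, on success, injects the identity headers
# asserted in EXPECTED_ENV_RESPONSE below. A minimal wiring sketch
# (hypothetical WSGI environ, same config shape as these tests):
#
#     app = auth_password.KeystonePasswordAuthProtocol(
#         FakeApp(), {'auth_uri': 'http://keystone.test.com:5000'})
#     body = app(environ, start_response)  # responds 401 on bad credentials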
from keystoneauth1 import exceptions as keystone_exc from keystoneauth1.identity import generic as ks_auth from keystoneauth1 import session as ks_session import mox from oslo_config import cfg import six import webob from heat.common import auth_password from heat.tests import common cfg.CONF.import_opt('keystone_backend', 'heat.engine.clients.os.keystone.heat_keystoneclient') EXPECTED_ENV_RESPONSE = { 'HTTP_X_IDENTITY_STATUS': 'Confirmed', 'HTTP_X_PROJECT_ID': 'tenant_id1', 'HTTP_X_PROJECT_NAME': 'tenant_name1', 'HTTP_X_USER_ID': 'user_id1', 'HTTP_X_USER_NAME': 'user_name1', 'HTTP_X_ROLES': 'role1,role2', 'HTTP_X_AUTH_TOKEN': 'lalalalalala', } TOKEN_V3_RESPONSE = { 'version': 'v3', 'project_id': 'tenant_id1', 'project_name': 'tenant_name1', 'user_id': 'user_id1', 'username': 'user_name1', 'service_catalog': None, 'role_names': ['role1', 'role2'], 'auth_token': 'lalalalalala', 'user_domain_id': 'domain1' } TOKEN_V2_RESPONSE = { 'version': 'v2', 'tenant_id': 'tenant_id1', 'tenant_name': 'tenant_name1', 'user_id': 'user_id1', 'service_catalog': None, 'username': 'user_name1', 'role_names': ['role1', 'role2'], 'auth_token': 'lalalalalala', 'user_domain_id': 'domain1' } class FakeAccessInfo(object): def __init__(self, **args): self.__dict__.update(args) class FakeApp(object): """This represents a WSGI app protected by our auth middleware.""" def __init__(self, expected_env=None): expected_env = expected_env or {} self.expected_env = dict(EXPECTED_ENV_RESPONSE) self.expected_env.update(expected_env) def __call__(self, env, start_response): """Assert that expected environment is present when finally called.""" for k, v in self.expected_env.items(): assert env[k] == v, '%s != %s' % (env[k], v) resp = webob.Response() resp.body = six.b('SUCCESS') return resp(env, start_response) class KeystonePasswordAuthProtocolTest(common.HeatTestCase): def setUp(self): super(KeystonePasswordAuthProtocolTest, self).setUp() self.config = {'auth_uri': 'http://keystone.test.com:5000'} self.app = FakeApp() self.middleware = auth_password.KeystonePasswordAuthProtocol( self.app, self.config) def _start_fake_response(self, status, headers): self.response_status = int(status.split(' ', 1)[0]) self.response_headers = dict(headers) def test_valid_v2_request(self): mock_auth = self.m.CreateMock(ks_auth.Password) self.m.StubOutWithMock(ks_auth, 'Password') ks_auth.Password( auth_url=self.config['auth_uri'], password='goodpassword', project_id='tenant_id1', user_domain_id='domain1', username='user_name1').AndReturn(mock_auth) m = mock_auth.get_access(mox.IsA(ks_session.Session)) m.AndReturn(FakeAccessInfo(**TOKEN_V2_RESPONSE)) self.m.ReplayAll() req = webob.Request.blank('/tenant_id1/') req.headers['X_AUTH_USER'] = 'user_name1' req.headers['X_AUTH_KEY'] = 'goodpassword' req.headers['X_AUTH_URL'] = self.config['auth_uri'] req.headers['X_USER_DOMAIN_ID'] = 'domain1' self.middleware(req.environ, self._start_fake_response) self.m.VerifyAll() def test_valid_v3_request(self): mock_auth = self.m.CreateMock(ks_auth.Password) self.m.StubOutWithMock(ks_auth, 'Password') ks_auth.Password(auth_url=self.config['auth_uri'], password='goodpassword', project_id='tenant_id1', user_domain_id='domain1', username='user_name1').AndReturn(mock_auth) m = mock_auth.get_access(mox.IsA(ks_session.Session)) m.AndReturn(FakeAccessInfo(**TOKEN_V3_RESPONSE)) self.m.ReplayAll() req = webob.Request.blank('/tenant_id1/') req.headers['X_AUTH_USER'] = 'user_name1' req.headers['X_AUTH_KEY'] = 'goodpassword' req.headers['X_AUTH_URL'] = self.config['auth_uri'] 
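        # Each X_AUTH_* header maps onto a ks_auth.Password argument
        # (X_AUTH_USER -> username, X_AUTH_KEY -> password, X_AUTH_URL ->
        # auth_url); the domain header below completes the v3 credential.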
req.headers['X_USER_DOMAIN_ID'] = 'domain1' self.middleware(req.environ, self._start_fake_response) self.m.VerifyAll() def test_request_with_bad_credentials(self): self.m.StubOutWithMock(ks_auth, 'Password') m = ks_auth.Password(auth_url=self.config['auth_uri'], password='badpassword', project_id='tenant_id1', user_domain_id='domain1', username='user_name1') m.AndRaise(keystone_exc.Unauthorized(401)) self.m.ReplayAll() req = webob.Request.blank('/tenant_id1/') req.headers['X_AUTH_USER'] = 'user_name1' req.headers['X_AUTH_KEY'] = 'badpassword' req.headers['X_AUTH_URL'] = self.config['auth_uri'] req.headers['X_USER_DOMAIN_ID'] = 'domain1' self.middleware(req.environ, self._start_fake_response) self.m.VerifyAll() self.assertEqual(401, self.response_status) def test_request_with_no_tenant_in_url_or_auth_headers(self): req = webob.Request.blank('/') self.middleware(req.environ, self._start_fake_response) self.assertEqual(401, self.response_status) heat-10.0.2/heat/tests/test_common_context.py0000666000175000017500000004063413343562340021301 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from keystoneauth1 import loading as ks_loading import mock from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_middleware import request_id from oslo_policy import opts as policy_opts from oslo_utils import importutils import webob from heat.common import context from heat.common import exception from heat.tests import common policy_path = os.path.dirname(os.path.realpath(__file__)) + "/policy/" class TestRequestContext(common.HeatTestCase): def setUp(self): self.ctx = {'username': 'mick', 'trustor_user_id': None, 'auth_token': '123', 'auth_token_info': {'123info': 'woop'}, 'is_admin': False, 'user': 'mick', 'password': 'foo', 'trust_id': None, 'global_request_id': None, 'show_deleted': False, 'roles': ['arole', 'notadmin'], 'tenant_id': '456tenant', 'user_id': 'fooUser', 'tenant': u'\u5218\u80dc', 'auth_url': 'http://xyz', 'aws_creds': 'blah', 'region_name': 'RegionOne', 'user_identity': 'fooUser 456tenant', 'user_domain': None, 'project_domain': None} super(TestRequestContext, self).setUp() def test_request_context_init(self): ctx = context.RequestContext( auth_token=self.ctx.get('auth_token'), username=self.ctx.get('username'), password=self.ctx.get('password'), aws_creds=self.ctx.get('aws_creds'), project_name=self.ctx.get('tenant'), tenant=self.ctx.get('tenant_id'), user=self.ctx.get('user_id'), auth_url=self.ctx.get('auth_url'), roles=self.ctx.get('roles'), show_deleted=self.ctx.get('show_deleted'), is_admin=self.ctx.get('is_admin'), auth_token_info=self.ctx.get('auth_token_info'), trustor_user_id=self.ctx.get('trustor_user_id'), trust_id=self.ctx.get('trust_id'), region_name=self.ctx.get('region_name'), user_domain_id=self.ctx.get('user_domain'), project_domain_id=self.ctx.get('project_domain')) ctx_dict = ctx.to_dict() del(ctx_dict['request_id']) self.assertEqual(self.ctx, ctx_dict) def 
test_request_context_to_dict_unicode(self): ctx_origin = {'username': 'mick', 'trustor_user_id': None, 'auth_token': '123', 'auth_token_info': {'123info': 'woop'}, 'is_admin': False, 'user': 'mick', 'password': 'foo', 'trust_id': None, 'global_request_id': None, 'show_deleted': False, 'roles': ['arole', 'notadmin'], 'tenant_id': '456tenant', 'user_id': u'Gāo', 'tenant': u'\u5218\u80dc', 'auth_url': 'http://xyz', 'aws_creds': 'blah', 'region_name': 'RegionOne', 'user_identity': u'Gāo 456tenant', 'user_domain': None, 'project_domain': None} ctx = context.RequestContext( auth_token=ctx_origin.get('auth_token'), username=ctx_origin.get('username'), password=ctx_origin.get('password'), aws_creds=ctx_origin.get('aws_creds'), project_name=ctx_origin.get('tenant'), tenant=ctx_origin.get('tenant_id'), user=ctx_origin.get('user_id'), auth_url=ctx_origin.get('auth_url'), roles=ctx_origin.get('roles'), show_deleted=ctx_origin.get('show_deleted'), is_admin=ctx_origin.get('is_admin'), auth_token_info=ctx_origin.get('auth_token_info'), trustor_user_id=ctx_origin.get('trustor_user_id'), trust_id=ctx_origin.get('trust_id'), region_name=ctx_origin.get('region_name'), user_domain_id=ctx_origin.get('user_domain'), project_domain_id=ctx_origin.get('project_domain')) ctx_dict = ctx.to_dict() del(ctx_dict['request_id']) self.assertEqual(ctx_origin, ctx_dict) def test_request_context_from_dict(self): ctx = context.RequestContext.from_dict(self.ctx) ctx_dict = ctx.to_dict() del(ctx_dict['request_id']) self.assertEqual(self.ctx, ctx_dict) def test_request_context_update(self): ctx = context.RequestContext.from_dict(self.ctx) for k in self.ctx: if (k == 'user_identity' or k == 'user_domain_id' or k == 'project_domain_id'): continue # these values are different between attribute and context if k == 'tenant' or k == 'user': continue self.assertEqual(self.ctx.get(k), ctx.to_dict().get(k)) override = '%s_override' % k setattr(ctx, k, override) self.assertEqual(override, ctx.to_dict().get(k)) def test_get_admin_context(self): ctx = context.get_admin_context() self.assertTrue(ctx.is_admin) self.assertFalse(ctx.show_deleted) def test_get_admin_context_show_deleted(self): ctx = context.get_admin_context(show_deleted=True) self.assertTrue(ctx.is_admin) self.assertTrue(ctx.show_deleted) def test_admin_context_policy_true(self): policy_check = 'heat.common.policy.Enforcer.check_is_admin' with mock.patch(policy_check) as pc: pc.return_value = True ctx = context.RequestContext(roles=['admin']) self.assertTrue(ctx.is_admin) def test_admin_context_policy_false(self): policy_check = 'heat.common.policy.Enforcer.check_is_admin' with mock.patch(policy_check) as pc: pc.return_value = False ctx = context.RequestContext(roles=['notadmin']) self.assertFalse(ctx.is_admin) def test_keystone_v3_endpoint_in_context(self): """Ensure that the context is the preferred source for the auth_uri.""" cfg.CONF.set_override('auth_uri', 'http://xyz', group='clients_keystone') policy_check = 'heat.common.policy.Enforcer.check_is_admin' with mock.patch(policy_check) as pc: pc.return_value = False ctx = context.RequestContext( auth_url='http://example.com:5000/v2.0') self.assertEqual(ctx.keystone_v3_endpoint, 'http://example.com:5000/v3') def test_keystone_v3_endpoint_in_clients_keystone_config(self): """Ensure that the [clients_keystone] section is the preferred source. Ensure that the [clients_keystone] section of the configuration is the preferred source when the context does not have the auth_uri. 
""" cfg.CONF.set_override('auth_uri', 'http://xyz', group='clients_keystone') policy_check = 'heat.common.policy.Enforcer.check_is_admin' with mock.patch(policy_check) as pc: pc.return_value = False with mock.patch('keystoneauth1.discover.Discover') as discover: class MockDiscover(object): def url_for(self, endpoint): return 'http://xyz/v3' discover.return_value = MockDiscover() ctx = context.RequestContext(auth_url=None) self.assertEqual(ctx.keystone_v3_endpoint, 'http://xyz/v3') def test_keystone_v3_endpoint_in_keystone_authtoken_config(self): """Ensure that the [keystone_authtoken] section is used. Ensure that the [keystone_authtoken] section of the configuration is used when the auth_uri is not defined in the context or the [clients_keystone] section. """ importutils.import_module('keystonemiddleware.auth_token') cfg.CONF.set_override('auth_uri', 'http://abc/v2.0', group='keystone_authtoken') policy_check = 'heat.common.policy.Enforcer.check_is_admin' with mock.patch(policy_check) as pc: pc.return_value = False ctx = context.RequestContext(auth_url=None) self.assertEqual(ctx.keystone_v3_endpoint, 'http://abc/v3') def test_keystone_v3_endpoint_not_set_in_config(self): """Ensure an exception is raised when the auth_uri cannot be obtained. Ensure an exception is raised when the auth_uri cannot be obtained from any source. """ policy_check = 'heat.common.policy.Enforcer.check_is_admin' with mock.patch(policy_check) as pc: pc.return_value = False ctx = context.RequestContext(auth_url=None) self.assertRaises(exception.AuthorizationFailure, getattr, ctx, 'keystone_v3_endpoint') def test_get_trust_context_auth_plugin_unauthorized(self): self.ctx['trust_id'] = 'trust_id' ctx = context.RequestContext.from_dict(self.ctx) self.patchobject(ks_loading, 'load_auth_from_conf_options', return_value=None) self.assertRaises(exception.AuthorizationFailure, getattr, ctx, 'auth_plugin') def test_cache(self): ctx = context.RequestContext.from_dict(self.ctx) class Class1(object): pass class Class2(object): pass self.assertEqual(0, len(ctx._object_cache)) cache1 = ctx.cache(Class1) self.assertIsInstance(cache1, Class1) self.assertEqual(1, len(ctx._object_cache)) cache1a = ctx.cache(Class1) self.assertEqual(cache1, cache1a) self.assertEqual(1, len(ctx._object_cache)) cache2 = ctx.cache(Class2) self.assertIsInstance(cache2, Class2) self.assertEqual(2, len(ctx._object_cache)) class RequestContextMiddlewareTest(common.HeatTestCase): scenarios = [( 'empty_headers', dict( environ=None, headers={}, context_dict={ 'auth_token': None, 'auth_token_info': None, 'auth_url': None, 'aws_creds': None, 'is_admin': False, 'password': None, 'roles': [], 'show_deleted': False, 'tenant': None, 'tenant_id': None, 'trust_id': None, 'trustor_user_id': None, 'user': None, 'user_id': None, 'username': None }) ), ( 'username_password', dict( environ=None, headers={ 'X-Auth-User': 'my_username', 'X-Auth-Key': 'my_password', 'X-Auth-EC2-Creds': '{"ec2Credentials": {}}', 'X-User-Id': '7a87ff18-31c6-45ce-a186-ec7987f488c3', 'X-Auth-Token': 'atoken', 'X-Project-Name': 'my_tenant', 'X-Project-Id': 'db6808c8-62d0-4d92-898c-d644a6af20e9', 'X-Auth-Url': 'http://192.0.2.1:5000/v1', 'X-Roles': 'role1,role2,role3' }, context_dict={ 'auth_token': 'atoken', 'auth_url': 'http://192.0.2.1:5000/v1', 'aws_creds': None, 'is_admin': False, 'password': 'my_password', 'roles': ['role1', 'role2', 'role3'], 'show_deleted': False, 'tenant': 'my_tenant', 'tenant_id': 'db6808c8-62d0-4d92-898c-d644a6af20e9', 'trust_id': None, 'trustor_user_id': None, 'user': 
'my_username', 'user_id': '7a87ff18-31c6-45ce-a186-ec7987f488c3', 'username': 'my_username' }) ), ( 'aws_creds', dict( environ=None, headers={ 'X-Auth-EC2-Creds': '{"ec2Credentials": {}}', 'X-User-Id': '7a87ff18-31c6-45ce-a186-ec7987f488c3', 'X-Auth-Token': 'atoken', 'X-Project-Name': 'my_tenant', 'X-Project-Id': 'db6808c8-62d0-4d92-898c-d644a6af20e9', 'X-Auth-Url': 'http://192.0.2.1:5000/v1', 'X-Roles': 'role1,role2,role3', }, context_dict={ 'auth_token': 'atoken', 'auth_url': 'http://192.0.2.1:5000/v1', 'aws_creds': '{"ec2Credentials": {}}', 'is_admin': False, 'password': None, 'roles': ['role1', 'role2', 'role3'], 'show_deleted': False, 'tenant': 'my_tenant', 'tenant_id': 'db6808c8-62d0-4d92-898c-d644a6af20e9', 'trust_id': None, 'trustor_user_id': None, 'user': None, 'user_id': '7a87ff18-31c6-45ce-a186-ec7987f488c3', 'username': None }) ), ( 'token_creds', dict( environ={'keystone.token_info': {'info': 123}}, headers={ 'X-User-Id': '7a87ff18-31c6-45ce-a186-ec7987f488c3', 'X-Auth-Token': 'atoken2', 'X-Project-Name': 'my_tenant2', 'X-Project-Id': 'bb9108c8-62d0-4d92-898c-d644a6af20e9', 'X-Auth-Url': 'http://192.0.2.1:5000/v1', 'X-Roles': 'role1,role2,role3', }, context_dict={ 'auth_token': 'atoken2', 'auth_token_info': {'info': 123}, 'auth_url': 'http://192.0.2.1:5000/v1', 'aws_creds': None, 'is_admin': False, 'password': None, 'roles': ['role1', 'role2', 'role3'], 'show_deleted': False, 'tenant': 'my_tenant2', 'tenant_id': 'bb9108c8-62d0-4d92-898c-d644a6af20e9', 'trust_id': None, 'trustor_user_id': None, 'user': None, 'user_id': '7a87ff18-31c6-45ce-a186-ec7987f488c3', 'username': None }) )] def setUp(self): super(RequestContextMiddlewareTest, self).setUp() self.fixture = self.useFixture(config_fixture.Config()) self.fixture.conf(args=['--config-dir', policy_path]) policy_opts.set_defaults(cfg.CONF, 'check_admin.json') def test_context_middleware(self): middleware = context.ContextMiddleware(None, None) request = webob.Request.blank('/stacks', headers=self.headers, environ=self.environ) self.assertIsNone(middleware.process_request(request)) ctx = request.context.to_dict() for k, v in self.context_dict.items(): self.assertEqual(v, ctx[k], 'Key %s values do not match' % k) self.assertIsNotNone(ctx.get('request_id')) def test_context_middleware_with_requestid(self): middleware = context.ContextMiddleware(None, None) request = webob.Request.blank('/stacks', headers=self.headers, environ=self.environ) req_id = 'req-5a63f0d7-1b69-447b-b621-4ea87cc7186d' request.environ[request_id.ENV_REQUEST_ID] = req_id self.assertIsNone(middleware.process_request(request)) ctx = request.context.to_dict() for k, v in self.context_dict.items(): self.assertEqual(v, ctx[k], 'Key %s values do not match' % k) self.assertEqual( ctx.get('request_id'), req_id, 'Key request_id values do not match') heat-10.0.2/heat/tests/test_notifications.py0000666000175000017500000001244413343562340021114 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
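# The RequestContextMiddlewareTest scenarios above pin down how the WSGI
# "X-*" auth headers map onto RequestContext fields. A minimal sketch of
# that mapping follows; `headers_to_context_dict` is a hypothetical helper
# written for illustration, not a function in heat.common.context.

def headers_to_context_dict(headers):
    """Map auth headers to the context fields the scenarios assert."""
    username = headers.get('X-Auth-User')
    password = headers.get('X-Auth-Key')
    # Per the 'username_password' scenario, EC2 credentials are only kept
    # when no username/password pair is supplied (assumed precedence rule).
    aws_creds = None
    if not (username and password):
        aws_creds = headers.get('X-Auth-EC2-Creds')
    roles = headers.get('X-Roles', '')
    return {
        'auth_token': headers.get('X-Auth-Token'),
        'auth_url': headers.get('X-Auth-Url'),
        'username': username,
        'password': password,
        'aws_creds': aws_creds,
        'tenant': headers.get('X-Project-Name'),
        'tenant_id': headers.get('X-Project-Id'),
        'user_id': headers.get('X-User-Id'),
        # 'role1,role2,role3' -> ['role1', 'role2', 'role3']
        'roles': [r for r in roles.split(',') if r],
    }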
import mock from oslo_utils import timeutils from heat.common import timeutils as heat_timeutils from heat.engine import notification from heat.tests import common from heat.tests import utils class StackTest(common.HeatTestCase): def setUp(self): super(StackTest, self).setUp() self.ctx = utils.dummy_context(user_id='test_user_id') def test_send(self): created_time = timeutils.utcnow() st = mock.Mock() st.state = ('x', 'f') st.status = st.state[0] st.action = st.state[1] st.name = 'fred' st.status_reason = 'this is why' st.created_time = created_time st.context = self.ctx st.id = 'hay-are-en' updated_time = timeutils.utcnow() st.updated_time = updated_time st.tags = ['tag1', 'tag2'] st.t = mock.MagicMock() st.t.__getitem__.return_value = 'for test' st.t.DESCRIPTION = 'description' notify = self.patchobject(notification, 'notify') notification.stack.send(st) notify.assert_called_once_with( self.ctx, 'stack.f.error', 'ERROR', {'state_reason': 'this is why', 'user_id': 'test_username', 'username': 'test_username', 'user_identity': 'test_user_id', 'stack_identity': 'hay-are-en', 'stack_name': 'fred', 'tenant_id': 'test_tenant_id', 'create_at': heat_timeutils.isotime(created_time), 'state': 'x_f', 'description': 'for test', 'tags': ['tag1', 'tag2'], 'updated_at': heat_timeutils.isotime(updated_time)}) class AutoScaleTest(common.HeatTestCase): def setUp(self): super(AutoScaleTest, self).setUp() self.ctx = utils.dummy_context(user_id='test_user_id') def _mock_stack(self): created_time = timeutils.utcnow() st = mock.Mock() st.state = ('x', 'f') st.status = st.state[0] st.action = st.state[1] st.name = 'fred' st.status_reason = 'this is why' st.created_time = created_time st.context = self.ctx st.id = 'hay-are-en' updated_time = timeutils.utcnow() st.updated_time = updated_time st.tags = ['tag1', 'tag2'] st.t = mock.MagicMock() st.t.__getitem__.return_value = 'for test' st.t.DESCRIPTION = 'description' return st def test_send(self): stack = self._mock_stack() notify = self.patchobject(notification, 'notify') notification.autoscaling.send(stack, adjustment='x', adjustment_type='y', capacity='5', groupname='c', message='fred', suffix='the-end') notify.assert_called_once_with( self.ctx, 'autoscaling.the-end', 'INFO', {'state_reason': 'this is why', 'user_id': 'test_username', 'username': 'test_username', 'user_identity': 'test_user_id', 'stack_identity': 'hay-are-en', 'stack_name': 'fred', 'tenant_id': 'test_tenant_id', 'create_at': heat_timeutils.isotime(stack.created_time), 'description': 'for test', 'tags': ['tag1', 'tag2'], 'updated_at': heat_timeutils.isotime(stack.updated_time), 'state': 'x_f', 'adjustment_type': 'y', 'groupname': 'c', 'capacity': '5', 'message': 'fred', 'adjustment': 'x'}) def test_send_error(self): stack = self._mock_stack() notify = self.patchobject(notification, 'notify') notification.autoscaling.send(stack, adjustment='x', adjustment_type='y', capacity='5', groupname='c', suffix='error') notify.assert_called_once_with( self.ctx, 'autoscaling.error', 'ERROR', {'state_reason': 'this is why', 'user_id': 'test_username', 'username': 'test_username', 'user_identity': 'test_user_id', 'stack_identity': 'hay-are-en', 'stack_name': 'fred', 'tenant_id': 'test_tenant_id', 'create_at': heat_timeutils.isotime(stack.created_time), 'description': 'for test', 'tags': ['tag1', 'tag2'], 'updated_at': heat_timeutils.isotime(stack.updated_time), 'state': 'x_f', 'adjustment_type': 'y', 'groupname': 'c', 'capacity': '5', 'message': 'error', 'adjustment': 'x'}) 
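# The notification tests above all follow one pattern: patch the notifier,
# trigger the code under test, then pin the exact payload down with
# assert_called_once_with. A minimal, self-contained sketch of that pattern
# using plain unittest.mock is shown below; `emit` and
# `demo_payload_assertion` are invented names for illustration, not Heat code.

import unittest.mock as sketch_mock


def emit(notifier, level, payload):
    # Stand-in for code under test that fires a notification.
    notifier.notify(level, payload)


def demo_payload_assertion():
    notifier = sketch_mock.Mock()
    emit(notifier, 'INFO', {'stack_name': 'fred', 'state': 'x_f'})
    # Spelling out the whole dict makes the test fail loudly if the payload
    # drifts by even one key -- which is why the tests above do the same.
    notifier.notify.assert_called_once_with(
        'INFO', {'stack_name': 'fred', 'state': 'x_f'})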
heat-10.0.2/heat/tests/test_rpc_worker_client.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock

from heat.rpc import worker_api as rpc_api
from heat.rpc import worker_client as rpc_client
from heat.tests import common


class WorkerClientTest(common.HeatTestCase):

    def setUp(self):
        super(WorkerClientTest, self).setUp()
        self.fake_engine_id = 'fake-engine-id'

    def test_make_msg(self):
        method = 'sample_method'
        kwargs = {'a': '1', 'b': '2'}
        result = method, kwargs
        self.assertEqual(
            result, rpc_client.WorkerClient.make_msg(method, **kwargs))

    @mock.patch('heat.common.messaging.get_rpc_client',
                return_value=mock.Mock())
    def test_cast(self, rpc_client_method):
        # Mock the rpc client
        mock_rpc_client = rpc_client_method.return_value
        # Create the WorkerClient
        worker_client = rpc_client.WorkerClient()
        rpc_client_method.assert_called_once_with(
            version=rpc_client.WorkerClient.BASE_RPC_API_VERSION,
            topic=rpc_api.TOPIC)
        self.assertEqual(mock_rpc_client, worker_client._client,
                         "Failed to create RPC client")
        # Check cast in default version
        mock_cnxt = mock.Mock()
        method = 'sample_method'
        kwargs = {'a': '1', 'b': '2'}
        msg = method, kwargs
        # go with default version
        return_value = worker_client.cast(mock_cnxt, msg)
        self.assertIsNone(return_value)
        mock_rpc_client.cast.assert_called_with(mock_cnxt, method, **kwargs)
        # Check cast in given version
        version = '1.3'
        return_value = worker_client.cast(mock_cnxt, msg, version)
        self.assertIsNone(return_value)
        mock_rpc_client.prepare.assert_called_once_with(version=version)
        mock_rpc_client.cast.assert_called_once_with(mock_cnxt, method,
                                                     **kwargs)

    def test_cancel_check_resource(self):
        mock_stack_id = 'dummy-stack-id'
        mock_cnxt = mock.Mock()
        method = 'cancel_check_resource'
        kwargs = {'stack_id': mock_stack_id}
        mock_rpc_client = mock.MagicMock()
        mock_cast = mock.MagicMock()
        with mock.patch('heat.common.messaging.get_rpc_client') as mock_grc:
            mock_grc.return_value = mock_rpc_client
            mock_rpc_client.prepare.return_value = mock_cast
            wc = rpc_client.WorkerClient()
            ret_val = wc.cancel_check_resource(mock_cnxt, mock_stack_id,
                                               self.fake_engine_id)
            # ensure the client is requested for the target engine
            # (server=engine_id)
            mock_grc.assert_called_with(
                version=wc.BASE_RPC_API_VERSION,
                topic=rpc_api.TOPIC,
                server=self.fake_engine_id)
            self.assertIsNone(ret_val)
            mock_rpc_client.prepare.assert_called_with(
                version='1.3')
            # ensure correct rpc method is called
            mock_cast.cast.assert_called_with(mock_cnxt, method, **kwargs)

heat-10.0.2/heat/tests/test_noauth.py

#
# Copyright (C) 2016, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import six
import webob

from heat.common import noauth
from heat.tests import common

EXPECTED_ENV_RESPONSE = {
    'HTTP_X_IDENTITY_STATUS': 'Confirmed',
    'HTTP_X_PROJECT_ID': 'admin',
    'HTTP_X_PROJECT_NAME': 'admin',
    'HTTP_X_USER_ID': 'admin',
    'HTTP_X_USER_NAME': 'admin',
    'HTTP_X_ROLES': 'admin',
    'HTTP_X_SERVICE_CATALOG': {},
    'HTTP_X_AUTH_USER': 'admin',
    'HTTP_X_AUTH_KEY': 'unset',
}


class FakeApp(object):
    """This represents a WSGI app protected by our auth middleware."""

    def __init__(self, expected_env=None):
        expected_env = expected_env or {}
        self.expected_env = dict(EXPECTED_ENV_RESPONSE)
        self.expected_env.update(expected_env)

    def __call__(self, env, start_response):
        """Assert that expected environment is present when finally called."""
        for k, v in self.expected_env.items():
            assert env[k] == v, '%s != %s' % (env[k], v)
        resp = webob.Response()
        resp.body = six.b('SUCCESS')
        return resp(env, start_response)


class KeystonePasswordAuthProtocolTest(common.HeatTestCase):

    def setUp(self):
        super(KeystonePasswordAuthProtocolTest, self).setUp()
        self.config = {'auth_uri': 'http://keystone.test.com:5000'}
        self.app = FakeApp()
        self.middleware = noauth.NoAuthProtocol(
            self.app, self.config)

    def _start_fake_response(self, status, headers):
        self.response_status = int(status.split(' ', 1)[0])
        self.response_headers = dict(headers)

    def test_request_with_bad_credentials(self):
        req = webob.Request.blank('/tenant_id1/')
        req.headers['X_AUTH_USER'] = 'admin'
        req.headers['X_AUTH_KEY'] = 'blah'
        req.headers['X_AUTH_URL'] = self.config['auth_uri']
        self.middleware(req.environ, self._start_fake_response)
        self.assertEqual(200, self.response_status)

    def test_request_with_no_tenant_in_url_or_auth_headers(self):
        req = webob.Request.blank('/')
        self.middleware(req.environ, self._start_fake_response)
        self.assertEqual(200, self.response_status)
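# The tests above expect every request -- good credentials, bad credentials,
# or none at all -- to come back 200 with a fixed admin identity. A minimal
# sketch of such a pass-through middleware is given here for orientation;
# it is illustrative only and does not reproduce heat.common.noauth exactly.

class SketchNoAuthProtocol(object):
    """WSGI middleware that stamps a fixed admin identity on every request."""

    def __init__(self, app, conf):
        self.app = app
        self.conf = conf

    def __call__(self, env, start_response):
        # Whatever credentials arrive, confirm the request as 'admin' before
        # handing it to the protected application (sketch behaviour only).
        env.update({
            'HTTP_X_IDENTITY_STATUS': 'Confirmed',
            'HTTP_X_PROJECT_ID': 'admin',
            'HTTP_X_PROJECT_NAME': 'admin',
            'HTTP_X_USER_ID': 'admin',
            'HTTP_X_USER_NAME': 'admin',
            'HTTP_X_ROLES': 'admin',
        })
        return self.app(env, start_response)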
heat-10.0.2/heat/tests/test_common_service_utils.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import datetime
from oslo_utils import timeutils
import uuid

from heat.common import service_utils
from heat.db.sqlalchemy import models
from heat.tests import common


class TestServiceUtils(common.HeatTestCase):

    def test_status_check(self):
        service = models.Service()
        service.id = str(uuid.uuid4())
        service.engine_id = str(uuid.uuid4())
        service.binary = 'heat-engine'
        service.hostname = 'host.devstack.org'
        service.host = 'engine-1'
        service.report_interval = 60
        service.topic = 'engine'
        service.created_at = timeutils.utcnow()
        service.deleted_at = None
        service.updated_at = None

        service_dict = service_utils.format_service(service)
        self.assertEqual(service_dict['id'], service.id)
        self.assertEqual(service_dict['engine_id'], service.engine_id)
        self.assertEqual(service_dict['host'], service.host)
        self.assertEqual(service_dict['hostname'], service.hostname)
        self.assertEqual(service_dict['binary'], service.binary)
        self.assertEqual(service_dict['topic'], service.topic)
        self.assertEqual(service_dict['report_interval'],
                         service.report_interval)
        self.assertEqual(service_dict['created_at'], service.created_at)
        self.assertEqual(service_dict['updated_at'], service.updated_at)
        self.assertEqual(service_dict['deleted_at'], service.deleted_at)
        self.assertEqual(service_dict['status'], 'up')

        # check again within the first report_interval (60s): still 'up'
        service_dict = service_utils.format_service(service)
        self.assertEqual(service_dict['status'], 'up')

        # no update has happened and created_at is older than
        # report_interval (60s+): 'down'
        service.created_at = (timeutils.utcnow() -
                              datetime.timedelta(0, 70))
        service_dict = service_utils.format_service(service)
        self.assertEqual(service_dict['status'], 'down')

        # last update is older than report_interval (60s+): 'down'
        service.updated_at = (timeutils.utcnow() -
                              datetime.timedelta(0, 70))
        service_dict = service_utils.format_service(service)
        self.assertEqual(service_dict['status'], 'down')

        # last update is within report_interval (60s): 'up'
        service.updated_at = (timeutils.utcnow() -
                              datetime.timedelta(0, 50))
        service_dict = service_utils.format_service(service)
        self.assertEqual(service_dict['status'], 'up')

heat-10.0.2/heat/tests/test_environment.py

#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
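# test_status_check above pins down the liveness rule behind format_service:
# a service is 'up' while its last heartbeat is within report_interval
# seconds, else 'down'. The sketch below restates that rule under the
# behaviour the assertions describe; `sketch_service_status` is a
# hypothetical helper, and the exact boundary comparison is an assumption.

import datetime as _dt


def sketch_service_status(created_at, updated_at, report_interval, now):
    # updated_at is the heartbeat; fall back to created_at for a service
    # that has never reported in.
    last_seen = updated_at or created_at
    age = now - last_seen
    return 'up' if age <= _dt.timedelta(seconds=report_interval) else 'down'


# e.g. with report_interval=60, a service created 70s ago that never
# reported in is 'down', matching the second check in test_status_check.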
import os.path import sys import fixtures import mock from oslo_config import cfg import six from heat.common import environment_format from heat.common import exception from heat.engine import environment from heat.engine import resources from heat.engine.resources.aws.ec2 import instance from heat.engine.resources.openstack.nova import server from heat.engine import support from heat.tests import common from heat.tests import generic_resource from heat.tests import utils cfg.CONF.import_opt('environment_dir', 'heat.common.config') class EnvironmentTest(common.HeatTestCase): def setUp(self): super(EnvironmentTest, self).setUp() self.g_env = resources.global_env() def test_load_old_parameters(self): old = {u'a': u'ff', u'b': u'ss'} expected = {u'parameters': old, u'encrypted_param_names': [], u'parameter_defaults': {}, u'event_sinks': [], u'resource_registry': {u'resources': {}}} env = environment.Environment(old) self.assertEqual(expected, env.env_as_dict()) del(expected['encrypted_param_names']) self.assertEqual(expected, env.user_env_as_dict()) def test_load_new_env(self): new_env = {u'parameters': {u'a': u'ff', u'b': u'ss'}, u'encrypted_param_names': [], u'parameter_defaults': {u'ff': 'new_def'}, u'event_sinks': [], u'resource_registry': {u'OS::Food': u'fruity.yaml', u'resources': {}}} env = environment.Environment(new_env) self.assertEqual(new_env, env.env_as_dict()) del(new_env['encrypted_param_names']) self.assertEqual(new_env, env.user_env_as_dict()) def test_global_registry(self): self.g_env.register_class('CloudX::Nova::Server', generic_resource.GenericResource) new_env = {u'parameters': {u'a': u'ff', u'b': u'ss'}, u'resource_registry': {u'OS::*': 'CloudX::*'}} env = environment.Environment(new_env) self.assertEqual('CloudX::Nova::Server', env.get_resource_info('OS::Nova::Server', 'my_db_server').name) def test_global_registry_many_to_one(self): new_env = {u'parameters': {u'a': u'ff', u'b': u'ss'}, u'resource_registry': {u'OS::Nova::*': 'OS::Heat::None'}} env = environment.Environment(new_env) self.assertEqual('OS::Heat::None', env.get_resource_info('OS::Nova::Server', 'my_db_server').name) def test_global_registry_many_to_one_no_recurse(self): new_env = {u'parameters': {u'a': u'ff', u'b': u'ss'}, u'resource_registry': {u'OS::*': 'OS::Heat::None'}} env = environment.Environment(new_env) self.assertEqual('OS::Heat::None', env.get_resource_info('OS::Some::Name', 'my_db_server').name) def test_map_one_resource_type(self): new_env = {u'parameters': {u'a': u'ff', u'b': u'ss'}, u'resource_registry': {u'resources': {u'my_db_server': {u'OS::DBInstance': 'db.yaml'}}}} env = environment.Environment(new_env) info = env.get_resource_info('OS::DBInstance', 'my_db_server') self.assertEqual('db.yaml', info.value) def test_map_all_resources_of_type(self): self.g_env.register_class('OS::Nova::FloatingIP', generic_resource.GenericResource) new_env = {u'parameters': {u'a': u'ff', u'b': u'ss'}, u'resource_registry': {u'OS::Networking::FloatingIP': 'OS::Nova::FloatingIP', u'OS::Loadbalancer': 'lb.yaml'}} env = environment.Environment(new_env) self.assertEqual('OS::Nova::FloatingIP', env.get_resource_info('OS::Networking::FloatingIP', 'my_fip').name) def test_resource_sort_order_len(self): new_env = {u'resource_registry': {u'resources': {u'my_fip': { u'OS::Networking::FloatingIP': 'ip.yaml'}}}, u'OS::Networking::FloatingIP': 'OS::Nova::FloatingIP'} env = environment.Environment(new_env) self.assertEqual('ip.yaml', env.get_resource_info('OS::Networking::FloatingIP', 'my_fip').value) def 
test_env_load(self): new_env = {u'resource_registry': {u'resources': {u'my_fip': { u'OS::Networking::FloatingIP': 'ip.yaml'}}}} env = environment.Environment() self.assertRaises(exception.EntityNotFound, env.get_resource_info, 'OS::Networking::FloatingIP', 'my_fip') env.load(new_env) self.assertEqual('ip.yaml', env.get_resource_info('OS::Networking::FloatingIP', 'my_fip').value) def test_register_with_path(self): yaml_env = ''' resource_registry: test::one: a.yaml resources: res_x: test::two: b.yaml ''' env = environment.Environment(environment_format.parse(yaml_env)) self.assertEqual('a.yaml', env.get_resource_info('test::one').value) self.assertEqual('b.yaml', env.get_resource_info('test::two', 'res_x').value) env2 = environment.Environment() env2.register_class('test::one', 'a.yaml', path=['test::one']) env2.register_class('test::two', 'b.yaml', path=['resources', 'res_x', 'test::two']) self.assertEqual(env.env_as_dict(), env2.env_as_dict()) def test_constraints(self): env = environment.Environment({}) first_constraint = object() second_constraint = object() env.register_constraint("constraint1", first_constraint) env.register_constraint("constraint2", second_constraint) self.assertIs(first_constraint, env.get_constraint("constraint1")) self.assertIs(second_constraint, env.get_constraint("constraint2")) self.assertIsNone(env.get_constraint("no_constraint")) def test_constraints_registry(self): constraint_content = ''' class MyConstraint(object): pass def constraint_mapping(): return {"constraint1": MyConstraint} ''' plugin_dir = self.useFixture(fixtures.TempDir()) plugin_file = os.path.join(plugin_dir.path, 'test.py') with open(plugin_file, 'w+') as ef: ef.write(constraint_content) self.addCleanup(sys.modules.pop, "heat.engine.plugins.test") cfg.CONF.set_override('plugin_dirs', plugin_dir.path) env = environment.Environment({}) resources._load_global_environment(env) self.assertEqual("MyConstraint", env.get_constraint("constraint1").__name__) self.assertIsNone(env.get_constraint("no_constraint")) def test_constraints_registry_error(self): constraint_content = ''' def constraint_mapping(): raise ValueError("oops") ''' plugin_dir = self.useFixture(fixtures.TempDir()) plugin_file = os.path.join(plugin_dir.path, 'test.py') with open(plugin_file, 'w+') as ef: ef.write(constraint_content) self.addCleanup(sys.modules.pop, "heat.engine.plugins.test") cfg.CONF.set_override('plugin_dirs', plugin_dir.path) env = environment.Environment({}) error = self.assertRaises(ValueError, resources._load_global_environment, env) self.assertEqual("oops", six.text_type(error)) def test_constraints_registry_stevedore(self): env = environment.Environment({}) resources._load_global_environment(env) self.assertEqual("FlavorConstraint", env.get_constraint("nova.flavor").__name__) self.assertIsNone(env.get_constraint("no_constraint")) def test_event_sinks(self): env = environment.Environment( {"event_sinks": [{"type": "zaqar-queue", "target": "myqueue"}]}) self.assertEqual([{"type": "zaqar-queue", "target": "myqueue"}], env.user_env_as_dict()["event_sinks"]) sinks = env.get_event_sinks() self.assertEqual(1, len(sinks)) self.assertEqual("myqueue", sinks[0]._target) def test_event_sinks_load(self): env = environment.Environment() self.assertEqual([], env.get_event_sinks()) env.load( {"event_sinks": [{"type": "zaqar-queue", "target": "myqueue"}]}) self.assertEqual([{"type": "zaqar-queue", "target": "myqueue"}], env.user_env_as_dict()["event_sinks"]) class EnvironmentDuplicateTest(common.HeatTestCase): scenarios = [ 
('same', dict(resource_type='test.yaml', expected_equal=True)), ('diff_temp', dict(resource_type='not.yaml', expected_equal=False)), ('diff_map', dict(resource_type='OS::Nova::Server', expected_equal=False)), ('diff_path', dict(resource_type='a/test.yaml', expected_equal=False)), ] def setUp(self): super(EnvironmentDuplicateTest, self).setUp(quieten_logging=False) def test_env_load(self): env_initial = {u'resource_registry': { u'OS::Test::Dummy': 'test.yaml'}} env = environment.Environment() env.load(env_initial) info = env.get_resource_info('OS::Test::Dummy', 'something') replace_log = 'Changing %s from %s to %s' % ('OS::Test::Dummy', 'test.yaml', self.resource_type) self.assertNotIn(replace_log, self.LOG.output) env_test = {u'resource_registry': { u'OS::Test::Dummy': self.resource_type}} env.load(env_test) if self.expected_equal: # should return exactly the same object. self.assertIs(info, env.get_resource_info('OS::Test::Dummy', 'my_fip')) self.assertNotIn(replace_log, self.LOG.output) else: self.assertIn(replace_log, self.LOG.output) self.assertNotEqual(info, env.get_resource_info('OS::Test::Dummy', 'my_fip')) def test_env_register_while_get_resource_info(self): env_test = {u'resource_registry': { u'OS::Test::Dummy': self.resource_type}} env = environment.Environment() env.load(env_test) env.get_resource_info('OS::Test::Dummy') self.assertEqual({'OS::Test::Dummy': self.resource_type, 'resources': {}}, env.user_env_as_dict().get( environment_format.RESOURCE_REGISTRY)) env_test = {u'resource_registry': { u'resources': {u'test': {u'OS::Test::Dummy': self.resource_type}}}} env.load(env_test) env.get_resource_info('OS::Test::Dummy') self.assertEqual({u'OS::Test::Dummy': self.resource_type, 'resources': {u'test': {u'OS::Test::Dummy': self.resource_type}}}, env.user_env_as_dict().get( environment_format.RESOURCE_REGISTRY)) class GlobalEnvLoadingTest(common.HeatTestCase): def test_happy_path(self): with mock.patch('glob.glob') as m_ldir: m_ldir.return_value = ['/etc_etc/heat/environment.d/a.yaml'] env_dir = '/etc_etc/heat/environment.d' env_content = '{"resource_registry": {}}' env = environment.Environment({}, user_env=False) with mock.patch('heat.engine.environment.open', mock.mock_open(read_data=env_content), create=True) as m_open: environment.read_global_environment(env, env_dir) m_ldir.assert_called_once_with(env_dir + '/*') m_open.assert_called_once_with('%s/a.yaml' % env_dir) def test_empty_env_dir(self): with mock.patch('glob.glob') as m_ldir: m_ldir.return_value = [] env_dir = '/etc_etc/heat/environment.d' env = environment.Environment({}, user_env=False) environment.read_global_environment(env, env_dir) m_ldir.assert_called_once_with(env_dir + '/*') def test_continue_on_ioerror(self): """Assert we get all files processed. Assert we get all files processed even if there are processing exceptions. Test uses IOError as side effect of mock open. 
""" with mock.patch('glob.glob') as m_ldir: m_ldir.return_value = ['/etc_etc/heat/environment.d/a.yaml', '/etc_etc/heat/environment.d/b.yaml'] env_dir = '/etc_etc/heat/environment.d' env_content = '{}' env = environment.Environment({}, user_env=False) with mock.patch('heat.engine.environment.open', mock.mock_open(read_data=env_content), create=True) as m_open: m_open.side_effect = IOError environment.read_global_environment(env, env_dir) m_ldir.assert_called_once_with(env_dir + '/*') expected = [mock.call('%s/a.yaml' % env_dir), mock.call('%s/b.yaml' % env_dir)] self.assertEqual(expected, m_open.call_args_list) def test_continue_on_parse_error(self): """Assert we get all files processed. Assert we get all files processed even if there are processing exceptions. Test checks case when env content is incorrect. """ with mock.patch('glob.glob') as m_ldir: m_ldir.return_value = ['/etc_etc/heat/environment.d/a.yaml', '/etc_etc/heat/environment.d/b.yaml'] env_dir = '/etc_etc/heat/environment.d' env_content = '{@$%#$%' env = environment.Environment({}, user_env=False) with mock.patch('heat.engine.environment.open', mock.mock_open(read_data=env_content), create=True) as m_open: environment.read_global_environment(env, env_dir) m_ldir.assert_called_once_with(env_dir + '/*') expected = [mock.call('%s/a.yaml' % env_dir), mock.call('%s/b.yaml' % env_dir)] self.assertEqual(expected, m_open.call_args_list) def test_env_resources_override_plugins(self): # assertion: any template resources in the global environment # should override the default plugins. # 1. set our own global test env # (with a template resource that shadows a plugin) g_env_content = ''' resource_registry: "OS::Nova::Server": "file:///not_really_here.yaml" ''' envdir = self.useFixture(fixtures.TempDir()) # envfile = os.path.join(envdir.path, 'test.yaml') with open(envfile, 'w+') as ef: ef.write(g_env_content) cfg.CONF.set_override('environment_dir', envdir.path) # 2. load global env g_env = environment.Environment({}, user_env=False) resources._load_global_environment(g_env) # 3. assert our resource is in place. self.assertEqual('file:///not_really_here.yaml', g_env.get_resource_info('OS::Nova::Server').value) def test_env_one_resource_disable(self): # prove we can disable a resource in the global environment g_env_content = ''' resource_registry: "OS::Nova::Server": ''' # 1. fake an environment file envdir = self.useFixture(fixtures.TempDir()) envfile = os.path.join(envdir.path, 'test.yaml') with open(envfile, 'w+') as ef: ef.write(g_env_content) cfg.CONF.set_override('environment_dir', envdir.path) # 2. load global env g_env = environment.Environment({}, user_env=False) resources._load_global_environment(g_env) # 3. assert our resource is in now gone. self.assertRaises(exception.EntityNotFound, g_env.get_resource_info, 'OS::Nova::Server') # 4. make sure we haven't removed something we shouldn't have self.assertEqual(instance.Instance, g_env.get_resource_info('AWS::EC2::Instance').value) def test_env_multi_resources_disable(self): # prove we can disable resources in the global environment g_env_content = ''' resource_registry: "AWS::*": ''' # 1. fake an environment file envdir = self.useFixture(fixtures.TempDir()) envfile = os.path.join(envdir.path, 'test.yaml') with open(envfile, 'w+') as ef: ef.write(g_env_content) cfg.CONF.set_override('environment_dir', envdir.path) # 2. load global env g_env = environment.Environment({}, user_env=False) resources._load_global_environment(g_env) # 3. assert our resources are now gone. 
self.assertRaises(exception.EntityNotFound, g_env.get_resource_info, 'AWS::EC2::Instance') # 4. make sure we haven't removed something we shouldn't have self.assertEqual(server.Server, g_env.get_resource_info('OS::Nova::Server').value) def test_env_user_cant_disable_sys_resource(self): # prove a user can't disable global resources from the user environment u_env_content = ''' resource_registry: "AWS::*": ''' # 1. load user env u_env = environment.Environment() u_env.load(environment_format.parse(u_env_content)) # 2. assert global resources are NOT gone. self.assertEqual( instance.Instance, u_env.get_resource_info('AWS::EC2::Instance').value) def test_env_ignore_files_starting_dot(self): # prove we can disable a resource in the global environment g_env_content = '' # 1. fake an environment file envdir = self.useFixture(fixtures.TempDir()) with open(os.path.join(envdir.path, 'a.yaml'), 'w+') as ef: ef.write(g_env_content) with open(os.path.join(envdir.path, '.test.yaml'), 'w+') as ef: ef.write(g_env_content) with open(os.path.join(envdir.path, 'b.yaml'), 'w+') as ef: ef.write(g_env_content) cfg.CONF.set_override('environment_dir', envdir.path) # 2. load global env g_env = environment.Environment({}, user_env=False) with mock.patch('heat.engine.environment.open', mock.mock_open(read_data=g_env_content), create=True) as m_open: resources._load_global_environment(g_env) # 3. assert that the file were ignored expected = [mock.call('%s/a.yaml' % envdir.path), mock.call('%s/b.yaml' % envdir.path)] call_list = m_open.call_args_list expected.sort() call_list.sort() self.assertEqual(expected, call_list) class ChildEnvTest(common.HeatTestCase): def test_params_flat(self): new_params = {'foo': 'bar', 'tester': 'Yes'} penv = environment.Environment() expected = {'parameters': new_params, 'encrypted_param_names': [], 'parameter_defaults': {}, 'event_sinks': [], 'resource_registry': {'resources': {}}} cenv = environment.get_child_environment(penv, new_params) self.assertEqual(expected, cenv.env_as_dict()) def test_params_normal(self): new_params = {'parameters': {'foo': 'bar', 'tester': 'Yes'}} penv = environment.Environment() expected = {'parameter_defaults': {}, 'encrypted_param_names': [], 'event_sinks': [], 'resource_registry': {'resources': {}}} expected.update(new_params) cenv = environment.get_child_environment(penv, new_params) self.assertEqual(expected, cenv.env_as_dict()) def test_params_parent_overwritten(self): new_params = {'parameters': {'foo': 'bar', 'tester': 'Yes'}} parent_params = {'parameters': {'gone': 'hopefully'}} penv = environment.Environment(env=parent_params) expected = {'parameter_defaults': {}, 'encrypted_param_names': [], 'event_sinks': [], 'resource_registry': {'resources': {}}} expected.update(new_params) cenv = environment.get_child_environment(penv, new_params) self.assertEqual(expected, cenv.env_as_dict()) def test_registry_merge_simple(self): env1 = {u'resource_registry': {u'OS::Food': u'fruity.yaml'}} env2 = {u'resource_registry': {u'OS::Fruit': u'apples.yaml'}} penv = environment.Environment(env=env1) cenv = environment.get_child_environment(penv, env2) rr = cenv.user_env_as_dict()['resource_registry'] self.assertIn('OS::Food', rr) self.assertIn('OS::Fruit', rr) def test_registry_merge_favor_child(self): env1 = {u'resource_registry': {u'OS::Food': u'carrots.yaml'}} env2 = {u'resource_registry': {u'OS::Food': u'apples.yaml'}} penv = environment.Environment(env=env1) cenv = environment.get_child_environment(penv, env2) res = cenv.get_resource_info('OS::Food') 
self.assertEqual('apples.yaml', res.value) def test_item_to_remove_simple(self): env = {u'resource_registry': {u'OS::Food': u'fruity.yaml'}} penv = environment.Environment(env) victim = penv.get_resource_info('OS::Food', resource_name='abc') self.assertIsNotNone(victim) cenv = environment.get_child_environment(penv, None, item_to_remove=victim) self.assertRaises(exception.EntityNotFound, cenv.get_resource_info, 'OS::Food', resource_name='abc') self.assertNotIn('OS::Food', cenv.user_env_as_dict()['resource_registry']) # make sure the parent env is unaffected innocent = penv.get_resource_info('OS::Food', resource_name='abc') self.assertIsNotNone(innocent) def test_item_to_remove_complex(self): env = {u'resource_registry': {u'OS::Food': u'fruity.yaml', u'resources': {u'abc': { u'OS::Food': u'nutty.yaml'}}}} penv = environment.Environment(env) # the victim we want is the most specific one. victim = penv.get_resource_info('OS::Food', resource_name='abc') self.assertEqual(['resources', 'abc', 'OS::Food'], victim.path) cenv = environment.get_child_environment(penv, None, item_to_remove=victim) res = cenv.get_resource_info('OS::Food', resource_name='abc') self.assertEqual(['OS::Food'], res.path) rr = cenv.user_env_as_dict()['resource_registry'] self.assertIn('OS::Food', rr) self.assertNotIn('OS::Food', rr['resources']['abc']) # make sure the parent env is unaffected innocent2 = penv.get_resource_info('OS::Food', resource_name='abc') self.assertEqual(['resources', 'abc', 'OS::Food'], innocent2.path) def test_item_to_remove_none(self): env = {u'resource_registry': {u'OS::Food': u'fruity.yaml'}} penv = environment.Environment(env) victim = penv.get_resource_info('OS::Food', resource_name='abc') self.assertIsNotNone(victim) cenv = environment.get_child_environment(penv, None) res = cenv.get_resource_info('OS::Food', resource_name='abc') self.assertIsNotNone(res) def test_drill_down_to_child_resource(self): env = { u'resource_registry': { u'OS::Food': u'fruity.yaml', u'resources': { u'a': { u'OS::Fruit': u'apples.yaml', u'hooks': 'pre-create', }, u'nested': { u'b': { u'OS::Fruit': u'carrots.yaml', }, u'nested_res': { u'hooks': 'pre-create', } } } } } penv = environment.Environment(env) cenv = environment.get_child_environment( penv, None, child_resource_name=u'nested') registry = cenv.user_env_as_dict()['resource_registry'] resources = registry['resources'] self.assertIn('nested_res', resources) self.assertIn('hooks', resources['nested_res']) self.assertIsNotNone( cenv.get_resource_info('OS::Food', resource_name='abc')) self.assertRaises(exception.EntityNotFound, cenv.get_resource_info, 'OS::Fruit', resource_name='a') res = cenv.get_resource_info('OS::Fruit', resource_name='b') self.assertIsNotNone(res) self.assertEqual(u'carrots.yaml', res.value) def test_drill_down_non_matching_wildcard(self): env = { u'resource_registry': { u'resources': { u'nested': { u'c': { u'OS::Fruit': u'carrots.yaml', u'hooks': 'pre-create', }, }, u'*_doesnt_match_nested': { u'nested_res': { u'hooks': 'pre-create', }, } } } } penv = environment.Environment(env) cenv = environment.get_child_environment( penv, None, child_resource_name=u'nested') registry = cenv.user_env_as_dict()['resource_registry'] resources = registry['resources'] self.assertIn('c', resources) self.assertNotIn('nested_res', resources) res = cenv.get_resource_info('OS::Fruit', resource_name='c') self.assertIsNotNone(res) self.assertEqual(u'carrots.yaml', res.value) def test_drill_down_matching_wildcard(self): env = { u'resource_registry': { u'resources': { 
u'nested': { u'c': { u'OS::Fruit': u'carrots.yaml', u'hooks': 'pre-create', }, }, u'nest*': { u'nested_res': { u'hooks': 'pre-create', }, } } } } penv = environment.Environment(env) cenv = environment.get_child_environment( penv, None, child_resource_name=u'nested') registry = cenv.user_env_as_dict()['resource_registry'] resources = registry['resources'] self.assertIn('c', resources) self.assertIn('nested_res', resources) res = cenv.get_resource_info('OS::Fruit', resource_name='c') self.assertIsNotNone(res) self.assertEqual(u'carrots.yaml', res.value) def test_drill_down_prefer_exact_match(self): env = { u'resource_registry': { u'resources': { u'*esource': { u'hooks': 'pre-create', }, u'res*': { u'hooks': 'pre-create', }, u'resource': { u'OS::Fruit': u'carrots.yaml', u'hooks': 'pre-update', }, u'resource*': { u'hooks': 'pre-create', }, u'*resource': { u'hooks': 'pre-create', }, u'*sour*': { u'hooks': 'pre-create', }, } } } penv = environment.Environment(env) cenv = environment.get_child_environment( penv, None, child_resource_name=u'resource') registry = cenv.user_env_as_dict()['resource_registry'] resources = registry['resources'] self.assertEqual(u'carrots.yaml', resources[u'OS::Fruit']) self.assertEqual('pre-update', resources[u'hooks']) class ResourceRegistryTest(common.HeatTestCase): def test_resources_load(self): resources = { u'pre_create': { u'OS::Fruit': u'apples.yaml', u'hooks': 'pre-create', }, u'pre_update': { u'hooks': 'pre-update', }, u'both': { u'hooks': ['pre-create', 'pre-update'], }, u'b': { u'OS::Food': u'fruity.yaml', }, u'nested': { u'res': { u'hooks': 'pre-create', }, }, } registry = environment.ResourceRegistry(None, {}) registry.load({'resources': resources}) self.assertIsNotNone(registry.get_resource_info( 'OS::Fruit', resource_name='pre_create')) self.assertIsNotNone(registry.get_resource_info( 'OS::Food', resource_name='b')) resources = registry.as_dict()['resources'] self.assertEqual('pre-create', resources['pre_create']['hooks']) self.assertEqual('pre-update', resources['pre_update']['hooks']) self.assertEqual(['pre-create', 'pre-update'], resources['both']['hooks']) self.assertEqual('pre-create', resources['nested']['res']['hooks']) def test_load_registry_invalid_hook_type(self): resources = { u'resources': { u'a': { u'hooks': 'invalid-type', } } } registry = environment.ResourceRegistry(None, {}) msg = ('Invalid hook type "invalid-type" for resource breakpoint, ' 'acceptable hook types are: (\'pre-create\', \'pre-update\', ' '\'pre-delete\', \'post-create\', \'post-update\', ' '\'post-delete\')') ex = self.assertRaises(exception.InvalidBreakPointHook, registry.load, {'resources': resources}) self.assertEqual(msg, six.text_type(ex)) def test_list_type_validation_invalid_support_status(self): registry = environment.ResourceRegistry(None, {}) ex = self.assertRaises(exception.Invalid, registry.get_types, support_status='junk') msg = ('Invalid support status and should be one of %s' % six.text_type(support.SUPPORT_STATUSES)) self.assertIn(msg, ex.message) def test_list_type_validation_valid_support_status(self): registry = environment.ResourceRegistry(None, {}) for status in support.SUPPORT_STATUSES: self.assertEqual([], registry.get_types(support_status=status)) def test_list_type_find_by_status(self): registry = resources.global_env().registry types = registry.get_types(support_status=support.UNSUPPORTED) self.assertIn('ResourceTypeUnSupportedLiberty', types) self.assertNotIn('GenericResourceType', types) def test_list_type_find_by_status_none(self): registry = 
resources.global_env().registry types = registry.get_types(support_status=None) self.assertIn('ResourceTypeUnSupportedLiberty', types) self.assertIn('GenericResourceType', types) def test_list_type_with_name(self): registry = resources.global_env().registry types = registry.get_types(type_name='ResourceType*') self.assertIn('ResourceTypeUnSupportedLiberty', types) self.assertNotIn('GenericResourceType', types) def test_list_type_with_name_none(self): registry = resources.global_env().registry types = registry.get_types(type_name=None) self.assertIn('ResourceTypeUnSupportedLiberty', types) self.assertIn('GenericResourceType', types) def test_list_type_with_is_available_exception(self): registry = resources.global_env().registry self.patchobject( generic_resource.GenericResource, 'is_service_available', side_effect=exception.ClientNotAvailable(client_name='generic')) types = registry.get_types(utils.dummy_context(), type_name='GenericResourceType') self.assertNotIn('GenericResourceType', types) def test_list_type_with_invalid_type_name(self): registry = resources.global_env().registry types = registry.get_types(type_name="r'[^\\+]'") self.assertEqual([], types) def test_list_type_with_version(self): registry = resources.global_env().registry types = registry.get_types(version='5.0.0') self.assertIn('ResourceTypeUnSupportedLiberty', types) self.assertNotIn('ResourceTypeSupportedKilo', types) def test_list_type_with_version_none(self): registry = resources.global_env().registry types = registry.get_types(version=None) self.assertIn('ResourceTypeUnSupportedLiberty', types) self.assertIn('ResourceTypeSupportedKilo', types) def test_list_type_with_version_invalid(self): registry = resources.global_env().registry types = registry.get_types(version='invalid') self.assertEqual([], types) class HookMatchTest(common.HeatTestCase): scenarios = [(hook_type, {'hook': hook_type}) for hook_type in environment.HOOK_TYPES] def test_plain_matches(self): other_hook = next(hook for hook in environment.HOOK_TYPES if hook != self.hook) resources = { u'a': { u'OS::Fruit': u'apples.yaml', u'hooks': [self.hook, other_hook] }, u'b': { u'OS::Food': u'fruity.yaml', }, u'nested': { u'res': { u'hooks': self.hook, }, }, } registry = environment.ResourceRegistry(None, {}) registry.load({ u'OS::Fruit': u'apples.yaml', 'resources': resources}) self.assertTrue(registry.matches_hook('a', self.hook)) self.assertFalse(registry.matches_hook('b', self.hook)) self.assertFalse(registry.matches_hook('OS::Fruit', self.hook)) self.assertFalse(registry.matches_hook('res', self.hook)) self.assertFalse(registry.matches_hook('unknown', self.hook)) def test_wildcard_matches(self): other_hook = next(hook for hook in environment.HOOK_TYPES if hook != self.hook) resources = { u'prefix_*': { u'hooks': self.hook }, u'*_suffix': { u'hooks': self.hook }, u'*': { u'hooks': other_hook }, } registry = environment.ResourceRegistry(None, {}) registry.load({'resources': resources}) self.assertTrue(registry.matches_hook('prefix_', self.hook)) self.assertTrue(registry.matches_hook('prefix_some', self.hook)) self.assertFalse(registry.matches_hook('some_prefix', self.hook)) self.assertTrue(registry.matches_hook('_suffix', self.hook)) self.assertTrue(registry.matches_hook('some_suffix', self.hook)) self.assertFalse(registry.matches_hook('_suffix_blah', self.hook)) self.assertTrue(registry.matches_hook('some_prefix', other_hook)) self.assertTrue(registry.matches_hook('_suffix_blah', other_hook)) def test_hook_types(self): resources = { u'hook': { u'hooks': 
self.hook }, u'not-hook': { u'hooks': [hook for hook in environment.HOOK_TYPES if hook != self.hook] }, u'all': { u'hooks': environment.HOOK_TYPES }, } registry = environment.ResourceRegistry(None, {}) registry.load({'resources': resources}) self.assertTrue(registry.matches_hook('hook', self.hook)) self.assertFalse(registry.matches_hook('not-hook', self.hook)) self.assertTrue(registry.matches_hook('all', self.hook)) class ActionRestrictedTest(common.HeatTestCase): def test_plain_matches(self): resources = { u'a': { u'OS::Fruit': u'apples.yaml', u'restricted_actions': [u'update', u'replace'], }, u'b': { u'OS::Food': u'fruity.yaml', }, u'nested': { u'res': { u'restricted_actions': 'update', }, }, } registry = environment.ResourceRegistry(None, {}) registry.load({ u'OS::Fruit': u'apples.yaml', 'resources': resources}) self.assertIn(environment.UPDATE, registry.get_rsrc_restricted_actions('a')) self.assertNotIn(environment.UPDATE, registry.get_rsrc_restricted_actions('b')) self.assertNotIn(environment.UPDATE, registry.get_rsrc_restricted_actions('OS::Fruit')) self.assertNotIn(environment.UPDATE, registry.get_rsrc_restricted_actions('res')) self.assertNotIn(environment.UPDATE, registry.get_rsrc_restricted_actions('unknown')) def test_wildcard_matches(self): resources = { u'prefix_*': { u'restricted_actions': 'update', }, u'*_suffix': { u'restricted_actions': 'update', }, u'*': { u'restricted_actions': 'replace', }, } registry = environment.ResourceRegistry(None, {}) registry.load({'resources': resources}) self.assertIn(environment.UPDATE, registry.get_rsrc_restricted_actions('prefix_')) self.assertIn(environment.UPDATE, registry.get_rsrc_restricted_actions('prefix_some')) self.assertNotIn(environment.UPDATE, registry.get_rsrc_restricted_actions('some_prefix')) self.assertIn(environment.UPDATE, registry.get_rsrc_restricted_actions('_suffix')) self.assertIn(environment.UPDATE, registry.get_rsrc_restricted_actions('some_suffix')) self.assertNotIn(environment.UPDATE, registry.get_rsrc_restricted_actions('_suffix_blah')) self.assertIn(environment.REPLACE, registry.get_rsrc_restricted_actions('some_prefix')) self.assertIn(environment.REPLACE, registry.get_rsrc_restricted_actions('_suffix_blah')) def test_restricted_action_types(self): resources = { u'update': { u'restricted_actions': 'update', }, u'replace': { u'restricted_actions': 'replace', }, u'all': { u'restricted_actions': ['update', 'replace'], }, } registry = environment.ResourceRegistry(None, {}) registry.load({'resources': resources}) self.assertIn(environment.UPDATE, registry.get_rsrc_restricted_actions('update')) self.assertNotIn(environment.UPDATE, registry.get_rsrc_restricted_actions('replace')) self.assertIn(environment.REPLACE, registry.get_rsrc_restricted_actions('replace')) self.assertNotIn(environment.REPLACE, registry.get_rsrc_restricted_actions('update')) self.assertIn(environment.UPDATE, registry.get_rsrc_restricted_actions('all')) self.assertIn(environment.REPLACE, registry.get_rsrc_restricted_actions('all')) heat-10.0.2/heat/tests/test_common_param_utils.py0000666000175000017500000001075613343562340022137 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from heat.common import param_utils
from heat.tests import common


class TestExtractBool(common.HeatTestCase):
    def test_extract_bool(self):
        for value in ('True', 'true', 'TRUE', True):
            self.assertTrue(param_utils.extract_bool('bool', value))
        for value in ('False', 'false', 'FALSE', False):
            self.assertFalse(param_utils.extract_bool('bool', value))
        for value in ('foo', 't', 'f', 'yes', 'no', 'y', 'n', '1', '0', None):
            self.assertRaises(ValueError,
                              param_utils.extract_bool, 'bool', value)


class TestExtractInt(common.HeatTestCase):
    def test_extract_int(self):
        # None case
        self.assertIsNone(param_utils.extract_int('num', None))

        # 0 case
        self.assertEqual(0, param_utils.extract_int('num', 0))
        self.assertEqual(0, param_utils.extract_int('num', 0,
                                                    allow_zero=True))
        self.assertEqual(0, param_utils.extract_int('num', '0'))
        self.assertEqual(0, param_utils.extract_int('num', '0',
                                                    allow_zero=True))
        self.assertRaises(ValueError, param_utils.extract_int,
                          'num', 0, allow_zero=False)
        self.assertRaises(ValueError, param_utils.extract_int,
                          'num', '0', allow_zero=False)

        # positive values
        self.assertEqual(1, param_utils.extract_int('num', 1))
        self.assertEqual(1, param_utils.extract_int('num', '1'))
        self.assertRaises(ValueError, param_utils.extract_int, 'num', '1.1')
        self.assertRaises(ValueError, param_utils.extract_int, 'num', 1.1)

        # negative values
        self.assertEqual(-1, param_utils.extract_int('num', -1,
                                                     allow_negative=True))
        self.assertEqual(-1, param_utils.extract_int('num', '-1',
                                                     allow_negative=True))
        self.assertRaises(ValueError, param_utils.extract_int, 'num', '-1.1',
                          allow_negative=True)
        self.assertRaises(ValueError, param_utils.extract_int, 'num', -1.1,
                          allow_negative=True)

        self.assertRaises(ValueError, param_utils.extract_int, 'num', -1)
        self.assertRaises(ValueError, param_utils.extract_int, 'num', '-1')
        self.assertRaises(ValueError, param_utils.extract_int, 'num', '-1.1')
        self.assertRaises(ValueError, param_utils.extract_int, 'num', -1.1)
        self.assertRaises(ValueError, param_utils.extract_int, 'num', -1,
                          allow_negative=False)
        self.assertRaises(ValueError, param_utils.extract_int, 'num', '-1',
                          allow_negative=False)
        self.assertRaises(ValueError, param_utils.extract_int, 'num', '-1.1',
                          allow_negative=False)
        self.assertRaises(ValueError, param_utils.extract_int, 'num', -1.1,
                          allow_negative=False)

        # Non-int value
        self.assertRaises(ValueError, param_utils.extract_int, 'num', 'abc')
        self.assertRaises(ValueError, param_utils.extract_int, 'num', '')
        self.assertRaises(ValueError, param_utils.extract_int, 'num', 'true')
        self.assertRaises(ValueError, param_utils.extract_int, 'num', True)


class TestExtractTags(common.HeatTestCase):
    def test_extract_tags(self):
        self.assertRaises(ValueError, param_utils.extract_tags,
                          "aaaaaaaaaaaaa"
                          "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
                          "aaaaaaaaaaaaaaaaa,a")
        self.assertEqual(["foo", "bar"], param_utils.extract_tags('foo,bar'))
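
# --- Illustrative sketch (editor's addition, not part of the Heat source) ---
# The assertions above pin down the contract of param_utils.extract_bool():
# only the literal strings 'True'/'False' (any capitalisation) or real
# booleans are accepted; looser spellings such as 't', 'yes' or '1' raise
# ValueError. A hypothetical stand-alone equivalent of that contract:
def _example_extract_bool(name, value):
    """Hypothetical re-implementation matching the tests above."""
    if str(value).lower() not in ('true', 'false'):
        raise ValueError('Unrecognized value "%s" for "%s", acceptable '
                         'values are: true, false.' % (value, name))
    return str(value).lower() == 'true'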
heat-10.0.2/heat/tests/test_parameters.py0000666000175000017500000007121213343562340020404 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from oslo_serialization import jsonutils as json
import six

from heat.common import exception
from heat.common import identifier
from heat.engine import parameters
from heat.engine import template
from heat.tests import common


def new_parameter(name, schema, value=None, validate_value=True):
    tmpl = template.Template({'HeatTemplateFormatVersion': '2012-12-12',
                              'Parameters': {name: schema}})
    schema = tmpl.param_schemata()[name]
    param = parameters.Parameter(name, schema, value)
    param.validate(validate_value)
    return param


class ParameterTestCommon(common.HeatTestCase):
    scenarios = [
        ('type_string', dict(p_type='String',
                             inst=parameters.StringParam,
                             value='test',
                             expected='test',
                             allowed_value=['foo'],
                             zero='',
                             default='default')),
        ('type_number', dict(p_type='Number',
                             inst=parameters.NumberParam,
                             value=10,
                             expected='10',
                             allowed_value=[42],
                             zero=0,
                             default=13)),
        ('type_list', dict(p_type='CommaDelimitedList',
                           inst=parameters.CommaDelimitedListParam,
                           value=['a', 'b', 'c'],
                           expected='a,b,c',
                           allowed_value=['foo'],
                           zero=[],
                           default=['d', 'e', 'f'])),
        ('type_json', dict(p_type='Json',
                           inst=parameters.JsonParam,
                           value={'a': '1'},
                           expected='{"a": "1"}',
                           allowed_value=[{'foo': 'bar'}],
                           zero={},
                           default={'d': '1'})),
        ('type_int_json', dict(p_type='Json',
                               inst=parameters.JsonParam,
                               value={'a': 1},
                               expected='{"a": 1}',
                               allowed_value=[{'foo': 'bar'}],
                               zero={},
                               default={'d': 1})),
        ('type_boolean', dict(p_type='Boolean',
                              inst=parameters.BooleanParam,
                              value=True,
                              expected='True',
                              allowed_value=[False],
                              zero=False,
                              default=True)),
        ('type_int_string', dict(p_type='String',
                                 inst=parameters.StringParam,
                                 value='111',
                                 expected='111',
                                 allowed_value=['111'],
                                 zero='',
                                 default='0')),
        ('type_string_json', dict(p_type='Json',
                                  inst=parameters.JsonParam,
                                  value={'1': 1},
                                  expected='{"1": 1}',
                                  allowed_value=[{'2': '2'}],
                                  zero={},
                                  default={'3': 3}))
    ]

    def test_new_param(self):
        p = new_parameter('p', {'Type': self.p_type}, validate_value=False)
        self.assertIsInstance(p, self.inst)

    def test_param_to_str(self):
        p = new_parameter('p', {'Type': self.p_type}, self.value)
        if self.p_type == 'Json':
            self.assertEqual(json.loads(self.expected), json.loads(str(p)))
        else:
            self.assertEqual(self.expected, str(p))

    def test_default_no_override(self):
        p = new_parameter('defaulted', {'Type': self.p_type,
                                        'Default': self.default})
        self.assertTrue(p.has_default())
        self.assertEqual(self.default, p.default())
        self.assertEqual(self.default, p.value())

    def test_default_override(self):
        p = new_parameter('defaulted', {'Type': self.p_type,
                                        'Default': self.default},
                          self.value)
        self.assertTrue(p.has_default())
        self.assertEqual(self.default, p.default())
        self.assertEqual(self.value, p.value())

    def test_default_invalid(self):
        schema = {'Type': self.p_type,
                  'AllowedValues': self.allowed_value,
                  'ConstraintDescription': 'wibble',
                  'Default': self.default}
        if self.p_type == 'Json':
            err = self.assertRaises(exception.InvalidSchemaError,
                                    new_parameter, 'p', schema)
            self.assertIn('AllowedValues constraint invalid for Json',
                          six.text_type(err))
        else:
            err = self.assertRaises(exception.InvalidSchemaError,
                                    new_parameter, 'p', schema)
            self.assertIn('wibble', six.text_type(err))
    def test_description(self):
        description = 'Description of the parameter'
        p = new_parameter('p', {'Type': self.p_type,
                                'Description': description},
                          validate_value=False)
        self.assertEqual(description, p.description())

    def test_no_description(self):
        p = new_parameter('p', {'Type': self.p_type}, validate_value=False)
        self.assertEqual('', p.description())

    def test_no_echo_true(self):
        p = new_parameter('anechoic', {'Type': self.p_type,
                                       'NoEcho': 'true'},
                          self.value)
        self.assertTrue(p.hidden())
        self.assertEqual('******', str(p))

    def test_no_echo_true_caps(self):
        p = new_parameter('anechoic', {'Type': self.p_type,
                                       'NoEcho': 'TrUe'},
                          self.value)
        self.assertTrue(p.hidden())
        self.assertEqual('******', str(p))

    def test_no_echo_false(self):
        p = new_parameter('echoic', {'Type': self.p_type,
                                     'NoEcho': 'false'},
                          self.value)
        self.assertFalse(p.hidden())
        if self.p_type == 'Json':
            self.assertEqual(json.loads(self.expected), json.loads(str(p)))
        else:
            self.assertEqual(self.expected, str(p))

    def test_default_empty(self):
        p = new_parameter('defaulted', {'Type': self.p_type,
                                        'Default': self.zero})
        self.assertTrue(p.has_default())
        self.assertEqual(self.zero, p.default())
        self.assertEqual(self.zero, p.value())

    def test_default_no_empty_user_value_empty(self):
        p = new_parameter('defaulted', {'Type': self.p_type,
                                        'Default': self.default},
                          self.zero)
        self.assertTrue(p.has_default())
        self.assertEqual(self.default, p.default())
        self.assertEqual(self.zero, p.value())


class ParameterTestSpecific(common.HeatTestCase):
    def test_new_bad_type(self):
        self.assertRaises(exception.InvalidSchemaError, new_parameter,
                          'p', {'Type': 'List'}, validate_value=False)

    def test_string_len_good(self):
        schema = {'Type': 'String',
                  'MinLength': '3',
                  'MaxLength': '3'}
        p = new_parameter('p', schema, 'foo')
        self.assertEqual('foo', p.value())

    def test_string_underflow(self):
        schema = {'Type': 'String',
                  'ConstraintDescription': 'wibble',
                  'MinLength': '4'}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, 'foo')
        self.assertIn('wibble', six.text_type(err))

    def test_string_overflow(self):
        schema = {'Type': 'String',
                  'ConstraintDescription': 'wibble',
                  'MaxLength': '2'}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, 'foo')
        self.assertIn('wibble', six.text_type(err))

    def test_string_pattern_good(self):
        schema = {'Type': 'String',
                  'AllowedPattern': '[a-z]*'}
        p = new_parameter('p', schema, 'foo')
        self.assertEqual('foo', p.value())

    def test_string_pattern_bad_prefix(self):
        schema = {'Type': 'String',
                  'ConstraintDescription': 'wibble',
                  'AllowedPattern': '[a-z]*'}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, '1foo')
        self.assertIn('wibble', six.text_type(err))

    def test_string_pattern_bad_suffix(self):
        schema = {'Type': 'String',
                  'ConstraintDescription': 'wibble',
                  'AllowedPattern': '[a-z]*'}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, 'foo1')
        self.assertIn('wibble', six.text_type(err))

    def test_string_value_list_good(self):
        schema = {'Type': 'String',
                  'AllowedValues': ['foo', 'bar', 'baz']}
        p = new_parameter('p', schema, 'bar')
        self.assertEqual('bar', p.value())

    def test_string_value_unicode(self):
        schema = {'Type': 'String'}
        p = new_parameter('p', schema, u'test\u2665')
        self.assertEqual(u'test\u2665', p.value())
    def test_string_value_list_bad(self):
        schema = {'Type': 'String',
                  'ConstraintDescription': 'wibble',
                  'AllowedValues': ['foo', 'bar', 'baz']}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, 'blarg')
        self.assertIn('wibble', six.text_type(err))

    def test_number_int_good(self):
        schema = {'Type': 'Number',
                  'MinValue': '3',
                  'MaxValue': '3'}
        p = new_parameter('p', schema, '3')
        self.assertEqual(3, p.value())

    def test_number_float_good_string(self):
        schema = {'Type': 'Number',
                  'MinValue': '3.0',
                  'MaxValue': '4.0'}
        p = new_parameter('p', schema, '3.5')
        self.assertEqual(3.5, p.value())

    def test_number_float_good_number(self):
        schema = {'Type': 'Number',
                  'MinValue': '3.0',
                  'MaxValue': '4.0'}
        p = new_parameter('p', schema, 3.5)
        self.assertEqual(3.5, p.value())

    def test_number_low(self):
        schema = {'Type': 'Number',
                  'ConstraintDescription': 'wibble',
                  'MinValue': '4'}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, '3')
        self.assertIn('wibble', six.text_type(err))

    def test_number_high(self):
        schema = {'Type': 'Number',
                  'ConstraintDescription': 'wibble',
                  'MaxValue': '2'}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, '3')
        self.assertIn('wibble', six.text_type(err))

    def test_number_bad(self):
        schema = {'Type': 'Number'}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, 'str')
        self.assertIn('float', six.text_type(err))

    def test_number_bad_type(self):
        schema = {'Type': 'Number'}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, ['foo'])
        self.assertIn('int', six.text_type(err))

    def test_number_value_list_good(self):
        schema = {'Type': 'Number',
                  'AllowedValues': ['1', '3', '5']}
        p = new_parameter('p', schema, '5')
        self.assertEqual(5, p.value())

    def test_number_value_list_bad(self):
        schema = {'Type': 'Number',
                  'ConstraintDescription': 'wibble',
                  'AllowedValues': ['1', '3', '5']}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, '2')
        self.assertIn('wibble', six.text_type(err))

    def test_list_value_list_default_empty(self):
        schema = {'Type': 'CommaDelimitedList', 'Default': ''}
        p = new_parameter('p', schema)
        self.assertEqual([], p.value())

    def test_list_value_list_good(self):
        schema = {'Type': 'CommaDelimitedList',
                  'AllowedValues': ['foo', 'bar', 'baz']}
        p = new_parameter('p', schema, 'baz,foo,bar')
        self.assertEqual('baz,foo,bar'.split(','), p.value())

        schema['Default'] = []
        p = new_parameter('p', schema)
        self.assertEqual([], p.value())

        schema['Default'] = 'baz,foo,bar'
        p = new_parameter('p', schema)
        self.assertEqual('baz,foo,bar'.split(','), p.value())

        schema['AllowedValues'] = ['1', '3', '5']
        schema['Default'] = []
        p = new_parameter('p', schema, [1, 3, 5])
        self.assertEqual('1,3,5', str(p))

        schema['Default'] = [1, 3, 5]
        p = new_parameter('p', schema)
        self.assertEqual('1,3,5'.split(','), p.value())

    def test_list_value_list_bad(self):
        schema = {'Type': 'CommaDelimitedList',
                  'ConstraintDescription': 'wibble',
                  'AllowedValues': ['foo', 'bar', 'baz']}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema,
                                'foo,baz,blarg')
        self.assertIn('wibble', six.text_type(err))

    def test_list_validate_good(self):
        schema = {'Type': 'CommaDelimitedList'}
        val = ['foo', 'bar', 'baz']
        val_s = 'foo,bar,baz'
        p = new_parameter('p', schema, val_s, validate_value=False)
        p.validate()
        self.assertEqual(val, p.value())
        self.assertEqual(val, p.parsed)
    def test_list_validate_bad(self):
        schema = {'Type': 'CommaDelimitedList'}
        # just need something here that is going to throw an AttributeError
        # when .split() is called
        val_s = 0
        p = new_parameter('p', schema, validate_value=False)
        p.user_value = val_s
        err = self.assertRaises(exception.StackValidationFailed,
                                p.validate)
        self.assertIn('Parameter \'p\' is invalid', six.text_type(err))

    def test_map_value(self):
        '''Happy path for value that's already a map.'''
        schema = {'Type': 'Json'}
        val = {"foo": "bar", "items": [1, 2, 3]}
        p = new_parameter('p', schema, val)
        self.assertEqual(val, p.value())
        self.assertEqual(val, p.parsed)

    def test_map_value_bad(self):
        '''Map value is not JSON parsable.'''
        schema = {'Type': 'Json',
                  'ConstraintDescription': 'wibble'}
        val = {"foo": "bar", "not_json": len}
        err = self.assertRaises(ValueError,
                                new_parameter, 'p', schema, val)
        self.assertIn('Value must be valid JSON', six.text_type(err))

    def test_map_value_parse(self):
        '''Happy path for value that's a string.'''
        schema = {'Type': 'Json'}
        val = {"foo": "bar", "items": [1, 2, 3]}
        val_s = json.dumps(val)
        p = new_parameter('p', schema, val_s)
        self.assertEqual(val, p.value())
        self.assertEqual(val, p.parsed)

    def test_map_value_bad_parse(self):
        '''Test value error for unparsable string value.'''
        schema = {'Type': 'Json',
                  'ConstraintDescription': 'wibble'}
        val = "I am not a map"
        err = self.assertRaises(ValueError,
                                new_parameter, 'p', schema, val)
        self.assertIn('Value must be valid JSON', six.text_type(err))

    def test_map_underrun(self):
        '''Test map length under MIN_LEN.'''
        schema = {'Type': 'Json',
                  'MinLength': 3}
        val = {"foo": "bar", "items": [1, 2, 3]}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, val)
        self.assertIn('out of range', six.text_type(err))

    def test_map_overrun(self):
        '''Test map length over MAX_LEN.'''
        schema = {'Type': 'Json',
                  'MaxLength': 1}
        val = {"foo": "bar", "items": [1, 2, 3]}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'p', schema, val)
        self.assertIn('out of range', six.text_type(err))

    def test_json_list(self):
        schema = {'Type': 'Json'}
        val = ["fizz", "buzz"]
        p = new_parameter('p', schema, val)
        self.assertIsInstance(p.value(), list)
        self.assertIn("fizz", p.value())
        self.assertIn("buzz", p.value())

    def test_json_string_list(self):
        schema = {'Type': 'Json'}
        val = '["fizz", "buzz"]'
        p = new_parameter('p', schema, val)
        self.assertIsInstance(p.value(), list)
        self.assertIn("fizz", p.value())
        self.assertIn("buzz", p.value())

    def test_json_validate_good(self):
        schema = {'Type': 'Json'}
        val = {"foo": "bar", "items": [1, 2, 3]}
        val_s = json.dumps(val)
        p = new_parameter('p', schema, val_s, validate_value=False)
        p.validate()
        self.assertEqual(val, p.value())
        self.assertEqual(val, p.parsed)

    def test_json_validate_bad(self):
        schema = {'Type': 'Json'}
        val_s = '{"foo": "bar", "invalid": ]}'
        p = new_parameter('p', schema, validate_value=False)
        p.user_value = val_s
        err = self.assertRaises(exception.StackValidationFailed,
                                p.validate)
        self.assertIn('Parameter \'p\' is invalid', six.text_type(err))

    def test_bool_value_true(self):
        schema = {'Type': 'Boolean'}
        for val in ('1', 't', 'true', 'on', 'y', 'yes', True, 1):
            bo = new_parameter('bo', schema, val)
            self.assertTrue(bo.value())

    def test_bool_value_false(self):
        schema = {'Type': 'Boolean'}
        for val in ('0', 'f', 'false', 'off', 'n', 'no', False, 0):
            bo = new_parameter('bo', schema, val)
            self.assertFalse(bo.value())

    def test_bool_value_invalid(self):
        schema = {'Type': 'Boolean'}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'bo', schema, 'foo')
        self.assertIn("Unrecognized value 'foo'", six.text_type(err))
    def test_missing_param_str(self):
        '''Test missing user parameter.'''
        self.assertRaises(exception.UserParameterMissing,
                          new_parameter, 'p', {'Type': 'String'})

    def test_missing_param_list(self):
        '''Test missing user parameter.'''
        self.assertRaises(exception.UserParameterMissing,
                          new_parameter, 'p', {'Type': 'CommaDelimitedList'})

    def test_missing_param_map(self):
        '''Test missing user parameter.'''
        self.assertRaises(exception.UserParameterMissing,
                          new_parameter, 'p', {'Type': 'Json'})

    def test_param_name_in_error_message(self):
        schema = {'Type': 'String',
                  'AllowedPattern': '[a-z]*'}
        err = self.assertRaises(exception.StackValidationFailed,
                                new_parameter, 'testparam', schema, '234')
        expected = ("Parameter 'testparam' is invalid: "
                    '"234" does not match pattern "[a-z]*"')
        self.assertEqual(expected, six.text_type(err))


params_schema = json.loads('''{
  "Parameters" : {
    "User" : { "Type": "String" },
    "Defaulted" : {
      "Type": "String",
      "Default": "foobar"
    }
  }
}''')


class ParametersBase(common.HeatTestCase):
    def new_parameters(self, stack_name, tmpl, user_params=None,
                       stack_id=None, validate_value=True,
                       param_defaults=None):
        user_params = user_params or {}
        tmpl.update({'HeatTemplateFormatVersion': '2012-12-12'})
        tmpl = template.Template(tmpl)
        params = tmpl.parameters(
            identifier.HeatIdentifier('', stack_name, stack_id),
            user_params, param_defaults=param_defaults)
        params.validate(validate_value)
        return params


class ParametersTest(ParametersBase):
    def test_pseudo_params(self):
        stack_name = 'test_stack'
        params = self.new_parameters(stack_name, {"Parameters": {}})

        self.assertEqual('test_stack', params['AWS::StackName'])
        self.assertEqual(
            'arn:openstack:heat:::stacks/{0}/{1}'.format(stack_name, 'None'),
            params['AWS::StackId'])
        self.assertIn('AWS::Region', params)

    def test_pseudo_param_stackid(self):
        stack_name = 'test_stack'
        params = self.new_parameters(stack_name, {'Parameters': {}},
                                     stack_id='abc123')

        self.assertEqual(
            'arn:openstack:heat:::stacks/{0}/{1}'.format(stack_name,
                                                         'abc123'),
            params['AWS::StackId'])
        stack_identifier = identifier.HeatIdentifier('', '', 'def456')
        params.set_stack_id(stack_identifier)
        self.assertEqual(stack_identifier.arn(), params['AWS::StackId'])

    def test_schema_invariance(self):
        params1 = self.new_parameters('test', params_schema,
                                      {'User': 'foo',
                                       'Defaulted': 'wibble'})
        self.assertEqual('wibble', params1['Defaulted'])

        params2 = self.new_parameters('test', params_schema, {'User': 'foo'})
        self.assertEqual('foobar', params2['Defaulted'])

    def test_to_dict(self):
        template = {'Parameters': {'Foo': {'Type': 'String'},
                                   'Bar': {'Type': 'Number',
                                           'Default': '42'}}}
        params = self.new_parameters('test_params', template, {'Foo': 'foo'})

        as_dict = dict(params)
        self.assertEqual('foo', as_dict['Foo'])
        self.assertEqual(42, as_dict['Bar'])
        self.assertEqual('test_params', as_dict['AWS::StackName'])
        self.assertIn('AWS::Region', as_dict)

    def test_map(self):
        template = {'Parameters': {'Foo': {'Type': 'String'},
                                   'Bar': {'Type': 'Number',
                                           'Default': '42'}}}
        params = self.new_parameters('test_params', template, {'Foo': 'foo'})

        expected = {'Foo': False,
                    'Bar': True,
                    'AWS::Region': True,
                    'AWS::StackId': True,
                    'AWS::StackName': True}

        self.assertEqual(expected, params.map(lambda p: p.has_default()))
    def test_map_str(self):
        template = {'Parameters': {'Foo': {'Type': 'String'},
                                   'Bar': {'Type': 'Number'},
                                   'Uni': {'Type': 'String'}}}
        stack_name = 'test_params'
        params = self.new_parameters(stack_name, template,
                                     {'Foo': 'foo',
                                      'Bar': '42',
                                      'Uni': u'test\u2665'})

        expected = {'Foo': 'foo',
                    'Bar': '42',
                    'Uni': b'test\xe2\x99\xa5',
                    'AWS::Region': 'ap-southeast-1',
                    'AWS::StackId':
                    'arn:openstack:heat:::stacks/{0}/{1}'.format(
                        stack_name, 'None'),
                    'AWS::StackName': 'test_params'}

        mapped_params = params.map(six.text_type)
        mapped_params['Uni'] = mapped_params['Uni'].encode('utf-8')
        self.assertEqual(expected, mapped_params)

    def test_unknown_params(self):
        user_params = {'Foo': 'wibble'}
        self.assertRaises(exception.UnknownUserParameter,
                          self.new_parameters,
                          'test',
                          params_schema,
                          user_params)

    def test_missing_params(self):
        user_params = {}
        self.assertRaises(exception.UserParameterMissing,
                          self.new_parameters,
                          'test',
                          params_schema,
                          user_params)

    def test_missing_attribute_params(self):
        params = {'Parameters': {'Foo': {'Type': 'String'},
                                 'NoAttr': 'No attribute.',
                                 'Bar': {'Type': 'Number',
                                         'Default': '1'}}}
        self.assertRaises(exception.InvalidSchemaError,
                          self.new_parameters,
                          'test',
                          params)


class ParameterDefaultsTest(ParametersBase):

    scenarios = [
        ('type_list', dict(p_type='CommaDelimitedList',
                           value='1,1,1',
                           expected=[['4', '2'], ['7', '7'],
                                     ['1', '1', '1']],
                           param_default='7,7',
                           default='4,2')),
        ('type_number', dict(p_type='Number',
                             value=111,
                             expected=[42, 77, 111],
                             param_default=77,
                             default=42)),
        ('type_string', dict(p_type='String',
                             value='111',
                             expected=['42', '77', '111'],
                             param_default='77',
                             default='42')),
        ('type_json', dict(p_type='Json',
                           value={'1': '11'},
                           expected=[{'4': '2'}, {'7': '7'}, {'1': '11'}],
                           param_default={'7': '7'},
                           default={'4': '2'})),
        ('type_boolean1', dict(p_type='Boolean',
                               value=True,
                               expected=[False, False, True],
                               param_default=False,
                               default=False)),
        ('type_boolean2', dict(p_type='Boolean',
                               value=False,
                               expected=[False, True, False],
                               param_default=True,
                               default=False)),
        ('type_boolean3', dict(p_type='Boolean',
                               value=False,
                               expected=[True, False, False],
                               param_default=False,
                               default=True))]

    def test_use_expected_default(self):
        template = {'Parameters': {'a': {'Type': self.p_type,
                                         'Default': self.default}}}
        params = self.new_parameters('test_params', template)
        self.assertEqual(self.expected[0], params['a'])

        params = self.new_parameters('test_params', template,
                                     param_defaults={'a': self.param_default})
        self.assertEqual(self.expected[1], params['a'])

        params = self.new_parameters('test_params', template,
                                     {'a': self.value},
                                     param_defaults={'a': self.param_default})
        self.assertEqual(self.expected[2], params['a'])


class ParameterSchemaTest(common.HeatTestCase):

    def test_validate_schema_wrong_key(self):
        error = self.assertRaises(exception.InvalidSchemaError,
                                  parameters.Schema.from_dict, 'param_name',
                                  {"foo": "bar"})
        self.assertEqual("Invalid key 'foo' for parameter (param_name)",
                         six.text_type(error))

    def test_validate_schema_no_type(self):
        error = self.assertRaises(exception.InvalidSchemaError,
                                  parameters.Schema.from_dict, 'broken',
                                  {"Description": "Hi!"})
        self.assertEqual("Missing parameter type for parameter: broken",
                         six.text_type(error))
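
# --- Illustrative sketch (editor's addition, not part of the Heat source) ---
# The two failure cases above exercise parameters.Schema.from_dict()
# validating cfn-style schema dicts. For contrast, a hypothetical
# well-formed call, using only keys exercised elsewhere in this module:
def _example_valid_schema():
    """Hypothetical helper showing a from_dict() call that validates."""
    return parameters.Schema.from_dict(
        'instance_type',
        {'Type': 'String',
         'Default': 'm1.small',
         'Description': 'Flavor to use'})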
heat-10.0.2/heat/tests/convergence/0000775000175000017500000000000013343562672017131 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/convergence/framework/0000775000175000017500000000000013343562672021126 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/convergence/framework/processes.py0000666000175000017500000000240413343562340023500 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from heat.tests.convergence.framework import engine_wrapper
from heat.tests.convergence.framework import event_loop as event_loop_module
from heat.tests.convergence.framework import worker_wrapper

engine = None
worker = None
event_loop = None


class Processes(object):
    def __init__(self):
        global engine
        global worker
        global event_loop
        worker = worker_wrapper.Worker()
        engine = engine_wrapper.Engine(worker)
        event_loop = event_loop_module.EventLoop(engine, worker)
        self.engine = engine
        self.worker = worker
        self.event_loop = event_loop

    def clear(self):
        self.engine.clear()
        self.worker.clear()


Processes()
heat-10.0.2/heat/tests/convergence/framework/worker_wrapper.py0000666000175000017500000000311613343562340024544 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import mock

from heat.engine import worker
from heat.tests.convergence.framework import message_processor
from heat.tests.convergence.framework import message_queue


class Worker(message_processor.MessageProcessor):

    queue = message_queue.MessageQueue('worker')

    def __init__(self):
        super(Worker, self).__init__('worker')

    @message_processor.asynchronous
    def check_resource(self, ctxt, resource_id,
                       current_traversal, data,
                       is_update, adopt_stack_data,
                       converge=False):
        worker.WorkerService("fake_host", "fake_topic",
                             "fake_engine",
                             mock.Mock()).check_resource(
            ctxt, resource_id, current_traversal,
            data, is_update, adopt_stack_data, converge)

    def stop_traversal(self, current_stack):
        pass

    def stop_all_workers(self, current_stack):
        pass
heat-10.0.2/heat/tests/convergence/framework/message_queue.py0000666000175000017500000000217213343562340024324 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import collections

Message = collections.namedtuple('Message', ['name', 'data'])


class MessageQueue(object):
    def __init__(self, name):
        self.name = name
        self._queue = collections.deque()

    def send(self, name, data=None):
        self._queue.append(Message(name, data))

    def send_priority(self, name, data=None):
        self._queue.appendleft(Message(name, data))

    def get(self):
        try:
            return self._queue.popleft()
        except IndexError:
            return None

    def clear(self):
        self._queue.clear()
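
# --- Illustrative usage (editor's addition, not part of the Heat source) ---
# MessageQueue is a plain FIFO built on collections.deque; send_priority()
# jumps the queue by appending on the left, and get() returns None rather
# than raising when the queue is empty:
if __name__ == '__main__':
    q = MessageQueue('demo')
    q.send('second')
    q.send_priority('first')
    assert q.get().name == 'first'
    assert q.get().name == 'second'
    assert q.get() is None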
heat-10.0.2/heat/tests/convergence/framework/scenario.py0000666000175000017500000000304713343562340023301 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from oslo_log import log as logging

LOG = logging.getLogger(__name__)


def list_all():
    scenario_dir = os.path.join(os.path.dirname(__file__), '../scenarios')
    if not os.path.isdir(scenario_dir):
        LOG.error('Scenario directory "%s" not found', scenario_dir)
        return

    for root, dirs, files in os.walk(scenario_dir):
        for filename in files:
            name, ext = os.path.splitext(filename)
            if ext == '.py':
                LOG.debug('Found scenario "%s"', name)
                yield name, os.path.join(root, filename)


class Scenario(object):

    def __init__(self, name, path):
        self.name = name

        with open(path) as f:
            source = f.read()

        self.code = compile(source, path, 'exec')
        LOG.debug('Loaded scenario %s', self.name)

    def __call__(self, _event_loop, **global_env):
        LOG.info('*** Beginning scenario "%s"', self.name)

        exec(self.code, global_env, {})
        _event_loop()
heat-10.0.2/heat/tests/convergence/framework/event_loop.py0000666000175000017500000000142513343562340023646 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.


class EventLoop(object):
    def __init__(self, *processors):
        self.processors = processors

    def __call__(self):
        while any([processor() for processor in self.processors]):
            continue
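
# --- Illustrative usage (editor's addition, not part of the Heat source) ---
# An EventLoop keeps polling its processors until a full pass over all of
# them does no work, i.e. every call returns a falsey value. Any callable
# will do as a processor:
if __name__ == '__main__':
    pending = [3]

    def processor():
        # Pretend to process one message; report whether anything was done.
        if pending[0]:
            pending[0] -= 1
            return True
        return False

    EventLoop(processor)()
    assert pending[0] == 0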
heat-10.0.2/heat/tests/convergence/framework/engine_wrapper.py0000666000175000017500000001164513343562340024506 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import six

from heat.db.sqlalchemy import api as db_api
from heat.engine import service
from heat.engine import stack
from heat.tests.convergence.framework import message_processor
from heat.tests.convergence.framework import message_queue
from heat.tests.convergence.framework import scenario_template
from heat.tests import utils


class SynchronousThreadGroupManager(service.ThreadGroupManager):
    """Wrapper for thread group manager.

    The start method of thread group manager needs to be overridden to
    run the function synchronously so the convergence scenario tests can
    be run.
    """
    def start(self, stack_id, func, *args, **kwargs):
        func(*args, **kwargs)


class Engine(message_processor.MessageProcessor):
    """Wrapper for the engine service.

    Methods of this class will be called from the scenario tests.
    """

    queue = message_queue.MessageQueue('engine')

    def __init__(self, worker):
        super(Engine, self).__init__('engine')
        self.worker = worker

    def scenario_template_to_hot(self, scenario_tmpl):
        """Converts the scenario template into hot template."""
        hot_tmpl = {"heat_template_version": "2013-05-23"}
        resources = {}
        for res_name, res_def in six.iteritems(scenario_tmpl.resources):
            props = getattr(res_def, 'properties')
            depends = getattr(res_def, 'depends_on')
            res_defn = {"type": "OS::Heat::TestResource"}
            if props:
                props_def = {}
                for prop_name, prop_value in props.items():
                    if type(prop_value) == scenario_template.GetRes:
                        prop_res = getattr(prop_value, "target_name")
                        prop_value = {'get_resource': prop_res}
                    elif type(prop_value) == scenario_template.GetAtt:
                        prop_res = getattr(prop_value, "target_name")
                        prop_attr = getattr(prop_value, "attr")
                        prop_value = {'get_attr': [prop_res, prop_attr]}
                    props_def[prop_name] = prop_value
                res_defn["properties"] = props_def
            if depends:
                res_defn["depends_on"] = depends
            resources[res_name] = res_defn
        hot_tmpl['resources'] = resources
        return hot_tmpl

    @message_processor.asynchronous
    def create_stack(self, stack_name, scenario_tmpl):
        cnxt = utils.dummy_context()
        srv = service.EngineService("host", "engine")
        srv.thread_group_mgr = SynchronousThreadGroupManager()
        srv.worker_service = self.worker
        hot_tmpl = self.scenario_template_to_hot(scenario_tmpl)
        srv.create_stack(cnxt, stack_name, hot_tmpl,
                         params={}, files={}, environment_files=None,
                         args={})

    @message_processor.asynchronous
    def update_stack(self, stack_name, scenario_tmpl):
        cnxt = utils.dummy_context()
        db_stack = db_api.stack_get_by_name(cnxt, stack_name)
        srv = service.EngineService("host", "engine")
        srv.thread_group_mgr = SynchronousThreadGroupManager()
        srv.worker_service = self.worker
        hot_tmpl = self.scenario_template_to_hot(scenario_tmpl)
        stack_identity = {'stack_name': stack_name,
                          'stack_id': db_stack.id,
                          'tenant': db_stack.tenant,
                          'path': ''}
        srv.update_stack(cnxt, stack_identity, hot_tmpl,
                         params={}, files={}, environment_files=None,
                         args={})

    @message_processor.asynchronous
    def delete_stack(self, stack_name):
        cnxt = utils.dummy_context()
        db_stack = db_api.stack_get_by_name(cnxt, stack_name)
        stack_identity = {'stack_name': stack_name,
                          'stack_id': db_stack.id,
                          'tenant': db_stack.tenant,
                          'path': ''}
        srv = service.EngineService("host", "engine")
        srv.thread_group_mgr = SynchronousThreadGroupManager()
        srv.worker_service = self.worker
        srv.delete_stack(cnxt, stack_identity)

    @message_processor.asynchronous
    def rollback_stack(self, stack_name):
        cntxt = utils.dummy_context()
        db_stack = db_api.stack_get_by_name(cntxt, stack_name)
        stk = stack.Stack.load(cntxt, stack=db_stack)
        stk.thread_group_mgr = SynchronousThreadGroupManager()
        stk.rollback()
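
# --- Illustrative example (editor's addition, not part of the Heat source) ---
# scenario_template_to_hot() maps the scenario DSL onto a plain HOT dict in
# which every resource is an OS::Heat::TestResource. For instance, a
# scenario template such as
#
#     Template({'A': RsrcDef({}, []),
#               'B': RsrcDef({'a': GetAtt('A', 'a')}, ['A'])})
#
# would, per the code above, come out as:
#
#     {'heat_template_version': '2013-05-23',
#      'resources': {
#          'A': {'type': 'OS::Heat::TestResource'},
#          'B': {'type': 'OS::Heat::TestResource',
#                'properties': {'a': {'get_attr': ['A', 'a']}},
#                'depends_on': ['A']}}}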
heat-10.0.2/heat/tests/convergence/framework/testutils.py0000666000175000017500000000545213343562340023540 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import functools

from heat.tests.convergence.framework import reality
from heat.tests.convergence.framework import scenario_template

from oslo_log import log as logging

LOG = logging.getLogger(__name__)


def verify(test, reality, tmpl):
    LOG.info('Verifying %s', tmpl)

    for name in tmpl.resources:
        rsrc_count = len(reality.resources_by_logical_name(name))
        test.assertEqual(1, rsrc_count,
                         'Found %d copies of resource "%s"' % (rsrc_count,
                                                               name))

    all_rsrcs = reality.all_resources()

    for name, defn in tmpl.resources.items():
        phys_rsrc = reality.resources_by_logical_name(name)[0]

        for prop_name, prop_def in defn.properties.items():
            real_value = reality.resource_properties(phys_rsrc, prop_name)

            if isinstance(prop_def, scenario_template.GetAtt):
                targs = reality.resources_by_logical_name(
                    prop_def.target_name)
                prop_def = targs[0].rsrc_prop_data.data[prop_def.attr]

            elif isinstance(prop_def, scenario_template.GetRes):
                targs = reality.resources_by_logical_name(
                    prop_def.target_name)
                prop_def = targs[0].physical_resource_id

            test.assertEqual(prop_def, real_value,
                             'Unexpected value for %s prop %s' % (name,
                                                                  prop_name))

        len_rsrc_prop_data = 0
        if phys_rsrc.rsrc_prop_data:
            len_rsrc_prop_data = len(phys_rsrc.rsrc_prop_data.data)
        test.assertEqual(len(defn.properties), len_rsrc_prop_data)

    test.assertEqual(set(tmpl.resources), set(r.name for r in all_rsrcs))


def scenario_globals(procs, testcase):
    return {
        'test': testcase,
        'reality': reality.reality,
        'verify': functools.partial(verify, testcase, reality.reality),

        'Template': scenario_template.Template,
        'RsrcDef': scenario_template.RsrcDef,
        'GetRes': scenario_template.GetRes,
        'GetAtt': scenario_template.GetAtt,

        'engine': procs.engine,
        'worker': procs.worker,
    }
heat-10.0.2/heat/tests/convergence/framework/__init__.py0000666000175000017500000000000013343562340023217 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/convergence/framework/message_processor.py0000666000175000017500000000715713343562340025227 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import collections
import functools
import inspect

from oslo_log import log as logging
from oslo_messaging import rpc

LOG = logging.getLogger(__name__)


def asynchronous(function):
    """Decorator for MessageProcessor methods to make them asynchronous.

    To use, simply call the method as usual. Instead of being executed
    immediately, it will be placed on the queue for the MessageProcessor
    and run on a future iteration of the event loop.
    """
    arg_names = inspect.getargspec(function).args
    MessageData = collections.namedtuple(function.__name__, arg_names[1:])

    @functools.wraps(function)
    def call_or_send(processor, *args, **kwargs):
        if len(args) == 1 and not kwargs and isinstance(args[0],
                                                        MessageData):
            try:
                return function(processor, **args[0]._asdict())
            except rpc.dispatcher.ExpectedException as exc:
                LOG.error('[%s] Exception in "%s": %s',
                          processor.name, function.__name__,
                          exc.exc_info[1], exc_info=exc.exc_info)
                raise
            except Exception as exc:
                LOG.exception('[%s] Exception in "%s": %s',
                              processor.name, function.__name__, exc)
                raise
        else:
            data = inspect.getcallargs(function, processor, *args, **kwargs)
            data.pop(arg_names[0])  # lose self
            return processor.queue.send(function.__name__,
                                        MessageData(**data))

    call_or_send.MessageData = MessageData
    return call_or_send


class MessageProcessor(object):

    queue = None

    def __init__(self, name):
        self.name = name

    def __call__(self):
        message = self.queue.get()
        if message is None:
            LOG.debug('[%s] No messages', self.name)
            return False

        try:
            method = getattr(self, message.name)
        except AttributeError:
            LOG.error('[%s] Bad message name "%s"' % (self.name,
                                                      message.name))
            raise
        else:
            LOG.info('[%s] %r' % (self.name, message.data))

        method(message.data)
        return True

    @asynchronous
    def noop(self, count=1):
        """Insert No-op operations in the message queue."""
        assert isinstance(count, int)
        if count > 1:
            self.queue.send_priority('noop',
                                     self.noop.MessageData(count - 1))

    @asynchronous
    def _execute(self, func):
        """Insert a function call in the message queue.

        The function takes no arguments, so use functools.partial to curry
        the arguments before passing it here.
        """
        func()

    def call(self, func, *args, **kwargs):
        """Insert a function call in the message queue."""
        self._execute(functools.partial(func, *args, **kwargs))

    def clear(self):
        """Delete all the messages from the queue."""
        self.queue.clear()


__all__ = ['MessageProcessor', 'asynchronous']
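
# --- Illustrative usage (editor's addition, not part of the Heat source) ---
# A processor declares its entry points with @asynchronous; calling one
# merely enqueues a message, and invoking the processor itself pops and
# runs it. A hypothetical minimal processor:
if __name__ == '__main__':
    from heat.tests.convergence.framework import message_queue

    class Greeter(MessageProcessor):
        queue = message_queue.MessageQueue('greeter')

        @asynchronous
        def greet(self, name):
            print('hello %s' % name)

    g = Greeter('greeter')
    g.greet('convergence')   # queued, not yet executed
    g()                      # pops the message and runs greet()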
heat-10.0.2/heat/tests/convergence/framework/fake_resource.py0000666000175000017500000000612713343562340024315 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import properties
from heat.engine import resource

from oslo_log import log as logging

LOG = logging.getLogger(__name__)


class TestResource(resource.Resource):
    PROPERTIES = (
        A, B, C, CA, rA, rB
    ) = (
        'a', 'b', 'c', 'ca', '!a', '!b'
    )

    ATTRIBUTES = (
        A, rA
    ) = (
        'a', '!a'
    )

    properties_schema = {
        A: properties.Schema(
            properties.Schema.STRING,
            _('Fake property a.'),
            default='a',
            update_allowed=True
        ),
        B: properties.Schema(
            properties.Schema.STRING,
            _('Fake property b.'),
            default='b',
            update_allowed=True
        ),
        C: properties.Schema(
            properties.Schema.STRING,
            _('Fake property c.'),
            update_allowed=True,
            default='c'
        ),
        CA: properties.Schema(
            properties.Schema.STRING,
            _('Fake property ca.'),
            update_allowed=True,
            default='ca'
        ),
        rA: properties.Schema(
            properties.Schema.STRING,
            _('Fake property !a.'),
            update_allowed=True,
            default='!a'
        ),
        rB: properties.Schema(
            properties.Schema.STRING,
            _('Fake property !b.'),
            update_allowed=True,
            default='!b'
        ),
    }

    attributes_schema = {
        A: attributes.Schema(
            _('Fake attribute a.'),
            cache_mode=attributes.Schema.CACHE_NONE
        ),
        rA: attributes.Schema(
            _('Fake attribute !a.'),
            cache_mode=attributes.Schema.CACHE_NONE
        ),
    }

    def handle_create(self):
        LOG.info('Creating resource %s with properties %s',
                 self.name, dict(self.properties))
        for prop in self.properties.props.keys():
            self.data_set(prop, self.properties.get(prop), redact=False)
        self.resource_id_set(self.physical_resource_name())

    def handle_update(self, json_snippet=None, tmpl_diff=None,
                      prop_diff=None):
        LOG.info('Updating resource %s with prop_diff %s',
                 self.name, prop_diff)

        for prop in prop_diff:
            if '!' in prop:
                raise resource.UpdateReplace(self.name)
            self.data_set(prop, prop_diff.get(prop), redact=False)

    def _resolve_attribute(self, name):
        if name in self.attributes:
            return self.data().get(name)
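
# --- Editor's note (not part of the Heat source) ---
# Properties whose names contain '!' are this framework's marker for
# "replace on update": handle_update() above raises UpdateReplace as soon
# as such a property changes. That is what lets scenarios such as
# update_user_replace.py provoke a resource replacement with, e.g.:
#
#     RsrcDef({'!a': GetAtt('A', 'a')}, ['B'])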
heat-10.0.2/heat/tests/convergence/framework/reality.py0000666000175000017500000000332413343562340023145 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from heat.common import exception
from heat.db.sqlalchemy import api as db_api
from heat.tests import utils


class RealityStore(object):

    def __init__(self):
        self.cntxt = utils.dummy_context()

    def resources_by_logical_name(self, logical_name):
        ret = []
        resources = db_api.resource_get_all(self.cntxt)
        for res in resources:
            if (res.name == logical_name and
                    res.action in ("CREATE", "UPDATE") and
                    res.status == "COMPLETE"):
                ret.append(res)
        return ret

    def all_resources(self):
        try:
            resources = db_api.resource_get_all(self.cntxt)
        except exception.NotFound:
            return []

        ret = []
        for res in resources:
            if (res.action in ("CREATE", "UPDATE") and
                    res.status == "COMPLETE"):
                ret.append(res)
        return ret

    def resource_properties(self, res, prop_name):
        res_data = db_api.resource_data_get_by_key(self.cntxt,
                                                   res.id,
                                                   prop_name)
        return res_data.value


reality = RealityStore()
heat-10.0.2/heat/tests/convergence/framework/scenario_template.py0000666000175000017500000000262013343562340025170 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.


class GetRes(object):
    def __init__(self, target_name):
        self.target_name = target_name

    def __repr__(self):
        return 'GetRes(%r)' % self.target_name


class GetAtt(GetRes):
    def __init__(self, target_name, attr):
        super(GetAtt, self).__init__(target_name)
        self.attr = attr

    def __repr__(self):
        return 'GetAtt(%r, %r)' % (self.target_name, self.attr)


class RsrcDef(object):
    def __init__(self, properties, depends_on):
        self.properties = properties
        self.depends_on = depends_on

    def __repr__(self):
        return 'RsrcDef(%r, %r)' % (self.properties, self.depends_on)


class Template(object):
    def __init__(self, resources={}, key=None):
        self.key = key
        self.resources = resources

    def __repr__(self):
        return 'Template(%r)' % self.resources
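
# --- Illustrative usage (editor's addition, not part of the Heat source) ---
# These little classes form the DSL the scenario files below are written
# in: RsrcDef pairs a property map with explicit dependencies, while GetRes
# and GetAtt stand in for {get_resource}/{get_attr} references:
if __name__ == '__main__':
    tmpl = Template({
        'A': RsrcDef({'a': 'initial'}, []),
        'B': RsrcDef({'c': GetRes('A'),
                      'ca': GetAtt('A', 'a')}, ['A']),
    })
    print(tmpl)  # Template({'A': RsrcDef({'a': 'initial'}, []), ...})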
heat-10.0.2/heat/tests/convergence/__init__.py0000666000175000017500000000000013343562340021222 0ustar zuulzuul00000000000000heat-10.0.2/heat/tests/convergence/scenarios/0000775000175000017500000000000013343562672021117 5ustar zuulzuul00000000000000heat-10.0.2/heat/tests/convergence/scenarios/update_interrupt_create.py0000666000175000017500000000124113343562340026402 0ustar zuulzuul00000000000000def check_resource_count(expected_count):
    test.assertEqual(expected_count, len(reality.all_resources()))

example_template = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({'a': '4alpha'}, ['A']),
    'C': RsrcDef({'a': 'foo'}, ['B']),
    'D': RsrcDef({'a': 'bar'}, ['C']),
})
engine.create_stack('foo', example_template)
engine.noop(1)

example_template2 = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({'a': '4alpha'}, ['A']),
    'C': RsrcDef({'a': 'blarg'}, ['B']),
    'D': RsrcDef({'a': 'wibble'}, ['C']),
})
engine.update_stack('foo', example_template2)
engine.call(check_resource_count, 2)
engine.noop(11)
engine.call(verify, example_template2)
heat-10.0.2/heat/tests/convergence/scenarios/update_add_rollback.py0000666000175000017500000000241513343562340025430 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

example_template = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)

example_template2 = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
    'F': RsrcDef({}, ['A']),
})
engine.update_stack('foo', example_template2)
engine.noop(4)
engine.rollback_stack('foo')
engine.noop(8)
engine.call(verify, example_template)
heat-10.0.2/heat/tests/convergence/scenarios/update_user_replace_rollback.py0000666000175000017500000000257313343562340027356 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

example_template = Template({
    'A': RsrcDef({'a': 'initial'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)

example_template_updated = Template({
    'A': RsrcDef({'a': 'updated'}, []),
    'B': RsrcDef({}, []),
    'newC': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('newC')}, []),
    'E': RsrcDef({'ca': GetAtt('newC', '!a')}, []),
})
engine.update_stack('foo', example_template_updated)
engine.noop(3)
engine.rollback_stack('foo')
engine.noop(12)
engine.call(verify, example_template)

engine.delete_stack('foo')
engine.noop(6)
engine.call(verify, Template({}))
heat-10.0.2/heat/tests/convergence/scenarios/update_replace_rollback.py0000666000175000017500000000301613343562340026311 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

def check_c_count(expected_count):
    test.assertEqual(expected_count,
                     len(reality.resources_by_logical_name('C')))

example_template = Template({
    'A': RsrcDef({'a': 'initial'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)

example_template2 = Template({
    'A': RsrcDef({'a': 'updated'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.update_stack('foo', example_template2)
engine.noop(4)
engine.rollback_stack('foo')
engine.call(check_c_count, 2)
engine.noop(11)
engine.call(verify, example_template)

engine.delete_stack('foo')
engine.noop(12)
engine.call(verify, Template({}))
heat-10.0.2/heat/tests/convergence/scenarios/basic_create_rollback.py0000666000175000017500000000164613343562340025747 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

example_template = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(3)
engine.rollback_stack('foo')
engine.noop(6)
engine.call(verify, Template())
heat-10.0.2/heat/tests/convergence/scenarios/disjoint_create.py0000666000175000017500000000154513343562340024636 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

example_template = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)
heat-10.0.2/heat/tests/convergence/scenarios/update_replace_missed_cleanup.py0000666000175000017500000000351413343562340027516 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

def check_c_count(expected_count):
    test.assertEqual(expected_count,
                     len(reality.resources_by_logical_name('C')))

example_template = Template({
    'A': RsrcDef({'a': 'initial'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)

example_template_shrunk = Template({
    'A': RsrcDef({'a': 'updated'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.update_stack('foo', example_template_shrunk)
engine.noop(7)

example_template_long = Template({
    'A': RsrcDef({'a': 'updated'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
    'F': RsrcDef({}, ['D', 'E']),
})
engine.update_stack('foo', example_template_long)
engine.call(check_c_count, 2)
engine.noop(11)
engine.call(verify, example_template_long)

engine.delete_stack('foo')
engine.noop(12)
engine.call(verify, Template({}))
heat-10.0.2/heat/tests/convergence/scenarios/update_user_replace_rollback_update.py0000666000175000017500000000340513343562340030713 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

example_template = Template({
    'A': RsrcDef({'a': 'initial'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a'), 'b': 'val1'}, []),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)

example_template_updated = Template({
    'A': RsrcDef({'a': 'updated'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a'), 'b': 'val1'}, []),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.update_stack('foo', example_template_updated)
engine.noop(3)
engine.rollback_stack('foo')
engine.noop(12)
engine.call(verify, example_template)

example_template_final = Template({
    'A': RsrcDef({'a': 'initial'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a'), 'b': 'val2'}, []),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.update_stack('foo', example_template_final)
engine.noop(3)
engine.call(verify, example_template_final)
engine.noop(4)
engine.delete_stack('foo')
engine.noop(6)
engine.call(verify, Template({}))
heat-10.0.2/heat/tests/convergence/scenarios/basic_create.py0000666000175000017500000000160013343562340024064 0ustar zuulzuul00000000000000
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. example_template = Template({ 'A': RsrcDef({}, []), 'B': RsrcDef({}, []), 'C': RsrcDef({'a': '4alpha'}, ['A', 'B']), 'D': RsrcDef({'c': GetRes('C')}, []), 'E': RsrcDef({'ca': GetAtt('C', 'a')}, []), }) engine.create_stack('foo', example_template) engine.noop(5) engine.call(verify, example_template) heat-10.0.2/heat/tests/convergence/scenarios/update_remove_rollback.py0000666000175000017500000000306313343562340026175 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. b_uuid = None def store_b_uuid(): global b_uuid b_uuid = next(iter(reality.resources_by_logical_name('B'))).uuid def check_b_not_replaced(): test.assertEqual(b_uuid, next(iter(reality.resources_by_logical_name('B'))).uuid) test.assertIsNotNone(b_uuid) example_template = Template({ 'A': RsrcDef({}, []), 'B': RsrcDef({}, []), 'C': RsrcDef({'a': '4alpha'}, ['A', 'B']), 'D': RsrcDef({'c': GetRes('C')}, []), 'E': RsrcDef({'ca': GetAtt('C', 'a')}, []), }) engine.create_stack('foo', example_template) engine.noop(5) engine.call(verify, example_template) engine.call(store_b_uuid) example_template2 = Template({ 'A': RsrcDef({}, []), 'C': RsrcDef({'a': '4alpha'}, ['A']), 'D': RsrcDef({'c': GetRes('C')}, []), 'E': RsrcDef({'ca': GetAtt('C', 'a')}, []), }) engine.update_stack('foo', example_template2) engine.noop(2) engine.rollback_stack('foo') engine.noop(10) engine.call(verify, example_template) engine.call(check_b_not_replaced) heat-10.0.2/heat/tests/convergence/scenarios/update_user_replace.py0000666000175000017500000000410713343562340025500 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
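
# This scenario exercises user-driven resource replacement: the update
# renames 'C' to 'newC' (and repoints the references in 'D' and 'E'), so
# the original 'C' must be replaced rather than updated in place.
# store_c_uuid()/check_c_replaced() below confirm that 'newC' ends up
# with a different physical UUID than the 'C' it replaces.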
c_uuid = None


def store_c_uuid():
    global c_uuid
    c_uuid = next(iter(reality.resources_by_logical_name('C'))).uuid


def check_c_replaced():
    test.assertNotEqual(
        c_uuid,
        next(iter(reality.resources_by_logical_name('newC'))).uuid)
    test.assertIsNotNone(c_uuid)


example_template = Template({
    'A': RsrcDef({'a': 'initial'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)
engine.call(store_c_uuid)

example_template_updated = Template({
    'A': RsrcDef({'a': 'updated'}, []),
    'B': RsrcDef({}, []),
    'newC': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('newC')}, []),
    'E': RsrcDef({'ca': GetAtt('newC', '!a')}, []),
})
engine.update_stack('foo', example_template_updated)
engine.noop(11)
engine.call(verify, example_template_updated)

example_template_long = Template({
    'A': RsrcDef({'a': 'updated'}, []),
    'B': RsrcDef({}, []),
    'newC': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('newC')}, []),
    'E': RsrcDef({'ca': GetAtt('newC', '!a')}, []),
    'F': RsrcDef({}, ['D', 'E']),
})
engine.update_stack('foo', example_template_long)
engine.noop(12)
engine.call(verify, example_template_long)
engine.call(check_c_replaced)

engine.delete_stack('foo')
engine.noop(6)
engine.call(verify, Template({}))
heat-10.0.2/heat/tests/convergence/scenarios/update_replace_invert_deps.py0000666000175000017500000000261613343562340027047 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

def check_resource_counts(count_map):
    for name, count in count_map.items():
        test.assertEqual(count,
                         len(list(reality.resources_by_logical_name(name))))


example_template = Template({
    'A': RsrcDef({'!a': 'initial'}, []),
    'B': RsrcDef({'!b': 'first'}, ['A']),
})
engine.create_stack('foo', example_template)
engine.noop(4)
engine.call(verify, example_template)

example_template_inverted = Template({
    'A': RsrcDef({'!a': 'updated'}, ['B']),
    'B': RsrcDef({'!b': 'second'}, []),
})
engine.update_stack('foo', example_template_inverted)
engine.noop(4)
engine.call(check_resource_counts, {'A': 2, 'B': 1})
engine.noop(2)
engine.call(verify, example_template_inverted)
engine.call(check_resource_counts, {'A': 1, 'B': 1})

engine.delete_stack('foo')
engine.noop(3)
engine.call(verify, Template({}))
heat-10.0.2/heat/tests/convergence/scenarios/update_remove.py0000666000175000017500000000222413343562340024322 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

example_template = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)

example_template2 = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
})
engine.update_stack('foo', example_template2)
engine.noop(9)
engine.call(verify, example_template2)
heat-10.0.2/heat/tests/convergence/scenarios/update_replace_missed_cleanup_delete.py0000666000175000017500000000270613343562340031042 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

def check_c_count(expected_count):
    test.assertEqual(expected_count,
                     len(reality.resources_by_logical_name('C')))


example_template = Template({
    'A': RsrcDef({'a': 'initial'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)

example_template_shrunk = Template({
    'A': RsrcDef({'a': 'updated'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.update_stack('foo', example_template_shrunk)
engine.noop(7)

engine.delete_stack('foo')
engine.call(check_c_count, 2)
engine.noop(11)
engine.call(verify, Template({}))
heat-10.0.2/heat/tests/convergence/scenarios/update_add_rollback_early.py0000666000175000017500000000241513343562340026624 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
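
# This scenario starts an update that adds a new resource 'F' (depending
# on 'D'), then issues a rollback after only four engine steps, while the
# update is still in flight. The final verify checks that the stack
# converges back to the original five-resource template.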
example_template = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)

example_template2 = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
    'F': RsrcDef({}, ['D']),
})
engine.update_stack('foo', example_template2)
engine.noop(4)
engine.rollback_stack('foo')
engine.noop(8)
engine.call(verify, example_template)
heat-10.0.2/heat/tests/convergence/scenarios/update_add.py0000666000175000017500000000234713343562340023563 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

example_template = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)

example_template2 = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
    'F': RsrcDef({}, ['D', 'E']),
})
engine.update_stack('foo', example_template2)
engine.noop(11)
engine.call(verify, example_template2)
heat-10.0.2/heat/tests/convergence/scenarios/basic_update_delete.py0000666000175000017500000000264113343562340025433 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
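
# This scenario interrupts a create after two engine steps with an update
# that adds 'F', then deletes the stack once the update has converged.
# check_resource_count() asserts that three physical resources exist at
# the point where the update is issued.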
def check_resource_count(expected_count):
    test.assertEqual(expected_count, len(reality.all_resources()))


example_template = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(2)

example_template2 = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
    'F': RsrcDef({}, ['D', 'E']),
})
engine.update_stack('foo', example_template2)
engine.call(check_resource_count, 3)
engine.noop(11)
engine.call(verify, example_template2)

engine.delete_stack('foo')
engine.noop(6)
engine.call(verify, Template({}))
heat-10.0.2/heat/tests/convergence/scenarios/create_early_delete.py0000666000175000017500000000164713343562340025454 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

example_template = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(2)
engine.delete_stack('foo')
engine.noop(6)
engine.call(verify, Template({}))
heat-10.0.2/heat/tests/convergence/scenarios/update_add_concurrent.py0000666000175000017500000000252413343562340026022 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

def check_resource_count(expected_count):
    test.assertEqual(expected_count, len(reality.all_resources()))


example_template = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(2)

example_template2 = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
    'F': RsrcDef({}, ['D', 'E']),
})
engine.update_stack('foo', example_template2)
engine.call(check_resource_count, 3)
engine.noop(11)
engine.call(verify, example_template2)
heat-10.0.2/heat/tests/convergence/scenarios/update_replace.py0000666000175000017500000000406213343562340024442 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

c_uuid = None


def store_c_uuid():
    global c_uuid
    c_uuid = next(iter(reality.resources_by_logical_name('C'))).uuid


def check_c_replaced():
    test.assertNotEqual(
        c_uuid,
        next(iter(reality.resources_by_logical_name('C'))).uuid)
    test.assertIsNotNone(c_uuid)


example_template = Template({
    'A': RsrcDef({'a': 'initial'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)
engine.call(store_c_uuid)

example_template_updated = Template({
    'A': RsrcDef({'a': 'updated'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
})
engine.update_stack('foo', example_template_updated)
engine.noop(11)
engine.call(verify, example_template_updated)

example_template_long = Template({
    'A': RsrcDef({'a': 'updated'}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'!a': GetAtt('A', 'a')}, ['B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
    'F': RsrcDef({}, ['D', 'E']),
})
engine.update_stack('foo', example_template_long)
engine.noop(12)
engine.call(verify, example_template_long)
engine.call(check_c_replaced)

engine.delete_stack('foo')
engine.noop(6)
engine.call(verify, Template({}))
heat-10.0.2/heat/tests/convergence/scenarios/multiple_update.py0000666000175000017500000000314713343562340024665 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
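
# This scenario applies two successive updates to a converged stack:
# first shrinking the template (dropping 'E'), then extending it again
# with 'E' and a new 'F' that depends on both 'D' and 'E'. Each phase is
# verified against reality before the stack is finally deleted.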
example_template = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
})
engine.create_stack('foo', example_template)
engine.noop(5)
engine.call(verify, example_template)

example_template_shrunk = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
})
engine.update_stack('foo', example_template_shrunk)
engine.noop(10)
engine.call(verify, example_template_shrunk)

example_template_long = Template({
    'A': RsrcDef({}, []),
    'B': RsrcDef({}, []),
    'C': RsrcDef({'a': '4alpha'}, ['A', 'B']),
    'D': RsrcDef({'c': GetRes('C')}, []),
    'E': RsrcDef({'ca': GetAtt('C', 'a')}, []),
    'F': RsrcDef({}, ['D', 'E']),
})
engine.update_stack('foo', example_template_long)
engine.noop(12)
engine.call(verify, example_template_long)

engine.delete_stack('foo')
engine.noop(6)
engine.call(verify, Template({}))
heat-10.0.2/heat/tests/convergence/test_converge.py0000666000175000017500000000324413343562340022347 0ustar zuulzuul00000000000000
#!/usr/bin/env python
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from heat.engine import resource
from heat.tests import common
from heat.tests.convergence.framework import fake_resource
from heat.tests.convergence.framework import processes
from heat.tests.convergence.framework import scenario
from heat.tests.convergence.framework import testutils
from oslo_config import cfg


class ScenarioTest(common.HeatTestCase):

    scenarios = [(name, {'name': name, 'path': path})
                 for name, path in scenario.list_all()]

    def setUp(self):
        super(ScenarioTest, self).setUp()
        resource._register_class('OS::Heat::TestResource',
                                 fake_resource.TestResource)
        self.procs = processes.Processes()
        po = self.patch("heat.rpc.worker_client.WorkerClient.check_resource")
        po.side_effect = self.procs.worker.check_resource
        cfg.CONF.set_default('convergence_engine', True)

    def test_scenario(self):
        self.procs.clear()
        runner = scenario.Scenario(self.name, self.path)
        runner(self.procs.event_loop,
               **testutils.scenario_globals(self.procs, self))
heat-10.0.2/heat/tests/test_crypt.py0000666000175000017500000000603413343562340017402 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
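
# Unit tests for heat.common.crypt. The round-trip helpers below encrypt
# every value in a dict and decrypt it again; roughly (a sketch, using
# only the calls exercised in these tests):
#
#     key = '767c3ed056cbaa3b9dfedb8c6f825bf0'   # must be 32 characters
#     enc = crypt.encrypted_dict({'p1': u'happy'}, key)
#     assert crypt.decrypted_dict(enc, key) == {'p1': u'happy'}
#
# Decrypting with any other key raises exception.InvalidEncryptionKey.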
from oslo_config import cfg
import six

from heat.common import config
from heat.common import crypt
from heat.common import exception
from heat.tests import common


class CryptTest(common.HeatTestCase):

    def test_fernet_key(self):
        key = 'x' * 16
        method, result = crypt.encrypt('foo', key)
        self.assertEqual('cryptography_decrypt_v1', method)
        self.assertIsNotNone(result)

    def test_init_auth_encryption_key_length(self):
        """Test the length check of auth_encryption_key in the config file."""
        cfg.CONF.set_override('auth_encryption_key', 'abcdefghijklma')
        err = self.assertRaises(exception.Error,
                                config.startup_sanity_check)
        exp_msg = ('heat.conf misconfigured, auth_encryption_key '
                   'must be 32 characters')
        self.assertIn(exp_msg, six.text_type(err))

    def _test_encrypt_decrypt_dict(self, encryption_key=None):
        data = {'p1': u'happy',
                '2': [u'a', u'little', u'blue'],
                'p3': {u'really': u'exited', u'ok int': 9},
                '4': u'',
                'p5': True,
                '6': 7}
        encrypted_data = crypt.encrypted_dict(data, encryption_key)
        for k in encrypted_data:
            self.assertEqual('cryptography_decrypt_v1',
                             encrypted_data[k][0])
            self.assertEqual(2, len(encrypted_data[k]))
        # the keys remain the same
        self.assertEqual(set(data), set(encrypted_data))

        decrypted_data = crypt.decrypted_dict(encrypted_data, encryption_key)
        self.assertEqual(data, decrypted_data)

    def test_encrypt_decrypt_dict_custom_enc_key(self):
        self._test_encrypt_decrypt_dict('just for testing not so great re')

    def test_encrypt_decrypt_dict_default_enc_key(self):
        self._test_encrypt_decrypt_dict()

    def test_decrypt_dict_invalid_key(self):
        data = {'p1': u'happy',
                '2': [u'a', u'little', u'blue'],
                '6': 7}
        encrypted_data = crypt.encrypted_dict(
            data, '767c3ed056cbaa3b9dfedb8c6f825bf0')
        ex = self.assertRaises(exception.InvalidEncryptionKey,
                               crypt.decrypted_dict,
                               encrypted_data,
                               '767c3ed056cbaa3b9dfedb8c6f825bf1')
        self.assertEqual('Can not decrypt data with the auth_encryption_key '
                         'in heat config.', six.text_type(ex))
heat-10.0.2/heat/tests/test_template.py0000666000175000017500000020574213343562352020066 0ustar zuulzuul00000000000000
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
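
# Tests for heat.engine.template: version detection, section validation,
# and the CFN intrinsic functions (Fn::FindInMap, Fn::Select, Fn::Join,
# Fn::Replace, the Fn::Equals/Not/And/Or condition functions, and
# Fn::ResourceFacade), using the fixture templates defined below.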
import copy
import hashlib
import json

import fixtures
import six
from stevedore import extension

from heat.common import exception
from heat.common import template_format
from heat.engine.cfn import functions as cfn_funcs
from heat.engine.cfn import parameters as cfn_p
from heat.engine.cfn import template as cfn_t
from heat.engine.clients.os import nova
from heat.engine import environment
from heat.engine import function
from heat.engine.hot import template as hot_t
from heat.engine import node_data
from heat.engine import rsrc_defn
from heat.engine import stack
from heat.engine import stk_defn
from heat.engine import template
from heat.tests import common
from heat.tests.openstack.nova import fakes as fakes_nova
from heat.tests import utils

mapping_template = template_format.parse('''{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Mappings" : {
    "ValidMapping" : {
      "TestKey" : { "TestValue" : "wibble" }
    },
    "InvalidMapping" : {
      "ValueList" : [ "foo", "bar" ],
      "ValueString" : "baz"
    },
    "MapList" : [ "foo", { "bar" : "baz" } ],
    "MapString" : "foobar"
  }
}''')

empty_template = template_format.parse('''{
  "HeatTemplateFormatVersion" : "2012-12-12",
}''')

aws_empty_template = template_format.parse('''{
  "AWSTemplateFormatVersion" : "2010-09-09",
}''')

parameter_template = template_format.parse('''{
  "HeatTemplateFormatVersion" : "2012-12-12",
  "Parameters" : {
    "foo" : { "Type" : "String" },
    "blarg" : { "Type" : "String", "Default" : "quux" }
  }
}''')

resource_template = template_format.parse('''{
  "HeatTemplateFormatVersion" : "2012-12-12",
  "Resources" : {
    "foo" : { "Type" : "GenericResourceType" },
    "blarg" : { "Type" : "GenericResourceType" }
  }
}''')


def join(raw):
    tmpl = template.Template(mapping_template)
    return function.resolve(tmpl.parse(None, raw))


class DummyClass(object):
    metadata = None

    def metadata_get(self):
        return self.metadata

    def metadata_set(self, metadata):
        self.metadata = metadata


class TemplatePluginFixture(fixtures.Fixture):
    def __init__(self, templates=None):
        templates = templates or {}
        super(TemplatePluginFixture, self).__init__()
        self.templates = [extension.Extension(k, None, v, None)
                          for (k, v) in templates.items()]

    def _get_template_extension_manager(self):
        return extension.ExtensionManager.make_test_instance(self.templates)

    def setUp(self):
        super(TemplatePluginFixture, self).setUp()

        def clear_template_classes():
            template._template_classes = None

        clear_template_classes()
        self.useFixture(fixtures.MockPatchObject(
            template,
            '_get_template_extension_manager',
            new=self._get_template_extension_manager))
        self.addCleanup(clear_template_classes)


class TestTemplatePluginManager(common.HeatTestCase):
    def test_template_NEW_good(self):
        class NewTemplate(template.Template):
            SECTIONS = (VERSION, MAPPINGS, CONDITIONS, PARAMETERS) = (
                'NEWTemplateFormatVersion',
                '__undefined__',
                'conditions',
                'parameters')
            RESOURCES = 'thingies'

            def param_schemata(self, param_defaults=None):
                pass

            def get_section_name(self, section):
                pass

            def parameters(self, stack_identifier, user_params,
                           param_defaults=None):
                pass

            def resource_definitions(self, stack):
                pass

            def add_resource(self, definition, name=None):
                pass

            def outputs(self, stack):
                pass

            def __getitem__(self, section):
                return {}

        class NewTemplatePrint(function.Function):
            def result(self):
                return 'always this'

        self.useFixture(TemplatePluginFixture(
            {'NEWTemplateFormatVersion.2345-01-01': NewTemplate}))

        t = {'NEWTemplateFormatVersion': '2345-01-01'}
        tmpl = template.Template(t)
        err = tmpl.validate()
        self.assertIsNone(err)


class TestTemplateVersion(common.HeatTestCase):

    versions = (('heat_template_version', '2013-05-23'),
                ('HeatTemplateFormatVersion', '2012-12-12'),
                ('AWSTemplateFormatVersion', '2010-09-09'))

    def test_hot_version(self):
        tmpl = {
            'heat_template_version': '2013-05-23',
            'foo': 'bar',
            'parameters': {}
        }
        self.assertEqual(('heat_template_version', '2013-05-23'),
                         template.get_version(tmpl, self.versions))

    def test_cfn_version(self):
        tmpl = {
            'AWSTemplateFormatVersion': '2010-09-09',
            'foo': 'bar',
            'Parameters': {}
        }
        self.assertEqual(('AWSTemplateFormatVersion', '2010-09-09'),
                         template.get_version(tmpl, self.versions))

    def test_heat_cfn_version(self):
        tmpl = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'foo': 'bar',
            'Parameters': {}
        }
        self.assertEqual(('HeatTemplateFormatVersion', '2012-12-12'),
                         template.get_version(tmpl, self.versions))

    def test_missing_version(self):
        tmpl = {
            'foo': 'bar',
            'Parameters': {}
        }
        ex = self.assertRaises(exception.InvalidTemplateVersion,
                               template.get_version, tmpl, self.versions)
        self.assertEqual('The template version is invalid: Template version '
                         'was not provided', six.text_type(ex))

    def test_ambiguous_version(self):
        tmpl = {
            'AWSTemplateFormatVersion': '2010-09-09',
            'HeatTemplateFormatVersion': '2012-12-12',
            'foo': 'bar',
            'Parameters': {}
        }
        self.assertRaises(exception.InvalidTemplateVersion,
                          template.get_version, tmpl, self.versions)


class ParserTest(common.HeatTestCase):

    def test_list(self):
        raw = ['foo', 'bar', 'baz']
        parsed = join(raw)
        for i in six.moves.xrange(len(raw)):
            self.assertEqual(raw[i], parsed[i])
        self.assertIsNot(raw, parsed)

    def test_dict(self):
        raw = {'foo': 'bar', 'blarg': 'wibble'}
        parsed = join(raw)
        for k in raw:
            self.assertEqual(raw[k], parsed[k])
        self.assertIsNot(raw, parsed)

    def test_dict_list(self):
        raw = {'foo': ['bar', 'baz'], 'blarg': 'wibble'}
        parsed = join(raw)
        self.assertEqual(raw['blarg'], parsed['blarg'])
        for i in six.moves.xrange(len(raw['foo'])):
            self.assertEqual(raw['foo'][i], parsed['foo'][i])
        self.assertIsNot(raw, parsed)
        self.assertIsNot(raw['foo'], parsed['foo'])

    def test_list_dict(self):
        raw = [{'foo': 'bar', 'blarg': 'wibble'}, 'baz', 'quux']
        parsed = join(raw)
        for i in six.moves.xrange(1, len(raw)):
            self.assertEqual(raw[i], parsed[i])
        for k in raw[0]:
            self.assertEqual(raw[0][k], parsed[0][k])
        self.assertIsNot(raw, parsed)
        self.assertIsNot(raw[0], parsed[0])

    def test_join(self):
        raw = {'Fn::Join': [' ', ['foo', 'bar', 'baz']]}
        self.assertEqual('foo bar baz', join(raw))

    def test_join_none(self):
        raw = {'Fn::Join': [' ', ['foo', None, 'baz']]}
        self.assertEqual('foo baz', join(raw))

    def test_join_list(self):
        raw = [{'Fn::Join': [' ', ['foo', 'bar', 'baz']]}, 'blarg', 'wibble']
        parsed = join(raw)
        self.assertEqual('foo bar baz', parsed[0])
        for i in six.moves.xrange(1, len(raw)):
            self.assertEqual(raw[i], parsed[i])
        self.assertIsNot(raw, parsed)

    def test_join_dict_val(self):
        raw = {'quux': {'Fn::Join': [' ', ['foo', 'bar', 'baz']]},
               'blarg': 'wibble'}
        parsed = join(raw)
        self.assertEqual('foo bar baz', parsed['quux'])
        self.assertEqual(raw['blarg'], parsed['blarg'])
        self.assertIsNot(raw, parsed)


class TestTemplateConditionParser(common.HeatTestCase):

    def setUp(self):
        super(TestTemplateConditionParser, self).setUp()
        self.ctx = utils.dummy_context()
        t = {
            'heat_template_version': '2016-10-14',
            'parameters': {
                'env_type': {
                    'type': 'string',
                    'default': 'test'
                }
            },
            'conditions': {
                'prod_env': {
                    'equals': [{'get_param': 'env_type'}, 'prod']}},
            'resources': {
                'r1': {
                    'type': 'GenericResourceType',
                    'condition': 'prod_env'
                }
            },
            'outputs': {
                'foo': {
                    'condition': 'prod_env',
                    'value': 'show me'
                }
            }
        }
        self.tmpl = template.Template(t)

    def test_conditions_with_non_supported_functions(self):
        t = {
            'heat_template_version': '2016-10-14',
            'parameters': {
                'env_type': {
                    'type': 'string',
                    'default': 'test'
                }
            },
            'conditions': {
                'prod_env': {
                    'equals': [{'get_param': 'env_type'},
                               {'get_attr': [None, 'att']}]}}}
        # test with get_attr in equals
        tmpl = template.Template(t)
        stk = stack.Stack(self.ctx, 'test_condition_with_get_attr_func', tmpl)
        ex = self.assertRaises(exception.StackValidationFailed,
                               tmpl.conditions, stk)
        self.assertIn('"get_attr" is invalid', six.text_type(ex))
        self.assertIn('conditions.prod_env.equals[1].get_attr',
                      six.text_type(ex))

        # test with get_resource in top level of a condition
        tmpl.t['conditions']['prod_env'] = {'get_resource': 'R1'}
        stk = stack.Stack(self.ctx, 'test_condition_with_get_attr_func', tmpl)
        ex = self.assertRaises(exception.StackValidationFailed,
                               tmpl.conditions, stk)
        self.assertIn('"get_resource" is invalid', six.text_type(ex))

        # test with get_attr in top level of a condition
        tmpl.t['conditions']['prod_env'] = {'get_attr': [None, 'att']}
        stk = stack.Stack(self.ctx, 'test_condition_with_get_attr_func', tmpl)
        ex = self.assertRaises(exception.StackValidationFailed,
                               tmpl.conditions, stk)
        self.assertIn('"get_attr" is invalid', six.text_type(ex))

    def test_condition_resolved_not_boolean(self):
        t = {
            'heat_template_version': '2016-10-14',
            'parameters': {
                'env_type': {
                    'type': 'string',
                    'default': 'test'
                }
            },
            'conditions': {
                'prod_env': {'get_param': 'env_type'}}}

        # the condition resolves to a string, not a boolean
        tmpl = template.Template(t)
        stk = stack.Stack(self.ctx, 'test_condition_not_boolean', tmpl)
        conditions = tmpl.conditions(stk)
        ex = self.assertRaises(exception.StackValidationFailed,
                               conditions.is_enabled, 'prod_env')
        self.assertIn('The definition of condition "prod_env" is invalid',
                      six.text_type(ex))

    def test_condition_reference_condition(self):
        t = {
            'heat_template_version': '2016-10-14',
            'parameters': {
                'env_type': {
                    'type': 'string',
                    'default': 'test'
                }
            },
            'conditions': {
                'prod_env': {'equals': [{'get_param': 'env_type'}, 'prod']},
                'test_env': {'not': 'prod_env'},
                'prod_or_test_env': {'or': ['prod_env', 'test_env']},
                'prod_and_test_env': {'and': ['prod_env', 'test_env']},
            }}

        # conditions may refer to other conditions by name
        tmpl = template.Template(t)
        stk = stack.Stack(self.ctx, 'test_condition_reference', tmpl)
        conditions = tmpl.conditions(stk)
        self.assertFalse(conditions.is_enabled('prod_env'))
        self.assertTrue(conditions.is_enabled('test_env'))
        self.assertTrue(conditions.is_enabled('prod_or_test_env'))
        self.assertFalse(conditions.is_enabled('prod_and_test_env'))

    def test_get_res_condition_invalid(self):
        tmpl = copy.deepcopy(self.tmpl)
        # test condition name is invalid
        stk = stack.Stack(self.ctx, 'test_res_invalid_condition', tmpl)
        conds = tmpl.conditions(stk)
        ex = self.assertRaises(ValueError, conds.is_enabled, 'invalid_cd')
        self.assertIn('Invalid condition "invalid_cd"', six.text_type(ex))
        # test condition name is not a string
        ex = self.assertRaises(ValueError, conds.is_enabled, 111)
        self.assertIn('Invalid condition "111"', six.text_type(ex))

    def test_res_condition_using_boolean(self):
        tmpl = copy.deepcopy(self.tmpl)
        # test condition name is a boolean
        stk = stack.Stack(self.ctx, 'test_res_cd_boolean', tmpl)
        conds = tmpl.conditions(stk)
        self.assertTrue(conds.is_enabled(True))
        self.assertFalse(conds.is_enabled(False))

    def test_parse_output_condition_invalid(self):
        stk = stack.Stack(self.ctx, 'test_output_invalid_condition',
                          self.tmpl)

        # test condition name is invalid
        self.tmpl.t['outputs']['foo']['condition'] = 'invalid_cd'
        ex = self.assertRaises(exception.StackValidationFailed,
                               lambda: stk.outputs)
        self.assertIn('Invalid condition "invalid_cd"', six.text_type(ex))
        self.assertIn('outputs.foo.condition', six.text_type(ex))

        # test condition name is not a string
        self.tmpl.t['outputs']['foo']['condition'] = 222
        ex = self.assertRaises(exception.StackValidationFailed,
                               lambda: stk.outputs)
        self.assertIn('Invalid condition "222"', six.text_type(ex))
        self.assertIn('outputs.foo.condition', six.text_type(ex))

    def test_conditions_circular_ref(self):
        t = {
            'heat_template_version': '2016-10-14',
            'parameters': {
                'env_type': {
                    'type': 'string',
                    'default': 'test'
                }
            },
            'conditions': {
                'first_cond': {'not': 'second_cond'},
                'second_cond': {'not': 'third_cond'},
                'third_cond': {'not': 'first_cond'},
            }
        }
        tmpl = template.Template(t)
        stk = stack.Stack(self.ctx, 'test_condition_circular_ref', tmpl)
        conds = tmpl.conditions(stk)
        ex = self.assertRaises(exception.StackValidationFailed,
                               conds.is_enabled, 'first_cond')
        self.assertIn('Circular definition for condition "first_cond"',
                      six.text_type(ex))

    def test_parse_output_condition_boolean(self):
        t = copy.deepcopy(self.tmpl.t)
        t['outputs']['foo']['condition'] = True
        stk = stack.Stack(self.ctx, 'test_output_cd_boolean',
                          template.Template(t))
        self.assertEqual('show me', stk.outputs['foo'].get_value())

        t = copy.deepcopy(self.tmpl.t)
        t['outputs']['foo']['condition'] = False
        stk = stack.Stack(self.ctx, 'test_output_cd_boolean',
                          template.Template(t))
        self.assertIsNone(stk.outputs['foo'].get_value())

    def test_parse_output_condition_function(self):
        t = copy.deepcopy(self.tmpl.t)
        t['outputs']['foo']['condition'] = {'not': 'prod_env'}
        stk = stack.Stack(self.ctx, 'test_output_cd_function',
                          template.Template(t))
        self.assertEqual('show me', stk.outputs['foo'].get_value())


class TestTemplateValidate(common.HeatTestCase):

    def test_template_validate_cfn_check_t_digest(self):
        t = {
            'AWSTemplateFormatVersion': '2010-09-09',
            'Description': 'foo',
            'Parameters': {},
            'Mappings': {},
            'Resources': {
                'server': {
                    'Type': 'OS::Nova::Server'
                }
            },
            'Outputs': {},
        }

        tmpl = template.Template(t)
        self.assertIsNone(tmpl.t_digest)
        tmpl.validate()
        self.assertEqual(
            hashlib.sha256(six.text_type(t).encode('utf-8')).hexdigest(),
            tmpl.t_digest, 'invalid template digest')

    def test_template_validate_cfn_good(self):
        t = {
            'AWSTemplateFormatVersion': '2010-09-09',
            'Description': 'foo',
            'Parameters': {},
            'Mappings': {},
            'Resources': {
                'server': {
                    'Type': 'OS::Nova::Server'
                }
            },
            'Outputs': {},
        }

        tmpl = template.Template(t)
        err = tmpl.validate()
        self.assertIsNone(err)

        # test with alternate version key
        t = {
            'HeatTemplateFormatVersion': '2012-12-12',
            'Description': 'foo',
            'Parameters': {},
            'Mappings': {},
            'Resources': {
                'server': {
                    'Type': 'OS::Nova::Server'
                }
            },
            'Outputs': {},
        }

        tmpl = template.Template(t)
        err = tmpl.validate()
        self.assertIsNone(err)

    def test_template_validate_cfn_bad_section(self):
        t = {
            'AWSTemplateFormatVersion': '2010-09-09',
            'Description': 'foo',
            'Parameteers': {},
            'Mappings': {},
            'Resources': {
                'server': {
                    'Type': 'OS::Nova::Server'
                }
            },
            'Outputs': {},
        }

        tmpl = template.Template(t)
        err = self.assertRaises(exception.InvalidTemplateSection,
                                tmpl.validate)
        self.assertIn('Parameteers', six.text_type(err))

    def test_template_validate_cfn_empty(self):
        t = template_format.parse('''
            AWSTemplateFormatVersion: 2010-09-09
            Parameters:
            Resources:
            Outputs:
            ''')
        tmpl = template.Template(t)
        err = tmpl.validate()
        self.assertIsNone(err)

    def test_get_resources_good(self):
        """Test get resources successful."""
        t = template_format.parse('''
            AWSTemplateFormatVersion: 2010-09-09
            Resources:
              resource1:
                Type: AWS::EC2::Instance
                Properties:
                  property1: value1
                Metadata:
                  foo: bar
                DependsOn: dummy
                DeletionPolicy: dummy
                UpdatePolicy:
                  foo: bar
            ''')
        expected = {'resource1': {'Type': 'AWS::EC2::Instance',
                                  'Properties': {'property1': 'value1'},
                                  'Metadata': {'foo': 'bar'},
                                  'DependsOn': 'dummy',
                                  'DeletionPolicy': 'dummy',
                                  'UpdatePolicy': {'foo': 'bar'}}}
        tmpl = template.Template(t)
        self.assertEqual(expected, tmpl[tmpl.RESOURCES])

    def test_get_resources_bad_no_data(self):
        """Test get resources without any mapping."""
        t = template_format.parse('''
            AWSTemplateFormatVersion: 2010-09-09
            Resources:
              resource1:
            ''')
        tmpl = template.Template(t)
        error = self.assertRaises(exception.StackValidationFailed,
                                  tmpl.validate)
        self.assertEqual('Each Resource must contain a Type key.',
                         six.text_type(error))

    def test_get_resources_no_type(self):
        """Test get resources with invalid key."""
        t = template_format.parse('''
            AWSTemplateFormatVersion: 2010-09-09
            Resources:
              resource1:
                Properties:
                  property1: value1
                Metadata:
                  foo: bar
                DependsOn: dummy
                DeletionPolicy: dummy
                UpdatePolicy:
                  foo: bar
            ''')
        tmpl = template.Template(t)
        error = self.assertRaises(exception.StackValidationFailed,
                                  tmpl.validate)
        self.assertEqual('Each Resource must contain a Type key.',
                         six.text_type(error))

    def test_template_validate_hot_check_t_digest(self):
        t = {
            'heat_template_version': '2015-04-30',
            'description': 'foo',
            'parameters': {},
            'resources': {
                'server': {
                    'type': 'OS::Nova::Server'
                }
            },
            'outputs': {},
        }

        tmpl = template.Template(t)
        self.assertIsNone(tmpl.t_digest)
        tmpl.validate()
        self.assertEqual(hashlib.sha256(
            six.text_type(t).encode('utf-8')).hexdigest(),
            tmpl.t_digest, 'invalid template digest')

    def test_template_validate_hot_good(self):
        t = {
            'heat_template_version': '2013-05-23',
            'description': 'foo',
            'parameters': {},
            'resources': {
                'server': {
                    'type': 'OS::Nova::Server'
                }
            },
            'outputs': {},
        }
        tmpl = template.Template(t)
        err = tmpl.validate()
        self.assertIsNone(err)

    def test_template_validate_hot_bad_section(self):
        t = {
            'heat_template_version': '2013-05-23',
            'description': 'foo',
            'parameteers': {},
            'resources': {
                'server': {
                    'type': 'OS::Nova::Server'
                }
            },
            'outputs': {},
        }
        tmpl = template.Template(t)
        err = self.assertRaises(exception.InvalidTemplateSection,
                                tmpl.validate)
        self.assertIn('parameteers', six.text_type(err))


class TemplateTest(common.HeatTestCase):

    def setUp(self):
        super(TemplateTest, self).setUp()
        self.ctx = utils.dummy_context()

    @staticmethod
    def resolve(snippet, template, stack=None):
        return function.resolve(template.parse(stack and stack.defn,
                                               snippet))

    @staticmethod
    def resolve_condition(snippet, template, stack=None):
        return function.resolve(template.parse_condition(stack and stack.defn,
                                                         snippet))

    def test_defaults(self):
        empty = template.Template(empty_template)
        self.assertNotIn('AWSTemplateFormatVersion', empty)
        self.assertEqual('No description', empty['Description'])
        self.assertEqual({}, empty['Mappings'])
        self.assertEqual({}, empty['Resources'])
        self.assertEqual({}, empty['Outputs'])

    def test_aws_version(self):
        tmpl = template.Template(mapping_template)
        self.assertEqual(('AWSTemplateFormatVersion', '2010-09-09'),
                         tmpl.version)

    def test_heat_version(self):
        tmpl = template.Template(resource_template)
        self.assertEqual(('HeatTemplateFormatVersion', '2012-12-12'),
                         tmpl.version)

    def test_invalid_hot_version(self):
        invalid_hot_version_tmp = template_format.parse(
            '''{
            "heat_template_version" : "2012-12-12",
            }''')
        init_ex = self.assertRaises(exception.InvalidTemplateVersion,
                                    template.Template,
                                    invalid_hot_version_tmp)
        valid_versions = ['2013-05-23', '2014-10-16',
                          '2015-04-30', '2015-10-15', '2016-04-08',
                          '2016-10-14', '2017-02-24', '2017-09-01',
                          '2018-03-02',
                          'newton', 'ocata', 'pike', 'queens']
        ex_error_msg = ('The template version is invalid: '
                        '"heat_template_version: 2012-12-12". '
                        '"heat_template_version" should be one of: %s'
                        % ', '.join(valid_versions))
        self.assertEqual(ex_error_msg, six.text_type(init_ex))

    def test_invalid_version_not_in_hot_versions(self):
        invalid_hot_version_tmp = template_format.parse(
            '''{
            "heat_template_version" : "2012-12-12",
            }''')
        versions = {
            ('heat_template_version', '2013-05-23'): hot_t.HOTemplate20130523,
            ('heat_template_version', '2013-06-23'): hot_t.HOTemplate20130523
        }
        temp_copy = copy.deepcopy(template._template_classes)
        template._template_classes = versions

        init_ex = self.assertRaises(exception.InvalidTemplateVersion,
                                    template.Template,
                                    invalid_hot_version_tmp)
        ex_error_msg = ('The template version is invalid: '
                        '"heat_template_version: 2012-12-12". '
                        '"heat_template_version" should be '
                        'one of: 2013-05-23, 2013-06-23')
        self.assertEqual(ex_error_msg, six.text_type(init_ex))
        template._template_classes = temp_copy

    def test_invalid_aws_version(self):
        invalid_aws_version_tmp = template_format.parse(
            '''{
            "AWSTemplateFormatVersion" : "2012-12-12",
            }''')
        init_ex = self.assertRaises(exception.InvalidTemplateVersion,
                                    template.Template,
                                    invalid_aws_version_tmp)
        ex_error_msg = ('The template version is invalid: '
                        '"AWSTemplateFormatVersion: 2012-12-12". '
                        '"AWSTemplateFormatVersion" should be: 2010-09-09')
        self.assertEqual(ex_error_msg, six.text_type(init_ex))

    def test_invalid_version_not_in_aws_versions(self):
        invalid_aws_version_tmp = template_format.parse(
            '''{
            "AWSTemplateFormatVersion" : "2012-12-12",
            }''')
        versions = {
            ('AWSTemplateFormatVersion', '2010-09-09'): cfn_t.CfnTemplate,
            ('AWSTemplateFormatVersion', '2011-06-23'): cfn_t.CfnTemplate
        }
        temp_copy = copy.deepcopy(template._template_classes)
        template._template_classes = versions
        init_ex = self.assertRaises(exception.InvalidTemplateVersion,
                                    template.Template,
                                    invalid_aws_version_tmp)
        ex_error_msg = ('The template version is invalid: '
                        '"AWSTemplateFormatVersion: 2012-12-12". '
                        '"AWSTemplateFormatVersion" should be '
                        'one of: 2010-09-09, 2011-06-23')
        self.assertEqual(ex_error_msg, six.text_type(init_ex))
        template._template_classes = temp_copy

    def test_invalid_heat_version(self):
        invalid_heat_version_tmp = template_format.parse(
            '''{
            "HeatTemplateFormatVersion" : "2010-09-09",
            }''')
        init_ex = self.assertRaises(exception.InvalidTemplateVersion,
                                    template.Template,
                                    invalid_heat_version_tmp)
        ex_error_msg = ('The template version is invalid: '
                        '"HeatTemplateFormatVersion: 2010-09-09". '
                        '"HeatTemplateFormatVersion" should be: 2012-12-12')
        self.assertEqual(ex_error_msg, six.text_type(init_ex))

    def test_invalid_version_not_in_heat_versions(self):
        invalid_heat_version_tmp = template_format.parse(
            '''{
            "HeatTemplateFormatVersion" : "2010-09-09",
            }''')
        versions = {
            ('HeatTemplateFormatVersion', '2012-12-12'): cfn_t.CfnTemplate,
            ('HeatTemplateFormatVersion', '2014-12-12'): cfn_t.CfnTemplate
        }
        temp_copy = copy.deepcopy(template._template_classes)
        template._template_classes = versions
        init_ex = self.assertRaises(exception.InvalidTemplateVersion,
                                    template.Template,
                                    invalid_heat_version_tmp)
        ex_error_msg = ('The template version is invalid: '
                        '"HeatTemplateFormatVersion: 2010-09-09". '
                        '"HeatTemplateFormatVersion" should be '
                        'one of: 2012-12-12, 2014-12-12')
        self.assertEqual(ex_error_msg, six.text_type(init_ex))
        template._template_classes = temp_copy

    def test_invalid_template(self):
        scanner_error = '''
            1
            Mappings:
              ValidMapping:
                TestKey: TestValue
            '''
        parser_error = '''
            Mappings:
              ValidMapping:
                TestKey: {TestKey1: "Value1" TestKey2: "Value2"}
            '''
        self.assertRaises(ValueError, template_format.parse, scanner_error)
        self.assertRaises(ValueError, template_format.parse, parser_error)

    def test_invalid_section(self):
        tmpl = template.Template({'HeatTemplateFormatVersion': '2012-12-12',
                                  'Foo': ['Bar']})
        self.assertNotIn('Foo', tmpl)

    def test_find_in_map(self):
        tmpl = template.Template(mapping_template)
        stk = stack.Stack(self.ctx, 'test', tmpl)
        find = {'Fn::FindInMap': ["ValidMapping", "TestKey", "TestValue"]}
        self.assertEqual("wibble", self.resolve(find, tmpl, stk))

    def test_find_in_invalid_map(self):
        tmpl = template.Template(mapping_template)
        stk = stack.Stack(self.ctx, 'test', tmpl)
        finds = ({'Fn::FindInMap': ["InvalidMapping", "ValueList", "foo"]},
                 {'Fn::FindInMap': ["InvalidMapping", "ValueString", "baz"]},
                 {'Fn::FindInMap': ["MapList", "foo", "bar"]},
                 {'Fn::FindInMap': ["MapString", "foo", "bar"]})

        for find in finds:
            self.assertRaises((KeyError, TypeError), self.resolve,
                              find, tmpl, stk)

    def test_bad_find_in_map(self):
        tmpl = template.Template(mapping_template)
        stk = stack.Stack(self.ctx, 'test', tmpl)
        finds = ({'Fn::FindInMap': "String"},
                 {'Fn::FindInMap': {"Dict": "String"}},
                 {'Fn::FindInMap': ["ShortList", "foo"]},
                 {'Fn::FindInMap': ["ReallyShortList"]})

        for find in finds:
            self.assertRaises(exception.StackValidationFailed,
                              self.resolve, find, tmpl, stk)

    def test_param_refs(self):
        env = environment.Environment({'foo': 'bar', 'blarg': 'wibble'})
        tmpl = template.Template(parameter_template, env=env)
        stk = stack.Stack(self.ctx, 'test', tmpl)
        p_snippet = {"Ref": "foo"}
        self.assertEqual("bar", self.resolve(p_snippet, tmpl, stk))

    def test_param_ref_missing(self):
        env = environment.Environment({'foo': 'bar'})
        tmpl = template.Template(parameter_template, env=env)
        stk = stack.Stack(self.ctx, 'test', tmpl)
        tmpl.env = environment.Environment({})
        stk.defn.parameters = cfn_p.CfnParameters(stk.identifier(), tmpl)
        snippet = {"Ref": "foo"}
        self.assertRaises(exception.UserParameterMissing,
                          self.resolve,
                          snippet, tmpl, stk)

    def test_resource_refs(self):
        tmpl = template.Template(resource_template)
        stk = stack.Stack(self.ctx, 'test', tmpl)
        stk.validate()

        data = node_data.NodeData.from_dict({'reference_id': 'bar'})
        stk_defn.update_resource_data(stk.defn, 'foo', data)
        r_snippet = {"Ref": "foo"}
        self.assertEqual("bar", self.resolve(r_snippet, tmpl, stk))

    def test_resource_refs_param(self):
        tmpl = template.Template(resource_template)
        stk = stack.Stack(self.ctx, 'test', tmpl)

        p_snippet = {"Ref": "baz"}
        parsed = tmpl.parse(stk.defn, p_snippet)
        self.assertIsInstance(parsed, cfn_funcs.ParamRef)

    def test_select_from_list(self):
        tmpl = template.Template(empty_template)
        data = {"Fn::Select": ["1", ["foo", "bar"]]}
        self.assertEqual("bar", self.resolve(data, tmpl))

    def test_select_from_list_integer_index(self):
        tmpl = template.Template(empty_template)
        data = {"Fn::Select": [1, ["foo", "bar"]]}
        self.assertEqual("bar", self.resolve(data, tmpl))

    def test_select_from_list_out_of_bound(self):
        tmpl = template.Template(empty_template)
        data = {"Fn::Select": ["0", ["foo", "bar"]]}
        self.assertEqual("foo", self.resolve(data, tmpl))
        data = {"Fn::Select": ["1", ["foo", "bar"]]}
        self.assertEqual("bar", self.resolve(data, tmpl))
        data = {"Fn::Select": ["2", ["foo", "bar"]]}
        self.assertEqual("", self.resolve(data, tmpl))

    def test_select_from_dict(self):
        tmpl = template.Template(empty_template)
        data = {"Fn::Select": ["red", {"red": "robin", "re": "foo"}]}
        self.assertEqual("robin", self.resolve(data, tmpl))

    def test_select_int_from_dict(self):
        tmpl = template.Template(empty_template)
        data = {"Fn::Select": ["2", {"1": "bar", "2": "foo"}]}
        self.assertEqual("foo", self.resolve(data, tmpl))

    def test_select_from_none(self):
        tmpl = template.Template(empty_template)
        data = {"Fn::Select": ["red", None]}
        self.assertEqual("", self.resolve(data, tmpl))

    def test_select_from_dict_not_existing(self):
        tmpl = template.Template(empty_template)
        data = {"Fn::Select": ["green", {"red": "robin", "re": "foo"}]}
        self.assertEqual("", self.resolve(data, tmpl))

    def test_select_from_serialized_json_map(self):
        tmpl = template.Template(empty_template)
        js = json.dumps({"red": "robin", "re": "foo"})
        data = {"Fn::Select": ["re", js]}
        self.assertEqual("foo", self.resolve(data, tmpl))

    def test_select_from_serialized_json_list(self):
        tmpl = template.Template(empty_template)
        js = json.dumps(["foo", "fee", "fum"])
        data = {"Fn::Select": ["0", js]}
        self.assertEqual("foo", self.resolve(data, tmpl))

    def test_select_empty_string(self):
        tmpl = template.Template(empty_template)
        data = {"Fn::Select": ["0", '']}
        self.assertEqual("", self.resolve(data, tmpl))
        data = {"Fn::Select": ["1", '']}
        self.assertEqual("", self.resolve(data, tmpl))
        data = {"Fn::Select": ["one", '']}
        self.assertEqual("", self.resolve(data, tmpl))

    def test_equals(self):
        tpl = template_format.parse('''
        AWSTemplateFormatVersion: 2010-09-09
        Parameters:
          env_type:
            Type: String
            Default: 'test'
        ''')
        snippet = {'Fn::Equals': [{'Ref': 'env_type'}, 'prod']}
        # when param 'env_type' is 'test', the equals function resolves
        # to false
        tmpl = template.Template(tpl)
        stk = stack.Stack(utils.dummy_context(),
                          'test_equals_false', tmpl)
        resolved = self.resolve_condition(snippet, tmpl, stk)
        self.assertFalse(resolved)
        # when param 'env_type' is 'prod', the equals function resolves
        # to true
        tmpl = template.Template(tpl, env=environment.Environment(
            {'env_type': 'prod'}))
        stk = stack.Stack(utils.dummy_context(),
                          'test_equals_true', tmpl)
        resolved = self.resolve_condition(snippet, tmpl, stk)
        self.assertTrue(resolved)

    def test_equals_invalid_args(self):
        tmpl = template.Template(aws_empty_template)

        snippet = {'Fn::Equals': ['test', 'prod', 'invalid']}
        exc = self.assertRaises(exception.StackValidationFailed,
                                self.resolve_condition, snippet, tmpl)

        error_msg = ('.Fn::Equals: Arguments to "Fn::Equals" must be '
                     'of the form: [value_1, value_2]')
        self.assertIn(error_msg, six.text_type(exc))

        # test invalid type
        snippet = {'Fn::Equals': {"equal": False}}
        exc = self.assertRaises(exception.StackValidationFailed,
                                self.resolve_condition, snippet, tmpl)
        self.assertIn(error_msg, six.text_type(exc))

    def test_not(self):
        tpl = template_format.parse('''
        AWSTemplateFormatVersion: 2010-09-09
        Parameters:
          env_type:
            Type: String
            Default: 'test'
        ''')
        snippet = {'Fn::Not': [{'Fn::Equals': [{'Ref': 'env_type'},
                                               'prod']}]}
        # when param 'env_type' is 'test', the not function resolves to true
        tmpl = template.Template(tpl)
        stk = stack.Stack(utils.dummy_context(),
                          'test_not_true', tmpl)
        resolved = self.resolve_condition(snippet, tmpl, stk)
        self.assertTrue(resolved)
        # when param 'env_type' is 'prod', the not function resolves to false
        tmpl = template.Template(tpl, env=environment.Environment(
            {'env_type': 'prod'}))
        stk = stack.Stack(utils.dummy_context(),
                          'test_not_false', tmpl)
        resolved = self.resolve_condition(snippet, tmpl, stk)
        self.assertFalse(resolved)

    def test_not_invalid_args(self):
        tmpl = template.Template(aws_empty_template)
        stk = stack.Stack(utils.dummy_context(), 'test_not_invalid', tmpl)

        snippet = {'Fn::Not': ['invalid_arg']}
        exc = self.assertRaises(ValueError,
                                self.resolve_condition, snippet, tmpl, stk)
        error_msg = 'Invalid condition "invalid_arg"'
        self.assertIn(error_msg, six.text_type(exc))

        # test invalid type
        snippet = {'Fn::Not': 'invalid'}
        exc = self.assertRaises(exception.StackValidationFailed,
                                self.resolve_condition, snippet, tmpl)
        error_msg = 'Arguments to "Fn::Not" must be '
        self.assertIn(error_msg, six.text_type(exc))

        snippet = {'Fn::Not': ['cd1', 'cd2']}
        exc = self.assertRaises(exception.StackValidationFailed,
                                self.resolve_condition, snippet, tmpl)
        error_msg = 'Arguments to "Fn::Not" must be '
        self.assertIn(error_msg, six.text_type(exc))

    def test_and(self):
        tpl = template_format.parse('''
        AWSTemplateFormatVersion: 2010-09-09
        Parameters:
          env_type:
            Type: String
            Default: 'test'
          zone:
            Type: String
            Default: 'shanghai'
        ''')
        snippet = {
            'Fn::And': [
                {'Fn::Equals': [{'Ref': 'env_type'}, 'prod']},
                {'Fn::Not': [{'Fn::Equals': [{'Ref': 'zone'}, "beijing"]}]}]}
        # when param 'env_type' is 'test', the and function resolves to false
        tmpl = template.Template(tpl)
        stk = stack.Stack(utils.dummy_context(),
                          'test_and_false', tmpl)
        resolved = self.resolve_condition(snippet, tmpl, stk)
        self.assertFalse(resolved)
        # when param 'env_type' is 'prod' and param 'zone' is 'shanghai',
        # the and function resolves to true
        tmpl = template.Template(tpl, env=environment.Environment(
            {'env_type': 'prod'}))
        stk = stack.Stack(utils.dummy_context(),
                          'test_and_true', tmpl)
        resolved = self.resolve_condition(snippet, tmpl, stk)
        self.assertTrue(resolved)
        # when param 'env_type' is 'prod' and param 'zone' is 'beijing',
        # the and function resolves to false
        tmpl = template.Template(tpl, env=environment.Environment(
            {'env_type': 'prod',
             'zone': 'beijing'}))
        stk = stack.Stack(utils.dummy_context(),
                          'test_and_false', tmpl)
        resolved = self.resolve_condition(snippet, tmpl, stk)
        self.assertFalse(resolved)

    def test_and_invalid_args(self):
        tmpl = template.Template(aws_empty_template)

        error_msg = ('The minimum number of condition arguments to "Fn::And" '
                     'is 2.')
        snippet = {'Fn::And': ['invalid_arg']}
        exc = self.assertRaises(exception.StackValidationFailed,
                                self.resolve_condition, snippet, tmpl)
        self.assertIn(error_msg, six.text_type(exc))

        error_msg = 'Arguments to "Fn::And" must be'
        # test invalid type
        snippet = {'Fn::And': 'invalid'}
        exc = self.assertRaises(exception.StackValidationFailed,
                                self.resolve_condition, snippet, tmpl)
        self.assertIn(error_msg, six.text_type(exc))

        stk = stack.Stack(utils.dummy_context(), 'test_and_invalid', tmpl)
        snippet = {'Fn::And': ['cd1', True]}
        exc = self.assertRaises(ValueError,
                                self.resolve_condition, snippet, tmpl, stk)
        error_msg = 'Invalid condition "cd1"'
        self.assertIn(error_msg, six.text_type(exc))

    def test_or(self):
        tpl = template_format.parse('''
        AWSTemplateFormatVersion: 2010-09-09
        Parameters:
          zone:
            Type: String
            Default: 'guangzhou'
        ''')
        snippet = {
            'Fn::Or': [
                {'Fn::Equals': [{'Ref': 'zone'}, 'shanghai']},
                {'Fn::Equals': [{'Ref': 'zone'}, 'beijing']}]}
        # when param 'zone' equals neither 'shanghai' nor 'beijing',
        # the or function resolves to false
        tmpl = template.Template(tpl)
        stk = stack.Stack(utils.dummy_context(),
                          'test_or_false', tmpl)
        resolved = self.resolve_condition(snippet, tmpl, stk)
        self.assertFalse(resolved)
        # when param 'zone' equals 'shanghai' or 'beijing',
        # the or function resolves to true
        tmpl = template.Template(tpl, env=environment.Environment(
            {'zone': 'beijing'}))
        stk = stack.Stack(utils.dummy_context(),
                          'test_or_true', tmpl)
        resolved = self.resolve_condition(snippet, tmpl, stk)
        self.assertTrue(resolved)

        tmpl = template.Template(tpl, env=environment.Environment(
            {'zone': 'shanghai'}))
        stk = stack.Stack(utils.dummy_context(),
                          'test_or_true', tmpl)
        resolved = self.resolve_condition(snippet, tmpl, stk)
        self.assertTrue(resolved)

    def test_or_invalid_args(self):
        tmpl = template.Template(aws_empty_template)

        error_msg = ('The minimum number of condition arguments to "Fn::Or" '
                     'is 2.')
        snippet = {'Fn::Or': ['invalid_arg']}
        exc = self.assertRaises(exception.StackValidationFailed,
                                self.resolve_condition, snippet, tmpl)
        self.assertIn(error_msg, six.text_type(exc))

        error_msg = 'Arguments to "Fn::Or" must be'
        # test invalid type
        snippet = {'Fn::Or': 'invalid'}
        exc = self.assertRaises(exception.StackValidationFailed,
                                self.resolve_condition, snippet, tmpl)
        self.assertIn(error_msg, six.text_type(exc))

        stk = stack.Stack(utils.dummy_context(), 'test_or_invalid', tmpl)
        snippet = {'Fn::Or': ['invalid_cd', True]}
        exc = self.assertRaises(ValueError,
                                self.resolve_condition, snippet, tmpl, stk)
        error_msg = 'Invalid condition "invalid_cd"'
        self.assertIn(error_msg, six.text_type(exc))

    def test_join(self):
        tmpl = template.Template(empty_template)
        join = {"Fn::Join": [" ", ["foo", "bar"]]}
        self.assertEqual("foo bar", self.resolve(join, tmpl))

    def test_split_ok(self):
        tmpl = template.Template(empty_template)
        data = {"Fn::Split": [";", "foo; bar; achoo"]}
        self.assertEqual(['foo', ' bar', ' achoo'], self.resolve(data, tmpl))

    def test_split_no_delim_in_str(self):
        tmpl = template.Template(empty_template)
        data = {"Fn::Split": [";", "foo, bar, achoo"]}
        self.assertEqual(['foo, bar, achoo'], self.resolve(data, tmpl))

    def test_base64(self):
        tmpl = template.Template(empty_template)
        snippet = {"Fn::Base64": "foobar"}
        # For now, the Base64 function just returns the original text, and
        # does not convert to base64 (see issue #133)
        self.assertEqual("foobar", self.resolve(snippet, tmpl))

    def test_get_azs(self):
        tmpl = template.Template(empty_template)
        snippet = {"Fn::GetAZs": ""}
        self.assertEqual(["nova"], self.resolve(snippet, tmpl))

    def test_get_azs_with_stack(self):
        tmpl = template.Template(empty_template)
        snippet = {"Fn::GetAZs": ""}
        stk = stack.Stack(self.ctx, 'test_stack',
                          template.Template(empty_template))
        self.m.StubOutWithMock(nova.NovaClientPlugin, '_create')
        fc = fakes_nova.FakeClient()
        nova.NovaClientPlugin._create().AndReturn(fc)
        self.m.ReplayAll()
        self.assertEqual(["nova1"], self.resolve(snippet, tmpl, stk))

    def test_replace_string_values(self):
        tmpl = template.Template(empty_template)
        snippet = {"Fn::Replace": [
            {'$var1': 'foo', '%var2%': 'bar'},
            '$var1 is %var2%'
        ]}
        self.assertEqual('foo is bar', self.resolve(snippet, tmpl))

    def test_replace_number_values(self):
        tmpl = template.Template(empty_template)
        snippet = {"Fn::Replace": [
            {'$var1': 1, '%var2%': 2},
            '$var1 is not %var2%'
        ]}
        self.assertEqual('1 is not 2', self.resolve(snippet, tmpl))

        snippet = {"Fn::Replace": [
            {'$var1': 1.3, '%var2%': 2.5},
            '$var1 is not %var2%'
        ]}
        self.assertEqual('1.3 is not 2.5', self.resolve(snippet, tmpl))

    def test_replace_none_values(self):
        tmpl = template.Template(empty_template)
        snippet = {"Fn::Replace": [
            {'$var1': None, '${var2}': None},
            '"$var1" is "${var2}"'
        ]}
        self.assertEqual('"" is ""', self.resolve(snippet, tmpl))

    def test_replace_missing_key(self):
        tmpl = template.Template(empty_template)
        snippet = {"Fn::Replace": [
            {'$var1': 'foo', 'var2': 'bar'},
            '"$var1" is "${var3}"'
        ]}
        self.assertEqual('"foo" is "${var3}"', self.resolve(snippet, tmpl))

    def test_replace_param_values(self):
        env = environment.Environment({'foo': 'wibble'})
        tmpl = template.Template(parameter_template, env=env)
        stk = stack.Stack(self.ctx, 'test_stack', tmpl)
        snippet = {"Fn::Replace": [
            {'$var1': {'Ref': 'foo'}, '%var2%': {'Ref': 'blarg'}},
            '$var1 is %var2%'
        ]}
        self.assertEqual('wibble is quux', self.resolve(snippet, tmpl, stk))

    def test_member_list2map_good(self):
        tmpl = template.Template(empty_template)
        snippet = {"Fn::MemberListToMap": [
            'Name', 'Value', ['.member.0.Name=metric',
                              '.member.0.Value=cpu',
                              '.member.1.Name=size',
                              '.member.1.Value=56']]}
        self.assertEqual({'metric': 'cpu', 'size': '56'},
                         self.resolve(snippet, tmpl))

    def test_member_list2map_good2(self):
        tmpl = template.Template(empty_template)
        snippet = {"Fn::MemberListToMap": [
            'Key', 'Value', ['.member.2.Key=metric',
                             '.member.2.Value=cpu',
                             '.member.5.Key=size',
                             '.member.5.Value=56']]}
        self.assertEqual({'metric': 'cpu', 'size': '56'},
                         self.resolve(snippet, tmpl))

    def test_resource_facade(self):
        metadata_snippet = {'Fn::ResourceFacade': 'Metadata'}
        deletion_policy_snippet = {'Fn::ResourceFacade': 'DeletionPolicy'}
        update_policy_snippet = {'Fn::ResourceFacade': 'UpdatePolicy'}

        parent_resource = DummyClass()
        parent_resource.metadata_set({"foo": "bar"})
        parent_resource.t = rsrc_defn.ResourceDefinition(
            'parent', 'SomeType',
            deletion_policy=rsrc_defn.ResourceDefinition.RETAIN,
            update_policy={"blarg": "wibble"})
        tmpl = copy.deepcopy(empty_template)
        tmpl['Resources'] = {'parent': {'Type': 'SomeType',
                                        'DeletionPolicy': 'Retain',
                                        'UpdatePolicy': {"blarg": "wibble"}}}
        parent_resource.stack = stack.Stack(self.ctx, 'toplevel_stack',
                                            template.Template(tmpl))
        parent_resource.stack._resources = {'parent': parent_resource}

        stk = stack.Stack(self.ctx, 'test_stack',
                          template.Template(empty_template),
                          parent_resource='parent', owner_id=45)
        stk.set_parent_stack(parent_resource.stack)
        self.assertEqual({"foo": "bar"},
                         self.resolve(metadata_snippet, stk.t, stk))
        self.assertEqual('Retain',
                         self.resolve(deletion_policy_snippet, stk.t, stk))
        self.assertEqual({"blarg": "wibble"},
                         self.resolve(update_policy_snippet, stk.t, stk))

    def test_resource_facade_function(self):
        deletion_policy_snippet = {'Fn::ResourceFacade': 'DeletionPolicy'}

        parent_resource = DummyClass()
        parent_resource.metadata_set({"foo": "bar"})
        del_policy = cfn_funcs.Join(None, 'Fn::Join', ['eta', ['R', 'in']])
        parent_resource.t = rsrc_defn.ResourceDefinition(
            'parent', 'SomeType',
            deletion_policy=del_policy)
        tmpl = copy.deepcopy(empty_template)
        tmpl['Resources'] = {'parent': {'Type': 'SomeType',
                                        'DeletionPolicy': del_policy}}
        parent_resource.stack
= stack.Stack(self.ctx, 'toplevel_stack', template.Template(tmpl)) parent_resource.stack._resources = {'parent': parent_resource} stk = stack.Stack(self.ctx, 'test_stack', template.Template(empty_template), parent_resource='parent') stk.set_parent_stack(parent_resource.stack) self.assertEqual('Retain', self.resolve(deletion_policy_snippet, stk.t, stk)) def test_resource_facade_invalid_arg(self): snippet = {'Fn::ResourceFacade': 'wibble'} stk = stack.Stack(self.ctx, 'test_stack', template.Template(empty_template)) error = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, stk.t, stk) self.assertIn(next(iter(snippet)), six.text_type(error)) def test_resource_facade_missing_deletion_policy(self): snippet = {'Fn::ResourceFacade': 'DeletionPolicy'} parent_resource = DummyClass() parent_resource.metadata_set({"foo": "bar"}) parent_resource.t = rsrc_defn.ResourceDefinition('parent', 'SomeType') tmpl = copy.deepcopy(empty_template) tmpl['Resources'] = {'parent': {'Type': 'SomeType'}} parent_resource.stack = stack.Stack(self.ctx, 'toplevel_stack', template.Template(tmpl)) parent_resource.stack._resources = {'parent': parent_resource} stk = stack.Stack(self.ctx, 'test_stack', template.Template(empty_template), parent_resource='parent', owner_id=78) stk.set_parent_stack(parent_resource.stack) self.assertEqual('Delete', self.resolve(snippet, stk.t, stk)) def test_prevent_parameters_access(self): expected_description = "This can be accessed" tmpl = template.Template({ 'AWSTemplateFormatVersion': '2010-09-09', 'Description': expected_description, 'Parameters': { 'foo': {'Type': 'String', 'Required': True} } }) self.assertEqual(expected_description, tmpl['Description']) keyError = self.assertRaises(KeyError, tmpl.__getitem__, 'Parameters') self.assertIn("can not be accessed directly", six.text_type(keyError)) def test_parameters_section_not_iterable(self): expected_description = "This can be accessed" tmpl = template.Template({ 'AWSTemplateFormatVersion': '2010-09-09', 'Description': expected_description, 'Parameters': { 'foo': {'Type': 'String', 'Required': True} } }) self.assertEqual(expected_description, tmpl['Description']) self.assertNotIn('Parameters', tmpl.keys()) def test_add_resource(self): cfn_tpl = template_format.parse(''' AWSTemplateFormatVersion: 2010-09-09 Resources: resource1: Type: AWS::EC2::Instance Properties: property1: value1 Metadata: foo: bar DependsOn: dummy DeletionPolicy: Retain UpdatePolicy: foo: bar resource2: Type: AWS::EC2::Instance resource3: Type: AWS::EC2::Instance DependsOn: - resource1 - dummy - resource2 ''') source = template.Template(cfn_tpl) empty = template.Template(copy.deepcopy(empty_template)) stk = stack.Stack(self.ctx, 'test_stack', source) for rname, defn in sorted(source.resource_definitions(stk).items()): empty.add_resource(defn) expected = copy.deepcopy(cfn_tpl['Resources']) del expected['resource1']['DependsOn'] expected['resource3']['DependsOn'] = ['resource1', 'resource2'] self.assertEqual(expected, empty.t['Resources']) def test_add_output(self): cfn_tpl = template_format.parse(''' AWSTemplateFormatVersion: 2010-09-09 Outputs: output1: Description: An output Value: foo ''') source = template.Template(cfn_tpl) empty = template.Template(copy.deepcopy(empty_template)) stk = stack.Stack(self.ctx, 'test_stack', source) for defn in six.itervalues(source.outputs(stk)): empty.add_output(defn) self.assertEqual(cfn_tpl['Outputs'], empty.t['Outputs']) def test_create_empty_template_default_version(self): empty_template = 
template.Template.create_empty_template() self.assertEqual(hot_t.HOTemplate20150430, empty_template.__class__) self.assertEqual({}, empty_template['parameter_groups']) self.assertEqual({}, empty_template['resources']) self.assertEqual({}, empty_template['outputs']) def test_create_empty_template_returns_correct_version(self): t = template_format.parse(''' AWSTemplateFormatVersion: 2010-09-09 Parameters: Resources: Outputs: ''') aws_tmpl = template.Template(t) empty_template = template.Template.create_empty_template( version=aws_tmpl.version) self.assertEqual(aws_tmpl.__class__, empty_template.__class__) self.assertEqual({}, empty_template['Mappings']) self.assertEqual({}, empty_template['Resources']) self.assertEqual({}, empty_template['Outputs']) t = template_format.parse(''' HeatTemplateFormatVersion: 2012-12-12 Parameters: Resources: Outputs: ''') heat_tmpl = template.Template(t) empty_template = template.Template.create_empty_template( version=heat_tmpl.version) self.assertEqual(heat_tmpl.__class__, empty_template.__class__) self.assertEqual({}, empty_template['Mappings']) self.assertEqual({}, empty_template['Resources']) self.assertEqual({}, empty_template['Outputs']) t = template_format.parse(''' heat_template_version: 2015-04-30 parameter_groups: resources: outputs: ''') hot_tmpl = template.Template(t) empty_template = template.Template.create_empty_template( version=hot_tmpl.version) self.assertEqual(hot_tmpl.__class__, empty_template.__class__) self.assertEqual({}, empty_template['parameter_groups']) self.assertEqual({}, empty_template['resources']) self.assertEqual({}, empty_template['outputs']) def test_create_empty_template_from_another_template(self): res_param_template = template_format.parse('''{ "HeatTemplateFormatVersion" : "2012-12-12", "Parameters" : { "foo" : { "Type" : "String" }, "blarg" : { "Type" : "String", "Default": "quux" } }, "Resources" : { "foo" : { "Type" : "GenericResourceType" }, "blarg" : { "Type" : "GenericResourceType" } } }''') env = environment.Environment({'foo': 'bar'}) hot_tmpl = template.Template(res_param_template, env) empty_template = template.Template.create_empty_template( from_template=hot_tmpl) self.assertEqual({}, empty_template['Resources']) self.assertEqual(hot_tmpl.env, empty_template.env) class TemplateFnErrorTest(common.HeatTestCase): scenarios = [ ('select_from_list_not_int', dict(expect=TypeError, snippet={"Fn::Select": ["one", ["foo", "bar"]]})), ('select_from_dict_not_str', dict(expect=TypeError, snippet={"Fn::Select": [1, {"red": "robin", "re": "foo"}]})), ('select_from_serialized_json_wrong', dict(expect=ValueError, snippet={"Fn::Select": ["not", "no json"]})), ('select_wrong_num_args_1', dict(expect=exception.StackValidationFailed, snippet={"Fn::Select": []})), ('select_wrong_num_args_2', dict(expect=exception.StackValidationFailed, snippet={"Fn::Select": ["4"]})), ('select_wrong_num_args_3', dict(expect=exception.StackValidationFailed, snippet={"Fn::Select": ["foo", {"foo": "bar"}, ""]})), ('select_wrong_num_args_4', dict(expect=TypeError, snippet={'Fn::Select': [['f'], {'f': 'food'}]})), ('split_no_delim', dict(expect=exception.StackValidationFailed, snippet={"Fn::Split": ["foo, bar, achoo"]})), ('split_no_list', dict(expect=exception.StackValidationFailed, snippet={"Fn::Split": "foo, bar, achoo"})), ('base64_list', dict(expect=TypeError, snippet={"Fn::Base64": ["foobar"]})), ('base64_dict', dict(expect=TypeError, snippet={"Fn::Base64": {"foo": "bar"}})), ('replace_list_value', dict(expect=TypeError, snippet={"Fn::Replace": [ 
{'$var1': 'foo', '%var2%': ['bar']}, '$var1 is %var2%']})), ('replace_list_mapping', dict(expect=exception.StackValidationFailed, snippet={"Fn::Replace": [ ['var1', 'foo', 'var2', 'bar'], '$var1 is ${var2}']})), ('replace_dict', dict(expect=exception.StackValidationFailed, snippet={"Fn::Replace": {}})), ('replace_missing_template', dict(expect=exception.StackValidationFailed, snippet={"Fn::Replace": [['var1', 'foo', 'var2', 'bar']]})), ('replace_none_template', dict(expect=exception.StackValidationFailed, snippet={"Fn::Replace": [['var2', 'bar'], None]})), ('replace_list_string', dict(expect=TypeError, snippet={"Fn::Replace": [ {'var1': 'foo', 'var2': 'bar'}, ['$var1 is ${var2}']]})), ('join_string', dict(expect=TypeError, snippet={"Fn::Join": [" ", "foo"]})), ('join_dict', dict(expect=TypeError, snippet={"Fn::Join": [" ", {"foo": "bar"}]})), ('join_wrong_num_args_1', dict(expect=exception.StackValidationFailed, snippet={"Fn::Join": []})), ('join_wrong_num_args_2', dict(expect=exception.StackValidationFailed, snippet={"Fn::Join": [" "]})), ('join_wrong_num_args_3', dict(expect=exception.StackValidationFailed, snippet={"Fn::Join": [" ", {"foo": "bar"}, ""]})), ('join_string_nodelim', dict(expect=exception.StackValidationFailed, snippet={"Fn::Join": "o"})), ('join_string_nodelim_1', dict(expect=exception.StackValidationFailed, snippet={"Fn::Join": "oh"})), ('join_string_nodelim_2', dict(expect=exception.StackValidationFailed, snippet={"Fn::Join": "ohh"})), ('join_dict_nodelim1', dict(expect=exception.StackValidationFailed, snippet={"Fn::Join": {"foo": "bar"}})), ('join_dict_nodelim2', dict(expect=exception.StackValidationFailed, snippet={"Fn::Join": {"foo": "bar", "blarg": "wibble"}})), ('join_dict_nodelim3', dict(expect=exception.StackValidationFailed, snippet={"Fn::Join": {"foo": "bar", "blarg": "wibble", "baz": "quux"}})), ('member_list2map_no_key_or_val', dict(expect=exception.StackValidationFailed, snippet={"Fn::MemberListToMap": [ 'Key', ['.member.2.Key=metric', '.member.2.Value=cpu', '.member.5.Key=size', '.member.5.Value=56']]})), ('member_list2map_no_list', dict(expect=exception.StackValidationFailed, snippet={"Fn::MemberListToMap": [ 'Key', '.member.2.Key=metric']})), ('member_list2map_not_string', dict(expect=exception.StackValidationFailed, snippet={"Fn::MemberListToMap": [ 'Name', ['Value'], ['.member.0.Name=metric', '.member.0.Value=cpu', '.member.1.Name=size', '.member.1.Value=56']]})), ] def test_bad_input(self): tmpl = template.Template(empty_template) def resolve(s): return TemplateTest.resolve(s, tmpl) error = self.assertRaises(self.expect, resolve, self.snippet) self.assertIn(next(iter(self.snippet)), six.text_type(error)) class ResolveDataTest(common.HeatTestCase): def setUp(self): super(ResolveDataTest, self).setUp() self.username = 'parser_stack_test_user' self.ctx = utils.dummy_context() self.stack = stack.Stack(self.ctx, 'resolve_test_stack', template.Template(empty_template)) def resolve(self, snippet): return function.resolve(self.stack.t.parse(self.stack.defn, snippet)) def test_join_split(self): # join snippet = {'Fn::Join': [';', ['one', 'two', 'three']]} self.assertEqual('one;two;three', self.resolve(snippet)) # join then split snippet = {'Fn::Split': [';', snippet]} self.assertEqual(['one', 'two', 'three'], self.resolve(snippet)) def test_split_join_split_join(self): # each snippet in this test encapsulates # the snippet from the previous step, leading # to increasingly nested function calls # split snippet = {'Fn::Split': [',', 'one,two,three']} 
self.assertEqual(['one', 'two', 'three'], self.resolve(snippet)) # split then join snippet = {'Fn::Join': [';', snippet]} self.assertEqual('one;two;three', self.resolve(snippet)) # split then join then split snippet = {'Fn::Split': [';', snippet]} self.assertEqual(['one', 'two', 'three'], self.resolve(snippet)) # split then join then split then join snippet = {'Fn::Join': ['-', snippet]} self.assertEqual('one-two-three', self.resolve(snippet)) def test_join_recursive(self): raw = {'Fn::Join': ['\n', [{'Fn::Join': [' ', ['foo', 'bar']]}, 'baz']]} self.assertEqual('foo bar\nbaz', self.resolve(raw)) def test_join_not_string(self): snippet = {'Fn::Join': ['\n', [{'Fn::Join': [' ', ['foo', 45]]}, 'baz']]} error = self.assertRaises(TypeError, self.resolve, snippet) self.assertIn('45', six.text_type(error)) def test_base64_replace(self): raw = {'Fn::Base64': {'Fn::Replace': [ {'foo': 'bar'}, 'Meet at the foo']}} self.assertEqual('Meet at the bar', self.resolve(raw)) def test_replace_base64(self): raw = {'Fn::Replace': [{'foo': 'bar'}, { 'Fn::Base64': 'Meet at the foo'}]} self.assertEqual('Meet at the bar', self.resolve(raw)) def test_nested_selects(self): data = { 'a': ['one', 'two', 'three'], 'b': ['een', 'twee', {'d': 'D', 'e': 'E'}] } raw = {'Fn::Select': ['a', data]} self.assertEqual(data['a'], self.resolve(raw)) raw = {'Fn::Select': ['b', data]} self.assertEqual(data['b'], self.resolve(raw)) raw = { 'Fn::Select': ['1', { 'Fn::Select': ['b', data] }] } self.assertEqual('twee', self.resolve(raw)) raw = { 'Fn::Select': ['e', { 'Fn::Select': ['2', { 'Fn::Select': ['b', data] }] }] } self.assertEqual('E', self.resolve(raw)) def test_member_list_select(self): snippet = {'Fn::Select': ['metric', {"Fn::MemberListToMap": [ 'Name', 'Value', ['.member.0.Name=metric', '.member.0.Value=cpu', '.member.1.Name=size', '.member.1.Value=56']]}]} self.assertEqual('cpu', self.resolve(snippet)) heat-10.0.2/heat/tests/common.py0000666000175000017500000003313213343562351016473 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
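# Illustrative sketch (not part of the original test suite; the name
# '_toy_resolve' is made up): the ResolveDataTest cases above rely on
# heat.engine.function resolving snippets depth-first, so an inner
# {'Fn::Split': ...} yields a plain list before the outer {'Fn::Join': ...}
# consumes it. The toy resolver below reproduces only that composition idea
# for Fn::Join and Fn::Split; everything beyond that is an assumption.
def _toy_resolve(snippet):
    if isinstance(snippet, dict):
        # Resolve the values first, so nested functions become plain data.
        resolved = {k: _toy_resolve(v) for k, v in snippet.items()}
        if 'Fn::Join' in resolved:
            delim, strings = resolved['Fn::Join']
            return delim.join(strings)
        if 'Fn::Split' in resolved:
            delim, string = resolved['Fn::Split']
            return string.split(delim)
        return resolved
    if isinstance(snippet, list):
        return [_toy_resolve(item) for item in snippet]
    return snippet

# Mirrors test_split_join_split_join: the inner result feeds the outer call.
assert _toy_resolve(
    {'Fn::Join': [';', {'Fn::Split': [',', 'one,two,three']}]}
) == 'one;two;three'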
import os import sys import fixtures import mox from oslo_config import cfg from oslo_log import log as logging import testscenarios import testtools from heat.common import context from heat.common import messaging from heat.common import policy from heat.engine.clients.os import barbican from heat.engine.clients.os import cinder from heat.engine.clients.os import glance from heat.engine.clients.os import keystone from heat.engine.clients.os.keystone import fake_keystoneclient as fake_ks from heat.engine.clients.os.keystone import keystone_constraints as ks_constr from heat.engine.clients.os.neutron import neutron_constraints as neutron from heat.engine.clients.os import nova from heat.engine.clients.os import sahara from heat.engine.clients.os import trove from heat.engine import environment from heat.engine import resource from heat.engine import resources from heat.engine import scheduler from heat.tests import fakes from heat.tests import generic_resource as generic_rsrc from heat.tests import utils TEST_DEFAULT_LOGLEVELS = {'migrate': logging.WARN, 'sqlalchemy': logging.WARN, 'heat.engine.environment': logging.ERROR} _LOG_FORMAT = "%(levelname)8s [%(name)s] %(message)s" _TRUE_VALUES = ('True', 'true', '1', 'yes') class FakeLogMixin(object): def setup_logging(self, quieten=True): # Assign default logs to self.LOG so we can still # assert on heat logs. default_level = logging.INFO if os.environ.get('OS_DEBUG') in _TRUE_VALUES: default_level = logging.DEBUG self.LOG = self.useFixture( fixtures.FakeLogger(level=default_level, format=_LOG_FORMAT)) base_list = set([nlog.split('.')[0] for nlog in logging.logging.Logger.manager.loggerDict] ) for base in base_list: if base in TEST_DEFAULT_LOGLEVELS: self.useFixture(fixtures.FakeLogger( level=TEST_DEFAULT_LOGLEVELS[base], name=base, format=_LOG_FORMAT)) elif base != 'heat': self.useFixture(fixtures.FakeLogger( name=base, format=_LOG_FORMAT)) if quieten: for ll in TEST_DEFAULT_LOGLEVELS: if ll.startswith('heat.'): self.useFixture(fixtures.FakeLogger( level=TEST_DEFAULT_LOGLEVELS[ll], name=ll, format=_LOG_FORMAT)) class HeatTestCase(testscenarios.WithScenarios, testtools.TestCase, FakeLogMixin): def setUp(self, mock_keystone=True, mock_resource_policy=True, quieten_logging=True): super(HeatTestCase, self).setUp() self.m = mox.Mox() self.addCleanup(self.m.UnsetStubs) self.setup_logging(quieten=quieten_logging) self.warnings = self.useFixture(fixtures.WarningsCapture()) scheduler.ENABLE_SLEEP = False self.useFixture(fixtures.MonkeyPatch( 'heat.common.exception._FATAL_EXCEPTION_FORMAT_ERRORS', True)) def enable_sleep(): scheduler.ENABLE_SLEEP = True self.addCleanup(enable_sleep) mod_dir = os.path.dirname(sys.modules[__name__].__file__) project_dir = os.path.abspath(os.path.join(mod_dir, '../../')) env_dir = os.path.join(project_dir, 'etc', 'heat', 'environment.d') template_dir = os.path.join(project_dir, 'etc', 'heat', 'templates') cfg.CONF.set_default('environment_dir', env_dir) cfg.CONF.set_override('error_wait_time', None) cfg.CONF.set_default('template_dir', template_dir) self.addCleanup(cfg.CONF.reset) messaging.setup("fake://", optional=True) self.addCleanup(messaging.cleanup) tri_names = ['AWS::RDS::DBInstance', 'AWS::CloudWatch::Alarm'] tris = [] for name in tri_names: tris.append(resources.global_env().get_resource_info( name, registry_type=environment.TemplateResourceInfo)) for tri in tris: if tri is not None: cur_path = tri.template_name templ_path = os.path.join(project_dir, 'etc', 'heat', 'templates') if templ_path not in cur_path: 
tri.template_name = cur_path.replace( '/etc/heat/templates', templ_path) if mock_keystone: self.stub_keystoneclient() if mock_resource_policy: self.mock_resource_policy = self.patchobject( policy.ResourceEnforcer, 'enforce') utils.setup_dummy_db() self.register_test_resources() self.addCleanup(utils.reset_dummy_db) def register_test_resources(self): resource._register_class('GenericResourceType', generic_rsrc.GenericResource) resource._register_class('MultiStepResourceType', generic_rsrc.MultiStepResource) resource._register_class('ResWithShowAttrType', generic_rsrc.ResWithShowAttr) resource._register_class('SignalResourceType', generic_rsrc.SignalResource) resource._register_class('ResourceWithPropsType', generic_rsrc.ResourceWithProps) resource._register_class('ResourceWithPropsRefPropOnDelete', generic_rsrc.ResourceWithPropsRefPropOnDelete) resource._register_class( 'ResourceWithPropsRefPropOnValidate', generic_rsrc.ResourceWithPropsRefPropOnValidate) resource._register_class('StackUserResourceType', generic_rsrc.StackUserResource) resource._register_class('ResourceWithResourceIDType', generic_rsrc.ResourceWithResourceID) resource._register_class('ResourceWithAttributeType', generic_rsrc.ResourceWithAttributeType) resource._register_class('ResourceWithRequiredProps', generic_rsrc.ResourceWithRequiredProps) resource._register_class( 'ResourceWithMultipleRequiredProps', generic_rsrc.ResourceWithMultipleRequiredProps) resource._register_class( 'ResourceWithRequiredPropsAndEmptyAttrs', generic_rsrc.ResourceWithRequiredPropsAndEmptyAttrs) resource._register_class('ResourceWithPropsAndAttrs', generic_rsrc.ResourceWithPropsAndAttrs) resource._register_class('ResWithStringPropAndAttr', generic_rsrc.ResWithStringPropAndAttr), resource._register_class('ResWithComplexPropsAndAttrs', generic_rsrc.ResWithComplexPropsAndAttrs) resource._register_class('ResourceWithCustomConstraint', generic_rsrc.ResourceWithCustomConstraint) resource._register_class('ResourceWithComplexAttributesType', generic_rsrc.ResourceWithComplexAttributes) resource._register_class('ResourceWithDefaultClientName', generic_rsrc.ResourceWithDefaultClientName) resource._register_class('OverwrittenFnGetAttType', generic_rsrc.ResourceWithFnGetAttType) resource._register_class('OverwrittenFnGetRefIdType', generic_rsrc.ResourceWithFnGetRefIdType) resource._register_class('ResourceWithListProp', generic_rsrc.ResourceWithListProp) resource._register_class('StackResourceType', generic_rsrc.StackResourceType) resource._register_class('ResourceWithRestoreType', generic_rsrc.ResourceWithRestoreType) resource._register_class('ResourceTypeUnSupportedLiberty', generic_rsrc.ResourceTypeUnSupportedLiberty) resource._register_class('ResourceTypeSupportedKilo', generic_rsrc.ResourceTypeSupportedKilo) resource._register_class('ResourceTypeHidden', generic_rsrc.ResourceTypeHidden) resource._register_class( 'ResourceWithHiddenPropertyAndAttribute', generic_rsrc.ResourceWithHiddenPropertyAndAttribute) def patchobject(self, obj, attr, **kwargs): mockfixture = self.useFixture(fixtures.MockPatchObject(obj, attr, **kwargs)) return mockfixture.mock # NOTE(pshchelo): this overrides the testtools.TestCase.patch method # that does simple monkey-patching in favor of mock's patching def patch(self, target, **kwargs): mockfixture = self.useFixture(fixtures.MockPatch(target, **kwargs)) return mockfixture.mock def stub_auth(self, ctx=None, **kwargs): auth = self.patchobject(ctx or context.RequestContext, "_create_auth_plugin") fake_auth = fakes.FakeAuth(**kwargs) 
auth.return_value = fake_auth return auth def stub_keystoneclient(self, fake_client=None, **kwargs): client = self.patchobject(keystone.KeystoneClientPlugin, "_create") fkc = fake_client or fake_ks.FakeKeystoneClient(**kwargs) client.return_value = fkc return fkc def stub_KeypairConstraint_validate(self): validate = self.patchobject(nova.KeypairConstraint, 'validate') validate.return_value = True def stub_ImageConstraint_validate(self, num=None): validate = self.patchobject(glance.ImageConstraint, 'validate') if num is None: validate.return_value = True else: validate.side_effect = [True for x in range(num)] def stub_FlavorConstraint_validate(self): validate = self.patchobject(nova.FlavorConstraint, 'validate') validate.return_value = True def stub_VolumeConstraint_validate(self): validate = self.patchobject(cinder.VolumeConstraint, 'validate') validate.return_value = True def stub_QoSSpecsConstraint_validate(self): validate = self.patchobject(cinder.QoSSpecsConstraint, 'validate') validate.return_value = True def stub_SnapshotConstraint_validate(self): validate = self.patchobject( cinder.VolumeSnapshotConstraint, 'validate') validate.return_value = True def stub_VolumeTypeConstraint_validate(self): validate = self.patchobject(cinder.VolumeTypeConstraint, 'validate') validate.return_value = True def stub_VolumeBackupConstraint_validate(self): validate = self.patchobject(cinder.VolumeBackupConstraint, 'validate') validate.return_value = True def stub_ServerConstraint_validate(self): validate = self.patchobject(nova.ServerConstraint, 'validate') validate.return_value = True def stub_NetworkConstraint_validate(self): validate = self.patchobject(neutron.NetworkConstraint, 'validate') validate.return_value = True def stub_PortConstraint_validate(self): validate = self.patchobject(neutron.PortConstraint, 'validate') validate.return_value = True def stub_TroveFlavorConstraint_validate(self): validate = self.patchobject(trove.FlavorConstraint, 'validate') validate.return_value = True def stub_SubnetConstraint_validate(self): validate = self.patchobject(neutron.SubnetConstraint, 'validate') validate.return_value = True def stub_AddressScopeConstraint_validate(self): validate = self.patchobject(neutron.AddressScopeConstraint, 'validate') validate.return_value = True def stub_SubnetPoolConstraint_validate(self): validate = self.patchobject(neutron.SubnetPoolConstraint, 'validate') validate.return_value = True def stub_RouterConstraint_validate(self): validate = self.patchobject(neutron.RouterConstraint, 'validate') validate.return_value = True def stub_QoSPolicyConstraint_validate(self): validate = self.patchobject(neutron.QoSPolicyConstraint, 'validate') validate.return_value = True def stub_KeystoneProjectConstraint(self): validate = self.patchobject(ks_constr.KeystoneProjectConstraint, 'validate') validate.return_value = True def stub_SaharaPluginConstraint(self): validate = self.patchobject(sahara.PluginConstraint, 'validate') validate.return_value = True def stub_ProviderConstraint_validate(self): validate = self.patchobject(neutron.ProviderConstraint, 'validate') validate.return_value = True def stub_SecretConstraint_validate(self): validate = self.patchobject(barbican.SecretConstraint, 'validate') validate.return_value = True heat-10.0.2/heat/tests/test_support.py0000666000175000017500000000727113343562340017761 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.engine import support from heat.tests import common class SupportStatusTest(common.HeatTestCase): def test_valid_status(self): for sstatus in support.SUPPORT_STATUSES: previous = support.SupportStatus(version='test_version') status = support.SupportStatus( status=sstatus, message='test_message', version='test_version', previous_status=previous, ) self.assertEqual(sstatus, status.status) self.assertEqual('test_message', status.message) self.assertEqual('test_version', status.version) self.assertEqual(previous, status.previous_status) self.assertEqual({ 'status': sstatus, 'message': 'test_message', 'version': 'test_version', 'previous_status': {'status': 'SUPPORTED', 'message': None, 'version': 'test_version', 'previous_status': None}, }, status.to_dict()) def test_invalid_status(self): status = support.SupportStatus( status='RANDOM', message='test_message', version='test_version', previous_status=support.SupportStatus() ) self.assertEqual(support.UNKNOWN, status.status) self.assertEqual('Specified status is invalid, defaulting to UNKNOWN', status.message) self.assertIsNone(status.version) self.assertIsNone(status.previous_status) self.assertEqual({ 'status': 'UNKNOWN', 'message': 'Specified status is invalid, defaulting to UNKNOWN', 'version': None, 'previous_status': None, }, status.to_dict()) def test_previous_status(self): sstatus = support.SupportStatus( status=support.DEPRECATED, version='5.0.0', previous_status=support.SupportStatus( status=support.SUPPORTED, version='2015.1' ) ) self.assertEqual(support.DEPRECATED, sstatus.status) self.assertEqual('5.0.0', sstatus.version) self.assertEqual(support.SUPPORTED, sstatus.previous_status.status) self.assertEqual('2015.1', sstatus.previous_status.version) self.assertEqual({'status': 'DEPRECATED', 'version': '5.0.0', 'message': None, 'previous_status': {'status': 'SUPPORTED', 'version': '2015.1', 'message': None, 'previous_status': None}}, sstatus.to_dict()) def test_invalid_previous_status(self): ex = self.assertRaises(ValueError, support.SupportStatus, previous_status='YARRR') self.assertEqual('previous_status must be SupportStatus ' 'instead of %s' % str, six.text_type(ex)) heat-10.0.2/heat/tests/test_fault_middleware.py0000666000175000017500000002450313343562340021552 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
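# Hedged sketch (the helper name 'expected_fault' is made up and not used by
# the tests below): FaultWrapper._error, exercised throughout this module,
# maps an exception to a JSON-serializable fault. 'code', 'title' and
# 'explanation' come from the matching webob HTTP error class, while 'error'
# carries the exception type, message and, only when debug is enabled, the
# traceback. Building the dict once makes that contract explicit:
def expected_fault(code, title, explanation, exc_type, message,
                   traceback=None):
    """Return the fault payload shape asserted by the tests below."""
    return {'code': code,
            'title': title,
            'explanation': explanation,
            'error': {'type': exc_type,
                      'message': message,
                      'traceback': traceback}}

# For example, the payload test_openstack_exception_with_kwargs asserts on:
NOT_FOUND_FAULT = expected_fault(
    404, 'Not Found', 'The resource could not be found.',
    'EntityNotFound', 'The Stack (a) could not be found.')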
import inspect import re from oslo_config import cfg from oslo_log import log from oslo_messaging._drivers import common as rpc_common import six import webob import heat.api.middleware.fault as fault from heat.common import exception as heat_exc from heat.common.i18n import _ from heat.tests import common class StackNotFoundChild(heat_exc.EntityNotFound): pass class ErrorWithNewline(webob.exc.HTTPBadRequest): pass class FaultMiddlewareTest(common.HeatTestCase): def setUp(self): super(FaultMiddlewareTest, self).setUp() log.register_options(cfg.CONF) def test_disguised_http_exception_with_newline(self): wrapper = fault.FaultWrapper(None) newline_error = ErrorWithNewline('Error with \n newline') msg = wrapper._error(heat_exc.HTTPExceptionDisguise(newline_error)) expected = {'code': 400, 'error': {'message': 'Error with \n newline', 'traceback': None, 'type': 'ErrorWithNewline'}, 'explanation': ('The server could not comply with the ' 'request since it is either malformed ' 'or otherwise incorrect.'), 'title': 'Bad Request'} self.assertEqual(expected, msg) def test_http_exception_with_traceback(self): wrapper = fault.FaultWrapper(None) newline_error = ErrorWithNewline( 'Error with \n newline\nTraceback (most recent call last):\nFoo') msg = wrapper._error(heat_exc.HTTPExceptionDisguise(newline_error)) expected = {'code': 400, 'error': {'message': 'Error with \n newline', 'traceback': None, 'type': 'ErrorWithNewline'}, 'explanation': ('The server could not comply with the ' 'request since it is either malformed ' 'or otherwise incorrect.'), 'title': 'Bad Request'} self.assertEqual(expected, msg) def test_openstack_exception_with_kwargs(self): wrapper = fault.FaultWrapper(None) msg = wrapper._error(heat_exc.EntityNotFound(entity='Stack', name='a')) expected = {'code': 404, 'error': {'message': 'The Stack (a) could not be found.', 'traceback': None, 'type': 'EntityNotFound'}, 'explanation': 'The resource could not be found.', 'title': 'Not Found'} self.assertEqual(expected, msg) def test_openstack_exception_without_kwargs(self): wrapper = fault.FaultWrapper(None) msg = wrapper._error(heat_exc.StackResourceLimitExceeded()) expected = {'code': 500, 'error': {'message': 'Maximum resources ' 'per stack exceeded.', 'traceback': None, 'type': 'StackResourceLimitExceeded'}, 'explanation': 'The server has either erred or is ' 'incapable of performing the requested ' 'operation.', 'title': 'Internal Server Error'} self.assertEqual(expected, msg) def test_exception_with_non_ascii_chars(self): # We set debug to true to test the code path for serializing traces too cfg.CONF.set_override('debug', True) msg = u'Error with non-ascii chars \x80' class TestException(heat_exc.HeatException): msg_fmt = msg wrapper = fault.FaultWrapper(None) msg = wrapper._error(TestException()) expected = {'code': 500, 'error': {'message': u'Error with non-ascii chars \x80', 'traceback': 'None\n', 'type': 'TestException'}, 'explanation': ('The server has either erred or is ' 'incapable of performing the requested ' 'operation.'), 'title': 'Internal Server Error'} self.assertEqual(expected, msg) def test_remote_exception(self): # We want tracebacks cfg.CONF.set_override('debug', True) error = heat_exc.EntityNotFound(entity='Stack', name='a') exc_info = (type(error), error, None) serialized = rpc_common.serialize_remote_exception(exc_info) remote_error = rpc_common.deserialize_remote_exception( serialized, ["heat.common.exception"]) wrapper = fault.FaultWrapper(None) msg = wrapper._error(remote_error) expected_message, 
expected_traceback = six.text_type( remote_error).split('\n', 1) expected = {'code': 404, 'error': {'message': expected_message, 'traceback': expected_traceback, 'type': 'EntityNotFound'}, 'explanation': 'The resource could not be found.', 'title': 'Not Found'} self.assertEqual(expected, msg) def remote_exception_helper(self, name, error): if six.PY3: error.args = () exc_info = (type(error), error, None) serialized = rpc_common.serialize_remote_exception(exc_info) remote_error = rpc_common.deserialize_remote_exception( serialized, name) wrapper = fault.FaultWrapper(None) msg = wrapper._error(remote_error) expected = {'code': 500, 'error': {'traceback': None, 'type': 'RemoteError'}, 'explanation': msg['explanation'], 'title': 'Internal Server Error'} self.assertEqual(expected, msg) def test_all_remote_exceptions(self): for name, obj in inspect.getmembers( heat_exc, lambda x: inspect.isclass(x) and issubclass( x, heat_exc.HeatException)): if '__init__' in obj.__dict__: if obj == heat_exc.HeatException: # manually ignore baseclass continue elif obj == heat_exc.Error: error = obj('Error') elif obj == heat_exc.NotFound: error = obj() elif obj == heat_exc.ResourceFailure: exc = heat_exc.Error(_('Error')) error = obj(exc, None, 'CREATE') elif obj == heat_exc.ResourcePropertyConflict: error = obj('%s' % 'a test prop') else: continue self.remote_exception_helper(name, error) continue if hasattr(obj, 'msg_fmt'): kwargs = {} spec_names = re.findall(r'%\((\w+)\)([cdeEfFgGinorsxX])', obj.msg_fmt) for key, convtype in spec_names: if convtype == 'r' or convtype == 's': kwargs[key] = '"' + key + '"' else: # this is highly unlikely raise Exception("test needs additional conversion" " type added due to %s exception" " using '%c' specifier" % (obj, convtype)) error = obj(**kwargs) self.remote_exception_helper(name, error) def test_should_not_ignore_parent_classes(self): wrapper = fault.FaultWrapper(None) msg = wrapper._error(StackNotFoundChild(entity='Stack', name='a')) expected = {'code': 404, 'error': {'message': 'The Stack (a) could not be found.', 'traceback': None, 'type': 'StackNotFoundChild'}, 'explanation': 'The resource could not be found.', 'title': 'Not Found'} self.assertEqual(expected, msg) def test_internal_server_error_when_exception_and_parents_not_mapped(self): wrapper = fault.FaultWrapper(None) class NotMappedException(Exception): pass msg = wrapper._error(NotMappedException('A message')) expected = {'code': 500, 'error': {'traceback': None, 'type': 'NotMappedException'}, 'explanation': ('The server has either erred or is ' 'incapable of performing the requested ' 'operation.'), 'title': 'Internal Server Error'} self.assertEqual(expected, msg) def test_should_not_ignore_parent_classes_even_for_remote_ones(self): # We want tracebacks cfg.CONF.set_override('debug', True) error = StackNotFoundChild(entity='Stack', name='a') exc_info = (type(error), error, None) serialized = rpc_common.serialize_remote_exception(exc_info) remote_error = rpc_common.deserialize_remote_exception( serialized, ["heat.tests.test_fault_middleware"]) wrapper = fault.FaultWrapper(None) msg = wrapper._error(remote_error) expected_message, expected_traceback = six.text_type( remote_error).split('\n', 1) expected = {'code': 404, 'error': {'message': expected_message, 'traceback': expected_traceback, 'type': 'StackNotFoundChild'}, 'explanation': 'The resource could not be found.', 'title': 'Not Found'} self.assertEqual(expected, msg) heat-10.0.2/heat/tests/test_function.py0000666000175000017500000003023513343562340020066
0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import uuid import six from heat.common import exception from heat.common.i18n import _ from heat.engine.cfn import functions from heat.engine import environment from heat.engine import function from heat.engine import resource from heat.engine import rsrc_defn from heat.engine import stack from heat.engine import stk_defn from heat.engine import template from heat.tests import common from heat.tests import utils class TestFunction(function.Function): def validate(self): if len(self.args) < 2: raise TypeError(_('Need more arguments')) def dependencies(self, path): return ['foo', 'bar'] def result(self): return 'wibble' class TestFunctionKeyError(function.Function): def result(self): raise TypeError class TestFunctionValueError(function.Function): def result(self): raise ValueError class TestFunctionResult(function.Function): def result(self): return super(TestFunctionResult, self).result() class FunctionTest(common.HeatTestCase): def test_equal(self): func = TestFunction(None, 'foo', ['bar', 'baz']) self.assertTrue(func == 'wibble') self.assertTrue('wibble' == func) def test_not_equal(self): func = TestFunction(None, 'foo', ['bar', 'baz']) self.assertTrue(func != 'foo') self.assertTrue('foo' != func) def test_equal_func(self): func1 = TestFunction(None, 'foo', ['bar', 'baz']) func2 = TestFunction(None, 'blarg', ['wibble', 'quux']) self.assertTrue(func1 == func2) def test_function_str_value(self): func1 = TestFunction(None, 'foo', ['bar', 'baz']) expected = '%s %s' % ("<heat.tests.test_function.TestFunction", "{foo: ['bar', 'baz']} -> 'wibble'>") self.assertEqual(expected, six.text_type(func1)) def test_function_stack_reference_none(self): func1 = TestFunction(None, 'foo', ['bar', 'baz']) self.assertIsNone(func1.stack) def test_function_exception_key_error(self): func1 = TestFunctionKeyError(None, 'foo', ['bar', 'baz']) expected = '%s %s' % ("<heat.tests.test_function.TestFunctionKeyError", "{foo: ['bar', 'baz']} -> ???>") self.assertEqual(expected, six.text_type(func1)) def test_function_eq_exception_key_error(self): func1 = TestFunctionKeyError(None, 'foo', ['bar', 'baz']) func2 = TestFunctionKeyError(None, 'foo', ['bar', 'baz']) result = func1.__eq__(func2) self.assertEqual(result, NotImplemented) def test_function_ne_exception_key_error(self): func1 = TestFunctionKeyError(None, 'foo', ['bar', 'baz']) func2 = TestFunctionKeyError(None, 'foo', ['bar', 'baz']) result = func1.__ne__(func2) self.assertEqual(result, NotImplemented) def test_function_exception_value_error(self): func1 = TestFunctionValueError(None, 'foo', ['bar', 'baz']) expected = '%s %s' % ( "<heat.tests.test_function.TestFunctionValueError", "{foo: ['bar', 'baz']} -> ???>") self.assertEqual(expected, six.text_type(func1)) def test_function_eq_exception_value_error(self): func1 = TestFunctionValueError(None, 'foo', ['bar', 'baz']) func2 = TestFunctionValueError(None, 'foo', ['bar', 'baz']) result = func1.__eq__(func2) self.assertEqual(result, NotImplemented) def test_function_ne_exception_value_error(self): func1 = TestFunctionValueError(None, 'foo', ['bar', 'baz']) func2 = TestFunctionValueError(None, 'foo', ['bar', 'baz']) result = func1.__ne__(func2)
self.assertEqual(result, NotImplemented) def test_function_abstract_result(self): func1 = TestFunctionResult(None, 'foo', ['bar', 'baz']) expected = '%s %s -> %s' % ( "<heat.tests.test_function.TestFunctionResult", "{foo: ['bar', 'baz']}", "???>") self.assertEqual(expected, six.text_type(func1)) def test_copy(self): func = TestFunction(None, 'foo', ['bar', 'baz']) self.assertEqual({'foo': ['bar', 'baz']}, copy.deepcopy(func)) class ResolveTest(common.HeatTestCase): def test_resolve_func(self): func = TestFunction(None, 'foo', ['bar', 'baz']) result = function.resolve(func) self.assertEqual('wibble', result) self.assertIsInstance(result, str) def test_resolve_dict(self): func = TestFunction(None, 'foo', ['bar', 'baz']) snippet = {'foo': 'bar', 'blarg': func} result = function.resolve(snippet) self.assertEqual({'foo': 'bar', 'blarg': 'wibble'}, result) self.assertIsNot(result, snippet) def test_resolve_list(self): func = TestFunction(None, 'foo', ['bar', 'baz']) snippet = ['foo', 'bar', 'baz', 'blarg', func] result = function.resolve(snippet) self.assertEqual(['foo', 'bar', 'baz', 'blarg', 'wibble'], result) self.assertIsNot(result, snippet) def test_resolve_all(self): func = TestFunction(None, 'foo', ['bar', 'baz']) snippet = ['foo', {'bar': ['baz', {'blarg': func}]}] result = function.resolve(snippet) self.assertEqual(['foo', {'bar': ['baz', {'blarg': 'wibble'}]}], result) self.assertIsNot(result, snippet) class ValidateTest(common.HeatTestCase): def setUp(self): super(ValidateTest, self).setUp() self.func = TestFunction(None, 'foo', ['bar', 'baz']) def test_validate_func(self): self.assertIsNone(function.validate(self.func)) self.func = TestFunction(None, 'foo', ['bar']) self.assertRaisesRegex(exception.StackValidationFailed, 'foo: Need more arguments', function.validate, self.func) def test_validate_dict(self): snippet = {'foo': 'bar', 'blarg': self.func} function.validate(snippet) self.func = TestFunction(None, 'foo', ['bar']) snippet = {'foo': 'bar', 'blarg': self.func} self.assertRaisesRegex(exception.StackValidationFailed, 'blarg.foo: Need more arguments', function.validate, snippet) def test_validate_list(self): snippet = ['foo', 'bar', 'baz', 'blarg', self.func] function.validate(snippet) self.func = TestFunction(None, 'foo', ['bar']) snippet = {'foo': 'bar', 'blarg': self.func} self.assertRaisesRegex(exception.StackValidationFailed, 'blarg.foo: Need more arguments', function.validate, snippet) def test_validate_all(self): snippet = ['foo', {'bar': ['baz', {'blarg': self.func}]}] function.validate(snippet) self.func = TestFunction(None, 'foo', ['bar']) snippet = {'foo': 'bar', 'blarg': self.func} self.assertRaisesRegex(exception.StackValidationFailed, 'blarg.foo: Need more arguments', function.validate, snippet) class DependenciesTest(common.HeatTestCase): func = TestFunction(None, 'test', None) scenarios = [ ('function', dict(snippet=func)), ('nested_map', dict(snippet={'wibble': func})), ('nested_list', dict(snippet=['wibble', func])), ('deep_nested', dict(snippet=[{'wibble': ['wibble', func]}])), ] def test_dependencies(self): deps = list(function.dependencies(self.snippet)) self.assertIn('foo', deps) self.assertIn('bar', deps) self.assertEqual(2, len(deps)) class ValidateGetAttTest(common.HeatTestCase): def setUp(self): super(ValidateGetAttTest, self).setUp() env = environment.Environment() env.load({u'resource_registry': {u'OS::Test::GenericResource': u'GenericResourceType'}}) env.load({u'resource_registry': {u'OS::Test::FakeResource': u'OverwrittenFnGetAttType'}}) tmpl = template.Template({"HeatTemplateFormatVersion": "2012-12-12", "Resources": {
"test_rsrc": { "Type": "OS::Test::GenericResource" }, "get_att_rsrc": { "Type": "OS::Heat::Value", "Properties": { "value": { "Fn::GetAtt": ["test_rsrc", "Foo"] } } } }}, env=env) self.stack = stack.Stack( utils.dummy_context(), 'test_stack', tmpl, stack_id=str(uuid.uuid4())) self.rsrc = self.stack['test_rsrc'] self.stack.validate() def test_resource_is_appear_in_stack(self): func = functions.GetAtt(self.stack.defn, 'Fn::GetAtt', [self.rsrc.name, 'Foo']) self.assertIsNone(func.validate()) def test_resource_is_not_appear_in_stack(self): self.stack.remove_resource(self.rsrc.name) func = functions.GetAtt(self.stack.defn, 'Fn::GetAtt', [self.rsrc.name, 'Foo']) ex = self.assertRaises(exception.InvalidTemplateReference, func.validate) self.assertEqual('The specified reference "test_rsrc" (in unknown) ' 'is incorrect.', six.text_type(ex)) def test_resource_no_attribute_with_default_fn_get_att(self): res_defn = rsrc_defn.ResourceDefinition('test_rsrc', 'ResWithStringPropAndAttr') self.rsrc = resource.Resource('test_rsrc', res_defn, self.stack) self.stack.add_resource(self.rsrc) stk_defn.update_resource_data(self.stack.defn, self.rsrc.name, self.rsrc.node_data()) self.stack.validate() func = functions.GetAtt(self.stack.defn, 'Fn::GetAtt', [self.rsrc.name, 'Bar']) ex = self.assertRaises(exception.InvalidTemplateAttribute, func.validate) self.assertEqual('The Referenced Attribute (test_rsrc Bar) ' 'is incorrect.', six.text_type(ex)) def test_resource_no_attribute_with_overwritten_fn_get_att(self): res_defn = rsrc_defn.ResourceDefinition('test_rsrc', 'OS::Test::FakeResource') self.rsrc = resource.Resource('test_rsrc', res_defn, self.stack) self.rsrc.attributes_schema = {} self.stack.add_resource(self.rsrc) stk_defn.update_resource_data(self.stack.defn, self.rsrc.name, self.rsrc.node_data()) self.stack.validate() func = functions.GetAtt(self.stack.defn, 'Fn::GetAtt', [self.rsrc.name, 'Foo']) self.assertIsNone(func.validate()) def test_get_attr_without_attribute_name(self): ex = self.assertRaises(ValueError, functions.GetAtt, self.stack.defn, 'Fn::GetAtt', [self.rsrc.name]) self.assertEqual('Arguments to "Fn::GetAtt" must be ' 'of the form [resource_name, attribute]', six.text_type(ex)) heat-10.0.2/heat/tests/test_short_id.py0000666000175000017500000000537413343562340020062 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import uuid from heat.common import short_id from heat.tests import common class ShortIdTest(common.HeatTestCase): def test_byte_string_8(self): self.assertEqual(b'\xab', short_id._to_byte_string(0xab, 8)) self.assertEqual(b'\x05', short_id._to_byte_string(0x05, 8)) def test_byte_string_16(self): self.assertEqual(b'\xab\xcd', short_id._to_byte_string(0xabcd, 16)) self.assertEqual(b'\x0a\xbc', short_id._to_byte_string(0xabc, 16)) def test_byte_string_12(self): self.assertEqual(b'\xab\xc0', short_id._to_byte_string(0xabc, 12)) self.assertEqual(b'\x0a\xb0', short_id._to_byte_string(0x0ab, 12)) def test_byte_string_60(self): val = 0x111111111111111 byte_string = short_id._to_byte_string(val, 60) self.assertEqual(b'\x11\x11\x11\x11\x11\x11\x11\x10', byte_string) def test_get_id_string(self): id = short_id.get_id('11111111-1111-4111-bfff-ffffffffffff') self.assertEqual('ceirceirceir', id) def test_get_id_uuid_1(self): source = uuid.UUID('11111111-1111-4111-bfff-ffffffffffff') self.assertEqual(0x111111111111111, source.time) self.assertEqual('ceirceirceir', short_id.get_id(source)) def test_get_id_uuid_f(self): source = uuid.UUID('ffffffff-ffff-4fff-8000-000000000000') self.assertEqual('777777777777', short_id.get_id(source)) def test_get_id_uuid_0(self): source = uuid.UUID('00000000-0000-4000-bfff-ffffffffffff') self.assertEqual('aaaaaaaaaaaa', short_id.get_id(source)) def test_get_id_uuid_endianness(self): source = uuid.UUID('ffffffff-00ff-4000-aaaa-aaaaaaaaaaaa') self.assertEqual('aaaa77777777', short_id.get_id(source)) def test_get_id_uuid1(self): source = uuid.uuid1() self.assertRaises(ValueError, short_id.get_id, source) def test_generate_ids(self): allowed_chars = [ord(c) for c in u'abcdefghijklmnopqrstuvwxyz234567'] ids = [short_id.generate_id() for i in range(25)] for id in ids: self.assertEqual(12, len(id)) self.assertFalse(id.translate({c: None for c in allowed_chars})) self.assertEqual(1, ids.count(id)) heat-10.0.2/heat/tests/test_properties_group.py0000666000175000017500000000767213343562340021662 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.engine import properties_group as pg from heat.tests import common class TestSchemaSimpleValidation(common.HeatTestCase): scenarios = [ ('correct schema', dict( schema={pg.AND: [['a'], ['b']]}, message=None, )), ('invalid type schema', dict( schema=[{pg.OR: [['a'], ['b']]}], message="Properties group schema incorrectly specified. " "Schema should be a mapping, " "found %s instead." % list, )), ('invalid type subschema', dict( schema={pg.OR: [['a'], ['b'], [{pg.XOR: [['c'], ['d']]}]]}, message='Properties group schema incorrectly specified. List ' 'items should be properties list-type names with format ' '"[prop, prop_child, prop_sub_child, ...]" or nested ' 'properties group schemas.', )), ('several keys schema', dict( schema={pg.OR: [['a'], ['b']], pg.XOR: [['v', 'g']]}, message='Properties group schema incorrectly specified. 
Schema ' 'should be one-key dict.', )), ('several keys subschema', dict( schema={pg.OR: [['a'], ['b'], {pg.XOR: [['c']], pg.OR: ['d']}]}, message='Properties group schema incorrectly specified. ' 'Schema should be one-key dict.', )), ('invalid key schema', dict( schema={'NOT KEY': [['a'], ['b']]}, message='Properties group schema incorrectly specified. ' 'Properties group schema key should be one of the ' 'operators: AND, OR, XOR.', )), ('invalid key subschema', dict( schema={pg.AND: [['a'], {'NOT KEY': [['b']]}]}, message='Properties group schema incorrectly specified. ' 'Properties group schema key should be one of the ' 'operators: AND, OR, XOR.', )), ('invalid value type schema', dict( schema={pg.OR: 'a'}, message="Properties group schema incorrectly specified. " "Schemas' values should be lists of properties names " "or nested schemas.", )), ('invalid value type subschema', dict( schema={pg.OR: [{pg.XOR: 'a'}]}, message="Properties group schema incorrectly specified. " "Schemas' values should be lists of properties names " "or nested schemas.", )), ('invalid prop name schema', dict( schema={pg.OR: ['a', 'b']}, message='Properties group schema incorrectly specified. List ' 'items should be properties list-type names with format ' '"[prop, prop_child, prop_sub_child, ...]" or nested ' 'properties group schemas.', )), ] def test_properties_group_schema_validate(self): if self.message is not None: ex = self.assertRaises(exception.InvalidSchemaError, pg.PropertiesGroup, self.schema) self.assertEqual(self.message, six.text_type(ex)) else: self.assertIsInstance(pg.PropertiesGroup(self.schema), pg.PropertiesGroup) heat-10.0.2/heat/tests/test_dbinstance.py0000666000175000017500000001135113343562340020351 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import template_format from heat.engine import attributes from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests import utils rds_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "RDS Test", "Parameters" : { "KeyName" : { "Description" : "KeyName", "Type" : "String", "Default" : "test" } }, "Resources" : { "DatabaseServer": { "Type": "AWS::RDS::DBInstance", "Properties": { "DBName" : "wordpress", "Engine" : "MySQL", "MasterUsername" : "admin", "DBInstanceClass" : "db.m1.small", "DBSecurityGroups" : [], "AllocatedStorage" : "5", "MasterUserPassword": "admin" } } } } ''' class DBInstance(resource.Resource): """Verify the schema of the new TemplateResource. This is copied from the old DBInstance. 
""" properties_schema = { 'DBSnapshotIdentifier': properties.Schema( properties.Schema.STRING, implemented=False ), 'AllocatedStorage': properties.Schema( properties.Schema.STRING, required=True ), 'AvailabilityZone': properties.Schema( properties.Schema.STRING, implemented=False ), 'BackupRetentionPeriod': properties.Schema( properties.Schema.STRING, implemented=False ), 'DBInstanceClass': properties.Schema( properties.Schema.STRING, required=True ), 'DBName': properties.Schema( properties.Schema.STRING, required=False ), 'DBParameterGroupName': properties.Schema( properties.Schema.STRING, implemented=False ), 'DBSecurityGroups': properties.Schema( properties.Schema.LIST, required=False, default=[] ), 'DBSubnetGroupName': properties.Schema( properties.Schema.STRING, implemented=False ), 'Engine': properties.Schema( properties.Schema.STRING, constraints=[ constraints.AllowedValues(['MySQL']), ], required=True ), 'EngineVersion': properties.Schema( properties.Schema.STRING, implemented=False ), 'LicenseModel': properties.Schema( properties.Schema.STRING, implemented=False ), 'MasterUsername': properties.Schema( properties.Schema.STRING, required=True ), 'MasterUserPassword': properties.Schema( properties.Schema.STRING, required=True ), 'Port': properties.Schema( properties.Schema.STRING, required=False, default='3306' ), 'PreferredBackupWindow': properties.Schema( properties.Schema.STRING, implemented=False ), 'PreferredMaintenanceWindow': properties.Schema( properties.Schema.STRING, implemented=False ), 'MultiAZ': properties.Schema( properties.Schema.BOOLEAN, implemented=False ), } # We only support a couple of the attributes right now attributes_schema = { "Endpoint.Address": attributes.Schema( "Connection endpoint for the database." ), "Endpoint.Port": attributes.Schema( ("The port number on which the database accepts " "connections.") ), } class DBInstanceTest(common.HeatTestCase): def test_dbinstance(self): """Test that Template is parsable and publishes correct properties.""" templ = template.Template(template_format.parse(rds_template)) stack = parser.Stack(utils.dummy_context(), 'test_stack', templ) res = stack['DatabaseServer'] self.assertIsNone(res._validate_against_facade(DBInstance)) heat-10.0.2/heat/tests/test_attributes.py0000666000175000017500000002701513343562340020431 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock import six from heat.engine import attributes from heat.engine import resources from heat.engine import support from heat.tests import common class AttributeSchemaTest(common.HeatTestCase): def test_schema_all(self): d = {'description': 'A attribute'} s = attributes.Schema('A attribute') self.assertEqual(d, dict(s)) d = {'description': 'Another attribute', 'type': 'string'} s = attributes.Schema('Another attribute', type=attributes.Schema.STRING) self.assertEqual(d, dict(s)) def test_all_resource_schemata(self): for resource_type in resources.global_env().get_types(): for schema in six.itervalues(getattr(resource_type, 'attributes_schema', {})): attributes.Schema.from_attribute(schema) def test_from_attribute_new_schema_format(self): s = attributes.Schema('Test description.') self.assertIs(s, attributes.Schema.from_attribute(s)) self.assertEqual('Test description.', attributes.Schema.from_attribute(s).description) s = attributes.Schema('Test description.', type=attributes.Schema.MAP) self.assertIs(s, attributes.Schema.from_attribute(s)) self.assertEqual(attributes.Schema.MAP, attributes.Schema.from_attribute(s).type) def test_schema_support_status(self): schema = { 'foo_sup': attributes.Schema( 'Description1' ), 'bar_dep': attributes.Schema( 'Description2', support_status=support.SupportStatus( support.DEPRECATED, 'Do not use this ever') ) } attrs = attributes.Attributes('test_rsrc', schema, lambda d: d) self.assertEqual(support.SUPPORTED, attrs._attributes['foo_sup'].support_status().status) self.assertEqual(support.DEPRECATED, attrs._attributes['bar_dep'].support_status().status) self.assertEqual('Do not use this ever', attrs._attributes['bar_dep'].support_status().message) class AttributeTest(common.HeatTestCase): """Test the Attribute class.""" def test_as_output(self): """Test that Attribute looks right when viewed as an Output.""" expected = { "Value": {"Fn::GetAtt": ["test_resource", "test1"]}, "Description": "The first test attribute" } attr = attributes.Attribute( "test1", attributes.Schema("The first test attribute")) self.assertEqual(expected, attr.as_output("test_resource")) def test_as_output_hot(self): """Test that Attribute looks right when viewed as an Output.""" expected = { "value": {"get_attr": ["test_resource", "test1"]}, "description": "The first test attribute" } attr = attributes.Attribute( "test1", attributes.Schema("The first test attribute")) self.assertEqual(expected, attr.as_output("test_resource", "hot")) class AttributesTest(common.HeatTestCase): """Test the Attributes class.""" def setUp(self): super(AttributesTest, self).setUp() self.resolver = mock.MagicMock() self.attributes_schema = { "test1": attributes.Schema("Test attrib 1"), "test2": attributes.Schema("Test attrib 2"), "test3": attributes.Schema( "Test attrib 3", cache_mode=attributes.Schema.CACHE_NONE) } def test_get_attribute(self): """Test that we get the attribute values we expect.""" self.resolver.return_value = "value1" attribs = attributes.Attributes('test resource', self.attributes_schema, self.resolver) self.assertEqual("value1", attribs['test1']) self.resolver.assert_called_once_with('test1') def test_attributes_representation(self): """Test that attributes are displayed correct.""" self.resolver.return_value = "value1" attribs = attributes.Attributes('test resource', self.attributes_schema, self.resolver) msg = 'Attributes for test resource:\n\tvalue1\n\tvalue1\n\tvalue1' self.assertEqual(msg, str(attribs)) calls = [ mock.call('test1'), mock.call('test2'), mock.call('test3') ] 
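        # (Editor's aside, not in the original test) Attributes resolves
        # lazily: every __getitem__ goes through the resolver callable
        # supplied at construction, which is why a bare MagicMock can
        # stand in for a resource in these tests. The pattern relied on
        # here, with the names from setUp above:
        #
        #   attribs = attributes.Attributes('r', schema, resolver)
        #   attribs['test1']   # -> resolver('test1'), then cached unless
        #                      #    the schema declares CACHE_NONE, as
        #                      #    test_caching_none below shows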
self.resolver.assert_has_calls(calls, any_order=True) def test_get_attribute_none(self): """Test that we get the attribute values we expect.""" self.resolver.return_value = None attribs = attributes.Attributes('test resource', self.attributes_schema, self.resolver) self.assertIsNone(attribs['test1']) self.resolver.assert_called_once_with('test1') def test_get_attribute_nonexist(self): """Test that we get the attribute values we expect.""" self.resolver.return_value = "value1" attribs = attributes.Attributes('test resource', self.attributes_schema, self.resolver) self.assertRaises(KeyError, attribs.__getitem__, 'not there') self.assertFalse(self.resolver.called) def test_as_outputs(self): """Test that Output format works as expected.""" expected = { "test1": { "Value": {"Fn::GetAtt": ["test_resource", "test1"]}, "Description": "Test attrib 1" }, "test2": { "Value": {"Fn::GetAtt": ["test_resource", "test2"]}, "Description": "Test attrib 2" }, "test3": { "Value": {"Fn::GetAtt": ["test_resource", "test3"]}, "Description": "Test attrib 3" }, "OS::stack_id": { "Value": {"Ref": "test_resource"}, } } MyTestResourceClass = mock.MagicMock() MyTestResourceClass.attributes_schema = { "test1": attributes.Schema("Test attrib 1"), "test2": attributes.Schema("Test attrib 2"), "test3": attributes.Schema("Test attrib 3"), "test4": attributes.Schema( "Test attrib 4", support_status=support.SupportStatus(status=support.HIDDEN)) } self.assertEqual( expected, attributes.Attributes.as_outputs("test_resource", MyTestResourceClass)) def test_as_outputs_hot(self): """Test that Output format works as expected.""" expected = { "test1": { "value": {"get_attr": ["test_resource", "test1"]}, "description": "Test attrib 1" }, "test2": { "value": {"get_attr": ["test_resource", "test2"]}, "description": "Test attrib 2" }, "test3": { "value": {"get_attr": ["test_resource", "test3"]}, "description": "Test attrib 3" }, "OS::stack_id": { "value": {"get_resource": "test_resource"}, } } MyTestResourceClass = mock.MagicMock() MyTestResourceClass.attributes_schema = { "test1": attributes.Schema("Test attrib 1"), "test2": attributes.Schema("Test attrib 2"), "test3": attributes.Schema("Test attrib 3"), "test4": attributes.Schema( "Test attrib 4", support_status=support.SupportStatus(status=support.HIDDEN)) } self.assertEqual( expected, attributes.Attributes.as_outputs("test_resource", MyTestResourceClass, "hot")) def test_caching_local(self): self.resolver.side_effect = ["value1", "value1 changed"] attribs = attributes.Attributes('test resource', self.attributes_schema, self.resolver) self.assertEqual("value1", attribs['test1']) self.assertEqual("value1", attribs['test1']) attribs.reset_resolved_values() self.assertEqual("value1 changed", attribs['test1']) calls = [ mock.call('test1'), mock.call('test1') ] self.resolver.assert_has_calls(calls) def test_caching_none(self): self.resolver.side_effect = ["value3", "value3 changed"] attribs = attributes.Attributes('test resource', self.attributes_schema, self.resolver) self.assertEqual("value3", attribs['test3']) self.assertEqual("value3 changed", attribs['test3']) calls = [ mock.call('test3'), mock.call('test3') ] self.resolver.assert_has_calls(calls) class AttributesTypeTest(common.HeatTestCase): scenarios = [ ('string_type', dict(a_type=attributes.Schema.STRING, value='correct value', invalid_value=[])), ('list_type', dict(a_type=attributes.Schema.LIST, value=[], invalid_value='invalid_value')), ('map_type', dict(a_type=attributes.Schema.MAP, value={}, invalid_value='invalid_value')), 
('integer_type', dict(a_type=attributes.Schema.INTEGER, value=1, invalid_value='invalid_value')), ('boolean_type', dict(a_type=attributes.Schema.BOOLEAN, value=True, invalid_value='invalid_value')), ('boolean_type_string_true', dict(a_type=attributes.Schema.BOOLEAN, value="True", invalid_value='invalid_value')), ('boolean_type_string_false', dict(a_type=attributes.Schema.BOOLEAN, value="false", invalid_value='invalid_value')) ] def test_validate_type(self): resolver = mock.Mock() msg = 'Attribute test1 is not of type %s' % self.a_type attr_schema = attributes.Schema("Test attribute", type=self.a_type) attrs_schema = {'res1': attr_schema} attr = attributes.Attribute("test1", attr_schema) attribs = attributes.Attributes('test res1', attrs_schema, resolver) attribs._validate_type(attr, self.value) self.assertNotIn(msg, self.LOG.output) attribs._validate_type(attr, self.invalid_value) self.assertIn(msg, self.LOG.output) heat-10.0.2/heat/tests/test_hacking.py0000666000175000017500000000336113343562340017645 0ustar zuulzuul00000000000000# Copyright 2016 NTT DATA. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.hacking import checks from heat.tests import common class HackingTestCase(common.HeatTestCase): def test_dict_iteritems(self): self.assertEqual(1, len(list(checks.check_python3_no_iteritems( "obj.iteritems()")))) self.assertEqual(0, len(list(checks.check_python3_no_iteritems( "obj.items()")))) self.assertEqual(0, len(list(checks.check_python3_no_iteritems( "six.iteritems(obj)")))) def test_dict_iterkeys(self): self.assertEqual(1, len(list(checks.check_python3_no_iterkeys( "obj.iterkeys()")))) self.assertEqual(0, len(list(checks.check_python3_no_iterkeys( "obj.keys()")))) self.assertEqual(0, len(list(checks.check_python3_no_iterkeys( "six.iterkeys(obj)")))) def test_dict_itervalues(self): self.assertEqual(1, len(list(checks.check_python3_no_itervalues( "obj.itervalues()")))) self.assertEqual(0, len(list(checks.check_python3_no_itervalues( "obj.values()")))) self.assertEqual(0, len(list(checks.check_python3_no_itervalues( "six.itervalues(obj)")))) heat-10.0.2/heat/tests/test_hot.py0000666000175000017500000043144713343562352017050 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
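
# --- Editor's illustrative aside (not part of the original module) ---
# The hacking tests above drive check functions that follow the standard
# hacking-plugin shape: a generator over one logical line that yields a
# (position, message) tuple per offence and nothing otherwise. A minimal
# self-contained sketch; the regex and message text are illustrative, not
# copied from heat.hacking.checks:
def _demo_hacking_check(logical_line):
    """Editor's sketch of the generator shape driven by the tests above."""
    import re
    if re.search(r"\.iteritems\(\)", logical_line):
        yield (0, "use six.iteritems(obj) or obj.items() "
                  "instead of obj.iteritems()")
# e.g. len(list(_demo_hacking_check("obj.iteritems()"))) == 1, while
# "six.iteritems(obj)" and "obj.items()" yield no offence.
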
import copy import mock import six from heat.common import exception from heat.common import identifier from heat.common import template_format from heat.engine.cfn import functions as cfn_functions from heat.engine.cfn import parameters as cfn_param from heat.engine import conditions from heat.engine import environment from heat.engine import function from heat.engine.hot import functions as hot_functions from heat.engine.hot import parameters as hot_param from heat.engine.hot import template as hot_template from heat.engine import resource from heat.engine import resources from heat.engine import rsrc_defn from heat.engine import stack as parser from heat.engine import stk_defn from heat.engine import template from heat.tests import common from heat.tests import generic_resource as generic_rsrc from heat.tests import utils empty_template = template_format.parse('''{ "HeatTemplateFormatVersion" : "2012-12-12", }''') hot_tpl_empty = template_format.parse(''' heat_template_version: 2013-05-23 ''') hot_juno_tpl_empty = template_format.parse(''' heat_template_version: 2014-10-16 ''') hot_kilo_tpl_empty = template_format.parse(''' heat_template_version: 2015-04-30 ''') hot_liberty_tpl_empty = template_format.parse(''' heat_template_version: 2015-10-15 ''') hot_mitaka_tpl_empty = template_format.parse(''' heat_template_version: 2016-04-08 ''') hot_newton_tpl_empty = template_format.parse(''' heat_template_version: 2016-10-14 ''') hot_ocata_tpl_empty = template_format.parse(''' heat_template_version: 2017-02-24 ''') hot_pike_tpl_empty = template_format.parse(''' heat_template_version: 2017-09-01 ''') hot_tpl_empty_sections = template_format.parse(''' heat_template_version: 2013-05-23 parameters: resources: outputs: ''') hot_tpl_generic_resource = template_format.parse(''' heat_template_version: 2013-05-23 resources: resource1: type: GenericResourceType ''') hot_tpl_generic_resource_20141016 = template_format.parse(''' heat_template_version: 2014-10-16 resources: resource1: type: GenericResourceType ''') hot_tpl_generic_resource_all_attrs = template_format.parse(''' heat_template_version: 2015-10-15 resources: resource1: type: GenericResourceType ''') hot_tpl_complex_attrs_all_attrs = template_format.parse(''' heat_template_version: 2015-10-15 resources: resource1: type: ResourceWithComplexAttributesType ''') hot_tpl_complex_attrs = template_format.parse(''' heat_template_version: 2013-05-23 resources: resource1: type: ResourceWithComplexAttributesType ''') hot_tpl_complex_attrs_20141016 = template_format.parse(''' heat_template_version: 2014-10-16 resources: resource1: type: ResourceWithComplexAttributesType ''') hot_tpl_mapped_props = template_format.parse(''' heat_template_version: 2013-05-23 resources: resource1: type: ResWithComplexPropsAndAttrs resource2: type: ResWithComplexPropsAndAttrs properties: a_list: { get_attr: [ resource1, list] } a_string: { get_attr: [ resource1, string ] } a_map: { get_attr: [ resource1, map] } ''') hot_tpl_mapped_props_all_attrs = template_format.parse(''' heat_template_version: 2015-10-15 resources: resource1: type: ResWithComplexPropsAndAttrs resource2: type: ResWithComplexPropsAndAttrs properties: a_list: { get_attr: [ resource1, list] } a_string: { get_attr: [ resource1, string ] } a_map: { get_attr: [ resource1, map] } ''') class DummyClass(object): metadata = None def metadata_get(self): return self.metadata def metadata_set(self, metadata): self.metadata = metadata class HOTemplateTest(common.HeatTestCase): """Test processing of HOT templates.""" 
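    # (Editor's aside, not in the original test) The hot_*_tpl_empty
    # fixtures above differ only in heat_template_version; on that key
    # template.Template() dispatches to a version-specific class, e.g.
    # (assuming the standard version registry):
    #
    #   template.Template(hot_tpl_empty)          # -> HOTemplate20130523
    #   template.Template(hot_liberty_tpl_empty)  # -> HOTemplate20151015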
    @staticmethod
    def resolve(snippet, template, stack=None):
        return function.resolve(template.parse(stack and stack.defn,
                                               snippet))

    @staticmethod
    def resolve_condition(snippet, template, stack=None):
        return function.resolve(template.parse_condition(stack and stack.defn,
                                                         snippet))

    def test_defaults(self):
        """Test default content behavior of HOT template."""
        tmpl = template.Template(hot_tpl_empty)
        # check if we get the right class
        self.assertIsInstance(tmpl, hot_template.HOTemplate20130523)
        # test getting an invalid section
        self.assertNotIn('foobar', tmpl)
        # test defaults for valid sections
        self.assertEqual('No description', tmpl[tmpl.DESCRIPTION])
        self.assertEqual({}, tmpl[tmpl.RESOURCES])
        self.assertEqual({}, tmpl[tmpl.OUTPUTS])

    def test_defaults_for_empty_sections(self):
        """Test default section's content behavior of HOT template."""
        tmpl = template.Template(hot_tpl_empty_sections)
        # check if we get the right class
        self.assertIsInstance(tmpl, hot_template.HOTemplate20130523)
        # test getting an invalid section
        self.assertNotIn('foobar', tmpl)
        # test defaults for valid sections
        self.assertEqual('No description', tmpl[tmpl.DESCRIPTION])
        self.assertEqual({}, tmpl[tmpl.RESOURCES])
        self.assertEqual({}, tmpl[tmpl.OUTPUTS])

        stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl)

        self.assertIsNone(stack.parameters._validate_user_parameters())
        self.assertIsNone(stack.validate())

    def test_translate_resources_good(self):
        """Test translation of resources into internal engine format."""
        hot_tpl = template_format.parse('''
        heat_template_version: 2013-05-23
        resources:
          resource1:
            type: AWS::EC2::Instance
            properties:
              property1: value1
            metadata:
              foo: bar
            depends_on: dummy
            deletion_policy: dummy
            update_policy:
              foo: bar
        ''')

        expected = {'resource1': {'Type': 'AWS::EC2::Instance',
                                  'Properties': {'property1': 'value1'},
                                  'Metadata': {'foo': 'bar'},
                                  'DependsOn': 'dummy',
                                  'DeletionPolicy': 'dummy',
                                  'UpdatePolicy': {'foo': 'bar'}}}

        tmpl = template.Template(hot_tpl)
        self.assertEqual(expected, tmpl[tmpl.RESOURCES])

    def test_translate_resources_bad_no_data(self):
        """Test translation of resources without any mapping."""
        hot_tpl = template_format.parse("""
        heat_template_version: 2013-05-23
        resources:
          resource1:
        """)
        tmpl = template.Template(hot_tpl)
        error = self.assertRaises(exception.StackValidationFailed,
                                  tmpl.__getitem__, tmpl.RESOURCES)
        self.assertEqual('Each resource must contain a type key.',
                         six.text_type(error))

    def test_translate_resources_bad_type(self):
        """Test translation of resources including invalid keyword."""
        hot_tpl = template_format.parse('''
        heat_template_version: 2013-05-23
        resources:
          resource1:
            Type: AWS::EC2::Instance
            properties:
              property1: value1
            metadata:
              foo: bar
            depends_on: dummy
            deletion_policy: dummy
            update_policy:
              foo: bar
        ''')
        tmpl = template.Template(hot_tpl)
        err = self.assertRaises(exception.StackValidationFailed,
                                tmpl.__getitem__, tmpl.RESOURCES)
        self.assertEqual('"Type" is not a valid keyword '
                         'inside a resource definition',
                         six.text_type(err))

    def test_translate_resources_bad_properties(self):
        """Test translation of resources including invalid keyword."""
        hot_tpl = template_format.parse('''
        heat_template_version: 2013-05-23
        resources:
          resource1:
            type: AWS::EC2::Instance
            Properties:
              property1: value1
            metadata:
              foo: bar
            depends_on: dummy
            deletion_policy: dummy
            update_policy:
              foo: bar
        ''')
        tmpl = template.Template(hot_tpl)
        err = self.assertRaises(exception.StackValidationFailed,
                                tmpl.__getitem__, tmpl.RESOURCES)
        self.assertEqual('"Properties" is not a valid keyword ' 'inside a
resource definition', six.text_type(err)) def test_translate_resources_resources_without_name(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 resources: type: AWS::EC2::Instance properties: property1: value1 metadata: foo: bar depends_on: dummy deletion_policy: dummy ''') tmpl = template.Template(hot_tpl) error = self.assertRaises(exception.StackValidationFailed, tmpl.__getitem__, tmpl.RESOURCES) self.assertEqual('"resources" must contain a map of resource maps. ' 'Found a [%s] instead' % six.text_type, six.text_type(error)) def test_translate_resources_bad_metadata(self): """Test translation of resources including invalid keyword.""" hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 resources: resource1: type: AWS::EC2::Instance properties: property1: value1 Metadata: foo: bar depends_on: dummy deletion_policy: dummy update_policy: foo: bar ''') tmpl = template.Template(hot_tpl) err = self.assertRaises(exception.StackValidationFailed, tmpl.__getitem__, tmpl.RESOURCES) self.assertEqual('"Metadata" is not a valid keyword ' 'inside a resource definition', six.text_type(err)) def test_translate_resources_bad_depends_on(self): """Test translation of resources including invalid keyword.""" hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 resources: resource1: type: AWS::EC2::Instance properties: property1: value1 metadata: foo: bar DependsOn: dummy deletion_policy: dummy update_policy: foo: bar ''') tmpl = template.Template(hot_tpl) err = self.assertRaises(exception.StackValidationFailed, tmpl.__getitem__, tmpl.RESOURCES) self.assertEqual('"DependsOn" is not a valid keyword ' 'inside a resource definition', six.text_type(err)) def test_translate_resources_bad_deletion_policy(self): """Test translation of resources including invalid keyword.""" hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 resources: resource1: type: AWS::EC2::Instance properties: property1: value1 metadata: foo: bar depends_on: dummy DeletionPolicy: dummy update_policy: foo: bar ''') tmpl = template.Template(hot_tpl) err = self.assertRaises(exception.StackValidationFailed, tmpl.__getitem__, tmpl.RESOURCES) self.assertEqual('"DeletionPolicy" is not a valid keyword ' 'inside a resource definition', six.text_type(err)) def test_translate_resources_bad_update_policy(self): """Test translation of resources including invalid keyword.""" hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 resources: resource1: type: AWS::EC2::Instance properties: property1: value1 metadata: foo: bar depends_on: dummy deletion_policy: dummy UpdatePolicy: foo: bar ''') tmpl = template.Template(hot_tpl) err = self.assertRaises(exception.StackValidationFailed, tmpl.__getitem__, tmpl.RESOURCES) self.assertEqual('"UpdatePolicy" is not a valid keyword ' 'inside a resource definition', six.text_type(err)) def test_get_outputs_good(self): """Test get outputs.""" hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 outputs: output1: description: output1 value: value1 ''') expected = {'output1': {'description': 'output1', 'value': 'value1'}} tmpl = template.Template(hot_tpl) self.assertEqual(expected, tmpl[tmpl.OUTPUTS]) def test_get_outputs_bad_no_data(self): """Test get outputs without any mapping.""" hot_tpl = template_format.parse(""" heat_template_version: 2013-05-23 outputs: output1: """) tmpl = template.Template(hot_tpl) error = self.assertRaises(exception.StackValidationFailed, tmpl.__getitem__, tmpl.OUTPUTS) 
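        # (Editor's aside, not in the original test) The outputs rules
        # pinned by this group of tests mirror the resources rules above:
        # the section must be a map of maps, keys are lower-case, and each
        # output needs at least a value, e.g.:
        #
        #   outputs:
        #     output1:
        #       description: output1   # optional
        #       value: value1          # required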
self.assertEqual('Each output must contain a value key.', six.text_type(error)) def test_get_outputs_bad_without_name(self): """Test get outputs without name.""" hot_tpl = template_format.parse(""" heat_template_version: 2013-05-23 outputs: description: wrong output value: value1 """) tmpl = template.Template(hot_tpl) error = self.assertRaises(exception.StackValidationFailed, tmpl.__getitem__, tmpl.OUTPUTS) self.assertEqual('"outputs" must contain a map of output maps. ' 'Found a [%s] instead' % six.text_type, six.text_type(error)) def test_get_outputs_bad_description(self): """Test get outputs with bad description name.""" hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 outputs: output1: Description: output1 value: value1 ''') tmpl = template.Template(hot_tpl) err = self.assertRaises(exception.StackValidationFailed, tmpl.__getitem__, tmpl.OUTPUTS) self.assertIn('Description', six.text_type(err)) def test_get_outputs_bad_value(self): """Test get outputs with bad value name.""" hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 outputs: output1: description: output1 Value: value1 ''') tmpl = template.Template(hot_tpl) err = self.assertRaises(exception.StackValidationFailed, tmpl.__getitem__, tmpl.OUTPUTS) self.assertIn('Value', six.text_type(err)) def test_resource_group_list_join(self): """Test list_join on a ResourceGroup's inner attributes This should not fail during validation (i.e. before the ResourceGroup can return the list of the runtime values. """ hot_tpl = template_format.parse(''' heat_template_version: 2014-10-16 resources: rg: type: OS::Heat::ResourceGroup properties: count: 3 resource_def: type: OS::Nova::Server ''') tmpl = template.Template(hot_tpl) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) snippet = {'list_join': ["\n", {'get_attr': ['rg', 'name']}]} self.assertEqual('', self.resolve(snippet, tmpl, stack)) # test list_join for liberty template hot_tpl['heat_template_version'] = '2015-10-15' tmpl = template.Template(hot_tpl) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) snippet = {'list_join': ["\n", {'get_attr': ['rg', 'name']}]} self.assertEqual('', self.resolve(snippet, tmpl, stack)) # test list join again and update to multiple lists snippet = {'list_join': ["\n", {'get_attr': ['rg', 'name']}, {'get_attr': ['rg', 'name']}]} self.assertEqual('', self.resolve(snippet, tmpl, stack)) def test_deletion_policy_titlecase(self): hot_tpl = template_format.parse(''' heat_template_version: 2016-10-14 resources: del: type: OS::Heat::None deletion_policy: Delete ret: type: OS::Heat::None deletion_policy: Retain snap: type: OS::Heat::None deletion_policy: Snapshot ''') rsrc_defns = template.Template(hot_tpl).resource_definitions(None) self.assertEqual(rsrc_defn.ResourceDefinition.DELETE, rsrc_defns['del'].deletion_policy()) self.assertEqual(rsrc_defn.ResourceDefinition.RETAIN, rsrc_defns['ret'].deletion_policy()) self.assertEqual(rsrc_defn.ResourceDefinition.SNAPSHOT, rsrc_defns['snap'].deletion_policy()) def test_deletion_policy(self): hot_tpl = template_format.parse(''' heat_template_version: 2016-10-14 resources: del: type: OS::Heat::None deletion_policy: delete ret: type: OS::Heat::None deletion_policy: retain snap: type: OS::Heat::None deletion_policy: snapshot ''') rsrc_defns = template.Template(hot_tpl).resource_definitions(None) self.assertEqual(rsrc_defn.ResourceDefinition.DELETE, rsrc_defns['del'].deletion_policy()) self.assertEqual(rsrc_defn.ResourceDefinition.RETAIN, 
rsrc_defns['ret'].deletion_policy()) self.assertEqual(rsrc_defn.ResourceDefinition.SNAPSHOT, rsrc_defns['snap'].deletion_policy()) def test_str_replace(self): """Test str_replace function.""" snippet = {'str_replace': {'template': 'Template var1 string var2', 'params': {'var1': 'foo', 'var2': 'bar'}}} snippet_resolved = 'Template foo string bar' tmpl = template.Template(hot_tpl_empty) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_str_replace_map_param(self): """Test old str_replace function with non-string map param.""" snippet = {'str_replace': {'template': 'jsonvar1', 'params': {'jsonvar1': {'foo': 123}}}} tmpl = template.Template(hot_tpl_empty) ex = self.assertRaises(TypeError, self.resolve, snippet, tmpl) self.assertIn('"str_replace" params must be strings or numbers, ' 'param jsonvar1 is not valid', six.text_type(ex)) def test_liberty_str_replace_map_param(self): """Test str_replace function with non-string map param.""" snippet = {'str_replace': {'template': 'jsonvar1', 'params': {'jsonvar1': {'foo': 123}}}} snippet_resolved = '{"foo": 123}' tmpl = template.Template(hot_liberty_tpl_empty) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_str_replace_list_param(self): """Test old str_replace function with non-string list param.""" snippet = {'str_replace': {'template': 'listvar1', 'params': {'listvar1': ['foo', 123]}}} tmpl = template.Template(hot_tpl_empty) ex = self.assertRaises(TypeError, self.resolve, snippet, tmpl) self.assertIn('"str_replace" params must be strings or numbers, ' 'param listvar1 is not valid', six.text_type(ex)) def test_liberty_str_replace_list_param(self): """Test str_replace function with non-string param.""" snippet = {'str_replace': {'template': 'listvar1', 'params': {'listvar1': ['foo', 123]}}} snippet_resolved = '["foo", 123]' tmpl = template.Template(hot_liberty_tpl_empty) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_str_replace_number(self): """Test str_replace function with numbers.""" snippet = {'str_replace': {'template': 'Template number string bar', 'params': {'number': 1}}} snippet_resolved = 'Template 1 string bar' tmpl = template.Template(hot_tpl_empty) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_str_fn_replace(self): """Test Fn:Replace function.""" snippet = {'Fn::Replace': [{'$var1': 'foo', '$var2': 'bar'}, 'Template $var1 string $var2']} snippet_resolved = 'Template foo string bar' tmpl = template.Template(hot_tpl_empty) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_str_replace_order(self): """Test str_replace function substitution order.""" snippet = {'str_replace': {'template': '1234567890', 'params': {'1': 'a', '12': 'b', '123': 'c', '1234': 'd', '12345': 'e', '123456': 'f', '1234567': 'g'}}} tmpl = template.Template(hot_tpl_empty) self.assertEqual('g890', self.resolve(snippet, tmpl)) def test_str_replace_single_pass(self): """Test that str_replace function does not do double substitution.""" snippet = {'str_replace': {'template': '1234567890', 'params': {'1': 'a', '4': 'd', '8': 'h', '9': 'i', '123': '1', '456': '4', '890': '8', '90': '9'}}} tmpl = template.Template(hot_tpl_empty) self.assertEqual('1478', self.resolve(snippet, tmpl)) def test_str_replace_sort_order(self): """Test str_replace function replacement order.""" snippet = {'str_replace': {'template': '9876543210', 'params': {'987654': 'a', '876543': 'b', '765432': 'c', '654321': 'd', '543210': 'e'}}} tmpl = template.Template(hot_tpl_empty) 
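        # (Editor's aside, not Heat's actual code) One claiming scheme
        # that reproduces the expectations pinned by test_str_replace_order,
        # test_str_replace_single_pass and this test: visit placeholders
        # sorted alphabetically then longest-first, let each claim only
        # still-unclaimed spans of the original template, and splice the
        # replacements in at the end, so output is never rescanned:
        #
        #   def single_pass_replace(template, params):
        #       keys = sorted(sorted(params), key=len, reverse=True)
        #       claimed = []          # (start, end, replacement)
        #       def free(lo, hi):
        #           return all(hi <= s or lo >= e for s, e, _ in claimed)
        #       for key in keys:
        #           start = template.find(key)
        #           while start != -1:
        #               end = start + len(key)
        #               if free(start, end):
        #                   claimed.append((start, end, str(params[key])))
        #               start = template.find(key, start + 1)
        #       out, pos = [], 0
        #       for s, e, rep in sorted(claimed):
        #           out.append(template[pos:s])
        #           out.append(rep)
        #           pos = e
        #       out.append(template[pos:])
        #       return ''.join(out)
        #
        #   # '1234567890' with the single-pass params above -> '1478',
        #   # and '9876543210' with the params of this test -> '9876e'.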
self.assertEqual('9876e', self.resolve(snippet, tmpl)) def test_str_replace_syntax(self): """Test str_replace function syntax. Pass wrong syntax (array instead of dictionary) to function and validate that we get a TypeError. """ snippet = {'str_replace': [{'template': 'Template var1 string var2'}, {'params': {'var1': 'foo', 'var2': 'bar'}}]} tmpl = template.Template(hot_tpl_empty) self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl) def test_str_replace_missing_param(self): """Test str_replace function missing param is OK.""" snippet = {'str_replace': {'template': 'Template var1 string var2', 'params': {'var1': 'foo', 'var2': 'bar', 'var3': 'zed'}}} snippet_resolved = 'Template foo string bar' # older template uses Replace, newer templates use ReplaceJson. # test both. for hot_tpl in (hot_tpl_empty, hot_ocata_tpl_empty): tmpl = template.Template(hot_tpl) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_str_replace_strict_no_missing_param(self): """Test str_replace_strict function no missing params, no problem.""" snippet = {'str_replace_strict': {'template': 'Template var1 var1 s var2 t varvarvar3', 'params': {'var1': 'foo', 'var2': 'bar', 'var3': 'zed', 'var': 'tricky '}}} snippet_resolved = 'Template foo foo s bar t tricky tricky zed' tmpl = template.Template(hot_ocata_tpl_empty) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_str_replace_strict_missing_param(self): """Test str_replace_strict function missing param(s) raises error.""" snippet = {'str_replace_strict': {'template': 'Template var1 string var2', 'params': {'var1': 'foo', 'var2': 'bar', 'var3': 'zed'}}} tmpl = template.Template(hot_ocata_tpl_empty) ex = self.assertRaises(ValueError, self.resolve, snippet, tmpl) self.assertEqual('The following params were not found in the ' 'template: var3', six.text_type(ex)) snippet = {'str_replace_strict': {'template': 'Template var1 string var2', 'params': {'var1': 'foo', 'var2': 'bar', 'var0': 'zed'}}} ex = self.assertRaises(ValueError, self.resolve, snippet, tmpl) self.assertEqual('The following params were not found in the ' 'template: var0', six.text_type(ex)) # str_replace_vstrict has same behaviour snippet = {'str_replace_vstrict': {'template': 'Template var1 string var2', 'params': {'var1': 'foo', 'var2': 'bar', 'var0': 'zed', 'var': 'z', 'longvarname': 'q'}}} tmpl = template.Template(hot_pike_tpl_empty) ex = self.assertRaises(ValueError, self.resolve, snippet, tmpl) self.assertEqual('The following params were not found in the ' 'template: longvarname,var0,var', six.text_type(ex)) def test_str_replace_strict_empty_param_ok(self): """Test str_replace_strict function with empty params.""" snippet = {'str_replace_strict': {'template': 'Template var1 string var2', 'params': {'var1': 'foo', 'var2': ''}}} tmpl = template.Template(hot_ocata_tpl_empty) self.assertEqual('Template foo string ', self.resolve(snippet, tmpl)) def test_str_replace_vstrict_empty_param_not_ok(self): """Test str_replace_vstrict function with empty params. Raise ValueError when any of the params are None or empty. 
""" snippet = {'str_replace_vstrict': {'template': 'Template var1 string var2', 'params': {'var1': 'foo', 'var2': ''}}} tmpl = template.Template(hot_pike_tpl_empty) for val in (None, '', {}, []): snippet['str_replace_vstrict']['params']['var2'] = val ex = self.assertRaises(ValueError, self.resolve, snippet, tmpl) self.assertIn('str_replace_vstrict has an undefined or empty ' 'value for param var2', six.text_type(ex)) def test_str_replace_invalid_param_keys(self): """Test str_replace function parameter keys. Pass wrong parameters to function and verify that we get a KeyError. """ snippet = {'str_replace': {'tmpl': 'Template var1 string var2', 'params': {'var1': 'foo', 'var2': 'bar'}}} tmpl = template.Template(hot_tpl_empty) self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl) snippet = {'str_replace': {'tmpl': 'Template var1 string var2', 'parms': {'var1': 'foo', 'var2': 'bar'}}} ex = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl) self.assertIn('"str_replace" syntax should be str_replace:\\n', six.text_type(ex)) def test_str_replace_strict_invalid_param_keys(self): """Test str_replace function parameter keys. Pass wrong parameters to function and verify that we get a KeyError. """ snippets = [{'str_replace_strict': {'t': 'Template var1 string var2', 'params': {'var1': 'foo', 'var2': 'bar'}}}, {'str_replace_strict': {'template': 'Template var1 string var2', 'param': {'var1': 'foo', 'var2': 'bar'}}}] for snippet in snippets: tmpl = template.Template(hot_ocata_tpl_empty) ex = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl) self.assertIn('"str_replace_strict" syntax should be ' 'str_replace_strict:\\n', six.text_type(ex)) def test_str_replace_invalid_param_types(self): """Test str_replace function parameter values. Pass parameter values of wrong type to function and verify that we get a TypeError. """ snippet = {'str_replace': {'template': 12345, 'params': {'var1': 'foo', 'var2': 'bar'}}} tmpl = template.Template(hot_tpl_empty) self.assertRaises(TypeError, self.resolve, snippet, tmpl) snippet = {'str_replace': {'template': 'Template var1 string var2', 'params': ['var1', 'foo', 'var2', 'bar']}} ex = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl) self.assertIn('str_replace: "str_replace" parameters must be a' ' mapping', six.text_type(ex)) def test_str_replace_invalid_param_type_init(self): """Test str_replace function parameter values. Pass parameter values of wrong type to function and verify that we get a TypeError in the constructor. 
""" args = [['var1', 'foo', 'var2', 'bar'], 'Template var1 string var2'] ex = self.assertRaises( TypeError, cfn_functions.Replace, None, 'Fn::Replace', args) self.assertIn('parameters must be a mapping', six.text_type(ex)) def test_str_replace_ref_get_param(self): """Test str_replace referencing parameters.""" hot_tpl = template_format.parse(''' heat_template_version: 2015-04-30 parameters: p_template: type: string default: foo-replaceme p_params: type: json default: replaceme: success resources: rsrc: type: ResWithStringPropAndAttr properties: a_string: str_replace: template: {get_param: p_template} params: {get_param: p_params} outputs: replaced: value: {get_attr: [rsrc, string]} ''') tmpl = template.Template(hot_tpl) self.stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) self.stack.store() self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) self.stack._update_all_resource_data(False, True) self.assertEqual('foo-success', self.stack.outputs['replaced'].get_value()) def test_get_file(self): """Test get_file function.""" snippet = {'get_file': 'file:///tmp/foo.yaml'} snippet_resolved = 'foo contents' tmpl = template.Template(hot_tpl_empty, files={ 'file:///tmp/foo.yaml': 'foo contents' }) stack = parser.Stack(utils.dummy_context(), 'param_id_test', tmpl) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl, stack)) def test_get_file_not_string(self): """Test get_file function with non-string argument.""" snippet = {'get_file': ['file:///tmp/foo.yaml']} tmpl = template.Template(hot_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'param_id_test', tmpl) notStrErr = self.assertRaises(TypeError, self.resolve, snippet, tmpl, stack) self.assertEqual( 'Argument to "get_file" must be a string', six.text_type(notStrErr)) def test_get_file_missing_files(self): """Test get_file function with no matching key in files section.""" snippet = {'get_file': 'file:///tmp/foo.yaml'} tmpl = template.Template(hot_tpl_empty, files={ 'file:///tmp/bar.yaml': 'bar contents' }) stack = parser.Stack(utils.dummy_context(), 'param_id_test', tmpl) missingErr = self.assertRaises(ValueError, self.resolve, snippet, tmpl, stack) self.assertEqual( ('No content found in the "files" section for ' 'get_file path: file:///tmp/foo.yaml'), six.text_type(missingErr)) def test_get_file_nested_does_not_resolve(self): """Test get_file function does not resolve nested calls.""" snippet = {'get_file': 'file:///tmp/foo.yaml'} snippet_resolved = '{get_file: file:///tmp/bar.yaml}' tmpl = template.Template(hot_tpl_empty, files={ 'file:///tmp/foo.yaml': snippet_resolved, 'file:///tmp/bar.yaml': 'bar content', }) stack = parser.Stack(utils.dummy_context(), 'param_id_test', tmpl) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl, stack)) def test_list_join(self): snippet = {'list_join': [',', ['bar', 'baz']]} snippet_resolved = 'bar,baz' tmpl = template.Template(hot_kilo_tpl_empty) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_join_multiple(self): snippet = {'list_join': [',', ['bar', 'baz'], ['bar2', 'baz2']]} snippet_resolved = 'bar,baz,bar2,baz2' tmpl = template.Template(hot_liberty_tpl_empty) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_list_join_empty_list(self): snippet = {'list_join': [',', []]} snippet_resolved = '' k_tmpl = template.Template(hot_kilo_tpl_empty) self.assertEqual(snippet_resolved, self.resolve(snippet, k_tmpl)) l_tmpl = template.Template(hot_liberty_tpl_empty) 
self.assertEqual(snippet_resolved, self.resolve(snippet, l_tmpl)) def test_join_json(self): snippet = {'list_join': [',', [{'foo': 'json'}, {'foo2': 'json2'}]]} snippet_resolved = '{"foo": "json"},{"foo2": "json2"}' l_tmpl = template.Template(hot_liberty_tpl_empty) self.assertEqual(snippet_resolved, self.resolve(snippet, l_tmpl)) # old versions before liberty don't support to join json k_tmpl = template.Template(hot_kilo_tpl_empty) exc = self.assertRaises(TypeError, self.resolve, snippet, k_tmpl) self.assertEqual("Items to join must be strings not {'foo': 'json'}", six.text_type(exc)) def test_join_object_type_fail(self): not_serializable = object snippet = {'list_join': [',', [not_serializable]]} l_tmpl = template.Template(hot_liberty_tpl_empty) exc = self.assertRaises(TypeError, self.resolve, snippet, l_tmpl) self.assertIn('Items to join must be string, map or list not', six.text_type(exc)) k_tmpl = template.Template(hot_kilo_tpl_empty) exc = self.assertRaises(TypeError, self.resolve, snippet, k_tmpl) self.assertIn("Items to join must be strings", six.text_type(exc)) def test_join_json_fail(self): not_serializable = object snippet = {'list_join': [',', [{'foo': not_serializable}]]} l_tmpl = template.Template(hot_liberty_tpl_empty) exc = self.assertRaises(TypeError, self.resolve, snippet, l_tmpl) self.assertIn('Items to join must be string, map or list', six.text_type(exc)) self.assertIn("failed json serialization", six.text_type(exc)) def test_join_invalid(self): snippet = {'list_join': 'bad'} l_tmpl = template.Template(hot_liberty_tpl_empty) exc = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, l_tmpl) self.assertIn('list_join: Incorrect arguments to "list_join"', six.text_type(exc)) k_tmpl = template.Template(hot_kilo_tpl_empty) exc1 = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, k_tmpl) self.assertIn('list_join: Incorrect arguments to "list_join"', six.text_type(exc1)) def test_join_int_invalid(self): snippet = {'list_join': 5} l_tmpl = template.Template(hot_liberty_tpl_empty) exc = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, l_tmpl) self.assertIn('list_join: Incorrect arguments', six.text_type(exc)) k_tmpl = template.Template(hot_kilo_tpl_empty) exc1 = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, k_tmpl) self.assertIn('list_join: Incorrect arguments', six.text_type(exc1)) def test_join_invalid_value(self): snippet = {'list_join': [',']} l_tmpl = template.Template(hot_liberty_tpl_empty) exc = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, l_tmpl) self.assertIn('list_join: Incorrect arguments to "list_join"', six.text_type(exc)) k_tmpl = template.Template(hot_kilo_tpl_empty) exc1 = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, k_tmpl) self.assertIn('list_join: Incorrect arguments to "list_join"', six.text_type(exc1)) def test_join_invalid_multiple(self): snippet = {'list_join': [',', 'bad', ['foo']]} tmpl = template.Template(hot_liberty_tpl_empty) exc = self.assertRaises(TypeError, self.resolve, snippet, tmpl) self.assertIn('must operate on a list', six.text_type(exc)) def test_merge(self): snippet = {'map_merge': [{'f1': 'b1', 'f2': 'b2'}, {'f1': 'b2'}]} tmpl = template.Template(hot_mitaka_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('b2', resolved['f1']) self.assertEqual('b2', resolved['f2']) def test_merge_none(self): snippet = {'map_merge': [{'f1': 'b1', 'f2': 'b2'}, None]} tmpl = 
template.Template(hot_mitaka_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('b1', resolved['f1']) self.assertEqual('b2', resolved['f2']) def test_merge_invalid(self): snippet = {'map_merge': [{'f1': 'b1', 'f2': 'b2'}, ['f1', 'b2']]} tmpl = template.Template(hot_mitaka_tpl_empty) exc = self.assertRaises(TypeError, self.resolve, snippet, tmpl) self.assertIn('Incorrect arguments', six.text_type(exc)) def test_merge_containing_repeat(self): snippet = {'map_merge': {'repeat': {'template': {'ROLE': 'ROLE'}, 'for_each': {'ROLE': ['role1', 'role2']}}}} tmpl = template.Template(hot_mitaka_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('role1', resolved['role1']) self.assertEqual('role2', resolved['role2']) def test_merge_containing_repeat_with_none(self): snippet = {'map_merge': {'repeat': {'template': {'ROLE': 'ROLE'}, 'for_each': {'ROLE': None}}}} tmpl = template.Template(hot_mitaka_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual({}, resolved) def test_merge_containing_repeat_multi_list_no_nested_loop_with_none(self): snippet = {'map_merge': {'repeat': { 'template': {'ROLE': 'ROLE', 'NAME': 'NAME'}, 'for_each': {'ROLE': None, 'NAME': ['n1', 'n2']}, 'permutations': False}}} tmpl = template.Template(hot_mitaka_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual({}, resolved) def test_merge_containing_repeat_multi_list_no_nested_loop_all_none(self): snippet = {'map_merge': {'repeat': { 'template': {'ROLE': 'ROLE', 'NAME': 'NAME'}, 'for_each': {'ROLE': None, 'NAME': None}, 'permutations': False}}} tmpl = template.Template(hot_mitaka_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual({}, resolved) def test_map_replace(self): snippet = {'map_replace': [{'f1': 'b1', 'f2': 'b2'}, {'keys': {'f1': 'F1'}, 'values': {'b2': 'B2'}}]} tmpl = template.Template(hot_newton_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual({'F1': 'b1', 'f2': 'B2'}, resolved) def test_map_replace_nokeys(self): snippet = {'map_replace': [{'f1': 'b1', 'f2': 'b2'}, {'values': {'b2': 'B2'}}]} tmpl = template.Template(hot_newton_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual({'f1': 'b1', 'f2': 'B2'}, resolved) def test_map_replace_novalues(self): snippet = {'map_replace': [{'f1': 'b1', 'f2': 'b2'}, {'keys': {'f2': 'F2'}}]} tmpl = template.Template(hot_newton_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual({'f1': 'b1', 'F2': 'b2'}, resolved) def test_map_replace_keys_collide_ok_equal(self): # It's OK to replace a key with the same value snippet = {'map_replace': [{'f1': 'b1', 'f2': 'b2'}, {'keys': {'f2': 'f2'}}]} tmpl = template.Template(hot_newton_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual({'f1': 'b1', 'f2': 'b2'}, resolved) def test_map_replace_none_values(self): snippet = {'map_replace': [{'f1': 'b1', 'f2': 'b2'}, {'values': None}]} tmpl = template.Template(hot_newton_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual({'f1': 'b1', 'f2': 'b2'}, resolved) def test_map_replace_none_keys(self): snippet = {'map_replace': [{'f1': 'b1', 'f2': 'b2'}, {'keys': None}]} tmpl = template.Template(hot_newton_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual({'f1': 'b1', 'f2': 'b2'}, resolved) def test_map_replace_unhashable_value(self): snippet = {'map_replace': [{'f1': 'b1', 'f2': []}, {'values': {}}]} tmpl = template.Template(hot_newton_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual({'f1': 'b1', 'f2': []}, resolved) def 
test_map_replace_keys_collide(self): snippet = {'map_replace': [{'f1': 'b1', 'f2': 'b2'}, {'keys': {'f2': 'f1'}}]} tmpl = template.Template(hot_newton_tpl_empty) msg = "key replacement f1 collides with a key in the input map" self.assertRaisesRegex(ValueError, msg, self.resolve, snippet, tmpl) def test_map_replace_replaced_keys_collide(self): snippet = {'map_replace': [{'f1': 'b1', 'f2': 'b2'}, {'keys': {'f1': 'f3', 'f2': 'f3'}}]} tmpl = template.Template(hot_newton_tpl_empty) msg = "key replacement f3 collides with a key in the output map" self.assertRaisesRegex(ValueError, msg, self.resolve, snippet, tmpl) def test_map_replace_invalid_str_arg1(self): snippet = {'map_replace': 'ab'} tmpl = template.Template(hot_newton_tpl_empty) msg = "Incorrect arguments to \"map_replace\" should be:" self.assertRaisesRegex(TypeError, msg, self.resolve, snippet, tmpl) def test_map_replace_invalid_str_arg2(self): snippet = {'map_replace': [{'f1': 'b1', 'f2': 'b2'}, "ab"]} tmpl = template.Template(hot_newton_tpl_empty) msg = ("Incorrect arguments: to \"map_replace\", " "arguments must be a list of maps") self.assertRaisesRegex(TypeError, msg, self.resolve, snippet, tmpl) def test_map_replace_invalid_empty(self): snippet = {'map_replace': []} tmpl = template.Template(hot_newton_tpl_empty) msg = "Incorrect arguments to \"map_replace\" should be:" self.assertRaisesRegex(TypeError, msg, self.resolve, snippet, tmpl) def test_map_replace_invalid_missing1(self): snippet = {'map_replace': [{'f1': 'b1', 'f2': 'b2'}]} tmpl = template.Template(hot_newton_tpl_empty) msg = "Incorrect arguments to \"map_replace\" should be:" self.assertRaisesRegex(TypeError, msg, self.resolve, snippet, tmpl) def test_map_replace_invalid_missing2(self): snippet = {'map_replace': [{'keys': {'f1': 'f3', 'f2': 'f3'}}]} tmpl = template.Template(hot_newton_tpl_empty) msg = "Incorrect arguments to \"map_replace\" should be:" self.assertRaisesRegex(TypeError, msg, self.resolve, snippet, tmpl) def test_map_replace_invalid_wrongkey(self): snippet = {'map_replace': [{'f1': 'b1', 'f2': 'b2'}, {'notkeys': {'f2': 'F2'}}]} tmpl = template.Template(hot_newton_tpl_empty) msg = "Incorrect arguments to \"map_replace\" should be:" self.assertRaisesRegex(ValueError, msg, self.resolve, snippet, tmpl) def test_yaql(self): snippet = {'yaql': {'expression': '$.data.var1.sum()', 'data': {'var1': [1, 2, 3, 4]}}} tmpl = template.Template(hot_newton_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) resolved = self.resolve(snippet, tmpl, stack=stack) self.assertEqual(10, resolved) def test_yaql_list_input(self): snippet = {'yaql': {'expression': '$.data.sum()', 'data': [1, 2, 3, 4]}} tmpl = template.Template(hot_newton_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) resolved = self.resolve(snippet, tmpl, stack=stack) self.assertEqual(10, resolved) def test_yaql_string_input(self): snippet = {'yaql': {'expression': '$.data', 'data': 'whynotastring'}} tmpl = template.Template(hot_newton_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) resolved = self.resolve(snippet, tmpl, stack=stack) self.assertEqual('whynotastring', resolved) def test_yaql_int_input(self): snippet = {'yaql': {'expression': '$.data + 2', 'data': 2}} tmpl = template.Template(hot_newton_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) resolved = self.resolve(snippet, tmpl, stack=stack) self.assertEqual(4, resolved) def test_yaql_bogus_keys(self): snippet = {'yaql': {'expression': '1 + 3', 'data': 
{'var1': [1, 2, 3, 4]}, 'bogus': ""}} tmpl = template.Template(hot_newton_tpl_empty) self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl) def test_yaql_invalid_syntax(self): snippet = {'yaql': {'wrong': 'wrong_expr', 'wrong_data': 'mustbeamap'}} tmpl = template.Template(hot_newton_tpl_empty) self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl) def test_yaql_non_map_args(self): snippet = {'yaql': 'invalid'} tmpl = template.Template(hot_newton_tpl_empty) msg = 'yaql: Arguments to "yaql" must be a map.' self.assertRaisesRegex(exception.StackValidationFailed, msg, self.resolve, snippet, tmpl) def test_yaql_invalid_expression(self): snippet = {'yaql': {'expression': 'invalid(', 'data': {'var1': [1, 2, 3, 4]}}} tmpl = template.Template(hot_newton_tpl_empty) yaql = tmpl.parse(None, snippet) regxp = ('yaql: Bad expression Parse error: unexpected end ' 'of statement.') self.assertRaisesRegex(exception.StackValidationFailed, regxp, function.validate, yaql) def test_yaql_data_as_function(self): snippet = {'yaql': {'expression': '$.data.var1.len()', 'data': {'var1': {'list_join': ['', ['1', '2']]}}}} tmpl = template.Template(hot_newton_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) resolved = self.resolve(snippet, tmpl, stack=stack) self.assertEqual(2, resolved) def test_yaql_merge(self): snippet = {'yaql': {'expression': '$.data.d.reduce($1.mergeWith($2))', 'data': {'d': [{'a': [1]}, {'a': [2]}, {'a': [3]}]}}} tmpl = template.Template(hot_newton_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) resolved = self.resolve(snippet, tmpl, stack=stack) self.assertEqual({'a': [1, 2, 3]}, resolved) def test_yaql_as_condition(self): hot_tpl = template_format.parse(''' heat_template_version: pike parameters: ServiceNames: type: comma_delimited_list default: ['neutron', 'heat'] ''') snippet = { 'yaql': { 'expression': '$.data.service_names.contains("neutron")', 'data': {'service_names': {'get_param': 'ServiceNames'}}}} # when param 'ServiceNames' contains 'neutron', # equals function resolve to true tmpl = template.Template(hot_tpl) stack = parser.Stack(utils.dummy_context(), 'test_condition_yaql_true', tmpl) resolved = self.resolve_condition(snippet, tmpl, stack) self.assertTrue(resolved) # when param 'ServiceNames' doesn't contain 'neutron', # equals function resolve to false tmpl = template.Template( hot_tpl, env=environment.Environment( {'ServiceNames': ['nova_network', 'heat']})) stack = parser.Stack(utils.dummy_context(), 'test_condition_yaql_false', tmpl) resolved = self.resolve_condition(snippet, tmpl, stack) self.assertFalse(resolved) def test_equals(self): hot_tpl = template_format.parse(''' heat_template_version: 2016-10-14 parameters: env_type: type: string default: 'test' ''') snippet = {'equals': [{'get_param': 'env_type'}, 'prod']} # when param 'env_type' is 'test', equals function resolve to false tmpl = template.Template(hot_tpl) stack = parser.Stack(utils.dummy_context(), 'test_equals_false', tmpl) resolved = self.resolve_condition(snippet, tmpl, stack) self.assertFalse(resolved) # when param 'env_type' is 'prod', equals function resolve to true tmpl = template.Template(hot_tpl, env=environment.Environment( {'env_type': 'prod'})) stack = parser.Stack(utils.dummy_context(), 'test_equals_true', tmpl) resolved = self.resolve_condition(snippet, tmpl, stack) self.assertTrue(resolved) def test_equals_invalid_args(self): tmpl = template.Template(hot_newton_tpl_empty) snippet = {'equals': ['test', 
'prod', 'invalid']} exc = self.assertRaises(exception.StackValidationFailed, self.resolve_condition, snippet, tmpl) error_msg = ('equals: Arguments to "equals" must be ' 'of the form: [value_1, value_2]') self.assertIn(error_msg, six.text_type(exc)) snippet = {'equals': "invalid condition"} exc = self.assertRaises(exception.StackValidationFailed, self.resolve_condition, snippet, tmpl) self.assertIn(error_msg, six.text_type(exc)) def test_equals_with_non_supported_function(self): tmpl = template.Template(hot_newton_tpl_empty) snippet = {'equals': [{'get_attr': [None, 'att1']}, {'get_attr': [None, 'att2']}]} exc = self.assertRaises(exception.StackValidationFailed, self.resolve_condition, snippet, tmpl) self.assertIn('"get_attr" is invalid', six.text_type(exc)) def test_if(self): snippet = {'if': ['create_prod', 'value_if_true', 'value_if_false']} # when condition evaluates to true, if function # resolve to value_if_true tmpl = template.Template(hot_newton_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_if_function', tmpl) with mock.patch.object(tmpl, 'conditions') as conds: conds.return_value = conditions.Conditions({'create_prod': True}) resolved = self.resolve(snippet, tmpl, stack) self.assertEqual('value_if_true', resolved) # when condition evaluates to false, if function # resolve to value_if_false with mock.patch.object(tmpl, 'conditions') as conds: conds.return_value = conditions.Conditions({'create_prod': False}) resolved = self.resolve(snippet, tmpl, stack) self.assertEqual('value_if_false', resolved) def test_if_using_boolean_condition(self): snippet = {'if': [True, 'value_if_true', 'value_if_false']} # when condition is true, if function resolve to value_if_true tmpl = template.Template(hot_newton_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_if_using_boolean_condition', tmpl) resolved = self.resolve(snippet, tmpl, stack) self.assertEqual('value_if_true', resolved) # when condition is false, if function resolve to value_if_false snippet = {'if': [False, 'value_if_true', 'value_if_false']} resolved = self.resolve(snippet, tmpl, stack) self.assertEqual('value_if_false', resolved) def test_if_null_return(self): snippet = {'if': [True, None, 'value_if_false']} # when condition is true, if function resolve to value_if_true tmpl = template.Template(hot_newton_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_if_null_return', tmpl) resolved = self.resolve(snippet, tmpl, stack) self.assertIsNone(resolved) def test_if_using_condition_function(self): tmpl_with_conditions = template_format.parse(''' heat_template_version: 2016-10-14 conditions: create_prod: False ''') snippet = {'if': [{'not': 'create_prod'}, 'value_if_true', 'value_if_false']} tmpl = template.Template(tmpl_with_conditions) stack = parser.Stack(utils.dummy_context(), 'test_if_using_condition_function', tmpl) resolved = self.resolve(snippet, tmpl, stack) self.assertEqual('value_if_true', resolved) def test_if_referenced_by_resource(self): tmpl_with_conditions = template_format.parse(''' heat_template_version: pike conditions: create_prod: False resources: AResource: type: ResourceWithPropsType properties: Foo: if: - create_prod - "one" - "two" ''') tmpl = template.Template(tmpl_with_conditions) self.stack = parser.Stack(utils.dummy_context(), 'test_if_referenced_by_resource', tmpl) self.stack.store() self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) self.assertEqual('two', self.stack['AResource'].properties['Foo']) def 
test_if_referenced_by_resource_null(self): tmpl_with_conditions = template_format.parse(''' heat_template_version: pike conditions: create_prod: True resources: AResource: type: ResourceWithPropsType properties: Foo: if: - create_prod - null - "two" ''') tmpl = template.Template(tmpl_with_conditions) self.stack = parser.Stack(utils.dummy_context(), 'test_if_referenced_by_resource_null', tmpl) self.stack.store() self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) self.assertEqual('', self.stack['AResource'].properties['Foo']) def test_if_invalid_args(self): snippet = {'if': ['create_prod', 'one_value']} tmpl = template.Template(hot_newton_tpl_empty) exc = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl) self.assertIn('Arguments to "if" must be of the form: ' '[condition_name, value_if_true, value_if_false]', six.text_type(exc)) def test_if_condition_name_non_existing(self): snippet = {'if': ['cd_not_existing', 'value_true', 'value_false']} tmpl = template.Template(hot_newton_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_if_function', tmpl) with mock.patch.object(tmpl, 'conditions') as conds: conds.return_value = conditions.Conditions({'create_prod': True}) exc = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl, stack) self.assertIn('Invalid condition "cd_not_existing"', six.text_type(exc)) self.assertIn('if:', six.text_type(exc)) def _test_repeat(self, templ=hot_kilo_tpl_empty): """Test repeat function.""" snippet = {'repeat': {'template': 'this is %var%', 'for_each': {'%var%': ['a', 'b', 'c']}}} snippet_resolved = ['this is a', 'this is b', 'this is c'] tmpl = template.Template(templ) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_repeat(self): self._test_repeat() def test_repeat_with_pike_version(self): self._test_repeat(templ=hot_pike_tpl_empty) def test_repeat_get_param(self): """Test repeat function with get_param function as an argument.""" hot_tpl = template_format.parse(''' heat_template_version: 2015-04-30 parameters: param: type: comma_delimited_list default: 'a,b,c' ''') snippet = {'repeat': {'template': 'this is var%', 'for_each': {'var%': {'get_param': 'param'}}}} snippet_resolved = ['this is a', 'this is b', 'this is c'] tmpl = template.Template(hot_tpl) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl, stack)) def _test_repeat_dict_with_no_replacement(self, templ=hot_newton_tpl_empty): snippet = {'repeat': {'template': {'SERVICE_enabled': True}, 'for_each': {'SERVICE': ['x', 'y', 'z']}}} snippet_resolved = [{'x_enabled': True}, {'y_enabled': True}, {'z_enabled': True}] tmpl = template.Template(templ) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_repeat_dict_with_no_replacement(self): self._test_repeat_dict_with_no_replacement() def test_repeat_dict_with_no_replacement_pike_version(self): self._test_repeat_dict_with_no_replacement(templ=hot_pike_tpl_empty) def _test_repeat_dict_template(self, templ=hot_kilo_tpl_empty): """Test repeat function with a dictionary as a template.""" snippet = {'repeat': {'template': {'key-%var%': 'this is %var%'}, 'for_each': {'%var%': ['a', 'b', 'c']}}} snippet_resolved = [{'key-a': 'this is a'}, {'key-b': 'this is b'}, {'key-c': 'this is c'}] tmpl = template.Template(templ) self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl)) def test_repeat_dict_template(self): 
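        # (Editor's aside, not in the original test) A minimal model of
        # repeat with the default permutations behaviour, for string
        # templates only (Heat also accepts dict/list templates, as the
        # surrounding tests show, and combination order is not guaranteed):
        #
        #   import itertools
        #   def repeat_sketch(template, for_each):
        #       keys = list(for_each)
        #       results = []
        #       for combo in itertools.product(
        #               *(for_each[k] for k in keys)):
        #           s = template
        #           for k, v in zip(keys, combo):
        #               s = s.replace(k, v)
        #           results.append(s)
        #       return results
        #
        #   repeat_sketch('this is %var%', {'%var%': ['a', 'b', 'c']})
        #   # -> ['this is a', 'this is b', 'this is c']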
    def test_repeat_dict_template(self):
        self._test_repeat_dict_template()

    def test_repeat_dict_template_pike_version(self):
        self._test_repeat_dict_template(templ=hot_pike_tpl_empty)

    def _test_repeat_list_template(self, templ=hot_kilo_tpl_empty):
        """Test repeat function with a list as a template."""
        snippet = {'repeat': {'template': ['this is %var%', 'static'],
                              'for_each': {'%var%': ['a', 'b', 'c']}}}
        snippet_resolved = [['this is a', 'static'],
                            ['this is b', 'static'],
                            ['this is c', 'static']]
        tmpl = template.Template(templ)
        self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl))

    def test_repeat_list_template(self):
        self._test_repeat_list_template()

    def test_repeat_list_template_pike_version(self):
        self._test_repeat_list_template(templ=hot_pike_tpl_empty)

    def _test_repeat_multi_list(self, templ=hot_kilo_tpl_empty):
        """Test repeat function with multiple input lists."""
        snippet = {'repeat': {'template': 'this is %var1%-%var2%',
                              'for_each': {'%var1%': ['a', 'b', 'c'],
                                           '%var2%': ['1', '2']}}}
        snippet_resolved = ['this is a-1', 'this is b-1', 'this is c-1',
                            'this is a-2', 'this is b-2', 'this is c-2']
        tmpl = template.Template(templ)
        result = self.resolve(snippet, tmpl)
        self.assertEqual(len(result), len(snippet_resolved))
        for item in result:
            self.assertIn(item, snippet_resolved)

    def test_repeat_multi_list(self):
        self._test_repeat_multi_list()

    def test_repeat_multi_list_pike_version(self):
        self._test_repeat_multi_list(templ=hot_pike_tpl_empty)

    def test_repeat_list_and_map(self):
        """Test repeat function with a list and a map."""
        snippet = {'repeat': {'template': 'this is %var1%-%var2%',
                              'for_each': {'%var1%': ['a', 'b', 'c'],
                                           '%var2%': {'x': 'v', 'y': 'v'}}}}
        snippet_resolved = ['this is a-x', 'this is b-x', 'this is c-x',
                            'this is a-y', 'this is b-y', 'this is c-y']
        tmpl = template.Template(hot_newton_tpl_empty)
        result = self.resolve(snippet, tmpl)
        self.assertEqual(len(result), len(snippet_resolved))
        for item in result:
            self.assertIn(item, snippet_resolved)

    def test_repeat_with_no_nested_loop(self):
        snippet = {'repeat': {'template': {'network': '%net%',
                                           'port': '%port%',
                                           'subnet': '%sub%'},
                              'for_each': {'%net%': ['n1', 'n2', 'n3', 'n4'],
                                           '%port%': ['p1', 'p2', 'p3',
                                                      'p4'],
                                           '%sub%': ['s1', 's2', 's3',
                                                     's4']},
                              'permutations': False}}
        tmpl = template.Template(hot_pike_tpl_empty)
        snippet_resolved = [{'network': 'n1', 'port': 'p1', 'subnet': 's1'},
                            {'network': 'n2', 'port': 'p2', 'subnet': 's2'},
                            {'network': 'n3', 'port': 'p3', 'subnet': 's3'},
                            {'network': 'n4', 'port': 'p4', 'subnet': 's4'}]
        result = self.resolve(snippet, tmpl)
        self.assertEqual(snippet_resolved, result)

    def test_repeat_no_nested_loop_different_len(self):
        snippet = {'repeat': {'template': {'network': '%net%',
                                           'port': '%port%',
                                           'subnet': '%sub%'},
                              'for_each': {'%net%': ['n1', 'n2', 'n3'],
                                           '%port%': ['p1', 'p2'],
                                           '%sub%': ['s1', 's2']},
                              'permutations': False}}
        tmpl = template.Template(hot_pike_tpl_empty)
        self.assertRaises(ValueError, self.resolve, snippet, tmpl)

    def test_repeat_no_nested_loop_with_dict_type(self):
        snippet = {'repeat': {'template': {'network': '%net%',
                                           'port': '%port%',
                                           'subnet': '%sub%'},
                              'for_each': {'%net%': ['n1', 'n2'],
                                           '%port%': {'p1': 'pp',
                                                      'p2': 'qq'},
                                           '%sub%': ['s1', 's2']},
                              'permutations': False}}
        tmpl = template.Template(hot_pike_tpl_empty)
        self.assertRaises(TypeError, self.resolve, snippet, tmpl)

    def test_repeat_permutations_non_bool(self):
        snippet = {'repeat': {'template': {'network': '%net%',
                                           'port': '%port%',
                                           'subnet': '%sub%'},
                              'for_each': {'%net%': ['n1', 'n2'],
                                           '%port%': ['p1', 'p2'],
                                           '%sub%': ['s1', 's2']},
                              'permutations': 'non bool'}}
        tmpl = template.Template(hot_pike_tpl_empty)
        exc = self.assertRaises(exception.StackValidationFailed,
                                self.resolve, snippet, tmpl)
        self.assertIn('"permutations" should be boolean type '
                      'for repeat function', six.text_type(exc))

    def test_repeat_bad_args(self):
        """Test error reporting by the repeat function.

        Test that the repeat function reports a proper error when missing
        or invalid arguments.
        """
        tmpl = template.Template(hot_kilo_tpl_empty)

        # missing for_each
        snippet = {'repeat': {'template': 'this is %var%'}}
        self.assertRaises(exception.StackValidationFailed,
                          self.resolve, snippet, tmpl)

        # misspelled for_each
        snippet = {'repeat': {'template': 'this is %var%',
                              'foreach': {'%var%': ['a', 'b', 'c']}}}
        self.assertRaises(exception.StackValidationFailed,
                          self.resolve, snippet, tmpl)

        # misspelled template
        snippet = {'repeat': {'templte': 'this is %var%',
                              'for_each': {'%var%': ['a', 'b', 'c']}}}
        self.assertRaises(exception.StackValidationFailed,
                          self.resolve, snippet, tmpl)

    def test_repeat_bad_arg_type(self):
        tmpl = template.Template(hot_kilo_tpl_empty)

        # for_each is not a map
        snippet = {'repeat': {'template': 'this is %var%',
                              'for_each': '%var%'}}
        repeat = tmpl.parse(None, snippet)
        regxp = ('repeat: The "for_each" argument to "repeat" '
                 'must contain a map')
        self.assertRaisesRegex(exception.StackValidationFailed,
                               regxp, function.validate, repeat)

    def test_digest(self):
        snippet = {'digest': ['md5', 'foobar']}
        snippet_resolved = '3858f62230ac3c915f300c664312c63f'
        tmpl = template.Template(hot_kilo_tpl_empty)
        self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl))

    def test_digest_invalid_types(self):
        tmpl = template.Template(hot_kilo_tpl_empty)
        invalid_snippets = [
            {'digest': 'invalid'},
            {'digest': {'foo': 'invalid'}},
            {'digest': [123]},
        ]
        for snippet in invalid_snippets:
            exc = self.assertRaises(TypeError, self.resolve, snippet, tmpl)
            self.assertIn('must be a list of strings', six.text_type(exc))

    def test_digest_incorrect_number_arguments(self):
        tmpl = template.Template(hot_kilo_tpl_empty)
        invalid_snippets = [
            {'digest': []},
            {'digest': ['foo']},
            {'digest': ['md5']},
            {'digest': ['md5', 'foo', 'bar']},
        ]
        for snippet in invalid_snippets:
            exc = self.assertRaises(ValueError, self.resolve, snippet, tmpl)
            self.assertIn('usage: ["<algorithm>", "<value>"]',
                          six.text_type(exc))

    def test_digest_invalid_algorithm(self):
        tmpl = template.Template(hot_kilo_tpl_empty)
        snippet = {'digest': ['invalid_algorithm', 'foobar']}
        exc = self.assertRaises(ValueError, self.resolve, snippet, tmpl)
        self.assertIn('Algorithm must be one of', six.text_type(exc))

    def test_str_split(self):
        tmpl = template.Template(hot_liberty_tpl_empty)
        snippet = {'str_split': [',', 'bar,baz']}
        snippet_resolved = ['bar', 'baz']
        self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl))

    def test_str_split_index(self):
        tmpl = template.Template(hot_liberty_tpl_empty)
        snippet = {'str_split': [',', 'bar,baz', 1]}
        snippet_resolved = 'baz'
        self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl))

    def test_str_split_index_str(self):
        tmpl = template.Template(hot_liberty_tpl_empty)
        snippet = {'str_split': [',', 'bar,baz', '1']}
        snippet_resolved = 'baz'
        self.assertEqual(snippet_resolved, self.resolve(snippet, tmpl))

    def test_str_split_index_bad(self):
        tmpl = template.Template(hot_liberty_tpl_empty)
        snippet = {'str_split': [',', 'bar,baz', 'bad']}
        exc = self.assertRaises(ValueError, self.resolve, snippet, tmpl)
        self.assertIn('Incorrect index to "str_split"', six.text_type(exc))
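    # For reference, 'str_split' takes [delimiter, string] plus an optional
    # index, and 'digest' takes [algorithm, value]. A minimal HOT sketch
    # (the values shown are illustrative only):
    #
    #   outputs:
    #     first_az:
    #       value: {str_split: [',', 'az1,az2,az3', 0]}   # -> 'az1'
    #     checksum:
    #       value: {digest: ['md5', 'foobar']}
    #
    # Without the index argument, str_split returns the whole list; a
    # non-integer or out-of-range index raises ValueError, as the
    # adjacent tests assert.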
    def test_str_split_index_out_of_range(self):
        tmpl = template.Template(hot_liberty_tpl_empty)
        snippet = {'str_split': [',', 'bar,baz', '2']}
        exc = self.assertRaises(ValueError, self.resolve, snippet, tmpl)
        expected = 'Incorrect index to "str_split" should be between 0 and 1'
        self.assertEqual(expected, six.text_type(exc))

    def test_str_split_bad_novalue(self):
        tmpl = template.Template(hot_liberty_tpl_empty)
        snippet = {'str_split': [',']}
        exc = self.assertRaises(ValueError, self.resolve, snippet, tmpl)
        self.assertIn('Incorrect arguments to "str_split"',
                      six.text_type(exc))

    def test_str_split_bad_empty(self):
        tmpl = template.Template(hot_liberty_tpl_empty)
        snippet = {'str_split': []}
        exc = self.assertRaises(ValueError, self.resolve, snippet, tmpl)
        self.assertIn('Incorrect arguments to "str_split"',
                      six.text_type(exc))

    def test_str_split_none_string_to_split(self):
        tmpl = template.Template(hot_liberty_tpl_empty)
        snippet = {'str_split': ['.', None]}
        self.assertIsNone(self.resolve(snippet, tmpl))

    def test_str_split_none_delim(self):
        tmpl = template.Template(hot_liberty_tpl_empty)
        snippet = {'str_split': [None, 'check']}
        self.assertEqual(['check'], self.resolve(snippet, tmpl))

    def test_prevent_parameters_access(self):
        """Check parameters section inaccessible using the template as a dict.

        Test that the parameters section can't be accessed using the template
        as a dictionary.
        """
        expected_description = "This can be accessed"
        hot_tpl = template_format.parse('''
heat_template_version: 2013-05-23
description: {0}
parameters:
  foo:
    type: string
'''.format(expected_description))
        tmpl = template.Template(hot_tpl)
        self.assertEqual(expected_description, tmpl['description'])

        err_str = "can not be accessed directly"

        # Hot template test
        keyError = self.assertRaises(KeyError, tmpl.__getitem__, 'parameters')
        self.assertIn(err_str, six.text_type(keyError))

        # CFN template test
        keyError = self.assertRaises(KeyError, tmpl.__getitem__, 'Parameters')
        self.assertIn(err_str, six.text_type(keyError))

    def test_parameters_section_not_iterable(self):
        """Check parameters section is not returned using the template as iter.

        Test that the parameters section is not returned when the template is
        used as an iterable.
        """
        expected_description = "This can be accessed"
        tmpl = template.Template({'heat_template_version': '2013-05-23',
                                  'description': expected_description,
                                  'parameters':
                                  {'foo': {'Type': 'String',
                                           'Required': True}}})
        self.assertEqual(expected_description, tmpl['description'])
        self.assertNotIn('parameters', tmpl.keys())

    def test_invalid_hot_version(self):
        """Test HOT version check.

        Pass an invalid HOT version to template.Template.__new__() and
        validate that we get a ValueError.
        """
        tmpl_str = "heat_template_version: this-ain't-valid"
        hot_tmpl = template_format.parse(tmpl_str)
        self.assertRaises(exception.InvalidTemplateVersion,
                          template.Template, hot_tmpl)

    def test_valid_hot_version(self):
        """Test HOT version check.

        Pass a valid HOT version to template.Template.__new__() and
        validate that we get back a parsed template.
        """
        tmpl_str = "heat_template_version: 2013-05-23"
        hot_tmpl = template_format.parse(tmpl_str)
        parsed_tmpl = template.Template(hot_tmpl)
        expected = ('heat_template_version', '2013-05-23')
        observed = parsed_tmpl.version
        self.assertEqual(expected, observed)

    def test_resource_facade(self):
        metadata_snippet = {'resource_facade': 'metadata'}
        deletion_policy_snippet = {'resource_facade': 'deletion_policy'}
        update_policy_snippet = {'resource_facade': 'update_policy'}

        parent_resource = DummyClass()
        parent_resource.metadata_set({"foo": "bar"})
        parent_resource.t = rsrc_defn.ResourceDefinition(
            'parent', 'SomeType',
            deletion_policy=rsrc_defn.ResourceDefinition.RETAIN,
            update_policy={"blarg": "wibble"})
        tmpl = copy.deepcopy(hot_tpl_empty)
        tmpl['resources'] = {'parent': parent_resource.t.render_hot()}
        parent_resource.stack = parser.Stack(utils.dummy_context(),
                                             'toplevel_stack',
                                             template.Template(tmpl))
        parent_resource.stack._resources = {'parent': parent_resource}

        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(hot_tpl_empty),
                             parent_resource='parent')
        stack.set_parent_stack(parent_resource.stack)
        self.assertEqual({"foo": "bar"},
                         self.resolve(metadata_snippet, stack.t, stack))
        self.assertEqual('Retain',
                         self.resolve(deletion_policy_snippet,
                                      stack.t, stack))
        self.assertEqual({"blarg": "wibble"},
                         self.resolve(update_policy_snippet,
                                      stack.t, stack))

    def test_resource_facade_function(self):
        deletion_policy_snippet = {'resource_facade': 'deletion_policy'}

        parent_resource = DummyClass()
        parent_resource.metadata_set({"foo": "bar"})
        del_policy = hot_functions.Join(None,
                                        'list_join', ['eta', ['R', 'in']])
        parent_resource.t = rsrc_defn.ResourceDefinition(
            'parent', 'SomeType',
            deletion_policy=del_policy)
        tmpl = copy.deepcopy(hot_juno_tpl_empty)
        tmpl['resources'] = {'parent': parent_resource.t.render_hot()}
        parent_resource.stack = parser.Stack(utils.dummy_context(),
                                             'toplevel_stack',
                                             template.Template(tmpl))
        parent_resource.stack._resources = {'parent': parent_resource}

        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(hot_tpl_empty),
                             parent_resource='parent')
        stack.set_parent_stack(parent_resource.stack)
        self.assertEqual('Retain',
                         self.resolve(deletion_policy_snippet,
                                      stack.t, stack))

    def test_resource_facade_invalid_arg(self):
        snippet = {'resource_facade': 'wibble'}
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(hot_tpl_empty))
        error = self.assertRaises(exception.StackValidationFailed,
                                  self.resolve, snippet, stack.t, stack)
        self.assertIn(next(iter(snippet)), six.text_type(error))

    def test_resource_facade_missing_deletion_policy(self):
        snippet = {'resource_facade': 'deletion_policy'}

        parent_resource = DummyClass()
        parent_resource.metadata_set({"foo": "bar"})
        parent_resource.t = rsrc_defn.ResourceDefinition('parent', 'SomeType')
        tmpl = copy.deepcopy(hot_tpl_empty)
        tmpl['resources'] = {'parent': parent_resource.t.render_hot()}
        parent_stack = parser.Stack(utils.dummy_context(), 'toplevel_stack',
                                    template.Template(tmpl))
        parent_stack._resources = {'parent': parent_resource}

        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(hot_tpl_empty),
                             parent_resource='parent')
        stack.set_parent_stack(parent_stack)
        self.assertEqual('Delete', self.resolve(snippet, stack.t, stack))

    def test_removed_function(self):
        snippet = {'Fn::GetAZs': ''}
        stack = parser.Stack(utils.dummy_context(), 'test_stack',
                             template.Template(hot_juno_tpl_empty))
        regxp = 'Fn::GetAZs: The template version is invalid'
self.assertRaisesRegex(exception.StackValidationFailed, regxp, function.validate, stack.t.parse(stack.defn, snippet)) def test_add_resource(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 resources: resource1: type: AWS::EC2::Instance properties: property1: value1 metadata: foo: bar depends_on: - dummy deletion_policy: Retain update_policy: foo: bar resource2: type: AWS::EC2::Instance resource3: type: AWS::EC2::Instance depends_on: - resource1 - dummy - resource2 ''') source = template.Template(hot_tpl) empty = template.Template(copy.deepcopy(hot_tpl_empty)) stack = parser.Stack(utils.dummy_context(), 'test_stack', source) for rname, defn in sorted(source.resource_definitions(stack).items()): empty.add_resource(defn) expected = copy.deepcopy(hot_tpl['resources']) expected['resource1']['depends_on'] = [] expected['resource3']['depends_on'] = ['resource1', 'resource2'] self.assertEqual(expected, empty.t['resources']) def test_add_output(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 outputs: output1: description: An output value: bar ''') source = template.Template(hot_tpl) empty = template.Template(copy.deepcopy(hot_tpl_empty)) stack = parser.Stack(utils.dummy_context(), 'test_stack', source) for defn in six.itervalues(source.outputs(stack)): empty.add_output(defn) self.assertEqual(hot_tpl['outputs'], empty.t['outputs']) def test_filter(self): snippet = {'filter': [[None], [1, None, 4, 2, None]]} tmpl = template.Template(hot_ocata_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) resolved = self.resolve(snippet, tmpl, stack=stack) self.assertEqual([1, 4, 2], resolved) def test_filter_wrong_args_type(self): snippet = {'filter': 'foo'} tmpl = template.Template(hot_ocata_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl, stack=stack) def test_filter_wrong_args_number(self): snippet = {'filter': [[None], [1, 2], 'foo']} tmpl = template.Template(hot_ocata_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl, stack=stack) def test_filter_dict(self): snippet = {'filter': [[None], {'a': 1}]} tmpl = template.Template(hot_ocata_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) self.assertRaises(TypeError, self.resolve, snippet, tmpl, stack=stack) def test_filter_str(self): snippet = {'filter': [['a'], 'abcd']} tmpl = template.Template(hot_ocata_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) self.assertRaises(TypeError, self.resolve, snippet, tmpl, stack=stack) def test_filter_str_values(self): snippet = {'filter': ['abcd', ['a', 'b', 'c', 'd']]} tmpl = template.Template(hot_ocata_tpl_empty) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) self.assertRaises(TypeError, self.resolve, snippet, tmpl, stack=stack) def test_make_url_basic(self): snippet = { 'make_url': { 'scheme': 'http', 'host': 'example.com', 'path': '/foo/bar', } } tmpl = template.Template(hot_pike_tpl_empty) func = tmpl.parse(None, snippet) function.validate(func) resolved = function.resolve(func) self.assertEqual('http://example.com/foo/bar', resolved) def test_make_url_ipv6(self): snippet = { 'make_url': { 'scheme': 'http', 'host': '::1', 'path': '/foo/bar', } } tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('http://[::1]/foo/bar', 
resolved) def test_make_url_ipv6_ready(self): snippet = { 'make_url': { 'scheme': 'http', 'host': '[::1]', 'path': '/foo/bar', } } tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('http://[::1]/foo/bar', resolved) def test_make_url_port_string(self): snippet = { 'make_url': { 'scheme': 'https', 'host': 'example.com', 'port': '80', 'path': '/foo/bar', } } tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('https://example.com:80/foo/bar', resolved) def test_make_url_port_int(self): snippet = { 'make_url': { 'scheme': 'https', 'host': 'example.com', 'port': 80, 'path': '/foo/bar', } } tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('https://example.com:80/foo/bar', resolved) def test_make_url_port_invalid_high(self): snippet = { 'make_url': { 'scheme': 'https', 'host': 'example.com', 'port': 100000, 'path': '/foo/bar', } } tmpl = template.Template(hot_pike_tpl_empty) self.assertRaises(ValueError, self.resolve, snippet, tmpl) def test_make_url_port_invalid_low(self): snippet = { 'make_url': { 'scheme': 'https', 'host': 'example.com', 'port': '0', 'path': '/foo/bar', } } tmpl = template.Template(hot_pike_tpl_empty) self.assertRaises(ValueError, self.resolve, snippet, tmpl) def test_make_url_port_invalid_string(self): snippet = { 'make_url': { 'scheme': 'https', 'host': 'example.com', 'port': '1.1', 'path': '/foo/bar', } } tmpl = template.Template(hot_pike_tpl_empty) self.assertRaises(ValueError, self.resolve, snippet, tmpl) def test_make_url_username(self): snippet = { 'make_url': { 'scheme': 'http', 'username': 'wibble', 'host': 'example.com', 'path': '/foo/bar', } } tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('http://wibble@example.com/foo/bar', resolved) def test_make_url_username_password(self): snippet = { 'make_url': { 'scheme': 'http', 'username': 'wibble', 'password': 'blarg', 'host': 'example.com', 'path': '/foo/bar', } } tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('http://wibble:blarg@example.com/foo/bar', resolved) def test_make_url_query(self): snippet = { 'make_url': { 'scheme': 'http', 'host': 'example.com', 'path': '/foo/?bar', 'query': { 'foo#': 'bar & baz', 'blarg': '/wib=ble/', }, } } tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertIn(resolved, ['http://example.com/foo/%3Fbar' '?foo%23=bar+%26+baz&blarg=/wib%3Dble/', 'http://example.com/foo/%3Fbar' '?blarg=/wib%3Dble/&foo%23=bar+%26+baz']) def test_make_url_fragment(self): snippet = { 'make_url': { 'scheme': 'http', 'host': 'example.com', 'path': 'foo/bar', 'fragment': 'baz' } } tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('http://example.com/foo/bar#baz', resolved) def test_make_url_file(self): snippet = { 'make_url': { 'scheme': 'file', 'path': 'foo/bar' } } tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('file:///foo/bar', resolved) def test_make_url_file_leading_slash(self): snippet = { 'make_url': { 'scheme': 'file', 'path': '/foo/bar' } } tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual('file:///foo/bar', resolved) def test_make_url_bad_args_type(self): snippet = { 'make_url': 'http://example.com/foo/bar' } tmpl = template.Template(hot_pike_tpl_empty) 
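        # For contrast, a well-formed 'make_url' argument is a map, e.g.
        # (host and path values here are illustrative only):
        #
        #   make_url:
        #     scheme: https
        #     host: example.com
        #     port: 8004
        #     path: /v1/stacks
        #
        # which, per the port tests above, would resolve to a string such
        # as 'https://example.com:8004/v1/stacks'. A plain string argument,
        # as in this snippet, must fail validation instead.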
func = tmpl.parse(None, snippet) self.assertRaises(exception.StackValidationFailed, function.validate, func) def test_make_url_invalid_key(self): snippet = { 'make_url': { 'scheme': 'http', 'host': 'example.com', 'foo': 'bar', } } tmpl = template.Template(hot_pike_tpl_empty) func = tmpl.parse(None, snippet) self.assertRaises(exception.StackValidationFailed, function.validate, func) def test_depends_condition(self): hot_tpl = template_format.parse(''' heat_template_version: 2016-10-14 resources: one: type: OS::Heat::None two: type: OS::Heat::None condition: False three: type: OS::Heat::None depends_on: two ''') tmpl = template.Template(hot_tpl) stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl) stack.validate() self.assertEqual({'one', 'three'}, set(stack.resources)) def test_list_concat(self): snippet = {'list_concat': [['v1', 'v2'], ['v3', 'v4']]} snippet_resolved = ['v1', 'v2', 'v3', 'v4'] tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual(snippet_resolved, resolved) def test_list_concat_none(self): snippet = {'list_concat': [['v1', 'v2'], ['v3', 'v4'], None]} snippet_resolved = ['v1', 'v2', 'v3', 'v4'] tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual(snippet_resolved, resolved) def test_list_concat_repeat_dict_item(self): snippet = {'list_concat': [[{'v1': 'v2'}], [{'v1': 'v2'}]]} snippet_resolved = [{'v1': 'v2'}, {'v1': 'v2'}] tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual(snippet_resolved, resolved) def test_list_concat_repeat_item(self): snippet = {'list_concat': [['v1', 'v2'], ['v2', 'v3']]} snippet_resolved = ['v1', 'v2', 'v2', 'v3'] tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual(snippet_resolved, resolved) def test_list_concat_unique_dict_item(self): snippet = {'list_concat_unique': [[{'v1': 'v2'}], [{'v1': 'v2'}]]} snippet_resolved = [{'v1': 'v2'}] tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual(snippet_resolved, resolved) def test_list_concat_unique(self): snippet = {'list_concat_unique': [['v1', 'v2'], ['v2', 'v3']]} snippet_resolved = ['v1', 'v2', 'v3'] tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertEqual(snippet_resolved, resolved) def _test_list_concat_invalid(self, snippet): tmpl = template.Template(hot_pike_tpl_empty) msg = 'Incorrect arguments' exc = self.assertRaises(TypeError, self.resolve, snippet, tmpl) self.assertIn(msg, six.text_type(exc)) def test_list_concat_with_dict_arg(self): snippet = {'list_concat': [{'k1': 'v2'}, ['v3', 'v4']]} self._test_list_concat_invalid(snippet) def test_list_concat_with_string_arg(self): snippet = {'list_concat': 'I am string'} self._test_list_concat_invalid(snippet) def test_list_concat_with_string_item(self): snippet = {'list_concat': ['v1', 'v2']} self._test_list_concat_invalid(snippet) def test_contains_with_list(self): snippet = {'contains': ['v1', ['v1', 'v2']]} tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertTrue(resolved) def test_contains_with_string(self): snippet = {'contains': ['a', 'abc']} tmpl = template.Template(hot_pike_tpl_empty) resolved = self.resolve(snippet, tmpl) self.assertTrue(resolved) def test_contains_with_invalid_args_type(self): snippet = {'contains': {'key': 'value'}} tmpl = template.Template(hot_pike_tpl_empty) exc = 
self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl) msg = 'Incorrect arguments to ' self.assertIn(msg, six.text_type(exc)) def test_contains_with_invalid_args_number(self): snippet = {'contains': ['v1', ['v1', 'v2'], 'redundant']} tmpl = template.Template(hot_pike_tpl_empty) exc = self.assertRaises(exception.StackValidationFailed, self.resolve, snippet, tmpl) msg = 'must be of the form: [value1, [value1, value2]]' self.assertIn(msg, six.text_type(exc)) def test_contains_with_invalid_sequence(self): snippet = {'contains': ['v1', {'key': 'value'}]} tmpl = template.Template(hot_pike_tpl_empty) exc = self.assertRaises(TypeError, self.resolve, snippet, tmpl) msg = 'should be a sequence' self.assertIn(msg, six.text_type(exc)) class HotStackTest(common.HeatTestCase): """Test stack function when stack was created from HOT template.""" def setUp(self): super(HotStackTest, self).setUp() self.tmpl = template.Template(copy.deepcopy(empty_template)) self.ctx = utils.dummy_context() def resolve(self, snippet): return function.resolve(self.stack.t.parse(self.stack.defn, snippet)) def test_repeat_get_attr(self): """Test repeat function with get_attr function as an argument.""" tmpl = template.Template(hot_tpl_complex_attrs_all_attrs) self.stack = parser.Stack(self.ctx, 'test_repeat_get_attr', tmpl) snippet = {'repeat': {'template': 'this is %var%', 'for_each': {'%var%': {'get_attr': ['resource1', 'list']}}}} repeat = self.stack.t.parse(self.stack.defn, snippet) self.stack.store() with mock.patch.object(rsrc_defn.ResourceDefinition, 'dep_attrs') as mock_da: mock_da.return_value = ['list'] self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) self.assertEqual(['this is foo', 'this is bar'], function.resolve(repeat)) def test_get_attr_multiple_rsrc_status(self): """Test resolution of get_attr occurrences in HOT template.""" hot_tpl = hot_tpl_generic_resource self.stack = parser.Stack(self.ctx, 'test_get_attr', template.Template(hot_tpl)) self.stack.store() with mock.patch.object(rsrc_defn.ResourceDefinition, 'dep_attrs') as mock_da: mock_da.return_value = ['foo'] self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) snippet = {'Value': {'get_attr': ['resource1', 'foo']}} rsrc = self.stack['resource1'] for action, status in ( (rsrc.CREATE, rsrc.IN_PROGRESS), (rsrc.CREATE, rsrc.COMPLETE), (rsrc.RESUME, rsrc.IN_PROGRESS), (rsrc.RESUME, rsrc.COMPLETE), (rsrc.UPDATE, rsrc.IN_PROGRESS), (rsrc.UPDATE, rsrc.COMPLETE)): rsrc.state_set(action, status) # GenericResourceType has an attribute 'foo' which yields the # resource name. 
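        # In template form the same lookup would be written (sketch):
        #
        #   outputs:
        #     name:
        #       value: {get_attr: [resource1, foo]}
        #
        # and should resolve to 'resource1' in every action/status pair
        # set above, since attributes stay resolvable across those states.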
self.assertEqual({'Value': 'resource1'}, self.resolve(snippet)) def test_get_attr_invalid(self): """Test resolution of get_attr occurrences in HOT template.""" hot_tpl = hot_tpl_generic_resource self.stack = parser.Stack(self.ctx, 'test_get_attr', template.Template(hot_tpl)) self.stack.store() self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) self.assertRaises(exception.InvalidTemplateAttribute, self.resolve, {'Value': {'get_attr': ['resource1', 'NotThere']}}) def test_get_attr_invalid_resource(self): """Test resolution of get_attr occurrences in HOT template.""" hot_tpl = hot_tpl_complex_attrs self.stack = parser.Stack(self.ctx, 'test_get_attr_invalid_none', template.Template(hot_tpl)) self.stack.store() self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) snippet = {'Value': {'get_attr': ['resource2', 'who_cares']}} self.assertRaises(exception.InvalidTemplateReference, self.resolve, snippet) def test_get_resource(self): """Test resolution of get_resource occurrences in HOT template.""" hot_tpl = hot_tpl_generic_resource self.stack = parser.Stack(self.ctx, 'test_get_resource', template.Template(hot_tpl)) self.stack.store() self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) snippet = {'value': {'get_resource': 'resource1'}} self.assertEqual({'value': 'resource1'}, self.resolve(snippet)) def test_set_param_id(self): tmpl = template.Template(hot_tpl_empty) self.stack = parser.Stack(self.ctx, 'param_id_test', tmpl) self.assertEqual('None', self.stack.parameters['OS::stack_id']) self.stack.store() stack_identifier = self.stack.identifier() self.assertEqual(self.stack.id, self.stack.parameters['OS::stack_id']) self.assertEqual(stack_identifier.stack_id, self.stack.parameters['OS::stack_id']) self.m.VerifyAll() def test_set_wrong_param(self): tmpl = template.Template(hot_tpl_empty) stack_id = identifier.HeatIdentifier('', "stack_testit", None) params = tmpl.parameters(None, {}) self.assertFalse(params.set_stack_id(None)) self.assertTrue(params.set_stack_id(stack_id)) def test_set_param_id_update(self): tmpl = template.Template( {'heat_template_version': '2013-05-23', 'resources': {'AResource': {'type': 'ResourceWithPropsType', 'metadata': {'Bar': {'get_param': 'OS::stack_id'}}, 'properties': {'Foo': 'abc'}}}}) self.stack = parser.Stack(self.ctx, 'update_stack_id_test', tmpl) self.stack.store() self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) stack_id = self.stack.parameters['OS::stack_id'] tmpl2 = template.Template( {'heat_template_version': '2013-05-23', 'resources': {'AResource': {'type': 'ResourceWithPropsType', 'metadata': {'Bar': {'get_param': 'OS::stack_id'}}, 'properties': {'Foo': 'xyz'}}}}) updated_stack = parser.Stack(self.ctx, 'updated_stack', tmpl2) self.stack.update(updated_stack) self.assertEqual((parser.Stack.UPDATE, parser.Stack.COMPLETE), self.stack.state) self.assertEqual('xyz', self.stack['AResource'].properties['Foo']) self.assertEqual(stack_id, self.stack['AResource'].metadata_get()['Bar']) def test_load_param_id(self): tmpl = template.Template(hot_tpl_empty) self.stack = parser.Stack(self.ctx, 'param_load_id_test', tmpl) self.stack.store() stack_identifier = self.stack.identifier() self.assertEqual(stack_identifier.stack_id, self.stack.parameters['OS::stack_id']) newstack = parser.Stack.load(self.ctx, stack_id=self.stack.id) self.assertEqual(stack_identifier.stack_id, 
newstack.parameters['OS::stack_id']) def test_update_modify_param_ok_replace(self): tmpl = { 'heat_template_version': '2013-05-23', 'parameters': { 'foo': {'type': 'string'} }, 'resources': { 'AResource': { 'type': 'ResourceWithPropsType', 'properties': {'Foo': {'get_param': 'foo'}} } } } self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'update_template_diff') self.stack = parser.Stack(self.ctx, 'update_test_stack', template.Template( tmpl, env=environment.Environment( {'foo': 'abc'}))) self.stack.store() self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) updated_stack = parser.Stack(self.ctx, 'updated_stack', template.Template( tmpl, env=environment.Environment( {'foo': 'xyz'}))) def check_props(*args): self.assertEqual('abc', self.stack['AResource'].properties['Foo']) generic_rsrc.ResourceWithProps.update_template_diff( rsrc_defn.ResourceDefinition('AResource', 'ResourceWithPropsType', properties={'Foo': 'xyz'}), rsrc_defn.ResourceDefinition('AResource', 'ResourceWithPropsType', properties={'Foo': 'abc'}) ).WithSideEffects(check_props).AndRaise(resource.UpdateReplace) self.m.ReplayAll() self.stack.update(updated_stack) self.assertEqual((parser.Stack.UPDATE, parser.Stack.COMPLETE), self.stack.state) self.assertEqual('xyz', self.stack['AResource'].properties['Foo']) self.m.VerifyAll() def test_update_modify_files_ok_replace(self): tmpl = { 'heat_template_version': '2013-05-23', 'parameters': {}, 'resources': { 'AResource': { 'type': 'ResourceWithPropsType', 'properties': {'Foo': {'get_file': 'foo'}} } } } self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'update_template_diff') self.stack = parser.Stack(self.ctx, 'update_test_stack', template.Template(tmpl, files={'foo': 'abc'})) self.stack.store() self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) updated_stack = parser.Stack(self.ctx, 'updated_stack', template.Template(tmpl, files={'foo': 'xyz'})) def check_props(*args): self.assertEqual('abc', self.stack['AResource'].properties['Foo']) generic_rsrc.ResourceWithProps.update_template_diff( rsrc_defn.ResourceDefinition('AResource', 'ResourceWithPropsType', properties={'Foo': 'xyz'}), rsrc_defn.ResourceDefinition('AResource', 'ResourceWithPropsType', properties={'Foo': 'abc'}) ).WithSideEffects(check_props).AndRaise(resource.UpdateReplace) self.m.ReplayAll() self.stack.update(updated_stack) self.assertEqual((parser.Stack.UPDATE, parser.Stack.COMPLETE), self.stack.state) self.assertEqual('xyz', self.stack['AResource'].properties['Foo']) self.m.VerifyAll() class StackAttributesTest(common.HeatTestCase): """Test get_attr function when stack was created from HOT template.""" def setUp(self): super(StackAttributesTest, self).setUp() self.ctx = utils.dummy_context() self.m.ReplayAll() scenarios = [ # for hot template 2013-05-23, get_attr: hot_funcs.GetAttThenSelect ('get_flat_attr', dict(hot_tpl=hot_tpl_generic_resource, snippet={'Value': {'get_attr': ['resource1', 'foo']}}, resource_name='resource1', expected={'Value': 'resource1'})), ('get_list_attr', dict(hot_tpl=hot_tpl_complex_attrs, snippet={'Value': {'get_attr': ['resource1', 'list', 0]}}, resource_name='resource1', expected={ 'Value': generic_rsrc.ResourceWithComplexAttributes.list[0]})), ('get_flat_dict_attr', dict(hot_tpl=hot_tpl_complex_attrs, snippet={'Value': {'get_attr': ['resource1', 'flat_dict', 'key2']}}, resource_name='resource1', expected={ 'Value': generic_rsrc.ResourceWithComplexAttributes. 
flat_dict['key2']})), ('get_nested_attr_list', dict(hot_tpl=hot_tpl_complex_attrs, snippet={'Value': {'get_attr': ['resource1', 'nested_dict', 'list', 0]}}, resource_name='resource1', expected={ 'Value': generic_rsrc.ResourceWithComplexAttributes. nested_dict['list'][0]})), ('get_nested_attr_dict', dict(hot_tpl=hot_tpl_complex_attrs, snippet={'Value': {'get_attr': ['resource1', 'nested_dict', 'dict', 'a']}}, resource_name='resource1', expected={ 'Value': generic_rsrc.ResourceWithComplexAttributes. nested_dict['dict']['a']})), ('get_attr_none', dict(hot_tpl=hot_tpl_complex_attrs, snippet={'Value': {'get_attr': ['resource1', 'none', 'who_cares']}}, resource_name='resource1', expected={'Value': None})), # for hot template version 2014-10-16 and 2015-04-30, # get_attr: hot_funcs.GetAtt ('get_flat_attr', dict(hot_tpl=hot_tpl_generic_resource_20141016, snippet={'Value': {'get_attr': ['resource1', 'foo']}}, resource_name='resource1', expected={'Value': 'resource1'})), ('get_list_attr', dict(hot_tpl=hot_tpl_complex_attrs_20141016, snippet={'Value': {'get_attr': ['resource1', 'list', 0]}}, resource_name='resource1', expected={ 'Value': generic_rsrc.ResourceWithComplexAttributes.list[0]})), ('get_flat_dict_attr', dict(hot_tpl=hot_tpl_complex_attrs_20141016, snippet={'Value': {'get_attr': ['resource1', 'flat_dict', 'key2']}}, resource_name='resource1', expected={ 'Value': generic_rsrc.ResourceWithComplexAttributes. flat_dict['key2']})), ('get_nested_attr_list', dict(hot_tpl=hot_tpl_complex_attrs_20141016, snippet={'Value': {'get_attr': ['resource1', 'nested_dict', 'list', 0]}}, resource_name='resource1', expected={ 'Value': generic_rsrc.ResourceWithComplexAttributes. nested_dict['list'][0]})), ('get_nested_attr_dict', dict(hot_tpl=hot_tpl_complex_attrs_20141016, snippet={'Value': {'get_attr': ['resource1', 'nested_dict', 'dict', 'a']}}, resource_name='resource1', expected={ 'Value': generic_rsrc.ResourceWithComplexAttributes. 
nested_dict['dict']['a']})), ('get_attr_none', dict(hot_tpl=hot_tpl_complex_attrs_20141016, snippet={'Value': {'get_attr': ['resource1', 'none', 'who_cares']}}, resource_name='resource1', expected={'Value': None})) ] def test_get_attr(self): """Test resolution of get_attr occurrences in HOT template.""" self.stack = parser.Stack(self.ctx, 'test_get_attr', template.Template(self.hot_tpl)) self.stack.store() parsed = self.stack.t.parse(self.stack.defn, self.snippet) dep_attrs = list(function.dep_attrs(parsed, self.resource_name)) self.stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), self.stack.state) rsrc = self.stack[self.resource_name] for action, status in ( (rsrc.CREATE, rsrc.IN_PROGRESS), (rsrc.CREATE, rsrc.COMPLETE), (rsrc.RESUME, rsrc.IN_PROGRESS), (rsrc.RESUME, rsrc.COMPLETE), (rsrc.SUSPEND, rsrc.IN_PROGRESS), (rsrc.SUSPEND, rsrc.COMPLETE), (rsrc.UPDATE, rsrc.IN_PROGRESS), (rsrc.UPDATE, rsrc.COMPLETE), (rsrc.SNAPSHOT, rsrc.IN_PROGRESS), (rsrc.SNAPSHOT, rsrc.COMPLETE), (rsrc.CHECK, rsrc.IN_PROGRESS), (rsrc.CHECK, rsrc.COMPLETE), (rsrc.ADOPT, rsrc.IN_PROGRESS), (rsrc.ADOPT, rsrc.COMPLETE)): rsrc.state_set(action, status) with mock.patch.object(rsrc_defn.ResourceDefinition, 'dep_attrs') as mock_da: mock_da.return_value = dep_attrs node_data = rsrc.node_data() stk_defn.update_resource_data(self.stack.defn, rsrc.name, node_data) self.assertEqual(self.expected, function.resolve(parsed)) class StackGetAttrValidationTest(common.HeatTestCase): def setUp(self): super(StackGetAttrValidationTest, self).setUp() self.ctx = utils.dummy_context() def test_validate_props_from_attrs(self): stack = parser.Stack(self.ctx, 'test_props_from_attrs', template.Template(hot_tpl_mapped_props)) stack.resources['resource1'].list = None stack.resources['resource1'].map = None stack.resources['resource1'].string = None try: stack.validate() except exception.StackValidationFailed as exc: self.fail("Validation should have passed: %s" % six.text_type(exc)) self.assertEqual([], stack.resources['resource2'].properties['a_list']) self.assertEqual({}, stack.resources['resource2'].properties['a_map']) self.assertEqual('', stack.resources['resource2'].properties['a_string']) def test_validate_props_from_attrs_all_attrs(self): stack = parser.Stack(self.ctx, 'test_props_from_attrs', template.Template(hot_tpl_mapped_props_all_attrs)) stack.resources['resource1'].list = None stack.resources['resource1'].map = None stack.resources['resource1'].string = None try: stack.validate() except exception.StackValidationFailed as exc: self.fail("Validation should have passed: %s" % six.text_type(exc)) self.assertEqual([], stack.resources['resource2'].properties['a_list']) self.assertEqual({}, stack.resources['resource2'].properties['a_map']) self.assertEqual('', stack.resources['resource2'].properties['a_string']) class StackParametersTest(common.HeatTestCase): """Test get_param function when stack was created from HOT template.""" scenarios = [ ('Ref_string', dict(params={'foo': 'bar', 'blarg': 'wibble'}, snippet={'properties': {'prop1': {'Ref': 'foo'}, 'prop2': {'Ref': 'blarg'}}}, expected={'properties': {'prop1': 'bar', 'prop2': 'wibble'}})), ('get_param_string', dict(params={'foo': 'bar', 'blarg': 'wibble'}, snippet={'properties': {'prop1': {'get_param': 'foo'}, 'prop2': {'get_param': 'blarg'}}}, expected={'properties': {'prop1': 'bar', 'prop2': 'wibble'}})), ('get_list_attr', dict(params={'list': 'foo,bar'}, snippet={'properties': {'prop1': {'get_param': ['list', 1]}}}, expected={'properties': {'prop1': 
'bar'}})), ('get_list_attr_string_index', dict(params={'list': 'foo,bar'}, snippet={'properties': {'prop1': {'get_param': ['list', '1']}}}, expected={'properties': {'prop1': 'bar'}})), ('get_flat_dict_attr', dict(params={'flat_dict': {'key1': 'val1', 'key2': 'val2', 'key3': 'val3'}}, snippet={'properties': {'prop1': {'get_param': ['flat_dict', 'key2']}}}, expected={'properties': {'prop1': 'val2'}})), ('get_nested_attr_list', dict(params={'nested_dict': {'list': [1, 2, 3], 'string': 'abc', 'dict': {'a': 1, 'b': 2, 'c': 3}}}, snippet={'properties': {'prop1': {'get_param': ['nested_dict', 'list', 0]}}}, expected={'properties': {'prop1': 1}})), ('get_nested_attr_dict', dict(params={'nested_dict': {'list': [1, 2, 3], 'string': 'abc', 'dict': {'a': 1, 'b': 2, 'c': 3}}}, snippet={'properties': {'prop1': {'get_param': ['nested_dict', 'dict', 'a']}}}, expected={'properties': {'prop1': 1}})), ('get_attr_none', dict(params={'none': None}, snippet={'properties': {'prop1': {'get_param': ['none', 'who_cares']}}}, expected={'properties': {'prop1': ''}})), ('pseudo_stack_id', dict(params={}, snippet={'properties': {'prop1': {'get_param': 'OS::stack_id'}}}, expected={'properties': {'prop1': '1ba8c334-2297-4312-8c7c-43763a988ced'}})), ('pseudo_stack_name', dict(params={}, snippet={'properties': {'prop1': {'get_param': 'OS::stack_name'}}}, expected={'properties': {'prop1': 'test'}})), ('pseudo_project_id', dict(params={}, snippet={'properties': {'prop1': {'get_param': 'OS::project_id'}}}, expected={'properties': {'prop1': '9913ef0a-b8be-4b33-b574-9061441bd373'}})), ] props_template = template_format.parse(''' heat_template_version: 2013-05-23 parameters: foo: type: string default: '' blarg: type: string default: '' list: type: comma_delimited_list default: '' flat_dict: type: json default: {} nested_dict: type: json default: {} none: type: string default: 'default' ''') def test_param_refs(self): """Test if parameter references work.""" env = environment.Environment(self.params) tmpl = template.Template(self.props_template, env=env) stack = parser.Stack(utils.dummy_context(), 'test', tmpl, stack_id='1ba8c334-2297-4312-8c7c-43763a988ced', tenant_id='9913ef0a-b8be-4b33-b574-9061441bd373') self.assertEqual(self.expected, function.resolve(tmpl.parse(stack.defn, self.snippet))) class HOTParamValidatorTest(common.HeatTestCase): """Test HOTParamValidator.""" def test_multiple_constraint_descriptions(self): len_desc = 'string length should be between 8 and 16' pattern_desc1 = 'Value must consist of characters only' pattern_desc2 = 'Value must start with a lowercase character' param = { 'db_name': { 'description': 'The WordPress database name', 'type': 'string', 'default': 'wordpress', 'constraints': [ {'length': {'min': 6, 'max': 16}, 'description': len_desc}, {'allowed_pattern': '[a-zA-Z]+', 'description': pattern_desc1}, {'allowed_pattern': '[a-z]+[a-zA-Z]*', 'description': pattern_desc2}]}} name = 'db_name' schema = param['db_name'] def v(value): param_schema = hot_param.HOTParamSchema.from_dict(name, schema) param_schema.validate() param_schema.validate_value(value) return True value = 'wp' err = self.assertRaises(exception.StackValidationFailed, v, value) self.assertIn(len_desc, six.text_type(err)) value = 'abcdefghijklmnopq' err = self.assertRaises(exception.StackValidationFailed, v, value) self.assertIn(len_desc, six.text_type(err)) value = 'abcdefgh1' err = self.assertRaises(exception.StackValidationFailed, v, value) self.assertIn(pattern_desc1, six.text_type(err)) value = 'Abcdefghi' err = 
self.assertRaises(exception.StackValidationFailed, v, value) self.assertIn(pattern_desc2, six.text_type(err)) value = 'abcdefghi' self.assertTrue(v(value)) value = 'abcdefghI' self.assertTrue(v(value)) def test_hot_template_validate_param(self): len_desc = 'string length should be between 8 and 16' pattern_desc1 = 'Value must consist of characters only' pattern_desc2 = 'Value must start with a lowercase character' hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: db_name: description: The WordPress database name type: string default: wordpress constraints: - length: { min: 8, max: 16 } description: %s - allowed_pattern: "[a-zA-Z]+" description: %s - allowed_pattern: "[a-z]+[a-zA-Z]*" description: %s ''' % (len_desc, pattern_desc1, pattern_desc2)) tmpl = template.Template(hot_tpl) def run_parameters(value): tmpl.parameters( identifier.HeatIdentifier('', "stack_testit", None), {'db_name': value}).validate(validate_value=True) return True value = 'wp' err = self.assertRaises(exception.StackValidationFailed, run_parameters, value) self.assertIn(len_desc, six.text_type(err)) value = 'abcdefghijklmnopq' err = self.assertRaises(exception.StackValidationFailed, run_parameters, value) self.assertIn(len_desc, six.text_type(err)) value = 'abcdefgh1' err = self.assertRaises(exception.StackValidationFailed, run_parameters, value) self.assertIn(pattern_desc1, six.text_type(err)) value = 'Abcdefghi' err = self.assertRaises(exception.StackValidationFailed, run_parameters, value) self.assertIn(pattern_desc2, six.text_type(err)) value = 'abcdefghi' self.assertTrue(run_parameters(value)) value = 'abcdefghI' self.assertTrue(run_parameters(value)) def test_range_constraint(self): range_desc = 'Value must be between 30000 and 50000' param = { 'db_port': { 'description': 'The database port', 'type': 'number', 'default': 31000, 'constraints': [ {'range': {'min': 30000, 'max': 50000}, 'description': range_desc}]}} name = 'db_port' schema = param['db_port'] def v(value): param_schema = hot_param.HOTParamSchema.from_dict(name, schema) param_schema.validate() param_schema.validate_value(value) return True value = 29999 err = self.assertRaises(exception.StackValidationFailed, v, value) self.assertIn(range_desc, six.text_type(err)) value = 50001 err = self.assertRaises(exception.StackValidationFailed, v, value) self.assertIn(range_desc, six.text_type(err)) value = 30000 self.assertTrue(v(value)) value = 40000 self.assertTrue(v(value)) value = 50000 self.assertTrue(v(value)) def test_custom_constraint(self): class ZeroConstraint(object): def validate(self, value, context): return value == "0" env = resources.global_env() env.register_constraint("zero", ZeroConstraint) self.addCleanup(env.constraints.pop, "zero") desc = 'Value must be zero' param = { 'param1': { 'type': 'string', 'constraints': [ {'custom_constraint': 'zero', 'description': desc}]}} name = 'param1' schema = param['param1'] def v(value): param_schema = hot_param.HOTParamSchema.from_dict(name, schema) param_schema.validate() param_schema.validate_value(value) return True value = "1" err = self.assertRaises(exception.StackValidationFailed, v, value) self.assertEqual(desc, six.text_type(err)) value = "2" err = self.assertRaises(exception.StackValidationFailed, v, value) self.assertEqual(desc, six.text_type(err)) value = "0" self.assertTrue(v(value)) def test_custom_constraint_default_skip(self): schema = { 'type': 'string', 'constraints': [{ 'custom_constraint': 'skipping', 'description': 'Must be skipped on default value' }], 
'default': 'foo' } param_schema = hot_param.HOTParamSchema.from_dict('p', schema) param_schema.validate() def test_range_constraint_invalid_default(self): range_desc = 'Value must be between 30000 and 50000' param = { 'db_port': { 'description': 'The database port', 'type': 'number', 'default': 15, 'constraints': [ {'range': {'min': 30000, 'max': 50000}, 'description': range_desc}]}} schema = hot_param.HOTParamSchema.from_dict('db_port', param['db_port']) err = self.assertRaises(exception.InvalidSchemaError, schema.validate) self.assertIn(range_desc, six.text_type(err)) def test_validate_schema_wrong_key(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: foo: bar ''') error = self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual("Invalid key 'foo' for parameter (param1)", six.text_type(error)) def test_validate_schema_no_type(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: description: Hi! ''') error = self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual("Missing parameter type for parameter: param1", six.text_type(error)) def test_validate_schema_unknown_type(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: Unicode ''') error = self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual( "Invalid type (Unicode)", six.text_type(error)) def test_validate_schema_constraints(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string constraints: - allowed_valus: [foo, bar] default: foo ''') error = self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual( "Invalid key 'allowed_valus' for parameter constraints", six.text_type(error)) def test_validate_schema_constraints_not_list(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string constraints: 1 default: foo ''') error = self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual( "Invalid parameter constraints for parameter param1, " "expected a list", six.text_type(error)) def test_validate_schema_constraints_not_mapping(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string constraints: [foo] default: foo ''') error = self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual( "Invalid parameter constraints, expected a mapping", six.text_type(error)) def test_validate_schema_empty_constraints(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string constraints: - description: a constraint default: foo ''') error = self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual("No constraint expressed", six.text_type(error)) def test_validate_schema_constraints_range_wrong_format(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: number constraints: - range: foo default: foo ''') error = 
self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual( "Invalid range constraint, expected a mapping", six.text_type(error)) def test_validate_schema_constraints_range_invalid_key(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: number constraints: - range: {min: 1, foo: bar} default: 1 ''') error = self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual( "Invalid key 'foo' for range constraint", six.text_type(error)) def test_validate_schema_constraints_length_wrong_format(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string constraints: - length: foo default: foo ''') error = self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual( "Invalid length constraint, expected a mapping", six.text_type(error)) def test_validate_schema_constraints_length_invalid_key(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string constraints: - length: {min: 1, foo: bar} default: foo ''') error = self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual( "Invalid key 'foo' for length constraint", six.text_type(error)) def test_validate_schema_constraints_wrong_allowed_pattern(self): hot_tpl = template_format.parse(''' heat_template_version: 2013-05-23 parameters: param1: type: string constraints: - allowed_pattern: [foo, bar] default: foo ''') error = self.assertRaises( exception.InvalidSchemaError, cfn_param.CfnParameters, "stack_testit", template.Template(hot_tpl)) self.assertEqual( "AllowedPattern must be a string", six.text_type(error)) def test_modulo_constraint(self): modulo_desc = 'Value must be an odd number' modulo_name = 'ControllerCount' param = { modulo_name: { 'description': 'Number of controller nodes', 'type': 'number', 'default': 1, 'constraints': [{ 'modulo': {'step': 2, 'offset': 1}, 'description': modulo_desc }] } } def v(value): param_schema = hot_param.HOTParamSchema20170224.from_dict( modulo_name, param[modulo_name]) param_schema.validate() param_schema.validate_value(value) return True value = 2 err = self.assertRaises(exception.StackValidationFailed, v, value) self.assertIn(modulo_desc, six.text_type(err)) value = 100 err = self.assertRaises(exception.StackValidationFailed, v, value) self.assertIn(modulo_desc, six.text_type(err)) value = 1 self.assertTrue(v(value)) value = 3 self.assertTrue(v(value)) value = 777 self.assertTrue(v(value)) def test_modulo_constraint_invalid_default(self): modulo_desc = 'Value must be an odd number' modulo_name = 'ControllerCount' param = { modulo_name: { 'description': 'Number of controller nodes', 'type': 'number', 'default': 2, 'constraints': [{ 'modulo': {'step': 2, 'offset': 1}, 'description': modulo_desc }] } } schema = hot_param.HOTParamSchema20170224.from_dict( modulo_name, param[modulo_name]) err = self.assertRaises(exception.InvalidSchemaError, schema.validate) self.assertIn(modulo_desc, six.text_type(err)) class TestGetAttAllAttributes(common.HeatTestCase): scenarios = [ ('test_get_attr_all_attributes', dict( hot_tpl=hot_tpl_generic_resource_all_attrs, snippet={'Value': {'get_attr': ['resource1']}}, expected={'Value': {'Foo': 'resource1', 'foo': 'resource1'}}, 
raises=None )), ('test_get_attr_all_attributes_str', dict( hot_tpl=hot_tpl_generic_resource_all_attrs, snippet={'Value': {'get_attr': 'resource1'}}, expected='.Value.get_attr: Argument to "get_attr" must be a ' 'list', raises=exception.StackValidationFailed )), ('test_get_attr_all_attributes_invalid_resource_list', dict( hot_tpl=hot_tpl_generic_resource_all_attrs, snippet={'Value': {'get_attr': ['resource2']}}, raises=exception.InvalidTemplateReference, expected='The specified reference "resource2" ' '(in unknown) is incorrect.' )), ('test_get_attr_all_attributes_invalid_type', dict( hot_tpl=hot_tpl_generic_resource_all_attrs, snippet={'Value': {'get_attr': {'resource1': 'attr1'}}}, raises=exception.StackValidationFailed, expected='.Value.get_attr: Argument to "get_attr" must be a ' 'list' )), ('test_get_attr_all_attributes_invalid_arg_str', dict( hot_tpl=hot_tpl_generic_resource_all_attrs, snippet={'Value': {'get_attr': ''}}, raises=exception.StackValidationFailed, expected='.Value.get_attr: Arguments to "get_attr" can be of ' 'the next forms: [resource_name] or ' '[resource_name, attribute, (path), ...]' )), ('test_get_attr_all_attributes_invalid_arg_list', dict( hot_tpl=hot_tpl_generic_resource_all_attrs, snippet={'Value': {'get_attr': []}}, raises=exception.StackValidationFailed, expected='.Value.get_attr: Arguments to "get_attr" can be of ' 'the next forms: [resource_name] or ' '[resource_name, attribute, (path), ...]' )), ('test_get_attr_all_attributes_standard', dict( hot_tpl=hot_tpl_generic_resource_all_attrs, snippet={'Value': {'get_attr': ['resource1', 'foo']}}, expected={'Value': 'resource1'}, raises=None )), ('test_get_attr_all_attrs_complex_attrs', dict( hot_tpl=hot_tpl_complex_attrs_all_attrs, snippet={'Value': {'get_attr': ['resource1']}}, expected={'Value': {'flat_dict': {'key1': 'val1', 'key2': 'val2', 'key3': 'val3'}, 'list': ['foo', 'bar'], 'nested_dict': {'dict': {'a': 1, 'b': 2, 'c': 3}, 'list': [1, 2, 3], 'string': 'abc'}, 'none': None}}, raises=None )), ('test_get_attr_all_attrs_complex_attrs_standard', dict( hot_tpl=hot_tpl_complex_attrs_all_attrs, snippet={'Value': {'get_attr': ['resource1', 'list', 1]}}, expected={'Value': 'bar'}, raises=None )), ] @staticmethod def resolve(snippet, template, stack): return function.resolve(template.parse(stack.defn, snippet)) def test_get_attr_all_attributes(self): tmpl = template.Template(self.hot_tpl) stack = parser.Stack(utils.dummy_context(), 'test_get_attr', tmpl) stack.store() if self.raises is None: dep_attrs = list(function.dep_attrs(tmpl.parse(stack.defn, self.snippet), 'resource1')) else: dep_attrs = [] stack.create() self.assertEqual((parser.Stack.CREATE, parser.Stack.COMPLETE), stack.state) rsrc = stack['resource1'] for action, status in ( (rsrc.CREATE, rsrc.IN_PROGRESS), (rsrc.CREATE, rsrc.COMPLETE), (rsrc.RESUME, rsrc.IN_PROGRESS), (rsrc.RESUME, rsrc.COMPLETE), (rsrc.SUSPEND, rsrc.IN_PROGRESS), (rsrc.SUSPEND, rsrc.COMPLETE), (rsrc.UPDATE, rsrc.IN_PROGRESS), (rsrc.UPDATE, rsrc.COMPLETE), (rsrc.SNAPSHOT, rsrc.IN_PROGRESS), (rsrc.SNAPSHOT, rsrc.COMPLETE), (rsrc.CHECK, rsrc.IN_PROGRESS), (rsrc.CHECK, rsrc.COMPLETE), (rsrc.ADOPT, rsrc.IN_PROGRESS), (rsrc.ADOPT, rsrc.COMPLETE)): rsrc.state_set(action, status) with mock.patch.object(rsrc_defn.ResourceDefinition, 'dep_attrs') as mock_da: mock_da.return_value = dep_attrs node_data = rsrc.node_data() stk_defn.update_resource_data(stack.defn, rsrc.name, node_data) if self.raises is not None: ex = self.assertRaises(self.raises, self.resolve, self.snippet, tmpl, stack) 
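            # Only the malformed forms (a bare string, a dict, an empty
            # list, or an unknown resource name) are expected to raise
            # here; the no-attribute form {get_attr: [resource1]} is
            # accepted from template version 2015-10-15 on and returns a
            # dict of all attributes, as the passing scenarios above and
            # the output-validation test below demonstrate.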
self.assertEqual(self.expected, six.text_type(ex)) else: self.assertEqual(self.expected, self.resolve(self.snippet, tmpl, stack)) def test_stack_validate_outputs_get_all_attribute(self): hot_liberty_tpl = template_format.parse(''' heat_template_version: 2015-10-15 resources: resource1: type: GenericResourceType outputs: all_attr: value: {get_attr: [resource1]} ''') stack = parser.Stack(utils.dummy_context(), 'test_outputs_get_all', template.Template(hot_liberty_tpl)) stack.validate() heat-10.0.2/heat/tests/test_rpc_listener_client.py0000666000175000017500000000566113343562340022275 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import mock import oslo_messaging as messaging from heat.rpc import api as rpc_api from heat.rpc import listener_client as rpc_client from heat.tests import common class ListenerClientTest(common.HeatTestCase): @mock.patch('heat.common.messaging.get_rpc_client', return_value=mock.Mock()) def test_engine_alive_ok(self, rpc_client_method): mock_rpc_client = rpc_client_method.return_value mock_prepare_method = mock_rpc_client.prepare mock_prepare_client = mock_prepare_method.return_value mock_cnxt = mock.Mock() listener_client = rpc_client.EngineListenerClient('engine-007') rpc_client_method.assert_called_once_with( version=rpc_client.EngineListenerClient.BASE_RPC_API_VERSION, topic=rpc_api.LISTENER_TOPIC, server='engine-007', ) mock_prepare_method.assert_called_once_with(timeout=2) self.assertEqual(mock_prepare_client, listener_client._client, "Failed to create RPC client") ret = listener_client.is_alive(mock_cnxt) self.assertTrue(ret) mock_prepare_client.call.assert_called_once_with(mock_cnxt, 'listening') @mock.patch('heat.common.messaging.get_rpc_client', return_value=mock.Mock()) def test_engine_alive_timeout(self, rpc_client_method): mock_rpc_client = rpc_client_method.return_value mock_prepare_method = mock_rpc_client.prepare mock_prepare_client = mock_prepare_method.return_value mock_cnxt = mock.Mock() listener_client = rpc_client.EngineListenerClient('engine-007') rpc_client_method.assert_called_once_with( version=rpc_client.EngineListenerClient.BASE_RPC_API_VERSION, topic=rpc_api.LISTENER_TOPIC, server='engine-007', ) mock_prepare_method.assert_called_once_with(timeout=2) self.assertEqual(mock_prepare_client, listener_client._client, "Failed to create RPC client") mock_prepare_client.call.side_effect = messaging.MessagingTimeout( 'too slow') ret = listener_client.is_alive(mock_cnxt) self.assertFalse(ret) mock_prepare_client.call.assert_called_once_with(mock_cnxt, 'listening') heat-10.0.2/heat/tests/test_timeutils.py0000666000175000017500000001075313343562340020263 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
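# A minimal sketch of the check behind the modulo-constraint tests earlier in
# this section: a HOT `modulo: {step, offset}` constraint accepts a value
# exactly when (value - offset) is a whole multiple of step. The helper below
# is illustrative and inferred from the test scenarios, not Heat's own code.
def modulo_constraint_allows(value, step, offset):
    # With step=2, offset=1 this accepts only odd numbers, matching the
    # ControllerCount examples: 1, 3 and 777 pass; 2 and 100 fail.
    return (value - offset) % step == 0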
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from testtools import matchers from heat.common import timeutils as util from heat.tests import common class ISO8601UtilityTest(common.HeatTestCase): def test_valid_durations(self): self.assertEqual(0, util.parse_isoduration('PT')) self.assertEqual(3600, util.parse_isoduration('PT1H')) self.assertEqual(120, util.parse_isoduration('PT2M')) self.assertEqual(3, util.parse_isoduration('PT3S')) self.assertEqual(3900, util.parse_isoduration('PT1H5M')) self.assertEqual(3605, util.parse_isoduration('PT1H5S')) self.assertEqual(303, util.parse_isoduration('PT5M3S')) self.assertEqual(3903, util.parse_isoduration('PT1H5M3S')) self.assertEqual(24 * 3600, util.parse_isoduration('PT24H')) def test_invalid_durations(self): self.assertRaises(ValueError, util.parse_isoduration, 'P1Y') self.assertRaises(ValueError, util.parse_isoduration, 'P1DT12H') self.assertRaises(ValueError, util.parse_isoduration, 'PT1Y1D') self.assertRaises(ValueError, util.parse_isoduration, 'PTAH1M0S') self.assertRaises(ValueError, util.parse_isoduration, 'PT1HBM0S') self.assertRaises(ValueError, util.parse_isoduration, 'PT1H1MCS') self.assertRaises(ValueError, util.parse_isoduration, 'PT1H1H') self.assertRaises(ValueError, util.parse_isoduration, 'PT1MM') self.assertRaises(ValueError, util.parse_isoduration, 'PT1S0S') self.assertRaises(ValueError, util.parse_isoduration, 'ABCDEFGH') class DurationTest(common.HeatTestCase): def setUp(self): super(DurationTest, self).setUp() st = util.wallclock() mock_clock = self.patchobject(util, 'wallclock') mock_clock.side_effect = [st, st + 0.5] def test_duration_not_expired(self): self.assertFalse(util.Duration(1.0).expired()) def test_duration_expired(self): self.assertTrue(util.Duration(0.1).expired()) class RetryBackoffExponentialTest(common.HeatTestCase): scenarios = [( '0_0', dict( attempt=0, scale_factor=0.0, delay=0.0, ) ), ( '0_1', dict( attempt=0, scale_factor=1.0, delay=1.0, ) ), ( '1_1', dict( attempt=1, scale_factor=1.0, delay=2.0, ) ), ( '2_1', dict( attempt=2, scale_factor=1.0, delay=4.0, ) ), ( '3_1', dict( attempt=3, scale_factor=1.0, delay=8.0, ) ), ( '4_1', dict( attempt=4, scale_factor=1.0, delay=16.0, ) ), ( '4_4', dict( attempt=4, scale_factor=4.0, delay=64.0, ) )] def test_backoff_delay(self): delay = util.retry_backoff_delay( self.attempt, self.scale_factor) self.assertEqual(self.delay, delay) class RetryBackoffJitterTest(common.HeatTestCase): scenarios = [( '0_0_1', dict( attempt=0, scale_factor=0.0, jitter_max=1.0, delay_from=0.0, delay_to=1.0 ) ), ( '1_1_1', dict( attempt=1, scale_factor=1.0, jitter_max=1.0, delay_from=2.0, delay_to=3.0 ) ), ( '1_1_5', dict( attempt=1, scale_factor=1.0, jitter_max=5.0, delay_from=2.0, delay_to=7.0 ) )] def test_backoff_delay(self): for _ in range(100): delay = util.retry_backoff_delay( self.attempt, self.scale_factor, self.jitter_max) self.assertThat(delay, matchers.GreaterThan(self.delay_from)) self.assertThat(delay, matchers.LessThan(self.delay_to)) heat-10.0.2/heat/tests/templates/0000775000175000017500000000000013343562672016631 5ustar 
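# The RetryBackoff* scenarios above pin down the delay formula exercised via
# heat.common.timeutils.retry_backoff_delay: an exponential term
# scale_factor * 2**attempt, plus a uniform random jitter below jitter_max
# when jitter is requested. A minimal re-implementation consistent with
# those scenarios (inferred from the tests, not copied from Heat):
import random

def retry_backoff_delay(attempt, scale_factor=1.0, jitter_max=0.0):
    # attempt=4, scale_factor=4.0 -> 64.0; adding jitter_max=5.0 puts the
    # result somewhere in [base, base + 5.0), as the jitter scenarios expect.
    return scale_factor * 2.0 ** attempt + random.random() * jitter_max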
zuulzuul00000000000000heat-10.0.2/heat/tests/templates/Neutron.yaml0000666000175000017500000000351313343562340021143 0ustar zuulzuul00000000000000HeatTemplateFormatVersion: '2012-12-12' Description: Template to test Neutron resources Resources: network: Type: OS::Neutron::Net Properties: {name: the_network} unnamed_network: Type: 'OS::Neutron::Net' admin_down_network: Type: OS::Neutron::Net Properties: {admin_state_up: false} subnet: Type: OS::Neutron::Subnet Properties: network_id: {Ref: network} ip_version: 4 cidr: 10.0.3.0/24 allocation_pools: - {end: 10.0.3.150, start: 10.0.3.20} port: Type: OS::Neutron::Port Properties: device_id: d6b4d3a5-c700-476f-b609-1493dd9dadc0 name: port1 network_id: {Ref: network} fixed_ips: - subnet_id: {Ref: subnet} ip_address: 10.0.3.21 router: Type: 'OS::Neutron::Router' router_interface: Type: OS::Neutron::RouterInterface Properties: router_id: {Ref: router} subnet_id: {Ref: subnet} Outputs: the_network_status: Value: Fn::GetAtt: [network, status] Description: Status of network port_device_owner: Value: Fn::GetAtt: [port, device_owner] Description: Device owner of the port port_fixed_ips: Value: Fn::GetAtt: [port, fixed_ips] Description: Fixed IPs of the port port_mac_address: Value: Fn::GetAtt: [port, mac_address] Description: MAC address of the port port_status: Value: Fn::GetAtt: [port, status] Description: Status of the port port_show: Value: Fn::GetAtt: [port, show] Description: All attributes for port subnet_show: Value: Fn::GetAtt: [subnet, show] Description: All attributes for subnet network_show: Value: Fn::GetAtt: [network, show] Description: All attributes for network router_show: Value: Fn::GetAtt: [router, show] Description: All attributes for router heat-10.0.2/heat/tests/templates/WordPress_Single_Instance.yaml0000666000175000017500000001204713343562340024570 0ustar zuulzuul00000000000000HeatTemplateFormatVersion: '2012-12-12' Description: 'AWS CloudFormation Sample Template WordPress_Single_Instance: WordPress is web software you can use to create a beautiful website or blog. This template installs a single-instance WordPress deployment using a local MySQL database to store the data.' Parameters: KeyName: {Description: Name of an existing EC2 KeyPair to enable SSH access to the instances, Type: String} InstanceType: Description: WebServer EC2 instance type Type: String Default: m1.large AllowedValues: [t1.micro, m1.small, m1.large, m1.xlarge, m2.xlarge, m2.2xlarge, m2.4xlarge, c1.medium, c1.xlarge, cc1.4xlarge] ConstraintDescription: must be a valid EC2 instance type. 
DBName: {Default: wordpress, Description: The WordPress database name, Type: String, MinLength: '1', MaxLength: '64', AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*', ConstraintDescription: must begin with a letter and contain only alphanumeric characters.} DBUsername: {Default: admin, NoEcho: 'true', Description: The WordPress database admin account username, Type: String, MinLength: '1', MaxLength: '16', AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*', ConstraintDescription: must begin with a letter and contain only alphanumeric characters.} DBPassword: {Default: admin, NoEcho: 'true', Description: The WordPress database admin account password, Type: String, MinLength: '1', MaxLength: '41', AllowedPattern: '[a-zA-Z0-9]*', ConstraintDescription: must contain only alphanumeric characters.} DBRootPassword: {Default: admin, NoEcho: 'true', Description: Root password for MySQL, Type: String, MinLength: '1', MaxLength: '41', AllowedPattern: '[a-zA-Z0-9]*', ConstraintDescription: must contain only alphanumeric characters.} LinuxDistribution: Default: F17 Description: Distribution of choice Type: String AllowedValues: [F18, F17, U10, RHEL-6.1, RHEL-6.2, RHEL-6.3] Mappings: AWSInstanceType2Arch: t1.micro: {Arch: '32'} m1.small: {Arch: '32'} m1.large: {Arch: '64'} m1.xlarge: {Arch: '64'} m2.xlarge: {Arch: '64'} m2.2xlarge: {Arch: '64'} m2.4xlarge: {Arch: '64'} c1.medium: {Arch: '32'} c1.xlarge: {Arch: '64'} cc1.4xlarge: {Arch: '64'} DistroArch2AMI: F18: {'32': F18-i386-cfntools, '64': F18-x86_64-cfntools} F17: {'32': F17-i386-cfntools, '64': F17-x86_64-cfntools} U10: {'32': U10-i386-cfntools, '64': U10-x86_64-cfntools} RHEL-6.1: {'32': rhel61-i386-cfntools, '64': rhel61-x86_64-cfntools} RHEL-6.2: {'32': rhel62-i386-cfntools, '64': rhel62-x86_64-cfntools} RHEL-6.3: {'32': rhel63-i386-cfntools, '64': rhel63-x86_64-cfntools} Resources: WikiDatabase: Type: AWS::EC2::Instance Metadata: AWS::CloudFormation::Init: config: packages: yum: mysql: [] mysql-server: [] httpd: [] wordpress: [] services: systemd: mysqld: {enabled: 'true', ensureRunning: 'true'} httpd: {enabled: 'true', ensureRunning: 'true'} Properties: ImageId: Fn::FindInMap: - DistroArch2AMI - {Ref: LinuxDistribution} - Fn::FindInMap: - AWSInstanceType2Arch - {Ref: InstanceType} - Arch InstanceType: {Ref: InstanceType} KeyName: {Ref: KeyName} UserData: Fn::Base64: Fn::Join: - '' - - '#!/bin/bash -v ' - '/opt/aws/bin/cfn-init ' - '# Setup MySQL root password and create a user ' - mysqladmin -u root password ' - {Ref: DBRootPassword} - ''' ' - cat << EOF | mysql -u root --password=' - {Ref: DBRootPassword} - ''' ' - 'CREATE DATABASE ' - {Ref: DBName} - '; ' - 'GRANT ALL PRIVILEGES ON ' - {Ref: DBName} - .* TO " - {Ref: DBUsername} - '"@"localhost" ' - IDENTIFIED BY " - {Ref: DBPassword} - '"; ' - 'FLUSH PRIVILEGES; ' - 'EXIT ' - 'EOF ' - 'sed -i "/Deny from All/d" /etc/httpd/conf.d/wordpress.conf ' - 'sed -i "s/Require local/Require all granted/" /etc/httpd/conf.d/wordpress.conf ' - sed --in-place --e s/database_name_here/ - {Ref: DBName} - / --e s/username_here/ - {Ref: DBUsername} - / --e s/password_here/ - {Ref: DBPassword} - '/ /usr/share/wordpress/wp-config.php ' - 'systemctl restart httpd.service ' Outputs: WebsiteURL: Value: Fn::Join: - '' - - http:// - Fn::GetAtt: [WikiDatabase, PublicIp] - /wordpress Description: URL for Wordpress wiki heat-10.0.2/heat/tests/templates/WordPress_Single_Instance.template0000666000175000017500000001327713343562340025447 0ustar zuulzuul00000000000000{ "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "AWS 
CloudFormation Sample Template WordPress_Single_Instance: WordPress is web software you can use to create a beautiful website or blog. This template installs a single-instance WordPress deployment using a local MySQL database to store the data.", "Parameters" : { "KeyName" : { "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instances", "Type" : "String" }, "InstanceType" : { "Description" : "WebServer EC2 instance type", "Type" : "String", "Default" : "m1.large", "AllowedValues" : [ "t1.micro", "m1.small", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "c1.medium", "c1.xlarge", "cc1.4xlarge" ], "ConstraintDescription" : "must be a valid EC2 instance type." }, "DBName": { "Default": "wordpress", "Description" : "The WordPress database name", "Type": "String", "MinLength": "1", "MaxLength": "64", "AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*", "ConstraintDescription" : "must begin with a letter and contain only alphanumeric characters." }, "DBUsername": { "Default": "admin", "NoEcho": "true", "Description" : "The WordPress database admin account username", "Type": "String", "MinLength": "1", "MaxLength": "16", "AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*", "ConstraintDescription" : "must begin with a letter and contain only alphanumeric characters." }, "DBPassword": { "Default": "admin", "NoEcho": "true", "Description" : "The WordPress database admin account password", "Type": "String", "MinLength": "1", "MaxLength": "41", "AllowedPattern" : "[a-zA-Z0-9]*", "ConstraintDescription" : "must contain only alphanumeric characters." }, "DBRootPassword": { "Default": "admin", "NoEcho": "true", "Description" : "Root password for MySQL", "Type": "String", "MinLength": "1", "MaxLength": "41", "AllowedPattern" : "[a-zA-Z0-9]*", "ConstraintDescription" : "must contain only alphanumeric characters." 
}, "LinuxDistribution": { "Default": "F17", "Description" : "Distribution of choice", "Type": "String", "AllowedValues" : [ "F18", "F17", "U10", "RHEL-6.1", "RHEL-6.2", "RHEL-6.3" ] } }, "Mappings" : { "AWSInstanceType2Arch" : { "t1.micro" : { "Arch" : "32" }, "m1.small" : { "Arch" : "32" }, "m1.large" : { "Arch" : "64" }, "m1.xlarge" : { "Arch" : "64" }, "m2.xlarge" : { "Arch" : "64" }, "m2.2xlarge" : { "Arch" : "64" }, "m2.4xlarge" : { "Arch" : "64" }, "c1.medium" : { "Arch" : "32" }, "c1.xlarge" : { "Arch" : "64" }, "cc1.4xlarge" : { "Arch" : "64" } }, "DistroArch2AMI": { "F18" : { "32" : "F18-i386-cfntools", "64" : "F18-x86_64-cfntools" }, "F17" : { "32" : "F17-i386-cfntools", "64" : "F17-x86_64-cfntools" }, "U10" : { "32" : "U10-i386-cfntools", "64" : "U10-x86_64-cfntools" }, "RHEL-6.1" : { "32" : "rhel61-i386-cfntools", "64" : "rhel61-x86_64-cfntools" }, "RHEL-6.2" : { "32" : "rhel62-i386-cfntools", "64" : "rhel62-x86_64-cfntools" }, "RHEL-6.3" : { "32" : "rhel63-i386-cfntools", "64" : "rhel63-x86_64-cfntools" } } }, "Resources" : { "WikiDatabase": { "Type": "AWS::EC2::Instance", "Metadata" : { "AWS::CloudFormation::Init" : { "config" : { "packages" : { "yum" : { "mysql" : [], "mysql-server" : [], "httpd" : [], "wordpress" : [] } }, "services" : { "systemd" : { "mysqld" : { "enabled" : "true", "ensureRunning" : "true" }, "httpd" : { "enabled" : "true", "ensureRunning" : "true" } } } } } }, "Properties": { "ImageId" : { "Fn::FindInMap" : [ "DistroArch2AMI", { "Ref" : "LinuxDistribution" }, { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] }, "InstanceType" : { "Ref" : "InstanceType" }, "KeyName" : { "Ref" : "KeyName" }, "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [ "#!/bin/bash -v\n", "/opt/aws/bin/cfn-init\n", "# Setup MySQL root password and create a user\n", "mysqladmin -u root password '", { "Ref" : "DBRootPassword" }, "'\n", "cat << EOF | mysql -u root --password='", { "Ref" : "DBRootPassword" }, "'\n", "CREATE DATABASE ", { "Ref" : "DBName" }, ";\n", "GRANT ALL PRIVILEGES ON ", { "Ref" : "DBName" }, ".* TO \"", { "Ref" : "DBUsername" }, "\"@\"localhost\"\n", "IDENTIFIED BY \"", { "Ref" : "DBPassword" }, "\";\n", "FLUSH PRIVILEGES;\n", "EXIT\n", "EOF\n", "sed -i \"/Deny from All/d\" /etc/httpd/conf.d/wordpress.conf\n", "sed -i \"s/Require local/Require all granted/\" /etc/httpd/conf.d/wordpress.conf\n", "sed --in-place --e s/database_name_here/", { "Ref" : "DBName" }, "/ --e s/username_here/", { "Ref" : "DBUsername" }, "/ --e s/password_here/", { "Ref" : "DBPassword" }, "/ /usr/share/wordpress/wp-config.php\n", "systemctl restart httpd.service\n" ]]}} } } }, "Outputs" : { "WebsiteURL" : { "Value" : { "Fn::Join" : ["", ["http://", { "Fn::GetAtt" : [ "WikiDatabase", "PublicIp" ]}, "/wordpress"]] }, "Description" : "URL for Wordpress wiki" } } } heat-10.0.2/heat/tests/templates/README0000666000175000017500000000053013343562340017501 0ustar zuulzuul00000000000000These templates are required by test_template_format and test_provider_template in situations where we don't want to use a minimal template snippet. Ideally we want to test the maximum possible syntax to prove the format conversion works. In general, tests should not depend on these templates, inline minimal template snippets are preferred. 
heat-10.0.2/heat/tests/templates/Neutron.template0000666000175000017500000000476113343562340022022 0ustar zuulzuul00000000000000{ "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "Template to test Neutron resources", "Resources" : { "network": { "Type": "OS::Neutron::Net", "Properties": { "name": "the_network" } }, "unnamed_network": { "Type": "OS::Neutron::Net" }, "admin_down_network": { "Type": "OS::Neutron::Net", "Properties": { "admin_state_up": false } }, "subnet": { "Type": "OS::Neutron::Subnet", "Properties": { "network_id": { "Ref" : "network" }, "ip_version": 4, "cidr": "10.0.3.0/24", "allocation_pools": [{"start": "10.0.3.20", "end": "10.0.3.150"}] } }, "port": { "Type": "OS::Neutron::Port", "Properties": { "device_id": "d6b4d3a5-c700-476f-b609-1493dd9dadc0", "name": "port1", "network_id": { "Ref" : "network" }, "fixed_ips": [{ "subnet_id": { "Ref" : "subnet" }, "ip_address": "10.0.3.21" }] } }, "router": { "Type": "OS::Neutron::Router" }, "router_interface": { "Type": "OS::Neutron::RouterInterface", "Properties": { "router_id": { "Ref" : "router" }, "subnet_id": { "Ref" : "subnet" } } } }, "Outputs" : { "the_network_status" : { "Value" : { "Fn::GetAtt" : [ "network", "status" ]}, "Description" : "Status of network" }, "port_device_owner" : { "Value" : { "Fn::GetAtt" : [ "port", "device_owner" ]}, "Description" : "Device owner of the port" }, "port_fixed_ips" : { "Value" : { "Fn::GetAtt" : [ "port", "fixed_ips" ]}, "Description" : "Fixed IPs of the port" }, "port_mac_address" : { "Value" : { "Fn::GetAtt" : [ "port", "mac_address" ]}, "Description" : "MAC address of the port" }, "port_status" : { "Value" : { "Fn::GetAtt" : [ "port", "status" ]}, "Description" : "Status of the port" }, "port_show" : { "Value" : { "Fn::GetAtt" : [ "port", "show" ]}, "Description" : "All attributes for port" }, "subnet_show" : { "Value" : { "Fn::GetAtt" : [ "subnet", "show" ]}, "Description" : "All attributes for subnet" }, "network_show" : { "Value" : { "Fn::GetAtt" : [ "network", "show" ]}, "Description" : "All attributes for network" }, "router_show" : { "Value" : { "Fn::GetAtt" : [ "router", "show" ]}, "Description" : "All attributes for router" } } } heat-10.0.2/heat/tests/test_stack_collect_attributes.py0000666000175000017500000002463213343562340023325 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
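# As the templates README above notes, each fixture ships in both YAML and
# JSON so that test_template_format can prove both syntaxes parse to the
# same structure. A minimal sketch of that equivalence check, assuming only
# that heat.common.template_format.parse accepts either syntax:
from heat.common import template_format

def assert_formats_equivalent(yaml_body, json_body):
    # Both representations should yield identical parsed template dicts.
    assert template_format.parse(yaml_body) == template_format.parse(json_body)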
import itertools import six from heat.common import template_format from heat.engine import stack from heat.engine import template from heat.tests import common from heat.tests import utils tmpl1 = """ heat_template_version: 2014-10-16 resources: AResource: type: ResourceWithPropsType properties: Foo: 'abc' """ tmpl2 = """ heat_template_version: 2014-10-16 resources: AResource: type: ResourceWithPropsType properties: Foo: 'abc' BResource: type: ResourceWithPropsType properties: Foo: {get_attr: [AResource, attr_A1]} metadata: Foo: {get_attr: [AResource, attr_A1]} outputs: out1: value: {get_attr: [AResource, attr_A1]} """ tmpl3 = """ heat_template_version: 2014-10-16 resources: AResource: type: ResourceWithPropsType properties: Foo: 'abc' BResource: type: ResourceWithPropsType properties: Foo: {get_attr: [AResource, attr_A1]} Doo: {get_attr: [AResource, attr_A2]} Bar: {get_attr: [AResource, attr_A3]} metadata: first: {get_attr: [AResource, meta_A1]} second: {get_attr: [AResource, meta_A2]} third: {get_attr: [AResource, attr_A3]} outputs: out1: value: {get_attr: [AResource, out_A1]} out2: value: {get_attr: [AResource, out_A2]} """ tmpl4 = """ heat_template_version: 2014-10-16 resources: AResource: type: ResourceWithPropsType properties: Foo: 'abc' BResource: type: ResourceWithPropsType properties: Foo: 'abc' CResource: type: ResourceWithPropsType properties: Foo: 'abc' DResource: type: ResourceWithPropsType properties: Foo: {get_attr: [AResource, attr_A1]} Doo: {get_attr: [BResource, attr_B1]} metadata: Doo: {get_attr: [CResource, attr_C1]} outputs: out1: value: [{get_attr: [AResource, attr_A1]}, {get_attr: [BResource, attr_B1]}, {get_attr: [CResource, attr_C1]}] """ tmpl5 = """ heat_template_version: 2014-10-16 resources: AResource: type: ResourceWithPropsType properties: Foo: 'abc' BResource: type: ResourceWithPropsType properties: Foo: {get_attr: [AResource, attr_A1]} Doo: {get_attr: [AResource, attr_A2]} metadata: first: {get_attr: [AResource, meta_A1]} CResource: type: ResourceWithPropsType properties: Foo: {get_attr: [AResource, attr_A1]} Doo: {get_attr: [BResource, attr_B2]} metadata: Doo: {get_attr: [BResource, attr_B1]} first: {get_attr: [AResource, meta_A1]} second: {get_attr: [BResource, meta_B2]} outputs: out1: value: [{get_attr: [AResource, attr_A3]}, {get_attr: [AResource, attr_A4]}, {get_attr: [BResource, attr_B3]}] """ tmpl6 = """ heat_template_version: 2015-04-30 resources: AResource: type: ResourceWithComplexAttributesType BResource: type: ResourceWithPropsType properties: Foo: {get_attr: [AResource, list, 1]} Doo: {get_attr: [AResource, nested_dict, dict, b]} outputs: out1: value: [{get_attr: [AResource, flat_dict, key2]}, {get_attr: [AResource, nested_dict, string]}, {get_attr: [BResource, attr_B3]}] out2: value: {get_resource: BResource} """ tmpl7 = """ heat_template_version: 2015-10-15 resources: AResource: type: ResourceWithPropsType properties: Foo: 'abc' BResource: type: ResourceWithPropsType properties: Foo: {get_attr: [AResource, attr_A1]} Doo: {get_attr: [AResource, attr_A2]} metadata: first: {get_attr: [AResource, meta_A1]} CResource: type: ResourceWithPropsType properties: Foo: {get_attr: [AResource, attr_A1]} Doo: {get_attr: [BResource, attr_B2]} metadata: Doo: {get_attr: [BResource, attr_B1]} first: {get_attr: [AResource, meta_A1]} second: {get_attr: [BResource, meta_B2]} outputs: out1: value: [{get_attr: [AResource, attr_A3]}, {get_attr: [AResource, attr_A4]}, {get_attr: [BResource, attr_B3]}, {get_attr: [CResource]}] """ class 
DepAttrsTest(common.HeatTestCase): scenarios = [ ('no_attr', dict(tmpl=tmpl1, expected={'AResource': set()})), ('one_res_one_attr', dict(tmpl=tmpl2, expected={'AResource': {'attr_A1'}, 'BResource': set()})), ('one_res_several_attrs', dict(tmpl=tmpl3, expected={'AResource': {'attr_A1', 'attr_A2', 'attr_A3', 'meta_A1', 'meta_A2'}, 'BResource': set()})), ('several_res_one_attr', dict(tmpl=tmpl4, expected={'AResource': {'attr_A1'}, 'BResource': {'attr_B1'}, 'CResource': {'attr_C1'}, 'DResource': set()})), ('several_res_several_attrs', dict(tmpl=tmpl5, expected={'AResource': {'attr_A1', 'attr_A2', 'meta_A1'}, 'BResource': {'attr_B1', 'attr_B2', 'meta_B2'}, 'CResource': set()})), ('nested_attr', dict(tmpl=tmpl6, expected={'AResource': set([(u'list', 1), (u'nested_dict', u'dict', u'b')]), 'BResource': set([])})), ('several_res_several_attrs_and_all_attrs', dict(tmpl=tmpl7, expected={'AResource': {'attr_A1', 'attr_A2', 'meta_A1'}, 'BResource': {'attr_B1', 'attr_B2', 'meta_B2'}, 'CResource': set()})) ] def setUp(self): super(DepAttrsTest, self).setUp() self.ctx = utils.dummy_context() self.parsed_tmpl = template_format.parse(self.tmpl) self.stack = stack.Stack(self.ctx, 'test_stack', template.Template(self.parsed_tmpl)) def test_dep_attrs(self): for res in six.itervalues(self.stack): definitions = (self.stack.defn.resource_definition(n) for n in self.parsed_tmpl['resources']) self.assertEqual(self.expected[res.name], set(itertools.chain.from_iterable( d.dep_attrs(res.name) for d in definitions))) def test_all_dep_attrs(self): for res in six.itervalues(self.stack): definitions = (self.stack.defn.resource_definition(n) for n in self.parsed_tmpl['resources']) attrs = set(itertools.chain.from_iterable( d.dep_attrs(res.name, load_all=True) for d in definitions)) self.assertEqual(self.expected[res.name], attrs) class ReferencedAttrsTest(common.HeatTestCase): def setUp(self): super(ReferencedAttrsTest, self).setUp() parsed_tmpl = template_format.parse(tmpl6) self.stack = stack.Stack(utils.dummy_context(), 'test_stack', template.Template(parsed_tmpl)) self.resA = self.stack['AResource'] self.resB = self.stack['BResource'] def test_referenced_attrs_resources(self): self.assertEqual(self.resA.referenced_attrs(in_resources=True, in_outputs=False), {('list', 1), ('nested_dict', 'dict', 'b')}) self.assertEqual(self.resB.referenced_attrs(in_resources=True, in_outputs=False), set()) def test_referenced_attrs_outputs(self): self.assertEqual(self.resA.referenced_attrs(in_resources=False, in_outputs=True), {('flat_dict', 'key2'), ('nested_dict', 'string')}) self.assertEqual(self.resB.referenced_attrs(in_resources=False, in_outputs=True), {'attr_B3'}) def test_referenced_attrs_single_output(self): self.assertEqual(self.resA.referenced_attrs(in_resources=False, in_outputs={'out1'}), {('flat_dict', 'key2'), ('nested_dict', 'string')}) self.assertEqual(self.resB.referenced_attrs(in_resources=False, in_outputs={'out1'}), {'attr_B3'}) self.assertEqual(self.resA.referenced_attrs(in_resources=False, in_outputs={'out2'}), set()) self.assertEqual(self.resB.referenced_attrs(in_resources=False, in_outputs={'out2'}), set()) def test_referenced_attrs_outputs_list(self): self.assertEqual(self.resA.referenced_attrs(in_resources=False, in_outputs={'out1', 'out2'}), {('flat_dict', 'key2'), ('nested_dict', 'string')}) self.assertEqual(self.resB.referenced_attrs(in_resources=False, in_outputs={'out1', 'out2'}), {'attr_B3'}) def test_referenced_attrs_both(self): self.assertEqual(self.resA.referenced_attrs(in_resources=True, 
in_outputs=True), {('list', 1), ('nested_dict', 'dict', 'b'), ('flat_dict', 'key2'), ('nested_dict', 'string')}) self.assertEqual(self.resB.referenced_attrs(in_resources=True, in_outputs=True), {'attr_B3'}) heat-10.0.2/heat/tests/test_nokey.py0000666000175000017500000000641213343562352017371 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.common import template_format from heat.engine.clients.os import glance from heat.engine.clients.os import nova from heat.engine.resources.aws.ec2 import instance as instances from heat.engine import scheduler from heat.tests import common from heat.tests.openstack.nova import fakes as fakes_nova from heat.tests import utils nokey_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "NoKey Test", "Parameters" : {}, "Resources" : { "WebServer": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId" : "foo", "InstanceType" : "m1.large", "UserData" : "some data" } } } } ''' class nokeyTest(common.HeatTestCase): def setUp(self): super(nokeyTest, self).setUp() self.fc = fakes_nova.FakeClient() def test_nokey_create(self): stack_name = 's_nokey' t = template_format.parse(nokey_template) stack = utils.parse_stack(t, stack_name=stack_name) t['Resources']['WebServer']['Properties']['ImageId'] = 'CentOS 5.2' t['Resources']['WebServer']['Properties'][ 'InstanceType'] = '256 MB Server' resource_defns = stack.t.resource_definitions(stack) instance = instances.Instance('create_instance_name', resource_defns['WebServer'], stack) self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') nova.NovaClientPlugin._create().AndReturn(self.fc) self.m.StubOutWithMock(glance.GlanceClientPlugin, 'find_image_by_name_or_id') glance.GlanceClientPlugin.find_image_by_name_or_id( 'CentOS 5.2').MultipleTimes().AndReturn(1) # need to resolve the template functions metadata = instance.metadata_get() server_userdata = instance.client_plugin().build_userdata( metadata, instance.properties['UserData'], 'ec2-user') self.m.StubOutWithMock(nova.NovaClientPlugin, 'build_userdata') nova.NovaClientPlugin.build_userdata( metadata, instance.properties['UserData'], 'ec2-user').AndReturn(server_userdata) self.m.StubOutWithMock(self.fc.servers, 'create') self.fc.servers.create( image=1, flavor=1, key_name=None, name=utils.PhysName(stack_name, instance.name), security_groups=None, userdata=server_userdata, scheduler_hints=None, meta=None, nics=None, availability_zone=None, block_device_mapping=None).AndReturn( self.fc.servers.list()[1]) self.m.ReplayAll() scheduler.TaskRunner(instance.create)() self.m.VerifyAll() heat-10.0.2/heat/api/0000775000175000017500000000000013343562672014242 5ustar zuulzuul00000000000000heat-10.0.2/heat/api/middleware/0000775000175000017500000000000013343562672016357 5ustar zuulzuul00000000000000heat-10.0.2/heat/api/middleware/__init__.py0000666000175000017500000000000013343562337020456 0ustar zuulzuul00000000000000heat-10.0.2/heat/api/middleware/version_negotiation.py0000666000175000017500000001211613343562337023017 0ustar 
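# The DepAttrsTest cases above compute, for one target resource, the union of
# every attribute that other resources' definitions reference, by chaining
# dep_attrs() over all definitions. A minimal restatement of that aggregation
# step, assuming a stack definition exposing resource_definition() as in the
# tests:
import itertools

def attrs_depended_on(stack_defn, resource_names, target_name):
    # Union of all of target_name's attributes referenced in the template.
    definitions = (stack_defn.resource_definition(n) for n in resource_names)
    return set(itertools.chain.from_iterable(
        d.dep_attrs(target_name) for d in definitions))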
zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Inspects the requested URI for a version string and/or Accept headers. Also attempts to negotiate an API controller to return. """ import re from oslo_log import log as logging import webob from heat.common import wsgi LOG = logging.getLogger(__name__) class VersionNegotiationFilter(wsgi.Middleware): def __init__(self, version_controller, app, conf, **local_conf): self.versions_app = version_controller(conf) self.version_uri_regex = re.compile(r"^v(\d+)\.?(\d+)?") self.conf = conf super(VersionNegotiationFilter, self).__init__(app) def process_request(self, req): """Process Accept header or simply return correct API controller. If there is a version identifier in the URI, return the correct API controller, otherwise, if we find an Accept: header, process it """ # See if a version identifier is in the URI passed to # us already. If so, simply return the right version # API controller msg = ("Processing request: %(method)s %(path)s Accept: " "%(accept)s" % {'method': req.method, 'path': req.path, 'accept': req.accept}) LOG.debug(msg) # If the request is for /versions, just return the versions container if req.path_info_peek() in ("versions", ""): return self.versions_app match = self._match_version_string(req.path_info_peek(), req) if match: major_version = req.environ['api.major_version'] minor_version = req.environ['api.minor_version'] if (major_version == 1 and minor_version == 0): LOG.debug("Matched versioned URI. " "Version: %(major_version)d.%(minor_version)d" % {'major_version': major_version, 'minor_version': minor_version}) # Strip the version from the path req.path_info_pop() return None else: LOG.debug("Unknown version in versioned URI: " "%(major_version)d.%(minor_version)d. " "Returning version choices." % {'major_version': major_version, 'minor_version': minor_version}) return self.versions_app accept = str(req.accept) if accept.startswith('application/vnd.openstack.orchestration-'): token_loc = len('application/vnd.openstack.orchestration-') accept_version = accept[token_loc:] match = self._match_version_string(accept_version, req) if match: major_version = req.environ['api.major_version'] minor_version = req.environ['api.minor_version'] if (major_version == 1 and minor_version == 0): LOG.debug("Matched versioned media type. Version: " "%(major_version)d.%(minor_version)d" % {'major_version': major_version, 'minor_version': minor_version}) return None else: LOG.debug("Unknown version in accept header: " "%(major_version)d.%(minor_version)d... " "returning version choices." % {'major_version': major_version, 'minor_version': minor_version}) return self.versions_app else: if req.accept not in ('*/*', ''): LOG.debug("Unknown accept header: %s... " "returning HTTP not found.", req.accept) return webob.exc.HTTPNotFound() return None def _match_version_string(self, subject, req): """Given a subject, tries to match a major and/or minor version number. If found, sets the api.major_version and api.minor_version environ variables. 
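# A minimal sketch of the URI match performed above: version_uri_regex,
# r"^v(\d+)\.?(\d+)?", accepts path segments such as "v1" or "v1.0", and an
# unmatched minor group defaults to 0, which is why both forms negotiate to
# the single supported 1.0 API. Illustrative only:
import re

VERSION_URI_REGEX = re.compile(r"^v(\d+)\.?(\d+)?")

def parse_version(segment):
    match = VERSION_URI_REGEX.match(segment)
    if match is None:
        return None
    major, minor = match.groups(0)  # groups(0): an unmatched minor becomes 0
    return int(major), int(minor)

# parse_version('v1') == parse_version('v1.0') == (1, 0)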
Returns True if there was a match, false otherwise. :param subject: The string to check :param req: Webob.Request object """ match = self.version_uri_regex.match(subject) if match: major_version, minor_version = match.groups(0) major_version = int(major_version) minor_version = int(minor_version) req.environ['api.major_version'] = major_version req.environ['api.minor_version'] = minor_version return match is not None heat-10.0.2/heat/api/middleware/fault.py0000666000175000017500000001433513343562351020046 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # # Copyright © 2013 Unitedstack Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """A middleware that turns exceptions into parsable string. Inspired by Cinder's faultwrapper. """ import sys import traceback from oslo_config import cfg from oslo_utils import reflection import six import webob from heat.common import exception from heat.common import serializers from heat.common import wsgi class Fault(object): def __init__(self, error): self.error = error @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): if req.content_type == 'application/xml': serializer = serializers.XMLResponseSerializer() else: serializer = serializers.JSONResponseSerializer() resp = webob.Response(request=req) default_webob_exc = webob.exc.HTTPInternalServerError() resp.status_code = self.error.get('code', default_webob_exc.code) serializer.default(resp, self.error) return resp class FaultWrapper(wsgi.Middleware): """Replace error body with something the client can parse.""" error_map = { 'AttributeError': webob.exc.HTTPBadRequest, 'ActionInProgress': webob.exc.HTTPConflict, 'ValueError': webob.exc.HTTPBadRequest, 'EntityNotFound': webob.exc.HTTPNotFound, 'NotFound': webob.exc.HTTPNotFound, 'ResourceActionNotSupported': webob.exc.HTTPBadRequest, 'InvalidGlobalResource': webob.exc.HTTPInternalServerError, 'ResourceNotAvailable': webob.exc.HTTPNotFound, 'PhysicalResourceNameAmbiguity': webob.exc.HTTPBadRequest, 'PhysicalResourceIDAmbiguity': webob.exc.HTTPBadRequest, 'InvalidTenant': webob.exc.HTTPForbidden, 'Forbidden': webob.exc.HTTPForbidden, 'StackExists': webob.exc.HTTPConflict, 'StackValidationFailed': webob.exc.HTTPBadRequest, 'InvalidSchemaError': webob.exc.HTTPBadRequest, 'InvalidTemplateReference': webob.exc.HTTPBadRequest, 'InvalidTemplateVersion': webob.exc.HTTPBadRequest, 'InvalidTemplateSection': webob.exc.HTTPBadRequest, 'UnknownUserParameter': webob.exc.HTTPBadRequest, 'RevertFailed': webob.exc.HTTPInternalServerError, 'StopActionFailed': webob.exc.HTTPInternalServerError, 'EventSendFailed': webob.exc.HTTPInternalServerError, 'ServerBuildFailed': webob.exc.HTTPInternalServerError, 'InvalidEncryptionKey': webob.exc.HTTPInternalServerError, 'NotSupported': webob.exc.HTTPBadRequest, 'MissingCredentialError': webob.exc.HTTPBadRequest, 'UserParameterMissing': webob.exc.HTTPBadRequest, 'RequestLimitExceeded': webob.exc.HTTPBadRequest, 'Invalid': webob.exc.HTTPBadRequest, 'ResourcePropertyConflict': webob.exc.HTTPBadRequest, 
'PropertyUnspecifiedError': webob.exc.HTTPBadRequest, 'ObjectFieldInvalid': webob.exc.HTTPBadRequest, 'ReadOnlyFieldError': webob.exc.HTTPBadRequest, 'ObjectActionError': webob.exc.HTTPBadRequest, 'IncompatibleObjectVersion': webob.exc.HTTPBadRequest, 'OrphanedObjectError': webob.exc.HTTPBadRequest, 'UnsupportedObjectError': webob.exc.HTTPBadRequest, 'ResourceTypeUnavailable': webob.exc.HTTPBadRequest, 'InvalidBreakPointHook': webob.exc.HTTPBadRequest, 'ImmutableParameterModified': webob.exc.HTTPBadRequest } def _map_exception_to_error(self, class_exception): if class_exception == Exception: return webob.exc.HTTPInternalServerError if class_exception.__name__ not in self.error_map: return self._map_exception_to_error(class_exception.__base__) return self.error_map[class_exception.__name__] def _error(self, ex): trace = None traceback_marker = 'Traceback (most recent call last)' webob_exc = None safe = getattr(ex, 'safe', False) if isinstance(ex, exception.HTTPExceptionDisguise): # An HTTP exception was disguised so it could make it here # let's remove the disguise and set the original HTTP exception if cfg.CONF.debug: trace = ''.join(traceback.format_tb(ex.tb)) ex = ex.exc webob_exc = ex ex_type = reflection.get_class_name(ex, fully_qualified=False) is_remote = ex_type.endswith('_Remote') if is_remote: ex_type = ex_type[:-len('_Remote')] full_message = six.text_type(ex) if '\n' in full_message and is_remote: message, msg_trace = full_message.split('\n', 1) elif traceback_marker in full_message: message, msg_trace = full_message.split(traceback_marker, 1) message = message.rstrip('\n') msg_trace = traceback_marker + msg_trace else: msg_trace = 'None\n' if sys.exc_info() != (None, None, None): msg_trace = traceback.format_exc() message = full_message if isinstance(ex, exception.HeatException): message = ex.message if cfg.CONF.debug and not trace: trace = msg_trace if not webob_exc: webob_exc = self._map_exception_to_error(ex.__class__) error = { 'code': webob_exc.code, 'title': webob_exc.title, 'explanation': webob_exc.explanation, 'error': { 'type': ex_type, 'traceback': trace, } } if safe: error['error']['message'] = message return error def process_request(self, req): try: return req.get_response(self.application) except Exception as exc: return req.get_response(Fault(self._error(exc))) heat-10.0.2/heat/api/cfn/0000775000175000017500000000000013343562672015010 5ustar zuulzuul00000000000000heat-10.0.2/heat/api/cfn/__init__.py0000666000175000017500000000152213343562337017121 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
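# FaultWrapper above maps engine exceptions to HTTP responses by class name
# and, for names missing from error_map, walks the exception's __base__ chain
# until it reaches a mapped ancestor, bottoming out at HTTP 500 for a bare
# Exception. A minimal sketch of that recursive lookup (names illustrative):
def map_exception_to_error(exc_class, error_map, default_error):
    if exc_class is Exception:
        return default_error
    if exc_class.__name__ in error_map:
        return error_map[exc_class.__name__]
    # Single inheritance is assumed here, as in the middleware itself.
    return map_exception_to_error(exc_class.__base__, error_map, default_error)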
from heat.api.cfn import versions from heat.api.middleware import version_negotiation as vn def version_negotiation_filter(app, conf, **local_conf): return vn.VersionNegotiationFilter(versions.Controller, app, conf, **local_conf) heat-10.0.2/heat/api/cfn/versions.py0000666000175000017500000000144113343562337017232 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Controller that returns information on the heat API versions. Now it's a subclass of module versions, because of identity with OpenStack module versions. """ from heat.api import versions Controller = versions.Controller heat-10.0.2/heat/api/cfn/v1/0000775000175000017500000000000013343562672015336 5ustar zuulzuul00000000000000heat-10.0.2/heat/api/cfn/v1/__init__.py0000666000175000017500000000563113343562337017454 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import routes import webob from heat.api.cfn.v1 import signal from heat.api.cfn.v1 import stacks from heat.common import wsgi class API(wsgi.Router): """WSGI router for Heat CloudFormation v1 API requests.""" _actions = { 'list': 'ListStacks', 'create': 'CreateStack', 'describe': 'DescribeStacks', 'delete': 'DeleteStack', 'update': 'UpdateStack', 'cancel_update': 'CancelUpdateStack', 'events_list': 'DescribeStackEvents', 'validate_template': 'ValidateTemplate', 'get_template': 'GetTemplate', 'estimate_template_cost': 'EstimateTemplateCost', 'describe_stack_resource': 'DescribeStackResource', 'describe_stack_resources': 'DescribeStackResources', 'list_stack_resources': 'ListStackResources', } def __init__(self, conf, **local_conf): self.conf = conf mapper = routes.Mapper() stacks_resource = stacks.create_resource(conf) mapper.resource("stack", "stacks", controller=stacks_resource, collection={'detail': 'GET'}) def conditions(action): api_action = self._actions[action] def action_match(environ, result): req = webob.Request(environ) env_action = req.params.get("Action") return env_action == api_action return {'function': action_match} for action in self._actions: mapper.connect("/", controller=stacks_resource, action=action, conditions=conditions(action)) mapper.connect("/", controller=stacks_resource, action="index") # Add controller which handles signals on resources like: # waitconditions and alarms. 
# This is not part of the main CFN API spec, hence handle it # separately via a different path signal_controller = signal.create_resource(conf) mapper.connect('/waitcondition/{arn:.*}', controller=signal_controller, action='update_waitcondition', conditions=dict(method=['PUT'])) mapper.connect('/signal/{arn:.*}', controller=signal_controller, action='signal', conditions=dict(method=['POST'])) super(API, self).__init__(mapper) heat-10.0.2/heat/api/cfn/v1/stacks.py0000666000175000017500000006141313343562337017205 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Stack endpoint for Heat CloudFormation v1 API.""" import socket from oslo_log import log as logging from oslo_serialization import jsonutils from heat.api.aws import exception from heat.api.aws import utils as api_utils from heat.common import exception as heat_exception from heat.common.i18n import _ from heat.common import identifier from heat.common import policy from heat.common import template_format from heat.common import urlfetch from heat.common import wsgi from heat.rpc import api as rpc_api from heat.rpc import client as rpc_client LOG = logging.getLogger(__name__) class StackController(object): """WSGI controller for stacks resource in Heat CloudFormation v1 API. Implements the API actions. """ def __init__(self, options): self.options = options self.rpc_client = rpc_client.EngineClient() self.policy = policy.Enforcer(scope='cloudformation') def default(self, req, **args): raise exception.HeatInvalidActionError() def _enforce(self, req, action): """Authorize an action against the policy.json and policies in code.""" try: self.policy.enforce(req.context, action, is_registered_policy=True) except heat_exception.Forbidden: msg = _('Action %s not allowed for user') % action raise exception.HeatAccessDeniedError(msg) except Exception: # We expect policy.enforce to either pass or raise Forbidden # however, if anything else happens, we want to raise # HeatInternalFailureError, failure to do this results in # the user getting a big stacktrace spew as an API response msg = _('Error authorizing action %s') % action raise exception.HeatInternalFailureError(msg) @staticmethod def _id_format(resp): """Format the StackId field in the response as an ARN. Also, process other IDs into the correct format. """ if 'StackId' in resp: identity = identifier.HeatIdentifier(**resp['StackId']) resp['StackId'] = identity.arn() if 'EventId' in resp: identity = identifier.EventIdentifier(**resp['EventId']) resp['EventId'] = identity.event_id return resp @staticmethod def _extract_user_params(params): """Extract a dictionary of user input parameters for the stack. 
In the AWS API parameters, each user parameter appears as two key-value pairs with keys of the form below:: Parameters.member.1.ParameterKey Parameters.member.1.ParameterValue """ return api_utils.extract_param_pairs(params, prefix='Parameters', keyname='ParameterKey', valuename='ParameterValue') def _get_identity(self, con, stack_name): """Generate a stack identifier from the given stack name or ARN. In the case of a stack name, the identifier will be looked up in the engine over RPC. """ try: return dict(identifier.HeatIdentifier.from_arn(stack_name)) except ValueError: return self.rpc_client.identify_stack(con, stack_name) def list(self, req): """Implements ListStacks API action. Lists summary information for all stacks. """ self._enforce(req, 'ListStacks') def format_stack_summary(s): """Reformat engine output into the AWS "StackSummary" format.""" # Map the engine-api format to the AWS StackSummary datatype keymap = { rpc_api.STACK_CREATION_TIME: 'CreationTime', rpc_api.STACK_UPDATED_TIME: 'LastUpdatedTime', rpc_api.STACK_ID: 'StackId', rpc_api.STACK_NAME: 'StackName', rpc_api.STACK_STATUS_DATA: 'StackStatusReason', rpc_api.STACK_TMPL_DESCRIPTION: 'TemplateDescription', } result = api_utils.reformat_dict_keys(keymap, s) action = s[rpc_api.STACK_ACTION] status = s[rpc_api.STACK_STATUS] result['StackStatus'] = '_'.join((action, status)) # AWS docs indicate DeletionTime is omitted for current stacks # This is still TODO(unknown) in the engine, we don't keep data for # stacks after they are deleted if rpc_api.STACK_DELETION_TIME in s: result['DeletionTime'] = s[rpc_api.STACK_DELETION_TIME] return self._id_format(result) con = req.context try: stack_list = self.rpc_client.list_stacks(con) except Exception as ex: return exception.map_remote_error(ex) res = {'StackSummaries': [format_stack_summary(s) for s in stack_list]} return api_utils.format_response('ListStacks', res) def describe(self, req): """Implements DescribeStacks API action. Gets detailed information for a stack (or all stacks). """ self._enforce(req, 'DescribeStacks') def format_stack_outputs(o): keymap = { rpc_api.OUTPUT_DESCRIPTION: 'Description', rpc_api.OUTPUT_KEY: 'OutputKey', rpc_api.OUTPUT_VALUE: 'OutputValue', } def replacecolon(d): return dict(map(lambda k_v: (k_v[0].replace(':', '.'), k_v[1]), d.items())) def transform(attrs): """Recursively replace all `:` with `.` in dict keys. After that they are not interpreted as xml namespaces. 
""" new = replacecolon(attrs) for key, value in new.items(): if isinstance(value, dict): new[key] = transform(value) return new return api_utils.reformat_dict_keys(keymap, transform(o)) def format_stack(s): """Reformat engine output into the AWS "StackSummary" format.""" keymap = { rpc_api.STACK_CAPABILITIES: 'Capabilities', rpc_api.STACK_CREATION_TIME: 'CreationTime', rpc_api.STACK_DESCRIPTION: 'Description', rpc_api.STACK_DISABLE_ROLLBACK: 'DisableRollback', rpc_api.STACK_NOTIFICATION_TOPICS: 'NotificationARNs', rpc_api.STACK_PARAMETERS: 'Parameters', rpc_api.STACK_ID: 'StackId', rpc_api.STACK_NAME: 'StackName', rpc_api.STACK_STATUS_DATA: 'StackStatusReason', rpc_api.STACK_TIMEOUT: 'TimeoutInMinutes', } if s[rpc_api.STACK_UPDATED_TIME] is not None: keymap[rpc_api.STACK_UPDATED_TIME] = 'LastUpdatedTime' result = api_utils.reformat_dict_keys(keymap, s) action = s[rpc_api.STACK_ACTION] status = s[rpc_api.STACK_STATUS] result['StackStatus'] = '_'.join((action, status)) # Reformat outputs, these are handled separately as they are # only present in the engine output for a completely created # stack result['Outputs'] = [] if rpc_api.STACK_OUTPUTS in s: for o in s[rpc_api.STACK_OUTPUTS]: result['Outputs'].append(format_stack_outputs(o)) # Reformat Parameters dict-of-dict into AWS API format # This is a list-of-dict with nasty "ParameterKey" : key # "ParameterValue" : value format. result['Parameters'] = [{'ParameterKey': k, 'ParameterValue': v} for (k, v) in result['Parameters'].items()] return self._id_format(result) con = req.context # If no StackName parameter is passed, we pass None into the engine # this returns results for all stacks (visible to this user), which # is the behavior described in the AWS DescribeStacks API docs try: if 'StackName' in req.params: identity = self._get_identity(con, req.params['StackName']) else: identity = None stack_list = self.rpc_client.show_stack(con, identity) except Exception as ex: return exception.map_remote_error(ex) res = {'Stacks': [format_stack(s) for s in stack_list]} return api_utils.format_response('DescribeStacks', res) def _get_template(self, req): """Get template file contents, either from local file or URL.""" if 'TemplateBody' in req.params: LOG.debug('TemplateBody ...') return req.params['TemplateBody'] elif 'TemplateUrl' in req.params: url = req.params['TemplateUrl'] LOG.debug('TemplateUrl %s' % url) try: return urlfetch.get(url) except IOError as exc: msg = _('Failed to fetch template: %s') % exc raise exception.HeatInvalidParameterValueError(detail=msg) return None CREATE_OR_UPDATE_ACTION = ( CREATE_STACK, UPDATE_STACK, ) = ( "CreateStack", "UpdateStack", ) def create(self, req): self._enforce(req, 'CreateStack') return self.create_or_update(req, self.CREATE_STACK) def update(self, req): self._enforce(req, 'UpdateStack') return self.create_or_update(req, self.UPDATE_STACK) def create_or_update(self, req, action=None): """Implements CreateStack and UpdateStack API actions. Create or update stack as defined in template file. """ def extract_args(params): """Extract request params and reformat them to match engine API. 
FIXME: we currently only support a subset of the AWS defined parameters (both here and in the engine) """ # TODO(shardy) : Capabilities, NotificationARNs keymap = {'TimeoutInMinutes': rpc_api.PARAM_TIMEOUT, 'DisableRollback': rpc_api.PARAM_DISABLE_ROLLBACK} if 'DisableRollback' in params and 'OnFailure' in params: msg = _('DisableRollback and OnFailure ' 'may not be used together') raise exception.HeatInvalidParameterCombinationError( detail=msg) result = {} for k in keymap: if k in params: result[keymap[k]] = params[k] if 'OnFailure' in params: value = params['OnFailure'] if value == 'DO_NOTHING': result[rpc_api.PARAM_DISABLE_ROLLBACK] = 'true' elif value in ('ROLLBACK', 'DELETE'): result[rpc_api.PARAM_DISABLE_ROLLBACK] = 'false' return result if action not in self.CREATE_OR_UPDATE_ACTION: msg = _("Unexpected action %(action)s") % ({'action': action}) # This should not happen, so return HeatInternalFailureError return exception.HeatInternalFailureError(detail=msg) engine_action = {self.CREATE_STACK: self.rpc_client.create_stack, self.UPDATE_STACK: self.rpc_client.update_stack} con = req.context # Extract the stack input parameters stack_parms = self._extract_user_params(req.params) # Extract any additional arguments ("Request Parameters") create_args = extract_args(req.params) try: templ = self._get_template(req) except socket.gaierror: msg = _('Invalid Template URL') return exception.HeatInvalidParameterValueError(detail=msg) if templ is None: msg = _("TemplateBody or TemplateUrl were not given.") return exception.HeatMissingParameterError(detail=msg) try: stack = template_format.parse(templ) except ValueError: msg = _("The Template must be a JSON or YAML document.") return exception.HeatInvalidParameterValueError(detail=msg) args = {'template': stack, 'params': stack_parms, 'files': {}, 'args': create_args} try: stack_name = req.params['StackName'] if action == self.CREATE_STACK: args['stack_name'] = stack_name else: args['stack_identity'] = self._get_identity(con, stack_name) result = engine_action[action](con, **args) except Exception as ex: return exception.map_remote_error(ex) try: identity = identifier.HeatIdentifier(**result) except (ValueError, TypeError): response = result else: response = {'StackId': identity.arn()} return api_utils.format_response(action, response) def cancel_update(self, req): action = 'CancelUpdateStack' self._enforce(req, action) con = req.context stack_name = req.params['StackName'] stack_identity = self._get_identity(con, stack_name) try: self.rpc_client.stack_cancel_update( con, stack_identity=stack_identity, cancel_with_rollback=True) except Exception as ex: return exception.map_remote_error(ex) return api_utils.format_response(action, {}) def get_template(self, req): """Implements the GetTemplate API action. Get the template body for an existing stack. """ self._enforce(req, 'GetTemplate') con = req.context try: identity = self._get_identity(con, req.params['StackName']) templ = self.rpc_client.get_template(con, identity) except Exception as ex: return exception.map_remote_error(ex) return api_utils.format_response('GetTemplate', {'TemplateBody': templ}) def estimate_template_cost(self, req): """Implements the EstimateTemplateCost API action. Get the estimated monthly cost of a template. """ self._enforce(req, 'EstimateTemplateCost') return api_utils.format_response('EstimateTemplateCost', {'Url': 'http://en.wikipedia.org/wiki/Gratis' } ) def validate_template(self, req): """Implements the ValidateTemplate API action. Validates the specified template. 
""" self._enforce(req, 'ValidateTemplate') con = req.context try: templ = self._get_template(req) except socket.gaierror: msg = _('Invalid Template URL') return exception.HeatInvalidParameterValueError(detail=msg) if templ is None: msg = _("TemplateBody or TemplateUrl were not given.") return exception.HeatMissingParameterError(detail=msg) try: template = template_format.parse(templ) except ValueError: msg = _("The Template must be a JSON or YAML document.") return exception.HeatInvalidParameterValueError(detail=msg) LOG.info('validate_template') def format_validate_parameter(key, value): """Reformat engine output into AWS "ValidateTemplate" format.""" return { 'ParameterKey': key, 'DefaultValue': value.get(rpc_api.PARAM_DEFAULT, ''), 'Description': value.get(rpc_api.PARAM_DESCRIPTION, ''), 'NoEcho': value.get(rpc_api.PARAM_NO_ECHO, 'false') } try: res = self.rpc_client.validate_template(con, template) if 'Error' in res: return api_utils.format_response('ValidateTemplate', res['Error']) res['Parameters'] = [format_validate_parameter(k, v) for k, v in res['Parameters'].items()] return api_utils.format_response('ValidateTemplate', res) except Exception as ex: return exception.map_remote_error(ex) def delete(self, req): """Implements the DeleteStack API action. Deletes the specified stack. """ self._enforce(req, 'DeleteStack') con = req.context try: identity = self._get_identity(con, req.params['StackName']) res = self.rpc_client.delete_stack(con, identity, cast=False) except Exception as ex: return exception.map_remote_error(ex) if res is None: return api_utils.format_response('DeleteStack', '') else: return api_utils.format_response('DeleteStack', res['Error']) def events_list(self, req): """Implements the DescribeStackEvents API action. Returns events related to a specified stack (or all stacks). """ self._enforce(req, 'DescribeStackEvents') def format_stack_event(e): """Reformat engine output into AWS "StackEvent" format.""" keymap = { rpc_api.EVENT_ID: 'EventId', rpc_api.EVENT_RES_NAME: 'LogicalResourceId', rpc_api.EVENT_RES_PHYSICAL_ID: 'PhysicalResourceId', rpc_api.EVENT_RES_PROPERTIES: 'ResourceProperties', rpc_api.EVENT_RES_STATUS_DATA: 'ResourceStatusReason', rpc_api.EVENT_RES_TYPE: 'ResourceType', rpc_api.EVENT_STACK_ID: 'StackId', rpc_api.EVENT_STACK_NAME: 'StackName', rpc_api.EVENT_TIMESTAMP: 'Timestamp', } result = api_utils.reformat_dict_keys(keymap, e) action = e[rpc_api.EVENT_RES_ACTION] status = e[rpc_api.EVENT_RES_STATUS] result['ResourceStatus'] = '_'.join((action, status)) result['ResourceProperties'] = jsonutils.dumps(result[ 'ResourceProperties']) return self._id_format(result) con = req.context stack_name = req.params.get('StackName') try: identity = stack_name and self._get_identity(con, stack_name) events = self.rpc_client.list_events(con, identity) except Exception as ex: return exception.map_remote_error(ex) result = [format_stack_event(e) for e in events] return api_utils.format_response('DescribeStackEvents', {'StackEvents': result}) @staticmethod def _resource_status(res): action = res[rpc_api.RES_ACTION] status = res[rpc_api.RES_STATUS] return '_'.join((action, status)) def describe_stack_resource(self, req): """Implements the DescribeStackResource API action. Return the details of the given resource belonging to the given stack. 
""" self._enforce(req, 'DescribeStackResource') def format_resource_detail(r): # Reformat engine output into the AWS "StackResourceDetail" format keymap = { rpc_api.RES_DESCRIPTION: 'Description', rpc_api.RES_UPDATED_TIME: 'LastUpdatedTimestamp', rpc_api.RES_NAME: 'LogicalResourceId', rpc_api.RES_METADATA: 'Metadata', rpc_api.RES_PHYSICAL_ID: 'PhysicalResourceId', rpc_api.RES_STATUS_DATA: 'ResourceStatusReason', rpc_api.RES_TYPE: 'ResourceType', rpc_api.RES_STACK_ID: 'StackId', rpc_api.RES_STACK_NAME: 'StackName', } result = api_utils.reformat_dict_keys(keymap, r) result['ResourceStatus'] = self._resource_status(r) return self._id_format(result) con = req.context try: identity = self._get_identity(con, req.params['StackName']) resource_details = self.rpc_client.describe_stack_resource( con, stack_identity=identity, resource_name=req.params.get('LogicalResourceId')) except Exception as ex: return exception.map_remote_error(ex) result = format_resource_detail(resource_details) return api_utils.format_response('DescribeStackResource', {'StackResourceDetail': result}) def describe_stack_resources(self, req): """Implements the DescribeStackResources API action. Return details of resources specified by the parameters. `StackName`: returns all resources belonging to the stack. `PhysicalResourceId`: returns all resources belonging to the stack this resource is associated with. Only one of the parameters may be specified. Optional parameter: `LogicalResourceId`: filter the resources list by the logical resource id. """ self._enforce(req, 'DescribeStackResources') def format_stack_resource(r): """Reformat engine output into AWS "StackResource" format.""" keymap = { rpc_api.RES_DESCRIPTION: 'Description', rpc_api.RES_NAME: 'LogicalResourceId', rpc_api.RES_PHYSICAL_ID: 'PhysicalResourceId', rpc_api.RES_STATUS_DATA: 'ResourceStatusReason', rpc_api.RES_TYPE: 'ResourceType', rpc_api.RES_STACK_ID: 'StackId', rpc_api.RES_STACK_NAME: 'StackName', rpc_api.RES_UPDATED_TIME: 'Timestamp', } result = api_utils.reformat_dict_keys(keymap, r) result['ResourceStatus'] = self._resource_status(r) return self._id_format(result) con = req.context stack_name = req.params.get('StackName') physical_resource_id = req.params.get('PhysicalResourceId') if stack_name and physical_resource_id: msg = 'Use `StackName` or `PhysicalResourceId` but not both' return exception.HeatInvalidParameterCombinationError(detail=msg) try: if stack_name is not None: identity = self._get_identity(con, stack_name) else: identity = self.rpc_client.find_physical_resource( con, physical_resource_id=physical_resource_id) resources = self.rpc_client.describe_stack_resources( con, stack_identity=identity, resource_name=req.params.get('LogicalResourceId')) except Exception as ex: return exception.map_remote_error(ex) result = [format_stack_resource(r) for r in resources] return api_utils.format_response('DescribeStackResources', {'StackResources': result}) def list_stack_resources(self, req): """Implements the ListStackResources API action. Return summary of the resources belonging to the specified stack. 
""" self._enforce(req, 'ListStackResources') def format_resource_summary(r): """Reformat engine output to AWS "StackResourceSummary" format.""" keymap = { rpc_api.RES_UPDATED_TIME: 'LastUpdatedTimestamp', rpc_api.RES_NAME: 'LogicalResourceId', rpc_api.RES_PHYSICAL_ID: 'PhysicalResourceId', rpc_api.RES_STATUS_DATA: 'ResourceStatusReason', rpc_api.RES_TYPE: 'ResourceType', } result = api_utils.reformat_dict_keys(keymap, r) result['ResourceStatus'] = self._resource_status(r) return result con = req.context try: identity = self._get_identity(con, req.params['StackName']) resources = self.rpc_client.list_stack_resources( con, stack_identity=identity) except Exception as ex: return exception.map_remote_error(ex) summaries = [format_resource_summary(r) for r in resources] return api_utils.format_response('ListStackResources', {'StackResourceSummaries': summaries}) def create_resource(options): """Stacks resource factory method.""" deserializer = wsgi.JSONRequestDeserializer() return wsgi.Resource(StackController(options), deserializer) heat-10.0.2/heat/api/cfn/v1/signal.py0000666000175000017500000000376513343562337017200 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.api.aws import exception from heat.common import identifier from heat.common import wsgi from heat.rpc import client as rpc_client class SignalController(object): def __init__(self, options): self.options = options self.rpc_client = rpc_client.EngineClient() def update_waitcondition(self, req, body, arn): con = req.context identity = identifier.ResourceIdentifier.from_arn(arn) try: md = self.rpc_client.resource_signal( con, stack_identity=dict(identity.stack()), resource_name=identity.resource_name, details=body, sync_call=True) except Exception as ex: return exception.map_remote_error(ex) return {'resource': identity.resource_name, 'metadata': md} def signal(self, req, arn, body=None): con = req.context identity = identifier.ResourceIdentifier.from_arn(arn) try: self.rpc_client.resource_signal( con, stack_identity=dict(identity.stack()), resource_name=identity.resource_name, details=body) except Exception as ex: return exception.map_remote_error(ex) def create_resource(options): """Signal resource factory method.""" deserializer = wsgi.JSONRequestDeserializer() return wsgi.Resource(SignalController(options), deserializer) heat-10.0.2/heat/api/aws/0000775000175000017500000000000013343562672015034 5ustar zuulzuul00000000000000heat-10.0.2/heat/api/aws/ec2token.py0000666000175000017500000002373513343562337017132 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import hashlib from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils as json import requests import webob from heat.api.aws import exception from heat.common import endpoint_utils from heat.common.i18n import _ from heat.common import wsgi LOG = logging.getLogger(__name__) opts = [ cfg.StrOpt('auth_uri', help=_("Authentication Endpoint URI.")), cfg.BoolOpt('multi_cloud', default=False, help=_('Allow orchestration of multiple clouds.')), cfg.ListOpt('allowed_auth_uris', default=[], help=_('Allowed keystone endpoints for auth_uri when ' 'multi_cloud is enabled. At least one endpoint needs ' 'to be specified.')), cfg.StrOpt('cert_file', help=_('Optional PEM-formatted certificate chain file.')), cfg.StrOpt('key_file', help=_('Optional PEM-formatted file that contains the ' 'private key.')), cfg.StrOpt('ca_file', help=_('Optional CA cert file to use in SSL connections.')), cfg.BoolOpt('insecure', default=False, help=_('If set, then the server\'s certificate will not ' 'be verified.')), ] cfg.CONF.register_opts(opts, group='ec2authtoken') class EC2Token(wsgi.Middleware): """Authenticate an EC2 request with keystone and convert to token.""" def __init__(self, app, conf): self.conf = conf self.application = app self._ssl_options = None def _conf_get(self, name): # try config from paste-deploy first if name in self.conf: return self.conf[name] else: return cfg.CONF.ec2authtoken[name] def _conf_get_auth_uri(self): auth_uri = self._conf_get('auth_uri') if auth_uri: return auth_uri.replace('v2.0', 'v3') else: return endpoint_utils.get_auth_uri() @staticmethod def _conf_get_keystone_ec2_uri(auth_uri): if auth_uri.endswith('ec2tokens'): return auth_uri if auth_uri.endswith('/'): return '%sec2tokens' % auth_uri return '%s/ec2tokens' % auth_uri def _get_signature(self, req): """Extract the signature from the request. This can be a GET/POST variable, or for v4 it can also be in a header called 'Authorization'. - params['Signature'] == version 0,1,2,3 - params['X-Amz-Signature'] == version 4 - header 'Authorization' == version 4 """ sig = req.params.get('Signature') or req.params.get('X-Amz-Signature') if sig is None and 'Authorization' in req.headers: auth_str = req.headers['Authorization'] sig = auth_str.partition("Signature=")[2].split(',')[0] return sig def _get_access(self, req): """Extract the access key identifier. For v0/1/2/3 this is passed as the AWSAccessKeyId parameter; for v4 it is either an X-Amz-Credential parameter or a Credential= field in the 'Authorization' header string. """ access = req.params.get('AWSAccessKeyId') if access is None: cred_param = req.params.get('X-Amz-Credential') if cred_param: access = cred_param.split("/")[0] if access is None and 'Authorization' in req.headers: auth_str = req.headers['Authorization'] cred_str = auth_str.partition("Credential=")[2].split(',')[0] access = cred_str.split("/")[0] return access @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): if not self._conf_get('multi_cloud'): return self._authorize(req, self._conf_get_auth_uri()) else: # attempt to authorize for each configured allowed_auth_uris # until one is successful. # This is safe for the following reasons: # 1. AWSAccessKeyId is a randomly generated sequence # 2.
No secret is transferred to validate a request last_failure = None for auth_uri in self._conf_get('allowed_auth_uris'): try: LOG.debug("Attempting to authorize on %s" % auth_uri) return self._authorize(req, auth_uri) except exception.HeatAPIException as e: LOG.debug("Authorize failed: %s" % e.__class__) last_failure = e raise last_failure or exception.HeatAccessDeniedError() @property def ssl_options(self): if not self._ssl_options: cacert = self._conf_get('ca_file') insecure = self._conf_get('insecure') cert = self._conf_get('cert_file') key = self._conf_get('key_file') self._ssl_options = { 'verify': cacert if cacert else not insecure, 'cert': (cert, key) if cert else None } return self._ssl_options def _authorize(self, req, auth_uri): # Read request signature and access id. # If we find X-Auth-User in the headers we ignore a key error # here so that we can use both authentication methods. # Returning here just means the user didn't supply AWS # authentication and we'll let the app try native keystone next. LOG.info("Checking AWS credentials...") signature = self._get_signature(req) if not signature: if 'X-Auth-User' in req.headers: return self.application else: LOG.info("No AWS Signature found.") raise exception.HeatIncompleteSignatureError() access = self._get_access(req) if not access: if 'X-Auth-User' in req.headers: return self.application else: LOG.info("No AWSAccessKeyId/Authorization Credential") raise exception.HeatMissingAuthenticationTokenError() LOG.info("AWS credentials found, checking against keystone.") if not auth_uri: LOG.error("Ec2Token authorization failed, no auth_uri " "specified in config file") raise exception.HeatInternalFailureError(_('Service ' 'misconfigured')) # Make a copy of args for authentication and signature verification. auth_params = dict(req.params) # The 'Signature' param is not part of the authentication args auth_params.pop('Signature', None) # Authenticate the request. # AWS v4 authentication requires a hash of the body body_hash = hashlib.sha256(req.body).hexdigest() creds = {'ec2Credentials': {'access': access, 'signature': signature, 'host': req.host, 'verb': req.method, 'path': req.path, 'params': auth_params, 'headers': dict(req.headers), 'body_hash': body_hash }} creds_json = json.dumps(creds) headers = {'Content-Type': 'application/json'} keystone_ec2_uri = self._conf_get_keystone_ec2_uri(auth_uri) LOG.info('Authenticating with %s', keystone_ec2_uri) response = requests.post(keystone_ec2_uri, data=creds_json, headers=headers, verify=self.ssl_options['verify'], cert=self.ssl_options['cert']) result = response.json() try: token_id = response.headers['X-Subject-Token'] tenant = result['token']['project']['name'] tenant_id = result['token']['project']['id'] roles = [role['name'] for role in result['token'].get('roles', [])] except (AttributeError, KeyError): LOG.info("AWS authentication failure.") # Try to extract the reason for failure so we can return the # appropriate AWS error via raising an exception try: reason = result['error']['message'] except KeyError: reason = None if reason == "EC2 access key not found.": raise exception.HeatInvalidClientTokenIdError() elif reason == "EC2 signature not supplied.": raise exception.HeatSignatureError() else: raise exception.HeatAccessDeniedError() else: LOG.info("AWS authentication successful.") # Authenticated!
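# For reference, a successful keystone /v3/ec2tokens reply is consumed
# above roughly as follows; the field values here are illustrative only,
# not taken from any real deployment:
#
#   X-Subject-Token: <token_id>                    (response header)
#   {"token": {"project": {"id": "<project_id>",
#                          "name": "<project_name>"},
#              "roles": [{"name": "heat_stack_owner"}]}}
#
# The headers set below forward that identity to the rest of the paste
# pipeline in place of a native keystone token request.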
ec2_creds = {'ec2Credentials': {'access': access, 'signature': signature}} req.headers['X-Auth-EC2-Creds'] = json.dumps(ec2_creds) req.headers['X-Auth-Token'] = token_id req.headers['X-Tenant-Name'] = tenant req.headers['X-Tenant-Id'] = tenant_id req.headers['X-Auth-URL'] = auth_uri req.headers['X-Roles'] = ','.join(roles) return self.application def EC2Token_filter_factory(global_conf, **local_conf): """Factory method for paste.deploy.""" conf = global_conf.copy() conf.update(local_conf) def filter(app): return EC2Token(app, conf) return filter def list_opts(): yield 'ec2authtoken', opts heat-10.0.2/heat/api/aws/utils.py0000666000175000017500000000676013343562337016557 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Helper utilities related to the AWS API implementations.""" import itertools import re from oslo_log import log as logging from heat.api.aws import exception LOG = logging.getLogger(__name__) def format_response(action, response): """Format response from engine into API format.""" return {'%sResponse' % action: {'%sResult' % action: response}} def extract_param_pairs(params, prefix='', keyname='', valuename=''): """Extract user input params from AWS style parameter-pair encoded list. In the AWS API list items appear as two key-value pairs (passed as query parameters) with keys of the form below: Prefix.member.1.keyname=somekey Prefix.member.1.keyvalue=somevalue Prefix.member.2.keyname=anotherkey Prefix.member.2.keyvalue=somevalue We reformat this into a dict here to match the heat engine API expected format. """ plist = extract_param_list(params, prefix) kvs = [(p[keyname], p[valuename]) for p in plist if keyname in p and valuename in p] return dict(kvs) def extract_param_list(params, prefix=''): """Extract a list-of-dicts based on parameters containing AWS style list. MetricData.member.1.MetricName=buffers MetricData.member.1.Unit=Bytes MetricData.member.1.Value=231434333 MetricData.member.2.MetricName=buffers2 MetricData.member.2.Unit=Bytes MetricData.member.2.Value=12345 This can be extracted by passing prefix=MetricData, resulting in a list containing two dicts. """ key_re = re.compile(r"%s\.member\.([0-9]+)\.(.*)" % (prefix)) def get_param_data(params): for param_name, value in params.items(): match = key_re.match(param_name) if match: try: index = int(match.group(1)) except ValueError: pass else: key = match.group(2) yield (index, (key, value)) # Sort and group by index def key_func(d): return d[0] data = sorted(get_param_data(params), key=key_func) members = itertools.groupby(data, key_func) return [dict(kv for di, kv in m) for mi, m in members] def get_param_value(params, key): """Looks up an expected parameter in a parsed params dict. Helper function, looks up an expected parameter in a parsed params dict and returns the result. If params does not contain the requested key we raise an exception of the appropriate type. 
""" try: return params[key] except KeyError: LOG.error("Request does not contain %s parameter!", key) raise exception.HeatMissingParameterError(key) def reformat_dict_keys(keymap=None, inputdict=None): """Utility function for mapping one dict format to another.""" keymap = keymap or {} inputdict = inputdict or {} return dict([(outk, inputdict[ink]) for ink, outk in keymap.items() if ink in inputdict]) heat-10.0.2/heat/api/aws/__init__.py0000666000175000017500000000000013343562337017133 0ustar zuulzuul00000000000000heat-10.0.2/heat/api/aws/exception.py0000666000175000017500000002433113343562337017407 0ustar zuulzuul00000000000000# # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Heat API exception subclasses - maps API response errors to AWS Errors.""" from oslo_utils import reflection import six import webob.exc from heat.common.i18n import _ from heat.common import serializers class HeatAPIException(webob.exc.HTTPError): """webob HTTPError subclass that creates a serialized body. Subclass webob HTTPError so we can correctly serialize the wsgi response into the http response body, using the format specified by the request. Note this should not be used directly, instead use the subclasses defined below which map to AWS API errors. """ code = 400 title = "HeatAPIException" explanation = _("Generic HeatAPIException, please use specific " "subclasses!") err_type = "Sender" def __init__(self, detail=None): """Overload HTTPError constructor to create a default serialized body. This is required because not all error responses are processed by the wsgi controller (such as auth errors), which are further up the paste pipeline. We serialize in XML by default (as AWS does). """ webob.exc.HTTPError.__init__(self, detail=detail) serializer = serializers.XMLResponseSerializer() serializer.default(self, self.get_unserialized_body()) def get_unserialized_body(self): """Return a dict suitable for serialization in the wsgi controller. This wraps the exception details in a format which maps to the expected format for the AWS API. 
""" # Note the aws response format specifies a "Code" element which is not # the html response code, but the AWS API error code, e.g self.title if self.detail: message = ":".join([self.explanation, self.detail]) else: message = self.explanation return {'ErrorResponse': {'Error': {'Type': self.err_type, 'Code': self.title, 'Message': message}}} # Common Error Subclasses: class HeatIncompleteSignatureError(HeatAPIException): """The request signature does not conform to AWS standards.""" code = 400 title = "IncompleteSignature" explanation = _("The request signature does not conform to AWS standards") class HeatInternalFailureError(HeatAPIException): """The request processing has failed due to some unknown error.""" code = 500 title = "InternalFailure" explanation = _("The request processing has failed due to an " "internal error") err_type = "Server" class HeatInvalidActionError(HeatAPIException): """The action or operation requested is invalid.""" code = 400 title = "InvalidAction" explanation = _("The action or operation requested is invalid") class HeatInvalidClientTokenIdError(HeatAPIException): """The X.509 certificate or AWS Access Key ID provided does not exist.""" code = 403 title = "InvalidClientTokenId" explanation = _("The certificate or AWS Key ID provided does not exist") class HeatInvalidParameterCombinationError(HeatAPIException): """Parameters that must not be used together were used together.""" code = 400 title = "InvalidParameterCombination" explanation = _("Incompatible parameters were used together") class HeatInvalidParameterValueError(HeatAPIException): """A bad or out-of-range value was supplied for the input parameter.""" code = 400 title = "InvalidParameterValue" explanation = _("A bad or out-of-range value was supplied") class HeatInvalidQueryParameterError(HeatAPIException): """AWS query string is malformed, does not adhere to AWS standards.""" code = 400 title = "InvalidQueryParameter" explanation = _("AWS query string is malformed, does not adhere to " "AWS spec") class HeatMalformedQueryStringError(HeatAPIException): """The query string is malformed.""" code = 404 title = "MalformedQueryString" explanation = _("The query string is malformed") class HeatMissingActionError(HeatAPIException): """The request is missing an action or operation parameter.""" code = 400 title = "MissingAction" explanation = _("The request is missing an action or operation parameter") class HeatMissingAuthenticationTokenError(HeatAPIException): """Does not contain a valid AWS Access Key or certificate. Request must contain either a valid (registered) AWS Access Key ID or X.509 certificate. """ code = 403 title = "MissingAuthenticationToken" explanation = _("Does not contain a valid AWS Access Key or certificate") class HeatMissingParameterError(HeatAPIException): """A mandatory input parameter is missing. An input parameter that is mandatory for processing the request is missing. """ code = 400 title = "MissingParameter" explanation = _("A mandatory input parameter is missing") class HeatOptInRequiredError(HeatAPIException): """The AWS Access Key ID needs a subscription for the service.""" code = 403 title = "OptInRequired" explanation = _("The AWS Access Key ID needs a subscription for the " "service") class HeatRequestExpiredError(HeatAPIException): """Request expired or more than 15 minutes in the future. Request is past expires date or the request date (either with 15 minute padding), or the request date occurs more than 15 minutes in the future. 
""" code = 400 title = "RequestExpired" explanation = _("Request expired or more than 15mins in the future") class HeatServiceUnavailableError(HeatAPIException): """The request has failed due to a temporary failure of the server.""" code = 503 title = "ServiceUnavailable" explanation = _("Service temporarily unavailable") err_type = "Server" class HeatThrottlingError(HeatAPIException): """Request was denied due to request throttling.""" code = 400 title = "Throttling" explanation = _("Request was denied due to request throttling") class AlreadyExistsError(HeatAPIException): """Resource with the name requested already exists.""" code = 400 title = 'AlreadyExists' explanation = _("Resource with the name requested already exists") # Not documented in the AWS docs, authentication failure errors class HeatAccessDeniedError(HeatAPIException): """Authentication fails due to user IAM group memberships. This is the response given when authentication fails due to user IAM group memberships meaning we deny access. """ code = 403 title = "AccessDenied" explanation = _("User is not authorized to perform action") class HeatSignatureError(HeatAPIException): """Authentication fails due to a bad signature.""" code = 403 title = "SignatureDoesNotMatch" explanation = _("The request signature we calculated does not match the " "signature you provided") # Heat-specific errors class HeatAPINotImplementedError(HeatAPIException): """API action is not yet implemented.""" code = 500 title = "APINotImplemented" explanation = _("The requested action is not yet implemented") err_type = "Server" class HeatActionInProgressError(HeatAPIException): """Cannot perform action on stack in its current state.""" code = 400 title = 'InvalidAction' explanation = ("Cannot perform action on stack while other actions are " + "in progress") class HeatRequestLimitExceeded(HeatAPIException): """Payload size of the request exceeds maximum allowed size.""" code = 400 title = 'RequestLimitExceeded' explanation = _("Payload exceeds maximum allowed size") def map_remote_error(ex): """Map rpc_common.RemoteError exceptions to HeatAPIException subclasses. Map rpc_common.RemoteError exceptions returned by the engine to HeatAPIException subclasses which can be used to return properly formatted AWS error responses. 
""" inval_param_errors = ( 'AttributeError', 'ValueError', 'InvalidTenant', 'EntityNotFound', 'ResourceActionNotSupported', 'ResourceNotFound', 'ResourceNotAvailable', 'StackValidationFailed', 'InvalidSchemaError', 'InvalidTemplateReference', 'InvalidTemplateVersion', 'InvalidTemplateSection', 'UnknownUserParameter', 'UserParameterMissing', 'MissingCredentialError', 'ResourcePropertyConflict', 'PropertyUnspecifiedError', 'NotSupported', 'InvalidBreakPointHook', 'PhysicalResourceIDAmbiguity', ) denied_errors = ('Forbidden', 'NotAuthorized') already_exists_errors = ('StackExists') invalid_action_errors = ('ActionInProgress',) request_limit_exceeded = ('RequestLimitExceeded') ex_type = reflection.get_class_name(ex, fully_qualified=False) if ex_type.endswith('_Remote'): ex_type = ex_type[:-len('_Remote')] safe = getattr(ex, 'safe', False) detail = six.text_type(ex) if safe else None if ex_type in inval_param_errors: return HeatInvalidParameterValueError(detail=detail) elif ex_type in denied_errors: return HeatAccessDeniedError(detail=detail) elif ex_type in already_exists_errors: return AlreadyExistsError(detail=detail) elif ex_type in invalid_action_errors: return HeatActionInProgressError(detail=detail) elif ex_type in request_limit_exceeded: return HeatRequestLimitExceeded(detail=detail) else: # Map everything else to internal server error for now return HeatInternalFailureError(detail=detail) heat-10.0.2/heat/api/openstack/0000775000175000017500000000000013343562672016231 5ustar zuulzuul00000000000000heat-10.0.2/heat/api/openstack/__init__.py0000666000175000017500000000172213343562337020344 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from heat.api.middleware import fault from heat.api.middleware import version_negotiation as vn from heat.api.openstack import versions def version_negotiation_filter(app, conf, **local_conf): return vn.VersionNegotiationFilter(versions.Controller, app, conf, **local_conf) def faultwrap_filter(app, conf, **local_conf): return fault.FaultWrapper(app) heat-10.0.2/heat/api/openstack/versions.py0000666000175000017500000000152413343562337020455 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Controller that returns information on the heat API versions. Now it's a subclass of module versions, because of identity with cfn module versions. It can be changed, if there will be another API version. 
""" from heat.api import versions Controller = versions.Controller heat-10.0.2/heat/api/openstack/v1/0000775000175000017500000000000013343562672016557 5ustar zuulzuul00000000000000heat-10.0.2/heat/api/openstack/v1/actions.py0000666000175000017500000000565013343562337020577 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from webob import exc from heat.api.openstack.v1 import util from heat.common.i18n import _ from heat.common import serializers from heat.common import wsgi from heat.rpc import client as rpc_client class ActionController(object): """WSGI controller for Actions in Heat v1 API. Implements the API for stack actions """ # Define request scope (must match what is in policy.json or policies in # code) REQUEST_SCOPE = 'actions' ACTIONS = ( SUSPEND, RESUME, CHECK, CANCEL_UPDATE, CANCEL_WITHOUT_ROLLBACK ) = ( 'suspend', 'resume', 'check', 'cancel_update', 'cancel_without_rollback' ) def __init__(self, options): self.options = options self.rpc_client = rpc_client.EngineClient() @util.registered_identified_stack def action(self, req, identity, body=None): """Performs a specified action on a stack. The body is expecting to contain exactly one item whose key specifies the action. """ body = body or {} if len(body) < 1: raise exc.HTTPBadRequest(_("No action specified")) if len(body) > 1: raise exc.HTTPBadRequest(_("Multiple actions specified")) ac = next(six.iterkeys(body)) if ac not in self.ACTIONS: raise exc.HTTPBadRequest(_("Invalid action %s specified") % ac) if ac == self.SUSPEND: self.rpc_client.stack_suspend(req.context, identity) elif ac == self.RESUME: self.rpc_client.stack_resume(req.context, identity) elif ac == self.CHECK: self.rpc_client.stack_check(req.context, identity) elif ac == self.CANCEL_UPDATE: self.rpc_client.stack_cancel_update(req.context, identity, cancel_with_rollback=True) elif ac == self.CANCEL_WITHOUT_ROLLBACK: self.rpc_client.stack_cancel_update(req.context, identity, cancel_with_rollback=False) else: raise exc.HTTPInternalServerError(_("Unexpected action %s") % ac) def create_resource(options): """Actions action factory method.""" deserializer = wsgi.JSONRequestDeserializer() serializer = serializers.JSONResponseSerializer() return wsgi.Resource(ActionController(options), deserializer, serializer) heat-10.0.2/heat/api/openstack/v1/services.py0000666000175000017500000000331613343562337020757 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_messaging import exceptions from webob import exc from heat.api.openstack.v1 import util from heat.common.i18n import _ from heat.common import serializers from heat.common import wsgi from heat.rpc import client as rpc_client class ServiceController(object): """WSGI controller for reporting the heat engine status in Heat v1 API.""" # Define request scope (must match what is in policy.json or policies in # code) REQUEST_SCOPE = 'service' def __init__(self, options): self.options = options self.rpc_client = rpc_client.EngineClient() @util.registered_policy_enforce def index(self, req): try: services = self.rpc_client.list_services(req.context) return {'services': services} except exceptions.MessagingTimeout: msg = _('All heat engines are down.') raise exc.HTTPServiceUnavailable(msg) def create_resource(options): deserializer = wsgi.JSONRequestDeserializer() serializer = serializers.JSONResponseSerializer() return wsgi.Resource(ServiceController(options), deserializer, serializer) heat-10.0.2/heat/api/openstack/v1/build_info.py0000666000175000017500000000330213343562337021241 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from heat.api.openstack.v1 import util from heat.common import serializers from heat.common import wsgi from heat.rpc import client as rpc_client class BuildInfoController(object): """WSGI controller for BuildInfo in Heat v1 API. Returns build information for current app. """ # Define request scope (must match what is in policy.json or policies in # code) REQUEST_SCOPE = 'build_info' def __init__(self, options): self.options = options self.rpc_client = rpc_client.EngineClient() @util.registered_policy_enforce def build_info(self, req): engine_revision = self.rpc_client.get_revision(req.context) build_info = { 'api': {'revision': cfg.CONF.revision['heat_revision']}, 'engine': {'revision': engine_revision} } return build_info def create_resource(options): """BuildInfo factory method.""" deserializer = wsgi.JSONRequestDeserializer() serializer = serializers.JSONResponseSerializer() return wsgi.Resource(BuildInfoController(options), deserializer, serializer) heat-10.0.2/heat/api/openstack/v1/util.py0000666000175000017500000001103113343562337020102 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from webob import exc from heat.common.i18n import _ from heat.common import identifier def policy_enforce(handler): """Decorator that enforces policies. 
Checks that the path matches the request context and enforces the policy defined in policy.json or in policies. This is a handler method decorator. """ return _policy_enforce(handler) def registered_policy_enforce(handler): """Decorator that enforces policies. Checks that the path matches the request context and enforces the policy defined in policies. This is a handler method decorator. """ return _policy_enforce(handler, is_registered_policy=True) def _policy_enforce(handler, is_registered_policy=False): @six.wraps(handler) def handle_stack_method(controller, req, tenant_id, **kwargs): if req.context.tenant_id != tenant_id and not req.context.is_admin: raise exc.HTTPForbidden() allowed = req.context.policy.enforce( context=req.context, action=handler.__name__, scope=controller.REQUEST_SCOPE, is_registered_policy=is_registered_policy) if not allowed: raise exc.HTTPForbidden() return handler(controller, req, **kwargs) return handle_stack_method def identified_stack(handler): """Decorator that passes a stack identifier instead of path components. This is a handler method decorator. """ return _identified_stack(handler) def registered_identified_stack(handler): """Decorator that passes a stack identifier instead of path components. This is a handler method decorator. """ return _identified_stack(handler, is_registered_policy=True) def _identified_stack(handler, is_registered_policy=False): @six.wraps(handler) def handle_stack_method(controller, req, stack_name, stack_id, **kwargs): stack_identity = identifier.HeatIdentifier(req.context.tenant_id, stack_name, stack_id) return handler(controller, req, dict(stack_identity), **kwargs) return _policy_enforce(handle_stack_method, is_registered_policy=is_registered_policy) def make_url(req, identity): """Return the URL for the supplied identity dictionary.""" try: stack_identity = identifier.HeatIdentifier(**identity) except ValueError: err_reason = _('Invalid Stack address') raise exc.HTTPInternalServerError(err_reason) return req.relative_url(stack_identity.url_path(), True) def make_link(req, identity, relationship='self'): """Return a link structure for the supplied identity dictionary.""" return {'href': make_url(req, identity), 'rel': relationship} PARAM_TYPES = ( PARAM_TYPE_SINGLE, PARAM_TYPE_MULTI, PARAM_TYPE_MIXED ) = ( 'single', 'multi', 'mixed' ) def get_allowed_params(params, whitelist): """Extract from ``params`` all entries listed in ``whitelist``. The returned dict will contain an entry for a key if, and only if, there's an entry in ``whitelist`` for that key and at least one entry in ``params``.
If ``params`` contains multiple entries for the same key, it will yield an array of values: ``{key: [v1, v2,...]}`` :param params: a NestedMultiDict from webob.Request.params :param whitelist: an array of strings to whitelist :returns: a dict with {key: value} pairs """ allowed_params = {} for key, get_type in six.iteritems(whitelist): assert get_type in PARAM_TYPES value = None if get_type == PARAM_TYPE_SINGLE: value = params.get(key) elif get_type == PARAM_TYPE_MULTI: value = params.getall(key) elif get_type == PARAM_TYPE_MIXED: value = params.getall(key) if isinstance(value, list) and len(value) == 1: value = value.pop() if value: allowed_params[key] = value return allowed_params heat-10.0.2/heat/api/openstack/v1/views/0000775000175000017500000000000013343562672017714 5ustar zuulzuul00000000000000heat-10.0.2/heat/api/openstack/v1/views/stacks_view.py0000666000175000017500000000513313343562337022612 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools from heat.api.openstack.v1 import util from heat.api.openstack.v1.views import views_common from heat.rpc import api as rpc_api _collection_name = 'stacks' basic_keys = ( rpc_api.STACK_ID, rpc_api.STACK_NAME, rpc_api.STACK_DESCRIPTION, rpc_api.STACK_STATUS, rpc_api.STACK_STATUS_DATA, rpc_api.STACK_CREATION_TIME, rpc_api.STACK_DELETION_TIME, rpc_api.STACK_UPDATED_TIME, rpc_api.STACK_OWNER, rpc_api.STACK_PARENT, rpc_api.STACK_USER_PROJECT_ID, rpc_api.STACK_TAGS, ) def format_stack(req, stack, keys=None, include_project=False): def transform(key, value): if keys and key not in keys: return if key == rpc_api.STACK_ID: yield ('id', value['stack_id']) yield ('links', [util.make_link(req, value)]) if include_project: yield ('project', value['tenant']) elif key == rpc_api.STACK_ACTION: return elif (key == rpc_api.STACK_STATUS and rpc_api.STACK_ACTION in stack): # To avoid breaking API compatibility, we join RES_ACTION # and RES_STATUS, so the API format doesn't expose the # internal split of state into action/status yield (key, '_'.join((stack[rpc_api.STACK_ACTION], value))) else: # TODO(zaneb): ensure parameters can be formatted for XML # elif key == rpc_api.STACK_PARAMETERS: # return key, json.dumps(value) yield (key, value) return dict(itertools.chain.from_iterable( transform(k, v) for k, v in stack.items())) def collection(req, stacks, count=None, include_project=False): keys = basic_keys formatted_stacks = [format_stack(req, s, keys, include_project) for s in stacks] result = {'stacks': formatted_stacks} links = views_common.get_collection_links(req, formatted_stacks) if links: result['links'] = links if count is not None: result['count'] = count return result heat-10.0.2/heat/api/openstack/v1/views/views_common.py0000666000175000017500000000244713343562337023002 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from six.moves.urllib import parse as urlparse def get_collection_links(request, items): """Retrieve 'next' link, if applicable.""" links = [] try: limit = int(request.params.get("limit") or 0) except ValueError: limit = 0 if limit > 0 and limit == len(items): last_item = items[-1] last_item_id = last_item["id"] links.append({ "rel": "next", "href": _get_next_link(request, last_item_id) }) return links def _get_next_link(request, marker): """Return href string with proper limit and marker params.""" params = request.params.copy() params['marker'] = marker return "%s?%s" % (request.path_url, urlparse.urlencode(params)) heat-10.0.2/heat/api/openstack/v1/views/__init__.py0000666000175000017500000000000013343562337022013 0ustar zuulzuul00000000000000heat-10.0.2/heat/api/openstack/v1/__init__.py0000666000175000017500000004612313343562337020676 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import routes import six from heat.api.openstack.v1 import actions from heat.api.openstack.v1 import build_info from heat.api.openstack.v1 import events from heat.api.openstack.v1 import resources from heat.api.openstack.v1 import services from heat.api.openstack.v1 import software_configs from heat.api.openstack.v1 import software_deployments from heat.api.openstack.v1 import stacks from heat.common import wsgi class API(wsgi.Router): """WSGI router for Heat v1 REST API requests.""" def __init__(self, conf, **local_conf): self.conf = conf mapper = routes.Mapper() default_resource = wsgi.Resource(wsgi.DefaultMethodController(), wsgi.JSONRequestDeserializer()) def connect(controller, path_prefix, routes): """Connects a list of routes to a given controller with a path_prefix. This function connects the list of routes to the given controller, prepending the given path_prefix. Then for each URL it finds which request methods aren't handled and configures those to return a 405 error. Finally, it adds a handler for the OPTIONS method to all URLs that returns the list of allowed methods with a 204 status code.
""" # register the routes with the mapper, while keeping track of which # methods are defined for each URL urls = {} for r in routes: url = path_prefix + r['url'] methods = r['method'] if isinstance(methods, six.string_types): methods = [methods] methods_str = ','.join(methods) mapper.connect(r['name'], url, controller=controller, action=r['action'], conditions={'method': methods_str}) if url not in urls: urls[url] = methods else: urls[url] += methods # now register the missing methods to return 405s, and register # a handler for OPTIONS that returns the list of allowed methods for url, methods in urls.items(): all_methods = ['HEAD', 'GET', 'POST', 'PUT', 'PATCH', 'DELETE'] missing_methods = [m for m in all_methods if m not in methods] allowed_methods_str = ','.join(methods) mapper.connect(url, controller=default_resource, action='reject', allowed_methods=allowed_methods_str, conditions={'method': missing_methods}) if 'OPTIONS' not in methods: mapper.connect(url, controller=default_resource, action='options', allowed_methods=allowed_methods_str, conditions={'method': 'OPTIONS'}) # Stacks stacks_resource = stacks.create_resource(conf) connect(controller=stacks_resource, path_prefix='/{tenant_id}', routes=[ # Template handling { 'name': 'template_validate', 'url': '/validate', 'action': 'validate_template', 'method': 'POST' }, { 'name': 'resource_types', 'url': '/resource_types', 'action': 'list_resource_types', 'method': 'GET' }, { 'name': 'resource_schema', 'url': '/resource_types/{type_name}', 'action': 'resource_schema', 'method': 'GET' }, { 'name': 'generate_template', 'url': '/resource_types/{type_name}/template', 'action': 'generate_template', 'method': 'GET' }, { 'name': 'template_versions', 'url': '/template_versions', 'action': 'list_template_versions', 'method': 'GET' }, { 'name': 'template_functions', 'url': '/template_versions/{template_version}' '/functions', 'action': 'list_template_functions', 'method': 'GET' }, # Stack collection { 'name': 'stack_index', 'url': '/stacks', 'action': 'index', 'method': 'GET' }, { 'name': 'stack_create', 'url': '/stacks', 'action': 'create', 'method': 'POST' }, { 'name': 'stack_preview', 'url': '/stacks/preview', 'action': 'preview', 'method': 'POST' }, { 'name': 'stack_detail', 'url': '/stacks/detail', 'action': 'detail', 'method': 'GET' }, # Stack data { 'name': 'stack_lookup', 'url': '/stacks/{stack_name}', 'action': 'lookup', 'method': ['GET', 'POST', 'PUT', 'PATCH', 'DELETE'] }, # \x3A matches on a colon. 
# Routes treats : specially in its regexp { 'name': 'stack_lookup', 'url': r'/stacks/{stack_name:arn\x3A.*}', 'action': 'lookup', 'method': ['GET', 'POST', 'PUT', 'PATCH', 'DELETE'] }, { 'name': 'stack_lookup_subpath', 'url': '/stacks/{stack_name}/' '{path:resources|events|template|actions' '|environment|files}', 'action': 'lookup', 'method': 'GET' }, { 'name': 'stack_lookup_subpath_post', 'url': '/stacks/{stack_name}/' '{path:resources|events|template|actions}', 'action': 'lookup', 'method': 'POST' }, { 'name': 'stack_show', 'url': '/stacks/{stack_name}/{stack_id}', 'action': 'show', 'method': 'GET' }, { 'name': 'stack_lookup', 'url': '/stacks/{stack_name}/{stack_id}/template', 'action': 'template', 'method': 'GET' }, { 'name': 'stack_lookup', 'url': '/stacks/{stack_name}/{stack_id}/environment', 'action': 'environment', 'method': 'GET' }, { 'name': 'stack_lookup', 'url': '/stacks/{stack_name}/{stack_id}/files', 'action': 'files', 'method': 'GET' }, # Stack update/delete { 'name': 'stack_update', 'url': '/stacks/{stack_name}/{stack_id}', 'action': 'update', 'method': 'PUT' }, { 'name': 'stack_update_patch', 'url': '/stacks/{stack_name}/{stack_id}', 'action': 'update_patch', 'method': 'PATCH' }, { 'name': 'preview_stack_update', 'url': '/stacks/{stack_name}/{stack_id}/preview', 'action': 'preview_update', 'method': 'PUT' }, { 'name': 'preview_stack_update_patch', 'url': '/stacks/{stack_name}/{stack_id}/preview', 'action': 'preview_update_patch', 'method': 'PATCH' }, { 'name': 'stack_delete', 'url': '/stacks/{stack_name}/{stack_id}', 'action': 'delete', 'method': 'DELETE' }, # Stack abandon { 'name': 'stack_abandon', 'url': '/stacks/{stack_name}/{stack_id}/abandon', 'action': 'abandon', 'method': 'DELETE' }, { 'name': 'stack_export', 'url': '/stacks/{stack_name}/{stack_id}/export', 'action': 'export', 'method': 'GET' }, { 'name': 'stack_snapshot', 'url': '/stacks/{stack_name}/{stack_id}/snapshots', 'action': 'snapshot', 'method': 'POST' }, { 'name': 'stack_snapshot_show', 'url': '/stacks/{stack_name}/{stack_id}/snapshots/' '{snapshot_id}', 'action': 'show_snapshot', 'method': 'GET' }, { 'name': 'stack_snapshot_delete', 'url': '/stacks/{stack_name}/{stack_id}/snapshots/' '{snapshot_id}', 'action': 'delete_snapshot', 'method': 'DELETE' }, { 'name': 'stack_list_snapshots', 'url': '/stacks/{stack_name}/{stack_id}/snapshots', 'action': 'list_snapshots', 'method': 'GET' }, { 'name': 'stack_snapshot_restore', 'url': '/stacks/{stack_name}/{stack_id}/snapshots/' '{snapshot_id}/restore', 'action': 'restore_snapshot', 'method': 'POST' }, # Stack outputs { 'name': 'stack_output_list', 'url': '/stacks/{stack_name}/{stack_id}/outputs', 'action': 'list_outputs', 'method': 'GET' }, { 'name': 'stack_output_show', 'url': '/stacks/{stack_name}/{stack_id}/outputs/' '{output_key}', 'action': 'show_output', 'method': 'GET' } ]) # Resources resources_resource = resources.create_resource(conf) stack_path = '/{tenant_id}/stacks/{stack_name}/{stack_id}' connect(controller=resources_resource, path_prefix=stack_path, routes=[ # Resource collection { 'name': 'resource_index', 'url': '/resources', 'action': 'index', 'method': 'GET' }, # Resource data { 'name': 'resource_show', 'url': '/resources/{resource_name}', 'action': 'show', 'method': 'GET' }, { 'name': 'resource_metadata_show', 'url': '/resources/{resource_name}/metadata', 'action': 'metadata', 'method': 'GET' }, { 'name': 'resource_signal', 'url': '/resources/{resource_name}/signal', 'action': 'signal', 'method': 'POST' }, { 'name': 'resource_mark_unhealthy', 
'url': '/resources/{resource_name}', 'action': 'mark_unhealthy', 'method': 'PATCH' } ]) # Events events_resource = events.create_resource(conf) connect(controller=events_resource, path_prefix=stack_path, routes=[ # Stack event collection { 'name': 'event_index_stack', 'url': '/events', 'action': 'index', 'method': 'GET' }, # Resource event collection { 'name': 'event_index_resource', 'url': '/resources/{resource_name}/events', 'action': 'index', 'method': 'GET' }, # Event data { 'name': 'event_show', 'url': '/resources/{resource_name}/events/{event_id}', 'action': 'show', 'method': 'GET' } ]) # Actions actions_resource = actions.create_resource(conf) connect(controller=actions_resource, path_prefix=stack_path, routes=[ { 'name': 'action_stack', 'url': '/actions', 'action': 'action', 'method': 'POST' } ]) # Info info_resource = build_info.create_resource(conf) connect(controller=info_resource, path_prefix='/{tenant_id}', routes=[ { 'name': 'build_info', 'url': '/build_info', 'action': 'build_info', 'method': 'GET' } ]) # Software configs software_config_resource = software_configs.create_resource(conf) connect(controller=software_config_resource, path_prefix='/{tenant_id}/software_configs', routes=[ { 'name': 'software_config_index', 'url': '', 'action': 'index', 'method': 'GET' }, { 'name': 'software_config_create', 'url': '', 'action': 'create', 'method': 'POST' }, { 'name': 'software_config_show', 'url': '/{config_id}', 'action': 'show', 'method': 'GET' }, { 'name': 'software_config_delete', 'url': '/{config_id}', 'action': 'delete', 'method': 'DELETE' } ]) # Software deployments sd_resource = software_deployments.create_resource(conf) connect(controller=sd_resource, path_prefix='/{tenant_id}/software_deployments', routes=[ { 'name': 'software_deployment_index', 'url': '', 'action': 'index', 'method': 'GET' }, { 'name': 'software_deployment_metadata', 'url': '/metadata/{server_id}', 'action': 'metadata', 'method': 'GET' }, { 'name': 'software_deployment_create', 'url': '', 'action': 'create', 'method': 'POST' }, { 'name': 'software_deployment_show', 'url': '/{deployment_id}', 'action': 'show', 'method': 'GET' }, { 'name': 'software_deployment_update', 'url': '/{deployment_id}', 'action': 'update', 'method': 'PUT' }, { 'name': 'software_deployment_delete', 'url': '/{deployment_id}', 'action': 'delete', 'method': 'DELETE' } ]) # Services service_resource = services.create_resource(conf) with mapper.submapper( controller=service_resource, path_prefix='/{tenant_id}/services' ) as sa_mapper: sa_mapper.connect("service_index", "", action="index", conditions={'method': 'GET'}) # now that all the routes are defined, add a handler for super(API, self).__init__(mapper) heat-10.0.2/heat/api/openstack/v1/stacks.py0000666000175000017500000006644713343562351020436 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
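# The StackController defined below implements the actions behind the
# stack routes registered by the v1 API router above, for example
# (tenant, stack name and stack id are placeholders):
#
#   POST   /{tenant_id}/stacks                          -> create
#   GET    /{tenant_id}/stacks/detail                   -> detail
#   GET    /{tenant_id}/stacks/{stack_name}/{stack_id}  -> show
#   PUT    /{tenant_id}/stacks/{stack_name}/{stack_id}  -> update
#   DELETE /{tenant_id}/stacks/{stack_name}/{stack_id}  -> delete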
"""Stack endpoint for Heat v1 REST API.""" import contextlib from oslo_log import log as logging import six from six.moves.urllib import parse from webob import exc from heat.api.openstack.v1 import util from heat.api.openstack.v1.views import stacks_view from heat.common import context from heat.common import environment_format from heat.common.i18n import _ from heat.common import identifier from heat.common import param_utils from heat.common import serializers from heat.common import template_format from heat.common import urlfetch from heat.common import wsgi from heat.rpc import api as rpc_api from heat.rpc import client as rpc_client LOG = logging.getLogger(__name__) class InstantiationData(object): """The data to create or update a stack. The data accompanying a PUT or POST request. """ PARAMS = ( PARAM_STACK_NAME, PARAM_TEMPLATE, PARAM_TEMPLATE_URL, PARAM_USER_PARAMS, PARAM_ENVIRONMENT, PARAM_FILES, PARAM_ENVIRONMENT_FILES, ) = ( 'stack_name', 'template', 'template_url', 'parameters', 'environment', 'files', 'environment_files', ) def __init__(self, data, patch=False): """Initialise from the request object. If called from the PATCH api, insert a flag for the engine code to distinguish. """ self.data = data self.patch = patch if patch: self.data[rpc_api.PARAM_EXISTING] = True @staticmethod @contextlib.contextmanager def parse_error_check(data_type): try: yield except ValueError as parse_ex: mdict = {'type': data_type, 'error': six.text_type(parse_ex)} msg = _("%(type)s not in valid format: %(error)s") % mdict raise exc.HTTPBadRequest(msg) def stack_name(self): """Return the stack name.""" if self.PARAM_STACK_NAME not in self.data: raise exc.HTTPBadRequest(_("No stack name specified")) return self.data[self.PARAM_STACK_NAME] def template(self): """Get template file contents. Get template file contents, either inline, from stack adopt data or from a URL, in JSON or YAML format. """ template_data = None if rpc_api.PARAM_ADOPT_STACK_DATA in self.data: adopt_data = self.data[rpc_api.PARAM_ADOPT_STACK_DATA] try: adopt_data = template_format.simple_parse(adopt_data) template_format.validate_template_limit( six.text_type(adopt_data['template'])) return adopt_data['template'] except (ValueError, KeyError) as ex: err_reason = _('Invalid adopt data: %s') % ex raise exc.HTTPBadRequest(err_reason) elif self.PARAM_TEMPLATE in self.data: template_data = self.data[self.PARAM_TEMPLATE] if isinstance(template_data, dict): template_format.validate_template_limit(six.text_type( template_data)) return template_data elif self.PARAM_TEMPLATE_URL in self.data: url = self.data[self.PARAM_TEMPLATE_URL] LOG.debug('TemplateUrl %s' % url) try: template_data = urlfetch.get(url) except IOError as ex: err_reason = _('Could not retrieve template: %s') % ex raise exc.HTTPBadRequest(err_reason) if template_data is None: if self.patch: return None else: raise exc.HTTPBadRequest(_("No template specified")) with self.parse_error_check('Template'): return template_format.parse(template_data) def environment(self): """Get the user-supplied environment for the stack in YAML format. If the user supplied Parameters then merge these into the environment global options. """ env = {} # Don't use merged environment, if environment_files are supplied. 
if (self.PARAM_ENVIRONMENT in self.data and not self.data.get(self.PARAM_ENVIRONMENT_FILES)): env_data = self.data[self.PARAM_ENVIRONMENT] with self.parse_error_check('Environment'): if isinstance(env_data, dict): env = environment_format.validate(env_data) else: env = environment_format.parse(env_data) environment_format.default_for_missing(env) parameters = self.data.get(self.PARAM_USER_PARAMS, {}) env[self.PARAM_USER_PARAMS].update(parameters) return env def files(self): return self.data.get(self.PARAM_FILES, {}) def environment_files(self): return self.data.get(self.PARAM_ENVIRONMENT_FILES, None) def args(self): """Get any additional arguments supplied by the user.""" params = self.data.items() return dict((k, v) for k, v in params if k not in self.PARAMS) class StackController(object): """WSGI controller for stacks resource in Heat v1 API. Implements the API actions. """ # Define request scope (must match what is in policy.json or policies in # code) REQUEST_SCOPE = 'stacks' def __init__(self, options): self.options = options self.rpc_client = rpc_client.EngineClient() def default(self, req, **args): raise exc.HTTPNotFound() def _extract_bool_param(self, name, value): try: return param_utils.extract_bool(name, value) except ValueError as e: raise exc.HTTPBadRequest(six.text_type(e)) def _extract_int_param(self, name, value, allow_zero=True, allow_negative=False): try: return param_utils.extract_int(name, value, allow_zero, allow_negative) except ValueError as e: raise exc.HTTPBadRequest(six.text_type(e)) def _extract_tags_param(self, tags): try: return param_utils.extract_tags(tags) except ValueError as e: raise exc.HTTPBadRequest(six.text_type(e)) def _index(self, req, use_admin_cnxt=False): filter_whitelist = { # use of the keys in this list is discouraged; please use # rpc_api.STACK_KEYS instead 'id': util.PARAM_TYPE_MIXED, 'status': util.PARAM_TYPE_MIXED, 'name': util.PARAM_TYPE_MIXED, 'action': util.PARAM_TYPE_MIXED, 'tenant': util.PARAM_TYPE_MIXED, 'username': util.PARAM_TYPE_MIXED, 'owner_id': util.PARAM_TYPE_MIXED, } whitelist = { 'limit': util.PARAM_TYPE_SINGLE, 'marker': util.PARAM_TYPE_SINGLE, 'sort_dir': util.PARAM_TYPE_SINGLE, 'sort_keys': util.PARAM_TYPE_MULTI, 'show_deleted': util.PARAM_TYPE_SINGLE, 'show_nested': util.PARAM_TYPE_SINGLE, 'show_hidden': util.PARAM_TYPE_SINGLE, 'tags': util.PARAM_TYPE_SINGLE, 'tags_any': util.PARAM_TYPE_SINGLE, 'not_tags': util.PARAM_TYPE_SINGLE, 'not_tags_any': util.PARAM_TYPE_SINGLE, } params = util.get_allowed_params(req.params, whitelist) stack_keys = dict.fromkeys(rpc_api.STACK_KEYS, util.PARAM_TYPE_MIXED) unsupported = ( rpc_api.STACK_ID, # not user visible rpc_api.STACK_CAPABILITIES, # not supported rpc_api.STACK_CREATION_TIME, # don't support timestamp rpc_api.STACK_DELETION_TIME, # don't support timestamp rpc_api.STACK_DESCRIPTION, # not supported rpc_api.STACK_NOTIFICATION_TOPICS, # not supported rpc_api.STACK_OUTPUTS, # not in database rpc_api.STACK_PARAMETERS, # not in this table rpc_api.STACK_TAGS, # tags queries follow a specific guideline rpc_api.STACK_TMPL_DESCRIPTION, # not supported rpc_api.STACK_UPDATED_TIME, # don't support timestamp ) for key in unsupported: stack_keys.pop(key) # backward compatibility stack_keys.update(filter_whitelist) filter_params = util.get_allowed_params(req.params, stack_keys) show_deleted = False p_name = rpc_api.PARAM_SHOW_DELETED if p_name in params: params[p_name] = self._extract_bool_param(p_name, params[p_name]) show_deleted = params[p_name] show_nested = False p_name =
rpc_api.PARAM_SHOW_NESTED if p_name in params: params[p_name] = self._extract_bool_param(p_name, params[p_name]) show_nested = params[p_name] key = rpc_api.PARAM_LIMIT if key in params: params[key] = self._extract_int_param(key, params[key]) show_hidden = False p_name = rpc_api.PARAM_SHOW_HIDDEN if p_name in params: params[p_name] = self._extract_bool_param(p_name, params[p_name]) show_hidden = params[p_name] tags = None if rpc_api.PARAM_TAGS in params: params[rpc_api.PARAM_TAGS] = self._extract_tags_param( params[rpc_api.PARAM_TAGS]) tags = params[rpc_api.PARAM_TAGS] tags_any = None if rpc_api.PARAM_TAGS_ANY in params: params[rpc_api.PARAM_TAGS_ANY] = self._extract_tags_param( params[rpc_api.PARAM_TAGS_ANY]) tags_any = params[rpc_api.PARAM_TAGS_ANY] not_tags = None if rpc_api.PARAM_NOT_TAGS in params: params[rpc_api.PARAM_NOT_TAGS] = self._extract_tags_param( params[rpc_api.PARAM_NOT_TAGS]) not_tags = params[rpc_api.PARAM_NOT_TAGS] not_tags_any = None if rpc_api.PARAM_NOT_TAGS_ANY in params: params[rpc_api.PARAM_NOT_TAGS_ANY] = self._extract_tags_param( params[rpc_api.PARAM_NOT_TAGS_ANY]) not_tags_any = params[rpc_api.PARAM_NOT_TAGS_ANY] # get the with_count value, if invalid, raise ValueError with_count = False if req.params.get('with_count'): with_count = self._extract_bool_param( 'with_count', req.params.get('with_count')) if not filter_params: filter_params = None if use_admin_cnxt: cnxt = context.get_admin_context() else: cnxt = req.context stacks = self.rpc_client.list_stacks(cnxt, filters=filter_params, **params) count = None if with_count: try: # Check if engine has been updated to a version with # support to count_stacks before trying to use it. count = self.rpc_client.count_stacks(cnxt, filters=filter_params, show_deleted=show_deleted, show_nested=show_nested, show_hidden=show_hidden, tags=tags, tags_any=tags_any, not_tags=not_tags, not_tags_any=not_tags_any) except AttributeError as ex: LOG.warning("Old Engine Version: %s", ex) return stacks_view.collection(req, stacks=stacks, count=count, include_project=cnxt.is_admin) @util.registered_policy_enforce def global_index(self, req): return self._index(req, use_admin_cnxt=True) @util.registered_policy_enforce def index(self, req): """Lists summary information for all stacks.""" global_tenant = False name = rpc_api.PARAM_GLOBAL_TENANT if name in req.params: global_tenant = self._extract_bool_param( name, req.params.get(name)) if global_tenant: return self.global_index(req, req.context.tenant_id) return self._index(req) @util.registered_policy_enforce def detail(self, req): """Lists detailed information for all stacks.""" stacks = self.rpc_client.list_stacks(req.context) return {'stacks': [stacks_view.format_stack(req, s) for s in stacks]} @util.registered_policy_enforce def preview(self, req, body): """Preview the outcome of a template and its params.""" data = InstantiationData(body) args = self.prepare_args(data) result = self.rpc_client.preview_stack( req.context, data.stack_name(), data.template(), data.environment(), data.files(), args, environment_files=data.environment_files() ) formatted_stack = stacks_view.format_stack(req, result) return {'stack': formatted_stack} def prepare_args(self, data, is_update=False): args = data.args() key = rpc_api.PARAM_TIMEOUT if key in args: args[key] = self._extract_int_param(key, args[key]) key = rpc_api.PARAM_TAGS if args.get(key) is not None: args[key] = self._extract_tags_param(args[key]) key = rpc_api.PARAM_CONVERGE if not is_update and key in args: msg = _("%s flag only supported in 
stack update (or update " "preview) request.") % key raise exc.HTTPBadRequest(six.text_type(msg)) return args @util.registered_policy_enforce def create(self, req, body): """Create a new stack.""" data = InstantiationData(body) args = self.prepare_args(data) result = self.rpc_client.create_stack( req.context, data.stack_name(), data.template(), data.environment(), data.files(), args, environment_files=data.environment_files()) formatted_stack = stacks_view.format_stack( req, {rpc_api.STACK_ID: result} ) return {'stack': formatted_stack} @util.registered_policy_enforce def lookup(self, req, stack_name, path='', body=None): """Redirect to the canonical URL for a stack.""" try: identity = dict(identifier.HeatIdentifier.from_arn(stack_name)) except ValueError: identity = self.rpc_client.identify_stack(req.context, stack_name) location = util.make_url(req, identity) if path: location = '/'.join([location, path]) params = req.params if params: location += '?%s' % parse.urlencode(params, True) raise exc.HTTPFound(location=location) @util.registered_identified_stack def show(self, req, identity): """Gets detailed information for a stack.""" params = req.params p_name = rpc_api.RESOLVE_OUTPUTS if rpc_api.RESOLVE_OUTPUTS in params: resolve_outputs = self._extract_bool_param( p_name, params[p_name]) else: resolve_outputs = True stack_list = self.rpc_client.show_stack(req.context, identity, resolve_outputs) if not stack_list: raise exc.HTTPInternalServerError() stack = stack_list[0] return {'stack': stacks_view.format_stack(req, stack)} @util.registered_identified_stack def template(self, req, identity): """Get the template body for an existing stack.""" templ = self.rpc_client.get_template(req.context, identity) # TODO(zaneb): always set Content-type to application/json return templ @util.registered_identified_stack def environment(self, req, identity): """Get the environment for an existing stack.""" env = self.rpc_client.get_environment(req.context, identity) return env @util.registered_identified_stack def files(self, req, identity): """Get the files for an existing stack.""" return self.rpc_client.get_files(req.context, identity) @util.registered_identified_stack def update(self, req, identity, body): """Update an existing stack with a new template and/or parameters.""" data = InstantiationData(body) args = self.prepare_args(data, is_update=True) self.rpc_client.update_stack( req.context, identity, data.template(), data.environment(), data.files(), args, environment_files=data.environment_files()) raise exc.HTTPAccepted() @util.registered_identified_stack def update_patch(self, req, identity, body): """Update an existing stack with a new template. 
Update an existing stack with a new template by patching the parameters. Adds the patch flag to the args so the engine code can distinguish a PATCH update from a regular PUT update. """ data = InstantiationData(body, patch=True) args = self.prepare_args(data, is_update=True) self.rpc_client.update_stack( req.context, identity, data.template(), data.environment(), data.files(), args, environment_files=data.environment_files()) raise exc.HTTPAccepted() def _param_show_nested(self, req): whitelist = {'show_nested': util.PARAM_TYPE_SINGLE} params = util.get_allowed_params(req.params, whitelist) p_name = 'show_nested' if p_name in params: return self._extract_bool_param(p_name, params[p_name]) @util.registered_identified_stack def preview_update(self, req, identity, body): """Preview an update of an existing stack with a new template/parameters.""" data = InstantiationData(body) args = self.prepare_args(data, is_update=True) show_nested = self._param_show_nested(req) if show_nested is not None: args[rpc_api.PARAM_SHOW_NESTED] = show_nested changes = self.rpc_client.preview_update_stack( req.context, identity, data.template(), data.environment(), data.files(), args, environment_files=data.environment_files()) return {'resource_changes': changes} @util.registered_identified_stack def preview_update_patch(self, req, identity, body): """Preview a PATCH update of an existing stack.""" data = InstantiationData(body, patch=True) args = self.prepare_args(data, is_update=True) show_nested = self._param_show_nested(req) if show_nested is not None: args[rpc_api.PARAM_SHOW_NESTED] = show_nested changes = self.rpc_client.preview_update_stack( req.context, identity, data.template(), data.environment(), data.files(), args, environment_files=data.environment_files()) return {'resource_changes': changes} @util.registered_identified_stack def delete(self, req, identity): """Delete the specified stack.""" self.rpc_client.delete_stack(req.context, identity, cast=False) raise exc.HTTPNoContent() @util.registered_identified_stack def abandon(self, req, identity): """Abandons the specified stack. Abandons the specified stack by deleting the stack and its resources from the database, but underlying resources will not be deleted. """ return self.rpc_client.abandon_stack(req.context, identity) @util.registered_identified_stack def export(self, req, identity): """Export the specified stack. Returns stack data in JSON format. """ return self.rpc_client.export_stack(req.context, identity) @util.registered_policy_enforce def validate_template(self, req, body): """Implements the ValidateTemplate API action. Validates the specified template.
""" data = InstantiationData(body) whitelist = {'show_nested': util.PARAM_TYPE_SINGLE, 'ignore_errors': util.PARAM_TYPE_SINGLE} params = util.get_allowed_params(req.params, whitelist) show_nested = False p_name = rpc_api.PARAM_SHOW_NESTED if p_name in params: params[p_name] = self._extract_bool_param(p_name, params[p_name]) show_nested = params[p_name] if rpc_api.PARAM_IGNORE_ERRORS in params: ignorable_errors = params[rpc_api.PARAM_IGNORE_ERRORS].split(',') else: ignorable_errors = None result = self.rpc_client.validate_template( req.context, data.template(), data.environment(), files=data.files(), environment_files=data.environment_files(), show_nested=show_nested, ignorable_errors=ignorable_errors) if 'Error' in result: raise exc.HTTPBadRequest(result['Error']) return result @util.registered_policy_enforce def list_resource_types(self, req): """Returns a resource types list which may be used in template.""" support_status = req.params.get('support_status') type_name = req.params.get('name') version = req.params.get('version') if req.params.get('with_description') is not None: with_description = self._extract_bool_param( 'with_description', req.params.get('with_description')) else: # Add backward compatibility support for case when heatclient # version is lower than version with this parameter. with_description = False return { 'resource_types': self.rpc_client.list_resource_types( req.context, support_status=support_status, type_name=type_name, heat_version=version, with_description=with_description)} @util.registered_policy_enforce def list_template_versions(self, req): """Returns a list of available template versions.""" return { 'template_versions': self.rpc_client.list_template_versions(req.context) } @util.registered_policy_enforce def list_template_functions(self, req, template_version): """Returns a list of available functions in a given template.""" if req.params.get('with_condition_func') is not None: with_condition = self._extract_bool_param( 'with_condition_func', req.params.get('with_condition_func')) else: with_condition = False return { 'template_functions': self.rpc_client.list_template_functions(req.context, template_version, with_condition) } @util.registered_policy_enforce def resource_schema(self, req, type_name, with_description=False): """Returns the schema of the given resource type.""" return self.rpc_client.resource_schema( req.context, type_name, self._extract_bool_param('with_description', with_description)) @util.registered_policy_enforce def generate_template(self, req, type_name): """Generates a template based on the specified type.""" template_type = 'cfn' if rpc_api.TEMPLATE_TYPE in req.params: try: template_type = param_utils.extract_template_type( req.params.get(rpc_api.TEMPLATE_TYPE)) except ValueError as ex: msg = _("Template type is not supported: %s") % ex raise exc.HTTPBadRequest(six.text_type(msg)) return self.rpc_client.generate_template(req.context, type_name, template_type) @util.registered_identified_stack def snapshot(self, req, identity, body): name = body.get('name') return self.rpc_client.stack_snapshot(req.context, identity, name) @util.registered_identified_stack def show_snapshot(self, req, identity, snapshot_id): snapshot = self.rpc_client.show_snapshot( req.context, identity, snapshot_id) return {'snapshot': snapshot} @util.registered_identified_stack def delete_snapshot(self, req, identity, snapshot_id): self.rpc_client.delete_snapshot(req.context, identity, snapshot_id) raise exc.HTTPNoContent() @util.registered_identified_stack def 
list_snapshots(self, req, identity): return { 'snapshots': self.rpc_client.stack_list_snapshots( req.context, identity) } @util.registered_identified_stack def restore_snapshot(self, req, identity, snapshot_id): self.rpc_client.stack_restore(req.context, identity, snapshot_id) raise exc.HTTPAccepted() @util.registered_identified_stack def list_outputs(self, req, identity): return { 'outputs': self.rpc_client.list_outputs( req.context, identity) } @util.registered_identified_stack def show_output(self, req, identity, output_key): return {'output': self.rpc_client.show_output(req.context, identity, output_key)} class StackSerializer(serializers.JSONResponseSerializer): """Handles serialization of specific controller method responses.""" def _populate_response_header(self, response, location, status): response.status = status if six.PY2: response.headers['Location'] = location.encode('utf-8') else: response.headers['Location'] = location response.headers['Content-Type'] = 'application/json' return response def create(self, response, result): self._populate_response_header(response, result['stack']['links'][0]['href'], 201) response.body = six.b(self.to_json(result)) return response def create_resource(options): """Stacks resource factory method.""" deserializer = wsgi.JSONRequestDeserializer() serializer = StackSerializer() return wsgi.Resource(StackController(options), deserializer, serializer) heat-10.0.2/heat/api/openstack/v1/resources.py0000666000175000017500000001742113343562337021150 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
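# ---------------------------------------------------------------------------
# [Editor's note] A minimal sketch of the request body that InstantiationData
# in stacks.py above accepts for stack create (POST /v1/{tenant_id}/stacks).
# It is not part of Heat; the endpoint URL and token below are assumptions,
# and python-heatclient is the supported client in practice.
# ---------------------------------------------------------------------------
import json

import requests  # assumed to be available in the caller's environment

HEAT_URL = 'http://controller:8004/v1/<tenant_id>'  # hypothetical endpoint
TOKEN = '<keystone-token>'                          # hypothetical token

body = {
    'stack_name': 'demo',              # PARAM_STACK_NAME (required)
    'template': {                      # PARAM_TEMPLATE, inline;
        'heat_template_version': '2016-10-14',  # template_url works too
        'resources': {},
    },
    'parameters': {},                  # merged into the environment by
    'environment': {},                 # InstantiationData.environment()
    'files': {},                       # PARAM_FILES
    'timeout_mins': 60,                # left over in args() for the engine
}
resp = requests.post(HEAT_URL + '/stacks',
                     headers={'X-Auth-Token': TOKEN,
                              'Content-Type': 'application/json'},
                     data=json.dumps(body))
# On success, StackSerializer.create() above returns 201 with a Location
# header pointing at the canonical stack URL.
print(resp.status_code, resp.headers.get('Location'))
# ---------------------------------------------------------------------------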
import itertools import six from webob import exc from heat.api.openstack.v1 import util from heat.common.i18n import _ from heat.common import identifier from heat.common import param_utils from heat.common import serializers from heat.common import wsgi from heat.rpc import api as rpc_api from heat.rpc import client as rpc_client def format_resource(req, res, keys=None): keys = keys or [] def include_key(k): return k in keys if keys else True def transform(key, value): if not include_key(key): return if key == rpc_api.RES_ID: identity = identifier.ResourceIdentifier(**value) links = [util.make_link(req, identity), util.make_link(req, identity.stack(), 'stack')] nested_id = res.get(rpc_api.RES_NESTED_STACK_ID) if nested_id: nested_identity = identifier.HeatIdentifier(**nested_id) links.append(util.make_link(req, nested_identity, 'nested')) yield ('links', links) elif (key == rpc_api.RES_STACK_NAME or key == rpc_api.RES_STACK_ID or key == rpc_api.RES_ACTION or key == rpc_api.RES_NESTED_STACK_ID): return elif (key == rpc_api.RES_METADATA): return elif (key == rpc_api.RES_STATUS and rpc_api.RES_ACTION in res): # To avoid breaking API compatibility, we join RES_ACTION # and RES_STATUS, so the API format doesn't expose the # internal split of state into action/status yield (key, '_'.join((res[rpc_api.RES_ACTION], value))) elif (key == rpc_api.RES_NAME): yield ('logical_resource_id', value) yield (key, value) else: yield (key, value) return dict(itertools.chain.from_iterable( transform(k, v) for k, v in res.items())) class ResourceController(object): """WSGI controller for Resources in Heat v1 API. Implements the API actions. """ # Define request scope (must match what is in policy.json or policies in # code) REQUEST_SCOPE = 'resource' def __init__(self, options): self.options = options self.rpc_client = rpc_client.EngineClient() def _extract_to_param(self, req, rpc_param, extractor, default): key = rpc_param if key in req.params: try: return extractor(key, req.params[key]) except ValueError as e: raise exc.HTTPBadRequest(six.text_type(e)) else: return default @util.registered_identified_stack def index(self, req, identity): """Lists information for all resources.""" whitelist = { 'type': 'mixed', 'status': 'mixed', 'name': 'mixed', 'action': 'mixed', 'id': 'mixed', 'physical_resource_id': 'mixed' } invalid_keys = (set(req.params.keys()) - set(list(whitelist) + [rpc_api.PARAM_NESTED_DEPTH, rpc_api.PARAM_WITH_DETAIL])) if invalid_keys: raise exc.HTTPBadRequest(_('Invalid filter parameters %s') % six.text_type(list(invalid_keys))) nested_depth = self._extract_to_param(req, rpc_api.PARAM_NESTED_DEPTH, param_utils.extract_int, default=0) with_detail = self._extract_to_param(req, rpc_api.PARAM_WITH_DETAIL, param_utils.extract_bool, default=False) params = util.get_allowed_params(req.params, whitelist) res_list = self.rpc_client.list_stack_resources(req.context, identity, nested_depth, with_detail, filters=params) return {'resources': [format_resource(req, res) for res in res_list]} @util.registered_identified_stack def show(self, req, identity, resource_name): """Gets detailed information for a resource.""" whitelist = {'with_attr': util.PARAM_TYPE_MULTI} params = util.get_allowed_params(req.params, whitelist) if 'with_attr' not in params: params['with_attr'] = None res = self.rpc_client.describe_stack_resource(req.context, identity, resource_name, **params) return {'resource': format_resource(req, res)} @util.registered_identified_stack def metadata(self, req, identity, resource_name): """Gets 
metadata information for a resource.""" res = self.rpc_client.describe_stack_resource(req.context, identity, resource_name) return {rpc_api.RES_METADATA: res[rpc_api.RES_METADATA]} @util.registered_identified_stack def signal(self, req, identity, resource_name, body=None): self.rpc_client.resource_signal(req.context, stack_identity=identity, resource_name=resource_name, details=body) @util.registered_identified_stack def mark_unhealthy(self, req, identity, resource_name, body): """Mark a resource as healthy or unhealthy.""" data = dict() VALID_KEYS = (RES_UPDATE_MARK_UNHEALTHY, RES_UPDATE_STATUS_REASON) = ( 'mark_unhealthy', rpc_api.RES_STATUS_DATA) invalid_keys = set(body) - set(VALID_KEYS) if invalid_keys: raise exc.HTTPBadRequest(_("Invalid keys in resource " "mark unhealthy %s") % invalid_keys) if RES_UPDATE_MARK_UNHEALTHY not in body: raise exc.HTTPBadRequest( _("Missing mandatory (%s) key from mark unhealthy " "request") % RES_UPDATE_MARK_UNHEALTHY) try: data[RES_UPDATE_MARK_UNHEALTHY] = param_utils.extract_bool( RES_UPDATE_MARK_UNHEALTHY, body[RES_UPDATE_MARK_UNHEALTHY]) except ValueError as e: raise exc.HTTPBadRequest(six.text_type(e)) data[RES_UPDATE_STATUS_REASON] = body.get(RES_UPDATE_STATUS_REASON, "") self.rpc_client.resource_mark_unhealthy(req.context, stack_identity=identity, resource_name=resource_name, **data) def create_resource(options): """Resources resource factory method.""" deserializer = wsgi.JSONRequestDeserializer() serializer = serializers.JSONResponseSerializer() return wsgi.Resource(ResourceController(options), deserializer, serializer) heat-10.0.2/heat/api/openstack/v1/events.py0000666000175000017500000001416013343562337020437 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
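# ---------------------------------------------------------------------------
# [Editor's note] A hedged sketch of the PATCH body that
# ResourceController.mark_unhealthy() above accepts
# (PATCH /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/resources/{name}).
# Not part of Heat. 'resource_status_reason' is assumed to be the literal
# value of rpc_api.RES_STATUS_DATA; HEAT_URL and TOKEN are hypothetical.
# ---------------------------------------------------------------------------
import json

import requests  # assumed to be available

HEAT_URL = 'http://controller:8004/v1/<tenant_id>'  # hypothetical
TOKEN = '<keystone-token>'                          # hypothetical

body = {
    'mark_unhealthy': True,          # RES_UPDATE_MARK_UNHEALTHY, mandatory
    'resource_status_reason': 'failed liveness probe',  # optional reason
}
resp = requests.patch(
    HEAT_URL + '/stacks/demo/<stack-id>/resources/my_server',
    headers={'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'},
    data=json.dumps(body))
# Any key outside those two is rejected with 400 (see invalid_keys above);
# once marked, the engine should treat the resource as unhealthy so a
# subsequent stack check/update can deal with it.
print(resp.status_code)
# ---------------------------------------------------------------------------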
import itertools import six from webob import exc from heat.api.openstack.v1 import util from heat.common.i18n import _ from heat.common import identifier from heat.common import param_utils from heat.common import serializers from heat.common import wsgi from heat.rpc import api as rpc_api from heat.rpc import client as rpc_client summary_keys = [ rpc_api.EVENT_ID, rpc_api.EVENT_TIMESTAMP, rpc_api.EVENT_RES_NAME, rpc_api.EVENT_RES_STATUS, rpc_api.EVENT_RES_STATUS_DATA, rpc_api.EVENT_RES_PHYSICAL_ID, ] def format_event(req, event, keys=None): def include_key(k): return k in keys if keys else True def transform(key, value): if not include_key(key): return if key == rpc_api.EVENT_ID: identity = identifier.EventIdentifier(**value) yield ('id', identity.event_id) yield ('links', [util.make_link(req, identity), util.make_link(req, identity.resource(), 'resource'), util.make_link(req, identity.stack(), 'stack')]) elif key in (rpc_api.EVENT_STACK_ID, rpc_api.EVENT_STACK_NAME, rpc_api.EVENT_RES_ACTION): return elif (key == rpc_api.EVENT_RES_STATUS and rpc_api.EVENT_RES_ACTION in event): # To avoid breaking API compatibility, we join RES_ACTION # and RES_STATUS, so the API format doesn't expose the # internal split of state into action/status yield (key, '_'.join((event[rpc_api.EVENT_RES_ACTION], value))) elif (key == rpc_api.RES_NAME): yield ('logical_resource_id', value) yield (key, value) else: yield (key, value) ev = dict(itertools.chain.from_iterable( transform(k, v) for k, v in event.items())) root_stack_id = event.get(rpc_api.EVENT_ROOT_STACK_ID) if root_stack_id: root_identifier = identifier.HeatIdentifier(**root_stack_id) ev['links'].append(util.make_link(req, root_identifier, 'root_stack')) return ev class EventController(object): """WSGI controller for Events in Heat v1 API. Implements the API actions. 
""" # Define request scope (must match what is in policy.json or policies in # code) REQUEST_SCOPE = 'events' def __init__(self, options): self.options = options self.rpc_client = rpc_client.EngineClient() def _event_list(self, req, identity, detail=False, filters=None, limit=None, marker=None, sort_keys=None, sort_dir=None, nested_depth=None): events = self.rpc_client.list_events(req.context, identity, filters=filters, limit=limit, marker=marker, sort_keys=sort_keys, sort_dir=sort_dir, nested_depth=nested_depth) keys = None if detail else summary_keys return [format_event(req, e, keys) for e in events] @util.registered_identified_stack def index(self, req, identity, resource_name=None): """Lists summary information for all events.""" whitelist = { 'limit': util.PARAM_TYPE_SINGLE, 'marker': util.PARAM_TYPE_SINGLE, 'sort_dir': util.PARAM_TYPE_SINGLE, 'sort_keys': util.PARAM_TYPE_MULTI, 'nested_depth': util.PARAM_TYPE_SINGLE, } filter_whitelist = { 'resource_status': util.PARAM_TYPE_MIXED, 'resource_action': util.PARAM_TYPE_MIXED, 'resource_name': util.PARAM_TYPE_MIXED, 'resource_type': util.PARAM_TYPE_MIXED, } params = util.get_allowed_params(req.params, whitelist) filter_params = util.get_allowed_params(req.params, filter_whitelist) int_params = (rpc_api.PARAM_LIMIT, rpc_api.PARAM_NESTED_DEPTH) try: for key in int_params: if key in params: params[key] = param_utils.extract_int( key, params[key], allow_zero=True) except ValueError as e: raise exc.HTTPBadRequest(six.text_type(e)) if resource_name is None: if not filter_params: filter_params = None else: filter_params['resource_name'] = resource_name events = self._event_list( req, identity, filters=filter_params, **params) if not events and resource_name is not None: msg = _('No events found for resource %s') % resource_name raise exc.HTTPNotFound(msg) return {'events': events} @util.registered_identified_stack def show(self, req, identity, resource_name, event_id): """Gets detailed information for an event.""" filters = {"resource_name": resource_name, "uuid": event_id} events = self._event_list(req, identity, filters=filters, detail=True) if not events: raise exc.HTTPNotFound(_('No event %s found') % event_id) return {'event': events[0]} def create_resource(options): """Events resource factory method.""" deserializer = wsgi.JSONRequestDeserializer() serializer = serializers.JSONResponseSerializer() return wsgi.Resource(EventController(options), deserializer, serializer) heat-10.0.2/heat/api/openstack/v1/software_configs.py0000666000175000017500000000754213343562337022503 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from webob import exc from heat.api.openstack.v1 import util from heat.common import context from heat.common import param_utils from heat.common import serializers from heat.common import wsgi from heat.rpc import api as rpc_api from heat.rpc import client as rpc_client class SoftwareConfigController(object): """WSGI controller for Software config in Heat v1 API. Implements the API actions. 
""" REQUEST_SCOPE = 'software_configs' def __init__(self, options): self.options = options self.rpc_client = rpc_client.EngineClient() def default(self, req, **args): raise exc.HTTPNotFound() def _extract_bool_param(self, name, value): try: return param_utils.extract_bool(name, value) except ValueError as e: raise exc.HTTPBadRequest(six.text_type(e)) def _index(self, req, use_admin_cnxt=False): whitelist = { 'limit': util.PARAM_TYPE_SINGLE, 'marker': util.PARAM_TYPE_SINGLE } params = util.get_allowed_params(req.params, whitelist) if use_admin_cnxt: cnxt = context.get_admin_context() else: cnxt = req.context scs = self.rpc_client.list_software_configs(cnxt, **params) return {'software_configs': scs} @util.registered_policy_enforce def global_index(self, req): return self._index(req, use_admin_cnxt=True) @util.registered_policy_enforce def index(self, req): """Lists summary information for all software configs.""" global_tenant = False name = rpc_api.PARAM_GLOBAL_TENANT if name in req.params: global_tenant = self._extract_bool_param( name, req.params.get(name)) if global_tenant: return self.global_index(req, req.context.tenant_id) return self._index(req) @util.registered_policy_enforce def show(self, req, config_id): """Gets detailed information for a software config.""" sc = self.rpc_client.show_software_config( req.context, config_id) return {'software_config': sc} @util.registered_policy_enforce def create(self, req, body): """Create a new software config.""" create_data = { 'name': body.get('name'), 'group': body.get('group'), 'config': body.get('config'), 'inputs': body.get('inputs'), 'outputs': body.get('outputs'), 'options': body.get('options'), } sc = self.rpc_client.create_software_config( req.context, **create_data) return {'software_config': sc} @util.registered_policy_enforce def delete(self, req, config_id): """Delete an existing software config.""" res = self.rpc_client.delete_software_config(req.context, config_id) if res is not None: raise exc.HTTPBadRequest(res['Error']) raise exc.HTTPNoContent() def create_resource(options): """Software configs resource factory method.""" deserializer = wsgi.JSONRequestDeserializer() serializer = serializers.JSONResponseSerializer() return wsgi.Resource( SoftwareConfigController(options), deserializer, serializer) heat-10.0.2/heat/api/openstack/v1/software_deployments.py0000666000175000017500000000753313343562337023416 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from heat.api.openstack.v1 import util from heat.common import serializers from heat.common import wsgi from heat.rpc import client as rpc_client class SoftwareDeploymentController(object): """WSGI controller for Software deployments in Heat v1 API. Implements the API actions. 
""" REQUEST_SCOPE = 'software_deployments' def __init__(self, options): self.options = options self.rpc_client = rpc_client.EngineClient() def default(self, req, **args): raise exc.HTTPNotFound() @util.registered_policy_enforce def index(self, req): """List software deployments.""" whitelist = { 'server_id': util.PARAM_TYPE_SINGLE, } params = util.get_allowed_params(req.params, whitelist) sds = self.rpc_client.list_software_deployments(req.context, **params) return {'software_deployments': sds} @util.registered_policy_enforce def metadata(self, req, server_id): """List software deployments grouped by the group name. This is done for the requested server. """ sds = self.rpc_client.metadata_software_deployments( req.context, server_id=server_id) return {'metadata': sds} @util.registered_policy_enforce def show(self, req, deployment_id): """Gets detailed information for a software deployment.""" sd = self.rpc_client.show_software_deployment(req.context, deployment_id) return {'software_deployment': sd} @util.registered_policy_enforce def create(self, req, body): """Create a new software deployment.""" create_data = dict((k, body.get(k)) for k in ( 'config_id', 'server_id', 'input_values', 'action', 'status', 'status_reason', 'stack_user_project_id')) sd = self.rpc_client.create_software_deployment(req.context, **create_data) return {'software_deployment': sd} @util.registered_policy_enforce def update(self, req, deployment_id, body): """Update an existing software deployment.""" update_data = dict((k, body.get(k)) for k in ( 'config_id', 'input_values', 'output_values', 'action', 'status', 'status_reason') if body.get(k, None) is not None) sd = self.rpc_client.update_software_deployment(req.context, deployment_id, **update_data) return {'software_deployment': sd} @util.registered_policy_enforce def delete(self, req, deployment_id): """Delete an existing software deployment.""" res = self.rpc_client.delete_software_deployment(req.context, deployment_id) if res is not None: raise exc.HTTPBadRequest(res['Error']) raise exc.HTTPNoContent() def create_resource(options): """Software deployments resource factory method.""" deserializer = wsgi.JSONRequestDeserializer() serializer = serializers.JSONResponseSerializer() return wsgi.Resource( SoftwareDeploymentController(options), deserializer, serializer) heat-10.0.2/heat/api/__init__.py0000666000175000017500000000000013343562337016341 0ustar zuulzuul00000000000000heat-10.0.2/heat/api/versions.py0000666000175000017500000000320413343562351016457 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Controller that returns information on the heat API versions.""" from oslo_serialization import jsonutils from six.moves import http_client import webob.dec class Controller(object): """A controller that produces information on the heat API versions.""" def __init__(self, conf): self.conf = conf @webob.dec.wsgify def __call__(self, req): """Respond to a request for all OpenStack API versions.""" version_objs = [ { "id": "v1.0", "status": "CURRENT", "links": [ { "rel": "self", "href": self.get_href(req) }] }] body = jsonutils.dumps(dict(versions=version_objs)) response = webob.Response(request=req, status=http_client.MULTIPLE_CHOICES, content_type='application/json') response.body = body return response def get_href(self, req): return "%s/v1/" % req.host_url heat-10.0.2/heat/db/0000775000175000017500000000000013343562672014056 5ustar zuulzuul00000000000000heat-10.0.2/heat/db/sqlalchemy/0000775000175000017500000000000013343562672016220 5ustar zuulzuul00000000000000heat-10.0.2/heat/db/sqlalchemy/filters.py0000666000175000017500000000305713343562337020247 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six def exact_filter(query, model, filters): """Applies exact match filtering to a query. Returns the updated query. Modifies filters argument to remove filters consumed. :param query: query to apply filters to :param model: model object the query applies to, for IN-style filtering :param filters: dictionary of filters; values that are lists, tuples, sets, or frozensets cause an 'IN' test to be performed, while exact matching ('==' operator) is used for other values """ filter_dict = {} if filters is None: filters = {} for key, value in six.iteritems(filters): if isinstance(value, (list, tuple, set, frozenset)): column_attr = getattr(model, key) query = query.filter(column_attr.in_(value)) else: filter_dict[key] = value if filter_dict: query = query.filter_by(**filter_dict) return query heat-10.0.2/heat/db/sqlalchemy/migrate_repo/0000775000175000017500000000000013343562672020675 5ustar zuulzuul00000000000000heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/0000775000175000017500000000000013343562672022545 5ustar zuulzuul00000000000000heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/073_resource_data_fk_ondelete_cascade.py0000666000175000017500000000315313343562337032334 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Liberty backports. # Do not use this number for new Mitaka work. 
New Mitaka work starts after # all the placeholders. import sqlalchemy from migrate import ForeignKeyConstraint def upgrade(migrate_engine): meta = sqlalchemy.MetaData() meta.bind = migrate_engine resource_data = sqlalchemy.Table('resource_data', meta, autoload=True) resource = sqlalchemy.Table('resource', meta, autoload=True) for fk in resource_data.foreign_keys: if fk.column == resource.c.id: # delete the existing fk # and create with ondelete cascade and a proper name existing_fkey = ForeignKeyConstraint( columns=[resource_data.c.resource_id], refcolumns=[resource.c.id], name=fk.name) existing_fkey.drop() fkey = ForeignKeyConstraint( columns=[resource_data.c.resource_id], refcolumns=[resource.c.id], name="fk_resource_id", ondelete='CASCADE') fkey.create() break heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/072_raw_template_files.py0000666000175000017500000000274113343562337027361 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sqlalchemy from heat.db.sqlalchemy import types def upgrade(migrate_engine): meta = sqlalchemy.MetaData(bind=migrate_engine) raw_template_files = sqlalchemy.Table( 'raw_template_files', meta, sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True, nullable=False), sqlalchemy.Column('files', types.Json), sqlalchemy.Column('created_at', sqlalchemy.DateTime), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), mysql_engine='InnoDB', mysql_charset='utf8' ) raw_template_files.create() raw_template = sqlalchemy.Table('raw_template', meta, autoload=True) files_id = sqlalchemy.Column( 'files_id', sqlalchemy.Integer(), sqlalchemy.ForeignKey('raw_template_files.id', name='raw_tmpl_files_fkey_ref')) files_id.create(raw_template) heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/081_placeholder.py0000666000175000017500000000135413343562337025774 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Pike backports. # Do not use this number for new Queens work, which starts after # all the placeholders. def upgrade(migrate_engine): pass heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/076_placeholder.py0000666000175000017500000000135513343562337026001 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Newton backports. # Do not use this number for new Ocata work, which starts after # all the placeholders. def upgrade(migrate_engine): pass heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/085_placeholder.py0000666000175000017500000000135413343562337026000 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Pike backports. # Do not use this number for new Queens work, which starts after # all the placeholders. def upgrade(migrate_engine): pass heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/077_placeholder.py0000666000175000017500000000135513343562337026002 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Newton backports. # Do not use this number for new Ocata work, which starts after # all the placeholders. def upgrade(migrate_engine): pass heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/074_placeholder.py0000666000175000017500000000135513343562337025777 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Newton backports. # Do not use this number for new Ocata work, which starts after # all the placeholders. def upgrade(migrate_engine): pass heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/084_placeholder.py0000666000175000017500000000135413343562337025777 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for Pike backports. # Do not use this number for new Queens work, which starts after # all the placeholders. def upgrade(migrate_engine): pass heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/071_mitaka.py0000666000175000017500000004007013343562337024755 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid import sqlalchemy from heat.db.sqlalchemy import types def upgrade(migrate_engine): meta = sqlalchemy.MetaData() meta.bind = migrate_engine raw_template = sqlalchemy.Table( 'raw_template', meta, sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True, nullable=False), sqlalchemy.Column('created_at', sqlalchemy.DateTime), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), sqlalchemy.Column('template', types.LongText), sqlalchemy.Column('files', types.Json), sqlalchemy.Column('environment', types.Json), mysql_engine='InnoDB', mysql_charset='utf8' ) user_creds = sqlalchemy.Table( 'user_creds', meta, sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True, nullable=False), sqlalchemy.Column('created_at', sqlalchemy.DateTime), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), sqlalchemy.Column('username', sqlalchemy.String(255)), sqlalchemy.Column('password', sqlalchemy.String(255)), sqlalchemy.Column('region_name', sqlalchemy.String(length=255)), sqlalchemy.Column('decrypt_method', sqlalchemy.String(length=64)), sqlalchemy.Column('tenant', sqlalchemy.String(1024)), sqlalchemy.Column('auth_url', sqlalchemy.Text), sqlalchemy.Column('tenant_id', sqlalchemy.String(256)), sqlalchemy.Column('trust_id', sqlalchemy.String(255)), sqlalchemy.Column('trustor_user_id', sqlalchemy.String(64)), mysql_engine='InnoDB', mysql_charset='utf8' ) stack = sqlalchemy.Table( 'stack', meta, sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True, nullable=False), sqlalchemy.Column('created_at', sqlalchemy.DateTime), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), sqlalchemy.Column('deleted_at', sqlalchemy.DateTime), sqlalchemy.Column('name', sqlalchemy.String(255)), sqlalchemy.Column('raw_template_id', sqlalchemy.Integer, sqlalchemy.ForeignKey('raw_template.id'), nullable=False), sqlalchemy.Column('prev_raw_template_id', sqlalchemy.Integer, sqlalchemy.ForeignKey('raw_template.id')), sqlalchemy.Column('user_creds_id', sqlalchemy.Integer, sqlalchemy.ForeignKey('user_creds.id')), sqlalchemy.Column('username', sqlalchemy.String(256)), sqlalchemy.Column('owner_id', sqlalchemy.String(36)), sqlalchemy.Column('action', sqlalchemy.String(255)), sqlalchemy.Column('status', sqlalchemy.String(255)), sqlalchemy.Column('status_reason', types.LongText), 
sqlalchemy.Column('timeout', sqlalchemy.Integer), sqlalchemy.Column('tenant', sqlalchemy.String(256)), sqlalchemy.Column('disable_rollback', sqlalchemy.Boolean, nullable=False), sqlalchemy.Column('stack_user_project_id', sqlalchemy.String(length=64)), sqlalchemy.Column('backup', sqlalchemy.Boolean, default=False), sqlalchemy.Column('nested_depth', sqlalchemy.Integer, default=0), sqlalchemy.Column('convergence', sqlalchemy.Boolean, default=False), sqlalchemy.Column('current_traversal', sqlalchemy.String(36)), sqlalchemy.Column('current_deps', types.Json), sqlalchemy.Column('parent_resource_name', sqlalchemy.String(255)), sqlalchemy.Index('ix_stack_name', 'name', mysql_length=255), sqlalchemy.Index('ix_stack_tenant', 'tenant', mysql_length=255), sqlalchemy.Index('ix_stack_owner_id', 'owner_id', mysql_length=36), mysql_engine='InnoDB', mysql_charset='utf8' ) resource = sqlalchemy.Table( 'resource', meta, sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True, nullable=False), sqlalchemy.Column('uuid', sqlalchemy.String(36), unique=True, default=lambda: str(uuid.uuid4())), sqlalchemy.Column('nova_instance', sqlalchemy.String(255)), sqlalchemy.Column('name', sqlalchemy.String(255)), sqlalchemy.Column('created_at', sqlalchemy.DateTime), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), sqlalchemy.Column('action', sqlalchemy.String(255)), sqlalchemy.Column('status', sqlalchemy.String(255)), sqlalchemy.Column('status_reason', types.LongText), sqlalchemy.Column('stack_id', sqlalchemy.String(36), sqlalchemy.ForeignKey('stack.id'), nullable=False), sqlalchemy.Column('rsrc_metadata', types.LongText), sqlalchemy.Column('properties_data', types.Json), sqlalchemy.Column('engine_id', sqlalchemy.String(length=36)), sqlalchemy.Column('atomic_key', sqlalchemy.Integer), sqlalchemy.Column('needed_by', types.List), sqlalchemy.Column('requires', types.List), sqlalchemy.Column('replaces', sqlalchemy.Integer), sqlalchemy.Column('replaced_by', sqlalchemy.Integer), sqlalchemy.Column('current_template_id', sqlalchemy.Integer, sqlalchemy.ForeignKey('raw_template.id')), sqlalchemy.Column('properties_data_encrypted', sqlalchemy.Boolean, default=False), sqlalchemy.Column('root_stack_id', sqlalchemy.String(36)), sqlalchemy.Index('ix_resource_root_stack_id', 'root_stack_id', mysql_length=36), mysql_engine='InnoDB', mysql_charset='utf8' ) resource_data = sqlalchemy.Table( 'resource_data', meta, sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True, nullable=False), sqlalchemy.Column('created_at', sqlalchemy.DateTime), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), sqlalchemy.Column('key', sqlalchemy.String(255)), sqlalchemy.Column('value', sqlalchemy.Text), sqlalchemy.Column('redact', sqlalchemy.Boolean), sqlalchemy.Column('decrypt_method', sqlalchemy.String(length=64)), sqlalchemy.Column('resource_id', sqlalchemy.Integer, sqlalchemy.ForeignKey('resource.id'), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8' ) event = sqlalchemy.Table( 'event', meta, sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True, nullable=False), sqlalchemy.Column('uuid', sqlalchemy.String(36), default=lambda: str(uuid.uuid4()), unique=True), sqlalchemy.Column('stack_id', sqlalchemy.String(36), sqlalchemy.ForeignKey('stack.id'), nullable=False), sqlalchemy.Column('created_at', sqlalchemy.DateTime), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), sqlalchemy.Column('resource_action', sqlalchemy.String(255)), sqlalchemy.Column('resource_status', sqlalchemy.String(255)), sqlalchemy.Column('resource_name', 
sqlalchemy.String(255)), sqlalchemy.Column('physical_resource_id', sqlalchemy.String(255)), sqlalchemy.Column('resource_status_reason', sqlalchemy.String(255)), sqlalchemy.Column('resource_type', sqlalchemy.String(255)), sqlalchemy.Column('resource_properties', sqlalchemy.PickleType), mysql_engine='InnoDB', mysql_charset='utf8' ) watch_rule = sqlalchemy.Table( 'watch_rule', meta, sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True, nullable=False), sqlalchemy.Column('created_at', sqlalchemy.DateTime), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), sqlalchemy.Column('name', sqlalchemy.String(255)), sqlalchemy.Column('state', sqlalchemy.String(255)), sqlalchemy.Column('rule', types.LongText), sqlalchemy.Column('last_evaluated', sqlalchemy.DateTime), sqlalchemy.Column('stack_id', sqlalchemy.String(36), sqlalchemy.ForeignKey('stack.id'), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8' ) watch_data = sqlalchemy.Table( 'watch_data', meta, sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True, nullable=False), sqlalchemy.Column('created_at', sqlalchemy.DateTime), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), sqlalchemy.Column('data', types.LongText), sqlalchemy.Column('watch_rule_id', sqlalchemy.Integer, sqlalchemy.ForeignKey('watch_rule.id'), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8' ) stack_lock = sqlalchemy.Table( 'stack_lock', meta, sqlalchemy.Column('stack_id', sqlalchemy.String(length=36), sqlalchemy.ForeignKey('stack.id'), primary_key=True, nullable=False), sqlalchemy.Column('created_at', sqlalchemy.DateTime), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), sqlalchemy.Column('engine_id', sqlalchemy.String(length=36)), mysql_engine='InnoDB', mysql_charset='utf8' ) software_config = sqlalchemy.Table( 'software_config', meta, sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True, nullable=False), sqlalchemy.Column('created_at', sqlalchemy.DateTime), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), sqlalchemy.Column('name', sqlalchemy.String(255)), sqlalchemy.Column('group', sqlalchemy.String(255)), sqlalchemy.Column('config', types.LongText), sqlalchemy.Column('tenant', sqlalchemy.String(64), nullable=False, index=True), mysql_engine='InnoDB', mysql_charset='utf8' ) software_deployment = sqlalchemy.Table( 'software_deployment', meta, sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True, nullable=False), sqlalchemy.Column('created_at', sqlalchemy.DateTime, index=True), sqlalchemy.Column('updated_at', sqlalchemy.DateTime), sqlalchemy.Column('server_id', sqlalchemy.String(36), nullable=False, index=True), sqlalchemy.Column('config_id', sqlalchemy.String(36), sqlalchemy.ForeignKey('software_config.id'), nullable=False), sqlalchemy.Column('input_values', types.Json), sqlalchemy.Column('output_values', types.Json), sqlalchemy.Column('action', sqlalchemy.String(255)), sqlalchemy.Column('status', sqlalchemy.String(255)), sqlalchemy.Column('status_reason', types.LongText), sqlalchemy.Column('tenant', sqlalchemy.String(64), nullable=False, index=True), sqlalchemy.Column('stack_user_project_id', sqlalchemy.String(length=64)), mysql_engine='InnoDB', mysql_charset='utf8' ) snapshot = sqlalchemy.Table( 'snapshot', meta, sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True, nullable=False), sqlalchemy.Column('stack_id', sqlalchemy.String(36), sqlalchemy.ForeignKey('stack.id'), nullable=False), sqlalchemy.Column('name', sqlalchemy.String(255)), sqlalchemy.Column('created_at', sqlalchemy.DateTime), 
    snapshot = sqlalchemy.Table(
        'snapshot', meta,
        sqlalchemy.Column('id', sqlalchemy.String(36),
                          primary_key=True, nullable=False),
        sqlalchemy.Column('stack_id', sqlalchemy.String(36),
                          sqlalchemy.ForeignKey('stack.id'), nullable=False),
        sqlalchemy.Column('name', sqlalchemy.String(255)),
        sqlalchemy.Column('created_at', sqlalchemy.DateTime),
        sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
        sqlalchemy.Column('status', sqlalchemy.String(255)),
        sqlalchemy.Column('status_reason', sqlalchemy.String(255)),
        sqlalchemy.Column('data', types.Json),
        sqlalchemy.Column('tenant', sqlalchemy.String(64),
                          nullable=False, index=True),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )

    service = sqlalchemy.Table(
        'service', meta,
        sqlalchemy.Column('id', sqlalchemy.String(36),
                          primary_key=True,
                          default=lambda: str(uuid.uuid4())),
        sqlalchemy.Column('engine_id', sqlalchemy.String(36),
                          nullable=False),
        sqlalchemy.Column('host', sqlalchemy.String(255), nullable=False),
        sqlalchemy.Column('hostname', sqlalchemy.String(255),
                          nullable=False),
        sqlalchemy.Column('binary', sqlalchemy.String(255), nullable=False),
        sqlalchemy.Column('topic', sqlalchemy.String(255), nullable=False),
        sqlalchemy.Column('report_interval', sqlalchemy.Integer,
                          nullable=False),
        sqlalchemy.Column('created_at', sqlalchemy.DateTime),
        sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
        sqlalchemy.Column('deleted_at', sqlalchemy.DateTime),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )

    stack_tag = sqlalchemy.Table(
        'stack_tag', meta,
        sqlalchemy.Column('id', sqlalchemy.Integer,
                          primary_key=True, nullable=False),
        sqlalchemy.Column('created_at', sqlalchemy.DateTime),
        sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
        sqlalchemy.Column('tag', sqlalchemy.Unicode(80)),
        sqlalchemy.Column('stack_id', sqlalchemy.String(36),
                          sqlalchemy.ForeignKey('stack.id'), nullable=False),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )

    sync_point = sqlalchemy.Table(
        'sync_point', meta,
        sqlalchemy.Column('entity_id', sqlalchemy.String(36)),
        sqlalchemy.Column('traversal_id', sqlalchemy.String(36)),
        sqlalchemy.Column('is_update', sqlalchemy.Boolean),
        sqlalchemy.Column('atomic_key', sqlalchemy.Integer,
                          nullable=False),
        sqlalchemy.Column('stack_id', sqlalchemy.String(36),
                          nullable=False),
        sqlalchemy.Column('input_data', types.Json),
        sqlalchemy.Column('created_at', sqlalchemy.DateTime),
        sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
        sqlalchemy.PrimaryKeyConstraint('entity_id', 'traversal_id',
                                        'is_update'),
        sqlalchemy.ForeignKeyConstraint(['stack_id'], ['stack.id'],
                                        name='fk_stack_id'),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )

    tables = (
        raw_template,
        user_creds,
        stack,
        resource,
        resource_data,
        event,
        watch_rule,
        watch_data,
        stack_lock,
        software_config,
        software_deployment,
        snapshot,
        service,
        stack_tag,
        sync_point,
    )

    for index, table in enumerate(tables):
        try:
            table.create()
        except Exception:
            # If an error occurs, drop all tables created so far to return
            # to the previously existing state.
            meta.drop_all(tables=tables[:index])
            raise
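

# --- Illustrative sketch (not part of the original migration) ---
# The loop above creates the tables one at a time and, on any failure,
# drops everything created so far, so a failed initial migration leaves
# the schema untouched rather than half-applied. The same pattern in
# isolation, assuming `meta` and `tables` as defined above:

def _create_all_or_nothing(meta, tables):
    """Create `tables` in order; on failure drop the ones already made."""
    for index, table in enumerate(tables):
        try:
            table.create()
        except Exception:
            # return to the previously existing state before re-raising
            meta.drop_all(tables=tables[:index])
            raise
# --- end sketch ---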
heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/083_placeholder.py0000666000175000017500000000135413343562337025776 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# This is a placeholder for Pike backports.
# Do not use this number for new Queens work, which starts after
# all the placeholders.


def upgrade(migrate_engine):
    pass
heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/__init__.py0000666000175000017500000000000013343562337024644 0ustar zuulzuul00000000000000heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/080_resource_attrs_data.py0000666000175000017500000000237013343562337027545 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from migrate.changeset import constraint
import sqlalchemy


def upgrade(migrate_engine):
    meta = sqlalchemy.MetaData(bind=migrate_engine)

    resource = sqlalchemy.Table('resource', meta, autoload=True)
    resource_properties_data = sqlalchemy.Table('resource_properties_data',
                                                meta, autoload=True)
    attr_data_id = sqlalchemy.Column('attr_data_id', sqlalchemy.Integer)
    attr_data_id.create(resource)
    res_fkey = constraint.ForeignKeyConstraint(
        columns=[resource.c.attr_data_id],
        refcolumns=[resource_properties_data.c.id],
        name='rsrc_attr_data_ref')
    res_fkey.create()
heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/082_placeholder.py0000666000175000017500000000135413343562337025775 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# This is a placeholder for Pike backports.
# Do not use this number for new Queens work, which starts after
# all the placeholders.


def upgrade(migrate_engine):
    pass
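
# (Illustrative note, not part of the original file: the placeholder
# convention above appears to reserve migration numbers for stable-branch
# backports. A Pike backport can replace one of these no-op revisions
# with a real schema change, while new Queens work starts after the
# placeholders, keeping the migrate_version sequence consistent between
# branches.)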
heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/079_resource_properties_data.py0000666000175000017500000000421113343562337030610 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from migrate.changeset import constraint
import sqlalchemy

from heat.db.sqlalchemy import types


def upgrade(migrate_engine):
    meta = sqlalchemy.MetaData(bind=migrate_engine)

    resource_properties_data = sqlalchemy.Table(
        'resource_properties_data', meta,
        sqlalchemy.Column('id', sqlalchemy.Integer,
                          primary_key=True, nullable=False),
        sqlalchemy.Column('data', types.Json),
        sqlalchemy.Column('encrypted', sqlalchemy.Boolean),
        sqlalchemy.Column('created_at', sqlalchemy.DateTime),
        sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )
    resource_properties_data.create()

    resource = sqlalchemy.Table('resource', meta, autoload=True)
    rsrc_prop_data_id = sqlalchemy.Column('rsrc_prop_data_id',
                                          sqlalchemy.Integer)
    rsrc_prop_data_id.create(resource)
    res_fkey = constraint.ForeignKeyConstraint(
        columns=[resource.c.rsrc_prop_data_id],
        refcolumns=[resource_properties_data.c.id],
        name='rsrc_rsrc_prop_data_ref')
    res_fkey.create()

    event = sqlalchemy.Table('event', meta, autoload=True)
    rsrc_prop_data_id = sqlalchemy.Column('rsrc_prop_data_id',
                                          sqlalchemy.Integer)
    rsrc_prop_data_id.create(event)
    ev_fkey = constraint.ForeignKeyConstraint(
        columns=[event.c.rsrc_prop_data_id],
        refcolumns=[resource_properties_data.c.id],
        name='ev_rsrc_prop_data_ref')
    ev_fkey.create()
heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/075_placeholder.py0000666000175000017500000000135513343562337026000 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# This is a placeholder for Newton backports.
# Do not use this number for new Ocata work, which starts after
# all the placeholders.


def upgrade(migrate_engine):
    pass
heat-10.0.2/heat/db/sqlalchemy/migrate_repo/versions/078_placeholder.py0000666000175000017500000000135513343562337026003 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# This is a placeholder for Newton backports.
# Do not use this number for new Ocata work, which starts after
# all the placeholders.


def upgrade(migrate_engine):
    pass
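

# --- Illustrative sketch (not part of the original tree) ---
# Migrations 079 and 080 above share one sqlalchemy-migrate idiom:
# autoload the live table, create() the new column in place, then attach
# a *named* ForeignKeyConstraint so a later migration could drop it by
# name. Reduced to a minimal, hypothetical 'parent'/'child' schema:

from migrate.changeset import constraint
import sqlalchemy


def add_parent_ref(migrate_engine):
    meta = sqlalchemy.MetaData(bind=migrate_engine)
    parent = sqlalchemy.Table('parent', meta, autoload=True)
    child = sqlalchemy.Table('child', meta, autoload=True)

    # migrate.changeset patches Column with an in-place create()
    parent_id = sqlalchemy.Column('parent_id', sqlalchemy.Integer)
    parent_id.create(child)

    fkey = constraint.ForeignKeyConstraint(
        columns=[child.c.parent_id],
        refcolumns=[parent.c.id],
        name='child_parent_ref')
    fkey.create()
# --- end sketch ---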
heat-10.0.2/heat/db/sqlalchemy/migrate_repo/manage.py0000777000175000017500000000124013343562337022477 0ustar zuulzuul00000000000000#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from migrate.versioning.shell import main

if __name__ == '__main__':
    main(debug='False')
heat-10.0.2/heat/db/sqlalchemy/migrate_repo/__init__.py0000666000175000017500000000000013343562337022774 0ustar zuulzuul00000000000000heat-10.0.2/heat/db/sqlalchemy/migrate_repo/migrate.cfg0000666000175000017500000000231713343562337023011 0ustar zuulzuul00000000000000[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id=heat

# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table=migrate_version

# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs=[]

# When creating new change scripts, Migrate will stamp the new script with
# a version number. By default this is latest_version + 1. You can set this
# to 'true' to tell Migrate to use the UTC timestamp instead.
use_timestamp_numbering=False
heat-10.0.2/heat/db/sqlalchemy/migrate_repo/README0000666000175000017500000000017213343562337021555 0ustar zuulzuul00000000000000This is a database migration repository.

More information at
https://git.openstack.org/cgit/openstack/sqlalchemy-migrate
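
Illustrative usage (a sketch, not part of the original README): the
repository is driven through sqlalchemy-migrate's shell via the
manage.py alongside this file, roughly

  python manage.py version_control <database_url> <repository_path>
  python manage.py upgrade <database_url> <repository_path>

although deployments normally reach the same code through
`heat-manage db_sync`, which calls db_sync() in heat/db/sqlalchemy/api.py.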
heat-10.0.2/heat/db/sqlalchemy/utils.py0000666000175000017500000000562713343562337017734 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# SQLAlchemy helper functions

import sqlalchemy
from sqlalchemy.orm import exc
import tenacity


def clone_table(name, parent, meta, newcols=None, ignorecols=None,
                swapcols=None, ignorecons=None):
    """Helper function that clones parent table schema onto new table.

    :param name: new table name
    :param parent: parent table to copy schema from
    :param newcols: names of new columns to be added
    :param ignorecols: names of columns to be ignored while cloning
    :param swapcols: alternative column schema
    :param ignorecons: names of constraints to be ignored
    :return: sqlalchemy.Table instance
    """
    newcols = newcols or []
    ignorecols = ignorecols or []
    swapcols = swapcols or {}
    ignorecons = ignorecons or []

    cols = [c.copy() for c in parent.columns
            if c.name not in ignorecols
            if c.name not in swapcols]
    cols.extend(swapcols.values())
    cols.extend(newcols)

    new_table = sqlalchemy.Table(name, meta, *(cols))

    def _is_ignorable(cons):
        # consider constraints on columns only
        if hasattr(cons, 'columns'):
            for col in ignorecols:
                if col in cons.columns:
                    return True
        return False

    constraints = [c.copy() for c in parent.constraints
                   if c.name not in ignorecons
                   if not _is_ignorable(c)]

    for c in constraints:
        new_table.append_constraint(c)

    new_table.create()
    return new_table


def migrate_data(migrate_engine, table, new_table, skip_columns=None):
    table_name = table.name

    list_of_rows = list(table.select().execute())
    colnames = [c.name for c in table.columns]

    for row in list_of_rows:
        values = dict(zip(colnames,
                          map(lambda colname: getattr(row, colname),
                              colnames)))
        if skip_columns is not None:
            for column in skip_columns:
                del values[column]
        migrate_engine.execute(new_table.insert(values))

    table.drop()
    new_table.rename(table_name)


def retry_on_stale_data_error(func):
    wrapper = tenacity.retry(
        stop=tenacity.stop_after_attempt(3),
        retry=tenacity.retry_if_exception_type(exc.StaleDataError),
        reraise=True)
    return wrapper(func)
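

# --- Illustrative usage sketch (not part of the original module) ---
# clone_table() and migrate_data() are designed to be used together for
# copy-and-swap schema changes: clone the table minus (or plus) some
# columns, bulk-copy the rows, then migrate_data() drops the original
# and renames the clone into its place. A hypothetical migration
# dropping a 'legacy' column might do:
#
#     new_table = clone_table('resource_tmp', resource, meta,
#                             ignorecols=['legacy'])
#     migrate_data(migrate_engine, resource, new_table,
#                  skip_columns=['legacy'])
#
# retry_on_stale_data_error() is an ordinary decorator: api.py below
# applies it to user_creds_delete(), so a StaleDataError from two
# concurrent deleters is retried up to three times before re-raising.
# --- end sketch ---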
"""Implementation of SQLAlchemy backend.""" import datetime import itertools import random from oslo_config import cfg from oslo_db import api as oslo_db_api from oslo_db import exception as db_exception from oslo_db import options from oslo_db.sqlalchemy import enginefacade from oslo_db.sqlalchemy import utils from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import timeutils import osprofiler.sqlalchemy import six import sqlalchemy from sqlalchemy import and_ from sqlalchemy import func from sqlalchemy import or_ from sqlalchemy import orm from sqlalchemy.orm import aliased as orm_aliased from heat.common import crypt from heat.common import exception from heat.common.i18n import _ from heat.db.sqlalchemy import filters as db_filters from heat.db.sqlalchemy import migration from heat.db.sqlalchemy import models from heat.db.sqlalchemy import utils as db_utils from heat.engine import environment as heat_environment from heat.rpc import api as rpc_api CONF = cfg.CONF CONF.import_opt('hidden_stack_tags', 'heat.common.config') CONF.import_opt('max_events_per_stack', 'heat.common.config') CONF.import_group('profiler', 'heat.common.config') options.set_defaults(CONF) _facade = None db_context = enginefacade.transaction_context() LOG = logging.getLogger(__name__) # TODO(sbaker): fix tests so that sqlite_fk=True can be passed to configure db_context.configure() def get_facade(): global _facade if _facade is None: # FIXME: get_facade() is called by the test suite startup, # but will not be called normally for API calls. # osprofiler / oslo_db / enginefacade currently don't have hooks # to talk to each other, however one needs to be added to oslo.db # to allow access to the Engine once constructed. db_context.configure(**CONF.database) _facade = db_context.get_legacy_facade() if CONF.profiler.enabled: if CONF.profiler.trace_sqlalchemy: osprofiler.sqlalchemy.add_tracing(sqlalchemy, _facade.get_engine(), "db") return _facade def get_engine(): return get_facade().get_engine() def get_session(): return get_facade().get_session() def update_and_save(context, obj, values): with context.session.begin(subtransactions=True): for k, v in six.iteritems(values): setattr(obj, k, v) def delete_softly(context, obj): """Mark this object as deleted.""" update_and_save(context, obj, {'deleted_at': timeutils.utcnow()}) def soft_delete_aware_query(context, *args, **kwargs): """Stack query helper that accounts for context's `show_deleted` field. :param show_deleted: if True, overrides context's show_deleted field. 
""" query = context.session.query(*args) show_deleted = kwargs.get('show_deleted') or context.show_deleted if not show_deleted: query = query.filter_by(deleted_at=None) return query def raw_template_get(context, template_id): result = context.session.query(models.RawTemplate).get(template_id) if not result: raise exception.NotFound(_('raw template with id %s not found') % template_id) return result def raw_template_create(context, values): raw_template_ref = models.RawTemplate() raw_template_ref.update(values) raw_template_ref.save(context.session) return raw_template_ref def raw_template_update(context, template_id, values): raw_template_ref = raw_template_get(context, template_id) # get only the changed values values = dict((k, v) for k, v in values.items() if getattr(raw_template_ref, k) != v) if values: update_and_save(context, raw_template_ref, values) return raw_template_ref def raw_template_delete(context, template_id): raw_template = raw_template_get(context, template_id) raw_tmpl_files_id = raw_template.files_id session = context.session with session.begin(subtransactions=True): session.delete(raw_template) if raw_tmpl_files_id is None: return # If no other raw_template is referencing the same raw_template_files, # delete that too if session.query(models.RawTemplate).filter_by( files_id=raw_tmpl_files_id).first() is None: raw_tmpl_files = raw_template_files_get(context, raw_tmpl_files_id) session.delete(raw_tmpl_files) def raw_template_files_create(context, values): session = context.session raw_templ_files_ref = models.RawTemplateFiles() raw_templ_files_ref.update(values) with session.begin(): raw_templ_files_ref.save(session) return raw_templ_files_ref def raw_template_files_get(context, files_id): result = context.session.query(models.RawTemplateFiles).get(files_id) if not result: raise exception.NotFound( _("raw_template_files with files_id %d not found") % files_id) return result def resource_get(context, resource_id, refresh=False, refresh_data=False, eager=True): query = context.session.query(models.Resource) query = query.options(orm.joinedload("data")) if eager: query = query.options(orm.joinedload("rsrc_prop_data")) result = query.get(resource_id) if not result: raise exception.NotFound(_("resource with id %s not found") % resource_id) if refresh: context.session.refresh(result) if refresh_data: # ensure data is loaded (lazy or otherwise) result.data return result def resource_get_by_name_and_stack(context, resource_name, stack_id): result = context.session.query( models.Resource ).filter_by( name=resource_name ).filter_by( stack_id=stack_id ).options(orm.joinedload("data")).first() return result def resource_get_all_by_physical_resource_id(context, physical_resource_id): results = (context.session.query(models.Resource) .filter_by(physical_resource_id=physical_resource_id) .all()) for result in results: if context is None or context.is_admin or context.tenant_id in ( result.stack.tenant, result.stack.stack_user_project_id): yield result def resource_get_by_physical_resource_id(context, physical_resource_id): results = resource_get_all_by_physical_resource_id(context, physical_resource_id) try: return next(results) except StopIteration: return None def resource_get_all(context): results = context.session.query(models.Resource).all() if not results: raise exception.NotFound(_('no resources were found')) return results def resource_purge_deleted(context, stack_id): filters = {'stack_id': stack_id, 'action': 'DELETE', 'status': 'COMPLETE'} query = 
context.session.query(models.Resource) result = query.filter_by(**filters) attr_ids = [r.attr_data_id for r in result if r.attr_data_id is not None] with context.session.begin(subtransactions=True): result.delete() if attr_ids: context.session.query(models.ResourcePropertiesData).filter( models.ResourcePropertiesData.id.in_(attr_ids)).delete( synchronize_session=False) def _add_atomic_key_to_values(values, atomic_key): if atomic_key is None: values['atomic_key'] = 1 else: values['atomic_key'] = atomic_key + 1 @oslo_db_api.wrap_db_retry(max_retries=3, retry_on_deadlock=True, retry_interval=0.5, inc_retry_interval=True) def resource_update(context, resource_id, values, atomic_key, expected_engine_id=None): session = context.session with session.begin(subtransactions=True): _add_atomic_key_to_values(values, atomic_key) rows_updated = session.query(models.Resource).filter_by( id=resource_id, engine_id=expected_engine_id, atomic_key=atomic_key).update(values) return bool(rows_updated) def resource_update_and_save(context, resource_id, values): resource = context.session.query(models.Resource).get(resource_id) update_and_save(context, resource, values) def resource_delete(context, resource_id): session = context.session with session.begin(subtransactions=True): resource = session.query(models.Resource).get(resource_id) if resource: session.delete(resource) if resource.attr_data_id is not None: attr_prop_data = session.query( models.ResourcePropertiesData).get(resource.attr_data_id) session.delete(attr_prop_data) def resource_attr_id_set(context, resource_id, atomic_key, attr_id): session = context.session with session.begin(subtransactions=True): values = {'attr_data_id': attr_id} _add_atomic_key_to_values(values, atomic_key) rows_updated = session.query(models.Resource).filter(and_( models.Resource.id == resource_id, models.Resource.atomic_key == atomic_key, models.Resource.engine_id.is_(None), or_(models.Resource.attr_data_id == attr_id, models.Resource.attr_data_id.is_(None)))).update( values) if rows_updated > 0: return True else: # Someone else set the attr_id first and/or we have a stale # view of the resource based on atomic_key, so delete the # resource_properties_data (attr) db row. LOG.debug('Not updating res_id %(rid)s with attr_id %(aid)s', {'rid': resource_id, 'aid': attr_id}) session.query( models.ResourcePropertiesData).filter( models.ResourcePropertiesData.id == attr_id).delete() return False def resource_attr_data_delete(context, resource_id, attr_id): session = context.session with session.begin(subtransactions=True): resource = session.query(models.Resource).get(resource_id) attr_prop_data = session.query( models.ResourcePropertiesData).get(attr_id) if resource: resource.update({'attr_data_id': None}) if attr_prop_data: session.delete(attr_prop_data) def resource_data_get_all(context, resource_id, data=None): """Looks up resource_data by resource.id. If data is encrypted, this method will decrypt the results. """ if data is None: data = (context.session.query(models.ResourceData) .filter_by(resource_id=resource_id)).all() if not data: raise exception.NotFound(_('no resource data found')) ret = {} for res in data: if res.redact: ret[res.key] = crypt.decrypt(res.decrypt_method, res.value) else: ret[res.key] = res.value return ret def resource_data_get(context, resource_id, key): """Lookup value of resource's data by key. Decrypts resource data if necessary. 
""" result = resource_data_get_by_key(context, resource_id, key) if result.redact: return crypt.decrypt(result.decrypt_method, result.value) return result.value def stack_tags_set(context, stack_id, tags): session = context.session with session.begin(): stack_tags_delete(context, stack_id) result = [] for tag in tags: stack_tag = models.StackTag() stack_tag.tag = tag stack_tag.stack_id = stack_id stack_tag.save(session=session) result.append(stack_tag) return result or None def stack_tags_delete(context, stack_id): session = context.session with session.begin(subtransactions=True): result = stack_tags_get(context, stack_id) if result: for tag in result: session.delete(tag) def stack_tags_get(context, stack_id): result = (context.session.query(models.StackTag) .filter_by(stack_id=stack_id) .all()) return result or None def resource_data_get_by_key(context, resource_id, key): """Looks up resource_data by resource_id and key. Does not decrypt resource_data. """ result = (context.session.query(models.ResourceData) .filter_by(resource_id=resource_id) .filter_by(key=key).first()) if not result: raise exception.NotFound(_('No resource data found')) return result def resource_data_set(context, resource_id, key, value, redact=False): """Save resource's key/value pair to database.""" if redact: method, value = crypt.encrypt(value) else: method = '' try: current = resource_data_get_by_key(context, resource_id, key) except exception.NotFound: current = models.ResourceData() current.key = key current.resource_id = resource_id current.redact = redact current.value = value current.decrypt_method = method current.save(session=context.session) return current def resource_exchange_stacks(context, resource_id1, resource_id2): query = context.session.query(models.Resource) session = query.session with session.begin(): res1 = query.get(resource_id1) res2 = query.get(resource_id2) res1.stack, res2.stack = res2.stack, res1.stack def resource_data_delete(context, resource_id, key): result = resource_data_get_by_key(context, resource_id, key) session = context.session with session.begin(subtransactions=True): session.delete(result) def resource_create(context, values): resource_ref = models.Resource() resource_ref.update(values) resource_ref.save(context.session) return resource_ref def resource_create_replacement(context, existing_res_id, existing_res_values, new_res_values, atomic_key, expected_engine_id=None): session = context.session try: with session.begin(subtransactions=True): new_res = resource_create(context, new_res_values) update_data = {'replaced_by': new_res.id} update_data.update(existing_res_values) if not resource_update(context, existing_res_id, update_data, atomic_key, expected_engine_id=expected_engine_id): data = {} if 'name' in new_res_values: data['resource_name'] = new_res_values['name'] raise exception.UpdateInProgress(**data) except db_exception.DBReferenceError as exc: # New template_id no longer exists LOG.debug('Not creating replacement resource: %s', exc) return None else: return new_res def resource_get_all_by_stack(context, stack_id, filters=None): query = context.session.query( models.Resource ).filter_by( stack_id=stack_id ).options(orm.joinedload("data")).options(orm.joinedload("rsrc_prop_data")) query = db_filters.exact_filter(query, models.Resource, filters) results = query.all() return dict((res.name, res) for res in results) def resource_get_all_active_by_stack(context, stack_id): filters = {'stack_id': stack_id, 'action': 'DELETE', 'status': 'COMPLETE'} subquery = 
        context.session.query(models.Resource.id).filter_by(**filters)

    results = context.session.query(models.Resource).filter_by(
        stack_id=stack_id).filter(
            models.Resource.id.notin_(subquery.as_scalar())
    ).options(orm.joinedload("data")).all()

    return dict((res.id, res) for res in results)


def resource_get_all_by_root_stack(context, stack_id, filters=None,
                                   stack_id_only=False):
    query = context.session.query(
        models.Resource
    ).filter_by(
        root_stack_id=stack_id
    )

    if stack_id_only:
        query = query.options(orm.load_only("id", "stack_id"))
    else:
        query = query.options(orm.joinedload("data")).options(
            orm.joinedload("rsrc_prop_data"))

    query = db_filters.exact_filter(query, models.Resource, filters)
    results = query.all()

    return dict((res.id, res) for res in results)


def engine_get_all_locked_by_stack(context, stack_id):
    query = context.session.query(
        func.distinct(models.Resource.engine_id)
    ).filter(models.Resource.stack_id == stack_id,
             models.Resource.engine_id.isnot(None))
    return set(i[0] for i in query.all())


def resource_prop_data_create_or_update(context, values, rpd_id=None):
    obj_ref = None
    if rpd_id is not None:
        obj_ref = context.session.query(
            models.ResourcePropertiesData).filter_by(id=rpd_id).first()
    if obj_ref is None:
        obj_ref = models.ResourcePropertiesData()
    obj_ref.update(values)
    obj_ref.save(context.session)
    return obj_ref


def resource_prop_data_create(context, values):
    return resource_prop_data_create_or_update(context, values)


def resource_prop_data_get(context, resource_prop_data_id):
    result = context.session.query(models.ResourcePropertiesData).get(
        resource_prop_data_id)
    if result is None:
        raise exception.NotFound(
            _('ResourcePropertiesData with id %s not found')
            % resource_prop_data_id)
    return result


def stack_get_by_name_and_owner_id(context, stack_name, owner_id):
    query = soft_delete_aware_query(
        context, models.Stack
    ).options(orm.joinedload("raw_template")).filter(sqlalchemy.or_(
        models.Stack.tenant == context.tenant_id,
        models.Stack.stack_user_project_id == context.tenant_id)
    ).filter_by(name=stack_name).filter_by(owner_id=owner_id)
    return query.first()


def stack_get_by_name(context, stack_name):
    query = soft_delete_aware_query(
        context, models.Stack
    ).filter(sqlalchemy.or_(
        models.Stack.tenant == context.tenant_id,
        models.Stack.stack_user_project_id == context.tenant_id)
    ).filter_by(name=stack_name)
    return query.first()


def stack_get(context, stack_id, show_deleted=False, eager_load=True):
    query = context.session.query(models.Stack)
    if eager_load:
        query = query.options(orm.joinedload("raw_template"))
    result = query.get(stack_id)

    deleted_ok = show_deleted or context.show_deleted
    if result is None or result.deleted_at is not None and not deleted_ok:
        return None

    # One exception to normal project scoping is users created by the
    # stacks in the stack_user_project_id (in the heat stack user domain)
    if (result is not None
            and context is not None and not context.is_admin
            and context.tenant_id not in (result.tenant,
                                          result.stack_user_project_id)):
        return None

    return result


def stack_get_status(context, stack_id):
    query = context.session.query(models.Stack)
    query = query.options(
        orm.load_only("action", "status", "status_reason", "updated_at"))
    result = query.filter_by(id=stack_id).first()
    if result is None:
        raise exception.NotFound(_('Stack with id %s not found') % stack_id)

    return (result.action, result.status, result.status_reason,
            result.updated_at)


def stack_get_all_by_owner_id(context, owner_id):
    results = soft_delete_aware_query(
        context, models.Stack).filter_by(owner_id=owner_id,
                                         backup=False).all()
return results def stack_get_all_by_root_owner_id(context, owner_id): for stack in stack_get_all_by_owner_id(context, owner_id): yield stack for ch_st in stack_get_all_by_root_owner_id(context, stack.id): yield ch_st def _get_sort_keys(sort_keys, mapping): """Returns an array containing only whitelisted keys :param sort_keys: an array of strings :param mapping: a mapping from keys to DB column names :returns: filtered list of sort keys """ if isinstance(sort_keys, six.string_types): sort_keys = [sort_keys] return [mapping[key] for key in sort_keys or [] if key in mapping] def _paginate_query(context, query, model, limit=None, sort_keys=None, marker=None, sort_dir=None): default_sort_keys = ['created_at'] if not sort_keys: sort_keys = default_sort_keys if not sort_dir: sort_dir = 'desc' # This assures the order of the stacks will always be the same # even for sort_key values that are not unique in the database sort_keys = sort_keys + ['id'] model_marker = None if marker: model_marker = context.session.query(model).get(marker) try: query = utils.paginate_query(query, model, limit, sort_keys, model_marker, sort_dir) except utils.InvalidSortKey as exc: err_msg = encodeutils.exception_to_unicode(exc) raise exception.Invalid(reason=err_msg) return query def _query_stack_get_all(context, show_deleted=False, show_nested=False, show_hidden=False, tags=None, tags_any=None, not_tags=None, not_tags_any=None): if show_nested: query = soft_delete_aware_query( context, models.Stack, show_deleted=show_deleted ).filter_by(backup=False) else: query = soft_delete_aware_query( context, models.Stack, show_deleted=show_deleted ).filter_by(owner_id=None) if not context.is_admin: query = query.filter_by(tenant=context.tenant_id) query = query.options(orm.subqueryload("tags")) if tags: for tag in tags: tag_alias = orm_aliased(models.StackTag) query = query.join(tag_alias, models.Stack.tags) query = query.filter(tag_alias.tag == tag) if tags_any: query = query.filter( models.Stack.tags.any( models.StackTag.tag.in_(tags_any))) if not_tags: subquery = soft_delete_aware_query( context, models.Stack, show_deleted=show_deleted ) for tag in not_tags: tag_alias = orm_aliased(models.StackTag) subquery = subquery.join(tag_alias, models.Stack.tags) subquery = subquery.filter(tag_alias.tag == tag) not_stack_ids = [s.id for s in subquery.all()] query = query.filter(models.Stack.id.notin_(not_stack_ids)) if not_tags_any: query = query.filter( ~models.Stack.tags.any( models.StackTag.tag.in_(not_tags_any))) if not show_hidden and cfg.CONF.hidden_stack_tags: query = query.filter( ~models.Stack.tags.any( models.StackTag.tag.in_(cfg.CONF.hidden_stack_tags))) return query def stack_get_all(context, limit=None, sort_keys=None, marker=None, sort_dir=None, filters=None, show_deleted=False, show_nested=False, show_hidden=False, tags=None, tags_any=None, not_tags=None, not_tags_any=None, eager_load=False): query = _query_stack_get_all(context, show_deleted=show_deleted, show_nested=show_nested, show_hidden=show_hidden, tags=tags, tags_any=tags_any, not_tags=not_tags, not_tags_any=not_tags_any) if eager_load: query = query.options(orm.joinedload("raw_template")) return _filter_and_page_query(context, query, limit, sort_keys, marker, sort_dir, filters).all() def _filter_and_page_query(context, query, limit=None, sort_keys=None, marker=None, sort_dir=None, filters=None): if filters is None: filters = {} sort_key_map = {rpc_api.STACK_NAME: models.Stack.name.key, rpc_api.STACK_STATUS: models.Stack.status.key, rpc_api.STACK_CREATION_TIME: 
models.Stack.created_at.key, rpc_api.STACK_UPDATED_TIME: models.Stack.updated_at.key} whitelisted_sort_keys = _get_sort_keys(sort_keys, sort_key_map) query = db_filters.exact_filter(query, models.Stack, filters) return _paginate_query(context, query, models.Stack, limit, whitelisted_sort_keys, marker, sort_dir) def stack_count_all(context, filters=None, show_deleted=False, show_nested=False, show_hidden=False, tags=None, tags_any=None, not_tags=None, not_tags_any=None): query = _query_stack_get_all(context, show_deleted=show_deleted, show_nested=show_nested, show_hidden=show_hidden, tags=tags, tags_any=tags_any, not_tags=not_tags, not_tags_any=not_tags_any) query = db_filters.exact_filter(query, models.Stack, filters) return query.count() def stack_create(context, values): stack_ref = models.Stack() stack_ref.update(values) stack_ref.save(context.session) return stack_ref @oslo_db_api.wrap_db_retry(max_retries=3, retry_on_deadlock=True, retry_interval=0.5, inc_retry_interval=True) def stack_update(context, stack_id, values, exp_trvsl=None): session = context.session with session.begin(subtransactions=True): query = (session.query(models.Stack) .filter(and_(models.Stack.id == stack_id), (models.Stack.deleted_at.is_(None)))) if not context.is_admin: query = query.filter(sqlalchemy.or_( models.Stack.tenant == context.tenant_id, models.Stack.stack_user_project_id == context.tenant_id)) if exp_trvsl is not None: query = query.filter(models.Stack.current_traversal == exp_trvsl) rows_updated = query.update(values, synchronize_session=False) if not rows_updated: LOG.debug('Did not set stack state with values ' '%(vals)s, stack id: %(id)s with ' 'expected traversal: %(trav)s', {'id': stack_id, 'vals': str(values), 'trav': str(exp_trvsl)}) if not stack_get(context, stack_id, eager_load=False): raise exception.NotFound( _('Attempt to update a stack with id: ' '%(id)s %(msg)s') % { 'id': stack_id, 'msg': 'that does not exist'}) session.expire_all() return (rows_updated is not None and rows_updated > 0) def stack_delete(context, stack_id): s = stack_get(context, stack_id, eager_load=False) if not s: raise exception.NotFound(_('Attempt to delete a stack with id: ' '%(id)s %(msg)s') % { 'id': stack_id, 'msg': 'that does not exist'}) session = context.session with session.begin(): attr_ids = [] # normally the resources are deleted already by this point for r in s.resources: if r.attr_data_id is not None: attr_ids.append(r.attr_data_id) session.delete(r) if attr_ids: session.query( models.ResourcePropertiesData.id).filter( models.ResourcePropertiesData.id.in_(attr_ids)).delete( synchronize_session=False) delete_softly(context, s) def _is_duplicate_error(exc): return isinstance(exc, db_exception.DBDuplicateEntry) @oslo_db_api.wrap_db_retry(max_retries=3, retry_on_deadlock=True, retry_interval=0.5, inc_retry_interval=True, exception_checker=_is_duplicate_error) def stack_lock_create(context, stack_id, engine_id): with db_context.writer.independent.using(context) as session: lock = session.query(models.StackLock).get(stack_id) if lock is not None: return lock.engine_id session.add(models.StackLock(stack_id=stack_id, engine_id=engine_id)) def stack_lock_get_engine_id(context, stack_id): with db_context.reader.independent.using(context) as session: lock = session.query(models.StackLock).get(stack_id) if lock is not None: return lock.engine_id def persist_state_and_release_lock(context, stack_id, engine_id, values): session = context.session with session.begin(): rows_updated = (session.query(models.Stack) 
.filter(models.Stack.id == stack_id) .update(values, synchronize_session=False)) rows_affected = None if rows_updated is not None and rows_updated > 0: rows_affected = session.query( models.StackLock ).filter_by(stack_id=stack_id, engine_id=engine_id).delete() session.expire_all() if not rows_affected: return True def stack_lock_steal(context, stack_id, old_engine_id, new_engine_id): with db_context.writer.independent.using(context) as session: lock = session.query(models.StackLock).get(stack_id) rows_affected = session.query( models.StackLock ).filter_by(stack_id=stack_id, engine_id=old_engine_id ).update({"engine_id": new_engine_id}) if not rows_affected: return lock.engine_id if lock is not None else True def stack_lock_release(context, stack_id, engine_id): with db_context.writer.independent.using(context) as session: rows_affected = session.query( models.StackLock ).filter_by(stack_id=stack_id, engine_id=engine_id).delete() if not rows_affected: return True def stack_get_root_id(context, stack_id): s = stack_get(context, stack_id, eager_load=False) if not s: return None while s.owner_id: s = stack_get(context, s.owner_id, eager_load=False) return s.id def stack_count_total_resources(context, stack_id): # count all resources which belong to the root stack return context.session.query( func.count(models.Resource.id) ).filter_by(root_stack_id=stack_id).scalar() def user_creds_create(context): values = context.to_dict() user_creds_ref = models.UserCreds() if values.get('trust_id'): method, trust_id = crypt.encrypt(values.get('trust_id')) user_creds_ref.trust_id = trust_id user_creds_ref.decrypt_method = method user_creds_ref.trustor_user_id = values.get('trustor_user_id') user_creds_ref.username = None user_creds_ref.password = None user_creds_ref.tenant = values.get('tenant') user_creds_ref.tenant_id = values.get('tenant_id') user_creds_ref.auth_url = values.get('auth_url') user_creds_ref.region_name = values.get('region_name') else: user_creds_ref.update(values) method, password = crypt.encrypt(values['password']) if len(six.text_type(password)) > 255: raise exception.Error(_("Length of OS_PASSWORD after encryption" " exceeds Heat limit (255 chars)")) user_creds_ref.password = password user_creds_ref.decrypt_method = method user_creds_ref.save(context.session) result = dict(user_creds_ref) if values.get('trust_id'): result['trust_id'] = values.get('trust_id') else: result['password'] = values.get('password') return result def user_creds_get(context, user_creds_id): db_result = context.session.query(models.UserCreds).get(user_creds_id) if db_result is None: return None # Return a dict copy of db results, do not decrypt details into db_result # or it can be committed back to the DB in decrypted form result = dict(db_result) del result['decrypt_method'] result['password'] = crypt.decrypt( db_result.decrypt_method, result['password']) result['trust_id'] = crypt.decrypt( db_result.decrypt_method, result['trust_id']) return result @db_utils.retry_on_stale_data_error def user_creds_delete(context, user_creds_id): creds = context.session.query(models.UserCreds).get(user_creds_id) if not creds: raise exception.NotFound( _('Attempt to delete user creds with id ' '%(id)s that does not exist') % {'id': user_creds_id}) with context.session.begin(): context.session.delete(creds) def event_get_all_by_tenant(context, limit=None, marker=None, sort_keys=None, sort_dir=None, filters=None): query = context.session.query(models.Event) query = db_filters.exact_filter(query, models.Event, filters) query = 
query.join( models.Event.stack ).filter_by(tenant=context.tenant_id).filter_by(deleted_at=None) filters = None return _events_filter_and_page_query(context, query, limit, marker, sort_keys, sort_dir, filters).all() def _query_all_by_stack(context, stack_id): query = context.session.query(models.Event).filter_by(stack_id=stack_id) return query def event_get_all_by_stack(context, stack_id, limit=None, marker=None, sort_keys=None, sort_dir=None, filters=None): query = _query_all_by_stack(context, stack_id) if filters and 'uuid' in filters: # retrieving a single event, so eager load its rsrc_prop_data detail query = query.options(orm.joinedload("rsrc_prop_data")) return _events_filter_and_page_query(context, query, limit, marker, sort_keys, sort_dir, filters).all() def _events_paginate_query(context, query, model, limit=None, sort_keys=None, marker=None, sort_dir=None): default_sort_keys = ['created_at'] if not sort_keys: sort_keys = default_sort_keys if not sort_dir: sort_dir = 'desc' # This assures the order of the stacks will always be the same # even for sort_key values that are not unique in the database sort_keys = sort_keys + ['id'] model_marker = None if marker: # not to use context.session.query(model).get(marker), because # user can only see the ID(column 'uuid') and the ID as the marker model_marker = context.session.query( model).filter_by(uuid=marker).first() try: query = utils.paginate_query(query, model, limit, sort_keys, model_marker, sort_dir) except utils.InvalidSortKey as exc: err_msg = encodeutils.exception_to_unicode(exc) raise exception.Invalid(reason=err_msg) return query def _events_filter_and_page_query(context, query, limit=None, marker=None, sort_keys=None, sort_dir=None, filters=None): if filters is None: filters = {} sort_key_map = {rpc_api.EVENT_TIMESTAMP: models.Event.created_at.key, rpc_api.EVENT_RES_TYPE: models.Event.resource_type.key} whitelisted_sort_keys = _get_sort_keys(sort_keys, sort_key_map) query = db_filters.exact_filter(query, models.Event, filters) return _events_paginate_query(context, query, models.Event, limit, whitelisted_sort_keys, marker, sort_dir) def event_count_all_by_stack(context, stack_id): query = context.session.query(func.count(models.Event.id)) return query.filter_by(stack_id=stack_id).scalar() def _delete_event_rows(context, stack_id, limit): # MySQL does not support LIMIT in subqueries, # sqlite does not support JOIN in DELETE. # So we must manually supply the IN() values. # pgsql SHOULD work with the pure DELETE/JOIN below but that must be # confirmed via integration tests. 
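    # (Illustrative aside, not part of the original function: the
    # portable shape used below is "SELECT the candidate ids with
    # ORDER BY/LIMIT first, then DELETE ... WHERE id <= max_id". For a
    # hypothetical Log model the same two-step pattern would be:
    #
    #     ids = [r.id for r in
    #            session.query(Log.id).order_by(Log.id).limit(limit)]
    #     if ids:
    #         session.query(Log).filter(Log.id <= ids[-1]).delete()
    #
    # which sidesteps both MySQL's LIMIT-in-subquery restriction and
    # sqlite's lack of DELETE ... JOIN.)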
session = context.session with session.begin(): query = _query_all_by_stack(context, stack_id) query = query.order_by(models.Event.id).limit(limit) id_pairs = [(e.id, e.rsrc_prop_data_id) for e in query.all()] if not id_pairs: return 0 (ids, rsrc_prop_ids) = zip(*id_pairs) max_id = ids[-1] # delete the events retval = session.query(models.Event.id).filter( models.Event.id <= max_id).filter( models.Event.stack_id == stack_id).delete() # delete unreferenced resource_properties_data if rsrc_prop_ids: ev_ref_ids = set(e.rsrc_prop_data_id for e in _query_all_by_stack(context, stack_id).all()) rsrc_ref_ids = set(r.rsrc_prop_data_id for r in session.query(models.Resource).filter_by( stack_id=stack_id).all()) clr_prop_ids = set(rsrc_prop_ids) - ev_ref_ids - rsrc_ref_ids q_rpd = session.query(models.ResourcePropertiesData.id).filter( models.ResourcePropertiesData.id.in_(clr_prop_ids)) q_rpd.delete(synchronize_session=False) return retval def event_create(context, values): if 'stack_id' in values and cfg.CONF.max_events_per_stack: # only count events and purge on average # 200.0/cfg.CONF.event_purge_batch_size percent of the time. check = (2.0 / cfg.CONF.event_purge_batch_size) > random.uniform(0, 1) if (check and (event_count_all_by_stack(context, values['stack_id']) >= cfg.CONF.max_events_per_stack)): # prune try: _delete_event_rows(context, values['stack_id'], cfg.CONF.event_purge_batch_size) except db_exception.DBError as exc: LOG.error('Failed to purge events: %s', six.text_type(exc)) event_ref = models.Event() event_ref.update(values) event_ref.save(context.session) return event_ref def watch_rule_get(context, watch_rule_id): result = context.session.query(models.WatchRule).get(watch_rule_id) return result def watch_rule_get_by_name(context, watch_rule_name): result = context.session.query( models.WatchRule).filter_by(name=watch_rule_name).first() return result def watch_rule_get_all(context): results = context.session.query(models.WatchRule).all() return results def watch_rule_get_all_by_stack(context, stack_id): results = context.session.query( models.WatchRule).filter_by(stack_id=stack_id).all() return results def watch_rule_create(context, values): obj_ref = models.WatchRule() obj_ref.update(values) obj_ref.save(context.session) return obj_ref def watch_rule_update(context, watch_id, values): wr = watch_rule_get(context, watch_id) if not wr: raise exception.NotFound(_('Attempt to update a watch with id: ' '%(id)s %(msg)s') % { 'id': watch_id, 'msg': 'that does not exist'}) wr.update(values) wr.save(context.session) def watch_rule_delete(context, watch_id): wr = watch_rule_get(context, watch_id) if not wr: raise exception.NotFound(_('Attempt to delete watch_rule: ' '%(id)s %(msg)s') % { 'id': watch_id, 'msg': 'that does not exist'}) with context.session.begin(): for d in wr.watch_data: context.session.delete(d) context.session.delete(wr) def watch_data_create(context, values): obj_ref = models.WatchData() obj_ref.update(values) obj_ref.save(context.session) return obj_ref def watch_data_get_all(context): results = context.session.query(models.WatchData).all() return results def watch_data_get_all_by_watch_rule_id(context, watch_rule_id): results = context.session.query(models.WatchData).filter_by( watch_rule_id=watch_rule_id).all() return results def software_config_create(context, values): obj_ref = models.SoftwareConfig() obj_ref.update(values) obj_ref.save(context.session) return obj_ref def software_config_get(context, config_id): result = 
context.session.query(models.SoftwareConfig).get(config_id) if (result is not None and context is not None and result.tenant != context.tenant_id): result = None if not result: raise exception.NotFound(_('Software config with id %s not found') % config_id) return result def software_config_get_all(context, limit=None, marker=None): query = context.session.query(models.SoftwareConfig) if not context.is_admin: query = query.filter_by(tenant=context.tenant_id) return _paginate_query(context, query, models.SoftwareConfig, limit=limit, marker=marker).all() def software_config_delete(context, config_id): config = software_config_get(context, config_id) # Query if the software config has been referenced by deployment. result = context.session.query(models.SoftwareDeployment).filter_by( config_id=config_id).first() if result: msg = (_("Software config with id %s can not be deleted as " "it is referenced.") % config_id) raise exception.InvalidRestrictedAction(message=msg) with context.session.begin(): context.session.delete(config) def software_deployment_create(context, values): obj_ref = models.SoftwareDeployment() obj_ref.update(values) session = context.session with session.begin(): obj_ref.save(session) return obj_ref def software_deployment_get(context, deployment_id): result = context.session.query( models.SoftwareDeployment).get(deployment_id) if (result is not None and context is not None and context.tenant_id not in (result.tenant, result.stack_user_project_id)): result = None if not result: raise exception.NotFound(_('Deployment with id %s not found') % deployment_id) return result def software_deployment_get_all(context, server_id=None): sd = models.SoftwareDeployment query = context.session.query( sd ).filter(sqlalchemy.or_( sd.tenant == context.tenant_id, sd.stack_user_project_id == context.tenant_id) ).order_by(sd.created_at) if server_id: query = query.filter_by(server_id=server_id) return query.all() def software_deployment_update(context, deployment_id, values): deployment = software_deployment_get(context, deployment_id) update_and_save(context, deployment, values) return deployment def software_deployment_delete(context, deployment_id): deployment = software_deployment_get(context, deployment_id) session = context.session with session.begin(subtransactions=True): session.delete(deployment) def snapshot_create(context, values): obj_ref = models.Snapshot() obj_ref.update(values) obj_ref.save(context.session) return obj_ref def snapshot_get(context, snapshot_id): result = context.session.query(models.Snapshot).get(snapshot_id) if (result is not None and context is not None and context.tenant_id != result.tenant): result = None if not result: raise exception.NotFound(_('Snapshot with id %s not found') % snapshot_id) return result def snapshot_get_by_stack(context, snapshot_id, stack): snapshot = snapshot_get(context, snapshot_id) if snapshot.stack_id != stack.id: raise exception.SnapshotNotFound(snapshot=snapshot_id, stack=stack.name) return snapshot def snapshot_update(context, snapshot_id, values): snapshot = snapshot_get(context, snapshot_id) snapshot.update(values) snapshot.save(context.session) return snapshot def snapshot_delete(context, snapshot_id): snapshot = snapshot_get(context, snapshot_id) with context.session.begin(): context.session.delete(snapshot) def snapshot_get_all(context, stack_id): return context.session.query(models.Snapshot).filter_by( stack_id=stack_id, tenant=context.tenant_id) def service_create(context, values): service = models.Service() 
service.update(values) service.save(context.session) return service def service_update(context, service_id, values): service = service_get(context, service_id) values.update({'updated_at': timeutils.utcnow()}) service.update(values) service.save(context.session) return service def service_delete(context, service_id, soft_delete=True): service = service_get(context, service_id) session = context.session with session.begin(): if soft_delete: delete_softly(context, service) else: session.delete(service) def service_get(context, service_id): result = context.session.query(models.Service).get(service_id) if result is None: raise exception.EntityNotFound(entity='Service', name=service_id) return result def service_get_all(context): return (context.session.query(models.Service). filter_by(deleted_at=None).all()) def service_get_all_by_args(context, host, binary, hostname): return (context.session.query(models.Service). filter_by(host=host). filter_by(binary=binary). filter_by(hostname=hostname).all()) def purge_deleted(age, granularity='days', project_id=None, batch_size=20): def _validate_positive_integer(val, argname): try: val = int(val) except ValueError: raise exception.Error(_("%s should be an integer") % argname) if val < 0: raise exception.Error(_("%s should be a positive integer") % argname) return val age = _validate_positive_integer(age, 'age') batch_size = _validate_positive_integer(batch_size, 'batch_size') if granularity not in ('days', 'hours', 'minutes', 'seconds'): raise exception.Error( _("granularity should be days, hours, minutes, or seconds")) if granularity == 'days': age = age * 86400 elif granularity == 'hours': age = age * 3600 elif granularity == 'minutes': age = age * 60 time_line = timeutils.utcnow() - datetime.timedelta(seconds=age) engine = get_engine() meta = sqlalchemy.MetaData() meta.bind = engine stack = sqlalchemy.Table('stack', meta, autoload=True) service = sqlalchemy.Table('service', meta, autoload=True) # Purge deleted services srvc_del = service.delete().where(service.c.deleted_at < time_line) engine.execute(srvc_del) # find the soft-deleted stacks that are past their expiry sel = sqlalchemy.select([stack.c.id, stack.c.raw_template_id, stack.c.prev_raw_template_id, stack.c.user_creds_id, stack.c.action, stack.c.status, stack.c.name]) if project_id: stack_where = sel.where(and_( stack.c.tenant == project_id, stack.c.deleted_at < time_line)) else: stack_where = sel.where( stack.c.deleted_at < time_line) stacks = engine.execute(stack_where) while True: next_stacks_to_purge = list(itertools.islice(stacks, batch_size)) if len(next_stacks_to_purge): _purge_stacks(next_stacks_to_purge, engine, meta) else: break def _purge_stacks(stack_infos, engine, meta): """Purge some stacks and their releated events, raw_templates, etc. stack_infos is a list of lists of selected stack columns: [[id, raw_template_id, prev_raw_template_id, user_creds_id, action, status, name], ...] 
""" stack = sqlalchemy.Table('stack', meta, autoload=True) stack_lock = sqlalchemy.Table('stack_lock', meta, autoload=True) stack_tag = sqlalchemy.Table('stack_tag', meta, autoload=True) resource = sqlalchemy.Table('resource', meta, autoload=True) resource_data = sqlalchemy.Table('resource_data', meta, autoload=True) resource_properties_data = sqlalchemy.Table( 'resource_properties_data', meta, autoload=True) event = sqlalchemy.Table('event', meta, autoload=True) raw_template = sqlalchemy.Table('raw_template', meta, autoload=True) raw_template_files = sqlalchemy.Table('raw_template_files', meta, autoload=True) user_creds = sqlalchemy.Table('user_creds', meta, autoload=True) syncpoint = sqlalchemy.Table('sync_point', meta, autoload=True) stack_info_str = ','.join([str(i) for i in stack_infos]) LOG.info("Purging stacks %s", stack_info_str) # TODO(cwolfe): find a way to make this re-entrant with # reasonably sized transactions (good luck), or add # a cleanup for orphaned rows. stack_ids = [stack_info[0] for stack_info in stack_infos] # delete stack locks (just in case some got stuck) stack_lock_del = stack_lock.delete().where( stack_lock.c.stack_id.in_(stack_ids)) engine.execute(stack_lock_del) # delete stack tags stack_tag_del = stack_tag.delete().where( stack_tag.c.stack_id.in_(stack_ids)) engine.execute(stack_tag_del) # delete resource_data res_where = sqlalchemy.select([resource.c.id]).where( resource.c.stack_id.in_(stack_ids)) res_data_del = resource_data.delete().where( resource_data.c.resource_id.in_(res_where)) engine.execute(res_data_del) # clean up any sync_points that may have lingered sync_del = syncpoint.delete().where( syncpoint.c.stack_id.in_(stack_ids)) engine.execute(sync_del) # get rsrc_prop_data_ids to delete rsrc_prop_data_where = sqlalchemy.select( [resource.c.rsrc_prop_data_id]).where( resource.c.stack_id.in_(stack_ids)) rsrc_prop_data_ids = set( [i[0] for i in list(engine.execute(rsrc_prop_data_where))]) rsrc_prop_data_where = sqlalchemy.select( [resource.c.attr_data_id]).where( resource.c.stack_id.in_(stack_ids)) rsrc_prop_data_ids.update( [i[0] for i in list(engine.execute(rsrc_prop_data_where))]) rsrc_prop_data_where = sqlalchemy.select( [event.c.rsrc_prop_data_id]).where( event.c.stack_id.in_(stack_ids)) rsrc_prop_data_ids.update( [i[0] for i in list(engine.execute(rsrc_prop_data_where))]) # delete events event_del = event.delete().where(event.c.stack_id.in_(stack_ids)) engine.execute(event_del) # delete resources (normally there shouldn't be any) res_del = resource.delete().where(resource.c.stack_id.in_(stack_ids)) engine.execute(res_del) # delete resource_properties_data if rsrc_prop_data_ids: # keep rpd's in events rsrc_prop_data_where = sqlalchemy.select( [event.c.rsrc_prop_data_id]).where( event.c.rsrc_prop_data_id.in_(rsrc_prop_data_ids)) ids = list(engine.execute(rsrc_prop_data_where)) rsrc_prop_data_ids.difference_update([i[0] for i in ids]) if rsrc_prop_data_ids: # keep rpd's in resources rsrc_prop_data_where = sqlalchemy.select( [resource.c.rsrc_prop_data_id]).where( resource.c.rsrc_prop_data_id.in_(rsrc_prop_data_ids)) ids = list(engine.execute(rsrc_prop_data_where)) rsrc_prop_data_ids.difference_update([i[0] for i in ids]) if rsrc_prop_data_ids: # delete if we have any rsrc_prop_data_del = resource_properties_data.delete().where( resource_properties_data.c.id.in_(rsrc_prop_data_ids)) engine.execute(rsrc_prop_data_del) # delete the stacks stack_del = stack.delete().where(stack.c.id.in_(stack_ids)) engine.execute(stack_del) # delete orphaned raw templates 
raw_template_ids = [i[1] for i in stack_infos if i[1] is not None] raw_template_ids.extend(i[2] for i in stack_infos if i[2] is not None) if raw_template_ids: # keep those still referenced raw_tmpl_sel = sqlalchemy.select([stack.c.raw_template_id]).where( stack.c.raw_template_id.in_(raw_template_ids)) raw_tmpl = [i[0] for i in engine.execute(raw_tmpl_sel)] raw_template_ids = set(raw_template_ids) - set(raw_tmpl) if raw_template_ids: # keep those still referenced (previous tmpl) raw_tmpl_sel = sqlalchemy.select( [stack.c.prev_raw_template_id]).where( stack.c.prev_raw_template_id.in_(raw_template_ids)) raw_tmpl = [i[0] for i in engine.execute(raw_tmpl_sel)] raw_template_ids = raw_template_ids - set(raw_tmpl) if raw_template_ids: # delete raw_templates if we have any raw_tmpl_file_sel = sqlalchemy.select( [raw_template.c.files_id]).where( raw_template.c.id.in_(raw_template_ids)) raw_tmpl_file_ids = [i[0] for i in engine.execute( raw_tmpl_file_sel)] raw_templ_del = raw_template.delete().where( raw_template.c.id.in_(raw_template_ids)) engine.execute(raw_templ_del) if raw_tmpl_file_ids: # keep _files still referenced raw_tmpl_file_sel = sqlalchemy.select( [raw_template.c.files_id]).where( raw_template.c.files_id.in_(raw_tmpl_file_ids)) raw_tmpl_files = [i[0] for i in engine.execute( raw_tmpl_file_sel)] raw_tmpl_file_ids = set(raw_tmpl_file_ids) \ - set(raw_tmpl_files) if raw_tmpl_file_ids: # delete _files if we have any raw_tmpl_file_del = raw_template_files.delete().where( raw_template_files.c.id.in_(raw_tmpl_file_ids)) engine.execute(raw_tmpl_file_del) # purge any user creds that are no longer referenced user_creds_ids = [i[3] for i in stack_infos if i[3] is not None] if user_creds_ids: # keep those still referenced user_sel = sqlalchemy.select([stack.c.user_creds_id]).where( stack.c.user_creds_id.in_(user_creds_ids)) users = [i[0] for i in engine.execute(user_sel)] user_creds_ids = set(user_creds_ids) - set(users) if user_creds_ids: # delete if we have any usr_creds_del = user_creds.delete().where( user_creds.c.id.in_(user_creds_ids)) engine.execute(usr_creds_del) def sync_point_delete_all_by_stack_and_traversal(context, stack_id, traversal_id): rows_deleted = context.session.query(models.SyncPoint).filter_by( stack_id=stack_id, traversal_id=traversal_id).delete() return rows_deleted @oslo_db_api.wrap_db_retry(max_retries=3, retry_on_deadlock=True, retry_interval=0.5, inc_retry_interval=True) def sync_point_create(context, values): values['entity_id'] = str(values['entity_id']) sync_point_ref = models.SyncPoint() sync_point_ref.update(values) sync_point_ref.save(context.session) return sync_point_ref def sync_point_get(context, entity_id, traversal_id, is_update): entity_id = str(entity_id) return context.session.query(models.SyncPoint).get( (entity_id, traversal_id, is_update) ) def sync_point_update_input_data(context, entity_id, traversal_id, is_update, atomic_key, input_data): entity_id = str(entity_id) rows_updated = context.session.query(models.SyncPoint).filter_by( entity_id=entity_id, traversal_id=traversal_id, is_update=is_update, atomic_key=atomic_key ).update({"input_data": input_data, "atomic_key": atomic_key + 1}) return rows_updated def db_sync(engine, version=None): """Migrate the database to `version` or the most recent version.""" if version is not None and int(version) < db_version(engine): raise exception.Error(_("Cannot migrate to lower schema version.")) return migration.db_sync(engine, version=version) def db_version(engine): """Display the current database version.""" 
return migration.db_version(engine) def _crypt_action(encrypt): if encrypt: return _('encrypt') return _('decrypt') def _db_encrypt_or_decrypt_template_params( ctxt, encryption_key, encrypt=False, batch_size=50, verbose=False): from heat.engine import template session = ctxt.session excs = [] query = session.query(models.RawTemplate) template_batches = _get_batch( session, ctxt=ctxt, query=query, model=models.RawTemplate, batch_size=batch_size) next_batch = list(itertools.islice(template_batches, batch_size)) while next_batch: with session.begin(subtransactions=True): for raw_template in next_batch: try: if verbose: LOG.info("Processing raw_template %s...", raw_template.id) env = raw_template.environment needs_update = False # using "in env.keys()" so an exception is raised # if env is something weird like a string. if env is None or 'parameters' not in env.keys(): continue if 'encrypted_param_names' in env: encrypted_params = env['encrypted_param_names'] else: encrypted_params = [] if encrypt: tmpl = template.Template.load( ctxt, raw_template.id, raw_template) param_schemata = tmpl.param_schemata() if not param_schemata: continue for param_name, param_val in env['parameters'].items(): if (param_name in encrypted_params or param_name not in param_schemata or not param_schemata[param_name].hidden): continue encrypted_val = crypt.encrypt( six.text_type(param_val), encryption_key) env['parameters'][param_name] = encrypted_val encrypted_params.append(param_name) needs_update = True if needs_update: newenv = env.copy() newenv['encrypted_param_names'] = encrypted_params else: # decrypt for param_name in encrypted_params: method, value = env['parameters'][param_name] decrypted_val = crypt.decrypt(method, value, encryption_key) env['parameters'][param_name] = decrypted_val needs_update = True if needs_update: newenv = env.copy() newenv['encrypted_param_names'] = [] if needs_update: raw_template_update(ctxt, raw_template.id, {'environment': newenv}) except Exception as exc: LOG.exception('Failed to %(crypt_action)s parameters ' 'of raw template %(id)d', {'id': raw_template.id, 'crypt_action': _crypt_action(encrypt)}) excs.append(exc) continue finally: if verbose: LOG.info("Finished %(crypt_action)s processing of " "raw_template %(id)d.", {'id': raw_template.id, 'crypt_action': _crypt_action(encrypt)}) next_batch = list(itertools.islice(template_batches, batch_size)) return excs def _db_encrypt_or_decrypt_resource_prop_data_legacy( ctxt, encryption_key, encrypt=False, batch_size=50, verbose=False): session = ctxt.session excs = [] # Older resources may have properties_data in the legacy column, # so update those as needed query = session.query(models.Resource).filter( models.Resource.properties_data_encrypted.isnot(encrypt)) resource_batches = _get_batch( session=session, ctxt=ctxt, query=query, model=models.Resource, batch_size=batch_size) next_batch = list(itertools.islice(resource_batches, batch_size)) while next_batch: with session.begin(subtransactions=True): for resource in next_batch: if not resource.properties_data: continue try: if verbose: LOG.info("Processing resource %s...", resource.id) if encrypt: result = crypt.encrypted_dict(resource.properties_data, encryption_key) else: result = crypt.decrypted_dict(resource.properties_data, encryption_key) resource_update(ctxt, resource.id, {'properties_data': result, 'properties_data_encrypted': encrypt}, resource.atomic_key) except Exception as exc: LOG.exception('Failed to %(crypt_action)s ' 'properties_data of resource %(id)d' % {'id': 
resource.id, 'crypt_action': _crypt_action(encrypt)}) excs.append(exc) continue finally: if verbose: LOG.info("Finished processing resource %s.", resource.id) next_batch = list(itertools.islice(resource_batches, batch_size)) return excs def _db_encrypt_or_decrypt_resource_prop_data( ctxt, encryption_key, encrypt=False, batch_size=50, verbose=False): session = ctxt.session excs = [] # Encrypt or decrypt rows in the modern resource_properties_data # table whose encrypted flag does not yet match the requested state query = session.query(models.ResourcePropertiesData).filter( models.ResourcePropertiesData.encrypted.isnot(encrypt)) rpd_batches = _get_batch( session=session, ctxt=ctxt, query=query, model=models.ResourcePropertiesData, batch_size=batch_size) next_batch = list(itertools.islice(rpd_batches, batch_size)) while next_batch: with session.begin(subtransactions=True): for rpd in next_batch: if not rpd.data: continue try: if verbose: LOG.info("Processing resource_properties_data " "%s...", rpd.id) if encrypt: result = crypt.encrypted_dict(rpd.data, encryption_key) else: result = crypt.decrypted_dict(rpd.data, encryption_key) rpd.update({'data': result, 'encrypted': encrypt}) except Exception as exc: LOG.exception( "Failed to %(crypt_action)s " "data of resource_properties_data %(id)d" % {'id': rpd.id, 'crypt_action': _crypt_action(encrypt)}) excs.append(exc) continue finally: if verbose: LOG.info( "Finished processing resource_properties_data" " %s.", rpd.id) next_batch = list(itertools.islice(rpd_batches, batch_size)) return excs def db_encrypt_parameters_and_properties(ctxt, encryption_key, batch_size=50, verbose=False): """Encrypt parameters and properties for all templates in db. :param ctxt: RPC context :param encryption_key: key that will be used for parameter and property encryption :param batch_size: number of templates requested from db in each iteration. 50 means that heat requests 50 templates, encrypts them and proceeds with the next 50 items. :param verbose: log an INFO message when processing of each raw_template or resource begins or ends :return: list of exceptions encountered during encryption """ excs = [] excs.extend(_db_encrypt_or_decrypt_template_params( ctxt, encryption_key, True, batch_size, verbose)) excs.extend(_db_encrypt_or_decrypt_resource_prop_data( ctxt, encryption_key, True, batch_size, verbose)) excs.extend(_db_encrypt_or_decrypt_resource_prop_data_legacy( ctxt, encryption_key, True, batch_size, verbose)) return excs def db_decrypt_parameters_and_properties(ctxt, encryption_key, batch_size=50, verbose=False): """Decrypt parameters and properties for all templates in db. :param ctxt: RPC context :param encryption_key: key that will be used for parameter and property decryption :param batch_size: number of templates requested from db in each iteration. 50 means that heat requests 50 templates, decrypts them and proceeds with the next 50 items. :param verbose: log an INFO message when processing of each raw_template or resource begins or ends :return: list of exceptions encountered during decryption """ excs = [] excs.extend(_db_encrypt_or_decrypt_template_params( ctxt, encryption_key, False, batch_size, verbose)) excs.extend(_db_encrypt_or_decrypt_resource_prop_data( ctxt, encryption_key, False, batch_size, verbose)) excs.extend(_db_encrypt_or_decrypt_resource_prop_data_legacy( ctxt, encryption_key, False, batch_size, verbose)) return excs def db_properties_data_migrate(ctxt, batch_size=50): """Migrate properties data from legacy columns to new location in db. 
:param ctxt: RPC context :param batch_size: number of rows requested from db in each iteration. 50 means that heat requests 50 rows, migrates them and proceeds with the next 50 items. """ session = ctxt.session query = session.query(models.Resource).filter(and_( models.Resource.properties_data.isnot(None), models.Resource.rsrc_prop_data_id.is_(None))) resource_batches = _get_batch( session=session, ctxt=ctxt, query=query, model=models.Resource, batch_size=batch_size) next_batch = list(itertools.islice(resource_batches, batch_size)) while next_batch: with session.begin(): for resource in next_batch: try: encrypted = resource.properties_data_encrypted if encrypted is None: LOG.warning( 'Unexpected: resource.properties_data_encrypted ' 'is None for resource id %s for legacy ' 'resource.properties_data, assuming False.', resource.id) encrypted = False rsrc_prop_data = resource_prop_data_create( ctxt, {'encrypted': encrypted, 'data': resource.properties_data}) resource_update(ctxt, resource.id, {'properties_data_encrypted': None, 'properties_data': None, 'rsrc_prop_data_id': rsrc_prop_data.id}, resource.atomic_key) except Exception: LOG.exception('Failed to migrate properties_data for ' 'resource %d', resource.id) continue next_batch = list(itertools.islice(resource_batches, batch_size)) query = session.query(models.Event).filter(and_( models.Event.resource_properties.isnot(None), models.Event.rsrc_prop_data_id.is_(None))) event_batches = _get_batch( session=session, ctxt=ctxt, query=query, model=models.Event, batch_size=batch_size) next_batch = list(itertools.islice(event_batches, batch_size)) while next_batch: with session.begin(): for event in next_batch: try: prop_data = event.resource_properties rsrc_prop_data = resource_prop_data_create( ctxt, {'encrypted': False, 'data': prop_data}) event.update({'resource_properties': None, 'rsrc_prop_data_id': rsrc_prop_data.id}) except Exception: LOG.exception('Failed to migrate resource_properties ' 'for event %d', event.id) continue next_batch = list(itertools.islice(event_batches, batch_size)) def _get_batch(session, ctxt, query, model, batch_size=50): last_batch_marker = None while True: results = _paginate_query( context=ctxt, query=query, model=model, limit=batch_size, marker=last_batch_marker).all() if not results: break else: for result in results: yield result last_batch_marker = results[-1].id def reset_stack_status(context, stack_id, stack=None): session = context.session if stack is None: stack = session.query(models.Stack).get(stack_id) if stack is None: raise exception.NotFound(_('Stack with id %s not found') % stack_id) with session.begin(): query = session.query(models.Resource).filter_by( status='IN_PROGRESS', stack_id=stack_id) query.update({'status': 'FAILED', 'status_reason': 'Stack status manually reset', 'engine_id': None}) query = session.query(models.ResourceData) query = query.join(models.Resource) query = query.filter_by(stack_id=stack_id) query = query.filter( models.ResourceData.key.in_(heat_environment.HOOK_TYPES)) data_ids = [data.id for data in query] if data_ids: query = session.query(models.ResourceData) query = query.filter(models.ResourceData.id.in_(data_ids)) query.delete(synchronize_session='fetch') query = session.query(models.Stack).filter_by(owner_id=stack_id) for child in query: reset_stack_status(context, child.id, child) with session.begin(): if stack.status == 'IN_PROGRESS': stack.status = 'FAILED' stack.status_reason = 'Stack status manually reset' session.query( models.StackLock ).filter_by(stack_id=stack_id).delete() 
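The encrypt/decrypt helpers above are batch jobs an operator runs against the whole database. A minimal sketch of driving them from a script follows; the admin-context helper is heat's usual one, but the function name, logger name and the 32-character key convention are illustrative assumptions, not part of this module.

import logging

from heat.common import context
from heat.db.sqlalchemy import api as db_api

LOG = logging.getLogger('key-rotation-sketch')


def encrypt_all(encryption_key):
    # Assumed: encryption_key is the 32-character symmetric key the
    # crypt helpers expect (heat's auth_encryption_key convention).
    ctxt = context.get_admin_context()
    # Walks raw_template parameters, resource_properties_data rows and
    # legacy resource.properties_data in batches of 50, encrypting any
    # hidden values still stored in the clear.
    excs = db_api.db_encrypt_parameters_and_properties(
        ctxt, encryption_key, batch_size=50, verbose=True)
    for exc in excs:
        LOG.warning('Row skipped during encryption: %s', exc)
    return len(excs) == 0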
heat-10.0.2/heat/db/sqlalchemy/types.py0000666000175000017500000000317513343562337017744 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from sqlalchemy.dialects import mysql from sqlalchemy import types dumps = jsonutils.dumps loads = jsonutils.loads class LongText(types.TypeDecorator): impl = types.Text def load_dialect_impl(self, dialect): if dialect.name == 'mysql': return dialect.type_descriptor(mysql.LONGTEXT()) else: return self.impl class Json(LongText): def process_bind_param(self, value, dialect): return dumps(value) def process_result_value(self, value, dialect): if value is None: return None return loads(value) class List(types.TypeDecorator): impl = types.Text def load_dialect_impl(self, dialect): if dialect.name == 'mysql': return dialect.type_descriptor(mysql.LONGTEXT()) else: return self.impl def process_bind_param(self, value, dialect): return dumps(value) def process_result_value(self, value, dialect): if value is None: return None return loads(value) heat-10.0.2/heat/db/sqlalchemy/migration.py0000666000175000017500000000250713343562337020567 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from oslo_db.sqlalchemy import migration as oslo_migration INIT_VERSION = 70 def db_sync(engine, version=None): path = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'migrate_repo') return oslo_migration.db_sync(engine, path, version, init_version=INIT_VERSION) def db_version(engine): path = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'migrate_repo') return oslo_migration.db_version(engine, path, INIT_VERSION) def db_version_control(engine, version=None): path = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'migrate_repo') return oslo_migration.db_version_control(engine, path, version) heat-10.0.2/heat/db/sqlalchemy/models.py0000666000175000017500000004261513343562351020061 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
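The Json and List types defined in types.py above are thin TypeDecorators over TEXT (LONGTEXT on MySQL). A self-contained sketch of the round-trip they perform, using an in-memory SQLite engine and a made-up table name, might look like this (SQLAlchemy 1.x engine.execute style, matching the era of this codebase):

import sqlalchemy

from heat.db.sqlalchemy import types

meta = sqlalchemy.MetaData()
demo = sqlalchemy.Table(
    'demo', meta,
    sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True),
    # Stored as TEXT (LONGTEXT under MySQL), exposed as a Python dict.
    sqlalchemy.Column('doc', types.Json))

engine = sqlalchemy.create_engine('sqlite://')
meta.create_all(engine)

# process_bind_param serializes the dict to JSON text on the way in,
# and process_result_value parses it back into a dict on the way out.
engine.execute(demo.insert().values(doc={'a': [1, 2]}))
assert engine.execute(
    sqlalchemy.select([demo.c.doc])).scalar() == {'a': [1, 2]}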
"""SQLAlchemy models for heat data.""" import uuid from oslo_db.sqlalchemy import models from oslo_utils import timeutils import sqlalchemy from sqlalchemy.ext import declarative from sqlalchemy.orm import backref from sqlalchemy.orm import relationship from heat.db.sqlalchemy import types BASE = declarative.declarative_base() class HeatBase(models.ModelBase, models.TimestampMixin): """Base class for Heat Models.""" __table_args__ = {'mysql_engine': 'InnoDB'} class SoftDelete(object): deleted_at = sqlalchemy.Column(sqlalchemy.DateTime) class StateAware(object): action = sqlalchemy.Column('action', sqlalchemy.String(255)) status = sqlalchemy.Column('status', sqlalchemy.String(255)) status_reason = sqlalchemy.Column('status_reason', sqlalchemy.Text) class RawTemplate(BASE, HeatBase): """Represents an unparsed template which should be in JSON format.""" __tablename__ = 'raw_template' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) template = sqlalchemy.Column(types.Json) # legacy column files = sqlalchemy.Column(types.Json) # modern column, reference to raw_template_files files_id = sqlalchemy.Column( sqlalchemy.Integer(), sqlalchemy.ForeignKey('raw_template_files.id')) environment = sqlalchemy.Column('environment', types.Json) class RawTemplateFiles(BASE, HeatBase): """Where template files json dicts are stored.""" __tablename__ = 'raw_template_files' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) files = sqlalchemy.Column(types.Json) class StackTag(BASE, HeatBase): """Key/value store of arbitrary stack tags.""" __tablename__ = 'stack_tag' id = sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True, nullable=False) tag = sqlalchemy.Column('tag', sqlalchemy.Unicode(80)) stack_id = sqlalchemy.Column('stack_id', sqlalchemy.String(36), sqlalchemy.ForeignKey('stack.id'), nullable=False) class SyncPoint(BASE, HeatBase): """Represents a syncpoint for a stack that is being worked on.""" __tablename__ = 'sync_point' __table_args__ = ( sqlalchemy.PrimaryKeyConstraint('entity_id', 'traversal_id', 'is_update'), sqlalchemy.ForeignKeyConstraint(['stack_id'], ['stack.id']) ) entity_id = sqlalchemy.Column(sqlalchemy.String(36)) traversal_id = sqlalchemy.Column(sqlalchemy.String(36)) is_update = sqlalchemy.Column(sqlalchemy.Boolean) # integer field for atomic update operations atomic_key = sqlalchemy.Column(sqlalchemy.Integer, nullable=False) stack_id = sqlalchemy.Column(sqlalchemy.String(36), nullable=False) input_data = sqlalchemy.Column(types.Json) class Stack(BASE, HeatBase, SoftDelete, StateAware): """Represents a stack created by the heat engine.""" __tablename__ = 'stack' __table_args__ = ( sqlalchemy.Index('ix_stack_name', 'name', mysql_length=255), sqlalchemy.Index('ix_stack_tenant', 'tenant', mysql_length=255), ) id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True, default=lambda: str(uuid.uuid4())) name = sqlalchemy.Column(sqlalchemy.String(255)) raw_template_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey('raw_template.id'), nullable=False) raw_template = relationship(RawTemplate, backref=backref('stack'), foreign_keys=[raw_template_id]) prev_raw_template_id = sqlalchemy.Column( 'prev_raw_template_id', sqlalchemy.Integer, sqlalchemy.ForeignKey('raw_template.id')) prev_raw_template = relationship(RawTemplate, foreign_keys=[prev_raw_template_id]) username = sqlalchemy.Column(sqlalchemy.String(256)) tenant = sqlalchemy.Column(sqlalchemy.String(256)) user_creds_id = sqlalchemy.Column( sqlalchemy.Integer, 
sqlalchemy.ForeignKey('user_creds.id')) owner_id = sqlalchemy.Column(sqlalchemy.String(36), index=True) parent_resource_name = sqlalchemy.Column(sqlalchemy.String(255)) timeout = sqlalchemy.Column(sqlalchemy.Integer) disable_rollback = sqlalchemy.Column(sqlalchemy.Boolean, nullable=False) stack_user_project_id = sqlalchemy.Column(sqlalchemy.String(64)) backup = sqlalchemy.Column('backup', sqlalchemy.Boolean) nested_depth = sqlalchemy.Column('nested_depth', sqlalchemy.Integer) convergence = sqlalchemy.Column('convergence', sqlalchemy.Boolean) tags = relationship(StackTag, cascade="all,delete", backref=backref('stack')) current_traversal = sqlalchemy.Column('current_traversal', sqlalchemy.String(36)) current_deps = sqlalchemy.Column('current_deps', types.Json) # Override timestamp column to store the correct value: it should be the # time the create/update call was issued, not the time the DB entry is # created/modified. (bug #1193269) updated_at = sqlalchemy.Column(sqlalchemy.DateTime) class StackLock(BASE, HeatBase): """Store stack locks for deployments with multiple-engines.""" __tablename__ = 'stack_lock' stack_id = sqlalchemy.Column(sqlalchemy.String(36), sqlalchemy.ForeignKey('stack.id'), primary_key=True) engine_id = sqlalchemy.Column(sqlalchemy.String(36)) class UserCreds(BASE, HeatBase): """Represents user credentials. Also, mirrors the 'context' handed in by wsgi. """ __tablename__ = 'user_creds' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) username = sqlalchemy.Column(sqlalchemy.String(255)) password = sqlalchemy.Column(sqlalchemy.String(255)) region_name = sqlalchemy.Column(sqlalchemy.String(255)) decrypt_method = sqlalchemy.Column(sqlalchemy.String(64)) tenant = sqlalchemy.Column(sqlalchemy.String(1024)) auth_url = sqlalchemy.Column(sqlalchemy.Text) tenant_id = sqlalchemy.Column(sqlalchemy.String(256)) trust_id = sqlalchemy.Column(sqlalchemy.String(255)) trustor_user_id = sqlalchemy.Column(sqlalchemy.String(64)) stack = relationship(Stack, backref=backref('user_creds'), cascade_backrefs=False) class ResourcePropertiesData(BASE, HeatBase): """Represents resource properties data, current or older""" __tablename__ = 'resource_properties_data' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) data = sqlalchemy.Column('data', types.Json) encrypted = sqlalchemy.Column('encrypted', sqlalchemy.Boolean) class Event(BASE, HeatBase): """Represents an event generated by the heat engine.""" __tablename__ = 'event' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) stack_id = sqlalchemy.Column(sqlalchemy.String(36), sqlalchemy.ForeignKey('stack.id'), nullable=False) stack = relationship(Stack, backref=backref('events')) uuid = sqlalchemy.Column(sqlalchemy.String(36), default=lambda: str(uuid.uuid4()), unique=True) resource_action = sqlalchemy.Column(sqlalchemy.String(255)) resource_status = sqlalchemy.Column(sqlalchemy.String(255)) resource_name = sqlalchemy.Column(sqlalchemy.String(255)) physical_resource_id = sqlalchemy.Column(sqlalchemy.String(255)) _resource_status_reason = sqlalchemy.Column( 'resource_status_reason', sqlalchemy.String(255)) resource_type = sqlalchemy.Column(sqlalchemy.String(255)) rsrc_prop_data_id = sqlalchemy.Column(sqlalchemy.Integer, sqlalchemy.ForeignKey( 'resource_properties_data.id')) rsrc_prop_data = relationship(ResourcePropertiesData, backref=backref('event')) resource_properties = sqlalchemy.Column(sqlalchemy.PickleType) @property def resource_status_reason(self): return self._resource_status_reason 
@resource_status_reason.setter def resource_status_reason(self, reason): self._resource_status_reason = reason and reason[:255] or '' class ResourceData(BASE, HeatBase): """Key/value store of arbitrary, resource-specific data.""" __tablename__ = 'resource_data' id = sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True, nullable=False) key = sqlalchemy.Column('key', sqlalchemy.String(255)) value = sqlalchemy.Column('value', sqlalchemy.Text) redact = sqlalchemy.Column('redact', sqlalchemy.Boolean) decrypt_method = sqlalchemy.Column(sqlalchemy.String(64)) resource_id = sqlalchemy.Column( 'resource_id', sqlalchemy.Integer, sqlalchemy.ForeignKey(column='resource.id', name='fk_resource_id', ondelete='CASCADE'), nullable=False) class Resource(BASE, HeatBase, StateAware): """Represents a resource created by the heat engine.""" __tablename__ = 'resource' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) uuid = sqlalchemy.Column(sqlalchemy.String(36), default=lambda: str(uuid.uuid4()), unique=True) name = sqlalchemy.Column('name', sqlalchemy.String(255)) physical_resource_id = sqlalchemy.Column('nova_instance', sqlalchemy.String(255)) # odd name as "metadata" is reserved rsrc_metadata = sqlalchemy.Column('rsrc_metadata', types.Json) stack_id = sqlalchemy.Column(sqlalchemy.String(36), sqlalchemy.ForeignKey('stack.id'), nullable=False) stack = relationship(Stack, backref=backref('resources')) root_stack_id = sqlalchemy.Column(sqlalchemy.String(36), index=True) data = relationship(ResourceData, cascade="all", passive_deletes=True, backref=backref('resource')) rsrc_prop_data_id = sqlalchemy.Column(sqlalchemy.Integer, sqlalchemy.ForeignKey( 'resource_properties_data.id')) rsrc_prop_data = relationship(ResourcePropertiesData, foreign_keys=[rsrc_prop_data_id]) attr_data_id = sqlalchemy.Column(sqlalchemy.Integer, sqlalchemy.ForeignKey( 'resource_properties_data.id')) attr_data = relationship(ResourcePropertiesData, foreign_keys=[attr_data_id]) # Override timestamp column to store the correct value: it should be the # time the create/update call was issued, not the time the DB entry is # created/modified. 
(bug #1193269) updated_at = sqlalchemy.Column(sqlalchemy.DateTime) properties_data = sqlalchemy.Column('properties_data', types.Json) properties_data_encrypted = sqlalchemy.Column('properties_data_encrypted', sqlalchemy.Boolean) engine_id = sqlalchemy.Column(sqlalchemy.String(36)) atomic_key = sqlalchemy.Column(sqlalchemy.Integer) needed_by = sqlalchemy.Column('needed_by', types.List) requires = sqlalchemy.Column('requires', types.List) replaces = sqlalchemy.Column('replaces', sqlalchemy.Integer, default=None) replaced_by = sqlalchemy.Column('replaced_by', sqlalchemy.Integer, default=None) current_template_id = sqlalchemy.Column( 'current_template_id', sqlalchemy.Integer, sqlalchemy.ForeignKey('raw_template.id')) class WatchRule(BASE, HeatBase): """Represents a watch_rule created by the heat engine.""" __tablename__ = 'watch_rule' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) name = sqlalchemy.Column('name', sqlalchemy.String(255)) rule = sqlalchemy.Column('rule', types.Json) state = sqlalchemy.Column('state', sqlalchemy.String(255)) last_evaluated = sqlalchemy.Column(sqlalchemy.DateTime, default=timeutils.utcnow) stack_id = sqlalchemy.Column(sqlalchemy.String(36), sqlalchemy.ForeignKey('stack.id'), nullable=False) stack = relationship(Stack, backref=backref('watch_rule')) class WatchData(BASE, HeatBase): """Represents a watch_data created by the heat engine.""" __tablename__ = 'watch_data' id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True) data = sqlalchemy.Column('data', types.Json) watch_rule_id = sqlalchemy.Column( sqlalchemy.Integer, sqlalchemy.ForeignKey('watch_rule.id'), nullable=False) watch_rule = relationship(WatchRule, backref=backref('watch_data')) class SoftwareConfig(BASE, HeatBase): """Represents a software configuration resource. Represents a software configuration resource to be applied to one or more servers. """ __tablename__ = 'software_config' id = sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True, default=lambda: str(uuid.uuid4())) name = sqlalchemy.Column('name', sqlalchemy.String(255)) group = sqlalchemy.Column('group', sqlalchemy.String(255)) config = sqlalchemy.Column('config', types.Json) tenant = sqlalchemy.Column( 'tenant', sqlalchemy.String(64), nullable=False, index=True) class SoftwareDeployment(BASE, HeatBase, StateAware): """Represents a software deployment resource. Represents applying a software configuration resource to a single server resource. 
""" __tablename__ = 'software_deployment' __table_args__ = ( sqlalchemy.Index('ix_software_deployment_created_at', 'created_at'),) id = sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True, default=lambda: str(uuid.uuid4())) config_id = sqlalchemy.Column( 'config_id', sqlalchemy.String(36), sqlalchemy.ForeignKey('software_config.id'), nullable=False) config = relationship(SoftwareConfig, backref=backref('deployments')) server_id = sqlalchemy.Column('server_id', sqlalchemy.String(36), nullable=False, index=True) input_values = sqlalchemy.Column('input_values', types.Json) output_values = sqlalchemy.Column('output_values', types.Json) tenant = sqlalchemy.Column( 'tenant', sqlalchemy.String(64), nullable=False, index=True) stack_user_project_id = sqlalchemy.Column(sqlalchemy.String(64)) updated_at = sqlalchemy.Column(sqlalchemy.DateTime) class Snapshot(BASE, HeatBase): __tablename__ = 'snapshot' id = sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True, default=lambda: str(uuid.uuid4())) stack_id = sqlalchemy.Column(sqlalchemy.String(36), sqlalchemy.ForeignKey('stack.id'), nullable=False) name = sqlalchemy.Column('name', sqlalchemy.String(255)) data = sqlalchemy.Column('data', types.Json) tenant = sqlalchemy.Column( 'tenant', sqlalchemy.String(64), nullable=False, index=True) status = sqlalchemy.Column('status', sqlalchemy.String(255)) status_reason = sqlalchemy.Column('status_reason', sqlalchemy.String(255)) stack = relationship(Stack, backref=backref('snapshot')) class Service(BASE, HeatBase, SoftDelete): __tablename__ = 'service' id = sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True, default=lambda: str(uuid.uuid4())) engine_id = sqlalchemy.Column('engine_id', sqlalchemy.String(36), nullable=False) host = sqlalchemy.Column('host', sqlalchemy.String(255), nullable=False) hostname = sqlalchemy.Column('hostname', sqlalchemy.String(255), nullable=False) binary = sqlalchemy.Column('binary', sqlalchemy.String(255), nullable=False) topic = sqlalchemy.Column('topic', sqlalchemy.String(255), nullable=False) report_interval = sqlalchemy.Column('report_interval', sqlalchemy.Integer, nullable=False) heat-10.0.2/heat/db/__init__.py0000666000175000017500000000000013343562337016155 0ustar zuulzuul00000000000000heat-10.0.2/heat/cloudinit/0000775000175000017500000000000013343562672015463 5ustar zuulzuul00000000000000heat-10.0.2/heat/cloudinit/boothook.sh0000777000175000017500000000105013343562337017642 0ustar zuulzuul00000000000000#!/bin/bash # FIXME(shadower) this is a workaround for cloud-init 0.6.3 present in Ubuntu # 12.04 LTS: # https://bugs.launchpad.net/heat/+bug/1257410 # # The old cloud-init doesn't create the users directly so the commands to do # this are injected though nova_utils.py. # # Once we drop support for 0.6.3, we can safely remove this. 
${add_custom_user} # in case heat-cfntools has been installed from package but no symlinks # are yet in /opt/aws/bin/ cfn-create-aws-symlinks # Do not remove - the cloud boothook should always return success exit 0 heat-10.0.2/heat/cloudinit/__init__.py0000666000175000017500000000000013343562337017562 0ustar zuulzuul00000000000000heat-10.0.2/heat/cloudinit/config0000666000175000017500000000025313343562337016653 0ustar zuulzuul00000000000000${add_custom_user} # Capture all subprocess output into a logfile # Useful for troubleshooting cloud-init issues output: {all: '| tee -a /var/log/cloud-init-output.log'} heat-10.0.2/heat/cloudinit/part_handler.py0000666000175000017500000000277213343562337020510 0ustar zuulzuul00000000000000# part-handler # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import errno import os import sys def list_types(): return(["text/x-cfninitdata"]) def handle_part(data, ctype, filename, payload): if ctype == "__begin__": try: os.makedirs('/var/lib/heat-cfntools', int("700", 8)) except OSError: ex_type, e, tb = sys.exc_info() if e.errno != errno.EEXIST: raise return if ctype == "__end__": return timestamp = datetime.datetime.now() with open('/var/log/part-handler.log', 'a') as log: log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype)) if ctype == 'text/x-cfninitdata': with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f: f.write(payload) # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3 with open('/var/lib/cloud/data/%s' % filename, 'w') as f: f.write(payload) heat-10.0.2/heat/cloudinit/loguserdata.py0000777000175000017500000000750613343562337020362 0ustar zuulzuul00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. "true" '''\' # NOTE(vgridnev): ubuntu trusty by default has python3, # but pkg_resources can't be imported. echo "import pkg_resources" | python3 2>/dev/null has_py3=$? 
if [ $has_py3 = 0 ]; then interpreter="python3" else interpreter="python" fi exec $interpreter "$0" ''' import datetime from distutils import version import errno import logging import os import re import subprocess import sys import pkg_resources VAR_PATH = '/var/lib/heat-cfntools' LOG = logging.getLogger('heat-provision') def chk_ci_version(): try: v = version.LooseVersion( pkg_resources.get_distribution('cloud-init').version) return v >= version.LooseVersion('0.6.0') except Exception: pass data = subprocess.Popen(['cloud-init', '--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate() if data[0]: raise Exception() # data[1] has such format: 'cloud-init 0.7.5\n', need to parse version v = re.split(' |\n', data[1])[1].split('.') return tuple(v) >= tuple(['0', '6', '0']) def init_logging(): LOG.setLevel(logging.INFO) LOG.addHandler(logging.StreamHandler()) fh = logging.FileHandler("/var/log/heat-provision.log") os.chmod(fh.baseFilename, int("600", 8)) LOG.addHandler(fh) def call(args): class LogStream(object): def write(self, data): LOG.info(data) LOG.info('%s\n', ' '.join(args)) # noqa try: ls = LogStream() p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) data = p.communicate() if data: for x in data: ls.write(x) except OSError: ex_type, ex, tb = sys.exc_info() if ex.errno == errno.ENOEXEC: LOG.error('Userdata empty or not executable: %s', ex) return os.EX_OK else: LOG.error('OS error running userdata: %s', ex) return os.EX_OSERR except Exception: ex_type, ex, tb = sys.exc_info() LOG.error('Unknown error running userdata: %s', ex) return os.EX_SOFTWARE return p.returncode def main(): try: if not chk_ci_version(): # pre 0.6.0 - user data executed via cloudinit, not this helper LOG.error('Unable to log provisioning, need a newer version of ' 'cloud-init') return -1 except Exception: LOG.warning('Can not determine the version of cloud-init. It is ' 'possible to get errors while logging provisioning.') userdata_path = os.path.join(VAR_PATH, 'cfn-userdata') os.chmod(userdata_path, int("700", 8)) LOG.info('Provision began: %s', datetime.datetime.now()) returncode = call([userdata_path]) LOG.info('Provision done: %s', datetime.datetime.now()) if returncode: return returncode if __name__ == '__main__': init_logging() code = main() if code: LOG.error('Provision failed with exit code %s', code) sys.exit(code) provision_log = os.path.join(VAR_PATH, 'provision-finished') # touch the file so it is timestamped with when finished with open(provision_log, 'a'): os.utime(provision_log, None) heat-10.0.2/heat/version.py0000666000175000017500000000120313343562340015516 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pbr.version version_info = pbr.version.VersionInfo('heat') heat-10.0.2/heat/__init__.py0000666000175000017500000000124013343562337015577 0ustar zuulzuul00000000000000# # Copyright 2013 IBM Corp. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import oslo_i18n as i18n i18n.enable_lazy() heat-10.0.2/heat/policies/0000775000175000017500000000000013343562672015300 5ustar zuulzuul00000000000000heat-10.0.2/heat/policies/actions.py0000666000175000017500000000225113343562340017304 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from heat.policies import base POLICY_ROOT = 'actions:%s' actions_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'action', check_str=base.RULE_DENY_STACK_USER, description='Performs non-lifecycle operations on the stack ' '(Snapshot, Resume, Cancel update, or check stack resources).', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'actions', 'method': 'POST' } ] ) ] def list_rules(): return actions_policies heat-10.0.2/heat/policies/base.py0000666000175000017500000000320713343562340016560 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
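The actions rules above (together with the shared named rules defined in base.py, which follows) are plain oslo.policy defaults. A hedged sketch of how they are consumed at runtime; the Enforcer API is standard oslo.policy, while the credential dicts below are illustrative:

from oslo_config import cfg
from oslo_policy import policy as oslo_policy

from heat.policies import actions
from heat.policies import base

enforcer = oslo_policy.Enforcer(cfg.CONF)
# Register the named base rules (deny_stack_user etc.) and the actions
# rules so 'actions:action' resolves even without a policy file on disk.
enforcer.register_defaults(base.list_rules())
enforcer.register_defaults(actions.list_rules())

# deny_stack_user passes for any caller lacking the heat_stack_user role.
assert enforcer.enforce('actions:action', {}, {'roles': ['member']})
assert not enforcer.enforce('actions:action', {},
                            {'roles': ['heat_stack_user']})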
from oslo_policy import policy RULE_CONTEXT_IS_ADMIN = 'rule:context_is_admin' RULE_PROJECT_ADMIN = 'rule:project_admin' RULE_DENY_STACK_USER = 'rule:deny_stack_user' RULE_DENY_EVERYBODY = 'rule:deny_everybody' RULE_ALLOW_EVERYBODY = 'rule:allow_everybody' rules = [ policy.RuleDefault( name="context_is_admin", check_str="role:admin and is_admin_project:True", description="Decides what is required for the 'is_admin:True' check " "to succeed."), policy.RuleDefault( name="project_admin", check_str="role:admin", description="Default rule for project admin."), policy.RuleDefault( name="deny_stack_user", check_str="not role:heat_stack_user", description="Default rule for deny stack user."), policy.RuleDefault( name="deny_everybody", check_str="!", description="Default rule for deny everybody."), policy.RuleDefault( name="allow_everybody", check_str="", description="Default rule for allow everybody.") ] def list_rules(): return rules heat-10.0.2/heat/policies/build_info.py0000666000175000017500000000204713343562340017761 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from heat.policies import base POLICY_ROOT = 'build_info:%s' build_info_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'build_info', check_str=base.RULE_DENY_STACK_USER, description='Show build information.', operations=[ { 'path': '/v1/{tenant_id}/build_info', 'method': 'GET' } ] ) ] def list_rules(): return build_info_policies heat-10.0.2/heat/policies/resource.py0000666000175000017500000000512313343562340017474 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
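Unlike plain RuleDefault, the DocumentedRuleDefault entries used throughout these policy modules also carry a description and the API operations they guard, which is what policy documentation tooling renders. A small introspection sketch, using build_info above as a convenient example:

from heat.policies import build_info

for rule in build_info.list_rules():
    print('%s  (default: %s)' % (rule.name, rule.check_str))
    print('  %s' % rule.description)
    for op in rule.operations:
        # Each operation pairs an HTTP method with a URI path template.
        print('  %(method)s %(path)s' % op)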
from oslo_policy import policy from heat.policies import base POLICY_ROOT = 'resource:%s' resource_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.RULE_DENY_STACK_USER, description='List resources.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'resources', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'metadata', check_str=base.RULE_ALLOW_EVERYBODY, description='Show resource metadata.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'resources/{resource_name}/metadata', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'signal', check_str=base.RULE_ALLOW_EVERYBODY, description='Signal resource.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'resources/{resource_name}/signal', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'mark_unhealthy', check_str=base.RULE_DENY_STACK_USER, description='Mark resource as unhealthy.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'resources/{resource_name_or_physical_id}', 'method': 'PATCH' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.RULE_DENY_STACK_USER, description='Show resource.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'resources/{resource_name}', 'method': 'GET' } ] ) ] def list_rules(): return resource_policies heat-10.0.2/heat/policies/__init__.py0000666000175000017500000000261413343562340017406 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools from heat.policies import actions from heat.policies import base from heat.policies import build_info from heat.policies import cloudformation from heat.policies import events from heat.policies import resource from heat.policies import resource_types from heat.policies import service from heat.policies import software_configs from heat.policies import software_deployments from heat.policies import stacks def list_rules(): return itertools.chain( base.list_rules(), actions.list_rules(), build_info.list_rules(), cloudformation.list_rules(), events.list_rules(), resource.list_rules(), resource_types.list_rules(), service.list_rules(), software_configs.list_rules(), software_deployments.list_rules(), stacks.list_rules(), ) heat-10.0.2/heat/policies/stacks.py0000666000175000017500000002557213343562340017147 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
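The policies/__init__.py module shown above chains every submodule's rules into one iterable; that aggregate is what oslo.policy tooling such as the sample-file generator consumes, presumably via an entry point referencing heat.policies:list_rules. A quick sanity-check sketch over the aggregate, using only rule names that appear in the modules above:

from heat import policies

names = {rule.name for rule in policies.list_rules()}
# Rules from all the policy modules land in a single flat namespace.
assert 'actions:action' in names
assert 'build_info:build_info' in names
assert 'resource:index' in names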
from oslo_policy import policy from heat.policies import base POLICY_ROOT = 'stacks:%s' stacks_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'abandon', check_str=base.RULE_DENY_STACK_USER, description='Abandon stack.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'abandon', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.RULE_DENY_STACK_USER, description='Create stack.', operations=[ { 'path': '/v1/{tenant_id}/stacks', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.RULE_DENY_STACK_USER, description='Delete stack.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'detail', check_str=base.RULE_DENY_STACK_USER, description='List stacks in detail.', operations=[ { 'path': '/v1/{tenant_id}/stacks', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'export', check_str=base.RULE_DENY_STACK_USER, description='Export stack.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'export', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'generate_template', check_str=base.RULE_DENY_STACK_USER, description='Generate stack template.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'template', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'global_index', check_str=base.RULE_DENY_EVERYBODY, description='List stacks globally.', operations=[ { 'path': '/v1/{tenant_id}/stacks', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.RULE_DENY_STACK_USER, description='List stacks.', operations=[ { 'path': '/v1/{tenant_id}/stacks', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'list_resource_types', check_str=base.RULE_DENY_STACK_USER, description='List resource types.', operations=[ { 'path': '/v1/{tenant_id}/resource_types', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'list_template_versions', check_str=base.RULE_DENY_STACK_USER, description='List template versions.', operations=[ { 'path': '/v1/{tenant_id}/template_versions', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'list_template_functions', check_str=base.RULE_DENY_STACK_USER, description='List template functions.', operations=[ { 'path': '/v1/{tenant_id}/template_versions/' '{template_version}/functions', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'lookup', check_str=base.RULE_ALLOW_EVERYBODY, description='Find stack.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_identity}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'preview', check_str=base.RULE_DENY_STACK_USER, description='Preview stack.', operations=[ { 'path': '/v1/{tenant_id}/stacks/preview', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'resource_schema', check_str=base.RULE_DENY_STACK_USER, description='Show resource type schema.', operations=[ { 'path': '/v1/{tenant_id}/resource_types/{type_name}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.RULE_DENY_STACK_USER, description='Show stack.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_identity}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'template', 
check_str=base.RULE_DENY_STACK_USER, description='Get stack template.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'template', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'environment', check_str=base.RULE_DENY_STACK_USER, description='Get stack environment.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'environment', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'files', check_str=base.RULE_DENY_STACK_USER, description='Get stack files.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'files', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update', check_str=base.RULE_DENY_STACK_USER, description='Update stack.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}', 'method': 'PUT' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update_patch', check_str=base.RULE_DENY_STACK_USER, description='Update stack (PATCH).', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}', 'method': 'PATCH' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'preview_update', check_str=base.RULE_DENY_STACK_USER, description='Preview update stack.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'preview', 'method': 'PUT' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'preview_update_patch', check_str=base.RULE_DENY_STACK_USER, description='Preview update stack (PATCH).', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'preview', 'method': 'PATCH' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'validate_template', check_str=base.RULE_DENY_STACK_USER, description='Validate template.', operations=[ { 'path': '/v1/{tenant_id}/validate', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'snapshot', check_str=base.RULE_DENY_STACK_USER, description='Snapshot Stack.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'snapshots', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show_snapshot', check_str=base.RULE_DENY_STACK_USER, description='Show snapshot.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'snapshots/{snapshot_id}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete_snapshot', check_str=base.RULE_DENY_STACK_USER, description='Delete snapshot.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'snapshots/{snapshot_id}', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'list_snapshots', check_str=base.RULE_DENY_STACK_USER, description='List snapshots.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'snapshots', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'restore_snapshot', check_str=base.RULE_DENY_STACK_USER, description='Restore snapshot.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'snapshots/{snapshot_id}/restore', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'list_outputs', check_str=base.RULE_DENY_STACK_USER, description='List outputs.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'outputs', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show_output', check_str=base.RULE_DENY_STACK_USER, description='Show outputs.', operations=[ { 'path': 
'/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'outputs/{output_key}', 'method': 'GET' } ] ) ] def list_rules(): return stacks_policies heat-10.0.2/heat/policies/service.py0000666000175000017500000000151213343562340017303 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from heat.policies import base POLICY_ROOT = 'service:%s' service_policies = [ policy.RuleDefault( name=POLICY_ROOT % 'index', check_str=base.RULE_CONTEXT_IS_ADMIN) ] def list_rules(): return service_policies heat-10.0.2/heat/policies/cloudformation.py0000666000175000017500000000450513343562340020675 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from heat.policies import base # These policies are for AWS CloudFormation-like APIs, so we won't list out # the URI paths in rules. POLICY_ROOT = 'cloudformation:%s' cloudformation_policies = [ policy.RuleDefault( name=POLICY_ROOT % 'ListStacks', check_str=base.RULE_DENY_STACK_USER), policy.RuleDefault( name=POLICY_ROOT % 'CreateStack', check_str=base.RULE_DENY_STACK_USER), policy.RuleDefault( name=POLICY_ROOT % 'DescribeStacks', check_str=base.RULE_DENY_STACK_USER), policy.RuleDefault( name=POLICY_ROOT % 'DeleteStack', check_str=base.RULE_DENY_STACK_USER), policy.RuleDefault( name=POLICY_ROOT % 'UpdateStack', check_str=base.RULE_DENY_STACK_USER), policy.RuleDefault( name=POLICY_ROOT % 'CancelUpdateStack', check_str=base.RULE_DENY_STACK_USER), policy.RuleDefault( name=POLICY_ROOT % 'DescribeStackEvents', check_str=base.RULE_DENY_STACK_USER), policy.RuleDefault( name=POLICY_ROOT % 'ValidateTemplate', check_str=base.RULE_DENY_STACK_USER), policy.RuleDefault( name=POLICY_ROOT % 'GetTemplate', check_str=base.RULE_DENY_STACK_USER), policy.RuleDefault( name=POLICY_ROOT % 'EstimateTemplateCost', check_str=base.RULE_DENY_STACK_USER), policy.RuleDefault( name=POLICY_ROOT % 'DescribeStackResource', check_str=base.RULE_ALLOW_EVERYBODY), policy.RuleDefault( name=POLICY_ROOT % 'DescribeStackResources', check_str=base.RULE_DENY_STACK_USER), policy.RuleDefault( name=POLICY_ROOT % 'ListStackResources', check_str=base.RULE_DENY_STACK_USER) ] def list_rules(): return cloudformation_policies heat-10.0.2/heat/policies/events.py0000666000175000017500000000267113343562340017156 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from heat.policies import base POLICY_ROOT = 'events:%s' events_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.RULE_DENY_STACK_USER, description='List events.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'events', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.RULE_DENY_STACK_USER, description='Show event.', operations=[ { 'path': '/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/' 'resources/{resource_name}/events/{event_id}', 'method': 'GET' } ] ) ] def list_rules(): return events_policies heat-10.0.2/heat/policies/software_configs.py0000666000175000017500000000447513343562340021220 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from heat.policies import base POLICY_ROOT = 'software_configs:%s' software_configs_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'global_index', check_str=base.RULE_DENY_EVERYBODY, description='List configs globally.', operations=[ { 'path': '/v1/{tenant_id}/software_configs', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.RULE_DENY_STACK_USER, description='List configs.', operations=[ { 'path': '/v1/{tenant_id}/software_configs', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.RULE_DENY_STACK_USER, description='Create config.', operations=[ { 'path': '/v1/{tenant_id}/software_configs', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.RULE_DENY_STACK_USER, description='Show config details.', operations=[ { 'path': '/v1/{tenant_id}/software_configs/{config_id}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.RULE_DENY_STACK_USER, description='Delete config.', operations=[ { 'path': '/v1/{tenant_id}/software_configs/{config_id}', 'method': 'DELETE' } ] ) ] def list_rules(): return software_configs_policies heat-10.0.2/heat/policies/resource_types.py0000666000175000017500000000500513343562340020717 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from heat.policies import base POLICY_ROOT = 'resource_types:%s' resource_types_policies = [ policy.RuleDefault( name=POLICY_ROOT % 'OS::Nova::Flavor', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Cinder::EncryptedVolumeType', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Cinder::VolumeType', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Cinder::Quota', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Neutron::Quota', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Nova::Quota', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Manila::ShareType', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Neutron::ProviderNet', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Neutron::QoSPolicy', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Neutron::QoSBandwidthLimitRule', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Neutron::Segment', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Nova::HostAggregate', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Cinder::QoSSpecs', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Cinder::QoSAssociation', check_str=base.RULE_PROJECT_ADMIN), policy.RuleDefault( name=POLICY_ROOT % 'OS::Keystone::*', check_str=base.RULE_PROJECT_ADMIN) ] def list_rules(): return resource_types_policies heat-10.0.2/heat/policies/software_deployments.py0000666000175000017500000000536713343562340022134 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
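# NOTE: the helper below is an editorial sketch, not part of the original
# module. It illustrates how rule defaults like the ones defined in this
# file can be registered with an oslo.policy Enforcer and then checked;
# the credentials dict is illustrative, and it assumes heat.policies.base
# exposes its own defaults via list_rules().
def _example_enforce_deployments_index(creds):
    from oslo_config import cfg
    from oslo_policy import policy as base_policy
    enforcer = base_policy.Enforcer(cfg.CONF)
    # Register both this module's rules and the base rules they
    # reference (e.g. rule:deny_stack_user).
    enforcer.register_defaults(base.list_rules())
    enforcer.register_defaults(software_deployments_policies)
    # With do_raise=False, enforce() returns True/False instead of raising.
    return enforcer.enforce('software_deployments:index', {}, creds,
                            do_raise=False)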
from oslo_policy import policy from heat.policies import base POLICY_ROOT = 'software_deployments:%s' software_deployments_policies = [ policy.DocumentedRuleDefault( name=POLICY_ROOT % 'index', check_str=base.RULE_DENY_STACK_USER, description='List deployments.', operations=[ { 'path': '/v1/{tenant_id}/software_deployments', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'create', check_str=base.RULE_DENY_STACK_USER, description='Create deployment.', operations=[ { 'path': '/v1/{tenant_id}/software_deployments', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'show', check_str=base.RULE_DENY_STACK_USER, description='Show deployment details.', operations=[ { 'path': '/v1/{tenant_id}/software_deployments/{deployment_id}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'update', check_str=base.RULE_DENY_STACK_USER, description='Update deployment.', operations=[ { 'path': '/v1/{tenant_id}/software_deployments/{deployment_id}', 'method': 'PUT' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'delete', check_str=base.RULE_DENY_STACK_USER, description='Delete deployment.', operations=[ { 'path': '/v1/{tenant_id}/software_deployments/{deployment_id}', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=POLICY_ROOT % 'metadata', check_str=base.RULE_ALLOW_EVERYBODY, description='Show server configuration metadata.', operations=[ { 'path': '/v1/{tenant_id}/software_deployments/metadata/' '{server_id}', 'method': 'GET' } ] ) ] def list_rules(): return software_deployments_policies heat-10.0.2/heat/hacking/0000775000175000017500000000000013343562672015075 5ustar zuulzuul00000000000000heat-10.0.2/heat/hacking/__init__.py0000666000175000017500000000000013343562340017166 0ustar zuulzuul00000000000000heat-10.0.2/heat/hacking/checks.py0000666000175000017500000000411213343562340016677 0ustar zuulzuul00000000000000# Copyright (c) 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re """ Guidelines for writing new hacking checks - Use only for Heat specific tests. OpenStack general tests should be submitted to the common 'hacking' module. - Pick numbers in the range H3xx. Find the current test with the highest allocated number and then pick the next value. - Keep the test method code in the source file ordered based on the Heat3xx value. 
- List the new rule in the top level HACKING.rst file - Add test cases for each new rule to heat/tests/test_hacking.py """ def no_log_warn(logical_line): """Disallow 'LOG.warn(' https://bugs.launchpad.net/tempest/+bug/1508442 Heat301 """ if logical_line.startswith('LOG.warn('): yield(0, 'Heat301 Use LOG.warning() rather than LOG.warn()') def check_python3_no_iteritems(logical_line): msg = ("Heat302: Use dict.items() instead of dict.iteritems().") if re.search(r".*\.iteritems\(\)", logical_line): yield(0, msg) def check_python3_no_iterkeys(logical_line): msg = ("Heat303: Use dict.keys() instead of dict.iterkeys().") if re.search(r".*\.iterkeys\(\)", logical_line): yield(0, msg) def check_python3_no_itervalues(logical_line): msg = ("Heat304: Use dict.values() instead of dict.itervalues().") if re.search(r".*\.itervalues\(\)", logical_line): yield(0, msg) def factory(register): register(no_log_warn) register(check_python3_no_iteritems) register(check_python3_no_iterkeys) register(check_python3_no_itervalues) heat-10.0.2/heat/common/0000775000175000017500000000000013343562672014761 5ustar zuulzuul00000000000000heat-10.0.2/heat/common/short_id.py0000666000175000017500000000372213343562337017152 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Utilities for creating short ID strings based on a random UUID. The IDs each comprise 12 (lower-case) alphanumeric characters. """ import base64 import uuid import six from heat.common.i18n import _ def _to_byte_string(value, num_bits): """Convert an integer to a big-endian string of bytes with padding. Padding is added at the end (i.e. after the least-significant bit) if required. """ shifts = six.moves.xrange(num_bits - 8, -8, -8) def byte_at(off): return (value >> off if off >= 0 else value << -off) & 0xff return b''.join(six.int2byte(byte_at(offset)) for offset in shifts) def get_id(source_uuid): """Derive a short (12 character) id from a random UUID. The supplied UUID must be a version 4 UUID object. """ if isinstance(source_uuid, six.string_types): source_uuid = uuid.UUID(source_uuid) if source_uuid.version != 4: raise ValueError(_('Invalid UUID version (%d)') % source_uuid.version) # The "time" field of a v4 UUID contains 60 random bits # (see RFC4122, Section 4.4) random_bytes = _to_byte_string(source_uuid.time, 60) # The first 12 bytes (= 60 bits) of base32-encoded output is our data encoded_bytes = base64.b32encode(random_bytes)[:12] return encoded_bytes.decode('ascii').lower() def generate_id(): """Generate a short (12 character), random id.""" return get_id(uuid.uuid4()) heat-10.0.2/heat/common/messaging.py0000666000175000017500000001233213343562337017311 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2013 eNovance # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import eventlet from oslo_config import cfg import oslo_messaging from oslo_messaging.rpc import dispatcher from oslo_serialization import jsonutils from osprofiler import profiler from heat.common import context TRANSPORT = None NOTIFICATIONS_TRANSPORT = None NOTIFIER = None class RequestContextSerializer(oslo_messaging.Serializer): def __init__(self, base): self._base = base def serialize_entity(self, ctxt, entity): if not self._base: return entity return self._base.serialize_entity(ctxt, entity) def deserialize_entity(self, ctxt, entity): if not self._base: return entity return self._base.deserialize_entity(ctxt, entity) @staticmethod def serialize_context(ctxt): _context = ctxt.to_dict() prof = profiler.get() if prof: trace_info = { "hmac_key": prof.hmac_key, "base_id": prof.get_base_id(), "parent_id": prof.get_id() } _context.update({"trace_info": trace_info}) return _context @staticmethod def deserialize_context(ctxt): trace_info = ctxt.pop("trace_info", None) if trace_info: profiler.init(**trace_info) return context.RequestContext.from_dict(ctxt) class JsonPayloadSerializer(oslo_messaging.NoOpSerializer): @classmethod def serialize_entity(cls, context, entity): return jsonutils.to_primitive(entity, convert_instances=True) def get_specific_transport(url, optional, exmods, is_for_notifications=False): try: if is_for_notifications: return oslo_messaging.get_notification_transport( cfg.CONF, url, allowed_remote_exmods=exmods) else: return oslo_messaging.get_rpc_transport( cfg.CONF, url, allowed_remote_exmods=exmods) except oslo_messaging.InvalidTransportURL as e: if not optional or e.url: # NOTE(sileht): oslo_messaging is configured but unloadable # so reraise the exception raise else: return None def setup_transports(url, optional): global TRANSPORT, NOTIFICATIONS_TRANSPORT oslo_messaging.set_transport_defaults('heat') exmods = ['heat.common.exception'] TRANSPORT = get_specific_transport(url, optional, exmods) NOTIFICATIONS_TRANSPORT = get_specific_transport(url, optional, exmods, is_for_notifications=True) def setup(url=None, optional=False): """Initialise the oslo_messaging layer.""" global NOTIFIER if url and url.startswith("fake://"): # NOTE(sileht): oslo_messaging fake driver uses time.sleep # for task switch, so we need to monkey_patch it eventlet.monkey_patch(time=True) if not TRANSPORT or not NOTIFICATIONS_TRANSPORT: setup_transports(url, optional) # In the fake driver, make the dict of exchanges local to each exchange # manager, instead of using the shared class attribute. Doing otherwise # breaks the unit tests. 
if url and url.startswith("fake://"): TRANSPORT._driver._exchange_manager._exchanges = {} if not NOTIFIER and NOTIFICATIONS_TRANSPORT: serializer = RequestContextSerializer(JsonPayloadSerializer()) NOTIFIER = oslo_messaging.Notifier(NOTIFICATIONS_TRANSPORT, serializer=serializer) def cleanup(): """Cleanup the oslo_messaging layer.""" global TRANSPORT, NOTIFICATIONS_TRANSPORT, NOTIFIER if TRANSPORT: TRANSPORT.cleanup() NOTIFICATIONS_TRANSPORT.cleanup() TRANSPORT = NOTIFICATIONS_TRANSPORT = NOTIFIER = None def get_rpc_server(target, endpoint): """Return a configured oslo_messaging rpc server.""" serializer = RequestContextSerializer(JsonPayloadSerializer()) access_policy = dispatcher.DefaultRPCAccessPolicy return oslo_messaging.get_rpc_server(TRANSPORT, target, [endpoint], executor='eventlet', serializer=serializer, access_policy=access_policy) def get_rpc_client(**kwargs): """Return a configured oslo_messaging RPCClient.""" target = oslo_messaging.Target(**kwargs) serializer = RequestContextSerializer(JsonPayloadSerializer()) return oslo_messaging.RPCClient(TRANSPORT, target, serializer=serializer) def get_notifier(publisher_id): """Return a configured oslo_messaging notifier.""" return NOTIFIER.prepare(publisher_id=publisher_id) heat-10.0.2/heat/common/context.py0000666000175000017500000003467213343562337017033 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
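# NOTE: the helper below is an editorial sketch, not part of the original
# module. It demonstrates the to_dict()/from_dict() round-trip implemented
# by RequestContext below; all field values are illustrative.
def _example_context_roundtrip():
    ctx = RequestContext(username='demo', project_name='demo-project',
                         roles=['member'], is_admin=False)
    serialized = ctx.to_dict()
    # from_dict() rebuilds an equivalent context, e.g. on the receiving
    # side of an RPC call (see RequestContextSerializer in messaging.py).
    return RequestContext.from_dict(serialized)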
from keystoneauth1 import access from keystoneauth1.identity import access as access_plugin from keystoneauth1.identity import generic from keystoneauth1 import loading as ks_loading from keystoneauth1 import session from keystoneauth1 import token_endpoint from oslo_config import cfg from oslo_context import context from oslo_log import log as logging import oslo_messaging from oslo_middleware import request_id as oslo_request_id from oslo_utils import importutils import six from heat.common import config from heat.common import endpoint_utils from heat.common import exception from heat.common import policy from heat.common import wsgi from heat.db.sqlalchemy import api as db_api from heat.engine import clients LOG = logging.getLogger(__name__) # Note, we yield the options via list_opts to enable generation of the # sample heat.conf, but we don't register these options directly via # cfg.CONF.register*, it's done via ks_loading.register_auth_conf_options # Note, only auth_type = v3password is expected to work, example config: # [trustee] # auth_type = password # auth_url = http://192.168.1.2:35357 # username = heat # password = password # user_domain_id = default PASSWORD_PLUGIN = 'password' TRUSTEE_CONF_GROUP = 'trustee' ks_loading.register_auth_conf_options(cfg.CONF, TRUSTEE_CONF_GROUP) def list_opts(): trustee_opts = ks_loading.get_auth_common_conf_options() trustee_opts.extend(ks_loading.get_auth_plugin_conf_options( PASSWORD_PLUGIN)) yield TRUSTEE_CONF_GROUP, trustee_opts def _moved_attr(new_name): def getter(self): return getattr(self, new_name) def setter(self, value): setattr(self, new_name, value) return property(getter, setter) class RequestContext(context.RequestContext): """Stores information about the security context under which the user accesses the system, as well as additional request information. """ def __init__(self, username=None, password=None, aws_creds=None, auth_url=None, roles=None, is_admin=None, read_only=False, show_deleted=False, overwrite=True, trust_id=None, trustor_user_id=None, request_id=None, auth_token_info=None, region_name=None, auth_plugin=None, trusts_auth_plugin=None, user_domain_id=None, project_domain_id=None, project_name=None, **kwargs): """Initialisation of the request context. :param overwrite: Set to False to ensure that the greenthread local copy of the index is not overwritten. 
""" if user_domain_id: kwargs['user_domain'] = user_domain_id if project_domain_id: kwargs['project_domain'] = project_domain_id super(RequestContext, self).__init__(is_admin=is_admin, read_only=read_only, show_deleted=show_deleted, request_id=request_id, roles=roles, overwrite=overwrite, **kwargs) self.username = username self.password = password self.region_name = region_name self.aws_creds = aws_creds self.project_name = project_name self.auth_token_info = auth_token_info self.auth_url = auth_url self._session = None self._clients = None self._keystone_session = session.Session( **config.get_ssl_options('keystone')) self.trust_id = trust_id self.trustor_user_id = trustor_user_id self.policy = policy.get_enforcer() self._auth_plugin = auth_plugin self._trusts_auth_plugin = trusts_auth_plugin if is_admin is None: self.is_admin = self.policy.check_is_admin(self) else: self.is_admin = is_admin # context scoped cache dict where the key is a class of the type of # object being cached and the value is the cache implementation class self._object_cache = {} def cache(self, cache_cls): cache = self._object_cache.get(cache_cls) if not cache: cache = cache_cls() self._object_cache[cache_cls] = cache return cache user_id = _moved_attr('user') tenant_id = _moved_attr('tenant') @property def session(self): if self._session is None: self._session = db_api.get_session() return self._session @property def keystone_session(self): if not self._keystone_session.auth: self._keystone_session.auth = self.auth_plugin return self._keystone_session @property def clients(self): if self._clients is None: self._clients = clients.Clients(self) return self._clients def to_dict(self): user_idt = u'{user} {tenant}'.format(user=self.user_id or '-', tenant=self.tenant_id or '-') return {'auth_token': self.auth_token, 'username': self.username, 'user_id': self.user_id, 'password': self.password, 'aws_creds': self.aws_creds, 'tenant': self.project_name, 'tenant_id': self.tenant_id, 'trust_id': self.trust_id, 'trustor_user_id': self.trustor_user_id, 'auth_token_info': self.auth_token_info, 'auth_url': self.auth_url, 'roles': self.roles, 'is_admin': self.is_admin, 'user': self.username, 'request_id': self.request_id, 'global_request_id': self.global_request_id, 'show_deleted': self.show_deleted, 'region_name': self.region_name, 'user_identity': user_idt, 'user_domain': self.user_domain, 'project_domain': self.project_domain} @classmethod def from_dict(cls, values): return cls( auth_token=values.get('auth_token'), username=values.get('username'), user=values.get('user_id'), password=values.get('password'), aws_creds=values.get('aws_creds'), project_name=values.get('tenant'), tenant=values.get('tenant_id'), trust_id=values.get('trust_id'), trustor_user_id=values.get('trustor_user_id'), auth_token_info=values.get('auth_token_info'), auth_url=values.get('auth_url'), roles=values.get('roles'), is_admin=values.get('is_admin'), request_id=values.get('request_id'), show_deleted=values.get('show_deleted', False), region_name=values.get('region_name'), user_domain_id=values.get('user_domain'), project_domain_id=values.get('project_domain') ) def to_policy_values(self): policy = super(RequestContext, self).to_policy_values() # NOTE(jamielennox): These are deprecated values passed to oslo.policy # for enforcement. They shouldn't be needed as the base class defines # what should be used when writing policy but are maintained for # compatibility. 
policy['user'] = self.user_id policy['tenant'] = self.tenant_id policy['is_admin'] = self.is_admin policy['auth_token_info'] = self.auth_token_info return policy @property def keystone_v3_endpoint(self): if self.auth_url: return self.auth_url.replace('v2.0', 'v3') else: auth_uri = endpoint_utils.get_auth_uri() if auth_uri: return auth_uri else: LOG.error('Keystone API endpoint not provided. Set ' 'auth_uri in section [clients_keystone] ' 'of the configuration file.') raise exception.AuthorizationFailure() @property def trusts_auth_plugin(self): if not self._trusts_auth_plugin: self._trusts_auth_plugin = ks_loading.load_auth_from_conf_options( cfg.CONF, TRUSTEE_CONF_GROUP, trust_id=self.trust_id) if not self._trusts_auth_plugin: LOG.error('Please add the trustee credentials you need ' 'to the %s section of your heat.conf file.', TRUSTEE_CONF_GROUP) raise exception.AuthorizationFailure() return self._trusts_auth_plugin def _create_auth_plugin(self): if self.auth_token_info: access_info = access.create(body=self.auth_token_info, auth_token=self.auth_token) return access_plugin.AccessInfoPlugin( auth_ref=access_info, auth_url=self.keystone_v3_endpoint) if self.password: return generic.Password(username=self.username, password=self.password, project_id=self.tenant_id, user_domain_id=self.user_domain, auth_url=self.keystone_v3_endpoint) if self.auth_token: # FIXME(jamielennox): This is broken but consistent. If you # only have a token but don't load a service catalog then # url_for wont work. Stub with the keystone endpoint so at # least it might be right. return token_endpoint.Token(endpoint=self.keystone_v3_endpoint, token=self.auth_token) LOG.error("Keystone API connection failed, no password " "trust or auth_token!") raise exception.AuthorizationFailure() def reload_auth_plugin(self): self._auth_plugin = None @property def auth_plugin(self): if not self._auth_plugin: if self.trust_id: self._auth_plugin = self.trusts_auth_plugin else: self._auth_plugin = self._create_auth_plugin() return self._auth_plugin class StoredContext(RequestContext): def _load_keystone_data(self): self._keystone_loaded = True auth_ref = self.auth_plugin.get_access(self.keystone_session) self.roles = auth_ref.role_names self.user_domain = auth_ref.user_domain_id self.project_domain = auth_ref.project_domain_id @property def roles(self): if not getattr(self, '_keystone_loaded', False): self._load_keystone_data() return self._roles @roles.setter def roles(self, roles): self._roles = roles @property def user_domain(self): if not getattr(self, '_keystone_loaded', False): self._load_keystone_data() return self._user_domain_id @user_domain.setter def user_domain(self, user_domain): self._user_domain_id = user_domain @property def project_domain(self): if not getattr(self, '_keystone_loaded', False): self._load_keystone_data() return self._project_domain_id @project_domain.setter def project_domain(self, project_domain): self._project_domain_id = project_domain def get_admin_context(show_deleted=False): return RequestContext(is_admin=True, show_deleted=show_deleted) class ContextMiddleware(wsgi.Middleware): def __init__(self, app, conf, **local_conf): # Determine the context class to use self.ctxcls = RequestContext if 'context_class' in local_conf: self.ctxcls = importutils.import_class(local_conf['context_class']) super(ContextMiddleware, self).__init__(app) def process_request(self, req): """Constructs an appropriate context from extracted auth information. 
Extract any authentication information in the request and construct an appropriate context from it. """ headers = req.headers environ = req.environ username = None password = None aws_creds = None user_domain = None project_domain = None if headers.get('X-Auth-User') is not None: username = headers.get('X-Auth-User') password = headers.get('X-Auth-Key') elif headers.get('X-Auth-EC2-Creds') is not None: aws_creds = headers.get('X-Auth-EC2-Creds') if headers.get('X-User-Domain-Id') is not None: user_domain = headers.get('X-User-Domain-Id') if headers.get('X-Project-Domain-Id') is not None: project_domain = headers.get('X-Project-Domain-Id') project_name = headers.get('X-Project-Name') region_name = headers.get('X-Region-Name') auth_url = headers.get('X-Auth-Url') token_info = environ.get('keystone.token_info') auth_plugin = environ.get('keystone.token_auth') req_id = environ.get(oslo_request_id.ENV_REQUEST_ID) req.context = self.ctxcls.from_environ( environ, project_name=project_name, aws_creds=aws_creds, username=username, password=password, auth_url=auth_url, request_id=req_id, user_domain=user_domain, project_domain=project_domain, auth_token_info=token_info, region_name=region_name, auth_plugin=auth_plugin, ) def ContextMiddleware_filter_factory(global_conf, **local_conf): """Factory method for paste.deploy.""" conf = global_conf.copy() conf.update(local_conf) def filter(app): return ContextMiddleware(app, conf) return filter def request_context(func): @six.wraps(func) def wrapped(self, ctx, *args, **kwargs): try: return func(self, ctx, *args, **kwargs) except exception.HeatException: raise oslo_messaging.rpc.dispatcher.ExpectedException() return wrapped heat-10.0.2/heat/common/i18n.py0000666000175000017500000000275613343562351016120 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # It's based on oslo.i18n usage in OpenStack Keystone project and # recommendations from http://docs.openstack.org/developer/oslo.i18n/usage.html import six import oslo_i18n as i18n from oslo_utils import encodeutils _translators = i18n.TranslatorFactory(domain='heat') # The primary translation function using the well-known name "_" _ = _translators.primary def repr_wrapper(klass): """A decorator that defines __repr__ method under Python 2. Under Python 2 it will encode repr return value to str type. Under Python 3 it does nothing. """ if six.PY2: if '__repr__' not in klass.__dict__: raise ValueError("@repr_wrapper cannot be applied " "to %s because it doesn't define __repr__()." % klass.__name__) klass._repr = klass.__repr__ klass.__repr__ = lambda self: encodeutils.safe_encode(self._repr()) return klass heat-10.0.2/heat/common/grouputils.py0000666000175000017500000001577413343562351017562 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from heat.common import exception from heat.common.i18n import _ from heat.engine import status from heat.engine import template from heat.rpc import api as rpc_api class GroupInspector(object): """A class for returning data about a scaling group. All data is fetched over RPC, and the group's stack is never loaded into memory locally. Data is cached so it will be fetched only once. To refresh the data, create a new GroupInspector. """ def __init__(self, context, rpc_client, group_identity): """Initialise with a context, rpc_client, and stack identifier.""" self._context = context self._rpc_client = rpc_client self._identity = group_identity self._member_data = None self._template_data = None @classmethod def from_parent_resource(cls, parent_resource): """Create a GroupInspector from a parent resource. This is a convenience method to instantiate a GroupInspector from a Heat StackResource object. """ return cls(parent_resource.context, parent_resource.rpc_client(), parent_resource.nested_identifier()) def _get_member_data(self): if self._identity is None: return [] if self._member_data is None: rsrcs = self._rpc_client.list_stack_resources(self._context, dict(self._identity)) def sort_key(r): return (r[rpc_api.RES_STATUS] != status.ResourceStatus.FAILED, r[rpc_api.RES_CREATION_TIME], r[rpc_api.RES_NAME]) self._member_data = sorted(rsrcs, key=sort_key) return self._member_data def _members(self, include_failed): return (r for r in self._get_member_data() if (include_failed or r[rpc_api.RES_STATUS] != status.ResourceStatus.FAILED)) def size(self, include_failed): """Return the size of the group. If include_failed is False, only members not in a FAILED state will be counted. """ return sum(1 for m in self._members(include_failed)) def member_names(self, include_failed): """Return an iterator over the names of the group members If include_failed is False, only members not in a FAILED state will be included. """ return (m[rpc_api.RES_NAME] for m in self._members(include_failed)) def _get_template_data(self): if self._identity is None: return None if self._template_data is None: self._template_data = self._rpc_client.get_template(self._context, self._identity) return self._template_data def template(self): """Return a Template object representing the group's current template. Note that this does *not* include any environment data. """ data = self._get_template_data() if data is None: return None return template.Template(data) def get_size(group, include_failed=False): """Get number of member resources managed by the specified group. The size excludes failed members by default; set include_failed=True to get the total size. """ return GroupInspector.from_parent_resource(group).size(include_failed) def get_members(group, include_failed=False): """Get a list of member resources managed by the specified group. Sort the list of instances first by created_time then by name. If include_failed is set, failed members will be put first in the list sorted by created_time then by name. 
""" resources = [] if group.nested(): resources = [r for r in six.itervalues(group.nested()) if include_failed or r.status != r.FAILED] return sorted(resources, key=lambda r: (r.status != r.FAILED, r.created_time, r.name)) def get_member_refids(group, exclude=None): """Get a list of member resources managed by the specified group. The list of resources is sorted first by created_time then by name. """ members = get_members(group) if len(members) == 0: return [] if exclude is None: exclude = [] return [r.FnGetRefId() for r in members if r.FnGetRefId() not in exclude] def get_member_names(group): """Get a list of resource names of the resources in the specified group. Failed resources will be ignored. """ inspector = GroupInspector.from_parent_resource(group) return list(inspector.member_names(include_failed=False)) def get_resource(stack, resource_name, use_indices, key=None): nested_stack = stack.nested() if not nested_stack: return None try: if use_indices: return get_members(stack)[int(resource_name)] else: return nested_stack[resource_name] except (IndexError, KeyError): raise exception.NotFound(_("Member '%(mem)s' not found " "in group resource '%(grp)s'.") % {'mem': resource_name, 'grp': stack.name}) def get_rsrc_attr(stack, key, use_indices, resource_name, *attr_path): resource = get_resource(stack, resource_name, use_indices) if resource: return resource.FnGetAtt(*attr_path) def get_rsrc_id(stack, key, use_indices, resource_name): resource = get_resource(stack, resource_name, use_indices) if resource: return resource.FnGetRefId() def get_nested_attrs(stack, key, use_indices, *path): path = key.split(".", 2)[1:] + list(path) if len(path) > 1: return get_rsrc_attr(stack, key, use_indices, *path) else: return get_rsrc_id(stack, key, use_indices, *path) def get_member_definitions(group, include_failed=False): """Get member definitions in (name, ResourceDefinition) pair for group. The List is sorted first by created_time then by name. If include_failed is set, failed members will be put first in the List sorted by created_time then by name. """ inspector = GroupInspector.from_parent_resource(group) template = inspector.template() if template is None: return [] definitions = template.resource_definitions(None) return [(name, definitions[name]) for name in inspector.member_names(include_failed=include_failed) if name in definitions] heat-10.0.2/heat/common/auth_password.py0000666000175000017500000001034413343562337020220 0ustar zuulzuul00000000000000# # Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from keystoneauth1 import exceptions as keystone_exceptions from keystoneauth1 import session from webob import exc from heat.common import config from heat.common import context class KeystonePasswordAuthProtocol(object): """Middleware uses username and password to authenticate against Keystone. Alternative authentication middleware that uses username and password to authenticate against Keystone instead of validating existing auth token. 
The benefit is that you no longer require an admin/service token to authenticate users. """ def __init__(self, app, conf): self.app = app self.conf = conf self.session = session.Session(**config.get_ssl_options('keystone')) def __call__(self, env, start_response): """Authenticate incoming request.""" username = env.get('HTTP_X_AUTH_USER') password = env.get('HTTP_X_AUTH_KEY') # Determine tenant id from path. tenant = env.get('PATH_INFO').split('/')[1] auth_url = env.get('HTTP_X_AUTH_URL') user_domain_id = env.get('HTTP_X_USER_DOMAIN_ID') if not tenant: return self._reject_request(env, start_response, auth_url) try: ctx = context.RequestContext(username=username, password=password, tenant=tenant, auth_url=auth_url, user_domain_id=user_domain_id, is_admin=False) auth_ref = ctx.auth_plugin.get_access(self.session) except (keystone_exceptions.Unauthorized, keystone_exceptions.Forbidden, keystone_exceptions.NotFound, keystone_exceptions.AuthorizationFailure): return self._reject_request(env, start_response, auth_url) env.update(self._build_user_headers(auth_ref)) return self.app(env, start_response) def _reject_request(self, env, start_response, auth_url): """Redirect client to auth server.""" headers = [('WWW-Authenticate', "Keystone uri='%s'" % auth_url)] resp = exc.HTTPUnauthorized('Authentication required', headers) return resp(env, start_response) def _build_user_headers(self, token_info): """Build headers that represent authenticated user from auth token.""" if token_info.version == 'v3': project_id = token_info.project_id project_name = token_info.project_name else: project_id = token_info.tenant_id project_name = token_info.tenant_name user_id = token_info.user_id user_name = token_info.username roles = ','.join( [role for role in token_info.role_names]) service_catalog = token_info.service_catalog auth_token = token_info.auth_token user_domain_id = token_info.user_domain_id headers = { 'HTTP_X_IDENTITY_STATUS': 'Confirmed', 'HTTP_X_PROJECT_ID': project_id, 'HTTP_X_PROJECT_NAME': project_name, 'HTTP_X_USER_ID': user_id, 'HTTP_X_USER_NAME': user_name, 'HTTP_X_ROLES': roles, 'HTTP_X_SERVICE_CATALOG': service_catalog, 'HTTP_X_AUTH_TOKEN': auth_token, 'HTTP_X_USER_DOMAIN_ID': user_domain_id, } return headers def filter_factory(global_conf, **local_conf): """Returns a WSGI filter app for use with paste.deploy.""" conf = global_conf.copy() conf.update(local_conf) def auth_filter(app): return KeystonePasswordAuthProtocol(app, conf) return auth_filter heat-10.0.2/heat/common/netutils.py0000666000175000017500000000450613343562337017207 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import netaddr import re from heat.common.i18n import _ DNS_LABEL_MAX_LEN = 63 DNS_LABEL_REGEX = "[a-z0-9-]{1,%d}$" % DNS_LABEL_MAX_LEN FQDN_MAX_LEN = 255 def is_prefix_subset(orig_prefixes, new_prefixes): """Check whether orig_prefixes is a subset of new_prefixes. This takes valid prefix lists for orig_prefixes and new_prefixes, and returns True if orig_prefixes is a subset of new_prefixes. 
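For example, orig_prefixes=['10.0.0.0/24'] is a subset of new_prefixes=['10.0.0.0/16'], because every address covered by the original prefixes is also covered by the new ones.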
""" orig_set = netaddr.IPSet(orig_prefixes) new_set = netaddr.IPSet(new_prefixes) return orig_set.issubset(new_set) def validate_dns_format(data): if not data: return trimmed = data if not data.endswith('.') else data[:-1] if len(trimmed) > FQDN_MAX_LEN: raise ValueError( _("'%(data)s' exceeds the %(max_len)s character FQDN limit") % { 'data': trimmed, 'max_len': FQDN_MAX_LEN}) names = trimmed.split('.') for name in names: if not name: raise ValueError(_("Encountered an empty component.")) if name.endswith('-') or name.startswith('-'): raise ValueError( _("Name '%s' must not start or end with a hyphen.") % name) if not re.match(DNS_LABEL_REGEX, name): raise ValueError( _("Name '%(name)s' must be 1-%(max_len)s characters long, " "each of which can only be alphanumeric or " "a hyphen.") % {'name': name, 'max_len': DNS_LABEL_MAX_LEN}) # RFC 1123 hints that a Top Level Domain(TLD) can't be all numeric. # Last part is a TLD, if it's a FQDN. if (data.endswith('.') and len(names) > 1 and re.match("^[0-9]+$", names[-1])): raise ValueError(_("TLD '%s' must not be all numeric.") % names[-1]) heat-10.0.2/heat/common/lifecycle_plugin_utils.py0000666000175000017500000001057313343562337022076 0ustar zuulzuul00000000000000 # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Utility for fetching and running plug point implementation classes.""" from oslo_log import log as logging from heat.engine import resources LOG = logging.getLogger(__name__) pp_class_instances = None def get_plug_point_class_instances(): """Instances of classes that implements pre/post stack operation methods. Get list of instances of classes that (may) implement pre and post stack operation methods. The list of class instances is sorted using get_ordinal methods on the plug point classes. If class1.ordinal() < class2.ordinal(), then class1 will be before before class2 in the list. """ global pp_class_instances if pp_class_instances is None: pp_class_instances = [] pp_classes = [] try: slps = resources.global_env().get_stack_lifecycle_plugins() pp_classes = [cls for name, cls in slps] except Exception: LOG.exception("failed to get lifecycle plug point classes") for ppc in pp_classes: try: pp_class_instances.append(ppc()) except Exception: LOG.exception( "failed to instantiate stack lifecycle class %s", ppc) try: pp_class_instances = sorted(pp_class_instances, key=lambda ppci: ppci.get_ordinal()) except Exception: LOG.exception("failed to sort lifecycle plug point classes") return pp_class_instances def do_pre_ops(cnxt, stack, current_stack=None, action=None): """Call available pre-op methods sequentially. In order determined with get_ordinal(), with parameters context, stack, current_stack, action. On failure of any pre_op method, will call post-op methods corresponding to successful calls of pre-op methods. 
""" cinstances = get_plug_point_class_instances() if action is None: action = stack.action failure, failure_exception_message, success_count = _do_ops( cinstances, 'do_pre_op', cnxt, stack, current_stack, action, None) if failure: cinstances = cinstances[0:success_count] _do_ops(cinstances, 'do_post_op', cnxt, stack, current_stack, action, True) raise Exception(failure_exception_message) def do_post_ops(cnxt, stack, current_stack=None, action=None, is_stack_failure=False): """Call available post-op methods sequentially. In order determined with get_ordinal(), with parameters context, stack, current_stack, action, is_stack_failure. """ cinstances = get_plug_point_class_instances() if action is None: action = stack.action _do_ops(cinstances, 'do_post_op', cnxt, stack, current_stack, action, None) def _do_ops(cinstances, opname, cnxt, stack, current_stack=None, action=None, is_stack_failure=None): success_count = 0 failure = False failure_exception_message = None for ci in cinstances: op = getattr(ci, opname, None) if callable(op): try: if is_stack_failure is not None: op(cnxt, stack, current_stack, action, is_stack_failure) else: op(cnxt, stack, current_stack, action) success_count += 1 except Exception as ex: LOG.exception( "%(opname)s %(ci)s failed for %(a)s on %(sid)s", {'opname': opname, 'ci': type(ci), 'a': action, 'sid': stack.id}) failure = True failure_exception_message = ex.args[0] if ex.args else str(ex) break LOG.info("done with class=%(c)s, stackid=%(sid)s, action=%(a)s", {'c': type(ci), 'sid': stack.id, 'a': action}) return (failure, failure_exception_message, success_count) heat-10.0.2/heat/common/environment_util.py0000666000175000017500000001451713343562351020740 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections from oslo_serialization import jsonutils import six from heat.common import environment_format as env_fmt from heat.common import exception from heat.common.i18n import _ ALLOWED_PARAM_MERGE_STRATEGIES = (OVERWRITE, MERGE, DEEP_MERGE) = ( 'overwrite', 'merge', 'deep_merge') def get_param_merge_strategy(merge_strategies, param_key): if merge_strategies is None: return OVERWRITE env_default = merge_strategies.get('default', OVERWRITE) merge_strategy = merge_strategies.get(param_key, env_default) if merge_strategy in ALLOWED_PARAM_MERGE_STRATEGIES: return merge_strategy return env_default def merge_list(old, new): """merges lists and comma delimited lists.""" if not old: return new if isinstance(new, list): old.extend(new) return old else: return ','.join([old, new]) def merge_map(old, new, deep_merge=False): """Merge nested dictionaries.""" if not old: return new for k, v in new.items(): if v is not None: if not deep_merge: old[k] = v elif isinstance(v, collections.Mapping): old_v = old.get(k) old[k] = merge_map(old_v, v, deep_merge) if old_v else v elif (isinstance(v, collections.Sequence) and not isinstance(v, six.string_types)): old_v = old.get(k) old[k] = merge_list(old_v, v) if old_v else v elif isinstance(v, six.string_types): old[k] = ''.join([old.get(k, ''), v]) else: old[k] = v return old def parse_param(p_val, p_schema): try: if p_schema.type == p_schema.MAP: if not isinstance(p_val, six.string_types): p_val = jsonutils.dumps(p_val) if p_val: return jsonutils.loads(p_val) elif not isinstance(p_val, collections.Sequence): raise ValueError() except (ValueError, TypeError) as err: msg = _("Invalid parameter in environment %s.") % six.text_type(err) raise ValueError(msg) return p_val def merge_parameters(old, new, param_schemata, strategies_in_file, available_strategies, env_file): def param_merge(p_key, p_value, p_schema, deep_merge=False): p_type = p_schema.type p_value = parse_param(p_value, p_schema) if p_type == p_schema.MAP: old[p_key] = merge_map(old.get(p_key, {}), p_value, deep_merge) elif p_type == p_schema.LIST: old[p_key] = merge_list(old.get(p_key), p_value) elif p_type == p_schema.STRING: old[p_key] = ''.join([old.get(p_key, ''), p_value]) elif p_type == p_schema.NUMBER: old[p_key] = old.get(p_key, 0) + p_value else: raise exception.InvalidMergeStrategyForParam(strategy=MERGE, param=p_key) new_strategies = {} if not old: return new, new_strategies for key, value in new.items(): # if key not in param_schemata ignore it if key in param_schemata and value is not None: param_merge_strategy = get_param_merge_strategy( strategies_in_file, key) if key not in available_strategies: new_strategies[key] = param_merge_strategy elif param_merge_strategy != available_strategies[key]: raise exception.ConflictingMergeStrategyForParam( strategy=param_merge_strategy, param=key, env_file=env_file) if param_merge_strategy == DEEP_MERGE: param_merge(key, value, param_schemata[key], deep_merge=True) elif param_merge_strategy == MERGE: param_merge(key, value, param_schemata[key]) else: old[key] = value return old, new_strategies def merge_environments(environment_files, files, params, param_schemata): """Merges environment files into the stack input parameters. If a list of environment files have been specified, this call will pull the contents of each from the files dict, parse them as environments, and merge them into the stack input params. This behavior is the same as earlier versions of the Heat client that performed this params population client-side. 
:param environment_files: ordered names of the environment files found in the files dict :type environment_files: list or None :param files: mapping of stack filenames to contents :type files: dict :param params: parameters describing the stack :type dict: :param param_schemata: parameter schema dict :type param_schemata: dict """ if not environment_files: return available_strategies = {} for filename in environment_files: raw_env = files[filename] parsed_env = env_fmt.parse(raw_env) strategies_in_file = parsed_env.pop( env_fmt.PARAMETER_MERGE_STRATEGIES, {}) for section_key, section_value in parsed_env.items(): if section_value: if section_key in (env_fmt.PARAMETERS, env_fmt.PARAMETER_DEFAULTS): params[section_key], new_strategies = merge_parameters( params[section_key], section_value, param_schemata, strategies_in_file, available_strategies, filename) available_strategies.update(new_strategies) else: params[section_key] = merge_map(params[section_key], section_value) heat-10.0.2/heat/common/plugin_loader.py0000666000175000017500000000712213343562337020161 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Utilities to dynamically load plugin modules. Modules imported this way remain accessible to static imports, regardless of the order in which they are imported. For modules that are not part of an existing package tree, use create_subpackage() to dynamically create a package for them before loading them. """ import pkgutil import sys import types from oslo_log import log as logging import six LOG = logging.getLogger(__name__) def _module_name(*components): """Assemble a fully-qualified module name from its components.""" return '.'.join(components) def create_subpackage(path, parent_package_name, subpackage_name="plugins"): """Dynamically create a package into which to load plugins. This allows us to not include an __init__.py in the plugins directory. We must still create a package for plugins to go in, otherwise we get warning messages during import. This also provides a convenient place to store the path(s) to the plugins directory. """ package_name = _module_name(parent_package_name, subpackage_name) package = types.ModuleType(package_name) package.__path__ = ([path] if isinstance(path, six.string_types) else list(path)) sys.modules[package_name] = package return package def _import_module(importer, module_name, package): """Import a module dynamically into a package. :param importer: PEP302 Importer object (which knows the path to look in). :param module_name: the name of the module to import. :param package: the package to import the module into. 
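:returns: the imported module, or None if the importer could not find it.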
""" # Duplicate copies of modules are bad, so check if this has already been # imported statically if module_name in sys.modules: return sys.modules[module_name] loader = importer.find_module(module_name) if loader is None: return None module = loader.load_module(module_name) # Make this accessible through the parent package for static imports local_name = module_name.partition(package.__name__ + '.')[2] module_components = local_name.split('.') parent = six.moves.reduce(getattr, module_components[:-1], package) setattr(parent, module_components[-1], module) return module def load_modules(package, ignore_error=False): """Dynamically load all modules from a given package.""" path = package.__path__ pkg_prefix = package.__name__ + '.' for importer, module_name, is_package in pkgutil.walk_packages(path, pkg_prefix): # Skips tests or setup packages so as not to load tests in general # or setup.py during doc generation. if '.tests.' in module_name or module_name.endswith('.setup'): continue try: module = _import_module(importer, module_name, package) except ImportError: LOG.error('Failed to import module %s', module_name) if not ignore_error: raise else: if module is not None: yield module heat-10.0.2/heat/common/urlfetch.py0000666000175000017500000000557213343562351017154 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Utility for fetching a resource (e.g. a template) from a URL.""" from oslo_config import cfg from oslo_log import log as logging import requests from requests import exceptions from six.moves import urllib from heat.common import exception from heat.common.i18n import _ cfg.CONF.import_opt('max_template_size', 'heat.common.config') LOG = logging.getLogger(__name__) class URLFetchError(exception.Error, IOError): pass def get(url, allowed_schemes=('http', 'https')): """Get the data at the specified URL. The URL must use the http: or https: schemes. The file: scheme is also supported if you override the allowed_schemes argument. Raise an IOError if getting the data fails. """ LOG.info('Fetching data from %s', url) components = urllib.parse.urlparse(url) if components.scheme not in allowed_schemes: raise URLFetchError(_('Invalid URL scheme %s') % components.scheme) if components.scheme == 'file': try: return urllib.request.urlopen(url).read() except urllib.error.URLError as uex: raise URLFetchError(_('Failed to retrieve template: %s') % uex) try: resp = requests.get(url, stream=True) resp.raise_for_status() # We cannot use resp.text here because it would download the # entire file, and a large enough file would bring down the # engine. The 'Content-Length' header could be faked, so it's # necessary to download the content in chunks to until # max_template_size is reached. The chunk_size we use needs # to balance CPU-intensive string concatenation with accuracy # (eg. it's possible to fetch 1000 bytes greater than # max_template_size with a chunk_size of 1000). 
reader = resp.iter_content(chunk_size=1000) result = b"" for chunk in reader: result += chunk if len(result) > cfg.CONF.max_template_size: raise URLFetchError(_("Template exceeds maximum allowed size " "(%s bytes)") % cfg.CONF.max_template_size) return result except exceptions.RequestException as ex: LOG.info('Failed to retrieve template: %s', ex) raise URLFetchError(_('Failed to retrieve template from %s') % url) heat-10.0.2/heat/common/service_utils.py0000666000175000017500000000430513343562337020215 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import uuid from oslo_utils import timeutils from heat.rpc import listener_client SERVICE_KEYS = ( SERVICE_ID, SERVICE_HOST, SERVICE_HOSTNAME, SERVICE_BINARY, SERVICE_TOPIC, SERVICE_ENGINE_ID, SERVICE_REPORT_INTERVAL, SERVICE_CREATED_AT, SERVICE_UPDATED_AT, SERVICE_DELETED_AT, SERVICE_STATUS ) = ( 'id', 'host', 'hostname', 'binary', 'topic', 'engine_id', 'report_interval', 'created_at', 'updated_at', 'deleted_at', 'status' ) def format_service(service): if service is None: return status = 'down' if service.updated_at is not None: if ((timeutils.utcnow() - service.updated_at).total_seconds() <= service.report_interval): status = 'up' else: if ((timeutils.utcnow() - service.created_at).total_seconds() <= service.report_interval): status = 'up' result = { SERVICE_ID: service.id, SERVICE_BINARY: service.binary, SERVICE_ENGINE_ID: service.engine_id, SERVICE_HOST: service.host, SERVICE_HOSTNAME: service.hostname, SERVICE_TOPIC: service.topic, SERVICE_REPORT_INTERVAL: service.report_interval, SERVICE_CREATED_AT: service.created_at, SERVICE_UPDATED_AT: service.updated_at, SERVICE_DELETED_AT: service.deleted_at, SERVICE_STATUS: status } return result def engine_alive(context, engine_id): return listener_client.EngineListenerClient( engine_id).is_alive(context) def generate_engine_id(): return str(uuid.uuid4()) heat-10.0.2/heat/common/crypt.py0000666000175000017500000001507713343562351016502 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
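# NOTE: the helper below is an editorial sketch, not part of the original
# module. It shows the encrypt()/decrypt() round-trip implemented below;
# the key value is illustrative and must be 32 characters long.
def _example_crypt_roundtrip():
    key = 'x' * 32
    method, ciphertext = encrypt('secret', key)
    # encrypt() returns the name of the decryptor to use later
    # ('cryptography_decrypt_v1'); decrypt() dispatches on that name.
    return decrypt(method, ciphertext, key)  # -> 'secret'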
import base64 import sys from cryptography import fernet from cryptography.hazmat import backends from cryptography.hazmat.primitives.ciphers import algorithms from cryptography.hazmat.primitives.ciphers import Cipher from cryptography.hazmat.primitives.ciphers import modes from cryptography.hazmat.primitives import padding from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import encodeutils from heat.common import exception from heat.common.i18n import _ auth_opts = [ cfg.StrOpt('auth_encryption_key', secret=True, default='notgood but just long enough i t', help=_('Key used to encrypt authentication info in the ' 'database. Length of this key must be 32 characters.')) ] cfg.CONF.register_opts(auth_opts) class SymmetricCrypto(object): """Symmetric Key Crypto object. This class creates a Symmetric Key Crypto object that can be used to decrypt arbitrary data. Note: This is a reimplementation of the decryption algorithm from oslo-incubator, and is provided for backward compatibility. Once we have a db migration script available to re-encrypt using new encryption method as part of upgrade, this can be removed. :param enctype: Encryption Cipher name (default: AES) """ def __init__(self, enctype='AES'): self.algo = algorithms.AES def decrypt(self, key, msg, b64decode=True): """Decrypts the provided ciphertext. The ciphertext can be optionally base64 encoded. Uses AES-128-CBC with an IV by default. :param key: The Encryption key. :param msg: the ciphetext, the first block is the IV :returns plain: the plaintext message, after padding is removed. """ key = str.encode(get_valid_encryption_key(key)) if b64decode: msg = base64.b64decode(msg) algo = self.algo(key) block_size_bytes = algo.block_size // 8 iv = msg[:block_size_bytes] backend = backends.default_backend() cipher = Cipher(algo, modes.CBC(iv), backend=backend) decryptor = cipher.decryptor() padded = (decryptor.update(msg[block_size_bytes:]) + decryptor.finalize()) unpadder = padding.ANSIX923(algo.block_size).unpadder() plain = unpadder.update(padded) + unpadder.finalize() # The original padding algorithm was a slight variation on ANSI X.923, # where the size of the padding did not include the byte that tells # you the size of the padding. Therefore, we need to remove one extra # byte (which will be 0x00) when unpadding. return plain[:-1] def encrypt(value, encryption_key=None): if value is None: return None, None encryption_key = get_valid_encryption_key(encryption_key, fix_length=True) encoded_key = base64.b64encode(encryption_key.encode('utf-8')) sym = fernet.Fernet(encoded_key) res = sym.encrypt(encodeutils.safe_encode(value)) return 'cryptography_decrypt_v1', encodeutils.safe_decode(res) def decrypt(method, data, encryption_key=None): if method is None or data is None: return None decryptor = getattr(sys.modules[__name__], method) value = decryptor(data, encryption_key) if value is not None: return encodeutils.safe_decode(value, 'utf-8') def encrypted_dict(data, encryption_key=None): 'Return an encrypted dict. Values converted to json before encrypted' return_data = {} if not data: return return_data for prop_name, prop_value in data.items(): prop_string = jsonutils.dumps(prop_value) encrypted_value = encrypt(prop_string, encryption_key) return_data[prop_name] = encrypted_value return return_data def decrypted_dict(data, encryption_key=None): 'Return a decrypted dict. Assume input values are encrypted json fields.' 
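    # NOTE(editor): illustrative round trip, assuming a configured
    # auth_encryption_key -- decrypted_dict() inverts encrypted_dict():
    #
    #     enc = encrypted_dict({'password': 's3cret'})
    #     decrypted_dict(enc)  # -> {'password': 's3cret'}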
return_data = {} if not data: return return_data for prop_name, prop_value in data.items(): method, value = prop_value decrypted_value = decrypt(method, value, encryption_key) prop_string = jsonutils.loads(decrypted_value) return_data[prop_name] = prop_string return return_data def oslo_decrypt_v1(value, encryption_key=None): encryption_key = get_valid_encryption_key(encryption_key) sym = SymmetricCrypto() return sym.decrypt(encryption_key, value, b64decode=True) def cryptography_decrypt_v1(value, encryption_key=None): encryption_key = get_valid_encryption_key(encryption_key, fix_length=True) encoded_key = base64.b64encode(encryption_key.encode('utf-8')) sym = fernet.Fernet(encoded_key) try: return sym.decrypt(encodeutils.safe_encode(value)) except fernet.InvalidToken: raise exception.InvalidEncryptionKey() def get_valid_encryption_key(encryption_key, fix_length=False): if encryption_key is None: encryption_key = cfg.CONF.auth_encryption_key if fix_length and len(encryption_key) < 32: # Backward compatible size encryption_key = encryption_key * 2 return encryption_key[:32] def heat_decrypt(value, encryption_key=None): """Decrypt data that has been encrypted using an older version of Heat. Note: the encrypt function returns the function that is needed to decrypt the data. The database then stores this. When the data is then retrieved (potentially by a later version of Heat) the decrypt function must still exist. So whilst it may seem that this function is not referenced, it will be referenced from the database. """ encryption_key = str.encode(get_valid_encryption_key(encryption_key)) auth = base64.b64decode(value) AES = algorithms.AES(encryption_key) block_size_bytes = AES.block_size // 8 iv = auth[:block_size_bytes] backend = backends.default_backend() cipher = Cipher(AES, modes.CFB(iv), backend=backend) decryptor = cipher.decryptor() return decryptor.update(auth[block_size_bytes:]) + decryptor.finalize() def list_opts(): yield None, auth_opts heat-10.0.2/heat/common/password_gen.py0000666000175000017500000001045513343562337020033 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import random as random_module import string import six # NOTE(pas-ha) Heat officially supports only POSIX::Linux platform # where os.urandom() and random.SystemRandom() are available random = random_module.SystemRandom() CHARACTER_CLASSES = ( LETTERS_DIGITS, LETTERS, LOWERCASE, UPPERCASE, DIGITS, HEXDIGITS, OCTDIGITS, ) = ( 'lettersdigits', 'letters', 'lowercase', 'uppercase', 'digits', 'hexdigits', 'octdigits', ) _char_class_members = { LETTERS_DIGITS: string.ascii_letters + string.digits, LETTERS: string.ascii_letters, LOWERCASE: string.ascii_lowercase, UPPERCASE: string.ascii_uppercase, DIGITS: string.digits, HEXDIGITS: string.digits + 'ABCDEF', OCTDIGITS: string.octdigits, } CharClass = collections.namedtuple('CharClass', ('allowed_chars', 'min_count')) def named_char_class(char_class, min_count=0): """Return a predefined character class. 
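    For example (illustrative), named_char_class(DIGITS, min_count=2)
    yields a class that guarantees at least two digits in the generated
    password.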
The result of this function can be passed to :func:`generate_password` as one of the character classes to use in generating a password. :param char_class: Any of the character classes named in :const:`CHARACTER_CLASSES` :param min_count: The minimum number of members of this class to appear in a generated password """ assert char_class in CHARACTER_CLASSES return CharClass(frozenset(_char_class_members[char_class]), min_count) def special_char_class(allowed_chars, min_count=0): """Return a character class containing custom characters. The result of this function can be passed to :func:`generate_password` as one of the character classes to use in generating a password. :param allowed_chars: Iterable of the characters in the character class :param min_count: The minimum number of members of this class to appear in a generated password """ return CharClass(frozenset(allowed_chars), min_count) def generate_password(length, char_classes): """Generate a random password. The password will be of the specified length, and comprised of characters from the specified character classes, which can be generated using the :func:`named_char_class` and :func:`special_char_class` functions. Where a minimum count is specified in the character class, at least that number of characters in the resulting password are guaranteed to be from that character class. :param length: The length of the password to generate, in characters :param char_classes: Iterable over classes of characters from which to generate a password """ char_buffer = six.StringIO() all_allowed_chars = set() # Add the minimum number of chars from each char class for char_class in char_classes: all_allowed_chars |= char_class.allowed_chars allowed_chars = tuple(char_class.allowed_chars) for i in six.moves.xrange(char_class.min_count): char_buffer.write(random.choice(allowed_chars)) # Fill up rest with random chars from provided classes combined_chars = tuple(all_allowed_chars) for i in six.moves.xrange(max(0, length - char_buffer.tell())): char_buffer.write(random.choice(combined_chars)) # Shuffle string selected_chars = char_buffer.getvalue() char_buffer.close() return ''.join(random.sample(selected_chars, length)) def generate_openstack_password(): """Generate a random password suitable for a Keystone User.""" return generate_password(32, [named_char_class(LOWERCASE, 1), named_char_class(UPPERCASE, 1), named_char_class(DIGITS, 1), special_char_class('!@#%^&*', 1)]) heat-10.0.2/heat/common/pluginutils.py0000666000175000017500000000204013343562337017706 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import six LOG = logging.getLogger(__name__) def log_fail_msg(manager, entrypoint, exception): LOG.warning('Encountered exception while loading %(module_name)s: ' '"%(message)s". 
Not using %(name)s.', {'module_name': entrypoint.module_name, 'message': getattr(exception, 'message', six.text_type(exception)), 'name': entrypoint.name}) heat-10.0.2/heat/common/identifier.py0000666000175000017500000002015013343562337017453 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import re from oslo_utils import encodeutils from six.moves.urllib import parse as urlparse from heat.common.i18n import _ class HeatIdentifier(collections.Mapping): FIELDS = ( TENANT, STACK_NAME, STACK_ID, PATH ) = ( 'tenant', 'stack_name', 'stack_id', 'path' ) path_re = re.compile(r'stacks/([^/]+)/([^/]+)(.*)') def __init__(self, tenant, stack_name, stack_id, path=''): """Initialise a HeatIdentifier. Identifier is initialized from a Tenant ID, Stack name, Stack ID and optional path. If a path is supplied and it does not begin with "/", a "/" will be prepended. """ if path and not path.startswith('/'): path = '/' + path if '/' in stack_name: raise ValueError(_('Stack name may not contain "/"')) self.identity = { self.TENANT: tenant, self.STACK_NAME: stack_name, self.STACK_ID: str(stack_id), self.PATH: path, } @classmethod def from_arn(cls, arn): """Generate a new HeatIdentifier by parsing the supplied ARN.""" fields = arn.split(':') if len(fields) < 6 or fields[0].lower() != 'arn': raise ValueError(_('"%s" is not a valid ARN') % arn) id_fragment = ':'.join(fields[5:]) path = cls.path_re.match(id_fragment) if fields[1] != 'openstack' or fields[2] != 'heat' or not path: raise ValueError(_('"%s" is not a valid Heat ARN') % arn) return cls(urlparse.unquote(fields[4]), urlparse.unquote(path.group(1)), urlparse.unquote(path.group(2)), urlparse.unquote(path.group(3))) @classmethod def from_arn_url(cls, url): """Generate a new HeatIdentifier by parsing the supplied URL. The URL is expected to contain a valid arn as part of the path. """ # Sanity check the URL urlp = urlparse.urlparse(url) if (urlp.scheme not in ('http', 'https') or not urlp.netloc or not urlp.path): raise ValueError(_('"%s" is not a valid URL') % url) # Remove any query-string and extract the ARN arn_url_prefix = '/arn%3Aopenstack%3Aheat%3A%3A' match = re.search(arn_url_prefix, urlp.path, re.IGNORECASE) if match is None: raise ValueError(_('"%s" is not a valid ARN URL') % url) # the +1 is to skip the leading / url_arn = urlp.path[match.start() + 1:] arn = urlparse.unquote(url_arn) return cls.from_arn(arn) def arn(self): """Return as an ARN. Returned in the form: arn:openstack:heat:::stacks// """ return 'arn:openstack:heat::%s:%s' % (urlparse.quote(self.tenant, ''), self._tenant_path()) def arn_url_path(self): """Return an ARN quoted correctly for use in a URL.""" return '/' + urlparse.quote(self.arn()) def url_path(self): """Return a URL-encoded path segment of a URL. Returned in the form: /stacks// """ return '/'.join((urlparse.quote(self.tenant, ''), self._tenant_path())) def _tenant_path(self): """URL-encoded path segment of a URL within a particular tenant. 
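        For example (illustrative), stack 'mystack' with ID 'abc123' and
        an empty path yields 'stacks/mystack/abc123'.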
Returned in the form: stacks// """ return 'stacks/%s%s' % (self.stack_path(), urlparse.quote(encodeutils.safe_encode( self.path))) def stack_path(self): """Return a URL-encoded path segment of a URL without a tenant. Returned in the form: / """ return '%s/%s' % (urlparse.quote(self.stack_name, ''), urlparse.quote(self.stack_id, '')) def _path_components(self): """Return a list of the path components.""" return self.path.lstrip('/').split('/') def __getattr__(self, attr): """Return a component of the identity when accessed as an attribute.""" if attr not in self.FIELDS: raise AttributeError(_('Unknown attribute "%s"') % attr) return self.identity[attr] def __getitem__(self, key): """Return one of the components of the identity.""" if key not in self.FIELDS: raise KeyError(_('Unknown attribute "%s"') % key) return self.identity[key] def __len__(self): """Return the number of components in an identity.""" return len(self.FIELDS) def __contains__(self, key): return key in self.FIELDS def __iter__(self): return iter(self.FIELDS) def __repr__(self): return repr(dict(self)) class ResourceIdentifier(HeatIdentifier): """An identifier for a resource.""" RESOURCE_NAME = 'resource_name' def __init__(self, tenant, stack_name, stack_id, path, resource_name=None): """Initialise a new Resource identifier. The identifier is based on the identifier components of the owning stack and the resource name. """ if resource_name is not None: if '/' in resource_name: raise ValueError(_('Resource name may not contain "/"')) path = '/'.join([path.rstrip('/'), 'resources', resource_name]) super(ResourceIdentifier, self).__init__(tenant, stack_name, stack_id, path) def __getattr__(self, attr): """Return a component of the identity when accessed as an attribute.""" if attr == self.RESOURCE_NAME: return self._path_components()[-1] return HeatIdentifier.__getattr__(self, attr) def stack(self): """Return a HeatIdentifier for the owning stack.""" return HeatIdentifier(self.tenant, self.stack_name, self.stack_id, '/'.join(self._path_components()[:-2])) class EventIdentifier(HeatIdentifier): """An identifier for an event.""" (RESOURCE_NAME, EVENT_ID) = (ResourceIdentifier.RESOURCE_NAME, 'event_id') def __init__(self, tenant, stack_name, stack_id, path, event_id=None): """Initialise a new Event identifier based on components. The identifier is based on the identifier components of the associated resource and the event ID. """ if event_id is not None: path = '/'.join([path.rstrip('/'), 'events', event_id]) super(EventIdentifier, self).__init__(tenant, stack_name, stack_id, path) def __getattr__(self, attr): """Return a component of the identity when accessed as an attribute.""" if attr == self.RESOURCE_NAME: return getattr(self.resource(), attr) if attr == self.EVENT_ID: return self._path_components()[-1] return HeatIdentifier.__getattr__(self, attr) def resource(self): """Return a HeatIdentifier for the owning resource.""" return ResourceIdentifier(self.tenant, self.stack_name, self.stack_id, '/'.join(self._path_components()[:-2])) def stack(self): """Return a HeatIdentifier for the owning stack.""" return self.resource().stack() heat-10.0.2/heat/common/cache.py0000666000175000017500000001057513343562337016406 0ustar zuulzuul00000000000000# # Copyright 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """The code related to integration between oslo.cache module and heat.""" from oslo_cache import core from oslo_config import cfg from heat.common.i18n import _ def register_cache_configurations(conf): """Register all configurations required for oslo.cache. The procedure registers all configurations required for oslo.cache. It should be called before configuring of cache region :param conf: instance of heat configuration :returns: updated heat configuration """ # register global configurations for caching in heat core.configure(conf) # register heat specific configurations constraint_cache_group = cfg.OptGroup('constraint_validation_cache') constraint_cache_opts = [ cfg.IntOpt('expiration_time', default=60, help=_( 'TTL, in seconds, for any cached item in the ' 'dogpile.cache region used for caching of validation ' 'constraints.')), cfg.BoolOpt("caching", default=True, help=_( 'Toggle to enable/disable caching when Orchestration ' 'Engine validates property constraints of stack.' 'During property validation with constraints ' 'Orchestration Engine caches requests to other ' 'OpenStack services. Please note that the global ' 'toggle for oslo.cache(enabled=True in [cache] group) ' 'must be enabled to use this feature.')) ] conf.register_group(constraint_cache_group) conf.register_opts(constraint_cache_opts, group=constraint_cache_group) extension_cache_group = cfg.OptGroup('service_extension_cache') extension_cache_opts = [ cfg.IntOpt('expiration_time', default=3600, help=_( 'TTL, in seconds, for any cached item in the ' 'dogpile.cache region used for caching of service ' 'extensions.')), cfg.BoolOpt('caching', default=True, help=_( 'Toggle to enable/disable caching when Orchestration ' 'Engine retrieves extensions from other OpenStack ' 'services. Please note that the global toggle for ' 'oslo.cache(enabled=True in [cache] group) must be ' 'enabled to use this feature.')) ] conf.register_group(extension_cache_group) conf.register_opts(extension_cache_opts, group=extension_cache_group) find_cache_group = cfg.OptGroup('resource_finder_cache') find_cache_opts = [ cfg.IntOpt('expiration_time', default=3600, help=_( 'TTL, in seconds, for any cached item in the ' 'dogpile.cache region used for caching of OpenStack ' 'service finder functions.')), cfg.BoolOpt('caching', default=True, help=_( 'Toggle to enable/disable caching when Orchestration ' 'Engine looks for other OpenStack service resources ' 'using name or id. 
Please note that the global ' 'toggle for oslo.cache(enabled=True in [cache] group) ' 'must be enabled to use this feature.')) ] conf.register_group(find_cache_group) conf.register_opts(find_cache_opts, group=find_cache_group) return conf # variable that stores an initialized cache region for heat _REGION = None def get_cache_region(): global _REGION if not _REGION: _REGION = core.configure_cache_region( conf=register_cache_configurations(cfg.CONF), region=core.create_region()) return _REGION heat-10.0.2/heat/common/endpoint_utils.py0000666000175000017500000000303213343562337020371 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Copyright 2015 IBM Corp. from keystoneauth1 import discover as ks_discover from keystoneauth1 import session as ks_session from oslo_config import cfg from oslo_utils import importutils from heat.common import config def get_auth_uri(v3=True): # Look for the keystone auth_uri in the configuration. First we # check the [clients_keystone] section, and if it is not set we # look in [keystone_authtoken] if cfg.CONF.clients_keystone.auth_uri: session = ks_session.Session(**config.get_ssl_options('keystone')) discover = ks_discover.Discover( session=session, url=cfg.CONF.clients_keystone.auth_uri) return discover.url_for('3.0') else: # Import auth_token to have keystone_authtoken settings setup. importutils.import_module('keystonemiddleware.auth_token') auth_uri = cfg.CONF.keystone_authtoken.auth_uri return auth_uri.replace('v2.0', 'v3') if auth_uri and v3 else auth_uri heat-10.0.2/heat/common/policy.py0000666000175000017500000001510613343562351016631 0ustar zuulzuul00000000000000# # Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
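#
# NOTE(editor): a minimal usage sketch of the enforcer defined below,
# illustrative only (the action name is hypothetical):
#
#     enforcer = get_enforcer()
#     enforcer.enforce(context, 'stacks:create', target={})
#
# enforce() prefixes the action with the default 'heat' scope, so the
# rule actually checked is 'heat:stacks:create'.
#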
# Based on glance/api/policy.py """Policy Engine For Heat.""" from oslo_config import cfg from oslo_log import log as logging from oslo_policy import policy from oslo_utils import excutils import six from heat.common import exception from heat.common.i18n import _ from heat import policies CONF = cfg.CONF LOG = logging.getLogger(__name__) DEFAULT_RULES = policy.Rules.from_dict({'default': '!'}) DEFAULT_RESOURCE_RULES = policy.Rules.from_dict({'default': '@'}) ENFORCER = None class Enforcer(object): """Responsible for loading and enforcing rules.""" def __init__(self, scope='heat', exc=exception.Forbidden, default_rule=DEFAULT_RULES['default'], policy_file=None): self.scope = scope self.exc = exc self.default_rule = default_rule self.enforcer = policy.Enforcer( CONF, default_rule=default_rule, policy_file=policy_file) self.log_not_registered = True # register rules self.enforcer.register_defaults(policies.list_rules()) def set_rules(self, rules, overwrite=True): """Create a new Rules object based on the provided dict of rules.""" rules_obj = policy.Rules(rules, self.default_rule) self.enforcer.set_rules(rules_obj, overwrite) def load_rules(self, force_reload=False): """Set the rules found in the json file on disk.""" self.enforcer.load_rules(force_reload) def _check(self, context, rule, target, exc, is_registered_policy=False, *args, **kwargs): """Verifies that the action is valid on the target in this context. :param context: Heat request context :param rule: String representing the action to be checked :param target: Dictionary representing the object of the action. :raises: self.exc (defaults to heat.common.exception.Forbidden) :returns: A non-False value if access is allowed. """ do_raise = False if not exc else True credentials = context.to_policy_values() if is_registered_policy: try: return self.enforcer.authorize(rule, target, credentials, do_raise=do_raise, exc=exc, action=rule) except policy.PolicyNotRegistered: if self.log_not_registered: with excutils.save_and_reraise_exception(): LOG.exception(_('Policy not registered.')) else: raise else: return self.enforcer.enforce(rule, target, credentials, do_raise, exc=exc, *args, **kwargs) def enforce(self, context, action, scope=None, target=None, is_registered_policy=False): """Verifies that the action is valid on the target in this context. :param context: Heat request context :param action: String representing the action to be checked :param target: Dictionary representing the object of the action. :raises: self.exc (defaults to heat.common.exception.Forbidden) :returns: A non-False value if access is allowed. """ _action = '%s:%s' % (scope or self.scope, action) _target = target or {} return self._check(context, _action, _target, self.exc, action=action, is_registered_policy=is_registered_policy) def check_is_admin(self, context): """Whether or not is admin according to policy. By default the rule will check whether or not roles contains 'admin' role and is admin project. 
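        The check is delegated to the registered 'context_is_admin'
        policy rule (see the call below).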
:param context: Heat request context :returns: A non-False value if the user is admin according to policy """ return self._check(context, 'context_is_admin', target={}, exc=None, is_registered_policy=True) def get_enforcer(): global ENFORCER if ENFORCER is None: ENFORCER = Enforcer() return ENFORCER class ResourceEnforcer(Enforcer): def __init__(self, default_rule=DEFAULT_RESOURCE_RULES['default'], **kwargs): super(ResourceEnforcer, self).__init__( default_rule=default_rule, **kwargs) self.log_not_registered = False def _enforce(self, context, res_type, scope=None, target=None, is_registered_policy=False): try: result = super(ResourceEnforcer, self).enforce( context, res_type, scope=scope or 'resource_types', target=target, is_registered_policy=is_registered_policy) except policy.PolicyNotRegistered: result = True except self.exc as ex: LOG.info(six.text_type(ex)) raise if not result: if self.exc: raise self.exc(action=res_type) return result def enforce(self, context, res_type, scope=None, target=None, is_registered_policy=False): # NOTE(pas-ha): try/except just to log the exception result = self._enforce(context, res_type, scope, target, is_registered_policy=is_registered_policy) if result: # check for wildcard resource types subparts = res_type.split("::")[:-1] subparts.append('*') res_type_wc = "::".join(subparts) try: return self._enforce(context, res_type_wc, scope, target, is_registered_policy=is_registered_policy) except self.exc: raise self.exc(action=res_type) return result def enforce_stack(self, stack, scope=None, target=None, is_registered_policy=False): for res in stack.resources.values(): self.enforce(stack.context, res.type(), scope=scope, target=target, is_registered_policy=is_registered_policy) heat-10.0.2/heat/common/timeutils.py0000666000175000017500000000527513343562337017363 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Utilities for handling ISO 8601 duration format.""" import random import re import time from heat.common.i18n import _ iso_duration_re = re.compile(r'PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$') wallclock = time.time class Duration(object): # (NOTE): we don't attempt to handle leap seconds or large clock # jumps here. The latter are assumed to be rare and the former # negligible in the context of the timeout. Time zone adjustments, # Daylight Savings and the like *are* handled. PEP 418 adds a proper # monotonic clock, but only in Python 3.3. def __init__(self, timeout=0): self._endtime = wallclock() + timeout def expired(self): return wallclock() > self._endtime def endtime(self): return self._endtime def parse_isoduration(duration): """Convert duration in ISO 8601 format to second(s). Year, Month, Week, and Day designators are not supported. 
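    Each designator contributes hours * 3600 + minutes * 60 + seconds;
    'PT12H30M5S' therefore parses to 45005 seconds.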
Example: 'PT12H30M5S' """ result = iso_duration_re.match(duration) if not result: raise ValueError(_('Only ISO 8601 duration format of the form ' 'PT#H#M#S is supported.')) t = 0 t += (3600 * int(result.group(1))) if result.group(1) else 0 t += (60 * int(result.group(2))) if result.group(2) else 0 t += int(result.group(3)) if result.group(3) else 0 return t def retry_backoff_delay(attempt, scale_factor=1.0, jitter_max=0.0): """Calculate an exponential backoff delay with jitter. Delay is calculated as 2^attempt + (uniform random from [0,1) * jitter_max) :param attempt: The count of the current retry attempt :param scale_factor: Multiplier to scale the exponential delay by :param jitter_max: Maximum of random seconds to add to the delay :returns: Seconds since epoch to wait until """ exp = float(2 ** attempt) * float(scale_factor) if jitter_max == 0.0: return exp return exp + random.random() * jitter_max def isotime(at): """Stringify UTC time in ISO 8601 format. :param at: Timestamp in UTC to format. """ if at is None: return None return at.strftime('%Y-%m-%dT%H:%M:%SZ') heat-10.0.2/heat/common/__init__.py0000666000175000017500000000000013343562337017060 0ustar zuulzuul00000000000000heat-10.0.2/heat/common/template_format.py0000666000175000017500000001164013343562337020520 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from oslo_config import cfg from oslo_serialization import jsonutils import six import yaml from heat.common import exception from heat.common.i18n import _ if hasattr(yaml, 'CSafeLoader'): _yaml_loader_base = yaml.CSafeLoader else: _yaml_loader_base = yaml.SafeLoader class yaml_loader(_yaml_loader_base): def _construct_yaml_str(self, node): # Override the default string handling function # to always return unicode objects return self.construct_scalar(node) if hasattr(yaml, 'CSafeDumper'): _yaml_dumper_base = yaml.CSafeDumper else: _yaml_dumper_base = yaml.SafeDumper class yaml_dumper(_yaml_dumper_base): def represent_ordered_dict(self, data): return self.represent_dict(data.items()) yaml_loader.add_constructor(u'tag:yaml.org,2002:str', yaml_loader._construct_yaml_str) # Unquoted dates like 2013-05-23 in yaml files get loaded as objects of type # datetime.data which causes problems in API layer when being processed by # openstack.common.jsonutils. Therefore, make unicode string out of timestamps # until jsonutils can handle dates. yaml_loader.add_constructor(u'tag:yaml.org,2002:timestamp', yaml_loader._construct_yaml_str) yaml_dumper.add_representer(collections.OrderedDict, yaml_dumper.represent_ordered_dict) def simple_parse(tmpl_str, tmpl_url=None): try: tpl = jsonutils.loads(tmpl_str) except ValueError: try: tpl = yaml.load(tmpl_str, Loader=yaml_loader) except yaml.YAMLError: # NOTE(prazumovsky): we need to return more informative error for # user, so use SafeLoader, which return error message with template # snippet where error has been occurred. 
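            # NOTE(editor): illustrative behaviour of the parse order
            # above, assuming default loaders:
            #     simple_parse('{"a": 1}')  -> {'a': 1}  (valid JSON)
            #     simple_parse('a: 1')      -> {'a': 1}  (falls back to YAML)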
try: tpl = yaml.load(tmpl_str, Loader=yaml.SafeLoader) except yaml.YAMLError as yea: if tmpl_url is None: tmpl_url = '[root stack]' yea = six.text_type(yea) msg = _('Error parsing template %(tmpl)s ' '%(yea)s') % {'tmpl': tmpl_url, 'yea': yea} raise ValueError(msg) else: if tpl is None: tpl = {} if not isinstance(tpl, dict): raise ValueError(_('The template is not a JSON object ' 'or YAML mapping.')) return tpl def validate_template_limit(contain_str): """Validate limit for the template. Check if the contain exceeds allowed size range. """ if len(contain_str) > cfg.CONF.max_template_size: msg = _("Template size (%(actual_len)s bytes) exceeds maximum " "allowed size (%(limit)s bytes)." ) % {'actual_len': len(contain_str), 'limit': cfg.CONF.max_template_size} raise exception.RequestLimitExceeded(message=msg) def parse(tmpl_str, tmpl_url=None): """Takes a string and returns a dict containing the parsed structure. This includes determination of whether the string is using the JSON or YAML format. """ # TODO(ricolin): Move this validation to api side. # Validate nested stack template. validate_template_limit(six.text_type(tmpl_str)) tpl = simple_parse(tmpl_str, tmpl_url) # Looking for supported version keys in the loaded template if not ('HeatTemplateFormatVersion' in tpl or 'heat_template_version' in tpl or 'AWSTemplateFormatVersion' in tpl): raise ValueError(_("Template format version not found.")) return tpl def convert_json_to_yaml(json_str): """Convert AWS JSON template format to Heat YAML format. :param json_str: a string containing the AWS JSON template format. :returns: the equivalent string containing the Heat YAML format. """ # parse the string as json to a python structure tpl = jsonutils.loads(json_str, object_pairs_hook=collections.OrderedDict) # Replace AWS format version with Heat format version def top_level_items(tpl): yield ("HeatTemplateFormatVersion", '2012-12-12') for k, v in six.iteritems(tpl): if k != 'AWSTemplateFormatVersion': yield k, v # dump python structure to yaml return yaml.dump(collections.OrderedDict(top_level_items(tpl)), Dumper=yaml_dumper) heat-10.0.2/heat/common/noauth.py0000666000175000017500000000502713343562337016635 0ustar zuulzuul00000000000000# # Copyright (C) 2016, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Middleware that accepts any authentication.""" import json import os from oslo_config import cfg from oslo_log import log as logging LOG = logging.getLogger(__name__) class NoAuthProtocol(object): def __init__(self, app, conf): self.conf = conf self.app = app self._token_info = {} response_file = cfg.CONF.noauth.token_response if os.path.exists(response_file): with open(response_file) as f: self._token_info = json.loads(f.read()) def __call__(self, env, start_response): """Handle incoming request. Authenticate send downstream on success. Reject request if we can't authenticate. 
""" LOG.debug('Authenticating user token') env.update(self._build_user_headers(env)) return self.app(env, start_response) def _build_user_headers(self, env): """Build headers that represent authenticated user from auth token.""" # token = env.get('X-Auth-Token', '') # user_id, _sep, project_id = token.partition(':') # project_id = project_id or user_id username = env.get('HTTP_X_AUTH_USER', 'admin') project = env.get('HTTP_X_AUTH_PROJECT', 'admin') headers = { 'HTTP_X_IDENTITY_STATUS': 'Confirmed', 'HTTP_X_PROJECT_ID': project, 'HTTP_X_PROJECT_NAME': project, 'HTTP_X_USER_ID': username, 'HTTP_X_USER_NAME': username, 'HTTP_X_ROLES': 'admin', 'HTTP_X_SERVICE_CATALOG': {}, 'HTTP_X_AUTH_USER': username, 'HTTP_X_AUTH_KEY': 'unset', 'HTTP_X_AUTH_URL': 'url', } if self._token_info: headers['keystone.token_info'] = self._token_info return headers def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def auth_filter(app): return NoAuthProtocol(app, conf) return auth_filter heat-10.0.2/heat/common/auth_url.py0000666000175000017500000000431313343562337017157 0ustar zuulzuul00000000000000# # Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from webob import exc from heat.common import endpoint_utils from heat.common.i18n import _ from heat.common import wsgi class AuthUrlFilter(wsgi.Middleware): def __init__(self, app, conf): super(AuthUrlFilter, self).__init__(app) self.conf = conf self._auth_url = None @property def auth_url(self): if not self._auth_url: self._auth_url = self._get_auth_url() return self._auth_url def _get_auth_url(self): if 'auth_uri' in self.conf: return self.conf['auth_uri'] else: return endpoint_utils.get_auth_uri(v3=False) def _validate_auth_url(self, auth_url): """Validate auth_url to ensure it can be used.""" if not auth_url: raise exc.HTTPBadRequest(_('Request missing required header ' 'X-Auth-Url')) allowed = cfg.CONF.auth_password.allowed_auth_uris if auth_url not in allowed: raise exc.HTTPUnauthorized(_('Header X-Auth-Url "%s" not ' 'an allowed endpoint') % auth_url) return True def process_request(self, req): auth_url = self.auth_url if cfg.CONF.auth_password.multi_cloud: auth_url = req.headers.get('X-Auth-Url') self._validate_auth_url(auth_url) req.headers['X-Auth-Url'] = auth_url return None def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def auth_url_filter(app): return AuthUrlFilter(app, conf) return auth_url_filter heat-10.0.2/heat/common/environment_format.py0000666000175000017500000000460413343562337021253 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import yaml from heat.common.i18n import _ from heat.common import template_format SECTIONS = ( PARAMETERS, RESOURCE_REGISTRY, PARAMETER_DEFAULTS, ENCRYPTED_PARAM_NAMES, EVENT_SINKS, PARAMETER_MERGE_STRATEGIES, ) = ( 'parameters', 'resource_registry', 'parameter_defaults', 'encrypted_param_names', 'event_sinks', 'parameter_merge_strategies', ) def parse(env_str): """Takes a string and returns a dict containing the parsed structure.""" if env_str is None: return {} try: env = template_format.yaml.load(env_str, Loader=template_format.yaml_loader) except yaml.YAMLError: # NOTE(prazumovsky): we need to return more informative error for # user, so use SafeLoader, which return error message with template # snippet where error has been occurred. try: env = yaml.load(env_str, Loader=yaml.SafeLoader) except yaml.YAMLError as yea: raise ValueError(yea) else: if env is None: env = {} if not isinstance(env, dict): raise ValueError(_('The environment is not a valid ' 'YAML mapping data type.')) return validate(env) def validate(env): for param in env: if param not in SECTIONS: raise ValueError(_('environment has wrong section "%s"') % param) if env[param] is None: raise ValueError(_('environment has empty section "%s"') % param) return env def default_for_missing(env): """Checks a parsed environment for missing sections.""" for param in SECTIONS: if param not in env and param != PARAMETER_MERGE_STRATEGIES: if param in (ENCRYPTED_PARAM_NAMES, EVENT_SINKS): env[param] = [] else: env[param] = {} heat-10.0.2/heat/common/serializers.py0000666000175000017500000000614713343562337017677 0ustar zuulzuul00000000000000# # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Utility methods for serializing responses.""" import datetime from lxml import etree from oslo_log import log as logging from oslo_serialization import jsonutils import six LOG = logging.getLogger(__name__) class JSONResponseSerializer(object): def to_json(self, data): def sanitizer(obj): if isinstance(obj, datetime.datetime): return obj.isoformat() return six.text_type(obj) response = jsonutils.dumps(data, default=sanitizer) # TODO(ricolin): Fix response through private credential information, # before enable below debug message. 
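        # NOTE(editor): the sanitizer above falls back to six.text_type()
        # for anything that is not JSON-serializable; e.g. (illustrative)
        # to_json({'at': datetime.datetime(2018, 1, 1)}) returns
        # '{"at": "2018-01-01T00:00:00"}'.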
# LOG.debug("JSON response : %s" % response) return response def default(self, response, result): response.content_type = 'application/json' response.body = six.b(self.to_json(result)) # Escape XML serialization for these keys, as the AWS API defines them as # JSON inside XML when the response format is XML. JSON_ONLY_KEYS = ('TemplateBody', 'Metadata') class XMLResponseSerializer(object): def object_to_element(self, obj, element): if isinstance(obj, list): for item in obj: subelement = etree.SubElement(element, "member") self.object_to_element(item, subelement) elif isinstance(obj, dict): for key, value in obj.items(): subelement = etree.SubElement(element, key) if key in JSON_ONLY_KEYS: if value: # Need to use json.dumps for the JSON inside XML # otherwise quotes get mangled and json.loads breaks try: subelement.text = jsonutils.dumps(value) except TypeError: subelement.text = str(value) else: self.object_to_element(value, subelement) else: element.text = six.text_type(obj) def to_xml(self, data): # Assumption : root node is dict with single key root = next(six.iterkeys(data)) eltree = etree.Element(root) self.object_to_element(data.get(root), eltree) response = etree.tostring(eltree) return response def default(self, response, result): response.content_type = 'application/xml' response.body = self.to_xml(result) heat-10.0.2/heat/common/exception.py0000666000175000017500000004306413343562351017334 0ustar zuulzuul00000000000000# # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Heat exception subclasses""" import sys from oslo_log import log as logging from oslo_utils import excutils import six from heat.common.i18n import _ _FATAL_EXCEPTION_FORMAT_ERRORS = False LOG = logging.getLogger(__name__) # TODO(kanagaraj-manickam): Expose this to user via REST API ERROR_CODE_MAP = { '99001': _("Service %(service_name)s is not available for resource " "type %(resource_type)s, reason: %(reason)s") } @six.python_2_unicode_compatible class HeatException(Exception): """Base Heat Exception. To correctly use this class, inherit from it and define a 'msg_fmt' property. That msg_fmt will get formatted with the keyword arguments provided to the constructor. """ message = _("An unknown exception occurred.") # error_code helps to provide an unique number for a given exception # and is encoded in XXYYY format. # Here, XX - For each of the entity type like stack, resource, etc # an unique number will be provided. All exceptions for a entity will # have same XX code. # YYY - Specific error code for a given exception. 
error_code = None safe = True def __init__(self, **kwargs): self.kwargs = kwargs if self.error_code in ERROR_CODE_MAP: self.msg_fmt = ERROR_CODE_MAP[self.error_code] try: self.message = self.msg_fmt % kwargs except KeyError: with excutils.save_and_reraise_exception( reraise=_FATAL_EXCEPTION_FORMAT_ERRORS): # kwargs doesn't match a variable in the message # log the issue and the kwargs LOG.exception('Exception in string format operation') for name, value in six.iteritems(kwargs): LOG.error("%(name)s: %(value)s", {'name': name, 'value': value}) # noqa if self.error_code: self.message = 'HEAT-E%s %s' % (self.error_code, self.message) def __str__(self): return self.message def __deepcopy__(self, memo): return self.__class__(**self.kwargs) class MissingCredentialError(HeatException): msg_fmt = _("Missing required credential: %(required)s") class AuthorizationFailure(HeatException): msg_fmt = _("Authorization failed.") class NotAuthenticated(HeatException): msg_fmt = _("You are not authenticated.") class Forbidden(HeatException): msg_fmt = _("You are not authorized to use %(action)s.") def __init__(self, action='this action'): super(Forbidden, self).__init__(action=action) # NOTE(bcwaldon): here for backwards-compatibility, need to deprecate. class NotAuthorized(Forbidden): msg_fmt = _("You are not authorized to complete this action.") class Invalid(HeatException): msg_fmt = _("Data supplied was not valid: %(reason)s") class UserParameterMissing(HeatException): msg_fmt = _("The Parameter (%(key)s) was not provided.") class UnknownUserParameter(HeatException): msg_fmt = _("The Parameter (%(key)s) was not defined in template.") class InvalidTemplateVersion(HeatException): msg_fmt = _("The template version is invalid: %(explanation)s") class InvalidTemplateSection(HeatException): msg_fmt = _("The template section is invalid: %(section)s") class ImmutableParameterModified(HeatException): msg_fmt = _("The following parameters are immutable and may not be " "updated: %(keys)s") def __init__(self, *args, **kwargs): if args: kwargs.update({'keys': ", ".join(args)}) super(ImmutableParameterModified, self).__init__(**kwargs) class InvalidMergeStrategyForParam(HeatException): msg_fmt = _("Invalid merge strategy '%(strategy)s' for " "parameter '%(param)s'.") class ConflictingMergeStrategyForParam(HeatException): msg_fmt = _("Conflicting merge strategy '%(strategy)s' for " "parameter '%(param)s' in file '%(env_file)s'.") class InvalidTemplateAttribute(HeatException): msg_fmt = _("The Referenced Attribute (%(resource)s %(key)s)" " is incorrect.") class InvalidTemplateReference(HeatException): msg_fmt = _('The specified reference "%(resource)s" (in %(key)s)' ' is incorrect.') class TemplateOutputError(HeatException): msg_fmt = _('Error in %(resource)s output %(attribute)s: %(message)s') class InvalidEncryptionKey(HeatException): msg_fmt = _('Can not decrypt data with the auth_encryption_key' ' in heat config.') class InvalidExternalResourceDependency(HeatException): msg_fmt = _("Invalid dependency with external %(resource_type)s " "resource: %(external_id)s") class EntityNotFound(HeatException): msg_fmt = _("The %(entity)s (%(name)s) could not be found.") def __init__(self, entity=None, name=None, **kwargs): self.entity = entity self.name = name super(EntityNotFound, self).__init__(entity=entity, name=name, **kwargs) class PhysicalResourceExists(HeatException): msg_fmt = _("The physical resource for (%(name)s) exists.") class PhysicalResourceNameAmbiguity(HeatException): msg_fmt = _( "Multiple physical 
resources were found with name (%(name)s).") class PhysicalResourceIDAmbiguity(HeatException): msg_fmt = _( "Multiple resources were found with the physical ID (%(phys_id)s).") class InvalidTenant(HeatException): msg_fmt = _("Searching Tenant %(target)s " "from Tenant %(actual)s forbidden.") class StackExists(HeatException): msg_fmt = _("The Stack (%(stack_name)s) already exists.") class HeatExceptionWithPath(HeatException): msg_fmt = _("%(error)s%(path)s%(message)s") def __init__(self, error=None, path=None, message=None): self.error = error or '' self.path = [] if path is not None: if isinstance(path, list): self.path = path elif isinstance(path, six.string_types): self.path = [path] result_path = '' for path_item in self.path: if isinstance(path_item, int) or path_item.isdigit(): result_path += '[%s]' % path_item elif len(result_path) > 0: result_path += '.%s' % path_item else: result_path = path_item self.error_message = message or '' super(HeatExceptionWithPath, self).__init__( error=('%s: ' % self.error if self.error != '' else ''), path=('%s: ' % result_path if len(result_path) > 0 else ''), message=self.error_message ) class StackValidationFailed(HeatExceptionWithPath): def __init__(self, error=None, path=None, message=None, resource=None): if path is None: path = [] elif isinstance(path, six.string_types): path = [path] if resource is not None and not path: path = [resource.stack.t.get_section_name( resource.stack.t.RESOURCES), resource.name] if isinstance(error, Exception): if isinstance(error, StackValidationFailed): str_error = error.error message = error.error_message path = path + error.path # This is a hack to avoid the py3 (chained exception) # json serialization circular reference error from # oslo.messaging. self.args = error.args else: str_error = six.text_type(type(error).__name__) message = six.text_type(error) else: str_error = error super(StackValidationFailed, self).__init__(error=str_error, path=path, message=message) class InvalidSchemaError(HeatException): msg_fmt = _("%(message)s") class ResourceNotFound(EntityNotFound): msg_fmt = _("The Resource (%(resource_name)s) could not be found " "in Stack %(stack_name)s.") class SnapshotNotFound(EntityNotFound): msg_fmt = _("The Snapshot (%(snapshot)s) for Stack (%(stack)s) " "could not be found.") class InvalidGlobalResource(HeatException): msg_fmt = _("There was an error loading the definition of the global " "resource type %(type_name)s.") class ResourceTypeUnavailable(HeatException): error_code = '99001' class InvalidBreakPointHook(HeatException): msg_fmt = _("%(message)s") class InvalidRestrictedAction(HeatException): msg_fmt = _("%(message)s") class ResourceNotAvailable(HeatException): msg_fmt = _("The Resource (%(resource_name)s) is not available.") class ClientNotAvailable(HeatException): msg_fmt = _("The client (%(client_name)s) is not available.") class ResourceFailure(HeatExceptionWithPath): def __init__(self, exception_or_error, resource, action=None): self.resource = resource self.action = action if action is None and resource is not None: self.action = resource.action path = [] res_path = [] if resource is not None: res_path = [resource.stack.t.get_section_name('resources'), resource.name] if isinstance(exception_or_error, Exception): if isinstance(exception_or_error, ResourceFailure): self.exc = exception_or_error.exc error = exception_or_error.error message = exception_or_error.error_message path = exception_or_error.path else: self.exc = exception_or_error error = six.text_type(type(self.exc).__name__) 
message = six.text_type(self.exc) path = res_path else: self.exc = None res_failed = 'Resource %s failed: ' % action.upper() if res_failed in exception_or_error: (error, message, new_path) = self._from_status_reason( exception_or_error) path = res_path + new_path else: path = res_path error = None message = exception_or_error super(ResourceFailure, self).__init__(error=error, path=path, message=message) def _from_status_reason(self, status_reason): """Split the status_reason up into parts. Given the following status_reason: "Resource DELETE failed: Exception : resources.AResource: foo" we are going to return: ("Exception", "resources.AResource", "foo") """ parsed = [sp.strip() for sp in status_reason.split(':')] if len(parsed) >= 4: error = parsed[1] message = ': '.join(parsed[3:]) path = parsed[2].split('.') else: error = '' message = status_reason path = [] return (error, message, path) class NotSupported(HeatException): msg_fmt = _("%(feature)s is not supported.") class ResourceActionNotSupported(HeatException): msg_fmt = _("%(action)s is not supported for resource.") class ResourceActionRestricted(HeatException): msg_fmt = _("%(action)s is restricted for resource.") class ResourcePropertyConflict(HeatException): msg_fmt = _('Cannot define the following properties ' 'at the same time: %(props)s.') def __init__(self, *args, **kwargs): if args: kwargs.update({'props': ", ".join(args)}) super(ResourcePropertyConflict, self).__init__(**kwargs) class ResourcePropertyDependency(HeatException): msg_fmt = _('%(prop1)s cannot be specified without %(prop2)s.') class ResourcePropertyValueDependency(HeatException): msg_fmt = _('%(prop1)s property should only be specified ' 'for %(prop2)s with value %(value)s.') class PropertyUnspecifiedError(HeatException): msg_fmt = _('At least one of the following properties ' 'must be specified: %(props)s.') def __init__(self, *args, **kwargs): if args: kwargs.update({'props': ", ".join(args)}) super(PropertyUnspecifiedError, self).__init__(**kwargs) # Do not reference this here - in the future it will move back to its # correct (and original) location in heat.engine.resource. Reference it as # heat.engine.resource.UpdateReplace instead. class UpdateReplace(Exception): """Raised when resource update requires replacement.""" def __init__(self, resource_name='Unknown'): msg = _("The Resource %s requires replacement.") % resource_name super(Exception, self).__init__(six.text_type(msg)) class ResourceUnknownStatus(HeatException): msg_fmt = _('%(result)s - Unknown status %(resource_status)s due to ' '"%(status_reason)s"') def __init__(self, result=_('Resource failed'), status_reason=_('Unknown'), **kwargs): super(ResourceUnknownStatus, self).__init__( result=result, status_reason=status_reason, **kwargs) class ResourceInError(HeatException): msg_fmt = _('Went to status %(resource_status)s ' 'due to "%(status_reason)s"') def __init__(self, status_reason=_('Unknown'), **kwargs): super(ResourceInError, self).__init__(status_reason=status_reason, **kwargs) class UpdateInProgress(Exception): def __init__(self, resource_name='Unknown'): msg = _("The resource %s is already being updated.") % resource_name super(Exception, self).__init__(six.text_type(msg)) class HTTPExceptionDisguise(Exception): """Disguises HTTP exceptions. They can be handled by the webob fault application in the wsgi pipeline. 
""" safe = True def __init__(self, exception): self.exc = exception self.tb = sys.exc_info()[2] class EgressRuleNotAllowed(HeatException): msg_fmt = _("Egress rules are only allowed when " "Neutron is used and the 'VpcId' property is set.") class Error(HeatException): msg_fmt = "%(message)s" def __init__(self, msg): super(Error, self).__init__(message=msg) class NotFound(HeatException): def __init__(self, msg_fmt=_('Not found')): self.msg_fmt = msg_fmt super(NotFound, self).__init__() class InvalidContentType(HeatException): msg_fmt = _("Invalid content type %(content_type)s") class RequestLimitExceeded(HeatException): msg_fmt = _('Request limit exceeded: %(message)s') class StackResourceLimitExceeded(HeatException): msg_fmt = _('Maximum resources per stack exceeded.') class ActionInProgress(HeatException): msg_fmt = _("Stack %(stack_name)s already has an action (%(action)s) " "in progress.") class ActionNotComplete(HeatException): msg_fmt = _("Stack %(stack_name)s has an action (%(action)s) " "in progress or failed state.") class StopActionFailed(HeatException): msg_fmt = _("Failed to stop stack (%(stack_name)s) on other engine " "(%(engine_id)s)") class EventSendFailed(HeatException): msg_fmt = _("Failed to send message to stack (%(stack_name)s) " "on other engine (%(engine_id)s)") class InterfaceAttachFailed(HeatException): msg_fmt = _("Failed to attach interface (%(port)s) " "to server (%(server)s)") class InterfaceDetachFailed(HeatException): msg_fmt = _("Failed to detach interface (%(port)s) " "from server (%(server)s)") class UnsupportedObjectError(HeatException): msg_fmt = _('Unsupported object type %(objtype)s') class OrphanedObjectError(HeatException): msg_fmt = _('Cannot call %(method)s on orphaned %(objtype)s object') class IncompatibleObjectVersion(HeatException): msg_fmt = _('Version %(objver)s of %(objname)s is not supported') class ObjectActionError(HeatException): msg_fmt = _('Object action %(action)s failed because: %(reason)s') class ReadOnlyFieldError(HeatException): msg_fmt = _('Cannot modify readonly field %(field)s') class ConcurrentTransaction(HeatException): msg_fmt = _('Concurrent transaction for %(action)s') class ObjectFieldInvalid(HeatException): msg_fmt = _('Field %(field)s of %(objname)s is not an instance of Field') class KeystoneServiceNameConflict(HeatException): msg_fmt = _("Keystone has more than one service with same name " "%(service)s. Please use service id instead of name") class SIGHUPInterrupt(HeatException): msg_fmt = _("System SIGHUP signal received.") class InvalidServiceVersion(HeatException): msg_fmt = _("Invalid service %(service)s version %(version)s") class InvalidTemplateVersions(HeatException): msg_fmt = _('A template version alias %(version)s was added for a ' 'template class that has no official YYYY-MM-DD version.') class UnableToAutoAllocateNetwork(HeatException): msg_fmt = _('Unable to automatically allocate a network: %(message)s') heat-10.0.2/heat/common/param_utils.py0000666000175000017500000000537713343562337017667 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
heat-10.0.2/heat/common/param_utils.py0000666000175000017500000000537713343562337017663 0ustar zuulzuul00000000000000
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_utils import strutils

from heat.common.i18n import _


def extract_bool(name, value):
    """Convert any true/false string to its corresponding boolean value.

    Value is case insensitive.
    """
    if str(value).lower() not in ('true', 'false'):
        raise ValueError(_('Unrecognized value "%(value)s" for "%(name)s", '
                           'acceptable values are: true, false.')
                         % {'value': value, 'name': name})
    return strutils.bool_from_string(value, strict=True)


def delim_string_to_list(value):
    if value is None:
        return None
    if value == '':
        return []
    return value.split(',')


def extract_int(name, value, allow_zero=True, allow_negative=False):
    if value is None:
        return None

    if not strutils.is_int_like(value):
        raise ValueError(_("Only integer is acceptable by "
                           "'%(name)s'.") % {'name': name})

    if value in ('0', 0):
        if allow_zero:
            return int(value)
        raise ValueError(_("Only non-zero integer is acceptable by "
                           "'%(name)s'.") % {'name': name})

    try:
        result = int(value)
    except (TypeError, ValueError):
        raise ValueError(_("Value '%(value)s' is invalid for '%(name)s' "
                           "which only accepts integer.")
                         % {'name': name, 'value': value})

    if allow_negative is False and result < 0:
        raise ValueError(_("Value '%(value)s' is invalid for '%(name)s' "
                           "which only accepts non-negative integer.")
                         % {'name': name, 'value': value})

    return result


def extract_tags(subject):
    tags = subject.split(',')
    for tag in tags:
        if len(tag) > 80:
            raise ValueError(_('Invalid tag, "%s" is longer than 80 '
                               'characters') % tag)
    return tags


def extract_template_type(subject):
    template_type = subject.lower()
    if template_type not in ('cfn', 'hot'):
        raise ValueError(_('Invalid template type "%(value)s", valid '
                           'types are: cfn, hot.') % {'value': subject})
    return template_type
heat-10.0.2/heat/common/custom_backend_auth.py0000666000175000017500000000411213343562337021333 0ustar zuulzuul00000000000000
#
# Copyright (C) 2012, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Middleware for authenticating against custom backends."""

from oslo_context import context
from oslo_log import log as logging
import webob.exc

from heat.common.i18n import _
from heat.rpc import client as rpc_client

LOG = logging.getLogger(__name__)


class AuthProtocol(object):
    def __init__(self, app, conf):
        self.conf = conf
        self.app = app
        self.rpc_client = rpc_client.EngineClient()

    def __call__(self, env, start_response):
        """Handle incoming request.

        Authenticate and send downstream on success. Reject the request
        if we can't authenticate.
        """
        LOG.debug('Authenticating user token')
        ctx = context.get_current()
        authenticated = self.rpc_client.authenticated_to_backend(ctx)
        if authenticated:
            return self.app(env, start_response)
        else:
            return self._reject_request(env, start_response)

    def _reject_request(self, env, start_response):
        """Redirect client to auth server.
:param env: wsgi request environment :param start_response: wsgi response callback :returns: HTTPUnauthorized http response """ resp = webob.exc.HTTPUnauthorized(_("Backend authentication failed"), []) return resp(env, start_response) def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def auth_filter(app): return AuthProtocol(app, conf) return auth_filter heat-10.0.2/heat/common/wsgi.py0000666000175000017500000012423413343562351016306 0ustar zuulzuul00000000000000# # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Utility methods for working with WSGI servers.""" import abc import errno import os import signal import sys import time import eventlet from eventlet.green import socket from eventlet.green import ssl import eventlet.greenio import eventlet.wsgi import functools from oslo_concurrency import processutils from oslo_config import cfg import oslo_i18n as i18n from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import importutils from paste import deploy import routes import routes.middleware import six import webob.dec import webob.exc from heat.api.aws import exception as aws_exception from heat.common import exception from heat.common.i18n import _ from heat.common import serializers LOG = logging.getLogger(__name__) URL_LENGTH_LIMIT = 50000 api_opts = [ cfg.IPOpt('bind_host', default='0.0.0.0', help=_('Address to bind the server. Useful when ' 'selecting a particular network interface.'), deprecated_group='DEFAULT'), cfg.PortOpt('bind_port', default=8004, help=_('The port on which the server will listen.'), deprecated_group='DEFAULT'), cfg.IntOpt('backlog', default=4096, help=_("Number of backlog requests " "to configure the socket with."), deprecated_group='DEFAULT'), cfg.StrOpt('cert_file', help=_("Location of the SSL certificate file " "to use for SSL mode."), deprecated_group='DEFAULT'), cfg.StrOpt('key_file', help=_("Location of the SSL key file to use " "for enabling SSL mode."), deprecated_group='DEFAULT'), cfg.IntOpt('workers', min=0, default=0, help=_("Number of workers for Heat service. " "Default value 0 means, that service will start number " "of workers equal number of cores on server."), deprecated_group='DEFAULT'), cfg.IntOpt('max_header_line', default=16384, help=_('Maximum line size of message headers to be accepted. ' 'max_header_line may need to be increased when using ' 'large tokens (typically those generated by the ' 'Keystone v3 API with big service catalogs).')), cfg.IntOpt('tcp_keepidle', default=600, help=_('The value for the socket option TCP_KEEPIDLE. 
This is ' 'the time in seconds that the connection must be idle ' 'before TCP starts sending keepalive probes.')), ] api_group = cfg.OptGroup('heat_api') cfg.CONF.register_group(api_group) cfg.CONF.register_opts(api_opts, group=api_group) api_cfn_opts = [ cfg.IPOpt('bind_host', default='0.0.0.0', help=_('Address to bind the server. Useful when ' 'selecting a particular network interface.'), deprecated_group='DEFAULT'), cfg.PortOpt('bind_port', default=8000, help=_('The port on which the server will listen.'), deprecated_group='DEFAULT'), cfg.IntOpt('backlog', default=4096, help=_("Number of backlog requests " "to configure the socket with."), deprecated_group='DEFAULT'), cfg.StrOpt('cert_file', help=_("Location of the SSL certificate file " "to use for SSL mode."), deprecated_group='DEFAULT'), cfg.StrOpt('key_file', help=_("Location of the SSL key file to use " "for enabling SSL mode."), deprecated_group='DEFAULT'), cfg.IntOpt('workers', min=0, default=1, help=_("Number of workers for Heat service."), deprecated_group='DEFAULT'), cfg.IntOpt('max_header_line', default=16384, help=_('Maximum line size of message headers to be accepted. ' 'max_header_line may need to be increased when using ' 'large tokens (typically those generated by the ' 'Keystone v3 API with big service catalogs).')), cfg.IntOpt('tcp_keepidle', default=600, help=_('The value for the socket option TCP_KEEPIDLE. This is ' 'the time in seconds that the connection must be idle ' 'before TCP starts sending keepalive probes.')), ] api_cfn_group = cfg.OptGroup('heat_api_cfn') cfg.CONF.register_group(api_cfn_group) cfg.CONF.register_opts(api_cfn_opts, group=api_cfn_group) api_cw_opts = [ cfg.IPOpt('bind_host', default='0.0.0.0', help=_('Address to bind the server. Useful when ' 'selecting a particular network interface.'), deprecated_group='DEFAULT', deprecated_for_removal=True, deprecated_reason='Heat CloudWatch API has been removed.', deprecated_since='10.0.0'), cfg.PortOpt('bind_port', default=8003, help=_('The port on which the server will listen.'), deprecated_group='DEFAULT', deprecated_for_removal=True, deprecated_reason='Heat CloudWatch API has been removed.', deprecated_since='10.0.0'), cfg.IntOpt('backlog', default=4096, help=_("Number of backlog requests " "to configure the socket with."), deprecated_group='DEFAULT', deprecated_for_removal=True, deprecated_reason='Heat CloudWatch API has been removed.', deprecated_since='10.0.0'), cfg.StrOpt('cert_file', help=_("Location of the SSL certificate file " "to use for SSL mode."), deprecated_group='DEFAULT', deprecated_for_removal=True, deprecated_reason='Heat CloudWatch API has been Removed.', deprecated_since='10.0.0'), cfg.StrOpt('key_file', help=_("Location of the SSL key file to use " "for enabling SSL mode."), deprecated_group='DEFAULT', deprecated_for_removal=True, deprecated_reason='Heat CloudWatch API has been Removed.', deprecated_since='10.0.0'), cfg.IntOpt('workers', min=0, default=1, help=_("Number of workers for Heat service."), deprecated_group='DEFAULT', deprecated_for_removal=True, deprecated_reason='Heat CloudWatch API has been Removed.', deprecated_since='10.0.0'), cfg.IntOpt('max_header_line', default=16384, help=_('Maximum line size of message headers to be accepted. 
' 'max_header_line may need to be increased when using ' 'large tokens (typically those generated by the ' 'Keystone v3 API with big service catalogs.)'), deprecated_for_removal=True, deprecated_reason='Heat CloudWatch API has been Removed.', deprecated_since='10.0.0'), cfg.IntOpt('tcp_keepidle', default=600, help=_('The value for the socket option TCP_KEEPIDLE. This is ' 'the time in seconds that the connection must be idle ' 'before TCP starts sending keepalive probes.'), deprecated_for_removal=True, deprecated_reason='Heat CloudWatch API has been Removed.', deprecated_since='10.0.0') ] api_cw_group = cfg.OptGroup('heat_api_cloudwatch') cfg.CONF.register_group(api_cw_group) cfg.CONF.register_opts(api_cw_opts, group=api_cw_group) wsgi_elt_opts = [ cfg.BoolOpt('wsgi_keep_alive', default=True, help=_("If False, closes the client socket connection " "explicitly.")), cfg.IntOpt('client_socket_timeout', default=900, help=_("Timeout for client connections' socket operations. " "If an incoming connection is idle for this number of " "seconds it will be closed. A value of '0' means " "wait forever.")), ] wsgi_elt_group = cfg.OptGroup('eventlet_opts') cfg.CONF.register_group(wsgi_elt_group) cfg.CONF.register_opts(wsgi_elt_opts, group=wsgi_elt_group) json_size_opt = cfg.IntOpt('max_json_body_size', default=1048576, help=_('Maximum raw byte size of JSON request body.' ' Should be larger than max_template_size.')) cfg.CONF.register_opt(json_size_opt) def list_opts(): yield None, [json_size_opt] yield 'heat_api', api_opts yield 'heat_api_cfn', api_cfn_opts yield 'heat_api_cloudwatch', api_cw_opts yield 'eventlet_opts', wsgi_elt_opts def get_bind_addr(conf, default_port=None): """Return the host and port to bind to.""" return (conf.bind_host, conf.bind_port or default_port) def get_socket(conf, default_port): """Bind socket to bind ip:port in conf. Note: Mostly comes from Swift with a few small changes... :param conf: a cfg.ConfigOpts object :param default_port: port to bind to if none is specified in conf :returns : a socket object as returned from socket.listen or ssl.wrap_socket if conf specifies cert_file """ bind_addr = get_bind_addr(conf, default_port) # TODO(jaypipes): eventlet's greened socket module does not actually # support IPv6 in getaddrinfo(). 
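# Illustrative sketch, not part of heat: the [eventlet_opts] values registered
# above are ultimately handed to eventlet.wsgi.server() by run_server() further
# below. A minimal standalone equivalent of that hand-off (names assumed):
#
#     import eventlet.wsgi
#
#     def demo_serve(sock, app, keep_alive=True, client_socket_timeout=900):
#         # A configured timeout of 0 is passed on as None ("wait forever").
#         eventlet.wsgi.server(sock, app,
#                              keepalive=keep_alive,
#                              socket_timeout=client_socket_timeout or None)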
We need to get around this in the # future or monitor upstream for a fix address_family = [addr[0] for addr in socket.getaddrinfo(bind_addr[0], bind_addr[1], socket.AF_UNSPEC, socket.SOCK_STREAM) if addr[0] in (socket.AF_INET, socket.AF_INET6)][0] cert_file = conf.cert_file key_file = conf.key_file use_ssl = cert_file or key_file if use_ssl and (not cert_file or not key_file): raise RuntimeError(_("When running server in SSL mode, you must " "specify both a cert_file and key_file " "option value in your configuration file")) sock = None retry_until = time.time() + 30 while not sock and time.time() < retry_until: try: sock = eventlet.listen(bind_addr, backlog=conf.backlog, family=address_family) except socket.error as err: if err.args[0] != errno.EADDRINUSE: raise eventlet.sleep(0.1) if not sock: raise RuntimeError(_("Could not bind to %(bind_addr)s " "after trying for 30 seconds") % {'bind_addr': bind_addr}) return sock class Server(object): """Server class to manage multiple WSGI sockets and applications.""" def __init__(self, name, conf, threads=1000): os.umask(0o27) # ensure files are created with the correct privileges self._logger = logging.getLogger("eventlet.wsgi.server") self.name = name self.threads = threads self.children = set() self.stale_children = set() self.running = True self.pgid = os.getpid() self.conf = conf try: os.setpgid(self.pgid, self.pgid) except OSError: self.pgid = 0 def kill_children(self, *args): """Kills the entire process group.""" LOG.error('SIGTERM received') signal.signal(signal.SIGTERM, signal.SIG_IGN) signal.signal(signal.SIGINT, signal.SIG_IGN) self.running = False os.killpg(0, signal.SIGTERM) def hup(self, *args): """Reloads configuration files with zero down time.""" LOG.error('SIGHUP received') signal.signal(signal.SIGHUP, signal.SIG_IGN) raise exception.SIGHUPInterrupt def start(self, application, default_port): """Run a WSGI server with the given application. :param application: The application to run in the WSGI server :param default_port: Port to bind to if none is specified in conf """ eventlet.wsgi.MAX_HEADER_LINE = self.conf.max_header_line self.application = application self.default_port = default_port self.configure_socket() self.start_wsgi() def start_wsgi(self): workers = self.conf.workers # childs == num of cores if workers == 0: childs_num = processutils.get_worker_count() # launch only one GreenPool without childs elif workers == 1: # Useful for profiling, test, debug etc. self.pool = eventlet.GreenPool(size=self.threads) self.pool.spawn_n(self._single_run, self.application, self.sock) return # childs equal specified value of workers else: childs_num = workers LOG.info("Starting %d workers", workers) signal.signal(signal.SIGTERM, self.kill_children) signal.signal(signal.SIGINT, self.kill_children) signal.signal(signal.SIGHUP, self.hup) while len(self.children) < childs_num: self.run_child() def wait_on_children(self): while self.running: try: pid, status = os.wait() if os.WIFEXITED(status) or os.WIFSIGNALED(status): self._remove_children(pid) self._verify_and_respawn_children(pid, status) except OSError as err: if err.errno not in (errno.EINTR, errno.ECHILD): raise except KeyboardInterrupt: LOG.info('Caught keyboard interrupt. Exiting.') os.killpg(0, signal.SIGTERM) break except exception.SIGHUPInterrupt: self.reload() continue eventlet.greenio.shutdown_safe(self.sock) self.sock.close() LOG.debug('Exited') def configure_socket(self, old_conf=None, has_changed=None): """Ensure a socket exists and is appropriately configured. 
This function is called on start up, and can also be called in the event of a configuration reload. When called for the first time a new socket is created. If reloading and either bind_host or bind port have been changed the existing socket must be closed and a new socket opened (laws of physics). In all other cases (bind_host/bind_port have not changed) the existing socket is reused. :param old_conf: Cached old configuration settings (if any) :param has changed: callable to determine if a parameter has changed """ # Do we need a fresh socket? new_sock = (old_conf is None or ( has_changed('bind_host') or has_changed('bind_port'))) # Will we be using https? use_ssl = not (not self.conf.cert_file or not self.conf.key_file) # Were we using https before? old_use_ssl = (old_conf is not None and not ( not old_conf.get('key_file') or not old_conf.get('cert_file'))) # Do we now need to perform an SSL wrap on the socket? wrap_sock = use_ssl is True and (old_use_ssl is False or new_sock) # Do we now need to perform an SSL unwrap on the socket? unwrap_sock = use_ssl is False and old_use_ssl is True if new_sock: self._sock = None if old_conf is not None: self.sock.close() _sock = get_socket(self.conf, self.default_port) _sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) # sockets can hang around forever without keepalive _sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) self._sock = _sock if wrap_sock: self.sock = ssl.wrap_socket(self._sock, certfile=self.conf.cert_file, keyfile=self.conf.key_file) if unwrap_sock: self.sock = self._sock if new_sock and not use_ssl: self.sock = self._sock # Pick up newly deployed certs if old_conf is not None and use_ssl is True and old_use_ssl is True: if has_changed('cert_file'): self.sock.certfile = self.conf.cert_file if has_changed('key_file'): self.sock.keyfile = self.conf.key_file if new_sock or (old_conf is not None and has_changed('tcp_keepidle')): # This option isn't available in the OS X version of eventlet if hasattr(socket, 'TCP_KEEPIDLE'): self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, self.conf.tcp_keepidle) if old_conf is not None and has_changed('backlog'): self.sock.listen(self.conf.backlog) def _remove_children(self, pid): if pid in self.children: self.children.remove(pid) LOG.info('Removed dead child %s', pid) elif pid in self.stale_children: self.stale_children.remove(pid) LOG.info('Removed stale child %s', pid) else: LOG.warning('Unrecognised child %s', pid) def _verify_and_respawn_children(self, pid, status): if len(self.stale_children) == 0: LOG.debug('No stale children') if os.WIFEXITED(status) and os.WEXITSTATUS(status) != 0: LOG.error('Not respawning child %d, cannot ' 'recover from termination', pid) if not self.children and not self.stale_children: LOG.info( 'All workers have terminated. Exiting') self.running = False else: if len(self.children) < self.conf.workers: self.run_child() def stash_conf_values(self): """Make a copy of some of the current global CONF's settings. Allows determining if any of these values have changed when the config is reloaded. """ conf = {} conf['bind_host'] = self.conf.bind_host conf['bind_port'] = self.conf.bind_port conf['backlog'] = self.conf.backlog conf['key_file'] = self.conf.key_file conf['cert_file'] = self.conf.cert_file return conf def reload(self): """Reload and re-apply configuration settings. Existing child processes are sent a SIGHUP signal and will exit after completing existing requests. New child processes, which will have the updated configuration, are spawned. 
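# Summary sketch, not executed here: the zero-downtime reload implemented in
# reload() below boils down to the following sequence (see the method body
# for the authoritative version):
#
#     old_conf = self.stash_conf_values()    # snapshot old settings
#     cfg.CONF.reload_config_files()         # re-read configuration
#     os.killpg(self.pgid, signal.SIGHUP)    # ask children to drain and exit
#     self.stale_children = self.children    # track them until they exit
#     self.children = set()
#     self.configure_socket(old_conf, has_changed)  # rebind only if needed
#     self.start_wsgi()                      # spawn children with new config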
This allows preventing interruption to the service. """ def _has_changed(old, new, param): old = old.get(param) new = getattr(new, param) return (new != old) old_conf = self.stash_conf_values() has_changed = functools.partial(_has_changed, old_conf, self.conf) cfg.CONF.reload_config_files() os.killpg(self.pgid, signal.SIGHUP) self.stale_children = self.children self.children = set() # Ensure any logging config changes are picked up logging.setup(cfg.CONF, self.name) self.configure_socket(old_conf, has_changed) self.start_wsgi() def wait(self): """Wait until all servers have completed running.""" try: if self.children: self.wait_on_children() else: self.pool.waitall() except KeyboardInterrupt: pass def run_child(self): def child_hup(*args): """Shuts down child processes, existing requests are handled.""" signal.signal(signal.SIGHUP, signal.SIG_IGN) eventlet.wsgi.is_accepting = False self.sock.close() pid = os.fork() if pid == 0: signal.signal(signal.SIGHUP, child_hup) signal.signal(signal.SIGTERM, signal.SIG_DFL) # ignore the interrupt signal to avoid a race whereby # a child worker receives the signal before the parent # and is respawned unnecessarily as a result signal.signal(signal.SIGINT, signal.SIG_IGN) # The child has no need to stash the unwrapped # socket, and the reference prevents a clean # exit on sighup self._sock = None self.run_server() LOG.info('Child %d exiting normally', os.getpid()) # self.pool.waitall() is now called in wsgi's server so # it's safe to exit here sys.exit(0) else: LOG.info('Started child %s', pid) self.children.add(pid) def run_server(self): """Run a WSGI server.""" eventlet.wsgi.HttpProtocol.default_request_version = "HTTP/1.0" eventlet.hubs.use_hub('poll') eventlet.patcher.monkey_patch(all=False, socket=True) self.pool = eventlet.GreenPool(size=self.threads) socket_timeout = cfg.CONF.eventlet_opts.client_socket_timeout or None try: eventlet.wsgi.server( self.sock, self.application, custom_pool=self.pool, url_length_limit=URL_LENGTH_LIMIT, log=self._logger, debug=cfg.CONF.debug, keepalive=cfg.CONF.eventlet_opts.wsgi_keep_alive, socket_timeout=socket_timeout) except socket.error as err: if err[0] != errno.EINVAL: raise self.pool.waitall() def _single_run(self, application, sock): """Start a WSGI server in a new green thread.""" LOG.info("Starting single process server") eventlet.wsgi.server(sock, application, custom_pool=self.pool, url_length_limit=URL_LENGTH_LIMIT, log=self._logger, debug=cfg.CONF.debug) class Middleware(object): """Base WSGI middleware wrapper. These classes require an application to be initialized that will be called next. By default the middleware will simply call its wrapped app, or you can override __call__ to customize its behavior. """ def __init__(self, application): self.application = application def process_request(self, req): """Called on each request. If this returns None, the next application down the stack will be executed. If it returns a response then that response will be returned and execution will stop here. """ return None def process_response(self, response): """Do whatever you'd like to the response.""" return response @webob.dec.wsgify def __call__(self, req): response = self.process_request(req) if response: return response response = req.get_response(self.application) return self.process_response(response) class Debug(Middleware): """Helper class to get information about the request and response. Helper class that can be inserted into any WSGI application chain to get information about the request and response. 
""" @webob.dec.wsgify def __call__(self, req): print(("*" * 40) + " REQUEST ENVIRON") for key, value in req.environ.items(): print(key, "=", value) print('') resp = req.get_response(self.application) print(("*" * 40) + " RESPONSE HEADERS") for (key, value) in six.iteritems(resp.headers): print(key, "=", value) print('') resp.app_iter = self.print_generator(resp.app_iter) return resp @staticmethod def print_generator(app_iter): """Prints the contents of a wrapper string iterator when iterated.""" print(("*" * 40) + " BODY") for part in app_iter: sys.stdout.write(part) sys.stdout.flush() yield part print('') def debug_filter(app, conf, **local_conf): return Debug(app) class DefaultMethodController(object): """Controller that handles the OPTIONS request method. This controller handles the OPTIONS request method and any of the HTTP methods that are not explicitly implemented by the application. """ def options(self, req, allowed_methods, *args, **kwargs): """Return a response that includes the 'Allow' header. Return a response that includes the 'Allow' header listing the methods that are implemented. A 204 status code is used for this response. """ raise webob.exc.HTTPNoContent(headers=[('Allow', allowed_methods)]) def reject(self, req, allowed_methods, *args, **kwargs): """Return a 405 method not allowed error. As a convenience, the 'Allow' header with the list of implemented methods is included in the response as well. """ raise webob.exc.HTTPMethodNotAllowed( headers=[('Allow', allowed_methods)]) class Router(object): """WSGI middleware that maps incoming requests to WSGI apps.""" def __init__(self, mapper): """Create a router for the given routes.Mapper. Each route in `mapper` must specify a 'controller', which is a WSGI app to call. You'll probably want to specify an 'action' as well and have your controller be a wsgi.Controller, who will route the request to the action method. Examples: mapper = routes.Mapper() sc = ServerController() # Explicit mapping of one route to a controller+action mapper.connect(None, "/svrlist", controller=sc, action="list") # Actions are all implicitly defined mapper.resource("server", "servers", controller=sc) # Pointing to an arbitrary WSGI app. You can specify the # {path_info:.*} parameter so the target app can be handed just that # section of the URL. mapper.connect(None, "/v1.0/{path_info:.*}", controller=BlogApp()) """ self.map = mapper self._router = routes.middleware.RoutesMiddleware(self._dispatch, self.map) @webob.dec.wsgify def __call__(self, req): """Route the incoming request to a controller based on self.map. If no match, return a 404. """ return self._router @staticmethod @webob.dec.wsgify def _dispatch(req): """Returns controller after matching the incoming request to a route. Called by self._router after matching the incoming request to a route and putting the information into req.environ. Either returns 404 or the routed WSGI app's response. 
""" match = req.environ['wsgiorg.routing_args'][1] if not match: return webob.exc.HTTPNotFound() app = match['controller'] return app class Request(webob.Request): """Add some OpenStack API-specific logic to the base webob.Request.""" def best_match_content_type(self): """Determine the requested response content-type.""" supported = ('application/json',) bm = self.accept.best_match(supported) return bm or 'application/json' def get_content_type(self, allowed_content_types): """Determine content type of the request body.""" if "Content-Type" not in self.headers: raise exception.InvalidContentType(content_type=None) content_type = self.content_type if content_type not in allowed_content_types: raise exception.InvalidContentType(content_type=content_type) else: return content_type def best_match_language(self): """Determines best available locale from the Accept-Language header. :returns: the best language match or None if the 'Accept-Language' header was not available in the request. """ if not self.accept_language: return None all_languages = i18n.get_available_languages('heat') return self.accept_language.best_match(all_languages) def is_json_content_type(request): if request.method == 'GET': try: aws_content_type = request.params.get("ContentType") except Exception: aws_content_type = None # respect aws_content_type when both available content_type = aws_content_type or request.content_type else: content_type = request.content_type # bug #1887882 # for back compatible for null or plain content type if not content_type or content_type.startswith('text/plain'): content_type = 'application/json' if (content_type in ('JSON', 'application/json') and request.body.startswith(b'{')): return True return False class JSONRequestDeserializer(object): def has_body(self, request): """Returns whether a Webob.Request object will possess an entity body. :param request: Webob.Request object """ if (int(request.content_length or 0) > 0 and is_json_content_type(request)): return True return False def from_json(self, datastring): try: if len(datastring) > cfg.CONF.max_json_body_size: msg = _('JSON body size (%(len)s bytes) exceeds maximum ' 'allowed size (%(limit)s bytes).' ) % {'len': len(datastring), 'limit': cfg.CONF.max_json_body_size} raise exception.RequestLimitExceeded(message=msg) return jsonutils.loads(datastring) except ValueError as ex: raise webob.exc.HTTPBadRequest(six.text_type(ex)) def default(self, request): if self.has_body(request): return {'body': self.from_json(request.body)} else: return {} class Resource(object): """WSGI app that handles (de)serialization and controller dispatch. Reads routing information supplied by RoutesMiddleware and calls the requested action method upon its deserializer, controller, and serializer. Those three objects may implement any of the basic controller action methods (create, update, show, index, delete) along with any that may be specified in the api router. A 'default' method may also be implemented to be used in place of any non-implemented actions. Deserializer methods must accept a request argument and return a dictionary. Controller methods must accept a request argument. Additionally, they must also accept keyword arguments that represent the keys returned by the Deserializer. They may raise a webob.exc exception or return a dict, which will be serialized by requested content type. """ def __init__(self, controller, deserializer, serializer=None): """Initialisation of the WSGI app. 
:param controller: object that implement methods created by routes lib :param deserializer: object that supports webob request deserialization through controller-like actions :param serializer: object that supports webob response serialization through controller-like actions """ self.controller = controller self.deserializer = deserializer self.serializer = serializer @webob.dec.wsgify(RequestClass=Request) def __call__(self, request): """WSGI method that controls (de)serialization and method dispatch.""" action_args = self.get_action_args(request.environ) action = action_args.pop('action', None) # From reading the boto code, and observation of real AWS api responses # it seems that the AWS api ignores the content-type in the html header # Instead it looks at a "ContentType" GET query parameter # This doesn't seem to be documented in the AWS cfn API spec, but it # would appear that the default response serialization is XML, as # described in the API docs, but passing a query parameter of # ContentType=JSON results in a JSON serialized response... content_type = request.params.get("ContentType") try: deserialized_request = self.dispatch(self.deserializer, action, request) action_args.update(deserialized_request) LOG.debug(('Calling %(controller)s : %(action)s'), {'controller': self.controller, 'action': action}) action_result = self.dispatch(self.controller, action, request, **action_args) except TypeError as err: LOG.error('Exception handling resource: %s', err) msg = _('The server could not comply with the request since ' 'it is either malformed or otherwise incorrect.') err = webob.exc.HTTPBadRequest(msg) http_exc = translate_exception(err, request.best_match_language()) # NOTE(luisg): We disguise HTTP exceptions, otherwise they will be # treated by wsgi as responses ready to be sent back and they # won't make it into the pipeline app that serializes errors raise exception.HTTPExceptionDisguise(http_exc) except webob.exc.HTTPException as err: if isinstance(err, aws_exception.HeatAPIException): # The AWS compatible API's don't use faultwrap, so # we want to detect the HeatAPIException subclasses # and raise rather than wrapping in HTTPExceptionDisguise raise if not isinstance(err, webob.exc.HTTPError): # Some HTTPException are actually not errors, they are # responses ready to be sent back to the users, so we don't # error log, disguise or translate those raise if isinstance(err, webob.exc.HTTPServerError): LOG.error( "Returning %(code)s to user: %(explanation)s", {'code': err.code, 'explanation': err.explanation}) http_exc = translate_exception(err, request.best_match_language()) raise exception.HTTPExceptionDisguise(http_exc) except exception.HeatException as err: raise translate_exception(err, request.best_match_language()) except Exception as err: log_exception(err, sys.exc_info()) raise translate_exception(err, request.best_match_language()) # Here we support either passing in a serializer or detecting it # based on the content type. 
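# Equivalent one-liner, sketch only: the try block below reduces to "an
# explicitly injected serializer wins; the AWS-style ContentType=JSON query
# parameter selects JSON; anything else falls back to XML":
#
#     serializer = self.serializer or (
#         serializers.JSONResponseSerializer() if content_type == "JSON"
#         else serializers.XMLResponseSerializer())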
try: serializer = self.serializer if serializer is None: if content_type == "JSON": serializer = serializers.JSONResponseSerializer() else: serializer = serializers.XMLResponseSerializer() response = webob.Response(request=request) self.dispatch(serializer, action, response, action_result) return response # return unserializable result (typically an exception) except Exception: # Here we should get API exceptions derived from HeatAPIException # these implement get_unserialized_body(), which allow us to get # a dict containing the unserialized error response. # We only need to serialize for JSON content_type, as the # exception body is pre-serialized to the default XML in the # HeatAPIException constructor # If we get something else here (e.g a webob.exc exception), # this will fail, and we just return it without serializing, # which will not conform to the expected AWS error response format if content_type == "JSON": try: err_body = action_result.get_unserialized_body() serializer.default(action_result, err_body) except Exception: LOG.warning("Unable to serialize exception response") return action_result def dispatch(self, obj, action, *args, **kwargs): """Find action-specific method on self and call it.""" try: method = getattr(obj, action) except AttributeError: method = getattr(obj, 'default') return method(*args, **kwargs) def get_action_args(self, request_environment): """Parse dictionary created by routes library.""" try: args = request_environment['wsgiorg.routing_args'][1].copy() except Exception: return {} try: del args['controller'] except KeyError: pass try: del args['format'] except KeyError: pass return args def log_exception(err, exc_info): args = {'exc_info': exc_info} if cfg.CONF.debug else {} LOG.error("Unexpected error occurred serving API: %s", err, **args) def translate_exception(exc, locale): """Translates all translatable elements of the given exception.""" if isinstance(exc, exception.HeatException): exc.message = i18n.translate(exc.message, locale) else: err_msg = encodeutils.exception_to_unicode(exc) exc.message = i18n.translate(err_msg, locale) if isinstance(exc, webob.exc.HTTPError): exc.explanation = i18n.translate(exc.explanation, locale) exc.detail = i18n.translate(getattr(exc, 'detail', ''), locale) return exc @six.add_metaclass(abc.ABCMeta) class BasePasteFactory(object): """A base class for paste app and filter factories. Sub-classes must override the KEY class attribute and provide a __call__ method. """ KEY = None def __init__(self, conf): self.conf = conf @abc.abstractmethod def __call__(self, global_conf, **local_conf): return def _import_factory(self, local_conf): """Import an app/filter class. Lookup the KEY from the PasteDeploy local conf and import the class named there. This class can then be used as an app or filter factory. Note we support the : format. Note also that if you do e.g. key = value then ConfigParser returns a value with a leading newline, so we strip() the value before using it. """ class_name = local_conf[self.KEY].replace(':', '.').strip() return importutils.import_class(class_name) class AppFactory(BasePasteFactory): """A Generic paste.deploy app factory. This requires heat.app_factory to be set to a callable which returns a WSGI app when invoked. The format of the name is : e.g. [app:apiv1app] paste.app_factory = heat.common.wsgi:app_factory heat.app_factory = heat.api.cfn.v1:API The WSGI app constructor must accept a ConfigOpts object and a local config dict as its two arguments. 
""" KEY = 'heat.app_factory' def __call__(self, global_conf, **local_conf): """The actual paste.app_factory protocol method.""" factory = self._import_factory(local_conf) return factory(self.conf, **local_conf) class FilterFactory(AppFactory): """A Generic paste.deploy filter factory. This requires heat.filter_factory to be set to a callable which returns a WSGI filter when invoked. The format is : e.g. [filter:cache] paste.filter_factory = heat.common.wsgi:filter_factory heat.filter_factory = heat.api.middleware.cache:CacheFilter The WSGI filter constructor must accept a WSGI app, a ConfigOpts object and a local config dict as its three arguments. """ KEY = 'heat.filter_factory' def __call__(self, global_conf, **local_conf): """The actual paste.filter_factory protocol method.""" factory = self._import_factory(local_conf) def filter(app): return factory(app, self.conf, **local_conf) return filter def setup_paste_factories(conf): """Set up the generic paste app and filter factories. Set things up so that: paste.app_factory = heat.common.wsgi:app_factory and paste.filter_factory = heat.common.wsgi:filter_factory work correctly while loading PasteDeploy configuration. The app factories are constructed at runtime to allow us to pass a ConfigOpts object to the WSGI classes. :param conf: a ConfigOpts object """ global app_factory, filter_factory app_factory = AppFactory(conf) filter_factory = FilterFactory(conf) def teardown_paste_factories(): """Reverse the effect of setup_paste_factories().""" global app_factory, filter_factory del app_factory del filter_factory def paste_deploy_app(paste_config_file, app_name, conf): """Load a WSGI app from a PasteDeploy configuration. Use deploy.loadapp() to load the app from the PasteDeploy configuration, ensuring that the supplied ConfigOpts object is passed to the app and filter constructors. :param paste_config_file: a PasteDeploy config file :param app_name: the name of the app/pipeline to load from the file :param conf: a ConfigOpts object to supply to the app and its filters :returns: the WSGI app """ setup_paste_factories(conf) try: return deploy.loadapp("config:%s" % paste_config_file, name=app_name) finally: teardown_paste_factories() heat-10.0.2/heat/common/config.py0000666000175000017500000006240013343562351016576 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Routines for configuring Heat.""" import os from eventlet.green import socket from oslo_config import cfg from oslo_log import log as logging from oslo_middleware import cors from osprofiler import opts as profiler from heat.common import exception from heat.common.i18n import _ from heat.common import wsgi LOG = logging.getLogger(__name__) paste_deploy_group = cfg.OptGroup('paste_deploy') paste_deploy_opts = [ cfg.StrOpt('flavor', help=_("The flavor to use.")), cfg.StrOpt('api_paste_config', default="api-paste.ini", help=_("The API paste config file to use."))] service_opts = [ cfg.IntOpt('periodic_interval', default=60, help=_('Seconds between running periodic tasks.')), cfg.StrOpt('heat_metadata_server_url', help=_('URL of the Heat metadata server. ' 'NOTE: Setting this is only needed if you require ' 'instances to use a different endpoint than in the ' 'keystone catalog')), cfg.StrOpt('heat_waitcondition_server_url', help=_('URL of the Heat waitcondition server.')), cfg.StrOpt('heat_watch_server_url', default="", deprecated_for_removal=True, deprecated_reason='Heat CloudWatch Service has been removed.', deprecated_since='10.0.0', help=_('URL of the Heat CloudWatch server.')), cfg.StrOpt('instance_connection_is_secure', default="0", help=_('Instance connection to CFN/CW API via https.')), cfg.StrOpt('instance_connection_https_validate_certificates', default="1", help=_('Instance connection to CFN/CW API validate certs if ' 'SSL is used.')), cfg.StrOpt('region_name_for_services', help=_('Default region name used to get services endpoints.')), cfg.StrOpt('heat_stack_user_role', default="heat_stack_user", help=_('Keystone role for heat template-defined users.')), cfg.StrOpt('stack_user_domain_id', deprecated_opts=[cfg.DeprecatedOpt('stack_user_domain', group=None)], help=_('Keystone domain ID which contains heat ' 'template-defined users. If this option is set, ' 'stack_user_domain_name option will be ignored.')), cfg.StrOpt('stack_user_domain_name', help=_('Keystone domain name which contains heat ' 'template-defined users. If `stack_user_domain_id` ' 'option is set, this option is ignored.')), cfg.StrOpt('stack_domain_admin', help=_('Keystone username, a user with roles sufficient to ' 'manage users and projects in the stack_user_domain.')), cfg.StrOpt('stack_domain_admin_password', secret=True, help=_('Keystone password for stack_domain_admin user.')), cfg.IntOpt('max_template_size', default=524288, help=_('Maximum raw byte size of any template.')), cfg.IntOpt('max_nested_stack_depth', default=5, help=_('Maximum depth allowed when using nested stacks.')), cfg.IntOpt('num_engine_workers', help=_('Number of heat-engine processes to fork and run. 
' 'Will default to either to 4 or number of CPUs on ' 'the host, whichever is greater.'))] engine_opts = [ cfg.ListOpt('plugin_dirs', default=['/usr/lib64/heat', '/usr/lib/heat', '/usr/local/lib/heat', '/usr/local/lib64/heat'], help=_('List of directories to search for plug-ins.')), cfg.StrOpt('environment_dir', default='/etc/heat/environment.d', help=_('The directory to search for environment files.')), cfg.StrOpt('template_dir', default='/etc/heat/templates', help=_('The directory to search for template files.')), cfg.StrOpt('deferred_auth_method', choices=['password', 'trusts'], default='trusts', deprecated_for_removal=True, deprecated_reason='Stored password based deferred auth is ' 'broken when used with keystone v3 and ' 'is not supported.', deprecated_since='9.0.0', help=_('Select deferred auth method, ' 'stored password or trusts.')), cfg.StrOpt('reauthentication_auth_method', choices=['', 'trusts'], default='', help=_('Allow reauthentication on token expiry, such that' ' long-running tasks may complete. Note this defeats' ' the expiry of any provided user tokens.')), cfg.ListOpt('trusts_delegated_roles', default=[], help=_('Subset of trustor roles to be delegated to heat.' ' If left unset, all roles of a user will be' ' delegated to heat when creating a stack.')), cfg.IntOpt('max_resources_per_stack', default=1000, help=_('Maximum resources allowed per top-level stack. ' '-1 stands for unlimited.')), cfg.IntOpt('max_stacks_per_tenant', default=100, help=_('Maximum number of stacks any one tenant may have' ' active at one time.')), cfg.IntOpt('action_retry_limit', default=5, help=_('Number of times to retry to bring a ' 'resource to a non-error state. Set to 0 to disable ' 'retries.')), cfg.IntOpt('client_retry_limit', default=2, help=_('Number of times to retry when a client encounters an ' 'expected intermittent error. Set to 0 to disable ' 'retries.')), # Server host name limit to 53 characters by due to typical default # linux HOST_NAME_MAX of 64, minus the .novalocal appended to the name cfg.IntOpt('max_server_name_length', default=53, max=53, help=_('Maximum length of a server name to be used ' 'in nova.')), cfg.IntOpt('max_interface_check_attempts', min=1, default=10, help=_('Number of times to check whether an interface has ' 'been attached or detached.')), cfg.IntOpt('event_purge_batch_size', min=1, default=200, help=_("Controls how many events will be pruned whenever a " "stack's events are purged. Set this " "lower to keep more events at the expense of more " "frequent purges.")), cfg.IntOpt('max_events_per_stack', default=1000, help=_('Rough number of maximum events that will be available ' 'per stack. Actual number of events can be a bit ' 'higher since purge checks take place randomly ' '200/event_purge_batch_size percent of the time. ' 'Older events are deleted when events are purged. ' 'Set to 0 for unlimited events per stack.')), cfg.IntOpt('stack_action_timeout', default=3600, help=_('Timeout in seconds for stack action (ie. 
create or' ' update).')), cfg.IntOpt('error_wait_time', default=240, help=_('The amount of time in seconds after an error has' ' occurred that tasks may continue to run before' ' being cancelled.')), cfg.IntOpt('engine_life_check_timeout', default=2, help=_('RPC timeout for the engine liveness check that is used' ' for stack locking.')), cfg.BoolOpt('enable_cloud_watch_lite', default=False, deprecated_for_removal=True, deprecated_reason='Heat CloudWatch Service has been removed.', deprecated_since='10.0.0', help=_('Enable the legacy OS::Heat::CWLiteAlarm resource.')), cfg.BoolOpt('enable_stack_abandon', default=False, help=_('Enable the preview Stack Abandon feature.')), cfg.BoolOpt('enable_stack_adopt', default=False, help=_('Enable the preview Stack Adopt feature.')), cfg.BoolOpt('convergence_engine', default=True, help=_('Enables engine with convergence architecture. All ' 'stacks with this option will be created using ' 'convergence engine.')), cfg.BoolOpt('observe_on_update', default=False, help=_('On update, enables heat to collect existing resource ' 'properties from reality and converge to ' 'updated template.')), cfg.StrOpt('default_software_config_transport', choices=['POLL_SERVER_CFN', 'POLL_SERVER_HEAT', 'POLL_TEMP_URL', 'ZAQAR_MESSAGE'], default='POLL_SERVER_CFN', help=_('Template default for how the server should receive the ' 'metadata required for software configuration. ' 'POLL_SERVER_CFN will allow calls to the cfn API action ' 'DescribeStackResource authenticated with the provided ' 'keypair (requires enabled heat-api-cfn). ' 'POLL_SERVER_HEAT will allow calls to the ' 'Heat API resource-show using the provided keystone ' 'credentials (requires keystone v3 API, and configured ' 'stack_user_* config options). ' 'POLL_TEMP_URL will create and populate a ' 'Swift TempURL with metadata for polling (requires ' 'object-store endpoint which supports TempURL).' 'ZAQAR_MESSAGE will create a dedicated zaqar queue and ' 'post the metadata for polling.')), cfg.StrOpt('default_deployment_signal_transport', choices=['CFN_SIGNAL', 'TEMP_URL_SIGNAL', 'HEAT_SIGNAL', 'ZAQAR_SIGNAL'], default='CFN_SIGNAL', help=_('Template default for how the server should signal to ' 'heat with the deployment output values. CFN_SIGNAL ' 'will allow an HTTP POST to a CFN keypair signed URL ' '(requires enabled heat-api-cfn). ' 'TEMP_URL_SIGNAL will create a Swift TempURL to be ' 'signaled via HTTP PUT (requires object-store endpoint ' 'which supports TempURL). ' 'HEAT_SIGNAL will allow calls to the Heat API ' 'resource-signal using the provided keystone ' 'credentials. ZAQAR_SIGNAL will create a dedicated ' 'zaqar queue to be signaled using the provided keystone ' 'credentials.')), cfg.StrOpt('default_user_data_format', choices=['HEAT_CFNTOOLS', 'RAW', 'SOFTWARE_CONFIG'], default='HEAT_CFNTOOLS', help=_('Template default for how the user_data should be ' 'formatted for the server. For HEAT_CFNTOOLS, the ' 'user_data is bundled as part of the heat-cfntools ' 'cloud-init boot configuration data. For RAW the ' 'user_data is passed to Nova unmodified. For ' 'SOFTWARE_CONFIG user_data is bundled as part of the ' 'software config data, and metadata is derived from any ' 'associated SoftwareDeployment resources.')), cfg.ListOpt('hidden_stack_tags', default=['data-processing-cluster'], help=_('Stacks containing these tag names will be hidden. ' 'Multiple tags should be given in a comma-delimited ' 'list (eg. 
hidden_stack_tags=hide_me,me_too).')), cfg.StrOpt('onready', help=_('Deprecated.')), cfg.BoolOpt('stack_scheduler_hints', default=False, help=_('When this feature is enabled, scheduler hints' ' identifying the heat stack context of a server' ' or volume resource are passed to the configured' ' schedulers in nova and cinder, for creates done' ' using heat resource types OS::Cinder::Volume,' ' OS::Nova::Server, and AWS::EC2::Instance.' ' heat_root_stack_id will be set to the id of the' ' root stack of the resource, heat_stack_id will be' ' set to the id of the resource\'s parent stack,' ' heat_stack_name will be set to the name of the' ' resource\'s parent stack, heat_path_in_stack will' ' be set to a list of comma delimited strings of' ' stackresourcename and stackname with list[0] being' ' \'rootstackname\', heat_resource_name will be set to' ' the resource\'s name, and heat_resource_uuid will be' ' set to the resource\'s orchestration id.')), cfg.BoolOpt('encrypt_parameters_and_properties', default=False, help=_('Encrypt template parameters that were marked as' ' hidden and also all the resource properties before' ' storing them in database.'))] rpc_opts = [ cfg.StrOpt('host', default=socket.gethostname(), sample_default='', help=_('Name of the engine node. ' 'This can be an opaque identifier. ' 'It is not necessarily a hostname, FQDN, ' 'or IP address.'))] auth_password_group = cfg.OptGroup('auth_password') auth_password_opts = [ cfg.BoolOpt('multi_cloud', default=False, help=_('Allow orchestration of multiple clouds.')), cfg.ListOpt('allowed_auth_uris', default=[], help=_('Allowed keystone endpoints for auth_uri when ' 'multi_cloud is enabled. At least one endpoint needs ' 'to be specified.'))] # these options define baseline defaults that apply to all clients default_clients_opts = [ cfg.StrOpt('endpoint_type', default='publicURL', help=_( 'Type of endpoint in Identity service catalog to use ' 'for communication with the OpenStack service.')), cfg.StrOpt('ca_file', help=_('Optional CA cert file to use in SSL connections.')), cfg.StrOpt('cert_file', help=_('Optional PEM-formatted certificate chain file.')), cfg.StrOpt('key_file', help=_('Optional PEM-formatted file that contains the ' 'private key.')), cfg.BoolOpt('insecure', default=False, help=_("If set, then the server's certificate will not " "be verified."))] # these options can be defined for each client # they must not specify defaults, since any options not defined in a client # specific group is looked up on the generic group above clients_opts = [ cfg.StrOpt('endpoint_type', help=_( 'Type of endpoint in Identity service catalog to use ' 'for communication with the OpenStack service.')), cfg.StrOpt('ca_file', help=_('Optional CA cert file to use in SSL connections.')), cfg.StrOpt('cert_file', help=_('Optional PEM-formatted certificate chain file.')), cfg.StrOpt('key_file', help=_('Optional PEM-formatted file that contains the ' 'private key.')), cfg.BoolOpt('insecure', help=_("If set, then the server's certificate will not " "be verified."))] heat_client_opts = [ cfg.StrOpt('url', default='', help=_('Optional heat url in format like' ' http://0.0.0.0:8004/v1/%(tenant_id)s.'))] keystone_client_opts = [ cfg.StrOpt('auth_uri', default='', help=_('Unversioned keystone url in format like' ' http://0.0.0.0:5000.'))] client_http_log_debug_opts = [ cfg.BoolOpt('http_log_debug', default=False, help=_("Allow client's debug log output."))] revision_group = cfg.OptGroup('revision') revision_opts = [ cfg.StrOpt('heat_revision', 
default='unknown', help=_('Heat build revision. ' 'If you would prefer to manage your build revision ' 'separately, you can move this section to a different ' 'file and add it as another config option.'))] volumes_group = cfg.OptGroup('volumes') volumes_opts = [ cfg.BoolOpt('backups_enabled', default=True, help=_("Indicate if cinder-backup service is enabled. " "This is a temporary workaround until cinder-backup " "service becomes discoverable, see LP#1334856."))] noauth_group = cfg.OptGroup('noauth') noauth_opts = [ cfg.StrOpt('token_response', default='', help=_("JSON file containing the content returned by the " "noauth middleware."))] def startup_sanity_check(): if (not cfg.CONF.stack_user_domain_id and not cfg.CONF.stack_user_domain_name): # FIXME(shardy): Legacy fallback for folks using old heat.conf # files which lack domain configuration LOG.warning('stack_user_domain_id or stack_user_domain_name not ' 'set in heat.conf falling back to using default') else: domain_admin_user = cfg.CONF.stack_domain_admin domain_admin_password = cfg.CONF.stack_domain_admin_password if not (domain_admin_user and domain_admin_password): raise exception.Error(_('heat.conf misconfigured, cannot ' 'specify "stack_user_domain_id" or ' '"stack_user_domain_name" without ' '"stack_domain_admin" and ' '"stack_domain_admin_password"')) auth_key_len = len(cfg.CONF.auth_encryption_key) if auth_key_len in (16, 24): LOG.warning( 'Please update auth_encryption_key to be 32 characters.') elif auth_key_len != 32: raise exception.Error(_('heat.conf misconfigured, auth_encryption_key ' 'must be 32 characters')) def list_opts(): yield None, rpc_opts yield None, engine_opts yield None, service_opts yield paste_deploy_group.name, paste_deploy_opts yield auth_password_group.name, auth_password_opts yield revision_group.name, revision_opts yield volumes_group.name, volumes_opts yield noauth_group.name, noauth_opts yield profiler.list_opts()[0] yield 'clients', default_clients_opts for client in ('aodh', 'barbican', 'ceilometer', 'cinder', 'designate', 'glance', 'heat', 'keystone', 'magnum', 'manila', 'mistral', 'monasca', 'neutron', 'nova', 'octavia', 'sahara', 'senlin', 'swift', 'trove', 'zaqar' ): client_specific_group = 'clients_' + client yield client_specific_group, clients_opts yield 'clients_heat', heat_client_opts yield 'clients_keystone', keystone_client_opts yield 'clients_nova', client_http_log_debug_opts yield 'clients_cinder', client_http_log_debug_opts cfg.CONF.register_group(paste_deploy_group) cfg.CONF.register_group(auth_password_group) cfg.CONF.register_group(revision_group) profiler.set_defaults(cfg.CONF) for group, opts in list_opts(): cfg.CONF.register_opts(opts, group=group) def _get_deployment_flavor(): """Retrieves the paste_deploy.flavor config item. Item formatted appropriately for appending to the application name. """ flavor = cfg.CONF.paste_deploy.flavor return '' if not flavor else ('-' + flavor) def _get_deployment_config_file(): """Retrieves the deployment_config_file config item. Item formatted as an absolute pathname. """ config_path = cfg.CONF.find_file( cfg.CONF.paste_deploy['api_paste_config']) if config_path is None: return None return os.path.abspath(config_path) def load_paste_app(app_name=None): """Builds and returns a WSGI app from a paste config file. We assume the last config file specified in the supplied ConfigOpts object is the paste config file. 
:param app_name: name of the application to load :raises RuntimeError when config file cannot be located or application cannot be loaded from config file """ if app_name is None: app_name = cfg.CONF.prog # append the deployment flavor to the application name, # in order to identify the appropriate paste pipeline app_name += _get_deployment_flavor() conf_file = _get_deployment_config_file() if conf_file is None: raise RuntimeError(_("Unable to locate config file [%s]") % cfg.CONF.paste_deploy['api_paste_config']) try: app = wsgi.paste_deploy_app(conf_file, app_name, cfg.CONF) # Log the options used when starting if we're in debug mode... if cfg.CONF.debug: cfg.CONF.log_opt_values(logging.getLogger(app_name), logging.DEBUG) return app except (LookupError, ImportError) as e: raise RuntimeError(_("Unable to load %(app_name)s from " "configuration file %(conf_file)s." "\nGot: %(e)r") % {'app_name': app_name, 'conf_file': conf_file, 'e': e}) def get_client_option(client, option): # look for the option in the [clients_${client}] section # unknown options raise cfg.NoSuchOptError try: group_name = 'clients_' + client cfg.CONF.import_opt(option, 'heat.common.config', group=group_name) v = getattr(getattr(cfg.CONF, group_name), option) if v is not None: return v except cfg.NoSuchGroupError: pass # do not error if the client is unknown # look for the option in the generic [clients] section cfg.CONF.import_opt(option, 'heat.common.config', group='clients') return getattr(cfg.CONF.clients, option) def get_ssl_options(client): # Look for the ssl options in the [clients_${client}] section cacert = get_client_option(client, 'ca_file') insecure = get_client_option(client, 'insecure') cert = get_client_option(client, 'cert_file') key = get_client_option(client, 'key_file') if insecure: verify = False else: verify = cacert or True if cert and key: cert = (cert, key) return {'verify': verify, 'cert': cert} def set_config_defaults(): """This method updates all configuration default values.""" # CORS Defaults # TODO(krotscheck): Update with https://review.openstack.org/#/c/285368/ cfg.set_defaults(cors.CORS_OPTS, allow_headers=['X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'], expose_headers=['X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID'], allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] ) heat-10.0.2/heat/common/profiler.py0000666000175000017500000000326013343562337017156 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
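# Illustrative sketch, not part of heat: get_client_option() in config.py
# above checks the [clients_<name>] section first and falls back to the
# generic [clients] section; get_ssl_options() turns those values into
# requests-style keyword arguments, e.g. (values assumed):
#
#     opts = config.get_ssl_options('nova')
#     # -> {'verify': '/path/to/ca.pem', 'cert': ('client.pem', 'client.key')}
#     # -> {'verify': False, 'cert': None} when insecure is set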
from oslo_config import cfg from oslo_log import log as logging import osprofiler.initializer import osprofiler.web from heat.common import context cfg.CONF.import_opt('enabled', 'heat.common.config', group='profiler') LOG = logging.getLogger(__name__) def setup(binary, host): if cfg.CONF.profiler.enabled: osprofiler.initializer.init_from_conf( conf=cfg.CONF, context=context.get_admin_context().to_dict(), project="heat", service=binary, host=host) LOG.warning("OSProfiler is enabled.\nThis means that anyone who " "knows any of the hmac_keys specified in " "/etc/heat/heat.conf can trace their requests.\n" "In practice only the operator can read this file, so " "there is no security issue. Note that even if someone " "can trigger the profiler, only an admin user can " "retrieve trace information.\n" "To disable OSProfiler, set in heat.conf:\n" "[profiler]\nenabled=false") else: osprofiler.web.disable() heat-10.0.2/heat/rpc/0000775000175000017500000000000013343562672014255 5ustar zuulzuul00000000000000heat-10.0.2/heat/rpc/worker_client.py0000666000175000017500000000516413343562340017476 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Client side of the heat worker RPC API.""" from heat.common import messaging from heat.rpc import worker_api class WorkerClient(object): """Client side of the heat worker RPC API. API version history:: 1.0 - Initial version. 1.1 - Added check_resource. 1.2 - Add adopt data argument to check_resource. 1.3 - Added cancel_check_resource API. 1.4 - Add converge argument to check_resource """ BASE_RPC_API_VERSION = '1.0' def __init__(self): self._client = messaging.get_rpc_client( topic=worker_api.TOPIC, version=self.BASE_RPC_API_VERSION) @staticmethod def make_msg(method, **kwargs): return method, kwargs def cast(self, ctxt, msg, version=None): method, kwargs = msg if version is not None: client = self._client.prepare(version=version) else: client = self._client client.cast(ctxt, method, **kwargs) def check_resource(self, ctxt, resource_id, current_traversal, data, is_update, adopt_stack_data, converge=False): self.cast(ctxt, self.make_msg( 'check_resource', resource_id=resource_id, current_traversal=current_traversal, data=data, is_update=is_update, adopt_stack_data=adopt_stack_data, converge=converge ), version='1.4') def cancel_check_resource(self, ctxt, stack_id, engine_id): """Send check-resource cancel message. Sends a cancel message to given heat engine worker. """ _client = messaging.get_rpc_client( topic=worker_api.TOPIC, version=self.BASE_RPC_API_VERSION, server=engine_id) method, kwargs = self.make_msg('cancel_check_resource', stack_id=stack_id) cl = _client.prepare(version='1.3') cl.cast(ctxt, method, **kwargs) heat-10.0.2/heat/rpc/listener_client.py0000666000175000017500000000270113343562340020004 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Client side of the heat listener RPC API.""" from oslo_config import cfg import oslo_messaging as messaging from heat.common import messaging as rpc_messaging from heat.rpc import api as rpc_api cfg.CONF.import_opt('engine_life_check_timeout', 'heat.common.config') class EngineListenerClient(object): """Client side of the heat listener RPC API. API version history:: 1.0 - Initial version. """ BASE_RPC_API_VERSION = '1.0' def __init__(self, engine_id): _client = rpc_messaging.get_rpc_client( topic=rpc_api.LISTENER_TOPIC, version=self.BASE_RPC_API_VERSION, server=engine_id) self._client = _client.prepare( timeout=cfg.CONF.engine_life_check_timeout) def is_alive(self, ctxt): try: return self._client.call(ctxt, 'listening') except messaging.MessagingTimeout: return False heat-10.0.2/heat/rpc/client.py0000666000175000017500000011242413343562351016105 0ustar zuulzuul00000000000000# # Copyright 2012, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Client side of the heat engine RPC API.""" import warnings from oslo_utils import excutils from oslo_utils import reflection from heat.common import messaging from heat.rpc import api as rpc_api class EngineClient(object): """Client side of the heat engine rpc API. API version history:: 1.0 - Initial version.
1.1 - Add support_status argument to list_resource_types() 1.4 - Add support for service list 1.9 - Add template_type option to generate_template() 1.10 - Add support for software config list 1.11 - Add support for template versions list 1.12 - Add with_detail option for stack resources list 1.13 - Add support for template functions list 1.14 - Add cancel_with_rollback option to stack_cancel_update 1.15 - Add preview_update_stack() call 1.16 - Adds version, type_name to list_resource_types() 1.17 - Add files to validate_template 1.18 - Add show_nested to validate_template 1.19 - Add show_output and list_outputs for returning stack outputs 1.20 - Add resolve_outputs to stack show 1.21 - Add deployment_id to create_software_deployment 1.22 - Add support for stack export 1.23 - Add environment_files to create/update/preview/validate 1.24 - Adds ignorable_errors to validate_template 1.25 - list_stack_resource filter update 1.26 - Add mark_unhealthy 1.27 - Add check_software_deployment 1.28 - Add get_environment call 1.29 - Add template_id to create_stack/update_stack 1.30 - Add possibility to resource_type_* return descriptions 1.31 - Add nested_depth to list_events, when nested_depth is specified add root_stack_id to response 1.32 - Add get_files call 1.33 - Remove tenant_safe from list_stacks, count_stacks and list_software_configs 1.34 - Add migrate_convergence_1 call 1.35 - Add with_condition to list_template_functions """ BASE_RPC_API_VERSION = '1.0' def __init__(self): self._client = messaging.get_rpc_client( topic=rpc_api.ENGINE_TOPIC, version=self.BASE_RPC_API_VERSION) @staticmethod def make_msg(method, **kwargs): return method, kwargs def call(self, ctxt, msg, version=None, timeout=None): method, kwargs = msg if version is not None: client = self._client.prepare(version=version) else: client = self._client if timeout is not None: client = client.prepare(timeout=timeout) return client.call(ctxt, method, **kwargs) def cast(self, ctxt, msg, version=None): method, kwargs = msg if version is not None: client = self._client.prepare(version=version) else: client = self._client return client.cast(ctxt, method, **kwargs) def local_error_name(self, error): """Returns the name of the error with any _Remote postfix removed. :param error: Remote raised error to derive the name from. """ error_name = reflection.get_class_name(error, fully_qualified=False) return error_name.split('_Remote')[0] def ignore_error_by_name(self, name): """Returns a context manager that filters exceptions with a given name. :param name: Name to compare the local exception name to. """ def error_name_matches(err): return self.local_error_name(err) == name return excutils.exception_filter(error_name_matches) def ignore_error_named(self, error, name): """Raises the error unless its local name matches the supplied name. :param error: Remote raised error to derive the local name from. :param name: Name to compare local name to. """ warnings.warn("Use ignore_error_by_name() to get a context manager " "instead.", DeprecationWarning) return self.ignore_error_by_name(name)(error) def identify_stack(self, ctxt, stack_name): """Returns the full stack identifier for a single, live stack. :param ctxt: RPC context. 
:param stack_name: Name of the stack you want to see, or None to see all """ return self.call(ctxt, self.make_msg('identify_stack', stack_name=stack_name)) def list_stacks(self, ctxt, limit=None, marker=None, sort_keys=None, sort_dir=None, filters=None, show_deleted=False, show_nested=False, show_hidden=False, tags=None, tags_any=None, not_tags=None, not_tags_any=None): """Returns attributes of all stacks. It supports pagination (``limit`` and ``marker``), sorting (``sort_keys`` and ``sort_dir``) and filtering (``filters``) of the results. :param ctxt: RPC context. :param limit: the number of stacks to list (integer or string) :param marker: the ID of the last item in the previous page :param sort_keys: an array of fields used to sort the list :param sort_dir: the direction of the sort ('asc' or 'desc') :param filters: a dict with attribute:value to filter the list :param show_deleted: if true, show soft-deleted stacks :param show_nested: if true, show nested stacks :param show_hidden: if true, show hidden stacks :param tags: show stacks containing these tags. If multiple tags are passed, they will be combined using the boolean AND expression :param tags_any: show stacks containing these tags. If multiple tags are passed, they will be combined using the boolean OR expression :param not_tags: show stacks not containing these tags. If multiple tags are passed, they will be combined using the boolean AND expression :param not_tags_any: show stacks not containing these tags. If multiple tags are passed, they will be combined using the boolean OR expression :returns: a list of stacks """ return self.call(ctxt, self.make_msg('list_stacks', limit=limit, sort_keys=sort_keys, marker=marker, sort_dir=sort_dir, filters=filters, show_deleted=show_deleted, show_nested=show_nested, show_hidden=show_hidden, tags=tags, tags_any=tags_any, not_tags=not_tags, not_tags_any=not_tags_any), version='1.33') def count_stacks(self, ctxt, filters=None, show_deleted=False, show_nested=False, show_hidden=False, tags=None, tags_any=None, not_tags=None, not_tags_any=None): """Returns the number of stacks that match the given filters. :param ctxt: RPC context. :param filters: a dict of ATTR:VALUE to match against stacks :param show_deleted: if true, count will include the deleted stacks :param show_nested: if true, count will include nested stacks :param show_hidden: if true, count will include hidden stacks :param tags: count stacks containing these tags. If multiple tags are passed, they will be combined using the boolean AND expression :param tags_any: count stacks containing these tags. If multiple tags are passed, they will be combined using the boolean OR expression :param not_tags: count stacks not containing these tags. If multiple tags are passed, they will be combined using the boolean AND expression :param not_tags_any: count stacks not containing these tags. If multiple tags are passed, they will be combined using the boolean OR expression :returns: an integer representing the number of matched stacks """ return self.call(ctxt, self.make_msg('count_stacks', filters=filters, show_deleted=show_deleted, show_nested=show_nested, show_hidden=show_hidden, tags=tags, tags_any=tags_any, not_tags=not_tags, not_tags_any=not_tags_any), version='1.33') def show_stack(self, ctxt, stack_identity, resolve_outputs=True): """Returns detailed information about one or all stacks. :param ctxt: RPC context. 
:param stack_identity: Name of the stack you want to show, or None to show all :param resolve_outputs: If True, stack outputs will be resolved """ return self.call(ctxt, self.make_msg('show_stack', stack_identity=stack_identity, resolve_outputs=resolve_outputs), version='1.20') def preview_stack(self, ctxt, stack_name, template, params, files, args, environment_files=None): """Simulates a new stack using the provided template. Note that at this stage the template has already been fetched from the heat-api process if using a template-url. :param ctxt: RPC context. :param stack_name: Name of the stack you want to create. :param template: Template of stack you want to create. :param params: Stack Input Params/Environment :param files: files referenced from the environment. :param args: Request parameters/args passed from API :param environment_files: optional ordered list of environment file names included in the files dict :type environment_files: list or None """ return self.call(ctxt, self.make_msg('preview_stack', stack_name=stack_name, template=template, params=params, files=files, environment_files=environment_files, args=args), version='1.23') def create_stack(self, ctxt, stack_name, template, params, files, args, environment_files=None): """Creates a new stack using the template provided. Note that at this stage the template has already been fetched from the heat-api process if using a template-url. :param ctxt: RPC context. :param stack_name: Name of the stack you want to create. :param template: Template of stack you want to create. :param params: Stack Input Params/Environment :param files: files referenced from the environment. :param args: Request parameters/args passed from API :param environment_files: optional ordered list of environment file names included in the files dict :type environment_files: list or None """ return self._create_stack(ctxt, stack_name, template, params, files, args, environment_files=environment_files) def _create_stack(self, ctxt, stack_name, template, params, files, args, environment_files=None, owner_id=None, nested_depth=0, user_creds_id=None, stack_user_project_id=None, parent_resource_name=None, template_id=None): """Internal interface for engine-to-engine communication via RPC. Allows some additional options which should not be exposed to users via the API: :param owner_id: parent stack ID for nested stacks :param nested_depth: nested depth for nested stacks :param user_creds_id: user_creds record for nested stack :param stack_user_project_id: stack user project for nested stack :param parent_resource_name: the parent resource name :param template_id: the ID of a pre-stored template in the DB """ return self.call( ctxt, self.make_msg('create_stack', stack_name=stack_name, template=template, params=params, files=files, environment_files=environment_files, args=args, owner_id=owner_id, nested_depth=nested_depth, user_creds_id=user_creds_id, stack_user_project_id=stack_user_project_id, parent_resource_name=parent_resource_name, template_id=template_id), version='1.29') def update_stack(self, ctxt, stack_identity, template, params, files, args, environment_files=None): """Updates an existing stack based on the provided template and params. Note that at this stage the template has already been fetched from the heat-api process if using a template-url. :param ctxt: RPC context. :param stack_name: Name of the stack you want to create. :param template: Template of stack you want to create. 
:param params: Stack Input Params/Environment :param files: files referenced from the environment. :param args: Request parameters/args passed from API :param environment_files: optional ordered list of environment file names included in the files dict :type environment_files: list or None """ return self._update_stack(ctxt, stack_identity, template, params, files, args, environment_files=environment_files) def _update_stack(self, ctxt, stack_identity, template, params, files, args, environment_files=None, template_id=None): """Internal interface for engine-to-engine communication via RPC. Allows an additional option which should not be exposed to users via the API: :param template_id: the ID of a pre-stored template in the DB """ return self.call(ctxt, self.make_msg('update_stack', stack_identity=stack_identity, template=template, params=params, files=files, environment_files=environment_files, args=args, template_id=template_id), version='1.29') def preview_update_stack(self, ctxt, stack_identity, template, params, files, args, environment_files=None): """Returns the resources that would be changed in an update. Based on the provided template and parameters. Requires RPC version 1.15 or above. :param ctxt: RPC context. :param stack_identity: Name of the stack you wish to update. :param template: New template for the stack. :param params: Stack Input Params/Environment :param files: files referenced from the environment. :param args: Request parameters/args passed from API :param environment_files: optional ordered list of environment file names included in the files dict :type environment_files: list or None """ return self.call(ctxt, self.make_msg('preview_update_stack', stack_identity=stack_identity, template=template, params=params, files=files, environment_files=environment_files, args=args, ), version='1.23') def validate_template(self, ctxt, template, params=None, files=None, environment_files=None, show_nested=False, ignorable_errors=None): """Uses the stack parser to check the validity of a template. :param ctxt: RPC context. :param template: Template of stack you want to create. :param params: Stack Input Params/Environment :param files: files referenced from the environment/template. :param environment_files: ordered list of environment file names included in the files dict :param show_nested: if True nested templates will be validated :param ignorable_errors: List of error_code to be ignored as part of validation """ return self.call(ctxt, self.make_msg( 'validate_template', template=template, params=params, files=files, show_nested=show_nested, environment_files=environment_files, ignorable_errors=ignorable_errors), version='1.24') def authenticated_to_backend(self, ctxt): """Validate the credentials in the RPC context. Verify that the credentials in the RPC context are valid for the current cloud backend. :param ctxt: RPC context. """ return self.call(ctxt, self.make_msg('authenticated_to_backend')) def get_template(self, ctxt, stack_identity): """Get the template. :param ctxt: RPC context. :param stack_name: Name of the stack you want to see. """ return self.call(ctxt, self.make_msg('get_template', stack_identity=stack_identity)) def get_environment(self, context, stack_identity): """Returns the environment for an existing stack. 
:param context: RPC context :param stack_identity: identifies the stack :rtype: dict """ return self.call(context, self.make_msg('get_environment', stack_identity=stack_identity), version='1.28') def get_files(self, context, stack_identity): """Returns the files for an existing stack. :param context: RPC context :param stack_identity: identifies the stack :rtype: dict """ return self.call(context, self.make_msg('get_files', stack_identity=stack_identity), version='1.32') def delete_stack(self, ctxt, stack_identity, cast=False): """Deletes a given stack. :param ctxt: RPC context. :param stack_identity: Name of the stack you want to delete. :param cast: cast the message instead of using call (default: False) You probably never want to use cast(). If you do, you'll never hear about any exceptions the call might raise. """ rpc_method = self.cast if cast else self.call return rpc_method(ctxt, self.make_msg('delete_stack', stack_identity=stack_identity)) def abandon_stack(self, ctxt, stack_identity): """Deletes a given stack without deleting its resources. :param ctxt: RPC context. :param stack_identity: Name of the stack you want to abandon. """ return self.call(ctxt, self.make_msg('abandon_stack', stack_identity=stack_identity)) def list_resource_types(self, ctxt, support_status=None, type_name=None, heat_version=None, with_description=False): """Get a list of valid resource types. :param ctxt: RPC context. :param support_status: Support status of resource type :param type_name: Resource type's name (regular expression allowed) :param heat_version: Heat version :param with_description: Either return resource type description or not """ return self.call(ctxt, self.make_msg('list_resource_types', support_status=support_status, type_name=type_name, heat_version=heat_version, with_description=with_description), version='1.30') def list_template_versions(self, ctxt): """Get a list of available template versions. :param ctxt: RPC context. """ return self.call(ctxt, self.make_msg('list_template_versions'), version='1.11') def list_template_functions(self, ctxt, template_version, with_condition=False): """Get a list of available functions in a given template version. :param ctxt: RPC context :param template_version: version of the template to get the function list for :param with_condition: if True, include condition functions in the list. """ return self.call(ctxt, self.make_msg('list_template_functions', template_version=template_version, with_condition=with_condition), version='1.35') def resource_schema(self, ctxt, type_name, with_description=False): """Get the schema for a resource type. :param ctxt: RPC context. :param type_name: Name of the resource type. :param with_description: Return resource with description or not. """ return self.call(ctxt, self.make_msg('resource_schema', type_name=type_name, with_description=with_description), version='1.30') def generate_template(self, ctxt, type_name, template_type='cfn'): """Generate a template based on the specified type. :param ctxt: RPC context. :param type_name: The resource type name to generate a template for. :param template_type: the template type to generate, cfn or hot. """ return self.call(ctxt, self.make_msg('generate_template', type_name=type_name, template_type=template_type), version='1.9') def list_events(self, ctxt, stack_identity, filters=None, limit=None, marker=None, sort_keys=None, sort_dir=None, nested_depth=None): """Lists all events associated with a given stack.
It supports pagination (``limit`` and ``marker``), sorting (``sort_keys`` and ``sort_dir``) and filtering (``filters``) of the results. :param ctxt: RPC context. :param stack_identity: Name of the stack you want to get events for :param filters: a dict with attribute:value to filter the list :param limit: the number of events to list (integer or string) :param marker: the ID of the last event in the previous page :param sort_keys: an array of fields used to sort the list :param sort_dir: the direction of the sort ('asc' or 'desc'). :param nested_depth: Levels of nested stacks to list events for. """ return self.call(ctxt, self.make_msg('list_events', stack_identity=stack_identity, filters=filters, limit=limit, marker=marker, sort_keys=sort_keys, sort_dir=sort_dir, nested_depth=nested_depth), version='1.31') def describe_stack_resource(self, ctxt, stack_identity, resource_name, with_attr=False): """Get detailed resource information about a particular resource. :param ctxt: RPC context. :param stack_identity: Name of the stack. :param resource_name: Name of the resource. """ return self.call(ctxt, self.make_msg('describe_stack_resource', stack_identity=stack_identity, resource_name=resource_name, with_attr=with_attr), version='1.2') def find_physical_resource(self, ctxt, physical_resource_id): """Return an identifier for the resource. :param ctxt: RPC context. :param physical_resource_id: The physical resource ID to look up. """ return self.call(ctxt, self.make_msg( 'find_physical_resource', physical_resource_id=physical_resource_id)) def describe_stack_resources(self, ctxt, stack_identity, resource_name): """Get detailed resource information about one or more resources. :param ctxt: RPC context. :param stack_identity: Name of the stack. :param resource_name: Name of the resource. """ return self.call(ctxt, self.make_msg('describe_stack_resources', stack_identity=stack_identity, resource_name=resource_name)) def list_stack_resources(self, ctxt, stack_identity, nested_depth=0, with_detail=False, filters=None): """List the resources belonging to a stack. :param ctxt: RPC context. :param stack_identity: Name of the stack. :param nested_depth: Levels of nested stacks for which to list resources. :param with_detail: show detail for resources in list. :param filters: a dict with attribute:value to search the resources """ return self.call(ctxt, self.make_msg('list_stack_resources', stack_identity=stack_identity, nested_depth=nested_depth, with_detail=with_detail, filters=filters), version='1.25') def stack_suspend(self, ctxt, stack_identity): return self.call(ctxt, self.make_msg('stack_suspend', stack_identity=stack_identity)) def stack_resume(self, ctxt, stack_identity): return self.call(ctxt, self.make_msg('stack_resume', stack_identity=stack_identity)) def stack_check(self, ctxt, stack_identity): return self.call(ctxt, self.make_msg('stack_check', stack_identity=stack_identity)) def stack_cancel_update(self, ctxt, stack_identity, cancel_with_rollback=True): return self.call(ctxt, self.make_msg( 'stack_cancel_update', stack_identity=stack_identity, cancel_with_rollback=cancel_with_rollback), version='1.14') def resource_signal(self, ctxt, stack_identity, resource_name, details, sync_call=False): """Send a signal to the named resource. :param ctxt: RPC context. :param stack_identity: Name of the stack. :param resource_name: Name of the resource. :param details: the details of the signal.
""" return self.call(ctxt, self.make_msg('resource_signal', stack_identity=stack_identity, resource_name=resource_name, details=details, sync_call=sync_call), version='1.3') def resource_mark_unhealthy(self, ctxt, stack_identity, resource_name, mark_unhealthy, resource_status_reason=None): """Mark the resource as unhealthy or healthy. :param ctxt: RPC context. :param stack_identity: Name of the stack. :param resource_name: the Resource. :param mark_unhealthy: indicates whether the resource is unhealthy. :param resource_status_reason: reason for health change. """ return self.call( ctxt, self.make_msg('resource_mark_unhealthy', stack_identity=stack_identity, resource_name=resource_name, mark_unhealthy=mark_unhealthy, resource_status_reason=resource_status_reason), version='1.26') def get_revision(self, ctxt): return self.call(ctxt, self.make_msg('get_revision')) def show_software_config(self, cnxt, config_id): return self.call(cnxt, self.make_msg('show_software_config', config_id=config_id)) def list_software_configs(self, cnxt, limit=None, marker=None): return self.call(cnxt, self.make_msg('list_software_configs', limit=limit, marker=marker), version='1.33') def create_software_config(self, cnxt, group, name, config, inputs=None, outputs=None, options=None): inputs = inputs or [] outputs = outputs or [] options = options or {} return self.call(cnxt, self.make_msg('create_software_config', group=group, name=name, config=config, inputs=inputs, outputs=outputs, options=options)) def delete_software_config(self, cnxt, config_id): return self.call(cnxt, self.make_msg('delete_software_config', config_id=config_id)) def list_software_deployments(self, cnxt, server_id=None): return self.call(cnxt, self.make_msg('list_software_deployments', server_id=server_id)) def metadata_software_deployments(self, cnxt, server_id): return self.call(cnxt, self.make_msg('metadata_software_deployments', server_id=server_id)) def show_software_deployment(self, cnxt, deployment_id): return self.call(cnxt, self.make_msg('show_software_deployment', deployment_id=deployment_id)) def check_software_deployment(self, cnxt, deployment_id, timeout): return self.call(cnxt, self.make_msg('check_software_deployment', deployment_id=deployment_id, timeout=timeout), timeout=timeout, version='1.27') def create_software_deployment(self, cnxt, server_id, config_id=None, input_values=None, action='INIT', status='COMPLETE', status_reason='', stack_user_project_id=None, deployment_id=None): input_values = input_values or {} return self.call(cnxt, self.make_msg( 'create_software_deployment', server_id=server_id, config_id=config_id, deployment_id=deployment_id, input_values=input_values, action=action, status=status, status_reason=status_reason, stack_user_project_id=stack_user_project_id)) def update_software_deployment(self, cnxt, deployment_id, config_id=None, input_values=None, output_values=None, action=None, status=None, status_reason=None, updated_at=None): return self.call( cnxt, self.make_msg('update_software_deployment', deployment_id=deployment_id, config_id=config_id, input_values=input_values, output_values=output_values, action=action, status=status, status_reason=status_reason, updated_at=updated_at), version='1.5') def delete_software_deployment(self, cnxt, deployment_id): return self.call(cnxt, self.make_msg('delete_software_deployment', deployment_id=deployment_id)) def signal_software_deployment(self, cnxt, deployment_id, details, updated_at=None): return self.call( cnxt, self.make_msg('signal_software_deployment', 
deployment_id=deployment_id, details=details, updated_at=updated_at), version='1.6') def stack_snapshot(self, ctxt, stack_identity, name): return self.call(ctxt, self.make_msg('stack_snapshot', stack_identity=stack_identity, name=name)) def show_snapshot(self, cnxt, stack_identity, snapshot_id): return self.call(cnxt, self.make_msg('show_snapshot', stack_identity=stack_identity, snapshot_id=snapshot_id)) def delete_snapshot(self, cnxt, stack_identity, snapshot_id): return self.call(cnxt, self.make_msg('delete_snapshot', stack_identity=stack_identity, snapshot_id=snapshot_id)) def stack_list_snapshots(self, cnxt, stack_identity): return self.call(cnxt, self.make_msg('stack_list_snapshots', stack_identity=stack_identity)) def stack_restore(self, cnxt, stack_identity, snapshot_id): return self.call(cnxt, self.make_msg('stack_restore', stack_identity=stack_identity, snapshot_id=snapshot_id)) def list_services(self, cnxt): return self.call(cnxt, self.make_msg('list_services'), version='1.4') def list_outputs(self, cntx, stack_identity): return self.call(cntx, self.make_msg('list_outputs', stack_identity=stack_identity), version='1.19') def show_output(self, cntx, stack_identity, output_key): return self.call(cntx, self.make_msg('show_output', stack_identity=stack_identity, output_key=output_key), version='1.19') def export_stack(self, ctxt, stack_identity): """Exports the stack data in JSON format. :param ctxt: RPC context. :param stack_identity: Name of the stack you want to export. """ return self.call(ctxt, self.make_msg('export_stack', stack_identity=stack_identity), version='1.22') def migrate_convergence_1(self, ctxt, stack_id): """Migrate the stack to convergence engine :param ctxt: RPC context :param stack_name: Name of the stack you want to migrate """ return self.call(ctxt, self.make_msg('migrate_convergence_1', stack_id=stack_id), version='1.34') heat-10.0.2/heat/rpc/worker_api.py0000666000175000017500000000117313343562340016765 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. TOPIC = 'engine_worker' heat-10.0.2/heat/rpc/__init__.py0000666000175000017500000000000013343562340016346 0ustar zuulzuul00000000000000heat-10.0.2/heat/rpc/api.py0000666000175000017500000001525713343562340015404 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
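# [Editorial example] A hedged sketch, not part of heat, of driving the EngineClient defined in heat/rpc/client.py above. It assumes oslo messaging has been configured via heat.common.messaging.setup() and borrows get_admin_context() purely for illustration. from heat.common import context from heat.common import messaging from heat.rpc import client as rpc_client def list_recent_stacks(): messaging.setup() ctxt = context.get_admin_context() engine = rpc_client.EngineClient() # ignore_error_by_name() yields a context manager that swallows only # the named exception type raised remotely by the engine. with engine.ignore_error_by_name('NotFound'): return engine.list_stacks(ctxt, limit=10, sort_dir='desc') return None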
ENGINE_TOPIC = 'engine' LISTENER_TOPIC = 'heat-engine-listener' PARAM_KEYS = ( PARAM_TIMEOUT, PARAM_DISABLE_ROLLBACK, PARAM_ADOPT_STACK_DATA, PARAM_SHOW_DELETED, PARAM_SHOW_NESTED, PARAM_EXISTING, PARAM_CLEAR_PARAMETERS, PARAM_GLOBAL_TENANT, PARAM_LIMIT, PARAM_NESTED_DEPTH, PARAM_TAGS, PARAM_SHOW_HIDDEN, PARAM_TAGS_ANY, PARAM_NOT_TAGS, PARAM_NOT_TAGS_ANY, TEMPLATE_TYPE, PARAM_WITH_DETAIL, RESOLVE_OUTPUTS, PARAM_IGNORE_ERRORS, PARAM_CONVERGE ) = ( 'timeout_mins', 'disable_rollback', 'adopt_stack_data', 'show_deleted', 'show_nested', 'existing', 'clear_parameters', 'global_tenant', 'limit', 'nested_depth', 'tags', 'show_hidden', 'tags_any', 'not_tags', 'not_tags_any', 'template_type', 'with_detail', 'resolve_outputs', 'ignore_errors', 'converge' ) STACK_KEYS = ( STACK_NAME, STACK_ID, STACK_CREATION_TIME, STACK_UPDATED_TIME, STACK_DELETION_TIME, STACK_NOTIFICATION_TOPICS, STACK_DESCRIPTION, STACK_TMPL_DESCRIPTION, STACK_PARAMETERS, STACK_OUTPUTS, STACK_ACTION, STACK_STATUS, STACK_STATUS_DATA, STACK_CAPABILITIES, STACK_DISABLE_ROLLBACK, STACK_TIMEOUT, STACK_OWNER, STACK_PARENT, STACK_USER_PROJECT_ID, STACK_TAGS ) = ( 'stack_name', 'stack_identity', 'creation_time', 'updated_time', 'deletion_time', 'notification_topics', 'description', 'template_description', 'parameters', 'outputs', 'stack_action', 'stack_status', 'stack_status_reason', 'capabilities', 'disable_rollback', 'timeout_mins', 'stack_owner', 'parent', 'stack_user_project_id', 'tags' ) STACK_OUTPUT_KEYS = ( OUTPUT_DESCRIPTION, OUTPUT_KEY, OUTPUT_VALUE, OUTPUT_ERROR, ) = ( 'description', 'output_key', 'output_value', 'output_error', ) RES_KEYS = ( RES_DESCRIPTION, RES_CREATION_TIME, RES_UPDATED_TIME, RES_NAME, RES_PHYSICAL_ID, RES_METADATA, RES_ACTION, RES_STATUS, RES_STATUS_DATA, RES_TYPE, RES_ID, RES_STACK_ID, RES_STACK_NAME, RES_REQUIRED_BY, RES_NESTED_STACK_ID, RES_NESTED_RESOURCES, RES_PARENT_RESOURCE, RES_PROPERTIES, RES_ATTRIBUTES, ) = ( 'description', 'creation_time', 'updated_time', 'resource_name', 'physical_resource_id', 'metadata', 'resource_action', 'resource_status', 'resource_status_reason', 'resource_type', 'resource_identity', STACK_ID, STACK_NAME, 'required_by', 'nested_stack_id', 'nested_resources', 'parent_resource', 'properties', 'attributes', ) RES_SCHEMA_KEYS = ( RES_SCHEMA_RES_TYPE, RES_SCHEMA_PROPERTIES, RES_SCHEMA_ATTRIBUTES, RES_SCHEMA_SUPPORT_STATUS, RES_SCHEMA_DESCRIPTION ) = ( RES_TYPE, 'properties', 'attributes', 'support_status', 'description' ) EVENT_KEYS = ( EVENT_ID, EVENT_STACK_ID, EVENT_STACK_NAME, EVENT_TIMESTAMP, EVENT_RES_NAME, EVENT_RES_PHYSICAL_ID, EVENT_RES_ACTION, EVENT_RES_STATUS, EVENT_RES_STATUS_DATA, EVENT_RES_TYPE, EVENT_RES_PROPERTIES, EVENT_ROOT_STACK_ID ) = ( 'event_identity', STACK_ID, STACK_NAME, 'event_time', RES_NAME, RES_PHYSICAL_ID, RES_ACTION, RES_STATUS, RES_STATUS_DATA, RES_TYPE, 'resource_properties', 'root_stack_id' ) NOTIFY_KEYS = ( NOTIFY_TENANT_ID, NOTIFY_USER_ID, NOTIFY_USERID, NOTIFY_USERNAME, NOTIFY_STACK_ID, NOTIFY_STACK_NAME, NOTIFY_STATE, NOTIFY_STATE_REASON, NOTIFY_CREATE_AT, NOTIFY_DESCRIPTION, NOTIFY_UPDATE_AT, NOTIFY_TAGS, ) = ( 'tenant_id', 'user_id', 'user_identity', 'username', STACK_ID, STACK_NAME, 'state', 'state_reason', 'create_at', STACK_DESCRIPTION, 'updated_at', STACK_TAGS, ) VALIDATE_PARAM_KEYS = ( PARAM_TYPE, PARAM_DEFAULT, PARAM_NO_ECHO, PARAM_ALLOWED_VALUES, PARAM_ALLOWED_PATTERN, PARAM_MAX_LENGTH, PARAM_MIN_LENGTH, PARAM_MAX_VALUE, PARAM_MIN_VALUE, PARAM_STEP, PARAM_OFFSET, PARAM_DESCRIPTION, PARAM_CONSTRAINT_DESCRIPTION, PARAM_LABEL, 
PARAM_CUSTOM_CONSTRAINT, PARAM_VALUE, PARAM_TAG ) = ( 'Type', 'Default', 'NoEcho', 'AllowedValues', 'AllowedPattern', 'MaxLength', 'MinLength', 'MaxValue', 'MinValue', 'Step', 'Offset', 'Description', 'ConstraintDescription', 'Label', 'CustomConstraint', 'Value', 'Tags' ) VALIDATE_PARAM_TYPES = ( PARAM_TYPE_STRING, PARAM_TYPE_NUMBER, PARAM_TYPE_COMMA_DELIMITED_LIST, PARAM_TYPE_JSON, PARAM_TYPE_BOOLEAN ) = ( 'String', 'Number', 'CommaDelimitedList', 'Json', 'Boolean' ) SOFTWARE_CONFIG_KEYS = ( SOFTWARE_CONFIG_ID, SOFTWARE_CONFIG_NAME, SOFTWARE_CONFIG_GROUP, SOFTWARE_CONFIG_CONFIG, SOFTWARE_CONFIG_INPUTS, SOFTWARE_CONFIG_OUTPUTS, SOFTWARE_CONFIG_OPTIONS, SOFTWARE_CONFIG_CREATION_TIME, SOFTWARE_CONFIG_PROJECT ) = ( 'id', 'name', 'group', 'config', 'inputs', 'outputs', 'options', 'creation_time', 'project' ) SOFTWARE_DEPLOYMENT_KEYS = ( SOFTWARE_DEPLOYMENT_ID, SOFTWARE_DEPLOYMENT_CONFIG_ID, SOFTWARE_DEPLOYMENT_SERVER_ID, SOFTWARE_DEPLOYMENT_INPUT_VALUES, SOFTWARE_DEPLOYMENT_OUTPUT_VALUES, SOFTWARE_DEPLOYMENT_ACTION, SOFTWARE_DEPLOYMENT_STATUS, SOFTWARE_DEPLOYMENT_STATUS_REASON, SOFTWARE_DEPLOYMENT_CREATION_TIME, SOFTWARE_DEPLOYMENT_UPDATED_TIME ) = ( 'id', 'config_id', 'server_id', 'input_values', 'output_values', 'action', 'status', 'status_reason', 'creation_time', 'updated_time' ) SOFTWARE_DEPLOYMENT_STATUSES = ( SOFTWARE_DEPLOYMENT_IN_PROGRESS, SOFTWARE_DEPLOYMENT_FAILED, SOFTWARE_DEPLOYMENT_COMPLETE ) = ( 'IN_PROGRESS', 'FAILED', 'COMPLETE' ) SOFTWARE_DEPLOYMENT_OUTPUTS = ( SOFTWARE_DEPLOYMENT_OUTPUT_STDOUT, SOFTWARE_DEPLOYMENT_OUTPUT_STDERR, SOFTWARE_DEPLOYMENT_OUTPUT_STATUS_CODE ) = ( 'deploy_stdout', 'deploy_stderr', 'deploy_status_code' ) SNAPSHOT_KEYS = ( SNAPSHOT_ID, SNAPSHOT_NAME, SNAPSHOT_STACK_ID, SNAPSHOT_DATA, SNAPSHOT_STATUS, SNAPSHOT_STATUS_REASON, SNAPSHOT_CREATION_TIME, ) = ( 'id', 'name', 'stack_id', 'data', 'status', 'status_reason', 'creation_time' ) THREAD_MESSAGES = (THREAD_CANCEL, THREAD_CANCEL_WITH_ROLLBACK ) = ('cancel', 'cancel_with_rollback') heat-10.0.2/heat/httpd/0000775000175000017500000000000013343562672014614 5ustar zuulzuul00000000000000heat-10.0.2/heat/httpd/heat_api_cfn.py0000666000175000017500000000300613343562340017557 0ustar zuulzuul00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI script for heat-api-cfn. Script for running heat-api-cfn under Apache2. 
""" from oslo_config import cfg import oslo_i18n as i18n from oslo_log import log as logging from heat.common import config from heat.common import messaging from heat.common import profiler from heat import version def init_application(): i18n.enable_lazy() LOG = logging.getLogger('heat.api.cfn') logging.register_options(cfg.CONF) cfg.CONF(project='heat', prog='heat-api-cfn', version=version.version_info.version_string()) logging.setup(cfg.CONF, 'heat-api-cfn') logging.set_defaults() config.set_config_defaults() messaging.setup() port = cfg.CONF.heat_api_cfn.bind_port host = cfg.CONF.heat_api_cfn.bind_host LOG.info('Starting Heat API on %(host)s:%(port)s', {'host': host, 'port': port}) profiler.setup('heat-api-cfn', host) return config.load_paste_app() heat-10.0.2/heat/httpd/__init__.py0000666000175000017500000000000013343562340016705 0ustar zuulzuul00000000000000heat-10.0.2/heat/httpd/heat_api.py0000666000175000017500000000273013343562340016734 0ustar zuulzuul00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI script for heat-api. Script for running heat-api under Apache2. """ from oslo_config import cfg import oslo_i18n as i18n from oslo_log import log as logging from heat.common import config from heat.common import messaging from heat.common import profiler from heat import version as hversion def init_application(): i18n.enable_lazy() LOG = logging.getLogger('heat.api') logging.register_options(cfg.CONF) version = hversion.version_info.version_string() cfg.CONF(project='heat', prog='heat-api', version=version) logging.setup(cfg.CONF, 'heat-api') config.set_config_defaults() messaging.setup() port = cfg.CONF.heat_api.bind_port host = cfg.CONF.heat_api.bind_host profiler.setup('heat-api', host) LOG.info('Starting Heat REST API on %(host)s:%(port)s', {'host': host, 'port': port}) return config.load_paste_app() heat-10.0.2/heat/httpd/files/0000775000175000017500000000000013343562672015716 5ustar zuulzuul00000000000000heat-10.0.2/heat/httpd/files/uwsgi-heat-api.conf0000666000175000017500000000010513343562340021377 0ustar zuulzuul00000000000000KeepAlive Off ProxyPass "/heat-api" "http://127.0.0.1:80999" retry=0 heat-10.0.2/heat/httpd/files/heat-api-cfn-uwsgi.ini0000666000175000017500000000044313343562340022002 0ustar zuulzuul00000000000000[uwsgi] chmod-socket = 666 lazy-apps = true add-header = Connection: close buffer-size = 65535 thunder-lock = true plugins = python enable-threads = true exit-on-reload = true die-on-term = true master = true processes = 4 http = 127.0.0.1:80998 wsgi-file = /usr/local/bin/heat-wsgi-api-cfn heat-10.0.2/heat/httpd/files/heat-api-uwsgi.ini0000666000175000017500000000043713343562340021241 0ustar zuulzuul00000000000000[uwsgi] chmod-socket = 666 lazy-apps = true add-header = Connection: close buffer-size = 65535 thunder-lock = true plugins = python enable-threads = true exit-on-reload = true die-on-term = true master = true processes = 4 http = 127.0.0.1:80999 wsgi-file = /usr/local/bin/heat-wsgi-api 
heat-10.0.2/heat/httpd/files/heat-api-cfn.conf0000666000175000017500000000151013343562340021010 0ustar zuulzuul00000000000000Listen %PUBLICPORT% <VirtualHost *:%PUBLICPORT%> WSGIDaemonProcess heat-api-cfn processes=%API_WORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV% WSGIProcessGroup heat-api-cfn WSGIScriptAlias / %HEAT_BIN_DIR%/heat-wsgi-api-cfn WSGIApplicationGroup %{GLOBAL} WSGIPassAuthorization On AllowEncodedSlashes On <IfVersion >= 2.4> ErrorLogFormat "%{cu}t %M" </IfVersion> ErrorLog /var/log/%APACHE_NAME%/heat_api_cfn.log CustomLog /var/log/%APACHE_NAME%/heat_api_cfn_access.log combined %SSLENGINE% %SSLCERTFILE% %SSLKEYFILE% </VirtualHost> <Directory %HEAT_BIN_DIR%> <IfVersion >= 2.4> Require all granted </IfVersion> <IfVersion < 2.4> Order allow,deny Allow from all </IfVersion> </Directory> heat-10.0.2/heat/httpd/files/uwsgi-heat-api-cfn.conf0000666000175000017500000000011113343562340022150 0ustar zuulzuul00000000000000KeepAlive Off ProxyPass "/heat-api-cfn" "http://127.0.0.1:80998" retry=0 heat-10.0.2/heat/httpd/files/heat-api.conf0000666000175000017500000000146513343562340020255 0ustar zuulzuul00000000000000Listen %PUBLICPORT% <VirtualHost *:%PUBLICPORT%> WSGIDaemonProcess heat-api processes=%API_WORKERS% threads=10 user=%USER% display-name=%{GROUP} %VIRTUALENV% WSGIProcessGroup heat-api WSGIScriptAlias / %HEAT_BIN_DIR%/heat-wsgi-api WSGIApplicationGroup %{GLOBAL} WSGIPassAuthorization On AllowEncodedSlashes On <IfVersion >= 2.4> ErrorLogFormat "%{cu}t %M" </IfVersion> ErrorLog /var/log/%APACHE_NAME%/heat_api.log CustomLog /var/log/%APACHE_NAME%/heat_api_access.log combined %SSLENGINE% %SSLCERTFILE% %SSLKEYFILE% </VirtualHost> <Directory %HEAT_BIN_DIR%> <IfVersion >= 2.4> Require all granted </IfVersion> <IfVersion < 2.4> Order allow,deny Allow from all </IfVersion> </Directory> heat-10.0.2/heat/cmd/0000775000175000017500000000000013343562672014234 5ustar zuulzuul00000000000000heat-10.0.2/heat/cmd/engine.py0000666000175000017500000000467713343562337016063 0ustar zuulzuul00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Heat Engine Server. This does the work of actually implementing the API calls made by the user. Normal communication is done via the heat API, which then calls into this engine.
""" import eventlet eventlet.monkey_patch() import sys from oslo_concurrency import processutils from oslo_config import cfg import oslo_i18n as i18n from oslo_log import log as logging from oslo_reports import guru_meditation_report as gmr from oslo_service import service from heat.common import config from heat.common import messaging from heat.common import profiler from heat.engine import template from heat.rpc import api as rpc_api from heat import version i18n.enable_lazy() LOG = logging.getLogger('heat.engine') def launch_engine(setup_logging=True): if setup_logging: logging.register_options(cfg.CONF) cfg.CONF(project='heat', prog='heat-engine', version=version.version_info.version_string()) if setup_logging: logging.setup(cfg.CONF, 'heat-engine') logging.set_defaults() messaging.setup() config.startup_sanity_check() mgr = None try: mgr = template._get_template_extension_manager() except template.TemplatePluginNotRegistered as ex: LOG.critical("%s", ex) if not mgr or not mgr.names(): sys.exit("ERROR: No template format plugins registered") from heat.engine import service as engine # noqa profiler.setup('heat-engine', cfg.CONF.host) gmr.TextGuruMeditation.setup_autorun(version) srv = engine.EngineService(cfg.CONF.host, rpc_api.ENGINE_TOPIC) workers = cfg.CONF.num_engine_workers if not workers: workers = max(4, processutils.get_worker_count()) launcher = service.launch(cfg.CONF, srv, workers=workers, restart_method='mutate') return launcher def main(): launcher = launch_engine() launcher.wait() heat-10.0.2/heat/cmd/manage.py0000666000175000017500000002225013343562337016037 0ustar zuulzuul00000000000000# # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """CLI interface for heat management.""" import sys from oslo_config import cfg from oslo_log import log from six import moves from heat.common import context from heat.common import exception from heat.common.i18n import _ from heat.common import messaging from heat.common import service_utils from heat.db.sqlalchemy import api as db_api from heat.objects import service as service_objects from heat.rpc import client as rpc_client from heat import version CONF = cfg.CONF def do_db_version(): """Print database's current migration level.""" print(db_api.db_version(db_api.get_engine())) def do_db_sync(): """Place a database under migration control and upgrade. Creating first if necessary. 
""" db_api.db_sync(db_api.get_engine(), CONF.command.version) class ServiceManageCommand(object): def service_list(self): ctxt = context.get_admin_context() services = [service_utils.format_service(service) for service in service_objects.Service.get_all(ctxt)] print_format = "%-16s %-16s %-36s %-10s %-10s %-10s %-10s" print(print_format % (_('Hostname'), _('Binary'), _('Engine_Id'), _('Host'), _('Topic'), _('Status'), _('Updated At'))) for svc in services: print(print_format % (svc['hostname'], svc['binary'], svc['engine_id'], svc['host'], svc['topic'], svc['status'], svc['updated_at'])) def service_clean(self): ctxt = context.get_admin_context() for service in service_objects.Service.get_all(ctxt): svc = service_utils.format_service(service) if svc['status'] == 'down': service_objects.Service.delete(ctxt, svc['id']) print(_('Dead engines are removed.')) @staticmethod def add_service_parsers(subparsers): service_parser = subparsers.add_parser('service') service_parser.set_defaults(command_object=ServiceManageCommand) service_subparsers = service_parser.add_subparsers(dest='action') list_parser = service_subparsers.add_parser('list') list_parser.set_defaults(func=ServiceManageCommand().service_list) remove_parser = service_subparsers.add_parser('clean') remove_parser.set_defaults(func=ServiceManageCommand().service_clean) def do_resource_data_list(): ctxt = context.get_admin_context() data = db_api.resource_data_get_all(ctxt, CONF.command.resource_id) print_format = "%-16s %-64s" for k in data.keys(): print(print_format % (k, data[k])) def do_reset_stack_status(): print(_("Warning: this command is potentially destructive and only " "intended to recover from specific crashes.")) print(_("It is advised to shutdown all Heat engines beforehand.")) print(_("Continue ? [y/N]")) data = moves.input() if not data.lower().startswith('y'): return ctxt = context.get_admin_context() db_api.reset_stack_status(ctxt, CONF.command.stack_id) def do_migrate(): messaging.setup() client = rpc_client.EngineClient() ctxt = context.get_admin_context() try: client.migrate_convergence_1(ctxt, CONF.command.stack_id) except exception.NotFound: raise Exception(_("Stack with id %s can not be found.") % CONF.command.stack_id) except (exception.NotSupported, exception.ActionNotComplete) as ex: raise Exception(ex.message) def purge_deleted(): """Remove database records that have been previously soft deleted.""" db_api.purge_deleted(CONF.command.age, CONF.command.granularity, CONF.command.project_id, CONF.command.batch_size) def do_crypt_parameters_and_properties(): """Encrypt/decrypt hidden parameters and resource properties data.""" ctxt = context.get_admin_context() prev_encryption_key = CONF.command.previous_encryption_key if CONF.command.crypt_operation == "encrypt": db_api.encrypt_parameters_and_properties( ctxt, prev_encryption_key, CONF.command.verbose_update_params) elif CONF.command.crypt_operation == "decrypt": db_api.decrypt_parameters_and_properties( ctxt, prev_encryption_key, CONF.command.verbose_update_params) def do_properties_data_migrate(): ctxt = context.get_admin_context() db_api.db_properties_data_migrate(ctxt) def add_command_parsers(subparsers): # db_version parser parser = subparsers.add_parser('db_version') parser.set_defaults(func=do_db_version) # db_sync parser parser = subparsers.add_parser('db_sync') parser.set_defaults(func=do_db_sync) # positional parameter, can be skipped. 
default=None parser.add_argument('version', nargs='?') # migrate_convergence_1 parser parser = subparsers.add_parser('migrate_convergence_1') parser.set_defaults(func=do_migrate) parser.add_argument('stack_id') # purge_deleted parser parser = subparsers.add_parser('purge_deleted') parser.set_defaults(func=purge_deleted) # positional parameter, can be skipped. default='90' parser.add_argument('age', nargs='?', default='90', help=_('How long to preserve deleted data.')) # optional parameter, can be skipped. default='days' parser.add_argument( '-g', '--granularity', default='days', choices=['days', 'hours', 'minutes', 'seconds'], help=_('Granularity to use for age argument, defaults to days.')) # optional parameter, can be skipped. parser.add_argument( '-p', '--project-id', help=_('Project ID to purge deleted stacks.')) # optional parameter, can be skipped. default='20' parser.add_argument( '-b', '--batch_size', default='20', help=_('Number of stacks to delete at a time (per transaction). ' 'Note that a single stack may have many db rows ' '(events, etc.) associated with it.')) # update_params parser parser = subparsers.add_parser('update_params') parser.set_defaults(func=do_crypt_parameters_and_properties) # positional parameter, can't be skipped parser.add_argument('crypt_operation', choices=['encrypt', 'decrypt'], help=_('Valid values are encrypt or decrypt. The ' 'heat-engine processes must be stopped to use ' 'this.')) # positional parameter, can be skipped. default=None parser.add_argument('previous_encryption_key', nargs='?', default=None, help=_('Provide old encryption key. New encryption' ' key would be used from config file.')) parser.add_argument('--verbose-update-params', action='store_true', help=_('Print an INFO message when processing of each ' 'raw_template or resource begins or ends')) parser = subparsers.add_parser('resource_data_list') parser.set_defaults(func=do_resource_data_list) parser.add_argument('resource_id', help=_('Stack resource id')) parser = subparsers.add_parser('reset_stack_status') parser.set_defaults(func=do_reset_stack_status) parser.add_argument('stack_id', help=_('Stack id')) # migrate properties_data parser parser = subparsers.add_parser('migrate_properties_data') parser.set_defaults(func=do_properties_data_migrate) ServiceManageCommand.add_service_parsers(subparsers) command_opt = cfg.SubCommandOpt('command', title='Commands', help=_('Show available commands.'), handler=add_command_parsers) def main(): log.register_options(CONF) log.setup(CONF, "heat-manage") CONF.register_cli_opt(command_opt) try: default_config_files = cfg.find_config_files('heat', 'heat-engine') CONF(sys.argv[1:], project='heat', prog='heat-manage', version=version.version_info.version_string(), default_config_files=default_config_files) except RuntimeError as e: sys.exit("ERROR: %s" % e) try: CONF.command.func() except Exception as e: sys.exit("ERROR: %s" % e) heat-10.0.2/heat/cmd/api_cfn.py0000666000175000017500000000436113343562337016211 0ustar zuulzuul00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Heat API Server. This implements an approximation of the Amazon CloudFormation API and translates it into a native representation. It then calls the heat-engine via AMQP RPC to implement them. """ import eventlet eventlet.monkey_patch(os=False) import sys from oslo_config import cfg import oslo_i18n as i18n from oslo_log import log as logging from oslo_reports import guru_meditation_report as gmr from oslo_service import systemd import six from heat.common import config from heat.common import messaging from heat.common import profiler from heat.common import wsgi from heat import version i18n.enable_lazy() LOG = logging.getLogger('heat.api.cfn') def launch_cfn_api(setup_logging=True): if setup_logging: logging.register_options(cfg.CONF) cfg.CONF(project='heat', prog='heat-api-cfn', version=version.version_info.version_string()) if setup_logging: logging.setup(cfg.CONF, 'heat-api-cfn') logging.set_defaults() config.set_config_defaults() messaging.setup() app = config.load_paste_app() port = cfg.CONF.heat_api_cfn.bind_port host = cfg.CONF.heat_api_cfn.bind_host LOG.info('Starting Heat API on %(host)s:%(port)s', {'host': host, 'port': port}) profiler.setup('heat-api-cfn', host) gmr.TextGuruMeditation.setup_autorun(version) server = wsgi.Server('heat-api-cfn', cfg.CONF.heat_api_cfn) server.start(app, default_port=port) return server def main(): try: server = launch_cfn_api() systemd.notify_once() server.wait() except RuntimeError as e: msg = six.text_type(e) sys.exit("ERROR: %s" % msg) heat-10.0.2/heat/cmd/__init__.py0000666000175000017500000000000013343562337016333 0ustar zuulzuul00000000000000heat-10.0.2/heat/cmd/api.py0000666000175000017500000000402013343562337015353 0ustar zuulzuul00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Heat API Server. An OpenStack ReST API to Heat. 
""" import eventlet eventlet.monkey_patch(os=False) import sys from oslo_config import cfg import oslo_i18n as i18n from oslo_log import log as logging from oslo_reports import guru_meditation_report as gmr from oslo_service import systemd import six from heat.common import config from heat.common import messaging from heat.common import profiler from heat.common import wsgi from heat import version i18n.enable_lazy() LOG = logging.getLogger('heat.api') def launch_api(setup_logging=True): if setup_logging: logging.register_options(cfg.CONF) cfg.CONF(project='heat', prog='heat-api', version=version.version_info.version_string()) if setup_logging: logging.setup(cfg.CONF, 'heat-api') config.set_config_defaults() messaging.setup() app = config.load_paste_app() port = cfg.CONF.heat_api.bind_port host = cfg.CONF.heat_api.bind_host LOG.info('Starting Heat REST API on %(host)s:%(port)s', {'host': host, 'port': port}) profiler.setup('heat-api', host) gmr.TextGuruMeditation.setup_autorun(version) server = wsgi.Server('heat-api', cfg.CONF.heat_api) server.start(app, default_port=port) return server def main(): try: server = launch_api() systemd.notify_once() server.wait() except RuntimeError as e: msg = six.text_type(e) sys.exit("ERROR: %s" % msg) heat-10.0.2/heat/cmd/all.py0000666000175000017500000000512313343562337015357 0ustar zuulzuul00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Heat All Server. An OpenStack Heat server that can run all services. """ import eventlet eventlet.monkey_patch(os=False) import six import sys from heat.cmd import api from heat.cmd import api_cfn from heat.cmd import engine from heat.common import config from heat.common import messaging from heat import version from oslo_config import cfg import oslo_i18n as i18n from oslo_log import log as logging from oslo_service import systemd i18n.enable_lazy() LOG = logging.getLogger('heat.all') API_LAUNCH_OPTS = {'setup_logging': False} LAUNCH_SERVICES = { 'engine': [engine.launch_engine, {'setup_logging': False}], 'api': [api.launch_api, API_LAUNCH_OPTS], 'api_cfn': [api_cfn.launch_cfn_api, API_LAUNCH_OPTS], } services_opt = cfg.ListOpt( 'enabled_services', default=['engine', 'api', 'api_cfn'], help='Specifies the heat services that are enabled when running heat-all. ' 'Valid options are all or any combination of ' 'api, engine or api_cfn.' 
) cfg.CONF.register_opt(services_opt, group='heat_all') def _start_service_threads(services): threads = [] for option in services: launch_func = LAUNCH_SERVICES[option][0] kwargs = LAUNCH_SERVICES[option][1] threads.append(eventlet.spawn(launch_func, **kwargs)) return threads def launch_all(setup_logging=True): if setup_logging: logging.register_options(cfg.CONF) cfg.CONF(project='heat', prog='heat-all', version=version.version_info.version_string()) if setup_logging: logging.setup(cfg.CONF, 'heat-all') config.set_config_defaults() messaging.setup() return _start_service_threads(set(cfg.CONF.heat_all.enabled_services)) def main(): try: threads = launch_all() services = [thread.wait() for thread in threads] systemd.notify_once() [service.wait() for service in services] except RuntimeError as e: msg = six.text_type(e) sys.exit("ERROR: %s" % msg) heat-10.0.2/heat/scaling/0000775000175000017500000000000013343562672015111 5ustar zuulzuul00000000000000heat-10.0.2/heat/scaling/rolling_update.py0000666000175000017500000000363513343562340020474 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. def needs_update(targ_capacity, curr_capacity, num_up_to_date): """Return whether there are more batch updates to do. Inputs are the target size for the group, the current size of the group, and the number of members that already have the latest definition. """ return not (num_up_to_date >= curr_capacity == targ_capacity) def next_batch(targ_capacity, curr_capacity, num_up_to_date, batch_size, min_in_service): """Return details of the next batch in a batched update. The result is a tuple containing the new size of the group and the number of members that may receive the new definition (by a combination of creating new members and updating existing ones). Inputs are the target size for the group, the current size of the group, the number of members that already have the latest definition, the batch size, and the minimum number of members to keep in service during a rolling update. """ assert num_up_to_date <= curr_capacity efft_min_sz = min(min_in_service, targ_capacity, curr_capacity) efft_bat_sz = min(batch_size, max(targ_capacity - num_up_to_date, 0)) new_capacity = efft_bat_sz + max(min(curr_capacity, targ_capacity - efft_bat_sz), efft_min_sz) return new_capacity, efft_bat_sz heat-10.0.2/heat/scaling/scalingutil.py0000666000175000017500000000470013343562340017774 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
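# The following worked example for next_batch() in rolling_update.py above is
# added for illustration (hand-checked against the code above; not upstream
# text). Rolling-updating a group of 5 members in batches of 2 while keeping
# at least 4 in service:
#   next_batch(targ_capacity=5, curr_capacity=5, num_up_to_date=0,
#              batch_size=2, min_in_service=4)
#   -> efft_min_sz = min(4, 5, 5) = 4
#   -> efft_bat_sz = min(2, max(5 - 0, 0)) = 2
#   -> new_capacity = 2 + max(min(5, 5 - 2), 4) = 6
# i.e. the group temporarily grows to 6 members so that 2 of them can receive
# the new definition without ever dropping below 4 members in service.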
import math ADJUSTMENT_TYPES = ( EXACT_CAPACITY, CHANGE_IN_CAPACITY, PERCENT_CHANGE_IN_CAPACITY) = ( 'exact_capacity', 'change_in_capacity', 'percent_change_in_capacity') CFN_ADJUSTMENT_TYPES = ( CFN_EXACT_CAPACITY, CFN_CHANGE_IN_CAPACITY, CFN_PERCENT_CHANGE_IN_CAPACITY) = ('ExactCapacity', 'ChangeInCapacity', 'PercentChangeInCapacity') def calculate_new_capacity(current, adjustment, adjustment_type, min_adjustment_step, minimum, maximum): """Calculates new capacity from the given adjustments. Given the current capacity, calculates the new capacity which results from applying the given adjustment of the given adjustment-type. The new capacity will be kept within the maximum and minimum bounds. """ def _get_minimum_adjustment(adjustment, min_adjustment_step): if min_adjustment_step and min_adjustment_step > abs(adjustment): adjustment = (min_adjustment_step if adjustment > 0 else -min_adjustment_step) return adjustment if adjustment_type in (CHANGE_IN_CAPACITY, CFN_CHANGE_IN_CAPACITY): new_capacity = current + adjustment elif adjustment_type in (EXACT_CAPACITY, CFN_EXACT_CAPACITY): new_capacity = adjustment else: # PercentChangeInCapacity delta = current * adjustment / 100.0 if math.fabs(delta) < 1.0: rounded = int(math.ceil(delta) if delta > 0.0 else math.floor(delta)) else: rounded = int(math.floor(delta) if delta > 0.0 else math.ceil(delta)) adjustment = _get_minimum_adjustment(rounded, min_adjustment_step) new_capacity = current + adjustment if new_capacity > maximum: return maximum if new_capacity < minimum: return minimum return new_capacity heat-10.0.2/heat/scaling/cooldown.py0000666000175000017500000001051313343562340017301 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from heat.common import exception from heat.common.i18n import _ from heat.engine import resource from oslo_log import log as logging from oslo_utils import timeutils import six LOG = logging.getLogger(__name__) class CooldownMixin(object): """Utility class to encapsulate Cooldown related logic. This logic includes both cooldown timestamp comparing and scaling in progress checking. 
""" def _sanitize_cooldown(self, cooldown): if cooldown is None: return 0 return max(0, cooldown) def _check_scaling_allowed(self, cooldown): metadata = self.metadata_get() if metadata.get('scaling_in_progress'): LOG.info("Can not perform scaling action: resource %s " "is already in scaling.", self.name) reason = _('due to scaling activity') raise resource.NoActionRequired(res_name=self.name, reason=reason) cooldown = self._sanitize_cooldown(cooldown) # if both cooldown and cooldown_end not in metadata if all(k not in metadata for k in ('cooldown', 'cooldown_end')): # Note: this is for supporting old version cooldown checking metadata.pop('scaling_in_progress', None) if metadata and cooldown != 0: last_adjust = next(six.iterkeys(metadata)) if not timeutils.is_older_than(last_adjust, cooldown): self._log_and_raise_no_action(cooldown) elif 'cooldown_end' in metadata: cooldown_end = next(six.iterkeys(metadata['cooldown_end'])) now = timeutils.utcnow().isoformat() if now < cooldown_end: self._log_and_raise_no_action(cooldown) elif cooldown != 0: # Note: this is also for supporting old version cooldown checking last_adjust = next(six.iterkeys(metadata['cooldown'])) if not timeutils.is_older_than(last_adjust, cooldown): self._log_and_raise_no_action(cooldown) # Assumes _finished_scaling is called # after the scaling operation completes metadata['scaling_in_progress'] = True self.metadata_set(metadata) def _log_and_raise_no_action(self, cooldown): LOG.info("Can not perform scaling action: " "resource %(name)s is in cooldown (%(cooldown)s).", {'name': self.name, 'cooldown': cooldown}) reason = _('due to cooldown, ' 'cooldown %s') % cooldown raise resource.NoActionRequired( res_name=self.name, reason=reason) def _finished_scaling(self, cooldown, cooldown_reason, size_changed=True): # If we wanted to implement the AutoScaling API like AWS does, # we could maintain event history here, but since we only need # the latest event for cooldown, just store that for now metadata = self.metadata_get() if size_changed: cooldown = self._sanitize_cooldown(cooldown) cooldown_end = (timeutils.utcnow() + datetime.timedelta( seconds=cooldown)).isoformat() if 'cooldown_end' in metadata: cooldown_end = max( next(six.iterkeys(metadata['cooldown_end'])), cooldown_end) metadata['cooldown_end'] = {cooldown_end: cooldown_reason} metadata['scaling_in_progress'] = False try: self.metadata_set(metadata) except exception.NotFound: pass def handle_metadata_reset(self): metadata = self.metadata_get() if 'scaling_in_progress' in metadata: metadata['scaling_in_progress'] = False self.metadata_set(metadata) heat-10.0.2/heat/scaling/template.py0000666000175000017500000000557013343562340017277 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from heat.engine import template def _identity(resource_name, definition): return definition def member_definitions(old_resources, new_definition, num_resources, num_new, get_new_id, customise=_identity): """Iterate over resource definitions for a scaling group Generates the definitions for the next change to the scaling group. Each item is a (name, definition) tuple. The input is a list of (name, definition) tuples for existing resources in the group, sorted in the order that they should be replaced or removed (i.e. the resource that should be the first to be replaced (on update) or removed (on scale down) appears at the beginning of the list.) New resources are added or old resources removed as necessary to ensure a total of num_resources. The number of resources to have their definition changed to the new one is controlled by num_new. This value includes any new resources to be added, with any shortfall made up by modifying the definitions of existing resources. """ old_resources = old_resources[-num_resources:] num_create = num_resources - len(old_resources) num_replace = num_new - num_create for i in range(num_resources): if i < len(old_resources): old_name, old_definition = old_resources[i] custom_definition = customise(old_name, new_definition) if old_definition != custom_definition and num_replace > 0: num_replace -= 1 yield old_name, custom_definition else: yield old_name, old_definition else: new_name = get_new_id() yield new_name, customise(new_name, new_definition) def make_template(resource_definitions, version=('heat_template_version', '2015-04-30'), child_env=None): """Return a Template object containing the given resource definitions. By default, the template will be in the HOT format. A different format can be specified by passing a (version_type, version_string) tuple matching any of the available template format plugins. """ tmpl = template.Template(dict([version]), env=child_env) for name, defn in resource_definitions: tmpl.add_resource(defn, name) return tmpl heat-10.0.2/heat/scaling/__init__.py0000666000175000017500000000000013343562340017202 0ustar zuulzuul00000000000000heat-10.0.2/heat/scaling/lbutils.py0000666000175000017500000000337013343562351017140 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import six from heat.common import exception from heat.common import grouputils from heat.common.i18n import _ from heat.engine import rsrc_defn from heat.engine import scheduler def reload_loadbalancers(group, load_balancers, exclude=None): """Notify the LoadBalancer to reload its config. This must be done after activation (instance in ACTIVE state), otherwise the instances' IP addresses may not be available. 
""" exclude = exclude or [] id_list = grouputils.get_member_refids(group, exclude=exclude) for name, lb in six.iteritems(load_balancers): props = copy.copy(lb.properties.data) if 'Instances' in lb.properties_schema: props['Instances'] = id_list elif 'members' in lb.properties_schema: props['members'] = id_list else: raise exception.Error( _("Unsupported resource '%s' in LoadBalancerNames") % name) lb_defn = rsrc_defn.ResourceDefinition( lb.name, lb.type(), properties=props, metadata=lb.t.metadata(), deletion_policy=lb.t.deletion_policy()) scheduler.TaskRunner(lb.update, lb_defn)() heat-10.0.2/setup.py0000666000175000017500000000200613343562340014252 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT import setuptools # In python < 2.7.4, a lazy loading of package `pbr` will break # setuptools if some other modules registered functions in `atexit`. # solution from: http://bugs.python.org/issue15881#msg170215 try: import multiprocessing # noqa except ImportError: pass setuptools.setup( setup_requires=['pbr>=2.0.0'], pbr=True) heat-10.0.2/AUTHORS0000664000175000017500000004506413343562667013635 0ustar zuulzuul00000000000000Aaron Rosen Aaron-DH Abhishek Chanda Adrien Vergé Ahmed Elkhouly Aigerim Ala Rezmerita Alan Alan Pevec AleptNamrata AleptNmarata Alex Gaynor Alexander Chudnovets Alexander Gordeev Alexander Ignatov Alexander Ignatyev Alexandr Nedopekin Alfredo Moralejo Amit Agarwal Amit Ugol Amy Fong Ana Krivokapic Anant Patil Anastasia Kuznetsova Anderson Mesquita Andre Andrea Rosa Andreas Jaeger Andreas Jaeger Andrew Hutchings Andrew Lazarev Andrew Orlov Andrew Plunk Andrey Kurilin Angus Salkeld Angus Salkeld Angus Salkeld Angus Salkeld Anh Tran Ankit Agrawal Anne McCormick Arata Notsu Ashutosh Mishra Atsushi SAKAI Attila Fazekas AvnishPal BK Box Bartosz Górski Ben Nemec Ben Nemec Bernard Van De Walle Bernhard M. Wiedemann Bertrand Lallau Bertrand Lallau Bill Arnold Bin Zhou Bo Wang Boden R Botond Zoltán Brant Knudson Brent Eagles Brian Moss Bryan Jones Béla Vancsics Cao Xuan Hoang Cedric Brandily Chandan Kumar Chang Bo Guo ChangBo Guo(gcb) Chaozhe.Chen Chen Xiao ChenZheng Chmouel Boudjnah Chris Chris Alfonso Chris Buccella Chris Buccella Chris Hultin Chris Roberts Christian Berendt Christoph Dwertmann Christopher Armstrong Chuck Short Clark Boylan Claudiu Belu Clint Byrum Cody A.W. 
Somerville Colleen Murphy Crag Wolfe Dan Dan Prince Daniel Givens Daniel Gonzalez Daniel Pawlik Davanum Srinivas Davanum Srinivas Dave Wilde David Rabel Deliang Fan Denes Nemeth DennyZhang Derek Higgins Diane Fleming Dimitri Mazmanov Dina Belova Dirk Mueller Dmitriy Uvarenkov Dmitry Tyurnikov Doug Fish Doug Hellmann Drago Rosson Eli Qiao Emilien Macchi Endre Karlson Eoghan Glynn Eoghan Glynn Eric Brown Erik Olof Gunnar Andersson Ethan Lynn Fabien Boucher FeihuJiang Feilong Wang Flavio Percoco Gary Kotton Gauvain Pocentek George Peristerakis Gerard Braad Gerry Buteau Giulio Fidente Graham Hayes Greg Blomquist Gregory Haynes Grzegorz Grasza Gábor Antal H S, Rakesh Ha Van Tu Haiwei Xu Haiyang Ding Han Manjong Harald Jensas Harald Jensas He Jie Xu He Yongli Hironori Shiina Hongbin Lu Hongbin Lu Huangsm Hui HX Xiang Ian Main Ian McLeod Igor D.C Ihar Hrachyshka Ionuț Arțăriși Isaku Yamahata JUN JIE NAN JUNJIE NAN Jaewoo Park Jaime Guerrero James Combs James E. Blair James Reeves James Slagle Jamie Lennox Jamie Lennox Jan Provaznik Jason Dunsmore Javeme Javier Pena Jay Clark Jay Dobies Jay Lau Jeff Peeler Jeff Sloyer Jennifer Mulsow Jens Rosenboom Jeremy Freudberg Jeremy Liu Jeremy Pugh Jeremy Stanley Jesse Andrews Jesse Pretorius Jesse Proudman Ji-Wei Jia Dong Jianing YANG Jin Nan Zhang Jiri Stransky Jiří Suchomel Joe D'Andrea Joe Gordon Joe Talerico Johannes Grassler Johnu George JordanP Jose Luis Franco Arza Joshua Harlow JuPing Juan Antonio Osorio Robles Julia Kreger Julia Varlamova Julian Sy Julien Danjou KIYOHIRO ADACHI Kamal Hussain Kamil Rykowski Kanagaraj Manickam Kanagaraj Manickam Kanagaraj Manickam Kent Wang Keshava Bharadwaj Kevin Fox Kevin_Zheng Khaled Qarout Kien Nguyen Krishna Raman Lars Kellogg-Stedman Laura Fitzgerald Li Jinjing Liang Chen LiangChen Limor Stotland LiuNanke Lon Hohberger Lu lei Luis A. Garcia Luis Tomas Bolivar Lukas Bednar Luong Anh Tuan M V P Nitesh Maksym Iarmak Marga Millet Mark McClain Mark McLoughlin Mark Vanderwiel Martin Geisler Martin Kletzander Martin Oemke Masco Kaliyamoorthy Matt Riedemann Matt Riedemann Matthew Edmonds Matthew Flusche Matthew Gilliard Matthew Printz Matthew Treinish Mehdi Abaakouk (sileht) Mehdi Abaakouk Mehdi Abaakouk Michael Ionkin Michael Krotscheck Michael Still Michal Jastrzebski (inc0) Michal Jastrzebski Michal Rostecki Miguel Grinberg Miguel Grinberg Mike Asthalter Mike Spreitzer Mitsuru Kanabuchi Mohammed Naser Mohankumar Monty Taylor Morgan Fainberg Morgan Fainberg Moshe Elisha Nam Nguyen Hoai Nguyen Hung Phuong Nguyen Phuong An Nicolas Noa Koffman Norbert Illes OTSUKA, Yuanying Oleg Khavroshin Oleksii Chuprykov Omar Soriano OpenStack Release Bot Pablo Andres Fuente PanFengyun PanFengyun Patrick Woods Paul Bourke Paul Van Eck Pavlo Shchelokovskyy Pavlo Shchelokovskyy Pengfei Zhang Peter Razumovsky Petr Kovar PhilippeJ Pierre Pierre Freund Pierre Padrixe Pierre Riteau Pradeep Kumar Singh Pratik Mallya Praveen Yalagandula Prince Katiyar QI ZHANG Rabi Mihsra Rabi Mishra Rajiv Kumar Rakesh H S Rakesh H S Randall Burt Richard Lee Rico Lin Rikimaru, Honjo Robert Collins Robert Pothier Robert van Leeuwen Roberto Polli Roman Podoliaka Russell Bryant Ryan Brown Ryo Miki Sabeen Syed Sagi Shnaidman Sahdev Zala Sam Alba Samuel de Medeiros Queiroz Saravanan KR Sean Dague Sean M. 
Collins Serg Melikyan Sergey Sergey Kraynev Sergey Lukjanov Sergey Reshetnyak Sergey Skripnick Sergey Vilgelm Seyeong Kim Shane Wang ShaoHe Feng Sharmin Choksey Shengjie Min Shilla Saebi ShunliZhou Simon Pasquier Simon Pasquier Simon Pasquier Sirushti Murugesan Song Li Spencer Yu Sreeram Vancheeswaran Stan Lagun Stefan Nica Stephen Gordon Stephen Gran Stephen Sugden Steve Baker Steve Baker Steve McLellan Steven Dake Steven Hardy Sumit Naiksatam Sushil Kumar Sven Anderson Svetlana Shturm Swann Croiset Swann Croiset Swapnil Kulkarni (coolsvap) Sylvain Baubeau Takashi NATSUME Tanvir Talukder Tetiana Lashchova Thierry Carrez Thomas Bechtold Thomas Goirand Thomas Herve Thomas Herve Thomas Herve Thomas Herve Thomas Spatzier Tim Rozet Tim Schnell Tim Smith Timothy Okwii Tomas Sedovic Tomas Sedovic Tomasz Trębski Tomer Shtilman Tomer Shtilman Ton Ngo Tovin Seven Unmesh Gurjar Vic Howard Victor HU Victor Sergeyev Victor Stinner Vijendar Komalla Vikas Jain Vinod Mangalpally Visnusaran Murugan Vitaly Gridnev Vlad Gridin Vladimir Kuklin Wang Muyu Winson Chan Xiao Liang YangLei Yanyan Hu Yaoguo Jiang Yatin Kumbhare Ying Zuo Yosef Hoffman Yoshimi Tominaga ZHU ZHU Zane Bitter Zhang Lei (Sneeze) Zhang Yang Zhenguo Niu ZhiQiang Fan ZhiQiang Fan Zhiqiang Fan Ziad Sawalha Zuul abdul nizamuddin abhishekkekane aivanitskiy ananta april cbjchen@cn.ibm.com chen-li chenaidong1 chenxiao chenxing chestack cyli danny deepakmourya divakar-padiyar-nandavar dixiaoli duvarenkov ekudryashova fandeliang gecong1973 gengchc2 gong yong sheng gordon chung guohliu hgangwx hmonika huangtianhua igor ishant jiangwt100 jufeng jun xie junxu kairat_kushaev kaz_shinohara kylin7-sg lawrancejing liangshang linwwu liu-sheng liudong liumk liusheng liyi liyi lizheming lvdongbing matts2006 mohankumar_n npraveen35 pallavi pawnesh.kumar rabi rajat29 rajiv rico.lin ricolin ricolin ricolin root sabeensyed sdake sharat.sharma shizhihui sslypushenko tanlin tengqim tengqm tyagi uberj ubuntu venkatamahesh wangtianfa wbluo0907 xiaolihope xiexs yangxurong yanyanhu yatin yuanpeng yuntongjin yushangbin yuyafei zengchen zengyingzhe zhangchunlong1@huawei.com zhangguoqing zhanghao zhaozhilong zhufeng zhufl Émilien Macchi heat-10.0.2/README.rst0000666000175000017500000000533113343562351014235 0ustar zuulzuul00000000000000======================== Team and repository tags ======================== .. image:: http://governance.openstack.org/badges/heat.svg :target: http://governance.openstack.org/reference/tags/index.html .. Change things from this point on ==== Heat ==== Heat is a service to orchestrate multiple composite cloud applications using templates, through both an OpenStack-native REST API and a CloudFormation-compatible Query API. Why heat? It makes the clouds rise and keeps them there. 
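To give a taste of what such a template looks like, a minimal HOT template that boots a single server might read as follows (a sketch only; the image and flavor names are illustrative and must exist in your cloud)::

    heat_template_version: 2016-10-14
    description: A trivial example stack
    resources:
      server:
        type: OS::Nova::Server
        properties:
          image: cirros-0.3.5-x86_64-disk
          flavor: m1.tiny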
Getting Started --------------- If you'd like to run from the master branch, you can clone the git repo: git clone https://git.openstack.org/openstack/heat * Wiki: http://wiki.openstack.org/Heat * Developer docs: http://docs.openstack.org/heat/latest * Template samples: https://git.openstack.org/cgit/openstack/heat-templates * Agents: https://git.openstack.org/cgit/openstack/heat-agents Python client ------------- https://git.openstack.org/cgit/openstack/python-heatclient References ---------- * http://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html * http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/create-stack.html * http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html * http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca We have integration with ------------------------ * https://git.openstack.org/cgit/openstack/python-novaclient (instance) * https://git.openstack.org/cgit/openstack/python-keystoneclient (auth) * https://git.openstack.org/cgit/openstack/python-swiftclient (s3) * https://git.openstack.org/cgit/openstack/python-neutronclient (networking) * https://git.openstack.org/cgit/openstack/python-ceilometerclient (metering) * https://git.openstack.org/cgit/openstack/python-aodhclient (alarming service) * https://git.openstack.org/cgit/openstack/python-cinderclient (storage service) * https://git.openstack.org/cgit/openstack/python-glanceclient (image service) * https://git.openstack.org/cgit/openstack/python-troveclient (database as a Service) * https://git.openstack.org/cgit/openstack/python-saharaclient (hadoop cluster) * https://git.openstack.org/cgit/openstack/python-barbicanclient (key management service) * https://git.openstack.org/cgit/openstack/python-designateclient (DNS service) * https://git.openstack.org/cgit/openstack/python-magnumclient (container service) * https://git.openstack.org/cgit/openstack/python-manilaclient (shared file system service) * https://git.openstack.org/cgit/openstack/python-mistralclient (workflow service) * https://git.openstack.org/cgit/openstack/python-zaqarclient (messaging service) * https://git.openstack.org/cgit/openstack/python-monascaclient (monitoring service) heat-10.0.2/requirements.txt0000666000175000017500000000441413343562352016034 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
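# Illustrative note (not part of the upstream file): a specifier such as
# eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 below accepts any eventlet
# release from 0.18.2 up to, but excluding, 0.21.0, except the explicitly
# excluded 0.18.3 and 0.20.1.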
pbr!=2.1.0,>=2.0.0 # Apache-2.0 Babel!=2.4.0,>=2.3.4 # BSD croniter>=0.3.4 # MIT License cryptography!=2.0,>=1.9 # BSD/Apache-2.0 eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT keystoneauth1>=3.3.0 # Apache-2.0 keystonemiddleware>=4.17.0 # Apache-2.0 lxml!=3.7.0,>=3.4.1 # BSD netaddr>=0.7.18 # BSD openstacksdk>=0.9.19 # Apache-2.0 oslo.cache>=1.26.0 # Apache-2.0 oslo.config>=5.1.0 # Apache-2.0 oslo.concurrency>=3.25.0 # Apache-2.0 oslo.context>=2.19.2 # Apache-2.0 oslo.db>=4.27.0 # Apache-2.0 oslo.i18n>=3.15.3 # Apache-2.0 oslo.log>=3.36.0 # Apache-2.0 oslo.messaging>=5.29.0 # Apache-2.0 oslo.middleware>=3.31.0 # Apache-2.0 oslo.policy>=1.30.0 # Apache-2.0 oslo.reports>=1.18.0 # Apache-2.0 oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0 oslo.service!=1.28.1,>=1.24.0 # Apache-2.0 oslo.utils>=3.33.0 # Apache-2.0 osprofiler>=1.4.0 # Apache-2.0 oslo.versionedobjects>=1.31.2 # Apache-2.0 PasteDeploy>=1.5.0 # MIT aodhclient>=0.9.0 # Apache-2.0 python-barbicanclient!=4.5.0,!=4.5.1,>=4.0.0 # Apache-2.0 python-ceilometerclient>=2.5.0 # Apache-2.0 python-cinderclient>=3.3.0 # Apache-2.0 python-designateclient>=2.7.0 # Apache-2.0 python-glanceclient>=2.8.0 # Apache-2.0 python-heatclient>=1.10.0 # Apache-2.0 python-keystoneclient>=3.8.0 # Apache-2.0 python-magnumclient>=2.1.0 # Apache-2.0 python-manilaclient>=1.16.0 # Apache-2.0 python-mistralclient!=3.2.0,>=3.1.0 # Apache-2.0 python-monascaclient>=1.7.0 # Apache-2.0 python-neutronclient>=6.3.0 # Apache-2.0 python-novaclient>=9.1.0 # Apache-2.0 python-octaviaclient>=1.3.0 # Apache-2.0 python-openstackclient>=3.12.0 # Apache-2.0 python-saharaclient>=1.4.0 # Apache-2.0 python-swiftclient>=3.2.0 # Apache-2.0 python-troveclient>=2.2.0 # Apache-2.0 python-zaqarclient>=1.0.0 # Apache-2.0 python-zunclient>=1.0.0 # Apache-2.0 pytz>=2013.6 # MIT PyYAML>=3.10 # MIT requests>=2.14.2 # Apache-2.0 tenacity>=3.2.1 # Apache-2.0 Routes>=2.3.1 # MIT six>=1.10.0 # MIT SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT sqlalchemy-migrate>=0.11.0 # Apache-2.0 stevedore>=1.20.0 # Apache-2.0 WebOb>=1.7.1 # MIT yaql>=1.1.3 # Apache 2.0 License heat-10.0.2/.coveragerc0000666000175000017500000000013213343562337014665 0ustar zuulzuul00000000000000[run] branch = True source = heat,contrib omit = */tests/* [report] ignore_errors = True heat-10.0.2/releasenotes/0000775000175000017500000000000013343562672015241 5ustar zuulzuul00000000000000heat-10.0.2/releasenotes/source/0000775000175000017500000000000013343562672016541 5ustar zuulzuul00000000000000heat-10.0.2/releasenotes/source/newton.rst0000666000175000017500000000023213343562340020574 0ustar zuulzuul00000000000000=================================== Newton Series Release Notes =================================== .. release-notes:: :branch: origin/stable/newton heat-10.0.2/releasenotes/source/liberty.rst0000666000175000017500000000022113343562340020732 0ustar zuulzuul00000000000000============================== Liberty Series Release Notes ============================== .. release-notes:: :branch: origin/stable/libertyheat-10.0.2/releasenotes/source/ocata.rst0000666000175000017500000000023013343562340020347 0ustar zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. release-notes:: :branch: origin/stable/ocata heat-10.0.2/releasenotes/source/index.rst0000666000175000017500000000023713343562352020401 0ustar zuulzuul00000000000000====================== Heat Release Notes ====================== .. 
toctree:: :maxdepth: 1 unreleased pike ocata newton mitaka liberty heat-10.0.2/releasenotes/source/conf.py0000666000175000017500000002134013343562340020032 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # Heat Release Notes documentation build configuration file, created by # sphinx-quickstart on Tue Nov 3 17:40:50 2015. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'openstackdocstheme', 'reno.sphinxext', ] # openstackdocstheme options repository_name = 'openstack/heat' bug_project = 'heat' bug_tag = '' # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Heat Release Notes' copyright = u'2015, Heat Developers' # Release notes are version independent, no need to set version and release release = '' version = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. 
# modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. html_last_updated_fmt = '%Y-%m-%d %H:%M' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'HeatReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). 
latex_documents = [ ('index', 'HeatReleaseNotes.tex', u'Heat Release Notes Documentation', u'Heat Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'heatreleasenotes', u'Heat Release Notes Documentation', [u'Heat Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'HeatReleaseNotes', u'Heat Release Notes Documentation', u'Heat Developers', 'HeatReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] heat-10.0.2/releasenotes/source/_templates/0000775000175000017500000000000013343562672020676 5ustar zuulzuul00000000000000heat-10.0.2/releasenotes/source/_templates/.placeholder0000666000175000017500000000000013343562340023141 0ustar zuulzuul00000000000000heat-10.0.2/releasenotes/source/unreleased.rst0000666000175000017500000000015713343562340021417 0ustar zuulzuul00000000000000============================== Current Series Release Notes ============================== .. release-notes:: heat-10.0.2/releasenotes/source/mitaka.rst0000666000175000017500000000023213343562340020530 0ustar zuulzuul00000000000000=================================== Mitaka Series Release Notes =================================== .. release-notes:: :branch: origin/stable/mitaka heat-10.0.2/releasenotes/source/pike.rst0000666000175000017500000000021713343562340020215 0ustar zuulzuul00000000000000=================================== Pike Series Release Notes =================================== .. release-notes:: :branch: stable/pike heat-10.0.2/releasenotes/source/_static/0000775000175000017500000000000013343562672020167 5ustar zuulzuul00000000000000heat-10.0.2/releasenotes/source/_static/.placeholder0000666000175000017500000000000013343562340022432 0ustar zuulzuul00000000000000heat-10.0.2/releasenotes/notes/0000775000175000017500000000000013343562672016371 5ustar zuulzuul00000000000000heat-10.0.2/releasenotes/notes/set-networks-for-trove-cluster-b997a049eedbad17.yaml0000666000175000017500000000012613343562340027676 0ustar zuulzuul00000000000000--- features: - Allow to set networks of instances for OS::Trove::Cluster resource. 
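For illustration, the per-instance networks enabled by the note above might be declared as follows (a sketch; the instance-level ``networks`` keys are assumed to mirror those of OS::Trove::Instance and are not spelled out in the note)::

    cluster:
      type: OS::Trove::Cluster
      properties:
        name: my_cluster
        datastore_type: mongodb
        datastore_version: "2.6"
        instances:
          - flavor: m1.small
            volume_size: 1
            networks:
              - network: my_net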
heat-10.0.2/releasenotes/notes/set-tags-for-subnetpool-d86ca0d7e35a05f1.yaml0000666000175000017500000000013013343562340026234 0ustar zuulzuul00000000000000--- features: - Allow to set or update the tags for OS::Neutron::SubnetPool resource.
heat-10.0.2/releasenotes/notes/server-ephemeral-bdm-v2-55e0fe2afc5d8b63.yaml0000666000175000017500000000066413343562340026167 0ustar zuulzuul00000000000000--- features: - OS::Nova::Server now supports ephemeral_size and ephemeral_format properties for the block_device_mapping_v2 property. Property ephemeral_size is an integer, which requires a flavor with an ephemeral disk size greater than 0. Property ephemeral_format is a string with allowed values ext2, ext3, ext4, xfs and ntfs for Windows guests; it is optional, and if it has no value, the default defined in the nova config file is used.
heat-10.0.2/releasenotes/notes/glance-image-tag-6fa123ca30be01aa.yaml0000666000175000017500000000021113343562340024650 0ustar zuulzuul00000000000000--- features: - OS::Glance::Image resource plug-in is updated to support tagging when an image is created or updated as part of a stack.
heat-10.0.2/releasenotes/notes/restrict_update_replace-68abece58cf3f6a0.yaml0000666000175000017500000000043313343562340026604 0ustar zuulzuul00000000000000--- features: - Adds a new feature to restrict update or replace of a resource when a stack is being updated. Template authors can set ``restricted_actions`` in the ``resources`` section of ``resource_registry`` in an environment file to restrict update or replace.
heat-10.0.2/releasenotes/notes/add-list-concat-unique-function-5a87130d9c93cb08.yaml0000666000175000017500000000033513343562340027504 0ustar zuulzuul00000000000000--- features: - The list_concat_unique function was added, which behaves identically to the function ``list_concat``, concatenating several lists using python's extend function, except that it ensures the resulting list contains no repeated items.
heat-10.0.2/releasenotes/notes/add-zun-container-c31fa5316237b13d.yaml0000666000175000017500000000055613343562340024717 0ustar zuulzuul00000000000000--- features: - | A new OS::Zun::Container resource is added that allows users to manage docker containers powered by Zun. This resource will have an 'addresses' attribute that contains various networking information including the neutron port id. This allows users to orchestrate containers with other networking resources (e.g. floating ip).
heat-10.0.2/releasenotes/notes/hidden-designate-domain-record-res-d445ca7f1251b63d.yaml0000666000175000017500000000030713343562340030162 0ustar zuulzuul00000000000000--- deprecations: - | Hidden Designate resource plugins ``OS::Designate::Domain`` and ``OS::Designate::Record``. Use ``OS::Designate::Zone`` and ``OS::Designate::RecordSet`` instead.
heat-10.0.2/releasenotes/notes/configurable-server-name-limit-947d9152fe9b43ee.yaml0000666000175000017500000000037413343562340027506 0ustar zuulzuul00000000000000--- features: - Adds new 'max_server_name_length' configuration option which defaults to the prior upper bound (53) and can be lowered by users (if they need to, for example due to ldap or other internal name limit restrictions).
heat-10.0.2/releasenotes/notes/stack-definition-in-functions-3f7f172a53edf535.yaml0000666000175000017500000000125013343562340027334 0ustar zuulzuul00000000000000--- other: - Intrinsic function plugins will now be passed a StackDefinition object instead of a Stack object. When accessing resources, the StackDefinition will return ResourceProxy objects instead of Resource objects.
These classes replicate the parts of the Stack and Resource APIs that are used by the built-in Function plugins, but authors of custom third-party Template/Function plugins should audit them to ensure they do not depend on unstable parts of the API that are no longer accessible. The StackDefinition and ResourceProxy APIs are considered stable and any future changes to them will go through the standard deprecation process.
heat-10.0.2/releasenotes/notes/map-replace-function-26bf247c620f64bf.yaml0000666000175000017500000000041613343562340025476 0ustar zuulzuul00000000000000--- features: - Add ``map_replace`` function, that takes 2 arguments: an input map and a map containing a ``keys`` and/or ``values`` map. key/value substitutions on the input map are performed based on the mappings passed in ``keys`` and ``values``.
heat-10.0.2/releasenotes/notes/octavia-resources-0a25720e16dfe55d.yaml0000666000175000017500000000200413343562340025110 0ustar zuulzuul00000000000000--- features: - Adds new resources for octavia lbaas service. - New resource ``OS::Octavia::LoadBalancer`` is added to create and manage Load Balancers which allow traffic to be directed between servers. - New resource ``OS::Octavia::Listener`` is added to create and manage Listeners which represent a listening endpoint for the Load Balancer. - New resource ``OS::Octavia::Pool`` is added to create and manage Pools which represent a group of nodes. Pools define the subnet where nodes reside, the balancing algorithm, and the nodes themselves. - New resource ``OS::Octavia::PoolMember`` is added to create and manage Pool members which represent a single backend node. - New resource ``OS::Octavia::HealthMonitor`` is added to create and manage Health Monitors which watch status of the Load Balanced servers. - New resource ``OS::Octavia::L7Policy`` is added to create and manage L7 Policies. - New resource ``OS::Octavia::L7Rule`` is added to create and manage L7 Rules.
heat-10.0.2/releasenotes/notes/add-aodh-composite-alarm-f8eb4f879fe0916b.yaml0000666000175000017500000000030313343562340026321 0ustar zuulzuul00000000000000--- features: - OS::Aodh::CompositeAlarm resource plugin is added to manage Aodh composite alarm, aiming to replace OS::Aodh::CombinationAlarm, which was deprecated in the Newton release.
heat-10.0.2/releasenotes/notes/bp-support-rbac-policy-fd71f8f6cc97bfb6.yaml0000666000175000017500000000041113343562340026243 0ustar zuulzuul00000000000000--- features: - OS::Neutron::RBACPolicy resource plugin is added to support RBAC policy, which is used to manage RBAC policy in Neutron. This resource creates and manages Neutron RBAC policy, which allows sharing Neutron networks with subsets of tenants.
heat-10.0.2/releasenotes/notes/system-random-string-38a14ae2cb6f4a24.yaml0000666000175000017500000000026113343562340025636 0ustar zuulzuul00000000000000--- security: - | Heat no longer uses standard Python RNG when generating values for OS::Heat::RandomString resource, and instead relies on system's RNG for that.
heat-10.0.2/releasenotes/notes/server-add-user-data-update-policy-c34646acfaada4d4.yaml0000666000175000017500000000056513343562340030375 0ustar zuulzuul00000000000000--- features: - The OS::Nova::Server now supports a new property user_data_update_policy, which may be set to either 'REPLACE' (default) or 'IGNORE' if you wish to allow user_data updates to be ignored on stack update. This is useful when managing a group of servers where changed user_data should apply to new servers without replacing existing servers.
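A minimal sketch of the ``user_data_update_policy`` property described above (image and flavor names are illustrative)::

    server:
      type: OS::Nova::Server
      properties:
        image: my-image
        flavor: m1.small
        user_data_update_policy: IGNORE
        user_data: |
          #!/bin/sh
          echo "edits to this script no longer replace existing servers"

With ``IGNORE``, a changed ``user_data`` takes effect only on newly created servers, while existing ones are left in place.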
heat-10.0.2/releasenotes/notes/bp-support-host-aggregate-fbc4097f4e6332b8.yaml0000666000175000017500000000044013343562340026502 0ustar zuulzuul00000000000000--- features: - OS::Nova::HostAggregate resource plugin is added to support host aggregate, which is provided by nova ``aggregates`` API extension. - nova.host constraint is added to support validating the host attribute, which is provided by nova ``host`` API extension.
heat-10.0.2/releasenotes/notes/magnum-resource-update-0f617eec45ef8ef7.yaml0000666000175000017500000000156313343562340026241 0ustar zuulzuul00000000000000--- prelude: > Magnum recently changed terminology to more intuitively convey key concepts in order to align with industry standards. "Bay" is now "Cluster" and "BayModel" is now "ClusterTemplate". This release deprecates the old names in favor of the new. features: - OS::Magnum::Cluster resource plugin added to support magnum cluster feature, which is provided by magnum ``cluster`` API. - OS::Magnum::ClusterTemplate resource plugin added to support magnum cluster template feature, which is provided by magnum ``clustertemplates`` API. deprecations: - Magnum terminology deprecations * `OS::Magnum::Bay` is now deprecated; use `OS::Magnum::Cluster` instead * `OS::Magnum::BayModel` is now deprecated; use `OS::Magnum::ClusterTemplate` instead Deprecation warnings are printed for old usages.
heat-10.0.2/releasenotes/notes/neutron-lbaas-v2-resources-c0ebbeb9bc9f7a42.yaml0000666000175000017500000000174213343562340027075 0ustar zuulzuul00000000000000--- features: - New resources for Neutron Load Balancer version 2. These are unique for version 2 and do not support or mix with existing version 1 resources. - New resource ``OS::Neutron::LBaaS::LoadBalancer`` is added to create and manage Load Balancers which allow traffic to be directed between servers. - New resource ``OS::Neutron::LBaaS::Listener`` is added to create and manage Listeners which represent a listening endpoint for the Load Balancer. - New resource ``OS::Neutron::LBaaS::Pool`` is added to create and manage Pools which represent a group of nodes. Pools define the subnet where nodes reside, the balancing algorithm, and the nodes themselves. - New resource ``OS::Neutron::LBaaS::PoolMember`` is added to create and manage Pool members which represent a single backend node. - New resource ``OS::Neutron::LBaaS::HealthMonitor`` is added to create and manage Health Monitors which watch status of the Load Balanced servers.
heat-10.0.2/releasenotes/notes/server-side-multi-env-7862a75e596ae8f5.yaml0000666000175000017500000000064413343562340025604 0ustar zuulzuul00000000000000--- features: - Multiple environment files may be passed to the server in the files dictionary along with an ordered list of the environment file names. The server will generate the stack's environment from the provided files rather than requiring the client to merge the environments together. This is optional; the existing interface to pass in the already resolved environment is still present.
heat-10.0.2/releasenotes/notes/force-delete-nova-instance-6ed5d7fbd5b6f5fe.yaml0000666000175000017500000000053713343562340027110 0ustar zuulzuul00000000000000--- fixes: - Force delete the nova instance. If a resource is associated with a nova instance that is in 'SOFT_DELETED' status, the resource can't be deleted while nova is configured with 'reclaim_instance_interval'; so we force-delete the nova instance, and then all the resources related to the instance are processed properly.
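The version 2 load balancer resources in the neutron-lbaas-v2-resources note above are designed to be chained together. A minimal sketch (the property names are recalled from the resource schemas of this release and should be treated as assumptions)::

    lb:
      type: OS::Neutron::LBaaS::LoadBalancer
      properties:
        vip_subnet: my_subnet
    listener:
      type: OS::Neutron::LBaaS::Listener
      properties:
        loadbalancer: {get_resource: lb}
        protocol: HTTP
        protocol_port: 80
    pool:
      type: OS::Neutron::LBaaS::Pool
      properties:
        listener: {get_resource: listener}
        lb_algorithm: ROUND_ROBIN
        protocol: HTTP
    member:
      type: OS::Neutron::LBaaS::PoolMember
      properties:
        pool: {get_resource: pool}
        address: 10.0.0.10
        protocol_port: 80
        subnet: my_subnet
    monitor:
      type: OS::Neutron::LBaaS::HealthMonitor
      properties:
        pool: {get_resource: pool}
        type: HTTP
        delay: 5
        max_retries: 3
        timeout: 5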
heat-10.0.2/releasenotes/notes/add-tags-for-neutron-router-43d72e78aa89fd07.yaml0000666000175000017500000000012413343562340026753 0ustar zuulzuul00000000000000--- features: - Allow to set or update the tags for OS::Neutron::Router resource.
heat-10.0.2/releasenotes/notes/add-cephfs-share-protocol-033e091e7c6c5166.yaml0000666000175000017500000000015113343562340026255 0ustar zuulzuul00000000000000--- fixes: - | 'CEPHFS' can be used as a share protocol when using OS::Manila::Share resource.
heat-10.0.2/releasenotes/notes/cinder-qos-specs-resource-ca5a237ebc114729.yaml0000666000175000017500000000035513343562340026463 0ustar zuulzuul00000000000000--- features: - OS::Cinder::QoSSpecs resource plugin added to support cinder QoS Specs, which is provided by cinder ``qos-specs`` API extension. - cinder.qos_specs constraint added to support validating the QoS Specs attribute.
heat-10.0.2/releasenotes/notes/converge-flag-for-stack-update-e0e92a7fe232f10f.yaml0000666000175000017500000000050113343562340027422 0ustar zuulzuul00000000000000--- features: - Add `converge` parameter for stack update (and update preview) API. This parameter will force resources to observe the reality of resources before actually updating them. The value of this parameter can be any boolean value. This will replace the config flag `observe_on_update` in the near future.
heat-10.0.2/releasenotes/notes/bp-support-trunk-port-733019c49a429826.yaml0000666000175000017500000000013313343562340025426 0ustar zuulzuul00000000000000--- features: - New resource ``OS::Neutron::Trunk`` is added to manage Neutron Trunks.
heat-10.0.2/releasenotes/notes/bp-mistral-new-resource-type-workflow-execution-748bd37faa3e427b.yaml0000666000175000017500000000036213343562340033065 0ustar zuulzuul00000000000000--- features: - | A new OS::Mistral::ExternalResource is added that allows users to manage resources that are not known to Heat by specifying in the template Mistral workflows to handle actions such as create, update and delete.
heat-10.0.2/releasenotes/notes/doc-migrate-10c968c819848240.yaml0000666000175000017500000000031213343562340023367 0ustar zuulzuul00000000000000--- features: - | All developer, contributor, and user content from various guides in openstack-manuals has been moved in-tree and is published at `https://docs.openstack.org/heat/pike/`.
heat-10.0.2/releasenotes/notes/resource-search-3234afe601ea4e9d.yaml0000666000175000017500000000043513343562340024633 0ustar zuulzuul00000000000000--- features: - A stack can be searched for resources based on their name, status, type, action, id and physical_resource_id. And this feature is enabled both in REST API and CLI. For more details, please refer to the orchestration API document and heat CLI user guide.
heat-10.0.2/releasenotes/notes/nova-quota-resource-84350f0467ce2d40.yaml0000666000175000017500000000021413343562340025235 0ustar zuulzuul00000000000000--- features: - New resource ``OS::Nova::Quota`` is added to enable an admin to manage Compute service quotas for a specific project.
heat-10.0.2/releasenotes/notes/policy-in-code-124372f6cdb0a497.yaml0000666000175000017500000000126013343562340024213 0ustar zuulzuul00000000000000--- features: - | Heat now supports policy in code, which means if you didn't modify any of the policy rules, you won't need to add rules in the `policy.yaml` or `policy.json` file, because heat now keeps all default policies under `heat/policies`. You can still generate and modify a `policy.yaml` file which will override policy rules in code if those rules appear in the `policy.yaml` file.
upgrade: - | Default policy.json file is now removed as we now generate the default policies in code. Please be aware of this when using that file in your environment. You can still generate a `policy.yaml` file if that's required in your environment.
heat-10.0.2/releasenotes/notes/server-group-soft-policy-8eabde24bf14bf1d.yaml0000666000175000017500000000021013343562340026656 0ustar zuulzuul00000000000000--- features: - Two new policies, soft-affinity and soft-anti-affinity, are now supported for the OS::Nova::ServerGroup resource.
heat-10.0.2/releasenotes/notes/bp-support-conditions-1a9f89748a08cd4f.yaml0000666000175000017500000000166313343562340025773 0ustar zuulzuul00000000000000--- features: - Adds optional section ``conditions`` for hot template ( heat_template_version.2016-10-14) and ``Conditions`` for cfn template (AWSTemplateFormatVersion.2010-09-09). - Adds some condition functions, like ``equals``, ``not``, ``and`` and ``or``. These condition functions can be used in the ``conditions`` section to define one or more conditions which are evaluated based on input parameter values provided when a user creates or updates a stack. - Adds optional section ``condition`` for resource and output definitions. Condition names defined in ``conditions`` and condition functions can be referenced in this section, in order to conditionally create resources or conditionally give outputs of a stack. - Adds function ``if`` to return the corresponding value based on condition evaluation. This function can be used to conditionally set the value of resource properties and outputs.
heat-10.0.2/releasenotes/notes/cinder-backup-cb72e775681fb5a5.yaml0000666000175000017500000000071413343562340024205 0ustar zuulzuul00000000000000--- upgrade: - New config section ``volumes`` with new config option ``[volumes]backups_enabled`` (defaults to ``True``). Operators that do not have Cinder backup service deployed in their cloud are encouraged to set this option to ``False``. fixes: - Allow to configure Heat service to forbid creation of stacks containing Volume resources with ``deletion_policy`` set to ``Snapshot`` when there is no Cinder backup service available.
heat-10.0.2/releasenotes/notes/deprecate-nova-floatingip-resources-d5c9447a199be402.yaml0000666000175000017500000000053413343562340030452 0ustar zuulzuul00000000000000--- deprecations: - nova-network is no longer supported in OpenStack. Please use OS::Neutron::FloatingIPAssociation and OS::Neutron::FloatingIP in place of OS::Nova::FloatingIPAssociation and OS::Nova::FloatingIP - The AWS::EC2::EIP domain is always assumed to be 'vpc', since nova-network is not supported in OpenStack any longer.
heat-10.0.2/releasenotes/notes/set-tags-for-port-471155bb53436361.yaml0000666000175000017500000000012213343562340024456 0ustar zuulzuul00000000000000--- features: - Allow to set or update the tags for OS::Neutron::Port resource.
heat-10.0.2/releasenotes/notes/barbican-container-77967add0832d51b.yaml0000666000175000017500000000055413343562340025137 0ustar zuulzuul00000000000000--- features: - Add new ``OS::Barbican::GenericContainer`` resource for storing arbitrary barbican secrets. - Add new ``OS::Barbican::RSAContainer`` resource for storing RSA public keys, private keys, and private key pass phrases. - A new ``OS::Barbican::CertificateContainer`` resource for storing the secrets that are relevant to certificates.
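Tying together the pieces from the bp-support-conditions note above, a sketch of a template that creates a volume only for a production environment (requires heat_template_version 2016-10-14 or later)::

    heat_template_version: 2016-10-14
    parameters:
      env_type:
        type: string
        default: test
    conditions:
      create_prod_res: {equals: [{get_param: env_type}, prod]}
    resources:
      volume:
        type: OS::Cinder::Volume
        condition: create_prod_res
        properties:
          size: 1
    outputs:
      volume_size:
        value: {if: [create_prod_res, {get_attr: [volume, size]}, -1]}

Here the volume is created only when ``env_type`` is ``prod``, and the output falls back to ``-1`` otherwise.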
heat-10.0.2/releasenotes/notes/store-resource-attributes-8bcbedca2f86986e.yaml0000666000175000017500000000110013343562340027061 0ustar zuulzuul00000000000000--- features: - Resource attributes are now stored at the time a resource is created or updated, allowing for fast resolution of outputs without having to retrieve live data from the underlying physical resource. To minimise compatibility problems, the behaviour of the `show` attribute, the `with_attr` option to the resource show API, and stacks that do not yet use the convergence architecture (due to the convergence_engine being disabled at the time they were created) is unchanged - in each of these cases live data will still be returned. heat-10.0.2/releasenotes/notes/event-transport-302d1db6c5a5daa9.yaml0000666000175000017500000000031113343562340024755 0ustar zuulzuul00000000000000--- features: - Added a new ``event-sinks`` element to the environment which allows specifying a target where events from the stack are sent. It supports the ``zaqar-queue`` element for now. heat-10.0.2/releasenotes/notes/external-resources-965d01d690d32bd2.yaml0000666000175000017500000000057413343562340025241 0ustar zuulzuul00000000000000--- prelude: > Support external resource references in templates. features: - Add an `external_id` attribute for a resource to reference an existing external resource. The resource (with the `external_id` attribute) will not be able to be updated. This keeps management rights external. - This feature only supports templates with version `2016-10-14` or later. heat-10.0.2/releasenotes/notes/monasca-period-f150cdb134f1e036.yaml0000666000175000017500000000116113343562340024346 0ustar zuulzuul00000000000000--- features: - Add an optional 'period' property for the Monasca Notification resource. The newly added property allows the user to tell Monasca the interval in seconds at which to periodically invoke a webhook until the ALARM state transitions back to an OK state or vice versa. This is useful when the user wants to create a stack which will automatically scale up or scale down more than once if the alarm continues to be in the same state. To conform to the existing Heat autoscaling behaviour, we manually create the monasca notification resource in Heat with a default interval value of 60. heat-10.0.2/releasenotes/notes/immutable-parameters-a13dc9bec7d6fa0f.yaml0000666000175000017500000000100513343562340026075 0ustar zuulzuul00000000000000--- features: - Adds a new "immutable" boolean field to the parameters section in a HOT template. This gives template authors the ability to mark template parameters as immutable to restrict updating parameters which have destructive effects on the application. A value of True results in the engine rejecting stack-updates that include changes to that parameter. When not specified in the template, "immutable" defaults to False to ensure backwards compatibility with old templates. heat-10.0.2/releasenotes/notes/add-contains-function-440aa7184a07758c.yaml0000666000175000017500000000027313343562340025512 0ustar zuulzuul00000000000000--- features: - The 'contains' function was added, which checks whether the specified value is in a sequence. In addition, the new function can be used as a condition function. heat-10.0.2/releasenotes/notes/resource_group_removal_policies_mode-d489e0cc49942e2a.yaml0000666000175000017500000000035313343562340031160 0ustar zuulzuul00000000000000--- features: - | OS::Heat::ResourceGroup now supports a removal_policies_mode property.
This can be used to optionally select different behavior on update, where you may wish to overwrite rather than append to the current policy. heat-10.0.2/releasenotes/notes/bp-support-neutron-qos-3feb38eb2abdcc87.yaml0000666000175000017500000000101113343562340026360 0ustar zuulzuul00000000000000--- features: - OS::Neutron::QoSPolicy resource plugin is added to support QoS policy, which is provided by the neutron ``qos`` API extension. - OS::Neutron::QoSBandwidthLimitRule resource plugin is added to support the neutron QoS bandwidth limit rule, which is provided by the neutron ``qos`` API extension. - Resources ``OS::Neutron::Port`` and ``OS::Neutron::Net`` now support an optional ``qos_policy`` property that associates them with a QoS policy to offer different service levels based on the policy rules. heat-10.0.2/releasenotes/notes/cinder-quota-resource-f13211c04020cd0c.yaml0000666000175000017500000000034413343562340025566 0ustar zuulzuul00000000000000--- features: - New resource ``OS::Cinder::Quota`` is added to manage cinder quotas. Cinder quotas are operational limits to projects on cinder block storage resources. These include gigabytes, snapshots, and volumes. heat-10.0.2/releasenotes/notes/neutron-segment-support-a7d44af499838a4e.yaml0000666000175000017500000000106413343562340026347 0ustar zuulzuul00000000000000--- features: - A new ``openstack`` client plugin to use the python-openstacksdk library and a ``neutron.segment`` custom constraint. - A new ``OS::Neutron::Segment`` resource to create routed networks. Availability of this resource depends on the availability of the neutron ``segment`` API extension. - Resource ``OS::Neutron::Subnet`` now supports an optional ``segment`` property to specify a segment. - Resource ``OS::Neutron::Net`` now supports an ``l2_adjacency`` attribute indicating whether L2 connectivity is available across the network or not. heat-10.0.2/releasenotes/notes/repeat-support-setting-permutations-fbc3234166b529ca.yaml0000666000175000017500000000072213343562340030654 0ustar zuulzuul00000000000000--- features: - Added a new section ``permutations`` for the ``repeat`` function, to decide whether to iterate over all the permutations of the elements in the given lists (see the sketch below). If 'permutations' is not specified, the default value is true, for compatibility with the previous behavior. The arguments have to be lists instead of dicts if 'permutations' is False, because keys in a dict are unordered, and the list arguments all have to be of the same length. heat-10.0.2/releasenotes/notes/senlin-resources-71c856dc62d0b407.yaml0000666000175000017500000000146413343562340024707 0ustar zuulzuul00000000000000--- features: - New resource ``OS::Senlin::Cluster`` is added to create a cluster in senlin. A cluster is a group of homogeneous nodes. - New resource ``OS::Senlin::Node`` is added to create a node in senlin. A node represents a physical object exposed by other OpenStack services. - New resource ``OS::Senlin::Receiver`` is added to create a receiver in senlin. A receiver can be used to hook the engine to some external event/alarm sources. - New resource ``OS::Senlin::Profile`` is added to create a profile in senlin. A profile is a module used for creating nodes; it is the definition of a node. - New resource ``OS::Senlin::Policy`` is added to create a policy in senlin. A policy is a set of rules that can be checked and/or enforced when an Action is performed on a Cluster.
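A hedged sketch of the ``permutations`` behaviour noted above (resource and placeholder names are hypothetical; with ``permutations: false`` the lists are iterated pairwise rather than as a cross product):

    resources:
      server:
        type: OS::Nova::Server
        properties:
          networks:
            repeat:
              for_each:
                <%net%>: [net1, net2]
                <%port%>: [port1, port2]
              template:
                network: <%net%>
                port: <%port%>
              permutations: false

Here the server would get net1/port1 and net2/port2, instead of all four combinations.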
heat-10.0.2/releasenotes/notes/designate-v2-support-0f889e9ad13d4aa2.yaml0000666000175000017500000000035113343562340025547 0ustar zuulzuul00000000000000--- features: - Designate v2 resource plugins OS::Designate::Zone and OS::Designate::RecordSet are newly added. deprecations: - Designate v1 resource plugins OS::Designate::Domain and OS::Designate::Record are deprecated.heat-10.0.2/releasenotes/notes/set-tags-for-subnet-17a97b88dd11de63.yaml0000666000175000017500000000012413343562340025301 0ustar zuulzuul00000000000000--- features: - Allow setting or updating the tags for the OS::Neutron::Subnet resource. heat-10.0.2/releasenotes/notes/change-heat-keystone-user-name-limit-to-255-bd076132b98744be.yaml0000666000175000017500000000025513343562340031352 0ustar zuulzuul00000000000000--- other: - The heat keystone user name character limit has been increased from 64 to 255. Any extra characters will be lost when the name is truncated to the last 255 characters. heat-10.0.2/releasenotes/notes/environment_validate_template-fee21a03bb628446.yaml0000666000175000017500000000033713343562340027570 0ustar zuulzuul00000000000000--- features: - | The template validate API call now returns the Environment calculated by heat - this enables a preview of the merged environment when using parameter_merge_strategy prior to creating the stack. heat-10.0.2/releasenotes/notes/mark-combination-alarm-as-placeholder-resource-e243e9692cab52e0.yaml0000666000175000017500000000074113343562340032517 0ustar zuulzuul00000000000000--- critical: - Since Aodh dropped support for combination alarms, OS::Aodh::CombinationAlarm is now marked as a hidden resource that inherits directly from the None resource, which makes the resource do nothing when handling any action (other than delete). Please do not use it. Old resources created with that resource type can still be deleted. It is recommended to switch away from that resource type as soon as possible, since it will be removed soon. heat-10.0.2/releasenotes/notes/legacy-stack-user-id-cebbad8b0f2ed490.yaml0000666000175000017500000000060513343562340025673 0ustar zuulzuul00000000000000--- upgrade: - If upgrading with pre-icehouse stacks which contain resources that create users (such as OS::Nova::Server, OS::Heat::SoftwareDeployment, and OS::Heat::WaitConditionHandle), it is possible that the users will not be removed upon stack deletion due to the removal of a legacy fallback code path. In such a situation, these users will require manual removal. heat-10.0.2/releasenotes/notes/template-validate-improvements-52ecf5125c9efeda.yaml0000666000175000017500000000073713343562340030046 0ustar zuulzuul00000000000000--- features: - Template validation is improved to ignore a given set of error codes. For example, heat will report a template as invalid if it does not find the required OpenStack services in the cloud deployment; while authoring the template, a user might want to avoid such scenarios so that they can create a valid template without worrying about run-time environments. Please refer to the API documentation of template validation for more details.heat-10.0.2/releasenotes/notes/parameter-group-for-nested-04559c4de34e326a.yaml0000666000175000017500000000017513343562340026564 0ustar zuulzuul00000000000000--- features: - A ParameterGroups section is added to nested stacks in the output of stack template validation.
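A minimal sketch of a ``parameter_groups`` section as surfaced by the validation output described above (the group label and parameter names are hypothetical):

    parameters:
      net_name:
        type: string
      subnet_cidr:
        type: string
    parameter_groups:
    - label: Network configuration
      description: Parameters that control networking
      parameters:
      - net_name
      - subnet_cidr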
heat-10.0.2/releasenotes/notes/remove-heat-resourcetype-constraint-b679618a149fc04e.yaml0000666000175000017500000000017413343562340030543 0ustar zuulzuul00000000000000--- deprecations: - The heat.resource_type custom constraint has been removed. This constraint never actually worked. heat-10.0.2/releasenotes/notes/deployment-swift-data-server-property-51fd4f9d1671fc90.yaml0000666000175000017500000000055113343562340031104 0ustar zuulzuul00000000000000--- features: - A new property, deployment_swift_data, is added to the OS::Nova::Server and OS::Heat::DeployedServer resources. The property is used to define the Swift container and object name that is used for deployment data for the server. If unset, the fallback is the previous behavior, where these values will be automatically generated. heat-10.0.2/releasenotes/notes/cancel_without_rollback-e5d978a60d9baf45.yaml0000666000175000017500000000013213343562340026435 0ustar zuulzuul00000000000000--- features: Adds REST API support to cancel a stack create/update without rollback. heat-10.0.2/releasenotes/notes/convergence-delete-race-5b821bbd4c5ba5dc.yaml0000666000175000017500000000111513343562340026335 0ustar zuulzuul00000000000000--- fixes: - | Previously, when deleting a convergence stack, the API call would return immediately, so that it was possible for a client immediately querying the status of the stack to see the state of the previous operation in progress or having failed, and confuse that with a current status. (This included Heat itself when acting as a client for a nested stack.) Convergence stacks are now guaranteed to have moved to the ``DELETE_IN_PROGRESS`` state before the delete API call returns, so any subsequent polling will reflect up-to-date information. heat-10.0.2/releasenotes/notes/api-outputs-6d09ebf5044f51c3.yaml0000666000175000017500000000114713343562340023760 0ustar zuulzuul00000000000000--- features: - Added new functionality for showing and listing stack outputs without resolving all outputs during stack initialisation. - Added new API calls for showing and listing stack outputs: ``/stack/outputs`` and ``/stack/outputs/output_key``. - Added use of the new API in python-heatclient for ``output_show`` and ``output_list``. Now, if the Heat API version is 1.19 or above, the Heat client will use the ``output_show`` and ``output_list`` API calls instead of parsing the stack get response. If the Heat API version is lower than 1.19, outputs are resolved in the Heat client as before.heat-10.0.2/releasenotes/notes/sahara-job-resource-84aecc11fdf1d5af.yaml0000666000175000017500000000050313343562340025611 0ustar zuulzuul00000000000000--- features: - A new resource ``OS::Sahara::Job`` has been added, which allows creating and launching sahara jobs. A job can be launched with a resource signal. - Custom constraints for all sahara resources added - sahara.cluster, sahara.cluster_template, sahara.data_source, sahara.job_binary, sahara.job_type. heat-10.0.2/releasenotes/notes/keystone-domain-support-e06e2c65c5925ae5.yaml0000666000175000017500000000017213343562340026307 0ustar zuulzuul00000000000000--- features: - A new resource plugin ``OS::Keystone::Domain`` is added to support the lifecycle of a keystone domain.heat-10.0.2/releasenotes/notes/keystone-project-allow-get-attribute-b382fe97694e3987.yaml0000666000175000017500000000020113343562340030551 0ustar zuulzuul00000000000000--- fixes: - Add an attribute schema to `OS::Keystone::Project`. This allows the get_attr function to work with the project resource.
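For reference, the new stack output calls described in the api-outputs note above take roughly the following form (a sketch; the identifiers in braces are placeholders):

    GET /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/outputs
    GET /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/outputs/{output_key}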
heat-10.0.2/releasenotes/notes/legacy-client-races-ba7a60cef5ec1694.yaml0000666000175000017500000000126613343562340025444 0ustar zuulzuul00000000000000--- fixes: - | Previously, the suspend, resume, and check API calls for all stacks, and the update, restore, and delete API calls for non-convergence stacks, returned immediately after starting the stack operation. This meant that for a client reading the state immediately when performing the same operation twice in a row, it could have misinterpreted a previous state as the latest unless careful reference was made to the updated_at timestamp. Stacks are now guaranteed to have moved to the ``IN_PROGRESS`` state before any of these APIs return (except in the case of deleting a non-convergence stack where another operation was already in progress). heat-10.0.2/releasenotes/notes/dns-resolution-5afc1c57dfd05aff.yaml0000666000175000017500000000040113343562340024743 0ustar zuulzuul00000000000000--- features: - Supports internal DNS resolution and integration with external DNS services for neutron resources. Template authors can use the ``dns_name`` and ``dns_domain`` properties of neutron resource plugins for this functionality. heat-10.0.2/releasenotes/notes/zaqar-notification-a4d240bbf31b7440.yaml0000666000175000017500000000042013343562340025240 0ustar zuulzuul00000000000000--- features: - New ``OS::Zaqar::Subscription`` and ``OS::Zaqar::MistralTrigger`` resource types allow users to attach to Zaqar queues, respectively, notifications in general and notifications that trigger Mistral workflow executions in particular. heat-10.0.2/releasenotes/notes/remove-SSLMiddleware-2f15049af559f26a.yaml0000666000175000017500000000041313343562340025340 0ustar zuulzuul00000000000000--- deprecations: - | The SSL middleware ``heat.api.middleware.ssl:SSLMiddleware`` that has been deprecated since 6.0.0 has now been removed, check your paste config and ensure it has been replaced by ``oslo_middleware.http_proxy_to_wsgi`` instead. heat-10.0.2/releasenotes/notes/parameter-tags-148ef065616f92fc.yaml0000666000175000017500000000017113343562340024331 0ustar zuulzuul00000000000000--- features: - | Added a new schema property, tags, to parameters, to categorize parameters based on features. heat-10.0.2/releasenotes/notes/event-list-nested-depth-80081a2a8eefee1a.yaml0000666000175000017500000000137013343562340026272 0ustar zuulzuul00000000000000--- prelude: > Previously the event list REST API call only returned events for the specified stack even when that stack contained nested stack resources. This meant that fetching all nested events required an inefficient recursive client-side implementation. features: - The event list GET REST API call now has a different behaviour when the 'nested_depth' parameter is set to an integer greater than zero. The response will contain all events down to the requested nested depth. - When 'nested_depth' is set the response also includes an extra entry in the 'links' list with 'rel' set to 'root_stack'. This can be used by client side implementations to detect whether it is necessary to fall back to client-side recursive event fetching. heat-10.0.2/releasenotes/notes/get-server-webmks-console-url-f7066a9e14429084.yaml0000666000175000017500000000023613343562340027066 0ustar zuulzuul00000000000000--- features: - Supports getting the webmks console URL for the OS::Nova::Server resource. This requires nova API version 2.8 or greater.
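A hedged sketch of retrieving the webmks URL noted above via the server's ``console_urls`` attribute (the ``server`` resource name is hypothetical; the attribute path is an assumption based on the OS::Nova::Server attribute schema):

    outputs:
      webmks_console_url:
        value: {get_attr: [server, console_urls, webmks]}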
heat-10.0.2/releasenotes/notes/add-list_concat-function-c28563ab8fb6362e.yaml0000666000175000017500000000016613343562340026346 0ustar zuulzuul00000000000000--- features: - The list_concat function was added, which concatenates several lists using python's extend function. heat-10.0.2/releasenotes/notes/add-zun-client-plugin-dfc10ecd1a6e98be.yaml0000666000175000017500000000020313343562340026070 0ustar zuulzuul00000000000000--- other: - | Introduce a Zun client plugin module that will be used by the Zun resources that are under development. heat-10.0.2/releasenotes/notes/environment-merging-d623362fac1279f7.yaml0000666000175000017500000000067113343562340025406 0ustar zuulzuul00000000000000--- prelude: > Previously 'parameters' and 'parameter_defaults' specified in an environment file used to overwrite their existing values. features: - A new 'parameter_merge_strategies' section can be added to the environment file, where 'default' and/or parameter-specific merge strategies can be specified. - Parameters and parameter defaults specified in the environment file will be merged as per their specified strategies. heat-10.0.2/releasenotes/notes/sync-queens-releasenote-13f68851f7201e37.yaml0000666000175000017500000000215113343562340026026 0ustar zuulzuul00000000000000--- prelude: | Note that Heat is compatible with OpenStack Identity federation, even when using Keystone trusts. It should work after you enable Federation and build the `auto-provisioning map`_ with the heat service user in Keystone. Auto-provisioning has been available in Keystone since the Ocata release. .. _auto-provisioning map: https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning other: - | The Heat plugin in Horizon has been replaced with a new stand-alone Horizon plugin, heat-dashboard. You can see more detail in the heat-dashboard repository (https://git.openstack.org/cgit/openstack/heat-dashboard). - | The old Heat Tempest plugin ``heat_tests`` has been removed and replaced by a separate Tempest plugin named ``heat``, in the heat-tempest-plugin repository (https://git.openstack.org/cgit/openstack/heat-tempest-plugin). Functional tests that are appropriate for the Tempest environment have been migrated to the new plugin. Other functional tests remain behind in the heat repository. heat-10.0.2/releasenotes/notes/give-me-a-network-67e23600945346cd.yaml0000666000175000017500000000103413343562340024503 0ustar zuulzuul00000000000000--- features: - New item key 'allocate_network' of 'networks' with allowed values 'auto' and 'none' for OS::Nova::Server, to support the 'Give Me a Network' nova feature. Specifying 'auto' will automatically allocate a network topology for the project if there is no existing network available; specifying 'none' means no networking will be allocated for the created server. This feature requires nova API microversion 2.37 or later, and the ``auto-allocated-topology`` API must be available in the Neutron networking service.
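A minimal sketch of the 'Give Me a Network' usage described above (the flavor and image values are hypothetical):

    resources:
      server:
        type: OS::Nova::Server
        properties:
          flavor: m1.small
          image: cirros
          networks:
            - allocate_network: auto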
heat-10.0.2/releasenotes/notes/monasca-supported-71c5373282c3b338.yaml0000666000175000017500000000026713343562340024714 0ustar zuulzuul00000000000000--- features: - OS::Monasca::AlarmDefinition and OS::Monasca::Notification resource plug-ins are now supported by the heat community, as monasca became an official OpenStack project.heat-10.0.2/releasenotes/notes/fix-attachments-type-c5b6fb5b4c2bcbfe.yaml0000666000175000017500000000034413343562340026115 0ustar zuulzuul00000000000000--- deprecations: - | The 'attachments' attribute of OS::Cinder::Volume has been deprecated in favor of 'attachments_list', which has the correct type of LIST. This makes this data easier for end users to process. heat-10.0.2/releasenotes/notes/hidden-heat-harestarter-resource-a123479c317886a3.yaml0000666000175000017500000000125213343562340027570 0ustar zuulzuul00000000000000--- upgrade: - | The ``OS::Heat::HARestarter`` resource type is no longer supported. This resource type is now hidden from the documentation. HARestarter resources in stacks, including pre-existing ones, are now only placeholders and will no longer do anything. The recommended alternative is to mark a resource unhealthy and then do a stack update to replace it. This still correctly manages dependencies but, unlike HARestarter, also avoids replacing dependent resources unnecessarily. An example of this technique can be seen in the autohealing sample templates at https://git.openstack.org/cgit/openstack/heat-templates/tree/hot/autohealing heat-10.0.2/releasenotes/notes/add-template-dir-config-b96392a9e116a2d3.yaml0000666000175000017500000000042113343562340025761 0ustar zuulzuul00000000000000--- features: - Add `template_dir` to the config. Normally heat has the template directory `/etc/heat/templates`. This change makes it more official. In the future, it is possible to implement features like accessing templates directly from the global template environment. heat-10.0.2/releasenotes/notes/deprecate-threshold-alarm-5738f5ab8aebfd20.yaml0000666000175000017500000000030713343562340026645 0ustar zuulzuul00000000000000--- deprecations: - The threshold alarm, which uses the ceilometer API, has been deprecated in aodh since Ocata. Please use ``OS::Aodh::GnocchiAggregationByResourcesAlarm`` in place of ``OS::Aodh::Alarm``.heat-10.0.2/releasenotes/notes/know-limit-releasenote-4d21fc4d91d136d9.yaml0000666000175000017500000000047013343562340026064 0ustar zuulzuul00000000000000--- issues: - | Heat does not work with keystone identity federation. This is a known limitation, as heat uses keystone trusts for deferred authentication and trusts do not work with federated keystone. For more details check `https://etherpad.openstack.org/p/pike-ptg-cross-project-federation`. heat-10.0.2/releasenotes/notes/mark-unhealthy-phys-id-e90fd669d86963d1.yaml0000666000175000017500000000030713343562340025736 0ustar zuulzuul00000000000000--- features: - The ``resource mark unhealthy`` command now accepts either a logical resource name (as it did previously) or a physical resource ID to identify the resource to be marked unhealthy. heat-10.0.2/releasenotes/notes/make_url-function-d76737adb1e54801.yaml0000666000175000017500000000031413343562340025025 0ustar zuulzuul00000000000000--- features: - The Pike version of HOT (2017-09-01) adds a make_url function to simplify combining data from different sources into a URL with correct handling for escaping and IPv6 addresses.
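A hedged sketch of the ``make_url`` function described above (the ``server`` resource, port, and path are hypothetical):

    heat_template_version: 2017-09-01
    outputs:
      api_endpoint:
        value:
          make_url:
            scheme: http
            host: {get_attr: [server, first_address]}
            port: 8080
            path: /v1/info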
heat-10.0.2/releasenotes/notes/bp-update-cinder-resources-e23e62762f167d29.yaml0000666000175000017500000000030213343562340026463 0ustar zuulzuul00000000000000--- features: - OS::Cinder::QoSAssociation resource plugin is added to support cinder QoS Specs association with volume types, which is provided by the cinder ``qos-specs`` API extension. heat-10.0.2/releasenotes/notes/neutron-quota-resource-7fa5e4df8287bf77.yaml0000666000175000017500000000013113343562340026232 0ustar zuulzuul00000000000000--- features: - New resource ``OS::Neutron::Quota`` is added to manage neutron quotas. heat-10.0.2/releasenotes/notes/support-rbac-for-qos-policy-a55434654e1dd953.yaml0000666000175000017500000000022113343562340026625 0ustar zuulzuul00000000000000--- features: - Support managing the RBAC policy for the 'qos_policy' resource, which allows sharing a Neutron qos policy with subsets of tenants. heat-10.0.2/releasenotes/notes/random-string-entropy-9b8e23874cd79b8f.yaml0000666000175000017500000000065513343562340026003 0ustar zuulzuul00000000000000--- security: - | Passwords generated by the OS::Heat::RandomString resource may have had less entropy than expected, depending on what is specified in the ``character_class`` and ``character_sequence`` properties. This has been corrected so that each character present in any of the specified classes or sequences now has an equal probability of appearing at each point in the generated random string. heat-10.0.2/releasenotes/notes/subnet-pool-resource-c32ff97d4f956b73.yaml0000666000175000017500000000063213343562340025603 0ustar zuulzuul00000000000000--- features: - A new ``OS::Neutron::SubnetPool`` resource that helps in managing the lifecycle of a neutron subnet pool. Availability of this resource depends on the availability of the neutron ``subnet_allocation`` API extension. - Resource ``OS::Neutron::Subnet`` now supports an optional ``subnetpool`` property, which automates the allocation of a CIDR for the subnet from the specified subnet pool.heat-10.0.2/releasenotes/notes/yaql-function-4895e39555c2841d.yaml0000666000175000017500000000026113343562340024054 0ustar zuulzuul00000000000000--- features: - Add ``yaql`` function, that takes 2 arguments ``expression`` of type string and ``data`` of type map and evaluates ``expression`` on the given ``data``. heat-10.0.2/releasenotes/notes/set-tags-for-network-resource-d6f3843c546744a2.yaml0000666000175000017500000000012113343562340027153 0ustar zuulzuul00000000000000--- features: - Allow setting or updating the tags for the OS::Neutron::Net resource. heat-10.0.2/releasenotes/notes/neutron-address-scope-ce234763e22c7449.yaml0000666000175000017500000000062013343562340025553 0ustar zuulzuul00000000000000--- features: - | A new ``OS::Neutron::AddressScope`` resource that helps in managing the lifecycle of a neutron address scope. Availability of this resource depends on the availability of the neutron ``address-scope`` API extension. This resource can be associated with multiple subnet pools in a one-to-many relationship. The subnet pools under an address scope must not overlap.heat-10.0.2/releasenotes/notes/add-hostname-hints-security_groups-to-container-d3b69ae4b6f71fc7.yaml0000666000175000017500000000020613343562340033165 0ustar zuulzuul00000000000000--- features: - | Added ``hostname``, ``hints``, ``security_groups``, and ``mounts`` properties to Zun Container resources.
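A minimal sketch of the ``yaql`` function noted above (the output name and data values are hypothetical):

    outputs:
      largest_value:
        value:
          yaql:
            expression: $.data.values.max()
            data:
              values: [1, 4, 2]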
heat-10.0.2/releasenotes/notes/remove-cloudwatch-api-149403251da97b41.yaml0000666000175000017500000000040213343562340025430 0ustar zuulzuul00000000000000--- upgrade: - | The AWS-compatible CloudWatch API, which had long been deprecated, has finally been removed. OpenStack deployments, packagers, and deployment projects which deploy/package CloudWatch should take appropriate action to remove support. ././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000heat-10.0.2/releasenotes/notes/project-tags-orchestration-If9125519e35f9f95ea8343cb07c377de9ccf5edf.yamlheat-10.0.2/releasenotes/notes/project-tags-orchestration-If9125519e35f9f95ea8343cb07c377de9ccf5edf.0000666000175000017500000000025713343562340031622 0ustar zuulzuul00000000000000--- features: - Add a `tags` parameter for creating and updating keystone projects. A defined comma-delimited list will insert tags into newly created or updated projects. heat-10.0.2/releasenotes/notes/keystone-region-ce3b435c73c81ce4.yaml0000666000175000017500000000016713343562340024670 0ustar zuulzuul00000000000000--- features: - A new ``OS::Keystone::Region`` resource that helps in managing the lifecycle of a keystone region. heat-10.0.2/releasenotes/notes/.placeholder0000666000175000017500000000000013343562340020634 0ustar zuulzuul00000000000000heat-10.0.2/install.sh0000777000175000017500000000624013343562340014551 0ustar zuulzuul00000000000000#!/bin/bash if [[ $EUID -ne 0 ]]; then echo "This script must be run as root" >&2 exit 1 fi # Install prefix for config files (e.g. "/usr/local"). # Leave empty to install into /etc CONF_PREFIX="" LOG_DIR=/var/log/heat install -d $LOG_DIR detect_rabbit() { PKG_CMD="rpm -q" RABBIT_PKG="rabbitmq-server" QPID_PKG="qpid-cpp-server" # Detect OS type # Ubuntu has an lsb_release command which allows us to detect if it is Ubuntu if lsb_release -i 2>/dev/null | grep -iq ubuntu then PKG_CMD="dpkg -s" QPID_PKG="qpidd" fi if $PKG_CMD $RABBIT_PKG > /dev/null 2>&1 then if ! $PKG_CMD $QPID_PKG > /dev/null 2>&1 then return 0 fi fi return 1 } # Determine whether the given option is present in the INI file # ini_has_option config-file section option function ini_has_option() { local file=$1 local section=$2 local option=$3 local line line=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ p; }" "$file") [ -n "$line" ] } # Set an option in an INI file # iniset config-file section option value function iniset() { local file=$1 local section=$2 local option=$3 local value=$4 if ! grep -q "^\[$section\]" "$file"; then # Add section at the end echo -e "\n[$section]" >>"$file" fi if !
ini_has_option "$file" "$section" "$option"; then # Add it sed -i -e "/^\[$section\]/ a\\ $option = $value " "$file" else # Replace it sed -i -e "/^\[$section\]/,/^\[.*\]/ s|^\($option[ \t]*=[ \t]*\).*$|\1$value|" "$file" fi } basic_configuration() { conf_path=$1 if echo $conf_path | grep ".conf$" >/dev/null 2>&1 then iniset $target DEFAULT auth_encryption_key `hexdump -n 16 -v -e '/1 "%02x"' /dev/random` iniset $target database connection "mysql+pymysql://heat:heat@localhost/heat" BRIDGE_IP=127.0.0.1 iniset $target DEFAULT heat_metadata_server_url "http://${BRIDGE_IP}:8000/" if detect_rabbit then echo "rabbitmq detected, configuring $conf_path for rabbit" >&2 iniset $conf_path DEFAULT rpc_backend kombu iniset $conf_path oslo_messaging_rabbit rabbit_password guest else echo "qpid detected, configuring $conf_path for qpid" >&2 iniset $conf_path DEFAULT rpc_backend qpid fi fi } install_dir() { local dir=$1 local prefix=$2 for fn in $(ls $dir); do f=$dir/$fn target=$prefix/$f if [ $fn = 'heat.conf.sample' ]; then target=$prefix/$dir/heat.conf fi if [ -d $f ]; then [ -d $target ] || install -d $target install_dir $f $prefix elif [ -f $target ]; then echo "NOT replacing existing config file $target" >&2 diff -u $target $f else echo "Installing $fn in $prefix/$dir" >&2 install -m 664 $f $target if [ $fn = 'heat.conf.sample' ]; then basic_configuration $target fi fi done } install_dir etc $CONF_PREFIX python setup.py install >/dev/null rm -rf build heat.egg-info heat-10.0.2/devstack/0000775000175000017500000000000013343562672014354 5ustar zuulzuul00000000000000heat-10.0.2/devstack/README.rst0000666000175000017500000000056313343562337016047 0ustar zuulzuul00000000000000========================= Enabling heat in DevStack ========================= 1. Download DevStack:: git clone https://git.openstack.org/openstack-dev/devstack cd devstack 2. Add this repo as an external repository into your ``local.conf`` file:: [[local|localrc]] enable_plugin heat https://git.openstack.org/openstack/heat 3. Run ``stack.sh``. 
heat-10.0.2/devstack/lib/0000775000175000017500000000000013343562672015122 5ustar zuulzuul00000000000000heat-10.0.2/devstack/lib/heat0000666000175000017500000003737613343562337016006 0ustar zuulzuul00000000000000#!/bin/bash # # lib/heat # Install and start **Heat** service # To enable, add the following to localrc # # ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-eng # Dependencies: # (none) # stack.sh # --------- # - install_heatclient # - install_heat # - configure_heatclient # - configure_heat # - init_heat # - start_heat # - stop_heat # - cleanup_heat # Save trace setting _XTRACE_HEAT=$(set +o | grep xtrace) set +o xtrace # Defaults # -------- # set up default directories GITDIR["python-heatclient"]=$DEST/python-heatclient # heat service HEAT_REPO=${HEAT_REPO:-${GIT_BASE}/openstack/heat.git} HEAT_BRANCH=${HEAT_BRANCH:-master} # python heat client library GITREPO["python-heatclient"]=${HEATCLIENT_REPO:-${GIT_BASE}/openstack/python-heatclient.git} GITBRANCH["python-heatclient"]=${HEATCLIENT_BRANCH:-master} # Use HEAT_USE_MOD_WSGI for backward compatibility HEAT_USE_APACHE=${HEAT_USE_APACHE:-${HEAT_USE_MOD_WSGI:-True}} HEAT_DIR=$DEST/heat HEAT_FILES_DIR=$HEAT_DIR/heat/httpd/files HEAT_STANDALONE=$(trueorfalse False HEAT_STANDALONE) HEAT_ENABLE_ADOPT_ABANDON=$(trueorfalse False HEAT_ENABLE_ADOPT_ABANDON) HEAT_CONF_DIR=/etc/heat HEAT_CONF=$HEAT_CONF_DIR/heat.conf HEAT_ENV_DIR=$HEAT_CONF_DIR/environment.d HEAT_TEMPLATES_DIR=$HEAT_CONF_DIR/templates HEAT_API_HOST=${HEAT_API_HOST:-$HOST_IP} HEAT_API_PORT=${HEAT_API_PORT:-8004} HEAT_SERVICE_USER=${HEAT_SERVICE_USER:-heat} HEAT_TRUSTEE_USER=${HEAT_TRUSTEE_USER:-$HEAT_SERVICE_USER} HEAT_TRUSTEE_PASSWORD=${HEAT_TRUSTEE_PASSWORD:-$SERVICE_PASSWORD} HEAT_TRUSTEE_DOMAIN=${HEAT_TRUSTEE_DOMAIN:-default} # Support entry points installation of console scripts HEAT_BIN_DIR=$(get_python_exec_prefix) HEAT_API_UWSGI_CONF=$HEAT_CONF_DIR/heat-api-uwsgi.ini HEAT_CFN_API_UWSGI_CONF=$HEAT_CONF_DIR/heat-api-cfn-uwsgi.ini HEAT_API_UWSGI=$HEAT_BIN_DIR/heat-wsgi-api HEAT_CFN_API_UWSGI=$HEAT_BIN_DIR/heat-wsgi-api-cfn # other default options if [[ "$HEAT_STANDALONE" == "True" ]]; then # for standalone, use defaults which require no service user HEAT_STACK_DOMAIN=$(trueorfalse False HEAT_STACK_DOMAIN) HEAT_DEFERRED_AUTH=${HEAT_DEFERRED_AUTH:-password} if [[ ${HEAT_DEFERRED_AUTH} != "password" ]]; then # Heat does not support keystone trusts when deployed in # standalone mode die $LINENO \ 'HEAT_DEFERRED_AUTH can only be set to "password" when HEAT_STANDALONE is True.' 
fi else HEAT_STACK_DOMAIN=$(trueorfalse True HEAT_STACK_DOMAIN) HEAT_DEFERRED_AUTH=${HEAT_DEFERRED_AUTH:-} fi HEAT_PLUGIN_DIR=${HEAT_PLUGIN_DIR:-$DATA_DIR/heat/plugins} ENABLE_HEAT_PLUGINS=${ENABLE_HEAT_PLUGINS:-} # Functions # --------- # Test if any Heat services are enabled # is_heat_enabled function is_heat_enabled { [[ ,${ENABLED_SERVICES} =~ ,"h-" ]] && return 0 return 1 } # cleanup_heat() - Remove residual data files, anything left over from previous # runs that a clean run would need to clean up function cleanup_heat { if [[ "$HEAT_USE_APACHE" == "True" ]]; then _cleanup_heat_apache_wsgi fi sudo rm -rf $HEAT_ENV_DIR sudo rm -rf $HEAT_TEMPLATES_DIR sudo rm -rf $HEAT_CONF_DIR } # configure_heat() - Set config files, create data dirs, etc function configure_heat { sudo install -d -o $STACK_USER $HEAT_CONF_DIR # remove old config files rm -f $HEAT_CONF_DIR/heat-*.conf HEAT_API_CFN_HOST=${HEAT_API_CFN_HOST:-$HOST_IP} HEAT_API_CFN_PORT=${HEAT_API_CFN_PORT:-8000} HEAT_ENGINE_HOST=${HEAT_ENGINE_HOST:-$SERVICE_HOST} HEAT_ENGINE_PORT=${HEAT_ENGINE_PORT:-8001} HEAT_API_PASTE_FILE=$HEAT_CONF_DIR/api-paste.ini cp $HEAT_DIR/etc/heat/api-paste.ini $HEAT_API_PASTE_FILE # common options iniset_rpc_backend heat $HEAT_CONF if [[ "$HEAT_USE_APACHE" == "True" && "$WSGI_MODE" == "uwsgi" ]]; then iniset $HEAT_CONF DEFAULT heat_metadata_server_url http://$HEAT_API_CFN_HOST/heat-api-cfn iniset $HEAT_CONF DEFAULT heat_waitcondition_server_url http://$HEAT_API_CFN_HOST/heat-api-cfn/v1/waitcondition else iniset $HEAT_CONF DEFAULT heat_metadata_server_url http://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT iniset $HEAT_CONF DEFAULT heat_waitcondition_server_url http://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1/waitcondition fi iniset $HEAT_CONF database connection `database_connection_url heat` # we are using a hardcoded auth_encryption_key as it has to be the same for # multinode deployment. iniset $HEAT_CONF DEFAULT auth_encryption_key "767c3ed056cbaa3b9dfedb8c6f825bf0" iniset $HEAT_CONF DEFAULT region_name_for_services "$REGION_NAME" # logging iniset $HEAT_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL local no_format="False" if [[ "$HEAT_USE_APACHE" == "True" && "$WSGI_MODE" != "uwsgi" ]]; then no_format="True" fi # Format logging setup_logging $HEAT_CONF $no_format if [[ ! -z "$HEAT_DEFERRED_AUTH" ]]; then iniset $HEAT_CONF DEFAULT deferred_auth_method $HEAT_DEFERRED_AUTH fi if [[ "$HEAT_USE_APACHE" == "True" ]]; then if [[ $WSGI_MODE == "uwsgi" ]]; then write_uwsgi_config "$HEAT_API_UWSGI_CONF" "$HEAT_API_UWSGI" "/heat-api" # configure threads for h-api to avoid IO wait and messaging timeout. We use # 'nproc/4' to calculate API workers, hence 4 is probably a correct # approximation.
iniset "$HEAT_API_UWSGI_CONF" uwsgi threads 4 write_uwsgi_config "$HEAT_CFN_API_UWSGI_CONF" "$HEAT_CFN_API_UWSGI" "/heat-api-cfn" else _config_heat_apache_wsgi fi fi if [[ "$HEAT_STANDALONE" = "True" ]]; then iniset $HEAT_CONF paste_deploy flavor standalone iniset $HEAT_CONF clients_heat url "http://$HEAT_API_HOST:$HEAT_API_PORT/v1/%(tenant_id)s" else configure_auth_token_middleware $HEAT_CONF heat fi # If HEAT_DEFERRED_AUTH is unset or explicitly set to trusts, configure # the section for the client plugin associated with the trustee if [ -z "$HEAT_DEFERRED_AUTH" -o "trusts" == "$HEAT_DEFERRED_AUTH" ]; then iniset $HEAT_CONF trustee auth_type password iniset $HEAT_CONF trustee auth_url $KEYSTONE_AUTH_URI iniset $HEAT_CONF trustee username $HEAT_TRUSTEE_USER iniset $HEAT_CONF trustee password $HEAT_TRUSTEE_PASSWORD iniset $HEAT_CONF trustee user_domain_id $HEAT_TRUSTEE_DOMAIN fi # clients_keystone iniset $HEAT_CONF clients_keystone auth_uri $KEYSTONE_AUTH_URI # OpenStack API iniset $HEAT_CONF heat_api bind_port $HEAT_API_PORT iniset $HEAT_CONF heat_api workers "$API_WORKERS" # Cloudformation API iniset $HEAT_CONF heat_api_cfn bind_port $HEAT_API_CFN_PORT if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then iniset $HEAT_CONF clients_keystone ca_file $SSL_BUNDLE_FILE fi if is_ssl_enabled_service "nova" || is_service_enabled tls-proxy; then iniset $HEAT_CONF clients_nova ca_file $SSL_BUNDLE_FILE fi if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then iniset $HEAT_CONF clients_cinder ca_file $SSL_BUNDLE_FILE fi if [[ "$HEAT_ENABLE_ADOPT_ABANDON" = "True" ]]; then iniset $HEAT_CONF DEFAULT enable_stack_adopt true iniset $HEAT_CONF DEFAULT enable_stack_abandon true fi iniset $HEAT_CONF cache enabled "True" iniset $HEAT_CONF cache backend "dogpile.cache.memory" if ! is_service_enabled c-bak; then iniset $HEAT_CONF volumes backups_enabled false fi sudo install -d -o $STACK_USER $HEAT_ENV_DIR $HEAT_TEMPLATES_DIR # copy the default environment cp $HEAT_DIR/etc/heat/environment.d/* $HEAT_ENV_DIR/ # copy the default templates cp $HEAT_DIR/etc/heat/templates/* $HEAT_TEMPLATES_DIR/ # Enable heat plugins. # NOTE(nic): The symlink nonsense is necessary because when # plugins are installed in "developer mode", the final component # of their target directory is always "resources", which confuses # Heat's plugin loader into believing that all plugins are named # "resources", and therefore are all the same plugin; so it # will only load one of them. Linking them all to a common # location with unique names avoids that type of collision, # while still allowing the plugins to be edited in-tree. local err_count=0 if [[ -n "$ENABLE_HEAT_PLUGINS" ]]; then mkdir -p $HEAT_PLUGIN_DIR # Clean up cruft from any previous runs rm -f $HEAT_PLUGIN_DIR/* iniset $HEAT_CONF DEFAULT plugin_dirs $HEAT_PLUGIN_DIR fi for heat_plugin in $ENABLE_HEAT_PLUGINS; do if [[ -d $HEAT_DIR/contrib/$heat_plugin ]]; then setup_package $HEAT_DIR/contrib/$heat_plugin -e ln -s $HEAT_DIR/contrib/$heat_plugin/$heat_plugin/resources $HEAT_PLUGIN_DIR/$heat_plugin else : # clear retval on the test so that we can roll up errors err $LINENO "Requested Heat plugin(${heat_plugin}) not found." err_count=$(($err_count + 1)) fi done [ $err_count -eq 0 ] || die $LINENO "$err_count of the requested Heat plugins could not be installed." 
} # init_heat() - Initialize database function init_heat { # recreate db only if one of the db services is enabled if is_service_enabled $DATABASE_BACKENDS; then # (re)create heat database recreate_database heat $HEAT_BIN_DIR/heat-manage db_sync fi } # install_heatclient() - Collect source and prepare function install_heatclient { if use_library_from_git "python-heatclient"; then git_clone_by_name "python-heatclient" setup_dev_lib "python-heatclient" sudo install -D -m 0644 -o $STACK_USER {${GITDIR["python-heatclient"]}/tools/,/etc/bash_completion.d/}heat.bash_completion fi } # install_heat() - Collect source and prepare function install_heat { git_clone $HEAT_REPO $HEAT_DIR $HEAT_BRANCH setup_develop $HEAT_DIR if [[ "$HEAT_USE_APACHE" == "True" ]]; then if [ "$WSGI_MODE" == "uwsgi" ]; then pip_install uwsgi else install_apache_wsgi fi fi } # start_heat() - Start running processes, including screen function start_heat { run_process h-eng "$HEAT_BIN_DIR/heat-engine --config-file=$HEAT_CONF" # If the site is not enabled then we are in a grenade scenario local enabled_site_file enabled_site_file=$(apache_site_config_for heat-api) if [[ "$HEAT_USE_APACHE" == "True" ]]; then if [[ -f ${enabled_site_file} && "$WSGI_MODE" != "uwsgi" ]]; then enable_apache_site heat-api enable_apache_site heat-api-cfn restart_apache_server tail_log heat-api /var/log/$APACHE_NAME/heat_api.log tail_log heat-api-access /var/log/$APACHE_NAME/heat_api_access.log tail_log heat-api-cfn /var/log/$APACHE_NAME/heat_api_cfn.log tail_log heat-api-cfn-access /var/log/$APACHE_NAME/heat_api_cfn_access.log else run_process h-api "$HEAT_BIN_DIR/uwsgi --ini $HEAT_API_UWSGI_CONF" "" run_process h-api-cfn "$HEAT_BIN_DIR/uwsgi --ini $HEAT_CFN_API_UWSGI_CONF" "" fi else run_process h-api "$HEAT_BIN_DIR/heat-api --config-file=$HEAT_CONF" run_process h-api-cfn "$HEAT_BIN_DIR/heat-api-cfn --config-file=$HEAT_CONF" fi } function _stop_processes { local serv for serv in h-api h-api-cfn; do stop_process $serv done } # stop_heat() - Stop running processes function stop_heat { # Kill the screen windows stop_process h-eng if [[ "$HEAT_USE_APACHE" == "True" ]]; then if [[ "$WSGI_MODE" == "uwsgi" ]]; then _stop_processes else disable_apache_site heat-api disable_apache_site heat-api-cfn restart_apache_server fi else _stop_processes fi } # TODO(ramishra): Remove after Queens function stop_cw_service { if $SYSTEMCTL is-enabled devstack@h-api-cw.service; then $SYSTEMCTL stop devstack@h-api-cw.service $SYSTEMCTL disable devstack@h-api-cw.service fi } # _cleanup_heat_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file function _cleanup_heat_apache_wsgi { if [[ "$WSGI_MODE" == "uwsgi" ]]; then remove_uwsgi_config "$HEAT_API_UWSGI_CONF" "$HEAT_API_UWSGI" remove_uwsgi_config "$HEAT_CFN_API_UWSGI_CONF" "$HEAT_CFN_API_UWSGI" fi sudo rm -f $(apache_site_config_for heat-api) sudo rm -f $(apache_site_config_for heat-api-cfn) } # _config_heat_apache_wsgi() - Set WSGI config files of Heat function _config_heat_apache_wsgi { local heat_apache_conf heat_apache_conf=$(apache_site_config_for heat-api) local heat_cfn_apache_conf heat_cfn_apache_conf=$(apache_site_config_for heat-api-cfn) local heat_ssl="" local heat_certfile="" local heat_keyfile="" local heat_api_port=$HEAT_API_PORT local heat_cfn_api_port=$HEAT_API_CFN_PORT local venv_path="" sudo cp $HEAT_FILES_DIR/heat-api.conf $heat_apache_conf sudo sed -e " s|%PUBLICPORT%|$heat_api_port|g; s|%APACHE_NAME%|$APACHE_NAME|g; s|%HEAT_BIN_DIR%|$HEAT_BIN_DIR|g; s|%API_WORKERS%|$API_WORKERS|g; 
s|%SSLENGINE%|$heat_ssl|g; s|%SSLCERTFILE%|$heat_certfile|g; s|%SSLKEYFILE%|$heat_keyfile|g; s|%USER%|$STACK_USER|g; s|%VIRTUALENV%|$venv_path|g " -i $heat_apache_conf sudo cp $HEAT_FILES_DIR/heat-api-cfn.conf $heat_cfn_apache_conf sudo sed -e " s|%PUBLICPORT%|$heat_cfn_api_port|g; s|%APACHE_NAME%|$APACHE_NAME|g; s|%HEAT_BIN_DIR%|$HEAT_BIN_DIR|g; s|%API_WORKERS%|$API_WORKERS|g; s|%SSLENGINE%|$heat_ssl|g; s|%SSLCERTFILE%|$heat_certfile|g; s|%SSLKEYFILE%|$heat_keyfile|g; s|%USER%|$STACK_USER|g; s|%VIRTUALENV%|$venv_path|g " -i $heat_cfn_apache_conf } # create_heat_accounts() - Set up common required heat accounts function create_heat_accounts { if [[ "$HEAT_STANDALONE" != "True" ]]; then local heat_api_service_url local heat_cfn_api_service_url if [[ "$HEAT_USE_APACHE" == "True" && "$WSGI_MODE" == "uwsgi" ]]; then heat_api_service_url="$SERVICE_PROTOCOL://$HEAT_API_HOST/heat-api/v1/\$(project_id)s" heat_cfn_api_service_url="$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST/heat-api-cfn/v1" else heat_api_service_url="$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(project_id)s" heat_cfn_api_service_url="$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" fi create_service_user "heat" "admin" get_or_create_service "heat" "orchestration" "Heat Orchestration Service" get_or_create_endpoint \ "orchestration" \ "$REGION_NAME" \ "$heat_api_service_url" "$heat_api_service_url" "$heat_api_service_url" get_or_create_service "heat-cfn" "cloudformation" "Heat CloudFormation Service" get_or_create_endpoint \ "cloudformation" \ "$REGION_NAME" \ "$heat_cfn_api_service_url" "$heat_cfn_api_service_url" "$heat_cfn_api_service_url" # heat_stack_user role is for users created by Heat get_or_create_role "heat_stack_user" fi if [[ "$HEAT_STACK_DOMAIN" == "True" ]]; then # domain -> heat and user -> heat_domain_admin domain_id=$(get_or_create_domain heat 'Owns users and projects created by heat') iniset $HEAT_CONF DEFAULT stack_user_domain_id ${domain_id} get_or_create_user heat_domain_admin $SERVICE_PASSWORD heat get_or_add_user_domain_role admin heat_domain_admin heat iniset $HEAT_CONF DEFAULT stack_domain_admin heat_domain_admin iniset $HEAT_CONF DEFAULT stack_domain_admin_password $SERVICE_PASSWORD fi } # Restore xtrace $_XTRACE_HEAT # Tell emacs to use shell-script-mode ## Local variables: ## mode: shell-script ## End: heat-10.0.2/devstack/upgrade/0000775000175000017500000000000013343562672016003 5ustar zuulzuul00000000000000heat-10.0.2/devstack/upgrade/resources.sh0000777000175000017500000001006613343562351020353 0ustar zuulzuul00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
set -o errexit source $GRENADE_DIR/grenaderc source $GRENADE_DIR/functions source $TOP_DIR/openrc admin admin source $TOP_DIR/inc/ini-config set -o xtrace HEAT_USER=heat_grenade HEAT_PROJECT=heat_grenade HEAT_PASS=pass DEFAULT_DOMAIN=default function _heat_set_user { OS_TENANT_NAME=$HEAT_PROJECT OS_PROJECT_NAME=$HEAT_PROJECT OS_USERNAME=$HEAT_USER OS_PASSWORD=$HEAT_PASS OS_USER_DOMAIN_ID=$DEFAULT_DOMAIN OS_PROJECT_DOMAIN_ID=$DEFAULT_DOMAIN } function _run_heat_api_tests { local devstack_dir=$1 pushd $devstack_dir/../tempest sed -i -e '/group_regex/c\group_regex=heat_tempest_plugin\\.tests\\.api\\.test_heat_api(?:\\.|_)([^_]+)' .stestr.conf conf_file=etc/tempest.conf iniset_multiline $conf_file service_available heat_plugin True iniset $conf_file heat_plugin username $OS_USERNAME iniset $conf_file heat_plugin password $OS_PASSWORD iniset $conf_file heat_plugin tenant_name $OS_PROJECT_NAME iniset $conf_file heat_plugin auth_url $OS_AUTH_URL iniset $conf_file heat_plugin user_domain_id $OS_USER_DOMAIN_ID iniset $conf_file heat_plugin project_domain_id $OS_PROJECT_DOMAIN_ID iniset $conf_file heat_plugin user_domain_name $OS_USER_DOMAIN_NAME iniset $conf_file heat_plugin project_domain_name $OS_PROJECT_DOMAIN_NAME iniset $conf_file heat_plugin region $OS_REGION_NAME iniset $conf_file heat_plugin auth_version $OS_IDENTITY_API_VERSION tox -evenv-tempest -- tempest run --regex heat_tempest_plugin.tests.api popd } function create { # run heat api tests instead of tempest smoke before create _run_heat_api_tests $BASE_DEVSTACK_DIR # creates a tenant for the server eval $(openstack project create -f shell -c id $HEAT_PROJECT) if [[ -z "$id" ]]; then die $LINENO "Didn't create $HEAT_PROJECT project" fi resource_save heat project_id $id local project_id=$id # creates the user, and sets $id locally eval $(openstack user create $HEAT_USER \ --project $id \ --password $HEAT_PASS \ -f shell -c id) if [[ -z "$id" ]]; then die $LINENO "Didn't create $HEAT_USER user" fi resource_save heat user_id $id # with keystone v3 user created in a project is not assigned a role # https://bugs.launchpad.net/keystone/+bug/1662911 openstack role add Member --user $id --project $project_id _heat_set_user local stack_name='grenadine' resource_save heat stack_name $stack_name local loc=`dirname $BASH_SOURCE` heat stack-create -f $loc/templates/random_string.yaml $stack_name } function verify { _heat_set_user local side="$1" if [[ "$side" = "post-upgrade" ]]; then _run_heat_api_tests $TARGET_DEVSTACK_DIR fi stack_name=$(resource_get heat stack_name) heat stack-show $stack_name # TODO(sirushtim): Create more granular checks for Heat. } function verify_noapi { # TODO(sirushtim): Write tests to validate liveness of the resources # it creates during possible API downtime. : } function destroy { _heat_set_user heat stack-delete $(resource_get heat stack_name) source $TOP_DIR/openrc admin admin local user_id=$(resource_get heat user_id) local project_id=$(resource_get heat project_id) openstack user delete $user_id openstack project delete $project_id } # Dispatcher case $1 in "create") create ;; "verify_noapi") verify_noapi ;; "verify") verify $2 ;; "destroy") destroy ;; esac heat-10.0.2/devstack/upgrade/shutdown.sh0000777000175000017500000000224313343562337020216 0ustar zuulzuul00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. set -o errexit source $GRENADE_DIR/grenaderc source $GRENADE_DIR/functions # We need base DevStack functions for this source $BASE_DEVSTACK_DIR/functions source $BASE_DEVSTACK_DIR/stackrc # needed for status directory source $BASE_DEVSTACK_DIR/lib/tls source $BASE_DEVSTACK_DIR/lib/apache HEAT_DEVSTACK_DIR=$(dirname $(dirname $0)) source $HEAT_DEVSTACK_DIR/lib/heat set -o xtrace stop_heat # stop cloudwatch service if running # TODO(ramishra): Remove it after Queens stop_cw_service SERVICES_DOWN="heat-api heat-engine heat-api-cfn" # sanity check that services are actually down ensure_services_stopped $SERVICES_DOWN heat-10.0.2/devstack/upgrade/upgrade.sh0000777000175000017500000000553513343562351017775 0ustar zuulzuul00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # ``upgrade-heat`` echo "*********************************************************************" echo "Begin $0" echo "*********************************************************************" # Clean up any resources that may be in use cleanup() { set +o errexit echo "*********************************************************************" echo "ERROR: Abort $0" >&2 echo "*********************************************************************" # Kill ourselves to signal any calling process trap 2; kill -2 $$ } trap cleanup SIGHUP SIGINT SIGTERM # Keep track of the grenade directory RUN_DIR=$(cd $(dirname "$0") && pwd) # Source params source $GRENADE_DIR/grenaderc # Import common functions source $GRENADE_DIR/functions # This script exits on an error so that errors don't compound and you see # only the first error that occurred. set -o errexit # Upgrade Heat # ============ # Locate heat devstack plugin, the directory above the # grenade plugin. HEAT_DEVSTACK_DIR=$(dirname $(dirname $0)) # Duplicate some setup bits from target DevStack source $TARGET_DEVSTACK_DIR/functions source $TARGET_DEVSTACK_DIR/stackrc source $TARGET_DEVSTACK_DIR/lib/tls source $TARGET_DEVSTACK_DIR/lib/stack source $TARGET_DEVSTACK_DIR/lib/apache # Get heat functions from devstack plugin source $HEAT_DEVSTACK_DIR/lib/heat # Print the commands being run so that we can see the command that triggers # an error. It is also useful for following along as the install occurs.
set -o xtrace # Save current config files for posterity [[ -d $SAVE_DIR/etc.heat ]] || cp -pr $HEAT_CONF_DIR $SAVE_DIR/etc.heat # Install the target heat source $HEAT_DEVSTACK_DIR/plugin.sh stack install # calls upgrade-heat for specific release upgrade_project heat $RUN_DIR $BASE_DEVSTACK_BRANCH $TARGET_DEVSTACK_BRANCH # Simulate init_heat() HEAT_BIN_DIR=$(dirname $(which heat-manage)) $HEAT_BIN_DIR/heat-manage --config-file $HEAT_CONF db_sync || die $LINENO "DB sync error" # Start Heat start_heat # Don't succeed unless the services come up # Truncating some service names to 11 characters ensure_services_started heat-api heat-engine heat-api-cf set +o xtrace echo "*********************************************************************" echo "SUCCESS: End $0" echo "*********************************************************************" heat-10.0.2/devstack/upgrade/templates/0000775000175000017500000000000013343562672020001 5ustar zuulzuul00000000000000heat-10.0.2/devstack/upgrade/templates/random_string.yaml0000666000175000017500000000013713343562337023534 0ustar zuulzuul00000000000000heat_template_version: 2014-10-16 resources: random_string: type: OS::Heat::RandomString heat-10.0.2/devstack/upgrade/settings0000666000175000017500000000037113343562351017563 0ustar zuulzuul00000000000000register_project_for_upgrade heat register_db_to_save heat devstack_localrc base enable_service h-api h-api-cfn h-eng heat tempest devstack_localrc target enable_service h-api h-api-cfn h-eng heat tempest BASE_RUN_SMOKE=False TARGET_RUN_SMOKE=False heat-10.0.2/devstack/plugin.sh0000666000175000017500000000236313343562337016212 0ustar zuulzuul00000000000000# heat.sh - Devstack extras script to install heat # Save trace setting XTRACE=$(set +o | grep xtrace) set -o xtrace echo_summary "heat's plugin.sh was called..." source $DEST/heat/devstack/lib/heat (set -o posix; set) if is_heat_enabled; then if [[ "$1" == "stack" && "$2" == "install" ]]; then echo_summary "Installing heat" # Use stack_install_service here to account for virtualenv stack_install_service heat echo_summary "Installing heatclient" install_heatclient elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then if is_service_enabled tempest; then setup_develop $TEMPEST_DIR fi elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then echo_summary "Cleaning up heat" cleanup_heat echo_summary "Configuring heat" configure_heat create_heat_accounts elif [[ "$1" == "stack" && "$2" == "extra" ]]; then # Initialize heat init_heat # Start the heat API and heat taskmgr components echo_summary "Starting heat" start_heat fi if [[ "$1" == "unstack" ]]; then stop_heat fi if [[ "$1" == "clean" ]]; then cleanup_heat fi fi # Restore xtrace $XTRACE heat-10.0.2/devstack/settings0000666000175000017500000000026213343562337016137 0ustar zuulzuul00000000000000# Devstack settings # We have to add Heat to enabled services for screen_it to work # It consists of 4 parts enable_service h-eng enable_service h-api enable_service h-api-cfn heat-10.0.2/doc/0000775000175000017500000000000013343562672013315 5ustar zuulzuul00000000000000heat-10.0.2/doc/README.rst0000666000175000017500000000135513343562337015010 0ustar zuulzuul00000000000000=========================== Building the developer docs =========================== For user and admin docs, go to the directory `doc/docbkx`. 
Dependencies ============ You'll need to install the python *Sphinx* and *oslosphinx* packages: :: sudo pip install sphinx oslosphinx If you are using a virtualenv, you'll need to install them in the virtualenv. Get Help ======== Just type *make* to get help: :: make It will list available build targets. Build Doc ========= To build the man pages: :: make man To build the developer documentation as HTML: :: make html Type *make* for more formats. Test Doc ======== If you modify doc files, you can type: :: make doctest to check whether the format has problems. heat-10.0.2/doc/source/0000775000175000017500000000000013343562672014615 5ustar zuulzuul00000000000000heat-10.0.2/doc/source/glossary.rst0000666000175000017500000001507113343562351017212 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ========== Glossary ========== .. glossary:: :sorted: API server HTTP REST API service for heat. CFN An abbreviated form of "AWS CloudFormation". Constraint Defines valid input :term:`parameters` for a :term:`template`. Dependency When a :term:`resource` must wait for another resource to finish creation before being created itself. Heat adds an implicit dependency when a resource references another resource or one of its :term:`attributes `. An explicit dependency can also be created by the user in the template definition. Environment Used to affect the run-time behavior of the template. Provides a way to override the default resource implementation and parameters passed to Heat. See :ref:`Environments`. Heat Orchestration Template A particular :term:`template` format that is native to Heat. Heat Orchestration Templates are expressed in YAML and are not backwards-compatible with CloudFormation templates. HOT An acronym for ":term:`Heat Orchestration Template`". Input parameters See :term:`Parameters`. Metadata May refer to :term:`Resource Metadata`, :term:`Nova Instance metadata`, or the :term:`Metadata service`. Metadata service A Compute service that enables virtual machine instances to retrieve instance-specific data. See `Metadata service (OpenStack Administrator Guide)`_. .. _Metadata service (OpenStack Administrator Guide): http://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service Multi-region A feature of Heat that supports deployment to multiple regions. Nested resource A :term:`resource` instantiated as part of a :term:`nested stack`. Nested stack A :term:`template` referenced by URL inside of another template. Used to reduce redundant resource definitions and group complex architectures into logical groups. Nova Instance metadata User-provided *key:value* pairs associated with a Compute Instance. See `Instance specific data (OpenStack Operations Guide)`_. .. _Instance specific data (OpenStack Operations Guide): http://docs.openstack.org/openstack-ops/content/instances.html#instance_specific_data OpenStack Open source software for building private and public clouds. Orchestrate Arrange or direct the elements of a situation to produce a desired effect.
Outputs A top-level block in a :term:`template` that defines what data will be returned by a stack after instantiation. Parameters A top-level block in a :term:`template` that defines what data can be passed to customise a template when it is used to create or update a :term:`stack`. Provider resource A :term:`resource` implemented by a :term:`provider template`. The parent resource's properties become the :term:`nested stack's ` parameters. See `What are "Providers"? (OpenStack Wiki)`_. .. _`What are "Providers"? (OpenStack Wiki)`: https://wiki.openstack.org/wiki/Heat/Providers#What_are_.22Providers.22.3F Provider template Allows user-definable :term:`resource providers ` to be specified via :term:`nested stacks `. The nested stack's :term:`outputs` become the parent stack's :term:`attributes `. Resource An element of OpenStack infrastructure instantiated from a particular :term:`resource provider`. See also :term:`Nested resource`. Resource attribute Data that can be obtained from a :term:`resource`, e.g. a server's public IP or name. Usually passed to another resource's :term:`properties ` or added to the stack's :term:`outputs`. Resource group A :term:`resource provider` that creates one or more identically configured :term:`resources ` or :term:`nested resources `. Resource Metadata A :term:`resource property` that contains CFN-style template metadata. See `AWS::CloudFormation::Init (AWS CloudFormation User Guide)`_ .. _AWS::CloudFormation::Init (AWS CloudFormation User Guide): http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html Resource plugin Python code that understands how to instantiate and manage a :term:`resource`. See `Heat Resource Plugins (OpenStack wiki)`_. .. _Heat Resource Plugins (OpenStack wiki): https://wiki.openstack.org/wiki/Heat/Plugins#Heat_Resource_Plugins Resource property Data utilized for the instantiation of a :term:`resource`. Can be defined statically in a :term:`template` or passed in as :term:`input parameters `. Resource provider The implementation of a particular resource type. May be a :term:`Resource plugin` or a :term:`Provider template`. Stack A collection of instantiated :term:`resources ` that are defined in a single :term:`template`. Stack resource A :term:`resource provider` that allows the management of a :term:`nested stack` as a :term:`resource` in a parent stack. Template An orchestration document that details everything needed to carry out an :term:`orchestration `. Template resource See :term:`Provider resource`. User data A :term:`resource property` that contains a user-provided data blob. User data gets passed to `cloud-init`_ to automatically configure instances at boot time. See also `User data (OpenStack End User Guide)`_. .. _User data (OpenStack End User Guide): https://docs.openstack.org/nova/latest/user/user-data.html .. _cloud-init: https://help.ubuntu.com/community/CloudInit Wait condition A :term:`resource provider` that provides a way to communicate data or events from servers back to the orchestration engine. Most commonly used to pause the creation of the :term:`stack` while the server is being configured. heat-10.0.2/doc/source/_extra/0000775000175000017500000000000013343562672016077 5ustar zuulzuul00000000000000heat-10.0.2/doc/source/_extra/.htaccess0000666000175000017500000000024313343562351017670 0ustar zuulzuul00000000000000# The top-level docs project will redirect URLs from the old /developer docs # to their equivalent pages on the new docs.openstack.org only if this file # exists. 
heat-10.0.2/doc/source/contributing/0000775000175000017500000000000013343562672017324 5ustar zuulzuul00000000000000heat-10.0.2/doc/source/contributing/index.rst0000666000175000017500000000067713343562337021177 0ustar zuulzuul00000000000000Heat Contribution Guidelines ============================ In the Contributions Guide, you will find documented policies for developing with heat. This includes the processes we use for blueprints and specs, bugs, contributor onboarding, core reviewer memberships, and other procedural items. Policies -------- .. toctree:: :maxdepth: 3 blueprints .. bugs contributor-onboarding core-reviewers gate-failure-triage code-reviews heat-10.0.2/doc/source/contributing/blueprints.rst0000666000175000017500000001036113343562351022242 0ustar zuulzuul00000000000000Blueprints and Specs ==================== The Heat team uses the `heat-specs `_ repository for its specification reviews. Detailed information can be found `here `_. Please note that we use a template for spec submissions. Please use the `template for the latest release `_. It is not required to fill out all sections in the template. Spec Notes ---------- There are occasions when a spec is approved and the code does not land in the cycle it was targeted for. For these cases, the workflow to get the spec into the next release is as below: * Anyone can propose a patch to heat-specs which moves a spec from the previous release backlog into the new release directory. The specs which are moved in this way can be fast-tracked into the next release. Please note that it is required to re-propose the spec for the new release and it'll be evaluated based on the resources available and cycle priorities. Heat Spec Lite -------------- Lite specs are small feature requests tracked as Launchpad bugs, with status 'Wishlist' and tagged with 'spec-lite' tag. These allow for submission and review of these feature requests before code is submitted. These can be used for small features that don’t warrant a detailed spec to be proposed, evaluated, and worked on. The team evaluates these requests as it evaluates specs. Once a `spec-lite` bug has been approved/triaged as a Request for Enhancement(RFE), it’ll be targeted for a release. The workflow for the life of a spec-lite in Launchpad is as follows: * File a bug with a small summary of what the requested change is and tag it as `spec-lite`. * The bug is triaged and importance changed to `Wishlist`. * The bug is evaluated and marked as `Triaged` to announce approval or to `Won't fix` to announce rejection or `Invalid` to request a full spec. * The bug is moved to `In Progress` once the code is up and ready to review. * The bug is moved to `Fix Committed` once the patch lands. In summary: +--------------+-----------------------------------------------------------------------------+ |State | Meaning | +==============+=============================================================================+ |New | This is where spec-lite starts, as filed by the community. | +--------------+-----------------------------------------------------------------------------+ |Triaged | Drivers - Move to this state to mean, "you can start working on it" | +--------------+-----------------------------------------------------------------------------+ |Won't Fix | Drivers - Move to this state to reject a lite-spec. 
| +--------------+-----------------------------------------------------------------------------+ |Invalid | Drivers - Move to this state to request a full spec for this request | +--------------+-----------------------------------------------------------------------------+ The drivers team will discuss the following bug reports in IRC meetings: * `heat RFE's `_ * `python-heatclient RFE's `_ Lite spec Submission Guidelines ------------------------------- When a bug is submitted, there are two fields that must be filled: ‘summary’ and ‘further information’. The ‘summary’ must be brief enough to fit in one line. The ‘further information’ section must be a description of what you would like to see implemented in heat. The description should provide enough detail for a knowledgeable developer to understand the existing problem and the proposed solution. Add the `spec-lite` tag to the bug. Lite spec from existing bugs ---------------------------- If there is an existing bug that describes a small feature suitable for a spec-lite, add the `spec-lite` tag to the bug. There is no need to create a new bug. The comments and history of the existing bug are important for its review. heat-10.0.2/doc/source/api/0000775000175000017500000000000013343562672015366 5ustar zuulzuul00000000000000heat-10.0.2/doc/source/api/index.rst0000666000175000017500000000014713343562351017225 0ustar zuulzuul00000000000000=================== Source Code Index =================== .. toctree:: :maxdepth: 1 autoindex heat-10.0.2/doc/source/template_guide/0000775000175000017500000000000013343562672017605 5ustar zuulzuul00000000000000heat-10.0.2/doc/source/template_guide/environment.rst0000666000175000017500000002530513343562351022704 0ustar zuulzuul00000000000000.. highlight: yaml :linenothreshold: 5 .. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _environments: ============ Environments ============ The environment affects the runtime behavior of a template. It provides a way to override the resource implementations and a mechanism to place parameters that the service needs. To fully understand the runtime behavior you also have to consider which plug-ins are installed on the cloud you're using. Environment file format ~~~~~~~~~~~~~~~~~~~~~~~ The environment is a YAML text file that contains two main sections: ``parameters`` A list of key/value pairs. ``resource_registry`` Definition of custom resources. It can also contain some other sections: ``parameter_defaults`` Default parameters passed to all template resources. ``encrypted_parameters`` List of encrypted parameters. ``event_sinks`` List of endpoints that will receive stack events. ``parameter_merge_strategies`` Merge strategies for merging parameters and parameter defaults from the environment file. Use the :option:`-e` option of the :command:`openstack stack create` command to create a stack using the environment defined in such a file.
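For illustration, a minimal environment file combining several of these sections might look like the following (a sketch only; the parameter names and the ``my_custom_server.yaml`` template are assumed examples, not files shipped with heat)::

   parameters:
     KeyName: heat_key

   parameter_defaults:
     InstanceType: m1.small

   resource_registry:
     "My::Custom::Server": my_custom_server.yaml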
You can also provide environment parameters as a list of key/value pairs using the :option:`--parameter` option of the :command:`openstack stack create` command. In the following example the environment is read from the :file:`my_env.yaml` file and an extra parameter is provided using the :option:`--parameter` option:: $ openstack stack create my_stack -e my_env.yaml --parameter "param1=val1;param2=val2" -t my_tmpl.yaml Environment Merging ~~~~~~~~~~~~~~~~~~~ Parameters and their defaults (``parameter_defaults``) are merged based on merge strategies in an environment file. There are three merge strategy types: ``overwrite`` Overwrites a parameter, existing parameter values are replaced. ``merge`` Merges the existing parameter value and the new value. String values are concatenated, comma delimited lists are extended and json values are updated. ``deep_merge`` Json values are deep merged. Not useful for other types like comma delimited lists and strings. If specified for them, it falls back to ``merge``. You can provide a default merge strategy and/or parameter specific merge strategies per environment file. Parameter specific merge strategy is only used for that parameter. An example of ``parameter_merge_strategies`` section in an environment file:: parameter_merge_strategies: default: merge param1: overwrite param2: deep_merge If no merge strategy is provided in an environment file, ``overwrite`` becomes the default merge strategy for all ``parameters`` and ``parameter_defaults`` in that environment file. Global and effective environments ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The environment used for a stack is the combination of the environment you use with the template for the stack, and a global environment that is determined by your cloud operator. An entry in the user environment takes precedence over the global environment. OpenStack includes a default global environment, but your cloud operator can add additional environment entries. The cloud operator can add to the global environment by putting environment files in a configurable directory wherever the Orchestration engine runs. The configuration variable is named ``environment_dir`` and is found in the ``[DEFAULT]`` section of :file:`/etc/heat/heat.conf`. The default for that directory is :file:`/etc/heat/environment.d`. Its contents are combined in whatever order the shell delivers them when the service starts up, which is the time when these files are read. If the :file:`my_env.yaml` file from the example above had been put in the ``environment_dir`` then the user's command line could be this:: openstack stack create my_stack --parameter "some_parm=bla" -t my_tmpl.yaml Global templates ---------------- A global template directory allows files to be pre-loaded in the global environment. A global template is determined by your cloud operator. An entry in the user template takes precedence over the global environment. OpenStack includes a default global template, but your cloud operator can add additional template entries. The cloud operator can add new global templates by putting template files in a configurable directory wherever the Orchestration engine runs. The configuration variable is named ``template_dir`` and is found in the ``[DEFAULT]`` section of :file:`/etc/heat/heat.conf`. The default for that directory is :file:`/etc/heat/templates`. Its contents are combined in whatever order the shell delivers them when the service starts up, which is the time when these files are read. 
If the :file:`my_tmpl.yaml` file from the example below has been put in the ``template_dir``, other templates used to create stacks can include it in the following way:: resourceA: type: {get_file: "my_tmpl.yaml"} Usage examples ~~~~~~~~~~~~~~ Define values for template arguments ------------------------------------ You can define values for the template arguments in the ``parameters`` section of an environment file:: parameters: KeyName: heat_key InstanceType: m1.micro ImageId: F18-x86_64-cfntools Define defaults for parameters ------------------------------ You can define default values for all template arguments in the ``parameter_defaults`` section of an environment file. These defaults are passed into all template resources:: parameter_defaults: KeyName: heat_key Mapping resources ----------------- You can map one resource to another in the ``resource_registry`` section of an environment file. The resource you provide in this manner must have an identifier, and must reference either another resource's ID or the URL of an existing template file. The following example maps a new ``OS::Networking::FloatingIP`` resource to an existing ``OS::Nova::FloatingIP`` resource:: resource_registry: "OS::Networking::FloatingIP": "OS::Nova::FloatingIP" You can use wildcards to map multiple resources, for example to map all ``OS::Network*`` resources to their ``OS::Neutron*`` equivalents:: resource_registry: "OS::Network*": "OS::Neutron*" Override a resource with a custom resource ------------------------------------------ To create or override a resource with a custom resource, create a template file to define this resource, and provide the URL to the template file in the environment file:: resource_registry: "AWS::EC2::Instance": file:///path/to/my_instance.yaml The supported URL schemes are ``file``, ``http`` and ``https``. .. note:: The template file extension must be ``.yaml`` or ``.template``, or it will not be treated as a custom template resource. You can limit the usage of a custom resource to a specific resource of the template:: resource_registry: resources: my_db_server: "OS::DBInstance": file:///home/mine/all_my_cool_templates/db.yaml Pause stack creation, update or deletion on a given resource ------------------------------------------------------------ If you want to debug your stack as it's being created, updated or deleted, or if you want to run it in phases, you can set ``pre-create``, ``pre-update``, ``pre-delete``, ``post-create``, ``post-update`` and ``post-delete`` hooks in the ``resources`` section of ``resource_registry``. To set a hook, add ``hooks: $hook_name`` (for example ``hooks: pre-update``) to the resource's dictionary. You can also use a list (``hooks: [pre-create, pre-update]``) to stop on several actions. You can combine hooks with other ``resources`` properties such as provider templates or type mapping:: resource_registry: resources: my_server: "OS::DBInstance": file:///home/mine/all_my_cool_templates/db.yaml hooks: pre-create nested_stack: nested_resource: hooks: pre-update another_resource: hooks: [pre-create, pre-update] When heat encounters a resource that has a hook, it pauses the resource action until the hook clears. Any resources that depend on the paused action wait as well. Non-dependent resources are created in parallel unless they have their own hooks. It is possible to perform a wild card match using an asterisk (`*`) in the resource name.
For example, the following entry pauses while creating ``app_server`` and ``database_server``, but not ``server`` or ``app_network``:: resource_registry: resources: "*_server": hooks: pre-create Clear hooks by signaling the resource with ``{unset_hook: $hook_name}`` (for example ``{unset_hook: pre-update}``). Retrieving events ----------------- By default events are stored in the database and can be retrieved via the API. Using the environment, you can register an endpoint which will receive events produced by your stack, so that you don't have to poll Heat. You can specify endpoints using the ``event_sinks`` property:: event_sinks: - type: zaqar-queue target: myqueue ttl: 1200 Restrict update or replace of a given resource ----------------------------------------------- If you want to restrict update or replace of a resource when your stack is being updated, you can set ``restricted_actions`` in the ``resources`` section of ``resource_registry``. To restrict update or replace, add ``restricted_actions: update`` or ``restricted_actions: replace`` to the resource dictionary. You can also use ``[update, replace]`` to restrict both actions. You can combine restricted actions with other ``resources`` properties such as provider templates or type mapping or hooks:: resource_registry: resources: my_server: "OS::DBInstance": file:///home/mine/all_my_cool_templates/db.yaml restricted_actions: replace hooks: pre-create nested_stack: nested_resource: restricted_actions: update another_resource: restricted_actions: [update, replace] It is possible to perform a wild card match using an asterisk (`*`) in the resource name. For example, the following entry restricts replace for ``app_server`` and ``database_server``, but not ``server`` or ``app_network``:: resource_registry: resources: "*_server": restricted_actions: replace heat-10.0.2/doc/source/template_guide/cfn.rst0000666000175000017500000000126613343562351021106 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. CloudFormation Compatible Resource Types ---------------------------------------- .. integratedrespages:: AWS:: heat-10.0.2/doc/source/template_guide/software_deployment.rst0000666000175000017500000006502113343562351024431 0ustar zuulzuul00000000000000.. highlight: yaml :linenothreshold: 5 .. _software_deployment: ====================== Software configuration ====================== There are a variety of options to configure the software which runs on the servers in your stack. These can be broadly divided into the following: * Custom image building * User-data boot scripts and cloud-init * Software deployment resources This section will describe each of these options and provide examples for using them together in your stacks. Image building ~~~~~~~~~~~~~~ The first opportunity to influence what software is configured on your servers is by booting them with a custom-built image. 
There are a number of reasons you might want to do this, including: * **Boot speed** - since the required software is already on the image there is no need to download and install anything at boot time. * **Boot reliability** - software downloads can fail for a number of reasons including transient network failures and inconsistent software repositories. * **Test verification** - custom built images can be verified in test environments before being promoted to production. * **Configuration dependencies** - post-boot configuration may depend on agents already being installed and enabled A number of tools are available for building custom images, including: * diskimage-builder_ image building tools for OpenStack * imagefactory_ builds images for a variety of operating system/cloud combinations Examples in this guide which require custom images will use diskimage-builder_. User-data boot scripts and cloud-init ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When booting a server it is possible to specify the contents of the user-data to be passed to that server. This user-data is made available either from configured config-drive or from the `Metadata service`_. How this user-data is consumed depends on the image being booted, but the most commonly used tool for default cloud images is Cloud-init_. Whether the image is using Cloud-init_ or not, it should be possible to specify a shell script in the ``user_data`` property and have it be executed by the server during boot: .. code-block:: yaml resources: the_server: type: OS::Nova::Server properties: # flavor, image etc user_data: | #!/bin/bash echo "Running boot script" # ... .. note:: Debugging these scripts it is often useful to view the boot log using :code:`nova console-log ` to view the progress of boot script execution. Often there is a need to set variable values based on parameters or resources in the stack. This can be done with the :code:`str_replace` intrinsic function: .. code-block:: yaml parameters: foo: default: bar resources: the_server: type: OS::Nova::Server properties: # flavor, image etc user_data: str_replace: template: | #!/bin/bash echo "Running boot script with $FOO" # ... params: $FOO: {get_param: foo} .. warning:: If a stack-update is performed and there are any changes at all to the content of user_data then the server will be replaced (deleted and recreated) so that the modified boot configuration can be run on a new server. When these scripts grow it can become difficult to maintain them inside the template, so the ``get_file`` intrinsic function can be used to maintain the script in a separate file: .. code-block:: yaml parameters: foo: default: bar resources: the_server: type: OS::Nova::Server properties: # flavor, image etc user_data: str_replace: template: {get_file: the_server_boot.sh} params: $FOO: {get_param: foo} .. note:: ``str_replace`` can replace any strings, not just strings starting with ``$``. However doing this for the above example is useful because the script file can be executed for testing by passing in environment variables. Choosing the user_data_format ----------------------------- The :ref:`OS::Nova::Server` ``user_data_format`` property determines how the ``user_data`` should be formatted for the server. For the default value ``HEAT_CFNTOOLS``, the ``user_data`` is bundled as part of the heat-cfntools cloud-init boot configuration data. While ``HEAT_CFNTOOLS`` is the default for ``user_data_format``, it is considered legacy and ``RAW`` or ``SOFTWARE_CONFIG`` will generally be more appropriate. 
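For reference, a server that keeps the legacy default looks like the following sketch (``legacy_server`` is an assumed name; the ``user_data_format`` line could equally be omitted, since ``HEAT_CFNTOOLS`` is the default):

.. code-block:: yaml

   resources:
     legacy_server:
       type: OS::Nova::Server
       properties:
         # flavor, image etc
         user_data_format: HEAT_CFNTOOLS
         user_data: |
           #!/bin/bash
           echo "This script is wrapped in heat-cfntools cloud-init boot data"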
For ``RAW`` the user_data is passed to Nova unmodified. For a Cloud-init_ enabled image, the following are both valid ``RAW`` user-data: .. code-block:: yaml resources: server_with_boot_script: type: OS::Nova::Server properties: # flavor, image etc user_data_format: RAW user_data: | #!/bin/bash echo "Running boot script" # ... server_with_cloud_config: type: OS::Nova::Server properties: # flavor, image etc user_data_format: RAW user_data: | #cloud-config final_message: "The system is finally up, after $UPTIME seconds" For ``SOFTWARE_CONFIG`` ``user_data`` is bundled as part of the software config data, and metadata is derived from any associated `Software deployment resources`_. Signals and wait conditions --------------------------- Often it is necessary to pause further creation of stack resources until the boot configuration script has notified that it has reached a certain state. This is usually either to notify that a service is now active, or to pass out some generated data which is needed by another resource. The resources :ref:`OS::Heat::WaitCondition` and :ref:`OS::Heat::SwiftSignal` both perform this function using different techniques and tradeoffs. :ref:`OS::Heat::WaitCondition` is implemented as a call to the `Orchestration API`_ resource signal. The token is created using credentials for a user account which is scoped only to the wait condition handle resource. This user is created when the handle is created, and is associated to a project which belongs to the stack, in an identity domain which is dedicated to the orchestration service. Sending the signal is a simple HTTP request, as with this example using curl_: .. code-block:: sh curl -i -X POST -H 'X-Auth-Token: ' \ -H 'Content-Type: application/json' -H 'Accept: application/json' \ '' --data-binary '' The JSON containing the signal data is expected to be of the following format: .. code-block:: json { "status": "SUCCESS", "reason": "The reason which will appear in the 'heat event-list' output", "data": "Data to be used elsewhere in the template via get_attr", "id": "Optional unique ID of signal" } All of these values are optional, and if not specified will be set to the following defaults: .. code-block:: json { "status": "SUCCESS", "reason": "Signal received", "data": null, "id": "" } If ``status`` is set to ``FAILURE`` then the resource (and the stack) will go into a ``FAILED`` state using the ``reason`` as failure reason. The following template example uses the convenience attribute ``curl_cli`` which builds a curl command with a valid token: .. code-block:: yaml resources: wait_condition: type: OS::Heat::WaitCondition properties: handle: {get_resource: wait_handle} # Note, count of 5 vs 6 is due to duplicate signal ID 5 sent below count: 5 timeout: 300 wait_handle: type: OS::Heat::WaitConditionHandle the_server: type: OS::Nova::Server properties: # flavor, image etc user_data_format: RAW user_data: str_replace: template: | #!/bin/sh # Below are some examples of the various ways signals # can be sent to the Handle resource # Simple success signal wc_notify --data-binary '{"status": "SUCCESS"}' # Or you optionally can specify any of the additional fields wc_notify --data-binary '{"status": "SUCCESS", "reason": "signal2"}' wc_notify --data-binary '{"status": "SUCCESS", "reason": "signal3", "data": "data3"}' wc_notify --data-binary '{"status": "SUCCESS", "reason": "signal4", "id": "id4", "data": "data4"}' # If you require control of the ID, you can pass it. 
# The ID should be unique, unless you intend for duplicate # signals to overwrite each other. The following two calls # do the exact same thing, and will be treated as one signal # (You can prove this by changing count above to 7) wc_notify --data-binary '{"status": "SUCCESS", "id": "id5"}' wc_notify --data-binary '{"status": "SUCCESS", "id": "id5"}' # Example of sending a failure signal, optionally # reason, id, and data can be specified as above # wc_notify --data-binary '{"status": "FAILURE"}' params: wc_notify: { get_attr: [wait_handle, curl_cli] } outputs: wc_data: value: { get_attr: [wait_condition, data] } # this would return the following json # {"1": null, "2": null, "3": "data3", "id4": "data4", "id5": null} wc_data_4: value: { 'Fn::Select': ['id4', { get_attr: [wait_condition, data] }] } # this would return "data4" .. :ref:`OS::Heat::SwiftSignal` is implemented by creating an Object Storage API temporary URL which is populated with signal data with an HTTP PUT. The orchestration service will poll this object until the signal data is available. Object versioning is used to store multiple signals. Sending the signal is a simple HTTP request, as with this example using curl_: .. code-block:: sh curl -i -X PUT '' --data-binary '' The above template example only needs to have the ``type`` changed to the swift signal resources: .. code-block:: yaml resources: signal: type: OS::Heat::SwiftSignal properties: handle: {get_resource: wait_handle} timeout: 300 signal_handle: type: OS::Heat::SwiftSignalHandle # ... The decision to use :ref:`OS::Heat::WaitCondition` or :ref:`OS::Heat::SwiftSignal` will depend on a few factors: * :ref:`OS::Heat::SwiftSignal` depends on the availability of an Object Storage API * :ref:`OS::Heat::WaitCondition` depends on whether the orchestration service has been configured with a dedicated stack domain (which may depend on the availability of an Identity V3 API). * The preference to protect signal URLs with token authentication or a secret webhook URL. Software config resources ------------------------- Boot configuration scripts can also be managed as their own resources. This allows configuration to be defined once and run on multiple server resources. These software-config resources are stored and retrieved via dedicated calls to the `Orchestration API`_. It is not possible to modify the contents of an existing software-config resource, so a stack-update which changes any existing software-config resource will result in API calls to create a new config and delete the old one. The resource :ref:`OS::Heat::SoftwareConfig` is used for storing configs represented by text scripts, for example: .. code-block:: yaml resources: boot_script: type: OS::Heat::SoftwareConfig properties: group: ungrouped config: | #!/bin/bash echo "Running boot script" # ... server_with_boot_script: type: OS::Nova::Server properties: # flavor, image etc user_data_format: SOFTWARE_CONFIG user_data: {get_resource: boot_script} The resource :ref:`OS::Heat::CloudConfig` allows Cloud-init_ cloud-config to be represented as template YAML rather than a block string. This allows intrinsic functions to be included when building the cloud-config. This also ensures that the cloud-config is valid YAML, although no further checks for valid cloud-config are done. .. 
code-block:: yaml parameters: file_content: type: string description: The contents of the file /tmp/file resources: boot_config: type: OS::Heat::CloudConfig properties: cloud_config: write_files: - path: /tmp/file content: {get_param: file_content} server_with_cloud_config: type: OS::Nova::Server properties: # flavor, image etc user_data_format: SOFTWARE_CONFIG user_data: {get_resource: boot_config} The resource :ref:`OS::Heat::MultipartMime` allows multiple :ref:`OS::Heat::SoftwareConfig` and :ref:`OS::Heat::CloudConfig` resources to be combined into a single Cloud-init_ multi-part message: .. code-block:: yaml parameters: file_content: type: string description: The contents of the file /tmp/file other_config: type: string description: The ID of a software-config resource created elsewhere resources: boot_config: type: OS::Heat::CloudConfig properties: cloud_config: write_files: - path: /tmp/file content: {get_param: file_content} boot_script: type: OS::Heat::SoftwareConfig properties: group: ungrouped config: | #!/bin/bash echo "Running boot script" # ... server_init: type: OS::Heat::MultipartMime properties: parts: - config: {get_resource: boot_config} - config: {get_resource: boot_script} - config: {get_param: other_config} server: type: OS::Nova::Server properties: # flavor, image etc user_data_format: SOFTWARE_CONFIG user_data: {get_resource: server_init} Software deployment resources ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ There are many situations where it is not desirable to replace the server whenever there is a configuration change. The :ref:`OS::Heat::SoftwareDeployment` resource allows any number of software configurations to be added or removed from a server throughout its life-cycle. Building custom image for software deployments ---------------------------------------------- :ref:`OS::Heat::SoftwareConfig` resources are used to store software configuration, and a :ref:`OS::Heat::SoftwareDeployment` resource is used to associate a config resource with one server. The ``group`` attribute on :ref:`OS::Heat::SoftwareConfig` specifies what tool will consume the config content. :ref:`OS::Heat::SoftwareConfig` has the ability to define a schema of ``inputs`` and which the configuration script supports. Inputs are mapped to whatever concept the configuration tool has for assigning variables/parameters. Likewise, ``outputs`` are mapped to the tool's capability to export structured data after configuration execution. For tools which do not support this, outputs can always be written to a known file path for the hook to read. The :ref:`OS::Heat::SoftwareDeployment` resource allows values to be assigned to the config inputs, and the resource remains in an ``IN_PROGRESS`` state until the server signals to heat what (if any) output values were generated by the config script. Custom image script ------------------- Each of the following examples requires that the servers be booted with a custom image. The following script uses diskimage-builder to create an image required in later examples: .. code-block:: sh # Clone the required repositories. Some of these are also available # via pypi or as distro packages. 
git clone https://git.openstack.org/openstack/diskimage-builder.git git clone https://git.openstack.org/openstack/tripleo-image-elements.git git clone https://git.openstack.org/openstack/heat-agents.git # Required by diskimage-builder to discover element collections export ELEMENTS_PATH=tripleo-image-elements/elements:heat-agents/ # The base operating system element(s) provided by the diskimage-builder # elements collection. Other values which may work include: # centos7, debian, opensuse, rhel, rhel7, or ubuntu export BASE_ELEMENTS="fedora selinux-permissive" # Install and configure the os-collect-config agent to poll the metadata # server (heat service or zaqar message queue and so on) for configuration # changes to execute export AGENT_ELEMENTS="os-collect-config os-refresh-config os-apply-config" # heat-config installs an os-refresh-config script which will invoke the # appropriate hook to perform configuration. The element heat-config-script # installs a hook to perform configuration with shell scripts export DEPLOYMENT_BASE_ELEMENTS="heat-config heat-config-script" # Install a hook for any other chosen configuration tool(s). # Elements which install hooks include: # heat-config-cfn-init, heat-config-puppet, or heat-config-salt export DEPLOYMENT_TOOL="" # The name of the qcow2 image to create, and the name of the image # uploaded to the OpenStack image registry. export IMAGE_NAME=fedora-software-config # Create the image diskimage-builder/bin/disk-image-create vm $BASE_ELEMENTS $AGENT_ELEMENTS \ $DEPLOYMENT_BASE_ELEMENTS $DEPLOYMENT_TOOL -o $IMAGE_NAME.qcow2 # Upload the image, assuming valid credentials are already sourced openstack image create --disk-format qcow2 --container-format bare \ $IMAGE_NAME < $IMAGE_NAME.qcow2 .. note:: Above script uses diskimage-builder, make sure the environment already fulfill all requirements in requirements.txt of diskimage-builder. Configuring with scripts ------------------------ The `Custom image script`_ already includes the ``heat-config-script`` element so the built image will already have the ability to configure using shell scripts. Config inputs are mapped to shell environment variables. The script can communicate outputs to heat by writing to the :file:`$heat_outputs_path.{output name}` file. See the following example for a script which expects inputs ``foo``, ``bar`` and generates an output ``result``. .. code-block:: yaml resources: config: type: OS::Heat::SoftwareConfig properties: group: script inputs: - name: foo - name: bar outputs: - name: result config: | #!/bin/sh -x echo "Writing to /tmp/$bar" echo $foo > /tmp/$bar echo -n "The file /tmp/$bar contains `cat /tmp/$bar` for server $deploy_server_id during $deploy_action" > $heat_outputs_path.result echo "Written to /tmp/$bar" echo "Output to stderr" 1>&2 deployment: type: OS::Heat::SoftwareDeployment properties: config: get_resource: config server: get_resource: server input_values: foo: fooooo bar: baaaaa server: type: OS::Nova::Server properties: # flavor, image etc user_data_format: SOFTWARE_CONFIG outputs: result: value: get_attr: [deployment, result] stdout: value: get_attr: [deployment, deploy_stdout] stderr: value: get_attr: [deployment, deploy_stderr] status_code: value: get_attr: [deployment, deploy_status_code] .. note:: A config resource can be associated with multiple deployment resources, and each deployment can specify the same or different values for the ``server`` and ``input_values`` properties. 
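For example, the ``config`` resource above could be reused by a second deployment with different inputs (a sketch only; ``other_server`` is assumed to be another ``OS::Nova::Server`` resource defined elsewhere in the same template):

.. code-block:: yaml

   other_deployment:
     type: OS::Heat::SoftwareDeployment
     properties:
       config: {get_resource: config}
       server: {get_resource: other_server}
       input_values:
         foo: another_foo
         bar: another_bar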
As can be seen in the ``outputs`` section of the above template, the ``result`` config output value is available as an attribute on the ``deployment`` resource. Likewise the captured stdout, stderr and status_code are also available as attributes. Configuring with os-apply-config -------------------------------- The agent toolchain of ``os-collect-config``, ``os-refresh-config`` and ``os-apply-config`` can actually be used on their own to inject heat stack configuration data into a server running a custom image. The custom image needs to have the following to use this approach: * All software dependencies installed * os-refresh-config_ scripts to be executed on configuration changes * os-apply-config_ templates to transform the heat-provided config data into service configuration files The projects tripleo-image-elements_ and tripleo-heat-templates_ demonstrate this approach. Configuring with cfn-init ------------------------- Likely the only reason to use the ``cfn-init`` hook is to migrate templates which contain `AWS::CloudFormation::Init`_ metadata without needing a complete rewrite of the config metadata. It is included here as it introduces a number of new concepts. To use the ``cfn-init`` tool the ``heat-config-cfn-init`` element is required to be on the built image, so `Custom image script`_ needs to be modified with the following: .. code-block:: sh export DEPLOYMENT_TOOL="heat-config-cfn-init" Configuration data which used to be included in the ``AWS::CloudFormation::Init`` section of resource metadata is instead moved to the ``config`` property of the config resource, as in the following example: .. code-block:: yaml resources: config: type: OS::Heat::StructuredConfig properties: group: cfn-init inputs: - name: bar config: config: files: /tmp/foo: content: get_input: bar mode: '000644' deployment: type: OS::Heat::StructuredDeployment properties: name: 10_deployment signal_transport: NO_SIGNAL config: get_resource: config server: get_resource: server input_values: bar: baaaaa other_deployment: type: OS::Heat::StructuredDeployment properties: name: 20_other_deployment signal_transport: NO_SIGNAL config: get_resource: config server: get_resource: server input_values: bar: barmy server: type: OS::Nova::Server properties: image: {get_param: image} flavor: {get_param: flavor} key_name: {get_param: key_name} user_data_format: SOFTWARE_CONFIG There are a number of things to note about this template example: * :ref:`OS::Heat::StructuredConfig` is like :ref:`OS::Heat::SoftwareConfig` except that the ``config`` property contains structured YAML instead of text script. This is useful for a number of other configuration tools including ansible, salt and os-apply-config. * ``cfn-init`` has no concept of inputs, so ``{get_input: bar}`` acts as a placeholder which gets replaced with the :ref:`OS::Heat::StructuredDeployment` ``input_values`` value when the deployment resource is created. * ``cfn-init`` has no concept of outputs, so specifying ``signal_transport: NO_SIGNAL`` will mean that the deployment resource will immediately go into the ``CREATED`` state instead of waiting for a completed signal from the server. * The template has 2 deployment resources deploying the same config with different ``input_values``. 
The order these are deployed in on the server is determined by sorting the values of the ``name`` property for each resource (10_deployment, 20_other_deployment) Configuring with puppet ----------------------- The puppet_ hook makes it possible to write configuration as puppet manifests which are deployed and run in a masterless environment. To specify configuration as puppet manifests the ``heat-config-puppet`` element is required to be on the built image, so `Custom image script`_ needs to be modified with the following: .. code-block:: sh export DEPLOYMENT_TOOL="heat-config-puppet" .. code-block:: yaml resources: config: type: OS::Heat::SoftwareConfig properties: group: puppet inputs: - name: foo - name: bar outputs: - name: result config: get_file: example-puppet-manifest.pp deployment: type: OS::Heat::SoftwareDeployment properties: config: get_resource: config server: get_resource: server input_values: foo: fooooo bar: baaaaa server: type: OS::Nova::Server properties: image: {get_param: image} flavor: {get_param: flavor} key_name: {get_param: key_name} user_data_format: SOFTWARE_CONFIG outputs: result: value: get_attr: [deployment, result] stdout: value: get_attr: [deployment, deploy_stdout] This demonstrates the use of the ``get_file`` function, which will attach the contents of the file ``example-puppet-manifest.pp``, containing: .. code-block:: puppet file { 'barfile': ensure => file, mode => '0644', path => '/tmp/$::bar', content => '$::foo', } file { 'output_result': ensure => file, path => '$::heat_outputs_path.result', mode => '0644', content => 'The file /tmp/$::bar contains $::foo', } .. _`AWS::CloudFormation::Init`: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html .. _diskimage-builder: https://git.openstack.org/cgit/openstack/diskimage-builder .. _imagefactory: http://imgfac.org/ .. _`Metadata service`: http://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service .. _Cloud-init: http://cloudinit.readthedocs.org/en/latest/ .. _curl: http://curl.haxx.se/ .. _`Orchestration API`: http://developer.openstack.org/api-ref/orchestration/v1/ .. _os-refresh-config: https://git.openstack.org/cgit/openstack/os-refresh-config .. _os-apply-config: https://git.openstack.org/cgit/openstack/os-apply-config .. _tripleo-heat-templates: https://git.openstack.org/cgit/openstack/tripleo-heat-templates .. _tripleo-image-elements: https://git.openstack.org/cgit/openstack/tripleo-image-elements .. _puppet: http://puppetlabs.com/ heat-10.0.2/doc/source/template_guide/advanced_topics.rst0000666000175000017500000000044513343562337023470 0ustar zuulzuul00000000000000:orphan: .. _advanced_topics: =============== Advanced topics =============== Networking ~~~~~~~~~~ Load balancer ------------- TODO Firewall -------- TODO VPN --- TODO Auto scaling ~~~~~~~~~~~~ Alarming -------- TODO Up scaling and down scaling --------------------------- TODO heat-10.0.2/doc/source/template_guide/functions.rst0000666000175000017500000002303013343562337022345 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. =================================== CloudFormation Compatible Functions =================================== There are a number of functions that you can use to help you write CloudFormation compatible templates. While most CloudFormation functions are supported in HOT version '2013-05-23', *Fn::Select* is the only CloudFormation function supported in HOT templates since version '2014-10-16' which is introduced in Juno. All of these functions (except *Ref*) start with *Fn::*. --- Ref --- Returns the value of the named parameter or resource. Parameters ~~~~~~~~~~ name : String The name of the resource or parameter. Usage ~~~~~ .. code-block:: yaml {Ref: my_server} Returns the nova instance ID. For example, ``d8093de0-850f-4513-b202-7979de6c0d55``. ---------- Fn::Base64 ---------- This is a placeholder for a function to convert an input string to Base64. This function in Heat actually performs no conversion. It is included for the benefit of CFN templates that convert UserData to Base64. Heat only accepts UserData in plain text. Parameters ~~~~~~~~~~ value : String The string to convert. Usage ~~~~~ .. code-block:: yaml {"Fn::Base64": "convert this string please."} Returns the original input string. ------------- Fn::FindInMap ------------- Returns the value corresponding to keys into a two-level map declared in the Mappings section. Parameters ~~~~~~~~~~ map_name : String The logical name of a mapping declared in the Mappings section that contains the keys and values. top_level_key : String The top-level key name. It's value is a list of key-value pairs. second_level_key : String The second-level key name, which is set to one of the keys from the list assigned to top_level_key. Usage ~~~~~ .. code-block:: yaml Mapping: MyContacts: jone: {phone: 337, email: a@b.com} jim: {phone: 908, email: g@b.com} {"Fn::FindInMap": ["MyContacts", "jim", "phone" ] } Returns ``908``. ---------- Fn::GetAtt ---------- Returns an attribute of a resource within the template. Parameters ~~~~~~~~~~ resource : String The name of the resource. attribute : String The name of the attribute. Usage ~~~~~ .. code-block:: yaml {Fn::GetAtt: [my_server, PublicIp]} Returns an IP address such as ``10.0.0.2``. ---------- Fn::GetAZs ---------- Returns the Availability Zones within the given region. *Note: AZ's and regions are not fully implemented in Heat.* Parameters ~~~~~~~~~~ region : String The name of the region. Usage ~~~~~ .. code-block:: yaml {Fn::GetAZs: ""} Returns the list provided by ``nova availability-zone-list``. -------- Fn::Join -------- Like python join, it joins a list of strings with the given delimiter. Parameters ~~~~~~~~~~ delimiter : String The string to join the list with. list : list The list to join. Usage ~~~~~ .. code-block:: yaml {Fn::Join: [",", ["beer", "wine", "more beer"]]} Returns ``beer, wine, more beer``. ---------- Fn::Select ---------- Select an item from a list. *Heat extension: Select an item from a map* Parameters ~~~~~~~~~~ selector : string or integer The number of item in the list or the name of the item in the map. collection : map or list The collection to select the item from. Usage ~~~~~ For a list lookup: .. code-block:: yaml { "Fn::Select" : [ "2", [ "apples", "grapes", "mangoes" ] ] } Returns ``mangoes``. For a map lookup: .. code-block:: yaml { "Fn::Select" : [ "red", {"red": "a", "flu": "b"} ] } Returns ``a``. --------- Fn::Split --------- This is the reverse of Join. 
Convert a string into a list based on the delimiter. Parameters ~~~~~~~~~~ delimiter : string Matching string to split on. string : String The string to split. Usage ~~~~~ .. code-block:: yaml { "Fn::Split" : [ ",", "str1,str2,str3,str4"]} Returns ``["str1", "str2", "str3", "str4"]``. ----------- Fn::Replace ----------- Find and replace one string with another. Parameters ~~~~~~~~~~ substitutions : map A map of substitutions. string: String The string to do the substitutions in. Usage ~~~~~ .. code-block:: yaml {"Fn::Replace": [ {'$var1': 'foo', '%var2%': 'bar'}, '$var1 is %var2%' ]} Returns ``"foo is bar"``. ------------------ Fn::ResourceFacade ------------------ When writing a template resource: - the user writes a template that will fill in for a resource (the resource is the facade). - while writing that template, they need to access the metadata from the facade. Parameters ~~~~~~~~~~ attribute_name : String One of ``Metadata``, ``DeletionPolicy`` or ``UpdatePolicy``. Usage ~~~~~ .. code-block:: yaml {'Fn::ResourceFacade': 'Metadata'} {'Fn::ResourceFacade': 'DeletionPolicy'} {'Fn::ResourceFacade': 'UpdatePolicy'} Example ~~~~~~~ Here is a top-level template, ``top.yaml``: .. code-block:: yaml resources: my_server: type: OS::Nova::Server metadata: key: value some: more stuff Here is a resource template, ``my_actual_server.yaml``: .. code-block:: yaml resources: _actual_server_: type: OS::Nova::Server metadata: {'Fn::ResourceFacade': Metadata} The environment file, ``env.yaml``: .. code-block:: yaml resource_registry: resources: my_server: "OS::Nova::Server": my_actual_server.yaml To use it:: $ openstack stack create -t top.yaml -e env.yaml mystack What happens is that the metadata in ``top.yaml`` (key: value, some: more stuff) gets passed into the resource template via the `Fn::ResourceFacade`_ function. ------------------- Fn::MemberListToMap ------------------- Convert an AWS style member list into a map. Parameters ~~~~~~~~~~ key name: string The name of the key (normally "Name" or "Key"). value name: string The name of the value (normally "Value"). list: A list of strings The list to convert. Usage ~~~~~ .. code-block:: yaml {'Fn::MemberListToMap': ['Name', 'Value', ['.member.0.Name=key', '.member.0.Value=door', '.member.1.Name=colour', '.member.1.Value=green']]} Returns ``{'key': 'door', 'colour': 'green'}``. ---------- Fn::Equals ---------- Compares whether two values are equal, returning true if they are equal and false if they aren't. Parameters ~~~~~~~~~~ value1: A value of any type that you want to compare. value2: A value of any type that you want to compare. Usage ~~~~~ .. code-block:: yaml {'Fn::Equals': [{'Ref': 'env_type'}, 'prod']} Returns true if the param 'env_type' is equal to 'prod', otherwise returns false. ------ Fn::If ------ Returns one value if the specified condition evaluates to true and another value if the specified condition evaluates to false. Parameters ~~~~~~~~~~ condition_name: A reference to a condition in the ``Conditions`` section. value_if_true: A value to be returned if the specified condition evaluates to true. value_if_false: A value to be returned if the specified condition evaluates to false. Usage ~~~~~ .. code-block:: yaml {'Fn::If': ['create_prod', 'value_true', 'value_false']} Returns 'value_true' if the condition 'create_prod' evaluates to true, otherwise returns 'value_false'. ------- Fn::Not ------- Acts as a NOT operator. The syntax of the ``Fn::Not`` function is ..
code-block:: yaml {'Fn::Not': [condition]} Returns true for a condition that evaluates to false, or returns false for a condition that evaluates to true. Parameters ~~~~~~~~~~ condition: A condition, such as ``Fn::Equals``, that evaluates to true or false. A boolean value can also be used as a condition. Usage ~~~~~ .. code-block:: yaml {'Fn::Not': [{'Fn::Equals': [{'Ref': 'env_type'}, 'prod']}]} Returns false if the param 'env_type' is equal to 'prod', otherwise returns true. ------- Fn::And ------- Acts as an AND operator to evaluate all the specified conditions. Returns true if all the specified conditions evaluate to true, or returns false if any one of the conditions evaluates to false. Parameters ~~~~~~~~~~ condition: A condition such as Fn::Equals that evaluates to true or false. Usage ~~~~~ .. code-block:: yaml {'Fn::And': [{'Fn::Equals': [{'Ref': 'env_type'}, 'prod']}, {'Fn::Not': [{'Fn::Equals': [{'Ref': 'zone'}, 'beijing']}]}]} Returns true if the param 'env_type' is equal to 'prod' and the param 'zone' is not equal to 'beijing', otherwise returns false. ------ Fn::Or ------ Acts as an OR operator to evaluate all the specified conditions. Returns true if any one of the specified conditions evaluates to true, or returns false if all of the conditions evaluate to false. Parameters ~~~~~~~~~~ condition: A condition such as Fn::Equals that evaluates to true or false. Usage ~~~~~ .. code-block:: yaml {'Fn::Or': [{'Fn::Equals': [{'Ref': 'zone'}, 'shanghai']}, {'Fn::Equals': [{'Ref': 'zone'}, 'beijing']}]} Returns true if the param 'zone' is equal to 'shanghai' or 'beijing', otherwise returns false. heat-10.0.2/doc/source/template_guide/index.rst0000666000175000017500000000155513343562337021452 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _template-guide: Template Guide ============== .. toctree:: :maxdepth: 2 hot_guide hello_world hot_spec basic_resources software_deployment environment composition openstack cfn unsupported contrib functions .. existing_templates .. advanced_topics heat-10.0.2/doc/source/template_guide/existing_templates.rst0000666000175000017500000000220613343562351024243 0ustar zuulzuul00000000000000:orphan: .. _existing_templates: ================================ Where to find existing templates ================================ There are several repositories where you can find existing HOT templates. The `OpenStack Heat Templates repository`_ contains example templates demonstrating core Heat functionality, related image-building templates, and template-related scripts and conversion tools. .. _OpenStack Heat Templates Repository: https://git.openstack.org/cgit/openstack/heat-templates/tree/ The `OpenStack TripleO Heat Templates repository`_ contains a variety of heat templates that are included in the tripleo-heat-templates codebase. ..
_OpenStack TripleO Heat Templates repository: https://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/ Rackspace has provided a set of Heat templates at the `RCB Ops repository`_ that can be used by cloud operators to launch applications, templates for building a multi-node OpenStack cluster, as well as templates for CI development. Heat templates for deployment of Magento, Hadoop, MongoDB, ELK, Drupal and more can be found here. .. _RCB Ops repository: http://github.com/rcbops/ heat-10.0.2/doc/source/template_guide/basic_resources.rst0000666000175000017500000003226613343562351023517 0ustar zuulzuul00000000000000.. highlight: yaml :linenothreshold: 5 .. _basic_resources: ========= Instances ========= .. For consistency let's define a few values to use in the samples: * image name: ubuntu-trusty-x86_64 * shared/provider network name: "public" * tenant network and subnet names: "private" and "private-subnet" Manage instances ~~~~~~~~~~~~~~~~ Create an instance ------------------ Use the :ref:`OS::Nova::Server` resource to create a Compute instance. The ``flavor`` property is the only mandatory one, but you need to define a boot source using one of the ``image`` or ``block_device_mapping`` properties. You also need to define the ``networks`` property to indicate to which networks your instance must connect if multiple networks are available in your tenant. The following example creates a simple instance, booted from an image, and connecting to the ``private`` network: .. code-block:: yaml resources: instance: type: OS::Nova::Server properties: flavor: m1.small image: ubuntu-trusty-x86_64 networks: - network: private Connect an instance to a network -------------------------------- Use the ``networks`` property of an :ref:`OS::Nova::Server` resource to define which networks an instance should connect to. Define each network as a YAML map, containing one of the following keys: ``port`` The ID of an existing Networking port. You usually create this port in the same template using an :ref:`OS::Neutron::Port` resource. You will be able to associate a floating IP to this port, and the port to your Compute instance. ``network`` The name or ID of an existing network. You don't need to create an :ref:`OS::Neutron::Port` resource if you use this property. But you will not be able to use neutron floating IP association for this instance because there will be no specified port for server. The following example demonstrates the use of the ``port`` and ``network`` properties: .. code-block:: yaml resources: instance_port: type: OS::Neutron::Port properties: network: private fixed_ips: - subnet_id: "private-subnet" instance1: type: OS::Nova::Server properties: flavor: m1.small image: ubuntu-trusty-x86_64 networks: - port: { get_resource: instance_port } instance2: type: OS::Nova::Server properties: flavor: m1.small image: ubuntu-trusty-x86_64 networks: - network: private Create and associate security groups to an instance --------------------------------------------------- Use the :ref:`OS::Neutron::SecurityGroup` resource to create security groups. Define the ``security_groups`` property of the :ref:`OS::Neutron::Port` resource to associate security groups to a port, then associate the port to an instance. The following example creates a security group allowing inbound connections on ports 80 and 443 (web server) and associates this security group to an instance port: .. 
.. code-block:: yaml

   resources:
     web_secgroup:
       type: OS::Neutron::SecurityGroup
       properties:
         rules:
           - protocol: tcp
             remote_ip_prefix: 0.0.0.0/0
             port_range_min: 80
             port_range_max: 80
           - protocol: tcp
             remote_ip_prefix: 0.0.0.0/0
             port_range_min: 443
             port_range_max: 443

     instance_port:
       type: OS::Neutron::Port
       properties:
         network: private
         security_groups:
           - default
           - { get_resource: web_secgroup }
         fixed_ips:
           - subnet_id: private-subnet

     instance:
       type: OS::Nova::Server
       properties:
         flavor: m1.small
         image: ubuntu-trusty-x86_64
         networks:
           - port: { get_resource: instance_port }

Create and associate a floating IP to an instance
-------------------------------------------------

You can use two sets of resources to create and associate floating IPs to
instances.

OS::Nova resources
++++++++++++++++++

Use the :ref:`OS::Nova::FloatingIP` resource to create a floating IP, and
the :ref:`OS::Nova::FloatingIPAssociation` resource to associate the
floating IP to an instance.

The following example creates an instance and a floating IP, and associates
the floating IP with the instance:

.. code-block:: yaml

   resources:
     floating_ip:
       type: OS::Nova::FloatingIP
       properties:
         pool: public

     inst1:
       type: OS::Nova::Server
       properties:
         flavor: m1.small
         image: ubuntu-trusty-x86_64
         networks:
           - network: private

     association:
       type: OS::Nova::FloatingIPAssociation
       properties:
         floating_ip: { get_resource: floating_ip }
         server_id: { get_resource: inst1 }

OS::Neutron resources
+++++++++++++++++++++

.. note::

   The Networking service (neutron) must be enabled on your OpenStack
   deployment to use these resources.

Use the :ref:`OS::Neutron::FloatingIP` resource to create a floating IP, and
the :ref:`OS::Neutron::FloatingIPAssociation` resource to associate the
floating IP to a port:

.. code-block:: yaml

   parameters:
     net:
       description: name of network used to launch instance.
       type: string
       default: private

   resources:
     inst1:
       type: OS::Nova::Server
       properties:
         flavor: m1.small
         image: ubuntu-trusty-x86_64
         networks:
           - network: {get_param: net}

     floating_ip:
       type: OS::Neutron::FloatingIP
       properties:
         floating_network: public

     association:
       type: OS::Neutron::FloatingIPAssociation
       properties:
         floatingip_id: { get_resource: floating_ip }
         port_id: {get_attr: [inst1, addresses, {get_param: net}, 0, port]}

You can also create an OS::Neutron::Port and associate that with the server
and the floating IP. However, the approach mentioned above works better with
stack updates.

.. code-block:: yaml

   resources:
     instance_port:
       type: OS::Neutron::Port
       properties:
         network: private
         fixed_ips:
           - subnet_id: "private-subnet"

     floating_ip:
       type: OS::Neutron::FloatingIP
       properties:
         floating_network: public

     association:
       type: OS::Neutron::FloatingIPAssociation
       properties:
         floatingip_id: { get_resource: floating_ip }
         port_id: { get_resource: instance_port }

Enable remote access to an instance
-----------------------------------

The ``key_name`` attribute of the :ref:`OS::Nova::Server` resource defines
the key pair to use to enable SSH remote access:

.. code-block:: yaml

   resources:
     my_instance:
       type: OS::Nova::Server
       properties:
         flavor: m1.small
         image: ubuntu-trusty-x86_64
         key_name: my_key

.. note::

   For more information about key pairs, see
   `Configure access and security for instances `_.

Create a key pair
-----------------

You can create new key pairs with the :ref:`OS::Nova::KeyPair` resource. Key
pairs can be imported or created during the stack creation. If the
``public_key`` property is not specified, the Orchestration module creates a
new key pair.
If the ``save_private_key`` property is set to ``true``, the ``private_key``
attribute of the resource holds the private key.

The following example creates a new key pair and uses it as the
authentication key for an instance:

.. code-block:: yaml

   resources:
     my_key:
       type: OS::Nova::KeyPair
       properties:
         save_private_key: true
         name: my_key

     my_instance:
       type: OS::Nova::Server
       properties:
         flavor: m1.small
         image: ubuntu-trusty-x86_64
         key_name: { get_resource: my_key }

   outputs:
     private_key:
       description: Private key
       value: { get_attr: [ my_key, private_key ] }

Manage networks
~~~~~~~~~~~~~~~

Create a network and a subnet
-----------------------------

.. note::

   The Networking service (neutron) must be enabled on your OpenStack
   deployment to create and manage networks and subnets. Networks and
   subnets cannot be created if your deployment uses legacy networking
   (nova-network).

Use the :ref:`OS::Neutron::Net` resource to create a network, and the
:ref:`OS::Neutron::Subnet` resource to provide a subnet for this network:

.. code-block:: yaml

   resources:
     new_net:
       type: OS::Neutron::Net

     new_subnet:
       type: OS::Neutron::Subnet
       properties:
         network_id: { get_resource: new_net }
         cidr: "10.8.1.0/24"
         dns_nameservers: [ "8.8.8.8", "8.8.4.4" ]
         ip_version: 4

Create and manage a router
--------------------------

Use the :ref:`OS::Neutron::Router` resource to create a router. You can
define its gateway with the ``external_gateway_info`` property:

.. code-block:: yaml

   resources:
     router1:
       type: OS::Neutron::Router
       properties:
         external_gateway_info: { network: public }

You can connect subnets to routers with the
:ref:`OS::Neutron::RouterInterface` resource:

.. code-block:: yaml

   resources:
     subnet1_interface:
       type: OS::Neutron::RouterInterface
       properties:
         router_id: { get_resource: router1 }
         subnet: private-subnet

Complete network example
------------------------

The following example creates a network stack:

* A network and an associated subnet.
* A router with an external gateway.
* An interface to the new subnet for the new router.

In this example, the ``public`` network is an existing shared network:

.. code-block:: yaml

   resources:
     internal_net:
       type: OS::Neutron::Net

     internal_subnet:
       type: OS::Neutron::Subnet
       properties:
         network_id: { get_resource: internal_net }
         cidr: "10.8.1.0/24"
         dns_nameservers: [ "8.8.8.8", "8.8.4.4" ]
         ip_version: 4

     internal_router:
       type: OS::Neutron::Router
       properties:
         external_gateway_info: { network: public }

     internal_interface:
       type: OS::Neutron::RouterInterface
       properties:
         router_id: { get_resource: internal_router }
         subnet: { get_resource: internal_subnet }

Manage volumes
~~~~~~~~~~~~~~

Create a volume
---------------

Use the :ref:`OS::Cinder::Volume` resource to create a new Block Storage
volume.

For example:

.. code-block:: yaml

   resources:
     my_new_volume:
       type: OS::Cinder::Volume
       properties:
         size: 10

The volumes that you create are empty by default. Use the ``image`` property
to create a bootable volume from an existing image:

.. code-block:: yaml

   resources:
     my_new_bootable_volume:
       type: OS::Cinder::Volume
       properties:
         size: 10
         image: ubuntu-trusty-x86_64

You can also create new volumes from another volume, a volume snapshot, or a
volume backup. Use the ``source_volid``, ``snapshot_id`` or ``backup_id``
properties to create a new volume from an existing source.

For example, to create a new volume from a backup:
.. code-block:: yaml

   resources:
     another_volume:
       type: OS::Cinder::Volume
       properties:
         backup_id: 2fff50ab-1a9c-4d45-ae60-1d054d6bc868

In this example, the ``size`` property is not defined because the Block
Storage service uses the size of the backup to define the size of the new
volume.

Attach a volume to an instance
------------------------------

Use the :ref:`OS::Cinder::VolumeAttachment` resource to attach a volume to
an instance.

The following example creates a volume and an instance, and attaches the
volume to the instance:

.. code-block:: yaml

   resources:
     new_volume:
       type: OS::Cinder::Volume
       properties:
         size: 1

     new_instance:
       type: OS::Nova::Server
       properties:
         flavor: m1.small
         image: ubuntu-trusty-x86_64

     volume_attachment:
       type: OS::Cinder::VolumeAttachment
       properties:
         volume_id: { get_resource: new_volume }
         instance_uuid: { get_resource: new_instance }

Boot an instance from a volume
------------------------------

Use the ``block_device_mapping`` property of the :ref:`OS::Nova::Server`
resource to define a volume used to boot the instance. This property is a
list of volumes to attach to the instance before it boots.

The following example creates a bootable volume from an image, and uses it
to boot an instance:

.. code-block:: yaml

   resources:
     bootable_volume:
       type: OS::Cinder::Volume
       properties:
         size: 10
         image: ubuntu-trusty-x86_64

     instance:
       type: OS::Nova::Server
       properties:
         flavor: m1.small
         networks:
           - network: private
         block_device_mapping:
           - device_name: vda
             volume_id: { get_resource: bootable_volume }
             delete_on_termination: false

.. TODO
   A few elements that probably belong here:
   - OS::Swift::Container
   - OS::Trove::Instance

heat-10.0.2/doc/source/template_guide/contrib.rst0000666000175000017500000000346313343562351022001 0ustar zuulzuul00000000000000..
   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

Contributed Heat Resource Types
===============================

.. rubric:: These resources are not enabled by default.

.. contribrespages:: OS::

Rackspace Cloud Resource Types
------------------------------

.. rubric:: These resources are not enabled by default.

The resources in this module are for using Heat with the Rackspace Cloud.
These resources either allow using Rackspace services that don't have
equivalent services in OpenStack or account for differences between a
generic OpenStack deployment and the Rackspace Cloud.

Rackspace resources depend on the dev branch of `pyrax `_ to work properly.
More information about them can be found in the `RACKSPACE_README `_.

.. contribrespages:: Rackspace::

DockerInc Resource
------------------

.. rubric:: This resource is not enabled by default.

This plugin enables the use of Docker containers in a Heat template and
requires the `docker-py `_ package. You can find more information in the
`DOCKER_README `_.

.. contribrespages:: DockerInc::

heat-10.0.2/doc/source/template_guide/hello_world.rst0000666000175000017500000001746213343562351022657 0ustar zuulzuul00000000000000.. highlight: yaml
   :linenothreshold: 5
..
   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

.. _hello_world:

==================================
Writing a hello world HOT template
==================================

HOT is a new template format meant to replace the CloudFormation-compatible
format (CFN) as the native format supported by the Orchestration module over
time. This guide is targeted towards template authors and explains how to
write HOT templates based on examples. A detailed specification of HOT can be
found at :ref:`hot_spec`.

This section gives an introduction on how to write HOT templates, starting
from very basic steps and then going into more and more detail by means of
examples.

A most basic template
~~~~~~~~~~~~~~~~~~~~~

The most basic template you can think of contains only a single resource
definition using only predefined properties. For example, the template below
could be used to deploy a single compute instance:

.. code-block:: yaml

   heat_template_version: 2015-04-30

   description: Simple template to deploy a single compute instance

   resources:
     my_instance:
       type: OS::Nova::Server
       properties:
         key_name: my_key
         image: ubuntu-trusty-x86_64
         flavor: m1.small

Each HOT template must include the ``heat_template_version`` key with the HOT
version value, for example, ``2013-05-23``. A list of HOT template versions
can be found in the `Heat Template Version file `__.

The ``description`` key is optional; however, it is good practice to include
some useful text that describes what users can do with the template. If you
want to provide a longer description that does not fit on a single line, you
can provide multi-line text in YAML, for example:

.. code-block:: yaml

   description: >
     This is how you can provide a longer description
     of your template that goes over several lines.

The ``resources`` section is required and must contain at least one resource
definition. In the above example, a compute instance is defined with fixed
values for the ``key_name``, ``image`` and ``flavor`` properties.

.. note::

   All the defined elements (key pair, image, flavor) have to exist in the
   OpenStack environment where the template is used.

Input parameters
~~~~~~~~~~~~~~~~

Input parameters defined in the ``parameters`` section of a template allow
users to customize a template during deployment. For example, this allows for
providing custom key pair names or image IDs to be used for a deployment.
From a template author's perspective, this helps to make a template more
easily reusable by avoiding hardcoded assumptions.

The following example extends the previous template to provide parameters for
the key pair, image and flavor properties of the resource:
.. code-block:: yaml

   heat_template_version: 2015-04-30

   description: Simple template to deploy a single compute instance

   parameters:
     key_name:
       type: string
       label: Key Name
       description: Name of key-pair to be used for compute instance
     image_id:
       type: string
       label: Image ID
       description: Image to be used for compute instance
     flavor:
       type: string
       label: Instance Type
       description: Type of instance (flavor) to be used

   resources:
     my_instance:
       type: OS::Nova::Server
       properties:
         key_name: { get_param: key_name }
         image: { get_param: image_id }
         flavor: { get_param: flavor }

Values for the three parameters must be defined by the template user during
the deployment of a stack. The ``get_param`` intrinsic function retrieves a
user-specified value for a given parameter and uses this value for the
associated resource property.

For more information about intrinsic functions, see
:ref:`hot_spec_intrinsic_functions`.

Providing default values
------------------------

You can provide default values for parameters. If a user doesn't define a
value for a parameter, the default value is used during the stack deployment.
The following example defines a default value ``m1.small`` for the
``flavor`` property:

.. code-block:: yaml

   parameters:
     flavor:
       type: string
       label: Instance Type
       description: Flavor to be used
       default: m1.small

.. note::

   If a template doesn't define a default value for a parameter, then the
   user must define the value, otherwise the stack creation will fail.

Hiding parameter values
-----------------------

The values that a user provides when deploying a stack are available in the
stack details and can be accessed by any user in the same tenant. To hide the
value of a parameter, use the ``hidden`` boolean attribute of the parameter:

.. code-block:: yaml

   parameters:
     database_password:
       type: string
       label: Database Password
       description: Password to be used for database
       hidden: true

Restricting user input
----------------------

You can restrict the values of an input parameter to make sure that the user
defines valid data for this parameter. The ``constraints`` property of an
input parameter defines a list of constraints to apply for the parameter.

The following example restricts the ``flavor`` parameter to a list of three
possible values:

.. code-block:: yaml

   parameters:
     flavor:
       type: string
       label: Instance Type
       description: Type of instance (flavor) to be used
       constraints:
         - allowed_values: [ m1.medium, m1.large, m1.xlarge ]
           description: Value must be one of m1.medium, m1.large or m1.xlarge.

The following example defines multiple constraints for a password definition:

.. code-block:: yaml

   parameters:
     database_password:
       type: string
       label: Database Password
       description: Password to be used for database
       hidden: true
       constraints:
         - length: { min: 6, max: 8 }
           description: Password length must be between 6 and 8 characters.
         - allowed_pattern: "[a-zA-Z0-9]+"
           description: Password must consist of characters and numbers only.
         - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
           description: Password must start with an uppercase character.

The list of supported constraints is available in the
:ref:`hot_spec_parameters_constraints` section.

.. note::

   You can define multiple constraints of the same type. Especially in the
   case of allowed patterns this not only allows for keeping regular
   expressions simple and maintainable, but also keeps the error messages
   presented to users precise.
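Numeric parameters can be bounded in the same way. The following is a small
sketch using the ``range`` constraint (described in
:ref:`hot_spec_parameters_constraints`); the ``port_number`` parameter and
its limits are illustrative, not part of the examples above:

.. code-block:: yaml

   parameters:
     port_number:
       type: number
       label: Port Number
       description: Port the application listens on
       default: 8080
       constraints:
         - range: { min: 1024, max: 65535 }
           description: Port number must be between 1024 and 65535.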
Template outputs
~~~~~~~~~~~~~~~~

In addition to template customization through input parameters, you can
provide information about the resources created during the stack deployment
to the users in the ``outputs`` section of a template. In the following
example the output section provides the IP address of the ``my_instance``
resource:

.. code-block:: yaml

   outputs:
     instance_ip:
       description: The IP address of the deployed instance
       value: { get_attr: [my_instance, first_address] }

.. note::

   Output values are typically resolved using intrinsic functions such as
   ``get_attr``. See :ref:`hot_spec_intrinsic_functions` for more information
   about intrinsic functions.

See :ref:`hot_spec_outputs` for more information about the ``outputs``
section.

heat-10.0.2/doc/source/template_guide/openstack.rst0000666000175000017500000000122613343562351022323 0ustar zuulzuul00000000000000..
   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

OpenStack Resource Types
------------------------

.. integratedrespages:: OS::

heat-10.0.2/doc/source/template_guide/unsupported.rst0000666000175000017500000000135513343562351022727 0ustar zuulzuul00000000000000..
   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

Unsupported Heat Resource Types
===============================

.. rubric:: These resources are enabled, but are not officially supported.

.. unsupportedrespages::

heat-10.0.2/doc/source/template_guide/hot_spec.rst0000666000175000017500000016337413343562351022153 0ustar zuulzuul00000000000000.. highlight: yaml
   :linenothreshold: 5

..
   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

.. _hot_spec:

===============================================
Heat Orchestration Template (HOT) specification
===============================================

HOT is a new template format meant to replace the Heat
CloudFormation-compatible format (CFN) as the native format supported by Heat
over time. This specification explains in detail all elements of the HOT
template format. An example-driven guide to writing HOT templates can be
found at :ref:`hot_guide`.
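Before the individual sections are described in detail, a minimal but
complete HOT template may help orient the reader. This is a hedged sketch
composed from elements covered elsewhere in this guide; the image and flavor
names are placeholders:

.. code-block:: yaml

   heat_template_version: 2016-10-14

   description: Minimal HOT template that boots one server

   parameters:
     image:
       type: string
       description: Name or ID of the image to boot

   resources:
     server:
       type: OS::Nova::Server
       properties:
         image: { get_param: image }
         flavor: m1.small

   outputs:
     server_ip:
       description: First IP address of the server
       value: { get_attr: [server, first_address] }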
Status
~~~~~~

HOT is considered reliable, supported, and standardized as of our Icehouse
(April 2014) release. The Heat core team may make improvements to the
standard, which very likely would be backward compatible. The template format
is also versioned. Since the Juno release, Heat supports multiple different
versions of the HOT specification.

Template structure
~~~~~~~~~~~~~~~~~~

HOT templates are defined in YAML and follow the structure outlined below.

.. code-block:: yaml

   heat_template_version: 2016-10-14

   description:
     # a description of the template

   parameter_groups:
     # a declaration of input parameter groups and order

   parameters:
     # declaration of input parameters

   resources:
     # declaration of template resources

   outputs:
     # declaration of output parameters

   conditions:
     # declaration of conditions

heat_template_version
  This key with value ``2013-05-23`` (or a later date) indicates that the
  YAML document is a HOT template of the specified version.

description
  This optional key allows for giving a description of the template, or the
  workload that can be deployed using the template.

parameter_groups
  This section allows for specifying how the input parameters should be
  grouped and the order to provide the parameters in. This section is
  optional and can be omitted when necessary.

parameters
  This section allows for specifying input parameters that have to be
  provided when instantiating the template. The section is optional and can
  be omitted when no input is required.

resources
  This section contains the declaration of the individual resources of the
  template. At least one resource should be defined in any HOT template;
  otherwise the template would not really do anything when being
  instantiated.

outputs
  This section allows for specifying output parameters available to users
  once the template has been instantiated. This section is optional and can
  be omitted when no output values are required.

conditions
  This optional section includes statements which can be used to restrict
  when a resource is created or when a property is defined. They can be
  associated with resources and resource properties in the ``resources``
  section, and with outputs in the ``outputs`` section of a template.

  Note: Support for this section is added in the Newton version.

.. _hot_spec_template_version:

Heat template version
~~~~~~~~~~~~~~~~~~~~~

The value of ``heat_template_version`` tells Heat not only the format of the
template but also features that will be validated and supported. Beginning
with the Newton release, the version can be either the date of the Heat
release or the code name of the Heat release. Heat currently supports the
following values for the ``heat_template_version`` key:

2013-05-23
----------

The key with value ``2013-05-23`` indicates that the YAML document is a HOT
template and it may contain features implemented until the Icehouse release.
This version supports the following functions (some are backported to this
version)::

   get_attr
   get_file
   get_param
   get_resource
   list_join
   resource_facade
   str_replace
   Fn::Base64
   Fn::GetAZs
   Fn::Join
   Fn::MemberListToMap
   Fn::Replace
   Fn::ResourceFacade
   Fn::Select
   Fn::Split
   Ref

2014-10-16
----------

The key with value ``2014-10-16`` indicates that the YAML document is a HOT
template and it may contain features added and/or removed up until the Juno
release. This version removes most CFN functions that were supported in the
Icehouse release, i.e. the ``2013-05-23`` version.
So the supported functions now are::

   get_attr
   get_file
   get_param
   get_resource
   list_join
   resource_facade
   str_replace
   Fn::Select

2015-04-30
----------

The key with value ``2015-04-30`` indicates that the YAML document is a HOT
template and it may contain features added and/or removed up until the Kilo
release. This version adds the ``repeat`` and ``digest`` functions. So the
complete list of supported functions is::

   get_attr
   get_file
   get_param
   get_resource
   list_join
   repeat
   digest
   resource_facade
   str_replace
   Fn::Select

2015-10-15
----------

The key with value ``2015-10-15`` indicates that the YAML document is a HOT
template and it may contain features added and/or removed up until the
Liberty release. This version removes the *Fn::Select* function; path-based
``get_attr``/``get_param`` references should be used instead. Moreover, since
this version ``get_attr`` returns a dict of all attributes for the given
resource (excluding the *show* attribute) if no attribute name is specified,
e.g. :code:`{ get_attr: [<resource name>] }`. This version also adds the
str_split function and support for passing multiple lists to the existing
list_join function. The complete list of supported functions is::

   get_attr
   get_file
   get_param
   get_resource
   list_join
   repeat
   digest
   resource_facade
   str_replace
   str_split

2016-04-08
----------

The key with value ``2016-04-08`` indicates that the YAML document is a HOT
template and it may contain features added and/or removed up until the
Mitaka release. This version also adds the ``map_merge`` function which can
be used to merge the contents of maps. The complete list of supported
functions is::

   digest
   get_attr
   get_file
   get_param
   get_resource
   list_join
   map_merge
   repeat
   resource_facade
   str_replace
   str_split

2016-10-14 | newton
-------------------

The key with value ``2016-10-14`` or ``newton`` indicates that the YAML
document is a HOT template and it may contain features added and/or removed
up until the Newton release. This version adds the ``yaql`` function which
can be used for evaluation of complex expressions, the ``map_replace``
function that can do key/value replacements on a mapping, and the ``if``
function which can be used to return the corresponding value based on
condition evaluation. The complete list of supported functions is::

   digest
   get_attr
   get_file
   get_param
   get_resource
   list_join
   map_merge
   map_replace
   repeat
   resource_facade
   str_replace
   str_split
   yaql
   if

This version adds the ``equals`` condition function which can be used to
compare whether two values are equal, the ``not`` condition function which
acts as a NOT operator, the ``and`` condition function which acts as an AND
operator to evaluate all the specified conditions, and the ``or`` condition
function which acts as an OR operator to evaluate all the specified
conditions. The complete list of supported condition functions is::

   equals
   get_param
   not
   and
   or
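To give a taste of the Newton additions, here is a small hedged sketch using
``if`` together with a condition; the ``env_type`` parameter and the volume
sizes are illustrative only:

.. code-block:: yaml

   heat_template_version: 2016-10-14

   parameters:
     env_type:
       type: string
       default: test

   conditions:
     is_prod: {equals: [{get_param: env_type}, "prod"]}

   resources:
     my_volume:
       type: OS::Cinder::Volume
       properties:
         # 100 GB volume in production, 10 GB otherwise
         size: {if: [is_prod, 100, 10]}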
2017-02-24 | ocata
------------------

The key with value ``2017-02-24`` or ``ocata`` indicates that the YAML
document is a HOT template and it may contain features added and/or removed
up until the Ocata release. This version adds the ``str_replace_strict``
function which raises errors for missing params and the ``filter`` function
which filters out values from lists. The complete list of supported functions
is::

   digest
   filter
   get_attr
   get_file
   get_param
   get_resource
   list_join
   map_merge
   map_replace
   repeat
   resource_facade
   str_replace
   str_replace_strict
   str_split
   yaql
   if

The complete list of supported condition functions is::

   equals
   get_param
   not
   and
   or

2017-09-01 | pike
-----------------

The key with value ``2017-09-01`` or ``pike`` indicates that the YAML
document is a HOT template and it may contain features added and/or removed
up until the Pike release. This version adds the ``make_url`` function for
assembling URLs, the ``list_concat`` function for combining multiple lists,
the ``list_concat_unique`` function for combining multiple lists without
repeating items, the ``str_replace_vstrict`` function which raises errors
for missing and empty params, and the ``contains`` function which checks
whether a specific value is in a sequence. The complete list of supported
functions is::

   digest
   filter
   get_attr
   get_file
   get_param
   get_resource
   list_join
   make_url
   list_concat
   list_concat_unique
   contains
   map_merge
   map_replace
   repeat
   resource_facade
   str_replace
   str_replace_strict
   str_replace_vstrict
   str_split
   yaql
   if

We support 'yaql' and 'contains' as condition functions in this version. The
complete list of supported condition functions is::

   equals
   get_param
   not
   and
   or
   yaql
   contains

2018-03-02 | queens
-------------------

The key with value ``2018-03-02`` or ``queens`` indicates that the YAML
document is a HOT template and it may contain features added and/or removed
up until the Queens release. The complete list of supported functions is::

   digest
   filter
   get_attr
   get_file
   get_param
   get_resource
   list_join
   make_url
   list_concat
   list_concat_unique
   contains
   map_merge
   map_replace
   repeat
   resource_facade
   str_replace
   str_replace_strict
   str_replace_vstrict
   str_split
   yaql
   if

The complete list of supported condition functions is::

   equals
   get_param
   not
   and
   or
   yaql
   contains

.. _hot_spec_parameter_groups:

Parameter groups section
~~~~~~~~~~~~~~~~~~~~~~~~

The ``parameter_groups`` section allows for specifying how the input
parameters should be grouped and the order to provide the parameters in.
These groups are typically used to describe expected behavior for downstream
user interfaces.

These groups are specified in a list with each group containing a list of
associated parameters. The lists are used to denote the expected order of the
parameters. Each parameter should be associated with a specific group only
once, using the parameter name to bind it to a defined parameter in the
``parameters`` section.

.. code-block:: yaml

   parameter_groups:
   - label: <human-readable label of parameter group>
     description: <description of the parameter group>
     parameters:
     - <param name>
     - <param name>

label
  A human-readable label that defines the associated group of parameters.

description
  This attribute allows for giving a human-readable description of the
  parameter group.

parameters
  A list of parameters associated with this parameter group.

param name
  The name of the parameter that is defined in the associated ``parameters``
  section.

.. _hot_spec_parameters:

Parameters section
~~~~~~~~~~~~~~~~~~

The ``parameters`` section allows for specifying input parameters that have
to be provided when instantiating the template. Such parameters are typically
used to customize each deployment (e.g. by setting custom user names or
passwords) or for binding to environment-specifics like certain images.

Each parameter is specified in a separate nested block, with the name of the
parameter defined in the first line and additional attributes such as type or
default value defined as nested elements.
.. code-block:: yaml

   parameters:
     <param name>:
       type: <string | number | json | comma_delimited_list | boolean>
       label: <human-readable name of the parameter>
       description: <description of the parameter>
       default: <default value for the parameter>
       hidden: <true | false>
       constraints:
         <parameter constraints>
       immutable: <true | false>
       tags: <list of parameter categories>

param name
  The name of the parameter.

type
  The type of the parameter. Supported types are ``string``, ``number``,
  ``comma_delimited_list``, ``json`` and ``boolean``. This attribute is
  required.

label
  A human readable name for the parameter. This attribute is optional.

description
  A human readable description for the parameter. This attribute is
  optional.

default
  A default value for the parameter. This value is used if the user doesn't
  specify their own value during deployment. This attribute is optional.

hidden
  Defines whether the parameter should be hidden when a user requests
  information about a stack created from the template. This attribute can be
  used to hide passwords specified as parameters. This attribute is optional
  and defaults to ``false``.

constraints
  A list of constraints to apply. The constraints are validated by the
  Orchestration engine when a user deploys a stack. The stack creation fails
  if the parameter value doesn't comply with the constraints. This attribute
  is optional.

immutable
  Defines whether the parameter is updatable. Stack update fails if this is
  set to ``true`` and the parameter value is changed. This attribute is
  optional and defaults to ``false``.

tags
  A list of strings to specify the category of a parameter. This value is
  used to categorize a parameter so that users can group the parameters.
  This attribute is optional.

The table below describes all currently supported types with examples:

+----------------------+-------------------------------+------------------+
| Type                 | Description                   | Examples         |
+======================+===============================+==================+
| string               | A literal string.             | "String param"   |
+----------------------+-------------------------------+------------------+
| number               | An integer or float.          | "2"; "0.2"       |
+----------------------+-------------------------------+------------------+
| comma_delimited_list | An array of literal strings   | ["one", "two"];  |
|                      | that are separated by commas. | "one, two";      |
|                      | The total number of strings   | Note: "one, two" |
|                      | should be one more than the   | returns          |
|                      | total number of commas.       | ["one", " two"]  |
+----------------------+-------------------------------+------------------+
| json                 | A JSON-formatted map or list. | {"key": "value"} |
+----------------------+-------------------------------+------------------+
| boolean              | Boolean type value, which can | "on"; "n"        |
|                      | be equal "t", "true", "on",   |                  |
|                      | "y", "yes", or "1" for true   |                  |
|                      | value and "f", "false",       |                  |
|                      | "off", "n", "no", or "0" for  |                  |
|                      | false value.                  |                  |
+----------------------+-------------------------------+------------------+

The following example shows a minimalistic definition of two parameters

.. code-block:: yaml

   parameters:
     user_name:
       type: string
       label: User Name
       description: User name to be configured for the application

     port_number:
       type: number
       label: Port Number
       description: Port number to be configured for the web server

.. note::

   The description and the label are optional, but defining these attributes
   is good practice to provide useful information about the role of the
   parameter to the user.
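To make the less common types concrete, here is a brief sketch of a
``comma_delimited_list`` and a ``json`` parameter; the parameter names and
default values are illustrative, not taken from the examples above:

.. code-block:: yaml

   parameters:
     subnet_cidrs:
       type: comma_delimited_list
       label: Subnet CIDRs
       description: List of subnet CIDRs, supplied as "10.0.0.0/24,10.0.1.0/24"
       default: "10.0.0.0/24,10.0.1.0/24"

     server_metadata:
       type: json
       label: Server Metadata
       description: Arbitrary key/value metadata to attach to servers
       default: {"owner": "ops", "tier": "web"}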
.. _hot_spec_parameters_constraints:

Parameter Constraints
---------------------

The ``constraints`` block of a parameter definition defines additional
validation constraints that apply to the value of the parameter. The
parameter values provided by a user are validated against the constraints at
instantiation time. The constraints are defined as a list with the following
syntax

.. code-block:: yaml

   constraints:
     - <constraint type>: <constraint definition>
       description: <constraint description>

constraint type
  Type of constraint to apply. The set of currently supported constraints is
  given below.

constraint definition
  The actual constraint, depending on the constraint type. The concrete
  syntax for each constraint type is given below.

description
  A description of the constraint. The text is presented to the user when
  the value they define violates the constraint. If omitted, a default
  validation message is presented to the user. This attribute is optional.

The following example shows the definition of a string parameter with two
constraints:

.. code-block:: yaml

   parameters:
     user_name:
       type: string
       label: User Name
       description: User name to be configured for the application
       constraints:
         - length: { min: 6, max: 8 }
           description: User name must be between 6 and 8 characters
         - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
           description: User name must start with an uppercase character

.. note::

   While the descriptions for each constraint are optional, it is good
   practice to provide concrete descriptions so useful messages can be
   presented to the user at deployment time.

The following sections list the supported types of parameter constraints,
along with the concrete syntax for each type.

length
++++++

The ``length`` constraint applies to parameters of type ``string``,
``comma_delimited_list`` and ``json``. It defines a lower and upper limit for
the length of the string value or list/map collection.

The syntax of the ``length`` constraint is

.. code-block:: yaml

   length: { min: <lower limit>, max: <upper limit> }

It is possible to define a length constraint with only a lower limit or an
upper limit. However, at least one of ``min`` or ``max`` must be specified.

range
+++++

The ``range`` constraint applies to parameters of type ``number``. It defines
a lower and upper limit for the numeric value of the parameter.

The syntax of the ``range`` constraint is

.. code-block:: yaml

   range: { min: <lower limit>, max: <upper limit> }

It is possible to define a range constraint with only a lower limit or an
upper limit. However, at least one of ``min`` or ``max`` must be specified.

The minimum and maximum boundaries are included in the range. For example,
the following range constraint would allow for all numeric values between 0
and 10

.. code-block:: yaml

   range: { min: 0, max: 10 }

modulo
++++++

The ``modulo`` constraint applies to parameters of type ``number``. The value
is valid if it is a multiple of ``step``, starting with ``offset``.

The syntax of the ``modulo`` constraint is

.. code-block:: yaml

   modulo: { step: <step>, offset: <offset> }

Both ``step`` and ``offset`` must be specified.

For example, the following modulo constraint would only allow for odd numbers

.. code-block:: yaml

   modulo: { step: 2, offset: 1 }

allowed_values
++++++++++++++

The ``allowed_values`` constraint applies to parameters of type ``string`` or
``number``. It specifies a set of possible values for a parameter. At
deployment time, the user-provided value for the respective parameter must
match one of the elements of the list.

The syntax of the ``allowed_values`` constraint is

.. code-block:: yaml

   allowed_values: [ <value>, <value>, ... ]

Alternatively, the following YAML list notation can be used
.. code-block:: yaml

   allowed_values:
     - <value>
     - <value>
     - ...

For example

.. code-block:: yaml

   parameters:
     instance_type:
       type: string
       label: Instance Type
       description: Instance type for compute instances
       constraints:
         - allowed_values:
             - m1.small
             - m1.medium
             - m1.large

allowed_pattern
+++++++++++++++

The ``allowed_pattern`` constraint applies to parameters of type ``string``.
It specifies a regular expression against which a user-provided parameter
value must evaluate at deployment.

The syntax of the ``allowed_pattern`` constraint is

.. code-block:: yaml

   allowed_pattern: <regular expression>

For example

.. code-block:: yaml

   parameters:
     user_name:
       type: string
       label: User Name
       description: User name to be configured for the application
       constraints:
         - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
           description: User name must start with an uppercase character

custom_constraint
+++++++++++++++++

The ``custom_constraint`` constraint adds an extra step of validation,
generally to check that the specified resource exists in the backend. Custom
constraints get implemented by plug-ins and can provide any kind of advanced
constraint validation logic.

The syntax of the ``custom_constraint`` constraint is

.. code-block:: yaml

   custom_constraint: <name>

The ``name`` attribute specifies the concrete type of custom constraint. It
corresponds to the name under which the respective validation plugin has been
registered in the Orchestration engine.

For example

.. code-block:: yaml

   parameters:
     key_name:
       type: string
       description: SSH key pair
       constraints:
         - custom_constraint: nova.keypair

The following section lists the custom constraints and the plug-ins that
support them.

.. table_from_text:: ../../setup.cfg
   :header: Name,Plug-in
   :regex: (.*)=(.*)
   :start-after: heat.constraints =
   :end-before: heat.stack_lifecycle_plugins =
   :sort:

.. _hot_spec_pseudo_parameters:

Pseudo parameters
-----------------

In addition to parameters defined by a template author, Heat also creates
three parameters for every stack that allow referential access to the
stack's name, the stack's identifier and the project's identifier. These
parameters are named ``OS::stack_name`` for the stack name, ``OS::stack_id``
for the stack identifier and ``OS::project_id`` for the project identifier.
These values are accessible via the `get_param`_ intrinsic function, just
like user-defined parameters.

.. note::

   ``OS::project_id`` is available since 2015.1 (Kilo).
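For instance, a small sketch that exposes the stack's identifying pseudo
parameters through an output using ``str_replace`` (the output name and
template string are illustrative):

.. code-block:: yaml

   outputs:
     stack_info:
       description: Identifying information about this stack
       value:
         str_replace:
           template: name-$name, id-$id
           params:
             $name: { get_param: 'OS::stack_name' }
             $id: { get_param: 'OS::stack_id' }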
.. _hot_spec_resources:

Resources section
~~~~~~~~~~~~~~~~~

The ``resources`` section defines actual resources that make up a stack
deployed from the HOT template (for instance compute instances, networks,
storage volumes).

Each resource is defined as a separate block in the ``resources`` section
with the following syntax

.. code-block:: yaml

   resources:
     <resource ID>:
       type: <resource type>
       properties:
         <property name>: <property value>
       metadata:
         <resource specific metadata>
       depends_on: <resource ID or list of ID>
       update_policy: <update policy>
       deletion_policy: <deletion policy>
       external_id: <external resource ID>
       condition: <condition name>

resource ID
  A resource ID which must be unique within the ``resources`` section of the
  template.

type
  The resource type, such as ``OS::Nova::Server`` or ``OS::Neutron::Port``.
  This attribute is required.

properties
  A list of resource-specific properties. The property value can be provided
  in place, or via a function (see :ref:`hot_spec_intrinsic_functions`). This
  section is optional.

metadata
  Resource-specific metadata. This section is optional.

depends_on
  Dependencies of the resource on one or more resources of the template. See
  :ref:`hot_spec_resources_dependencies` for details. This attribute is
  optional.

update_policy
  Update policy for the resource, in the form of a nested dictionary.
  Whether update policies are supported and what the exact semantics are
  depends on the type of the current resource. This attribute is optional.

deletion_policy
  Deletion policy for the resource. The allowed deletion policies are
  ``Delete``, ``Retain``, and ``Snapshot``. Beginning with
  ``heat_template_version`` ``2016-10-14``, the lowercase equivalents
  ``delete``, ``retain``, and ``snapshot`` are also allowed. This attribute
  is optional; the default policy is to delete the physical resource when
  deleting a resource from the stack.

external_id
  Allows for specifying the resource_id for an existing external (to the
  stack) resource. External resources cannot depend on other resources, but
  other resources may depend on an external resource. This attribute is
  optional. Note: when this is specified, properties will not be used for
  building the resource, and the resource is not managed by Heat. It is not
  possible to update this attribute. Also, the resource will not be deleted
  by Heat when the stack is deleted.

condition
  Condition for the resource, which decides whether to create the resource
  or not. This attribute is optional.

  Note: Support for ``condition`` on resources is added in the Newton
  version.

Depending on the type of resource, the resource block might include more
resource specific data.

All resource types that can be used in CFN templates can also be used in HOT
templates, adapted to the YAML structure as outlined above.

The following example demonstrates the definition of a simple compute
resource with some fixed property values

.. code-block:: yaml

   resources:
     my_instance:
       type: OS::Nova::Server
       properties:
         flavor: m1.small
         image: F18-x86_64-cfntools

.. _hot_spec_resources_dependencies:

Resource dependencies
---------------------

The ``depends_on`` attribute of a resource defines a dependency between this
resource and one or more other resources.

If a resource depends on just one other resource, the ID of the other
resource is specified as the string value of the ``depends_on`` attribute,
as shown in the following example

.. code-block:: yaml

   resources:
     server1:
       type: OS::Nova::Server
       depends_on: server2

     server2:
       type: OS::Nova::Server

If a resource depends on more than one other resource, the value of the
``depends_on`` attribute is specified as a list of resource IDs, as shown in
the following example

.. code-block:: yaml

   resources:
     server1:
       type: OS::Nova::Server
       depends_on: [ server2, server3 ]

     server2:
       type: OS::Nova::Server

     server3:
       type: OS::Nova::Server

.. _hot_spec_outputs:

Outputs section
~~~~~~~~~~~~~~~

The ``outputs`` section defines output parameters that should be available
to the user after a stack has been created. This would be, for example,
parameters such as IP addresses of deployed instances, or URLs of web
applications deployed as part of a stack.

Each output parameter is defined as a separate block within the outputs
section according to the following syntax

.. code-block:: yaml

   outputs:
     <parameter name>:
       description: <description>
       value: <parameter value>
       condition: <condition name>

parameter name
  The output parameter name, which must be unique within the ``outputs``
  section of a template.

description
  A short description of the output parameter. This attribute is optional.

parameter value
  The value of the output parameter. This value is usually resolved by means
  of a function. See :ref:`hot_spec_intrinsic_functions` for details about
  the functions. This attribute is required.

condition
  To conditionally define an output value. A None value will be shown if the
  condition is False. This attribute is optional.
Note: Support for ``condition`` on outputs is added in the Newton version.

The example below shows how the IP address of a compute resource can be
defined as an output parameter

.. code-block:: yaml

   outputs:
     instance_ip:
       description: IP address of the deployed compute instance
       value: { get_attr: [my_instance, first_address] }

Conditions section
~~~~~~~~~~~~~~~~~~

The ``conditions`` section defines one or more conditions which are evaluated
based on input parameter values provided when a user creates or updates a
stack. The condition can be associated with resources, resource properties
and outputs. For example, based on the result of a condition, a user can
conditionally create resources, set different property values, or define the
outputs of a stack.

The ``conditions`` section is defined with the following syntax

.. code-block:: yaml

   conditions:
     <condition name1>: {expression1}
     <condition name2>: {expression2}
     ...

condition name
  The condition name, which must be unique within the ``conditions`` section
  of a template.

expression
  The expression which is expected to return True or False. Usually, the
  condition functions can be used as expressions to define conditions::

     equals
     get_param
     not
     and
     or
     yaql

  Note: In condition functions, you can reference a value from an input
  parameter, but you cannot reference a resource or its attributes. We
  support referencing other conditions (by condition name) in condition
  functions. We support 'yaql' as a condition function in the Pike version.

An example of conditions section definition

.. code-block:: yaml

   conditions:
     cd1: True
     cd2:
       get_param: param1
     cd3:
       equals:
       - get_param: param2
       - yes
     cd4:
       not:
         equals:
         - get_param: param3
         - yes
     cd5:
       and:
       - equals:
         - get_param: env_type
         - prod
       - not:
           equals:
           - get_param: zone
           - beijing
     cd6:
       or:
       - equals:
         - get_param: zone
         - shanghai
       - equals:
         - get_param: zone
         - beijing
     cd7:
       not: cd4
     cd8:
       and:
       - cd1
       - cd2
     cd9:
       yaql:
         expression: $.data.services.contains('heat')
         data:
           services:
             get_param: ServiceNames
     cd10:
       contains:
       - 'neutron'
       - get_param: ServiceNames

The example below shows how to associate a condition with resources

.. code-block:: yaml

   parameters:
     env_type:
       default: test
       type: string

   conditions:
     create_prod_res: {equals: [{get_param: env_type}, "prod"]}

   resources:
     volume:
       type: OS::Cinder::Volume
       condition: create_prod_res
       properties:
         size: 1

The 'create_prod_res' condition evaluates to true if the 'env_type'
parameter is equal to 'prod'. In the above sample template, the 'volume'
resource is associated with the 'create_prod_res' condition. Therefore, the
'volume' resource is created only if the 'env_type' is equal to 'prod'.

The example below shows how to conditionally define an output

.. code-block:: yaml

   outputs:
     vol_size:
       value: {get_attr: [my_volume, size]}
       condition: create_prod_res

In the above sample template, the 'vol_size' output is associated with the
'create_prod_res' condition. Therefore, the 'vol_size' output is given the
corresponding value only if the 'env_type' is equal to 'prod'; otherwise the
value of the output is None.

.. _hot_spec_intrinsic_functions:

Intrinsic functions
~~~~~~~~~~~~~~~~~~~

HOT provides a set of intrinsic functions that can be used inside templates
to perform specific tasks, such as getting the value of a resource attribute
at runtime. The following section describes the role and syntax of the
intrinsic functions.

Note: these functions can only be used within the "properties" section of
each resource or in the outputs section.
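As a preview of how several of these functions combine in practice, here is
a hedged sketch; the parameter, resource, and attribute names are
illustrative:

.. code-block:: yaml

   parameters:
     name_prefix:
       type: string
       default: demo

   resources:
     server:
       type: OS::Nova::Server
       properties:
         # list_join and get_param used inside a property value
         name: { list_join: ['-', [{ get_param: name_prefix }, 'server']] }
         flavor: m1.small
         image: ubuntu-trusty-x86_64

   outputs:
     server_name:
       description: Final name of the server
       value: { get_attr: [server, name] }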
get_attr
--------

The ``get_attr`` function references an attribute of a resource. The
attribute value is resolved at runtime using the resource instance created
from the respective resource definition. Path-based attribute referencing
using keys or indexes requires ``heat_template_version`` ``2014-10-16`` or
higher.

The syntax of the ``get_attr`` function is

.. code-block:: yaml

   get_attr:
     - <resource name>
     - <attribute name>
     - <key/index 1> (optional)
     - <key/index 2> (optional)
     - ...

resource name
  The resource name for which the attribute needs to be resolved. The
  resource name must exist in the ``resources`` section of the template.

attribute name
  The attribute name to be resolved. If the attribute returns a complex data
  structure such as a list or a map, then subsequent keys or indexes can be
  specified. These additional parameters are used to navigate the data
  structure to return the desired value.

The following example demonstrates how to use the :code:`get_attr` function:

.. code-block:: yaml

   resources:
     my_instance:
       type: OS::Nova::Server
       # ...

   outputs:
     instance_ip:
       description: IP address of the deployed compute instance
       value: { get_attr: [my_instance, first_address] }
     instance_private_ip:
       description: Private IP address of the deployed compute instance
       value: { get_attr: [my_instance, networks, private, 0] }

In this example, if the ``networks`` attribute contained the following
data::

   {"public": ["2001:0db8:0000:0000:0000:ff00:0042:8329", "1.2.3.4"],
    "private": ["10.0.0.1"]}

then the value of the ``get_attr`` function would resolve to ``10.0.0.1``
(first item of the ``private`` entry in the ``networks`` map).

Beginning with ``heat_template_version`` ``2015-10-15``, the attribute name
is optional; if no attribute name is specified, ``get_attr`` returns a dict
of all attributes for the given resource, excluding the *show* attribute. In
this case the syntax is

.. code-block:: yaml

   get_attr:
     - <resource name>

get_file
--------

The ``get_file`` function inserts the content of a file into the template.
It is generally used as a file inclusion mechanism for files containing
scripts or configuration files.

The syntax of the ``get_file`` function is

.. code-block:: yaml

   get_file: <content key>

The ``content key`` is used to look up the ``files`` dictionary that is
provided in the REST API call. The Orchestration client command (``heat``)
is ``get_file`` aware and populates the ``files`` dictionary with the actual
content of fetched paths and URLs. The Orchestration client command supports
relative paths and transforms these to the absolute URLs required by the
Orchestration API.

.. note::

   The ``get_file`` argument must be a static path or URL and not rely on
   intrinsic functions like ``get_param``. The Orchestration client does not
   process intrinsic functions (they are only processed by the Orchestration
   engine).

The example below demonstrates the ``get_file`` function usage with both
relative and absolute URLs

.. code-block:: yaml

   resources:
     my_instance:
       type: OS::Nova::Server
       properties:
         # general properties ...
         user_data:
           get_file: my_instance_user_data.sh
     my_other_instance:
       type: OS::Nova::Server
       properties:
         # general properties ...
         user_data:
           get_file: http://example.com/my_other_instance_user_data.sh

The ``files`` dictionary generated by the Orchestration client during
instantiation of the stack would contain the following keys:

* :file:`file:///path/to/my_instance_user_data.sh`
* :file:`http://example.com/my_other_instance_user_data.sh`
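A common pattern is to combine ``get_file`` with ``str_replace`` so that a
locally stored script can be parameterized before being passed as user data.
The following is a hedged sketch; the ``install_app.sh`` file and the
``$app_port`` placeholder are hypothetical:

.. code-block:: yaml

   resources:
     my_instance:
       type: OS::Nova::Server
       properties:
         flavor: m1.small
         image: ubuntu-trusty-x86_64
         user_data:
           str_replace:
             # the script body is read from a local file at stack creation
             template: { get_file: install_app.sh }
             params:
               $app_port: "8080"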
get_param
---------

The ``get_param`` function references an input parameter of a template. It
resolves to the value provided for this input parameter at runtime.

The syntax of the ``get_param`` function is

.. code-block:: yaml

   get_param:
     - <parameter name>
     - <key/index 1> (optional)
     - <key/index 2> (optional)
     - ...

parameter name
  The parameter name to be resolved. If the parameter returns a complex data
  structure such as a list or a map, then subsequent keys or indexes can be
  specified. These additional parameters are used to navigate the data
  structure to return the desired value.

The following example demonstrates the use of the ``get_param`` function

.. code-block:: yaml

   parameters:
     instance_type:
       type: string
       label: Instance Type
       description: Instance type to be used.
     server_data:
       type: json

   resources:
     my_instance:
       type: OS::Nova::Server
       properties:
         flavor: { get_param: instance_type }
         metadata: { get_param: [ server_data, metadata ] }
         key_name: { get_param: [ server_data, keys, 0 ] }

In this example, if the ``instance_type`` and ``server_data`` parameters
contained the following data::

   {"instance_type": "m1.tiny",
    "server_data": {"metadata": {"foo": "bar"},
                    "keys": ["a_key", "other_key"]}}

then the value of the property ``flavor`` would resolve to ``m1.tiny``,
``metadata`` would resolve to ``{"foo": "bar"}`` and ``key_name`` would
resolve to ``a_key``.

get_resource
------------

The ``get_resource`` function references another resource within the same
template. At runtime, it is resolved to reference the ID of the referenced
resource, which is resource type specific. For example, a reference to a
floating IP resource returns the respective IP address at runtime.

The syntax of the ``get_resource`` function is

.. code-block:: yaml

   get_resource: <resource ID>

The resource ID of the referenced resource is given as the single parameter
to the ``get_resource`` function.

For example

.. code-block:: yaml

   resources:
     instance_port:
       type: OS::Neutron::Port
       properties:
         ...

     instance:
       type: OS::Nova::Server
       properties:
         ...
         networks:
           - port: { get_resource: instance_port }

list_join
---------

The ``list_join`` function joins a list of strings with the given delimiter.

The syntax of the ``list_join`` function is

.. code-block:: yaml

   list_join:
     - <delimiter>
     - <list to join>

For example

.. code-block:: yaml

   list_join: [', ', ['one', 'two', 'and three']]

This resolves to the string ``one, two, and three``.

From HOT version ``2015-10-15`` you may optionally pass additional lists,
which will be appended to the previous lists to join.

For example::

   list_join: [', ', ['one', 'two'], ['three', 'four']]

This resolves to the string ``one, two, three, four``.

From HOT version ``2015-10-15`` you may optionally also pass non-string list
items (e.g. json/map/list parameters or attributes) and they will be
serialized as json before joining.

digest
------

The ``digest`` function allows for performing digest operations on a given
value. This function was introduced in the Kilo release and is usable with
HOT version ``2015-04-30`` and later.

The syntax of the ``digest`` function is

.. code-block:: yaml

   digest:
     - <algorithm>
     - <value>

algorithm
  The digest algorithm. Valid algorithms are the ones provided natively by
  hashlib (md5, sha1, sha224, sha256, sha384, and sha512) or any one
  provided by OpenSSL.

value
  The value to digest. This function will resolve to the corresponding hash
  of the value.

For example

.. code-block:: yaml

   # from a user supplied parameter
   pwd_hash: { digest: ['sha512', { get_param: raw_password }] }

The value of the digest function would resolve to the corresponding hash of
the value of ``raw_password``.
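Placed in context, a brief sketch that feeds a hidden password parameter
through ``digest`` and exposes only the hash as an output (the parameter and
output names are illustrative):

.. code-block:: yaml

   parameters:
     raw_password:
       type: string
       hidden: true

   outputs:
     pwd_hash:
       description: SHA-512 digest of the supplied password
       value: { digest: ['sha512', { get_param: raw_password }] }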
repeat
------

The ``repeat`` function allows for dynamically transforming lists by
iterating over the contents of one or more source lists and substituting the
list elements into a template. The result of this function is a new list,
where the elements are set to the template, rendered for each list item.

The syntax of the ``repeat`` function is

.. code-block:: yaml

   repeat:
     template: